Our Google Glass Hackathon – Winners and Lessons Learned

Reading Time: 5 minutes

On Saturday we hosted our first Google Glass hackathon at our offices in Menlo Park. Our business is not translatable to Glass (we connect brands to bloggers), but our company’s strength and differentiator in the content marketing space is that our team is highly technical at the core. We just enjoy tech and play around with our Glass a lot, so we created a Facebook event page for a Google Glass hackathon for fun, to see what other people could come up with, and were surprised at the amount of interest it garnered.

Everyone is interested in Glass. Actually, only ⅓ of our hackathon attendees were developers! The other ⅔ came to support and were deeply interested in talking to the various teams about the technology they were building for Glass. A reporter from Popular Science even stayed around until the wee hours of the morning to see the final version of one of our winners, WayGo (500 Startups).

I found this to be very telling of the environment around Glass; it’s unusual for non-developers to hang out at a hackathon just to watch the action and play around on the hardware. We even had cheerleaders and Glass enthusiasts such as Robert Scoble and Fred Davis stop by to check out the action.



This video is a shining example of what #PeopleOnGlassSay (try that hand dance yourself):

Here are some photos taken through Fred’s Glass:


There were a lot of interesting ideas thrown around for Glass apps, from being able to wiki the history of locations, to reporting unsafe drivers, to hacking Glass to include facial recognition capabilities. One company, TiKL (Y Combinator), a walkie-talkie app with over 27 million Android and iPhone users, came in to hack and put their app functionality on Glass. Glass as a walkie-talkie? Pretty cool.

An interesting thing about Glass is how it works with iPhone and Android so differently. The two phones are worlds apart when it comes to seamless integration. Some key differences:

Android Development

- Any app on your Android phone can be used on Glass (it’s not optimal, like when the iPad first came out and people downloaded iPhone apps onto it).
- Developing for Glass is a lot easier: you can write the app for Android and sideload it onto Glass.
- You can screencast (similar to screensharing) your Glass experience to an Android phone. Whatever you see on Glass can be mirrored on the phone, which makes it easy to share the Glass experience.
- Glass syncs with your phone, but you need wifi for full capabilities. With Android you can seamlessly connect to wifi and take full advantage of all that Glass has to offer.

iPhone Development

- If you have an iPhone, you can’t use existing apps on Glass.
- Developing for Glass is a lot harder: you cannot sideload any iPhone app onto Glass. Apps have to be written with the Android SDK; you cannot use the iPhone SDK.
- You cannot view your Glass screen on an iPhone.
- Using Glass with an iPhone is a sloppy experience if you want to take advantage of wifi. In order to connect, you have to go to your Glass account on a desktop, connect to wifi there, then scan a barcode displayed on your desktop screen with your Glass. It’s a hassle.

The Winners


Our Glass hackathon had two first-place winners, one for Most Completed App and the other for Most Innovative App.

Most Completed App – WayGo, by team WayGo

WayGo is a visual translator app that instantly translates Chinese text to English when you point your phone at the Chinese characters. In just one day their team was able to get the WayGo app running on Glass so that you can translate text just by looking at it! You can imagine how valuable this would be while traveling in a foreign country: Glass is an extension of yourself, so you can navigate more organically without having to point your phone at everything.

Video of WayGo working on their app #ThroughGlass

Most Innovative App – SnapBook, by team Real World San Francisco

With SnapBook you can take a picture of a person you have just met, and Glass will show the person’s name and his/her publicly available linked accounts, including Facebook, LinkedIn, email, phone, etc. The technology behind it includes the native Camera API (Android API 14+), Lambda Labs’ facial recognition API served through Mashape, and a Facebook profile picture album as training data.

Video of SnapBook Team #ThroughGlass

Fang, one of the builders of SnapBook, provided us with this breakdown of “Google Glass Lessons Learned,” which sums up the experience very well for anyone who wants to build on Glass:

Google Glass lessons: The main thing for me in going to this thing was figuring out the caveats of Glass development, because if you’re going on Google’s documentation alone there’s simply too much ambiguity and too little information: the Mirror API is closed access, so I can’t even enable its service through my Google account; there’s no Glass SDK and therefore no official word on native APIs (no documentation); and I don’t have a pair of Glass to hack on (so I’m pretty much screwed). So, here’s the real meat of this whole experience, summed up for ANYBODY who wants to build on Glass.

1) Building natively: No GDK, but that’s OK; you can use the Camera API (Android SDK, API 14+) to take snapshots or handle streams of image data, which is the heart of Glass development: computer vision. (Also, stay away from autofocus; it will crash.)

2) Tools: Eclipse + the Android SDK (ADT) plugin. But you can’t use Eclipse to install the app onto the device; you’ll need to sideload via adb on the command line. Trying to get the app to run via Eclipse can be tricky, and you may have to time-attack it (see #3).

3) Getting it to run on Glass: Glass must be in DEBUG MODE and on standby, which is the active view right when you tell it to turn on. Then run the command-line script that installs the .apk on the Glass device AND starts the app. Otherwise, you will have moved your app onto the device, but there’s no way to actually get it to run.
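The install-and-launch step above can be sketched with standard adb commands. This is only a sketch: it assumes adb is on your PATH and Glass is connected over USB with debug mode on, and the package and activity names below are hypothetical placeholders, not from any real app.

```shell
# List connected devices; Glass should show up once debug mode is enabled
adb devices

# Sideload the APK (-r reinstalls over an existing copy, keeping its data)
adb install -r GlassApp.apk

# Launch the main activity explicitly, since Glass has no app launcher to tap.
# The package/activity names here are hypothetical placeholders.
adb shell am start -n com.example.glassapp/.MainActivity
```

Chaining the install and the `am start` launch in one script is what works around the problem Fang describes: the app lands on the device but never appears anywhere you can start it by hand.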

4) Someone else at the hackathon was doing something similar to us (computer vision technology). This is really encouraging, as it takes us one step closer to “real time” augmented reality — much better than anyone would have thought when Google first released the Mirror API. It now seems much more achievable.

“The future of Glass is indeed bright after all.” - Fang Chen, Markerly Google Glass Hackathon Winner

Want more content like this?

Influencers, brands and marketing are our passions, and sharing our perspective is our way of starting a conversation! We'd love to have you follow us, and more importantly, engage with us. Sign up to be the first to hear what's making an impression on us!