Through the prism of Glass
Google’s sneak peek of the GDK (Glass Development Kit) lets us envisage Google Glass as an interesting platform for designing mobile services.

The key changes introduced with this new development kit are all about Google’s willingness to open up its platform. We now have:
- Access to hardware features (camera, microphone, gyroscope, touchpad), as shown in the sketch after this list
- Glassware that can work offline
- The Mirror API, which we can still use but which is no longer the only entry point.
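Access to hardware is probably the most concrete of these changes: because Glassware now runs on the device itself, the standard Android sensor APIs are directly available. Below is a minimal sketch, in plain Android Java, of an activity listening to the gyroscope; the HeadMotionActivity name and the way a real Glassware would exploit the readings are assumptions of ours, not GDK samples.

```java
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

// Minimal sketch: reading the gyroscope from a Glass activity using
// the regular Android sensor framework, exactly as on a phone.
public class HeadMotionActivity extends Activity implements SensorEventListener {

    private SensorManager mSensorManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mSensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
    }

    @Override
    protected void onResume() {
        super.onResume();
        Sensor gyroscope = mSensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE);
        mSensorManager.registerListener(this, gyroscope, SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    protected void onPause() {
        // Always release the sensor when the card is not visible.
        mSensorManager.unregisterListener(this);
        super.onPause();
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // event.values holds the angular speed around x, y and z (rad/s);
        // a real Glassware could use it to detect head movements and
        // decide which piece of information to surface.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not needed for this sketch.
    }
}
```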
As we began manipulating both the device and the toolkit, we started wondering about the synergy between mobile apps and Glassware, and what we could carry over from one paradigm to the other. Here are our first findings.
It’s all about the context
Putting an interface between our eyes and the world may look a bit odd. Having tried Google Glass ourselves, we can confirm that it is indeed strange, but only at first.
After a few seconds of adaptation and playing with the controls, we found it very simple and easy to use. Nevertheless, we realized that the real impact in terms of experience will come from proposing the right piece of app for the right scenario. And when we say “piece of app”, we actually mean it. We think you cannot transfer the entire feature set of an existing app, even the simplest, directly to Glass. We have to use a combination of sensors and voice recognition to suggest the relevant information or tool within the prism. Just as in the traditional eyewear industry, people wear shades when it’s sunny and magnifying glasses when reading fine print. That’s why we need to think context first.
Natural UI is the new “Less is More”
Besides the device itself, Google introduces a new interface paradigm with Glass. In a nutshell, the navigation follows two axes:
- The sequence of activities is displayed right to left - from the past to the future.
- One can “dive” within an activity by swiping down.
Moreover, the search menu is accessible from anywhere - wait, is Google behind this? - with the voice command “OK Glass”. In terms of design, screens are standardized and have to follow a card template focused on readability and clarity. As you can imagine, this “Natural UI” is pretty frugal and leaves very little room for creativity. Thus it guides both designers and engineers (just as mobile app SDKs did a few years ago, compared to web standards) and helps them focus on the essentials: the service to deliver.
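As an illustration of how little chrome a Glass screen carries, here is a minimal sketch of an activity rendering a single static card with the Card class from the GDK sneak peek. The HelloGlassActivity name, the card copy and the exact method names (toView() in the preview documentation we have seen) are assumptions that may evolve with the GDK.

```java
import android.app.Activity;
import android.os.Bundle;

import com.google.android.glass.app.Card;

// Minimal sketch of the standardized card template: a main text,
// a footnote, and nothing else to lay out.
public class HelloGlassActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        Card card = new Card(this);
        card.setText("Table for two confirmed");
        card.setFootnote("Tonight, 8 pm");

        // The card renders itself; there is no custom layout to design.
        setContentView(card.toView());
    }
}
```

In the manifest, such an activity would typically declare the com.google.android.glass.action.VOICE_TRIGGER intent filter so that it can be started from the “OK Glass” menu.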
Different form factors, same skills
Despite its specific UI principles, Google Glass is built atop Android 4.0.3. Therefore, no new technical skills are required to master the GDK, only a different mindset when designing your Glassware. Looking back for a comparison, one could say that Glassware has a lot in common with the Android widgets that appeared alongside mobile apps not so long ago: smaller screen real estate, a quest for instantaneity, and so on.
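To make the widget analogy concrete, here is a sketch of a service publishing a live card through RemoteViews, based on the TimelineManager and LiveCard classes documented in the GDK sneak peek. The StatusCardService and StatusMenuActivity names, the layout resources and the exact signatures (which may still change during the preview) are assumptions on our part.

```java
import android.app.PendingIntent;
import android.app.Service;
import android.content.Intent;
import android.os.IBinder;
import android.widget.RemoteViews;

import com.google.android.glass.timeline.LiveCard;
import com.google.android.glass.timeline.TimelineManager;

// Sketch of a Glassware service pushing a live card to the timeline,
// much like an app widget pushes RemoteViews to the home screen.
public class StatusCardService extends Service {

    private LiveCard mLiveCard;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        if (mLiveCard == null) {
            // Create the card in the Glass timeline (sneak-peek API).
            mLiveCard = TimelineManager.from(this).createLiveCard("status");

            // The content is a plain RemoteViews layout, exactly as for
            // a home-screen widget (R.layout.card_status is a placeholder).
            RemoteViews views = new RemoteViews(getPackageName(), R.layout.card_status);
            views.setTextViewText(R.id.status_text, "Hello through the prism");
            mLiveCard.setViews(views);

            // Tapping the card opens a regular Android activity.
            Intent menuIntent = new Intent(this, StatusMenuActivity.class);
            mLiveCard.setAction(PendingIntent.getActivity(this, 0, menuIntent, 0));

            mLiveCard.publish();
        }
        return START_STICKY;
    }

    @Override
    public void onDestroy() {
        if (mLiveCard != null && mLiveCard.isPublished()) {
            mLiveCard.unpublish();
            mLiveCard = null;
        }
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}
```

The parallel with app widgets is striking: the same RemoteViews mechanism, the same constraint of delivering a small, glanceable piece of UI outside of a full-screen application.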
OK Glass… what’s next?
We see tremendous opportunities in Glass to enhance the mobile services we are creating by proposing new ways to make the user experience more fluid… and this is just the beginning. One may compare this GDK announcement to the release of the Apple iOS SDK in 2008, a few months before the App Store.
In the meantime, we are developing our first Glassware and will be delighted to share this experience with you.