EXPLORING WITH GOOGLE GLASS

February 4, 2014 | Enterprise Mobility, Technologies

At BlueFletch we’re constantly evaluating new mobile technologies, both hardware and software, to see what advantages they can give our clients.  One of the more exciting mobile technologies that came out over the past year was Google Glass. With all of the buzz around it and with an almost equal number of skeptics and believers, we decided to try it out for ourselves.

What is Google Glass?

Google Glass is essentially a wearable computer. With the display above your right eye, it can be controlled via voice commands or gestures.  You can turn it on by tilting your head upward or by touching the side of the unit.  What's nice is that it responds to multi-touch gestures and has sensors on the inside wall that detect when it is being worn and detect eye events.  It has built-in Wi-Fi (which you can configure using either the MyGlass app or a browser) and Bluetooth to connect to your phone for calls (and for data outside of Wi-Fi coverage).  The most controversial component of Glass is the 5MP camera, which can be controlled via voice (or, in a recent update, by wink).

Personal Impressions


After the initial setup, it took a while to get used to looking up into the display as well as using the commands and navigating through the timeline.  It's definitely not a place to browse through long messages, but I was pleasantly surprised by how easy it was to receive and respond to short emails or SMS.  Turn-by-turn navigation works well on Glass; however, after a couple of miles I found myself looking up instead of forward, and I realized it was too much of a distraction to keep on while driving, even after a week of use. Other Glass users' opinions may vary, but, as with phones, I would recommend not wearing or operating Glass while driving.


Developing for Glass

Glass applications can be developed either with the Mirror API or with the Glass Development Kit (GDK), which was released to explorers late last year.  The Mirror API lets you build web services (a.k.a. Glassware) against Google APIs that push content to, and interact with, an authenticated user's Glass without running any code on the device, while the GDK lets you build native applications that run on Glass itself.
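To make the distinction concrete, here is a minimal sketch of the Mirror API side: Glassware inserts a timeline item by POSTing JSON to the timeline endpoint. The OAuth 2.0 flow that produces the access token is omitted, and the naive string concatenation is for illustration only; a real service would use a JSON library or Google's client libraries.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: push a simple text card onto an authenticated user's timeline
// via the Mirror API. Assumes a valid OAuth 2.0 access token with the
// Mirror scope has already been obtained elsewhere.
public class MirrorTimelineSketch {

    public static void insertTimelineItem(String accessToken, String message) throws Exception {
        URL url = new URL("https://www.googleapis.com/mirror/v1/timeline");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Bearer " + accessToken);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // Illustration only: a real service should JSON-escape the message.
        String body = "{\"text\": \"" + message + "\"}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes("UTF-8"));
        }
        System.out.println("Mirror API responded: " + conn.getResponseCode());
    }
}
```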

Building the Prototype

With our focus on our enterprise clients, I decided to try repurposing an existing Inventory Scanning and Lookup application we built for one of our retail clients into a Glass application using the GDK.  Creating the basic application was pretty straightforward: if you know how to develop native Android applications, developing with the GDK is not that much different.  The application still consists of Activities, but the UI views can be implemented using "Cards" within a "CardScrollView," and, since it is a native Android application, I was able to reuse the existing native code we had to perform REST calls to our client's JSON services.  To launch the application, I only had to create a few XML files and set up the manifest file to have my app show as an option in the "ok glass" menu. After putting the Glass in debug mode, I was able to compile and push the application onto it.
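For reference, the "few XML files" amount to a voice trigger resource plus a manifest entry. A minimal sketch follows; the keyword and activity name are placeholders, not our client's actual values.

```xml
<!-- res/xml/voice_trigger.xml: the phrase shown in the "ok glass" menu.
     Unlisted keywords like this one require the Glass development permission. -->
<trigger keyword="inventory lookup" />
```

```xml
<!-- AndroidManifest.xml excerpt: wires the voice trigger to the activity. -->
<activity android:name=".InventoryActivity">
    <intent-filter>
        <action android:name="com.google.android.glass.action.VOICE_TRIGGER" />
    </intent-filter>
    <meta-data
        android:name="com.google.android.glass.VoiceTrigger"
        android:resource="@xml/voice_trigger" />
</activity>
```

And here is a minimal sketch of the card-based UI, written against CardBuilder from the shipped GDK (the earlier preview we developed against exposed a similar Card class); the result strings are placeholders for data returned by the REST calls mentioned above.

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.view.ViewGroup;
import com.google.android.glass.widget.CardBuilder;
import com.google.android.glass.widget.CardScrollAdapter;
import com.google.android.glass.widget.CardScrollView;

// Sketch of a Glass activity that renders inventory results as a
// horizontally scrollable deck of cards.
public class InventoryActivity extends Activity {

    private final String[] results = {"Widget A: 12 in stock", "Widget B: 3 in stock"};

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        CardScrollView scroller = new CardScrollView(this);
        scroller.setAdapter(new CardScrollAdapter() {
            @Override public int getCount() { return results.length; }
            @Override public Object getItem(int position) { return results[position]; }
            @Override public int getPosition(Object item) {
                for (int i = 0; i < results.length; i++) {
                    if (results[i].equals(item)) return i;
                }
                return -1;
            }
            @Override public View getView(int position, View convertView, ViewGroup parent) {
                // Each result becomes one full-screen text card.
                return new CardBuilder(InventoryActivity.this, CardBuilder.Layout.TEXT)
                        .setText(results[position])
                        .setFootnote("Inventory Lookup")
                        .getView();
            }
        });
        scroller.activate();
        setContentView(scroller);
    }
}
```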

Prototype Challenges and Results

Our Inventory Lookup application used the phone camera as a barcode scanner via the ZXing library, which didn't work on Glass out of the box.  I ultimately settled on a slimmed-down version of ZXing (Android QR Code) but had to modify the CameraManager's frame height to work with Glass's odd aspect ratio (640×360).
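The exact patch isn't reproduced here, but the shape of the change looks roughly like this: recompute the framing rectangle that ZXing uses to crop preview frames so it fits the 640×360 preview instead of phone-style screen dimensions. All numbers below are illustrative, not the values from our actual modification.

```java
import android.graphics.Rect;

// Illustrative only: ZXing's CameraManager derives its framing rectangle
// from phone-like screen dimensions. The Glass preview is 640x360, so the
// scanning frame has to be recomputed for that wide, short aspect ratio.
public class GlassFramingRect {

    private static final int PREVIEW_WIDTH = 640;
    private static final int PREVIEW_HEIGHT = 360;

    // Returns a centered scanning frame that leaves a margin on all sides
    // while staying within the 640x360 preview.
    public static Rect getFramingRect() {
        int frameWidth = PREVIEW_WIDTH * 3 / 4;    // 480
        int frameHeight = PREVIEW_HEIGHT * 3 / 4;  // 270
        int leftOffset = (PREVIEW_WIDTH - frameWidth) / 2;
        int topOffset = (PREVIEW_HEIGHT - frameHeight) / 2;
        return new Rect(leftOffset, topOffset,
                leftOffset + frameWidth, topOffset + frameHeight);
    }
}
```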

Ultimately this setup works; however, you have to awkwardly hold the item about three inches from your face, in front of the Glass camera, for it to successfully read the barcode.  I tried incorporating image processing to locate and crop the barcode while holding the item at arm's length, but at that distance the barcode comes in at too low a resolution to be decoded properly.  This approach might be feasible if the camera had optical zoom (digital zoom does not help either, as the artifacts it introduces make the barcode unreadable), but with its current resolution and wide-angle lens, scanning barcodes is a bit awkward.

In the end I decided to use Google's speech recognizer intent to let the user say the item name and use the transcribed text for the inventory search.  Google's speech recognition has improved tremendously over the past couple of years, and on Glass the recognizer was nearly spot-on and worked considerably better than the speech recognizer on a phone.
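This uses the standard Android RecognizerIntent, which Glass handles with its own voice prompt UI. A minimal sketch, with searchInventory standing in for our existing REST lookup:

```java
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;
import java.util.List;

public class VoiceSearchActivity extends Activity {

    private static final int SPEECH_REQUEST = 0;

    // Launch the platform speech recognizer; on Glass this brings up the
    // standard spoken-prompt UI instead of a phone-style dialog.
    private void startVoiceSearch() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Say an item name");
        startActivityForResult(intent, SPEECH_REQUEST);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == SPEECH_REQUEST && resultCode == RESULT_OK) {
            // The first result is the recognizer's best transcription.
            List<String> results = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            String spokenText = results.get(0);
            searchInventory(spokenText);
        }
        super.onActivityResult(requestCode, resultCode, data);
    }

    private void searchInventory(String query) {
        // Placeholder for the existing REST call to the inventory JSON service.
    }
}
```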

Conclusions

As a consumer device, I believe Google did a great job of executing their vision: to have technology available when you want it and out of the way when you don’t.  Although it will take some time to placate privacy advocates and for the device itself to become culturally acceptable, it’s an important piece of tech that holds a lot of potential.

For the enterprise, it can be a great companion to existing handheld devices thanks to its small, lightweight form factor and strong voice command/speech recognition capabilities, but it would benefit from better battery life, a better camera or a more specialized scanning device, and a larger display.  In a task-based enterprise application, it would be better to have the display in the user's line of sight, acting more as a HUD, to prevent eye fatigue, similar to more specialized (albeit more expensive) products from Lumus Optical or Optinvent.  Nevertheless, the ease of development and the potentially low cost of the final product make it an attractive piece of mobile technology worth evaluating and exploring.
