Immediately, we thought about how our colleagues might be able to use Glass to check stock hands-free, or how our customers might be able to add a product to their grocery delivery basket while making a cup of tea. Getting to that stage has been a journey into entirely new areas of user interaction: new gestures, user interface elements, and input mechanisms.
Most of all, it’s about trying to understand the use-cases for Glass. It’s unlike any hardware we’ve had before, so we try applying the use-cases we know from mobiles, tablets and desktop computing to see if they stick to Glass. They mostly don’t. Glass isn’t the kind of tech you use for 15, 10 or even 5 minutes at a time. You’re not going to comfortably do your entire grocery shop by staring at the top right-hand corner of your field of vision, but you might just add a single item, see some nutritional information, and then move on. You might get a notification about your delivery, including a photo of your delivery driver.
Click to play video.
Other than some time-compression between adding items to the basket and the actual delivery, the prototype app is real; no smoke and mirrors there. Every once in a while, a new piece of technology comes along that pushes the boundaries of science fiction, making all sorts of potential use-cases an immediate reality. Glass feels like one of those technologies, in the way Wi-Fi or smartphones changed things. This is just the beginning of our journey with Glass, but we’re very excited about it and other wearables, and most importantly about how you will use wearable technology to interact with Tesco.
Let us know your thoughts!