The other side of NRF…

…this year we exhibited at the world’s largest retail show in New York, the National Retail Federation’s Big Show.

This year’s trip to NRF’s Big Show, the largest retail tech conference on the planet, was a bit different for us. Instead of just going as visitors, we were there exhibiting alongside IBM.

We’ve been working on a project with one of their research teams in Israel for a while now, building a platform that recognises the products on a shelf and feeds back whether they’re in the right place according to the plan, or flags them if they’re out of stock.
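
We obviously can’t share the workings of IBM’s system, but the idea at its heart, comparing what the camera recognises in each shelf position with what the plan says should be there, is easy to sketch. Here’s a toy version in C++ with invented product names and data structures, purely to illustrate that comparison step:

```cpp
#include <iostream>
#include <string>
#include <vector>

// One facing on the shelf: what the plan expects, and what the camera saw.
// An empty "detected" string means the recognition step found nothing there.
struct Slot {
    std::string expected;  // product the plan calls for
    std::string detected;  // product recognised by the camera, "" if empty
};

// Walk the shelf and flag anything that is out of stock or in the wrong place.
void checkShelf(const std::vector<Slot>& shelf) {
    for (size_t i = 0; i < shelf.size(); ++i) {
        const Slot& s = shelf[i];
        if (s.detected.empty()) {
            std::cout << "Slot " << i << ": OUT OF STOCK (" << s.expected << ")\n";
        } else if (s.detected != s.expected) {
            std::cout << "Slot " << i << ": MISPLACED, found " << s.detected
                      << " where " << s.expected << " should be\n";
        }
    }
}

int main() {
    // Hypothetical shelf: the product names are invented for illustration.
    std::vector<Slot> shelf = {
        {"Baked Beans 420g", "Baked Beans 420g"},   // fine
        {"Chopped Tomatoes", ""},                   // out of stock
        {"Spaghetti Hoops", "Baked Beans 420g"},    // wrong product
    };
    checkShelf(shelf);
    return 0;
}
```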

It was great to hear what other retailers thought of the system and how it might be able to help them too. You can see a video about it from last year here if you want to know more.

It’s not just for store colleagues, though: it can also overlay nutritional information on a photo of a shelf so you can easily see, for example, which product has the lowest sugar or fat. In the future this sort of tech could be combined with wearables to enable personalised merchandising – imagine having a peanut allergy and your glasses hiding, or clearly marking, every product that contains peanuts.
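
To make that idea a bit more concrete, here’s a toy sketch of the decision logic that could sit behind such an overlay. The product data is entirely invented, and the hard parts (recognising the products and drawing the overlay on the photo) are left out; this just shows how you might hide allergens and highlight the lowest-sugar option:

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Toy product record: the fields mirror the kind of data the overlay would need.
struct Product {
    std::string name;
    double sugarPer100g;                  // grams of sugar per 100g
    std::vector<std::string> allergens;
};

bool containsAllergen(const Product& p, const std::string& allergen) {
    return std::find(p.allergens.begin(), p.allergens.end(), allergen) != p.allergens.end();
}

int main() {
    // Invented example data, standing in for what shelf recognition would return.
    std::vector<Product> onShelf = {
        {"Crunchy Peanut Butter", 5.0,  {"peanuts"}},
        {"Chocolate Spread",      56.0, {"milk"}},
        {"Strawberry Jam",        60.0, {}},
    };

    // Hide anything with peanuts in it, as the glasses might for an allergy sufferer.
    std::vector<Product> safe;
    for (const Product& p : onShelf) {
        if (!containsAllergen(p, "peanuts")) safe.push_back(p);
    }

    // Highlight the lowest-sugar option among what's left.
    auto lowest = std::min_element(safe.begin(), safe.end(),
        [](const Product& a, const Product& b) { return a.sugarPer100g < b.sugarPer100g; });
    if (lowest != safe.end()) {
        std::cout << "Lowest sugar (peanut-free): " << lowest->name << "\n";
    }
    return 0;
}
```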

Some of the highlights from the rest of the show included HP’s new Sprout concept, which brings together a 3D camera, a projector and a large touch surface with an all-in-one PC to great effect. Digitising a fabric sample so it could be manipulated and shared was impressively quick, and we’re sure there will be lots of interesting things we can do with it.

One of the partners on Microsoft’s stand had rigged up a Kinect high above a set of Xbox game shelves. When you reached out for one of the games, its trailer would start playing on the screen above. A really simple idea, simply executed – we’ve dabbled with this sort of thing at a hackathon in the past, but every product had to have a switch underneath it. Using the Kinect meant there was no need to instrument everything.
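
The mapping logic behind that kind of setup is simple enough to sketch: take the hand position the tracker reports, work out which shelf slot it falls over, and start that game’s trailer. The sketch below leaves out the Kinect SDK calls entirely and assumes hand coordinates have already been converted into the shelf’s frame of reference; the layout, sizes and titles are all made up:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical shelf layout: a grid of games, with the shelf unit's physical
// size in metres. The hand position is assumed to come from the Kinect's body
// tracking, already converted into the shelf's coordinate frame.
struct Shelf {
    double widthM  = 1.2;   // shelf width in metres
    double heightM = 0.9;   // shelf height in metres
    int cols = 4, rows = 3;
    std::vector<std::string> games;  // rows * cols titles, row-major
};

// Return the title under the hand, or "" if the hand is outside the shelf.
std::string gameUnderHand(const Shelf& shelf, double handX, double handY) {
    if (handX < 0 || handX >= shelf.widthM || handY < 0 || handY >= shelf.heightM)
        return "";
    int col = static_cast<int>(handX / (shelf.widthM / shelf.cols));
    int row = static_cast<int>(handY / (shelf.heightM / shelf.rows));
    return shelf.games[row * shelf.cols + col];
}

int main() {
    Shelf shelf;
    shelf.games.assign(shelf.rows * shelf.cols, "Some Game");
    shelf.games[5] = "Some Racing Game";  // invented placement, purely for illustration

    // Pretend the tracker just told us the hand is 0.4m across and 0.35m down.
    std::string title = gameUnderHand(shelf, 0.4, 0.35);
    if (!title.empty())
        std::cout << "Play trailer for: " << title << "\n";
    return 0;
}
```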

Following on from last year’s trends, there was continuing growth in Wi-Fi and video analytics, with retailers adding new data sources to better understand customers’ behaviour and improve their shopping experience.

After the show we visited some interesting stores which we’ll save for another post…

Ok Glass, Find a product…

A developer’s perspective on one of the most talked about wearables of recent times.

My colleagues have heard me say this several hundred times over the last few months. They have taken delight in the different search terms I have had to come up with; partly to test the glassware, but also just to entertain them. It’s rather liberating to talk to yourself at your desk, despite the ridicule from your colleagues.

Of course, you can also scan a barcode: “Ok Glass, scan a product”.

We started this experiment in June last year. We had a prototype working, and filmed a conceptual video about how customers might use the glassware. Since then it has changed substantially, although the principal functions remain. We have refined and shortened the user journeys and clarified the experience to make it consistent with the Glass design patterns.

If you are already a Glass wearer, you should find the experience very familiar and you can try the glassware out. As this is a very early experiment you can only add items to your basket and view nutritional information, but it’s enough to give a sense of what it would be like to interact with Tesco on this type of hardware.

Download it

More info and support

From a developer’s perspective, working with Glass has been a joy. The updates to Android Studio that have made Android development more accessible all apply to Glass development. The Glass Development Kit (GDK) documentation is good and getting better. The community is helpful and proactive about sharing knowledge, especially on Stack Overflow. The Glass team at Google do all they can to make sure glassware delivers the best experience possible. That’s a challenge given that Glass itself is still being developed, so it can be somewhat of a moving target. The Glass software platform went through six updates in the time we worked with it, which shows how much Google is still investing in the platform.

Given the steady flow of software updates, and the various articles that have been published alluding to updated Glass hardware, I can’t help but feel this is still the beginning of the journey for Glass and for Tesco.

Getting back in touch with the Internet of Things

Our second instalment in an increasingly inaccurately titled series of blogs about the Internet of Things, or IoT to those in the know 😉

Nine months ago I posted about a project we were working on to make some of the stuff we use in stores a bit smarter and more communicative. Things were going pretty well: we’d built a Proof of Concept device, based on an Arduino microcontroller, and got a few people excited about the possible applications. But then we learnt a hard lesson about the difference between a Proof of Concept and a prototype, a pain which I shall attempt to convey in the following paragraphs.

Most people use the terms Proof of Concept (PoC) and prototype interchangeably, but in truth they are very different things. The PoC demonstrates that something is physically possible in its broadest sense. With a PoC, the little details can be put aside. Worried about battery life? No problem, attach it to a wall socket. Heat problems? No drama, put a fan over it. Two weeks to build and test each unit? Fine, it’s just a Proof of Concept. Conversely, the prototype demonstrates exactly how it will be implemented in an operationally viable way that won’t bankrupt the company. Battery life – how are we going to monitor it? How does it get recharged? Will it last long enough between charges? Is it easy to find the socket to plug it into? Can it be recycled?

The first challenge we faced when moving to the prototype stage was how to make the electronics come in at a reasonable price. It’s worth noting that, in our naivety, we thought we could hand the PoC to an electronics company and get back a complete solution. However, the costs turned out to be prohibitive, so we were back to doing it ourselves. That was actually a good thing, because we made a rather marvellous discovery: it’s possible to get a small run of printed circuit boards manufactured for under £100. So, using some free software called DesignSpark, we were able to design a neat circuit board that we could assemble in a couple of hours. How cool is that? The second discovery was that the expensive, and rather unwieldy, Arduino could be replaced by a much smaller and cheaper ‘Pro’ version. A couple of revisions later, we had a very robust and economically viable ‘brain’ for our device.

Our second challenge was which sensors to use. Our Proof of Concept detected changes in state using magnetic switches. Unfortunately, that would have meant fitting powerful magnets to the equipment used with it: a big no-no as far as economics and health and safety were concerned! Micro switches were too hard to mount, optical switches didn’t work reliably and laser range finders seemed like overkill. Fortunately, we discovered some infrared proximity sensors that fit the bill. They’re fairly useful, and some are even supplied with a big relay so they can be used to switch mains voltage. Not for this project, but worth keeping in mind.
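
If you fancy playing along at home, reading one of these digital-output proximity sensors from an Arduino is about as simple as it gets. Something along these lines should do it (the pin choice and debounce timing are illustrative, and many modules pull their output low when something is in range, so check your sensor’s datasheet):

```cpp
// Minimal Arduino sketch: watch a digital-output IR proximity sensor and
// report state changes over serial. Pin choice and timings are illustrative.
const int SENSOR_PIN = 2;            // sensor's digital output
const unsigned long DEBOUNCE_MS = 50;

int lastStableState = HIGH;
int lastReading = HIGH;
unsigned long lastChangeAt = 0;

void setup() {
  pinMode(SENSOR_PIN, INPUT_PULLUP); // many modules idle high, pull low on detect
  Serial.begin(9600);
}

void loop() {
  int reading = digitalRead(SENSOR_PIN);
  if (reading != lastReading) {      // raw change: restart the debounce timer
    lastChangeAt = millis();
    lastReading = reading;
  }
  if (millis() - lastChangeAt > DEBOUNCE_MS && reading != lastStableState) {
    lastStableState = reading;       // change has held long enough to trust
    Serial.println(lastStableState == LOW ? "Object detected" : "Object removed");
  }
}
```

In a real device the output would obviously go somewhere more useful than the serial port, but the debounce pattern is the same.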

We still have a few challenges to overcome, but I think we’re on the final straight. Hopefully in my final instalment I’ll be able to give the ‘big reveal’ and say what it is we’ve actually been working on.