Project: Meal Deal Express

We’re running a live experiment in our Dean Street Metro store next week!

Background and Introduction

We’re working on a project with the London Format team whose ultimate goal is “I Don’t Queue”. Queuing is a real issue in central London Metro and Express stores, particularly at lunchtimes, when customers choose the popular £3 Meal Deal and reach the checkouts faster than the checkouts can process them.

The latest checkouts can process customers faster (as can ‘scan as you shop’), but checking out at the end of the shop can still be improved. So our aim is to find a way to eliminate checkouts completely. In our vision of the future, customers walk into any Tesco, fill their basket or trolley, and walk straight out again.

Goals and Objectives

To start the first phase of this work, we’ve created a one-week (Monday to Friday) experiment called “Meal Deal Express” in which we have designed queuing out completely.

We have built a ‘Meal Deal Express’ zone in the centre of Dean Street Metro in Soho which consists of three adjacent ‘stations’. Customers move along the stations, choosing one item from each. They then check out either by tapping a contactless payment card on an NFC terminal constantly set to debit £3 from any card tapped on it, or by using a mobile payments app that has them scan a QR code and tap ‘Pay’. These two options were chosen because each processes the transaction in no more than five seconds, so the entire checkout takes five seconds. The customer then happily walks out of the door.
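For the technically curious, the terminal logic really is that simple. Here is a minimal sketch in Python; read_card_uid and charge_card are hypothetical stand-ins for the real NFC reader and payment-gateway calls, which I won’t name here:

    # Minimal sketch of a fixed-price contactless terminal loop.
    # read_card_uid() and charge_card() are hypothetical stand-ins for
    # whatever the real NFC reader and payment gateway libraries expose.

    MEAL_DEAL_PENCE = 300  # the only amount this terminal ever charges

    def read_card_uid():
        """Block until a card is tapped, then return its identifier."""
        raise NotImplementedError("swap in the NFC reader's own library call")

    def charge_card(card_uid, amount_pence):
        """Ask the payment provider to debit the card."""
        raise NotImplementedError("swap in the payment provider's API call")

    while True:
        uid = read_card_uid()              # customer taps their card
        charge_card(uid, MEAL_DEAL_PENCE)  # always £3: no scanning, no totals
        print("Paid - enjoy your Meal Deal!")

There is deliberately no basket, no totalling and no receipt step: the fixed price is what makes the five-second checkout possible.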

The ’30 second challenge’ is on!

Summary (for those in a hurry)
• Tesco Labs are conducting an experiment to speed up the purchasing of popular Meal Deals.
• We are running the experiment for one week (13–17 October, 12–2pm) in one store: Dean Street Metro, London.
• We are working with innovative suppliers to see how fast customers can purchase their meals with new technology.
• Results of the week-long experiment will be fed into the next phase of the project.

Retail Week Hackathon – Judges Report

Our very own Nick Lansley tells us what it was like to judge at the Retail Week Hackathon.

As chairman of the Judging Panel for the inaugural Retail Week Hackathon, I can say on behalf of my fellow judges that we’ve witnessed some outstanding examples of innovation from a set of 10 inspiring teams.

We encountered so many examples of ‘thought-through’ innovation, where teams tested their ideas through research and refinement, checked business models and built a credible, tangible customer experience.

Our three finalists provided us with outstanding examples, and scored highly in the four judged areas of innovation, business value, customer experience and functionality.

Deloitte Digital presented us with “Fit”, an end-to-end in-store journey for choosing, trying on and purchasing fashion clothing. They had really thought through the customer journey by finding the easiest way for customers to scan product labels: not barcodes (which can be fiddly to scan with a camera phone), or iBeacons (which cost the retailer to install and maintain), but ‘SnowShoes’, a passive tag placed on the phone’s screen that presents a unique arrangement of what the phone thinks are fingers touching it.

This was quick and easy to use. Having selected the clothes, the customer could go to the changing room to try them on, and alert staff to go and get different sizes using a touch screen. They could alter the lighting in the changing room to suit different environments from ‘daylight’ to ‘at the disco’ to see how the garments looked. Finally the customer could pay there and then (for the benefit of staff, the lights go green!), and walk out of the changing room and the store. The team had really examined the business case to our satisfaction, built a real running prototype and executed a complete customer journey live as we watched their pitch.

Tesco Labs showed us ‘Quick Coffee’, a way to make it easy for customers to buy and pick up their coffee as they approached the coffee shop. By the time they arrived, the coffee would be ready and personally labelled. The team built an app used by the customer to choose their coffee type. In real-time we saw the coffee’s requirements appear in a down-projected image on the barista’s work table, along with a circle slowly growing around the description words to show the estimated countdown to the customer’s arrival.

The barista then created the coffee and placed the cup at the very spot where the description was being projected. The system detected the coffee using a Kinect sensor, alerted the customer that it was ready via a push notification to their phone, then projected the customer’s name and picture next to the cup. When the customer arrived, they picked up the cup with their name projected next to it and the system marked the transaction as complete. The team demonstrated several orders being processed in parallel. We loved the simplicity, the relatively low-cost technology deployed, and its good fit with the ‘theatre’ environment found in coffee shops.

The winner, Kega Retail, was chosen because their outstanding hack created a customer journey that travelled across online and in-store channels, with each channel helping the other in an innovative ‘bi-directional’ manner. The team built most of this journey in their hack and demonstrated it to great success. It came closest to ‘the ultimate customer experience’ theme of the Hackathon.

The journey starts when the customer engages with a product online at a retail web-store but ends up not purchasing. The next day, the customer passes the shop belonging to the same retailer. As they are a loyal customer and have the retailer’s app, their phone receives a push notification inviting them into the store. Window signage and in-store screens would highlight the item.

Staff in the store would be alerted to make the item available and answer questions (thanks to staff tablet computers showing detailed information) should the customer wish. If, on the other hand, the customer continues to pass the store and keep walking, they receive a second push notification with an offer that is hopefully enticing enough for the customer to be persuaded into the store.

But the journey doesn’t end there: if a customer were to linger looking at items in a certain part of the store, iBeacons would pick this up. Next time they visit the retailer’s web-store, those items would be more prominent. The business value was clear, and the innovation was to use technology in an easy-to-understand customer journey that merged the in-store and online channels – and made those channels work for each other to create the nearest we’ve yet come to ‘omnichannel’.
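To sketch how that lingering detection might work in practice (the zone IDs, the 30-second threshold and the promote_online hook below are my own assumptions, not the team’s actual code):

    import time

    DWELL_THRESHOLD_SECONDS = 30   # assumed: how long counts as 'lingering'

    class DwellTracker:
        """Accumulate time spent near each beacon zone and flag lingering.

        Sketch only: the threshold, zone IDs and promote_online callback are
        invented for illustration, not part of any real iBeacon SDK.
        """

        def __init__(self, promote_online):
            self.promote_online = promote_online  # e.g. boost items on the web-store
            self.entered = {}

        def on_enter(self, zone_id):
            self.entered[zone_id] = time.time()

        def on_exit(self, zone_id):
            started = self.entered.pop(zone_id, None)
            if started is not None and time.time() - started >= DWELL_THRESHOLD_SECONDS:
                self.promote_online(zone_id)   # surface this zone's items online

    # Usage: tracker = DwellTracker(promote_online=print)
    # The retailer's app would call on_enter/on_exit as the phone
    # ranges in and out of each beacon zone.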

We also gave two commendations: Clear Returns looked at how to use data to filter products by dietary requirements and envisioned how customers could highlight products by wearing a device such as Google Glass. And Ometria explored how to really engage with wish-lists that would work across multiple retailers.

We liked how both these teams took ‘the ultimate customer experience’ to mean ‘ease existing conscious customer frustrations’ – an important lesson for our retail world.

This post was first published on Retail Week

Is someone else’s programming ruling your life?

Or the consequences of algorithmic bias.

You are in a driverless car. While you relax, the car is taking you to work. Part of the journey takes you speedily over a narrow bridge with a steep drop on either side.

Unbeknown to you (or the car), someone has decided to use the bridge as a shortcut and is walking across it. There’s no room for this person to step to one side (or for the car to swerve), and no way to avoid them at this speed.

So the computer program in the car has to make a decision: kill the other person by driving over them, or kill you by swerving off the side of the bridge.

If / Else. It’s not you. It’s not the other person. It’s up to the computer algorithm, written by some programmer some time ago in a nice comfortable office far, far away. Is the programmer inclined to save the car and you (so you’ll be grateful and maybe more brand-loyal), or to save the third party, who has none of the protection around them that you have? Perhaps the car body might just save you from the drop; after all, the depth is unknown to the algorithm, which may decide the risk is worth the outcome.

All of us who are computer programmers exhibit something called “algorithmic bias” when we code. We don’t notice it but, when we code those If / Else statements in our apps and services, we decide the intention – and that intention may be based on our personal values and biases. We decide whether the If or the Else is more worthy; more valuable.
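To see just how little code that fateful decision can be, here is a toy version in Python. Every name and threshold in it is invented purely for the argument:

    # A deliberately stark illustration: the whole moral decision fits in one
    # If / Else. Every name and threshold below is invented for the argument.

    def choose_action(pedestrian_ahead, drop_depth_metres=None):
        """Decide the car's action when a collision is unavoidable."""
        if not pedestrian_ahead:
            return "continue"
        # The programmer's values, frozen into code long ago in that
        # comfortable office: protect the occupant unless the drop is
        # known to be survivably shallow.
        if drop_depth_metres is not None and drop_depth_metres < 2:
            return "swerve"    # the Else judged 'worth the outcome'
        return "continue"      # the pedestrian pays for the If

Note that when the depth is unknown (the usual case), this version silently sides with the occupant. Nobody riding in the car ever sees that branch.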

Let me suggest another scenario: I’m worried about the safety of my family when I drive, so I choose a large vehicle with lots of protection and safety features. One day I have an accident. My car is big and heavy, and it serves its intended function: everyone in it is kept nice and safe. Unfortunately, the other car isn’t so lucky, and suffers even more damage than if it had collided with an average-sized car.

Conclusion? People tend to choose products that protect themselves and their families, so more products will always be designed to protect the customer, since that sells more. Could we see, in the future, a form of Darwinism in which the customers with the most money choose the products that make the best possible decisions to protect them in these critical situations? Could we end up with a kind of algorithmic arms race? The principles of Game Theory could probably apply here!

The fact is that all the software powering all the tech around us – home, office, car – has algorithmic bias built in. Fortunately, the worst it can usually do is annoy us, but as we come to rely on software for our safety, it’s something we should pay attention to. For example, what bias is running in the software controlling your next lift journey when it has to deal with an error condition? What bias is at work in the increasingly insistent auto-correct that could replace a word and distort your message as you type your next email? Algorithmic bias is already everywhere.

So let me leave you with this thought: Are you slowly being forced to live according to the personal values and biases of some far-off development team right now?

2014: The Year Of The Robot!

Is 2014 the year of the robot? Read on for our opinion.

I’ve been waiting for this year for a long time! Finally… FINALLY… I’m sensing that this is the Year of the Robot. I’ve been promised this year all my life from the earliest TV shows to the latest Robocop movie so I’m glad it’s finally here.

Google’s purchase of major military robot maker Boston Dynamics is the marker. Combining the Big G’s powerful algorithms with those ominous walking machines means that all the component pieces are now ready to be put together.

At the other end of the spectrum of robotic technology are the hobbyist components that can be formed into home-brew robots.

I spent my Christmas break building a four-wheel-drive robot that combines a Raspberry Pi with an Arduino control board. The Raspberry Pi is being taught to follow a red ball using its HD camera, comparing what the camera sees to an image of the red ball it has stored in memory. It works out the difference between where the red ball is in the viewfinder and where it is in the ideal image, and sends commands to the Arduino to drive the four wheel motors in a way that it ‘thinks’ will align the two balls. If the ball in the viewfinder is too small, the robot moves forward; if too large, backwards. If it sits too far to one side, a ‘spin’ is applied, and so forth.
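For the curious, here is a minimal Python sketch of that decision step using OpenCV on the Pi. The HSV thresholds, target radius and command names are my own assumptions rather than the exact project code; the Arduino side simply maps each command string to motor speeds:

    import cv2
    import numpy as np

    TARGET_RADIUS_PX = 60    # assumed: the ball's radius at the 'ideal' distance
    CENTRE_TOLERANCE = 40    # assumed: pixels of slack before we bother spinning

    def decide_command(frame):
        """Find the red ball in a BGR camera frame and return a drive command."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Red wraps around the hue axis, so mask both ends of the range.
        mask = (cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) |
                cv2.inRange(hsv, (170, 120, 70), (180, 255, 255)))
        m = cv2.moments(mask)
        if m["m00"] == 0:
            return "stop"                         # no ball in view
        cx = m["m10"] / m["m00"]                  # ball's horizontal centre
        radius = (m["m00"] / 255 / np.pi) ** 0.5  # rough radius from mask area
        mid = frame.shape[1] / 2
        if cx < mid - CENTRE_TOLERANCE:
            return "spin_left"    # ball drifted left: rotate towards it
        if cx > mid + CENTRE_TOLERANCE:
            return "spin_right"
        if radius < TARGET_RADIUS_PX:
            return "forward"      # ball looks too small: drive closer
        if radius > TARGET_RADIUS_PX:
            return "backward"     # too large: back away
        return "stop"             # aligned with the stored 'ideal' ball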

Neither the Raspberry Pi nor the Arduino is a powerful computer, but they are cheap, can run off batteries and give you a complete whiteboard on which to experiment. The limit is your imagination; all you need is a sense of how to program. Those of us with some programming experience can even teach the Raspberry Pi to program itself, using the original robot control program to write out adapted source code which it then proceeds to run. Adapting and evolving as its environment changes, the robot will learn more and adjust as it gains experience.
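As a toy illustration of that self-programming idea (everything here is invented to make the point), the controller can write out an adapted copy of its own decision logic and immediately run it:

    # A toy version of a program that rewrites itself: the controller writes
    # out an adapted copy of its decision logic, then runs the new source.
    # Entirely illustrative - a real robot would tune values from sensor feedback.

    TEMPLATE = '''
    CENTRE_TOLERANCE = {tolerance}

    def decide(offset_px):
        if abs(offset_px) <= CENTRE_TOLERANCE:
            return "forward"
        return "spin_left" if offset_px < 0 else "spin_right"
    '''

    def write_adapted_controller(tolerance, path="controller_v2.py"):
        with open(path, "w") as f:
            f.write(TEMPLATE.format(tolerance=tolerance))
        return path

    # 'Learn' (here: pretend) that a tighter tolerance works better, then reload.
    namespace = {}
    exec(open(write_adapted_controller(tolerance=25)).read(), namespace)
    print(namespace["decide"](-40))   # -> spin_left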

At the powerful end of the scale, I think it’s only a matter of time before robots reach ‘sentience’ – that is, a sense of their own existence. Once we get there, I’m quite sure a whole moral landscape will open up; a landscape that starts with the question: Do robots dream of electric sheep?