Is someone else’s programming ruling your life?
Or the consequences of algorithmic bias.
You are in a driverless car. While you relax, the car is taking you to work. Part of the journey takes you, at speed, over a narrow bridge with a steep drop on either side.
Unbeknown to you (or the car), someone has decided to use the bridge as a shortcut and is walking across it. The bridge is too narrow for this person to step aside (or for the car to swerve around them), and at this speed there is no way to stop in time.
So the computer program in the car has to make a decision: kill the other person by driving over them, or kill you by swerving over the side of the bridge.
If / Else. It’s not your choice. It’s not the other person’s. It’s down to the computer algorithm, written by some programmer some time ago in a nice comfortable office far, far away. Is the programmer inclined to save the car and you (so you’ll be grateful and perhaps more brand-loyal), or to save the third party, who has none of the protection around them that you have? Perhaps the car body might save you from the drop; after all, the depth is unknown, and the algorithm may decide that the risk is worth taking.
All of us who are computer programmers exhibit something called “algorithmic bias” when we code. We don’t notice it, but when we write those If / Else statements in our apps and services, we decide the intention – and that intention may be based on our personal values and biases. We decide whether the If or the Else is more worthy; more valuable.
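To make that concrete, here is a purely hypothetical sketch (not any real vehicle’s code) of how a single If / Else can encode a value judgement. The function name, parameters, and weighting constants are all illustrative assumptions: the point is that whoever sets those weights decides, long before the journey begins, whose safety the branch favours.

```python
# Hypothetical illustration of bias baked into an If / Else.
# The weights are the programmer's value judgement, not a fact about the world.
OCCUPANT_WEIGHT = 1.0    # how much the algorithm values the car's occupant
PEDESTRIAN_WEIGHT = 1.0  # how much it values the third party

def choose_action(occupant_survival_if_swerve: float,
                  pedestrian_survival_if_continue: float) -> str:
    """Return 'swerve' or 'continue' based on weighted survival estimates.

    Quietly changing either weight changes who the car protects --
    that is the programmer's bias, hidden inside one comparison.
    """
    occupant_value = OCCUPANT_WEIGHT * occupant_survival_if_swerve
    pedestrian_value = PEDESTRIAN_WEIGHT * pedestrian_survival_if_continue
    if occupant_value >= pedestrian_value:
        return "swerve"      # protect the pedestrian, risk the occupant
    else:
        return "continue"    # protect the occupant, risk the pedestrian
```

Notice that even the tie-break (the `>=` rather than `>`) is a choice: when the estimates are equal, this version swerves. A different programmer might have written it the other way round, and no customer would ever see the difference in the showroom.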
Let me suggest another scenario: I’m worried about the safety of my family when I drive, so I choose a large vehicle with lots of protection and safety features. One day I have an accident. My car is big and heavy, but it serves its intended function: everyone in it is kept nice and safe. Unfortunately, the other car isn’t so lucky, and suffers even more damage than if it had collided with another average-sized car.
Conclusion? People tend to choose products that protect themselves and their families, so more products will always be designed to protect the customer, because that is what sells. Could we see, in the future, a form of Darwinism in which the customers with the most money buy the products that make the best possible decisions to protect them in these critical situations? Could we end up with a kind of algorithmic arms race? The principles of game theory could probably apply here!
The fact is that all the software powering all the tech around us – home, office, car – has algorithmic bias built in. For now, the worst it can usually do is annoy us, but as we come to rely on software for our safety, it is something we should pay attention to. For example, what bias is running in the software controlling your next lift journey when it has to deal with an error condition? What bias is at work in the increasingly insistent auto-correct that could replace a word in your next email and distort your message? Algorithmic bias is already everywhere.
So let me leave you with this thought: Are you slowly being forced to live according to the personal values and biases of some far-off development team right now?
If you'd like more information you can email us at firstname.lastname@example.org, or let us know your thoughts with a tweet @TescoLabs