Will robots ever become aware of their own existence?

Just before I left the office for the weekend, my colleague Nick posed this question for us to ponder. Here are my thoughts.

Science fiction writers like Asimov have written about this idea for some time. Bicentennial Man and I, Robot are examples of this, and there have been many memorable film ‘characters’ like HAL 9000 and David from A.I. that help us speculate about what could happen if machines ever became self-aware. But could this ever happen?

Well, perhaps the first concept to consider is that of self-awareness. As humans, most of us would probably consider ourselves to be self-aware, sentient beings. We’d probably be fairly comfortable in saying that some mammals, such as dolphins and primates, have at least some degree of self-awareness too. Cats and dogs? Maybe. Ants? Probably not. This suggests we tend to believe that intelligence is a prerequisite for self-awareness, and that to be aware of its own existence a machine would need to be intelligent. But is this possible?

This is a big question, on which there are two broad perspectives. The symbolic AI perspective suggests that even if machines become infinitely complex, they will only ever appear to show intelligent behaviours rather than possessing actual innate intelligence. Take chess, for example. We assume that any human who is good at chess is also intelligent. We’d think a dog that could play chess was nothing short of miraculous. But a computer? Well, even if it’s hard to beat, we know that it’s just following a set of rules. Similarly, expert systems use a large set of rules to come to a decision. If the rule base were infinitely large, it might seem omniscient. It might even seem to be self-aware, but would it be? Google John Searle’s ‘Chinese Room’ argument and let me know what you think.
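To make the ‘just following rules’ point concrete, here is a minimal sketch of the expert-system idea, written in Python purely for illustration; the facts, rules and conclusions below are invented for the example and not taken from any real system.

```python
# A toy "expert system": a decision emerges purely from matching known
# facts against hand-written if-then rules. Nothing here understands
# anything; it just follows the rules it was given.
# (Illustrative only: facts, rules and conclusions are made up.)

facts = {"has_fever": True, "has_rash": False, "has_cough": True}

rules = [
    # (conditions that must all hold, conclusion)
    ({"has_fever": True, "has_cough": True}, "suspect flu"),
    ({"has_fever": True, "has_rash": True}, "suspect measles"),
    ({"has_fever": False, "has_cough": True}, "suspect a cold"),
]

def infer(facts, rules):
    """Return every conclusion whose conditions match the known facts."""
    conclusions = []
    for conditions, conclusion in rules:
        if all(facts.get(key) == value for key, value in conditions.items()):
            conclusions.append(conclusion)
    return conclusions

print(infer(facts, rules))  # ['suspect flu']
```

However large we make the rule base, the program only ever matches patterns it was handed in advance, which is exactly the intuition the Chinese Room argument plays on.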

An alternative perspective is connectionist AI.  With tools like neural networks, we can simulate intelligence by simulating how the brain actually works, with layers of interconnected neurons that fire when they are sufficiently excited by an external stimulus. At the moment, the simulation is pretty crude, since the computing power required to handle the interconnections is enormous.  But here’s the killer question: if we were able to simulate a human brain at a deep enough granularity, and situate it in a robot that had the same senses we do, would it be intelligent? Would it be self-aware?  Would the Chinese Room argument still apply?
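As a rough illustration of the connectionist idea, here is a minimal sketch of a single artificial neuron in plain Python; the inputs, weights and bias are invented for the example, and a real network wires up millions of these units in interconnected layers.

```python
import math

# A single artificial neuron: it "fires" (output close to 1) only when
# the weighted sum of its inputs is large enough to overcome its bias.
# (Inputs, weights and bias are invented purely for illustration.)

def neuron(inputs, weights, bias):
    excitation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-excitation))   # sigmoid activation

stimulus = [0.9, 0.2, 0.7]    # e.g. signals arriving from three "senses"
weights  = [1.5, -0.8, 2.0]   # how strongly each input excites the neuron
bias     = -1.0               # threshold the excitation must overcome

print(round(neuron(stimulus, weights, bias), 3))  # ~0.83: the neuron fires
```

Scale that up by many orders of magnitude, with every neuron feeding many others, and you get some sense of why the computing power required is so enormous.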

One final difficulty to consider is that there is no universally agreed definition of what intelligence is, or how it can be measured. Sure, we have metrics like IQ, but these tend to be flawed by their cultural bias. For example, a typical IQ test question might involve adding together some letters based on their positions in the alphabet, which measures a) the ability to add, b) knowledge of the order of the alphabet, and c) the ability to take a mental leap, which typically comes from having seen this kind of puzzle before. This is all culturally dependent information: it measures experience rather than intelligence. Intelligence is better thought of as the capacity to acquire new capacities. The Turing test, which implies that any computer capable of fooling a human into thinking it is another human must be intelligent, suffers from the same failings.
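For what it’s worth, here is the kind of letter-addition puzzle described above, sketched in Python (the specific puzzle is invented for illustration): once you know the trick, solving it is pure mechanics.

```python
import string

# "Add" two letters by their positions in the alphabet (A=1, B=2, ...).
# Solving this tests alphabet knowledge and addition, not raw intelligence.
# (Assumes the sum stays within the 26 letters; invented for illustration.)

def letter_sum(a, b):
    position = {letter: i + 1 for i, letter in enumerate(string.ascii_uppercase)}
    total = position[a] + position[b]
    return string.ascii_uppercase[total - 1]

print(letter_sum("C", "E"))  # C(3) + E(5) = 8 -> 'H'
```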

So, in conclusion, there seems to be no reason why machines could not eventually become self-aware, given enough computing power. But we probably shouldn’t assume that once they are, they would be able to tell us about it. It would also be interesting to consider how they would look upon their relationship with us. Will they consider us to be violent, and if so, how will they react?* Tell us what you think!

*Possibly with a blue screen and a frowny face.

If you'd like to know more, you can email us at labs@uk.tesco.com, or let us know your thoughts with a tweet @TescoLabs