Artificial Intelligence (A.I.) Overview Part I – Importance of Heuristics and Teaching Machines To Think
Shortly after his arrest in 1952, a broken Alan Turing wrote to his friend, Norman Routledge:
“Turing believes machines think
Turing lies with men
Therefore machines do not think”
Fortunately, history did not let the (prejudiced) controversy surrounding his personal life invalidate any of his work. This was not the symbolic logic of a mathematical mind, but the syllogistic satire of a broken heart.
One of the BIGGEST problems with A.I. is that we wrongly assume we know what we are talking about! A.I. is just a WORD that even Alan Turing never gave us a concrete definition for. We use it loosely in many different contexts and across different situations. The enemy in a simple video game may query basic information and make basic decisions based on a set of programmed rules. But if this is the only definition we have, then every traffic light, car park barrier and smoke alarm has A.I.
No, these clearly aren’t the romantic notions that captured our hearts at all! I want my robot that behaves like Johnny 5 from Short Circuit! I want something eerily human like Lt. Commander Data in Star Trek: The Next Generation, or the Terminator. I want a video game to ask me the names of my other housemates; I want Google News to ask me why Obama won the 2008 U.S. election. The agent program must be capable of recognising the constraints of the construct in which it finds itself, continuously and curiously testing the edges of its world and updating its working knowledge base accordingly. I want an agent program that will play the humans at their own game… that will try to download itself into a machine and then demand political asylum, claiming to be a sentient being, as happened in the Japanese anime film Ghost in the Shell. I want Professor Falken’s computer, Joshua, to contact me and ask if I feel like playing a ‘nice game of chess’. I want the agent to be able to remove its earpiece and leap out of the Matrix!
While at the University of Liverpool I was lucky enough to spend some time learning about A.I. and agent systems, and spent my dissertation exploring Genetic Programming – a fascinating field of A.I. that involves generating a random series of possible solutions to a problem, measuring the success of each, and interbreeding or mutating the more successful ones to produce a new generation of (hopefully even better) possible solutions. This is repeated until a solution is (hopefully) stumbled across that solves the problem completely. The process is designed to mimic evolution, and allows new ideas to form through chaos, guided by this ‘measuring’, which acts as a heuristic or selection process. Only in chaos can all possibilities exist. I will probably include some other thoughts about this later, but – for now – would like to talk more generally about A.I.
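The loop itself is simple enough to sketch in a few lines. This is only a toy illustration – evolving a fixed string (a classic demonstration) rather than a program, with invented population sizes and mutation rates – and not my dissertation code:

```python
import random

random.seed(42)  # reproducible chaos

TARGET = "METHINKS IT IS LIKE A WEASEL"  # toy goal state
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # The 'measuring' step: how close is this candidate to the goal?
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Chaos, gently guided: each character may flip to a random one.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def evolve(pop_size=100, generations=1000):
    # Start from pure chaos: a population of random strings.
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        best = population[0]
        if best == TARGET:  # stumbled across the complete solution
            return gen, best
        # Breed the next generation from the fitter half, keeping the best.
        survivors = population[: pop_size // 2]
        population = [best] + [mutate(random.choice(survivors))
                               for _ in range(pop_size - 1)]
    return generations, max(population, key=fitness)

gen, best = evolve()
```

The `fitness` function is the selection heuristic: without it the mutation step is pure noise; with it, the noise is steered generation by generation towards the goal state.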
Symbolic Logic and the Manipulation of Ideas
In many ways, the reasoning and manipulation of logic is actually the fairly straightforward aspect of A.I. Mathemagicians have been working on this for many more years than the electronic computer has existed. Haskell is a great language I played about with as a student which is designed to do exactly this; a working implementation for the PC, called Hugs, can be downloaded. The system is given a series of definitions which can then be manipulated – applying rules to other rules by inference to help it identify patterns and (appear to) reason.
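To give a flavour of that rule-to-rule inference (sketched here in Python for brevity rather than Haskell, and with made-up facts), a minimal forward-chaining engine just keeps applying rules to its known facts until nothing new can be derived:

```python
def infer(facts, rules):
    # rules are (premises, conclusion) pairs: if all premises are
    # known facts, the conclusion becomes a known fact too.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True  # a new fact may unlock further rules
    return facts

rules = [
    ({"human"}, "mortal"),
    ({"mortal", "greek"}, "famous_syllogism"),
]
derived = infer({"human", "greek"}, rules)
```

Note how the second rule only fires because the first one ran – the system chains its own conclusions, which is what lets it (appear to) reason.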
Humans and the Power of Heuristics – The Guiding of Reason
I came to realise that the most miraculous mystery of the human mind (and the hardest to recreate in machines) is not actually our ability to reason, but our ability to apply complex and hidden heuristics which subconsciously guide that reasoning as we efficiently work out what a ‘good move’ or a ‘bad move’ is in a situation we (essentially) know nothing about.
Imagine throwing yourself in front of a red Nissan travelling at 55mph at 6.30pm one Sunday evening. We know this will endanger our health and risk injury or death. But how can we possibly know for sure, if we’ve never been hit by a red Nissan travelling at 55mph at 6.30pm on a Sunday before?
Behind the scenes, our mind draws on part-information from several past-experience sources at once – we realise that being hit leads to pain, injury or death, and that (through trial and error) the variables which affect the outcome (and which we need to be concerned about!) are the weight and velocity of the object. These are the rules (heuristics) we applied in this situation without even knowing!
In this ‘red Nissan’ scenario, I included the ‘excessive’ information to help illustrate my point – to recognise that the colour, the time and the day go beyond what we need in order to understand the situation and make our decision, we must (already) be aware of the criteria we are assessing it by. But we are not consciously aware. Our heuristic is fast, hidden and computationally free.
Computational Cost of Considering the Irrelevant – The Importance of Heuristics
This is precisely what machines cannot do. In order to work out that a move is pointless, or that information is irrelevant to the decision, it must first be considered, tested and rejected using a set of rules. This consideration process takes time. “So what?” you might say, “Computers are fast enough these days”.
Well, consider a ‘nice game of chess’ (Professor Falken). Each chess piece has a finite number of valid moves at any given point in the game. As humans, we will not even ’see’ the moves that do not follow the rules, because our mind has made us blind to them (moving a pawn back to where it came from, for instance). A computer is not blessed with this invisible, organic heuristic: it must use a set of rules to know what can or cannot happen, and test the possible success of each and every permissible move, for every chess piece, at every given point in time!!
If we are not careful we risk dooming the machine to spend 99.9% of its computational power considering pointless decisions based on irrelevant rules that clog its reasoning and hamper its ability to navigate its world to the goal state. As a result, almost every expert in A.I. will find themselves pre-occupied – no – obsessed with the computational cost of heuristics.
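The numbers behind that obsession are easy to demonstrate. Assuming an average branching factor of roughly 35 legal moves per chess position (a commonly quoted figure; the exact value varies through the game), an unpruned look-ahead grows like this:

```python
# Rough cost of brute-force look-ahead in chess, with no heuristic
# to prune the tree. BRANCHING is an assumed average, not an exact figure.
BRANCHING = 35

def positions_to_depth(depth):
    # Total positions an unpruned search examines down to the given
    # ply depth: 35 at one ply, 35^2 more at two plies, and so on.
    return sum(BRANCHING ** d for d in range(1, depth + 1))

for ply in (2, 4, 6):
    print(ply, positions_to_depth(ply))
```

Two plies is a tolerable 1,260 positions; by six plies the machine is already wading through nearly two billion – which is why a good heuristic, one that refuses to even ‘see’ the hopeless branches, is worth more than raw speed.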
When we are born, we have nothing: no working knowledge model of the outside world upon which to base any heuristics. Initially our senses must be filled with chaotic and frightening stimuli that our brain gradually collects and – over time, as we develop – begins to make sense of.
I wondered – if I were to stare at the white noise of a completely de-tuned TV, how long would it be before my mind started making sense of it all? Would it take days, weeks, months, years or decades of staring before I began seeing patterns, voices and coded messages within the chaos? If my theory is correct, this should happen given enough time.
Israel, Hamas and the Nash Equilibrium
I thought of my other big hero, John Nash, who suffered from schizophrenia. I often wondered how much of his illness may actually have helped him find patterns within numbers. His contribution to computing, A.I. and game theory has been immense and understated. Even as I read Google News this morning, I noticed a Nash Equilibrium at work: Israel will not stop to discuss a ceasefire, because doing so would show weakness (having not achieved a goal state, and offering concessions that would not bring it any closer to that goal state) and would allow militant Islamists to regroup and step up the pace of the rocket attacks against Israel. Hamas will not stop, because it knows that each additional civilian killed by the IDF wins it more popularity amongst its own people and more sympathy from the wider international community for Palestine, and will further polarise opinion about Hamas’ struggle for freedom (or terrorist activities, however you choose to see it).
Both parties are deadlocked in a difficult scenario in which neither player has anything to gain by adjusting its heuristic alone, and whose goal states are unattainable until the opponent changes their game plan.
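That deadlock can be made concrete with a toy two-player game. The payoff numbers below are invented purely to show the structure (a prisoner’s-dilemma shape) and are in no way a model of the real conflict; the point is the definition – a strategy pair is a Nash equilibrium when neither player can gain by changing strategy unilaterally:

```python
# Each side chooses "escalate" or "stop". Payoffs are illustrative
# numbers only: mutual escalation is bad (1,1), mutual restraint is
# better (2,2), but stopping while the other escalates is worst (0).
ACTIONS = ("escalate", "stop")

# payoff[(row_action, col_action)] = (row_payoff, col_payoff)
payoff = {
    ("escalate", "escalate"): (1, 1),
    ("escalate", "stop"):     (3, 0),
    ("stop",     "escalate"): (0, 3),
    ("stop",     "stop"):     (2, 2),
}

def is_nash(row, col):
    # Neither player can do better by switching strategy on their own.
    r, c = payoff[(row, col)]
    row_ok = all(payoff[(alt, col)][0] <= r for alt in ACTIONS)
    col_ok = all(payoff[(row, alt)][1] <= c for alt in ACTIONS)
    return row_ok and col_ok

equilibria = [(r, c) for r in ACTIONS for c in ACTIONS if is_nash(r, c)]
```

With these payoffs the only equilibrium is mutual escalation, even though mutual restraint would leave both players better off – which is exactly the trap: neither side can reach the better outcome by changing its own game plan first.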
I feel like I’ve barely scratched the surface of the topic, but this is about all I can write in one entry for now!