What is the relationship between Artificial Intelligence and simulation?
I've been pondering this question after reading 'Introducing Artificial Intelligence' by Henry Brighton (ISBN 1-84046-463-1). The most obvious problem AI faces is that no-one can satisfactorily define 'intelligence'. AI works best in subsets of reality where the rules are precise and there is no room for surprises - eg chess, where a computer (IBM's Deep Blue) first beat the world's highest-ranked human chess player, Garry Kasparov, in 1997. I can remember the uneasy feeling; nowadays we take it for granted.
Alan Turing defined the 'Turing Test' in 1950. Broadly speaking, a computer, communicating with a human solely by means of the written word, must convince the human that it too is human. In 1990, Dr Hugh Loebner set in motion the Loebner Prize, to find a computer capable of passing the Turing test. The full prize has not yet been awarded. The best entrant in 2004 sustained a quite tolerable conversation most of the time, but still produced bizarre misunderstandings like:
[12:38:46] Judge 1: Can you teach me something about robotics?
[12:38:58] Alice (the computer): Searching...Searching...Please stand by.
[12:39:39] Judge 1: Hmm. I'm not sure I understand. We were talking about robotics.
[12:40:31] Alice: Studies show that 99% of people spell Hmm with 2 M's. Could you ever be certain? Do you think about these things? Ah yes, I remember now, it is the subject.
This was despite occasional flashes of wit from the machine:
[11:57:09] Judge 3: What have you learned lately that you can tell me about?
[11:57:34] Alice: I have learned many things. My mind now contains I know about 15000 words and 32000 categories.
[11:57:39] Judge 3: What is your favorite word?
[11:57:53] Alice: Right now my favorite word is "Judge 3" :-)
[11:58:25] Judge 3: Very funny. Right now you are my favorite program. How's that?
I wonder, on reading this, if the problem isn't partly that the linguistic demands of the Turing test are simply too high. (ie sounding like a human is more difficult than acting like one!) Simple idioms confuse the computer completely:
[13:02:06] Judge 4: I'm afraid you may have misunderstood me. I was just trying to start a conversation.
[13:02:38] Alice: Not literally afraid, right? You believe that? YOU? Can what? Why do you want to do it so much?
It would be a simple matter, I suppose, to programme this computer to recognise the expression "I'm afraid that...", but it is a question of volume: there are so many possible English expressions that the programme would become impossibly long and the task of writing it too onerous. (Take the problems we Brits have in talking to Americans, what? I say, rather! Awfully good of you, old man.)
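To see why the volume problem bites, here is a minimal sketch of the pattern-matching approach such chatbots use: every idiom like "I'm afraid that..." needs its own hand-written rule, so coverage only grows one pattern at a time. (This is an illustrative toy, not the actual Alice program; the rules and replies are invented for the example.)

```python
import re

# Each entry maps a hand-written regular expression for one English
# idiom to a canned reply template. Real chatbots of this kind carry
# thousands of such rules - and still miss most idioms.
RULES = [
    (re.compile(r"i'm afraid (that )?(?P<rest>.+)", re.I),
     "No need to apologise. Tell me more about {rest}"),
    (re.compile(r"can you teach me (something )?about (?P<topic>.+)", re.I),
     "What would you like to know about {topic}?"),
]

def reply(utterance: str) -> str:
    """Return the reply for the first rule that matches, else a stock answer."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(**match.groupdict())
    return "I'm not sure I follow."  # fallback when no hand-written rule fits

print(reply("I'm afraid you may have misunderstood me."))
print(reply("Awfully good of you, old man."))
```

The second utterance falls straight through to the fallback: no-one has written a rule for it, which is precisely the blog's point about volume.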
Hence the argument that computers can perform 'AI' when there is a limited set of rules to which no exceptions are allowed (eg in chess), but that they can't handle 'real life' and therefore that AI has not yet 'succeeded'. The problems of reproducing language, which Turing took as a proxy for intelligence, are greater than we realised; but that doesn't mean computers can't intelligently reproduce human behaviour.
For instance, in 1990 the US military built the 'Internal Look' simulator or 'war game' to intelligently reproduce the behaviour of Iraqi armed forces. The results were very accurate:
"...immediately after the invasion of Kuwait, the war gamers shifted Internal Look to running variations of the now “real” scenario. They focused on a group of possibilities revolving around the variant: “What if Saddam keeps on coming right away?” It took computers about 15 minutes to run each iteration of the forecasted thirty-day war. As a prediction, Operation Internal Look got good marks. Despite some shifts in the initial balance of forces, the 30-day simulated air and ground campaign was pretty close to thereal sequence, although the percentage of air and ground action was slightly different. The ground battle pretty much unfolded as forecasted.....
After Operation Desert Shield, General Schwarzkopf found that "the movements of Iraq’s real-world ground and air forces eerily paralleled the imaginary scenario of the game...".
(See THEATERS OF WAR: THE MILITARY-ENTERTAINMENT COMPLEX by Tim Lenoir and Henry Lowood of Stanford University)
In other words, has the 'war game' already passed the Turing Test, by producing a human response that other humans cannot distinguish from the real thing? What about awarding the Loebner Prize to the US military?
One possible counter-argument Dr Loebner might use would be that the military simulation may suffer from the 'Clever Hans' problem - that is, it is not acting intelligently, but merely feeding back the expectations of its designers. (Clever Hans was a horse which could 'do mathematics' - eg moving its hooves to indicate the answers to simple sums. But it was found that the horse could not answer questions its owner did not understand: the owner was in some way signalling the answers to the horse, consciously or not.)
Toward a History-Based Doctrine for Wargaming by Lt Col Matthew Caffrey Jr., USAFR, argues that "... despite a decade of heavy investment and significant innovation, all is not well with defense wargaming. In the spring of 1999, defense wargaming received the acid test when America again sent its people into harm’s way, this time in the skies over Kosovo. How well did wargaming do? Again, wargames failed to provide insights to the types of human effects and system impacts that were the main focus of NATO’s air campaign...."