Technology
Does thought have to be an exclusively human activity? Ironically, every stage of the debate has helped artificial intelligence become a reality.
Build an artificial intelligence, a computer that thinks like a person, and - as every reader of science fiction knows - it soon wants to take over the world. The reality, of course, is very different. Just as robot-builders have dismally failed to reproduce a whole human body, so researchers in artificial intelligence (AI) have created only fragments of human intelligence. These creations are far stranger than the destructive artificial minds of science fiction, but also far more useful.
'By the end of the century one will be able to speak of machines thinking without being contradicted' - Alan Turing, 1950
The irony of artificial intelligence is that tasks that are hard for people
- like playing good chess, solving logic problems and juggling complex sets
of rules - are relatively easy for machines. But common-sense tasks - like
getting to work, holding a conversation, or avoiding danger - are still far
beyond the capability of even the smartest machines.
Artificially intelligent computers can beat a grandmaster at chess, but would
sit pondering their next move as the room burned down around them. In the
factory, artificial intelligences manage production schedules incomprehensible
to people - yet they couldn't get to work in the morning if they had to. Computers
"read" and route paperwork in banks and offices, but cannot understand a
child's first book.
Are such machines "intelligent" then? Certainly they fail by the criterion proposed by computer pioneer Alan Turing. Intelligence, he reckoned, is in the eye of the beholder. A machine
could be deemed intelligent only if it could hold a conversation with a person
(through a telex machine if necessary, to disguise the fact that the machine
lacked a body) and convince the person that it was human. No machine has
passed that test and none looks likely to for quite some time.
Make your move, robot: Chess is hard for humans to play well, but machines can beat most people easily. This robot is capable of dismissing a good club player using only a few seconds of computing time. Yet, despite its humanoid shape, it is less able to move about than a young toddler. So how intelligent is that?
But it is precisely because machines do not replicate human abilities that
they are so potentially useful. After all, there are plenty of people and,
should a shortage arise, there is a well-known technique for creating more.
In today's information economy, any unusual reasoning ability is valuable.
The extraordinary skills of computers in some sorts of reasoning are offset by equally extraordinary stupidity in other areas; but by building the right partnership between human and machine, it is possible to compensate for computer stupidity with human intelligence, and vice versa.
At the beginning of each era of its historical development, AI has thrown up at least one good new idea about the underlying roots of intelligence. By the end of the era the idea is found to be at best only a partial solution - but a partial solution with some potentially useful applications nevertheless.
The first stirrings of artificial intelligence
The first era started in the late 1950s, when Herbert Simon, a professor
at Pittsburgh's Carnegie Mellon University (who was later to win a Nobel
prize for economics), optimistically named his "General Problem Solver".
Simon and his colleague Allen Newell had spent hours getting students to talk
aloud as they solved simple reasoning problems. By analysing the mutterings,
they reckoned they had found the general principles underlying intelligence:
in short, they thought it was all about finding ways to sift through all
the known possibilities to arrive at the right answer.
Smart machines go to the movies
Artificial Intelligence on a bad day: The sentient computer in 2001: A Space Odyssey, HAL (centre), is driven mad by conflicting orders. Today, it is more likely for a human not to understand an AI than the other way around.
Stand up for robot rights: Robbie (right) was star of the 1950s sci-fi classic Forbidden Planet. Able to drive a car, serve drinks and hold a pleasant conversation, he would easily have passed the Turing test. Although he opened shops and had his own series, Robbie was still a man in a suit. We are never likely to see his like.
Simon and Newell believed a machine could carry out this procedure, which
they called "means-end analysis". If a mechanism could generate, step by
step, every possible solution, it would effectively reproduce intelligence.
For their machine to solve a problem, it first had to figure out how to know
when it had found the answer. One type of problem studied by Simon and Newell
was the cryptarithmetic puzzle, for example "SEND + MORE = MONEY". You've
got the solution when you know which digits substitute for which letters
in the equation.
Machines adapt to changes of plan easily - but they fail other tests that humans pass
Planning by computer means you can adapt to any change instantly. If a shipment is held up at the docks (above) the computer can reschedule production. The same AI system can make sure best use is made of warehouse space (top).
Then the General Problem Solver would look for means to achieve that end. It might begin by trying to substitute one for S and see if that could be the first step towards a solution. If it wasn't, the machine would try two, and so on. Having tried out all the means, the machine selects only the ones that meet its goals.
The problem is that it can, and typically does, take so long for the computer
to find a solution as to render it useless. A computer searching for a way
to get to work on time is wrong if it arrives at lunchtime - even if it has
found a more cunning way than any previously devised. And, as can be shown
mathematically, there are some problems that would require at least the age
of the known universe for a computer to solve by the means-end method.
There are, however, techniques for narrowing down the search. In our
cryptarithmetic problem, for example, even a fairly simple-minded computer
could be programmed to spot that the letter to start with is M, as it is
the first digit of two of the terms, one of which is the answer. (It must
equal one as, no matter what each digit is, two of them added together will
not equal more than 18: so only a one will ever be carried over.)
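As a rough illustration of the kind of exhaustive, goal-tested search described above, the sketch below solves "SEND + MORE = MONEY" by trying digit assignments until the goal test succeeds. It is only a toy in Python, not Simon and Newell's program; the hard-coded rule that M must be 1 is the pruning observation from the text.

```python
# A minimal sketch of exhaustive search with a goal test, applied to the
# SEND + MORE = MONEY puzzle. The M == 1 shortcut is the pruning rule from
# the article; everything else is brute force.
from itertools import permutations

def solve_send_more_money():
    letters = "SENDMORY"          # the eight distinct letters in the puzzle
    for digits in permutations(range(10), len(letters)):
        sub = dict(zip(letters, digits))
        if sub["M"] != 1:         # pruning: M can only be a carried-over 1
            continue
        if sub["S"] == 0:         # a leading digit cannot be zero
            continue
        send  = int("".join(str(sub[c]) for c in "SEND"))
        more  = int("".join(str(sub[c]) for c in "MORE"))
        money = int("".join(str(sub[c]) for c in "MONEY"))
        if send + more == money:  # goal test: have we found the answer?
            return send, more, money
    return None

print(solve_send_more_money())    # (9567, 1085, 10652)
```

Even with that single shortcut, the program still churns through well over a million candidate assignments - a hint of why unguided means-end search scales so badly.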
For a few problems, a clever search will outdo even the best human performance. Chess is probably the best example. The computers that now routinely beat grandmasters have no grasp of strategy at all. Unlike humans, they can analyse several million possible chess positions a second, and score each according to some simple measure. In this way the computer can look six, eight, even ten moves into the future. In chess, the key to narrowing the search is to assume that the opponent will always make the best move available. Once the computer finds the replies that would leave it worse off, it can dismiss any move that allows the opponent to make those replies.
This facility for sifting through vast realms of possibilities for the best combination of steps towards a given goal gives machines many planning applications. Computers helped with the logistics of moving men and equipment into place for the Gulf war. Another military use for AI is organising maintenance schedules for aircraft. Black and Decker uses AI to schedule its production of tools.
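The "assume the opponent always makes its best reply" rule described above is, in essence, the minimax principle. Here is a minimal sketch of it over an abstract game tree; the move generator, scoring function and depth limit are illustrative assumptions, not the machinery of any real chess computer.

```python
# A minimal minimax sketch: look a fixed number of moves ahead, score the
# resulting positions, and assume the opponent always picks the reply that
# is worst for us.
def minimax(position, depth, maximising, moves_fn, score_fn):
    """Return the best score reachable from `position`, looking `depth` moves ahead."""
    moves = moves_fn(position)
    if depth == 0 or not moves:          # horizon reached, or no legal moves
        return score_fn(position)        # evaluate with a simple measure
    if maximising:                       # our turn: pick the strongest move
        return max(minimax(m, depth - 1, False, moves_fn, score_fn) for m in moves)
    return min(minimax(m, depth - 1, True, moves_fn, score_fn) for m in moves)

# Toy illustration: "positions" are numbers, "moves" nudge them up or down.
toy_moves = lambda p: [p + 1, p - 2] if abs(p) < 4 else []
print(minimax(0, 4, True, toy_moves, score_fn=lambda p: p))
```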
The big advantage of planning by computer is the ability to keep track of
interactions between the various parts of a production plan. If, say, a shipment
of parts is delayed, the computer can quickly reschedule production to make
sure machines do not sit idle. Despite their successes at chess and planning,
today's computers cannot solve most other kinds of problems. Computers cannot
play the Japanese game Go, for example, because at each turn there are many
more moves available to each player than in chess - the sheer number of
possibilities quickly becomes overwhelming. To overcome these difficulties,
researchers in the 1960s and 1970s began trying to mimic the ways humans
use knowledge to reason their way directly to an answer, instead of detouring
through all sorts of unlikely possibilities.
Expert Systems: questions and answers
The classic example of what expert systems do is provided by medical diagnosis computers, which re-create the reasoning doctors use in diagnosing and prescribing a cure. An early expert system was MYCIN, developed in the early 1970s by Ed Shortliffe at Stanford University, which specialised in diagnosing bacterial infections and prescribing antibiotic cures.
Diagnosing disease involves mastering a collection of fairly simple rules. One rule might be: "If the patient has a fever, consider the following list of diseases..." Another might be: "If the patient has white spots on the throat, consider..." Whereas doctors laboriously learn hundreds of such rules in medical school, a computer can be programmed in a jiffy to apply them.
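A rough sketch of that rule-matching idea appears below. The symptoms, conditions and advice are invented placeholders, not MYCIN's actual medical knowledge, but they show how quickly such if-then rules can be applied once written down.

```python
# A minimal sketch of if-then rule matching. Each rule pairs a set of
# conditions with a piece of advice; a rule "fires" when all its conditions
# are among the reported symptoms. The rules themselves are invented.
RULES = [
    ({"fever", "white spots on throat"}, "consider strep throat"),
    ({"fever", "stiff neck"},            "consider meningitis"),
    ({"cough", "fever"},                 "consider influenza"),
]

def diagnose(symptoms):
    """Return the advice of every rule whose conditions are all present."""
    return [advice for conditions, advice in RULES if conditions <= symptoms]

print(diagnose({"fever", "cough", "headache"}))   # ['consider influenza']
```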
'When an expert system messes up, it does so big time - they have to be supervised as well as specialised'
Machines that tell the human body what's wrong with it: Computers are more accurate than human doctors at diagnosing illness but they need to be constantly fed information - and mistakes can happen.
Neural Networks: from mind to brain
Neural networks are the closest thing to voodoo yet created by the "white magic" of computer science. Data goes in and information comes out, but what happens in between is a bit of a mystery. Also, they "learn".
'Because it learns in mysterious ways and cannot explain itself, a neural net must be taken on trust - or not at all'
A ghost at the wheel: One of the most impressive things a neural network can do is drive. An army truck developed at Carnegie Mellon University learnt to copy a human driver - road conditions are input by video and laser rangefinder.
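To give a flavour of the "learning" involved, here is a toy sketch of a single artificial neuron adjusting its weights from examples. It is purely illustrative - the network that drove the Carnegie Mellon truck was far larger and trained rather differently.

```python
# A minimal sketch of learning from examples: a single perceptron nudges its
# weights whenever it gets an answer wrong. The task (learning OR) and the
# training settings are illustrative assumptions.
import random

def train_perceptron(examples, epochs=50, lr=0.1):
    w = [random.uniform(-1, 1) for _ in range(len(examples[0][0]) + 1)]  # bias + weights
    for _ in range(epochs):
        for inputs, target in examples:
            activation = w[0] + sum(wi * xi for wi, xi in zip(w[1:], inputs))
            output = 1 if activation > 0 else 0
            error = target - output                       # how wrong were we?
            w[0] += lr * error                            # nudge the bias
            w[1:] = [wi + lr * error * xi for wi, xi in zip(w[1:], inputs)]
    return w

# Learn the OR function from its four input/output pairs.
OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(train_perceptron(OR))   # data went in, numbers came out - the "mystery" in between
```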
While neural networkers have tried to re-create the human brain, and some programmers have created expert systems (see boxes above), others have been exploring the abstract realm of mind. Since Greek times, philosophers have worked to create hard rules of reasoning, called logic. For most of its history, logic's goal has been to discover largely abstract eternal truths through formally constructed arguments.
The advent of artificial intelligence inspired people to try to use formal
arguments to describe the messy world around them. Two developments helped
greatly in bringing logic into the real world. The first, which began early
in the 20th century, was modal logic.
Whereas classical logic assumes a statement is either true or false for all eternity, modal logic can cope with statements that may be true at one time but false at another - for example, "John Major is Prime Minister", which is true now, but won't always be.
The second development, non-monotonic logic, was largely developed
by artificial-intelligence researchers in the 1970s and 1980s. Non-monotonic
logic addresses the related problem of incomplete information. In day-to-day
life, we often make useful assumptions that later turn out to be wrong. "Birds
fly" is a good assumption, even if you later discover penguins and roast
turkeys.
In modal logic, if an assumption is always false - for example "John Major
is a woman" - then the whole system collapses and all its conclusions fall
apart. Non-monotonic logic, by contrast, copes gracefully if the bird turns
out to be flightless later on. Any conclusions that depend on the erroneous
assumption are withdrawn and the rest are still true.
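A minimal sketch of that withdraw-and-keep behaviour, with invented facts and rules: the conclusion that depends on the "birds fly" default disappears when the default fails, while an unrelated conclusion survives.

```python
# A minimal sketch of non-monotonic (default) reasoning: conclusions are
# re-derived whenever the facts change, so anything resting on a failed
# default is quietly withdrawn. Facts and rules are invented placeholders.
def conclusions(facts):
    beliefs = set(facts)
    if "is a bird" in beliefs and "is flightless" not in beliefs:
        beliefs.add("can fly")                      # default: birds fly
    if "can fly" in beliefs:
        beliefs.add("can escape the cat")           # depends on the default
    if "has feathers" in beliefs:
        beliefs.add("keeps warm")                   # independent of the default
    return beliefs

print(conclusions({"is a bird", "has feathers"}))
# includes "can fly" and "can escape the cat"
print(conclusions({"is a bird", "has feathers", "is flightless"}))
# both flying conclusions withdrawn; "keeps warm" survives
```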
Every time a computer fails in a new job we learn something more about human reason
AI and the law: AI helps judges deal with petty criminals - freeing courtrooms for important cases
Will computers ever be able to think like a human? The processors of a computer's chip-brain are made up of hundreds of "logic gates" which only understand 0 or 1. An AND gate outputs 1 only if both its inputs (A and B) are 1; an OR gate outputs 1 if either input (A or B) is 1. This "Boolean" logic - from the 19th-century English theorist George Boole - is powerful. But now computer scientists are developing non-binary alternatives. "Multivalued" logic works with inputs and outputs that take more than two values. Fuzzy logic operates across a spectrum from "no", through "possibly" to "almost certainly" and "yes". The prospect is for computers that think in a less black-and-white way - but that doesn't necessarily mean more like a human being.
Logic that washes: Bertrand Russell (left) helped end the rule of abstract classical logic. Now there are many different "logics" - fuzzy logic is used to program advanced washing machines (right).
Thinking like a human
These new thinking tools spurred logicians to try to capture more of the
subtleties of real life in their formal statements. This is no mean feat.
To reduce humans' instinctive understanding of the world to formal logic,
logicians must cope with a variety of philosophical conundrums. Wood, for
example, is still wood when it is cut up into bits, but a cut-up table is
just so much junk.
What other things are like wood, and how do we distinguish them from things
like tables whose identity seems more fragile? Such questions have added
fuel to the larger debate about whether the real world - or indeed human
intelligence - can be reduced to logical terms. Some argue that the world
is fundamentally illogical, and that to reason as humans requires a mass
of arbitrary assumptions and leaps of reasoning derived from the experience
of living in a body, in time, in the world. To which the logicians reply
that although no system of formal reasoning can capture the subtleties of
human thought, computers would be no use at all without the guarantee
such a formal system provides.
It is out of the see-sawing progress of this argument over computers' abilities
that progress in the field of artificial intelligence emerges. Each advance
in formalising some aspect of human reasoning (or the world) enables computers
to do some new job. And with each new task computers take on, new limitations
become apparent.
Even flexible non-monotonic logic treats a statement about the world in terms
of either true or false. In practice, however, many things are "sort of"
true. Lotfi Zadeh, a professor at the University of California at Berkeley,
developed a kind of reasoning that uses "fuzzy"
terms. Japanese researchers promptly adopted the new techniques to improve
controls for washing machines. Instead of trying to define just how dirty
is "dirty", fuzzy controls enable a machine to treat, say, an 80 per cent
full load of clothes as if they are about 40 per cent dirty.
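A minimal sketch of the idea follows, with an invented membership curve and wash-time rule rather than anything drawn from a real machine: the controller never decides whether the load "is dirty", it simply works with a degree of dirtiness.

```python
# A minimal sketch of fuzzy control: sensor readings are mapped to degrees
# of truth between 0 and 1, then blended into a control output. The curve
# and weights below are illustrative assumptions.
def dirtiness(murkiness):
    """Map a water-murkiness reading (0 clear, 1 opaque) to a degree of 'dirty'."""
    return max(0.0, min(1.0, (murkiness - 0.2) / 0.6))   # 0 below 0.2, 1 above 0.8

def wash_minutes(murkiness, load_fraction):
    """Blend the 'dirty' and 'full' degrees into a wash time between 20 and 60 minutes."""
    degree = 0.7 * dirtiness(murkiness) + 0.3 * load_fraction
    return 20 + 40 * degree

print(round(wash_minutes(0.5, 0.8)))   # a "sort of dirty", fairly full load -> ~44 minutes
```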
Humans learn from experience: whatever worked in the past may work in the
future. But "case-based" reasoning enables one person to benefit from the
experience of others. When a problem is solved, a computer files away the
solution and indexes it by a description of the situation. Faced with a new
problem, you enter a description and see if anything like it has been tackled
before.
Compaq, a maker of personal computers, uses this technique to keep its technical
support staff abreast of what can go wrong.
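A minimal sketch of that retrieve-the-nearest-case step, with an invented case library rather than Compaq's actual system: a new problem is matched against stored descriptions and the closest past solution is suggested.

```python
# A minimal sketch of case-based reasoning: solved cases are filed under a
# description of the situation, and a new problem retrieves the case whose
# description overlaps most. The case library is invented.
CASE_LIBRARY = [
    ({"no power", "fan silent"},        "check the power supply unit"),
    ({"no display", "beeps on start"},  "reseat the memory modules"),
    ({"overheats", "fan noisy"},        "clean dust from the heatsink"),
]

def closest_case(description):
    """Return the stored solution whose description overlaps most with the new one."""
    overlap = lambda case: len(case[0] & description)
    best = max(CASE_LIBRARY, key=overlap)
    return best[1] if overlap(best) else "no similar case on file"

print(closest_case({"no display", "fan noisy"}))   # suggests the closest past fix
```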
Building on research by Imperial College, London, courts in New York and
Singapore now use AI in straightforward cases. Not only do the computers
speed up justice, they also make intelligent suggestions about rehabilitation.
Each addition to the grab-bag of artificial intelligence techniques broadens
the range of tasks computers can perform; it also pushes us towards a fundamental
philosophical question.
Will all these pieces of intelligence ever add up to the mental equivalent of a human being? Needless to say, philosophers debate the question heavily.
The litmus test of philosophical opinion is a thought experiment proposed by the thinker John Searle. Imagine a man who lives in a tiny enclosed room. He speaks only
English, but in the room is a huge Chinese dictionary. Each morning, someone
pushes under the door a set of tiles, also written in Chinese. The man's
job is to rearrange the incomprehensible symbols according to the patterns
given in the dictionary, and then each evening, to pass the tiles back through
the door. Is the man's work intelligent?
Some argue that the man's work cannot be considered intelligent because he has no inkling of the meaning of the tiles - and therefore symbol-processing computers cannot be intelligent either. Others reply that humans are overconfident when they argue that they "understand" words and sense impressions any more than a computer does. The debate is a deep and intriguing one, but it may prove irrelevant in the end. From the point of view of the person outside the room, after all, it doesn't really matter if the work going on inside is intelligent or meaningful - so long as it is useful.
John Browning
Mar95 p30