Frontiers of Complexity
"Life is not some sort of essence added to a
physico-chemical system,but neither can it simply be described in ordinary
physico-chemical terms. It is an emergent property
which manifests itself when physico-chemical systems are organised
and interact in particular ways." These are the words of the former Archbishop
of York,John Habgood,a one-time physiologist who believes that the scientific
world-view afforded by complexity is in many ways a more theologically
comfortable notion than old ideas about vitalism. In his address to the 1994
annual meeting of the British Association for the Advancement of Science,Habgood
voiced the opinion that the creative work of God can be found in the growing
complexity of organisation during the development of organisms: "Indeed,there
is a hint of this in the very first words of the first chapter of Genesis
where God is seen as bringing order out of chaos."
In the animal kingdom there is a similar debate over where to draw the line at which consciousness and intelligence begin and the actions of unthinking automata end, although it certainly depends on the complexity of the nervous system involved. Again, as the Archbishop of York put it, "One of the long-term implications of the acceptance of evolution is that we all see life as a continuum, therefore there is no precise break between other animals and ourselves. ... The more we become aware of some very human-like capacities in animals, such as the higher apes, I think the more worried one is that they may have something at least beginning to approximate to a consciousness." For similar reasons, artificial life, artificial consciousness, and artificial intelligence cannot be assigned the simple sharp boundaries dictated by a reductionist world-view. [p13-14]
Mathematics is, after all, the only way we know of to carry out rigorous arguments, to extract infallible consequences from a set of statements. [p19]
Today, those who pursue the study of complexity are followers of a secret art in which colour, form, and motions of the universe are painted in atoms of logic. And, like their post-Impressionist forebears, these logical pointillists expect that a greater whole - the essence of the world - will emerge from the discrete elements of their mathematical code. Recent developments, notably in mathematical logic, are increasingly testing our faith in mathematics, and making us question whether we can use it to describe all forms of complexity. Fortunately, we will show that these problems are more theoretical than practical. [p19] Why is it that mathematics is so successful at describing nature? We don't know. It seems that the greatest engine of cultural change - the scientific world view - rests on a mathematical foundation that, in many respects, is ultimately religious. [p21]
Ironically, Chaitin's inspiration came from the tenth problem listed by Hilbert: is there a systematic way of deciding whether a given Diophantine equation has a solution in whole numbers such as 1, 2, and 3? Diophantine equations, named after the Greek mathematician Diophantus, are those that only handle whole-number quantities, a restriction with important consequences. If we want to find the solution of an equation in which quantity A squared plus quantity B squared equals one, we would have an infinite number of solutions. However, if we restrict A and B to whole numbers, there are only two: A is 1 and B is 0, or vice versa. In 1970, an answer to Hilbert's problem was provided by Yuri Matijasevich at the Steklov Institute in Leningrad.65 In a two-page paper that was the tip of an iceberg of mathematical endeavor, Matijasevich showed that no such method to find a whole-number solution to an algebraic equation exists. In fact, he demonstrated that Hilbert's tenth problem was directly equivalent, in a deep way, to the halting problem for Turing's machine. Chaitin exploited this profound relationship to use a Turing-type approach for tackling a variation of Hilbert's tenth problem. Instead of asking whether an algebraic equation has a whole-number solution, he asked whether it has a finite or an infinite number of whole-number solutions. To do this, he first specified a universal Turing machine capable of handling whole numbers. The result was a "universal Diophantine equation," although elephantine might be a more accurate adjective - it contains 17,000 variables and can only be crammed into 200 pages of text. The equation, in effect, represented a computer that, given an infinite amount of time, could calculate whether a program will halt. Instead of exploring the halting problem one program at a time, Chaitin considered the ensemble of all computer programs and investigated the probability of any one such program halting for a family of universal Diophantine equations, each created by altering a single parameter. This probability is expressed by an infinitely long binary number called omega. Each equation has finitely many solutions if a particular bit (binary digit) of the omega number is zero and infinitely many if it is a one. "Each equation in the family is perversely constructed. Whether it has a finite or infinite number of solutions is so delicately balanced that there is no reason it should come out one way or another," noted Chaitin. Whereas Turing had found that the halting problem could only be resolved in an infinite time (in other words, it is undecidable by finite means), Chaitin found that the halting probability is algorithmically random: expressed as a binary number, its series of ones and zeroes is indistinguishable from a series of heads and tails obtained by tossing a coin. That means that the answers to the questions about these elephantine equations must be random, too. Chaitin expanded on the coin-tossing analogy to emphasize just how accidental and random "simple" integer mathematics can be. "This halting probability is maximally unknowable," he said. "Each outcome of a coin toss tells you nothing about any future outcomes or any past outcomes. It is exactly the same way with knowing whether each of my equations has a finite or an infinite number of solutions. The answer is an irreducible mathematical fact, not connected with any other mathematical fact." Surprisingly, even arithmetic possesses random elements.
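For reference, Chaitin's halting probability has a compact standard definition (not spelled out in the excerpt above): summing over every self-delimiting program $p$ that eventually halts on a universal machine $U$,

$$\Omega = \sum_{p \,:\, U(p)\ \mathrm{halts}} 2^{-|p|}, \qquad 0 < \Omega < 1,$$

where $|p|$ is the length of program $p$ in bits. The digits of the binary expansion of $\Omega$ form the algorithmically random sequence of ones and zeroes described above.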
As Chaitin put it, "God not only plays dice in quantum mechanics, but even with the whole numbers!" This finding also has a deep resonance with the discovery in science of deterministic chaos, a seeming oxymoron we will return to later. It also has a corollary that alarms pure mathematicians: sometimes the only way to explore mathematics is by trial and error - to conduct experiments. [p32]

THE MISSING LINK

Early thinking about how life arose did not distinguish between the origin of life and the origin of the huge diversity of forms that walk and grow on the planet. God, by creating a world full of living things, provided both simultaneously. In the nineteenth century, though, the latter problem was solved by Charles Darwin in a way that had no recourse to God, although it shed no light on the origin of life. Darwin came to the conclusion that all contemporary species have a common ancestry dating back over many eons. The origin of species lies in variation through random mutation, winnowed by selection through the competition for limited resources. Molecular biology has added powerful support to Darwin's ideas by revealing how all living things use the same type of "genetic programming". [p195]
THE RULES OF BEHAVIOR

Just as living things are in themselves patterns in space and time, so communities of creatures also display structures on a larger scale. There can be spatiotemporal organization, ranging from the waxing and waning of flour beetle populations81 to the emergence of cooperative behavior between species in the struggle for survival. Structure arises in the way animal communities organize themselves: many examples of such organization spring to mind, none more graphic than those within the order of social insects known as the Hymenoptera, which includes bees, wasps, and ants (termites, though similarly social, belong to a separate order). In colonies of these insects one finds a plethora of intriguing phenomena: the presence of infertile workers and soldiers who heroically sacrifice themselves for the greater good of the colony in construction work and defense activities. This seems like "altruism," if one expects that life is ultimately a ruthless struggle of the individual, driven by its selfish genes, to survive. Indeed, an explanation for the existence of this kind of cooperation was one of the biggest problems for Darwin himself. However, we now recognize that these examples can be understood by viewing such colonies as "superorganisms" (rather like the slime mould), with each individual sharing the same pool of genes. We will not pursue these cases of kinship-based cooperation further, preferring instead to concentrate on cooperation between genetically unrelated individuals, to show how reciprocal strategies of behavior can emerge spontaneously as a result of the same blind driving forces of survival.82 This subject has been investigated using a branch of mathematics that today is called game theory. It aims to determine the strategies that individuals or organizations should adopt in their search for rewards when the outcome is uncertain and depends crucially on what strategies others adopt. Von Neumann is the founding father of this subject, which weighs the risks and benefits of all the strategies in a game of war, economics, survival, or whatever. As with much of his other work, he was keen to use mathematics for analyzing what appeared at face value to be a nonmathematical subject. His first paper on the theory of games appeared in 1928. While at Princeton University, he collaborated in the late 1930s with the mathematical economist Oskar Morgenstern. In a style typical of von Neumann, this work not only led to important applications in economics, but also enriched pure mathematics through the advances simultaneously made in combinatorics (the theory of arrangements of sets of objects). Von Neumann and Morgenstern published their now classic tome on game theory, Theory of Games and Economic Behavior, in 1944. It is now increasingly apparent that the same principles can be applied to understand how cooperation emerges within human societies, "in a world of self-seeking egoists - whether superpowers, politicians, or private individuals - when there is no central authority to police their actions."83 These principles therefore have relevance to cooperation between commercial companies, between individuals inside organizations, within government, in politics, economics, and international relations, as well as in biological science proper.84 Economists were fascinated by game theory because it offered to explain mathematically why Adam Smith's invisible hand can apparently fail to deliver the collective good. The theory helps us to understand how companies make business decisions in competitive markets.
Political scientists, too, picked up on game theory because it shows how "rational" self-interest can make everybody worse off. Game theory was introduced to biologists in the 1970s, mainly through the work of John Maynard Smith. Robert Axelrod, a professor of political science and public policy at the University of Michigan, is a leading worker in the field. He has modeled interactions between individuals on the basis of a simple game called the Prisoner's Dilemma.85 The idea of this game is to simulate the conflicts that exist in real life between the selfish desire of each player to pursue the "winner-takes-all" philosophy and the necessity for cooperation and compromise to advance that selfsame need. Like so many complex problems we have previously encountered - finding the lowest energy state of a spin glass, learning in neural networks, the traveling salesman's problem - it is an example of an optimization problem that must be solved in the presence of conflicting constraints. It works like this. Two individuals can choose to cooperate with one another or not. If both cooperate, each receives a reward of, say, three points. If one cooperates and the other does not, the defector gets a bigger reward, say five points, while the reward for the "sucker" is nothing. Finally, if both defect, each gets a small reward, one point. Even though both players gain if both cooperate, there is always a temptation to defect, both to maximise profit and to avoid being suckered. That is the dilemma. It is easy to put flesh and bones on this game. Imagine that you and a friend have been caught with a stolen painting, spattered with blood. The police rightly suspect both of you of having committed another, more serious offense, of which they have no proof. You are being held in separate cells and not allowed to contact each other. A detective offers you a deal: if you inform on your friend and reveal his other crime, you will not be charged with stealing the painting. It is reasonable for you to assume that the police have offered your friend the same deal. What to do? If each of you refuses to give evidence, you both will be charged only with the lesser crime - a reasonable result. If each one informs on the other, the two of you will go to jail for the more serious offense, on the evidence of each other's testimony - a bad result. Here comes the dilemma: if you alone stay silent, you will be punished for both offenses while your accomplice walks free. The Prisoner's Dilemma exercises mathematicians, social scientists, and biologists because it illustrates a widespread problem: how individual ambition can lead to collective misery. If the two players are never going to meet again, there is no reason for them to cooperate. But in real-world situations, which range from traffic jams to global wars, it is often more likely that they will encounter one another in the future. Consequently, different strategies emerge. Robert Axelrod held a worldwide tournament for computer programs to play the Prisoner's Dilemma in an attempt to uncover the best strategy. He made the fourteen entries - some of which used very complex strategies - compete against one another.
"To my considerable surprise," said Axelrod, "the winner was the simplest of all the programs devised, tit-for-tat."86 Created by Anatol Rapoport, a psychologist and game theoretician at Toronto University, the tit-for-tat strategy is very simple: cooperate in the first round, and then do what-ever your opponent does in successive rounds. It is a nice strategy, since it signals willingness to cooperate at first, and then retaliates whenever the opponent defects. Moreover, it has the property of "forgiveness" in that it does not bear a grudge beyond the immediate retaliation, thereby perpetually furnishing the opportunity of establishing "trust" between opponents: if the opponent is conciliatory, it forgives, and both reap the greater rewards of cooperation. Finally, it is not too clever. Highly complex strategies are incomprehensible: if you appear unresponsive, your adversary has no incentive to cooperate with you. Tit-for-tat's great success is its simplicity. Axelrod circulated the results and solicited entries for a second round that saw sixty-two entries from six countries, including some very elaborate programs. Tit-for-tat was again sent in by Anatol Rapoport. Again it won. "Something very interesting was happening here," remarked Axelrod.
After thinking about the evolution of cooperation in a social context, Axelrod realized that the findings also had implications for biological evolution, and he collaborated with the Oxford University biologist William Hamilton to investigate them.87 In many scenarios, the same two individuals may meet more than once. If an individual has a sufficiently powerful brain to recognize another individual from a previous interaction, and remembers some of the previous outcomes, then the strategic situation becomes one known as the iterated Prisoner's Dilemma. The strategies now allow for the development of rules that take into account the history of the interactions of the two individuals in the game to date. Early in the 1970s the sociologist and former lawyer Robert Trivers of Harvard University had suggested that reciprocation of this form was the chief way in which animals not sharing the same gene pool achieve cooperation.88 His discussion included the Prisoner's Dilemma; symbioses in which one organism, such as the wrasse, cleans another, such as the grouper; the warning calls of birds; and reciprocal altruism in human societies, which may be employed to avert possible revenge. One particularly colorful example was provided by the Bushmen of the Kalahari, who have a saying to the effect that "if you wish to sleep with someone else's wife, you get him to sleep with yours, then neither of you goes after the other with poisoned arrows."89 [p222-225]
Among those present was Richard Dawkins, the Oxford zoologist. He discussed how his Blind Watchmaker program could evolve "creatures" displayed on the screen of his Apple Macintosh personal computer. "Borrowing the word used by Desmond Morris for the animal-like shapes in his surrealistic paintings, I called them biomorphs," he explained.19 "My main objective in designing Blind Watchmaker was to reduce to the barest minimum the extent to which I designed biomorphs. I wanted as much as possible of the biology of biomorphs to emerge." The biomorphs were generated by strings of computer code, rather as bodies are generated by genes. The computer carried out minor changes ("mutations") in the code describing a biomorph, and displayed the range of body shapes that resulted. Being unable to select biomorphs according to how well they performed in an environment, Dawkins picked some for aesthetic reasons, and then bred new generations from them. After a few generations, surprisingly life-like biomorphs resulted. "I was genuinely astonished and delighted at the richness of morphological types that emerged before my eyes as I bred," he remarked. Dawkins used his program to show how a "blind watchmaker" could produce the diversity of living things, without recourse to God or a grand designer. Dawkins, following in Darwin's footsteps, argued that the intricate design of the human eye could result from evolution by natural selection, through the interplay of chance and competition. This was elegantly demonstrated in later work by Dan Nilsson and Susanne Pelger at Lund University in Sweden, who showed that if selection always favors an increase in the amount of visual information processed, a light-sensitive patch of tissue will gradually turn into a focused lens eye through continuous small improvements of design over a few hundred thousand generations.20 The study suggests that the evolution of something as complex as the eye could, in theory at least, have taken place in less than a million years, an eyeblink in terms of the vast span of geological time.21 Dawkins' biomorphs provide a striking illustration of how a series of random mutations can turn a simple structure into a life-like object. However, they are still a product of unnatural selection. It was a "god" - Dawkins - who provided the selection pressure that led to complex shapes appearing, rather than open-ended evolution through competition with other objects in the environment. As we have repeatedly stated, living things have an innate ability to evolve by natural selection, that is, via "survival of the fittest." Those species best optimized to perpetuate themselves in the complex but finite environment furnished by all other species and energy resources will be the ones that survive. Less than optimal creatures will eventually die out. In Chapter 4 we saw that, couched in these terms, evolution is rather like the many other hard optimization problems we discussed there, with the added complication that the fitness landscape (valleys and mountains) is itself coevolving, owing to the individual struggle of all other species to survive. This means that evolution follows highly nonlinear dynamics, involving massive feedback loops, leading to a system that arguably represents the apotheosis of complexity. Nevertheless, given the sheer power of modern computers, we can be optimistic that the secrets of biological evolution may be simulated by computational processes.
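The breed-and-select loop at the heart of the program is easy to sketch. Everything below is our own illustration, not Dawkins' code: a numeric "genome" stands in for the genes that drew his branching shapes, and a scoring function stands in for his eye.

```python
import random

# A toy breed-and-select loop in the spirit of the Blind Watchmaker:
# mutate a parent genome, display (here: score) the litter, pick one,
# and breed again. Genome size and scoring are illustrative stand-ins.

def mutate(genome, step=1):
    child = genome[:]
    i = random.randrange(len(child))
    child[i] += random.choice([-step, step])  # one small "mutation"
    return child

def breed(parent, litter=8):
    return [mutate(parent) for _ in range(litter)]

def aesthetic_score(genome):
    # stand-in for the human breeder's aesthetic choice
    return sum(abs(g) for g in genome)

parent = [0] * 9  # a small genome of numeric "genes" (illustrative size)
for generation in range(20):
    parent = max(breed(parent), key=aesthetic_score)
print(parent)
```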
Recall John Holland's genetic algorithms (GAs), discussed in Chapter 5, which were inspired by Darwinian ideas. In the first twenty or so years following their creation, GAs were largely used to solve complex problems within the inanimate world. [p246-247]
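For concreteness, here is a bare-bones genetic algorithm in the spirit of Holland's scheme: bit-string genomes, selection of the fitter half, crossover, and occasional mutation. The toy objective (maximizing the number of 1-bits) and all parameters are our own illustrative choices, not Holland's own formulation.

```python
import random

# Minimal genetic algorithm: evolve bit-strings toward all ones.

def fitness(genome):
    return sum(genome)                      # toy objective: count the 1-bits

def crossover(a, b):
    cut = random.randrange(1, len(a))       # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.01):
    return [1 - g if random.random() < rate else g for g in genome]

def genetic_algorithm(pop_size=50, length=32, generations=100):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]        # keep the fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(fitness(genetic_algorithm()))  # approaches 32 as the population converges
```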
Another splendid virtual creature - a shoal of them, to be precise - can be found in the Department of Computer Science at the University of Toronto. There, one can glimpse a striking "tank" of "virtual fish." The fish offer all the advantages of the marine world without any worries about feeding, cleaning out the tank, or disposing of the occasional victim of disease and aggression. To enjoy the fish, all you need is a high-powered Silicon Graphics workstation worth a few tens of thousands of dollars and a copy of the "Artificial Fish World" program. The simulations, designed by Demetri Terzopoulos working with doctoral students Xiaoyuan Tu and Radek Grzeszczuk, emulate real fish as closely as possible, capturing their form, motions, and behavior.53 The fish swim gracefully through water, scatter when pursued by a leopard shark, and compete for morsels of food. They even produce elaborate courtship displays. And yet they do not exist physically. Each is described by an individual computer program nested within a larger program, which generates a simple underwater ecosystem. "We have demonstrated realistic-looking artificial fish that are capable of some astonishingly lifelike behaviors," said Terzopoulos.54 (See color plate 10.) To write the program, the Canadians first used images of the real thing to give the fish coloration and texture. Next they gave the fish "brains," rules patterned after the real thing that control their twelve muscles, and "eyes" that enable them to perceive and react to their surroundings. The program took account of the mass and elastic properties of each fish, modeling it so that it is able to deform as it swims through simulated water. To coordinate the complex action of all the muscles, the fish learn to swim "pretty much the same way a baby learns to walk," said Terzopoulos. Each fish tries random combinations of muscle actions, using an algorithm to refine their use. With the help of simulated annealing, the best combination is chosen on the basis of speed and, most important, swimming efficiency. After ninety annealing steps a virtual leopard shark hardly moves at all, because its muscles twitch randomly. After several thousand such steps, it can swim gracefully. "What comes out is very, very natural," Terzopoulos said, "what ichthyologists call caudal locomotion because it depends mostly on the rear, caudal, fin." And, just as the undulating swimming motion is not programmed but emerges naturally, so the behavior emerges from simple rules: the researchers program each fish's affinity for darkness, coolness, and schooling, plus motivations such as its level of hunger, fear, or desire to mate. "Then we can start to develop predators and prey," explained Terzopoulos, "where the prey form schools, take evasive action, and scatter - as real fish do." To model the elaborate courtship rituals found in the real world, the Toronto team studied the literature. "For some fish, the female displays an ascending behavior and the male goes underneath and nuzzles her belly," he said. "There are also courtship behaviors where the female and male circle around, chasing each other's tail." To create these displays, the team chained together such primitive behaviors as looping, ascending, and nuzzling, in a sequence that depends on various events - for instance, the female has to witness the male's mating dance for a certain time before she responds.
Although the behavioral repertoire of the fish is programmed, what happens is highly complex and unpredictable because it depends on other fish in the neighborhood and what they are doing. "If the male gets interrupted by a predator, then the mating may not get consummated." Terzopoulos hopes to take the work forward by allowing the fish to mate, mixing the genetic components of male and female fish to form offspring: "We may be within reach of computational models that can imitate the spawning behaviours of the female and the male, hence the evolution of new varieties of artificial fish through simulated sexual reproduction." [p262-264]
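The learning-to-swim loop described above is a textbook use of simulated annealing: propose a small random change to the muscle-control parameters, keep it if the fish swims better, and occasionally keep a worse change while the "temperature" is high. The sketch below is generic, not Terzopoulos' model: swim_efficiency() is a placeholder objective (the real system evaluates a physics-based fish body), and the parameter count, step size, and cooling schedule are illustrative assumptions.

```python
import math
import random

def swim_efficiency(params):
    # Placeholder objective; the real system would run the elastic fish
    # model in simulated water and measure speed and efficiency.
    return -sum((p - 0.5) ** 2 for p in params)

def anneal(n_params=12, steps=5000, t0=1.0):
    params = [random.random() for _ in range(n_params)]  # twelve "muscles"
    current = swim_efficiency(params)
    for step in range(steps):
        temperature = t0 * (1 - step / steps) + 1e-9     # cooling schedule
        candidate = params[:]
        candidate[random.randrange(n_params)] += random.gauss(0, 0.1)
        delta = swim_efficiency(candidate) - current
        # keep improvements; occasionally accept a worse move early on
        if delta > 0 or random.random() < math.exp(delta / temperature):
            params, current = candidate, current + delta
    return params

anneal()
```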
The nub of the problem of consciousness is admirably expressed by John Searle, a philosopher at the University of California, Berkeley: "The secret of understanding consciousness is to see that it is a biological phenomenon on all fours with all other biological phenomena such as digestion or growth. Brains cause consciousness in the same sense that stomachs cause digestion, and in neither case are we talking about something spiritual or ethereal or mystical or something that stands outside ordinary physical processes in the world. The two biggest mistakes, and at bottom they are both the same mistake, are to think that consciousness, because it is private, subjective, touchy-feely, ethereal, etc., cannot be part of the ordinary sordid physical world of drinking beer and eating sausage. The second big mistake is to think that it is all a matter of computer programs."2 Astonishing progress has already been made in our knowledge of the brain, for many reasons. First, we are armed with unprecedented understanding of its chemistry. Second, we now have a range of tools to watch the living brain at work. Third, neural network simulations of the brain have managed to capture some of its emergent properties in silico: these range from the way a damaged brain behaves to how it processes the signals from the retina for vision. It is becoming clear that the obstacles to creating artificial consciousness may not be as formidable as we had thought.
A HISTORY OF THE MIND

Our understanding of the brain has come a long way since the father of modern philosophy, René Descartes, put forward his ideas about it. Born in Touraine, France, in 1596, Descartes is said to have had divine revelation of his mission in life - the unfolding of the general principles of science (Scientia mirabilis) - while shut away one day in a "stove-heated room."3 Thirteen years later, in 1632, Descartes launched the scientific study of the brain with publication of The World.4 He rejected the prevalent view that biology could only be explained by invoking special "vital" principles of life, claiming that there was nothing about the human body that could not be explained by the same laws that governed the behavior of stars and rainbows.5 Descartes believed that all operations of the senses and the nerves both began and ended with the tiny pineal gland at the base of the brain. The pineal influenced the flow of "animal spirits," the term he used for a refined form of blood that supposedly surged through nerve and brain. Information from the senses, meanwhile, was transmitted to the brain by cords, which ran inside the same nerves. A cut to the skin of a finger would tug a glorified bell-pull, which would open a valve inside the head, sending a flock of animal spirits to the muscles to pull the hand out of harm's way. But Descartes did not pursue this reductionist picture to its ultimate conclusion. He claimed that the soul ultimately ruled the pineal. Modern brain scientists beg to differ. The brain is in charge, not the soul. And the brain consists not of spirits, plumbing, and bell-pulls, but cells. The structure and function of many of the cells within the brain are beginning to be understood, as are many of the molecules that shuttle signals between them. On a computer screen, we can display a colorful image of the receptor sites on the surface of brain cells, revealing the precise docking site where these messenger molecules act. We can even mimic the ripple of electrical activity generated when a single brain cell fires. Yet when all this detail has been accounted for, the most important feature will still be lacking. This is the big picture of how emergent properties such as memory result from the brain's structure. While it is essential to understand the molecular detail of brain chemistry, only the science of complexity enables us to make sense of the higher level of organization at which networks of billions and billions of neurons act together, apparently miraculously, to handle not only memory, but also vision, learning, emotion, and consciousness. Complexity arises in the brain through self-organization at several levels. First, during development, self-organization fashions the brain through a series of feedback and selection processes between neurons. Second, within the complex chemistry of living brain cells, vast interacting networks of molecules self-organize to create spatial and temporal order, which can be seen as dissipative structures similar to those we encountered in the Belousov-Zhabotinski reaction (Chapter 6). Third, self-organization constantly rewires the huge numbers of neurons in the brain to store memories and tailor its performance to the environment. [p280-281]
The brain is a child of the rich environment it experiences through its senses. Its origins lie in the self-organisation of the central nervous system over billions of years of evolution. Faced with the necessity for survival, for making order out of the "blooming, buzzing confusion," as the American philosopher and psychologist William James called it, the nervous system evolved to extract ever more useful snippets of information about its surroundings. As a result, a profound harmony exists between the organisation and structure of the brain and the world in which we live. [p282]
... a way to dynamically alter the connections between the neurons during learning. This is a fundamental hurdle. The connections underpin the emergent properties of networks of neurons, which in turn underpin the emergent properties of networks of networks - that is, the global properties of the brain itself. And an entirely separate effort to build a brain by culturing networks of living neurons also has an extremely long way to go to mimic the complexity of the real brain. One could be forgiven for thinking that we will never be able to capture the details of the brain's interconnections in silico. For the communication medium of the silicon neuron, indeed of any computer, pales in comparison with the complexity of that used within the brain. When an action potential sends a signal across a synapse, the firing of the cell does not consist of electrons racing along a wire but of molecules called neurotransmitters that diffuse across the junction between nerve cells. Since the first neurotransmitter was identified in 1921, some fifty more have been labeled. Dopamine, one of this list, is as good as any for highlighting their importance. Found primarily in a region of the brain called the substantia nigra, this chemical messenger is used during cognitive processes, emotional states, walking, and running. As has so often been the case in brain research, its role has been emphasized by individual misfortune. Studies of patients suffering from Parkinson's disease conducted in the 1950s showed that they had unusually low levels of dopamine. A factor in the environment, perhaps combined with a quirk of brain chemistry, can trigger the loss of the nerve cells that manufacture dopamine in the substantia nigra. Losses of over 90 percent of these cells start to deprive the patient of mobility and muscle coordination.41 The brain's complexity is further increased by the variety of sites where chemical messengers like dopamine act: at protein receptors within the neural cell wall. One recent study provides intriguing evidence that schizophrenia, the most common psychotic illness, is linked to a disorder in the way dopamine acts on receptors. Schizophrenics have normal levels of dopamine in the brain but an excess of sites where it acts.4 One site, called the D4 receptor, was discovered to be six times more abundant in the brain tissue of schizophrenics than in normal people. For every bombardment of dopamine, schizophrenics received six times as much message. The discovery that dopamine over-stimulates sufferers seems to go along with the symptoms of hallucinations and delusions, though it is unlikely that this finding alone will explain such a complex condition.44

WEAVING THE LOOM

Reductionists, among whom may be included many molecular biologists and biochemists, believe in the dictum "God is to be found in the details." Fortunately, we do not have to understand the monstrous detail of neurotransmitters and receptors to gain an overall grasp of the way the brain works. The most important emergent property of such detailed brain activity is that experience of the real world strengthens and weakens synaptic connections between brain cells, along the lines described by Donald Hebb, whose work we briefly encountered in Chapter 5. Building on the ideas of Eugenio Tanzi45 and Ramón y Cajal, Hebb proposed that, during learning, synaptic connection strengths would be increased if a neuron fires at the same time as one or more neurons connected to it.
Hebb wrote: "The most obvious and I believe much the most probable suggestion concerning the way in which one cell could become more capable of firing another is that synaptic knobs develop and increase the area of contact between [nerve cells]." In other words, action potentials not only carry signals between neurons, their metabolic wake also alters the circuits over which they are transmitted. Within the brain are thousands of millions of neural connections of varying strength that change in this way with use. If, as a result of stimulation of the senses, interesting things happen simultaneously, frequently, and to neighboring neurons in the brain, these neurons tend to be connected by the network. Because of this plasticity, or adaptability, of neural connections, instruction by the world enables the brain to self-organize networks for recognizing objects (e.g., a pen), whether viewed end-on, from the side, or at any other angle. The result of these underlying molecular mechanisms is that the structure of the brain adapts to reflect the connections between events in the real world. The process of creating memories provides a good example of such networks.[p294-p295]
Extrapolating somewhat, this research suggests that innate preferences, whether in personal relationships or in the arts, may be an indirect result of the way the brain has evolved to interpret sensory information. Indeed, our disdain for the irregular, distorted, and lopsided seems to be inherent in the way our brains recognize patterns. We may at last understand why it is that we love the symmetry of a snowflake, a beautiful face, or William Blake's Tyger, with its "fearful symmetry." Subsequent work by Arak and Enquist, complemented by independent work by Rufus Johnstone, has shown that neural networks have an inherent preference for symmetry when trained to recognize visual patterns, because symmetrical patterns are easier to ascertain from a variety of viewing angles - think of a sphere compared with a cube. These findings are corroborated by the discovery that our own love of symmetry is shared by other creatures - for instance, crows and monkeys. There are other examples of how our senses can be understood on the basis of artificial neural network simulations. However, we are still faced with the important but difficult question of how these networks interact with one another; specifically, the roles of integration versus specialization need resolving, along with the associated problem of net-to-net synchronization. We touched on this issue earlier in this chapter in our discussion of a particular, though restricted, measure of neural complexity developed at the Neurosciences Institute in La Jolla that attempts to express and quantify the subtle link between local and global functions. There are centers in the brain that can recognize a face, while others detect movement, colors, and expressions. How do we reconcile the existence of a unified mental scene and, ultimately, the unity of consciousness with the astonishing specialization of the brain - a puzzle often called the binding problem? This may sound esoteric, but it is important in conditions such as schizophrenia, which occur when the process breaks down. The same group in La Jolla - Olaf Sporns, Leif Finkel, Giulio Tononi, and American Nobel laureate Gerald Edelman - focused on binding in the visual cortex. For example, when we gaze at a red picket fence, how do the cells within the cortex that register the vertical orientation of the fence posts know that it is the selfsame stimulus (the fence) that makes other cells register the color red? The team drew on the work of Charles Gray and Wolf Singer of the Max Planck Institut für Hirnforschung in Frankfurt, who had found high degrees of synchrony of neural activity in studying the primary visual cortex of the cat. Gray and Singer's experiments suggested that different processes in the brain were bound together by the fine temporal structure of neural activity. This inspired a bottom-up model, run by Sporns and his colleagues on a supercomputer, that exploited the temporal properties of discharges between networks of about 200,000 artificial neurons. The neurons were arranged in three separate streams for form, color, and motion, analogous to those of the mammalian visual system. To process images from a video camera, the team connected the units in a biologically plausible way via several million connections, most of them arranged to link individual visual maps in a reciprocal fashion. From this and subsequent models that integrated up to nine cortical areas there emerged a dynamic alternative to the traditional idea of a static "grandmother" binding cell.
The group used this approach to segregate a moving figure from various backgrounds and applied it to give an account of visual illusions and Gestalt phenomena: an image of shapes and symbols that appears meaningless when viewed in close-up can reveal a face, cube, or pattern when we stand back and see the image in its entirety. Using a more idealized network, John Taylor of King's College London has been modeling a part of the brain, called the nucleus reticularis thalami or NRT, which acts as the playground for competition between many distinct activities in separate cortical areas. Taylor envisages it as a gateway linking primitive centers that govern emotion, as well as inputs through the eyes and ears, with the cortex, the outer layer of the brain responsible for memory, language, thought, and intellect. In his neural caricature, Taylor has mimicked this process by allowing competition between different activities in an artificial network of inhibitory neurons. What emerges is a single wave of electrical activity across the net that, he claims, provides global correlation of cortical activity. Similar waves of activity have been observed in vivo using magnetoencephalography, a brain-scanning technique we discuss in the Appendix. "These fit exactly with what I would expect from my model," says Taylor. The jury is still out on the significance of this work, though it does complement in some ways Edelman's Darwinian model of thought processes, in which ideas compete for "workspace" within the brain. Perceptions of the thinker's current environment and memories of past environments may bias that competition and shape an emerging thought. We should not forget, however, that no one has yet succeeded in providing a plausible description of such higher-level cognitive functions as awareness - the basis of consciousness - let alone the multitude of emotional states such as happiness, pleasure, pain, and sadness.

NEURAL NETS AND DAMAGED BRAINS

Artificial neural networks have further demonstrated their realism by providing deeper understanding of the effects of brain damage. Tim Shallice of University College London, working with Geoffrey Hinton and David Plaut, used a neural network to model how the damage resulting from a stroke can lead to visual errors and difficulty with certain abstract words - deficits that superficially appear to be a random collection of behaviors. Efforts to retrain the neural network after damage show that some strategies are better than others. Doctors can now begin to try rehabilitation procedures based on this conceptual framework. Neural networks have also been used to model a form of agnosia called prosopagnosia, caused by lesions at the boundaries between the occipital and temporal regions of the cortex. A sufferer loses the ability to recognize the faces of friends and family, even a photograph of his or her own face. Such patients often have difficulties in distinguishing between individual members of a given class of objects. For example, they can usually classify objects such as cars, dogs, cats, and kettles correctly while being unable to identify individuals within these classes. Since prosopagnosia affects only the visual recognition of faces, and not memories reached through other channels, it may still be possible for the afflicted individual to identify a person or object from other cues, such as posture or gait, or by using another sense, such as a telltale sound.
A sufferer may not be able to recognize the face of his pet cat, Pushkin, but if he hears her plaintive meow he may immediately know the beast. As ever, these unfortunate patients help reveal how information on individuals and objects is stored within the brain. Are there many representations of a single object, each derived from a different sense, whether smell, sound, or sight? Or is there perhaps one abstract representation that can be accessed in numerous ways via different senses? This idea can be readily grasped from the understanding we gained in Chapter 5 of the way in which recurrent neural networks act. Inputs from the different senses may all converge on some global attracting network state that represents Pushkin. The global nature of this state means that it is very likely to be spatially highly distributed - that is, the neurons whose collective activation serves to represent Pushkin are spread throughout the brain. Thus, if this interpretation of prosopagnosia is correct, it implies that only the visual stimulus route to the concept of Pushkin is cut by the lesions, since it can still be reached by other sensory routes. Indeed, it is possible that a patient, given such alternative routes to the neural representation of Pushkin, may even be able to describe her visual attributes accurately. The hypothesis is that there is a hierarchy of recurrent networks nested within recurrent networks nested within recurrent networks. It is to be expected on the basis of such a hierarchy that, when synaptic connections - connection weights in the language of Chapter 5 - are destroyed by lesions, fine-grained learned patterns (more specific, individual recognition patterns) will degrade first, while the broader classes will tend to survive. The retrieval of stored classes of objects (e.g., faces) from neural networks, as opposed to individuals within each class (one person's face), was investigated by the Argentinian physicist Miguel Virasoro at the Università degli Studi di Roma "La Sapienza," now director of the International Center for Theoretical Physics in Trieste, Italy. He found, using the Hopfield type of neural network model, that the stability of the class was much greater than that of individuals. In other words, brain damage would indeed tend to wipe out sites that distinguish the faces of individuals, precisely the effect observed in prosopagnosia.

CONSCIOUSNESS

The study of consciousness was shunned in learned scientific circles until recently. It was widely believed - and still is by many - that the phenomenon lies beyond the reach of scientific explanation. One key element of consciousness is its subjectivity: each of us can only know of his or her own conscious state. And the realm of subjective experiences is normally regarded as a strictly private affair. But brain-scanning techniques (see the Appendix) can now glimpse this private world, and artificial neural nets offer the means to model it. The realization of artificial consciousness is a tall order and has not yet been attained; however, we should take heart from the many examples described in this book of how we can re-create extraordinary real-world complexity using computers. Given this kind of progress, it is no wonder that today a lively debate involving scientists and philosophers is attempting to find out what it is about that three-pound lump of grey and pink cells in our heads that is responsible for consciousness. On one matter most are agreed: the effort is among the most challenging and exciting ever undertaken.
The challenge is rooted in the complexity of the brain's endless tangles of neurons and synapses. The excitement of the quest rests on the claim by some that our future survival and that of the planet could depend on a more complete understanding of the human brain. This urgency is heightened by the increasing numbers of scientists who believe an explanation of consciousness, whether neurobiological or neurocomputational, is now feasible. As Francis Crick has stated, "I believe the problem of consciousness is now open to scientific attack. ... The flavors (qualia) of what we see (such as the redness of red) may be private, but it should be possible to discover the general type of activity in the brain that corresponds to consciousness." To find out what sort of activity that may be, Crick and his collaborator Christof Koch are studying the visual illusion posed by the Necker cube, a line drawing that either appears to be going into the page or popping out, depending on how long you stare at it (see Fig. 9.9). "Aside from eye movements, what is coming into your eye is constant but your percept is changing," said Crick. "What we want to know is which neurons in the brain are changing when your percept is changing."
Gerald Edelman is convinced that the mystery of consciousness will never be solved at any single level of description, whether molecular, neuronal, or psychological. Instead, he places great emphasis on the effects of evolutionary selection at a range of levels, starting with the myriad neural connections within the brain. Faced with the necessity for survival, for making order out of a chaotic world, the brain is highly plastic and adapts itself, mapping sensations, categorizing and recategorizing them constantly. "Nerves that fire together, wire together," he says. Every neuronal map, every part of the brain, is dynamically, or, to use Edelman's term, "reentrantly," connected with every other, evolving and integrating itself in continuous cross-talk. Thus, the brain actively represents and maps the world, and compares these mappings with one another. Crucially, however, this evolution of self is made possible by selection, the strengthening of existing neuronal groups, and the constant emergence of new neural networks on the basis of "value systems" derived from evolution, such as reflexes, taste, and appetite. All these processes naturally develop a diverse and degenerate repertoire of connections, no two of which are alike, even in identical twins. Understanding how the brain categorizes the world is a key problem in the search to explain consciousness. As Edelman puts it, "The world does not come in neat little packages with labels." Artificial neural networks and parallel processing consistently feature in many current efforts to pin down the phenomenon of consciousness. By envisaging the brain as a massively parallel device, there is no need for a Cartesian theater on which events are enacted before an all-seeing homunculus. Since there is no homunculus, we do not need to maintain the notion of Cartesian dualism, that mind is separate from matter. As an example, the philosopher Daniel Dennett maintains that "It is beyond serious argument that the brain is a computer. It is not a serial computer of the familiar sort, but a parallel computer, an architecture alluded to in the name for my theory of consciousness, the 'multiple drafts model.' I envisage the mind to work rather like the Reagan presidency - lots of sub-agencies and coalitions and competitive functionaries working simultaneously to create the illusion that one Boss agent is actually in control." This connectionist picture of consciousness creates problems for the traditional reductionist mission to dissect the workings of the brain. "Science has always had its great triumphs when and where it succeeded in subdividing complex phenomena into very simple paradigms. Doing the same to the brain, we are in danger of being left with bits and pieces in our hand. Using a simile, in order to understand the value of money, we shouldn't stare at dollar bills," wrote the neuroscientist Christof von der Malsburg. "We should rather try to understand the system of beliefs and habits that make money do what it does. What I am saying here is that none of the isolated components of the brain can be expected to hold the essence of consciousness. That resides in the modes of interaction of all parts of the brain, and maybe even in the way the brain is integrated into the social world and the world at large."

ARTIFICIAL CONSCIOUSNESS

The ultimate test of our understanding of the brain will come with the design and simulation of an artificial one, which displays such attributes as intelligence and consciousness.
We have seen how neural networks are today providing insights into memory, pattern recognition, and the way the brain is organized. With these more realistic models of brain function, as well as our knowledge of artificial life, we can begin to see why intelligent behavior-and consciousness-may not necessarily be restricted to biological beings alone.
Perhaps the most useful early contribution to the debate over artificial intelligence was made by its founding father Alan Turing, who took a pragmatic "operational" view. An operationalist would say that a computer has a human attribute so long as the computer's attempts to imitate that attribute are indistinguishable from the real thing. Turing gave a description of this, the Turing test, in an article entitled "Computing Machinery and Intelligence" that appeared in the philosophical journal Mind in 1950. Turing reasoned that a computer must be said to be capable of thinking if a human being, conducting a dialogue by electronic typewritten messages, cannot tell whether he or she is communicating with a machine or with another person. Such issues have a substantial philosophical content, and have stimulated the growth of a huge body of literature. The mathematician Sir Roger Penrose, in his thought-provoking and widely read book The Emperor's New Mind, delivered an interesting critique of the entire enterprise of artificial intelligence (AI). He has developed his views in a recent sequel, Shadows of the Mind. His argument turns on the significance of Gödel's undecidability theorems in mathematical logic, which we encountered in Chapter 2. Penrose maintains that human brains have an ability to "see" the truth or falsity of Gödelian statements whose truth-values cannot be decided within the formal axiomatic framework of the logical system concerned. This does not mean he believes that human brains are essentially different from those of many other animals. The important point is that, for Penrose, this ability is "a clear-cut instance of noncomputability-a noncomputability which must be present in conscious processes generally and [is] not at all unique to human brains." According to Roger Penrose, because we can step outside these formal axiomatic frameworks and gain insight into the truth value of such undecidable statements by reasoning that is nevertheless mathematical, it follows that our brains cannot operate algorithmically. Since computers merely execute programmed instructions, they are algorithmic; hence computers cannot be as smart as we are. If Penrose's argument were correct, the ambitious goal of strong artificial intelligence-namely, to build a conscious computational device-would crumble into dust. This argument has considerable force, as well as a certain mystical appeal. For, like many mathematicians, Penrose is committed to the notion of a Platonic reality, existing independently of us, yet one which we can make contact with through mathematical insight. Thanks to Gödel, at least a part of this abstract reality is forever veiled to "dumb" algorithmic computation. Our conscious brains-or at least those of mathematicians-are what we need to reach into the Platonic cosmos and divine answers to these uncomputable problems. Roger Penrose takes his argument still further. As we saw in Chapter 2, it is possible that the laws of physics-or at least their mathematical representations-may have consequences that are not computable. Indeed, we described this possibility when discussing the mathematical work of Pour-El and Richards, whose relevance to science remains unclear. 
For reasons wholly unconnected with AI, Penrose claims that the many problems that bedevil quantum mechanics and gravity will be resolved by the explicit incorporation of noncomputable elements in some more successful but hitherto unknown theory of quantum gravity. From this, he speculates that it is precisely this supposedly nonalgorithmic new physics that lies behind the intelligent properties of conscious brains. To provide a bridge from this currently unknown quantum gravity to the neurons within a conscious brain, Penrose draws on the ideas of Stuart Hameroff on microtubules. Most neuroscientists agree that microtubules provide a "skeleton" for the neuron with two functions: to control the neuron's shape, and to transport molecules back and forth between cell body and synapses. Penrose goes beyond this consensus, suggesting that the network of microtubules might exhibit behavior that would correspond to a quantum measurement, and that this could yield the noncomputability he believes he has shown is necessary for consciousness. However, microtubules would provide only one part of the overall setup envisaged by Penrose, which would still require many cells acting together in concert. "The neuron level of description that provides the currently fashionable picture of the brain and mind is a mere shadow of the deeper level of cytoskeletal action - and it is at this deeper level where we must seek the physical basis of mind!" The basic Gödelian argument that Penrose and others have used to attack AI has been widely criticized. Crick believes that, by now extending his ideas to microtubules, Penrose has moved far out of his depth. The scientist who elucidated the three-dimensional structure of microtubules, Sir Aaron Klug, is unimpressed by their proposed new role in consciousness, while Gerald Edelman points out that "There is an old-fashioned drug to treat gout and arthritis that dissolves your microtubules - what happens to your soul then?" Yet despite numerous criticisms, Sir Roger Penrose's argument about the elusiveness of computational consciousness is an important one, and we cannot easily dismiss it. However, it can be argued that Penrose's position is based on a somewhat restricted view of what constitutes a computer. For Gödel's theorem is a theorem of logic, concerning mathematical systems of axioms; it does not apply to machines. Michael Arbib, a computer scientist at the University of Southern California, concluded many years ago, as others have also, that, though fascinating, Gödel's theorem is an irrelevant technical statement. "Those of us who model human intelligence know that people do not argue from axioms all the time. We argue by analogy. We keep learning new things. We make mistakes. We are not consistent, unlike the axioms," says Arbib. Gödel's theorem would indeed limit artificial intelligence if it were as restricted as a GOFAI (good old-fashioned AI) system. Penrose regards the potential of modern machines to learn mathematical axioms and rules of thumb as irrelevant to the way humans understand mathematics. But for most people, it is precisely the ability to learn that enables modern AI to escape Gödel's clutches, as Turing argued long ago. Any artificial consciousness would have the ability to incorporate new "axioms" into its structure as a result of experience with sensory or other data. The neural computing machines inspired by the brain are ultimately not intended to be logical inference engines but machines that interact with and explore the world, and can learn from their mistakes.
In the jargon, these machines are "situated" so that they can constantly match their behavior to that of the world. In Arbib's opinion, "Gödel's theorem has absolutely nothing to say about that": it is a red herring. Most of those currently working on the simulation of intelligence and consciousness feel that the real challenge lies with the biochemical machine we call the brain. The demise of GOFAI hegemony has led to a proliferation of computational strategies, as we have discussed throughout this book. What is happening now is that a menagerie of competing computational approaches is evolving that can only enrich artificial intelligence research. At this early stage, these efforts will continue to thrive, regardless of Roger Penrose's arguments against strong AI. For instance, though many details of a human brain and that of a slug are similar, they are wildly different in ability, reflecting their relative complexity. It seems highly probable that different degrees of complexity lead to different degrees of consciousness. Therefore, the quest for AI is a quest for complexity. Earlier in the chapter, we described efforts to construct the complex "wetware" that brains are made of; such efforts are still at a primitive stage. Other approaches try to re-create that complexity in an artificial neural network built from computer hardware, software, or a combination of both. Although the representation of artificial neurons is far simpler than their realization in wetware, they do at least capture several important features crucial to the way the brain functions. Most important, the complexity of such networks results from the collective action of a large number of simple units, just as the brain's complexity rests on its myriad neurons. What is novel about the neural net approach is that it does not involve explicit programming. To be sure, some kind of algorithm is always present in all software neural network simulations in order to specify the dynamics of learning, in the same way as the DNA code "programs" the brain's architecture and learning processes. But such nets, once established, learn by experiencing the world with which they interact (see the sketch following this paragraph). Therefore, it seems entirely conceivable that sufficiently complex types of machines could also learn to "see" solutions to certain types of Gödelian undecidable statements. Indeed, this process is the same as that by which some humans develop an ability to resolve Gödelian problems-through a sufficiently deep education (i.e., a lengthy and highly specialized learning process) that enables them to stand outside any given formal logical system. And even if, as Roger Penrose contends, there do turn out to be significant limitations to the intelligent capabilities of digital neural networks, analog recurrent neural networks possess the ability to perform "super-Turing" computations, rendering computable that which for a finite state (Turing) machine would be noncomputable.
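To make the distinction concrete, here is a minimal sketch of the kind of learning just described. The only thing explicitly programmed is the learning rule; the network's eventual behavior (computing the XOR function) appears nowhere in the code and is acquired from examples. The architecture, random seed, and learning rate are illustrative assumptions, not anything from the book.

import numpy as np

# Minimal sketch: a 2-4-1 sigmoid network trained by gradient descent on XOR.
# Only the learning rule is programmed; XOR behavior emerges from the data.
# Some initializations may need more epochs to converge.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # the "world"
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(10000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backpropagate squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out                        # learning rate of 1.0 folded in
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0)

print(np.round(out.ravel(), 2))              # approaches [0, 1, 1, 0]

Nothing in the final weights was designed by a programmer; the mapping was extracted from experience, which is the point the passage above is making.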
Whatever laws of physics the brain obeys, one thing is certain: they are the laws governing the behavior of any physical object, whether a neuron or silicon chip. Most scientists feel there is plenty of mileage left in exploring the complexity of the brain using established ideas. As we have emphasized throughout, existing physics can generate enough exotic emergent behavior to explain many of the fundamental processes of life and the brain. "Nobody has said they are stuck in their study of the hippocampus because the fundamental laws of physics are restrictive," Arbib remarked. That does not mean one should rule out the possibility of revolutions to come. Just as investigating the very small gave us quantum mechanics and investigating the very large led to general relativity, perhaps something new will come from studying the very complex. But the overwhelming majority of cognitive scientists would agree with Arbib when he says, "I would be surprised if Penrose has got the answer in quantum gravity"-and Arbib adds, "if he has, it is a pure fluke." [p314-p325]
Cyclic AMP and Slime Molds Advanced organisms such as the one reading this book comprise many billions of cells that are organized into enormously elaborate structures during the process of development from egg to offspring. There are scarcely any mechanisms in this development that we understand well enough yet to be able to give a decent mathematical description. Nevertheless, nonlinear mathematics can provide a qualitative sketch of the self-organization of a community of cells, as we can illustrate with the help of a strange creature called a slime mold. The slime mold falls halfway between a collection of single cells and an organism. Like the ant hive, Dictyostelium discoideum is a superorganism. At times it is multicellular (with around 100,000 cells), while at others its cells wander independently. When the bacteria that make up its food are plentiful, individual cells feed voraciously, behaving like solitary wanderers and multiplying by direct cell division. Eventually, however, the colony runs short of food. Now the cells "notice" each other. For nonlinear reasons not yet fully understood, certain cells in the colony become active and act as pacemakers, "ringleaders" that send out rhythmic pulses of a chemical called cyclic adenosine monophosphate (cAMP). This is a ubiquitous molecule in biology that acts as a molecular message between neighboring cells. Here it is a glucose distress signal, announcing that the cells have run out of food. This clarion call to close ranks and organize travels at a few microns a second. Cells amplify and pass on the message, a form of feedback mechanism providing the nonlinearity that induces still more cells to home in on the pacemaker centers. There are two additional ingredients: once a cell has released a burst of cAMP, it cannot immediately respond to another signal, going into a "refractory state" before returning to an excitable condition. The cells also exude an enzyme, phosphodiesterase, that destroys cAMP, setting up a gradient of the chemical that provides a signpost. The starving cells slither toward the pacemaker cells, in the direction of increasing cAMP concentration. Aggregating populations can produce concentric and spiral waves that bear a compelling resemblance to the spiral waves occurring in the BZ reaction. This is no surprise: though the details are different, the positive and negative feedback processes are the same. Once the cells have formed a slimy mass, they begin to differentiate and a tip forms that secretes cAMP continuously. The whole mass becomes organized into a glistening multicellular "slug," with a head and a tail, that wriggles in search of light and water. All in all, it takes several hours for these cells to form this simple organism. Between one and two millimeters long, it crawls along under the leadership of the pulsating source at its tip. It then rights itself to form a hard stalk above which perches a small head containing spores; eventually, the head breaks open and the wind casts its spores far and wide. If they settle in a suitable place, they can germinate and begin the cycle of this strange organism's life anew. Remarkable biochemistry underlies this behavior, reminiscent of the sugar clock in glycolysis. The messenger molecule that organizes this wriggling mass, cAMP, is formed from ATP with the help of an enzyme called adenylate cyclase. Feedback occurs, just as in glycolysis: cAMP already present in the medium surrounding the cells switches on adenylate cyclase to produce more cAMP from ATP.
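The relay-plus-refractory logic just described is the defining recipe of an "excitable medium," and even a crude caricature of it reproduces the waves. Below is a minimal sketch using the Greenberg-Hastings cellular automaton; this is not a biochemical model of cAMP, and the grid size, seeding density, and step count are all illustrative assumptions.

import numpy as np

# Greenberg-Hastings excitable-medium automaton. States: 0 = resting/excitable,
# 1 = excited (emitting signal), 2 = refractory. A resting cell with an excited
# neighbor becomes excited (signal relay); an excited cell must pass through a
# refractory step before it can respond again -- the two ingredients the text
# identifies in the slime-mold colony.
rng = np.random.default_rng(2)
n, steps = 100, 60
grid = np.zeros((n, n), dtype=int)
grid[rng.random((n, n)) < 0.01] = 1          # sparse "pacemaker" excitations

for _ in range(steps):
    excited_neighbors = sum(
        np.roll(grid == 1, shift, axis)      # excited cells up/down/left/right
        for shift in (-1, 1) for axis in (0, 1)
    )
    new = np.zeros_like(grid)
    new[(grid == 0) & (excited_neighbors > 0)] = 1   # relay the signal
    new[grid == 1] = 2                               # excited -> refractory
    new[grid == 2] = 0                               # refractory -> resting
    grid = new

print((grid == 1).sum(), "cells excited after", steps, "steps")
# Plotting successive grids (e.g., with matplotlib's imshow) shows expanding
# rings and, where fronts break, rotating spirals like those in aggregation.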
In this way autocatalysis arises, an essential ingredient of self-organization. By employing largely the same nonlinear analysis as he used to model glycolytic oscillations in yeast cells, Albert Goldbeter was able to show in a detailed way, on the basis of limit cycles, how oscillations of cAMP could be produced every few minutes.74 This is an excellent example of self-organized behavior; moreover, chaotic cAMP oscillations are also now known. Indeed, in a mutant form of D. discoideum, we have observed temporal chaos in the form of cAMP oscillations and spatial disorder manifested in aberrant stalks and fruiting bodies, all of which can be returned to ordered behavior by adding phosphodiesterase. [p214-5]
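Goldbeter's full analysis is beyond a short example, but the qualitative idea of a limit cycle is easy to demonstrate. The sketch below integrates the Brusselator, a textbook two-variable chemical oscillator that stands in for (and does not reproduce) the cAMP model: for a = 1, b = 3 the steady state is unstable and the concentrations settle onto self-sustained pulses with a fixed period. All numerical values are illustrative.

import numpy as np

# The Brusselator, integrated with a plain Euler scheme. The autocatalytic
# term x*x*y (x promotes its own production) plays the role that the
# cAMP -> adenylate cyclase -> cAMP loop plays in the slime mold.
a, b = 1.0, 3.0
dt, steps = 0.001, 60000                 # 60 time units
x, y = 1.0, 1.0                          # initial "concentrations"
trace = np.empty(steps)

for i in range(steps):
    dx = a + x * x * y - (b + 1.0) * x   # autocatalytic production of x
    dy = b * x - x * x * y
    x += dt * dx
    y += dt * dy
    trace[i] = x

# Estimate the oscillation period from upward crossings of the mean level.
level = trace.mean()
up = np.where((trace[:-1] < level) & (trace[1:] >= level))[0]
print("period ~", round(float(np.diff(up).mean() * dt), 2), "time units")

Whatever the starting concentrations, trajectories wind onto the same closed loop; that insensitivity to initial conditions is what makes a limit cycle a robust biological clock.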
THE DREAM MACHINE We are a long way from running computer simulations of the human brain in its full glory. Yet surprises continue to emerge in even highly simplified simulations. As one example, a computer model of the hippocampus, quite distinct from that of Treves and Rolls, was developed by Roger Traub at IBM in a collaboration with Columbia University to study the brain's electrical rhythms. The model connected 10,000 simulated neurons, each one described in considerable microscopic detail so that it would respond in a manner close to that of the real thing. It was a bottom-up approach to complexity similar in spirit to what we encountered in Denis Noble's work on the heart. What resulted was unexpected: Traub's network produced electrical waves similar to those generated in large populations of brain cells that can be detected by electroencephalography. One emergent behavior in the model was the "theta rhythm," which occurs during dream sleep. The origin of these waves, also called population oscillations, is not understood in the brain or the IBM 3090 computer. "It is quite a surprise," said Roger Traub of IBM.133 "When I was starting out, we only used the model to confirm things we saw in the laboratory. Now we are beginning to do experiments on it as if it were an organism in its own right." Working with John Jefferys of St. Mary's Hospital Medical School, London, Traub has extended his work to model the most explosive spasm of electrical activity that can occur in the brain, when a ripple of activity spreads out from a single spot during an epileptic seizure. What is particularly striking about his simulation of so-called "after-discharges" is that they compare well with experiments on slices of guinea pig hippocampus. (An after-discharge is an abnormal electrical potential that is extended in time, usually appearing as a series of oscillations.) Both the spatial and temporal properties of these electrical discharges were successfully reproduced in a computer model consisting of between 100 and 8,000 pyramidal neurons, each broken down into nineteen compartments so that their electrical properties were reasonably realistic.134 The most intriguing rhythm of the brain has also been simulated by the team in a network of artificial neurons. Only present during consciousness or dream sleep, the rhythm cycles forty times every second and can be detected by monitoring the electrical or magnetic activity of the brain. Some claim that it acts rather like a clock in a computer to coordinate activity in the many specialized regions. In other words, this forty Hertz rhythm could be the way the brain tackles the "binding problem" underpinning a unified consciousness, which we encountered earlier in the chapter. Various explanations have been put forward to account for this beat: perhaps individual cells have an intrinsic forty Hertz rhythm; perhaps the rhythm arises from a feedback loop between inhibitory neurons and pyramidal cells, or even between brain structures such as the cortex and the thalamus. With Jefferys and Miles Whittington, Roger Traub produced a forty Hertz rhythm in a virtual slice of hippocampus consisting of 128 inhibitory neurons, each modeled as a branching cell consisting of forty-six compartments. And indeed, in experiments where drugs are used to switch off the pyramidal cells in a hippocampal slice, the rhythm persists in the remaining active inhibitory cells. The rhythm appears to be an emergent property of networks of inhibitory neurons alone.
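The inhibition-only mechanism just described can be caricatured in a few lines. The sketch below is emphatically not Traub's forty-six-compartment model: it uses identical leaky integrate-and-fire neurons coupled by all-to-all inhibition with a GABA-like decay time of about ten milliseconds, and every parameter value is an illustrative assumption. Mutual inhibition alone paces the population, and for these values the spectral peak should land in the gamma band (roughly 30-50 Hz).

import numpy as np

# Minimal interneuron-network-gamma sketch: after a volley of spikes, shared
# inhibition silences the population until it decays, setting the rhythm.
rng = np.random.default_rng(0)
N, dt, T = 128, 0.1, 500.0                 # cells, step (ms), duration (ms)
tau_m, tau_syn = 10.0, 10.0                # membrane / synaptic decay (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # mV
drive, g_inh = 20.0, 0.3                   # tonic drive (mV); mV per spike

v = v_rest + 5.0 * rng.random(N)           # slightly staggered start
s = 0.0                                    # shared inhibition variable
steps = int(T / dt)
pop = np.zeros(steps)

for t in range(steps):
    s *= np.exp(-dt / tau_syn)                     # inhibition decays
    v += dt * (-(v - v_rest) + drive - s) / tau_m  # leaky integration
    v += 0.2 * rng.standard_normal(N)              # small membrane noise
    fired = v >= v_thresh
    v[fired] = v_reset
    s += g_inh * fired.sum()                       # every spike inhibits all
    pop[t] = fired.sum()

spec = np.abs(np.fft.rfft(pop - pop.mean()))
freqs = np.fft.rfftfreq(steps, d=dt / 1000.0)      # convert ms to seconds
print("dominant population frequency:", round(float(freqs[spec.argmax()]), 1), "Hz")

The rhythm here belongs to the population, not to any single cell, which is exactly the sense in which the forty Hertz beat is an emergent property.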
"We have provided a tool to investigate the role of the forty Hertz rhythm in the binding problem," said Traub.135 The largest brain network simulated by Traub has imitated the action of only 10,000 cells. A hardware version of the brain, based on artificial neural networks, would require on the order of 10^11 neurons or processors. The massively parallel CM-2 computer from Thinking Machines Corporation, for example, has about 65,000 processors; if each one were made to act as an individual neuron, we would need to harness the combined power of ten million such Connection Machines before we would have achieved something like parity with the brain. But, as we have seen, it is not merely numbers of neurons that are important-what matters is the way they are connected together, with complex hierarchies of networks within networks, and nets coupled to nets. This connectivity is a combined result of biological genetic programming and adaptive learning; it is not hidebound by the initial hard wiring. Even when it eventually becomes possible to achieve simulations of hundreds of billions of neurons, the resulting networks and their emergent properties would not display intelligence and consciousness similar to that of the human brain unless they were subjected to similar sensory stimuli and experiences. This point was evident in the work of Rodney Brooks, described in the last chapter, where the computer is hooked up to a large amount of complex sensory apparatus. Conventional Al failed because it overlooked the essential importance of context-dependent knowledge and an ability to learn on the job. Instead, it was predicated on the unlikely suggestion that a programmer could design and implant something as subtle and complex as consciousness in a machine. Intelligence is an attribute that reflects brain plasticity and a direct experience of the way the world works. To be intelligent, therefore, a machine must be able to interact with the world as well as learn from it. This state of affairs is what biological evolution has wrought: it is a crucial yet often neglected ingredient of intelligence. Throughout this book there has been one dominant theme: how we are seeking to understand complexity through a symbiosis between nature, science, and computers. In the previous chapter, we saw the remarkable progress being made in the field of artificial life. In this chapter, we have shown how computational models of neural networks offer much insight into the complexity of brain structure and function. These insights will get wider and deeper as computer power soars and important biological detail is re-created more completely within computer models. We believe that there are good reasons to suppose that a sufficiently complex machine could one day emulate intelligence and consciousness, the most sophisticated hallmarks of the most evolved of biological species. We place our faith not in human computer programmers but rather in the complementary creative forces of self-organization and evolution.
As we pointed out in Chapter 8, even the human eye,
which tested Darwin's faith in his own creation
and which has frequently been cited in attacks on the plausibility of biological
evolution, has recently been shown to be a likely product of blind evolution.136
Some may mourn the power of such an approach, claiming that it diminishes
our existence by substituting shallow contingency and randomness for profound
metaphysical meaning. However, the insights into creativity, life, and
consciousness derived from an understanding of their inherent complexity
in no way threaten but instead enrich the notions of chance, indeterminism,
and free will so precious to us. [p324-7] No one should doubt that our innermost thoughts, our emotions of love and hate, are more than a rush of individual hormones, or the firing of individual neurons in the brain. The study of complexity, through its emphasis on emergent properties, goes some way to restoring the balance between the spiritual and materialistic sides of our nature. [p330-331]
Our moral and ethical standards of behaviour, not to mention science itself, have evolved and will continue to do so in the light of political, social, and economic circumstances. These factors provide the "selection pressure" and determine in large measure what is or is not suitable. Such "meta" processes take place in the minds of conscious individuals, as ideas compete with one another for ascendancy. In this context, ideas are what Richard Dawkins calls memes, loosely speaking, units of cultural transmission, as they propagate from brain to brain. Examples include ideas, tunes, and clothes fads. "When a craze, say for pogo sticks, paper darts, slinkies or jacks, sweeps through a school it follows a history just like a measles epidemic," wrote Dawkins. "Fashions and crazes succeed each other, not because the later one is more correct or superior to earlier ones, but simply as any epidemic hits a school."13 As the philosopher Daniel Dennett remarked, the meme concept is a good way of thinking about ideas but the perspective it provides is somewhat unsettling, even appalling. "I don't know about you, but I'm not initially attracted by the idea of my brain as a sort of dung heap in which the larvae of other people's ideas renew themselves, before sending out copies of themselves in an informational Diaspora."14 Dawkins does not want to apply his viral metaphor to all culture, all knowledge, and all ideas. "Not all computer programs spread because they are viruses," he believes. "Good programs-word-processors, spreadsheets and calculating programs-spread because people want them. Computer viruses spread almost entirely because their program-code says 'Spread Me.' No doubt there is a spectrum from the pure virus at one end to the useful and genuinely desirable program at the other, perhaps with addictive computer games somewhere in the middle." Dawkins' selfish gene theory recognizes a similar spectrum, from viral genes to useful genes that make animals good survivors. "The genetic instructions, 'Build a speedy, strong-boned, keen-witted, sexually attractive antelope,' are saying 'Duplicate Me' in only a very indirect sense which seems to us far less mindlessly futile than the simple and unsubtle 'Duplicate Me' programs at the virus end of the spectrum," he wrote. In the domain of culture, he believes that innovative ideas and beautiful musical works spread, not because they embody instructions that are slavishly carried out, but because they are great. "The works of Darwin and Bach are not viruses. At the other end of the spectrum, the televangelist's appeal for money to finance his appeals for yet more money is pretty directly translatable into 'Duplicate Me'." Dawkins has taken this idea further in an attempt to draw a firm distinction between religious ideas, which he believes to be "pretty close to the virus end of the spectrum," and scientific ones. He argues that religions survive, not because of cynical manipulation by priests, and certainly not because they are true, since different religions survive equally well while contradicting each other. "Religious doctrines survive because they are told to children at a susceptible age and the children therefore see to it, when they grow up, that their own children are told the same thing." In other words, in Dawkins' opinion, religious beliefs are held for reasons of epidemiology alone. There are, however, difficulties with Dawkins' argument.
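Dawkins' analogy between a craze and a measles epidemic is itself a small piece of mathematics: the classic SIR (susceptible-infected-recovered) model produces exactly the rise-and-fall history he describes. The sketch below is generic contagion dynamics, not a model of any particular meme, and the rates are illustrative assumptions.

import numpy as np

# SIR contagion sketch: S = susceptible, I = gripped by the craze,
# R = recovered (bored of it). Infection requires meetings between
# the susceptible and the gripped; recovery is spontaneous.
beta, gamma = 0.3, 0.1        # transmission and recovery rates per day
S, I, R = 0.99, 0.01, 0.0     # fractions of the school
dt, days = 0.1, 120

history = []
for _ in range(int(days / dt)):
    new_cases = beta * S * I          # contagion spreads the craze
    recoveries = gamma * I            # boredom ends it
    S -= dt * new_cases
    I += dt * (new_cases - recoveries)
    R += dt * recoveries
    history.append(I)

print("craze peaks around day", int(np.argmax(history) * dt),
      "then dies away, just as a measles epidemic would")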
It is certainly true that while several successful strains of religion coexist and compete, ranging from Judaism, Christianity, and Islam to Buddhism and Hinduism, scientific memes tend to have an all-or-nothing feel to them, with one established theory excluding more or less all others most of the time in any given area. Religious creeds, like political ideas, are judged by every individual according to his or her own background, beliefs, and prejudices. Scientists, by this argument, are expected to believe in some things rather than others because of superior evidence in favor of them. Scientific memes will be more successful the more they can correctly account for and predict the results of experiments and observations - in short, these criteria provide the measure of the memes' "fitness." In the case of "nonscientific" concepts, such as religion, no objective yardstick or "fitness" measure exists for carrying out a ranking, and so which ideas win out depends on a collection of more arbitrary and subjective criteria. We must therefore expect religions to include more unprovable statements than science, while such disciplines as economics lie somewhere between these extremes. Economics straddles the divide between science and the humanities. The world's economies possess nonlinear features characteristic of complex dynamical systems, and the marketplace is very much associated with a form of financial "survival of the fittest." There are objective measures of economic and financial success, whether of nations or companies, such as gross national product, budget deficit, market share, profits and losses, revenues, and stock prices. Yet many factors on which these quantities depend are themselves ill defined. A Wall Street catastrophe could be triggered by a financial earthquake or a whispering campaign. Beliefs and rumors generated by stockholders, analysts, and speculators can induce fluctuations in price, stock, and currency markets that in turn feed back on the objective "fitness" measures. It is interesting to note that it has taken economists a long time to recognize the inherent complexity of their subject. For decades, the central dogma of economics revolved around stale equilibrium principles, in a manner entirely analogous to the application of equilibrium thermodynamics in physics, chemistry, and even biology. For the same reasons as natural scientists, many economists have sought to shoehorn all economics into theories whose merits are their mathematical simplicity and elegance rather than their ability to say anything about the way real-world economies work. [p333-336]
My Comment: Anyone who reads this book would conclude that this universe needs no God; that vivisection is pointless; that life should be respected; that the mind is the emergent behaviour of a brain; that we have no soul; and that understanding logic, non-linear science, and mathematics leads inexorably to these conclusions. Note from the above that altruism is not a divine gift but a necessary social strategy, and that there is no mystery about social cooperation: it is all a product of emergence and propagation. There is nothing peculiar about an eye that needs a special explanation; nothing about life needs "God" as an explanation.

KISS AND MAKE UP Look at the world's worst trouble spots and you can't fail to notice they have one thing in common: tit-for-tat attacks between warring parties. Escalation of violence is incredibly destructive, yet we humans find it very difficult to break the vicious cycles. It seems we are not good at conflict resolution. Perhaps we should learn a lesson or two from the spotted hyena. Indeed, no fewer than 27 species of primates, dolphins and even goats can settle an argument. So why do we find it so hard? [New Scientist]