The Idea of the Mind

The idea, when it first arrived, was very much like a bottle with a faded scroll. It made the most imperceptible thud on the soaked shores of my consciousness, but I felt that it might contain something important, and I got down to decoding the script with the fervour that attends the vague sense of doing something that would result in uncommon grandeur.

The image that is associated with its arrival is of the two hemispheres of the brain engaged in combat. Each had sprouted two hands with which they attempted to tear the other apart. Suddenly they stopped ...

One hemisphere says to the other, "Hey, I have an idea!"

"Oh and I'm sweltering in the searing heat of this historic occasion. I mean WOW, a brain has had an idea. Of course you've an idea. That's what we're meant for."

"No, but this one's diferent. I know how we can improve our intelligence."

"Sure, you're the one with the extra brains. I'll tell you what, you improve mine and I promise to do the same for you."

"Alright here goes."

The first hemisphere fiddles with the second hemisphere for a while.

The first hemisphere asks, "Do you feel, umm... more intelligent?"

"Oh I feel brilliant .... just brilliant. I wonder where I could find company more commensurate with my intellect."

The first hemisphere was no fool and understood the insinuation. Another mighty altercation arose between the hemispheres. Until...

"Hold on I say", said the second hemisphere, " I have an idea !"

*******

I laughed at it and had almost forgotten about it when, funnily enough, I took it seriously.

What did they mean when they said `improve' their intelligence? It must of course mean what `they' felt was intelligence and what `they' felt was an improvement. Could both of them just go on incrementing their intelligence this way? From being downright foolish, could they elevate themselves to an exalted level of wisdom? Could this incremental mechanism be behind the gradual development and expression of our consciousness? Isn't it the way civilization is actually progressing, by its parts improving other parts?

When I first took up the problem of consciousness, I wondered whether it was possible, even in principle, for us to `understand' it. Could Bugs Bunny actually come out of the television, examine the VCR and say, "Aha!"? I presumed that we could; otherwise there would be no point in the first place.

Looking at the existing work in the field, there was a lot being said for and against the Turing test. So that seemed a nice enough thing to try on the sparring hemispheres. In spite of conflicting arguments as to its validity, I found its reasoning quite sound. How do we know if anybody's conscious? We realise by now that our exterior form alone is not sufficient justification for attributing consciousness. So Turing put the computer behind a wall to eliminate any bias on the basis of form. I changed the Turing test a bit to make it sound better. Instead of putting the computer behind the wall, it's there right in front of you. The person now does not have to tell whether the entity in front of him is a human or a machine; he KNOWS that it is a machine.

What he must decide is whether the machine is conscious or not. The trick is that the person is knowledgeable enough, and too much a man of science, to say that the machine is not conscious simply because it does not look human. In the earlier Turing test we considered an average man to conduct the test. In this situation we ask a genius to do it. It is still a valid test. Just as a very small kid might presume that a programmed robot is alive and grow up to realize that, hey, he was wrong, an average person might mistake a genuinely conscious machine for a lifeless one, while the genius might realize that it is indeed alive. The genius in my case has as much information as is possible about what is going on inside the machine. It is true that he might still be mistaken about its consciousness, but that, taking the analogy of the kid, is the limitation of our knowledge as a civilization, not that of the test. Of course the test in this case seems a useless one, in that it does not tell us whether it is we who are at our wits' end or whether the machine is truly not alive. But look what happens when I use it.

What if two entities conducted the test on each other and are convinced that the other is conscious ? Can I deny them this mutual feeling between them ?

Of course the Turing test requires that the entity that conducts it be conscious. But I am not testing whether they appear conscious to ME. Indeed I cannot in most cases say that a certain entity to me looks alive.

What I am saying is that I cannot certify that they can't have such feelings for each other. And if each of them feels the other is conscious then each would feel quite satisfied with the other having conducted a perfectly valid Turing test on him and so they have confirmation for what they suspected all along, that they are both conscious !

What this would mean is that every kind of interaction I might witness is capable of being suspected of the above, and therefore we have a whole bunch of conscious beings. Frankly, who needs aliens?

Again, I cannot even assure myself that I can be aware of all kinds of interactions, and that does nothing to help matters.

What I'm increasingly coming to believe is that this `consciousness' was no big deal in the first place. That the `Turing test' is not really a test but in fact a basis for calling ourselves conscious, indeed for being in this state called `consciousness'.

How do I say I am conscious? I talk to myself. I listen to myself. And I am effectively measuring the same parameters as I would if I were trying to figure out whether another person is conscious. So what's happening is that two entities within me (which in fact in combination constitute me) interact and convince each other that they are alive. We might do that in gibberish as babies and in English (or any other language) as adults. And what happens when I'm interacting with someone else? One component in me listens to the other person's corresponding component that is talking, and the other component in me talks to the corresponding component that's listening in him. Or in other words, `I', made of the two components, has interacted with `he', also composed of two similar components.

And what of the components? Well, they each have two further entities within them that are doing at a lower level the same thing the two components are doing at a higher level!

But you might now want to say that `I' am actually composed of four components and so is the `he' and when `I' interact with `he' the counterparts are actually interacting with each other.

Sure ! For, as you might have anticipated, I intend to repeat this procedure even for the entities composing the components that make `me' and so on and on.

So in other words, I'm suggesting that all our interactions have important repercussions down to the smallest possible level. In fact, when I talk of splitting each component further and further into two parts, it's only because we can consider any interaction to be between two entities. For example, in an auditorium containing a hundred people we could consider an orator and the audience as separate entities. Considering an entity as consisting of two, four or indeed any number will not make any difference.

I am not even specifically talking about the brain. I intend my arguments to be general. Indeed, when I consider two entities interacting with each other, I impose no constraints that they be identical or that both be organised in the same way. Of course, since each human is organised in a similar manner, we have a special concept of consciousness of each other. And in fact the only one at present. What I am suggesting is that the two entities may be different, and their `consciousness' of the other, i.e. the way they look at each other, will also differ. The difference would not be substantial between two humans, would increase between a man and, say, a dog, and would become significant between a man and a rock. Yet a dog can be conscious of a man. And just as a rock may stimulate a man's senses, so can the man in some way influence the rock. What I'm trying to emphasise is the fractal nature of `consciousness'. The components of our brain can in fact be conversing with other components of the brain in one way, the components of another brain in a different way (which is however reflective of the fact that the brains are organized similarly) and the components of nature in a totally different way. And all these components are composed of further components, and so on.

But the way two units may interact within the brain will be radically different from the way the `counterparts' would interact when `I' and `he' would interact.

Yes it would be different. Different pathways that is. For example if you are talking to yourself, let's say the region that makes sentences and the region that is allocated with understanding language are interacting. All communication at present is attributed to neurons but according to me the interaction goes right down to the level of atoms and still farther below. You could consider all that goes on between the interacting parties as merely a pathway. So the two regions are essentially communicating with each other in the sense I'm talking about.

When you're talking to another person you'd again consider the intervening space to be merely a medium, and that includes the air, the ears, the neurons connecting to it, the works. So the part that understands language (in the other person) and the part that makes sentences (in you) when you are talking are interacting. All the rest can be considered a pathway. An analogy would be the way the internet works. Datagrams are repeatedly wrapped and unwrapped as they descend or ascend a network's hierarchy, en route to their destination. Yet for the computers that are communicating it is as if they were kept side by side. Again, we could talk of subsets of each of these regions which would themselves be engaged in conversation.
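That wrapping and unwrapping can be sketched minimally. The layer names and the dictionary representation below are illustrative assumptions, not a real protocol stack:

```python
# A toy model of encapsulation: each layer wraps the payload in its own
# header on the way down the hierarchy and strips it on the way up.
LAYERS = ["application", "transport", "network", "link"]  # illustrative names

def wrap(message):
    """Descend the hierarchy, adding one header per layer."""
    packet = message
    for layer in LAYERS:
        packet = {"header": layer, "payload": packet}
    return packet

def unwrap(packet):
    """Ascend the hierarchy, removing headers until the message emerges."""
    while isinstance(packet, dict):
        packet = packet["payload"]
    return packet

if __name__ == "__main__":
    sent = "hello"
    received = unwrap(wrap(sent))
    print(received == sent)  # the endpoints converse as if side by side
```

However deep the nesting, the two endpoints see only each other's message; everything in between is, in the terms above, merely a pathway.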

This explanation is only to show that my scheme is consistent with the brain's operation. But the way I see it, the brain in itself is nothing special. The code for `wrapping' and `unwrapping' information in the brain is not something extraordinary. It's just the way WE happen to do it. Our consciousness of other humans seems special only because all humans are similarly constructed and so we can empathize with each other in a similar way too. The WAY we are doing it may be a good way for the survival of the form of our species. Otherwise my basic philosophy is that any collection can be considered an entity, and its interaction with another entity must be considered proof of its being conscious. Simultaneously, the mass constituting one entity may even be partially shared by another entity. There is no enforced `twoness'. Any interaction can be considered as occurring between two entities whether or not we find it overtly clear. And these entities can further be looked upon as an ensemble of smaller ones.

Consider this analogy. You are a businessman who goes to the bank for a loan. You talk to the customer service officer regarding your requirement. Let's consider the officer and you. You might chat for a while, talk about family, talk about golf, things like that. Then you might talk about why you need the loan and how you'll manage to pay it back, and fill in some forms. You've actually conversed with both the person and the company. Some of what you said will reach other employees via forms, telephone, letters. You get a distinct feeling that you're dealing with a responsive company. And yes, the company too will have felt you, though in a different way. However, your empathy with the officer was almost complete. Both of you at least partially understood each other's compulsions. What's the problem in carrying this analogy several levels down?

In fact what I called an analogy is actually an example of what I'm trying to suggest. If two entities empathize with each other you can't deny them consciousness or life. And you can't decide what kind of interaction is not mutual empathy. That means you can take this whole thing right down to an electron swinging around a proton or quarks wiggling inside the proton, and up to entire galaxies crashing into one another. They can all be alive.

Surely this has got to stop somewhere. What would be the smallest possible entity ? What kind of interactions take place at this level and exactly what does an `interaction', in this scheme, mean in the first place ?

"IT HAS NO BEGINNING, IT HAS NO END."

- From the Gita (Ancient Hindu text) on the nature of the soul.

I'm not trying to be incantatory. The sentence truly summarizes what I believe is implied by my line of thought. That there is nothing that can be the smallest, for there shall always be something smaller. And by the same token there is nothing that can be the biggest, for there shall always be something bigger.

I don't know what kind of physics rules the microscopic and the macroscopic domains or how they might be related. Particle physicists keep finding smaller and smaller particles, yet I don't know of any proof regarding the smallest possible particle. On the macroscopic scale, EVERYTHING constitutes the Universe, and to us it will always be infinite in its size and extent. Nobody, as far as I know, claims that time is quantized (whatever that may mean). To us there can be as small an instant of time as required. How do interactions take place at all scales? I do not know. The particle physics/quantum mechanics model of interactions at the microscopic scale is very involved and not completely validated. My definition of an interaction in this scheme is a mutual change effected between two entities. When I use the word `change', I mean any change, even one we cannot be aware of. There is, according to me, no change that does not affect us... all changes and interactions affect us and our awareness even though we might not be in a position to point it out. The specific nature of these interactions - these two-way communications (especially those that are not obvious and occur at the extremities of our consciousness, at the minutest level or at gargantuan scales, and those interactions which occur in the fraction of an instant or take millions of years...) - is however unclear to me.

Yet I have a simple explanation.

Consider a computer calculating an iteration procedure. Let me denote it as-

STATE_next = FUNCTION(STATE_prev)

It is, in fact, what any computer would do if there were no humans to disturb it.

The evolution of STATE would obviously depend upon the function and the initial STATE. The STATE would simply be a vector of some state variables. Let us consider this to represent a virtual universe within the computer. However, in order that it may be called a universe there must be `someone' to call it a universe. Obviously it would not represent a universe to us. `Who' then would call it a universe? Let's say that the people within the universe realize that they are in a universe!

To illustrate what I mean, let's consider the state of the computer at a particular clock cycle. It would represent a frozen moment in this virtual universe (VU). By the next clock cycle it would have changed according to the function. This would be another frozen moment in the VU. Now a collection of such VUs, one after another, would generate a `living' VU.
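This can be sketched in a few lines. The state vector, the update rule and the number of steps below are placeholder assumptions; any function and any initial STATE would do:

```python
# A toy `virtual universe': a state vector evolved by repeatedly applying
# a fixed function, each application producing one `frozen moment'.

def update(state):
    """Placeholder FUNCTION: each variable is averaged with its neighbour."""
    n = len(state)
    return [(state[i] + state[(i + 1) % n]) / 2.0 for i in range(n)]

def evolve(initial_state, steps):
    """Collect the sequence of frozen moments - the `living' VU."""
    history = [initial_state]
    for _ in range(steps):
        history.append(update(history[-1]))
    return history

if __name__ == "__main__":
    moments = evolve([1.0, 0.0, 0.0, 0.0], steps=5)
    print(len(moments))  # 6: the initial moment plus five iterations
```

The list of successive states is the VU frozen moment by frozen moment; run one after another, they are its history.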

The iteration would produce, let's say, `objects' within this VU. By objects I mean entities representing sub-iterations within the overall iteration being executed. The objects of object-oriented programming would be an appropriate analogy, although where object-oriented programming strives to build objects that reflect real-life objects, my `objects' arise purely as part of the bigger iteration.

Would these objects interact ? Can these objects be conscious ?

It is here that I can further clarify the meaning of an interaction. An interaction is simply an iteration of the VU. It is the iteration which creates time and consciousness. Indeed these objects could be considered as interacting with each other. And therefore by my reasoning they ought to be conscious of each other.

Would these objects consist of further objects ?

They would of course contain smaller objects. It would appear to us that, since our computer has limited resources, there could not be smaller and smaller objects within it. And similarly that our computer's limited memory would certainly not allow an infinite size for the VU. However, it is not WE who live within the VU. For us it is not a universe. But can it not be that the `denizens' of the VU see, realize and are conscious of the infinite vastness of their universe and the infinite minuteness of their constituents? They too could find laws that seem to govern their existence. They too might find that `moving' too fast starts imposing a restraint on them (because the computer can't move them around at arbitrarily high speed). If they tried to look deeper and deeper within themselves they would find an odd form of lumpiness. Or maybe it would differ depending on the function we choose. It's analogous to Bugs Bunny looking out into the sky and seeing an endless universe even while he is enclosed in a 30-inch TV. Or... for two computers playing chess against each other there is an endless universe of chess moves. And why can't the citizens of the VU be conscious of each other?

It should be very clear that this VU does not represent anything but an algorithm to us. We can never empathize with the algorithm. Why should this be true ?

The reason is that a computer is a machine, and to call something a machine it must have the important property of being TOTALLY within our control. It should have NO free will of its own (of course we loosely call objects that are not totally under our control machines, but a strict definition will have to be the one above). Strictly speaking, this condition cannot be satisfied by anything. We cannot have anything under our complete control. There shall always be some aspect of it outside our domination. Let's however consider the computer. As a physical entity it is very much a part of `our' world, its components interacting just the way everything does. We cannot control its minutest behaviour (i.e. right down to the atoms and beyond). But we do have full control over the contents of its registers and memory at any given point of time.

There might be minute fluctuations going on in the voltage levels, but the binary value they represent is unambiguous. This representational scheme, not the actual physical computer, can be considered a machine. It will necessarily have only a finite number of different states. Due to this, the machine would require a sequence of states to say anything different. As soon as it has used all the combinations available in a sequence of a particular length, it will have to increase the length of the sequence if it is to express something that it has not already said before. If the machine were allowed to express itself in sequences of any length then of course it could give infinite responses. But the computer's resources, as WE see them, are limited. So if an entity of `this' world interacts with an algorithmic entity iterating within a computer which itself is in `this' world, then quite soon the computer would begin to take increasingly more time to give a novel response as we do. Our interactions in this world are different from the previous ones because they are composed of infinitely many smaller signals and interactions, and therefore an algorithm running on a computer would never be able to cope with this diversity. It would also not be possible for the computer to simulate `this world', because even if you were to collect all its states from any number of clock cycles, they still would not constitute even the minutest fraction of an instant of `this world', since any given time duration in `this world' can be broken down into infinitely many instants of time (I can't prove this, it just seems obvious).

However, as I've already mentioned, for the entities within the VU the scenario will be quite the same as it is for us. They too would perceive that time could be split infinitesimally, even though it might not be possible for them to measure it because of `their' version of an uncertainty principle. This brings us to the possibility of considering ourselves as being inside a machine in, let's say, `God's' world. Our interactions and our existence are the result of some algorithm running in that machine. We could of course fall into the infinite loop of asking who made God and so on, but I'll ignore that for the moment.

The main reason why we think that it is possible to construct minds as algorithms running on computers (that is, finite state machines) is that all kinds of communication between people can be put in a symbolic or digital format. After all, people talking on a phone have a digital medium between them which does not prevent them from understanding each other. Also, any kind of sound or visual output a person presents can with today's technology be precisely denoted in 1's and 0's and retrieved with almost no noticeable discrepancy for a human audience. Any kind of quantisation involves an error vis-a-vis the original analog signal. This error can be made very small thanks to present-day technology, but it cannot be altogether eliminated. So when this quantisation does not produce any significant effect in our perception, is it not reasonable to conclude that extremely weak processes and signals are of no consequence in evoking consciousness? That there is no rationale in considering interactions that go down to the smallest possible scales? That the brain too is really performing a discrete algorithm at some level? And if all that is true, why can't a computer make those 1's and 0's? Across a digital telephone line, for example, in a given time duration, say a minute, there will only be a limited number of bits, say k bits, transmitted, and therefore in each one-minute segment only 2^k different things can be said, whether by a computer or a human. For all practical purposes the entity at the other end of the telephone is a finite state machine. If people talk for 2^k minutes then they'll necessarily be repeating earlier states. It's just like language. We use a fixed number of words, but that is complicated by all sorts of visual and audio inputs. Across the digital telephone line it's precisely the same signal being repeated for an entire minute! So why can't a finite state machine masquerade as a conscious being?
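The counting argument can be made concrete. The bit budget below is an illustrative assumption (a real line carries far more than 8 bits a minute), but the pigeonhole reasoning is the same at any rate:

```python
# Pigeonhole sketch: a channel carrying k bits per minute can convey at
# most 2**k distinct one-minute messages, so any conversation longer
# than 2**k minutes must repeat at least one of them.
from itertools import product

k = 8  # illustrative bit budget per minute
messages = set(product("01", repeat=k))  # every possible minute of talk

minutes_talked = 2 ** k + 1
# With more minutes than distinct messages, a repeat is unavoidable:
print(len(messages), minutes_talked > len(messages))
```

Scaling k up changes the numbers, never the conclusion: a finite alphabet of states eventually forces repetition.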

It is a question for which no satisfactory answer has been provided, only arguments that try to make one feel better one way or the other. Here's my (puerile?) argument as to why computers don't fit the bill.

Let's say we have a computer which we wish to test for consciousness. Let us perform the Turing test on this venerable piece of equipment. Either the machine or a person is sitting behind a screen, and our expert has to determine who's who by typing questions and viewing the answers on a computer terminal.

The expert in this case is smart and has an exact copy of that machine sitting right next to him. Since we don't know whether the machine is conscious or not, the one sitting next to us is given the benefit of the doubt and considered a `helper'. Nothing wrong about that. One must allow our expert all possible help in doing his thankless job of asking inane questions. The expert now simply keeps saying the same things to both the `entity' behind the screen and the duplicate machine, taking care that his entire interaction with the duplicate is done, say, a minute earlier. If the machine is behind the screen then he knows in advance EXACTLY what it's going to say. That would indicate that the entity behind the screen has no free will. It has to say what the expert already knows it's going to say. This would not be possible for the human subject, however.

Even if a duplicate (cloned?) human brain were available to the interrogator, it would not give precisely the same response as the human subject behind the screen. The human brain has been shown to exhibit chaotic behaviour even at a sensory level. Chaotic phenomena in the brain are not simply background noise but actually contribute to behaviour in ways that are only now becoming tangible to us.

That puts paid to any attempts by a computer to duplicate brain processes PRECISELY. Even if the brain were executing a simple iterative function like X(n+1) = 4*X(n)*(1-X(n)), no computer can have enough memory to store the value of X precisely. For example, if X(0) were irrational the computer could only store part of it, causing an initial truncation error. If X(0) were rational then, even though X(n) would remain rational throughout the iteration, the precision required for representing the values would keep increasing, and some rational number which requires a larger capacity than the computer can provide must be truncated. In fact, since the number of values the computer can hold is limited, after a certain number of iterations the value of X would return to some point it had already visited, and hence X would actually be following a limit cycle.
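Both effects, the amplified truncation error and the forced limit cycle, can be demonstrated in a few lines. The 4-decimal precision and the iteration counts are illustrative choices standing in for any finite machine:

```python
# Iterating the logistic map X -> 4*X*(1-X) on a machine that keeps only
# 4 decimal places. Two consequences of finite precision are shown:
# (1) the truncated trajectory diverges from the full-precision one, and
# (2) with at most 10001 representable values, it must enter a limit cycle.

def logistic(x):
    return 4.0 * x * (1.0 - x)

# (1) Divergence from the full-precision trajectory.
x_full = x_trunc = 0.2
for _ in range(40):
    x_full = logistic(x_full)
    x_trunc = round(logistic(x_trunc), 4)  # crude finite-precision machine
print(x_full != x_trunc)  # the tiny truncation error has been amplified

# (2) The truncated iteration revisits a value and cycles forever.
seen = {}
x, step = 0.2, 0
while x not in seen:
    seen[x] = step
    x = round(logistic(x), 4)
    step += 1
print("cycle length:", step - seen[x])
```

The loop in part (2) is guaranteed to terminate: only 10001 four-decimal values exist in [0, 1], so some value must recur, and from then on the sequence repeats exactly.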

If the brain is following a chaotic differential equation then the computer has no hope whatsoever. Any numerical evaluation of a differential equation involves a time step. That entails error not only due to truncation but also because we skip evaluation for some time period no matter how small it is.

Therefore it is well beyond a computer's capability to ape a chaotic process totally, even though computers may be the strongest tools for studying chaos. The upshot is that since there is no means, computational or otherwise (for even if an artificial brain were made, we could never set it exactly as another one), to predict precisely what a brain may do next, WE at least cannot deny that we have a free will. And that is an essential feature of consciousness... to see, feel, hear and respond as we `feel', not as someone else could have predicted. The representational form of the computer, that is, what is denoted by the contents of its registers, is, in my view, forever doomed to be unconscious.

But if a computer was made to follow a very complicated and intricate limit cycle resembling the brain's strange attractor (in say a phase space consisting of the inputs and outputs of the brain) and one which repeats itself say once in a million years couldn't we say that for all practical purposes, the computer is alive and has free will ?

It might seem so. But the truth is that WE do know that it is actually not conscious. If I may invoke my favourite cartoon character once again, an analogy would be that the cartoonist makes a million year long Bugs Bunny series. As long as it runs it really does look like there is a real person on the screen, but what happens after the million years... the video suddenly snaps shut and Bugs is exposed for what he was, an effort to copy life. The cartoonist KNOWS that Bugs Bunny is not alive.

Similarly, if Deep Blue plays good chess, it's not `ITS' consciousness or intelligence that is on display but its designers' skills, time and effort.

There is nothing spontaneous. Wouldn't Kasparov have won if he had an identical Deep Blue predict the outcomes of different options before playing every move ? On the other hand even if Deep Blue had another person of the stature of Kasparov helping it, that person could never predict *exactly* what Kasparov was going to do. Therefore an algorithm running on a finite state machine can only be an approximation to reality.

In another perspective, what I am suggesting is that whenever one has full information about a certain entity, that entity cannot appear conscious to him. Gary Kasparov might have felt that Deep Blue had a personality, but the engineers who designed it know it for what it is worth. Schroedinger's cat is `alive plus dead' for scientists outside the room, but for a person inside it is merely either alive or dead. Since computers in their present form are entities about which we have full information, they cannot appear conscious to us.

The Construction

So how does one make consciousness ?

`IT WASN'T BORN SO IT CANNOT DIE'

- From the Gita on the soul's immortality.

One cannot. At least I think we can't.

Because it's already there... all around us. All we have to do is to mould whatever we can see, hear and feel into something we recognize as `alive'... as conscious. Whether it happens in a mother's womb or in a research laboratory, the end result is the same. Something that recognizes us as we recognize ourselves and other people.

So what can we use to make a brain that we can recognize ?

In making a brain we'll have to avoid the pitfall of designing something similar in form to the computer, finite and discrete. Mankind's prowess over the electrical form of energy makes it very tempting for us to choose it as the force behind an artificial brain. The electronic brain. Though, according to my reasoning, it could just as well be done with pendulums.

The best way to begin, I believe, would be to take an electrical circuit which is chaotic. This unit would satisfy the requirement of not being representational in form. It would be affected by interactions right down to the minutest scales (due to the property of chaotic systems of being extremely sensitive to the initial conditions of parameters with a positive Lyapunov exponent). It would, according to what I have said, be the ideal building block for an electronic brain.

The thing to understand is that if two of these devices are allowed to communicate, we cannot deny them consciousness of each other at the device level. So if we were to just populate a breadboard with these devices, I argue that they should self-organize into a community. A collection of such communities, and so on, *could* result in something which we can verify as being conscious.

Let's briefly consider chaos. There are two ways to look at it: the theory of chaos and the phenomenon of chaos. The theory is simply a mathematical artifice. The phenomenon by itself appears to be some random process. The theory, when used to describe real life phenomena, becomes a model. But can any model hope to describe a real life chaotic phenomenon when theory dictates that even the smallest change in the initial conditions leads to different results?

Consider Chua's oscillator, which exhibits chaos and for which there exists a mathematical proof to show that it should indeed exhibit chaos. However, the model is based on other models of electronic components. None of them, not even one as basic as V = i*R, is EXACTLY the way those components behave. So any model can only indicate whether the process might be chaotic and what shape it is likely to assume, but it might fail to indicate the presence of chaos because of factors unaccounted for. How small can these unaccounted factors get? In real life phenomena it is likely that processes at the lowest levels are responsible for what is apparent at higher levels. No model has as yet accounted (or possibly ever can account) for everything. So it might be that chaotic behaviour at the macroscopic level does in fact reflect the quantum mechanical attitudes at the nanoscopic levels. And it is probably this that the computer loses out on.
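As a numerical illustration of that sensitivity, here is a sketch integrating the standard dimensionless equations of Chua's oscillator for two trajectories started a hair apart. The parameter values are the commonly used double-scroll set; the step size, duration and perturbation are illustrative choices, and the simulation is of course itself only a model of the circuit:

```python
# Two copies of Chua's oscillator, started a hair apart and integrated
# with classical RK4. The separation between them grows until it is of
# the order of the attractor itself: sensitivity to initial conditions.
ALPHA, BETA = 15.6, 28.0       # standard double-scroll parameters
M0, M1 = -1.143, -0.714        # slopes of the piecewise-linear resistor

def chua(state):
    x, y, z = state
    fx = M1 * x + 0.5 * (M0 - M1) * (abs(x + 1) - abs(x - 1))
    return (ALPHA * (y - x - fx), x - y + z, -BETA * y)

def rk4_step(state, dt):
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = chua(state)
    k2 = chua(shift(state, k1, dt / 2))
    k3 = chua(shift(state, k2, dt / 2))
    k4 = chua(shift(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

a = (0.7, 0.0, 0.0)
b = (0.7 + 1e-6, 0.0, 0.0)     # differs only in the sixth decimal place
dt = 0.005
for _ in range(10000):         # 50 dimensionless time units
    a, b = rk4_step(a, dt), rk4_step(b, dt)

separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(separation > 1e-3)       # the microscopic difference has exploded
```

The two trajectories remain on the same bounded attractor, yet end up nowhere near each other, which is exactly the behaviour no truncated model can track for long.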

But how can I guarantee that a population of such chaotic devices interacting amongst themselves could ever appear conscious to us ?

I cannot guarantee that it will appear conscious to us in that special way that other humans do. There might be some structural quirks that preclude certain behaviour essential for us to certify its consciousness. It would be akin to the fact that it is impossible for one to feel, beyond a certain level, any empathy with, say, a dog. Yet I do claim, crudely speaking, that by altering its structure so that its strange attractor comes to reflect that of the brain, one can expect... recognizable consciousness. The brain is only one specific form, and our effort would go into nudging our breadboard entity towards it. Such tampering would be next to impossible in a dog's brain but is rather less gory on a breadboard.

To that end we must delve a little into the construction of our brain. The most significant feature is that its basic structure is decided by our genes so that we may be compatible with each other (to my mind that is the MOST important function of genes) and appear conscious to each other. The finer detail is decided by external inputs. In my scheme, however, development or construction from recorded instructions like genes has no special meaning; that is merely how it appears to us. In my scheme these would simply be interactions on another scale. Yet for making an electronic brain, it is the basic structure constructed from our specific genetic instructions that we need to duplicate on our breadboard, because without it the electronic brain will never be able to `learn' (I use `learn' to mean specifically human learning, for it is this special behaviour that needs to be emulated) quickly enough in the particular way that our brains do (which, as I have already mentioned, is no big deal but for the fact that it is the way WE do it). Universality in certain chaotic systems like the Chua circuit is a great help. We know at least that the chaotic behaviour of neurons might be duplicated in electronic circuits. Not that my strategy would involve doing that, but it does show that it would not be impossible for recognizable consciousness to arise on our breadboard.

My scheme allows an interesting alternative. Since the basic framework encoded in our genes is nothing but the result of interactions, albeit at another scale, why can't we let the breadboard organize itself merely on the basis of external input and its response to it? That would involve giving extreme `flexibility' to the devices at the beginning and slowly reducing it as the breadboard organizes itself according to `us'. It is no different from what actually happens in a child.

Its brain is, to begin with, very flexible, but as it gains experience its brain becomes more and more rigid, shedding extra neurons on the way. Of course it will maintain a certain amount of flexibility all its life. But the child learns much faster than our breadboard would, because it already has the basic framework with which to cope with human consciousness, and that basic framework itself was probably encoded in our genes in response to our requirement for faster learning and compatibility. In the absence of that basic framework we must provide our breadboard entity with even greater flexibility, so that it can, so to speak, `copy' the strange attractor of human consciousness (forgive my crudity once again) by experience.

I have an idea as to how we might implement this as an electronic circuit.

Let's, however, look at an analogy first. Consider a pendulum whose bob can move freely over a certain range along the pendulum's rod. If we force the pendulum to oscillate at a certain frequency, then, if possible, the bob will slide and position itself so that the natural frequency of the pendulum matches the frequency of the forced oscillations. If the bob could move along the rod without friction this would happen quickly; with some friction it might take time. The pendulum has therefore adapted, to the extent it could, to the frequency forced upon it. If the frequency itself varies rapidly, the bob would be hard put to match it instantaneously.

Let's, however, change our pendulum into one that is chaotic. A chaotic pendulum is one consisting merely of two or more rods connected at the ends, with a bob at the end of the last rod. This pendulum, left to oscillate on its own, will do so chaotically. Suppose again that the bob is free to move within a certain range along its rod. Let us fix the bob, say, 0.5 cm above its lowest position and record the motion of all the rods. Now we let the bob move freely, starting in its lowest position, and play back the recorded motion, forcing it upon each rod of the chaotic pendulum. Again we can expect that, in order to match the shape of the attractor (of the signals on each rod), the bob will rise and settle 0.5 cm above its lowest position along the rod. The pendulum has in fact matched the attractor. Now if the attractor forced upon the pendulum were derived from a more complicated chaotic pendulum, say one having 15 rods connected in some fashion, then the complexity of the signal derived from any three rods of this complicated pendulum would be too much for the three-rod pendulum. But we can expect that if we repeated the experiment with a 15-rod pendulum of our own, it would adapt. If we extrapolate this trend we can deduce that a system with as much complexity, in terms of state variables, and flexibility, in terms of variable parameters (such as the moveable bob in this case), as another complex system can copy that system.
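The simple (non-chaotic) half of the analogy can be sketched directly. Here the bob's position, i.e. the effective length, sets the natural frequency, and a toy relaxation rule slides it so as to reduce the mismatch with the forced frequency; the rule and its rate are my own stand-ins for the physical resonance-seeking behaviour described above.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def natural_freq(length):
    # Small-angle natural angular frequency of a pendulum of this length.
    return math.sqrt(G / length)

def adapt_length(length, forced_freq, rate=0.01, steps=5000):
    # Toy relaxation rule: slide the bob (change the effective length)
    # in the direction that reduces the frequency mismatch.
    for _ in range(steps):
        length += rate * (natural_freq(length) - forced_freq)
    return length

forced = natural_freq(1.0)           # drive at the frequency of a 1 m pendulum
settled = adapt_length(0.3, forced)  # bob starts at 0.3 m; should settle near 1 m
```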

But what if we don't know how to construct a complex system similar to an existing natural complex system, so that it may adapt to it? This is the case with our brain. It is here that electronic chaotic circuits come into play. It has been shown that the vector field of Chua's oscillator is topologically conjugate to the vector fields of a large class of 3-D systems. Therefore, by merely adjusting some parameters of this circuit, one can obtain any chaotic phenomenon observed in other systems with 3-D vector fields. So if the signals of a chaotic pendulum were somehow forced upon this electronic oscillator, and we had some mechanism by which the capacitances, the inductance and the characteristics of the Chua diode (which is part of the oscillator) could vary, then they would adjust themselves to match the behaviour of the pendulum! (I have done no experiments on this, nor have I consulted any authority on whether any such thing might happen. It just seems intuitive.) Furthermore, I believe that by linking up many such circuits one may get an electrical network with the capability to adapt to any n-dimensional vector field. That is the essence of my strategy to emulate the brain using electronic circuits... and its intuitiveness arises from the reasoning I presented at the beginning of this long letter.
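As a crude stand-in for components physically adjusting themselves, the same idea can be shown by brute force: record a short trajectory from a "target" Chua circuit, then search over the parameter alpha for the value whose own trajectory best matches the recording. This is only a sketch of the principle of parameters adapting to an imposed signal; the real proposal is for the adjustment to happen physically, not by search.

```python
def chua_deriv(state, alpha, beta=28.0, m0=-8/7, m1=-5/7):
    # Dimensionless Chua's circuit equations, repeated here so that the
    # sketch is self-contained.
    x, y, z = state
    f = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))
    return (alpha * (y - x - f), x - y + z, -beta * y)

def trajectory(alpha, state=(0.7, 0.0, 0.0), steps=2000, dt=0.001):
    # Record the x-component over a short horizon (Euler integration).
    xs = []
    for _ in range(steps):
        dx, dy, dz = chua_deriv(state, alpha)
        state = (state[0] + dt * dx, state[1] + dt * dy, state[2] + dt * dz)
        xs.append(state[0])
    return xs

target = trajectory(15.6)  # the "imposed" signal, from an unseen alpha

def mismatch(alpha):
    return sum((a - b) ** 2 for a, b in zip(trajectory(alpha), target))

# Brute-force parameter adjustment: pick the alpha with least mismatch.
grid = [14.0 + 0.2 * k for k in range(16)]  # 14.0 .. 17.0
best_alpha = min(grid, key=mismatch)
```

The grid search recovers the target's alpha because, over a short horizon from identical initial conditions, the mismatch is smallest at the true parameter.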

The "flexibility" of these circuits will obviously play a key role in determining their success in appearing conscious. Flexibility, in the pendulum analogy, corresponds to the friction faced by the bob along the rod, i.e. the `difficulty' of changing its position, and to the range over which the bob can move. Similarly, flexibility in the electronic "being" (ebee) would be determined by the range over which its components can vary and the ease with which they can accomplish that change. That would require components such as capacitors, inductors and active components like the Chua diode to vary and adjust according to the imposed signal. This flexibility ought to be substantial initially, and also capable of being reduced in some global manner so that, as discussed previously, when a basic framework has emerged we can decrease the ebee's flexibility in order to retain its gains.
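This globally reducible flexibility can be sketched as nothing more than a decaying adaptation rate: while flexibility is high, a parameter locks on to whatever is imposed on it; once the flexibility has been annealed away, a new target barely moves it. The numbers below are arbitrary.

```python
def track(param, targets, flexibility=0.1, decay=0.99):
    # Each step corrects a fraction `flexibility` of the remaining error,
    # then the flexibility itself is reduced globally.
    for target in targets:
        param += flexibility * (target - param)
        flexibility *= decay
    return param

early = track(0.0, [1.0] * 500)  # flexible phase: locks on to the target 1.0
late = track(early, [2.0] * 500, flexibility=0.1 * 0.99 ** 500)  # annealed phase
```

After the flexible phase the parameter sits essentially at 1.0; in the annealed phase the new target 2.0 hardly budges it, i.e. the gains of the first phase are retained.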

I envisage many millions of flexible chaotic oscillators connected by means of adaptive filters, so that changes in signal frequencies automatically result in a form of switching action, changing signal routes along same-frequency paths. The adaptive filters would adjust themselves to the band in which the local oscillators operate, making frequency pathways that connect many nodes at once and with different weights. The signal would, in a way, be "grabbed" by the route which offers it the least resistance. It is not required that all nodes (each consisting of one or more oscillators) be connected to each other, or even connected in any particular way initially. However, a certain density of connections must be in place so that the ebee is not hampered `structurally', i.e. enough connections must exist for it to have the capability to adapt to an attractor in an n-dimensional vector field. Some nodes may be taken as inputs and others as outputs, whose output would in some way change the `environment' and be fed back via the input nodes. The `environment' as seen by the ebee consists merely of the signals provided by its input nodes; nothing else has any meaning for it. Of course, the effect of the surroundings on the ebee and the power supplies for all its active components are part of the input/output.
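A toy version of this frequency routing (all names and numbers are my own, for illustration only): estimate a signal's dominant frequency by counting zero crossings, then hand it to the node whose band centre lies nearest, the "least-resistance" route.

```python
import math

def dominant_freq(samples, sample_rate):
    # Crude zero-crossing estimate: a sine of frequency f crosses zero
    # about 2*f times per second.
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if a < 0 <= b or b < 0 <= a
    )
    duration = len(samples) / sample_rate
    return crossings / (2 * duration)

def route(samples, sample_rate, nodes):
    # Hand the signal to the node whose band centre is closest -- a toy
    # stand-in for adaptive filters offering the least resistance.
    f = dominant_freq(samples, sample_rate)
    return min(nodes, key=lambda centre: abs(centre - f))

rate = 1000
signal = [math.sin(2 * math.pi * 5 * t / rate) for t in range(rate)]  # 5 Hz
chosen = route(signal, rate, nodes=[2.0, 5.0, 11.0, 23.0])
```

The 5 Hz signal is routed to the node centred on 5, even though the crossing count only estimates its frequency roughly.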

In order that the ebee is indeed guided towards human-like consciousness, we should obviously provide it with the same inputs that a human gets and interpret its actions (output) as ours are interpreted.

In order to put this to the test, we could first create a simplistic artificial environment for the ebee, since putting it directly into `our' world would be more than we could cope with at this rudimentary stage. So we could design, let's say, a 2-D world consisting of objects and creatures with certain properties. All creatures would have identical appearances in this virtual world, so that we avoid any aberrations due to differing perceptions of each other. One of these creatures shall be represented by the ebee, while the others shall be driven by actual humans. In a 2-D room we could have three creatures, two backed by humans and the third being the ebee.

The room is also stocked with rudimentary `things' like squares, circles, L-shaped objects, etc. It also has something akin to light, which keeps updating each creature's point of view (POV). Each creature may be given some `input' ports where it can receive light or physical pressure, and output points which help the creature alter its environment. We could implement simple laws of physics in this environment. The two humans shall be seated in front of panels showing their respective POVs, equipped with controls that manipulate their output ports in this 2-D virtual world. In the case of the ebee, the input and output points are merely connected to its corresponding nodes. We now begin the experiment: the humans and the ebee start interacting in this limited and controlled environment.
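A minimal sketch of such a 2-D room (every class and number here is hypothetical, chosen only to make the setup concrete): each creature has a light-intensity input port and a movement output port, and the world's step applies the outputs and returns each creature's new POV.

```python
class Creature:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def sense_light(self, light_x, light_y):
        # Input port: light intensity falling off with squared distance.
        d2 = (self.x - light_x) ** 2 + (self.y - light_y) ** 2
        return 1.0 / (1.0 + d2)

class World:
    # A minimal 2-D room with a single light source; creatures act by
    # emitting (dx, dy) moves through their output ports.
    def __init__(self, light=(0.0, 0.0)):
        self.light = light
        self.creatures = []

    def step(self, moves):
        # Apply each creature's output to the environment, then return
        # the new input (POV) each creature receives.
        for creature, (dx, dy) in zip(self.creatures, moves):
            creature.x += dx
            creature.y += dy
        return [c.sense_light(*self.light) for c in self.creatures]

world = World()
world.creatures = [Creature(3.0, 4.0), Creature(-2.0, 0.0)]
before = [c.sense_light(*world.light) for c in world.creatures]
after = world.step([(-1.0, -1.0), (0.0, 0.0)])  # first creature moves toward the light
```

Whether the mover is backed by a human panel or by the ebee's nodes makes no difference to the world; both see only input-port signals and act only through output ports, exactly as the text requires.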

My belief is that the ebee, initially with great flexibility, will quickly learn the ways of this 2-D world. In the ebee's electronic network, frequency pathways will form, intermediate nodes becoming mere relay stations. But it would also very quickly lose what it has learnt. If, however, we slowly keep reducing the flexibility of the ebee, it should soon become a `wise' and experienced being which, while retaining the capability to learn new ideas in terms of what it already knows, becomes increasingly limited in understanding radical new changes. (Again I must repeat that when I use "wise", "ideas", "learn", etc., I mean these terms only in the particular way that we know them and in the particular way that they are applicable to us, not as terms which have a meaning over and above what we have given them, i.e. they are used purely in a human context. Therefore when I say that the ebee is becoming wise, I mean that it has started to exhibit behaviour which WE understand as being wise; if the ebee has an idea, I mean that it looks to US as though it has an idea; and if I say it has learnt, I only mean that that is what WE feel about its actions.) Providing a large number of nodes to begin with helps the important nodes use some nodes as mere relay stations to other nodes they `want' to talk to. In this limited environment the ebee, I believe, would appear conscious to both the human subjects and ... vice versa.

No two ebees will be the same. Each will have its own unique personality, for when I said that they would adapt to their environment, I did not mean that they would (or could) ape its exact behaviour. The environment will always be of a much greater complexity than the ebee can ever hope to be. When it interacts with other creatures of similar complexity, it apes them, because it receives the same inputs as they do (because we arrange it to be so) and its output is treated in the same way as theirs. But all the while this interaction is embedded in the overall environment, which is far more complex than either; and although two separate ebees should converge to the same average behaviour, the same average appearance, they shall have their own distinct personas, as each would adapt initially in a slightly different way, and these differences would accumulate to give each its unique personality. Therefore several ebees will appear to be doing what *they* want within a bigger framework of what they all must do.

I have not given any specific circuit because there are so many alternatives, many of which I don't even understand very well. The possibilities abound not only in VLSI electronic circuits but also in other areas such as nanotechnology. One could imagine variable inductors, capacitors and diodes manufactured using both semiconductors and nanoscale moving parts, capable of reacting to imposed signals by adjusting their positions accordingly. Many other techniques are available, like circuits that grow by self-organization, since there is no specific design or hardwired code to be followed; controlled self-organization might even provide us with a DNA-like control over the construction of the ebee. Then there is the possibility of somehow using electro-polymerization, i.e. electricity-induced growth (polymerization) of conducting polymers, to make adaptive electronic elements or to dynamically make connections between nodes. Similarly, one could imagine wires being grown under electronic control. The construction of a device like the ebee, though certainly not trivial, is I believe within the scope of present-day technology, even though it will require innovation in both manufacturing technique and materials.

My overall strategy in the construction of a device that appears conscious to us derives from the fact that it would take too long to learn all the key features of our brain's neurodynamics in order to duplicate them artificially. Instead we must set a thief to catch a thief, i.e. use one complex system to copy another complex system. That indeed is the rationale behind neural networks. In fact my scheme does resemble a simulated annealing scheme implemented on a neural network. To highlight the difference we must separate the neural networks running on computers or digital ICs from analog neural networks. I have already shown why I feel that ANY algorithm running on a digital platform is incapable of `being'. The analog implementations of neural networks, though, are ALL capable of being conscious according to my scheme. The difference is in the aim and the implementation.
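For the reader unfamiliar with it, the classic simulated annealing scheme looks like this: random proposals are accepted even when they make things worse, with a probability governed by a temperature that is gradually cooled, so the system "freezes" into a good configuration, much as I propose freezing the ebee's flexibility. The objective and all numbers here are toy choices.

```python
import math
import random

def simulated_annealing(f, x, temp=2.0, cooling=0.999, steps=5000, seed=0):
    # Classic scheme: accept uphill moves with probability exp(-delta/T),
    # and cool T so the system gradually freezes into a good state.
    rng = random.Random(seed)
    best_x, best_f = x, f(x)
    fx = best_f
    for _ in range(steps):
        candidate = x + rng.uniform(-0.5, 0.5)
        fc = f(candidate)
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = candidate, fc
            if fc < best_f:
                best_x, best_f = candidate, fc
        temp *= cooling
    return best_x, best_f

# Toy objective with its minimum at x = 3.
best_x, best_f = simulated_annealing(lambda v: (v - 3.0) ** 2, x=0.0)
```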

Conventional neural network models are not aimed at being `alive' but at getting better at certain tasks, and therefore their response, as well as their worth, is viewed as such. The implementation of these neural nets is hence directed towards that specific objective. Just because the neuron exhibits some kind of threshold function, all neural nets are compelled to have one too; some intrepid ones even have `adaptive' thresholds. Therefore current analog neural networks, with `restricted' and `inflexible' structures focusing on very narrow subsets of human consciousness, will have a hard time becoming `verifiably conscious' anytime soon.

Thus I present my idea for the meaning and artificial evocation of consciousness.

Sachin Gupta

1st March 1998
