"Nothing is easier than to familiarise one’s self with the mammalian brain," wrote William James in a footnote to his Psychology. "Get a sheep’s head, a small saw, chisel, scalpel and forceps (all three can best be had from a surgical-instrument maker) and unravel its parts."

Nothing quite so confident as that was written for most of this century, until about ten years ago, when the prospect of discovering where our minds are in our brains suddenly began to seem real again. Behaviourism was discredited; new computerised techniques made it possible to study living brains at work; and, though this last factor is hardly admissible, a generation of bright scientists was coming into power who had played enough with LSD to realise that consciousness could not be explained away, yet is intimately bound up with the physical world.

Unravelling the parts of the brain has come a long way since then, using much more delicate techniques in the last twenty years. But the new discoveries, however overwhelming, still seem unsatisfying: what really interests us about brains is where they stop, and where minds start. Actually, that is the wrong metaphor. It is terribly crude to suppose nowadays that there is one single place in the brain where minds start. What we would really like to know is not where the mind is in the brain, but how it is manifest there. The really ambitious want to know why there is mind in the brain, and to formulate the rules which state when a mind can emerge.

Why there is mind in the brain is what is known as the Hard Problem of consciousness: how there is mind in the brain is the easy, though still unimaginably hard, problem. Both gain a large part of their fascination from the way that the answers seem to recede like mirages, always a little ahead of the advance of knowledge. We may know no more about them than William James did, and express ourselves much less clearly.

Our tools are much better than his were. In place of the small saw, chisel, scalpel and forceps, we have computers and quantum mechanics. The modern explosion of knowledge about the brain and its workings is dependent on computers: they have been essential tools both for speculation and for research. Computer-assisted techniques have made it possible to watch brains working as they work. Until about twenty years ago the brain scientist really did have to murder to dissect, or to rely on nature to murder for him. If you wanted to know what a particular part of the human brain did, you hoped for someone in whom it had been destroyed, usually by a stroke, and then studied them. The development of CAT and PET scans and of MRI has made it possible to study the brain’s workings in unprecedented detail, almost as they happen. There is a sort of epiphany that people have when they first see a photograph of a brain scan, and watch a sensation spread inside the skull like a puff of garish smoke before it dissipates: perhaps their own brain, scanned, would show the same excitement as they’re watching.

Later, the excitement wears away a little. Seeing thought is something, after all, that we have done ever since we learnt to read faces. It is a hard-wired part of humanity; and an MRI scan cannot actually show a stranger or more exciting view of what goes on inside a stranger’s head than language can. But it makes the mystery fresh, and it makes it seem soluble. There is even a particular area which lights up when we are consciously attending to some task rather than doing it automatically. So, has the man who found that area found consciousness? Has he found humanity?

Everyone professionally interested in consciousness at the moment, whether neuroscientist, philosopher, psychologist, or computer scientist, would agree that such a scan has not found consciousness. But that is about all they would agree on. They would give different accounts of the failure, and have different hopes for success. What makes the whole thing more complicated is that most of these disagreements do not follow the fault lines of scientific disciplines. You will find few believers in strong AI among the people who know most about the workings of the brain, but most of the disagreements are as much philosophical as they are scientific. This does not mean that it is philosophers rather than scientists who will solve the riddle of consciousness, though some philosophers seem to believe it does; it means that philosophical sophistication is required to make progress, and that the central problems in the field are philosophical disagreements about the place of mind in the universe, as much as they are biological or technical.

Some philosophers believe these problems are insoluble. Colin McGinn has argued that human beings are so constituted that they cannot understand how, in his own phrase, "conscious states depend upon brain states. How can technicolor phenomenology arise from soggy grey matter?" Obviously conscious states are connected to brain states, and in some very subtle ways, as anyone knows who has been drunk, anaesthetised, or eaten a few thousandths of a gram of LSD. The challenge is to discover what laws, if any, govern these relations. Or, to rephrase the question, why should anything give rise to experience?

The Zombie philosophers, such as Daniel Dennett and the husband-and-wife team Pat and Paul Churchland, argue that this is a mistaken question. To them, "consciousness" is just the name that ignorant pre-scientific people give to certain neural interactions. Proper scientific materialists will see that conscious experience is as unnecessary a category as phlogiston, or élan vital. Just as there is no principle of life which animates otherwise inanimate matter, but only inanimate molecules arranged in particular ways, so there will turn out to be nothing more to consciousness than a peculiar arrangement of electrical currents and chemistry, in neurons or perhaps in silicon. When we have solved all of the "easy" problem, and worked out how the brain works, we will discover that the "hard" problem has evanesced.

I find this a difficult position to understand and to do justice to: Dennett is a writer of tremendous vigour, who addresses important problems clearly. But his furies seem often out of proportion to their object. Sometimes he reminds me of the mechanical dragon that Dr No built to frighten the natives. Roaring and clanking, he stamps furiously through the undergrowth. Huge gouts of flame blossom from his nostrils. A terrible searchlight stabs through the night from his eyes, hunting for targets to annihilate. The reader cowers in the undergrowth, praying that the terrible searchlight will not expose him. But when the terrible monster has finally clanked off into the night, and we scurry like shrews to administer last rites to the wounded, it turns out the dragon did no more than trample the vegetable patch.

"Consciousness Explained", Dennett’s most recent book on the subject, does a tremendous demolition job on the idea of a single, central "meaner" or centre of consciousness inside us. Only long afterwards, picking our way among the scorched and trampled cabbages that have felt his wrath, does it occur to us to ask whether his demolition job could really be said to explain consciousness rather than to explain it away.

Similarly, the argument that "life" is simply an arrangement of the same molecules as make up dead matter seems rather to miss the point. There may be very little chemical change indeed between a man who is dying and the same man, five minutes later, when he is dead; but no one would take seriously a book called "Life Explained" which maintained that these chemical facts much helped us to understand what death is.

That is a criticism which can be generally directed at both Dennett and the Churchlands. But Dennett goes further in one respect. He seems to get fairly close to behaviourism: to arguing that there is no first-person perspective in the world at all; the act of describing consciousness creates what it describes, whether you are describing it to yourself or to a third party. This puts a very high value on language. It is not clear whether Dennett believes that pre-verbal babies, for example, are conscious, or in what sense. The willingness to consider that there might be human beings who can function perfectly well in the world without consciousness links Dennett to one of the most remarkable books ever written on this (and perhaps any other) subject: Julian Jaynes’s "The Origin of Consciousness in the Breakdown of the Bicameral Mind."

It is one of the most rewarding books with which to start an enquiry into the field, not because it is right – it is almost certainly spectacularly wrong – but because it goes off like a bomb in the mind, leaving echoes that roll around for years. There is a certain poetic justice in this, since one of the major themes of the book is that the first great civilisations of Mesopotamia and central America were built in response to hallucinated voices. Jaynes argues quite seriously that consciousness originated in the Eastern Mediterranean around three thousand years ago, somewhere between the Iliad and the Odyssey. The characters of the Iliad, he says, were pre-conscious: what we would now call schizophrenic. "Iliadic man did not have subjectivity as do we; he had no awareness of his awareness of the world, no internal mind-space to introspect upon…Volition, planning, initiative is organised with no consciousness whatever and then ‘told’ to the individual in his familiar language, sometimes with the visual aura of a familiar friend or authority figure or ‘god’, or sometimes as a voice alone…The Trojan War was directed by hallucinations. And the soldiers who were so directed were not at all like us. They were noble automatons who knew not what they did."

They did not deliberate or ruminate. They acted on instinct; and when a problem arose to which instinct was inadequate, their left brains heard the gods, quite literally, speaking to them, from the opposing area of the right brain. But by the time of the Odyssey, the characters no longer hear the voices; they are alone in the world as we are, burdened with conscious choices and without gods.

Jaynes’ dramatic characterisation of the Iliadic world leaves out an important point he has established earlier in the book: that almost everything he says about Achilles is true of any piano player in the act. Almost all our really impressive feats are performed unconsciously, or at least while we are unconscious of our actions — we may well have our consciousness filled with what we are reacting to.

This is one of the great evolutionary puzzles of consciousness. Many of the actions we imagine would be essential to preserving animal life are best performed unconsciously. A tennis player at Wimbledon, exhibiting the sort of speed and grace required to keep our ancestors on a savannah full of hungry lions, will have returned a service before he is aware that the ball has crossed the net coming towards him. So why does he need to be conscious of his acts?

One answer is that consciousness is an adaptation to the problem of other people, rather than of other animals. Nicholas Humphrey has suggested that this is what drove the evolution of human brain size: consciousness helps us form a model of what other people are likely to do (and so to be able to outsmart them). We then gain an idea of ourselves by analogy with what we observe of others’ behaviour and postulate of their inner life. This view tends to see human consciousness as very close to language, though that is more prominent in Dennett and Jaynes than in Humphrey. Jaynes, for example, has a very clear and subtle account of the development of language through metaphor.

But for most observers, consciousness goes a long way further back in time, and deeper into the animal kingdom, than the hominid line. If the central question is "Why should anything give rise to consciousness?", one popular answer is that brains just do. We are conscious. Our brains cause consciousness, and the debate must start from these facts. The most forceful and feared exponent of this view is John Searle, professor of philosophy at Berkeley.

Searle is a serve and volley philosopher. If his first sentence does not blow an opponent away, he will rush straight to the net for the second. Here he is attacking strong AI: "The study of the mind starts with such facts as that human beings have beliefs, while thermostats, telephones, and adding machines don’t. If you get a theory that denies this point you have produced a counterexample to the theory and the theory is false. One gets the impression that people in AI who write this sort of thing think they can get away with it because they don’t really take it seriously and they don’t think anyone else will either. I propose, for a moment at least, to take it seriously."

That passage comes from his famous 1980 essay on the Chinese Room, which laid out some of the strongest arguments for the view that consciousness is not computation. Both words are of course extremely slippery; but Searle was reacting against the strong AI position that what the brain does is to run a sort of computer program, and if we could reproduce the program, we would have produced consciousness.

The essence of the Chinese Room experiment was to imagine a man in a room who is manipulating slips of paper according to certain rules. Paper comes in with certain markings on it: he looks up the rule for each symbol and sends out of the room differently marked sheets of paper. If these markings are in fact Chinese ideograms, and the rules are well-chosen, he may be answering questions in Chinese. But that does not mean that he understands a word of the language. This, says Searle, proves the difference between consciousness and computation. It is possible to simulate by computation tasks which in humans require understanding. When you have done that, you have not produced understanding in the simulation.
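The man and his rule book can be caricatured in a few lines of code. The question-and-answer pairs below are invented placeholders standing in for whatever rules the room’s designer wrote, not a real conversation; the point is only that every step is blind lookup, with no step at which anything understands Chinese.

```python
# A caricature of Searle's Chinese Room: the "man" is nothing but a
# rule table mapping incoming strings of symbols to outgoing ones.
# The entries here are hypothetical placeholders, not real dialogue.
RULE_BOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine"
    "你叫什么名字": "我叫房间",  # "What is your name?" -> "I am called Room"
}

def the_room(slip_of_paper: str) -> str:
    """Look up the marks on the incoming slip and emit the marks the
    rule book dictates. No step involves understanding the language."""
    # Unrecognised marks get a stock reply ("please say that again")
    return RULE_BOOK.get(slip_of_paper, "请再说一遍")

print(the_room("你好吗"))  # answers correctly, understanding nothing
```

To an outside questioner the room may seem fluent; inside, there is only the table. Searle’s claim is that scaling the table up, however far, never adds the missing understanding.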

This argument has not, of course, stopped those who believe that what goes on in our minds is computation. Pat Hayes, a British AI researcher now working in Florida, puts it this way: "The computationalist hypothesis is far more radical than the claim that the mind could be simulated on a computer. Rather, it is the claim that the mind actually consists of computational activity in a biological computer."

At this point a less obvious line of attack is deployed against the computationalist hypothesis, by Searle, and also by Jaron Lanier, a professor of computer science who coined the term "virtual reality" and is one of the strongest critics of strong AI. Both men claim that "computation" is a fuzzy and observer-dependent term. All sorts of systems in the universe can be viewed as computing or encoding symbolic operations. Lanier’s favourite example is a meteor shower, which, if measured, will yield a string of numbers which must be readable on some hypothetical computer as a program. Does this mean, he asks, that the meteor shower is computing?

To this, Hayes’s response is that computing is never found in nature as a pure symbolic activity, any more than pure numbers are: both are abstractions which are only ever found bound up with concrete things. Computing may be defined as the manipulation of symbols according to formal rules, without reference to the world outside; but wherever you find computing going on, you find these symbols physically incarnated, whether as pulses of electricity in a silicon chip or as patterns of neuronal activity, and they are in some way physically connected to the things they represent, whether by nerve endings or by sockets on circuit boards. There is no need to look for "pure computing", because that is nowhere to be found; there are only particular devices which compute particular things, even if they are all mathematically equivalent to a Turing machine at a sufficient level of abstraction: "Bear in mind that a silicon computer is also a material system with nothing in it but its physical make-up; but part of this make-up is the physical encoding of symbols which influence the behaviour of the machine in ways that reflect the meanings of the symbols." Meteor showers have no such connections, so they don’t count.

So it is possible, with goodwill, to see that everyone agrees that brains are doing more than simply processing symbols, even if they disagree about how much more, and what it is. One point made very clearly by Gerald Edelman is that any system that has evolved as an aid to survival, as our consciousness must have done, will have values, or emotional colours, built into it from the very start. A worm evolved an aversion to being stuck on a hook aeons before its descendants might begin to develop ideas of what a hook is.

Edelman won a Nobel prize for his work on the immune system. Since then he has worked on a Darwinian account of the brain. He wants to see how the cortex can structure itself in response to experience, by a process of selecting certain patterns of neural connection and allowing others to die away. These patterns then feed back into one another by a complicated process he calls re-entry, to build the kind of complicated models of the world that all animals live inside. His books are clear but dense; his theories are probably the closest anyone has come to a scientific explanation of some of the processes that underlie the emergence of conscious life. But in the nature of things they raise more questions than they answer.

Edelman is the only one of these theorists to address clearly the question of how brains grow to be conscious. The fertilised egg, after all, is not conscious. It does not think or feel, yet it contains the instructions necessary to produce a baby that can feel and will think, if properly stimulated. The basic architecture of the brain is the same for everyone; but brains grow. They change physically as their bodies grow and learn. If long-term memories and concepts are stored as nets of associations, as seems likely, then they will be stored in different places in different people’s cortexes. Considered simply as an engineering process, this is as awe-inspiring as anything in the universe.

It is worth emphasising that even if the processes of the brain can be described as computation, we are infinitely more complex than any computer yet built or even conceivable. Susan Greenfield, an Oxford neurochemist, has just written the best perplexed person’s guide to the brain ever published, which is full of illustrations of this complexity. There are about as many neurons, she says, in each adult brain as there are trees in the Amazon rain forest; and there are about as many connections between these neurons as there are leaves in the rain forest. These figures suggest how complex the wiring can be, if the brain is considered purely as an electrical system. But it is not purely electrical. Signals travel within neurons as electricity, but they cross between them, at the synapses, through chemical processes involving specialised molecules called neurotransmitters. There are xx different kinds of neurotransmitter known, each of which can have different effects. To complicate the picture further, the composition of this chemical soup varies with our mood and with the time of day. Broadly speaking, neurotransmitters affect the electrical workings of the brain in the same sort of way that monetary policy affects the workings of the economy. That is how most drugs have their effect. But the brain, like the economy, has its own integrities, and cannot indefinitely be manipulated in this way. Cocaine doesn’t work in the long run for the same sort of reason that Keynesian stimulation doesn’t.

One way of examining this argument is to ask whether consciousness is a rule-bound, or algorithmic, process. The most celebrated exponent of the idea that it cannot be algorithmic, that there is something necessarily uncomputable and unconstrained about the emergence of consciousness, is Roger Penrose, the Oxford mathematician. Working with Stuart Hameroff, an anaesthesiologist, he has developed a theory that makes consciousness the product of quantum events within microscopic cell-stiffening structures known as microtubules. Penrose is a Platonist, which makes him doubly unfashionable among the great panjandrums of the field. The human mind, he believes, has access to mathematical truths by a form of intuition which no purely algorithmic process can ever match. Goedel’s theorem proves that there will always be mathematical truths which cannot be proved inside the system which gives rise to them, yet which we can see to be true. Therefore, he says, consciousness must involve non-algorithmic knowledge. Against this, Dennett and others have argued that the position greatly over-rates the certainty of mathematical knowledge. Intuitions can after all be mistaken; and it is easy to imagine an algorithm that will generate intuitions which don’t have to be right.
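The theorem on which the Penrose argument leans can be stated roughly as follows; this is a standard textbook formulation, not Penrose’s own wording:

```latex
\textbf{Theorem (G\"odel, 1931, roughly stated).}
Let $F$ be a consistent formal system, rich enough to express elementary
arithmetic, whose axioms can be listed by an algorithm. Then there is a
sentence $G_F$ of arithmetic such that $F$ proves neither $G_F$ nor
$\neg G_F$; and if $F$ is consistent, $G_F$ is true.
```

Penrose’s claim is that we can "see" the truth of such a sentence $G_F$, while any machine equivalent to $F$ cannot prove it; the Dennettian rejoinder is that our "seeing" is itself fallible, so nothing guarantees we are not such a machine with unreliable intuitions.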

Part of Penrose’s argument is that the emergence of consciousness in the universe and the relation of quantum laws to physics at ordinary scales are both mysteries. Might they not prove to be the same mystery? Hameroff is the showman of the pair. His interest in consciousness started professionally: anaesthesiologists spend their working lives making consciousness appear and vanish to order. They know how to do it well enough by now, but the underlying laws are still a mystery.

The Penrose/Hameroff theory is important partly because it is so thoroughly rejected by almost everyone else in the field. Yet it is much the closest to what most people in the world believe, for a belief in vital spirits of one sort or another is widespread. Most of the people who have ever believed in ghosts, zombies, and other forms of disembodied spirit are probably alive today; yet almost all scientific researchers are convinced that conscious states depend entirely on brain states. If we assume, as seems safe, that any really comprehensive account of the brain’s workings is decades away, and any computer that might mimic it still further off, the question arises: why is the field so interesting and fashionable? What do people hope to find in it?

Hameroff’s interest has led the University of Arizona to host large conferences at which everyone in the field comes and shows off for their peers for a week. Late one night in a bar at the last Tucson conference, Patrick Wilken, the Australian founder of the Association for the Scientific Study of Consciousness, finished off a long evening’s argument by saying "Don’t you see, Andrew, what we’re trying to do here? We’re trying to make a soul!" And they are.

John Lucas, the Oxford philosopher who first formulated the Goedelian arguments against artificial intelligence later developed by Roger Penrose, concluded his paper with the words: "Since the time of Newton, the bogey of mechanist determinism has obsessed philosophers. If we were to be scientific, it seemed that we must look on human beings as determined automata, and not as autonomous moral agents; if we were to be moral, it seemed that we must deny science its due, set an arbitrary limit to its progress in understanding human neurophysiology, and take refuge in obscurantist mysticism. Not even Kant could resolve the tension between the two standpoints. But now, though many arguments against human freedom still remain, the argument from mechanism, perhaps the most compelling argument of them all, has lost its power. No longer on this count will it be incumbent on the natural philosopher to deny freedom in the name of science: no longer will the moralist feel the urge to abolish knowledge to make room for faith. We can even begin to see how there could be room for morality, without its being necessary to abolish or even to circumscribe the province of science."

This hope was premature. The Churchlands, for instance, still seem to think that a third-person account of the mind will become available which will render first-person experience redundant. But the hope is not lost. The arguments continue, if in decreasingly vicious circles. Lucas may yet be right in the long term. Brain science may have abolished the division between body and soul; but in the process, both ideas have changed. It turns out that there is neither pure ghost nor pure machine, but that we are a sort of animal that is both, or works like both at once.

This stuff written and copyright Andrew Brown.