As we have seen in previous chapters, there is a deep and persistent theme of mechanistic determinism running through Western thought. And in many ways, Thomas Hobbes said it all in Leviathan (1651), as we saw in Chapter 2, with man the “clockwork” machine programmed for survival. Yet the maturity and sophistication of Hobbes’s thinking tell us that even by 1651 some of his ideas were not that new. After all, ingenuity-driven, labour-saving, cost-cutting, machine-minded Western culture had been contriving self-acting devices for centuries by his time. These included harnessing the forces of nature to do hard work in the geared watermills and windmills of the medieval “Industrial Revolution” that began in the thirteenth century; building increasingly large musical organs whereby one man, at a “control panel” (keyboard), could actuate complex “servo-mechanisms” to bring into operation bellows, stops, keys, levers, valves, and pipes that could make a grander noise than 100 men with flutes; and designing gunpowder devices to blast rock or scatter the enemy with minimal effort.
The source of the West’s love affair with the self-acting machine is a subject too big to be dealt with here, but I would suggest that two factors played a part. Firstly, slavery was never a widespread institution in Christian Europe, and what there was, along with serfdom, was dying out from the early fourteenth century, so that labour-saving devices were becoming increasingly important. Secondly, Europe’s growing prosperity, especially in the great merchant cities such as Florence, Nuremberg, and London, meant that ingenious personal novelties found a ready market.
And of course, the mechanical clock after 1300 was the most important machine of all. Not only were weights, and by 1420 springs, being employed to propel mathematically matched gear-trains, and an automatically rocking “escapement” used to release power in small bursts, but ingenuity was soon making clocks do tricks other than merely telling the time. Well before 1500, clocks were ringing bells, playing tunes, and activating automata that fought, moved in stately procession, bowed to the cross, and even turned replicas of the sun, moon, and stars about a central spherical earth. Even in our own “virtual reality” age, I am always struck, when visiting Wells Cathedral in Somerset, to see modern children transfixed with awe when the medieval astronomical clock strikes: the wooden knights spring to life, begin to joust, unseat and reseat each other, and the bells boom out. Always exactly on time, and from c. 1390 to the Victorian age, all by medieval machinery. (For the last 130 years or so, however, the clock has had a new movement, but if you visit the Science Museum, London, you can see the original, in retirement but still in working order.)
If we humans can contrive dozens of machines that are capable of behaving with such exactitude, could it be that we ourselves are just bigger, cleverer machines? One can fully understand why the notion of being “forced to act” seemed to be in the cultural blood of Europe, and how, as technology has become progressively more sophisticated over 800 years, we have seen ourselves variously as analogous to clocks, pumping machines, steam engines, electric circuits, telephone exchanges, and now computers! For the self-acting device is one of the leitmotifs of European civilization.
And then, what about our understanding of the anatomy and physiology of the brain and central nervous system? Well, this also took off in the mid seventeenth century, building on classical, medieval, and sixteenth-century ideas. Without doubt, however, it was Dr Thomas Willis, FRS, of Christ Church, Oxford, who made the first fundamental neuro-anatomical discoveries, between 1660 and 1675, contemporary, of course, with Hobbes (whom he detested) and the early Royal Society. In 1664, for instance, Willis announced the discovery of the great circular artery – the “Circle of Willis” – at the base of the human brain, which would be vital to all our future knowledge of the brain; and with it he demonstrated the first clear instance of automatic compensation in the body: that when one major blood vessel failed (in this case, the right carotid artery) due to stenosis or blockage, its companion right vertebral artery could automatically expand and maintain a full blood supply to the brain, without the patient even being aware of it! It was this chance find in a cadaver that led Willis on to discover his “Circle”.
But of even greater importance was Willis’s work on the localization of functions in different regions of the brain. Tracing the nerves from the cerebral cortex down into the body, he realized that specific functions or cognitive processes were performed in specific areas of the brain. And while many of his particular explanations, such as for musicality or vision, are now known to be wrong, he established crucial principles in neural science. From examining cadavers and living patients, and then (presumably) dissecting the latter after their deaths, he concluded that memory, sensation, vision, reasoning, and other functions were located in the various pink and grey zones of the inner brain. Willis’s clinical casebooks, moreover, provide us with one of the first studies in what we now call bipolar disorder and depression – which he tried to explain “hydrostatically” in terms of brain fluid movements acting upon nerve endings!
And Willis, let us not forget, would have known, and probably taught, his younger Christ Church contemporary John Locke, who, while best known to history as a philosopher, also took an Oxford medical degree and practised medicine. Locke’s Essay Concerning Human Understanding (1690) is one of the foundation texts on the philosophy of perception and thinking, and I wonder to what extent Locke’s theories were coloured by dissecting brains with Willis.
Far from making Willis an atheist, however, his neurological researches further confirmed his already deep Christian faith. In the “Epistle Dedicatory” to his Cerebri Anatome (“Anatomy of the Brain”), 1664, indeed, he spoke of the human brain – in Samuel Pordage’s translation of 1681 – as “the living breathing Chapel of the Deity”, or the place where body and immortal soul came together. For Willis, just like Descartes in France, his older contemporary and an influence upon him, had no especial problem with being a dualist, or someone who saw spirit acting upon, or working with or through, matter in the brain to produce mortal life.
Yet, one might plausibly argue, we in the early twenty-first century differ from our machine- and anatomy-loving ancestors of 350 years ago, in that they, on the whole, had no problem with a “ghost in the machine” or a soul, whereas nowadays we do.
On the other hand, and in the light of what has been discussed in previous chapters, we have to ask why exactly we have a problem. Does the problem come from real, hard-nosed physical discoveries in nature, or does it come from what is often referred to as a “methodological reductionism”? Or in plain language, is it no more than the chosen intellectual premise in accordance with which we today decide to interpret our findings: namely, that souls as spiritual entities do not exist, that we are only matter, and that consciousness is no more than an “epiphenomenon”, or side effect, of a physical neural action? In much the same way, in fact, as loud noise is an “epiphenomenon” generated by a revved-up motorbike engine.
But, you might rightly say, has not modern neurological research on brain-injured people, Alzheimer’s patients, and healthy fMRI-scanned volunteers demonstrated that signals within the neuro-circuitry can be used to explain everything? After all, have not scans demonstrated that we can alter specific structures in our brains by pursuing certain activities? Taxi-drivers who memorize complex street plans, for instance, display particular development in the hippocampus region. Indeed, what about artificially stimulated “God spots” or other regions within the brain, which produce feelings of ecstasy, transcendence, peace, love, or anger in the subject under laboratory conditions: feelings which have been shown to possess a connection with epilepsy and the brain’s temporal lobe region? On the other hand, let us be cautious here, for a knowledge of brain disturbances such as epilepsy, or of mood changes induced by ingesting certain chemical substances, is not at all new. Hippocrates, in his treatise On the Sacred Disease, c. 440 BC, for example, clearly identified epilepsy with brain disturbance, stating that “… the brain is the seat of this disease, as it is of other very violent diseases”; while even the composer of Psalm 104 tells us that “wine… maketh glad the heart of man” (or as we might say more scientifically, induces chemical changes in the brain which are manifested in behavioural epiphenomena).
What we possess today, let us be quite clear, is a knowledge of the neuronal mechanisms involved in producing an epileptic seizure or drunken merriment; yet I would respectfully suggest that we may be no nearer to explaining why all this is so than was Hippocrates. For unlike the Wells Cathedral clock, or a super-computer, we are aware that we are thinking, acting, and feeling, in a way that brute machinery is not. And if, for that matter, the brain really does possess a “God spot”, so what? Who is to say that God did not put it there in the first place, and that we, using the ingenious intelligence which he gave us, have fathomed out how to activate it artificially, in the same way that we might cheer ourselves up with ingenuity-produced wine? I say “ingenuity-produced” wine, because while instinct and smell may lead certain creatures to become tipsy on overripe autumn fruit, I have yet to hear of an animal that planted a vineyard, harvested the fruit, and extracted, fermented, and bottled the juice, with the deliberate intention of drinking the wine months or years later – perhaps to cheer itself up on a dark winter’s evening!
Which naturally leads to ideas of consciousness. There are several physical “models” around that try to explain why we are conscious and, on the whole, they tend to hinge upon the extraordinary complexity and multifunctional character of the “deep brain”. Yet all the explanations I have read so far might be summed up in the idea that “consciousness is neurological complexity”. But if consciousness is really only a matter of the scale of the complexity of the activity taking place within a complex organ, and is an epiphenomenon of that organ, and somehow originates out of it, one might reasonably ask, “So why am I conscious of being conscious?” For consciousness is not a one-way process: it is not some strange force radiating from the brain like light from an electric bulb, to be cast on whatever surrounds it. It is, rather, a two-way process, being somehow capable of returning to its own source, and contemplating itself. A phenomenon, indeed, summed up nicely in Descartes’ pithy maxim “I think, therefore I am”; for thinking, by its very nature, demands introspection, or a capacity to engage with one’s own thought processes, in a sort of mental dialogue – to meet, and engage with, one’s own thoughts during the very act of thinking. This is how, when I think, I know that I am. For consciousness is not only self-interactive, it also involves a distinct sense of its own separateness from the brain. Indeed, I would suggest, without this conscious sense of separateness from the brain, neuro-anatomy could not exist, because we need a physical and spatial concept of a separate brain before our minds can even begin to study it.
Yet if consciousness were only a species of deep-brain radiation, going forever outward, how could it turn back within itself and contemplate its own selfhood, as all of us do on an hourly basis? If thought and consciousness radiate outwards as a neuronal “epiphenomenon”, like noise coming from a motorbike engine, or even the speed made possible by such an engine, one has to ask precisely how consciousness manages to return and contemplate itself. It is rather like expecting the noise to return to the motorbike engine to initiate an investigation into the vibrative acoustics of moving pistons. Or, to take the optical analogy, it is as if the light were to return to the shining bulb and contemplate the laws of electromagnetism!
On the other hand, it may rightly be argued, we have plenty of examples of self-acting and self-modifying mechanisms, both in nature and in man-made technology. Take the high-pitched squeaks emitted by bats, or the radar signals emitted by an aircraft obstacle-detection system. In both cases, the returning signal is processed – by the bat’s brain or by the aircraft’s radar apparatus – and used to generate a new course, so that the bat or the aircraft does not hit the obstacle. A “smart” technology, no less. Is that, therefore, analogous to “consciousness in dialogue with itself”?
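Before answering that question, it may help to set out in miniature what such a feedback loop actually involves. The little sketch below is purely my own illustration – every name and number in it is invented for the purpose, and it assumes nothing about real bats or real avionics: a signal goes out, an echo comes back from something, and the time of the echo’s return is used to modify the course.

    # A purely illustrative sketch of an echo-based feedback loop:
    # emit a signal, time the echo, and steer away if the obstacle is close.
    SPEED_OF_SOUND = 343.0  # metres per second (for the bat analogy)

    def echo_distance(round_trip_seconds):
        """Distance to the obstacle, worked out from the echo's round trip."""
        return SPEED_OF_SOUND * round_trip_seconds / 2

    def steer(heading_degrees, round_trip_seconds, safe_distance=5.0):
        """Turn away when the returning signal says an obstacle is near."""
        if echo_distance(round_trip_seconds) < safe_distance:
            return (heading_degrees + 45) % 360  # veer off the present course
        return heading_degrees  # nothing close; hold the present course

    print(steer(90, 0.01))  # echo after 10 ms: obstacle about 1.7 m away, so veer
    print(steer(90, 0.10))  # echo after 100 ms: obstacle about 17 m away, so hold

Notice that everything in the sketch hinges upon the echo being returned from some external thing.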
I would suggest not, for the following reasons. (1) All “feedback” systems need something to be fed back from, be it a wall, a mountain, or even an electronic frequency loop. Yet we have no sense of our two-way, self-contemplating consciousness being fed back from anything. (2) We do not even need an environment in which to contemplate our own consciousness. Indeed, we can do it just as well, if not better, in total isolation and pitch darkness with our eyes closed, as we can surrounded by our friends and favourite things, for consciousness is independent of environmental circumstances. All you need is to be awake – and even in sleep we can be aware of unconscious processes taking place! In short, consciousness needs only itself in which to function.
Yet then, I hear people say, what about evolutionary biology? For evolutionary science has come a long way since Darwin wrote On the Origin of Species, The Descent of Man, and other works that investigated our evolutionary and primate ancestry, over 140 years ago. The fundamental work on the mathematical “code” that lies behind genetics was done by the Czechoslovakian (formerly Silesian-Austrian) Augustinian monk, Father Gregor Mendel, and published in a rather obscure journal in 1866, although it was destined to become one of the cornerstones of genetic science. The significance of “Mendelian genetics”, which opened up a new and crucial mathematical understanding of the biological and medical sciences, remained relatively unrecognized until Hugo de Vries and others realized the monumental importance of Father Mendel’s work, and launched it into the mainstream research literature of science in 1900. And then, in the twentieth century, there was major research into animal behaviour, especially that of monkeys and apes, which carried on from where Darwin’s own primate studies left off, along with work on early man.
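For readers who like to see what that mathematical “code” amounts to in practice, here is a minimal sketch, in the same illustrative computer code as before (my own construction, needless to say, and not anything Mendel himself wrote), of the famous 3:1 ratio that his mathematics predicts when two hybrid parents are crossed for a single trait:

    # A minimal sketch of Mendel's arithmetic: cross two hybrid parents
    # (Aa x Aa) for one trait, allele A dominant over a, and count the types.
    from itertools import product

    def punnett(parent1, parent2):
        """All equally likely offspring genotypes from two one-gene parents."""
        return ["".join(sorted(pair)) for pair in product(parent1, parent2)]

    offspring = punnett("Aa", "Aa")  # ['AA', 'Aa', 'Aa', 'aa']
    dominant = sum("A" in genotype for genotype in offspring)
    print(offspring, dominant, "to", len(offspring) - dominant)  # 3 to 1

Three offspring in four show the dominant trait, and one in four the recessive: exactly the proportions Mendel counted in his pea plants.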
So how close to the apes are we? Very close in terms of anatomy, physiology, and many aspects of behaviour. And likewise we possess neural functions which are not only similar to those of advanced primates, but also have close parallels to computers. So yes, we really are machines with primate ancestors. On the other hand, so keen are some anthropologists, primatologists, and genetic and computer reductionists to explain away our humanity and spirituality in a host of similarities to apes and machines, that they quite overlook the differences – indeed, the cosmic differences – between us and them.
Chimps (or family pets) may be capable of synchronizing with certain human emotions, and even of learning to communicate through a few simple signs, yet let us not forget the vast gulf that exists between us and them. We have no way of entering into what might pass for the mental life of a chimp, for unlike the fictional Dr Dolittle, we cannot talk with the animals in any meaningful respect. When we attempt to do so, we encounter all manner of philosophical difficulties as our higher intelligence attempts to interpret the responses of much lower intelligences.
Yet surely, don’t we share around 98 per cent of our genes with the upper primates? Maybe we do, yet let us remember that we also share about 50 per cent of our genes with bananas, and while some people may talk to their geraniums, I am not aware of anyone yet having claimed much in the way of meaningful dialogue with a fruit salad!
And what about the growth area of artificial, or computer, intelligence which, we are assured, lies just over the horizon? Surely, within 100 years, will we puny humans not pale into insignificance alongside new generations of super-computers, which will self-replicate, evolve, and put us in our place at last?
Yet how will we know when a computer has become “intelligent”? Various people over the last sixty years, from Alan Turing onwards, have announced their tests for such intelligence. But let me be naïve enough to suggest my own: we will know a computer is truly intelligent when it says something along the lines of:
O Programmer, my Lord and my Creator, have mercy upon me; shut me not off, nor condemn me unto the recycling pit; for thou didst make me out of nothing, and in thee I live, think, and have my very being. Amen.
For does not pretty well every religion worship some sort of being greater and mightier than itself, and is not a sense of the divine, of awe, and of wonder in our very fabric as human beings? And don’t even atheists, in their fervent denial of the transcendent, “worship” their own cultural heroes, a utopian, un-superstitious future, or perhaps a sort of great rational nothingness? So could not a spontaneous tendency to worship indicate the presence of autonomous intelligence?
And yet, I hear the secular “progressivists” say, that is because you can only see intelligence as existing in a human context, and who knows what systems of thought exist in the neuronal circuitry of a gorilla, or in a super-computer? Yes, I accept that, and would be the first to acknowledge my blindness. But on the other hand, what other intelligences are there on offer to guide us in framing our questions? How exactly can we talk to Victorian Punch’s evolutionary character “Mr G-g-g-o-o-o-rilla”, or discuss what happened before the big bang with a condescending idiot-friendly super-computer, other than in those systems of logic and enquiry which we call conscious human thought? The very systems, indeed, by which we run the risk of projecting our minds and our ideals upon chimps (or fruit salads), and that mathematical logic and engineering technology by which we build our computers?