Are science and its successes somehow inimical to the quest of the arts and humanities to uncover ‘the secret of man’? Is science taking the poetry from human experience, and substituting for it the dry austerity of equations? Is science an enemy to the meaning of human life and its possibilities, because it reduces them to objects of laboratory study? In what follows I offer not just negative answers to these questions, but a reason for thinking that science is a coadjutor, even an exemplification, of the artistic and humanistic quest itself.
At an ever-increasing rate since the seventeenth century in the Western world, and latterly in the world as a whole, the natural and social sciences have been transforming both the daily lives and the self-image of humankind in dramatic and irreversible ways. For some – those invested in traditional, and typically religious, views of the nature and destiny of man – this indubitable historical fact is not a matter for celebration. Even some of those who have a secular and generally modernising outlook are alarmed by what they see as the reductive tendency of science, reductivism at its worst being the propensity ‘to see nothing in the pearl but the disease of the oyster’.
These fears are misplaced. The truth is that in revealing much of great value about the world and humankind, science invites a transformed and imaginative response in those aspects of humanity’s conversation with itself that science does not pretend to address. These aspects are art, literature and the humanities, which between them explore human relationships and experience as they are perceived from the perspective that most deeply and revealingly captures them – namely, the intensely subjective perspective which gives them their fullest meaning in our lives. It is love and friendship, creative play and artistic expression, learning, literature, the quest for philosophical insight, the building of institutions that enhance our ethical possibilities, that matter in all the most thoughtful and constructive human endeavours. To them science is neither a threat nor an alternative, but a component and an adjunct.
It really matters that one should see that this is so, and that there is no conflict of interest between the pursuits that lie under the encompassing labels ‘science’ and ‘humanities’. Two areas of natural science, and one of social science, are most relevant in penetrating to the heart of humankind’s self-image and self-understanding in ways that cause concern to traditionalists. They are biology, cosmology and psychology. It is largely ignorance concerning the content and aims of these sciences that generates anxiety in traditionalists, who, rightly thinking that poetry and subjectivity are true sources of the understanding we seek, fear that these sciences will displace and corrode them. A note on each of these exemplary sciences is therefore instructive. I believe that once we understand their content and purposes, the notion of a conflict between them and what matters to us in the humanities will be recognised as mistaken. Let us take each in turn.
Biology is the science of life. It is arguably the most important of the sciences for the question in hand, in the sense that some of its subdivisions and offshoots – not least genetics, the theory of evolution, and sociobiology and evolutionary psychology – are at the frontier of changes in medical technologies and practices, and in human self-awareness, developments which unquestionably present new ethical and social challenges.
Biology has a wide remit, reflecting the variety and complexity of all the many forms of life that exist. Its deepest unifying concept is evolution, without which – so it has been rightly said – nothing in any of its domains of enquiry makes sense.
The roots of biology lie in the pursuit of what was once called ‘natural history’, which received its first organisation as an enquiry in the work of Aristotle (384–322 BCE) and his successor Theophrastus (died 287 BCE). They engaged in direct observation in both zoology and botany, and were arguably the first to introduce systematicity into taxonomy, which is the classification of plants and animals into rational arrangements by similarity of form and other criteria.
A second source of biological science was medicine, not just as a result of enquiry into human anatomy and physiology as first systematically conducted by Hippocrates (born 460 BCE) and Galen (c.130–200 CE), but in the collection and examination of plants which had come to be recognised as having medicinal properties. Because of this, and because of the practical knowledge gained in thousands of years of agriculture, botany was an advanced science even before Carl Linnaeus (1707–78) undertook his work of classification in the eighteenth century. Indeed, detailed herbals existed in medieval times, and nearly three-quarters of today’s pharmacopoeia is derived from substances known to herbal or traditional medicine.
But it is with Zacharias Janssen at the end of the sixteenth century and Linnaeus in the eighteenth that modern biology properly began. The invention of the microscope by Janssen and its continuing improvement was essential to progress in the study of cells (cytology) and tissues (histology), which in turn laid the foundation for biology’s rapid advances thereafter – most particularly in embryology, heredity, and therefore evolutionary theory itself.
Among the twentieth century’s many advances in biology, including the identification of viruses and the first application of genetics to evolutionary theory (Gregor Mendel’s nineteenth-century discovery of genes was only rediscovered and fully appreciated from 1900), the undoubted chief was the identification of the structure of DNA by Rosalind Franklin, Maurice Wilkins, Francis Crick and James Watson in 1953.
The revolutionary impact of DNA’s discovery was profound. In less than half a century it resulted in the mapping of the human genome, and with it wonderful prospects for great advances in medicine and a cornucopia of knowledge besides, including better and further-reaching understanding of humankind’s history.
The sheer scale and speed of advances in biology explain the nature of debates in our time over questions of biodiversity and the environment, in bioethics generally but not least in medical ethics, and in sociobiology and evolutionary psychology as accounts of the nature of humankind.
These last are perhaps the main concern of those who fear that science threatens to erode traditional understandings of humanity. This is because they explicitly concern the place of human nature in nature. Charles Darwin’s discoveries firmly placed humankind in the natural world, thus causing immense shock to his contemporaries, almost all of whom accepted the previous and long-prevailing orthodoxy that humanity is a divine and therefore special creation inserted into the world, of an order different from all other life in it, this other life having been provided by a deity for humanity’s convenience and pleasure.
The fear and dismay generated by so reductive-seeming a finding – that human beings are apes – took a variety of forms. The physiological similarities between people and apes were sufficient to make the latter a symbol of the degenerate and beastly side of the former, and for a while the connection was drawn almost entirely in the imaginative resources of film (think King Kong, Planet of the Apes) and lurid magazine stories. At first the evidence from palaeoanthropology, whose serious beginnings lie in the nineteenth-century discovery of Neanderthal Man, fed the ape-man myth.
But following recognition of the close relationship between humans and other primates in so many respects – resulting for example from ethological observation of chimpanzee tool-use, familial bonding and tribal warfare – genetics has revealed that today’s higher primate species have a common ancestor in the recent evolutionary past (a mere 5 million years in the case of chimpanzees and humans), and so many commonalities persist that it has become impossible not to accept that study of non-human primates yields much insight into our human selves. Chimpanzees and humans have 98.4 per cent of their genes in common.
The word ‘sociobiology’ was coined in 1975 by Edward O. Wilson, who described it as a ‘new synthesis’ of biology, sociology, ethology, anthropology, population genetics and other associated disciplines. Its premise is that animal behaviour has been selected under evolutionary pressures to enhance reproductive success. As applied to monkeys, sheep and the like, the idea is illuminating. As applied to human beings it is controversial, because its critics see it as premising genetic determinism, which overlooks the vastly powerful influence of culture and intelligence in shaping human behaviour in all its intricacies.
Sociobiology’s critics especially dislike the implication that if certain patterns of behaviour are genetically determined, we cannot do anything about them. Consider male aggression, which might often enhance male reproductive success (by fighting off rivals, and by frequent rape, say). If this behaviour is hard-wired, does it mean that we are helplessly unable to do anything about it? Surely sociobiology cannot mean to suggest that because a given form of behaviour is ‘natural’ it is good?
Critics of sociobiology – among them Richard Lewontin and Stephen Jay Gould – say that the right reaction to it is to keep insisting that humanity is special because of its intelligence and its concomitant creation of culture, which they have an easy time describing as a far more significant prompt of our intricate behaviours than mere appeal to reproductive success.
It is obvious, in turn, why sociobiologists find reductive explanations in terms of reproductive success so plausible. Right across the mammalian world mothers are zealously and instinctively protective of their offspring, with culture and intelligence having nothing to do with it. Some sociobiologists claim that among their critics’ motivations is a ‘politically correct’ distrust of anything that seems to imply a deterministic basis for differences between races and the sexes, for example in such respects as intelligence. In opposing this implication, sociobiology’s critics point out that 85 per cent of all genetic variation among humans exists within populations, that historical and cultural factors are largely responsible for community differences in ‘intelligence’ as ‘measured’ by IQ tests, and likewise for inequalities and hierarchies in human societies and for many alleged male and female differences.
But what threat is really posed to human self-understanding by what biology teaches us about humanity’s place in nature? One striking thought that undercuts any anxiety in this regard is that historical and cultural factors are themselves evolutionary adaptations of humankind. In place of fur and claws, sharp teeth or feathered wings, humans have evolved self-reflexive consciousness and high levels of intelligence, which allow us to invent many different and elaborate strategies for interacting successfully with the environments we encounter. Even if the most fundamental drive in humanity really is a genetic imperative to reproduce in such a way as to maximise the number of copies of our genes in following generations, and even if in all creatures other than humans the result of this imperative is a relatively inflexible repertoire of behaviours (chimpanzees do not write symphonies and construct scientific laboratories as part of theirs), it does not follow that sociobiological explanation is misleading in the human case. On the contrary, it seems to offer deep insights into aspects of the creative variety of human behaviour, and into human nature itself.
The naturalists of bygone times with their butterfly nets could not have imagined how controversial and exciting the descendants of their pursuit would become once Darwin came along. To most of them, nature seemed to give testimony to the creative activity of a wise and bountiful deity. Accordingly, to them Darwin’s ideas appeared as a blasphemy. Yet when they reflected on the evidence of their own eyes they could not fail to see what a challenge Darwinism posed to the tidy certainties of previous thinking. A poignant account of the conflict between the old and new biology occurs in Edmund Gosse’s Father and Son, a memoir of his father Philip Gosse, who attempted to oppose evolutionary views with a creationist account of nature in which all the biological and geological phenomena cited in their support were explained as having been laid down, just as they appear, in the six days of creation. He was ridiculed in the press (jokes were made about God having concealed fossils in rocks to tempt scientists to infidelity), so he exiled himself from London and its scientific community. His son says that he sincerely believed that his book Omphalos would show how Genesis is consistent with the fossil record; its comprehensive failure to do so settled matters on the eastern side of the Atlantic, but has yet to do so on the western side, where creationism and its disguised version as ‘Intelligent Design Theory’ still persist.
These remarks are intended to show that biological science has not refuted the view that there are large areas of our thought about humankind and its experience which remain the domain of art and the humanities. Instead, biology and its express application to an improved understanding of human nature via sociobiology are supplements, not alternatives, in this quest.
What, now, of cosmology? Before the sixteenth century the orthodox view was that our planet, and we on it, sat at the centre of creation. Astronomy, and in particular cosmogony and cosmology – the studies respectively of the origins and nature of the universe – have had an immense effect in changing human self-perception. Where once we were the apex of the universal creation, and at the centre of things, now we are inhabitants of a small planet orbiting an average star in an outer arm of one of many billions of galaxies. That is a remarkable change of perspective.
The ‘Big Bang’ theory is the best-known account of the origin of the universe, though in the evolution of cosmological debate it is undergoing major revisions even as these words are written. A summary of it, bringing together the idea of an expanding universe with influential ideas about how the first moment of that expansion occurred, offers a picture in which the universe came into existence about 13 billion years ago in a ‘singularity’ which, after an immensely rapid ‘inflation’ in size during the first infinitesimal fractions of a second of its history, set the universe on a course that has resulted in its present character.
The origin of the Big Bang theory lies in the observation made by the astronomer Edwin Hubble (1889–1953) that the universe is expanding. He saw that in whatever direction one looks in the sky, the galaxies are travelling away from us, and that the rate at which they recede is proportional to their distance from us (Hubble’s Law): the further away they are, the faster they are going. This implies that at earlier points in the universe’s history everything was closer together; running the clock back eventually gives us everything compacted at a starting point. Proponents of the then-rival ‘Steady State’ theory, which says that the universe exists eternally, with matter spontaneously coming into existence in the vacuum of space, sarcastically described this first moment of the universe’s existence as a ‘Big Bang’. As often happens with names that begin as criticisms, it stuck.
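Hubble’s Law can be made concrete in a few lines. The following is a minimal sketch in Python, assuming the approximate modern value of about 70 km/s per megaparsec for Hubble’s constant (a figure not given above, and still subject to measurement debate):

```python
# Hubble's Law: recession velocity is proportional to distance, v = H0 * d.
# Illustrative sketch only; H0 ~ 70 km/s/Mpc is an assumed approximate value.

H0 = 70.0  # Hubble constant, km/s per megaparsec (approximate)

def recession_velocity(distance_mpc):
    """Recession velocity (km/s) of a galaxy at the given distance (Mpc)."""
    return H0 * distance_mpc

# The further away a galaxy is, the faster it recedes:
for d in (10, 100, 1000):  # distances in megaparsecs
    print(f"{d:>5} Mpc -> {recession_velocity(d):>8.0f} km/s")
```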
At the very beginning of the universe’s history, says the Big Bang theory, what had a moment before been a vacuum was an enormously hot plasma which, as it cooled at about 10⁻⁴³ seconds into its history, came to consist of near-equal numbers of matter and antimatter particles annihilating each other as they collided. Because of an asymmetry in favour of matter over antimatter, initially of the order of about one part per billion, the dominance of the former over the latter increased as the universe matured, so that matter particles could interact and decay in the way our current theories describe. As the initial ‘quark soup’ cooled to about 3,000 billion degrees Kelvin a ‘phase transition’ led to the formation of the heavy particles (protons and neutrons) and then the lighter particles (photons, electrons and neutrinos). A more familiar example of a ‘phase transition’ is the transformation of water into ice when the temperature of the water reaches zero degrees Celsius.
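The consequence of that one-part-per-billion asymmetry is easy to see in a toy calculation: for every billion antimatter particles there were a billion and one matter particles, so mutual annihilation leaves a single surviving matter particle per billion annihilated pairs. A toy sketch, with illustrative numbers rather than physical totals:

```python
# Toy illustration of the matter-antimatter asymmetry of ~1 part per billion.
# Numbers are illustrative only, not physical particle counts.

antimatter = 1_000_000_000        # one billion antimatter particles
matter = antimatter + 1           # a one-part-per-billion excess of matter

annihilated_pairs = min(matter, antimatter)  # each pair annihilates into radiation
surviving_matter = matter - annihilated_pairs

print(f"Pairs annihilated: {annihilated_pairs:,}")
print(f"Matter particles left over: {surviving_matter}")
# It is this tiny residue that went on to form all the matter we now observe.
```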
Between one and three minutes into the universe’s history the formation of hydrogen and helium began, these being the commonest elements in the universe, in a ratio of about one helium atom to every ten hydrogen atoms. Another element formed in the early process of nucleosynthesis was lithium. As the universe continued to expand, gravity operated on matter in such a way as to begin the process of star and galaxy formation.
Hubble’s founding observation that led to the Big Bang theory can be understood by visualising the following: imagine that our galaxy is a raisin in a lump of dough swelling up in a hot oven. From the point of view of the raisin on which we ourselves are located, all the other raisins will be seen to be getting further and further away as the dough expands, and the further away they are the faster they will be receding, just as Hubble’s Law states. Relatedly, the speed and distance of galaxies can be calculated by measuring the degree to which the light emanating from them is shifted to the red end of the colour spectrum. The greater the red-shift, the faster the source of light is moving away and therefore the further away it is.
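The red-shift calculation itself is straightforward: the red-shift z is the fractional stretching of a known spectral line; for modest values of z the recession velocity is approximately c × z, and Hubble’s Law then yields the distance. A minimal sketch, using an illustrative (hypothetical) observation of the hydrogen-alpha line and the same assumed value of Hubble’s constant as before:

```python
# Red-shift: the fractional stretching of light's wavelength,
# z = (observed - emitted) / emitted. For small z the recession velocity is
# approximately v = c * z, and Hubble's Law gives the distance d = v / H0.
# Sketch only; constants and the observation are illustrative assumptions.

C = 299_792.458   # speed of light, km/s
H0 = 70.0         # Hubble constant, km/s per Mpc (assumed approximate value)

def redshift(observed_nm, emitted_nm):
    return (observed_nm - emitted_nm) / emitted_nm

# Hypothetical example: hydrogen-alpha, emitted at 656.3 nm, observed at 670 nm.
z = redshift(670.0, 656.3)
v = C * z          # recession velocity (a good approximation for small z)
d = v / H0         # distance in megaparsecs

print(f"z = {z:.4f}, v = {v:.0f} km/s, d = {d:.0f} Mpc")
```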
The Big Bang theory received powerful support from observation of the cosmic microwave background radiation, left over from the universe’s earliest history. This observation won the 1978 Nobel Prize for physics for the two astronomers who made it, Arno Penzias and Robert Wilson. It is also supported by the observation that the most abundant elements in the universe are hydrogen and helium, just as the Big Bang model predicts.
The standard version of the Big Bang theory requires a consistent mathematical description of the universe’s large-scale properties, and the foundation of such a description is the law of gravity, the basic force by which large structures in the universe interact. The standard theory of this force is Einstein’s general relativity, which gives a description of the curved geometry of space-time, states equations describing how gravity operates, and gives an account of the large-scale properties of matter. The standard model premises that the universe is homogeneous and isotropic, meaning that the same laws operate everywhere and that we (the observers) do not occupy a special position in it – which in turn entails that the universe looks the same to observers anywhere.
These assumptions, jointly known as the ‘cosmological principle’, are just that: assumptions, and are of course challengeable – and there are indeed questions about them, not least one that asks how the universe’s properties had sufficient time to evolve (especially in the very early history of the universe) to be as they now are. This is known as the ‘horizon problem’. A currently persuasive answer to it is the ‘inflationary theory’ which hypothesises a much smaller starting point for the universe, and a much more rapid expansion from that point, together allowing the known laws of physics to explain how the universe’s properties arose. There are other less conservative answers, some requiring adjustments to Einstein’s equations, or more generally requiring acceptance of the idea that the values of what we now think of as the constants of nature (such as the speed of light) might have been different in the early universe.
One puzzle concerns whether the universe will continue to expand for ever, or whether gravity will eventually slow down its expansion and then pull it back into an eventual ‘Big Crunch’ – which perhaps, if the cycle repeats itself endlessly, will be a new Big Bang that starts everything over again. The answer depends on the density of the universe. This is estimated by working out the density of our own and nearby galaxies, and extrapolating the figure to the whole universe – which involves the assumption that the universe is everywhere homogeneous, something there is reason to doubt. This is the ‘observed density’. The ratio of this density to the ‘critical density’ – the density of the universe which, so it can be calculated, would eventually stop it expanding – is known as Omega. If Omega is less than or equal to 1, the universe will expand until it cools to the point of extinction (a ‘cold death’). If it is greater than 1, it will stop expanding and begin to contract, suffering a catastrophically explosive death in a Big Crunch. For reasons of theoretical convenience Omega is arbitrarily assigned the value 1, but measurements achieved by observation suggest that it is about 0.1, which, if right, predicts continual expansion to a cold death.
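The arithmetic behind Omega can be sketched briefly: the critical density works out as rho_c = 3H²/(8πG), and Omega is the ratio of the observed density to it. A minimal Python sketch, assuming approximate modern values for Hubble’s constant and Newton’s gravitational constant (neither given in the text above):

```python
import math

# Omega = observed density / critical density, where the critical density is
# rho_c = 3 * H0**2 / (8 * pi * G). Sketch with assumed approximate constants.

G = 6.674e-11                   # gravitational constant, m^3 kg^-1 s^-2
H0 = 70.0 * 1000 / 3.086e22     # ~70 km/s/Mpc converted to 1/s

rho_critical = 3 * H0**2 / (8 * math.pi * G)    # kg per cubic metre
print(f"critical density ~ {rho_critical:.2e} kg/m^3")   # ~9e-27 kg/m^3

def fate(omega):
    """Fate of the universe on the simple picture described above."""
    return "Big Crunch" if omega > 1 else "expansion to a cold death"

print(fate(0.1))   # the observed value quoted above -> cold death
print(fate(1.5))   # a denser universe -> Big Crunch
```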
Although the Big Bang theory is the one most widely held by cosmologists, it leaves many questions unanswered, so that research into the origins of the universe remains a highly active field, and neither the theory itself nor the various supplementations and emendations of it are uncontroversial.
One historical rival to it, mentioned above, is the ‘Steady State’ theory put forward by Fred Hoyle, Hermann Bondi and others. This theory hypothesises that the universe exists eternally at the same average density, with new matter being spontaneously generated in galaxies at a rate which equals the rate at which distant objects become unobservable at the edge of the expanding universe. (Hoyle and Bondi accepted that the universe must be expanding because in a static universe stellar radiation could not be dispersed, and space would heat up, eventually destroying the universe.) The rate of appearance of new matter required for the Steady State need only be very small – just one nucleon per cubic kilometre per year.
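That figure can be checked with back-of-envelope arithmetic: in an expanding universe of constant density rho, matter must appear at a rate of roughly 3H·rho per unit volume to keep the density steady. A sketch, assuming approximate modern values and taking the density near the critical value (both assumptions, not figures from the text):

```python
import math

# Back-of-envelope check of the Steady State creation rate. To hold the
# density rho constant while space expands, matter must appear at roughly
# 3 * H * rho per unit volume. Constants are assumed approximate values.

G = 6.674e-11
H = 70.0 * 1000 / 3.086e22            # ~70 km/s/Mpc, in 1/s
rho = 3 * H**2 / (8 * math.pi * G)    # take density near the critical value

rate = 3 * H * rho                        # kg per cubic metre per second
per_km3_per_year = rate * 1e9 * 3.156e7   # kg per cubic kilometre per year

nucleon_mass = 1.675e-27                  # kg
print(f"~{per_km3_per_year / nucleon_mass:.1f} nucleon(s) per km^3 per year")
# Prints roughly 1, matching the 'one nucleon per cubic kilometre per year'.
```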
Apart from the discovery of the cosmic background radiation, which powerfully supports the Big Bang model, another reason for scepticism about the Steady State theory is that its assumption that the universe does not change appears to be refuted by the existence of quasars (quasi-stellar objects) and radio galaxies only in distant regions of the universe, showing that the earlier universe was different from how it is today. (Distance in space is equal to remoteness in past time; to look at far objects in space is to see into the history of the universe.)
There are a number of other rivals to the Big Bang theory, in the form of alternative models: the Symmetric theory, plasma cosmology, the ‘ekpyrotic model’, the ‘meta model’, ‘subquantum kinetics cosmology’ and others. These proposed alternatives have different degrees of plausibility – some of them are as exotic as they are imaginative; but one of them might be right. The universe is not guaranteed not to be a very strange place.
Such competitors to the Big Bang theory are motivated by the fact that it has many problems. Among the criticisms it faces are these. It has to adjust parameters, such as the cosmic deceleration parameter or those that relate to the relative abundance of elements in the universe, to conform to observation. It has to explain why the cosmic microwave background temperature is the residuum of the heat of the Big Bang rather than the warming of space effected by radiation from stars. It has to account for the fact that the universe has too much large-scale structure to have formed in just 12–15 billion years, thus needing the ‘inflationary’ hypothesis to render consistent, in a way that is ad hoc and untestable, the apparent age of the universe and the greater age needed for the formation of its structures. A particular example is that the age of some globular stellar clusters appears to be greater than the calculated age of the universe. Some observers claim that the most distant – and by hypothesis therefore the oldest – galaxies in the universe, those in the ‘Hubble Deep Field’, show a level of evolution discrepant with their supposed age. And perhaps most puzzling of all, the Big Bang theory requires that we accept that we know nothing about 80 per cent of the universe, which has to take the form of ‘dark matter’ to explain the distribution and relationships of observed galaxies and galaxy clusters, and moreover with another mysterious ingredient – ‘dark energy’ – pushing the universe apart.
What does all this tell us about the effort of humankind to make sense of itself in its world? Well: it very significantly reminds us of the power of the human intellect. Cosmology is a field that offers ample room for speculation, and science is as creative and imaginative an enterprise as any other, if not indeed more so. But it is not fancifully so: it is constrained by observation, evidence, publicly repeatable experiment, and mathematics. The Big Bang theory itself undergoes modification and adjustment constantly as new discoveries are made, new challenges and criticisms offered, and new hypotheses advanced in physics both at the cosmological and quantum scales. Whether it or one of its competitors – or perhaps a theory not yet put forward – is right, cosmology itself is a testament to the powers of observation and reason, and of creative intelligence, that drive the human mind to levels of understanding that are utterly remarkable.
But mention of the human mind reminds us that there is a social science – or rather: a science which combines as perceptively as it can the methods of both social and natural science – which expressly enquires into the nature of mind. This is psychology.
Of all the areas of human enquiry, psychology must be one of the broadest in scope. The name means the study of mind and mental life, but the variety of approaches and methods, and the specialist subdivisions of psychological interest, are numerous. The work of educational psychologists, developmental psychologists, industrial psychologists, neuropsychologists, psychiatrists, psychotherapists of all kinds from the Jungian and Freudian schools to cognitive behavioural therapy, and more, makes this region of enquiry highly diverse and rich.
As this in turn suggests, the targets of these enquiries are diverse too, from every aspect of the development and normal and abnormal functioning of perception, memory, learning, reasoning, intelligence, emotion, sexuality and more, to the social expression of and influence upon them of education, work, family life and relationships, and further to their physiological and neurological underpinnings. In the recent development of powerful non-invasive means of studying brain function, for example by means of fMRI, a great deal has been learned in neuropsychology about the physical correlates of psychological phenomena, though it is accepted on all sides that there are aspects of enquiry in psychology that cannot rely on brain imaging alone.
The mind and its workings have undoubtedly been of interest to human beings since the dawn of human experience, and we can see psychological theories symbolised in the myths and legends of antiquity – not least in the highly sophisticated and insightful mythology of ancient Greece – as well as in the literature of all ages. But it was in the eighteenth century that theories of mind and mental functioning first began to emerge systematically; the ‘associationist’ psychology developed by David Hartley and employed by David Hume in his philosophy is an example of the view, dominant at the time, that mental activity consists in the linking of ideas by associations of similarity or habit.
In the 1870s Wilhelm Wundt brought the study of psychology into the laboratory, a departure which marks the beginnings of scientific psychology proper. Important contributions were made by William James in his Principles of Psychology (1890) and by Ivan Pavlov (1849–1936), who demonstrated the conditioned response by – famously – training dogs to salivate at the sound of a bell.
A different impulse was given to psychology by Sigmund Freud and Carl Jung, younger contemporaries of James and Pavlov, in the direction of clinical psychology, addressing the ambiguous field of human experience between the ‘normal’ psychological range and the kind of mental pathology that could then be adequately handled only by putting its victims into lunatic asylums.
Whereas the diverse approaches of James and Freud relied on introspection and subjective data, another powerful school of thought, the behaviourist school, insisted on a different approach. As a technical term in psychology, ‘behaviourism’ denotes the thesis, first put forward by J. B. Watson in 1913, that mental phenomena should be explained wholly in terms of objectively observable behaviour, thus attempting to make psychology an empirical science. His most notable and more radical successor was B. F. Skinner. Their key idea was that behaviour, in humans as much as in animals, consists in conditioned responses to external stimuli, and that all psychological phenomena can be accounted for in these terms.
A major implication of their view for the debate about the relative importance of nature (innate mechanisms) and nurture (experience) in learning is that it comes down strongly on the side of nurture. On this view what is learned is the result of conditioning, for which the mammalian central nervous system is in general apt, conditioning being the only mechanism in play – in contrast to the view that a mind consists of many separate mechanisms adapted to handling different types of problem.
A distinction between ‘methodological’ and ‘scientific’ behaviourism is significant here. The former insists that psychology must be an empirical discipline reliant on publicly observable evidence, not on the subjective avowals of the human subjects being studied. ‘Scientific behaviourism’ is more stringent, in requiring that psychology should concern itself exclusively with the formulation of laws correlating input (stimuli) and output (behaviour), ignoring all talk of ‘inner’ mechanisms and processes. This means having to do without such concepts as intention and attention, motive and memory.
Today most empirical psychology takes its cue from ‘methodological’ behaviourism. The key advance in empirical psychology lies in cognitive science, which draws on advances in computing, philosophy and artificial intelligence as well as the various branches of psychology themselves, together with the development of brain-scanning techniques allowing real-time investigation of brain function correlated with the performance of mental tasks and the occurrence of emotional and cognitive responses. Along with greatly increased knowledge about the physical and psychological effects of brain injury and disease, these advances have pushed psychology and its related fields into an area far in advance of anything that could have been imagined by the eighteenth-century ‘associationists’.
Do the advances in psychology threaten to displace the need for the humanities in achieving insight and understanding of human nature and the human condition? Again the answer is ‘No’, just as with the implications of biological understanding that places human nature in nature, and the cosmological perspective on our place in the cosmos. This is because the insights of psychology illuminate rather than diminish the richness of those aspects of our mental life which produce the ultimately value-bearing activities that we cherish. To know more about how we learn and perceive, remember and reason, does not undermine the objects of perception, enquiry and memory. To know more about how the intricacies of mental life can be disturbed by illness, injury and age, is to help us recover or retain the powers that make life most worth living. To enquire into genius and creativity is fascinating; more, it might help us to unlock yet further potentialities in the human psyche, so that we can educate better, and more abundantly benefit from what the mind can do.
My emphatic view is that just as science itself is the outcome of humanity’s creativity and intelligence – is itself one wonderful form of a great artistic and humanistic achievement in its own right – so it is no threat to those other aspects of arts and humanities which more directly offer us the many interpretations of what we might call the poetry of life, its meanings and its possibilities: among which the furtherance of the scientific adventure itself is a central part.