Despite its corroboration of the view that ours is not the only way of regarding time, Whorf's study of the Hopi provides powerful evidence of a universal connection between time and language. Similarly, despite the great diversity of existing languages and dialects, the capacity for language appears to be identical in all races. Consequently, we can conclude that man's linguistic ability existed before racial diversification occurred.
In a famous paper on 'The Problem of Serial Order in Behavior', delivered at the Hixon Symposium on Cerebral Mechanisms in Behavior in 1948, the American physiological psychologist K. S. Lashley argued that the organizing principle underlying the problems of syntax in speech and language is essentially rhythmic in nature, a view which is now generally accepted. Lashley's pioneering investigation of the temporal aspects of language has been developed further by the American physiologist E. H. Lenneberg, notably in his seminal book Biological Foundations of Language, published in 1967. Lenneberg has pointed out that many physiological processes which might be thought to have no temporal aspect do, in fact, exhibit one: for example, in the process of seeing, which appears to be instantaneous, time plays a role, for the identification of even the simplest shapes requires temporal integration in the nervous system. Like Lashley, he believes that the foundations of linguistics are to be found in our anatomy and physiology. In his words, 'Language is best regarded as a peculiar adaptation of a very universal physiological process to a species-specific ethological function: communication among members of our species.'1 Lenneberg came to the conclusion that human articulation involves a basic periodicity of about six cycles a second (with a possible variation of up to a cycle from one individual to another), and he showed that a great variety of phenomena could be explained by this hypothesis.
In his Clayton Memorial Lecture on 'Some Aspects of Speech', which he delivered to the Manchester Literary and Philosophical Society in 1959, C. M. Bowra pointed out that the vocabularies of most primitive peoples are much more extensive than those used by modern sophisticated Europeans and that the reason for this is that, although they have no words for abstract concepts, they tend to be extremely subtle in their detection of fine distinctions in the visible world, which they denote by separate words. Their highly complex languages suit them very well so long as they are not obliged to come to terms with novel and unprecedented conditions. Since the equilibrium between survival and starvation in which they normally live is often finely balanced, it is not surprising that they usually consider it dangerous to deviate from their traditional customs and habits. Because they tend to adapt their lives and way of thinking to circumstances which they believe to be immutable, their rules and customs inevitably become rigid. Consequently, their languages, which are intimately adjusted to their way of life, tend to prevent the free movement of their minds into new regions of experience. As Bowra points out:
In so far as these languages change, and they certainly do, it is towards an ever greater elaboration in their own special methods of dealing with individual impressions and with the finer shades of difference in social relations. It is not surprising that men who spoke them were quite unable to understand what was happening when white men shot them for breaking rules which were to them totally unintelligible.2
It is now generally recognized that language is man's most outstanding characteristic. The possibility of human language seems to have depended not only on the potentialities of the vocal tract in man but also on the development of Broca's area in the neo-cortex. This area is thought to be concerned with the regulation of sequences of sounds. If this is so, the apparent lack of such an area in the brain of other primates may explain why the calls of these animals are not formed by varying the order in time of elementary units.3
Children are born with a general facility for language in so far as they exhibit an irresistible drive to express themselves. The babbling of infants is a spontaneous reflex activity, broadly similar to the uncoordinated movements of their limbs. It is an obvious, but none the less remarkable, fact that every normal child has the inherent ability to produce all the sounds of every language in the world, of which there are several thousand. Nevertheless, it is only the child's 'mother tongue' which he learns to speak spontaneously, and he must begin to do this before the age of about 6, as has been shown by the failure of the so-called 'wolf children', deprived of contact with other human beings before that age, to learn to speak. Every other language that one tries to learn later requires a special effort. Even so, language-learning comes easily only to some. The maximum number of languages that any one man is definitely known to have acquired is just under sixty. The famous orientalist Sir William Jones (1746-94) is said to have known over forty.
The reason why speech is based on sound, rather than gesture, is probably that hearing is the sense most closely related to time. Nevertheless, although sound is transitory, the development of language originally depended on man's recognition of long-enduring objects to which names could be given, for there is good reason to believe that the introduction of verb-tenses was a comparatively late development. Our knowledge of the evolution of language is necessarily confined to written records, but they support this conclusion. For example, in Middle Egyptian of about 2000 BC, the 'tenses' were concerned with the repetition of the notion expressed by the verb rather than with the temporal relation of the action concerned to the time associated with the speaker. This was not just a peculiarity of Middle Egyptian, for we find that in other ancient forms of language the dominant temporal characteristic was duration rather than tense. Indeed, it is only in Indo-European languages that distinctions between past, present, and future have been fully developed. In Hebrew, for example, the verb treats action not in this way but as either incomplete or perfected. Moreover, 'the future is preponderantly thought to lie before us, while in Hebrew future events are always expressed as coming after us.'4 On the other hand, already in archaic Greek we find evidence of verbal forms that discriminated between the tenses.
'Old English', the language spoken in England before the Norman Conquest, contained no distinct words for the future tense. Instead, the present tense was specially adapted for that purpose as and when necessary.
Suzanne Fleischman has drawn attention to the fact that the tenses we now use correspond to distinct mental activities: the past to knowledge; the present to feeling; and the future to desire and obligation, as well as potentiality. Owing to the stress laid by Christianity on moral obligation, it has been claimed that the rise of that religion was solely responsible for the introduction of new modal futures about the fifth century AD, but in her opinion no less importance should be assigned to the effect of the shift that occurred at about the same time in the basic word-order of Latin sentences from SOV (subject-object-verb) to SVO. She considers that 'an appeal to multiple causation, not ruling out the possibility of cultural determinants, may well prove to be the most satisfactory approach to the problem.'5
George Steiner has recalled the shock he experienced when, as a young child, he first realized that statements could be made about the far future. 'I remember', he writes, 'a moment by an open window when the thought that I was standing in an ordinary place and "now" and could say sentences about the weather and those trees fifty years on, filled me with a sense of physical awe. Future tenses, future subjunctives in particular, seemed to me possessed of a literal magic force.' He compares that feeling with the mental vertigo which is often produced by contemplating extremely large numbers, and draws attention to the interesting suggestion made by some scholars of Sanskrit, the oldest Indo-European language known, that 'the development of a grammatical system of futurity may have coincided with an interest in recursive series of very large numbers'.6
Be that as it may, it is clear that the origin of the concept of number, like the origin of language, is closely connected with the way in which our minds work in time, that is, by our being able to attend, strictly speaking, to only one thing at a time and our inability to do this for long without our minds wandering. Our idea of time is thus closely linked with the fact that our process of thinking consists of a linear sequence of discrete acts of attention. As a result, time is naturally associated by us with counting, which is the simplest of all rhythms. It is surely no accident that the words 'arithmetic' and 'rhythm' come from two Greek terms which are derived from a common root meaning 'to flow'. The relation between time and counting is further discussed in my The Natural Philosophy of Time.7
Time and natural bases of measurement
Most people, however primitive, have some method of time-recording and time-reckoning based either on the phases of nature indicated by temporal variations of climate and of plant and animal life or on celestial phenomena revealed by elementary astronomical observations. Time-reckoning, that is the continuous counting of time-units, was preceded by time-indications provided by particular occurrences. The oldest method of counting time was by means of some readily recognizable recurrent phenomenon, for example the counting of days in terms of dawns such as we find in Homer ('This is the twelfth dawn since I came to Ilion', Iliad, xxi. 80-1). In this method of time-reckoning, as M. P. Nilsson has remarked, it is not the units as a whole that are counted, since the unit as such has not been conceived, but a concrete phenomenon occurring only once within this unit. It is what he calls the 'pars pro toto method', so extensively used in chronology.8
A good example of this method is provided by the extended use of the word 'day'. The fusion of day and night into a single unit of twenty-four hours did not occur to primitive man, who regarded them as essentially distinct phenomena. It is a curious fact that even now very few languages have a special word to denote this important unit. Notable exceptions are the Scandinavian terms, for example the Swedish dygn, whereas in English we use the same word 'day' to denote the full twenty-four-hour period and also the daylight part of it. Instead of appealing to 'dawn' and 'day', some peoples count time by the number of nights. This may be because sleeping provides a particularly convenient time-indicator. A familiar relic of this in English is the word 'fortnight', a term which is now as obsolete in the United States as the word 'sennight' is in Britain.
To indicate a particular time in the period of daylight the sun can often be used, either by reference to its position in the sky or in some other way. Thus, the Australian aborigine will fix the time for a proposed action by placing a stone in the fork of a tree so that the sun will strike it at the required time. Many tribes in the tropics indicate the time of day by referring to the direction of the sun or to the length or position of the shadow cast by an upright stick, but before sunrise the natural phenomenon most widely used as a time-indicator is cock-crow.
A wide variety of conventions have been adopted for deciding when the day-unit begins. Dawn was chosen by the ancient Egyptians, whereas sunset was chosen by the Babylonians, Jews, and Muslims. The Romans at first chose sunrise but later midnight, because of the variable length of the daylight period. Dawn was the beginning of the day-unit in Western Europe before the advent of the striking clock in the fourteenth century, but later midnight was chosen as the beginning of the civil day. Astronomers, such as Ptolemy, found it more convenient to choose midday, and this remained the beginning of the astronomical day until 1 January 1925 when, by international agreement, the astronomical day was made to coincide with the civil day.
Besides the day the other most important natural unit of time is the year. Nevertheless, although each year normally presents the same cycle of phenomena, man only gradually learned to unite the different seasons into a definite temporal unit. This step was particularly difficult for people living in those equatorial regions where there are two similar half-years, each with its own seed-time and harvest, since a 'year' was originally understood to be a vegetation-period. There is an important difference between the natural year, that is, the period of the earth's annual revolution around the sun, and the agricultural year. The former has no natural beginning or end, whereas the latter has. In Old Norse, German, and Anglo-Saxon, years tended to be reckoned in winters. The reason for this practice, which was of course rare in the tropics, was the same as that for counting days by nights, winter being a season of rest, an undivided whole, and therefore more convenient than summer with its many activities. Nevertheless, there were exceptions to this rule. For example, in Slavonic time was reckoned in summers and in English expressions such as 'a maiden of eighteen summers' were used, whereas in medieval Bavaria years were reckoned in autumns.
Time-indications from climatic and other natural phases during the course of the year are only approximate and tend to fluctuate from year to year. Greater accuracy is often desirable for agriculture, and it was recognized long ago that this could be provided by the stars, particularly by their rising and setting. Observation of these phenomena did not make great intellectual demands on primitive man, who rises and goes to bed with the sun. Experience teaches him which stars rise in the east just before the sun and which appear in the west at dusk and shortly afterwards set there. These 'heliacal' risings and settings, as they are called, vary throughout the year and can be readily correlated with particular natural phenomena. The stars therefore provide us with a ready and more accurate means of determining the time of year than any based on the phases of terrestrial phenomena. Just as the time of day may be revealed by the position of the sun, so the time of year can be determined by means of heliacal risings and settings, and this can form the basis of a calendar. The time of year can also be determined approximately by observing the position of easily recognizable stellar groupings, notably the Pleiades.
Although the stars can help man to determine the seasons, they do not enable him to divide the year into parts. Instead, the moon has been used to produce a temporal unit between the year and the day. Moreover, unlike time-indications from natural phases and the stars, the moon's waxing and waning provide a continuous means of time-reckoning. Consequently, the moon can be regarded as the first chronometer, since its continually changing appearance drew attention to the durational aspect of time. Although the concept of the month is much more readily attained than that of the year, it is difficult to combine the two satisfactorily, because the solar period is not a convenient multiple of the lunar period. So long as the beginning of the month was determined by observing the new moon, the month was based on lunations, but they are inconvenient for measuring time, since it is the movement of the sun that determines the seasons and the rhythm of life associated with them. As a result, our system of months no longer has any connection with the moon but is a purely arbitrary way of dividing the solar year into twelve parts. Our present concept of the year can be traced back to the Romans and through them to the Egyptians, who disregarded lunation as a time-measure.
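The awkwardness of combining months and years can be seen from the figures themselves. As a rough sketch (using modern mean values for the lunation and the solar year, which are not given in the text):

```python
# Why the solar year is not a convenient multiple of the lunar month.
# The figures are modern mean values, quoted here only for illustration.
synodic_month = 29.530589   # mean days from one new moon to the next
tropical_year = 365.242190  # mean days in the seasonal (solar) year

# How many lunations fit into one solar year?
months_per_year = tropical_year / synodic_month
print(round(months_per_year, 4))   # about 12.3683 -- not a whole number

# Twelve lunar months fall short of the solar year by about eleven days,
# so a purely lunar calendar drifts steadily against the seasons.
shortfall = tropical_year - 12 * synodic_month
print(round(shortfall, 2))         # about 10.88 days per year
```

This roughly eleven-day annual drift is why calendars based on observed lunations could not stay aligned with seed-time and harvest, and why the months of the civil calendar eventually lost their connection with the moon.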
As regards intervals of time shorter than the year and the day, primitive people have often made use of convenient physiological intervals such as 'the twinkling of an eye' or occupational intervals such as the time required for cooking a given quantity of rice. Indeed, man's unwillingness to abandon natural bases of measurement was for long a hindrance to the development of a scientific system of timekeeping. This is particularly evident in the case of the hour. The division of the daylight period into twelve parts was introduced by the Egyptians, who first of all divided the interval from sunrise to sunset into ten hours and then added two more for morning and evening twilight respectively. They also divided the night into twelve equal parts. These 'seasonal hours', as they are called, varied in duration according to the time of year. This practice, though less inconvenient in countries like Egypt than in more northerly places, introduced an unnecessary complication into the development of the water-clock and was quite impracticable for scientific astronomy.
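The variability of these seasonal hours is easy to illustrate. The sketch below assumes hypothetical daylight lengths for a mid-latitude site; only the twelve-fold division of daylight comes from the text:

```python
from datetime import timedelta

def seasonal_hour(daylight: timedelta) -> timedelta:
    """One 'seasonal hour': a twelfth of the daylight period,
    on the Egyptian division of sunrise-to-sunset into twelve parts."""
    return daylight / 12

# Hypothetical daylight lengths for a mid-latitude site (assumed values):
summer_day = timedelta(hours=16)   # long midsummer daylight
winter_day = timedelta(hours=8)    # short midwinter daylight

print(seasonal_hour(summer_day))   # 1:20:00 -- a summer hour of 80 minutes
print(seasonal_hour(winter_day))   # 0:40:00 -- a winter hour of 40 minutes
```

At these assumed latitudes a summer hour would be twice the length of a winter one, which makes plain why such hours frustrated both clockmakers and astronomers.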
Time in contemporary society
What particularly distinguishes man in contemporary society from his forebears is that he has become increasingly time-conscious. The moment we rouse ourselves from sleep we usually wonder what time it is. During our daily routine we are continually concerned about time and are forever consulting our clocks and watches. In previous ages most people worked hard but worried less about time than we do. Until the rise of modern industrial civilization people's lives were far less consciously dominated by time than they have been since. The development and continual improvement of the mechanical clock and, more recently, of portable watches has had a profound influence on the way we live. Nowadays we are governed by time-schedules and many of us carry diaries, not to record what we have done but to make sure that we are at the right place at the right time. There is an ever-growing need for us to adhere to given routines, so that the complex operations of our society can function smoothly and effectively. We even tend to eat not when we feel hungry but when the clock indicates that it is meal-time. Consequently, although there are differences between the objective order of physical time and the individual time of personal experience, we are compelled more and more to relate our personal 'now' to the time-scale determined by the clock and the calendar. Similarly, in our study of the natural world, never has more importance been attached to the temporal aspects of phenomena than today. To understand why this is so and how it has come about that the concept of time now dominates our understanding of both the physical universe and human society, no less than it controls the way we organize our lives and social activities, we must examine the role that it has played throughout history.