British Archaeology, no 21, February 1997: Features


Chew, chew, that ancient chewing gum

A slovenly modern habit? Or one of the world's oldest pastimes? Elizabeth Aveling explains

The chewing of gum is often thought of as a modern habit, imported to Europe from America this century. In fact, however, chewing gum has a long history stretching back at least 9,000 years, and tar-like materials were commonly chewed throughout much of northern Europe from at least the Early Mesolithic period.

Examples of black lumps of tar with well-defined human tooth impressions have been found at several waterlogged bog sites in northern Europe, notably in Germany and Scandinavia. These sites span a period of about 5,000 years, the earliest dating from the beginning of the Mesolithic. Now, in a research project at the University of Bradford, chewed tar from a number of sites has been analysed and found most probably to have come from destructively heated birch bark. The tar does not appear to have been mixed with any other materials, and chewing gums from different sites and periods were found to be remarkably similar in composition.

Birch bark tar was not just chewed during the Mesolithic and later times, but also had other uses, for example as a waterproofing material and as a hafting agent. Birch bark tar was used as the glue on the axe belonging to the `Ice Man' - the Copper Age mummy found in an Alpine glacier in 1991 - and although no chewing gums have been found in Britain, cakes of birch bark tar were found at the Mesolithic site of Star Carr in Yorkshire. In Britain birch bark tar seems to have fallen out of favour in later times, and by the Roman period its use was rare. In other countries, especially in Scandinavia, Germany, and Eastern Europe as far as Siberia, birch bark tar continued to be used for a variety of purposes until relatively recent times.

So, why were prehistoric people chewing birch bark tar? A number of researchers have wondered about possible narcotic properties of the tar, and parallels have been drawn with the practices of betel nut or tobacco chewing, for which there is abundant ethnographic evidence. Both of these contain well known addictive stimulants, but whether birch bark tar contains any potential stimulants or addictive substances has yet to be demonstrated. No narcotic effects have been observed by those who have experimented with chewing the tar.

That birch bark tar was chewed for its medicinal properties is another possibility. Birch bark tar contains compounds which could serve as disinfectants, and these might be slowly released during chewing. There are historical records referring to the use of birch bark tar to relieve sore throats. Another suggestion is that herbs or roots used to relieve toothache were pressed into tooth cavities using a piece of tar. At the 6,500-year-old site of Bökeberg in Sweden a piece of chewing gum has been found with the tooth impressions of a 30-40-year-old with a cavity in one tooth. By chewing the gum, he or she may have been treating the ailment. It could also be that chewing birch bark tar was an early form of dental hygiene. It is common knowledge today that chewing gum between meals helps to reduce the build-up of plaque.

It may also have been chewed purely for enjoyment. Although the taste cannot be described as pleasant, neither is it entirely unpleasant - and who knows what appealed to the Mesolithic palate? A pattern that has emerged from studies of the tooth impressions is that the majority of chewers were children aged 6-15. This is the age range during which the milk teeth are lost, so it may be they chewed on tar to help remove loose teeth and reduce the pain of teething. Alternatively, children may perhaps have been given birch bark tar to chew in the same way children today are given sweets as pacifiers.

It is puzzling, however, why birch bark tar should have been favoured as a chew over materials requiring less preparation, such as pine resin. Pine species were common in northern Europe during the Mesolithic, and there is abundant historical evidence of pine resin being used as a chewing gum until relatively recent times, especially in Scandinavia. Could it be that birch bark tar was a special, maybe even a ritual, material? How the tar was produced in a pre-ceramic era is also a mystery. Experiments have shown that tar begins to form at 807°C, but is produced efficiently only at a much higher temperature. The bark must also be heated as far as possible in the absence of air, otherwise the bark simply chars and no tar is produced. From the Neolithic onwards, sealed pots were available, but no evidence has been found to suggest how the process was achieved in earlier times. Modern attempts to produce tar by combining birch bark with heated stones in a pit have been unsuccessful.

Elizabeth Aveling is a research student at the University of Bradford




In sorrow shalt thou eat all thy days

Many hunter-gatherers never wanted to farm, argues Peter Rowley-Conwy

Archaeologists have traditionally placed hunter-gatherers at the bottom of the social evolutionary heap. Some have given the impression that the most interesting thing about hunter-gatherers is that they finally gave it up and started farming - the only question is why it took them so long.

But contrary to popular belief, agriculture is not an inevitable advance. We call hard but boring work `the daily grind', a reference to milling cultivated grain, and current research is showing that you didn't take up farming unless you had to. This is quite clear from the archaeological record. Until very recently archaeologists were scouring the Near Eastern Epipalaeolithic (c 20,000-10,000BC) for the earliest traces of farming, because that was where they assumed you should find the first developments towards full Neolithic agriculture. The classic weasel word `incipient' sometimes crept in - you could claim `incipient agriculture' without having to specify what you meant or to produce much evidence. When grains of cultivated barley were found at Wadi Kubbaniya in Egypt, it was proclaimed that people were `already' farmers 16,000 years ago. But when the grains were radiocarbon dated they turned out to be modern, probably carried into the early layers by ants.

All the evidence is that the final hunter-gatherers in the Near East were just that - hunter-gatherers, with no thought of incipiently becoming anything else. Agriculture was apparently forced on them by a short sharp period of drought, which threatened the productivity of the wild resources they had been collecting. One response was to replant seeds of the wild grasses, in the hope that this would assure supplies. It was their bad luck that harvesting and replanting caused a genetic change in the grasses - a non-shattering seedhead. Plants that shattered dropped their seed before harvest, so it was mainly the grain of non-shattering plants that was gathered and resown. Once this happened the plants could no longer reproduce by themselves, but for ever had to be replanted by humans - an unforeseeable catastrophe.

Europe at this time was populated by Mesolithic hunter-gatherers. We know that farming from the Near East was to replace their way of life, but Mesolithic people of course did not know this. The Mesolithic is sometimes presented as a period of progress leading up to agriculture, with simple groups (ie, nomadic, egalitarian) in the Early Postglacial, followed by complex ones (more sedentary, socially hierarchical, using cemeteries) in the later Mesolithic, which appeared just in time to move on to the next stage - Neolithic agriculture.

But this is faulty reasoning - what would complex groups have done if Near Eastern agriculture had not conveniently arrived? It also goes against the evidence. Sedentary groups who buried their dead in cemeteries are found in the later Mesolithic, to be sure; some created the Ertebølle shell middens of Denmark, and those at Muge and Sado in Portugal. But these people weren't interested in agriculture. The Ertebølle stuck to its hunter-gatherer (and fisher) way of life for over 1,000 years after making contact with nearby farmers. The Muge and Sado groups ignored the new economy for nearly as long, even though they were surrounded by farmers, some of whom were building megaliths just a few tens of kilometres away. So the complex groups, often said to be en route to farming, are in fact the ones that held out longest - exactly the opposite of what the progress theory would predict.

In the Baltic there is even evidence that the earliest farming was jettisoned and people reverted to hunting and gathering. The island of Gotland is a superb laboratory for examining this. In the Early Neolithic the island was occupied by farmers with sheep, cows, pigs, cereals, and even a token megalith. But in the Middle Neolithic, the Pitted Ware inhabitants moved back to the coast, hunted seals, and fished; of the land mammals only pigs remained, and their bones are very large and show a classic seasonal hunting pattern - though it is not known whether they were feral descendants of the earlier domestic pigs, or a deliberate introduction of wild boar for hunting. The latter may have been quite a common practice - consider the red deer that were released on Sardinia at the start of the Neolithic, or the fallow deer on Cyprus, Crete and Rhodes.

So why are the complex groups found mostly in the Late Mesolithic? I argue that it is to do with the survival of evidence. Such groups are based on plentiful resources, which coastal regions more often provide. Early Mesolithic and Upper Palaeolithic coastlines are now under water because of the Postglacial sea-level rise, so complex hunter-gatherer groups of those periods are just not visible. The coasts of southern Europe would have been highly productive; can we really claim that the painters of Altamira and Lascaux were somehow too primitive to exploit them?

Dr Peter Rowley-Conwy is a Reader in Archaeology at the University of Durham




Interpreting the landscapes of battle

Landscapes chosen for battle reflect the ideology of the age, writes John Carman

History can be made anywhere and in any way, but the history we remember and are generally taught in schools is very often that of battles - Hastings (1066), Agincourt (1415), Waterloo (1815), Normandy (1944), and the rest. A revival of military history has stimulated interest in ancient and modern battles as events, and historic battles have become a vital part of our national heritage.

While the places where those events took place - the battlefields themselves - are usually only considered in passing by students of warfare, the creation of the category of `historic battlefield' (through English Heritage's 1995 Battlefields Register) invites us to examine battlefields in their own right. They have a great deal to tell us.

The type of landscape where battles were fought has changed greatly over time. Three phases can be recognised, from the earliest periods of organised warfare, through early modern battles, and on into our own age. At Maldon (991) the battlefield is flat and featureless, generally typical of warfare from Megiddo (1469BC) - the earliest battle of which we have record - to the end of the Middle Ages. At Maldon, two forces of heavily armed and armoured foot warriors slugged it out hand to hand in bloody mêlée, Anglo-Saxons unsuccessfully defending their land and homes from marauding Norsemen. The flat ground was integral to this style of fighting and type of war. The Saxons allowed the Norsemen space and time to deploy, giving up the advantages of both surprise and room for manoeuvre in order to engage in combat.

The same idea can be seen in other places at earlier times. At Marathon (490BC) the Athenian hoplites waited eight days before charging into the more numerous Persians, whose strength had grown over that time; and at Plataea (479BC) eight days also passed before the Greek army abandoned its strong positions in the hills to deploy onto the empty plain to meet its larger foe. For the ancient Greeks, it seems, battle with a foreign enemy could perhaps only take place after eight days had passed; and in both these cases strategic advantages were given up in order to use the battlefield in the `correct' way.

The Greeks' internecine wars were also governed by rules, with a fixed sequence of events - invasion of territory; the pointless infliction of limited damage to agricultural production; the mustering of an army; and on the day of battle the shock of a sudden charge followed by an intense but very short period of hand to hand butchery.

Flat featureless space was the landscape of macho fighting men and highly ritualised warfare where the architecture of battle was created by bodies of armed fighters.

The landscape of battle began to change by the end of the Middle Ages, albeit slowly. At each of the three great English victories of the Hundred Years' War against France - Crécy (1346), Poitiers (1356) and Agincourt (1415) - apparently flat space was sought to fight on, neatly bounded by woods or obviously wet ground, an ideal theatre for the head-on clash of armoured horsemen and foot soldiers. In all three, the French mounted men-at-arms, encased in heavy armour, recklessly charged home, foundered in mud and confusion, and died not only under accurate fire from longbows, but also of drowning, heat-stroke, dehydration and exhaustion. They never learned - charging home was how war was done.

By the 17th century, however, gunpowder ruled the battlefield and landscape features were coming into use, although the majority of battles of the English Civil War were fought on relatively featureless ground. At Edgehill (1642) an attempt to use the advantage of a steep slope as an obstacle failed because the enemy refused to attack; the Royalist forces occupying the hill had to deploy in the valley below to meet the enemy. They need not have done - strategy simply required blocking the Parliamentarian advance, an objective already met by placing the Royalist army in its high position. By moving downslope the Royalists brought on battle, but without military necessity.

The use of landscape features was more prominent at Naseby (1645), where dismounted dragoons were stationed along a fence to fire into the flank of enemy cavalry. From this period on, features which provide obstacles to movement, cover and places to deploy are highly prominent in the battlefield landscape. At Waterloo (1815), hillslopes, buildings and sunken roads were essential components of the battlefield space.

It is no accident that the 18th century is also the period of the Enclosure Acts, the division of common land into private plots, of Model Farms, of Capability Brown and the landscape garden - where landscape features themselves take on significance and meaning. In early modern history, the landscape of battle is made up of bodies of troops and landforms in conjunction.

The third period of battlefield landscapes emerged during the course of the 19th century, together with the increasing industrialisation of the modern world. In the United States, the American Civil War (1861-1865) saw the first battlefield use of barbed wire and extensive entrenchment. Sherman's `March to the Sea' (Atlanta to Savannah, 1864) brought the deliberate devastation of territory. The fighting around Port Arthur between Russia and Japan (1904-1905) demonstrated the effects of the machine gun on infantry. By the time of the Battle of the Somme (1916), military technology could encompass the destruction of everything within the battle area - soldiers, trees, buildings, whole hillsides. The featureless landscape had returned, not chosen this time but made by destruction on a truly industrial scale. The power of the nuclear blast is the ultimate expression of this mode of war-making.

Battles are essentially material phenomena because they comprise the coming-together of three material elements - the landscape of the place of battle; the technology of a particular style of warfare; and the people who fight. The type of landscape chosen or constructed for battle reflects the attitude of its age towards how war should be conducted.

The relationship between landscape form and the technology brought to it determines the precise tactics of the battle. The relationship of technology and soldiers determines systems of movement and control for that battle. The relationship of landscape with the soldiers converts that landscape into a place with meaning.

Where landscape as a category meets the idea of place, the location begins to evolve a cultural significance. The battlefield becomes a receptacle of memory of the event that took place there, and the people who were present become associated with the place; it becomes part of their identity. Where memory meets identity, we enter the field of heritage, where the category of `historic battlefield' finds its home.

The form of battlefield landscapes gives expression to the ideology of a particular period. As temporal and spatial lenses focusing the concerns of an age, they are the places where issues of political, moral and legal legitimacy were contested and decided. They express attitudes towards life, death, place and landscape. They are places where questions of identity - local, regional, national, professional, individual - were resolved. They formed and disrupted social bonds. Battlefields have hitherto suffered from a lack of interest as landscapes for study, but their historical importance gives them a strong claim to further investigation.

Dr John Carman is a Research Fellow in Archaeology at Clare Hall, Cambridge




© Council for British Archaeology, 1997