John A J Gowlett & Richard W Wrangham. Azania: Archaeological Research in Africa. Volume 48, Issue 1, 2013.
The question of early fire use has recently become topical after a long period in which fire scarcely featured in general anthropological and archaeological texts and contributions were rare. Interest has been stimulated by a series of new archaeological finds, but also by major inputs from other disciplines, including primatology and evolutionary psychology. However, the study of fire in human evolution is not a mature science. It remains controversial, with a wide range of views about the dates of first control and about the driving forces governing fire use. We review these here, looking particularly at the African picture, but also considering apparent discrepancies with the European record which have come to the fore recently. This paper represents an effort by an archaeologist/palaeoanthropologist and a primatologist/biologist to reconcile two of the viewpoints in a more integrated approach. We set out to make a new evaluation of the African record, just as Roebroeks and Villa (2011) have made a new survey of the European evidence. Dates for the earliest occupation of northern latitudes (up to 40°N at Dmanisi in Georgia) have doubled in recent years to almost two million years (Gabunia et al. 2000). Roebroeks and Villa indicate that the majority of scholars regard the use of fire as a necessity for such occupation, while disagreeing with this view themselves on the basis of their analysis. Their analysis works from direct archaeological fire evidence rather than from general considerations about early human adaptation: in a frequency study they found that in Europe sites with fire evidence begin in small numbers in Marine Isotope Stages 9-11 (up to 400,000 years ago), increasing steadily in later interglacials. The authors took this pattern as militating against earlier fire use in Europe.
Our paper has a very different standpoint but a commonality in recognising the difficulties of all existing positions. It thus works towards a more mature pattern-based approach to early fire studies (cf. also Alperson-Afil and Goren-Inbar 2010) rather than engaging in an exercise of claim versus counter-claim.
Fire studies have been tentative largely because they have depended so much on rare chance finds, but also because the ideas expressed have often lagged in taking into account findings in other disciplines. In this account we highlight two factors that frequently influence judgements. First come the implications of a huge bias in archaeological preservation, meaning that our surviving evidence simply does not accurately map the past. Second is the prevalent yes/no, presence/absence approach to fire, which concentrates on hearths rather than dealing with the likelihood that the earliest use of fire by hominins occurred in interaction with natural fire and may not even have included deliberate or systematic hearth use in its first stages (Burton 2009).
Even allowing for these factors, however, the distribution of early fire evidence presents major puzzles that may call for new hypotheses in interpretation. In this paper we seek to address these issues formally, observing that the past cannot be investigated by a single record, but that science must seek to reconcile and combine several lines of evidence. These include not just archaeological evidence, but evidence derived from a comparative view, illustrating complex changes in human makeup and behaviour that necessarily happened somewhere along the evolutionary trajectory (i.e. characters that were certainly absent in an ape ancestor, but that are unquestionably present in modern humans). Both primatological and genetic studies cast additional light on these developments (e.g. Green et al. 2010; Pruetz and LaDuke 2010).
To take models for fire use beyond the simple ‘present or absent?’ questions, we argue that fire must be set into more general resource and cognition models that adequately reflect the complexity both of advanced mammals and of the natural environment. Following the spirit of observations by Bickerton (2003: 78) on the origins of language, it should ‘given goodwill and a little effort’ be possible to take on board the key elements of several relevant disciplines and to accept that a working hypothesis has to maintain validity in all of them—it cannot fail in one.
It can be asked whether it is necessary to struggle now with records that are so dissonant. Why not simply leave each discipline on its own? Apart from the general necessity of interdisciplinary work in human evolution, we would argue that it is often the non sequiturs of the record that provide the loose ends for progress. Reflecting on the philosophy of research, Arber (1985: 59) saw the explanation of a phenomenon as ‘the discovery of its own intrinsic place in a nexus of relations, extending indefinitely in all directions.’ The idea of seeing something simultaneously ‘in its full individuality’ and as ‘one element in a larger whole’ harmonises well with recent archaeological approaches (e.g. Gamble and Porr 2005).
Hypotheses About Early Fire Use
We aim here to outline the major hypotheses and then to discuss evidence. These come from at least the following main areas:
Archaeological hypotheses have been shaped since the 1950s, when K.P. Oakley took a particular interest in the problem (Oakley 1956, 1961; Perlès 1975, 1977; James 1989). Since Oakley’s time they have been essentially minimalist, that is, with some exceptions of broader treatment, each piece of fire evidence has been approached outside a general framework, somewhat as a curiosity. A general acceptance that fire use would have been important is combined with a view that it would represent a significant cognitive leap, perhaps more than required for the development of stone technologies. Rolland (2000: 212) expresses a common view in stating that ‘a date as early as 1.6 m BP for fire-harnessing would require re-evaluating radically early hominid behavioural capabilities.’ For some authors there has also been a perceived gap between the human and the animal worlds, with Rolland (2002), for example, citing an absence of animal analogues, in contrast with the position for hard technologies. This gap is reduced, however, inter alia by bird fire-following (Berthold et al. 2001) and signs of chimpanzee fire-awareness (Pruetz and LaDuke 2010).
In this environment, claims for fire use on archaeological sites have tended to falter against the objection, ‘how do we know it isn’t wildfire?’ This crucial question needs, of course, to be addressed and given a balanced methodological context, by information about both wildfires and humanly controlled fires. In other words, to assert ‘it is wildfire’ is a claim requiring as much scientific knowledge and research involvement as the assertion ‘it is humanly controlled fire’. Neither is an easy option, considering that (for example) on later Palaeolithic sites hearths vastly outnumber documented cases of wildfire passing across sites.
To date, however, archaeology’s null hypothesis tends to be that fire-control demands advanced cognition not proven to be essential to early hunter-gatherer subsistence, and hence the former will not be accepted as a working hypothesis unless it can be demonstrated beyond all reasonable doubt. This current framework is admirably well suited for ruling out ‘false positives’, but in immediate terms it inevitably promotes an inaccurate picture. Support for this point can be seen in the general changes of Palaeolithic chronology through the last thirty years, in which the dates for many phenomena of technology or of settlement have at least doubled. Increasing the sampling has greatly extended the dates in many respects (e.g. for beads, in which the oldest dates have risen from c. 40 ka to c. 120 ka (d’Errico et al. 2005; Vanhaeren et al. 2006), or in the controlled heating of stone, which has gone back from the Upper Palaeolithic to c. 160 ka (Brown et al. 2009)). If archaeology worked on its own, this unfolding process would be of little importance, because we would merely be dealing with a somewhat concertinaed version of the reality in time. But independent dated perspectives are now available from the fossil record, genetics and evolutionary psychology, meaning that for an integrated science of human evolution the calibrations have come to matter far more. If these records give different answers (as we see below), then it becomes a matter of particular interest for scientific enquiry.
Most archaeologists would agree that a primary role of archaeology is to provide evidence that is reliable for interpretation, so that preferably we would make statements only if the material record demonstrates them with certainty. If so, it is fair to consider what constitutes an acceptable level of proof. In fire research there is no single generally accepted protocol for investigation, but archaeologists and archaeological scientists have worked to establish areas of expertise (e.g. Perlès 1975, 1977; Barbetti 1986; Bellomo 1993, 1994) that provide reliable answers. Bellomo (1993, 1994) set out to establish a general methodology for discriminating modes of fire. His work emphasised the main forms of fire—hearth fire, grass fire and stump fire—so as to provide a rulebook; his research concluded that the temperature of burning would be a good discriminator between the different modes of fire. Thus, his experiments showed that grass fires lead to transient high temperatures and tend not to bake through sediments and bones, while natural stump fires are held to be rare, because ignition was not easily obtained in the experiments. This leaves hearths recognisable as often bowl-shaped features, with sediments baked through at temperatures of usually c. 300-800°C. A preoccupation with discrimination through temperature is echoed in other work. It is certainly the case that controlled camp fires normally burn in the temperature range 300-800°C, but an idea sometimes expressed that grass fires and other wildfires do not overlap with this range is erroneous. Many grass fires move rapidly, achieving high temperatures only for a few minutes, so that sediments, artefacts or bones would not bake (Clements 2010, for example, records an experimental grass fire with high heat lasting some three minutes, combustion zone gases at c. 750°C, and a plume temperature of up to 295°C).
It is easy, however, for such fires to catch hold of more substantial vegetation, especially bushes with dry bushy stems (Gowlett, personal observation, at Florisbad, South Africa, 2008, and Kilombe, Kenya, 2011). Bellomo was, as we have already mentioned, inclined to dismiss the possibilities of stump fires, as these are not easily ignited in experiments. There is, however, ample evidence of them in forest fires, including falling tree boles, while in other examples a lightning strike caused a tree to burn back into its roots (Tutin et al. 1996). (At Kilombe an example of a stump burn in a bushfire is currently under study). In all these cases natural fire temperatures will be at least as high as those of campfires and discrimination by temperature appears very difficult.
These studies show that it is not easy to discriminate between all sources of fire because they can actually have near-identical forms. Fire is able to mimic—but not in quite symmetrical form. A human fire breaking free will resemble a wildfire, but a wildfire ‘burning small’ will not exactly mimic hearths. In certain circumstances, human fire use can be documented with certainty, even in the Lower Palaeolithic. Such instances occur especially where burning is highly localised and are ideally closely linked with artefacts. They include Schöningen in Germany, Beeches Pit in the United Kingdom and Kalambo Falls in Zambia. At Schöningen several hearths around one metre across are recognised, but, most importantly, a wooden stave—a digging or throwing stick—was found by one of them, with just one end burnt (Thieme 2005; see Lang et al. 2012 for an evaluation of the Schöningen dates, which appear to be around 300 ka). At Beeches Pit, dated to c. 400 ka, there are similar hearths; in this case an individual sat by one of them knapping a biface through a series of nearly thirty strikes. Just three flakes in the series moved forwards into the visible hearth area and only they are burnt red (Gowlett et al. 2005; Preece et al. 2006). African evidence from the same period is as strong, with a probable hearth recognised at Kalambo Falls, as well as individual burnt wooden artefacts (Clark 2001; see below).
In other cases where the direct artefact connection is less strong, such as at Terra Amata near Nice, France, (de Lumley 2009) and Gesher Benot Ya’aqov in Israel (Goren-Inbar et al. 2004; Alperson-Afil 2008; Alperson-Afil and Goren-Inbar 2010) the case is built up by very detailed work across a range of approaches, rather than by one generally agreed methodology. Alperson-Afil and Goren-Inbar (2010) list the numbers of lines of evidence available for different sites, noting burnt stones, burnt bones, burnt wood, burnt sediments, burnt shells and charcoal as separate categories. Microstratigraphic approaches can also often provide detailed information about their contexts (e.g. Berna et al. 2012).
Overall, where fire evidence is strong, but where highly specific evidence for human involvement is lacking, it seems difficult to resolve doubts once they have been expressed in the literature. Zhoukoudian in northern China offers an example of a site where argument has raged back and forth. Here, human fire control was long broadly accepted (Black 1934; Oakley 1956, 1964), then contested (Binford and Ho 1985; Binford and Stone 1986), then re-evaluated (Weiner et al. 1998; Goldberg et al. 2001) and evaluated further (Boaz et al. 2004). In this research much of the ‘burnt’ material was shown not to be burnt, but to result from mineral staining, and it was also shown that there was no evidence for formal hearths. Nevertheless, burnt bone occurs in this early site (Weiner et al. 1998) and it remains very likely that this results from human fire use rather than reflecting a pattern of natural burning. Reservations appear in many discussions of the site, partly on the grounds that studies of bone taphonomy there have shown a high carnivore involvement. The locality was a part-open cave used as a hyena den and the argument follows that bones of prey would have been burnt by natural fires of grass or other vegetation that swept through the site. Accounts such as Binford’s emphasising the faunal taphonomy do not, however, evaluate the point that more than 100,000 artefacts have been found through the various levels, marking a repeated and significant human presence. Carnivore activity cannot be taken to minimise the abundant evidence for human activity and the complexities of human and animal use can even be seasonal. The case of Zhoukoudian echoes Bickerton’s (2003) point that an interpretation must take into account all the main arguments and cannot fail in one.
In summary, archaeology as a discipline still has difficulties with the conundrum of ‘absence of evidence is not evidence of absence’. Any time this mantra appears it is a sign of frustration with the record, but it is also a challenge to engage with that record through alternative approaches and modelling, rather than simply accepting defeat.
Preservation and sampling remain absolutely central issues. We know from patterns in recent evidence that most traces of fire disappear most of the time. If we automatically accept the apparent picture as representing the reality, sampling factors alone would ensure that the early record largely disappears from serious consideration. Inevitably, archaeology is likely to show a mismatch with evidence from other disciplines. In particular, it will offer a contracted view, with sophisticated behaviours systematically taken as appearing later than they actually did. We must ask whether archaeology can address such problems (Gamble et al. 2011). It could perhaps seek something closer to the symmetry approach employed by David Clarke (1968) in his discussion of Occam’s razor and the inverse Occam’s razor. Occam’s razor urges adoption of the most parsimonious explanations without any unnecessary complexity; the inverse razor as discussed by Clarke cautions against the adoption of explanations that are simpler than we know to be the case.
Although fire studies are sometimes controversial, there are clearly cases where archaeology can resolve matters and a few basic points that might gain broad support:
• natural fire certainly comes first, occurring in areas frequented by humans (otherwise they would not have had their primary access to fire), with the effect that human fire will overlap with natural fire;
• hearths are the most valuable archaeological indicator of fire-use, but may be a small part of a general picture in which fire was also exploited on landscapes;
• in discriminating natural and human-controlled fire, traditional measures such as temperature are not always helpful, while localised and repeated burning are useful indicators and part-burnt single artefacts or artefact sets are the most powerful deciding evidence, with the very selective heating of humanly made material ruling out larger bushfires.
Dietary hypotheses relating to fire have been proposed a number of times (Brace 1995; Wrangham et al. 1999; Wrangham 2009, 2011) and reflect the major change of ecology that is strikingly obvious when comparing human and ape adaptations, especially those of the chimpanzee, our nearest living relative. Apes are essentially fruit- and foliage-eating animals of the rain forests. Chimpanzees have a broader-based diet than gorillas, extending to honey, insects, and meat, but fruits remain their preferred and dominant staple, with other soft vegetation used as fallback. In tropical rainforests ripe fruits are available at most times. Apes living in bush or savanna environments with lower rainfall and a more marked regime of wet and dry seasons would not be able to obtain an equivalent diet. In particular, fruit would be seasonal, creating feeding pressures in dry seasons. The addition of carbohydrates in the form of tubers, corms, bulbs, rhizomes and other plant storage organs would have significantly eased this problem for australopithecines (Laden and Wrangham 2005).
Just like any other animal, humans have traditionally been thought to be capable of thriving on a raw food diet. Based on that assumption the dietary specialisation associated with hominisation would have consisted merely of a change in composition, for example adding more meat (Milton 1999), extending variety (Ungar et al. 2006) or emphasising tubers (O’Connell et al. 1999). However, even when eating domesticated foods, women eating an all-raw diet tend to suffer chronic energy shortage and amenorrhea (Koebnick et al. 1999). Animals also tend to show superior growth or weight gain on diets of cooked foods, thanks to an increase in digestibility and a reduction in the costs of digestion (Carmody and Wrangham 2009; Carmody et al. 2011). Since wild foods have numerous energetic deficiencies compared to domesticated foods, including fewer sugars, less starch and more fibre, this suggests that raw wild foods would be an inadequate basis for population survival. Indeed, no cases have been reported of societies relying on raw food for most of their diet, nor are any individuals known to have survived for more than a few weeks on wild raw food. Accordingly, humans appear to be different from every other animal in needing their food to be of such high quality that in practice a large proportion of it must be cooked (Wrangham and Conklin-Brittain 2003). The only alternative is that prehistoric humans found highly digestible foods that could be eaten without thermal processing. While uncooked items such as honey, fruits, marrow and liver could well be adequate at times, there is no model for such foods being sufficiently available year-round to sustain a hunter-gatherer group.
The most obvious aspect of human biology associated with the requirement for a cooked diet is a reduced digestive system compared to great apes, including small molars, mouth, stomach and large intestine. The benefits of gut minimisation include less energy having to be spent in metabolic activity. Since these benefits could not have been realised until high-quality foods were available all year, tooth and gut reduction appear to be unambiguous markers of adaptation to cooking. Furthermore, the calorific benefits of cooked food are so large, the food-softening effect of cooking so strong, and the typical hunter-gatherer diet so difficult to ingest and digest in its raw forms that the adoption of cooking is expected to have a greater influence on tooth and gut reduction than any other dietary factors (Wrangham 2009).
The timing of the major reduction of the digestive system is therefore a provocative issue. Since both teeth and guts show their most striking reduction with the origin of Homo erectus, the cooking hypothesis argues that Homo erectus must have controlled fire (Organ et al. 2011). In support of this, the post-cranial anatomy of Homo erectus, including long legs and the modern shape of trunk and shoulders, suggests that this species climbed poorly. It was therefore unlikely to have been able to spend its nights in tree refuges and must surely have slept on the ground. A shift to ground-sleeping is hard to explain without the protective effects of the control of fire (Wrangham 2009; Wrangham and Carmody 2010). For chimpanzees the presence of leopards ensures the use of tree nests and in ancient savanna environments the predators were both varied and formidable. Similar factors would seem to apply to varieties of Homo erectus beyond Africa. Although early Homo from Dmanisi in Georgia retains primitive features at 1.8 Ma, and is relatively small bodied, it had already acquired adaptations for striding locomotion and its toothwear patterns suggest a broadening of diet (Pontzer et al. 2010, 2011). By this time, it can now be seen, the distribution of early Homo was so wide that it is certain that it must have mastered many environments (Clark 1993: 160) and sleeping in trees could not always have been an option.
At present archaeology’s longstanding awareness of social factors is coming into contact with other social hypotheses, in part derived from primate studies. They include concepts of Machiavellian intelligence, especially the idea of the Social Brain (Byrne and Whiten 1988; Dunbar 1998; Dunbar et al. 2010). The cooking hypothesis is primarily ecological and physiological, but it can be linked with social factors.
Central in all hypotheses—the cause for the search for an evolutionary ‘driver’—is the fact that the greatly enlarged modern human brain is energetically very expensive (Herculano-Houzel 2012): some process of selection must have made it pay off. Even the initial enlargement from c. 400 cm³ in australopithecines to c. 650 cm³ in early Homo represents a major change in costs. The social brain hypothesis seeks to explain the huge increase in hominin brain size through the Pleistocene, seen at different stages in several species. At each stage a larger brain has to work in terms of physiology, that is, its pressing energy needs must be supported and provide sufficient return in fitness for their extra costs. The social brain hypothesis—itself depending on ecological circumstances—postulates that there will be a selection pressure for optimum group size, and that—where group sizes rise in consequence—this pressure will push towards improved social cognition. Larger brains are thus largely about the management of relations in a complex social environment (Dunbar 1998; Dunbar and Shultz 2007) and language also appears to have evolved as part of this constellation of factors (Aiello and Dunbar 1993; cf. Ronen 1998).
The social brain hypothesis meshes easily with another idea, the expensive tissue hypothesis (Aiello and Wheeler 1995). This argues that in terms of metabolism the body cannot sustain a larger (more expensive) brain except by reducing other crucial tissues. Aiello and Wheeler suggest that this is the gut and that a reduction of gut size and energy costs could be achieved through the consumption of higher quality foods. Hence the idea that fire, as a major step in pre-digestion, would greatly ease the metabolic constraints on brain size (cf. Wrangham 2009). A closely related argument is that raw foods are so much less rewarding than cooked foods that they demand too much ingestion time for hominins with a human-sized brain to be able to satisfy their energy demands (Fonseca-Azevedo and Herculano-Houzel 2012).
In assessing these hypotheses a particular question is whether they point to changes at the same stage in the timescale. In effect, are they datable? The primary evidence here is more likely to be the hominin fossils, rather than the archaeology, since the former are reasonably well dated and can provide specific evidence about brain size, teeth and skeletal proportions, as well as a certain amount of isotopic evidence. The actual increase in brain size, and the costs of the brains, are the most incontrovertible parts of this survey. The gradient of change is also reasonably well dated as having been particularly strong in the period ~2.3-1.7 Ma (Schoenemann 2006). The relationships between hypotheses are less straightforward. The cooking hypothesis primarily provides a mechanism for delivering increased energy to larger brains. The expensive tissue hypothesis primarily explains how the larger brain might be resourced within a similar overall metabolism. The social brain hypothesis provides a set of reasons for having a larger brain: improved social cognition resulting from activity in larger group sizes required by ecological pressures. Other hypotheses can be interpreted as related. Thus, O’Connell et al. (1999) emphasise increasing diet breadth and the possible care provided by a grandparent generation, while Allen et al. (2005) argue that increased brain size correlates with long lifespan, and again may be related to the selective advantages conferred by grandparenting. As they admit the correlation with language, or at least increased sociality, these ideas actually interlink with those of the social brain. In summary, a large brain correlates with complexity, is expensive and requires a higher quality of food. Smaller teeth and much reduced chewing muscles are proof of the delivery of this; larger brains may be seen in some sense as both benefiting from, and allowing the release from, large teeth.
Do these hypotheses provide an alternative to archaeology? Some might argue that their general power could make the detail of archaeology redundant, but these hypotheses are not easy to disentangle and in scientific hypothesis testing nothing can replace actual data. Moreover, fire-use is also a sociocultural phenomenon and past cultural aspects can be investigated directly only from the archaeological record.
Evidence of Early Fire
The Archaeological Record
At present Africa provides the earliest sites that preserve traces of burning in one form or another.
The earliest fire evidence with human associations so far comes from the area of Lake Turkana in northern Kenya. At East Turkana some Oldowan localities date to >1.8 Ma, but they are far outnumbered by those of the somewhat later Karari complex (~1.6 Ma). Sites on the Karari ridge embrace a range of contexts, both channel and floodplain, and show extensive hominid activities across the landscape (Harris and Isaac 1997). The great majority of artefacts are made from lava cobbles, collected from the beds of streams that ran ephemerally then, as now, in the area. The site complex FxJj20 provides the key evidence for fire. It was excavated over large areas, with four main assemblages totalling around 4000 artefacts. All are found in fine-grained mud-silt horizons, but the shape of the distributions suggests some reworking by water. In FxJj20E, in the base of the horizon, four reddened patches were found, which are taken to suggest fire use, especially as thermally altered stones were also found (Harris and Isaac 1997: 165; Bellomo 1993, 1994). Similar areas were also found in FxJj20M (Bellomo and Kean 1997).
Bellomo and Kean (1997) present an analysis of the FxJj20 evidence, following Bellomo’s methodology. The oxidised patches are about the ‘right’ size for hearths and are fully oxidised to a depth of at least several centimetres. Magnetic susceptibility tests show readings 150-170% higher than those for adjacent sediments. Bellomo and Kean were convinced that at least the FxJj20E occurrences show unequivocal evidence for early human control of fire. In favour of this idea is the pattern of occurrences, which occur repeatedly within the artefact scatters, and do not occur in similar form elsewhere. Furthermore, Rowlett (1999, 2000) suggested that the presence of varied phytoliths in the Koobi Fora patches is consistent with the presence of controlled fire, rather than the natural burning of a trunk of a single species. Following this research carried out within excavation areas, Ludwig (2000) adopted a landscape approach around Lake Turkana, seeking to screen for burning in artefacts. He found that burnt artefacts occurred on land surfaces back to c. 1.5 Ma but not beyond and suggested a threshold in early human fire control at this stage.
Further early evidence of burning comes from Chesowanja in central Kenya and may be as old. The Chesowanja sediments lie on the eastern side of Lake Baringo (Carney et al. 1971; Bishop et al. 1975, 1978; Gowlett et al. 1981; Harris et al. 1983). In the mapped sequence both the older Chemoigut and younger Chesowanja Formations have similar gentle dips to the west, suggesting deposition within the same local basin. The basalt member at the base of the Chesowanja Formation has been dated to 1.42±0.07 Ma, an age that fits well with all other observed evidence, including the presence of Deinotherium bozasi and Australopithecus boisei in sediments underlying the basalt (Bishop et al. 1978).
The Oldowan occurs at a number of levels in the Chemoigut Formation. The evidence for burning comes from the largest excavated occurrence, GnJi 1/6E, which uncovered around 700 artefacts, faunal remains and concentrations of baked clay, all in close physical association. These finds have been interpreted as probable evidence of controlled fire (Gowlett et al. 1981), although alternative scenarios cannot be entirely ruled out (Gowlett et al. 1981; Clark and Harris 1985). Isaac (1982) presented a critique, noting that burnt clasts are common elsewhere, for example at Olorgesailie in southern Kenya (Isaac 1977), and that the clasts found at Chesowanja may have been redistributed along a small channel. In a taphonomic reassessment Gowlett (1999) used new data to present a three-dimensional analysis of the distribution of burned clay clasts. This shows that the large clasts are concentrated in a very small area less than one metre across (the ‘Q zone’) and indicates that small clasts and flecks generally less than 20 g in weight washed or rolled out from this area as sediments in the adjacent hollow accumulated through some 50 cm. Stone refits were also found within the excavation area, including two parts of a flake, and two flakes that conjoined with a cobble. This detailed taphonomic analysis demonstrates the limited disturbance to the remains during site formation (pace Isaac 1982). The ‘Q zone’ concentration thus remains as a possible hearth feature.
Even so, the baked clasts at Chesowanja do not themselves add up to a preserved hearth, though a possible ‘phantom hearth’ could be constituted by the stone arrangement, the baked clay and the distributions of the artefacts and bones. Baking of clasts is common when clay adheres to tree roots used as fuel. The distribution of bones is inverse in density to the artefact concentration as might be expected if bones at the centre had been destroyed by fire. One explanation, which remains to be ruled out, is that a lightning strike on a dead tree surrounded by a termite nest caused burning back into the roots. The subsequent break up of the earthen mound might then have released baked clasts to roll into the archaeological area. The limited quantity of clay (~1 kg) and the tight distribution of the major baked clasts argue against this interpretation, but more work is needed to determine if this possibility can be eliminated.
In addition to these examples there is the case of Gadeb in Ethiopia (Barbetti et al. 1980, 1986). Here, a group of lava stones was thought to have been burnt because of discolouration (such as typically appears today when lava rocks are used as hearthstones). Barbetti and colleagues concluded that the stones had probably been burnt, but could not be entirely certain because the stones had lost their original orientation.
Two cave sites in southern Africa round off this tour of early evidence. The Swartkrans site is one of many dolomite caves in Gauteng Province, South Africa. Just across the valley from Sterkfontein, it presents a complementary picture, known for its robust australopithecines, stone tools and fire evidence. The cave faces southeast and overlooks the Blaubank River, but the present conformation of the site is due to erosion that has planed off the cave roof (Brain 1981). A key factor in its formation was the collapse at an early stage of a huge dolomite block, which now separates the deposits of the inner and outer cave (Brain 1981). Brain’s (1981: 224) diagrams demonstrate the complex development that can characterise cave formation and infilling. The levels are labelled as Members 1 to 5 in ascending order, in a series separated by erosional intervals (the older members were formerly known as the brown and pink breccias; Butzer 1984 gives details). They may have been deposited in interglacial conditions and were dissected by erosion during cooler periods (Brain 1981; Clark 1992). Members 1-3 are all relevant to the early Pleistocene picture. Member 1 is preserved in two distinct parts: a ‘hanging remnant’ attached to the north wall of the cave and an area on the cave floor. It includes numerous remains of robust australopithecines (Australopithecus/Paranthropus robustus), along with scarce remains of Homo, stone tools and other fauna. Member 2 was then deposited in an erosional cavity and is highly calcified. It also includes Homo and similar stone tools, but the robust australopithecines have gone. Member 3, containing the fire evidence, fills a gully eroded into the surface of Member 2. In addition to artefacts and bone, numbers of burnt bones were described by Brain and Sillen (1988). They come from several layers within Member 3 (sometimes known as the dark fill). Altogether there are around 270 burnt bones out of a total of almost 60,000 bones from the Member.
Brain (2005) considers that at this time the area was ‘a roofed erosional gully’. From the recurrent finds of bones in different layers he argues that fires were tended repeatedly in the entrance area of Member 3 and that some of the burnt bones rolled downslope to their eventual place of deposition. Natural burning can occur on the savanna landscape, as well as in accumulated plant or guano debris in caves, and Brain concedes that natural grass fires might be expected at the cave mouth. The bones are charred through at relatively high temperatures, however, whereas the containing sediments are not. Cutmarks also attest to human association; significantly, although cutmarks are largely absent from early South African bone assemblages, including those in Members 1 and 2, Brain found a few examples along with the burnt material in Member 3. In an important development Pickering et al. (2005) bring the total of cutmarked specimens up to about 60 pieces, noting also some limited traces of percussion on the bones, and Pickering (2012) has recently reported the co-occurrence of cutmarks and burning on four specimens. Finally, the recurrent nature of the evidence is a further factor arguing in favour of human fire use.
Although Member 3 at Swartkrans was thought to date to c. 1.0-1.5 Ma, around the same range as evidence of burning at Chesowanja and other sites, continuing research suggests that the age of Member 3 is 0.6-1.0 Ma, considerably younger than previously argued (Balter et al. 2008; Herries et al. 2009). Some faunal elements suggest that the age is at the earlier end of this range (James Brink, pers. comm.), but the absence of robust australopithecines (already absent in Member 2) is noteworthy.
Finally, Wonderwerk Cave in South Africa’s Northern Cape Province preserves fire evidence that may be of comparable age. A large horizontal phreatic tube cave, more than 100 metres long, this site contains Pleistocene sediments to a depth of more than 5 metres (Beaumont and Vogel 2006; Chazan et al. 2008; Beaumont 2011; Berna et al. 2012). The artefact assemblages here extend from the Later Stone Age possibly as far back as the Oldowan, with the Acheulean well represented through deeper levels. Berna et al. (2012) report on fire evidence from Excavation I, more than 30 m from the cave entrance. Level 10 here has Acheulean finds and its palaeomagnetic record gives a Normal direction, which, with the support of Al/Be dates, Berna et al. (2012) ascribe to the Jaramillo event, hence giving it an age of about one million years ago. No hearths are preserved, but microstratigraphic analysis shows associations of light wood ash and burnt bone fragments throughout this layer. Temperature determinations are in the range 500-700°C, consistent with campfires (although also with natural fires). The analyses show no evidence of burning of guano. The direct association with the Acheulean, the repeated nature of the burning and the association with bone are all important points. The dating is perhaps the least secure part of the evidence, since a possible reversed interval in the upper sequence (Chazan et al. 2008) has not been confirmed (Matmon et al. 2012), placing the main emphasis on cosmogenic nuclide dates. Although these support the early age, they are each presented with very large statistical errors, of the order of 1.0 ± 0.2 Ma (Matmon et al. 2012), allowing the outside possibility of a later date. Together, however, Swartkrans and Wonderwerk present a very similar picture, attested by different lines of evidence and analyses.
Although hearths are not evident, these cases can be doubted only if one takes repeated natural burning of vegetation in caves to be a regular phenomenon, despite specific evidence to the contrary at Wonderwerk, where there is no obvious source of plant fuel other than through human action.
These occurrences are the limit of the evidence of early Pleistocene fire in Africa. Throughout the Middle Pleistocene, traces of fire remain very rare on African open sites. Those at Olorgesailie, Kenya, where scattered burnt clasts are found, have been studied in detail and seem likely to reflect circumstances of natural burning involving the spontaneous combustion of underground decaying vegetation (Melson and Potts 2002). There is no doubt, however, that fire was known and used in later stages of the Acheulean, as shown now exceptionally well by sites in Europe, the Middle East and Africa. Gesher Benot Ya’aqov, at the crossroads of the Middle East, has been explicitly regarded as lying on a corridor out of Africa (Goren-Inbar et al. 2000).
Kalambo Falls on Zambia’s border with Tanzania provides unequivocal evidence from southern Africa. Here not only is charred wood found, but artefacts opportunistically made with the aid of fire (Clark 2001), as well as an oval area of burnt clay approximately one metre in diameter (Clark 1969: 160) that indicates a probable hearth. Important features of the Kalambo Falls setting that recur at Beeches Pit in England and Gesher Benot Ya’aqov in Israel are that the fire evidence occurs close to water and that it is repeated at separate levels (Gowlett et al. 2005; Gowlett 2006; Preece et al. 2006; Alperson-Afil 2008). These features are unusual in natural fires (cf. Alperson-Afil and Goren-Inbar 2010; Gowlett 2010). Elsewhere Middle Pleistocene hearths are found in Acheulean levels in caves (such as at Montagu Cave in South Africa (Keller 1973) or at Qesem in Israel, dated to around 400 kya (Karkanas et al. 2007)), as well as in open sites (such as at Terra Amata in southern France, a raised beach site dated to c. 300 kya; de Lumley 2009).
In the set of Acheulean finds those from Gesher Benot Ya’aqov are somewhat earlier than most of the others and have been studied in particular detail (Goren-Inbar et al. 2004; Alperson-Afil 2008; Alperson-Afil and Goren-Inbar 2010). With dates of c. 700 kya (occurring close above the Brunhes-Matuyama boundary in the site sequence), they now fall into a similar time frame as Swartkrans, Wonderwerk and one other possibly significant find. This comprises fragments of burnt and calcined bones from Bogatyri (Russia), on the Taman peninsula at the northern end of the Black Sea, dating to about 900 kya (correlated with the Jaramillo Event on faunal grounds) (Bosinski 2006).
In an archaeological evaluation of all the earliest finds in Africa, we would note that there is no certain evidence of hearths. At Koobi Fora and Chesowanja the evidence speaks for concentrated burning, closely associated with artefacts. In each case there is a possible hearth arrangement, but their status is not easily confirmed this long after the original excavations. The same is true for Gadeb. At Swartkrans and Wonderwerk the pattern of evidence is highly convincing, but there is no evidence of hearths and the dates are not certainly above one million years, though within a range of ~200,000 years they begin to fall into a group with Gesher Benot Ya’aqov, Zhoukoudian and possibly Bogatyri.
Evidence for Dietary Hypotheses
We are looking here for evidence that is independent of the archaeological fire record. Archaeologists have tended to dismiss Wrangham’s hypothesis for fire use by Homo erectus as unsupported by evidence. There are, however, aspects of the archaeological record that are indirectly relevant and, if not from archaeology, supporting evidence may come from early hominin ecology and physiology. Hominin anatomy is one such pointer. The essential elements (that are reasonably well dated) are encephalisation, tooth reduction and changes in body size and proportion. These are all long-term trends, but with phases of accelerated change. Aiello (1996) noted two steepenings of gradient in encephalisation, at around 1.7 Ma and 0.4 Ma, postulating that the later one coincided with fire use. But with more recent finds the curve has tended to smooth out (Schoenemann 2006). The key points appear to be that brain size had already reached 1000 cm3 one million years ago, that major tooth size reduction in Homo had already happened by 1.7 Ma and that changes to more modern body form came soon afterwards (being achieved by 1.5 Ma, as shown by the Nariokotome specimen; Pontzer et al. 2010). Certainly, the archaeological record around Lake Turkana shows hominins ranging widely on landscapes that were often semi-arid and offered no prospect of the kind of stable diet of fruits and herbs available to chimpanzees (Bunn and Ezzo 1993; Bunn 1994; Isaac and Behrensmeyer 1997) and much the same can be said for Olduvai (Blumenschine et al. 2003). These environments did, however, offer opportunities for broadly based foraging, with an input of meat, as indicated both by cutmarks on bones and, in some cases, by isotopic analysis (Lee Thorp et al. 2010; Copeland et al. 2011), which also suggests major changes of diet in the period ~3-2 Ma. Indeed, some sites, such as FxJj50, indicate a very substantial meat aspect to the diet (Bunn et al. 1981; Harris and Isaac 1997).
In these drier environments it is inevitable that carbohydrates were an important further component of consumption, especially in dry seasons.
There are other aspects of human physiology and genetics that may point to early human fire use. Fire use is well attested in both the Neanderthal and modern human stems from at least 400,000 years ago. Yet the two species are closely related, to the extent of sharing most genetic variation (Green et al. 2010) and it would be parsimonious—particularly in the light of the common substrate of association of fire with the Acheulean (Beeches Pit, Gesher Benot Ya’aqov, Kalambo Falls)—to suggest that fire-use was indeed shared by the common ancestor of the two species, especially as on present interpretations (Green et al. 2010) Gesher would fall before the divergence. This argument, mentioned by Alperson-Afil and Goren-Inbar (2010), is essentially cladistic, but given the great conservatism of material culture demonstrable in earlier hominins, and also the extreme concentration on a meat-based diet indicated in the Neanderthals (Bocherens et al. 1999), it has some force.
Social hypotheses—and the Social Brain hypothesis in particular—can be assessed against the evidence of hominin fossils, landscapes and various aspects of the archaeological record. Pleistocene evidence of brain size, and also of group size, thus provides key lines of evidence that may relate to fire use. Larger groups need better social bonding, and probably more efficient use of food resources, because of the costs of aggregation. It is thus important to make good use of social time and it is notable that for humans this is made available as evening time because of a major shift in sleep-wake parameters—the modern human day approaches twice the length of that of many primates, with a mere eight hours of sleep and the time of peak attention coming in the early evening, around 6-7 pm (for a review see Schmidt et al. 2007; also Burazeri et al. 2003; Conroy et al. 2005; Gowlett 2010). It seems that this extended day is engineered through a rotation apart of the parameters of core body temperature (CBT) and cerebral blood flow velocity (CBFV) (Conroy et al. 2005). It is highly likely that it was fire that provided this extension of daytime, thus generating the selection for biological changes in the circadian rhythm (Gowlett 2010).
The social brain, and the corollaries of expensive tissue and ‘fire time’, give a good account of what has happened, and the resulting changes are unarguable. Again, the question is ‘when?’ If we look for a complex of factors, then it can be argued that a strong new package is present at 1.0 Ma, which was not fully there at Dmanisi at 1.7 Ma (Gabunia et al. 2000, 2001): large Homo erectus individuals of strong body form, with reduced sexual dimorphism and 1000 cm3-sized brains. They are also associated with several cultural traits that similarly indicate effective social co-operation: long-distance transportation of artefacts, major investment in artefact manufacture and hunting, or at least butchery, of large animals. These aspects of high competence may nevertheless go with a ‘bend with it’ path of least resistance in the use of resources. Overall the complex of linked and repeating elements is so well established over such large areas as to indicate the consolidation of a new pattern about 1.5 million years ago, one—we argue—that included the regular use of fire.
When the first significant evidence for early fire in Africa was published (Gowlett et al. 1981) for many observers the picture did not fit. In a warm savanna landscape, what need would australopithecines have had for fire? Neither their physical nor their socioecological adaptations would demand it since they could be popularly seen as something like upright chimpanzees. Now, the picture has somewhat turned round. Chesowanja is seen to be younger than the earliest occupation of Eurasia, over huge areas, to at least 40°N (Bar-Yosef and Belfer-Cohen 2001; Gabunia et al. 2001; Stiner 2002; Zhu et al. 2004). That settlement was driven forward by early Homo, already encephalised far beyond the ape-australopithecine baseline. Although robust australopithecines have been found at Chesowanja and Swartkrans, there are strong reasons for supposing that the archaeological occurrences at both sites were generated by Homo erectus. It was certainly Homo that emerged as master of the Acheulean tradition, dating now to as early as 1.76 Ma (Lepre et al. 2011). The subsequent disappearance of the robust australopithecines had no visible effect on the archaeological record.
There is not sufficient resolution in the record to see directions of movement in and out of Africa in the early days (cf. Dennell 2003), but a priori it might be equally plausible to see fire use entering Africa from the north or spreading out of Africa. Such a useful adaptation, one would suppose, would ‘spread like fire’ across the Old World, but this may not have been the case in terms of similarity of adaptation and use.
The major difficulty for one simple all-embracing fire hypothesis is that fire use is not apparent in Europe, either in the early stages at Dmanisi or, in particular, from 0.8-0.5 Ma, where the record is good enough that one would expect to see it. There are no traces of fire, for instance, at Atapuerca, La Caune d’Arago or Boxgrove, nor in early Italian sites such as Notarchirico and Isernia (Roberts and Parfitt 1999; Villa 2001; Bermúdez de Castro et al. 2004; de Lumley 2006; Roebroeks and Villa 2011). Together, these sites preserve many thousands of bones, none of which are charred. Our hypotheses must accommodate this puzzling evidence, either showing it to be a false impression, or fitting it in as part of the overall scenario. Here we attempt this through a scheme of three propositions:
1. In the first place fire would have been used as a resource of nature by early Homo or earlier hominin taxa (e.g. Burton 2009); when early Homo spread out around the world, probably c. 2.0 Ma, this might still have been the state: use of natural fire as ‘consumers’ with limited active control;
2. In drier zones of the tropics and subtropics the familiarity with fire offered by regular lightning strikes and forest fires would give the benefit of cooking plants and small animal resources, as stated by the cooking hypothesis. In this context, fire control skills would rapidly improve until fire use was widespread in these domains up to c. 30°;
3. Further north, the scarcity of lightning strikes and the severe costs of accidental fire loss would create both far greater challenges to human occupation and selective pressures for dealing with these, the two factors perhaps operating to influence and constrain patterns of settlement.
For some colleagues point (3) has the implication that if we cannot see fire-use, it must have been totally absent up to the time when it becomes readily visible 400,000 years ago. But we know from cases such as that of reindeer butchery at Arago in southern France 550,000 years ago (de Lumley et al. 2004; Moigne et al. 2006) that some hominins were already adapted to extreme cold conditions by this time. To postulate that they could manage without fire is to say that they had other strategies for preparing uncooked fat and meat, for maintaining warmth during the ice-age winter and avoiding predators. These are problems that we have not seen solved in debate.
A further difficulty with the dichotomy of points (2) and (3) is that it postulates a geographic difference that is unlikely in the face of all other evidence. Could people really have lived for as much as a million years without fire in the north, because it was difficult to manage when to the south it appears to have been under control? If so, they would also have been foregoing the major selective advantages of cooking.
There is a possible alternative, namely that fire was in use in the north, but that precisely because it was difficult to manage, fires were set and maintained in particular favourable contexts, which almost never survive. The preserved fire evidence at Beeches Pit in eastern Britain may hint at this pattern, as may that from Schöningen in Germany (Gowlett et al. 2005; Thieme 2005), where the hearths were not in the main occupation, but at the side of it. Those at Beeches Pit were by a water edge—still a favoured location for fires—and appear to have burned on repeated occasions, possibly for long periods. In Europe the preservation of such contexts is almost non-existent, except where there has been interglacial tufa deposition. (The same is largely true for southern Africa, where Florisbad is another example of a spring site preserving a hearth; Henderson 2001.) The other main context of fires in Europe is in caves. In the Middle Palaeolithic hearths are numerous in caves (e.g. at Bolomor or Abric Romani in Spain; Pastó et al. 2000; Fernández Peris 2007), but it may be that the necessary control for setting fires in northern caves did not exist until around half a million years ago, i.e. that fire could be maintained only in other specific settings on the landscape (Gowlett 2006). Then, rather than butchery taking place at the fireside, meat may have been butchered and taken to the fire wherever that was best placed. Both Homo erectus’s known propensity for transporting artefacts and the modern hunter-gatherer practice of transporting meat (O’Connell et al. 1988) make this a feasible scenario.
Without doubt natural fire was far less available in the north, because of the minimal number of lightning strikes (see, for example, the lightning maps produced by the United States’ National Aeronautics and Space Administration; NASA 2012). In default of a regular re-supply from lightning-induced fires, new social and cognitive skills would be needed to establish a full network of fire control across the landscape. Making comparison with other cultural traits, we might expect intensification in fire use over a long period. Rolland (2000, 2004), de Lumley (2006) and Roebroeks and Villa (2011) all argue for major change around 400,000 years ago. If so, following our arguments, such major change is likely to represent not the beginning of fire use in the north, but a new flexibility in its maintenance and transport.
This set of propositions does not avoid all difficulties in the pattern of evidence—and absence of evidence—but it does grapple with the apparently different records in Africa and Europe and offers a tentative explanation that emphasises the extreme vagaries of preservation.
We have sought to explore the early phenomena of fire in the context of human evolution in Africa, and in comparison with Europe, necessarily omitting many of the complexities of later fire-related behaviour. We conclude from the evidence of several disciplines that continuous fire-use goes back at least 1.5 million years, towards and perhaps during the time of the first major increase in encephalisation in early Homo. From several viewpoints we suggest that the use of fire by hominins from the early Pleistocene onwards is the most parsimonious working hypothesis, subject to continued testing.
In the most favourable circumstances archaeology is able to recognise humanly controlled fire. This is especially the case where single artefacts are part-burnt or where hearth conformations are distinctly visible in association with artefacts. Repeated burning events in a restricted locality also carry weight. It is also plain that such circumstances occur very rarely and result from exceptional preservation conditions. In consequence, archaeology does not provide a general picture of early fire use, but highlights particular moments of it.
The emphasis on hearths means that archaeology is tending to ‘prosecute’ on the basis of happenings that are inherently rare. Consequently the most realistic framing of a general picture should come from several disciplines. It is then extremely important to distinguish between working hypotheses—as predicated from the convergences of disciplines—and established fact. Other disciplines, such as astronomy, are well able to deal with working hypotheses that are gradually accepted or rejected through an increasing factual basis.
Established fact on its own does not allow us to place human control of fire beyond one million years ago. The strongest network of evidence (Swartkrans, Wonderwerk, Gesher Benot Ya’aqov, Taman, Zhoukoudian) gives us a picture at around the Lower/Middle Pleistocene boundary (roughly 800,000 years). By then, fire use may have been very widespread. The questions that we draw attention to are whether it goes back further and whether there were lacunae in its use. The older picture has to be sustained by general arguments and by the sites of Chesowanja, Koobi Fora and possibly Gadeb. In this context other lines of evidence—particularly the cooking hypothesis and the idea of the social brain—provide an independent and broader picture. They do not even need to be pinned to particular time points: the strong gradient from early Homo around 2.0 Ma to fully fledged Homo erectus long before 1.0 Ma provides both context and need enough for many new adaptations, among which fire use was one of the most crucial.