THE STORYTELLING ANIMAL
Myth, Morality, and the Evolution of Religion
I have given the evidence to the best of my ability; and we must acknowledge, as it seems to me, that man with all his noble qualities, with sympathy which feels for the most debased, with benevolence which extends not only to other men but to the humblest living creature, with his god-like intellect which has penetrated into the movements and constitution of the solar system—with all these exalted powers—Man still bears in his bodily frame the indelible stamp of his lowly origin.
—Charles Darwin, the final paragraph of The Descent of Man, Vol. II (1871), p. 405
During the broadcast of the 1997 Cable ACE Awards on VH-1, the rock star Madonna was called upon to present an award. After slinking her way down a flight of stairs in a tightly wrapped full-length skirt and stiletto heels, she approached the podium and announced that she did not wish to talk about Princess Di, or the paparazzi who hounded her, or the tabloids that exploited her, or the public who worshiped her to death. After adroitly making her point, while simultaneously denying she wanted to make it, Madonna instead suggested that we look for the deeper cause of Princess Di’s tragic death—our fascination with gossip and other people’s personal lives, especially when it is none of our business. Ignoring the simple and obvious fact that both Diana and Madonna, like most celebrities, depend and thrive upon the very obsession they pretend to hate, Madonna was asking that humans, who are by nature storytelling animals, quit telling stories about their favorite subject—other humans. Why does no one ever discuss the true cause of Diana’s death—just one more case of speeding and drunk driving? Because that would make her just another boring statistic—over 25,000 people are killed every year in America alone due to drunk driving. Nothing interesting about that fact—no juicy gossip, no web of sex and deceit, no assassination cabals, and, most of all, no evil villains. Every story needs a hero and a villain. Princess Diana died an ignoble death, and that does not make for a very interesting story.
What does make for an interesting story? Why do we tell stories and so enjoy hearing them? Cognitive neuroscientist Michael Gazzaniga, in his book The Mind’s Past, argues that we are all storytellers, in the sense that we take the facts of our everyday experience and weave them into a narrative, from which we spin doctor our self-image. “The spin doctoring that goes on keeps us believing we are good people, that we are in control and mean to do good.” Gazzaniga calls the brain mechanism that carries out this task the interpreter, “probably the most amazing mechanism the human being possesses.” Dreams serve as another example, since at least some appear to be random firings of neural impulses that are hung together by our internal storyteller—recall dreams you have had that include disparate elements and people never found together in reality, yet make perfect sense in a dream narrative. This is the power of the pattern-seeking, storytelling animal.
What has this to do with religion and the belief in God? We have recognized two primary purposes of religion: (1) The creation of stories and myths that address the deepest questions we can ask ourselves: Where did we come from? Why are we here? What does our ultimate future hold?; (2) The production of moral systems to provide social cohesion for the most social of all the social primates. God(s) figures prominently in both these modes as the ultimate subject of mythmaking and the final arbiter of moral dilemmas and enforcer of ethical precepts. Why did this capacity to tell stories, create myths, construct morality, develop religion, and believe in God evolve?
In the seventh century B.C.E. the Greek poet Archilochus penned one of the pithiest yet most thoughtful epigrams when he observed: “The fox knows many things, but the hedgehog knows one great thing.” Some twenty-five centuries later the nineteenth-century British philosopher William Whewell described science as employing a foxlike method to arrive at a hedgehoglike conclusion, which he called a consilience of inductions, or what might also be called a convergence of evidence. When numerous theories, models, and data from disparate and unconnected fields each converge on a similar conclusion, our confidence in that conclusion increases. We know, for example, that evolution happened not from any one fossil or organism, but from tens of thousands of bits of data from unrelated fields, all of which converge to a single conclusion: paleontology, geology, comparative anatomy, comparative physiology, molecular genetics, population genetics, zoology, botany, and biochemistry all independently point to an evolutionary history of life on Earth. Together they converge to an inescapable focal point of scientific truth.
We can engage this convergence method to understand how and why religion and belief in God evolved in human societies. (This book, in fact, has been doing just that, using the convergence and comparative methods from the various behavioral and social sciences such as neurophysiology, behavior genetics, cognitive and social psychology, anthropology, sociology, history, and archaeology.) A survey of the extensive body of literature on religion shows that this is one of the most complex of all human social phenomena, not explicable in terms of an overriding hedgehog theory. We need to take a foxlike approach, yet look for a consilience of evidence to see what these disparate fields of thought might reveal. There is no question that different cultures express religious behavior in many different and unique ways. But is there something underneath these diverse expressions?
To get at the deeper question of the purpose of religion, I begin with evolutionary theory and the distinction biologists make between how questions and why questions. How questions are concerned with proximate causes—the immediate or nearest cause or purpose of a structure or function—the “how does it work?” type of question. “How is it that fruit tastes good?” A physiologist might answer “because it stimulates the sweet receptors on the tongue.” This is a proximate answer. Evolutionary biologists are also concerned with ultimate causes—the final cause or end purpose of a structure or function—the “why does it exist?” type of question. “Why does fruit taste good?” An evolutionary biologist might answer “because there was natural selection for taste bud receptors and brain modules to produce a pleasurable sensation with certain food substances that are both scarce and healthy.” We can go even deeper and ask what drives the selection process, and postulate that those organisms for whom healthy foods tasted good ate more of them, had less disease, lived longer, and thus left behind more offspring. Since differential reproductive success is the ultimate result of natural selection, and natural selection is the primary driving force behind evolution, we have reached the deepest level of causality in answering our question.
If questions about anatomy and physiology can be answered at the deeper evolutionary level, what about behavior? In the late 1970s the field of ethology—the study of animal behavior from an evolutionary perspective—came of age. John Alcock’s Animal Behavior: An Evolutionary Approach and Irenäus Eibl-Eibesfeldt’s classic text, Ethology: The Biology of Behavior, demonstrated that behaviors are not just the result of reinforced learning in response to changing environments but also the product of millions of years of evolution. A herring gull chick, for example, pecks at a red dot on its mother’s beak. Its mother then regurgitates her food for the chick to eat. The chick did not “learn” this behavior by trial and error in its own lifetime but inherited it from the evolutionary history of the species. The chick was born “knowing” that when it sees a red dot it should peck at it. The mother, in turn, was born “knowing” that when a chick pecks at its beak, it should regurgitate its food.
Of course, compared to herring gulls and other simple organisms, human behavior is vastly more complex and influenced by learning and the environment. Nevertheless we are animals, and no less than any other organism on earth we are the product of evolution. In order to fully understand human behavior we must also address our own ultimate “why” questions from an evolutionary perspective. What humans have done in the 13,000-year history of civilization is nothing short of miraculous, but we must not discount the orders of magnitude of deeper time that preceded the age of civilization, when the human animal was shaped over hundreds of thousands of years as Pleistocene hunter-gatherers, over millions of years as primates, and over tens of millions of years as mammals.
In the past decade the field of ethology has been joined by the emerging discipline of evolutionary psychology. Where evolutionary biologists focus on the effects of the physical environment, evolutionary psychologists concentrate on the influence of the social environment. Primates are extremely social mammals, and the human primate is, arguably, the most social of all. In their introduction to the field’s most influential text, anthropologists Jerome Barkow and John Tooby, and psychologist Leda Cosmides explain: “The central premise of The Adapted Mind is that there is a universal human nature, but that this universality exists primarily at the level of evolved psychological mechanisms, not of expressed cultural behaviors. In this view, cultural variability is not a challenge to claims of universality, but rather data that can give one insight into the structure of the psychological mechanisms that helped generate it.”
The Harvard evolutionary biologist Edward O. Wilson has done just this in such works as Sociobiology, On Human Nature, and most recently in Consilience: The Unity of Knowledge. Wilson argues “that the etiology of culture wends its way tortuously from the genes through the brain and senses to learning and social behavior. What we inherit are neurobiological traits that cause us to see the world in a particular way and to learn certain behaviors in preference to other behaviors.” Wilson says these complex interactions between genes and learning are guided by epigenetic rules, which “comprise the full range of inherited regularities of development in anatomy, physiology, cognition, and behavior. They are the algorithms of growth and differentiation that create a fully functioning organism.”
A simple example of an epigenetic rule is one-trial learning—taste aversion being the most obvious example. Pairing a food or drink substance with violent nausea, for example, will produce an aversion to that substance for some time to come (red wine once did me in and for over a decade I could not drink even a small amount of it). This is an evolved mechanism for avoiding toxic foods—the learning needs to take place immediately in one trial—there is no margin for error, no time for a gradual learning sequence. Language is a much more complex epigenetic rule in which we all learn the language to which we are exposed as infants, but the basic rules of language were laid down over the past hundred thousand years of our ancestors’ evolution. Moving beyond basic ethological concepts of innate mechanisms, Wilson shows how genes and culture interacted in our evolutionary history in complex ways he calls gene-culture coevolution. Forget the nature-nurture debate with its artificially imposed percentages assigned to each component (for example, 40 percent genes, 60 percent environment). We are well past such facile delineations (a process that itself may be a product of an epigenetic rule that directs us to cleave a continuous nature into binary categories in order to simplify our complex world). Humans, says Wilson, are products of both biological and cultural evolution so inextricably interwoven that the two cannot be separated:
Culture is created by the communal mind, and each mind in turn is the product of the genetically structured human brain. Genes and culture are therefore inseverably linked. But the linkage is flexible, to a degree still mostly unmeasured. The linkage is also tortuous: Genes prescribe epigenetic rules, which are the neural pathways and regularities in cognitive development by which the individual mind assembles itself. The mind grows from birth to death by absorbing parts of the existing culture available to it, with selections guided through epigenetic rules inherited by the individual brain.
The fear of and fascination with snakes, so common in peoples around the world, is produced by an epigenetic rule with obvious survival significance. But how that fear of and fascination with snakes is uniquely expressed depends on the culture in which the individual was raised. Snake stories, myths, and narratives all differ, depending on the culture, but the focus on snakes themselves is hard-wired. Wilson shows how:
Some individuals inherit epigenetic rules enabling them to survive and reproduce better in the surrounding environment and culture than individuals who lack those rules, or at least possess them in weaker valence. By this means, over many generations, the more successful epigenetic rules have spread through the population along with the genes that prescribe the rules. As a consequence the human species has evolved genetically by natural selection in behavior, just as it has in the anatomy and physiology of the brain.
This line of reasoning leaves behind the pejorative accusations and false dichotomies of biological and environmental determinism, with extremists on the political left and right accusing each other of employing Darwinian models to justify certain social or political agendas. The epigenetic rules that guide gene-culture coevolution are so complex and interactive that such name-calling tells us as much about the name-caller’s agenda as that of the accused. Culture, says Wilson, evolves “in a track parallel to and usually much faster than genetic evolution.” But, he notes, “The quicker the pace of cultural evolution, the looser the connection between genes and culture, although the connection is never completely broken.”
Humans are pattern-seeking animals who seek and find causal relationships in our physical and social environments. The process is called learning. As we have seen, sometimes we get it right (Type 1 and 2 Hits—not believing a falsehood and believing a truth) and sometimes we get it wrong (Type 1 and 2 Errors—believing a falsehood and rejecting a truth). But we do much more than this. We do not just process environmental data like a computer, spewing out cold, hard facts. We tell stories about it. Humans are storytelling animals.
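The four outcomes form a simple two-by-two matrix of belief against reality. A minimal sketch makes the scheme explicit (the function name and labels are illustrative, not from the text):

```python
def classify(believes: bool, claim_is_true: bool) -> str:
    """Map a belief decision against reality onto the Type 1/Type 2
    scheme used here: Type 1 concerns falsehoods, Type 2 truths."""
    if claim_is_true:
        return ("Type 2 Hit: believing a truth" if believes
                else "Type 2 Error: rejecting a truth")
    return ("Type 1 Error: believing a falsehood" if believes
            else "Type 1 Hit: not believing a falsehood")

# Enumerate all four cells of the belief-by-reality matrix.
for believes in (True, False):
    for claim_is_true in (True, False):
        print(classify(believes, claim_is_true))
```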
In his book How to Argue and Win Every Time, the mediagenic attorney Gerry Spence explains that one of the reasons he is so successful is that he does not speak to the jury like a lawyer, with all the legalese and law-school language spouted by most Ivy League-trained attorneys. Spence talks to them conversationally. He tells them stories. In one case, he began his closing statement with a story about a cocky young man who wanted to show up his wiser elder. His plan was to capture a small bird in his hand, approach the old man and ask him if the bird was alive or dead. If the old man said “dead,” he would let the bird go. If the old man said “alive,” he would crush the life out of the bird. Either way he would show up the old man. So the young man captured a small bird, approached the old man, and asked him if it was alive or dead. “The bird’s life,” replied the old man, “is in your hands.” Spence says he won that case because the jury understood that the story was a metaphor for the life of his client, which they held in their hands. His point was not that telling good stories wins court cases. It was that humans can relate to stories better than they can to pure logic or objective facts. It is simply easier to keep track of a complex argument if it includes people, places, and events rather than propositions, syllogisms, and symbolic logic.
Psychologists, in search of ultimate why answers to human behavioral questions, have discovered this fact about storytelling as well. Through a series of clever experiments Peter Wason discovered that when students are presented with traditional problems in logic, which they normally have a difficult time solving, they improve significantly if these same problems are presented in the form of a story, especially a story involving people and relationships in which the students are to detect cheating and rule breaking in social contracts. Cosmides and Tooby review subsequent experiments that corroborate Wason’s findings, demonstrating that “human reasoning is well designed for detecting violations of conditional rules when these can be interpreted as cheating on a social contract.” They conclude that this is the result of an evolved mechanism because “social exchange behavior is both universal and highly elaborated across all human cultures—including hunter-gatherer cultures—as would be expected if it were an ancient and central part of human social life.”
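The underlying logic is worth spelling out: a conditional rule “if P then Q” can be falsified only by a case that is P and not-Q, so only those two cards in a Wason-style task need to be turned over. Here is a short sketch using the familiar drinking-age framing of the task (the specific rule and card values are illustrative, not Wason’s originals):

```python
# Social-contract framing of the selection task: "If a person
# drinks beer, that person must be over 21." Each card shows a
# drink on one side and an age on the other; we see one side only.
cards = ["beer", "coke", "25 years old", "16 years old"]

def must_flip(card: str) -> bool:
    # Only the P card ("beer" might hide an underage drinker) and
    # the not-Q card ("16" might hide a beer) can falsify the rule;
    # "coke" and "25" are logically irrelevant.
    return card in ("beer", "16 years old")

print([c for c in cards if must_flip(c)])  # ['beer', '16 years old']
```

Subjects who flounder on the abstract version of the same conditional typically get this version right at once; the cheater-detection framing does the work.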
Anthropologist Misia Landau, in a fascinating study of the evolution of storytelling, believes that “the central claim of narratology is simply that human beings love to tell stories.” But she goes further, arguing that stories are not just about our reality, they help create our realities:
Narrative, then, is … a defining characteristic of human intelligence and of the human species. Related to this assumption … is the idea that we have certain basic stories, or deep structures, for organizing our experiences. Each deep structure comes in many versions and in several different modes. For example, the Cinderella story is embedded not just in fairy tales but in novels, films, operas, ballets, and television shows. Some narratologists, stressing the central role of narrative in human experience, would further argue that we have not only different versions of stories but different versions of reality which are shaped by these basic stories.
Origin myths among indigenous peoples, of course, neatly fit this description. Landau, however, goes on to show how scientific theories of human origins are no less susceptible to narrative bias. Was it bipedalism that gave rise to tool use, which generated big brains? Or was it tool use that led to bipedalism and then big brains? Were early hominids primarily hunters—man the killer ape, warlike in nature? Or were they primarily gatherers—man the vegetarian, pacifist in nature? More importantly, does the narrative change in response to empirical evidence, or does the interpretation of the evidence change as a result of the currently popular narrative? This is a serious problem in the philosophy of science: To what extent are observations in science driven by theory? Quite a bit, as it turns out, especially in history and the social sciences. And this fact supports the thesis that humans are primarily storytelling animals. The scientific method of purposefully searching for evidence to falsify our most deeply held beliefs does not come naturally. Telling stories in the service of a scientific theory does.
One night in the early 1970s a young couple was parked in a vacant lot high in the Hollywood hills overlooking the lights of Los Angeles. The young man told his date about a recently escaped one-armed convict who was known to be roaming those very foothills, killing parked young couples by slashing them with his arm hook. The girl got scared and insisted her date take her home at once. He did, and when he went to open her car door he discovered a hook dangling from the handle.
For decades now high school kids have been telling this story, along with another favorite, the vanishing hitchhiker: Driving along a country road you pick up a hitchhiker who gets in the backseat of your car and instructs you where to drop her (sometimes it is a he) off. When you arrive at the house (sometimes it is a graveyard) you discover that the girl has disappeared from your car. You then discover that the girl was killed (sometimes “disappeared”) while hitchhiking on that very stretch of highway that same day the year before.
As Jan Harold Brunvand noted in his 1981 book about such urban legends, The Vanishing Hitchhiker, such stories are appealing because they contain three mythic elements: (1) a strong story; (2) a foundation in actual belief; (3) a meaningful message. (He presents no fewer than fifteen versions of the vanishing hitchhiker story!) Myths contain a staggering diversity of themes, including stories about life and death, birth and rebirth, adolescence and coming of age, love and marriage, the origin and end of the universe, moral dilemmas, the meaning of life, and all manner of human triumphs and traumas. A myth should not be thought of in terms of its veracity or lack thereof, as when we say that an urban legend is a myth, meaning it is not true. Urban legends, in fact, are a subspecies of myths; they are stories about our fears and anxieties, as in the hook-man and vanishing hitchhiker, or others like alligators living in New York City sewers. All cultures throughout the world and all peoples throughout history have had myths. Long before there was the written word there was the spoken word, and with language humans told stories—stories about ourselves and our relationships, stories about our origin and our end, and stories about our world and our environment. These stories became myths.
What are myths, what do they mean, and what methods should we employ to understand them? The Oxford English Dictionary’s history of the word’s usage is enlightening in this regard. A myth is “a purely fictitious narrative usually involving supernatural persons, actions, or events, and embodying some popular idea concerning natural or historical phenomena.” In fact, the original Greek meaning of mythos was “word,” in the sense of a final pronouncement, to be contrasted with logos, also “word,” but one whose veracity may be disputed. The point of a myth is not whether it is true or false, but what it represents. The ancient world was rich in myths, but so too is the modern world. Science fiction, for example, is a genre of modern myth—Star Trek is filled with supernatural persons, actions, and events; it remains one of our most popular myths even in this, the Age of Science. Myths, science-fiction author Thomas Disch might say, are the dreams our stuff is made of. In his 1949 classic statement on the subject, The Hero with a Thousand Faces, Joseph Campbell describes the diversity of thought on these questions:
Mythology has been interpreted by the modern intellect as a primitive, fumbling effort to explain the world of nature (Frazer); as a production of poetical fantasy from prehistoric times, misunderstood by succeeding ages (Müller); as a repository of allegorical instruction, to shape the individual to his group (Durkheim); as a group dream, symptomatic of archetypal urges within the depths of the human psyche (Jung); as the traditional vehicle of man’s profoundest metaphysical insights (Coomaraswamy); and as God’s revelation to his children (The Church). Mythology is all of these. The various judgments are determined by the viewpoints of the judges. For when scrutinized in terms not of what it is but how it functions, of how it has served mankind in the past, or how it may serve today, mythology shows itself to be as amenable as life itself to the obsessions and requirements of the individual, the race, the age.
Campbell’s theory, as described in his 1972 Myths to Live By, is that myths serve four functions: (1) mystical, which “serves to awaken and maintain the individual sense of awe and gratitude in relation to the mystery dimension of the universe, not so that one lives in fear of it, but so that he recognizes that he participates in it”; (2) explanatory, or “an image of the universe which will be in accord with the knowledge of the time, the sciences and the fields of action of the folk to whom the mythology is addressed”; (3) normative, or to “validate, support, and imprint the norms of a given, specific moral order, that, namely of the society in which the individual is to live”; and (4) guidance, or “to guide him [the individual], stage by stage, in health, strength and harmony of spirit, through the whole foreseeable course of a useful life.” This is a useful outline to help us get our minds around the varied culture of myths, but it is only answering how questions about myths at a proximate level. To know why humans need to experience the mystical, explain the world, create norms, or seek guidance, we need to consider myths from an evolutionary perspective.
A myth is a form of symbolic communication that invests stories not only with ordinary people and events but also with gods, supernatural beings, and extraordinary happenings, often unfolding in a place or time different from that of ordinary human experience. There are many themes and subjects embodied in myths: origins (cosmogony and creation), eschatology (end times and destruction), heroes (humans with special powers and experiences), time and eternity (ages of man, periods of history), providence and destiny (mastery over fate), memory and forgetting (prenatal existence, previous lives, collective unconscious), higher beings (celestial gods), founders of religions, nations, and peoples (Abraham, Moses, Buddha, Romulus and Remus, Siegfried), kings and ascetics (Arthur and Merlin), transformation (coming of age), rebirth and renewal (seasons and ages), and messianic and millenarian (second comings and new world orders).
If myths are to be explained on a deeper evolutionary level, then they must be universal for all peoples, including ourselves. Myths are not just someone else’s story, or stories that come from far-off times or places. We have plenty of myths of our own. Marxism was a political myth, as was pure laissez-faire capitalism, both providing explanatory, descriptive, and, most importantly, normative mythic functions. Freudian psychoanalysis was a psychosocial myth, as was Skinnerian behaviorism, both serving to justify and control human behavior. Science fiction provides descriptive myths, often of dystopian or paradisiacal future states of the world. This evolutionary explanation of myths, in fact, is itself explanatory mythmaking. The fact that scientific reasoning and empirical data are employed to support the argument makes it no less a myth—a story for us, of our time, that provides meaning and purpose.
In this regard, science is a type of myth, in both function and typology. To some degree, cosmologists give us origin and eschatology myths, from the Big Bang to the Big Crunch. Historians provide hero myths, from Martin Luther to Martin Luther King, Jr. Archaeologists present time and eternity myths, from the Paleolithic Age to the Neolithic Age. Economists proffer providence and destiny myths, from total free market anarchism to pure communism. Psychologists furnish memory and forgetting myths, from recovered memories to repression. Astronomers and computer scientists supply higher beings myths, from extraterrestrial intelligences to artificial intelligences. Biblical archaeologists contribute founders of religions myths, from King David to Moses. Anthropologists produce transformation myths, from Samoan teenagers to Yanomamö warriors. Sociologists confer rebirth and renewal myths, from childhood to adulthood. And political scientists proffer messianic and millenarian myths, from the new president to the new world order.
Why do we continue telling stories and constructing myths today? Because the epigenetic rules for mythmaking still reside within us. Consider monsters and beasts as myths. From the earliest cave paintings to the present, monsters and beasts lurk at the interstices of the natural world, appear on the margins of our perception, dwell in the dangerous lands remote from human habitation, come out at night, or in our nightmares. In the light of E. O. Wilson’s epigenetic rule for fear of snakes that generates narratives in the form of snake myths in cultures worldwide, we should not be surprised that the modern sciences of zoology and especially cryptozoology (the “science of hidden animals”) are rife with mythic tales. For millions of years hominids evolved alongside other primates and mammals in a rich and varied zoological world. The identification of other animals and the anticipation of possibly dangerous cryptids would have produced not only Type 1 and 2 Hits but also numerous Type 1 and 2 Errors in our thinking. Thus, in our own time we have correctly identified such genuine cryptid surprises as the giant panda, the pygmy hippopotamus, the Komodo dragon, and the long-thought extinct coelacanth. The now-famous mountain gorilla was only discovered in 1903. The pygmy chimp (making us the “third chimpanzee” in Jared Diamond’s apt phrase) was only recently identified as being a separate species. But the field of cryptozoology is also rife with pseudoscientific hoaxes, exaggerated descriptions, and ridiculous claims. These are Type 1 Errors in thinking. But we must be cautious not to commit a Type 2 Error in rejecting a new discovery. There may very well be new and possibly dangerous animals lying in wait. They may not be dangerous to those of us living in suburban America, but they certainly could have been to our paleolithic ancestors, and this is where an epigenetic rule that would generate mythic monsters could have evolved.
Or consider how an epigenetic rule might apply to the myths of dragons and werewolves. Dragons are the most common of all mythological creatures, usually portrayed as a hybrid of serpent or crocodile, constructed of any number of disparate mix-and-match parts such as the scales of a fish, the wings and occasionally the head of a bird, the forelimbs and sometimes the head of a lion, the ears of an ox, the feet of a tiger, the claws of an eagle, the horns of a deer, and the eyes of a demon. In the ancient world the dragon was a winged lizard or serpent, regarded as the enemy of mankind, and its overthrow figures among the greatest exploits of the gods and heroes of mythology. The dragon is found in the myths of most peoples, where it has been worshiped as a god, endowed with both beneficent and malevolent attributes, combated as a monster, or credited with supernatural power. It is mentioned thirty-one times in Judaeo-Christian scriptures, starting with the serpent in the Garden of Eden. Dragons are often associated with water and sometimes live in caves under lakes or on the ocean bottom. In medieval tales the dragon dried up rivers and caused drought, forcing inhabitants to pay an annual tribute of gold or fair maidens. Many heroes of mythology are dragon slayers: Marduk, Hercules, Apollo, St. Michael, St. George, Beowulf, King Arthur. Some mythologists conjecture that the male dragon slayer is a symbol that represents the shift from egalitarian societies to patriarchal societies. The real source of the dragon myth may be frilled lizards, or reptiles that spit a toxic venom. But more likely it comes from a prebiblical Babylonian myth of the prime female deity, a dragon named Tiamat. She was associated with the flooding of the Tigris-Euphrates river system and the beginning of the growing season, and her ritual killing by Marduk is possibly the source of many of the dragon and dragon-slayer stories in the world.
Similarly, werewolves figure prominently in mythology. The peak of prosecutions for lycanthropy—the “condition” of a human taking on wolflike characteristics—came in the sixteenth and seventeenth centuries in France. The most famous case was that of Jean Grenier who, in 1603, boasted to three girls that he was a werewolf, telling them that a man “gave me a wolfskin cape; he wraps it around me, and every Monday, Friday and Sunday, and for about an hour at dusk every other day, I am a wolf, a werewolf. I have killed dogs and drunk their blood; but little girls taste better, and their flesh is tender and sweet, their blood rich and warm.” Looking at this tale with the distance of almost 400 years, it was likely nothing more than male boasting and posturing; but since several children had been murdered at the time, Grenier was fingered and convicted. Why a wolf? The earliest myths are associated with a ceremony of a man putting on a wolf’s skin for protection from the cold, or to act as concealment when hunting for food. This mutated into the theme that the wearing of the skin passed on to the man great magical powers of strength, speed, and stealth, not just for hunting, but for exacting vengeance or gaining power over others. From here it was but a small step to changing the man into a wolf through the common mythic motif of shapeshifting, where creatures or objects can change into other creatures or objects, either at will or under special conditions. An evolutionary argument could also be made that, as pack hunters, wolves were a principal competitor to early humans in northern latitudes. Dogs, as loyal friends and noncompetitors to humans, do not generate such myths as wolves do. (It also should be noted that werewolves did not have a monopoly on the genre. There were werebears, weretigers, werehyenas, werecrocodiles, and werejackals. Vampires were a type of werebat. Shapeshifting is found in countless myths, including the Burma-Assam tiger men who can share a tiger’s body, or the leopard men of certain regions of Africa.)
Telling stories and constructing myths about animals have obvious survival significance to humans living in a paleolithic environment. Most simply and directly, it is a form of pedagogy and a medium of knowledge transfer of important information about the flora and fauna of the local ecology. A simple story can relay to a child that a particular food is poisonous or a certain animal is dangerous. A myth codifies this knowledge into the permanent record of a people’s store of wisdom. Anthropologist Melvin Konner, for example, in his study of the !Kung San people of Africa, observed that their knowledge of the local ecology was “detailed and thorough enough to astonish and inform professional botanists and zoologists.” And this knowledge, he noted, was often exchanged around the campfire in the form of storytelling and mythmaking:
[Their knowledge covered] everything from the location of food sources to the behavior of predators to the movements of migratory game. Not only stories, but great stores of knowledge are exchanged around the fire among the !Kung and the dramatizations—perhaps best of all—bear knowledge critical to survival. A way of life that is difficult enough would, without such knowledge, become simply impossible.
As Wilson observed: “Storytelling may be central in language because, in simulating real experience, they bring into play all of the cognitive and emotional circuitry evolved to deal with real experience. In other words, narrative is the best mnemonic procedure; it maximizes rate of learning and understanding.” It seems reasonable, therefore, to offer the following evolutionary explanation for myths: Some individuals inherited an epigenetic rule for mythmaking, in this case myths related to animals, that enabled them to survive and reproduce better in the surrounding environment and culture than individuals who lacked these rules, thus spreading the rules. As part of gene-culture coevolution, myth culture was reconstructed by each generation collectively in the minds of individuals. When oral myths were supplemented by written myths, the culture of myth grew indefinitely large, but the fundamental influence of the epigenetic rules for myths remained constant. Since some myths survived and reproduced better than competing myths, this caused mythic culture to evolve in a track parallel to, and faster than, genetic evolution. This quicker pace of mythic cultural evolution loosened the connection between genes and culture, although the connection was never completely broken. Thus we witness the plethora of modern myths, and our fascination with them.
One of the classic myths of medieval Europe is the story of Beowulf and the monster Grendel. The myth comes to us from a single manuscript, dated circa A.D. 1000, but probably derives from an oral tradition of the eighth century. In its nascent form it was without title. It was later named for the Scandinavian hero Beowulf, although there is no historical evidence that such a person ever lived. The myth has two parts. In the first part the evil monster, Grendel, devours Danish King Hrothgar’s warriors and ravages his kingdom. Young Beowulf, a prince of the Geats of southern Sweden, hears about the monster through his noble uncle, Hygelac (who may have been a historical figure), and makes the king an offer to rid him of the Grendel monster. Meanwhile, the monster strikes at night, while everyone sleeps, stealing away numerous thanes (feudal lords) and devouring them in his keep. Beowulf sets a trap whereby Grendel grabs him one evening, but Beowulf, a mighty warrior, tears off Grendel’s arm. The beast flees, and the next day the people follow the trail of blood to discover the dead monster. But the next night Grendel’s mother avenges her son, killing one of Hrothgar’s earls. The next day the people once again trek to the keep of the Grendel monster and there discover many monsters and dragons of the sea. Beowulf arrives to wreak vengeance for the latest killing, and finds and slays Grendel’s mother. In the second part of the story Beowulf, who has returned home and assumed the throne of the Geats, must contend with a fire-breathing dragon that ravages his land. Beowulf, now an old man, fights the dragon but is no match for the beast. With the aid of a young warrior named Wiglaf the dragon is defeated, but in the process Beowulf dies. His last words are uttered in desperation: “Dear Wiglaf, quickly now help me to see this old treasure of gold, the gladness of its bright jewels, curiously set, that I may yield my life the more easily and the lordship I have held so long.”
What many scholars see in this myth, Campbell notes, is “the old Germanic virtues … of loyalty and courage, pride in the performances of duty, and, for a king, selfless, fatherly care for his people’s good.” But, we might ask, why does this myth contain ethical values of the Germanic code of loyalty to chief and tribe and vengeance to enemies? Might there be a deeper reason, one rooted in epigenetic rules pertaining to the human condition in a paleolithic community? How do we get from pattern-seeking, storytelling, and mythmaking, to religion?
As pattern-seeking animals, humans evolved speech as one of the earliest symbolic patterns, with sounds and words representing objects and events in the physical and social environment. But no one knows exactly when language evolved. The scientific evidence is sketchy at best. Cranial endocasts of hominids as old as Homo habilis and even Australopithecus africanus (dating back several million years) reveal the nooks and crannies of the exterior surface of the brain. Some of these may correspond to the distinctive language centers of the modern human brain, but whether these ancient hominids actually had language is impossible to prove. At the Kebara 2 burial site in Israel there is evidence of language in Neanderthals in the form of a nearly complete hyoid bone—a free-floating bone attached to soft tissue in the larynx that anchors throat muscles involved in speech—found next to a Neanderthal mandible.
Equally important, however, is the question, why language? Language may have evolved for strictly adaptive purposes, giving our hominid ancestors a selective advantage in dealing with the physical and especially the social environment. On the other hand, all other modern primate species are social and hierarchical, yet not one of them has developed as complex a language system as ours. Perhaps language is, in part, a spandrel—a contingent by-product of an enlarged brain that evolved for dealing with symbols, with different components of language developed for other reasons and later employed in language and speech. Donald Johanson summarizes this as-yet-unsolved mystery:
Language evolution is probably intimately linked to brain evolution, and since our brain has been growing and reorganizing over the past 2 million years, it seems unlikely that language suddenly arose from some radical new mutation. Human brains could have been language-competent long before spoken languages appeared. The enlarging brain of early Homo no doubt was capable of complicated cognitive coordination and calculation and as such relied on and used skills important to language. Perhaps language evolved in tandem with our enlarging brain or was a cause, rather than a consequence, of brain enlargement during the Pleistocene.
Whenever and however language evolved, from pattern-seeking to speech-making to storytelling to mythmaking, humans solved problems through language. Anthropologist Terrence Deacon goes so far as to invent a new species designation for us, Homo symbolicus, the hominid symbol user. Anthropologists studying modern hunter-gatherer societies, for example, have found that problems are often couched in the language of stories, myths, and other symbolic narratives, such as songs and poems. In his description of the Copper Eskimo, anthropologist David Damas notes that “every man or woman in that group was said to have had his own compositions. Some of the subjects of the songs were man’s impotence in the universe, hunger, songs of the hunt, songs of lust, the fear of loneliness, and death.” The description is as applicable to suburban commuters in New York as it is to hunter-gatherers in the Arctic.
Paleoanthropologists believe that we evolved in small hunter-gatherer (and scavenging) communities operating out of a home base and utilizing considerable cooperation and communication. The late archaeologist Glynn Isaac proffered the “home base hypothesis,” in which hunting and gathering would have been conducted from a central place to which food was brought back and shared. Archaeologist Lewis Binford pushes for a “scavenging” model, where ancient hominids more likely would have taken what they could find from the remains of already hunted animals, rather than hunting themselves. Either way, anthropologist Robert Bettinger demonstrates how, compared with individuals, “groups may often be more efficient” not only “in finding and taking prey, particularly large prey” but also “in coordinating the activities of individuals, who might otherwise unduly interfere with one another. Finally, as in the case of resource storage, foraging groups that pool and share resources have the effect of ‘smoothing’ the variation in daily capture rates between individuals.” That is, as the group grows larger, “lucky” individuals share their take with “unlucky” individuals, and everyone benefits. Cooperation would have been as powerful a drive in human evolution as competition, if not more so. And communication is an essential tool of cooperation, so it makes sense that Paleolithic hunter-gatherers, as well as their modern counterparts, would have employed language to tell stories and solve problems.
How large were these communities? Most modern hunter-gatherer groups range in size from 50 to 400 residents, with a median range of 100 to 200 people. Anthropologist Napoleon Chagnon, in his extensive studies of the Yanomamö people in the Amazon, found the typical group to be roughly 100 people in size, with 40 to 80 living together in the rugged mountain regions, and 300 to 400 members living together in the largest lowland villages. He has also noted that when groups get excessively large for the carrying capacity of their local environment (given their level of technology), they fission into smaller groups. Such bifurcations may also be a product of exceeding the carrying capacity of the social environment. Psychologist Robin Dunbar, in his book Grooming, Gossip and the Evolution of Language, argues that the figure of 150 people in a typical group has a deeper evolutionary basis. It turns out that 150 is roughly the number of living descendants (wives, husbands, and children) a Paleolithic couple would produce in four generations at the birthrate of hunter-gatherer peoples—this is how many people they knew in their immediate and extended family. Archaeologists believe that early agricultural communities in the Near East 7,000 years ago typically numbered about 150 people. Even modern farming communities, like the Hutterites in Europe (and now the Dakotas and Canada), average about 150 people.
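Dunbar’s arithmetic can be checked on the back of an envelope. Under the illustrative assumption that each couple raises three children who survive and marry in, and that the youngest generation is still unmarried, the founding couple’s living kin network after four generations lands in the neighborhood of 150:

```python
# A rough reconstruction of the four-generation estimate. Three
# surviving, in-marrying children per couple is an assumption for
# illustration, not Dunbar's exact parameter.
children_per_couple = 3

couples = 1   # the founding Paleolithic couple
living = 2    # the founders themselves
for generation in range(1, 5):      # four generations of offspring
    kids = couples * children_per_couple
    if generation < 4:
        living += 2 * kids          # children plus the spouses they bring in
        couples = kids              # each married child founds a new couple
    else:
        living += kids              # the youngest are not yet married
print(living)  # 161, close to Dunbar's 150 given the rough inputs
```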
When groups get large they split into smaller groups. Why? According to the Hutterites, it is because shunning does not work as well in large groups, and shunning is a primary means of social control. Sociologists know that once groups exceed 200 people a hierarchical structure is needed to enforce the rules of cooperation and to deal with offenders, who in the smaller group could be dealt with through informal personal contracts and social pressure. Still larger groups need chiefs and a police force, and rule enforcement involves more violence or the threat of violence. Even in the modern world, with a population of six billion people crowded into dense cities, people find themselves divided into small groups. In the Second World War, for example, the average company size in the British army was 130 men; in the United States army it was 223 men. The 150 average also fits the size of small businesses, of departments in large corporations, and of efficiently run factories. A Church of England study, conducted in an attempt to balance the financial support provided by a large group with the intimacy of a small group, concluded that the ideal size for congregations was 200 or fewer. The average number of people in any given person’s address book also turns out to be about 150.
It would appear that 150 is the number of people each of us knows fairly well. Dunbar claims that this figure fits the relationship between primate group size and the neocortex ratio: that is, the ratio of the volume of the neocortex—evolutionarily the most recent region of the cerebral cortex—to the volume of the rest of the brain. Extremely social primates need big brains to handle living in big groups, because a minimum amount of brain power is needed to keep track of the complex relationships required for relatively peaceful cooperation. Dunbar concludes that these groupings “are a consequence of the fact that the human brain cannot sustain more than a certain number of relationships of a given strength at any one time. The figure of 150 seems to represent the maximum number of individuals with whom we can have a genuine social relationship, the kind of relationship that goes with knowing who they are and how they relate to us. Putting it another way, it’s the number of people you would not feel embarrassed about joining uninvited for a drink if you happened to bump into them in a bar.”
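The regression behind this claim can be stated compactly. One commonly cited form of Dunbar’s cross-primate equation (treat the coefficients as approximate) predicts group size from the neocortex ratio:

```python
import math

def predicted_group_size(neocortex_ratio: float) -> float:
    # log10(group size) = 0.093 + 3.389 * log10(neocortex ratio),
    # an approximate form of Dunbar's primate regression.
    return 10 ** (0.093 + 3.389 * math.log10(neocortex_ratio))

# With the human neocortex ratio of roughly 4.1, the equation
# yields the famous prediction of about 148 people.
print(round(predicted_group_size(4.1)))  # ~148
```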
Morality most likely evolved in these tiny bands of 100 to 200 people as a form of reciprocal altruism, or I’ll scratch your back if you’ll scratch mine. But as Madison noted, men are not angels. There are cheaters. Individuals defect from social contracts. Reciprocal altruism, in the long run, only works when you know who will cooperate and who will defect. In these small groups, cooperation is regulated through a complex feedback loop of communication among members of the community. (This also helps to explain why people in big cities can get away with being rude, inconsiderate, and uncooperative—they are anonymous and thus not subject to the normal checks and balances that come with seeing the same people every day.) In order to play the game of reciprocation you need to know whose back needs scratching and whom you can trust to scratch yours. This information is gathered through telling stories about other people, better known as gossip. From an anthropologist’s perspective, gossip is a tool of social control through communicating cultural norms, as Jerome Barkow observed: “Reputation is determined by gossip, and the casual conversations of others affect one’s relative standing and one’s acceptability as a mate or as a partner in social exchange. In Euro-American society, gossiping may at times be publicly disvalued and disowned, but it remains a favorite pastime, as it no doubt is in all human societies.”
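The logic can be rendered as a toy model: cooperation is extended only to partners with a clean reputation, and reputations are updated by everyone’s reports, not just one’s own experience. The names and the simple defection rule below are invented for illustration:

```python
# Gossip as the bookkeeping system of reciprocal altruism: one
# community-wide ledger of defections stands in for stories told
# around the fire.
reputation: dict[str, int] = {}   # agent -> defections known to the group

def will_cooperate(partner: str) -> bool:
    # Scratch only the backs of those with no record of cheating.
    return reputation.get(partner, 0) == 0

def report_defection(cheater: str) -> None:
    # One victim's story becomes everyone's knowledge.
    reputation[cheater] = reputation.get(cheater, 0) + 1

report_defection("Zug")
print(will_cooperate("Zug"))  # False: the whole band now shuns Zug
print(will_cooperate("Una"))  # True: no bad reports about Una
```

In a city of anonymous strangers the ledger stays empty, which is exactly the point of the parenthetical above.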
The etymology of the word gossip, in fact, is enlightening. The root stem is godsib, or god and sib, and meant “akin or related.” Its early use, as traced through the Oxford English Dictionary, included “one who has contracted spiritual affinity with another,” “a godfather or godmother,” “a sponsor,” and “applied to a woman’s female friends invited to be present at a birth” (where they would gossip). (In one of its earliest uses in 1386, for example, Chaucer wrote: “A womman may in no lasse synne assemblen with hire godsib, than with hire owene flesshly brother.”) The word then mutated into talk surrounding those who are akin or related to us, and eventually to “one who delights in idle talk,” as we employ it today. Not surprisingly, we are especially interested in gossiping about the activities of others that most affect our inclusive fitness, that is, our reproductive success, the reproductive success of our relatives, and the reciprocation of those around us. Normal gossip is about relatives, close friends, and those in our immediate sphere of influence in the community, plus members of the community or society who are high ranking or have high social status. It is here that we find our favorite subjects of gossip—sex, generosity, cheating, aggression, violence, social status and standings, births and deaths, political and religious commitments, physical and psychological health, and the various nuances of human relations, particularly friendships and alliances. Gossip is the stuff of which not only soap operas but also grand operas are made. But why, in our culture, do we gossip about total strangers, namely celebrities? The probable reason is that the mass media make these figures so familiar to us that they seem like relatives, friends, and members of our community. Why would anyone care with whom Princess Diana slept or what her status was in the royal family? Because our Pleistocene brains are being tricked into thinking that Princess Diana is someone we personally know and care about.
What has all this to do with religion? Religion is a social institution that evolved as an integral mechanism of human culture to create and promote myths, to encourage altruism and reciprocal altruism, and to reveal the level of commitment to cooperate and reciprocate among members of the community. That is to say, religion evolved as the social structure that enforced the rules of human interactions before there were such institutions as the state or such concepts as laws and rights. We would do well to remember that the history of the modern nation-state with constitutional rights and protection of basic human freedoms can be measured in mere centuries, whereas humans evolved as social primates over the course of millions of years, and human culture itself dates back at least 35,000 years, if not more. The principal social institution available to facilitate cooperation and goodwill was probably religion. An organized establishment with rules and morals, with a hierarchical structure so necessary for social primates, and with a higher power to enforce the rules and punish their transgressors, religion evolved as the penultimate effort of these pattern-seeking, storytelling, mythmaking animals. How and why did it evolve?
At the most fundamental level, blood is thicker than water, and Richard Dawkins’ famous selfish-gene model accounts for altruism and cooperation among families and extended families. That is, the percentage of genes shared among various degrees of kinship will predict the amount of benefits we receive from a given relative (on average—families will vary, of course). Thus, we do not need religion and gods to enforce the rules in the immediate family where the ties are close. Most parents do just fine. But when we move out from the circle of extended families and into the community and society, we need other mechanisms to ensure that people are kind to one another.
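The quantitative core of this kin-selection account is Hamilton’s rule: an altruistic act is favored when r × B > C, where r is the coefficient of relatedness, B the benefit to the recipient, and C the cost to the actor. A minimal sketch (the benefit and cost figures are arbitrary):

```python
# Hamilton's rule: kin-directed altruism is favored by selection
# when r * B > C (r = coefficient of relatedness, B = benefit to
# the recipient, C = cost to the altruist).
relatedness = {
    "child": 0.5, "full sibling": 0.5, "grandchild": 0.25,
    "niece/nephew": 0.25, "first cousin": 0.125,
}

def altruism_favored(kin: str, benefit: float, cost: float) -> bool:
    return relatedness[kin] * benefit > cost

# The same costly act pays for a sibling but not for a cousin:
print(altruism_favored("full sibling", benefit=3.0, cost=1.0))  # True:  0.5 * 3 > 1
print(altruism_favored("first cousin", benefit=3.0, cost=1.0))  # False: 0.125 * 3 < 1
```

This is why, as the text says, the ties of the immediate family need no gods to police them, while the wider community does.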
Morality evolved over eons in the paleolithic environment where individuals cooperated and competed with one another to meet their needs. Individuals belonged to families, families to extended families, extended families to communities, and, in the last couple of centuries, communities to societies. This natural progression, which is now in its latest evolutionary stages of perceiving societies as part of the species, and the species as part of the biosphere, is illustrated in the Bio-Cultural Pyramid below.
The lower strata of the Bio-Cultural Pyramid depict the 1.5 million years over which our moral behavior evolved under primarily biogenetic control, and the middle layer depicts the transition about 35,000 years ago when sociocultural factors increasingly assumed control in shaping our ethical precepts. Obviously this was a continuous process. There was no point at which an Upper Paleolithic Moses descended from a glacier-covered mountain to present The Law to his fellow Cro-Magnons.
The Bio-Cultural Pyramid: A model of the origin and development of ethical behavior.
Nevertheless, the semipermeable bio-cultural transitional boundary divides both time and the dominant source of influence: the individual, family, extended family, and paleolithic communities were primarily molded by natural selection, whereas neolithic communities and modern societies were and are primarily shaped by cultural selection. Starting at the bottom of the Bio-Cultural Pyramid, the individual’s need for survival and genetic propagation (through food, drink, safety, and sex) is met by way of the family, extended family, and the community. The nuclear family, however, is the foundation. Despite assaults on it in the second half of the twentieth century, the family remains the most common social unit around the world. Even within extremes of cultural deprivation—slavery, prisons, communes—the two-parents-with-children structure emerges: (1) African slave families that were broken up retained their attachment and structure for generations through the oral tradition; (2) in women’s prisons pseudofamilies self-organize, with a sexually active couple acting as “husband” and “wife” and others playing “brothers” and “sisters”; (3) even where communal collective parenting is the norm (e.g., kibbutzim), many mothers switch to the two-parent arrangement and the raising of their own offspring. Our evolutionary attachment to this foundational social structure is too strong to overcome. Conservatives need not bemoan the decline of families. They will be around as long as the species continues.
Moving up the Bio-Cultural Pyramid, basic psychological and social needs such as security, bonding, socialization, affiliation, acceptance, and affection evolved as mental programs to aid and reinforce cooperation and altruism, all of which facilitate genetic propagation through children. Kin altruism works indirectly—siblings and half-siblings, grand- and great-grandchildren, cousins and half-cousins, nieces and nephews all carry portions of our genes. This is what is known as inclusive fitness, and it applies to anyone who is genetically related to us. In larger communities and societies, where there is no genetic relationship, reciprocal altruism (if you scratch my back I will scratch yours) and indirect altruism (if you scratch my back now I will scratch yours later) supplement kin altruism. Inclusive fitness gives way to what we might call exclusive fitness. The natural progression of exclusive fitness may be the adoption of species altruism and bioaltruism (we will prevent extinction and destruction now for a long-term payoff), which Wilson argues in Biophilia may even have a genetic basis. But, Wilson confesses, this should probably still be grounded in self-interest arguments—my children and grandchildren will be better off in a future with abundant biodiversity and a healthy biosphere—since inclusive fitness is more powerful than exclusive fitness.
The width of the Bio-Cultural Pyramid at any point indicates the strength of ethical sentiment, and the degree to which it is under evolutionary control. The height of the pyramid at any point indicates the degree to which that ethical sentiment extends beyond our own genomes (ourselves). But the pyramid also shows that these two sets of sentiments are inversely related. The further a sentiment reaches beyond ourselves, the further it goes in the direction of helping someone genetically less related, and the less support it receives from underlying evolutionary mechanisms.
New research by philosopher Elliott Sober and biologist David Sloan Wilson, presented in their 1998 book Unto Others: The Evolution and Psychology of Unselfish Behavior, indicates that there may have been an additional selection component in human evolution that gave rise to cooperation and altruism: a modified version of group selection. This is a volatile subject among evolutionary theorists because, for the past thirty years, group selection has been next to creationism as the doctrine strict Darwinians most love to hate. From George Williams's 1966 book Adaptation and Natural Selection to Richard Alexander's 1987 book The Biology of Moral Systems to Richard Dawkins's several books throughout the 1990s, group selection was vilified as the pap of bleeding-heart liberals who couldn't deal with the reality of "nature red in tooth and claw." Michael Ghiselin's 1974 description summed up the Darwinian literalists' perspective, especially in its last line:
The economy of nature is competitive from beginning to end…. The impulses that lead one animal to sacrifice himself for another turn out to have their ultimate rationale in gaining advantage over a third…. Where it is in his own interest, every organism may reasonably be expected to aid his fellows…. Yet given a full chance to act in his own interest, nothing but expediency will restrain him from brutalizing, from maiming, from murdering—his brother, his mate, his parent, or his child. Scratch an "altruist," and watch a "hypocrite" bleed.
Sober and Wilson, through a sophisticated mathematical model and a series of logical arguments, and defining a group as "a set of individuals that influence each other's fitness with respect to a certain trait but not the fitness of those outside the group," demonstrate that "natural selection can operate at more than one level of the biological hierarchy." They show how "individual selection favors traits that maximize relative fitness within single groups," and that "group selection favors traits that maximize the relative fitness of groups." Of course, "altruism is maladaptive with respect to individual selection but adaptive with respect to group selection." Therefore, they conclude, "altruism can evolve if the process of group selection is sufficiently strong." For example, they cite William Hamilton's analysis of how conscience might have provided a group selective advantage for certain human populations with regard to the ethical enforcement of rules: "Consider also the selective value of having a conscience. The more consciences are lacking in a group as a whole, the more energy the group will need to divert to enforcing otherwise tacit rules or else face dissolution. Thus considering one step (individual vs. group) in a hierarchical population structure, having a conscience is an 'altruistic' character."
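A stripped-down simulation shows the principle at work. What follows is my own minimal sketch, not Sober and Wilson's actual mathematics: altruists lose to selfish free riders inside every group, yet groups rich in altruists grow faster, so the global frequency of altruism can still rise.

```python
# Trait-group selection in miniature: altruists pay a cost c and produce a
# benefit b shared by their whole group. Within each group the selfish do
# better (they get the benefit without the cost), but altruist-rich groups
# out-grow altruist-poor ones. All parameter values are illustrative.

def next_generation(groups, b=5.0, c=1.0, base=10.0):
    """groups: list of (altruist, selfish) counts; returns the next counts."""
    new_groups = []
    for a, s in groups:
        n = a + s
        benefit = b * a / n        # public good produced by the altruists
        w_a = base + benefit - c   # altruists alone pay the cost
        w_s = base + benefit       # free riders are always fitter locally
        new_groups.append((a * w_a, s * w_s))
    return new_groups

groups = [(1.0, 9.0), (9.0, 1.0)]  # one altruist-poor, one altruist-rich group
for gen in range(5):
    total_a = sum(a for a, _ in groups)
    total = sum(a + s for a, s in groups)
    print(f"gen {gen}: global altruist frequency = {total_a / total:.3f}")
    groups = next_generation(groups)
```

Run it and the global altruist frequency climbs from .500 to roughly .668 over five generations, even as the altruist share falls within each group; in a fuller model the groups would periodically disperse and re-form, which is what keeps the effect going.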
Part of the problem in this debate lies in how certain terms, such as altruism and cooperation, are defined, and in the tendency to force these categories into either-or choices for human actions: humans are either altruistic or selfish, either cooperative or competitive. But altruistic and cooperative are not reified things; they are behaviors. And like all behaviors, there is a broad range of expression, from a little to a lot. Applying fuzzy logic can help clarify this complex human phenomenon: we might assign fuzzy numbers to altruism or cooperation, so that, depending on the circumstances, someone might be, say, .2 altruistic and .8 nonaltruistic (or selfish), or .6 cooperative and .4 noncooperative (or competitive). Humans can be both altruistic and nonaltruistic, cooperative and noncooperative.
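As a toy illustration (my own, echoing the numbers above), fuzzy membership can be coded as a degree between 0 and 1 rather than as a true-or-false label:

```python
# Fuzzy-logic view of behavior: instead of booleans, each trait is a degree
# of membership between 0.0 and 1.0, with the complement giving the degree
# of the opposing category.
from dataclasses import dataclass

@dataclass
class BehaviorProfile:
    altruistic: float    # degree of membership in "altruistic", 0.0 to 1.0
    cooperative: float   # degree of membership in "cooperative", 0.0 to 1.0

    @property
    def selfish(self) -> float:
        return 1.0 - self.altruistic     # nonaltruistic (selfish) degree

    @property
    def competitive(self) -> float:
        return 1.0 - self.cooperative    # noncooperative (competitive) degree

person = BehaviorProfile(altruistic=0.2, cooperative=0.6)
print(person.selfish, person.competitive)   # 0.8 0.4, as in the example above
```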
One problem with reciprocal altruism is this: How do I know that if I scratch your back you will scratch mine? I am more than willing to cooperate with unrelated members of my community, but only if I am reasonably certain that they are going to reciprocate. How can I find out who are the cooperators and who are the defectors? Gossip is one way. Past experience with my fellow community members is another. Combined, these give me enough information to make a decision (even if it is on an unconscious level) about whom I can trust.
In a way, daily life can be modeled by a game-theory scenario called the Prisoner's Dilemma. Two individuals who cooperated in committing a crime are caught, arrested, and offered the chance of a reduced sentence if one will rat out the other. The district attorney can convict both of them of a minor offense, but if one of them confesses, he can go free while the other rots in jail with a long sentence. What will they do? It depends on their respective reputations for being trustworthy. Let us simplify the game: each player gets one point if both cooperate, either one can get two points by defecting when the other cooperates, and both get zero points if both defect. When only one round of the game is played, most people defect. But when the game is iterated, or repeated for numerous rounds with the same players, cooperation becomes the norm. When you learn that your partner is a cooperator and not a defector, you become a cooperator yourself.
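A few lines of Python make the one-round logic explicit. The payoffs are the simplified ones just described, plus one assumption the text leaves implicit: the betrayed cooperator scores zero. Whatever the other player does, defection never scores worse, which is why most people defect in a single round.

```python
# One-shot Prisoner's Dilemma with the simplified payoffs from the text.
# 'C' = cooperate, 'D' = defect; the sucker's payoff of 0 is an assumption.

PAYOFF = {            # (my move, their move) -> my score
    ('C', 'C'): 1,    # mutual cooperation
    ('C', 'D'): 0,    # betrayed cooperator (assumed)
    ('D', 'C'): 2,    # temptation: defect on a cooperator
    ('D', 'D'): 0,    # mutual defection
}

# Defection weakly dominates: it never pays less than cooperating.
for theirs in ('C', 'D'):
    print(PAYOFF[('D', theirs)] >= PAYOFF[('C', theirs)])  # True both times
```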
To test this hypothesis the mathematician and political scientist Robert Axelrod held a contest, inviting people to submit computer programs to play the iterated Prisoner's Dilemma. Pitting the programs against each other for 200 games each, he tallied up the payoff scores and found that the winning program was the simplest one, designed by Anatol Rapoport and called Tit for Tat. The program cooperates on the first round, and on every subsequent move it matches its opponent's previous choice. Tit for Tat, says evolutionary biologist John Maynard Smith, is an evolutionarily stable strategy, or "a strategy such that, if all the members of a population adopt it, no mutant strategy can invade." Tit for Tat is reciprocal altruism—if you'll scratch my back, I'll scratch yours (or the reverse: if you stab me in the back, I'll stab you in the back). But the latter is rarely necessary. Most of the time it pays to cooperate, and most of the time we do.
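Here is a minimal sketch of the iterated game (not Axelrod's actual tournament code) that pits Tit for Tat against two unconditional strategies for 200 rounds, using the same simplified payoffs as above:

```python
# Iterated Prisoner's Dilemma. A strategy is a function taking its own and
# its opponent's move histories and returning 'C' or 'D'. Payoffs are the
# simplified ones from the text (sucker's payoff of 0 assumed).

PAYOFF = {('C', 'C'): (1, 1), ('C', 'D'): (0, 2),
          ('D', 'C'): (2, 0), ('D', 'D'): (0, 0)}

def tit_for_tat(mine, theirs):
    return 'C' if not theirs else theirs[-1]   # start nice, then mirror

def always_defect(mine, theirs):
    return 'D'

def always_cooperate(mine, theirs):
    return 'C'

def play(s1, s2, rounds=200):
    h1, h2 = [], []
    score1 = score2 = 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

print(play(tit_for_tat, tit_for_tat))       # (200, 200): steady cooperation
print(play(tit_for_tat, always_defect))     # (0, 2): exploited once, never again
print(play(tit_for_tat, always_cooperate))  # (200, 200)
```

Against itself, Tit for Tat settles into permanent mutual cooperation; against a pure defector it loses only the opening round and refuses to be exploited again, which is what makes the strategy both nice and retaliatory.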
Is this because we want to be "good" or "moral" people? That may be what it feels like now, but current emotions may be proxies for deeper causes. Since our reputations as cooperators must be built over time, we must show consistency from day to day, week to week, and year to year. It would be difficult to fake being a cooperator in order to fool your fellow community members for any length of time. Anthropologist William Irons shows that religion, in addition to providing rules, morals, and enforcement (and numerous other benefits outside the scope of this analysis), furnishes a splendid opportunity to prove loyalty and commitment to the group. If I see you every week in the pews, every month at the confessional, getting circumcised, being bar mitzvahed, not eating meat on Fridays, wearing a yarmulke, singing the psalms of the Lord, not using electricity on the sabbath, facing east to pray, taking the bread and wine as the body and blood of the savior, going to war in the name of God, and even willing to risk death for our group, I know you are someone I can trust. That sort of commitment is hard to fake. If our self-image is that of an honest person, not only are others more likely to perceive us as honest, we are more likely to be honest. We are all fairly good at detecting cheaters and liars, so in order for the cheater or liar to get away with his offense, he has to work very hard at appearing honest. Even if deception is the original intent, in time, with repetition of the ritual, self-deception may take over. Psychics, cult gurus, and other charlatans may very well come to believe in their own outrageous claims for the simple reason that they can deceive their marks better if they themselves believe the lie. Either way, through literally millions of iterations of real-life game-theory events in the course of a lifetime, we learn who are the cooperators and who are the defectors. And through our religion (and, more recently, the state), we come to believe that our actions really are moral, just, and right. Our clan really is special, perhaps even worth dying for, if our leader or our God so asks.
One of the most common reasons people give for believing in God (see Chapter 4) is that without the existence of a deity there would be no ultimate basis for morality. The source of this belief may be that morality, God, and religion have been so intertwined for so long that there is probably an evolution-based epigenetic rule underlying the connection. The Enlightenment concept of human rights—as expressed and fought for in the French and American Revolutions—is relatively new. It is primarily based on the social contract: in order for humans to achieve life, liberty, and happiness they must be free, and their freedoms must be protected by the state through compacts like the Constitution and the Bill of Rights. With proper indoctrination it is possible to get young men so committed to these ideals that they will sacrifice themselves for the larger group. But this has not been easy, so it is probably no accident that beneath the surface of nationalism often lies religion. Our God is better than their God. (Or, in the case of the cold war, our God is better than their Godless society.) When Winston Churchill and Franklin Roosevelt together sang "Onward Christian Soldiers" following the meeting that cemented the American-British alliance, this was more than ceremonial window dressing. Or consider the words to "The Battle Hymn of the Republic" (especially the final refrain below) written by Julia Ward Howe after a review of the federal troops in Washington (published in the Atlantic Monthly in February 1862):
Mine eyes have seen the glory of the coming of the Lord;
He is trampling out the vintage where the grapes of wrath are stor’d
He hath loos’d the fateful lightning of His terrible swift sword
His truth is marching on.
I have read a fiery gospel writ in burnished rows of steel:
“As ye deal with My contemners, so with you My grace shall deal”
Let the Hero born of woman crush the serpent with His heel
Since God is marching on.
The Confederate troops, of course, sang and prayed to the same God to do the same thing to their enemy.
God and religion are many things to many people, and reasons for belief are varied, thoughtful, and momentous. For centuries many a theologian, scholar, and scientist has attempted to explain why people need religion and believe in God. Their efforts have left a legacy of theories and libraries of books on the subject. Edward Tylor and James Frazer viewed religion as animism and magic, whereas Sigmund Freud saw it as an obsessional neurosis. Emile Durkheim said religion is a sacred part of the social structure, while Karl Marx said it is nothing more than another tool of alienation and the opiate of the masses. Mircea Eliade thought religion to be the most sacred part of the human psyche, while E. E. Evans-Pritchard saw religion as society's "construct of the heart," which it needs as much as science's "construct of the mind." Clifford Geertz believed that religion is a cultural system of symbols that act to empower, give meaning, and provide motivation. In evaluating these disparate theories, historian of religion Daniel Pals suggests asking the following questions: "(1) How does it define the subject? (2) What type of theory is it? (3) What is the range of the theory? (4) What evidence does the theory appeal to? (5) What is the relationship between a theorist's personal religious belief (or disbelief) and the explanation he chooses to advance?"
Applying these questions to the theory of religion presented here: (1) Religion is a social institution that evolved as an integral mechanism of human culture to create and promote myths, to encourage altruism and reciprocal altruism, and to reveal the level of commitment to cooperate and reciprocate among members of the community. (2) This is a biocultural theory of religion. (3) The range of the theory is limited to deeper, ultimate “why” questions about religion and belief in God. The particulars of any one religion are not the subject of analysis. (4) This is a scientific theory, so the evidence is based on those sciences most allied with the study of myth, religion, and belief in God: archaeology, history, anthropology, sociology, cognitive and social psychology, neurophysiology, behavior genetics, and evolutionary biology. The theory attempts to probe deeply into the core of why people believe in God and religion, but does not focus on specific faiths or customs. (5) See the preface and first chapter of this book for the relationship between my personal religious beliefs and the explanation I have chosen to advance.
This theory of religion has presented a case for how humans evolved from pattern-seeking to storytelling to mythmaking to morality and religion. Where does God fit into this sequence? In short, everywhere. God is a pattern, an explanation for our universe, our world, and ourselves. God is the key actor in the story, “the greatest story ever told” about where we came from, why we are here, and where we are going. God is a myth, one of the most sublime and sacred myths ever constructed by the mythmaking animal. God is the ultimate enforcer of the rules, the final arbiter of moral dilemmas, and the pinnacle object of commitment. And God is the integrant of religion, the most elemental of all components that go into the making of the sacred. God and religion are inseparable. People believe in God because we are pattern-seeking, storytelling, mythmaking, religious, moral animals.