8
 
RISE ABOVE: TOLERANCE, FREEDOM, AND THE PROSPECTS FOR HUMANITY
 
 
 
Nature, Mr. Allnut, is what we were put in this world to rise above.
 
—Katharine Hepburn to Humphrey Bogart
in The African Queen, 1951
 
 
 
 
 
In an episode of the original series of Star Trek, the Enterprise crew encounters an alien civilization playing a war game that is just about to escalate into a full-scale conflict. Captain James T. Kirk tells the alien leaders to “just say no” to war. Though instinctive, war may be resisted. There can be peace.

ALIEN: There can be no peace. Don’t you see? We’ve admitted it to ourselves. We are a killer species. It’s instinctive. It’s the same with you.
KIRK: All right, it’s instinctive. But the instinctive can be fought. We are human beings with the blood of a million savage years on our hands, but we can stop it. We can admit that we are killers but we’re not going to kill … today. That’s all it takes—knowing that you’re not going to kill … today.1
 

Are we, by nature, a killer species? The scientific evidence we have surveyed in this book answers in the affirmative. Built into our makeup is the capacity for aggression, violence, and war. Providentially, the scientific evidence also points to a solution. The rise of state societies, and with them the development of codified moral systems—initially through religious organizations and subsequently through secular institutions—have truly “civilized” our species, accentuating the moral and attenuating the immoral sentiments. We have done well, but we can do better. Our moral and immoral natures are in delicate balance. Too many immoralities confront us daily, even in the most civilized of cultures. Homicides and genocides, wars and revolutions, rape and domestic violence make their appearances too frequently on the nightly news. We still retain the power to trigger our own extinction, whether through ecocide with the mass destruction of our environments or through genocide from nuclear, chemical, and biological weapons. If nature put us into this world, how best shall we rise above it? In this final chapter on the prospects for humanity, we shall examine the evidence that our species is on a long evolutionary trajectory toward greater amity toward members of our own group, a long historical path toward more tolerance and inclusivity, and a long political path toward more liberties for more people in more places, whether they are members of our group or not. Out of this analysis arise two recommendations, one on individual tolerance and the other on social and political freedoms.
 
UCLA evolutionary biologist Jared Diamond once classified humans as the “third chimpanzee.”2 Genetically we are very similar, and when it comes to high levels of between-group aggression, we also resemble our ape cousins. But in comparing ourselves to other species of great apes, there is hope for humanity. In looking at within-group levels of aggression, it turns out that we are much more like bonobos, who are well known for their peaceful and loving ways. Once classified as “pygmy chimpanzees” because of their diminutive skulls, bonobos are now placed in their own grouping, primarily on the basis of these dramatic behavioral differences from chimpanzees. We are much more peaceful than chimpanzees with our fellow in-group members, and our prodigious sexuality is far more like that of bonobos than chimpanzees. Although humans still exhibit within-group violence—as witnessed on the nightly news—compared to chimpanzees, who almost daily exhibit uncontrolled outbursts of male-on-male aggression and male-on-female violence, statistically we are closer to bonobos than to chimpanzees. Domestic violence among humans, for example, while still at unacceptable levels, is nevertheless significantly less common than that seen in chimpanzee families.3
What are we to make of this contrast between humans as within-group bonobos and humans as between-group chimpanzees? Anthropologist Richard Wrangham proffers a plausible theory that, as a result of selection pressures for greater within-group peacefulness, humans and bonobos have gone down a different behavioral evolutionary path than chimpanzees. That difference may be witnessed in morphology. The “pygmy chimpanzee” moniker was given to bonobos because, compared to chimpanzees, their skull, jaws, and teeth are much reduced in size, even while their other body parts are quite similar. Breeders have long observed that in artificially selecting wild animals for docility, they produce not only far less aggression but also a suite of other changes, including and especially a reduction in skull, jaw, and teeth size. This is called pleiotropy, in which a single gene may affect a series of traits: selecting for one trait (for example, nonaggression) may generate other unintended changes (for example, reduced skull, jaw, and teeth size). The most famous study on selective breeding for domesticity in wild animals was begun in 1959 by the eminent Russian geneticist Dmitri Belyaev at the Institute of Cytology and Genetics in Siberia (and is continued today by Lyudmila N. Trut). Silver foxes (Vulpes vulpes) were bred for friendliness toward humans, defined by a series of criteria ranging from an animal allowing itself to be approached, hand-fed, and petted to its actively seeking human contact. In only thirty-five generations (remarkably short on an evolutionary time scale), the researchers produced tail-wagging, hand-licking, peaceful foxes. What they also fashioned were foxes with significantly smaller skulls, jaws, and teeth than their wild ancestors.
Similar changes can be seen in comparing domesticated dogs to their ancestral wild wolves. Mitochondrial DNA sequence evidence traces the ancestry of every dog on earth, including New World dogs, to a single population of Asian wolves living roughly 15,000 years ago. In addition to smaller skulls, jaws, and teeth, domesticated dogs evolved a set of social-cognitive abilities that enable them to read human communicative signals indicating the location of hidden food. These abilities are shared neither by their wild forebears nor by the great apes. Specifically, domesticated dogs were able to pick the right container of concealed food when the experimenter looked at, tapped, or pointed to it, whereas wolves, chimpanzees, and other primates were not. Such nonverbal communication skills are vital for both domesticated dogs and domesticated humans.4 There were additional pleiotropic effects in the foxes. After several generations of breeding, for example, the foxes, like their canine cousins, began exhibiting floppy ears, curly tails, and striking color patches on their fur, including a star-shaped pattern on the face similar to those found in many breeds of dogs.
What is going on here? The Russian scientists believe that in selecting for docility, they inadvertently selected for paedomorphism, or the retention of juvenile features into adulthood, such as floppy ears (found in wild pups but not in wild adults), the delayed onset of the fear response to unknown stimuli, and lower levels of aggression. It seems that the selection process led to a significant decrease in levels of stress-related hormones such as corticosteroids, which are produced by the adrenal glands during the fight-or-flight response, as well as a significant increase in levels of serotonin, thought to play a leading role in the inhibition of aggression. Curiously, in selecting only for docility, the Russian scientists also accomplished what no breeder had done before—they increased the length of the breeding season.5
Wrangham suggests that over the past 20,000 years, as humans became more sedentary and their populations grew, there was selection pressure for less within-group aggression, and this effect can be seen in the reduced size of our skulls, jaws, and teeth (compared to our immediate hominid ancestors), our year-round breeding season and more bonobo-like prodigious sexuality, and our paedomorphism (see figure 28). Wrangham also cites research that shows how area 13 in the limbic frontal cortex in humans, believed to mediate the inhibition of aggressive behavior, more closely resembles in size the equivalent area in bonobos’ brains than it does that in chimpanzees’ brains.6
Figure 28. The Paedomorphic Primate
 
Paedomorphy is the retention of juvenile features into adulthood. Humans, like bonobos and domesticated dogs and foxes, show paedomorphy of the skull, jaw, and teeth. This illustration shows the growth of a chimpanzee skull (left) from juvenile (top) to adult (bottom), compared to the growth of a human skull (right), from juvenile (top) to adult (bottom). The chimpanzee skull matures very differently than the human skull. We are the paedomorphic primate. (From John M. Allman, Evolving Brains, 1999. Courtesy of John M. Allman)
e9781429996754_i0038.jpg
 
An additional test of this hypothesis is to compare serotonin levels in humans with those in chimpanzees and bonobos. The prediction is that human serotonin levels should more closely match those in bonobos. This comparison has never been made, but as Robert Sapolsky, a Stanford University biologist who studies aggression in primates, told me: “Overall, the literature shows that low levels of serotonin, or serotonin breakdown products in the cerebrospinal fluid, is a predictor of aggression, impulsivity, disinhibition, and so on. It probably has much to do with indirectly inhibitory roles that serotonin plays in the frontal cortex.” Paul Zak, an expert on the effects of oxytocin on cooperation and the control of aggression, agreed with Sapolsky’s assessment, and added: “Oxytocin (OT) is a feel-good hormone and we find that it guides subjects’ decisions even when they are unable to articulate why they are acting in a trusting or trustworthy manner. I think the same ‘sense’ of right and wrong in moral dilemmas may utilize OT.” As for the difference between chimpanzees and bonobos, Zak suggested that “there is evidence that serotonin (5HT) increases the release of OT. I would speculate that bonobos have higher OT (due to the frequency of sexual activity and touching) and therefore generally higher 5HT and less aggression (though I have not seen such a study done).” He also noted that in his lab they have been studying what he called oxytocin’s “ugly cousin,” “the neuroactive hormone arginine vasopressin (AVP), which is strongly related to reactive aggression in mammals, including humans.” Where does all this happen in the brain? Sapolsky suggested that the inhibition of aggression is ultimately controlled by the frontal cortex, of which we have plenty, chimps have some, and the baboons he studies on the Serengeti Plain of Africa have very little.
“Baboons are far less disciplined than chimps, and when you map their brain anatomy you notice that they don’t have a whole lot of frontal cortical function. Even though there are tremendous individual differences among the baboons, they’re still at this neurological disadvantage, compared to the apes, and thus they typically blow it at just the right time. They could be scheming these incredible coalitions, but at the last moment, one decides to slash his partner in the ass instead of the guy they’re going after, just because he can get away with it for three seconds. Their whole world is three seconds long.” To the extent that morality is linked to impulse control and the delay of gratification, if there is a moral module in the brain, it is either in the frontal cortex or directly linked to it and is heavily mediated by brain chemistry and experience. Zak noted, for example, that in mice that are predisposed to have high levels of oxytocin, if they are deprived of maternal nurturing, “a large proportion of brain areas with OT receptors atrophy.”7
These data fit well with the theory on the evolution of the moral sentiments presented in the first half of this book. The increased size of our brains, particularly the frontal cortex, gave humans a cognitive advantage over other primates that allowed us to better control our impulsive emotions and delay gratification, as well as form social coalitions and plan strategies. Between-group competition (with other hominid and primate groups) for limited resources led to the selection for such immoral sentiments as competitiveness and selfishness, but at the same time it led to within-group cooperation and selflessness that enhanced the fitness level of individual members of the group. The result is within-group amity and between-group enmity (figure 29).8
This evolutionary scenario bodes well for our species. Even though the behavioral potential for between-group violence exists in our primate brains, we also harbor the seeds of peaceful coexistence. We cannot increase the size of our frontal cortex. However, we can learn to be moral. (Children do it—the frontal cortex is not fully developed until well into the teenage years, which is why children are, in a manner of speaking, premoral animals, why teens seem so impulsive, and why parents discipline their children in an attempt to shape a moral sense into them.) We can also practice being moral. If making love makes more oxytocin, and more oxytocin means less aggression, then there very well may be a neurological correlate to the cliché, “make love, not war” (figures 30 and 31). We can also learn to think differently about ourselves—perceiving the entire human population as a single group, for example, would eliminate between-group aggression. Although recent history is not encouraging, the long-term trend over the past half millennium has been toward greater inclusiveness and more liberties, for more people, in more places. There are two ways to reinforce the continuation of this positive trend: (1) individual action to accentuate tolerance and attenuate intolerance of individual differences between people; (2) political action to expand whom we tolerate, both legally and morally, to be members of our in-group.
Figure 29. The Domesticated Primates
 
An orangutan, bonobo, gorilla, and human are pictured. In measures of between-group violence, humans are more like aggressive and territorial chimpanzees. In measures of within-group violence, humans are more like peaceful and loving bonobos. Between-group competition for limited resources led to the selection for within-group cooperation and selflessness and between-group competitiveness and selfishness. (From Adolph Schultz, The Life of Primates, 1969. Courtesy of Orion Publishing Group, Inc.)
e9781429996754_i0039.jpg
 
Figure 30. Making War, Not Love
 
When it comes to between-group levels of violence, humans behave much like chimpanzees, with males fanning out into surrounding environments in search of food and other resources, forays that often become seek-and-destroy missions against other groups. (Top:) Two groups of New Guinea hunter-gatherers square off for war. (Photograph by Robert Gardner. Courtesy of the Peabody Museum) (Bottom:) A group of U.S. soldiers in Vietnam sweeps through rice paddies in an army photograph entitled “Mission—Search and Destroy the Vietcong.” (Photograph by SFC Jack H. Yamaguchi)
e9781429996754_i0040.jpg
 
e9781429996754_i0041.jpg
 
Figure 31. Making Love, Not War
 
Bonobos have much lower levels of within-group violence and much higher levels of sexual contact. Conflict resolution among bonobos often involves sexual and erotic contact. Although humans are like chimpanzees in our high levels of between-group aggression, we are more like bonobos in our low levels of within-group aggression and high levels of sexuality and eroticism. (Photograph by Frans de Waal. Courtesy of Frans de Waal)
e9781429996754_i0042.jpg
 
 
Since the founding of the Skeptics Society and Skeptic magazine in 1991, I have been asked countless times how and why I lost my faith. Although my conversion to Christianity was sudden and dramatic, as is often the case for those who are not inculcated into a religion in childhood, my “de-conversion” was gradual and evolutionary. The scales did not suddenly fall from my eyes. Paul did not morph back into Saul. Rather, there was a slow but systematic displacement of one worldview and way of thinking by another: genesis and exodus myths by cosmology and evolution theories; faith by reason; final truths by provisional probabilities; trust by verification; authority by empiricism; and religious supernaturalism by scientific naturalism.9
Intellectually, I found little substance in the so-called scientific proofs of God’s existence or the philosophical arguments dating back to the Middle Ages that form the foundation of Christian apologetics. Although the scientific explanations for the origins of our universe, our world, and ourselves—Big Bang cosmology, historical geology, and evolutionary theory—were not wholly encompassing and had plenty of gaps, they seemed to me to have a higher probability of being true, by orders of magnitude, than the origin myths offered by religion. Anthropologists and social psychologists have shown that most aspects of religion, including and especially myths of origins and morality, are culturally determined and socially constructed. In my studies there came a point when it seemed to me absurd even to ask if these stories were true. Attempting to calculate how many pairs of animals could have fit on Noah’s ark is a pointless exercise: if God is omnipotent, He could fit as many animals on the ark as He liked. The point of the Noachian flood story is destruction and redemption, not how Noah kept the predators away from the prey or on which deck the dinosaurs were housed. In this sense, creationists have butchered the Bible in their attempt to squeeze the square peg of religion into the round hole of science.
Emotionally, over time I found increasingly less in religion that appealed to me. The definitive nature of religious answers struck me as contrived, particularly when contrasted with the provisional nature of answers in science. For my temperament, the uncertainty of science was one of the perks of the job. Now anyone, including me, could join in the search for answers and participate personally in the adventure of exploring our world and our selves. Since there was no Archimedean point of objective observation, it meant that I was on a journey of discovery with the rest of humanity. There are no privileged priests in science.
Morally, there were aspects of religion that I found more than a little troublesome. When I became a born-again Christian, the moral complexities and subtleties of life—that I was only just beginning to explore and comprehend—suddenly vanished in the clarity of the absolute and final answers to moral dilemmas. Or so I thought. My first inkling of a problem came the day after my conversion, when a passionately religious friend reprimanded me for choosing the wrong faith. He told me I was still doomed if I did not switch to his church, the Jehovah’s Witnesses. As it turned out, this was not an isolated instance. The more faiths I examined, the more aware I became of the fact that they all think they alone are right.
Religious Absolutism and Intolerance
 
The belief that one’s faith is the only true religion too often leads to a disturbing level of intolerance, and this intolerance includes the assumption that nonbelievers cannot be as moral as believers. The Bible reinforces this idea (Ps. 14:1): “The fool has said in his heart, ‘There is no God.’ They are corrupt, they have done abominable works, there is none who does good.” The most famous pronouncement along these lines came during a news conference on August 27, 1987, by Vice President George Bush, who was making a stop at Chicago’s O’Hare Airport on the 1988 presidential campaign trail. After he explained that “faith in God is important to me,” a reporter inquired, “Surely you recognize the equal citizenship and patriotism of Americans who are atheists?” Bush replied: “No, I don’t know that atheists should be considered as citizens, nor should they be considered patriots. This is one nation under God.”10 This conclusion, however, is not borne out by the data from the scientific study of religion and morality. While individual religious believers may be exceptionally moral and tolerant people, and while religion may inspire in some individuals extraordinary morality and tolerance, religion does not necessarily foster these desirable traits.
According to a 1997 survey conducted by the University of Ohio, for example, intolerance among Christian activists is relatively high, especially when it comes to the perceived moral degeneration of America. In fact, 99 percent agreed that “moral decay is the cause of America’s problems.” What is the perceived source of this moral decay? One-third of the Christian activists who responded to the survey listed the American Civil Liberties Union as the most dangerous group in America, with gay rights groups coming in a close second. About 80 percent stated that members of the ACLU and gay rights groups “should not be allowed to: make a public speech, run for public office, demonstrate in public, or operate legally.” Nearly half, 44 percent, declared that such “dangerous” people “should not be allowed to teach in public schools.” Most threatening to civil liberties and the separation of church and state, more than half (52 percent) agreed with this statement: “Christians should take dominion over all aspects of society.” No less than 91 percent believe that “God works through politics and election returns,” and 89 percent think that “the U.S. has prospered when it obeyed God” and that “Clergy and churches should be involved in politics.” A vast majority (75 percent) agreed that, “if enough people were brought to Christ, social ills would take care of themselves.”11
Nothing fuels religious extremism more than the belief that one has found the absolute moral truth. Islamic terrorism, for example, has gradually shifted from secular motives in the 1960s to religious motives today. One study found that in 1980 there were only two out of sixty-four militant Islamic groups whose mission was religiously based. In 1995 that figure had climbed to nearly half.12 It is a type of fuel that can lead to what Clay Farris Naff, executive director of the Center for the Advancement of Rational Solutions in Lincoln, Nebraska, cleverly calls the “neuron bomb,” after its cold-war counterpart, the “neutron bomb,” which was designed to kill people while leaving buildings and infrastructure intact. A schematic of the neuron bomb looks like this:

Arming Device: Belief that God’s enemies must be defeated or destroyed
Concealment: Can be implanted in any human mind
Cost: Practically nothing
Explosive Materials: Anything at hand
Destructive Potential: Unlimited13
 

Salman Rushdie minced no words in his analysis of the problems between India and Pakistan, two religiously based political systems poised intermittently on the brink of nuclear holocaust: “The political discourse matters, and explains a good deal. But there’s something beneath it, something we don’t want to look in the face: namely, that in India, as elsewhere in our darkening world, religion is the poison in the blood. So India’s problem turns out to be the world’s problem. What happened in India has happened in God’s name. The problem’s name is God.”14
To be more accurate, India’s problem—and the world’s—is extremism in the name of God, even in the industrial and democratic West. “All faiths that come out of the biblical tradition—Judaism, Christianity and Islam—have the tendency to believe that they have the exclusive truth,” writes Rabbi David Hartman of the Shalom Hartman Institute in Jerusalem. “When the Taliban wiped out the Buddhist statues, that’s what they were saying. But others have said it too.”15 Others such as Cardinal Ratzinger, a representative of the Vatican, who proclaimed in August 2000: “With the coming of the Saviour Jesus Christ, God has willed that the Church founded by Him be the instrument for the salvation of all humanity. This truth of faith … rules out, in a radical way … the belief that ‘one religion is as good as another.’”16 Religious extremism in America is particularly potent when gathered together under the umbrella of militia groups. An example can be seen in the life of Eric Robert Rudolph, the American terrorist charged in the 1996 Atlanta Olympics bombing that killed one person and wounded over a hundred others, as well as in the bombing of a gay nightclub and two abortion clinics. When he was captured in May 2003, it was reported that he was a member of the Christian Identity movement (an extremist group that believes Jews are satanic and blacks are subhuman), was known for his anti-Semitic and racist views, and that in his seven-year evasion of the FBI and law enforcement agencies, he probably had help from militia groups as well as local townspeople in Murphy, North Carolina, many of whom apparently share his views.17
Not only is there no evidence that a lack of religiosity leads to less moral behavior, a number of studies actually support the opposite conclusion. In 1934 Abraham Franzblau found a negative correlation between acceptance of religious beliefs and three different measures of honesty. As religiosity increased, honesty decreased.18 In 1950 Murray Ross conducted a survey among 2,000 associates of the YMCA and discovered that agnostics and atheists were more likely to express their willingness to aid the poor than those who rated themselves as deeply religious.19 In 1969 sociologists Travis Hirschi and Rodney Stark reported no difference in the self-reported likelihood to commit crimes between children who attended church regularly and those who did not.20 In 1975 Ronald Smith, Gregory Wheeler, and Edward Diener discovered that college-age students in religious schools were no less likely to cheat on a test than their atheist and agnostic counterparts in nonreligious schools.21 Finally, David Wulff’s comprehensive survey of correlational studies on the psychology of religion revealed that there is a consistent positive correlation between “religious affiliation, church attendance, doctrinal orthodoxy, rated importance of religion, and so on” with “ethnocentrism, authoritarianism, dogmatism, social distance, rigidity, intolerance of ambiguity, and specific forms of prejudice, especially against Jews and blacks.”22 The conclusion is clear: not only does religion not necessarily make one more moral, it can lead to greater intolerance, racism, sexism, and the erosion of other values cherished in a free and democratic society.
Since I am a nonbeliever, I might reasonably be accused of a biased selection of data to make my case. Consider, then, the results of the well-known religious pollster George Barna, a born-again Christian, discussed in his 1996 Index of Leading Spiritual Indicators. Based on interviews with nearly 4,000 adult Americans, Barna found that “Born-again Christians continue to have a higher likelihood of getting divorced than do non-Christians” and that “Atheists are less likely to get divorced than are born-again Christians.” This seems counterintuitive, yet Barna found that the current divorce rate for born-again Christians is 27 percent, while it is only 24 percent for non-Christians. In addition, the baby boomers—that generation often criticized for sexual indulgence and moral relativism—have a lower rate of divorce (34 percent) than the preceding generation (portrayed in popular culture as the idealized 1950s Ozzie and Harriet family), whose rate hovers at 37 percent. Five years later, in a 2001 survey, Barna found that “33 percent of all born again individuals who have been married have gone through a divorce, which is statistically identical to the 34 percent incidence among non-born again adults.”23 My point in this divorce data dump is to counter the claim—heard all too often in our culture—that one cannot be as good without God. That claim is simply scientifically and statistically false.
Even everyday intolerances are easily derived from such moral certainty. On March 25, 1998, for example, the Reverend Reggie White, better known for his bone-breaking hits as an all-star defensive end for the Super Bowl champion Green Bay Packers, presented the Wisconsin State Assembly with his theory of race and religion: “Homosexuality is a decision, it’s not a race. People from all different ethnic backgrounds live in this lifestyle. But people from all different ethnic backgrounds also are liars and cheaters and malicious and back-stabbing.” God, says White, granted each race special gifts. Blacks, for example, are good at worship and celebration: “If you go to a black church, you see people jumping up and down because they really get into it.” Whites are good at organization: “You guys do a good job of building businesses and things of that nature and you know how to tap into money.” Latinos “were gifted in family structure and you see a Hispanic person, and they can put 20, 30 people in one home.” Asians “can turn a television into a watch.” Native Americans are “gifted in spirituality.”24 Right. God created blacks to make merry, whites to make money, Asians to make televisions, Latinos to make babies, and Indians to make rain.
In a similar vein, pro-lifer Randall Terry, founder of Operation Rescue, succinctly summarized the absolute intolerance that is possible with absolute morality: “Let a wave of intolerance wash over you … . Yes, hate is good … . Our goal is a Christian nation … . We are called by God to conquer this country … . We don’t want pluralism.” Putting an exclamation point on Terry’s philosophy is the abortion clinic bomber John Brockhoeft: “I’m a very narrow-minded, intolerant, reactionary, bible-thumping fundamentalist … a zealot and fanatic … . The reason the United States was once a great nation, besides being blessed by God, is because she was founded on truth, justice, and narrowmindedness.” 25
There is a simple historical problem with this theory. According to the 2001 New Historical Atlas of Religion in America, it is a myth that America was once a Christian nation that has since lapsed into secular debauchery.26 Most revealing are the historical maps and charts that track the changing demographics of American religion. Conservative pundits who worry about the moral fiber of America and proclaim that we need to return to the good old days when America was a Christian nation should look closely at the graph in figure 32, which shows that church membership among the U.S. population over the past century and a half has increased from 25 percent to 65 percent. If America is going to hell in an immoral handbasket, it is happening when church membership is at an all-time high and a greater percentage of Americans (90 to 95 percent) than ever before proclaim belief in God.
Absolute morality leads logically to absolute intolerance. Once it is determined that one has the absolute and final answers to moral questions, why be tolerant of those who refuse to accept the Truth? Religiously based moral systems apply this principle in spades. From the medieval Crusades and the Spanish Inquisition to the Holocaust and Bosnia, history is rife with examples of intolerance. In the name of God, religious people have sanctioned slavery, anti-Semitism, racism, homophobia, torture, genocide, ethnic cleansing, and war. In the name of their religion, people have burned women accused of witchcraft. The events of September 11, 2001, are a potent example of religious extremism gone political.
Figure 32. The Myth of Early America as a Christian Nation
 
Contrary to what modern conservatives claim, America has never been a more religious nation than it is today. Over the past century and a half, church membership has climbed from 25 percent to 65 percent of the American population. (From Edwin S. Gaustad, Philip L. Barlow, and Richard Dishno, New Historical Atlas of Religion in America, 2001, figure 4.16)
 
Secular Absolutism and Intolerance
 
Despite this ripping indictment of religion, it would be intolerant of me not to reiterate that religion per se is not the problem; it is religious extremism and, by extension, extremism of any stripe. So it is only reasonable to note that this generalization is true of secular moral systems as well. Extreme atheist ideologies, most notably the various Marxist regimes throughout the twentieth century, have generated their share of purges and pogroms in the name of an ideological god. But also during this time, a number of secular ethical systems have been proposed by philosophers in a quest to move beyond religion, while avoiding the relativism endemic to so many nonreligious ethical theories, and the intolerance generated by Marxism.
In A Theory of Justice, for example, John Rawls argued that there are certain moral principles that are absolute and above cultural modification (and thus are inalienable): “In a just society the liberties of equal citizenship are taken as settled; the rights secured by justice are not subject to political bargaining or to the calculus of social interests.” Where those rights came from without reference to a transcendental power, and how they can be secured without political power, are challenges Rawls had to answer from his critics. Rawls is perhaps best known for his tenet that justice, or fairness, be defined by conceiving of an “original position” in which you would not know what your status in life would be (for example, which race, class, or religion you would be born into) before making a political decision. Thus, your choice is made from a “veil of ignorance” of your “original position.” When you do this you soon learn (and feel) just how unfair the world is and how many people are advantaged or disadvantaged in life to no credit or fault of their own but just by dint of birth and upbringing. To remedy this problem, Rawls says that “society must give more attention to those with fewer native assets and to those born into less favorable social positions. The idea is to redress the bias of contingencies in the direction of equality.”27
At first blush this does seem, well, fair and just. After all, it’s not my fault that my parents were not as well off as other parents who sent their children to exclusive private schools, while I floundered through mediocre public schools. Shouldn’t I be compensated somehow? What about poorer black kids attending inner city public schools that fall far below mediocrity? Shouldn’t redress be made for this bias? Unfortunately, when you carry this line of reasoning out through all of its consequences, it becomes absurd in the extreme. What qualifies as a native asset? Height, looks, and intelligence? Social psychologists have found that men in excess of six feet in height will earn more money than men under six feet tall, that better-looking people receive more attention from teachers and more breaks on the job, and that an IQ in excess of 145 puts people at the top of the game in both native ability and earning power. Should people receive compensation for handicaps of height, looks, and smarts? Of course not. It seems fairly obvious that this trend toward equality would lead to extensive and draconian governmental interventions. Instead of justice for all, we would end up with freedom for none.
A more reasonable alternative, it seems to me, is the libertarianism of Robert Nozick’s Anarchy, State, and Utopia. Ironically, Nozick makes an argument similar to Rawls’s concept, but he arrives at a radically different solution about the role of the state. “Individuals have rights, and there are things no person or group may do to them (without violating their rights). So strong and far-reaching are these rights that they raise the question of what, if anything, the state and its officials may do.” As little as possible, Nozick concludes: “A minimal state, limited to the narrow functions of protection against force, theft, fraud, enforcement of contracts, and so on, is justified; that any more extensive state will violate persons’ rights not to be forced to do certain things, and is unjustified; and that the minimal state is inspiring as well as right.”28 Inspiring and right. It doesn’t get any better than that. But here I may not be purely objective.
In fact, I must plead mea culpa for enthusiastically embracing what has to be one of the most paradoxical forms of secular absolute moral systems ever devised—Ayn Rand’s Objectivism. Throughout my youthful forays into diverse ethical systems, I clung to a core of philosophy laid down in her magnum opus, Atlas Shrugged, a novel many people devour for its ideals of personal responsibility, rugged individualism, and free-market economics. Objectivism is based on four central tenets: 1. Metaphysics: Objective Reality; 2. Epistemology: Reason; 3. Ethics: Self-interest; 4. Politics: Capitalism.29 Thinking for oneself is primary, and behaving morally leads to success and happiness. Rand’s moral hero is John Galt, the Atlas who shrugged when his vision of the world failed to take hold. Galt is the “Prometheus who changed his mind. After centuries of being torn by vultures in payment for having brought to men the fire of the gods, he broke his chains and he withdrew his fire—until the day when men withdraw their vultures.” When the vultures (read Big Government and Big Religion) finally withdrew their restrictions, Galt (read Rand) exhorted the heroes left standing: “The world you desired can be won, it exists, it is real, it is possible, it’s yours.”30
The problem with Objectivism is its contention that absolute knowledge and final truths are attainable. For Objectivists, once a principle has been discovered through reason to be True, there is no further cause for disputation. If you disagree with the principle, then too bad for you—the principle is True anyway. This is more like theology than it is philosophy. Whatever it is, it is not science. In Rand’s circle, such absolutism led to the same end that all absolute moral systems experience if they are carried out to their logical extreme: a bipolarization of people into true believers and heretics, with acceptance of the former and excommunication of the latter. Nathaniel Branden, Rand’s chief lieutenant, who began as a true believer and ended up an excommunicated heretic, explained the “implicit premises” to which “everyone in our circle subscribed,” including: “Ayn Rand is the greatest human being who has ever lived. Atlas Shrugged is the greatest human achievement in the history of the world. Ayn Rand, by virtue of her philosophical genius, is the supreme arbiter in any issue pertaining to what is rational, moral, or appropriate to man’s life on earth. Once one is acquainted with Ayn Rand and/or her work, the measure of one’s virtue is intrinsically tied to the position one takes regarding her and/or it. No one can be a good Objectivist who does not admire what Ayn Rand admires and condemn what Ayn Rand condemns.”31
Absolute morality generates absolute intolerance. And the problem is endemic to all absolute systems of thought, from religious to nonreligious, from libertarian to communist. One would think, for example, that Objectivists would embrace all libertarians. But no, like the Baptists and Anabaptists who warred over whether baptism should be implemented at birth or in adulthood (with the Anabaptists opting for the latter), some of Rand’s biggest battles were fought not with socialists, but with fellow libertarians. Barbara Branden recalled a dinner catastrophe that resulted from the first meeting between Rand, the libertarian economist Henry Hazlitt, and Ludwig von Mises, the greatest intellectual defender of free-market economics of the twentieth century. “The evening was a disaster. It was the first time Ayn had discussed moral philosophy in depth with either of the two men. ‘My impression,’ she was to say, ‘was that von Mises did not care to consider moral issues, and Henry was seriously committed to altruism … . We argued quite violently. At one point von Mises lost his patience and screamed at me.’”32 Economist and Nobel laureate Milton Friedman, one of the fountainheads of libertarianism, recalled an incident at the first meeting of the Mont Pelerin Society in 1947, at which was gathered a veritable who’s who of free-market economists (including Friedrich von Hayek, Fritz Machlup, George Stigler, Frank Knight, Henry Hazlitt, and Ludwig von Mises). “One afternoon, the discussion was on the distribution of income, taxes, progressive taxes, and so on. In the middle of that discussion von Mises got up and said, ‘You’re all a bunch of socialists,’ and stomped out of the room.”33
Intolerance is just as prevalent at the other end of the political and economic spectrum. Economist Murray Rothbard, who avers that the Libertarian party was founded in his living room, compared the intolerance of communists with that of Randian Libertarians; because followers were unable to toe the party line properly, at any given time both groups had more ex-members than members: “an ideological cult can adopt the same features as a more overtly religious cult, even when the ideology is explicitly atheistic and anti-religious.” Intolerance is enforced through ideological straitjackets: “Communists preserve their members from the dangerous practice of thinking on their own by keeping them in constant activity together with other Communists … of the major Communist defectors in the United States, almost all defected only after a period of enforced isolation.” The same was true in the Rand circle. “Every night one of the top Randians lectured to different members expounding various aspects of the ‘party line’: on basics, on psychology, fiction, sex, thinking, art, economics, or philosophy. Failure to attend these lectures was a matter of serious concern in the movement.”34 Loyal followers often found themselves outcast heretics for the minutest of infractions, such as listening to the “wrong” music or not properly denouncing an irrational idea. Moral absolutism leads to moral absurdities, turning acolytes into apostates.
Judge Not, That Ye Be Not Judged
 
One of the reasons that Christianity succeeded was its tolerance for diversity and openness to all comers. With the success of Christianity, the within-group intolerant morality of the Old Testament gave way to a more tolerant morality of the New Testament. Contrast Rand’s Old Testament–style morality with that of Jesus on the matter of moral judgment. Here is Rand’s position:

The precept: “Judge not, that ye be not judged” … is an abdication of moral responsibility: it is a moral blank check one gives to others in exchange for a moral blank check one expects for oneself. There is no escape from the fact that men have to make choices; so long as men have to make choices, there is no escape from moral values; so long as moral values are at stake, no moral neutrality is possible. To abstain from condemning a torturer, is to become an accessory to the torture and murder of his victims. The moral principle to adopt … is: “Judge, and be prepared to be judged.”35
 

Actually, what Jesus said in full (in Matt. 7:1–5) was:

Judge not, that ye be not judged. For with what judgment ye judge, ye shall be judged: and with what measure ye mete, it shall be measured to you again. And why beholdest thou the mote that is in thy brother’s eye, but considerest not the beam that is in thine own eye? Or how wilt thou say to thy brother, Let me pull out the mote out of thine eye; and, behold, a beam is in thine own eye? Thou hypocrite, first cast out the beam out of thine own eye; and then shalt thou see clearly to cast out the mote out of thy brother’s eye.
 

The principle Jesus extols is not moral neutrality or a moral blank check, but a warning against self-righteous severity and a rush to judgment, as explained in the Talmudic collection of commentary on Jewish custom and law called the Mishnah: “Do not judge your fellow until you are in his position” (Aboth 2:5); “When you judge any man weight the scales in his favor” (Aboth 1:6). Jesus wants us to be cautious, not to cross the line between legitimate and hypocritical moral judgment. The “mote” and “beam” metaphor is purposeful hyperbole. The man who lacks virtue feels morally smug in judging the virtue of his neighbor. The “hypocrite” is the critic who disguises his own failings by focusing attention on the failings of others. Perhaps Jesus is offering insight into human psychology where, for example, the adulterer is obsessed with judging other people’s sexual offenses, the homophobe secretly wonders about his own sexuality, or the liar suspects others of excessive falsehoods.36
Methodological Individualism and Moral Tolerance
 
Why should absolutism necessarily lead to intolerance? Is it just that people who prefer absolute systems of morality tend to be intolerant by temperament, or is there something built into the systems themselves that leads to intolerant attitudes and behavior? An answer can be found in the difference between the binary logic of absolute morality and the fuzzy logic of provisional morality. The basis of most ethical systems is Aristotelian binary logic: black or white, right or wrong, moral or immoral. Ayn Rand well represents this position: “There are two sides to every issue: one side is right and the other is wrong, but the middle is always evil. The man who is wrong still retains some respect for truth, if only by accepting the responsibility of choice. But the man in the middle is the knave who blanks out the truth in order to pretend that no choice or values exist.”37
Nonsense on stilts. Philosophy often only tells us the way the world should be. Science tells us how it really is, and science reveals a very fuzzy world with multiple shades of gray. Since the basis of provisional ethics is evolutionary theory, it seems fitting to turn to a Darwinian principle that leads to a more tolerant moral guide for our fuzzy world—methodological individualism. It assumes that only individual phenomena have a basis in reality—there are no pure Platonic essences, no fixed Aristotelian types. In the natural world, for example, there is no such thing as an immutable species fixed in the mind of some divine Geppetto. There are only individual organisms classified into types we call species. (These species, while temporarily stable in form and function, harbor the seeds of change or extinction. On a human time scale they appear relatively stable, but on a geological time scale species change.) Analogously, there are no fixed “species” of pure good or evil, only individual organisms of good and evil acts. Thus, if we want to understand morality and immorality, we must study individuals who express moral and immoral behaviors. That is, the target of our investigation should be the individual and individual human action in all its wondrous variety.
Consider the work of an entomologist and evolutionary biologist whose specialty was gall wasps and whose methodology was evolutionary individualism. After earning a doctorate from Harvard and landing a post at Indiana University, he spent the next twenty years of his career logging tens of thousands of miles and collecting some 300,000 specimens of gall wasps, publishing the results of his research in two large monographs in 1930 and 1936: “In the intensive and extensive measurement of tens of thousands of small insects … I have made some attempt to secure the specific data and the quantity of data on which scientific scholarship must be based. During the past two years, as a result of a convergence of circumstances, I have found myself confronted with material on variation in certain types of human behavior.” The convergence of circumstances was this: in 1938 his university wanted to offer a course on marriage, a euphemism at the time for sex education. The entomologist was asked to serve as chairman of the committee to regulate the course and to give three lectures on the biology of sex. Thorough scientist that he was, he went to the library and found virtually nothing on human sexuality. So he began to research the subject himself. A student had scrawled a graffito on the title page of Harvard’s only copy of the 1936 wasp monograph: “Why don’t you write about something more interesting, Al?” Al was Alfred Kinsey, the pioneer in the scientific study of human sexuality.
Kinsey undertook to collect his own data on a massive scale. One colleague described his need “to devour life, to gulp life, to look, and experiment and record”; Kinsey explained, “The technique we are using in this study is definitely the same as the technique in the gall wasp study.”38 As an entomologist, before he would hazard even a cautious conclusion about a particular group or species, Kinsey collected thousands of individual insects. He was no less thorough in his study of human sexuality. He began his research with personal interviews in his office, but as the process became too unwieldy, he developed a sizable staff and procured a separate research office and private grants to support a longitudinal study. By the time he published his results, Kinsey had collected data on more than 18,000 people, far outstripping all other studies done on any type of human behavior.
The reason for such exhaustiveness was that Kinsey realized the unique individuality of all living organisms, from wasps to humans. A taxonomist’s generalizations of species, genera, and even higher categories, Kinsey explained, “are too often descriptions of unique individuals and structures of particular individuals that are not quite like anything that any other investigator will ever find.” Not just entomologists but psychologists as well are equally guilty of such hasty generalizations: “A mouse in a maze, today, is taken as a sample of all individuals, of all species of mice under all sorts of conditions, yesterday, today, and tomorrow.” Worse still, these collective conclusions are even extrapolated to humans: “A half dozen dogs, pedigrees unknown and breeds unnamed, are reported upon as ‘dogs’—meaning all kinds of dogs—if, indeed, the conclusions are not explicitly or at least implicitly applied to you, to your cousins, and to all other kinds and descriptions of humans.”39
If wasps showed so much variation, how much more might humans? In his 1948 book Sexual Behavior in the Human Male, Kinsey concluded: “Given the range of variation … the clinician can determine the averageness or uniqueness of any particular person, and comprehend the extent to which generalizations developed for the whole group may be applied to any particular case”; such individualist thinking helps “in the understanding of particular individuals by showing their relation to the remainder of the group.”40 Methodological individualism showed that even for two such seemingly dichotomous categories—heterosexual or homosexual—not everyone could be easily classified. “The histories which have been available in the present study make it apparent that the heterosexuality or homosexuality of many individuals is not an all-or-none proposition.” One can be both simultaneously, or neither temporarily. One can start as heterosexual and become homosexual, or vice versa. And the percentage of time spent in either state varies considerably among individuals in the population. “For instance,” Kinsey observed, “there are some who engage in both heterosexual and homosexual activities in the same year, or in the same month or week, or even in the same day”; therefore, he concluded, “one is not warranted in recognizing merely two types of individuals, heterosexual and homosexual, and that the characterization of the homosexual as a third sex fails to describe any actuality.”41 Extrapolating this methodology to taxonomy in general, Kinsey deduced the uniqueness of individuals (in a powerful statement tucked away amid countless tables and graphs):

Males do not represent two discrete populations, heterosexual and homosexual. The world is not to be divided into sheep and goats. Not all things are black nor all things white. It is a fundamental of taxonomy that nature rarely deals with discrete categories. Only the human mind invents categories and tries to force facts into separate pigeon-holes. The living world is a continuum in each and every one of its aspects. The sooner we learn this concerning human sexual behavior the sooner we shall reach a sound understanding of the realities of sex.42
 

Approaching human behavior—including the holy grail of moral behavior—from the perspective of methodological individualism leads to moral tolerance. If variation and uniqueness are the norm, then what form of morality can possibly envelop all human actions? For human sexuality alone, Kinsey measured 250 different items for each of over 10,000 people: “Endless recombinations of these characters in different individuals swell the possibilities to something which is, for all essential purposes, infinity.”43 At the end of his 1948 volume on males, Kinsey concluded that there is virtually no evidence for “the existence of such a thing as innate perversity, even among those individuals whose sexual activities society has been least inclined to accept.” On the contrary, he demonstrated through vast statistical tables and in-depth analysis that the evidence leads us to conclude that “most human sexual activities would become comprehensible to most individuals, if they could know the background of each other individual’s behavior.”44
Variation, Kinsey concluded, is the basis of both biological and cultural evolution. You cannot categorize humans as either tall or short, blond or brunette, black or white. “Dichotomous variation is the exception and continuous variation is the rule, among men as well as among insects.” Likewise for behavior, we identify right and wrong, “without allowance for the endlessly varied types of behavior that are possible between the extreme right and the extreme wrong.” That being the case, the hope for cultural evolution, like that of biological evolution, depends on the recognition of variation and individualism: “These individual differences are the materials out of which nature achieves progress, evolution in the organic world. In the differences between men lie the hopes of a changing society.”45
This extension of his scientific analysis into the realm of morality, coupled with his exposé of what humans do behind closed doors, brought Kinsey much wrath and taught him more about WASPs than wasps. In a “Last Statement” dictated two weeks before his death, Kinsey noted with some bitterness the human foible of bias that seems to enter into the evaluation of human moral behavior. He bemoaned the fact that his strongest detractors were his fellow scientists, who had found difficulty “in facing facts of human sexual behavior with anything like objectivity.” One prominent scientist with a powerful political position in Washington, D.C., said unequivocally: “I do not like Kinsey, I do not like the Kinsey project, I do not like anything about the Kinsey study of sexual behavior.”46 Even Kinsey’s colleagues at Indiana University held reservations about the publication of Kinsey’s data. One department head recommended complete censorship, while another proposed delaying publication until the material was screened by the Department of Public Relations.
Such reactions are not surprising considering the political climate of 1950s McCarthy-era America. Protestant ethics forbidding sexual activity outside of heterosexual marriage were bumping up against the realities of human nature. As Kinsey noted, men’s and women’s sexual drives do not arrest themselves while awaiting the delayed marriages of modern culture. For the modern man, for example, “The society in which he lives condemns nearly all forms of sexual outlet except that legalized by marriage, but the economic system in which he finds himself imposes a delay in marriage of something like seven to twelve or more years. The teachers of morals blithely advise him to sublimate his physiologic reactions, though the record indicates that not more than two per cent of the unmarried males completely achieve that theoretical ideal.”47
For such statements, among others, Kinsey was labeled a communist and moral subversive. The Indianapolis Roman Catholic Archdiocese claimed that Kinsey’s books “pave the way for people to believe in communism and to act like Communists.” A Bloomington newspaper headline read: KINSEY’S SEX BOOKS LABELED “RED” TAINTED. The publication Christianity and Crisis denounced the Kinsey report as “animalistic.”
Most telling was the fact that his Sexual Behavior in the Human Female, published in 1953, was even more controversial than the volume on males. The negative response came mostly from men, who were either shocked or threatened by Kinsey’s figures on pre- and extramarital intercourse, and especially the greater-than-expected range of female sexual response. A New York rabbi claimed it was “a libel on all womankind.” New York Congressman Louis B. Heller demanded complete censorship in a public letter to the postmaster general of the United States, followed by this statement: “He is hurling the insult of the century against our mothers, wives, daughters, and sisters under the pretext of making a great contribution to scientific research.”48 A special House Committee, charged with investigating projects funded by tax-exempt nonprofit foundations, was called in to put pressure on both Indiana University and the Rockefeller Foundation, the latter of which funded Kinsey’s Institute for Sex Research. The investigation eventually led to the termination of his research funds in 1954. Kinsey died two years later, having never seen the remarkable change his project brought on science and society.49
What really got Kinsey in trouble was his acceptance of behavioral variation that excluded moral judgment. If moral species have no unchanging essence—no permanent and fixed typology by which to judge right and wrong behavior—then how can we derive an absolute ethical system? If morality is to be based on what people actually do, and what they do varies widely, then of what value is binary absolutism? Kinsey demonstrated that while “social forms, legal restrictions, and moral codes may be, as the social scientist would contend, the codification of human experience,” they are, like all statistical and population generalizations, “of little significance when applied to particular individuals.”50 The problem is that laws are constructed around unambiguous yeses and noes, but human behavior is a continuum, expansive in variation and individuality. In many ways, laws tell us more about the lawmakers than they do about the lawbreakers, as Kinsey concluded:

Prescriptions are merely public confessions of prescriptionists … . What is right for one individual may be wrong for the next; and what is sin and abomination to one may be a worthwhile part of the next individual’s life. The range of individual variation, in any particular case, is usually much greater than is generally understood. Some of the structural characters in my insects vary as much as twelve hundred percent. In some of the morphologic and physiologic characters which are basic to the human behavior which I am studying, the variation is a good twelve thousand percent. And yet social forms and moral codes are prescribed as though all individuals were identical; and we pass judgments, make awards, and heap penalties without regard to the diverse difficulties involved when such different people face uniform demands.51
 

Provisional ethics accommodates the range of individual variation found in human populations and suggests that we should pass judgments, make awards, and heap penalties only with regard to our great diversity. Such accommodational flexibility leads irrevocably toward greater tolerance, and more tolerance leads inexorably toward more peaceful ways of interacting with people, whether they are inside or outside of our group.
 
In modern state societies, methodological individualism and individual tolerance can only take us so far in our long-range goal of reaching the highest levels of the Bio-Cultural Evolutionary Pyramid. Preserving the planet’s ecosystem and biodiversity and maximizing within-group amity and minimizing between-group enmity also require social and political action. The goals are too far-reaching and the time frames involved are too long range for how we were programmed by nature to think. We evolved in a Paleolithic environment in which our concern for the environment and biodiversity was restricted to a few tens of miles and hundreds of species over the course of only a few decades. A global ecosystem and deep time was beyond anyone’s conception until the past half millennium, which is too short a time for evolution to create a global morality and deep-time ethic. Likewise, the number of people our ancestors encountered in their lifetime could be numbered in the hundreds, so there was no reason for evolution to have produced an ethnically diverse principle of tolerance. To save the planet and ourselves, we need a new morality that incorporates global biodiversity, human ethnicity, and deep time. Provisional ethics is one system of morality that attempts to do just that.
As a professional skeptic, I am often asked, incredulously, “Do you believe anything?” The question is absurd but understandable given the common misuse of the term as a synonym for “cynic” or “nonbeliever.” In fact, skeptics believe all sorts of things, not the least of which is the power of science to understand the natural world. If, by fiat, I had to reduce the theory of scientific provisionalism to four tenets, they would be as follows (in other words, this is what I believe):
1. Metaphysics: Provisional Reality.
2. Epistemology: Provisional Naturalism.
3. Ethics: Provisional Morality.
4. Politics: Provisional Libertarianism.
 
Provisional Reality and Provisional Naturalism. I believe that reality exists over and above human and social constructions of that reality. Science as a method and naturalism as a philosophy together form the best tool we have for understanding that reality. Because science is cumulative—that is, it builds on itself in a progressive fashion—we can strive to achieve an ever-greater understanding of reality. Our knowledge of nature remains provisional because we can never know if we have final Truth. Because science is a human activity and nature is complex and dynamic, fuzzy logic and fractional probabilities best describe both nature and the estimations of our approximation toward understanding that nature. There is no such thing as the paranormal and the supernatural; there is only the normal and the natural and mysteries we have yet to explain. What separates science from all other human activities is its belief in the provisional nature of all conclusions. In science, knowledge is fluid and certainty fleeting. That is the heart of its limitation. It is also its greatest strength.
 
Provisional Morality. I believe that morality is the natural outcome of evolutionary and historical forces operating on both individuals and groups. The moral feelings of doing the right thing (such as virtuousness) or doing the wrong thing (such as guilt) were generated by nature as part of human evolution. Although cultures differ on what they define as right and wrong, the moral feelings of doing the right or wrong thing are universal to all humans. Human universals are pervasive and powerful, and include at their core the fact that we are, by nature, moral and immoral, good and evil, altruistic and selfish, cooperative and competitive, peaceful and bellicose, virtuous and nonvirtuous. Individuals and groups vary on the expression of such universal traits, but everyone has them. Most people most of the time in most circumstances are good and do the right thing for themselves and for others. But some people some of the time in some circumstances are bad and do the wrong thing for themselves and for others. As a consequence, moral principles are provisionally true, where they apply to most people, in most cultures, in most circumstances, most of the time. At some point in the last 10,000 years (around the time of writing and the shift from bands and tribes to chiefdoms and states), religions began to codify moral precepts into moral codes.
I believe that although we live in a determined universe and are governed by the laws of nature and forces of culture and history, because we can never know in its entirety the near-infinite causal net that determines our actions, we are free moral agents. And although there is no absolute and ultimate divinity to dole out rewards and punishments in some unspecified future, since moral principles are provisionally true for most people most of the time in most circumstances, provisional justice can be derived from individual responsibility and culpability through social and cultural beliefs, customs, mores, and laws that produce feelings of virtuousness and guilt and administer rewards and punishments. Since morality evolved as a trait of the species transcendent of any individual member of the species, moral provisionalism stands as a solid pillar between the permissiveness of moral relativism and the intolerance of moral absolutism.
I believe that we can discern the difference between right and wrong through three principles. (1) The ask first principle: to find out whether an action is right or wrong, ask first. (2) The happiness principle: it is a higher moral principle to always seek happiness with someone else’s happiness in mind, and never seek happiness when it leads to someone else’s unhappiness. (3) The liberty principle: it is a higher moral principle to always seek liberty with someone else’s liberty in mind, and never seek liberty when it leads to someone else’s loss of liberty. To implement social change, the moderation principle states that when innocent people die, extremism in the defense of anything is no virtue, and moderation in the protection of everything is no vice.
 
Provisional Libertarianism. I believe that humans are primarily driven to seek greater happiness, but the definition of such is personal and cannot be dictated and should not be controlled by any group. The free market is the best system yet devised for allowing the most individuals in the most places most of the time to achieve the most happiness. Individuals should take personal responsibility for their actions and buck up and quit whining when the slings and arrows of life take their toll. Libertarianism is provisional, however, because it is conditional and restricted. Before writing this book, I was an unabashed, unadulterated libertarian in favor of what is called anarcho-capitalism, a stateless society governed entirely by free markets and private contracts. I have since decided that such a society probably would not work, because the balance between the moral and immoral nature of humanity is too close. There are too many defectors and cheaters, too much greed and avarice. I could be wrong, but until the social experiment is run—an extensive free-market society is established and successfully operated for a century—I remain skeptical of extreme libertarianism. It sounds good in theory, but I am a scientist, not a philosopher; I prefer an empirical experiment to a thought experiment. We are dealing here with people, not atoms, and social experiments are always more complex than physical or biological ones: in a society, the unintended consequences of a minor change can cascade through the system to create major effects.
Where Goods Do Not Cross Frontiers, Armies Will
 
As discussed in chapter 3, one of the prime triggers of between-group violence is competition for scarce resources. There are rarely enough resources to support all individuals in all groups. Even if, at some given time, there were, such a condition could only be temporary, because populations naturally tend to increase to the carrying capacity of the environment. Once that capacity is exceeded, the demand for those resources will exceed the supply. Such was the condition throughout most of the Paleolithic for most peoples in most areas. The formula is simple: population abundance plus resource scarcity equals war. Thus, one way to decrease between-group violence is to increase the supply of resources to meet the demands of those in need of them. The nineteenth-century French economist Frédéric Bastiat expressed this relationship thus: “Where goods do not cross frontiers, armies will.”52
The indigenous peoples of New Guinea, whom Jared Diamond described as living in an almost constant state of between-group violence, in many areas have found peace. How did this happen? In the 1960s, Western colonial governments initially imposed peace on them, then ensured the peace by providing goods and supplies that they needed as well as the technologies to enable them to continue producing more resources. In less than one generation these same New Guineans were operating computers, flying planes, and running their own small businesses. In subsequent ethnographic studies, anthropologists discovered that, in many ways, these New Guineans were much happier living under colonial rule because the endemic wars were taking such a devastating physical and psychological toll.53 Where resources crossed New Guinea frontiers, New Guinea armies did not.
A similar case study can be found in the Yanomamö, the so-called fierce people. There is good reason for the moniker because, as we saw in our extensive discussion of them in chapter 3, warfare has long been a part of Yanomamö life. However, as missionaries in the area have discovered, the Yanomamö do not actually like fighting. When the missionaries (and, subsequently, the Venezuelan government to which their protected territories belong) provided food and the tools for the production and procurement of food, Yanomamö wars were significantly reduced. As Napoleon Chagnon discovered, however, even without outside intervention the Yanomamö are sophisticated traders as well as warriors. The reason is that trade creates alliances. If, as it is said, “the enemy of my enemy is my friend,” one of the primary means of protecting one’s group is to form alliances with other groups. Trade between groups is a powerful social adhesive (as is intervillage feasting, which they also do). One village cannot go to another village and announce that they are worried about being conquered by a third, more powerful village, since this would reveal weakness. Instead, “they conceal and subsume the true motive for the alliance in the vehicles of trading and feasting, developing these institutions over months and even years. In this manner they retain an apparent modicum of sovereignty and pride, while simultaneously attaining the ultimate objectives: intervillage solidarity and military interdependence.”
Chagnon found that the people he studied purposefully designed a division of labor within villages in order to generate trade between villages. “Each village has one or more special products that it provides to its allies. These include items such as dogs, hallucinogenic drugs (both cultivated and collected), arrow points, arrow shafts, bows, cotton yarn, cotton and vine hammocks, baskets of several varieties, clay pots, and, in the case of contacted villages, steel tools, fishhooks, fishline, and aluminum pots.” Although, in principle, each Yanomamö group could produce its own goods for survival, in fact, they don’t; they set up a division of labor and system of trade. They do this, says Chagnon, not because they are nascent capitalists, but because they want to form political alliances with other groups, and trade is an effective means of so doing. “Without these frequent contacts with neighbors, alliances would be much slower in formation and would be even more unstable once formed. A prerequisite to stable alliance is repetitive visiting and feasting, and the trading mechanism serves to bring about these visits.”54 Where goods cross Yanomamö frontiers, Yanomamö armies do not.
My point is this: just as I argued that morality evolved long before religion, I am claiming that trade evolved long before the state. There is now archaeological evidence, for example, that over the past 200,000 years stone tools and other artifacts such as seashells, flint, mammoth ivory, and beads were the objects of trade among our hominid ancestors, because they are often found hundreds of miles from where they were manufactured.55 Shepard Krech, in his debunking of the “ecological Indian” myth, shows that the reason Europeans were so readily able to trade with Native Americans (beads for pelts, for example) was that the Indians were already well accustomed to trading among themselves.56 The psychology of trade probably has as much to do with forming alliances between individuals and groups as it does with increasing the supply of resources, but the end result is the same: the cooperation and reciprocal altruism that go into making trade successful accentuate amity and attenuate enmity, leading (in the language of provisional ethics) to greater happiness and liberty for more people in more places more of the time.
The Biology of Cooperation and Trade
 
There is now scientific research to support the thesis that trade is good for both individuals and groups. Recall the Prisoner’s Dilemma experiments discussed in chapter 2, in which it was found that a cooperative trusting strategy was shunned by players in a one-trial game, but embraced when they played multiple rounds, particularly when players could interact with each other to establish trust. The best strategy in iterated contests was tit for tat with no initial defection. In nearly all cases the most selfish thing to do—that is, the way to gain the greatest number of points (or money) in the long run—was to begin by trusting and cooperating, and then do whatever your partner does. The most successful tactic in an extensive Prisoner’s Dilemma contest was a computer program called “Firm but Fair,” which cooperates with cooperators, cooperates after a mutual defection, quits playing with constant defectors, and defects with partners who always cooperate (called suckers).57 In a related experiment, nine subjects were each given five dollars. If five or more of the nine cooperated by donating their five dollars to a general pot, all nine would receive ten dollars. Although it pays to be a cooperator (you get ten instead of five dollars), it pays even more to be a defector (fifteen instead of five dollars), as long as at least five other people cooperate. The results were mixed, with many groups of nine subjects failing to achieve the critical mass of five cooperators, because there was no trust. Then the experimenters added a step: members of some groups were given the opportunity to discuss their strategy options before playing. Those groups that interacted before playing averaged eight cooperators, and 100 percent of these groups earned cooperative bonuses.
By sharp contrast, those groups that did not interact before playing earned bonuses only 60 percent of the time.58 Finally, in research on social dilemmas, psychologist Robyn Dawes found that groups given the opportunity to communicate face-to-face were more likely to cooperate than those who were not. “It is not just the successful group that prevails,” Dawes concluded, “but the individuals who have a propensity to form such groups.”59
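The iterated-game logic described above can be sketched in a few lines of code. This is a minimal illustration, not the software used in the actual contests, and the payoff values are the standard ones assumed here for the sake of the sketch, since the chapter does not specify numbers:

```python
# Iterated Prisoner's Dilemma with the "tit for tat" strategy:
# cooperate on the first move, then mirror the partner's last move.
# Payoffs (an assumption; the chapter gives no figures):
# mutual cooperation 3 each, mutual defection 1 each,
# lone defector 5, exploited cooperator ("sucker") 0.
PAYOFF = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
          ("C", "D"): (0, 5), ("D", "C"): (5, 0)}

def tit_for_tat(my_history, their_history):
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Return the total scores of two strategies over repeated rounds."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pts_a, pts_b = PAYOFF[(a, b)]
        score_a += pts_a
        score_b += pts_b
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual defection
print(play(tit_for_tat, always_defect))    # (99, 104): exploited only once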
These results remind me of President Ronald Reagan’s cold-war dictum: trust, but verify. Trust is built over time and through interactions, and trade is an effective tool for establishing trust. (One of the first things to go when trust breaks down between two nations is trade. If a country is especially untrustworthy, the international community may even impose economic sanctions that prohibit any trade with it. The end result is often war.) The psychological impulse to form relationships and alliances is the deeper cause that lies beneath the moral sentiment of trust, and trade is an effective medium that allows people to create trusting relationships with and form attachments to other trustworthy people. And, recall, it is not enough to fake being a cooperator, because over time and with experience deceivers are usually flushed out. You actually have to believe you are a cooperator, and there is no surer way to believe you are a cooperator than to be one.
Our brains even evolved a mechanism to reinforce this process—cooperation leads to stimulation of the pleasure centers in the brain. Scientists at Emory University had thirty-six subjects play Prisoner’s Dilemma while undergoing a functional magnetic resonance imaging (fMRI) brain scan. They found that the areas of the brains of cooperators that lit up were the same areas activated in response to such stimuli as desserts, money, cocaine, attractive faces, and other basic pleasures. Specifically, there were two broad areas dense in neurons that responded, both rich in dopamine (a neurochemical related to addictive behaviors): the anteroventral striatum in the middle of the brain (the “pleasure center,” for which rats will endlessly press a bar to have it stimulated, even going without food) and the orbitofrontal cortex just above the eyes, related to impulse control and the processing of rewards. Tellingly, the cooperative subjects reported increased feelings of trust toward and camaraderie with their game partners.60 In addition to dopamine, neuroscientists Steven Quartz and Terrence Sejnowski have documented the connection between oxytocin—a brain chemical produced during eating, breast-feeding, and sexual orgasms, believed to play a vital role in human bonding—and pro-social behaviors, such as cooperation and exchange.61
There is now, in fact, a banquet of data that has spawned a new field of research on the cognitive neuroscience of human social behavior, demonstrating that humans evolved powerful neurological mechanisms to reinforce cooperation, accentuate pro-social behavior, and bond non-related people through the process of social exchange.62 Jorge Moll and his colleagues, for example, monitoring fMRI brain scans, found that moral emotions activate the amygdala, or the emotion module in the brain, as well as the orbital and medial prefrontal cortex, a higher level of cognitive processing in the brain, indicating that moral behaviors are as much related to moral emotions as they are to moral reasons (figure 33).63 In a similar technique utilizing fMRI scans on subjects participating in two-person “trust and reciprocity” games, Kevin McCabe and his colleagues found that areas of the prefrontal cortex—known to mediate impulse control and the delay of immediate gratification—are activated in the brains of cooperators (but not defectors), suggesting that cooperation requires “attention to mutual gains with the inhibition of immediate reward gratification to allow cooperative decisions.”64 The importance of the prefrontal cortex in humans and the other great apes was explored by Katerina Semendeferi and her colleagues, who found that area 10 of the frontal lobe in particular is linked to such higher cognitive functions as the undertaking of initiatives and the planning of future actions, and that this area, while larger in apes than in monkeys, is in humans “larger relative to the rest of the brain than it is in the apes” and has “more space available for connections with other higher-order association areas.” She concludes that “the neural substrates supporting cognitive functions associated with this part of the cortex enlarged and became specialized during hominid evolution.”65
Figure 33. Moral Modules in the Brain
 
Jorge Moll and his colleagues employed functional magnetic resonance imaging (fMRI) to produce brain scans, and discovered that moral emotions activate, as seen in this figure, the prefrontal and temporal lobes (higher-level cognitive process in the brain) as well as the amygdala (the emotion module of the brain), indicating that moral behaviors are driven by both emotional and rational parts of the brain. (Rendered by Pat Linse, from Jorge Moll et al., “The Neural Correlates of Moral Sensitivity,” in The Journal of Neuroscience, 2002, p. 2733)
 
I claim that the reason for this cortical expansion is that humans evolved to become the preeminent social and moral primate. The brain imaging research of Uta and Chris Frith of University College London also supports this hypothesis, showing that in order to be a moral agent, one must be both self-aware and aware that others are self-aware, functions that are located in two different areas of the brain. Self-awareness, at least in part, appears to be located in the medial prefrontal cortex, whereas representing others’ actions and intentions appears to be centered in the temporal cortex. “We speculate that the precursors of mentalizing ability derive from a brain system that evolved for representing agents and actions, and the relationships between them.”66 It is those brain relationships that form the foundation of social relationships.
How does trust translate to trade? At the Center for Neuroeconomics Studies at Claremont Graduate University, Paul Zak has demonstrated the relationship between oxytocin, trust, and economic prosperity. He argues that economists have shown how trust is among the most powerful factors affecting economic growth, and that since trust is directly related to neurological chemicals such as oxytocin, it is vital for national prosperity that a country maximize social interactions among its members, as well as with members of other countries. Free trade is one of the most effective means of socializing, as are education, increased civil liberties, freedom of the press, freedom of association (most notably by increasing telephones and roads), and even a cleaner environment (people in countries with polluted environments show higher levels of estrogen antagonists, thereby lowering their levels of oxytocin and thus their feelings of trust). Impoverished countries are poor, in part, because trust in the legal structures to protect business and personal investments is so low. “Differences in trust cause differences in living standards,” Zak concludes. He has even computed that “a 15 percent increase in the proportion of people in a country who think others are trustworthy raises income per person by 1 percent per year for every year thereafter.” For example, increasing levels of trust in the United States from its present 36 percent to 51 percent would raise the average income for every man, woman, and child in the country by $400 per year, or $30,000 over a lifetime.67 It pays to trust (with verification, of course).
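Zak's rule of thumb can be checked with a few lines of arithmetic. The $40,000 average income and seventy-five-year span used below are assumptions introduced here to reproduce the stated figures; the chapter itself gives only the results:

```python
def annual_gain(income, trust_now, trust_new):
    """Extra income per person per year from a rise in generalized trust.

    Zak's rule: a 15-percentage-point rise in the share of people who
    think others are trustworthy adds 1 percent of income per year,
    so each point of trust is worth income / 1500 annually.
    """
    return income * (trust_new - trust_now) / 1500

# The chapter's example: US trust rising from 36 to 51 percent,
# assuming (our assumption) a $40,000 average income and 75-year span.
per_year = annual_gain(40_000, 36, 51)
print(per_year)       # 400.0 dollars per person per year
print(per_year * 75)  # 30000.0 dollars over a lifetime
```

The 15-point rise and the $400 and $30,000 figures are the chapter's; only the income and lifespan plugged in are guesses that make the numbers come out as stated.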
Although extrapolating directly from neurochemistry to national economies is surely oversimplifying matters, what all this research tells us is that on one level we cooperate for the same reason we copulate—because it feels good. On a deeper evolutionary level, the reason cooperating feels good is because it is good for us, individually and as a species. Thomas Jefferson realized this in 1814: “These good acts give pleasure, but how it happens that they give us pleasure? Because nature hath implanted in our breasts a love of others, a sense of duty to them, a moral instinct, in short, which prompts us irresistibly to feel and to succor their distresses.”68
How do trust and trade reduce war and violence? In every case study of societies that made the transition from war to peace, there is a direct causal relationship between population size, ecological carrying capacity, and the availability and exchange of resources. As archaeologist Steven LeBlanc explains, “There is no change in the ability to shift to peacefulness as social complexity evolves. Rather, the shift occurs when the ecological relationships suddenly change, regardless of the type of social organization affected.”69 The primary engine driving the shift in these ecological relationships is trade. When populations grow beyond the carrying capacity of their environments, they are forced into competition, which leads to war, which leads to alliances, which leads to trade, which leads to peace. In other words, the solution to war—that is, to move a society from a warlike existence to a peacelike existence—is not to be found in a particular type of government or religion or ideology or worldview; it is in a particular type of social process called trade. The evolutionary origin of trade may have been political alliances, but one of the unintended consequences is that trade produces a division of labor that generates more goods for more people more of the time.
 
In the end, how shall we treat our fellow humans? Here we face ourselves in the penetrating mirror of humanity’s 100,000-year journey, where the heroic in the human spirit is allowed to rise from the ashes of our primitive ancestry, imploring us to rise above the dark side of our nature. Religion has certainly inspired greatness out of ordinariness, and such heroics have been well documented throughout the ages, especially by the particular religions to which the heroes professed devotion. But religion has a built-in system of intolerance that logically follows from adherence to a fixed set of dogmas. I think we can do better.
I believe in the heroic nature of humanity and in the ability of human intelligence, reason, and creativity to triumph over problems and obstacles. There has been great progress in human history, as measured by greater amounts of liberty and prosperity for more people in more places more of the time. In some cases, religion, particularly Judaism and Christianity, has fostered freedom and free markets; where this influence was combined with the Enlightenment introduction of secular political and economic systems and the separation of church and state, more freedoms for more peoples were enjoyed over the past two centuries than accumulated over the previous two millennia. Patriarchal dominance, for example, is being systematically displaced by gender equality in societies where women are allowed to flourish. How did this happen? First, women decided that they were not going to put up with this arrangement any longer; second, women were given the freedom to act on this decision; and third, society realized that women also seek greater happiness and greater liberty, and that they can achieve them more readily without being in bondage to males. Increasing the happiness and liberty of half of society raises the overall happiness of the group. Libertarianism is the happiness principle and the liberty principle writ large.
Religious freedoms must always be protected, but the price for this security is the separation of religion from government. Historically, where church and state were wed, individual liberty suffered, including and especially religious liberty. Paradoxically—because many Christian conservatives today call for greater influence of their religion on politics—Christianity is at least partially responsible for this division between the sacred and the profane. Jesus himself admonished his followers (Matt. 22:21): “Render therefore unto Caesar the things which are Caesar’s; and unto God the things that are God’s.” Historically, this resulted in two separate magisteria—spiritual and temporal, ecclesiastical and lay, religious and secular—each with its own laws, courts, and hierarchical authority.70 (Historian Bernard Lewis, in fact, identifies the secularization of Western culture as one of the strongest reasons for its prosperity and progress in science, technology, and culture, and identifies the lack of separation of church and state as a chief cause of “what went wrong” in Muslim countries, driving the Arab world from its medieval apex of human achievement to its status today as a cultural backwater.) Thus, in order to generate greater liberty for more people we must maintain the separation of church and state and foster the greater secularization of state society, by which I mean that public morality should be legislated only by secular bodies (while private morality may be as religious as the individual prefers). The members of secular bodies may themselves be religious, but the body itself must remain religiously neutral. Because of the natural inclination to favor one’s own belief for preferential treatment even in secular systems, however, I personally prefer (although it should never be legislated) that public policy be governed by people with no religious preference at all. These are secularists.
Who are secularists? Secularists are nonbelievers, nontheists, atheists, agnostics, skeptics, free thinkers, humanists, and secular humanists. Unfortunately, many of these words carry pejorative baggage. Words matter and language counts. “Feminist” is a fine word that describes someone who believes in the need to secure rights and opportunities for women equivalent to those provided for men. Unfortunately, thanks to conservatives like Rush Limbaugh, it has also come to be associated with sandal-wearing, tree-hugging, postmodern, deconstructionist, left-leaning liberals best scorned as “Femi-Nazis.” Likewise, “atheist” is a descriptive term that simply means “without theism,” and describes someone who does not believe in God(s). Unfortunately, thanks to religious fundamentalists, it has also come to be associated with sandal-wearing, tree-hugging, postmodern, deconstructionist, left-leaning liberals who are immoral, pinko communists hell-bent on corrupting the morals of America’s youth. A 1999 Gallup poll reflected this attitude. When asked, “If your party nominated a generally well-qualified person for president who happened to be an X would you vote for that person?” (with X representing Catholic, Jew, Baptist, Mormon, black, homosexual, woman, and atheist), while six of the eight received more than 90 percent approval, only 59 percent would vote for a homosexual and less than half, 49 percent, would vote for an atheist.
For the most part I avoid labels altogether and simply prefer to say what it is that I believe or do not believe. However, at some point labels are unavoidable (most likely because the brain is wired to pigeonhole objects into linguistic categories), and thus one is forced to use identity language. Since the name of the magazine I cofounded is Skeptic and my monthly column in Scientific American is entitled “Skeptic,” I usually just call myself a skeptic, from the Greek skeptikos, or “thoughtful.” Etymologically, in fact, its Latin derivative is scepticus, for “inquiring” or “reflective.” Further variations in the ancient Greek include “watchman” and “mark to aim at.” Hence, skepticism is thoughtful and reflective inquiry. Skeptics are the watchmen of reasoning errors, aiming to expose bad ideas. Perhaps the closest fit for skeptic is “a seeker after truth; an inquirer who has not yet arrived at definite convictions.” Skepticism is not “seek and ye shall find”—a classic case of what is called the confirmation bias in cognitive psychology—but “seek and keep an open mind.” What does it mean to have an open mind? It is to find the essential balance between orthodoxy and heresy, between a total commitment to the status quo and the blind pursuit of new ideas, between being open-minded enough to accept radical new ideas and so open-minded that your brains fall out. The virtue of skepticism is about finding that balance.71
The nineteenth-century orator Robert G. Ingersoll, a secular moral hero if ever there was one, found additional freedoms in a naturalistic worldview, including freedom from

the fear of eternal pain … from the winged monsters of the night … from devils, ghosts, and gods … no chains for my limbs—no lashes for my back—no fires for my flesh—no master’s frown or threat—no following another’s steps—no need to bow, or cringe, or crawl … I was free. I stood erect and fearlessly, joyously, faced all worlds … . And then my heart was filled with gratitude, with thankfulness, and went out in love to all the heroes, the thinkers who gave their lives for the liberty of hand and brain—for the freedom of labor and thought—to those who fell in the fierce fields of war, to those who died in dungeons bound with chains—to those who proudly mounted scaffold’s stairs—to those whose bones were crushed, whose flesh was scarred and torn—to those by fire consumed—to all the wise, the good, the brave of every land, whose thoughts and deeds have given freedom to the sons of men. And then I vowed to grasp the torch that they had held, and hold it high, that light might conquer darkness still.72
 

The bright torch of science illuminates the darkness of humanity to reveal a human nature that is both moral and immoral, a product of our evolutionary heritage and our cultural history. We can construct a provisional ethical system that is neither dogmatically absolute nor irrationally relative, a more universal and tolerant morality that enhances the probability of the survival and well-being of all members of the species, and perhaps eventually of all species and even the biosphere, the only home we have ever known or will know until science leads us off the planet, out of the solar system, and to the stars. Ad astra!