6
 
HOW WE ARE MORAL: ABSOLUTE, RELATIVE, AND PROVISIONAL ETHICS
 
 
 
In science, “fact” can only mean “confirmed to such a degree that it would be perverse to withhold provisional assent.”
 
—Stephen Jay Gould, Hen’s Teeth and Horse’s Toes, 1983
 
 
 
 
 
One day in 1991 an attractive middle-aged woman was passing through the locker room of a health club on her way to meet a friend for lunch at the snack bar. She was early and there was no one around. Glancing about for her friend, her eye was drawn to a shiny object on the floor. Looking closer, she discovered that it was a large diamond ring. She vaguely recalled seeing it on someone at the club before, but could not recall to whom it belonged. She picked up the ring and put it in her pocket. When her friend arrived she immediately showed it to her and asked her to accompany her to the front desk so that she would have a witness to verify that she did not steal the ring, but had simply found it. “No one would ever have known that I had the ring,” she later recounted. “I could have hocked it for thousands of dollars, but I didn’t.” Why? Reflecting on the incident, she explained, “One just doesn’t do that. My conscience would not allow me to take it. I consider myself an honest person who tries to do the right thing, and in that instance I knew what the right thing to do would be.” Why is that the right thing to do? “Because if it were my ring I would hope that someone would do the same for me.” Golden rings and golden rules.
That woman was my late mother, and this story is a classic example of the Golden Rule in practice. She treated the owner of that ring as she hoped someone would treat her if she lost her own. I recount the story here not because I think that my mom was some extraordinarily moral person, but because, in fact, as a moral agent I think she was quite ordinary and that most people most of the time in most circumstances would have done the same thing. She told me this story not as a moral homily to impart some extraordinary advice, but to show the ordinary nature of moral reasoning in response to a question I posed to her about the origins of morality: why are you moral? My mother, who had considerable influence on my thinking and moral upbringing, was not a religious person and had no belief in God. It was not something she thought a lot about—she simply did not believe in God and saw no reason to foist a pretense of belief. She did not raise me to be religious or irreligious. The subject almost never came up. Yet she was a decent, moral person, as is my father, and I think my siblings and I are an ordinary moral family. How was she able to be such an ordinarily moral person without believing in an extraordinarily moral being? Without absolute morality, aren’t we reduced to accepting an “anything goes” relative morality? No. There is a middle way between absolute morality and relative morality that I call provisional morality.
 
As defined earlier, morality involves right and wrong thoughts and behaviors in context of the rules of a social group, and ethics is the scientific study of and theories about moral thoughts and behaviors in context of the rules of a social group. Thus, we may define absolute morality as an inflexible set of rules for right and wrong thought and behavior derived from a social group’s canon of ethics. The claimed source of that canon may be God, the Bible, the Koran, the state, nature, an ideology, or a philosophy.
An obvious and immediate problem with all systems of absolute morality—known formally in ethical theory as absolutism—is that they set themselves up to be the final arbiters of truth, creating two types of people: good and evil, right and wrong, true believers and heretics. This was most succinctly expressed by that sage philosopher Maxwell Smart—agent Eighty-Six on television’s Get Smart comedy series—who explained to his morally incredulous fellow agent Ninety-Nine: “Don’t be silly, Ninety-Nine. We have to shoot, kill, and destroy. We represent everything that’s wholesome and good in the world.” Sadly, such black-and-white thinking is not restricted to the little screen. Richard Nixon used such rhetoric for political gain when he admitted, “It may seem melodramatic to say that the U.S. and Russia represent Good and Evil, Light and Darkness, God and the Devil. But if we think of it that way it helps to clarify our perspective of the world struggle.” Ronald Reagan was even more histrionic in his proclamation that the Soviet Union was the “evil empire.” Most recently, George W. Bush effectively labeled Osama bin Laden and his Al Qaeda operatives as “pure evil.”
Most absolute moral systems are religiously based, but not all. Immanuel Kant’s Categorical Imperative, for example, is a secular rational attempt at an absolute morality. A Categorical Imperative is an unconditional command without exceptions, which Kant contrasted with (and rejected as a basis for morality) the Hypothetical Imperative, a conditional command that admits exceptions. For Kant, if you want to judge the rightness or wrongness of an action, “Act only on that maxim through which you can at the same time will that it should become a universal law.”1 Would we ever want to universalize lying, stealing, or adultery? Of course not. That would put an end to contracts, property, and marriage.
But people do occasionally lie, steal, and commit adultery, and often there are perfectly rational reasons to do so. In the Categorical Imperative we witness the either-or fallacy in logic, a false dilemma in which options between the extremes are excluded by forcing the issue into a binary choice. This problem can be averted with fuzzy logic, where shades of fuzzy probabilities allow us to assign fractional values to moral answers that are more or less likely to be applicable. The world is usually more complex than the two choices typically presented by antagonists who wish to simplify issues for rhetorical effect. A type specimen of a statement of absolute morality can be found in the words of Christian author Francis Schaeffer:

If there is no absolute moral standard, then one cannot say in a final sense that anything is right or wrong. By absolute we mean that which always applies, that which provides a final or ultimate standard. There must be an absolute if there are to be morals, and there must be an absolute if there are to be real values. If there is no absolute beyond man’s ideas, then there is no final appeal to judge between individuals and groups whose moral judgments conflict. We are merely left with conflicting opinions.2
 

Cartoonist Wiley Miller illustrated the concept cleverly in a Non Sequitur cartoon (figure 21) in which Moses is admonishing modern moral relativists that God called them “commandments,” not “recommendations,” because they are absolute and final, no exceptions.
The ultimate flaw in all forms of absolute morality is this: since virtually everyone claims to know what constitutes right versus wrong thought and action, and since effectively all moral systems differ from all others to a greater or lesser degree, there cannot be a universally accepted absolute morality. In reality, and ironically, it is absolute moralities that leave us with nothing but conflicting opinions and no moral compass. Nowhere is this problem more evident than in religion.
Most ethical systems are absolute, most absolute systems are derived from religious sources, and by far the most popular source of moral precepts and ethical conjectures is religion (making Divine Command Theory one of the most common of all ethical systems). The 2001 World Christian Encyclopedia, for example, reports that of the earth’s 6.1 billion humans fully 5.1 billion of them, or 84 percent, declare themselves followers of some form of organized religion. Christians dominate at just a shade under 2 billion adherents (with Catholics accounting for half of those), with Muslims at 1.1 billion, Hindus at 811 million, Buddhists at 359 million, and ethnoreligionists (animists and others in Asia and Africa primarily) accounting for most of the remaining 265 million. Such overall numbers, however, tell us little. There are, in fact, 10,000 distinct religions of ten general varieties, each one of which can be further subdivided and classified. For example, Christians may be found among an astonishing 33,820 different denominations. The variety of non-Christian religions is also stunning, with worldwide distribution outstripping Christian religions despite the tireless efforts of evangelists to convert as many souls to Christ as possible. One table in the encyclopedia, for example, tracks the number of Christians (69,000) and non-Christians (147,000) by which the world will increase over the next twenty-four hours. Another table reveals the global convert/defector ratio, adjusted for births and deaths, indicating that the sphere of evangelism continues to expand into non-Christian belief space.3 Given this almost unfathomable level of religious differences, it is obvious that any claim to sole possession of absolute moral truth is untenable. Clearly they cannot all be right.
Figure 21. Absolute v. Relative Morality
 
The Ten Commandments are a form of absolute morality. (© 2002 Wiley Miller. Distributed by Universal Press Syndicate. Reprinted with permission)
 
 
Relative morality is taken to mean a flexible set of rules for right and wrong thoughts and behaviors derived from how the situation is defined by the social group. The problem with relative morality—known formally in ethical theory as relativism—is that one can justify almost any behavior, implying that all moral actions—from self-sacrifice to human sacrifice—are equal. On a theoretical and scientific level, this is simply not true. On a practical level no one believes this. (Ethical theorists distinguish between descriptive ethical relativism, which passes no judgment on whether any of the numerous relative ethical theories are valid or not; and normative ethical relativism, which claims that each ethical theory, while relative in value compared to others, is absolutely valid for the culture in which it is practiced.)
When I was a senior in high school in 1971 I became a born-again Christian. I took my commitment seriously enough to enroll at Pepperdine University, a highly regarded Christian institution affiliated with the Church of Christ and nestled in the foothills of Malibu, California, with grand vistas of the Pacific Ocean (okay, so the attraction was not purely academic). There I studied theology and psychology, attended chapel at least twice a week (admittedly attendance was a requirement), wrestled with the relationship between science and religion, and struggled with the normal carnal impulses of youth when they bump up against moral restrictions on their expression. (One student in our dorm, desperately seeking a rationale for what he knew he could not control, actually prayed for God to provide him with an acceptable sexual outlet—read partner—because, he reasoned, he could witness for the Lord better if he were not so distracted by such basic urges.) After graduating from Pepperdine and studying evolutionary biology and experimental psychology in a graduate program at California State University, Fullerton, I turned to science and philosophy for my moral answers, and began to try different ethical systems (not unlike Woody Allen’s character in his film Hannah and Her Sisters, who examines different religions, for example, coming home one day with a crucifix, a loaf of white bread, and a jar of mayonnaise to try out Catholicism!).
Existentialism initially appealed to me because of its emphasis on moral freedom and individual responsibility. “Existence precedes essence” is a core tenet, meaning that our essence—our being, our very self—is constantly being created by the experiences we choose. We are the authors of our life stories, the architects of our souls. Very few people are innocent victims; rather, we make choices in life that ultimately place us in circumstances in which it might appear we were blameless sufferers but, in fact, most situations are created by the choices we make. Although this puts a rather sizable burden of responsibility for the outcome of your life squarely on your own shoulders, it also means that you can change; you are not stuck where you do not want to be. “Man is a wholly natural creature whose welfare comes solely from his own unaided efforts,” wrote one existentialist. To me, existentialism was one of the more optimistic philosophies I examined, but I discovered that I was in a rather small minority in that regard. Most existentialists believe that life is “absurd” because we exist in a meaningless, irrational universe—any attempt to find ultimate meaning can only end in absurdity. Most existentialists seemed to agree with one of the philosophy’s founders, Albert Camus, when he lamented, “There is but one serious philosophical problem. That is suicide. Why stay alive in a meaningless universe?”4 Suicide may be painless (as the M*A*S*H theme song croons) but it brings on one major change I found unacceptable.
After existentialism I tried utilitarianism, based on Jeremy Bentham’s principle of the “greatest happiness for the greatest number.” Specifically, I found his quantitative utilitarianism attractive because of its scientistic approach in attempting a type of hedonic calculus where one can quantify ethical decisions. By “hedonism” Bentham did not mean a simple pleasure principle where, in modern parlance, “if it feels good, do it.” In fact, Bentham specified “seven circumstances” by which “the value of a pleasure or a pain is considered”:
1. Purity—“The chance it has of not being followed by sensations of the opposite kind.”
2. Intensity—The strength, force, or power of the pleasure.
3. Propinquity—The proximity in time or place of the pleasure.
4. Certainty—The sureness of the pleasure.
5. Fecundity—“The chance it has of being followed by sensations of the same kind.”
6. Extent—“The number of persons to whom it extends; or (in other words) who are affected by it.”
7. Duration—The length of time the pleasure will last.5
 
As a pedagogical heuristic, I once presented the table in figure 22 to my introductory psychology course to draw students into seeing the problem of assigning actual numbers to these seven values (the boxes were blank), in making a rather simple choice between spending money on a good meal, a good date (with the possibility but not certainty of sex), or a good book. The values in the boxes are my own (I was single at the time).
According to Bentham, once the figures are assigned, “Sum up all the values of all the pleasures on the one side, and those of all the pains on the other. The balance, if it be on the side of pleasure, will give the good tendency of the act upon the whole, with respect to the interests of that individual person; if on the side of pain, the bad tendency of it upon the whole.”6 In my example the book wins out over the meal or date. Of course, this is just my opinion, the application of the hedonic calculus to one person. To apply the principle to society as a whole, Bentham says, we must:

Take an account of the number of persons whose interests appear to be concerned; and repeat the above process with respect to each. Sum up the numbers expressive of the degrees of good tendency, which the act has, with respect to each individual, in regard to whom the tendency of it is good upon the whole: do this again with respect to each individual, in regard to whom the tendency of it is bad upon the whole. Take the balance; which, if on the side of pleasure, will give the general good tendency of the act, with respect to the total number or community of individuals concerned; if on the side of pain, the general evil tendency, with respect to the same community.7
Figure 22. Jeremy Bentham’s Hedonic Calculus
 
 

Setting aside the obvious impossibility of performing this calculation on a daily basis and still managing to leave the house, it is clear that you can cook the numbers to make the result come out almost any way you like. Doing this on a societal level is simply impossible.
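Bentham’s procedure is easy enough to mechanize, which is itself revealing. The sketch below (the three options and every numeric score are hypothetical, invented only to show the procedure) sums ratings across the seven circumstances for each choice:

```python
# A minimal sketch of Bentham's hedonic calculus. The three options and
# every numeric score are hypothetical, chosen only to show the procedure.

CIRCUMSTANCES = ("purity", "intensity", "propinquity", "certainty",
                 "fecundity", "extent", "duration")

def hedonic_sum(scores):
    """Sum the ratings across Bentham's seven circumstances."""
    return sum(scores[c] for c in CIRCUMSTANCES)

# One person's 0-10 ratings for three ways to spend an evening.
options = {
    "meal": dict(purity=6, intensity=7, propinquity=9, certainty=9,
                 fecundity=3, extent=2, duration=2),
    "date": dict(purity=4, intensity=9, propinquity=7, certainty=4,
                 fecundity=6, extent=2, duration=5),
    "book": dict(purity=9, intensity=5, propinquity=8, certainty=8,
                 fecundity=8, extent=1, duration=9),
}

totals = {name: hedonic_sum(scores) for name, scores in options.items()}
best = max(totals, key=totals.get)
print(totals, "->", best)
```

Because every entry is a subjective rating, the ranking is only as objective as the numbers fed in; rescoring a circumstance or two is all it takes to make any option win, which is precisely the number-cooking problem.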
Utilitarianism, particularly in the form of calculating the greatest good for the greatest number as if one were computing an orbital trajectory of a planetary body, is very much grounded in pre-twentieth-century psychological, social, and economic theory that presumed humans (at least Western industrial peoples) to be rational beings who make choice calculations along the lines of a double-entry bookkeeper. (Utilitarians even designated units of pleasure as “hedons” and units of displeasure as “dolors”—in the manner of physicists measuring photons and electrons—and debated among themselves whether we should try to maximize utility or, as satisficing utilitarians held, should only try to produce just enough utility to satisfy everyone minimally.) Moral choices, then, were simply a matter of looking at the bottom line.
Thanks to extensive interdisciplinary research by psychologists, sociologists, and economists over the past several decades, however, we now know that humans are emotional and intuitive decision makers subject to the considerable whims of subjective feelings, social trends, mass movements, and base urges. We are rational at times, but we are also irrational, the latter probably a lot more than we care to consider. As we shall see at the end of this chapter, moral reason must be balanced with moral intuition.
These are just a few of the ethical systems that appealed to me, but there are many others for the student of ethics and morality to sample. For example: consequentialism, as the name implies, holds that the consequences of an action should determine whether it is right or wrong. Contractarianism posits that contractual arrangements between moral agents establish what is right and wrong, where violations of agreements are immoral. Deontology claims that one’s duty (deon is Greek for duty) is the criterion by which actions should be judged as moral or immoral. Emotivism holds that moral judgments of right or wrong behavior are a function of the positive or negative feelings evoked by the behavior. Ethical egoism (or psychological egoism) states that people behave in their own self-interest and thus even apparently altruistic behavior is really motivated by selfish ends. Moral isolationism, a form of moral relativism, argues that we ought to be morally concerned only with those in our immediate group, “isolating” those outside our group as not relevant to our moral judgments. Natural law theory states that there is a natural order to the human condition, the natural order is good, and therefore the rightness or wrongness of an action should be judged by whether or not it violates the natural order of things. Nihilism denies that there is any truth to be discovered, particularly in the moral realm. Particularity contrasts with universality and impartiality, holding that we have moral preferences to particular people morally relevant to us. Pluralism (an approach very much embraced in this book) holds that there are multiple perspectives that should be considered in evaluating a moral issue, and that no one ethical theory can explain all moral and immoral behavior. Subjectivism is an extreme form of relativism, holding that moral values are relative to the individual’s sole subjective state alone and cannot even be evaluated in the larger social or cultural context. 
Encyclopedias of philosophy and morality abound in an alphabet soup of ethical theories and moral labels, and library shelves are sagging with volumes on ethical theories purporting to present the reader with valid and viable criteria of right and wrong human action. What are we to make of all these theories?
 
If we are going to try to apply the methods of science to thinking about moral issues and ethical systems, here is the problem as I see it: as soon as one makes a moral decision—an action that is deemed right or wrong—it implies that there is a standard of right versus wrong that can be applied in other situations, to other people, in other cultures (in a manner that one might apply the laws of planetary geology to planets other than our own). But if that were the case, then why is that same standard not obvious and in effect in all cultures (as, in the above analogy, geological forces operate in the same manner on all planets)? Instead, observation reveals many such systems, most of which claim to have found the royal road to Truth and all of which differ in degrees significant enough that they cannot be reconciled (as if gravity operated on some planets but not others). If there is no absolute moral standard and instead only relative values, can we realistically speak of right and wrong? An action may be wise or unwise, prudent or imprudent, profitable or unprofitable within a given system. But is that the same as right or wrong?
So both absolutism and relativism violate clear and obvious observations: there is a wide diversity of ethical theories about right and wrong moral behavior; because of this there are disputes about what constitutes right and wrong, both between ethical theories and moral systems and within them; we behave both morally and immorally; humans desire a set of moral guidelines to help us determine right and wrong; and there are moral principles on which most ethical theories and moral systems agree. Any viable ethical theory of morality must account for these observations. Most do not.
In thinking about this problem I asked myself this question: how do we know something is true or right? In science, claims are not true or false, right or wrong in any absolute sense. Instead, we accumulate evidence and assign a probability of truth to a claim. A claim is probably true or probably false, possibly right or possibly wrong. Yet probabilities can be so high or so low that we can act as if they are, in fact, true or false. Stephen Jay Gould put it well: “In science, ‘fact’ can only mean ‘confirmed to such a degree that it would be perverse to withhold provisional assent.’”8 That is, scientific facts are conclusions confirmed to such an extent it would be reasonable to offer our provisional agreement. Heliocentrism—that the earth goes around the sun and not vice versa—is as factual as it gets in science. That evolution happened is not far behind heliocentrism in its factual certainty. Other theories in science, particularly within the social sciences (where the subjects are so much more complex), are far less certain and so we assign them much lower probabilities of certitude. In a fuzzy logic manner, we might say heliocentrism and evolution are .9 on a factual scale, while political, economic, and psychological theories of human social and individual behavior are much lower on the fuzzy scale, perhaps in the range of .2 to .5. Here the certainties are much fuzzier, and so fuzzy logic is critical to our understanding of how the world works, particularly in assigning fuzzy fractions to the degrees of certainty we hold about those claims. Here we find ourselves in a very familiar area of science known as probabilities and statistics. 
In the social sciences, for example, we say that we reject the null hypothesis at the .05 level of confidence (where there is less than a 5 percent probability that an effect at least this large would arise by chance alone), or at the .01 level (less than 1 percent), or even at the .0001 level (where the odds of the effect arising by chance are less than one in ten thousand). The point is this: there is a sliding scale from high certainty to high doubt about the factual validity of a particular claim, which is why science traffics in probabilities and statistics in order to express the confidence or lack of confidence a claim or theory engenders.
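The arithmetic behind such confidence levels can be sketched with a simple binomial test. The code below is a minimal illustration, not a substitute for a proper statistics package; it asks how often a fair coin would produce a result at least this far from fifty-fifty by chance alone:

```python
# A minimal two-sided binomial test using only the standard library: the
# probability that a fair coin yields a result at least this extreme.
from math import comb

def binomial_p_value(heads, flips):
    """Two-sided probability of a result at least this far from flips/2."""
    k = max(heads, flips - heads)              # the more extreme tail count
    tail = sum(comb(flips, i) for i in range(k, flips + 1)) / 2 ** flips
    return min(1.0, 2 * tail)                  # double for the two-sided test

print(binomial_p_value(60, 100))   # roughly .057: not significant at .05
print(binomial_p_value(80, 100))   # vanishingly small: significant at .0001
```

Sixty heads in a hundred flips falls just short of the conventional .05 threshold, while eighty heads is so improbable under chance that withholding provisional assent to “this coin is biased” would, in Gould’s phrase, be perverse.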
The same way of thinking has application to morals and ethics. Moral choices in a provisional ethical system might be considered analogous to scientific facts, in being provisionally right or provisionally wrong, provisionally moral or provisionally immoral:

In provisional ethics, moral or immoral means confirmed to such an extent it would be reasonable to offer provisional assent.
 

Provisional is an appropriate word here, meaning “conditional, pending confirmation or validation.” In provisional ethics it would be reasonable for us to offer our conditional agreement that an action is moral or immoral if the evidence for and the justification of the action is overwhelming. It remains provisional because, as in science, the evidence and justification might change. And, obviously, some moral principles have less evidence and justification for them than others, and therefore they are more provisional and more personal.
Provisional ethics provides a reasonable middle ground between absolute and relative moral systems. Provisional moral principles are applicable for most people in most circumstances most of the time, yet flexible enough to account for the wide diversity of human behavior, culture, and circumstances. What I am getting at is that there are moral principles by which we can construct an ethical theory. These principles are not absolute (no exceptions), nor are they relative (anything goes). They are provisional—true for most people in most circumstances most of the time. And they are objective, in the sense that morality is independent of the individual. Moral sentiments evolved as part of our species; moral principles, therefore, can be seen as transcendent of the individual, making them morally objective. Whenever possible, moral questions should be subjected to scientific and rational scrutiny, much as nature’s questions are subjected to scientific and rational scrutiny. But can morality become a science?
 
One of the strongest objections to be made against provisional ethics is that if it is not a form of absolute morality, then it must be a form of relative morality, and thus just another way to intellectualize one’s ego-centered actions. But this is looking at the world through bivalent glasses, committing the either-or fallacy by excluding the middle ground.
Here again, fuzzy logic has direct applications to moral thinking. In the discussion of evil, we saw how fuzzy fractions assigned to evil deeds assisted us in assessing the relative merits or demerits of human actions. Fuzzy logic also helps us see our way through a number of moral conundrums. When does life begin? Binary logic insists on a black-and-white Aristotelian A or not-A answer. Most pro-lifers, for example, believe that life begins at conception—before conception, not-life; after conception, life. A or not-A. With fuzzy morality we can assign a probability to life—before conception 0, the moment of conception .1, one month after conception .2, and so on until birth, when the fetus becomes a 1.0 life-form. A and not-A. You don’t have to choose between pro-life and pro-choice, themselves bivalent categories still stuck in an Aristotelian world (more on this in the next chapter).
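The fuzzy assignment can be made concrete. In the sketch below the linear ramp, its endpoints, and the forty-week term are illustrative assumptions chosen only to mirror the fractions in the text; they are a demonstration of fuzzy membership, not a biological or ethical claim:

```python
# A sketch of fuzzy membership for "life" across gestation. The linear
# ramp and its endpoints are illustrative assumptions, not a claim.
def fuzzy_life(weeks_since_conception, term_weeks=40):
    """Degree of 'life' from 0.0 before conception to 1.0 at birth."""
    if weeks_since_conception < 0:
        return 0.0                         # before conception: not-life
    degree = 0.1 + 0.9 * weeks_since_conception / term_weeks
    return min(1.0, degree)                # .1 at conception, 1.0 at birth

for weeks in (-1, 0, 4, 20, 40):
    print(weeks, round(fuzzy_life(weeks), 2))
```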
Death may also be assigned in degrees. “If life has a fuzzy boundary, so does death,” fuzzy logician Bart Kosko explains. “The medical definition of death changes a little each year. More information, more precision, more fuzz.” But isn’t someone either dead or alive? A or not-A? No. “Fuzzy logic may help us in our fight against death. If you can kill a brain a cell at a time, you can bring it back to life a cell at a time just as you can fix a smashed car a part at a time.”9 A and not-A. Birth is fuzzy and provisional and so is death. So is murder. The law is already fuzzy in this regard. There are first-degree murder, second-degree murder, justifiable homicide, self-defense homicide, genocide, infanticide, suicide, crimes of passion, crimes against humanity. A and not-A. Complexities and subtleties abound. Nuances rule. Our legal systems have adjusted to this reality; so, too, must our ethical systems. Fuzzy birth. Fuzzy death. Fuzzy murder. Fuzzy ethics.
 
Long before he penned the book that justified laissez-faire capitalism, Adam Smith became the first moral psychologist when he observed: “Nature, when she formed man for society, endowed him with an original desire to please, and an original aversion to offend his brethren. She taught him to feel pleasure in their favorable, and pain in their unfavorable regard.” Yet, by the time he published The Wealth of Nations in 1776, Smith realized that human motives are not so pure: “It is not from the benevolence of the butcher, the brewer or the baker that we expect our dinner, but from their regard to their own interest. We address ourselves not to their humanity, but to their self-love, and never talk to them of our necessities, but of their advantage.”10
Is our regard for others or for ourselves? Are we empathetic or egotistic? We are both. But how we can strike a healthy balance between serving self and serving others is not nearly as rationally calculable as we once thought. Intuition plays a major role in human decision making—including and especially moral decision making—and new research is revealing both the powers and the perils of intuition. Consider the following scenario: imagine yourself a contestant on the classic television game show Let’s Make a Deal. You must choose one of three doors. Behind one of the doors is a brand-new automobile. Behind the other two doors are goats. You choose door number one. Host Monty Hall, who knows what is behind all three doors, shows you what’s behind door number two, a goat, then inquires: would you like to keep the door you chose or switch? It’s fifty-fifty, so it doesn’t matter, right? Most people think so. But their intuitive feeling about this problem is wrong. Here’s why: you had a one in three chance to start, but now that Monty has shown you one of the losing doors, you have a two-thirds chance of winning by switching doors. Think of it this way: there are three possibilities for the three doors: (1) good bad bad; (2) bad good bad; (3) bad bad good. In possibility one you lose by switching, but in possibilities two and three you can win by switching. Here is another way to reason around our intuition: there are ten doors; you choose door number one and Monty shows you doors number two through nine, all goats. Now would you switch? Of course you would, because your chances of winning increase from one in ten to nine in ten. This is a counterintuitive problem that drives people batty, including mathematicians and even statisticians.11
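For skeptics of the verbal argument, the game is easy to simulate. In the three-door version Monty’s forced reveal of a goat means that switching wins exactly when the first pick was wrong, which the sketch below exploits:

```python
# A simulation of the Monty Hall game. Monty always opens a goat door
# you did not pick, so switching wins exactly when your first pick was
# wrong, and staying wins exactly when it was right.
import random

def play(switch, trials=100_000, seed=1):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)       # door hiding the car
        pick = rng.randrange(3)      # contestant's first choice
        if switch:
            wins += pick != car      # switching wins on a wrong first pick
        else:
            wins += pick == car      # staying wins on a right first pick
    return wins / trials

print(play(switch=False))   # about one-third
print(play(switch=True))    # about two-thirds
```

Running it shows the stayer winning about a third of the time and the switcher about two-thirds, exactly as the three-possibilities argument predicts.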
Intuition is tricky. Gamblers’ intuitions, for example, are notoriously flawed (to the profitable delight of casino operators). You are playing the roulette wheel and hit five reds in a row. Should you stay with red because you are on a “hot streak” or should you switch because black is “due”? It doesn’t matter because the roulette wheel has no memory, but try telling that to the happy gambler whose pile of chips grows before his eyes. So-called hot streaks in sports are equally misleading. Intuitively, don’t we just know that when the Los Angeles Lakers’ Kobe Bryant is hot he can’t miss? It certainly seems like it, particularly the night he broke the record for the most three-point baskets in a single game, but the findings of a fascinating 1985 study of “hot hands” in basketball by Thomas Gilovich, Robert Vallone, and Amos Tversky—who analyzed every basket shot by the Philadelphia 76ers for an entire season—do not bear out this conclusion. They discovered that the probability of a player hitting a second shot did not increase following an initial successful basket beyond what one would expect by chance and the average shooting percentage of the player. What they found is so counterintuitive that it is jarring to the sensibilities: the number of streaks, or successful baskets in sequence, did not exceed the predictions of a statistical coin-flip model. That is, if you conduct a coin-flipping experiment and record heads or tails, you will encounter streaks. On average and in the long run, you will flip five heads in a row once in every thirty-two sequences of five tosses (and a run of five of either kind once in every sixteen). Players may feel “hot” when they have games that fall into the high range of chance expectations, but science shows that this intuition is an illusion.12
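The coin-flip arithmetic is equally easy to verify by simulation: in sequences of five fair tosses, all heads occurs with probability 1/32, and a run of five identical outcomes (all heads or all tails) with probability 1/16:

```python
# A quick check of the streak arithmetic: in sequences of five fair coin
# tosses, all-heads occurs with probability 1/32, and a run of five
# identical outcomes (all heads or all tails) with probability 1/16.
import random

def streak_rates(trials=200_000, seed=1):
    rng = random.Random(seed)
    all_heads = all_same = 0
    for _ in range(trials):
        tosses = [rng.randrange(2) for _ in range(5)]
        if sum(tosses) == 5:
            all_heads += 1               # five heads in a row
        if len(set(tosses)) == 1:
            all_same += 1                # five heads or five tails
    return all_heads / trials, all_same / trials

print(streak_rates())   # close to (1/32, 1/16) = (0.03125, 0.0625)
```

Streaks, in other words, are exactly what chance predicts; it is our intuition that insists on reading a “hot hand” into them.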
These are just a couple of the countless ways our intuitions about the world lead us astray: we rewrite our past to fit present beliefs and moods, we badly misinterpret the source and meaning of our emotions, we are subject to the hindsight bias where after the fact we surmise that we knew it all along, we succumb to the self-serving bias where we think we are far more important than we really are, we see illusory correlations that do not exist (superstitions), and we fall for the confirmation bias where we look for and find evidence for what we already believe. Our intuitions also lead us to fear the wrong things. Let us return to Adam Smith. According to Smith’s theory, our moral sentiments lead us to observe what happens to others, empathize with their pain, then turn to our own self-interest in dreaded anticipation of the same disaster befalling us. The week I wrote this section the ABC television news program 20/20 ran a story about kids who dropped heavy stones off freeway overpasses that smashed through car windows, killing or maiming the passengers within. The producers appealed to the fearful side of our nature by introducing viewers to the hapless victims with mangled faces and shattered lives, evoking our empathy; they then engaged our self-love with the rhetorical question: “could this happen to you?”
Could it? Not likely. In fact, it is so unlikely you would be better off worrying about lightning striking you. Then why do we worry about such matters? Because our moral intuitions have been hijacked by what University of Southern California sociologist Barry Glassner calls a “culture of fear.”13 Who created this culture? Ultimately we did, by buying into the rumors and hearsay that pass for factual data fed to us by the media and other sources. But those factoids and reports had to come from somewhere. Follow the money and those who traffic in fear mongering. Politicians, for example, can win elections by grossly exaggerating (and sometimes outright lying about) crime and drug-use percentages under their opponent’s watch. Advocacy groups profit (literally) from fear campaigns that heighten an expectation of doom (to be thwarted just in time, if the donor’s contribution is beefy enough). Think of conservatives decrying the demise of the family or liberals proclaiming the destruction of the environment.
Religions play on our fears by hyping up the doom and gloom of this world to make the next world seem all the more appealing. On May 17, 1999, an evangelical Christian friend of mine insisted that we are in the “end times” because the Bible prophesied an increase in immorality and malfeasance. Since everyone knows crime is an epidemic problem in America that worsens by the year (“just look at the recent Columbine shooting,” he enthused), the end is nigh. I remember the date because it was the same day the FBI released its findings that we are in the midst of the longest decline in crime rates since the bureau began collecting data in 1930. In other words, we are confronted with the paradox of being more fearful than we have ever been at the same time that things have never been so safe. “In the late 1990s the number of drug users had decreased by half compared to a decade earlier,” Glassner explains, yet the “majority of adults rank drug abuse as the greatest danger to America’s youth.” Ditto the economy, where “the unemployment rate was below 5 percent for the first time in a quarter century. Yet pundits warned of imminent economic disaster.”14 In this century alone modern medicine and social hygiene practices and technologies have nearly doubled our life span and improved our health immeasurably, but Glassner points out that if you tally up the reported disease statistics, out of 280 million Americans, 543 million of us are seriously ill!
How can this be? Benjamin Disraeli had an answer: lies, damn lies, and statistics. We may be good storytellers, but we are lousy statisticians. Glassner shows, for example, that women in their forties believe they have a 1 in 10 chance of dying from breast cancer, but their real lifetime odds are more like 1 in 250. He notes that some “feminists helped popularize the frightful but erroneous statistic that two of three teen mothers had been seduced and abandoned by adult men” when in reality it “is more like one in ten, but some feminists continued to cultivate the scare well after the bogus stat had been definitively debunked.”15 The bigger problem here is the law of large numbers, where million-to-one odds happen 280 times a day in America, and of those the most sensational dozen make the evening news, especially if captured on video. Stay tuned—film at eleven!
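The law-of-large-numbers arithmetic here is worth making explicit. Assuming, as a rough illustration of my own, one opportunity per person per day for a “million-to-one” event, the expected daily count in a population of 280 million works out as follows:

```python
population = 280_000_000        # U.S. population figure cited in the text
odds_denominator = 1_000_000    # a "million-to-one" event

# Expected number of such events per day, assuming (my simplification)
# one opportunity per person per day:
expected_per_day = population / odds_denominator
print(expected_per_day)  # 280.0
```

A few hundred genuine one-in-a-million stories per day is ample raw material for the evening news, even in a country where almost nothing dangerous happens to almost everyone.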
Herein lies the problem for our moral sensibilities. We are fed numbers daily that we cannot comprehend about threats to our security we cannot tolerate. Better safe than sorry, right? Not necessarily. Pathological fear takes a dramatic toll on our psyches and wallets. “We waste tens of billions of dollars and person-hours every year,” Glassner notes, “on largely mythical hazards like road rage, on prison cells occupied by people who pose little or no danger to others, on programs designed to protect young people from dangers that few of them ever face, on compensation for victims of metaphorical illnesses, and on technology to make airline travel—which is already safer than other means of transportation—safer still.”16
Of all the institutions feeding our fears, the media takes center stage for sensationalism (“if it bleeds, it leads”). An Emory University study revealed that the leading cause of death in men—heart disease—received the same amount of coverage as the eleventh-ranked vector: homicide. Not surprisingly, drug use, the lowest-ranking risk factor associated with serious illness and death, received as much attention as the second-ranked risk factor, poor diet and lack of exercise. From 1990 to 1998, America’s murder rate decreased by 20 percent while the number of murder stories on network newscasts increased by an incredible 600 percent (and this doesn’t count O. J. Simpson stories). The fact is, there is no evidence that secondhand smoke causes cancer or that cell-phone use generates brain tumors; likewise, Gulf War Syndrome appears to be a chimera, television does not cause violence, Satanic cults are phantasmagorical, most recovered memories of childhood abuse are nothing more than false memories planted by bad therapists, silicone breast implants cause nothing more than metastatic litigation, the drug war was lost decades ago, and the drug emperor has no clothes—he’s butt naked and it’s high time someone said it. We would be well-advised to remember the law of large numbers, and to keep in mind that we have selective memory of the most egregious events and that most of our fears are illusory—the vaporous product of a culture of fear of which we are both creators and victims.17
These notable shortcomings to our intuitive instincts aside, however, there is something quite empowering about intuition that cannot be dismissed, especially in the moral realm. In fact, intuition is so ingrained into the human psyche that it cannot be separated from intellect (witness the aforementioned intuitive afflictions). So integrated are intuition and intellect that I have coalesced them into what I call the Captain Kirk Principle, from an episode of Star Trek entitled “The Enemy Within.”18 Captain James T. Kirk has just beamed up from planet Alpha 177, where magnetic anomalies have caused the transporter to malfunction, splitting Kirk into two beings. One is cool, calculating, and rational. The other is wild, impulsive, and irrational. Rational Kirk must make a command decision to save the landing party now stranded on the planet because of the malfunctioning transporter. (Why they could not just send down a shuttle craft to rescue them is never explained, and thus this episode has contributed to the long list of Star Trek bloopers.) Because his intellect and intuition have been split, Kirk is paralyzed with indecision, bemoaning to Dr. McCoy: “I can’t survive without him [irrational Kirk]. I don’t want to take him back. He’s like an animal—a thoughtless, brutal animal. And yet it’s me.” This psychological battle between intellect and intuition was played out in nearly every episode of Star Trek in the characters of the ultrarational Mr. Spock and hyperemotional Dr. McCoy, with Captain Kirk as the near-perfect embodiment of both. Thus, I call this balance the Captain Kirk Principle: intellect is driven by intuition, intuition is directed by intellect.19
For most scientists, intuition is the bête noire of a rational life, the enemy within to beam away faster than a Vulcan in heat. Yet the Captain Kirk Principle is now finding support from a rich new field of scientific inquiry brilliantly summarized by psychologist David G. Myers, who demonstrates through countless well-documented experiments that intuition—“our capacity for direct knowledge, for immediate insight without observation or reason”20—is as much a part of our thinking as analytic logic. Physical intuition, of course, is well known and accepted as part of an athlete’s repertoire of talents—Michael Jordan and Tiger Woods come to mind. But there are social, psychological, and moral intuitions as well that operate at a level so fast and subtle that they cannot be considered a product of rational thought. Harvard’s Nalini Ambady and Robert Rosenthal, for example, discovered that the evaluations of teachers by students who saw a mere thirty-second video of the teacher were remarkably similar to those of students who had taken the entire course. Even three two-second video clips of the teacher yielded a striking .72 correlation with the course student evaluations!21 How can this be? We have an intuitive sense about people that allows us to make reasonably accurate snap judgments about them.
Research consistently shows how even unattended stimuli can subtly affect us. In one experiment, for example, researchers flashed emotionally positive scenes (a kitten or a romantic couple) or negative scenes (a werewolf or a dead body) for forty-seven milliseconds before subjects viewed slides of people. Although subjects reported seeing only a flash of light for the initial emotionally charged scenes, they gave more positive ratings to people whose photos had been associated with the positive scenes.22 In other words, something registered somewhere in the brain. Something similar appears to be at work in the case of a patient who was unable to recognize her own hand and, when asked to use her thumb and forefinger to estimate the size of an object, was unable to do it. Yet when she reached for the object her thumb and forefinger were correctly placed.23 Another study revealed that stroke patients who have lost a portion of their visual cortex are consciously blind in part of their field of vision. When shown a series of sticks, they report seeing nothing, yet unerringly identify whether the unseen sticks are vertical or horizontal.24 That’s weird.
Intuition especially plays a powerful role in “knowing” other people. The best predictor of how well a psychotherapist will work out for you is your initial reaction in the first five minutes of the first session.25 The reason is that for psychotherapy (talk therapy), research shows that no one modality or style is better than any other. It does not matter what type or how many degrees the therapist has, or what particular school the therapist attended, or whom the therapist trained under. What matters most is how well suited the therapist is for you, and only you can make that judgment, one best made through intuition, not intellect. Similarly, people with dating experience know within minutes whether or not they will want to see a first date again. That assessment is not made through tallying up the pluses and minuses of the date in some intellectual process equivalent to a mental ledger; we don’t usually ask for a date’s resume or curriculum vitae before agreeing to a second date. But we do perform something like this in a quick intuitive assessment based on subtle cues—body language, facial expressions, voice tone and volume, wit and humor, politeness, and so forth—all of which can be assessed relatively quickly.
To the extent that lie detection through the observation of body language and facial expressions is accurate (overall not very), women are better at it than men because they are more intuitively sensitive to subtle cues. In experiments in which subjects observe someone either truth telling or lying, although no one is consistently correct in identifying the liar, women are correct significantly more often than men.26 Women are also superior in discerning which of two people in a photo was the other’s supervisor, whether a male-female couple is a genuine romantic relationship or a posed phony one, and when shown a two-second silent video clip of an upset woman’s face, women guess more accurately than men whether she is criticizing someone or discussing her divorce.27 People who are highly skilled in identifying “micromomentary” facial expressions are also more accurate in judging lying. In testing such professionals as psychiatrists, polygraphists, court judges, police officers, and secret service agents on their ability to detect lies, only secret service agents trained to look for subtle cues scored above chance. Most of us are not good at lie detection because we rely too heavily on what people say rather than on what they do. Subjects with damage to the brain that renders them less attentive to speech are more accurate at detecting lies, such as aphasic stroke victims who were able to identify liars 73 percent of the time when focusing on facial expressions (normal subjects did no better than chance).
In support of an evolutionary explanation of a moral sense, research shows that we may be hardwired for such intuitive thinking: a patient with damage to parts of his frontal lobe and amygdala (the fear center) is prevented from understanding social relations or detecting cheating, particularly in social contracts, even though cognitively he is otherwise normal.28 Cheating detection in social relations, such as in the role of gossip in small groups, is a vital part of our evolutionary heritage.
Although most secular theories of morality are rationalist theories, recent research on moral intuition reveals that the Captain Kirk Principle is at work in the moral realm as well. University of Virginia social psychologist Jonathan Haidt, for example, has demonstrated that the mind makes quick and automatic moral judgments similar to how we make aesthetic judgments. We do not reason our way to a moral decision; we jump right in, then later rationalize the quick decision. Our moral intuitions are more emotional than rational. Haidt’s “social intuitionist” theory says that moral feelings come first, then the rationalization of those moral feelings. “Could human morality really be run by the moral emotions, while moral reasoning struts about pretending to be in control?” Haidt asks. He answers his own question thusly: “Moral judgment involves quick gut feelings, or affectively laden intuitions, which then trigger moral reasoning.”29 In other words, research supports our usual distinction between morality (thoughts and behaviors about right and wrong) and ethics (theories about moral thoughts and behaviors). In this context, ethics is an expression of emotional moral intuitions aimed at convincing others of the rational validity of our intuitions.
Consider the following moral dilemma and how our moral intuitions respond: you witness a runaway trolley headed for five people. If you throw a switch to divert the trolley, it will save the five but send it down another track to kill one person. Would you do it? Most people say that they would. Rationally, it seems justified: sacrificing one life to save five seems like the logical thing to do. However, consider this minor modification of the moral dilemma: you witness a runaway trolley headed for five people. You can stop the trolley by pushing a person onto the track, killing that one individual but saving five lives in the process. Would you do it? It is the same moral calculation, but most say they would not do it. Why? Princeton University’s Joshua Greene believes he has found a reason through brain imaging technology. In presenting these moral dilemmas to subjects and recording what is going on inside their brains as they think about them, the second scenario of pushing a person onto the tracks caused the emotional areas of the subjects’ brains (normally active when feeling sad and frightened) to light up much more than when they were thinking about the first scenario.30 The difference in these two scenarios is that in the first one the subject is emotionally detached by being one step removed from the killing process—to save five lives by killing one person, one has only to flip a switch to divert the trolley car. The trolley killed the individual, not the subject. In the second scenario the subject is emotionally involved—to save five lives by killing one person, one has to be directly and viscerally responsible for killing another person. Moral judgment is not calculatingly rational. It is intuitively emotional.
Cognitive biases also play a powerful role in our moral intuitions. The self-serving bias, for example, which dictates that we tend to see ourselves in a more positive light than others actually see us, leads us to think we are more moral than others. National surveys, for instance, show that most businesspeople believe they are more moral than other businesspeople.31 Even social psychologists who study moral intuition think they are more moral than other social psychologists!32 And we all believe that we will be rewarded for our ethical behavior. A U.S. News & World Report study asked Americans who they think is most likely to make it to heaven: 19 percent said O. J. Simpson, 52 percent said former President Bill Clinton, 60 percent said Princess Diana, 65 percent chose Michael Jordan, and, not surprisingly, 79 percent elected Mother Teresa. But the person survey takers thought most likely to go to heaven, at 87 percent, was the survey taker him- or herself!33
Consistent with these experimental results are studies that show people are more likely to rate themselves superior in “moral goodness” than in “intelligence,” and community residents overwhelmingly see themselves as caring more about the environment and other social issues than other members of the community do.34 In one College Entrance Examination Board survey of 829,000 high school seniors, none rated themselves below average in the category “ability to get along with others,” 60 percent rated themselves in the top 10 percent, and 25 percent said they were in the top 1 percent.35 Likewise, just as behaviors determine perceptions—smokers overestimate the number of people who smoke, for example—moral behaviors determine moral perceptions: liars overestimate the number of lies other people tell. One study found that people who cheat on their spouses and income taxes overestimate the number of others who do so.36
Although in science we eschew intuition because of its many perils, we would do well to remember the Captain Kirk Principle that intellect and intuition are complementary, not competitive. Without intellect our intuition may drive us unchecked into emotional chaos. Without intuition we risk failing to resolve complex social dynamics and moral dilemmas, as Dr. McCoy explained to the indecisive rational Kirk: “We all have our darker side—we need it! It’s half of what we are. It’s not really ugly—it’s human. Without the negative side you couldn’t be the captain, and you know it! Your strength of command lies mostly in him.”
 
Provisional ethics fits well with the research on moral intuition because how we respond to moral problems depends on a combination of inherited moral sentiments and learned moral rules, the combination of which is often too complex to depend entirely on intellect and reason. There are moral principles that are provisionally true, and we can know and apply these principles best by listening to our moral intuition as well as our moral intellect. The little voice inside should be talking to the little calculator inside.
It cannot be overemphasized that provisional ethics is not relative or situational ethics, nor is it an attempt to eschew moral responsibility or escape moral freedom. As an evolved mechanism of human psychology, the moral sense transcends individuals and groups and belongs to the species. Moral principles, derived from the moral sense, are not absolute, applying to all people in all cultures under all circumstances all of the time. Neither are moral principles relative, entirely determined by circumstance, culture, and history. Moral principles are provisionally true—they apply to most people in most cultures in most circumstances most of the time. Although we are all subject to laws of nature and forces of culture and history that shape our thoughts and behaviors, we are free moral agents responsible for our actions because none of us can ever know in its entirety the near-infinite causal net that determines each of our individual lives. Good things and bad things happen to both good and bad people. There is no absolute and ultimate judge to mete out rewards and punishments at some future date beyond the human career on planet Earth. But since moral principles are provisionally true for most people most of the time in most circumstances, there are individual culpability and social justice within human communities that produce feelings of righteousness and guilt and mete out rewards and punishments such that there is at least provisional justice. Provisional ethics leads to provisional justice.
Provisional ethics may not be ultimately satisfying for the moral absolutist, but since there is no justification outside of an omnipotent and omniscient God for such moral absolutism—and there is no convincing scientific evidence that such a God exists—provisional ethics and provisional justice are the best we can do. If you want more—if you need some source of moral verification and objectification outside of yourself, your society, and your species—then you are living in the grip of a supernatural illusion. I’m sorry, but you can’t get more without eschewing reality. Given the nature of our universe, our world, and our selves, this is the best we can do. Fortunately, it is enough. It leads to a moral humanity because a moral nature is part of human nature. It exists independent of and outside any individual because it belongs to the species. As long as humanity continues so too will morality, provisional though it may be.