4
 
MASTER OF MY FATE: MAKING MORAL CHOICES IN A DETERMINED UNIVERSE
 
 
 
He that is good is free, though he be a slave; he that is evil is a slave, though he be a king.
 
—St. Augustine, The City of God, IV, A.D. 427
 
 
 
 
 
On March 30, 1981, as United States President Ronald Reagan emerged from the Washington Hilton Hotel, he was gunned down by John W. Hinckley, Jr., an obsessive loner who, in the scuffle following the initial shots, also blew a hole in the head of Reagan’s press secretary, James Brady, and wounded two others. Hinckley was immediately arrested. Remarkably, although Hinckley clearly fired the shots and never denied this fact, he pleaded not guilty. How can you not be guilty when your act is observed by dozens of eyewitnesses, filmed and seen by millions, and you admit you committed the crime?
Hinckley claimed that he was insane at the time of the assassination attempt. His insanity? He was “crazy” about the movie star Jodie Foster, who he said obsessed him. His was the so-called insanity defense, known legally as NGRI, or “Not Guilty by Reason of Insanity.” To determine whether Hinckley was insane—that is, not responsible for his actions and thus to be placed in a mental institution instead of jail—the court subjected him to an extensive psychological evaluation. Three government-appointed psychiatrists determined that he was sane at the time of the crime because of the considerable planning required to attempt a political assassination. But his defense-appointed psychiatrists diagnosed him with several severe mental disorders, including Schizophrenia Spectrum Disorder and Paradoxical Rage. Amazingly (although maybe we should not be amazed any longer), the jury agreed that Hinckley was not responsible for his actions because he was insane. Reflecting modern understandings of the “out of control” nature of some extreme human behaviors, the jury acquitted him. Rather than being imprisoned for shooting the president, Hinckley was sent to St. Elizabeth’s Hospital in Washington, D.C., where he underwent psychological observation and treatment. In time, he even earned privileges to leave the facility for supervised visits to his parents’ home, and eventually was granted unsupervised trips off the facility grounds.1
By sharp contrast, more than two centuries earlier, on January 5, 1757, French king Louis XV was attacked by an unknown assailant, Robert-François Damiens, who broke through the king’s protective guards, grabbed his shoulder with one hand, and stabbed him with a knife held in the other. Damiens was a one-time menial in the college of the Jesuits in Paris. During a religious dispute between Pope Clement XI and the parliament of Paris over whether sacraments should be granted to Jansenists and Convulsionnaires, Damiens got it into his head that religious peace would be restored if the king were eliminated. For a crime resembling Hinckley’s (albeit with a different motive), Damiens was convicted of attempted regicide and sentenced to a rather harsher punishment than Hinckley got:

Pincered at the breasts, arms, thighs and calves, his right hand holding the knife with which he perpetrated the said act, he was to be burned on the hand with sulfur, to be doused at the pinion points with boiling oil, molten lead, and burning resin, and then to be dismembered by four horses, before his body was burned, reduced to ashes, and scattered to the winds. Then one of the executioners, a strong and robust man, grasped the metal pincers, each one foot long, and by twisting and turning them, tore out huge lumps of flesh, leaving gaping wounds which were doused from a red-hot spoon. Between his screams, Damiens repeatedly called out, “My God, take pity on me!” and “Jesus, help me!” The final operation lasted a very long time, because the horses were not used to it. Six horses were needed: but even they were not enough. The executioner asked whether they should cut him in pieces, but the Clerk ordered them to try again. After two or three more attempts, the executioners took out knives, and cut off his legs … . They said that he was dead. But when the body had been pulled apart, the lower jaw was still moving, as if to speak.2
 

As if that were not enough, Damiens’s home was razed to the ground, his brothers and sisters were ordered to change their names, and the rest of his family, including his wife and daughter, were banished from the country.
These two dramatically different forms of punishment reflect changing social and cultural attitudes toward behavior and its causes over the past two centuries. Have we changed for the better? This is a moral question with broad and sweeping ramifications for psychology, sociology, social policy and legislation, and political and ethical theory.
In the previous chapter we discussed theodicy, or the problem of evil, where God’s omnipotence, omnibenevolence, and the existence of evil were seen to be incompatible. The noted Oxford theologian and man of letters C. S. Lewis, in his moving posthumously published work A Grief Observed, reflected on this problem after the premature death from cancer of his beloved wife, Joy: “But is it credible that such extremities of torture should be necessary for us? Well, take your choice. The tortures occur. If they are unnecessary, then there is no God or a bad one. If there is a good God, then these tortures are necessary. For no even moderately good Being could possibly inflict or permit them if they weren’t.”3
A similar set of logical tenets arises in theology over the problem of free will, and in my opinion there is no satisfactory solution for either of them. Squaring free will with God’s omniscience and omnipotence is problematic. How can He hold us responsible for making “choices” we could not have made freely if He is all-knowing and all-powerful? If we are volitional beings, then we can make free choices, which means that God is limited in knowledge, power, or both. And a limited God is not the God of Abraham, Jesus, and Muhammad. God Himself, as it were, offered this solution in Milton’s Paradise Lost, in an explanation for how Adam and Eve could have freely chosen to disobey Him even though He already knew their disobedience was foreordained by His power:

They themselves decreed
Their own revolt, not I. If I foreknew,
Foreknowledge had no influence on their fault
Which had no less proved certain unforeknown.4
 

The French philosopher René Descartes suggested a similar way out in less poetic form: “We will be free from these embarrassments if we recollect that our mind is limited while the power of God, by which he not only knew from all eternity what is or can be, but also willed and preordained it, is infinite. It thus happens that we possess sufficient intelligence to know clearly and distinctly that this power is in God, but not enough to comprehend how he leaves the free actions of men indeterminate.”5 After tackling the problem of evil, C. S. Lewis turned his acumen to the problem of free will, expanding on Descartes’s resolution of placing God outside of time:

But suppose God is outside and above the Time-line. In that case, what we call “tomorrow” is visible to Him in just the same way as what we call “today.” All the days are “Now” for Him. He doesn’t remember you doing things yesterday; he simply sees you doing them, because, though you’ve lost yesterday, He has not. He doesn’t foresee you doing things tomorrow; He simply sees you doing them: because, though tomorrow is not yet there for you, it is for Him. You never supposed that your actions at this moment were any less free because God knows what you are doing.6
 

Even without including God in the equation, a new paradox arises in its stead. If we live in a determined universe, how can we make free moral choices? If genetics, biology, culture, and history combine to create a suite of factors that determine our thoughts and behaviors, how can society hold us morally and legally responsible for our actions? Is it legally, philosophically, or scientifically tenable to argue that some or most of us are free most of the time to make moral decisions, while a few (like John Hinckley) are absolutely determined (in other words, they could not have acted otherwise) some of the time to make immoral decisions? Hinckley, it was decided, had lost his free will. He was so under the control of inner forces and outer circumstances that he was determined to commit this act of violence. Determined by what precisely? Presumably by some mix of his genes and his environment, of internal traits unique to him and external states to which he was subjected.
Since we are all subject to some blend of heredity and environment—internal traits and external states—then why couldn’t any of us cop an insanity plea, or at least a determinism appeal, for any of our immoral actions? Indeed, lots of people do. Consider the various defenses employed to build a case against moral freedom and for criminal determinism: the Twinkie defense (high blood sugar caused Dan White to kill San Francisco’s Mayor George Moscone and Supervisor Harvey Milk), the abuse excuse (the Menendez brothers murdered their parents because they were abused as children), black rage syndrome (Colin Ferguson shot six white people on a train because he snapped under the pressure of our racist society), pornography defense (watching other people have sex causes men to rape women), PMS defense (premenstrual syndrome caused a woman to assault a police officer), and television violence (watching other people being violent makes people more aggressive). In the case of John Hinckley, the American court held that he was a determined puppet, and while he didn’t get off scot-free, his punishment was far less draconian than that of Robert-François Damiens, whom the French court held was responsible for his crime because he freely chose to commit it, even though his justification might just as easily be construed today as being insane. (Perhaps a modern attorney would argue that Damiens was the victim of “regirage”—uncontrollable anger over being subjected to the rule of a king.) Which view of human nature is correct? Are we free or are we determined? And if we are determined, how can we make free moral choices and be held accountable for them? This is what I call the paradox of moral determinism, a subset of the larger free will/determinism problem.
 
The English novelist William Somerset Maugham aptly expressed the paradox of free will and determinism in his thought-provoking parable “Appointment in Samarra”:

Death speaks:
There was a merchant in Baghdad who sent his servant to market to buy provisions and in a little while the servant came back, white and trembling, and said, “Master, just now when I was in the market place I was jostled by a woman in the crowd and when I turned I saw it was Death that jostled me. She looked at me and made a threatening gesture. Now, please lend me your horse and I will ride away from this city and avoid my fate. I will go to Samarra and there Death will not find me.”
The merchant lent him his horse and he dug his spurs in its flanks and as fast as the horse could gallop, he went. Then the merchant went down to the market place and saw me standing in the crowd and approached me and said, “Why did you make a threatening gesture to my servant when you saw him this morning?”
“That was not a threatening gesture,” I replied. “It was only a start of surprise. I was astonished to see him in Baghdad, for I have an appointment with him tonight in Samarra.”7
 

Although the meaning of Maugham’s homily is more in line with predestination, the point is made that there is a sense of inevitability in life’s drama. Although we may think we are freely going about our business, we are actually under the control of hidden masters. Consider the intricate workings of a finely crafted watch. If the hands of the watch possessed consciousness and self-awareness, they might feel like they were freely moving about the watch face, but we the watchmakers would know better. We know that the watch hands are determined because we know that the spring, cogs, wheels, and various parts all work together to cause the hands of the watch to move. We know that the watch is not a volitional being. It does not freely choose to keep accurate time. If the watch is running slow, we do not assign to it such anthropomorphic traits as indolence. It doesn’t want to be late. It simply can’t help it, and we take it to a jeweler to determine the cause of the problem. We do not assert that the problem with the watch is an insoluble one due to its volitional nature.
This is the axiom of determinism, the doctrine that every event in the universe has a prior cause, and that all effects are predictable if all causes are known. The free will/determinism problem is an ancient one, but for our purposes we begin in the seventeenth century with the rise of modern philosophy through such philosophers as René Descartes, and the ascent of modern science through such scientists as Isaac Newton. With the advent of the Cartesian/Newtonian mechanistic worldview, philosophers and scientists began to think of the universe and everything in it, including us, as determined in a mechanistic manner. The metaphor of choice, in fact, was that the universe is like a clock. The origin and action of every atom, molecule, cell, organism, person, planet, and star are the effects of some mechanical cause or series of mechanistic causes. This view became so pervasive that it was codified by the French mathematician Marquis de Laplace in what has since become known as Laplace’s demon: “Let us imagine an Intelligence who would know at a given instant of time all forces acting in nature and the position of all things of which the world consists; let us assume, further, that this Intelligence would be capable of subjecting all these data to mathematical analysis. Then it could derive a result that would embrace in one and the same formula the motion of the largest bodies in the universe and of the lightest atoms. Nothing would be uncertain for this Intelligence. The past and the future would be present to its eyes.”8 Alexander Pope elegantly rhymed the problem this way:

Think we, like some weak prince, the Eternal Cause,
Prone for his favourites to reverse his laws?
Shall burning Etna, if a sage requires,
Forget to thunder, and recall her fires?
On air or sea new motions be imprest,
O blameless Bethel, to relieve thy breast?
When the loose mountain trembles from on high,
Shall gravitation cease, if you go by?9
 

By the twentieth century, philosophers spoke of a “causal net”—a network of causes linked to effects throughout the past and into the future. The causal net encompasses all phenomena, past, present, and future, throughout the cosmos, from atoms to galaxies and everything in between, including us. Without the doctrine of determinism, science could not strive for an ultimate understanding of past events or make predictions about future phenomena. The causal net was cast over the legal profession when attorneys began to use it as a tool in defense of their clients. John Hinckley’s case shows how science and the law have each dealt with this problem. From what we have already seen about John Hinckley, it is clear that he was not “normal” in any sense of the word. So what was he, sane or insane? The answer turns out to be neither, and both.
 
In philosophy there is a well-known fallacy of logic called post hoc, ergo propter hoc, literally, “after this, therefore because of this.” In cognitive psychology there is a related problem called the “hindsight bias,” in which, once we know the outcome, it seems that “I knew it all along.” Much of what we believe about the world seems right only after the fact. Before the fact, however, things are not always so clear. Before the ATF’s assault on the Waco compound of the Branch Davidians, for example, storming the building with armed agents seemed like the right thing to do. After four ATF agents were shot, it was abundantly clear to everyone that disaster was a foregone conclusion. Monday morning quarterbacking is everyone’s favorite hobby. Causality is easy to infer after the effect; it is nearly impossible to know before. That is why the experimental methods of science demand rigid controls over intervening variables that might confound the results of an experiment.
In the case of John Hinckley, if you did not already know what he did, you would be hard-pressed to find anything in his background that would lead him to commit such an extreme act of violence. Hinckley was the youngest of three children, the son of a stay-at-home mom and a workaholic father prominent in the oil business. His mother described him as clinging and dependent. In reviewing JoAnn and Jack Hinckley’s autobiographical book about their son and family, Breaking Points, Laura Obolensky wrote in the New Republic:

Perhaps it is fear of what lies outside that makes the interior of the family so rigid and subdued, like life in a well-run bunker. The world of the Hinckleys was the rootless, middle-class Sunbelt culture that nurtures pro-family values, Christian fundamentalism, and occasional mass murderers. Families move frequently, but without compromising their parochialism. Everywhere, people are white, Christian, Republican (JoAnn explains John’s egregious prejudices by saying he had “never been around people of other races”). Somewhere outside there are malign elements—minority groups, rock musicians, big government, and the cynical, Godless cosmopolites who dominate the media. Mothers in this culture do not lavish attention on their children, but on their furniture.10
 

Affectless? Yes. Cold and uncaring? Perhaps. Progenitor of an assassin? Hardly.
Upon graduating from high school, Hinckley muddled through two years of college at Texas Tech, in Lubbock, watching television, playing guitar, and finally dropping out in the spring of 1976. Like so many aspiring entertainers before and after him, Hinckley moved to Hollywood, where he saw Jodie Foster in Taxi Driver, a movie he watched fifteen times over the next couple of years. In the film, Foster plays a young prostitute rescued from her pimp by a psychotic taxi driver named Travis Bickle, played by Robert De Niro. Hinckley identified with Bickle, adopting the mannerisms (“you talkin’ to me?”) and dress of the character (who, significantly, contemplated committing a political assassination), even to the point of keeping a diary, wearing army fatigues and boots, and developing an obsession with guns. More importantly, the film spawned in Hinckley an obsession with Foster.
A year later, Hinckley abandoned his musical aspirations and returned to Texas, wandering aimlessly as his depression deepened. A year after that, he bought his first gun and took up target shooting, and by the fall of 1979, he later confessed, he had twice played Russian roulette. Matters took a more sinister turn in the summer of 1980, when he convinced his parents to finance a writing course at Yale University where, not by chance, Foster was enrolled as a student. A distant obsession now turned to physical stalking, as Hinckley left Foster poems and letters, and twice phoned her.
His awkward love unrequited, Hinckley now considered a political assassination to get Foster’s attention. His first target was President Jimmy Carter. Hinckley tailed him on the 1980 campaign trail in Washington, D.C., as well as in Columbus and Dayton, Ohio. He later admitted that he couldn’t get into “a frame of mind where I could actually carry out the act,” so he returned to Yale to make another attempt at winning Foster’s love, and once again failed. He was subsequently arrested for carrying handguns in his suitcase through the Nashville airport, but was released without incident.
With his parents’ tuition money spent, Hinckley returned home and overdosed (but recovered) on antidepressants. His parents arranged for Hinckley to see a psychiatrist, but there was apparently no hint in their sessions of the level of Hinckley’s obsession, or that he was on the brink of attempting suicide or murder. His depression deepening, Hinckley traveled to New York and considered killing himself on the same spot where John Lennon had been assassinated a few weeks earlier by another obsessed young man named Mark David Chapman. On New Year’s Eve of 1980, Hinckley recorded a rambling message in which he spoke of not “really” wanting “to hurt” Foster and that he might kill himself if Foster would not return his affections. He finally decided he would take down Reagan, as he carefully explained to Foster in a never-sent letter: “Jodie, I would abandon this idea of getting Reagan in a second if I could only win your heart and live out the rest of my life with you … . I will admit to you that the reason I’m going ahead with this attempt now is because I just cannot wait any longer to impress you. I’ve got to do something now to make you understand, in no uncertain terms, that I am doing this for your sake! By sacrificing my freedom and possibly my life, I hope to change your mind about me. This letter is being written only an hour before I leave for the Hilton Hotel. Jodie, I’m asking you to please look into your heart and at least give me the chance, with this historic deed, to gain your respect and love.”
After shooting Reagan, Hinckley was arrested on the spot and transferred to the federal penitentiary in Butner, North Carolina, for a psychiatric evaluation. Although the initial assessment of Hinckley indicated that he was sane, two suicide attempts in his cell and his demand to his attorneys that they get Foster to testify on his behalf indicated that perhaps his mind was not functioning within normal operating parameters. (Foster did provide videotaped testimony, but not the kind Hinckley considered favorable to his cause, which was more focused on winning her love than on winning his freedom.)
As the trial got under way, and with the hindsight bias on full display, attorneys, psychiatrists, jurors, and observers searched in vain for “the cause” of Hinckley’s actions. Hinckley’s attorney, Vince Fuller, attempted to glean from Hinckley’s mother some glitch in his upbringing or an action that could have been a clue to the imminent assassination attempt. In fact, just months before the shooting, she had told Hinckley’s psychiatrist, Dr. Hopper, “Things are fine.” Hinckley’s father testified that the last time he saw John he told him, “O.K., you are on your own. Do whatever you want to do.” In retrospect, and with the hindsight bias driving him to despair, Jack Hinckley lamented, “I’m sure that it was the greatest mistake in my life. I am the cause of John’s tragedy—I forced him out at a time when he simply couldn’t cope. I wish to God that I could trade places with him right now.” After the fact, anyone can be a Monday morning psychiatrist. Science requires predictions, not postdictions.
In her testimony, Foster said Hinckley’s first sets of letters to her were “lover-type letters,” but that the later letters were “distress-sounding” and “I gave them to the dean of my college.” In a missive dated March 6, 1981, for example, just three weeks before the shooting, Hinckley wrote, “Jodie Foster, love, just wait. I will rescue you very soon. Please cooperate. J. W. H.” Asked whether she’d “ever seen a message like that before,” Foster replied, “Yes, in the movie Taxi Driver the character Travis Bickle sends the character Iris [Foster’s role] a rescue letter.”
Hinckley’s lead psychiatrist for his defense was Dr. William Carpenter. After forty-five hours of conversation with Hinckley, Carpenter concluded that he showed four major symptoms of mental illness: “an incapacity to have an ordinary emotional arousal,” “autistic retreat from reality,” depression including “suicidal features,” and an inability to work or establish social bonds. Hinckley’s inability to properly identify with real people, Carpenter explained, led him to emulate Taxi Driver’s Bickle. Alone in his college dorm room and playing his guitar, he also identified with John Lennon. When the pop star was assassinated, Hinckley’s self-identification was discombobulated. His New Year’s Eve monologue, said Carpenter, indicated just how deep his insanity had gone. Here is what Hinckley said:

John Lennon is dead. The world is over. Forget it. It’s just gonna be insanity, if I even make it through the first few days … . I still regret having to go on with 1981 … . I don’t know why people wanna live … . John Lennon is dead … . I still think—I still think about Jodie all the time. That’s all I think about really. That, and John Lennon’s death. They were sorta binded together … .
 

Hinckley’s identification then switched from Lennon back to the fictional Bickle, indicated by his signature in the guest register at the hotel where he stayed after his father refused to let him return home: “J. Travis.”
Another psychiatrist for the defense, Dr. David Bear, concurred with Carpenter that Hinckley was psychotic, suffering from Schizophrenia Spectrum Disorder and an extreme reaction to Valium called Paradoxical Rage. For example, Hinckley believed that the character Bickle was talking to him through the film. It was “like he was acting out a movie script,” Bear explained. Could he have been faking his symptoms? Not likely, Bear answered, because impostors fake both positive and negative emotions, whereas Hinckley exhibited only negative emotions. The prosecution countered that the never-sent letter to Foster proved that Hinckley had planned to shoot the president, but Bear denied that it was a rational plan: “This is so much the heart of the issue—a logical man plans, Hinckley simply reacted, the very opposite of logic. Do I conclude he was rational in plan? My God, my sense of justice says absolutely not.” Bear also introduced into evidence a CAT scan of Hinckley’s brain, arguing that its widened sulci were “powerful evidence” of schizophrenia, because only about 2 percent of the general population show widened sulci, whereas about one-third of schizophrenics do. Finally, Hinckley’s score on the Minnesota Multiphasic Personality Inventory indicated that he was near the top of the range for abnormality. According to one expert, only one person out of a million with Hinckley’s score would not be suffering from serious mental illness. Case closed.
Or was it? The government’s psychiatric team drew a rather different conclusion. While Hinckley may have exhibited numerous personality disorders and had some obviously unlikable traits, they argued, he was not insane. His flying about the country, purchasing handguns and ammunition, and plotting the place and timing of Reagan’s assassination all indicated that he was rational and organized. How different, really, was his identification with John Lennon from that of thousands of crazed rock fans? Was it really an “obsession” with Jodie Foster or just an exaggerated form of the infatuation exhibited by so many young adults for celebrities? Hinckley, the prosecution argued, was a bored, spoiled, lazy, manipulative rich kid and little more: “Mr. Hinckley’s history is clearly indicative of a person who did not function in a usual reasonable manner. However, there is no evidence that he was so impaired that he could not appreciate the wrongfulness of his conduct or conform his conduct to the requirements of the law.” Hinckley said as much in a deposition with the prosecution. Basically, he explained, he just wanted to be famous so he could get Foster’s attention and affection. “It worked. You know, actually, I accomplished everything I was going for there. Actually, I should feel good because I accomplished everything on a grand scale … . I didn’t get any big thrill out of killing—I mean shooting—him. I did it for her sake … . The movie isn’t over yet.”
The movie came to an end after eight weeks of evidence and arguments. The jury was instructed by Judge Barrington Parker that the prosecution had the burden of proving beyond a reasonable doubt that Hinckley was not insane and that on the day of the assassination attempt he could appreciate the wrongfulness of his actions. After three days of deliberation the jury concluded that the prosecution had failed to do so, returning the same verdict for all thirteen counts: not guilty by reason of insanity.
The trial captured the paradox of moral determinism. After the verdict, the public was outraged; the jury had acquitted a man whose crime had been witnessed by everyone in the country on national television. Lawmakers promised to launch an investigation into the insanity plea. One reporter sardonically labeled his insanity “dementia suburbia,” because Hinckley was from a well-to-do suburban family. Nevertheless, the day after the trial Hinckley was placed in St. Elizabeth’s Hospital in Washington, where he has resided ever since, his petitions for release continually denied. In 1985 Hinckley returned to court to request grounds privileges and to lift the hospital ban on his access to telephones. His obsession with Foster was finally over, he told the judge. But it wasn’t true. Hinckley had previously written a letter to Time magazine, claiming, “The most important thing in my life is Jodie Foster’s love and admiration. If I can’t have them, neither can anyone else. We are a historical couple, like Napoleon and Josephine, and a romantic couple like Romeo and Juliet.” The judge denied his request. A search of his room during another hearing on privileges, in 1987, turned up twenty photographs of Foster and numerous writings about her, along with correspondence with serial killer Ted Bundy. Hinckley had even attempted to contact Charles Manson. Even nearly a decade and a half later, his obsession had not diminished. In 2000, shortly after he earned the right to unsupervised furloughs, a search of his room uncovered a book about Foster, and he was once again confined to quarters.
Within a month of the Hinckley verdict, the legal world was awash in debate on the question of moral culpability. The House and Senate held hearings, and Senator Arlen Specter proposed shifting the burden of proof of insanity from the prosecution to the defense. Even Reagan commented, “If you start thinking about even a lot of your friends, you would have to say, ‘Gee, if I had to prove they were sane, I would have a hard job.’” Within three years, two-thirds of the states had made that shift in the burden of proof. Utah abolished the defense entirely, and eight other states changed the plea to “guilty but mentally ill.” In 1984 Congress passed the Insanity Defense Reform Act, requiring a defendant to prove that a “severe” mental disease made him “unable to appreciate the nature and quality or the wrongfulness of his acts.”
 
The insanity defense is based on the moral psychological theory that most people most of the time in most circumstances freely choose to follow the law, but that a few people, in certain times and circumstances, and under particular mental duress or disease, cannot make that choice. Therefore, the traditional punishment of prison is inappropriate: how can you punish someone for an act he did not voluntarily choose to commit?
The legal awareness that there is a paradox of moral determinism dates back to 1843, when an apparently psychotic man named Daniel M’Naghten, under the paranoid belief that he was being persecuted, attempted to assassinate British prime minister Robert Peel but instead shot and killed Peel’s private secretary, Edward Drummond, whom he had mistaken for Peel. Foreshadowing the Hinckley trial, M’Naghten pleaded insanity, while the prosecution argued that his behaviors indicated volition because of the planning needed to execute the attack. Several physicians provided expert testimony that M’Naghten was insane, and the court agreed. The ensuing legal brouhaha (which included Queen Victoria and the House of Lords protesting the outcome of the trial) led to a set of criteria by which jurors were to judge the sanity or insanity of a defendant. These became known as the M’Naghten Test, which was in place in both the United Kingdom and the United States through the early 1960s. In brief, the M’Naghten Test of moral insanity includes a “right-wrong test” in which jurors were to ask themselves two questions: (1) did the defendant know what he was doing when he committed the crime? or (2) did the defendant understand that his actions were wrong? Jurors “ought to be told in all cases that every man is to be presumed to be sane, and to possess a sufficient degree of reason to be responsible for his crimes, until the contrary be proved to their satisfaction; and that, to establish a defense on the ground of insanity, it must be clearly proved that, at the time of the committing of the act, the party accused was laboring under such a defect of reason, from disease of the mind, as not to know the nature and quality of the act he was doing, or, if he did know it, that he did not know he was doing what was wrong.”
Over time different jurisdictions modified the M’Naghten Test, some adding an “irresistible impulse” clause where defendants could be acquitted even if they knew an act was wrong but could prove that they could not stop themselves from committing it. The psychological theory behind this addendum was that some forms of mental illness are so powerful that they can cause a person to act against his or her better judgment. It is not simply “I knew this was wrong, but I wanted to do it anyway,” but, rather, “I knew this was wrong, but I could not stop myself from doing it anyway.” Critics argued that this clause meant anyone could claim irresistible impulses and that the whole point of social laws and moral codes is to curb those impulses. Where do you draw the line between normal resistible and abnormal irresistible impulses?
This conundrum led to the Durham Test, which arose out of the Washington, D.C., 1954 case of Durham v. United States. The Durham Test was a modification of the M’Naghten Test, in which jurors were now to ask themselves these two questions: (1) did the defendant have a mental disease or defect? and (2) if so, was the disease or defect the reason for the unlawful act? To return a verdict of not guilty by reason of insanity, both answers had to be in the affirmative. The psychological theory behind the Durham Test was that mental illness was a disease that took control of a person’s moral volition. The Durham Test, however, never quite caught on, and by 1972 the D.C. Circuit out of which it originally arose abandoned the test and in its stead adopted the American Law Institute (ALI) Test, a more flexible model designed to recognize degrees of incapacity. Instead of a binary choice of knowing or not knowing the difference between right and wrong, defendants could show degrees of moral and psychological incapacity. Specifically, it stated: “The concept of belief in freedom of the human will and a consequent ability and duty of the normal individual to choose between good and evil is a core concept that is universal and persistent in mature systems of law.” The presumption is that good and evil actions are choices made by volitional beings. Thus, “Criminal responsibility is assessed when through free will a man elects to do evil.” Free will holds us accountable. Abandon free will in favor of determinism and moral culpability flies out the courtroom window.
By the 1970s almost all federal circuit courts had adopted the ALI Test, and it was this one that was in place for the Hinckley trial. The hue and cry over Hinckley’s acquittal, however, reversed the ALI Test’s dominance, as states either abolished it entirely or shifted the burden of proof to the defendant. The Insanity Defense Reform Act of 1984 clearly stated the limits of the insanity plea: “It is an affirmative defense to a prosecution under any federal statute that, at the time of the commission of the acts constituting the offense, the defendant, as a result of a severe mental disease or defect, was unable to appreciate the nature and quality or the wrongfulness of his acts. Mental disease or defect does not otherwise constitute a defense.” In other words, the Reform Act eliminated the volitional aspect of the defense, required that the mental disease be “severe,” and replaced “lacks substantial capacity” with “unable to appreciate” in order to clarify the boundary between a total lack of understanding and partial comprehension. Many states also changed the plea “not guilty by reason of insanity” to “guilty but mentally ill.” Perhaps most telling, the American Medical Association cast its vote for the total abolition of the insanity defense. And so the paradox remains mired in legal muck because of the lack of a clear scientific understanding of where to draw the line between sanity and insanity, and between free will and determinism.11
 
From the time of the ancient Greeks to the present, some of the greatest minds of every generation have grappled with the problem of free will and determinism, and no one to date has proposed a solution that satisfies most people. It could be that this is a really hard problem—on par, say, with celestial mechanics—and our Newton has yet to produce the Principia of free will. Perhaps, and this is even more distressing, there is no solution to the problem. Like the question of God’s existence, the free will/determinism paradox may be an insoluble one. This may be a “mysterian” mystery, where our brains are sophisticated enough to conceive of the problem but not advanced enough to solve it.12
This is in lockstep with the fideist position in theology, where pragmatist philosophers like William James, Charles Peirce, and Miguel de Unamuno argue that it is acceptable to take a leap of faith on issues of extreme importance to human existence, when the evidence is inconclusive one way or the other and you must choose. That is, make a choice even when uncertainty remains high as to which choice is the correct one. My friend and skeptical colleague Martin Gardner is a fideist and takes this approach to both the God question and the free will/determinism problem. He has chosen God and free will, not because there is better evidence for them, but because they are important issues, the evidence is inconclusive, and it works better for him to believe in God and free will. With the free will problem, Gardner says it “cannot be solved because we do not know exactly how to put the question.”13 Asking the question, Is there free will? is like asking, Why is there something rather than nothing? or What is time? After reviewing all the arguments for free will, Gardner concludes, “Like time, with which it is linked, free will is best left—indeed, I believe we cannot do otherwise—an impenetrable mystery. Ask not how it works because no one on earth can tell you.”14
If the problem is an insoluble one and thus it is acceptable to choose either free will or determinism, can one have both? The belief that free will can be derived out of a deterministic universe is called compatibilism, and it is shared by many, such as philosopher and neurobiologist Owen Flanagan, who argues that the free will/determinism problem is “ill-posed” and that even though “ours is a causal universe … no one yet knows the exact range of deterministic and indeterministic causation—assuming the universe contains some of each.” Flanagan’s solution is creative, if nothing else: “My proposal is this: Change the subject. Stop talking about free will and determinism and talk instead about whether and how we can make sense of the concepts of ‘deliberation,’ ‘choice,’ ‘reasoning,’ ‘agency,’ and ‘accountability.’”15
Compatibilist solutions such as these are really pseudosolutions or, more positively, pragmatic solutions. That is, pragmatically speaking, they are true if they work. Along those lines, here is one that works for me; maybe it will work for you: free will is a useful fiction. I feel “as if” I have free will, even though I know we live in a determined universe. This fiction is so useful that I act as if I have free will but you don’t. You do the same. Since the problem may be an insoluble one, why not act as if you do have free will, gaining the emotional gratification and social benefits that go along with it?
Insolubility and compatibilism are, at best, pseudosolutions, which, for the most part, satisfy no one. Can science help clear up this conundrum? I shall close this chapter with six science-based attempts to derive free will out of determinism: the uncertainty principle of quantum indeterminacy, fuzzy logic, neuroscience, genetics, evolutionary theory, and chaos and complexity theory.
 
One solution to the problem makes an appeal to quantum indeterminacy, a consequence of the field of physics called quantum mechanics. One of the pioneers of quantum mechanics, the German physicist Werner Heisenberg, discovered that you cannot determine both the position and the speed of an electron moving about the nucleus of an atom. If you determine where the electron is located, you cannot know its speed. If you determine its speed, you cannot know its position. This became known as the Heisenberg uncertainty principle. Further, when you observe an atom, the “wave function collapses” (a mathematical description used by quantum physicists), thus bringing its location into reality. That is, the atom is in a state of uncertainty until it is observed. When observed, the wave function collapses and the state of the atom becomes certain. Finally, there is an additional level of uncertainty in that when atoms decay—as when, say, potassium atoms decay into argon atoms (a process so predictable that it serves as an atomic clock for dating geological events in the earth’s history)—it is not possible to know which particular atom will decay. The decay process is, quite literally, uncaused and unpredictable. It is truly indeterminate.
From this fact, some philosophers and scientists argue that these random and indeterministic atomic events associated with quantum mechanics might trigger the random firing of neurons in the brain, leading to indeterminate mental states. Perhaps, they suggest, this is where free will arises. This argument was critiqued by one of the leading quantum physicists, Nobel laureate Murray Gell-Mann, who derisively called it quantum flapdoodle.16 Quantum effects cancel each other out at the macro level at which everyday events (like human thought) occur. Is it really possible—we might ask rhetorically, in an analogy with the uncertainty principle—that the orbit of Mars, like the orbit of an electron, is smeared randomly about the sun until someone observes it, at which point the wave function collapses and the planet appears in one spot? Obviously not, any more than we might think that the moon ceases to exist until it is observed (someone once actually proposed exactly that). Quantum effects wash out at large scales.
Yet even if it could be established that quantum uncertainties lead to random neuronal firings, that does not produce free will; it just adds another deterministic causal factor, one that is random rather than nonrandom. This last point was well made by the philosopher Daniel Dennett in his book Elbow Room: The Varieties of Free Will Worth Wanting, in which he also argued that if there is too much free will—if people are completely free of all determining or influencing forces—there could be no room for modification of immoral behaviors.17 Truly free agents could thumb their noses at all attempts to curb their freely chosen actions. Social chaos would be the result. We need a proper balance between determinism and free will, and indeterminism is not freedom.
 
A universally accepted set of criteria to determine moral (and thus legal) responsibility for a crime has proved to be elusive because of a deep and fundamental difference between science and the law. The law requires unambiguous categories in order to reach a judgment of guilty or not guilty based on the defendant’s sanity or insanity. Here again we see the type of Platonic thinking that troubled us over the problem of evil. As with good and evil, “sane” and “insane” are reified things, typologies meant for classification of unchanging entities. Science offers us a solution because it recognizes that there are many shades between such categories, such that a fuzzy logic solution, as we saw in solving the problem of evil, once again proves to be a useful heuristic. Asking if John Hinckley was sane or insane is the binary logic of Aristotle’s A or not-A. Instead, let us inquire about the quantitative level of Hinckley’s sanity or insanity. We can see in Hinckley’s background that the shift from sanity to insanity was a fuzzy one, say, .9 sane and .1 insane in his youth, to .8 sane and .2 insane in his teens, to .7 sane and .3 insane during his two years of college, to .6 sane and .4 insane during his year in Hollywood, to .5 sane and .5 insane during the following year of aimless drifting, to .4 sane and .6 insane as he pursued Foster at Yale, to .3 sane and .7 insane as he contemplated assassinating Carter or killing himself, to .2 sane and .8 insane when he hatched the idea to assassinate Reagan, to .1 sane and .9 insane when he penned his final letter to Foster and headed for the Hilton Hotel with his gun in hand and his moral sense fully disengaged. The three photographs of John Hinckley in figure 18 show the slow and gradual (and fuzzy) deterioration of his mind over time, and along with it his capacity for moral reasoning.
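The fuzzy scale just described lends itself to a small illustration. The following Python sketch is purely illustrative: the degree values are the hypothetical ones suggested in the text (not clinical measurements), and the function name is mine. It shows how fuzzy membership retains shades of gray that a binary sane/insane verdict throws away:

```python
# Illustrative sketch only: a fuzzy-logic reading of the Hinckley timeline.
# The degree values are the hypothetical ones suggested in the text,
# not clinical measurements.

# (period, degree_sane) pairs; in fuzzy logic, membership in "insane"
# is simply the complement of membership in "sane".
timeline = [
    ("youth", 0.9),
    ("teens", 0.8),
    ("college years", 0.7),
    ("year in Hollywood", 0.6),
    ("aimless drifting", 0.5),
    ("pursuing Foster at Yale", 0.4),
    ("contemplating assassination", 0.3),
    ("planning the Reagan shooting", 0.2),
    ("letter to Foster, Hilton Hotel", 0.1),
]

def fuzzy_insanity(degree_sane):
    """Membership in 'insane' as the complement of 'sane'."""
    return round(1.0 - degree_sane, 1)

for period, sane in timeline:
    insane = fuzzy_insanity(sane)
    # Aristotle's binary A-or-not-A logic would force a hard cut at 0.5;
    # fuzzy logic keeps the fraction instead of collapsing it.
    binary = "insane" if insane > 0.5 else "sane"
    print(f"{period:32s} sane={sane:.1f} insane={insane:.1f} (binary: {binary})")
```

Note how the binary column flips abruptly at the 0.5 threshold while the fuzzy degrees change smoothly, which is the whole point of the fuzzy-logic framing.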
A and not-A. Sane and insane. Such psychological states are variable over time and dependent on context. Hinckley’s long-term obsession with Foster became a form of insanity, but his assassination attempt on Reagan was not. He knew exactly what he was doing. He carefully planned it out, and he understood that it was morally wrong because he knew doing it would draw enormous public and media attention. The whole point of the assassination attempt was to get Foster’s attention; recall that he said, “It worked. You know, actually, I accomplished everything I was going for there. I did it for her sake.” Was Hinckley insane in his obsession over Foster? Was Hinckley insane when he shot Reagan? Approached scientifically, these are separate questions. Hinckley’s obsession grew into a form of insanity. Yet he knew what he was doing even as he pulled the trigger, so we should hold him morally accountable for his crime against Reagan. In other words, Hinckley’s obsession over Foster yields a verdict of not guilty by reason of insanity, and treatment by mental health professionals is an appropriate response. His assassination attempt on Reagan, however, generates a straight guilty verdict, and he should have been punished accordingly with a stiff sentence in a maximum-security prison. The long road that took Hinckley from college dropout in 1976 to the Hilton Hotel in 1981 was gradual enough that he could have reversed his course. As he told Newsweek magazine later that year, “The line dividing life and art can be invisible. After seeing enough hypnotizing movies and reading enough magical books, a fantasy life develops which can either be harmless or quite dangerous.” That may be the most insightful thing Hinckley ever said. It is not the fantasy life that is the problem. After all, fiction writers are paid to pour out their fantasies. What matters is whether those fantasies are converted into dangerous behaviors.
Even here we can apply the findings of science to a fuzzy logic analysis. Hinckley’s unrequited love for Foster was the driving force behind his violence. We know that the fervor of unreciprocated love can become one of the most dangerous of all the passions, overriding reason and rationality. But there are degrees of passion, fractional reactions, and fuzzy responses. Research shows, for example, that 95 percent of all men and women have experienced unrequited love at least once by the age of twenty-five (on either the sending or the receiving end).18 Most people whose love goes unwanted by another feel rejected, suffer a temporary dip in self-confidence and self-esteem, but quickly move on to find someone who returns their passionate overtures. A few (mostly men, but some women, too) undertake a vigilant campaign to win the heart of their chosen beloved, and occasionally they succeed. Their efforts are assertive, but not aggressive. But when some do not succeed (and by now it is almost entirely men), and they continue the pursuit despite their target’s efforts to reject them, charges of stalking and harassment can be filed and convictions won. This terminates almost all remaining attempts. Almost all. Hinckley’s response to Foster’s indifference, however, was at the extreme end of the fuzzy scale. But he is still on the scale. His response is still a fractional one within the full range of human response variability. Between .1 and .9 is still not 0 or 1, and so a scientific approach upholds moral culpability. Even if freedom is diminished, it is not extinguished. Thus we have one solution—fuzzy freedom—to the paradox of moral determinism.
Figure 18. The Fuzzy Deterioration of John Hinckley
 
Instead of asking whether John Hinckley was sane or insane in a binary choice, a fuzzy logic analysis allows us to assess his state of mind in shades of gray between sanity and insanity, and how it changed over time. One can see the changes in his face in these three photographs. (Courtesy of Associated Press)
 
 
How does the mind work? According to evolutionary psychologists, the mind is like a Swiss Army knife, equipped with specialized tools that evolved in our Paleolithic past to solve specific problems of survival, such as face recognition, language acquisition, mate selection, and cheating detection. In this model the brain is represented as a host of modules, or bundles of neurons, some located in a single spot (as in Broca’s area for language), others sprawled out over the cortex. Large modules coordinate inputs from smaller modules, which themselves collate neural events from still smaller neural bundles. This reduction continues all the way down to the single neuron level, where highly selective neurons, sometimes described as “grandmother” neurons, fire only when subjects see someone they know. Caltech neuroscientists Christof Koch and Gabriel Kreiman, in conjunction with UCLA neurosurgeon Itzhak Fried, have even found a single neuron that fires when the subject is shown a photograph of Bill Clinton.19 The Monica Lewinsky neuron must be closely connected.
What do these modules tell us about how the mind works? For one, experiences that appear to be external may, in fact, be internal. Five centuries ago, for example, demons haunted our world, with incubi and succubi tormenting their victims as they lay asleep in their beds. Two centuries ago spirits haunted our world, with ghosts and ghouls harassing their sufferers all hours of the night. Last century aliens haunted our world, with grays and greens abducting captives out of their beds and whisking them away for probing and prodding. Today people are having out-of-body experiences, floating above their beds, out of their bedrooms, and even off the planet into space. What is going on here? Are these elusive creatures and mysterious phenomena in our world or in our minds?20 New evidence indicates that they are, in fact, a product of the brain.
Neuroscientist Michael Persinger, in his laboratory at Laurentian University in Sudbury, Canada, for example, can induce all of these experiences in subjects by stimulating their temporal lobes with patterns of magnetic fields. Persinger places on a subject’s head a motorcycle helmet specially modified with electromagnets. The subject sits in an easy chair in a soundproof room with eyes covered. The electrical activity generated by the electromagnets produces a magnetic field that stimulates “microseizures” in the temporal lobes of the brain, which, in turn, produce a number of what can best be described as “spiritual” and “supernatural” experiences—the sense of a presence in the room, an out-of-body experience, bizarre distortions of body parts, and even religious feelings. Persinger calls these experiences “temporal lobe transients,” or increases and instabilities in neuronal firing patterns in the temporal lobe. Having studied over 600 subjects in the past decade, Persinger speculates that such transient events may account for psychological states routinely reported as happening outside the mind. These events, he suggests, may be triggered by the stress of a near-death experience (caused by an accident or traumatic surgery), high altitudes, fasting, a sudden decrease in oxygen, dramatic changes in blood sugar levels, and other stressful events.21 I participated as a subject in Persinger’s experiment and had a mild out-of-body experience.
Similarly, in 2002, Swiss neuroscientist Olaf Blanke and his colleagues discovered that they could bring about out-of-body experiences through electrical stimulation of the right angular gyrus in the temporal lobe of a forty-three-year-old woman suffering from severe epileptic seizures. In initial mild stimulations she reported “sinking into the bed” or “falling from a height.” More intense stimulation led her to “see myself lying in bed, from above, but I only see my legs and lower trunk.” Another stimulation induced “an instantaneous feeling of ‘lightness’ and ‘floating’ about two meters above the bed, close to the ceiling.”22
In a related study, researchers Andrew Newberg and Eugene D’Aquili found that when Buddhist monks meditate and Franciscan nuns pray, their brain scans indicate strikingly low activity in the posterior superior parietal lobe, a region of the brain the authors have dubbed the orientation association area (OAA), which orients the body in physical space (people with damage to this area have a difficult time negotiating their way around a house). When the OAA is booted up and running smoothly there is a sharp distinction between self and nonself. When OAA is in sleep mode—as in deep meditation and prayer—that division breaks down, leading to a blurring of the lines between reality and fantasy, between feeling in body and out of body. Perhaps this is what happens to monks who experience a sense of oneness with the universe, or with nuns who feel the presence of God, or with alien abductees floating out of their beds up to the mother ship.23
Since our normal experience is of stimuli coming into the brain from the outside, when a part of the brain abnormally generates these illusions, another part of the brain interprets them as external events. Hence, the abnormal is thought to be the paranormal. What these studies show is that mind and spirit are not separate from brain and body. In reality, all experience is mediated by the brain. Further, and more to the point of our discussion on free will and the brain, we now know from recent research in the neurosciences that every brain is wired, and continues to be rewired throughout life, in response to unique genetic, environmental, and historical conditions. Evolutionary psychologists Peggy La Cerra and Roger Bingham, for example, in their book The Origin of Minds, argue that our ancestral inheritance is not a set of fixed cognitive tools, but a living “brain/mind-construction system” that exploits pliable brain tissue, changing it with new experiences. The Swiss Army knife, it seems, can design new blades for cutting through new environments.24 How does it do this?
The mind is an emergent property of billions of individual neurons, each of which is connected to thousands of other neurons that together produce trillions of potential neuronal states. As the individual grows and develops into adulthood the interconnections grow and develop according to individual life experiences. Although we share a common evolutionary ancestry that generated a universal neural architecture, since no life paths are the same, and with trillions of possible permutations of neuronal connections in each brain, the result is that every human mind is unique. There are literally six billion different minds. The foundation of this neural system is what La Cerra and Bingham call the adaptive representational network (ARN), “a network of neurons that memorializes a brief scene in the ongoing movie of your life, linking together your physical and emotional state, the environment you are in, the behavior or thought you generate, and the problem-solving outcome.” What they are describing is an autocatalytic (self-generating) feedback loop. New experiences stimulate neurons to grow new synaptic connections. Those new connections are distinctive to every individual mind, which then responds to the environment in an idiosyncratic way, producing a behavioral repertoire of responses. The ARN evolved as an adaptation to help organisms survive in an ever-changing environment. No brain module can do what the ARN does, because modules evolved to solve specific problems, whereas the ARN evolved to solve a range of problems, even those never encountered.
How does this apply to real-world choices? La Cerra and Bingham reinterpret clinical depression in terms of its adaptive response consequences. The symptoms of depression—restlessness, agitation, disturbed sleeping and eating, impaired concentration, and loss of motivation—are not signs of an illness; rather, they represent an adaptive response to do something different in your life. “Because behavior is so enormously expensive energetically, the best thing a person in this situation can do is to stop what he has been doing, reconfigure his life, and try to formulate a more viable trajectory into the future.” Why would this intelligence system have evolved? “If you were an ancestral human who was being exploited by another individual or group of individuals, a complete behavior shutdown could abruptly force a renegotiation of the inequitable social relationship.” Even in the modern world, depression “serves as a wake-up call, prodding people to abandon dead-end jobs and relationships.”25
What does all this neuroscience tell us about free will and determinism? Cognitive psychologist Steven Pinker, for one, argues that the brain is wired to “feel” like it is making choices, so we should listen to what our brains are telling us. “The experience of choosing is not a fiction, regardless of how the brain works,” he explains. “It is a real neural process, with the obvious function of selecting behavior according to its foreseeable consequences.” That is, because choices led to behaviors with real consequences for survival and reproduction in our evolutionary history, brain mechanisms evolved that give us the illusion of free will. “You cannot step outside it or let it go on without you because it is you.” Even “if the most ironclad form of determinism is real,” Pinker concludes, “you could not do anything about it anyway, because your anxiety about determinism, and how you would deal with it, would also be determined.”26 Thus, with brains as convoluted and complex as ours, living in a world with so many options, we evolved a choice-making module that, whether truly free or truly determined, nonetheless makes us feel free.
 
In 1985, while racing a bicycle along a lonely rural highway in Arkansas in the 3,000-mile nonstop transcontinental Race Across America, I was asked by ABC television commentator Diana Nyad how it felt to be too far behind the leader of the race to win. I told her that while I would prefer winning, I had done everything I could in training, nutrition, equipment, and preparation, and that the only thing I could have done to improve my performance was to pick better parents. When that comment aired months later on Wide World of Sports, I called my parents to assure them I only meant that genetics plays a powerful role in athletics. I borrowed the comment from the renowned sports physiologist Per-Olof Åstrand, who told an exercise symposium, “I am convinced that anyone interested in winning Olympic gold medals must select his or her parents very carefully.”27
From an evolutionary perspective, our parents have been very carefully selected for us—by natural selection. But we are also the products of our parental upbringing, family dynamics, peer groups, community values, teachers and education, preachers and religion, culture and politics, and much more. The science of assigning some portion of our lives to genetics and the remaining portion to the environment has a long and controversial history. The process strikes me as an exercise in futility because of the interactive nature of genes and memes, evolutionary history and cultural history. Such binary thinking, particularly since the completion of the mapping of the human genome, for example, has led to oversimplified claims for a “math gene,” a “risk-taking gene,” a “promiscuity gene,” a “rape gene,” or a “smoking gene.”28
In reality, the story is much more complex, and claims of genetic determinism are greatly exaggerated. Consider as one example among many a gene called D4DR, located on the short arm of the eleventh chromosome. D4DR codes for a receptor for dopamine, a neurotransmitter released by neurons that, when received by other neurons receptive to its chemical makeup, sets up dopamine pathways throughout the brain that stimulate the organism to be active (or not, if a shortage exists). A complete lack of dopamine, for example, causes patients (or rats) to slip into a virtual catatonic state. High levels of dopamine turn humans schizophrenic and rats frenetic. Dopamine stimulation, in fact, is the basis of the famous experiment in which rats pressed a bar to stimulate their so-called pleasure center, which they did until collapsing in exhaustion. The D4DR story comes from the fascinating work of geneticist Dean Hamer, who, in his quest to find genes for smoking and homosexuality, discovered the gene (or, more precisely, the gene complex) for a thrill-seeking personality. It turns out that the D4DR gene sequence repeats on chromosome eleven, and while most of us have four to seven copies, some people have two or three, and others have eight, nine, ten, or eleven copies. More copies of the D4DR sequence mean lower levels of dopamine, which translates into higher novelty-seeking behavior that artificially produces more dopamine (jumping off buildings and out of planes will do the trick). Hamer took 124 people who scored high on a survey that measured their desire to seek novelty and thrills (bungee jumpers and sky divers knock the roof off these tests), then looked at their DNA—specifically, chromosome eleven. He found that people who like to jump off buildings and out of planes had more copies of the D4DR sequence than those who prefer knitting and watching grass grow.
When Hamer’s research was picked up in the media, headlines declared that scientists had discovered the novelty-seeking gene, implying that perhaps all of our personality traits are genetically coded at a single point on a single chromosome arm. Alas, if only it were that simple—whenever you get that urge to jump off the top of Yosemite’s Half Dome, just take a dopamine tablet and you’ll prefer to stay on the marked trails. But there is another side to this story. When you actually read the original research, it turns out that Hamer claims to explain no more than 4 percent of novelty-seeking behavior by D4DR sequences. That is, if we say that humans vary by 100 percent in their novelty-seeking behavior—catatonics on one end and X-Game skateboarders careening down hills at 50 mph two inches off the ground on the other—only 4 percent of that variance can be accounted for by D4DR. That’s it! As the science writer Matt Ridley explains in his analysis of the research:

Do you see now how unthreatening it is to talk of genetic influences over behaviour? How ridiculous to get carried away by one “personality gene” among 500? How absurd to think that, even in a future brave new world, somebody might abort a foetus because one of its personality genes is not up to scratch—and take the risk that on the next conception she would produce a foetus in which two or three other genes were a kind she does not desire? Do you see now how futile it would be to practise eugenic selection for certain genetic personalities, even if somebody had the power to do so? You would have to check each of 500 genes one by one, deciding in each case to reject those with the “wrong” gene. At the end you would be left with nobody, not even if you started with a million candidates. We are all of us mutants. The best defence against designer babies is to find more genes and swamp people in too much knowledge.29
 

Nature is so intertwined with nurture that to say that a complex human characteristic like personality or intelligence or—to the point of this book—morality is, say, 40 percent genetics and 60 percent environment (to arbitrarily pick two figures) misses something very important: inheritability of talent does not mean inevitability of success, and vice versa. We are free to select the optimal environmental conditions that will allow us to rise to the height of our biological potentials. In this sense, athletic success, like any other type of success, may be measured not just against others’ performances, but also against the upper ceiling of our own ability. To succeed is to have done one’s absolute best. To win is not just to have crossed the finish line first, but also to cross the finish line in the fastest time possible within one’s own limits. The closer one comes to reaching the personal upper limit of potential, the greater the achievement, as depicted in the Genetic Range of Potential model in figure 19. Individual “A” may have more absolute talent potential than individual “B,” but this does not guarantee relative success. If “B” prepares to the height of his or her upper limit of potential, but “A” slacks off below that mark, inherited talent becomes meaningless. There is not much we can do about selecting our parents, but we can select our environmental conditions to push us to the top of our range of potential.
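The range-of-potential idea can also be put in numbers. The floors, ceilings, and effort values below are invented for illustration:

```python
# Invented illustration of the Genetic Range of Potential model:
# genes set a floor and a ceiling; environment and effort decide
# where within that range a person actually lands.
def realized(floor, ceiling, effort):
    """effort runs from 0.0 (total slacking) to 1.0 (optimal preparation)."""
    return floor + effort * (ceiling - floor)

a = realized(floor=40, ceiling=100, effort=0.5)  # "A": more raw talent, slacks off
b = realized(floor=30, ceiling=80, effort=1.0)   # "B": less talent, full preparation
print(a, b)  # B (80.0) outperforms A (70.0) despite a lower genetic ceiling
```

The higher genetic ceiling guarantees nothing; where each individual lands depends on what is done with the range.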
 
Figure 19: Genetic Range of Potential Model
 
Human behavior is a function of both genetics and environment, arrayed in a complex and interactive feedback loop. Behaviors are never "fixed" in some absolute sense by genetics; instead, genes code for a range of potential behaviors, which environments then affect. Genetically predisposed behaviors may be pushed by environments toward the low end of the range, the high end, or anywhere in between. Individuals may be determined to fall within a given range of potential, but where within that range they end up is a function of environmental determiners as well as self-determination, or free will.
 
Free will, Dennett says, emerges out of our deterministic world from the fact that we evolved a large cortex that allows us to weigh the consequences of the many courses of action available to us, that we are aware that we (and others) make these choices, and that we hold ourselves and them accountable.30
 
In Freedom Evolves, Dennett expands on his arguments in Elbow Room, adding an evolutionary component to his deduction of free will. Dennett’s thesis can be summarized as follows: (1) humans are evolved animals without a soul but with free will; (2) we are the only species with free will because we have a “self,” a sense of being self-aware, and are even aware that others are self-aware, because (3) we have symbolic language that allows us to communicate the fact that we are aware and self-aware; and (4) we have extremely complex neural circuitry and many degrees of behavioral freedom (a jellyfish, like a hot-air balloon, for example, has one degree of freedom: up and down; we have many more); and (5) we have a theory of mind about other selves who are also (6) moral animals in the sense of having evolved moral sentiments or feelings of making right or wrong choices as members of a social species, and with symbolic language, we have the representational power to reason with each other about what we ought to do; therefore (7) free will emerges out of our deterministic world from the fact that we can weigh the consequences of the many courses of action available to us, that we are aware that we (and others) make these choices, and that we hold ourselves and them accountable.
In Dennett’s evolutionary theory, free will is located in the “self,” a metaphor for an adaptation our brains evolved for monitoring what is happening in our own and others’ brains. But where is the self located? The answer is not clear, but wherever it is, it is not in one location. Reaction-time experiments that monitor different parts of the brain indicate that there is no “Self-contained You.” Instead, “all the work done by the imagined homunculus in the Cartesian Theater has to be broken up and distributed in space and time in the brain.”31 We have a functional “layer” of decision-making power that no other species has (this is not a brain layer, but what Dennett calls “a virtual layer” found “in the micro-details of the brain’s anatomy”). For example, “a male baboon can ‘ask’ a nearby female for some grooming, but neither of them can discuss the likely outcome of compliance with this request, which might have serious consequences for both of them, especially if the male is not the alpha male of the troop. We human beings not only can do things when requested to do them; we can answer inquiries about what we are doing and why. It is this kind of asking, which we can also direct to ourselves, that creates the special category of voluntary actions that sets us apart.”32
This argument for freedom from evolution brings a fresh perspective to an ancient problem. But is it true? I have my doubts. Although I accept the first six of Dennett’s points listed above and agree that he has thoroughly debunked the indeterminism argument, I remain unconvinced that free will can ultimately be derived from determinism in any consistent logical way. The terms are incompatible. What we are left with is a type of free will from ignorance, ignorance of all the determining causes in our lives, such that we are, de facto, free because when we make choices we cannot know all the causal variables. This theory of free will derives from chaos and complexity theory.
 
There is one more way to get free will, and that is through the complex world of human and social systems. The causal-net theory of determinism means that human behavior is no less caused than other physical or biological phenomena, just more difficult to understand and predict because of the number of elements in the system and the complexity of their interactions. Since no cause or set of causes we select to examine as the determiners of human action can be complete, in terms of human freedom they may be pragmatically considered as conditioning causes, not determining ones. That is, our thoughts and actions are shaped by a myriad of causes—genetic, environmental, and historical. Every individual set of genes is unique (with the exception of identical twins), each environmental setting is matchless, and every historical pathway that each of us has gone down in our individual lives is distinctive. We are, each and every one of us, unique and different from every other of the six billion members of our species. And those conditions are so complex, so interwoven, that no one could possibly know all of the causal variables for themselves or anyone else. Human freedom arises out of this ignorance of causes.
I derived this solution from a model I developed called the model of contingent-necessity.33 Its primary function is as a tool for the historical sciences, but it can generate another solution to the paradox of moral determinism. By contingency I mean a conjuncture of events occurring without perceptible design, and by necessity I mean constraining circumstances compelling a certain course of action. Contingencies are the sometimes small, apparently insignificant, and usually unexpected events of life—the kingdom hangs in the balance awaiting the horseshoe nail. Necessities are the large and powerful laws of nature and trends of history—once the kingdom has collapsed, 100,000 horseshoe nails will not save the realm. To leave either contingency or necessity out of the historical formula, however, is to ignore an important component in the development of historical sequences. The past is constructed by both contingencies and necessities, so it is useful to combine the two into one term that expresses their interrelationship. I call this contingent-necessity, taken to mean a conjuncture of events compelling a certain course of action by constraining prior conditions.
Randomness and predictability—contingency and necessity—long seen as opposites on a continuum, are characteristics that vary in the amount of their respective influence and in when that influence is greatest in the historical sequence. The model of contingent-necessity offers a rich matrix of interactions between early pervasive contingencies and later local necessities, varying over time: in the development of any historical sequence the role of contingencies in the construction of necessities is accentuated in the early stages and attenuated in the later. At the beginning of a historical sequence, the actions of the individual elements are chaotic and unpredictable, and they have a powerful influence on the future development of that sequence. But as the sequence slowly but ineluctably evolves, and the pathways become more worn, the chaotic system self-organizes into an orderly one. The individual elements sort themselves, and are sorted, into their allotted positions, as dictated by what came before—the conjuncture of events compelling a certain course of action by constraining prior conditions. But aren't both necessities and contingencies caused, and themselves the causes of effects? And, if so, isn't all human action caused, and thus determined? We can express the problem this way:

Necessity is omnipotent
Contingency is omnipotent
Humans have free will
 

If human history is absolutely determined by necessitating forces of any kind, then neither contingency nor free will can exist. If contingency is all-powerful, then there can be no absolutely determining forces, and all history is reduced to just “one damn thing after another.” Since it is obvious that there are necessitating forces at work in history, and it is equally obvious that contingencies push and direct historical sequences, then how do we resolve the problem of historical causality and human freedom? Here is a helpful analogy. Atoms moving about in space, like people moving about the environment, are caused, but their collisions (atomic) and encounters (human) happen by a combination of contingencies and necessities. Contingency leads to collisions and encounters; necessity governs speed and direction. Events may occur as a result of accidental causes (a conjuncture of unplanned events), but not by accident, in the sense of being uncaused. An effect, dependent upon the activity of one or more causes, may seem to be produced by accident, but it is really the result of a conjuncture of events compelling a certain course of action by constraining prior conditions. The words compelling and constraining imply powerful influence but not causal determinism.
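The claim that contingencies dominate early while necessities dominate late can be illustrated with a classic toy process, the Pólya urn. The analogy is mine, not the author's: start with one red and one blue ball, and after each draw return the ball along with another of the same color. Early chance draws swing the proportions wildly; as the urn fills, the proportion locks in and late draws barely move it.

```python
import random

random.seed(7)

# Pólya urn: draw a ball at random, put it back with another of the
# same color. The probability of drawing red equals the current
# proportion of red balls.
red, blue = 1, 1
history = []  # proportion of red after each draw
for _ in range(2000):
    if random.random() < red / (red + blue):
        red += 1
    else:
        blue += 1
    history.append(red / (red + blue))

# Early contingencies move the proportion far more than late ones.
early_swing = max(history[:50]) - min(history[:50])
late_swing = max(history[-50:]) - min(history[-50:])
print(f"early swing: {early_swing:.3f}, late swing: {late_swing:.3f}")
```

Run the urn again with a different seed and it may settle near a completely different proportion, but it always settles: contingency picks the path, and the accumulated past then constrains it, which is the shape of contingent-necessity.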
Another way to approach the problem is to think of necessities as “what had to be” and contingencies as “what might have been.” If history is a product of contingencies and necessities, then necessities (what had to be) imply determinism, while contingencies (what did not have to be) imply, in a way, a type of freedom. If things could have turned out differently because of some small but carefully placed human action, this gives us one more way around determinism. We can make a difference. Our actions matter. And in the rich panoply of causes that determine our actions, we can feel the freedom to choose to make a difference by doing the right thing to change the course of our personal histories or global history.
The number of causes and the complexity of their interactions make the predetermination of human action pragmatically impossible. We can even put a figure on the causal net of the universe to see just how absurd it is to think we can get our minds fully around it. Tulane University theoretical physicist Frank Tipler has calculated that in order for a computer in the far future of the universe to resurrect in a virtual reality every person who ever lived or could have lived, with all causal interactions between themselves and their environment, it would need 10 to the power of 10 to the power of 123 bits (a 1 followed by 10 to the power of 123 zeros) of memory. An entity capable of this would be, for all intents and purposes, omniscient and omnipotent, and this is what Tipler calls the Omega Point, or God.34 Suffice it to say that no computer within the conceivable future will achieve this level of power; likewise no human brain even comes close. Thus, as far as we are concerned, the causal net will always be full of holes. Therefore, in the language of this model: human freedom is action taken with an ignorance of causes within a conjuncture of events that compels and is compelled to a certain course of action by constraining prior conditions.
In other words, the sheer scale of this complexity leads us to feel as if we are acting freely as uncaused causers, even though we are actually causally determined. Since no set of causes we select as the determiners of human action can be complete, the feeling of freedom arises out of this ignorance of causes.
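To get a rough feel for the scale of Tipler's figure, note that a computer's arbitrary-precision integers can handle the exponent, though not the number itself. The comparison to the atom count of the observable universe (a standard rough estimate of about 10 to the 80th) is mine, not Tipler's:

```python
# 10**(10**123) bits cannot be written down, let alone stored: merely
# counting the zeros in its decimal form already dwarfs anything physical.
zeros_needed = 10 ** 123        # digits in Tipler's bit count
atoms_in_universe = 10 ** 80    # rough standard estimate
print(zeros_needed // atoms_in_universe)  # 10**43 zeros per atom
```

Even assigning the zeros out to every atom in the observable universe, each atom would need to remember 10 to the 43rd of them just to record the number's length, never mind its value.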
To that extent we may act as if we are free. There is much to gain, little to lose, and personal responsibility follows. I close with William Ernest Henley’s powerful poem “Invictus,” especially fitting since he wrote it when he was terminally ill and in the context of the nineteenth-century push for scientific determinism, as if to say it ain’t so:35

Out of the night that covers me,
Black as the pit from pole to pole,
I thank whatever gods may be
For my unconquerable soul.
 
In the fell clutch of circumstance
I have not winced nor cried aloud.
Under the bludgeonings of chance
My head is bloody, but unbowed.
 
Beyond this place of wrath and tears
Looms but the Horror of the shade,
And yet the menace of the years
Finds and shall find me unafraid.
 
It matters not how strait the gate,
How charged with punishments the scroll,
I am the master of my fate:
I am the captain of my soul.