December 28, 2007, 6:52 pm
The Airport Security Follies
By Patrick Smith
Patrick Smith, a commercial airline pilot, is the author of Salon.com’s weekly Ask the Pilot air travel column; his book of the same name was published in 2004. He lives near Boston.
Six years after the terrorist attacks of 2001, airport security remains a theater of the absurd. The changes put in place following the September 11th catastrophe have been drastic, and largely of two kinds: those practical and effective, and those irrational, wasteful and pointless.
The first variety has taken place almost entirely behind the scenes. Explosives scanning for checked luggage, for instance, was long overdue and is perhaps the most welcome addition. Unfortunately, at concourse checkpoints all across America, the madness of passenger screening continues in plain view. It began with pat-downs and the senseless confiscation of pointy objects. Then came mandatory shoe removal, followed in the summer of 2006 by the prohibition of liquids and gels. We can only imagine what is next.
To understand what makes these measures so absurd, we first need to revisit the morning of September 11th, and grasp exactly what it was the 19 hijackers so easily took advantage of. Conventional wisdom says the terrorists exploited a weakness in airport security by smuggling aboard box-cutters. What they actually exploited was a weakness in our mindset — a set of presumptions based on the decades-long track record of hijackings.
In years past, a takeover meant hostage negotiations and standoffs; crews were trained in the concept of “passive resistance.” All of that changed forever the instant American Airlines Flight 11 collided with the north tower. What weapons the 19 men possessed mattered little; the success of their plan relied fundamentally on the element of surprise. And in this respect, their scheme was all but guaranteed not to fail.
For several reasons — particularly the awareness of passengers and crew — just the opposite is true today. Any hijacker would face a planeload of angry and frightened people ready to fight back. Say what you want of terrorists, they cannot afford to waste time and resources on schemes with a high probability of failure. And thus the September 11th template is all but useless to potential hijackers.
No matter that a deadly sharp can be fashioned from virtually anything found on a plane, be it a broken wine bottle or a snapped-off length of plastic, we are content wasting billions of taxpayer dollars and untold hours of labor in a delusional attempt to thwart an attack that has already happened. Meanwhile, we are asked to queue for absurd lengths of time, subjected to embarrassing pat-downs and the loss of our belongings.
The folly is much the same with respect to the liquids and gels restrictions, introduced two summers ago following the breakup of a London-based cabal that was planning to blow up jetliners using liquid explosives. Allegations surrounding the conspiracy were later revealed to have been substantially embellished. In an August 2006 article in the New York Times, British officials admitted that public statements made following the arrests were overcooked, inaccurate and “unfortunate.” The plot’s leaders were still in the process of recruiting and radicalizing would-be bombers. They lacked passports, airline tickets and, most critical of all, they had been unsuccessful in actually producing liquid explosives. Investigators later described the widely parroted report that up to ten U.S. airliners had been targeted as “speculative” and “exaggerated.”
Among the first to express serious skepticism about the bombers’ readiness was Thomas C. Greene, whose essay in The Register explored the extreme difficulty of mixing and deploying the types of binary explosives the plotters purportedly intended to use. Greene conferred with Professor Jimmie C. Oxley, an explosives specialist who has closely studied the type of deadly cocktail coveted by the London plotters.
“The notion that deadly explosives can be cooked up in an airplane lavatory is pure fiction,” Greene told me during an interview. “A handy gimmick for action movies and shows like ‘24.’ The reality proves disappointing: it’s rather awkward to do chemistry in an airplane toilet. Nevertheless, our official protectors and deciders respond to such notions instinctively, because they’re familiar to us: we’ve all seen scenarios on television and in the cinema. This, incredibly, is why you can no longer carry a bottle of water onto a plane.”
The threat of liquid explosives does exist, but it cannot be readily brewed from the kinds of liquids we have devoted most of our resources to keeping away from planes. Certain benign liquids, when combined under highly specific conditions, are indeed dangerous. However, creating those conditions poses enormous challenges for a saboteur.
“I would not hesitate to allow that liquid explosives can pose a danger,” Greene added, recalling Ramzi Yousef’s 1994 detonation of a small nitroglycerine bomb aboard Philippine Airlines Flight 434. The explosion was a test run for the so-called “Project Bojinka,” an Al Qaeda scheme to simultaneously destroy a dozen widebody airliners over the Pacific Ocean. “But the idea that confiscating someone’s toothpaste is going to keep us safe is too ridiculous to entertain.”
Yet that’s exactly what we’ve been doing. The three-ounce container rule is silly enough — after all, what’s to stop somebody from carrying several small bottles, each full of the same substance? — but consider for a moment the hypocrisy of the T.S.A.’s confiscation policy. At every concourse checkpoint you’ll see a bin or barrel brimming with contraband containers taken from passengers for having exceeded the volume limit. Now, the assumption has to be that the materials in those containers are potentially hazardous. If not, why were they seized in the first place? But if so, why are they dumped unceremoniously into the trash? They are not quarantined or handed over to the bomb squad; they are simply thrown away. The agency seems to be saying that it knows these things are harmless. But it’s going to steal them anyway, and either you accept it or you don’t fly.
But of all the contradictions and self-defeating measures T.S.A. has come up with, possibly none is more blatantly ludicrous than the policy decreeing that pilots and flight attendants undergo the same x-ray and metal detector screening as passengers. What makes it ludicrous is that tens of thousands of other airport workers, from baggage loaders and fuelers to cabin cleaners and maintenance personnel, are subject only to occasional random screenings when they come to work.
These are individuals with full access to aircraft, inside and out. Some are airline employees, though a high percentage are contract staff belonging to outside companies. The fact that crew members, many of whom are former military fliers, and all of whom endured rigorous background checks prior to being hired, are required to take out their laptops and surrender their hobby knives, while a caterer or cabin cleaner sidesteps the entire process and walks onto a plane unimpeded, nullifies almost everything our T.S.A. minders have said and done since September 11th, 2001. If there is a more ringing let-me-get-this-straight scenario anywhere in the realm of airport security, I’d like to hear it.
I’m not suggesting that the rules be tightened for non-crew members so much as relaxed for all accredited workers. That, in turn, prompts us to reconsider the entire purpose of airport security:
The truth is, regardless of how many pointy tools and shampoo bottles we confiscate, there shall remain an unlimited number of ways to smuggle dangerous items onto a plane. The precise shape, form and substance of those items is irrelevant. We are not fighting materials, we are fighting the imagination and cleverness of the would-be saboteur.
Thus, what most people fail to grasp is that the nuts and bolts of keeping terrorists away from planes is not really the job of airport security at all. Rather, it’s the job of government agencies and law enforcement. It’s not very glamorous, but the grunt work of hunting down terrorists takes place far off stage, relying on the diligent work of cops, spies and intelligence officers. Air crimes need to be stopped at the planning stages. By the time a terrorist gets to the airport, chances are it’s too late.
In the end, I’m not sure which is more troubling, the inanity of the existing regulations, or the average American’s acceptance of them and willingness to be humiliated. These wasteful and tedious protocols have solidified into what appears to be indefinite policy, with little or no opposition. There ought to be a tide of protest rising up against this mania. Where is it? At its loudest, the voice of the traveling public is one of grumbled resignation. The op-ed pages are silent, the pundits have nothing meaningful to say.
The airlines, for their part, are in something of a bind. The willingness of our carriers to allow flying to become an increasingly unpleasant experience suggests a business sense of masochistic capitulation. On the other hand, imagine the outrage among security zealots should airlines be caught lobbying for what is perceived to be a dangerous abrogation of security and responsibility — even if it’s not. Carriers caught plenty of flak, almost all of it unfair, in the aftermath of September 11th. Understandably, they no longer want that liability.
As for Americans themselves, I suppose that it’s less than realistic to expect street protests or airport sit-ins from citizen fliers, and maybe we shouldn’t expect too much from a press and media that have had no trouble letting countless other injustices slip to the wayside. And rather than rethink our policies, the best we’ve come up with is a way to skirt them — for a fee, naturally — via schemes like Registered Traveler. Americans can now pay to have their personal information put on file just to avoid the hassle of airport security. As cynical as George Orwell ever was, I doubt he imagined the idea of citizens offering up money for their own subjugation.
How we got to this point is an interesting study in reactionary politics, fear-mongering and a disconcerting willingness of the American public to accept almost anything in the name of “security.” Conned and frightened, our nation demands not actual security, but security spectacle. And although a reasonable percentage of passengers, along with most security experts, would concur such theater serves no useful purpose, there has been surprisingly little outrage. In that regard, maybe we’ve gotten exactly the system we deserve.
Media Politics
On Thursday, Mitt Romney put up a television ad knocking John McCain for opposing tax cuts and for offering amnesty to illegal aliens. The ad calls McCain "an honorable man," then asks, "But is he the right Republican for the future?" Now the McCain team is thinking about using Romney's own words (and his own campaign aides) to respond.
Taxes and immigration have nothing to do with the future, particularly, but the framing is a not-so-subtle jab at McCain's age. In his closing-argument stump speech, Romney is trying to identify himself with the future. "No one votes for yesterday; they vote for tomorrow," Romney said Thursday in New Hampshire. "Elections are about the future, the future of our families, the future of our country."
The McCain team's response is that Romney has to talk about the future because he's spent much of the campaign running from his past. This may become more than a quip if the campaign decides to air the following television ad, which they've had on the shelf since the spring.
The ad hangs Romney with his own words—he advocates for a woman's right to choose and gun control, gets tongue-tied on his own hunting practices, and distances himself from Ronald Reagan. What makes the ad particularly powerful for the McCain team, though, is that it was produced by media wizards who now work for Romney. Stuart Stevens and Russ Schriefer are veterans of the Bush campaign, which so effectively used John Kerry's words against him. They moved from Bush to McCain, but left and moved to Romney after the McCain operation imploded. When they were McCain guys, though, they helped put together this ad and pushed for running it, according to McCain aides and advisers.
MORAL PSYCHOLOGY AND THE MISUNDERSTANDING OF RELIGION
JONATHAN HAIDT is Associate Professor of Psychology at the University of Virginia, where he does research on morality and emotion and how they vary across cultures. He is the author of The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom.
I study morality from every angle I can find. Morality is one of those basic aspects of humanity, like sexuality and eating, that can't fit into one or two academic fields. I think morality is unique, however, in having a kind of spell that disguises it. We all care about morality so passionately that it's hard to look straight at it. We all look at the world through some kind of moral lens, and because most of the academic community uses the same lens, we validate each other's visions and distortions. I think this problem is particularly acute in some of the new scientific writing about religion.
When I started graduate school at Penn in 1987, it seemed that developmental psychology owned the rights to morality within psychology. Everyone was either using or critiquing Lawrence Kohlberg's ideas, as well as his general method of interviewing kids about dilemmas (such as: should Heinz steal a drug to save his wife's life?). Everyone was studying how children's understanding of moral concepts changed with experience. But in the 1990s two books were published that I believe triggered an explosion of cross-disciplinary scientific interest in morality, out of which has come a new synthesis—very much along the lines that E. O. Wilson predicted in 1975.
The first was Antonio Damasio's Descartes' Error, in 1994, which showed a very broad audience that morality could be studied using the then-new technology of fMRI, and also that morality, and rationality itself, were crucially dependent on the proper functioning of emotional circuits in the prefrontal cortex. The second was Frans de Waal's Good Natured, published just two years later, which showed an equally broad audience that the building blocks of human morality are found in other apes and are products of natural selection in the highly social primate lineage. These two books came out just as John Bargh was showing social psychologists that automatic and unconscious processes can and probably do cause the majority of our behaviors, even morally loaded actions (like rudeness or altruism) that we thought we were controlling consciously.
Furthermore, Damasio and Bargh both found, as Michael Gazzaniga had years before, that people couldn't stop themselves from making up post-hoc explanations for whatever it was they had just done for unconscious reasons. Combine these developments and suddenly Kohlbergian moral psychology seemed to be studying the wagging tail, rather than the dog. If the building blocks of morality were shaped by natural selection long before language arose, and if those evolved structures work largely by giving us feelings that shape our behavior automatically, then why should we be focusing on the verbal reasons that people give to explain their judgments in hypothetical moral dilemmas?
In my dissertation and my other early studies, I told people short stories in which a person does something disgusting or disrespectful that was perfectly harmless (for example, a family cooks and eats its dog, after the dog was killed by a car). I was trying to pit the emotion of disgust against reasoning about harm and individual rights.
I found that disgust won in nearly all groups I studied (in Brazil, India, and the United States), except for groups of politically liberal college students, particularly Americans, who overrode their disgust and said that people have a right to do whatever they want, as long as they don't hurt anyone else.
These findings suggested that emotion played a bigger role than the cognitive developmentalists had given it credit for. These findings also suggested that there were important cultural differences, and that academic researchers may have inappropriately focused on reasoning about harm and rights because we primarily study people like ourselves—college students, and also children in private schools near our universities, whose morality is not representative of the United States, let alone the world.
So in the 1990s I was thinking about the role of emotion in moral judgment, I was reading Damasio, De Waal, and Bargh, and I was getting very excited by the synergy and consilience across disciplines. I wrote a review article called "The Emotional Dog and its Rational Tail," which was published in 2001, a month after Josh Greene's enormously influential Science article. Greene used fMRI to show that emotional responses in the brain, not abstract principles of philosophy, explain why people think various forms of the "trolley problem" (in which you have to choose between killing one person or letting five die) are morally different.
Obviously I'm biased in terms of what I notice, but it seems to me that the zeitgeist in moral psychology has changed since 2001. Most people who study morality now read and write about emotions, the brain, chimpanzees, and evolution, as well as reasoning. This is exactly what E. O. Wilson predicted in Sociobiology: that the old approaches to morality, including Kohlberg's, would be swept away or merged into a new approach that focused on the emotive centers of the brain as biological adaptations. Wilson even said that these emotive centers give us moral intuitions, which the moral philosophers then justify while pretending that they are intuiting truths that are independent of the contingencies of our evolved minds.
And now, 30 years later, Josh Greene has a paper in press where he uses neuroscientific evidence to reinterpret Kantian deontological philosophy as a sophisticated post-hoc justification of our gut feelings about rights and respect for other individuals. I think E. O. Wilson deserves more credit than he gets for seeing into the real nature of morality and for predicting the future of moral psychology so uncannily. He's in my pantheon, along with David Hume and Charles Darwin. All three were visionaries who urged us to focus on the moral emotions and their social utility.
I recently summarized this new synthesis in moral psychology with four principles:
1) Intuitive primacy but not dictatorship. This is the idea, going back to Wilhelm Wundt and channeled through Robert Zajonc and John Bargh, that the mind is driven by constant flashes of affect in response to everything we see and hear.
Our brains, like other animal brains, are constantly trying to fine tune and speed up the central decision of all action: approach or avoid. You can't understand the river of fMRI studies on neuroeconomics and decision making without embracing this principle. We have affectively-valenced intuitive reactions to almost everything, particularly to morally relevant stimuli such as gossip or the evening news. Reasoning by its very nature is slow, playing out in seconds.
Studies of everyday reasoning show that we usually use reason to search for evidence to support our initial judgment, which was made in milliseconds. But I do agree with Josh Greene that sometimes we can use controlled processes such as reasoning to override our initial intuitions. I just think this happens rarely, maybe in one or two percent of the hundreds of judgments we make each week. And I do agree with Marc Hauser that these moral intuitions require a lot of computation, which he is unpacking.
Hauser and I mostly disagree on a definitional question: whether this means that "cognition" precedes "emotion." I try never to contrast those terms, because it's all cognition. I think the crucial contrast is between two kinds of cognition: intuitions (which are fast and usually affectively laden) and reasoning (which is slow, cool, and less motivating).
2) Moral thinking is for social doing. This is a play on William James' pragmatist dictum that thinking is for doing, updated by newer work on Machiavellian intelligence. The basic idea is that we did not evolve language and reasoning because they helped us to find truth; we evolved these skills because they were useful to their bearers, and among their greatest benefits were reputation management and manipulation.
Just look at your stream of consciousness when you are thinking about a politician you dislike, or when you have just had a minor disagreement with your spouse. It's like you're preparing for a court appearance. Your reasoning abilities are pressed into service generating arguments to defend your side and attack the other. We are certainly able to reason dispassionately when we have no gut feeling about a case, and no stake in its outcome, but with moral disagreements that's rarely the case. As David Hume said long ago, reason is the servant of the passions.
3) Morality binds and builds. This is the idea stated most forcefully by Emile Durkheim that morality is a set of constraints that binds people together into an emergent collective entity.
Durkheim focused on the benefits that accrue to individuals from being tied in and restrained by a moral order. In his book Suicide he alerted us to the ways that freedom and wealth almost inevitably foster anomie, the dangerous state where norms are unclear and people feel that they can do whatever they want.
Durkheim didn't talk much about conflict between groups, but Darwin thought that such conflicts may have spurred the evolution of human morality. Virtues that bind people to other members of the tribe and encourage self-sacrifice would lead virtuous tribes to vanquish more selfish ones, which would make these traits more prevalent.
Of course, this simple analysis falls prey to the free-rider problem that George Williams and Richard Dawkins wrote so persuasively about. But I think the terms of this debate over group selection have changed radically in the last 10 years, as culture and religion have become central to discussions of the evolution of morality.
I'll say more about group selection in a moment. For now I just want to make the point that humans do form tight, cooperative groups that pursue collective ends and punish cheaters and slackers, and they do this most strongly when in conflict with other groups. Morality is what makes all of that possible.
4) Morality is about more than harm and fairness. In moral psychology and moral philosophy, morality is almost always about how people treat each other. Here's an influential definition from the Berkeley psychologist Elliot Turiel: morality refers to "prescriptive judgments of justice, rights, and welfare pertaining to how people ought to relate to each other."
Kohlberg thought that all of morality, including concerns about the welfare of others, could be derived from the psychology of justice. Carol Gilligan convinced the field that an ethic of "care" had a separate developmental trajectory, and was not derived from concerns about justice.
OK, so there are two psychological systems, one about fairness/justice, and one about care and protection of the vulnerable. And if you look at the many books on the evolution of morality, most of them focus exclusively on those two systems, with long discussions of Robert Trivers' reciprocal altruism (to explain fairness) and of kin altruism and/or attachment theory to explain why we don't like to see suffering and often care for people who are not our children.
But if you try to apply this two-foundation morality to the rest of the world, you either fail or you become Procrustes. Most traditional societies care about a lot more than harm/care and fairness/justice. Why do so many societies care deeply and morally about menstruation, food taboos, sexuality, and respect for elders and the Gods? You can't just dismiss this stuff as social convention. If you want to describe human morality, rather than the morality of educated Western academics, you've got to include the Durkheimian view that morality is in large part about binding people together.
From a review of the anthropological and evolutionary literatures, Craig Joseph (at Northwestern University) and I concluded that there were three best candidates for being additional psychological foundations of morality, beyond harm/care and fairness/justice. These three we label as ingroup/loyalty (which may have evolved from the long history of cross-group or sub-group competition, related to what Joe Henrich calls "coalitional psychology"); authority/respect (which may have evolved from the long history of primate hierarchy, modified by cultural limitations on power and bullying, as documented by Christopher Boehm); and purity/sanctity, which may be a much more recent system, growing out of the uniquely human emotion of disgust, which seems to give people feelings that some ways of living and acting are higher, more noble, and less carnal than others.
Joseph and I think of these foundational systems as expressions of what Dan Sperber calls "learning modules"—they are evolved modular systems that generate, during enculturation, large numbers of more specific modules which help children recognize, quickly and automatically, examples of culturally emphasized virtues and vices. For example, we academics have extremely fine-tuned receptors for sexism (related to fairness) but not sacrilege (related to purity).
Virtues are socially constructed and socially learned, but these processes are highly prepared and constrained by the evolved mind. We call these three additional foundations the binding foundations, because the virtues, practices, and institutions they generate function to bind people together into hierarchically organized interdependent social groups that try to regulate the daily lives and personal habits of their members. We contrast these to the two individualizing foundations (harm/care and fairness/reciprocity), which generate virtues and practices that protect individuals from each other and allow them to live in harmony as autonomous agents who can focus on their own goals.
My UVA colleagues Jesse Graham, Brian Nosek, and I have collected data from about 7,000 people so far on a survey designed to measure people's endorsement of these five foundations. In every sample we've looked at, in the United States and in other Western countries, we find that people who self-identify as liberals endorse moral values and statements related to the two individualizing foundations primarily, whereas self-described conservatives endorse values and statements related to all five foundations. It seems that the moral domain encompasses more for conservatives—it's not just about Gilligan's care and Kohlberg's justice. It's also about Durkheim's issues of loyalty to the group, respect for authority, and sacredness.
I hope you'll accept that as a purely descriptive statement. You can still reject the three binding foundations normatively—that is, you can still insist that ingroup, authority, and purity refer to ancient and dangerous psychological systems that underlie fascism, racism, and homophobia, and you can still claim that liberals are right to reject those foundations and build their moral systems using primarily the harm/care and fairness/reciprocity foundations.
But just go with me for a moment that there is this difference, descriptively, between the moral worlds of secular liberals on the one hand and religious conservatives on the other. There are, of course, many other groups, such as the religious left and the libertarian right, but I think it's fair to say that the major players in the new religion wars are secular liberals criticizing religious conservatives. Because the conflict is a moral conflict, we should be able to apply the four principles of the new synthesis in moral psychology.
In what follows I will take it for granted that religion is a part of the natural world that is appropriately studied by the methods of science. Whether or not God exists (and as an atheist I personally doubt it), religiosity is an enormously important fact about our species. There must be some combination of evolutionary, developmental, neuropsychological, and anthropological theories that can explain why human religious practices take the various forms that they do, many of which are so similar across cultures and eras. I will also take it for granted that religious fundamentalists, and most of those who argue for the existence of God, illustrate the first three principles of moral psychology (intuitive primacy, post-hoc reasoning guided by utility, and a strong sense of belonging to a group bound together by shared moral commitments).
But because the new atheists talk so much about the virtues of science and our shared commitment to reason and evidence, I think it's appropriate to hold them to a higher standard than their opponents. Do these new atheist books model the scientific mind at its best? Or do they reveal normal human beings acting on the basis of their normal moral psychology?
1) Intuitive primacy but not dictatorship. It's clear that Richard Dawkins (in The God Delusion) and Sam Harris (in Letter To A Christian Nation) have strong feelings about religion in general and religious fundamentalists in particular. Given the hate mail they receive, I don't blame them. The passions of Dawkins and Harris don't mean that they are wrong, or that they can't be trusted. One can certainly do good scholarship on slavery while hating slavery.
But the presence of passions should alert us that the authors, being human, are likely to have great difficulty searching for and then fairly evaluating evidence that opposes their intuitive feelings about religion. We can turn to Dawkins and Harris to make the case for the prosecution, which they do brilliantly, but if we readers are to judge religion we will have to find a defense attorney. Or at least we'll have to let the accused speak.
2) Moral thinking is for social doing. This is where the scientific mind is supposed to depart from the lay mind. The normal person (once animated by emotion) engages in moral reasoning to find ammunition, not truth; the normal person attacks the motives and character of her opponents when it will be advantageous to do so. The scientist, in contrast, respects empirical evidence as the ultimate authority and avoids ad hominem arguments. The metaphor for science is a voyage of discovery, not a war. Yet when I read the new atheist books, I see few new shores. Instead I see battlefields strewn with the corpses of straw men. To name three:
a) The new atheists treat religions as sets of beliefs about the world, many of which are demonstrably false. Yet anthropologists and sociologists who study religion stress the role of ritual and community much more than of factual beliefs about the creation of the world or life after death.
b) The new atheists assume that believers, particularly fundamentalists, take their sacred texts literally. Yet ethnographies of fundamentalist communities (such as James Ault's Spirit and Flesh) show that even when people claim to be biblical literalists, they are in fact quite flexible, drawing on the Bible selectively—or ignoring it—to justify humane and often quite modern responses to complex social situations.
c) The new atheists all review recent research on religion and conclude that it is an evolutionary byproduct, not an adaptation. They compare religious sentiments to moths flying into candle flames, ants whose brains have been hijacked for a parasite's benefit, and cold viruses that are universal in human societies. This denial of adaptation is helpful for their argument that religion is bad for people, even when people think otherwise.
I quite agree with these authors' praise of the work of Pascal Boyer and Scott Atran, who have shown how belief in supernatural entities may indeed be an accidental output of cognitive systems that otherwise do a good job of identifying objects and agents. Yet even if belief in gods was initially a byproduct, as long as such beliefs had consequences for behavior then it seems likely that natural selection operated upon phenotypic variation and favored the success of individuals and groups that found ways (genetic or cultural or both) to use these gods to their advantage, for example as commitment devices that enhanced cooperation, trust, and mutual aid.
3) Morality binds and builds. Dawkins is explicit that his goal is to start a movement, to raise consciousness, and to arm atheists with the arguments they'll need to do battle with believers. The view that "we" are virtuous and our opponents are evil is a crucial step in uniting people behind a cause, and there is plenty of that in the new atheist books. A second crucial step is to identify traitors in our midst and punish or humiliate them. There is some of that too in these books—atheists who defend the utility of religion or who argue for disengagement or détente between science and religion are compared to Chamberlain and his appeasement of Hitler.
To my mind an irony of Dawkins' position is that he reveals a kind of religious orthodoxy in his absolute rejection of group selection. David Sloan Wilson has supplemented Durkheim's view of religion (as being primarily about group cohesion) with evolutionary analyses to propose that religion was the conduit that pulled humans through a "major transition" in evolutionary history.
Dawkins, along with George Williams and most critics of group selection, acknowledges that natural selection works on groups as well as on individuals, and that group selection is possible in principle. But Dawkins relies on Williams' argument that selection pressures at the individual level are, in practice, always stronger than those at the group level: free riders will always undercut Darwin's suggestion that morality evolved because virtuous groups outcompeted selfish groups.
Wilson, however, in Darwin's Cathedral, makes the case that culture in general and religion in particular change the variables in Williams' analysis. Religions and their associated practices greatly increase the costs of defection (through punishment and ostracism), increase the contributions of individuals to group efforts (through cultural and emotional mechanisms that increase trust), and sharpen the boundaries — biological and cultural — between groups. Throw in recent discoveries that genetic evolution can work much faster than previously supposed, and the widely respected work of Pete Richerson and Rob Boyd on cultural group selection, and suddenly the old consensus against group selection is outdated.
It's time to examine the question anew. Yet Dawkins has referred to group selection in interviews as a "heresy," and in The God Delusion he dismisses it without giving a reason. In chapter 5 he states the standard Williams free rider objection, notes the argument that religion is a way around the Williams objection, concedes that Darwin believed in group selection, and then moves on. Dismissing a credible position without reasons, and calling it a heresy (even if tongue in cheek), are hallmarks of standard moral thinking, not scientific thinking.
4) Morality is about more than harm and fairness. In Letter to a Christian Nation, Sam Harris gives us a standard liberal definition of morality: "Questions of morality are questions about happiness and suffering… To the degree that our actions can affect the experience of other creatures positively or negatively, questions of morality apply." He then goes on to show that the Bible and the Koran, taken literally, are immoral books because they're not primarily about happiness and suffering, and in many places they advocate harming people.
Reading Harris is like watching professional wrestling or the Harlem Globetrotters. It's great fun, with lots of acrobatics, but it must not be mistaken for an actual contest. If we want to stage a fair fight between religious and secular moralities, we can't eliminate one by definition before the match begins. So here's my definition of morality, which gives each side a chance to make its case:
Moral systems are interlocking sets of values, practices, institutions, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make social life possible.
In my research I have found that there are two common ways that cultures suppress and regulate selfishness, two visions of what society is and how it ought to work. I'll call them the contractual approach and the beehive approach.
The contractual approach takes the individual as the fundamental unit of value. The fundamental problem of social life is that individuals often hurt each other, and so we create implicit social contracts and explicit laws to foster a fair, free, and safe society in which individuals can pursue their interests and develop themselves and their relationships as they choose.
Morality is about happiness and suffering (as Harris says, and as John Stuart Mill said before him), and so contractualists are endlessly trying to fine-tune laws, reinvent institutions, and extend new rights as circumstances change in order to maximize happiness and minimize suffering. To build a contractual morality, all you need are the two individualizing foundations: harm/care, and fairness/reciprocity. The other three foundations, and any religion that builds on them, run afoul of the prime directive: let people make their own choices, as long as they harm nobody else.
The beehive approach, in contrast, takes the group and its territory as fundamental sources of value. Individual bees are born and die by the thousands, but the hive lives for a long time, and each individual has a role to play in fostering its success. The two fundamental problems of social life are attacks from outside and subversion from within. Either one can lead to the death of the hive, so all must pull together, do their duty, and be willing to make sacrifices for the group. Bees don't have to learn how to behave in this way, but human children do, and this is why cultural conservatives are so heavily focused on what happens in schools, families, and the media.
Conservatives generally have a more pessimistic view of human nature than do liberals. They are more likely to believe that if you stand back and give kids space to grow as they please, they'll grow into shallow, self-centered, undisciplined pleasure seekers. Cultural conservatives work hard to cultivate moral virtues based on the three binding foundations: ingroup/loyalty, authority/respect, and purity/sanctity, as well as on the universally employed foundations of harm/care and fairness/reciprocity. The beehive ideal is not a world of maximum freedom; it is a world of order and tradition in which people are united by a shared moral code that is effectively enforced, which enables people to trust each other to play their interdependent roles. It is a world of very high social capital and low anomie.
It might seem obvious to you that contractual societies are good, modern, creative and free, whereas beehive societies reek of feudalism, fascism, and patriarchy. And, as a secular liberal, I agree that contractual societies such as those of Western Europe offer the best hope for living peacefully together in our increasingly diverse modern nations (although it remains to be seen if Europe can solve its current diversity problems).
I just want to make one point, however, that should give contractualists pause: surveys have long shown that religious believers in the United States are happier, healthier, longer-lived, and more generous to charity and to each other than are secular people. Most of these effects have been documented in Europe too. If you believe that morality is about happiness and suffering, then I think you are obligated to take a close look at the way religious people actually live and ask what they are doing right.
Don't dismiss religion on the basis of a superficial reading of the Bible and the newspaper. Might religious communities offer us insights into human flourishing? Can they teach us lessons that would improve wellbeing even in a primarily contractualist society?
You can't use the new atheists as your guide to these lessons, however. They conduct biased reviews of the literature and conclude that there is no good evidence of any benefits except the health benefits of religion. Here is Daniel Dennett in Breaking the Spell on whether religion brings out the best in people:
"Perhaps a survey would show that as a group atheists and agnostics are more respectful of the law, more sensitive to the needs of others, or more ethical than religious people.Certainly no reliable survey has yet been done that shows otherwise. It might be that the best that can be said for religion is that it helps some people achieve the level of citizenship and morality typically found in brights. If you find that conjecture offensive, you need to adjust your perspective. (Breaking the Spell, p. 55.)
I have italicized the two sections that show ordinary moral thinking rather than scientific thinking. The first is Dennett's claim not just that there is no evidence, but that there is certainly no evidence, when in fact surveys have shown for decades that religious practice is a strong predictor of charitable giving. Arthur Brooks recently analyzed these data (in Who Really Cares) and concluded that the enormous generosity of religious believers is not just recycled to religious charities.
Religious believers give more money than secular folk to secular charities, and to their neighbors. They give more of their time, too, and of their blood. Even if you excuse secular liberals from charity because they vote for government welfare programs, it is awfully hard to explain why secular liberals give so little blood. The bottom line, Brooks concludes, is that all forms of giving go together, and all are greatly increased by religious participation and slightly increased by conservative ideology (after controlling for religiosity).
These data are complex and perhaps they can be spun the other way, but at the moment it appears that Dennett is wrong in his reading of the literature. Atheists may have many other virtues, but on one of the least controversial and most objective measures of moral behavior—giving time, money, and blood to help strangers in need—religious people appear to be morally superior to secular folk.
My conclusion is not that secular liberal societies should be made more religious and conservative in a utilitarian bid to increase happiness, charity, longevity, and social capital. Too many valuable rights would be at risk, too many people would be excluded, and societies are so complex that it's impossible to do such social engineering and get only what you bargained for. My point is just that every longstanding ideology and way of life contains some wisdom, some insights into ways of suppressing selfishness, enhancing cooperation, and ultimately enhancing human flourishing.
But because of the four principles of moral psychology it is extremely difficult for people, even scientists, to find that wisdom once hostilities erupt. A militant form of atheism that claims the backing of science and encourages "brights" to take up arms may perhaps advance atheism. But it may also backfire, polluting the scientific study of religion with moralistic dogma and damaging the prestige of science in the process.
I study morality from every angle I can find. Morality is one of those basic aspects of humanity, like sexuality and eating, that can't fit into one or two academic fields. I think morality is unique, however, in having a kind of spell that disguises it. We all care about morality so passionately that it's hard to look straight at it. We all look at the world through some kind of moral lens, and because most of the academic community uses the same lens, we validate each other's visions and distortions. I think this problem is particularly acute in some of the new scientific writing about religion.
When I started graduate school at Penn in 1987, it seemed that developmental psychology owned the rights to morality within psychology. Everyone was either using or critiquing Lawrence Kohlberg's ideas, as well as his general method of interviewing kids about dilemmas (such as: should Heinz steal a drug to save his wife's life?). Everyone was studying how children's understanding of moral concepts changed with experience. But in the 1990s two books were published that I believe triggered an explosion of cross-disciplinary scientific interest in morality, out of which has come a new synthesis—very much along the lines that E. O. Wilsonpredicted in 1975.
The first was Antonio Damasio's Descartes' Error, in 1994, which showed a very broad audience that morality could be studied using the then new technology of fMRI, and also that morality, and rationality itself, were crucially dependent on the proper functioning of emotional circuits in the prefrontal cortex. The second was Frans de Waal's Good Natured, published just two years later, which showed an equally broad audience that the building blocks of human morality are found in other apes and are products of natural selection in the highly social primate lineage. These two books came out just as John Bargh was showing social psychologists that automatic and unconscious processes can and probably do cause the majority of our behaviors, even morally loaded actions (like rudeness or altruism) that we thought we were controlling consciously.
Furthermore, Damasio and Bargh both found, as Michael Gazzaniga had years before, that people couldn't stop themselves from making up post-hoc explanations for whatever it was they had just done for unconscious reasons. Combine these developments and suddenly Kohlbergian moral psychology seemed to be studying the wagging tail, rather than the dog. If the building blocks of morality were shaped by natural selection long before language arose, and if those evolved structures work largely by giving us feelings that shape our behavior automatically, then why should we be focusing on the verbal reasons that people give to explain their judgments in hypothetical moral dilemmas?
In my dissertation and my other early studies, I told people short stories in which a person does something disgusting or disrespectful that was perfectly harmless (for example, a family cooks and eats its dog, after the dog was killed by a car). I was trying to pit the emotion of disgust against reasoning about harm and individual rights.
I found that disgust won in nearly all groups I studied (in Brazil, India, and the United States), except for groups of politically liberal college students, particularly Americans, who overrode their disgust and said that people have a right to do whatever they want, as long as they don't hurt anyone else.
These findings suggested that emotion played a bigger role than the cognitive developmentalists had given it. These findings also suggested that there were important cultural differences, and that academic researchers may have inappropriately focused on reasoning about harm and rights because we primarily study people like ourselves—college students, and also children in private schools near our universities, whose morality is not representative of the United States, let alone the world.
So in the 1990s I was thinking about the role of emotion in moral judgment, I was reading Damasio, De Waal, and Bargh, and I was getting very excited by the synergy and consilience across disciplines. I wrote a review article called "The Emotional Dog and its Rational Tail," which was published in 2001, a month after Josh Greene's enormously influentialScience article. Greene used fMRI to show that emotional responses in the brain, not abstract principles of philosophy, explain why people think various forms of the "trolley problem" (in which you have to choose between killing one person or letting five die) are morally different.
Obviously I'm biased in terms of what I notice, but it seems to me that the zeitgeist in moral psychology has changed since 2001. Most people who study morality now read and write about emotions, the brain, chimpanzees, and evolution, as well as reasoning. This is exactly what E. O. Wilson predicted in Sociobiology: that the old approaches to morality, including Kohlberg's, would be swept away or merged into a new approach that focused on the emotive centers of the brain as biological adaptations. Wilson even said that these emotive centers give us moral intuitions, which the moral philosophers then justify while pretending that they are intuiting truths that are independent of the contingencies of our evolved minds.
And now, 30 years later, Josh Greene has a paper in press where he uses neuroscientific evidence to reinterpret Kantian deontological philosophy as a sophisticated post-hoc justification of our gut feelings about rights and respect for other individuals. I think E. O. Wilson deserves more credit than he gets for seeing into the real nature of morality and for predicting the future of moral psychology so uncannily. He's in my pantheon, along with David Hume and Charles Darwin. All three were visionaries who urged us to focus on the moral emotions and their social utility.
I recently summarized this new synthesis in moral psychology with four principles:
1) Intuitive primacy but not dictatorship. This is the idea, going back to Wilhelm Wundt and channeled through Robert Zajonc and John Bargh, that the mind is driven by constant flashes of affect in response to everything we see and hear.
Our brains, like other animal brains, are constantly trying to fine tune and speed up the central decision of all action: approach or avoid. You can't understand the river of fMRI studies on neuroeconomics and decision making without embracing this principle. We have affectively-valenced intuitive reactions to almost everything, particularly to morally relevant stimuli such as gossip or the evening news. Reasoning by its very nature is slow, playing out in seconds.
Studies of everyday reasoning show that we usually use reason to search for evidence to support our initial judgment, which was made in milliseconds. But I do agree with Josh Greene that sometimes we can use controlled processes such as reasoning to override our initial intuitions. I just think this happens rarely, maybe in one or two percent of the hundreds of judgments we make each week. And I do agree with Marc Hauser that these moral intuitions require a lot of computation, which he is unpacking.
Hauser and I mostly disagree on a definitional question: whether this means that "cognition" precedes "emotion." I try never to contrast those terms, because it's all cognition. I think the crucial contrast is between two kinds of cognition: intuitions (which are fast and usually affectively laden) and reasoning (which is slow, cool, and less motivating).
2) Moral thinking is for social doing. This is a play on William James' pragmatist dictum that thinking is for doing, updated by newer work on Machiavellian intelligence. The basic idea is that we did not evolve language and reasoning because they helped us to find truth; we evolved these skills because they were useful to their bearers, and among their greatest benefits were reputation management and manipulation.
Just look at your stream of consciousness when you are thinking about a politician you dislike, or when you have just had a minor disagreement with your spouse. It's like you're preparing for a court appearance. Your reasoning abilities are pressed into service generating arguments to defend your side and attack the other. We are certainly able to reason dispassionately when we have no gut feeling about a case, and no stake in its outcome, but with moral disagreements that's rarely the case. As David Hume said long ago, reason is the servant of the passions.
3) Morality binds and builds. This is the idea stated most forcefully by Emile Durkheim that morality is a set of constraints that binds people together into an emergent collective entity.
Durkheim focused on the benefits that accrue to individuals from being tied in and restrained by a moral order. In his book Suicide he alerted us to the ways that freedom and wealth almost inevitably foster anomie, the dangerous state where norms are unclear and people feel that they can do whatever they want.
Durkheim didn't talk much about conflict between groups, but Darwin thought that such conflicts may have spurred the evolution of human morality. Virtues that bind people to other members of the tribe and encourage self-sacrifice would lead virtuous tribes to vanquish more selfish ones, which would make these traits more prevalent.
Of course, this simple analysis falls prey to the free-rider problem thatGeorge Williams and Richard Dawkins wrote so persuasively about. But I think the terms of this debate over group selection have changed radically in the last 10 years, as culture and religion have become central to discussions of the evolution of morality.
I'll say more about group selection in a moment. For now I just want to make the point that humans do form tight, cooperative groups that pursue collective ends and punish cheaters and slackers, and they do this most strongly when in conflict with other groups. Morality is what makes all of that possible.
4) Morality is about more than harm and fairness. In moral psychology and moral philosophy, morality is almost always about how people treat each other. Here's an influential definition from the Berkeley psychologist Elliot Turiel: morality refers to "prescriptive judgments of justice, rights, and welfare pertaining to how people ought to relate to each other."
Kohlberg thought that all of morality, including concerns about the welfare of others, could be derived from the psychology of justice. Carol Gilligan convinced the field that an ethic of "care" had a separate developmental trajectory, and was not derived from concerns about justice.
OK, so there are two psychological systems, one about fairness/justice, and one about care and protection of the vulnerable. And if you look at the many books on the evolution of morality, most of them focus exclusively on those two systems, with long discussions of Robert Trivers' reciprocal altruism (to explain fairness) and of kin altruism and/or attachment theory to explain why we don't like to see suffering and often care for people who are not our children.
But if you try to apply this two-foundation morality to the rest of the world, you either fail or you become Procrustes. Most traditional societies care about a lot more than harm/care and fairness/justice. Why do so many societies care deeply and morally about menstruation, food taboos, sexuality, and respect for elders and the Gods? You can't just dismiss this stuff as social convention. If you want to describe human morality, rather than the morality of educated Western academics, you've got to include the Durkheimian view that morality is in large part about binding people together.
From a review of the anthropological and evolutionary literatures, Craig Joseph (at Northwestern University) and I concluded that there were three best candidates for being additional psychological foundations of morality, beyond harm/care and fairness/justice. These three we label as ingroup/loyalty (which may have evolved from the long history of cross-group or sub-group competition, related to what Joe Henrich calls "coalitional psychology"); authority/respect (which may have evolved from the long history of primate hierarchy, modified by cultural limitations on power and bullying, as documented by Christopher Boehm); and purity/sanctity, which may be a much more recent system, growing out of the uniquely human emotion of disgust, which seems to give people feelings that some ways of living and acting are higher, more noble, and less carnal than others.
Joseph and I think of these foundational systems as expressions of what Dan Sperber calls "learning modules"—they are evolved modular systems that generate, during enculturation, large numbers of more specific modules which help children recognize, quickly and automatically, examples of culturally emphasized virtues and vices. For example, we academics have extremely fine-tuned receptors for sexism (related to fairness) but not sacrilege (related to purity).
Virtues are socially constructed and socially learned, but these processes are highly prepared and constrained by the evolved mind. We call these three additional foundations the binding foundations, because the virtues, practices, and institutions they generate function to bind people together into hierarchically organized interdependent social groups that try to regulate the daily lives and personal habits of their members. We contrast these to the two individualizing foundations (harm/care and fairness/reciprocity), which generate virtues and practices that protect individuals from each other and allow them to live in harmony as autonomous agents who can focus on their own goals.
My UVA colleagues Jesse Graham, Brian Nosek, and I have collected data from about 7,000 people so far on a survey designed to measure people's endorsement of these five foundations. In every sample we've looked at, in the United States and in other Western countries, we find that people who self-identify as liberals endorse moral values and statements related to the two individualizing foundations primarily, whereas self-described conservatives endorse values and statements related to all five foundations. It seems that the moral domain encompasses more for conservatives—it's not just about Gilligan's care and Kohlberg's justice. It's also about Durkheim's issues of loyalty to the group, respect for authority, and sacredness.
I hope you'll accept that as a purely descriptive statement. You can still reject the three binding foundations normatively—that is, you can still insist that ingroup, authority, and purity refer to ancient and dangerous psychological systems that underlie fascism, racism, and homophobia, and you can still claim that liberals are right to reject those foundations and build their moral systems using primarily the harm/care and fairness/reciprocity foundations.
But just grant me for a moment that there is this difference, descriptively, between the moral worlds of secular liberals on the one hand and religious conservatives on the other. There are, of course, many other groups, such as the religious left and the libertarian right, but I think it's fair to say that the major players in the new religion wars are secular liberals criticizing religious conservatives. Because the conflict is a moral conflict, we should be able to apply the four principles of the new synthesis in moral psychology.
In what follows I will take it for granted that religion is a part of the natural world that is appropriately studied by the methods of science. Whether or not God exists (and as an atheist I personally doubt it), religiosity is an enormously important fact about our species. There must be some combination of evolutionary, developmental, neuropsychological, and anthropological theories that can explain why human religious practices take the various forms that they do, many of which are so similar across cultures and eras. I will also take it for granted that religious fundamentalists, and most of those who argue for the existence of God, illustrate the first three principles of moral psychology (intuitive primacy, post-hoc reasoning guided by utility, and a strong sense of belonging to a group bound together by shared moral commitments).
But because the new atheists talk so much about the virtues of science and our shared commitment to reason and evidence, I think it's appropriate to hold them to a higher standard than their opponents. Do these new atheist books model the scientific mind at its best? Or do they reveal normal human beings acting on the basis of their normal moral psychology?
1) Intuitive primacy but not dictatorship. It's clear that Richard Dawkins (in The God Delusion) and Sam Harris (in Letter to a Christian Nation) have strong feelings about religion in general and religious fundamentalists in particular. Given the hate mail they receive, I don't blame them. The passions of Dawkins and Harris don't mean that they are wrong, or that they can't be trusted. One can certainly do good scholarship on slavery while hating slavery.
But the presence of passions should alert us that the authors, being human, are likely to have great difficulty searching for and then fairly evaluating evidence that opposes their intuitive feelings about religion. We can turn to Dawkins and Harris to make the case for the prosecution, which they do brilliantly, but if we readers are to judge religion we will have to find a defense attorney. Or at least we'll have to let the accused speak.
2) Moral thinking is for social doing. This is where the scientific mind is supposed to depart from the lay mind. The normal person (once animated by emotion) engages in moral reasoning to find ammunition, not truth; the normal person attacks the motives and character of her opponents when it will be advantageous to do so. The scientist, in contrast, respects empirical evidence as the ultimate authority and avoids ad hominem arguments. The metaphor for science is a voyage of discovery, not a war. Yet when I read the new atheist books, I see few new shores. Instead I see battlefields strewn with the corpses of straw men. To name three:
a) The new atheists treat religions as sets of beliefs about the world, many of which are demonstrably false. Yet anthropologists and sociologists who study religion stress the role of ritual and community much more than of factual beliefs about the creation of the world or life after death.
b) The new atheists assume that believers, particularly fundamentalists, take their sacred texts literally. Yet ethnographies of fundamentalist communities (such as James Ault's Spirit and Flesh) show that even when people claim to be biblical literalists, they are in fact quite flexible, drawing on the Bible selectively—or ignoring it—to justify humane and often quite modern responses to complex social situations.
c) The new atheists all review recent research on religion and conclude that it is an evolutionary byproduct, not an adaptation. They compare religious sentiments to moths flying into candle flames, ants whose brains have been hijacked for a parasite's benefit, and cold viruses that are universal in human societies. This denial of adaptation is helpful for their argument that religion is bad for people, even when people think otherwise.
I quite agree with these authors' praise of the work of Pascal Boyer and Scott Atran, who have shown how belief in supernatural entities may indeed be an accidental output of cognitive systems that otherwise do a good job of identifying objects and agents. Yet even if belief in gods was initially a byproduct, as long as such beliefs had consequences for behavior then it seems likely that natural selection operated upon phenotypic variation and favored the success of individuals and groups that found ways (genetic or cultural or both) to use these gods to their advantage, for example as commitment devices that enhanced cooperation, trust, and mutual aid.
3) Morality binds and builds. Dawkins is explicit that his goal is to start a movement, to raise consciousness, and to arm atheists with the arguments they'll need to do battle with believers. The view that "we" are virtuous and our opponents are evil is a crucial step in uniting people behind a cause, and there is plenty of that in the new atheist books. A second crucial step is to identify traitors in our midst and punish or humiliate them. There is some of that too in these books—atheists who defend the utility of religion or who argue for disengagement or détente between science and religion are compared to Chamberlain and his appeasement of Hitler.
To my mind an irony of Dawkins' position is that he reveals a kind of religious orthodoxy in his absolute rejection of group selection. David Sloan Wilson has supplemented Durkheim's view of religion (as being primarily about group cohesion) with evolutionary analyses to propose that religion was the conduit that pulled humans through a "major transition" in evolutionary history.
Dawkins, along with George Williams and most critics of group selection, acknowledges that natural selection works on groups as well as on individuals, and that group selection is possible in principle. But Dawkins relies on Williams' argument that selection pressures at the individual level are, in practice, always stronger than those at the group level: free riders will always undercut Darwin's suggestion that morality evolved because virtuous groups outcompeted selfish groups.
Wilson, however, in Darwin's Cathedral, makes the case that culture in general and religion in particular change the variables in Williams' analysis. Religions and their associated practices greatly increase the costs of defection (through punishment and ostracism), increase the contributions of individuals to group efforts (through cultural and emotional mechanisms that increase trust), and sharpen the boundaries — biological and cultural — between groups. Throw in recent discoveries that genetic evolution can work much faster than previously supposed, and the widely respected work of Pete Richerson and Rob Boyd on cultural group selection, and suddenly the old consensus against group selection is outdated.
It's time to examine the question anew. Yet Dawkins has referred to group selection in interviews as a "heresy," and in The God Delusion he dismisses it without giving a reason. In chapter 5 he states the standard Williams free rider objection, notes the argument that religion is a way around the Williams objection, concedes that Darwin believed in group selection, and then moves on. Dismissing a credible position without reasons, and calling it a heresy (even if tongue in cheek), are hallmarks of standard moral thinking, not scientific thinking.
4) Morality is about more than harm and fairness. In Letter to a Christian Nation, Sam Harris gives us a standard liberal definition of morality: "Questions of morality are questions about happiness and suffering… To the degree that our actions can affect the experience of other creatures positively or negatively, questions of morality apply." He then goes on to show that the Bible and the Koran, taken literally, are immoral books because they're not primarily about happiness and suffering, and in many places they advocate harming people.
Reading Harris is like watching professional wrestling or the Harlem Globetrotters. It's great fun, with lots of acrobatics, but it must not be mistaken for an actual contest. If we want to stage a fair fight between religious and secular moralities, we can't eliminate one by definition before the match begins. So here's my definition of morality, which gives each side a chance to make its case:
Moral systems are interlocking sets of values, practices, institutions, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make social life possible.
In my research I have found that there are two common ways that cultures suppress and regulate selfishness, two visions of what society is and how it ought to work. I'll call them the contractual approach and the beehive approach.
The contractual approach takes the individual as the fundamental unit of value. The fundamental problem of social life is that individuals often hurt each other, and so we create implicit social contracts and explicit laws to foster a fair, free, and safe society in which individuals can pursue their interests and develop themselves and their relationships as they choose.
Morality is about happiness and suffering (as Harris says, and as John Stuart Mill said before him), and so contractualists are endlessly trying to fine-tune laws, reinvent institutions, and extend new rights as circumstances change in order to maximize happiness and minimize suffering. To build a contractual morality, all you need are the two individualizing foundations: harm/care, and fairness/reciprocity. The other three foundations, and any religion that builds on them, run afoul of the prime directive: let people make their own choices, as long as they harm nobody else.
The beehive approach, in contrast, takes the group and its territory as fundamental sources of value. Individual bees are born and die by the thousands, but the hive lives for a long time, and each individual has a role to play in fostering its success. The two fundamental problems of social life are attacks from outside and subversion from within. Either one can lead to the death of the hive, so all must pull together, do their duty, and be willing to make sacrifices for the group. Bees don't have to learn how to behave in this way but human children do, and this is why cultural conservatives are so heavily focused on what happens in schools, families, and the media.
Conservatives generally have a more pessimistic view of human nature than do liberals. They are more likely to believe that if you stand back and give kids space to grow as they please, they'll grow into shallow, self-centered, undisciplined pleasure seekers. Cultural conservatives work hard to cultivate moral virtues based on the three binding foundations: ingroup/loyalty, authority/respect, and purity/sanctity, as well as on the universally employed foundations of harm/care and fairness/reciprocity. The beehive ideal is not a world of maximum freedom; it is a world of order and tradition in which people are united by a shared moral code that is effectively enforced, which enables people to trust each other to play their interdependent roles. It is a world of very high social capital and low anomie.
It might seem obvious to you that contractual societies are good, modern, creative and free, whereas beehive societies reek of feudalism, fascism, and patriarchy. And, as a secular liberal, I agree that contractual societies such as those of Western Europe offer the best hope for living peacefully together in our increasingly diverse modern nations (although it remains to be seen if Europe can solve its current diversity problems).
I just want to make one point, however, that should give contractualists pause: surveys have long shown that religious believers in the United States are happier, healthier, longer-lived, and more generous to charity and to each other than are secular people. Most of these effects have been documented in Europe too. If you believe that morality is about happiness and suffering, then I think you are obligated to take a close look at the way religious people actually live and ask what they are doing right.
Don't dismiss religion on the basis of a superficial reading of the Bible and the newspaper. Might religious communities offer us insights into human flourishing? Can they teach us lessons that would improve well-being even in a primarily contractualist society?
You can't use the new atheists as your guide to these lessons. They conduct biased reviews of the literature and conclude that there is no good evidence of any benefits of religion beyond its health benefits. Here is Daniel Dennett in Breaking the Spell on whether religion brings out the best in people:
"Perhaps a survey would show that as a group atheists and agnostics are more respectful of the law, more sensitive to the needs of others, or more ethical than religious people.Certainly no reliable survey has yet been done that shows otherwise. It might be that the best that can be said for religion is that it helps some people achieve the level of citizenship and morality typically found in brights. If you find that conjecture offensive, you need to adjust your perspective. (Breaking the Spell, p. 55.)
I have italicized the two sections that show ordinary moral thinking rather than scientific thinking. The first is Dennett's claim not just that there is no evidence, but that there is certainly no evidence, when in fact surveys have shown for decades that religious practice is a strong predictor of charitable giving. Arthur Brooks recently analyzed these data (in Who Really Cares) and concluded that the enormous generosity of religious believers is not just recycled to religious charities.
Religious believers give more money than secular folk to secular charities, and to their neighbors. They give more of their time, too, and of their blood. Even if you excuse secular liberals from charity because they vote for government welfare programs, it is awfully hard to explain why secular liberals give so little blood. The bottom line, Brooks concludes, is that all forms of giving go together, and all are greatly increased by religious participation and slightly increased by conservative ideology (after controlling for religiosity).
These data are complex and perhaps they can be spun the other way, but at the moment it appears that Dennett is wrong in his reading of the literature. Atheists may have many other virtues, but on one of the least controversial and most objective measures of moral behavior—giving time, money, and blood to help strangers in need—religious people appear to be morally superior to secular folk.
My conclusion is not that secular liberal societies should be made more religious and conservative in a utilitarian bid to increase happiness, charity, longevity, and social capital. Too many valuable rights would be at risk, too many people would be excluded, and societies are so complex that it's impossible to do such social engineering and get only what you bargained for. My point is just that every longstanding ideology and way of life contains some wisdom, some insights into ways of suppressing selfishness, enhancing cooperation, and ultimately enhancing human flourishing.
But because of the four principles of moral psychology it is extremely difficult for people, even scientists, to find that wisdom once hostilities erupt. A militant form of atheism that claims the backing of science and encourages "brights" to take up arms may perhaps advance atheism. But it may also backfire, polluting the scientific study of religion with moralistic dogma and damaging the prestige of science in the process.
Saturday, December 22
Why Obama is the New Clinton
The New York Times
December 23, 2007
The Clinton Referendum
By MATT BAI
Winter’s first storm punished the White Mountains of New Hampshire on the Friday before Thanksgiving, rendering the terrain all but impassable. And yet in Gorham, a small town 50 miles from the Canadian border, hundreds of people shivered patiently in the snow, in a line that snaked halfway around Gorham Middle-High School, while Secret Service dogs sniffed the gymnasium for bombs. “I’ve got a lot of people freezing out here,” a campaign aide barked into a phone, as if this might make the agents go any faster. When they finally allowed everyone in, a few of the 500 or so folding chairs remained unfilled, but the place was humming with excitement; a teacher near me was saying that this was the biggest thing to happen here since Dwight Eisenhower visited in the 1950s.
For the first time since that infamous year of 1992 — the year when Gennifer Flowers, “Stand by Your Man” and “the Comeback Kid” entered the political canon — Bill Clinton was coming back to New Hampshire’s North Country, the place where his legend was born. Clinton loves the Granite State. As it happened, I was standing with him earlier that week in South Carolina when an aide told him that he was going to be campaigning for Hillary in New Hampshire, and his eyes lighted up behind his reading glasses. “I am? Where’m I goin’?” Now he strode into the gymnasium through a side door, his face flushed with emotion, accompanied by the nostalgic bars of Fleetwood Mac’s “Don’t Stop.” (They were nostalgic back then, for crying out loud.) Clinton is now lean and regal, his hair an almost metallic white, and he was dressed in a taupe suit with a light green tie, trailing a small entourage and waving warmly. The room erupted in cheers and whistles. Over his head a banner proclaimed: “The Change We Need! HillaryClinton.com.”
“When Hillary first announced she was running for president, she came right to the North Country, and I was so jealous,” Clinton said. “I want to thank you for arranging the snow today. It made me feel right at home. I took a nap in the car, and when I woke up I thought it was 1992.” The crowd laughed appreciatively. Many in the audience probably recalled that he had all but lived in these parts for a year before that campaign; after his election, he even gave some of the families he met along the way his special ZIP code at the White House, so they could keep in touch.
Clinton began his speech, as he always does now, with a disclaimer, saying that if he wanted to, he could certainly give a big “whoop-dee-do” speech that would get everybody riled up, but that this was a serious time in America, and it deserved a serious speech. Clinton doesn’t like to play an overtly political role anymore; he enjoys the statesmanlike aura that surrounds any ex-president, and he is not about to undermine it, even for his wife’s campaign. Instead, he spoke to the Gorham audience in somber tones, telling them that a lot of the crises now confronting the North Country brought to mind 1992 as well. The paper mill in nearby Groveton had just announced it would close a few days after Christmas, kicking 300 workers to the street.
“You’re hurtin’ up here because of this mill closing,” he said. “But you should know just how close millions upon millions of your fellow Americans are to your experience.” He went on to quietly castigate the Bush administration for running up foreign debt and straining the military to its limits in Iraq, and he talked about Hillary’s plans to bring health-care coverage to all Americans, build a new jobs program around alternative energy and revamp the education system, beginning with early-childhood programs. “A lot of you already know this,” he said of his wife’s work on education issues in Arkansas, “because I talked about it when I was running.”
Even without the allusions to the old days, his speech seemed strangely reminiscent of that first campaign, and not necessarily in a good way. Listening to him talk, I found it hard not to wonder why so many of the challenges facing the next president were almost identical to those he vowed to address in 1992. Why, after Clinton’s two terms in office, were we still thinking about tomorrow? In some areas, most notably health care, Clinton tried gamely to leave behind lasting change, and he failed. In many more areas, though, the progress that was made under Clinton — almost 23 million new jobs, reductions in poverty, lower crime and higher wages — had been reversed or wiped away entirely in a remarkably short time. Clinton’s presidency seems now to have been oddly ephemeral, his record etched in chalk and left out in the rain.
Supporters of the Clintons see an obvious reason for this, of course — that George W. Bush and his Republican Party have, for the past seven years, undertaken a ferocious and unbending assault on Clinton’s progressive legacy. As Clinton points out in his speeches, Bush and the Republicans abandoned balanced budgets to fight the war in Iraq, widened income inequality by cutting taxes on the wealthy and scaled back social programs. “We’ve had now seven years of a radical experiment in extremism in domestic policy,” Clinton said in New Hampshire.
Some Democrats, though, and especially those who are apt to call themselves “progressives,” offer a more complicated and less charitable explanation. In their view, Clinton failed to seize his moment and create a more enduring, more progressive legacy — not just because of the personal travails and Republican attacks that hobbled his presidency, but because his centrist, “third way” political strategy, his strategy of “triangulating” to find some middle point in every argument, sapped the party of its core principles. By this thinking, Clinton and his friends at the Democratic Leadership Council, the centrist think tank that served as a platform for his bid for national office, were so desperate to woo back moderate Southern voters that they accepted conservative assertions about government (that it was too big and unwieldy, that what was good for business was good for workers) and thus opened the door wide for Bush to come along and enact his extremist agenda with only token opposition. In other words, they say, he was less a victim of Bush’s radicalism than he was its enabler.
“His budget policies were pretty much an extension of Bush I, and his economic policies were largely an extension of Wall Street,” says Robert Borosage, co-director of the left-wing Campaign for America’s Future. Ideologically, Borosage told me, Clinton’s presidency fit snugly into the era of Reagan and Bush. Faced with ascendant conservatism, he says, “Clinton saw his job, in a sense, as getting the Democratic Party to adjust to it, rather than to resist it.”
Aside from a few partisans on each end of the spectrum, there aren’t neatly delineated camps on this question, with Clinton lovers on one side and critics on the other. Rather, a lot of Democrats seem genuinely conflicted, on practically an existential level, when it comes to Clinton. They almost uniformly admire the former president; 82 percent of Democrats polled by Fox News in November had a favorable opinion of Clinton, and, in a New York Times poll released earlier this month, 44 percent of Democratic voters said they were more inclined to support Hillary’s candidacy because of him. And yet, they regard with suspicion, if not outright resentment, the centrist forces he helped unleash on the party. They might love Bill Clinton, but they loathe Clintonism. And it is this conflict that has, in recent weeks, become a subtle but important theme of the 2008 campaign, as Hillary Clinton’s rivals try to portray her as the Return of the Great Triangulator. Whatever else these Democratic primaries may be about — health-care plans, global warming, timetables for withdrawal from Iraq — they are, on some more philosophical and even emotional level, a judgment on the ’90s and all that those tumultuous years represent.
Hillary Clinton’s combative advisers say they welcome that dynamic. “If our opponents want to make this a referendum on Bill Clinton’s presidency, they are making a mistake,” Howard Wolfson, Clinton’s communications director, said in an e-mail message, “both because it’s a referendum they would lose on the merits and because Democrats are focused on the future and the change that needs to be made going forward.” And yet Clinton’s team often seems perplexed by a political quandary unlike any that has come before: how to exploit all the good will that Democrats have for Bill Clinton without allowing Hillary Clinton to become a constant reminder of the things they didn’t like about his presidency. Generally, the campaign’s preferred solution is simply not to talk about it. When I asked Bill Clinton about this issue, during an informal meeting in South Carolina, he readily agreed to sit down for a longer interview on his legacy’s role in the campaign. A few weeks later, however, and at the last minute, Hillary’s aides canceled the interview. Famously controlling, they would not even allow the former president to talk about his record.
Listening to Bill Clinton that day in New Hampshire, however, it was clear that whether or not he talks about it, his wife’s fortunes are bound up with his, and vice versa. Near the end of his speech in Gorham, he went off on an engaging tangent, as he sometimes does, about the trees he saw from his car window that morning, and how at one time New Hampshire was almost devoid of trees, and how Teddy Roosevelt led a national effort to replenish the forests. “But Theodore Roosevelt proposed a lot of ideas that fell flat on their face until Franklin Roosevelt passed them,” Clinton went on. “The important thing for us to do is to fight for the right thing and keep fighting for it until we finally get it done.” I had heard Clinton compare himself with T.R. before, but this was the first time I heard him do so publicly, and it struck me as an aside that would have made his wife’s advisers wince, if they noticed it. He seemed to be suggesting that Hillary’s job as president would be to cement his own unfinished legacy — provided, of course, that his legacy, or at least a widely held perception of it, didn’t end up derailing her first.
A little over a year ago, while working on a book about the Democratic Party’s divisions, I discussed that legacy with Bill Clinton in his Harlem office. Hillary Clinton had just begun running for the White House, and her husband was already trying to help neutralize her critics on the left; when I arrived at the office, Clinton was meeting with about 20 influential bloggers, who were gnawing on barbecued chicken and enjoying their first-ever audience with a former president. When I entered his office a while later, Clinton had his back to me and was busy rearranging the photos on his shelves, as if trying to get the visual narrative of his presidency exactly right. He recited a litany of his accomplishments — the first sustained rise in real wages since 1973, the biggest land-protection measure in the lower 48 since Teddy Roosevelt, victories against the tobacco and gun lobbies — and told me he couldn’t understand the allegation that his administration wasn’t really progressive.
“I think that if ‘progressive’ is defined by results, whether it’s in health care, education, incomes, the environment or the advancement of peace, then we had a very progressive administration,” Clinton said. “I think we changed the methods — that we tried also to reflect basic American values, that we tried to do it in a way that appealed to the broad middle class in America. We sure did, and I don’t apologize for that. The question is: Were the policies right or not? And I think in terms of the political success I enjoyed, people have given more credit to my political skills than they deserve and less credit to the weight, the body of the ideas.”
At the end of that interview, as he walked me to the lobby, Clinton mentioned a favorite quote from Machiavelli’s book “The Prince” and told me to look it up. When I got back to Washington, I thumbed through the book until I found the rambling passage, and this is what it said:
It must be considered that there is nothing more difficult to carry out, nor more doubtful of success, nor more dangerous to handle, than to initiate a new order of things. For the reformer has enemies in all those who profit by the old order, and only lukewarm defenders in all those who would profit by the new order, this lukewarmness arising partly from fear of their adversaries, who have the laws in their favor; and partly from the incredulity of mankind, who do not truly believe in anything new until they have had an actual experience of it. Thus it arises that on every opportunity for attacking the reformer, the opponents do so with the zeal of partisans, the others only defend him halfheartedly, so that between them he runs great danger.
It’s not hard to see why the postpresidential Bill Clinton sees himself in this quotation, and it says a lot about how he views his own place in American politics. In Clinton’s mind, the New Democrats of the late ’80s and early ’90s and their “third way” approach represented a call for fundamental reform, not just of the Democratic Party but also of the country’s industrial-age government. For that, he has been pilloried by Republican business interests, who were doing just fine under the old system, and “lukewarmly” defended by Democrats who resist any real break with the past.
There are, among Democrats, dueling interpretations of what Clintonism means and how it came into being. The most popular version now, by far, is that Clintonism was chiefly an electoral strategy, a way of making Democrats sound more acceptable to conservative voters by softening the party’s stances on “values” issues like guns, welfare and abortion and introducing pallid, focus-grouped phrases like “work hard and play by the rules” and making abortion “safe, legal and rare.” In other words, Clinton was basically as liberal at heart as any other Democrat who marched for civil rights and protested the Vietnam War, but he was a brilliant political strategist who instinctively understood the need to rebrand the party.
Even some of Clinton’s friends from the old days — those lukewarm defenders of the faith — accept this basic version of history. “Clintonism was about winning,” says Susan Estrich, the longtime Democratic strategist and pundit. “It was about grabbing victory from the jaws of defeat. If you were a Democrat of a certain age, it was like being a Red Sox fan — you never won. And even when you won, you lost, because you got Jimmy Carter. Clinton led us out of the desert when no one else could.”
On the other hand, Clinton’s more ardent supporters, those few who were there at the beginning, argue that Democrats have badly miscast him as an expedient strategist, when in fact he was a visionary and a modernizer. “He used to tell me all the time, ‘One of these days, people are going to figure out that I actually believe in this stuff,’ ” Al From told me recently. From and the Democratic Leadership Council that he founded in the 1980s have in recent years become a kind of convenient stand-in for Clinton, the main object of acid derision from liberal bloggers who prefer to savage someone other than the former president himself for the evils of Clintonism. Clinton was the chairman of the D.L.C. when he ran for president, and much of his campaign rhetoric came from its work.
“I don’t want to see what I think is his greatest achievement diminished,” From told me. “Just as Franklin Roosevelt saved capitalism by dealing with its excesses, Clinton saved progressive governance, and he saved progressive governance all over the world.”
Clinton’s critics on the left may scoff at this idea, but it’s fair to say that the discussion of Clintonism among party activists and especially online often displays a stunning lack of historical perspective. For a lot of younger Democrats, in particular, whose political consciousness dates back only as far as 1994 or even to the more recent days of Clinton’s impeachment, the origins of Clintonism have become not only murky but also irrelevant. “Clintonism” is, in much of the Democratic activist universe, a synonym for spinelessly appeasing Republicans in order to win, an establishment philosophy assumed to comprise no inherent principles of its own.
Lost in all this is the fact that, back in the day, Clinton and his New Democrats were themselves the outsiders taking on the ruling interest groups of the Democratic establishment, the analog to bloggers and MoveOn.org activists, albeit from a different ideological direction. And it took no small amount of courage, at the end of the Reagan era, to argue inside the Democratic Party that the liberal orthodoxies of the New Deal and the Great Society, as well as the culture of the antiwar and civil rights movements, had become excessive and inflexible. Not only were Democratic attitudes toward government electorally problematic, Clinton argued; they were just plain wrong for the time.
Immediately after assuming the chairmanship of the D.L.C. in 1990, Clinton issued something called the New Orleans Declaration, which laid out the D.L.C.’s attack on old liberalism in a series of 15 core principles. By today’s standards, these principles don’t amount to much more than typical Clintonian rhetoric, but at the time, they seemed like a good way for a young Democratic governor to permanently marginalize himself in a party dominated by Big Labor, civil rights leaders and Northeastern liberals. Among the stated principles in the manifesto:
“We believe that economic growth is the prerequisite to expanding opportunity for everyone. The free market, regulated in the public interest, is the best engine of general prosperity.”
“We believe in preventing crime and punishing criminals, not in explaining away their behavior.”
“We believe the purpose of social welfare is to bring the poor into the nation’s economic mainstream, not to maintain them in dependence.”
In 1991, as Clinton prepared for what was then considered a quixotic run for president against a popular incumbent, he expanded on his governing philosophy in a series of speeches that, revisited now, are striking both for their confrontational approach toward expansive liberal government — especially coming from a candidate who needed party regulars to win — and for their ideological consistency with what would later come to pass during the Clinton era. He laid out a forceful case for improving and decentralizing decades-old institutions, from public schools to welfare, and modeling government after corporate America. He talked about revamping a Democratic Party that for 30 years was closely identified with the problems of the poor and retooling it to address the anxieties of a distressed middle class.
“There is an idea abroad in the land that if you abandon your children, the government will raise them,” Clinton said at a D.L.C. gathering in Cleveland in 1991, referring to fathers in the inner city. “I will let you in on a secret. Governments do not raise children — people do. And it’s time they were asked to assume their responsibilities and forced to do so if they refuse.”
In the same speech, Clinton outlined a new Democratic ethos based on the idea of consumer choice. “In the information age, monopoly decisions handed down on high by government bureaucracies are not always the best way to go,” he said. “With appropriate protections against discrimination based on race or income, we can provide our people more choices: child-care vouchers, public-school choice options, job training programs, choices for the elderly. ...
“Is what I just said to you liberal or conservative?” he went on to ask. “The truth is, it is both, and it is different. It rejects the Republicans’ attacks and the Democrats’ previous unwillingness to consider new alternatives.”
This, in a few lines, was the essence of Clintonism. Was it an innovative governing vision or a cynical strategy? The truth is, it was both. There is little doubt that as governor of Arkansas, Clinton believed passionately in the need to modernize liberalism and overhaul industrial-age programs, including popular entitlements and “welfare as we know it.” He grew up in hard circumstances and was raising his own child in a household with two working parents; his concern for the middle class was real, and it reflected a changed reality for a lot of baby-boomer families that older Democrats simply didn’t comprehend. But Clinton also believed his centrist message was the only way for a Democrat to win in the era after McGovern and Mondale, when running as a liberal candidate seemed only slightly more practical than running as a Marxist. And in order to get his party’s nomination, Clinton had to convince beleaguered liberals not so much that he was right about the party’s philosophical irrelevance — this probably wasn’t possible, in any event — but that his was the only way to regain the White House. He sold Clintonism as a matter of conviction and a promising electoral strategy, and both were sincere propositions.
Once in the White House, however, for some reasons within his control and many that were not, Clinton seemed to list inexorably toward the tactical side. He can claim some genuine advances in keeping with the spirit of his fundamental argument about government: the crime bill; welfare reform; the Family and Medical Leave Act; expanding the Earned Income Tax Credit, which pulled millions of working Americans out of poverty. These weren’t small achievements, and Clinton has received less credit for them than he deserves. And whether you attribute to him any part of the technology boom that created a vast amount of American wealth or believe instead that he simply had the good fortune to happen upon it, it’s only fair to acknowledge, as historians almost certainly will, that Clinton presided more than ably over a historic economic expansion, leaving the nation in far better fiscal shape than he found it.
Still, a combination of events — first the collapse of Hillary Clinton’s health-care plan, then the Republican Congressional takeover of 1994 and later, of course, the debilitating sex scandal that led to his impeachment — seemed to drain the administration of its capital and ambition. Clinton’s presidency seemed, at least from the outside, to devolve into an exercise in deflection and survival, a string of near-death experiences that left little space or energy for whatever sweeping agenda Clinton (and his wife) envisioned back in 1992. As the transformational governing vision of earlier years receded, bland, poll-tested rhetoric and endless scandals rushed in to fill the void — and became, in the minds of many Democrats, the hallmarks of Clintonism.
For a lot of liberals (those who now call themselves progressives), the ’90s were a conflicted time. They never really bought the ideological premise of Clintonism, and they quietly seethed as the president moved his party to the center — enacting free-trade agreements over the objections of union leaders; embracing balanced budgets and telling Americans that “the era of big government is over”; striking a deal to give Republicans a long-sought overhaul of the welfare system. (In fact, Clinton had been talking about welfare reform for at least a decade before his presidency, but few Democrats believed his eventual support for the bill was anything other than a craven attempt to bolster his re-election prospects.) They felt embarrassed by the Lewinsky affair and the sordid controversy that devoured Clinton’s second term like flesh-eating bacteria.
There were five syllables that for these Democrats summed up all the failures of Clintonism: “triangulation.” The word was originally popularized by Dick Morris, who advised Clinton in the dark days of the mid-’90s (and who, not incidentally, was brought in to the White House by the first lady). Triangulation, as Morris intended it, is probably best described as the strategy of co-opting the issues that attract voters to your opponents by substituting centrist solutions for the ideological ones they propose, thus depriving them of victory. (In other words, if your opponents are getting traction with their demands to dismantle a broken welfare system, you acknowledge the problem but propose a middle-ground way of restructuring it instead.) To a lot of avid Democrats, however, triangulation became shorthand for gutless compromise, for saying and doing whatever you think you must in order to win.
No doubt Clinton’s style of leadership contributed to this impression as much as the substance did. There were moments, little remembered or appreciated by his critics, when Clinton demonstrated icy resolve and an indifference to polls: the budget showdown with Newt Gingrich and Congressional Republicans in 1995; the bombing of Serbia in 1999 to stop its aggression in Kosovo. More often, though, Clinton seemed determined to confirm his reputation as an agonized, late-night decision maker, a leader heavily influenced by the last guy to leave the room. Classic half-a-loaf policies like the “don’t ask, don’t tell” rule for gays in the military, along with frequent paralysis over crises like the genocide in Rwanda, created what would become an enduring impression that Clintonism was code for fecklessness.
Even so, such resentments were tempered by the fact that Clinton managed to deliver the White House not once but twice; among Democrats in the 20th century, only Woodrow Wilson and Franklin Roosevelt had done the same. He almost single-handedly pulled the Democratic Party back from its slide into irrelevance. Liberals swallowed hard and endured Clinton’s pragmatic brand of politics because they assumed that Clinton’s success would beget more success and, ultimately, a more progressive government.
Of course, it didn’t work out that way. First came the election of 2000, which Democrats believed was swiped from their grasp with little protest from the party’s Washington leaders. Next came compromises with George W. Bush on tax cuts and education reform. Then came the back-breaker: in the vote on the Iraq war resolution in 2002, many Democrats in Washington — including, most conspicuously, Hillary Clinton, then an unannounced presidential candidate — sided with President Bush in a move that antiwar liberals could only interpret as a Clintonian calculation to look tough on terror. If so, a lot of good it did; Congressional Democrats were demolished at the polls a few weeks later.
After that defeat, many longtime liberals, often coming together in the new online political space, began to voice a different thought: What if they had gone along with Clintonism for nothing? What if the path to victory lay not in compromising with Republicans but in having the fortitude to fight ruthlessly and to defend your own convictions, no matter how unpopular they might be? This was the moment in which Howard Dean’s explosive presidential campaign — and the grass-roots progressive movement it spawned — began to flourish. It was grounded in the idea that Clintonism, far from representing the postindustrial evolution of Democratic thought, had corrupted the party of the New Deal and the Great Society — and, taken to its logical end, had led Democrats and the country into a catastrophic war.
Even before they knew for sure that she was running for the presidency, Hillary Clinton’s top aides had to figure out how best to handle the growing tumult inside their own party. As a senator, Clinton had been, if anything, more centrist than her husband; she worked across the aisle with the likes of Bill Frist and Lindsey Graham, and her voting record on foreign policy placed her among the most conservative Democrats, only a few paces to the left of Joe Lieberman. There is no reason to think such stances on the issues didn’t accurately reflect Hillary’s convictions, but they had the added bonus of positioning her as eminently moderate and “electable” — both in New York State, where she won 67 percent of the vote in her 2006 re-election, and in the rest of the country.
The party, however, seemed to be moving in a different direction. Liberal activists online and in the states, in the wake of Dean’s losing campaign, were noisily demanding more confrontation and less Clintonian compromise from their Washington leaders. By the time Hillary Clinton formally announced her candidacy for president, a group of these activists — money guys, bloggers, MoveOn.org — had just combined forces to knock off Lieberman in a stunning primary upset (although Lieberman did manage to retain his seat in the general election), and these same grass-roots Democrats were lashing out at Clinton for her vote to authorize the invasion of Iraq. Some Clinton supporters in Washington thought they could see an ominous train coming down the track, and they wondered if the candidate didn’t need to get some distance between herself and her husband’s legacy, to position herself as a more partisan Democrat before it was too late.
Mark Penn steadfastly disagreed. Penn, who was Bill Clinton’s chief pollster during the ’90s, also emerged as Hillary’s most influential strategist. Penn had argued for years, going back to the Clinton White House, that Democrats won when they occupied the bipartisan, common-sense center of the political spectrum. And even in a primary campaign, Penn said he believed that Democrats had such personal loyalty toward the Clintons that they would forgive a few ideological differences they might have with the senator, especially if they thought those differences made her palatable to a wide swath of independent voters. When I suggested to Penn, back in 2005, that there might be a strong backlash emerging against the notion of Clintonism, he waved me away. “Strong backlash?” Penn scoffed, reminding me that the former president had a 70 percent approval rating in the country as a whole. “In this environment, that is a notion I would have to laugh at.”
In the end, Hillary Clinton tried to straddle the line. She broke with her husband in small but significant ways. She criticized the free-trade policies that he had long championed but that were now anathema to much of the Democratic base. She promised to abandon “don’t ask, don’t tell” and to amend the Defense of Marriage Act, which Bill Clinton signed. At the same time, Hillary Clinton has, from the start, reminded voters that she was a crucial member of her husband’s White House. (“I was deeply involved in being part of the Clinton team,” she said at a recent debate, in response to a question about foreign policy.) Vowing to be a pragmatic, bipartisan president, she signed on to lead an initiative with the D.L.C. and welcomed the endorsement of such figures as Robert Rubin, the Clinton Treasury secretary whose push for deficit reduction in the early ’90s has made him a lasting figure of revulsion for anti-corporate liberals. Despite intense pressure from John Edwards and Barack Obama, she publicly refused to swear off donations from industry lobbyists, and she spoke out in favor of a House vote to approve a new free-trade agreement with Peru. At the YouTube/CNN debate in July, she pointedly refused to describe herself as a liberal.
When Clinton, alone among the party’s presidential hopefuls, voted in September for a Senate resolution labeling the Iranian Revolutionary Guard as a terrorist group, a resolution the other Democrats charged would empower Bush to pursue yet another military strike, it looked to a lot of Democrats like an all-too-familiar Clintonian dash toward the center. Clinton seemed to be feeling secure as the front-runner and already looking ahead to the general election, where she planned to occupy the same moderate space her husband had. By then, though, voters in Iowa and New Hampshire had begun to pay closer attention to the race, and the attacks on Clintonism were beginning to resonate.
There are at least three different angles from which Edwards and Obama have tried, often subtly, to trash Clintonism without criticizing the former president himself. The first might be called the triangulation story line. Edwards unsheathed the word like a poison-tipped arrow at the same YouTube debate where Hillary Clinton declined to be called a liberal. “Do you believe that compromise, triangulation, will bring about big change?” he asked the audience. “I don’t.” Thwang. Since then, Edwards has at every opportunity tried to encourage liberal voters in their view that the Clinton era was a time of craven calculation and surrender to the conservative movement. In October, after Clinton was asked in a debate if she supported a New York State plan to give driver’s licenses to illegal immigrants — and after she tried to twist her way out of answering with such tenacity that she nearly invented a new yoga position — the Edwards campaign released a video titled “The Politics of Parsing,” which showed Clinton contradicting herself on other issues too. The subtext was clear: Do you really want to go through all that again?
Obama, who once vowed to adhere to the “new politics” of genial campaigning, has picked up on this same triangulation theme with evident enthusiasm in recent months. In Spartanburg, S.C., last month, he said that Clinton had been running a “textbook” campaign — whose textbook wasn’t hard to discern — that “encourages vague, calculated answers to suit the politics of the moment, instead of clear, consistent principles about how you would lead America.” Later in the month, at a dinner for leading Iowa Democrats, Obama used the dreaded epithet itself. “Triangulating and poll-driven positions because we’re worried about what Mitt or Rudy might say about us just won’t do,” he said, as Hillary Clinton sat a few feet away.
The second narrative aimed at the Clinton years, pursued mostly by Edwards, is the one about corporate corruption. This one argues that Bill Clinton turned the Democratic Party into a holding company for Wall Street financiers, pursuing a series of economic policies that were bad for workers but kept the party flush with cash. By this theory, balanced budgets and free trade were more about winning elections at any cost than they were about creating an expansive economy, and they led directly to the Bush epoch and its alarming inequality. This is why Edwards spent weeks hammering at Clinton over her continued acceptance of lobbyists’ money (despite his own reliance on donations from trial lawyers, who do plenty of lobbying themselves). The point was to remind voters that when Bill Clinton rented out the Lincoln Bedroom, Hillary was sleeping down the hall.
Obama, meanwhile, has been going after the Clinton legacy with a third story line: Boomer fatigue. Never mind whether Bill Clinton or Newt Gingrich was to blame, Obama says — the point is that the two parties had each other in a death grip throughout the ’90s, and vital business went unfinished as a result. If you really want things to stay that way, he says, then vote for another Clinton and watch these self-obsessed baby boomers go at it all over again. When Obama leaned on Hillary Clinton for not pushing to declassify all of her papers from the Clinton White House, he was offering voters a reminder of all the lawyers and investigations, the missing billing records, the constant subpoenas for cabinet members that never seemed to go away.
“You have to be careful to be honest, and being honest means giving President Clinton his full due,” David Axelrod, Obama’s main strategist, told me not long ago. “I don’t think Obama is arguing that Bill Clinton is a bad person or a bad president, or that Hillary Clinton is a bad person or a bad senator. That’s not what we’re saying. We’re saying that we have to move forward and get beyond these old battles.”
By taking on the Clinton legacy through imagery and innuendo, Hillary’s rivals seem to have brought to the surface feelings of profound ambivalence, among many voters, about what that era really meant. She still holds a substantial lead in national polling, but in Iowa a flurry of recent polls has shown Clinton tied with Obama, and her lead among women there — a critical piece of her formula for victory — has eroded precipitously. According to a Washington Post-ABC News poll earlier this month, only half the voters thought Clinton was “willing enough” to say what she “really thinks about the issues,” compared with three-quarters for her two main rivals. Perhaps more troubling for the Clinton camp, the race in New Hampshire, where the Clintons are essentially family, appeared to have tightened considerably. While polls from New Hampshire have varied widely, making their reliability something of a guessing game, a poll jointly conducted a few weeks ago by WMUR in Manchester and CNN found that Clinton’s 20-point lead there had completely evaporated.
Clinton’s aides described all this as the inevitable dynamic of a race in its later stages, when voters really focus on their choices for the first time. But as Iowa edges closer, their campaign has seemed on the edge of panic. Earlier this month, Clinton, who had always tried to appear vaguely amused at her opponents’ antics, started flailing away at Obama. First she assailed him for saying he hadn’t always wanted to be president when, in fact, he wrote an essay in kindergarten saying that he did intend to one day occupy the Oval Office. (She shrewdly left out the fact that every other 5-year-old in America says the exact same thing.) On that same day, Wolfson, her communications director, appearing on “Face the Nation,” charged that Obama had been operating a “slush fund” through his political action committee. Then one of Clinton’s national campaign co-chairmen in New Hampshire pointedly suggested that Obama, who has admitted to using drugs when he was younger, would be vulnerable, as the nominee, to questions about whether he gave drugs to others or even sold them. That was too much for the candidate herself, who felt compelled to apologize personally.
For his part, Bill Clinton has tried to restrain himself. In his later years, the Big Dog, as bloggers sometimes refer to him, has transcended politics and even ordinary celebrity; like Paul McCartney or Muhammad Ali, Clinton is now a historical figure who remains a breathing, walking presence, and when he enters a room of strangers, even those who didn’t vote for him react as if witnessing a small miracle. On Veterans Day, as I trailed Clinton through South Carolina, he dropped in on Jack’s Cosmic Dogs, where he ordered up a chili dog with fries — now that his foundation was on a crusade against childhood obesity, Clinton told me with mock gravity, it was vital that he sample the offending food every so often — and made his way to all the tables so the customers could swoon and take pictures.
“Oh, these iPhones take good pictures!” he exclaimed to one young mother as she looked around for a volunteer photographer to snap her portrait with the former president. A few minutes later, I heard him talking into another woman’s cellphone while she looked on nervously. “Hi, there, this is Bill Clinton! No, seriously! It is!”
As he doused his fries in ketchup, Clinton told me that he was generally more inclined to want to “pop back” at Edwards or Obama than his wife was, but he had to remind himself that Hillary was plenty capable of defending herself. There have been reports in the last few weeks about Clinton’s lashing out at strategists and meddling in his wife’s campaign; insiders say this has been exaggerated, but some of Clinton’s friends and former advisers told me that the attacks from rivals irritate Clinton a lot more now, when they are directed at his wife, than they did when he was running. (“As a candidate, he was absolutely bulletproof — it never bothered him,” says Paul Begala, one of Clinton’s 1992 advisers.) What he takes even more personally — and should, really — is the unmistakable premise that underlies the sniping, that somehow his own presidency was bad for the country and the party.
On those rare occasions when the former president hasn’t been able to resist defending his wife or burnishing his own record, the results haven’t been especially helpful. Unlike Hillary Clinton and her team of advisers, who are relentlessly on message and disciplined, Bill Clinton is a more instinctual politician, given to improvisational moments that must torment his wife’s obsessive-compulsive aides. In November, Clinton suddenly asserted during a campaign appearance in Iowa that he opposed the invasion of Iraq from the beginning — an aside that he needn’t have offered and that clearly contradicted not only his wife’s Congressional vote but his own statements in the build-up to the war. Aides told me that he had simply misspoken, and that seemed plausible enough, but the minor incident only served to reinforce the image that Edwards and Obama were doing their best to conjure. In trying, perhaps unconsciously, to exonerate himself among his persistent liberal critics, Clinton reminded even sympathetic voters of the qualities that had made him seem maddeningly incapable of standing on principle or admitting fault. Here was the statesman Bill Clinton, wizened and mature, telling us once again that he didn’t inhale.
There is, however, a rich paradox in the strategy that Obama and Edwards are employing in their quest to dislodge Clinton from her perch atop the field. The plain fact is that, for all their condemnation of Bill Clinton’s governing philosophy, both Obama and Edwards — and just about every other Democratic candidate in the field, with the possible exception of Dennis Kucinich, who seems to have been teleported straight from 1972 — spend a fair amount of time imitating him. So thorough was Clinton’s influence on Democratic politics, so transformative were his rhetoric and his theory of the electorate, that Democrats don’t even seem to realize anymore the extent to which they owe him their political identities.
Obama can rail about poll-tested positions and partisanship if he wants, but some of his most memorable speeches since being elected to the Senate have baldly echoed Clintonian themes and language. He has repeatedly called on poor African-Americans to take more responsibility for their parenting and their children’s education, and he has been skeptical of centralized federal programs for the poor, advocating a partnership between government and new kinds of community-based nonprofits. He has railed against “a mass-media culture that saturates our airwaves with a steady stream of sex, violence and materialism.” Such “values” stances were far outside the mainstream of the party before Bill Clinton expressed them.
In an impressive 2005 commencement speech at Knox College, Obama talked about economic transformation. “Instead of doing nothing or simply defending 20th-century solutions, let’s imagine together what we could do to give every American a fighting chance in the 21st century,” he said. “What if we prepared every child in America with the education and skills they need to compete in the new economy? If we made sure that college was affordable for everyone who wanted to go? If we walked up to those Maytag workers and said, Your old job is not coming back, but a new job will be there because we’re going to seriously retrain you and there’s a lifelong education waiting for you?
“Republicans will have to recognize our collective responsibilities,” he went on, “even as Democrats recognize that we have to do more than just defend old programs.” Bill Clinton could have spoken those exact words in 1991. In fact, it would be hard to find a better summation of the substance behind Clintonism.
Similarly, Edwards, doing his best William Jennings Bryan impression, lashes out at the policy priorities of the ’90s and at poverty deepened by corporate venality, but his arsenal of specific proposals includes expanding the Earned Income Tax Credit and accelerating the process of moving people out of public housing and into mixed-income neighborhoods. These new ideas are actually extensions of Clinton-era programs; they may be notable for their boldness but not for their originality. And even Edwards, in criticizing the lack of aid for poor Americans, has constructed his ambitious agenda on the central premise that people should get assistance only if they’re willing to work for it. In today’s environment, this hardly qualifies as noteworthy — there’s no serious Democratic candidate who would propose anything else — but it represents a marked shift from the party’s stance on welfare programs before Clinton started talking about those who “work hard and play by the rules.”
“Despite all the protestations, Clinton’s third-way politics and governing philosophy have as much of a hold on these Democratic candidates as the New Deal mind-set did on generations before,” says Jonathan Cowan, whose think tank, Third Way, has emerged as the next iteration of the D.L.C. “Clinton’s politics have basically become the DNA of Democrats seeking the White House, and it’s almost certain that they would all govern from that Clintonian center if they actually became president.” Even the party’s leaders in Congress, newly empowered by an uprising against Republican hegemony, continue to speak in the measured tones of Clintonian centrism.
Clinton’s rhetorical influence, in fact, spans not just the Democratic Party but really the entire spectrum of American politics. Today politicians throw around phrases like “the new economy” or “the information age” as if they have always been part of the political lexicon, and yet most ordinary voters didn’t really grasp that America was undergoing a profound upheaval — moving from an industrial economy to one centered on intellectual and service industries — until Clinton showed up to masterfully explain it. Few American politicians talked about “globalization” before Clinton, as a candidate, stood on factory floors and argued that the next era’s economy would be nothing like the last, and that for workers, the transition would be painful but also full of promise. Clinton wasn’t the first candidate to grasp this change and to put it into words, but he was by far the most persuasive. He also articulated a philosophy of how to deal with these challenges that transcended the binary ideological struggle between outright entitlement and Darwinian self-reliance. When you go into a hospital now and see a placard on the wall that lists a patient’s “rights” directly opposite his “responsibilities” as a citizen, that’s Clinton’s influence. At its best, Clintonism represented a more modern relationship between government and individuals, one that demanded responsibilities of both.
Words aren’t the same thing as achievements, of course, but at critical points in history, they can move a country forward by modernizing the debate, and in this way, Clinton’s comparing himself with Theodore Roosevelt, the president who dragged politics into the industrial age, is apt. Perhaps it’s true that Clinton’s presidency will be remembered as a series of lost opportunities — “the Great Squandering,” as the historian David Kennedy recently described it to me. But it’s also possible that history will record Bill Clinton as the first president of the 21st century, the man who synthesized the economic and international challenges of the next American moment, even if he didn’t make a world of progress in solving them.
This may be the defining difference between the candidacies of Bill Clinton and his wife, between Clintonism and Hillaryism, if such a thing can be said to exist. Like most successful outsiders, Bill Clinton directly challenged the status quo of both his party and the country, arguing that such a tumultuous moment demanded more than two stark ideologies better suited to the past. By contrast, Hillary Clinton’s campaign to this point has been mostly about restoring an old status quo; she holds herself up as the best chance Democrats have to end eight years of Bush’s “radical experiment” and to return to the point where her husband left off. It has been a strong but safe campaign, full of nondescript slogans (“I’m In to Win!” “The Change We Need!”) and familiar, if worthy, policy prescriptions. That might be a shrewd primary strategy, but winning a general election could well require a more inspiring rationale. Nonincumbents who go on to win the White House almost always take some greater risk along the way, promising changes more profound — if potentially more divisive — than a return to normalcy. The reformer runs great danger. The more cautious candidate merely runs.
Matt Bai, who covers national politics for the magazine, is the author of “The Argument: Billionaires, Bloggers and the Battle to Remake Democratic Politics.” www.mattbai.com.
December 23, 2007
The Clinton Referendum
By MATT BAI
Winter’s first storm punished the White Mountains of New Hampshire on the Friday before Thanksgiving, rendering the terrain all but impassable. And yet in Gorham, a small town 50 miles from the Canadian border, hundreds of people shivered patiently in the snow, in a line that snaked halfway around Gorham Middle-High School, while Secret Service dogs sniffed the gymnasium for bombs. “I’ve got a lot of people freezing out here,” a campaign aide barked into a phone, as if this might make the agents go any faster. When they finally allowed everyone in, a few of the 500 or so folding chairs remained unfilled, but the place was humming with excitement; a teacher near me was saying that this was the biggest thing to happen here since Dwight Eisenhower visited in the 1950s.
For the first time since that infamous year of 1992 — the year when Gennifer Flowers, “Stand by Your Man” and “the Comeback Kid” entered the political canon — Bill Clinton was coming back to New Hampshire’s North Country, the place where his legend was born. Clinton loves the Granite State. As it happened, I was standing with him earlier that week in South Carolina when an aide told him that he was going to be campaigning for Hillary in New Hampshire, and his eyes lighted up behind his reading glasses. “I am? Where’m I goin’?” Now he strode into the gymnasium through a side door, his face flushed with emotion, accompanied by the nostalgic bars of Fleetwood Mac’s “Don’t Stop.” (They were nostalgic back then, for crying out loud.) Clinton is now lean and regal, his hair an almost metallic white, and he was dressed in a taupe suit with a light green tie, trailing a small entourage and waving warmly. The room erupted in cheers and whistles. Over his head a banner proclaimed: “The Change We Need! HillaryClinton.com.”
“When Hillary first announced she was running for president, she came right to the North Country, and I was so jealous,” Clinton said. “I want to thank you for arranging the snow today. It made me feel right at home. I took a nap in the car, and when I woke up I thought it was 1992.” The crowd laughed appreciatively. Many in the audience probably recalled that he had all but lived in these parts for a year before that campaign; after his election, he even gave some of the families he met along the way his special ZIP code at the White House, so they could keep in touch.
Clinton began his speech, as he always does now, with a disclaimer, saying that if he wanted to, he could certainly give a big “whoop-dee-do” speech that would get everybody riled up, but that this was a serious time in America, and it deserved a serious speech. Clinton doesn’t like to play an overtly political role anymore; he enjoys the statesmanlike aura that surrounds any ex-president, and he is not about to undermine it, even for his wife’s campaign. Instead, he spoke to the Gorham audience in somber tones, telling them that a lot of the crises now confronting the North Country brought to mind 1992 as well. The paper mill in nearby Groveton had just announced it would close a few days after Christmas, kicking 300 workers to the street.
“You’re hurtin’ up here because of this mill closing,” he said. “But you should know just how close millions upon millions of your fellow Americans are to your experience.” He went on to quietly castigate the Bush administration for running up foreign debt and straining the military to its limits in Iraq, and he talked about Hillary’s plans to bring health-care coverage to all Americans, build a new jobs program around alternative energy and revamp the education system, beginning with early-childhood programs. “A lot of you already know this,” he said of his wife’s work on education issues in Arkansas, “because I talked about it when I was running.”
Even without the allusions to the old days, his speech seemed strangely reminiscent of that first campaign, and not necessarily in a good way. Listening to him talk, I found it hard not to wonder why so many of the challenges facing the next president were almost identical to those he vowed to address in 1992. Why, after Clinton’s two terms in office, were we still thinking about tomorrow? In some areas, most notably health care, Clinton tried gamely to leave behind lasting change, and he failed. In many more areas, though, the progress that was made under Clinton — almost 23 million new jobs, reductions in poverty, lower crime and higher wages — had been reversed or wiped away entirely in a remarkably short time. Clinton’s presidency seems now to have been oddly ephemeral, his record etched in chalk and left out in the rain.
Supporters of the Clintons see an obvious reason for this, of course — that George W. Bush and his Republican Party have, for the past seven years, undertaken a ferocious and unbending assault on Clinton’s progressive legacy. As Clinton points out in his speeches, Bush and the Republicans abandoned balanced budgets to fight the war in Iraq, widened income inequality by cutting taxes on the wealthy and scaled back social programs. “We’ve had now seven years of a radical experiment in extremism in domestic policy,” Clinton said in New Hampshire.
Some Democrats, though, and especially those who are apt to call themselves “progressives,” offer a more complicated and less charitable explanation. In their view, Clinton failed to seize his moment and create a more enduring, more progressive legacy — not just because of the personal travails and Republican attacks that hobbled his presidency, but because his centrist, “third way” political strategy, his strategy of “triangulating” to find some middle point in every argument, sapped the party of its core principles. By this thinking, Clinton and his friends at the Democratic Leadership Council, the centrist think tank that served as a platform for his bid for national office, were so desperate to woo back moderate Southern voters that they accepted conservative assertions about government (that it was too big and unwieldy, that what was good for business was good for workers) and thus opened the door wide for Bush to come along and enact his extremist agenda with only token opposition. In other words, they say, he was less a victim of Bush’s radicalism than he was its enabler.
“His budget policies were pretty much an extension of Bush I, and his economic policies were largely an extension of Wall Street,” says Robert Borosage, co-director of the left-wing Campaign for America’s Future. Ideologically, Borosage told me, Clinton’s presidency fit snugly into the era of Reagan and Bush. Faced with ascendant conservatism, he says, “Clinton saw his job, in a sense, as getting the Democratic Party to adjust to it, rather than to resist it.”
Aside from a few partisans on each end of the spectrum, there aren’t neatly delineated camps on this question, with Clinton lovers on one side and critics on the other. Rather, a lot of Democrats seem genuinely conflicted, on practically an existential level, when it comes to Clinton. They almost uniformly admire the former president; 82 percent of Democrats polled by Fox News in November had a favorable opinion of Clinton, and, in a New York Times poll released earlier this month, 44 percent of Democratic voters said they were more inclined to support Hillary’s candidacy because of him. And yet, they regard with suspicion, if not outright resentment, the centrist forces he helped unleash on the party. They might love Bill Clinton, but they loathe Clintonism. And it is this conflict that has, in recent weeks, become a subtle but important theme of the 2008 campaign, as Hillary Clinton’s rivals try to portray her as the Return of the Great Triangulator. Whatever else these Democratic primaries may be about — health-care plans, global warming, timetables for withdrawal from Iraq — they are, on some more philosophical and even emotional level, a judgment on the ’90s and all that those tumultuous years represent.
Hillary Clinton’s combative advisers say they welcome that dynamic. “If our opponents want to make this a referendum on Bill Clinton’s presidency, they are making a mistake,” Howard Wolfson, Clinton’s communications director, said in an e-mail message, “both because it’s a referendum they would lose on the merits and because Democrats are focused on the future and the change that needs to be made going forward.” And yet Clinton’s team often seems perplexed by a political quandary unlike any that has come before: how to exploit all the good will that Democrats have for Bill Clinton without allowing Hillary Clinton to become a constant reminder of the things they didn’t like about his presidency. Generally, the campaign’s preferred solution is simply not to talk about it. When I asked Bill Clinton about this issue, during an informal meeting in South Carolina, he readily agreed to sit down for a longer interview on his legacy’s role in the campaign. A few weeks later, however, and at the last minute, Hillary’s aides canceled the interview. Famously controlling, they would not even allow the former president to talk about his record.
Listening to Bill Clinton that day in New Hampshire, however, it was clear that whether or not he talks about it, his wife’s fortunes are bound up with his, and vice versa. Near the end of his speech in Gorham, he went off on an engaging tangent, as he sometimes does, about the trees he saw from his car window that morning, and how at one time New Hampshire was almost devoid of trees, and how Teddy Roosevelt led a national effort to replenish the forests. “But Theodore Roosevelt proposed a lot of ideas that fell flat on their face until Franklin Roosevelt passed them,” Clinton went on. “The important thing for us to do is to fight for the right thing and keep fighting for it until we finally get it done.” I had heard Clinton compare himself with T.R. before, but this was the first time I heard him do so publicly, and it struck me as an aside that would have made his wife’s advisers wince, if they noticed it. He seemed to be suggesting that Hillary’s job as president would be to cement his own unfinished legacy — provided, of course, that his legacy, or at least a widely held perception of it, didn’t end up derailing her first.
A little over a year ago, while working on a book about the Democratic Party’s divisions, I discussed that legacy with Bill Clinton in his Harlem office. Hillary Clinton had just begun running for the White House, and her husband was already trying to help neutralize her critics on the left; when I arrived at the office, Clinton was meeting with about 20 influential bloggers, who were gnawing on barbecued chicken and enjoying their first-ever audience with a former president. When I entered his office a while later, Clinton had his back to me and was busy rearranging the photos on his shelves, as if trying to get the visual narrative of his presidency exactly right. He recited a litany of his accomplishments — the first sustained rise in real wages since 1973, the biggest land-protection measure in the lower 48 since Teddy Roosevelt, victories against the tobacco and gun lobbies — and told me he couldn’t understand the allegation that his administration wasn’t really progressive.
“I think that if ‘progressive’ is defined by results, whether it’s in health care, education, incomes, the environment or the advancement of peace, then we had a very progressive administration,” Clinton said. “I think we changed the methods — that we tried also to reflect basic American values, that we tried to do it in a way that appealed to the broad middle class in America. We sure did, and I don’t apologize for that. The question is: Were the policies right or not? And I think in terms of the political success I enjoyed, people have given more credit to my political skills than they deserve and less credit to the weight, the body of the ideas.”
At the end of that interview, as he walked me to the lobby, Clinton mentioned a favorite quote from Machiavelli’s book “The Prince” and told me to look it up. When I got back to Washington, I thumbed through the book until I found the rambling passage, and this is what it said:
It must be considered that there is nothing more difficult to carry out, nor more doubtful of success, nor more dangerous to handle, than to initiate a new order of things. For the reformer has enemies in all those who profit by the old order, and only lukewarm defenders in all those who would profit by the new order, this lukewarmness arriving partly from fear of their adversaries, who have the laws in their favor; and partly from the incredulity of mankind, who do not truly believe in anything new until they have had an actual experience of it. Thus it arises that on every opportunity for attacking the reformer, the opponents do so with the zeal of partisans, the others only defend him halfheartedly, so that between them he runs great danger.
It’s not hard to see why the postpresidential Bill Clinton sees himself in this quotation, and it says a lot about how he views his own place in American politics. In Clinton’s mind, the New Democrats of the late ’80s and early ’90s and their “third way” approach represented a call for fundamental reform, not just of the Democratic Party but also of the country’s industrial-age government. For that, he has been pilloried by Republicans and business interests, who were doing just fine under the old system, and “lukewarmly” defended by Democrats who resist any real break with the past.
There are, among Democrats, dueling interpretations of what Clintonism means and how it came into being. The most popular version now, by far, is that Clintonism was chiefly an electoral strategy, a way of making Democrats sound more acceptable to conservative voters by softening the party’s stances on “values” issues like guns, welfare and abortion and introducing pallid, focus-grouped phrases like “work hard and play by the rules” and making abortion “safe, legal and rare.” In other words, Clinton was basically as liberal at heart as any other Democrat who marched for civil rights and protested the Vietnam War, but he was a brilliant political strategist who instinctively understood the need to rebrand the party.
Even some of Clinton’s friends from the old days — those lukewarm defenders of the faith — accept this basic version of history. “Clintonism was about winning,” says Susan Estrich, the longtime Democratic strategist and pundit. “It was about grabbing victory from the jaws of defeat. If you were a Democrat of a certain age, it was like being a Red Sox fan — you never won. And even when you won, you lost, because you got Jimmy Carter. Clinton led us out of the desert when no one else could.”
On the other hand, Clinton’s more ardent supporters, those few who were there at the beginning, argue that Democrats have badly miscast him as an expedient strategist, when in fact he was a visionary and a modernizer. “He used to tell me all the time, ‘One of these days, people are going to figure out that I actually believe in this stuff,’ ” Al From told me recently. From and the Democratic Leadership Council that he founded in the 1980s have in recent years become a kind of convenient stand-in for Clinton, the main object of acid derision from liberal bloggers who prefer to savage someone other than the former president himself for the evils of Clintonism. Clinton was the chairman of the D.L.C. when he ran for president, and much of his campaign rhetoric came from its work.
“I don’t want to see what I think is his greatest achievement diminished,” From told me. “Just as Franklin Roosevelt saved capitalism by dealing with its excesses, Clinton saved progressive governance, and he saved progressive governance all over the world.”
Clinton’s critics on the left may scoff at this idea, but it’s fair to say that the discussion of Clintonism among party activists and especially online often displays a stunning lack of historical perspective. For a lot of younger Democrats, in particular, whose political consciousness dates back only as far as 1994 or even to the more recent days of Clinton’s impeachment, the origins of Clintonism have become not only murky but also irrelevant. “Clintonism” is, in much of the Democratic activist universe, a synonym for spinelessly appeasing Republicans in order to win, an establishment philosophy assumed to comprise no inherent principles of its own.
Lost in all this is the fact that, back in the day, Clinton and his New Democrats were themselves the outsiders taking on the ruling interest groups of the Democratic establishment, the analog to bloggers and MoveOn.org activists, albeit from a different ideological direction. And it took no small amount of courage, at the end of the Reagan era, to argue inside the Democratic Party that the liberal orthodoxies of the New Deal and the Great Society, as well as the culture of the antiwar and civil rights movements, had become excessive and inflexible. Not only were Democratic attitudes toward government electorally problematic, Clinton argued; they were just plain wrong for the time.
Immediately after assuming the chairmanship of the D.L.C. in 1990, Clinton issued something called the New Orleans Declaration, which laid out the D.L.C.’s attack on old liberalism in a series of 15 core principles. By today’s standards, these principles don’t amount to much more than typical Clintonian rhetoric, but at the time, they seemed like a good way for a young Democratic governor to permanently marginalize himself in a party dominated by Big Labor, civil rights leaders and Northeastern liberals. Among the stated principles in the manifesto:
“We believe that economic growth is the prerequisite to expanding opportunity for everyone. The free market, regulated in the public interest, is the best engine of general prosperity.”
“We believe in preventing crime and punishing criminals, not in explaining away their behavior.”
“We believe the purpose of social welfare is to bring the poor into the nation’s economic mainstream, not to maintain them in dependence.”
In 1991, as Clinton prepared for what was then considered a quixotic run for president against a popular incumbent, he expanded on his governing philosophy in a series of speeches that, revisited now, are striking both for their confrontational approach toward expansive liberal government — especially coming from a candidate who needed party regulars to win — and for their ideological consistency with what would later come to pass during the Clinton era. He laid out a forceful case for improving and decentralizing decades-old institutions, from public schools to welfare, and modeling government after corporate America. He talked about revamping a Democratic Party that for 30 years was closely identified with the problems of the poor and retooling it to address the anxieties of a distressed middle class.
“There is an idea abroad in the land that if you abandon your children, the government will raise them,” Clinton said at a D.L.C. gathering in Cleveland in 1991, referring to fathers in the inner city. “I will let you in on a secret. Governments do not raise children — people do. And it’s time they were asked to assume their responsibilities and forced to do so if they refuse.”
In the same speech, Clinton outlined a new Democratic ethos based on the idea of consumer choice. “In the information age, monopoly decisions handed down on high by government bureaucracies are not always the best way to go,” he said. “With appropriate protections against discrimination based on race or income, we can provide our people more choices: child-care vouchers, public-school choice options, job training programs, choices for the elderly. ...
“Is what I just said to you liberal or conservative?” he went on to ask. “The truth is, it is both, and it is different. It rejects the Republicans’ attacks and the Democrats’ previous unwillingness to consider new alternatives.”
This, in a few lines, was the essence of Clintonism. Was it an innovative governing vision or a cynical strategy? The truth is, it was both. There is little doubt that as governor of Arkansas, Clinton believed passionately in the need to modernize liberalism and overhaul industrial-age programs, including popular entitlements and “welfare as we know it.” He grew up in hard circumstances and was raising his own child in a household with two working parents; his concern for the middle class was real, and it reflected a changed reality for a lot of baby-boomer families that older Democrats simply didn’t comprehend. But Clinton also believed his centrist message was the only way for a Democrat to win in the era after McGovern and Mondale, when running as a liberal candidate seemed only slightly more practical than running as a Marxist. And in order to get his party’s nomination, Clinton had to convince beleaguered liberals not so much that he was right about the party’s philosophical irrelevance — this probably wasn’t possible, in any event — but that his was the only way to regain the White House. He sold Clintonism as a matter of conviction and a promising electoral strategy, and both were sincere propositions.
Once in the White House, however, for some reasons within his control and many that were not, Clinton seemed to list inexorably toward the tactical side. He can claim some genuine advances in keeping with the spirit of his fundamental argument about government: the crime bill; welfare reform; the Family and Medical Leave Act; expanding the Earned Income Tax Credit, which pulled millions of working Americans out of poverty. These weren’t small achievements, and Clinton has received less credit for them than he deserves. And whether you attribute to him any part of the technology boom that created a vast amount of American wealth, or believe instead that he simply had the good fortune to happen upon it, it’s only fair to acknowledge, as historians almost certainly will, that Clinton presided more than ably over a historic economic expansion, leaving the nation in far better fiscal shape than he found it.
Still, a combination of events — first the collapse of Hillary Clinton’s health-care plan, then the Republican Congressional takeover of 1994 and later, of course, the debilitating sex scandal that led to his impeachment — seemed to drain the administration of its capital and ambition. Clinton’s presidency seemed, at least from the outside, to devolve into an exercise in deflection and survival, a string of near-death experiences that left little space or energy for whatever sweeping agenda Clinton (and his wife) envisioned back in 1992. As the transformational governing vision of earlier years receded, bland, poll-tested rhetoric and endless scandals rushed in to fill the void — and became, in the minds of many Democrats, the hallmarks of Clintonism.
For a lot of liberals (those who now call themselves progressives), the ’90s were a conflicted time. They never really bought the ideological premise of Clintonism, and they quietly seethed as the president moved his party to the center — enacting free-trade agreements over the objections of union leaders; embracing balanced budgets and telling Americans that “the era of big government is over”; striking a deal to give Republicans a long-sought overhaul of the welfare system. (In fact, Clinton had been talking about welfare reform for at least a decade before his presidency, but few Democrats believed his eventual support for the bill was anything other than a craven attempt to bolster his re-election prospects.) They felt embarrassed by the Lewinsky affair and the sordid controversy that devoured Clinton’s second term like flesh-eating bacteria.
There were five syllables that for these Democrats summed up all the failures of Clintonism: “triangulation.” The word was originally popularized by Dick Morris, who advised Clinton in the dark days of the mid-’90s (and who, not incidentally, was brought in to the White House by the first lady). Triangulation, as Morris intended it, is probably best described as the strategy of co-opting the issues that attract voters to your opponents by substituting centrist solutions for the ideological ones they propose, thus depriving them of victory. (In other words, if your opponents are getting traction with their demands to dismantle a broken welfare system, you acknowledge the problem but propose a middle-ground way of restructuring it instead.) To a lot of avid Democrats, however, triangulation became shorthand for gutless compromise, for saying and doing whatever you think you must in order to win.
No doubt Clinton’s style of leadership contributed to this impression as much as the substance did. There were moments, little remembered or appreciated by his critics, when Clinton demonstrated icy resolve and an indifference to polls: the budget showdown with Newt Gingrich and Congressional Republicans in 1995; the bombing of Serbia in 1999 to stop its aggression in Kosovo. More often, though, Clinton seemed determined to confirm his reputation as an agonized, late-night decision maker, a leader heavily influenced by the last guy to leave the room. Classic half-a-loaf policies like the “don’t ask, don’t tell” rule for gays in the military, along with frequent paralysis over crises like the genocide in Rwanda, created what would become an enduring impression that Clintonism was code for fecklessness.
Even so, such resentments were tempered by the fact that Clinton managed to deliver the White House not once but twice; among Democrats in the 20th century, only Woodrow Wilson and Franklin Roosevelt had done the same. He almost single-handedly pulled the Democratic Party back from its slide into irrelevance. Liberals swallowed hard and endured Clinton’s pragmatic brand of politics because they assumed that Clinton’s success would beget more success and, ultimately, a more progressive government.
Of course, it didn’t work out that way. First came the election of 2000, which Democrats believed was swiped from their grasp with little protest from the party’s Washington leaders. Next came compromises with George W. Bush on tax cuts and education reform. Then came the back-breaker: in the vote on the Iraq war resolution in 2002, many Democrats in Washington — including, most conspicuously, Hillary Clinton, then an unannounced presidential candidate — sided with President Bush in a move that antiwar liberals could only interpret as a Clintonian calculation to look tough on terror. If so, a lot of good it did; Congressional Democrats were demolished at the polls a few weeks later.
After that defeat, many longtime liberals, often coming together in the new online political space, began to voice a different thought: What if they had gone along with Clintonism for nothing? What if the path to victory lay not in compromising with Republicans but in having the fortitude to fight ruthlessly and to defend your own convictions, no matter how unpopular they might be? This was the moment in which Howard Dean’s explosive presidential campaign — and the grass-roots progressive movement it spawned — began to flourish. It was grounded in the idea that Clintonism, far from representing the postindustrial evolution of Democratic thought, had corrupted the party of the New Deal and the Great Society — and, taken to its logical end, had led Democrats and the country into a catastrophic war.
Even before they knew for sure that she was running for the presidency, Hillary Clinton’s top aides had to figure out how best to handle the growing tumult inside their own party. As a senator, Clinton had been, if anything, more centrist than her husband; she worked across the aisle with the likes of Bill Frist and Lindsey Graham, and her voting record on foreign policy placed her among the most conservative Democrats, only a few paces to the left of Joe Lieberman. There is no reason to think such stances on the issues didn’t accurately reflect Hillary’s convictions, but they had the added bonus of positioning her as eminently moderate and “electable” — both in New York State, where she won 67 percent of the vote in her 2006 re-election, and in the rest of the country.
The party, however, seemed to be moving in a different direction. Liberal activists online and in the states, in the wake of Dean’s losing campaign, were noisily demanding more confrontation and less Clintonian compromise from their Washington leaders. By the time Hillary Clinton formally announced her candidacy for president, a group of these activists — money guys, bloggers, MoveOn.org — had just combined forces to knock off Lieberman in a stunning primary upset (although Lieberman did manage to retain his seat in the general election), and these same grass-roots Democrats were lashing out at Clinton for her vote to authorize the invasion of Iraq. Some Clinton supporters in Washington thought they could see an ominous train coming down the track, and they wondered if the candidate didn’t need to get some distance between herself and her husband’s legacy, to position herself as a more partisan Democrat before it was too late.
Mark Penn steadfastly disagreed. Penn, who was Bill Clinton’s chief pollster during the ’90s, had emerged as Hillary’s most influential strategist. He had argued for years, going back to the Clinton White House, that Democrats won when they occupied the bipartisan, common-sense center of the political spectrum. And even in a primary campaign, Penn said he believed that Democrats had such personal loyalty toward the Clintons that they would forgive a few ideological differences they might have with the senator, especially if they thought those differences made her palatable to a wide swath of independent voters. When I suggested to Penn, back in 2005, that there might be a strong backlash emerging against the notion of Clintonism, he waved me away. “Strong backlash?” Penn scoffed, reminding me that the former president had a 70 percent approval rating in the country as a whole. “In this environment, that is a notion I would have to laugh at.”
In the end, Hillary Clinton tried to straddle the line. She broke with her husband in small but significant ways. She criticized the free-trade policies that he had long championed but that were now anathema to much of the Democratic base. She promised to abandon “don’t ask, don’t tell” and to amend the Defense of Marriage Act, which Bill Clinton signed. At the same time, Hillary Clinton has, from the start, reminded voters that she was a crucial member of her husband’s White House. (“I was deeply involved in being part of the Clinton team,” she said at a recent debate, in response to a question about foreign policy.) Vowing to be a pragmatic, bipartisan president, she signed on to lead an initiative with the D.L.C. and welcomed the endorsement of such figures as Robert Rubin, the Clinton Treasury secretary whose push for deficit reduction in the early ’90s has made him a lasting figure of revulsion for anti-corporate liberals. Despite intense pressure from John Edwards and Barack Obama, she publicly refused to swear off donations from industry lobbyists, and she spoke out in favor of a House vote to approve a new free-trade agreement with Peru. At the YouTube/CNN debate in July, she pointedly refused to describe herself as a liberal.
When Clinton, alone among the party’s presidential hopefuls, voted in September for a Senate resolution labeling the Iranian Revolutionary Guard as a terrorist group, a resolution the other Democrats charged would empower Bush to pursue yet another military strike, it looked to a lot of Democrats like an all-too-familiar Clintonian dash toward the center. Clinton seemed to be feeling secure as the front-runner and already looking ahead to the general election, where she planned to occupy the same moderate space her husband had. By then, though, voters in Iowa and New Hampshire had begun to pay closer attention to the race, and the attacks on Clintonism were beginning to resonate.
There are at least three different angles from which Edwards and Obama have tried, often subtly, to trash Clintonism without criticizing the former president himself. The first might be called the triangulation story line. Edwards unsheathed the word like a poison-tipped arrow at the same YouTube debate where Hillary Clinton declined to be called a liberal. “Do you believe that compromise, triangulation, will bring about big change?” he asked the audience. “I don’t.” Thwang. Since then, Edwards has at every opportunity tried to encourage liberal voters in their view that the Clinton era was a time of craven calculation and surrender to the conservative movement. In October, after Clinton was asked in a debate if she supported a New York State plan to give driver’s licenses to illegal immigrants — and after she tried to twist her way out of answering with such tenacity that she nearly invented a new yoga position — the Edwards campaign released a video titled “The Politics of Parsing,” which showed Clinton contradicting herself on other issues too. The subtext was clear: Do you really want to go through all that again?
Obama, who once vowed to adhere to the “new politics” of genial campaigning, has picked up on this same triangulation theme with evident enthusiasm in recent months. In Spartanburg, S.C., last month, he said that Clinton had been running a “textbook” campaign — whose textbook wasn’t hard to discern — that “encourages vague, calculated answers to suit the politics of the moment, instead of clear, consistent principles about how you would lead America.” Later in the month, at a dinner for leading Iowa Democrats, Obama used the dreaded epithet itself. “Triangulating and poll-driven positions because we’re worried about what Mitt or Rudy might say about us just won’t do,” he said, as Hillary Clinton sat a few feet away.
The second narrative aimed at the Clinton years, pursued mostly by Edwards, is the one about corporate corruption. This one argues that Bill Clinton turned the Democratic Party into a holding company for Wall Street financiers, pursuing a series of economic policies that were bad for workers but kept the party flush with cash. By this theory, balanced budgets and free trade were more about winning elections at any cost than they were about creating an expansive economy, and they led directly to the Bush epoch and its alarming inequality. This is why Edwards spent weeks hammering at Clinton over her continued acceptance of lobbyists’ money (despite his own reliance on donations from trial lawyers, who do plenty of lobbying themselves). The point was to remind voters that when Bill Clinton rented out the Lincoln Bedroom, Hillary was sleeping down the hall.
Obama, meanwhile, has been going after the Clinton legacy with a third story line: Boomer fatigue. Never mind whether Bill Clinton or Newt Gingrich was to blame, Obama says — the point is that the two parties had each other in a death grip throughout the ’90s, and vital business went unfinished as a result. If you really want things to stay that way, he says, then vote for another Clinton and watch these self-obsessed baby boomers go at it all over again. When Obama leaned on Hillary Clinton for not pushing to declassify all of her papers from the Clinton White House, he was offering voters a reminder of all the lawyers and investigations, the missing billing records, the constant subpoenas for cabinet members that never seemed to go away.
“You have to be careful to be honest, and being honest means giving President Clinton his full due,” David Axelrod, Obama’s main strategist, told me not long ago. “I don’t think Obama is arguing that Bill Clinton is a bad person or a bad president, or that Hillary Clinton is a bad person or a bad senator. That’s not what we’re saying. We’re saying that we have to move forward and get beyond these old battles.”
By taking on the Clinton legacy through imagery and innuendo, Hillary’s rivals seem to have brought to the surface feelings of profound ambivalence, among many voters, about what that era really meant. She still holds a substantial lead in national polling, but in Iowa a flurry of recent polls has shown Clinton tied with Obama, and her lead among women there — a critical piece of her formula for victory — has eroded precipitously. According to a Washington Post-ABC News poll earlier this month, only half the voters thought Clinton was “willing enough” to say what she “really thinks about the issues,” compared with three-quarters for her two main rivals. Perhaps more troubling for the Clinton camp, the race in New Hampshire, where the Clintons are essentially family, appeared to have tightened considerably. While polls from New Hampshire have varied widely, making their reliability something of a guessing game, a poll jointly conducted a few weeks ago by WMUR in Manchester and CNN found that Clinton’s 20-point lead there had completely evaporated.
Clinton’s aides described all this as the inevitable dynamic of a race in its later stages, when voters really focus on their choices for the first time. But as Iowa edges closer, their campaign has seemed on the edge of panic. Earlier this month, Clinton, who had always tried to appear vaguely amused at her opponents’ antics, started flailing away at Obama. First she assailed him for saying he hadn’t always wanted to be president when, in fact, he wrote an essay in kindergarten saying that he did intend to one day occupy the Oval Office. (She shrewdly left out the fact that every other 5-year-old in America says the exact same thing.) On that same day, Wolfson, her communications director, appearing on “Face the Nation,” charged that Obama had been operating a “slush fund” through his political action committee. Then one of Clinton’s national campaign co-chairmen in New Hampshire pointedly suggested that Obama, who has admitted to using drugs when he was younger, would be vulnerable, as the nominee, to questions about whether he gave drugs to others or even sold them. That was too much for the candidate herself, who felt compelled to apologize personally.
For his part, Bill Clinton has tried to restrain himself. In his later years, the Big Dog, as bloggers sometimes refer to him, has transcended politics and even ordinary celebrity; like Paul McCartney or Muhammad Ali, Clinton is now a historical figure who remains a breathing, walking presence, and when he enters a room of strangers, even those who didn’t vote for him react as if witnessing a small miracle. On Veterans Day, as I trailed Clinton through South Carolina, he dropped in on Jack’s Cosmic Dogs, where he ordered up a chili dog with fries — now that his foundation was on a crusade against childhood obesity, Clinton told me with mock gravity, it was vital that he sample the offending food every so often — and made his way to all the tables so the customers could swoon and take pictures.
“Oh, these iPhones take good pictures!” he exclaimed to one young mother as she looked around for a volunteer photographer to snap her portrait with the former president. A few minutes later, I heard him talking into another woman’s cellphone while she looked on nervously. “Hi, there, this is Bill Clinton! No, seriously! It is!”
As he doused his fries in ketchup, Clinton told me that he was generally more inclined to want to “pop back” at Edwards or Obama than his wife was, but he had to remind himself that Hillary was plenty capable of defending herself. There have been reports in the last few weeks about Clinton’s lashing out at strategists and meddling in his wife’s campaign; insiders say this has been exaggerated, but some of Clinton’s friends and former advisers told me that the attacks from rivals irritate Clinton a lot more now, when they are directed at his wife, than they did when he was running. (“As a candidate, he was absolutely bulletproof — it never bothered him,” says Paul Begala, one of Clinton’s 1992 advisers.) What he takes even more personally — and should, really — is the unmistakable premise that underlies the sniping, that somehow his own presidency was bad for the country and the party.
On those rare occasions when the former president hasn’t been able to resist defending his wife or burnishing his own record, the results haven’t been especially helpful. Unlike Hillary Clinton and her team of advisers, who are relentlessly on message and disciplined, Bill Clinton is a more instinctual politician, given to improvisational moments that must torment his wife’s obsessive-compulsive aides. In November, Clinton suddenly asserted during a campaign appearance in Iowa that he opposed the invasion of Iraq from the beginning — an aside that he needn’t have offered and that clearly contradicted not only his wife’s Congressional vote but his own statements in the build-up to the war. Aides told me that he had simply misspoken, and that seemed plausible enough, but the minor incident only served to reinforce the image that Edwards and Obama were doing their best to conjure. In trying, perhaps unconsciously, to exonerate himself among his persistent liberal critics, Clinton reminded even sympathetic voters of the qualities that had made him seem maddeningly incapable of standing on principle or admitting fault. Here was the statesman Bill Clinton, wizened and mature, telling us once again that he didn’t inhale.
There is, however, a rich paradox in the strategy that Obama and Edwards are employing in their quest to dislodge Clinton from her perch atop the field. The plain fact is that, for all their condemnation of Bill Clinton’s governing philosophy, both Obama and Edwards — and just about every other Democratic candidate in the field, with the possible exception of Dennis Kucinich, who seems to have been teleported straight from 1972 — spend a fair amount of time imitating him. So thorough was Clinton’s influence on Democratic politics, so transformative were his rhetoric and his theory of the electorate, that Democrats don’t even seem to realize anymore the extent to which they owe him their political identities.
Obama can rail about poll-tested positions and partisanship if he wants, but some of his most memorable speeches since being elected to the Senate have baldly echoed Clintonian themes and language. He has repeatedly called on poor African-Americans to take more responsibility for their parenting and their children’s education, and he has been skeptical of centralized federal programs for the poor, advocating a partnership between government and new kinds of community-based nonprofits. He has railed against “a mass-media culture that saturates our airwaves with a steady stream of sex, violence and materialism.” Such “values” stances were far outside the mainstream of the party before Bill Clinton expressed them.
In an impressive 2005 commencement speech at Knox College, Obama talked about economic transformation. “Instead of doing nothing or simply defending 20th-century solutions, let’s imagine together what we could do to give every American a fighting chance in the 21st century,” he said. “What if we prepared every child in America with the education and skills they need to compete in the new economy? If we made sure that college was affordable for everyone who wanted to go? If we walked up to those Maytag workers and said, Your old job is not coming back, but a new job will be there because we’re going to seriously retrain you and there’s a lifelong education waiting for you?
“Republicans will have to recognize our collective responsibilities,” he went on, “even as Democrats recognize that we have to do more than just defend old programs.” Bill Clinton could have spoken those exact words in 1991. In fact, it would be hard to find a better summation of the substance behind Clintonism.
Similarly, Edwards, doing his best William Jennings Bryan impression, lashes out at the policy priorities of the ’90s and at poverty deepened by corporate venality, but his arsenal of specific proposals includes expanding the Earned Income Tax Credit and accelerating the process of moving people out of public housing and into mixed-income neighborhoods. These new ideas are actually extensions of Clinton-era programs; they may be notable for their boldness but not for their originality. And even Edwards, in criticizing the lack of aid for poor Americans, has constructed his ambitious agenda on the central premise that people should get assistance only if they’re willing to work for it. In today’s environment, this hardly qualifies as noteworthy — there’s no serious Democratic candidate who would propose anything else — but it represents a marked shift from the party’s stance on welfare programs before Clinton started talking about those who “work hard and play by the rules.”
“Despite all the protestations, Clinton’s third-way politics and governing philosophy have as much of a hold on these Democratic candidates as the New Deal mind-set did on generations before,” says Jonathan Cowan, whose think tank, Third Way, has emerged as the next iteration of the D.L.C. “Clinton’s politics have basically become the DNA of Democrats seeking the White House, and it’s almost certain that they would all govern from that Clintonian center if they actually became president.” Even the party’s leaders in Congress, newly empowered by an uprising against Republican hegemony, continue to speak in the measured tones of Clintonian centrism.
Clinton’s rhetorical influence, in fact, spans not just the Democratic Party but really the entire spectrum of American politics. Today politicians throw around phrases like “the new economy” or “the information age” as if they have always been part of the political lexicon, and yet most ordinary voters didn’t really grasp that America was undergoing a profound upheaval — moving from an industrial economy to one centered on intellectual and service industries — until Clinton showed up to masterfully explain it. Few American politicians talked about “globalization” before Clinton, as a candidate, stood on factory floors and argued that the next era’s economy would be nothing like the last, and that for workers, the transition would be painful but also full of promise. Clinton wasn’t the first candidate to grasp this change and to put it into words, but he was by far the most persuasive. He also articulated a philosophy of how to deal with these challenges that transcended the binary ideological struggle between outright entitlement and Darwinian self-reliance. When you go into a hospital now and see a placard on the wall that lists a patient’s “rights” directly opposite his “responsibilities” as a citizen, that’s Clinton’s influence. At its best, Clintonism represented a more modern relationship between government and individuals, one that demanded responsibilities of both.
Words aren’t the same thing as achievements, of course, but at critical points in history, they can move a country forward by modernizing the debate, and in this way, Clinton’s comparing himself with Theodore Roosevelt, the president who dragged politics into the industrial age, is apt. Perhaps it’s true that Clinton’s presidency will be remembered as a series of lost opportunities — “the Great Squandering,” as the historian David Kennedy recently described it to me. But it’s also possible that history will record Bill Clinton as the first president of the 21st century, the man who synthesized the economic and international challenges of the next American moment, even if he didn’t make a world of progress in solving them.
This may be the defining difference between the candidacies of Bill Clinton and his wife, between Clintonism and Hillaryism, if such a thing can be said to exist. Like most successful outsiders, Bill Clinton directly challenged the status quo of both his party and the country, arguing that such a tumultuous moment demanded more than two stark ideologies better suited to the past. By contrast, Hillary Clinton’s campaign to this point has been mostly about restoring an old status quo; she holds herself up as the best chance Democrats have to end eight years of Bush’s “radical experiment” and to return to the point where her husband left off. It has been a strong but safe campaign, full of nondescript slogans (“I’m In to Win!” “The Change We Need!”) and familiar, if worthy, policy prescriptions. That might be a shrewd primary strategy, but winning a general election could well require a more inspiring rationale. Nonincumbents who go on to win the White House almost always take some greater risk along the way, promising changes more profound — if potentially more divisive — than a return to normalcy. The reformer runs great danger. The more cautious candidate merely runs.
Matt Bai, who covers national politics for the magazine, is the author of “The Argument: Billionaires, Bloggers and the Battle to Remake Democratic Politics.” www.mattbai.com.
Friday, December 21
The Partisan: A review of Paul Krugman's 'The Conscience of a Liberal'
from the New York Review of Books:
The Partisan
By Michael Tomasky
a review of:
The Conscience of a Liberal
by Paul Krugman
Norton, 296 pp., $25.95
Difficult as it is to remember now, there was a time in the United States, as recently as fifteen or so years ago, when we were not engaged in constant political warfare. In those days Senator Max Cleland, who lost three limbs in a war, would not have been visually equated with Saddam Hussein in a television ad, something the Republicans did to him in 2002. The release of a declaration by, for example, the National Academy of Sciences was for the most part acknowledged as legitimate, and not attacked as a product of so-called liberal bias as its 2005 report on global warming was.[1]
We can regret, as it is customary to do, the loss of civility in political discourse (although such laments tend to assume a golden era that wasn't quite as civil in reality as it is in the memories of those who mourn its passing). But the nakedness of the modern right's drive for political power and of the Bush administration's politicization of so many aspects of governance and civic life has, paradoxically, given us one thing to be grateful for. Liberals and Democrats now understand much more plainly the nature of the fight they're in. Some recognized this early on: many of those who worked on the Clinton administration's health care plan understood back in 1994, as Paul Starr, one veteran of that effort, puts it, that the Republicans would not compromise on the plan under any circumstances because "if it succeeded, it might renew New Deal beliefs in the efficacy of government, whereas a defeat of the health plan could set liberalism back for years."[2]
Others realized what was happening only much later, after the impeachment of Clinton, the foreshortened election in Florida, and the administration's post–September 11 policies, including its brutal violations of civil liberties and its invasion and occupation of Iraq. Why it took such people so long to recognize reality is an interesting question; but now, in plenty of time for next year's presidential election, Democrats and liberals seem more prepared than usual to put up a fight.
Many liberals would name Paul Krugman of The New York Times as perhaps the most consistent and courageous—and unapologetic—liberal partisan in American journalism. He has made his perspective on the Bush administration and the contemporary right, and on the need to see politics as a battle, manifestly clear in column after incendiary column. Indeed, of all the ways he could have concluded The Conscience of a Liberal, he chose to do so with a short essay that appears under the headline "On Being Partisan," which notes:
The central fact of modern American political life is the control of the Republican Party by movement conservatives, whose vision of what America should be is completely antithetical to that of the progressive movement. Because of that control, the notion, beloved of political pundits, that we can make progress through bipartisan consensus is simply foolish....
To be a progressive, then, means being a partisan—at least for now. The only way a progressive agenda can be enacted is if Democrats have both the presidency and a large enough majority in Congress to overcome Republican opposition.
From the Krugman we've come to know through his New York Times columns, this assertion is hardly surprising. But the pre-Bush Krugman was a quite different person. He was in those days far more prominently an economist than a polemicist, and an illustrious one—in 1991 he won the John Bates Clark medal from the American Economic Association, an award for economists under forty said to be perhaps harder to win (or at least given out more sparingly) than the Nobel Prize in economics. As such, he was not much involved in partisan politics. He was always a liberal, to be sure, and highly critical of supply-side economics. But he also disparaged economists to his left, especially opponents of free trade (a view he defends less staunchly now than then). In his 1994 book, Peddling Prosperity, Krugman had harsh words for liberal economists such as Robert Reich and Lester Thurow, who advocated selective protection from foreign competition, particularly from countries in which poor workers were harshly exploited. Reviewing Peddling Prosperity in these pages,[3] Benjamin M. Friedman took note of Krugman's disdain for such "strategic traders" and their attempts to exert influence within the Clinton administration in favor of protectionism and subsidies for selected domestic industries. If you were a radical economist in those days, or a labor movement intellectual, or a left-leaning social scientist, chances are you weren't a big fan of Paul Krugman.
On the strength of his academic work and his attention-getting columns in Slate, Krugman started writing in the Times on January 2, 2000. He made a point of challenging antiglobalization activists in his very first column. ("It is a sad irony that the cause that has finally awakened the long-dormant American left is that of—yes!—denying opportunity to third-world workers.") During that election year, he regularly and strongly criticized George W. Bush's tax-cut proposal and his opaque statements about Social Security. But he limited his critiques to economic policy and, compared to his Op-Ed-page colleagues, didn't write about politics all that often, emphasizing such topics as the power of Microsoft, the future of Amazon.com, the policies of the Federal Reserve, and Japan's economic woes.
About Bush v. Gore, he had little to say. After Bush took office, he savaged the administration's regressive tax cuts. But it wasn't really until the fall of 2002, as the marketing of the Iraq war began in earnest, that he began broadening his criticism beyond economics to the war and the threats to civil liberties, extending his critique to the larger conservative movement and its modus operandi, and discussing the mainstream press's failure to report what was really going on right in front of their noses. It was around then that Krugman began producing with regularity the kinds of columns that whiz their way around the liberal blogosphere. Aficionados may even remember certain columns, like the one headlined "Dead Parrot Society," from October 25, 2002, in which Krugman cited some typical Orwellian phrases from the press, as when a reporter wrote that President Bush's statements on such matters as "Iraq's military capability" were "dubious if not wrong" and that Bush had "taken some flights of fancy." Krugman wrote:
Reading all these euphemisms, I was reminded of Monty Python's parrot: he's pushing up the daisies, his metabolic processes are history, he's joined the choir invisible. That is, he's dead. And the Bush administration lies a lot.[4]
So Krugman came a bit late to the political trenches—and perhaps a bit reluctantly. Just as Arnold Schoenberg said of himself, when asked by a stranger if he was indeed the controversial composer, that "nobody wanted to be, someone had to be, so I let it be me," so I suspect Krugman might say that virtually no one on the leading Op-Ed pages was saying the things that so obviously needed to be said as the Iraq war approached, so he let it be himself who said them.
And now, after years of twice-weekly deadlines, he appears to have decided that there's no turning back: The Conscience of a Liberal, with its title so clearly aping and answering Barry Goldwater's from forty-seven years ago, represents Krugman's fullest embrace of his polemicist identity. He has written some twenty books, popular and scholarly; this is the first that chiefly sees economics through the lens of politics rather than the other way around.
He makes this clear in the first chapter, with an argument that seems self-evident to people who see the world in political terms but will surely strike some of his fellow economists as an apostasy. Krugman begins by observing that he was born (in 1953) at a time when expanding prosperity was a given and inequality was on the decline, and when political polarization was at a comparative low. Noting that both of those conditions changed starting in the 1970s and asking what happened, he observes that two narratives define the march of history, one political and one economic. The normal economist's view is that economic changes drive political changes. Thus, the Depression led to the New Deal, for example. In our time, global competition, the IT revolution, and the demand for high skills led to higher inequality, which in turn meant a shrinking constituency for a populist politics and a larger constituency, among the winners, for the kind of top-down, class-warfare politics that today's GOP engages in.
Krugman had always believed—even "when I began working on this book"—that this was how things unfolded. "Yet," he writes of our era:
I've become increasingly convinced that much of the causation runs the other way—that political change in the form of rising polarization has been a major cause of rising inequality. That is, I'd suggest an alternative story for the last thirty years that runs like this: Over the course of the 1970s, radicals of the right determined to roll back the achievements of the New Deal took over the Republican Party, opening a partisan gap with the Democrats.... The empowerment of the hard right emboldened business to launch an all-out attack on the union movement, drastically reducing workers' bargaining power; freed business executives from the political and social constraints that had previously placed limits on runaway executive paychecks; sharply reduced tax rates on high incomes; and in a variety of other ways promoted rising inequality.
Elsewhere in the book, he explicitly, and in more detail, rebuts the view that market forces like technological change, immigration, and growing trade could possibly account for today's dramatic levels of inequality. He argues that what has happened is
largely due to changes in institutions, such as the strength of labor unions, and norms, such as the once powerful but now weak belief that having the boss make vastly more than the workers is bad for morale.
This concern about institutions and norms is at the heart of Krugman's argument.
The earlier chapters trace roughly the last one hundred years of American history from one gilded age to an era of general prosperity to a time of tumult (the 1970s) and finally back to a new gilded age. The arguments are brief, clearly organized, and crisply to the point. Each chapter ends with a little one- or two-paragraph cliffhanger that sets up the next:
Why have advocates of a smaller welfare state and regressive tax politics been able to win elections, even as growing income inequality should have made the welfare state more popular? That's the subject of the next chapter.
Chapters three and four are the crucial ones in Krugman's historical overview of the American economy since the Depression, since they describe how an aggressively liberal politics of an earlier time spelled the end of a long age of inequality and how those liberal policies, once thought to be radical, became mainstream. Chapter three, the New Deal chapter, explains—one gets the feeling that his real intended audience here is today's Democrats—how it was the political choices made by Roosevelt and his colleagues, and not impersonal economic forces, that lifted the country out of depression and poverty and built the middle class. The result was, in the phrase of two economic historians he cites, "the Great Compression," the narrowing of the gap between the rich and the rest, and the reduction in wage differentials among workers.
Three decisions by the government stood out. The first was raising taxes on the rich. The wealthiest Americans went from paying a top rate of 24 percent in the 1920s to 63 percent during FDR's first term and 79 percent by his second. By the mid-1950s, it was 91 percent (today's top rate is 35 percent). Corporate and estate taxes went up as well. The second decision was to make it easier for workers to unionize: in consequence, union membership tripled from 1933 to 1938, and then almost doubled again by 1947. The third decision, made after Pearl Harbor, was to use the National War Labor Board to encourage employers to raise the wages of the lowest-paid workers. And after the war ended, "the amazing thing is that the changes stuck."
These decisions dramatically reduced inequality and, far from having the cataclysmic effects on the economy predicted by conservatives at the time, they led to the postwar boom. (He emphasizes that the rich then were far less rich than they are today, a point to which he returns several times throughout the book.) And then, because they were so successful, the decisions he describes became widely accepted after the war.
This is the subject of Krugman's fourth chapter—how the decline in inequality led to a decline in political polarization. When Harry Truman won the 1948 election, the GOP dropped its project of trying to repeal the New Deal. After that election, "the Republican Party survived—but it did so by moving toward the new political center." He cites the work of three political scientists—Keith Poole, Howard Rosenthal, and Nolan McCarty—who have studied the different degrees of polarization and cooperation in every Congress since the nineteenth century and who found, sure enough, that the Congresses of the 1950s saw far more ideological overlap between the parties than did the Congresses of the 1920s or the current decade. Things were looking almost too good:
In sum, between 1948 and sometime in the 1970s both parties accepted the changes that had taken place during the Great Compression [of inequality]. To a large extent the New Deal had created the political conditions that sustained this consensus. A highly progressive tax system limited wealth at the top, and the rich were too weak politically to protest. Social Security and unemployment insurance were untouchable programs, and Medicare eventually achieved the same status. Strong unions were an accepted part of the national scene.
The story, as we know, takes a dark turn from this point. Chapters follow on the social turmoil of the 1960s (still a bucolic time economically, Krugman points out); the rise of "movement conservatism" in the 1970s, against the backdrop of the oil crisis and defeat in Vietnam; the emergence of Ronald Reagan and the push toward a new inequality; the right-wing war against unions and the collapse of the 1949 "Treaty of Detroit" that ensured good wages and benefits and labor peace in the auto industry; and the "weapons of mass distraction," i.e., the racial demagoguery and the social issues that Republicans have used to such triumphal effect, as for example in Reagan's "thinly disguised appeals to segregationist sentiment" to win Southern states in 1980; and, more recently, the Bush administration's effective use of coded language to address the Christian right.
This story, of course, has been told many times with different emphases. Krugman's version is well worth reading, for two reasons. First, his embrace of the idea that politics rather than economics has created our present-day inequality gives us a sense of political agency. If politics created this mess, then better politics may be able to do something about it. Krugman suspects "that the 2006 election wasn't an aberration, [and] that the US public is actually ready for something different—a new politics of equality." This is the subject of the book's last three chapters, which discuss inequality and health care and flesh out his ideas for "an unabashedly liberal program of expanding the social safety net and reducing inequality—a new New Deal."
The chapter on health care is probably the moral core of The Conscience of a Liberal—the one chapter above all the others that he would wish readers and policymakers to take to heart. That this is so is perhaps a further sign of the ascendancy of Krugman the polemicist (also the fact, which will undoubtedly seem strange to some economists, that the book barely discusses trade and globalization at all, thus omitting comment on highly contentious political issues among liberals). He lays out the weaknesses of the American system compared to the systems of other advanced countries, describes the way the health care crisis has built up in America since 1965, when Medicare and Medicaid became law, and analyzes the obstacles to reform during those forty years. The need for universal health care may seem obvious, but he makes the case for it with great clarity. He supports a competition between public and private insurers to offer affordable care, with mandated coverage, subsidies for low-income families, and a so-called "community rating" system to control the cost of premiums. This plan would resemble to some extent the German system of providing health insurance through regulated "sickness funds" and would look a lot like what John Edwards, Barack Obama, and Hillary Clinton have proposed (Obama's plan would not mandate coverage for all, at least at the beginning).
But Krugman doesn't want health care only for health care's sake:
There is, however, another important reason for health care reform. It's the same reason movement conservatives were so anxious to kill [Bill] Clinton's plan. That plan's success, said [William] Kristol, "would signal the rebirth of centralized welfare-state policy"—by which he really meant that universal health care would give new life to the New Deal idea that society should help its less fortunate members. Indeed it would—and that's a big argument in its favor.
Health care is the key to the better politics that Krugman demands. Will the time be right for a new system, in early 2009, if we have a Democratic president and Democratic majorities in Congress? The opposition will be intense, for the reasons that both Krugman and Paul Starr lay out. But it does seem that Democrats and advocates have learned most of the right lessons from the debacle of Hillary Clinton's plan in 1994 and have regained some courage for a fight. And certainly, as larger numbers reach retirement, the crisis for people who lack the care they should have is deeper than it was then. So the opportunity should be greater, and public support stronger, than it was when the Clintons' efforts were defeated.
The second element of Krugman's account that gives it special value is its commitment to accurate history even when some fudging might be in order for the sake of political expediency. For example, it is in my recent experience not at all unusual to hear liberals say that, compared to Bush, Ronald Reagan wasn't really all that bad; this willingness to keep alive the image of Reagan as an avuncular and well-meaning sort—wrong but not at all malevolent—is somehow seen, I think, as strengthening the case against Bush. It is the language of persuasion, intended to have a mollifying effect on people wary of Democrats.
But Krugman is unimpressed. He detests Reagan. He notes, more than once, that Reagan officially opened his 1980 presidential campaign in Philadelphia, Mississippi, where James Chaney, Andrew Goodman, and Michael Schwerner had been murdered in 1964, avowing his support for states' rights. (I suppose it's progress of a sort that such an act of reactionary symbolism seems inconceivable for a national candidate in today's America.) He discusses Reagan's "remarkable callousness" in joking, in a famous speech, also in 1964, that "we were told four years ago that 17 million people went to bed hungry each night.... Well, that was probably true. They were all on a diet." And most egregious of all, he quotes Reagan on his promise to repeal California's fair housing act when he was running for governor in 1966: "If an individual wants to discriminate against Negroes or others in selling or renting his house, he has a right to do so."
Similarly William F. Buckley Jr., who has renounced the Iraq war and was host to many liberals during his Firing Line days, is regarded as one of the more "reasonable" conservatives. But Krugman finds, in an early issue of Buckley's National Review, editorial conclusions about race that went beyond the public views of right-wing politicians. Krugman writes:
Today leading figures on the American right are masters of what the British call "dog-whistle politics": They say things that appeal to certain groups in a way that only the targeted groups can hear—and thereby avoid having the extremism of their positions become generally obvious.... Reagan was able to signal sympathy for racism without ever saying anything overtly racist.... George W. Bush consistently uses language that sounds at worst slightly stilted to most Americans, but is fraught with meaning to the most extreme, end-of-days religious extremists. But in the early days of the National Review positions were stated more openly.
Thus in 1957 the magazine published an editorial celebrating a Senate vote that, it believed, would help the South continue the disenfranchisement of blacks.
"The central question that emerges—and it is not a parliamentary question or a question that is answered by merely consulting a catalog of the rights of American citizens, born Equal—is whether the White community in the South is entitled to take such measures as are necessary to prevail, politically and culturally, in areas in which it does not predominate numerically? The sobering answer is Yes—the White community is so entitled because, for the time being, it is the advanced race."
Krugman introduces the information about Reagan and the National Review as evidence for his argument, made at length in The Conscience of a Liberal, that modern movement conservatism is at its deepest core most fundamentally about, and built on, race. For example, comparing northern and southern states with similar characteristics, such as Massachusetts and Virginia, he writes that "in most though not all cases the more southerly, blacker state is far more conservative. It's hard not to conclude that race is the difference."
Whatever one concludes about this thesis—Krugman himself refers to the very recent "blueing" of Virginia and other signs of a decrease in racism—the intensity of his discussion of race suggests something about the ways he has moved, in the last fifteen or so years, from being a center-left scholar to being a liberal polemicist. Those may seem like different identities. But perhaps also there is a consistency at work here. From what I have read of his economic writings, they are not unlike his columns, or his attacks on Reagan and the National Review in his book, in the sense that persuasion of people with very different views is at best of secondary interest to him. What is of interest to him is describing things as he believes they are.
In Washington, this earns one the epithet—as Washington prefers to think of it—"partisan." But too many people who are also granted valuable journalistic space spent the early Bush years in denial about the evidence that was accumulating right before their eyes, whether about official lies, or executive overreach, or rampant class warfare waged on behalf of the richest one percent against the rest of us. Mildly deploring some of these excesses while accepting others is what is meant by bipartisanship today, and Krugman is right to have none of it. As a result he has left us a much more accurate record of the Bush years than, say, The Washington Post's David S. Broder, or some of his more celebrated New York Times colleagues.
There may again someday be a time when bipartisanship can flourish as it did in the mid-twentieth century. As Krugman writes,
The great age of bipartisanship wasn't a reflection of the gentlemanly character of an earlier generation of politicians. Rather, it reflected the subdued nature of political conflict in an era when the parties weren't that far apart on basic issues.
If movement conservatism is marginalized, and hard-line conservatives become once again no more than a faction in a more heterogeneous GOP, liberals should welcome consensus politics. In the meantime, the reality Krugman describes is the one that matters.
Notes
[1] I refer specifically to the June 2005 declaration, signed by the national academies of the G8 nations and three others, that the science of global warming is real enough to merit prompt action on reduction of greenhouse gases. Amid intense conservative criticism, the White House began the next month to retreat from the declaration's language.
[2] See Paul Starr, "The Hillarycare Mythology," The American Prospect, October 2007, an excellent analysis of how the plan was constructed and sold (for better and worse), and of the forces that worked to defeat it.
[3] "Must We Compete?," The New York Review, October 20, 1994.
[4] Every Krugman column is collected at the Unofficial Paul Krugman Archive, at www.pkarchive.org.