The divisions between black and white and rich and poor begin almost at birth, and they are reinforced every day of a child’s life. And yet the schools provide evidence that the president is, in his most basic understanding of the problem, entirely right: the achievement gap can be overcome, in a convincing way, for large numbers of poor and minority students, not in generations but in years. What he and others seem not to have apprehended quite yet is the magnitude of the effort that will be required for that change to take place.
But the evidence is becoming difficult to ignore: when educators do succeed at educating poor minority students up to national standards of proficiency, they invariably use methods that are radically different and more intensive than those employed in most American public schools. So as the No Child Left Behind law comes up for reauthorization next year, Americans are facing an increasingly stark choice: is the nation really committed to guaranteeing that all of the country’s students will succeed to the same high level? And if so, how hard are we willing to work, and what resources are we willing to commit, to achieve that goal?
In the years after World War II, and especially after the civil rights reforms of the 1960s, black Americans’ standardized-test scores improved steadily and significantly, compared with those of whites. But at some point in the late 1980s, after decades of progress, the narrowing of the gap stalled, and between 1988 and 1994 black reading scores actually fell by a sizable amount on the national assessment. What had appeared to be an inexorable advance toward equality had run out of steam, and African-American schoolchildren seemed to be stuck well behind their white peers.
The issue was complicated by the fact that there are really two overlapping test-score gaps: the one between black children and white children, and the one between poor children and better-off children. Given that those categories tend to overlap — black children are three times as likely to grow up in poverty as white children — many people wondered whether focusing on race was in fact a useful approach. Why not just concentrate on correcting the academic disadvantages of poor people? Solve those, and the black-white gap will solve itself.
There had, in fact, been evidence for a long time that poor children fell behind rich and middle-class children early, and stayed behind. But researchers had been unable to isolate the reasons for the divergence. Did rich parents have better genes? Did they value education more? Was it that rich parents bought more books and educational toys for their children? Was it because they were more likely to stay married than poor parents? Or was it that rich children ate more nutritious food? Moved less often? Watched less TV? Got more sleep? Without being able to identify the important factors and eliminate the irrelevant ones, there was no way even to begin to find a strategy to shrink the gap.
Researchers began peering deep into American homes, studying up close the interactions between parents and children. The first scholars to emerge with a specific culprit in hand were Betty Hart and Todd R. Risley, child psychologists at the University of Kansas, who in 1995 published the results of an intensive research project on language acquisition. Ten years earlier, they recruited 42 families with newborn children in Kansas City, and for the following three years they visited each family once a month, recording absolutely everything that occurred between the child and the parent or parents. The researchers then transcribed each encounter and analyzed each child’s language development and each parent’s communication style. They found, first, that vocabulary growth differed sharply by class and that the gap between the classes opened early. By age 3, children whose parents were professionals had vocabularies of about 1,100 words, and children whose parents were on welfare had vocabularies of about 525 words. The children’s I.Q.’s correlated closely to their vocabularies. The average I.Q. among the professional children was 117, and the welfare children had an average I.Q. of 79.
When Hart and Risley then addressed the question of just what caused those variations, the answer they arrived at was startling. By comparing the vocabulary scores with their observations of each child’s home life, they were able to conclude that the size of each child’s vocabulary correlated most closely to one simple factor: the number of words the parents spoke to the child. That varied greatly across the homes they visited, and again, it varied by class. In the professional homes, parents directed an average of 487 “utterances” — anything from a one-word command to a full soliloquy — to their children each hour. In welfare homes, the children heard 178 utterances per hour.
What’s more, the kinds of words and statements that children heard varied by class. The most basic difference was in the number of “discouragements” a child heard — prohibitions and words of disapproval — compared with the number of encouragements, or words of praise and approval. By age 3, the average child of a professional heard about 500,000 encouragements and 80,000 discouragements. For the welfare children, the situation was reversed: they heard, on average, about 75,000 encouragements and 200,000 discouragements. Hart and Risley found that as the number of words a child heard increased, the complexity of that language increased as well. As conversation moved beyond simple instructions, it blossomed into discussions of the past and future, of feelings, of abstractions, of the way one thing causes another — all of which stimulated intellectual development.
Hart and Risley showed that language exposure in early childhood correlated strongly with I.Q. and academic success later on in a child’s life. Hearing fewer words, and a lot of prohibitions and discouragements, had a negative effect on I.Q.; hearing lots of words, and more affirmations and complex sentences, had a positive effect on I.Q. The professional parents were giving their children an advantage with every word they spoke, and the advantage just kept building up.
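To get a rough sense of how quickly those hourly differences compound, here is a back-of-envelope sketch in Python. The 487 and 178 utterances-per-hour rates are the figures reported above; the 12 waking hours a day is an illustrative assumption of ours, not a number from Hart and Risley's study.

```python
# Back-of-envelope sketch, not Hart and Risley's published extrapolation.
# The utterances-per-hour rates come from the article above; the number of
# waking hours per day is an assumption made purely for illustration.

UTTERANCES_PER_HOUR = {"professional": 487, "welfare": 178}
WAKING_HOURS_PER_DAY = 12   # assumed
DAYS_TO_AGE_THREE = 3 * 365

totals = {
    group: rate * WAKING_HOURS_PER_DAY * DAYS_TO_AGE_THREE
    for group, rate in UTTERANCES_PER_HOUR.items()
}

for group, total in totals.items():
    print(f"{group:>12}: ~{total:,} utterances heard by age 3")

print(f"cumulative gap: ~{totals['professional'] - totals['welfare']:,} utterances")
```

Even under such crude assumptions, the cumulative gap runs to roughly four million utterances by age 3, which is the kind of steadily compounding advantage Hart and Risley described.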
Jeanne Brooks-Gunn, a professor at Teachers College, has overseen hundreds of interviews of parents and collected thousands of hours of videotape of parents and children, and she and her research team have graded each one on a variety of scales. Their conclusion: Children from more well-off homes tend to experience parental attitudes that are more sensitive, more encouraging, less intrusive and less detached — all of which, they found, serves to increase I.Q. and school-readiness.
Over the course of several years, the sociologist Annette Lareau and her research assistants observed a variety of families from different class backgrounds, basically moving into each home for three weeks of intensive scrutiny. Lareau found that the middle-class families she studied all followed a similar strategy, which she labeled concerted cultivation. The parents in these families engaged their children in conversations as equals, treating them like apprentice adults and encouraging them to ask questions, challenge assumptions and negotiate rules. They planned and scheduled countless activities to enhance their children’s development — piano lessons, soccer games, trips to the museum.
The working-class and poor families Lareau studied did things differently. In fact, they raised their children the way most parents, even middle-class parents, did a generation or two ago. They allowed their children much more freedom to fill in their afternoons and weekends as they chose — playing outside with cousins, inventing games, riding bikes with friends — but much less freedom to talk back, question authority or haggle over rules and consequences. Children were instructed to defer to adults and treat them with respect. This strategy Lareau named accomplishment of natural growth.
In her book “Unequal Childhoods,” published in 2003, Lareau described the costs and benefits of each approach and concluded that the natural-growth method had many advantages. Concerted cultivation, she wrote, “places intense labor demands on busy parents. ... Middle-class children argue with their parents, complain about their parents’ incompetence and disparage parents’ decisions.” Working-class and poor children, by contrast, “learn how to be members of informal peer groups. They learn how to manage their own time. They learn how to strategize.” But outside the family unit, Lareau wrote, the advantages of “natural growth” disappear. In public life, the qualities that middle-class children develop are consistently valued over the ones that poor and working-class children develop. Middle-class children become used to adults taking their concerns seriously, and so they grow up with a sense of entitlement, which gives them a confidence, in the classroom and elsewhere, that less-wealthy children lack. The cultural differences translate into a distinct advantage for middle-class children in school, on standardized achievement tests and, later in life, in the workplace.
True, every poor child would benefit from having more books in his home and more nutritious food to eat (and money certainly makes it easier to carry out a program of concerted cultivation). But the real advantages that middle-class children gain come from more elusive processes: the language that their parents use, the attitudes toward life that they convey. However you measure child-rearing, middle-class parents tend to do it differently than poor parents — and the path they follow in turn tends to give their children an array of advantages.
What would it take to overcome these disadvantages? Does poverty itself need to be eradicated, or can its effects on children somehow be counteracted? Can the culture of child-rearing be changed in poor neighborhoods, and if so, is that a project that government or community organizations have the ability, or the right, to take on? Is it enough simply to educate poor children in the same way that middle-class children are educated? And can any school, on its own, really provide an education to poor minority students that would allow them to achieve the same results as middle-class students?
There is, in fact, evidence emerging that some schools are succeeding at the difficult task of educating poor minority students to high levels of achievement. But there is still great disagreement about just how many schools are pulling this off and what those successful schools mean for the rest of the American education system. One well-publicized evaluation of those questions has come from the Education Trust, a policy group in Washington that has issued a series of reports making the case that there are plenty of what it calls “high-flying” schools, which it defines as high-poverty or high-minority schools whose students score in the top third of all schools in their state. The group’s landmark report, published in December 2001, identified 1,320 “high-flying” schools nationwide that were both high-poverty and high-minority. This was a big number, and it had a powerful effect on the debate over the achievement gap. The pessimists — those who believed that the disadvantages of poverty were all but impossible to overcome in public schools — were dealt a serious blow. If the report’s figures held up, it meant that high achievement for poor minority kids was not some one-in-a-million occurrence; it was happening all the time, all around us.
But in the years since the report’s release, its conclusions have been challenged by scholars and analysts who have argued that the Education Trust made it too easy to be included on their list. To be counted as a high-flier, a school needed to receive a high score in only one subject in one grade in one year. If your school had a good fourth-grade reading score, it was on the list, even if all its other scores were mediocre. To many researchers, that was an unconvincing standard of academic success. Douglas Harris, a professor of education and economics at Florida State University, pored over Education Trust’s data, trying to ascertain how many of the high-flying schools were able to register consistently good numbers. When he tightened the definition of success to include only schools that had high scores in two subjects in two different grades over two different years, Harris could find only 23 high-poverty, high-minority schools in the Education Trust’s database, a long way down from 1,320.
That number isn’t exhaustive; Harris says he has no doubt that there are some great schools that slipped through his data sieve. But his results still point to a very different story than the one the original report told. Education Trust officials intended their data to refute the idea that family background is the leading cause of student performance. But on closer examination, their data largely confirm that idea, demonstrating clearly that the best predictors of a school’s achievement scores are the race and wealth of its student body. A public school that enrolls mostly well-off white kids has a 1 in 4 chance of earning consistently high test scores, Harris found; a school with mostly poor minority kids has a 1 in 300 chance.
Despite those long odds, the last decade — and especially the last few years — has seen the creation of dozens, even hundreds, of schools across the country dedicated to precisely that mission: delivering consistently high results with a population that generally achieves consistently low results. The schools that have taken on this mission most aggressively tend to be charter schools — the publicly financed, privately run institutions that make up one of the most controversial educational experiments of our time. Because charters exist outside the control of public-school boards and are generally not required to adhere to union contracts with their teachers, they have attracted significant opposition, and their opponents are able to point to plenty of evidence that the charter project has failed. Early charter advocates claimed the schools would raise test scores across the board, and that hasn’t happened; nationally, scores for charter-school students are the same as or lower than scores for public-school students. But by another measure, charter schools have succeeded: by allowing educators to experiment in ways that they generally can’t inside public-school systems, they have led to the creation of a small but growing corps of schools with new and ambitious methods for educating students facing real academic challenges.
In the early years of the charter-school movement, every school was an island, trying out its own mad or brilliant educational theory. But as charter-school proponents have studied the successes and learned from the mistakes of their predecessors, patterns, even a consensus, have begun to emerge. The schools that are achieving the most impressive results with poor and minority students tend to follow three practices. First, they require many more hours of class time than a typical public school. The school day starts early, at 8 a.m. or before, and often continues until after 4 p.m. These schools offer additional tutoring after school as well as classes on Saturday mornings, and summer vacation usually lasts only about a month. The schools try to leaven those long hours with music classes, foreign languages, trips and sports, but they spend a whole lot of time going over the basics: reading and math.
Second, they treat classroom instruction and lesson planning as much a science as an art. Explicit goals are set for each year, month and day of each class, and principals have considerable authority to redirect and even remove teachers who aren’t meeting those goals. The schools’ leaders believe in frequent testing, which, they say, lets them measure what is working and what isn’t, and they use test results to make adjustments to the curriculum as they go. Teachers are trained and retrained, frequently observed and assessed by their principals and superintendents. There is an emphasis on results but also on “team building” and cooperation and creativity, and the schools seem, to an outsider at least, like genuinely rewarding places to work, despite the long hours. They tend to attract young, enthusiastic teachers, including many alumni of Teach for America, the program that recruits graduates from top universities to work for two years in inner-city public schools.
Third, they make a conscious effort to guide the behavior, and even the values, of their students by teaching what they call character. Using slogans, motivational posters, incentives, encouragements and punishments, the schools direct students in everything from the principles of teamwork and the importance of an optimistic outlook to the nuts and bolts of how to sit in class, where to direct their eyes when a teacher is talking and even how to nod appropriately.
The schools are, in the end, a counterintuitive combination of touchy-feely idealism and intense discipline. Their guiding philosophy is in many ways a reflection of the findings of scholars like Lareau and Hart and Risley — like those academics, these school leaders see childhood as a series of inputs and outputs. When students enroll in one of these schools (usually in fifth or sixth grade), they are often two or more grade levels behind. Usually they have missed out on many of the millions of everyday intellectual and emotional stimuli that their better-off peers have been exposed to since birth. They are, educationally speaking, in deep trouble. The schools reject the notion that all that these struggling students need are high expectations; they do need those, of course, but they also need specific types and amounts of instruction, both in academics and attitude, to compensate for everything they did not receive in their first decade of life.
It is still too early in the history of this nascent movement to say which schools are going to turn out to be the most successful with this new approach to the education of poor children. But so far, the most influential schools are the ones run by KIPP, or the Knowledge Is Power Program. KIPP’s founders, David Levin and Michael Feinberg, met in 1992, when they were young college graduates enrolled in Teach for America, working in inner-city public schools in Houston. They struggled at first as teachers but were determined to figure out how to motivate and educate their students. Each night they would compare notes on what worked in the classroom — songs, games, chants, rewards — and, before long, both of them became expert classroom instructors.
In the fall of 1994, Levin and Feinberg started a middle school in Houston, teaching just 50 students, and they named it KIPP. A year later, Levin moved to New York and started the second KIPP school, in the South Bronx. As the KIPP schools grew, Levin and Feinberg adhered to a few basic principles: their mission was to educate low-income and minority students. They would emphasize measurable results. And they would promise to do whatever it took to help their students succeed. They offered an extended day and an extended year that provided KIPP students with about 60 percent more time in school than most public-school students. They set clear and strict rules of conduct: their two principles of behavior were “Work Hard” and “Be Nice,” and all the other rules flowed out of those. At the beginning of each year, parents and students signed a pledge — unenforceable but generally taken seriously — committing to certain standards of hard work and behavior. Teachers gave students their cellphone numbers so students could call them at night for homework help.
The methods raised students’ test scores, and the schools began to attract the attention of the media and of philanthropists. A “60 Minutes” report on the schools in 1999 led to a $15 million grant from Doris and Donald Fisher, the founders of the Gap, and Feinberg and Levin began gradually to expand KIPP into a national network. Two years ago, they received $8 million from the Gates Foundation to create up to eight KIPP high schools. There are now 52 KIPP schools across the country, almost all middle schools, and together they are educating 12,000 children. The network is run on a franchise model; each school’s principal has considerable autonomy, while quality control is exercised from the home office in San Francisco. Feinberg is the superintendent of KIPP’s eight schools in Houston, and Levin is the superintendent of the four New York City schools.
KIPP is part of a loose coalition with two other networks of charter schools based in and around New York City. One is Achievement First, which grew out of the success of Amistad Academy, a charter school in New Haven that was founded in 1999. Achievement First now runs six schools in New Haven and Brooklyn. The other network is Uncommon Schools, which was started by a founder of North Star Academy in Newark along with principals from three acclaimed charter schools in Massachusetts; it now includes seven schools in Rochester, Newark and Brooklyn. The connections among the three networks are mostly informal, based on the friendships that bind Levin to Norman Atkins, the former journalist who founded North Star, and to Dacia Toll, the Rhodes scholar and Yale Law graduate who started Amistad with Doug McCurry, a former teacher. Toll and Atkins visited Levin at the Bronx KIPP Academy when they were setting up their original schools and studied the methods he was using; they later sent their principals to the leadership academy that Levin and Feinberg opened in 2000, and they have continued to model many of their practices on KIPP’s. Now the schools are beginning to formalize their ties. As they each expand their charters to include high schools, Levin, Toll and Atkins are working on a plan to bring students from all three networks together under one roof.
Students at both KIPP and Achievement First schools follow a system for classroom behavior invented by Levin and Feinberg called Slant, which instructs them to sit up, listen, ask questions, nod and track the speaker with their eyes. When I visited KIPP Academy last month, I was standing with Levin at the front of a music class of about 60 students, listening to him talk, when he suddenly interrupted himself and pointed at me. “Do you notice what he’s doing right now?” he asked the class.
They all called out at once, “Nodding!”
Levin’s contention is that Americans of a certain background learn these methods for taking in information early on and employ them instinctively. KIPP students, he says, need to be taught the methods explicitly. And so it is a little unnerving to stand at the front of a KIPP class; every eye is on you. When a student speaks, every head swivels to watch her. To anyone raised in the principles of progressive education, the uniformity and discipline in KIPP classrooms can be off-putting. But the kids I spoke to said they use the Slant method not because they fear they will be punished otherwise but because it works: it helps them to learn. (They may also like the feeling of having their classmates’ undivided attention when they ask or answer a question.) When Levin asked the music class to demonstrate the opposite of Slanting — “Give us the normal school look,” he said — the students, in unison, all started goofing off, staring into space and slouching. Middle-class Americans know intuitively that “good behavior” is mostly a game with established rules; the KIPP students seemed to be experiencing the pleasure of being let in on a joke.
Still, Levin says that the innovations a visitor to a KIPP school might notice first — the Slanting and the walls festooned with slogans and mottos (“Team Always Beats Individual,” “All of Us Will Learn”) and the orderly rows of students walking in the hallways — are not the only things contributing to the schools’ success. Equally important, he says, are less visible practices: clear and coherent goals for each class; teachers who work 15 to 16 hours a day; careful lesson planning; and a decade’s worth of techniques, tricks, games and chants designed to help vast amounts of information penetrate poorly educated brains very quickly.
The schools that Toll, Atkins, Levin and Feinberg run are not racially integrated. Most of the 70 or so schools that make up their three networks have only one or two white children enrolled, or none at all. Although, as charter schools, they admit any student in the cities they serve through a lottery, their clear purpose is to educate poor black and Hispanic children. The guiding principle for the four school leaders, all of whom are white, is an unexpected twist on the “separate but equal” standard: they assert that for these students, an “equal” education is not good enough. Students who enter middle school significantly behind grade level don’t need the same good education that most American middle-class students receive; they need a better education, because they need to catch up. Toll, especially, is preoccupied with the achievement gap: her schools’ stated mission is to close the gap entirely. “The promise in America is that if you work hard, if you make good decisions, that you’ll be able to be successful,” Toll explained to me. “And given the current state of public education in a lot of our communities, that promise is just not true. There’s not a level playing field.” In Toll’s own career, in fact, the goal of achieving equality came first, and the tool of education came later. When she was at Yale Law School, her plan was to become a civil rights lawyer, but she concluded that she could have more of an impact on the nation’s inequities by founding a charter school.
The methods these educators use seem to work: students at their schools consistently score well on statewide standardized tests. ... At KIPP’s Bronx academy, the sixth, seventh and eighth grades had proficiency rates at least 12 percentage points above the state average on this year’s statewide tests. And when the scores are compared with the scores of the specific high-poverty cities or neighborhoods where the schools are located — in Newark, New Haven or the Bronx — it isn’t even close: 86 percent of eighth-grade students at KIPP Academy scored at grade level in math this year, compared with 16 percent of students in the South Bronx.
Even if schools like KIPP are allowed to expand to meet the demand in the educational marketplace — all of them have long waiting lists — it is hard to imagine that, alone, they will be able to make much of a dent in the problem of the achievement gap; there are, after all, millions of poor and minority public-school students who aren’t getting the education they need either at home or in the classroom. What these charter schools demonstrate, though, is the effort that would be required to provide those students with that education.
Toll put it this way: “We want to change the conversation from ‘You can’t educate these kids’ to ‘You can only educate these kids if. ...’ ” And to a great extent, she and the other principals have done so. The message inherent in the success of their schools is that if poor students are going to catch up, they will require not the same education that middle-class children receive but one that is considerably better; they need more time in class than middle-class students, better-trained teachers and a curriculum that prepares them psychologically and emotionally, as well as intellectually, for the challenges ahead of them.
Right now, of course, they are not getting more than middle-class students; they are getting less. For instance, nationwide, the best and most experienced teachers are allowed to choose where they teach. And since most state contracts offer teachers no bonus or incentive for teaching in a school with a high population of needy children, the best teachers tend to go where they are needed the least. A study that the Education Trust issued in June used data from Illinois to demonstrate the point. Illinois measures the quality of its teachers and divides their scores into four quartiles, and those numbers show glaring racial inequities. In majority-white schools, bad teachers are rare: just 11 percent of the teachers are in the lowest quartile. But in schools with practically no white students, 88 percent of the teachers are in the worst quartile. The same disturbing pattern holds true in terms of poverty. At schools where more than 90 percent of the students are poor — where excellent teachers are needed the most — just 1 percent of teachers are in the highest quartile.
Government spending on education does not tend to compensate for these inequities; in fact, it often makes them worse. Goodwin Liu, a law professor at the University of California at Berkeley, has compiled persuasive evidence for what he calls the country’s “education apartheid.” In states with more poor children, spending per pupil is lower. In Mississippi, for instance, it is $5,391 a year; in Connecticut, it is $9,588. Most education financing comes from state and local governments, but the federal supplement for poor children, Title I, is “regressive,” Liu points out, because it is tied to the amount each state spends. So the federal government gives Arkansas $964 to help educate each poor child in the state, and it gives Massachusetts $2,048 for each poor child there.
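A minimal sketch of why a supplement pegged to state spending works out to be regressive, assuming, purely for illustration, that the federal grant per poor child is a fixed fraction of what a state already spends per pupil; the 20 percent share is hypothetical, while the per-pupil spending figures are the ones Liu cites.

```python
# Hypothetical illustration of a "regressive" supplement: assume the federal
# grant per poor child equals a fixed fraction of what the state already
# spends per pupil. The 20 percent share is invented for this sketch; the
# per-pupil spending figures are the ones cited in the article.

STATE_SPENDING_PER_PUPIL = {"Mississippi": 5_391, "Connecticut": 9_588}
FEDERAL_SHARE = 0.20  # hypothetical

for state, spending in STATE_SPENDING_PER_PUPIL.items():
    grant = FEDERAL_SHARE * spending
    print(f"{state}: spends ${spending:,} per pupil -> "
          f"supplement of about ${grant:,.0f} per poor child")
```

Because the grant scales with existing spending, the lower-spending state receives the smaller supplement per poor child, mirroring the Arkansas and Massachusetts disparity described above.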
Without making a much more serious commitment to the education of poor and minority students, it is hard to see how the federal government will be able to deliver on the promise contained in No Child Left Behind. The law made states responsible for turning their poorest children into accomplished scholars in a little more than a decade — a national undertaking on the order of a moon landing — but provided them with little assistance or even direction as to how they might accomplish that goal. And recently, many advocates have begun to argue that the Education Department has quietly given up on No Child Left Behind.
The most malignant element of the original law was that it required all states to achieve proficiency but then allowed each state to define proficiency for itself. It took state governments a couple of years to realize just what that meant, but now they have caught on — and many of them are engaged in an ignoble competition to see which state can demand the least of its students. At the head of this pack right now is Mississippi, which has declared 89 percent of its fourth-grade students to be proficient readers, the highest percentage in the nation, while in fact, the National Assessment of Educational Progress shows that only 18 percent of Mississippi fourth graders know how to read at an appropriate level — the second-lowest score of any state. In the past year, Arizona, Maryland, Ohio, North Dakota and Idaho all followed Mississippi’s lead and slashed their standards in order to allow themselves to label uneducated students educated. The federal government has permitted these maneuvers, and after several years of tough talk about enforcing the law’s standards, the Education Department has in the past year begun cutting one deal after another with states that want to redefine “success” for their schools.
The evidence is now overwhelming that if you take an average low-income child and put him into an average American public school, he will almost certainly come out poorly educated. What the small but growing number of successful schools demonstrate is that the public-school system accomplishes that result because we have built it that way. We could also decide to create a different system, one that educates most (if not all) poor minority students to high levels of achievement. It is not yet entirely clear what that system might look like — it might include not only KIPP-like structures and practices but also high-quality early-childhood education, as well as incentives to bring the best teachers to the worst schools — but what is clear is that it is within reach.
Although the failure of No Child Left Behind now seems more likely than not, it is not too late for it to succeed. We know now, in a way that we did not when the law was passed, what it would take to make it work. And if the law does, in the end, fail — if in 2014 only 20 or 30 or 40 percent of the country’s poor and minority students are proficient — then we will need to accept that its failure was not an accident and was not inevitable, but was the outcome we chose.
Paul Tough is an editor at The New York Times Magazine. He is writing a book about the Harlem Children’s Zone, a community organization.
Single-Sex Ed 101
Welcome to the latest educational fad.
By Meghan O'Rourke, SLATE
Posted Wednesday, Nov. 15, 2006, at 12:43 PM ET
Not long ago, the idea that American public schools should offer separate classes for boys and girls would have been regarded as retrograde; in the late 1980s, single-sex public schools had almost disappeared. But during the last decade, single-sex education has come to seem cutting-edge once again, backed by a startling rise in bipartisan support. In October, the Department of Education announced new federal regulations making it easier for public schools to become single-sex institutions, provided that "substantially equal" opportunities are available to the other sex. Part of the impetus behind the new rules is simply Americans' love of choice. As a Department of Education spokeswoman told me, single-sex schools will aid families by adding "one more tool to the toolbox." But part of it is the belief that single-sex schools will be a panacea for struggling boys and girls: Some of the staunchest advocates of alternatives to co-education are preaching new approaches based on magnifying, rather than trying to overcome, gender differences.
Behind what has been billed as a pragmatic decision lurks a more programmatic (and pseudoscientific) agenda. Invoking murky neurobiological data about innate gender differences, these advocates leap to cut-and-dried classroom prescriptions—ones that may ultimately provide less pedagogical variety for students themselves. It's one thing to offer students the option to learn the same things in separate classrooms. It's quite another to urge that all students learn in programmatically gender-tailored ways—and possibly even learn different things.
Among the most influential of the lobbying groups, the National Association for Single Sex Public Education is headed by an MIT-educated psychologist named Leonard Sax. Extrapolating rather freely from neuroscientific studies—many with small sample pools—Sax argues that, paradoxically, treating students in a gender-neutral manner tends to reinforce stereotypical weaknesses in the classroom, leading to declines in aptitude for both genders. His remedy is to urge educational techniques that cater to the unique "boy" brain and unique "girl" brain. Girls, Sax believes, don't enjoy abstraction; they have more sensitive hearing than their male peers; and they work better than boys do in groups. For them, using more context in math class is useful. Boys, on the other hand, relish abstraction and are bored by context. They benefit from moving around constantly. Therefore, Sax claims, "It's not sufficient just to put girls in one classroom and boys in another. In order to improve academic performance and broaden educational horizons, you need to understand how girls and boys learn differently."
Consider a typical example from the NASSPE Web site: "Girls have a sense of hearing which is two to four times better than boys (depending on the frequency tested) … if you have a male teacher speaking in a tone of voice which seems normal to him, a girl in the front row may feel that the teacher is practically yelling at her. Remember that she is experiencing a sound four times louder than what the male teacher is experiencing. The simplest way to accommodate these differences in a coed classroom is to put all the boys in the front and the girls in the back—just the opposite of the usual seating pattern that the children themselves will choose." Sax's best-selling book, Why Gender Matters, is full of similar illustrations.
The trouble with this type of reductive emphasis on group identity is that it contributes to the very problem the other single-sex education promoters aim to combat: pedagogical practices that unwittingly enforce gender stereotypes. First of all, group differences between the genders, as psychologist Elizabeth Spelke at Harvard University emphasizes, should not obscure the wide overlap in capacities among individual boys and girls. Second, what differences do exist rarely dictate one clear-cut pedagogical response. A good teacher is, or should be, fine-tuning classroom chemistry, not proceeding on the basis of simplistic biology. Putting all the girls in the back, for example, might result in the queen bees distracting each other, and more than a few boys turning around to look at them. Third, we still don't fully understand the import of the neuroscientific studies Sax cites, or what, precisely, "blood flow" to different areas of the brain means. Leaping to sweeping, untested conclusions is hardly scientific.
That's not to say that single-sex education should be dismissed out of hand. Numerous studies do show that students from Hispanic and black single-sex Catholic schools score significantly better on cognitive tests than their peers at co-ed Catholic schools do. Others have found that girls at single-sex institutions demonstrate more interest in math than their co-educated peers do. And one laudable goal of single-sex educators is simply to get kids to enjoy school. You're more likely to practice things you enjoy, and you're more likely to learn when you're engaged. It's no service to students when schools push one didactic approach above all others—an emphasis in kindergarten, say, on fine-motor and academic skills and lots of sitting activities that slower-developing boys tend to find more frustrating. Finally, more than a few parents and kids themselves will attest that single-sex schooling can help focus an adolescent hopelessly distracted by the other sex.
But whatever advantages might ultimately derive from single-sex schools, the gender-specific approach all too easily devolves to formulaic teaching that promises to narrow (rather than expand) learning options for kids. When it comes to English, for instance, single-sex-education advocates tend to disparage what they believe is a "feminized" verbal curriculum and approach, arguing that it plays to boys' weaknesses and handicaps them. Sax suggests, then, that girls and boys be asked to do different exercises in English. The girls would read Are You There God? It's Me, Margaret and engage in a role-playing exercise about the characters. The boys, meanwhile, would read Lord of the Flies and then create a map of the island, demonstrating that they've read closely enough to retain key details. Neither exercise sounds particularly useful. But the latter simply doesn't accomplish the most essential work of an English class. It's a test of reading for information—and a reaffirmation of boys' good "spatial skills"—rather than an exploration of the thematic complexities of William Golding's classic. Even well-intentioned prescription easily becomes a form of zealotry, as when Sax declares that "Ernest Hemingway's books are boy-friendly, while Toni Morrison's are girl-friendly" and adds, "some teachers suggest that we need to stretch the boys' imaginations … surely such a suggestion violates every rule of pedagogy."
There's a curious paradox here: Sax's goal is to get kids to feel more comfortable with skills that don't come easily to them. Yet his recipe for doing so is to segregate them with similarly challenged kids. In this, the NASSPE ethos scants another central goal of school: learning how to work with those who have different aptitudes from your own. In the right circumstances, a classroom can profitably expose kids to diverse thinking and aptitudes. A friend who teaches at a private school recently told me a story about asking his students to compose a list of metaphors. In his English class is a kid—let's call him J—who fits the stereotypical "male" learning model. He is remarkably good at abstract concepts and at logic, and it's hard for him not to blurt out answers to math problems (which can lead to quiet girls being overlooked). But his verbal skills are less honed. In class, J read his list out loud; most of his metaphors were highly logical but somewhat literal. Then a few girls read theirs, including a student of remarkable verbal talent. After listening to her, J said, "I think some of my examples weren't really metaphors; they were more like comparisons." If you buy the gender-specific line of thinking, J might never have arrived at that insight.
Proponents of single-sex education would protest that their approach gives children more latitude to carve out a distinctive identity. Removing "the other" from the classroom can help kids conceive of themselves as individuals rather than as members of a gender. But the risk is that the more didactic—and "scientifically" justified—the campaign for single-sex schools becomes, the more the idea of "essential" gender differences will filter down to kids themselves. And as psychologist Carol Dweck and others have shown, the way we think about how we learn has a profound effect on the way we actually learn. Claude Steele's work on "stereotype threat" has shown that students who absorb others' ideas about their group's handicaps exhibit further declines in aptitude in the contested areas. (And a 2001 study of pilot single-sex programs in California demonstrated what can happen when programs are badly implemented: In this case, unconscious teacher bias inadvertently accentuated more trivial stereotypes as well, with girls encouraged to be "concerned with their appearance," and boys encouraged to be "strong.") What is designed as an escape from gender-based thinking—boys are better at math than girls—could, in the end, only reinforce gender-based thinking, if a more nuanced form of it: Girls aren't as good with abstraction as boys are. That's a result that even those who believe in innate differences, like Sax, shouldn't be in favor of accentuating further.
Meghan O'Rourke is Slate's culture editor.
Not Coming Soon to a Channel Near You
By ALESSANDRA STANLEY
The lead story on the debut of Al Jazeera’s new English-language channel yesterday was the re-election of President Joseph Kabila of Congo.
There were also features on the hip, multicultural scene in Damascus; traffic in Beijing; Brazilian indigenous tribes; and the trials and tribulations of a Palestinian ambulance driver in Gaza. “Everywoman,” a weekly women’s program, took on “the horrors of skin-bleaching cream” and also spoke to the wife of Sami al-Hajj, an Al Jazeera cameraman who has spent years imprisoned without trial at Guantánamo Bay.
Secretary of Defense Donald H. Rumsfeld once famously denounced the Arabic-language Al Jazeera as “vicious, inaccurate and inexcusable,” which may be one reason that major cable and satellite providers in the United States declined to offer the English version. Yesterday, most Americans could watch it only on the Internet at english.aljazeera.net.
It’s a shame. Americans can see almost anything on television these days, from Polish newscasts to reruns of “Benson.” The new channel, Al Jazeera English, will never displace CNN, MSNBC or Fox News, but it provides the curious — or the passionately concerned — with a window into how the world sees us, or doesn’t. It’s a Saul Steinberg map of the globe in which the channel’s hub in Doha, Qatar, looms over Iran, Iraq, Syria and the West Bank — the dots in the horizon are New York and Hollywood.
While American cable news shows focused yesterday on live coverage of the Senate Armed Services Committee’s hearings on Iraq, Al Jazeera English was crammed with reports about Iran’s growing influence in the Middle East, the crisis in Darfur, kidnappings in Iraq and the Israeli-Palestinian conflict, with frequent updates on Israeli retaliatory air strikes in Gaza.
Even on a computer screen, Al Jazeera English looks like CNN International and sounds like a cross between C-Span and Fox News: the stories are long and detailed (that’s the C-Span part); behind the news reports is an overall sensibility that is different from that of most mainstream television news organizations (that’s the Fox News part).
Just as Fox News gives its viewers a vision of the world as seen by conservative, patriotic Americans, Al Jazeera English reflects the mindsets across much of Africa, Asia and the Middle East. It is an American-style cable news network with jazzy newsrooms, poised, attractive anchors, flashy promos and sleek ads for Qatar Airways, Nokia and Shell. But its goal is to bring a non-Western perspective to the West.
On Al Jazeera English, there was no fuss over Naomi Campbell’s court appearance on accusations that she had struck her maid, or over People magazine’s choice for “Sexiest Man Alive” (George Clooney). A promo for an upcoming program described American policy in Iraq as George Bush’s “alleged war on terror.”
Al Jazeera English — which also broadcasts from bureaus in London, Washington and Kuala Lumpur, Malaysia — recruited many Western journalists, including David Frost and Dave Marash, a longtime “Nightline” correspondent who was let go by ABC almost a year ago. Both men are showcased in advertisements for the channel, but neither was especially visible on the maiden newscast. Mr. Marash, based in Washington, is the anchor of an evening newscast alongside Ghida Fakhry.
Riz Khan, a veteran of the BBC and CNN, is one of the channel’s bigger stars — he has his own show, “Riz Khan,” on Al Jazeera English. Yesterday, he conducted separate but equally long satellite interviews with Ismail Haniya, prime minister of the Palestinian Authority, and Shimon Peres, Israel’s deputy prime minister.
Mr. Khan asked the two leaders questions sent in by viewers, including a New Yorker named Danny who asked if Mr. Haniya was worried that he would be killed like so many of his predecessors, a question Mr. Khan described as “morbid.” Mr. Haniya was not offended. “All Palestinians are in danger: leaders, women, children and the elderly,” he replied. “We always expect the worst from Israel.” When his turn came, Mr. Peres was just as unruffled.
The original Al Jazeera, created in 1996 with the backing of the emir of Qatar, boasts that it gets as many complaints from African dictators and Muslim leaders as from American officials. American viewers mostly know it as an Arabic-language news channel that shows Osama bin Laden videos and grisly images of dead American soldiers and mutilated Iraqi children. If yesterday is any indication, the English-language version is more buttoned-down and cosmopolitan.
Though Al Jazeera English looks at news events through a non-Western prism, it also points to where East and West actually meet. In one feature story, a group of Syrian women, Muslim and Christian, let a reporter follow them on their girls’ night out. Topic A was the shortage of men in Syria.
Sunday, November 12
2006: The Year of the ‘Macaca’
By FRANK RICH
Of course, the “thumpin’ ” was all about Iraq. But let us not forget Katrina. It was the collision of the twin White House calamities in August 2005 that foretold the collapse of the presidency of George W. Bush.
Back then, the full measure of the man finally snapped into focus for most Americans, sending his poll numbers into the 30s for the first time. The country saw that the president who had spurned a grieving wartime mother camping out in the sweltering heat of Crawford was the same guy who had been unable to recognize the depth of the suffering in New Orleans’s fetid Superdome. This brand of leadership was not the “compassionate conservatism” that had been sold in all those photo ops with African-American schoolchildren. This was callous conservatism, if not just plain mean.
It’s the kind of conservatism that remains silent when Rush Limbaugh does a mocking impersonation of Michael J. Fox’s Parkinson’s symptoms to score partisan points. It’s the kind of conservatism that talks of humane immigration reform but looks the other way when candidates demonize foreigners as predatory animals. It’s the kind of conservatism that pays lip service to “tolerance” but stalls for days before taking down a campaign ad caricaturing an African-American candidate as a sexual magnet for white women.
This kind of politics is now officially out of fashion. Harold Ford did lose his race in Tennessee, but by less than three points in a region that has not sent a black man to the Senate since Reconstruction. Only 36 years old and hugely talented, he will rise again even as the last vestiges of Jim Crow tactics continue to fade and Willie Horton ads countenanced by a national political party join the Bush dynasty in history’s dustbin.
Elsewhere, the 2006 returns more often than not confirmed that Americans, Republicans and Democrats alike, are far better people than this cynical White House takes them for. This election was not a rebuke merely of the reckless fiasco in Iraq but also of the divisive ideology that had come to define the Bush-Rove-DeLay era. This was the year that Americans said a decisive no to the politics of “macaca” just as firmly as they did to pre-emptive war and Congressional corruption.
For all of Mr. Limbaugh’s supposed clout, his nasty efforts did not defeat the ballot measure supporting stem-cell research in his native state, Missouri. The measure squeaked through, helping the Democratic senatorial candidate knock out the Republican incumbent. (The other stem-cell advocates endorsed by Mr. Fox in campaign ads, in Maryland and Wisconsin, also won.) Arizona voters, despite their proximity to the Mexican border, defeated two of the crudest immigrant-bashing demagogues running for Congress, including one who ran an ad depicting immigrants menacing a JonBenet Ramsey look-alike. (Reasserting its Goldwater conservative roots, Arizona also appears to be the first state to reject an amendment banning same-sex marriage.) Nationwide, the Republican share of the Hispanic vote fell from 44 percent in 2004 to 29 percent this year. Hispanics aren’t buying Mr. Bush’s broken-Spanish shtick anymore; they saw that the president, despite his nuanced take on immigration, never stood up forcefully to the nativists in his own camp when it counted most, in an election year.
But for those who’ve been sickened by the Bush-Rove brand of politics, surely the happiest result of 2006 was saved for last: Jim Webb’s ousting of Senator George Allen in Virginia. It is all too fitting that this race would be the one that put the Democrats over the top in the Senate. Mr. Allen was the slickest form of Bush-Rove conservative, complete with a strategist who’d helped orchestrate the Swift Boating of John Kerry. Mr. Allen was on a fast track to carry that banner into the White House once Mr. Bush was gone. His demise was so sudden and so unlikely that it seems like a fairy tale come true.
As recently as April 2005, hard as it is to believe now, Mr. Allen was chosen in a National Journal survey of Beltway insiders as the most likely Republican presidential nominee in 2008. Political pros saw him as a cross between Ronald Reagan and George W. Bush whose “affable” conservatism and (contrived) good-old-boy persona were catnip to voters. His Senate campaign this year was a mere formality; he began with a double-digit lead.
That all ended famously on Aug. 11, when Mr. Allen, appearing before a crowd of white supporters in rural Virginia, insulted a 20-year-old Webb campaign worker of Indian descent who was tracking him with a video camera. After belittling the dark-skinned man as “macaca, or whatever his name is,” Mr. Allen added, “Welcome to America and the real world of Virginia.”
The moment became a signature cultural event of the political year because the Webb campaign posted the video clip on YouTube.com, the wildly popular site that most politicians, to their peril, had not yet heard about from their children. Unlike unedited blogorrhea, which can take longer to slog through than Old Media print, YouTube is all video snippets all the time; the one-minute macaca clip spread through the national body politic like a rabid virus. Nonetheless it took more than a week for Mr. Allen to recognize the magnitude of the problem and apologize to the object of his ridicule. Then he compounded the damage by making a fool of himself on camera once more, this time angrily denying what proved to be accurate speculation that his mother was a closeted Jew. It was a Mel Gibson meltdown that couldn’t be blamed on the bottle.
Mr. Allen has a history of racial insensitivity. He used to display a Confederate flag in his living room and, bizarrely enough, a noose in his office for sentimental reasons that he could never satisfactorily explain. His defense in the macaca incident was that he had no idea that the word, the term for a genus of monkey, had any racial connotation. But even if he were telling the truth — even if Mr. Allen were not a racist — his non-macaca words were just as damning. “Welcome to America and the real world of Virginia” was unmistakably meant to demean the young man as an unwashed immigrant, whatever his race. It was a typical example of the us-versus-them stridency that has defined the truculent Bush-Rove fearmongering: you’re either with us or you’re a traitor, possibly with the terrorists.
As it happened, the “macaca” who provoked the senator’s self-destruction, S. R. Sidarth, was not an immigrant but the son of immigrants. He was born in Washington’s Virginia suburbs to well-off parents (his father is a mortgage broker) and is the high-achieving graduate of a magnet high school, a tournament chess player, a former intern for Joe Lieberman, a devoted member of his faith (Hindu) and, currently, a senior at the University of Virginia. He is even a football jock like Mr. Allen. In other words, he is an exemplary young American who didn’t need to be “welcomed” to his native country by anyone. The Sidarths are typical of the families who have abetted the rapid growth of northern Virginia in recent years, much as immigrants have always built and renewed our nation. They, not Mr. Allen with his nostalgia for the Confederate “heritage,” are America’s future. It is indeed just such northern Virginians who have been tinting the once reliably red commonwealth purple.
Though the senator’s behavior was toxic, the Bush-Rove establishment rewarded it. Its auxiliaries from talk radio, the blogosphere and the Wall Street Journal opinion page echoed the Allen campaign’s complaint that the incident was inflated by the news media, especially The Washington Post. Once it became clear that Mr. Allen was in serious trouble, conservative pundits mainly faulted him for running an “awful campaign,” not for being an awful person.
The macaca incident had resonance beyond Virginia not just because it was a hit on YouTube. It came to stand for 2006 as a whole because it was synergistic with a national Republican campaign that made a fetish of warning that a Congress run by Democrats would have committee chairmen who are black (Charles Rangel) or gay (Barney Frank), and a middle-aged woman not in the Stepford mold of Laura Bush as speaker. In this context, Mr. Allen’s defeat was poetic justice: the perfect epitaph for an era in which Mr. Rove systematically exploited the narrowest prejudices of the Republican base, pitting Americans of differing identities in cockfights for power and profit, all in the name of “faith.”
Perhaps the most interesting finding in the exit polls Tuesday was that the base did turn out for Mr. Rove: white evangelicals voted in roughly the same numbers as in 2004, and 71 percent of them voted Republican, hardly a mass desertion from the 78 percent of last time. But his party was routed anyway. It was the end of the road for the boy genius and his can’t-miss strategy that Washington sycophants predicted could lead to a permanent Republican majority.
What a week this was! Here’s to the voters of both parties who drove a stake into the heart of our political darkness. If you’ll forgive me for paraphrasing George Allen: Welcome back, everyone, to the world of real America.
Saturday, November 11
How fertility advances could allow women to take over the boardroom.
My Boss Is 65 and Pregnant
By Tim Harford
Updated Saturday, Nov. 11, 2006, at 10:45 AM ET, SLATE
The revelation by the American Society for Reproductive Medicine that women in their 50s can cope with the stresses of parenthood as well—or as badly—as anyone else has again raised the prospect that the experience of women such as Dr. Patricia Rashbrook, who this year became the oldest new mother in Britain at the age of 62, will become increasingly common. That seems unlikely for now. Treatments are expensive, unreliable, and imperfect: Both Dr. Rashbrook and Adriana Iliescu, said at age 66 to be the world's oldest woman to give birth, needed donated eggs.
Still, with time, who knows? And the effects may be more far-reaching than we imagine, as they were when women last enjoyed a sharp improvement in control over their fertility. Harvard economists Claudia Goldin and Lawrence Katz have traced the effects of the introduction of the contraceptive pill in the United States. They say that it sparked a revolution in education and women's participation in high-powered careers.
When the pill became more widely available to young, unmarried American women around 1970, college parties probably became a bit more fun. But the effect on undergraduates was more than a sexual liberation; women knew that without having to abstain from sex, they had control over when—and if—they started a family. That made it possible to plan ahead for a career with a long prequalification period, such as law, medicine, or dentistry. A woman knew she could qualify and establish her career, becoming a doctor without also becoming a nun.
There were indirect, reinforcing effects, too. Women also felt more able to postpone marriage—why hurry? And as more intelligent females delayed getting hitched, that meant that more intelligent men would be floating around, unattached. The dating scene became a more interesting place to dip in and out of for a decade or so, and the risk of being trapped with a small choice of husbands ("Sorry, only the economists are left ...") fell. Potential mentors and employers also had more confidence that women would not give up their careers because of an accidental pregnancy. Such effects helped to increase women's share of entering medical-school classes during the 1970s from less than 10 percent to a third. In law, business, and dentistry, the climbs were even more dramatic.
But what the pill did not do was reliably allow women to stop the biological clock. So, ambitious young women who also wanted families still faced a difficult choice over timing. Doctors warn that later pregnancies are riskier, but University of Virginia economist Amalia Miller has proved that earlier pregnancies are risky, too—to women's careers. Professor Miller showed that women who delay having children for only one year earn 10 percent more over the course of their lives than those who don't delay.
Cause and effect are hard to untangle here: a woman might both delay pregnancy and earn more money because of some third factor, such as ambition. But Miller solves that problem by looking at women who, because of miscarriages or accidental pregnancies, do not have children when they would have chosen to have them. (Last year, Steven Landsburg wrote about Miller's study in Slate.)
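Miller's identification strategy can be made concrete with a toy simulation. The sketch below is purely illustrative: the variable names, the 10 percent chance of a timing shock and the assumed effect size are invented for demonstration, and none of it is Miller's data or her actual econometric model. It simply shows why a naive comparison of delayers and non-delayers is contaminated by something like ambition, while a comparison built on chance changes in timing is not.

import numpy as np

# Toy illustration of the natural-experiment logic (all numbers invented).
rng = np.random.default_rng(0)
n = 200_000

# A hidden trait ("ambition") raises earnings AND the tendency to plan a delay.
ambition = rng.normal(size=n)
planned_delay = (ambition + rng.normal(size=n) > 0).astype(int)

# Random timing shocks stand in for miscarriages and accidental pregnancies:
# with 10% probability a woman's actual timing differs from her plan.
shocked = rng.random(n) < 0.10
actual_delay = np.where(shocked, 1 - planned_delay, planned_delay)

true_effect = 0.10  # assume a one-year delay raises lifetime earnings by 10%
log_earnings = ambition + true_effect * actual_delay + rng.normal(size=n)

# Naive comparison of delayers vs. non-delayers: confounded by ambition.
naive = log_earnings[actual_delay == 1].mean() - log_earnings[actual_delay == 0].mean()

# Chance-timing comparison: among women who planned NOT to delay, compare
# those whose timing was shifted by the shock with those whose was not.
group = planned_delay == 0
natural = log_earnings[group & shocked].mean() - log_earnings[group & ~shocked].mean()

print(f"naive estimate of the delay effect: {naive:.3f}")
print(f"chance-timing estimate: {natural:.3f} (true value {true_effect})")

In the sketch the naive gap comes out many times larger than the assumed 10 percent effect, while the chance-timing comparison recovers roughly the true value, which is the point of relying on unplanned changes in timing.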
Women may have already overtaken men at American schools and universities, but perhaps they will not do so in the boardroom until they can reliably delay pregnancy into their 50s and 60s. Then employers might start to dismiss as remote the risk that a valued employee will take time off to have a family. Indeed, having one might become something you do once you've made it to the top and retired.
Perhaps this is nothing more than science fiction, but if my daughters become part of this future revolution, they can forget about leaving their kids with grandpa.
Tim Harford is a columnist for the Financial Times. His latest book is The Undercover Economist.
Article URL: http://www.slate.com/id/2153183/
Copyright 2006 Washingtonpost.Newsweek Interactive Co. LLC
The Great Revulsion
By PAUL KRUGMAN
I’m not feeling giddy as much as greatly relieved. O.K., maybe a little giddy. Give ’em hell, Harry and Nancy!
Here’s what I wrote more than three years ago, in the introduction to my column collection “The Great Unraveling”: “I have a vision — maybe just a hope — of a great revulsion: a moment in which the American people look at what is happening, realize how their good will and patriotism have been abused, and put a stop to this drive to destroy much of what is best in our country.”
At the time, the right was still celebrating the illusion of victory in Iraq, and the bizarre Bush personality cult was still in full flower. But now the great revulsion has arrived.
Tuesday’s election was a truly stunning victory for the Democrats. Candidates planning to caucus with the Democrats took 24 of the 33 Senate seats at stake this year, winning seven million more votes than Republicans. In House races, Democrats received about 53 percent of the two-party vote, giving them a margin more than twice as large as the 2.5-percentage-point lead that Mr. Bush claimed as a “mandate” two years ago — and the margin would have been even bigger if many Democrats hadn’t been running unopposed.
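The arithmetic behind that comparison is easy to check. A back-of-the-envelope sketch, using only the rounded figures quoted in the column rather than exact vote totals:

# Rough check of the two-party margin comparison, using the rounded
# figures quoted above rather than exact vote totals.
dem_share = 53.0                      # Democratic share of the two-party House vote, in percent
rep_share = 100.0 - dem_share
house_margin = dem_share - rep_share  # about 6 percentage points

bush_2004_margin = 2.5                # the "mandate" margin cited above

print(f"2006 two-party House margin: {house_margin:.1f} points")
print(f"More than twice the 2004 margin? {house_margin > 2 * bush_2004_margin}")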
The election wasn’t just the end of the road for Mr. Bush’s reign of error. It was also the end of the 12-year Republican dominance of Congress. The Democrats will now hold a majority in the House that is about as big as the Republicans ever achieved during that era of dominance.
Moreover, the new Democratic majority may well be much more effective than the majority the party lost in 1994. Thanks to a great regional realignment, in which a solid Northeast has replaced the solid South, Democratic control no longer depends on a bloc of Dixiecrats whose ideological sympathies were often with the other side of the aisle.
Now, I don’t expect or want a permanent Democratic lock on power. But I do hope and believe that this election marks the beginning of the end for the conservative movement that has taken over the Republican Party.
In saying that, I’m not calling for or predicting the end of conservatism. There always have been and always will be conservatives on the American political scene. And that’s as it should be: a diversity of views is part of what makes democracy vital.
But we may be seeing the downfall of movement conservatism — the potent alliance of wealthy individuals, corporate interests and the religious right that took shape in the 1960s and 1970s. This alliance may once have had something to do with ideas, but it has become mainly a corrupt political machine, and America will be a better place if that machine breaks down.
Why do I want to see movement conservatism crushed? Partly because the movement is fundamentally undemocratic; its leaders don’t accept the legitimacy of opposition. Democrats will only become acceptable, declared Grover Norquist, the president of Americans for Tax Reform, once they “are comfortable in their minority status.” He added, “Any farmer will tell you that certain animals run around and are unpleasant, but when they’ve been fixed, then they are happy and sedate.”
And the determination of the movement to hold on to power at any cost has poisoned our political culture. Just think about the campaign that just ended, with its coded racism, deceptive robo-calls, personal smears, homeless men bused in to hand out deceptive fliers, and more. Not to mention the constant implication that anyone who questions the Bush administration or its policies is very nearly a traitor.
When movement conservatism took it over, the Republican Party ceased to be the party of Dwight Eisenhower and became the party of Karl Rove. The good news is that Karl Rove and the political tendency he represents may both have just self-destructed.
Two years ago, people were talking about permanent right-wing dominance of American politics. But since then the American people have gotten a clearer sense of what rule by movement conservatives means. They’ve seen the movement take us into an unnecessary war, and botch every aspect of that war. They’ve seen a great American city left to drown; they’ve seen corruption reach deep into our political process; they’ve seen the hypocrisy of those who lecture us on morality.
And they just said no.
Thursday, November 9
Rumsfeld's biggest blunders and how they've harmed America.
A Catalog of Failure
By Phillip Carter, SLATE
It remains unclear whether Defense Secretary Donald Rumsfeld finally stepped down because he mismanaged the war on terrorism, failed in his efforts to transform the Pentagon, or became the scapegoat for the Republicans' loss of the House. However, understanding Rumsfeld's failures is the key to moving forward, so it's useful to examine a few of his biggest ones.
Iraq dominates the list of Rumsfeld errors because of the sheer enormity of his strategic mistakes. Indeed, his Iraq blunders should have cost him his job long before the 2006 midterm elections. From tinkering with troop deployments in 2002 and 2003, which ensured there were too few troops from the start, to micromanaging operations with his famed "8,000 mile screwdriver," to pushing for the disastrous twin policies of de-Baathification and disbandment for the Iraqi army, Rumsfeld's failures transformed the Iraq war from a difficult enterprise into an unwinnable one. Likewise, in Afghanistan, missteps by the Pentagon have left America's victory there unconsummated. Make no mistake: These were not tactical failures, made by subordinate military officers. Rather, these were strategic errors of epic proportions that no amount of good soldiering could undo. Blame for these strategic missteps lies properly with the secretary of defense and his senior generals, and, ultimately, with the White House.
Iraq is now in a state beyond civil war. The victory that was possible in 2003 is not possible in 2006. Yet, despite losing the war there, no senior officers or civilian leaders have been held accountable—until now. One general, Ricardo Sanchez, watched his promotion chances evaporate because of Abu Ghraib and his failure to bring the insurgency to heel. But his case has been the exception to the pattern in which general officers are promoted into the upper stratosphere of the military without regard to their performance at war. America didn't always treat its generals this way; Lincoln famously sacked many of his field commanders, and generals often lost their jobs during World War II after losing battles. Perhaps Rumsfeld's departure signals a new willingness to hold senior officials accountable for the failures at their level.
Beyond Iraq, it is clear that Rumsfeld's Pentagon failed to develop a strategy to win in the larger war on terrorism. In a leaked memo, Rumsfeld asked his staff in October 2003 if we had metrics to know whether we were winning or losing the war, along with a number of other fundamental questions of strategy. The problem is that none of these questions have been resolved now, some five years after the start of the war with al-Qaida. Strategy is the province of defense secretaries and generals. Their most important job is to answer such questions as why we are fighting and how we will align national resources to accomplish that mission. The Rumsfeld Pentagon failed to articulate a successful strategic vision for the war. Consequently, America's wars in Iraq, Afghanistan, the Philippines, and East Africa since Sept. 11 have lacked strategic coherence. There is no sense that the sum of these small victories would equal a larger victory over al-Qaida or terrorism generally.
Indeed, Rumsfeld's dominance of the cabinet and the Bush administration may have guaranteed that America chose the entirely wrong paradigm for the past five years. Notwithstanding the spectacular violence of the Sept. 11 attacks, America might have done better had it not chosen a war paradigm to fight terrorism and instead chosen to employ a comprehensive array of diplomatic, intelligence, military, and law enforcement approaches. Doing so might have encouraged more of our allies to stand by our side. It might also have put America on a better footing to sustain its efforts for what promises to be a generational struggle against terrorism.
When Rumsfeld took office in 2001, he swept in with promises to transform America's military—to move from the industrial age to the information age by revolutionizing both America's military hardware and the way it does business. He presented himself as a successful CEO who would hammer the Pentagon's notoriously recalcitrant bureaucracy into shape. Yet, despite all his rhetoric, it's not clear that he actually accomplished much in this area. The Rumsfeld defense budgets allocated more money to areas that he prioritized, such as missile defense and sophisticated systems like the Joint Strike Fighter and Future Combat Systems, but these were marginal changes from the 1990s, consistent with the ways the services were moving already. Despite his best efforts, Rumsfeld never managed to fundamentally change the way the Pentagon does business, partly because he ran into a solid wall of opposition from the military establishment, defense contractors, and Congress.
In battling these foes and others, Rumsfeld didn't just lose the fight, he also did a great deal of damage to the military and to the country. Thanks to Bob Woodward, we now know a few more salacious details about his spats with senior military leaders—such as the way he emasculated former Joint Chiefs of Staff Chairman Richard Myers. We also know how he handpicked officers for key positions in order to ensure that every senior general or admiral was a Rumsfeld company man, a policy that had a tremendously deleterious and narrowing effect on the kind of military advice and dissent flowing into the office of the secretary of defense. His office famously undercut and eventually sacked Gen. Eric Shinseki after his testimony to Congress stating that Iraq would take a "few hundred thousand" troops to secure, although to this day the Rumsfeld press machine vigorously insists that Shinseki simply left when his term expired. This move, more than any other, crystallized the tension between Rumsfeld and the generals and telegraphed quite clearly that loyalty was more prized than intellectual honesty. That so few generals have spoken out since then is proof of how effective this message was. Only those who have retired, and the military establishment's press, feel they can criticize the defense secretary's policies in public.
War is too important to be left to the generals, as French Prime Minister Georges Clemenceau said during World War I. Rumsfeld was right to insist on civilian control of the military. But war is too important—and too complex—to be left to the politicians, as well. As historian Eliot Cohen writes in his brilliant study Supreme Command, the best strategies emerge when generals and political leaders find a way to effectively share military power. With his rough style, abrasive personality, and legendary skill as a bureaucratic infighter, Rumsfeld ensured that this would never occur in his Pentagon, much to the detriment of America and its war on terrorism.
Civil-military relations will recover from the Rumsfeld era, just as they recovered from Vietnam, but the damage to our war effort has been done. Defense secretary-designate Robert Gates will need to work quickly to limit the damage and find a way forward through the rubble of America's flawed policies of the past six years. Although the future in Iraq and Afghanistan looks bleak, all is not yet lost. If he listens to the advice of his generals, and is willing to consider and implement unconventional options, then Gates may manage to pull an imperfect victory from the jaws of defeat. But time is short.
Phillip Carter, an attorney and former Army officer, writes on legal and military affairs. He recently returned from a year advising the Iraqi police in Baqubah with the Army's 101st Airborne Division.
Replacing Rumsfeld: Why Is It OK that the President Lied?
By Susan Estrich, FOX NEWS
Since when is it OK for a president to lie to reporters?
Wasn’t it just last week that the president told reporters that both Donald Rumsfeld and Dick Cheney were staying on their jobs for the next two years?
It was an important question. The president didn’t duck. He didn’t give one of those mealy-mouthed “we’ll see” answers. He said yes, they’re both staying. And even as he said it, they were getting ready to get rid of Rumsfeld.
He must have known that. He hadn’t talked to Gates yet, but Gates was being vetted for the Defense job at the time. Surely, the president knew that.
Excuse me, but it doesn’t add up to an honest answer.
Am I the only one who’s wondering why it is that the president is allowed to intentionally mislead people, and no one says “boo”?
I know I’m not supposed to say this (my conservative friends get so bent out of shape when liberals accuse the president of “lying,” but if the shoe fits …), but is it because we’re so used to it?
Does it just go without saying that politicians “lie” – excuse me, don’t tell the truth, that is, mislead – and that it’s OK? Pretty pitiful, wouldn’t you say? Part of the whole disgusting world of negative ads, grubbing for money, trading on power, selling influence. If so much didn’t depend on it, it would be the kind of business any decent person would wash their hands of.
I recall another president who mislead people about his sex life, and my conservative friends went ballistic.
Is it somehow worse to lie about your sex life than about who is going to be defense secretary? Or are we all just so used to presidents lying to us that we accept it without blinking?
I wanted you to move on to another question, the president explained to reporters, as if that is a reason for lying. Sure he did.
But isn’t there something wrong with that?
There were a hundred ways that the president could have answered the question that would have left the door open to a replacement, so that he would not have lied. He didn’t choose any of them.
Corruption was the second reason, after the Iraq war, for the Republicans’ loss. But corruption doesn’t just mean taking money or coming on to young boys. It means not having any credibility because you don’t tell the truth. About things that matter. It means losing the confidence of the people who elected you. Which the Republicans have done.
Everybody was making the right noises on the day after the election. Speaker-to-be Nancy Pelosi talked about working with the president. The president invited her to lunch. There were all the correct shouts and murmurs about bipartisanship. If you believe it, I’ve got a beautiful bridge to show you.
If the president couldn’t get his agenda through when his own party controlled the House and the Senate, does anyone honestly expect that he will accomplish more with both houses narrowly in the control of the other party, while the president is the lamest of ducks, his popularity in the toilet, not to mention the fact that every other senator (and even the occasional House member) is now running for president?
I don’t want to rain on the parade, but it sure sounds like a recipe for paralysis to me. That’s not politically correct, but it’s probably right. Shall we tell the people, or just assume that they’re not smart enough to figure it out?
The most you can expect is that Democrats will be demanding some change in policy on Iraq, and will use such occasions as confirmation hearings for a new defense secretary to explore what that new policy will be.
The president has now given them their first opportunity to hold his feet to the fire.
That’s fine. But what’s so troubling about it, maybe just to me, is that he gave it to them by breaking his word while barely acknowledging that he never planned on keeping it; or that he owed anyone an apology; or that he had any obligation to tell people the truth in the first instance.
All of which may have more to do with why he lost the election in the first place than Mr. Bush wants to acknowledge.
Monday, November 6
Always Academicize
Always Academicize: My Response to the Responses
In my post of October 22, I argued that college and university teachers should not take it upon themselves to cure the ills of the world, but should instead do the job they are trained and paid to do — the job, first, of introducing students to areas of knowledge they were not acquainted with before, and second, of equipping those same students with the analytic skills that will enable them to assess and evaluate the materials they are asked to read. I made the further point that the moment an instructor tries to do something more, he or she has crossed a line and ventured into territory that belongs properly to some other enterprise. It doesn’t matter whether the line is crossed by someone on the left who wants to enroll students in a progressive agenda dedicated to the redress of injustice, or by someone on the right who is concerned that students be taught to be patriotic, God-fearing, family oriented, and respectful of tradition. To be sure, the redress of injustice and the inculcation of patriotic and family values are worthy activities, but they are not academic activities, and they are not activities academics have the credentials to perform. Academics are not legislators, or political leaders or therapists or ministers; they are academics, and as academics they have contracted to do only one thing – to discuss whatever subject is introduced into the classroom in academic terms.
And what are academic terms? The list is long and includes looking into a history of a topic, studying and mastering the technical language that comes along with it, examining the controversies that have grown up around it and surveying the most significant contributions to its development. The list of academic terms would, however, not include coming to a resolution about a political or moral issue raised by the materials under discussion. This does not mean that political and moral questions are banned from the classroom, but that they should be regarded as objects of study – Where did they come from? How have they been answered at different times in different cultures? – rather than as invitations to take a vote (that’s what you do at the ballot box) or make a life decision (that’s what you do in the private recesses of your heart). No subject is out of bounds; what is out of bounds is using it as an occasion to move students in some political or ideological direction. The imperative, as I said in the earlier post, is to “academicize” the subject; that is, to remove it from whatever context of urgency it inhabits in the world and insert it into a context of academic urgency where the question being asked is not “What is the right thing to do?” but “Is this account of the matter attentive to the complexity of the issue?”
Those who commented on the post raised many sharp and helpful objections to it. Some of those objections give me the opportunity to make my point again. I happily plead guilty to not asking the question Dr. James Cook would have me (and all teachers) ask when a “social/political” issue comes up in the classroom: “Does silence contribute to the victory of people who espouse values akin to those of Hitler?” The question confuses and conflates political silence – you decide not to speak up as a citizen against what you consider an outrage – with an academic silence that is neither culpable nor praiseworthy because it goes without saying if you understand the nature of academic work. When, as a teacher, you are silent about your ethical and political commitments, you are not making a positive choice – Should I or shouldn’t I? is not an academic question — but simply performing your pedagogical role.
Of course the teacher who doesn’t think to declare his or her ethical preferences because it is not part of the job description might well be very active and vocal at a political rally or in a letter to the editor. I am not counseling moral and political abstinence across the board, only in those contexts – like the classroom – where the taking of positions on the war in Iraq or assisted suicide or the conduct of foreign policy is extraneous to or subversive of the activity being performed. Dr. Cook, along with Dr. Richard Flanagan, Ignacio Garcia and others, accuses me of putting aside every moral issue, or sterilizing issues of their moral implications, or leaving my ethical sense at the door. No, I am refusing the implication that one’s ethical obligations remain the same no matter where one is or what one is doing or what one is being paid to do.
In fact, my stance is aggressively ethical: it demands that we take the ethics of the classroom – everything that belongs to pedagogy, including preparation, giving assignments, grading papers, keeping discussions on point, etc. – seriously and not allow the scene of instruction to become a scene of indoctrination. Were the ethics appropriate to the classroom no different from the ethics appropriate to the arena of political action or the ethics of democratic citizenry, there would be nothing distinctive about the academic experience – it would be politics by another name – and no reason for anyone to support the enterprise. For if it’s politics you want, you might as well get right to it and skip the academic apparatus entirely.
My argument, then, rests on the conviction that academic work is unlike other forms of work — if it isn’t, it has no shape of its own and no claim on our attention — and that fidelity to it demands respect for its difference, a difference defined by its removal from the decision-making pressures of the larger world. And that finally may be the point underlying the objections to my position: in a world so beset with problems, some of my critics seem to be asking, is it either possible or desirable to remain aloof from the fray? Thus Fred Moramarco declares, “It’s clearly not easy to ‘just do your job’ where genocide, aggression, moral superiority, and hatred of opposing views are ordinary, everyday occurrences.” I take him to be saying at least two things: 1) it’s hard to academicize a political/moral issue and stay clear of coming down on one side or another, and 2) it’s irresponsible to do so given all that is wrong with the current state of things. As to the assertion that it’s hard, it’s really quite easy, a piece of cake; but the second assertion – academicizing is not what we should be doing in perilous times – has a genuine force; and if, as a teacher, you feel that force, your response should not be to turn your classroom into a political rally or an encounter group, but to get out of teaching and into a line of work more likely to address directly the real-world problems you want to solve. There is nothing virtuous or holy about teaching; it’s just a job, and like any job it aims at particular results, not at all results. If the results teaching is able to produce when it is done well – improving student knowledge and analytical abilities – are not what you’re after, then teaching is the wrong profession for you. But if teaching is the profession you commit to, then you should do it and not use it to do something else.
The issue not explicitly raised in the comments but implied by many of them is the issue of justification. If the point of liberal arts education is what I say it is – to lay out the history and structure of political and ethical dilemmas without saying yes or no to any of the proposed courses of action – what is the yield that justifies the enormous expenditure of funds and energies? Beats me! I don’t think that the liberal arts can be justified and, furthermore, I believe that the demand for justification should be resisted because it is always the demand that you account for what you do in someone else’s terms, be they the terms of the state, or of the economy, or of the project of democracy. “Tell me, why should I as a businessman or a governor or a preacher of the Word, value what you do?” There is no answer to this question that does not involve preferring the values of the person who asks it to yours. The moment you acquiesce to the demand for justification, you have lost the game, because even if you succeed, what you will have done is acknowledge that your efforts are instrumental to some external purpose; and if you fail, as is more likely, you leave yourself open to the conclusion that what you do is really not needed. The spectacle of departments of French or Byzantine Studies or Classics attempting to demonstrate that the state or society or the world order benefits from their existence is embarrassing and pathetic. These and other programs are in decline not because they have failed to justify themselves, but because they have tried to.
The only self-respecting form justification could take is internal and circular. You value the activity because you like doing it and you like encouraging others to do it. Aside from that, there’s not much to say. Kathryn Jakacbin makes my point (inadvertently) when she observes that while “inquiry into the phenomena, their origins, extent, implications would be enlightening,” it would, if “untethered from a basic moral base also be weightless.” Just so! I’m saying that “weightless” is good, because “enlightening,” without any real-world payoff, is the business we’re in. And I would give the same reply to Andrea who is worried “that what we do as academics may be irrelevant to the active/political life.” Let’s hope so. In a similar vein, John Dillinger (a great name) complains that, “As it is now, academia in the U.S. couldn’t be more depoliticized, and more irrelevant.” Would that were true, but read any big city newspaper and you will find endless stories about politicized classrooms, stories that would never have been written if teachers followed the injunction to always academicize. You know you’re doing your job if you have no comeback at all to the charge that, aside from the pleasures it offers you and your students, the academic study of materials and problems is absolutely useless.
My mention of the pleasures of the classroom brings me to a final point and to the complaint most often voiced by the respondents to the initial post: an academicized classroom will be an arid classroom, a classroom that produces mindless robots and “cold passionless non-critical thinkers” versed only in the bare facts, a classroom presided over by a drab technician who does little but show up and could just as easily have mailed it in. Nothing could be further from the truth. Excitement comes in many forms, and not all forms of excitement are appropriate to every activity. The excitement appropriate to the activity of college and university teaching is the excitement of analysis, of trying to make sense of something, be it a poem, an archive, an historical event, a database, a chemical reaction, whatever. Analysis may seem a passionless word denoting a passionless exercise, but I have seen students fired up to a pitch just short of physical combat arguing about whether Satan is the hero of “Paradise Lost” or whether John Rawls is correctly classified as a Neo-Kantian or whether liberal democracies are capable of accommodating strong religious belief. The marshaling of evidence, the framing of distinctions, the challenging of the distinctions just framed, the parsing of dense texts – these are hard and exhilarating tasks and the students who engage in them are anything but mindless, not despite but because they don’t have their minds on the next election.
Of course, there will also be excitement in your class if you give it over to a discussion of what your students think about this or that hot-button issue. Lots of people will talk, and the talk will be heated, and everyone will go away feeling satisfied. But the satisfaction will be temporary as will its effects, for the long-lasting pleasure of learning something will have been sacrificed to the ephemeral pleasure of exchanging uninformed opinions. You can glorify that exercise in self-indulgence by calling it interactive learning or engaged learning or ethical learning, but in the end it will be nothing more than a tale full of sound and fury, signifying nothing.
Sunday, November 5
The Difference Two Years Made
NYT Editorial Page
Published: November 5, 2006
On Tuesday, when this page runs the list of people it has endorsed for election, we will include no Republican Congressional candidates for the first time in our memory. Although Times editorials tend to agree with Democrats on national policy, we have proudly and consistently endorsed a long line of moderate Republicans, particularly for the House. Our only political loyalty is to making the two-party system as vital and responsible as possible.
That is why things are different this year.
To begin with, the Republican majority that has run the House — and for the most part, the Senate — during President Bush’s tenure has done a terrible job on the basics. Its tax-cutting-above-all-else has wrecked the budget, hobbled the middle class and endangered the long-term economy. It has refused to face up to global warming and done pathetically little about the country’s dependence on foreign oil.
Republican leaders, particularly in the House, have developed toxic symptoms of an overconfident majority that has been too long in power. They methodically shut the opposition — and even the more moderate members of their own party — out of any role in the legislative process. Their only mission seems to be self-perpetuation.
The current Republican majority managed to achieve that burned-out, brain-dead status in record time, and with a shocking disregard for the most minimal ethical standards. It was bad enough that a party that used to believe in fiscal austerity blew billions on pork-barrel projects. It is worse that many of the most expensive boondoggles were not even directed at their constituents, but at lobbyists who financed their campaigns and high-end lifestyles.
That was already the situation in 2004, and even then this page endorsed Republicans who had shown a high commitment to ethics reform and a willingness to buck their party on important issues like the environment, civil liberties and women’s rights.
For us, the breaking point came over the Republicans’ attempt to undermine the fundamental checks and balances that have safeguarded American democracy since its inception. The fact that the White House, House and Senate are all controlled by one party is not a threat to the balance of powers, as long as everyone understands the roles assigned to each by the Constitution. But over the past two years, the White House has made it clear that it claims sweeping powers that go well beyond any acceptable limits. Rather than doing their duty to curb these excesses, the Congressional Republicans have dedicated themselves to removing restraints on the president’s ability to do whatever he wants. To paraphrase Tom DeLay, the Republicans feel you don’t need to have oversight hearings if your party is in control of everything.
An administration convinced of its own perpetual rightness and a partisan Congress determined to deflect all criticism of the chief executive have been the recipe for what we live with today.
Congress, in particular the House, has failed to ask probing questions about the war in Iraq or hold the president accountable for his catastrophic bungling of the occupation. It also has allowed Mr. Bush to avoid answering any questions about whether his administration cooked the intelligence on weapons of mass destruction. Then, it quietly agreed to close down the one agency that has been riding herd on crooked and inept American contractors who have botched everything from construction work to the security of weapons.
After the revelations about the abuse, torture and illegal detentions in Abu Ghraib, Afghanistan and Guantánamo Bay, Congress shielded the Pentagon from any responsibility for the atrocities its policies allowed to happen. On the eve of the election, and without even a pretense at debate in the House, Congress granted the White House permission to hold hundreds of noncitizens in jail forever, without due process, even though many of them were clearly sent there in error.
In the Senate, the path for this bill was cleared by a handful of Republicans who used their personal prestige and reputation for moderation to paper over the fact that the bill violates the Constitution in fundamental ways. Having acquiesced in the president’s campaign to dilute their own authority, lawmakers used this bill to further Mr. Bush’s goal of stripping the powers of the only remaining independent branch, the judiciary.
This election is indeed about George W. Bush — and the Congressional majority’s insistence on protecting him from the consequences of his mistakes and misdeeds. Mr. Bush lost the popular vote in 2000 and proceeded to govern as if he had an enormous mandate. After he actually beat his opponent in 2004, he announced he now had real political capital and intended to spend it. We have seen the results. It is frightening to contemplate the new excesses he could concoct if he woke up next Wednesday and found that his party had maintained its hold on the House and Senate.
Saturday, November 4
For U.S. and Top Iraqi, Animosity Is Mutual
November 4, 2006
News Analysis
By JOHN F. BURNS
BAGHDAD, Nov. 3 — The cycle of discord and strained reconciliation that has broken into the open between Iraq’s Shiite-led government and the Bush administration has revealed how wide the gulf has become between what the United States expects from the Baghdad government and what it is able or willing to deliver.
Just in the past 10 days, Prime Minister Nuri Kamal al-Maliki has rejected the notion of an American “timeline” for action on urgent Iraqi political issues; ordered American commanders to lift checkpoints they had set up around the Shiite district of Sadr City to hunt for a kidnapped American soldier and a fugitive Shiite death squad leader; blamed the Americans for the deteriorating security situation in Iraq; and demanded speeded-up Iraqi control of its own military.
The estrangement has developed despite the two governments’ mutual dependency. The Maliki government needs the United States for the protection its 150,000 troops afford, and without which, most Iraqi politicians agree, the country would slide into full-blown civil war. For the Americans, success for the government that won a four-year term in December’s elections seems central to any hope for an orderly American disengagement from Iraq.
Without doubt, there has been an element of political grandstanding by Mr. Maliki that reflects his need to rally support among fractious Shiite political partners and the restive masses they represent. With American pressures focusing on the need for political concessions to the minority Sunnis by the majority Shiites — the principal victims of Saddam Hussein’s repression, and, since his overthrow, the main targets for Sunni insurgent bombings — the prime minister cannot afford to be seen to be at America’s beck and call.
Still, the differences between the new Shiite rulers and the Americans are real and growing. And the paradox of their animosity is that the primary beneficiary of the rift is likely to be their common enemy, the Sunni insurgents. Their aim has been to recapture the power the Sunnis lost with Mr. Hussein’s overthrow — and to repeat the experience of the 1920s, when Shiites squandered their last opportunity to wrest power and handed the Sunnis an opening to another 80 years of domination.
The bitterness between the Shiite leaders and the Americans reflects widely divergent views of the government’s responsibilities. The Americans want Mr. Maliki to lead in forging a “national compact,” healing bitter splits between Shiites, Sunnis and Kurds over the division of political and economic power.
The timeline that Zalmay Khalilzad, the American ambassador, set out last week — prompting an acerbic protest from Mr. Maliki — foresaw framework agreements over coming months. Central issues include disbanding the militias that have been responsible for a wave of sectarian killing, the future division of oil revenues, and a new approach to the Baathists, who were the bedrock of the Hussein government, that will strike a fairer balance between holding the worst accountable for their crimes and offering others rehabilitation.
But Mr. Maliki is not well cast for the role of national conciliator, and has shown a growing tendency to revert to type as a stalwart of a Shiite religious party, the Islamic Dawa Party, which had thousands of its followers killed under Mr. Hussein.
Like most other current Shiite leaders, Mr. Maliki spent decades in exile, and lost family members in Mr. Hussein’s gulag. By nature, he is withdrawn and, American officials say, lacks the natural ease, and perhaps the will, to reach out to politicians from other communities, especially Sunnis.
The Americans say that a self-reinforcing dynamic is at play, with the growing sectarian violence between Sunnis and Shiites, responsible for thousands of deaths this year in Baghdad and surrounding areas, causing politicians from both groups to pull back from the vision of a shared life.
Instead, positions have hardened. In the case of Mr. Maliki, who heads what is nominally a “national unity” cabinet, this has meant an increasing tendency to act as the steward of Shiite interests, sometimes so obtrusively that Sunnis, and to a lesser extent Kurds, have accused him of blatant sectarianism.
The issue of greatest concern to the Americans — and to Sunnis — has been Mr. Maliki’s resistance to American pressure for a crackdown on the Mahdi Army, the Shiite militia that the Americans say has been in the forefront of death squad attacks on Sunnis. The Shiite cleric who leads the militia, Moktada al-Sadr, controls the largest Shiite bloc in Parliament and backed Mr. Maliki in the contest among Shiite groups to name the new prime minister.
Another Shiite militia, the Badr Organization, is controlled by Abdul Aziz al-Hakim, leader of the Supreme Council for the Islamic Revolution in Iraq, who is both a powerful rival to Mr. Maliki in Shiite religious politics and another mainstay of the government.
So for Mr. Maliki, American demands for action to disband the militias have revealed in their sharpest form the tensions between his role as national leader and as steward of Shiite interests. Compounding his dilemma, public opinion among Shiites, particularly in Sadr City, the Mahdi Army’s main stronghold, has coalesced around the militiamen, who are seen by many as the only effective protection against Sunni insurgents who have killed thousands of Shiites with their bombings of marketplaces, mosques, weddings, funerals and other public gatherings.
The failure of American troops to stop these bombings is a source of anger among Shiites, who have woven conspiracy theories that depict the Americans as silent partners for the Sunnis. And the rancor finds a favorite target in Mr. Khalilzad, who has become a figure of contempt among some senior Shiites in the government for his efforts to draw the Sunnis into the circle of power in Baghdad. It has become common among Shiite officials to say that the envoy harbors an unease toward Shiites engendered by growing up in a Sunni family in Afghanistan that distrusted Hazaras, Shiite descendants of Genghis Khan.
For months, Mr. Maliki has argued against forcible moves to disband the militias, urging a political solution and pointing to cases in which Mr. Sadr himself has approved, or at least not opposed, raids on death squad leaders whom he has described as renegades from the mainstream Mahdi Army. Publicly, the Americans have backed the prime minister; privately, they say the country cannot wait while sectarian killing rages unabated. The result has been an uneasy, and at times volatile, compromise.
American commanders have picked off some of the most brutal Shiite death squad leaders on a raid-by-raid basis, sometimes with Mr. Maliki’s approval, and sometimes, as in the case of a disputed Sadr City raid last week that failed to capture the wanted man, known as Abu Derar, without it. In one case last month, Gen. George W. Casey Jr., the top American commander in Iraq, intervened to release another alleged Mahdi Army death squad leader captured in a raid in west Baghdad after Mr. Maliki demanded he be freed, apparently to assuage Mr. Sadr.
American dissatisfaction with the Maliki government goes far beyond the ambivalence over the militias. When the government was sworn in on May 20, Mr. Khalilzad and General Casey said it had six months to take a broad range of political actions that would build public support, and make the war winnable. When President Bush made a six-hour visit to Baghdad in June, he said he had looked Mr. Maliki “in the eye” to determine if America had a reliable partner, and reported that he was convinced the new prime minister met the test.
High among American priorities was the need for effective government after a largely wasted year under Mr. Maliki’s predecessor, Ibrahim al-Jaafari. American officials have told reporters in background briefings in recent weeks that little has changed, with the budgets of many government departments, including the Health Ministry, controlled by officials loyal to Mr. Sadr, being used for what the Americans say amounts to wholesale looting.
In the past week, Mr. Maliki has added a new, potentially incendiary grievance against the Americans. In interviews that preceded a placatory teleconference call with President Bush last weekend, he said the poor security situation across Iraq was the Americans’ fault, and demanded a more rapid transfer of command authority over the war. With apparent unconcern for the war’s growing unpopularity in the United States, he demanded more American money for the buildup of Iraq’s own forces, and for reconstruction of the country’s infrastructure, on top of the $38 billion the Bush administration says it has already spent on civil and military aid to Iraq since the toppling of Mr. Hussein in 2003 and the nearly $400 billion for America’s own deployments.
Mr. Bush responded by dispatching his national security adviser, Stephen J. Hadley, on an urgent trip to Baghdad on Monday, and agreeing to work on ways of accelerating the transfer of authority, especially in regard to the Maliki government’s ability to control the deployment of Iraqi troops.
What the Bush administration’s public comments omitted was any reference to the deep frustration among American commanders at the continuing weakness of many Iraqi Army units, which have been plagued by high levels of indiscipline, absenteeism and desertion. Some American officers say that as many as half of the listed 137,000 Iraqi soldiers are effectively undeployable.
The situation has its keenest effects in Baghdad, where American commanders say the war will ultimately be won or lost. In the stepped-up effort to clear the city of insurgents and death squads, begun in August and acknowledged by American commanders to be faltering, American troops have accounted for two-thirds of the 25,000 deployed, after Iraqi commanders delivered two of the six battalions they promised.
The result, American officers involved in the operation have noted, is that what little security there is in the city — and, ultimately, the survival of the Maliki government itself — relies far more on American than Iraqi troops.
News Analysis
By JOHN F. BURNS
BAGHDAD, Nov. 3 — The cycle of discord and strained reconciliation that has broken into the open between Iraq’s Shiite-led government and the Bush administration has revealed how wide the gulf has become between what the United States expects from the Baghdad government and what it is able or willing to deliver.
Just in the past 10 days, Prime Minister Nuri Kamal al-Maliki has rejected the notion of an American “timeline” for action on urgent Iraqi political issues; ordered American commanders to lift checkpoints they had set up around the Shiite district of Sadr City to hunt for a kidnapped American soldier and a fugitive Shiite death squad leader; blamed the Americans for the deteriorating security situation in Iraq; and demanded speeded-up Iraqi control of its own military.
The estrangement has developed despite the two governments’ mutual dependency. The Maliki government needs the United States for the protection its 150,000 troops afford, and without which, most Iraqi politicians agree, the country would slide into full-blown civil war. For the Americans, success for the government that won a four-year term in January’s elections seems central to any hope for an orderly American disengagement from Iraq.
Without doubt, there has been an element of political grandstanding by Mr. Maliki that reflects his need to rally support among fractious Shiite political partners and the restive masses they represent. With American pressures focusing on the need for political concessions to the minority Sunnis by the majority Shiites — the principal victims of Saddam Hussein’s repression, and, since his overthrow, the main targets for Sunni insurgent bombings — the prime minister cannot afford to be seen to be at America’s beck and call.
Still, the differences between the new Shiite rulers and the Americans are real and growing. And the paradox of their animosity is that the primary beneficiary of the rift is likely to be their common enemy, the Sunni insurgents. Their aim has been to recapture the power the Sunnis lost with Mr. Hussein’s overthrow — and to repeat the experience of the 1920s, when Shiites squandered their last opportunity to wrest power and handed the Sunnis an opening to another 80 years of domination.
The bitterness between the Shiite leaders and the Americans reflects widely divergent views of the government’s responsibilities. The Americans want Mr. Maliki to lead in forging a “national compact,” healing bitter splits between Shiites, Sunnis and Kurds over the division of political and economic power.
The timeline that Zalmay Khalilzad, the American ambassador, set out last week — prompting an acerbic protest from Mr. Maliki — foresaw framework agreements over coming months. Central issues include disbanding the militias that have been responsible for a wave of sectarian killing, the future division of oil revenues, and a new approach to the Baathists, who were the bedrock of the Hussein government, that will strike a fairer balance between holding the worst accountable for their crimes and offering others rehabilitation.
But Mr. Maliki is not well cast for the role of national conciliator, and has shown a growing tendency to revert to type as a stalwart of a Shiite religious party, the Islamic Dawa Party, which had thousands of its followers killed under Mr. Hussein.
Like most other current Shiite leaders, Mr. Maliki spent decades in exile, and lost family members in Mr. Hussein’s gulag. By nature, he is withdrawn and, American officials say, lacks the natural ease, and perhaps the will, to reach out to politicians from other communities, especially Sunnis.
The Americans say that a self-reinforcing dynamic is at play, with the growing sectarian violence between Sunnis and Shiites, responsible for thousands of deaths this year in Baghdad and surrounding areas, causing politicians from both groups to pull back from the vision of a shared life.
Instead, positions have hardened. In the case of Mr. Maliki, who heads what is nominally a “national unity” cabinet, this has meant an increasing tendency to act as the steward of Shiite interests, sometimes so obtrusively that Sunnis, and to a lesser extent Kurds, have accused him of blatant sectarianism.
The issue of greatest concern to the Americans — and to Sunnis — has been Mr. Maliki’s resistance to American pressure for a crackdown on the Mahdi Army, the Shiite militia that the Americans say has been in the forefront of death squad attacks on Sunnis. The Shiite cleric who leads the militia, Moktada al-Sadr, controls the largest Shiite bloc in Parliament and backed Mr. Maliki in the contest among Shiite groups to name the new prime minister.
Another Shiite militia, the Badr Organization, is controlled by Abdul Aziz al-Hakim, leader of the Supreme Council for the Islamic Revolution in Iraq, who is both a powerful rival to Mr. Maliki in Shiite religious politics and another mainstay of the government.
So for Mr. Maliki, American demands for action to disband the militias have revealed in their sharpest form the tensions between his role as national leader and as steward of Shiite interests. Compounding his dilemma, public opinion among Shiites, particularly in Sadr City, the Mahdi Army’s main stronghold, has coalesced around the militiamen, who are seen by many as the only effective protection against Sunni insurgents who have killed thousands of Shiites with their bombings of marketplaces, mosques, weddings, funerals and other public gatherings.
The failure of American troops to stop these bombings is a source of anger among Shiites, who have woven conspiracy theories that depict the Americans as silent partners of the Sunnis. And the rancor finds a favorite target in Mr. Khalilzad, who has become a figure of contempt among some senior Shiites in the government for his efforts to draw the Sunnis into the circle of power in Baghdad. It has become common among Shiite officials to say that the envoy harbors an unease toward Shiites engendered by his upbringing in a Sunni family in Afghanistan that distrusted the Hazaras, a Shiite minority traditionally said to descend from Genghis Khan’s Mongol armies.
For months, Mr. Maliki has argued against forcible moves to disband the militias, urging a political solution and pointing to cases in which Mr. Sadr himself has approved, or at least not opposed, raids on death squad leaders whom he has described as renegades from the main force of the Mahdi Army. Publicly, the Americans have backed the prime minister; privately, they say the country cannot wait while sectarian killing rages unabated. The result has been an uneasy, and at times volatile, compromise.
American commanders have picked off some of the most brutal Shiite death squad leaders on a raid-by-raid basis, sometimes with Mr. Maliki’s approval and sometimes without it, as in a disputed Sadr City raid last week that failed to capture the wanted man, known as Abu Derar. In one case last month, Gen. George W. Casey Jr., the top American commander in Iraq, intervened to release another alleged Mahdi Army death squad leader, captured in a raid in west Baghdad, after Mr. Maliki demanded that he be freed, apparently to assuage Mr. Sadr.
American dissatisfaction with the Maliki government goes far beyond the ambivalence over the militias. When the government was sworn in on May 20, Mr. Khalilzad and General Casey said it had six months to take a broad range of political actions that would build public support and make the war winnable. When President Bush made a six-hour visit to Baghdad in June, he said he had looked Mr. Maliki “in the eye” to determine if America had a reliable partner, and reported that he was convinced the new prime minister met the test.
High among American priorities was the need for effective government after a largely wasted year under Mr. Maliki’s predecessor, Ibrahim al-Jaafari. But American officials have told reporters in background briefings in recent weeks that little has changed, and that the budgets of many government departments, including the Health Ministry, which is controlled by officials loyal to Mr. Sadr, are being used for what they describe as wholesale looting.
In the past week, Mr. Maliki has added a new, potentially incendiary grievance against the Americans. In interviews that preceded a placatory teleconference call with President Bush last weekend, he said the poor security situation across Iraq was the Americans’ fault and demanded a more rapid transfer of command authority over the war. With apparent unconcern for the war’s growing unpopularity in the United States, he also pressed for more American money for the buildup of Iraq’s own forces and for reconstruction of the country’s infrastructure, on top of the $38 billion the Bush administration says it has already spent on civil and military aid to Iraq since the toppling of Mr. Hussein in 2003 and the nearly $400 billion for America’s own deployments.
Mr. Bush responded by dispatching his national security adviser, Stephen J. Hadley, on an urgent trip to Baghdad on Monday, and agreeing to work on ways of accelerating the transfer of authority, especially in regard to the Maliki government’s ability to control the deployment of Iraqi troops.
What the Bush administration’s public comments omitted was any reference to the deep frustration among American commanders at the continuing weakness of many Iraqi Army units, which have been plagued by high levels of indiscipline, absenteeism and desertion. Some American officers say that as many as half of the listed 137,000 Iraqi soldiers are effectively undeployable.
The situation has its keenest effects in Baghdad, where American commanders say the war will ultimately be won or lost. In the stepped-up effort to clear the city of insurgents and death squads, begun in August and acknowledged by those commanders to be faltering, American troops have accounted for two-thirds of the 25,000 deployed, after Iraqi commanders delivered only two of the six battalions they promised.
The result, American officers involved in the operation have noted, is that what little security there is in the city — and, ultimately, the survival of the Maliki government itself — relies far more on American than Iraqi troops.