Swift Boating the Planet
By PAUL KRUGMAN
A brief segment in "An Inconvenient Truth" shows Senator Al Gore questioning James Hansen, a climatologist at NASA, during a 1989 hearing. But the movie doesn't give you much context, or tell you what happened to Dr. Hansen later.
And that's a story worth telling, for two reasons. It's a good illustration of the way interest groups can create the appearance of doubt even when the facts are clear and cloud the reputations of people who should be regarded as heroes. And it's a warning for Mr. Gore and others who hope to turn global warming into a real political issue: you're going to have to get tougher, because the other side doesn't play by any known rules.
Dr. Hansen was one of the first climate scientists to say publicly that global warming was under way. In 1988, he made headlines with Senate testimony in which he declared that "the greenhouse effect has been detected, and it is changing our climate now." When he testified again the following year, officials in the first Bush administration altered his prepared statement to downplay the threat. Mr. Gore's movie shows the moment when the administration's tampering was revealed.
In 1988, Dr. Hansen was well out in front of his scientific colleagues, but over the years that followed he was vindicated by a growing body of evidence. By rights, Dr. Hansen should have been universally acclaimed for both his prescience and his courage.
But soon after Dr. Hansen's 1988 testimony, energy companies began a campaign to create doubt about global warming, in spite of the increasingly overwhelming evidence. And in the late 1990's, climate skeptics began a smear campaign against Dr. Hansen himself.
Leading the charge was Patrick Michaels, a professor at the University of Virginia who has received substantial financial support from the energy industry. In Senate testimony, and then in numerous presentations, Dr. Michaels claimed that the actual pace of global warming was falling far short of Dr. Hansen's predictions. As evidence, he presented a chart supposedly taken from a 1988 paper written by Dr. Hansen and others, which showed a curve of rising temperatures considerably steeper than the trend that has actually taken place.
In fact, the chart Dr. Michaels showed was a fraud — that is, it wasn't what Dr. Hansen actually predicted. The original paper showed a range of possibilities, and the actual rise in temperature has fallen squarely in the middle of that range. So how did Dr. Michaels make it seem as if Dr. Hansen's prediction was wildly off? Why, he erased all the lower curves, leaving only the curve that the original paper described as being "on the high side of reality."
The experts at www.realclimate.org, the go-to site for climate science, suggest that the smears against Dr. Hansen "might be viewed by some as a positive sign, indicative of just how intellectually bankrupt the contrarian movement has become." But I think they're misreading the situation. In fact, the smears have been around for a long time, and Dr. Hansen has been trying to correct the record for years. Yet the claim that Dr. Hansen vastly overpredicted global warming has remained in circulation, and has become a staple of climate change skeptics, from Michael Crichton to Robert Novak.
There's a concise way to describe what happened to Dr. Hansen: he was Swift-boated.
John Kerry, a genuine war hero, didn't realize that he could successfully be portrayed as a coward. And it seems to me that Dr. Hansen, whose predictions about global warming have proved remarkably accurate, didn't believe that he could successfully be portrayed as an unreliable exaggerator. His first response to Dr. Michaels, in January 1999, was astonishingly diffident. He pointed out that Dr. Michaels misrepresented his work, but rather than denouncing the fraud involved, he offered a rather plaintive appeal for better behavior.
Even now, Dr. Hansen seems reluctant to say the obvious. "Is this treading close to scientific fraud?" he recently asked about Dr. Michaels's smear. The answer is no: it isn't "treading close," it's fraud pure and simple.
Now, Dr. Hansen isn't running for office. But Mr. Gore might be, and even if he isn't, he hopes to promote global warming as a political issue. And if he wants to do that, he and those on his side will have to learn to call liars what they are.
Wednesday, May 31
Friday, May 26
Republican Mayor, in Wall St. Journal, denounces Conservative Immigration Plan in House
Enforceable, Sustainable, Compassionate
On immigration, it's time to get real.
BY MICHAEL R. BLOOMBERG
Wednesday, May 24, 2006, Wall Street Journal
In every decade there is a critical domestic issue that shapes our political life for decades to come. In the 1960s, it was civil rights; in the 1970s, the Watergate crisis; in the 1980s, crime and drugs; and in the 1990s, welfare dependency. Today, it is immigration.
In New York City, 500,000 of our more than three million immigrants are here illegally. Although they broke the law by illegally crossing our borders or overstaying their visas, our economy would be a shell of itself had they not, and it would collapse if they were deported. The same holds true for the nation. Yet in a post-9/11 world, the federal government can no longer wink at illegal immigration. To ensure our national security and keep our economy growing, it is essential that immigration reform embody four key principles:
1. Reduce Incentives. As a business owner, I know the absurdity of our existing immigration regulations all too well. Employers are required to check the status of all job applicants, but not to do anything more than eyeball their documents. In fact, hypocritically, employers are not even permitted to ask probing questions. As a result, fake "green cards" are a dime a dozen, and illegal immigrants can easily qualify for jobs.
It is encouraging that a growing number of Democrats and Republicans in Congress recognize the need for a federal database that will allow employers to verify the status of those applying for jobs. The database must identify all job applicants in America based on documentation that cannot be corrupted--fingerprints or DNA, for example. (Social Security cards are just too easy to falsify.) In addition, there must be stiff penalties for businesses that fail to conduct checks or ignore their results. Holding businesses accountable is the crucial step, because it is the only way to reduce the incentive to come here illegally. Requiring employers to verify citizenship status was the promise of the 1986 immigration reform law, but it was an empty promise, never enforced by a federal government pressured to look the other way while workers were exploited. This allowed illegal immigration levels to overwhelm our border control. We must not make the same mistake again.
2. Increase Lawful Opportunity. Baby boomers are starting to retire, America's birthrate continues to slow, and our visa quotas remain too low. As a result, we need more workers than we have, and that's exactly why so many people want to come here. In most cases, those here illegally are filling low-wage, low-skill jobs that Americans do not want. Recent studies put the lie to the old argument that immigrants take jobs away from native-born Americans and significantly depress wages. Global economic forces are responsible for the declines in the real wages of unskilled workers and occur regardless of whether immigrants are present in a community. Moreover, any slight wage decline is more than offset by substantial increases in productivity.
To keep people and businesses investing in America, we need to ensure that we have workers for all types of jobs. That means increasing the number of visas for overseas manual workers, who help provide the essential muscle and elbow grease we need to keep our economy running, as well as the number of visas for immigrant engineers, doctors, scientists and other professionally trained workers--the brains of tomorrow's economy. And it means giving all of them, as well as foreign students, the opportunity to earn permanent status, so they can put their knowledge and entrepreneurial spirit to use for our country. Why shouldn't we reap the benefits of the skills they have obtained here? If we don't allow them in, or we send them home, we will be sending the future of science--and the jobs of tomorrow--with them.
3. Reduce Access. Controlling our borders is a matter of urgent national security. As President Bush recognizes, in some areas, particularly in border towns, additional fencing may be required; in open desert areas, a virtual wall--created through sensors and cameras--will be far more effective. However, even after doubling their numbers, border security guards will remain overwhelmed by the flood of people attempting to enter illegally. Only by embracing the first two principles--reducing incentives and increasing lawful opportunity--will the border security so desired by the House become a manageable task.
4. Get Real. The idea of deporting 11 million people, nearly as many as live in the entire state of Illinois, is pure fantasy. It is physically impossible to carry out, though if it were attempted, it would devastate both families and our economy. The Senate's tiered approach requiring that some people "report to deport" through guest worker programs--while leaving their spouses, children and mortgages behind--is no less ridiculous. If this approach becomes law, there can be little doubt that the black market for false documentation would remain strong and real enforcement impossible.
There is only one practical solution, and it is a solution that respects the history of our nation: Offer those already here the opportunity to earn permanent status and keep their families together, provided they pay appropriate penalties. For decades, the federal government has tacitly welcomed them into the workforce and collected their income and Social Security taxes, which two-thirds of undocumented workers pay. Now, instead of pointing fingers about the past, let's accept the present for what it is by bringing people out of the shadows, and focus on the future by casting those shadows aside, permanently.
As the debate continues in Washington, it is essential that Congress recognize the need for an immigration policy that is enforceable, sustainable and compassionate--and that enables the American economy to thrive in the 21st century. But if one principle is abandoned, we will be no better off than we were after passage of the 1986 law. A successful solution to our border problems cannot rest on a wall alone; it must be built on a foundation strong enough to support it, and to support our continued economic growth and prosperity.
Mr. Bloomberg is mayor of New York.
Monday, May 22
100 Years in the Back Door, Out the Front
May 21, 2006, NYT
By NINA BERNSTEIN
THE Texas cotton lobbyist tried to reassure Congress that the tens of thousands of Mexicans who labored in the fields of the Southwest were not a threat to national security. There "never was a more docile animal in the world than the Mexican," he told the Senate committee.
Then he offered a way around the political problem the congressmen faced in extending the program that had let the workers in.
"If you gentlemen have any objections to admitting the Mexicans by law," he said, "take the river guard away and let us alone, and we will get them all right."
They did — and that was in 1920. Almost a century later, the debate over illegal immigration from Mexico often makes it sound like a recent development that breaks with the tradition of legal passage to America.
Quite the contrary, say immigration scholars like Aristide R. Zolberg, who relates the anecdote about the Texas cotton grower in his new book, "A Nation by Design: Immigration Policy in the Fashioning of America." A pattern of deliberately leaving the country's "back door" open to Mexican workers, then moving to expel them and their families years later, has been a recurrent feature of immigration policy since the 1890's.
"Things are not the same today, but the basic dynamics do not change," said Mr. Zolberg, a professor of political science at the New School. "Wanting immigrants because they're a good source of cheap labor and human capital on the one hand, and then posing the identity question: But will they become Americans? Where is the boundary of American identity going to be?"
Nearly every immigrant group has been caught at that crossroads for a time, wanted for work but unwelcome as citizens, especially when the economy slumps. But Mexicans have been summoned and sent back in cycles for four generations, repeatedly losing the ground they had gained.
During the Depression, as many as a million Mexicans, and even Mexican-Americans, were ousted, along with their American-born children, to spare relief costs or discourage efforts to unionize. They were welcome again during World War II and cast as heroic "braceros." But in the 1950's, Mexicans were re-branded as dangerous, welfare-seeking "wetbacks."
In 1954, President Dwight D. Eisenhower sent Gen. Joseph Swing to "secure the border" with farm raids and summary deportations that drove out at least a million people. At the same time, growers were assured of a new supply of temporary workers through the "braceros" program, which soon doubled to 400,000 a year.
The pattern grew during the years between the 1882 Chinese Exclusion Act and the quotas of 1929, as rising legal barriers drastically narrowed the nation's front door. The goal was to preserve the country's "Nordic character" against Italians and Eastern European Jews who had begun arriving in large numbers.
Yet Congress refused to close the back entrance to a growing flow of Mexicans, even though by the lawmakers' own racial standards, Mexicans were even more objectionable than the "degraded races" of Asians and Southern Europeans whom they were increasingly replacing in fields, factories and railroad work.
A convenient way was found to reconcile the contradiction, said Camille Guérin-Gonzales, a professor of history at the University of Wisconsin and the author of "Mexican Workers and American Dreams." No quotas were necessary to keep Mexicans out because they were not going to stay. "Not wanting to 'mongrelize the race,' but needing cheap labor, Americans constructed Mexicans as 'birds of passage,' " she said, using the phrase coined to describe Italian immigrants. "The proximity of the border made that even more believable."
The cotton pickers cited by the Texas lobbyist had arrived by way of a program intended to address World War I labor shortages. But as commercial agriculture created "factories in the field," undocumented entry became the norm. Growers pointed out that no willing field hand could afford the "head tax" that went with legal entry. And employers regularly cited informal entry as a feature that made Mexicans more desirable than cheap foreign laborers like Filipinos, because they were easier to deport. As one rancher quoted in Mr. Zolberg's book remarked to a Mexican hand: "When we want you, we'll call you; when we don't — git."
The full, brutal weight of that formula hit in the Depression. Roundups of Mexican families in public places, summary deportations — and well-publicized threats of more to come — sent panic through Mexican-American communities in 1931. The tactic was called "scare-heading" by its architect, Charles P. Visel, the director of the Los Angeles Citizens Committee on the Coordination of Unemployment Relief. It worked. Even many legal immigrants were panicked into selling their property cheap and leaving "voluntarily."
It was a time when crops went unharvested for lack of buyers and white families like those in "The Grapes of Wrath" poured West, desperate for work. "They gave you a choice: starve or go back to Mexico," a resident of Indiana Harbor, Ind., recalled later, as Roger Daniels relates in his book "Guarding the Golden Door." A Santa Barbara woman said she would never forget seeing trains organized by the railroad transporting families to the border in boxcars. The same rail lines had long been maintained by Mexicans who had settled not only in the Southwest, but in Indiana, Illinois and eastward.
"I have left the best of my life and strength here, sprinkling with the sweat of my brow the fields and factories of these gringos, who only know how to make one sweat and don't even pay attention to one when they see one is old," said one worker, Juan Berzunzolo, interviewed in California in the 1920's by a Mexican anthropologist and quoted by Devra Weber in "Dark Sweat, White Gold: California Farm Workers, Cotton and the New Deal."
At the other side of the border, Ms. Guérin-Gonzales said, an 11-year-old American-born girl who had been "repatriated" from California told an interviewer in the 1930's, "I would be in the fifth grade there, but here, no, because I didn't know how to read and write Spanish." A boy recounted how a Mexican policeman upbraided him for speaking English. But by 1943, with the economy ascendant and employers crying of wartime labor shortages, the cycle began anew.
Today, the nature of the deal can no longer be disguised, said Marcelo M. Suárez-Orozco, co-director of Immigration Studies at New York University. "It's a bad-faith pact," he said. "We can't have it both ways — an economy that's addicted to immigrant labor, but that's not ready to pay the cost."
And Mr. Zolberg said the old resort to mass expulsion is less likely, since the naturalization of millions of Latinos, including those from the 1986 amnesty, changed the rules of the game. "Mexicans, and Latinos generally are more in the situation today that Italians and Jews were in the 20's and 30's," he said. "They began to have some electoral clout, because there were more people of that national origin who could stand up."
Thursday, May 18
Judge Dismisses Suit by Man Who Says He Was Tortured
By THE ASSOCIATED PRESS
ROCKVILLE, Md. (AP) -- A federal judge dismissed a lawsuit by a German man who said he was illegally detained and tortured in overseas prisons run by the CIA, ruling that a lawsuit would improperly expose state secrets.
The ruling by U.S. District Judge T.S. Ellis III makes no determination on the validity of the claims by Khaled al-Masri, who said he was kidnapped on New Year's Eve 2003 and detained in various overseas prisons for nearly five months before finally being dumped on an abandoned road in Albania.
During his detention, he said he was beaten and sodomized with a foreign object by his captors. He also alleges that a CIA team forced him to wear a diaper, drugged him and refused to contact German authorities about his arrest.
Ellis said he was satisfied after receiving a secret written briefing from the director of central intelligence that allowing al-Masri's lawsuit to proceed would harm national security.
"In the present circumstances, al-Masri's private interests must give way to the national interest in preserving state secrets," Ellis wrote.
Wednesday, May 17
A New Library of Alexandria?
Initially published in the New York Times Magazine, May 14, 2006
Kevin Kelly is the "senior maverick" at Wired magazine and author of "Out of Control: The New Biology of Machines, Social Systems and the Economic World" and other books. He last wrote for the magazine about digital music.
Scan This Book!
By KEVIN KELLY
In several dozen nondescript office buildings around the world, thousands of hourly workers bend over table-top scanners and haul dusty books into high-tech scanning booths. They are assembling the universal library page by page.
The dream is an old one: to have in one place all knowledge, past and present. All books, all documents, all conceptual works, in all languages. It is a familiar hope, in part because long ago we briefly built such a library. The great library at Alexandria, constructed around 300 B.C., was designed to hold all the scrolls circulating in the known world. At one time or another, the library held about half a million scrolls, estimated to have been between 30 and 70 percent of all books in existence then. But even before this great library was lost, the moment when all knowledge could be housed in a single building had passed. Since then, the constant expansion of information has overwhelmed our capacity to contain it. For 2,000 years, the universal library, together with other perennial longings like invisibility cloaks, antigravity shoes and paperless offices, has been a mythical dream that kept receding further into the infinite future.
Until now. When Google announced in December 2004 that it would digitally scan the books of five major research libraries to make their contents searchable, the promise of a universal library was resurrected. Indeed, the explosive rise of the Web, going from nothing to everything in one decade, has encouraged us to believe in the impossible again. Might the long-heralded great library of all knowledge really be within our grasp?
Brewster Kahle, an archivist overseeing another scanning project, says that the universal library is now within reach. "This is our chance to one-up the Greeks!" he shouts. "It is really possible with the technology of today, not tomorrow. We can provide all the works of humankind to all the people of the world. It will be an achievement remembered for all time, like putting a man on the moon." And unlike the libraries of old, which were restricted to the elite, this library would be truly democratic, offering every book to every person.
But the technology that will bring us a planetary source of all written material will also, in the same gesture, transform the nature of what we now call books and the libraries that hold them. The universal library and its "books" will be unlike any library or books we have known. Pushing us rapidly toward that Eden of everything, and away from the paradigm of the physical paper tome, is the hot technology of the search engine.
1. Scanning the Library of Libraries
Scanning technology has been around for decades, but digitized books didn't make much sense until recently, when search engines like Google, Yahoo, Ask and MSN came along. When millions of books have been scanned and their texts are made available in a single database, search technology will enable us to grab and read any book ever written. Ideally, in such a complete library we should also be able to read any article ever written in any newspaper, magazine or journal. And why stop there? The universal library should include a copy of every painting, photograph, film and piece of music produced by all artists, present and past. Still more, it should include all radio and television broadcasts. Commercials too. And how can we forget the Web? The grand library naturally needs a copy of the billions of dead Web pages no longer online and the tens of millions of blog posts now gone — the ephemeral literature of our time. In short, the entire works of humankind, from the beginning of recorded history, in all languages, available to all people, all the time.
This is a very big library. But because of digital technology, you'll be able to reach inside it from almost any device that sports a screen. From the days of Sumerian clay tablets till now, humans have "published" at least 32 million books, 750 million articles and essays, 25 million songs, 500 million images, 500,000 movies, 3 million videos, TV shows and short films and 100 billion public Web pages. All this material is currently contained in all the libraries and archives of the world. When fully digitized, the whole lot could be compressed (at current technological rates) onto 50 petabyte hard disks. Today you need a building about the size of a small-town library to house 50 petabytes. With tomorrow's technology, it will all fit onto your iPod. When that happens, the library of all libraries will ride in your purse or wallet — if it doesn't plug directly into your brain with thin white cords. Some people alive today are surely hoping that they die before such things happen, and others, mostly the young, want to know what's taking so long. (Could we get it up and running by next week? They have a history project due.)
Technology accelerates the migration of all we know into the universal form of digital bits. Nikon will soon quit making film cameras for consumers, and Minolta already has: better think digital photos from now on. Nearly 100 percent of all contemporary recorded music has already been digitized, much of it by fans. About one-tenth of the 500,000 or so movies listed on the Internet Movie Database are now digitized on DVD. But because of copyright issues and the physical fact of the need to turn pages, the digitization of books has proceeded at a relative crawl. At most, one book in 20 has moved from analog to digital. So far, the universal library is a library without many books.
But that is changing very fast. Corporations and libraries around the world are now scanning about a million books per year. Amazon has digitized several hundred thousand contemporary books. In the heart of Silicon Valley, Stanford University (one of the five libraries collaborating with Google) is scanning its eight-million-book collection using a state-of-the-art robot from the Swiss company 4DigitalBooks. This machine, the size of a small S.U.V., automatically turns the pages of each book as it scans it, at the rate of 1,000 pages per hour. A human operator places a book in a flat carriage, and then pneumatic robot fingers flip the pages — delicately enough to handle rare volumes — under the scanning eyes of digital cameras.
Like many other functions in our global economy, however, the real work has been happening far away, while we sleep. We are outsourcing the scanning of the universal library. Superstar, an entrepreneurial company based in Beijing, has scanned every book from 900 university libraries in China. It has already digitized 1.3 million unique titles in Chinese, which it estimates is about half of all the books published in the Chinese language since 1949. It costs $30 to scan a book at Stanford but only $10 in China.
Raj Reddy, a professor at Carnegie Mellon University, decided to move a fair-size English-language library to where the cheap subsidized scanners were. In 2004, he borrowed 30,000 volumes from the storage rooms of the Carnegie Mellon library and the Carnegie Library and packed them off to China in a single shipping container to be scanned by an assembly line of workers paid by the Chinese. His project, which he calls the Million Book Project, is churning out 100,000 pages per day at 20 scanning stations in India and China. Reddy hopes to reach a million digitized books in two years.
The idea is to seed the bookless developing world with easily available texts. Superstar sells copies of books it scans back to the same university libraries it scans from. A university can expand a typical 60,000-volume library into a 1.3 million-volume one overnight. At about 50 cents per digital book acquired, it's a cheap way for a library to increase its collection. Bill McCoy, the general manager of Adobe's e-publishing business, says: "Some of us have thousands of books at home, can walk to wonderful big-box bookstores and well-stocked libraries and can get Amazon.com to deliver next day. The most dramatic effect of digital libraries will be not on us, the well-booked, but on the billions of people worldwide who are underserved by ordinary paper books." It is these underbooked — students in Mali, scientists in Kazakhstan, elderly people in Peru — whose lives will be transformed when even the simplest unadorned version of the universal library is placed in their hands.
2. What Happens When Books Connect
The least important, but most discussed, aspects of digital reading have been these contentious questions: Will we give up the highly evolved technology of ink on paper and instead read on cumbersome machines? Or will we keep reading our paperbacks on the beach? For now, the answer is yes to both. Yes, publishers have lost millions of dollars on the long-prophesied e-book revolution that never occurred, while the number of physical books sold in the world each year continues to grow. At the same time, there are already more than half a billion PDF documents on the Web that people happily read on computers without printing them out, and still more people now spend hours watching movies on microscopic cellphone screens. The arsenal of our current display technology — from handheld gizmos to large flat screens — is already good enough to move books to their next stage of evolution: a full digital scan.
Yet the common vision of the library's future (even the e-book future) assumes that books will remain isolated items, independent from one another, just as they are on shelves in your public library. There, each book is pretty much unaware of the ones next to it. When an author completes a work, it is fixed and finished. Its only movement comes when a reader picks it up to animate it with his or her imagination. In this vision, the main advantage of the coming digital library is portability — the nifty translation of a book's full text into bits, which permits it to be read on a screen anywhere. But this vision misses the chief revolution birthed by scanning books: in the universal library, no book will be an island.
Turning inked letters into electronic dots that can be read on a screen is simply the first essential step in creating this new library. The real magic will come in the second act, as each word in each book is cross-linked, clustered, cited, extracted, indexed, analyzed, annotated, remixed, reassembled and woven deeper into the culture than ever before. In the new world of books, every bit informs another; every page reads all the other pages.
In recent years, hundreds of thousands of enthusiastic amateurs have written and cross-referenced an entire online encyclopedia called Wikipedia. Buoyed by this success, many nerds believe that a billion readers can reliably weave together the pages of old books, one hyperlink at a time. Those with a passion for a special subject, obscure author or favorite book will, over time, link up its important parts. Multiply that simple generous act by millions of readers, and the universal library can be integrated in full, by fans for fans.
In addition to a link, which explicitly connects one word or sentence or book to another, readers will also be able to add tags, a recent innovation on the Web but already a popular one. A tag is a public annotation, like a keyword or category name, that is hung on a file, page, picture or song, enabling anyone to search for that file. For instance, on the photo-sharing site Flickr, hundreds of viewers will "tag" a photo submitted by another user with their own simple classifications of what they think the picture is about: "goat," "Paris," "goofy," "beach party." Because tags are user-generated, when they move to the realm of books, they will be assigned faster, range wider and serve better than out-of-date schemes like the Dewey Decimal System, particularly in frontier or fringe areas like nanotechnology or body modification.
The link and the tag may be two of the most important inventions of the last 50 years. They get their initial wave of power when we first code them into bits of text, but their real transformative energies fire up as ordinary users click on them in the course of everyday Web surfing, unaware that each humdrum click "votes" on a link, elevating its rank of relevance. You may think you are just browsing, casually inspecting this paragraph or that page, but in fact you are anonymously marking up the Web with bread crumbs of attention. These bits of interest are gathered and analyzed by search engines in order to strengthen the relationship between the end points of every link and the connections suggested by each tag. This is a type of intelligence common on the Web, but previously foreign to the world of books.
Once a book has been integrated into the new expanded library by means of this linking, its text will no longer be separate from the text in other books. For instance, today a serious nonfiction book will usually have a bibliography and some kind of footnotes. When books are deeply linked, you'll be able to click on the title in any bibliography or any footnote and find the actual book it refers to. The books referenced in that book's bibliography will themselves be available, and so you can hop through the library in the same way we hop through Web links, traveling from footnote to footnote to footnote until you reach the bottom of things.
Next come the words. Just as a Web article on, say, aquariums, can have some of its words linked to definitions of fish terms, any and all words in a digitized book can be hyperlinked to other parts of other books. Books, including fiction, will become a web of names and a community of ideas.
Search engines are transforming our culture because they harness the power of relationships, which is all links really are. There are about 100 billion Web pages, and each page holds, on average, 10 links. That's a trillion electrified connections coursing through the Web. This tangle of relationships is precisely what gives the Web its immense force. The static world of book knowledge is about to be transformed by the same elevation of relationships, as each page in a book discovers other pages and other books. Once text is digital, books seep out of their bindings and weave themselves together. The collective intelligence of a library allows us to see things we can't see in a single, isolated book.
When books are digitized, reading becomes a community activity. Bookmarks can be shared with fellow readers. Marginalia can be broadcast. Bibliographies swapped. You might get an alert that your friend Carl has annotated a favorite book of yours. A moment later, his links are yours. In a curious way, the universal library becomes one very, very, very large single text: the world's only book.
3. Books: The Liquid Version
At the same time, once digitized, books can be unraveled into single pages or be reduced further, into snippets of a page. These snippets will be remixed into reordered books and virtual bookshelves. Just as the music audience now juggles and reorders songs into new albums (or "playlists," as they are called in iTunes), the universal library will encourage the creation of virtual "bookshelves" — a collection of texts, some as short as a paragraph, others as long as entire books, that form a library shelf's worth of specialized information. And as with music playlists, once created, these "bookshelves" will be published and swapped in the public commons. Indeed, some authors will begin to write books to be read as snippets or to be remixed as pages. The ability to purchase, read and manipulate individual pages or sections is surely what will drive reference books (cookbooks, how-to manuals, travel guides) in the future. You might concoct your own "cookbook shelf" of Cajun recipes compiled from many different sources; it would include Web pages, magazine clippings and entire Cajun cookbooks. Amazon currently offers you a chance to publish your own bookshelves (Amazon calls them "listmanias") as annotated lists of books you want to recommend on a particular esoteric subject. And readers are already using Google Book Search to round up minilibraries on a certain topic — all books about Sweden, for instance, or books on clocks. Once snippets, articles and pages of books become ubiquitous, shuffle-able and transferable, users will earn prestige and perhaps income for curating an excellent collection.
Libraries (as well as many individuals) aren't eager to relinquish ink-on-paper editions, because the printed book is by far the most durable and reliable backup technology we have. Printed books require no mediating device to read and thus are immune to technological obsolescence. Paper is also extremely stable, compared with, say, hard drives or even CD's. In this way, the stability and fixity of a bound book is a blessing. It sits there unchanging, true to its original creation. But it sits alone.
So what happens when all the books in the world become a single liquid fabric of interconnected words and ideas? Four things: First, works on the margins of popularity will find a small audience larger than the near-zero audience they usually have now. Far out in the "long tail" of the distribution curve — that extended place of low-to-no sales where most of the books in the world live — digital interlinking will lift the readership of almost any title, no matter how esoteric. Second, the universal library will deepen our grasp of history, as every original document in the course of civilization is scanned and cross-linked. Third, the universal library of all books will cultivate a new sense of authority. If you can truly incorporate all texts — past and present, multilingual — on a particular subject, then you can have a clearer sense of what we as a civilization, a species, do know and don't know. The white spaces of our collective ignorance are highlighted, while the golden peaks of our knowledge are drawn with completeness. This degree of authority is only rarely achieved in scholarship today, but it will become routine.
Finally, the full, complete universal library of all works becomes more than just a better Ask Jeeves. Search on the Web becomes a new infrastructure for entirely new functions and services. Right now, if you mash up Google Maps and Monster.com, you get maps of where jobs are located by salary. In the same way, it is easy to see that in the great library, everything that has ever been written about, for example, Trafalgar Square in London could be present on that spot via a screen. In the same way, every object, event or location on earth would "know" everything that has ever been written about it in any book, in any language, at any time. From this deep structuring of knowledge comes a new culture of interaction and participation.
The main drawback of this vision is a big one. So far, the universal library lacks books. Despite the best efforts of bloggers and the creators of the Wikipedia, most of the world's expertise still resides in books. And a universal library without the contents of books is no universal library at all.
There are dozens of excellent reasons that books should quickly be made part of the emerging Web. But so far they have not been, at least not in great numbers. And there is only one reason: the hegemony of the copy.
4. The Triumph of the Copy
The desire of all creators is for their works to find their way into all minds. A text, a melody, a picture or a story succeeds best if it is connected to as many ideas and other works as possible. Ideally, over time a work becomes so entangled in a culture that it appears to be inseparable from it, in the way that the Bible, Shakespeare's plays, "Cinderella" and the Mona Lisa are inseparable from ours. This tendency for creative ideas to infiltrate other works is great news for culture. In fact, this commingling of creations is culture.
In preindustrial times, exact copies of a work were rare for a simple reason: it was much easier to make your own version of a creation than to duplicate someone else's exactly. The amount of energy and attention needed to copy a scroll exactly, word for word, or to replicate a painting stroke by stroke exceeded the cost of paraphrasing it in your own style. So most works were altered, and often improved, by the borrower before they were passed on. Fairy tales evolved mythic depth as many different authors worked on them and as they migrated from spoken tales to other media (theater, music, painting). This system worked well for audiences and performers, but the only way for most creators to earn a living from their works was through the support of patrons.
That ancient economics of creation was overturned at the dawn of the industrial age by the technologies of mass production. Suddenly, the cost of duplication was lower than the cost of appropriation. With the advent of the printing press, it was now cheaper to print thousands of exact copies of a manuscript than to alter one by hand. Copy makers could profit more than creators. This imbalance led to the technology of copyright, which established a new order. Copyright bestowed upon the creator of a work a temporary monopoly — for 14 years, in the United States — over any copies of the work. The idea was to encourage authors and artists to create yet more works that could be cheaply copied and thus fill the culture with public works.
Not coincidentally, public libraries first began to flourish with the advent of cheap copies. Before the industrial age, libraries were primarily the property of the wealthy elite. With mass production, every small town could afford to put duplicates of the greatest works of humanity on wooden shelves in the village square. Mass access to public-library books inspired scholarship, reviewing and education, activities exempted in part from the monopoly of copyright in the United States because they moved creative works toward the public commons sooner, weaving them into the fabric of common culture while still remaining under the author's copyright. These are now known as "fair uses."
This wonderful balance was undone by good intentions, beginning with a new copyright law passed by Congress in 1976. According to the new law, creators no longer had to register or renew copyright; the simple act of creating something bestowed it with instant and automatic rights. By default, each new work was born under private ownership rather than in the public commons. At first, this reversal seemed to serve the culture of creation well. All works that could be copied gained instant and deep ownership, and artists and authors were happy. But the 1976 law, and various revisions and extensions that followed it, made it extremely difficult to move a work into the public commons, where human creations naturally belong and were originally intended to reside. As more intellectual property became owned by corporations rather than by individuals, those corporations successfully lobbied Congress to keep extending the once-brief protection enabled by copyright in order to prevent works from returning to the public domain. With constant nudging, Congress moved the expiration date from 14 years to 28 to 42 and then to 56.
While corporations and legislators were moving the goal posts back, technology was accelerating forward. In Internet time, even 14 years is a long time for a monopoly; a monopoly that lasts a human lifetime is essentially an eternity. So when Congress voted in 1998 to extend copyright an additional 70 years beyond the life span of a creator — to a point where it could not possibly serve its original purpose as an incentive to keep that creator working — it was obvious to all that copyright now existed primarily to protect a threatened business model. And because Congress at the same time tacked a 20-year extension onto all existing copyrights, nothing — no published creative works of any type — will fall out of protection and return to the public domain until 2019. Almost everything created today will not return to the commons until the next century. Thus the stream of shared material that anyone can improve (think "A Thousand and One Nights" or "Amazing Grace" or "Beauty and the Beast") will largely dry up.
In the world of books, the indefinite extension of copyright has had a perverse effect. It has created a vast collection of works that have been abandoned by publishers, a continent of books left permanently in the dark. In most cases, the original publisher simply doesn't find it profitable to keep these books in print. In other cases, the publishing company doesn't know whether it even owns the work, since author contracts in the past were not as explicit as they are now. The size of this abandoned library is shocking: about 75 percent of all books in the world's libraries are orphaned. Only about 15 percent of all books are in the public domain. A luckier 10 percent are still in print. The rest, the bulk of our universal library, is dark.
5. The Moral Imperative to Scan
The 15 percent of the world's 32 million cataloged books that are in the public domain are freely available for anyone to borrow, imitate, publish or copy wholesale. Almost the entire current scanning effort by American libraries is aimed at this 15 percent. The Million Book Project mines this small sliver of the pie, as does Google. Because they are in the commons, no law hinders this 15 percent from being scanned and added to the universal library.
The approximately 10 percent of all books actively in print will also be scanned before long. Amazon carries at least four million books, which includes multiple editions of the same title. Amazon is slowly scanning all of them. Recently, several big American publishers have declared themselves eager to move their entire backlist of books into the digital sphere. Many of them are working with Google in a partnership program in which Google scans their books, offers sample pages (controlled by the publisher) to readers and points readers to where they can buy the actual book. No one doubts electronic books will make money eventually. Simple commercial incentives guarantee that all in-print and backlisted books will before long be scanned into the great library. That's not the problem.
The major problem for large publishers is that they are not certain what they actually own. If you would like to amuse yourself, pick an out-of-print book from the library and try to determine who owns its copyright. It's not easy. There is no list of copyrighted works. The Library of Congress does not have a catalog. The publishers don't have an exhaustive list, not even of their own imprints (though they say they are working on it). The older, the more obscure the work, the less likely a publisher will be able to tell you (that is, if the publisher still exists) whether the copyright has reverted to the author, whether the author is alive or dead, whether the copyright has been sold to another company, whether the publisher still owns the copyright or whether it plans to resurrect or scan it. Plan on having a lot of spare time and patience if you inquire. I recently spent two years trying to track down the copyright to a book, a search that led me to Random House. Does the company own it? Can I reproduce it? Three years later, the company is still working on its answer. The prospect of tracking down the copyright — with any certainty — of the roughly 25 million orphaned books is simply ludicrous.
Which leaves 75 percent of the known texts of humans in the dark. The legal limbo surrounding their status as copies prevents them from being digitized. No one argues that these are all masterpieces, but there is history and context enough in their pages to not let them disappear. And if they are not scanned, they in effect will disappear. But with copyright hyperextended beyond reason (the Supreme Court in 2003 declared the law dumb but not unconstitutional), none of this dark library will return to the public domain (and be cleared for scanning) until at least 2019. With no commercial incentive to entice uncertain publishers to pay for scanning these orphan works, they will vanish from view. According to Peter Brantley, director of technology for the California Digital Library, "We have a moral imperative to reach out to our library shelves, grab the material that is orphaned and set it on top of scanners."
No one was able to unravel the Gordian knot of copydom until 2004, when Google came up with a clever solution. In addition to scanning the 15 percent out-of-copyright public-domain books with their library partners and the 10 percent in-print books with their publishing partners, Google executives declared that they would also scan the 75 percent out-of-print books that no one else would touch. They would scan the entire book, without resolving its legal status, which would allow the full text to be indexed on Google's internal computers and searched by anyone. But the company would show to readers only a few selected sentence-long snippets from the book at a time. Google's lawyers argued that the snippets the company was proposing were something like a quote or an excerpt in a review and thus should qualify as a "fair use."
Google's plan was to scan the full text of every book in five major libraries: the more than 10 million titles held by Stanford, Harvard, Oxford, the University of Michigan and the New York Public Library. Every book would be indexed, but each would show up in search results in different ways. For out-of-copyright books, Google would show the whole book, page by page. For the in-print books, Google would work with publishers and let them decide what parts of their books would be shown and under what conditions. For the dark orphans, Google would show only limited snippets. And any copyright holder (author or corporation) who could establish ownership of a supposed orphan could ask Google to remove the snippets for any reason.
At first glance, it seemed genius. By scanning all books (something only Google had the cash to do), the company would advance its mission to organize all knowledge. It would let books be searchable, and it could potentially sell ads on those searches, although it does not do that currently. In the same stroke, Google would rescue the lost and forgotten 75 percent of the library. For many authors, this all-out campaign was a salvation. Google became a discovery tool, if not a marketing program. While a few best-selling authors fear piracy, every author fears obscurity. Enabling their works to be found in the same universal search box as everything else in the world was good news for authors and good news for an industry that needed some. For authors with books in the publisher program and for authors of books abandoned by a publisher, Google unleashed a chance that more people would at least read, and perhaps buy, the creation they had sweated for years to complete.
6. The Case Against Google
Some authors and many publishers found more evil than genius in Google's plan. Two points outraged them: the virtual copy of the book that sat on Google's indexing server and Google's assumption that it could scan first and ask questions later. On both counts the authors and publishers accused Google of blatant copyright infringement. When negotiations failed last fall, the Authors Guild and five big publishing companies sued Google. Their argument was simple: Why shouldn't Google share its ad revenue (if any) with the copyright owners? And why shouldn't Google have to ask permission from the legal copyright holder before scanning the work in any case? (I have divided loyalties in the case. The current publisher of my books is suing Google to protect my earnings as an author. At the same time, I earn income from Google AdSense ads placed on my blog.)
One mark of the complexity of this issue is that the publishers suing were, and still are, committed partners in the Google Book Search Partner Program. They still want Google to index and search their in-print books, even when they are scanning the books themselves, because, they say, search is a discovery tool for readers. The ability to search the scans of all books is good for profits.
The argument about sharing revenue is not about the three or four million books that publishers care about and keep in print, because Google is sharing revenues for those books with publishers. (Google says publishers receive the "majority share" of the income from the small ads placed on partner-program pages.) The argument is about the 75 percent of books that have been abandoned by publishers as uneconomical. One curious fact, of course, is that publishers only care about these orphans now because Google has shifted the economic equation; because of Book Search, these dark books may now have some sparks in them, and the publishers don't want this potential revenue stream to slip away from them. They are now busy digging deep into their records to see what part of the darkness they can declare as their own.
The second complaint against Google is more complex. Google argues that it is nearly impossible to track down copyright holders of orphan works, and so, it says, it must scan those books first and only afterward honor any legitimate requests to remove the scan. In this way, Google follows the protocol of the Internet. Google scans all Web pages; if it's on the Web, it's scanned. Web pages, by default, are born copyrighted. Google, therefore, regularly copies billions of copyrighted pages into its index for the public to search. But if you don't want Google to search your Web site, you can stick some code on your home page with a no-searching sign, and Google and every other search engine will stay out. A Web master thus can opt out of search. (Few do.) Google applies the same principle of opting-out to Book Search. It is up to you as an author to notify Google if you don't want the company to scan or search your copyrighted material. This might be a reasonable approach for Google to demand from an author or publisher if Google were the only search company around. But search technology is becoming a commodity, and if it turns out there is any money in it, it is not impossible to imagine a hundred mavericks scanning out-of-print books. Should you as a creator be obliged to find and notify each and every geek who scanned your work, if for some reason you did not want it indexed? What if you miss one?
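The "no-searching sign" described above is, on today's Web, the robots-exclusion convention: a short robots.txt file that compliant crawlers consult before indexing a site. A minimal sketch using Python's standard library (the site URL is hypothetical; the two-line policy bars every crawler from every page):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that opts an entire site out of all crawlers.
ROBOTS_TXT = """\
User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Any compliant crawler, Google's included, must now skip every page.
print(rp.can_fetch("Googlebot", "http://example.com/any-page.html"))  # False
```

Changing `Disallow: /` to `Disallow:` (nothing) would opt the site back in, which is why Kelly can note that few Web masters bother to opt out.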
There is a technical solution to this problem: for the search companies to compile and maintain a common list of no-scan copyright holders. A publisher or author who doesn't want a work scanned notifies the keepers of the common list once, and anyone conducting scanning would have to remove material that was listed. Since Google, like all the other big search companies — Microsoft, Amazon and Yahoo — is foremost a technical-solution company, it favors this approach. But the battle never got that far.
7. When Business Models Collide
In thinking about the arguments around search, I realized that there are many ways to conceive of this conflict. At first, I thought that this was a misunderstanding between people of the book, who favor solutions by laws, and people of the screen, who favor technology as a solution to all problems. Last November, the New York Public Library (one of the "Google Five") sponsored a debate between representatives of authors and publishers and supporters of Google. I was tickled to see that up on the stage, the defenders of the book were from the East Coast and the defenders of the screen were from the West Coast. But while it's true that there's a strand of cultural conflict here, I eventually settled on a different framework, one that I found more useful. This is a clash of business models.
Authors and publishers (including publishers of music and film) have relied for years on cheap mass-produced copies protected from counterfeits and pirates by a strong law based on the dominance of copies and on a public educated to respect the sanctity of a copy. This model has, in the last century or so, produced the greatest flowering of human achievement the world has ever seen, a magnificent golden age of creative works. Protected physical copies have enabled millions of people to earn a living directly from the sale of their art to the audience, without the weird dynamics of patronage. Not only did authors and artists benefit from this model, but the audience did, too. For the first time, billions of ordinary people were able to come in regular contact with a great work. In Mozart's day, few people ever heard one of his symphonies more than once. With the advent of cheap audio recordings, a barber in Java could listen to them all day long.
But a new regime of digital technology has now disrupted all business models based on mass-produced copies, including individual livelihoods of artists. The contours of the electronic economy are still emerging, but while they do, the wealth derived from the old business model is being spent to try to protect that old model, through legislation and enforcement. Laws based on the mass-produced copy artifact are being taken to the extreme, while desperate measures to outlaw new technologies in the marketplace "for our protection" are introduced in misguided righteousness. (This is to be expected. The fact is, entire industries and the fortunes of those working in them are threatened with demise. Newspapers and magazines, Hollywood, record labels, broadcasters and many hard-working and wonderful creative people in those fields have to change the model of how they earn money. Not all will make it.)
The new model, of course, is based on the intangible assets of digital bits, where copies are no longer cheap but free. They freely flow everywhere. As computers retrieve images from the Web or display texts from a server, they make temporary internal copies of those works. In fact, every action you take on the Net or invoke on your computer requires a copy of something to be made. This peculiar superconductivity of copies spills out of the guts of computers into the culture of computers. Many methods have been employed to try to stop the indiscriminate spread of copies, including copy-protection schemes, hardware-crippling devices, education programs, even legislation, but all have proved ineffectual. The remedies are rejected by consumers and ignored by pirates.
As copies have been dethroned, the economic model built on them is collapsing. In a regime of superabundant free copies, copies lose value. They are no longer the basis of wealth. Now relationships, links, connection and sharing are. Value has shifted away from a copy toward the many ways to recall, annotate, personalize, edit, authenticate, display, mark, transfer and engage a work. Authors and artists can make (and have made) their livings selling aspects of their works other than inexpensive copies of them. They can sell performances, access to the creator, personalization, add-on information, the scarcity of attention (via ads), sponsorship, periodic subscriptions — in short, all the many values that cannot be copied. The cheap copy becomes the "discovery tool" that markets these other intangible valuables. But selling things-that-cannot-be-copied is far from ideal for many creative people. The new model is rife with problems (or opportunities). For one thing, the laws governing creating and rewarding creators still revolve around the now-fragile model of valuable copies.
8. Search Changes Everything
The search-engine companies, including Google, operate in the new regime. Search is a wholly new concept, not foreseen in version 1.0 of our intellectual-property law. In the words of a recent ruling by the United States District Court for the District of Nevada, search has a "transformative purpose," adding new social value to what it searches. What search uncovers is not just keywords but also the inherent value of connection. While almost every artist recognizes that the value of a creation ultimately rests in the value he or she personally gets from creating it (and for a few artists that value is sufficient), it is also true that the value of any work is increased the more it is shared. The technology of search maximizes the value of a creative work by allowing a billion new connections into it, connections that were previously inconceivable. Things can be found by search only if they radiate potential connections. These potential relationships can be as simple as a title or as deep as hyperlinked footnotes that lead to active pages, which are also footnoted. It may be as straightforward as a song published intact or as complex as access to the individual instrument tracks — or even individual notes.
Search opens up creations. It promotes the civic nature of publishing. Having searchable works is good for culture. It is so good, in fact, that we can now state a new covenant: Copyrights must be counterbalanced by copyduties. In exchange for public protection of a work's copies (what we call copyright), a creator has an obligation to allow that work to be searched. No search, no copyright. As a song, movie, novel or poem is searched, the potential connections it radiates seep into society in a much deeper way than the simple publication of a duplicated copy ever could.
We see this effect most clearly in science. Science is on a long-term campaign to bring all knowledge in the world into one vast, interconnected, footnoted, peer-reviewed web of facts. Independent facts, even those that make sense in their own world, are of little value to science. (The pseudo- and parasciences are nothing more, in fact, than small pools of knowledge that are not connected to the large network of science.) In this way, every new observation or bit of data brought into the web of science enhances the value of all other data points. In science, there is a natural duty to make what is known searchable. No one argues that scientists should be paid when someone finds or duplicates their results. Instead, we have devised other ways to compensate them for their vital work. They are rewarded for the degree that their work is cited, shared, linked and connected in their publications, which they do not own. They are financed with extremely short-term (20-year) patent monopolies for their ideas, short enough to truly inspire them to invent more, sooner. To a large degree, they make their living by giving away copies of their intellectual property in one fashion or another.
The legal clash between the book copy and the searchable Web promises to be a long one. Jane Friedman, the C.E.O. of HarperCollins, which is supporting the suit against Google (while remaining a publishing partner), declared, "I don't expect this suit to be resolved in my lifetime." She's right. The courts may haggle forever as this complex issue works its way to the top. In the end, it won't matter; technology will resolve this discontinuity first. The Chinese scanning factories, which operate under their own, looser intellectual-property assumptions, will keep churning out digital books. And as scanning technology becomes faster, better and cheaper, fans may do what they did to music and simply digitize their own libraries.
What is the technology telling us? That copies don't count any more. Copies of isolated books, bound between inert covers, soon won't mean much. Copies of their texts, however, will gain in meaning as they multiply by the millions and are flung around the world, indexed and copied again. What counts are the ways in which these common copies of a creative work can be linked, manipulated, annotated, tagged, highlighted, bookmarked, translated, enlivened by other media and sewn together into the universal library. Soon a book outside the library will be like a Web page outside the Web, gasping for air. Indeed, the only way for books to retain their waning authority in our culture is to wire their texts into the universal library.
But the reign of livelihoods based on the copy is not over. In the next few years, lobbyists for book publishers, movie studios and record companies will exert every effort to mandate the extinction of the "indiscriminate flow of copies," even if it means outlawing better hardware. Too many creative people depend on the business model revolving around copies for it to pass quietly. For their benefit, copyright law will not change suddenly.
But it will adapt eventually. The reign of the copy is no match for the bias of technology. All new works will be born digital, and they will flow into the universal library as you might add more words to a long story. The great continent of orphan works, the 25 million older books born analog and caught between the law and users, will be scanned. Whether this vast mountain of dark books is scanned by Google, the Library of Congress, the Chinese or by readers themselves, it will be scanned well before its legal status is resolved simply because technology makes it so easy to do and so valuable when done. In the clash between the conventions of the book and the protocols of the screen, the screen will prevail. On this screen, now visible to one billion people on earth, the technology of search will transform isolated books into the universal library of all human knowledge.
-------
Also see:
http://en.wikipedia.org/wiki/Copyright
http://www.centerforsocialmedia.org/resources/fair_use/
http://en.wikipedia.org/wiki/Free_Culture
http://www.lessig.org/content/articles/
http://en.wikipedia.org/wiki/Free_Culture_Movement
Kevin Kelly is the "senior maverick" at Wired magazine and author of "Out of Control: The New Biology of Machines, Social Systems and the Economic World" and other books. He last wrote for the magazine about digital music.
Scan This Book!
By KEVIN KELLY
In several dozen nondescript office buildings around the world, thousands of hourly workers bend over table-top scanners and haul dusty books into high-tech scanning booths. They are assembling the universal library page by page.
The dream is an old one: to have in one place all knowledge, past and present. All books, all documents, all conceptual works, in all languages. It is a familiar hope, in part because long ago we briefly built such a library. The great library at Alexandria, constructed around 300 B.C., was designed to hold all the scrolls circulating in the known world. At one time or another, the library held about half a million scrolls, estimated to have been between 30 and 70 percent of all books in existence then. But even before this great library was lost, the moment when all knowledge could be housed in a single building had passed. Since then, the constant expansion of information has overwhelmed our capacity to contain it. For 2,000 years, the universal library, together with other perennial longings like invisibility cloaks, antigravity shoes and paperless offices, has been a mythical dream that kept receding further into the infinite future.
Until now. When Google announced in December 2004 that it would digitally scan the books of five major research libraries to make their contents searchable, the promise of a universal library was resurrected. Indeed, the explosive rise of the Web, going from nothing to everything in one decade, has encouraged us to believe in the impossible again. Might the long-heralded great library of all knowledge really be within our grasp?
Brewster Kahle, an archivist overseeing another scanning project, says that the universal library is now within reach. "This is our chance to one-up the Greeks!" he shouts. "It is really possible with the technology of today, not tomorrow. We can provide all the works of humankind to all the people of the world. It will be an achievement remembered for all time, like putting a man on the moon." And unlike the libraries of old, which were restricted to the elite, this library would be truly democratic, offering every book to every person.
But the technology that will bring us a planetary source of all written material will also, in the same gesture, transform the nature of what we now call the book and the libraries that hold them. The universal library and its "books" will be unlike any library or books we have known. Pushing us rapidly toward that Eden of everything, and away from the paradigm of the physical paper tome, is the hot technology of the search engine.
1. Scanning the Library of Libraries
Scanning technology has been around for decades, but digitized books didn't make much sense until recently, when search engines like Google, Yahoo, Ask and MSN came along. When millions of books have been scanned and their texts are made available in a single database, search technology will enable us to grab and read any book ever written. Ideally, in such a complete library we should also be able to read any article ever written in any newspaper, magazine or journal. And why stop there? The universal library should include a copy of every painting, photograph, film and piece of music produced by all artists, present and past. Still more, it should include all radio and television broadcasts. Commercials too. And how can we forget the Web? The grand library naturally needs a copy of the billions of dead Web pages no longer online and the tens of millions of blog posts now gone — the ephemeral literature of our time. In short, the entire works of humankind, from the beginning of recorded history, in all languages, available to all people, all the time.
This is a very big library. But because of digital technology, you'll be able to reach inside it from almost any device that sports a screen. From the days of Sumerian clay tablets till now, humans have "published" at least 32 million books, 750 million articles and essays, 25 million songs, 500 million images, 500,000 movies, 3 million videos, TV shows and short films and 100 billion public Web pages. All this material is currently contained in all the libraries and archives of the world. When fully digitized, the whole lot could be compressed (at current technological rates) onto 50 petabytes of hard disks. Today you need a building about the size of a small-town library to house 50 petabytes. With tomorrow's technology, it will all fit onto your iPod. When that happens, the library of all libraries will ride in your purse or wallet — if it doesn't plug directly into your brain with thin white cords. Some people alive today are surely hoping that they die before such things happen, and others, mostly the young, want to know what's taking so long. (Could we get it up and running by next week? They have a history project due.)
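The 50-petabyte figure can be sanity-checked with back-of-envelope arithmetic. The item counts below come from the paragraph above; the average sizes are assumptions chosen only to show the order of magnitude, not measured values:

```python
# Back-of-envelope check of the "50 petabytes" estimate. Item counts are
# from the text; per-item sizes are ASSUMED averages for illustration.
MB, GB = 1e6, 1e9

holdings = {
    "books (32M, ~50 MB scanned)":  32e6  * 50 * MB,
    "articles (750M, ~1 MB)":       750e6 * 1 * MB,
    "songs (25M, ~5 MB)":           25e6  * 5 * MB,
    "images (500M, ~2 MB)":         500e6 * 2 * MB,
    "movies (500K, ~4 GB)":         5e5   * 4 * GB,
    "videos/TV (3M, ~1 GB)":        3e6   * 1 * GB,
    "web pages (100B, ~400 KB)":    100e9 * 0.4 * MB,
}

total_pb = sum(holdings.values()) / 1e15
print(f"{total_pb:.0f} petabytes")  # prints "48 petabytes"
```

With these (debatable) averages, the total lands within shouting distance of the article's 50 petabytes; the Web pages dominate the sum.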
Technology accelerates the migration of all we know into the universal form of digital bits. Nikon will soon quit making film cameras for consumers, and Minolta already has: better think digital photos from now on. Nearly 100 percent of all contemporary recorded music has already been digitized, much of it by fans. About one-tenth of the 500,000 or so movies listed on the Internet Movie Database are now digitized on DVD. But because of copyright issues and the physical fact of the need to turn pages, the digitization of books has proceeded at a relative crawl. At most, one book in 20 has moved from analog to digital. So far, the universal library is a library without many books.
But that is changing very fast. Corporations and libraries around the world are now scanning about a million books per year. Amazon has digitized several hundred thousand contemporary books. In the heart of Silicon Valley, Stanford University (one of the five libraries collaborating with Google) is scanning its eight-million-book collection using a state-of-the-art robot from the Swiss company 4DigitalBooks. This machine, the size of a small S.U.V., automatically turns the pages of each book as it scans it, at the rate of 1,000 pages per hour. A human operator places a book in a flat carriage, and then pneumatic robot fingers flip the pages — delicately enough to handle rare volumes — under the scanning eyes of digital cameras.
Like many other functions in our global economy, however, the real work has been happening far away, while we sleep. We are outsourcing the scanning of the universal library. Superstar, an entrepreneurial company based in Beijing, has scanned every book from 900 university libraries in China. It has already digitized 1.3 million unique titles in Chinese, which it estimates is about half of all the books published in the Chinese language since 1949. It costs $30 to scan a book at Stanford but only $10 in China.
Raj Reddy, a professor at Carnegie Mellon University, decided to move a fair-size English-language library to where the cheap subsidized scanners were. In 2004, he borrowed 30,000 volumes from the storage rooms of the Carnegie Mellon library and the Carnegie Library and packed them off to China in a single shipping container to be scanned by an assembly line of workers paid by the Chinese. His project, which he calls the Million Book Project, is churning out 100,000 pages per day at 20 scanning stations in India and China. Reddy hopes to reach a million digitized books in two years.
The idea is to seed the bookless developing world with easily available texts. Superstar sells copies of books it scans back to the same university libraries it scans from. A university can expand a typical 60,000-volume library into a 1.3 million-volume one overnight. At about 50 cents per digital book acquired, it's a cheap way for a library to increase its collection. Bill McCoy, the general manager of Adobe's e-publishing business, says: "Some of us have thousands of books at home, can walk to wonderful big-box bookstores and well-stocked libraries and can get Amazon.com to deliver next day. The most dramatic effect of digital libraries will be not on us, the well-booked, but on the billions of people worldwide who are underserved by ordinary paper books." It is these underbooked — students in Mali, scientists in Kazakhstan, elderly people in Peru — whose lives will be transformed when even the simplest unadorned version of the universal library is placed in their hands.
2. What Happens When Books Connect
The least important, but most discussed, aspects of digital reading have been these contentious questions: Will we give up the highly evolved technology of ink on paper and instead read on cumbersome machines? Or will we keep reading our paperbacks on the beach? For now, the answer is yes to both. Yes, publishers have lost millions of dollars on the long-prophesied e-book revolution that never occurred, while the number of physical books sold in the world each year continues to grow. At the same time, there are already more than half a billion PDF documents on the Web that people happily read on computers without printing them out, and still more people now spend hours watching movies on microscopic cellphone screens. The arsenal of our current display technology — from handheld gizmos to large flat screens — is already good enough to move books to their next stage of evolution: a full digital scan.
Yet the common vision of the library's future (even the e-book future) assumes that books will remain isolated items, independent from one another, just as they are on shelves in your public library. There, each book is pretty much unaware of the ones next to it. When an author completes a work, it is fixed and finished. Its only movement comes when a reader picks it up to animate it with his or her imagination. In this vision, the main advantage of the coming digital library is portability — the nifty translation of a book's full text into bits, which permits it to be read on a screen anywhere. But this vision misses the chief revolution birthed by scanning books: in the universal library, no book will be an island.
Turning inked letters into electronic dots that can be read on a screen is simply the first essential step in creating this new library. The real magic will come in the second act, as each word in each book is cross-linked, clustered, cited, extracted, indexed, analyzed, annotated, remixed, reassembled and woven deeper into the culture than ever before. In the new world of books, every bit informs another; every page reads all the other pages.
In recent years, hundreds of thousands of enthusiastic amateurs have written and cross-referenced an entire online encyclopedia called Wikipedia. Buoyed by this success, many nerds believe that a billion readers can reliably weave together the pages of old books, one hyperlink at a time. Those with a passion for a special subject, obscure author or favorite book will, over time, link up its important parts. Multiply that simple generous act by millions of readers, and the universal library can be integrated in full, by fans for fans.
In addition to a link, which explicitly connects one word or sentence or book to another, readers will also be able to add tags, a recent innovation on the Web but already a popular one. A tag is a public annotation, like a keyword or category name, that is hung on a file, page, picture or song, enabling anyone to search for that file. For instance, on the photo-sharing site Flickr, hundreds of viewers will "tag" a photo submitted by another user with their own simple classifications of what they think the picture is about: "goat," "Paris," "goofy," "beach party." Because tags are user-generated, when they move to the realm of books, they will be assigned faster, range wider and serve better than out-of-date schemes like the Dewey Decimal System, particularly in frontier or fringe areas like nanotechnology or body modification.
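The mechanics of user-generated tagging are easy to sketch: anyone may attach a label to an item, and a shared index then lets anyone search by that label. A minimal illustration (the item names are hypothetical; the tags borrow the Flickr examples above):

```python
from collections import defaultdict

# Minimal sketch of user-generated tagging: any viewer attaches a
# public label to an item, and the shared index makes it searchable.
tag_index = defaultdict(set)

def tag(item, *labels):
    """Attach one or more free-form tags to an item."""
    for label in labels:
        tag_index[label].add(item)

tag("photo-123", "goat", "beach party")
tag("photo-456", "Paris", "goat")

print(sorted(tag_index["goat"]))  # ['photo-123', 'photo-456']
```

Nothing here requires a librarian or a fixed taxonomy, which is exactly why tags outrun schemes like the Dewey Decimal System in frontier subjects.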
The link and the tag may be two of the most important inventions of the last 50 years. They get their initial wave of power when we first code them into bits of text, but their real transformative energies fire up as ordinary users click on them in the course of everyday Web surfing, unaware that each humdrum click "votes" on a link, elevating its rank of relevance. You may think you are just browsing, casually inspecting this paragraph or that page, but in fact you are anonymously marking up the Web with bread crumbs of attention. These bits of interest are gathered and analyzed by search engines in order to strengthen the relationship between the end points of every link and the connections suggested by each tag. This is a type of intelligence common on the Web, but previously foreign to the world of books.
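The idea that each humdrum click "votes" on a link reduces to a toy tally: count anonymous clicks per link, then rank links by accumulated attention. (Real search engines blend many stronger signals; this sketch, with invented link names, shows only the voting mechanism.)

```python
from collections import Counter

# Toy sketch of clicks as votes: every click on a link raises that
# link's rank of relevance; no individual reader is identified.
clicks = Counter()

def click(link):
    clicks[link] += 1  # one humdrum click = one vote

for link in ["a.com", "b.com", "a.com", "c.com", "a.com", "b.com"]:
    click(link)

ranked = [link for link, _ in clicks.most_common()]
print(ranked)  # ['a.com', 'b.com', 'c.com']
```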
Once a book has been integrated into the new expanded library by means of this linking, its text will no longer be separate from the text in other books. For instance, today a serious nonfiction book will usually have a bibliography and some kind of footnotes. When books are deeply linked, you'll be able to click on the title in any bibliography or any footnote and find the actual book referred to in the footnote. The books referenced in that book's bibliography will themselves be available, and so you can hop through the library in the same way we hop through Web links, traveling from footnote to footnote to footnote until you reach the bottom of things.
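Hopping from footnote to footnote is, in effect, a traversal of a graph whose nodes are books and whose edges are bibliography entries. A toy sketch with invented titles:

```python
# Toy sketch: books as a graph whose edges are bibliography entries.
# "Hopping footnote to footnote" is a traversal of that graph.
bibliography = {
    "Book A": ["Book B", "Book C"],
    "Book B": ["Book C"],
    "Book C": [],
}

def hop(start):
    """Follow every bibliography link reachable from a starting book."""
    seen, stack = [], [start]
    while stack:
        book = stack.pop()
        if book not in seen:
            seen.append(book)
            stack.extend(bibliography.get(book, []))
    return seen

print(hop("Book A"))  # every book reachable from Book A
```

In the universal library the graph is not three entries deep but millions, which is what lets a reader travel "until you reach the bottom of things."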
Next come the words. Just as a Web article on, say, aquariums, can have some of its words linked to definitions of fish terms, any and all words in a digitized book can be hyperlinked to other parts of other books. Books, including fiction, will become a web of names and a community of ideas.
Search engines are transforming our culture because they harness the power of relationships, which is all links really are. There are about 100 billion Web pages, and each page holds, on average, 10 links. That's a trillion electrified connections coursing through the Web. This tangle of relationships is precisely what gives the Web its immense force. The static world of book knowledge is about to be transformed by the same elevation of relationships, as each page in a book discovers other pages and other books. Once text is digital, books seep out of their bindings and weave themselves together. The collective intelligence of a library allows us to see things we can't see in a single, isolated book.
When books are digitized, reading becomes a community activity. Bookmarks can be shared with fellow readers. Marginalia can be broadcast. Bibliographies swapped. You might get an alert that your friend Carl has annotated a favorite book of yours. A moment later, his links are yours. In a curious way, the universal library becomes one very, very, very large single text: the world's only book.
3. Books: The Liquid Version
At the same time, once digitized, books can be unraveled into single pages or be reduced further, into snippets of a page. These snippets will be remixed into reordered books and virtual bookshelves. Just as the music audience now juggles and reorders songs into new albums (or "playlists," as they are called in iTunes), the universal library will encourage the creation of virtual "bookshelves" — a collection of texts, some as short as a paragraph, others as long as entire books, that form a library shelf's worth of specialized information. And as with music playlists, once created, these "bookshelves" will be published and swapped in the public commons. Indeed, some authors will begin to write books to be read as snippets or to be remixed as pages. The ability to purchase, read and manipulate individual pages or sections is surely what will drive reference books (cookbooks, how-to manuals, travel guides) in the future. You might concoct your own "cookbook shelf" of Cajun recipes compiled from many different sources; it would include Web pages, magazine clippings and entire Cajun cookbooks. Amazon currently offers you a chance to publish your own bookshelves (Amazon calls them "listmanias") as annotated lists of books you want to recommend on a particular esoteric subject. And readers are already using Google Book Search to round up minilibraries on a certain topic — all books about Sweden, for instance, or books on clocks. Once snippets, articles and pages of books become ubiquitous, shuffle-able and transferable, users will earn prestige and perhaps income for curating an excellent collection.
Libraries (as well as many individuals) aren't eager to relinquish ink-on-paper editions, because the printed book is by far the most durable and reliable backup technology we have. Printed books require no mediating device to read and thus are immune to technological obsolescence. Paper is also extremely stable, compared with, say, hard drives or even CD's. In this way, the stability and fixity of a bound book is a blessing. It sits there unchanging, true to its original creation. But it sits alone.
So what happens when all the books in the world become a single liquid fabric of interconnected words and ideas? Four things: First, works on the margins of popularity will find a small audience larger than the near-zero audience they usually have now. Far out in the "long tail" of the distribution curve — that extended place of low-to-no sales where most of the books in the world live — digital interlinking will lift the readership of almost any title, no matter how esoteric. Second, the universal library will deepen our grasp of history, as every original document in the course of civilization is scanned and cross-linked. Third, the universal library of all books will cultivate a new sense of authority. If you can truly incorporate all texts — past and present, multilingual — on a particular subject, then you can have a clearer sense of what we as a civilization, a species, do know and don't know. The white spaces of our collective ignorance are highlighted, while the golden peaks of our knowledge are drawn with completeness. This degree of authority is only rarely achieved in scholarship today, but it will become routine.
Finally, the full, complete universal library of all works becomes more than just a better Ask Jeeves. Search on the Web becomes a new infrastructure for entirely new functions and services. Right now, if you mash up Google Maps and Monster.com, you get maps of where jobs are located by salary. In the same way, it is easy to see that in the great library, everything that has ever been written about, for example, Trafalgar Square in London could be present on that spot via a screen. In the same way, every object, event or location on earth would "know" everything that has ever been written about it in any book, in any language, at any time. From this deep structuring of knowledge comes a new culture of interaction and participation.
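The "mashup" mentioned above is, at bottom, a join between two data sources: listings from one service keyed to coordinates from another. The following is a toy sketch of that idea; all data, names and numbers are invented for illustration and do not come from Monster.com or Google Maps.

```python
# Hypothetical job listings (in the spirit of Monster.com):
jobs = [
    {"title": "Editor", "city": "London", "salary": 40000},
    {"title": "Engineer", "city": "Cambridge", "salary": 55000},
]

# Hypothetical geocoding data (in the spirit of Google Maps):
coordinates = {
    "London": (51.5074, -0.1278),
    "Cambridge": (52.2053, 0.1218),
}

# The mashup: each job record gains a map location, so jobs
# could be plotted on a map and colored by salary.
jobs_on_map = [
    {**job, "latlon": coordinates[job["city"]]}
    for job in jobs
]
```

The same join, applied to the universal library, would attach every text ever written about a place to the place itself.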
The main drawback of this vision is a big one. So far, the universal library lacks books. Despite the best efforts of bloggers and the creators of the Wikipedia, most of the world's expertise still resides in books. And a universal library without the contents of books is no universal library at all.
There are dozens of excellent reasons that books should quickly be made part of the emerging Web. But so far they have not been, at least not in great numbers. And there is only one reason: the hegemony of the copy.
4. The Triumph of the Copy
The desire of all creators is for their works to find their way into all minds. A text, a melody, a picture or a story succeeds best if it is connected to as many ideas and other works as possible. Ideally, over time a work becomes so entangled in a culture that it appears to be inseparable from it, in the way that the Bible, Shakespeare's plays, "Cinderella" and the Mona Lisa are inseparable from ours. This tendency for creative ideas to infiltrate other works is great news for culture. In fact, this commingling of creations is culture.
In preindustrial times, exact copies of a work were rare for a simple reason: it was much easier to make your own version of a creation than to duplicate someone else's exactly. The amount of energy and attention needed to copy a scroll exactly, word for word, or to replicate a painting stroke by stroke exceeded the cost of paraphrasing it in your own style. So most works were altered, and often improved, by the borrower before they were passed on. Fairy tales evolved mythic depth as many different authors worked on them and as they migrated from spoken tales to other media (theater, music, painting). This system worked well for audiences and performers, but the only way for most creators to earn a living from their works was through the support of patrons.
That ancient economics of creation was overturned at the dawn of the industrial age by the technologies of mass production. Suddenly, the cost of duplication was lower than the cost of appropriation. With the advent of the printing press, it was now cheaper to print thousands of exact copies of a manuscript than to alter one by hand. Copy makers could profit more than creators. This imbalance led to the technology of copyright, which established a new order. Copyright bestowed upon the creator of a work a temporary monopoly — for 14 years, in the United States — over any copies of the work. The idea was to encourage authors and artists to create yet more works that could be cheaply copied and thus fill the culture with public works.
Not coincidentally, public libraries first began to flourish with the advent of cheap copies. Before the industrial age, libraries were primarily the property of the wealthy elite. With mass production, every small town could afford to put duplicates of the greatest works of humanity on wooden shelves in the village square. Mass access to public-library books inspired scholarship, reviewing and education, activities exempted in part from the monopoly of copyright in the United States because they moved creative works toward the public commons sooner, weaving them into the fabric of common culture while still remaining under the author's copyright. These are now known as "fair uses."
This wonderful balance was undone by good intentions. The first was a new copyright law passed by Congress in 1976. According to the new law, creators no longer had to register or renew copyright; the simple act of creating something bestowed it with instant and automatic rights. By default, each new work was born under private ownership rather than in the public commons. At first, this reversal seemed to serve the culture of creation well. All works that could be copied gained instant and deep ownership, and artists and authors were happy. But the 1976 law, and various revisions and extensions that followed it, made it extremely difficult to move a work into the public commons, where human creations naturally belong and were originally intended to reside. As more intellectual property became owned by corporations rather than by individuals, those corporations successfully lobbied Congress to keep extending the once-brief protection enabled by copyright in order to prevent works from returning to the public domain. With constant nudging, Congress moved the expiration date from 14 years to 28 to 42 and then to 56.
While corporations and legislators were moving the goal posts back, technology was accelerating forward. In Internet time, even 14 years is a long time for a monopoly; a monopoly that lasts a human lifetime is essentially an eternity. So when Congress voted in 1998 to extend copyright an additional 70 years beyond the life span of a creator — to a point where it could not possibly serve its original purpose as an incentive to keep that creator working — it was obvious to all that copyright now existed primarily to protect a threatened business model. And because Congress at the same time tacked a 20-year extension onto all existing copyrights, nothing — no published creative works of any type — will fall out of protection and return to the public domain until 2019. Almost everything created today will not return to the commons until the next century. Thus the stream of shared material that anyone can improve (think "A Thousand and One Nights" or "Amazing Grace" or "Beauty and the Beast") will largely dry up.
In the world of books, the indefinite extension of copyright has had a perverse effect. It has created a vast collection of works that have been abandoned by publishers, a continent of books left permanently in the dark. In most cases, the original publisher simply doesn't find it profitable to keep these books in print. In other cases, the publishing company doesn't know whether it even owns the work, since author contracts in the past were not as explicit as they are now. The size of this abandoned library is shocking: about 75 percent of all books in the world's libraries are orphaned. Only about 15 percent of all books are in the public domain. A luckier 10 percent are still in print. The rest, the bulk of our universal library, is dark.
5. The Moral Imperative to Scan
The 15 percent of the world's 32 million cataloged books that are in the public domain are freely available for anyone to borrow, imitate, publish or copy wholesale. Almost the entire current scanning effort by American libraries is aimed at this 15 percent. The Million Book Project mines this small sliver of the pie, as does Google. Because they are in the commons, no law hinders this 15 percent from being scanned and added to the universal library.
The approximately 10 percent of all books actively in print will also be scanned before long. Amazon carries at least four million books, a figure that includes multiple editions of the same title. Amazon is slowly scanning all of them. Recently, several big American publishers have declared themselves eager to move their entire backlist of books into the digital sphere. Many of them are working with Google in a partnership program in which Google scans their books, offers sample pages (controlled by the publisher) to readers and points readers to where they can buy the actual book. No one doubts electronic books will make money eventually. Simple commercial incentives guarantee that all in-print and backlisted books will before long be scanned into the great library. That's not the problem.
The major problem for large publishers is that they are not certain what they actually own. If you would like to amuse yourself, pick an out-of-print book from the library and try to determine who owns its copyright. It's not easy. There is no list of copyrighted works. The Library of Congress does not have a catalog. The publishers don't have an exhaustive list, not even of their own imprints (though they say they are working on it). The older, the more obscure the work, the less likely a publisher will be able to tell you (that is, if the publisher still exists) whether the copyright has reverted to the author, whether the author is alive or dead, whether the copyright has been sold to another company, whether the publisher still owns the copyright or whether it plans to resurrect or scan it. Plan on having a lot of spare time and patience if you inquire. I recently spent two years trying to track down the copyright to one book; the search led me to Random House. Does the company own it? Can I reproduce it? Three years later, the company is still working on its answer. The prospect of tracking down the copyright — with any certainty — of the roughly 25 million orphaned books is simply ludicrous.
Which leaves 75 percent of the known texts of humans in the dark. The legal limbo surrounding their status as copies prevents them from being digitized. No one argues that these are all masterpieces, but there is history and context enough in their pages to not let them disappear. And if they are not scanned, they in effect will disappear. But with copyright hyperextended beyond reason (the Supreme Court in 2003 declared the law dumb but not unconstitutional), none of this dark library will return to the public domain (and be cleared for scanning) until at least 2019. With no commercial incentive to entice uncertain publishers to pay for scanning these orphan works, they will vanish from view. According to Peter Brantley, director of technology for the California Digital Library, "We have a moral imperative to reach out to our library shelves, grab the material that is orphaned and set it on top of scanners."
No one was able to unravel the Gordian knot of copydom until 2004, when Google came up with a clever solution. In addition to scanning the 15 percent out-of-copyright public-domain books with their library partners and the 10 percent in-print books with their publishing partners, Google executives declared that they would also scan the 75 percent out-of-print books that no one else would touch. They would scan the entire book, without resolving its legal status, which would allow the full text to be indexed on Google's internal computers and searched by anyone. But the company would show to readers only a few selected sentence-long snippets from the book at a time. Google's lawyers argued that the snippets the company was proposing were something like a quote or an excerpt in a review and thus should qualify as a "fair use."
Google's plan was to scan the full text of every book in five major libraries: the more than 10 million titles held by Stanford, Harvard, Oxford, the University of Michigan and the New York Public Library. Every book would be indexed, but each would show up in search results in different ways. For out-of-copyright books, Google would show the whole book, page by page. For the in-print books, Google would work with publishers and let them decide what parts of their books would be shown and under what conditions. For the dark orphans, Google would show only limited snippets. And any copyright holder (author or corporation) who could establish ownership of a supposed orphan could ask Google to remove the snippets for any reason.
At first glance, it seemed genius. By scanning all books (something only Google had the cash to do), the company would advance its mission to organize all knowledge. It would let books be searchable, and it could potentially sell ads on those searches, although it does not do that currently. In the same stroke, Google would rescue the lost and forgotten 75 percent of the library. For many authors, this all-out campaign was a salvation. Google became a discovery tool, if not a marketing program. While a few best-selling authors fear piracy, every author fears obscurity. Enabling their works to be found in the same universal search box as everything else in the world was good news for authors and good news for an industry that needed some. For authors with books in the publisher program and for authors of books abandoned by a publisher, Google unleashed a chance that more people would at least read, and perhaps buy, the creation they had sweated for years to complete.
6. The Case Against Google
Some authors and many publishers found more evil than genius in Google's plan. Two points outraged them: the virtual copy of the book that sat on Google's indexing server and Google's assumption that it could scan first and ask questions later. On both counts the authors and publishers accused Google of blatant copyright infringement. When negotiations failed last fall, the Authors Guild and five big publishing companies sued Google. Their argument was simple: Why shouldn't Google share its ad revenue (if any) with the copyright owners? And why shouldn't Google have to ask permission from the legal copyright holder before scanning the work in any case? (I have divided loyalties in the case. The current publisher of my books is suing Google to protect my earnings as an author. At the same time, I earn income from Google AdSense ads placed on my blog.)
One mark of the complexity of this issue is that the publishers suing were, and still are, committed partners in the Google Book Search Partner Program. They still want Google to index and search their in-print books, even when they are scanning the books themselves, because, they say, search is a discovery tool for readers. The ability to search the scans of all books is good for profits.
The argument about sharing revenue is not about the three or four million books that publishers care about and keep in print, because Google is sharing revenues for those books with publishers. (Google says publishers receive the "majority share" of the income from the small ads placed on partner-program pages.) The argument is about the 75 percent of books that have been abandoned by publishers as uneconomical. One curious fact, of course, is that publishers only care about these orphans now because Google has shifted the economic equation; because of Book Search, these dark books may now have some sparks in them, and the publishers don't want this potential revenue stream to slip away from them. They are now busy digging deep into their records to see what part of the darkness they can declare as their own.
The second complaint against Google is more complex. Google argues that it is nearly impossible to track down copyright holders of orphan works, and so, it says, it must scan those books first and only afterward honor any legitimate requests to remove the scan. In this way, Google follows the protocol of the Internet. Google scans all Web pages; if it's on the Web, it's scanned. Web pages, by default, are born copyrighted. Google, therefore, regularly copies billions of copyrighted pages into its index for the public to search. But if you don't want Google to search your Web site, you can stick some code on your home page with a no-searching sign, and Google and every other search engine will stay out. A Web master thus can opt out of search. (Few do.) Google applies the same principle of opting-out to Book Search. It is up to you as an author to notify Google if you don't want the company to scan or search your copyrighted material. This might be a reasonable approach for Google to demand from an author or publisher if Google were the only search company around. But search technology is becoming a commodity, and if it turns out there is any money in it, it is not impossible to imagine a hundred mavericks scanning out-of-print books. Should you as a creator be obliged to find and notify each and every geek who scanned your work, if for some reason you did not want it indexed? What if you miss one?
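The "no-searching sign" described above is the Robots Exclusion Protocol: a plain-text robots.txt file at the root of a site that well-behaved crawlers consult before fetching pages. Python's standard library can read such a file; the rules and URLs below are made up for illustration, but the `urllib.robotparser` module is real.

```python
from urllib.robotparser import RobotFileParser

# A site owner opts out of search by publishing rules like these
# at /robots.txt (fed in directly here for illustration):
rules = """
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A well-behaved crawler checks before fetching each page:
print(parser.can_fetch("MyBot", "https://example.com/private/page.html"))  # False
print(parser.can_fetch("MyBot", "https://example.com/public/page.html"))   # True
```

Opting out is thus a one-line change for a Web master, which is exactly why so few bother, and why Google felt entitled to apply the same default-in, opt-out logic to books.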
There is a technical solution to this problem: for the search companies to compile and maintain a common list of no-scan copyright holders. A publisher or author who doesn't want a work scanned notifies the keepers of the common list once, and anyone conducting scanning would have to remove material that was listed. Since Google, like all the other big search companies — Microsoft, Amazon and Yahoo — is foremost a technical-solution company, it favors this approach. But the battle never got that far.
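Such a common no-scan list never got built, but its mechanics are easy to imagine: a single shared registry where a rights holder opts out once, and which every scanner consults. The sketch below is entirely hypothetical; the class, method and publisher names are invented, and no real registry works this way.

```python
class NoScanRegistry:
    """A shared opt-out list, maintained jointly by search companies.
    (Hypothetical: the article's proposed common list was never built.)"""

    def __init__(self):
        self._opted_out = set()

    def opt_out(self, rights_holder: str, title: str) -> None:
        """A publisher or author registers a work once, for all scanners."""
        self._opted_out.add((rights_holder, title))

    def may_scan(self, rights_holder: str, title: str) -> bool:
        """Every scanning operation checks the list before proceeding."""
        return (rights_holder, title) not in self._opted_out

# One notification covers every scanner, present and future:
registry = NoScanRegistry()
registry.opt_out("Acme Press", "A Forgotten Novel")
```

The appeal of the scheme is that the burden on the creator stays constant no matter how many "mavericks" start scanning.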
7. When Business Models Collide
In thinking about the arguments around search, I realized that there are many ways to conceive of this conflict. At first, I thought that this was a misunderstanding between people of the book, who favor solutions by laws, and people of the screen, who favor technology as a solution to all problems. Last November, the New York Public Library (one of the "Google Five") sponsored a debate between representatives of authors and publishers and supporters of Google. I was tickled to see that up on the stage, the defenders of the book were from the East Coast and the defenders of the screen were from the West Coast. But while it's true that there's a strand of cultural conflict here, I eventually settled on a different framework, one that I found more useful. This is a clash of business models.
Authors and publishers (including publishers of music and film) have relied for years on cheap mass-produced copies protected from counterfeits and pirates by a strong law based on the dominance of copies and on a public educated to respect the sanctity of a copy. This model has, in the last century or so, produced the greatest flowering of human achievement the world has ever seen, a magnificent golden age of creative works. Protected physical copies have enabled millions of people to earn a living directly from the sale of their art to the audience, without the weird dynamics of patronage. Not only did authors and artists benefit from this model, but the audience did, too. For the first time, billions of ordinary people were able to come in regular contact with a great work. In Mozart's day, few people ever heard one of his symphonies more than once. With the advent of cheap audio recordings, a barber in Java could listen to them all day long.
But a new regime of digital technology has now disrupted all business models based on mass-produced copies, including individual livelihoods of artists. The contours of the electronic economy are still emerging, but while they do, the wealth derived from the old business model is being spent to try to protect that old model, through legislation and enforcement. Laws based on the mass-produced copy artifact are being taken to the extreme, while desperate measures to outlaw new technologies in the marketplace "for our protection" are introduced in misguided righteousness. (This is to be expected. The fact is, entire industries and the fortunes of those working in them are threatened with demise. Newspapers and magazines, Hollywood, record labels, broadcasters and many hard-working and wonderful creative people in those fields have to change the model of how they earn money. Not all will make it.)
The new model, of course, is based on the intangible assets of digital bits, where copies are no longer cheap but free. They freely flow everywhere. As computers retrieve images from the Web or display texts from a server, they make temporary internal copies of those works. In fact, every action you take on the Net or invoke on your computer requires a copy of something to be made. This peculiar superconductivity of copies spills out of the guts of computers into the culture of computers. Many methods have been employed to try to stop the indiscriminate spread of copies, including copy-protection schemes, hardware-crippling devices, education programs, even legislation, but all have proved ineffectual. The remedies are rejected by consumers and ignored by pirates.
As copies have been dethroned, the economic model built on them is collapsing. In a regime of superabundant free copies, copies lose value. They are no longer the basis of wealth. Now relationships, links, connection and sharing are. Value has shifted away from a copy toward the many ways to recall, annotate, personalize, edit, authenticate, display, mark, transfer and engage a work. Authors and artists can make (and have made) their livings selling aspects of their works other than inexpensive copies of them. They can sell performances, access to the creator, personalization, add-on information, the scarcity of attention (via ads), sponsorship, periodic subscriptions — in short, all the many values that cannot be copied. The cheap copy becomes the "discovery tool" that markets these other intangible valuables. But selling things-that-cannot-be-copied is far from ideal for many creative people. The new model is rife with problems (or opportunities). For one thing, the laws governing creating and rewarding creators still revolve around the now-fragile model of valuable copies.
8. Search Changes Everything
The search-engine companies, including Google, operate in the new regime. Search is a wholly new concept, not foreseen in version 1.0 of our intellectual-property law. In the words of a recent ruling by the United States District Court for the District of Nevada, search has a "transformative purpose," adding new social value to what it searches. What search uncovers is not just keywords but also the inherent value of connection. While almost every artist recognizes that the value of a creation ultimately rests in the value he or she personally gets from creating it (and for a few artists that value is sufficient), it is also true that the value of any work is increased the more it is shared. The technology of search maximizes the value of a creative work by allowing a billion new connections into it, often a billion new connections that were previously inconceivable. Things can be found by search only if they radiate potential connections. These potential relationships can be as simple as a title or as deep as hyperlinked footnotes that lead to active pages, which are also footnoted. It may be as straightforward as a song published intact or as complex as access to the individual instrument tracks — or even individual notes.
Search opens up creations. It promotes the civic nature of publishing. Having searchable works is good for culture. It is so good, in fact, that we can now state a new covenant: Copyrights must be counterbalanced by copyduties. In exchange for public protection of a work's copies (what we call copyright), a creator has an obligation to allow that work to be searched. No search, no copyright. As a song, movie, novel or poem is searched, the potential connections it radiates seep into society in a much deeper way than the simple publication of a duplicated copy ever could.
We see this effect most clearly in science. Science is on a long-term campaign to bring all knowledge in the world into one vast, interconnected, footnoted, peer-reviewed web of facts. Independent facts, even those that make sense in their own world, are of little value to science. (The pseudo- and parasciences are nothing less, in fact, than small pools of knowledge that are not connected to the large network of science.) In this way, every new observation or bit of data brought into the web of science enhances the value of all other data points. In science, there is a natural duty to make what is known searchable. No one argues that scientists should be paid when someone finds or duplicates their results. Instead, we have devised other ways to compensate them for their vital work. They are rewarded for the degree that their work is cited, shared, linked and connected in their publications, which they do not own. They are financed with extremely short-term (20-year) patent monopolies for their ideas, short enough to truly inspire them to invent more, sooner. To a large degree, they make their living by giving away copies of their intellectual property in one fashion or another.
The legal clash between the book copy and the searchable Web promises to be a long one. Jane Friedman, the C.E.O. of HarperCollins, which is supporting the suit against Google (while remaining a publishing partner), declared, "I don't expect this suit to be resolved in my lifetime." She's right. The courts may haggle forever as this complex issue works its way to the top. In the end, it won't matter; technology will resolve this discontinuity first. The Chinese scanning factories, which operate under their own, looser intellectual-property assumptions, will keep churning out digital books. And as scanning technology becomes faster, better and cheaper, fans may do what they did to music and simply digitize their own libraries.
What is the technology telling us? That copies don't count any more. Copies of isolated books, bound between inert covers, soon won't mean much. Copies of their texts, however, will gain in meaning as they multiply by the millions and are flung around the world, indexed and copied again. What counts are the ways in which these common copies of a creative work can be linked, manipulated, annotated, tagged, highlighted, bookmarked, translated, enlivened by other media and sewn together into the universal library. Soon a book outside the library will be like a Web page outside the Web, gasping for air. Indeed, the only way for books to retain their waning authority in our culture is to wire their texts into the universal library.
But the reign of livelihoods based on the copy is not over. In the next few years, lobbyists for book publishers, movie studios and record companies will exert every effort to mandate the extinction of the "indiscriminate flow of copies," even if it means outlawing better hardware. Too many creative people depend on the business model revolving around copies for it to pass quietly. For their benefit, copyright law will not change suddenly.
But it will adapt eventually. The reign of the copy is no match for the bias of technology. All new works will be born digital, and they will flow into the universal library as you might add more words to a long story. The great continent of orphan works, the 25 million older books born analog and caught between the law and users, will be scanned. Whether this vast mountain of dark books is scanned by Google, the Library of Congress, the Chinese or by readers themselves, it will be scanned well before its legal status is resolved simply because technology makes it so easy to do and so valuable when done. In the clash between the conventions of the book and the protocols of the screen, the screen will prevail. On this screen, now visible to one billion people on earth, the technology of search will transform isolated books into the universal library of all human knowledge.
-------
Also see:
http://en.wikipedia.org/wiki/Copyright
http://www.centerforsocialmedia.org/resources/fair_use/
http://en.wikipedia.org/wiki/Free_Culture
http://www.lessig.org/content/articles/
http://en.wikipedia.org/wiki/Free_Culture_Movement
Tuesday, May 16
Wal-Mart Goes Organic: And Now for the Bad News
At the risk of sounding more equivocal than any self-respecting blogger is expected to sound, I’m going to turn my attention from the benefits of Wal-Mart’s decision to enter the organic food market to its costs. You’ll have to decide for yourself whether the advantage of making organic food accessible to more Americans is outweighed by the damage Wal-Mart may do to the practice and meaning of organic food production. The trade-offs are considerable.
When Wal-Mart announced its plan to offer consumers a wide selection of organic foods, the company claimed it would keep the price premium for organic to no more than 10 percent. This in itself is grounds for concern — in my view, it virtually guarantees that Wal-Mart’s version of cheap, industrialized organic food will not be sustainable in any meaningful sense of the word (see my earlier column, “Voting With Your Fork,” for a discussion of that word). Why? Because to index the price of organic to the price of conventional food is to give up, right from the start, on the idea — once enshrined in the organic movement — that food should be priced responsibly. Cheap industrial food, the organic movement has argued, only seems cheap, because the real costs are charged to the environment (in the form of water and air pollution and depletion of the soil); to the public purse (in the form of subsidies to conventional commodity producers); and to the public health (in the cost of diabetes, obesity and cardiovascular disease), not to mention to the welfare of the farm- and food-factory workers and the well-being of the animals. As Wendell Berry once wrote, the motto of our conventional food system — at the center of which stands Wal-Mart, the biggest grocer in America — should be: Cheap at Any Price!
To say you can sell organic food for 10 percent above the price at which you sell irresponsibly priced food suggests you don’t really get it — that you plan to bring the same principles of industrial “efficiency” and “economies of scale” to a system of food production that was supposed to mimic the logic of nature rather than that of the factory.
We have already seen what happens when the logic of industry is applied to organic food production. Synthetic pesticides are simply replaced by approved organic pesticides; synthetic fertilizer is simply replaced by compost and manures and mined forms of nitrogen imported from South America. The result is a greener factory farm, to be sure, but a factory nevertheless.
The industrialization of organic agriculture, which Wal-Mart's entry will hasten, has given us "organic feedlots" — two words that I never thought would find their way into the same clause. To supply the burgeoning demand for cheap organic milk, agribusiness companies are setting up 5,000-head dairies, often in the desert. The milking cows never touch a blade of grass, but instead spend their lives standing around a dry lot "loafing area" munching organic grain — grain that takes a toll on both the animals' health (these ruminants evolved to eat grass, after all) and the nutritional value of their milk. Frequently the milk is then ultra-pasteurized (a high-heat process that further diminishes its nutritional value) before being shipped across the country. This is the sort of milk we're going to see a lot more of in our supermarkets, as long as Wal-Mart honors its commitment to keep organic milk cheap.
We’re also going to see more organic milk coming from places like New Zealand, a trend driven by soaring demand — and also by what seems to me, in an era of energy scarcity, a rather forgiving construction of the idea of sustainability. Making organic food inexpensive means buying it from anywhere it can be produced most cheaply — lengthening rather than shortening the food chain, and deepening its dependence on fossil fuels.
Similarly, organic meat is increasingly coming not from polycultures growing a variety of species (which are able to recycle nutrients between plants and animals) but from ever-bigger organic confined animal feeding operations, or CAFO’s, that, apart from not using antibiotics and feeding organic grain, are little different from their conventional counterparts. Yes, the organic rules say the animals should have “access to the outdoors,” but in practice this means providing them with a tiny exercise yard or, in the case of one egg producer in New England, a screened-in concrete “porch.” This is one of the ironies of practicing organic agriculture on an industrial scale: big, single-species organic CAFO’s are even more precarious than their industrial cousins, since they can’t rely on antibiotics to keep thousands of animals living in close confinement from getting sick. So organic CAFO-hands (to call them farm-hands just doesn’t seem right) keep the free-ranging to a minimum, and then keep their fingers crossed.
The industrial food chain, whether organic or conventional, inevitably links giant supermarkets to giant farms. But this is not because big farms are any more efficient or productive than small farms — to the contrary. Studies have found that small farms produce more food per unit of land than big farms do). And polycultures are more productive than monocultures. So why don’t such farms predominate? Because big supermarkets prefer to do business with big farms growing lots of the same thing. It is more efficient for Wal-Mart — in the economic, not the biological, sense — to contract with a single huge carrot or chicken grower than with 10 small ones: the “transaction costs” are lower, even if the price and the quality is no different. This is just one of the many ways in which the logic of capitalism and the logic of biology on a farm come into conflict. At least in the short term, the logic of business usually prevails.
Wal-Mart’s big-foot entry into the organic market is bad news for small organic farmers, that seems obvious enough. But it may also spell trouble for the big growers they’ll favor. Wal-Mart has a reputation for driving down prices by squeezing its suppliers, especially after the suppliers have invested in expanding production to feed the Wal-Mart maw. Once you’ve boosted your production to supply Wal-Mart, you’re at the company’s mercy when it decides it no longer wants to give you a price that will cover the cost of production, let alone enable you to make a profit. When that happens, the notion of responsibly priced food will be sacrificed to the need to survive, and the pressure to cut corners will become irresistible.
Right now, the federal organic standards provide a bulwark against that pressure. But with the industrialization of organic, the rules are coming under increasing pressure, and (forgive my skepticism) it’s hard to believe that the lobbyists from Wal-Mart are going to play a constructive role in defending those standards from efforts to dilute them. Earlier this year, the Organic Trade Association hired lobbyists from Kraft to move a bill through Congress making it easier to include synthetic ingredients in products labeled organic.
(What are any synthetic ingredients doing in products labeled organic, anyway? A good question, and one that was recently posed in a lawsuit against the U.S. Department of Agriculture by a blueberry farmer in Maine, who argued that the 1990 law establishing the federal organic program had specifically prohibited synthetics in organic food. Within weeks after he won his case, the industry went to Congress to preserve its right to put synthetic ingredients like xanthan gum and ascorbic acid into organic processed foods.)
For better or worse, the legal meaning of the word organic is now in the hands of the government, which means it is subject to all the usual political and economic forces at play in Washington. The drive to keep organic food cheap will bring pressure to further weaken the regulations, and some of K Street’s most skillful and influential lobbyists will soon be on the case. A couple of years ago, a chicken producer in Georgia named Fieldale Farms induced its congressman to slip a helpful provision into an Agriculture Department appropriations bill that would allow organic chicken farmers to substitute conventional chicken feed when the price of organic feed exceeded a certain level. Well, that certainly makes life easier for a chicken producer, especially when the price of organic corn is up around $8 a bushel (compared to less than $2 for conventional feed). But in what sense would a chicken fed on conventional feed still be organic? In no sense except the Orwellian one: because the government says it is. An outcry from consumers and wiser organic producers (who saw their precious label losing credibility) put a halt to Fieldale’s plans, and the legislation was quickly repealed.
The moral of the Fieldale story is that unless consumers and well-meaning producers remain vigilant, the drive to make organic foods nearly as cheap as conventional foods threatens to hollow out the word and kill the gold-egg-laying organic goose. Let’s hope Wal-Mart understands that the marketing power of the word organic — a power that flows directly from consumers’ uneasiness about the conventional food chain — is a little like the health of a chicken living in close confinement with 20,000 other chickens in an organic CAFO, munching organic corn: fragile.
At the risk of sounding more equivocal than any self-respecting blogger is expected to sound, I’m going to turn my attention from the benefits of Wal-Mart’s decision to enter the organic food market to its costs. You’ll have to decide for yourself whether the advantage of making organic food accessible to more Americans is outweighed by the damage Wal-Mart may do to the practice and meaning of organic food production. The trade-offs are considerable.
When Wal-Mart announced its plan to offer consumers a wide selection of organic foods, the company claimed it would keep the price premium for organic to no more than 10 percent. This in itself is grounds for concern — in my view, it virtually guarantees that Wal-Mart’s version of cheap, industrialized organic food will not be sustainable in any meaningful sense of the word (see my earlier column, “Voting With Your Fork,” for a discussion of that word). Why? Because to index the price of organic to the price of conventional food is to give up, right from the start, on the idea — once enshrined in the organic movement — that food should be priced responsibly. Cheap industrial food, the organic movement has argued, only seems cheap, because the real costs are charged to the environment (in the form of water and air pollution and depletion of the soil); to the public purse (in the form of subsidies to conventional commodity producers); and to the public health (in the cost of diabetes, obesity and cardiovascular disease), not to mention to the welfare of the farm- and food-factory workers and the well-being of the animals. As Wendell Berry once wrote, the motto of our conventional food system — at the center of which stands Wal-Mart, the biggest grocer in America — should be: Cheap at Any Price!
To say you can sell organic food for 10 percent above the price at which you sell irresponsibly priced food suggests you don’t really get it — that you plan to bring the same principles of industrial “efficiency” and “economies of scale” to a system of food production that was supposed to mimic the logic of nature rather than that of the factory.
We have already seen what happens when the logic of industry is applied to organic food production. Synthetic pesticides are simply replaced by approved organic pesticides; synthetic fertilizer is simply replaced by compost and manures and mined forms of nitrogen imported from South America. The result is a greener factory farm, to be sure, but a factory nevertheless.
The industrialization of organic agriculture, which Wal-Mart’s entry will hasten, has given us “organic feedlots” — two words that I never thought would find their way into the same clause. To supply the burgeoning demand for cheap organic milk, agribusiness companies are setting up 5,000-head dairies, often in the desert. The milking cows never touch a blade of grass, but instead spend their lives standing around a dry lot “loafing area” munching organic grain — grain that takes a toll on both the animals’ health (these ruminants evolved to eat grass, after all) and the nutritional value of their milk. Frequently the milk is then ultra-pasteurized (a high-heat process that further diminishes its nutritional value) before being shipped across the country. This is the sort of milk we’re going to see a lot more of in our supermarkets, as long as Wal-Mart honors its commitment to keep organic milk cheap.
We’re also going to see more organic milk coming from places like New Zealand, a trend driven by soaring demand — and also by what seems to me, in an era of energy scarcity, a rather forgiving construction of the idea of sustainability. Making organic food inexpensive means buying it from anywhere it can be produced most cheaply — lengthening rather than shortening the food chain, and deepening its dependence on fossil fuels.
Similarly, organic meat is increasingly coming not from polycultures growing a variety of species (which are able to recycle nutrients between plants and animals) but from ever-bigger organic confined animal feeding operations, or CAFO’s, that, apart from not using antibiotics and feeding organic grain, are little different from their conventional counterparts. Yes, the organic rules say the animals should have “access to the outdoors,” but in practice this means providing them with a tiny exercise yard or, in the case of one egg producer in New England, a screened-in concrete “porch.” This is one of the ironies of practicing organic agriculture on an industrial scale: big, single-species organic CAFO’s are even more precarious than their industrial cousins, since they can’t rely on antibiotics to keep thousands of animals living in close confinement from getting sick. So organic CAFO-hands (to call them farm-hands just doesn’t seem right) keep the free-ranging to a minimum, and then keep their fingers crossed.
The industrial food chain, whether organic or conventional, inevitably links giant supermarkets to giant farms. But this is not because big farms are any more efficient or productive than small farms — to the contrary. Studies have found that small farms produce more food per unit of land than big farms do. And polycultures are more productive than monocultures. So why don’t such farms predominate? Because big supermarkets prefer to do business with big farms growing lots of the same thing. It is more efficient for Wal-Mart — in the economic, not the biological, sense — to contract with a single huge carrot or chicken grower than with 10 small ones: the “transaction costs” are lower, even if the price and the quality are no different. This is just one of the many ways in which the logic of capitalism and the logic of biology on a farm come into conflict. At least in the short term, the logic of business usually prevails.
Wal-Mart’s big-foot entry into the organic market is bad news for small organic farmers; that seems obvious enough. But it may also spell trouble for the big growers they’ll favor. Wal-Mart has a reputation for driving down prices by squeezing its suppliers, especially after the suppliers have invested in expanding production to feed the Wal-Mart maw. Once you’ve boosted your production to supply Wal-Mart, you’re at the company’s mercy when it decides it no longer wants to give you a price that will cover the cost of production, let alone enable you to make a profit. When that happens, the notion of responsibly priced food will be sacrificed to the need to survive, and the pressure to cut corners will become irresistible.
Right now, the federal organic standards provide a bulwark against that pressure. But with the industrialization of organic, the rules are coming under increasing pressure, and (forgive my skepticism) it’s hard to believe that the lobbyists from Wal-Mart are going to play a constructive role in defending those standards from efforts to dilute them. Earlier this year, the Organic Trade Association hired lobbyists from Kraft to move a bill through Congress making it easier to include synthetic ingredients in products labeled organic.
(What are any synthetic ingredients doing in products labeled organic, anyway? A good question, and one that was recently posed in a lawsuit against the U.S. Department of Agriculture by a blueberry farmer in Maine, who argued that the 1990 law establishing the federal organic program had specifically prohibited synthetics in organic food. Within weeks after he won his case, the industry went to Congress to preserve its right to put synthetic ingredients like xanthan gum and ascorbic acid into organic processed foods.)
For better or worse, the legal meaning of the word organic is now in the hands of the government, which means it is subject to all the usual political and economic forces at play in Washington. The drive to keep organic food cheap will bring pressure to further weaken the regulations, and some of K Street’s most skillful and influential lobbyists will soon be on the case. A couple of years ago, a chicken producer in Georgia named Fieldale Farms induced its congressman to slip a helpful provision into an Agriculture Department appropriations bill that would allow organic chicken farmers to substitute conventional chicken feed when the price of organic feed exceeded a certain level. Well, that certainly makes life easier for a chicken producer, especially when the price of organic corn is up around $8 a bushel (compared to less than $2 for conventional feed). But in what sense would a chicken fed on conventional feed still be organic? In no sense except the Orwellian one: because the government says it is. An outcry from consumers and wiser organic producers (who saw their precious label losing credibility) put a halt to Fieldale’s plans, and the legislation was quickly repealed.
The moral of the Fieldale story is that unless consumers and well-meaning producers remain vigilant, the drive to make organic foods nearly as cheap as conventional foods threatens to hollow out the word and kill the gold-egg-laying organic goose. Let’s hope Wal-Mart understands that the marketing power of the word organic — a power that flows directly from consumers’ uneasiness about the conventional food chain — is a little like the health of a chicken living in close confinement with 20,000 other chickens in an organic CAFO, munching organic corn: fragile.
Monday, May 15
Will the Real Traitors Please Stand Up?
By FRANK RICH
New York Times
WHEN America panics, it goes hunting for scapegoats. But from Salem onward, we've more often than not ended up pillorying the innocent. Abe Rosenthal, the legendary Times editor who died last week, and his publisher, Arthur Ochs Sulzberger, were denounced as treasonous in 1971 when they defied the Nixon administration to publish the Pentagon Papers, the secret government history of the Vietnam War. Today we know who the real traitors were: the officials who squandered American blood and treasure on an ill-considered war and then tried to cover up their lies and mistakes. It was precisely those lies and mistakes, of course, that were laid bare by the thousands of pages of classified Pentagon documents leaked to both The Times and The Washington Post.
This history is predictably repeating itself now that the public has turned on the war in Iraq. The administration's die-hard defenders are desperate to deflect blame for the fiasco, and, guess what, the traitors once again are The Times and The Post. This time the newspapers committed the crime of exposing warrantless spying on Americans by the National Security Agency (The Times) and the C.I.A.'s secret "black site" Eastern European prisons (The Post). Aping the Nixon template, the current White House tried to stop both papers from publishing and when that failed impugned their patriotism.
President Bush, himself a sometime leaker of intelligence, called the leaking of the N.S.A. surveillance program a "shameful act" that is "helping the enemy." Porter Goss, who was then still C.I.A. director, piled on in February with a Times Op-Ed piece denouncing leakers for potentially risking American lives and compromising national security. When reporters at both papers were awarded Pulitzer Prizes last month, administration surrogates, led by bloviator in chief William Bennett, called for them to be charged under the 1917 Espionage Act.
We can see this charade for what it is: a Hail Mary pass by the leaders who bungled a war and want to change the subject to the journalists who caught them in the act. What really angers the White House and its defenders about both the Post and Times scoops are not the legal questions the stories raise about unregulated gulags and unconstitutional domestic snooping, but the unmasking of yet more administration failures in a war effort riddled with ineptitude. It's the recklessness at the top of our government, not the press's exposure of it, that has truly aided the enemy, put American lives at risk and potentially sabotaged national security. That's where the buck stops, and if there's to be a witch hunt for traitors, that's where it should begin.
Well before Dana Priest of The Post uncovered the secret prisons last November, the C.I.A. had failed to keep its detention "secrets" secret. Having obtained flight logs, The Sunday Times of London first reported in November 2004 that the United States was flying detainees "to countries that routinely use torture." Six months later, The New York Times added many details, noting that "plane-spotting hobbyists, activists and journalists in a dozen countries have tracked the mysterious planes' movements." These articles, capped by Ms. Priest's, do not impede our ability to detain terrorists. But they do show how the administration, by condoning torture, has surrendered the moral high ground to anti-American jihadists and botched the war of ideas that we can't afford to lose.
The N.S.A. eavesdropping exposed in December by James Risen and Eric Lichtblau of The Times is another American debacle. Hoping to suggest otherwise and cast the paper as treasonous, Dick Cheney immediately claimed that the program had saved "thousands of lives." The White House's journalistic mouthpiece, the Wall Street Journal editorial page, wrote that the Times exposé "may have ruined one of our most effective anti-Al Qaeda surveillance programs."
Surely they jest. If this is one of our "most effective" programs, we're in worse trouble than we thought. Our enemy is smart enough to figure out on its own that its phone calls are monitored 24/7, since even under existing law the government can eavesdrop for 72 hours before seeking a warrant (which is almost always granted). As The Times subsequently reported, the N.S.A. program was worse than ineffective; it was counterproductive. Its gusher of data wasted F.B.I. time and manpower on wild-goose chases and minor leads while uncovering no new active Qaeda plots in the United States. Like the N.S.A. database on 200 million American phone customers that was described last week by USA Today, this program may have more to do with monitoring "traitors" like reporters and leakers than with tracking terrorists.
Journalists and whistle-blowers who relay such government blunders are easily defended against the charge of treason. It's often those who make the accusations we should be most worried about. Mr. Goss, a particularly vivid example, should not escape into retirement unexamined. He was so inept that an overzealous witch hunter might mistake him for a Qaeda double agent.
Even before he went to the C.I.A., he was a drag on national security. In "Breakdown," a book about intelligence failures before the 9/11 attacks, the conservative journalist Bill Gertz delineates how Mr. Goss, then chairman of the House Intelligence Committee, played a major role in abdicating Congressional oversight of the C.I.A., trying to cover up its poor performance while terrorists plotted with impunity. After 9/11, his committee's "investigation" of what went wrong was notoriously toothless.
Once he ascended to the C.I.A. in 2004, Mr. Goss behaved like most other Bush appointees: he put politics ahead of the national interest, and stashed cronies and partisan hacks in crucial positions. On Friday, the F.B.I. searched the home and office of one of them, Dusty Foggo, the No. 3 agency official in the Goss regime. Mr. Foggo is being investigated by four federal agencies pursuing the bribery scandal that has already landed former Congressman Randy (Duke) Cunningham in jail. Though Washington is titillated by gossip about prostitutes and Watergate "poker parties" swirling around this Warren Harding-like tale, at least the grafters of Teapot Dome didn't play games with the nation's defense during wartime.
Besides driving out career employees, underperforming on Iran intelligence and scaling back a daily cross-agency meeting on terrorism, Mr. Goss's only other apparent accomplishment at the C.I.A. was his war on those traitorous leakers. Intriguingly, this was a new cause for him. "There's a leak every day in the paper," he told The Sarasota Herald-Tribune when the identity of the officer Valerie Wilson was exposed in 2003. He argued then that there was no point in tracking leaks down because "that's all we'd do."
What prompted Mr. Goss's about-face was revealed in his early memo instructing C.I.A. employees to "support the administration and its policies in our work." His mission was not to protect our country but to prevent the airing of administration dirty laundry, including leaks detailing how the White House ignored accurate C.I.A. intelligence on Iraq before the war. On his watch, C.I.A. lawyers also tried to halt publication of "Jawbreaker," the former clandestine officer Gary Berntsen's account of how the American command let Osama bin Laden escape when Mr. Berntsen's team had him trapped in Tora Bora in December 2001. The one officer fired for alleged leaking during the Goss purge had no access to classified intelligence about secret prisons but was presumably a witness to her boss's management disasters.
Soon to come are the Senate's hearings on Mr. Goss's successor, Gen. Michael Hayden, the former head of the N.S.A. As Jon Stewart reminded us last week, Mr. Bush endorsed his new C.I.A. choice with the same encomium he had bestowed on Mr. Goss: He's "the right man" to lead the C.I.A. "at this critical moment in our nation's history." That's not exactly reassuring.
This being an election year, Karl Rove hopes the hearings can portray Bush opponents as soft on terrorism when they question any national security move. It was this bullying that led so many Democrats to rubber-stamp the Iraq war resolution in the 2002 election season and Mr. Goss's appointment in the autumn of 2004.
Will they fall into the same trap in 2006? Will they be so busy soliloquizing about civil liberties that they'll fail to investigate the nominee's record? It was under General Hayden, a self-styled electronic surveillance whiz, that the N.S.A. intercepted actual Qaeda messages on Sept. 10, 2001 — "Tomorrow is zero hour" for one — and failed to translate them until Sept. 12. That same fateful summer, General Hayden's N.S.A. also failed to recognize that "some of the terrorists had set up shop literally under its nose," as the national-security authority James Bamford wrote in The Washington Post in 2002. The Qaeda cell that hijacked American Flight 77 and plowed into the Pentagon was based in the same town, Laurel, Md., as the N.S.A., and "for months, the terrorists and the N.S.A. employees exercised in some of the same local health clubs and shopped in the same grocery stores."
If Democrats — and, for that matter, Republicans — let a president with a Nixonesque approval rating install yet another second-rate sycophant at yet another security agency, even one as diminished as the C.I.A., someone should charge those senators with treason, too.
Saturday, May 13
Geography Is Destiny
By ROGER COHEN
International Herald Tribune
Published: May 13, 2006
So now we know a little more about an important subject: the minds of young Americans. According to a new survey of 18-to-24-year-olds by National Geographic, 63 percent of them cannot find Iraq or Saudi Arabia on a map, and 88 percent cannot find Afghanistan. But the outside world should not take this personally. The survey, done in conjunction with Roper Public Affairs, found that 50 percent cannot find New York State on a map.
Thursday, May 11
The Phallus today
Do loose chicks sink dicks?
College men offered sex on a plate are reportedly having trouble getting hard.
Do men really need to chase women down to get it up?
By Rebecca Traister, Salon
May. 11, 2006 | There was a story in the Washington Post on Sunday about a problem apparently facing a lot of men on college campuses: They're having a hard time getting hard. This isn't the first time I've heard reports of this in recent years, mostly from young women who assume, as I have assumed, that it's one of the costs of living in a world with antidepressants. Those sexual side effects are no joke. Then, of course, there is the rise in campus binge drinking, which has, since time began, sometimes resulted in a condition popularly known as "beer dick."
It's a really valid and compelling issue. The fact that young guys are having a rough time with erectile dysfunction is well worth investigating and I was happy to see a long reported piece about it in the Post. But imagine my surprise at learning that antidepressants, alcohol and stress aren't the real story here. (They get mentioned several paragraphs into the piece, along with explanations like anxiety, recreational drug use and overconsumption of Red Bull, so as not to rob the piece of its backlash-y punch.) No, according to the Washington Post, the factor that's making boys go limp is ... (drum roll) ... women who want to have sex with them! That's right, folks. Apparently nothing can make a dude lose a stiffie like the feeling that a girl is horny. You following? No, me neither. But here's how the story, by Laura Sessions Stepp, lays it out.
First, she writes about Adam Skrodzki, a senior at the University of Maryland who, along with his fellow on-the-record interviewees, gets a big medal for bravery in the service of ethical journalism for allowing his name to be used in this piece. Anyway, Adam "bench-presses a respectable 280 pounds. He fights fires in Howard County as a volunteer and plans to join the Secret Service in the fall. In short, he's a man's man." And therefore, we are supposed to infer, he's used to getting man's-man-quality boners. (Guys who, say, play first viola in the university orchestra and volunteer at the local ASPCA wouldn't be interesting in a story about hard-ons because they probably don't get them anyway.)
In any case, Adam continued to think of himself as a "man's man" until last fall when he "hooked up with a sophomore -- at her urging." Cue "Psycho" music: Eeek! Eeek! Eeek! See, the sophomore wanted him, he wasn't into her, so she offered to be his "friend with benefits," which is cool because that means sex with no emotional responsibility and he really didn't see anything wrong with that. But then, the first time they tried to do it, he couldn't get it up.
Now, I read Adam's story and I think: Hey, maybe Adam is just a really great guy! The kind of guy who actually wants to have sex with someone he really likes and is attracted to. Clearly he wasn't so interested in this sophomore, and so his body didn't respond to her. I don't see this as a disaster so much as a positive indicator of a healthy attitude about sex. I get that men and women often enjoy sex with people they despise or are indifferent to, but wouldn't the world be a generally more cheerful place if our bodies nudged us toward those we found physically and emotionally alluring to begin with?
But the Washington Post sees it differently. It turns out that Adam is "far from alone." In fact, Stepp continues, "for a sizable number of young men, the fact that they can get sex whenever they want may have created a situation where, in fact, they're unable to have sex." That's right. "According to surveys, young women are now as likely as young men to have sex and by countless reports are also as likely to initiate sex, taking away from males the age-old, erotic power of the chase." Countless reports! Sizable numbers! Call the police! Vague and unquantifiable numbers of women want biggish amounts of sex!
Perhaps (and I realize this is pie-in-the-sky thinking here) the leveling of the sexual marketplace Stepp writes about, in which women and men enjoy and pursue sex with comparable vigor, could be good for both sexes. First, it could deflate some of the frequently unearned but long-held stereotypes about guys who'll have sex with anything that moves, who consider each conquest a notch on their bedpost, who are more turned on by the pursuit than by the physical pleasure of union. Perhaps, if sex with women is something that they didn't have to finagle and tease and chase their way into, if it was just a fun activity that two people who liked each other chose to engage in and that often felt really great, everyone would have a better time.
Bzzzzz! Apparently that answer was incorrect. According to Stepp, we're not looking at the maturation and increasing sophistication of the socio-sexual dynamic here. We're looking at the loss of manhood in its purest form. Guys who can't get woodies for any old girl on the block are a poignant representation of the crumbling power of the erect phallus, which is, after all, as Stepp writes, "in the minds of many males, the sign of authority and dominance, perhaps the last such symbol in a society slogging its way toward gender equality." Wow. Stepp isn't doing the men she's writing about any favors in treating their condition not as a treatable health problem related to stress or their recreational habits, but as an actual loss of their masculinity, the ultimate cost of gender equality.
The Post does go on to chronicle a whole batch of other reasons for why the tools might not work. In addition to the antidepressants and drug use and heavy drinking and anxiety and caffeine consumption, there's also the fact that once it happens once (due to nervousness or a bad mood or beer dick or whatever) the anxiety about it happening again can naturally become a self-fulfilling prophecy.
One of Stepp's subjects, George Washington University sophomore Peter Schneider, had an arousal problem with a girlfriend he'd been sleeping with for several weeks. Turns out, he'd been "smoking cigarettes and marijuana, popping Adderall in order to work through the night to finish his econ papers. He was drinking a lot and not getting any regular exercise." Hey, Scooby, I think we may have gotten to the bottom of that particular mystery! Another kid, G.W. senior James Daley, was with a girl who gave him a hard time the first time he couldn't perform. Understandably, he then experienced a couple of repeat non-performances. Now, he tells Stepp, he's worried that he's just headed downhill, that he's used up all his manly mojo.
It shouldn't surprise anyone that massive consumption of alcohol, cigarettes, drugs and caffeine takes a toll on a human body. And this is a generation of kids that has been pushed to achieve -- through hyper-scheduled play dates and after-school activities and college-prep courses -- to the point where "performance anxiety" is a whole new ballgame. And I'm perfectly willing to believe that a sexual economy where female desire is allowed to match male desire could lead to a changed playing field on which the boys were less motivated by the sense that sex is the equivalent of a touchdown, scored by pushing your way through the opposing team's defense.
But why, when there are all these perfectly reasonable explanations -- explanations that, not for nothing, could turn out to be productive if we reacted to them by educating boys about the effects of recreational substance use, or developing and prescribing pills with fewer sexual side effects, or encouraging guys to get used to a sex life in which they're on equal footing with their partners -- do we have to immediately start in on the ghoulish, desire-sapping, sexless succubus of women's liberation?
Stepp writes, "One can argue that a young woman speaking her mind is a sign of equality" -- um, yes, one can argue that. But human sexuality prof Sawyer, the father of four daughters, says that, "for some guys, it has come at a price. It's turned into ED in men you normally wouldn't think would have ED." Are we straight on that? Women speak their minds; men don't want to have sex with them anymore.
It all falls into the John Tierney school of thought that says that all these overachieving college girls are going to end up single. All the libidinous ones are going to go sexless as well. Why don't we just buckle up our chastity belts and give those boys something to focus on unlocking already? Because lord knows, our eager, aroused bodies are totally harshing their hard-ons!
-- By Rebecca Traister
Wednesday, May 10
A Star Is Made
By STEPHEN J. DUBNER and STEVEN D. LEVITT
The Birth-Month Soccer Anomaly, NYT, May 7
If you were to examine the birth certificates of every soccer player in next month's World Cup tournament, you would most likely find a noteworthy quirk: elite soccer players are more likely to have been born in the earlier months of the year than in the later months. If you then examined the European national youth teams that feed the World Cup and professional ranks, you would find this quirk to be even more pronounced. On recent English teams, for instance, half of the elite teenage soccer players were born in January, February or March, with the other half spread out over the remaining 9 months. In Germany, 52 elite youth players were born in the first three months of the year, with just 4 players born in the last three.
What might account for this anomaly? Here are a few guesses: a) certain astrological signs confer superior soccer skills; b) winter-born babies tend to have higher oxygen capacity, which increases soccer stamina; c) soccer-mad parents are more likely to conceive children in springtime, at the annual peak of soccer mania; d) none of the above.
Anders Ericsson, a 58-year-old psychology professor at Florida State University, says he believes strongly in "none of the above." He is the ringleader of what might be called the Expert Performance Movement, a loose coalition of scholars trying to answer an important and seemingly primordial question: When someone is very good at a given thing, what is it that actually makes him good?
Ericsson, who grew up in Sweden, studied nuclear engineering until he realized he would have more opportunity to conduct his own research if he switched to psychology. His first experiment, nearly 30 years ago, involved memory: training a person to hear and then repeat a random series of numbers. "With the first subject, after about 20 hours of training, his digit span had risen from 7 to 20," Ericsson recalls. "He kept improving, and after about 200 hours of training he had risen to over 80 numbers."
This success, coupled with later research showing that memory itself is not genetically determined, led Ericsson to conclude that the act of memorizing is more of a cognitive exercise than an intuitive one. In other words, whatever innate differences two people may exhibit in their abilities to memorize, those differences are swamped by how well each person "encodes" the information. And the best way to learn how to encode information meaningfully, Ericsson determined, was a process known as deliberate practice.
Deliberate practice entails more than simply repeating a task — playing a C-minor scale 100 times, for instance, or hitting tennis serves until your shoulder pops out of its socket. Rather, it involves setting specific goals, obtaining immediate feedback and concentrating as much on technique as on outcome.
Ericsson and his colleagues have thus taken to studying expert performers in a wide range of pursuits, including soccer, golf, surgery, piano playing, Scrabble, writing, chess, software design, stock picking and darts. They gather all the data they can, not just performance statistics and biographical details but also the results of their own laboratory experiments with high achievers.
Their work, compiled in the "Cambridge Handbook of Expertise and Expert Performance," a 900-page academic book that will be published next month, makes a rather startling assertion: the trait we commonly call talent is highly overrated. Or, put another way, expert performers — whether in memory or surgery, ballet or computer programming — are nearly always made, not born. And yes, practice does make perfect. These may be the sort of clichés that parents are fond of whispering to their children. But these particular clichés just happen to be true.
Ericsson's research suggests a third cliché as well: when it comes to choosing a life path, you should do what you love — because if you don't love it, you are unlikely to work hard enough to get very good. Most people naturally don't like to do things they aren't "good" at. So they often give up, telling themselves they simply don't possess the talent for math or skiing or the violin. But what they really lack is the desire to be good and to undertake the deliberate practice that would make them better.
"I think the most general claim here," Ericsson says of his work, "is that a lot of people believe there are some inherent limits they were born with. But there is surprisingly little hard evidence that anyone could attain any kind of exceptional performance without spending a lot of time perfecting it." This is not to say that all people have equal potential. Michael Jordan, even if he hadn't spent countless hours in the gym, would still have been a better basketball player than most of us. But without those hours in the gym, he would never have become the player he was.
Ericsson's conclusions, if accurate, would seem to have broad applications. Students should be taught to follow their interests earlier in their schooling, the better to build up their skills and acquire meaningful feedback. Senior citizens should be encouraged to acquire new skills, especially those thought to require "talents" they previously believed they didn't possess.
And it would probably pay to rethink a great deal of medical training. Ericsson has noted that most doctors actually perform worse the longer they are out of medical school. Surgeons, however, are an exception. That's because they are constantly exposed to two key elements of deliberate practice: immediate feedback and specific goal-setting.
The same is not true for, say, a mammographer. When a doctor reads a mammogram, she doesn't know for certain if there is breast cancer or not. She will be able to know only weeks later, from a biopsy, or years later, when no cancer develops. Without meaningful feedback, a doctor's ability actually deteriorates over time. Ericsson suggests a new mode of training. "Imagine a situation where a doctor could diagnose mammograms from old cases and immediately get feedback of the correct diagnosis for each case," he says. "Working in such a learning environment, a doctor might see more different cancers in one day than in a couple of years of normal practice."
If nothing else, the insights of Ericsson and his Expert Performance compatriots can explain the riddle of why so many elite soccer players are born early in the year.
Since youth sports are organized by age bracket, teams inevitably have a cutoff birth date. In the European youth soccer leagues, the cutoff date is Dec. 31. So when a coach is assessing two players in the same age bracket, one who happened to have been born in January and the other in December, the player born in January is likely to be bigger, stronger, more mature. Guess which player the coach is more likely to pick? He may be mistaking maturity for ability, but he is making his selection nonetheless. And once chosen, those January-born players are the ones who, year after year, receive the training, the deliberate practice and the feedback — to say nothing of the accompanying self-esteem — that will turn them into elites.
This may be bad news if you are a rabid soccer mom or dad whose child was born in the wrong month. But keep practicing: a child conceived on this Sunday in early May would probably be born by next February, giving you a considerably better chance of watching the 2030 World Cup from the family section.
Stephen J. Dubner and Steven D. Levitt are the authors of "Freakonomics: A Rogue Economist Explores the Hidden Side of Everything." More information on the research behind this column is at www.freakonomics.com.