Thursday, June 30

A Church-State Solution

http://www.nytimes.com/2005/07/03/magazine/03CHURCH.html?pagewanted=print


Some comments:

The "two sides" - one wonders why these two? are these meant to be exclusive, or merely indicative? - are that of "values evangelicalism" and "legal secularism." So right from the beginning, who would not want to be in the first category? People of faith have something called "values," and they want a unifed nation based upon shared values. Sounds great. And on the other hand, what do we have? Secularists (practically anathema in the US) who replace faith in god with faith in the legal process. Now in these days of supreme court bashing, what on earth could be more patently eggregious?!

Listen: "You might call those who hold this view ''legal secularists,'' not because they are necessarily strongly secular in their personal worldviews -- though many are -- but because they argue that government should be secular and that the laws should make it so. To the legal secularists, full citizenship means fully sharing in the legal and political commitments of the nation. If the nation defines itself in terms of values drawn from religion, they worry, then it will inevitably tend to adopt the religious values of the majority, excluding religious minorities and nonreligious people from full citizenship."

Notice what this debate is NOT about. It is not about what a former Republican congressman from Missouri and current episcopalian priest said on NPR today - namely, that a great number, perhaps even the vast majority, of both republicans and christians in this country are decidedly uncomfortable with the loud, aggressive, and divisive rhetoric of the evangelical christians in the current government who have been getting all the media attention. It is NOT about the fact that - contra Justice Scalia, who recently wrote that 98% of Americans are christian, so why not go ahead and enshrine christianity as the state religion - there is a vast majority of moderate christians who, because their views on social topics tend to be less loud and boisterous, even less decided, are being completely left out of the national debate. And it is NOT about the fact that a few angry evangelicals, spreading their message of social conservatism, intolerance, even hate, are claiming the support of a vast majority of christians who would, in fact, never support such things, while constantly wielding the rhetorical pick-axe of the "morally degenerate atheist" and/or the "judicial activist" to create an enemy whom they can easily and assuredly vanquish.


With the terms of the "debate" stated so absurdly from the beginning, the stacked deck plays out as one would obviously expect.

"But instead of attacking religion directly, as some antireligious secularists did earlier in the century with little success, organizations like Americans United for Separation of Church and State and the American Civil Liberties Union argued more narrowly that government ought to be secular in word, deed and intent. In 1971, in Lemon v. Kurtzman, the Supreme Court made this position law, requiring all government decisions to be motivated by a secular purpose, to have primarily secular effects and not to entangle the state with religious institutions. This new standard -- known thereafter as the ''Lemon test'' -- did much more than simply reaffirm a deeply rooted American norm of no government money for religion; it prohibited school prayer and Bible reading, which had been practiced in the public schools since their founding, and in many instances it removed Christmas decorations from the public square. The framers had neither known nor used the category ''secular'' as we understand it, but the court made secularism an official condition of all acceptable government conduct. In many quarters of religious America, there was outrage at this court-mandated secularism, which to many believers soon came to seem of a piece with the Supreme Court's 1973 guarantee of abortion rights in Roe v. Wade."

It begins by associating the court's attempt to maintain the establishment clause with "an attack on religion," in order to switch the topic away from the obvious constitutional roots which conservatives supposedly hold so dear. Then, amid the melodrama of christmas decorations being removed, we are told that bible reading goes back to the founding of the public schools, and that the framers had not known "secularism." These two relative non sequiturs, while not logically advancing anything, are marshalled as a sort of stand-in for the absent constitutional argument about the establishment clause. Before we can think about this, we're on to "court-mandated secularism," which sounds like we're in the Soviet Union.

Despite his desire to return to the early days of American history when he thinks it helps a point, we get no history whatsoever when he turns to the Pledge of Allegiance. "When a California man" [here using the popular California = moral degenerates neocon link] "named Michael Newdow pressed the court to find that the words ''under God'' in the Pledge of Allegiance impermissibly endorsed religion, the court ducked the issue. The more liberal justices seemed afraid to rule the pledge unconstitutional..." There is no intimation that the phrase itself is of extremely recent vintage, added, in fact, by a McCarthyite congress that, in hindsight, was probably one of the worst of the 20th century.

Finally, we come to see that this very setup was something of a rhetorical straw-man:
"Even a joint commitment to ''the culture of life'' turns out to be very thin. Catholics and conservative Protestants [where are liberal or even moderate Catholics or Protestants - do they even exist for this writer?] may agree broadly on abortion and euthanasia; but what about capital punishment, which Pope John Paul II condemned as an immoral usurpation of God's authority to determine life and death but which many evangelical Christians support as biblically mandated? To reach consensus, the values evangelicals have to water down the ''values'' they say they accept to the point where they would mean nothing at all. They are left either acknowledging disagreement about values or else falling into a kind of relativism (I'm O.K., you're O.K.) that is inconsistent with the very goal of standing for something rather than nothing."

The "I'm Ok you're Ok" is a cheap shot at the 60s cultural revoltion in disguise - an attempt to link non-religious examination of both the personal and the political to a kind of kitchy self-help genre. But more importantly, we have the underlying (conservative) idea that to "stand" for something - stand upright, stand erect, as it were, is to have some kind of iron-clad rule and to hold fast to it come what may. Whereas that is certainly NOT the only alternative to "relativism." Any number of modern writers about philosophical ethics have any attempt to hold fast to a single rule or worldview against all that would oppose it almost assuredly leads to the greatest kinds of ethical failing through a failure to imagine the Other.

"Meanwhile, the legal secularists have a different problem. They claim that separating religion from government is necessary to ensure full inclusion of all citizens. The problem is that many citizens -- values evangelicals among them -- feel excluded by precisely this principle of keeping religion private. Keeping nondenominational prayer out of the public schools may protect religious minorities who might feel excluded; but it also sends a message of exclusion to those who believe such prayer would signal commitment to shared values."

This is borderline incoherent. To what "shared values" is he referring? The shared value of christian evangelicals of insisting that everyone be like them? That's not a shared value. The whole point is that evangelicals are NOT excluded by the absence of school prayer the way that minorities are excluded BY it. This incoherent line of argument is common, though. It was used by Southern whites to maintain slavery, and then to uphold segregation and laws against miscegenation. Today, it is the fight against gay marriage.

In every case, you have one side which is actively being discriminated against, being treated as unequal, and on the other side you have a group which claims that it has a god-given right to discriminate, and to treat others AS unequal. Obviously these two positions come into conflict, but it is fallacious to see them as similar. Slaves weren't telling slave-owners what they could do, blacks weren't telling whites what they could do, mixed-race couples weren't telling "racially pure" couples what they could do, and gay couples aren't telling straight couples what they can do. In each case, you have a group that wants to be allowed to enter the national conversation about what an American is, and another group which wants not even to HAVE a conversation, but simply to tell everyone else what they KNOW an American is.

By the time he gets around to his actual argument about cutting public financing of religion or state coercion, he's on much better ground, and the arguments gain solidity.

But before too long, we get more annoying examples: "If many congressmen say that their faith requires intervening to save Terri Schiavo, that is not a violation of the rules of political debate. The secular congresswoman who thinks Schiavo should have the right to die in peace can express her contrary view and explain why it is that she believes a rational and legal analysis of the situation requires it." Why is the religious person assumed to be on the side of keeping Schiavo alive in a vegetative state, rather than letting her die naturally, as one might readily argue the Creator here intended?


And again: "Once in a while they may, if the composition of the Supreme Court is just right, thwart the values evangelicals' numerical superiority with a judicial override; but in the long run, all they will accomplish is to alienate the values evangelicals in a way that undercuts the meaningfulness of participatory democracy." So the country is really evangelical, and the only reason everything isn't is that a secular supreme court keeps sticking its nose in and undermining Democracy.

"To give a religious reason for passing a law is still to run the risk of that law being held unconstitutional as serving a religious rather than a secular purpose. So evangelicals end up speaking in euphemisms (''family values'') or proposing purpose-built dodges like ''creation science'' that even they often privately acknowledge to be paradoxical. A better approach would be for secularists to confront the evangelicals' arguments on their own terms, refusing to stop the conversation and instead arguing for the rightness of their beliefs about their own values. Reason can in fact engage revelation, as it has throughout the history of philosophy. The skeptic can challenge the believer to explain how he derives his views from Scripture and why the view he ascribes to God is morally attractive -- questions that most believers consider profoundly important and perfectly relevant."

I doubt this. I doubt that challenging someone's faith in this way - in a televised debate, on the senate floor, or anywhere else, for that matter - would be met with enthusiasm. I rather think that it would engender incredible anxiety and defensive aggression, as it always does now. Only very intellectual christians would feel perfectly comfortable seriously questioning their own belief that christianity does, in fact, lead to better ends than secularism. Belief is, for the most part, a highly irrational affair - I seriously doubt an interesting debate between a rationalist and an irrationalist would take place. Given this, a politician would be crazy to do this sort of thing on TV - a medium which almost inevitably seeks the lowest common intellectual and cultural denominator in order to be "inclusive."

"Secularists who are confident in their views should expect to prevail on the basis of reason; evangelicals who wish to win the argument will discover that their arguments must extend beyond simple invocation of faith."

Again, no. Because people can vote on the basis of either, and thus it would look just like it does today - rationalists explaining their reasoning, evangelicals explaining their faith, and voters making their decisions based on mixed criteria - either faith or reason, or some combination of heterogeneous factors that, for them, add up to a persuasive claim.

The final arguments about cutting all public funding for religious social programs or school vouchers strike me as convincing, but I remain uncertain why they should convince the evangelical christians he's trying to persuade.

Ultimately, a very strange, flawed, but interesting piece.

-andrew

Wednesday, June 29

William Safire on the Jailing of Judith Miller

LEGEND has it that when Henry David Thoreau went to jail to protest an unjust law, his friend, the philosopher Ralph Waldo Emerson, visited him and asked, "Henry, what are you doing in here?" The great nature writer replied, "What are you doing out there?"

The Supreme Court has just flinched from its responsibility to stop the unjust jailing of two journalists - not charged with any wrongdoing - by a runaway prosecutor who will go to any lengths to use the government's contempt power to force them to betray their confidential sources.

The case was about the "outing" of an agent - supposedly covert, but working openly at C.I.A. headquarters - in Robert Novak's column two years ago by unnamed administration officials angry at her husband's prewar Iraq criticism.

To show its purity, the Bush Justice Department appointed a special counsel to find any violation of the 1982 Intelligence Identities Protection Act. That law prohibits anyone from knowingly revealing the name of a covert agent that the C.I.A. is taking "affirmative measures" to conceal. The revelation must be, like that of the 70's turncoat Philip Agee - "in the course of a pattern" intending to harm United States intelligence.

Evidently no such serious crime took place. After spending two years and thousands of F.B.I. agent-hours and millions of dollars that could better have been directed against terrorism and identity theft, the prosecutor, Patrick Fitzgerald, admits his investigation has been stalled since last October. We have seen no indictment under the identities protection act.

What evidence of serious crime does he have that makes the testimony of Judith Miller of The New York Times and Matthew Cooper of Time magazine so urgent? We don't know - eight pages of his contempt demand are secret - but some legal minds think he is falling back on the Martha Stewart Theory of Prosecution. That is: if the underlying crime has not been committed, justify the investigation by indicting a big name for giving false information.

Thus, if the reporters resist the coercion of the loss of their freedom, the prosecutor can blame them for his inability to go to trial on the "heavy" charge. But if they cave in, he can get some headlines on the ancillary charge of false statements. (I have known Judy Miller, a superb and intrepid reporter, for a generation; she'll never betray a source.)

The principle at stake here is the idea of "reportorial privilege," embraced in shield laws in 49 states and the District of Columbia, but not in federal courts. That privilege not to testify - held by lawyers, members of the clergy, spouses and others - gives assurance to whistleblowers that information confided to a reporter revealing corruption or malfeasance in government will not result in loss of job or more severe retaliation from on high. (Most of the states' attorneys general, recognizing the value of press leads in law enforcement, strongly supported the reporters in this case.)

To every privilege there are exceptions; a lawyer, for example, cannot conspire with his client in committing a crime, and a reporter's testimony may be necessary in a capital case. But this investigation has shown no national security crime at all, as defined in the identities act. Maybe an official misled an agent, or even perjured himself to save his job; is that sufficient cause to incarcerate innocent journalists and impede the entire press's traditional means of exposing official corruption?

Here's what needs to be done now:

1. The judge should resist the prosecutor's pressure for coercive, lengthy and possibly dangerous confinement. Judy won't crack and should not be made to suffer.

2. The prosecutor should submit an information bewailing his witness difficulties in fingering sources in false denial, but showing why no major national-security crime had been committed.

3. Mr. Novak should finally write the column he owes readers and colleagues, perhaps explaining how his two sources - who may have truthfully revealed themselves to investigators - managed to get the prosecutor off his back.

4. The Congress should urgently hold hearings on shield bills to conform federal practice to the states' laws based on Congress's 1975 directive to the Supreme Court to apply "reason and experience" to extending privilege - which the court did in its 1996 Jaffee decision to psychotherapists.

The contempt epidemic is spreading fast. Yesterday, a federal appeals panel in the District of Columbia followed up the Supreme Court flinch by forcing a New York Times reporter and three other journalists in a different case to burn their sources or be sentenced. Along with Judy and Matt, these endangered journalists can look at plumber-prosecutors, smirking media-bashers and the wimps taking official handouts and ask:

"What are you doing out there?"

William Safire is a former Times Op-Ed columnist.

Tuesday, June 28

MGM v Grokster Ruling (w/multiple commentaries)

Well, I was initially horrified, but based on my own experience in this area, as well as that of those whom I trust, this is hardly the slam-dunk for the entertainment industry that the newspapers and tv shows will likely proclaim. In fact, the industry is likely to find it quite disappointing, and technologists are likely to find it quite reassuring. The heart of the Sony decision was unanimously upheld, and if you read the Breyer concurrence, there are plenty of good interpretations about copyright's proper place. To wit:

Sony's rule is clear. That clarity allows those who develop new products that are capable of substantial noninfringing uses to know, ex ante, that distribution of their product will not yield massive monetary liability. At the same time, it helps deter them from distributing products that have no other real function than--or that are specifically intended for--copyright infringement, deterrence that the Court's holding today reinforces (by adding a weapon to the copyright holder's legal arsenal).

Sony's rule is strongly technology protecting. The rule deliberately makes it difficult for courts to find secondary liability where new technology is at issue. It establishes that the law will not impose copyright liability upon the distributors of dual-use technologies (who do not themselves engage in unauthorized copying) unless the product in question will be used almost exclusively to infringe copyrights (or unless they actively induce infringements as we today describe). Sony thereby recognizes that the copyright laws are not intended to discourage or to control the emergence of new technologies, including (perhaps especially) those that help disseminate information and ideas more broadly or more efficiently. Thus Sony's rule shelters VCRs, typewriters, tape recorders, photocopiers, computers, cassette players, compact disc burners, digital video recorders, MP3 players, Internet search engines, and peer-to-peer software. But Sony's rule does not shelter descramblers, even if one could theoretically use a descrambler in a noninfringing way.

Given the nature of the Sony rule, it is not surprising that in the last 20 years, there have been relatively few contributory infringement suits--based on a product distribution theory--brought against technology providers.

To require defendants to provide, for example, detailed evidence--say business plans, profitability estimates, projected technological modifications, and so forth--would doubtless make life easier for copyrightholder plaintiffs. But it would simultaneously increase the legal uncertainty that surrounds the creation or development of a new technology capable of being put to infringing uses.

I do not doubt that a more intrusive Sony test would generally provide greater revenue security for copyright holders. But it is harder to conclude that the gains on the copyright swings would exceed the losses on the technology roundabouts.

For one thing, the law disfavors equating the two different kinds of gain and loss; rather, it leans in favor of protecting technology. As Sony itself makes clear, the producer of a technology which permits unlawful copying does not himself engage in unlawful copying--a fact that makes the attachment of copyright liability to the creation, production, or distribution of the technology an exceptional thing.

Moreover, Sony has been the law for some time. And that fact imposes a serious burden upon copyright holders like MGM to show a need for change in the current rules of the game.

In any event, the evidence now available does not, in my view, make out a sufficiently strong case for change.

Will an unmodified Sony lead to a significant diminution in the amount or quality of creative work produced? Since copyright's basic objective is creation and its revenue objectives but a means to that end, this is the underlying copyright question. See Twentieth Century Music Corp. v. Aiken, 422 U. S. 151, 156 (1975) ("Creative work is to be encouraged and rewarded, but private motivation must ultimately serve the cause of promoting broad public availability of literature, music, and the other arts"). And its answer is far from clear.

The extent to which related production has actually and resultingly declined remains uncertain, though there is good reason to believe that the decline, if any, is not substantial.

As today's opinion makes clear, a copyright holder may proceed against a technology provider where a provable specific intent to infringe (of the kind the Court describes) is present.



----------
From: Tim Wu
To: Walter Dellinger, Dahlia Lithwick, and Charles Fried
Subject: Grokster Dies, iTunes Lives
Monday, June 27, 2005, at 2:22 PM PT

Grokster was the case everyone (but Dahlia) has been waiting for, because everyone and their dog has either downloaded music or knows someone who has. And despite all the technological buzz, the court's opinion is old school. Justice David Souter, joined by the entire court, in essence says this: If you run a crooked business, we will shut you down. The defendants were explicitly relying on illegal copying to run their peer-to-peer file-sharing business—one of the P2P companies went so far as to make Top 40 songs available for free. And that, said the court, just isn't kosher. "One who distributes a device with the object of promoting its use to infringe copyright," Souter wrote, "is liable for the resulting acts of infringement by third parties."

But what about the Sony BetaMax rule—which saved the VCR on the grounds that it can be used for legal as well as illegal purposes? The Sony rule is central: It is what makes it legal to sell the TiVo, the photocopier, and even the typewriter, though each in its own way might be used for evil deeds. All the Grokster-watchers wanted to know what the court would do with Sony.

The most interesting part about Grokster is that it purports to leave Sony untouched, while adding this new cautionary against operating a crooked business. In other words, the court is saying that it's all about the marketing. By this logic, if Xerox in the 1970s had said, "Don't buy that textbook—photocopy it!" the photocopier, just like that, would have become contraband.

If a rule that's based on marketing seems odd, that's because it is. Can we really know, by looking at a company's ads, whether they're up to no good? The aftermath of Grokster will be a long debate over what exactly it means to "promote" violations of copyright. One reading of Grokster is that it creates a new safe harbor. The opinion suggests that by taking affirmative steps to stop infringement, Grokster and StreamCast might have stayed out of trouble. "Neither company attempted to develop filtering tools or other mechanisms to diminish the infringing activity using their software," Souter pointed out. Yet the court also maintains that companies don't have to so diminish—after all, the VCR itself did nothing to prevent illegal uses. In short, by encrypting its offerings, making a deal with the recording industry, or taking some other copyright-friendly step, the court suggests a new P2P service could stay out of copyright trouble.

What the court is doing boils down to asking judges to be on the watch for monkey business. In the eyes of the justices, companies like KaZaA and Grokster were clearly up to no good, but iTunes—now there's a respectable operation. What the court is trying to do, however awkwardly, is prevent copyright from killing new technologies while at the same time preventing scofflaws from getting away with the technological equivalent of murder. The result is almost like a rule of etiquette—yes, you can sell something that will destroy the recording industry, as long as you don't flaunt it. That's a lesson that's already been learned by Steve Jobs' iTunes, the leading legitimate music download service. The court, in short, has cursed KaZaA, blessed iTunes, and told us that TiVo is OK, too. And while today's decision nominally declares victory for the recording industry, I doubt there will be much celebration going on in industry headquarters tonight.

EFF
Supreme Court Sows Uncertainty
June 27, 2005

Let's measure today's opinion against the chief issues mentioned in the "Grokster Reader's Guide" last week.

* It's Not About P2P: It's still not about P2P. Whether or not today's ruling unleashes new litigation against innovators, it will have no effect on the tens of millions of Americans who continue to use P2P file-sharing software, nor will it deter off-shore programmers living beyond the reach of US copyright laws. Hilary Rosen is right: giving music fans a compelling legitimate alternative, whether through collective licensing or simply competing with free, is the only solution.

* No Matter What, We've Won: There is reason to celebrate in today's ruling. It could have been much worse. As many have noted, the Court rejected many of the more extreme positions that the entertainment industry argued for in the courts below. As discussed below, the Court left intact several important legal bulwarks for innovators. While the Court didn't shore them up, it also didn't tear them down.

* Main Event #1: Sony Betamax. The Supreme Court left the Betamax defense intact by essentially refusing to say anything about it, although the sniping between the two concurrences suggests that a future battle may be coming. Neither side can declare total victory on this score and future cases are probably inevitable (especially where well-advised companies use today's decision as a roadmap for avoiding any hint of inducement).

* Main Event #2: Vicarious Liability. The Court chose to punt on this issue, choosing to base its decision on inducement instead of addressing the entertainment industry's "you could have designed it differently" theory of vicarious liability. The Court's exposition of inducement, however, suggests that it would be hostile to any theory that imposed a free-floating obligation to redesign (without any evidence of inducement) on technologists. That's good news.

* Main Event #3: Inducement. The Court conjured a new form of indirect copyright liability, importing inducement from patent law. Lawyers will be reading the tea leaves here for years to come, trying to divine the precise boundaries of this new form of copyright liability (and, contrary to what the patent lawyers will tell you, patent precedents don't resolve all the questions). The opinion suggests that copyright plaintiffs must show some overt act of inducement; the design and distribution (along with the usual incidents of distribution) of a product, by itself, are not enough. But the Court's opinion may lead lower courts to conclude that once you find an overt act, however small, virtually everything else becomes relevant to divine your "intent." That would be a bonanza for entertainment lawyers eager to foist huge legal costs on defendants. Reminiscent, in some ways, of the securities class actions that have bedeviled high tech companies for years.



Public Knowledge statement from their president Gigi Sohn:

Today's Court decision in the Grokster case underscores a principle Public Knowledge has long promoted -- punish infringers, not technology. The Court has sent the case back to the trial court so that the trial process can determine whether the defendant companies intentionally encouraged infringement. What this means is, to the extent that providers of P2P technology do not intentionally encourage infringement, they are exempt from secondary liability under our copyright law. The Court also acknowledged, importantly, that there are lawful uses for peer-to-peer technology, including distribution of electronic files 'by universities, government agencies, corporations, and libraries, among others.'

The Court is clearly aware that any technology-based rule would have chilled technological innovation. That is why their decision today re-emphasized and preserved the core principle of Sony v. Universal City Studios -- that technology alone can't be the basis of copyright liability -- and focused clearly and unambiguously on whether defendants engaged in intentional acts of encouraging infringement. The Court held expressly that liability for providing a technological tool such as the Grokster file-sharing client depends on 'clear expression or other affirmative steps taken to foster infringement.' What this means is, in the absence of such clear expression or other affirmative acts fostering infringement, a company that provides peer-to-peer technology is not going to be secondarily liable under the Copyright Act.


Lichtman: Hollow Victory in Grokster

MGM won on paper today, but my first reading of the opinion makes me wonder whether the victory will have any bite outside of this specific litigation. Intent-based standards, after all, are among the easiest to avoid. Just keep your message clear -- tell everyone that your technology is designed to facilitate only authorized exchange -- and you have no risk of accountability.

That is not the standard I was hoping for. As I wrote in the amicus brief, I would have allowed liability to be based exclusively on objective evidence, for example a party's failure to alter its technology in a way that would significantly reduce infringing behavior without significantly interfering with legitimate uses.



Legality of Design Decisions, and Footnote 12 in Grokster

As a technologist I find the most interesting, and scariest, part of the Grokster opinion to be the discussion of product design decisions. The Court seems to say that Sony bars liability based solely on product design (p. 16):

Sony barred secondary liability based on presuming or imputing intent to cause infringement solely from the design or distribution of a product capable of substantial lawful use, which the distributor knows is in fact used for infringement.

And again (on p. 17),

Sony’s rule limits imputing culpable intent as a matter of law from the characteristics or uses of a distributed product.

But when it comes time to lay out the evidence of intent to foster infringement, we get this (p. 22):

Second, this evidence of unlawful objective is given added significance by MGM’s showing that neither company attempted to develop filtering tools or other mechanisms to diminish the infringing activity using their software. While the Ninth Circuit treated the defendants’ failure to develop such tools as irrelevant because they lacked an independent duty to monitor their users’ activity, we think this evidence underscores Grokster’s and StreamCast’s intentional facilitation of their users’ infringement.

It’s hard to square this with the previous statements that intent is not to be inferred from the characteristics of the product. Perhaps the answer is in footnote 12, which the court hangs off the last word in the previous quote:

Of course, in the absence of other evidence of intent, a court would be unable to find contributory infringement liability merely based on a failure to take affirmative steps to prevent infringement, if the device otherwise was capable of substantial noninfringing uses. Such a holding would tread too close to the Sony safe harbor.

So it seems that product design decisions are not to be questioned, unless there is some other evidence of bad intent to open the door.

The Court closed the door on this sort of inquiry, however. As the opinion makes clear, evidence of unreasonable product design can be considered only if there is also smoking-gun evidence of intent. Indeed, even outlandish design decisions are off limits without the relevant precursor.

Surely the Court realizes that well-advised bad actors rarely leave smoking guns lying about. Hence the victory here looks hollow, and in my view the legal rule seems poorly crafted.


From Eric Goldman's Technology Law Blog
http://blog.ericgoldman.org/archives/2005/06/grokster_suprem.htm

Why Did It Happen

I think the Supreme Court reached the only logical result. It had to find for the plaintiffs. I say this because there was simply no way for the Court to ignore that Grokster and Streamcast were facilitating massive copyright infringement. As the court says, “the probable scope of copyright infringement is staggering” and “there is evidence of infringement on a gigantic scale.” If it ignored these facts, it was simply going to force Congress to act.

On the other hand, the Supreme Court had to acknowledge that the rights of copyright owners can go too far in limiting technological innovation. The majority touches on this briefly in the beginning of its opinion, but more telling is the relatively narrow ruling—and careful drafting—of its basis for reversing the Ninth Circuit. The Court really tried to make sure that it found a way to get Grokster and Streamcast without opening up too much new liability.

In particular, the fact that the Court simply sidestepped any broad pronouncements about Sony is telling. Although the opinion was unanimous that the Ninth Circuit should be reversed, the court appeared badly fractured on the meaning and application of the Sony rule. Thus, it simply tried to leave Sony for another day. One can almost imagine the discussion in chambers: there must have been clear agreement that the defendants should lose, but no agreement on how or why. As a result, the Court seized on an “inducement” theory as a way to avoid clarifying Sony.

The Inducement Theory

Is “inducement” a new basis of liability? I don’t think it's a radical new doctrine. Under standard articulations of the contributory copyright infringement doctrine, a defendant is contributorily liable when it, “with knowledge of the infringing activity, induces, causes or materially contributes to the infringing conduct of another.” Gershwin Publishing Corp. v. Columbia Artists Mgmt., 443 F.2d 1159, 1162 (2d Cir. 1971).

So “inducement” was already part of contributory copyright infringement. One way to read this opinion is that the court merely amplified a new definition of what the word “induces” meant from Gershwin. The court’s definition: “one who distributes a device with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement, is liable for the resulting acts of infringement by third parties.”

However, this definition does a couple of things to extend contributory infringement. Most specifically, the court is a little cagey about the knowledge requirement from Gershwin. As we saw in the lower court opinions in Grokster, there were plenty of questions about knowledge of what and when.

The court sidesteps all of those questions, but in doing so, I’m not sure it really overstates the rule. I think the court clearly interpolates some intent of infringement—a higher level of scienter than knowledge.

This is where the Sony rule should kick in—knowledge or intent should be irrelevant if the device-maker is protected by the staple article of commerce doctrine. The court handles the Sony rule bizarrely and in a way that is sure to spawn hundreds of law review articles. It recharacterizes Sony as merely offering/removing presumptions. The court says “Sony barred secondary liability based on presuming or imputing intent to cause infringement solely from the design or distribution of a product capable of substantial lawful use, which the distributor knows is in fact used for infringement.”

I think the court is trying to say that Sony permitted defendants to argue against any presumption of “knowledge” under Gershwin—and without knowledge, defendants are not contributorily liable. With this recasting, now, the court is suggesting that we don’t need to worry about the knowledge prong (or, alternatively, we can infer that knowledge exists) when the defendants induce infringement.

While we might obsess about the nuances of each word, conceptually I’m not sure the court’s semantic jujitsu really changes much of anything. I still think contributory copyright infringement requires scienter + actus reus. The scienter is still knowledge (or intent) of the infringements, and the actus reus is still some type of contribution/facilitation. Under inducement, the actus reus is building and marketing of the device as a way to infringe.

Did the Defendants Induce?

While I think the legal standard of inducement is not a radical restatement of the law, it could have a significant impact depending on how courts apply the doctrine to the facts. This is where I think the court went out of its way to make sure that Grokster and Streamcast lost. The evidence that Grokster and Streamcast induced infringement was questionable. Abstracting from the facts, the court's basic thread seems to be:

· Napster was a bad actor
· Grokster and Streamcast tried to capitalize on Napster’s customer base after Napster’s demise
· It was wrong of Grokster and Streamcast to try to woo Napster’s customers knowing that Napster was a bad actor

In the end, the defendants appear to suffer a “taint by association”—by having been associated with the Napster collapse, they get tarred with the same brush.

Some of the specific facts that the court references:

· the defendants picked names that implicitly invoked Napster in customers’ heads
· the defendants offered the same basic services to customers that Napster offered
· internal StreamCast correspondence that the company was targeting former Napster users (which proves intent regardless of whether the messages ever reached consumers)

These facts are all ridiculously laughable. These are so defendant-specific and lightweight that it’s hard to take them seriously. Instead, the fact that the court showcases these facts reinforces that the court wants to make sure that Grokster and Streamcast lost in a narrow opinion.

While the case is interesting and will spawn plenty of discussion (some intelligent, some insipid), I think the Supreme Court successfully took care of Grokster/StreamCast without going too far. As a result, I think the practical consequences of this case are not that great. With the exception of Grokster and StreamCast as corporate entities (and their employees), I think this case will affect almost no one’s behavior.

Prediction: on remand, Grokster/StreamCast will be hit with enormous damages that will overwhelm their financial resources. As a result, I don’t see a bright future for these companies.

However, users will keep using their software, so the practical effect of this ruling on their users will be minimal. I also think that users generally will not change their file-sharing ways due to this opinion, so file-sharing will continue as if nothing happened.

Prediction: other P2P file sharing services will not change their behavior based on the ruling. The reasons why Grokster and StreamCast were found to induce are so company-specific that very few other P2P file sharing services will feel like it affects them. Further, new file-sharing technologies will emerge that will not promote themselves as tools for infringement, thus carefully avoiding the same “taint by association” that snared Grokster and StreamCast.

Prediction: Congress will not attempt to disturb this ruling. I think the Supreme Court successfully struck a middle ground that will keep Congress from getting involved. The copyright owners won the case, so Congress won’t be that sympathetic to their requests. Further, the copyright owners got a Supreme Court pronouncement on “inducement,” so that will substantially relax any pressure they could put on Congress to give them an inducement doctrine.

----

Finally:

Charles R. Nesson said in an interview on Monday night that he now supports the entertainment industry's effort to hold file-sharing networks liable for copyright infringement by their users, who include many college students, even though he had filed a brief in support of the defendants in the case, Grokster and StreamCast Networks Inc. StreamCast is the creator of a file-sharing program called Morpheus.

"I ... got persuaded that Grokster should lose," said Mr. Nesson, who is also the faculty co-director at Harvard's Berkman Center for Internet & Society.

The case was argued before the Supreme Court in March. During those arguments, Mr. Nesson said, he became sympathetic to the view that file-sharing networks should be held accountable for business plans that promote what he labeled "piracy tools" at the expense of copyright holders.

As it turned out, the Supreme Court found that Grokster and StreamCast had marketing strategies designed to attract people who infringe on copyrights, and therefore could be held liable.

"It is a good decision because it says you can't be a total predator," Mr. Nesson said.

In his brief to the Supreme Court, Mr. Nesson argued that a ruling in support of the entertainment industry could stifle a Berkman Center plan to build a digital library. But on Monday he said he no longer harbored that fear. The Grokster decision means that subsequent courts would consider a library's nonprofit status, and whether it tried to keep piracy at a minimum, he said.

Sunday, June 26

The Armstrong Williams NewsHour

By FRANK RICH

HERE'S the difference between this year's battle over public broadcasting and the one that blew up in Newt Gingrich's face a decade ago: this one isn't really about the survival of public broadcasting. So don't be distracted by any premature obituaries for Big Bird. Far from being an endangered species, he's the ornithological equivalent of a red herring.

Let's not forget that Laura Bush has made a fetish of glomming onto popular "Sesame Street" characters in photo-ops. Polls consistently attest to the popular support for public broadcasting, while Congress is in a race to the bottom with Michael Jackson. Big Bird will once again smite the politicians - as long as he isn't caught consorting with lesbians.

That doesn't mean the right's new assault on public broadcasting is toothless, far from it. But this time the game is far more insidious and ingenious. The intent is not to kill off PBS and NPR but to castrate them by quietly annexing their news and public affairs operations to the larger state propaganda machine that the Bush White House has been steadily constructing at taxpayers' expense. If you liked the fake government news videos that ended up on local stations - or thrilled to the "journalism" of Armstrong Williams and other columnists who were covertly paid to promote administration policies - you'll love the brave new world this crowd envisions for public TV and radio.

There's only one obstacle standing in the way of the coup. Like Richard Nixon, another president who tried to subvert public broadcasting in his war to silence critical news media, our current president may be letting hubris get the best of him. His minions are giving any investigative reporters left in Washington a fresh incentive to follow the money.

That money is not the $100 million that the House still threatens to hack out of public broadcasting's various budgets. Like the theoretical demise of Big Bird, this funding tug-of-war is a smoke screen that deflects attention from the real story. Look instead at the seemingly paltry $14,170 that, as Stephen Labaton of The New York Times reported on June 16, found its way to a mysterious recipient in Indiana named Fred Mann. Mr. Labaton learned that in 2004 Kenneth Tomlinson, the Karl Rove pal who is chairman of the Corporation for Public Broadcasting, clandestinely paid this sum to Mr. Mann to monitor his PBS bête noire, Bill Moyers's "Now."

Now, why would Mr. Tomlinson pay for information that any half-sentient viewer could track with TiVo? Why would he hire someone in Indiana? Why would he keep this contract a secret from his own board? Why, when a reporter exposed his secret, would he try to cover it up by falsely maintaining in a letter to an inquiring member of the Senate, Byron Dorgan, that another CPB executive had "approved and signed" the Mann contract when he had signed it himself? If there's a news story that can be likened to the "third-rate burglary," the canary in the coal mine that invited greater scrutiny of the Nixon administration's darkest ambitions, this strange little sideshow could be it.

After Mr. Labaton's first report, Senator Dorgan, a North Dakota Democrat, called Mr. Tomlinson demanding to see the "product" Mr. Mann had provided for his $14,170 payday. Mr. Tomlinson sent the senator some 50 pages of "raw data." Sifting through those pages when we spoke by phone last week, Mr. Dorgan said it wasn't merely Mr. Moyers's show that was monitored but also the programs of Tavis Smiley and NPR's Diane Rehm.

Their guests were rated either L for liberal or C for conservative, and "anti-administration" was affixed to any segment raising questions about the Bush presidency. Thus was the conservative Republican Senator Chuck Hagel given the same L as Bill Clinton simply because he expressed doubts about Iraq in a discussion mainly devoted to praising Ronald Reagan. Three of The Washington Post's star beat reporters (none of whom covers the White House or politics or writes opinion pieces) were similarly singled out simply for doing their job as journalists by asking questions about administration policies.

"It's pretty scary stuff to judge media, particularly public media, by whether it's pro or anti the president," Senator Dorgan said. "It's unbelievable."

Not from this gang. Mr. Mann was hardly chosen by chance to assemble what smells like the rough draft of a blacklist. He long worked for a right-wing outfit called the National Journalism Center, whose director, M. Stanton Evans, is writing his own Ann Coulteresque book to ameliorate the reputation of Joe McCarthy. What we don't know is whether the 50 pages handed over to Senator Dorgan is all there is to it, or how many other "monitors" may be out there compiling potential blacklists or Nixonian enemies lists on the taxpayers' dime.

We do know that it's standard practice for this administration to purge and punish dissenters and opponents - whether it's those in the Pentagon who criticized Donald Rumsfeld's low troop allotments for Iraq or lobbying firms on K Street that don't hire Tom DeLay cronies. We also know that Mr. Mann's highly ideological pedigree is typical of CPB hires during the Tomlinson reign.

Eric Boehlert of Salon discovered that one of the two public ombudsmen Mr. Tomlinson recruited in April to monitor the news broadcasts at PBS and NPR for objectivity, William Schulz, is a former writer for the radio broadcaster Fulton Lewis Jr., a notorious Joe McCarthy loyalist and slime artist. The Times reported that to provide "insights" into Conrad Burns, a Republican senator who supported public-broadcasting legislation that Mr. Tomlinson opposed, $10,000 was shelled out to Brian Darling, the G.O.P. operative who wrote the memo instructing Republicans to milk Terri Schiavo as "a great political issue."

Then, on Thursday, a Rove dream came true: Patricia Harrison, a former co-chairwoman of the Republican National Committee, ascended to the CPB presidency. In her last job, as an assistant secretary of state, Ms. Harrison publicly praised the department's production of faux-news segments - she called them "good news" segments - promoting American success in Afghanistan and Iraq. As The Times reported in March, one of those fake news videos ended up being broadcast as real news on the Fox affiliate in Memphis.

Mr. Tomlinson has maintained that his goal at CPB is to strengthen public broadcasting by restoring "balance" and stamping out "liberal bias." But Mr. Moyers left "Now" six months ago. Mr. Tomlinson's real, not-so-hidden agenda is to enforce a conservative bias or, more specifically, a Bush bias. To this end, he has not only turned CPB into a full-service employment program for apparatchiks but also helped initiate "The Journal Editorial Report," the only public broadcasting show ever devoted to a single newspaper's editorial page, that of the zealously pro-Bush Wall Street Journal. Unlike Mr. Moyers's "Now" - which routinely balanced its host's liberalism with conservative guests like Ralph Reed, Grover Norquist, Paul Gigot and Cal Thomas - The Journal's program does not include liberals of comparable stature.

THIS is all in keeping with Mr. Tomlinson's long career as a professional propagandist. During the Reagan administration he ran Voice of America. Then he moved on to edit Reader's Digest, where, according to Peter Canning's 1996 history of the magazine, "American Dreamers," he was rumored to be "a kind of 'Manchurian Candidate' " because of the ensuing spike in pro-C.I.A. spin in Digest articles. Today Mr. Tomlinson is chairman of the Broadcasting Board of Governors, the federal body that supervises all nonmilitary international United States propaganda outlets, Voice of America included. That the administration's foremost propagandist would also be chairman of the board of CPB, the very organization meant to shield public broadcasting from government interference, is astonishing. But perhaps no more so than a White House press secretary month after month turning for softball questions to "Jeff Gannon," a fake reporter for a fake news organization ultimately unmasked as a G.O.P. activist's propaganda site.

As the public broadcasting debate plays out, there will be the usual talk about how to wean it from federal subsidy and the usual complaints (which I share) about the redundancy, commerciality and declining quality of some PBS programming in a cable universe. But once Big Bird, like that White House Thanksgiving turkey, is again ritualistically saved from the chopping block and the Senate restores more of the House's budget cuts, the most crucial test of the damage will be what survives of public broadcasting's irreplaceable journalistic offerings.

Will monitors start harassing Jim Lehrer's "NewsHour," which Mr. Tomlinson trashed at a March 2004 State Department conference as a "tired and slowed down" also-ran to Shepard Smith's rat-a-tat-tat newscast at Fox News? Will "Frontline" still be taking on the tough investigations that network news no longer touches? Will the reportage on NPR be fearless or the victim of a subtle or not-so-subtle chilling effect instilled by Mr. Tomlinson and his powerful allies in high places?

Forget the pledge drive. What's most likely to save the independent voice of public broadcasting from these thugs is a rising chorus of Deep Throats.

Thursday, June 23

Industry-funded Scientific Research - an Oxymoron?

By LILA GUTERMAN

Medical research has suffered a public-relations disaster in recent months. Leading arthritis drugs such as Vioxx and Bextra were pulled from store shelves when it was revealed that their makers and the federal government had overlooked deadly side effects. The National Institutes of Health came under fire when an investigation found that some of its researchers had received hundreds of thousands of dollars as consultants for drug companies. And a medical professor at the University of Vermont admitted to having made up data in dozens of federal grant applications and in published papers about, among other things, aging and obesity in women.

Those cases have sparked a public debate about ethics in research and whether academic medical researchers are truly independent of the companies that have a stake in their findings. Far from the headlines, however, corporate influence on another sort of health-related research runs even deeper, with little discussion among scientists, and meager scrutiny by watchdogs.

"There's not been a lot of debate," says Arthur L. Frank, a professor of public health and chair of the department of environmental and occupational health at Drexel University. "Most academics live in their ivory towers and do their own work: 'You leave me alone, I'll leave you alone.' It takes a certain amount of guts to stand up and say the emperor has no clothes."

Many academic scientists who work in occupational and environmental health -- a field dedicated to studying dangers to the public's health from the workplace and the environment -- say business interests increasingly drive research agendas.

"In this country over the last 20 years, the proportion of research studies that have been funded publicly has dropped substantially, and the proportion of privately funded has gone up," says Michael A. Silverstein, a clinical professor of environmental and occupational health sciences at the University of Washington. "It may be even more true in the area of environmental and occupational health," he says, because the federal agencies that provide funds for that research receive less money than many other institutes do.

The stakes are high. Academic researchers find themselves facing off as expert witnesses in multimillion-dollar lawsuits by neighbors of industrial sites, who say their cancers were caused by exposure to toxins. Scientists represent opposite sides in debates over protecting workers from radiation and protecting populations from polluted water and air. Meanwhile, lawsuits drag on for years, and the regulatory process grinds to a halt in the face of the experts' conflicting analyses.

Some researchers in these fields think any collaboration with industry taints the science. "This isn't a matter of minor ethics," says Joseph LaDou, a clinical professor of medicine at the University of California at San Francisco. "These are bought scientists."

But researchers who do work for industry dismiss such attacks. "You're not biased if you're correct," says Kenneth J. Rothman, a part-time professor of epidemiology at Boston University who consults for industry when he is not on the campus.

Regardless of their philosophy, scientists agree that their battles have substantial influence on public health.

"A tremendous amount of research is being done now clearly for advocacy purposes," says David M. Michaels, a professor of environmental and occupational health and of epidemiology at George Washington University. "Right now the money that's going into epidemiology is going into epidemiology to support litigation."

Money Talks Loudly

What critics see as a crisis in occupational- and environmental-health research is driven by two fundamental realities in the field: the lack of available money from the government, and researchers' dependence on industry for information.

Because their work is a low priority at government agencies that sponsor medical research, scientists are forced to look elsewhere for money.

For instance, the 2004 budget of the National Institute for Occupational Safety and Health for extramural research on workers' public health -- work done under the institute's auspices but not within its walls -- was $81.6-million. Environmental-health research fares relatively better: the 2004 budget for outside research at the National Institute of Environmental Health Sciences is $462-million.

But both figures pale in comparison with some of the better-financed areas of medical research. The National Cancer Institute, for instance, spent $3.7-billion on extramural research in 2004, while the National Institute of Allergy and Infectious Diseases spent $3.5-billion.

"The demand is probably greater than the funding that NIOSH has," says Richard A. Lemen, who was deputy director of the institute from 1992 to 1996.

Industry often fills the gap. An industry-sponsored study done 10 years ago of risks in just one field, semiconductor manufacturing, cost about "half the extramural research budget of NIOSH," says the academic scientist who spearheaded it.

By definition, practically any epidemiological study in occupational medicine will involve some industry participation. Environmental medicine has a similar tie -- researchers often want to know whether chemicals from an industrial site are affecting public health. Cooperation with industry can mean access to a facility's work force -- or to crucial data that wouldn't be available otherwise.

"To say academic expertise should only be on one side is kind of wrong on its face," argues Marc B. Schenker, a professor and chairman of the department of public-health sciences at the University of California at Davis, who led the industry-sponsored semiconductor study. Dr. Schenker, who has consulted with companies and testified on behalf of both plaintiffs and defendants, says scientists need to follow where the data lead, whether or not the findings are politically correct.

The sponsors of the semiconductor study did not interfere with his research, he says. He kept all of the data and was guaranteed the right to publish his results, regardless of how they affected his sponsors' bottom line. His major conclusion was that women who worked in the fabrication room at 14 semiconductor manufacturers had a slightly increased risk of spontaneous abortion.

Unusual Conditions

Other scientists have found Dr. Schenker's study on semiconductor manufacturers convincing. "It was a highly visible study, after much public concern, with an outstanding group of researchers, that got adequate support," says Richard W. Clapp, a professor of environmental health at Boston University, in an e-mail message. "Those conditions are unusual."

Dr. Schenker adds, "I don't see myself as an advocate who manipulates science to reach some preconceived goals." Besides, he says, there would be no reason to bias the results: He makes no money through his corporate-sponsored research studies or his litigation services. Any payment for such work goes to the university, he says.

Patricia A. Buffler, a professor of epidemiology and dean emerita of the School of Public Health at the University of California at Berkeley, is even more insistent that research and industry sponsorship can coexist happily. She has served as an expert witness and consultant -- and been paid for her efforts -- for corporations involved in lawsuits on environmental-health issues.

Ms. Buffler would not tell The Chronicle how much she earns for such work, but she is listed in California Superior Court documents from a continuing case against Lockheed Martin Corporation as an expert witness who has been paid $450 per hour as a consultant for the aerospace company. During the discovery phase of the case last July, she testified that she had also consulted for Arco, Gulf Oil, Goodyear, Shell Oil, Standard Oil, and Union Carbide, among other companies -- although she says she was not paid for much of that work.

The Lockheed lawsuit, filed in the mid-1990s, pits about 400 residents of Redlands, Calif., against the company. The residents say a Lockheed rocket-testing facility polluted their groundwater and caused cancer and other illnesses. Perchlorate, one of the chemicals that the plaintiffs say they have been exposed to, is known to interfere with thyroid function.

With financial support from Lockheed Martin, Ms. Buffler and other scientists conducted a study that found that area residents did not experience a higher risk of thyroid problems than other people did.

The professor says she requires assurance in writing from any company that she agrees to work with that it will not attempt to suppress or manipulate her findings.

"I believe I behave with a very high ethical threshold," she says. "The way to achieve the best public-health goals is to have a strong science base and not to get carried away with poorly based advocacy. That puts me at odds sometimes with the environmental groups, but in the long run that pays off in terms of public health. Shoddy science won't stand up to rigorous scrutiny."

Disappearing Acts

Critics of industry-sponsored research argue that even the most forthright agreements between researcher and industry carry risks of bias in results or interpretation that benefit the sponsors.

"Even under the best of circumstances, there's some understanding that future funding depends at least in part on the results you find this time," says Anthony Robbins, a professor of public health and family medicine at Tufts University, who is a former director of the National Institute for Occupational Safety and Health.

Other critics are more blunt. Daniel T. Teitelbaum, a doctor in Denver who specializes in medical toxicology and occupational epidemiology, says: "Industry doesn't give you money to do research. Industry gives you money to do research that favors them."

When that research doesn't go in industry's favor, critics say, it has a tendency to disappear. "There's a systematic failure to publish positive findings when they happen to conflict with the sponsor's interest," says Sander Greenland, a professor of epidemiology and statistics at the University of California at Los Angeles.

Among the public, in the federal government, and among scientists who test new therapies and drugs, momentum is growing to require listing all clinical trials in a national registry, in order to expose such underreporting. Mr. Greenland argues that the same should be done in epidemiology. But that may be an uphill battle. Some journals in occupational and environmental health do not even require authors to reveal their sources of money or any other potential conflicts of interest.

Aside from the obvious pressures that industry exerts over research, it can influence academic scientists in subtler ways. For example, when companies cast doubt on academic studies, they often force university researchers into long and draining disputes that consume years of additional work, tying up proposed regulations or court cases in endless debates about the validity of the science.

Beate R. Ritz, an associate professor of epidemiology and environmental health at UCLA, has experienced that firsthand.

Because of nearby residents' concerns about their exposure to radiation and to solvents from a facility that tested rockets and nuclear reactors, Dr. Ritz did a study in the mid-1990s of the health and employment records of some 55,000 people who had worked at the facility between 1950 and 1993. The facility, which overlooks Simi Valley, had been run by a company called Rocketdyne, which in 1996 became part of the Boeing Company. Dr. Ritz's work was sponsored by the U.S. Department of Energy and cost $750,000.

She and her collaborators found that workers who had had high exposures to radiation were more likely than their colleagues to die of leukemia, and that workers exposed to the rocket fuel hydrazine were at a slightly higher risk of dying of lung cancer.

Residents of neighborhoods near the facility sued Rocketdyne in 1997, concerned that they were at risk for the same cancers. The lawsuits are pending in U.S. district court. Rocketdyne then hired a panel of academic scientists to assess Dr. Ritz's study. When those analyses were completed, her group, as well as Rocketdyne representatives, met with local residents in 1999. The company "said everything we had done was wrong," she recalls. "These people were so antagonistic."

She was taken aback, since her findings had not been scientifically surprising. Her studies had passed peer review and been published in top-flight epidemiology and occupational-medicine journals. But John K. Mitchell, a spokesperson for the company, says, "There were a lot of questions that people felt hadn't been answered by the original study." So Rocketdyne, along with the United Aerospace Workers union, paid a contract research organization $3.5-million to conduct further work.

"They pretty much came up with the same data we had," Dr. Ritz says. "But the way they presented the data made all the effects that are there go away."

Dr. Ritz got another grant from the state of California, for $210,000, to do a follow-up study of her own, which added several more years' worth of deaths to the data. The new study also allowed her to look at the incidence of disease, not just causes of death -- something she had not done earlier because not enough data had yet been entered into California's cancer registry. The later Rocketdyne-commissioned study did not look at incidence either, despite having begun years after Dr. Ritz's study, when more data had become available. "Why didn't they do it?" she asks. "They didn't want to get any results that show something." Mortality data, she says, are less clear-cut and therefore less likely to reveal a causal connection.

Mr. Mitchell denies the charge. He says scientists from the research organization looked at mortality but not illness because "they thought it was more accurate."

Dr. Ritz now has two papers under review on the Rocketdyne case and is writing two more on the new study. As in the original study, she found an increased risk of death from cancer for workers who had been exposed to radiation and to hydrazine.

Other scientists say her experience is not unusual. Researchers feel forced to expand studies or repeat their work after such attacks, says David Ozonoff, a professor of environmental health at Boston University, "to dispel the doubt that industry has created."

In his own research on perchloroethylene, the solvent most often used in dry cleaning -- and a common groundwater contaminant -- he has had to "keep doing studies over and over again" because an industry association keeps casting doubt on his results.

Dr. Robbins, of Tufts, says: "Industry has found it worthwhile to challenge all of the studies that suggest there might be a link between some exposure and some kind of disease or illness. ... Industry is in the business of manufacturing uncertainty."

A Happy Medium

What is the solution? Find a middle ground, say some researchers. One organization cited by many of them as performing truly independent studies while relying on industry money is the Health Effects Institute.

"They do high-quality work, and it's very transparent," says Mr. Michaels, of George Washington University, who was assistant secretary for environment, safety, and health in the Department of Energy from 1998 to 2001. "They are very careful about doing it right."

The nonprofit institute finances studies and does analyses of existing findings on air pollution and health. It is supported jointly -- and equally -- by the Environmental Protection Agency and a consortium of car-and-truck manufacturers.

Every five years, the institute comes up with a list of research priorities. Based on that, the EPA and the manufacturers provide five-year commitments of money, which currently stand at about $4-million annually. "No matter what we say with any particular study we publish, people can't just pull out," says Daniel S. Greenbaum, the institute's president.

But that example is a lonely one. Like many other scientists, Colin L. Soskolne, a professor of epidemiology at the University of Alberta, in Canada, is distressed by how many of his colleagues "pursue the interests of vested interests."

"We're all human," he says. "We tend to respond in ways that will ensure that we are secure in our careers. Unfortunately, in my view, this does not always -- perhaps ever -- serve science well or the public interest well."

Wednesday, June 22

Humanistic Approaches for Digital-Media Studies

The Chronicle of Higher Education
Information Technology
From the issue dated June 24, 2005

By JANET H. MURRAY

In April 1999 I wrote an opinion article for The Chronicle in which I called for new, principle-based curricula to prepare students for the emerging field of interactive design. I criticized the then-current interdisciplinary programs for inculcating conflicting models of the computer -- models that often reflected the design criteria of older media -- and I recommended a new kind of professional education that would recognize the computer as a representational medium with its own expressive properties.

At the time I wrote that essay, I was at the Massachusetts Institute of Technology, teaching one course a year in interactive narrative, while spending most of my time leading projects in humanities computing. I became interested in curriculum after the publication of my book Hamlet on the Holodeck: The Future of Narrative in Cyberspace (Free Press, 1997), which also coincided with the expansion of the World Wide Web and the growth of the first programs in new media.

As I spoke with designers in professional and university settings, I was struck by their shared confusion over how to evaluate the new digital artifacts they were making, like online newspapers, multimedia CD-ROM's, and interactive museum installations. They knew what they liked, but they didn't know why -- and people with different training liked different things. Without clear conventions to guide them, the designers were forced to rely on principles that reflected the constraints of the old printed page or film camera. It seemed to me that a humanities-based professional program could solve that problem, by looking at the computer as unique in its formal properties but still embedded in the rich representational traditions of older media.

In the fall of 1999 I joined such a program. The School of Literature, Communication, and Culture at the Georgia Institute of Technology had been offering a master of science degree in information design and technology since 1993. The following fall I became director of graduate studies, and we began a process of updating the master's curriculum and designing a Ph.D. program in digital media, which admitted its first class last fall. So I have had ample opportunity over the past few years to experience the challenges of curriculum making in an emerging field.

Georgia Tech is a good place to think about a digital-media curriculum because it is unusual in bringing together, in one academic unit, a strong faculty of humanists who both make things and theorize about the media they use. My colleague Jay D. Bolter, for example, was the co-inventor of Storyspace, an early hypertext-writing environment, and is currently collaborating on augmented-reality applications, in which ghostlike video characters are superimposed on the world around us and interact with us. He is also the author of several books that examine the relationship between digital media and older traditions. Another colleague, Eugene Thacker, writes about the new understanding of genetics as an information medium and creates art that merges biological and computational codes.

In addition, the institution has a longstanding interdisciplinary culture that fosters collaboration between us and our colleagues in computer science, digital music, architecture, and engineering. As a result, we can take breadth for granted without having to constantly improvise interdisciplinary connections. Jay Bolter's augmented-reality project, for instance, is housed in the interdisciplinary Graphics, Visualization, and Usability Center, and is a collaboration between him and the computer scientist Blair MacIntyre.

At the same time, we have enough senior and junior faculty members within our own unit to concentrate on depth. For example, Michael Mateas creates story games based on artificial-intelligence programming techniques, Ian Bogost works on political games, and my own students create conversational characters. Our different approaches to games and stories expose the students to a wide range of design strategies.

The Georgia Tech degree programs differ from those of other universities in some significant ways. Most important, the master's degree is an academic, rather than a narrowly professional, degree. Some of our students go on to earn Ph.D.'s, although most go into professional work. Universities with a strong professional focus often have a less structured curriculum, relying on team-based projects and cooperative work with industry to prepare students for real-world jobs. Our students have summer internships making real-world artifacts like video games, informational Web sites, interactive TV programs, and digital-art installations, but the bulk of their training is in structured courses, including four required core lab courses and a required "Project Studio" course, which focuses on long-term, faculty-led research projects.

Like all digital-media and new-media graduate programs, ours admits students from a striking variety of backgrounds: programmers, artists, journalists, filmmakers. But unlike faculty members at many other institutions, we expect students to work outside their specialty. We want the filmmakers and artists to learn the principles of computational abstraction -- how to represent the world as objects with rule-based behaviors -- rather than just mastering the latest multimedia assembly tools. We want the programmers to integrate knowledge of interface-design principles with their understanding of the inner workings of a game, informational Web site, or immersive environment. We also want all our courses to share common methodologies, like iterative design and focused critiques. We include theoretical readings in all the core courses, working from a common reader so that we can coordinate the discussion across courses and semesters.
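An aside from this blog on "computational abstraction," since the phrase may be opaque to non-programmers: the idea is that a designer writes entities and the rules governing their behavior, and the running program generates the particulars. Below is a minimal sketch in Python -- my own toy illustration, not anything from Georgia Tech's curriculum -- of an interactive-story object whose behavior is a rule rather than a pre-authored sequence:

    # A toy interactive-fiction object (hypothetical example): its
    # behavior is a rule the designer writes once, not a script that
    # enumerates every outcome in advance.
    class Door:
        def __init__(self, locked=True):
            self.locked = locked

        def open(self, player_has_key):
            # Rule-based behavior: the outcome is computed from state,
            # so the same object supports many different play sessions.
            if self.locked and not player_has_key:
                return "The door will not budge."
            self.locked = False
            return "The door swings open."

    door = Door()
    print(door.open(player_has_key=False))  # The door will not budge.
    print(door.open(player_has_key=True))   # The door swings open.

The filmmaker's instinct is to author the two outcomes as separate scenes; the programmer's is to author the rule and let the state decide -- which is, roughly, the shift in thinking the essay describes.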

A program in such a fast-changing field can remain successful only if the faculty constantly improves and expands the curriculum in response to emerging technologies (like wireless communications and interactive television), emerging research fields (like game studies and the psychology of online communities), and the increasing sophistication of entering students. One strategy that helps to keep our master's program in information design and technology responsive without making it overly trend driven is the establishment of umbrella courses -- like "Experimental Media" and "Project Studio" -- that can be shaped to reflect the specialties of individual faculty members.

"Project Studio" has been one of the most successful elements of our curriculum. It is a required, repeatable course that engages students in team-based, faculty-led projects that produce deliverables, like a game about urban-planning decisions, within a single semester. Faculty members are eager to teach the course because it gives them free assistance with projects -- students get credit instead of pay for their research. It also ensures that all students in the master's program get a chance to work on well-conceived problems, giving them important experience with teamwork and in shaping their own projects, as well as items to include in their professional portfolios.

We have had to balance the responsibility of preparing our students for professional employment, which requires mastery of specific tools and practices, with our commitment to a curriculum based on enduring principles of design -- a body of knowledge that the faculty is constantly redefining as we debate and challenge one another. Because we came to the theory and practice of digital media from different disciplinary backgrounds, we bring many viewpoints on what is essential knowledge in the field and how it is best taught in the limited time available. Currently we teach two core courses that emphasize visual culture -- from the history of the alphabet to the creation of three-dimensional moving images -- and two that explore computational structures. Our faculty members continue to be engaged in reviewing and rethinking the program's learning goals, especially in the context of evaluating our students' impressive projects or of reviewing the semester's gloriously varied work at our semiannual Demo Days.

With an average of 15 to 20 master's recipients a year since 1995 and an expanding international reputation based on the research activity of our faculty members, the demand for a Ph.D. program at Georgia Tech grew steadily throughout the 1990s. The effort to start a program stalled for all the usual reasons that change is difficult at academic institutions. Faculty members outside the field questioned its validity while voicing fears that other fields would suffer from the growing popularity of digital studies. Faculty members within the field, drawn from very different disciplinary backgrounds, worried that we would not be able to reach a consensus about the curriculum. The general shift in literary and film studies over the last decades of the 20th century had been from a fixed creative canon to an expanded and contested creative canon, then to a contested theoretical canon. That meant that defining a curriculum could easily lead to ideological warfare. People avoided the task rather than risk dissension.

The initiative gathered momentum only after the faculty was coaxed into creating a list of possible texts (readings, artworks, films, digital works, etc.) that students could present for their comprehensive examination, the multipart written and oral test that qualifies them to submit a thesis proposal. Students would be asked to choose 50 works in each of four categories -- media theory, media traditions, digital media, and a specialty category of their own devising; they could be tested on any of those 200 works. The list began when we combined individual professors' research bibliographies, which allowed us to see the points of intersection (for example, everyone included Walter Benjamin's article "The Work of Art in the Age of Mechanical Reproduction") and made visible to everyone many areas of individual expertise that had not previously been widely known -- like research on cognition and culture.

In addition to our graduates' need for a Ph.D. program, our master's program had been suffering from the lack of an undergraduate feeder program. With students coming into the program with very different kinds of preparation, we felt the need for foundational courses in graphic design, digital-media studies, video production, and other basic subjects. We therefore welcomed the opportunity to develop a joint undergraduate major with our colleagues in the College of Computing. Georgia Tech began to offer a bachelor of science in computational media at the same time as our Ph.D. program started, and that undergraduate curriculum has already allowed us to rethink our course offerings as a multitiered structure. The new undergraduate courses have let us raise the level of our graduate courses and have also provided valuable instructional positions for our Ph.D. students.

It may be unusual for a master's program in a new discipline like digital-media studies to lead to Ph.D. and undergraduate programs. Many institutions start with a few undergraduate courses that coalesce into a major, and most degree programs begin at the undergraduate level and work their way upward. But several prominent universities have started master's programs in the past five years, including Carnegie Mellon University, MIT, the Rhode Island School of Design, the University of California at Los Angeles, and the University of Southern California. A few, like Simon Fraser University and the IT University of Copenhagen, have established Ph.D. programs. Each of those programs has its own emphasis, reflecting differences in academic organization and faculty specialties. For instance, Carnegie Mellon has been particularly successful in combining improvisational theater techniques with innovative computing research.

There are also a growing number of programs, mostly for undergraduates, in game design and digital media that emphasize industry-oriented training, and perhaps an equal number in new-media studies that emphasize theory over practice. That trend, though understandable, is disturbing because it suggests that digital media may go the way of film studies and film production, literature and writing, or art history and studio art -- fragmenting into two disciplines that barely communicate with one another.

The prospect reminds me of my experience as a graduate student in English at Harvard University, when I naïvely suggested taking a poetry-writing course with one of the greatest living American poets, who was then on the faculty. My adviser looked at me with horror: "This is not a trade school!" he barked.

It is an odd but persistent academic prejudice to view analysis and creation as antithetical enterprises. So far my experience at Georgia Tech has reinforced the opposite belief. I am more convinced than ever that it is best to teach and learn about digital media by combining humanistic critique with computationally sophisticated practice.


Janet H. Murray is a professor and director of graduate studies at the School of Literature, Communication, and Culture at the Georgia Institute of Technology.

The Plight of English Majors in a Post-9/11 World

The following is based on the commencement address given to the graduating students of the Department of English of the University of California at Berkeley in the Hearst Greek Theatre, May 15, 2005.

When I was invited to give this speech, I was asked for a title. I dillied and dallied, begged for more time, and of course the deadline passed. The title I really wanted to suggest was the response that all of you have learned to expect when asked your major: What are you going to do with that? To be an English major is to live not only by questioning, but by being questioned. It is to live with a question mark placed squarely on your forehead. It is to live, at least some of the time, in a state of "existential dread." To be a humanist, that is, means not only to see clearly the surface of things and to see beyond those surfaces, but to place oneself in opposition, however subtle, an opposition that society seldom lets you forget: What are you going to do with that?

To the recent graduate, American society—in all its vulgar, grotesque power—reverberates with that question. It comes from friends, from relatives, and perhaps even from the odd parent here and there. For the son or daughter who becomes an English major puts a finger squarely on the great parental paradox: you raise your children to make their own decisions, you want your children to make their own decisions—and then one day, by heaven, they make their own decisions. And now you are doomed to confront daily the condescending sympathy of your friends—their children, of course, are economics majors or engineering majors or pre-meds—and to confront your own dread about the futures of your children.

It's not easy to be an English major these days, or any student of the humanities. It requires a certain kind of determination, and a refusal—an annoying refusal, for some of our friends and families, and for a good many employers—to make decisions, or at least to make the kind of "practical decisions" that much of society demands of us. It represents a determination, that is, not only to do certain things—to read certain books and learn certain poems, to acquire or refine a certain cast of mind—but not to do other things: principally, not to decide, right now, quickly, how you will earn your living; which is to say, not to decide how you will justify your existence. For in the view of a large part of American society, the existential question is at bottom an economic one: Who are you and what is your economic justification for being?

English majors, and other determined humanists, distinguish themselves not only by reading Shakespeare or Chaucer or Joyce or Woolf or Zora Neale Hurston but by refusing, in the face of overwhelming pressure, to answer that question. Whether they acknowledge it or not—whether they know it or not—and whatever they eventually decide to do with "that," they see developing the moral imagination as more important than securing economic self-justification.

Such an attitude has never been particularly popular in this country. It became downright suspect after September 11, 2001—and you of course are the Class of September 11, having arrived here only days before those attacks and the changed world they ushered in. Which means that, whether you know it or not, by declaring yourselves as questioners, as humanists, you already have gone some way in defining yourselves, for good or ill, as outsiders.

I must confess it: I, too, was an English major...for nineteen days. This was back in the Berkeley of the East, at Harvard College, and I was a refugee from philosophy—too much logic and math in that for me, too practical—and I tarried in English just long enough to sit in on one tutorial (on Keats's "To Autumn"), before I fled into my own major, one I conceived and designed myself, called, with even greater practical attention to the future, "Modern Literatures and Aesthetics."

Which meant of course that almost exactly twenty-five years ago today I was sitting where you are now, hanging on by a very thin thread. Shortly thereafter I found myself lying on my back in a small apartment in Cambridge, Massachusetts, reading The New York Times and The New York Review—very thoroughly: essentially spending all day, every day, lying on my back, reading, living on graduation-present money and subsisting on deliveries of fried rice from the Hong Kong restaurant (which happened to be two doors away—though I felt I was unable to spare the time to leave the apartment, or the bed, to pick it up). The Chinese food deliveryman looked at me dispassionately and then, as one month stretched into two, a bit knowingly. If I knew then what I know now I would say I was depressed. At the time, however, I was under the impression that I was resting.

Eventually I became a writer, which is not a way to vanquish existential dread but a way to live with it and even to earn a modest living from it. Perhaps some of you will follow that path; but whatever you decide to "do with that," remember: whether you know it yet or not, you have doomed yourselves by learning how to read, learning how to question, learning how to doubt. And this is a most difficult time—the most difficult I remember—to have those skills. Once you have them, however, they are not easy to discard. Finding yourself forced to see the gulf between what you are told about the world, whether it's your government doing the telling, or your boss, or even your family or friends, and what you yourself can't help but understand about that world—this is not always a welcome kind of vision to have. It can be burdensome and awkward and it won't always make you happy.

I think I became a writer in part because I found that yawning difference between what I was told and what I could see to be inescapable. I started by writing about wars and massacres and violence. The State Department, as I learned from a foreign service officer in Haiti, has a technical term for the countries I mostly write about: the TFC beat. TFC—in official State Department parlance—stands for "Totally Fucked-up Countries." After two decades of this, of Salvador and Haiti and Bosnia and Iraq, my mother—who already had to cope with the anxiety of a son acquiring a very expensive education in "Modern Literatures and Aesthetics"—still asks periodically: Can't you go someplace nice for a change?

When I was sitting where you are sitting now the issue was Central America and in particular the war in El Salvador. America, in the backwash of defeat in Vietnam, was trying to protect its allies to the south—to protect regimes under assault by leftist insurgencies—and it was doing so by supporting a government in El Salvador that was fighting the war by massacring its own people. I wrote about one of those events in my first book, The Massacre at El Mozote, which told of the murder of a thousand or so civilians by a new, elite battalion of the Salvadoran army—a battalion that the Americans had trained. A thousand innocent civilians dead in a few hours, by machete and by M-16.

Looking back at that story now—and at many of the other stories I have covered over the years, from Central America to Iraq—I see that in part I was trying to find a kind of moral clarity: a place, if you will, where that gulf that I spoke about, between what we see and what is said, didn't exist. Where better to find that place than in the world where massacres and killings and torture happen, in the place, that is, where we find evil. What could be clearer than that kind of evil?

But I discovered it was not clear at all. Chat with a Salvadoran general about the massacre of a thousand people that he ordered and he will tell you that it was military necessity, that those people had put themselves in harm's way by supporting the guerrillas, and that "such things happen in war." Speak to the young conscript who wielded the machete and he will tell you that he hated what he had to do, that he has nightmares about it still, but that he was following orders and that if he had refused he would have been killed. Talk to the State Department official who helped deny that the massacre took place and he will tell you that there was no definitive proof and, in any case, that he did it to protect and promote the vital interests of the United States. None of them is lying. I found that if you search for evil, once you leave the corpses behind you will have great difficulty finding the needed grimacing face.

Let me give you another example. It's from 1994, during an unseasonably warm February day in a crowded market in the besieged city of Sarajevo. I was with a television crew—I was writing a documentary on the war in Bosnia for Peter Jennings at ABC News—but our schedule had slipped, as it always does, and we had not yet arrived at the crowded marketplace when a mortar shell landed. When we arrived with our cameras a few moments later, we found a dark swamp of blood and broken bodies and, staggering about in it, the bereaved, shrieking and wailing amid a sickening stench of cordite. Two men, standing in rubber boots knee-deep in a thick black lake, had already begun to toss body parts into the back of a truck. Slipping about on the wet pavement, I tried my best to count the bodies and the parts of them, but the job was impossible: fifty? sixty? When all the painstaking matching had been done, sixty-eight had died there.

As it happened, I had a lunch date with their killer the following day. The leader of the Serbs, surrounded in his mountain villa by a handful of good-looking bodyguards, had little interest in the numbers of dead. We were eating stew. "Did you check their ears?" he asked. I'm sorry? "They had ice in their ears." I paused at this and worked on my stew. He meant, I realized, that the bodies were corpses from the morgue that had been planted, that the entire scene had been trumped up by Bosnian intelligence agents. He was a psychiatrist, this man, and it seemed to me, after a few minutes of discussion, that he had gone far to convince himself of the truth of this claim. I was writing a profile of him and he of course did not want to talk about bodies or death. He preferred to speak of his vision for the nation.[1]

For me, the problem in depicting this man was simple: the level of his crimes dwarfed the interest of his character. His motivations were paltry, in no way commensurate with the pain he had caused. It is often a problem with evil and that is why, in my experience, talking with mass murderers is invariably a disappointment. Great acts of evil so rarely call forth powerful character that the relation between the two seems nearly random. Put another way, that relation is not defined by melodrama, as popular fiction would have it. To understand this mass murderer, you need Dostoevsky, or Conrad.[2]

Let me move closer to our own time, because you are the Class of September 11, and we do not lack for examples. Never in my experience has frank mendacity so dominated our public life. This has to do less with ideology itself, I think, than with the fact that our country was attacked and that—from the Palmer Raids after World War I, to the internment of Japanese-Americans during World War II, to the McCarthyite witch-hunts during the Fifties—America tends to respond to such attacks, or the threat of them, in predictably paranoid ways. Notably, by "rounding up the usual suspects" and by dividing the world, dramatically and hysterically, into a good part and an evil part. September 11 was no exception to this: indeed, in its wake—coterminous with your time here—we have seen this American tendency in its purest form.

One welcome distinction between the times we live in and those other periods I have mentioned is the relative frankness of our government officials—I should call it unprecedented frankness—in explaining how they conceive the relationship of power and truth. Our officials believe that power can determine truth, as an unnamed senior adviser to the President explained to a reporter last fall:

We're an empire now, and when we act, we create our own reality. And while you're studying that reality—judiciously, as you will—we'll act again, creating other new realities, which you can study too, and that's how things will sort out.[3]

The reporter, the adviser said, was a member of what he called "the reality-based community," destined to "judiciously study" the reality the administration was creating. Now it is important that we realize—and by "we" I mean all of us members of the "reality-based community"—that our leaders of the moment really do believe this, as anyone knows who has spent much time studying September 11 and the Iraq war and the various scandals that have sprung from those events—the "weapons of mass destruction" scandal and the Abu Ghraib scandal, to name only two.

What is interesting about both of those is that the heart of the scandal, the wrongdoing, is right out in front of us. Virtually nothing of great importance remains to be revealed. Ever since Watergate we've had a fairly established narrative of scandal. First you have revelation: the press, usually with the help of various leakers within the government, reveals the wrongdoing. Then you have investigation, when the government—the courts, or Congress, or, as with Watergate, both—constructs a painstaking narrative of what exactly happened: an official story, one that society—that the community—can agree on. Then you have expiation, when the judges hand down sentences, the evildoers are punished, and the society returns to a state of grace.

What distinguishes our time—the time of September 11—is the end of this narrative of scandal. With the scandals over weapons of mass destruction and Abu Ghraib, we are stuck at step one. We have had the revelation; we know about the wrongdoing. Just recently, in the Downing Street memo, we had an account of a high-level discussion in Britain, nearly eight months before the Iraq war, in which the head of British intelligence flatly tells the prime minister—the intelligence officer has just returned from Washington—that not only has the President of the United States decided that "military action was...inevitable" but that—in the words of the British intelligence chief—"the intelligence and facts were being fixed around the policy." This memo has been public for weeks.[4]

So we have had the revelations; we know what happened. What we don't have is any clear admission of—or adjudication of—guilt, such as a serious congressional or judicial investigation would give us, or any punishment. Those high officials responsible are still in office. Indeed, not only have they received no punishment; many have been promoted. And we—you and I, members all of the reality-based community—we are left to see, to be forced to see. And this, for all of us, is a corrupting, a maddening, but also an inescapable burden.

Let me give you a last example. The example is in the form of a little play: a reality-based playlet that comes to us from the current center of American comedy. I mean the Pentagon press briefing room, where the real true-life comedies are performed. The time is a number of weeks ago. The dramatis personae are Secretary of Defense Donald Rumsfeld; Vice Chairman of the Joint Chiefs (and soon to be promoted) General Peter Pace of the Marine Corps; and of course, playing the Fool, a lowly and hapless reporter.

The reporter's question begins with an involved but perfectly well-sourced discussion of Abu Ghraib and the fact that all the reports suggest that something systematic—something ordered by higher-ups—was going on there. He mentions the Sanchez memo, recently released, in which the commanding general in Iraq at the time, Lieutenant General Ricardo Sanchez, approved twelve interrogation techniques that, as the reporter says, "far exceed limits established by the Army's own field manual." These include prolonged stress positions, sensory deprivation (or "hooding"), the use of dogs "to induce stress," and so on; the reporter also mentions extraordinary "rendition" (better known as kidnapping, in which people are snatched off the streets by US intelligence agents and brought to third countries like Syria and Egypt to be tortured). Here's his question, and the officials' answer:

Hapless Reporter: And I wonder if you would just respond to the suggestion that there is a systematic problem rather than the kinds of individual abuses we've heard of before.

Secretary Rumsfeld: I don't believe there's been a single one of the investigations that have been conducted, which has got to be six, seven, eight or nine—

General Pace: Ten major reviews and 300 individual investigations of one kind or another.

Secretary Rumsfeld: And have you seen one that characterized it as systematic or systemic?

General Pace: No, sir.

Rumsfeld: I haven't either.

Hapless Reporter: What about—?

Rumsfeld: Question?

[Laughter][5]

And, as the other reporters laughed, Secretary Rumsfeld did indeed ignore the attempt to follow up, and went on to the next question.

But what did the hapless reporter want to say? All we have is his truncated attempt at a question: "What about—?" We will never know, of course. Perhaps he wanted to read from the very first Abu Ghraib report, directed by US Army Major General Antonio Taguba, who wrote in his conclusion

that between October and December 2003, at the Abu Ghraib Confinement Facility, numerous incidents of sadistic, blatant, and wanton criminal abuses were inflicted.... This systemic and illegal abuse was intentionally perpetrated.... [Emphasis added.][6]

Or perhaps this from the Red Cross report, which is the only contemporaneous account of what was going on at Abu Ghraib, recorded by witnesses at the time:

These methods of physical and psychological coercion were used by the military intelligence in a systematic way to gain confessions and extract information or other forms of co-operation from persons who had been arrested in connection with suspected security offenses or deemed to have an "intelligence value." [Emphasis added.][7]

(I should note here, by the way, that the military itself estimated that between 85 and 90 percent of the prisoners at Abu Ghraib had "no intelligence value.")

Between that little dramatic exchange—

Rumsfeld: And have you seen one that characterized it as systematic or systemic?

General Pace: No, sir.

Rumsfeld: I haven't either—

—and the truth, there is a vast gulf of lies. For these reports do use the words "systematic" and "systemic"—they are there, in black and white—and though the reports have great shortcomings, the truth is that they tell us basic facts about Abu Ghraib: first, that the torture and abuse was systematic; that it was ordered by higher-ups, and not carried out by "a few bad apples," as the administration has maintained; that responsibility for it can be traced—in documents that have been made public—to the very top ranks of the administration, to decisions made by officials in the Department of Justice and the Department of Defense and, ultimately, the White House. The significance of what we know about Abu Ghraib, and about what went on—and, most important, what is almost certainly still going on—not only in Iraq but at Guantánamo Bay, Cuba, and Bagram Air Base in Afghanistan, and other military and intelligence bases, some secret, some not, around the world—is clear: that after September 11, shortly after you all came to Berkeley, our government decided to change this country from a nation that officially does not torture to one, officially, that does.

What is interesting about this fact is not that it is hidden but that it is revealed. We know this—or rather those who are willing to read know it. Those who can see the gulf between what officials say and what the facts are. And we, as I have said, remain fairly few. Secretary Rumsfeld can say what he said at that nationally televised news conference because no one is willing to read the reports. We are divided, then, between those of us willing to listen, and believe, and those of us determined to read, and think, and find out. And you, English majors of the Class of 2005, you have taken the fateful first step in numbering yourselves, perhaps irredeemably, in the second category. You have taken a step along the road to being Empiricists of the Word.

Now we have come full circle—all the way back to the question: What are you going to do with that? I cannot answer that question. Indeed, I still have not answered it for myself. But I can show you what you can do with "that," by quoting a poem. It is by a friend of mine who died almost a year ago, after a full and glorious life, at the age of ninety-three. Czeslaw Milosz was a legend in Berkeley, of course, a Nobel Prize winner—and he saw as much injustice in his life as any man. He endured Nazism and Stalinism and then came to Berkeley to live and write for four decades in a beautiful house high on Grizzly Peak.

Let me read you one of his poems: it is a simple poem, a song, as he calls it, but in all its beauty and simplicity it bears closely on the subject of this talk.

A SONG ON THE END OF THE WORLD

On the day the world ends
A bee circles a clover,
A fisherman mends a glimmering net.
Happy porpoises jump in the sea,
By the rainspout young sparrows are playing
And the snake is gold-skinned as it should always be.
On the day the world ends
Women walk through the fields under their umbrellas,
A drunkard grows sleepy at the edge of a lawn,
Vegetable peddlers shout in the street
And a yellow-sailed boat comes nearer the island,
The voice of a violin lasts in the air
And leads into a starry night.
And those who expected lightning and thunder
Are disappointed.
And those who expected signs and archangels' trumps
Do not believe it is happening now.
As long as the sun and the moon are above,
As long as the bumblebee visits a rose,
As long as rosy infants are born
No one believes it is happening now.
Only a white-haired old man, who would be a prophet
Yet is not a prophet, for he's much too busy,
Repeats while he binds his tomatoes:
There will be no other end of the world,
There will be no other end of the world.

"There will be no other end of the world." I should add that there are two words at the end of the poem, a place and a date. Czeslaw wrote that poem in Warsaw in 1944. Can we think of a better place to put the end of the world? Perhaps Hiroshima 1945? Or Berlin 1945? Or even perhaps downtown New York in September 2001?

When Czeslaw Milosz wrote his poem in Warsaw, in 1944, there were those, as now, who saw the end of the world and those who did not. And now, as then, truth does matter. Integrity—much rarer than talent or brilliance—does matter. In that beautiful poem, written by a man—a poet, an artist—trying to survive at the end of the world, the white-haired old man binding his tomatoes is like yourselves. He may not have been a prophet but he could see. Members of the Class of September 11, whatever you decide "to do with that"—whether you are writers or professors or journalists, or nurses or lawyers or executives—I hope you will think of that man and his tomatoes, and keep your faith with him. I hope you will remember that man, and your own questioning spirit. Will you keep your place beside him?