Banish the Cyber-Bigots
By Michael Gerson
Friday, September 25, 2009
The transformation of Germany in the 1920s and '30s from the nation of Goethe to the nation of Goebbels is a specter that haunts, or should haunt, every nation.
The triumph of Nazi propaganda in this period is the subject of a remarkable exhibit at the United States Holocaust Memorial Museum (where I serve on the governing board). Germany in the 1920s was a land of broad literacy and diverse politics, boasting 146 daily newspapers in Berlin alone. Yet in the course of a few years, a fringe party was able to define a national community by scapegoating internal enemies; elevate a single, messianic leader; and keep the public docile with hatred while the state committed unprecedented crimes.
The adaptive use of new technology was central to this achievement. The Nazis pioneered voice amplification at rallies, the distribution of recorded speeches and the sophisticated targeting of poster art toward groups and regions.
But it was radio that proved the most powerful tool. The Nazis worked with radio manufacturers to provide Germans with free or low-cost "people's receivers." This new technology was disorienting, taking the public sphere, for the first time, into private places -- homes, schools and factories. "If you tuned in," says Steve Luckert, curator of the exhibit, "you heard strangers' voices all the time. The style had a heavy emphasis on emotion, tapping into a mass psychology. You were bombarded by information that you were unable to verify or critically evaluate. It was the Internet of its time."
This comparison to the Internet is apt. The Nazis would have found much to admire in the adaptation of their message on neo-Nazi, white supremacist and Holocaust-denial Web sites.
But the challenge of this technology is not merely an isolated subculture of hatred. It is a disorienting atmosphere in which information is difficult to verify or critically evaluate, the rules of discourse are unclear, and emotion -- often expressed in CAPITAL LETTERS -- is primary. User-driven content on the Internet often consists of bullying, conspiracy theories and racial prejudice. The absolute freedom of the medium paradoxically encourages authoritarian impulses to intimidate and silence others. The least responsible contributors see their darkest tendencies legitimated and reinforced, while serious voices are driven away by the general ugliness.
Ethicist Clive Hamilton calls this a "belligerent brutopia." "The Internet should represent a great flourishing of democratic participation," he argues. "But it doesn't. . . . The brutality of public debate on the Internet is due to one fact above all -- the option of anonymity. The belligerence would not be tolerated if the perpetrators' identities were known because they would be rebuffed and criticized by those who know them. Free speech without accountability breeds dogmatism and confrontation."
This destructive disinhibition is disturbing in itself. It also allows hatred to invade respected institutional spaces on the Internet, gaining for these ideas a legitimacy denied to fringe Web sites. After the Bernard Madoff scandal broke, for example, major newspaper sites included user-generated content such as "Find a Jew who isn't Crooked" and "Just another jew money changer thief" -- sentiments that newspapers would not have printed as letters to the editor. Postings of this kind regularly attack immigrants and African Americans, recycle centuries of anti-Semitism and deny the events of the Holocaust as a massive Jewish lie.
Legally restricting such content -- apart from prosecuting direct harassment and threats against individuals or incitement to violence -- is impossible. In America, the First Amendment protects blanket statements of bigotry. But this does not mean that popular news sites, along with platforms such as Facebook and YouTube, are constitutionally required to provide forums for bullies and bigots. As private institutions, they are perfectly free to set rules against racism and hatred. This is not censorship; it is the definition of standards.
Some online institutions, such as The New York Times and the Los Angeles Times, screen user comments before posting them. Others, such as The Post and The Wall Street Journal, rely on readers to identify objectionable content -- a questionable strategy because numbness to abusiveness and hatred on the Internet is part of the challenge.
Whatever the method, no reputable institution should allow its publishing capacity, in print or online, to be used as the equivalent of the wall of a public bathroom stall.
The exploitation of technology by hatred will never be eliminated. But hatred must be confined to the fringes of our culture -- as the hatred of other times should have been.