WASHINGTON — A tense congressional hearing on the spread of white nationalism on social media quickly illustrated the problem Silicon Valley faces, as anonymous users on YouTube posted vitriolic attacks targeting others on the basis of race and religion.

The hearing – held by the House Judiciary Committee – was streamed live on the video site owned by Google, which testified Tuesday. Alongside the stream, a live chat featured posts from users, some of whom published anti-Semitic screeds and argued that white nationalism is not a form of racism.

A police officer stands guard last month in front of the Al Noor mosque in Christchurch, New Zealand, where 50 people were killed in an anti-Muslim attack on March 15. Social media is being blamed for helping hate groups spread their messages. Associated Press/Vincent Yu

“These Jews want to destroy all white nations,” wrote the user Celtic Pride.

“Anti-hate is a code word for anti-white,” wrote another named Fight White Genocide.

Appearing before the committee, Alexandria Walden, counsel for free expression and human rights at Google, stressed that the tech giant has invested in people and technology to remove content that incites violence or spreads hate. “We know the very platforms that have enabled these societal benefits can be abused,” she said.

By the time she spoke, though, YouTube had “disabled comments on the livestream,” the company confirmed, citing the uptick in hateful content. Other YouTube live streams still had live chats enabled.

“This just illustrates part of the problem we’re dealing with,” said Rep. Jerry Nadler, D-N.Y., the chairman of the committee.

His comment was greeted with skepticism by Rep. Louie Gohmert, a Texas Republican. “Could that be another hate hoax?” he asked. “Just keep an open mind.”

The tension infused the hearing, which had been called to explore the spread of hate speech and the rise of white supremacist movements in the United States, which Nadler described as an “urgent crisis in our country.” He said white nationalism had motivated many recent deadly attacks, including the car attack at the neo-Nazi rally in Charlottesville, Virginia, in 2017, the attack on a Pittsburgh synagogue last year and the shooting at two mosques in New Zealand last month.

In many cases, Nadler said, social-media sites had served “as conduits to spread vitriolic hate messages to every home,” adding that “Congress in recent years also has failed to take seriously the threat.” He and other Democrats also raised the possibility that Trump had worsened the problem, given the president’s rhetoric on Twitter and his administration’s broader approach to handling hate crimes.

Some Republican lawmakers on the committee emphasized they shared a concern about the spread of white supremacy. Their comments came months after one of their own members, Rep. Steve King, faced an overwhelming vote of condemnation in the House for his comments about white nationalism. The top GOP lawmaker on the House Judiciary Committee, Rep. Doug Collins of Georgia, mentioned the matter but not King by name, even as he said that “nothing white nationalists claim resonates with any of us here today.”

But the witnesses invited to testify by Republicans sharply criticized Democrats for holding the hearing. Candace Owens, a prominent conservative and public Trump ally, opened her testimony by charging that the hearing was a form of “fear mongering” that serves as part of Democrats’ 2020 presidential election strategy.

“They blame Facebook, they blame Google, they blame Twitter, really [Democrats] blame the birth of social media, which has disrupted their monopoly on minds,” she said.

Tuesday’s hearing came just weeks after a shooter targeted two mosques in Christchurch, New Zealand, in an attack that reverberated widely on social media. The massacre was broadcast live on Facebook, while users on an anonymous web forum called 8chan schemed to keep re-uploading the footage to major tech platforms in ways that evaded detection.

Appearing Tuesday, representatives of Facebook and Google each emphasized their companies’ efforts to hire more content reviewers and to invest in artificial-intelligence tools that can spot and remove troubling posts and videos before they go viral. “Hate can take many forms beyond overt terrorism, and none of it is permitted on our platform,” said Neil Potts, a public policy director at Facebook.

But the Anti-Defamation League, which also testified Tuesday, estimated that white supremacists had been responsible for about three-quarters of all domestic extremist murders in 2018. “These platforms are like round-the-clock digital white supremacist rallies,” said Eileen Hershenov, the organization’s senior vice president of policy.

Civil-rights advocates urged lawmakers to consider new regulations for social-media giants, following other countries that have sought to hold Facebook, Google and their peers accountable for harmful content posted online.

“Instead of hiding under hoods, they now organize at computer screens,” said Kristen Clarke, the president of the Lawyers’ Committee for Civil Rights Under Law, about the rise of white supremacists.
