Communicators struggle to contain video of mass shooting in New Zealand

Attacks by gunmen on two mosques were streamed live, raising questions about tech companies’ responsibility in curbing violent videos. Experts urge users not to share the footage.


A tragic shooting in New Zealand has highlighted how difficult social media platforms are to control and moderate. Two gunmen separately attacked mosques in Christchurch, killing dozens of people.

One of the attackers appears to have livestreamed his actions on Facebook, forcing the company to answer for its role in the tragedy.

CNN reported:

“New Zealand Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter’s Facebook and Instagram accounts and the video,” Mia Garlick, Facebook’s director of policy for Australia and New Zealand, said in a statement.

Hours after the attack, however, copies of the gruesome video continued to appear on Facebook, YouTube and Twitter, raising new questions about the companies’ ability to manage harmful content on their platforms.

Facebook is “removing any praise or support for the crime and the shooter or shooters as soon as we’re aware,” Garlick said.

Other social media organizations were also forced to address the footage circulating on their platforms.

CNN continued:

Twitter (TWTR) said it suspended an account related to the shooting and is working to remove the video from its platform.

YouTube, which is owned by Google (GOOGL), removes “shocking, violent and graphic content” as soon as it is made aware of it, according to a Google spokesperson.

New Zealand police asked social media users to stop sharing the purported shooting footage and said they were seeking to have it taken down.

The tech companies sought to express sympathy for the victims and condemn the violence.

Fortune reported:

“Our hearts go out to the victims of this terrible tragedy. Shocking, violent and graphic content has no place on our platforms, and is removed as soon as we become aware of it. As with any major tragedy, we will work cooperatively with the authorities,” Google said in a statement.

“We are deeply saddened to hear of the shootings in Christchurch. Twitter has rigorous processes and a dedicated team in place for managing emergency situations such as this. We will also cooperate with law enforcement to facilitate their investigations as required,” Twitter said.

Violent video making its way onto social media platforms is nothing new. The latest video only highlights how little social media companies have done to address inappropriate content on their sites.

CNN wrote:

This is the latest case of social media companies being caught off guard by killers posting videos of their crimes, and other users then sharing the disturbing footage. It has happened in the United States, Thailand, Denmark and other countries.

Friday’s video reignites questions about how social media platforms handle offensive content: Are the companies doing enough to try to catch this type of content? How quickly should they be expected to remove it?

“While Google, YouTube, Facebook and Twitter all say that they’re cooperating and acting in the best interest of citizens to remove this content, they’re actually not because they’re allowing these videos to reappear all the time,” said Lucinda Creighton, a senior adviser at the Counter Extremism Project, an international policy organization.

The companies claim they moved quickly to remove the disturbing images, but users say the video was available for hours before being scrubbed.

Bloomberg reported:

While platforms including Twitter and YouTube said they moved fast to remove the content, users reported it was still widely available hours after being first uploaded to the alleged shooter’s Facebook account. The video, which shows a first-person view of the killings in Christchurch, New Zealand, was readily accessible during and after the attack — as was the suspect’s hate-filled manifesto.

Facebook, YouTube and other social-media platforms are struggling to scrub offensive content from sites that generate billions of dollars in revenue from advertisers. In the U.S., those sites also have been criticized for spreading political misinformation, with Facebook founder Mark Zuckerberg being called before Congress.

The gunmen also appear to be part of a larger ecosystem of toxic ideas shared on social media. They used social media to promote a white nationalist manifesto and encouraged others to subscribe to the controversial YouTube channel of PewDiePie, who has performed anti-Semitic gestures as “parody.”

The New York Times reported:

Before the shooting, someone appearing to be the gunman posted links to a white-nationalist manifesto on Twitter and 8chan, an online forum known for extremist right-wing discussions. The 8chan post included a link to what appeared to be the gunman’s Facebook page, where he said he would also broadcast live video of the attack.

The Twitter posts showed weapons covered in the names of past military generals and men who have recently carried out mass shootings.

In his manifesto, he identified himself as a 28-year-old man born in Australia and listed his white nationalist heroes. …

Felix Kjellberg, the man behind PewDiePie, sought to distance himself from the ideology of the gunmen.

The New York Times continued:

Felix Kjellberg, a polarizing YouTube celebrity known as PewDiePie, distanced himself from the attacks after the man who filmed himself shooting victims at a mosque encouraged viewers to “subscribe to PewDiePie” in a video livestream.

“I feel absolutely sickened having my name uttered by this person,” Mr. Kjellberg, a Swede, said on Twitter.

Police have turned to their own social media channels, as well as traditional media interviews, to ask users not to share the footage in hopes of containing the video.

CNN reported:

John Battersby, a counter-terrorism expert at Massey University of New Zealand, said the country had been spared mass terrorist attacks, partly because of its isolation. Social media had changed that.

“This fellow live streamed the shooting and his supporters have cheered him on, and most of them are not in New Zealand,” he said. “Unfortunately once it’s out there and it’s downloaded, it can still be (online),” he added.

The spread of the video could inspire copycats, said CNN law enforcement analyst Steve Moore, a retired supervisory special agent for the FBI.

“What I would tell the public is this: Do you want to help terrorists? Because if you do, sharing this video is exactly how you do it,” Moore said.

“Do not share the video or you are part of this,” he added.

Some tech commentators remarked on how hard moderation is with modern technology, and on the importance of a fast PR response, even when the solution to the problem is difficult and will take time to implement.

Social media was also the place for users to share their sadness, pain and anger.

Many users directed that anger at the social media platforms themselves.

World leaders shared messages of empathy and condemned the attacks.

Social media companies will have to do more to convince consumers that their platforms are safe and free of hate speech and violent content. Whatever tactics they decide to use, the process is certain to take time and money—and some platforms might be running out of time with impatient users.
