Twitter, YouTube, Facebook and Microsoft vow to cut off terrorist content

The platforms are sharing a database to log and quickly quash the spread of hateful content online. Instagram also announced features to quell harassment.

In a time of growing divisions and increasingly negative online commentary, social media and tech companies are making a stand.

Facebook, Twitter, YouTube and Microsoft have pledged to come together to stop the spread of terrorist content online.

A recent post on Facebook’s newsroom reads, in part:

Starting today, we commit to the creation of a shared industry database of “hashes” — unique digital “fingerprints” — for violent terrorist imagery or terrorist recruitment videos or images that we have removed from our services. By sharing this information with each other, we may use the shared hashes to help identify potential terrorist content on our respective hosted consumer platforms. We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online.

Our companies will begin sharing hashes of the most extreme and egregious terrorist images and videos we have removed from our services — content most likely to violate all of our respective companies’ content policies. Participating companies can add hashes of terrorist images or videos that are identified on one of our platforms to the database. Other participating companies can then use those hashes to identify such content on their services, review against their respective policies and definitions, and remove matching content as appropriate.

Twitter published the same announcement on its own blog as well.
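For readers who want a concrete picture of how such a shared hash database might work, here is a minimal Python sketch. The companies have not published implementation details, so the function names, the use of SHA-256 and the in-memory set standing in for the shared database are all illustrative assumptions; production systems typically rely on perceptual hashes that survive re-encoding.

    import hashlib

    # Stand-in for the shared industry database: a set of hex digests
    # contributed by participating companies. The real data structure is
    # not public; this is purely illustrative.
    shared_hash_db = set()

    def fingerprint(media_bytes: bytes) -> str:
        # Compute a digital "fingerprint" of a piece of media. SHA-256 is
        # used here only as an example; real systems generally use
        # perceptual hashes that tolerate re-encoding and cropping.
        return hashlib.sha256(media_bytes).hexdigest()

    def contribute(media_bytes: bytes) -> None:
        # A platform adds the fingerprint of content it has removed.
        shared_hash_db.add(fingerprint(media_bytes))

    def matches_shared_hash(media_bytes: bytes) -> bool:
        # Another platform checks an upload against the shared fingerprints.
        # Per the announcement, a match only flags content for review under
        # that company's own policies; nothing is removed automatically.
        return fingerprint(media_bytes) in shared_hash_db

    # Example: platform A contributes a removed video, platform B checks an upload.
    contribute(b"<bytes of removed video>")
    print(matches_shared_hash(b"<bytes of removed video>"))  # True -> queue for review

As the announcement stresses, a match is only a signal: each company still reviews the content against its own definitions before deciding whether to remove it.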

The move comes after online platforms have faced scrutiny over hate speech and increasing pressure to police content.

The Verge reported:

Social networks have faced criticism this year for doing too little to prevent their platforms from being used to spread terrorist propaganda. Amid pressure from the European Union, the companies agreed this year to remove content within 24 hours if it qualifies as hate speech or propaganda. Meanwhile, Twitter faced a lawsuit in the United States alleging that its slowness in removing posts from ISIS constituted material support to the terrorist group. The lawsuit was dismissed in August.

CNN Tech reported:

The European Commission found that only 40 percent of hate speech published on the firms’ platforms was reviewed within 24 hours.

For those worrying about user privacy, the companies clarified that the “hashes” database doesn’t contain identifiable user data:

As we continue to collaborate and share best practices, each company will independently determine what image and video hashes to contribute to the shared database. No personally identifiable information will be shared, and matching content will not be automatically removed. Each company will continue to apply its own policies and definitions of terrorist content when deciding whether to remove content when a match to a shared hash is found. And each company will continue to apply its practice of transparency and review for any government requests, as well as retain its own appeal process for removal decisions and grievances. As part of this collaboration, we will all focus on how to involve additional companies in the future.

Throughout this collaboration, we are committed to protecting our users’ privacy and their ability to express themselves freely and safely on our platforms. We also seek to engage with the wider community of interested stakeholders in a transparent, thoughtful and responsible way as we further our shared objective to prevent the spread of terrorist content online while respecting human rights.

Could this unifying force be enough to curb the spread of other damaging content, such as fake news? Some expressed hope that it might be.

TechCrunch’s Sarah Perez wrote:

Given the recent discussions about the spread of fake news on social media, one hopes this new collaboration could potentially pave a path for the companies working together on other initiatives going forward.

Instagram introduces new features to crack down on harassment

Though the move doesn’t address terrorist content per se, Instagram also recently announced additional features to cut down on harassment and abusive comments.

In a blog post, the company said:

Comments are where the majority of conversation happens on Instagram. While comments are largely positive, they’re not always kind or welcome. Previously, we launched the ability to filter comments based on keywords. This was an important step in giving you more control over your comments experience. However, there are two more features we think will improve this experience.

We’ll soon add a way to turn off comments on any post. Sometimes there may be moments when you want to let your post stand on its own. Previously this was only available for a small number of accounts. In a few weeks, it will be available for everyone. Tap “Advanced Settings” before you post and then select “Turn Off Commenting.” You can also tap the … menu any time after posting to turn commenting back on.
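For those curious how keyword-based comment filtering of the kind Instagram describes can work under the hood, here is a small, purely illustrative Python sketch; the function names and the whole-word, case-insensitive matching are assumptions on our part, not Instagram’s actual implementation.

    import re

    def build_comment_filter(blocked_keywords):
        # Return a predicate that is True for comments that should stay visible.
        # Case-insensitive whole-word matching is an assumption for illustration.
        pattern = re.compile(
            r"\b(" + "|".join(re.escape(k) for k in blocked_keywords) + r")\b",
            re.IGNORECASE,
        )
        return lambda comment: pattern.search(comment) is None

    is_visible = build_comment_filter(["spamword", "insult"])
    comments = ["Great shot!", "What an insult to photography"]
    print([c for c in comments if is_visible(c)])  # ['Great shot!']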

Instagram also enables users to remove abusive followers and report self-injury posts.

Consumerist reported:

Users who have set their accounts to “Private” will now also be able to remove followers they have already accepted, something that wasn’t possible before without blocking that user. Anyone you remove from your list of approved followers won’t be notified.

To help those who may be struggling in the community, Instagram is also adding anonymous reporting for self-injury posts. If you report a friend who may be thinking about hurting themselves, Instagram will connect your friend to organizations that offer help.

Whether the target is terrorist content or bullying remarks, social platforms are likely to make additional moves to combat online hatred.

Perez wrote:

… [B]ecause of their outsized influence on today’s web, these companies are beginning to wake up to the fact that they will be held accountable for the content shared on their platforms, given that content has the ability to influence everything from terrorist acts to how people perceive the world and even politics on a global scale.

What do you think of these efforts, Ragan readers?
