Facebook to make big changes to prevent extremist posts
Facebook, the world's largest social network, has come under intense pressure to limit the spread of hateful messages, images and videos on its site.
Illustration photo. Source: AFP
On September 17, Facebook announced a series of changes to limit hate speech and extremism on the social network, expanding its definition of terrorist organizations and planning to deploy artificial intelligence to detect and block live videos of mass shootings.
The social media company is also expanding a program that redirects users searching for extremist content to pages run by organizations that help people disengage from hateful ideologies.
The announcement comes a day before a congressional hearing on how Facebook, Google and Twitter handle violent content. US lawmakers are expected to question social media executives about how they handle posts from extremists.
The social network has also faced harsh criticism for failing to detect and remove live video of the massacre of 51 people in Christchurch, New Zealand.
In at least three mass shootings this year, including one in Christchurch, plans for violence were announced in advance on 8chan, an online message board. US federal lawmakers questioned 8chan’s owners this month.
In its announcement post, Facebook said the tragedy in Christchurch had strongly influenced updates to its content moderation practices.
Facebook also said it recently developed a plan with Microsoft, Twitter, Google and Amazon to address how technology is used to spread terrorist content.
Facebook has long touted its ability to filter terrorism-related content on its platform. Over the past two years, the social media company says it has been able to detect and remove 99% of extremist posts — about 26 million pieces of content — before they are reported.
The team working on countering extremism at Facebook has grown to 350 people, including experts in law enforcement, national security and counterterrorism, as well as academics who study extremism.
To detect more harmful real-world content, Facebook said it is updating its artificial intelligence to better flag violent footage within the first few seconds of a live broadcast.
The world's largest social network said it has worked with US and British law enforcement officials to obtain camera footage from counter-terrorism training programs to help its artificial intelligence systems learn what violent events actually look like.
Since March, Facebook has also been redirecting users searching for terms related to “white power” to resources like Life After Hate, an organization founded by former violent extremists to provide crisis intervention and outreach. In the wake of the Christchurch tragedy, Facebook is expanding that feature to Australia and Indonesia.