SYDNEY: Facebook, Twitter and other social media platforms agreed to take further steps to stop the spread of violent extremist content in Australia as global pressure increases following the live-streaming of the New Zealand massacre in March.
An Australian task force that also includes YouTube and Australia’s biggest mobile-phone operators on Sunday released a report with nine areas for action, including efforts to prevent, detect and remove such material.
The group, officially named The Australian Taskforce to Combat Terrorist and Extreme Violent Material Online, recommended staging a simulated event as soon as this year to test the response of industry and government.
Fifty-one people died in the Christchurch attacks, which could be viewed on Facebook, and the company came under criticism for not taking the material down fast enough.
G20 leaders meeting in Osaka this weekend called for online platforms to do more to prevent and detect such content.
The G20 statement was “a clear warning from global leaders,” Australian Prime Minister Scott Morrison said in a statement. “Social media companies are on notice.”
In a statement, Facebook said it’s been reviewing what more it can do “to limit our services from being used to cause harm or spread hate” since the New Zealand attacks.
The company said it has restricted who can use its live-streaming service and co-developed the nine-point action plan.
“We welcome the Australian government’s announcement of the Taskforce report,” Facebook said.
Australia passed legislation in April aimed at stopping violent crime and acts of extremism from being live-streamed online.
That law carries penalties of up to 10% of a company’s annual turnover, and potential prison terms of up to three years for executives of social media companies.