Facebook has been using real-life first-person camera footage from firearms training programmes to develop artificial intelligence that can more effectively auto-block the type of video that was livestreamed on March 15.
It also plans to direct New Zealand users who search for white supremacist content to anti-hate groups that can help them de-radicalise, a strategy already in place for US users.
The use of the shooting footage, supplied by the US and UK Governments, is part of an announcement today detailing how the social media giant is building its capacity to stop such videos spreading.
Facebook has been widely criticised over its failure to stop the spread of the March 15 footage, which users attempted to upload 1.5 million times in the first 24 hours.
Facebook's systems automatically blocked 1.2 million of those uploads, but it is not known how many people viewed the remaining 300,000 videos before they were removed.
Since then, it has announced several initiatives, including signing up to the Christchurch Call and joining a tech industry-led nine-point plan to target online terrorist and violent extremist content.
The tech industry's AI shortcomings were also exposed on March 15. A Facebook official reportedly told a US congressional hearing in April that the livestreamed footage was not graphic enough to trigger automatic blocking.
"The video of the attack in Christchurch did not prompt our automatic detection systems because we did not have enough content depicting first-person footage of violent events to effectively train our machine-learning technology," Facebook said in a statement.
"That's why we're working with government and law enforcement officials in the US and UK to obtain camera footage from their firearms training programs – providing a valuable source of data to train our systems.
"We aim to improve our detection of real-world, first-person footage of violent events and avoid incorrectly detecting other types of footage such as fictional content from movies or video games."