Facebook has announced a series of updates aimed at combating the influence of terrorist and hate organisations online.
The company has faced increasing pressure to do more to stem the spread of violent and hateful messages on its platform.
In an effort to do this, Facebook said it had tightened up its definition of what it considers to be "dangerous individuals and organisations".
This move would help guide the company's decision-making process in dealing with potentially dangerous and illegal content, it said.
"The updated definition still focuses on the behaviour, not ideology, of groups. But while our previous definition focused on acts of violence intended to achieve a political or ideological aim, our new definition more clearly delineates that attempts at violence, particularly when directed toward civilians with the intent to coerce and intimidate, also qualify."
The company said it had also improved its automated detection techniques, which identify copies of known offensive material. It also updated its list of terrorist organisations, banning more than 200 white supremacist groups from the platform.
"We use a combination of AI and human expertise to remove content praising or supporting these organisations," Facebook said in a statement.
The updates also affect Instagram, which is owned by Facebook.
"We'll need to continue to iterate on our tactics because we know bad actors will continue to change theirs, but we think these are important steps in improving our detection abilities."
Prime Minister Jacinda Ardern has taken a hard line against tech companies such as Facebook following the Christchurch mosque attacks in March.
In that attack, the alleged gunman shared live video of himself as he opened fire in two Christchurch mosques, killing 51 people.
That video was viewed around 4000 times before being removed by Facebook. Even after it was taken down, however, copies of the footage spread online. Around 1.5 million copies of the video were taken down from Facebook.
The company said its recently implemented policies were aimed at preventing a repeat of such disturbing and illegal footage. It said it was working with government and law enforcement officials in the US and UK to obtain camera footage from their firearms training programmes to help train its systems to identify potentially illegal video.
"The video of the attack in Christchurch did not prompt our automatic detection systems because we did not have enough content depicting first-person footage of violent events to effectively train our machine learning technology," Facebook said.
The company also said it had put measures in place to connect people searching for terms associated with white supremacy with resources aimed at helping those influenced by hate groups.
Some of the updates had been implemented in the last few months, while others went into effect last year, the company said.