Cash is pouring in to strengthen the chief censor's office. But in an internet age, is it equipped for the job? National Correspondent Katie Kenny reports the chief censor himself is not even sure about the long-term future for his office.
On Wednesday, October 9, two people died and more were injured in an antisemitic attack in the eastern German city of Halle.
Chief Censor David Shanks in Wellington learned of the attack at 6am the following day. It was a copycat of New Zealand's March 15 terror attack, when an alleged white supremacist opened fire in two Christchurch mosques, killing 51 worshippers while broadcasting live on Facebook.
For the second time in just over six months, Shanks would find himself fronting media on issues relating to terrorist and violent extremist content online.
The German shooter's platform of choice was streaming site Twitch, known for its video game content. After failing to enter a synagogue where up to 80 people had gathered to celebrate Yom Kippur, the holiest day of the year in Judaism, he apologised to his viewers. He was later arrested.
Twitch confirmed about five people watched the livestream in real time, and thousands of others saw it before it was flagged and removed. While it still circulated in darker corners of the internet, it wasn't easily found on the bigger social media platforms.
That was in contrast to the video of the Christchurch attack, which by any definition of the term went viral. Users attempted to re-upload it 1.5 million times on Facebook. YouTube at one point was removing one copy of it per second.
On March 20, Shanks classified the Christchurch video as objectionable because of its depiction and promotion of extreme violence and terrorism – meaning it's illegal for anyone in New Zealand to view, possess, or distribute it. Three days later, he also banned a document, or manifesto, said to have been written by the terrorist.
That Thursday morning after the German attack, Shanks and several classification officers watched the Halle video. Reporters were already asking if he'd ban it.
By 11.30am, he made the call.
"While this video is not filmed in New Zealand and fatalities are fewer than in Christchurch, the fundamentals of this publication are the same as that of the March 15 livestream," he said in a statement. "It appears on the face of it to be a racially motivated terrorist attack depicting cold-blooded murder of innocent people."
AN OLD MODEL FOR A NEW AGE
In 1915, a conference of representatives of 45 organisations called for the introduction of a censorship system. They claimed: "The class of moving pictures at present exhibited in New Zealand constitutes a grave danger to the moral health and social welfare of the community."
The first film censor was appointed the following year. He snipped naughty bits from films and magazines, and banned some books entirely.
The Office of Film and Literature Classification was established as an independent Crown entity under the Films, Videos, and Publications Classification Act 1993.
"In 1993, the idea of the internet and what it could become was just a twinkle in the legislator's eye," Shanks says.
Back then, "everything was physical": tapes, books, magazines.
"Fast-forward to 2017 and the universe is fundamentally changed in terms of how people consume and conceive of, and market and provide, media. When I came into the role [that year] I know I'd need to match the framework against the reality," he says.
"I think about this role as fundamentally about being a media regulator, who has a responsibility to keep people safe from harm and also to protect people's freedoms."
Shanks has a background in legal roles and came to the job of chief censor from one in charge of health, safety and security at the Ministry of Education.
He was thrust immediately into the limelight over the controversial Netflix series 13 Reasons Why. The programme, which is targeted at teenagers, addresses or depicts rape, suicide, drug use, and bullying, and young people could easily watch it unsupervised on the streaming service.
The Chief Censor introduced a new classification for the show: RP18. This meant anyone under 18 should watch the programme only with the support of an adult who could help them process the topics raised in the series.
But 13 Reasons is, in one respect, not typical of the kinds of potentially harmful content viewed by young people in 2019.
"We know from our research on young people that a large amount of their content is not from cinema or TV or even streamed services. It's YouTube or other similar free tubes," Shanks says.
"If you think about that as an example, [YouTube's] current stats are about 500 hours of content going up every minute. There is no sensible way you can have human moderation of classification of tubes generating that amount of content."
Whereas Shanks could see the second season of 13 Reasons coming, and speak with Netflix about its release, there is no way for censors to know where the next white supremacist meme will come from.
Canterbury University sociologist Michael Grimshaw points to the banning of the alleged Christchurch shooter's so-called manifesto as further evidence of the problem.
"The aim of banning manifestos worked when you could shut down the means of publication and also shut down the means of distribution; that is, in the world of physical media," he says.
Now, documents circulate independently and can contain many embedded links, making each one much more than a single document.
"So every manifesto is a multiplicity of parts that can be divided up and circulated, and so the model is not up to date," Grimshaw says.
This is where digital solutions, such as artificial intelligence (AI) that finds and flags dangerous content, enter the conversation.
Big platforms like YouTube and Facebook are already using AI to identify and remove extremist content, pornography or other types of material. Facebook last month announced a range of measures to better clamp down on violent extremists, terrorists and hate groups on its platforms. This includes using first-person military videos to train artificial intelligence to more quickly identify terror attacks like the live-streamed Christchurch massacre.
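Classifiers are only one layer of that clampdown. Platforms are also widely reported to stop re-uploads, of the kind Facebook faced 1.5 million times after March 15, by matching each new file's digital fingerprint against a database of material already identified as objectionable. The article doesn't describe that machinery, so the sketch below is a deliberately simplified illustration, with made-up data, of an exact-match blocklist. Real systems use perceptual hashes that survive re-encoding and cropping, which a plain SHA-256 hash does not.

```python
import hashlib

# Hypothetical blocklist of fingerprints of files already ruled objectionable.
BLOCKED_FINGERPRINTS = set()

def fingerprint(video_bytes: bytes) -> str:
    """Exact-match fingerprint. Real platforms use perceptual hashes that
    survive re-encoding and cropping; SHA-256 deliberately does not."""
    return hashlib.sha256(video_bytes).hexdigest()

def should_block(upload: bytes) -> bool:
    """Reject an upload whose fingerprint matches a known objectionable file."""
    return fingerprint(upload) in BLOCKED_FINGERPRINTS

# A censor-style ruling adds the banned file's fingerprint to the blocklist.
original = b"stand-in bytes for a banned video"
BLOCKED_FINGERPRINTS.add(fingerprint(original))

print(should_block(original))             # True: an exact copy is caught
print(should_block(original + b"\x00"))   # False: one changed byte defeats an exact hash
```

The last line is why exact hashing alone fails at platform scale, and why the perceptual approaches the big platforms describe matter: altering a single byte of a file produces an entirely new fingerprint.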
The Office of Film and Literature Classification is developing a tool of its own. It's essentially a filter for New Zealand's sensitivities, applied over the top of a self-classification entered by a streaming service.
Shanks says streaming services operating across multiple countries will enter content indicators, or flags, into the tool. With that information, the right sort of classification or rating will be automatically applied for audiences in different countries.
"We know Netflix or other providers will use, say, US classifications. They're really anxious to warn about bad language and nudity, but they won't specifically warn for rape or suicide content," he says.
"We've got different sensitivities and different things people expect to be warned about. So that's a change that's in progress right now, which is a change towards basically thinking about how we can adopt and adapt sensible consumer information and age recommendations into an online environment."
The tool is currently at a "prototype" stage with the aim of having it in a workable shape for the rollout of legislation next year.
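The office hasn't published the tool's internal logic, so the sketch below is purely illustrative: it assumes a simple rules-based lookup in which a provider's self-classification and content flags are mapped to a New Zealand-facing rating and warning list. Every rating, indicator, and rule in it is hypothetical.

```python
# Hypothetical mapping from a provider's own rating to a default NZ label.
SOURCE_TO_NZ = {"TV-MA": "16", "TV-14": "13", "TV-PG": "PG"}

# Hypothetical set of indicators New Zealand audiences expect warnings for.
NZ_SENSITIVE = {"rape", "suicide", "self-harm"}

def nz_label(source_rating: str, indicators: set[str]) -> dict:
    """Combine a provider's self-classification with its content flags to
    produce an NZ-facing rating and consumer warnings. Illustrative rules only."""
    rating = SOURCE_TO_NZ.get(source_rating, "unrated")
    warnings = sorted(indicators & NZ_SENSITIVE)
    if "suicide" in indicators:
        # Hypothetical escalation, echoing the RP18 call on 13 Reasons Why.
        rating = "RP18"
    return {"rating": rating, "warnings": warnings}

# A US-rated title flagged for suicide and coarse language:
print(nz_label("TV-MA", {"suicide", "coarse language"}))
# {'rating': 'RP18', 'warnings': ['suicide']}
```

The hypothetical suicide rule is the point of the exercise: a New Zealand-specific sensitivity, like the RP18 call on 13 Reasons Why, gets applied automatically, without anyone classifying the title by hand.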
THE CASE FOR AN UBER-REGULATOR
The initiative is designed to prevent the office becoming a "bottleneck" to the ready flow of content in the digital age. But does the office have a future at all?
Prime Minister Jacinda Ardern last week announced a $17m funding boost for the chief censor and for the Censorship Compliance Unit within the Department of Internal Affairs (DIA), a 13-person technical team currently focused on detecting and investigating child sexual exploitation images. It was announced alongside legislative changes designed to let the chief censor move faster to have objectionable material removed from social media.
"While terrorist and violent extremist content is objectionable, and therefore illegal under current law, the changes mean we can target this material in a similar way to how we target child sexual exploitation material, by working quickly with online content hosts to remove it as quickly as possible," Internal Affairs Minister Tracey Martin said at the time.
That means more staff and more power for the chief censor's office. Shanks, however, is uncertain about the long-term shape of his office.
"Once you start breaking it down looking at the fundamentals here, you do end up with a chief censor-type role that may in fact be just a media regulator-type role which encompasses all media," he says.
"We've currently got a Broadcasting Standards Authority (BSA) aimed at looking at broadcast TV and the like, [and] you've got this office that deals with traditional film and physical content. But I think down the track, I don't know when convergence will drive the thinking towards just having a unified content regulator of some sort."
An uber-regulator would be responsible for managing harms and for making sure the right kind of research was being done to get input from young people and parents in particular.
DOING THE JOB
Yet Shanks is far from feeling hopeless about the task his office confronts today. He sees the banning of content after the March 15 terror attack as effective, and believes there has been a recent shift for the better in the way mainstream media reports cases of extreme sensitivity.
"It's really interesting to take stock of where we're at in this point in time, given that prior to March 15 a lot of people would have said you can't effectively take that sort of stuff off the internet.
"The bottom line is ... if everyone had just shrugged and looked away in terms of the livestream and the document, I guarantee you it'd be at the top of people's Google feeds, it'd be shown by children to each other in playgrounds, it'd be watched accidentally by people still, but the fact is, it's not."
"Pretty much everyone" knows it's illegal to access and share the banned material, he says.
And he believes there has been a "sea change" in how mainstream media report on terror attacks.
"There's greater awareness across the system about how terrorists will seek to use online platforms and mainstream media to amplify their message.
"I'm hearing a lot of this can still be found – nothing's perfect – but where we're at is good as opposed to disastrous, which is I think where we'd be at if we hadn't done anything."