Author: Alexander Gillespie

Last weekend’s “anti-lockdown” protest in Auckland provided a snapshot of the various conspiracy theories and grievances circulating online and within the community: masks, vaccination, QAnon, 5G technology, government tyranny and Covid-19 were all in the mix.

The “freedom rally” also featured Advance NZ party leaders Jami-Lee Ross and Billy Te Kahika, who has previously described Covid-19 as no more serious than influenza.

The same scepticism about the pandemic was reportedly behind the Mt Roskill Evangelical Church cluster and its spread, which prompted Health Minister Chris Hipkins to ask that people “think twice before sharing information that can’t be verified”.

Hipkins also refused to rule out punitive measures for anyone found to be deliberately spreading lies.

It’s not a new problem. As far back as 1688, the English Privy Council issued a proclamation prohibiting the spread of false information. The difference in the 21st century, of course, is the reach and speed of fake news and disinformation.

The World Health Organisation (WHO) has even spoken of a massive “infodemic” hindering the public health response to Covid-19: “an over-abundance of information – some accurate and some not – that makes it hard for people to find trustworthy sources and reliable guidance when they need it.”

An “anti-lockdown” protest on Queen St in Auckland.

The limits of freedom of speech

This is particularly dangerous when people are already anxious and politically polarised. Disinformation spreads fastest where freedom is greatest, including in New Zealand where everyone has the right under the Bill of Rights Act “to freedom of expression, including the freedom to seek, receive, and impart information and opinions of any kind in any form”.

This leads to an anomaly. On the one hand, people using misleading or deceptive information to market products (including medicines) can be held to account, and advertising must be responsible. On the other hand, spreading misleading or deceptive ideas is not, as a rule, illegal.

However, there are restrictions on free speech when it comes to offensive behaviour and language, racial discrimination and sexual harassment. We also censor objectionable material and police harmful digital communications that target individuals.

So, should we add Covid-19 conspiracies and disinformation to that list? The answer is probably not. And if we do, we should be very specific.

A focused approach is crucial

Deciding who gets caught in the net, and defining what information is harmful to the public, would be a very slippery slope. Furthermore, the internet has many corners to hide in and may be nearly impossible to police.

Given that those spreading conspiracy theories and disinformation already tend to believe in government overreach, we risk pouring petrol on the fire by attempting to ban their activities.



The exception, where further restraint is justified, involves attempts to use misinformation or undue influence (especially by a foreign power) to manipulate elections. This is where a more focused approach to who and what is targeted makes sense.

Countries such as Canada, the UK, France and Australia are all grappling with how best to protect their democracies from manipulation of information, but these initiatives are still in their infancy.

In New Zealand we have a law prohibiting the publication of false statements intended to influence voters, and the Justice Committee put out an excellent report on the 2017 general election that covered some of these points and urged vigilance.

Can we police the tech giants?

While tools such as Netsafe’s fake news awareness campaign and official Covid-19 information sources are excellent, they are not enough on their own.

https://twitter.com/netsafeNZ/status/1240806540862468097

The best line of defence against malicious information is still education. Scientific literacy and critical thinking are crucial. Good community leadership, responsible journalism and academic freedom can all contribute.

But if that isn’t enough, what can we do about the platforms where disinformation thrives?

Conventional broadcasters must make reasonable efforts to present balanced information and viewpoints.

But that kind of balance is much harder to enforce in the decentralised, instantaneous world of social media. The worst example of this, the live-streamed terror attack in Christchurch, led to the Christchurch Call. It’s a noble initiative, but controlling this modern hydra will be a long battle.

Attempts to control misinformation on Facebook, Twitter and Google through self-regulation and warning labels are welcome. But the work is slow and ad hoc. The European Commission is now proposing new rules to formalise the social media platforms’ responsibility and liability for their content.

Like tobacco, that content might not be prohibited, but citizens should be warned about what they’re consuming – even if it comes from the president of the United States.

The final line of defence would be to make individuals who spread fake news liable to prosecution. Many countries have already begun to enact such laws, with China and Russia at the forefront.

The risk, of course, is that social media regulation can disguise political censorship designed to target dissent. For that reason we need to treat this option with extreme caution.

But if the tolerance of our liberal democracy is too sorely tested in the forthcoming election, and if all other defences prove inadequate, new laws that strengthen the protection of the electoral process may well be justified.

Alexander Gillespie is a Professor of Law at the University of Waikato.

Article: https://www.stuff.co.nz/national/politics/122791630/with-the-election-campaign-underway-can-the-law-protect-voters-from-fake-news-and-conspiracy-theories