‘Legal but harmful’: making social media the arbiter of gospel truth

WARNING: This blog post is legal but harmful (potentially). If you proceed, there is a significant risk that the nature of the content might cause adverse psychological impact on an adult of ordinary sensibilities.

The Government’s Online Safety Bill intends to make the UK “the safest place in the world to be online” by placing a duty of care on internet companies such as Facebook, Twitter, TikTok, YouTube and Instagram, which host user-generated content, to limit the spread of illegal material. Social media and video-sharing platforms will therefore be prohibited from promoting harmful content, ranging from online bullying and grooming to sexual abuse and self-harm. It also aims to curb disinformation and misinformation, and, indeed, any information which may cause physical or psychological harm.

The definitions are clearly laid out in the proposed legislation. For example, the meaning of “content that is harmful to adults” is defined as:

(3) Content is within this subsection if the provider of the service has reasonable grounds to believe that the nature of the content is such that there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities (“A”).
…
(5) Content is within this subsection if the provider of the service has reasonable grounds to believe that there is a material risk of the fact of the content’s dissemination having a significant adverse physical or psychological impact on an adult of ordinary sensibilities, taking into account (in particular)—

(a) how many users may be assumed to encounter the content by means of the service, and
(b) how easily, quickly and widely content may be disseminated by means of the service.

Or perhaps not so clearly laid out.

When does psychological impact become ‘significant’? What are ‘ordinary’ sensibilities? When the Government empowers Ofcom to discern and determine what is ‘legal but harmful’, and Ofcom then requires social media companies to police whatever is ‘legal but harmful’, then all the free-speech assurances in the world won’t protect the freedoms of speech and expression. They may insist (and they do) that such protections are foundational to the Bill:

(2) A duty to have regard to the importance of—

(a) protecting users’ right to freedom of expression within the law, and
(b) protecting users from unwarranted infringements of privacy, when deciding on, and implementing, safety policies and procedures.

Category 1 services

(3) A duty—

(a) when deciding on safety policies and procedures, to carry out an assessment of the impact that such policies or procedures would have on—

(i) the protection of users’ right to freedom of expression within the law, and
(ii) the protection of users from unwarranted infringements of privacy; and

(b) to carry out an assessment of the impact of adopted safety policies and procedures on the matters mentioned in paragraph (a)(i) and (ii).

But it is hard to see how Facebook, Twitter, TikTok, YouTube, Instagram et al can be obliged to be robust in the defence of national freedom when they face fines of £18 million or 10% of annual worldwide turnover (whichever is greater) if they get it wrong. The Bill also contains a deferred mechanism to impose criminal sanctions on the senior management of regulated service providers in certain circumstances, so why would they risk erring on the side of freedom and democracy if getting it wrong could land them in court with a possible criminal conviction?

There will be no fine or criminal sanction if they erroneously ‘cancel’ someone for insisting that sex is a matter of biological science, that gender is a social construct, and that a man can’t become a woman no matter how many surgeries or treatments he may have. If the trans community find this ‘harmful’ (psychologically), why would social media companies not summarily shut down the account? Why would Facebook and Twitter not swiftly move to censor it to mitigate that harm? What algorithmic moderation would recognise the nuances of sociology and the subtleties of philosophy inherent in such a debate?

What happens to national context and social culture? Words or images which can reasonably be assumed to particularly affect people with a certain characteristic (or combination of characteristics), or to particularly affect a certain group of people in one context, may not be deemed to affect that same group in another. Who will be making these context-based assessments, and with what degree of objectivity? How will the obligation to censor that which is legal but harmful be consistent with reasoned but offensive journalistic content?

No one will object to an algorithm designed to flag satirical speech about sexuality, or to promote positive awareness of issues such as child-grooming or the prevention of suicide, but what about a reasoned socio-cultural debate about the definition of ‘child’ in the context of the age of majority; or a conversation about when martyrdom meets suicide? How will Twitter deal with the engineered ‘pile-on’ phenomenon, where someone with millions of followers decides to get offended by something someone said about (say) creeping sharia in some UK schools, and then urges their followers to bully and bludgeon the person into retraction, or demand that Twitter censor their ‘offensive’ Tweet? How will social media companies deal with thousands of appeals against algorithmic or AI-based moderation decisions? Would it not simply be easier (and cheaper) to summarily remove content that is over-flagged? Why would Ofcom not simply urge compliance to avoid offence?

“Do you want Nick Clegg to be the supreme censor of what you write online?”, asks David Davis MP, “Because that could be the accidental effect of the Government’s new Online Safety Bill.”

“It could make the former Deputy Prime Minister – now a multi-million-dollar-a-year Facebook executive – the overall arbiter of whether your views are acceptable or ‘legal but harmful’, and thereby subject to being censored.” He continues by recounting how a speech he gave to Big Brother Watch (at a Conservative Party Conference) was summarily removed from the organisation’s YouTube channel. The crime? David Davis was accused of spreading ‘medical misinformation’: he had spoken against the introduction of domestic Covid vaccine passports, and had done so with reference to scientific evidence. But this didn’t matter: he was challenging the political zeitgeist, and so he had to be ‘cancelled’.

The implications for free speech are potentially disastrous. Do we really want 20-something whizzkids sitting in a plush Californian office 5,000 miles away to be deciding whether a comedy routine from the 1970s crosses that line?

Or deciding whether those questioning the consensus on Covid policy are raising concerns or pushing disinformation?

Or whether a public figure writing about transgender issues has crossed into hate speech?

These are matters of nuance that we cannot outsource to big tech zealots and their fashionable opinions.

What could be deemed ‘harmful’ for one set of users might be completely fine for another.

The video was restored after YouTube was very publicly challenged, but David Davis is a high-profile politician with a considerable following. How would the little guy secure justice? Why would the social media giants waste their time listening to someone with 12 Twitter followers?

The gospel of Christ is legal but harmful. You try telling people they are dead in their sin and on their way to hell unless they repent and accept Jesus as their Saviour. You try saying on social media:

Or do you not know that wrongdoers will not inherit the kingdom of God? Do not be deceived: Neither the sexually immoral nor idolaters nor adulterers nor men who have sex with men nor thieves nor the greedy nor drunkards nor slanderers nor swindlers will inherit the kingdom of God. And that is what some of you were. But you were washed, you were sanctified, you were justified in the name of the Lord Jesus Christ and by the Spirit of our God (1 Cor 6:9ff).

And see where that gets you.

You try saying that a male may not become a woman, or that two men may not be married in the sight of God.

You try saying that Mohammed was a false prophet.

It is legal but harmful to say these things — harmful, that is, to certain groups or to people with certain characteristics. When does the psychological impact of these harmful truths become ‘significant’? What ‘ordinary’ sensibilities should Christians be mindful of when they preach and teach (or write)? The Government is about to censor the freedom to proclaim the gospel, and delegate to those who hate the cross of Christ the power to discern what is harmful about that cross.

A Conservative government…
