Regulating Disinformation
with Artificial Intelligence (AI)
Prof. Chris Marsden (Sussex) &
Dr Trisha Meyer
(IES, Free University of Brussels VUB)
United Nations Internet Governance Forum
29 November 2019
Defining disinformation
“False, inaccurate or misleading information
designed, presented and promoted to
intentionally cause public harm or for profit”
In line with the European Commission High Level Expert Group definition
We distinguish disinformation from misinformation,
which refers to unintentionally false or inaccurate information.
Defining Automated Content
Recognition (ACR)
Within Machine Learning techniques that are advancing towards AI,
ACR technologies are textual and audio-visual analysis programmes
that are algorithmically trained to identify potential ‘bot’ accounts and
unusual potential disinformation material.
ACR refers to the use of automated techniques
in both the recognition and the moderation
of content and accounts, to assist human judgement.
Moderating content at scale requires ACR to supplement human editing.
ACR to detect disinformation is
prone to false negatives/positives (see the sketch below)
due to the difficulty of parsing multiple, complex, and possibly
conflicting meanings emerging from text.
ACR is inadequate for natural language processing and audiovisual material,
including so-called ‘deep fakes’
(fraudulent representation of individuals in video);
it has more reported success in identifying ‘bot’ accounts.
We use ‘AI’ to refer to ACR technologies.
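To make the false positive/negative trade-off concrete, here is a minimal Python sketch: the (label, score) pairs and thresholds are hypothetical assumptions, and this is not any platform’s actual ACR system. It shows how any fixed decision threshold on a classifier’s score trades over-blocking against missed disinformation.

```python
# Minimal sketch, not any platform's actual ACR system: a classifier assigns
# each post a disinformation score, and any fixed threshold trades false
# positives (over-blocking) against false negatives (missed disinformation).

# Hypothetical (label, score) pairs: 1 = actual disinformation, 0 = legitimate.
samples = [(1, 0.92), (0, 0.60), (1, 0.55), (0, 0.48), (1, 0.35), (0, 0.12)]

def error_counts(threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for label, score in samples if label == 0 and score >= threshold)
    fn = sum(1 for label, score in samples if label == 1 and score < threshold)
    return fp, fn

for threshold in (0.3, 0.5, 0.7):
    fp, fn = error_counts(threshold)
    print(f"threshold={threshold}: over-blocked legitimate posts={fp}, "
          f"missed disinformation={fn}")
```

Lowering the threshold catches more disinformation but blocks more legitimate speech; raising it does the reverse. This is why human review and appeal remain necessary.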
Disinformation a rapidly moving target
We analysed 250 articles, papers and reports,
assessing the strengths and weaknesses of proposed AI disinformation
solutions for freedom of expression, media pluralism & democracy.
We agree with other experts: evidence of harm is still inconclusive.
2016 US Presidential election/UK ‘Brexit’ referendum:
investigated by the US Department of Justice and UK Parliamentary Committee.
International Grand Committee on
Disinformation (“Fake News”)
Leopoldo Moreau, Chair, Freedom of Expression Commission, Chamber of
Deputies, Argentina,
Nele Lijnen, member, Committee on Infrastructure, Communications and
Public Enterprises, Parliament of Belgium,
Alessandro Molon, Member of the Chamber of Deputies, Brazil,
Bob Zimmer, Chair, and Nathaniel Erskine-Smith and Charlie Angus,
Vice-Chairs, Standing Committee on Access to Information, Privacy and Ethics,
House of Commons, Canada,
Catherine Morin-Desailly, Chair, Standing Committee on Culture, Education
and Media, French Senate,
Hildegarde Naughton, Chair, and Eamon Ryan, member, Joint Committee on
Communications, Climate Action and Environment, Parliament of Ireland,
Dr Inese Lībiņa-Egnere, Deputy Speaker, Parliament of Latvia,
Pritam Singh, Edwin Tong and Sun Xueling, members, Select Committee on
Deliberate Online Falsehoods, Parliament of Singapore
Damian Collins, Chair, DCMS Select Committee, House of Commons
The Fog of War?
Carl von Clausewitz (1832) Vom Kriege
“War is the realm of uncertainty; three quarters of the factors on which
action in war is based are wrapped in a fog of greater or lesser uncertainty.
A sensitive and discriminating judgment is called for;
a skilled intelligence to scent out the truth.”
Russian Hybrid Warfare: A Study of Disinformation
Flemming Splidsboel Hansen, 2017, Danish Institute for International Studies
Marsden, C. (2014) Hyper-power and private monopoly: the unholy marriage of
(neo) corporatism and the imperial surveillance state, Critical Studies in Media
Communication Vol.31 Issue 2 pp.100-108
http://www.tandfonline.com/doi/full/10.1080/15295036.2014.913805
Marsden, C. (2004) Hyperglobalized Individuals: the Internet, globalization,
freedom and terrorism 6 Foresight 3 at 128-140
Neal Mohan made Jordan Peterson
& Yaxley-Lennon famous
https://twitter.com/heyneil
“Google Paid This Man $100 Million: Here's His Story”
https://www.businessinsider.com/neal-mohan-googles-100-million-man-2013-4?r=US&IR=T
“the visionary who predicted how brand advertising would fund
the Internet, turned this vision into a plan, and then executed it”
Methodology: Literature &
Elite Interviews
The project consists of a literature review, expert interviews and mapping of policy
and technology initiatives on disinformation in the European Union.
150 regulatory documents and research papers/reports
10 expert interviews conducted in August-October;
we took part in several expert seminars, including:
Annenberg-Oxford Media Policy Summer Institute, Jesus College, Oxford, 7 Aug;
Google-Oxford Internet Leadership Academy, Oxford Internet Institute, 5 Sept;
Gikii’18 at the University of Vienna, Austria, 13-14 Sept;
Microsoft Cloud Computing Research Consortium, St John’s, Cambridge, 17-18 Sept
We thank all interview respondents and participants for the enlightening
disinformation discussions; all errors remain our own.
The authors are Internet regulatory experts:
a socio-legal scholar with a background in law/economics of mass communications;
a media scholar with a background in Internet policy processes and copyright reform;
the study was reviewed by a computer scientist with a background in Internet regulation
and fundamental human rights.
We suggest that this is the bare minimum of interdisciplinary expertise
required to study the regulation of disinformation on social media.
Interdisciplinary study analyses implications
of AI disinformation initiatives
Policy options based on literature, 10 expert interviews & mapping
We warn against technocentric optimism as a solution to disinformation,
which proposes the use of automated detection, (de)prioritisation, blocking
and removal by online intermediaries without human intervention.
More independent, transparent and effective appeal and oversight
mechanisms are necessary in order to minimise inevitable inaccuracies.
What can AI do to stop disinformation?
Bot accounts identified:
Facebook removed 1.3 billion in six months.
Facebook’s AI “ultimately removes 38% of hate speech-related posts”;
it lacks sufficient training data to be effective beyond English and Portuguese.
Trained algorithmic detection for fact verification may never be as effective
as human intervention:
each has an accuracy of 76%.
Future work might explore how hybrid decision models combining
fact verification and data-driven machine learning can be integrated
(see the sketch after the citation below).
Koebler, J., and Cox, J. (23 Aug 2018) ‘The Impossible Job: Inside Facebook’s
Struggle to Moderate Two Billion People’, Motherboard,
https://motherboard.vice.com/en_us/article/xwk9zd/how-facebook-content-moderation-works
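As a hedged illustration of the hybrid decision model idea above, the Python sketch below combines a fact-verification score with a machine-learning score and routes uncertain cases to human moderators. The function name, weights and thresholds are all hypothetical assumptions, not a documented platform pipeline.

```python
# Hypothetical sketch of a hybrid decision model: combine a fact-verification
# signal with a data-driven classifier score, and escalate the uncertain
# middle band to human review instead of removing content automatically.
# All names, weights and thresholds here are illustrative assumptions.

def hybrid_decision(fact_check_score, ml_score, weight=0.5):
    """Both scores in [0, 1]; higher means more likely disinformation."""
    combined = weight * fact_check_score + (1 - weight) * ml_score
    if combined >= 0.9:
        return "flag for removal (with notice and appeal)"
    if combined <= 0.2:
        return "no action"
    return "escalate to human moderator"  # the uncertain middle band

print(hybrid_decision(fact_check_score=0.95, ml_score=0.90))  # confident flag
print(hybrid_decision(fact_check_score=0.50, ml_score=0.40))  # human review
```

Keeping the automated system’s remit to the high-confidence extremes, with humans deciding the middle band, reflects the paper’s caution against fully automated removal.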
Zuckerberg on AI & disinformation
Some categories of harmful content are easier for AI to identify;
in others it takes more time to train our systems.
Visual problems, like identifying nudity, are often easier than
nuanced linguistic challenges, like hate speech.
Zuckerberg, M. (15 Nov 2018) ‘A Blueprint for Content Governance and Enforcement’,
Facebook Notes, https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/
Legislators should not push this
difficult judgment exercise onto
online intermediaries
Restrictions to freedom of expression must be
provided by law, legitimate and
proven necessary, and
the least restrictive means to pursue the aim.
The illegality of disinformation should be proven before
filtering or blocking is deemed suitable.
Human rights law is paramount in maintaining freedom online.
AI is not a silver bullet
Automated technologies are limited in their accuracy,
especially for expression where cultural or contextual cues are necessary.
Imperative that legislators consider which measures
may provide a bulwark against disinformation
without introducing AI-generated censorship of European citizens
Different aspects of disinformation
merit different types of regulation
All proposed policy solutions stress the
importance of
literacy and
cybersecurity
Five recommendations
1. Media literacy and user choice
2. Strong human review and appeal processes
where AI is used
3. Independent appeal and audit of platforms
4. Standardising notice and appeal procedures
Creating a multistakeholder body for appeals
5. Transparency in AI disinformation techniques
1. Disinformation is best tackled
through media pluralism/literacy
These allow diversity of expression and choice.
Source transparency indicators are preferable to
(de)prioritisation of disinformation.
Users need the opportunity to understand
how search results or social media feeds are built
and make changes where desirable, as the sketch below illustrates.
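One possible reading of this recommendation, sketched in Python: feed items carry visible source and ranking signals, and users can re-weight them. The schema, signal names and weights are hypothetical illustrations, not any platform’s real ranking system.

```python
# Illustrative sketch only: a feed that exposes the signals behind each
# ranked item, so users can see why content appears and adjust the weights.
# Fields and weights are hypothetical, not any platform's real schema.

from dataclasses import dataclass

@dataclass
class FeedItem:
    headline: str
    source_verified: bool    # source transparency indicator
    engagement_score: float  # popularity signal in [0, 1]
    recency_score: float     # freshness signal in [0, 1]

def rank(items, user_weights):
    """Score items with user-adjustable weights and return (score, item)
    pairs, so the contribution of each signal stays inspectable."""
    def score(item):
        return (user_weights["verified"] * item.source_verified
                + user_weights["engagement"] * item.engagement_score
                + user_weights["recency"] * item.recency_score)
    return sorted(((score(i), i) for i in items),
                  key=lambda pair: pair[0], reverse=True)

items = [
    FeedItem("Checked report", source_verified=True,
             engagement_score=0.4, recency_score=0.6),
    FeedItem("Viral rumour", source_verified=False,
             engagement_score=0.9, recency_score=0.9),
]
# A user who prioritises verified sources can change the outcome:
for s, item in rank(items, {"verified": 1.0, "engagement": 0.3, "recency": 0.2}):
    print(f"{s:.2f}  {item.headline}")
```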
2. We advise against regulatory action
to encourage increased use of AI
for content moderation (CoMo) purposes,
without strong independent
human review
and
appeal processes.
3. We recommend independent appeal
and audit of platforms’ regulation,
introduced as soon as feasible.
For technical intermediaries’ moderation of
content & accounts:
1. detailed and transparent policies,
2. notice and appeal procedures, and
3. regular reports are crucial.
These are valid for automated removals as well.
4. Standardising notice and appeal
procedures and reporting requires
creating a self- or co-regulatory multistakeholder body:
the UN Special Rapporteur’s suggested “social media council”.
Such a multi-stakeholder body could have competence
to deal with industry-wide appeals, enabling
better understanding & minimisation of the effects of AI
on freedom of expression and media pluralism.
5. Lack of independent evidence or
detailed research in this policy area
means the risk of harm remains far too high
for any degree of policy or regulatory certainty.
Greater transparency must be introduced
into AI and disinformation reduction techniques
used by online platforms and content providers.
Options and forms of regulation

Option 0: Status quo
Typology of regulation: Corporate Social Responsibility, single-company initiatives
Implications/Notes: Enforcement of the new General Data Protection Regulation and the proposed revised ePrivacy Regulation, plus the agreed text of the new AVMS Directive, would all continue and likely expand.

Option 1: Non-audited self-regulation
Typology of regulation: Industry code of practice, transparency reports, self-reporting
Implications/Notes: Corporate agreement on principles for common technical solutions and the Santa Clara Principles.

Option 2: Audited self-regulation
Typology of regulation: European Code of Practice of Sept 2018; Global Network Initiative published audit reports
Implications/Notes: Open, interoperable, publicly available standard, e.g. a commonly engineered/designed standard for content removal to which platforms could certify compliance.

Option 3: Formal self-regulator
Typology of regulation: Powers to expel non-performing members; dispute resolution ruling/arbitration on cases
Implications/Notes: Commonly engineered standard for content filtering or algorithmic moderation. Requirement for members of the self-regulatory body to conform to the standard or prove equivalence. Particular focus on content ‘Put Back’ metrics and efficiency/effectiveness of the appeal process.

Option 4: Co-regulation
Typology of regulation: Industry code approved by Parliament or regulator(s) with statutory powers to supplant
Implications/Notes: Government-approved technical standard for filtering or other forms of moderation. Examples from broadcast and advertising regulation.

Option 5: Statutory regulation
Typology of regulation: Formal regulation, tribunal with judicial review
Implications/Notes: National Regulatory Agencies, though note many overlapping powers between agencies on e.g. freedom of expression,