3. Defining disinformation
“False, inaccurate, or misleading information
designed, presented and promoted to
intentionally cause public harm or for profit”
In line with the European Commission’s High Level Expert Group on fake news and online disinformation
We distinguish disinformation from misinformation,
which refers to unintentionally false or inaccurate information.
4. Disinformation: a rapidly moving target
We analysed 250 articles, papers and reports,
the strengths and weaknesses of those focused on AI disinformation,
and the effects of proposed solutions on freedom of expression, media pluralism & democracy.
We agree with other experts: evidence of harm is still inconclusive.
2016 US Presidential election/UK ‘Brexit’ referendum:
investigated by the US Department of Justice and a UK Parliamentary Committee
5. Evidence Base is Contextual
The premise of the recommendations is unstable:
Is there empirical evidence for disinformation effects,
and is it an online-specific issue?
6. If policy wants hard evidence of disinformation:
it does not exist and never has.
Newspapers were not proven to influence the UK 1992 General Election.
Disinformation was not proven to influence the US 2016 Presidential election.
Motives are impure – an evidence threshold is impossible.
7. Online and offline media – which is
more influential and disinformed?
Should we focus on online media, or on the core source of
misinformation for most people: mainstream media (MSM)?
Cable TV news; newspapers; shock jock radio
Noting the elderly rely on existing media more than Facebook
Pollution of online disinformation into MSM?
Online platforms are most important for what they tell us about
the weaknesses in the existing ecosystem
A digital canary in the media coalmine?
Use of MSM in online clips, e.g. Bloomberg/deep fakes
8. Democracies differently equipped
to handle disinformation
UK national tabloid newspapers uniquely corrupt
BBC an extraordinarily well-resourced independent public service broadcaster
Scandinavian press council model
Satellite TV & commercial TV business structures
9. Can misinformation affect the normative
context of public discourse, rather than
disinformation directly causing harm and
undermining democratic principles?
Data integrity vs discourse integrity?
10. Methodology: Literature &
Elite Interviews
Project consists of a literature review, expert interviews and mapping of policy
and technology initiatives on disinformation in the European Union.
150 regulatory documents and research papers/reports
10 expert interviews in August-October 2018;
we took part in several expert seminars, including:
Annenberg-Oxford Media Policy Summer Institute, Jesus College, Oxford, 7 Aug;
Google-Oxford Internet Leadership Academy, Oxford Internet Institute, 5 Sept;
Gikii’18 at the University of Vienna, Austria, 13-14 Sept;
Microsoft Cloud Computing Research Consortium, St John’s, Cambridge, 17-18 Sept.
We thank all interview respondents and participants for the enlightening
disinformation discussions; all errors remain our own.
Internet regulatory experts:
socio-legal scholar with a background in law/economics of mass communications;
media scholar with a background in Internet policy processes and copyright reform;
reviewed by a computer scientist with a background in Internet regulation and
fundamental human rights.
We suggest that this is the bare minimum of interdisciplinary expertise
required to study the regulation of disinformation on social media.
11. Interdisciplinary study analyses implications
of AI disinformation initiatives
Policy options based on literature, 10 expert interviews & mapping
We warn against technocentric optimism as a solution to disinformation,
which proposes the use of automated detection, (de)prioritisation, blocking and
removal by online intermediaries without human intervention.
Independent, transparent and effective appeal and oversight mechanisms
are necessary to minimise inevitable inaccuracies.
13. International Grand Committee on
Disinformation (“Fake News”)
Leopoldo Moreau, Chair, Freedom of Expression Commission,
Chamber of Deputies, Argentina,
Nele Lijnen, member, Committee on Infrastructure, Communications
and Public Enterprises, Parliament of Belgium,
Alessandro Molon, Member of the Chamber of Deputies, Brazil,
Bob Zimmer, Chair, and Nathaniel Erskine-Smith and Charlie Angus,
Vice-Chairs, Standing Committee on Access to Information, Privacy
and Ethics, House of Commons, Canada,
Catherine Morin-Desailly, Chair, Standing Committee on Culture,
Education and Media, French Senate,
Hildegarde Naughton, Chair, and Eamon Ryan, member, Joint
Committee on Communications, Climate Action and Environment,
Parliament of Ireland,
Dr Inese Lībiņa-Egnere, Deputy Speaker, Parliament of Latvia,
Pritam Singh, Edwin Tong and Sun Xueling, members, Select
Committee on Deliberate Online Falsehoods, Parliament of Singapore,
and Damian Collins, Chair, DCMS Select Committee, House of Commons, United Kingdom
15. Different aspects of disinformation
merit different types of regulation
All proposed policy solutions stress the importance of
literacy and cybersecurity
16. Defining Automated Content
Recognition (ACR)
Within machine learning techniques that are advancing towards AI,
ACR technologies are textual and audio-visual analysis programmes
algorithmically trained to identify potential ‘bot’ accounts and
potential disinformation material.
ACR refers both to the use of automated techniques in the recognition
of content and accounts, and to their moderation, to assist human judgement.
Moderating content at scale requires ACR to supplement human editing
(see the sketch below).
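A minimal, illustrative sketch of what a textual ACR classifier can look like, assuming Python and scikit-learn; the training examples, model choice and review threshold are invented for illustration and do not describe any platform’s actual system.

```python
# Minimal sketch of a textual ACR classifier (illustrative only; real
# platform systems are proprietary, multilingual and far larger).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = flag for human review, 0 = leave.
texts = [
    "Miracle cure suppressed by doctors, share before deleted!",
    "Council confirms road closure on Saturday for maintenance.",
    "Secret ballot-stuffing video PROVES the election was rigged",
    "The committee published its annual transparency report today.",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: deliberately simple.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new content; anything above a threshold joins a human review queue,
# matching ACR's role of assisting rather than replacing human judgement.
post = "Share this NOW: banks will freeze all accounts tomorrow"
score = model.predict_proba([post])[0][1]
if score > 0.5:
    print(f"Queue for human review (score={score:.2f})")
```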
17. ACR to detect disinformation is
prone to false negatives/positives
due to the difficulty of parsing multiple, complex, and possibly
conflicting meanings emerging from text.
ACR is inadequate for natural language processing and audiovisual material,
including so-called ‘deep fakes’
(fraudulent representation of individuals in video);
it has more reported success in identifying ‘bot’ accounts.
We use ‘AI’ to refer to ACR technologies.
The arithmetic sketch below shows how false positives compound at scale.
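A back-of-envelope calculation of detection at platform scale; every figure below is assumed purely for illustration, not measured. When genuine disinformation is rare, even an apparently accurate detector floods reviewers with wrongly flagged legitimate posts.

```python
# Illustrative base-rate arithmetic (all figures assumed for the example).
posts_per_day = 1_000_000_000   # hypothetical platform volume
prevalence = 0.001              # assume 0.1% of posts are disinformation
tpr = 0.90                      # assumed true positive rate of the detector
fpr = 0.01                      # assumed false positive rate of the detector

bad = posts_per_day * prevalence
good = posts_per_day - bad

true_flags = bad * tpr          # disinformation correctly flagged
false_flags = good * fpr        # legitimate posts wrongly flagged
missed = bad * (1 - tpr)        # disinformation that slips through

precision = true_flags / (true_flags + false_flags)
print(f"flagged: {true_flags + false_flags:,.0f} per day")
print(f"wrongly flagged legitimate posts: {false_flags:,.0f} per day")
print(f"missed disinformation: {missed:,.0f} per day")
print(f"precision: {precision:.1%}")  # ~8%: most flags are false positives
```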
18. What can AI do to stop disinformation?
Bot accounts identified:
Facebook removed 1.3 billion in 6 months.
Facebook’s AI “ultimately removes 38% of hate speech-related posts”;
it does not have enough training data to be effective except in English and Portuguese.
Trained algorithmic fact verification may never be as effective
as human intervention: each has an accuracy of 76%.
Future work might want to explore how hybrid decision models consisting
of fact verification and data-driven machine learning can be integrated
(see the sketch below).
Koebler, J. and Cox, J. (23 Aug 2018) ‘The Impossible Job: Inside Facebook’s
Struggle to Moderate Two Billion People’, Motherboard,
https://motherboard.vice.com/en_us/article/xwk9zd/how-facebook-content-moderation-works
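One hedged reading of such a hybrid decision model: an explicit fact-check lookup handles verified falsehoods, a learned score handles the rest, and uncertain cases are deferred to human moderators. The function names, thresholds and toy fact-check database are all illustrative assumptions, not a published system.

```python
# Hedged sketch of a hybrid decision model: a learned score plus an
# explicit fact-check lookup, with uncertain cases deferred to humans.
# All names, data and thresholds are illustrative assumptions.
from typing import Optional

# Hypothetical database of previously fact-checked claims.
FACT_CHECKS = {
    "vaccines cause autism": "false",
    "the earth orbits the sun": "true",
}

def classifier_score(text: str) -> float:
    """Stand-in for a trained model's probability that text is disinformation."""
    return 0.8 if "share before deleted" in text.lower() else 0.3

def fact_check(text: str) -> Optional[str]:
    """Return a verdict if the text matches a known fact-checked claim."""
    for claim, verdict in FACT_CHECKS.items():
        if claim in text.lower():
            return verdict
    return None

def decide(text: str) -> str:
    verdict = fact_check(text)
    if verdict == "false":
        return "demote"              # verified falsehood: act automatically
    score = classifier_score(text)
    if score > 0.9:
        return "demote"              # very confident model: act automatically
    if score > 0.5:
        return "human review"        # uncertain: defer to a moderator
    return "leave"

print(decide("Vaccines cause autism, share before deleted!"))  # demote
print(decide("The committee met on Tuesday."))                 # leave
```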
19. Zuckerberg on AI & disinformation
Some categories of harmful content are easier for AI to identify,
and in others it takes more time to train our systems.
Visual problems, like identifying nudity, are often easier than
nuanced linguistic challenges, like hate speech.
Zuckerberg, M. (15 Nov 2018) ‘A Blueprint for Content Governance and Enforcement’,
Facebook Notes, https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/
20. Legislators should not push this
difficult judgment exercise onto
online intermediaries
Restrictions on freedom of expression must be
provided by law, pursue a legitimate aim, and be
proven necessary and the least restrictive means
to pursue that aim.
The illegality of disinformation should be proven before
filtering or blocking is deemed suitable.
Human rights law is paramount in maintaining freedom online.
21. AI is not a silver bullet
Automated technologies are limited in their accuracy,
especially for expression where cultural or contextual cues are necessary.
It is imperative that legislators consider which measures
may provide a bulwark against disinformation
without introducing AI-generated censorship of European citizens.
22. Options and forms of regulation
Typology of regulation, with implications/notes:
0. Status quo: Corporate Social Responsibility, single-company initiatives.
Note that enforcement of the new General Data Protection Regulation and the proposed revised ePrivacy Regulation, plus the agreed text for the new AVMS Directive, would all continue and likely expand.
1. Non-audited self-regulation: industry code of practice, transparency reports, self-reporting.
Corporate agreement on principles for common technical solutions, and the Santa Clara Principles.
2. Audited self-regulation: European Code of Practice of Sept 2018; Global Network Initiative published audit reports.
Open, interoperable, publicly available standard, e.g. a commonly engineered/designed standard for content removal to which platforms could certify compliance.
3. Formal self-regulator: powers to expel non-performing members; dispute resolution ruling/arbitration on cases.
Commonly engineered standard for content filtering or algorithmic moderation; requirement for members of the self-regulatory body to conform to the standard or prove equivalence. Particular focus on content ‘Put Back’ metrics and the efficiency/effectiveness of the appeal process.
4. Co-regulation: industry code approved by Parliament or regulator(s) with statutory powers to supplant.
Government-approved technical standard for filtering or other forms of moderation; examples from broadcast and advertising regulation.
5. Statutory regulation: formal regulation – tribunal with judicial review.
National Regulatory Agencies, though note many overlapping powers between agencies on e.g. freedom of expression.
23. #KeepItOn Are national emergency
& superior court criteria effective?
Fragile democracies may have weak
executive or judicial institutions.
Who decides what is fraudulent?
May contribute to bias if enforced by
already distrusted institutions
Hungary, Poland
Current COVID-19 live disinformation examples
24. Five EU recommendations
1. Media literacy and user choice
2. Strong human review and appeal processes
where AI is used
3. Independent appeal and audit of platforms
4. Standardising notice and appeal procedures,
creating a multistakeholder body for appeals
5. Transparency in AI disinformation techniques
25. 1. Disinformation is best tackled
through media pluralism/literacy
These allow diversity of expression and choice.
Source transparency indicators are preferable to
(de)prioritisation of disinformation.
Users need the opportunity to understand
how search results or social media feeds are built
and make changes where desirable.
26. 2. Advise against regulatory action
to encourage increased use of AI
for content moderation (CoMo) purposes
without strong, independent
human review and appeal processes.
27. 3. Recommend independent appeal
and audit of platforms’ regulation,
introduced as soon as feasible.
For technical intermediaries’ moderation of
content & accounts:
1. detailed and transparent policies,
2. notice and appeal procedures, and
3. regular reports are crucial.
This holds for automated removals as well.
28. 4. Standardising notice and appeal
procedures and reporting,
creating a self- or co-regulatory multistakeholder body:
the UN Special Rapporteur’s suggested “social media council”.
A multistakeholder body could have competence
to deal with industry-wide appeals, enabling
better understanding and minimisation of the effects of AI
on freedom of expression and media pluralism.
29. 5. Lack of independent evidence or
detailed research in this policy area
means the risk of harm remains far too high
for any degree of policy or regulatory certainty.
Greater transparency must be introduced
into AI and disinformation reduction techniques
used by online platforms and content providers.
30. Questions
What’s missing?
What evidence is needed?
What should be regulated by platforms?
What should be subject to effective court
oversight (not just theory)?
Does an oversight board or a co-regulator
work better in theory? And practice?
34. What should be subject to effective
court oversight (not just theory)?
Not the US juridification approach!
35. Does an oversight board or a co-
regulator work better in theory?
And practice?
36. Recs 3 + 4. Disinformation best
tackled with digital literacy
But it should be regulated with law. Mismatch?
We cannot all be Finland, with schooling on disinformation.
The most misinformed are the dementia generation – the over-70s.
Is media literacy the last refuge of the deregulatory?
4b. In relation to smaller and dual-purpose platforms:
appropriate for big (US) platforms with fairly tried-and-tested ToS,
but difficult for smaller competitors? Innovation defence?
Does a threshold of 2 million (5%) or 5 million (12%) users cut it?
DE: 1 Oct 2017 Netzwerkdurchsetzungsgesetz (Network Enforcement Act, NetzDG)
FR: Law of 22 Dec 2018 relating to the fight against the manipulation of information
37. Is there a regulatory approach to
design accountability into platforms?
See the Shorenstein Report &
the Santa Clara Principles
David Kaye, UN Special Rapporteur on Freedom of
Expression:
social media councils
Scandinavian press council model? Or GNI?
Fact (not face) checking
Do these models all breach freedom of expression & privacy?
Human rights impact assessments