Regulating Disinformation –
Regulatory Options
Prof. Chris Marsden (Sussex)
PSU Pilot-Kluge Center workshop
3 April 2020
Specific Policy Options:
The Co-Regulatory Triangle
Defining disinformation
“False, inaccurate or misleading information
designed, presented and promoted to
intentionally cause public harm or for profit”
In line with European Commission High Level Expert Group
We distinguish disinformation from misinformation,
which refers to unintentionally false or inaccurate information.
Disinformation a rapidly moving target
 We analysed 250 articles, papers and reports
 assessing the strengths and weaknesses of AI-focused disinformation
solutions for freedom of expression, media pluralism & democracy
 We agree with other experts: evidence of harm is still inconclusive
 2016 US Presidential election/UK ‘Brexit’ referendum
 Investigated by the US Department of Justice and UK Parliamentary Committee
Evidence Base is Contextual
The premise of the recommendations is unstable:
Is there empirical evidence for disinformation effects,
and is it an online-specific issue?
If policy demands
hard evidence of disinformation effects:
it does not exist and never has
Newspapers not proven to influence UK 1992
General Election
Disinformation not proven to influence US 2016
Presidential election
Motives are impure – evidence
threshold impossible
Online and offline media – which is
more influential and disinformed?
Should we focus on online media or core source of
misinformation for most people; mainstream media?
Cable TV news; newspapers; shock jock radio
 Noting the elderly rely on existing media more than Facebook
Pollution of online disinformation into MSM?
Online platforms are more important to tell us about
the weaknesses in the existing ecosystem
Digital canary in the media coalmine?
Use of MSM in online clips e.g. Bloomberg/deep fakes
Democracies differently equipped
to handle disinformation
 UK national tabloid newspapers uniquely corrupt
 BBC extraordinarily well resourced independent pubcaster
 Scandinavian press council model
 Satellite TV & commercial TV business structures
Can misinformation affect the
normative context of public discourse
rather than disinformation directly
causing harm and
undermining democratic principles?
Data integrity vs discourse integrity?
Methodology: Literature &
Elite Interviews
Project consists of a literature review, expert interviews and mapping of policy
and technology initiatives on disinformation in the European Union.
 150 regulatory documents and research papers/reports
 10 expert interviews in August-October 2018;
 took part in several expert seminars, including:
 Annenberg-Oxford Media Policy Summer Institute, Jesus College, Oxford, 7 Aug;
 Google-Oxford Internet Leadership Academy, Oxford Internet Institute, 5 Sept;
 Gikii’18 at the University of Vienna, Austria, 13-14 Sept;
 Microsoft Cloud Computing Research Consortium, St John’s Cambridge,17-18 Sept
 We thank all interview respondents and participants for the enlightening
disinformation discussions; all errors remain our own.
 Internet regulatory experts:
 socio-legal scholar with a background in law/economics of mass communications;
 media scholar with a background in Internet policy processes and copyright reform;
 reviewed by a computer scientist with a background in Internet regulation and
fundamental human rights.
 We suggest that this is the bare minimum of interdisciplinary expertise
required to study the regulation of disinformation on social media.
Interdisciplinary study analyses implications
of AI disinformation initiatives
Policy options based on literature, 10 expert interviews & mapping
We warn against technocentric optimism as a solution to disinformation,
 which proposes automated detection, (de)prioritisation, blocking and
 removal by online intermediaries without human intervention.
 Independent, transparent, effective appeal and oversight mechanisms
 are necessary in order to minimise inevitable inaccuracies
Fake news?
International Grand Committee on
Disinformation (“Fake News”)
 Leopoldo Moreau, Chair, Freedom of Expression Commission,
Chamber of Deputies, Argentina,
 Nele Lijnen, member, Committee on Infrastructure, Communications
and Public Enterprises, Parliament of Belgium,
 Alessandro Molon, Member of the Chamber of Deputies, Brazil,
 Bob Zimmer, Chair, and Nathaniel Erskine-Smith and Charlie Angus,
Vice-Chairs, Standing Committee on Access to Information, Privacy
and Ethics, House of Commons, Canada,
 Catherine Morin-Desailly, Chair, Standing Committee on Culture,
Education and Media, French Senate,
 Hildegarde Naughton, Chair, and Eamon Ryan, member, Joint
Committee on Communications, Climate Action and Environment,
Parliament of Ireland,
 Dr Inese Lībiņa-Egnere, Deputy Speaker, Parliament of Latvia,
 Pritam Singh, Edwin Tong and Sun Xueling, members, Select
Committee on Deliberate Online Falsehoods, Parliament of Singapore
 Damian Collins, Chair, DCMS Select Committee, House of Commons
Facebook/Cambridge Analytica via N. Ireland
Different aspects of disinformation
merit different types of regulation
All proposed policy solutions stress the
importance of
literacy and
cybersecurity
Defining Automated Content
Recognition (ACR)
 Among machine-learning techniques advancing towards AI,
 ACR technologies are textual and audio-visual analysis programmes
that are algorithmically trained to identify potential ‘bot’ accounts and
unusual, potentially disinformative material.
ACR refers to both
 the use of automated techniques in the recognition and
 the moderation of content and accounts to assist human judgement.
Moderating content at scale requires ACR to supplement human
editing
ACR to detect disinformation is
prone to false negatives/positives
 due to the difficulty of parsing multiple, complex, and possibly
conflicting meanings emerging from text.
 Inadequate for natural language processing & audiovisual
 including so-called ‘deep fakes’
 (fraudulent representation of individuals in video),
 ACR has more reported success in identifying ‘bot’ accounts.
 We use ‘AI’ to refer to ACR technologies.
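The assistive role described above (ACR supplementing, not replacing, human judgement) can be sketched as a simple routing rule. Everything here is an illustrative assumption: the function name, score source and thresholds are hypothetical, not any platform’s actual system.

```python
def acr_route(score: float,
              auto_flag_above: float = 0.95,
              review_above: float = 0.60) -> str:
    """Route a piece of content by an automated ACR confidence score.

    Only very high-confidence matches are auto-flagged (still subject
    to appeal / 'put back'); mid-confidence cases go to a human review
    queue; the rest are left alone. This keeps ACR in the assistive
    role the slide describes, rather than removing content unaided.
    """
    if score >= auto_flag_above:
        return "auto-flag"
    if score >= review_above:
        return "human-review"
    return "no-action"

# Illustrative routing decisions:
print(acr_route(0.97))  # auto-flag
print(acr_route(0.70))  # human-review
print(acr_route(0.20))  # no-action
```

The policy questions live in the two thresholds: lowering `review_above` floods human reviewers with false positives, while lowering `auto_flag_above` increases the automated over-removal risk the later slides warn about.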
What can AI do to stop disinformation?
Bot accounts identified:
 Facebook removed 1.3 billion in 6 months
Facebook’s AI “ultimately removes 38% of hate speech-related posts”
 it doesn’t have enough training data to be effective outside English and Portuguese
Trained algorithmic detection of fact verification may never be as effective
as human intervention:
 each has accuracy of 76%
 Future work might want to explore how hybrid decision models consisting
of fact verification and data-driven machine learning can be integrated
 Koebler, J., and Cox, J. (23 Aug 2018) ‘The Impossible Job: Inside Facebook’s
Struggle to Moderate Two Billion People’, Motherboard,
https://motherboard.vice.com/en_us/article/xwk9zd/how-facebook-content-
moderation-works
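The “hybrid decision model” suggested above can be sketched as a rule that acts automatically only when two independent signals agree, and otherwise defers to humans. The function, inputs and threshold are hypothetical illustrations, not a published system; with each signal at roughly 76% accuracy, disagreement is frequent enough that the human-review branch carries real load.

```python
def hybrid_decision(ml_score: float,
                    fact_check_score: float,
                    threshold: float = 0.5) -> str:
    """Combine a data-driven classifier with a fact-verification signal.

    Both scores estimate the probability that an item is disinformation.
    Automatic action is taken only when the two imperfect signals agree;
    disagreements are escalated to human review.
    """
    ml_flag = ml_score >= threshold
    fc_flag = fact_check_score >= threshold
    if ml_flag != fc_flag:
        return "human-review"
    return "label-disinformation" if ml_flag else "no-action"

print(hybrid_decision(0.8, 0.9))  # label-disinformation
print(hybrid_decision(0.8, 0.2))  # human-review
print(hybrid_decision(0.1, 0.2))  # no-action
```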
Zuckerberg on AI & disinformation
Some categories of harmful content are easier for AI to identify
and in others it takes more time to train our systems.
Visual problems, like identifying nudity, are often easier than
nuanced linguistic challenges, like hate speech
 Zuckerberg, M. (15 Nov 2018) ‘A Blueprint for Content Governance and Enforcement’,
Facebook Notes, https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-
content-governance-and-enforcement/10156443129621634/
Legislators should not push this
difficult judgment exercise onto
online intermediaries
 Restrictions to freedom of expression must be
 provided by law, legitimate and
 proven necessary and as
 the least restrictive means to pursue the aim.
 The illegality of disinformation should be proven before
filtering or blocking is deemed suitable.
 Human rights laws paramount in maintaining freedom online
AI is not a silver bullet
 Automated technologies are limited in their accuracy,
 especially for expression where cultural or contextual cues necessary
 Imperative that legislators consider which measures
 may provide a bulwark against disinformation
 without introducing AI-generated censorship of European citizens
Options and forms of regulation: typology and implications

0. Status quo. Typology: Corporate Social Responsibility, single-company initiatives.
Implications/Notes: enforcement of the new General Data Protection Regulation and the
proposed revised ePrivacy Regulation, plus the agreed text of the new AVMS Directive,
would all continue and likely expand.
1. Non-audited self-regulation. Typology: industry code of practice, transparency
reports, self-reporting. Implications/Notes: corporate agreement on principles for
common technical solutions; Santa Clara Principles.
2. Audited self-regulation. Typology: European Code of Practice of Sept 2018; Global
Network Initiative published audit reports. Implications/Notes: open, interoperable,
publicly available standard, e.g. a commonly engineered/designed standard for content
removal to which platforms could certify compliance.
3. Formal self-regulator. Typology: powers to expel non-performing members; dispute
resolution ruling/arbitration on cases. Implications/Notes: commonly engineered
standard for content filtering or algorithmic moderation; requirement for members of
the self-regulatory body to conform to the standard or prove equivalence; particular
focus on content ‘put back’ metrics and efficiency/effectiveness of the appeal process.
4. Co-regulation. Typology: industry code approved by Parliament or regulator(s) with
statutory powers to supplant. Implications/Notes: government-approved technical
standard for filtering or other forms of moderation; examples from broadcast and
advertising regulation.
5. Statutory regulation. Typology: formal regulation, tribunal with judicial review.
Implications/Notes: National Regulatory Agencies, though note many overlapping powers
between agencies on e.g. freedom of expression.
#KeepItOn Are national emergency
& superior court criteria effective?
Fragile democracies may have weak
executive or judicial institutions.
Who decides what is fraudulent?
May contribute to bias if enforced by
already distrusted institutions
Hungary, Poland
Current COVID19 live disinformation examples
Five EU recommendations
1. Media literacy and user choice
2. Strong human review and appeal processes
where AI is used
3. Independent appeal and audit of platforms
4. Standardizing notice and appeal procedures
Creating a multistakeholder body for appeals
5. Transparency in AI disinformation techniques
1. Disinformation is best tackled
through media pluralism/literacy
These allow diversity of expression and choice.
Source transparency indicators are preferable to
the (de)prioritisation of disinformation.
Users need the opportunity to understand
how search results or social media feeds are built
and make changes where desirable.
2. Advise against regulatory action
to encourage increased use of AI
for content moderation (CoMo) purposes,
without strong independent
human review
and
appeal processes.
3. Recommend independent appeal
and audit of platforms’ regulation
Introduced as soon as feasible.
For technical intermediaries’ moderation of
content & accounts:
1. detailed and transparent policies,
2. notice and appeal procedures, and
3. regular reports are crucial.
Valid for automated removals as well.
4. Standardizing notice and appeal
procedures and reporting
creating self- or co-regulatory multistakeholder body
UN Special Rapporteur’s suggested “social media council”
Multi-stakeholder body could have competence
to deal with industry-wide appeals
better understanding & minimisation of effects of AI
on freedom of expression and media pluralism.
5. Lack of independent evidence or
detailed research in this policy area
means the risk of harm remains far too high
for any degree of policy or regulatory certainty.
Greater transparency must be introduced
into AI and disinformation reduction techniques
used by online platforms and content providers.
Questions
What’s missing?
What evidence is needed?
What should be regulated by
platforms?
What should be subject to effective
court oversight (not just theory)?
Not the US juridification approach!
Does an oversight board or a co-
regulator work better in theory?
 And practice?
Recs 3 + 4. Disinformation best
tackled with digital literacy
But should be regulated with law. Mismatch?
 We cannot all be Finland, with schooling on disinfo
 Most misinformed are the ‘dementia generation’ – over-70s
 Media literacy the last refuge of the deregulatory?
4b. in relation to smaller and dual-purpose platforms.
 appropriate for big (US) platforms with fairly tried and tested ToS
 but difficult for the smaller competitors? Innovation defence?
Does a threshold of 2 million (5%) or 5 million (12%) cut it?
 DE 1 Oct 2017 Netzwerkdurchsetzungsgesetz (Network Enforcement Act NetzDG)
 FR Law 22 Dec 2018 relating to the fight against manipulation of information
Is there a regulatory approach to
design accountability into platforms?
See Shorenstein Report &
Santa Clara Declaration
David Kaye, UN Rapporteur on Freedom of
Expression
Social media councils
Scandinavian press council model? Or GNI?
Fact (not face) checking
Models all breach freedom of expression & privacy?
Human rights impact assessments
Recommendation 7.
Regulating media vs platforms
 New AVMSD expanding scope to social media platforms
 ERGA v. BEREC v. EDPS issues?

Social Utilities, Dominance and Interoperability: A Modest ProposalGikii 2008...
 
Marsden Net Neutrality Internet Governance Forum 2018 #IGF2018
Marsden Net Neutrality Internet Governance Forum 2018 #IGF2018Marsden Net Neutrality Internet Governance Forum 2018 #IGF2018
Marsden Net Neutrality Internet Governance Forum 2018 #IGF2018
 
The Valetta Effect: GDPR enforcement for Gikii Vienna 14 Sept
The Valetta Effect: GDPR enforcement for Gikii Vienna 14 SeptThe Valetta Effect: GDPR enforcement for Gikii Vienna 14 Sept
The Valetta Effect: GDPR enforcement for Gikii Vienna 14 Sept
 
Marsden Net Neutrality OII
Marsden Net Neutrality OIIMarsden Net Neutrality OII
Marsden Net Neutrality OII
 
Marsden Net Neutrality Annenberg Oxford 2018 #ANOX2018
Marsden Net Neutrality Annenberg Oxford 2018 #ANOX2018Marsden Net Neutrality Annenberg Oxford 2018 #ANOX2018
Marsden Net Neutrality Annenberg Oxford 2018 #ANOX2018
 
Human centric multi-disciplinary NGI4EU Iceland 2018
Human centric multi-disciplinary NGI4EU Iceland 2018Human centric multi-disciplinary NGI4EU Iceland 2018
Human centric multi-disciplinary NGI4EU Iceland 2018
 
Human centric multi-disciplinary @ngi4eu @nesta_uk 21 march
Human centric multi-disciplinary @ngi4eu @nesta_uk 21 marchHuman centric multi-disciplinary @ngi4eu @nesta_uk 21 march
Human centric multi-disciplinary @ngi4eu @nesta_uk 21 march
 
Georgetown Offdata 2018
Georgetown Offdata 2018Georgetown Offdata 2018
Georgetown Offdata 2018
 
IPSA Hannover Marsden 5 December
IPSA Hannover Marsden 5 DecemberIPSA Hannover Marsden 5 December
IPSA Hannover Marsden 5 December
 

Kürzlich hochgeladen

Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphThiyagu K
 
9548086042 for call girls in Indira Nagar with room service
9548086042  for call girls in Indira Nagar  with room service9548086042  for call girls in Indira Nagar  with room service
9548086042 for call girls in Indira Nagar with room servicediscovermytutordmt
 
Disha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdfDisha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdfchloefrazer622
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13Steve Thomason
 
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...Sapna Thakur
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfciinovamais
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docxPoojaSen20
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactPECB
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformChameera Dedduwage
 
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...fonyou31
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactdawncurless
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDThiyagu K
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsTechSoup
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxSayali Powar
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAssociation for Project Management
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3JemimahLaneBuaron
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfsanyamsingh5019
 

Kürzlich hochgeladen (20)

Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
9548086042 for call girls in Indira Nagar with room service
9548086042  for call girls in Indira Nagar  with room service9548086042  for call girls in Indira Nagar  with room service
9548086042 for call girls in Indira Nagar with room service
 
Disha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdfDisha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdf
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
 
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docx
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy Reform
 
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
 
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impact
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across Sectors
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 

Marsden Regulating Disinformation Kluge 342020

  • 1. Regulating Disinformation – Regulatory Options Prof. Chris Marsden (Sussex) PSU Pilot-Kluge Center workshop 3 April 2020
  • 2. Specific Policy Options: The Co-Regulatory Triangle
  • 3. Defining disinformation “False, inaccurate or misleading information designed, presented and promoted to intentionally cause public harm or for profit” In line with the European Commission High Level Expert Group. We distinguish disinformation from misinformation, which refers to unintentionally false or inaccurate information.
  • 4. Disinformation is a rapidly moving target  We analysed 250 articles, papers and reports  strengths and weaknesses of those focussed on AI disinformation solutions for freedom of expression, media pluralism & democracy  We agree with other experts: evidence of harm is still inconclusive  2016 US Presidential election/UK ‘Brexit’ referendum  Investigated by the US Department of Justice and a UK Parliamentary Committee
  • 5. Evidence Base is Contextual The premise of the recommendations is unstable: Is there empirical evidence for disinformation effects  and that it is an online-specific issue?
  • 6. If policy wants hard evidence of disinformation: it does not exist and never has. Newspapers were not proven to influence the UK 1992 General Election. Disinformation was not proven to influence the US 2016 Presidential election. Motives are impure – the evidence threshold is impossible.
  • 7. Online and offline media – which is more influential and disinformed? Should we focus on online media, or on the core source of misinformation for most people: mainstream media? Cable TV news; newspapers; shock jock radio  Noting the elderly rely on existing media more than Facebook. Pollution of online disinformation into MSM? Online platforms are more important for what they tell us about the weaknesses in the existing ecosystem. A digital canary in the media coalmine? Use of MSM in online clips, e.g. Bloomberg/deep fakes
  • 8. Democracies differently equipped to handle disinformation  UK national tabloid newspapers uniquely corrupt  BBC an extraordinarily well-resourced independent pubcaster  Scandinavian press council model  Satellite TV & commercial TV business structures
  • 9. Can misinformation affect the normative context of public discourse, rather than disinformation directly causing harm and undermining democratic principles? Data integrity vs discourse integrity?
  • 10. Methodology: Literature & Elite Interviews Project consists of a literature review, expert interviews and mapping of policy and technology initiatives on disinformation in the European Union.  150 regulatory documents and research papers/reports  10 expert interviews in August-October 2018;  we took part in several expert seminars, including:  Annenberg-Oxford Media Policy Summer Institute, Jesus College, Oxford, 7 Aug;  Google-Oxford Internet Leadership Academy, Oxford Internet Institute, 5 Sept;  Gikii’18 at the University of Vienna, Austria, 13-14 Sept;  Microsoft Cloud Computing Research Consortium, St John’s Cambridge, 17-18 Sept  We thank all interview respondents and participants for the enlightening disinformation discussions; all errors remain our own.  Internet regulatory experts:  a socio-legal scholar with a background in law/economics of mass communications;  a media scholar with a background in Internet policy processes and copyright reform;  reviewed by a computer scientist with a background in Internet regulation and fundamental human rights.  We suggest that this is the bare minimum of interdisciplinary expertise required to study the regulation of disinformation on social media.
  • 11. Interdisciplinary study analyses implications of AI disinformation initiatives Policy options based on literature, 10 expert interviews & mapping We warn against technocentric optimism as a solution to disinformation,  which proposes the use of automated detection, (de)prioritisation, blocking and  removal by online intermediaries without human intervention.  Independent, transparent, effective appeal and oversight mechanisms  are necessary in order to minimise inevitable inaccuracies
  • 13. International Grand Committee on Disinformation (“Fake News”)  Leopoldo Moreau, Chair, Freedom of Expression Commission, Chamber of Deputies, Argentina,  Nele Lijnen, member, Committee on Infrastructure, Communications and Public Enterprises, Parliament of Belgium,  Alessandro Molon, Member of the Chamber of Deputies, Brazil,  Bob Zimmer, Chair, and Nathaniel Erskine-Smith and Charlie Angus, Vice-Chairs, Standing Committee on Access to Information, Privacy and Ethics, House of Commons, Canada,  Catherine Morin-Desailly, Chair, Standing Committee on Culture, Education and Media, French Senate,  Hildegarde Naughton, Chair, and Eamon Ryan, member, Joint Committee on Communications, Climate Action and Environment, Parliament of Ireland,  Dr Inese Lībiņa-Egnere, Deputy Speaker, Parliament of Latvia,  Pritam Singh, Edwin Tong and Sun Xueling, members, Select Committee on Deliberate Online Falsehoods, Parliament of Singapore  Damian Collins, Chair, DCMS Select Committee, House of Commons
  • 15. Different aspects of disinformation merit different types of regulation All proposed policy solutions stress the importance of literacy and cybersecurity
  • 16. Defining Automated Content Recognition (ACR)  Within machine learning techniques that are advancing towards AI,  ACR technologies are textual and audio-visual analysis programmes that are algorithmically trained to identify potential ‘bot’ accounts and unusual potential disinformation material. ACR refers both to  the use of automated techniques in the recognition and  the moderation of content and accounts to assist human judgement. Moderating content at scale requires ACR to supplement human editing
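As an illustration of the slide's point that ACR assists rather than replaces human judgement, a 'bot' account detector can be sketched as a scoring heuristic whose only output is a review-queue decision. This is a toy sketch, not any platform's actual system; every feature name and threshold here is invented for illustration.

```python
# Illustrative sketch only: a toy ACR-style heuristic that flags accounts
# for *human* review. All feature names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_hour: float        # posting frequency
    account_age_days: int        # newer accounts weighted as riskier
    duplicate_post_ratio: float  # share of posts that are near-duplicates

def bot_score(acc: Account) -> float:
    """Return a 0-1 suspicion score; higher means more bot-like."""
    score = 0.0
    if acc.posts_per_hour > 10:    # implausibly high posting rate
        score += 0.4
    if acc.account_age_days < 30:  # very new account
        score += 0.2
    score += 0.4 * min(acc.duplicate_post_ratio, 1.0)  # copy-paste behaviour
    return round(score, 2)

def triage(acc: Account, flag_threshold: float = 0.5) -> str:
    """ACR output: a queue decision, never an automatic removal."""
    return "human_review" if bot_score(acc) >= flag_threshold else "no_action"
```

For example, `triage(Account(50, 5, 0.9))` routes a fast-posting, week-old, copy-pasting account to human review, while `triage(Account(1.0, 400, 0.0))` leaves an ordinary account alone.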
  • 17. ACR to detect disinformation is prone to false negatives/positives  due to the difficulty of parsing multiple, complex, and possibly conflicting meanings emerging from text.  It is inadequate for natural language processing & audiovisual material,  including so-called ‘deep fakes’  (fraudulent representations of individuals in video);  ACR has more reported success in identifying ‘bot’ accounts.  We use ‘AI’ to refer to ACR technologies.
  • 18. What can AI do to stop disinformation? Bot accounts identified:  Facebook removed 1.3 billion in 6 months. Facebook’s AI “ultimately removes 38% of hate speech-related posts”  it doesn’t have enough training data to be effective except in English and Portuguese. Trained algorithmic detection of fact verification may never be as effective as human intervention:  each has an accuracy of 76%  Future work might explore how hybrid decision models consisting of fact verification and data-driven machine learning can be integrated  Koebler, J., and Cox, J. (23 Aug 2018) ‘The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People’, Motherboard, https://motherboard.vice.com/en_us/article/xwk9zd/how-facebook-content-moderation-works
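One plausible shape for the hybrid decision model the slide mentions is confidence-based routing: automation handles only the cases where the model is very sure, and the ambiguous middle band goes to human reviewers. This is a hedged sketch under assumed thresholds, not any platform's documented pipeline.

```python
# Minimal sketch of a hybrid human/AI decision model (assumed thresholds):
# automated action only at high confidence, human review for the rest.
def hybrid_decision(model_prob: float,
                    auto_remove: float = 0.95,
                    auto_keep: float = 0.05) -> str:
    """model_prob: classifier's estimated probability the item is disinformation."""
    if model_prob >= auto_remove:
        return "remove_with_appeal"  # automated action, but appealable
    if model_prob <= auto_keep:
        return "keep"
    return "human_review"            # the uncertain middle band

decisions = [hybrid_decision(p) for p in (0.99, 0.50, 0.01)]
```

The design choice here mirrors the report's recommendations: widening the middle band trades moderation cost for fewer wrongful automated removals.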
  • 19. Zuckerberg on AI & disinformation Some categories of harmful content are easier for AI to identify, and in others it takes more time to train our systems. Visual problems, like identifying nudity, are often easier than nuanced linguistic challenges, like hate speech  Zuckerberg, M. (15 Nov 2018) ‘A Blueprint for Content Governance and Enforcement’, Facebook Notes, https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/
  • 20. Legislators should not push this difficult judgment exercise onto online intermediaries  Restrictions to freedom of expression must be  provided by law, legitimate and  proven necessary, and  the least restrictive means to pursue the aim.  The illegality of disinformation should be proven before filtering or blocking is deemed suitable.  Human rights laws are paramount in maintaining freedom online
  • 21. AI is not a silver bullet  Automated technologies are limited in their accuracy,  especially for expression where cultural or contextual cues are necessary  It is imperative that legislators consider which measures  may provide a bulwark against disinformation  without introducing AI-generated censorship of European citizens
  • 22. Option and form of regulation – typology and implications:
    0 Status quo – Corporate Social Responsibility, single-company initiatives. Note that enforcement of the new General Data Protection Regulation and the proposed revised ePrivacy Regulation, plus the agreed text for the new AVMS Directive, would all continue and likely expand.
    1 Non-audited self-regulation – industry code of practice, transparency reports, self-reporting. Corporate agreement on principles for common technical solutions, and the Santa Clara Principles.
    2 Audited self-regulation – European Code of Practice of Sept 2018; Global Network Initiative published audit reports. Open, interoperable, publicly available standard, e.g. a commonly engineered/designed standard for content removal to which platforms could certify compliance.
    3 Formal self-regulator – powers to expel non-performing members; dispute resolution ruling/arbitration on cases. Commonly engineered standard for content filtering or algorithmic moderation; requirement for members of the self-regulatory body to conform to the standard or prove equivalence. Particular focus on content ‘Put Back’ metrics and efficiency/effectiveness of the appeal process.
    4 Co-regulation – industry code approved by Parliament or regulator(s) with statutory powers to supplant. Government-approved technical standard for filtering or other forms of moderation; examples from broadcast and advertising regulation.
    5 Statutory regulation – formal regulation: tribunal with judicial review. National Regulatory Agencies – though note many overlapping powers between agencies on e.g. freedom of expression.
  • 23. #KeepItOn Are national emergency & superior court criteria effective? Fragile democracies may have weak executive or judicial institutions. Who decides what is fraudulent? May contribute to bias if enforced by already-distrusted institutions: Hungary, Poland. Current COVID-19 live disinformation examples
  • 24. Five EU recommendations 1. Media literacy and user choice 2. Strong human review and appeal processes where AI is used 3. Independent appeal and audit of platforms 4. Standardizing notice and appeal procedures; creating a multistakeholder body for appeals 5. Transparency in AI disinformation techniques
  • 25. 1. Disinformation is best tackled through media pluralism/literacy These allow diversity of expression and choice. Source transparency indicators are preferable to (de)prioritisation of disinformation. Users need the opportunity to understand how search results or social media feeds are built, and to make changes where desirable.
  • 26. 2. We advise against regulatory action to encourage increased use of AI for content moderation (CoMo) purposes without strong, independent human review and appeal processes.
  • 27. 3. Recommend independent appeal and audit of platforms’ regulation, introduced as soon as feasible. For technical intermediaries’ moderation of content & accounts: 1. detailed and transparent policies, 2. notice and appeal procedures, and 3. regular reports are crucial. Valid for automated removals as well.
  • 28. 4. Standardizing notice and appeal procedures and reporting, creating a self- or co-regulatory multistakeholder body: the UN Special Rapporteur’s suggested “social media council”. A multi-stakeholder body could have competence to deal with industry-wide appeals, enabling better understanding & minimisation of the effects of AI on freedom of expression and media pluralism.
  • 29. 5. Lack of independent evidence or detailed research in this policy area means the risk of harm remains far too high for any degree of policy or regulatory certainty. Greater transparency must be introduced into AI and disinformation reduction techniques used by online platforms and content providers.
  • 30. Questions What’s missing? What evidence is needed? What should be regulated by platforms? What should be subject to effective court oversight (not just theory)? Does an oversight board or a co-regulator work better in theory? And practice?
  • 32. What evidence is needed?
  • 33. What should be regulated by platforms?
  • 34. What should be subject to effective court oversight (not just theory)? Not the US juridification approach!
  • 35. Does an oversight board or a co- regulator work better in theory?  And practice?
  • 36. Recs 3 + 4. Disinformation best tackled with digital literacy But should be regulated with law. Mismatch?  We cannot all be Finland, with schooling on disinfo  Most misinformed are the dementia generation – over-70s  Media literacy the last refuge of the deregulatory? 4b. in relation to smaller and dual-purpose platforms.  appropriate for big (US) platforms with fairly tried and tested ToS  but difficult for the smaller competitors? Innovation defence? Does a threshold of 2 million (5%) or 5 million (12%) cut it?  DE 1 Oct 2017 Netzwerkdurchsetzungsgesetz (Network Enforcement Act, NetzDG)  FR Law 22 Dec 2018 relating to the fight against manipulation of information
  • 37. Is there a regulatory approach to design accountability into platforms? See Shorenstein Report & Santa Clara Declaration David Kaye, UN Rapporteur on Freedom of Expression Social media councils Scandinavian press council model? Or GNI? Fact (not face) checking Models all breach freedom of expression & privacy? Human rights impact assessments
  • 38. Recommendation 7. Regulating media vs platforms  New AVMSD expanding scope to social media platforms  ERGA v. BEREC v. EDPS issues?