Regulating Disinformation –
Regulatory Options
Professor Chris Marsden (Sussex)
Brazilian Virtual Seminar
“Freedom of Expression and Online Content Moderation”
22 October 2020
Specific Policy Options:
The Co-Regulatory Triangle
Defining disinformation
“False, inaccurate or misleading information
designed, presented and promoted to
intentionally cause public harm or for profit”
In line with European Commission High Level Expert Group
We distinguish disinformation from misinformation,
which refers to unintentionally false or inaccurate information.
If policy wants
hard evidence of disinformation:
It does not exist and never has
Newspapers not proven to influence UK 1992
General Election
Disinformation not proven to influence US 2016
Presidential election
Motives are impure – evidence
threshold impossible
Online and offline media – which is
more influential and disinformed?
Should we focus on online media, or the core source of
misinformation for most people: mainstream media?
Cable TV news; newspapers; shock jock radio
 Noting elderly rely on existing media more than FBK
Pollution of online disinformation into MSM?
Online platforms are more important to tell us about
the weaknesses in the existing ecosystem
Digital canary in the media coalmine?
Use of MSM in online clips e.g. Bloomberg/deep fakes
Can misinformation affect the
normative context of public discourse
rather than disinformation directly
causing harm and
undermining democratic principles?
Data integrity vs discourse integrity?
General Hayden, former Director
of the US National Security Agency
Governments simply identify criticism as ‘fake news’
Can we vaccinate the public
against fake news?
Methodology 2018-2020:
Literature & Elite Interviews
The project consists of a literature review, expert interviews and a mapping of policy
and technology initiatives on disinformation in the European Union.
 150 regulatory documents and research papers/reports
 10 expert interviews in August-October 2018;
 took part in several expert seminars, including:
 Annenberg-Oxford Media Policy Summer Institute, Jesus College, Oxford, 7 Aug;
 Google-Oxford Internet Leadership Academy, Oxford Internet Institute, 5 Sept;
 Gikii’18 at the University of Vienna, Austria, 13-14 Sept;
 Microsoft Cloud Computing Research Consortium, St John’s Cambridge, 17-18 Sept
 We thank all interview respondents and participants for the enlightening
disinformation discussions; all errors remain our own.
 Internet regulatory experts:
 socio-legal scholar with a background in law/economics of mass communications;
 media scholar with a background in Internet policy processes and copyright reform;
 reviewed by a computer scientist with a background in Internet regulation and
fundamental human rights.
 We suggest that this is the bare minimum of interdisciplinary expertise
required to study the regulation of disinformation on social media.
Interdisciplinary study analyses implications
of AI disinformation initiatives
Policy options based on literature, expert interviews & mapping
We warn against technocentric optimism as a solution to
disinformation,
which proposes use of automated detection,
(de)prioritisation, blocking and
removal by online intermediaries without human intervention.
Independent, transparent and effective appeal and
oversight mechanisms
are necessary to minimise inevitable inaccuracies
Different aspects of disinformation
merit different types of regulation
All proposed policy solutions stress the
importance of
literacy and
cybersecurity
Defining ACR
Automated Content Recognition
 Within Machine Learning techniques that are advancing towards AI,
 ACR technologies are textual and audio-visual analysis programmes
algorithmically trained to identify potential ‘bot’ accounts and
potential disinformation material.
ACR refers to both
 the use of automated techniques in the recognition and
 the moderation of content and accounts to assist human judgement.
Moderating content at scale requires ACR to supplement human editing
ACR to detect disinformation is
prone to false negatives/positives
 due to the difficulty of parsing multiple, complex, and possibly conflicting
meanings emerging from text.
 Inadequate for natural language processing & audiovisual material
 including so-called ‘deep fakes’
 (fraudulent video representations of individuals)
 ACR has more reported success in identifying ‘bot’ accounts
 We use ‘AI’ to refer to ACR technologies.
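The base-rate arithmetic behind these false negatives/positives can be made concrete. A minimal sketch follows; every number in it is an assumption chosen for illustration, not a measured figure from any platform.

```python
# Illustrative base-rate arithmetic: why ACR at scale produces many errors.
# All numbers below are assumptions for illustration only.

def expected_errors(posts: int, disinfo_rate: float,
                    true_positive_rate: float, false_positive_rate: float):
    """Return (false_negatives, false_positives) for a simple classifier."""
    disinfo = posts * disinfo_rate
    legitimate = posts - disinfo
    false_negatives = disinfo * (1 - true_positive_rate)
    false_positives = legitimate * false_positive_rate
    return false_negatives, false_positives

# Assume 100m posts/day, 1% disinformation, a classifier catching 90%
# of disinformation while wrongly flagging 5% of legitimate posts.
fn, fp = expected_errors(100_000_000, 0.01, 0.90, 0.05)
print(f"missed disinformation: {fn:,.0f}")   # 100,000 per day
print(f"wrongly flagged posts: {fp:,.0f}")   # 4,950,000 per day
```

Because legitimate content vastly outnumbers disinformation, even a small false-positive rate swamps the system with wrongful flags, which is why the deck insists on human review and appeal.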
What can AI do to stop disinformation?
Bot accounts: Facebook removed 1.3 billion in 6 months
Facebook’s AI “ultimately removes 38% of hate speech-related posts”
 it doesn’t have enough training data to be effective in languages other than English and Portuguese
Trained algorithmic detection for fact verification may never be as effective
as human intervention:
 each has an accuracy of 76%
 Future work to explore how hybrid decision models consisting of fact
verification and data-driven machine learning can be integrated
 Koebler, J., and Cox, J. (23 Aug 2018) ‘The Impossible Job: Inside Facebook’s
Struggle to Moderate Two Billion People’, Motherboard,
https://motherboard.vice.com/en_us/article/xwk9zd/how-facebook-content-
moderation-works
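One way the hybrid decision model mentioned above could operate is to let the automated score only route content to human reviewers, never remove it autonomously. The following is a hypothetical sketch; the thresholds and routing labels are illustrative assumptions, not any platform's actual policy.

```python
# A hypothetical hybrid pipeline: the classifier score routes content,
# and humans, not the model, make all removal decisions.

def route(score: float, low: float = 0.2, high: float = 0.9) -> str:
    """Route content by classifier confidence that it is disinformation.

    Below `low` the item is left up; at or above `high` it is queued for
    priority human review (never auto-removed); the uncertain middle band
    goes to ordinary human review. Thresholds are illustrative.
    """
    if score < low:
        return "leave up"
    if score >= high:
        return "priority human review"
    return "human review"

print(route(0.05))  # leave up
print(route(0.50))  # human review
print(route(0.95))  # priority human review
```

Keeping humans as the decision-makers is what distinguishes this design from the fully automated removal the deck warns against.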
Zuckerberg on AI & disinformation
Some categories of harmful content are easier for AI to identify
and in others it takes more time to train our systems.
Visual problems, like identifying nudity, are often easier than
nuanced linguistic challenges, like hate speech
 Zuckerberg, M. (15 Nov 2018) ‘A Blueprint for Content Governance and Enforcement’,
Facebook Notes, https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-
content-governance-and-enforcement/10156443129621634/
Legislators should not push this
difficult judgment exercise onto
online intermediaries
 Restrictions to freedom of expression must be
 provided by law, legitimate and
 proven necessary, and
 the least restrictive means to pursue the aim.
 The illegality of disinformation should be proven before filtering
or blocking is deemed suitable.
 Human rights laws paramount in maintaining freedom online
AI is not a silver bullet
Automated technologies are limited in their accuracy,
especially for expression where
cultural or contextual cues are necessary
It is imperative that legislators consider measures
which may provide a bulwark against disinformation
without introducing AI-generated censorship of citizens
Options and forms of regulation, with typology and implications/notes:

0. Status quo: Corporate Social Responsibility, single-company initiatives.
Note that enforcement of the new General Data Protection Regulation and the
proposed revised ePrivacy Regulation, plus the agreed text for the new AVMS
Directive, would all continue and likely expand.

1. Non-audited self-regulation: industry code of practice, transparency
reports, self-reporting.
Corporate agreement on principles for common technical solutions and the
Santa Clara Principles.

2. Audited self-regulation: European Code of Practice of Sept 2018; Global
Network Initiative published audit reports.
Open, interoperable, publicly available standard, e.g. a commonly
engineered/designed standard for content removal to which platforms could
certify compliance.

3. Formal self-regulator: powers to expel non-performing members; dispute
resolution ruling/arbitration on cases.
Commonly engineered standard for content filtering or algorithmic moderation.
Requirement for members of the self-regulatory body to conform to the
standard or prove equivalence. Particular focus on content ‘Put Back’ metrics
and the efficiency/effectiveness of the appeal process.

4. Co-regulation: industry code approved by Parliament or regulator(s) with
statutory powers to supplant.
Government-approved technical standard for filtering or other forms of
moderation. Examples from broadcast and advertising regulation.

5. Statutory regulation: formal regulation, tribunal with judicial review.
National Regulatory Agencies – though note many overlapping powers between
agencies on e.g. freedom of expression.
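The ‘Put Back’ metrics singled out for a formal self-regulator can be computed from three counts. A sketch follows, with all figures assumed for illustration:

```python
# Illustrative 'Put Back' metrics for an appeals process.
# All counts below are assumptions, not reported platform data.
removals = 1_000_000     # items taken down in the period
appeals = 50_000         # removals challenged by users
reinstated = 20_000      # content put back after a successful appeal

appeal_rate = appeals / removals      # share of removals appealed
put_back_rate = reinstated / appeals  # share of appeals upheld
error_floor = reinstated / removals   # lower bound on wrongful removals

print(f"appealed: {appeal_rate:.1%}, put back: {put_back_rate:.1%}, "
      f"known error floor: {error_floor:.1%}")
```

The put-back rate measures appeal effectiveness, while the error floor is only a lower bound: wrongful removals that are never appealed stay invisible, which is one argument for the independent audit the deck recommends.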
Sources – Research Reports
Reports for
European Parliament 2019 (28 nations),
Commonwealth 2020 (56 nations),
UNESCO 2020 (everyone)
http://sro.sussex.ac.uk/view/creators/9904.html
Academic article:
[2020] Platform values and democratic elections:
How can the law regulate digital disinformation?
Chris Marsden, Trisha Meyer, Ian Brown,
Comp. Law & Security Rev. at
https://doi.org/10.1016/j.clsr.2019.105373
#KeepItOn Are national emergency
& superior court criteria effective?
Fragile democracies may have weak
executive or judicial institutions.
Who decides what is fraudulent?
May contribute to bias if enforced by
already distrusted institutions
Hungary, Poland
Current COVID-19 live disinformation examples
Five EU recommendations
1. Media literacy and user choice
2. Strong human review and appeal processes
where AI is used
3. Independent appeal and audit of platforms
4. Standardizing notice and appeal procedures
Creating a multistakeholder body for appeals
5. Transparency in AI disinformation techniques
1. Disinformation is best tackled
through media pluralism/literacy
These allow diversity of expression and choice.
Source transparency indicators are preferable to
(de)prioritisation of disinformation.
Users need the opportunity to understand
how search results or social media feeds are built
and make changes where desirable.
2. Advise against regulatory action
to encourage increased use of AI
for content moderation (CoMo) purposes,
without strong independent
human review
and
appeal processes.
3. Recommend independent appeal
and audit of platforms’ regulation
Introduced as soon as feasible.
For technical intermediaries’ moderation of
content & accounts:
1. detailed and transparent policies,
2. notice and appeal procedures, and
3. regular reports are crucial.
This holds for automated removals as well.
4. Standardizing notice and appeal
procedures and reporting
creating self- or co-regulatory multistakeholder body
UN Special Rapporteur’s suggested “social media council”
A multi-stakeholder body could have competence
to deal with industry-wide appeals, enabling
better understanding & minimisation of the effects of AI
on freedom of expression and media pluralism.
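Standardising notice and appeal procedures implies a common machine-readable record that any platform could emit and an industry-wide body could process. The following is a hypothetical sketch; every field name and value is an assumption for illustration, not an existing standard.

```python
# A hypothetical standardised notice-and-appeal record, sketching the
# fields a cross-platform procedure might require. Not a real schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModerationNotice:
    notice_id: str
    platform: str
    action: str               # e.g. "removal", "deprioritisation", "label"
    automated: bool           # whether ACR/AI made the initial decision
    stated_basis: str         # law or terms-of-service clause relied upon
    appeal_deadline_days: int
    appeal_body: str          # internal, self- or co-regulatory body

notice = ModerationNotice(
    notice_id="N-2020-0001",
    platform="ExamplePlatform",
    action="removal",
    automated=True,
    stated_basis="ToS s.4 (coordinated inauthentic behaviour)",
    appeal_deadline_days=14,
    appeal_body="multistakeholder social media council",
)
print(json.dumps(asdict(notice), indent=2))
```

Recording whether the decision was automated is what would let an auditor or appeals body measure AI's effect on expression, as recommendation 5 demands.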
5. Lack of independent evidence or
detailed research in this policy area
means the risk of harm remains far too high
for any degree of policy or regulatory certainty.
Greater transparency must be introduced
into AI and disinformation reduction techniques
used by online platforms and content providers.
Is there a regulatory approach to
design accountability into platforms?
See Shorenstein Report &
Santa Clara Declaration
David Kaye, UN Rapporteur on Freedom of
Expression
Social media councils
Scandinavian press council model? Or GNI?
Fact (not face) checking
Models all breach freedom of expression & privacy?
Human rights impact assessments
Questions
What’s missing?
What evidence is needed?
What should be regulated by platforms?
What should be subject to effective court
oversight (not just theory)?
Does an oversight board or a co-regulator
work better in theory? And practice?
Recs 3 + 4. Disinformation best
tackled with digital literacy
But should be regulated with law. Mismatch?
 We cannot all be Finland, with schooling on disinfo
 Most misinformed are the dementia generation – the over-70s
 Media literacy the last refuge of the deregulatory?
4b. In relation to smaller and dual-purpose platforms:
 appropriate for big (US) platforms with fairly tried and tested ToS
 but difficult for the smaller competitors? Innovation defence?
Does a threshold of 2 million (5%) or 5 million (12%) cut it?
 DE 1 Oct 2017 Netzwerkdurchsetzungsgesetz (Network Enforcement Act NetzDG)
 FR Law 22 Dec 2018 relating to the fight against manipulation of information

More Related Content

What's hot

Memo: European policy: Gallo report
Memo: European policy: Gallo reportMemo: European policy: Gallo report
Memo: European policy: Gallo report
Steven Lauwers
 
Privacy law and policy 2 - LIS550
Privacy law and policy 2 - LIS550 Privacy law and policy 2 - LIS550
Privacy law and policy 2 - LIS550
Brian Rowe
 
“The Powers That E - CIO”
“The Powers That E - CIO”“The Powers That E - CIO”
“The Powers That E - CIO”
Jeff Kaplan
 

What's hot (20)

2017 Legal Update on Digital Accessibility Cases with Lainey Feingold
2017 Legal Update on Digital Accessibility Cases with Lainey Feingold2017 Legal Update on Digital Accessibility Cases with Lainey Feingold
2017 Legal Update on Digital Accessibility Cases with Lainey Feingold
 
Memo: European policy: Gallo report
Memo: European policy: Gallo reportMemo: European policy: Gallo report
Memo: European policy: Gallo report
 
Privacy law and policy 2 - LIS550
Privacy law and policy 2 - LIS550 Privacy law and policy 2 - LIS550
Privacy law and policy 2 - LIS550
 
Privacy post-Snowden
Privacy post-SnowdenPrivacy post-Snowden
Privacy post-Snowden
 
Cnil 35th activity report 2014
Cnil 35th activity report 2014Cnil 35th activity report 2014
Cnil 35th activity report 2014
 
Where next for the Regulation of Investigatory Powers Act?
Where next for the Regulation of Investigatory Powers Act?Where next for the Regulation of Investigatory Powers Act?
Where next for the Regulation of Investigatory Powers Act?
 
2021 Digital Accessibility Legal Update with Lainey Feingold
2021 Digital Accessibility Legal Update with Lainey Feingold2021 Digital Accessibility Legal Update with Lainey Feingold
2021 Digital Accessibility Legal Update with Lainey Feingold
 
User Privacy or Cyber Sovereignty Freedom House Special Report 2020
User Privacy or Cyber Sovereignty Freedom House Special Report 2020User Privacy or Cyber Sovereignty Freedom House Special Report 2020
User Privacy or Cyber Sovereignty Freedom House Special Report 2020
 
Freedom on the net 2018
Freedom on the net 2018Freedom on the net 2018
Freedom on the net 2018
 
Foi presentation
Foi presentationFoi presentation
Foi presentation
 
Gibson final
Gibson  finalGibson  final
Gibson final
 
Marsden #Regulatingcode MIT
Marsden #Regulatingcode MITMarsden #Regulatingcode MIT
Marsden #Regulatingcode MIT
 
Net Neutrality in Education
Net Neutrality in EducationNet Neutrality in Education
Net Neutrality in Education
 
Surveillance Capitalism
Surveillance  CapitalismSurveillance  Capitalism
Surveillance Capitalism
 
Who Has Your Back 2014: Protecting Your Data From Government Requests
Who Has Your Back 2014: Protecting Your Data From Government RequestsWho Has Your Back 2014: Protecting Your Data From Government Requests
Who Has Your Back 2014: Protecting Your Data From Government Requests
 
“The Powers That E - CIO”
“The Powers That E - CIO”“The Powers That E - CIO”
“The Powers That E - CIO”
 
Report on zero rating and its definition – 18 annenberg-oxford media policy s...
Report on zero rating and its definition – 18 annenberg-oxford media policy s...Report on zero rating and its definition – 18 annenberg-oxford media policy s...
Report on zero rating and its definition – 18 annenberg-oxford media policy s...
 
Where next for encryption regulation?
Where next for encryption regulation?Where next for encryption regulation?
Where next for encryption regulation?
 
The Crisis of Self Sovereignty in The Age of Surveillance Capitalism
The Crisis of Self Sovereignty in The Age of Surveillance CapitalismThe Crisis of Self Sovereignty in The Age of Surveillance Capitalism
The Crisis of Self Sovereignty in The Age of Surveillance Capitalism
 
Draft data protection regn 2012
Draft data protection regn 2012Draft data protection regn 2012
Draft data protection regn 2012
 

Similar to Marsden regulating disinformation Brazil 2020

Governing algorithms – perils and powers of ai in the public sector1(1)
Governing algorithms – perils and powers of ai in the public sector1(1)Governing algorithms – perils and powers of ai in the public sector1(1)
Governing algorithms – perils and powers of ai in the public sector1(1)
PanagiotisKeramidis
 
AI NOW REPORT 2018
AI NOW REPORT 2018AI NOW REPORT 2018
AI NOW REPORT 2018
Peerasak C.
 

Similar to Marsden regulating disinformation Brazil 2020 (20)

Marsden Regulating Disinformation Kluge 342020
Marsden Regulating Disinformation Kluge 342020Marsden Regulating Disinformation Kluge 342020
Marsden Regulating Disinformation Kluge 342020
 
Marsden Disinformation Algorithms #IGF2019
Marsden Disinformation Algorithms #IGF2019 Marsden Disinformation Algorithms #IGF2019
Marsden Disinformation Algorithms #IGF2019
 
2019: Regulating disinformation with artificial intelligence (AI)
2019: Regulating disinformation with artificial intelligence (AI)2019: Regulating disinformation with artificial intelligence (AI)
2019: Regulating disinformation with artificial intelligence (AI)
 
Presentation at COMPACT Project event in Riga - Disinformation, Media literac...
Presentation at COMPACT Project event in Riga - Disinformation, Media literac...Presentation at COMPACT Project event in Riga - Disinformation, Media literac...
Presentation at COMPACT Project event in Riga - Disinformation, Media literac...
 
Un may 28, 2019
Un may 28, 2019Un may 28, 2019
Un may 28, 2019
 
Oxford Internet Institute 19 Sept 2019: Disinformation – Platform, publisher ...
Oxford Internet Institute 19 Sept 2019: Disinformation – Platform, publisher ...Oxford Internet Institute 19 Sept 2019: Disinformation – Platform, publisher ...
Oxford Internet Institute 19 Sept 2019: Disinformation – Platform, publisher ...
 
ILS presentation on principles of fake news regulation
ILS presentation on principles of fake news regulationILS presentation on principles of fake news regulation
ILS presentation on principles of fake news regulation
 
Artificial Intelligence, elections, media pluralism and media freedom
Artificial Intelligence, elections, media pluralism and media freedom Artificial Intelligence, elections, media pluralism and media freedom
Artificial Intelligence, elections, media pluralism and media freedom
 
Governing algorithms – perils and powers of ai in the public sector1(1)
Governing algorithms – perils and powers of ai in the public sector1(1)Governing algorithms – perils and powers of ai in the public sector1(1)
Governing algorithms – perils and powers of ai in the public sector1(1)
 
COMMON GOOD DIGITAL FRAMEWORK
COMMON GOOD DIGITAL FRAMEWORKCOMMON GOOD DIGITAL FRAMEWORK
COMMON GOOD DIGITAL FRAMEWORK
 
Aiws presentation leeper rebecca
Aiws presentation leeper rebeccaAiws presentation leeper rebecca
Aiws presentation leeper rebecca
 
Ethics of computing in pharmaceutical research
Ethics of computing in pharmaceutical researchEthics of computing in pharmaceutical research
Ethics of computing in pharmaceutical research
 
online-disinformation-human-rights
online-disinformation-human-rightsonline-disinformation-human-rights
online-disinformation-human-rights
 
State of the art research on Convergence and Social Media A Compendium on R&D...
State of the art research on Convergence and Social Media A Compendium on R&D...State of the art research on Convergence and Social Media A Compendium on R&D...
State of the art research on Convergence and Social Media A Compendium on R&D...
 
HUMAN RIGHTS IN THE AGE OF ARTIFICIAL INTELLIGENCE
HUMAN RIGHTS IN THE AGE OF ARTIFICIAL INTELLIGENCEHUMAN RIGHTS IN THE AGE OF ARTIFICIAL INTELLIGENCE
HUMAN RIGHTS IN THE AGE OF ARTIFICIAL INTELLIGENCE
 
AI NOW REPORT 2018
AI NOW REPORT 2018AI NOW REPORT 2018
AI NOW REPORT 2018
 
What's Next: The World of Fake News
What's Next: The World of Fake NewsWhat's Next: The World of Fake News
What's Next: The World of Fake News
 
SCL Annual Conference 2019: Regulating social media platforms for interoperab...
SCL Annual Conference 2019: Regulating social media platforms for interoperab...SCL Annual Conference 2019: Regulating social media platforms for interoperab...
SCL Annual Conference 2019: Regulating social media platforms for interoperab...
 
How does fakenews spread understanding pathways of disinformation spread thro...
How does fakenews spread understanding pathways of disinformation spread thro...How does fakenews spread understanding pathways of disinformation spread thro...
How does fakenews spread understanding pathways of disinformation spread thro...
 
"A principle based approach to regulating fake news"
"A principle based approach to regulating fake news""A principle based approach to regulating fake news"
"A principle based approach to regulating fake news"
 

More from Chris Marsden

Georgetown Offdata 2018
Georgetown Offdata 2018Georgetown Offdata 2018
Georgetown Offdata 2018
Chris Marsden
 

More from Chris Marsden (20)

QUT Regulating Disinformation with AI Marsden 2024
QUT Regulating Disinformation with AI Marsden 2024QUT Regulating Disinformation with AI Marsden 2024
QUT Regulating Disinformation with AI Marsden 2024
 
Aligarh Democracy and AI.pptx
Aligarh Democracy and AI.pptxAligarh Democracy and AI.pptx
Aligarh Democracy and AI.pptx
 
CPA Democracy and AI.pptx
CPA Democracy and AI.pptxCPA Democracy and AI.pptx
CPA Democracy and AI.pptx
 
Generative AI, responsible innovation and the law
Generative AI, responsible innovation and the lawGenerative AI, responsible innovation and the law
Generative AI, responsible innovation and the law
 
Evidence base for AI regulation.pptx
Evidence base for AI regulation.pptxEvidence base for AI regulation.pptx
Evidence base for AI regulation.pptx
 
Gikii23 Marsden
Gikii23 MarsdenGikii23 Marsden
Gikii23 Marsden
 
#Gikii23 Marsden
#Gikii23 Marsden#Gikii23 Marsden
#Gikii23 Marsden
 
Generative AI and law.pptx
Generative AI and law.pptxGenerative AI and law.pptx
Generative AI and law.pptx
 
Marsden CELPU 2021 platform law co-regulation
Marsden CELPU 2021 platform law co-regulationMarsden CELPU 2021 platform law co-regulation
Marsden CELPU 2021 platform law co-regulation
 
Marsden Interoperability European Parliament 13 October
Marsden Interoperability European Parliament 13 OctoberMarsden Interoperability European Parliament 13 October
Marsden Interoperability European Parliament 13 October
 
Net neutrality 2021
Net neutrality 2021Net neutrality 2021
Net neutrality 2021
 
Social Utilities, Dominance and Interoperability: A Modest ProposalGikii 2008...
Social Utilities, Dominance and Interoperability: A Modest ProposalGikii 2008...Social Utilities, Dominance and Interoperability: A Modest ProposalGikii 2008...
Social Utilities, Dominance and Interoperability: A Modest ProposalGikii 2008...
 
Marsden Net Neutrality Internet Governance Forum 2018 #IGF2018
Marsden Net Neutrality Internet Governance Forum 2018 #IGF2018Marsden Net Neutrality Internet Governance Forum 2018 #IGF2018
Marsden Net Neutrality Internet Governance Forum 2018 #IGF2018
 
Marsden Net Neutrality OII
Marsden Net Neutrality OIIMarsden Net Neutrality OII
Marsden Net Neutrality OII
 
Marsden Net Neutrality Annenberg Oxford 2018 #ANOX2018
Marsden Net Neutrality Annenberg Oxford 2018 #ANOX2018Marsden Net Neutrality Annenberg Oxford 2018 #ANOX2018
Marsden Net Neutrality Annenberg Oxford 2018 #ANOX2018
 
Human centric multi-disciplinary NGI4EU Iceland 2018
Human centric multi-disciplinary NGI4EU Iceland 2018Human centric multi-disciplinary NGI4EU Iceland 2018
Human centric multi-disciplinary NGI4EU Iceland 2018
 
Human centric multi-disciplinary @ngi4eu @nesta_uk 21 march
Human centric multi-disciplinary @ngi4eu @nesta_uk 21 marchHuman centric multi-disciplinary @ngi4eu @nesta_uk 21 march
Human centric multi-disciplinary @ngi4eu @nesta_uk 21 march
 
Georgetown Offdata 2018
Georgetown Offdata 2018Georgetown Offdata 2018
Georgetown Offdata 2018
 
IPSA Hannover Marsden 5 December
IPSA Hannover Marsden 5 DecemberIPSA Hannover Marsden 5 December
IPSA Hannover Marsden 5 December
 
Marsden #ATDT After the Digital Tornado
Marsden #ATDT After the Digital TornadoMarsden #ATDT After the Digital Tornado
Marsden #ATDT After the Digital Tornado
 

Recently uploaded

The basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptxThe basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptx
heathfieldcps1
 

Recently uploaded (20)

Unit 3 Emotional Intelligence and Spiritual Intelligence.pdf
Unit 3 Emotional Intelligence and Spiritual Intelligence.pdfUnit 3 Emotional Intelligence and Spiritual Intelligence.pdf
Unit 3 Emotional Intelligence and Spiritual Intelligence.pdf
 
Google Gemini An AI Revolution in Education.pptx
Google Gemini An AI Revolution in Education.pptxGoogle Gemini An AI Revolution in Education.pptx
Google Gemini An AI Revolution in Education.pptx
 
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
 
How to setup Pycharm environment for Odoo 17.pptx
How to setup Pycharm environment for Odoo 17.pptxHow to setup Pycharm environment for Odoo 17.pptx
How to setup Pycharm environment for Odoo 17.pptx
 
Plant propagation: Sexual and Asexual propapagation.pptx
Plant propagation: Sexual and Asexual propapagation.pptxPlant propagation: Sexual and Asexual propapagation.pptx
Plant propagation: Sexual and Asexual propapagation.pptx
 
Understanding Accommodations and Modifications
Understanding  Accommodations and ModificationsUnderstanding  Accommodations and Modifications
Understanding Accommodations and Modifications
 
OSCM Unit 2_Operations Processes & Systems
OSCM Unit 2_Operations Processes & SystemsOSCM Unit 2_Operations Processes & Systems
OSCM Unit 2_Operations Processes & Systems
 
This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.
 
FSB Advising Checklist - Orientation 2024
FSB Advising Checklist - Orientation 2024FSB Advising Checklist - Orientation 2024
FSB Advising Checklist - Orientation 2024
 
The basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptxThe basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptx
 
21st_Century_Skills_Framework_Final_Presentation_2.pptx
21st_Century_Skills_Framework_Final_Presentation_2.pptx21st_Century_Skills_Framework_Final_Presentation_2.pptx
21st_Century_Skills_Framework_Final_Presentation_2.pptx
 
Single or Multiple melodic lines structure
Single or Multiple melodic lines structureSingle or Multiple melodic lines structure
Single or Multiple melodic lines structure
 
Basic Intentional Injuries Health Education
Basic Intentional Injuries Health EducationBasic Intentional Injuries Health Education
Basic Intentional Injuries Health Education
 
Sensory_Experience_and_Emotional_Resonance_in_Gabriel_Okaras_The_Piano_and_Th...
Sensory_Experience_and_Emotional_Resonance_in_Gabriel_Okaras_The_Piano_and_Th...Sensory_Experience_and_Emotional_Resonance_in_Gabriel_Okaras_The_Piano_and_Th...
Sensory_Experience_and_Emotional_Resonance_in_Gabriel_Okaras_The_Piano_and_Th...
 
REMIFENTANIL: An Ultra short acting opioid.pptx
REMIFENTANIL: An Ultra short acting opioid.pptxREMIFENTANIL: An Ultra short acting opioid.pptx
REMIFENTANIL: An Ultra short acting opioid.pptx
 
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdfUGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
 
Interdisciplinary_Insights_Data_Collection_Methods.pptx
Interdisciplinary_Insights_Data_Collection_Methods.pptxInterdisciplinary_Insights_Data_Collection_Methods.pptx
Interdisciplinary_Insights_Data_Collection_Methods.pptx
 
Philosophy of china and it's charactistics
Philosophy of china and it's charactisticsPhilosophy of china and it's charactistics
Philosophy of china and it's charactistics
 
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
 
NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...
NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...
NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...
 

Marsden regulating disinformation Brazil 2020

  • 1. Regulating Disinformation – Regulatory Options Professor Chris Marsden (Sussex) Brazilian Virtual Seminar “Freedom of Expression and Online Content Moderation” 22 October 2020
  • 2. Specific Policy Options: The Co-Regulatory Triangle
  • 3. Defining disinformation “False inaccurate or misleading information designed, presented and promoted to intentionally cause public harm or for profit” In line with European Commission High Level Expert Group We distinguish disinformation from misinformation, which refers to unintentionally false or inaccurate information.
  • 4. If policy wants hard evidence of disinformation: It does not and has never existed Newspapers not proven to influence UK 1992 General Election Disinformation not proven to influence US 2016 Presidential election Motives are impure – evidence threshold impossible
  • 5. Online and offline media – which is more influential and disinformed? Should we focus on online media or core source of misinformation for most people; mainstream media? Cable TV news; newspapers; shock jock radio  Noting elderly rely on existing media more than FBK Pollution of online disinformation into MSM? Online platforms are more important to tell us about the weaknesses in the existing ecosystem Digital canary in the media coalmine? Use of MSM in online clips e.g. Bloomberg/deep fakes
  • 6. Can misinformation affect the normative context of public discourse rather than disinformation directly causing harm and undermining democratic principles? Data integrity vs discourse integrity?
  • 7. General Hayman, former Director of US National Security Agency
  • 9.
  • 10. Can we vaccinate the public against fake news?
  • 11. Methodology 2018-2020: Literature & Elite Interviews Projects consists of a literature review, expert interviews and mapping of policy and technology initiatives on disinformation in the European Union.  150 regulatory documents and research papers/reports  10 expert interviews in August-October 2018;  took part in several expert seminars, including:  Annenberg-Oxford Media Policy Summer Institute, Jesus College, Oxford, 7 Aug;  Google-Oxford Internet Leadership Academy, Oxford Internet Institute, 5 Sept;  Gikii’18 at the University of Vienna, Austria, 13-14 Sept;  Microsoft Cloud Computing Research Consortium, St John’s Cambridge,17-18 Sept  We thank all interview respondent and participants for the enlightening disinformation discussions; all errors remain our own.  Internet regulatory experts:  socio-legal scholar with a background in law/economics of mass communications;  media scholar with a background in Internet policy processes and copyright reform;  reviewed by a computer scientist with a background in Internet regulation and fundamental human rights.  We suggest that this is the bare minimum of interdisciplinary expertise required to study the regulation of disinformation on social media.
  • 12. Interdisciplinary study analyses implications of AI disinformation initiatives Policy options based on literature, expert interviews & mapping We warn against technocentric optimism as a solution to disinformation, proposes use of automated detection, (de)prioritisation, blocking and  removal by online intermediaries without human intervention. Independent, transparent, effective appeal and oversight mechanisms  are necessary in order to minimise inevitable inaccuracies
  • 13. Different aspects of disinformation merit different types of regulation All proposed policy solutions stress the importance of literacy and cybersecurity
  • 14. Defining ACR: Automated Content Recognition
 Within machine learning techniques that are advancing towards AI, ACR technologies are textual and audio-visual analysis programmes that are algorithmically trained to identify potential ‘bot’ accounts and unusual potential disinformation material.
 ACR refers both to the use of automated techniques in the recognition of content, and to the moderation of content and accounts to assist human judgement.
 Moderating content at scale requires ACR to supplement human editing.
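To make the account-recognition side of ACR concrete, here is a deliberately crude one-feature sketch that flags accounts posting at inhuman rates. Everything here is hypothetical (the function name, the 30-posts-per-hour threshold, the single signal); real ACR systems are trained models over many signals, which is exactly why, as the slides argue, such flags should assist rather than replace human review.

```python
from datetime import datetime, timedelta

def looks_like_bot(post_times: list[datetime], min_posts_per_hour: float = 30.0) -> bool:
    """Illustrative heuristic: flag accounts posting faster than a human plausibly could.

    A single-feature rule like this is only a toy; it shows why automated
    flags need human review rather than triggering automatic removal.
    """
    if len(post_times) < 2:
        return False
    span_hours = (max(post_times) - min(post_times)).total_seconds() / 3600
    rate = len(post_times) / max(span_hours, 1e-9)  # posts per hour
    return rate >= min_posts_per_hour

# An account posting every 10 seconds far exceeds any human posting rate.
burst = [datetime(2020, 10, 22, 12, 0, 0) + timedelta(seconds=10 * i) for i in range(7)]
print(looks_like_bot(burst))  # → True
```

Even this toy rule mislabels legitimate high-volume accounts (news wires, emergency services), which previews the false-positive problem discussed on the next slide.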
  • 15. ACR to detect disinformation is prone to false negatives/positives
 due to the difficulty of parsing multiple, complex and possibly conflicting meanings emerging from text.
 Inadequate for natural language processing & audiovisual material, including so-called ‘deep fakes’ (fraudulent representation of individuals in video).
 ACR has had more reported success in identifying ‘bot’ accounts.
 We use ‘AI’ to refer to ACR technologies.
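The false-positive/false-negative problem can be made concrete with a small confusion-matrix calculation. The counts below are invented purely for illustration (they are chosen so the classifier's accuracy looks reassuring while a quarter of its flags still hit legitimate speech); they are not drawn from any real platform's figures.

```python
# Hypothetical confusion-matrix counts for an ACR disinformation classifier
# over 10,000 posts. Illustrative numbers only.
true_positives = 760    # disinformation correctly flagged
false_positives = 240   # legitimate speech wrongly flagged (over-blocking)
false_negatives = 240   # disinformation missed
true_negatives = 8760   # legitimate speech correctly left alone

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
accuracy = (true_positives + true_negatives) / (
    true_positives + false_positives + false_negatives + true_negatives)

print(f"precision={precision:.2f} recall={recall:.2f} accuracy={accuracy:.2f}")
# → precision=0.76 recall=0.76 accuracy=0.95
```

The headline accuracy of 95% hides the fact that roughly one in four automated flags would suppress legitimate expression, which is why the slides insist on human review before removal.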
  • 16. What can AI do to stop disinformation?
Bot accounts: Facebook removed 1.3 billion in 6 months.
Facebook’s AI “ultimately removes 38% of hate speech-related posts”, but it does not have enough training data to be effective beyond English and Portuguese.
Trained algorithmic fact verification may never be as effective as human intervention: each has a reported accuracy of 76%.
Future work should explore how hybrid decision models consisting of fact verification and data-driven machine learning can be integrated.
 Koebler, J., and Cox, J. (23 Aug 2018) ‘The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People’, Motherboard, https://motherboard.vice.com/en_us/article/xwk9zd/how-facebook-content-moderation-works
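One way such a hybrid decision model could be wired together is sketched below. The function name, thresholds and verdict labels are all hypothetical: the point is only that automated action is taken when the machine-learning score and the fact-check verdict agree with high confidence, and everything ambiguous is routed to human review, consistent with the deck's warning against fully automated moderation.

```python
def hybrid_decision(ml_score: float, fact_check_verdict: str) -> str:
    """Route a post using two signals: an ML classifier score in [0, 1]
    (higher = more likely disinformation) and a fact-check verdict
    ('true', 'false', or 'unverified'). Thresholds are illustrative.
    """
    if ml_score >= 0.9 and fact_check_verdict == "false":
        return "deprioritise"   # both signals agree with high confidence
    if ml_score <= 0.1 and fact_check_verdict == "true":
        return "no_action"      # both signals agree the post is benign
    return "human_review"       # disagreement or uncertainty: escalate

print(hybrid_decision(0.95, "false"))       # → deprioritise
print(hybrid_decision(0.05, "true"))        # → no_action
print(hybrid_decision(0.76, "unverified"))  # → human_review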
  • 17. Zuckerberg on AI & disinformation
Some categories of harmful content are easier for AI to identify, and in others it takes more time to train our systems.
Visual problems, like identifying nudity, are often easier than nuanced linguistic challenges, like hate speech.
 Zuckerberg, M. (15 Nov 2018) ‘A Blueprint for Content Governance and Enforcement’, Facebook Notes, https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/
  • 18. Legislators should not push this difficult judgment exercise onto online intermediaries
 Restrictions on freedom of expression must be provided by law, legitimate, proven necessary, and the least restrictive means to pursue the aim.
 The illegality of disinformation should be proven before filtering or blocking is deemed suitable.
 Human rights law is paramount in maintaining freedom online.
  • 19. AI is not a silver bullet
Automated technologies are limited in their accuracy, especially for expression where cultural or contextual cues are necessary.
It is imperative that legislators consider measures which may provide a bulwark against disinformation without introducing AI-generated censorship of citizens.
  • 20. Regulatory options (columns: option and form of regulation; typology of regulation; implications/notes)
Option 0 – Status quo: Corporate Social Responsibility, single-company initiatives. Note that enforcement of the new General Data Protection Regulation and the proposed revised ePrivacy Regulation, plus the agreed text for the new AVMS Directive, would all continue and likely expand.
Option 1 – Non-audited self-regulation: industry code of practice, transparency reports, self-reporting. Corporate agreement on principles for common technical solutions and the Santa Clara Principles.
Option 2 – Audited self-regulation: European Code of Practice of Sept 2018; Global Network Initiative published audit reports. Open, interoperable, publicly available standard, e.g. a commonly engineered/designed standard for content removal to which platforms could certify compliance.
Option 3 – Formal self-regulator: powers to expel non-performing members; dispute resolution ruling/arbitration on cases. Commonly engineered standard for content filtering or algorithmic moderation; requirement for members of the self-regulatory body to conform to the standard or prove equivalence. Particular focus on content ‘Put Back’ metrics and the efficiency/effectiveness of the appeal process.
Option 4 – Co-regulation: industry code approved by Parliament or regulator(s) with statutory powers to supplant. Government-approved technical standard for filtering or other forms of moderation; examples from broadcast and advertising regulation.
Option 5 – Statutory regulation: formal regulation; tribunal with judicial review. National Regulatory Agencies – though note many overlapping powers between agencies on e.g. freedom of expression.
  • 21. Sources – Research Reports Reports for European Parliament 2019 (28 nations), Commonwealth 2020 (56 nations), UNESCO 2020 (everyone) http://sro.sussex.ac.uk/view/creators/9904.html Academic article: [2020] Platform values and democratic elections: How can the law regulate digital disinformation? Chris Marsden, Trisha Meyer, Ian Brown, Comp. Law & Security Rev. at https://doi.org/10.1016/j.clsr.2019.105373
  • 22. #KeepItOn
Are national emergency & superior court criteria effective? Fragile democracies may have weak executive or judicial institutions.
Who decides what is fraudulent? This may contribute to bias if enforced by already distrusted institutions (Hungary, Poland).
Current COVID-19 live disinformation examples.
  • 23. Five EU recommendations
1. Media literacy and user choice
2. Strong human review and appeal processes where AI is used
3. Independent appeal and audit of platforms
4. Standardizing notice and appeal procedures; creating a multistakeholder body for appeals
5. Transparency in AI disinformation techniques
  • 24. 1. Disinformation is best tackled through media pluralism/literacy
These allow diversity of expression and choice.
Source transparency indicators are preferable to (de)prioritisation of disinformation.
Users need the opportunity to understand how search results or social media feeds are built, and to make changes where desirable.
  • 25. 2. We advise against regulatory action that encourages increased use of AI for content moderation (CoMo) purposes without strong independent human review and appeal processes.
  • 26. 3. We recommend independent appeal and audit of platforms’ regulation, introduced as soon as feasible.
For technical intermediaries’ moderation of content & accounts:
1. detailed and transparent policies,
2. notice and appeal procedures, and
3. regular reports are crucial.
This is valid for automated removals as well.
  • 27. 4. Standardizing notice and appeal procedures and reporting; creating a self- or co-regulatory multistakeholder body
The UN Special Rapporteur’s suggested “social media council”: a multi-stakeholder body could have competence to deal with industry-wide appeals, enabling better understanding & minimisation of the effects of AI on freedom of expression and media pluralism.
  • 28. 5. Lack of independent evidence or detailed research in this policy area means the risk of harm remains far too high for any degree of policy or regulatory certainty. Greater transparency must be introduced into AI and disinformation reduction techniques used by online platforms and content providers.
  • 29. Is there a regulatory approach to design accountability into platforms?
See the Shorenstein Report & Santa Clara Declaration.
David Kaye, UN Rapporteur on Freedom of Expression: social media councils.
Scandinavian press council model? Or GNI?
Fact (not face) checking.
Do these models all breach freedom of expression & privacy? Human rights impact assessments.
  • 30. Questions
What’s missing? What evidence is needed?
What should be regulated by platforms? What should be subject to effective court oversight (not just theory)?
Does an oversight board or a co-regulator work better in theory? And in practice?
  • 31. Recs 3 + 4. Disinformation is best tackled with digital literacy, but should be regulated with law. Mismatch?
 We cannot all be Finland, with schooling on disinfo.
 The most misinformed are the dementia generation – the over-70s.
 Is media literacy the last refuge of the deregulatory?
4b. In relation to smaller and dual-purpose platforms:
 appropriate for big (US) platforms with fairly tried and tested ToS,
 but difficult for smaller competitors? Innovation defence? Does a threshold of 2 million (5%) or 5 million (12%) cut it?
 DE: 1 Oct 2017 Netzwerkdurchsetzungsgesetz (Network Enforcement Act, NetzDG)
 FR: Law of 22 Dec 2018 relating to the fight against the manipulation of information