The document discusses issues around disinformation and how different actors, namely platforms, publishers, and public authorities, can address problems related to its scale and scope. It examines responses from platform, publisher, and policy perspectives. Specifically, it looks at what is known about the scale of disinformation problems and the actions different actors could take to counter them.
2. With the move to a more digital,
mobile, and platform-dominated
media environment
People increasingly find and access news and information
via platforms like search engines and social media.
These have empowered citizens in many ways and
are important drivers of attention to established publishers
but have also enabled the distribution of disinformation from
a range of different actors.
3. In a context where citizens are
increasingly sceptical of
platforms,
publishers, and
public authorities,
what do we know about the scale and scope of disinformation
problems
and
what can different actors do to counter the problems we face?
4. The Schleswig-Holstein problem
Three people know the answer:
one’s dead, one’s deranged, and I’ve forgotten… (after Palmerston)
AI is an even more difficult issue:
5. But at least it’s not net neutrality…
Impossible to get the regulatory answer right
Affects the largest telecoms companies (and pensions) in every nation
Actors are global and lobbying is intense
Facebook, Google, Twitter argue they are “special cases”
The only solutions are co-regulatory and involve “do no evil”
Can affect electoral politics – especially the United States and India
Perhaps net neutrality and regulating AI have a lot in common!
7. What is AI?
Artificial Intelligence (AI) technologies aim to reproduce or surpass
abilities (in computational systems) that would require 'intelligence' if
humans were to perform them. [UK EPSRC]
12. Let’s break it down by actors
Platforms
All web hosts but especially
Facebook/WhatsApp/Instagram,
Google/YouTube,
Twitter,
Microsoft/LinkedIn
Publishers
Everybody? Certainly every politician, political party, Twitter user,
as well as all media, old and new
Public authorities
Governments as both regulators and
Producers of ‘official’ information
Politicians
13. It’s not just law that regulates us
Regulation by software code
Self-organisation by users
Self-regulation by companies
Co-regulation within markets
Regulation by states
15. It’s a field of study…
…and in this it differs from “an artificial intelligence”
16. Largely governed through self-regulation
Technology giants appear set to persuade us that
self-regulation remains the only effective route to
legal accountability for machine learning systems,
jeopardising the sustainable introduction of smart
contracts,
permitting algorithmic discrimination and
compromising the implementation of privacy law.
17. Recent public policy focus
AI is the latest iteration of Machine Learning (ML),
but in fact a very early stage of any defined ‘real’ AI.
ML is a subset of human-computer interaction
(HCI), combining:
1. algorithms
a Persian/Greek term: a procedure applied to a mathematical rule
2. (big) data
Focus on discrimination that occurs in machine
learning, parsed into the interaction of the two
18. Data cleaning is expensive
70-80% of the cost of AI research is in the data set – often outsourced
Graham, M. & Anwar, M.A. (2019) The global gig economy:
Towards a planetary labour market? First Monday,
doi: https://doi.org/10.5210/fm.v24i4.9913
ILO research: http://www.ilo.org/public/libdoc/ilo/2016/490648.pdf
20. AI Now Institute (NYU) set up by Google’s Meredith Whittaker
& Microsoft’s Kate Crawford
21. Discriminatory data is likely to lead
to discriminatory results
Discriminatory algorithms,
as well as those not designed to filter out discrimination,
can make those results more discriminatory.
Justice requires that lawyers study algorithmic outcomes
in order to ascertain such discrimination –
a remedy which may be highly inefficient, as well as
an affront to natural justice and fundamental rights.
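One standard way lawyers and auditors ascertain such discrimination from algorithmic outcomes is a disparate impact test. A minimal sketch, using invented loan-approval data and the US EEOC "four-fifths rule" as the threshold (all names and numbers here are illustrative, not from the talk):

```python
# Hypothetical illustration: measuring disparate impact in a set of
# algorithmic decisions. 1 = favourable outcome (e.g. loan approved).

def selection_rate(decisions):
    """Fraction of favourable decisions in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the two groups' selection rates. The EEOC
    'four-fifths rule' flags ratios below 0.8 as potentially
    discriminatory."""
    return selection_rate(group_a) / selection_rate(group_b)

# Invented outcomes for two demographic groups
group_a = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]   # 20% approved
group_b = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 2))  # 0.25 — well below the 0.8 threshold
```

The point of the slide stands: computing this ratio case by case, after the fact, is an inefficient remedy compared with regulation of the decision system itself.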
22. Public administration has generic
solutions
Administrative law
Natural justice – at least ‘reasonableness’
Right to explanation/remedy?
Discrimination law –
applies to corporate decisions
Specialist technology law
Biomedical/nanotech
Railways, roads, telecoms
Data Protection
25. Focus in this talk is on the private
activities of private companies
Judges may solve the problem in tort/contract
It only took 100 years in the case of railway litigation…
It would require 1,000 technologically savvy judges…
Partly as a result of the failures of the Victorian pre-regulatory period,
we now have:
anti-discrimination and equality laws,
financial regulation,
consumer contract law,
telecommunications regulation, etc.
27. Not ethical programming but legal
compliance programming
Really smart contracts
Smart contracts were done TO you, not FOR you
‘Tap on/off’ Opal/Octopus cards
EULAs and click-wrap licences no-one reads:
Lemley 2000
Freedom of contract online = ‘freedom of the fox
in the barnyard’?
Changing the terms of trade to consumer law:
PROSUMER LAW
28. New Law of Robotics:
Ignorantia juris non excusat
“Ignorance of the law is no excuse” – Aristotle
Margetts, Helen and Dunleavy, Patrick (2013) The second wave of digital-era
governance: a quasi-paradigm for government on the Web,
Philosophical Transactions of the Royal Society of London A: Mathematical,
Physical and Engineering Sciences, Vol. 371, at
http://rsta.royalsocietypublishing.org/content/371/1987/20120382.abstract
Laird, L. (2016) As Governments Open Access to Data, Law Lags Far Behind
http://www.abajournal.com/news/article/as_governments_open_access_to_data_law_lags_far_behind
Council of Europe CM/Rec(2014) 6 Recommendation of the Committee of
Ministers to member states on a guide on human rights for Internet users
Adopted by the Committee of Ministers on 16 April 2014 at the 1197th meeting of
the Ministers' Deputies
30. Council of Europe: to err is human;
introducing AI complexity does not absolve
31. Caveat: regulation may not be suitable,
appropriate or feasible for many algorithms
But for those that regulators are most
concerned about, in
sectors making the most sensitive
socioeconomic decisions,
it is a remedy that can be explored.
32. Sensitive public facing sectors?
Banking/Credit, Insurance,
Medical Care & Research,
Social Care,
Policing and Security,
Education,
Transport
AI-piloted Airliners &
Autonomous Vehicles,
Social media
Telecommunications.
33. BTW, don’t care (as much) about
big data that’s totally anonymized
Astrophysicists need not fear – geneticists
might….
1. Pseudonymization is not anonymization
2. If you don’t believe that, give me (or
immigration officials) your phone for 2 minutes
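Why pseudonymization is not anonymization can be shown with a classic linkage attack: replace the name with an opaque ID, and the record can still be re-identified by joining on quasi-identifiers that survive, such as postcode and birth year. A minimal sketch with entirely invented records:

```python
# Sketch of a linkage attack on pseudonymised data. All records are
# invented for illustration.

pseudonymised_health = [
    {"id": "u1", "postcode": "BN1 9RH", "birth_year": 1975, "diagnosis": "X"},
    {"id": "u2", "postcode": "2000",    "birth_year": 1990, "diagnosis": "Y"},
]

public_register = [
    {"name": "Alice", "postcode": "BN1 9RH", "birth_year": 1975},
    {"name": "Bob",   "postcode": "2000",    "birth_year": 1990},
]

def reidentify(health_records, register):
    """Join pseudonymised records to named ones on quasi-identifiers."""
    hits = []
    for h in health_records:
        for r in register:
            if (h["postcode"], h["birth_year"]) == (r["postcode"], r["birth_year"]):
                hits.append((r["name"], h["diagnosis"]))
    return hits

print(reidentify(pseudonymised_health, public_register))
# [('Alice', 'X'), ('Bob', 'Y')] — the pseudonym added no protection
```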
34. Transparency and replicability are
not the solutions to AI/ML problems
Transparency is the first requirement of legal recourse
(though some algorithms can be reverse engineered without
transparency “under the hood” of the machine).
It is not sufficient, however, for several reasons,
despite claims that the ability to study an algorithm and its operation
provides a remedy for users who suffer as a result of its decisions.
35. Things change!
Both the training data and the algorithm itself will change constantly,
e.g. it is impossible to forecast real-time outcomes of Google searches:
a vast SEO business attempts approximations without complete accuracy.
The remedy that can be achieved is only replicability –
taking an ‘old’ algorithm and its data at a previous point in time
to demonstrate whether algorithm and data became discriminatory.
Estimate just how incomplete a remedy this is:
it effectively allows ‘slow motion replays’
while the game rushes onwards.
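Replicability in this sense means freezing a snapshot of the model and a fingerprint of its training data at a point in time, so a past decision can be replayed even after the live system has drifted. A minimal sketch (the class and method names are hypothetical, not from any real audit framework):

```python
# Sketch of 'replicability' as a remedy: snapshot model weights and a
# hash of the training data, then replay decisions against the frozen
# version. Structure is illustrative only.

import copy
import hashlib
import json

class AuditableModel:
    def __init__(self, weights):
        self.weights = weights
        self.snapshots = {}  # version -> (frozen weights, data fingerprint)

    def snapshot(self, version, training_data):
        """Freeze the current weights and fingerprint the data."""
        data_hash = hashlib.sha256(
            json.dumps(training_data, sort_keys=True).encode()).hexdigest()
        self.snapshots[version] = (copy.deepcopy(self.weights), data_hash)

    def replay(self, version, features):
        """Score `features` with the frozen weights from `version`,
        even if the live model has since changed."""
        weights, _ = self.snapshots[version]
        return sum(w * x for w, x in zip(weights, features))

model = AuditableModel(weights=[0.5, 1.0])
model.snapshot("2019-Q1", training_data=[[1, 2], [3, 4]])
model.weights = [2.0, -1.0]              # the live model drifts on
print(model.replay("2019-Q1", [2, 3]))   # 4.0 — the 'slow motion replay'
```

The slide's caveat applies: the replay only shows what the algorithm did then, while the live system has already moved on.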
37. AI regulation and 'ethics washing'
Undertaken by technology companies and their
professional advisors
to persuade policy makers that
self-regulation is the only effective route to legal
accountability for Machine Learning systems,
1. jeopardising the sustainable introduction of
smart contracts,
2. permitting algorithmic discrimination and
3. compromising implementation of data
protection law.
39. Ethics washing will fail
Cursory research into
history of communications regulation and
Internet law
demonstrates the falsity of this self-regulation proposition.
See:
Marsden, C. (2018) “Prosumer Law and Network Platform Regulation: The Long
View Towards Creating Offdata”, 2 Georgetown Tech. L.R. 2, pp.376-398;
Marsden, C. and T. Meyer (2019) Report for European Parliament: “The effects of
automated content recognition (ACR) technology-based disinformation initiatives on
freedom of expression and media pluralism”
40. Need for systematic redress
by external agency
Ben Wagner (2019) Liable, but Not in Control?
Ensuring Meaningful Human Agency in Automated Decision-Making
Systems, Policy & Internet, Vol. 11, No. 1, 2019, 104-122 at
https://onlinelibrary.wiley.com/doi/pdf/10.1002/poi3.198
Self-driving cars,
police searches using social media/PNR,
Facebook content moderation
42. Research ethics topics
1. Personally identifiable data
Directive 95/46/EC & GDPR equivalents
Council of Europe Convention 108 has 55 signatories
see Greenleaf
Ethics of personal data collection
User informed consent and reuse
2. Proprietary data
The unknown unknowns
43. GDPR is NOT a panacea
“Whereas Google and Facebook in Washington DC faced from 2012 what
turned out to be pretty poor privacy protection via an independent audit of
their new products, the situation in the small towns of Europe was even
worse. Irish regulator has never fined a single company a single €.”
Chris Marsden and Michael Veale, Gikii 2018, Vienna, 9 September
Specific Sensitive Data Deregulation (SSDD): the Portarlington-Valetta-Nicosia-
Bucharest Effect in Global Data Protection Law
For contrast, Omer Tene, The Irish DPC is fit: A response to Shaw, May 11,
2018, IAPP, https://iapp.org/news/a/the-irish-dpc-is-fit-a-response-to-shaw/
“advising controllers and conducting prior consultations is a central pillar of a
DPA’s role under the express language of GDPR (GDPR Articles 36, 51(3)(a)).
The Irish DPC’s role isn’t just to enforce and punish companies; see GDPR Article 51(1):
“monitoring the application of this Regulation, in order to protect the
fundamental rights and freedoms of natural persons in relation to processing
and to facilitate the free flow of personal data within the Union”
“This careful balance— protecting privacy while facilitating data flows —the
cornerstone of the data protection framework since the 1980s.”
45. What can and should be done?
1. Ethical standards for all AI deployed in ‘wild’ – to public
1. ISO standards being formed, basic privacy/human rights impact
assessment
2. Interoperability option (not mandated) for public communications providers
– Instant Messaging/Search/Social Media companies
3. APIs of dominant (SMP) operators opened
Based on Microsoft remedies in the longest, most expensive antitrust case in
EC history: case started in 1993 in the US, EU 1998-2010
Google case started 2009 – ongoing a decade later
Commission decision of 27 June 2017, Case AT.39740 Google Search (Shopping)
46. 1. Ethical standards for all AI
deployed in ‘wild’ – to public
ISO standards being formed
1. Can be quite powerful influencers c.f. ISO27001 on cybersecurity
2. Typically technical engineering realm not normative standards
3. Embedded in national laws can become weak coregulatory signal
Basic privacy/human rights impact assessment
1. Proposed by UN Rapporteur Prof. David Kaye
2. Also see ‘Regulating Code’ (Brown/Marsden)
3. AI impact assessment suggested by European Data Protection
Supervisor
47. Standards still important!
Standards Australia chairing ISO Working Party:
ISO/IEC JTC 1/SC 42 Artificial intelligence
https://www.iso.org/committee/6794475.html
Australian Computer Society AI Ethics Committee:
https://www.acs.org.au/governance/ai-ethics-committee.html
Data61 (Australian Commonwealth Scientific and Industrial Research
Organisation (CSIRO):
Dawson D and Schleiger E*, Horton J, McLaughlin J, Robinson C∞, Quezada
G, Scowcroft J, and Hajkowicz S† (2019) Artificial Intelligence: Australia’s Ethics
Framework. Data61 CSIRO, https://data61.csiro.au/en/Our-Work/AI-Framework
Greenleaf, Graham and Clarke, Roger and Lindsay, David F., (2019)
Does AI Need Governance? The Potential Roles of a ‘Responsible Innovation
Organisation’ in Australia; Submission to the Human Rights Commissioner on
the White Paper Artificial Intelligence: Governance and Leadership
http://dx.doi.org/10.2139/ssrn.3346149
UK Information Commissioner’s Office, Feedback request — profiling and
automated decision-making, 6 April 2017,
https://ico.org.uk/media/about-the-ico/consultations/2013894/ico-feedback-
request-profiling-and-automated-decisionmaking.pdf
48. European Union & OECD
Guidelines have the widest acceptance
1. 70+ other RRI guidelines and counting
2. US 2019 Executive Order on AI
3. Industry Australia consultation
4. UK Centre for Data Ethics and Innovation (CDEI) at the Turing Institute
1. Not perfect: industry-sponsored, very light touch
49. Interoperability as an algorithmic
regulatory remedy
An attempt to move beyond glances in the rear-view mirror
(the Silicon Valley mantra is “move fast and break things”)
Enforce access to the dominant regulated company’s API
(Application Programming Interface)
Enables brokers, comparator programmes and regulators
to access algorithms in real time under controlled conditions
to observe the algorithm’s behaviour.
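One way a regulator could use such API access is matched-pair probing: send requests identical except for a protected attribute and compare what the algorithm returns. A minimal sketch, where the scoring endpoint, field names, and the planted bias are all hypothetical stand-ins:

```python
# Sketch of a regulator auditing a platform's mandated API with matched
# pairs of profiles that differ only in one protected attribute.
# The endpoint and field names are invented for illustration.

def audit_pairs(score_api, base_profile, attribute, values):
    """Score profiles identical except for `attribute`, to see whether
    the algorithm's output changes with the protected attribute alone."""
    results = {}
    for v in values:
        profile = dict(base_profile, **{attribute: v})
        results[v] = score_api(profile)
    return results

# A stand-in for the platform's scoring endpoint, with a planted bias
def fake_score_api(profile):
    score = profile["income"] / 1000
    if profile["gender"] == "F":   # the bias the audit should expose
        score -= 5
    return score

base = {"income": 50000, "gender": None}
print(audit_pairs(fake_score_api, base, "gender", ["M", "F"]))
# {'M': 50.0, 'F': 45.0} — a gap traceable to the protected attribute
```

Run against a live API in real time, this is exactly the controlled observation of the algorithm's behaviour the slide describes.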
50. 2. Interoperability option for
public communications providers
Instant Messaging/Search/Social Media companies
1. Not so radical – required for broadcasters and telcos
1. Electronic Programme Guides
2. Telephone numbering schemes
3. NOT interconnection – up to smaller IMs to decide how to comply
4. Co-regulatory standards
2. Not as utilities but as media providers
1. This is NOT common carrier regulation
2. Not equivalent to energy/postal providers
3. Not as publishers but as printers
1. Arguments on fake news/hate speech for another time
2. Attempts to impose ‘Duty of Care’ fiduciary in UK/US are highly inappropriate
53. EU Commissioner Vestager on
interoperability and large platforms
3 June speech: “Competition and the Digital Economy”
https://ec.europa.eu/commission/commissioners/2014-
2019/vestager/announcements/competition-and-digital-economy_en
“Making sure that products made by one company will work
properly with those made by others –
can be vital to keep markets open for competition.”
Microsoft’s takeover of LinkedIn approval depended on
agreement to keep Office working properly,
not just with LinkedIn,
but also with other professional social networks.
“Commission will need to keep a close eye on strategies that
undermine interoperability”
54. 3. Dominant (SMP) operators’
APIs opened
If dominant – competition and consumer remedy
1. ACCC finds dominance by Facebook & Google
2. Only applies to platform aspects of their business
1. i.e. iTunes, not Apple phones
Microsoft remedies in the longest, most expensive
antitrust case in EC history - $5 billion fines
1. Case started in 1993 in US, EU 1998-2014
1. Google case started 2009 – ongoing a decade later
55. Note this is not about the
advertising market (only a proxy)
56. Three models – proposed by
Brown/Marsden 2008, 2013
Model 1: Must-carry obligations
broadcasters & Electronic Programme Guides
Model 2: API disclosure requirements
Microsoft from EC rulings
Case T-201/04, Microsoft v Commission, EU:T:2007:289, 1088
Decision 24 May 2004 Case C-3/37792 Microsoft; Decision of
16 December 2009 in Case 39530 Microsoft (Tying)
Model 3: Interconnect requirements
Applied to telcos, especially with SMP
57. Interoperability? 3 Types
Protocol interoperability
ability of services/products to interconnect technically
usual interoperability in competition policy
Data interoperability
Recalling Mayer-Schonberger/Cukier
Slice of data to competitors
Full protocol interoperability
What telecoms often think of as full
interconnection
58. Why interoperate?
It’s the economics!
Mechanism for achieving any-to-any connectivity –
promotes innovation
There is nothing less valuable than a network with one user!
Interoperability results in increased value of networks
promotes efficient investment in/use of infrastructure
Essential for new entrants to compete with existing
operators on non-discriminatory basis promotes entry
61. Network effects of interoperability
Metcalfe's law states the effect of a
network is proportional to the square
of the number of connected users of
the system
Network 1 has 6 users = 36
Network 2 has 4 users = 16
Network 1 interoperating with Network 2
has 10 users = 100
The users and operators of each
network gain
Networks are interconnected by specifying
the messages that can flow between each
network.
The specification uses an application programming
interface
Basic data flows between the networks;
business rules and value-added information
are not exchanged
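The slide's arithmetic can be checked directly, following Metcalfe's law (value proportional to n²):

```python
# Metcalfe's law: network value grows with the square of user count.

def metcalfe_value(users):
    return users ** 2

n1, n2 = 6, 4
separate = metcalfe_value(n1) + metcalfe_value(n2)  # 36 + 16 = 52
combined = metcalfe_value(n1 + n2)                  # 10**2 = 100

print(separate, combined)  # 52 100
```

Interoperation nearly doubles the total value (52 → 100), which is why both sets of users and operators gain.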
62. Social benefits of interoperability too
Imagine 5 mobile networks that do not interoperate/connect:
forcing all users onto all networks,
if they had the appetite/patience.
Remember when the US had different 2.5G standards to the EU? CDMA/GSM
Or winner-takes-all: Facebook!
Mobiles still combine discrete non-interoperable networks
Skype, WhatsApp, Telegram, Signal, Facebook IM, WeChat
and an interconnected network on the same phone!
SMS text, phone, Internet
63. Is this remedy more broadly applicable?
Banking/insurance/medical algorithmic ‘AI’?
Self-driving vehicles?
Depends on a variety of socio-economic factors
Many sectors have regulators working on ‘regulatory sandpit’ solutions
Interoperability extensively used in sectors with which we are most
familiar
64. Consumer Data Right?
Oz CDR to deliver open banking, open energy and open telecoms?
Many Europeans – well, we few – are very excited about the CDR model
UK Furman Review of Digital Markets: ‘data mobility’
Competition and Markets Authority: Data, Technology & Analytics unit
Innovation and Intelligence team: audit algorithms & research tech markets
67. Christopher Kuner, Fred H. Cate, Orla Lynskey, Christopher Millard, Nora Ni Loideain,
and Dan Jerker B. Svantesson, ‘Expanding the artificial intelligence-data protection
debate’ (2018) 8 (4) International Data Privacy Law, 289
Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of
Automated Decision-Making Does Not Exist in the General Data Protection Regulation’
(2017) 7 (2) International Data Privacy Law 76;
Sandra Wachter, Brent Mittelstadt, Chris Russell, ‘Counterfactual Explanations without
Opening the Black Box: Automated Decisions and the GDPR’ (2018) Harvard JL & Tech 1
Andrew D. Selbst and Julia Powles, ‘Meaningful information and the right to
explanation’ (2017) 7 (4) International Data Privacy Law 233.
Lilian Edwards, Michael Veale, ‘Slave to the algorithm? Why a ’right to an explanation’
is probably not the remedy you are looking for’ (2017) 16 (1) Duke Law & Technology
Review 18;
Lilian Edwards, Michael Veale, ‘Enslaving the Algorithm: From a "Right to an
Explanation" to a "Right to Better Decisions”?’ (2018) 16 (3) IEEE Security & Privacy 46
Lilian Edwards, Michael Veale, ‘Clarity, surprises, and further questions in the Article 29
Working Party draft guidance on automated decision-making and profiling’ (2018) 34 (2)
Computer Law & Security Review 398
68. 10 Steps towards Ethical AI
1. Transparency
Geeks love this; it’s almost meaningless to the average user
2. Explainability
See above – more useful is replicability
3. Consent
See GDPR on meaningful & ‘course of business’
4. Discrimination
Garbage in/Garbage out
5. Accountability to Stakeholders
6. Portability
Australia’s Consumer Data Right!
7. Redress and Appeal
8. Algorithmic Literacy
See ‘how to programme your VCR’
9. Independent oversight
10. Governance
Hosanagar advocates for the creation of an independent Algorithmic
Safety Board, modeled on the Federal Reserve Board
https://www.vox.com/the-highlight/2019/5/22/18273284/ai-algorithmic-bill-of-
rights-accountability-transparency-consent-bias