Bias in algorithmic decision-making:
Standards, Algorithmic Literacy and
Governance
ANSGAR KOENE,
HORIZON DIGITAL ECONOMY RESEARCH INSTITUTE, UNIVERSITY OF NOTTINGHAM
5TH SEPTEMBER 2018
Projects
UnBias – EPSRC funded “Digital Economy” project
◦ Horizon Digital Economy research institute, University of Nottingham
◦ Human Centred Computing group, University of Oxford
◦ Centre for Intelligent Systems and their Application, University of Edinburgh
IEEE P7003 Standard for Algorithmic Bias Considerations
◦ Multi-stakeholder working group with 70+ participants from Academia, Civil-society and Industry
A governance framework for algorithmic accountability and transparency – EP Science and Technology Options Assessment (STOA) report
◦ UnBias; AI Now; Purdue University; EMLS RI
Age Appropriate Design
◦ UnBias; 5Rights
UnBias: Emancipating Users Against Algorithmic
Biases for a Trusted Digital Economy
Standards and policy
Stakeholder workshops
Youth Juries
Algorithms in the news
Theme 1: The Use of Algorithms
Introduces the concept of algorithms
Activities include:
◦ Mapping your online world
◦ Discusses the range of online services that use algorithms
◦ What’s in your personal filter bubble?
◦ Highlights that not everyone gets the same results online
Theme 1: The Use of Algorithms
Activities include:
◦ What kinds of data do algorithms use?
◦ Discusses the range of data collected and inferred by algorithms, and what happens to it
◦ How much is your data worth?
◦ From the perspective of you (the user) and the companies that buy/sell it
Theme 2: Regulation of Algorithms
Uses real-life scenarios to highlight issues surrounding the use of algorithms, and asks: who is responsible when things go wrong?
Participants debate both sides of a case and develop their critical-thinking skills
Theme 3: Algorithm Transparency
The algorithm as a ‘black box’
Discusses the concept of meaningful transparency and the sort of information that young people would like to have when they are online:
◦ What data is being collected about me?
◦ Why?
◦ Where does it go?
Fairness Toolkit
http://unbias.wp.horizon.ac.uk
https://oer.horizon.ac.uk/5rights-youth-juries
IEEE P7000: Model Process for Addressing Ethical Concerns During System Design
IEEE P7001: Transparency of Autonomous Systems
IEEE P7002: Data Privacy Process
IEEE P7003: Algorithmic Bias Considerations
IEEE P7004: Child and Student Data Governance
IEEE P7005: Employer Data Governance
IEEE P7006: Personal Data AI Agent Working Group
IEEE P7007: Ontological Standard for Ethically Driven Robotics and Automation Systems
IEEE P7008: Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
IEEE P7009: Fail-Safe Design of Autonomous and Semi-Autonomous Systems
IEEE P7010: Wellbeing Metrics Standard for Ethical AI and Autonomous Systems
IEEE P7011: Process of Identifying and Rating the Trustworthiness of News Sources
IEEE P7012: Standard for Machine Readable Personal Privacy Terms
Algorithmic systems are socio-technical
Algorithmic systems do not exist in a vacuum
They are built, deployed and used:
◦ by people,
◦ within organizations,
◦ within a social, political, legal and cultural context.
The outcomes of algorithmic decisions can have significant impacts on real, and possibly vulnerable, people.
P7003 - Algorithmic Bias Considerations
All non-trivial* decisions are biased
We seek to minimize bias that is:
◦ Unintended
◦ Unjustified
◦ Unacceptable
as defined by the context where the system is used.
*Non-trivial means the decision space has more than one possible outcome and the choice is not uniformly random.
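One way to make "unintended, unjustified bias" concrete is to compare positive-outcome rates across population groups (a demographic-parity style check). The sketch below is purely illustrative: P7003 does not prescribe any particular metric, and the group labels and data are invented for the example.

```python
# Hedged sketch: outcome-rate disparity across groups as one simple
# proxy for unintended bias. Data and groups are illustrative only.
from collections import defaultdict

def positive_rates(decisions):
    """decisions: iterable of (group, outcome) pairs, outcome in {0, 1}.
    Returns the fraction of positive outcomes per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = positive_rates(decisions)
    return max(rates.values()) - min(rates.values())

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(positive_rates(decisions))  # group A gets positives twice as often
print(parity_gap(decisions))
```

Whether a given gap is acceptable is exactly the contextual judgement the slide describes; the code only surfaces the disparity, it cannot decide whether it is justified.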
Causes of algorithmic bias
Insufficient understanding of the context of use.
Failure to rigorously map decision criteria.
Failure to have explicit justifications for the chosen criteria.
Algorithmic Discrimination
Complex individuals reduced to simplistic binary stereotypes
Key questions when developing or deploying an algorithmic system:
 Who will be affected?
 What are the decision/optimization criteria?
 How are these criteria justified?
 Are these justifications acceptable in the context where the system is used?
P7003 foundational sections
 Taxonomy of Algorithmic Bias
 Legal frameworks related to Bias
 Psychology of Bias
 Cultural aspects
P7003 algorithm development sections
 Algorithmic system design stages
 Person categorization and identifying affected population groups
 Assurance of representativeness of testing/training/validation data
 Evaluation of system outcomes
 Evaluation of algorithmic processing
 Assessment of resilience against external manipulation to Bias
 Documentation of criteria, scope and justifications of choices
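The "assurance of representativeness" item above can be sketched as a simple comparison of group proportions in a training sample against reference population shares. This is a hedged illustration, not a method mandated by the P7003 draft; the group labels, shares and tolerance are assumptions for the example.

```python
# Hedged sketch: flag groups whose share of the training sample falls
# short of their reference population share by more than a tolerance.
# Groups, shares and tolerance are illustrative assumptions.
from collections import Counter

def representation_gaps(sample_groups, population_shares):
    """Return {group: sample_share - population_share} for each group."""
    counts = Counter(sample_groups)
    n = len(sample_groups)
    return {g: counts.get(g, 0) / n - share
            for g, share in population_shares.items()}

def underrepresented(sample_groups, population_shares, tolerance=0.05):
    """Groups whose sample share is more than `tolerance` below target."""
    gaps = representation_gaps(sample_groups, population_shares)
    return sorted(g for g, gap in gaps.items() if gap < -tolerance)

sample = ["A"] * 80 + ["B"] * 15 + ["C"] * 5
population = {"A": 0.6, "B": 0.3, "C": 0.1}
print(underrepresented(sample, population, tolerance=0.08))  # -> ['B']
```

A real assurance process would go further (intersectional groups, statistical tests, label balance), but even this minimal check makes the documentation-of-choices requirement actionable.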
Related AI standards activities
British Standards Institute (BSI) – BS 8611: Guide to the ethical design and application of robots and robotic systems
ISO/IEC JTC 1/SC 42 Artificial Intelligence
◦ SG 1 Computational approaches and characteristics of AI systems
◦ SG 2 Trustworthiness
◦ SG 3 Use cases and applications
◦ WG 1 Foundational standards
In January 2018, China published its “Artificial Intelligence Standardization White Paper”.
A governance framework for
algorithmic accountability and
transparency
EPRS/2018/STOA/SER/18/002
ANSGAR KOENE, UNIVERSITY OF NOTTINGHAM
RASHIDA RICHARDSON & DILLON REISMAN, AI NOW INSTITUTE
YOHKO HATADA, EMLS RI
HELENA WEBB, M. PATEL, J. LAVIOLETTE, C. MACHADO, UNIVERSITY OF OXFORD
CHRIS CLIFTON, PURDUE UNIVERSITY
25TH OCTOBER 2018
Awareness raising: education,
watchdogs and whistleblowers
 “Algorithmic literacy” – teaching core concepts: computational thinking, the role of data and the importance of optimisation criteria.
 Standardised notification to communicate the type and degree of algorithmic processing in decisions.
 Provision of computational infrastructure and access to technical experts to support data analysis etc. for “algorithmic accountability journalism”.
 Whistleblower protection, and protection against prosecution on grounds of breaching copyright or Terms of Service when doing so serves the public interest.
Accountability in public sector use
of algorithmic decision-making
Adoption of Algorithmic Impact Assessments (AIA) for algorithmic systems used in public services:
1. Public disclosure of purpose, scope, intended use and associated policies, self-assessment process and potential implementation timeline.
2. Performing and publishing a self-assessment of the system, with focus on inaccuracies, bias, harms to affected communities, and mitigation plans for potential impacts.
3. Publication of a plan for meaningful, ongoing access for external researchers to review the system.
4. Public participation period.
5. Publication of the final AIA, once issues raised in the public participation period have been addressed.
6. Renewal of AIAs on a regular timeline.
7. Opportunity for the public to challenge failure to mitigate issues raised in the public participation period, or foreseeable outcomes.
Regulatory oversight and Legal liability
 Regulatory body for algorithms:
 Risk assessment
 Investigating algorithmic systems suspected of infringing human rights.
 Advising other regulatory agencies regarding algorithmic systems
 Algorithmic Impact Assessment requirement for systems classified as causing potentially severe non-reversible impact
 Strict tort liability for algorithmic systems with medium severity non-reversible impacts
 Reduced liability for algorithmic systems certified as compliant with best-practice standards.
Global coordination for algorithmic governance
 Establishment of a permanent global Algorithm Governance Forum (AGF)
 Multi-stakeholder dialog and policy expertise related to algorithmic systems
 Based on the principles of Responsible Research and Innovation
 Provide a forum for coordination and exchange of governance best practices
 Strong positions in trade negotiations to protect the regulatory ability to investigate algorithmic systems and hold parties accountable for violations of European laws and human rights.
Age Appropriate Design
What is the Age-Appropriate Design Code?
The Age-Appropriate Design Code sits at section 123 of the UK Data Protection Act 2018 (“DPA”) and will set out the standards of data protection that Information Society Services (“ISS”, known as online services) must offer children. It was brought into UK legislation by Crossbench Peer Baroness Kidron; Parliamentary Under-Secretary at the Department for Digital, Culture, Media and Sport, Lord Ashton of Hyde; Opposition Spokesperson Lord Stevenson of Balmacara; Conservative Peer Baroness Harding of Winscombe; and Liberal Democrat Spokesperson Lord Clement-Jones of Clapham.
ICO to draft code by 25 October 2019
Thank you
http://unbias.wp.horizon.ac.uk
Biography
• Dr. Koene is a Senior Research Fellow at the Horizon Digital Economy Research Institute of the University of Nottingham, where he conducts research on the societal impact of digital technology.
• He chairs the IEEE P7003™ Standard for Algorithmic Bias Considerations working group and leads the policy impact activities of the Horizon institute.
• He is co-investigator on the UnBias project to develop regulation-, design- and
education-recommendations for minimizing unintended, unjustified and
inappropriate bias in algorithmic systems.
• He has over 15 years of experience researching and publishing on topics ranging from robotics, AI and computational neuroscience to human behaviour studies and technology policy recommendations.
• He received his M.Eng. and Ph.D. in Electrical Engineering and Neuroscience, respectively, from Delft University of Technology and Utrecht University.
• He is a trustee of 5Rights, a UK-based charity enabling children and young people to access the digital world creatively, knowledgeably and fearlessly.
Dr. Ansgar Koene
ansgar.koene@nottingham.ac.uk
https://www.linkedin.com/in/akoene/
https://www.nottingham.ac.uk/comp
uterscience/people/ansgar.koene
http://unbias.wp.horizon.ac.uk/

Postal Ballots-For home voting step by step process 2024.pptxSwastiRanjanNayak
 
Incident Command System xxxxxxxxxxxxxxxxxxxxxxxxx
Incident Command System xxxxxxxxxxxxxxxxxxxxxxxxxIncident Command System xxxxxxxxxxxxxxxxxxxxxxxxx
Incident Command System xxxxxxxxxxxxxxxxxxxxxxxxxPeter Miles
 
VIP High Class Call Girls Amravati Anushka 8250192130 Independent Escort Serv...
VIP High Class Call Girls Amravati Anushka 8250192130 Independent Escort Serv...VIP High Class Call Girls Amravati Anushka 8250192130 Independent Escort Serv...
VIP High Class Call Girls Amravati Anushka 8250192130 Independent Escort Serv...Suhani Kapoor
 

Kürzlich hochgeladen (20)

VIP Call Girls Service Bikaner Aishwarya 8250192130 Independent Escort Servic...
VIP Call Girls Service Bikaner Aishwarya 8250192130 Independent Escort Servic...VIP Call Girls Service Bikaner Aishwarya 8250192130 Independent Escort Servic...
VIP Call Girls Service Bikaner Aishwarya 8250192130 Independent Escort Servic...
 
(PRIYA) Call Girls Rajgurunagar ( 7001035870 ) HI-Fi Pune Escorts Service
(PRIYA) Call Girls Rajgurunagar ( 7001035870 ) HI-Fi Pune Escorts Service(PRIYA) Call Girls Rajgurunagar ( 7001035870 ) HI-Fi Pune Escorts Service
(PRIYA) Call Girls Rajgurunagar ( 7001035870 ) HI-Fi Pune Escorts Service
 
CBO’s Recent Appeals for New Research on Health-Related Topics
CBO’s Recent Appeals for New Research on Health-Related TopicsCBO’s Recent Appeals for New Research on Health-Related Topics
CBO’s Recent Appeals for New Research on Health-Related Topics
 
(SUHANI) Call Girls Pimple Saudagar ( 7001035870 ) HI-Fi Pune Escorts Service
(SUHANI) Call Girls Pimple Saudagar ( 7001035870 ) HI-Fi Pune Escorts Service(SUHANI) Call Girls Pimple Saudagar ( 7001035870 ) HI-Fi Pune Escorts Service
(SUHANI) Call Girls Pimple Saudagar ( 7001035870 ) HI-Fi Pune Escorts Service
 
Regional Snapshot Atlanta Aging Trends 2024
Regional Snapshot Atlanta Aging Trends 2024Regional Snapshot Atlanta Aging Trends 2024
Regional Snapshot Atlanta Aging Trends 2024
 
Expressive clarity oral presentation.pptx
Expressive clarity oral presentation.pptxExpressive clarity oral presentation.pptx
Expressive clarity oral presentation.pptx
 
Climate change and occupational safety and health.
Climate change and occupational safety and health.Climate change and occupational safety and health.
Climate change and occupational safety and health.
 
EDUROOT SME_ Performance upto March-2024.pptx
EDUROOT SME_ Performance upto March-2024.pptxEDUROOT SME_ Performance upto March-2024.pptx
EDUROOT SME_ Performance upto March-2024.pptx
 
Zechariah Boodey Farmstead Collaborative presentation - Humble Beginnings
Zechariah Boodey Farmstead Collaborative presentation -  Humble BeginningsZechariah Boodey Farmstead Collaborative presentation -  Humble Beginnings
Zechariah Boodey Farmstead Collaborative presentation - Humble Beginnings
 
Rohini Sector 37 Call Girls Delhi 9999965857 @Sabina Saikh No Advance
Rohini Sector 37 Call Girls Delhi 9999965857 @Sabina Saikh No AdvanceRohini Sector 37 Call Girls Delhi 9999965857 @Sabina Saikh No Advance
Rohini Sector 37 Call Girls Delhi 9999965857 @Sabina Saikh No Advance
 
Global debate on climate change and occupational safety and health.
Global debate on climate change and occupational safety and health.Global debate on climate change and occupational safety and health.
Global debate on climate change and occupational safety and health.
 
Top Rated Pune Call Girls Bhosari ⟟ 6297143586 ⟟ Call Me For Genuine Sex Ser...
Top Rated  Pune Call Girls Bhosari ⟟ 6297143586 ⟟ Call Me For Genuine Sex Ser...Top Rated  Pune Call Girls Bhosari ⟟ 6297143586 ⟟ Call Me For Genuine Sex Ser...
Top Rated Pune Call Girls Bhosari ⟟ 6297143586 ⟟ Call Me For Genuine Sex Ser...
 
Call On 6297143586 Viman Nagar Call Girls In All Pune 24/7 Provide Call With...
Call On 6297143586  Viman Nagar Call Girls In All Pune 24/7 Provide Call With...Call On 6297143586  Viman Nagar Call Girls In All Pune 24/7 Provide Call With...
Call On 6297143586 Viman Nagar Call Girls In All Pune 24/7 Provide Call With...
 
How to Save a Place: 12 Tips To Research & Know the Threat
How to Save a Place: 12 Tips To Research & Know the ThreatHow to Save a Place: 12 Tips To Research & Know the Threat
How to Save a Place: 12 Tips To Research & Know the Threat
 
The Economic and Organised Crime Office (EOCO) has been advised by the Office...
The Economic and Organised Crime Office (EOCO) has been advised by the Office...The Economic and Organised Crime Office (EOCO) has been advised by the Office...
The Economic and Organised Crime Office (EOCO) has been advised by the Office...
 
VIP Kolkata Call Girl Jatin Das Park 👉 8250192130 Available With Room
VIP Kolkata Call Girl Jatin Das Park 👉 8250192130  Available With RoomVIP Kolkata Call Girl Jatin Das Park 👉 8250192130  Available With Room
VIP Kolkata Call Girl Jatin Das Park 👉 8250192130 Available With Room
 
Postal Ballots-For home voting step by step process 2024.pptx
Postal Ballots-For home voting step by step process 2024.pptxPostal Ballots-For home voting step by step process 2024.pptx
Postal Ballots-For home voting step by step process 2024.pptx
 
Delhi Russian Call Girls In Connaught Place ➡️9999965857 India's Finest Model...
Delhi Russian Call Girls In Connaught Place ➡️9999965857 India's Finest Model...Delhi Russian Call Girls In Connaught Place ➡️9999965857 India's Finest Model...
Delhi Russian Call Girls In Connaught Place ➡️9999965857 India's Finest Model...
 
Incident Command System xxxxxxxxxxxxxxxxxxxxxxxxx
Incident Command System xxxxxxxxxxxxxxxxxxxxxxxxxIncident Command System xxxxxxxxxxxxxxxxxxxxxxxxx
Incident Command System xxxxxxxxxxxxxxxxxxxxxxxxx
 
VIP High Class Call Girls Amravati Anushka 8250192130 Independent Escort Serv...
VIP High Class Call Girls Amravati Anushka 8250192130 Independent Escort Serv...VIP High Class Call Girls Amravati Anushka 8250192130 Independent Escort Serv...
VIP High Class Call Girls Amravati Anushka 8250192130 Independent Escort Serv...
 

Bias in algorithmic decision-making: Standards, Algorithmic Literacy and Governance

  • 1. Bias in algorithmic decision-making: Standards, Algorithmic Literacy and Governance ANSGAR KOENE, HORIZON DIGITAL ECONOMY RESEARCH INSTITUTE, UNIVERSITY OF NOTTINGHAM 5TH SEPTEMBER 2018 1
  • 2. Projects UnBias – EPSRC-funded “Digital Economy” project ◦ Horizon Digital Economy Research Institute, University of Nottingham ◦ Human Centred Computing group, University of Oxford ◦ Centre for Intelligent Systems and their Applications, University of Edinburgh IEEE P7003 Standard for Algorithmic Bias Considerations ◦ Multi-stakeholder working group with 70+ participants from academia, civil society and industry A governance framework for algorithmic accountability and transparency – European Parliament Science and Technology Options Assessment (STOA) report ◦ UnBias; AI Now; Purdue University; EMLS RI Age Appropriate Design ◦ UnBias; 5Rights 2
  • 3. UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy Standards and policy Stakeholder workshops 3 Youth Juries
  • 5. 5
  • 6. Theme 1: The Use of Algorithms Introduces the concept of algorithms Activities include: ◦ Mapping your online world ◦ Discusses the range of online services that use algorithms ◦ What’s in your personal filter bubble? ◦ Highlights that not everyone gets the same results online
  • 7. Theme 1: The Use of Algorithms Activities include: ◦ What kinds of data do algorithms use? ◦ Discusses the range of data collected and inferred by algorithms, and what happens to it ◦ How much is your data worth? ◦ From the perspective of you (the user) and the companies that buy/sell it
  • 8. Theme 2: Regulation of Algorithms Uses real-life scenarios to highlight issues surrounding the use of algorithms, and asks Who is responsible when things go wrong? Participants debate both sides of a case and develop their critical thinking skills
  • 9. Theme 3: Algorithm Transparency The algorithm as a ‘black box’ Discusses the concept of meaningful transparency and the sort of information that young people would like to have when they are online ◦ What data is being collected about me? ◦ Why? ◦ Where does it go?
  • 10.
  • 12. 12
  • 13. 13 IEEE P7000: Model Process for Addressing Ethical Concerns During System Design IEEE P7001: Transparency of Autonomous Systems IEEE P7002: Data Privacy Process IEEE P7003: Algorithmic Bias Considerations IEEE P7004: Child and Student Data Governance IEEE P7005: Employer Data Governance IEEE P7006: Personal Data AI Agent Working Group IEEE P7007: Ontological Standard for Ethically Driven Robotics and Automation Systems IEEE P7008: Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems IEEE P7009: Fail-Safe Design of Autonomous and Semi-Autonomous Systems IEEE P7010: Wellbeing Metrics Standard for Ethical AI and Autonomous Systems IEEE P7011: Process of Identifying and Rating the Trustworthiness of News Sources IEEE P7012: Standard for Machine Readable Personal Privacy Terms
  • 14. Algorithmic systems are socio-technical Algorithmic systems do not exist in a vacuum They are built, deployed and used: ◦ by people, ◦ within organizations, ◦ within a social, political, legal and cultural context. The outcomes of algorithmic decisions can have significant impacts on real, and possibly vulnerable, people.
  • 15. P7003 - Algorithmic Bias Considerations All non-trivial* decisions are biased We seek to minimize bias that is: ◦ Unintended ◦ Unjustified ◦ Unacceptable as defined by the context where the system is used. *Non-trivial means the decision space has more than one possible outcome and the choice is not uniformly random.
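The distinction between acceptable and unacceptable bias has to be judged in context, but surfacing a candidate disparity can start with a very simple measurement. The sketch below (purely illustrative, not part of P7003; the function names and data are hypothetical) computes per-group selection rates and the largest gap between any two groups:

```python
# Illustrative sketch (not drawn from P7003): a simple demographic-parity
# check to surface potentially unintended bias in decision outcomes.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, outcome) pairs, outcome in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Toy data: group A is selected twice as often as group B.
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)  # {"A": 0.667, "B": 0.333}
gap = parity_gap(decisions)         # 0.333
```

A non-zero gap is not itself proof of unacceptable bias — the slide's point is precisely that whether such a disparity is intended, justified and acceptable depends on the context of use.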
  • 16. Causes of algorithmic bias Insufficient understanding of the context of use. Failure to rigorously map decision criteria. Failure to have explicit justifications for the chosen criteria.
  • 18. 18 Complex individuals reduced to simplistic binary stereotypes
  • 19. Key question when developing or deploying an algorithmic system 19  Who will be affected?  What are the decision/optimization criteria?  How are these criteria justified?  Are these justifications acceptable in the context where the system is used?
  • 20. 20 P7003 foundational sections  Taxonomy of Algorithmic Bias  Legal frameworks related to Bias  Psychology of Bias  Cultural aspects P7003 algorithm development sections  Algorithmic system design stages  Person categorization and identifying affected population groups  Assurance of representativeness of testing/training/validation data  Evaluation of system outcomes  Evaluation of algorithmic processing  Assessment of resilience against external manipulation to Bias  Documentation of criteria, scope and justifications of choices
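One of the development sections above concerns assurance of representativeness of testing/training/validation data. As an illustration of the underlying idea (this helper and its threshold are hypothetical, not specified by the standard), a first-pass check can compare each group's share of the training sample against its share of a reference population and flag groups that fall well short:

```python
# Illustrative sketch (assumed helper, not from the standard): flag groups
# whose share of the training data falls well below their share of the
# reference population.
from collections import Counter

def representation_gaps(sample_groups, population_shares, tolerance=0.5):
    """Return groups whose sample share is < tolerance * population share."""
    counts = Counter(sample_groups)
    n = len(sample_groups)
    flagged = {}
    for group, pop_share in population_shares.items():
        sample_share = counts.get(group, 0) / n
        if sample_share < tolerance * pop_share:
            flagged[group] = (sample_share, pop_share)
    return flagged

# Toy data: group B is 30% of the population but only 10% of the sample.
sample = ["A"] * 90 + ["B"] * 10
population = {"A": 0.7, "B": 0.3}
flagged = representation_gaps(sample, population)  # {"B": (0.1, 0.3)}
```

A check like this only covers crude head-counts; the standard's sections on evaluating outcomes and processing address the deeper question of how the system behaves for each group.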
  • 21. Related AI standards activities British Standards Institute (BSI) – BS 8611 Guide to the ethical design and application of robots and robotic systems ISO/IEC JTC 1/SC 42 Artificial Intelligence ◦ SG 1 Computational approaches and characteristics of AI systems ◦ SG 2 Trustworthiness ◦ SG 3 Use cases and applications ◦ WG 1 Foundational standards In January 2018, China published its “Artificial Intelligence Standardization White Paper”.
  • 22. A governance framework for algorithmic accountability and transparency EPRS/2018/STOA/SER/18/002 ANSGAR KOENE, UNIVERSITY OF NOTTINGHAM RASHIDA RICHARDSON & DILLON REISMAN, AI NOW INSTITUTE YOHKO HATADA, EMLS RI HELENA WEBB, M. PATEL, J. LAVIOLETTE, C. MACHADO, UNIVERSITY OF OXFORD CHRIS CLIFTON, PURDUE UNIVERSITY 25TH OCTOBER 2018
  • 23. Awareness raising: education, watchdogs and whistleblowers  “Algorithmic literacy” - teaching core concepts: computational thinking, the role of data and the importance of optimisation criteria.  Standardised notification to communicate type and degree of algorithmic processing in decisions.  Provision of computational infrastructure and access to technical experts to support data analysis etc. for “algorithmic accountability journalism”.  Whistleblower protection and protection against prosecution on grounds of breaching copyright or Terms of Service when doing so serves the public interest.
  • 24. Accountability in public sector use of algorithmic decision-making Adoption of Algorithmic Impact Assessment (AIA) for algorithmic systems used for public service 1. Public disclosure of purpose, scope, intended use and associated policies, self-assessment process and potential implementation timeline 2. Performing and publishing of self-assessment of the system with focus on inaccuracies, bias, harms to affected communities, and mitigation plans for potential impacts. 3. Publication of plan for meaningful, ongoing access to external researchers to review the system. 4. Public participation period. 5. Publication of final AIA, once issues raised in public participation have been addressed. 6. Renewal of AIAs on a regular timeline. 7. Opportunity for public to challenge failure to mitigate issues raised in the public participation period or foreseeable outcomes.
  • 25. Regulatory oversight and Legal liability  Regulatory body for algorithms:  Risk assessment  Investigating algorithmic systems suspected of infringing human rights.  Advising other regulatory agencies regarding algorithmic systems  Algorithmic Impact Assessment requirement for systems classified as causing potentially severe non-reversible impact  Strict tort liability for algorithmic systems with medium-severity non-reversible impacts  Reduced liability for algorithmic systems certified as compliant with best-practice standards.
  • 26. Global coordination for algorithmic governance  Establishment of a permanent global Algorithm Governance Forum (AGF)  Multi-stakeholder dialogue and policy expertise related to algorithmic systems  Based on the principles of Responsible Research and Innovation  Provide a forum for coordination and exchange of governance best practices  Strong positions in trade negotiations to protect the regulatory ability to investigate algorithmic systems and hold parties accountable for violations of European laws and human rights.
  • 28. What is the Age-Appropriate Design Code? The Age-Appropriate Design Code sits at section 123 of the UK Data Protection Act 2018 (“DPA”) and will set out the standards of data protection that Information Society Services (“ISS”, known as online services) must offer children. It was brought into UK legislation by Crossbench Peer Baroness Kidron; Parliamentary Under-Secretary at the Department for Digital, Culture, Media and Sport, Lord Ashton of Hyde; Opposition Spokesperson Lord Stevenson of Balmacara; Conservative Peer Baroness Harding of Winscombe; and Liberal Democrat Spokesperson Lord Clement-Jones of Clapham. 28
  • 29. 29
  • 30. ICO to draft code by 25 October 2019 30
  • 32. Biography • Dr. Koene is a Senior Research Fellow at the Horizon Digital Economy Research Institute of the University of Nottingham, where he conducts research on the societal impact of digital technology. • Chairs the IEEE P7003™ Standard for Algorithmic Bias Considerations working group, and leads the policy impact activities of the Horizon institute. • He is co-investigator on the UnBias project to develop regulation-, design- and education-recommendations for minimizing unintended, unjustified and inappropriate bias in algorithmic systems. • Over 15 years of experience researching and publishing on topics ranging from Robotics, AI and Computational Neuroscience to Human Behaviour studies and Tech Policy recommendations. • He received his M.Eng. and Ph.D. in Electrical Engineering & Neuroscience, respectively, from Delft University of Technology and Utrecht University. • Trustee of 5Rights, a UK-based charity enabling children and young people to access the digital world creatively, knowledgeably and fearlessly. Dr. Ansgar Koene ansgar.koene@nottingham.ac.uk https://www.linkedin.com/in/akoene/ https://www.nottingham.ac.uk/computerscience/people/ansgar.koene http://unbias.wp.horizon.ac.uk/

Editor's Notes

  1. Automated decisions are not defined by algorithms alone. Rather, they emerge from automated systems that mix human judgment, conventional software, and statistical models, all designed to serve human goals and purposes. Discerning and debating the social impact of these systems requires a holistic approach that considers: Computational and statistical aspects of the algorithmic processing; Power dynamics between the service provider and the customer; The social-political-legal-cultural context within which the system is used;
  2. All non-trivial decisions are biased. For example, good results from a search engine should be biased to match the interests of the user as expressed by the search term, and possibly refined based on personalization data. When we say we want ‘no bias’ we mean we want to minimize unintended, unjustified and unacceptable bias, as defined by the context within which the algorithmic system is being used.
  3. In the absence of malicious intent, bias in algorithmic systems is generally caused by: Insufficient understanding of the context that the system is part of. This includes a lack of understanding of who will be affected by the algorithmic decision outcomes, resulting in a failure to test how the system performs for specific groups, who are often minorities. Diversity in the development team can partially help to address this. Failure to rigorously map decision criteria. When people think of algorithmic decisions as being more ‘objectively trustworthy’ than human decisions, more often than not they are referring to the idea that algorithmic systems follow a clearly defined set of criteria with no ‘hidden agenda’. The complexity of system development, however, can easily introduce ‘hidden decision criteria’, added as a quick fix during debugging or embedded within Machine Learning training data. Failure to explicitly define and examine the justifications for the decision criteria. Given the context within which the system is used, are these justifications acceptable? For example, in a given context is it OK to treat high correlation as evidence of causation?
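The closing question — whether high correlation may be treated as evidence of causation — can be made concrete with a small simulation (purely illustrative; the variables and noise levels are invented): a hidden confounder drives two variables that have no causal link to each other, yet their correlation is close to 1.

```python
# Illustrative sketch: a hidden confounder produces a strong correlation
# between two variables that do not cause one another.
import random

random.seed(0)
z = [random.gauss(0, 1) for _ in range(5000)]   # hidden confounder
x = [zi + random.gauss(0, 0.1) for zi in z]     # driven by z, not by y
y = [zi + random.gauss(0, 0.1) for zi in z]     # driven by z, not by x

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

r = pearson(x, y)  # close to 1, yet x does not cause y
```

A decision criterion justified only by such a correlation inherits whatever the confounder encodes — which is exactly why the standard asks for explicit justifications.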
  4. Ansgar Koene is a Senior Research Fellow at Horizon Digital Economy Research institute, University of Nottingham and chairs the IEEE P7003 Standard for Algorithm Bias Considerations working group. As part of his work at Horizon Ansgar is the lead researcher in charge of Policy Impact; leads the stakeholder engagement activities of the EPSRC (UK research council) funded UnBias project to develop regulation-, design- and education-recommendations for minimizing unintended, unjustified and inappropriate bias in algorithmic systems; and frequently contributes evidence to UK parliamentary inquiries related to ICT and digital technologies. Ansgar has a multi-disciplinary research background, having previously worked and published on topics ranging from bio-inspired Robotics, AI and Computational Neuroscience to experimental Human Behaviour/Perception studies.