An Ethical Framework for AI: “Separate DB Tabs”
• Source of data: Where did the data come from and how?
• Environment: What impact will the AI have on the environment?
• Privacy: Does it prioritize privacy?
• Accountability: Who is responsible for the AI’s actions?
• Responsibility: Who decides what the AI is and isn’t allowed to do?
• Anthropomorphism: Does it take advantage of emotions?
• Trust: Is the output trustworthy/accurate?
• Explainable / Interpretable: Can we understand how it reaches its conclusions?
• Data Quality: Is the data reliable and accurate?
• Beneficiaries: Does it cause disproportionate benefit or harm?
• Transparency: Are the risks, limitations, and uses of the AI shared?
• Audited: Have claims by the programmers been independently verified?
• Bias: Are the data or outcomes biased?
• Stakeholders: Was feedback/input from all affected parties considered?
Source of Data
Understanding how the organization that created the AI collected the
data it used to train the AI model is important. It can have both
ethical and legal implications.
1. Where did the data come from and how did they get it?
2. Is the data representative of the population it's trying to model, or did
the organization only gather data that was convenient to collect?
3. Who participated in the data creation?
4. Who is left out of the datasets?
5. Who chose which data to collect?
6. Did the sources provide informed consent to use their data for the
purposes of the model?
Environment
The benefits of using AI can't be overlooked, but neither can its cost to the
environment. The good of a modern technology is only a temporary
benefit if it's significantly outweighed by the damage done to the
planet.
1. What impact will the AI have on the environment?
2. How much electricity will the AI need?
3. Where will the electricity come from (oil, gas, solar, wind, etc.)?
4. How can energy usage be reduced?
5. How much pollution does the AI generate?
6. How much water waste does the AI create?
Privacy
If AI models are trained on datasets containing private
information, further scrutiny is necessary to preserve people's
privacy.
1. What security is in place to prevent Personally Identifiable Information
(PII), health information, and/or financial information from leaking?
2. Who can access the PII?
3. Do people have a right to have their PII removed from the model?
4. Does the AI contribute to what may be privacy-related torts (intrusion on
seclusion, false light, public disclosure of private facts, and
appropriation of name or likeness)?
Accountability
Regardless of whether the regulatory scheme is based on self-regulation
or one imposed by a government, there must be clear
accountability for the scheme to be effective.
1. Who should ultimately be responsible for the AI’s actions? The user?
The developers of the AI? The company? The executives? The board of
directors?
2. How will the user get justice if the AI harms them?
3. Is there someone who oversees and "owns" the AI development within
the company?
Responsibility
Society must determine who should get to make decisions regarding
AI that impacts society. For example, should it belong solely to
profit-driven private individuals? Or should there be more
democratic and public input?
1. Who should decide what AI should and should not be allowed to do?
The engineers? The executives at the company? The Board of the
company? The government?
2. Who should have access to advanced AI? Should everyone be trusted
equally?
3. How do we ensure it isn't used for harmful activities?
Anthropomorphism
As numerous studies and everyday interactions with smart speakers
and chatbots reveal, humans are quick to anthropomorphize AI,
giving it pronouns and interpreting its actions as driven by
intent. It's not difficult to see how this tendency could be used to
convince people to take actions they would not otherwise have taken.
1. Does the AI take advantage of user emotions? For example, does it try
to come across as human or as a sympathetic creature so the user will
spend more money on something?
2. Is there a reason the AI company chooses to make the AI seem
human? For example, why give a speaker a human voice rather than
one that is more alien or robotic? Is that the best alternative?
Trust
One of the primary purposes of using AI is for it to give us reliable
results. However, some advanced AI systems, like LLMs, have shown
that a response from a complex AI doesn't necessarily equate to
accuracy, as they are capable of fabricating facts and URLs.
1. Are the outputs of the AI accurate? That is, do they accurately reflect
facts and reality?
2. How can we be sure an LLM isn't hallucinating?
Explainable / Interpretable
It's vital the AI can explain in sufficient detail how it reached its
conclusion. This is especially relevant when AIs make decisions that
can impact others' livelihoods, such as with finances or healthcare. A
black box that humans are told to place their faith in is not sufficient.
1. Can we interrogate the AI?
2. Can the AI accurately explain why it reached its output?
3. Can the AI explain how it reached its output?
4. For whom should the AI be explainable? For example, AI for medicine
should be explainable to both doctors and patients.
5. Are doctors using risk scores to triage patients able to interpret a risk
score correctly? Are bankers able to interpret AI ratings of loan
applicants?
6. Is the system output only a single score or does it also output potential
decision explanations and guidance for how to use the score?
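The last question above can be made concrete in code. Below is a minimal, hypothetical sketch (not from the slides) of an output format that pairs a score with reasons and usage guidance, illustrated with a simple linear model whose per-feature contributions are directly readable; the feature names and weights are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ExplainedScore:
    score: float                                 # the model's output
    reasons: list = field(default_factory=list)  # top factors behind the score
    guidance: str = ""                           # how the score should (not) be used

def explain_linear_score(weights: dict, inputs: dict, top_n: int = 2) -> ExplainedScore:
    # For a linear model, each feature's contribution is simply weight * value,
    # so the explanation can be exact rather than approximated.
    contributions = {f: weights[f] * inputs[f] for f in weights}
    score = sum(contributions.values())
    top = sorted(contributions, key=lambda f: abs(contributions[f]), reverse=True)[:top_n]
    reasons = [f"{f} contributed {contributions[f]:+.2f}" for f in top]
    return ExplainedScore(score, reasons,
                          guidance="Score is advisory; a human must review before any decision.")

# Hypothetical loan-style example: score = 0.5*4.0 - 0.8*2.0 + 0.3*1.0 = 0.7
result = explain_linear_score(
    weights={"income": 0.5, "debt": -0.8, "history_len": 0.3},
    inputs={"income": 4.0, "debt": 2.0, "history_len": 1.0},
)
```

For complex models the contributions would come from an attribution method rather than raw weights, but the point stands: the interface can carry explanations and guidance alongside the bare number.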
Data Quality
Apart from the method of collecting data (reference: Source of data,
above), it is imperative to take the data quality into consideration.
Entering bad data into a model will result in an unreliable model that
could have dangerous implications. Garbage in, garbage out.
1. Is the data reliable and accurate?
2. Is there any measurement error (for example, from incomplete or
incorrect data)?
3. Did selection bias play a role in data collection? Selection bias can
occur if the subset of individuals represented in the training data is not
representative of the patient population of interest.
4. Does the AI model overfit because it was trained on too little data?
5. Are historical biases being kept alive via the AI?
6. Are some ethnicities, races, nationalities, or other populations
inappropriately missing from the data? For example, text scraped from
the web may overrepresent people who can afford internet access and
have the time to post a comment.
7. Is the data substantially similar between groups? If it isn't, the results
may be inaccurate and unfair. This happens with credit ratings: poorer
people have less credit history, so they are given lower scores, which
makes it harder to work up to higher scores.
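Questions 3 and 6 above (selection bias and missing populations) can be checked mechanically by comparing each group's share of the training data with its share of the population the model is meant to serve. The sketch below uses invented group names and counts purely for illustration:

```python
def representation_gaps(train_counts: dict, population_shares: dict) -> dict:
    """For each group, return (share of training data) - (share of population).
    A large positive gap means over-representation; negative means under-representation."""
    total = sum(train_counts.values())
    return {g: train_counts.get(g, 0) / total - population_shares[g]
            for g in population_shares}

# Hypothetical figures: 1,000 training rows vs. known population shares.
gaps = representation_gaps(
    train_counts={"group_a": 800, "group_b": 150, "group_c": 50},
    population_shares={"group_a": 0.60, "group_b": 0.25, "group_c": 0.15},
)
# group_a: 0.80 - 0.60 = +0.20; group_b: -0.10; group_c: -0.10
flagged = {g for g, gap in gaps.items() if abs(gap) > 0.05}  # flag gaps over 5 points
```

The 5-point threshold is an arbitrary placeholder; what counts as "inappropriately missing" depends on the application and should be set deliberately.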
Beneficiaries
Because the most advanced AI models require hundreds of millions
of dollars to create, only the already-wealthy can participate in its
creation. This severely limits who can create the AI and naturally
forms a dominant oligopoly as the success of the advanced AI makes
the already-wealthy companies wealthier. The same effect likely
happens on every scale regarding use of the AI as well. Entities that
can afford to use the AI will benefit more than those who can't use
the AI, which extends the advantage of the entities that are already
relatively wealthy.
1. Do the rich get richer at the expense of the less fortunate? Do
individuals benefit? Or certain communities? Or certain nations?
2. Is there inequitable access to the AI (e.g., low SES people may go to
hospitals that don’t have the latest and greatest AI; low SES students
may not have the best AI in their schools)?
3. Is there any issue with Fortune 500 corporations growing their profits to
benefit a relatively small number of people (investors) by replacing
human workers with AI? Or is this a feature of capitalism, not a bug?
Transparency
Transparency is being open and clear with the user about how the AI should be
used, when the AI is being used, the limitations of the AI, potential benefits and
harms, etc. In short, the AI should not be a secret to the user.
1. Are the kinds of harms an AI system might cause, the shortcomings the AI may
suffer from, the varying degrees of severity of the harms, the limitations of the
technology, how and when it’s being used, and so on communicated to the user?
2. If the AI creates visualizations, like charts, does it include context, known
unknowns and uncertainties, the source of the data, and how the source data was
collected?
3. Examples of limitations the AI may need to disclose: (a) Many AI systems are
non-deterministic, meaning the same data run through the same system could yield a
different result each time. (b) The AI can only see what's in the data (text, video,
image, rows and columns, etc.). It doesn't know the context around it (e.g., maybe
someone had high blood pressure because a loved one recently died).
4. Do all interfaces of the AI that make decisions about users clearly and succinctly
inform users what is going on, how it works, how long the outputs are useful, and
why the AI was needed to reach the output as opposed to any other method?
5. Does the AI system allow users to obtain the information it has about them, and let
the users dispute that information if the users believe it's inaccurate?
Audited
It's one thing for a company to make claims about the safety,
security, accuracy, and impact of their AI. It's another to have an
independent third-party audit and verify those claims. With advanced
AI it may be better to verify, then trust.
1. Have independent auditors reviewed the data and source code to verify
it is and does what it says it is/will do?
2. Is the AI model independently audited on at least an annual basis?
3. Can we be certain the claims made by the developers are accurate and
reliable? For example, that the data isn't biased, that the outputs are
trustworthy, and that the environmental impact is no greater than
claimed?
Bias
As seen in hiring, promotions, image recognition, allocation of
resources, and in criminal justice, among many, many other areas, AI
is fallible. The developers may have the best intentions and the
algorithm may appear entirely neutral, but if the outcomes are
biased, or if the data used to train the model was gathered during more
biased times in history (against people of certain races, genders,
sexual orientations, etc.), then the AI should face greater scrutiny.
1. Is the data biased to favor one group over another based on an
irrelevant or protected class?
2. Were various forms of bias explored (e.g., disparate impact, statistical
parity, equal opportunity, and/or average odds), or did the company only
publish the most flattering analyses?
3. Does the AI perpetuate societal biases?
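Two of the bias measures named in question 2, disparate impact and statistical parity, reduce to simple arithmetic on selection rates. A minimal sketch, with hypothetical group names and outcomes:

```python
def selection_rate(outcomes):
    # Fraction of favorable (1) decisions in a list of 0/1 outcomes.
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_outcomes, protected, reference):
    # Ratio of selection rates. Values below ~0.8 often trigger scrutiny
    # (the informal "four-fifths rule").
    return (selection_rate(group_outcomes[protected])
            / selection_rate(group_outcomes[reference]))

def statistical_parity_diff(group_outcomes, protected, reference):
    # Difference in selection rates; 0 means parity.
    return (selection_rate(group_outcomes[protected])
            - selection_rate(group_outcomes[reference]))

outcomes = {  # hypothetical loan approvals (1 = approved)
    "group_x": [1, 0, 0, 1, 0, 0, 0, 0, 1, 0],  # 30% approved
    "group_y": [1, 1, 0, 1, 1, 0, 1, 0, 1, 0],  # 60% approved
}
di = disparate_impact(outcomes, "group_x", "group_y")          # 0.3 / 0.6 = 0.5
spd = statistical_parity_diff(outcomes, "group_x", "group_y")  # 0.3 - 0.6 = -0.3
```

These metrics capture different notions of fairness and can disagree, which is exactly why the question asks whether the company explored several of them or published only the most flattering one.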
Stakeholders
Representatives from all groups who may be affected by the AI should be
consulted during the AI's development, prior to its release. Stakeholders should
include even those who will never know of the AI's existence or interact with it
directly, because they could still be affected indirectly.
1. Did the AI company get feedback from a representative sample of everyone
the AI will affect?
2. Do the values of the users, the developers, and the society in which the AI is
used align?

  • 1. An Ethical Framework for AI: “Separate DB Tabs” • Source of data: Where did the data come from and how? • Environment: What impact will the AI have on the environment? • Privacy: Does it prioritize privacy? • Accountability: Who is responsible for the AI’s actions? • Responsibility: Who decides what the AI is and isn’t allowed to do? • Anthropomorphism: Does it take advantage of emotions? • Trust: Is the output trustworthy/accurate? • Explainable / Interpretability: Can we understand how it reaches conclusions? • Data Quality: Is the data reliable and accurate? • Beneficiaries: Does it cause disproportionate benefit or harm? • Transparency: Are the risks, limitations, and uses of the AI shared? • Audited: Have claims by the programmers been independently verified? • Bias: Are the data or outcomes biased? • Stakeholders: Was feedback/input from all affected parties considered?
  • 2. Source of Data Understanding how the organization that created the AI collected the data it used to train the AI model is important. It can have both ethical and legal implications. 1. Where did the data come from and how did they get it? 2. Is the data representative of the population it's trying to model, or did the organization only gather data that was convenient to collect? 3. Who participated in the data creation? 4. Who is left out of the datasets? 5. Who chose which data to collect? 6. Did the sources provide informed consent to use their data for the purposes of the model?
  • 3. Environment The pros of using AI can't be overlooked, but the cost it has on the environment is unavoidable. The good of a modern technology is only a temporary benefit if it's significantly outweighed by the damage done to the planet. 1. What impact will the AI have on the environment? 2. How much electricity will the AI need? 3. Where will the electricity come from (oil, gas, solar, wind, etc.) 4. How can energy usage be reduced? 5. How much pollution does the AI generate? 6. How much water waste does the AI create?
  • 4. Privacy If AI models are cultivated using datasets containing private information, further inspection is necessary to preserve peoples' privacy. 1. What security is in place to prevent Personal Identifying Information (PII), health information, and/or financial information from leaking? 2. Who can access the PII? 3. Do people have a write to having their PII removed from the model? 4. Does the AI contribute to what may be privacy-related torts (intrusion on seclusion, false light, public disclosure of private facts, and appropriation of name or likeness?
  • 5. Accountability Regardless of whether the regulatory scheme is based on self- regulation or one imposed by a government, there must be clear accountability for the scheme to be effective. 1. Who should ultimately be responsible for the AI’s actions? The user? The developers of the AI? The company? The executives? The board of directors? 2. How will the user get justice if the AI harms them? 3. Is there someone who oversees and "owns" the AI development within the company?
  • 6. Responsibility Society must determine who should get to make decisions regarding AI that impacts society. For example, should it belong solely to profit-driven private individuals? Or should there be more democratic and public input? 1. Who should decide what AI should and should not be allowed to do? The engineers? The executives at the company? The Board of the company? The government? 2. Who should have access to advanced AI? Should everyone be trusted equally? 3. How do we monitor it's not used for harmful activities?
  • 7. Anthropomorphism As numerous studies and common interactions with smart speakers and chatbots reveal, humans are quick to anthropomorphize AI by giving them pronouns and interpreting its actions as being driven by intent. It's not difficult to think how this tendency could be used to convince people to take actions they may not have otherwise taken. 1. Does the AI take advantage of user emotions? For example, does it try to come across as human or as a sympathetic creature so the user will spend more money on something? 2. Is there a reason the AI company chooses to make the AI seem human? For example, why give a speaker a human voice rather than one that is more alien or robotic? Is that the best alternative?
  • 8. Trust One of the primary purposes for using AI is for it to give us reliable results. However, some advanced AI systems, like LLMs, have shown that a response from a complex AI doesn't necessarily equate to precision, as they are capable of creating fictional facts and URLs. 1. Are the outputs of the AI accurate? That is, do they accurately reflect facts and reality? 2. How can we be sure an LLM isn't hallucinating?
  • 9. Explainable / Interpretable It's vital the AI can explain in sufficient detail how it reached its conclusion. This is especially relevant when AIs make decisions that can impact others' livelihoods, such as with finances or healthcare. A black box that humans are told to place their faith in is not sufficient. 1. Can we interrogate the AI? 2. Can the AI accurately explain why it reached its output? 3. Can the AI explain how it reached its output? 4. Explainable AI should consider who it should be explainable for. For example, AI for medicine should be explainable to both doctors and patients. 5. Are doctors using risk scores to triage patients able to interpret a risk score correctly? Are bankers able to interpret AI ratings of loan applicants? 6. Is the system output only a single score or does it also output potential decision explanations and guidance for how to use the score?
Data Quality
Apart from the method of collecting data (see Source of Data, above), it is imperative to take data quality into consideration. Feeding bad data into a model will result in an unreliable model that could have dangerous implications. Garbage in, garbage out.
1. Is the data reliable and accurate?
2. Is there any measurement error (for example, from incomplete or incorrect data)?
3. Did selection bias play a role in data collection? Selection bias can occur if the subset of individuals represented in the training data is not representative of the patient population of interest.
4. Does the AI model overfit because it was trained on too little data?
5. Are historical biases being kept alive via the AI?
6. Are some ethnicities, races, nationalities, or other populations inappropriately missing from the data? For example, text scraped from the web may overrepresent people who can afford internet access and have the time to post a comment.
7. Check that the data between groups is substantially similar. If it's not, you may get inaccurate and unfair results. This happens with credit ratings: poorer applicants have less credit history, so they are given lower scores, which makes it harder to work up to higher scores.
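The selection-bias questions above can be made concrete with a toy representation check: compare each group's share of the training data against its share of the target population. The group names, counts, and the flagging threshold below are all invented for illustration:

```python
# Toy selection-bias check: how far is each group's share of the training
# data from its share of the real population it is supposed to represent?
def representation_gaps(train_counts, population_shares):
    """Return (train share - population share) for each group."""
    total = sum(train_counts.values())
    return {g: train_counts[g] / total - population_shares[g]
            for g in population_shares}

gaps = representation_gaps(
    train_counts={"group_a": 800, "group_b": 200},
    population_shares={"group_a": 0.6, "group_b": 0.4},
)
# Flag any group whose share is off by more than 10 percentage points
# (an arbitrary threshold chosen here purely for the example).
flagged = [g for g, gap in gaps.items() if abs(gap) > 0.10]
```

Here group_b makes up 40% of the population but only 20% of the training data, so both groups are flagged as over- or under-represented.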
Beneficiaries
Because the most advanced AI models require hundreds of millions of dollars to create, only the already-wealthy can participate in their creation. This severely limits who can create AI and naturally forms a dominant oligopoly, as the success of advanced AI makes the already-wealthy companies wealthier. The same effect likely occurs at every scale of AI use as well: entities that can afford to use the AI will benefit more than those who can't, extending the advantage of entities that are already relatively wealthy.
1. Do the rich get richer at the expense of the less fortunate? Do individuals benefit? Or certain communities? Or certain nations?
2. Is there inequitable access to the AI (e.g., low-SES people may go to hospitals that don't have the latest and greatest AI; low-SES students may not have the best AI in their schools)?
3. Is there any issue with Fortune 500 corporations growing their profits to benefit a relatively small number of people (investors) by replacing human workers with AI? Or is this a feature of capitalism, not a bug?
Transparency
Transparency means being open and clear with the user about how the AI should be used, when the AI is being used, the limitations of the AI, potential benefits and harms, and so on. In short, the AI should not be a secret to the user.
1. Are the kinds of harms an AI system might cause, the shortcomings it may suffer from, the varying degrees of severity of those harms, the limitations of the technology, and how and when it's being used communicated to the user?
2. If the AI creates visualizations, like charts, does it include context, known unknowns and uncertainties, the source of the data, and how the source data was collected?
3. Examples of limitations the AI may need to disclose: (a) Many AI systems are non-deterministic, meaning the same data run through the same system could yield a different result each time. (b) The AI can only see what's in the data (text, video, image, rows and columns, etc.). It doesn't know the context around it (e.g., maybe someone had high blood pressure because a loved one recently died).
4. Do all interfaces of the AI that make decisions about users clearly and succinctly inform users what is going on, how it works, how long the outputs are useful, and why the AI was needed to reach the output as opposed to any other method?
5. Does the AI system allow users to obtain the information it has about them, and let users dispute that information if they believe it's inaccurate?
Audited
It's one thing for a company to make claims about the safety, security, accuracy, and impact of its AI. It's another to have an independent third party audit and verify those claims. With advanced AI, it may be better to verify, then trust.
1. Have independent auditors reviewed the data and source code to verify the AI is and does what the company says it is and will do?
2. Is the AI model independently audited on at least an annual basis?
3. Can we be certain the claims made by the developers are accurate and reliable? For example, that the data isn't biased, that the outputs are trustworthy, and that the environmental impact is no greater than claimed?
Bias
As seen in hiring, promotions, image recognition, allocation of resources, and criminal justice, among many other areas, AI is fallible. The developers may have the best intentions and the algorithm may appear entirely neutral, but if the outcomes are biased, or if the data used to train the model was gathered during more biased times in history (against people of certain races, genders, sexual orientations, etc.), then the AI should face greater scrutiny.
1. Is the data biased to favor one group over another based on an irrelevant or protected class?
2. Were various forms of bias explored (e.g., disparate impact, statistical parity, equal opportunity, and/or average odds), or did the company only publish the most flattering analyses?
3. Does the AI perpetuate societal biases?
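One of the metrics named above, disparate impact, can be sketched in a few lines. This is a minimal illustration with invented group labels and outcomes, not a complete fairness audit: it compares the favorable-outcome rate of a protected group to that of a reference group, where ratios below 0.8 (the "four-fifths rule") are commonly treated as a red flag.

```python
# Minimal sketch of the disparate impact ratio (four-fifths rule).
def disparate_impact(outcomes, groups, protected, reference):
    """Ratio of the protected group's favorable-outcome rate to the
    reference group's. Outcomes are 1 (favorable) or 0 (unfavorable)."""
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(protected) / rate(reference)

# 1 = favorable outcome (e.g., loan approved); data is fabricated.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
ratio = disparate_impact(outcomes, groups, protected="b", reference="a")
# Group b is approved at 0.25 vs. group a's 0.75, a ratio of about 0.33,
# which is well below the 0.8 threshold.
```

An auditor would compute this alongside the other metrics listed (statistical parity difference, equal opportunity, average odds), since a system can pass one metric while failing another.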
Stakeholders
Representatives from all groups who may be affected by the AI should be consulted during its development, prior to its release. Stakeholders should even include those who will never know of the AI's existence and will never interact with it directly, because they could still be affected indirectly.
1. Did the AI company get feedback from a representative sample of everyone the AI will affect?
2. Do the values of the users, the developers, and the society in which the AI is used align?