1. © 2017 The MITRE Corporation. All rights reserved.
Chuck Howell, howell@mitre.org
23 March 2017
“Fairness Cases” as an Accelerant
and Enabler for Cognitive Assistance
Adoption*
2.
Beware Disappointed Football Fans…
3.
Justice Delayed is Justice Denied
4.
Solution: Cool Dispassionate AI?
5.
Doh!
6.
Doh!
OSTP Preparing for the Future of AI section on Fairness, Safety, and Governance:
As AI technologies gain broader deployment, technical experts and policy analysts have raised
concerns about unintended consequences. The use of AI to make consequential decisions about
people, often replacing decisions made by human actors and institutions, leads to concerns about
how to ensure justice, fairness, and accountability.
7.
Proof by repeated assertion (1 of 2)
As AI systems take on a more important role in
high-stakes decision-making – from offers of
credit and insurance, to hiring decisions and
parole – they will begin to affect who gets
offered crucial opportunities, and who is left
behind. This brings questions of rights,
liberties, and basic fairness to the forefront.
While some hope that AI systems will help to
overcome the biases that plague human
decision-making, others fear that AI systems
will amplify such biases, denying opportunities
to the deserving and subjecting the deprived to
further disadvantage.
8.
Proof by repeated assertion (2 of 2)
http://www.fatml.org
9.
Assertions
§ Adoption of AI in decision support and other Cognitive
Assistance roles is growing significantly, and the pace of
adoption continues to increase
§ Concerns expressed about risks of implicit bias and
discrimination
§ Point solutions being explored
§ Desired: broad systems engineering framework for calibrating
and mitigating fairness risks in AI systems
§ Adopting/adapting tools and techniques from safety critical
software community is one opportunity
10.
Safety Critical Software
§ Software is a key part of a variety of critical
systems, requiring systematic and effective
techniques for assurance that the risks of
deploying are understood and acceptable
§ Safety Critical Software: Software for which
compelling evidence is required that it delivers a
specified set of services in a manner that satisfies
specified critical properties tied to safety.
“Engineers today, like Galileo three and a half centuries ago, are not superhuman.
They make mistakes in their assumptions, in their calculations, in their conclusions.
That they make mistakes is forgivable; that they catch them is imperative. Thus it
is the essence of modern engineering not only to be able to check one’s own work,
but also to have one’s work checked and to be able to check the work of others.”
-- H. Petroski in To Engineer is Human: The Role of Failure in Successful Design
11.
Multiple Stakeholders for Safety
Source: Andy Lacher, MITRE
12.
System properties vs. specific actions
§ Assessment of the system prior to operation
– Various stakeholders: Are the risks of allowing this system to be
used understood, and justified by the benefits?
– Certification, acceptance tests, standards compliance, etc.
§ Assessment of the results of operation
– Various stakeholders: Have the claims made prior to operation
been justified? Do the benefits justify the current understanding of
risks?
– Instrumentation, mishap investigation, audit, etc.
13.
Some exchanges between AI and
safety communities
But much more to do, and adapting to other critical concerns is only starting
14.
No silver bullet, but tools and notations
matter…
15.
Examples of opportunities to adapt
safety tools and techniques
§ Fairness case framework to organize all activity and
communicate to various stakeholders
§ Hazard analysis to expose potential threats to fairness (obvious
and unexpected sources)
§ Instrumentation and monitoring tools and techniques to detect
potential fairness violations in operation
§ Accident and incident investigation tools and techniques to
understand underlying causes of a fairness violation
§ Error handling frameworks to focus engineering attention on
off-nominal cases, where problems often lurk
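As an illustration of the instrumentation and monitoring bullet above, a minimal sketch of an operational fairness monitor (the field names, threshold, and the demographic-parity metric are illustrative choices, not part of any of the safety tools named here):

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Largest pairwise difference in favorable-outcome rates across groups.

    `decisions` is an iterable of (group_label, favorable: bool) pairs,
    e.g. drawn from an operational decision log.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            favorable[group] += 1
    rates = {g: favorable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# A monitor might raise an alert when the gap exceeds a calibrated threshold:
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(log)  # 2/3 - 1/3 ≈ 0.33
ALERT = gap > 0.2
```

In a deployed system the log, the grouping attribute, and the alert threshold would all be artifacts cited in the fairness case rather than constants in code.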
16.
What is an “Assurance Case”?
§ Systems under regulation or acquisition constraints
– Third party certification, approval, licensing, etc.
– Require a documented body of evidence that provides a
compelling case that the system satisfies certain critical
properties for specific contexts (to “make the case”)
– “safety case”, “certification evidence”, “security case”…
– Collectively we’ll refer to them as “assurance cases”
A documented body of evidence that provides a convincing and valid argument
that a specified set of critical claims about a system’s properties are adequately
justified for a given system in a given environment.
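By analogy, a "fairness case" could be organized the same way. A minimal sketch of a claim–evidence tree with a completeness check (the structure and claim texts are illustrative, not a standard notation such as GSN):

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    evidence: list = field(default_factory=list)   # supporting artifacts
    subclaims: list = field(default_factory=list)  # decomposition of the claim

def unsupported(claim):
    """Return leaf claims with neither evidence nor subclaims --
    the gaps a reviewer of the case must resolve."""
    if not claim.subclaims:
        return [] if claim.evidence else [claim]
    gaps = []
    for sub in claim.subclaims:
        gaps.extend(unsupported(sub))
    return gaps

case = Claim("System decisions are acceptably fair in context C", subclaims=[
    Claim("Training data is representative of operational data",
          evidence=["distribution-shift test report"]),
    Claim("Outcome rates are monitored across protected groups"),  # no evidence yet
])
gaps = unsupported(case)  # -> [the unmonitored-outcomes subclaim]
```

Real assurance-case tooling adds argument nodes, context, and review status on top of this skeleton; the point here is only that the case is a checkable structure, not a prose document.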
17.
What does tool support look like?
18.
Hazard Analysis Tools
§ Standards for regulated safety-critical sectors require specific
techniques for assessing the potential hazards of proposed
systems. These techniques often focus on individual
component failures and their associated reliability. More recent
techniques, such as Systems-Theoretic Process Analysis
(STPA), view safety as a system-level property and accidents as
a control problem rather than a component reliability problem.
§ Other techniques are also emerging, such as Hierarchically
Performed Hazard Origin & Propagation Studies (HiP-HOPS)
§ Explicitly and deliberately exploring potential fairness violations
at the start could contribute to confidence in the overall fairness
case, influence system design, and reduce rework
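STPA's standard guide phrases make this concrete: every control action is crossed with four "unsafe control action" categories to seed a worksheet. A minimal sketch (the fairness-flavored control actions are my illustration, not from STPA itself):

```python
# STPA enumerates "unsafe control actions" (UCAs) in four standard ways.
UCA_TYPES = [
    "not provided when needed",
    "provided when it creates a hazard",
    "provided too early, too late, or out of sequence",
    "stopped too soon or applied too long",
]

def enumerate_ucas(control_actions):
    """Cross each control action with the four UCA guide phrases to seed
    a hazard-analysis worksheet; analysts then judge which rows are
    credible and trace them to causal scenarios."""
    return [(action, uca) for action in control_actions for uca in UCA_TYPES]

worksheet = enumerate_ucas(["deny loan application",
                            "flag case for human review"])
# 2 control actions x 4 guide phrases = 8 worksheet rows to assess
```

The analytical work is in judging the rows, not generating them; the generator only guarantees that no category is silently skipped, which is exactly the "explicit and deliberate" exploration argued for above.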
19.
Hazard analysis for fairness
§ Force early consideration of well known categories of potential
problems and how they will be addressed
– e.g., statistical equivalence of training data with respect to
operational data (“Robustness to Distributional Change” [1])
§ Focus on what the aviation industry calls “hazardous misleading
information,” not just overt failure
[1] Concrete Problems in AI Safety, https://arxiv.org/abs/1606.06565
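One concrete check behind the first bullet is comparing a feature's training distribution against what the system actually sees in operation. A minimal pure-Python sketch using the two-sample Kolmogorov–Smirnov statistic (in practice one would use `scipy.stats.ks_2samp` and a proper significance threshold; the sample values here are made up):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of the two samples (0 = identical, 1 = disjoint)."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        cdf_a = bisect.bisect_right(a, x) / len(a)
        cdf_b = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

training = [0.1, 0.2, 0.3, 0.4, 0.5]
operational = [0.6, 0.7, 0.8, 0.9, 1.0]
drift = ks_statistic(training, operational)  # 1.0: completely disjoint samples
```

Run per feature against a rolling window of operational data, a check like this turns "training data is representative" from an assumption into monitored evidence for the fairness case.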
20.
Accident Investigation Tools and
Notations
§ Working back from an incident or accident to root causes can be
extremely expensive and complex
– Millions of $s, years of effort
– Consequences of false positive and negative findings
– Tools and notations have evolved to help manage the data, do
the “bookkeeping” and structural checks, and communicate
complicated findings
§ Screenshots of a few follow; the key idea is that they are
intended to support a collaborative team working backwards
from a rare event through a complex, subtle, and incomplete sea
of data to root causes: investigation and diagnosis
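The "working backwards" bookkeeping these tools support can be pictured as traversing a causal graph from the mishap toward its root causes. A toy sketch (the event names are hypothetical, and real notations such as Why-Because Graphs carry much richer structure and evidence per edge):

```python
def root_causes(causes, event):
    """Walk a cause graph backwards from `event`; return the causes
    with no recorded antecedents (candidate root causes).
    `causes` maps each event to the list of events that caused it."""
    seen, stack, roots = set(), [event], set()
    while stack:
        e = stack.pop()
        if e in seen:
            continue
        seen.add(e)
        antecedents = causes.get(e, [])
        if not antecedents:
            roots.add(e)
        stack.extend(antecedents)
    return roots

causes = {
    "unfair parole recommendation": ["model bias", "no human review"],
    "model bias": ["skewed training data"],
    "skewed training data": [],
    "no human review": [],
}
roots = root_causes(causes, "unfair parole recommendation")
# -> {"skewed training data", "no human review"}
```

For a fairness violation, the expensive part is assembling the `causes` map from incomplete evidence; the tooling's job is to keep that map consistent and communicable as the investigation grows.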
21.
NASA Multi-User Investigation
Organizer
22.
Why-Because Analysis Graphs
23.
Conclusion
§ Painfully obvious early days of a work in progress
– Please help correct errors of omission and commission; all
feedback is very much appreciated
§ Cognitive assistance presents opportunities for great social
good, but concerns over fairness present a possible impediment
§ Integrated collection of tools and techniques from safety critical
software community is worth assessing
– What can be readily adopted? What can be adapted, and how?
What are the gaps that require completely new tools and techniques?
24.
Closing Credits
“They constantly try to escape... by dreaming of systems so perfect that
no one will need to be good”
T. S. Eliot, Choruses from "The Rock", VI
“Be careful how you fix what you don't understand.”
Fred Brooks, The Design of Design
Thank you!
25.
Backups