This session was recorded in NYC on October 22nd, 2019 and can be viewed here: https://youtu.be/rToFuhI6Nlw
Responsible Data Science: Identifying and Fixing Biased AI
Numerous stories in the press have shown that machine learning has the potential to be unfair and even discriminatory. As a result, the public, regulators, and legislators are taking a hard look at AI; if your models are used for high-stakes decision making, then you will need to be able to convince these groups that your models are not discriminatory. To do this, you need to know how to assess models for evidence of discrimination and then be able to fix any problems you may find. In this talk, Nick will outline what is required for a model to be fair, discuss how different types of discrimination might make their way into a model, and then explain the algorithms and techniques that can be used to make AI fairer.
Bio: Nicholas Schmidt is a partner at BLDS, LLC, and heads the Artificial Intelligence Practice. In these roles, Nick specializes in the application of statistics and economics to questions of law, regulatory compliance, and best practices in model governance.
As head of the A.I. practice, Nick develops and assists in the deployment of methods that allow his clients to make their A.I. models fairer and more inclusive. In this work, he has created A.I.-based techniques that enable clients to minimize disparate impact in credit, insurance, and marketing models. He has additionally helped his clients understand and implement methods that open “black-box” A.I. models, enabling a clearer understanding of A.I.’s decision-making process. His clients use this work to inform their customers of potential denials of credit (“adverse action notices”). These methods are used in a number of the top-10 U.S. retail banks and FinTechs.
In his litigation practice, Nick testifies and consults on employment discrimination, wage and hour law, and other matters that require statistics to address questions of liability or damages.
Nick holds an MBA in economics and econometrics from the University of Chicago Booth School of Business.
Nick Schmidt, BLDS - Responsible Data Science: Identifying and Fixing Biased AI - H2O World 2019 NYC
1. Responsible Data Science: Identifying and Fixing Biased AI
Nicholas Schmidt
Partner and AI Practice Leader
BLDS, LLC
2. Introduction – a bit about me
• Nicholas Schmidt, Partner and AI Practice Leader at BLDS, LLC
• We assess evidence of discrimination in Banking, Insurance, Employment, and elsewhere
• We Use AI to Fix AI: create and implement algorithms that find fair and predictive models
• I am not a lawyer – do not take this as legal advice
Nicholas Schmidt
nschmidt@bldsllc.com
https://www.linkedin.com/in/nickpschmidt
3. Can AI discriminate?
Lohr, Steve. “Facial recognition is accurate, if you’re a white guy.” New York Times, 9 February 2018.
[Figure: facial-recognition error rates from the cited study, rising from roughly 1% for lighter-skinned men and 7% for lighter-skinned women to 12% for darker-skinned men and 35% for darker-skinned women]
4. Why is fair AI important?
[Diagram: “Fair AI” at the center, linked to: positive impact on society; decreased regulatory risk; decreased reputational risk; increased market share & profit; responsible and ethical behavior; increased stakeholder value]
5. Fairness and the law (in the U.S.)
• Many definitions of fairness – many of which are contradictory(1)
• Defining fairness is hard – let’s use 55 years of legal precedent
• Goals of U.S. anti-discrimination laws:
• Treat like people alike
• Achieve parity in favorable outcomes across groups
(1) Arvind Narayanan, “21 Definitions of Fairness and their Politics,” https://fatconference.org/2018/livestream_vh220.html
6. Types of discrimination: disparate treatment
• Disparate treatment
• Intentional discrimination (generally)
• “I will not give you a loan because you are a [woman, Hispanic, Asian, etc.].”
• “I will make it harder for [women, Hispanics, Asians, etc.] to get a loan.”
9. Reasons why AI might discriminate
1. Problematic data
• Underrepresentation of minorities
• Inaccurate or missing data
• Differences in patterns of causation and correlation
• Incorporates past or potential future discrimination
2. Less discriminatory models are available
• Looking for fairer models is desirable and may forestall legal problems
11. Fairer AI: Using AI to Fix AI
• The challenge:
• Find a fairer model that meets business necessity
• AI makes this easier because of the “Multiplicity of Good Models”
• With lots of choices, optimize on a second metric - fairness!
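To make that concrete, here is a minimal sketch, assuming a gradient-boosted model family, AUC as the quality metric, and the adverse impact ratio (AIR) as the fairness metric. The synthetic data, candidate-generation loop, and "near-best" threshold are illustrative assumptions, not the talk's specific algorithm:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Synthetic lending-style data with a protected-group indicator (assumption).
rng = np.random.default_rng(42)
X = rng.normal(size=(3000, 6))
protected = rng.integers(0, 2, size=3000)
y = (X[:, 0] + X[:, 1] + rng.normal(size=3000) > 0).astype(int)

def air(approve):
    # Adverse impact ratio: protected approval rate / control approval rate.
    return approve[protected == 1].mean() / approve[protected == 0].mean()

# Multiplicity of good models: many hyperparameter settings yield nearly the
# same AUC, so among the near-best models we can optimize a second metric.
candidates = []
for seed in range(15):
    r = np.random.default_rng(seed)
    m = GradientBoostingClassifier(max_depth=int(r.integers(2, 5)),
                                   learning_rate=float(r.uniform(0.03, 0.2)),
                                   random_state=seed).fit(X, y)
    auc = roc_auc_score(y, m.predict_proba(X)[:, 1])
    candidates.append((auc, air(m.predict(X)), m))

best_auc = max(a for a, _, _ in candidates)
good = [c for c in candidates if c[0] >= best_auc - 0.01]  # "good" models
fairest = max(good, key=lambda c: c[1])                    # pick the fairest
print(f"chosen model: AUC={fairest[0]:.3f}, AIR={fairest[1]:.2%}")
```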
12. How to fix biased AI: the Pareto Frontier
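The underlying idea can be sketched in a few lines: among many candidate models, keep only those not dominated on both quality and fairness. This is a minimal sketch assuming each candidate is summarized as an (AUC, AIR) pair, with both metrics higher-is-better; the candidate names and numbers are illustrative:

```python
# Extract the Pareto frontier from candidate models scored on quality
# (AUC, higher is better) and fairness (AIR, higher is better).
def pareto_frontier(models):
    """models: list of (name, auc, air). Returns the non-dominated subset."""
    frontier = []
    for name, auc, air in models:
        dominated = any(a >= auc and f >= air and (a > auc or f > air)
                        for _, a, f in models)
        if not dominated:
            frontier.append((name, auc, air))
    return sorted(frontier, key=lambda m: m[1], reverse=True)

candidates = [("tree_640", 0.705, 0.65), ("tree_660", 0.698, 0.70),
              ("tree_700", 0.601, 0.95), ("tree_x", 0.690, 0.60)]
print(pareto_frontier(candidates))  # tree_x is dominated by tree_660
```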
13. Using AI to Fix AI: the Pareto Frontier in action
14. Using AI to Fix AI: methods
[Diagram: “Use AI to Fix AI” at the center, surrounded by six methods:]
• Feature Selection
• Algorithm Selection
• Adversarial Modeling
• Data Preprocessing
• Regularization
• Model Tuning
See IBM’s “AI Fairness 360” for implementations of many of these methods. But be careful if you are considering using these methods in regulated industries!
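As one example, here is a minimal sketch of AI Fairness 360's Reweighing preprocessor, which reweights training rows so that favorable outcomes are independent of the protected attribute; the DataFrame and column names are illustrative assumptions:

```python
# Sketch of one AIF360 data-preprocessing method (Reweighing).
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing

# Illustrative data: "group" is the protected attribute (1 = privileged).
df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 0, 1, 0],
    "income":   [55, 30, 62, 48, 33, 29, 70, 41],
    "group":    [1, 1, 1, 1, 0, 0, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=["approved"],
                             protected_attribute_names=["group"])
rw = Reweighing(unprivileged_groups=[{"group": 0}],
                privileged_groups=[{"group": 1}])
reweighted = rw.fit_transform(dataset)
print(reweighted.instance_weights)  # use as sample_weight when training
```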
15. Using AI to Fix AI: feature selection
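One common way to implement fairness-aware feature selection is a leave-one-feature-out loop: refit the model without each feature and flag features whose removal barely hurts AUC but meaningfully improves AIR. A minimal sketch on synthetic data; the data, the proxy feature, and the AIR helper are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic data: feature 4 acts as a proxy for group membership.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
protected = rng.integers(0, 2, size=2000)   # 1 = protected group
X[:, 4] -= 0.8 * protected                  # proxy: lower for protected group
y = (X[:, 0] + X[:, 1] + 0.5 * X[:, 4] + rng.normal(size=2000) > 0).astype(int)

def air(approve, prot):
    # Adverse impact ratio: protected approval rate / control approval rate.
    return approve[prot == 1].mean() / approve[prot == 0].mean()

def evaluate(cols):
    model = LogisticRegression().fit(X[:, cols], y)
    approve = model.predict(X[:, cols])
    auc = roc_auc_score(y, model.predict_proba(X[:, cols])[:, 1])
    return auc, air(approve, protected)

base_auc, base_air = evaluate(list(range(5)))
for drop in range(5):
    cols = [c for c in range(5) if c != drop]
    auc, fair = evaluate(cols)
    print(f"drop feature {drop}: dAUC={auc - base_auc:+.3f}, "
          f"dAIR={fair - base_air:+.3f}")
```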
16. Using AI to Fix AI: dual-objective optimization
Model splits are determined by minimizing classification loss while also maximizing fairness:

ℒ(F) = ℒ(C) − λ · ℒ(DI)

The algorithm tests the tradeoff between model quality and disparate impact (DI) at each iteration. Set requirements for business validity (an acceptable drop in model quality).
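A minimal sketch of such a combined objective, reading ℒ(C) as log loss and ℒ(DI) as a fairness score that peaks at parity; the exact functional form of the DI term is an assumption, not the talk's specific choice:

```python
import numpy as np

def combined_loss(y_true, y_prob, approve, protected, lam=0.5):
    """L(F) = L(C) - lambda * L(DI): classification loss minus a fairness reward."""
    eps = 1e-9
    # L(C): standard log loss.
    l_c = -np.mean(y_true * np.log(y_prob + eps)
                   + (1 - y_true) * np.log(1 - y_prob + eps))
    # L(DI): adverse impact ratio folded so it peaks at 1.0 when approval
    # rates are equal across groups (assumed form of the fairness score).
    air = approve[protected == 1].mean() / (approve[protected == 0].mean() + eps)
    l_di = min(air, 1.0 / air)
    return l_c - lam * l_di
```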
17. Using AI to Fix AI: dual-objective optimization
Three candidate trees, each splitting all observations at a different credit-score cutoff:

Cutoff              Model Quality (AUC)    Disparate Impact (AIR)
Credit Score 640    70.5                   65%
Credit Score 660    69.8                   70%
Credit Score 700    60.1                   95%
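For concreteness, a minimal sketch of how each row above could be scored, assuming arrays score, repaid, and protected (hypothetical names) holding applicants' credit scores, repayment outcomes, and group membership:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_cutoff(score, repaid, protected, cutoff):
    """Score one approve/deny rule: AUC for quality, AIR for disparate impact."""
    approve = (score >= cutoff).astype(int)
    auc = roc_auc_score(repaid, approve)
    # AIR: protected-group approval rate / control-group approval rate.
    air = approve[protected == 1].mean() / approve[protected == 0].mean()
    return auc, air

# e.g., compare the three candidate cutoffs from the table above:
# for cutoff in (640, 660, 700):
#     print(cutoff, evaluate_cutoff(score, repaid, protected, cutoff))
```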
18. Parting thoughts
• AI can be discriminatory – even when the model builder has no ill intent
• Review your data closely
• Aim towards having a causal model
• Search for diverse views when reviewing models
• It is possible to fix discriminatory AI
• Making fairer AI can be relatively low-cost and will benefit your company’s stakeholders
• Please reach out to me if you would like to discuss this more!