Machine learning and AI are being adopted across many industries at a rapid pace. But bias in AI, a lack of talent diversity in AI, and a lack of access to knowledge pose major risks. In this presentation, I showcase some real-life examples of bias in AI and argue that, if we take the right steps, we can build an inclusive AI. Building an inclusive AI is not only the right thing to do for society; it also makes for a great product and business.
2. AI in the next 10 years will
have a global economic impact
of up to $3T.
AI will have an impact in every
industry including
Banking
Healthcare
Autonomous Driving
Agriculture
Identifying new drugs
Legal, Fraud detection
Information security
Disaster relief
Customer support, and
many more.
AI in the Next 10 Years
$1.5 T - $2.95 T
Global Economic Impacts Associated with AI, Analysis Group, 2016
3. "AI is probably the most important thing humanity has
ever worked on. I think of it as something more
profound than electricity or fire."
Sundar Pichai
CEO, Google
"...AI is the defining technology of our
times."
Satya Nadella
CEO, Microsoft
"We believe that artificial intelligence will be able to embroider a
person’s ability and help to make a breakthrough that transforms our
lives in education, in access to health services and in countless other
areas."
Tim Cook
CEO, Apple
Google, Microsoft, IBM,
Apple, and other large
companies and government
organizations are betting their
future on AI.
The race is on and AI is rapidly
being democratized.
But there are big Risks
involved.
4. I am not worried about killer
robots as the movies portray
them.
I am worried about the real
dangers:
Bias in AI.
Lack of diversity of talent in AI.
Lack of access to knowledge
in AI.
5. I am motivated to speak about
bias in AI because a year ago we
set out to build face and emotion
recognition algorithms and
models for our company.
We collaborated with
Universities and experts in the
field and leveraged data that
was easily available.
As we built our first version, we
realized it had trouble
recognizing my face because, as
you can notice, I am brown
skinned.
The root cause was a dataset
that lacked diversity.
This made me wonder how
many other startups and
corporations are building AI
models on biased datasets
without even realizing it.
6. Biased AI systems are all
around us.
They disproportionately affect
women and minorities, and
very few seem to care.
7. Research shows that over 180
human biases have been defined
and classified, and any of them
may end up in AI.
If unchecked, they can impact our
daily lives and decision
making: who gets a loan,
what we purchase, the ads we
see, even decisions made in the
criminal justice system.
More than 180 human biases
have been defined and classified
IBM Research
9. Dataset bias occurs when a
section of the population is
underrepresented in the data,
leading to errors in
classification.
For example, research from the
MIT Media Lab shows that
leading AI algorithms do a poor
job identifying the faces of
darker-skinned women, in some
cases classifying them as male
or not finding a face at all.
Example: Dataset Bias
Gender Shades, Joy Buolamwini, Timnit Gebru, 2018
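Findings like Gender Shades come from disaggregated evaluation: measuring accuracy separately for each subgroup instead of averaging over the whole test set, where a small, poorly served group can disappear into a good overall number. A minimal sketch in Python; the helper name, group labels, and toy predictions below are all hypothetical, not from the study:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute classification accuracy separately for each subgroup.

    `records` is a list of (group, true_label, predicted_label) tuples.
    Returns a dict mapping group -> accuracy.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Made-up toy results for an imagined gender classifier:
records = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("darker_female", "female", "male"),    # misclassified
    ("darker_female", "female", "female"),
]
print(accuracy_by_group(records))
# {'lighter_male': 1.0, 'darker_female': 0.5}
```

The overall accuracy here is 75%, which sounds acceptable; only the per-group breakdown reveals that one group gets half the errors.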
10. Borden was rated high risk for
future crime after she and a
friend took a kid’s bike and
scooter that were sitting
outside. She did not reoffend.
ProPublica uncovered that
COMPAS, software used for
predicting future criminal
behavior, is biased against
black people.
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Example: Dataset Bias
11. When you search for “Nurse” on
Google, the results are
predominantly female.
Example: Reinforcing Stereotypes
12. Example: Reinforcing Stereotypes
When you search for “Doctor”
on Google, the results are
predominantly male.
When such results are used as a
dataset to train AI, they can
reinforce the stereotypes
that currently exist in our
society.
AI companies need to
deliberately break these
stereotypes by including
demographic and task
diversity in their training
datasets.
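One simple sanity check before training is to measure the composition of the dataset itself, so skew like the search-result example above is caught early. A minimal sketch, with hypothetical annotations standing in for labeled search results:

```python
from collections import Counter

def composition(labels):
    """Return the fraction of each demographic label in a dataset."""
    counts = Counter(labels)
    n = len(labels)
    return {label: counts[label] / n for label in counts}

# Hypothetical annotation of 100 "nurse" image results used as training data:
nurse_images = ["female"] * 90 + ["male"] * 10
print(composition(nurse_images))   # {'female': 0.9, 'male': 0.1}
```

A report like this does not fix the bias on its own, but it makes the skew visible so the team can deliberately rebalance or augment the data.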
13. Example: Automation & Confirmation Bias
AI is very good at making fast
decisions.
But it is important to take the
cultural context into account.
In this case, a Snapchat filter is
propagating a European notion
of beauty by lightening the skin
and the eyes and making the
nose and lips narrower.
14. Example: Interaction Bias
If we let AI train in the wild
without any safeguards,
something like the Tay.ai
controversy could happen.
In less than 24 hours, this Twitter
bot went from saying humans
are super cool to becoming a
racist, sexist bigot.
16. I also want to show you what
happens when you fix for bias.
This is an example of an image
with many different types of
faces and skin tones.
In this case, a generic algorithm
was used and it detected only a
few faces.
But we put in extra effort
with our AI algorithm and
trained it to recognize faces in
the real world. The dataset
contained faces that were
tilted, profiles, dark skin
tones, women, dark lighting
and more.
17. The result: the algorithm can
now pick up all the faces in this
scene, because it was trained
with a dataset that represents
the real world.
18. Now it is able to recognize faces
across a large, diverse crowd.
With some conscious steps and
by incorporating a diverse
dataset, we are able to
build a much better product.
19. To Build Inclusive AI Algorithms & Datasets
AI4All, 2017. GirlsWhoCode.com
Test + Audit
Fairness + Ethics
Transparency
Diverse talent
So how do we build an inclusive AI
product and ensure it is bias free?
It starts with testing and auditing of
algorithms and datasets. If
someone is selling you an AI solution,
ask where their data comes
from and how diverse it is.
If you are a corporation,
government entity, startup
accelerator or an investor, this
should be part of your policy.
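One concrete audit such a policy could require is comparing the rate of favorable predictions across groups, often called the demographic parity gap. A hedged sketch with made-up loan-approval predictions; the function names and numbers are illustrative only, and a large gap flags where to look rather than proving unfairness by itself:

```python
def selection_rate(predictions, groups, group):
    """Fraction of members of `group` receiving a positive (1) prediction."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_gap(predictions, groups, group_a, group_b):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(selection_rate(predictions, groups, group_a)
               - selection_rate(predictions, groups, group_b))

# Hypothetical loan-approval predictions (1 = approved):
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups, "a", "b"))  # 0.75 - 0.25 = 0.5
```

An auditor would run a check like this over the vendor's held-out data, per protected attribute, and ask for an explanation whenever the gap is large.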
When we are building AI systems we
should set high standards for
fairness and ethics.
Provide transparency into the AI/
ML related work.
These steps can help you build an
Inclusive AI.
But to really execute on this mission
we need to build diverse and
inclusive teams that implement AI
based solutions.
Look at who is discussing bias
in AI today. We are women and men
of color, because we experience
this first hand.
We need them on our teams to
build an inclusive AI.
Today only 13% of AI/ML
companies have a female CEO,
and only 1 in 5 computer science
graduates is a woman.
20. Current estimate of Global AI talent
200,000 to 300,000
Estimate of global AI talent, Tencent, 2017
On the other hand, there is a
huge talent gap in AI.
Today there are only 200K to
300K AI practitioners across the
world, but we need millions.
This provides a huge
opportunity for girls and
minorities to fill the gap.
This is where we all come in.
Girls and Minorities need our
support, guidance and role
models.
We also need more
organizations that support girls
and minorities getting into AI.
21. Organizations supporting/promoting Inclusive AI
These are some of the
organizations involved in
training or up-skilling and
providing mentorship programs
for AI. They are also raising
awareness around bias in AI.
But there is a lot more work to
do, and more organizations need
to be created to tackle the need.
BlackinAI.org
AI-4-ALL.org
RefineAI.com
You?
22. To build an Inclusive AI
Analyze current systems for Bias
Be conscious about the cultural contexts
Focus on diversity of the AI team
An inclusive AI is not just the right
thing to do; it also makes for a better
product and business.