
Scaling Harm: Designing Artificial Intelligence for Humans

3,795 views

Published on

On the responsibility of creators of inference algorithms to people in the real world
(Wrangle 2015)

Video: bit.ly/scalingharmvideo

Published in: Technology
  • "Fair" and "biased" are human constructs - they are subjective in the same way as good or bad are subjective. You cannot combat bias by talking to people - that only changes it to a different bias.
  • This is thoughtful and sensible. But it has been grabbed and disseminated by those who would use it to undermine data's power and argue in favour of keeping the same humans telling us all what to do.

Scaling Harm: Designing Artificial Intelligence for Humans

  1. Wrangle 2015: Designing Artificial Intelligence for Humans. Clare Corthell, summer.ai
  2. Wrangle 2015
  3. We build algorithms that make inferences about the real world.
  4. Potential to make life better: trade stocks, find you a date, score customer churn
  5. Potential to do harm, in the real world (not the superintelligence; in the real world, to real people, now)
  6. Let's get really uncomfortable
  7. Goal: think about how to mitigate harm when using data about the real world
  8. "ruthless, dictatorial, biased, & harmfully revealing"
  9. This isn't hysterical - it's Reasonable Fear
  10. How harm came about · Where harm occurs · Action we can take
  11. How does harm come about?
  12. Digital Technology and Scale: how repeatable tasks are automated. Humans decide which tasks the computer does, and how; productivity is limited by how fast people make decisions.
  13. Inference Technology and Scale: how human-like decisions are automated. Humans are being "automated away"; decisions are made faster, at scale, and without humans.
  14. Computers do everything ∴ Rainbows & Cornucopias, right?
  15. Yes, in theory…
  16. Bias & prescription (not everyone gets their cornucopia of rainbows)
  17. Bias: holding prejudicial favor, usually learned implicitly and socially. Every one of us is biased, and people can't observe their own biases.
  18. Bias in Data: bias in human thought leaves bias in data, skews that we can't directly observe.
  19. Bias, scale, harm: inference technology scales human decisions; any flawed decisions or biases it is built on are scaled, too.
  20. Descriptive, Predictive, Prescriptive: What happened? What will happen? What should happen?
  21. Prescription: when inferences are used to decide what should happen in the real world
  22. Prescription is power: possible and likely to reinforce biases that existed in the real world before
  23. Examples: online dating, future startup founders, loan assessment
  24. Dating: Subtle Bias and Race
  25. Dating: Subtle Bias and Race. We can say that building a matching algorithm based on these scores would reinforce a racial bias: in the ratings men typically gave to women, the effect is apparent in aggregate.
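The aggregate effect slide 25 describes can be sketched in a few lines. The ratings below are invented for illustration (the talk does not publish its data): each rater's skew is small, but a matcher that ranks by average score turns those skews into a systematic ordering.

```python
# Hypothetical sketch: how a matcher trained on biased ratings reinforces
# that bias. All records here are synthetic, not real dating data.
from statistics import mean

# (rater, ratee_group, score on a 1-5 scale)
ratings = [
    ("a", "group_x", 4), ("b", "group_x", 5), ("c", "group_x", 4),
    ("a", "group_y", 3), ("b", "group_y", 2), ("c", "group_y", 3),
]

def mean_score_by_group(records):
    """Average score each group receives across all raters."""
    totals = {}
    for _, group, score in records:
        totals.setdefault(group, []).append(score)
    return {g: mean(scores) for g, scores in totals.items()}

averages = mean_score_by_group(ratings)
# A naive matcher that surfaces the highest-scored group first will show
# group_x to everyone before group_y: small per-rater skews become a
# systematic ranking gap in aggregate.
ranked = sorted(averages, key=averages.get, reverse=True)
print(averages)  # group_x ≈ 4.33, group_y ≈ 2.67
print(ranked)    # ['group_x', 'group_y']
```

The point is not the arithmetic but the feedback loop: once the matcher ranks on these averages, group_y receives less exposure, fewer matches, and eventually even lower scores.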
  26. Future startup founders: Institutional Bias. A decision tree with inputs: college education, computer science major, years of experience, last position title, approximate age, work experience in a venture-backed company.
  27. Future startup founders: Institutional Bias. Institutional bias comes through the data, though the features seemed meritocratic at the outset. The features say nothing about gender! Yet in literally pattern-matching founders, we see bias.
  28. Future startup founders: Institutional Bias. The problem is not that this doesn't reflect the real world, but rather that it doesn't reflect the world we want to live in.
  29. Loan assessment: the long history of bias. Loan officers have a long history of issuing loans based on measurable values such as income, assets, education, and zip code. The problem: in aggregate, loan officers are historically biased, so loan algorithms perpetuate and reinforce an unfair past in the real world today.
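Slides 26-29 share one mechanism: a model that never sees the protected attribute can still reproduce bias through proxy features and historically biased labels. A minimal sketch with invented data (the features, names, and history below are hypothetical, not from the talk's dataset):

```python
# Hypothetical illustration of proxy bias: "venture_backed" experience stands
# in for a biased history in which one group rarely got such roles, and past
# decisions rewarded exactly that feature. All data is synthetic.

# (years_experience, venture_backed, gender, funded_in_the_past)
applicants = [
    (8, True,  "m", True),
    (6, True,  "m", True),
    (7, True,  "m", True),
    (8, False, "f", False),
    (6, False, "f", False),
    (9, False, "f", False),
]

def predict(years, venture_backed):
    """The rule a decision tree would learn from this history: venture_backed
    predicts past funding perfectly. Note gender is never a feature."""
    return venture_backed

selected = {"m": 0, "f": 0}
for years, venture_backed, gender, _ in applicants:
    if predict(years, venture_backed):
        selected[gender] += 1

print(selected)  # {'m': 3, 'f': 0}: the bias resurfaces through the proxy
```

Even though gender was excluded, selection rates split perfectly along it, because the training labels encoded who historically got funded. That is the sense in which the model "doesn't reflect the world we want to live in."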
  30. Let's go beyond criticism to action
  31. The world isn't perfect, so it's worth exploring potential worlds corrected for biases like racism and sexism
  32. You sit in the captain's chair; you move the levers and the knobs
  33. The future is unwritten, yet sometimes we forget that we could make it better.
  34. Design for the world you want to live in.
  35. Finding Bias: bias is difficult to understand because it lives deep within your data and deep within the context of the real world
  36. Action: two ways to combat bias. 1. Talk to people. 2. Construct fairness.
  37. Action: talk to people. Seek to understand who they are, what they value, what they need, and what potential harm can affect them.
  38. Action: construct fairness. For example: to mitigate gender bias, include gender so you can actively enforce fairness (what doesn't get measured can't be managed).
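One way to read slide 38 in code: keeping the protected attribute lets you measure and then equalize selection rates. The sketch below enforces one simple fairness notion (equal top-k per group, a form of demographic parity); the scores and the parity rule are illustrative assumptions, not the talk's prescribed method.

```python
# Hypothetical sketch of "constructing fairness": retain the gender column so
# selection rates can be measured and actively equalized. Scores are synthetic.

candidates = [  # (model score, gender)
    (0.9, "m"), (0.8, "m"), (0.7, "m"),
    (0.6, "f"), (0.5, "f"), (0.4, "f"),
]

def select_with_parity(cands, k_per_group):
    """Select the top-k scores within each group, so groups are selected at
    equal rates by construction (one fairness notion among many)."""
    by_group = {}
    for score, group in cands:
        by_group.setdefault(group, []).append(score)
    return {group: sorted(scores, reverse=True)[:k_per_group]
            for group, scores in by_group.items()}

picked = select_with_parity(candidates, k_per_group=2)
# Without the gender column we could only take the global top-4 (three men,
# one woman) and could not even measure the imbalance, let alone correct it.
print(picked)  # {'m': [0.9, 0.8], 'f': [0.6, 0.5]}
```

This is the "what doesn't get measured can't be managed" point made concrete: the protected attribute must be present at decision time for any such correction to be possible.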
  39. Designing algorithms is creative (data does not speak for itself)
  40. We should value what we do not simply by the accuracy of our models, but by the benefit for humans and the lack of negative impact.
  41. A facial recognition thought experiment
  42. If you don't build it, maybe no one will
  43. A facial recognition thought experiment
  44. You have agency and a market price
  45. If an algorithm makes something cheaper for the majority but harmful for a minority, are you comfortable with that impact?
  46. We're at the forefront of a new age governed by algorithms. We must be deliberate in managing them ethically, strategically, and tactically.
  47. Remember the people on the other side of the algorithm,
  48. because whether it's their next song, their next date, or their next loan,
  49. you're designing their future. Make it a future you want to live in.
  50. Huge thanks. @clarecorthell, clare@summer.ai. Sources of note: "Fairness Through Awareness", Dwork et al.; "Fortune-Tellers, Step Aside: Big Data Looks For Future Entrepreneurs", NPR; Harvard Implicit Bias Test. Thanks to Manuel Ebert, Cynthia Dwork, Wade Vaughn, Marta Hanson.