Presentation outlining the UnBias project, an EPSRC-funded project on the transparency of biases in algorithm behaviour, which often arise from unavoidable implicit design choices.
This presentation was given at the DASTS16 conference in Aarhus Denmark on June 3rd 2016.
4. Information services, e.g. internet search, news feeds etc.
• free-to-use => no competition on price
• lots of results => no competition on quantity
• Competition on quality of service
• Quality = relevance = appropriate filtering
Good information service = good filtering
7. Personalized recommendations
• Content based – similarity to past results the user liked
• Collaborative – results that similar users liked (people with statistically similar tastes/interests)
• Community based – results that people in the same social network liked (people who are linked on a social network, e.g. ‘friends’)
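As a minimal sketch of the collaborative approach above: recommend unseen items weighted by how statistically similar other users' past ratings are. The users, items, and ratings here are invented for illustration, and real systems use far larger matrices and more robust similarity measures.

```python
# Minimal user-based collaborative filtering sketch (toy data, invented names).
from math import sqrt

# user -> {item: rating}
ratings = {
    "alice": {"doc1": 5, "doc2": 3, "doc3": 4},
    "bob":   {"doc1": 4, "doc2": 3, "doc4": 5},
    "carol": {"doc2": 5, "doc3": 1, "doc4": 2},
}

def cosine_similarity(a, b):
    """Cosine similarity over the items both users rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[i] * b[i] for i in shared)
    norm_a = sqrt(sum(a[i] ** 2 for i in shared))
    norm_b = sqrt(sum(b[i] ** 2 for i in shared))
    return dot / (norm_a * norm_b)

def recommend(user, k=2):
    """Rank items the user has not seen by similarity-weighted ratings."""
    scores, weights = {}, {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine_similarity(ratings[user], their)
        for item, r in their.items():
            if item in ratings[user]:
                continue
            scores[item] = scores.get(item, 0.0) + sim * r
            weights[item] = weights.get(item, 0.0) + sim
    ranked = sorted(scores, key=lambda i: scores[i] / weights[i], reverse=True)
    return ranked[:k]

print(recommend("alice"))  # → ['doc4']
```

Note how the choice of similarity measure and neighbour weighting is exactly the kind of implicit design decision the project is concerned with: it silently determines what each user gets to see.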
10. User understanding of social media algorithms
More than 60% of Facebook users are entirely unaware of
any algorithmic curation on Facebook at all: “They
believed every single story from their friends and followed
pages appeared in their news feed”.
Published at: CHI 2015
13. Information filtering, or ranking, implicitly manipulates choice
behaviour.
Many online information services are ‘free-to-use’: the service is paid
for by advertising revenue, not by users directly
Potential conflict of interest:
promote advertisement vs. match user interests
Advertising inherently tries to manipulate consumer behaviour
Personalized filtering can also be used for political spin,
propaganda, etc.
Manipulation: conflict of interest
15. Q&A with N. Lundblad (Google)
Nicklas Lundblad, Head of EMEA Public Policy and Government Relations at Google
Human attention is the limited resource that services need to
compete for.
As long as competing platforms exist, loss of agency due to
algorithms deciding what to show users is not an issue:
users can switch to another platform.
16. UnBias: Emancipating Users Against Algorithmic
Biases for a Trusted Digital Economy
WP1: ‘Youth Juries’ workshops with 13-17 year olds to co-
produce citizen education materials on properties of
information filtering/recommendation algorithms;
WP2: co-design workshops, hackathons and double-blind
testing to produce user-friendly open source tools for
benchmarking and visualizing biases in the algorithms;
WP3: design requirements for algorithms that satisfy subjective
criteria of bias avoidance based on interviews and
observation of users’ sense-making behaviour
WP4: policy briefs for an information and education governance
framework for social media usage. Developed through broad
stakeholder focus groups with representatives of
government, industry, third-sector organizations, educators,
lay-people and young people (a.k.a. “digital natives”).
17. Thank you for your attention
ansgar.koene@nottingham.ac.uk
http://casma.wp.horizon.ac.uk/
18. It’s based on data so it must be true
“More data, not better models”
Belief that the ‘law of large numbers’ means Big Data methods do
not need to worry about model quality or sampling bias as
long as enough data is used.
“More Data” is the key to Deep-learning success compared to
previous AI
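The flaw in the "more data" belief can be shown with a small simulation (my own illustration, with invented numbers): the law of large numbers makes a *biased* estimate converge too, just to the wrong value. Here the population mean is 0.5, but the sampling process over-represents high values, so the error settles near the bias instead of zero.

```python
# Sketch: sampling bias does not wash out with more data.
import random

random.seed(0)

def biased_sample(n):
    """Population is Uniform(0, 1), but the sampling process
    over-represents large values (e.g. the most active users):
    acceptance probability grows with x."""
    out = []
    while len(out) < n:
        x = random.random()
        if random.random() < x:  # biased toward high values
            out.append(x)
    return out

true_mean = 0.5  # mean of Uniform(0, 1); biased sample converges to 2/3
for n in (100, 10_000, 1_000_000):
    est = sum(biased_sample(n)) / n
    print(f"n={n:>9}: estimate={est:.3f}, error={est - true_mean:+.3f}")
```

However large n gets, the estimate converges to 2/3, not 0.5: more data sharpens the wrong answer.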
19. Garbage in -> garbage out
perpetuating the status quo
ProPublica “Machine Bias”
20. ‘equal opportunity by design’
“Big Data: A Report on Algorithmic Systems,
Opportunities, and Civil Rights“, White House
report focused on the problem of avoiding
discriminatory outcomes
“To avoid exacerbating biases by encoding them into
technological systems, [the report proposes] a principle of
‘equal opportunity by design’—designing data systems
that promote fairness and safeguard against
discrimination from the first step of the
engineering process and continuing throughout
their lifespan.”
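One concrete check in this spirit (my own illustration, not taken from the White House report) is to compare positive-outcome rates across groups before deployment. The sketch below computes a disparate-impact ratio on invented toy data; the 0.8 threshold follows the common "four-fifths rule" from US employment-selection guidelines.

```python
# Sketch of a pre-deployment fairness check: flag large disparities
# in selection rates across groups. Data and threshold are illustrative.

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest to the highest group selection rate.
    The 'four-fifths rule' flags ratios below 0.8."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Invented toy data: group label and whether a loan was approved.
decisions = [("a", True)] * 8 + [("a", False)] * 2 \
          + [("b", True)] * 4 + [("b", False)] * 6

ratio = disparate_impact(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.4/0.8 = 0.50 -> flagged
```

Such a check is only one narrow notion of fairness; ‘equal opportunity by design’ asks for this kind of safeguard at every stage of the engineering process, not just a final audit.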