A commonly held perspective is that humans are the source of error in systems and thus, something to design around - often framing things in terms of cognitive biases or human error. The goal in these systems is to reduce human input, often to the point of eliminating them from the system altogether.
But there is another way to think about the human role - a positive perspective that considers and takes advantage of human capabilities. This sees the human operator as a valuable contributor that is essential for system success.
This presentation walks through some commonly held views on humans and presents an alternate perspective that puts humans in a better light. It also describes how this perspective impacts how we design systems so that we can help reduce the blind spots that are created when we try to take people out of a system.
2. What Comes to Mind When We Think of Human Behavior?
Irrational
Error-prone
Fatigue
Lazy
Doesn’t pay attention
Absent-minded
Forgetful
Limited
3. Only One Inevitable End Result: Remove People
Most tech companies believe that by replacing humans with smart technologies, they can reduce errors, save money, and improve efficiency.
Photo - zdnet.com
5. People Are Designed
Vitruvian Man - Leonardo Da Vinci - from YouTube
Humans have thrived on this planet for millennia for a reason.
We’ve been designing for people for less than a century.
6. People Have Constraints
“Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.”
-Unknown
https://otterman.wordpress.com/2009/03/04/staying-out-deep-water-the-giant-mudskipper/
7. Types of Constraints
Accessibility: A person may have a cognitive or physical disability that limits movement, senses, or actions.
Affordances: People will see their interaction capabilities with the world differently based on their own physical state and abilities.
Emotions: A person’s emotional state may impose constraints that we have to support.
Physical: Our physical bodies are only capable of so much (reach, force, speed).
Perception: Our senses, especially our vision, have limits. We often call these optical illusions.
Decision-making: With imperfect information, people don’t always make the best possible decision.
10. People Are Rational
“If someone’s behavior doesn’t make sense to you, that says something about you.”
-Jens Rasmussen
https://theoverwhelmedbrain.com/irrational-people/irrational-behavior/ Source: News-Herald
11. Understanding Errors & Biases
Anchoring
Availability
Ben Franklin Effect
Group Attribution Error
Halo Effect
Loss Aversion
Outcome
Recency
Selection
Survivorship
We’ve evolved to help us survive in a complex, data-incomplete world with no best answer, but many satisfactory answers.
13. Embrace and Adapt to Limitations
● Identify where people can satisfice vs. optimize and support each appropriately.
● Capture the underlying model of the system within the design to foster understanding.
● Allow people to explore the decision-making space.
● Help people transition from System 1 (heuristic) to System 2 (analytical) thinking.
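The contrast between satisficing and optimizing in the first bullet can be made concrete with a short sketch (not from the talk; the functions, option list, and threshold are illustrative assumptions):

```python
# Sketch: two decision strategies.
# "Optimizing" scans every option for the single best score;
# "satisficing" (Herbert Simon's term) stops at the first option
# that clears an acceptability threshold.

def optimize(options, score):
    """Exhaustive search: examine every option, return the best."""
    return max(options, key=score)

def satisfice(options, score, good_enough):
    """Bounded search: return the first acceptable option."""
    for option in options:
        if score(option) >= good_enough:
            return option
    return None  # nothing acceptable found

# Hypothetical car-shopping example: (name, suitability score)
cars = [("hatchback", 6), ("sedan", 8), ("coupe", 9), ("wagon", 7)]
score = lambda car: car[1]

print(optimize(cars, score))      # had to examine all four options
print(satisfice(cars, score, 7))  # stopped after two - good enough
```

A design that supports satisficing surfaces "good enough" signals early, rather than forcing users to review every option before acting.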
17. Leverage innate human focusing
● Don’t assume people need your notification right now.
● Allow people to determine where their focus should be.
● Don’t overwhelm the user’s innate filtering capabilities.
● Explore the use of calm technologies.
From wikipedia: user - Zyxwv99
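One way to honor "don't assume people need your notification right now" is to gate interruptions on the user's focus state. This is a hypothetical sketch, not an API from the talk; the class and field names are invented for illustration:

```python
# Sketch: a notification gate that respects the user's current focus
# instead of demanding attention. Only urgent items interrupt;
# everything else is held for the next natural break.

from dataclasses import dataclass, field

@dataclass
class Notification:
    message: str
    urgent: bool = False

@dataclass
class FocusAwareInbox:
    user_focused: bool = False          # e.g. a "do not disturb" state
    deferred: list = field(default_factory=list)

    def deliver(self, note: Notification) -> bool:
        """Return True if the notification interrupts the user now."""
        if note.urgent or not self.user_focused:
            return True                  # interrupt immediately
        self.deferred.append(note)       # hold until focus ends
        return False

    def drain(self):
        """Release held notifications when the focus period ends."""
        held, self.deferred = self.deferred, []
        return held

inbox = FocusAwareInbox(user_focused=True)
print(inbox.deliver(Notification("Build failed", urgent=True)))  # interrupts
print(inbox.deliver(Notification("Newsletter arrived")))         # deferred
```

The design choice here is calm-technology thinking: the system filters on the user's behalf rather than pushing every event to the foreground.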
18. Some Numbers:
We have billions of receptors collecting data all over our body every second of every day!
● The human retina contains about 120 million rod cells and 6 million cone cells.
● Every square centimetre of your skin contains around 200 pain receptors, 15 receptors for pressure, 6 for cold, and 1 for warmth.
● Humans have around 10,000 taste buds.
23. Designing For Automation
A negative human mindset seeks to replace people with superior technology within a system.
Photo by Chris Dorst: Charleston Gazette-Mail
24. Automation Is Rule-Oriented
“This system detects exactly 10,000 things, no more, no less. Any slight [...] variation will require another new rule.”
-Jeff Jonas
Rule-based systems use rules as a knowledge representation.
Learning systems create their own models using rule-based optimization systems.
Tech Only Does What We Ask
Technology doesn’t care if you accomplish your goals or not (and can’t go to jail if it fails).
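The brittleness in the Jonas quote can be shown in a few lines. This is an illustrative sketch (the rules and event strings are invented, not from any real system): a rule-based system recognizes exactly the cases its authors enumerated, and any slight variation falls through.

```python
# Sketch: a rule-based decision system. It "detects" exactly the
# cases written into it - a slightly rephrased input defeats it,
# and handling that input requires adding yet another rule.

RULES = {
    "invoice total exceeds limit": "flag",
    "shipping address changed": "flag",
    "login from new device": "challenge",
}

def rule_based_decision(event: str) -> str:
    # Exact-match lookup: no rule, no opinion.
    return RULES.get(event, "no rule matched")

print(rule_based_decision("login from new device"))     # covered by a rule
print(rule_based_decision("login from unseen device"))  # trivially rephrased: falls through
```

Humans generalize across such rephrasings without effort; rule-based technology does not, which is one reason a 1:1 swap of human for machine leaves gaps.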
25. Automation Has Constraints
All technology is designed to work under certain conditions. Those conditions are built in by their creators.
Photo: Cornell University
26. Automation Cannot Replace Humans
The Substitution Myth is the “belief on the part of designers that automation activities simply can be substituted for human activities without otherwise affecting the operation of the system.”
-Christoffersen & Woods, “How to make automated systems team players” (2003)
Technology Assumption
Tasks performed within a system are independent.
Reality
Most systems are networks of interdependent and mutually adapted activities. Cognitive demands are met through the interaction and coordination of multiple people (and machine agents).
27. Automation Creates New Work
“Every technology carries its own negativity, which is invented at the same time as technical progress.”
-Paul Virilio
Photo by: Uber Engineering
28. Photo from: NASA
Group performance is more than the sum of the abilities of the individuals that compose the group.
Design for humans and technology as teammates
● Create shared representations of the problem space AND representations of the activities of agents.
● Help users identify events, anticipate, and detect patterns / anomalies in the world.
● Give users the ability to substantially influence the technology’s activities (but not micro-manage).
● Don’t divide work into what humans do best and what technology does best.
30. Takeaways
● People are the only ones invested in system goals; keep them involved.
● People have constraints - but all systems do.
● Design to extend human capabilities by focusing on what we do well (pattern recognition, novel problem solving, attention focusing).
● Consider smart technologies as teammates, not replacement players.
● Help users identify events, anticipate, and detect patterns / anomalies in the world.
31. Takeaways
People are successful when they are given the information they need, in a way they can understand it, at the time they need it.
It’s up to us to design systems this way.
Thanks for coming
-There is a lot of talk about human-centered design, but the focus is on user goals, motivation, and things like that.
-This talk is going to focus more on people in general. This is another key piece of being human-centered, but something that doesn't get talked about a lot.
-Normally think about the negatives in humans
-This is especially true in tech companies.
When we think about humans within systems we generally think in terms of negatives. Humans are irrational and the source of errors. We suffer fatigue, we are lazy, etc… The list goes on and on. This is the standard way most of the tech community talks about people.
And the end result of this thinking inevitably leads to removing people from the system.
Usually, the goal is to replace with technology (AI, auto, etc…) and think of positive outcomes that it brings
But this ignores the positive things that people bring to systems. And when we design systems this way, our systems end up missing something.
That’s why this talk is about extending our capabilities through design. Not just designing around people.
Whether it's AI, automation, or whatever, the belief is that people are flawed and the technology will save the day: it can reduce errors, save money, and improve efficiency with limited downsides.
And certainly, these things can and do happen.
But today I will make the case that this is only part of the picture. I want to explore this with you, reframe the situation, and show the positive things that people can be responsible for.
This is important because how we think about the capabilities of our users impacts the way we design for them. If we think of people in a negative light, of course the tendency is to design people out of the system. But when we change this perspective, we focus on extending human capabilities and start seeing new opportunities for design and creating more powerful & impactful systems.
So let’s start and frame the conversation.
-Shaped by evolution to survive and thrive in the world
-We've been designing tools for millions of years
-But we've only been designing with how people work in mind for less than 100 years
-How much of the world have we designed poorly?
We have been shaped by millions of years of evolution to exist and thrive in this world. However, we haven't designed the world to fit these capabilities. We only started thinking about how to design for humans in the last 100 years or so, with WW2 really being the big turning point. So everything we think about should be framed in this perspective. We have designed the world largely without considering what humans were designed for, and then we act surprised that we aren't really fit for that environment. So, we can try to design around humans, or we can start shaping our world to better fit with how humans work.
This is not a point I am going to contend with. And we will get into some of these.
Quote: often attributed to Einstein, but there is no record he actually said this.
The point is, how are we judging how well humans are performing? Are we being fair when we talk about human shortcomings?
For most of human history, we've created tools that extend our physical or perceptual capabilities. It started with simple tools like rocks that turned into hammers, and we've gotten more advanced from there.
People can't fly. We don't judge people by their ability to fly. We don't consider ourselves incapable or limited because of this. This is simply a constraint. And we build products (airplanes, helicopters, hang gliders, etc.) that help us fill this gap - extend ourselves beyond our constraints.
We are starting to design for better accessibility. We often talk about how we need to accommodate various emotional states. Even perception - we build glasses, telescopes, etc. to improve our vision in various capacities.
But we don't give ourselves the same leeway when talking about decision-making. Too often, we look at constrained human decision-making, throw our hands up, and look to remove people from the system. Why do we treat this differently? We need to start looking at what we do well and determine ways to extend those capabilities.
People are driven to be successful in systems, but sometimes this results in failure.
Failure leads to analysis, which has the benefit of hindsight (with more time and information than the user has in real time).
We often don't hear about the times that a person in the loop prevents an accident thanks to our capabilities.
I think this is the best place to start. And something that gets missed a lot. People are driven towards being successful in systems. No one sets out to fail. We have a goal and we try to accomplish that. And in the course of doing this, yes, sometimes we fail.
And when we fail, what happens? We do accident analysis, where its easy to see, in hindsight, what people should have done. It’s a lot easier with more information and more time to analyze the problem.
What we also miss is the number of times that people are successful and avert disaster. This is a picture from the Miracle on the Hudson. Yes, pilots often do things within crashes that help contribute to the accident, but they always do so in an effort to be successful. They are trying to save the system. Sometimes it works, like the Miracle on the Hudson - sometimes it doesn't.
Every decision happens in context. People rarely set out to make a bad decision or cause an accident. They take their actions for some reason - even if we think it's a bad reason or a bad action - and in the moment, with the information they have and their goals, they think it's a good idea.
In general, when people make mistakes, they are doing exactly what they think is the best thing in that situation.
From Jens Rasmussen - a renowned figure in the fields of cognitive systems engineering, system safety, human error, and accident analysis.
Let’s even consider driving. It’s easy to look at driving and consider that people are bad at driving. In 2017, there were over 40,000 driving deaths. This is an astronomical number and it certainly is too high. And we should be doing everything we can to bring this number down as far as we can. But assuming that this number is high because people are bad at driving is kind of missing the point.
People are by and large very good at driving. Driving is largely a task that we can do without much thought. 99.9% of the time everything happens exactly as it should and we respond appropriately. So if we look at the context of actions, we can start to take a different perspective.
For every 1,000 minutes of driving, everything pretty much happens as normal. When we have this spare capacity, we start looking to fill our time with other activities - talking, singing along with the song on our phone, sending a text. The problem with driving is not that people are bad at driving; it's that driving is a fairly mundane activity that does not demand our attention most of the time.
Drs. Daniel Kahneman and Amos Tversky are often called the fathers of behavioral economics. They started finding 'inefficiencies' in the human decision-making process and how they can impact all sorts of activities in this world. You've heard of a lot of them: anchoring, the primacy effect…
These are very real. I'm not here to tell you they don't exist - most of them do, though psychology is having a bit of a replication crisis right now.
But they exist as capabilities evolved to help us navigate a complex data-incomplete world with no best answer but many satisfactory answers. Most of life occupies this space, even though we are often told differently.
Again, what are our ultimate goals in life? Survival and continuing our lineage.
Let's look at some of these and maybe understand why they exist (risk management, collaboration).
There are a lot of cars that work for your needs - trying to find the best car or the best deal is a fool's errand that car companies and ad agencies try to convince us is worthwhile.
This is a video from the show 'The Good Place' that helps show what life is like when we put too much thought into our decisions. This is what happens if we don't have these heuristics.
Life would be exhausting without them - these are useful shortcuts that reduce our cognitive load, again helping us survive another day and thrive.
We need to rethink how we have this conversation. When we think in terms of 'rational', we think in terms of what a computer would do. In a world where we are trying to support people and our work, why are we framing things in terms of what computers do well?
Pt 2 - get people the right information to understand their decision-making space. Let them explore it
Kahneman splits decision-making into System 1 and System 2. System 2 is more analytic, slower, more deliberate.
I am going to have you watch a video and ask you to perform a simple task. At the end I am going to ask you a question. If you have seen this video before or something similar, please don't spoil it for those who haven't.
In this video, there will be two groups of people. Some dressed in white and some dressed in black. Your task is to count the number of passes that the white team makes. Pretty straightforward. Any questions before we begin?
<Play video>
<Video ends> And now for the more important question...Did you see the gorilla make an appearance?
There are two completely different ways to look at this scenario. The popular way to look at this is that humans are so incompetent that something as crazy as a person in a gorilla suit walking through the scene can go unnoticed. This reinforces the idea that people are flawed.
But let’s look at this another way. Did you get the right answer? Did you count 15 passes for the white team?
I gave you a task and you completed it, despite all the other things that were going on.
Our perceptual system is so powerful that the cones in our eye can fire based on an object in the scene but our mind will ignore it because it is completely irrelevant to the task at hand.
Impact on design: Think about what this means for design. When people see this as a weakness, they decide to hit us over the head with information to make sure we don't miss it. Think about the notifications that you get through your phone or your Outlook. They demand your attention.
MORE TROUBLE IN COMMAND CENTERS / TMI story: If you think this is bad on your phone or from Outlook, imagine command centers, especially in trouble situations. At Three Mile Island (so I'm told), the klaxon was sounding faster than the operators could clear it. They jammed a penny into the clear button just to stop the noise.
FLOW: We talk a lot in design about flow - helping people achieve flow. When people are bombarded with alerts, even if they don't respond, they still have to think about each one to decide, and that breaks up their flow.
Good design doesn’t demand our attention. Good design allows us to use our innate capabilities to focus our attention properly.
Our peripheral vision is essentially designed to operate as a mechanism to help us re-direct our attention.
Cocktail party effect.
Technology should require the smallest possible amount of attention
Technology should inform and create calm
Technology should make use of the periphery
Technology should amplify the best of technology and the best of humanity
People like to throw around data overload like it's a human limitation, but when we look at the human body, we process tremendous amounts of sensory data and make sense of it.
This is one of the reasons behind Tufte's quote here.
We make sense of the world because of how it is presented to us. Things are grouped appropriately (into objects). Things move in ways that make sense. We can identify patterns in the world, even when they don't exist.
And when we design from this perspective, we can present more to the user than we normally think we can. One way to do this is through data visualization. Rather than presenting raw numbers, when we visualize data in the right way, we can maximize the information density within the view using our ability to detect patterns.
Use our pre-attentive properties to help people understand the data without increasing cognitive load.
These two visualizations come from the mid-1800s. They were, and still are, standards in how we should design to help people understand. So why is it so difficult to convince people that we need to provide more than just data in tables?
We can talk about things like automation, AI, analytics, etc… basically anything that offloads some of the work of a human user.
As children, we often heard the African American folktale about John Henry - the steel-driving man. The story is positioned as man vs. machine. The railroad company brings in a steam-powered drill that could replace the human workers. There is a race, and John Henry wins, although he dies in the end, so maybe he wasn't the winner after all.
And this is how we still think about technology, especially automation and AI, today. It’s us vs the machines. They are here to replace us. This has a lot of negative connotations and it affects how we design these systems.
Compare this with what I said about people earlier - people are success-oriented. We care about achieving goals. Automation doesn't. AI doesn't. It cares about whatever is in its programming - its data set.
Jeff Jonas - formerly of IBM now running a company that creates entity disambiguation for businesses to help stop fraud.
Anything that is rule-based will have issues.
Just like errors, biases, and optical illusions show conditions where the human mind struggles, there are also clear conditions where software-based technologies will fail consistently. This doesn’t mean that these technologies are poor or we should abandon them completely (although ethically we really need to consider how we are using them). It simply goes to show that any technology will have its weak spots which need to be compensated for in the system design. They fail because they are built as rule-based systems. Rule-based systems work when pre-conditions are met. And they have a chance to fail when they don’t.
Interestingly, humans operate in a completely different way, which system designers should take note of. Humans haven’t evolved more complex rules to handle problem/edge cases. Instead, humans have developed the capacity to apply more deliberate thinking (what Kahneman dubbed System 2) to a problem that can help us overcome the gaps in our fast-twitch, error-prone, rule-based thinking (labeled System 1) — hence the title of Kahneman’s book ‘Thinking, Fast and Slow’.
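As a loose analogy (mine, not Kahneman's), the two-system idea resembles a fast lookup path backed by a slower, general computation. The lookup table and problems here are invented for illustration:

```python
# Sketch: a fast, heuristic "System 1" path answers familiar cases
# from a lookup table; when it has no answer, a slower, deliberate
# "System 2" computation takes over and handles the novel case.

import time

FAST_ANSWERS = {"2+2": 4, "10*10": 100}   # pre-learned patterns

def think(problem: str):
    if problem in FAST_ANSWERS:            # System 1: instant, but only
        return FAST_ANSWERS[problem]       # works for patterns seen before
    # System 2: slow, general-purpose reasoning (here, actually
    # evaluating the arithmetic, with builtins disabled for safety)
    time.sleep(0.01)                       # deliberation has a cost
    return eval(problem, {"__builtins__": {}})

print(think("2+2"))      # fast path: pattern recognized
print(think("17*24"))    # slow path: novel, but still solvable
```

The design point is that humans don't add ever more rules for edge cases; they fall back to a slower mode that generalizes, which rule-based technology lacks.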
Tech only knows what we give it to learn from. This builds in the biases of the data set (Carol January talk).
'Technically Wrong' by Sara Wachter-Boettcher has a whole litany of cases where technology fails because people aren't considering the constraints that technology faces.
From the article “How to make automated systems team players”:
This belief is predicated on an assumption that the tasks performed within the system are basically independent. However, when we look closely at these environments, what we actually see is a network of interdependent and mutually adapted activities and artifacts (e.g., Hutchins, 1995). The cognitive demands of the work domain are not met simply by the sum of the efforts of individual agents working in isolation, but are met through the interaction and coordinated efforts of multiple people and machine agents.
This is not an anti-technology viewpoint. We can and should look to include technology within systems. This is simply a recognition that it’s never just a 1:1 swap.
Every time we create a new system or add new levels of complexity, we add new failure modes. One of the reasons we need humans in the loop is to be prepared for these new failure modes.
When we add technology to systems, we now have to understand several things about the technology. Is it working as it should? Is it nearing a boundary condition? Is it approaching its limits, and do I need to interject? Is it making a decision that makes sense given the context I understand?
The recent 737 MAX crashes highlight this. Boeing added in some technology but provided pilots with no information to understand it. They may have been trained on it, but in the stress of the moment, did they have the information they needed? Indications are that they did not.
Even the self-driving Uber accident last year is another example. One hypothesis is that the signal detection model's threshold was set too strictly to prevent false positives. The point is not whether it was too low or too high, but that there is no perfect spot. Someone should be monitoring this, seeing events that fall just above or just below, and responding accordingly.
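A tiny numeric sketch (the readings are made up, not from the Uber case) shows why no single detection threshold can be "right" and why a human monitoring near-threshold events adds value:

```python
# Sketch: each reading is (sensor_score, is_real_hazard). Raising the
# threshold cuts false alarms but starts missing real events; lowering
# it does the reverse. Events just above or below the line are exactly
# where a human monitor earns their keep.

readings = [(0.2, False), (0.4, False), (0.55, True),
            (0.6, False), (0.7, True), (0.9, True)]

def confusion(threshold):
    """Count (missed hazards, false alarms) at a given threshold."""
    misses = sum(1 for s, real in readings if real and s < threshold)
    false_alarms = sum(1 for s, real in readings if not real and s >= threshold)
    return misses, false_alarms

print(confusion(0.5))   # (0, 1): nothing missed, one false alarm
print(confusion(0.65))  # (1, 0): quieter, but a real hazard slips through
```

Whatever threshold is chosen, some events land on the wrong side of it; the system design question is who notices when that happens.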
People who add technology often fail to take these things into account. Again, you can't simply substitute technology for humans in most open systems and expect perfect results. Things may get better, but there will be new constraints to consider.
Chess tournament example: humans using computer programs, grandmasters, and top-of-the-line chess AI. Who won? Two mid-level players using a mid-level chess program - because they designed their process in a way that let human and machine team better.
Consider what makes good teammates. Communication. Asking for help. Stepping in when problems arise. It’s not work silos (give people what they do best and technology what it does best), but collaboration for the sake of achieving goals.
IBM Watson is doing this with its radiology software. It's very good, even better than doctors, at detecting cancers. But as an assistant to the doctor, the overall system improves even further.
Again - it’s not that AI is weak or incapable. We see over and over again that AI or automation is capable of tremendous things. Every week it seems like AI is beating people at something new.
But given that in open problem spaces there are always some unknown gaps, the human strengths of adaptability, problem solving, and goal focus come in handy.
From Christoffersen and Woods (again):
The agents need to maintain a common understanding of the nature of the problem to be solved.
The second part, shared representation of other agents’ activities, involves access to information about what other agents are working on, which solution strategies they are pursuing, why they chose a particular strategy, the status of their efforts (e.g., are they having difficulties? why? how long will they be occupied?), and their intentions about what to do next.
These things are available in good human teams - and something we should be striving for as designers.
Human-centered design is more than considering what our user’s goals and motivations are.
We must also fundamentally understand how people think, comprehend, perceive, and process emotions.
This information, in combination, allows us to be human-centered.