3. Please fill in the circle that best describes how you feel:
"I found this task to be easy to accomplish"
Strongly disagree / Neither agree nor disagree / Strongly agree
14. SATISFICING
Evaluating options based on adequacy rather than maximum benefit.
"A pessimist sees the difficulty in every opportunity; an optimist sees the opportunity in every difficulty."
16. COGNITIVE BIAS
A pattern of judgment that stems from adaptive behavior and/or mental flaws.
25. STUDY
How do we evaluate options?
subscription test 1
Online-only $59
Print-only $125
Print and Online $125
26. STUDY
How do we evaluate options?
subscription test 1
Online-only $59 16%
Print-only $125
Print and Online $125 84%
27. STUDY
How do we evaluate options?
subscription test 2
Online-only $59
Print and Online $125
28. STUDY
How do we evaluate options?
subscription test 2
Online-only $59 68%
Print and Online $125 32%
29. STUDY
How do we evaluate options?
subscription test 1:
Online-only       $59    16%
Print-only        $125
Print and Online  $125   84%

subscription test 2:
Online-only       $59    68%
Print and Online  $125   32%
• About two years ago, I was lurking around on the NewsObserver web site when I came across this. At the time, the N&O was in the midst of a site redesign.
• They were prepping for their new release and held an open call for usability test participants. Naturally, I signed up.
• My goal wasn't to troll their researchers or pick apart their techniques. Well, not totally, at least.
• I was more interested in what it was like to be a test participant. I had been on the other side of the table, so to speak, but I hadn't developed true empathy for what it's like to be tested.
• As the test proceeded, I was asked to rate how much I liked the interface and how easy it was to complete the task.
• This rating was done in front of the product manager who administered the test.
• He stuck to the protocol. Before we started, he made a point of explaining that I wasn't being tested. Nothing I could say would hurt his feelings.
• Despite this assurance, I felt myself rating things a blip or two higher than felt right. I could feel this invisible force coaxing me to give him the benefit of the doubt. I stayed on task longer, worked harder, and was more persistent than I would be in real life.
• Why?
• In this test, I fell victim to two cognitive biases: the Hawthorne effect and participant bias. Despite the assurance that I was not being tested, I felt as though my performance was being reviewed. And because I knew the researcher had vested ownership of the project, I bent my answers ever so slightly toward what I thought he wanted.
• The Hawthorne effect, incidentally, was observed by researchers trying to understand whether lighting had a measurable impact on productivity.
• Repeatedly, as the researchers adjusted variables up or down, productivity improved.
• As they peered deeper into this, they realized that the simple interest shown to employees in the study caused them to perform better.
• The Hawthorne effect suggests that people change their behavior when they know they're being observed. Participant bias, which is closely related, suggests that people cater their responses to what they believe researchers want to hear.
• Today's presentation is what I like to call "brain bugs."
• These are the subtle, usually unconscious forces that shape our decisions.
• I think you'll find the topic familiar, even where the examples might be new. Even if you haven't studied decision making before, you engage in the practice every day.
• Decision making is an incredibly complex topic. It has attracted researchers from economics, philosophy, and psychology, just to name the heavy hitters.
• When we decide, there is a multitude of factors at play. The situation, your motivation, even how you've chosen before all influence how you make decisions.
• I want to focus today on one small aspect of that: cognitive bugs.
• So, a bit of backstory. I came to this topic through research.
• Like all of us in the web services field, we ask questions. Lots of them. We question our clients; we question users. We question our colleagues, if we're doing it right.
• And I think most of us have a healthy amount of skepticism about the responses we get. When a client launches into tactics too early, we parry and say, "Let's back up and talk about the problem."
• I think we all want to understand how decisions are made. Are they based on empiricism or on a hunch? Did the person just finish reading some article in HBR or FastCompany?
• So for today, we'll cover just two topics and hopefully leave room to discuss.
• So, what's a decision? There are a number of definitions out there, but this one works pretty well.
• Describing decision making as a process rather than some emergent event gives us grounds to dissect it.
• Decision theory, then, is a field of study dedicated to understanding how we choose. And because decision making is so fundamental to humanity, it's been studied and postulated on for years.
• Its bedrock was laid by the early Greek philosophers: Plato, Aristotle, and their peers.
• In classical decision theory, people were modeled as rational agents, acting to maximize the utility of their decisions given available information and processing power.
• Many of the theories arising out of this characterize people as striving to be as rational as possible. Even into the 20th century, with Freud and his concepts of the ego and the id, our impulsive, reptilian brain needed to be managed through the filter of reason.
• Rational agent theory, a concept used to model economic events, suggested that people making decisions had clear preferences, could perform the mental calculus to weigh options, and could identify the maximal outcome.
• But as researchers started peering in deeper, they found that this model wasn't quite accurate.
• So while it's lovely to conceive of ourselves as diligent information processors who can accumulate data and spit out the best decisions...
• In practice, people don't always try to maximize their outcomes. In fact, people aren't always sure what a good outcome would look like.
• Stressors like time, available information, our mental faculties, and personal experience limit how effectively we decide. These shape our decisions and, in some cases, push us away from rational positions.
• Instead, more contemporary theories such as bounded rationality and prospect theory, popularized by researchers like Herb Simon, Amos Tversky, and Daniel Kahneman, model decision making in terms of imperfect rationality.
• They take into account our mental flaws and biases, using these to better model decision making not as pure cost-benefit analysis but as a Voltron of rationality, emotion, preconceptions, and mental flaws.
• And through these limitations, we tend to satisfice.
• Now, what does that mean? Consciously or not, we try to make good decisions given what we have. And when we come to a decision, it might not be the best possible scenario, but it's good enough when weighed against the effort.
• Sometimes we're tricking out our Hyundais with cardboard spoilers. We're just doing what we need to do to get by.
• We satisfice in both conscious and unconscious ways. Consciously, we use analogies and decomposition to simplify hard problems.
• Unconsciously, our brain is at work too. These are cognitive biases, or "brain bugs."
• These are documented ways that we make irrational choices.
• By no means are these four comprehensive, but I think they're particularly interesting.
• For each of these, I'll start with an example and then follow up with an explanation.
• Researchers at Emory conducted a study to see how people dealt with information that ran counter to their beliefs.
• The study involved the 2004 Presidential campaign. Participants were selected based on their partisanship and, in one part of the study, each was given three bits of information.
• The first was a quote from a candidate expressing a position on an issue.
• The second was a statement that identified a clear contradiction in the candidate's actions.
• Finally, they were given a statement from the candidate that explained away the contradiction. Participants were asked to rate the degree to which the candidate's actions contradicted the account.
• More often than not, the researchers found that people excused their own candidate's behavior while holding the opposing candidate's feet to the fire.
• Perhaps one of the most nefarious biases is confirmation bias. Before we make decisions, we often hold hypotheses and beliefs. These may be a priori or they may be empirical. But we tend to value information that supports our beliefs more highly.
• Furthermore, we distort information to match our beliefs. This is especially true with information that is dissonant, that is, runs counter to our opinion. As the Presidential study showed, we can selectively interpret information to support what we believe.
• Turning this back to our work: imagine we're reviewing GA on one of our sites post-launch. We notice a large number of bounces, which we surmise is related to poor content the client provided. We may unknowingly cherry-pick metrics that support this. In doing so, we might turn a blind eye to, or explain away, the possibility that part of our work is to blame.
• In 2007, two Princeton researchers had an interesting question: does our opinion of information change based on how easy it is to manage in our heads?
• In their study, participants were asked to predict how well a company would do, given only the pronounceability of its name. Pronounceability was used as a proxy for fluency: things that are easier to say require less cognitive effort.
• They found that people tended to be more bullish on -- that is, think more highly of -- companies whose ticker symbols were pronounceable.
• Now, this effect only lasted for about a day. As more information became available -- company history and the like -- the effect was minimized.
• But it did suggest that people tend to believe, trust, and like things that are easier to process. These items seem more familiar and provide positive emotional responses.
• Our decisions are shaped by the information we can imagine. And imagination is heavily influenced by what we know exists and when we last heard about it. Having tacit knowledge about something, especially if it occurred recently, encourages us to exaggerate its relevance.
• Consider the recent trend toward gamification. This is absolutely not a new concept, but to read the blogs, it's as if people discovered gold. It's fundamentally the same system of incentives, recognition, and rewards we've known for years. But no one talks about that.
• The latest buzz has created a cargo-cult effect. Trophies, point systems, and leaderboards are easy to envision, and examples abound in the wild. But does that make them relevant? Do we overestimate how impactful they will be?
• Dan Ariely, a behavioral economist, noticed something weird when he visited The Economist's web site. The subscription box offered the three options shown here.
• At first he thought it was a pricing mistake, but then considered that it was a sly tactic by marketers to pump up sales. So he ran a test with 100 MBA students. In the first test, he gave them the table verbatim from the site and asked them to pick the subscription they would purchase.
• 84% chose the Print and Online version and 16% chose Online-only.
• In the next test, he removed the print-only option and asked a different set of participants to choose.
• The results were interesting: 68% chose Online-only and only 32% chose Print and Online.
• The mere inclusion of the print-only option made the Print and Online version seem like a bargain in comparison.
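As a quick back-of-the-envelope exercise (my arithmetic, not part of the study as quoted here), we can compute the revenue each menu implies per 100 respondents from the percentages above; the zero share for the middle $125 option follows from 16% + 84% = 100%.

```python
# Revenue per 100 respondents implied by the two subscription menus.
# Prices and shares are taken from the study results quoted above.
menus = {
    "with decoy":    [(59, 16), (125, 0), (125, 84)],  # shares for the three-option menu
    "without decoy": [(59, 68), (125, 32)],            # shares for the two-option menu
}

for name, options in menus.items():
    # Each option is (price in dollars, number of respondents out of 100).
    revenue = sum(price * share for price, share in options)
    print(f"{name}: ${revenue} per 100 respondents")
```

Per 100 respondents, the decoy menu implies $11,444 versus $8,012 without it: roughly 43% more revenue from the very same prices.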
• In most cases, we don't have a good picture of what something's intrinsic value is -- even if there is such a thing as intrinsic value, which is debatable. We evaluate options in relation to others: is this a better deal than that?
• This manner of thinking is highly suggestible. Those who've studied negotiation know that anchoring plays a part in determining the final price. Whether that anchor is client-supplied (i.e., "we have a 100k budget") or emerges organically in discussion, we tend to fixate on that value and measure subsequent choices against it. We look at the relative contrast between options and evaluate what we gain or lose.
• Internally, we're susceptible to this in budgeting. Figuring out what a project will ultimately cost requires dealing with large amounts of uncertainty. Because it's prohibitive to conduct deep analysis to come up with a well-vetted cost -- and also because time and transparency aren't on our side -- we tend to start with a reference point we know. Oftentimes this is an efficient and informed way to manage uncertainty. We look for projects that seem similar and consider how they netted out. Were they over budget or under budget? What issues could arise? We massage line items -- add a few hours here, strike a deliverable there. We pad it when it "feels off."
• Trust no one. Actually, it doesn't need to be that fatalistic, but it helps to have a healthy bit of skepticism about how people arrive at decisions.
• Knowing is half the battle. Knowing how people shortcut decision making can be incredibly useful in research. For example, instead of asking "what features should this site have?", ask people to tell a story about how they use the system. Coaxing people to walk you through a scenario of use can help you uncover the latent issues they're unaware of. This can help counter recency and availability. It can also help you avoid research errors like priming and anchoring.
• Search your feelings. When you analyze the reasons for choices you've made, you might find you're basing your decisions on cognitive errors. For example, if you're conducting research, consider whether you're selectively gathering information that supports your hypothesis and ignoring other possibilities.