As we continue to stitch our physical world together with digital information, context is becoming harder to manage and understand. Everything we do or buy is potentially connected to everything else, complicating the meaning of our everyday actions. How do we ensure that the networked "things" we put into the world make sense as part of a human environment? The answers have less to do with the devices we make than with the way people perceive and comprehend their surroundings.
Using everyday examples and practical models, this talk shows how to figure out the contextual angles underlying the experiences of your product's or service's users and customers.
1. FOR THE INTERNET OF THINGS
Andrew Hinton / @inkblurt
WebVisions 2015 Chicago
UNDERSTANDING CONTEXT
2. 2
@inkblurt
@contextbook
by
(me)
Hi, I’m Andrew, and I wrote a book about context. And while I was researching and writing this, I did a lot of thinking about what contextual experience
means in a world filled with things that think for themselves and chatter among one another on the internet.
3. THE INTERNET OF
THINGS
This is a very popular phrase of late … but when we say Things, what do we mean by that?
4. 4
Rogue’s Gallery of IoT Gadgets!
Typically when people talk about the internet of things, they roll out a rogue’s gallery of IoT devices and gadgets….
But part of what I want to get across today is that we need to back up a bit and understand that we can’t really deal with the internet of things until we better understand things
that aren’t necessarily smart or networked.
5. 5
Context is largely about what actions mean in our environment. And that’s getting more complicated. Trying to use a bathroom has become a contextual conundrum. What works and how? We grew up using bathrooms without these
sensors, but now we have to re-learn every bathroom we enter.
We need a label with pictures and words on a faucet to understand how to use it. The soap has to be supplemented with a store-bought backup, because the dispenser isn't working or has run out.
In fact, auto soap dispensers make no sense if you think through the full context of how people behave in bathrooms — we don’t touch the soap again after we’ve washed, so why automate it? Instead, these auto-dispensers often
mistakenly squirt all the soap out as we wash under the faucet. This is a failure of understanding how a system of things and the system of human behavior meet to create context.
6. THE INTERNET OF THINGS
IS LESS ABOUT THINGS
& MORE ABOUT
ENVIRONMENTS
If I learned anything about the IoT as I was learning about how people understand and create context, it’s this … the internet of things is really about environments, not just the
objects in the environment. The objects make no sense without the context of the whole.
7. 7
Affordance
James J Gibson
The potential the environment
offers for bodily action.
The guy who invented the concept of affordance — which we hear a lot about in design work — meant something a bit different from what most folks mean by it, and it was part of a
stunningly brilliant framework for understanding perception.
8. 8
Invariants
That which does not change
in the midst of change.
James J Gibson
A central idea in Gibson’s framework is what he calls Invariants.
Invariants are the parts of our environment that don’t change, in the midst of change. Things that persist — they don’t vary. The ground under our feet is an invariant — generally it’s always solid
and supports our weight, for example. We evolved the bodies and brains we have in part because of the invariant nature of the ground under us, the air around us, the way stone is solid and the
way water is fluid.
Gibson mainly discusses invariants that exist for nearly all creatures, not just humans. But he also touches on human-made environmental stuff.
Invariants are at the core of how we understand context. Because context is about the relationships between the elements of our environment — and we need stable elements to bring coherence
to all the rest that isn’t stable.
9. ENVIRONMENT
wikimedia
Invariants
Something I now base my work on entirely is the idea that we have to understand how humans comprehend a context like this, because it’s the foundation for how we
understand everything else we’ve put into the world since the earliest civilizations could build stone walls and wooden fences.
10. ENVIRONMENT
wikimedia
Built
Invariants
We build things depending on the invariant qualities of the materials, and we depend on the invariant quality of the resulting buildings to be stable enough that we can have
civilization — stuff doesn’t change that fast or easily, so we rely on it for our context, for living, working, running businesses, and everything else.
But also notice all the language in this scene… turns out language is a sort of built invariant too, because we put it in the environment as structure. But the way language is
invariant is different from physical stuff.
http://commons.wikimedia.org/wiki/File:Taipei_City_Nanyang_Street_20130127.jpg
11. Invariants
(Convention)
Walking up to the elevator at Midway Airport on my way to WebVisions, heading down to the train, I saw these signs that are evidently required to help people understand
where the elevator button is. We've been using elevators and buttons for over a century, yet these things still require mediation and explanation, just so people understand
what their environment does. These are invariants based on social convention, not physics.
12. 12
“elevator”
“button”
over a century of
cultural invariant
convention
context
There’s at least a century of cultural convention that we can take for granted when we put a sign saying ‘elevator button’ on something — and give people a button to push.
And the signs and the button aren’t usually going to change or disappear. I bring this up to emphasize how something as simple as a button on an elevator depends on tons of
context in order to make sense for a user.
13. 13
Words
Pictures
Symbols …
Language as
environment.
We’ve gotten used to having lots of invariant semantic information in our environment — we depend on it to keep everything running right. Language acts
as a sort of environmental infrastructure in this way.
14. 14
VARIANT!
But what happens when these things change? This is a speed limit sign in Atlanta, where they've installed digital signs that can change the speed limit at any time during the
day. Changing the speed limit on an interstate highway used to take physical effort, even legislation; now it's highly variant. This is a mundane 'thing' on the internet,
so to speak — it's networked to a central office and changed either manually or by algorithm. In fact, drivers have no idea what is behind the changes, other than assuming the limit
is altered based on traffic needs.
15. wikimedia
Not just variant,
but invisible.
Our landscapes are being saturated in these changing, variant parts of our infrastructure. And not only are they changing in visible ways, they’re mostly changing in invisible
ways.
16. [Chart: "Approximate increase in environmental complexity over time."* Y-axis: complexity added to the human environment. X-axis: time, running from olden times through the Industrial Revolution, the fin de siècle, the "Information Age," and the 21st century. Annotations along the rising curve run from "What's complexity?", "We're so modern!", and "No big whoop." through "Learn a new app? Uh. Ok." to "I have no idea what my phone is doing.", "I have no idea what my house is doing.", and "OMG PLEASE MAKE IT STOP!!!!!!"
* according to Andrew Hinton's feelings on the subject.]
According to my empirical measurement of my own personal feelings, complexity is hitting an extreme upward curve in our world. Humans are creating so many new parts of
our environment that don't behave the way everything always has that we are entering unmapped territory.
The sort of territory that ancient mapmakers marked with pictures of sea monsters.
17. 17
wikimedia
Interface
(Machine to Machine)
When the term "interface" came into usage in technology, it was about interfacing machines to machines. The humans who had to use the machines needed a way to "interface"
as well — and much to engineers' annoyance, we don't have a 25-pin standardized plug. We're messier and more nuanced creatures.
18. 18
Interface
(Machine to Human)
So we needed ways to let people tell things to computers. Some folks like to say that we’re getting away from having to have input mechanisms like this, and that we need to
make systems with “no UI” — but there’s no such thing as “no UI” — there will always need to be an interface.
19. THERE’S NO SUCH
THING AS “NO UI”
LANGUAGE IS THE
INTERFACE W/ IOT
We’re wanting our internet of things to understand the way we behave and talk “naturally” but that’s still language. And teaching them how to do this is very very difficult.
20. 20
Understanding Context, Andrew Hinton, O’Reilly Media 2014
But we’re still not always very good about translating between the digital things in our lives and regular people. Even for simple everyday devices like a gas pump.
21. 21
Remember back in the day, when the Palm Pilot was a thing, and everyone who used one had to learn a new writing standard called Graffiti?
It was a well-designed alphabet for meeting the machine halfway. The thing is, written language is already code — it's encoded speech — so writing itself is very close to what
computers more easily comprehend, at least as input. But handwriting is too nuanced and "variant" for computers — even now — so standardizing how you make letters means
you're making special marks that the computer can understand.
22. 22
https://www.google.com/patents/US8558759
In the ensuing years, we’ve been trying to get even more sophisticated with the way we make signifiers to digital systems embedded in our environment. Like this “heart” gesture
that Google patented.
23. THE “THINGS” ARE
LISTENING
& EAGER TO ACT
We’re wanting our internet of things to understand the way we behave and talk “naturally” but that’s still language. And teaching them how to do this is very very difficult.
24. 24
Captured from Twitter on September 22, 2015
The thing is, though, that we do things in our environment all the time that we assume will have only one meaning, because that's the context we've always had. But digital
agents in our midst are now listening, reacting, and making decisions based on those cues. You can search Twitter at any point for "amazon echo commercial" and see people
complaining about their Echo reacting to Amazon commercials on TV.
25. 25
But algorithms are getting more and more complex, to the point where they sort of have minds of their own. We want them to be "smart," but they're not smart in the way that people are — and they may never be.
A friend of WebVisions, Dan Saffer, published a great piece in Wired about how we should tame our algorithms like dogs, and that makes a lot of sense. It's actually bad for us and for dogs when we treat dogs like people, and we ought not
assume that 'smart things' are like people either, even though they're being designed and marketed to feel human.
26. 26
No metaphor is more
misleading than “smart”.
- Mark Weiser, pioneer of ubiquitous computing
The late great Mark Weiser’s quote about “smart” things is more relevant now than ever.
27. 27
James J Gibson
In fact, Saffer's article came out just as I was finishing a part of my book about 'smart things' and context. In it, I explain how JJ Gibson's ecological perception theory treats creatures with autonomy in our environment as
objects with agency: animals, basically. They're different from inert objects, because they act and move on their own, according to something other than predictable, invariant natural patterns.
I think we need to do contextual research not only for users, to understand how they perceive and act in the world, but for the IoT's smart things with agency as well, so we can map and model the way they perceive and act.
Because it's our job to translate between them.
28. 28
We may never be able to rely on so-called intelligent systems to always get things right, because they make contextual mistakes all the time. Human
context is extremely nuanced and complex. For example, if Amazon should get anything right, it's the authors of books, but for the longest time it thought I had
a co-author.
29. 29
Just another amusing example of how we make systems that we want to be smart and automated, but that don't grok the nuances of real people — Facebook wants me to name this Halloween decoration. Maybe it should be A. Horsley
Hinton?
30. 30
“BMR”
height:weight:age:gender
“Calories!”
Arm
Movements
“Steps!”
[algorithm]
A tracker like the Fitbit perceives us as a series of movements in space, and nothing more. It doesn't actually understand the context of those movements. Like the other things we put into the
environment, this object gives our bodily movements an additional, new meaning, one that's somewhat invisible to us, that the movements didn't have before. Just like when we walk down our hall
and a smart thermostat reads it as activity that it considers when deciding to adjust the temperature.
When we make smart, networked things, we have to create a language interface. But it's challenging, because what the algorithms and sensors and little digital brains actually understand and
what they tell us might be really disconnected. The Fitbit and other trackers like it aren't actually counting steps and calories; they're making assumptions that are mostly "good enough" to
approximate those values without actually knowing them. These devices need more transparency and clarity about the complexity behind the information, rather than pretending that it's "simple."
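To make the point concrete, here's a minimal sketch of the kind of "good enough" approximation a tracker makes. The peak threshold, the per-step calorie figure, and the function names are my illustrative assumptions, not any vendor's actual algorithm; only the Mifflin-St Jeor BMR equation is a real published formula.

```python
def count_steps(accel_magnitudes, threshold=1.2):
    """Count peaks above a threshold in accelerometer magnitude (in g).

    A real tracker filters and debounces more carefully, but the core
    assumption is the same: any arm swing that looks like this pattern
    gets counted as a "step," whether or not it actually was one.
    """
    steps = 0
    above = False
    for a in accel_magnitudes:
        if a > threshold and not above:
            steps += 1          # rising edge: call it a step
            above = True
        elif a <= threshold:
            above = False       # fell below threshold; arm for next peak
    return steps


def bmr_mifflin_st_jeor(weight_kg, height_cm, age_years, is_male):
    """Basal metabolic rate via the Mifflin-St Jeor equation (kcal/day)."""
    bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age_years
    return bmr + 5 if is_male else bmr - 161


def estimate_calories(steps, bmr_per_day, hours, kcal_per_step=0.04):
    """BMR for the elapsed time plus a flat per-step estimate.

    kcal_per_step is a rough rule-of-thumb value, not a measurement.
    """
    return bmr_per_day * (hours / 24) + steps * kcal_per_step
```

Notice that nothing in this pipeline ever observes an actual step or an actual burned calorie; every output is an inference stacked on an assumption, which is exactly the gap between what the device "understands" and what it tells us.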
31. 31
Understanding Context, Andrew Hinton, O’Reilly Media 2014
Technology work tends to focus on the task — the action itself — without considering the context of need that brought it about, or the situation that spawned the need. But
contextual complexity now demands that we consider all these dimensions.
32. 32
Understanding Context, Andrew Hinton, O’Reilly Media 2014
People need to understand digital “things” …
So, we need to understand digital things… and design needs to create careful layers of translation between the digital system’s artificial, binary conceptualization and the
organic, analog, messy human perception and understanding.
33. 33
Understanding Context, Andrew Hinton, O’Reilly Media 2014
… but things also need to understand people.
But we need to work the other way around too — every thing in the IoT has its own situational context, a need that it’s programmed to meet, and actions it takes.
34. PLACES ARE MADE OF THINGS.
AN INTERNET OF THINGS
IS AN INTERNET OF
PLACES.
A final point to emphasize the environmental context of all this. Things are not just things, they’re part of places. When you add all these things together, what sort of places
result? Most of them are unaware of one another, especially if they come from different manufacturers.
35. 35
We’ve been imagining for over a century, since the dawn of sophisticated industrial technology, what sort of amazing, automated, technologically intelligent environments we
might be able to make for ourselves. We’ve fantasized about these as utopias as well as dystopias, and often mixtures of both. But we’re long past the point of fiction on this.
(http://thecharnelhouse.org/2)
36. 36
Creators of things like the Nest products want to grow their businesses and they want to make things that make people’s lives easier and better. This isn’t just about objects —
things — but entire places and ecosystems.
37. 37
Mashable
XBox reading user’s skeleton & heartbeat
It still blows me away to think that a consumer device in our homes can sense our skeletal structure, and our heartbeats. Again, this is an extreme form of digital-agent
perception that doesn’t mean the same things to these objects that it means to us.
38. 38
But how well
do you
understand
your iPhone??
With HomeKit and the new Apple TV, Apple is making a big play in this realm as well. But how well do you understand even your iPhone? If I asked you to write down, right now,
all the streams of information your phone is saving to and pulling from cloud services, could you do it? Nobody ever says yes, by the way.
Ten years ago, if I had asked whether you'd want a product on your person most of the day that tracked and externally saved all this information about you, you'd have said no way. I
don't bring this up to be alarmist, though, only to point out that we tend to keep going forward with all this without taking the time to map and understand it, and to
help users have a clear understanding of all the complexity that's now invisible but meaningful to them. Context has radically changed in our phones — and it's already changing for
our homes and workplaces and cities. How do we help make it more clear?
39. ENVIRONMENT
PRODUCT
PRODUCT PRODUCT PRODUCT
PRODUCT
PRODUCT
PRODUCT
PRODUCT
PRODUCT
PRODUCT
PRODUCT
PRODUCT
PRODUCT
PRODUCT
PRODUCT
We tend to be focused on products, and we're creating them incredibly fast.
In fact, the way a lot of people now define design is essentially pushing product out into the world to see what happens. It's great that we're more product-focused
than project-focused, but that's not enough.
>> But all of these are part of an environment. And they're all connected. Every product needs to be created with clear awareness of how it will exist as part of a place.
40. 40
Name & model the invisible contexts we make.
The place to start is to at least name and model the things in between — the invisible things — and their rules and what they see and sense. Then we
start to see that we're asking people to understand many different overlapping contexts at once.
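One lightweight way to begin that naming and modeling is a plain inventory of the agents in a place: what each one senses, what its algorithms infer from that, and what it acts on. The sketch below is illustrative only; the device names, fields, and schema are my assumptions, not a standard.

```python
from dataclasses import dataclass


@dataclass
class Agent:
    """One autonomous 'thing' in an environment and its contextual footprint."""
    name: str
    senses: list    # raw signals the device perceives
    infers: list    # meanings its algorithms assign to those signals
    acts_on: list   # changes it can make to the environment


# A toy model of a home as a set of perceiving agents.
home = [
    Agent("thermostat",
          senses=["motion in hallway", "temperature"],
          infers=["occupancy", "schedule"],
          acts_on=["heating", "cooling"]),
    Agent("wrist tracker",
          senses=["arm acceleration", "heart rate"],
          infers=["steps", "calories", "sleep"],
          acts_on=["notifications", "cloud sync"]),
]


def invisible_inferences(agents):
    """List every meaning a device assigns that a person can't directly see."""
    return [(a.name, inference) for a in agents for inference in a.infers]
```

Even a model this simple makes the overlaps visible: walking down the hall is simultaneously "occupancy" to one agent and "steps" to another, which is exactly the kind of invisible context worth mapping.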
41. 1
MODEL & CREATE
INVARIANTS FOR
LEARNABLE CONTEXTS
CONTEXT / IoT
So to sum up, here are 3 points to walk away with.
42. 2
LANGUAGE IS THE
HUMAN INTERFACE WITH
DIGITAL CREATURES
CONTEXT / IoT
Context and strategy have a lot to do with each other because of the challenge of environments becoming more complex.