IMAGE INTELLIGENCE: MAKING VISUAL CONTENT PREDICTIVE
Including 30 use cases for image intelligence
in the enterprise
By Susan Etlinger, Analyst
Altimeter, a Prophet Company
July 18, 2016
www.altimetergroup.com | @setlinger | susan@altimetergroup.com
EXECUTIVE SUMMARY
People no longer communicate online simply via written content, such as posts and
comments; they upload and share billions of photos every day. This can be both exciting
and terrifying from a brand perspective, because approximately 80% of images that include
one or more logos do not directly refer to the brand with associated text. As a result,
organizations are missing the content and meaning of images and are unable to act on the
opportunities or risks they present.
Companies ranging from technology startups to industry Goliaths, such as Facebook and
Google, are developing technologies that use artificial intelligence to analyze the content
of images. Increasingly, they’re applying analytics to images to better understand their
impact on the business. But the opportunity for organizations to make sense of images isn’t
just about recognition and analysis; it’s about image intelligence: the ability to detect and
analyze images, incorporate them with other data sources, and develop predictive models
to forecast and act on emerging trends.
This report lays out the current market opportunities, challenges and use cases for image
intelligence and offers recommendations for organizations that wish to unlock the predictive
potential of visual content.
TABLE OF CONTENTS
Executive Summary
The Rise of Visual Media
How Do Computers See?
From Computer Vision to Image Intelligence
The Business Value of Image Intelligence
Privacy, Trust and Customer Experience
Challenges of Image Intelligence
A Look at the Future
Recommendations
Endnotes
Methodology
Brands, Researchers, Agencies and Industry Experts (10)
Technology Vendors (17)
Social & Digital Media Technology Platforms (3)
Acknowledgments
About Us
THE RISE OF VISUAL MEDIA
“I see more and more people sharing
images and getting away from text;
look at the explosion of memes and
emoji. It’s becoming a more and
more complex environment, how
people are communicating over
social media.”
— Glen Szczypka, Deputy Director,
Health Media Collaboratory, National Opinion Research
Center at the University of Chicago
The ubiquity of smartphone cameras, combined with increasing use of social networks, has led
to an explosion in picture taking and photo sharing. According to Mary Meeker’s 2016 Internet
Trends report, people share and upload over 3 billion images every day on Facebook properties
(Facebook, Messenger, WhatsApp and Instagram) and Snapchat alone (see Figure 1).
In addition to sparking trends and conversations, photo sharing is driving technology innovation.
Markets and Markets, a research firm, expects the image-recognition market to reach nearly $30
billion by 2020, driven in large part by sharing via social media.1
Image recognition — what Gartner defines as “technologies [that] strive to identify objects,
people, buildings, places, logos, and anything else that has value to consumers and enterprises”
— is just the first step in deriving insight from and acting on images, however. The next step is to
analyze them to better understand their context and impact.
The photo on the following page provides a good example (see Figure 2).
FIGURE 1
IMAGE GROWTH REMAINS STRONG, SAYS MARY MEEKER’S INTERNET TRENDS REPORT
Source: Snapchat, Company disclosed information, KPCB estimates
Note: Snapchat data includes images and video. Snapchat stories are a compilation of images and video. WhatsApp data estimated based on average of photos shared disclosed in Q1:15 and Q1:16. Instagram data per Instagram press release. Messenger data per Facebook (~9.5B photos per month). Facebook shares ~2B photos per day across Facebook, Instagram, Messenger, and WhatsApp (2015).
FIGURE 2 MEASURING THE VALUE OF IMAGES
The value of this photo to a brand such as Sony Ericsson or Olympus lies in its effectiveness at reaching as broad an audience as possible. When this photo is shared in social or digital channels, however, it is unlikely to include any explicit brand mention such as a hashtag or caption. For brands that sponsor sporting events, therefore, the ability of computers to detect these types of brand mentions can be an extremely valuable tool for measuring sharing behavior, reach and, ultimately, sponsorship ROI.
A human can easily interpret this photo as a woman playing tennis at the U.S. Open. If she is a
tennis fan, she may even recognize Ana Ivanovic. But a computer simply “sees” a collection of
pixels that it must then classify into objects (a woman, a tennis racket, some logos, and so on). It
then must interpret those objects to infer meaning: a woman playing in an event in the US Open
Series, sponsored by Sony Ericsson and Olympus.
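To make the sponsorship measurement concrete, here is a back-of-the-envelope estimate of the impressions a sponsor earns from untagged photos that only image recognition can attribute. All figures are invented for illustration:

```python
# Illustrative only (hypothetical numbers): estimating earned sponsorship
# impressions from logo detections that carry no text mention of the brand.
def earned_impressions(detected_logo_photos, avg_shares_per_photo,
                       avg_reach_per_share):
    """Rough earned-media estimate for logos found only via image recognition."""
    total_shares = detected_logo_photos * avg_shares_per_photo
    return total_shares * avg_reach_per_share

# 1,200 untagged photos with a detected sponsor logo, each shared ~8 times,
# each share reaching ~150 people on average.
estimate = earned_impressions(1200, 8, 150)
print(estimate)  # 1440000 potential impressions invisible to text search
```

None of these impressions would show up in a hashtag- or caption-based count, which is the gap image intelligence closes.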
Computer vision enables images to become a source of actionable insight for brands.
Images have the power to capture emotion and call people to action in a way that words often
cannot. They can ignite product sales, as when a young mother named Candace Payne recorded
a video of herself laughing hysterically in a Chewbacca mask in her car in a Kohl’s parking lot.2
The video immediately went viral and became, with over 137 million views, the most-watched
Facebook Live video ever, causing the mask to sell out across multiple channels. In addition,
although photos or videos may carry diverse cultural connotations, they are also far more
universal than language — a significant asset for global brands.
“The growth of media inside the Twitter stream has been enormous,” says Chris Moody, Vice
President Data Strategy at Twitter. “Just as people originally said they wanted to listen to social
content, they are increasingly asking to see images — pictures of their products and pictures
of their logos. This is repeating with images what we saw with text analytics: PR and crisis
management, photos of products and competitors’ products. They want to understand what
those images are.”
The opportunity for organizations to make sense of images isn’t just about recognition and
analysis, however; it’s about image intelligence — the ability to detect and analyze images,
develop predictive models based upon them, and use these models in context with other data
sources to forecast and act on emerging trends, develop business cases, detect and mitigate
crises, and a host of other uses.
Before we discuss the use cases for image intelligence, it’s important to understand a bit about
how it works.
HOW DO COMPUTERS SEE?
The first thing to understand about image recognition technology is that it is something of a paradox.3 For many people, there is nothing more natural than to look at an object and perceive that it is a daisy, or a group of people at the NBA finals, or a woman riding a bicycle. From an engineering perspective, however, the process of perception is highly complex.
DEEP LEARNING TEACHES COMPUTERS ABOUT IMAGES
Enabling a computer to recognize and classify an image (that is, to “see”) requires a technique known as deep learning. Deep learning consists of running data through an “artificial neural network”: basically, a software program that roughly simulates the behavior of neurons in the brain. Deep learning translates things people can easily perceive into something computers can recognize and interpret.
The goal of deep learning is to train the software to classify future data; for example, to distinguish a cat from a dog, a beagle from an Irish Setter, and so on.4 A recent article in MIT Technology Review puts it this way: “Deep learning is why you can search images stored in Google Photos using keywords and why Facebook recognizes your friends in photos before you’ve tagged them.”5
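To make the idea concrete, here is a minimal sketch of the training loop behind this kind of learning. A single artificial “neuron” (real systems stack many layers of them, hence “deep”) learns from labeled examples to separate “light” from “dark” four-pixel images; the data, labels and learning rate are invented for illustration:

```python
import math, random

# Toy sketch of the training loop behind deep learning: one artificial
# "neuron" learns to classify 4-pixel images as "light" (1) or "dark" (0).
# Real deep networks stack many layers of units like this one.
random.seed(0)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Labeled training examples: pixel intensities in [0, 1] plus a class label.
training_data = [([0.9, 0.8, 0.95, 0.7], 1), ([0.1, 0.2, 0.05, 0.15], 0),
                 ([0.85, 0.9, 0.8, 0.75], 1), ([0.2, 0.1, 0.1, 0.25], 0)]

weights = [0.0] * 4
bias = 0.0
lr = 0.5  # learning rate

for _ in range(500):                      # repeated passes over the examples
    for pixels, label in training_data:
        pred = sigmoid(sum(w * p for w, p in zip(weights, pixels)) + bias)
        error = label - pred              # how wrong the current weights are
        for i, p in enumerate(pixels):    # nudge each weight toward the label
            weights[i] += lr * error * p
        bias += lr * error

# The trained unit now classifies an unseen image, with a confidence score.
confidence = sigmoid(sum(w * p for w, p in zip(weights, [0.8, 0.9, 0.85, 0.9])) + bias)
print(round(confidence, 3))  # close to 1.0: classified as "light"
```

The point is not the arithmetic but the shape of the process: the system is not programmed with a definition of “light”; it infers one from labeled examples, which is exactly what happens (at far larger scale) when a network learns to recognize a beagle.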
Deep learning and neural networks aren’t new; variations of this technique have been used since the 1950s.6 But the rise in photo sharing and the desire to identify, interpret and act on visual content have provided a new application for neural networks: understanding not only the objects contained in an image, but their context and meaning. Figure 3 shows two sets of images: backpacks and beagles. Below each image is a number that denotes the estimated accuracy (the “confidence level”) of the classification. The higher the number, the higher the probability that the classification is accurate. You can begin to understand some of the complexity involved in deep learning. We know, without having to be told, that color is not a meaningful attribute for a backpack; it can be red or blue or plaid or camouflage, and it’s still a backpack.
FIGURE 3 HOW NEURAL NETWORKS CLASSIFY OBJECTS
Source: Ditto
But color is an important attribute for classifying beagles. Yet even the human eye isn’t infallible; a meme that circulated on Reddit in early 2016 — “puppy or bagel?” — showed just how easy it is to fool the eye (and potentially a computer) under the right circumstances (see Figure 4).7
While this is a humorous example, incorrect classification — what data scientists call “false positives” and “false negatives” — can have damaging consequences. Last year, Flickr’s auto-tagging feature mistakenly identified a photograph of an iron gate in front of a field as a “jungle gym.” In fact, it was a concentration camp.8 So, even with deep learning, some images are harder to interpret than others.
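A short sketch of how these two kinds of error are counted when a classifier is evaluated; the predictions and ground-truth labels below are invented for illustration:

```python
# Sketch: scoring a classifier's mistakes. A "false positive" applies a label
# an image doesn't deserve; a "false negative" misses a label it does deserve.
def confusion_counts(predictions, ground_truth):
    fp = sum(1 for p, t in zip(predictions, ground_truth) if p and not t)
    fn = sum(1 for p, t in zip(predictions, ground_truth) if t and not p)
    tp = sum(1 for p, t in zip(predictions, ground_truth) if p and t)
    tn = sum(1 for p, t in zip(predictions, ground_truth) if not p and not t)
    return {"true_pos": tp, "false_pos": fp, "false_neg": fn, "true_neg": tn}

# Did each image contain the object in question? Model output vs. human review.
model_says = [True,  True, False, False, True]
really_is  = [True, False, False,  True, True]
print(confusion_counts(model_says, really_is))
# {'true_pos': 2, 'false_pos': 1, 'false_neg': 1, 'true_neg': 1}
```

Which error matters more depends on the use case: a false positive in the Flickr example caused reputational damage, while in moderation scenarios (discussed later) a false negative is usually the costlier mistake.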
FIGURE 4 PUPPY OR BAGEL? FOOLING COMPUTER AND HUMAN VISION
Source: Imgur
FIGURE 5 HOW NEURAL NETWORKS CLASSIFY IMAGES
Example annotation: Objects = binoculars; Scene = outdoors; Attributes = green, yellow, navy; Emotion = happy
OBJECTS
· Logos
· Faces and other parts of the body
· Trees, Lakes, Oceans
· People
· Transportation
· Animals and Breeds
· Bridges, Monuments
· Sports fields and facilities
· Hotels, hospitals, office buildings
SCENE
· Sporting Events (Basketball Game, Football Game, Golf Tournament, Tennis Match, Marathon)
· Stores
· Parks
· Concerts
· Weddings
· Parties
· Protests
· Beach
· Mall
ATTRIBUTES
· Quantity, Color, Size, Shape, Gender
· Image Prominence
· Weather (Sunny, Rainy, Snowing)
· Season or Time of Day (Afternoon, Sunset, Winter, Summer)
· Style (Luxury, Cozy, Fashionable, Bridal)
· Sexual, Violent
EMOTION
· Happiness
· Sadness
· Excitement
· Anger
On a daily basis, most of the photos we see don’t simply include a single object in a photograph:
That would be too easy. We usually see objects grouped together in scenes: the photo of Ana
Ivanovic at the U.S. Open, a group of people playing beach volleyball, and so on. Figure 5
provides a framework for understanding the kinds of things computers can identify using neural
networks: objects, scenes, attributes and even emotion. This is just a sampling for illustrative
purposes; there are many more examples, some of which are easier to recognize than others.
THE IMPORTANCE OF TRAINING DATA
At its most fundamental, a set of images is a collection of data. To interpret the data, you need
to run it through a neural network. But, like people, neural networks can’t learn in a vacuum; they
have to be trained with examples — the more, the better — to help them distinguish things from
each other. This is called machine learning, and it is how, in Figure 3, the Ditto network was able
to identify a set of 10 beagles with at least 98% probability. But consider this: If the network had
never “seen” other dog breeds, how would it know that not every dog is a beagle? There is an
old saying: “If all you have is a hammer, everything is a nail.” With deep learning, if all you have is
a hammer, everything is a hammer.
To teach a neural network the difference between hammers and other types of tools, you need to
show it lots of data, known as training data. This means showing it at least hundreds and ideally
thousands of different images of tools and objects other than hammers: wrenches, screwdrivers
and so on. We know this intuitively when we see small children start to use language. At first,
every animal is a “doggie”; over time, children learn to distinguish different types of animals from
each other.
Human beings do this unconsciously, but with computers, we need to use different training data
for different purposes. Training data with thousands of pictures of dogs won’t help a network
learn to recognize shoes, much less distinguish a sneaker from a stiletto. As a result, there is no
dataset that is optimized for every possible use. Michael Jones, Director of Product Management
and Data Science at Salesforce, says, “There is no generic data set for training data. Don’t
assume it’s omnipotent.”
Why is this so important?
Because, when evaluating image-recognition technology, how well the system learns from data
— not what it “knows” at the outset — is where the real value lies. In humans, we think of this
as an aspect of intelligence; in computers, we call it artificial intelligence (AI). AI-based systems
require an agile and collaborative mindset and the willingness to evaluate the technology to see
how well it suits a particular use case before making a decision.
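A toy illustration of the hammer problem, using a nearest-centroid classifier and made-up feature vectors (say, handle length and head width): with only one class in its training data, the model has no choice but to answer “hammer.”

```python
# Illustration of "if all you have is a hammer, everything is a hammer":
# a nearest-centroid classifier can only answer with classes it was trained on.
def centroid(vectors):
    return [sum(c) / len(vectors) for c in zip(*vectors)]

def classify(x, centroids):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Invented feature vectors: [handle_length_cm, head_width_cm]
hammers = [[30, 12], [28, 11], [32, 13]]
wrenches = [[25, 4], [27, 5], [24, 4]]

# Trained on hammers only: every tool, even a wrench, is called a hammer.
one_class = {"hammer": centroid(hammers)}
print(classify([26, 4], one_class))        # hammer (wrong, but inevitable)

# With a second class in the training data, the wrench is recognized.
two_class = {"hammer": centroid(hammers), "wrench": centroid(wrenches)}
print(classify([26, 4], two_class))        # wrench
```

Real image classifiers are far more sophisticated, but the constraint is the same: the model can only distinguish categories its training data taught it to distinguish, which is why evaluating how a system learns matters more than what it knows out of the box.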
FROM COMPUTER VISION
TO IMAGE INTELLIGENCE
“We find in social and digital that
a lot of our fans talk about us,
but they don’t talk to us.”
— Sebastian Quinn, Director of Digital Marketing,
Hard Rock International
The rise in visual media, specifically the sharing of photos and videos, creates opportunities for organizations that text alone cannot provide. While this is not an independently verified statistic, several of the technology providers Altimeter interviewed stated that approximately 80% of images that they see that include brand logos do not explicitly mention the brand in any accompanying text.
While this number may be startling, it makes sense intuitively. We take photos and sometimes
hashtag them, but we don’t usually call out every single product or object in the image. Machine
learning teaches the network to recognize what is important to us and what is not so we can
identify unanticipated opportunities and risks.
Image intelligence, therefore, refers to the ability to extract meaning from images, detect
patterns, and use that insight in conjunction with other data to make a prediction about the
future (see Figure 6).
IMAGE RECOGNITION: What is it?
· Detects objects, scenes, attributes, emotion in images
IMAGE ANALYSIS: What happened?
· Detects objects, scenes, attributes, emotion in images
· Measures impact (shares, sentiment, reach, impressions)
IMAGE INTELLIGENCE: What is likely to happen?
· Detects objects, scenes, attributes, emotion in images
· Measures impact (shares, sentiment, reach, impressions)
· Integrates other data sources, detects patterns, and makes inferences and predictions
FIGURE 6 FROM IMAGE RECOGNITION TO IMAGE INTELLIGENCE
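The progression in Figure 6 can be sketched in a few lines. The photos, share counts and weekly mention counts below are invented, and the “prediction” is a naive linear projection rather than a real forecasting model:

```python
# Sketch of the three stages in Figure 6, with invented data: recognition
# detects what is in each photo, analysis measures impact, and intelligence
# combines detections with other signals to project a trend.
photos = [
    {"objects": ["logo", "tennis racket"], "shares": 120},
    {"objects": ["logo", "beach"], "shares": 300},
    {"objects": ["bicycle"], "shares": 45},
]

# 1. Recognition: which photos contain the brand logo?
logo_photos = [p for p in photos if "logo" in p["objects"]]

# 2. Analysis: what impact did those photos have?
total_shares = sum(p["shares"] for p in logo_photos)

# 3. Intelligence: combine with a second data source (weekly mention counts)
# and extrapolate the trend with a naive linear projection.
weekly_logo_mentions = [180, 210, 260, 330]
growth = (weekly_logo_mentions[-1] - weekly_logo_mentions[0]) / (len(weekly_logo_mentions) - 1)
forecast_next_week = weekly_logo_mentions[-1] + growth

print(len(logo_photos), total_shares, forecast_next_week)  # 2 420 380.0
```

The step from analysis to intelligence is the last one: recognition and analysis describe what happened, while intelligence joins that output with other data to say what is likely to happen next.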
THE BUSINESS VALUE OF
IMAGE INTELLIGENCE
“Communication is taking place
visually, and it’s much more
ephemeral than it ever was before;
it’s now a visual conversation.”
— Andrew Higgins, Director, Product Marketing, Pixlee
Altimeter’s 2011 report A Framework for Social Analytics lays out the primary areas of value for social business.9 The central framework of that report is as applicable to images as it is to text. Figure 7 demonstrates the primary categories of business value for visual content, which span the organization.
The following images, sourced from Flickr, illustrate a few examples (see Figure 8).
FIGURE 7 THE BUSINESS VALUE OF VISUAL CONTENT
BUSINESS GOAL
· Brand Health: A measure of attitudes, conversation and behavior towards your brand
· Marketing Optimization: Improving the effectiveness of marketing programs
· Revenue Generation: Where and how your company generates revenue
· Operational Performance: Where and how your company increases productivity and reduces expenses and risk
· Customer Experience: Improving your relationship with customers, and their experience with your brand
· Innovation: Collaborating with customers to drive future products and services
FIGURE 8 PHOTOGRAPHS HIGHLIGHT HIDDEN BRAND OPPORTUNITY AND RISK
BRAND MENTIONS,
MOMENTS OF CONSUMPTION
This photograph includes a bottle featuring a partial logo
of Hellmann’s Mayonnaise, which can be counted as a
brand mention and shows the context in which the product
is being used in the meal.
SPONSORSHIP VALUE
This photo of Marcos Baghdatis, taken at the 2013 U.S.
Open, clearly shows a partial logo of the auto insurance
company esurance, but without image recognition it
would not appear as an esurance impression unless the
photographer or person who shared it explicitly tagged it
that way.
COUNTERFEIT PRODUCTS,
TRADEMARK INFRINGEMENT
This bottle of “Oil of Ulay” resembles a well-known beauty
product. Products such as these may constitute trademark
infringement. Image recognition technology can detect
authorized or unauthorized variations of logo or package
design, enabling organizations to enforce brand and product
standards.
The value of image intelligence is not confined to these three examples. In fact, while many
of the use cases for image analytics reside in digital marketing, they have value across
the organization. The following section illustrates a range of business use cases for image
intelligence: brand management, marketing, revenue generation, customer experience,
operations and innovation and strategy.
Photos: Steven Pisano, CC 2.0; Twistin, CC 2.0; Indi Samarajiva, CC 2.0
USE CASES
BRAND HEALTH
Brand health — a measure of how people feel about, talk about and act toward your brand — is
one of the most common use cases for social data. Analyzing images in the context of brand
health can yield complementary insights to text analysis. The analysis of sentiment in images,
however, requires facial recognition and emotion detection capabilities and raises a host of
privacy and disclosure issues that are important to factor into any image intelligence strategy
(see Privacy, Trust and Customer Experience, below).10
Image intelligence can help brand managers detect and act upon brand mentions, context
and associated sentiment that would otherwise be invisible. Additionally, it provides a lens into
sharing behavior and engagement that may not explicitly mention the brand. Roger Worak,
Senior Director of Digital Loyalty and CRM at Hard Rock International, says, “At the core, Hard
Rock is an experience brand, so we want to see what experiences inspired our customers to
share their photos.”
Another interesting application of image intelligence is the ability to analyze different sets of
images over time for brand storytelling purposes. For example, by correlating a new set of
images to one already on the web, one could detect and classify visual changes over a specific
time period. Francesco D’Orazio, VP Product and Research at Pulsar, says, “You could compare
a set of images of New York to pictures on the web that were taken in the 1950s. This would give
you an analysis of conversation about New York that is multi-dimensional.” This could be an
interesting application to better understand the impact of urban development, climate change
or even brand presence in a city or other area over time.
Tracking Brand Mentions: Understand when and where the brand appears in images. Several vendors are able to detect partial or upside-down logos in addition to non-standard logo variations. This can be used to estimate or validate brand awareness or to measure the value of visually oriented campaigns.
Brand Storytelling: Compare images of a brand over time to identify visual changes in the brand story.
Brand and Trademark Compliance: Understand whether logos and brand assets are being used in compliance with brand guidelines. Examples:
• Pharma, in which misuse of a brand logo can have human/legal consequences
• Trademark infringement
• Fake or pirated products/services
Crisis and Issues Management: Understand how topics — whether related to one’s own brand, other brands or external issues — may affect brand health. Examples: environmental issues, GMO foods, product recalls, damaged shipments, disease outbreaks, political campaigns, geopolitics.
MARKETING OPTIMIZATION
As images become even more central to marketing programs and campaigns, the ability to
detect those images when they are shared without language becomes essential to evaluating the
performance of marketing and advertising programs and, ultimately, media mix management.
This can extend from identifying moments of consumption (which foods people eat for breakfast,
what bikes they ride or cars they drive) to affinities among brands, to competitive benchmarking, to
sponsorship performance, to micro-segmentation and ad targeting. David Rose, CEO of Ditto, says
“Now you can look for these tribes of people who are into fashion, or golf, or beach vacations.”
Image analytics can also help marketers identify valuable user-generated content (UGC), making
content development more cost-effective. In the past, if companies wanted to show a customer
using the product, they’d have to shoot the image themselves. Says Paul Piggott, Global Social
Media and Influencer Marketing Manager, Ultimate Ears, “We lean into user-generated content
[UGC] and influencer content a great deal. We don’t really spend money creating content
anymore. We post UGC with the creator’s permission and attribution. We found early on that
UGC performed twice as well in terms of engagement on our social channels, so we moved to
nearly total UGC.”
FIGURE 9 BRAND HEALTH USE CASES
Market Research: Identify moments of consumption. This can be used for revenue attribution or for micro-segmentation.
Measuring Brand Lift: Measure content performance. As brands invest in content and content propagates, it can become very expensive to perform traditional brand lift studies that require surveys. Visual analytics can be a more cost-effective way to measure the propagation of brand content.
Brand Affinities: Identify interesting ways in which customers/consumers use your product in ways you never intended, and how they may pair it with something else. Examples: chips with dip, breakfast cereal with other breakfast items, fashion.
Competitive Benchmarking: Compare visual share of voice to competitors’. For example, auto manufacturers can use image analysis to identify images of their brands and models versus competitors’. It may also be interesting to benchmark visual against textual data to see what overlaps and gaps may exist.
Micro-Segmentation/Ad Targeting: Identify micro-segments that may affect ad targeting; for example, golfers, pet owners, mixed martial arts enthusiasts, fashionistas, and so on.
Content Marketing/UGC: Crowdsource content, whether from concerts, sporting events or daily life, then obtain content from fans, customers and photographers for use in creative or in digital commerce.
Sponsorship Performance: Understand the value of sponsorship, as measured by logos and other brand content shared and amplified in photos. This can count toward brand mentions, impressions, advocacy or other metrics.
Influencer & Celebrity Marketing: Identify and measure the impact of influencers and celebrities for digital marketing purposes.
Content Performance: Understand message diffusion across different media types, so on-brand messages can be amplified and marketers can receive alerts about off-brand images that may require attention.
Strengthen Brand Identity: Understand what types of images perform most effectively in the context of brand management and marketing.
FIGURE 10 MARKETING OPTIMIZATION USE CASES
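The competitive benchmarking use case above reduces to simple arithmetic over detection counts once image recognition has tallied logo appearances. Brand names and counts below are invented:

```python
# Hypothetical competitive benchmark: visual "share of voice" is a brand's
# fraction of all detected logo appearances across a sample of images.
detections = {"OurBrand": 540, "RivalA": 820, "RivalB": 260}

def share_of_voice(brand, counts):
    return counts[brand] / sum(counts.values())

print(round(share_of_voice("OurBrand", detections), 3))  # 0.333
```

The interesting analytical work is comparing this visual share of voice against the textual one, since (as noted earlier) most logo appearances carry no text mention at all.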
REVENUE GENERATION
Understanding the conversion (or direct revenue) impact of social and digital programs is an
ongoing challenge for marketers. While a customer may not convert on a known channel or device
or within the time frame prescribed by conversion attribution tools, images enable marketers to see
the evidence of customer purchases, in addition to the context in which they are presented.
Errol Apostolopoulos, SVP Product of Crimson Hexagon, says, “With text you get a lot of
intention: ‘I can’t wait until I get the Apple Watch.’ But with image analysis, you can now show
the watch. The thing that a photo can tell you is that they are not just talking about it; they have
it. Brands can use that as proof of purchase.” Images can also contain commerce widgets, which
provide valuable analytics about which images perform best under which conditions.
Images may also contain other information, not accounted for in traditional data models, that can aid in price prediction or other types of value calculation. Real-estate listings typically factor in attributes such as number of bedrooms, bathrooms, school district, square feet and so on to determine price. Ram Ramanathan, Product Manager of Google Cloud Vision, says “soft” attributes in images that would not normally be accounted for can make models more predictive. French windows in a dining room can be a potential signal for a higher price, for example.
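A sketch of how such a “soft” visual attribute might enter a pricing model. The coefficients and the 5% premium are invented for illustration; a real model would fit them from data rather than hard-code them:

```python
# Sketch (invented coefficients): adding a "soft" visual attribute detected
# in listing photos, such as French windows, as one more feature in a
# simple linear pricing model alongside the traditional attributes.
def predict_price(bedrooms, bathrooms, sqft, has_french_windows):
    base = 50_000
    price = base + 40_000 * bedrooms + 25_000 * bathrooms + 120 * sqft
    if has_french_windows:        # signal extracted by image recognition
        price *= 1.05             # assumed 5% premium, for illustration only
    return price

without = predict_price(3, 2, 1800, False)
with_fw = predict_price(3, 2, 1800, True)
print(without, with_fw)  # 436000 457800.0
```

The structural point is that the image-derived feature slots into the model exactly like the traditional ones; the only new requirement is a detector reliable enough to supply it.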
Price Prediction: Use visual attributes in images to help train models to predict the price of a product.
Selling Triggers: Detect content that contains aspirational language: “I wish I had this” plus images. This is applicable to any product or service.
Proof of Purchase: Detect evidence that customers have bought a product rather than just talking about wanting it. This is particularly interesting for ingredient brands or products, such as microprocessors from companies like Intel, as those mentions generally tend to be invisible to brands. It’s also interesting for businesses that struggle with conversion attribution because of long sales cycles or indirect distribution, such as car companies, mobile phone manufacturers and cosmetics brands, to name a few.
Merchandise: Use user-generated images on merchandise, thus adding to the product assortment and reducing the cost of development.
Shoppable Images/Commerce Widgets: Insert shoppable images in any social or digital channel. Using this technology, someone who views a dress, or watch, or book, or other product she likes on Instagram or another visual channel could simply click on the image and go right to a commerce transaction — all without leaving the platform. Companies such as Curalate and Project September, and platforms such as Pinterest, provide this type of capability.11 This becomes particularly intriguing in images that contain multiple brands and/or products.
FIGURE 11 REVENUE GENERATION USE CASES
OPERATIONAL PERFORMANCE
Analyzing owned images facilitates visual asset management (and, increasingly, visual search
capability) for content strategists who otherwise would have no way to inventory the many
thousands of images they own and use. The ability to analyze user-generated images can inform
everything from merchandising strategy to franchise management to security and fraud practices.
For example, a financial services company found that first-time credit card holders were proudly
posting photographs of their new credit cards — a serious fraud risk.12
Enterprise Search & Visual Asset Management: Identify and locate brand content. Many organizations have terabytes of images they refer to internally as "dark assets," because they are difficult, if not impossible, to search by attribute and have limited, if any, metadata associated with them. As a result, companies may buy the same image multiple times, overlook high-value images, or recreate images that already exist. Visual search enables companies to make images discoverable and usable, saving time and money.
Merchandising Strategy: Understand preference and consumption patterns. Images can illustrate which flavors/brands/styles perform best in which location at which time.
Fraud Prevention: Identify fake instances of customer service (tax preparers, banks, pirated products, credit cards) to prevent phishing and identity theft. While fraudulent products, such as handbags and skin creams, may be difficult to mitigate, the presence of a large database of images may help investigators detect patterns of fraud and identify the guilty parties.
Security: Identify customers or employees posting images with sensitive information, such as access badges, boarding passes, passports or credit cards, all of which are potential security risks, to enable companies to identify and remediate them.
Franchise Management: Manage brand consistency among franchises. Some organizations (hotels, restaurant chains and so on) tend to work under the franchise model, which means that corporate marketers tend not to have much visibility into the content that individual franchisees may use. Image intelligence can provide that visibility, helping to promote consistency and reduce risk.
FIGURE 12 OPERATIONAL PERFORMANCE USE CASES
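One common way to make “dark assets” searchable for duplicates is a perceptual hash: each image is reduced to a short fingerprint, and near-identical fingerprints flag redundant purchases. The sketch below implements a tiny average hash over made-up 2x2 grayscale grids; production systems hash downscaled versions of real images (typically 8x8 thumbnails):

```python
# Sketch of duplicate detection for "dark assets" via a perceptual hash.
# Pixel grids here are invented 2x2 examples; real systems hash downscaled
# thumbnails of actual images.
def average_hash(pixels):
    """1 bit per pixel: is the pixel brighter than the image's mean?"""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

original = [[200, 210], [30, 40]]
recompressed = [[198, 215], [25, 42]]   # same image, slightly re-encoded
different = [[10, 20], [220, 230]]

# Near-duplicates share a fingerprint; unrelated images do not.
print(hamming(average_hash(original), average_hash(recompressed)))  # 0
print(hamming(average_hash(original), average_hash(different)))     # 4
```

Because the hash survives re-encoding and minor edits, it can catch the "bought the same image twice" problem that exact byte-level comparison misses.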
CUSTOMER EXPERIENCE
Like text analytics, image intelligence can serve as an early warning system for situations that may affect customers. The difference, however, is that people often post photos of unpleasant customer experiences without any comment that would alert the affected brand: a smashed package on a doorstep, an unappetizing meal at a restaurant, a dirty hotel room. Seeing patterns and volume of these images over time, particularly if they are tagged with location, can help businesses spot emerging risks related to customer service, product or venue safety — even possible health violations.
Another important factor in image intelligence is the ability to identify and moderate images
that include adult or violent content. This is critical for brands that use UGC for marketing or
commerce purposes, or for conferences, festivals or other events with live social media streams.
Because the risk of a false negative (missing a violent/adult/offensive image) is so high, UGC
always requires human moderation, but AI can significantly reduce the number of these images
that employees must review.13
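The division of labor this implies can be sketched as a simple triage: auto-handle only what the model is confident about, and send the uncertain middle band to human reviewers. The thresholds below are invented for illustration, not recommendations:

```python
# Sketch of AI-assisted moderation: auto-handle images the model is sure
# about and route only the uncertain middle band to human reviewers.
# Thresholds are illustrative, not recommendations.
def triage(nsfw_score, reject_above=0.95, approve_below=0.10):
    if nsfw_score >= reject_above:
        return "auto-reject"
    if nsfw_score <= approve_below:
        return "auto-approve"
    return "human review"   # false negatives are costly, so humans decide

scores = [0.99, 0.02, 0.40, 0.97, 0.05, 0.88]
decisions = [triage(s) for s in scores]
print(decisions.count("human review"))  # only 2 of 6 images need a person
```

Tuning the thresholds trades reviewer workload against risk: lowering the auto-reject bar cuts human effort but raises the chance of wrongly suppressing legitimate content.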
Another application of image recognition is to improve the accessibility of images for the visually
impaired. In April 2016, Facebook released “Automatic Alternative Text,” which uses artificial
intelligence to describe photos to people with visual impairment. This could be a valuable
feature for brands to offer in digital channels to improve the digital experience for visually
impaired customers.14
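As a sketch of how a brand might apply similar technology on its own site, the snippet below fills in missing `alt` attributes with machine-generated descriptions so screen readers can announce image contents. The `describe_image` function is a hypothetical stand-in for a captioning model or vision API call, not a real library function:

```python
import re

def describe_image(image_url):
    # Hypothetical stand-in for a captioning model or vision API call.
    return "Image may contain: two people, smiling, outdoors"

def add_alt_text(html):
    """Insert generated descriptions into <img> tags that lack alt text,
    so that screen readers can announce the image contents."""
    def fill(match):
        tag = match.group(0)
        if "alt=" in tag:
            return tag  # keep hand-written alt text; humans describe best
        src = re.search(r'src="([^"]+)"', tag)
        desc = describe_image(src.group(1)) if src else ""
        return tag[:-1] + f' alt="{desc}">'
    return re.sub(r'<img\b[^>]*>', fill, html)

print(add_alt_text('<img src="https://example.com/photo.jpg">'))
```

Note that generated captions supplement, rather than replace, hand-written alt text: the sketch deliberately leaves existing `alt` attributes untouched.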
Product Defects or Damaged Shipments: Identify products or customer
experiences that are broken or defective and that may not be visible
simply with text. This could be as simple as damaged clothing, restaurant
or hotel cleanliness, damaged shipments — anything that customers can
photograph and share.
Ability to Flag NSFW Content: Reduce risk associated with user-generated
content — for events, commerce, campaigns or other purposes. Brands that
use UGC need to filter adult, violent or racist images to ensure that
they are not inadvertently shared with customers. Some of the visual
analytics platforms include adult content filters to remove undesirable
content from the feed before a human sees it. While this still requires
some level of moderation, it takes much of the time and cost out of the
process and helps ensure a safer experience.
Accessibility for Visually Impaired: Analyze images and communicate their
contents verbally to vision-impaired users/customers. Facebook provides
this capability with its “Automatic Alternative Text” feature, and
similar technology can be used to make websites and other digital assets
more accessible.15
FIGURE 13 CUSTOMER EXPERIENCE USE CASES
INNOVATION AND STRATEGY
Images also have the potential to inform strategy and influence product innovation. For example,
image recognition can help identify customer trends or validate hypotheses about them, such
as what styles of clothing, flavors of snacks, or even colors and patterns are becoming popular in
which locations. They can replace or augment traditional focus groups and surveys by showing
what people actually do versus what they say they do or how they combine, use or adapt
products in interesting ways. They can also be used to identify leading indicators of issues that
may affect consumers and spark product or service ideas: climate or traffic patterns, for example.
Product Innovation/Ideation: Identify not only that a specific flavor,
color or configuration of a product is popular or being amplified
relative to others, but how and possibly even why. This can also be used
for demand prediction.
Manufacturing: Detect defects and service issues. Image recognition is
commonly used in manufacturing to identify production defects, such as
burned or broken items, in products ranging from juice boxes to cosmetics
to packaged foods. It can also be used to better understand desirable
attributes: what people identify as beautiful, luxurious or expensive,
for example.
Research: Better understand externalities that may affect the business.
Organizations can use user-generated images to look at patterns that
affect daily life and business conditions, such as pollution, weather,
urban planning and traffic.
FIGURE 14 INNOVATION AND STRATEGY USE CASES
PRIVACY, TRUST AND
CUSTOMER EXPERIENCE
As with any new use of data, the act of collecting, interpreting and acting on images has
implications for customer privacy and trust. Many of these, such as the choice to identify
individuals versus aggregating data, can be addressed within an existing privacy framework. But
computer vision introduces new considerations into the mix that can have an impact on customer
experience. Following are a few issues unique to images.
SOCIAL CUSTOMER CARE: ARE YOU TALKING TO ME?
There is established practice around posting to a social site when a customer loves (or
complains about) a product or service. Social service teams generally expect that the use of an
@ sign or hashtag that identifies the brand (as in @uber_support or #uber) represents a direct
communication with the brand. But images may be considered more private than text.
For example, if a customer posts content with a brand hashtag, that may signal that she expects
a response from the brand. But if a company were to reach out simply because a photo showed
her wearing its brand of shoes, that could be perceived as intrusive. To ensure that they are
operating within a “safe” zone of customer interactions, brands must set guidelines that lay out
the conditions under which they will — or will not — reach out to customers who have not
explicitly addressed them.
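Guidelines like these can be made operational as a policy gate that a social care team (or its tooling) applies before replying. The handle, hashtag and post fields below are illustrative assumptions, not any platform's actual schema:

```python
def may_reach_out(post, brand_handle="@acme", brand_tags=("#acme",)):
    """Return True only when the customer has explicitly addressed the
    brand -- an @-mention or a brand hashtag -- rather than merely
    appearing with the product in a photo.

    `post` is a dict with optional "text" and "detected_logos" fields
    (an illustrative shape, not a real platform's post schema).
    """
    text = post.get("text", "").lower()
    if brand_handle in text:
        return True   # direct mention: a response is expected
    if any(tag in text for tag in brand_tags):
        return True   # brand hashtag: likely open to contact
    # A logo detected in the image but no explicit address: stay silent.
    return False

print(may_reach_out({"text": "love my new shoes @acme!"}))
print(may_reach_out({"text": "new shoes today",
                     "detected_logos": ["acme"]}))
```

The point of encoding the rule is consistency: every agent applies the same definition of “explicitly addressed,” and the guideline can be audited and revised in one place.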
MICRO-SEGMENTATION
Serving highly targeted content to a specific consumer can also cross the line, depending on the
consumer’s expectations about the interaction. For example, a furniture manufacturer may find
that many images of its products include pets and use that insight to inform a campaign. This
is unlikely to be sensitive, while segmentation based on health or other personal issues may be
highly sensitive. As a result, these strategies require careful monitoring to prevent interactions
that feel too intimate and that may erode trust in the brand. In general, rules of engagement for
using images in micro-segmentation strategies should follow rules of engagement for other data
uses with regard to sample sizes, de-identification and appropriate (and inappropriate) uses of
the data.
FACIAL RECOGNITION AND EMOTION DETECTION
No discussion of computer vision would be complete without a mention of facial recognition,
especially given that many people’s first conscious experience of computer vision was Facebook’s
“tag friends” feature. But facial recognition, in which companies are able to identify faces and
even detect emotion such as happiness and excitement, is both compelling and potentially
problematic.
For example, sports franchises, eager to measure fan sentiment, may want to use facial
recognition and emotion detection to gauge reactions to athletes and teams, tracking them
over time as an indication of brand health. Even if the sentiment analysis is only done at an
aggregate level, and even assuming
proper disclosures, such uses of computer vision need to be evaluated carefully to ensure that
they do not alienate customers and fans. This is important because any data that an organization
stores may be subject to security breach or even subpoena in a legal case.
For more on ethical data use and privacy, see The Trust Imperative: A Framework for Ethical
Data Use.16
CHALLENGES OF
IMAGE INTELLIGENCE
“All models are wrong,
but some are useful.”
— George E. P. Box
Any new technology presents challenges, both known and unknown. Following is a summary of
the main challenges of computer vision and image intelligence.
1. There is no standard analytics methodology. Because image intelligence is new and it
relies on machine learning, there is no set of standards for analyzing images. This is akin to
the beginning of the social web 10 years ago, when analysts began to define and socialize
measures of impact, such as reach and engagement, or the Internet 20 years ago, when
analysts began to define metrics, such as views, click-throughs and dwell time. That said,
organizations should start with discrete measures, such as object, scene and attribute
detection, and look at volumes and sharing behaviors over time as a starting point. That will
form the basis for more detailed metrics (for example, sponsorship ROI).
2. Sampling is inconsistent. Some social platforms (Instagram) and most or all messaging
platforms (Snapchat) don’t have a “full firehose” that analysts can use to pull and analyze
data the way they can with Twitter. In Snapchat’s case, this is a function of how the platform
is used — for ephemeral content. Making user data available would violate its terms
of service and send users running to other, more private platforms. While the issue of
“dark” content is not unique to images, it does need to be factored into any analytics
methodology. Furthermore, Instagram changed its Platform Policy in early June to include a
provision that users should not “apply computer vision technology to User Content, without
our prior permission.” This move is likely intended to rationalize and standardize the
ecosystem for image intelligence and give Instagram more control over how images posted
on the site are used and interpreted.17
3. Benchmarks over time are challenging. Because there is no full firehose for images,
analysts need to collect them in real time. This means they need to know in advance what
they want to analyze, which is sometimes impossible. Kohl’s, for example, could not have
known in advance that Candace Payne would film herself in a Chewbacca mask, let alone
that the video would go viral, so it could not have been counting shares from the start.
Now imagine that a similar viral event occurs around a competitor in six months. How
would analysts compare the two? In practice, they must estimate or use proxies (number
of video views, for example) to gauge the impact of visual content. While these proxies
may not be strictly accurate, they can certainly be directionally useful.
4. Balancing Accuracy and Scale. Analyzing images requires constant negotiation between
what data scientists call “precision” (the share of flagged images that are truly relevant)
and “recall” (the share of relevant images that are actually caught). As with other types
of streaming data, the desire to catalog thousands, if not millions, of images can sacrifice
accuracy, while insisting on a high level of precision, by definition, limits the number of
images that can be flagged. This is a critical conversation for data scientists and business
stakeholders to have together. Some use cases (for example, filtering adult or violent
content) cannot afford to miss an image and so demand very high recall, while others
(looking for brand mentions) may place more value on scale than accuracy.
5. Latency — the time it takes to analyze images — is higher with images than with text.
This is a function of the complexity of analysis. Adi Kleiman, SVP of Product at Tracx, says,
“There’s definitely a huge gap between how advanced and real-time text analytics is with
social, and how advanced and real-time image analytics is. The biggest difference with
vendors is time take to analyze images. Today it’s pretty much about finding objects in
images and video.” The goal, says Kleiman, is for this technology to mature to the point
where it’s less about finding objects and more about telling a story. This will require a
codified set of metrics, increases in the precision of image recognition technology, and
pricing models that are well aligned to enterprise budgets. Right now, these three variables
are still forming as organizations determine their use cases, train their data, assess their
findings, and build business cases for further investment.
6. Learning Curve. As with any new technology, there is a learning curve with computer vision
and image intelligence for analysts and data scientists, as well as business stakeholders. Any
technology that involves artificial intelligence and machine learning requires a tolerance
for agile development and collaboration. Organizations that are siloed (both in terms of
people and data) will find it challenging to take on image intelligence. The good news
is that investing in image intelligence will help the organization learn these skills, which
are critical to other areas, related to real-time analysis and predictive modeling, in which
machine learning and AI are (increasingly) used.
7. NSFW and Off-Brand Moderation Is Still Necessary. For some uses, such as filtering
adult or violent material within UGC, human moderation may always be necessary. This
can add expense and also carries a human cost, as some images may be disturbing.18 But,
cautions Andrew Higgins of Pixlee, “A human has to touch any photo before it’s used in any
brand marketing context.”
A LOOK AT THE FUTURE
Every day, communication becomes more visual, and technology adapts. From a practical
standpoint, social and digital marketers should expect image analytics to become integrated
into social and marketing cloud offerings in the not-too-distant future, and into other suites
(customer, even sales) over time. Features such as sentiment analysis and object and context
recognition will continue to improve, although there will always be edge cases that defy even the
smartest neural networks. But the ultimate criterion for deciding to use image recognition and
intelligence technology should not be confined simply to “visual listening” or social use cases;
the real value resides in its ability to help predict outcomes and inform business strategy.
While eventually image-based services will become self-serve, there will be a period of transition
that mirrors what we saw in the mid-2000s with natural language processing within social media
listening platforms: At first only experts will use it; then, eventually, it will become a shared service
within the organization. But to get there, organizations must engage in agile development
methodologies, which are critical to the ability to build predictive models.
As video, 360-degree video, virtual reality and augmented reality become more commonplace,
visual analytics and, ultimately, predictive intelligence platforms will emerge to make sense of this
data and make it actionable across the enterprise. We will see more numerous and stable APIs,
although message-based platforms, such as Snapchat, and walled gardens, such as Facebook
and LinkedIn, will continue to restrict access to visual data.
But one thing is clear: Visual content — and the need for image recognition and intelligence —
is here to stay. Peter Davis, National Advertising & Media Manager, Sanitarium brands, says, “We
think it adds value and are likely to continue to find ways to deploy it.”
RECOMMENDATIONS
“I love being wrong,
‘cause that means in that instant,
I learned something new that day.”19
— Neil deGrasse Tyson, noted astrophysicist, cosmologist,
author and science communicator
1. Find a project sponsor. Identifying someone in the business with a vested interest in the
visual brand and/or in customer experience — a brand, marketing or customer experience
executive — is critical to success. This isn’t a “one and done” kind of technology; it’s the
beginning of a trend that will encompass not only still images, but video, 360-degree video,
and even virtual reality and augmented reality.
Organizations that are rethinking “stores of the future” or digital experiences this year must
evaluate image technology sooner rather than later, as image intelligence will help identify
attributes of successful physical and digital spaces, programs and other experiences that
are visual in nature. This is particularly important because the proliferation of visual content
will only increase and become more complex over time.
2. Understand that culture and methodology are as important as tools. Because it is
based on machine learning, computer vision is by its nature an iterative discipline that
can only succeed in a culture of continuous learning. Gabriele Colombo, designer and
PhD candidate at the DensityDesign research lab at Politecnico di Milano, points out that
intentionally pushing the boundaries of computer vision tools can reveal their limitations,
feed research and help platforms evolve. Rather than over-focusing on tools, he argues that
what is more valuable at the outset is to develop a structured methodology with which to
analyze visual data.
3. Select a practical business case. It’s important to realize that image intelligence is as
applicable to business-to-business (B2B) companies as it is to business-to-consumer (B2C).
While alcohol and beverage, sports and entertainment, and fashion brands are the obvious
choices to derive value right away, any brand with a visual component, such as food and
beverage, hospitality, retail and consumer-packaged goods, should explore this technology,
as should any brand that makes significant investment in images, sponsors sports or holds
conferences. Visual search — using computer vision to manage visual assets for content
marketers — is a further use case.
Finally, shoppable images are coming; ecommerce executives should plan for a future in
which any image, anywhere, can be enabled with a widget that makes it possible for the
consumer to transact in virtually any digital space.
4. Set up a small and focused proof of concept. What’s important in developing a proof
of concept is to choose the most specific use case possible — one that will yield value
and learning and can be used to help justify further experimentation. The key is to ensure
that the key performance indicators — for example, ROI on a sponsorship investment
or the mitigation of fraud risk — are determined before the project starts, reducing the
potential for scope creep and muddied conclusions.
5. Address governance and privacy before they become an issue. Organizations that use
UGC should develop or adapt a process for obtaining appropriate media usage and/or
licensing permissions from the content owner. It’s also important to evaluate any new
uses of UGC to make sure they are not invasive and don’t threaten user trust. For
information on how to plan for new privacy scenarios, see the Altimeter report The
Trust Imperative: A Framework for Ethical Data Use.20
6. Start small, think big. In addition to owned and UGC images, organizations should think
about how they might use other visual data from sensors and other devices to optimize
the business. This can be as basic as logo detection, or as complex as videos from
drones sent to inspect oil rigs for safety, or medical imaging used for the purposes of
personalized medicine.
CONCLUSION
It’s critical to remember that we are in the earliest stages of our ability to extract insight
and act upon the many images that consumers, customers, citizens and others share
daily around the world. This trend will only continue, straining existing tools and analytics
methodologies. To unlock the value of visual content will therefore require curiosity,
patience and cross-organizational and cross-disciplinary collaboration, as well as a
willingness to celebrate failures as well as successes. Says Farida Vis, Director, Visual Social
Media Lab, and Faculty Research Fellow, University of Sheffield, “We have to do more to
value the exploration.”
ENDNOTES
1. “Image Recognition Market by Technology (Pattern Recognition), by Component (Hardware, Software, Service), by Application (Marketing and Advertising), by Deployment Type (On-Premises and Cloud), by Industry Vertical and by Region - Global Forecast To 2022,” Markets and Markets, March 2016, http://www.marketsandmarkets.com/Market-Reports/image-recognition-market-222404611.html
2. L.V. Anderson, “Why a Woman Putting on a Chewbacca Mask Is Facebook Live’s Most-Watched Video Ever,” Slate, May 20, 2016, http://www.slate.com/blogs/xx_factor/2016/05/20/why_candace_payne_s_video_of_herself_putting_on_a_chewbacca_mask_is_facebook.html
3. As with most emerging technologies, there are many terms that describe the ability to process and recognize images: image processing, image recognition, computer vision and so on. For the purposes of this report, we will use the term computer vision, as it most accurately conveys the intelligence needed to extract meaning from digital images.
4. Tom Simonite, “How Computers Can Tell What They’re Looking At,” MIT Technology Review, April 11, 2016, https://www.technologyreview.com/s/601118/how-computers-can-tell-what-theyre-looking-at/. See also Tom Simonite, “Teaching Machines to Understand Us,” MIT Technology Review, August 20, 2015, https://www.technologyreview.com/s/540001/teaching-machines-to-understand-us/.
5. Ibid.
6. Caroline Clabaugh, Dave Myszewski, and Jimmy Pang, “History: The 1940’s to the 1970’s,” Neural Networks Website, Stanford Computer Science Department, accessed June 21, 2016, https://cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks/History/history1.html
7. “Puppy or…Bagel?” Reddit, accessed June 21, 2016, https://www.reddit.com/r/funny/comments/49iu7t/puppy_or_bagel/
8. Alex Hern, “Flickr faces complaints over ‘offensive’ auto-tagging for photos,” The Guardian, May 20, 2015, https://www.theguardian.com/technology/2015/may/20/flickr-complaints-offensive-auto-tagging-photos
9. Susan Etlinger, “A Framework for Social Analytics,” Altimeter, a Prophet Company, August 10, 2011, http://www.slideshare.net/setlinger/altimeter-social-analytics081011final
10. Susan Etlinger, “The Trust Imperative: A Framework for Ethical Data Use,” Altimeter, a Prophet Company, June 25, 2016, http://go.pardot.com/l/69102/2015-07-12/pxysr
11. Anthony Ha, “Gilt co-founder launches photo-driven shopping startup Project September,” TechCrunch, April 14, 2016, https://techcrunch.com/2016/04/14/project-september-launch/
12. Serge Malenkovich, “Posting photos of your debit card… is a terrible idea,” Kaspersky Lab, December 26, 2012, https://blog.kaspersky.com/the-next-time-you-feel-like-posting-a-picture-of-your-debit-or-credit-card-dont/421/
13. Adrian Chen, “The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed,” Wired, October 23, 2014, http://www.wired.com/2014/10/content-moderation/
14. Dario Garcia Garcia, Manohar Paluri, Shaomei Wu, “Under the hood: Building accessibility tools for the visually impaired on Facebook,” Facebook Code, April 4, 2016, https://code.facebook.com/posts/457605107772545/under-the-hood-building-accessibility-tools-for-the-visually-impaired-on-facebook/
15. Ibid.
16. Etlinger, “The Trust Imperative.”
17. Instagram Platform Policy, item A17, https://www.instagram.com/about/legal/terms/api/.
18. Chen, “The Laborers.”
19. “Not My Job: We Quiz Cosmos Expert Neil deGrasse Tyson On Cosmetology,” Wait Wait Don’t Tell Me, National Public Radio, October 24, 2015, http://www.npr.org/2015/10/24/450994221/not-my-job-we-quiz-cosmos-expert-neil-degrasse-tyson-on-cosmetology.
20. Etlinger, “The Trust Imperative.”
METHODOLOGY
This document was developed based upon online and in-person conversations with 32 market influencers,
technology vendors, academics, brands and others on established and emerging uses of data, as well as
secondary research, including relevant and timely books, articles and news stories. Our deepest gratitude to
the following:
BRANDS, RESEARCHERS, AGENCIES AND INDUSTRY EXPERTS (10)
Blink, Genevieve Norman, Co-Founder
Fast Forward Labs, Mike Williams, Research Engineer
Hard Rock International, Sebastian Quinn, Director of Digital Marketing and Roger Worak,
Senior Director of Digital Loyalty and CRM
The Mars Agency, Ethan Goodman, SVP Shopper Experience
Room 214, James Clark, Founder
Sanitarium Health & Wellbeing, Peter Davis, National Advertising & Media Manager
Stitch Fix, Eric Colson, Chief Algorithms Officer
Ultimate Ears, Paul Piggott, Global Social Media and Influencer Marketing Manager
National Opinion Research Center at the University of Chicago, Glen Szczypka, Deputy Director,
Health Media Collaboratory
YoShirt, Billy Shipp, CMO
ACADEMICS (2)
Density Design, Politecnico di Milano, Gabriele Colombo, designer and PhD candidate
University of Sheffield, Farida Vis, Director, Visual Social Media Lab and Faculty Research Fellow
TECHNOLOGY VENDORS (17)
Adobe, Elliot Sedegah, Sr. Product Marketing Manager
Chute, Jody Farrar, VP of Marketing and Communications
Clarifai, Mike Zeiler, CEO
Crimson Hexagon, Errol Apostolopoulos, SVP Product
Ditto Labs, David Rose, CEO
GumGum, Brian Kim, Vice President, Product Development
HP Enterprise, Jeff Veis, VP, HP Big Data Platform Solutions
Lithium Technology, Tyler Singletary, VP & GM Klout and Consumer Data
NetBase, Mark Bowles, Chief Technology Officer; Steve Winters, EVP Engineering & Operations
Olapic, Pau Sabria, Co-Founder
Oracle, Mike Strutton, VP, Social Cloud and Olaf Kowalik, Principal Product Manager
Pixlee, Andrew Higgins, Director, Product Marketing
Pulsar, Francesco D’Orazio, VP Product & Research
Salesforce, Luke Ball, Senior Director, Product Management and Michael Jones,
Director Product Management and Data Science
Synthesio, Adam Dalezman, Leah Pope, CMO
Talkwalker, Todd Grossman, CEO, Americas
Tracx, Adi Kleiman, SVP Product
SOCIAL & DIGITAL MEDIA TECHNOLOGY PLATFORMS (3)
Google, Ram Ramanathan, Product Manager, Google Cloud Vision
Pinterest, Kevin Jing, Engineering Manager and Tech Lead
Twitter, Chris Moody, VP Data Strategy
ACKNOWLEDGMENTS
I would like to convey my gratitude to my colleagues Christina Andrews, Omar Akhtar, Kalia Armbruster,
Leslie Candy, Cheryl Knight, Charlene Li, Aubrey Littleton and Ed Terpening for help, insight and
introductions throughout this process. Thanks for introductions and advice to Jesse Berrett, Konnie
Brown, Julie Clement Cochran, The Nick DeWolf Foundation, Ari Entin, Jamie Favazza, Jaime Lovejoy
Resmini, Steve Lundeen, Matt Hicks, Rob Hilsen, Adam Isserlis, Mike Mayzel, Tammy Nam, Briana
Schweizer, Clara Shih, Connie Sung Moyle, Sarah Takvorian, Mary Tarczynski, Christine Wan, Linda Ziffrin
and Mat Zucker. Finally, I would like to express my deepest thanks to the speakers and participants at the
Picturing the Social conference and workshop in Manchester, U.K., on June 20 and 21, 2016. Your insights
were invaluable and will enrich future research on this topic. As always, any errors are mine alone.
OPEN RESEARCH
This independent research report was 100% funded by Altimeter, A Prophet Company. This report is
published under the principle of Open Research and is intended to advance the industry at no cost. This
report is intended for you to read, utilize and share with others; if you do so, please provide attribution to
Altimeter, A Prophet Company.
PERMISSIONS
The Creative Commons License is Attribution-NonCommercial-ShareAlike 3.0 United States, which can be
found at https://creativecommons.org/licenses/by-nc-sa/3.0/us/.
DISCLAIMER
ALTHOUGH THE INFORMATION AND DATA USED IN THIS REPORT HAVE BEEN PRODUCED AND
PROCESSED FROM SOURCES BELIEVED TO BE RELIABLE, NO WARRANTY EXPRESSED OR IMPLIED IS
MADE REGARDING THE COMPLETENESS, ACCURACY, ADEQUACY, OR USE OF THE INFORMATION.
THE AUTHORS AND CONTRIBUTORS OF THE INFORMATION AND DATA SHALL HAVE NO
LIABILITY FOR ERRORS OR OMISSIONS CONTAINED HEREIN OR FOR INTERPRETATIONS THEREOF.
REFERENCE HEREIN TO ANY SPECIFIC PRODUCT OR VENDOR BY TRADE NAME, TRADEMARK
OR OTHERWISE DOES NOT CONSTITUTE OR IMPLY ITS ENDORSEMENT, RECOMMENDATION OR
FAVORING BY THE AUTHORS OR CONTRIBUTORS AND SHALL NOT BE USED FOR ADVERTISING
OR PRODUCT ENDORSEMENT PURPOSES. THE OPINIONS EXPRESSED HEREIN ARE SUBJECT TO
CHANGE WITHOUT NOTICE.
HOW TO WORK WITH US
Altimeter research is applied and brought to life in our client engagements. We help
organizations understand and take advantage of digital disruption. There are several ways
Altimeter can help you with your business initiatives:
• Strategy Consulting. Altimeter creates strategies and plans to help companies act
on business and technology trends, including ethical and strategic data use and
communications. Our team of analysts and consultants work with global organizations on
needs assessments, strategy roadmaps, and pragmatic recommendations to address a range
of strategic challenges and opportunities.
• Education and Workshops. Engage an Altimeter speaker to help make the business case to
executives or arm practitioners with new knowledge and skills.
• Advisory. Retain Altimeter for ongoing research-based advisory: Conduct an ad-hoc session
to address an immediate challenge, or gain deeper access to research and strategy counsel.
To learn more about Altimeter’s offerings, contact sales@altimetergroup.com.
About Susan Etlinger,
Industry Analyst
Susan Etlinger (@setlinger) is
an industry analyst at Altimeter,
a Prophet Company, where she
works with global organizations to
develop data and analytics strategies
that support their business objectives.
Susan has a diverse background in
marketing and strategic planning within
both corporations and agencies. She’s
a frequent speaker on data, privacy and
emerging technologies and has been
extensively quoted in media outlets
including Fast Company, BBC, The New
York Times, and The Wall Street Journal.
Find her on LinkedIn and at her blog,
Thought Experiments, at susanetlinger.com.
About Altimeter, a Prophet Company
Altimeter, a Prophet company, is a
research and strategy consulting firm
that helps companies understand and
take advantage of digital disruption. In
2015, Prophet acquired Altimeter Group
to bring forward-thinking digital research
and strategy consulting together under
one umbrella, and to help clients unlock
the power of digital transformation.
Altimeter, founded in 2008 by best-selling
author Charlene Li, focuses on research in
digital transformation, social business and
governance, customer experience, big
data, and content strategy.
Altimeter, a Prophet Company
One Bush Street, 7th Floor
San Francisco, CA 94104
info@altimetergroup.com
www.altimetergroup.com
@altimetergroup
415-363-0004