Tom Edwards, Chief Digital Officer, Agency @ Epsilon and recent Ad Age Marketing Technology Trailblazer, recently attended Google I/O 2017. This is a comprehensive recap of the event: from Google's vision of ubiquitous, multi-modal computing (Google Assistant Actions, Auto, computer vision, Wear, Android O, Progressive Web Apps, structured data and search) to immersive computing tied to Daydream, social VR, WebVR, the Visual Positioning Service, Tango, and WebAR.
2. GOOGLE I/O 2017
I had the opportunity to attend Google I/O 2017 and it was an incredibly rewarding
experience. For the past twelve months I have been thinking about the future evolution
of marketing through intelligent systems and immersive computing.
Google I/O validated a number of my hypotheses by shifting their approach to product
development to be AI first vs. mobile first. They also demonstrated the foundation for a
hyper connected future by introducing computer vision powered object recognition that
seamlessly integrates into Google Assistant.
For the first time I am combining the elements of Connection & Cognition from our
trend framework, as almost all of the products and services discussed, such as Google
Assistant, Home, Wear, and Auto, are actually an integration of the two.
• Connection & Cognition - Trends that demonstrate how to connect with
consumers through intelligent systems.
• Immersion - Trends that highlight advancements in all facets of immersive
computing.
The final section reviews Google’s approach to Immersive computing and covers all
facets of virtual reality and how Google is focused on enhancing experiences through
sharing, co-viewing and enhancing discovery. This section also highlights how
computer vision and visual positioning services will power augmented reality solutions.
Now all I need is for Google to evolve its experimental Fuchsia operating system, which
is being designed to redefine how we interface with apps by creating an abstraction of
services so intelligent systems will be able to stitch together predictive and reactive
services. Maybe it will be featured at Google I/O 2018? One can hope.
Tom Edwards
Chief Digital Officer, Agency
5. Mobile First to AI First
Connection
For the past few years, Google and other industry heavyweights have
proclaimed themselves to be mobile-first organizations. At Google I/O 2017, Google
announced its intent to move from mobile first to AI first.
Most of the product announcements, and a majority of what I cover in this
recap, are impacted by artificial intelligence: machine learning, deep
learning, smart reply, suggestions, computer vision, and much more.
Structurally, Google has aligned groups around artificial intelligence
research, tools, and applied AI. Tools include the incredibly powerful open-sourced
machine learning (ML) platform TensorFlow, which is flexible,
portable, and production-ready.
The one AI fueled experience that will serve as a foundational element
across various products is Google Assistant. Assistant will reside across
multiple formats and will be a key connection point between AI fueled
experiences and brands through “actions”.
Implications: ML/AI are rapidly transforming business, products and
services. A primary fuel for ML/AI is data. Understanding how to create
actionable data centric AI experiences is critical to drive growth.
Google is focused on evolving their 7 primary platforms, each serving over 1 billion
users, by applying artificial intelligence across all products.
7. Conversational UX
Connection
The shift toward an AI-first organization recognizes that text and
visual experiences (mobile and desktop) alone are not enough to evolve
the future of interaction.
Conversational experiences have been a primary topic of discussion
over the past year. Google is enabling conversational experiences
across Google Home and Google Assistant through tools like api.ai,
which provides a web-based toolset for building complex conversational
experiences for Google Assistant and all its form factors.
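As a rough sketch of how a tool like api.ai wires up fulfillment: a webhook receives the matched intent and returns the speech that Assistant reads back to the user. The intent name `store.hours` and its parameters here are hypothetical examples, and the payload shape follows api.ai's v1 webhook response format (speech plus displayText).

```javascript
// Minimal sketch of an api.ai (v1) webhook fulfillment.
// The intent name and parameters are invented for illustration.
function buildFulfillment(intentName, parameters) {
  // api.ai forwards the matched intent and its extracted parameters;
  // the webhook replies with the text Assistant should speak/display.
  const speech = intentName === 'store.hours'
    ? `We are open until ${parameters.closing} today.`
    : 'Sorry, I did not catch that.';
  return { speech, displayText: speech, source: 'example-webhook' };
}

console.log(buildFulfillment('store.hours', { closing: '9 PM' }));
```

In practice, api.ai handles the natural-language matching; the webhook only deals with already-structured intents and parameters, which is what makes the tooling approachable.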
Implications: At Epsilon, we have been experimenting with
conversational experiences across various formats for the past 12
months. This includes new use cases of aligning voice + visual.
We understand how to build conversational actions through storytelling,
and we understand the nuances of mapping intents and creating highly
engaging experiences.
This includes the creation of Google Assistant actions and enhancing
CX experiences through TensorFlow (ML) integration.
Evolving strategy to include conversational experiences and multi-modal interaction
models is critical to moving toward an AI-first approach.
9. Pervasive Assistant
Connection
Google Assistant was arguably the star of Google I/O 2017. It serves
as a foundational element of Google's multi-modal approach to
computing and how consumers interact across devices and situations.
Google Assistant is powered by Google’s natural language processing,
knowledge graph and machine learning products and resides on Google
Home, iPhone, Android, Wear, TV & Auto.
Google is committed to integrating Assistant into new and emerging
platforms. This includes the newly announced, computer vision-driven
Google Lens, which connects object recognition with Google
Assistant. This could serve as a foundation for service-driven AR.
Implications: Google is fully committed to shifting from a mobile-first
company to an AI-first organization. Google Assistant will serve as an
intelligence engine that flexes across products and consumer journeys
to meet consumers with the right action at the right time.
The role of actions (think skills for Alexa) will be key for brand marketers
to consider to ensure they are ready for a rapidly expanding
Google Assistant-enabled ecosystem.
Google is looking to align Google Assistant across consumer touchpoints: in the
home, on TV, smartphones, wearables, and automobiles, and soon connecting physical
and digital through computer vision aligned with Google Assistant services.
11. Assistant SDK
Connection
The Google Assistant drives value for consumers by linking either text
or voice based conversational experiences to solve everyday needs.
Brand marketers can capitalize on the rapidly scaling Google Assistant
ecosystem through the Google Assistant SDK in the form of “actions”.
Actions are comparable to Amazon Alexa skills, but with a few
differences. Actions do not require an installation the way a skill does;
Google automatically maps a user's request to the appropriate action.
Actions support deep-linking invocations, which means that if users know
where they need to go, they can jump directly to it. Finally, actions are
immediately accessible to 200 million devices that have access to
Google Assistant, including the newly launched iPhone app.
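A deep-link invocation might be routed roughly like this; the action name and intent labels below are invented for illustration, not Google's actual routing code. The idea is that a phrase carrying a payload jumps straight to a task, while a bare invocation lands on a welcome intent.

```javascript
// Hedged sketch: routing a deep-link invocation vs. a plain invocation.
// "example action" and the intent names are hypothetical.
function routeInvocation(utterance) {
  // Deep link: "talk to example action to <do something specific>"
  const deepLink = utterance.match(/^talk to example action to (.+)$/i);
  if (deepLink) return { intent: 'deep_link', query: deepLink[1] };
  // Plain invocation lands on the welcome/main intent.
  if (/^talk to example action$/i.test(utterance)) return { intent: 'welcome' };
  return { intent: 'fallback' };
}
```

The practical point for marketers: a well-designed action exposes its most valuable tasks as deep-link phrases so users never sit through a welcome menu.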
Implications: Google Assistant now supports transactions via actions,
meaning it's possible to link a transaction event directly through
Google Assistant, opening up purchasing beyond amazon.com.
Also, the number of actions is currently in the hundreds vs. the tens of
thousands of skills in market.
Rapidly growing ecosystem, scale across 200 million devices, transaction
support, discovery of actions and contextual relevance across devices are a
few reasons to consider Google Assistant Actions.
13. Assistant + Auto
Connection
Google understands that developing a successful multi-modal
approach to distributing Google Assistant is built on contextual user
experiences and the automobile unlocks key connection points for
consumers that are beyond the confines of the home.
Google’s goal with Assistant in Auto is to create an intelligent co-pilot in
the car. By integrating seamlessly across messaging, media apps,
calendar events, and navigation, Assistant can provide an enhanced in-car
experience.
Implications: The key point for brand marketers to consider is the
role that Assistant actions can play in creating new services and
in-vehicle usage as they become available.
Another point to consider is developing action experiences that
integrate with third parties to create “proxy” solutions.
An example is invoking Google Assistant plus a third-party reservation
service to account for current vehicle location, parking availability,
and traffic to book a reservation based on actual situational factors.
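The reservation example above can be sketched as a toy decision function. Every input and threshold here is invented for illustration; a real proxy solution would pull live traffic, parking, and availability data from the relevant services.

```javascript
// Illustrative only: combine drive time, expected parking time, and a
// seating buffer to pick the earliest realistic reservation slot.
// All values are hypothetical.
function suggestReservationTime(etaMinutes, parkingSearchMinutes, bufferMinutes = 10) {
  const totalMinutes = etaMinutes + parkingSearchMinutes + bufferMinutes;
  // Round up to the next 15-minute slot, as many reservation services do.
  return Math.ceil(totalMinutes / 15) * 15;
}
```

For example, a 22-minute drive plus 8 minutes to park plus the default buffer rounds up to a reservation 45 minutes out.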
Android Auto is an extension of Android OS that is templated and allows for a
seamless experience across messaging and media apps.
15. Seeing The Future
Connection
My opinion is that the announcement of Google Lens, which aligns
machine learning-powered computer vision with Google Assistant, was
a significant step toward an immersive proxy web. The proxy web is a
theory predicated on systems taking over core day-to-day human
functions; it requires both predictive elements and situational
awareness.
Google Lens + Assistant provides the ability to overlay computer
vision, which will serve as the basis for contextual augmented reality
that links to various services, from purchasing, to content, to
predictive reservations based on traffic and other environmental
factors. Voice has led the way in 2017; 2018 will be the year of
computer vision-powered experiences.
Implications: This was the first time I have seen a full use case for
the proxy web from one of the major publishers that connects proxy +
immersive.
Brand marketers need to consider that a time is rapidly approaching
where mobile will give way to ambient marketing experiences across
all modes of interaction.
Integration of object recognition, location, and Google Assistant creates
new connections between services and serves as a foundational element
of a hyper-connected future.
17. Android Wear
Connection
Wearables were a part of the Google I/O 2017 programming. Google
jumped into smartwatches in 2014, but the category has been slowing
over the past year.
In February of this year, Google launched Android Wear 2.0, which
introduced new third-party brands (growing from 12 to 24); there are now
over 46 Android Wear watches.
Android Wear 2.0 features a redesign from the ground up: a new design,
native applications, messenger integration, and most importantly,
Google Assistant integration.
Implications: Android Wear’s ability to support enhanced social
messaging such as stickers shows the potential associated with
independent experiences from a mobile device.
More important, though, is the integration of Google Assistant, and
specifically actions, directly into Android Wear. This is yet another
engagement point for Google Assistant. Brand marketers need to
consider the role that “actions” will play across device types, including
Android Wear.
Google Assistant is now integrated with Android Wear. This includes
the ability to activate Google Assistant Actions.
19. Android O
Connection
During Google I/O the Android O beta preview was announced and is
ready for download.
The Android operating system now has 2 billion devices and will
continue to evolve to support Google’s approach to multi-modal
computing.
Android O serves as a key entry point for consumer engagement, and
Google is working to streamline the experience: from picture-in-picture,
which lets users video call and take notes at the same time, to app
shortcuts and widgets, to smart text selection powered by machine learning.
One of the key points for brands that have native apps available for
Android is the introduction of notification dots that appear on the app icon.
Implications: Google is taking methodical steps toward its future-state
experimental operating system, Fuchsia, which will make native
apps more of a middle-layer service vs. an end destination. Android O
will serve as a bridge by integrating machine learning and foundational
immersive computing elements for VR & AR.
The Android O operating system was made available as a beta during Google
I/O 2017. It’s enhanced through AI and built to support VR experiences.
21. A Faster Web
Connection
The web is the biggest platform in the world, bigger than any OS or
device platform, with over 5 billion connected devices. Google is
focused on creating deeply engaging experiences as soon as a user
lands on a mobile website, giving them a native-like experience from
the web.
This platform is called Progressive Web Apps (PWAs). PWAs allow
mobile web experiences to integrate features normally reserved for
native applications, such as instant loading, push notifications,
placement on the home screen, smooth animations, and high
responsiveness.
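The app-shell pattern behind that instant loading can be sketched as a pure routing decision. Real PWAs implement this inside a service worker's `fetch` event handler using the Cache API; the paths below are hypothetical and the browser APIs are omitted so the strategy itself is visible.

```javascript
// Sketch of the caching decision a PWA service worker might make.
// Paths are invented examples; real code runs in a 'fetch' handler.
function chooseStrategy(request) {
  // Static shell assets load instantly from cache (app-shell pattern).
  const shellAssets = ['/', '/app.js', '/styles.css'];
  if (shellAssets.includes(request.path)) return 'cache-first';
  // API data goes network-first so content stays fresh, with the cache
  // as an offline fallback.
  if (request.path.startsWith('/api/')) return 'network-first';
  return 'network-only';
}
```

The design choice is the point: the shell is cached aggressively for instant loads, while data requests prefer the network so the experience degrades gracefully offline.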
Implications: Google has a vested interest in ensuring the mobile web
remains relevant. With PWAs, brands can create immersive web
experiences for online and offline viewing that work across multiple
platforms (iOS & Android).
Google is also committed to driving discovery of PWAs through
additional spots while delivering improved cross-functionality when
switching between PWAs and apps.
The internet has evolved, and how we think about the mobile web needs to evolve as
well. Mass adoption of HTML5 has fueled the evolution of Progressive Web Apps.
23. Search + Structured Data
Connection
Google search will continue to evolve. Those who follow SEO trends
closely have seen major shifts over the past few years, from the rising
percentage of search results that are AI-informed to now thinking
about SEO as it pertains to the launch of the Chrome browser in
virtual reality.
Google is increasing the role and weighting that structured data plays
in enhancing search experiences via rich cards. Structured data
feeding into Google search is becoming increasingly important for
brands that deal in entertainment, food and beverage, events, and
more as a key point of differentiation within Google search.
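A hedged example of the kind of schema.org structured data that rich cards consume; the recipe fields below are illustrative, not a real listing. Pages embed the serialized object in a `<script type="application/ld+json">` tag so crawlers can parse it.

```javascript
// Illustrative schema.org markup for a recipe rich card.
// All names and URLs are hypothetical placeholders.
const recipeCard = {
  '@context': 'https://schema.org',
  '@type': 'Recipe',
  name: 'Example Lemon Cake',
  image: 'https://example.com/lemon-cake.jpg',
  author: { '@type': 'Organization', name: 'Example Brand' },
  recipeYield: '8 servings',
};

// Serialized form, ready to embed in the page's <head>:
const jsonLd = JSON.stringify(recipeCard);
```

Because the markup is machine-readable, the same structured facts can feed visual rich cards today and voice or computer vision-enhanced search surfaces later, which is the differentiation the section describes.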
Implications: As Google evolves search across conversational
experiences, voice based experiences and more, the role of structured
data fueling search experience will be key.
Structured data markup will continue to evolve, and those
organizations that embrace the role of structured data and rich cards
will be prepared for the evolution towards computer vision enhanced
search.
26. Immersive Computing
At Google I/O 2017, Google outlined its view of immersive
computing: virtual reality (VR) and augmented reality (AR) are
different labels on a single immersive computing spectrum.
They are looking to provide tools that enhance the real world such as
the AR tool Tango. They are also providing hardware and software
solutions, such as Daydream, for computer-generated virtual reality.
In addition to the tools, Google understands the need for portability of
experiences and demonstrated experimental browser based AR
experiences while also demonstrating how to drive additional
discovery and shareability of experiences within a virtual environment.
Implications: With the unveiling of the computer vision-driven Google
Lens engine, the Visual Positioning Service, enhancements to Assistant,
and tools that create immersive experiences, Google is positioning for
the next evolution of consumer experiences.
They demonstrated a forward-looking vision for immersive computing
that lays a foundation for mass adoption through devices we already
use every day, while designing flexibility for hardware that is yet to
be invented.
Google views immersive computing as a spectrum spanning the real world and
computer-generated ones, with AR & VR residing at various points along it.
28. VR Hardware & Software
Google outlined a virtual reality strategy that revolved around both
hardware and software enhancements.
From a hardware perspective, Google announced its first standalone
VR headsets, to be released later this fall in partnership with Lenovo
and HTC.
Software enhancements include new VR support in the upcoming
Android O operating system, bringing notifications and settings into
Daydream to keep consumers in VR.
Another part of their strategy is a focus on driving discovery within VR
through WebVR experiences, as well as new tools to empower
creators, increase sharing, and allow for co-viewing.
Implications: Brand marketers need to consider the conversion of
content experiences from 2D to 360. Both Facebook & Google are
focused on increasing accessibility and the ability to share virtual
experiences.
The introduction of VR web browsers, as well as the integration of core
features such as notifications and an engine that automatically
converts core Google services into VR, is a key territory to monitor.
Google’s approach to virtual reality is to make immersive
experiences accessible for everyone.
30. Sharing & Co-Viewing
At Facebook’s F8 conference a month prior, Facebook officially
launched Spaces for Oculus. From virtual selfies to watching 360
video, it’s very clear that Facebook is focused on creating a new
form of social interaction via a virtual environment.
Not to be outdone, Google revealed their strategy for creating social
connections and sharing experiences via VR. Google showcased the
ability to “cast” your experience to a nearby screen so others can see
what you are viewing.
Google also enabled easy sharing and capture of content in VR, and
showcased co-viewing via YouTube and other VR experiences that
allow groups to connect and share the content together.
Implications: One of the drawbacks to VR adoption has been how
isolating the experience can be, with limited ability to share
“what’s happening”. Both Google and Facebook realize that adoption
is closely tied to accessibility and the ability to share experiences.
Brands that can capitalize seamlessly on providing engaging and
shareable experiences that also empower consumers to own and
create within virtual environments will have a definite advantage.
Consumer empowerment and the accessibility to share experiences are critical to fuel
adoption of emerging technology. Google is taking the first steps to connect consumers virtually.
32. Discovery in VR
One of the primary missions of Google is to connect users with
information. This means enabling discovery of information across all of
their products.
When it comes to virtual reality, Google has retooled the Chrome
browser for VR to create a frictionless, immersive experience that
works across all forms of VR headsets.
From a technology perspective, Google is focused on portability of
content. This includes importing elements originally designed for 2D
consumption like notifications into a virtual environment seamlessly.
Implications: One of the key issues with virtual reality to date is the
chaining of experiences from one to the next. By taking an existing
behavior that consumers are comfortable with, such as web browsing,
Google can ease the shock of moving from one type of computing to
another.
Marketers need to consider how their experiences are received across
traditional and new forms of computing, ensuring that discovery and a
seamless experience are at the center of their strategy.
Google is reimagining web browsing within a virtual environment. Integrating and
expanding 2D elements into VR as well as making navigation and content discovery easier.
34. Visual Positioning Service
Over the years, Google has been a pioneer in mapping the outside
world, from Google Maps to Street View of your neighborhood.
GPS gets you to the door of your destination; now, through its Visual
Positioning Service, Google is looking to map the interior of spaces.
This service is revolutionary, as it will serve as a foundational element
in truly converting the real world into a digital palette.
Implications: In order for augmented reality to truly scale, it has to be
accessible, easy to use, and most importantly, accurate.
The Visual Positioning Service allows products like Tango to create
landmarks for AR positioning of experiences.
This is key for brands, especially those with physical spaces, as
VPS will allow mapping and subsequent interaction points that can
activate down to centimeter-level accuracy.
The Visual Positioning Service will play a key role in driving scale for augmented reality
solutions. Mapping down to centimeters and anchoring AR solutions in a physical space
in real time are key to consumer adoption and brand enablement.
36. Richer, Deeper AR
Tango is Google’s augmented reality engine that brings motion
sensing, environment scanning, and capture to mobile devices.
Tango combined with the Visual Positioning Service provides a framework
to create augmented reality solutions at scale. The demo showed the
conversion of a 10,000 sq. ft. museum into a rain forest.
Tango is available on a select number of mobile devices; that
library will continue to evolve over the next few years, and you will see
higher portability of experiences in the future, including WebAR.
Implications: Tango is a powerful platform that will play an even
bigger role in the near future. With depth sensing, wide angle tracking
and relocalization, Tango enabled devices can deliver immersive
experiences that map an entire space.
Enhancing the real world through augmented reality is a key element
of Google’s future-state strategy. By aligning computer vision, Tango,
and Assistant technology, Google has the foundation to build passive
and active AR solutions at scale that are mapped to the contextual
actions of a consumer.
Google demoed Into the Wild, a collaboration with the WWF. Using Tango + VPS, they
created a virtual/AR rain forest out of a 10,000 sq. ft. museum space.
38. AR via Browser
Facebook is viewing the camera as the first augmented reality
platform. Google is taking a slightly different approach: the camera is
an important part of the ecosystem, but Google is interested in
enabling behaviors beyond sharing and creating effects, such as
supporting commerce.
One of the more surprising announcements during the event was the
reveal of WebAR. Essentially Google has enabled augmented reality
through an experimental version of Google Chrome for Android.
This experience is delivered without an application and is a
combination of JavaScript and WebGL.
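This is not Google's WebAR code, but a minimal sketch of the per-frame math such a browser renderer performs: projecting a tracked world-space anchor onto the screen with a simple pinhole-camera model. The focal length and viewport values are arbitrary placeholders.

```javascript
// Pinhole-camera projection: map a camera-space 3D point (z pointing
// away from the camera) to pixel coordinates on the viewport.
function projectToScreen(point, focalLength, width, height) {
  // Perspective divide (x/z, y/z) scaled by focal length, then shifted
  // so the optical center lands in the middle of the viewport.
  const u = (point.x / point.z) * focalLength + width / 2;
  const v = (point.y / point.z) * focalLength + height / 2;
  return { u, v };
}
```

In a real WebAR page, the tracking system (e.g. Tango-derived pose data) supplies the anchor's camera-space position each frame, and WebGL draws the virtual content at the projected pixel so it appears fixed in the physical scene.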
Implications: Accessibility and ease of use will be key to the adoption
of consumer-centric augmented reality. Integrating AR into the
browser, without the need for an application, is a big step toward a
seamless approach to immersive computing.
This, combined with computer vision integration via Google Lens, can
set the stage for converting the real world into a hyper-connected
digital one.
Google showcased an experimental version of the Google Chrome browser that supports
real-time AR rendering directly through a mobile browser experience without an app.