VENTURI is a collaborative European project targeting the shortcomings of current Augmented Reality design, bringing together mobile platform manufacturers, technology providers, content creators, and researchers in the field.
VENTURI aims to place engaging, innovative and useful mixed reality experiences into the hands of ordinary people by co-evolving next-generation AR platforms and algorithms.
VENTURI plans to create a seamless and optimal user experience through a thorough analysis and evolution of the AR technology chain, spanning device hardware capabilities to user satisfaction.
1. Oct ‘11 – Oct ‘14
“creating a pervasive Augmented Reality paradigm, where information is presented in a ‘user’ rather than a ‘device’ centric way”
FP7-ICT-2011-1.5 Networked Media and Search Systems
End-to-end Immersive and Interactive Media Technologies
3. Partners
- FBK - Italy
- Fraunhofer HHI - Germany
- ST-Microelectronics - Italy
- Metaio - Germany
- ST-Microelectronics and ST-Ericsson - France
- e-Diam Sistemas - Spain
- Sony Mobile - Sweden
- INRIA - France
[Pie chart: consortium composition across Research Institutions, SME, and Industry - 38% / 37% / 25%]
4. Market
[Chart: project coverage spans the full spectrum from pure research and applied research, through lab prototypes, mobile prototypes, and pre-products, to products, end users, and the market]
5. Challenges
- AR-focused platform development
- Visual registration chain: user-device-world
- World modelling using consumer-level mobile devices
- Mobile contextual understanding
- Context-sensitive content delivery
- User interactions with AR
6. AR Platform Evolution
- Multi-core CPU & GPU
- Hi-res single or dual cameras, plus a large set of sensors
- Smart power management policies
- Address AR requirements in the development of the platform SW framework and services (e.g. sensor fusion, video pipe optimized for AR use)
- Optimize the whole processing chain, using server-side resources (i.e. the cloud) when possible
7. Registration chain
- Match visual features with nearby photos to identify ‘tagged’ landmarks
- Match visual features to synthetic models of the world
- Locate text/logos/signs in the environment, then check against local geo-objects/events
- Use on-board sensors to guide image/audio processing algorithms
- Estimate user (body, face, hands) position with respect to the device
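The feature-matching steps above can be sketched as brute-force nearest-neighbour matching of binary descriptors under the Hamming distance, with Lowe's ratio test to reject the ambiguous matches common in repetitive urban textures. The toy 8-bit descriptors are made up for illustration; real binary descriptors (e.g. BRIEF/ORB) are typically 256-bit.

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(d1 ^ d2).count("1")

def match_features(query, reference, ratio=0.8):
    """Brute-force matching with a ratio test.

    Keeps a (query_idx, ref_idx) pair only when the best match is
    clearly better than the second best, discarding ambiguous matches.
    """
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((hamming(q, r), ri) for ri, r in enumerate(reference))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches

# Toy descriptors: each query has one unambiguous reference match.
query = [0b11110000, 0b10101010]
reference = [0b11110001, 0b00001111, 0b10101011]
print(match_features(query, reference))  # → [(0, 0), (1, 2)]
```

A production registration chain would add geometric verification (e.g. RANSAC over a homography or essential matrix) on top of these putative matches.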
8. World modelling
- Photogrammetric 3D reconstruction using mono/stereo cameras (including historical imagery)
- Structure from motion for modelling dynamic objects in the scene
- Planar surface identification for ad-hoc interactive surfaces
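Planar surface identification is commonly done by fitting a dominant plane to a reconstructed point cloud with RANSAC. A minimal pure-Python sketch, assuming a synthetic cloud with one horizontal "table top" plane plus outliers:

```python
import random

def plane_from_points(p1, p2, p3):
    """Plane (unit normal, d) through three points, or None if degenerate."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm < 1e-9:
        return None
    n = [c / norm for c in n]
    return n, -sum(n[i] * p1[i] for i in range(3))

def ransac_plane(points, iterations=200, threshold=0.05):
    """RANSAC plane fit; returns (inlier indices, (normal, d))."""
    best = ([], None)
    for _ in range(iterations):
        model = plane_from_points(*random.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [i for i, p in enumerate(points)
                   if abs(sum(n[k] * p[k] for k in range(3)) + d) < threshold]
        if len(inliers) > len(best[0]):
            best = (inliers, (n, d))
    return best

random.seed(0)
# 40 points on the z = 0 plane, plus 10 floating outliers.
points = [(random.random(), random.random(), 0.0) for _ in range(40)]
points += [(random.random(), random.random(), 0.5 + random.random())
           for _ in range(10)]
inliers, model = ransac_plane(points)  # recovers the z = 0 plane
```

On a device, the inlier plane would then be exposed to the AR layer as an anchorable interactive surface.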
9. Mobile context understanding
- User motion/activity analysis using on-board sensors
- Fusion of cues from:
  - modelling and registration
  - geo-objects
  - geo-social activity
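A rough sketch of motion/activity analysis from on-board sensors: classify a window of accelerometer magnitudes by how much they spread around gravity. The thresholds and labels below are illustrative assumptions, not calibrated values from the project.

```python
from statistics import pstdev

def classify_activity(accel_mags, still_thresh=0.3, walk_thresh=3.0):
    """Rough activity label from a window of |accel| samples (m/s^2).

    Low variation suggests the device is still; moderate variation
    suggests walking; large variation suggests running.
    """
    spread = pstdev(accel_mags)
    if spread < still_thresh:
        return "still"
    return "walking" if spread < walk_thresh else "running"

# Synthetic windows of acceleration magnitude around gravity (9.81 m/s^2).
still = [9.81 + 0.05 * ((i % 3) - 1) for i in range(50)]
walking = [9.81 + 1.5 * ((i % 4) - 1.5) for i in range(50)]
print(classify_activity(still), classify_activity(walking))
```

Real context engines fuse this motion cue with the registration, geo-object, and geo-social cues listed above rather than relying on any single signal.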
10. Context sensitive AR delivery
- Inject AR data in a natural manner according to:
  - the environment
  - occlusions
  - lighting and shadows
  - user activity
- Exploit user and environment ‘context’ to select the best delivery modality (text, graphics, audio, haptic, etc.), i.e. scalable/simplifiable audio-visual content
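One way such modality selection could look is a simple rule table over the user/environment context. The context keys, thresholds, and rules below are hypothetical illustrations, not the project's actual policy.

```python
def select_modality(context):
    """Pick a delivery modality (text, graphics, audio, haptic)
    from a context dict. All keys are optional; rules are examples."""
    if context.get("activity") == "driving":
        return "audio"  # eyes are busy: never occlude the view
    if context.get("ambient_noise_db", 0) > 80:
        # Too loud for audio; fall back to touch if the screen is off.
        return "haptic" if context.get("screen_off") else "graphics"
    if context.get("bandwidth_kbps", 0) < 100:
        return "text"   # degrade gracefully on a poor link
    return "graphics"

print(select_modality({"activity": "walking", "bandwidth_kbps": 5000}))
```

The same idea extends to scalable content: rather than switching modality outright, the renderer can simplify the audio-visual payload along these context dimensions.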
11. User Interactions
- Explore evolving means of AR delivery and interaction
- In-air interfaces for sensing gestures (motion of device, hands, face, etc.)
- 3D audio
- Micro-projection for multi-user, social AR
- AR visors/glasses
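For the 3D audio item, one minimal building block is constant-power stereo panning of a source by its azimuth relative to the listener. Full 3D audio would also model elevation, distance, and HRTF filtering, so this is only a sketch of the simplest layer.

```python
import math

def pan_gains(azimuth_deg):
    """Constant-power stereo gains for a source at azimuth_deg
    (-90 = hard left, 0 = centre, +90 = hard right).

    Returns (left_gain, right_gain) with left^2 + right^2 == 1,
    so perceived loudness stays constant as the source moves.
    """
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2)  # map to [0, pi/2]
    return math.cos(theta), math.sin(theta)

left, right = pan_gains(0)  # centred source: equal gains on both channels
```

Constant-power panning avoids the loudness dip at centre that plagues naive linear cross-fades, which is why it is the conventional choice.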
12. Prototypes
- A consolidated prototype at the end of each year, to be evaluated through use cases
- 3 use cases:
  - Tourism
  - Gaming
  - Personal assistant
13. “creating a pervasive Augmented Reality paradigm, where information is presented in a ‘user’ rather than a ‘device’ centric way”