3. Introduction
Q. What is Clayvision?
• It is a new quasi-immersive urban navigation system that rethinks the design conventions of existing Augmented Reality (AR) applications.
• Instead of overlaying "information bubbles" onto the urban scenery, ClayVision communicates through real-time 3D transformations of city elements in the video feed.
3/8/2014
3
4. Contd..
• In other words, the system reassembles the city into a better-designed copy of the original, one that is both easier to navigate and tailored to the user's needs and preferences.
5. History
• Many experimental systems were built in the mid-1990s; they were marked by bulky setups and low frame rates.
• Later systems, whose devices lacked graphical capability, had to send camera images to a server for every frame, which increased the computational burden.
6. Contd..
• Even further-developed AR techniques lacked accuracy: in the "information bubble" display, the bubbles do not have absolute, exact positions within 3D space.
7. Requirements
• High-speed wireless Internet connection.
• Hardware device with a camera and a display screen.
• A 3-D graphics engine
  - part of the system.
  - handles graphical simulation and interfaces.
8. 3-D Graphics Engine
What is an API?
  - a protocol used as an interface by software for communication.
What is OpenGL?
  - routines, data structures, objects, classes, etc.
What is GLUT?
  - libraries with C++ and Java APIs.
9. Application Programming Interface
Q. What is an API?
  - a protocol used as an interface by software for communication.
  - routines, data structures, objects, classes, etc.
  - libraries in C++, Java, etc.
11. What is GLUT?
GLUT is the OpenGL Utility Toolkit.
• It is not part of OpenGL.
• "GLUT is designed for constructing small to medium sized OpenGL programs."
12. Overall Flow:
Hardware device used as input → Image capturing → Data sent to satellite → Correlation of co-ordinates → Image processing → Use of the 3-D graphics engine → Data sent back to the hardware device → Data is displayed.
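The flow above can be sketched as a simple function pipeline. Everything here is an illustrative assumption: the function names, the fixed co-ordinates, and the "frame" represented as a plain list are stand-ins, not ClayVision's actual API.

```python
# Illustrative sketch of the overall flow; every function is a
# hypothetical stand-in, not part of any real ClayVision interface.

def capture_frame(camera):
    """Hardware device used as input: grab one camera frame."""
    return camera()  # in a real system this would read the sensor

def localize(frame):
    """Satellite step: correlate co-ordinates for this frame."""
    return {"lat": 35.66, "lon": 139.70}  # dummy fixed location

def process_image(frame, location):
    """Image processing: extract features relative to the location."""
    return {"features": len(frame), "location": location}

def render(processed):
    """3-D graphics engine: produce the transformed view to display."""
    return f"frame with {processed['features']} features at {processed['location']['lat']}"

def pipeline(camera):
    frame = capture_frame(camera)
    location = localize(frame)
    processed = process_image(frame, location)
    return render(processed)

# Usage with a fake camera that returns a dummy 8-element "frame"
print(pipeline(lambda: [0] * 8))  # → frame with 8 features at 35.66
```

Each stage only passes its result forward, mirroring the one-way arrows in the flow diagram.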
13. Image Processing:
• Image processing of the video feed is done using SIFT, which outputs a set of feature points and other data used to determine the relative position of the entire frame.
• Scale-invariant feature transform (SIFT) is a computer-vision algorithm for detecting and describing local features in images.
• The output is used to compare the video feed against a database of pictures; the template pictures are then transformed, based on the device's specifications, to produce the correct pose.
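The frame-to-database comparison above boils down to matching feature descriptors. This is a minimal pure-Python sketch of nearest-neighbour matching with Lowe's ratio test (the standard way SIFT descriptors are matched); the toy 2-D "descriptors" stand in for real 128-dimensional SIFT vectors.

```python
import math

def euclidean(a, b):
    """Distance between two feature descriptors (e.g. 128-D SIFT vectors)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_descriptors(frame_desc, template_desc, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test: a frame
    descriptor matches only if its best template match is clearly
    closer than the second best. Returns (frame_idx, template_idx) pairs."""
    matches = []
    for i, d in enumerate(frame_desc):
        order = sorted(range(len(template_desc)),
                       key=lambda j: euclidean(d, template_desc[j]))
        best, second = order[0], order[1]
        if euclidean(d, template_desc[best]) < ratio * euclidean(d, template_desc[second]):
            matches.append((i, best))
    return matches

# Toy 2-D "descriptors" instead of real 128-D SIFT vectors
frame = [(0.0, 0.0), (5.0, 5.0)]
templates = [(0.1, 0.1), (5.1, 5.0), (9.0, 9.0)]
print(match_descriptors(frame, templates))  # → [(0, 0), (1, 1)]
```

The ratio test discards ambiguous matches, which is what makes comparing a live frame against a photo database robust enough to localize the device.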
14. Contd..
• After localization, the projection and modelview matrices are calculated to map 3D building models onto the feed.
• These models are then textured using information from the feed and transformed to communicate information to the user.
• Correct texturing is achieved by blending the image background with template-picture information in a way that does not disrupt the video and allows transformations without excessive errors.
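The modelview/projection mapping above can be sketched in a few lines: a world-space point goes through the modelview matrix, then the projection matrix, then a perspective divide. The matrices below are toy values chosen for illustration, not ClayVision's actual calibration.

```python
# Minimal sketch of mapping a 3-D point into the frame with modelview
# and projection matrices; all matrix values here are illustrative.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def project(point3d, modelview, projection):
    """Transform a world-space point to normalized device coordinates."""
    x, y, z = point3d
    eye = mat_vec(modelview, [x, y, z, 1.0])        # world -> camera space
    clip = mat_vec(projection, eye)                 # camera -> clip space
    w = clip[3]
    return [clip[0] / w, clip[1] / w, clip[2] / w]  # perspective divide

# Identity modelview; a toy projection whose last row sets w = -z,
# the hallmark of a perspective projection
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
toy_projection = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, -1.0, 0.0],
]
print(project((2.0, 4.0, -2.0), identity, toy_projection))  # → [1.0, 2.0, -1.0]
```

Once every vertex of a building model has been mapped this way, the model lines up with the building's pixels in the feed and can be retextured in place.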
15. Working
• Each frame of the real-time video feed is compared to a collection of photos shot from the same location.
• Attributes such as building shapes, colours, and materials are modified so that they represent useful information, thus increasing the efficiency of visual communication.
• This is the approach taken by ClayVision.
17. Navigation:
• Buildings can be emphasized by strategically changing their visual attributes.
• Attaching fake facades, enhancing the height, or raising saturation levels helps the user find the targeted building more quickly.
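The emphasis idea above can be sketched by treating a building as a small record of visual attributes and exaggerating them before rendering. The attribute names and scaling factors here are assumptions made for illustration only.

```python
# Hedged sketch of emphasizing a target building: scale its height and
# boost its colour saturation so it stands out in the rendered scene.
# Attribute names and default factors are illustrative assumptions.

def emphasize(building, height_factor=1.5, saturation_boost=0.25):
    """Return a modified copy of the building with exaggerated
    attributes; saturation is clamped to its maximum of 1.0."""
    out = dict(building)  # copy, so the original model is untouched
    out["height"] = building["height"] * height_factor
    out["saturation"] = min(1.0, building["saturation"] + saturation_boost)
    return out

target = {"name": "station", "height": 40.0, "saturation": 0.5}
print(emphasize(target))  # taller, more saturated copy of the target
```

Working on a copy matters: the unmodified model is still needed to texture the rest of the scene consistently.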
19. Advantages
• It will be very useful in the fields of urban planning and design.
• Navigation systems can be upgraded to a new level using ClayVision technology.
• 3-D model mapping is implemented using ClayVision technology.
20. Disadvantages
• This design attracts a significant part of the user's attention, which may leave the user less attentive to other pedestrians, cars, etc., creating a serious safety risk.
• Slowdowns may occur, especially because of network speed.
21. Future Scope
• Panorama creation: all major city elements can be converged into a single screen.
• Straightening streets: some streets are crooked, extending in seemingly random directions; ClayVision can provide a clear view of what lies further down the road.
• Manual interaction: tapping or drawing on the screen could be supported, for example to cut a hole in a building and see what lies beyond.
22. Conclusion
• Thus, ClayVision is a novel vision-based augmented reality system that offers the experience of real-time urban design. It comprises a set of techniques enabling freeform transformations of built elements in the city, along with a range of transformation operations and their implications for the urban experience.