3DTV state of the art
1. 3D Television
The future of television
Elliott ELLIS – David METGE – Paul VERGE – Romain ZIBA
15/03/2010
SUMMARY

INTRODUCTION
1) 3D FOR DUMMIES
   A) THE STEREOSCOPIC SOLUTION: FROM OLD ITALY TO HIGH-TECH PUBS
      i. Principle
      ii. Evolution
         The first tries
         Photography
         The 3D video
   B) MULTIVIEW
   C) HOLOGRAPHY
      Capturing a digital hologram
2) CAPTURE
   i. Camera & its configuration
   ii. Corrections
3) CONVENTIONAL 3D STEREO CODING
   A) MPEG-2 MULTIVIEW PROFILE
   B) H.264/AVC SIMULCAST
   C) H.264 SEI MESSAGE
   D) DOWNSAMPLING WITH H.264/AVC
4) VIDEO PLUS DEPTH CODING
   A) MPEG-C PART 3
   B) H.264/AVC WITH AUXILIARY PICTURE SYNTAX
   C) MPEG-4 MAC
   D) H.264/SVC
5) MULTIVIEW 3D
   A) MULTIVIEW CODING STANDARD (MVC)
   B) MULTIVIEW PLUS DEPTH (MVD)
   C) LAYERED DEPTH VIDEO
   D) DEPTH ENHANCED STEREO
   E) 3D VIDEO CODING
TRANSPORT METHODS FOR 3DTV
   A) STEREOSCOPIC 3D TV OVER SATELLITE
      i. What is transmitted?
      ii. How to transmit it?
         MPEG-4/AVC: Advanced Video Coding
         DVB-S2: Digital Video Broadcasting for Satellite (version 2)
   B) MULTIVIEW TRANSPORT
      i. Over Internet Protocol
      ii. Over satellite
         Without IP
         With IP
6) DISPLAY STANDARDISATION
   A) STEREOSCOPIC SCREENS
      i. 3D-Ready
      ii. Full-3D
   B) AUTO-STEREOSCOPIC DISPLAYS
   C) OTHER STANDARDS
      i. Blu-Ray Disc Association
      ii. HDMI 1.4 and DisplayPort 1.2
   D) PORTABLE DEVICES: 3D MOBILES
CONCLUSION
REFERENCES
INTRODUCTION
Born since January, 27th of 1926, television has established itself in our homes as an essential, a
unifying item, putting rhythm in our daily life. The history of television is both complex and far-
reaching, involving the work of many inventors and engineers in several countries over many decades.
Initially, work proceeded along two different but overlapping lines of development: those designs
employing both mechanical and electronic principles, and those employing only electronic principles.
Electromechanical television would eventually be abandoned in favor of all-electronic designs.
Nevertheless, we can notice three big steps in the evolution of television.
The first step was achieved in the 1950s in the USA and the 1960s in Europe. Coming from black-and-white television, people could now see scenes and characters in their real colors for the first time, thanks to SECAM in France, PAL elsewhere in Europe, and NTSC in the USA. In France, for example, programs were first broadcast in color on 1 October 1967, although only 1,500 color television sets were in use at the time.
The second revolution is under way. From analog broadcasting, we are going digital. This step paved the way for high definition: viewers are now offered exceptional image quality, improving visual comfort and making the image more realistic. With the digitization of television, we can receive it everywhere thanks to several means of telecommunication. Analog broadcasting is shutting down gradually: so far, only about ten countries have totally abandoned it in favor of digital-only broadcasting, using mainly DVB-T in Europe and ATSC (the successor of NTSC) in the USA. Another aspect of this revolution is that screens are getting thinner: we are far from the days of CRTs (Cathode Ray Tubes). Plasma, LCD and now LED screens let us put a TV anywhere, like a picture frame, while consuming less power and looking elegant.
Finally, we now have a magnificent image and pure sound, but television remains two-dimensional. The third step, then, will be three-dimensional television. With this technology, the viewer can immerse himself deeply in the movie, experiencing new sensations and a totally new TV revolution. Given the success of the movie Avatar (more than one billion dollars at the box office), we can safely say that three dimensions will become commonplace in the coming years. Many companies are ready to launch 3D channels, such as Canal+, which is considering starting one before the end of 2010. However, this technology requires standards that are not all ready yet.
In this report, we will try to summarize the different aspects of 3D television. In the first part, we present the various 3D technologies and how each works. In the second part, we focus on the transmission chain, from the camera that shoots a movie to the screen that displays it, with particular emphasis on the transmission stage. Indeed, our first goal is to determine which standard is best suited to broadcasting 3D television over satellite. Finally, we will see what could be done in the near and far future to improve the viewer's sensations.
1) 3D FOR DUMMIES
A) THE STEREOSCOPIC SOLUTION: FROM OLD ITALY TO HIGH-TECH PUBS
I. PRINCIPLE
The principle of stereoscopy is to present to the user two pictures taken with a slight horizontal offset. Each image has to be shown to a single eye, for example through a system of glasses. The rest of the job is done by the brain, which creates a new image from the two flat pictures. If the pictures were correctly taken, the user's brain will perceive an impression of relief. Before going any further, a very important distinction has to be made: this effect is often called 3D, which is not quite accurate, because the third dimension (depth) is only "simulated". The viewer cannot move around the image to discover what is hidden behind it. Therefore, "2½D" would be a more appropriate expression.
FIGURE 1 - LEFT VIEW AND RIGHT VIEW
The process by which the brain exploits the parallax (watching the same object from different points of view) between the two eyes' views to gain depth perception and estimate distances to objects is called "stereopsis". Without going too deep into the physiological explanations: it appears that the binocular cells of the visual cortex have receptive fields at different horizontal positions in each eye, and these cells are only active when their preferred stimulus is in the right position in each eye. [1]
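The geometry behind stereopsis can be sketched in a few lines: the horizontal offset (disparity) between the two images of a point is inversely proportional to its distance. The focal length and eye-separation values below are illustrative assumptions, not figures from the text.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px=1000.0, baseline_m=0.065):
    """Z = f * b / d: points with larger disparity are closer."""
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full(disparity.shape, np.inf)   # zero disparity -> "at infinity"
    seen = disparity > 0
    depth[seen] = focal_px * baseline_m / disparity[seen]
    return depth

# The nearer of two points produces the larger disparity:
near, far = depth_from_disparity([20.0, 10.0])
```

With these illustrative numbers, disparities of 20 and 10 pixels correspond to depths of about 3.25 m and 6.5 m respectively, which is exactly the cue the binocular cells provide.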
II. EVOLUTION
THE FIRST TRIES
Most people assume that the principle of stereoscopy was invented by Sir Charles Wheatstone in the middle of the 19th century, alongside the invention of photography, but surprisingly the true precursor could be Jacopo Chimenti, an Italian painter of the 16th century. He painted pairs of the same subject with a slight deviation between them. Perceiving the illusion was probably difficult at the time, but the idea was the right one to produce it.
PHOTOGRAPHY
The true development of stereoscopy went hand in hand with photography, and with the possibility of taking pictures with two cameras separated by the typical distance between the eyes (about 65 mm). The first to use this progress was Charles Wheatstone. He built a device, the stereoscope, based on a combination of prisms and mirrors that allows a person to see 3D images from 2D pictures. [2][3]
FIGURE 2 - THE STEREOSCOPE
THE 3D VIDEO
The most interesting use of stereoscopy is of course video. Strictly reserved for movies at the beginning, the latest technologies have allowed live programs to be broadcast in 3D. Stereoscopic motion pictures can be produced through a variety of methods. It all began in the late 1890s, when British film pioneer William Friese-Greene filed a patent for a 3D movie process. In his patent, two films were projected side by side on screen, and the viewer looked through a stereoscope to converge the two images.
After a golden era in the 1950s, the popularity of 3D movies dropped and they were almost forgotten until the mid-1980s, when IMAX began large-scale production of 3D content. A key point is that this production emphasized the mathematical correctness of the 3D rendition, and thus largely eliminated the eye fatigue and pain caused by the approximate geometries of previous 3D incarnations. 3D movies really re-entered theaters in 2003, pushed by the determination of one man: James Cameron. He is responsible for a great deal of progress in 3D content and, more importantly, in the development of 3D HD cameras. His masterpiece, Avatar, benefits from state-of-the-art 3D technologies such as Dolby 3D and IMAX 3D, and stands as the best that today's technologies can produce in terms of 3D content. [4]
The next challenge for 3D video is the broadcasting of live events in 3D. This is a far more complicated challenge because of the real-time constraint: the technologies developed for movies need a considerable amount of processing time. Despite this major issue, the first experiments have already been made. This is how English football fans were able to watch some of their favorite teams' games in 3D in specially equipped London pubs. [5]
FIGURE 3 - FIRST 3D SPORTS EVENT IN A PUB
B) MULTIVIEW
Stereoscopic displays are currently in use and work quite well. They are the first approach to 3D displays, and they follow the evolution of the technology used in cinema. However, this kind of technology does not really fit the needs of the everyday user: would you wear glasses every time you want to watch television, even while eating? That is why research is working on so-called "multiview" or "auto-stereoscopic" displays. Multiview will be used to drive the next generation of 3D displays, which enable viewing of high-resolution stereoscopic images without glasses and from arbitrary positions. Moreover, compression will be needed because of the increase in the amount of data. New compression techniques are being studied that exploit not only the temporal correlation but also the inter-view correlation, since all cameras capture the same scene from different viewpoints. The main standard for temporal correlation is the well-known H.264/AVC.
In practice, good compression efficiency is achieved only if multiview coding brings a significant gain compared to independent compression of each view. Compression efficiency measures the tradeoff between cost (in terms of bit rate) and benefit (in terms of video quality). However, there can be other, contradictory requirements, such as compression efficiency versus low delay.
As you can imagine, producing an auto-stereoscopic display requires playing with optical properties. Optical elements aligned on the surface of the screen ensure that each eye of the observer sees a different view, each one visible from a particular viewing angle. To achieve this, an optical layer is added to the screen to redirect the light passing through it. There are two common types of optical filters: the lenticular sheet, which works by refracting the light, and the parallax barrier, which works by blocking the light in certain directions.
FIGURE 4 - METHODS FOR IMAGE SEPARATION: A) LENTICULAR SHEET AND B) PARALLAX BARRIER
The main advantage of the parallax barrier is that it can be switched on and off, so you can watch multimedia content either in 3D or in 2D. Its main disadvantage is that it blocks part of the light, resulting in a lower display brightness. [6]
C) HOLOGRAPHY
Another avenue of 3D research is holography. It is different from the other technologies discussed previously because it has no need for expensive and advanced display devices. Some European projects have started research on holography alongside that on 3D television, with slightly less ambitious goals than those of regular 3D television. Whereas the 3DTV projects aim, within three years, at a complete broadcasting chain and proposed standards for capturing 3D content, coding schemes, compression, and transmitting and broadcasting live content, for holograms the goal is simply to be able to capture a real-world object, process it and display it. Processing holograms, unthinkable just a few years ago, is now conceivable thanks to their digital representation: their strength is that they can be processed, analyzed, and transmitted electronically. [7]
Digital holography is the technology of acquiring and processing holographic measurement data, typically via a CCD (charge-coupled device) camera or a similar device. The captured object is reconstructed numerically from the acquired data. The reconstruction differs from the one that can be obtained optically, because with holography the aspect, structure and texture of the object can all be reproduced. With 3D digital holograms, one can typically perceive the optical surface and thickness of the real-world object.
CAPTURING A DIGITAL HOLOGRAM:
There are different techniques for capturing a hologram. Indeed, no single method has been standardized, so various techniques are used; hereafter is the most common one.
Digital holograms are multiplexed by recording several fringe patterns with the same CCD. Different techniques, including one or several reference beams with different angles, can be used to obtain interference fringe patterns that transcribe the object more or less accurately. All the captured holograms are then superimposed in one CCD frame; each hologram can be individually reconstructed using digital spatial filtering. For the de-multiplexing, the digital Fourier transform of the multiplexed hologram is computed, a pass-band filter is used to select the desired hologram, and finally the inverse Fourier transform of the result is applied, yielding a separate digital hologram for every multiplexed signal. [8][9]
FIGURE 5 - CAPTURING A HOLOGRAM: (A) AMPLITUDE RECONSTRUCTION OF ONE HOLOGRAM IN THE LENS FOCAL PLANE; (B) AMPLITUDE OF THE SYNTHETIC SPECTRUM OBTAINED BY NUMERICAL MULTIPLEXING IN THE LFP OF 100 DIGITAL HOLOGRAMS
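The de-multiplexing described above can be illustrated with a toy one-dimensional example: two fringe patterns recorded with different reference-beam angles end up at different carrier frequencies, so a Fourier-domain pass-band filter recovers each one. All signal shapes and carrier frequencies here are invented for the illustration.

```python
import numpy as np

n = 1024
x = np.arange(n)
# Two slowly varying "holograms", modulated onto different fringe
# (carrier) frequencies by their different reference-beam angles:
env_a = 1.0 + 0.5 * np.cos(2 * np.pi * 2 * x / n)
env_b = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * x / n)
carrier_a, carrier_b = 100, 200
frame = (env_a * np.cos(2 * np.pi * carrier_a * x / n)
         + env_b * np.cos(2 * np.pi * carrier_b * x / n))  # one CCD frame

def demultiplex(frame, carrier, width=20):
    """Recover one hologram with a pass-band filter around its carrier."""
    spectrum = np.fft.fft(frame)
    freqs = np.fft.fftfreq(len(frame), d=1.0 / len(frame))  # integer bins
    mask = np.abs(np.abs(freqs) - carrier) < width
    return np.fft.ifft(spectrum * mask).real

rec_a = demultiplex(frame, carrier_a)   # fringes of hologram A only
```

The recovered signal matches the first modulated pattern almost exactly, while the second one is rejected by the filter; repeating the call with `carrier_b` recovers the other hologram.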
This is what the capturing, processing and displaying of a digital hologram looks like from end to end. The actual object is reconstructed for the observer, who in theory can hardly tell the difference between the real-world object and the 3D digital hologram. Digital holographic imaging can be seen as an important and growing solution for 3D visualization. Evaluating the ability of the human visual system to perceive digital holograms of 3D images is quite a new research area, and few results, especially concerning the visual perception of digital holograms, have been published. The main requirements for displaying holograms are to provide natural viewing conditions for the human visual system and to separate the reconstructed images from the display device, which has to be almost invisible to the viewer. [10]
FIGURE 6 - DISPLAYING A HOLOGRAM
For now, standards are out of the question for two reasons. On the one hand, holographic technology is so much younger than 3DTV that research over a longer period is needed before standardization can be considered. On the other hand, the market seems to be leaning towards 3DTV, which is closer to HDTV and thus easier to introduce into homes and more likely to be a commercial success.
2) CAPTURE
We can divide the capture part into two sections:
- The capture itself, requiring cameras;
- Multiple corrections applied to the raw video sequences.
I. CAMERA & ITS CONFIGURATION
Figure 7 shows the conceptual diagram suggested by the 3D4YOU project of the fundamental cameras needed to shoot 3D content.
FIGURE 7: CONCEPTUAL DIAGRAM OF 3D4YOU CAPTURE SYSTEM
This picture shows regular cameras and satellite cameras. Stereoscopic displays will only use the information given by the two regular cameras. The satellite cameras serve LDV or MVD displays: they extend the baseline and let users look at objects from more points of view. Another tool, called a PMD or ToF (time-of-flight) camera, is also used; it is a low-resolution infrared camera that records a depth map of the scene. Shooting a scene in 3D is a little harder because these cameras need a specific configuration to work. For example, in the 3D4YOU paper, the distance between the two regular cameras was fixed at 62 mm, and the PMD was placed exactly between the two regular cameras.
II. CORRECTIONS
After capturing, multiple corrections have to be made. First, the input views have to be color-corrected. Then, geometric rectification is used to restrict the parallax shifts between images to the horizontal axis only; you can see in Figure 8 that the armchair has moved from right to left. Finally, if available, the ToF depth map is converted into a disparity (depth parallax) map corresponding to the rectified image pair. [11]
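The depth-to-disparity conversion in the last step can be sketched as follows; the 62 mm baseline echoes the 3D4YOU setup quoted above, while the focal length and depth values are illustrative assumptions.

```python
import numpy as np

def depth_to_disparity(depth_m, focal_px=1200.0, baseline_m=0.062):
    """Per-pixel disparity (in pixels) = f * b / Z for a rectified pair."""
    return focal_px * baseline_m / np.asarray(depth_m, dtype=float)

tof_depth = np.array([[1.0, 2.0],
                      [4.0, 8.0]])         # metres, invented values
disparity = depth_to_disparity(tof_depth)  # nearer pixels -> larger shift
```

Because disparity is inversely proportional to depth, the nearest pixel of this toy map gets the largest horizontal shift between the rectified views.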
FIGURE 8: EXAMPLE OF THE DIFFERENT POST-CAPTURE PROCESSING STEPS
3) CONVENTIONAL 3D STEREO CODING
As shown in the first part, the principle of stereo 3D is to present two different but very similar views, one to each eye.
A) MPEG-2 MULTIVIEW PROFILE
A good idea for compression efficiency is to try to predict one view from the other. Consider a sequence of successive pictures coded as intra (I), predictive (P) or bipredictive (B) pictures. The Multiview Profile (MVP) uses a two-layer video coding scheme: a base layer and an enhancement layer. The base layer video is coded as an MPEG-2 Main Profile bitstream; this profile allows encoding and decoding of intra and predicted frames with bidirectional prediction. The enhancement layer video is coded with temporal scalability tools and exploits the correlation between the two views to improve compression efficiency. In stereoscopic video applications, the base layer is assigned to the left eye and the enhancement layer to the right eye.
FIGURE 9: STEREO CODING WITH TEMPORAL AND INTER-VIEW PREDICTIONS
The main problem is that there is a real gain only for I pictures, which are not the most numerous in a Group Of Pictures (GOP). [12][13]
B) H.264/AVC SIMULCAST
There are two input videos: video 1 for the left eye and video 2 for the right one. The principle is to treat these two videos independently: they are encoded with H.264/AVC into two different bitstreams, sent through the channel, and decoded independently. [13][14]
FIGURE 10 - AVC SIMULCAST TRANSMISSION CHAIN
C) H.264 SEI MESSAGE
A stereo Supplemental Enhancement Information (SEI) message was added to H.264/AVC. The only difference from H.264/AVC simulcast is the operation applied to the two videos: encoding and decoding them independently was not efficient, so the idea is to group them by interleaving the two videos line by line before encoding, which lets the codec exploit the inter-view correlation. [13][14]
FIGURE 11 - H.264 SEI TRANSMISSION CHAIN
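The line-by-line interleaving can be sketched as follows; the tiny frame sizes are of course illustrative, and a real system would feed the merged frame to an H.264/AVC encoder.

```python
import numpy as np

def interleave_rows(left, right):
    """Even rows come from the left view, odd rows from the right view."""
    assert left.shape == right.shape
    mixed = np.empty((2 * left.shape[0], left.shape[1]), dtype=left.dtype)
    mixed[0::2] = left
    mixed[1::2] = right
    return mixed

left = np.zeros((2, 3), dtype=np.uint8)   # dummy left view
right = np.ones((2, 3), dtype=np.uint8)   # dummy right view
mixed = interleave_rows(left, right)      # rows alternate: L, R, L, R
```

Because neighbouring lines now come from two highly correlated views, the encoder's ordinary spatial prediction also captures inter-view redundancy.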
D) DOWNSAMPLING WITH H.264/AVC
Another way to achieve good compression is to exploit the capabilities of the human brain by downsampling one of the two views. Indeed, this incredible organ is able to fuse the stereo pair with a perceived quality close to that of the better picture.
FIGURE 12: HIGH DEFINITION VIEW (LEFT) AND DOWNSAMPLED ONE (RIGHT)
The 3DTV project downsampled one of the views by scaling its temporal rate and its spatial size, using the H.264/AVC codec. They managed to obtain stereoscopic videos coded at about 1.2 times the rate of a classic video, with little visual quality degradation. [13][15]
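The reported factor of 1.2 is plausible from a simple pixel count; assuming, for illustration, that one view keeps full 1080p resolution while the other is halved in each spatial axis:

```python
# One full-resolution 1080p view plus one view downsampled by 2 in each
# spatial axis (the temporal scaling mentioned above is ignored here):
full_pixels = 1920 * 1080
down_pixels = (1920 // 2) * (1080 // 2)
ratio = (full_pixels + down_pixels) / full_pixels
# ratio == 1.25, in the same ballpark as the reported factor of 1.2
```

The coded rate scales only roughly with pixel count, which is why the measured figure (1.2) comes out slightly below this raw estimate.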
4) VIDEO PLUS DEPTH CODING
The principle is very simple: alongside each 2D image frame, a greyscale depth map indicates whether a given pixel of the 2D image should appear in front of the screen plane (white) or behind it (black). Typically this depth range is quantized with 8 bits: the closest point is associated with the value 255 and the most distant point with the value 0. These 256 grey levels are sufficient to create a smooth gradient of depth within the image. The processing needed to render the multiview images is then done in the monitor. [12]
FIGURE 13 - PRINCIPLE OF VIDEO + DEPTH CODING
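The 8-bit quantization just described can be sketched as a linear mapping from scene depth to grey level; real systems sometimes quantize in inverse depth instead, and the near/far limits below are illustrative.

```python
import numpy as np

def quantize_depth(z_m, z_near=1.0, z_far=10.0):
    """Linear 8-bit depth map: z_near -> 255 (closest), z_far -> 0."""
    z = np.clip(np.asarray(z_m, dtype=float), z_near, z_far)
    return np.round(255 * (z_far - z) / (z_far - z_near)).astype(np.uint8)

levels = quantize_depth([1.0, 5.5, 10.0])   # -> 255, 128, 0
```

The nearest point maps to white (255), the farthest to black (0), and anything in between falls on the smooth 256-level gradient the text mentions.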
A) MPEG-C PART 3
Responding to a strong industry demand for 3D video content generation, the Motion Picture Experts Group launched the ISO/IEC 23002-3 specification (a.k.a. MPEG-C Part 3), standardized since 2007.
As shown in Figure 14, MPEG-C Part 3 uses the video plus depth format, consisting of the input video and the associated depth information. These signals are encoded independently with H.264/AVC, resulting in two encoded bitstreams, which are then interleaved frame by frame in the multiplexer for transmission. After transmission over the channel, the demultiplexer separates the stream into the two individually coded streams. As in the encoding part, these two streams are decoded independently, resulting in the distorted video sequence and the distorted depth sequence.
What is interesting in MPEG-C Part 3 is that it specifies an Auxiliary Video Data format that covers more than just depth. It simply consists of an array of N-bit values associated with the individual pixels of a regular video stream. Depth maps and parallax maps, coded 0x10 and 0x11 respectively, are the first specified types of auxiliary video streams relating to stereoscopic-view video content. Parallax can be seen as a hyperbolic representation of depth: it is the apparent displacement, or difference in apparent position, of an object viewed along two different lines of sight. New values for additional data representations could be added to accommodate future coding technologies. This specification is directly applicable to 3D video because it allows such video to be specified in the format of a single view plus associated depth, where the single-channel video is augmented by the per-pixel depth attached as auxiliary data. The standard allows optional subsampling of the depth map (in the spatial and temporal domains), making it a very good solution for applications needing very low bitrates.
FIGURE 14 - SCHEMATIC BLOCK DIAGRAM FOR MPEG-C PART 3 CODING WITH VIDEO PLUS DEPTH FORMAT DATA
Using video plus depth encoded with MPEG-C Part 3 is very useful. Indeed, depth can be compressed very efficiently with video coders such as AVC (5-10% of the bitrate of the color signal), which means that 3D video is only slightly heavier than HD video. Moreover, this technology offers backwards compatibility with 2D: if the receiver cannot decode the depth map, it can simply display the source video. Besides, the standard is independent of both the display technology and the capture technology. Furthermore, MPEG-C Part 3 is directly compatible with most "2D to 3D" algorithms.
However, this approach has some limitations too. First, when there are big occlusion areas, a
very strong depth is difficult to handle at the display. Secondly, if you want to avoid headaches, you
have to watch a video with a limited depth impression. So you have to compromise between visual
comfort and visual pleasure. To conclude, we could say that this technique would initially be good
because of the small depth range, but it has to be perfected in order to achieve the free-viewpoint-video,
where you can watch a scene from every angle. [13][14][16]
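The hyperbolic depth-parallax relation mentioned above can be illustrated for a simple parallel stereo rig, where disparity = focal length × baseline / depth. A minimal sketch; the focal length and baseline values are hypothetical, and the 8-bit quantization mirrors the grayscale maps these formats use:

```python
# Sketch: the hyperbolic relation between depth and parallax (disparity)
# for a parallel stereo rig. The focal length and baseline are illustrative
# assumptions, not values taken from the MPEG-C Part 3 specification.

def depth_to_disparity(depth_m, focal_px=1000.0, baseline_m=0.065):
    """Disparity in pixels for a point at depth_m metres."""
    return focal_px * baseline_m / depth_m

def quantize_to_8bit(value, v_min, v_max):
    """Map a disparity value into the 0-255 range of a grayscale map."""
    t = (value - v_min) / (v_max - v_min)
    return round(255 * min(max(t, 0.0), 1.0))

if __name__ == "__main__":
    for z in (1.0, 2.0, 4.0, 8.0):
        d = depth_to_disparity(z)
        print(f"depth {z:4.1f} m -> disparity {d:6.2f} px")
    # Doubling the depth halves the disparity: the hyperbolic relation.
```

Because distant scene content maps to tiny disparities, a coarse 8-bit map is usually enough, which is part of why depth compresses so cheaply.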
B) H.264-AVC WITH AUXILIARY PICTURE SYNTAX
As shown in the diagram, H.264 Auxiliary Picture Syntax also uses the video-plus-depth format. The colour and depth images are combined into a single source coded by H.264/AVC: video is the primary coded picture and depth is the auxiliary coded picture. A single encoder output is advantageous compared to MPEG-C Part 3 because it does not affect the end-to-end communication chain; no extra signalling is needed to transmit an additional bitstream. The primary coded picture contains all macroblocks of the picture, and only primary coded pictures have an effect on the decoding process. The auxiliary coded picture supplements the primary coded picture and may be used in the display process in combination with other data not specified by H.264/AVC. An auxiliary coded picture must contain the same number of macroblocks as the primary coded picture.
After transmission over the channel, this stream is decoded, again simultaneously but independently for the primary and auxiliary coded pictures, resulting in the distorted video sequence and the distorted depth sequence. [14]
FIGURE 15 - AVC APS TRANSMISSION CHAIN
C) MPEG-4 MAC
MPEG-4 Multiple Auxiliary Component allows the encoding of auxiliary components, such as a
depth map, in addition to the Y, U, V components present in 2D video.
The basic idea of Multiple Auxiliary Components (MAC) is that the grayscale shape is not only used to describe the transparency of a video object, but can be defined in a more general way. MACs are defined for a video object plane on a pixel-by-pixel basis and can contain up to three auxiliary components related to the video object: for example disparity, depth and an additional texture. To date, only a limited number of types and combinations are defined, identified by a 4-bit integer, but more applications are possible. Like H.264/AVC, MAC produces a single bitstream output, which avoids the multiplexing and demultiplexing stages needed when there are several streams. [17]
FIGURE 16 - MPEG-4 MAC ARCHITECTURE
D) H.264/SVC
FIGURE 17 - H.264/SVC ARCHITECTURE
H.264 Scalable Video Coding (SVC) is an extension of H.264/AVC. Scalability means that a single bit stream can serve the needs of different users, with different displays, connected through different network links. To summarize, with H.264/SVC we have one bitstream containing several qualities, several resolutions and several frame rates, so the decoded video sequence can adapt to each user's needs.
SVC generates an H.264/AVC-compatible base layer and one or several enhancement layers. The base layer bit stream corresponds to a minimum quality, frame rate and resolution, whereas the enhancement layer bit streams represent the same video at gradually increased quality and/or resolution and/or frame rate. The goal here is to code stereoscopic video sequences based on the layered architecture proposed in H.264/SVC.
Here again, there is an advantage regarding display technologies: as the base layer is compatible with H.264/AVC decoding, users with an H.264/AVC decoder will be able to decode the colour image, whereas users with an SVC decoder will also be able to decode the depth/disparity image and will experience stereoscopic video. Note that this standard performs better than MPEG-4 MAC and on a par with H.264/AVC. [17][18]
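The layered idea above can be sketched as a receiver keeping only the layers its link can sustain; the layer names and cumulative bitrates below are invented for illustration, not taken from any real SVC stream (a real bitstream signals its layers in the NAL unit headers).

```python
# Sketch: choosing which H.264/SVC layers to decode given the link capacity.
# The layer descriptions and cumulative bitrates are illustrative assumptions.

LAYERS = [
    # (layer description, cumulative bitrate in kbit/s)
    ("base: H.264/AVC colour, SD, 15 fps", 800),
    ("enh 1: colour, HD, 30 fps", 3200),
    ("enh 2: depth/disparity map (stereo)", 3600),
]

def select_layers(link_kbps):
    """Return all layers whose cumulative rate fits the link, in order."""
    chosen = []
    for name, cumulative_rate in LAYERS:
        if cumulative_rate <= link_kbps:
            chosen.append(name)
        else:
            break  # higher layers depend on the lower ones
    return chosen

if __name__ == "__main__":
    # A legacy H.264/AVC decoder behaves like a receiver stuck at the base layer.
    print(select_layers(1000))   # base layer only -> 2D video
    print(select_layers(4000))   # all layers      -> stereoscopic video
```

The same mechanism explains the backward compatibility noted in the text: an AVC-only decoder simply never requests the enhancement layers.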
FIGURE 18 - H.264/SVC STEREOSCOPIC CODING
5) MULTIVIEW 3D
A) MULTI VIEW CODING STANDARD (MVC)
Concretely, MVC is inspired by MPEG-2 Multiview profile or H.264/AVC SEI, presented before,
but with more than two views.
FIGURE 19 : H.264/MVC CODING STRUCTURE WITH INTER-VIEW PREDICTION (RED ARROWS) AND TEMPORAL
PREDICTION (BLACK ARROWS) USING HIERARCHICAL B PICTURES FOR 5 CAMERAS AND A GOP SIZE OF 8
PICTURES
The first problem with MVC arises when the different views of a multiview sequence are very dissimilar: in that case, no real coding gain can be achieved. Another drawback of MVC is that the compressed bitstream still requires a data rate that grows linearly with the number of views. [14][19]
B) MULTIVIEW PLUS DEPTH (MVD)
Multiview video plus depth (MVD) can be regarded as an extension of video plus depth, and derives from Multiview Video Coding. The principle is the same: transmitting several views of the scene. Here, however, the goal is to encode each view together with its depth map. Thus, instead of encoding 8 views, we prefer to encode 4 views of the scene plus their associated depth maps. Eventually, we have 4 high-resolution images plus 4 grayscale-encoded depth maps, the latter representing only 5 to 10% of the total stream size. Moreover, this bitstream can be compressed by exploiting the statistical redundancy within each view (figure 20), as in MPEG-4, and compressed even further by exploiting the redundancy between the different views (figure 21). [20][21][22]
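The saving described above can be made concrete with a one-line cost model; the 10% figure is the upper bound quoted in the text, and the "view-unit" accounting is a deliberate simplification that ignores inter-view prediction gains.

```python
# Sketch: relative cost of MVD versus sending all views, assuming a depth
# map costs 10% of a coded colour view (the upper bound quoted in the text).

def mvd_cost(views, depth_fraction=0.10):
    """Cost, in colour-view units, of `views` colour views plus depth maps."""
    return views * (1 + depth_fraction)

if __name__ == "__main__":
    full = 8             # cost of sending all 8 views, in view-units
    mvd = mvd_cost(4)    # 4 views + 4 depth maps
    print(mvd, f"{mvd / full:.0%} of the 8-view stream")
```

Even before inter-view prediction, halving the number of transmitted views roughly halves the stream, at the price of having to synthesize the missing views at the receiver.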
FIGURE 20 - TEMPORAL PREDICTION
FIGURE 21 - BOTH TEMPORAL AND INTER-VIEW PREDICTION
Another way to compress the bitstream even further is to send only some of the views. For example, for a display showing 9 views, we can send the two side views and the middle view, together with their depth maps; the other views can then be synthesized by an algorithm called DIBR (depth-image-based rendering [13]).
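The DIBR synthesis just mentioned can be sketched in a few lines: each pixel of a reference row is shifted horizontally by a disparity derived from its depth, and positions no pixel lands on become disocclusion holes. The 4-pixel row and disparity values are toy data; real DIBR also has to fill the holes this warping leaves behind.

```python
# Sketch of depth-image-based rendering (DIBR) on a single image row:
# each pixel is shifted by its disparity to form a virtual view.

def render_row(colors, disparities, width):
    """Warp one row of pixels to a virtual view; None marks holes."""
    out = [None] * width
    for x, (c, d) in enumerate(zip(colors, disparities)):
        x_new = x + d
        if 0 <= x_new < width:
            out[x_new] = c
    return out

if __name__ == "__main__":
    colors      = ["A", "B", "C", "D"]
    disparities = [0, 0, 2, 2]   # far background stays; near pixels (C, D) shift
    print(render_row(colors, disparities, 6))
    # -> ['A', 'B', None, None, 'C', 'D']: the two None entries are the
    #    disocclusion holes that hole-filling (or an LDV background layer)
    #    must cover.
```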
C) LAYERED DEPTH VIDEO
Layered depth video (LDV) derives from MVD. The principle is simple: instead of just taking the actual video (A) plus its depth (B), LDV additionally takes the background layer (C) with its associated depth map (D), for a total of 4 pictures (see figure 22). The background layer (C) consists of the image content that is covered by foreground objects in the main layer (A); this content is accessible from other viewing directions. Another approach to this technology consists of isolating each colour component (Y, U, V) and transmitting each colour video plus its associated depth map: one colour forms the main layer and the other two are residual layers.
LDV can be generated from MVD: we just need to take the additional views and process them to obtain the background layer. A pixel-wise comparison determines which parts of the image belong to the background and which belong to the main layer. Layered depth video can be more efficient than MVD because, as with video plus depth, the depth maps are encoded in grayscale (only 256 levels), so less data is sent compared to MVD. [13]
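The pixel-wise comparison described above can be sketched on toy one-dimensional "images"; the sample values and the difference threshold are illustrative assumptions, and a real implementation works on full pictures with their depth maps.

```python
# Sketch: deriving an LDV background layer by pixel-wise comparison.
# main_view is the central image; warped_side is a side view already warped
# into the central viewpoint (None where the warp left no sample).

def background_layer(main_view, warped_side, threshold=10):
    """Keep side-view pixels that differ from the main view: occluded content."""
    layer = []
    for m, s in zip(main_view, warped_side):
        if s is not None and abs(m - s) > threshold:
            layer.append(s)    # content hidden behind a foreground object
        else:
            layer.append(None) # redundant with the main layer, not stored
    return layer

if __name__ == "__main__":
    main_view   = [50, 50, 200, 200, 50]  # the 200s are a foreground object
    warped_side = [50, 50, 50, 50, 50]    # the side view sees the background
    print(background_layer(main_view, warped_side))
    # -> [None, None, 50, 50, None]: only the occluded background survives.
```

Because only the differing pixels are kept, the residual background layer is sparse, which is where LDV's efficiency over full MVD comes from.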
FIGURE 22 - EXAMPLE OF A LAYERED DEPTH VIDEO (A: VIDEO, B: DEPTH, C: BACKGROUND LAYER, D: BACKGROUND DEPTH)
D) DEPTH ENHANCED STEREO
At first, systems on the market will use conventional stereo. However, MVD and LDV techniques do not allow backward compatibility with stereo systems. This is why the Depth Enhanced Stereo (DES) concept has appeared; its goal is to extend conventional stereo 3D.
FIGURE 23: DEPTH ENHANCED STEREO POSSIBILITIES
As shown in figure 23, beyond the two views that are sufficient for stereoscopic displays, DES adds information about depth and occlusions. This format is flexible: views can be added, and depth or occlusion information can be omitted. High-quality stereo views are included, along with all the advanced functionalities that rely on depth. [13][23]
E) 3D VIDEO CODING
3D Video Coding (3DVC) is a standard that targets serving a variety of 3D displays, including
stereoscopic and multi-view displays.
FIGURE 24: TARGET OF 3D VIDEO FORMAT ILLUSTRATING LIMITED CAMERA INPUTS AND CONSTRAINED RATE
TRANSMISSION ACCORDING TO A DISTRIBUTION ENVIRONMENT. THE 3DV DATA FORMAT AIMS TO BE
CAPABLE OF RENDERING A LARGE NUMBER OF OUTPUT VIEWS FOR AUTO-STEREOSCOPIC N-VIEW DISPLAYS
AND SUPPORT ADVANCED STEREOSCOPIC PROCESSING
The 3DV project is assumed to be based on limited camera inputs. A problem remains when a user has an auto-stereoscopic display that works with a large number of views: a possible scenario is one where four camera inputs are used but the user's display needs nine views. It should then be possible to generate the missing views from the transmitted data. Additionally, the rate required for transmitting the 3DV format should be fixed by the distribution constraints, whatever the transmission medium: IP, DVB or anything else. There should not be an increase in rate simply because the display requires a higher number of views to cover a larger viewing angle. Figure 25 illustrates the objectives of the 3DV project well: a higher 3D rendering capability than 2D+depth without requiring a bit rate as high as MVC. [13][24]
FIGURE 25 - 3DV OBJECTIVES
TRANSPORT METHODS FOR 3DTV
Transmitting 3DTV data from the content creator to the user is probably the most challenging topic at the moment, yet it is also the least glamorous aspect of this attractive technology. On one side, James Cameron has fun developing new cameras every year; on the other, viewers dream of watching a hologram in their living room a few years from now. Most users will never imagine how challenging transmitting this information was.
Challenges come from everywhere: from the two kinds of 3D data (stereo and multiview) to the way meteorological conditions affect transmission quality. To answer all these questions, every group proposes its own solution, leading to an evident need for standardization. This part outlines the different possible solutions for successfully broadcasting the data.
Studying the existing technologies for stereoscopic transmission is a helpful step towards understanding the various solutions proposed for transmitting multiview video.
A) STEREOSCOPIC 3D TV OVER SATELLITE
This part is no longer fiction; it is reality, and very recent reality at that. The first 3D experiments have already begun. For the moment, only specific events, such as a football game or a fashion show, have been broadcast to specific places (theatres, pubs, ...) equipped with stereoscopic 3D screens, for spectators wearing stereoscopic glasses. This part outlines, through the example of Eutelsat, the different technologies used to perform this transmission. [25]
FIGURE 26 - EUTELSAT BROADCAST
I. WHAT IS TRANSMITTED?
As we saw in a previous part, stereoscopic video is very simple compared to MVC: the only difference from HD video is that 2 slightly different images have to be broadcast instead of one. The video format is still MPEG-4, transmission still uses DVB-S2, and the channel used is a classic 8 Mbps channel. The difficult parts come before (capturing) and after (decoding the stream) this stage.
II. HOW TO TRANSMIT IT?
MPEG-4 AVC: ADVANCED VIDEO CODING:
See the previous part for a complete description of the H.264 standard.
DVB-S2: DIGITAL VIDEO BROADCASTING FOR SATELLITE (VERSION 2)
DVB-S2 is the second-generation specification for satellite broadcasting, developed in 2003. It benefits from recent developments in channel coding (LDPC, Low-Density Parity-Check codes) combined with a variety of modulation formats (QPSK, 8PSK, 16APSK and 32APSK). The transmission parameters for each individual user are optimized depending on path conditions. [26]
DVB-S2 accommodates any input stream format, including single or multiple MPEG
Transport streams, continuous bit-streams, IP as well as ATM packets.
The selected LDPC codes use very large block lengths. Code rates of 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 5/6, 8/9 and 9/10 are available, depending on the selected modulation and the system requirements. Code rates 1/4, 1/3 and 2/5 were introduced to operate, in combination with QPSK, under exceptionally poor link conditions, where the signal level is below the noise level.
FIGURE 27 - DIFFERENT MODULATIONS
By selecting the modulation constellation and code rate, spectral efficiencies from 0.5 to 4.5 bits per symbol are available and can be chosen according to the capabilities and restrictions of the satellite transponder used.
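The 0.5 to 4.5 bit-per-symbol range quoted above follows directly from multiplying the bits per symbol of the constellation by the code rate. A minimal sketch, ignoring FEC framing and pilot overhead (which lower the real DVB-S2 figures slightly):

```python
# Sketch: DVB-S2 spectral efficiency as bits-per-symbol times code rate,
# ignoring framing and pilot overhead.
from fractions import Fraction

BITS_PER_SYMBOL = {"QPSK": 2, "8PSK": 3, "16APSK": 4, "32APSK": 5}

def efficiency(modulation, code_rate):
    """Net information bits carried per transmitted symbol."""
    return BITS_PER_SYMBOL[modulation] * Fraction(code_rate)

if __name__ == "__main__":
    # The two extremes quoted in the text:
    print(float(efficiency("QPSK", "1/4")))     # 0.5 bit/symbol, rain-fade mode
    print(float(efficiency("32APSK", "9/10")))  # 4.5 bit/symbol, clear-sky mode
```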
DVB-S2 is particularly interesting for TV broadcasting. The multiple bit rates available on the same link allow flexible transmission. On a highly efficient link, the modulation and code are chosen to maximize throughput, in order to transmit HD for example. On the same link under poor weather conditions, the modulation and code are changed to a more conservative combination; the consequence of this switch could be a downgrade from HD to SD for the user during that period.
In the future, we can imagine a comparable philosophy for 3D TV broadcasting. Under good conditions, the user would watch his favourite programmes in 3D; during a storm, for example, the programme could be switched to plain HD, or even SD in the worst cases.
B) MULTIVIEW TRANSPORT
Multiview live broadcasting is the ultimate goal. Unfortunately, it will also be the hardest to reach. Even without taking into account the difficulties of capture and the complexity of building devices capable of properly displaying these streams, the quantity of data to transmit is a gigantic obstacle. During our visit to Astrium, we were able to gauge the problem: the 10-second video we watched weighed 3 Gb. Even if the encoding technology was really raw, this example illustrates the problem. It is no longer 2 HD pictures that need to be transported, as in stereoscopic broadcasting, but the chosen number of HD images (2 or 4 for the moment) plus their depth. Fortunately, transmitting depth is not a problem for such advanced technologies.
This part outlines the different avenues researchers are currently exploring in order to accomplish the ultimate goal of 3D TV.
I. OVER INTERNET PROTOCOL:
It is fair to expect that any new communication application will be based on packet network
technology for its transport infrastructure and employ the already existing Internet Protocol suites. The
IP architecture is proving to be flexible and successful in accommodating a wide array of
communication applications as can be seen from the ongoing replacement of classical telephone
services by voice over IP applications. Transport of the TV signals over IP packet networks seems to be
a natural extension of such applications. [27]
That is why we imagine a system based on the packet philosophy and IP for transporting 3DTV. Systems for streaming 3D video over the Internet will be built on technologies already known from 2D television. Unfortunately, 3D video can require much more bandwidth, implying new needs in the transport infrastructure.
At the moment, the most widely used transport protocol for media is the Real-time Transport Protocol (RTP) over UDP. This solution from the 2D world does not provide any congestion control mechanism and can therefore lead to congestion problems when several multiview communications are delivered at the same time. The Datagram Congestion Control Protocol (DCCP) has been designed to replace UDP for media transmission, running directly over the Internet Protocol. DCCP can be seen as UDP with congestion control and connection setup, or as TCP without reliability and in-order packet delivery. Either way, 3DTV over IP is expected to employ the DCCP protocol. [28]
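For the RTP transport mentioned above, each media packet starts with the fixed 12-byte header defined in RFC 3550. A minimal sketch of building that header; the payload type 96 is an arbitrary value from the dynamic range, which a real session would negotiate out of band (e.g. via SDP):

```python
# Sketch: building the fixed 12-byte RTP header (RFC 3550) that precedes
# each media packet when streaming over UDP (or DCCP).
import struct

def rtp_header(seq, timestamp, ssrc, payload_type=96, marker=0):
    version = 2                        # RTP version is always 2
    byte0 = version << 6               # padding=0, extension=0, CSRC count=0
    byte1 = (marker << 7) | payload_type
    # ! = network byte order; B,B = two bytes; H = 16-bit seq; I,I = 32-bit
    # timestamp and synchronization source identifier.
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

if __name__ == "__main__":
    hdr = rtp_header(seq=1, timestamp=90000, ssrc=0x1234)
    print(len(hdr), hdr.hex())
```

The sequence number and timestamp are what let the receiver detect the packet losses and reordering discussed next.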
Packet loss on wired and wireless IP links is the major challenge for streaming media applications. Congestion is the main cause of packet loss over the wired Internet. Unlike the wired backbone network, the wireless channel is limited in bandwidth, mostly because of noise and interference, which lead to bit errors. Several network protocols discard packets containing errors, which later translate into losses. This weakest link will therefore require particular care in future multimedia networks. [29]
II. OVER SATELLITE:
This is one of the areas where things are least established. Two major schools of thought, each justifying its architecture by the kind of service it is aiming at, disagree on whether or not IP should be used in this kind of transmission. We have already seen in a previous part that the most common stream to transmit is an MPEG-4 AVC stream. Once again, depth capture is not the problem here.
FIGURE 28- BROADCAST OVER DVB-S2
WITHOUT IP
FIGURE 29 - BROADCAST WITH MPEG-TS
The philosophy of this architecture is not to use the IP protocol. This choice can look obvious since, in both of the scenarios proposed before, we use MPEG. In this case, the MPEG stream is therefore transported directly by DVB-S2, using either QPSK or 8PSK/16APSK/32APSK modulation.
Broadly, this solution is very close to today's broadcasting systems, which already use H.264 over DVB-S2 to broadcast television programmes in SD and HDTV (depending on the link conditions). The difference is the amount of data to transmit for the same length of video, which increases drastically in the 3DTV context. This is where choices have to be made. Increasing the available bandwidth will probably be possible, but only in the long term. Researchers have to compromise on video quality: for example, using 2 images plus depth instead of 4 still provides promising results. Another solution could be to decrease the quality of each picture.
That is, unfortunately, not the only concern. This proposed architecture transmits MPEG-4 AVC data, and as of this writing it is simply impossible to broadcast a live event in this format: producing a live H.264 stream from a live event, for real-time broadcast to users, is ruled out by the enormous processing times. That is why this solution is mainly envisaged (for the moment) for services like video on demand.
This is the solution chosen by the Muscade project, whose name is an acronym for Multimedia Scalable 3D for Europe. In this very ambitious programme, Astrium is, among other missions, in charge of transmitting the content produced by the other partners. The final goal of Muscade is to produce a scalable and generic 3DTV representation format. [30]
WITH IP
FIGURE 30 - BROADCAST WITH IP & GSE
At first, this solution could appear ineffective. Indeed, breaking up an MPEG stream to encapsulate it in IP packets, only to then use DVB-S2, which was developed precisely to transport MPEG streams, definitely looks pointless. After a quick study of the technologies this architecture requires, we will highlight some examples of its utility.
The first step consists of encapsulating MPEG frames in IP packets. This is the easiest stage; Motorola, for example, has developed technologies in this direction, even if it was originally in order to exploit specific features of IP networks.
The second and most complex step is the encapsulation of IP packets over DVB-S2. To achieve it, we need GSE (Generic Stream Encapsulation). This protocol allows direct encapsulation of IP and other network-layer packets in DVB-S2 physical-layer frames, replacing the MPEG-TS encapsulation studied in the previous part. [31][32]
This architectural choice, though far more complex than the previous one, has the advantage of offering better versatility in content transmission. Conference systems could benefit from this, and in the not-so-distant future offer 3D teleconferencing. The 3D Presence project is one example: even if transmission is not its priority for the moment, because of the complexity of other parts such as capture and display, in the end the project will need to choose a way to transmit its data. [33]
FIGURE 31 - 3D VIDEO CONFERENCE DISPLAYS
6) DISPLAY STANDARDISATION
Numerous ongoing projects are currently studying the future of 3D television. From stereoscopic displays that come with glasses to auto-stereoscopic displays, these are the next generation of home televisions. Standards already exist, known as 3D-Ready and Full-3D, and the bodies behind HDMI and Blu-ray Disc are also coming up with standards for the new TV generation. Finally, we will discuss the standardization efforts aimed at letting mobile phone screens display 3D content.
A) STEREOSCOPIC SCREENS
I. 3D-READY
The 3D-Ready standard is in fact a norm for Full HD televisions. The aim is to be able to display 3D content on 2D HD screens. The mechanism exploited by manufacturers is that a 2D television designed for a 50 Hz mode is now able to support a 100 Hz mode. Thanks to this new standard, two images can be displayed on the same screen, enabling the 3D effect when used with a pair of stereoscopic glasses. These televisions seem ready to use; some are even sold in retail stores and can be purchased as easily as any Full HD screen. The limit of this technique is that 3D glasses have to be worn by the user in order to perceive the 3D effect. Nevertheless, here are a few characteristics shared by all TV models, regardless of the brand. [34]
Native resolution is 1920x1080 pixels
Native input format is almost always frame-sequential 120 Hz
The 3D-Ready display method changes from one brand to another: most 3D-Ready televisions use 120 Hz time-sequential 3D LED-backlit LCD, while other brands prefer 120 Hz time-sequential 3D plasma.
All of the available 3D-Ready televisions are sold with pairs of active shutter glasses.
FIGURE 32 - 3D GLASSES
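The frame-sequential format listed above can be sketched as interleaving the two eye streams into one double-rate stream whose frames alternate with the shutters of the glasses; the frame labels below are purely illustrative.

```python
# Sketch: frame-sequential 3D, where left and right frames alternate at
# twice the per-eye rate and active-shutter glasses open in sync.

def interleave(left_frames, right_frames):
    """Merge two per-eye streams (e.g. 60 Hz each) into one 120 Hz stream."""
    stream = []
    for l, r in zip(left_frames, right_frames):
        # Each tuple tells the glasses which shutter to open for that frame.
        stream.extend([("L", l), ("R", r)])
    return stream

if __name__ == "__main__":
    print(interleave(["L0", "L1"], ["R0", "R1"]))
    # -> [('L', 'L0'), ('R', 'R0'), ('L', 'L1'), ('R', 'R1')]
```

This is why a 50 Hz panel must reach 100 Hz (or a 60 Hz panel 120 Hz) before it can be sold as 3D-Ready: each eye still needs its full original frame rate.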
II. FULL-3D
One might assume that this norm relates to auto-stereoscopic televisions. That would be wrong! This standard is only the new generation of stereoscopic 3D TV screens. The main difference between Full-3D and 3D-Ready is that Full-3D is arriving in stores as a standard designed from the beginning to be the best available technology for displaying 3D content. The Sony LX900 is described as a Full-3D TV, yet it comes with glasses! The same observation can be made for the other Full-3D televisions announced on the market (Panasonic TX-PVT20, JVC LCD GD-463D10). The Full-3D standard is extremely close to the 3D-Ready norm and can be partially described as follows [35][36]:
TV LCD Full HD 1080p
LED technology, which is supposed to give a nicer 3D effect
Works with many modes: 720p, 1080i, 576p, 480p, 576i
MPEG-4 norm, H.264
FIGURE 33 - FULL 3D SCREEN
B) AUTO-STEREOSCOPIC DISPLAYS
These televisions are the real next-generation 3D televisions: the 3D effect is produced by the display itself and there is no need for 3D glasses. Scientists have come to realize that wearing glasses at home is a constraint that people are not willing to accept in their own living room. Such displays currently exist in the labs but are unfortunately not ready for the market, for two main reasons.
The first is that the 3D effect can only be seen by a limited number of viewers standing straight in front of the screen.
The second is that some users have complained about headaches after an auto-stereoscopic 3D viewing session.
Nevertheless, research is being conducted in many European projects. The oldest projects are two years old, while the youngest are only a few weeks old. Regardless of the participants, the goals are always similar. They can be summarized as follows:
To develop a display supporting multiple mobile viewers
To eliminate the need for 3D glasses
To allow viewers freedom of movement
To provide motion parallax to all viewers (an extremely important need, for example, in the 3D Presence project, which targets 3D conference meetings)
High brightness and wide colour gamut
Viewer gesture/interaction tracking
To implement the apparently necessary 2D/3D mode switching in a display
The display must be relatively inexpensive
We can reasonably believe that within 3 to 5 years, standards will naturally emerge from these studies. [37][38][39]
C) OTHER STANDARDS
Here we will discuss norms that are also coming out but are not directly related to 3D
televisions. Two standards are highly discussed all over the Internet: HDMI and Blu-Ray Disc.
I. BLU-RAY DISC ASSOCIATION
The Blu-ray Disc Association finalized a new norm in early February 2010 specifying the new 3D Blu-ray Disc. The new disc will work at resolutions up to 1080p. The specification also defines how 2D content can be read by the new 3D Blu-ray Disc reader; the other way around is of course not possible, for commercial reasons.
The BDA's specification mandates encoding 3D video using the Multiview Video Coding (MVC)
codec, an extension to the H.264 Advanced Video Coding (AVC) codec currently supported by all Blu-ray
Disc players. With this standard each eye will receive a 1080p image giving an incredible 3D
impression. The specification also incorporates enhanced graphic features to enable menus, subtitles
and so on to appear in 3D form.
It is said that this standard will be agnostic, meaning that it should work with any 3D TV, stereoscopic or auto-stereoscopic. But nothing official has been stated on the matter, and it could become an issue if the Blu-ray Disc Association decides to orient its product towards a particular 3DTV, for example a Sony product. There is nevertheless a hint of where this standard might go: Sony has announced that 3D video games will be available on the PlayStation provided the user has a 3D-Ready TV and a device that can read 3D Blu-ray Discs. [40]
II. HDMI 1.4 AND DISPLAYPORT 1.2
At the last CES in Las Vegas, VESA (the Video Electronics Standards Association) unveiled its new DisplayPort 1.2 norm. This new version of HDMI's competing connector can handle resolutions as high as 3840x2400 at 60 Hz, at a maximum rate of 21.6 Gb/s. The very good news is that this new version of DisplayPort supports 240 Hz Full-HD 3D, which will of course help the future standardization of auto-stereoscopic televisions; in this case, the maximum resolution is 2560x1600 at a frequency of 120 Hz for each eye. Moreover, DisplayPort 1.2 is backward compatible with version 1.1 and will therefore work with Full-HD televisions and, more interestingly, with 3D-Ready displays.
The newly released HDMI 1.4 standard is similar to DisplayPort 1.2. This new cable supports resolutions up to 4096x2160 along with full 3D display. Three new cables are described in the norm. The disadvantage of HDMI 1.4 is that compatibility with HDMI 1.3 products is impossible: users will instead have to buy one of the three following HDMI 1.4 cables to work with the 1.4 products about to be released. [41][42]
The mini HDMI 1.4, for computers, will not exceed 1080p
The HDMI 1.4 standard will cope with resolutions up to 2160p but will not be “networked”
The HDMI EC (Ethernet Channel), surely the most expensive: data can be exchanged at a maximum rate of 100 Mb/s
This HDMI 1.4 norm also defines the new 3D video formats and the data blocks involved in supporting 3D content. Like DisplayPort 1.2, it will work with 3D devices by delivering one 1080p stream for each eye.
These new norms make it possible to work with 3D at a frequency of 120 Hz while providing HD sound at the same time, which was not possible with the previous standards due to a lack of bandwidth.
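As a sanity check on the figures quoted above, the raw pixel rate of a video mode is width × height × frame rate × bits per pixel. A sketch assuming 24-bit colour and ignoring blanking intervals and link-coding overhead (both of which raise the real requirement somewhat):

```python
# Sketch: uncompressed video bandwidth for the modes quoted in the text,
# assuming 24-bit colour and ignoring blanking and link-coding overhead.

def video_rate_gbps(width, height, fps, bits_per_pixel=24):
    """Raw pixel data rate in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

if __name__ == "__main__":
    print(round(video_rate_gbps(3840, 2400, 60), 2))   # DisplayPort 1.2 max mode
    print(round(video_rate_gbps(1920, 1080, 240), 2))  # 240 Hz Full-HD 3D
    # Both raw rates stay below the 21.6 Gb/s link rate quoted for
    # DisplayPort 1.2.
```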
D) PORTABLE DEVICES: 3D MOBILES
A few projects are studying what they believe will become the future of mobile phones, and we can indeed subscribe to the idea. However, the aims of these projects seem unlikely to be met within 3 to 5 years. The studies range from encoding for mobile platforms, to creating mobile 3D video content, to the choice of a display technique. The projects hesitate between a 3D screen that resembles an auto-stereoscopic 3D television in many ways and a holographic screen. Scientists believe that home holographic TV screens will not be available until at least 2050; with that in mind, it is hard to concede that a holographic mobile screen could be ready for use in 2015! These projects also want to equip future 3D phones with a 3D camera, to let users capture their own 3D footage. Given how big 3D cameras are today, and keeping in mind that a mere 10 seconds of 3D video content weighs more than 5 gigabytes, it is once again difficult to believe that such goals can be met on short deadlines. That said, these projects may well someday bring standards to the mobile world; one can argue that this step will be taken once 3D standards are clear and well established for 3D televisions. [43]
CONCLUSION
3D has threatened to sweep the world many times over the past few decades. This time, however, we can believe it is the right time: since Avatar, more and more movies are being shot in three dimensions.
As we have seen in this report, a few standards are ready, but much remains to be done. Even if the capture process is well understood for stereoscopic displays, the multiview part is still under discussion. The same holds for encoding: multiview data remains too heavy for broadcast. Moreover, multiview TV screens are not ready yet: they suffer from several problems, such as headaches when the user is exposed to a video for too long, and a limited viewing angle, which also limits the number of viewers. Nevertheless, we feel that people are interested in this technology. In this context, the main TV manufacturers are currently launching their first 3D-Ready televisions; for the moment, you will need glasses to experience the third dimension. Likewise, TV channels are beginning to broadcast in three dimensions: the well-known US sports channel ESPN, for example, will broadcast the next FIFA World Cup in South Africa in 3D. It should be said that the television events playing a leading role in TV revolutions are sports events.
We can hope to see multiview screens in our living rooms in less than ten years. For holographic screens, we will have to wait longer, about fifty years or so; but that technology would be the final goal to achieve.
To conclude, even if everything is not settled yet, 3D television will become a matter of course in the next few years. We can even say that 3DTV is the challenge of this decade. The third dimension is the last barrier to break through; once it falls, we will be able to immerse ourselves in movies as if we were stepping out into the street.
REFERENCES
[1] http://www.stereo3d.com/3dhome.htm
[2] http://photo.stereo.free.fr/
[3] http://www.bitwise.net/~ken-bill/stereo.htm
[4] http://www.capcomespace.net/dossiers/cinema/imax/IMAX.htm
[5] http://news.cnet.com/8301-1023_3-10106431-93.html
[6] http://sp.cs.tut.fi/mobile3dtv/technology/displays.shtml
[7] http://www.3dtv-research.org/
[8] http://www.digitalholography.eu/publications.html
[9] http://www.3dpresence.eu/index.php?option=com_wrapper&Itemid=68
[10] http://www.holography.co.uk/Conference/Spierings/Walter.pdf
[11] 3D4YOU - “3D Content Requirements & Initial Acquisition Work” - 2009
[12] Dr.-Ing. Aljoscha Smolic from HHI – "3DV & FVV Technologies, Applications and MPEG Standards" –
2006 – http://vca.ele.tue.nl/events/3Dworkshop2006/pdf/AljoschaSmolic.pdf
[13] Aljoscha Smolic, Karsten Mueller, Philipp Merkle, Anthony Vetro – “Development of a new MPEG
Standard for Advanced 3D Video Applications” – 2009
[14] Philipp Merkle, Heribert Brust, Kristina Dix, Yongzhe Wang, Aljoscha Smolic - “Adaptation and
optimization of coding algorithms for mobile 3DTV” – 2008
http://www.mobile3dtv.eu/results/tech/D2.2_Mobile3DTV_v1.0.pdf
[15] Aljoscha Smolic - “Comparison for 3DTV with special focus on MPEG standards” – 2007
[16] Arnaud Bourge, Jean Gobert and Fons Bruls – “MPEG-C PART 3: ENABLING THE INTRODUCTION
OF VIDEO PLUS DEPTH CONTENTS” - 2009
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.131.941&rep=rep1&type=pdf
[17] C.T.E.R. Hewage, H.A. Karim, S. Worrall, S. Dogan, A.M. Kondoz – "Comparison of Stereo Video
Coding Support in MPEG-4 MAC, H.264/AVC and H.264/SVC" – 2008
http://info.ee.surrey.ac.uk/Personal/S.Worrall/Publications/VIE2007_0027_paper.pdf
[18] Dr Stewart Worrall, University of Surrey
http://www.ee.surrey.ac.uk/CCSR/research/ilab/2d3d/svc
[19] Morvan Y. et al. – "The effect of multiview depth video compression on multiview rendering" – 2009
[20] Sang-Tae Na, Kwan-Jung Oh, Cheon Lee, and Yo-Sung Ho - “Multi-view Depth Video Coding using
Depth View Synthesis” - 2008
[21] Sang-Tae Na, Kwan-Jung Oh, and Yo-Sung Ho - “JOINT CODING OF MULTI-VIEW VIDEO AND
CORRESPONDING DEPTH MAP” - 2008
[22] Philipp Merkle, Aljoscha Smolic, Karsten Müller, and Thomas Wiegand from HHI Germany -
“MULTI-VIEW VIDEO PLUS DEPTH REPRESENTATION AND CODING” – 2007
http://itg32.hhi.de/docs/ITG32_HHI_07_1_177.pdf
[23] Ralph Tanger - “3D4YOU” - 2007
[24] Mobile 3DTV - “State of the art of technology and Standards”- 2009
[25] http://www.eutelsat.com/fr/products/audiovisuel-3d.html
[26] http://www.hellas-sat.net/files/file/EBU_DVB_S2.pdf
[27] http://www.3dtv-research.org/publicDocs/techReports08/D32.3_Public.pdf
[28] Burak Gorkemli - “SVC Coded Video Streaming over DCCP” - 2006
[29] http://www.3dtv-research.org/publicDocs/techReports07/D32.2_WP10_TR2.pdf
[30] http://muscade.eu
[31] http://www.motorola.com/staticfiles/Business/Solutions/Industry%20Solutions/Service%20Providers/Telcos/_Documents/static%20files/WPpr_MPEG4_B_New.pdf?localeId=33
[32] http://www.ietf.org/proceedings/66/slides/ipdvb-4.pdf
[33] http://www.3dpresence.eu/
[34] http://www.3dathome.org/
[35] http://www.ecrans.fr/3D-a-tout-prix,8779.html
[36] www.sony.fr
[37] http://www.cse.dmu.ac.uk/~mutedusr/index.html
[38] http://www.cse.dmu.ac.uk/~heliumusr/
[39] 3DTV Project, “Technical Report #3 on 3D Display Techniques and Potential Applications”, 2008
http://www.3dtv-research.org/publicDocs/techReports08/D36.3_Public.pdf
[40] http://www.blu-ray.com
[41] HDMI Specification 1.4 – http://www.hdmi.com
[42] http://www.laptopspirit.fr/60640/la-norme-displayport-12-officialise-par-vesa-3d-au-programme.html
[43] http://www.3dphone.org/