2. The word multimedia comes from the Latin multus, which means many or numerous, and medium, which means middle or center.
The multiple means by which we can perceive information are:
1. Text – (e.g. books, newspapers, articles, journals, magazines)
2. Images & Graphics – (e.g. photographs, charts, maps, logos, sketches)
3. Audio – (e.g. radio, CD, DVD)
4. Video & Animation – (e.g. TV, video, motion pictures)
3. Multimedia is the field of Computer Science that integrates different forms of information and represents them in the form of audio, video, and animation along with the traditional media, i.e., text, graphics/drawings, images, etc.
4. Multimedia Components
Following are the major components of a multimedia computer system −
1. Text
It contains alphanumeric and some other special characters. A keyboard is usually used for the input of text; however, there are also some internal (built-in) features to include such text.
5. 3. Graphics
It is the technology to generate, represent, process, manipulate, and display pictures. It is one of the most important components of multimedia applications. The development of graphics is supported by different software packages.
6. 4. Audio
This technology records, synthesizes, and plays audio (sound). Many learning courses and different kinds of instruction can be delivered appropriately through this medium.
7. 5. Video
This technology records, synthesizes, and displays images (known as frames) in sequence at a fixed speed so that the creation appears to be moving; this is how we see a fully developed video. In order to watch a video without any interruption, the video device must display 25 to 30 frames per second.
8. 6. Animation
Computer animation is a modern technology which helps in creating, developing, sequencing, and displaying a set of images (technically known as 'frames'). Animation gives visual effects or motion very similar to that of a video file.
10. 1. Multiple Media –
Text was the main mode of communication for many years during the pre-multimedia era, and continues to be one, but it is now increasingly supplemented by other media, which often prove more effective.
11. Pictures are subdivided into two types – Images & Graphics:
1. Real-world pictures captured by a camera are called images.
2. Hand-drawn pictures like sketches, diagrams, and portraits are called graphics.
12. Movies are again divided into two classes – Video & Animation:
1. Those which depict real-world incidents are called motion pictures or motion video.
2. Those which depict artificial or imaginary scenarios are called animation.
13. Text, graphics, and images are together referred to as static elements, because they do not change over time. With further improvements in technology, time-varying elements like audio (sound), video (movies), and animation came into use.
14. 2. Non-Linearity
i. Non-linearity is the capability of jumping or navigating from one point within a presentation to another without appreciable delay.
ii. TV shows and motion pictures are considered linear presentations because the user has to watch the information being displayed in a predefined sequence of frames, as determined by the producer or creator of the show.
iii. The viewer cannot change the sequence of frames or the timing between them as desired.
15. iv. In a multimedia presentation, the user can instantly navigate to different parts of the presentation and display the frames in any order he/she chooses, without appreciable delays, which is why it is called a non-linear presentation.
16. 3. Interactivity – Interactivity is a powerful tool and aims to achieve much more than simply switching media elements on or off. It allows one to get involved with the presentation. It enables the author to create an environment within the presentation where the learner can give inputs and ask the system to provide certain outputs or feedback, simulating a tutor. This is considered very useful for applications like training and learning packages.
17. 4. Digital Representation
i. Multimedia requires instant access to different portions of the presentation.
ii. This is best done inside a digital computer, which stores data on random-access devices like hard discs, CDs, and DVDs. Hence, multimedia presentations are produced and played back on the digital platform.
iii. Each media type – text, images, audio, video – needs to be expressed in digital form before it can be utilized within a presentation.
18. iv. Digital representation has other advantages. Software-based programs can be used to edit the digitized media in various ways to improve their appearance, and to compress file sizes to increase performance efficiency.
19. 5. Integrity
i. Another important characteristic of a multimedia presentation is called integrity.
ii. This means that although several media types may be present and playing simultaneously, they need to be integrated as parts of a single entity, which is the presentation.
iii. We should not be able to separate out the various media and control them independently; rather, they should be controlled from within the framework of the presentation.
20. Uses of Multimedia
1. Home Entertainment
2. Educational Purpose
3. Industrial Training
4. Information Kiosk
5. Corporate Presentation
6. Business
7. Tourism and Travel
8. Electronic Shopping
9. Medicine
10. Engineering Applications
11. Content-Based Storage & Retrieval System (CBSR)
21. 1. Home Entertainment
Applications of multimedia technology related to home entertainment include computer-based games for kids, interactive encyclopedias, storytelling, cartoons, etc. Computer games are one of the best applications of multimedia because of the high degree of interactivity involved.
22. 2. Educational Purpose
Different aspects of the course curriculum which cannot be explained or grasped easily through simple text and images can be presented through video clips, animation, 3D modeling, audio annotations, etc., to make them more comprehensible.
23. 3. Industrial Training
This involves Computer-Based Training packages for employees, both technical and marketing. Successful organizations are required to maintain a high level of staff training and development. Some of the advantages of these courses are:
i. Many people can use each of these courses.
ii. They do not need to spend time away from the office.
iii. People can learn at their own pace.
iv. Full-time instructors are not required.
v. Experimental setups could be reduced, as these can be simulated.
24. 4. Information Kiosk
These are devices where information is accessed through a touch screen and viewed on a monitor. A shopping kiosk would provide consumers with access to an electronic shopping center offering a wide range of information, products, and services. Kiosks can also be used to capture statistical data for in-depth market research on consumer trends.
25. 5. Corporate Presentation
Corporate presentations emphasizing the salient features and activities of a company, its products, and its business partners like suppliers and retailers can be built by incorporating multimedia elements along with textual descriptions.
26. 6. Business
Items difficult to stock, like glass utensils, industrial equipment, etc., can be displayed to prospective buyers by company salespeople through multimedia presentations. Real estate agents can display the interiors and exteriors of buildings along with necessary information like dimensions and price. The benefits include savings on space, inventory, and distribution.
27. 7. Tourism and Travel Industry
Travel companies can market packaged tours by showing prospective customers glimpses of the places they would like to visit, details on lodging, food, special attractions, etc. Hotel owners can utilize the technology to display details of the facilities offered at various hotels in different locations.
28. 8. Electronic Shopping
Customers can compare different products with respect to quality, price, and appearance without leaving their homes or offices. Additionally, such catalogs can also serve as expert systems, which take a list of criteria from customers – like color, shape, size, weight, and appearance – and provide a list of products that satisfy them.
29. 9. Medicine
Multimedia technologies can be used to prepare high-quality magnetic resonance 3D images of human bodies and to practice complicated procedures. Archives of X-ray images, CT scans, ultrasonography images, etc. enable doctors to provide better consultations and could serve as an expert system.
30. 10. Engineering Applications
Multimedia is used extensively in designing mechanical, electrical, electronic, and architectural parts through the use of CAD and CAM applications. These enable engineers to develop software representations of products from various viewpoints, rotate, scale, and move parts and portions, zoom in on critical parts, and try out various combinations before deciding on the final product implementation.
31. 11. Content-Based Storage & Retrieval System (CBSR)
Traditionally, data searching has been performed on textual databases by string matching. As large repositories of media elements like images, audio, and video grow all over the world, efficient methods of searching non-textual media are being developed. An example is the matching of a fingerprint from police records to identify a criminal.
32. Multimedia file
A file that is capable of holding two or more multimedia elements
(text, images, audio, video and animations).
33. Multimedia Container
1. A digital file format that holds audio, video and subtitles.
2. Containers support a variety of audio and video compression
methods and are not tied to one particular audio or video
codec.
3. AVI was the first Windows container format, and
Matroska/MKV is a popular open source container.
34. Metafile
1. A metafile is a file format that can store multiple types of data, such as graphics file formats – i.e., a file that contains other files.
2. It generally refers to graphics files that can hold vector drawings
and bitmaps. For example, Windows Metafiles (WMFs) and
Enhanced Metafiles (EMFs) can store pictures in vector and
bitmap formats as well as text.
3. A Computer Graphics Metafile (CGM) also stores both types of
graphics.
4. They are typically used for vector images, such as Adobe
Illustrator, CorelDRAW, and EPS (Encapsulated PostScript) files,
but can include raster images as well.
35. Input & Output Devices Used in
Multimedia Applications
Multimedia is a mixture of different media -- such as text, video,
audio, graphics and data -- that work together to provide you
with all of the computing functions you need.
To use multimedia, you rely on a team of input and output
devices that are responsible for both transmitting and receiving
information between you and the computer.
36. Input and Output Devices
• Most important components of a multimedia system
• Devices classified as per their use
• Key devices for multimedia output
– Monitors for text and graphics (still and motion)
– Speakers and MIDI interfaces for sound
– Specialized helmets and immersive displays for virtual reality
37. • Key devices for multimedia input
– Keyboard and OCR for text
– Digital cameras, scanners, and CD-ROMs for graphics
– MIDI keyboards, CD-ROMs, and microphones for sound
– Video cameras, CD-ROMs, and frame grabbers for video
– Mice, trackballs, joysticks, and virtual reality gloves and wands for spatial data
– Modems and network interfaces for network data
38. How to Create a Multimedia Presentation?
Here are the basic steps for creating a multimedia presentation.
39. 1. Select a Topic
1. A topic may belong to any of several categories, such as Medicine, Education, Information Kiosk, Corporate Presentation, etc.
2. Topics which can be explained using various media types are more conducive to multimedia presentation.
3. Use of text is not prohibited but should be kept to a minimum.
4. Secondly, multimedia presentations should not be simple page-turners like PPTs; they should have an extra element called interactivity, through which the user can actively get involved with the subject matter.
40. 2. Writing a Story –
1. The focus should be on what the author wants to communicate to his/her audience.
2. Within the story, the author can divide the matter into logical divisions like chapters, sections, topics, etc. for better readability and modularization.
3. The writing style should be textual and should resemble an essay.
41. 3. Writing a Script –
1. While a story focuses on 'what' is communicated, a script emphasizes 'how' the subject matter unfolds.
2. The subject matter of the story should be divided into small modules, one for each screen.
3. Decide how the subject matter for each screen should be divided among the various media.
4. The script could also include other accessory information, like how the elements are displayed on the screen and what transitions are used between screens.
42. 4. Preparing a Storyboard –
1. It depicts information about the background color or image, the appearance of the navigational buttons or menu items, the location and size of the graphics and text, the duration of voice-overs, etc.
2. Items on the screen should not be cluttered; each should occupy a separately designated area.
3. The most important items should be placed centrally so that they catch the user's attention, while accessories like buttons and menus should be placed around the border.
43. 5. Implementation –
1. Implementation means actually creating the physical presentation using the required hardware and software.
2. Implementation has a number of sub-steps:
I. Media items like photos, diagrams, audio, and video clips need to be collected.
II. This may mean either using ready-made items or creating your own.
3. Alternatively, the author can use software to create his/her own items, e.g. a graphic or an animation sequence.
44. 6. Testing and Feedback –
1. Testing and feedback should be carried out to improve the quality of the presentation.
2. This step involves distributing the whole or part of the presentation to sections of the target audience or to experts, and getting feedback from them about possible areas which need improvement.
3. Taking the suggestions and comments of the target audience, the author is supposed to go back to his/her drawing board and make the necessary changes.
45. 7. Final Delivery –
1. Usually the run-time version of the application files is copied onto a CD-ROM and physically handed over to the customer.
2. One should carefully test each portion of the application to ensure it is free from errors before copying it to the CD-ROM.
3. Also, one should distribute only the run-time version (EXE) files instead of the source files.
47. Types of File Formats
Text File Format
Image File Format
Audio File Format
Video File Format
49. TXT (Text)
An unformatted text document created by an editor like Notepad on the Windows platform. Unformatted text documents can be used to transfer textual information between different platforms like Windows, DOS, and UNIX.
50. DOC (Document)
Contains a rich set of formatting capabilities. Since it requires proprietary software, it is not considered a document exchange format.
RTF –
The WordPad editor created RTF files by default, although it has now switched to the DOC format.
51. Image File Formats
BMP, TIFF, JPEG, GIF, PNG, TGA
Images may be stored in a variety of file formats. Each file format is characterized by a specific compression type and color depth.
52. BMP –
BMP is the standard Windows image format on DOS- and Windows-compatible computers. BMP supports RGB, indexed color, and grayscale.
53. JPEG –
The JPEG format supports the CMYK, RGB, and grayscale color modes.
Uses lossy compression techniques.
Supports 24-bit color.
Supported by all browsers.
Suitable for photographs.
54. GIF –
The GIF format supports the RGB and grayscale color modes.
Uses lossless compression techniques.
Supports 8-bit color (256 colors).
Supported by all browsers.
Supports animation.
Suitable for text, artwork, icons, and cartoons.
55. Following are the important differences between JPEG and GIF.

Sr. No. | Key                   | JPEG                                            | GIF
1       | Stands for            | Joint Photographic Experts Group                | Graphics Interchange Format
2       | Compression algorithm | Lossy                                           | Lossless
3       | Image quality         | May lose some image data, causing quality loss  | High quality
4       | Colors                | Supports 16 million colors                      | Supports only 256 colors
5       | Transparency          | Does not support transparency in images         | Supports transparency in images
6       | Extensions            | .jpeg or .jpg                                   | .gif
7       | Animation             | Does not support animation                      | Supports animation
8       | Usage                 | Photography                                     | Logos and animated images
56. TIFF –
It is used to exchange files between applications and computer platforms.
The TIFF format supports CMYK, RGB, grayscale, and indexed color.
Uses lossless compression techniques.
Supports up to 48-bit color.
An appropriate format for printing purposes.
57. PNG –
The PNG format supports the RGB, indexed-color, grayscale, and bitmap color modes.
Uses lossless compression techniques.
Supports 24-bit color and produces background transparency without jagged edges.
Not supported by all (older) browsers.
58. TGA –
The TGA format was designed for systems using the Truevision video board and is commonly supported by MS-DOS color applications.
The TGA format supports the RGB, indexed-color, and grayscale color modes.
Uses lossless (run-length) compression techniques.
Supports 24- and 32-bit color (8 bits x 3 color channels, plus an 8-bit alpha channel).
59. Audio File Formats
WAVE, AIFF, MP3, MIDI, AAC, WMA
These formats store digital audio data in a computer system. The data can be stored compressed (to reduce the file size) or uncompressed.
60. WAV (Waveform Audio)
It is used for uncompressed 8-, 12-, and 16-bit audio files, both mono and multi-channel, at a variety of sampling rates including 44.1 kHz.
Very good sound quality.
You can record your own .wav files from CD, tape, etc.
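The size of an uncompressed WAV file follows directly from these parameters. A minimal sketch of the calculation (the one-minute stereo example is our own illustration, not from the text):

```python
def wav_size_bytes(sample_rate_hz, bits_per_sample, channels, seconds):
    """Uncompressed PCM size = sampling rate x bytes per sample x channels x duration."""
    return sample_rate_hz * (bits_per_sample // 8) * channels * seconds

# One minute of CD-quality stereo audio (44.1 kHz, 16-bit, 2 channels):
size = wav_size_bytes(44100, 16, 2, 60)
print(size)  # 10584000 bytes, i.e. roughly 10 MB per minute
```

This is why WAV files are considered large compared to compressed formats.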
61. MP3 –
MP3 files are actually MPEG (Audio Layer 3) files.
A highly compressed audio format providing almost CD-quality sound.
MP3 can compress a typical song into about 5 MB, which is why it is extensively used for putting audio content on the Internet.
The files can be coded at a variety of bit rates, and provide good results at bit rates of 96 kbps and above.
New technology is emerging that allows you to "stream" the file.
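The "few MB per song" figure follows from the bit rate: a compressed file's size is simply the bit rate times the duration. A sketch (the 4-minute song at 128 kbps is an assumed example, not from the text):

```python
def compressed_size_bytes(bitrate_kbps, seconds):
    """Compressed audio size = bit rate (bits/s) x duration / 8 bits per byte."""
    return bitrate_kbps * 1000 * seconds // 8

# A typical 4-minute song encoded at 128 kbps:
print(compressed_size_bytes(128, 240))  # 3840000 bytes, i.e. close to 4 MB
```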
62. MIDI –
MIDI files contain instructions on how to play a piece of music rather than the audio itself.
The actual music is generated by a digital synthesizer chip which recognizes the instructions and retrieves the corresponding audio samples from a repository of sounds.
Hence the files are very compact in size and ideal for web applications.
63. AIFF –
AIFF is a file format standard used for storing audio data on personal computers.
The audio data in an AIFF file is uncompressed, so the files tend to be much larger than files that use lossless or lossy compression formats.
Types of chunks found in AIFF files include: Common, Sound Data, Marker, Instrument, Comment, Name, Author, and Copyright chunks.
64. WMA (Windows Media Audio)
WMA is a proprietary compressed audio file format from Microsoft.
A WMA file is almost always encapsulated in an Advanced Systems Format (ASF) container.
Files in this format can be played using Windows Media Player, Winamp, and many other alternative media players.
65. AAC (Advanced Audio Coding)
AAC is a lossy data compression scheme intended for audio streams.
AAC was designed as an improved-performance codec relative to MP3 and MPEG-2 audio.
67. A video format is a specific type of file format that saves recorded video material and all its associated information. Videos comprise a large number of individual images as well as audio that aligns with these images. Basically, any video format is composed of a total of four different values.
68. i. Frame rate (sometimes called frame frequency): This is expressed in frames per second (fps). The higher this value, the more images are used to create the video.
ii. Color depth: Information on color and brightness values.
iii. Film format: Information on image resolution and on the aspect ratio of the video (e.g. 16:9 or 4:3).
iv. Audio track: All information regarding the recorded sound.
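Together, these values determine the raw (uncompressed) data rate of a video stream, which is why video compression is essential. A sketch under assumed example figures (640 x 480 frames, 24-bit color, and 30 fps are our own choices, not from the text):

```python
def raw_video_rate_bytes_per_s(width, height, bits_per_pixel, fps):
    """Uncompressed data rate = pixels per frame x bytes per pixel x frame rate."""
    return width * height * bits_per_pixel // 8 * fps

# 640 x 480 frames, 24-bit color depth, 30 fps:
rate = raw_video_rate_bytes_per_s(640, 480, 24, 30)
print(rate)  # 27648000 bytes/s, i.e. about 27.6 MB per second uncompressed
```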
69. AVI (Audio Video Interleave)
The main advantage of this format is that it's one
of the most widely used, so it can be used with
almost all current multimedia applications.
A drawback is that these files are very large in
comparison to other video formats and therefore
require a lot of disk space.
70. MOV (QuickTime Movie) –
The MOV video format was developed by Apple and was
originally intended for use with QuickTime.
The format has the advantage of being easy to
implement in Apple environments and allowing for the
creation of very small files thanks to a high degree of
compression.
On the other hand, the powerful compression is also a
disadvantage, because it results in loss of data and image
quality.
This format is commonly used in professional and semi-
professional domains.
71. MPEG (Motion Picture Experts Group)
This format comes in a number of versions. MPEG-1 and MPEG-2 are often used for transmitting TV signals, and also for DVDs, while MPEG-4 was developed for use on systems with reduced processing power (e.g. smartphones) and lower bandwidth. This enables high image quality at relatively low file sizes. A heavily compressed MPEG-4 file needs to be decompressed by an application before it can be played.
72. MKV – (Matroska)
MKV is currently one of the most popular video file formats on the web. It is a powerful container format that can hold audio tracks, menus, and many other elements in addition to video.
Users choose MKV for its high-quality video files and its vast range of applications. MKV is not natively supported by every player, but suitable codecs and players are freely available online.
The main drawback of this file format is that the level of compression is low, meaning you can't create very small files.
73. WMV – (Windows Media Video)
The WMV format was developed by Microsoft and is still widely used today.
The benefit of this format is that it requires less storage space while maintaining the quality of the original material, meaning that it is suitable for Internet use.
The disadvantage lies in the fact that it is a Windows video format, meaning that it can be difficult or even impossible to play on other systems (e.g. Mac and Linux).
74. DivX –
DivX is a video codec known for its ability to compress lengthy video segments into small sizes.
A typical feature-length movie on DVD is around 7 GB in size; with DivX this can be compressed to around 700 MB, which fits on a CD-ROM, with minimal loss in quality.
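The figures above imply roughly a 10:1 compression ratio, which can be checked directly:

```python
# Figures from the text: ~7 GB on DVD vs ~700 MB with DivX.
dvd_mb = 7 * 1000   # 7 GB expressed in MB
divx_mb = 700
ratio = dvd_mb / divx_mb
print(ratio)  # 10.0 -> about a 10:1 compression ratio
```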
75. OGG
The OGG video file format was developed to bring more flexibility to a market that was largely dominated by rights holders. OGG was therefore one of the first formats that could be used in the popular Linux environment, and it quickly became well established on the market.
The advantage of OGG lies in the fact that it is patent-free, widely accepted, and supported natively by the majority of browsers.
Despite its wide distribution, the OGG video format does have some disadvantages: both Safari and Internet Explorer only partially support it, for example, so your video may not be displayed properly in these browsers.
76. Video format | Developer | Year of release | Applications
MP4 | Moving Picture Experts Group | 2003 | Originally Apple, but can now be used on many other devices too
AVI | Microsoft | 1992 | All common video platforms and devices
MKV | Matroska | 2003 | Only supported by a handful of video players
MOV | Apple | 1991 | Used primarily on Apple devices
OGG | Xiph.Org Foundation | 2008 | Supported by lots of video platforms and players
VOB | DVD Forum | 1997 | Mainly for DVDs
WMV | Microsoft | 2000 | For all digital media where copy protection is required
78. We can divide audio and video services into three broad categories: streaming stored audio/video, streaming live audio/video, and interactive (real-time) audio/video.
82. Streaming Stored Audio/Video
To understand the concept, let us discuss four approaches, each with a different complexity:
1. Using a Web Server
2. Using a Web Server with a Metafile
3. Using a Media Server
4. Using a Media Server and RTSP
83. 1. Using a Web Server
A compressed A/V file can be downloaded like a text file. The client can use the services of HTTP and send a GET message to download the file. The web server then sends the compressed file to the browser.
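In this first approach the client is an ordinary HTTP client: it issues one GET for the whole file and must wait for the complete download before playback. A minimal sketch of the request it would send (the host and path are hypothetical placeholders):

```python
# Approach 1 is a plain HTTP download: one GET request, and the entire
# compressed A/V file arrives before playback can start.
host, path = "media.example.com", "/songs/song.mp3"  # hypothetical names
request = (
    f"GET {path} HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"
)
print(request)
```

The drawback is clear from the model: there is no notion of playing while downloading, which the later approaches address.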
84. 2. Using a Web Server with a Metafile
The media player is directly connected to the web server for downloading the A/V file. The web server stores two files:
i. the actual A/V file, and
ii. a metafile that holds information about the A/V file.
85. 3. Using a Media Server
The problem with the 2nd approach is that the browser and the media player both use the services of HTTP. HTTP is designed to run over TCP. This is appropriate for retrieving the metafile, but not for retrieving the A/V file.
86. The reason is that TCP retransmits a lost or damaged segment, which is counter to the philosophy of streaming. We need to dismiss TCP and its error control; we need to use UDP. However, HTTP, which accesses the web server, and the web server itself are designed for TCP; we need another server, a media server.
87. 4. Using a Media Server and RTSP
RTSP is a control protocol designed to add more functionality to the streaming process. Using RTSP, we can control the playing of the A/V file.
88. UDP
UDP is more suitable for interactive multimedia traffic. UDP supports multicasting and has no retransmission strategy. However, UDP has no provision for time-stamping, sequencing, or mixing. A new transport protocol is needed to provide these missing features.
89. Jitter
As the video is viewed at the remote site, there is a gap between the 1st and 2nd packets, and between the 2nd and 3rd. This phenomenon is called jitter.
90. Time Stamp
One solution to jitter is the use of a time stamp. In the previous example, the 1st, 2nd, and 3rd packets have time stamps of 0, 10, and 20 seconds respectively. If the receiver starts playing back the 1st, 2nd, and 3rd packets at 00:00:08, 00:00:18, and 00:00:28 respectively, there are no gaps between the packets.
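The receiver's fix is a constant playout delay: each packet is played at its time stamp plus a fixed offset, so variable network delay is absorbed. A sketch using the figures above:

```python
def playout_times(timestamps, delay):
    """Play each packet at its media time stamp plus a fixed playout delay."""
    return [ts + delay for ts in timestamps]

# Packets time-stamped 0, 10, 20 s, played back 8 s later than nominal:
print(playout_times([0, 10, 20], 8))  # [8, 18, 28] -> no gaps between packets
```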
91. Mixing
If there is more than one source that can send data at the same time (as in an A/V conference), the traffic is made up of multiple streams. A mixer mathematically adds the signals coming from different sources to create one single signal.
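Mathematically, mixing is just a sample-wise sum of the incoming streams. A sketch with two hypothetical integer sample streams:

```python
def mix(*streams):
    """Add the corresponding samples of several equal-length streams into one signal."""
    return [sum(samples) for samples in zip(*streams)]

# Two hypothetical 4-sample streams from two conference participants:
print(mix([1, 2, 3, 4], [10, 0, -3, 5]))  # [11, 2, 0, 9]
```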
92. RTP (Real-time Transport Protocol)
RTP is the protocol designed to handle real-time traffic on the Internet. RTP does not have a delivery mechanism; it must be used with UDP. RTP stands between UDP and the application program. The main contributions of RTP are its time-stamping, sequencing, and mixing facilities.
93. RTCP (Real-time Transport Control Protocol)
RTP allows only one type of message, one that carries data from the source to the destination. In many cases, there is a need for other messages in a session. These messages control the flow and quality of data and allow the recipient to send feedback to the source(s). RTCP is designed for this purpose.
95. i. Sender Report –
The sender report is sent periodically by the active senders in a conference to report transmission and reception statistics for all RTP packets sent during the interval.
ii. Receiver Report –
The report informs the sender and other receivers about the quality of service.
iii. Source Description Message –
This information can be the name, e-mail address, and telephone number of the owner or controller of the source.
96. iv. Bye Message –
A source sends a Bye message to shut down a stream. It allows the source to announce that it is leaving the conference.
v. Application-Specific Message –
This is a packet for an application that wants to use new message types. It allows the definition of a new message type.
97. Calculate the memory space of an image file:
If a GIF image's size is 640 x 480, how many kilobytes are required for storing the file?
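Since GIF uses 8-bit color (one byte per pixel, as noted earlier), the calculation can be worked out as follows:

```python
width, height = 640, 480
bytes_per_pixel = 1                            # GIF: 8-bit color = 1 byte per pixel
size_bytes = width * height * bytes_per_pixel
print(size_bytes)          # 307200 bytes
print(size_bytes / 1024)   # 300.0 -> 300 KB are required
```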
100. Sound / Audio –
1. Sound is a physical phenomenon produced by the vibration of matter, such as a violin string or a block of wood.
2. As the matter vibrates, pressure variations are created in the air surrounding it.
3. This alternation of high and low pressure is propagated through the air in a wave-like motion.
4. When the wave reaches the human ear, a sound is heard.
5. To process sound, the main system components required are:
101. 1. A microphone for sound input.
2. Amplifiers for boosting the loudness levels.
3. Speakers for output / playback of sound.
102. Fundamental Characteristics of Sound
1. Amplitude –
1. The physical manifestation of amplitude is the intensity of energy of the wave; for sound waves this corresponds to the loudness of the sound.
2. Loudness is measured in a unit called the decibel (dB).
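The decibel is a logarithmic ratio scale. As a sketch, the standard sound-pressure-level formula (the 20 micropascal reference is the usual convention, not stated in the text):

```python
import math

def spl_db(pressure_pa, reference_pa=20e-6):
    """Sound pressure level in dB = 20 * log10(p / p_ref)."""
    return 20 * math.log10(pressure_pa / reference_pa)

# Doubling the sound pressure raises the level by about 6 dB:
print(round(spl_db(0.04) - spl_db(0.02), 1))  # 6.0
```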
103. 2. Frequency –
1. This measures the number of vibrations of a particle in the path of a wave in one second.
2. The physical manifestation of the frequency of a sound wave is the pitch of the sound.
3. A high-pitched sound, like a whistle, has a higher frequency than a dull, flat sound like that of a drum.
104. 3. Waveform –
1. The physical manifestation of waveform is the quality, or timbre, of sound.
2. This helps us distinguish two sounds coming from different instruments, like a guitar and a violin.
3. Two sounds having the same loudness and pitch but different waveforms will be perceived differently by our ears.
105. 4. Speed –
The speed of sound depends on the medium through which the sound travels and on the temperature of the medium, but not on the pressure.
106. Video
Motion video is a combination of images and audio. It consists of a set of still images called frames, displayed to the user one after another at a specific speed known as the frame rate, measured in frames per second (fps). The frame rate should range between 20 and 30 fps for the perception of smooth, realistic motion.
108. i. Component Video:
Higher-end video systems make use of three separate video signals for the red, green, and blue image planes. Each color channel is sent as a separate video signal.
Most computer systems use component video, with separate signals for the R, G, and B components.
For any color separation scheme, component video gives the best color reproduction, since there is no "crosstalk" between the three channels.
However, it requires more bandwidth and good synchronization of the three components.
109. ii. Composite Video
For ease of signal transmission, especially TV broadcasting, and also to reduce cable/channel requirements, the component signals are often combined into a single signal, which is transmitted along a single wire or channel.
In this case, the total bandwidth of the channel is split into separate portions allotted to the luminance and chrominance parts.
110. iii. S-Video
An analog video signal format where the luminance and
chrominance portions are transmitted separately using
multiple wires instead of the same wire as for composite
video.
111. Color Spaces: YUV and YIQ

YUV (PAL), derived from RGB:
Y = 0.299 R + 0.587 G + 0.114 B
U = 0.492 (B − Y)
V = 0.877 (R − Y)

YIQ (NTSC), derived from RGB:
Y = 0.299 R + 0.587 G + 0.114 B
I = 0.74 (R − Y) − 0.27 (B − Y)
Q = 0.48 (R − Y) + 0.41 (B − Y)
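The YUV equations translate directly into code; a sketch (RGB values assumed normalized to the 0..1 range):

```python
def rgb_to_yuv(r, g, b):
    """Convert RGB (0..1) to YUV using the PAL coefficients given above."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

# Pure white carries full luminance and zero chrominance:
y, u, v = rgb_to_yuv(1.0, 1.0, 1.0)
print(round(y, 3), round(u, 3), round(v, 3))  # 1.0 0.0 0.0
```

Separating luminance (Y) from chrominance (U, V) is what lets composite and S-Video systems treat brightness and color differently.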
114. Resolution and Color Presentation
CGA (Color Graphics Adapter): the first color monitor and graphics card for PC computers; supports a resolution of 320 x 200 pixels with 4 colors.
EGA (Enhanced Graphics Adapter): supports a resolution of 640 x 350 pixels with 16 colors.
VGA (Video Graphics Array): supports a resolution of 640 x 480 with 16 colors, or 320 x 200 with 256 colors.
XGA (Extended Graphics Array): supports a resolution of 640 x 480 with 65,536 colors, or 1024 x 768 pixels with 256 colors.
SVGA: supports a resolution of 1024 x 768 (a very popular resolution today) with 24 bits/pixel.
116. i. NTSC (National Television System Committee)
This TV standard is mostly used in North America and Japan. It uses the familiar 4:3 aspect ratio and 525 scan lines per frame at 30 frames per second (fps). NTSC follows the interlaced scanning system, and each frame is divided into two fields, with 262.5 lines per field.
117. ii. PAL Video (Phase Alternating Line)
PAL is a TV standard widely used in Western Europe,
China, India, and many other parts of the world.
PAL uses 625 scan lines per frame, at 25 frames
per second.
118. iii. SECAM (Sequential Color and Memory)
It is the third major broadcast TV standard, used in France,
Russia and the Middle East.
SECAM also uses 625 scan lines per frame, at 25 frames
per second, with a 4:3 aspect ratio and interlaced fields.
SECAM and PAL are very similar. They differ slightly in
their color coding scheme.
120. i. Enhanced Definition Television Systems
(EDTV)
These are conventional systems modified to offer improved
vertical and horizontal resolutions.
EDTV is an attempt to improve the NTSC image by using digital
memory to double the scanning lines from 525 to 1050.
It is designed as an intermediate standard for the transition from
the current European analog standard to the HDTV standard.
121. ii. CCIR (ITU-R: International
Telecommunication Union - Radiocommunication Sector)
Formerly known as the Consultative Committee for
International Radio, it defined a standard for the
digitization of video signals known as the CCIR-601
Recommendation.
A color video signal has three components – a luminance
component and two chrominance components.
The CCIR format is used for both the NTSC and PAL TV systems,
both of which are interlaced formats.
122. iii. Common Intermediate Format (CIF)
CIF is a non-interlaced format.
Its luminance resolution is 360 x 288 at 30 fps, and the
chrominance has half the luminance resolution in both the
horizontal and vertical directions.
SIF (Source Input Format) is usually used for video-
conferencing applications.
Y = 360 x 288, Cb = Cr = 180 x 144
QCIF (Quarter-CIF) is usually used in video-telephony
applications.
Y = 180 x 144, Cb = Cr = 90 x 72
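The chrominance sizes above follow mechanically from halving the luminance resolution in each direction. A tiny sketch of that rule (the function name is my own):

```python
def chroma_resolution(y_width, y_height):
    """Chroma (Cb, Cr) resolution at half the luminance resolution
    in both the horizontal and vertical directions, as in CIF/QCIF."""
    return y_width // 2, y_height // 2

# CIF  luminance 360 x 288 -> chroma 180 x 144
# QCIF luminance 180 x 144 -> chroma  90 x  72
```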
123. Animation
i. Animating is moving something that cannot move on its
own.
ii. Traditionally, images to be animated are hand drawn on
celluloid sheets.
iii. Animations are created from a sequence of still images.
iv. Each image is slightly changed from the previous one
with respect to one or more objects in the image.
v. The images are then displayed rapidly in succession so
that the eye is fooled into perceiving continuous motion
due to persistence of vision.
124. Uses of Animation
i. Animation is widely used in the entertainment
industry.
ii. It is also at the heart of most computer games, making
the graphics more realistic and exciting.
iii. A major use of animation in industrial and scientific
applications is to visualize simulations of scientific
phenomena.
iv. Nowadays animation is increasingly used in education, as
it provides an excellent way to explain dynamic
processes which cannot be easily captured by video.
125. Key-frames and Tweening
Traditionally, animation sequences are created by two types
of artists: the lead artist or expert draws those frames
where major changes take place within the sequence, called
key-frames, while assistants draw a number of frames
in between the key-frames, a process called tweening,
derived from the word 'in-between'.
Tweening is a key process in all types of animation,
including computer animation.
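In computer animation the simplest form of tweening is linear interpolation between the positions recorded in two key-frames. A minimal sketch (function name and (x, y) position representation are my assumptions):

```python
def tween(key_a, key_b, n_inbetweens):
    """Linearly interpolate n in-between positions between two
    key-frame positions given as (x, y) tuples."""
    frames = []
    for i in range(1, n_inbetweens + 1):
        t = i / (n_inbetweens + 1)          # fraction of the way from A to B
        x = key_a[0] + t * (key_b[0] - key_a[0])
        y = key_a[1] + t * (key_b[1] - key_a[1])
        frames.append((x, y))
    return frames

# Three in-betweens from (0, 0) to (100, 40):
# (25.0, 10.0), (50.0, 20.0), (75.0, 30.0)
```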
127. 1. Cel Animation
i. Cel animation is a term from traditional animation.
ii. Animation cels are generally layered, one on top of the
other, to produce a single animation frame.
iii. A frame consists of the background cel and the overlaying
cels, and is like a snapshot of the action at one instant
of time.
129. 2. Path Animation
Path animation does not exist as a collection of frames but
rather as mathematical entities, called vectors, stored by the
animation program.
It involves an image or a collection of images, called a
sprite, that moves as an independent object, like a flying bird,
a bouncing ball, etc.
The sprite moves along a motion path, typically a curve called
a spline. The vectors define how the movement of the sprite
takes place, i.e., they represent the spline as a set of equations.
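One common kind of spline is the quadratic Bezier curve; evaluating it at a sequence of parameter values gives the sprite's position in each frame. A sketch (the function name, control points and frame count are illustrative assumptions):

```python
def quadratic_bezier(p0, p1, p2, t):
    """Point at parameter t (0..1) on a quadratic Bezier curve,
    a simple spline defined by start, control, and end points."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return x, y

# Sprite positions for a 5-frame move along an arc, e.g. a bouncing ball:
path = [quadratic_bezier((0, 0), (50, 100), (100, 0), i / 4) for i in range(5)]
```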
131. 3. 2D Animation
i. 2D animation programs do not take into consideration
the depth of objects and typically depict animated objects
on flat surfaces.
ii. These are drawn taking into account two coordinate
axes, along the X and Y directions.
iii. 2D animation is like traditional animation. The difference is
that the frames are not drawn manually; instead, a computer
program such as Flash or Adobe Animate is used for
animating the fundamental frames.
132. 3D Animation
i. 3D animation models objects by considering space
coordinates and usually involves modeling, rendering
and adding surface properties, lighting and camera
motions.
ii. These are drawn taking into account three coordinate
axes, along the X, Y and Z directions, to define the
locations of objects in space.
133. Animation Techniques
The objectives of these techniques are generally to
improve efficiency, reduce the time involved, or
introduce some innovation over basic cel or path
animation.
134. 1. Onion Skinning
i. Onion skinning is a drawing technique borrowed from
traditional cel animation that helps the animator create
the illusion of smooth motion.
ii. Rather than working on each frame in isolation,
animators lay these transparent cels one on top of the
other.
iii. Onion skinning is an easy way to view a complete sequence
of frames at a glance and to see how each frame flows into
the frames before and after it.
135. 2. Masking
i. A mask in a computer program is, in a sense, a model of
a plastic mask: it protects parts of a frame from the
effects of other editing tools.
ii. This technique can be used to make an animated object
move "behind" the protected area.
136. 3. Motion Cycling
Human and animal motion, such as walking, running and
flying, is mainly a repetitive action that is best represented
by a cycle. A walk cycle typically requires 8 to 12 frames.
137. 4. Flip-Book Animation
i. A flip book is a book with a series of pictures that
change gradually from one page to the next, so
that when the pages are turned rapidly, the pictures
appear to animate, simulating motion or some other
change.
ii. Flip books are not always separate books; they may
appear as added features in ordinary books or
magazines, often in the page corners.
iii. A flip book is an actual book, and each page is a static
image.
iv. The reader flips through all of the pages at an even
pace, resulting in a short animated movie.
138. 5. Rotoscoping and Bluescreening
i. Rotoscoping was an early animation technique that
enabled animators and video editors to trace the contour
of objects on each frame of an animation or video
sequence to create a silhouette called a matte.
ii. The traced contour would then be replaced by something
else to produce special visual effects.
iii. Rotoscoping has been used as a tool for special effects in
action movies.
139. Bluescreening
i. Bluescreening is a technique for shooting live action
against an evenly colored blue background and then
replacing the background with another image.
ii. This is nowadays extensively done as chroma-keying
using digital editing tools, whereby the background
color is selected with a selection tool and replaced
by pasting another background over it.
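At the pixel level, chroma-keying is a per-pixel test: if a pixel is close enough to the key color, take the background pixel instead. A minimal sketch (the function name, tolerance test, and list-of-rows image representation are my assumptions; real tools use more sophisticated color-distance and edge-softening logic):

```python
def chroma_key(frame, background, key_color=(0, 0, 255), tol=30):
    """Replace pixels close to key_color with the background pixel.
    frame and background are equal-sized lists of rows of (r, g, b)."""
    out = []
    for fr_row, bg_row in zip(frame, background):
        row = []
        for fg, bg in zip(fr_row, bg_row):
            # A pixel is "key" if every channel is within tol of the key color.
            close = all(abs(fg[i] - key_color[i]) <= tol for i in range(3))
            row.append(bg if close else fg)
        out.append(row)
    return out
```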
140. 6. Morphing –
i. Morphing is a process of smoothly interpolating between
two different images.
ii. When played back it appears that the first image
gradually and seamlessly changes into the second
image.
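The interpolation at the heart of morphing can be sketched for a single pixel as a cross-dissolve; a full morph also warps the geometry of the two images, which this sketch omits (the function name is my own):

```python
def cross_dissolve(pixel_a, pixel_b, t):
    """Blend two (r, g, b) pixels: t=0 gives image A, t=1 gives image B.
    This is only the color-interpolation step of a morph."""
    return tuple(round((1 - t) * a + t * b) for a, b in zip(pixel_a, pixel_b))

# Halfway between black and white:
# cross_dissolve((0, 0, 0), (255, 255, 255), 0.5) -> (128, 128, 128)
```

Played back over increasing t, the first image gradually becomes the second, exactly as described above.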
141. ANALOG-TO-DIGITAL CONVERSION
A digital signal is superior to an analog signal because –
i. It is more robust to noise and can easily be recovered,
corrected and amplified.
ii. Easy to detect due to discrete level
iii. Easy to encrypt
iv. Cheaper to implement due to advancements in digital
electronics
v. It is simpler to store a digital signal
For this reason, the tendency today is to change an analog
signal to digital data.
142. Pulse Code Modulation (PCM)
PCM is a technique which is used to convert an analog
signal into digital signal.
PCM consists of three steps to digitize an analog signal:
1. Sampling
2. Quantization
3. Binary encoding
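The three PCM steps can be sketched end-to-end for a test sine wave (the function name, the [-1, 1] signal range, and the uniform mid-tread quantizer are my assumptions):

```python
import math

def pcm(signal, fs, duration, n_bits):
    """Sketch of the three PCM steps.

    signal   : function of time t (seconds) returning a value in [-1, 1]
    fs       : sampling rate (samples/second)
    n_bits   : bits per sample for quantization/encoding
    """
    levels = 2 ** n_bits
    # 1. Sampling: take a value every Ts = 1/fs seconds (PAM).
    samples = [signal(n / fs) for n in range(int(duration * fs))]
    # 2. Quantization: map each sample in [-1, 1] to one of `levels` integers.
    quantized = [min(levels - 1, int((s + 1) / 2 * levels)) for s in samples]
    # 3. Binary encoding: write each level as an n_bits-wide binary code word.
    return [format(q, f"0{n_bits}b") for q in quantized]

# A 1 Hz sine sampled at 8 Hz for 1 second, encoded with 3 bits/sample:
bits = pcm(lambda t: math.sin(2 * math.pi * t), fs=8, duration=1, n_bits=3)
```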
144. Sampling –
Sampling is the process of taking a sufficient number of
samples so that the original signal can be represented by
those samples completely and it is possible to
reconstruct the original signal.
The analog signal is sampled every Ts seconds.
Ts is referred to as the sampling interval.
fs = 1/Ts is called the sampling rate or sampling frequency.
The process is referred to as pulse amplitude modulation
(PAM) and the outcome is a signal with analog (non-integer)
values.
146. According to the Nyquist theorem, the sampling rate
must be at least 2 times the highest frequency
contained in the signal.
Nyquist sampling rate for low-pass and bandpass signals
147. Example
For an intuitive example of the Nyquist theorem, let us
sample a simple sine wave at three sampling rates:
fs = 4f (2 times the Nyquist rate), fs = 2f (Nyquist rate), and
fs = f (one-half the Nyquist rate). Next Figure shows the
sampling and the subsequent recovery of the signal.
It can be seen that sampling at the Nyquist rate can
create a good approximation of the original sine wave
(part a).
Oversampling in part b can also create the same
approximation, but it is redundant and unnecessary.
Sampling below the Nyquist rate (part c) does not
produce a signal that looks like the original sine wave.
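The sub-Nyquist case in the example can be checked numerically: sampling a sine at fs = f lands on the same phase every period, so the samples carry no trace of the oscillation (function name and the chosen rates are illustrative):

```python
import math

def sample_sine(f, fs, n):
    """n samples of sin(2*pi*f*t) taken at sampling rate fs."""
    return [math.sin(2 * math.pi * f * k / fs) for k in range(n)]

# At fs = f (half the Nyquist rate) every sample hits a zero crossing:
below = sample_sine(1, 1, 4)   # all approximately zero
# At fs = 4f (twice the Nyquist rate) the wave's shape survives:
above = sample_sine(1, 4, 4)   # approximately 0, 1, 0, -1
```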
149. Quantization
Sampling results in a series of pulses of varying amplitude
values ranging between two limits: a min and a max.
The amplitude values are infinite between the two limits.
We need to map the infinite amplitude values onto a finite
set of known values.
It is the process of converting an infinite-valued signal into
a finite number of levels.
150. Quantization Levels
Quantization Levels refer to the number of different sample
values that can be used to represent a digital quantity.
To start with we may have thousands of different sample
values but we may want to retain only a few hundreds of
them.
151. Quantization Error
During the digitization process we use a lot of approximation
at various stages.
Initially due to sampling we consider the values of the wave
at discrete points of time while discarding the remaining
values.
Then, during the quantization stage, we consider only a
limited number of such sample values while discarding the rest.
Hence, errors are introduced between the digital output and
the analog input.
Specifically, the error introduced at the quantization stage
is referred to as the quantization error.
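The quantization error can be computed directly: with evenly spaced levels, rounding a sample to the nearest level leaves an error of at most half a step. A sketch (the function name and the [-1, 1] range are my assumptions):

```python
def quantization_error(sample, levels, lo=-1.0, hi=1.0):
    """Quantize a sample to the nearest of `levels` evenly spaced
    values in [lo, hi] and return (quantized_value, error)."""
    step = (hi - lo) / (levels - 1)
    q = lo + round((sample - lo) / step) * step
    return q, sample - q

# With 4 levels on [-1, 1] the step is 2/3, so the error magnitude
# is at most 1/3 (half a step):
q, err = quantization_error(0.2, 4)   # nearest level is 1/3
```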
153. Advantages of PCM
1. Encoders allow secured data transmission.
2. It ensures uniform transmission quality.
3. Compatibility of different classes of Traffic in the
Network.
4. Increased utilization of existing circuits.
154. Disadvantages of PCM
1. Pulse code modulation increases the transmission
bandwidth.
2. A PCM system is somewhat more complex than
other systems.