INTERNATIONAL JOURNAL OF COMPUTER ENGINEERING & TECHNOLOGY (IJCET)
ISSN 0976-6367 (Print), ISSN 0976-6375 (Online)
Volume 5, Issue 2, February 2014, pp. 19-29
© IAEME: www.iaeme.com/ijcet.asp
Journal Impact Factor (2014): 4.4012 (Calculated by GISI), www.jifactor.com
A CONTENT BASED MULTIMEDIA RETRIEVAL SYSTEM
Payel Saha
TCET, Thakur Village, Kandivali (E), Mumbai – 101,
Sudhir Sawarkar
D.M.C.E., Airoli-708,

ABSTRACT
Multimedia search and retrieval has become an active field in many contemporary
information systems. This paper presents a scheme for retrieving a multimedia object, i.e. a video clip
with audio. For video retrieval, the system searches for a particular query video clip in a database of
video clips, matching on the basis of motion vector analysis. For audio retrieval, the audio is
separated from the query and matched, using a fingerprint algorithm, against the audio tracks of all
the videos in the database, and the matched files are ranked.
Key words: Multimedia, CBVR, Query, Image, Audio, Motion Compensation
I. INTRODUCTION
The increasing popularity of digital video content has made automatic, user-friendly and
efficient retrieval from video collections an important issue [11, 17]. VideoQ [3] was the first
on-line content-based video search engine providing interactive object-based retrieval and
spatiotemporal queries of video content. Some commercial search engines, such as Google and
Yahoo!, have started to extend their services to video searching on the Internet, and it is already
possible to search for video clips by typing keywords. However, commonly adopted features such as
colour, texture, or motion are still insufficient to describe the rich visual content of a video clip. In
the past few years, the area of content-based multimedia retrieval has attracted worldwide attention.
Among the different types of features used in previous content-based video retrieval (CBVR)
systems, the motion feature has played a very important role [13]. Multimedia search and retrieval
has become an active field since the standardization of MPEG-7. The syntactic information used in
MPEG-7 includes color, texture, shape and motion. The technology of moving-object tracking plays
an important role in those video retrieval systems. First, motion and color information are extracted
from the MPEG-2 compressed stream to determine which pixels belong to moving objects and which
belong to the static background over a period of time. This kind of technology can help find
interesting events in video data and avoid tedious manual searching.
II. SYSTEM FUNCTIONS
A video retrieval system consists of the search for a particular query video clip from a
database of video clips. Following are the basic functions of the system:
1) Matching Video: The motion vector of each video query is unique, and comparing it with
those from the database provides an ideal match. This is based on the assumption that the
entire query video is available in the database.
2) Matching Audio: The audio is separated from the query and matched, using the fingerprint
algorithm, against all the audio files of the videos in the database, and rankings are provided
to the matched files.
3) Search-ranking post-processing: After the final processing of audio and motion vector, the
system provides rankings to the search results, on the basis of the motion vector.

Fig. 1 Flowchart of the system: Start → Video Retrieval Algorithm → Audio Retrieval Algorithm → Video Database Creation → GUI Development → Integration of Codes → End

III. AUDIO PROCESSING
Audio retrieval implements a landmark-based audio fingerprinting system that is very well
suited to identifying small, noisy excerpts from a large number of items. Each audio track is analysed
to find prominent onsets concentrated in frequency, since these onsets are most likely to be preserved
in noise and distortion. These onsets are formed into pairs, parameterized by the frequencies of the
peaks and the time in between them. These values are quantized to give a relatively large number of
distinct landmark hashes [18].
Each reference track is described by the landmarks it contains and the times at which they occur.
This information is held in an inverted index which, for each of the 1 million distinct landmarks, lists
the tracks in which it occurs and the times at which it occurs in those tracks. To identify a query, it is
similarly converted to landmarks. The database is then queried to find all the reference tracks that
share landmarks with the query, along with the relative time differences between where the
landmarks occur in the query and where they occur in the reference tracks. Once a sufficient number
of landmarks have been identified as coming from the same reference track, with the same relative
timing, a match can be confidently declared.
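The matching procedure described above (an inverted index of landmark hashes plus voting on the relative time offset) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the hash values, track names, and the `min_votes` threshold are all illustrative assumptions.

```python
from collections import defaultdict

# Inverted index: landmark hash -> list of (track_id, time) pairs.
index = defaultdict(list)

def add_track(track_id, landmarks):
    """Register a reference track; `landmarks` is a list of (hash, time)."""
    for h, t in landmarks:
        index[h].append((track_id, t))

def match_query(query_landmarks, min_votes=3):
    """Vote on (track, time-offset) pairs; a consistent offset that gathers
    enough votes identifies the reference track the query came from."""
    votes = defaultdict(int)
    for h, t_query in query_landmarks:
        for track_id, t_ref in index.get(h, []):
            votes[(track_id, t_ref - t_query)] += 1
    if not votes:
        return None
    (track, offset), count = max(votes.items(), key=lambda kv: kv[1])
    return (track, offset) if count >= min_votes else None

# Track "a" contains the query's landmarks shifted by 10 time units.
add_track("a", [(101, 12), (205, 15), (317, 19)])
add_track("b", [(101, 40), (999, 41)])
print(match_query([(101, 2), (205, 5), (317, 9)]))  # ('a', 10)
```

Ranking multiple matched files, as the system does, would simply sort the candidate tracks by their vote counts instead of returning only the best one.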
IV. MOTION DETECTION
Here, some image frames without moving objects are used to compute statistical quantities
for the background scene. Then, the foreground pixels are detected and features extracted.
A. Background image
The initial background model is built over the first 50 image frames. Assuming there are no
moving objects in these 50 frames, a reference colour background image with a normal distribution
is built. The background is modelled by computing the sample mean µ(x,y) and variance σ²(x,y) of
the colour images over the sequence of 50 frames. These statistics are calculated separately for each
of the RGB components using the following iterative formulas [4]. For image frame f = 1, …, 50, we
have:

[µ(x,y)]_f = [µ(x,y)]_(f-1) + (1/f)·([C(x,y)]_f - [µ(x,y)]_(f-1))              (1)

[σ²(x,y)]_f = ((f-2)/(f-1))·[σ²(x,y)]_(f-1) + (1/f)·([µ(x,y)]_(f-1) - [C(x,y)]_f)²   (2)

where [·]_f denotes the corresponding value at frame f, with initial values [µ(x,y)]_1 = [C(x,y)]_1
and [σ²(x,y)]_1 = 0. Straightforward calculation shows that equations (1) and (2) yield the sample
mean and variance over the first 50 image frames [4]. The sample mean is the background image.
The background image and the variance of the (x,y)-th pixel's RGB values over the first 50 image
frames are given by [5]:

µ(x,y) = [µ_R(x,y), µ_G(x,y), µ_B(x,y)]                                        (3)

σ²(x,y) = [σ²_R(x,y), σ²_G(x,y), σ²_B(x,y)]                                    (4)
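A minimal NumPy sketch of the recursion in equations (1) and (2), assuming the frames are RGB arrays of equal shape; it reproduces the per-pixel, per-channel sample mean and (unbiased) sample variance over the sequence:

```python
import numpy as np

def model_background(frames):
    """Recursive sample mean/variance over the frames, per pixel and per
    RGB channel, following equations (1) and (2)."""
    frames = [f.astype(np.float64) for f in frames]
    mu = frames[0].copy()        # [mu]_1 = [C]_1
    var = np.zeros_like(mu)      # [sigma^2]_1 = 0
    for f, frame in enumerate(frames[1:], start=2):
        # Equation (2) uses the *previous* mean, so update var first.
        var = (f - 2) / (f - 1) * var + (frame - mu) ** 2 / f
        mu = mu + (frame - mu) / f
    return mu, var
```

Running this over 50 frames gives the same result as `np.mean` and `np.var(ddof=1)` over the stacked frames, without having to keep all 50 frames in memory.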

B. Background subtraction
The term “moving pixels” is defined as the “foreground.” In each new image frame C(x,y),
the foreground can be obtained by comparing their RGB values to the corresponding mean values.
First a binary image D(x,y) with the same dimension as the image C(x,y) is created and all its pixel
values are set to 0. The output of the background subtraction method is defined as follows:
D(x,y) = { 1 (foreground), if |C(x,y) - µ(x,y)| > α·σ(x,y)
         { 0 (background), otherwise                                           (5)

A pixel (x,y) is extracted as foreground if the absolute difference between its RGB values C(x,y)
and the mean µ(x,y) exceeds the threshold, where the parameter α can be adjusted to yield more or
less foreground. Here 4σ is used as the threshold in background subtraction. If the threshold is too
low (e.g. 1σ), too many pixels are falsely labelled foreground; conversely, if the threshold is too high
(e.g. 5σ), too many pixels are falsely labelled background.
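Equation (5) can be sketched in NumPy as below. The paper does not spell out how the three RGB channels are combined into one foreground decision, so marking a pixel as foreground when any channel exceeds the threshold is an assumption here:

```python
import numpy as np

def subtract_background(frame, mu, var, alpha=4.0):
    """Equation (5): mark a pixel as foreground when its value deviates
    from the background mean by more than alpha standard deviations.
    The test is applied per RGB channel; a pixel counts as foreground
    if any channel exceeds the threshold (an assumption, see lead-in)."""
    diff = np.abs(frame.astype(np.float64) - mu)
    return (diff > alpha * np.sqrt(var)).any(axis=-1)
```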
C. Background updating
The background cannot be expected to remain stationary over a long period of time. An adaptive
scheme constantly updates the background as a linear combination of the previous background
image and the current image frame. The recursive estimation of the mean and variance is performed
using equations (6) and (7), which update the background image and sample variance,
respectively [5, 8, 9]:

µ(x,y)_f = β·C(x,y)_f + (1 - β)·µ(x,y)_(f-1)                                   (6)

σ²(x,y)_f = β·(C(x,y)_f - µ(x,y)_f)² + (1 - β)·σ²(x,y)_(f-1)                   (7)

where C(x,y)_f is the current image frame; µ(x,y)_f and σ²(x,y)_f are the mean and variance values
updated for the current frame; and β is the learning rate that determines the speed at which the
distribution's parameters change (0 < β < 1) [7].
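The exponentially weighted update in equations (6) and (7) can be sketched as follows; the value of β is illustrative:

```python
import numpy as np

def update_background(frame, mu, var, beta=0.05):
    """Equations (6) and (7): blend the current frame into the background
    mean and variance with learning rate beta. Note (7) uses the already
    updated mean mu_f."""
    frame = frame.astype(np.float64)
    mu_new = beta * frame + (1 - beta) * mu
    var_new = beta * (frame - mu_new) ** 2 + (1 - beta) * var
    return mu_new, var_new
```

A small β keeps the background stable under brief disturbances; a large β lets it adapt quickly to genuine scene changes.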
D. Shadow elimination
The shadow problem will cause redundant foreground (moving pixels) and decrease the
system’s accuracy. Therefore, there is a need to eliminate the shadow after background subtraction.
Before the shadow elimination process, some pre-processing is needed:
1. The difference image |C(x,y) - µ(x,y)| is converted from RGB to greyscale format.
2. A Gaussian filter is applied to remove isolated points and smooth this greyscale image.
3. The gray-level distribution of the smoothed greyscale image is computed and the minimal
Gaussian value is taken as the threshold. A support map C′(x,y) is an RGB colour image whose
pixel values are set to those of the current frame where the greyscale value exceeds the
threshold, and to 0 otherwise.
This method is defined as follows:

C′(x,y) = { C(x,y), if (greyscale image) > threshold
          { 0,      otherwise                                                  (8)

Equation (8) maintains the motion parts (foreground, shadow, highlight, noise) of C(x,y)
while the non-motion parts are removed.
E. Noise elimination
The foreground image obtained from the above processes may still contain noise, arising
from lighting changes, illumination changes and false matches. To eliminate this noise and improve
the foreground, connected component labelling and morphological operations are applied.
Connected component labelling is used to label all pixels determined to be foreground and to count
the area of each labelled component.
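The labelling-and-area-filtering step can be sketched with SciPy's `ndimage` module; the `min_area` threshold is an illustrative assumption, and the 3×3 structuring element requests the 8-connectivity used later in this paper (SciPy's default is 4-connectivity):

```python
import numpy as np
from scipy import ndimage

def remove_small_components(mask, min_area):
    """Label 8-connected foreground components and drop those whose
    pixel count is below min_area, treating them as noise."""
    labels, num = ndimage.label(mask, structure=np.ones((3, 3)))
    keep = [i for i in range(1, num + 1)
            if ndimage.sum(mask, labels, index=i) >= min_area]
    return np.isin(labels, keep)
```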
F. Motion vector computation
The process of computing motion vectors comprises two steps:
1. Motion estimation.
2. Motion compensation.
1. Motion estimation
In compressed-domain video indexing and retrieval, the feature used is the motion vector.
Unlike conventional image features such as color, texture and shape, motion represents the
two-dimensional temporal change of video content [8]; it is what distinguishes video from still
images. It is well known that a motion vector field is usually composed of camera motion, object
motion, and noise. The global motion in a video is mostly contributed by camera motion. Therefore,
the following four-parameter global motion model, which is fast and also valid for most videos [9],
is used to estimate the camera motion from the motion vector field:
MV_cam = [  zoom    rotate ] [ x ]   [ pan  ]
         [ -rotate  zoom   ] [ y ] + [ tilt ]                                  (9)

The underlying supposition behind motion estimation is that the patterns corresponding to
objects and background in one frame of a video sequence move within the frame to form the
corresponding objects in the subsequent frame. The idea behind block matching is to divide the
current frame into a matrix of ‘macro blocks’ that are then compared with the corresponding block
and its adjacent neighbours in the previous frame, creating a vector that stipulates the movement of a
macro block from one location to another in the previous frame. This movement, calculated for all
the macro blocks comprising a frame, constitutes the motion estimated in the current frame.
The search area for a good macro block match is constrained to p pixels on all four sides
of the corresponding macro block in the previous frame; this ‘p’ is called the search parameter.
Larger motions require a larger p, and the larger the search parameter, the more computationally
expensive motion estimation becomes. Usually the macro block is taken as a square of side
16 pixels, and the search parameter p is 7 pixels. The matching of one macro block with another
is based on the output of a cost function; the macro block that yields the least cost is the one that
matches the current block most closely [1].
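The exhaustive search described above can be sketched as follows, using the sum of absolute differences (SAD) as the cost function — one common choice; the paper does not name its cost function, so SAD is an assumption here:

```python
import numpy as np

def best_match(prev, cur, bx, by, block=16, p=7):
    """Exhaustive block matching: find the motion vector (dx, dy) that
    minimises the SAD between the macro block at (bx, by) in `cur` and
    candidate blocks in `prev` within a +/-p pixel search window."""
    target = cur[by:by + block, bx:bx + block].astype(np.int64)
    best, best_sad = (0, 0), np.inf
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(prev[y:y + block, x:x + block].astype(np.int64) - target).sum()
            if sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best
```

With block = 16 and p = 7 this evaluates up to (2p+1)² = 225 candidate positions per macro block, which is why larger search parameters quickly become expensive.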
In MPEG video coding, each frame is divided into non-overlapping macro blocks (MBs) of
size 16×16. For each MB, the motion vector reveals its displacement between the reference frame
and the current P-frame. The motion vector consists of a horizontal component, x, and a vertical
component, y. Let mv = (mv_x, mv_y) denote the forward motion vector of a MB in a P-frame [10].
Moving-object detection is obtained by computing the magnitude, ρ, and angle, θ, of the
motion vector:

ρ = √((mv_x)² + (mv_y)²)                                                       (10)

θ = tan⁻¹(mv_x / mv_y)                                                         (11)
2. Motion compensation
In general, motion compensation is an algorithmic technique employed in the encoding of
video data for video compression, for example in the generation of MPEG-2 files. Motion
compensation describes a picture in terms of the transformation of a reference picture into the
current picture. The reference picture may be earlier in time or even from the future. When images
can be accurately synthesized from previously transmitted or stored images, the compression
efficiency can be improved.
Using motion compensation, a video stream will contain some full (reference) frames; then
the only information stored for the frames in between would be the information needed to transform
the previous frame into the next frame.
Here block motion compensation (BMC) is used. In BMC, the frames are partitioned into
blocks of pixels (e.g. macroblocks of 16×16 pixels in MPEG). Each block is predicted from a block
of equal size in the reference frame. The blocks are not transformed in any way apart from being
shifted to the position of the predicted block; this shift is represented by a motion vector. To exploit
the redundancy between neighbouring block vectors (e.g. for a single moving object covered by
multiple blocks), it is common to encode only the difference between the current and previous
motion vector in the bit-stream.
Block motion compensation divides up the current frame into non-overlapping blocks, and
the motion compensation vector tells where those blocks come from. The source blocks typically
overlap in the source frame.
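The reconstruction side of BMC can be sketched as below: each block of the predicted frame is copied from the location in the reference frame that its motion vector points to. The nested-list layout of `vectors` is an illustrative assumption:

```python
import numpy as np

def compensate(prev, vectors, block=16):
    """Reconstruct a predicted frame by copying, for each macro block,
    the reference-frame block its motion vector (dx, dy) points to.
    `vectors[row][col]` holds the vector of the block at that grid cell;
    vectors are assumed to keep every source block inside the frame."""
    h, w = prev.shape
    pred = np.zeros_like(prev)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dx, dy = vectors[by // block][bx // block]
            pred[by:by + block, bx:bx + block] = prev[by + dy:by + dy + block,
                                                      bx + dx:bx + dx + block]
    return pred
```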
V. COLOR EXTRACTION AND SIMILARITY MEASUREMENT
A. Color Descriptors
Color descriptors of images and video can be global and local. Global descriptors specify the
overall color content of the image but with no information about the spatial distribution of these
colors. Local descriptors relate to particular image regions and, in conjunction with geometric
properties of these latter, describe also the spatial arrangement of the colors.
B. Color Histograms
The color histogram is the most widely used method owing to its robustness to scaling,
orientation, perspective, and occlusion of images. The histogram denotes the joint distribution of the
three color channels. Human color perception is a merger of three stimuli, R (red), G (green), and
B (blue), which form a color space.
A colour histogram h(image) = (h_k(image), k = 1, …, K) is a K-dimensional vector such that
each component h_k(image) represents the relative number of pixels of colour C_k in the image, that
is, the fraction of pixels that are most similar to the corresponding colour. To build the colour
histogram, the image colours are transformed to an appropriate colour space and quantized
according to a particular codebook of size K.
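A minimal sketch of such a histogram, assuming the RGB cube itself serves as the colour space and a uniform quantization of each channel stands in for the codebook (the paper does not specify its codebook):

```python
import numpy as np

def color_histogram(image, bins_per_channel=4):
    """K-dimensional colour histogram (K = bins^3): the fraction of pixels
    falling into each uniformly quantized RGB cell of a uint8 image."""
    q = (image.astype(np.int64) * bins_per_channel) // 256   # 0 .. bins-1
    codes = (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]
    hist = np.bincount(codes.ravel(), minlength=bins_per_channel ** 3)
    return hist / hist.sum()
```

Because the entries are fractions rather than raw counts, histograms of images with different resolutions remain directly comparable.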
The retrieval system typically contains two mechanisms: similarity measurement and
multidimensional indexing. Similarity measurement is used to find the most similar objects.
Multidimensional indexing is used to accelerate the query performance in the search process.
Similarity measurement plays an important role in retrieval. A query frame is given to the
system, which retrieves similar videos from the database. The distance metric serves as the
similarity measure, which is the key component in content-based video retrieval. In conventional
retrieval, the Euclidean distances between the database entries and the query are calculated and used
for ranking; the query frame is more similar to a database frame if the distance is smaller. If x and y
are the 2D feature vectors of the database index frame and the query frame respectively [14],

then the Euclidean distance is given as:

D = √(Σ_i (x_i - y_i)²)                                                        (12)
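The distance-based ranking described above can be sketched as follows; the function name and calling convention are illustrative:

```python
import numpy as np

def rank_by_distance(query, database):
    """Rank database feature vectors by Euclidean distance (equation (12))
    to the query; smaller distance means more similar, so index 0 of the
    returned list is the best match."""
    q = np.asarray(query, dtype=np.float64)
    dists = [np.sqrt(((np.asarray(v, dtype=np.float64) - q) ** 2).sum())
             for v in database]
    return sorted(range(len(database)), key=lambda i: dists[i])
```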

Fig. 2 Flowchart for motion processing: background image → background subtraction (current image − background image) → threshold of α standard deviations → background / foreground → shadow elimination → noise elimination → motion vector computation and motion compensation → color matching → final output

VI. GRAPHIC USER INTERFACES
The program runs with the help of two separate but linked Graphical User Interfaces (GUIs).
The main GUI is used to obtain all the details of the query clip; the various computations required
for this purpose are performed using different pushbuttons. The second GUI displays the final result
at the end and operates on the video files from the database.

The different functions performed by the main GUI are explained below.

Fig. 3 Main GUI
A. Selection of video query
The ‘Select Video’ pushbutton allows any one video clip to be selected. The video clip is selected
directly from the folder and played in a media player. This is the query clip. The clip is small, should
be at least 5 seconds long, and is a video file in AVI format.
B. Separation of audio
The audio is separated from the query clip in raw (PCM) format, an uncompressed audio
format, and played in a media player. This is executed by the ‘Retrieved Audio’ pushbutton.
After extraction, the raw audio is converted into the standard lossy-compressed .mp3 format
for further processing, since most audio files used today are in mp3 format, which is smaller than
raw audio. This can be done quickly using any external converter.
A separate code processes this mp3 audio clip further. The code searches for the best
possible match of the clip against all the audio files in the database, which are also in mp3 format.
From the listed matches, the top three best matches are obtained and sent for video processing.
C. Calculation of motion vector
The first 50 frames of the query are taken into consideration for further processing and
saved. The image data, a matrix or 3-D (RGB) array of values specifying the colour of each
rectangular area defining the image, is obtained and the current colours are mapped.
The next step is to detect the motion from the extracted frames. This process again involves
several steps. After detecting motion, labelling is done: a new matrix containing labels for the
connected components of the input image (frame) is created. The input image can have any
dimension; the label matrix L is of the same size as the input image. The elements of L are integer
values greater than or equal to 0. The pixels labelled 0 are the background; the pixels labelled 1 make
up one object; the pixels labelled 2 make up a second object; and so on. The default connectivity is 8
for two dimensions and 26 for three dimensions. This process essentially labels the background.
Finally, the motion vectors are obtained and motion compensation is done.

Fig 4 Axis displaying motion in the query clip
The videos given the best ranks in the audio retrieval are stored in a .mat file
called X.mat. This file is then loaded to provide the list of the top 3 ranks on the basis of audio
fingerprint matching. The computation begins when the ‘Click Here’ pushbutton is selected.
The motion vectors for these videos are calculated by running the code from the initial
main.m in a loop. The Euclidean distance between the frames of the query and those of the videos is
also found within the loop. If the Euclidean distance is zero, i.e. an exact match on the basis of
colour is found, that portion of the query is played in a figure.
Once processing leaves the loop, the motion vectors are compared with the target motion
vector and the differences are computed. The differences are arranged in ascending order, providing
the final rankings.
VII. RESULTS


The final result of this CBVR system yields the three best possible matches for the small query
clip selected earlier. When the ‘Play’ pushbutton is clicked, that particular video is played separately
in any media player, as shown in the figure below.

Fig 5 Final Retrieved Video Clip

ACKNOWLEDGMENT
The authors are grateful to their colleagues at their respective institutes for their motivation and
help towards the completion of this paper, as well as for providing valuable advice.
REFERENCES
[1] Aroh Barjatya, “Block Matching Algorithms For Motion Estimation”, Student Member, IEEE, DIP 6620 Spring 2004 Final Project Paper.
[2] Avery Wang, “An Industrial-Strength Audio Search Algorithm”, International Symposium on Music Information Retrieval (ISMIR 2003), Baltimore, MD, Oct. 2003.
[3] S. F. Chang, W. Chen, H. J. Meng, H. Sundaram, D. Zhong, “A fully automated content-based video search engine supporting spatiotemporal queries”, IEEE Trans. on Circuits and Systems for Video Technology, Vol. 8, No. 5, pp. 602-615, 1998.
[4] H. T. Chen, H. H. Lin, T. L. Liu, “Multi-object tracking using dynamical graph matching”, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’01), Vol. 2, p. 210, 2001.
[5] Q. Zang and R. Klette, “Robust background subtraction and maintenance”, Proceedings of the 17th International Conference on Pattern Recognition, Vol. 2, Aug. 23-26, pp. 90-93, 2004.
[6] S. J. McKenna, S. Jabri, Z. Duric, A. Rosenfeld, and H. Wechsler, “Tracking groups of people”, Computer Vision and Image Understanding, Vol. 80, pp. 42-56, 2000.
[7] S. Jabri, Z. Duric, H. Wechsler, A. Rosenfeld, “Detection and location of people in video images using adaptive fusion of color and edge information”, 15th International Conference on Pattern Recognition, Vol. 4, p. 4627, 2000.
[8] Tsuhan Chen, “From Low-Level Features to High-Level Semantics: Are We Bridging the Gap?”, Proceedings of the Seventh IEEE International Symposium on Multimedia, p. 179, 2005.
[9] R. Wang and T. Huang, “Fast camera motion analysis in MPEG domain”, ICIP, Vol. 3, pp. 691-694, 1999.
[10] Jau-Ling Shih, Ming-Chieh Chuang, “A Video Object Retrieval System using Motion and Color Features”, Department of Computer Science and Information Engineering, Chung Hua University, Hsinchu, Taiwan, R.O.C.
[11] A. Yoshitaka, T. Ichikawa, “A survey on content-based retrieval for multimedia databases”, IEEE Trans. on Knowledge and Data Engineering, Vol. 11, No. 1, pp. 81-93, 1999.
[12] Heinrich A. van Nieuwenhuizen, Willie C. Venter and Leenta M. J. Grobler, “The Study and Implementation of Shazam’s Audio Fingerprinting Algorithm for Advertisement Identification”, School of Electrical, Electronic and Computer Engineering, North-West University, Potchefstroom Campus, South Africa.
[13] Chih-Wen Su, Hong-Yuan Mark Liao, Kuo-Chin Fan, “A Motion-Flow-Based Fast Video Retrieval System”, ACM MIR’05, pp. 10-11.
[14] B. V. Patel and B. B. Meshram, “Content Based Video Retrieval Systems”, International Journal of UbiComp (IJU), Vol. 3, No. 2, April 2012.
[15] T. N. Shanmugam, Priya Rajendran, “An Enhanced Content-Based Video Retrieval System Based On Query Clip”, International Journal of Research and Reviews in Applied Sciences, Vol. 1, Issue 3, 2009.
[16] Che-Yen Wen, Liang-Fan Chang, Hung-Hsin Li, “Content based video retrieval with motion vectors and the RGB color model”, Forensic Science Journal, Vol. 6, No. 2, 2007.
[17] C. T. Kuo, L. P. Chen, “Content-based query processing for video databases”, IEEE Trans. on Multimedia, Vol. 2, No. 1, pp. 1-13, 2000.
[18] Daniel P. W. Ellis, “Robust Landmark-Based Audio Fingerprinting”, 2009, http://labrosa.ee.columbia.edu/matlab/fingerprint/
[19] IBM Research TRECVID-2004 Video Retrieval System, http://www.research.ibm.com/people/a/argillander/files/2004/
[20] S. Dagtas, W. Al-Khatib, A. Ghafoor, R. L. Kashyap, “Models for motion-based video indexing and retrieval”, IEEE Transactions on Image Processing, Vol. 9, No. 1, pp. 88-101, 2000.
[21] Y. Tsaig, A. Averbuch, “Automatic segmentation of moving objects in video sequences: a region labeling approach”, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 12, No. 7, pp. 597-612, 2002.
[22] C. W. Ngo, T. C. Pong, H. J. Zhang, “Motion-based video representation for scene change detection”, Int. Journal of Computer Vision, pp. 127-142, 2002.
[23] A. Del Bimbo and P. Pala, “Content-Based Retrieval of 3D Models”, ACM Transactions on Multimedia Computing, Communications and Applications, Vol. 2, No. 1, pp. 20-43, 2006.
[24] Ali Amiri, Mahmood Fathy, and Atusa Naseri, “A Novel Video Retrieval System Using GED-based Similarity Measure”, International Journal of Signal Processing, Image Processing and Pattern Recognition, Vol. 2, No. 3, 2009.
[25] Gopal Thapa, Kalpana Sharma and M. K. Ghose, “Multi Resolution Motion Estimation Techniques for Video Compression: A Survey”, International Journal of Computer Engineering & Technology (IJCET), Volume 3, Issue 2, 2012, pp. 399-406.
[26] Vilas Naik and Sagar Savalagi, “Textual Query Based Sports Video Retrieval by Embedded Text Recognition”, International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 4, 2013, pp. 556-565.

Lec11: Active Contour and Level Set for Medical Image Segmentation
 
Imran2016
Imran2016Imran2016
Imran2016
 
Image Restoration and Denoising By Using Nonlocally Centralized Sparse Repres...
Image Restoration and Denoising By Using Nonlocally Centralized Sparse Repres...Image Restoration and Denoising By Using Nonlocally Centralized Sparse Repres...
Image Restoration and Denoising By Using Nonlocally Centralized Sparse Repres...
 
Optimized linear spatial filters implemented in FPGA
Optimized linear spatial filters implemented in FPGAOptimized linear spatial filters implemented in FPGA
Optimized linear spatial filters implemented in FPGA
 
I0343065072
I0343065072I0343065072
I0343065072
 
image segmentation
image segmentationimage segmentation
image segmentation
 
Design and Implementation of VLSI Architecture for Image Scaling Processor
Design and Implementation of VLSI Architecture for Image  Scaling ProcessorDesign and Implementation of VLSI Architecture for Image  Scaling Processor
Design and Implementation of VLSI Architecture for Image Scaling Processor
 
Parking detection system using background subtraction and HSV color segmentation
Parking detection system using background subtraction and HSV color segmentationParking detection system using background subtraction and HSV color segmentation
Parking detection system using background subtraction and HSV color segmentation
 
Image processing
Image processingImage processing
Image processing
 
A secured data transmission system by reversible data hiding with scalable co...
A secured data transmission system by reversible data hiding with scalable co...A secured data transmission system by reversible data hiding with scalable co...
A secured data transmission system by reversible data hiding with scalable co...
 

Andere mochten auch (20)

40220140501006
4022014050100640220140501006
40220140501006
 
50120140503012
5012014050301250120140503012
50120140503012
 
20320140503020 2-3
20320140503020 2-320320140503020 2-3
20320140503020 2-3
 
20320140503014 2-3
20320140503014 2-320320140503014 2-3
20320140503014 2-3
 
20320140503012
2032014050301220320140503012
20320140503012
 
Kekker Presentation Eng s
Kekker Presentation Eng sKekker Presentation Eng s
Kekker Presentation Eng s
 
Testimonial Letter from Ed Bygrave
Testimonial Letter from Ed BygraveTestimonial Letter from Ed Bygrave
Testimonial Letter from Ed Bygrave
 
L & T DESIGN EXPERIENCE CERTIFICATE
L & T DESIGN EXPERIENCE CERTIFICATEL & T DESIGN EXPERIENCE CERTIFICATE
L & T DESIGN EXPERIENCE CERTIFICATE
 
Inauguracion libertad 4201 MD
Inauguracion libertad 4201 MDInauguracion libertad 4201 MD
Inauguracion libertad 4201 MD
 
In Step Testimonial 2016
In Step Testimonial 2016In Step Testimonial 2016
In Step Testimonial 2016
 
TRANSKIP Gunadarma
TRANSKIP GunadarmaTRANSKIP Gunadarma
TRANSKIP Gunadarma
 
A validação 3.0: publico para pensar!
A validação 3.0: publico para pensar! A validação 3.0: publico para pensar!
A validação 3.0: publico para pensar!
 
Plantilla con-normas-icontec1
Plantilla con-normas-icontec1Plantilla con-normas-icontec1
Plantilla con-normas-icontec1
 
Prabin Koirala 2
Prabin Koirala 2Prabin Koirala 2
Prabin Koirala 2
 
La television educativa . marco tulio vargas
La television educativa . marco tulio vargasLa television educativa . marco tulio vargas
La television educativa . marco tulio vargas
 
Record of achievement1
Record of achievement1Record of achievement1
Record of achievement1
 
Libros Digitales
Libros DigitalesLibros Digitales
Libros Digitales
 
APD-JobStreet
APD-JobStreetAPD-JobStreet
APD-JobStreet
 
2015 AgExpo Agenda
2015 AgExpo Agenda2015 AgExpo Agenda
2015 AgExpo Agenda
 
Business Management
Business ManagementBusiness Management
Business Management
 

Ähnlich wie 50120140502003

A Framework for Soccer Video Processing and AnalysisBased on Enhanced Algorit...
A Framework for Soccer Video Processing and AnalysisBased on Enhanced Algorit...A Framework for Soccer Video Processing and AnalysisBased on Enhanced Algorit...
A Framework for Soccer Video Processing and AnalysisBased on Enhanced Algorit...CSCJournals
 
Application of gaussian filter with principal component analysis
Application of gaussian filter with principal component analysisApplication of gaussian filter with principal component analysis
Application of gaussian filter with principal component analysisIAEME Publication
 
Application of gaussian filter with principal component analysis
Application of gaussian filter with principal component analysisApplication of gaussian filter with principal component analysis
Application of gaussian filter with principal component analysisIAEME Publication
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemiaemedu
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemIAEME Publication
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemiaemedu
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemiaemedu
 
International Journal of Image Processing (IJIP) Volume (3) Issue (4)
International Journal of Image Processing (IJIP) Volume (3) Issue (4)International Journal of Image Processing (IJIP) Volume (3) Issue (4)
International Journal of Image Processing (IJIP) Volume (3) Issue (4)CSCJournals
 
A FAST SEARCH ALGORITHM FOR LARGE VIDEO DATABASE USING HOG BASED FEATURES
A FAST SEARCH ALGORITHM FOR LARGE VIDEO DATABASE USING HOG BASED FEATURESA FAST SEARCH ALGORITHM FOR LARGE VIDEO DATABASE USING HOG BASED FEATURES
A FAST SEARCH ALGORITHM FOR LARGE VIDEO DATABASE USING HOG BASED FEATUREScscpconf
 
A fast search algorithm for large
A fast search algorithm for largeA fast search algorithm for large
A fast search algorithm for largecsandit
 
A FAST SEARCH ALGORITHM FOR LARGE VIDEO DATABASE USING HOG BASED FEATURES
A FAST SEARCH ALGORITHM FOR LARGE VIDEO DATABASE USING HOG BASED FEATURESA FAST SEARCH ALGORITHM FOR LARGE VIDEO DATABASE USING HOG BASED FEATURES
A FAST SEARCH ALGORITHM FOR LARGE VIDEO DATABASE USING HOG BASED FEATUREScscpconf
 
FPGA DESIGN FOR H.264/AVC ENCODER
FPGA DESIGN FOR H.264/AVC ENCODERFPGA DESIGN FOR H.264/AVC ENCODER
FPGA DESIGN FOR H.264/AVC ENCODERIJCSEA Journal
 
Efficient Approach for Content Based Image Retrieval Using Multiple SVM in YA...
Efficient Approach for Content Based Image Retrieval Using Multiple SVM in YA...Efficient Approach for Content Based Image Retrieval Using Multiple SVM in YA...
Efficient Approach for Content Based Image Retrieval Using Multiple SVM in YA...csandit
 
EFFICIENT APPROACH FOR CONTENT BASED IMAGE RETRIEVAL USING MULTIPLE SVM IN YA...
EFFICIENT APPROACH FOR CONTENT BASED IMAGE RETRIEVAL USING MULTIPLE SVM IN YA...EFFICIENT APPROACH FOR CONTENT BASED IMAGE RETRIEVAL USING MULTIPLE SVM IN YA...
EFFICIENT APPROACH FOR CONTENT BASED IMAGE RETRIEVAL USING MULTIPLE SVM IN YA...cscpconf
 
Gesture Recognition Based Video Game Controller
Gesture Recognition Based Video Game ControllerGesture Recognition Based Video Game Controller
Gesture Recognition Based Video Game ControllerIRJET Journal
 
Comparison of ezw and h.264 2
Comparison of ezw and h.264 2Comparison of ezw and h.264 2
Comparison of ezw and h.264 2IAEME Publication
 
Human Action Recognition using Contour History Images and Neural Networks Cla...
Human Action Recognition using Contour History Images and Neural Networks Cla...Human Action Recognition using Contour History Images and Neural Networks Cla...
Human Action Recognition using Contour History Images and Neural Networks Cla...IRJET Journal
 

Ähnlich wie 50120140502003 (20)

A Framework for Soccer Video Processing and AnalysisBased on Enhanced Algorit...
A Framework for Soccer Video Processing and AnalysisBased on Enhanced Algorit...A Framework for Soccer Video Processing and AnalysisBased on Enhanced Algorit...
A Framework for Soccer Video Processing and AnalysisBased on Enhanced Algorit...
 
Application of gaussian filter with principal component analysis
Application of gaussian filter with principal component analysisApplication of gaussian filter with principal component analysis
Application of gaussian filter with principal component analysis
 
Application of gaussian filter with principal component analysis
Application of gaussian filter with principal component analysisApplication of gaussian filter with principal component analysis
Application of gaussian filter with principal component analysis
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance system
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance system
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance system
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance system
 
International Journal of Image Processing (IJIP) Volume (3) Issue (4)
International Journal of Image Processing (IJIP) Volume (3) Issue (4)International Journal of Image Processing (IJIP) Volume (3) Issue (4)
International Journal of Image Processing (IJIP) Volume (3) Issue (4)
 
A FAST SEARCH ALGORITHM FOR LARGE VIDEO DATABASE USING HOG BASED FEATURES
A FAST SEARCH ALGORITHM FOR LARGE VIDEO DATABASE USING HOG BASED FEATURESA FAST SEARCH ALGORITHM FOR LARGE VIDEO DATABASE USING HOG BASED FEATURES
A FAST SEARCH ALGORITHM FOR LARGE VIDEO DATABASE USING HOG BASED FEATURES
 
A fast search algorithm for large
A fast search algorithm for largeA fast search algorithm for large
A fast search algorithm for large
 
A FAST SEARCH ALGORITHM FOR LARGE VIDEO DATABASE USING HOG BASED FEATURES
A FAST SEARCH ALGORITHM FOR LARGE VIDEO DATABASE USING HOG BASED FEATURESA FAST SEARCH ALGORITHM FOR LARGE VIDEO DATABASE USING HOG BASED FEATURES
A FAST SEARCH ALGORITHM FOR LARGE VIDEO DATABASE USING HOG BASED FEATURES
 
FPGA DESIGN FOR H.264/AVC ENCODER
FPGA DESIGN FOR H.264/AVC ENCODERFPGA DESIGN FOR H.264/AVC ENCODER
FPGA DESIGN FOR H.264/AVC ENCODER
 
Efficient Approach for Content Based Image Retrieval Using Multiple SVM in YA...
Efficient Approach for Content Based Image Retrieval Using Multiple SVM in YA...Efficient Approach for Content Based Image Retrieval Using Multiple SVM in YA...
Efficient Approach for Content Based Image Retrieval Using Multiple SVM in YA...
 
EFFICIENT APPROACH FOR CONTENT BASED IMAGE RETRIEVAL USING MULTIPLE SVM IN YA...
EFFICIENT APPROACH FOR CONTENT BASED IMAGE RETRIEVAL USING MULTIPLE SVM IN YA...EFFICIENT APPROACH FOR CONTENT BASED IMAGE RETRIEVAL USING MULTIPLE SVM IN YA...
EFFICIENT APPROACH FOR CONTENT BASED IMAGE RETRIEVAL USING MULTIPLE SVM IN YA...
 
Gesture Recognition Based Video Game Controller
Gesture Recognition Based Video Game ControllerGesture Recognition Based Video Game Controller
Gesture Recognition Based Video Game Controller
 
Comparison of ezw and h.264 2
Comparison of ezw and h.264 2Comparison of ezw and h.264 2
Comparison of ezw and h.264 2
 
40120140502005
4012014050200540120140502005
40120140502005
 
Human Action Recognition using Contour History Images and Neural Networks Cla...
Human Action Recognition using Contour History Images and Neural Networks Cla...Human Action Recognition using Contour History Images and Neural Networks Cla...
Human Action Recognition using Contour History Images and Neural Networks Cla...
 
50120140501019
5012014050101950120140501019
50120140501019
 
Sharath copy
Sharath   copySharath   copy
Sharath copy
 

Mehr von IAEME Publication

IAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdfIAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdfIAEME Publication
 
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...IAEME Publication
 
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSA STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSIAEME Publication
 
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSBROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSIAEME Publication
 
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSDETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSIAEME Publication
 
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSIAEME Publication
 
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOVOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOIAEME Publication
 
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IAEME Publication
 
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYVISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYIAEME Publication
 
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...IAEME Publication
 
GANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICEGANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICEIAEME Publication
 
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...IAEME Publication
 
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...IAEME Publication
 
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...IAEME Publication
 
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...IAEME Publication
 
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...IAEME Publication
 
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...IAEME Publication
 
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...IAEME Publication
 
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...IAEME Publication
 
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENTA MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENTIAEME Publication
 

Mehr von IAEME Publication (20)

IAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdfIAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdf
 
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
 
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSA STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
 
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSBROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
 
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSDETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
 
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
 
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOVOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
 
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
 
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYVISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
 
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
 
GANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICEGANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICE
 
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
 
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
 
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
 
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
 
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
 
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
 
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
 
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
 
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENTA MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
 

Kürzlich hochgeladen

08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreternaman860154
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Breaking the Kubernetes Kill Chain: Host Path Mount
In the past few years, the area of content-based multimedia retrieval has attracted worldwide attention. Among the different types of features used in previous content-based video retrieval (CBVR) systems, the motion feature has played a very important role [13]. Multimedia search and retrieval became an active field after the standardization of MPEG-7, whose syntactic information includes colour, texture, shape and motion. The technology of moving-object tracking plays
an important role in such video retrieval systems. First, motion and colour information are extracted from the MPEG-2 compressed stream to determine which pixels belong to moving objects and which belong to the static background over a period of time. This kind of technology can help find interesting events in video data and avoid tedious manual searching.

II. SYSTEM FUNCTIONS

A video retrieval system searches for a particular query video clip in a database of video clips. The basic functions of the system are:

1) Matching video: the motion vector of each video query is unique, and comparing it with those from the database provides an ideal match. This is based on the assumption that the entire query video is available in the database.

2) Matching audio: the audio of the query is separated and matched, using the fingerprint algorithm, against the audio tracks of all videos in the database, and rankings are provided for the matched files.

3) Post-processing of search rankings: after the final processing of audio and motion vectors, the system ranks the search results on the basis of the motion vector.

Fig. 1 Flowchart of the system (Start → Video Retrieval Algorithm → GUI Development → Audio Retrieval Algorithm → Video Database Creation → Integration of Codes → End)
III. AUDIO PROCESSING

Audio retrieval implements a landmark-based audio fingerprinting system that is well suited to identifying short, noisy excerpts from a large number of items. Each audio track is analysed to find prominent onsets concentrated in frequency, since such onsets are most likely to be preserved under noise and distortion. The onsets are formed into pairs, parameterized by the frequencies of the two peaks and the time between them. These values are quantized to give a relatively large number of distinct landmark hashes [18]. Each reference track is described by the landmarks it contains and the times at which they occur. This information is held in an inverted index which, for each of the 1 million distinct landmarks, lists the tracks in which it occurs and the times at which it occurs in those tracks. To identify a query, it is similarly converted to landmarks. The database is then queried to find all reference tracks that share landmarks with the query, together with the relative time differences between where the landmarks occur in the query and where they occur in the reference tracks. Once a sufficient number of landmarks with the same relative timing have been identified as coming from the same reference track, a match can be confidently declared.

IV. MOTION DETECTION

Here, some image frames without moving objects are used to compute statistical quantities for the background scene. The foreground pixels are then detected and features are extracted.

A. Background image

The initial background image is modelled over the first 50 image frames. Under the assumption that these 50 frames contain no moving objects, a reference colour background image with a normal distribution is built.
The background is modelled by computing the per-pixel sample mean \mu(x, y) and variance \sigma^2(x, y) of the colour images over a sequence of 50 frames. These statistics are calculated separately for each of the RGB components using the following iterative formulas [4]. For image frames f = 1, \dots, 50:

[\mu(x, y)]_f = [\mu(x, y)]_{f-1} + \frac{1}{f}\left([C(x, y)]_f - [\mu(x, y)]_{f-1}\right)   (1)

[\sigma^2(x, y)]_f = \frac{f-2}{f-1}[\sigma^2(x, y)]_{f-1} + \frac{1}{f}\left([\mu(x, y)]_{f-1} - [C(x, y)]_f\right)^2   (2)

where [\cdot]_f denotes the corresponding value at frame f, with initial values [\mu(x, y)]_1 = [C(x, y)]_1 and [\sigma^2(x, y)]_1 = 0. Straightforward calculation shows that equations (1) and (2) yield the sample mean and variance over the first 50 image frames [4]. The sample mean is the background image. The background image and the variance of the (x, y)-th pixel's RGB values over the first 50 frames are given by [5]:

\mu(x, y) = [\mu_R(x, y), \mu_G(x, y), \mu_B(x, y)]   (3)

\sigma^2(x, y) = [\sigma^2_R(x, y), \sigma^2_G(x, y), \sigma^2_B(x, y)]   (4)

B. Background subtraction

The moving pixels are referred to as the "foreground". In each new image frame C(x, y), the foreground is obtained by comparing each pixel's RGB values to the corresponding mean values. First, a binary image D(x, y) with the same dimensions as C(x, y) is created with all its pixel values set to 0. The output of the background subtraction method is defined as follows:
D(x, y) = \begin{cases} 1 \ (\text{foreground}), & \text{if } |C(x, y) - \mu(x, y)| > \alpha \cdot \sigma(x, y) \\ 0 \ (\text{background}), & \text{otherwise} \end{cases}   (5)

A pixel (x, y) is extracted as foreground if the absolute difference between its RGB values C(x, y) and \mu(x, y) satisfies the condition above, where the parameter \alpha can be adjusted to yield more or less foreground. Here 4\sigma is used as the threshold in background subtraction: a threshold that is too low (e.g. 1\sigma) causes too many false foreground pixels, while a threshold that is too high (e.g. 5\sigma) causes too many false background pixels.

C. Background updating

The background cannot be expected to remain stationary over a long period of time. An adaptive scheme constantly updates the background as a linear combination of the previous background image and the current image frame. The recursive estimation of the mean and variance is performed using equations (6) and (7), which update the background image and sample variance, respectively [5, 8, 9]:

\mu(x, y)_f = \beta \cdot C(x, y)_f + (1 - \beta) \cdot \mu(x, y)_{f-1}   (6)

\sigma^2(x, y)_f = \beta \cdot \left(C(x, y)_f - \mu(x, y)_f\right)^2 + (1 - \beta) \cdot \sigma^2(x, y)_{f-1}   (7)

where C(x, y)_f is the current image frame; \mu(x, y)_f and \sigma^2(x, y)_f are the updated mean and variance; and \beta is the learning rate that determines the speed at which the distribution's parameters change (0 < \beta < 1) [7].

D. Shadow elimination

Shadows cause redundant foreground (moving pixels) and decrease the system's accuracy, so they must be eliminated after background subtraction. Before the shadow elimination process, some pre-processing is needed:

1. The matching part of the image, |C(x, y) - \mu(x, y)|, is converted from RGB to greyscale format.
2. A Gaussian filter is used to remove isolated points and smooth the greyscale image.
3.
The grey-level distribution of the new greyscale image is computed, and the minimal Gaussian value is taken as the threshold. A support map C'(x, y) is an RGB colour image whose pixel values are set to those of the current image frame where the greyscale value exceeds the threshold, and to 0 otherwise:

C'(x, y) = \begin{cases} C(x, y), & \text{if (greyscale image)} > \text{threshold} \\ 0, & \text{otherwise} \end{cases}   (8)

Equation (8) keeps the motion parts (foreground, shadow, highlight, noise) of C(x, y) while the non-motion parts are removed.

E. Noise elimination

After the above processing, the obtained foreground image may still contain noise arising from lighting changes, illumination changes and false matches. To eliminate this noise and improve the foreground, connected component labelling and morphological operations are applied to
noise elimination. Connected component labelling is used to label all pixels determined to be foreground and to count the area of each labelled component.

F. Motion vector computation

The computation of motion vectors comprises two steps: motion estimation and motion compensation.

1. Motion estimation

In compressed-domain video indexing and retrieval, the feature used is the motion vector. Unlike conventional image features such as colour, texture and shape, motion is the most significant feature in video, representing the two-dimensional temporal change of video content [8]; it is what distinguishes video from still images. It is well known that a motion vector field is usually composed of camera motion, object motion and noise, and that the global motion in a video is mostly contributed by the camera motion. Therefore, the following four-parameter global motion model, which is fast and valid for most videos [9], is used to estimate the camera motion from the motion vector field:

\vec{MV}_{cam} = \begin{pmatrix} zoom & -rotate \\ rotate & zoom \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} pan \\ tilt \end{pmatrix}   (9)

The underlying supposition behind motion estimation is that the patterns corresponding to objects and background in one frame of a video sequence move within the frame to form the corresponding objects in the subsequent frame. The idea behind block matching is to divide the current frame into a matrix of "macro blocks" that are then compared with the corresponding block and its adjacent neighbours in the previous frame, creating a vector that stipulates the movement of a macro block from one location to another in the previous frame. This movement, calculated for all the macro blocks comprising a frame, constitutes the motion estimated in the current frame.
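The block-matching idea just described can be sketched as below. This is a minimal illustration, not the paper's implementation: SAD (sum of absolute differences) is assumed as the cost function, and the frame contents, block size and search range are toy values.

```python
import numpy as np

def best_match(cur, ref, by, bx, block=16, p=7):
    """Exhaustive block matching: compare the macro block at (by, bx) in the
    current frame with every candidate within +/-p pixels in the previous
    (reference) frame, and return the displacement (dy, dx) that minimises
    the cost. SAD is assumed as the cost function."""
    H, W = ref.shape
    blk = cur[by:by + block, bx:bx + block].astype(float)
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > H or x + block > W:
                continue  # candidate block would fall outside the frame
            cost = np.abs(blk - ref[y:y + block, x:x + block]).sum()
            if cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv

# Toy check: the current frame is the reference shifted down 1 and right 2,
# so an interior block's content is found at displacement (-1, -2).
rng = np.random.default_rng(7)
ref = rng.integers(0, 256, size=(32, 32)).astype(float)
cur = np.roll(ref, shift=(1, 2), axis=(0, 1))
mv = best_match(cur, ref, by=8, bx=8, block=8, p=4)
```

Halving the search range p roughly quarters the number of candidate blocks, which is why the search parameter dominates the cost of this exhaustive search.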
The search area for a good macro block match is constrained to p pixels on all four sides of the corresponding macro block in the previous frame; p is called the search parameter. Larger motions require a larger p, and the larger the search parameter, the more computationally expensive the process of motion estimation becomes. Usually the macro block is taken as a square of side 16 pixels, and the search parameter p is 7 pixels. The matching of one macro block with another is based on the output of a cost function; the macro block that yields the least cost is the one that matches the current block most closely [1].

In MPEG video coding, each frame is divided into non-overlapping macro blocks (MBs) of size 16 × 16. For each MB, the motion vector reveals its displacement between the reference frame and the current P-frame. The motion vector consists of a horizontal component, x, and a vertical component, y. Let mv = (mv_x, mv_y) denote the forward motion vector of a MB in a P-frame [10]. Moving-object detection is obtained by computing the magnitude, \rho, and angle, \theta, of the motion vector:

\rho = \sqrt{(mv_x)^2 + (mv_y)^2}   (10)

\theta = \tan^{-1}\left(\frac{mv_y}{mv_x}\right)   (11)
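Equations (10) and (11) amount to a Cartesian-to-polar conversion of the motion vector. A small sketch (using atan2 rather than a bare arctangent, so the angle is resolved correctly in all four quadrants):

```python
import math

def polar(mv_x, mv_y):
    """Magnitude (eq. 10) and angle (eq. 11) of a forward motion vector."""
    rho = math.hypot(mv_x, mv_y)      # sqrt(mv_x**2 + mv_y**2)
    theta = math.atan2(mv_y, mv_x)    # quadrant-aware arctangent, radians
    return rho, theta

rho, theta = polar(3.0, 4.0)  # rho = 5.0
```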
2. Motion compensation

Motion compensation is an algorithmic technique employed in the encoding of video data for video compression, for example in the generation of MPEG-2 files. It describes a picture in terms of a transformation of a reference picture into the current picture; the reference picture may be earlier in time or even from the future. When images can be accurately synthesized from previously transmitted or stored images, the compression efficiency can be improved: a video stream contains some full (reference) frames, and the only information stored for the frames in between is that needed to transform the previous frame into the next.

Here block motion compensation (BMC) is used. In BMC, the frames are partitioned into blocks of pixels (macroblocks of 16 × 16 pixels in MPEG). Each block is predicted from a block of equal size in the reference frame; the blocks are not transformed in any way apart from being shifted to the position of the predicted block, and this shift is represented by a motion vector. To exploit the redundancy between neighbouring block vectors (e.g. for a single moving object covered by multiple blocks), it is common to encode only the difference between the current and previous motion vector in the bit-stream. Block motion compensation divides the current frame into non-overlapping blocks, and each motion compensation vector tells where its block comes from; the source blocks typically overlap in the source frame.

V. COLOR EXTRACTION AND SIMILARITY MEASUREMENT

A. Color Descriptors

Color descriptors of images and video can be global or local.
Global descriptors specify the overall colour content of the image, with no information about the spatial distribution of those colours. Local descriptors relate to particular image regions and, together with the geometric properties of those regions, also describe the spatial arrangement of the colours.

B. Color Histograms

The colour histogram is the most widely used method owing to its robustness to scaling, orientation, perspective and occlusion of images. The histogram denotes the joint distribution of the three colour channels. Human colour perception is a merger of three stimuli, R (red), G (green) and B (blue), which form a colour space. A colour histogram h(image) = (h_k(image), k = 1, \dots, K) is a K-dimensional vector in which each component h_k(image) represents the relative number of pixels of colour C_k in the image, that is, the fraction of pixels most similar to that colour. To build the colour histogram, the image colours are transformed to an appropriate colour space and quantized according to a particular codebook of size K.

The retrieval system typically contains two mechanisms: similarity measurement and multidimensional indexing. Similarity measurement is used to find the most similar objects; multidimensional indexing is used to accelerate query performance during search. Similarity measurement plays an important role in retrieval: a query frame is given to the system, which retrieves similar videos from the database. The distance metric serves as the similarity measure, the key component in content-based video retrieval. In conventional retrieval, the Euclidean distances between the database entries and the query are calculated and used for ranking; the query frame is more similar to a database frame when the distance is smaller. Let x and y be the feature vectors of a database index frame and the query frame, respectively [14].
Then the Euclidean distance is given as:

D = \sqrt{\sum_i (x_i - y_i)^2}   (12)

Fig. 2 Flowchart for motion processing (background image → background subtraction = current image − background image → threshold of \alpha standard deviations → background / foreground → shadow elimination → noise elimination → motion vector computation and motion compensation → colour matching → final output)

VI. GRAPHIC USER INTERFACES

The program runs with the help of two separate but linked Graphic User Interfaces (GUIs). The main GUI is used to obtain all the details of the query clip; the various computations required for this purpose are performed using different pushbuttons. The other, retrieved GUI displays the final result at the end and works on the video files from the database.
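Looking back at Section V, the colour histogram construction and the Euclidean distance of equation (12) can be sketched as below. This is an illustrative sketch only: uniform quantization stands in for the codebook of size K, and the bin count per channel is arbitrary.

```python
import numpy as np

def color_histogram(img, bins=4):
    """K-dimensional colour histogram with K = bins**3: each RGB channel is
    uniformly quantized into `bins` levels, and each component is the
    fraction of pixels falling in that colour cell."""
    px = np.asarray(img).reshape(-1, 3).astype(int)      # one row per pixel
    q = np.clip(px * bins // 256, 0, bins - 1)           # quantize 0..255
    cells = (q[:, 0] * bins + q[:, 1]) * bins + q[:, 2]  # flatten 3-D cell index
    hist = np.bincount(cells, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def euclidean_distance(x, y):
    """Equation (12): the query frame is more similar when D is smaller."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sqrt(((x - y) ** 2).sum()))

# Two uniform frames: identical frames give distance 0; frames whose colours
# fall in disjoint cells give distance sqrt(2) for normalized histograms.
white = np.full((4, 4, 3), 255, dtype=np.uint8)
black = np.zeros((4, 4, 3), dtype=np.uint8)
d_same = euclidean_distance(color_histogram(white), color_histogram(white))
d_diff = euclidean_distance(color_histogram(white), color_histogram(black))
```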
The different functions performed by the main GUI are explained below.

Fig. 3 Main GUI

A. Selection of video query

The 'Select Video' pushbutton allows the selection of any one video clip. The video clip is selected directly from the folder and played in a media player. This is the query clip: a small AVI video file of at least 5 seconds.

B. Separation of audio

The audio is separated from the query clip in raw audio format, i.e. uncompressed PCM, and played in a media player. This is executed by the 'Retrieved Audio' pushbutton. After extraction, the raw audio is converted into the standard lossy compressed .mp3 format for further processing, since most audio files in use today are in mp3 format, which is smaller than raw audio; this can be done quickly with any external converter. A separate code processes this mp3 audio clip further: it searches for the best possible matches of the clip among all the audio files in the database, which are also in mp3 format. From the listed matches, the top three best matches are obtained and sent for video processing.

C. Calculation of motion vector

The first 50 frames of the query are taken under consideration for further processing and saved. The image data, a matrix or 3-D (RGB) array of values specifying the colour of each rectangular area of the image, is obtained and the current colours are mapped. The next step is to detect the motion from the extracted frames, which again involves several steps. After detecting motion, labelling is done: a new matrix containing a label for each connected component of the input frame is created.
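Such connected-component labelling, followed by removal of small components as noise, might be sketched as follows. This is an illustrative two-dimensional implementation (8-connectivity, breadth-first flood fill), not the code used in the paper, and min_area is an assumed parameter.

```python
import numpy as np
from collections import deque

def remove_small_components(mask, min_area=3, connectivity=8):
    """Label connected foreground components of a boolean mask and drop
    those smaller than min_area pixels. Label 0 is the background; labels
    1, 2, ... identify the objects, as described in the text."""
    mask = np.asarray(mask, dtype=bool)
    labels = np.zeros(mask.shape, dtype=int)
    if connectivity == 8:
        nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                (0, 1), (1, -1), (1, 0), (1, 1)]
    else:  # 4-connectivity
        nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    H, W = mask.shape
    areas, current = {}, 0
    for i in range(H):
        for j in range(W):
            if mask[i, j] and labels[i, j] == 0:
                current += 1                     # start a new component
                labels[i, j] = current
                q, area = deque([(i, j)]), 0
                while q:                          # breadth-first flood fill
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in nbrs:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W \
                                and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            q.append((ny, nx))
                areas[current] = area
    keep = [l for l, a in areas.items() if a >= min_area]
    cleaned = np.isin(labels, keep)
    return cleaned, labels

mask = np.zeros((5, 5), dtype=bool)
mask[:2, :2] = True   # a 2x2 blob (area 4): kept
mask[4, 4] = True     # an isolated pixel (area 1): removed as noise
cleaned, labels = remove_small_components(mask, min_area=3)
```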
The input image can have any dimension; the label matrix L is of the same size as the input image. The elements of L are integer values greater than or equal to 0: pixels labelled 0 are the background, pixels labelled 1 make up one object, pixels labelled 2 make up a second object, and so on. The default connectivity is 8 for two dimensions and 26 for three dimensions. This process essentially labels the background. Finally, the motion vectors are obtained and motion compensation is performed.

Fig. 4 Axis displaying motion in the query clip

The videos given the best ranks in audio retrieval are stored in a .mat file called X.mat. This file is then loaded to provide the list of the top 3 ranks on the basis of audio fingerprint matching. The computation begins after selecting the 'Click Here' pushbutton. The motion vectors for these videos are calculated by running the code from the initial main.m in a loop, and the Euclidean distance between the frames of the query and those of the videos is found within the same loop. If the Euclidean distance is zero, i.e. an exact match on the basis of colour is found, that portion of the query is played in a figure. Outside the loop, the motion vectors are compared with the target motion vector and their differences are computed; arranging the differences in ascending order provides the final rankings.

VII. RESULTS
The final result of this CBVR system yields the three best possible matches for the small query clip selected earlier. When the 'Play' pushbutton is clicked, that particular video is played separately in a media player, as shown in the figure below.

Fig. 5 Final Retrieved Video Clip

ACKNOWLEDGMENT

The authors are grateful to the colleagues of their respective institutes for their motivation and help towards the completion of this paper, as well as for providing valuable advice.

REFERENCES

[1] Aroh Barjatya, "Block Matching Algorithms For Motion Estimation", Student Member, IEEE, DIP 6620 Spring 2004 Final Project Paper.
[2] Avery Wang, "An Industrial-Strength Audio Search Algorithm", International Symposium on Music Information Retrieval (ISMIR 2003), Baltimore, MD, Oct. 2003.
[3] S. F. Chang, W. Chen, H. J. Meng, H. Sundaram, D. Zhong, "A fully automated content-based video search engine supporting spatiotemporal queries", IEEE Trans. on Circuits and Systems for Video Technology, Vol. 8, No. 5, pp. 602-615, 1998.
[4] H. T. Chen, H. H. Lin, T. L. Liu, "Multi-object tracking using dynamical graph matching", IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'01), Vol. 2, p. 210, 2001.
[5] Q. Zang and R. Klette, "Robust background subtraction and maintenance", Proceedings of the 17th International Conference on Pattern Recognition, Vol. 2, pp. 90-93, Aug. 2004.
[6] S. J. McKenna, S. Jabri, Z. Duric, A. Rosenfeld, and H. Wechsler, "Tracking groups of people", Computer Vision and Image Understanding, Vol. 80, pp. 42-56, 2000.
[7] S. Jabri, Z. Duric, H. Wechsler, A. Rosenfeld, "Detection and location of people in video images using adaptive fusion of color and edge information", 15th International Conference on Pattern Recognition, Vol. 4, p. 4627, 2000.
[8] Tsuhan Chen, "From Low-Level Features to High-Level Semantics: Are We Bridging the Gap?", Proceedings of the Seventh IEEE International Symposium on Multimedia, p. 179, 2005.
[9] R. Wang and T. Huang, "Fast camera motion analysis in MPEG domain", ICIP, Vol. 3, pp. 691-694, 1999.
[10] Jau-Ling Shih, Ming-Chieh Chuang, "A Video Object Retrieval System using Motion and Color Features", Department of Computer Science and Information Engineering, Chung Hua University, Hsinchu, Taiwan, R.O.C.
[11] Yoshitaka, A., Ichikawa, T., "A survey on content-based retrieval for multimedia databases", IEEE Trans. Knowledge and Data Engineering, Vol. 11, No. 1, pp. 81-93, 1999.
[12] Heinrich A. van Nieuwenhuizen, Willie C. Venter and Leenta M. J. Grobler, "The Study and Implementation of Shazam's Audio Fingerprinting Algorithm for Advertisement Identification", School of Electrical, Electronic and Computer Engineering, North-West University, Potchefstroom Campus, South Africa.
[13] Chih-Wen Su, Hong-Yuan Mark Liao, Kuo-Chin Fan, "A Motion-Flow-Based Fast Video Retrieval System", ACM MIR'05, pp. 10-11.
[14] B. V. Patel and B. B. Meshram, "Content Based Video Retrieval Systems", International Journal of UbiComp (IJU), Vol. 3, No. 2, April 2012.
[15] T. N. Shanmugam, Priya Rajendran, "An Enhanced Content-Based Video Retrieval System Based On Query Clip", International Journal of Research and Reviews in Applied Sciences, Vol. 1, Issue 3, 2009.
[16] Che-Yen Wen, Liang-Fan Chang, Hung-Hsin Li, "Content based video retrieval with motion vectors and the RGB color model", Forensic Science Journal, Vol. 6, No. 2, 2007.
[17] Kuo, C. T., Chen, L. P., "Content-based query processing for video databases", IEEE Trans. Multimedia, Vol. 2, No. 1, pp. 1-13, 2000.
[18] Ellis, Daniel P. W., "Robust Landmark-Based Audio Fingerprinting", 2009, http://labrosa.ee.columbia.edu/matlab/fingerprint/
[19] IBM Research TRECVID-2004 Video Retrieval System, http://www.research.ibm.com/people/a/argillander/files/2004/
[20] S. Dagtas, W. Al-Khatib, A. Ghafoor, R. L. Kashyap, "Models for motion-based video indexing and retrieval", IEEE Transactions on Image Processing, Vol. 9, No. 1, pp. 88-101, 2000.
[21] Y. Tsaig, A. Averbuch, "Automatic segmentation of moving objects in video sequences: a region labeling approach", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 12, No. 7, pp. 597-612, 2002.
[22] C. W. Ngo, T. C. Pong, H. J. Zhang, "Motion-based video representation for scene change detection", Int. Journal of Computer Vision, pp. 127-142, 2002.
[23] A. Del Bimbo and P. Pala, "Content-Based Retrieval of 3D Models", ACM Transactions on Multimedia Computing, Communications and Applications, Vol. 2, No. 1, pp. 20-43, 2006.
[24] Ali Amiri, Mahmood Fathy, and Atusa Naseri, "A Novel Video Retrieval System Using GED based Similarity Measure", International Journal of Signal Processing, Image Processing and Pattern Recognition, Vol. 2, No. 3, 2009.
[25] Gopal Thapa, Kalpana Sharma and M. K. Ghose, "Multi Resolution Motion Estimation Techniques for Video Compression: A Survey", International Journal of Computer Engineering & Technology (IJCET), Vol. 3, Issue 2, pp. 399-406, 2012.
[26] Vilas Naik and Sagar Savalagi, "Textual Query Based Sports Video Retrieval by Embedded Text Recognition", International Journal of Computer Engineering & Technology (IJCET), Vol. 4, Issue 4, pp. 556-565, 2013.