ACEEE Int. J. on Network Security, Vol. 02, No. 01, Jan 2011




Wireless Vision based Real time Object Tracking System Using Template Matching

I. Manju Jackin 1, M. Manigandan 2

1 Assistant Professor, Velammal Engineering College, Department of ECE, Chennai, INDIA
Email: rebejack@gmail.com
2 Senior Lecturer, Velammal Engineering College, Department of ECE, Chennai, INDIA
Email: mani.mit2005@gmail.com


Abstract— In the present work the concept of template matching through Normalized Cross Correlation (NCC) has been used to implement a robust real time object tracking system. In this implementation a wireless surveillance pinhole camera has been used to grab the video frames from a non-ideal environment. Once the object has been detected, it is tracked by employing an efficient template matching algorithm. The templates used for matching are generated dynamically; this ensures that a change in the pose of the object does not hinder the tracking procedure. To automate the tracking process the camera is mounted on a disc coupled with a stepper motor, which is synchronized with the tracking algorithm. As and when the object being tracked moves out of the viewing range of the camera, the setup automatically moves the camera so as to keep the object in view over an approximately 360 degree field of view. The system is capable of handling the entry and exit of an object. The performance of the proposed object tracking system has been demonstrated in a real time environment, both indoor and outdoor, including a variety of disturbances and a means to detect a loss of track.

Index Terms— Normalized cross correlation, template matching, tracking, wireless vision

I. INTRODUCTION

Object tracking is one of the most important subjects in video processing. It has a wide range of applications including traffic surveillance, vehicle navigation, gesture understanding, security camera systems and so forth. Tracking objects can be complex [1] due to loss of information caused by projection of the 3D world onto a 2D image, noise in images, complex object motion, the non-rigid or articulated nature of objects [2], partial and full object occlusions, complex object shapes, scene illumination changes, and real-time processing requirements.

Tracking can be made simpler by imposing constraints on the motion and/or appearance of objects. In our application we have tried to minimize the number of constraints on the motion and appearance of the object. The only constraint on the motion of the object is that it should not make a sudden change in its direction of motion while moving out of the viewing range of the camera. Unlike other algorithms [3], the present algorithm is capable of handling the entry and exit of an object. Also, unlike [3],[4], no color information is required for tracking an object. There is no major constraint on the appearance of the object, though an object which is a little brighter than the background gives better tracking results.

There are three key steps in the implementation of our object tracking system:
• Detection of interesting moving objects,
• Tracking of such objects from frame to frame,
• Analysis of object tracks to automate the disc mechanism.

A basic problem that often occurs in image processing is the determination of the position of a given pattern in an image or part of an image, the so-called region of interest. The two basic cases are
• The position of the pattern is unknown,
• An estimate for the position of the pattern is available.

Usually, both cases have to be treated to solve the problem of determining the position of a given pattern in an image. In the latter case the information about the position of the pattern can be used to reduce the computational effort significantly. This is also known as feature tracking in a sequence of images [5],[6]. For both feature tracking and obtaining an initial estimate of the position of a given pattern, many different well known algorithms have been developed [7],[8]. One basic approach that can be used in both cases mentioned above is template matching. This means that the position of the given pattern is determined by a pixel-wise comparison of the image with a given template that contains the desired pattern. For this, the template is shifted u discrete steps in the x direction and v steps in the y direction of the image, and the comparison is calculated over the template area for each position (u,v). To calculate this comparison, normalized cross correlation is the choice in many cases [5]. Nevertheless it is computationally expensive, and therefore a fast correlation algorithm that requires fewer calculations than the basic version is of interest. In this paper we propose a complete real time object tracking system which combines template matching with normalized cross correlation.
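To make the exhaustive shift-and-compare search described above concrete, the following minimal Python/NumPy sketch evaluates a normalized cross correlation score at every template position (u, v) and returns the best match. It is illustrative only (the system described in this paper was implemented in VC++ with OpenCV), and the function and variable names are our own.

```python
import numpy as np

def ncc_search(image: np.ndarray, template: np.ndarray):
    """Exhaustively shift `template` over `image` and return the
    position (u, v) with the highest normalized cross correlation."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -1.0, (0, 0)
    for v in range(ih - th + 1):          # shift in y
        for u in range(iw - tw + 1):      # shift in x
            window = image[v:v + th, u:u + tw]
            w = window - window.mean()
            denom = np.sqrt((w * w).sum()) * t_norm
            if denom == 0:
                continue
            score = (w * t).sum() / denom  # NCC value in [-1, 1]
            if score > best_score:
                best_score, best_pos = score, (u, v)
    return best_pos, best_score
```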
Related works are discussed in Section II and the details of template matching are discussed in Section III. In Section IV we discuss the proposed system. The system model implementation is discussed in Section V and the results are discussed in Section VI.

II. RELATED WORKS

Tracking lays the foundation for many application areas including augmented reality, visual servoing and vision based industrial applications. Consequently, there is a huge number of related publications. The methods used for real time 3D tracking can be divided into four categories: line based tracking, template based tracking, feature based tracking and hybrid approaches.

Line based tracking requires a line model of the tracked object. The pose is determined by matching a projection of the line model to the lines extracted in the image. One of the first publications in this field was [9]. Recently a real time line tracking system which uses multiple-hypothesis line tracking was proposed in [10]. The main disadvantage of line tracking is that it has severe problems with background clutter and image blurring, so that in practice it cannot be applied to the applications we are targeting.

Template-based tracking fits better into our scenarios. It uses a reference template of the object and tracks it using image differences. This works nicely for well textured objects and small inter-frame displacements. One of the first publications on template-based tracking [11] used the optical flow method. In order to improve the efficiency of the tracking and to deal with more complex objects and/or camera motions, other approaches were proposed [12],[13]. In [14] the authors compare these approaches and show that they all have an equivalent convergence rate and frequency up to a first order approximation, with some being more efficient than others. A more recently suggested approach is the Efficient Second Order Minimization (ESM) algorithm [15], whose main contribution consists in finding a parameterization and an algorithm which achieve second-order convergence at the computational cost, and consequently the speed, of first-order methods.

Similar to template-based tracking, feature-based approaches also require a well-textured object. They work by extracting salient image regions from a reference image and matching them to another image. Each single point in the reference image is compared with points belonging to a search region in the other image, and the one that gives the best similarity measure score is considered the corresponding one. A common choice for feature extraction is the Harris corner detector [16]. Features can then be matched using normalized cross correlation (NCC) or some other similarity measure [17]. Two recent feature-matching approaches are SIFT [18] and Randomized Trees [19]. Both perform equally well in terms of accuracy. However, despite a recently proposed optimization of SIFT called SURF [20], SIFT has a lower runtime performance than the Randomized Trees, which exhibit fast feature matching thanks to an offline learning step. In comparison to template-based methods, feature-based approaches can deal with bigger inter-frame displacements and can even be used for wide-baseline matching if we consider the whole image as the search region. However, wide-baseline approaches are in general too slow for real-time applications; therefore they are mostly used for initialization rather than tracking. A full tracking system using only features was proposed in [21]. The authors rely on registered reference images of the object and perform feature matching between the reference image and the current image, as well as between the previous image and the current image, to estimate the pose of the object. However, the frame rate is not very high because of their complex cost function. Moreover, image blurring poses a problem for feature extraction.

Hybrid tracking approaches combine two or more of the aforementioned approaches. Some recent related publications include [22], which combines template-based tracking and line-based tracking. In [21] the authors combine line-based tracking and feature-based tracking. Even though these algorithms perform well, the line-based tracking only improves the results for a few cases and might corrupt the result in the case of background clutter. In [23] the authors use a template-based method for tracking small patches on the object, which are then used for point based pose estimation. Since this approach uses a template-based method for tracking, it cannot deal with fast object motion.

III. TEMPLATE MATCHING

Template matching is an important research area with strong roots in detection theory [24],[25]. It has obvious applications in computer and robot vision, stereography, image analysis and motion estimation [24]. Template matching in the context of image processing is the process of locating the position of a sub-image within an image of the same or, more typically, a larger size [26],[27],[28]. Template matching can also be described as a process to determine the similarity between two images. The sub-image is referred to as the template image and the larger image is referred to as the search area (main image) [25],[26],[27],[28]. The template matching process involves shifting the template over the search area and computing a similarity between the template image and the window of the search area over which the template lies [25],[26],[27],[29]. This shifting and computing is repeated until the template image reaches the edge of the search area. For image processing purposes, the most common technique for measuring the similarity between two-dimensional images is the cross correlation method. A correlation measure is determined between the template and the respective windows of the search area to find the template position which has maximum correlation [25],[30]. Generally, for a two-dimensional image, the correlation coefficient is computed over the entire area as the template image is shifted over the main image.

The correlation between two signals (cross correlation) is a standard approach to feature detection as well as a component of more sophisticated techniques.

A. Template Matching by Cross Correlation

Correlation based matching is a simple, yet very popular technique in pattern recognition with images. The basic method consists of forming a correlation measure between a template T and an image I (I ⊗ T) and determining the location of a correspondence by finding the location of maximum correlation. Usually I is much larger than the template.
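In practice this correlation search does not have to be hand-coded; OpenCV provides it directly. The short Python sketch below is illustrative only (the system in this paper was written in VC++ with OpenCV, and the file names here are placeholders): matchTemplate computes a normalized correlation score at every shift, and minMaxLoc returns the location of maximum correlation.

```python
import cv2

# Illustrative file names; any grayscale image/template pair will do.
image = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
h, w = template.shape

# Normalized (zero-mean) cross correlation at every shift (u, v).
result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)

# Location of maximum correlation = best match of the template.
_, max_val, _, max_loc = cv2.minMaxLoc(result)
top_left = max_loc                               # (u, v) of the best match
bottom_right = (top_left[0] + w, top_left[1] + h)
print("best score:", max_val, "at", top_left)
```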
A common measure of dissimilarity is

    d = \sum (I - T)^2 = \sum I^2 - 2\sum IT + \sum T^2                                          (1)

where the sum is over the pixels in the template T and the image I. The value of d will be small when I and T are almost identical and large when they differ significantly. Since the term \sum IT has to be large whenever d is small, a large value of \sum IT similarly indicates the location of a good match of the template. Notice that \sum IT is the cross correlation of the two signals I and T.

The simple cross-correlation is called unnormalized and it has several major shortcomings. If a certain pixel in the image has an abnormally large value, the \sum IT associated with this location is likely to have a large value even though no good match exists. And since zeros do not contribute to the correlation value, a good match containing zeros will possibly have a small value. To overcome these problems, normalized cross-correlation (NCC) is often used instead, in which the measure is formulated as

    \rho(u,v) = \frac{\sum_{x}\sum_{y} \tilde{T}(x,y)\,\tilde{I}(x \pm u, y \pm v)}
                     {\sqrt{\sum_{x}\sum_{y} \tilde{T}^{2}(x,y)\; \sum_{x}\sum_{y} \tilde{I}^{2}(x \pm u, y \pm v)}}                 (2)

In equation (2), I is an n x n search window of a larger image and T is a template of size n x n; \tilde{T} = T - \mu_T, where \mu_T is the average intensity of T, and \tilde{I} denotes the correspondingly mean-corrected search window. This formula keeps \rho in the range -1 to 1, with a value close to 1 indicating a better match.

For equation (2) to be effective, prior accurate knowledge of the pattern T is required. For the detection of detonators, we assume the shape of a detonator is fixed and known. However, the orientation of the detonator in images can change, and multiple templates with varying orientations have to be used.

Now let us assume that

    T(x,y) = M(i,j)   and   I(x+u, y+v) = N(i,j)                                                 (3)

with image sizes as mentioned above for both I and T.

For a sample image (containing left and right breast images), the position, orientation and size need to be observed. These criteria are important in obtaining the best output from the matching process. Unfortunately, most images do not satisfy these criteria; therefore, some techniques are included for preprocessing purposes to obtain proper images, such as flipping the image, selecting a region of interest (ROI) and image binarization. These techniques are not the major work of this project and are therefore not discussed in this paper. The correlation method is implemented using the Fast Fourier transform and modified into a discrete correlation expression for computer usage. Discrete correlation is a mathematical process of comparing one image with another image discretely, and the resulting image is a two-dimensional expression of equivalence.

Suppose the relation between the template image and the search area (main image) is defined as

    CC = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} M(i,j)\,N(i,j)}
              {\left[\sum_{i=1}^{n}\sum_{j=1}^{n} M^{2}(i,j)\; \sum_{i=1}^{n}\sum_{j=1}^{n} N^{2}(i,j)\right]^{1/2}}                 (4)

If the two images are proportional, M(i,j) = c\,N(i,j) for some constant c, then

    CC = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} c\,N(i,j)\,N(i,j)}
              {\left[\sum_{i=1}^{n}\sum_{j=1}^{n} c^{2} N^{2}(i,j)\; \sum_{i=1}^{n}\sum_{j=1}^{n} N^{2}(i,j)\right]^{1/2}}                 (5)

    CC = \frac{c \sum_{i=1}^{n}\sum_{j=1}^{n} N^{2}(i,j)}
              {\left[\sum_{i=1}^{n}\sum_{j=1}^{n} c^{2} N^{2}(i,j)\; \sum_{i=1}^{n}\sum_{j=1}^{n} N^{2}(i,j)\right]^{1/2}}                 (6)

so that CC = 1: for any such constant c the correlation coefficient takes the value 1, indicating 100% similarity between the two images. This theory is applied together with the discrete correlation expression in this system. The equation for the two-dimensional discrete correlation is given by [5]

    Out(k,l) = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} In1(m,n)\,In2(k+m,\, l+n)                         (7)

where In1 and In2 are the input images and Out is the output image. M x N is the dimension of both images; if they have different dimensions and sizes, they are filled in with certain pixel values to allow the matching process. In1 and In2 are the search area and the template image respectively. The output reveals the resultant image and the position of the greatest match between the search area (main image) and the template.
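As a quick numerical check of equations (4) to (6) (not part of the original paper; the array size and scale factor below are arbitrary), the following NumPy snippet computes the correlation coefficient of equation (4) for a window that is a scaled copy of the template and confirms that it evaluates to 1.

```python
import numpy as np

def correlation_coefficient(M: np.ndarray, N: np.ndarray) -> float:
    """CC of equation (4): sum(M*N) / sqrt(sum(M^2) * sum(N^2))."""
    num = (M * N).sum()
    den = np.sqrt((M * M).sum() * (N * N).sum())
    return float(num / den)

rng = np.random.default_rng(0)
N = rng.random((8, 8))          # template image
c = 3.7                         # arbitrary positive scale factor
M = c * N                       # search window that is a scaled copy of N

print(correlation_coefficient(M, N))   # prints 1.0 (up to rounding), as in equation (6)
```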
IV. PROPOSED SYSTEM

An overview of the proposed system as a finite state machine (FSM) is given in Figure 1. The system starts with an initialization phase and then uses the template-based tracking algorithm to track the object. If the template-based tracker is unable to recover the pose within a certain number of attempts, the initialization is invoked again.

Template-based Tracking Algorithm

We use the NCC algorithm for template-based tracking. The object is tracked using this method until a loss of track is detected; a short code sketch of this matching step is given after the steps listed below.

Given: an input image I and the orientation of the object; initialize an output array with the same size as the input image.
[1] Set an initial threshold value for thresholding the correlation result. Normally, it is set at 0.7-0.8.
[2] Repeat the following steps until the result is satisfactory:
    • Construct the correlation filter template T using an empirical model method.
    • Perform normalized cross-correlation of I with T, and store the result in the output array.
    • Threshold these correlation values. If a value is larger than the threshold, set the corresponding point in the output array to 1, otherwise to 0.
    • If the result is not satisfactory, adjust the threshold or the correlation filter template and continue.
[3] End repeat of steps [1] and [2].
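A minimal sketch of the matching step referred to above is given here in Python/OpenCV (the paper's implementation was in VC++ with OpenCV; the starting threshold, the adjustment step and the helper names are illustrative assumptions).

```python
import cv2
import numpy as np

def track_once(frame_gray, template_gray, threshold=0.75):
    """One pass of the template-based tracking step: normalized
    cross-correlation of the frame with the template, thresholded
    into a binary output array as described in steps [1]-[2]."""
    ncc = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    output = (ncc > threshold).astype(np.uint8)   # 1 where correlation exceeds the threshold
    _, best, _, loc = cv2.minMaxLoc(ncc)
    return best > threshold, loc, output

def track_until_satisfactory(frame_gray, template_gray,
                             threshold=0.8, min_threshold=0.6):
    """Lower the threshold in small steps if no satisfactory match is found."""
    while threshold >= min_threshold:
        matched, loc, output = track_once(frame_gray, template_gray, threshold)
        if matched:
            return loc, output
        threshold -= 0.05                         # adjust the threshold and continue
    return None, None                             # treated as a loss of track
```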
                                                                        extracted from the rendered image. These reference patches
                                                                        are then reduced a few times in size by a factor of two to
                                                                        create a stack of reference patches at different scales, which
                                                                        are used to speed up the tracking in a coarse-to-fine
                                                                        approach.
                                                                         D. Visibility Test
                                                                           Attempting to track patches which are not visible will
                                                                        lead to erroneous results. Hence it is necessary to ascertain
                                                                        the visibility of every patch. This test is performed by
                                                                        rendering the model with OpenCV and using the occlusion
                                                                        query extension to test which patches are visible and which
                                                                        are occluded. The visibility test is performed for each
                                                                        frame using the pose estimated in the previous frame.
                                                                        Thanks to the occlusion query extension the visibility test
                                                                        can be performed very fast, so that it does not interfere with
                                                                        the tracking performance
         Figure 1. Overview of the proposed tracking system             E. Loss Of Track
                                                                           Determining when the tracker lost the object is important
A. Required Information                                                 in order to switch to the proposed algorithm. In our system
   In our system we use a textured 3D model of the object.              this is accomplished by computing the normalized cross
                                                                                                                             *
This model can either be created manually or semi-                      correlation (NCC) between the reference patch I k and
automatically with commercially available products. One                 the current patch I k after the end of the optimization for
point to note is that it is advisable to use the same camera
for texturing the model and for tracking, because this                  all visible patches.

Let I^* be the reference image and I the current image, and let p be the pixel coordinates of the pixels in the reference image. The NCC between two patches is defined as

    NCC(I_k^*, I_k) = \frac{1}{N_k\,\sigma_k^{*}\sigma_k} \sum_{p_k} \left(I_k^{*}(p_k) - \mu_k^{*}\right)\left(I_k(p_k) - \mu_k\right)                 (8)

where N_k is the number of pixels of each patch, \mu_k^{*} and \mu_k are the mean pixel intensities, and \sigma_k^{*} and \sigma_k their standard deviations. If the NCC of a patch falls below a certain threshold, it is excluded from the tracking. Loss of track occurs mainly either because of fast movement of the object or because the tracked object is out of focus. To avoid loss of track we propose a tracking system which is able to track the object over a 360 degree rotation. In this paper the tracking of the object is considered only in the x-direction, and we do not deal with the mechanical design of the rotational disc mechanism; instead we deal with tracking the object continuously without any loss of the object under test. Complete control of the tracking system is performed through a wireless interface using a 433 MHz RF module from the remote place, which is discussed in detail in the following section.
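A minimal sketch of this per-patch loss-of-track test is given below, assuming grayscale patches stored as NumPy arrays; the threshold value of 0.6 is an illustrative assumption, since the paper does not state the exact value used.

```python
import numpy as np

def patch_ncc(ref_patch: np.ndarray, cur_patch: np.ndarray) -> float:
    """Zero-mean NCC of equation (8) between a reference and a current patch."""
    r = ref_patch.astype(np.float64).ravel()
    c = cur_patch.astype(np.float64).ravel()
    r -= r.mean()
    c -= c.mean()
    denom = r.std() * c.std() * r.size
    return float((r * c).sum() / denom) if denom > 0 else 0.0

def visible_patches_still_tracked(ref_patches, cur_patches, threshold=0.6):
    """Flag each visible patch; those below the threshold are excluded from tracking."""
    return [patch_ncc(r, c) >= threshold for r, c in zip(ref_patches, cur_patches)]
```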
V. TEMPLATE MATCHING SYSTEM MODEL IMPLEMENTATION

This proposed system model consists of three major subdivisions:
A. Hardware setup
B. Wireless camera interface with OpenCV
C. Decision making rule for continuous tracking of the object based on ROI

A. Hardware setup

This section deals with the rotational disc arrangement of the tracking system. The system uses a single stepper motor to account for the tracking of the object as it moves in the x-direction. As presented in Fig. 6, the motor is operated using an electronic stepper motor driver circuit. We have used a 433 MHz wireless transmitter and receiver for transmitting and receiving data between the remote computer and the rotational disc arrangement on which the wireless camera is mounted. At the receiving section the wireless camera keeps tracking the object; the 360 degree rotational control is done by means of the stepper motor interface along with the ATMEL 89C52 microcontroller. The received data are processed based on the decision making rule imposed in the microcontroller. The motor driver circuits are in turn controlled by the tracking program through the data received from the remote PC, based on the information about the tracked object under test. The rotational disc mechanism implies that the motion of the camera is governed by the motor, which is installed horizontally.

B. Wireless Camera Interfacing

In order to track the object using the template matching (NCC) based approach, we have implemented a wireless vision interface model for acquiring real time images from a remote place through a JK wireless surveillance camera. For tracking the objects based on a user defined template, OpenCV is used. To grab the image we have used a Zebronics TV tuner image grabber, acquiring the image from the wireless camera through a 2.5 GHz video receiver module interfaced with the PC. To interface the image grabber and OpenCV we have used the dynamic link library file vcapg2.dll. To track the object completely, an efficient decision making rule is implemented in software, i.e., if-else conditions based on the movement of the object in the x-direction.
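For reference, a present-day equivalent of this capture path in Python/OpenCV is sketched below. It is illustrative only: the original system grabbed frames through vcapg2.dll under VC++, whereas this sketch assumes the frame grabber simply appears as ordinary capture device 0.

```python
import cv2

# The TV-tuner frame grabber is assumed to appear as capture device 0.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)    # frame size used in the paper
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

while True:
    ok, frame = cap.read()                # grab one frame from the wireless camera
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # ... template matching and the decision rule would run on `gray` here ...
    cv2.imshow("wireless camera", frame)
    if cv2.waitKey(1) & 0xFF == 27:       # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```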
C. Decision Making Rule for Object Tracking

Our object tracker module generates the trajectory of an object over time by locating its position in every frame of the video. In our implementation the tasks of detecting the object and establishing correspondence between the object instances across frames are performed simultaneously (i.e., multithreaded). The possible object region in every frame is obtained by means of our object detection module, and then our tracker module tracks the object across frames. In our tracking approach the object is represented using the primitive geometric shape appearance model, i.e., the object is represented as a rectangle.

The object tracking module takes the positional information (the x and y coordinates) of the moving object as an input from the object template module, based on the ROI. This information is then used to extract a square image template (whenever required) from the last acquired frame. The module keeps searching for it in the frames captured from that point onwards. Whenever it is found, a red rectangle is overlaid on the detected object. If the template matching does not yield any result (signifying that the object has changed its appearance), a new template can be selected based on user interest from the real time video sequence.

Further, the (x, y) coordinates of the tracked object are calculated and passed on to the rotational disc module from the PC through the wireless interface for analysis and automation.
D. Algorithm for the Tracking Module Based on the Decision Making Rule

i. Get the positional information of the object in the image I(x, y); the frame size is generally taken as 320x240.
ii. Generate an image template T with size as mentioned in Section III.A by extracting a square image from the last frame grabbed by the camera. The template is extracted in the form of a square whose image coordinates are given by
    - top left corner: (x - u, y - v)
    - top right corner: (x + u, y - v)
    - bottom left corner: (x - u, y + v)
    - bottom right corner: (x + u, y + v)
iii. Search for the generated template T in the last frame grabbed by the camera. This is done by using an efficient template matching algorithm based on NCC.
iv. IF the template matching is successful, transmit the data through the 433 MHz RF transmitter via the serial port.
    ELSE IF the tracker has NOT detected motion of the object, stop the automation of the wireless camera module setup; AND if the detector has detected the object, THEN go to step i (get a new template), ELSE go to step v (get the x, y position).
    ELSE go to step i (get a new template).
v. Obtain the position T(x, y) of the match and pass it on to the rotational disc x-direction automation module for analysis.
vi. Go to step iii.

Table I shows the complete details of how the object is tracked based on the coordinate information, and the exact serial data used for controlling the rotational disc camera mounting setup over the 433 MHz wireless RF link; a short sketch of this mapping is given after the table.

TABLE I. AUTOMATION OF THE TRACKING MODULE AND ITS CONTROL WORDS

  Object tracking      Automation of the tracking system based on the        Wireless serial data sent
  module direction     coordinates of the template match in the frame        to the receiver section
  -----------------    ------------------------------------------------      -------------------------
  LEFT                 Move the camera towards the direction of the          'R'
                       object by rotating the stepper motor in the
                       anti-clockwise direction
  RIGHT                Move the camera towards the direction of the          'L'
                       object by rotating the stepper motor in the
                       clockwise direction
  IDLE                 Stop the movement of the actuators                    'S'
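To illustrate how the control words of Table I could be issued from the PC side, a hedged Python sketch using the pyserial package is given below; the port name, baud rate and dead-band width are assumptions, as the paper does not specify them.

```python
import serial  # pyserial

# Assumed port settings; the paper does not specify them.
link = serial.Serial("COM3", baudrate=9600, timeout=1)

def control_word(object_x: int, frame_width: int = 320, dead_band: int = 40) -> bytes:
    """Map the tracked object's x coordinate to the Table I control word."""
    centre = frame_width // 2
    if object_x < centre - dead_band:     # object in the LEFT part of the frame
        return b"R"                       # per Table I: rotate anti-clockwise
    if object_x > centre + dead_band:     # object in the RIGHT part of the frame
        return b"L"                       # per Table I: rotate clockwise
    return b"S"                           # object near the centre: stop the actuators

# Example: send the word for an object detected at x = 250.
link.write(control_word(250))
```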

VI. RESULTS

Results are presented for the experiment performed using the tracking finite state machine shown in Figure 1. The experiment was conducted for various images acquired from the wireless pinhole camera interfaced with the rotational disc mechanism through the low cost Zebronics TV tuner frame grabber, programmed using OpenCV.

The template matching program was written in the VC++ environment along with some predefined functions available in OpenCV. To make the system more versatile for various environmental conditions, we implemented a software based threshold adjustment, which is helpful for better template matching based tracking.

Figure 2. Template Matching based Vehicle Tracking Prototype with real time images of frame size 320x240 (template image: vehicle)

Figure 3. Template Matching based Face Tracking (template image: face)
When the sequence of images stays within the viewing region, the tracking is smooth; when the object being tracked goes out of focus, the camera is automated through the stepper motor so that the tracked object is not missed, as discussed above with Table I. In the results we have considered three objects: a vehicle prototype, a human face, and surveillance camera movement; the reference template matches compared with the actual tracked output frames are shown in Figures 2, 3, 4 and 5 respectively.

Figure 4. Template Matching based Surveillance Camera Tracking with real time images of frame size 320x240 to move the stepper motor actuator towards the left side (clockwise direction) (template image: surveillance camera)

Figure 5. Template Matching based Surveillance Camera Tracking with real time images of frame size 320x240 to move the stepper motor actuator towards the right side (anti-clockwise direction)

CONCLUSIONS

In this paper we have implemented template matching based on the NCC algorithm for real time tracking in a non-ideal environment, irrespective of the lighting conditions. The system has been automated using a rotating disc camera mount mechanism which is synchronized with the algorithm. Such an automated object tracking system can be used in defense applications such as automatic target tracking and autonomous target hitting through a turret, by classifying the templates as enemy objects or not. Future work focuses on tracking multiple objects at the same time as well as on improving tracker accuracy during camera motion. A stereo vision based tracking system can also be developed for maximum coverage of the objects and for tracking multiple objects.

ACKNOWLEDGMENT

The authors gratefully acknowledge the following individuals for their valuable support: Prof. V. Bhaskaran, Department of Mathematics, and our colleagues, for their valuable guidance, for devoting their precious time, and for sharing their knowledge and co-operation.

REFERENCES

[1] Yilmaz, A., Javed, O., and Shah, M., "Object tracking: A survey", ACM Computing Surveys, 38(4), Article 13, Dec. 2006.
[2] Richard Y. D. Xu, John G. Allen, and Jesse S. Jin, "Robust real time tracking of non-rigid objects", Proceedings of the Pan-Sydney Area Workshop on Visual Information Processing, pp. 95-98, June 2004.
[3] Bar-Shalom, Y. and Fortmann, T., "Tracking and Data Association", Academic Press Inc.
[4] Fieguth, P. and Terzopoulos, D., "Color-based tracking of heads and other mobile objects at video frame rates", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 21-27, 2004.
[5] Gregory D. Hager and Kentaro Toyama, "The XVision System: A General-Purpose Substrate for Portable Real-Time Vision Applications", Computer Vision and Image Understanding, Vol. 69, No. 1, pp. 23-37, 1998.
[6] B. D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision", IJCAI, 1981.
[7] Hirochika Inoue, Tetsuya Tachikawa and Masayuki Inaba, "Robot Vision System with a Correlation Chip for Real-time Tracking, Optical Flow and Depth Map Generation", Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1621-1626, France, 1992.
[8] Changming Sun, "A Fast Stereo Matching Method", Digital Image Computing: Techniques and Applications, pp. 95-100, Massey University, Auckland, New Zealand, 10-12 December 1997.
[9] Bouthemy, P., "A maximum likelihood framework for determining moving edges", IEEE Trans. Pattern Anal. Mach. Intell., 11(5):499-511, 1989.
[10] Wuest, H., Vial, F., and Stricker, D., "Adaptive line tracking with multiple hypotheses for augmented reality", IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 62-69, 2005.
[11] Lucas, B. and Kanade, T., "An iterative image registration technique with application to stereo vision", Int. Joint Conf. on Artificial Intelligence, pp. 674-679, 1981.
[12] Hager, G. and Belhumeur, P., "Efficient region tracking with parametric models of geometry and illumination", IEEE Trans. on Pattern Analysis and Machine Intelligence, 20(10):1025-1039, 1998.
[13] Baker, S. and Matthews, I., "Equivalence and efficiency of image alignment algorithms", IEEE ICCVPR, pp. 1090-1097, 2001.
[14] Baker, S., Dellaert, F., and Matthews, I., "Aligning images incrementally backwards", Technical Report, Robotics Institute, Carnegie Mellon University, 2001.
[15] Benhimane, S. and Malis, E., "Real-time image-based tracking of planes using efficient second-order minimization", IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 943-948, 2004.
[16] Harris, C. and Stephens, M., "A combined corner and edge detector", Proceedings of the 4th Alvey Vision Conf., pp. 147-151, 1998.
[17] Zhang, Z., Deriche, R., Faugeras, O., and Luong, Q.-T., "A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry", Technical Report 2273, INRIA, 1994.
[18] Lowe, D., "Distinctive image features from scale-invariant keypoints", International Journal of Computer Vision, 60(2):91-110, 2004.
[19] Lepetit, V., Lagger, P., and Fua, P., "Randomized trees for real-time keypoint recognition", IEEE Int. Conf. on Computer Vision and Pattern Recognition, pp. 775-781.
[20] Bay, H., Tuytelaars, T., and Van Gool, L., "SURF: Speeded Up Robust Features", European Conference on Computer Vision, 2006.
[21] Vacchetti, L., Lepetit, V., and Fua, P., "Combining edge and texture information for real-time accurate 3D camera tracking", Proceedings of the Third IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 48-57, 2004.
[22] Pressigout, M. and Marchand, E., "Real-time planar structure tracking for visual servoing: a contour and texture approach", IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2005.
[23] Masson, L., Dhome, M., and Jurie, F., "Robust real time tracking of 3D objects", Int. Conf. on Pattern Recognition, pp. 252-255, 2004.
[24] Mehdi Khosravi and Ronald W. Schafer, "Template Matching Based on a Grayscale Hit-or-Miss Transform", 1996.
[25] Yongmin Kim, Shijun Sun, Hyun Wook Park, "Template Matching Using Correlative Autopredicative Search", University of Washington, 2004.
[26] Harley R. Myler, Arthur R. Weeks, "The Pocket Handbook of Image Processing Algorithms in C", Department of Electrical & Computer Engineering, University of Central Florida, Orlando, Florida, 1993.
[27] Rafael C. Gonzalez and Richard E. Woods, "Digital Image Processing", Second Edition, 2002.
[28] Maria Kallergi, "Medical Image Analysis and Methods", Taylor & Francis Group, LLC, 2005.
[29] Gregory S. Chirikjian and Alexander B. Kyatkin, "Engineering Applications of Noncommutative Harmonic Analysis: With Emphasis on Rotation and Motion Group", CRC Press LLC, 2001.
[30] Dongming Zhao, "Image Processing and Feature Extraction", Taylor & Francis Group, LLC, 2005.

1 I. Manju Jackin received her post-graduate degree, M.E. in VLSI, with first class with distinction from RMK Engineering College, affiliated to Anna University, and her B.E. in Electronics and Communication from the University of Madras. She is currently serving as Assistant Professor at an Anna University affiliated college. She has nearly 13 years of teaching experience, and her current research includes VLSI and digital image processing. She has published 3 papers in various national and international conferences, and has served on the reviewers committee for ARTCom 2009.

2 M. Manigandan received his post-graduate degree, M.E. in Mechatronics, with first class with distinction from Madras Institute of Technology, Anna University, and his B.E. in Electronics and Communication from the University of Madras. He is currently serving as Senior Lecturer at an Anna University affiliated college. His current research includes robotics and digital image processing. He has published 3 papers in various national and international conferences.




© 2011 ACEEE
DOI: 01.IJNS.02.01.105

Wireless Vision based Real time Object Tracking System Using Template Matching

  • 1. ACEEE Int. J. on Network Security, Vol. 02, No. 01, Jan 2011 Wireless Vision based Real time Object Tracking System Using Template Matching I Manju Jackin1, M Manigandan2 1 Assistant Professor, Velammal Engineering College, Department of ECE, Chennai, INDIA Email: rebejack@gmail.com 2 Senior Lecturer, Velammal Engineering College, Department of ECE, Chennai, INDIA Email: mani.mit2005@gmail.com Abstract— In the present work the concepts of template There are three key steps in implementation of our object matching through Normalized Cross Correlation (NCC) has tracking system: been used to implement a robust real time object tracking • Detection of interesting moving objects, system. In this implementation a wireless surveillance pinhole • Tracking of such objects from frame to frame, camera has been used to grab the video frames from the non ideal environment. Once the object has been detected it is • Analysis of object tracks to automate the disc tracked by employing an efficient Template Matching mechanism algorithm. The templates used for the matching purposes are A basic problem that often occurs in image processing generated dynamically. This ensures that any change in the is the determination of the position of a given pattern in an pose of the object does not hinder the tracking procedure. To image or part of an image, the so-called region of interest. automate the tracking process the camera is mounted on a The two basic cases are disc coupled with stepper motor, which is synchronized with a • The position of the pattern in unknown tracking algorithm. As and when the object being tracked moves out of the viewing range of the camera, the setup is • An estimate for the position of the pattern is not automatically adjusted to move the camera so as to keep the available object of about 360 degree field of view. The system is capable Usually, both cases have to be treated to solve the of handling entry and exit of an object. The performance of problem of determining the position of a given pattern in an the proposed Object tracking system has been demonstrated image. In the latter case the information about the position in real time environment both for indoor and outdoor by of the pattern can be used to reduce the computational including a variety of disturbances and a means to detect a effort significantly. It is also known as feature tracking in a loss of track. sequence of images [5], [6]. For both feature tracking and to know the initial Index Terms— Normalized Cross correlation, template matching, Tracking, Wireless vision estimations of the position of the given pattern, a lot of different well known algorithms have been developed [7], I. INTRODUCTION [8] .One basic approach that can be used in both cases mentioned above, is template matching. This means that Object tracking is one of the most important subjects in the position of the given pattern is determined by a pixel- video processing. It has wide range of applications wise comparison of the image with a given template that including traffic surveillance, vehicle navigation, gesture contains the desired pattern. For this, the template is shifted understanding, security camera systems and so forth. 
‘ u’ discrete steps in the x direction and v steps in the y Tracking objects can be complex [1] due to loss of direction of the image and then the comparison is information caused by projection of the 3D world on a 2D calculated over the template area for each position (u,v).To image, noise in images, complex object motion, non rigid calculate this ,normalized cross correlation is a choice in or articulated nature of objects [2], partial and full object many cases [5]. Nevertheless it is computationally occlusions, complex object shapes, scene illumination expensive and therefore a fast correlation algorithm that changes, and real-time processing requirements. requires fewer calculations than the basic version is of Tracking is made simple by imposing constraints on the interest. In this paper we propose a complete real time motion and/or appearance of objects. In our application we object tracking system which combines template matching have tried to minimize the number of constraints on the with normalized cross correlation. motion and appearance of the object. The only constraint Related works are discussed in Section II and the details on the motion of the object is that it should not make about template matching are discussed in Section III. In sudden change in the direction of motion while moving out Section IV we discuss about the proposed system. The of the viewing range of the camera. Unlike other system model implementation is discussed in Section V algorithms [3] the present algorithm is capable of handling and the results are discussed in section VI. the entry and exit of an object. Also, unlike [3],[4] , no color information is required for tracking an object. There II. RELATED WORKS is no major constraint on the appearance of the object though an object which is a little brighter than the Tracking lays the foundation for many application areas background gives better tracking results. including augmented reality, visual servoing and vision 11 © 2011 ACEEE DOI: 01.IJNS.02.01.105
  • 2. ACEEE Int. J. on Network Security, Vol. 02, No. 01, Jan 2011 based industrial applications. Consequently, there is a huge the frame rate is not very high because of their complex amount of related publications. The methods used for real cost function. Moreover image blurring poses a problem time 3D tracking can be divided into four categories: Line for feature extraction. based tracking, template based tracking, feature based Hybrid tracking approaches combine two or more of the tracking and hybrid approaches. aforementioned approaches. Some recent related Line based tracking requires a line model of the tracked publications include [22], which combines template-based object. The pose is determined by matching a projection of tracking and line-based tracking. In [21] the authors the line model to the lines extracted in the image. One of combine line-based tracking and feature-based tracking. the first publications in this field was [9].Recently a real Even though these algorithms perform well, the line-based time line tracking system which uses multiple-hypothesis tracking only improves the results for a few cases and line tracking was proposed in [10].The main disadvantage might corrupt the result in the case of background clutter. of line tracking is that it has severe problems with In [23] the authors use a template-based method for background clutter and image blurring so that in practice it tracking small patches on the object, which are then used cannot be applied to applications we are targeting. for point based pose estimation. Since this approach uses a Template-based tracking fits better into our scenarios. It template-based method for tracking it cannot deal with fast uses a reference template of the object and tracks it using object motion. image differences. These works nicely for well textured objects and small inter frame displacements. One of the III. TEMPLATE MATCHING first publications on template-based tracking [11] was using the optical flow method. In order to improve the Template matching is an important research area with strong roots in detection theory [24],[25]. It has obvious efficiency of the tracking and to deal with more complex objects and/or camera motions, other approaches were applications in computer and robot vision, stereography, proposed [12],[13]. In [14] the authors compare these image analysis and motion estimation [24]. Template approaches and show that they all have an equivalent matching in the context of an image processing is a process convergence rate and frequency up to a first order of locating the position of a sub image within an image of the same, or more typically, a larger size[26],[28],[27]. approximation with some being more efficient than others. A more recently suggested approach is the Efficient Second Template matching can also be described as a process to Order Minimization (ESM) algorithm [15], whose main determine the similarity between two images. The sub image is referred to as the template image and the larger contribution consists in finding a parameterizations and an algorithm, which allow achieving second-order image is referred to as the search area (main image) [25], convergence at the computational cost and consequently [26], [28], [27]. The template matching process involves shifting the template over the search area and computing a the speed of first-order methods. Similar to template-based tracking, feature-based also similarity between the template image and the window of requires a well-textured object. 
They work by extracting the search area over which the template [25], [26], [29], [27]. These shifting and computing processes operate salient image regions from a reference image and matching them to another image. Each single point in the reference simultaneously and do the repetition until the template image is compared with other points belonging in a search image lies on the edge of the search area. For the image processing purpose, the most common technique for region in the other image. The one that gives the best similarity measure score is considered as the corresponding measuring the similarity between two-dimensional images one a common choice for feature extraction is the Harris is by using cross correlation method. A correlation measure is determined between the template and respective corner detector [16]. Features can then be matched using normalized cross correlation (NCC) or some other windows of the search area to find the template position similarity measure [17]. Two recent feature-matching which has maximum correlation [25], [30]. Generally, for two-dimensional image, the correlation coefficient is approaches are SIFT [18] and Randomized Trees [19]. Both perform equally well in terms of accuracy. However, computed for the entire area on the template image shifted on the main image. despite a recently proposed optimization of SIFT called The correlation between two signals (cross correlation) SURF [20], SIFT has a lower runtime performance than the Randomized Trees, which exhibit a fast feature matching is a standard approach to feature detection as well as a component of more sophisticated techniques. thanks to an offline learning step. In comparison to template-based methods, feature-based approaches can deal A. Template Matching by Cross Correlation. with bigger inter frame displacements and can even be used Correlation based matching is a simple, yet very popular for wide-baseline matching if we consider the whole image technique in pattern recognition with images. The basic as the search region. However, wide-baseline approaches method consists of forming a correlation measure between a template T and image I (I ⊗ T ) and determining the are in general too slow for real-time applications. Therefore they are mostly used for initialization rather than tracking. A full tracking system using only features was proposed in location of a correspondence by finding the location of [21]. They rely on registered reference images of the object maximum correlation. Usually I is much larger than the and perform feature matching between reference image and template. current image as well as between previous image and A common measure of dissimilarity is current image to estimate the pose of the object. However, 12 © 2011 ACEEE DOI: 01.IJNS.02.01.105
  • 3. ACEEE Int. J. on Network Security, Vol. 02, No. 01, Jan 2011 ∑ (I − T ) = ∑ I − 2∑ IT + ∑ T 2 techniques are included for preprocessing purpose to get 2 2 d= (1) the proper images such as flip image, region of interest (ROI) and image binarization. These techniques are not the Where the sum is over pixels in the template T and the major work for this project. Therefore, these techniques are image I . The value of‘d’ will be small when I and T are not discussed in this paper. The correlation method is almost identical and large when they differ significantly. implemented using the Fast Fourier transform and modified Since the term ∑ (IT ) will have to be large whenever,’d’ into discrete correlation expression for computer usage. Discrete Correlation is a mathematical process of is small, large value of ∑ (IT ) similarly indicates the comparing one image with another image discretely. The resulting image is a two- dimensional expression of location of a good match of the template. Notice that equivalence. ∑ (IT ) is the cross correlation of the two signals I Supposedly, the relation between template image and search area (main image) is defined as; and T . n n The simple cross-correlation is called unnormalized and it has several major shortcomings. If a certain pixel in the ∑ ∑ M (i , j )N (i , j ) i =1 j =1 CC = image has an abnormally large value, the 1 ⎡ n n ⎤2 ∑ (IT ) associated with this location is likely to have a n n ⎢∑ ∑ M 2 (i , j )∑ ∑ N (i , j )⎥ 2 ⎣ i = 1 j =1 i = 1 j =1 ⎦ large value even though no good match exists. And since zeros do not contribute to the correlation value, a good (4) match containing zeros will possible have a small value. To overcome these problems, normalized cross-correlation M (i , j ) = cN (i , j ) (NCC) is often used instead, in which the measure is formulated as n n ⎡~ ⎤ ⎢T ( x , y ) I ( x ± u , y ± v )⎥ ~ ∑∑ cN (i, j )N (i, j ) ∑∑ ⎢ ⎥ CC = i =1 j =1 1 ⎣ ⎦ x y ρ (u , v ) = ⎡n n 2 2 n n ⎤2 ⎢∑∑ c N (i, j )∑∑ N (i, j )⎥ 2 ~ ~ ∑ ∑ T (x , y ) ∑ ∑ I (x ± u , y ± v ) x y 2 x y 2 ⎣ i =1 j =1 i =1 j =1 ⎦ (5) (2) n n In this equation (2), I is an image of image size n x n c ∑∑ N 2 (i, j ) with a search window of a larger image and T is a i =1 j =1 ~ CC = template of image size n x n; T = t − μ T , and μ T is the 1 ⎡n n 2 2 n n ⎤2 ⎢∑∑ c N (i, j )∑∑ N 2 (i, j )⎥ average intensity of T .This formula keeps ρ in the range - 1 to 1,with a value close to 1 indicating a better matching. ⎣ i =1 j =1 i =1 j =1 ⎦ (6) For equation 2 to be effective, a prior accurate knowledge of the pattern T is required .For the detection Let CC=1, then for some constant c, the correlation of detonators, we assume the shape of a detonator is fixed coefficient will result as a value 1, proving the 100% and known. However, the orientation of the detonator in similarity between both images. This theory is applied images can be changing and multiple templates with together with the discrete correlation expression in this varying orientations have to be used. system. The equation for the two-dimensional discrete Now let as assume that correlation is given by [5]; T (x, y ) = M (i, j ) and M −1 N −1 Out (k , l ) = ∑∑ In1(m, n)In2(k + m, l + n ) m =0 n =0 I ( x + u , y + v ) = N (i, j ) (7) (3) Where In1 and In 2 are the input images and Out is the output image. MxN is the dimension of both images With image size as mentioned above for both I and T . 
and possibly they have different dimensions and sizes, For a sample image (contains left and right breast however, it will filled in with certain pixel values to allow images), the position, orientation and size need to be the matching process. The In1 and In 2 are called the observed. These criteria are important in obtaining the best search area and the template image respectively. The output for the matching process. Unfortunately, most output will reveal the resultant image and its position of the images do not satisfy these criteria. Therefore, some 13 © 2011 ACEEE DOI: 01.IJNS.02.01.105
  • 4. ACEEE Int. J. on Network Security, Vol. 02, No. 01, Jan 2011 greatest match between the search area (main image) and minimizes difficulties due to different image quality and the template. image formation conditions. For the initialization registered images of the object IV. PROPOSED SYSTEM called key frames, are required. They can be created directly from the textured model by rendering it from An overview of the proposed system as a finite state different views. If the real-world metric pose is required, machine (FSM) is given in Figure 1.The system starts with the correct intrinsic camera parameters have to be an initialization phase, and then uses the template-based provided. tracking algorithm to track the object. If the template-based tracker is unable to recover the pose within a certain B. Initialization number of attempts the initialization is invoked again. When initializing, features are extracted from the current Template-based Tracking Algorithm image and matched to the features extracted in the key frame. The pose can then be estimated from the 3D object We use the NCC algorithm for template-based tracking. points and corresponding 2D feature points in the current The object is tracked using this method until a loss of track image. Since the tracker is using a textured model of the is detected, object the accuracy of the initial pose estimation is not very Given: Input image I ; orientation of the object initialize an critical. If on the other hand the reference templates used output array with the same size as the input image for tracking were extracted from the current image, the [1] Set an initial threshold value for thresholding the precision of the initialization procedure would be a major correlation result. Normally, it is set at 0.7-0.8 issue, because the quality of the tracking result depends [2] Repeat the following steps until the result is directly on the quality of the templates used for tracking. satisfactory. Hence we decided to directly use the templates taken from Construct the correlation filter template T the textured model in our system. using an empirical model method. C. Reference Patch Extraction Perform normalized cross-correlation of I with T , and store the result in the output array. The textures of the reference patches, which are required Threshold these correlation values. If any for tracking, are taken from the textured model or each value is larger than the threshold, set the patch the object is rendered so that the patch is oriented corresponding point in the output array to 1, parallel to the image plane. It is also important to ensure otherwise to 0 that the relative sizes of the object patches are reflected in If the result is not satisfactory, adjust the the size of the rendered patches, since the number of pixels threshold or the correlation filter template and in a patch is directly proportional to its importance during continue tracking. Since the pose parameters used to render the [3] End repeat [1] & [2] patches are known, the reference patches can be directly extracted from the rendered image. These reference patches are then reduced a few times in size by a factor of two to create a stack of reference patches at different scales, which are used to speed up the tracking in a coarse-to-fine approach. D. Visibility Test Attempting to track patches which are not visible will lead to erroneous results. Hence it is necessary to ascertain the visibility of every patch. 
Figure 1. Overview of the proposed tracking system

A. Required Information
In our system we use a textured 3D model of the object. This model can either be created manually or semi-automatically with commercially available products. One point to note is that it is advisable to use the same camera for texturing the model and for tracking, because this minimizes difficulties due to different image quality and image formation conditions. For the initialization, registered images of the object, called key frames, are required. They can be created directly from the textured model by rendering it from different views. If the real-world metric pose is required, the correct intrinsic camera parameters have to be provided.

B. Initialization
When initializing, features are extracted from the current image and matched to the features extracted in the key frame. The pose can then be estimated from the 3D object points and the corresponding 2D feature points in the current image. Since the tracker is using a textured model of the object, the accuracy of the initial pose estimation is not very critical. If, on the other hand, the reference templates used for tracking were extracted from the current image, the precision of the initialization procedure would be a major issue, because the quality of the tracking result depends directly on the quality of the templates used for tracking. Hence we decided to directly use the templates taken from the textured model in our system.

C. Reference Patch Extraction
The textures of the reference patches, which are required for tracking, are taken from the textured model. For each patch the object is rendered so that the patch is oriented parallel to the image plane. It is also important to ensure that the relative sizes of the object patches are reflected in the size of the rendered patches, since the number of pixels in a patch is directly proportional to its importance during tracking. Since the pose parameters used to render the patches are known, the reference patches can be directly extracted from the rendered image. These reference patches are then reduced a few times in size by a factor of two to create a stack of reference patches at different scales, which are used to speed up the tracking in a coarse-to-fine approach (see the sketch after subsection D).

D. Visibility Test
Attempting to track patches which are not visible will lead to erroneous results. Hence it is necessary to ascertain the visibility of every patch. This test is performed by rendering the model with OpenCV and using the occlusion query extension to test which patches are visible and which are occluded. The visibility test is performed for each frame using the pose estimated in the previous frame. Thanks to the occlusion query extension the visibility test can be performed very fast, so that it does not interfere with the tracking performance.
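The coarse-to-fine stack of reference patches described in subsection C can be built, for example, by repeatedly halving each rendered patch. The short C++ sketch below shows one possible way to do this with OpenCV's pyrDown; the function name buildPatchStack and the number of levels are illustrative choices, not values from our implementation.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Build a coarse-to-fine stack for one reference patch by halving it
// 'levels' times; stack[0] is the full-resolution patch.
std::vector<cv::Mat> buildPatchStack(const cv::Mat& referencePatch, int levels = 3)
{
    std::vector<cv::Mat> stack;
    stack.push_back(referencePatch.clone());
    for (int i = 1; i <= levels; ++i) {
        cv::Mat reduced;
        cv::pyrDown(stack.back(), reduced);   // blur and downsample by a factor of two
        stack.push_back(reduced);
    }
    return stack;   // tracking starts at the coarsest level and is refined downwards
}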
E. Loss Of Track
Determining when the tracker has lost the object is important in order to switch to the proposed algorithm. In our system this is accomplished by computing the normalized cross correlation (NCC) between the reference patch I*_k and the current patch I_k, after the end of the optimization, for all visible patches.

Let I* be the reference image and I the current image, and let p denote the pixel coordinates of the pixels in the reference image. The NCC between two patches is defined as

    NCC(I_k, I*_k) = (1 / N_k) * SUM_{p_k} [ (I_k(p_k) - mu_k)(I*_k(p_k) - mu*_k) / (sigma_k * sigma*_k) ]        (8)

where N_k is the number of pixels of each patch, mu_k and mu*_k are the mean pixel intensities, and sigma_k and sigma*_k their standard deviations. If the NCC of a patch falls below a certain threshold, it is excluded from the tracking. Loss of track occurs mainly either due to fast movement of the object or because the tracked object goes out of focus. To avoid loss of track, we propose a tracking system capable of following the object over a 360 degree rotation. In this paper the tracking of the object is considered only in the x-direction. We do not deal with the mechanical design of the rotational disc mechanism; rather, we deal with tracking the object continuously without any loss of the object under test. Complete control of the tracking system is done through a wireless interface using a 433 MHz RF module from the remote place, which is discussed in detail in the following section.
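As a concrete reading of equation (8), the following C++ sketch computes the zero-mean NCC between a reference patch and the current patch and flags the patch as lost when the score drops below a threshold. The helper names patchNCC and patchLost and the threshold value of 0.7 are illustrative assumptions, not values from our implementation.

#include <opencv2/core.hpp>

// Zero-mean NCC between two equally sized single-channel patches (equation 8).
double patchNCC(const cv::Mat& current, const cv::Mat& reference)
{
    CV_Assert(current.size() == reference.size() && current.channels() == 1);
    cv::Scalar muC, sigmaC, muR, sigmaR;
    cv::meanStdDev(current, muC, sigmaC);
    cv::meanStdDev(reference, muR, sigmaR);

    cv::Mat c, r;
    current.convertTo(c, CV_64F);
    reference.convertTo(r, CV_64F);

    // Numerator of (8): sum over pixels of the zero-mean products.
    cv::Mat prod = (c - muC[0]).mul(r - muR[0]);
    double numerator = cv::sum(prod)[0];
    double nPixels   = static_cast<double>(current.total());

    // (1/N_k) * sum / (sigma_k * sigma*_k); assumes non-constant patches.
    return numerator / (nPixels * sigmaC[0] * sigmaR[0]);
}

// A patch is considered lost when its NCC falls below the threshold.
bool patchLost(const cv::Mat& current, const cv::Mat& reference, double threshold = 0.7)
{
    return patchNCC(current, reference) < threshold;
}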
V. TEMPLATE MATCHING SYSTEM MODEL IMPLEMENTATION

The proposed system model consists of three major subdivisions:
A. Hardware setup
B. Wireless camera interface with OpenCV
C. Decision making rule for continuous tracking of the object based on the ROI

A. Hardware Setup
This section deals with the rotational disc arrangement of the tracking system. The system uses a single stepper motor to account for tracking the object as it moves in the x-direction. As presented in Fig. 6, the motor is operated using an electronic stepper motor driver circuit. We have used a 433 MHz wireless transmitter and receiver pair for exchanging data between the remote computer and the rotational disc arrangement on which the wireless camera is mounted. At the receiving section, the 360 degree rotational control is done by means of the stepper motor interfaced with an ATMEL 89C52 microcontroller, so that the wireless camera keeps tracking the object. The received data are processed based on the decision making rule implemented in the microcontroller. The motor driver circuits are in turn controlled by the tracking program through the data received from the remote PC, based on the position of the object being tracked. The rotational disc mechanism implies that the motion of the camera is governed by the motor, which is installed horizontally.

B. Wireless Camera Interfacing
In order to track the object using template matching through the NCC based approach, we have implemented a wireless vision interface for acquiring real time images from a remote place through a JK wireless surveillance camera. OpenCV is used for tracking objects based on a user defined template. To grab the image we have used a Zebronics TV tuner image grabber, which acquires the image from the wireless camera through a 2.5 GHz video receiver module interfaced with the PC. To interface the image grabber with OpenCV we have used the dynamic link library vcapg2.dll. To track the object completely, an efficient decision making rule is implemented in software, i.e., if-else conditions based on the movement of the object in the x-direction. A minimal frame acquisition sketch is given below.
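The sketch below shows how frames from the capture device can be pulled into OpenCV. It uses the modern cv::VideoCapture interface as a stand-in for the TV tuner plus vcapg2.dll path used in our implementation; the device index 0, the window name and the ESC-key exit are assumptions made only for illustration.

#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    // Open the first capture device (assumed here to be the TV tuner / frame grabber).
    cv::VideoCapture capture(0);
    if (!capture.isOpened())
        return -1;

    // Match the 320x240 frame size used throughout this paper.
    capture.set(cv::CAP_PROP_FRAME_WIDTH, 320);
    capture.set(cv::CAP_PROP_FRAME_HEIGHT, 240);

    cv::Mat frame;
    while (capture.read(frame)) {            // grab frames from the wireless camera feed
        cv::imshow("Wireless camera", frame);
        if (cv::waitKey(30) == 27)           // ESC stops the capture loop
            break;
    }
    return 0;
}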
C. Decision Making Rule for Object Tracking
Our object tracker module generates the trajectory of an object over time by locating its position in every frame of the video. In our implementation the tasks of detecting the object and establishing correspondence between object instances across frames are performed simultaneously (i.e., multithreaded). The possible object region in every frame is obtained by means of our object detection module, and then our tracker module tracks the object across frames. In our tracking approach the object is represented using the primitive geometric shape appearance model, i.e., the object is represented as a rectangle.
The object tracking module takes the x and y coordinates of the moving object, based on the ROI, as an input from the object template module. This information is then used to extract a square image template (whenever required) from the last acquired frame. The module keeps searching for this template in the frames captured from that point onwards. Whenever it is found, a red rectangle is overlaid on the detected object. If the template matching does not yield any result (signifying that the object has changed its appearance), a new template can be selected based on the user's interest in the real time video sequence. Further, the (x, y) coordinates of the tracked object are calculated and passed on to the rotational disc module from the PC through the wireless interface for analysis and automation.

D. Algorithm for the Tracking Module Based on the Decision Making Rule
i. Get the positional information of the object in the image I(x, y); the frame size is generally taken as 320x240.
ii. Generate an image template T with the size mentioned in Section III A by extracting a square image from the last frame grabbed by the camera. The template is extracted in the form of a square whose image coordinates are given by:
    Top left corner: (x - u, y - v)
    Top right corner: (x + u, y - v)
    Bottom left corner: (x - u, y + v)
    Bottom right corner: (x + u, y + v)
iii. Search for the generated template T in the last frame grabbed by the camera. This is done using an efficient template matching algorithm based on NCC.
iv. IF the template matching is successful, transmit the data through the 433 MHz RF transmitter via the serial port;
    ELSE IF the tracker has NOT detected motion of the object, stop the automation of the wireless camera module setup AND, if the detector has detected the object, THEN go to step i (get a new template), ELSE go to step v (get the x, y position);
    ELSE go to step i (get a new template).
v. Obtain the position T(x, y) of the match and pass it on to the rotational disc x-direction automation module for analysis.
vi. Go to step iii.

Table I shows how the object is tracked based on the coordinate information, and the serial data used for controlling the rotational disc camera mounting setup through the 433 MHz wireless RF link.

TABLE I. AUTOMATION OF THE TRACKING MODULE AND ITS CONTROL WORD

Object direction    Automation of the tracking system based on the            Serial data sent to the
(template match)    coordinates of the template match in the frame            receiver section
LEFT                Move the camera towards the direction of the object       'R'
                    by rotating the stepper motor in the anti-clockwise
                    direction
RIGHT               Move the camera towards the direction of the object       'L'
                    by rotating the stepper motor in the clockwise direction
IDLE                Stop the movement of the actuators                         'S'
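A minimal PC-side sketch of the decision making rule in Table I follows: the x-coordinate of the matched template is compared against the frame centre, and one of the control characters 'R', 'L' or 'S' is chosen. The function names selectControlWord and sendSerial, the dead-band width and the printf stub are our own assumptions; the actual transmission over the 433 MHz serial link is hardware-specific and is not shown.

#include <cstdio>

// Choose the control word of Table I from the matched template position.
// frameWidth is 320 in our setup; deadBand defines the IDLE region around the centre.
char selectControlWord(int matchX, int frameWidth = 320, int deadBand = 40)
{
    const int centre = frameWidth / 2;
    if (matchX < centre - deadBand)
        return 'R';   // object on the LEFT: rotate the stepper motor anti-clockwise
    if (matchX > centre + deadBand)
        return 'L';   // object on the RIGHT: rotate the stepper motor clockwise
    return 'S';       // object near the centre (IDLE): stop the actuators
}

// Placeholder for the hardware-specific 433 MHz serial transmission
// (in the real system the character is written to the serial port).
void sendSerial(char controlWord)
{
    std::printf("control word: %c\n", controlWord);
}

void trackStep(int matchX)
{
    sendSerial(selectControlWord(matchX));
}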
VI. RESULTS

Results are presented for the experiments performed using the tracking finite state machine shown in Figure 1. The experiments were conducted on various images acquired from the wireless pinhole camera, which is interfaced with the rotational disc mechanism, through the low cost Zebronics TV tuner frame grabber programmed using OpenCV. The template matching program was written in the VC++ environment along with some predefined functions available in OpenCV. To make the system more versatile under various environmental conditions, we implemented a software based threshold adjustment, which is helpful for better template matching based tracking.

When the sequence of images remains within the viewing region the tracking is smooth, whereas when the object being tracked goes out of focus the camera is automated through the stepper motor so that the object is not missed, as discussed above in Table I. In the results we have considered three cases: a vehicle prototype, a human face, and surveillance camera movement. The reference templates and the corresponding tracked output frames are shown in Figures 2, 3 and 4, 5 respectively.

Figure 2. Template matching based vehicle tracking prototype with real time images of frame size 320x240 (template image: vehicle)
Figure 3. Template matching based face tracking (template image: face)
Figure 4. Template matching based surveillance camera tracking with real time images of frame size 320x240; the stepper motor actuator moves towards the left side (clockwise direction)
Figure 5. Template matching based surveillance camera tracking with real time images of frame size 320x240; the stepper motor actuator moves towards the right side (anti-clockwise direction)

CONCLUSIONS

In this paper we have implemented template matching based on the NCC algorithm for real time tracking in a non-ideal environment, irrespective of the lighting conditions. The system has been automated using a rotational disc camera mount mechanism that is synchronized with the algorithm. Such an automated object tracking system can be used in defense applications such as automatic target tracking and autonomous target hitting through a turret, by classifying whether a template corresponds to an enemy object or not. Future work focuses on tracking multiple objects at the same time as well as on improving tracker accuracy during camera motion. A stereo vision based tracking system can also be developed for maximum coverage of the scene and for tracking multiple objects.

ACKNOWLEDGMENT

The authors gratefully acknowledge the following individuals for their valuable support: Prof. V. Bhaskaran, Department of Mathematics, and our colleagues, for their valuable guidance and for devoting their precious time, sharing their knowledge and extending their co-operation.

REFERENCES

[1] Yilmaz A., Javed O., Shah M., "Object tracking: A survey", ACM Computing Surveys, 38, 4, Article 13, Dec. 2006.
[2] Richard Y. D. Xu, John G. Allen, Jesse S. Jin, "Robust real time tracking of non-rigid objects", Proceedings of the Pan-Sydney Area Workshop on Visual Information Processing, pp. 95-98, June 2004.
[3] Bar-Shalom Y., Fortmann T., "Tracking and Data Association", Academic Press Inc.
[4] Fieguth P., Terzopoulos D., "Color-based tracking of heads and other mobile objects at video frame rates", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 21-27, 1997.
[5] Gregory D. Hager and Kentaro Toyama, "The XVision System: A General-Purpose Substrate for Portable Real Time Vision Applications", Computer Vision and Image Understanding, Vol. 69, No. 1, pp. 23-37, 1998.
[6] B. D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision", IJCAI, 1981.
[7] Hirochika Inoue, Tetsuya Tachikawa and Masayuki Inaba, "Robot Vision System with a Correlation Chip for Real time Tracking, Optical Flow and Depth Map Generation", Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1621-1626, France, 1992.
[8] Changming Sun, "A Fast Stereo Matching Method", Digital Image Computing: Techniques and Applications, pp. 95-100, Massey University, Auckland, New Zealand, 10-12 December 1997.
[9] Bouthemy, P., "A maximum likelihood framework for determining moving edges", IEEE Trans. Pattern Anal. Mach. Intell., 11(5):499-511, 1989.
[10] Wuest, H., Vial, F., and Stricker, D., "Adaptive line tracking with multiple hypotheses for augmented reality", IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 62-69, 2005.
[11] Lucas, B. and Kanade, T., "An iterative image registration technique with an application to stereo vision", Int. Joint Conf. on Artificial Intelligence, pp. 674-679, 1981.
[12] Hager, G. and Belhumeur, P., "Efficient region tracking with parametric models of geometry and illumination", IEEE Trans. on Pattern Analysis and Machine Intelligence, 20(10):1025-1039, 1998.
[13] Baker, S. and Matthews, I., "Equivalence and efficiency of image alignment algorithms", IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1090-1097, 2001.
[14] Baker, S., Dellaert, F., and Matthews, I., "Aligning images incrementally backwards", Technical report, Robotics Institute, Carnegie Mellon University, 2001.
[15] Benhimane, S. and Malis, E., "Real-time image-based tracking of planes using efficient second-order minimization", IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 943-948, 2004.
[16] Harris, C. and Stephens, M., "A combined corner and edge detector", Proceedings of the 4th Alvey Vision Conference, pp. 147-151, 1988.
[17] Zhang, Z., Deriche, R., Faugeras, O., and Luong, Q.-T., "A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry", Technical Report 2273, INRIA, 1994.
[18] Lowe, D., "Distinctive image features from scale-invariant keypoints", International Journal of Computer Vision, 60(2):91-110, 2004.
[19] Lepetit, V., Lagger, P., and Fua, P., "Randomized trees for real-time keypoint recognition", IEEE Int. Conf. on Computer Vision and Pattern Recognition, pp. 775-781, 2005.
[20] Bay, H., Tuytelaars, T., and Van Gool, L., "SURF: Speeded Up Robust Features", European Conference on Computer Vision, 2006.
[21] Vacchetti, L., Lepetit, V., and Fua, P., "Combining edge and texture information for real-time accurate 3D camera tracking", Proceedings of the Third IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 48-57, 2004.
[22] Pressigout, M. and Marchand, E., "Real-time planar structure tracking for visual servoing: a contour and texture approach", IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2005.
[23] Masson, L., Dhome, M., and Jurie, F., "Robust real time tracking of 3D objects", Int. Conf. on Pattern Recognition, pp. 252-255, 2004.
[24] Mehdi Khosravi and Ronald W. Schafer, "Template Matching Based on a Grayscale Hit-or-Miss Transform", 1996.
[25] Yongmin Kim, Shijun Sun, Hyun Wook Park, "Template Matching Using Correlative Autopredicative Search", University of Washington, 2004.
[26] Harley R. Myler, Arthur R. Weeks, "The Pocket Handbook of Image Processing Algorithms in C", Department of Electrical & Computer Engineering, University of Central Florida, Orlando, Florida, 1993.
[27] Rafael C. Gonzalez and Richard E. Woods, "Digital Image Processing", Second Edition, 2002.
[28] Maria Kallergi, "Medical-Image Analysis and Methods", Taylor & Francis Group, LLC, 2005.
[29] Gregory S. Chirikjian and Alexander B. Kyatkin, "Engineering Applications of Noncommutative Harmonic Analysis: With Emphasis on Rotation and Motion Groups", CRC Press LLC, 2001.
[30] Dongming Zhao, "Image Processing and Feature Extraction", Taylor & Francis Group, LLC, 2005.

I. Manju Jackin received her post-graduate degree, M.E. (VLSI), with First Class with Distinction from RMK Engineering College, affiliated to Anna University, and her B.E. in Electronics and Communication from the University of Madras. She is currently serving as an Assistant Professor in an Anna University affiliated college. She has nearly 13 years of teaching experience, and her current research interests include VLSI and digital image processing. She has published 3 papers in various national and international conferences and has served on the reviewers committee of ARTCom 2009.

M. Manigandan received his post-graduate degree, M.E. (Mechatronics), with First Class with Distinction from the Madras Institute of Technology, Anna University, and his B.E. in Electronics and Communication from the University of Madras. He is currently serving as a Senior Lecturer in an Anna University affiliated college. His current research interests include robotics and digital image processing. He has published 3 papers in various national and international conferences.
International Journal of Computer,2001 experience profession and her current research includes VLSI and [19] Lepetit, V., Lagger, P., and Fua, P,” Randomized trees for Digital Image Processing. She has published 3 papers in various real-time keypoint recognition”, In IEEE Int.Conf. On National and International conferences. She has been in one of the Computer Vision and Pattern Recognition, pages 775– reviewers committee for ARTCom2009. 781.Vision, 60(2):91–110. 2 [20] Bay, H., Tuytelaars, T., and Van Gool, L, “SURF: Speeded M.Manigandan has received his Post up robust features”. European Conference on Computer Graduation in M.E., Mechatronics with First Vision,2006 with Distinction from Madras Institute of [21] Vacchetti, L., Lepetit, V., and Fua, P,” Combining edge and Technology, Anna University, and B.E., texture information for real-time accurate 3d camera Electronics and Communication from University tracking”. In Proceedings of the Third IEEE and ACM of Madras. Currently he is serving as Senior International Symposium on Mixed and Augmented Reality, Lecturer under Anna University affiliated college. His current pages 48–57, 2004. research includes Robotics and Digital Image Processing. He has published 3 papers in various National and International conferences. 18 © 2011 ACEEE DOI: 01.IJNS.02.01.105