Sajad Einy / International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 2, Issue 4, July-August 2012, pp. 2302-2305

Motion Tracking Using Pillar K-Mean Segmentation

Sajad Einy
M.Tech Spatial Information Technology, JNTU University, Hyderabad
Abouzar Shahraki Kia, PG Scholar
Dept. of Electrical and Electronic Engineering, JNTU, Hyderabad, India

Abstract
In this paper we present a pillar k-means segmentation and region tracking model, which aims at combining color, texture, pattern, and motion features. First, the Pillar algorithm segments the objects to be tracked; second, the global motion of the video sequence is estimated and compensated with the presented algorithms. The spatio-temporal map is updated and compensated using the pillar segmentation model to keep consistency in video object tracking.

Key words: image segmentation, object tracking, region tracking, pillar k-means clustering

Introduction
Image segmentation and video object tracking are the subject of extensive research for video coding and area security. For instance, the new video standards make it possible to use adapted coding parameters for a video object over several frames. To track objects in a video sequence, they first need to be segmented. A spatio-temporal shape is characterized by its texture, its color, and its own motion, which differs from the global motion of the shot. In the literature, several kinds of methods are described; they use spatial and/or temporal [1] information to segment the objects. Methods based on temporal information need to know the global motion of the video to perform an effective video object segmentation. Horn and Schunck [2] proposed to determine the optical flow between two successive frames. Alternatively, a parametric motion model of the successive frames can be estimated [3]. Studies in motion analysis have shown that motion-based segmentation benefits from including not only motion but also the intensity cue, in particular to retrieve the region boundaries accurately. Hence the knowledge of the spatial partition can improve the reliability of the motion-based segmentation with the Pillar algorithm. As a consequence, we propose a pillar segmentation combining the motion information and the spatial features of the sequence to achieve accurate segmentation and video object tracking. This segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to the image segmentation after optimization by the Pillar algorithm. The Pillar algorithm considers the placement of pillars, which should be located as far as possible from each other to withstand the pressure distribution of a roof, analogous to the placement of the centroids amongst the data distribution. This algorithm is able to optimize K-means clustering for image segmentation in terms of precision and computation time.

Motion estimation based on tubes
To extract motion information correlated with the motion of real-life objects in the video shot, we consider several successive frames and assume a uniform motion between them. Taking account of perceptual considerations, and of the frame rate of the next HDTV generation in progressive mode, we use a GOF (group of frames) composed of 9 frames [4,5]. The goal is to ensure the coherence of the motion along a perceptually significant duration. Figure 1 illustrates how a spatio-temporal tube is estimated: considering a block of the frame f_t at the GOF center, a uniform motion is assumed and the tube passes through the 9 successive frames such that it minimizes the error between the current block and the aligned ones [7,8].

Fig. 1. Spatio-temporal tube used to determine the motion vector of a given block [8]

We obtain a motion vector field with one vector per tube, and one tube for each block of the image f_t. This motion vector field is more homogeneous (smoother) and more correlated with the motion of real-life objects; this field is the input of the next process: the global motion estimation.
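To make the tube construction above concrete, the following Python sketch (an illustration under our reading of this section, not the authors' code) estimates the motion vector of a single tube by full-search block matching: the block of the centre frame f_t is compared, under the uniform-motion assumption, with the aligned blocks of the other frames of the 9-frame GOF, and the displacement with the smallest accumulated absolute difference is kept. The block size, the search range, and the use of grayscale frames are illustrative assumptions.

```python
# Illustrative sketch: motion vector of one spatio-temporal tube by full-search
# block matching over a 9-frame GOF, assuming uniform motion around the centre frame.
import numpy as np

def tube_motion_vector(gof, y, x, block_size=16, search_range=8):
    """gof: sequence of 9 grayscale frames (H, W); (y, x): top-left corner of the
    block in the centre frame f_t. Returns the per-frame displacement (dy, dx)
    minimising the summed absolute difference along the tube."""
    center = len(gof) // 2                      # index of f_t (4 for a 9-frame GOF)
    ref = gof[center][y:y + block_size, x:x + block_size].astype(np.float64)
    h, w = gof[0].shape
    best_vec, best_err = (0, 0), np.inf

    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            err = 0.0
            for k, frame in enumerate(gof):
                # Uniform motion: the displacement grows linearly with the
                # temporal distance to the centre frame.
                oy, ox = y + (k - center) * dy, x + (k - center) * dx
                if not (0 <= oy <= h - block_size and 0 <= ox <= w - block_size):
                    err = np.inf
                    break
                blk = frame[oy:oy + block_size, ox:ox + block_size].astype(np.float64)
                err += np.abs(blk - ref).sum()  # SAD between aligned block and reference
            if err < best_err:
                best_err, best_vec = err, (dy, dx)
    return best_vec
```

Running this for every block of f_t yields the per-tube motion vector field that feeds the global motion estimation described next.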


2.2. Robust global motion estimation
The next step is to identify the parameters of the global motion of the GOF from this motion vector field. We use an affine model with six parameters. First, we compute the derivatives of each motion vector and accumulate them in a histogram (one histogram for each global parameter). The location of the main peak in each histogram gives the value retained for that parameter. Then, once the deformation parameters have been identified, they are used to compensate the original motion vector field. Thus, the remaining vectors correspond only to the translation motions. These remaining motion vectors are then accumulated in a two-dimensional (2D) histogram. The main peak in this 2D histogram gives the values of the translation parameters [4]. The amount of motion can be estimated from the histogram in Fig. 2.

Fig. 2. Histogram of gray levels

Motion segmentation using pillar k-means clustering
The image segmentation system performs three preprocessing steps: noise removal, color space transformation, and dataset normalization. First, the image is enhanced by applying adaptive noise removal filtering. Then, our system converts the RGB image into the HSL and CIELAB color systems. Because of the different ranges of the data in HSL and CIELAB, we apply data normalization. The system then clusters the image for segmentation by applying K-means clustering after optimization by the Pillar algorithm. Fig. 3 shows the computational steps of our approach for image segmentation.

The Pillar algorithm is described as follows. Let X = {x_i | i = 1, ..., n} be the data, k the number of clusters, C = {c_i | i = 1, ..., k} the initial centroids, SX ⊆ X the set of points of X already selected in the sequence of the process, DM = {x_i | i = 1, ..., n} the accumulated distance metric, D = {x_i | i = 1, ..., n} the distance metric of each iteration, and m the grand mean of X. The execution steps of the proposed algorithm are:
1. Set C = Ø, SX = Ø, and DM = [ ]
2. Calculate D ← dis(X, m)
3. Set the minimum number of neighbors nmin = α · n / k
4. Assign dmax ← argmax(D)
5. Set the neighborhood boundary nbdis = β · dmax
6. Set i = 1 as the counter of the i-th initial centroid
7. DM = DM + D
8. Select ж ← x_argmax(DM) as the candidate for the i-th initial centroid
9. SX = SX ∪ {ж}
10. Set D as the distance metric between X and ж
11. Set no ← the number of data points fulfilling D ≤ nbdis
12. Assign DM(ж) = 0
13. If no < nmin, go to step 8
14. Assign D(SX) = 0
15. C = C ∪ {ж}
16. i = i + 1
17. If i ≤ k, go back to step 7
18. Finish, with C as the optimized initial centroids.

However, the computation time may be long if we apply the Pillar algorithm directly to all the data points of a high-resolution image. To solve this problem, we reduce the image size to 5% and then apply the Pillar algorithm. After obtaining the optimized initial centroids as shown in Fig. 3, we apply K-means clustering and obtain the positions of the final centroids. We use these final centroids as the initial centroids for the full-size image, as shown in Figure 4, and then cluster the image data points using K-means. This mechanism improves the segmentation results and speeds up the image segmentation [6]. Code sketches of this initialization and of the reduced-size refinement are given below.

Fig. 3. Main algorithm of pillar segmentation [6]
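The following Python sketch is a direct transcription of steps 1-18 above (our reading of the pseudocode, not the authors' implementation). The parameters alpha and beta correspond to α and β of steps 3 and 5; their default values are only illustrative, since the paper does not specify them, and a small safety guard is added so the sketch always terminates.

```python
# Sketch of the Pillar initialization (steps 1-18 above); not the authors' code.
import numpy as np

def pillar_initial_centroids(X, k, alpha=0.7, beta=0.3):
    """X: (n, d) array of data points (e.g. pixel colour features).
    Returns a (k, d) array of optimized initial centroids."""
    n = len(X)
    m = X.mean(axis=0)                              # grand mean of X
    D = np.linalg.norm(X - m, axis=1)               # step 2: distances to the grand mean
    nmin = alpha * n / k                            # step 3: minimum neighbourhood size
    nbdis = beta * D.max()                          # steps 4-5: neighbourhood boundary
    DM = np.zeros(n)                                # step 1: accumulated distance metric
    selected = np.zeros(n, dtype=bool)              # SX: points already considered
    centroids = []

    for _ in range(k):                              # steps 6, 16, 17
        DM = DM + D                                 # step 7: accumulate the distances
        while True:
            cand = int(np.argmax(DM))               # step 8: farthest accumulated point
            if DM[cand] <= 0.0:                     # safety guard: no candidate left
                break
            selected[cand] = True                   # step 9
            D = np.linalg.norm(X - X[cand], axis=1) # step 10: distances to the candidate
            no = int((D <= nbdis).sum())            # step 11: neighbourhood population
            DM[cand] = 0.0                          # step 12
            if no >= nmin:                          # step 13: accept well-supported pillars
                break
        D[selected] = 0.0                           # step 14: ignore selected points
        centroids.append(X[cand])                   # step 15
    return np.array(centroids)                      # step 18
```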


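The reduced-size mechanism described above can then be sketched as a two-pass pipeline: the Pillar initialization and a first K-means run are applied to a heavily subsampled copy of the image, and the resulting centroids initialize K-means on the full-resolution data points, reusing the pillar_initial_centroids sketch above. The use of scikit-learn's KMeans, the subsampling stride, and the function names are assumptions made for illustration; the paper only specifies the 5% size reduction and the two clustering passes.

```python
# Illustrative pipeline: Pillar initialization on a reduced copy of the image,
# then two K-means passes (reduced image, then full image).
import numpy as np
from sklearn.cluster import KMeans

def pillar_kmeans_segmentation(image, k, subsample=20):
    """image: (H, W, C) float array of colour features; returns an (H, W) label map.
    `subsample` approximates the paper's reduction of the image before the Pillar step."""
    small = image[::subsample, ::subsample].reshape(-1, image.shape[2])
    init = pillar_initial_centroids(small, k)        # Pillar on the reduced image
    km_small = KMeans(n_clusters=k, init=init, n_init=1).fit(small)

    # The final centroids of the reduced image become the initial centroids
    # for clustering the full-resolution data points.
    full = image.reshape(-1, image.shape[2])
    km_full = KMeans(n_clusters=k, init=km_small.cluster_centers_, n_init=1).fit(full)
    return km_full.labels_.reshape(image.shape[:2])
```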
Fig. 4. The segmented image

Motion features
Inside a GOF, the main criterion for the segmentation is often the motion: for a given region, the motion vectors of its tubes should have close values. Therefore we associate an energy term that assesses the difference between the motion of a tube and the motion of a region. The motion vector associated with a peak is also the estimated motion of the region in the GOF. The distance between the motions of a tube and of a region, according to their norms and their directions, is [8]:

d_m = \frac{\vec{mv}_s}{\max(\|\vec{mv}_s\|, \|\vec{mv}_{r_{\varepsilon_s}}\|)} \cdot \frac{\vec{mv}_{r_{\varepsilon_s}}}{\max(\|\vec{mv}_s\|, \|\vec{mv}_{r_{\varepsilon_s}}\|)}

where \vec{mv}_s and \vec{mv}_{r_{\varepsilon_s}} are respectively the motion vectors of the site s and of the region r_{\varepsilon_s} formed by the sites labelled \varepsilon_s. In order to constrain this distance between 0 and 1, we compute [8,9]

p_m(\vec{mv}_s, \vec{mv}_{r_{\varepsilon_s}}) = (d_m + 1)/2

Regions tracking
In order to track the regions between two successive GOFs, we compare their segmentation maps. More precisely, the segmentation map of the previous GOF is first compensated using all of the motion information (global motion, motion vectors of its objects). Next we compare the labels of the regions in the previous and in the current GOF. A metric based on the color, the texture, and the overlap between the regions is used. For the color and the texture, we adapt the Bhattacharyya distance. A region of the current GOF takes the label of the closest region of the previous GOF (if their distance is small enough). The compensated map of the previous GOF is used to improve the current map through the potential function:

v_{c_t}(e_s(t), e_s(t-1)) = \beta_t    if e_s(t) \neq e_s(t-1)
v_{c_t}(e_s(t), e_s(t-1)) = -\beta_t   if e_s(t) = e_s(t-1)

with \beta_t > 0, and where e_s(t) and e_s(t-1) are respectively the labels of the site for the current GOF and for the motion-compensated previous GOF. The temporal second-order cliques are considered here; each clique corresponds to a pair of adjacent tubes between the previous and the current GOF:

w_5(e_s(t)) = \sum_{c_t \in C_t} v_{c_t}(e_s(t), e_s(t-1))

where C_t is the set of all the temporal cliques of S. Inside a GOF, when the motions of the potential objects are very similar, the motion-based segmentation fails to detect them. In this case, the initial segmentation map for our pillar segmentation model contains no information; hence, we use the motion-compensated map from the previous GOF as initialization for our MRF segmentation model. This process keeps consistency for video object tracking through the GOFs of the sequence [8,9].

Fig. 5. Tracking an object in the area
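As a minimal illustration of the two terms defined in this section (a transcription of the formulas above, not the authors' code), the sketch below computes the motion similarity p_m between a tube and its region, and the temporal consistency energy w_5 between the label map of the current GOF and the motion-compensated label map of the previous one; beta_t is the weighting constant of the potential.

```python
# Sketch of the motion-similarity and temporal-consistency terms defined above.
import numpy as np

def motion_similarity(mv_s, mv_r):
    """p_m in [0, 1]: similarity between the motion vector mv_s of a site (tube)
    and the motion vector mv_r of its region, via the normalised dot product d_m."""
    mv_s, mv_r = np.asarray(mv_s, float), np.asarray(mv_r, float)
    denom = max(np.linalg.norm(mv_s), np.linalg.norm(mv_r))
    if denom == 0.0:                               # both vectors null: identical motions
        return 1.0
    d_m = np.dot(mv_s / denom, mv_r / denom)       # lies in [-1, 1]
    return (d_m + 1.0) / 2.0

def temporal_energy(labels_t, labels_tm1, beta_t=1.0):
    """w_5: sum over the temporal cliques (one per site) of +beta_t when the label of
    a site differs from its motion-compensated label in the previous GOF, and
    -beta_t when the labels are identical."""
    same = np.asarray(labels_t) == np.asarray(labels_tm1)
    return float(np.where(same, -beta_t, beta_t).sum())
```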


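The label propagation rule of the Regions tracking section (each current region inherits the label of the closest compensated previous region under a Bhattacharyya distance on its colour/texture histograms) can be sketched as follows; the histogram features, the distance threshold, and the function names are illustrative assumptions, not values given in the paper.

```python
# Sketch of region-label propagation between two successive GOFs using a
# Bhattacharyya distance on per-region histograms.
import numpy as np

def bhattacharyya_distance(p, q):
    """p, q: histograms of two regions (normalised to sum to 1 before comparison)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    bc = np.sum(np.sqrt(p * q))                 # Bhattacharyya coefficient
    return np.sqrt(max(0.0, 1.0 - bc))

def propagate_labels(curr_hists, prev_hists, prev_labels, threshold=0.3):
    """curr_hists: histograms of the current regions; prev_hists/prev_labels:
    histograms and labels of the compensated previous regions. Returns one label
    per current region (None when no previous region is close enough)."""
    labels = []
    for h in curr_hists:
        dists = [bhattacharyya_distance(h, ph) for ph in prev_hists]
        best = int(np.argmin(dists))
        labels.append(prev_labels[best] if dists[best] < threshold else None)
    return labels
```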
Experimental results
In order to obtain better object tracking results we use threshold values. For better results, the pillar segmentation is implemented along with the GOF algorithm for tracking and for the removal of noise particles. The performance of these two parameters is discussed below with various images acquired from the camera, shown in Figures 4 and 5 respectively. In fact, the tracking of an object also depends on the distance and positioning of the camera. In this paper we have tested the tracking on objects such as a shaking hand. For testing purposes we have run the algorithm in real time with an image size of 150x150. Motion tracking depends mainly on the size of the object and the center of the segmented objects. The statistical data for various HSV values and threshold values are shown in Figures 5, 6 and 7 respectively. All the results have been computed by varying the segmentation parameters, which play a vital role when tracking any object; these parameters have to be adjusted depending on the lighting conditions of the environment.

Conclusion
In this paper we have presented pillar k-means clustering for segmenting the objects in the area. The centroids, like pillars, should be located as far as possible from each other to withstand the pressure distribution of a roof, analogous to the distribution of the centroids amongst the data. This is realized for a GOF of nine frames, and consistency is kept between the successive GOF segmentation maps for video object tracking.

References
[1] R. Megret and D. DeMenthon, "A survey of spatio-temporal grouping techniques," Tech. Rep. LAMP-TR-094/CS-TR-4403, University of Maryland, 1994.
[2] B.K.P. Horn and B.G. Schunck, "Determining Optical Flow," Artificial Intelligence, vol. 17, no. 1-3, pp. 185-203, 1981.
[3] J. M. Odobez and P. Bouthemy, "Robust Multiresolution Estimation of Parametric Motion Models," Journal of Visual Communication and Image Representation, vol. 6, December 1995.
[4] O. Brouard, F. Delannay, V. Ricordel, and D. Barba, "Robust Motion Segmentation for High Definition Video Sequences using a Fast Multi-Resolution Motion Estimation Based on Spatio-Temporal Tubes," in Proc. PCS 2007, Lisbon, Portugal.
[5] O. Le Meur, P. Le Callet, and D. Barba, "A Coherent Computational Approach to Model Bottom-Up Visual Attention," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 802-817, May 2006.
[6] A. R. Barakbah and Y. Kiyoki, "A New Approach for Image Segmentation using Pillar-Kmeans Algorithm," World Academy of Science, Engineering and Technology, vol. 59, 2009.
[7] "Spatio-temporal segmentation based on region merging," IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep. 1998.
[8] "Spatio-temporal segmentation and regions tracking of high definition video sequences based on a Markov Random Field model," IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep. 1998.
[9] "Combining motion segmentation with tracking for activity analysis," in Proc. Sixth IEEE International Conference on Automatic Face and Gesture Recognition, 17-19 May 2004.



