INTERNATIONAL JOURNAL OF ELECTRONICS AND COMMUNICATION ENGINEERING & TECHNOLOGY (IJECET)
ISSN 0976 – 6464 (Print), ISSN 0976 – 6472 (Online)
Volume 5, Issue 2, February (2014), pp. 36-41
© IAEME: www.iaeme.com/ijecet.asp
Journal Impact Factor (2014): 3.7215 (Calculated by GISI)
www.jifactor.com
AN INTELLIGENT OBJECT INPAINTING APPROACH FOR VIDEO REPAIRING USING MATLAB

Mr. Abhijeet A. Chincholkar
PG Student, M.E. Digital Electronics, DBNCOET Yavatmal, India.

Prof. Salim A. Chavan
Associate Professor and Vice Principal, DBNCOET Yavatmal, India.
ABSTRACT
This paper presents a new object-based approach to video inpainting. The approach fills and restores missing or occluded parts of both the background and the moving foreground objects in video captured by either a stationary or a moving camera. It differs from previous video-processing methods, which operate directly on three-dimensional data: it slices the image sequence along the motion manifold of the moving object, reducing the three-dimensional search space to a two-dimensional one and thereby improving computational efficiency. A geometry-based video analysis stage enables the system to repair real videos under the perspective distortion produced by common camera motions such as panning, tilting, and zooming. Experimental demonstrations show that the algorithm performs comparably to three-dimensional methods while extending recent repair techniques to inpaint damaged videos with projective effects as well as lighting changes. The proposed object-based video inpainting scheme solves the spatial consistency problem and the temporal continuity problem simultaneously. It comprises three steps: virtual contour construction, key-posture selection and mapping, and synthetic posture generation. Together, these steps improve the accuracy and efficiency of object-based video inpainting and make the inpainting intelligent.
Keywords: Object completion, posture mapping, posture sequence retrieval, synthetic posture, video
inpainting.
I. INTRODUCTION
This work proposes an object-based video inpainting scheme that can maintain the spatial
consistency as well as temporal motion continuity of an object simultaneously. This method can also
handle the problem of insufficient available postures. Initially, we assume that the objects to be removed and the occluded objects to be restored have been extracted by an object segmentation method [12]. After object extraction, the occluded objects and the background are completed separately. We assume that the trajectory of each occluded object can be approximated by a straight line segment during the period of occlusion. This assumption is reasonable for many practical applications because the duration of an occlusion is typically short, and an object does not usually perform complex motions during such a short period. Our main goal is to solve the problem of completing partly or totally occluded objects in a video sequence. The proposed object completion method comprises three parts: a virtual contour construction method, a key-posture-based posture sequence matching technique, and a synthetic key-posture generation method. The first part, object inpainting, samples the three-dimensional video volume into two-dimensional spatio-temporal slices. Patch-based inpainting operations, in particular the exemplar-based method [11], then complete the partly occluded object trajectories in the two-dimensional spatio-temporal slices; the objective is to maintain the recovered trajectories with respect to both spatial and temporal continuity. Next, the virtual contour construction method and the posture sequence matching technique are used to extract the most similar frame sequence from the available object postures, which are collected from the unoccluded part of the input video.
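The slicing step described above can be sketched in a few lines. Since the paper's implementation is in MATLAB, the following is a hypothetical Python/NumPy illustration; the fixed-row slice used here is only the simplest special case of slicing along a motion manifold, valid when the object moves horizontally.

```python
import numpy as np

# Hypothetical grayscale video volume: (frames, height, width).
video = np.random.rand(60, 120, 160)

def spatiotemporal_slice(volume, row):
    """Extract the 2-D (time x width) slice at a fixed image row.

    The paper slices along the motion manifold of the moving object;
    a fixed row is the simplest special case of such a slice.
    """
    return volume[:, row, :]

slice_2d = spatiotemporal_slice(video, row=60)
print(slice_2d.shape)  # (60, 160)
```

Each such slice is an ordinary 2-D image, so 2-D patch-based inpainting can be applied to it directly, which is what reduces the 3-D search to a 2-D one.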
In this work, a key-posture selection method, indexing of the available frames, and coding operations transform the posture sequence synthesis problem into a comparative search problem, which can be solved efficiently by existing matching algorithms [14]. If the virtual contour method is unable to find a matching sequence in the database of available postures, the proposed approach constructs synthetic postures by combining constituent components of key postures to enrich the object posture database. This process maximizes the number of usable postures and helps solve the problem of an insufficient posture database. After the most similar postures are extracted, the occluded objects are completed by replacing them with the extracted ones. For background inpainting, we follow the background mosaic method proposed in [1]. That method first constructs a background mosaic for each video shot based on global motion estimation (GME), and then finds the corresponding available data in the background mosaic for each pixel in a missing region. The data are used to fill and restore the missing regions and thereby achieve spatio-temporal consistency in the completed background. Since background inpainting is not the focus of this work, we do not consider its implementation in detail.
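As a rough illustration of the mosaic-based background fill described above, the Python sketch below copies aligned mosaic pixels into a frame's missing region. The function and the simple (dy, dx) offset are illustrative stand-ins for the GME alignment of [1], not the authors' method.

```python
import numpy as np

def fill_from_mosaic(frame, mask, mosaic, offset):
    """Fill the masked pixels of `frame` from a background mosaic.

    `offset` = (dy, dx) maps frame coordinates into mosaic coordinates;
    in the paper it would come from global motion estimation (GME).
    All names here are illustrative.
    """
    dy, dx = offset
    out = frame.copy()
    ys, xs = np.nonzero(mask)               # coordinates of missing pixels
    out[ys, xs] = mosaic[ys + dy, xs + dx]  # copy aligned mosaic data
    return out
```

Because every frame of a shot draws from the same mosaic, the filled background is consistent across frames, which is the spatio-temporal consistency the method aims for.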
II. PRESENT THEORY AND PRACTICES
Video inpainting [1]–[9] has attracted considerable attention in recent years because of its powerful ability to restore damaged videos and its flexibility for video editing. Inpainting techniques have also been used extensively to restore damaged digital images [10]–[11]. Depending on how they restore damaged images, the techniques can be categorized into three groups: texture synthesis-based methods [10], PDE-based methods, and patch-based methods [11]. The concept of texture synthesis is borrowed from computer graphics; its main purpose is to insert a selected input texture into an occluded region. In contrast, PDE-based approaches propagate information from the boundary of a missing region toward the center of that region, and are therefore suitable for completing a damaged image in which thin regions are lost. Neither texture synthesis nor partial differential equation-based propagation can handle general image inpainting, because the former does not consider structural information and the latter frequently introduces blurring artifacts. A patch-based approach [11], on the other hand, is more suitable for image inpainting because it can produce high-quality visual effects and maintain the consistency of local structures. Because of the success of patch-based image inpainting, researchers have applied a similar concept to video inpainting [3]; however, the issues that must be addressed in video inpainting are more challenging. Although video inpainting is a relatively new research area, a number of methods have been proposed in recent years.
Generally, inpainting methods are classified into two types: patch-based methods [1]–[4] and object-based methods [5], [6]. As the patch-based approach has been successfully applied in image inpainting [11], researchers have extended a similar concept to video inpainting. For example, video inpainting under constrained camera motion [1] and space-time completion [3] can be regarded as extensions of the nonparametric sampling technique. The authors of [1] propose a video inpainting technique that combines motion information and image inpainting. Like most existing methods, it assumes that the camera's movements are constrained to certain directions. In a preprocessing step, three mosaics, for the background, the foreground, and the optical flow, are constructed to provide information for video inpainting. Each missing region in a frame has a corresponding missing region in the foreground or background mosaic, and the candidate patch in the foreground mosaic that is most similar to the missing region in the frame is used to fill it. For background inpainting, the image inpainting method proposed in [16] is adopted to fill the missing regions of the background mosaic. Although the approach in [1] produces a good visual effect for each frame, it cannot maintain continuity along the temporal axis, and the lack of temporal continuity results in flickering artifacts. The method in [2] uses fixed-size three-dimensional cubes as the unit of the similarity measure function; a set of constituent cubes is used to compute the value of a missing pixel. Based on the similarity measure, the sum of squared differences (SSD), each cube finds its best candidate cube. Although the results reported in [2] are good, [4] instead proposes constructing motion manifolds of the space-time volume. In [4], two slicing lines must be drawn because both the foreground and the background must be taken into consideration.
As a result, when a patch contains both spatial and temporal dimensions, the two cannot be handled smoothly at the same time, which results in either motion artifacts or incomplete structure.
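The SSD similarity measure used in [2] can be sketched as follows. This is a hypothetical Python/NumPy illustration, not the authors' code; `target` and `candidates` stand for fixed-size 3-D cubes drawn from the space-time volume.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equally sized cubes."""
    return float(np.sum((a - b) ** 2))

def best_match(target, candidates):
    """Index of the candidate cube most similar to `target` under SSD."""
    return int(np.argmin([ssd(target, c) for c in candidates]))

# Toy usage: the all-ones cube matches itself, not the all-zeros cube.
target = np.ones((3, 3, 3))
candidates = [np.zeros((3, 3, 3)), np.ones((3, 3, 3))]
print(best_match(target, candidates))  # 1
```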
Object-based approaches, such as [5] and [6], also employ a video inpainting mechanism. The authors of [6] present an efficient object-based video inpainting technique for videos recorded by a stationary camera. To inpaint the background, they use the background pixels that best suit the current frame to fill a missing region; to inpaint the foreground, they utilize all existing object templates. A fixed-size sliding window is defined to include a set of continuous object templates, and the authors propose a similarity function that measures the similarity between two such sets. A sliding window covering the missing object and its neighboring object templates is used to find the most similar object template, and the corresponding template then replaces the missing object. If the number of postures in the database is insufficient, the inpainting result can be unsatisfactory. The authors of [7] propose a user-assisted video layer segmentation technique that separates a target video into color and illumination videos; a tensor voting technique then maintains consistency in both the spatio-temporal domain and the illumination domain. The method inpaints an occluded object by synthesizing other available objects, but the synthesized object does not have a real trajectory, and only textures are allowed in the background. Patch-based methods often have difficulty handling spatial consistency as well as temporal continuity; in addition, they generate inpainting errors in the foreground. As a result, researchers have focused on object-based approaches, which usually generate high-quality visual results. Even so, some difficult issues still need to be addressed.
Here we propose an object-based video inpainting scheme that simultaneously solves the spatial inconsistency problem and the temporal continuity problem. The scheme comprises three steps: virtual contour construction, key-posture selection and mapping, and synthetic posture generation. The contribution of this work is threefold. First, we propose a scheme for deriving the virtual contour of an occluded object. The contour provides a fairly precise initial estimate of the posture and filling location of the occluded object, even if the object is completely occluded; the virtual contour is therefore well suited to finding a good replacement for the occluded object from the available postures in the input video. Second, we propose a key-posture-based mapping scheme that converts the posture sequence retrieval problem into a string matching problem, thereby reducing the computational complexity significantly while maintaining the matching accuracy.
Since the occluded objects are completed for a whole subsequence rather than for individual frames,
the temporal continuity of object motion is well maintained. Third, for a sequence in which we cannot
find a sufficiently large set of available postures for completing occluded postures, our proposed
synthetic posture generation scheme can effectively enrich the database of postures by combining the
constituent parts of different available postures.
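The key-posture mapping described above can be illustrated with a toy Python sketch: each posture is mapped to the symbol of its nearest key posture, and posture sequence retrieval then reduces to substring search over the coded sequence. All names and the letter coding here are illustrative, not the paper's implementation.

```python
def nearest_key_posture(posture, key_postures, distance):
    """Map a posture to the index (symbol) of its nearest key posture."""
    return min(range(len(key_postures)),
               key=lambda i: distance(posture, key_postures[i]))

def find_subsequence(database, query):
    """Exact substring search stands in for posture-sequence retrieval;
    returns the start index of `query` in `database`, or -1 if absent."""
    return database.find(query)

# Toy example: postures coded as letters after key-posture mapping.
db = "ABCABDABC"                    # postures seen in the clear frames
print(find_subsequence(db, "ABD"))  # 3
```

A return value of -1 corresponds to the case where no matching sequence exists in the posture database, which is when the synthetic posture generation step takes over.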
In summary, this work proposes a novel object-based video inpainting scheme that solves the spatial inconsistency problem and the temporal continuity problem simultaneously. The scheme comprises three steps: virtual contour construction, key-posture selection and mapping, and synthetic posture generation.
III. KEY OBSERVATIONS
We propose an object-based video inpainting method that maintains the spatial consistency as well as the temporal motion continuity of an object simultaneously. The scheme can also handle the problem of insufficient available postures. Fig. 1 shows a block diagram of the proposed scheme.
Fig. 1. Flowchart of the proposed system: input video → available object posture creation → posture processing (ideal posture sequence generation; indexing available postures) → posture sequence comparison → if a matching posture sequence is available, replace the occluded objects; if not, generate synthetic postures first → result.
Initially, we assume that the objects to be removed and the occluded objects to be restored have been extracted by an automatic object segmentation scheme or by an interactive extraction scheme. After object extraction, the occluded objects and the background are completed separately. We also assume that the trajectory of each occluded object can be approximated by a straight line segment during the period of occlusion. This assumption is reasonable for many practical applications because the duration of an occlusion is typically short, and an object does not usually perform complex motions during such a short period. Our primary goal is to solve the problem of
completing partly or totally occluded objects in a video. If the virtual contour method is unable to find a good match in the database of available postures, it generates synthetic postures by combining the constituent components of key postures to enrich the posture database. This process mitigates the problem of insufficient available postures. After the most similar posture sequence is extracted from the video, the occluded objects are completed by replacing the damaged objects with the retrieved ones. For background inpainting, this method follows the background mosaic method proposed in [1]: it first constructs a background mosaic for each video shot based on global motion estimation (GME), and then finds the corresponding available data in the background mosaic for each pixel in a missing region. The data are used to fill the missing regions and thereby achieve spatio-temporal consistency in the completed background. Since background inpainting is not the focus of this work, we do not consider its implementation in detail.
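The linear-trajectory assumption above admits a simple worked sketch: given the object's last visible centroid before the occlusion and its first visible centroid after it, the missing centroids can be linearly interpolated. This Python fragment illustrates the assumption only; it is not the paper's MATLAB implementation, and the centroid representation is an assumption of the sketch.

```python
import numpy as np

def interpolate_trajectory(p_before, p_after, n_missing):
    """Linearly interpolate object centroids across an occlusion.

    p_before / p_after: last and first visible (x, y) centroids.
    Returns the n_missing in-between positions (endpoints excluded).
    """
    p0 = np.asarray(p_before, dtype=float)
    p1 = np.asarray(p_after, dtype=float)
    ts = np.linspace(0.0, 1.0, n_missing + 2)[1:-1]  # interior fractions
    return [tuple(p0 + t * (p1 - p0)) for t in ts]

# Two occluded frames between centroids (0, 0) and (30, 60).
positions = interpolate_trajectory((0, 0), (30, 60), 2)
```

The interpolated positions give the filling locations at which the retrieved or synthesized postures are pasted back into the occluded frames.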
IV. IMPLEMENTATION DETAILS
An introduction to the system will be given first. The available literature and prior work on video inpainting will be reviewed, and the theory and techniques involved in the proposed system will be studied. A suitable tool for implementation will be chosen, testing of the system will be carried out, and the results will be discussed. Conclusions will be drawn from the results, and the scope for further research will be analyzed.
V. CONCLUSION
The proposed system presents a framework for hole removal, signature removal, and object tracking and inpainting in a video sequence. The system comprises three steps: virtual contour construction, key-posture selection and mapping, and synthetic posture generation. For object-based inpainting, we modify a patch-based inpainting algorithm to obtain improved results compared with previous methods. Our experimental results will show that the proposed method removes objects with good quality in terms of the object's spatial consistency as well as its temporal motion continuity, avoids over-smoothing artifacts, and compensates for insufficient available postures. Non-linear motion of the occluded objects is also permitted; however, because the virtual trajectories are linear, the method may not compose sufficiently accurate postures if an object moves strongly non-linearly during an occlusion period. The proposed method does not deal with the illumination change problem, which occurs when lighting is not uniform across the scene. Overall, the framework helps make object-based inpainting intelligent.
VI. REFERENCES
[1] K. A. Patwardhan, G. Sapiro, and M. Bertalmío, "Video inpainting under constrained camera motion," IEEE Trans. Image Process., vol. 16, no. 2, pp. 545–553, Feb. 2007.
[2] Y. Wexler, E. Shechtman, and M. Irani, "Space-time completion of video," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 3, pp. 1–14, Mar. 2007.
[3] T. K. Shih, N. C. Tang, and J.-N. Hwang, "Exemplar-based video inpainting without ghost shadow artifacts by maintaining temporal continuity," IEEE Trans. Circuits Syst. Video Technol., vol. 19, no. 3, pp. 347–360, Mar. 2009.
[4] Y. Shen, F. Lu, X. Cao, and H. Foroosh, "Video completion for perspective camera under constrained motion," in Proc. IEEE Conf. Pattern Recognition, Hong Kong, China, Aug. 2006, pp. 63–66.
[5] J. Jia, Y.-W. Tai, T.-P. Wu, and C.-K. Tang, "Video repairing under variable illumination using cyclic motions," IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 5, pp. 832–839, May 2006.
[6] S.-C. S. Cheung, J. Zhao, and M. V. Venkatesh, "Efficient object-based video inpainting," in Proc. IEEE Conf. Image Process., Atlanta, GA, Oct. 2006, pp. 705–708.
[7] M. Bertalmio, A. L. Bertozzi, and G. Sapiro, "Navier-Stokes, fluid dynamics, and image and video inpainting," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Kauai, HI, Dec. 2001, pp. 355–362.
[8] T. Ding, M. Sznaier, and O. I. Camps, "A rank minimization approach to video inpainting," in Proc. IEEE Conf. Comput. Vis., Rio de Janeiro, Brazil, Oct. 2007, pp. 1–8.
[9] Y. Matsushita, E. Ofek, W. Ge, X. Tang, and H.-Y. Shum, "Full-frame video stabilization with motion inpainting," IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 7, pp. 1150–1163, Jul. 2006.
[10] A. Efros and T. Leung, "Texture synthesis by non-parametric sampling," in Proc. IEEE Conf. Comput. Vis., 1999, vol. 2, pp. 1033–1038.
[11] A. Criminisi, P. Perez, and K. Toyama, "Region filling and object removal by exemplar-based image inpainting," IEEE Trans. Image Process., vol. 13, no. 9, pp. 1200–1212, Sep. 2004.
[12] I. Haritaoglu, D. Harwood, and L. S. Davis, "W4: Who? When? Where? What? A real-time system for detecting and tracking people," in Proc. IEEE Int. Conf. Automatic Face Gesture Recognit., Los Alamitos, CA, 1998, pp. 222–227.
[13] L. Yatziv and G. Sapiro, "Fast image and video colorization using chrominance blending," IEEE Trans. Image Process., vol. 15, no. 5, pp. 1120–1129, May 2006.
[14] S. Belongie, J. Malik, and J. Puzicha, "Shape matching and object recognition using shape contexts," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 4, pp. 509–522, Apr. 2002.
[15] Y.-M. Liang, S.-W. Shih, C.-C. A. Shih, H.-Y. M. Liao, and C.-C. Lin, "Learning atomic human actions using variable-length Markov models," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 39, no. 1, pp. 268–280, Jan. 2009.
[16] A. Elgammal and C.-S. Lee, "Inferring 3D body pose from silhouettes using activity manifold learning," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Washington, DC, Jun. 2004, pp. 681–688.
[17] D. Comaniciu, V. Ramesh, and P. Meer, "Kernel-based object tracking," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 5, pp. 564–577, May 2003.
[18] X. Bai, L. J. Latecki, and W.-Y. Liu, "Skeleton pruning by contour partitioning with discrete curve evolution," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 3, pp. 449–462, Mar. 2007.
[19] Manoj Pandey, J. S. Ubhi, and Kota Solomon Raju, "Kernel based similarity estimation and real time tracking of moving objects," International Journal of Electronics and Communication Engineering & Technology (IJECET), vol. 4, no. 7, 2013, pp. 293–300, ISSN Print: 0976-6464, ISSN Online: 0976-6472.
[20] B. A. Ahire and Prof. Neeta A. Deshpande, "A review on video inpainting techniques," International Journal of Computer Engineering & Technology (IJCET), vol. 4, no. 1, 2013, pp. 203–210, ISSN Print: 0976-6367, ISSN Online: 0976-6375.