Prathusha.Perugu, Krishna veni / International Journal of Engineering Research and
                      Applications (IJERA) ISSN: 2248-9622 www.ijera.com
                        Vol. 2, Issue 5, September- October 2012, pp.246-249

          An Innovative method for Image retrieval using perceptual
                             textural features
                                             Prathusha.Perugu 1, Krishna Veni 2
            1 M.Tech Student, Kottam College of Engineering, Kurnool, A.P.
            2 Asst. Prof., Dept. of Computer Science and Engineering, Kottam College of Engineering, Tekur, Kurnool, A.P.


Abstract :
         Texture is a very important image feature, widely used in various image processing problems such as computer vision, image representation and retrieval. It has been shown that humans use some perceptual textural features to distinguish between textured images or regions. Some of the most important features are coarseness, contrast, direction and busyness. In this paper a study is made of these texture features. The proposed computational measures can be based upon two representations: the original image representation and the autocorrelation function (associated with the original images) representation.

Keywords : Texture features, Content Based Image Retrieval, Computational measures

     I.        Introduction
Texture has been extensively studied and used in the literature since it plays a very important role in human visual perception. Texture refers to the spatial distribution of grey-levels and can be defined as the deterministic or random repetition of one or several primitives in an image.
Texture analysis techniques can be divided into two main categories: spatial techniques and frequency-based techniques. Frequency-based methods include the Fourier transform and wavelet-based methods such as the Gabor model. Spatial texture analysis methods can be categorized as statistical methods, structural methods, or hybrid methods.
It is widely admitted in the computer vision community that there is a set of textural features that human beings use to recognize and categorize textures. Such features include coarseness, contrast and directionality.

    II.        Autocorrelation function
The autocorrelation function [1], denoted f(δi, δj), for an image I of n × m dimensions is defined as

    f(δi, δj) = 1/((n − δi)(m − δj)) × Σ_{i=0}^{n−δi−1} Σ_{j=0}^{m−δj−1} I(i, j) I(i + δi, j + δj)    (1)

For images containing repetitive texture patterns, the autocorrelation function exhibits periodic behavior with the same period as in the original image. For coarse textures, the autocorrelation function decreases slowly, whereas for fine textures it decreases rapidly. For images containing orientation(s), the autocorrelation function preserves the same orientation(s).

         In this paper the test images are taken from the Brodatz database [2] and the autocorrelation function is applied to the original images shown in figure 1. Each original image is split into nine scaled parts and the autocorrelation function is applied to one of the scaled images chosen at random. The scaled images are shown in figure 2.

Fig. 1: Original images from the Brodatz database

Any one of the scaled images can be selected and the autocorrelation function applied to it. Figure 3 shows the randomly selected scaled image and its histogram.
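As a concrete illustration, equation (1) can be sketched in Python/NumPy. This is an illustrative sketch, not the authors' code; the function name and the use of NumPy are our assumptions:

```python
import numpy as np

def autocorrelation(img, di, dj):
    """Normalized autocorrelation f(di, dj) of a grey-level image,
    per equation (1): the mean of I(i, j) * I(i + di, j + dj) over
    all positions where the shifted pixel stays inside the image."""
    img = np.asarray(img, dtype=float)
    n, m = img.shape
    # Overlapping region of the image and its (di, dj)-shifted copy
    a = img[: n - di, : m - dj]
    b = img[di:, dj:]
    return (a * b).sum() / ((n - di) * (m - dj))
```

For a texture that repeats with period p along the rows, evaluating f(δi, 0) for δi = 0, 1, 2, … would show peaks every p pixels, which is the periodic behavior described above.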




Fig. 3: Randomly selected scaled image and its histogram

After applying the autocorrelation function, the resulting image and its equivalent histogram are shown in figure 4.

Fig. 4: Autocorrelation applied to the scaled image, and its histogram

III.      Computational measures for Textural features

         Perceptual features have been largely used both in texture retrieval and content-based image retrieval (CBIR) [4] systems, such as:
• the IBM QBIC system, which implements the Tamura features;
• CANDID, by Kelly et al. at Los Alamos National Laboratory, using Laws filters;
• Photobook, by Pentland et al. from MIT, based on the Wold texture decomposition model.

The general estimation process of computational measures simulating human visual perception is as follows [3]:

Step 1: The autocorrelation f(δi, δj) is computed on image I(i, j).
Step 2: The convolution of the autocorrelation function and the gradient of the Gaussian function is computed in a separable way.
Step 3: Based on these two functions, computational measures for each perceptual feature are computed.

A. Coarseness
         Coarseness is the most important feature: it is coarseness that determines the existence of texture in an image. Coarseness measures the size of the primitives that constitute the texture. A coarse texture is composed of large primitives and is characterized by a high degree of local uniformity of grey-levels. A fine texture is constituted by small primitives and is characterized by a high degree of local variation of grey-levels.

First, we compute the first derivative of the autocorrelation function in a separable way, according to rows and columns respectively. Two functions Cx(i, j) and Cy(i, j) are then obtained:

    Cx(i, j) = f(i, j) − f(i + 1, j)
    Cy(i, j) = f(i, j) − f(i, j + 1)        (2)

Second, we compute the first derivatives of the obtained functions Cx(i, j) and Cy(i, j), again in a separable way according to rows and columns. Two functions Cxx(i, j) and Cyy(i, j) are then obtained:

    Cxx(i, j) = Cx(i, j) − Cx(i + 1, j)
    Cyy(i, j) = Cy(i, j) − Cy(i, j + 1)     (3)

To detect maxima, we use the following conditions (according to rows and columns, respectively):

    Cx(i, j) = 0 and Cxx(i, j) < 0          (4)
    Cy(i, j) = 0 and Cyy(i, j) < 0          (5)

Coarseness, denoted Cs, is estimated from the average number of maxima of the autocorrelation function: a coarse texture will have a small number of maxima and a fine texture will have a large number of maxima. Let Maxx(i, j) = 1 if pixel (i, j) is a maximum on its row and Maxx(i, j) = 0 otherwise; similarly, let Maxy(i, j) = 1 if pixel (i, j) is a maximum on its column and Maxy(i, j) = 0 otherwise. Coarseness can be expressed by the following equation:

    Cs = 1 / [ (1/2) × ( Σ_{i=0}^{n−1} Σ_{j=0}^{m−1} Maxx(i, j) / n + Σ_{j=0}^{m−1} Σ_{i=0}^{n−1} Maxy(i, j) / m ) ]    (6)

         The denominator gives the average number of maxima according to rows and columns: the higher the number of maxima, the lower the coarseness, and vice versa.

B. Contrast
         Contrast measures the degree of clarity with which one can distinguish between different primitives in a texture. A well-contrasted image is an image in which primitives are clearly visible and separable. Local contrast is commonly defined for each pixel as an estimate of the local variation in a neighborhood. More precisely, given a pixel p = (i, j) and a neighborhood mask W × W around the pixel, local contrast is computed as

    L_Ct(i, j) = ( max_{p∈W×W}(p) − min_{p∈W×W}(p) ) / ( max_{p∈W×W}(p) + min_{p∈W×W}(p) )    (7)
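The coarseness estimate can be sketched in Python/NumPy. This is our own illustrative implementation, not the authors' code: the Gaussian-gradient smoothing of Step 2 is omitted, and, as is common in discrete implementations, a pixel is treated as a maximum when the derivative changes sign rather than testing Cx = 0 exactly:

```python
import numpy as np

def coarseness(f):
    """Coarseness Cs of an autocorrelation surface f (2-D array),
    following equations (2)-(6): count row and column maxima and
    invert the average count."""
    f = np.asarray(f, dtype=float)
    n, m = f.shape
    # First derivatives along rows and columns, eq. (2)
    cx = f[:-1, :] - f[1:, :]      # Cx(i, j) = f(i, j) - f(i+1, j)
    cy = f[:, :-1] - f[:, 1:]      # Cy(i, j) = f(i, j) - f(i, j+1)
    # A maximum where the derivative crosses zero from <= 0 to > 0
    # (discrete analogue of eqs. (4)-(5))
    maxx = (cx[:-1, :] <= 0) & (cx[1:, :] > 0)
    maxy = (cy[:, :-1] <= 0) & (cy[:, 1:] > 0)
    avg_maxima = 0.5 * (maxx.sum() / n + maxy.sum() / m)
    return 1.0 / avg_maxima if avg_maxima > 0 else np.inf
```

A coarse texture yields few maxima and hence a large Cs; a fine texture yields many maxima and a small Cs, matching the description above.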


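The local contrast of equation (7), together with the global arithmetic mean that equation (8) takes over the whole image, can be sketched as follows. This is an illustrative sketch under our own choices (a clipped sliding window of side w, and defining the contrast as 0 where max + min = 0):

```python
import numpy as np

def local_contrast(img, w=3):
    """Local contrast L_Ct(i, j) of equation (7): (max - min)/(max + min)
    over a w x w neighborhood centred on each pixel (clipped at borders)."""
    img = np.asarray(img, dtype=float)
    n, m = img.shape
    r = w // 2
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            win = img[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            hi, lo = win.max(), win.min()
            out[i, j] = (hi - lo) / (hi + lo) if hi + lo > 0 else 0.0
    return out

def global_contrast(img, w=3):
    """Global contrast G_Ct of equation (8): mean of the local contrasts."""
    return local_contrast(img, w).mean()
```

A perfectly uniform image has zero contrast everywhere, while an image of well-separated bright and dark primitives pushes G_Ct towards 1.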

We propose to measure the global contrast as the global arithmetic mean of all the local contrast values over the image:

    G_Ct = 1/(m × n) × Σ_{i=1}^{n} Σ_{j=1}^{m} L_Ct(i, j)    (8)

where n and m are the dimensions of the image.

Fig. 5: The least and the most contrasted textures in the VisTex database

         Figure 5 shows the most and the least contrasted textures in the VisTex texture database [5] with respect to the proposed measure.

C. Directionality
         Regarding directionality, we want to estimate two parameters: the dominant orientation(s) and the degree of directionality. Orientation refers to the global orientation of the primitives that constitute the texture. The degree of directionality is related to the visibility of the dominant orientation(s) in an image, and refers to the number of pixels having the dominant orientation(s).

Orientation Estimation: When considering the autocorrelation function, one can notice two phenomena concerning orientation:
 1. an orientation existing in the original image is preserved in the corresponding autocorrelation function;
 2. using the autocorrelation function instead of the original image keeps the global orientation, whereas the original image also carries local orientation.

It follows that the global orientation of the image can be estimated by applying the gradient to the autocorrelation function of the original image, according to the rows (Cx) and according to the columns (Cy). The orientation θ is given by

    θ = arctan(Cy / Cx)    (9)

We consider only pixels with a significant orientation: a pixel is considered oriented if its gradient amplitude is greater than a certain threshold t.

Directionality Estimation: For directionality, we consider the number of pixels having the dominant orientation(s) θd. Let θd(i, j) = 1 if pixel (i, j) has a dominant orientation θd and θd(i, j) = 0 otherwise. We consider only dominant orientations θd that are present in a sufficient number of pixels, i.e. in more than a threshold t of pixels, so that the orientation becomes visible. Let Nθnd denote the number of non-oriented pixels. The degree of directionality Nθd of an image can be expressed by the following equation:

    Nθd = ( Σ_{i=0}^{n−1} Σ_{j=0}^{m−1} θd(i, j) ) / ( n × m − Nθnd )    (10)

The larger Nθd is, the more directional the image; the smaller Nθd is, the less directional the image.

IV.      Experimental Results
         The computational perceptual features are used to retrieve images. Figure 6 shows the window used to select the query image; a query image is selected from among the available images.

Figure 6 : Window to select query image

The retrieved images are shown in figure 7; the total number of retrieved images is nine. Precision is calculated as the number of relevant images retrieved divided by the total number of images retrieved, i.e. Precision = 6/9 = 0.667 in this example.

Figure 7 : Retrieved images
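The directionality measure of equations (9)–(10) can be sketched in Python/NumPy. This is an illustrative sketch, not the authors' code: the amplitude threshold, the use of 36 orientation bins, and taking the single most populated bin as the dominant orientation are all our assumptions:

```python
import numpy as np

def directionality(f, amp_thresh=1e-3, n_bins=36):
    """Degree of directionality of an autocorrelation surface f,
    per equations (9)-(10): per-pixel orientation arctan(Cy/Cx),
    keep pixels with significant gradient amplitude, and measure the
    fraction of oriented pixels in the dominant orientation bin."""
    f = np.asarray(f, dtype=float)
    # Gradients along rows and columns, trimmed to a common shape
    cx = (f[:-1, :] - f[1:, :])[:, :-1]
    cy = (f[:, :-1] - f[:, 1:])[:-1, :]
    amp = np.hypot(cx, cy)
    oriented = amp > amp_thresh           # significant orientation only
    if oriented.sum() == 0:
        return 0.0
    theta = np.arctan2(cy, cx)            # orientation per pixel, eq. (9)
    n_non_oriented = theta.size - oriented.sum()
    # Histogram the oriented pixels; the dominant orientation is the
    # most populated bin
    hist, _ = np.histogram(theta[oriented], bins=n_bins,
                           range=(-np.pi, np.pi))
    # Eq. (10): dominant-orientation pixels over oriented pixels
    return hist.max() / (theta.size - n_non_oriented)
```

A perfectly striped surface, where every oriented pixel shares one orientation, gives a directionality of 1; an isotropic surface spreads the histogram flat and gives a value near 1/n_bins.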


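The precision figure quoted above is the standard retrieval precision, computed for example as:

```python
def precision(n_relevant, n_retrieved):
    """Retrieval precision: relevant images retrieved / total retrieved."""
    return n_relevant / n_retrieved

# The example of figure 7: 6 relevant images out of 9 retrieved
p = round(precision(6, 9), 3)
```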

 V.      Conclusion
         An innovative method is presented for content-based image retrieval using perceptual features of texture. The precision is calculated and shown experimentally to lie between 0.6 and 0.8, and this method of image retrieval is effective compared to existing methods.

References :
  1.   N. Abbadeni, D. Ziou, and S. Wang, "Autocovariance-based perceptual textural features corresponding to human visual perception," in Proceedings of the 15th International Conference on Pattern Recognition, Barcelona, Spain, September 3-8, 2000.
  2.   P. Brodatz, Textures: A Photographic Album for Artists and Designers. New York: Dover, 1966.
  3.   N. Abbadeni, "Computational perceptual features for texture representation and retrieval," IEEE Transactions on Image Processing, Vol. 20, No. 1, January 2011.
  4.   Sara Sanam, Sudha Madhuri M., "Perception based texture classification, representation and retrieval," International Journal of Computer Trends and Technology, Volume 3, Issue 1, 2012.
  5.   VisTex texture database: http://www.white.media.mit.edu/vismod/imagery/VisionTexture/vistex.html





Weitere ähnliche Inhalte

Was ist angesagt?

GRUPO 5 : novel fuzzy logic based edge detection technique
GRUPO 5 :  novel fuzzy logic based edge detection techniqueGRUPO 5 :  novel fuzzy logic based edge detection technique
GRUPO 5 : novel fuzzy logic based edge detection technique
viisonartificial2012
 
An adaptive method for noise removal from real world images
An adaptive method for noise removal from real world imagesAn adaptive method for noise removal from real world images
An adaptive method for noise removal from real world images
IAEME Publication
 
11.optimal nonlocal means algorithm for denoising ultrasound image
11.optimal nonlocal means algorithm for denoising ultrasound image11.optimal nonlocal means algorithm for denoising ultrasound image
11.optimal nonlocal means algorithm for denoising ultrasound image
Alexander Decker
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance system
iaemedu
 
GRUPO 1 : digital manipulation of bright field and florescence images noise ...
GRUPO 1 :  digital manipulation of bright field and florescence images noise ...GRUPO 1 :  digital manipulation of bright field and florescence images noise ...
GRUPO 1 : digital manipulation of bright field and florescence images noise ...
viisonartificial2012
 

Was ist angesagt? (17)

LN s05-machine vision-s2
LN s05-machine vision-s2LN s05-machine vision-s2
LN s05-machine vision-s2
 
GRUPO 5 : novel fuzzy logic based edge detection technique
GRUPO 5 :  novel fuzzy logic based edge detection techniqueGRUPO 5 :  novel fuzzy logic based edge detection technique
GRUPO 5 : novel fuzzy logic based edge detection technique
 
An adaptive method for noise removal from real world images
An adaptive method for noise removal from real world imagesAn adaptive method for noise removal from real world images
An adaptive method for noise removal from real world images
 
New approach for generalised unsharp masking alogorithm
New approach for generalised unsharp masking alogorithmNew approach for generalised unsharp masking alogorithm
New approach for generalised unsharp masking alogorithm
 
3 intensity transformations and spatial filtering slides
3 intensity transformations and spatial filtering slides3 intensity transformations and spatial filtering slides
3 intensity transformations and spatial filtering slides
 
11.optimal nonlocal means algorithm for denoising ultrasound image
11.optimal nonlocal means algorithm for denoising ultrasound image11.optimal nonlocal means algorithm for denoising ultrasound image
11.optimal nonlocal means algorithm for denoising ultrasound image
 
Optimal nonlocal means algorithm for denoising ultrasound image
Optimal nonlocal means algorithm for denoising ultrasound imageOptimal nonlocal means algorithm for denoising ultrasound image
Optimal nonlocal means algorithm for denoising ultrasound image
 
Collaborative Filtering Based on Star Users
Collaborative Filtering Based on Star UsersCollaborative Filtering Based on Star Users
Collaborative Filtering Based on Star Users
 
Bh044365368
Bh044365368Bh044365368
Bh044365368
 
Fusion Based Gaussian noise Removal in the Images using Curvelets and Wavelet...
Fusion Based Gaussian noise Removal in the Images using Curvelets and Wavelet...Fusion Based Gaussian noise Removal in the Images using Curvelets and Wavelet...
Fusion Based Gaussian noise Removal in the Images using Curvelets and Wavelet...
 
Image Restitution Using Non-Locally Centralized Sparse Representation Model
Image Restitution Using Non-Locally Centralized Sparse Representation ModelImage Restitution Using Non-Locally Centralized Sparse Representation Model
Image Restitution Using Non-Locally Centralized Sparse Representation Model
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance system
 
Labview with dwt for denoising the blurred biometric images
Labview with dwt for denoising the blurred biometric imagesLabview with dwt for denoising the blurred biometric images
Labview with dwt for denoising the blurred biometric images
 
N018219199
N018219199N018219199
N018219199
 
GREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURES
GREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURESGREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURES
GREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURES
 
GRUPO 1 : digital manipulation of bright field and florescence images noise ...
GRUPO 1 :  digital manipulation of bright field and florescence images noise ...GRUPO 1 :  digital manipulation of bright field and florescence images noise ...
GRUPO 1 : digital manipulation of bright field and florescence images noise ...
 
Despeckling of Ultrasound Imaging using Median Regularized Coupled Pde
Despeckling of Ultrasound Imaging using Median Regularized Coupled PdeDespeckling of Ultrasound Imaging using Median Regularized Coupled Pde
Despeckling of Ultrasound Imaging using Median Regularized Coupled Pde
 

Andere mochten auch

Chiarini elena esercizio3
Chiarini elena esercizio3Chiarini elena esercizio3
Chiarini elena esercizio3
elenachiarini
 
Sessions alumnes. el que ens transmet la publicitat 1,2
Sessions alumnes. el que ens transmet la publicitat 1,2Sessions alumnes. el que ens transmet la publicitat 1,2
Sessions alumnes. el que ens transmet la publicitat 1,2
Jenny Carrasquer
 

Andere mochten auch (20)

Oi2423352339
Oi2423352339Oi2423352339
Oi2423352339
 
Oa2422882296
Oa2422882296Oa2422882296
Oa2422882296
 
Of2423212324
Of2423212324Of2423212324
Of2423212324
 
Op2423922398
Op2423922398Op2423922398
Op2423922398
 
Ol2423602367
Ol2423602367Ol2423602367
Ol2423602367
 
V24154161
V24154161V24154161
V24154161
 
Fz2510901096
Fz2510901096Fz2510901096
Fz2510901096
 
Nw2422642271
Nw2422642271Nw2422642271
Nw2422642271
 
R24129136
R24129136R24129136
R24129136
 
Oc2423022305
Oc2423022305Oc2423022305
Oc2423022305
 
Fv2510671071
Fv2510671071Fv2510671071
Fv2510671071
 
Cn31594600
Cn31594600Cn31594600
Cn31594600
 
Cx31676679
Cx31676679Cx31676679
Cx31676679
 
Da31699705
Da31699705Da31699705
Da31699705
 
Asaduzzaman - Mitigation in Bangladesh
Asaduzzaman - Mitigation in BangladeshAsaduzzaman - Mitigation in Bangladesh
Asaduzzaman - Mitigation in Bangladesh
 
Chiarini elena esercizio3
Chiarini elena esercizio3Chiarini elena esercizio3
Chiarini elena esercizio3
 
Sessions alumnes. el que ens transmet la publicitat 1,2
Sessions alumnes. el que ens transmet la publicitat 1,2Sessions alumnes. el que ens transmet la publicitat 1,2
Sessions alumnes. el que ens transmet la publicitat 1,2
 
Jk2416381644
Jk2416381644Jk2416381644
Jk2416381644
 
Plataforma PLATICAR. INTA/Costa Rica
Plataforma PLATICAR. INTA/Costa RicaPlataforma PLATICAR. INTA/Costa Rica
Plataforma PLATICAR. INTA/Costa Rica
 
Jo2416561659
Jo2416561659Jo2416561659
Jo2416561659
 

Ähnlich wie Ao25246249

chAPTER1CV.pptx is abouter computer vision in artificial intelligence
chAPTER1CV.pptx is abouter computer vision in artificial intelligencechAPTER1CV.pptx is abouter computer vision in artificial intelligence
chAPTER1CV.pptx is abouter computer vision in artificial intelligence
shesnasuneer
 
computervision1.pptx its about computer vision
computervision1.pptx its about computer visioncomputervision1.pptx its about computer vision
computervision1.pptx its about computer vision
shesnasuneer
 
griffm3_ECSE4540_Final_Project_Report
griffm3_ECSE4540_Final_Project_Reportgriffm3_ECSE4540_Final_Project_Report
griffm3_ECSE4540_Final_Project_Report
Matt Griffin
 
Anish_Hemmady_assignmnt1_Report
Anish_Hemmady_assignmnt1_ReportAnish_Hemmady_assignmnt1_Report
Anish_Hemmady_assignmnt1_Report
anish h
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance system
iaemedu
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance system
iaemedu
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance system
IAEME Publication
 

Ähnlich wie Ao25246249 (20)

Solving linear equations from an image using ann
Solving linear equations from an image using annSolving linear equations from an image using ann
Solving linear equations from an image using ann
 
SEMANTIC IMAGE RETRIEVAL USING MULTIPLE FEATURES
SEMANTIC IMAGE RETRIEVAL USING MULTIPLE FEATURESSEMANTIC IMAGE RETRIEVAL USING MULTIPLE FEATURES
SEMANTIC IMAGE RETRIEVAL USING MULTIPLE FEATURES
 
Aggregation operator for image reduction
Aggregation operator for image reductionAggregation operator for image reduction
Aggregation operator for image reduction
 
Estimation, Detection & Comparison of Soil Nutrients using Matlab
Estimation, Detection & Comparison of Soil Nutrients using MatlabEstimation, Detection & Comparison of Soil Nutrients using Matlab
Estimation, Detection & Comparison of Soil Nutrients using Matlab
 
A broad ranging open access journal Fast and efficient online submission Expe...
A broad ranging open access journal Fast and efficient online submission Expe...A broad ranging open access journal Fast and efficient online submission Expe...
Prathusha.Perugu, Krishna veni / International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 2, Issue 5, September-October 2012, pp. 246-249

An Innovative Method for Image Retrieval Using Perceptual Textural Features

Prathusha.Perugu 1, Krishna veni 2
1 M.Tech Student, Kottam College of Engineering, Kurnool, A.P.
2 Asst. Prof., Dept. of Computer Science and Engineering, Kottam College of Engineering, Tekur, Kurnool, A.P.

Abstract:
Texture is a very important image feature, widely used in various image processing problems such as computer vision, image representation and retrieval. It has been shown that humans use certain perceptual textural features to distinguish between textured images or regions; some of the most important are coarseness, contrast, direction and busyness. In this paper a study is made of these texture features. The proposed computational measures can be based upon two representations: the original image representation and the autocorrelation function (associated with the original image) representation.

Keywords: Texture features, Content-Based Image Retrieval, Computational measures

I. Introduction
Texture has been extensively studied and used in the literature since it plays a very important role in human visual perception. Texture refers to the spatial distribution of grey-levels and can be defined as the deterministic or random repetition of one or several primitives in an image. Texture analysis techniques can be divided into two main categories: spatial techniques and frequency-based techniques. Frequency-based methods include the Fourier transform and wavelet-based methods such as the Gabor model. Spatial texture analysis methods can be categorized as statistical, structural, or hybrid methods. It is widely admitted in the computer vision community that there is a set of textural features that human beings use to recognize and categorize textures. Such features include coarseness, contrast and directionality.

II. Autocorrelation function
The autocorrelation function [1], denoted f(δi, δj), for an image I of n×m dimensions is defined as

    f(δi, δj) = [1 / ((n−δi)(m−δj))] · Σ_{i=0}^{n−δi−1} Σ_{j=0}^{m−δj−1} I(i, j) · I(i+δi, j+δj)    (1)

For images containing repetitive texture patterns, the autocorrelation function exhibits periodic behavior with the same period as in the original image. For coarse textures, the autocorrelation function decreases slowly, whereas for fine textures it decreases rapidly. For images containing orientation(s), the autocorrelation function preserves the same orientation(s).

In this paper the test images are taken from the Brodatz database [2] and the autocorrelation function is applied on the original images shown in figure 1. Each original image is scaled into nine parts and the autocorrelation function is applied on one of the scaled images selected at random. The scaled images are shown in figure 2.

Fig. 1: Original images from the Brodatz database

Any one of the scaled images can be selected and the autocorrelation function applied on it. Figure 3 shows the randomly selected scaled image and its histogram.
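As a rough illustration of equation (1), the autocorrelation can be sketched in Python with NumPy. This is a minimal sketch, not code from the paper; the function and variable names are our own.

```python
import numpy as np

def autocorrelation(I, di, dj):
    """Normalized autocorrelation f(di, dj) of a greyscale image I (eq. 1)."""
    n, m = I.shape
    # Element-wise overlap of the image with itself shifted by (di, dj)
    overlap = I[:n - di, :m - dj] * I[di:, dj:]
    # Normalize by the size of the overlap region
    return overlap.sum() / ((n - di) * (m - dj))

# Tiny example: a constant image has a flat autocorrelation
img = np.ones((8, 8))
print(autocorrelation(img, 0, 0))  # 1.0
print(autocorrelation(img, 2, 3))  # 1.0
```

For a periodic texture, evaluating this function over a range of shifts (δi, δj) would exhibit the periodic behavior described above.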
After applying the autocorrelation function, the resulting image and its histogram are shown in figure 4.

Fig. 3: Randomly selected scaled image and its histogram

Fig. 4: Autocorrelation applied on the scaled image, and its histogram

III. Computational measures for textural features
Perceptual features have been largely used both in texture retrieval and content-based image retrieval (CBIR) [4] systems such as:
• the IBM QBIC system, which implements the Tamura features;
• CANDID, by Kelly et al. at Los Alamos National Laboratory, which uses Laws filters;
• Photobook, by Pentland et al. at MIT, which refers to the Wold texture decomposition model.

The general estimation process of computational measures simulating human visual perception is as follows [3]:
Step 1: The autocorrelation f(δi, δj) is computed on the image I(i, j).
Step 2: The convolution of the autocorrelation function and the gradient of the Gaussian function is computed in a separable way.
Step 3: Based on these two functions, computational measures for each perceptual feature are computed.

A. Coarseness
Coarseness is the most important feature; it is coarseness that determines the existence of texture in an image. Coarseness measures the size of the primitives that constitute the texture. A coarse texture is composed of large primitives and is characterized by a high degree of local uniformity of grey-levels. A fine texture is constituted by small primitives and is characterized by a high degree of local variation of grey-levels.

First, we compute the first derivative of the autocorrelation function in a separable way according to rows and columns, respectively. Two functions Cx(i, j) and Cy(i, j) are then obtained:

    Cx(i, j) = f(i, j) − f(i+1, j),   Cy(i, j) = f(i, j) − f(i, j+1)    (2)

Second, we compute the first derivatives of the obtained functions Cx(i, j) and Cy(i, j) in a separable way according to rows and columns. Two functions Cxx(i, j) and Cyy(i, j) are then obtained:

    Cxx(i, j) = Cx(i, j) − Cx(i+1, j),   Cyy(i, j) = Cy(i, j) − Cy(i, j+1)    (3)

To detect maxima, we use the following conditions (according to rows and columns, respectively):

    Cx(i, j) = 0 and Cxx(i, j) < 0    (4)
    Cy(i, j) = 0 and Cyy(i, j) < 0    (5)

Coarseness, denoted Cs, is estimated from the average number of maxima in the autocorrelation function: a coarse texture will have a small number of maxima and a fine texture will have a large number of maxima. Let Maxx(i, j) = 1 if pixel (i, j) is a maximum on rows and Maxx(i, j) = 0 otherwise; similarly, let Maxy(i, j) = 1 if pixel (i, j) is a maximum on columns and Maxy(i, j) = 0 otherwise. Coarseness can be expressed by the following equation:

    Cs = 1 / ( (1/2) · [ Σ_{i=0}^{n−1} Σ_{j=0}^{m−1} Maxx(i, j) / n + Σ_{j=0}^{m−1} Σ_{i=0}^{n−1} Maxy(i, j) / m ] )    (6)

The denominator gives the average number of maxima according to rows and columns: the higher the number of maxima, the lower the coarseness, and vice versa.

B. Contrast
Contrast measures the degree of clarity with which one can distinguish between different primitives in a texture. A well-contrasted image is one in which primitives are clearly visible and separable. Local contrast is commonly defined for each pixel as an estimate of the local variation in a neighborhood. More precisely, given a pixel p = (i, j) and a W×W neighborhood mask around the pixel, local contrast is computed as

    L_Ct(i, j) = [max_{p∈W×W}(p) − min_{p∈W×W}(p)] / [max_{p∈W×W}(p) + min_{p∈W×W}(p)]    (7)
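The coarseness computation of equations (2)-(6) can be sketched as follows, assuming the autocorrelation values are already available as a 2-D NumPy array. This is our own sketch: in particular, the tolerance `tol` for the zero test in conditions (4)-(5) is an assumption we add, since exact zeros rarely occur in floating point.

```python
import numpy as np

def coarseness(f, tol=1e-9):
    """Coarseness Cs (eq. 6) from a 2-D autocorrelation array f."""
    n, m = f.shape
    # Eq. (2): first differences along rows (Cx) and columns (Cy)
    Cx = f[:-1, :] - f[1:, :]
    Cy = f[:, :-1] - f[:, 1:]
    # Eq. (3): second differences
    Cxx = Cx[:-1, :] - Cx[1:, :]
    Cyy = Cy[:, :-1] - Cy[:, 1:]
    # Eqs. (4)-(5): maxima where the first difference vanishes
    # and the second difference is negative
    maxx = (np.abs(Cx[:-1, :]) < tol) & (Cxx < 0)
    maxy = (np.abs(Cy[:, :-1]) < tol) & (Cyy < 0)
    # Eq. (6): inverse of the average number of maxima per row and column
    avg_maxima = 0.5 * (maxx.sum() / n + maxy.sum() / m)
    return 1.0 / avg_maxima if avg_maxima > 0 else float("inf")

# A plateau along the rows yields one row of maxima and no column maxima
f = np.array([[0., 0., 0., 0.],
              [1., 1., 1., 1.],
              [1., 1., 1., 1.],
              [0., 0., 0., 0.]])
print(coarseness(f))  # 2.0
```

Fewer maxima (larger primitives) give a larger Cs, matching the intuition that coarse textures have few autocorrelation maxima.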
We propose to measure the global contrast as the arithmetic mean of all the local contrast values over the image:

    G_Ct = (1 / (n·m)) · Σ_{i=1}^{n} Σ_{j=1}^{m} L_Ct(i, j)    (8)

where n, m are the dimensions of the image.

Fig. 5: The least and the most contrasted textures in the VisTex database [5], with respect to the proposed measure

C. Directionality
Regarding directionality, we want to estimate two parameters: the dominant orientation(s) and the degree of directionality. Orientation refers to the global orientation of the primitives that constitute the texture. The degree of directionality is related to the visibility of the dominant orientation(s) in an image, and refers to the number of pixels having the dominant orientation(s).

Orientation estimation: When considering the autocorrelation function, one can notice two phenomena concerning orientation:
1. an orientation existing in the original image is preserved in the corresponding autocorrelation function;
2. using the autocorrelation function instead of the original image keeps the global orientation, rather than the local orientation obtained when the original image is used.

It follows that the global orientation of the image can be estimated by applying the gradient on the autocorrelation function of the original image, according to the lines (Cx) and according to the columns (Cy). The orientation θ is given by

    θ = arctan(Cy / Cx)    (9)

We consider only pixels with a significant orientation: a pixel is considered oriented if its gradient amplitude is greater than a certain threshold t.

Directionality estimation: For directionality, we consider the number of pixels having the dominant orientation(s) θd. Let θd(i, j) = 1 if pixel (i, j) has a dominant orientation θd and θd(i, j) = 0 otherwise. We consider only dominant orientations θd that are present in a sufficient number of pixels (more than a threshold t), so that the orientation becomes visible. Let Nθnd denote the number of non-oriented pixels. The degree of directionality Nθd of an image can then be expressed by the following equation:

    Nθd = [ Σ_{i=0}^{n−1} Σ_{j=0}^{m−1} θd(i, j) ] / (n×m − Nθnd)    (10)

The larger Nθd is, the more directional the image; the smaller Nθd is, the less directional the image.

IV. Experimental results
The computational perceptual features are used to retrieve images. Figure 6 shows the window used to select the query image; the query is chosen from among the available images.

Fig. 6: Window to select the query image

Figure 7 shows the retrieved images; the total number of retrieved images is nine. Precision is calculated as the number of relevant images retrieved divided by the total number of images retrieved, i.e. Precision = 6/9 = 0.667 in this example.

Fig. 7: Retrieved images
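The contrast measures of equations (7) and (8) can be sketched as follows. This is a minimal illustration under our own assumptions (window size W = 3, windows clipped at the image border, and a zero result when max + min = 0 to avoid division by zero); it is not the authors' implementation.

```python
import numpy as np

def local_contrast(I, i, j, w=3):
    """Local contrast L_Ct(i, j) over a w x w neighborhood (eq. 7)."""
    half = w // 2
    n, m = I.shape
    # Neighborhood clipped at the image border (our assumption)
    window = I[max(0, i - half):min(n, i + half + 1),
               max(0, j - half):min(m, j + half + 1)]
    hi, lo = float(window.max()), float(window.min())
    return (hi - lo) / (hi + lo) if hi + lo > 0 else 0.0

def global_contrast(I, w=3):
    """Global contrast G_Ct: mean of local contrasts over the image (eq. 8)."""
    n, m = I.shape
    total = sum(local_contrast(I, i, j, w) for i in range(n) for j in range(m))
    return total / (n * m)

# A constant image has zero contrast; a black/white checker is maximally contrasted
print(global_contrast(np.full((4, 4), 5.0)))           # 0.0
print(global_contrast(np.array([[0., 255.], [255., 0.]])))  # 1.0
```

Normalizing by max + min keeps L_Ct in [0, 1], so well-separated primitives push the global mean toward 1.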
V. Conclusion
An innovative method has been presented for content-based image retrieval using perceptual features of texture. The precision was calculated and shown experimentally to lie between 0.6 and 0.8, and this method of image retrieval is effective compared to existing methods.

References:
1. N. Abbadeni, D. Ziou, and S. Wang, "Autocovariance-based perceptual textural features corresponding to human visual perception," in Proceedings of the 15th International Conference on Pattern Recognition, Barcelona, Spain, September 3-8, 2000.
2. P. Brodatz, Textures: A Photographic Album for Artists and Designers. New York: Dover, 1966.
3. Noureddine Abbadeni, "Computational perceptual features for texture representation and retrieval," IEEE Transactions on Image Processing, Vol. 20, No. 1, January 2011.
4. Sara Sanam, Sudha Madhuri.M, "Perception based texture classification, representation and retrieval," International Journal of Computer Trends and Technology, Volume 3, Issue 1, 2012.
5. VisTex texture database: http://www.white.media.mit.edu/vismod/imagery/VisionTexture/vistex.html