ISSN: 2277 – 9043
                             International Journal of Advanced Research in Computer Science and Electronics Engineering
                                                                                            Volume 1, Issue 5, July 2012



    ANN Implementation for Classification of Noisy
     Numeral Corrupted By Salt and Pepper Noise
      Smita K. Chaudhari*
      Dept. of E & C Engg., SSGBCOET, Bhusawal
      smita.c20@gmail.com

      G. A. Kulkarni
      Dept. of E & C Engg., SSGBCOET, Bhusawal
      girish227252@rediffmail.com

Abstract— A Neural Network (NN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. Neural networks are known to provide good recognition rates in the presence of noise. Neural networks with various architectures and training algorithms have been applied successfully to letter and character recognition [1]. Numeral recognition is one of the artificial intelligence applications that provides an important foundation for various advanced applications, including information retrieval and human-computer interaction. Neural networks are also able to extract meaningful features of the digits, such as edges. Handwritten recognition is complex due to the large variation in handwriting styles, whereas printed character recognition is also difficult due to the increasing number of fonts. This paper uses a Hamming network to recognize noisy numerals. The proposed algorithm designs a system which associates every fundamental pattern with itself. That is, when presented with xi as input, the system should produce xi at the output. In addition, when presented with a noisy (corrupted) version of xi at the input, the system should also produce xi at the output. The recognition results for noisy numerals showed that the network could recognize normal numerals with 100% accuracy, and numerals corrupted with salt and pepper noise with an average accuracy of 89%.

  Index Terms— Character Recognition, Hamming Network, Noisy Numeral, Salt & Pepper Noise, Neural Network.

                        I. INTRODUCTION

   Recently, neural networks have become more popular as a technique to perform character recognition. It has been reported that neural networks can produce high recognition accuracy, and they are capable of providing good recognition in the presence of noise where other methods normally fail.
   An Artificial Neural Network (ANN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons; this is true of ANNs as well. Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyse.
   Neural networks take a different approach to problem solving than conventional computers. Conventional computers use an algorithmic approach, i.e. the computer follows a set of instructions in order to solve a problem. Unless the specific steps that the computer needs to follow are known, the computer cannot solve the problem. That restricts the problem-solving capability of conventional computers to problems that we already understand and know how to solve, but computers would be much more useful if they could do things that we don't exactly know how to do. Neural networks process information in a way similar to the human brain.
   Object recognition is the study of how machines can observe the environment, learn to distinguish patterns of interest and make reasonable decisions about the categories of the patterns. The performance of a machine may be better than the performance of a human in a noisy environment for two reasons: human performance degrades with an increasing number of targets, whereas the performance of a machine does not depend on the size of the target set, and the performance of a machine does not degrade due to fatigue caused by prolonged effort. A knowledge-based system is desirable for reliable, quick and accurate recognition of objects from noisy and partial input images [3].
   The McCulloch and Pitts model was utilized in the development of the first artificial neural network by Rosenblatt in 1959 [11]. This network was based on a unit called the perceptron, which produces an output scaled as 1 or -1 depending upon the weighted, linear combination of its inputs.
   An optical character recognition system for hand-printed numerals from noisy, low-resolution measurements uses a two-stage feature extraction process. In the first stage, a set of primary features insensitive to the quality and format of a black-and-white bit pattern is extracted. In the second stage, a set of properties capable of discriminating the character classes is derived from the primary features. The system is simple and reliable in that only three kinds of primary features need to be detected, and recognition is based on a decision tree which tests logic statements over the secondary features [12].
   The importance of using a hierarchical network is shown in the literature [16]. Seong-Whan Lee presents a new scheme for off-line recognition of totally unconstrained handwritten numerals using a simple multilayer cluster neural network trained with the back-propagation algorithm, which avoids the problem of finding local minima and improves the recognition rates [10].
   H. K. Kwan introduced multilayer recurrent neural networks in the form of 3-layer bidirectional symmetrical and asymmetrical associative memories. These networks possess the features of both a multilayer feedforward neural network and a bidirectional associative memory, and they have two modes of recall, namely recall by one pattern and recall by a pattern pair [12].
   Recognition of Noisy Numerals using Neural Network, by Mohd Yusoff Mashor and Siti Noraini Sulaiman, uses an MLP network trained with the Levenberg-Marquardt algorithm to recognise noisy numerals. The recognition results for the noisy numerals showed that the network could recognize normal numerals and blended numerals [15].
                 II. BACKGROUND & TERMINOLOGY.

   The Hamming network method was developed by the mathematician Richard W. Hamming, who made many contributions not only to mathematics but also to computer science and telecommunications [5]. He was also a founder and president of the Association for Computing Machinery. The Hamming network method was developed to solve pattern recognition problems that use a binary format, such as a matrix with only two possible values, 0 and 1. In the Hamming network there is a matrix which stores the patterns of all objects, called the prototype data matrix. The patterns are not learned by the system, but rather stored as matrix data. This matrix is used to define the output of the network. The objective of the Hamming network is to decide which prototype matrix is closest to the input matrix; it calculates the similarity between the prototype matrix of each object and the input.
   The network is designed explicitly to solve binary pattern recognition problems. It has both a feedforward and a recurrent layer, and the number of neurons in the first layer is the same as the number of neurons in the second layer. The objective of the Hamming network is to decide which prototype vector is closest to the input vector. This decision is indicated by the output of the recurrent layer: when the network converges, there is only one nonzero output, which indicates the prototype pattern that is closest to the input vector.

                      Fig. 1. Hamming Network

A. Feedforward Layer

   The feedforward layer calculates the correlation between each pattern of the prototype matrix and the input matrix (Figure 1). The calculation results are processed to generate the output neurons of this layer.
   As shown in Figure 1, the layer receives the input matrix p, which has dimension R x 1. This input matrix is multiplied by the weight matrix W1, which has dimension S x R. The net input of this layer (n1) is the sum of W1p and the bias input b. The weight matrix W1 is the matrix of the prototype data, which contains the patterns of all objects, and each element of the bias b is set to the number R. The transfer function used in this layer is the linear transfer function (purelin). This function does not change the value, so the output of the feedforward layer (a1) is given as: a1 = purelin(W1p + b1). The output neurons of this layer are used as the initial input for the recurrent layer.
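Because purelin is the identity, the feedforward pass is a single matrix operation. The MATLAB sketch below mirrors the formula above; the function name and the convention that each row of W1 holds one prototype pattern are illustrative choices, not taken from the paper.

```matlab
% Feedforward layer of the Hamming network (illustrative sketch).
% W1 : S x R prototype matrix, one stored pattern per row
% p  : R x 1 input column vector
function a1 = hamming_feedforward(W1, p)
    R  = size(W1, 2);               % number of input neurons
    b1 = R * ones(size(W1, 1), 1);  % each bias element equals R
    n1 = W1 * p + b1;               % net input of the layer
    a1 = n1;                        % purelin: output equals net input
end
```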


B. Recurrent Layer

   The recurrent layer is also called a competitive layer. In this layer there is a neuron for each prototype pattern. The neurons are initialized with the output neurons of the feedforward layer, which indicate the correlation between the prototype patterns and the input matrix.
   The neurons then compete with each other to determine a winner. When the process is finished, there is only one neuron with a nonzero output, and this neuron indicates the prototype pattern that is closest to the input.
   Processing in the recurrent layer is divided into iterations. When one iteration is finished, a function checks whether there is only one nonzero output; if so, the process in this layer is stopped and processing continues to generate the output. In the figure, D is the function that checks whether there is only one nonzero output. W2 is the weight matrix of this layer, with dimension S x S. The iteration number is denoted t, and it is incremented by one until the iteration stops. The activation function used is the positive linear transfer function (poslin), which is linear for positive values and zero for negative values.
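A minimal sketch of this competitive stage is given below. The paper does not spell out how W2 is built, so the usual Hamming-network choice (1 on the diagonal, a small negative value elsewhere) is assumed here.

```matlab
% Recurrent (competitive) layer of the Hamming network (illustrative sketch).
% a1      : S x 1 output of the feedforward layer, used as the initial state
% maxIter : safety limit on the number of iterations
function [winner, a2] = hamming_recurrent(a1, maxIter)
    S       = numel(a1);
    epsilon = 1 / (S + 1);                                  % assumed inhibition constant, < 1/(S-1)
    W2      = (1 + epsilon) * eye(S) - epsilon * ones(S);   % 1 on diagonal, -epsilon elsewhere
    a2      = a1;
    for t = 1:maxIter
        a2 = max(W2 * a2, 0);                               % poslin: zero for negative values
        if nnz(a2) <= 1                                     % stop once one nonzero output remains
            break;
        end
    end
    [~, winner] = max(a2);                                  % index of the winning prototype
end
```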
C. Salt and Pepper Noise

   Salt and pepper noise is an impulse type of noise, also referred to as intensity spikes, and is generally caused by errors in data transmission. It has only two possible values, a and b, and the probability of each is typically less than 0.1. The corrupted pixels are set alternately to the minimum or to the maximum value, giving the image a "salt and pepper" appearance, while unaffected pixels remain unchanged. For an 8-bit image, the typical value for pepper noise is 0 and for salt noise 255. Salt and pepper noise is generally caused by malfunctioning pixel elements in the camera sensor, faulty memory locations, or timing errors in the digitization process. The probability density function for this type of noise is shown in Figure 2, and an image corrupted with salt and pepper noise of density 0.05 is shown in Figure 3.

                Fig. 2. PDF for salt and pepper noise

                   Fig. 3. Salt & pepper noise
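The corruption mechanism can be written out directly. The sketch below is an illustrative re-implementation for binary (0/1) digit images; MATLAB's imnoise(I, 'salt & pepper', d) provides equivalent behaviour for intensity images.

```matlab
% Add salt & pepper noise of density d to a binary image with values 0 and 1.
% Roughly half of the corrupted pixels become pepper (0), the rest salt (1).
function J = add_salt_pepper(I, d)
    J = I;
    r = rand(size(I));          % one uniform random number per pixel
    J(r < d/2)          = 0;    % pepper: forced to the minimum value
    J(r >= d/2 & r < d) = 1;    % salt: forced to the maximum value
end
```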
                     III. LITERATURE SURVEY.

   Neural networks are also able to extract meaningful features of the digits, such as edges. The cascade correlation network was the least successful, possibly because the network was committing itself to poor results in training when few hidden units were present. It was found that an elaborate conjugate gradient minimization technique yielded little improvement in generalization
performance and resulted in a training time six times longer than that of ordinary backpropagation. The importance of using a hierarchical network is shown in the literature [9]. Seong-Whan Lee presents a new scheme for off-line recognition of totally unconstrained handwritten numerals using a simple multilayer cluster neural network trained with the back-propagation algorithm, which avoids the problem of finding local minima and improves the recognition rates [10].
   Le Cun et al. [19] achieved excellent results with a back-propagation network using size-normalized images as direct input. Their solution consists of a network architecture which is highly constrained and specifically designed for the task. There are four internal layers: two layers made of independent groups of feature extractors and two layers which perform averaging/sub-sampling. The last internal layer is fully connected to the ten-element output, but all other connections are local and use shared weights. In total, there are 4,635 units and 98,442 connections but only 2,578 independent parameters.
   A modified quadratic classifier based scheme was used for the recognition of off-line handwritten numerals of six popular Indian scripts, namely Devnagari, Bangla, Telugu, Oriya, Kannada and Tamil. The features used in the classifier are obtained from the directional information of the numerals. For feature computation, the bounding box of a numeral is segmented into blocks and the directional features are computed in each of the blocks. These blocks are then down-sampled by a Gaussian filter and the features obtained from the down-sampled blocks are fed to a modified quadratic classifier for recognition [20].
   Amit Choudhary analyzes the performance of the back-propagation feed-forward algorithm using various activation functions for the neurons of the hidden and output layers. For sample creation, 250 numerals were gathered from 35 people. After binarization, these numerals were clubbed together to form training patterns for the neural network. The network was trained to learn its behavior by adjusting the connection strengths at every iteration. The conjugate gradient descent of each presented training pattern was calculated to identify the minima on the error surface for each training pattern. Experiments were performed by selecting different combinations of two activation functions out of the three activation functions 'logsig', 'tansig' and 'purelin' for the neurons of the hidden and output layers, and the results revealed that the recognition accuracy of the neural network was optimum when the 'tansig'-'tansig' combination of activation functions was used for the neurons of the hidden and output layers [16].
   Handwritten numeral recognition plays a vital role in postal automation services, especially in countries like India where multiple languages and scripts are used. Because of the intermixing of these languages, it is very difficult to understand the script in which the pin code is written. The objective of that work is to resolve this problem through a multilayer feed-forward back-propagation algorithm using two hidden layers. The work has been tested on five popular Indian scripts, namely Devnagri, English, Urdu, Tamil and Telugu. The network was trained to learn its behavior by adjusting the connection strengths on every iteration. The resultant of each presented training pattern was calculated to identify the minima on the error surface for each training pattern. Experiments were performed on samples using two hidden layers, and as the number of hidden layers is increased, more accuracy is achieved in a large number of epochs [17].
   The recognition of machine-printed and handwritten numerals has been the subject of much attention in pattern recognition because of its many applications, such as bank check processing, interpretation of ID numbers, vehicle registration numbers and pin codes for mail sorting. Promising feature extraction methods have been identified in the literature for the recognition of characters and numerals of many different scripts. These include template matching, projection histograms, geometric moments, Zernike moments, contour profiles, Fourier descriptors, and unitary transforms. A brief review of these feature extraction methods is found in [21].
   H. K. Kwan introduced multilayer recurrent neural networks in the form of 3-layer bidirectional symmetrical and asymmetrical associative memories. These networks possess the features of both a multilayer feedforward neural network and a bidirectional associative memory, and they have two modes of recall, namely recall by one pattern and recall by a pattern pair [12].
   A system for the recognition of totally unconstrained handwritten numeral strings is built upon a number of components, namely a presegmentation module, an isolated numeral recognizer, a segmentation-free module and a merging module. Presegmentation consists in dividing the continuous numeral string image into groups of numerals, each of which represents an integer number of numerals. For each group, the actual number of numerals and their identity are then determined by a cascade of two recognition-based tests: isolated numeral and segmentation-free. The latter is able to recognize a numeral group of any length. All results from all groups are eventually merged, yielding the final interpretation of the input numeral string. The concept of a dummy symbol is used to overcome the problem of noisy parts that cannot be eliminated by standard filtering algorithms [13].

          IV. DESIGN AND IMPLEMENTATION OF THE SYSTEM.

   The system designed in this paper associates every fundamental pattern with itself. That is, when presented with xi as input, the system should produce xi at the output. In addition, when presented with a noisy (corrupted) version of xi at the input, the system should also produce xi at the output. The developed system takes a digit as input, processes it through the network, and generates the result. The digits used in the development are limited to the printed digits 1 to 9. The system has prototype data consisting of the patterns of the digits 1 to 9, and this prototype data is used as the weight matrix for the feedforward layer of the Hamming network. The system is built using MATLAB, and the images are prepared using Microsoft Paint.
   The image file type is bitmap (.bmp). The image is read and converted into a 64×64 matrix, which is then resized to an 8×8 matrix to reduce the computations. Since a two-dimensional input cannot be given to the neural network, it is converted to a 64×1 column vector, and this column vector is the prototype pattern. The system has a function that simulates the Hamming network: the function acts as the network and processes the input data to generate the output. The output neuron of the network indicates the result of the recognition process.
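The preprocessing chain described above can be sketched as follows. The file name and the helper name build_prototype are illustrative; the paper only specifies the bitmap format and the 64×64 → 8×8 → 64×1 conversion.

```matlab
% Build a 64 x 1 prototype vector from a bitmap digit image (illustrative sketch).
function p = build_prototype(filename)
    I  = imread(filename);          % e.g. a 64 x 64 .bmp drawn in Microsoft Paint
    I  = im2double(I(:, :, 1));     % keep one channel, values scaled to [0, 1]
    I8 = imresize(I, [8 8]);        % reduce to 8 x 8 to cut down the computations
    I8 = double(I8 > 0.5);          % re-binarize after resizing
    p  = reshape(I8, 64, 1);        % column vector used as the prototype pattern
end
```

Calling this function for each of the nine digit images and stacking the transposed vectors row by row yields the 9 x 64 weight matrix W1 described next.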
   In the feedforward layer, p is the input matrix: it is the matrix of the input image, whose size is 8×8, so the input p is a 64×1 matrix. The number R is 64, which is the number of input neurons, while S is the number of output neurons of this network, which is 9. The weight matrix W1 is generated from the prototype data; it takes the prototype data matrices of the 9 digits, so the weight matrix is a 9×64 matrix, and the bias b is a 9×1 matrix.
   In the recurrent layer there are 9 output neurons, which represent the number of digits.
   Salt & pepper noise of different densities is added to the image using the MATLAB function, and the noisy image is then processed and recognized by the designed system.
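Putting the pieces together, a driver of the following shape reproduces the kind of experiment summarized in Table 1. The noise densities are those listed in the table; the file names and the helper functions (build_prototype, hamming_feedforward, hamming_recurrent) are the illustrative ones sketched earlier, not names from the paper.

```matlab
% Recognize the nine printed digits under increasing salt & pepper noise
% (illustrative driver built from the sketch functions defined above).
digits    = 1:9;
densities = [0.01 0.05 0.1 0.5 1.0];
W1 = zeros(9, 64);
for d = digits
    W1(d, :) = build_prototype(sprintf('%d.bmp', d))';    % assumed file naming
end
for d = digits
    for dens = densities
        I      = im2double(imread(sprintf('%d.bmp', d)));
        J      = imnoise(I, 'salt & pepper', dens);        % MATLAB noise model
        p      = reshape(double(imresize(J(:, :, 1), [8 8]) > 0.5), 64, 1);
        a1     = hamming_feedforward(W1, p);
        winner = hamming_recurrent(a1, 100);
        fprintf('digit %d, density %.2f -> recognized as %d\n', d, dens, winner);
    end
end
```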
                    V. PERFORMANCE ANALYSIS.

   The task was to design a system which associates every fundamental pattern with itself. That is, when presented with xi as input, the system should produce xi at the output. In addition, when presented with a noisy (corrupted) version of xi at the input, the system should also produce xi at the output.
   Let the Hamming distance between two binary vectors x and y (of the same dimension) be denoted d(x, y). The design phase of the Hamming memory involves simply storing all the patterns of the fundamental memory set. In the recall phase, for a given input memory key x ∈ {0, 1}^N, the retrieved pattern is obtained as follows:
     (1) Compute the Hamming distances dk = d(x, xk), k = 1, 2, ..., m.
     (2) Select the minimum such distance dk = min{d1, d2, ..., dm}.
     (3) Output the fundamental memory y = xk (the closest match).
     (4) Input: storage patterns for the Hamming network.
     (5) Input prototype images for digits 1-9 in .bmp format.
     (6) Example: p = 64×64 matrix of the prototype input image of digit 1.
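For binary prototypes, steps (1)-(3) amount to a nearest-neighbour search under the Hamming metric, as in the sketch below; it restates the recall rule directly rather than simulating the two-layer network.

```matlab
% Recall by minimum Hamming distance (restatement of steps (1)-(3) above).
% X : m x N matrix whose rows are the stored binary fundamental memories
% x : 1 x N binary memory key
function [y, k] = hamming_recall(X, x)
    d      = sum(X ~= repmat(x, size(X, 1), 1), 2);  % dk = d(x, xk) for every k
    [~, k] = min(d);                                 % index of the closest prototype
    y      = X(k, :);                                % fundamental memory returned
end
```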
                      Fig. 4. Prototype Images

   The prototype data are scaled and displayed as an image so as to use the full colormap; colormap(gray) sets the current figure's colormap to gray. The values are in the range from 0 to 1. A colormap matrix may have any number of rows, but it must have exactly 3 columns. Each row is interpreted as a color, with the first element specifying the intensity of red light, the second green, and the third blue. Color intensity can be specified on the interval 0.0 to 1.0; for example, [0 0 0] is black, [1 1 1] is white, [1 0 0] is pure red, [.5 .5 .5] is gray, and [127/255 1 212/255] is aquamarine. The image matrix is then resized to an 8×8 matrix to reduce the computations, i.e. conversion and compression of the image. Example: p2 = imresize(p, [8, 8]).
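A short sketch of this display and compression step (p is assumed to be the 64×64 prototype matrix from the earlier example):

```matlab
% Display a prototype digit with the gray colormap and compress it to 8 x 8.
imagesc(p);               % scale the data and display it as an image
colormap(gray);           % set the current figure's colormap to gray
axis image off;           % square pixels, axes hidden
p2 = imresize(p, [8 8]);  % resize to 8 x 8 to reduce the computations
```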
Table 1: Classification efficiency / output digit for salt & pepper noise

    Input      Recognised output at noise density             %
    pattern    0.01    0.05    0.1     0.5     1           Accuracy

      1          1       1      1       1      4               80
      2          2       2      2       2      3               80
      3          3       3      3       3      1               80
      4          4       4      4       4      1               80
      5          5       5      5       5      4               80
      6          6       6      6       1      4               60
      7          7       7      7       7      9               80
      8          8       8      8       8      4               80
      9          9       9      9       9      4               80

   The classification efficiency of the system differs for each digit as the noise density is increased. It gives a maximum of 80% accuracy for numerals 1 to 5 and 7 to 9.

   Fig. 5. Reconstruction efficiency for salt & pepper noise (% accuracy versus input pattern)

   Reconstruction is done by the Hopfield network, which gives the maximum % accuracy for digit 1. Figure 5 shows the graph of % accuracy of reconstruction against the input pattern.

                         VI. CONCLUSION.

   Pattern recognition can be done with normal computers and with neural networks. Computers use conventional arithmetic algorithms to detect whether the given pattern matches an existing one; the answer is either yes or no, and noisy patterns are not tolerated. Neural networks, on the other hand, can tolerate noise and, if trained properly, will respond correctly to unknown patterns. Neural networks constructed with the proper architecture and trained correctly with good data give amazing results, not only in pattern recognition but also in other scientific and commercial applications.
   The Hamming model is used here for image pattern classification: the algorithm stores the prototype images in the model memory and later uses that memory to identify the stored patterns when a distorted input is presented to the model. The efficiency of both models varies according to the noise. The Hamming network could recognize input numerals corrupted with salt and pepper noise with an average accuracy of 89%. The developed system can be used in car plate recognition, and in future work alphabets can also be considered for recognition.

                           REFERENCES

[1]  M. T. Hagan, H. B. Demuth, M. Beale, "Neural Network Design", Thomson Learning / Vikas Publishing House, 1996.
[2]  R. C. Gonzalez, R. E. Woods, "Digital Image Processing", Pearson Education, Inc. and Dorling Kindersley Publishing, Inc., 2008.
[3]  S. Jayaraman, S. Esakkirajan, T. Veerakumar, "Digital Image Processing", Tata McGraw Hill Education, 2009.
[4]  Earl Gose, Richard Johnsonbaugh, Steve Jost, "Pattern Recognition and Image Analysis", Asoke K. Ghosh, Prentice Hall, 1997.
[5]  Anil K. Jain, Robert P. W. Duin, and Jianchang Mao, "Statistical Pattern Recognition: A Review", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 1, January 2000.
[6]  J. M. Zurada, "Introduction to Artificial Neural Systems", Jaico Publishing House, Mumbai, 2002.
[7]  The IEEE website. [Online]. Available: http://www.ieee.org/ "PDCA12-70 data sheet," Opto Speed SA, Mezzovico, Switzerland.
[8]  W. S. McCulloch and W. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity," Bulletin of Mathematical Biophysics, 5, 115-133, 1943.
[9]  Brion K. Dolenko and Howard C. Card, "Handwritten Digit Feature Extraction and Classification Using Neural Networks", CCECE, IEEE 0-7803-1443, 1993, pp. 88-91.
[10] Seong-Whan Lee, “off-line recognition of totally unconstrained
      handwritten numerals using a simple multilayer cluster neural
      network”, IEEE transactions on pattern analysis and machine
      intelligence, vol. 18, no. 6, June 1996, 648-652
[11] Rosenblatt, F. (1959), "The Perceptron: A Probabilistic Model for
      Information Storage and Organization in the Brain," Psychological
      Review 65:386-408.
[12] “Recognition of handprinted numerals by two-stage feature extraction”,
      IEEE transactions on systems science and cybernetics, April 1970
[13] Arnold Aribowo, Samuel Lukas, Handy, "Handwritten alphabet recognition using hamming network", Seminar Nasional Aplikasi Teknologi Informasi 2007 (SNATI 2007), ISSN: 1907-5022, Yogyakarta, 16 June 2007.
[14] Nobuhiko Ikeda, Paul Watta, Metin Artiklar and Mohamad H. Hassoun, "A Two-Level Hamming Network for High Performance Associative Memory".
[15] Mohd Yusoff Mashor and Siti Noraini Sulaiman, "Recognition of Noisy Numerals using Neural Network", Centre for Electronic Intelligent System (CELIS), School of Electrical and Electronic Engineering, Universiti Sains Malaysia Engineering Campus, 14300 Nibong Tebal, Pulau Pinang, Malaysia.
[16] Amit Choudhary, Rahul Rishi, Savita Ahlawat, “Performance
      Analysis of Feed Forward MLP with various Activation Functions
      for Handwritten Numerals Recognition” IEEE, Volume 5, 2010,pp
      852-856,
[17] Stuti Asthana, Farha Haneef, Rakesh K. Bhujade, "Handwritten Multiscript Numeral Recognition using Artificial Neural Networks", IJSCE, ISSN: 2231-2307, Volume-1, Issue-1, March 2011.
[18] Leah Bar, Nir Sochen, and Nahum Kiryati, "Image Deblurring in the Presence of Salt-and-Pepper Noise", IEEE Transactions on Image Processing, Vol. 16, No. 4, April 2007.
[19] Y. Le Cun, et al., "Constrained Neural Network for Unconstrained Handwritten Digit Recognition," Proc. of First Int. Workshop on Frontiers in Handwriting Recognition, Montreal, Canada, 1990, pp. 145-154.
 [20] U. Pal, T. Wakabayashi and F. Kimura, “Handwritten numeral
      recognition of six popular scripts,” Ninth International conference on
      Document Analysis and Recognition ICDAR 07, Vol.2, pp.749-753,
      2007.
[21] Øivind Due Trier, Anil K. Jain and Torfinn Taxt, "Feature Extraction Methods for Character Recognition - A Survey", Pattern Recognition, Volume 29, Issue 4, April 1996, pp. 641-662.

Smita K. Chaudhari is presently pursuing the M.E. degree in Electronics & Communication Engineering at SSGB College of Engg. & Technology, North Maharashtra University, Jalgaon, Maharashtra, India. She received the B.E. degree from Godavari College of Engg. & Technology, North Maharashtra University, Jalgaon. Her areas of interest are image processing, neural networks and VLSI.

G. A. Kulkarni is presently Associate Professor & Head of the Electronics & Communication Engg. Department, SSGB College of Engg. & Technology, affiliated to North Maharashtra University, Jalgaon, Maharashtra, India. He received the M.E. degree from Dr. B.A.M. University, Aurangabad, and is presently pursuing the Ph.D. degree at Dr. B.A.M. University, Aurangabad. His research interests include communication systems, electromagnetic engineering and neural networks.





Weitere ähnliche Inhalte

Was ist angesagt?

Compegence: Dr. Rajaram Kudli - An Introduction to Artificial Neural Network ...
Compegence: Dr. Rajaram Kudli - An Introduction to Artificial Neural Network ...Compegence: Dr. Rajaram Kudli - An Introduction to Artificial Neural Network ...
Compegence: Dr. Rajaram Kudli - An Introduction to Artificial Neural Network ...COMPEGENCE
 
Neural Network Research Projects Topics
Neural Network Research Projects TopicsNeural Network Research Projects Topics
Neural Network Research Projects TopicsMatlab Simulation
 
Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural networkImtiaz Siddique
 
Neural networks of artificial intelligence
Neural networks of artificial  intelligenceNeural networks of artificial  intelligence
Neural networks of artificial intelligencealldesign
 
artificial neural network
artificial neural networkartificial neural network
artificial neural networkPallavi Yadav
 
Artifical Neural Network and its applications
Artifical Neural Network and its applicationsArtifical Neural Network and its applications
Artifical Neural Network and its applicationsSangeeta Tiwari
 
Artificial Neural Network Abstract
Artificial Neural Network AbstractArtificial Neural Network Abstract
Artificial Neural Network AbstractAnjali Agrawal
 
neural networks
 neural networks neural networks
neural networksjoshiblog
 
Artificial Neural Network and its Applications
Artificial Neural Network and its ApplicationsArtificial Neural Network and its Applications
Artificial Neural Network and its Applicationsshritosh kumar
 
Soft Computing-173101
Soft Computing-173101Soft Computing-173101
Soft Computing-173101AMIT KUMAR
 
Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural networknainabhatt2
 
Unit I & II in Principles of Soft computing
Unit I & II in Principles of Soft computing Unit I & II in Principles of Soft computing
Unit I & II in Principles of Soft computing Sivagowry Shathesh
 
Neural network
Neural network Neural network
Neural network Faireen
 
Basics of Deep learning
Basics of Deep learningBasics of Deep learning
Basics of Deep learningRamesh Kumar
 

Was ist angesagt? (20)

Compegence: Dr. Rajaram Kudli - An Introduction to Artificial Neural Network ...
Compegence: Dr. Rajaram Kudli - An Introduction to Artificial Neural Network ...Compegence: Dr. Rajaram Kudli - An Introduction to Artificial Neural Network ...
Compegence: Dr. Rajaram Kudli - An Introduction to Artificial Neural Network ...
 
Neural Network Research Projects Topics
Neural Network Research Projects TopicsNeural Network Research Projects Topics
Neural Network Research Projects Topics
 
Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural network
 
Neural networks of artificial intelligence
Neural networks of artificial  intelligenceNeural networks of artificial  intelligence
Neural networks of artificial intelligence
 
Deep learning
Deep learning Deep learning
Deep learning
 
artificial neural network
artificial neural networkartificial neural network
artificial neural network
 
Project Report -Vaibhav
Project Report -VaibhavProject Report -Vaibhav
Project Report -Vaibhav
 
Artifical Neural Network and its applications
Artifical Neural Network and its applicationsArtifical Neural Network and its applications
Artifical Neural Network and its applications
 
Data Parallel Deep Learning
Data Parallel Deep LearningData Parallel Deep Learning
Data Parallel Deep Learning
 
Artificial Neural Network Abstract
Artificial Neural Network AbstractArtificial Neural Network Abstract
Artificial Neural Network Abstract
 
neural networks
 neural networks neural networks
neural networks
 
Neural networks
Neural networksNeural networks
Neural networks
 
Artificial Neural Network and its Applications
Artificial Neural Network and its ApplicationsArtificial Neural Network and its Applications
Artificial Neural Network and its Applications
 
Soft Computing-173101
Soft Computing-173101Soft Computing-173101
Soft Computing-173101
 
Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural network
 
Unit I & II in Principles of Soft computing
Unit I & II in Principles of Soft computing Unit I & II in Principles of Soft computing
Unit I & II in Principles of Soft computing
 
Deep learning
Deep learningDeep learning
Deep learning
 
Neural network
Neural network Neural network
Neural network
 
Artificial Neural Network.pptx
Artificial Neural Network.pptxArtificial Neural Network.pptx
Artificial Neural Network.pptx
 
Basics of Deep learning
Basics of Deep learningBasics of Deep learning
Basics of Deep learning
 

Ähnlich wie ANN Implementation for Classifying Noisy Numerals

Artificial neural networks
Artificial neural networksArtificial neural networks
Artificial neural networksjsharath
 
Neural networking this is about neural networks
Neural networking this is about neural networksNeural networking this is about neural networks
Neural networking this is about neural networksv02527031
 
Handwritten digit and symbol recognition using CNN.pptx
Handwritten digit and symbol recognition using CNN.pptxHandwritten digit and symbol recognition using CNN.pptx
Handwritten digit and symbol recognition using CNN.pptxYasmin297583
 
Artificial Neural Network: A brief study
Artificial Neural Network: A brief studyArtificial Neural Network: A brief study
Artificial Neural Network: A brief studyIRJET Journal
 
Artificial Neural networks for e-NOSE
Artificial Neural networks for e-NOSEArtificial Neural networks for e-NOSE
Artificial Neural networks for e-NOSEMercy Martina
 
Review of Deep Neural Network Detectors in SM MIMO System
Review of Deep Neural Network Detectors in SM MIMO SystemReview of Deep Neural Network Detectors in SM MIMO System
Review of Deep Neural Network Detectors in SM MIMO Systemijtsrd
 
Tamil Character Recognition based on Back Propagation Neural Networks
Tamil Character Recognition based on Back Propagation Neural NetworksTamil Character Recognition based on Back Propagation Neural Networks
Tamil Character Recognition based on Back Propagation Neural NetworksDR.P.S.JAGADEESH KUMAR
 
Neural Networks and Elixir
Neural Networks and ElixirNeural Networks and Elixir
Neural Networks and Elixirbgmarx
 
Pattern Recognition using Artificial Neural Network
Pattern Recognition using Artificial Neural NetworkPattern Recognition using Artificial Neural Network
Pattern Recognition using Artificial Neural NetworkEditor IJCATR
 
Nature Inspired Reasoning Applied in Semantic Web
Nature Inspired Reasoning Applied in Semantic WebNature Inspired Reasoning Applied in Semantic Web
Nature Inspired Reasoning Applied in Semantic Webguestecf0af
 
IRJET- Survey on Text Error Detection using Deep Learning
IRJET-  	  Survey on Text Error Detection using Deep LearningIRJET-  	  Survey on Text Error Detection using Deep Learning
IRJET- Survey on Text Error Detection using Deep LearningIRJET Journal
 

Ähnlich wie ANN Implementation for Classifying Noisy Numerals (20)

Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural network
 
Kn2518431847
Kn2518431847Kn2518431847
Kn2518431847
 
Kn2518431847
Kn2518431847Kn2518431847
Kn2518431847
 
40120140507007
4012014050700740120140507007
40120140507007
 
40120140507007
4012014050700740120140507007
40120140507007
 
ANN - UNIT 1.pptx
ANN - UNIT 1.pptxANN - UNIT 1.pptx
ANN - UNIT 1.pptx
 
Artificial neural networks
Artificial neural networksArtificial neural networks
Artificial neural networks
 
Artificial Neural Networking
Artificial Neural Networking Artificial Neural Networking
Artificial Neural Networking
 
Neural networking this is about neural networks
Neural networking this is about neural networksNeural networking this is about neural networks
Neural networking this is about neural networks
 
Handwritten digit and symbol recognition using CNN.pptx
Handwritten digit and symbol recognition using CNN.pptxHandwritten digit and symbol recognition using CNN.pptx
Handwritten digit and symbol recognition using CNN.pptx
 
Artificial Neural Network: A brief study
Artificial Neural Network: A brief studyArtificial Neural Network: A brief study
Artificial Neural Network: A brief study
 
Artificial Neural networks for e-NOSE
Artificial Neural networks for e-NOSEArtificial Neural networks for e-NOSE
Artificial Neural networks for e-NOSE
 
Review of Deep Neural Network Detectors in SM MIMO System
Review of Deep Neural Network Detectors in SM MIMO SystemReview of Deep Neural Network Detectors in SM MIMO System
Review of Deep Neural Network Detectors in SM MIMO System
 
Tamil Character Recognition based on Back Propagation Neural Networks
Tamil Character Recognition based on Back Propagation Neural NetworksTamil Character Recognition based on Back Propagation Neural Networks
Tamil Character Recognition based on Back Propagation Neural Networks
 
Neural Networks and Elixir
Neural Networks and ElixirNeural Networks and Elixir
Neural Networks and Elixir
 
Pattern Recognition using Artificial Neural Network
Pattern Recognition using Artificial Neural NetworkPattern Recognition using Artificial Neural Network
Pattern Recognition using Artificial Neural Network
 
Nature Inspired Reasoning Applied in Semantic Web
Nature Inspired Reasoning Applied in Semantic WebNature Inspired Reasoning Applied in Semantic Web
Nature Inspired Reasoning Applied in Semantic Web
 
Neural network
Neural networkNeural network
Neural network
 
IRJET- Survey on Text Error Detection using Deep Learning
IRJET-  	  Survey on Text Error Detection using Deep LearningIRJET-  	  Survey on Text Error Detection using Deep Learning
IRJET- Survey on Text Error Detection using Deep Learning
 
Deep learning
Deep learning Deep learning
Deep learning
 

Mehr von Ijarcsee Journal (20)

130 133
130 133130 133
130 133
 
122 129
122 129122 129
122 129
 
116 121
116 121116 121
116 121
 
109 115
109 115109 115
109 115
 
104 108
104 108104 108
104 108
 
99 103
99 10399 103
99 103
 
93 98
93 9893 98
93 98
 
88 92
88 9288 92
88 92
 
82 87
82 8782 87
82 87
 
78 81
78 8178 81
78 81
 
73 77
73 7773 77
73 77
 
65 72
65 7265 72
65 72
 
58 64
58 6458 64
58 64
 
52 57
52 5752 57
52 57
 
46 51
46 5146 51
46 51
 
41 45
41 4541 45
41 45
 
36 40
36 4036 40
36 40
 
28 35
28 3528 35
28 35
 
24 27
24 2724 27
24 27
 
19 23
19 2319 23
19 23
 

Kürzlich hochgeladen

The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESSALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESmohitsingh558521
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxLoriGlavin3
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .Alan Dix
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
Generative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersGenerative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersRaghuram Pandurangan
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxLoriGlavin3
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxLoriGlavin3
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Sample pptx for embedding into website for demo
Sample pptx for embedding into website for demoSample pptx for embedding into website for demo
Sample pptx for embedding into website for demoHarshalMandlekar2
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxLoriGlavin3
 
ANN Implementation for Classifying Noisy Numerals

The recognition results for the noisy numerals showed that the network could recognize normal numerals with 100% accuracy, and numerals corrupted by salt and pepper noise with an average accuracy of 89%.

Index Terms - Character Recognition, Hamming Network, Noisy Numeral, Salt & Pepper Noise, Neural Network.

I. INTRODUCTION

Recently, neural networks have become popular as a technique for character recognition, and it has been reported that they can produce high recognition accuracy. Neural networks are capable of providing good recognition in the presence of noise, where other methods normally fail.

An Artificial Neural Network (ANN) is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons; this is true of ANNs as well. With their remarkable ability to derive meaning from complicated or imprecise data, neural networks can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques.

A knowledge-based system is desirable for reliable, quick and accurate recognition of objects from noisy and partial input images [3].
The McCulloch and Pitts model was utilized in the development of the first artificial neural network by Rosenblatt in 1959 [11]. This network was based on a unit called the perceptron, which produces an output of 1 or -1 depending upon a weighted, linear combination of its inputs.

The optical character recognition system for hand-printed numerals from noisy, low-resolution measurements described in [12] uses a two-stage feature extraction process. In the first stage, a set of primary features insensitive to the quality and format of the black-and-white bit pattern is extracted. In the second stage, a set of properties capable of discriminating the character classes is derived from the primary features. The system is simple and reliable in that only three kinds of primary features need to be detected, and recognition is based on a decision tree that tests logic statements over the secondary features [12].

The importance of using a hierarchical network is shown in the literature [16]. Seong-Whan Lee presents a scheme for off-line recognition of totally unconstrained handwritten numerals using a simple multilayer cluster neural network trained with the back-propagation algorithm, which avoids the problem of local minima and improves the recognition rate [10].

H. K. Kwan introduced multilayer recurrent neural networks in the form of 3-layer bidirectional symmetrical and asymmetrical associative memories. These networks possess the features of both a multilayer feedforward neural network and a bidirectional associative memory, and they support two modes of recall: recall by a single pattern and recall by a pattern pair [12].

In "Recognition of Noisy Numerals using Neural Network", Mohd Yusoff Mashor and Siti Noraini Sulaiman use an MLP network trained with the Levenberg-Marquardt algorithm to recognise noisy numerals. Their results showed that the network could recognize normal numerals as well as blended numerals [15].
II. BACKGROUND & TERMINOLOGY

The Hamming network method was developed by the mathematician Richard W. Hamming, who contributed not only to mathematics but also to computer science and telecommunication [5], and who was a founder and president of the Association for Computing Machinery. The Hamming network is intended for pattern recognition problems posed in binary format, i.e. over matrices with only two possible values, 0 and 1. In the Hamming network there is a matrix that stores the patterns of all objects, called the prototype data matrix. These patterns are not learned by the system; they are simply stored as matrix data and used to define the output of the network. The objective of the Hamming network is to decide which prototype matrix is closest to the input matrix, which it does by calculating the similarity between each stored prototype and the input.

The network is designed explicitly to solve binary pattern recognition problems and has both a feedforward layer and a recurrent layer, with the same number of neurons in each. The decision is indicated by the output of the recurrent layer: when the network converges, there is only one nonzero output, and it indicates the prototype pattern that is closest to the input vector.

Fig. 1. Hamming Network

A. Feedforward Layer

The feedforward layer calculates the correlation between each pattern of the prototype matrix and the input matrix (Figure 1), and the results are used to generate the output neurons of this layer. As shown in Figure 1, the layer receives the input matrix p, of dimension R x 1. This input is multiplied by the weight matrix W1, of dimension S x R. The net input of this layer, n1, is the sum of W1p and the bias input b. The weight matrix W1 is the prototype data matrix containing the patterns of all objects, and each element of the bias b is set to R. The transfer function used in this layer is the linear transfer function (purelin), which does not change the value, so the output of the feedforward layer is

    a1 = purelin(W1p + b1)

The output neurons of this layer are used as the initial input to the recurrent layer.

B. Recurrent Layer

The recurrent layer is also called a competitive layer. In this layer there is one neuron for each prototype pattern. The neurons are initialized with the outputs of the feedforward layer, which indicate the correlation between the prototype patterns and the input matrix, and they then compete with each other to determine a winner. When the process finishes there is only one neuron with a nonzero output, and this neuron indicates the prototype pattern that is closest to the input.

The process in the recurrent layer is divided into iterations. When one iteration finishes, a function checks whether there is only one nonzero output; if so, the process in this layer stops and the network proceeds to generate the output. In Figure 1, D denotes this check. W2 is the weight matrix of this layer, with dimension S x S, and the iteration counter t is incremented by one until the iteration stops. The activation function used is the positive linear transfer function (poslin), which is linear for positive values and zero for negative values.
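The two layers described above can be written in a few lines of MATLAB. The following is a minimal sketch rather than the paper's actual code: the function name, the lateral inhibition weight epsilon and the iteration cap are assumptions (the paper does not specify the off-diagonal entries of W2), while the bias value R and the purelin/poslin transfer functions follow the description above. The function would be saved as hammingRecognize.m.

    % Minimal Hamming-network sketch (illustrative; the function name, epsilon
    % and the iteration cap are assumptions, not taken from the paper).
    % W1 : S x R prototype matrix, one stored binary pattern per row
    % p  : R x 1 binary input column vector
    function k = hammingRecognize(W1, p)
        [S, R] = size(W1);

        % Feedforward layer: a1 = purelin(W1*p + b1), with every bias element = R
        b1 = R * ones(S, 1);
        a1 = W1 * p + b1;                 % purelin is the identity, so no change

        % Recurrent (competitive) layer: diagonal of W2 is 1, off-diagonal
        % entries are a small negative value -epsilon with 0 < epsilon < 1/(S-1)
        epsilon = 1 / (2 * (S - 1));
        W2 = eye(S) - epsilon * (ones(S) - eye(S));

        a2 = a1;
        for t = 1:100                      % safety cap on the number of iterations
            a2 = max(W2 * a2, 0);          % poslin: linear for positive values, zero otherwise
            if nnz(a2) <= 1                % stop when only one neuron is still active
                break;
            end
        end

        [~, k] = max(a2);                  % index of the winning prototype pattern
    end

With nine stored digits (S = 9), any epsilon below 1/8 keeps the competition stable, and the loop normally converges in far fewer than 100 iterations.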
C. Salt and Pepper Noise

Salt and pepper noise is an impulse type of noise, also referred to as intensity spikes, and is generally caused by errors in data transmission. It has only two possible values, a and b, and the probability of each is typically less than 0.1. The corrupted pixels are set alternately to the minimum or to the maximum value, giving the image a "salt and pepper" appearance, while unaffected pixels remain unchanged. For an 8-bit image, the typical value for pepper noise is 0 and for salt noise 255. Salt and pepper noise is generally caused by malfunctioning pixel elements in camera sensors, faulty memory locations, or timing errors in the digitization process. The probability density function for this type of noise is shown in Figure 2, and an image corrupted with salt and pepper noise of density 0.05 is shown in Figure 3.

Fig. 2. PDF for salt and pepper noise

Fig. 3. Salt & pepper noise
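As a hedged illustration of how such noise can be added in MATLAB (the paper only states that a MATLAB function is used; the file name and the density value below are assumptions), either the Image Processing Toolbox call or a direct corruption of random pixels will do:

    % Corrupt a grayscale image with salt & pepper noise (illustrative sketch).
    I = im2uint8(imread('digit1.bmp'));        % file name is an assumption; force 8-bit data
    J = imnoise(I, 'salt & pepper', 0.05);     % toolbox call, noise density 0.05

    % Equivalent manual version: each pixel is corrupted with probability d and,
    % if corrupted, is set to 0 (pepper) or 255 (salt) with equal probability.
    d    = 0.05;
    K    = I;
    hit  = rand(size(I)) < d;
    salt = rand(size(I)) < 0.5;
    K(hit & salt)  = 255;                      % salt: maximum value of an 8-bit image
    K(hit & ~salt) = 0;                        % pepper: minimum value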
III. LITERATURE SURVEY

The neural networks were also able to extract meaningful features of the digits, such as edges. The cascade correlation network was the least successful, possibly because the network was committing itself to poor results early in training, when few hidden units were present. It was found that an elaborate conjugate gradient minimization technique yielded little improvement in generalization performance and resulted in a training time six times longer than ordinary backpropagation.

The importance of using a hierarchical network is shown in the literature [9]. Seong-Whan Lee presents a new scheme for off-line recognition of totally unconstrained handwritten numerals using a simple multilayer cluster neural network trained with the back-propagation algorithm, which avoids the problem of local minima and improves the recognition rates [10].

Le Cun et al. [19] achieved excellent results with a back-propagation network using size-normalized images as direct input. Their solution consists of a network architecture which is highly constrained and specifically designed for the task. There are four internal layers: two layers made of independent groups of feature extractors and two layers which perform averaging/sub-sampling. The last internal layer is fully connected to the ten-element output, but all other connections are local and use shared weights. In total there are 4,635 units and 98,442 connections, but only 2,578 independent parameters.

A modified quadratic classifier based scheme was used for the recognition of off-line handwritten numerals of six popular Indian scripts: Devnagari, Bangla, Telugu, Oriya, Kannada and Tamil. The features used in the classifier are obtained from the directional information of the numerals. For feature computation, the bounding box of a numeral is segmented into blocks and the directional features are computed in each block. These blocks are then down-sampled by a Gaussian filter, and the features obtained from the down-sampled blocks are fed to a modified quadratic classifier for recognition [20].

Amit Choudhary analyzes the performance of the back-propagation feed-forward algorithm using different activation functions for the neurons of the hidden and output layers. For sample creation, 250 numerals were gathered from 35 people. After binarization, these numerals were clubbed together to form training patterns for the neural network. The network was trained to learn its behavior by adjusting the connection strengths at every iteration, and the conjugate gradient descent of each presented training pattern was calculated to identify the minima on the error surface. Experiments were performed by selecting different combinations of two activation functions out of the three activation functions 'logsig', 'tansig' and 'purelin' for the neurons of the hidden and output layers, and the results revealed that the recognition accuracy of the neural network was optimum when the 'tansig'-'tansig' combination was used [16].
Handwritten numeral recognition plays a vital role in postal automation services, especially in countries like India where multiple languages and scripts are used. Because of the intermixing of these languages, it is very difficult to identify the script in which a pin code is written. The objective of [17] is to resolve this problem through a multilayer feed-forward back-propagation algorithm using two hidden layers, tested on five popular Indian scripts: Devnagri, English, Urdu, Tamil and Telugu. The network was trained to learn its behavior by adjusting the connection strengths on every iteration, and the resultant of each presented training pattern was calculated to identify the minima on the error surface. Experiments performed on the samples show that as the number of hidden layers is increased, more accuracy is achieved, at the cost of a larger number of epochs [17].

The recognition of machine-printed and handwritten numerals has received much attention in pattern recognition because of its many applications, such as bank check processing, interpretation of ID numbers, vehicle registration numbers and pin codes for mail sorting. Promising feature extraction methods have been identified in the literature for recognition of characters and numerals of many different scripts. These include template matching, projection histograms, geometric moments, Zernike moments, contour profiles, Fourier descriptors, and unitary transforms; a brief review of these feature extraction methods is found in [21].

A system for the recognition of totally unconstrained handwritten numeral strings is built upon a number of components: a presegmentation module, an isolated numeral recognizer, a segmentation-free module and a merging module. Presegmentation consists in dividing the continuous numeral string image into groups of numerals, each of which represents an integer number of numerals. For each group, the actual number of numerals and their identity are then determined by a cascade of two recognition-based tests, isolated numeral and segmentation-free; the latter is able to recognize a numeral group of any length. The results from all groups are eventually merged, yielding the final interpretation of the input numeral string. The concept of a dummy symbol is introduced in order to overcome the problem of noisy parts that cannot be eliminated by standard filtering algorithms [13].

IV. DESIGN AND IMPLEMENTATION OF THE SYSTEM

The system designed in this paper associates every fundamental pattern with itself. That is, when presented with xi as input, the system should produce xi at the output, and when presented with a noisy (corrupted) version of xi at the input, the system should also produce xi at the output. The developed system takes a digit as input, processes it through the network, and generates the result. The digits used in the development are limited to printed digits from 1 to 9. The system has prototype data consisting of the patterns of the digits 1 to 9, and this prototype data is used as the weight matrix in the feedforward layer of the Hamming network. The system is built using MATLAB and the images are prepared using Microsoft Paint.

The image files are bitmaps (.bmp). An image is read and converted into a 64x64 matrix, which is then reduced to an 8x8 matrix to cut down the computations. Since a two-dimensional input cannot be given to the neural network, the 8x8 matrix is converted to a 64x1 column vector, and this column vector is the prototype pattern. The system has a function that simulates the Hamming network: it acts as the network, processes the input data and generates the output, and the output neuron of the network indicates the result of the recognition process.

In the feedforward layer, p is the input matrix. It is formed from the 8x8 input image, so the input p is a 64x1 vector. The number of input neurons R is therefore 64, while the number of output neurons S is 9. The weight matrix W1 is generated from the prototype data of the 9 digits, so it is a 9x64 matrix, and the bias b is a 9x1 vector.
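A short sketch of the preprocessing chain described above, from a bitmap file to the 64x1 prototype vector; the file name, the grayscale conversion and the 0.5 binarisation threshold are assumptions, since the paper only states the matrix sizes:

    % Build one 64x1 prototype (or test) vector from a bitmap image (sketch).
    I = imread('digit1.bmp');            % file name is an assumption
    if size(I, 3) == 3
        I = rgb2gray(I);                 % make sure the image is single-channel
    end
    I = im2uint8(I);                     % work in uint8 regardless of the bmp depth
    I = imresize(I, [64 64]);            % normalise to the 64x64 working size
    I = imresize(I, [8 8]);              % compress to 8x8 to reduce computation
    B = im2bw(I, 0.5);                   % binarise: two possible values, 0 and 1
    p = double(reshape(B, [], 1));       % 64x1 column vector used by the network

Stacking the transposed vectors of the nine prototype digits row by row then gives the 9x64 weight matrix W1 described above.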
In the recurrent layer there are 9 output neurons, one for each digit.

Salt & pepper noise of different densities is added to the image using a MATLAB function, and the noisy image is then processed and recognized by the designed system.
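Putting these pieces together, the experiment can be sketched as the loop below. It reuses the illustrative helpers from the earlier sketches (hammingRecognize, and the preprocessing steps wrapped here as a hypothetical digitToVector function), assumes the 9x64 prototype matrix W1 is already in the workspace, and assumes a digitN.bmp file naming scheme; none of these names come from the paper.

    % Illustrative experiment loop: corrupt each printed digit with salt & pepper
    % noise of a given density and check whether the Hamming network recovers it.
    density = 0.05;
    correct = 0;
    for digit = 1:9
        I = imread(sprintf('digit%d.bmp', digit));   % assumed file naming scheme
        J = imnoise(im2uint8(I), 'salt & pepper', density);
        p = digitToVector(J);                        % 64x1 vector, as sketched above
        k = hammingRecognize(W1, p);                 % index of the winning prototype
        correct = correct + (k == digit);
    end
    accuracy = 100 * correct / 9;                    % percentage accuracy at this density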
V. PERFORMANCE ANALYSIS

The task was to design a system which associates every fundamental pattern with itself. That is, when presented with xi as input, the system should produce xi at the output, and when presented with a noisy (corrupted) version of xi at the input, the system should also produce xi at the output.

Let the Hamming distance between two binary vectors x and y (of the same dimension) be denoted d(x, y). The design phase of the Hamming memory involves simply storing all the patterns of the fundamental memory set. In the recall phase, for a given input memory key x ∈ {0, 1}^N, the retrieved pattern is obtained as follows:

(1) Compute the Hamming distances dk = d(x, xk), k = 1, 2, ..., m.
(2) Select the minimum such distance dk = min {d1, d2, ..., dm}.
(3) Output the fundamental memory y = xk (the closest match).
(4) Input: storage patterns for the Hamming network.
(5) Input prototype images for digits 1-9 in .bmp format.
(6) Example: p = 64x64 matrix of the prototype input image of digit 1.
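Steps (1) to (3) can also be written directly, without iterating the recurrent layer; a small sketch (the variable names are illustrative):

    % Distance-based recall: pick the stored pattern closest to the input key.
    % X : m x N matrix of stored binary (0/1) patterns, one fundamental memory per row
    % x : 1 x N binary input memory key
    d = sum(abs(X - repmat(x, size(X, 1), 1)), 2);   % Hamming distance to each pattern
    [~, k] = min(d);                                 % index of the closest stored pattern
    y = X(k, :);                                     % retrieved fundamental memory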
Fig. 4. Prototype images

To display a prototype, the data is scaled and displayed as an image using the full colormap. colormap(gray) sets the current figure's colormap to gray; the values are in the range from 0 to 1. A colormap matrix may have any number of rows, but it must have exactly 3 columns. Each row is interpreted as a color, with the first element specifying the intensity of red light, the second green, and the third blue. Color intensity can be specified on the interval 0.0 to 1.0; for example, [0 0 0] is black, [1 1 1] is white, [1 0 0] is pure red, [.5 .5 .5] is gray, and [127/255 1 212/255] is aquamarine. The image matrix is then resized to an 8x8 matrix to reduce the computations, i.e. conversion and compression of the image, for example p2 = resize(p, [8, 8]).

Table 1: Classification efficiency / output digit for salt & pepper noise

    Input      Recognised output at noise density          %
    Pattern    0.01   0.05   0.1   0.5    1            Accuracy
      1          1      1     1     1     4               80
      2          2      2     2     2     3               80
      3          3      3     3     3     1               80
      4          4      4     4     4     1               80
      5          5      5     5     5     4               80
      6          6      6     6     1     4               60
      7          7      7     7     7     9               80
      8          8      8     8     8     4               80
      9          9      9     9     9     4               80

The classification efficiency of the system differs for each digit as the noise density increases; it reaches a maximum of 80% accuracy for numerals 1 to 5 and 7 to 9.

Fig. 5. Reconstruction efficiency for salt & pepper noise (% accuracy versus input pattern)

Reconstruction is done by a Hopfield network, which gives the maximum % accuracy for digit 1; Figure 5 shows the graph of reconstruction % accuracy against input pattern.

VI. CONCLUSION

Pattern recognition can be done in normal computers and in neural networks. Computers use conventional arithmetic algorithms to detect whether a given pattern matches an existing one; they answer either yes or no and do not tolerate noisy patterns. Neural networks, on the other hand, can tolerate noise and, if trained properly, will respond correctly to unknown patterns. Neural networks constructed with the proper architecture and trained correctly with good data give excellent results, not only in pattern recognition but also in other scientific and commercial applications.

The Hamming model is used here for image pattern classification: the algorithm supplies the prototype images to the model memory and later uses that memory to identify the stored patterns when a distorted input is presented to the model. The efficiency of both models varies according to the noise; the Hamming network could recognize input numerals corrupted with salt and pepper noise with an average accuracy of 89%. The developed system can be used in car plate recognition, and in future work alphabets can be considered for recognition.
REFERENCES

[1] Hagan M. T., Demuth H. B., Beale M., "Neural Network Design", Thomson Learning / Vikas Publishing House, 1996.
[2] R. C. Gonzalez, R. E. Woods, "Digital Image Processing", Pearson Education, Inc. and Dorling Kindersley Publishing, Inc., 2008.
[3] S. Jayaraman, S. Esakkirajan, T. Veerakumar, "Digital Image Processing", Tata McGraw Hill Education, 2009.
[4] Earl Gose, Richard Johnsonbaugh, Steve Jost, "Pattern Recognition and Image Analysis", Asoke K. Ghosh, Prentice Hall, 1997.
[5] Anil K. Jain, Robert P. W. Duin, and Jianchang Mao, "Statistical Pattern Recognition: A Review", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 1, January 2000.
[6] Zurada, J. M., "Introduction to Artificial Neural Systems", Jaico Publishing House, Mumbai, 2002.
[7] The IEEE website. [Online]. Available: http://www.ieee.org/
[8] McCulloch, W. S., and Pitts, W., "A Logical Calculus of the Ideas Immanent in Nervous Activity," Bulletin of Mathematical Biophysics, 5, pp. 115-133, 1943.
[9] Brion Dolenko and Howard C. Card, "Handwritten Digit Feature Extraction and Classification Using Neural Networks", CCECE, IEEE 0-7803-1443, 1993, pp. 88-91.
[10] Seong-Whan Lee, "Off-line Recognition of Totally Unconstrained Handwritten Numerals Using a Simple Multilayer Cluster Neural Network", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 18, No. 6, June 1996, pp. 648-652.
[11] Rosenblatt, F., "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain," Psychological Review, 65, pp. 386-408, 1959.
[12] "Recognition of Handprinted Numerals by Two-Stage Feature Extraction", IEEE Transactions on Systems Science and Cybernetics, April 1970.
[13] Arnold Aribowo, Samuel Lukas, Handy, "Handwritten Alphabet Recognition Using Hamming Network", Seminar Nasional Aplikasi Teknologi Informasi 2007 (SNATI 2007), ISSN: 1907-5022, Yogyakarta, 16 June 2007.
[14] Nobuhiko Ikeda, Paul Watta, Metin Artiklar and Mohamad H. Hassoun, "A Two-Level Hamming Network for High Performance Associative Memory".
[15] Mohd Yusoff Mashor and Siti Noraini Sulaiman, "Recognition of Noisy Numerals using Neural Network", Centre for Electronic Intelligent Systems (CELIS), School of Electrical and Electronic Engineering, Universiti Sains Malaysia Engineering Campus, 14300 Nibong Tebal, Pulau Pinang, Malaysia.
[16] Amit Choudhary, Rahul Rishi, Savita Ahlawat, "Performance Analysis of Feed Forward MLP with Various Activation Functions for Handwritten Numerals Recognition", IEEE, Volume 5, 2010, pp. 852-856.
[17] Stuti Asthana, Farha Haneef, Rakesh K. Bhujade, "Handwritten Multiscript Numeral Recognition using Artificial Neural Networks", IJSCE, ISSN 2231-2307, Volume 1, Issue 1, March 2011.
[18] Leah Bar, Nir Sochen, and Nahum Kiryati, "Image Deblurring in the Presence of Salt-and-Pepper Noise", IEEE Transactions on Image Processing, Vol. 16, No. 4, April 2007.
[19] Y. Le Cun, et al., "Constrained Neural Network for Unconstrained Handwritten Digit Recognition," Proc. of the First Int. Workshop on Frontiers in Handwriting Recognition, Montreal, Canada, 1990, pp. 145-154.
[20] U. Pal, T. Wakabayashi and F. Kimura, "Handwritten Numeral Recognition of Six Popular Scripts," Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), Vol. 2, pp. 749-753, 2007.
[21] Øivind Due Trier, Anil K. Jain and Torfinn Taxt, "Feature Extraction Methods for Character Recognition - A Survey", Pattern Recognition, Volume 29, Issue 4, April 1996, pp. 641-662.

Smita K. Chaudhari is presently pursuing the ME in Electronics & Communication Engineering at SSGB College of Engg. & Technology, North Maharashtra University, Jalgaon, Maharashtra, India. She received the BE degree from Godavari College of Engg. & Technology, North Maharashtra University, Jalgaon. Her areas of interest are image processing, neural networks and VLSI.

G. A. Kulkarni is presently Associate Professor & Head of the Electronics & Communication Engg. Department, SSGB College of Engg. & Technology, affiliated to North Maharashtra University, Jalgaon, Maharashtra, India. He received the ME degree from Dr. B.A.M. University, Aurangabad, and is presently pursuing the PhD degree at Dr. B.A.M. University, Aurangabad. His research interests include communication systems, electromagnetic engineering and neural networks.