OPTIMIZED FINGERPRINT COMPRESSION WITHOUT LOSS OF DATA
ABSTRACT:
The most popular biometric used to authenticate a person is the fingerprint, which is
unique and permanent throughout a person’s life. Minutia matching is widely used for
fingerprint recognition; minutiae can be classified as ridge endings and ridge bifurcations. In this
paper we propose Fingerprint Recognition using the Minutia Score Matching method
(FRMSM). For fingerprint thinning, a Block Filter is used, which scans the image at the
boundary to preserve image quality, and the minutiae are then extracted from the thinned
image. The false matching ratio is better than that of the existing algorithm.
Biometric systems operate on behavioral and physiological biometric data to
identify a person. The behavioral biometric parameters are signature, gait, speech and
keystroke; these parameters change with age and environment. Physiological
characteristics such as the face, fingerprint, palm print and iris, however, remain unchanged
throughout a person’s lifetime. A biometric system operates in verification mode or
identification mode, depending on the requirements of the application. Verification mode
validates a person’s identity by comparing captured biometric data against a previously
stored template. Identification mode recognizes a person’s identity by performing matches
against multiple fingerprint biometric templates. Fingerprints have been widely used in daily
life for more than 100 years due to their feasibility, distinctiveness, permanence, accuracy,
reliability and acceptability. A fingerprint is a pattern of ridges, furrows and minutiae,
extracted using an inked impression on paper or a sensor. A good-quality fingerprint contains
25 to 80 minutiae, depending on sensor resolution and finger placement on the sensor.
Existing system:
1. A method was developed for enhancing the ridge pattern by a process of oriented
diffusion, adapting anisotropic diffusion to smooth the image in the direction parallel to the
ridge flow.
2. The image intensity then varies smoothly as one traverses along the ridges or valleys:
most of the small irregularities and breaks are removed, while the identity of the individual
ridges and valleys is preserved.
3. A method was proposed for fingerprint verification that uses both minutiae and a
model-based orientation field.
4. The orientation field provides robust discriminatory information beyond minutiae points.
Fingerprint matching is performed by combining the decisions of matchers based on the
orientation field and on minutiae.
Proposed system:
1. A method is proposed for fingerprint matching based on line extraction and
graph-matching principles, adopting a hybrid scheme that consists of a genetic-algorithm
phase and a local-search phase.
2. Experimental results demonstrate the robustness of the algorithm.
3. A method was proposed for estimating a four-direction orientation field in four steps:
i) preprocessing the fingerprint image, ii) determining the primary ridge of each fingerprint
block using a pulse-coupled neural network, iii) estimating the block direction from the
projective distance variance of a ridge, instead of a full block, and iv) correcting the
estimated orientation field to obtain principal curves for an automatic fingerprint
identification system.
4. From the principal curves, a minutiae-extraction algorithm is used to extract the minutiae
of the fingerprint. Experimental results show that the curves obtained from the graph
algorithm are smoother than those from the thinning algorithm. A method was also developed
for minutiae-based fingerprint matching that approaches the problem as two-class pattern
recognition.
5. The feature vector obtained by minutiae matching is classified as genuine or impostor by a
Support Vector Machine, yielding a remarkable performance improvement. A method was
proposed to overcome non-linear distortion using the Local Relative Error Descriptor (LRLED).
6. The algorithm consists of three steps: i) a pairwise alignment method to achieve fingerprint
alignment, ii) obtaining a matched-minutiae pair set with a threshold to reduce non-matches,
and finally iii) an LRLED-based similarity measure.
7. LRLED is good at distinguishing between corresponding and non-corresponding minutiae
pairs and works well for fingerprint minutiae matching.
MODULES USED:
• Authentication
• Image Capturing
• Fingerprint matching
• Fingerprint Binarization
• Performance Evaluation
MODULES EXPLANATION:
Authentication:
In this module, authentication is provided to the user. Once the user logs in, the system
checks whether the user is an authenticated person. If the user is valid, the user is allowed to
proceed further. Authentication is mainly used to protect the documents from third parties.
Image Capturing:
Advancements in sensor technology have led to many new and novel imaging
sensors. However, images captured by these sophisticated sensors still suffer from systematic
noise components such as PRNU (photo-response non-uniformity) noise. These imperfections
affect the light sensitivity of each individual pixel and form a constant noise pattern. Since
every image captured by the same sensor exhibits the same pattern, PRNU noise can be used
as a fingerprint of the sensor. To determine whether an image was captured by a given
imaging device, a fingerprint is extracted from the individual image by the same denoising
procedure used to obtain the sensor fingerprint.
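The residual-averaging idea above can be sketched as follows. This is a minimal illustration under our own assumptions, not the actual system: a simple box-blur stands in for the wavelet-based denoiser typically used in PRNU work, and the function names are hypothetical.

```python
import numpy as np

def denoise(img, k=3):
    """Box-blur denoiser: a simple stand-in (an assumption here)
    for the wavelet denoiser usually used to isolate PRNU noise."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def prnu_fingerprint(images):
    """Average the noise residuals of several images from the same
    sensor; scene content averages out, the fixed pattern survives."""
    residuals = [img.astype(float) - denoise(img) for img in images]
    fp = np.mean(residuals, axis=0)
    return fp - fp.mean()               # zero-mean fingerprint

def correlate(a, b):
    """Normalized correlation used to match a residual to a fingerprint."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

A probe image's residual should correlate noticeably with the fingerprint of the sensor that captured it, and not with an unrelated pattern.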
Fingerprint matching:
To improve the efficiency of sensor-fingerprint matching in large databases, we propose in
this paper to apply an information-reduction operation and represent sensor fingerprints in a
quantized form. Ideally, we would like to obtain a representation that is as compact as
possible. We therefore focus on binary quantization: essentially, we use only each element’s
sign and disregard magnitude information completely.
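Sign-only quantization can be sketched as below (a hypothetical illustration with synthetic data; for jointly Gaussian fingerprints the correlation of the signs is a monotone function of the real-valued correlation, which is why the sign alone preserves most of the matching information):

```python
import numpy as np

def binarize(fp):
    """Keep only the sign of each fingerprint element (+1/-1),
    discarding magnitude information entirely."""
    return np.where(fp >= 0, 1.0, -1.0)

def ncc(a, b):
    """Normalized cross-correlation used to compare fingerprints."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

With correlated fingerprints, the correlation of the sign-quantized versions tracks the real-valued correlation closely enough to support a match/no-match decision.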
Fingerprint Binarization:
Binarization of sensor fingerprints is an effective method that offers considerable
storage gains and complexity reduction without a significant loss in fingerprint matching
accuracy. We examine the performance of matching with binary fingerprints and the gains
obtained in storage and computational requirements. We propose to create a compact
representation of fingerprints through quantization. Although many different quantization
strategies are possible, we focus on the most severe form of quantization: quantizing every
element of the sensor fingerprint into a single bit.
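The single-bit quantization can be made concrete with bit packing; the 64x storage figure follows from replacing one 8-byte (64-bit) float per element with a single bit. The helper names below are our own, hypothetical ones:

```python
import numpy as np

def pack_fingerprint(fp):
    """Quantize each element to one bit (its sign) and pack 8 bits per
    byte: a float64 fingerprint shrinks by a factor of 64."""
    return np.packbits(fp.ravel() >= 0)

def bit_similarity(p, q, n):
    """Fraction of agreeing bits between two packed fingerprints,
    computed with a cheap XOR instead of floating-point correlation."""
    diff = np.unpackbits(np.bitwise_xor(p, q))[:n]
    return 1.0 - diff.sum() / n
```

Matching then reduces to XOR and a popcount over the packed bytes, which is also where the speedups in loading and computation come from.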
Performance Evalution:
The goal is to improve on existing matching methods by addressing practical concerns
such as I/O and storage requirements and computation time, while still maintaining
acceptable matching accuracy. Analysis and experiments were conducted to determine the
change in performance due to the loss of information caused by binarization. Since each
8-byte (64-bit) floating-point element is reduced to a single bit, a 64-fold improvement in
storage and memory operations is possible. Our experiments, involving actual fingerprints,
showed that we can achieve a 64-times reduction in storage requirements, a 21-times speedup
in loading to memory, and nine-times faster computation with our unoptimized
implementations.
Architecture diagram:
LITERATURE SURVEY
1. J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, “Robust face recognition
via sparse representation,”
INTRODUCTION
We consider the problem of automatically recognizing human faces from frontal views
with varying expression and illumination, as well as occlusion and disguise. We cast the
recognition problem as one of classifying among multiple linear regression models and argue
that new theory from sparse signal representation offers the key to addressing this problem.
Based on a sparse representation computed by ℓ1-minimization, we propose a general
classification algorithm for (image-based) object recognition. This new framework provides
new insights into two crucial issues in face recognition: feature extraction and robustness to
occlusion. For feature extraction, we show that if sparsity in the recognition problem is
properly harnessed, the choice of features is no longer critical.
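The SRC idea sketched in this paper (solve an ℓ1-minimization for a sparse code over the training samples, then assign the class whose samples best reconstruct the test vector) can be illustrated as follows. This is a toy sketch under our own assumptions: a small random dictionary, plain ISTA as the ℓ1 solver, and hypothetical function names.

```python
import numpy as np

def ista(A, y, lam=0.01, steps=500):
    """Plain ISTA for min 0.5*||Ax - y||^2 + lam*||x||_1, standing in
    for the l1 solver used by sparse-representation classification."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - A.T @ (A @ x - y) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

def src_classify(A, labels, y):
    """Classify y as the class whose training columns give the smallest
    reconstruction residual under the sparse code."""
    x = ista(A, y)
    best, best_res = None, np.inf
    for c in sorted(set(labels)):
        xc = np.where(np.asarray(labels) == c, x, 0.0)   # keep class-c coefficients
        res = np.linalg.norm(y - A @ xc)
        if res < best_res:
            best, best_res = c, res
    return best
```

A test sample drawn from one class's columns is reconstructed almost entirely by that class's coefficients, so its residual singles out the right class.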
PROBLEM STATEMENT
What is critical, however, is whether the number of features is sufficiently large and
whether the sparse representation is correctly computed. Unconventional features such as
downsampled images and random projections perform just as well as conventional features
such as Eigenfaces and Laplacianfaces, as long as the dimension of the feature space
surpasses a certain threshold predicted by the theory of sparse representation. This framework
can handle errors due to occlusion and corruption uniformly by exploiting the fact that these
errors are often sparse with respect to the standard (pixel) basis. The theory of sparse
representation helps predict how much occlusion the recognition algorithm can handle and
how to choose the training images to maximize robustness to occlusion. We conduct
extensive experiments on publicly available databases to verify the efficacy of the proposed
algorithm and corroborate the above claims.
2. E. Candes and Y. Plan, “Near-ideal model selection by ℓ1 minimization,”
INTRODUCTION
We consider the fundamental problem of estimating the mean of a vector y = Xβ + z,
where X is an n × p design matrix in which one can have far more variables than
observations, and z is a stochastic error term—the so-called “p > n” setup. When β is sparse,
or, more generally, when there is a sparse subset of covariates providing a close
approximation to the unknown mean vector, we ask whether or not it is possible to accurately
estimate Xβ using a computationally tractable algorithm. We show that, in a surprisingly
wide range of situations, the lasso happens to nearly select the best subset of variables.
Interestingly, our results describe the average performance of the lasso; that is, the
performance one can expect in the vast majority of cases where Xβ is a sparse or nearly
sparse superposition of variables, but not in all cases.
PROBLEM STATEMENT
On the one hand, these examples show that, even with highly incoherent matrices, one
cannot expect good performance in all cases unless the sparsity level is very small. On the
other hand, one cannot really eliminate our assumption about coherence, since we have
shown that, with coherent matrices, the lasso would fail to work well on generically sparse
objects. One could of course consider other statistical descriptions of sparse β’s and/or ideal
models; we leave this issue open for further research.
3. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for
linear inverse problems”,
INTRODUCTION
This class of methods, which can be viewed as an extension of the classical gradient
algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale
problems even with dense matrix data. However, such methods are also known to converge
quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm
(FISTA) which preserves the computational simplicity of ISTA but with a global rate of
convergence which is proven to be significantly better, both theoretically and practically.
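A compact sketch of the FISTA iteration for the ℓ1-regularized least-squares problem min 0.5||Ax − y||² + λ||x||₁ (our own minimal implementation, not the authors' code):

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, y, lam, steps=500):
    """FISTA for min 0.5*||Ax - y||^2 + lam*||x||_1: an ISTA step taken
    from an extrapolated point z, which lifts ISTA's O(1/k) rate to O(1/k^2)."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(steps):
        x_new = soft(z - A.T @ (A @ z - y) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum/extrapolation step
        x, t = x_new, t_new
    return x
```

On a small synthetic compressed-sensing instance (a 5-sparse signal observed through a 50×100 Gaussian matrix, an assumed setup for illustration), this recovers the signal up to the small bias introduced by λ.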
PROBLEM STATEMENT
Initial promising numerical results for wavelet-based image deblurring demonstrate the
capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.
These preliminary computational results indicate that FISTA is a simple and promising
iterative scheme, which can converge even faster than the predicted theoretical rate. Its
potential for analyzing and designing faster algorithms in other application areas and with
other types of regularizers, as well as a more thorough computational study, are topics for
future research.
4. J. Yang, J. Wright, T. Huang, and Y. Ma, “Image super-resolution as sparse
representation of raw image patches,”
INTRODUCTION
In this paper, we focus on the problem of recovering the super-resolution version of a
given low-resolution image. Although our method can be readily extended to handle multiple
input images, we mostly deal with a single input image. Like the aforementioned
learning-based methods, we rely on patches from example images. Our method does not
require any learning on the high-resolution patches, instead working directly with the
low-resolution training patches or their features.
PROBLEM STATEMENT
However, one of the most important questions for future investigation is to determine, in
terms of the within-category variation, the number of raw sample patches required to generate
a dictionary satisfying the sparse representation prior. Tighter connections to the theory of
compressed sensing may also yield conditions on the appropriate patch size or feature
dimension. From a more practical standpoint, it would be desirable to have a way of
effectively combining dictionaries to work with images containing multiple types of textures
or multiple object categories. One approach to this would integrate supervised image
segmentation and super-resolution, applying the appropriate dictionary within each segment.
5. O. Bryt and M. Elad, “Compression of facial images using the K-SVD algorithm,”
INTRODUCTION
Compression of still images is a very active and mature field of research, vivid in both
the research and engineering communities. Compression of images is possible because of
their vast spatial redundancy and the ability to absorb moderate errors in the reconstructed
image. This field offers many contributions, some of which have become standard algorithms
that are widespread and popular. Among the many methods for image compression, one of
the best is the JPEG2000 standard, a general-purpose wavelet-based image compression
algorithm with very good compression performance.
PROBLEM STATEMENT
In this paper we present a facial image compression method based on sparse and
redundant representations and the K-SVD dictionary-learning algorithm. The proposed
compression method is tested at various bit rates and options, and compared with several
known compression techniques with great success. Results on the importance of redundancy
in the deployed dictionaries are presented. The contribution of this work has several facets:
first, while sparse and redundant representations and learned dictionaries have been shown to
be effective in various image processing problems, their role in compression has been less
explored, and this work provides the first evidence of their success in this arena as well.
Second, the proposed scheme is very practical and could be the foundation for systems that
use large databases of face images. Third, among the various ways to imitate VQ and yet
remain practical, the proposed method stands as an interesting option that should be further
explored. As for future work, we are currently exploring several extensions of this activity,
such as reducing or eliminating the troubling blockiness effects due to the slicing into
patches, generalization to compression of color images, and adopting the ideas in this work
for compression of fingerprint images. The horizon and ultimate goal, in this respect, is a
successful harnessing of the presented methodology for general images, in a way that
surpasses JPEG2000 performance; we believe that this is achievable.