ABSTRACT
The face, being the primary focus of attention in social interaction, plays a major role
in conveying identity. A facial recognition system is a computer application for
automatically identifying or verifying a person from a digital image. The main aim of
this topic is to analyse the method of Principal Component Analysis (PCA) and its
performance when applied to face recognition. The algorithm creates a subspace
(face space) in which faces are represented using a reduced number of features called
feature vectors. Experimental results that follow show that PCA based methods
provide effective face recognition with reasonably low error rates. Principal Component
Analysis (PCA) is a classic feature extraction and data representation technique
widely used in the areas of pattern recognition and computer vision. The purpose of
PCA is to reduce a large-dimensional data space to a smaller-dimensional
feature space. The approach is based on the concept of eigenfaces: it can locate and
track a subject’s face, and then recognize the person by comparing the characteristics
of the face to those of known individuals. The algorithm treats the face recognition
problem under the assumption that faces are upright, and their characteristic features are
used for the calculation. PCA is a good technique for face recognition as it is able to
identify faces fairly well under varying illumination.
CERTIFICATE
Certified that the seminar work entitled ‘PCA BASED FACE RECOGNITION
SYSTEM’ is a bonafide work done by, SIKHA DASH bearing Reg.No:
1241019069 in partial fulfillment of the requirement for the award of the
degree of Bachelor of Technology in Electrical Instrumentation & Control
Engineering of the S’O’A University during the session 2012-2016. It is certified
that all corrections/suggestions indicated for the assessment have been
incorporated in this report. This Seminar has been approved as it satisfies the
academic requirement of the Seminar work prescribed for the Bachelor of
Technology.
SEMINAR CO-ORDINATOR
HOD, EICE
Date:
Place: Bhubaneswar
ACKNOWLEDGEMENT
I am extremely grateful to Dr. Niranjan Nayak, H.O.D of the EI & CE Dept., for
giving me his consent to carry out the seminar. I would like to thank the teaching
staff of the EI & CE Department for their active involvement in the entire process.
Last but not least, I take this opportunity to thank my parents and
friends for their constant support and co-operation.
Sikha Dash
(Reg no:1241019069)
CONTENTS
CHAPTER NO. TOPIC PAGE NO.
1 Abstract 1
2 Introduction 5
3 Overview of the system 6
A. PCA 9
B. Technical details of PCA 9
4 Applications 14
5 Advantage and Disadvantage of PCA 15
6 Features of PCA 16
7 Conclusion 17
8 References 18
INTRODUCTION
Principal Component Analysis (PCA) is the general name for a technique which uses
sophisticated underlying mathematical principles to transform a number of possibly
correlated variables into a smaller number of variables called principal components. The
origins of PCA lie in multivariate data analysis; however, it has a wide range of other
applications, as we will show in due course. PCA has been called 'one of the most important results
from applied linear algebra', and perhaps its most common use is as the first step in trying to
analyze large data sets. Some of the other common applications include de-noising signals,
blind source separation, and data compression.
In general terms, PCA uses a vector space transform to reduce the dimensionality of
large data sets. Using mathematical projection, the original data set, which may have
involved many variables, can often be interpreted in just a few variables (the principal
components). It is therefore often the case that an examination of the reduced-dimension data
set will allow the user to spot trends, patterns and outliers in the data far more easily than
would have been possible without performing the principal component analysis.
The aim of this essay is to explain the theoretical side of PCA, and to provide
examples of its application. We will begin with a non-rigorous motivational example from
multivariate data analysis in which we attempt to extract some meaning from a 17-dimensional
data set. After this motivational example, we shall discuss the PCA
technique in terms of its linear algebra fundamentals. This will lead us to a method for
implementing PCA for real-world data, and we will see that there is a close connection
between PCA and the singular value decomposition (SVD) from numerical linear algebra. We
will then look at two further examples of PCA in practice: image compression and blind
source separation.
1. OVERVIEW
Principal component analysis (PCA) has been called one of the most valuable results
from applied linear algebra. PCA is used abundantly in all forms of analysis - from
neuroscience to computer graphics - because it is a simple, non-parametric method of
extracting relevant information from confusing data sets. With minimal additional effort PCA
provides a roadmap for how to reduce a complex data set to a lower dimension to reveal the
sometimes hidden, simplified dynamics that often underlie it. The goal of this tutorial is to
provide both an intuitive feel for PCA, and a thorough discussion of this topic. We will begin
with a simple example and provide an intuitive explanation of the goal of PCA. We will
continue by adding mathematical rigor to place it within the framework of linear algebra and
explicitly solve this problem. We will see how and why PCA is intimately related to the
mathematical technique of singular value decomposition (SVD). This understanding will lead
us to a prescription for how to apply PCA in the real world. We will discuss both the
assumptions behind this technique and possible extensions to overcome its
limitations.
The discussion and explanations in this report are informal in the spirit of a tutorial. The
goal of this report is to educate.
2. Motivation: A Toy Example
Suppose we are trying to understand some phenomenon by measuring various quantities
(e.g. spectra, voltages, velocities, etc.) in a system. Unfortunately, we cannot figure out what is
happening because the data appear clouded, unclear and even redundant. This is not a trivial
problem, but rather a fundamental obstacle to experimental science. Examples abound in
complex systems such as neuroscience, photo science, meteorology and oceanography: the
number of variables to measure can be unwieldy and at times even deceptive, because the
underlying dynamics can often be quite simple. Take for example a simple toy problem from
physics, diagrammed in Figure 1. Pretend we are studying the motion of the physicist's ideal
spring.
This system consists of a ball of mass m attached to a massless, frictionless spring. The ball is
released a small distance away from equilibrium (i.e. the spring is stretched). Because the
spring is "ideal," it oscillates indefinitely along the x-axis about its equilibrium at a set
frequency. This is a standard problem in physics in which the motion along the x
direction is solved by an explicit function of time. In other words, the underlying dynamics
can be expressed as a function of a single variable x.

Figure 1: A diagram of the toy example.

However, as ignorant experimenters we do not know any of this. We do not even know which
axes and dimensions are important to measure, let alone how many. Thus, we decide to measure
the ball's position in three-dimensional space (since we live in a three-dimensional world).
Specifically, we place three movie cameras around our system of interest. At 200 Hz, each movie
camera records an image indicating a two-dimensional position of the ball (a projection).
Unfortunately, because of our ignorance, we do not even know what the real "x", "y" and "z" axes
are, so we choose three camera axes at some arbitrary angles with respect to the system. The angles
between our measurements might not even be 90°! Now we record with the cameras for 2
minutes. The big question remains: how do we get from this data set to a simple equation in
x? We know a priori that if we were smart experimenters, we would have just measured the
position along the x-axis with one camera. But this is not what happens in the real world. We
often do not know which measurements best reflect the dynamics of the system in question.
Furthermore, we sometimes record more dimensions than we actually need! Also, we have to
deal with that pesky, real-world problem of noise. In the toy example this means that we need
to deal with air, imperfect cameras, or even friction in a less-than-ideal spring. Noise
contaminates our data set, serving only to obfuscate the dynamics further. This toy example is
the challenge experimenters face every day. We will refer to this example as we delve further
into abstract concepts. Hopefully, by the end of this paper we will have a good understanding
of how to systematically extract x using principal component analysis.
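The following short Python sketch is not part of the original report; it simply makes the toy example concrete under assumed camera angles, noise level and oscillation frequency. It simulates three noisy two-dimensional camera views of a one-dimensional oscillation, stacks them into a 6 x n data matrix, and applies PCA to show that a single dominant direction captures the dynamics.

import numpy as np

rng = np.random.default_rng(0)

# Underlying dynamics: a 1-D oscillation along the (unknown) x-axis.
t = np.arange(0, 120, 1 / 200)           # 2 minutes sampled at 200 Hz
x = np.cos(2 * np.pi * 0.5 * t)          # ideal spring motion (assumed frequency)

# Each camera sees a 2-D projection of the ball at an arbitrary angle,
# plus measurement noise (angles and noise level are assumptions).
def camera_view(x, angle, noise=0.05):
    clean = np.vstack([np.cos(angle) * x, np.sin(angle) * x])
    return clean + noise * rng.standard_normal(clean.shape)

X = np.vstack([camera_view(x, a) for a in (0.3, 1.1, 2.0)])   # 6 x n data matrix

# PCA: centre each row, diagonalise the covariance matrix.
Xc = X - X.mean(axis=1, keepdims=True)
C = Xc @ Xc.T / (Xc.shape[1] - 1)
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
print(eigvals[order])            # one dominant eigenvalue: the motion is essentially 1-D
p1 = eigvecs[:, order[0]]        # first principal direction in the 6-D measurement space
x_recovered = p1 @ Xc            # projecting onto it recovers the oscillation (up to sign/scale)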
3. Framework: Change of Basis
The Goal: Principal component analysis computes the most meaningful basis to re-
express a noisy, garbled data set. The hope is that this new basis will filter out the noise and
reveal hidden dynamics. In the example of the spring, the explicit goal of PCA is to
determine: “the dynamics are along the x-axis.” In other words, the goal of PCA is to
determine that x̂, the unit basis vector along the x-axis, is the important dimension.
Determining this fact allows an experimenter to discern which dynamics are important and
which are just redundant.
i. A Naive Basis
With a more precise definition of our goal, we need a more precise definition of our data
as well. For each time sample (or experimental trial), an experimenter records a set of
data consisting of multiple measurements (e.g. voltage, position, etc.). The number of
measurement types is the dimension of the data set. In the case of the spring, this data set
has 12,000 6-dimensional vectors, where each camera contributes a 2-dimensional
projection of the ball's position. In general, each data sample is a vector in m-dimensional
space, where m is the number of measurement types. Equivalently, every time sample is a
vector that lies in an m-dimensional vector space spanned by an orthonormal basis. All
measurement vectors in this space are a linear combination of this set of unit-length basis
vectors. A naive and simple choice of a basis B is the identity matrix I, where each row is
a basis vector b_i with m components. To summarize, at one point in time camera A
records a corresponding position (x_A, y_A), and likewise cameras B and C record their own
two-dimensional projections. Each trial can therefore be expressed as a six-dimensional
column vector.
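One time sample and the naive basis can be written out explicitly as follows (the camera labels A, B and C are inferred from the description above rather than taken verbatim from the report):

\vec{X} = \begin{pmatrix} x_A & y_A & x_B & y_B & x_C & y_C \end{pmatrix}^T ,
\qquad
B = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_6 \end{pmatrix}
  = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} = I ,
\qquad
\vec{X} = \sum_{i=1}^{6} X_i \, b_i^T .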
4. PCA
Principal Component Analysis (PCA) is the general name for a technique which uses
sophisticated underlying mathematical principles to transform a number of possibly
correlated variables into a smaller number of variables called principal components. The
origins of PCA lie in multivariate data analysis; however, it has a wide range of other
applications, as we will show in due course. PCA has been called 'one of the most important
results from applied linear algebra', and perhaps its most common use is as the first step in
trying to analyze large data sets. Some of the other common applications include de-noising
signals, blind source separation, and data compression.
In general terms, PCA uses a vector space transform to reduce the dimensionality of
large data sets. Using mathematical projection, the original data set, which may have
involved many variables, can often be interpreted in just a few variables (the principal
components). It is therefore often the case that an examination of the reduced dimension data
set will allow the user to spot trends, patterns and outliers in the data, far more easily than
would have been possible without performing the principal component analysis.
The aim of this essay is to explain the theoretical side of PCA, and to provide
examples of its application. We will begin with a non-rigorous motivational example from
multivariate data analysis in which we will attempt to extract some meaning from a 17
dimensional data set. After this motivational example, we shall discuss the PCA technique in
terms of its linear algebra fundamentals. This will lead us to a method for implementing PCA
for real-world data, and we will see that there is a close connection between PCA and the
singular value decomposition (SVD) from numerical linear algebra. We will then look at two
further examples of PCA in practice: image compression and blind source separation.
5. The Technical Details of PCA
The principal component analysis for the example above took a large set of data and
identified an optimal new basis in which to re-express the data. This mirrors the general aim
of the PCA method: can we obtain another basis that is a linear combination of the original
basis and that re-expresses the data optimally? There are some ambiguous terms in this
statement, which we shall address shortly; however, for now let us frame the problem in the
following way.
Assume that we start with a data set that is represented in terms of an m × n matrix, X where
the n columns are the samples (e.g. observations) and the m rows are the variables. We wish
to linearly transform this matrix, X into another matrix, Y, also of dimension m × n, so that
for some m × m matrix, P,
Y = PX
This equation represents a change of basis. If we consider the rows of P to be the row vectors
p_1, p_2, ..., p_m, and the columns of X to be the column vectors x_1, x_2, ..., x_n, then PX can be
interpreted in the following way:

PX = \begin{pmatrix} p_1 \\ p_2 \\ \vdots \\ p_m \end{pmatrix}
     \begin{pmatrix} x_1 & x_2 & \cdots & x_n \end{pmatrix}
   = \begin{pmatrix} p_1 \cdot x_1 & \cdots & p_1 \cdot x_n \\ \vdots & \ddots & \vdots \\ p_m \cdot x_1 & \cdots & p_m \cdot x_n \end{pmatrix}
   = Y    (1)
Note that p_i, x_j ∈ ℝ^m, and so p_i · x_j is just the standard Euclidean inner (dot) product.
This tells us that the original data X is being projected onto the rows of P. Thus, the rows
of P, {p_1, p_2, ..., p_m}, are a new basis for representing the columns of X. The rows
of P will later become our principal component directions.
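As a small illustration, added here rather than taken from the report, the change of basis Y = PX can be carried out directly in numpy; the dimensions and the random orthonormal basis below are arbitrary assumptions:

import numpy as np

# Change of basis Y = PX: each row of Y holds the data expressed along one
# row (direction) of P.  The sizes here are illustrative.
m, n = 6, 1000
X = np.random.randn(m, n)                    # m variables, n samples
Q, _ = np.linalg.qr(np.random.randn(m, m))   # any orthonormal set of vectors will do
P = Q.T                                      # rows of P are the new basis vectors
Y = P @ X                                    # re-expressed data
# Entry (i, j) of Y is the dot product p_i . x_j, as in equation (1).
assert np.allclose(Y[2, 5], P[2] @ X[:, 5])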
We now need to address the issue of what this new basis should be; indeed, what is the 'best'
way to re-express the data in X? In other words, how should we define independence
between principal components in the new basis?
Principal component analysis defines independence by considering the variance of the data in
the original basis. It seeks to de-correlate the original data by finding the directions in which
variance is maximised and then uses these directions to define the new basis. Recall the
definition of the variance of a random variable Z with mean ÎŒ:

\sigma_Z^2 = E\left[ (Z - \mu)^2 \right]    (2)

Suppose we have a vector of n discrete measurements \tilde{r} = (\tilde{r}_1, \tilde{r}_2, \ldots, \tilde{r}_n) with
mean \mu_r. If we subtract the mean from each of the measurements, then we obtain a translated
set of measurements r = (r_1, r_2, \ldots, r_n) that has zero mean. Thus, the variance of
these measurements is given by the relation

\sigma_r^2 = \frac{1}{n} \, r r^T    (3)
If we have a second vector of n measurements, s = (s_1, s_2, \ldots, s_n), again with zero
mean, then we can generalize this idea to obtain the covariance of r and s. Covariance can be
thought of as a measure of how much two variables change together. Variance is thus a
special case of covariance, when the two variables are identical. It is in fact correct to divide
through by a factor of n − 1 rather than n, a fact which we shall not justify here but which is
discussed elsewhere.

\sigma_{rs}^2 = \frac{1}{n-1} \, r s^T
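A quick numerical check of these definitions, added here for illustration rather than taken from the report, using the n − 1 normalisation discussed above and numpy's built-in estimator for comparison:

import numpy as np

rng = np.random.default_rng(1)
r = rng.standard_normal(500)
s = 0.8 * r + 0.2 * rng.standard_normal(500)
r -= r.mean()                      # translate both vectors to zero mean
s -= s.mean()

cov_rs = (r @ s) / (len(r) - 1)    # covariance of r and s, normalised by n - 1
var_r = (r @ r) / (len(r) - 1)     # variance as the special case r = s

# np.cov uses the same n - 1 normalisation by default
assert np.isclose(cov_rs, np.cov(r, s)[0, 1])
assert np.isclose(var_r, np.cov(r, s)[0, 0])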
We can now generalize this idea to our m × n data matrix, X. Recall that m was
the number of variables, and n the number of samples. We can therefore think of this matrix
X in terms of m row vectors, each of length n:

X = \begin{pmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m,1} & x_{m,2} & \cdots & x_{m,n} \end{pmatrix}
  = \begin{pmatrix} \mathbf{x}_1 \\ \vdots \\ \mathbf{x}_m \end{pmatrix} \in \mathbb{R}^{m \times n},
  \qquad \mathbf{x}_i \in \mathbb{R}^{n}
Since we have a row vector for each variable, each of these vectors contains all the samples
for one particular variable. So for example, xi is a vector of the n samples for the ith variable.
It therefore makes sense to consider the following matrix product.
C_X = \frac{1}{n-1} \, X X^T
    = \frac{1}{n-1} \begin{pmatrix} \mathbf{x}_1 \mathbf{x}_1^T & \cdots & \mathbf{x}_1 \mathbf{x}_m^T \\ \vdots & \ddots & \vdots \\ \mathbf{x}_m \mathbf{x}_1^T & \cdots & \mathbf{x}_m \mathbf{x}_m^T \end{pmatrix}
    \in \mathbb{R}^{m \times m}
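The same construction in code (again an added illustration, not the report's own): form the zero-mean data matrix, build C_X = X X^T / (n − 1), and compare with numpy's covariance routine, which also treats rows as variables:

import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 300
X = rng.standard_normal((m, n))            # m variables (rows), n samples (columns)
Xc = X - X.mean(axis=1, keepdims=True)     # remove each variable's mean

C_X = Xc @ Xc.T / (n - 1)                  # covariance matrix, as derived above
# Diagonal entries are variances, off-diagonal entries are covariances.
assert np.allclose(C_X, np.cov(X))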
If we look closely at the entries of this matrix, we see that we have computed all the possible
covariance pairs between the m variables. Indeed, on the diagonal entries, we have the
variances and on the off-diagonal entries, we have the covariance. This matrix is therefore
known as the Covariance Matrix. Now let us return to the original problem, that of linearly
transforming the original data matrix using the relation Y = PX, for some matrix, P. We need
to decide upon some features that we would like the transformed matrix, Y to exhibit and
somehow relate this to the features of the corresponding covariance matrix CY. Covariance
can be considered to be a measure of how well correlated two variables are. The PCA method
makes the fundamental assumption that the variables in the transformed matrix should be as
uncorrelated as possible. This is equivalent to saying that the covariance of different variables
in the matrix CY, should be as close to zero as possible (covariance matrices are always
positive definite or positive semi-definite). Conversely, large variance values interest us,
since they correspond to interesting dynamics in the system (small variances may well be
noise). We therefore have the following requirements for constructing the covariance matrix,
CY:
a. Maximize the signal, measured by variance (maximize the diagonal entries)
b. Minimize the covariance between variables (minimize the off-diagonal
entries)
We thus come to the conclusion that, since the minimum possible covariance is zero, we are
seeking a diagonal matrix C_Y. If we can choose the transformation matrix P in such a way
that C_Y is diagonal, then we will have achieved our objective.
We now make the assumption that the vectors in the new basis, p_1, p_2, ..., p_m, are orthogonal
(in fact, we additionally assume that they are orthonormal). Far from being restrictive, this
assumption enables us to proceed by using the tools of linear algebra to find a solution to the
problem. Consider the formula for the covariance matrix C_Y and our interpretation of Y in
terms of X and P:

C_Y = \frac{1}{n-1} \, Y Y^T
    = \frac{1}{n-1} \, (PX)(PX)^T
    = \frac{1}{n-1} \, P X X^T P^T
    = \frac{1}{n-1} \, P S P^T,
\qquad \text{where } S = X X^T.

Note that S is an m × m symmetric matrix, since (X X^T)^T = (X^T)^T X^T = X X^T. We now
invoke the well-known theorem from linear algebra that every square symmetric matrix is
orthogonally diagonalizable. That is, we can write:
S = E D E^T,
where E is an m × m orthonormal matrix whose columns are the orthonormal
eigenvectors of S, and D is a diagonal matrix which has the eigenvalues of S as its (diagonal)
entries. The rank, r, of S is the number of orthonormal eigenvectors that it has. If S turns out
to be rank-deficient so that r is less than the size, m, of the matrix, then we simply need to
generate m − r orthonormal vectors to fill the remaining columns of E. It is at this point that
we make a choice for the transformation matrix, P. By choosing the rows of P to be the
eigenvectors of S, we ensure that P = E^T and vice versa. Thus, substituting this into our
derived expression for the covariance matrix C_Y gives:
C_Y = \frac{1}{n-1} \, P S P^T
    = \frac{1}{n-1} \, E^T (E D E^T) E
Now, since E is an orthonormal matrix, we have E^T E = I, where I is the m × m identity
matrix. Hence, for this special choice of P, we have:

C_Y = \frac{1}{n-1} \, D
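A short numerical confirmation of this result, added for illustration with arbitrary sizes: choosing the rows of P to be the eigenvectors of S = X X^T makes the covariance of Y = PX diagonal, with the eigenvalues divided by n − 1 on the diagonal.

import numpy as np

rng = np.random.default_rng(3)
m, n = 5, 400
X = rng.standard_normal((m, n))
X -= X.mean(axis=1, keepdims=True)      # zero-mean rows, as assumed in the derivation

S = X @ X.T
eigvals, E = np.linalg.eigh(S)          # S = E D E^T (columns of E are eigenvectors)
P = E.T                                 # rows of P are the principal directions
Y = P @ X

C_Y = Y @ Y.T / (n - 1)
# Off-diagonal covariances vanish; the diagonal holds the eigenvalues / (n - 1).
assert np.allclose(C_Y, np.diag(eigvals) / (n - 1), atol=1e-10)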
A last point to note is that with this method, we automatically gain information about
the relative importance of each principal component from the variances. The largest variance
corresponds to the first principal component, the second largest to the second principal
component, and so on. This therefore gives us a method for organizing the data in the
diagonalization stage. Once we have obtained the eigenvalues and eigenvectors of S = X X^T,
we sort the eigenvalues in descending order and place them in this order on the diagonal of D.
We then construct the orthonormal matrix E by placing the associated eigenvectors in the
same order to form the columns of E (i.e. place the eigenvector that corresponds to the largest
eigenvalue in the first column, the eigenvector corresponding to the second largest eigenvalue
in the second column, etc.).
We have therefore achieved our objective of diagonalizing the covariance matrix of
the transformed data. The principal components (the rows of P) are the eigenvectors of the
covariance matrix X X^T, and the rows are in order of 'importance', telling us how 'principal'
each principal component is.
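Putting the recipe together, here is a minimal sketch of the procedure described above: mean removal, eigendecomposition, sorting by eigenvalue, and keeping the top k components. The function name and dimensions are illustrative, not taken from the report.

import numpy as np

def pca(X, k):
    """Return the top-k principal directions and the reduced representation.

    X is an m x n matrix with one variable per row and one sample per column.
    This is a sketch of the recipe described above, not the report's own code.
    """
    Xc = X - X.mean(axis=1, keepdims=True)
    S = Xc @ Xc.T
    eigvals, E = np.linalg.eigh(S)
    order = np.argsort(eigvals)[::-1]      # largest eigenvalue first
    P = E[:, order].T                      # rows of P are eigenvectors, most 'principal' first
    return P[:k], P[:k] @ Xc               # principal directions and projected data

# Example: 10 variables, 200 samples, keep the 2 most important components
X = np.random.randn(10, 200)
P2, Y2 = pca(X, 2)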
My aim in writing this article was that somebody with a similar level of mathematical
knowledge as myself (i.e. early graduate level) would be able to gain a good introductory
understanding of PCA by reading this essay. I hope that they would understand that it is a
diverse tool in data analysis, with many applications, three of which we have covered in
detail here. I would also hope that they would gain a good understanding of the surrounding
mathematics, and the close link that PCA has with the singular value decomposition.
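One way to see the PCA-SVD connection mentioned above, added here as an illustration: for a zero-mean data matrix X = U ÎŁ V^T, the columns of U are the eigenvectors of X X^T and the squared singular values are its eigenvalues.

import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((5, 200))
X -= X.mean(axis=1, keepdims=True)

# Eigendecomposition route
eigvals, E = np.linalg.eigh(X @ X.T)
order = np.argsort(eigvals)[::-1]
eigvals, E = eigvals[order], E[:, order]

# SVD route: X = U Sigma V^T, so X X^T = U Sigma^2 U^T
U, sing, Vt = np.linalg.svd(X, full_matrices=False)

assert np.allclose(eigvals, sing**2)
# The columns of U and E agree up to sign, so their inner products have magnitude 1.
assert np.allclose(np.abs(np.sum(U * E, axis=0)), 1.0, atol=1e-6)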
I embarked upon writing this essay with only one application in mind, that of Blind Source
Separation. However, when it came to researching the topic in detail, I found that there were
many interesting applications of PCA, and I identified dimensionality reduction in multivariate
data analysis and image compression as being two of the most appealing alternative
applications. Though it is a powerful technique, with a diverse range of possible applications,
it is fair to say that PCA is not necessarily the best way to deal with each of the sample
applications that I have discussed.
For the multivariate data analysis example, we were able to identify that the
inhabitants of Northern Ireland were in some way different in their dietary habits to those of
the other three countries in the UK. We were also able to identify particular food groups with
the eating habits of Northern Ireland, yet we were limited in being able to make distinctions
between the dietary habits of the English, Scottish and Welsh. In order to explore this avenue,
it would perhaps be necessary to perform a similar analysis on just those three countries.
Image compression (and more generally, data compression) is by now getting to be a mature
field, and there are many sophisticated technologies available that perform this task.
JPEG is an obvious and comparable example that springs to mind (JPEG can also
involve lossy compression). JPEG utilizes the discrete cosine transform to convert the image
to a frequency-domain representation and generally achieves much higher quality for similar
compression ratios when compared to PCA. Having said this, PCA is a nice technique in its
own right for implementing image compression and it is nice to find such a pleasing
implementation.
As we saw in the last example, Blind Source Separation can cause problems for PCA
under certain circumstances. PCA will not be able to separate the individual sources if the
signals are combined nonlinearly, and can produce spurious results even if the combination is
linear. PCA will also fail for BSS if the data is non-Gaussian. In this situation, a well-known
technique that works is called Independent Component Analysis (ICA). The main
philosophical difference between the two methods is that PCA defines independence using
variance, whilst ICA defines independence using statistical independence - it identifies the
principal components by maximizing the statistical independence between each of the
components.
6. APPLICATIONS
Neuroscience:
A variant of principal components analysis is used in neuroscience to identify the
specific properties of a stimulus that increase a neuron's probability of generating an action
potential. This technique is known as spike-triggered covariance analysis. In a typical
application an experimenter presents a white noise process as a stimulus (usually either as a
sensory input to a test subject, or as a current injected directly into the neuron) and records a
train of action potentials, or spikes, produced by the neuron as a result. Presumably, certain
features of the stimulus make the neuron more likely to spike. In order to extract these
features, the experimenter calculates the covariance matrix of the spike-triggered ensemble,
the set of all stimuli (defined and discretized over a finite time window, typically on the order
of 100 ms) that immediately preceded a spike. The eigenvectors of the difference between the
spike-triggered covariance matrix and the covariance matrix of the prior stimulus
ensemble (the set of all stimuli, defined over the same length time window) then indicate the
directions in the space of stimuli along which the variance of the spike-triggered ensemble
differed the most from that of the prior stimulus ensemble. Specifically, the eigenvectors with
the largest positive eigenvalues correspond to the directions along which the variance of the
spike-triggered ensemble showed the largest positive change compared to the variance of the
prior. Since these were the directions in which varying the stimulus led to a spike, they are
often good approximations of the sought-after relevant stimulus features.
In neuroscience, PCA is also used to discern the identity of a neuron from the shape of its
action potential. Spike sorting is an important procedure because extracellular recording
techniques often pick up signals from more than one neuron. In spike sorting, one first uses
PCA to reduce the dimensionality of the space of action potential waveforms, and then
performs clustering analysis to associate specific action potentials with individual neurons.
PCA as a dimension reduction technique is particularly suited to detect coordinated activities
of large neuronal ensembles. It has been used in determining collective variables, i.e. order
parameters, during phase transitions in the brain.
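A minimal simulation of the spike-triggered covariance idea described above, added for illustration; the stimulus dimension, the hidden feature, and the spiking nonlinearity are all invented. Spikes are made more likely when the stimulus projects strongly onto one hidden direction, and the top eigenvector of the covariance difference recovers that direction.

import numpy as np

rng = np.random.default_rng(5)
d, n = 20, 50000                         # stimulus dimension and number of time bins
stim = rng.standard_normal((n, d))       # white-noise stimulus, one row per time window

feature = np.zeros(d)
feature[7] = 1.0                         # hidden relevant stimulus feature (assumed)
drive = stim @ feature
p_spike = 1 / (1 + np.exp(-(drive**2 - 2)))   # spiking favours large |projection|
spikes = rng.random(n) < p_spike

prior_cov = np.cov(stim, rowvar=False)           # covariance of the prior stimulus ensemble
spike_cov = np.cov(stim[spikes], rowvar=False)   # covariance of the spike-triggered ensemble

eigvals, eigvecs = np.linalg.eigh(spike_cov - prior_cov)
top = eigvecs[:, np.argmax(eigvals)]     # direction of the largest positive eigenvalue
print(np.abs(top @ feature))             # close to 1: the relevant feature is recovered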
7. ADVANTAGE AND DISADVANTAGE OF PCA
PCA’s key advantages are its low noise sensitivity, the decreased requirements for
capacity and memory, and increased efficiency given that the processing takes place in a smaller
number of dimensions; the complete advantages of PCA are listed below:
‱ Lack of redundancy of data given the orthogonal components.
‱ Reduced complexity in images’ grouping with the use of PCA.
‱ Smaller database representation, since the training images are stored only in the form of
their projections on a reduced basis (a sketch of this idea follows at the end of this section).
‱ Reduction of noise since the maximum variation basis is chosen and so the small
variations in the back-ground are ignored automatically.
Two key disadvantages of PCA are:
‱ The covariance matrix is difficult to evaluate accurately.
‱ Even the simplest invariance cannot be captured by PCA unless the training
data explicitly provides this information.
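To illustrate the "projections on a reduced basis" point in the face recognition setting of this report, the following is a sketch of eigenface training and nearest-neighbour recognition; the image size, the number of components, and the random stand-in data are assumptions, not the report's actual implementation.

import numpy as np

def train_eigenfaces(images, k):
    """images: n x p matrix, one flattened face per row. Returns mean, eigenfaces, coefficients."""
    mean = images.mean(axis=0)
    A = images - mean
    # Trick for small n: eigenvectors of A A^T (n x n) give those of A^T A (p x p)
    eigvals, V = np.linalg.eigh(A @ A.T)
    order = np.argsort(eigvals)[::-1][:k]
    U = A.T @ V[:, order]                    # p x k eigenfaces
    U /= np.linalg.norm(U, axis=0)
    return mean, U, A @ U                    # stored training projections (n x k)

def recognise(face, mean, U, train_proj):
    w = (face - mean) @ U                    # project the probe face onto the eigenfaces
    return np.argmin(np.linalg.norm(train_proj - w, axis=1))   # nearest stored face

# Toy usage with random "images" as stand-ins for real face data
faces = np.random.rand(30, 64 * 64)
mean, U, proj = train_eigenfaces(faces, k=10)
print(recognise(faces[3], mean, U, proj))    # index of the closest stored face (3 here)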
8. FEATURES OF PCA
Table 1. The features of PCA
9. CONCLUSION
For the multivariate data analysis example, we were able to identify that the inhabitants of
Northern Ireland were in some way different in their dietary habits to those of the other three
countries in the UK. We were also able to identify particular food groups with the eating
habits of Northern Ireland, yet we were limited in being able to make distinctions between the
dietary habits of the English, Scottish and Welsh. In order to explore this avenue, it would
perhaps be necessary to perform a similar analysis on just those three countries. Image
compression (and more generally, data compression) is by now getting to be a mature field,
and there are many sophisticated technologies available that perform this task. JPEG is an
obvious and comparable example that springs to mind (JPEG can also involve lossy
compression). JPEG utilises the discrete cosine transform to convert the image to a
frequency-domain representation and generally achieves much higher quality for similar
compression ratios when compared to PCA. Having said this, PCA is a nice technique in its
own right for implementing image compression and it is nice to find such a pleasing
implementation. As we saw in the last example, Blind Source Separation can cause problems
for PCA under certain circumstances. PCA will not be able to separate the individual sources
if the signals are combined nonlinearly, and can produce spurious results even if the
combination is linear. PCA will also fail for BSS if the data is non-Gaussian. In this situation,
a well-known technique that works is called Independent Component Analysis (ICA). The
main philosophical difference between the two methods is that PCA defines independence
using variance, whilst ICA defines independence using statistical independence - it identifies
the principal components by maximising the statistical independence between each of the
components.
REFERENCES
[1] J. Ashok, V. Shivashankar and P. Mudiraj, “An Overview of Biometrics,”
International Journal, Vol. 2, 2010.
[2] F. Özen, “A Face Recognition System Based on Eigen-faces Method,” Procedia
Technology, Vol. 1, 2011, pp. 118-123.
[3] W. Miziolek and D. Sawicki, “Face Recognition: PCA or ICA,” Przeglad
Elektrotechniczny, Vol. 88, 2012, pp. 286-288.
[4] C. Li, Y. Diao, H. Ma and Y. Li, “A Statistical PCA Method for Face
Recognition,” in Intelligent Information Technology Application, 2008, pp. 376-
380.
[5] H. Duan, R. Yan and K. Lin, “Research on Face Recognition Based on PCA,” in
Future Information Technology and Management Engineering, 2008, pp. 29-32.
[6] T. F. Karim, M. S. H. Lipu, M. L. Rahman and F. Sultana, “Face Recognition
Using PCA-based Method,” in Advanced Management Science (ICAMS), 2010
IEEE International Conference on, 2010, pp. 158-162.
SEMINAR REPORT
On
PCA BASED FACE RECOGNITION SYSTEM
Submitted in partial fulfillment of the requirement for the award
Of
BACHELOR OF TECHNOLOGY
In
ELECTRICAL INSTRUMENTATION & CONTROL
ENGINEERING
By
SIKHA DASH
(Regd no. - 1241019069)
DEPARTMENT OF ELECTRICAL INSTRUMENTATION AND CONTROL ENGINEERING
INSTITUTE OF TECHNICAL EDUCATION AND RESEARCH
SIKSHA ‘O’ ANUSANDHAN UNIVERSITY
(Declared u/s. 3 of the UGC Act. 1956)
Jagamohan Nagar, Jagamara, Bhubaneswar – 751030.
2012-2016

Weitere Àhnliche Inhalte

Was ist angesagt?

To provide insights into the setting of a
To provide insights into the setting of aTo provide insights into the setting of a
To provide insights into the setting of aqazwsx99
 
Credal Fusion of ClassiïŹcations for Noisy and Uncertain Data
Credal Fusion of ClassiïŹcations for Noisy and Uncertain DataCredal Fusion of ClassiïŹcations for Noisy and Uncertain Data
Credal Fusion of ClassiïŹcations for Noisy and Uncertain DataIJECEIAES
 
Spakov.2011.comparison of gaze to-objects mapping algorithms
Spakov.2011.comparison of gaze to-objects mapping algorithmsSpakov.2011.comparison of gaze to-objects mapping algorithms
Spakov.2011.comparison of gaze to-objects mapping algorithmsmrgazer
 
November, 2006 CCKM'06 1
November, 2006 CCKM'06 1 November, 2006 CCKM'06 1
November, 2006 CCKM'06 1 butest
 
DOJProposal7.doc
DOJProposal7.docDOJProposal7.doc
DOJProposal7.docbutest
 
Model evaluation in the land of deep learning
Model evaluation in the land of deep learningModel evaluation in the land of deep learning
Model evaluation in the land of deep learningPramit Choudhary
 
Artificial Intelligence - Reasoning in Uncertain Situations
Artificial Intelligence - Reasoning in Uncertain SituationsArtificial Intelligence - Reasoning in Uncertain Situations
Artificial Intelligence - Reasoning in Uncertain SituationsLaguna State Polytechnic University
 
Dealing with inconsistency
Dealing with inconsistencyDealing with inconsistency
Dealing with inconsistencyRajat Sharma
 
Human in the loop: Bayesian Rules Enabling Explainable AI
Human in the loop: Bayesian Rules Enabling Explainable AIHuman in the loop: Bayesian Rules Enabling Explainable AI
Human in the loop: Bayesian Rules Enabling Explainable AIPramit Choudhary
 
3 ćčłć‡ăƒ»ćˆ†æ•Łăƒ»ç›žé–ą
3 ćčłć‡ăƒ»ćˆ†æ•Łăƒ»ç›žé–ą3 ćčłć‡ăƒ»ćˆ†æ•Łăƒ»ç›žé–ą
3 ćčłć‡ăƒ»ćˆ†æ•Łăƒ»ç›žé–ąSeiichi Uchida
 
Object Tracking using Artificial Neural Network
Object Tracking using Artificial Neural NetworkObject Tracking using Artificial Neural Network
Object Tracking using Artificial Neural NetworkAnwar Jameel
 
Robust Visual Tracking Based on Sparse PCA-L1
Robust Visual Tracking Based on Sparse PCA-L1Robust Visual Tracking Based on Sparse PCA-L1
Robust Visual Tracking Based on Sparse PCA-L1csandit
 

Was ist angesagt? (18)

To provide insights into the setting of a
To provide insights into the setting of aTo provide insights into the setting of a
To provide insights into the setting of a
 
Credal Fusion of ClassiïŹcations for Noisy and Uncertain Data
Credal Fusion of ClassiïŹcations for Noisy and Uncertain DataCredal Fusion of ClassiïŹcations for Noisy and Uncertain Data
Credal Fusion of ClassiïŹcations for Noisy and Uncertain Data
 
Spakov.2011.comparison of gaze to-objects mapping algorithms
Spakov.2011.comparison of gaze to-objects mapping algorithmsSpakov.2011.comparison of gaze to-objects mapping algorithms
Spakov.2011.comparison of gaze to-objects mapping algorithms
 
November, 2006 CCKM'06 1
November, 2006 CCKM'06 1 November, 2006 CCKM'06 1
November, 2006 CCKM'06 1
 
Chapter 9
Chapter 9Chapter 9
Chapter 9
 
DOJProposal7.doc
DOJProposal7.docDOJProposal7.doc
DOJProposal7.doc
 
Model evaluation in the land of deep learning
Model evaluation in the land of deep learningModel evaluation in the land of deep learning
Model evaluation in the land of deep learning
 
Facial Expression Recognition
Facial Expression RecognitionFacial Expression Recognition
Facial Expression Recognition
 
Artificial Intelligence - Reasoning in Uncertain Situations
Artificial Intelligence - Reasoning in Uncertain SituationsArtificial Intelligence - Reasoning in Uncertain Situations
Artificial Intelligence - Reasoning in Uncertain Situations
 
Dealing with inconsistency
Dealing with inconsistencyDealing with inconsistency
Dealing with inconsistency
 
Human in the loop: Bayesian Rules Enabling Explainable AI
Human in the loop: Bayesian Rules Enabling Explainable AIHuman in the loop: Bayesian Rules Enabling Explainable AI
Human in the loop: Bayesian Rules Enabling Explainable AI
 
The picture fuzzy distance measure in controlling network power consumption
The picture fuzzy distance measure in controlling network power consumptionThe picture fuzzy distance measure in controlling network power consumption
The picture fuzzy distance measure in controlling network power consumption
 
abcxyz
abcxyzabcxyz
abcxyz
 
528 439-449
528 439-449528 439-449
528 439-449
 
3 ćčłć‡ăƒ»ćˆ†æ•Łăƒ»ç›žé–ą
3 ćčłć‡ăƒ»ćˆ†æ•Łăƒ»ç›žé–ą3 ćčłć‡ăƒ»ćˆ†æ•Łăƒ»ç›žé–ą
3 ćčłć‡ăƒ»ćˆ†æ•Łăƒ»ç›žé–ą
 
Object Tracking using Artificial Neural Network
Object Tracking using Artificial Neural NetworkObject Tracking using Artificial Neural Network
Object Tracking using Artificial Neural Network
 
Thesis_Rehan_Aziz
Thesis_Rehan_AzizThesis_Rehan_Aziz
Thesis_Rehan_Aziz
 
Robust Visual Tracking Based on Sparse PCA-L1
Robust Visual Tracking Based on Sparse PCA-L1Robust Visual Tracking Based on Sparse PCA-L1
Robust Visual Tracking Based on Sparse PCA-L1
 

Andere mochten auch

Seminar Report face recognition_technology
Seminar Report face recognition_technologySeminar Report face recognition_technology
Seminar Report face recognition_technologyVivek Soni
 
Face recognition technology - BEST PPT
Face recognition technology - BEST PPTFace recognition technology - BEST PPT
Face recognition technology - BEST PPTSiddharth Modi
 
Master_Thesis_FilipeSilva
Master_Thesis_FilipeSilvaMaster_Thesis_FilipeSilva
Master_Thesis_FilipeSilvaFilipe Silva
 
Facial Expression Recognition Using Local Binary Pattern and Support Vector M...
Facial Expression Recognition Using Local Binary Pattern and Support Vector M...Facial Expression Recognition Using Local Binary Pattern and Support Vector M...
Facial Expression Recognition Using Local Binary Pattern and Support Vector M...AM Publications
 
Recognition of Facial Expressions using Local Binary Patterns of Important Fa...
Recognition of Facial Expressions using Local Binary Patterns of Important Fa...Recognition of Facial Expressions using Local Binary Patterns of Important Fa...
Recognition of Facial Expressions using Local Binary Patterns of Important Fa...CSCJournals
 
computer vision and face recognition
 computer vision and face recognition computer vision and face recognition
computer vision and face recognitionhananhelal
 
Face recognition a survey
Face recognition a surveyFace recognition a survey
Face recognition a surveyieijjournal
 
What is pattern recognition field?
What is pattern recognition field?What is pattern recognition field?
What is pattern recognition field?Randa Elanwar
 
Emotion recognition from facial expression using fuzzy logic
Emotion recognition from facial expression using fuzzy logicEmotion recognition from facial expression using fuzzy logic
Emotion recognition from facial expression using fuzzy logicFinalyear Projects
 
Recent Advances in Face Analysis: database, methods, and software.
Recent Advances in Face Analysis: database, methods, and software.Recent Advances in Face Analysis: database, methods, and software.
Recent Advances in Face Analysis: database, methods, and software.Taowei Huang
 
Facial expression recognition based on local binary patterns final
Facial expression recognition based on local binary patterns finalFacial expression recognition based on local binary patterns final
Facial expression recognition based on local binary patterns finalahmad abdelhafeez
 
Face recognition ppt
Face recognition pptFace recognition ppt
Face recognition pptSantosh Kumar
 

Andere mochten auch (14)

Seminar Report face recognition_technology
Seminar Report face recognition_technologySeminar Report face recognition_technology
Seminar Report face recognition_technology
 
Face recognition technology - BEST PPT
Face recognition technology - BEST PPTFace recognition technology - BEST PPT
Face recognition technology - BEST PPT
 
Master_Thesis_FilipeSilva
Master_Thesis_FilipeSilvaMaster_Thesis_FilipeSilva
Master_Thesis_FilipeSilva
 
Facial Expression Recognition Using Local Binary Pattern and Support Vector M...
Facial Expression Recognition Using Local Binary Pattern and Support Vector M...Facial Expression Recognition Using Local Binary Pattern and Support Vector M...
Facial Expression Recognition Using Local Binary Pattern and Support Vector M...
 
Recognition of Facial Expressions using Local Binary Patterns of Important Fa...
Recognition of Facial Expressions using Local Binary Patterns of Important Fa...Recognition of Facial Expressions using Local Binary Patterns of Important Fa...
Recognition of Facial Expressions using Local Binary Patterns of Important Fa...
 
computer vision and face recognition
 computer vision and face recognition computer vision and face recognition
computer vision and face recognition
 
Face recognition a survey
Face recognition a surveyFace recognition a survey
Face recognition a survey
 
What is pattern recognition field?
What is pattern recognition field?What is pattern recognition field?
What is pattern recognition field?
 
Emotion recognition from facial expression using fuzzy logic
Emotion recognition from facial expression using fuzzy logicEmotion recognition from facial expression using fuzzy logic
Emotion recognition from facial expression using fuzzy logic
 
Face recognition software system by Junyu Tech.(China)
Face recognition software system by Junyu Tech.(China)Face recognition software system by Junyu Tech.(China)
Face recognition software system by Junyu Tech.(China)
 
Mini Project- Face Recognition
Mini Project- Face RecognitionMini Project- Face Recognition
Mini Project- Face Recognition
 
Recent Advances in Face Analysis: database, methods, and software.
Recent Advances in Face Analysis: database, methods, and software.Recent Advances in Face Analysis: database, methods, and software.
Recent Advances in Face Analysis: database, methods, and software.
 
Facial expression recognition based on local binary patterns final
Facial expression recognition based on local binary patterns finalFacial expression recognition based on local binary patterns final
Facial expression recognition based on local binary patterns final
 
Face recognition ppt
Face recognition pptFace recognition ppt
Face recognition ppt
 

Ähnlich wie Pca seminar final report

Performance characterization in computer vision
Performance characterization in computer visionPerformance characterization in computer vision
Performance characterization in computer visionpotaters
 
Standard Statistical Feature analysis of Image Features for Facial Images usi...
Standard Statistical Feature analysis of Image Features for Facial Images usi...Standard Statistical Feature analysis of Image Features for Facial Images usi...
Standard Statistical Feature analysis of Image Features for Facial Images usi...Bulbul Agrawal
 
Fr pca lda
Fr pca ldaFr pca lda
Fr pca ldaultraraj
 
Facial Expression Recognition via Python
Facial Expression Recognition via PythonFacial Expression Recognition via Python
Facial Expression Recognition via PythonSaurav Gupta
 
Predicting Facial Expression using Neural Network
Predicting Facial Expression using Neural Network Predicting Facial Expression using Neural Network
Predicting Facial Expression using Neural Network Santanu Paul
 
GENETIC ALGORITHM FOR FUNCTION APPROXIMATION: AN EXPERIMENTAL INVESTIGATION
GENETIC ALGORITHM FOR FUNCTION APPROXIMATION: AN EXPERIMENTAL INVESTIGATIONGENETIC ALGORITHM FOR FUNCTION APPROXIMATION: AN EXPERIMENTAL INVESTIGATION
GENETIC ALGORITHM FOR FUNCTION APPROXIMATION: AN EXPERIMENTAL INVESTIGATIONijaia
 
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...mathsjournal
 
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...mathsjournal
 
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...mathsjournal
 
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...mathsjournal
 
IEEE Pattern analysis and machine intelligence 2016 Title and Abstract
IEEE Pattern analysis and machine intelligence 2016 Title and AbstractIEEE Pattern analysis and machine intelligence 2016 Title and Abstract
IEEE Pattern analysis and machine intelligence 2016 Title and Abstracttsysglobalsolutions
 
Face Recognition Using Gabor features And PCA
Face Recognition Using Gabor features And PCAFace Recognition Using Gabor features And PCA
Face Recognition Using Gabor features And PCAIOSR Journals
 
Face recognition using laplacianfaces
Face recognition using laplacianfaces Face recognition using laplacianfaces
Face recognition using laplacianfaces StudsPlanet.com
 
Casa cookbook for KAT 7
Casa cookbook for KAT 7Casa cookbook for KAT 7
Casa cookbook for KAT 7CosmoAIMS Bassett
 
IRJET- Proposed System for Animal Recognition using Image Processing
IRJET-  	  Proposed System for Animal Recognition using Image ProcessingIRJET-  	  Proposed System for Animal Recognition using Image Processing
IRJET- Proposed System for Animal Recognition using Image ProcessingIRJET Journal
 
Dimensionality reduction
Dimensionality reductionDimensionality reduction
Dimensionality reductionShatakirti Er
 
Noise-robust classification with hypergraph neural network
Noise-robust classification with hypergraph neural networkNoise-robust classification with hypergraph neural network
Noise-robust classification with hypergraph neural networknooriasukmaningtyas
 
IRJET- Survey on Face Recognition using Biometrics
IRJET-  	  Survey on Face Recognition using BiometricsIRJET-  	  Survey on Face Recognition using Biometrics
IRJET- Survey on Face Recognition using BiometricsIRJET Journal
 

Ähnlich wie Pca seminar final report (20)

Performance characterization in computer vision
Performance characterization in computer visionPerformance characterization in computer vision
Performance characterization in computer vision
 
Standard Statistical Feature analysis of Image Features for Facial Images usi...
Standard Statistical Feature analysis of Image Features for Facial Images usi...Standard Statistical Feature analysis of Image Features for Facial Images usi...
Standard Statistical Feature analysis of Image Features for Facial Images usi...
 
Fr pca lda
Fr pca ldaFr pca lda
Fr pca lda
 
Facial Expression Recognition via Python
Facial Expression Recognition via PythonFacial Expression Recognition via Python
Facial Expression Recognition via Python
 
Predicting Facial Expression using Neural Network
Predicting Facial Expression using Neural Network Predicting Facial Expression using Neural Network
Predicting Facial Expression using Neural Network
 
GENETIC ALGORITHM FOR FUNCTION APPROXIMATION: AN EXPERIMENTAL INVESTIGATION
GENETIC ALGORITHM FOR FUNCTION APPROXIMATION: AN EXPERIMENTAL INVESTIGATIONGENETIC ALGORITHM FOR FUNCTION APPROXIMATION: AN EXPERIMENTAL INVESTIGATION
GENETIC ALGORITHM FOR FUNCTION APPROXIMATION: AN EXPERIMENTAL INVESTIGATION
 
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
 
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
 
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
 
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
LASSO MODELING AS AN ALTERNATIVE TO PCA BASED MULTIVARIATE MODELS TO SYSTEM W...
 
IEEE Pattern analysis and machine intelligence 2016 Title and Abstract
IEEE Pattern analysis and machine intelligence 2016 Title and AbstractIEEE Pattern analysis and machine intelligence 2016 Title and Abstract
IEEE Pattern analysis and machine intelligence 2016 Title and Abstract
 
Face Recognition Using Gabor features And PCA
Face Recognition Using Gabor features And PCAFace Recognition Using Gabor features And PCA
Face Recognition Using Gabor features And PCA
 
H0334749
H0334749H0334749
H0334749
 
Face recognition using laplacianfaces
Face recognition using laplacianfaces Face recognition using laplacianfaces
Face recognition using laplacianfaces
 
FinalReport
FinalReportFinalReport
FinalReport
 
Casa cookbook for KAT 7
Casa cookbook for KAT 7Casa cookbook for KAT 7
Casa cookbook for KAT 7
 
IRJET- Proposed System for Animal Recognition using Image Processing
IRJET-  	  Proposed System for Animal Recognition using Image ProcessingIRJET-  	  Proposed System for Animal Recognition using Image Processing
IRJET- Proposed System for Animal Recognition using Image Processing
 
Dimensionality reduction
Dimensionality reductionDimensionality reduction
Dimensionality reduction
 
Noise-robust classification with hypergraph neural network
Noise-robust classification with hypergraph neural networkNoise-robust classification with hypergraph neural network
Noise-robust classification with hypergraph neural network
 
IRJET- Survey on Face Recognition using Biometrics
IRJET-  	  Survey on Face Recognition using BiometricsIRJET-  	  Survey on Face Recognition using Biometrics
IRJET- Survey on Face Recognition using Biometrics
 

KĂŒrzlich hochgeladen

Gas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxGas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxDr.Ibrahim Hassaan
 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPCeline George
 
DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersSabitha Banu
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Celine George
 
How to Add Barcode on PDF Report in Odoo 17
How to Add Barcode on PDF Report in Odoo 17How to Add Barcode on PDF Report in Odoo 17
How to Add Barcode on PDF Report in Odoo 17Celine George
 
ACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfSpandanaRallapalli
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfMr Bounab Samir
 
Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Celine George
 
Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatYousafMalik24
 
Inclusivity Essentials_ Creating Accessible Websites for Nonprofits .pdf
Inclusivity Essentials_ Creating Accessible Websites for Nonprofits .pdfInclusivity Essentials_ Creating Accessible Websites for Nonprofits .pdf
Inclusivity Essentials_ Creating Accessible Websites for Nonprofits .pdfTechSoup
 
Visit to a blind student's school🧑‍🩯🧑‍🩯(community medicine)
Visit to a blind student's school🧑‍🩯🧑‍🩯(community medicine)Visit to a blind student's school🧑‍🩯🧑‍🩯(community medicine)
Visit to a blind student's school🧑‍🩯🧑‍🩯(community medicine)lakshayb543
 
4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptx4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptxmary850239
 
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATION
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATIONTHEORIES OF ORGANIZATION-PUBLIC ADMINISTRATION
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATIONHumphrey A Beña
 
Choosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for ParentsChoosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for Parentsnavabharathschool99
 
Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceSamikshaHamane
 
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxGrade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxChelloAnnAsuncion2
 

KĂŒrzlich hochgeladen (20)

Gas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxGas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptx
 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERP
 
DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginners
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17
 
How to Add Barcode on PDF Report in Odoo 17
How to Add Barcode on PDF Report in Odoo 17How to Add Barcode on PDF Report in Odoo 17
How to Add Barcode on PDF Report in Odoo 17
 
ACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdf
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
 
Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17
 
Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice great
 
Inclusivity Essentials_ Creating Accessible Websites for Nonprofits .pdf
Inclusivity Essentials_ Creating Accessible Websites for Nonprofits .pdfInclusivity Essentials_ Creating Accessible Websites for Nonprofits .pdf
Inclusivity Essentials_ Creating Accessible Websites for Nonprofits .pdf
 
Visit to a blind student's school🧑‍🩯🧑‍🩯(community medicine)
Visit to a blind student's school🧑‍🩯🧑‍🩯(community medicine)Visit to a blind student's school🧑‍🩯🧑‍🩯(community medicine)
Visit to a blind student's school🧑‍🩯🧑‍🩯(community medicine)
 
YOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptx
YOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptxYOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptx
YOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptx
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptx4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptx
 
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATION
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATIONTHEORIES OF ORGANIZATION-PUBLIC ADMINISTRATION
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATION
 
Choosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for ParentsChoosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for Parents
 
Raw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptxRaw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptx
 
Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in Pharmacovigilance
 
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxGrade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
 
FINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptx
FINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptxFINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptx
FINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptx
 

Pca seminar final report

large data sets. Using mathematical projection, the original data set, which may have involved many variables, can often be interpreted in just a few variables (the principal components). It is therefore often the case that an examination of the reduced-dimension data set allows the user to spot trends, patterns and outliers in the data far more easily than would have been possible without performing the principal component analysis.

The aim of this essay is to explain the theoretical side of PCA and to provide examples of its application. We will begin with a non-rigorous motivational example from multivariate data analysis, in which we attempt to extract some meaning from a 17-dimensional data set. After this motivational example, we shall discuss the PCA technique in terms of its linear algebra fundamentals. This will lead us to a method for implementing PCA for real-world data, and we will see that there is a close connection between PCA and the singular value decomposition (SVD) from numerical linear algebra. We will then look at two further examples of PCA in practice: image compression and blind source separation.
1. OVERVIEW

Principal component analysis (PCA) has been called one of the most valuable results from applied linear algebra. PCA is used abundantly in all forms of analysis - from neuroscience to computer graphics - because it is a simple, non-parametric method of extracting relevant information from confusing data sets. With minimal additional effort, PCA provides a roadmap for how to reduce a complex data set to a lower dimension to reveal the sometimes hidden, simplified dynamics that often underlie it.

The goal of this tutorial is to provide both an intuitive feel for PCA and a thorough discussion of the topic. We will begin with a simple example and an intuitive explanation of the goal of PCA. We will then add mathematical rigor to place it within the framework of linear algebra and solve the problem explicitly. We will see how and why PCA is intimately related to the mathematical technique of singular value decomposition (SVD). This understanding will lead us to a prescription for how to apply PCA in the real world. We will discuss both the assumptions behind the technique and possible extensions to overcome its limitations. The discussion and explanations in this report are informal, in the spirit of a tutorial; the goal of this report is to educate.

2. Motivation: A Toy Example

Suppose we are trying to understand some phenomenon by measuring various quantities (e.g. spectra, voltages, velocities) in a system. Unfortunately, we cannot figure out what is happening because the data appears clouded, unclear and even redundant. This is not a trivial problem, but rather a fundamental obstacle to experimental science. Examples abound in complex systems such as neuroscience, photo science, meteorology and oceanography - the number of variables to measure can be unwieldy and at times even deceptive, because the underlying dynamics can often be quite simple.

Take, for example, a simple toy problem from physics, diagrammed in Figure 1. Suppose we are studying the motion of the physicist's ideal spring.
This system consists of a ball of mass m attached to a massless, frictionless spring. The ball is released a small distance away from equilibrium (i.e. the spring is stretched). Because the spring is "ideal," it oscillates indefinitely along the x-axis about its equilibrium at a set frequency. This is a standard problem in physics, in which the motion along the x direction is solved by an explicit function of time. In other words, the underlying dynamics can be expressed as a function of a single variable x.

Figure 1: A diagram of the toy example.

However, being ignorant experimenters, we do not know any of this. We do not know which axes and dimensions are important to measure, let alone how many. Thus, we decide to measure the ball's position in three-dimensional space (since we live in a three-dimensional world). Specifically, we place three movie cameras around our system of interest. At 200 Hz, each movie camera records an image indicating a two-dimensional position of the ball (a projection). Unfortunately, because of our ignorance, we do not even know what the real "x", "y" and "z" axes are, so we choose three camera axes at some arbitrary angles with respect to the system. The angles between our measurements might not even be 90°! We then record with the cameras for 2 minutes. The big question remains: how do we get from this data set to a simple equation in x?

We know a priori that if we were smart experimenters, we would have just measured the position along the x-axis with one camera. But this is not what happens in the real world. We often do not know which measurements best reflect the dynamics of the system in question. Furthermore, we sometimes record more dimensions than we actually need. Also, we have to deal with that pesky, real-world problem of noise.
In the toy example this means that we have to deal with air, imperfect cameras, or even friction in a less-than-ideal spring. Noise contaminates our data set, only serving to obfuscate the dynamics further. This toy example is the challenge experimenters face every day. We will refer to this example as we delve further into abstract concepts. Hopefully, by the end of this paper we will have a good understanding of how to systematically extract x using principal component analysis.

3. Framework: Change of Basis

The Goal: Principal component analysis computes the most meaningful basis with which to re-express a noisy, garbled data set. The hope is that this new basis will filter out the noise and reveal hidden dynamics. In the example of the spring, the explicit goal of PCA is to determine that "the dynamics are along the x-axis." In other words, the goal of PCA is to determine that x̂ - the unit basis vector along the x-axis - is the important dimension. Determining this fact allows an experimenter to discern which dynamics are important and which are merely redundant.

i. A Naive Basis

With a more precise definition of our goal, we need a more precise definition of our data as well. For each time sample (or experimental trial), an experimenter records a set of data consisting of multiple measurements (e.g. voltage, position, etc.). The number of measurement types is the dimension of the data set. In the case of the spring, the data set consists of 24,000 six-dimensional vectors (2 minutes recorded at 200 Hz), where each camera contributes a two-dimensional projection of the ball's position. In general, each data sample is a vector in m-dimensional space, where m is the number of measurement types. Equivalently, every time sample is a vector that lies in an m-dimensional vector space spanned by an orthonormal basis, and all measurement vectors in this space are a linear combination of this set of unit-length basis vectors. A naive and simple choice of basis B is the identity matrix I, where each row is a basis vector b_i with m components. To summarize, at one point in time camera A records a corresponding two-dimensional position, and each trial can be expressed as a six-dimensional column vector formed by stacking the three camera projections.
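To make the naive-basis picture concrete, the short sketch below builds such a data matrix in Python with NumPy. It is an illustration added here, not part of the original report: the ball's true motion, the camera directions and the noise level are all made-up assumptions chosen only for the example.

```python
import numpy as np

# Toy example: an ideal spring oscillating along one (unknown) axis,
# observed by three cameras, each recording a 2-D projection at 200 Hz.
rng = np.random.default_rng(0)

fs, duration = 200, 120               # 200 Hz for 2 minutes
t = np.arange(fs * duration) / fs     # n = 24,000 time samples
motion = np.cos(2 * np.pi * 2.0 * t)  # true 1-D dynamics along the x-axis

# Each camera projects the 1-D motion onto its own (arbitrary) 2-D axes.
camera_dirs = rng.normal(size=(6, 1))    # 3 cameras x 2 coordinates = 6 rows
X = camera_dirs @ motion[None, :]        # 6 x n data matrix (m = 6 variables)
X += 0.05 * rng.normal(size=X.shape)     # measurement noise

# In the naive basis B = I, each row of X is simply one recorded coordinate.
B = np.eye(6)
print(X.shape)   # (6, 24000): every column is one six-dimensional time sample
```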
4. PCA

Principal Component Analysis (PCA) is the general name for a technique which uses sophisticated underlying mathematical principles to transform a number of possibly correlated variables into a smaller number of variables called principal components. The origins of PCA lie in multivariate data analysis; however, it has a wide range of other applications. PCA has been called 'one of the most important results from applied linear algebra', and perhaps its most common use is as the first step in trying to analyze large data sets. Some of the other common applications include de-noising signals, blind source separation, and data compression. In general terms, PCA uses a vector space transform to reduce the dimensionality of large data sets: using mathematical projection, the original data set, which may have involved many variables, can often be interpreted in just a few variables (the principal components).

5. The Technical Details of PCA

The principal component analysis for the example above took a large set of data and identified an optimal new basis in which to re-express the data. This mirrors the general aim of the PCA method: can we obtain another basis that is a linear combination of the original basis and that re-expresses the data optimally? There are some ambiguous terms in this statement, which we shall address shortly; for now, let us frame the problem in the following way.
Assume that we start with a data set represented as an m × n matrix X, where the n columns are the samples (e.g. observations) and the m rows are the variables. We wish to linearly transform this matrix X into another matrix Y, also of dimension m × n, so that for some m × m matrix P,

Y = PX.

This equation represents a change of basis. If we consider the rows of P to be the row vectors p_1, p_2, ..., p_m, and the columns of X to be the column vectors x_1, x_2, ..., x_n, then PX can be interpreted as follows:

PX = \begin{pmatrix} p_1 \cdot x_1 & \cdots & p_1 \cdot x_n \\ \vdots & \ddots & \vdots \\ p_m \cdot x_1 & \cdots & p_m \cdot x_n \end{pmatrix} = Y.   (1)

Note that each p_i and x_j is a vector in R^m, and so p_i \cdot x_j is just the standard Euclidean inner (dot) product. This tells us that the original data X is being projected onto the rows of P. Thus, the rows of P, {p_1, p_2, ..., p_m}, are a new basis for representing the columns of X. The rows of P will later become our principal component directions.

We now need to address the issue of what this new basis should be - indeed, what is the 'best' way to re-express the data in X? In other words, how should we define independence between principal components in the new basis? Principal component analysis defines independence by considering the variance of the data in the original basis. It seeks to de-correlate the original data by finding the directions in which variance is maximized, and then uses these directions to define the new basis. Recall the definition of the variance of a random variable Z with mean \mu:

\sigma_Z^2 = E[(Z - \mu)^2].   (2)

Suppose we have a vector of n discrete measurements, r' = (r'_1, r'_2, ..., r'_n), with mean \mu_r. If we subtract the mean from each of the measurements, we obtain a translated set of measurements r = (r_1, r_2, ..., r_n) that has zero mean. The variance of these measurements is then given by the relation

\sigma_r^2 = \frac{1}{n-1} r r^T.   (3)
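As a quick sanity check on this formula (a small illustration added here, not part of the original report), the following snippet compares the matrix expression above with NumPy's built-in sample variance; the measurement values are arbitrary.

```python
import numpy as np

r_prime = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # arbitrary measurements
n = r_prime.size

r = r_prime - r_prime.mean()          # translate to zero mean
var_manual = (r @ r) / (n - 1)        # sigma_r^2 = (1/(n-1)) r r^T
var_numpy = np.var(r_prime, ddof=1)   # unbiased sample variance

print(var_manual, var_numpy)          # both give the same value
```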
If we have a second vector of n measurements, s = (s_1, s_2, ..., s_n), again with zero mean, then we can generalize this idea to obtain the covariance of r and s. Covariance can be thought of as a measure of how much two variables change together; variance is then the special case of covariance in which the two variables are identical. (It is in fact correct to divide through by a factor of n − 1 rather than n, a fact which we shall not justify here.)

\sigma_{rs}^2 = \frac{1}{n-1} r s^T.

We can now generalize this idea to our m × n data matrix X. Recall that m is the number of variables and n the number of samples. We can therefore think of X in terms of m row vectors, each of length n:

X = \begin{pmatrix} x_{1,1} & \cdots & x_{1,n} \\ \vdots & \ddots & \vdots \\ x_{m,1} & \cdots & x_{m,n} \end{pmatrix} = \begin{pmatrix} x_1 \\ \vdots \\ x_m \end{pmatrix} \in \mathbb{R}^{m \times n}, \quad x_i \in \mathbb{R}^{1 \times n}.

Since we have a row vector for each variable, each of these vectors contains all the samples for one particular variable; for example, x_i is the vector of the n samples for the i-th variable. It therefore makes sense to consider the following matrix product:

C_X = \frac{1}{n-1} X X^T = \frac{1}{n-1} \begin{pmatrix} x_1 x_1^T & \cdots & x_1 x_m^T \\ \vdots & \ddots & \vdots \\ x_m x_1^T & \cdots & x_m x_m^T \end{pmatrix} \in \mathbb{R}^{m \times m}.

If we look closely at the entries of this matrix, we see that we have computed all the possible covariance pairs between the m variables. Indeed, on the diagonal entries we have the variances, and on the off-diagonal entries we have the covariances. This matrix is therefore known as the covariance matrix, C_X.
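The covariance matrix is straightforward to compute directly. The short sketch below (an illustration added here, not taken from the report) forms C_X = (1/(n−1)) X Xᵀ for mean-centred data and checks it against NumPy's np.cov, which uses the same n − 1 convention; the random data matrix is an arbitrary stand-in for real measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 1000                       # m variables (rows), n samples (columns)
X = rng.normal(size=(m, n))          # stand-in data matrix

Xc = X - X.mean(axis=1, keepdims=True)   # centre each variable (row) at zero mean
C_X = (Xc @ Xc.T) / (n - 1)              # covariance matrix, m x m

# Diagonal entries are variances, off-diagonal entries are covariances.
assert np.allclose(C_X, np.cov(X))       # np.cov: rows = variables, divides by n - 1
print(np.round(C_X, 3))
```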
Now let us return to the original problem: that of linearly transforming the original data matrix using the relation Y = PX for some matrix P. We need to decide upon some features that we would like the transformed matrix Y to exhibit, and somehow relate this to the features of the corresponding covariance matrix C_Y. Covariance can be considered a measure of how well correlated two variables are. The PCA method makes the fundamental assumption that the variables in the transformed matrix should be as uncorrelated as possible; this is equivalent to saying that the covariances between different variables in the matrix C_Y should be as close to zero as possible (covariance matrices are always positive semi-definite). Conversely, large variance values interest us, since they correspond to interesting dynamics in the system (small variances may well be noise). We therefore have the following requirements for the covariance matrix C_Y:

a. Maximize the signal, measured by variance (maximize the diagonal entries).
b. Minimize the covariance between variables (minimize the off-diagonal entries).

Since the minimum possible covariance is zero, we conclude that we are seeking a diagonal matrix C_Y. If we can choose the transformation matrix P in such a way that C_Y is diagonal, then we will have achieved our objective. We now make the assumption that the vectors in the new basis, p_1, p_2, ..., p_m, are orthogonal (in fact, we additionally assume that they are orthonormal). Far from being restrictive, this assumption enables us to proceed by using the tools of linear algebra to find a solution to the problem. Consider the formula for the covariance matrix C_Y and our interpretation of Y in terms of X and P:

C_Y = \frac{1}{n-1} Y Y^T = \frac{1}{n-1} (PX)(PX)^T = \frac{1}{n-1} P X X^T P^T,

i.e.

C_Y = \frac{1}{n-1} P S P^T, \quad \text{where } S = X X^T.

Note that S is an m × m symmetric matrix, since (X X^T)^T = (X^T)^T X^T = X X^T. We now invoke the well-known theorem from linear algebra that every square symmetric matrix is orthogonally diagonalizable; that is, we can write

S = E D E^T,

where E is an m × m orthonormal matrix whose columns are the orthonormal eigenvectors of S, and D is a diagonal matrix which has the eigenvalues of S as its (diagonal) entries. The rank r of S is the number of orthonormal eigenvectors that it has. If S turns out to be rank-deficient, so that r is less than the size m of the matrix, then we simply generate m − r additional orthonormal vectors to fill the remaining columns of E.

It is at this point that we make a choice for the transformation matrix P. By choosing the rows of P to be the eigenvectors of S, we ensure that P = E^T and vice versa. Substituting this into our derived expression for the covariance matrix C_Y gives

C_Y = \frac{1}{n-1} P S P^T = \frac{1}{n-1} E^T (E D E^T) E.

Now, since E is an orthonormal matrix, we have E^T E = I, where I is the m × m identity matrix. Hence, for this special choice of P, we have

C_Y = \frac{1}{n-1} D.

A last point to note is that with this method we automatically gain information about the relative importance of each principal component from the variances: the largest variance corresponds to the first principal component, the second largest to the second principal component, and so on. This gives us a method for organizing the data in the diagonalization stage. Once we have obtained the eigenvalues and eigenvectors of S = X X^T, we sort the eigenvalues in descending order and place them in this order on the diagonal of D. We then construct the orthonormal matrix E by placing the associated eigenvectors in the same order to form the columns of E (i.e. the eigenvector corresponding to the largest eigenvalue in the first column, the eigenvector corresponding to the second largest eigenvalue in the second column, and so on). We have therefore achieved our objective of diagonalizing the covariance matrix of the transformed data. The principal components (the rows of P) are the eigenvectors of X X^T (equivalently, of the covariance matrix C_X), and the rows are in order of 'importance', telling us how 'principal' each principal component is.
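The whole procedure derived above can be summarized in a few lines of NumPy. This sketch is an illustrative addition rather than code from the report: it centres the data, forms the covariance matrix, diagonalizes it, orders the eigenvectors by decreasing eigenvalue, and uses their transpose as the matrix P; the correlated random data is an arbitrary stand-in.

```python
import numpy as np

def pca(X):
    """PCA of an m x n data matrix X (rows = variables, columns = samples).

    Returns (P, variances): P has the principal component directions as rows,
    ordered by decreasing variance; variances are the corresponding eigenvalues.
    """
    n = X.shape[1]
    Xc = X - X.mean(axis=1, keepdims=True)      # remove the mean of each variable
    C = (Xc @ Xc.T) / (n - 1)                   # covariance matrix C_X
    eigvals, E = np.linalg.eigh(C)              # symmetric eigendecomposition (ascending)
    order = np.argsort(eigvals)[::-1]           # sort eigenvalues in descending order
    eigvals, E = eigvals[order], E[:, order]
    P = E.T                                     # rows of P are the eigenvectors of C_X
    return P, eigvals

# Example: correlated 3-variable data whose dominant direction PCA should recover.
rng = np.random.default_rng(2)
latent = rng.normal(size=(1, 500))
X = np.array([[1.0], [2.0], [-1.0]]) @ latent + 0.1 * rng.normal(size=(3, 500))

P, variances = pca(X)
Y = P @ (X - X.mean(axis=1, keepdims=True))     # re-expressed data, Y = PX
print(variances)                                # first component carries most variance
print(np.round(np.cov(Y), 3))                   # C_Y is (numerically) diagonal
```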
My aim in writing this article was that somebody with a similar level of mathematical knowledge to my own (i.e. early graduate level) would be able to gain a good introductory understanding of PCA by reading this essay. I hope that they would understand that it is a diverse tool in data analysis with many applications, three of which are covered in detail here, and that they would gain a good understanding of the surrounding mathematics and of the close link that PCA has with the singular value decomposition. I embarked upon writing this essay with only one application in mind, that of blind source separation. However, when it came to researching the topic in detail, I found that there were many interesting applications of PCA, and I identified dimensionality reduction in multivariate data analysis and image compression as two of the most appealing alternative applications. Though it is a powerful technique with a diverse range of possible applications, it is fair to say that PCA is not necessarily the best way to deal with every one of the sample applications discussed here.
6. APPLICATIONS

Neuroscience: A variant of principal component analysis is used in neuroscience to identify the specific properties of a stimulus that increase a neuron's probability of generating an action potential. This technique is known as spike-triggered covariance analysis. In a typical application, an experimenter presents a white noise process as a stimulus (usually either as a sensory input to a test subject, or as a current injected directly into the neuron) and records the train of action potentials, or spikes, produced by the neuron as a result. Presumably, certain features of the stimulus make the neuron more likely to spike. In order to extract these features, the experimenter calculates the covariance matrix of the spike-triggered ensemble, the set of all stimuli (defined and discretized over a finite time window, typically on the order of 100 ms) that immediately preceded a spike. The eigenvectors of the difference between the spike-triggered covariance matrix and the covariance matrix of the prior stimulus ensemble (the set of all stimuli, defined over the same length of time window) then indicate the directions in the space of stimuli along which the variance of the spike-triggered ensemble differed most from that of the prior stimulus ensemble. Specifically, the eigenvectors with the largest positive eigenvalues correspond to the directions along which the variance of the spike-triggered ensemble showed the largest positive change compared to the variance of the prior. Since these were the directions in which varying the stimulus led to a spike, they are often good approximations of the sought-after relevant stimulus features.
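As a rough illustration of the procedure just described (added here for clarity, using entirely synthetic data and an assumed, made-up stimulus filter and spiking nonlinearity), the following sketch builds a spike-triggered ensemble from a white-noise stimulus and eigendecomposes the difference between its covariance and that of the prior ensemble.

```python
import numpy as np

rng = np.random.default_rng(3)
T, win = 200_000, 20                       # stimulus length and window length (samples)
stimulus = rng.normal(size=T)              # white-noise stimulus

# Hypothetical neuron: spiking is driven by how well the last `win` samples
# match an assumed (made-up) temporal filter.
true_filter = np.hanning(win)
drive = np.convolve(stimulus, true_filter[::-1], mode="full")[:T]
spike_prob = 1.0 / (1.0 + np.exp(-(drive - 3.0)))   # arbitrary spiking nonlinearity
spikes = rng.random(T) < spike_prob

# Prior ensemble: every stimulus window; spike-triggered ensemble: windows ending at a spike.
windows = np.lib.stride_tricks.sliding_window_view(stimulus, win)   # (T - win + 1, win)
spike_idx = np.flatnonzero(spikes[win - 1:])
prior_cov = np.cov(windows, rowvar=False)
stc_cov = np.cov(windows[spike_idx], rowvar=False)

# Eigenvectors of the covariance difference give the stimulus directions along which
# the spike-triggered variance changed most relative to the prior ensemble.
eigvals, eigvecs = np.linalg.eigh(stc_cov - prior_cov)
strongest = eigvecs[:, np.argmax(np.abs(eigvals))]
print(eigvals[:2], eigvals[-2:])          # most negative and most positive eigenvalues
print(np.round(strongest, 2))             # direction of the largest variance change
```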
In neuroscience, PCA is also used to discern the identity of a neuron from the shape of its action potential. Spike sorting is an important procedure because extracellular recording techniques often pick up signals from more than one neuron. In spike sorting, one first uses PCA to reduce the dimensionality of the space of action potential waveforms, and then performs clustering analysis to associate specific action potentials with individual neurons. As a dimensionality reduction technique, PCA is particularly suited to detecting the coordinated activity of large neuronal ensembles; it has been used to determine collective variables, i.e. order parameters, during phase transitions in the brain.
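A minimal sketch of that spike-sorting recipe is given below. It is an added illustration with synthetic waveforms and a deliberately simple two-cluster k-means loop; real pipelines use more careful preprocessing and clustering.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic action-potential waveforms from two hypothetical neurons (40 samples each).
t = np.linspace(0.0, 1.0, 40)
shape_a = np.exp(-((t - 0.30) / 0.05) ** 2) - 0.4 * np.exp(-((t - 0.50) / 0.10) ** 2)
shape_b = 0.7 * np.exp(-((t - 0.35) / 0.08) ** 2) - 0.8 * np.exp(-((t - 0.60) / 0.10) ** 2)
source = rng.integers(0, 2, size=300)
waveforms = np.where(source[:, None] == 0, shape_a, shape_b)
waveforms = waveforms + 0.05 * rng.normal(size=waveforms.shape)     # recording noise

# Step 1: PCA reduces each 40-sample waveform to two principal component scores.
Xc = waveforms - waveforms.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # rows of Vt are the principal directions
features = Xc @ Vt[:2].T                            # 300 x 2 matrix of PC scores

# Step 2: cluster the low-dimensional scores (a bare-bones 2-means loop).
centroids = features[rng.choice(len(features), size=2, replace=False)]
for _ in range(20):
    assign = np.argmin(((features[:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)
    centroids = np.array([features[assign == k].mean(axis=0) if np.any(assign == k)
                          else centroids[k] for k in range(2)])

print(np.bincount(assign))    # number of spikes attributed to each putative neuron
```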
7. ADVANTAGE AND DISADVANTAGE OF PCA

PCA's key advantages are its low noise sensitivity, its decreased requirements for capacity and memory, and its increased efficiency, given that the processing takes place in a smaller number of dimensions. The main advantages of PCA are listed below:

• Lack of redundancy in the data, given the orthogonal components.
• Reduced complexity in grouping images with the use of PCA.
• Smaller database representation, since only the training images are stored in the form of their projections onto a reduced basis.
• Reduction of noise, since the maximum-variation basis is chosen and small variations in the background are therefore ignored automatically.

Two key disadvantages of PCA are:

• The covariance matrix is difficult to evaluate accurately.
• Even the simplest invariance cannot be captured by PCA unless the training data explicitly provides this information.

8. FEATURES OF PCA

Table 1. The features of PCA
9. CONCLUSION

For the multivariate data analysis example, we were able to identify that the inhabitants of Northern Ireland were in some way different in their dietary habits from those of the other three countries in the UK. We were also able to associate particular food groups with the eating habits of Northern Ireland, yet we were limited in our ability to make distinctions between the dietary habits of the English, Scottish and Welsh. In order to explore this avenue, it would perhaps be necessary to perform a similar analysis on just those three countries.

Image compression (and, more generally, data compression) is by now a mature field, and there are many sophisticated technologies available that perform this task. JPEG is an obvious and comparable example (JPEG can also involve lossy compression). JPEG utilizes the discrete cosine transform to convert the image to a frequency-domain representation, and generally achieves much higher quality for similar compression ratios when compared to PCA. Having said this, PCA is a nice technique in its own right for implementing image compression, and it is pleasing to find such a neat implementation.

As we saw in the last example, blind source separation can cause problems for PCA under certain circumstances. PCA will not be able to separate the individual sources if the signals are combined nonlinearly, and can produce spurious results even if the combination is linear. PCA will also fail for BSS if the data is non-Gaussian. In this situation, a well-known technique that works is Independent Component Analysis (ICA). The main philosophical difference between the two methods is that PCA defines independence using variance, whilst ICA defines independence using statistical independence - it identifies the principal components by maximizing the statistical independence between each of the components.
SEMINAR REPORT
On
PCA BASED FACE RECOGNITION SYSTEM

Submitted in partial fulfillment of the requirement for the award of
BACHELOR OF TECHNOLOGY
in
ELECTRICAL INSTRUMENTATION & CONTROL ENGINEERING

By
SIKHA DASH
(Regd no. - 1241019069)

DEPARTMENT OF ELECTRICAL INSTRUMENTATION AND CONTROL ENGINEERING
INSTITUTE OF TECHNICAL EDUCATION AND RESEARCH
SIKSHA ‘O’ ANUSANDHAN UNIVERSITY
(Declared u/s. 3 of the UGC Act, 1956)
Jagamohan Nagar, Jagamara, Bhubaneswar – 751030.
2012-2016