Math 20
Chapter 5 Eigenvalues and Eigenvectors
1 Eigenvalues and Eigenvectors
1. Definition: A scalar λ is called an eigenvalue of the n × n matrix A if there is a nontrivial solution
x of Ax = λx. Such an x is called an eigenvector corresponding to the eigenvalue λ.
2. What does this mean geometrically? Suppose that A is the standard matrix for a linear transformation
T : Rⁿ → Rⁿ. Then if Ax = λx, it follows that T(x) = λx. This means that if x is an eigenvector of
A, then the image of x under the transformation T is a scalar multiple of x, and the scalar involved
is the corresponding eigenvalue λ. In other words, the image of x is parallel to x (see the numerical
sketch at the end of this section).
3. Note that an eigenvector cannot be 0, but an eigenvalue can be 0.
4. Suppose that 0 is an eigenvalue of A. What does that say about A? There must be some nontrivial
vector x for which
Ax = 0x = 0,
which implies that A is not invertible, which in turn implies a whole lot of things, given our Invertible
Matrix Theorem.
5. Invertible Matrix Theorem Again: The n × n matrix A is invertible if and only if 0 is not an
eigenvalue of A.
6. Definition: The eigenspace of the n × n matrix A corresponding to the eigenvalue λ of A is the set of
all eigenvectors of A corresponding to λ, together with the zero vector.
7. We’re not used to analyzing equations like Ax = λx where the unknown vector x appears on both
sides of the equation. Let’s find an equivalent equation in standard form.
Ax = λx
Ax − λx = 0
Ax − λIx = 0
(A − λI)x = 0
8. Thus x is an eigenvector of A corresponding to the eigenvalue λ if and only if x is a nontrivial solution
of (A − λI)x = 0.
9. It follows that the eigenspace of λ is the null space of the matrix A − λI and hence is a subspace of Rⁿ.
10. Later in Chapter 5, we will find out that it is useful to find a set of linearly independent eigenvectors
for a given matrix. The following theorem provides one way of doing so. See page 307 for a proof of
this theorem.
11. Theorem 2: If v1, . . . , vr are eigenvectors that correspond to distinct eigenvalues λ1, . . . , λr of an
n × n matrix A, then the set {v1, . . . , vr} is linearly independent.
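To make the definition and Theorem 2 concrete, here is a minimal numerical sketch, assuming NumPy is available; the matrix is a hypothetical example chosen for illustration, not one from the notes. It checks that each eigenvector is mapped to a parallel vector, and that eigenvectors for distinct eigenvalues are linearly independent.

    import numpy as np

    # Hypothetical 3 x 3 matrix with three distinct eigenvalues (4, 2, -1).
    A = np.array([[4.0, 1.0,  0.0],
                  [0.0, 2.0,  1.0],
                  [0.0, 0.0, -1.0]])

    vals, vecs = np.linalg.eig(A)   # columns of vecs are eigenvectors

    # Definition check: A x equals lam * x, so the image of x is parallel to x.
    for lam, x in zip(vals, vecs.T):
        assert np.allclose(A @ x, lam * x)

    # Theorem 2 check: eigenvectors for distinct eigenvalues are independent,
    # so the matrix with these vectors as columns has full rank.
    assert np.linalg.matrix_rank(vecs) == 3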
2 Determinants
1. Recall that if λ is an eigenvalue of the n × n matrix A, then there is a nontrivial solution x to the
equation
Ax = λx
or, equivalently, to the equation
(A − λI)x = 0.
(We call this nontrivial solution x an eigenvector corresponding to λ.)
2. Note that this second equation has a nontrivial solution if and only if the matrix A − λI is not invertible.
Why? If the matrix is not invertible, then it does not have a pivot position in each column (by the
Invertible Matrix Theorem), which implies that the homogeneous system has at least one free variable,
which in turn implies that the homogeneous system has a nontrivial solution. Conversely, if the matrix
is invertible, then the only solution is the trivial solution.
3. To find the eigenvalues of A we need a condition on λ that is equivalent to the equation (A − λI)x = 0
having a nontrivial solution. This is where determinants come in.
4. We skipped Chapter 3, which is all about determinants, so here’s a recap of just what we need to know
about them.
5. Formula: The determinant of the 2 × 2 matrix

       A = [ a  b ]
           [ c  d ]

   is det A = ad − bc.
6. Formula: The determinant of the 3 × 3 matrix

       A = [ a11  a12  a13 ]
           [ a21  a22  a23 ]
           [ a31  a32  a33 ]

   is

       det A = a11a22a33 + a12a23a31 + a13a21a32
             − a31a22a13 − a32a23a11 − a33a21a12.

   See page 191 for a useful way of remembering this formula.
7. Theorem: The determinant of an n × n matrix A is 0 if and only if the matrix A is not invertible.
8. That’s useful! We’re looking for values of λ for which the equation (A − λI)x = 0 has a nontrivial
solution. This happens if and only if the matrix A − λI is not invertible. This happens if and only if
the determinant of A − λI is 0. This leads us to the characteristic equation of A.
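As a quick check on the two determinant formulas above, here is a minimal sketch, assuming NumPy is available; det2 and det3 are hypothetical helper names implementing items 5 and 6, and the 3 × 3 test matrix is the one used in the next section.

    import numpy as np

    def det2(M):
        # 2 x 2 formula from item 5: ad - bc
        return M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]

    def det3(M):
        # 3 x 3 formula from item 6
        return (M[0, 0] * M[1, 1] * M[2, 2]
              + M[0, 1] * M[1, 2] * M[2, 0]
              + M[0, 2] * M[1, 0] * M[2, 1]
              - M[2, 0] * M[1, 1] * M[0, 2]
              - M[2, 1] * M[1, 2] * M[0, 0]
              - M[2, 2] * M[1, 0] * M[0, 1])

    assert np.isclose(det2(np.array([[1.0, 2.0], [3.0, 4.0]])), -2.0)

    M = np.array([[3.0, 6.0, -8.0],
                  [0.0, 0.0,  6.0],
                  [0.0, 0.0,  2.0]])
    # Both give 0, so by item 7 this matrix is not invertible.
    assert np.isclose(det3(M), np.linalg.det(M))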
3 The Characteristic Equation
1. Theorem: A scalar λ is an eigenvalue of an n × n matrix A if and only if λ satisfies the characteristic
equation
det(A − λI) = 0.
2. It can be shown that if A is an n × n matrix, then det(A − λI) is a polynomial in the variable λ of
degree n. We call this polynomial the characteristic polynomial of A.
3. Example: Consider the matrix

       A = [ 3  6  −8 ]
           [ 0  0   6 ]
           [ 0  0   2 ]

   To find the eigenvalues of A, we must compute det(A − λI), set this expression equal to 0, and solve
   for λ. Note that

       A − λI = [ 3  6  −8 ]   [ λ  0  0 ]   [ 3 − λ    6      −8   ]
                [ 0  0   6 ] − [ 0  λ  0 ] = [   0     −λ       6   ]
                [ 0  0   2 ]   [ 0  0  λ ]   [   0      0     2 − λ ]
Since this is a 3 × 3 matrix, we can use the formula given above to find its determinant.
det(A − λI) = (3 − λ)(−λ)(2 − λ) + (6)(6)(0) + (−8)(0)(0)
− (0)(−λ)(−8) − (0)(6)(3 − λ) − (−λ)(0)(6)
= −λ(3 − λ)(2 − λ)
Setting this equal to 0 and solving for λ, we get that λ = 0, 2, or 3. These are the three eigenvalues of
A.
4. Note that A is a triangular matrix. (A triangular matrix has the property that either all of its entries
below the main diagonal are 0 or all of its entries above the main diagonal are 0.) It turned out that
the eigenvalues of A were the entries on the main diagonal of A. This is true for any triangular matrix,
but is generally not true for matrices that are not triangular.
5. Theorem 1: The eigenvalues of a triangular matrix are the entries on its main diagonal.
6. In the above example, the characteristic polynomial turned out to be −λ(λ − 3)(λ − 2). Each of the
factors λ, λ − 3, and λ − 2 appeared precisely once in this factorization. Suppose the characteristic
polynomial had turned out to be −λ(λ − 3)² instead. In this case, the factor λ − 3 would appear twice and so we
would say that the corresponding eigenvalue, 3, has multiplicity 2.
7. Definition: In general, the multiplicity of an eigenvalue λ0 is the number of times the factor λ − λ0
appears in the characteristic polynomial.
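Before continuing the example by hand, here is a minimal numerical check of the eigenvalues found above, assuming NumPy is available. One caveat: np.poly returns the coefficients of det(λI − A), which for odd n differs from det(A − λI) by an overall sign but has exactly the same roots.

    import numpy as np

    A = np.array([[3.0, 6.0, -8.0],
                  [0.0, 0.0,  6.0],
                  [0.0, 0.0,  2.0]])

    # Characteristic polynomial coefficients, highest degree first:
    # [1, -5, 6, 0], i.e. lambda^3 - 5 lambda^2 + 6 lambda,
    # which is -(det(A - lambda I)) = -(-lambda(3 - lambda)(2 - lambda)).
    print(np.poly(A))

    # Its roots are the eigenvalues 0, 2, 3 found above.
    print(np.sort(np.linalg.eigvals(A)))   # [0. 2. 3.]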
4 Finding Eigenvectors
1. Example (Continued): Let us now find the eigenvectors of the matrix

       A = [ 3  6  −8 ]
           [ 0  0   6 ]
           [ 0  0   2 ]

   We have to take each of its three eigenvalues 0, 2, and 3 in turn.
2. To find the eigenvectors corresponding to the eigenvalue 0, we need to solve the equation (A−λI)x = 0
where λ = 0. That is, we need to solve
(A − λI)x = 0
(A − 0I)x = 0
Ax = 0
       [ 3  6  −8 ]
       [ 0  0   6 ] x = 0
       [ 0  0   2 ]
Row reducing the augmented matrix, we find that
       x = [ x1 ]        [ −2 ]
           [ x2 ]  =  x2 [  1 ]
           [ x3 ]        [  0 ]
This tells us that the eigenvectors corresponding to the eigenvalue 0 are precisely the nonzero scalar
multiples of the vector (−2, 1, 0). In other words, the eigenspace corresponding to the eigenvalue 0 is

       Span{ (−2, 1, 0) }.
3. To find the eigenvectors corresponding to the eigenvalue 2, we need to solve the equation (A−λI)x = 0
where λ = 2. That is, we need to solve
(A − λI)x = 0
(A − 2I)x = 0
       ( [ 3  6  −8 ]   [ 2  0  0 ] )
       ( [ 0  0   6 ] − [ 0  2  0 ] ) x = 0
       ( [ 0  0   2 ]   [ 0  0  2 ] )

       [ 1   6  −8 ]
       [ 0  −2   6 ] x = 0
       [ 0   0   0 ]
Row reducing the augmented matrix, we find that
       x = [ x1 ]        [ −10 ]
           [ x2 ]  =  x3 [   3 ]
           [ x3 ]        [   1 ]
This tells us that the eigenvectors corresponding to the eigenvalue 2 are precisely the nonzero scalar
multiples of the vector (−10, 3, 1). In other words, the eigenspace corresponding to the eigenvalue 2 is

       Span{ (−10, 3, 1) }.
4. I’ll let you find the eigenvectors corresponding to the eigenvalue 3.
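Rather than giving the answer away, here is a minimal sketch, assuming NumPy is available, that you can use to check your work; it reads the eigenspace off the singular value decomposition of A − 3I.

    import numpy as np

    A = np.array([[3.0, 6.0, -8.0],
                  [0.0, 0.0,  6.0],
                  [0.0, 0.0,  2.0]])
    lam = 3.0

    # The eigenspace is the null space of A - lam * I. The rows of Vt whose
    # singular values are (numerically) zero span that null space.
    _, s, Vt = np.linalg.svd(A - lam * np.eye(3))
    basis = Vt[s < 1e-10].T        # columns form a basis of the eigenspace
    print(basis)                   # a multiple of (1, 0, 0)

    x = basis[:, 0]
    assert np.allclose(A @ x, lam * x)   # x really is an eigenvector for 3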
5 Similar Matrices
1. Definition: The n × n matrices A and B are said to be similar if there is an invertible n × n matrix
P such that A = PBP⁻¹.
2. Similar matrices have at least one useful property, as seen in the following theorem. See page 315 for
a proof of this theorem.
3. Theorem 4: If n × n matrices are similar, then they have the same characteristic polynomial and
hence the same eigenvalues (with the same multiplicities).
4. Note that if the n × n matrices A and B are row equivalent, then they are not necessarily similar. For a
simple counterexample, consider the row equivalent matrices

       A = [ 2  0 ]   and   B = [ 1  0 ]
           [ 0  1 ]             [ 0  1 ]

   If these two matrices were similar, then there would exist an invertible matrix P such that A = PBP⁻¹. Since B
is the identity matrix, this means that A = PIP⁻¹ = PP⁻¹ = I. Since A is not the identity matrix,
we have a contradiction, and so A and B cannot be similar.
5. We can also use Theorem 4 to show that row equivalent matrices are not necessarily similar: Similar
matrices have the same eigenvalues but row equivalent matrices often do not have the same eigenvalues.
(Imagine scaling a row of a triangular matrix. This would change one of the matrix’s diagonal entries
which changes its eigenvalues. Thus we would get a row equivalent matrix with different eigenvalues,
so the two matrices could not be similar by Theorem 4.)
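Here is the same counterexample checked numerically, a minimal sketch assuming NumPy is available.

    import numpy as np

    A = np.array([[2.0, 0.0],
                  [0.0, 1.0]])
    B = np.eye(2)    # A is obtained from B by scaling the first row by 2

    print(np.linalg.eigvals(A))   # [2. 1.]
    print(np.linalg.eigvals(B))   # [1. 1.]
    # Different eigenvalues, so by Theorem 4 these row equivalent
    # matrices cannot be similar.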
6 Diagonalization
1. Definition: A square matrix A is said to be diagonalizable if it is similar to a diagonal matrix. In
other words, a diagonalizable matrix A has the property that there exists an invertible matrix P and a
diagonal matrix D such that A = PDP⁻¹.
2. Why is this useful? Suppose you wanted to find A³. If A is diagonalizable, then

       A³ = (PDP⁻¹)³
          = (PDP⁻¹)(PDP⁻¹)(PDP⁻¹)
          = PDP⁻¹PDP⁻¹PDP⁻¹
          = PD(P⁻¹P)D(P⁻¹P)DP⁻¹
          = PDDDP⁻¹
          = PD³P⁻¹.

   In general, if A = PDP⁻¹, then Aᵏ = PDᵏP⁻¹.
3. Why is this useful? Because powers of diagonal matrices are relatively easy to compute. For example, if

       D = [ 7   0   0 ]
           [ 0  −2   0 ]
           [ 0   0   3 ]

   then

       D³ = [ 7³    0      0  ]
            [ 0   (−2)³    0  ]
            [ 0     0      3³ ]

   This means that finding Aᵏ involves only two matrix multiplications (plus taking kth powers of the
   diagonal entries) instead of the k − 1 matrix multiplications that would be necessary to multiply A
   by itself k times.
4. It turns out that an n × n matrix is diagonalizable if and only if it has n linearly independent eigenvectors.
That's what the following theorem says. See page 321 for a proof of this theorem.
5. Theorem 5 (The Diagonalization Theorem):
(a) An n × n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors.
(b) If v1, v2, . . . , vn are linearly independent eigenvectors of A and λ1, λ2, . . . , λn are their corresponding
    eigenvalues, then A = PDP⁻¹, where

        P = [ v1  v2  · · ·  vn ]

    and

        D = [ λ1   0   · · ·   0  ]
            [  0   λ2  · · ·   0  ]
            [  ⋮    ⋮     ⋱     ⋮  ]
            [  0    0   · · ·  λn ]

(c) If A = PDP⁻¹ and D is a diagonal matrix, then the columns of P must be linearly independent
eigenvectors of A and the diagonal entries of D must be their corresponding eigenvalues.
6. What can we make of this theorem? If we can find n linearly independent eigenvectors for an n × n
matrix A, then we know the matrix is diagonalizable. Furthermore, we can use those eigenvectors and
their corresponding eigenvalues to find the invertible matrix P and diagonal matrix D necessary to
show that A is diagonalizable.
7. Theorem 4 told us that similar matrices have the same eigenvalues (with the same multiplicities). So
if A is similar to a diagonal matrix D (that is, if A is diagonalizable), then the eigenvalues of D must
be the eigenvalues of A. Since D is a diagonal matrix (and hence triangular), the eigenvalues of D
must lie on its main diagonal. Since these are the eigenvalues of A as well, the eigenvalues of A must
be the entries on the main diagonal of D. This confirms that the choice of D given in the theorem
makes sense.
8. See your class notes or Example 3 on page 321 for examples of the Diagonalization Theorem in action.
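As one more example of the theorem in action, here is a minimal sketch, assuming NumPy is available, that diagonalizes the matrix A from our running example using the eigenvectors found in Section 4 together with the eigenvector (1, 0, 0) for the eigenvalue 3.

    import numpy as np

    A = np.array([[3.0, 6.0, -8.0],
                  [0.0, 0.0,  6.0],
                  [0.0, 0.0,  2.0]])

    # Columns of P: eigenvectors for lambda = 0, 2, 3; diagonal of D: those
    # eigenvalues in the same order, exactly as Theorem 5(b) prescribes.
    P = np.array([[-2.0, -10.0, 1.0],
                  [ 1.0,   3.0, 0.0],
                  [ 0.0,   1.0, 0.0]])
    D = np.diag([0.0, 2.0, 3.0])

    assert np.allclose(A, P @ D @ np.linalg.inv(P))   # A = P D P^-1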
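The power formula Aᵏ = PDᵏP⁻¹ from item 2 of this section can be checked the same way; a minimal standalone sketch, assuming NumPy is available and using a hypothetical 2 × 2 matrix.

    import numpy as np

    # Hypothetical diagonalizable matrix (distinct eigenvalues 4 and 2).
    A = np.array([[4.0, 1.0],
                  [0.0, 2.0]])

    vals, P = np.linalg.eig(A)    # Theorem 5: columns of P are eigenvectors
    k = 5
    Dk = np.diag(vals ** k)       # D^k: kth powers of the diagonal entries
    Ak = P @ Dk @ np.linalg.inv(P)

    assert np.allclose(Ak, np.linalg.matrix_power(A, k))   # A^k = P D^k P^-1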