Inner (or dot) product
• Given vT = (x1, x2, . . . , xn) and wT = (y1, y2, . . . , yn),
their dot product is defined as follows:
v . w = vTw = x1y1 + x2y2 + . . . + xnyn
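As a concrete numeric check of this definition (a minimal sketch using NumPy; the vectors are illustrative choices):

```python
import numpy as np

# v . w = 1*4 + 2*5 + 3*6 = 32
v = np.array([1, 2, 3])
w = np.array([4, 5, 6])
print(np.dot(v, w))  # 32
```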
Orthogonal / Orthonormal vectors
• A set of vectors x1, x2, . . . , xn is orthogonal if xi . xj = 0 for all i ≠ j
• A set of vectors x1, x2, . . . , xn is orthonormal if it is orthogonal and xi . xi = 1 for all i
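For example, the standard unit vectors of R3 satisfy both conditions (sketched with NumPy):

```python
import numpy as np

i = np.array([1.0, 0.0, 0.0])
j = np.array([0.0, 1.0, 0.0])
k = np.array([0.0, 0.0, 1.0])
# Orthogonal: pairwise dot products are 0
print(np.dot(i, j), np.dot(i, k), np.dot(j, k))  # 0.0 0.0 0.0
# Orthonormal: each vector also has unit length
print(np.dot(i, i), np.dot(j, j), np.dot(k, k))  # 1.0 1.0 1.0
```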
• A vector v is a linear combination of the
vectors v1, ..., vk if:
v = c1v1 + c2v2 + . . . + ckvk
where c1, ..., ck are constants.
Example: any vector in R3 can be expressed
as a linear combination of the unit vectors
i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1)
• A set of vectors S = (v1, v2, . . . , vk) spans a
space W if every vector v in W can be written
as a linear combination of the vectors in S
Example: the unit vectors i, j, and k span R3
• A set of vectors v1, ..., vk is linearly dependent
if at least one of them (e.g., vj) can be written
as a linear combination of the rest:
vj = c1v1 + . . . + cj-1vj-1 + cj+1vj+1 + . . . + ckvk
(i.e., vj does not appear on the right side of the equation)
• A set of vectors v1, ..., vk is linearly independent
if no vector vj can be represented as a linear
combination of the remaining vectors, i.e.:
c1v1 + c2v2 + . . . + ckvk = 0 only if c1 = c2 = . . . = ck = 0
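One practical test, sketched with NumPy (the vectors are illustrative): stack the vectors as the columns of a matrix and compare its rank with the number of vectors; full rank means linearly independent.

```python
import numpy as np

# v3 = v1 + v2, so this set is linearly dependent.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([1.0, 1.0, 0.0])
A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))  # 2  (< 3 vectors, so they are dependent)
```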
• A set of vectors v1, ..., vk forms a basis in
some vector space W if:
(1) (v1, ..., vk) span W
(2) (v1, ..., vk) are linearly independent
Some standard bases:
R2: e1 = (1, 0), e2 = (0, 1)
R3: i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1)
Rn: e1 = (1, 0, ..., 0), e2 = (0, 1, ..., 0), . . . , en = (0, 0, ..., 1)
Orthogonal vector basis
• Basis vectors are not necessarily orthogonal.
• Any set of basis vectors (v1, ..., vk) can be
transformed to an orthogonal basis using the
Gram-Schmidt orthogonalization algorithm.
• Normalizing the basis vectors to “unit” length will
yield an orthonormal basis.
• Orthonormal bases are more useful in practice since they simplify computations.
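The Gram-Schmidt step can be sketched as follows (a minimal implementation; the input vectors below are illustrative choices):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        # Subtract the projection of v onto every vector found so far.
        u = v - sum(np.dot(v, b) / np.dot(b, b) * b for b in basis)
        basis.append(u)
    return basis

v1, v2 = np.array([3.0, 1.0]), np.array([2.0, 2.0])
u1, u2 = gram_schmidt([v1, v2])
print(np.dot(u1, u2))  # ~0: the new basis is orthogonal
# Normalizing each vector to unit length yields an orthonormal basis:
e1, e2 = u1 / np.linalg.norm(u1), u2 / np.linalg.norm(u2)
```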
• Suppose v1, v2, . . . , vn is an orthogonal basis
in W; then any v ∈ W can be represented in
this basis as follows:
v = x1v1 + x2v2 + . . . + xnvn (vector expansion or projection)
• The xi of the expansion can be computed as follows:
xi = (v . vi) / (vi . vi) (coefficients of expansion or projection)
Note: if the basis is orthonormal, then vi . vi = 1 and xi = v . vi
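A minimal sketch of computing the expansion coefficients with NumPy (the orthogonal basis and the vector v are illustrative choices):

```python
import numpy as np

# Orthogonal (but not unit-length) basis of R^2.
v1, v2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
v = np.array([3.0, 5.0])
# Coefficients of expansion: xi = (v . vi) / (vi . vi)
x1 = np.dot(v, v1) / np.dot(v1, v1)  # 4.0
x2 = np.dot(v, v2) / np.dot(v2, v2)  # -1.0
print(x1 * v1 + x2 * v2)  # [3. 5.]  -- the expansion reconstructs v
```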
Vector basis (cont’d)
• Given a set of basis vectors in W, each vector
in W can be represented (i.e., projected)
“uniquely” in this basis.
• How many vector bases are there in a vector space?
– Many, e.g., translate/rotate a given vector basis to
obtain a new one!
– Some vector bases are preferred over others, though.
– We will revisit this issue when we discuss Principal
Components Analysis (PCA).
• A vector x ∈ Rn is represented
by n components:
xT = (x1, x2, …, xn)
• Assuming the standard basis
<v1, v2, …, vn> (i.e., unit vectors
in each dimension), each xi is
the projection of x along the
direction of vi:
xi = x . vi
• x can be “reconstructed” from
its projection coefficients as follows:
x = x1v1 + x2v2 + . . . + xnvn
Vector Reconstruction (cont’d)
• Example assuming n = 2:
xT = (3, 4)
• Assuming the standard basis
<v1 = i, v2 = j>, xi can be obtained
by projecting x along the
direction of vi:
x1 = x . i = 3, x2 = x . j = 4
• x can be reconstructed from its
projection coefficients as follows:
x = 3i + 4j = 3(1, 0) + 4(0, 1) = (3, 4)
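The same two-dimensional reconstruction, checked numerically with NumPy:

```python
import numpy as np

i, j = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x = np.array([3.0, 4.0])
x1, x2 = np.dot(x, i), np.dot(x, j)  # projections: 3.0 and 4.0
print(x1 * i + x2 * j)  # [3. 4.]  -- x is reconstructed exactly
```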
• Matrix addition/subtraction
– Add/Subtract corresponding elements.
– Matrices must be of same size.
• Matrix multiplication
A (m x n) . B (q x p) = C (m x p)
Condition: n = q
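The dimension rule can be checked with NumPy (the shapes below are illustrative):

```python
import numpy as np

A = np.ones((2, 3))  # m x n = 2 x 3
B = np.ones((3, 4))  # q x p = 3 x 4; n = q, so the product is defined
C = A @ B
print(C.shape)  # (2, 4) -> m x p
```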
• The inverse of a matrix A, denoted as A-1, has the property:
A A-1 = A-1A = I
• A-1 exists only if A is square and det(A) ≠ 0
– Singular matrix: A-1 does not exist
– Ill-conditioned matrix: A is “close” to being singular
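A quick NumPy illustration of the inverse and of singularity (the matrices are illustrative choices):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])
A_inv = np.linalg.inv(A)
print(A @ A_inv)  # ~identity matrix

# Singular matrix: the second row is twice the first, so det = 0
# and no inverse exists.
S = np.array([[1.0, 2.0], [2.0, 4.0]])
print(np.linalg.det(S))  # ~0
# np.linalg.cond(A) quantifies how close A is to singular;
# a very large condition number flags an ill-conditioned matrix.
```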
Eigenvalues and Eigenvectors
• The vector v is an eigenvector of matrix A and
λ is an eigenvalue of A if:
Av = λv (assuming v is non-zero)
Geometric interpretation: the linear transformation
implied by A cannot change the direction of the
eigenvectors v, only their magnitude.
Computing λ and v
• To compute the eigenvalues λ of a matrix A,
find the roots of the characteristic polynomial:
det(A − λI) = 0
• The eigenvectors can then be computed by solving:
(A − λI)v = 0
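In practice the computation is delegated to a library routine; a minimal sketch with NumPy (the matrix is an illustrative choice):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
# Characteristic polynomial: det(A - lam*I) = (2 - lam)^2 - 1 = 0 -> lam = 1, 3
lams, V = np.linalg.eig(A)   # eigenvectors are the columns of V
print(np.sort(lams))         # [1. 3.]
v = V[:, 0]
print(np.allclose(A @ v, lams[0] * v))  # True: A v = lam v
```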
Properties of λ and v
• Eigenvalues and eigenvectors are only
defined for square matrices.
• Eigenvectors are not unique (e.g., if v is an
eigenvector, so is kv)
• Suppose λ1, λ2, ..., λn are the eigenvalues of A;
then tr(A) = λ1 + λ2 + . . . + λn and det(A) = λ1λ2 . . . λn
• Given an n x n matrix A, find P such that:
P-1AP=Λ where Λ is diagonal
• Solution: set P = [v1 v2 . . . vn], where v1, v2, . . .,
vn are the eigenvectors of A; the diagonal entries of Λ are the eigenvalues of A
• An n x n matrix A is diagonalizable iff A has n
linearly independent eigenvectors.
– i.e., rank(P)=n, where P-1AP=Λ
• Theorem: If the eigenvalues of A are all distinct,
then the corresponding eigenvectors are
linearly independent (i.e., A is diagonalizable).
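A small numeric check of diagonalization with NumPy (the matrix below is an illustrative choice with distinct eigenvalues 5 and 2):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # distinct eigenvalues -> diagonalizable
lams, P = np.linalg.eig(A)               # columns of P are eigenvectors of A
Lam = np.linalg.inv(P) @ A @ P
print(np.allclose(Lam, np.diag(lams)))   # True: P^-1 A P is diagonal
```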
Are all n x n matrices diagonalizable? No: only those with n linearly independent eigenvectors are.
• If A is diagonalizable, then the corresponding
eigenvectors v1,v2 ,. . . vn form a basis in Rn
– Since v1,v2 ,. . . vn are linearly independent, they
also span Rn
• If A is also symmetric, its eigenvalues are
real (and non-negative when A is positive
semi-definite, e.g., a covariance matrix) and the
corresponding eigenvectors are orthogonal.
– v1,v2 ,. . . vn form an orthogonal basis in Rn
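This can be sketched with NumPy's eigh, which is intended for symmetric matrices (the matrix is an illustrative choice):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # symmetric
lams, V = np.linalg.eigh(A)             # eigh exploits symmetry
print(lams)       # [1. 3.] -- real eigenvalues, in ascending order
print(V.T @ V)    # ~identity: the eigenvectors form an orthonormal basis
```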