2. Matrix Algebra and Random Vectors
2.1 Introduction
Multivariate data can be conveniently displayed as an array of numbers. In general, a rectangular array of numbers with, for instance, n rows and p columns is called a matrix of dimension n × p. The study of multivariate methods is greatly facilitated by the use of matrix algebra.
2.2 Some Basics of Matrix and Vector Algebra
Vectors
• Definition: An array x of n real numbers $x_1, x_2, \ldots, x_n$ is called a vector, and it is written as
$$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \quad \text{or} \quad x' = [x_1, x_2, \ldots, x_n]$$
where the prime denotes the operation of transposing a column to a row.
• Multiplying a vector x by a constant c:
$$cx = \begin{bmatrix} cx_1 \\ cx_2 \\ \vdots \\ cx_n \end{bmatrix}$$
• Addition of x and y is defined as
$$x + y = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} + \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} x_1 + y_1 \\ x_2 + y_2 \\ \vdots \\ x_n + y_n \end{bmatrix}$$
Figure 2.2 Scalar multiplication and vector addition
• Length of vectors, unit vector
When n = 2, $x = [x_1, x_2]'$, the length of x, written $L_x$, is defined to be
$$L_x = \sqrt{x_1^2 + x_2^2}$$
Geometrically, the length of a vector in two dimensions can be viewed as the hypotenuse of a right triangle. The lengths of a vector $x = [x_1, x_2, \ldots, x_n]'$ and of $cx = [cx_1, cx_2, \ldots, cx_n]'$ are
$$L_x = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$$
$$L_{cx} = \sqrt{c^2x_1^2 + c^2x_2^2 + \cdots + c^2x_n^2} = |c|\sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} = |c|\,L_x$$
Choosing $c = L_x^{-1}$, we obtain the unit vector $L_x^{-1}x$, which has length 1 and lies in the direction of x.
• Angle, inner product, perpendicular vectors
Consider two vectors x, y in a plane and the angle θ between them, as in Figure 2.4. From the figure, θ can be represented as the difference of the angles θ1 and θ2 formed by the two vectors and the first coordinate axis. Since, by definition,
$$\cos(\theta_1) = \frac{x_1}{L_x}, \quad \cos(\theta_2) = \frac{y_1}{L_y}, \quad \sin(\theta_1) = \frac{x_2}{L_x}, \quad \sin(\theta_2) = \frac{y_2}{L_y}$$
and
$$\cos(\theta_2 - \theta_1) = \cos(\theta_1)\cos(\theta_2) + \sin(\theta_1)\sin(\theta_2)$$
the angle θ between the two vectors is specified by
$$\cos(\theta) = \cos(\theta_2 - \theta_1) = \frac{y_1}{L_y}\cdot\frac{x_1}{L_x} + \frac{y_2}{L_y}\cdot\frac{x_2}{L_x} = \frac{x_1y_1 + x_2y_2}{L_xL_y}.$$
• Definition of the inner product of the two vectors x and y:
$$x'y = x_1y_1 + x_2y_2.$$
With the definition of the inner product and cos(θ),
$$L_x = \sqrt{x'x}, \qquad \cos(\theta) = \frac{x'y}{L_xL_y} = \frac{x'y}{\sqrt{x'x}\,\sqrt{y'y}}.$$
Example 2.1 (Calculating lengths of vectors and the angle between them)
Given the vectors $x' = [1, 3, 2]$ and $y' = [-2, 1, -1]$, find 3x and x + y. Next, determine the length of x, the length of y, and the angle between x and y. Also, check that the length of 3x is three times the length of x.
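As a quick numerical check on Example 2.1, here is a minimal NumPy sketch (NumPy is our illustration tool here; the values come straight from the example) that computes the requested quantities directly from the definitions:

```python
import numpy as np

# Vectors from Example 2.1.
x = np.array([1.0, 3.0, 2.0])
y = np.array([-2.0, 1.0, -1.0])

print(3 * x)     # 3x    = [3, 9, 6]
print(x + y)     # x + y = [-1, 4, 1]

L_x = np.sqrt(x @ x)                 # length of x = sqrt(14)
L_y = np.sqrt(y @ y)                 # length of y = sqrt(6)
cos_theta = (x @ y) / (L_x * L_y)    # x'y = -1, so cos(theta) = -1/sqrt(84)
theta = np.degrees(np.arccos(cos_theta))
print(L_x, L_y, theta)               # angle is a bit over 96 degrees

# The length of 3x is three times the length of x.
assert np.isclose(np.sqrt((3 * x) @ (3 * x)), 3 * L_x)
```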
• A pair of vectors x and y of the same dimension is said to be linearly dependent if there exist constants c1 and c2, not both zero, such that
$$c_1x + c_2y = 0.$$
A set of vectors $x_1, x_2, \ldots, x_k$ is said to be linearly dependent if there exist constants $c_1, c_2, \ldots, c_k$, not all zero, such that
$$c_1x_1 + c_2x_2 + \cdots + c_kx_k = 0.$$
Linear dependence implies that at least one vector in the set can be written as a linear combination of the other vectors. Vectors of the same dimension that are not linearly dependent are said to be linearly independent.
• The projection (or shadow) of a vector x on a vector y is
$$\text{Projection of } x \text{ on } y = \frac{x'y}{y'y}\, y = \frac{x'y}{L_y}\,\frac{1}{L_y}\, y$$
where the vector $L_y^{-1}y$ has unit length. The length of the projection is
$$\text{Length of projection} = \frac{|x'y|}{L_y} = L_x \left|\frac{x'y}{L_xL_y}\right| = L_x\,|\cos(\theta)|$$
where θ is the angle between x and y.
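The projection formula translates directly into code. Below is a small sketch (reusing the vectors of Example 2.1 purely for illustration) that verifies the stated identity for the length of the projection:

```python
import numpy as np

def project(x, y):
    """Projection (shadow) of x on y: (x'y / y'y) y."""
    return (x @ y) / (y @ y) * y

# Reusing the vectors of Example 2.1 for illustration.
x = np.array([1.0, 3.0, 2.0])
y = np.array([-2.0, 1.0, -1.0])

p = project(x, y)
L_y = np.sqrt(y @ y)
# The length of the projection equals |x'y| / L_y = L_x |cos(theta)|.
assert np.isclose(np.sqrt(p @ p), abs(x @ y) / L_y)
print(p)    # [ 0.33333333 -0.16666667  0.16666667]
```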
Example 2.2 (Identifying linearly independent vectors) Determine whether the set of vectors
$$x_1 = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}, \quad x_2 = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}, \quad x_3 = \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}$$
is linearly dependent.
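One way to settle Example 2.2 numerically is to stack the vectors as columns and compute the rank; this rank test is an equivalent reformulation of the definition, not the slide's own method. A sketch:

```python
import numpy as np

# Columns are x1, x2, x3 from Example 2.2.
M = np.array([[1.0,  1.0,  1.0],
              [2.0,  0.0, -2.0],
              [1.0, -1.0,  1.0]])

# The set is linearly dependent exactly when the rank falls below the
# number of vectors (i.e. c1 x1 + c2 x2 + c3 x3 = 0 has a nonzero solution).
print(np.linalg.matrix_rank(M))   # 3, so the vectors are linearly independent
```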
Matrices
A matrix is any rectangular array of real numbers. We denote an arbitrary array of n rows and p columns by
$$A_{(n\times p)} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1p} \\ a_{21} & a_{22} & \cdots & a_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{np} \end{bmatrix}$$
Example 2.3 (Transpose of a matrix) If
$$A_{(2\times 3)} = \begin{bmatrix} 3 & -1 & 2 \\ 1 & 5 & 4 \end{bmatrix}$$
then
$$A'_{(3\times 2)} = \begin{bmatrix} 3 & 1 \\ -1 & 5 \\ 2 & 4 \end{bmatrix}$$
The product cA is the matrix that results from multiplying each element of A by c. Thus
$$cA_{(n\times p)} = \begin{bmatrix} ca_{11} & ca_{12} & \cdots & ca_{1p} \\ ca_{21} & ca_{22} & \cdots & ca_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ ca_{n1} & ca_{n2} & \cdots & ca_{np} \end{bmatrix}$$
Example 2.4 (The sum of two matrices and multiplication of a matrix by a constant) If
$$A_{(2\times 3)} = \begin{bmatrix} 0 & 3 & 1 \\ 1 & -1 & 1 \end{bmatrix}, \qquad B_{(2\times 3)} = \begin{bmatrix} 1 & -2 & -3 \\ 2 & 5 & 1 \end{bmatrix}$$
find 4A and A + B.
The matrix product AB is
$$A_{(n\times k)}B_{(k\times p)} = \text{the } (n \times p) \text{ matrix whose entry in the } i\text{th row and } j\text{th column is the inner product of the } i\text{th row of } A \text{ and the } j\text{th column of } B,$$
or
$$(i, j)\text{ entry of } AB = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{ik}b_{kj} = \sum_{\ell=1}^{k} a_{i\ell}b_{\ell j}$$
Example 2.5 (Matrix multiplication) If
$$A = \begin{bmatrix} 3 & -1 & 2 \\ 1 & 5 & 4 \end{bmatrix}, \quad B = \begin{bmatrix} -2 \\ 7 \\ 9 \end{bmatrix}, \quad \text{and} \quad C = \begin{bmatrix} 2 & 0 \\ 1 & -1 \end{bmatrix}$$
find AB and CA.
Example 2.6 (Some typical products and their dimensions) Let
$$A = \begin{bmatrix} 1 & -2 & 3 \\ 2 & 4 & -1 \end{bmatrix}, \quad b = \begin{bmatrix} 7 \\ -3 \\ 6 \end{bmatrix}, \quad c = \begin{bmatrix} 5 \\ 8 \\ -4 \end{bmatrix}, \quad d = \begin{bmatrix} 2 \\ 9 \end{bmatrix}$$
Find Ab, bc', b'c, and d'Ab.
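A NumPy sketch of these products, with the column vectors written as n × 1 arrays so that the shapes of the results are explicit:

```python
import numpy as np

A = np.array([[1, -2, 3],
              [2, 4, -1]])          # 2 x 3
b = np.array([[7], [-3], [6]])      # 3 x 1
c = np.array([[5], [8], [-4]])      # 3 x 1
d = np.array([[2], [9]])            # 2 x 1

print(A @ b)        # Ab   : (2x3)(3x1) -> 2 x 1
print(b @ c.T)      # bc'  : (3x1)(1x3) -> 3 x 3 outer product
print(b.T @ c)      # b'c  : (1x3)(3x1) -> 1 x 1 (a scalar)
print(d.T @ A @ b)  # d'Ab : (1x2)(2x3)(3x1) -> 1 x 1 (a scalar)
```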
• Square matrices will be of special importance in our development of statistical methods. A square matrix is said to be symmetric if A = A', or $a_{ij} = a_{ji}$ for all i and j.
• The identity matrix I acts like 1 in ordinary multiplication ($1 \cdot a = a \cdot 1 = a$):
$$I_{(k\times k)}A_{(k\times k)} = A_{(k\times k)}I_{(k\times k)} = A_{(k\times k)} \quad \text{for any } A_{(k\times k)}$$
• The fundamental scalar relation about the existence of an inverse number $a^{-1}$ such that $a^{-1}a = aa^{-1} = 1$ if $a \ne 0$ has the following matrix algebra extension: If there exists a matrix B such that
$$BA = AB = I$$
then B is called the inverse of A and is denoted by $A^{-1}$.
Example 2.7 (The existence of a matrix inverse) For
$$A = \begin{bmatrix} 3 & 2 \\ 4 & 1 \end{bmatrix}$$
find $A^{-1}$.
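A sketch of Example 2.7 in NumPy; the assertions mirror the definition BA = AB = I given above:

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [4.0, 1.0]])

B = np.linalg.inv(A)      # exists because det(A) = 3*1 - 2*4 = -5 != 0
print(B)                  # [[-0.2, 0.4], [0.8, -0.6]]

# B is the inverse: BA = AB = I.
assert np.allclose(B @ A, np.eye(2))
assert np.allclose(A @ B, np.eye(2))
```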
• Diagonal matrices
• Orthogonal matrices Q:
$$QQ' = Q'Q = I \quad \text{or equivalently} \quad Q' = Q^{-1}.$$
• A matrix A has eigenvalue λ with corresponding eigenvector x ≠ 0 if
$$Ax = \lambda x$$
Ordinarily, x is normalized so that it has unit length; that is, $x'x = 1$.
• Let A be a k × k square symmetric matrix. Then A has k pairs of eigenvalues and eigenvectors, namely
$$\lambda_1, e_1, \quad \lambda_2, e_2, \quad \ldots, \quad \lambda_k, e_k$$
The eigenvectors can be chosen to satisfy $1 = e_1'e_1 = \cdots = e_k'e_k$ and to be mutually perpendicular. The eigenvectors are unique unless two or more eigenvalues are equal.
Example 2.8 (Verifying eigenvalues and eigenvectors) Let
$$A = \begin{bmatrix} 1 & -5 \\ -5 & 1 \end{bmatrix}.$$
Show that λ1 = 6 and λ2 = −4 are its eigenvalues and that the corresponding eigenvectors are $e_1 = [1/\sqrt{2}, -1/\sqrt{2}]'$ and $e_2 = [1/\sqrt{2}, 1/\sqrt{2}]'$.
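The claims of Example 2.8 can be verified mechanically; a minimal NumPy sketch (np.linalg.eigh is the standard routine for symmetric matrices):

```python
import numpy as np

A = np.array([[1.0, -5.0],
              [-5.0, 1.0]])
e1 = np.array([1.0, -1.0]) / np.sqrt(2.0)
e2 = np.array([1.0, 1.0]) / np.sqrt(2.0)

# Verify A e = lambda e for each claimed eigenpair.
assert np.allclose(A @ e1, 6.0 * e1)     # lambda1 = 6
assert np.allclose(A @ e2, -4.0 * e2)    # lambda2 = -4

# np.linalg.eigh returns the eigenvalues of a symmetric matrix in
# ascending order, with normalized eigenvectors as columns.
vals, vecs = np.linalg.eigh(A)
print(vals)                              # [-4.  6.]
```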
2.3 Positive Definite Matrices
The study of variation and interrelationships in multivariate data is often based upon distances and the assumption that the data are multivariate normally distributed. Squared distances and the multivariate normal density can be expressed in terms of matrix products called quadratic forms. Consequently, it should not be surprising that quadratic forms play a central role in multivariate analysis. Here we consider quadratic forms that are always nonnegative and the associated positive definite matrices.
• Spectral decomposition for symmetric matrices:
$$A_{(k\times k)} = \lambda_1 e_1e_1' + \lambda_2 e_2e_2' + \cdots + \lambda_k e_ke_k'$$
where $\lambda_1, \lambda_2, \ldots, \lambda_k$ are the eigenvalues and $e_1, e_2, \ldots, e_k$ are the associated normalized k × 1 eigenvectors; $e_i'e_i = 1$ for $i = 1, 2, \ldots, k$ and $e_i'e_j = 0$ for $i \ne j$.
• Because x'Ax has only squared terms $x_i^2$ and product terms $x_ix_k$, it is called a quadratic form. When a k × k symmetric matrix A is such that
$$0 \le x'Ax$$
for all $x' = [x_1, x_2, \ldots, x_k]$, both the matrix A and the quadratic form are said to be nonnegative definite. If equality holds in the equation above only for the vector $x' = [0, 0, \ldots, 0]$, then A or the quadratic form is said to be positive definite. In other words, A is positive definite if
$$0 < x'Ax$$
for all vectors x ≠ 0.
• Using the spectral decomposition, we can easily show that a k × k matrix A is a positive definite matrix if and only if every eigenvalue of A is positive. A is a nonnegative definite matrix if and only if all of its eigenvalues are greater than or equal to zero.
Example 2.9 (The spectral decomposition of a matrix) Consider the symmetric matrix
$$A = \begin{bmatrix} 13 & -4 & 2 \\ -4 & 13 & -2 \\ 2 & -2 & 10 \end{bmatrix};$$
find its spectral decomposition.
Example 2.10 (A positive definite matrix and quadratic form) Show that the matrix of the following quadratic form is positive definite:
$$3x_1^2 + 2x_2^2 - 2\sqrt{2}\,x_1x_2.$$
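A sketch of both examples in NumPy. Rebuilding A from its eigenpairs is exactly the spectral decomposition, and the eigenvalue test for positive definiteness is the criterion stated above:

```python
import numpy as np

# Example 2.9: spectral decomposition of a symmetric matrix.
A = np.array([[13.0, -4.0, 2.0],
              [-4.0, 13.0, -2.0],
              [2.0, -2.0, 10.0]])
vals, vecs = np.linalg.eigh(A)   # eigenvalues ascending, eigenvectors as columns

# Rebuild A as the sum of lambda_i e_i e_i'.
A_rebuilt = sum(vals[i] * np.outer(vecs[:, i], vecs[:, i]) for i in range(3))
assert np.allclose(A, A_rebuilt)

# Example 2.10: the quadratic form 3x1^2 + 2x2^2 - 2*sqrt(2) x1 x2 has
# matrix [[3, -sqrt(2)], [-sqrt(2), 2]]; it is positive definite exactly
# when all eigenvalues are positive.
Q = np.array([[3.0, -np.sqrt(2.0)],
              [-np.sqrt(2.0), 2.0]])
print(np.linalg.eigvalsh(Q))     # [1. 4.] -- both positive
```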
• The "distance" of the point $[x_1, x_2, \ldots, x_p]'$ to the origin satisfies
$$(\text{distance})^2 = a_{11}x_1^2 + a_{22}x_2^2 + \cdots + a_{pp}x_p^2 + 2(a_{12}x_1x_2 + a_{13}x_1x_3 + \cdots + a_{p-1,p}x_{p-1}x_p)$$
• Similarly, the square of the distance from x to an arbitrary fixed point $\mu = [\mu_1, \mu_2, \ldots, \mu_p]'$ is a quadratic form in the differences $x_i - \mu_i$.
• A geometric interpretation can be based on the eigenvalues and eigenvectors of the matrix A. For example, suppose p = 2. Then the points $x' = [x_1, x_2]$ at constant distance c from the origin satisfy
$$x'Ax = a_{11}x_1^2 + a_{22}x_2^2 + 2a_{12}x_1x_2 = c^2$$
By the spectral decomposition,
$$A = \lambda_1 e_1e_1' + \lambda_2 e_2e_2'$$
so
$$x'Ax = \lambda_1 (x'e_1)^2 + \lambda_2 (x'e_2)^2$$
2.4 A Square-Root Matrix
Let A be a k × k positive definite matrix with spectral decomposition $A = \sum_{i=1}^{k} \lambda_i e_ie_i'$. Let the normalized eigenvectors be the columns of the matrix $P = [e_1, e_2, \ldots, e_k]$. Then
$$A = \sum_{i=1}^{k} \lambda_i e_ie_i' = P\Lambda P'$$
where $PP' = P'P = I$ and Λ is the diagonal matrix
$$\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_k \end{bmatrix} \quad \text{with } \lambda_i > 0$$
Thus
$$A^{-1} = P\Lambda^{-1}P' = \sum_{i=1}^{k} \frac{1}{\lambda_i}\, e_ie_i'$$
The square-root matrix of a positive definite matrix A is
$$A^{1/2} = \sum_{i=1}^{k} \sqrt{\lambda_i}\, e_ie_i' = P\Lambda^{1/2}P'$$
It has the following properties:
• Symmetric: $(A^{1/2})' = A^{1/2}$
• $A^{1/2}A^{1/2} = A$
• $(A^{1/2})^{-1} = \sum_{i=1}^{k} \frac{1}{\sqrt{\lambda_i}}\, e_ie_i' = P\Lambda^{-1/2}P'$
• $A^{1/2}A^{-1/2} = A^{-1/2}A^{1/2} = I$ and $A^{-1/2}A^{-1/2} = A^{-1}$, where $A^{-1/2} = (A^{1/2})^{-1}$.
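A sketch of the square-root matrix in NumPy, built from the eigendecomposition exactly as above; the positive definite matrix of Example 2.9 serves as A:

```python
import numpy as np

A = np.array([[13.0, -4.0, 2.0],
              [-4.0, 13.0, -2.0],
              [2.0, -2.0, 10.0]])            # positive definite (Example 2.9)

vals, P = np.linalg.eigh(A)                  # A = P Lambda P'
A_half = P @ np.diag(np.sqrt(vals)) @ P.T    # A^(1/2) = P Lambda^(1/2) P'
A_neg_half = P @ np.diag(1 / np.sqrt(vals)) @ P.T

assert np.allclose(A_half, A_half.T)              # symmetric
assert np.allclose(A_half @ A_half, A)            # A^(1/2) A^(1/2) = A
assert np.allclose(A_half @ A_neg_half, np.eye(3))
assert np.allclose(A_neg_half @ A_neg_half, np.linalg.inv(A))
```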
Random Vectors and Matrices
A random vector is a vector whose elements are random variables. Similarly, a random matrix is a matrix whose elements are random variables.
• The expected value of a random matrix:
$$E(X) = \begin{bmatrix} E(X_{11}) & E(X_{12}) & \cdots & E(X_{1p}) \\ E(X_{21}) & E(X_{22}) & \cdots & E(X_{2p}) \\ \vdots & \vdots & \ddots & \vdots \\ E(X_{n1}) & E(X_{n2}) & \cdots & E(X_{np}) \end{bmatrix}$$
• $E(X + Y) = E(X) + E(Y)$
• $E(AXB) = AE(X)B$
Example 2.11 (Computing expected values for discrete random variables)
Suppose p = 2 and n = 1, and consider the random vector $X' = [X_1, X_2]$. Let the discrete random variable X1 have the following probability function:

X1       -1    0    1
p1(x1)   0.3   0.3  0.4

Similarly, let the discrete random variable X2 have the probability function:

X2       0    1
p2(x2)   0.8  0.2

Calculate E(X).
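A minimal NumPy sketch of Example 2.11; each marginal mean is the probability-weighted sum of the values:

```python
import numpy as np

# Marginal probability functions from Example 2.11.
x1_vals, p1 = np.array([-1, 0, 1]), np.array([0.3, 0.3, 0.4])
x2_vals, p2 = np.array([0, 1]), np.array([0.8, 0.2])

E_X = np.array([x1_vals @ p1, x2_vals @ p2])
print(E_X)    # [0.1, 0.2], i.e. E(X1) = 0.1 and E(X2) = 0.2
```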
Mean Vectors and Covariance Matrices
Suppose $X = [X_1, X_2, \ldots, X_p]'$ is a p × 1 random vector. Then each element of X is a random variable with its own marginal probability distribution.
• The marginal mean: $\mu_i = E(X_i)$, $i = 1, 2, \ldots, p$.
• The marginal variance: $\sigma_i^2 = E(X_i - \mu_i)^2$, $i = 1, 2, \ldots, p$.
• The behavior of any pair of random variables, such as Xi and Xk, is described by their joint probability function, and a measure of the linear association between them is provided by the covariance
$$\sigma_{ik} = E(X_i - \mu_i)(X_k - \mu_k)$$
• The means and covariances of a p × 1 random vector X can be set out as matrices; the latter is called the population variance-covariance matrix:
$$\mu = E(X), \qquad \Sigma = E(X - \mu)(X - \mu)'.$$
• Xi and Xk are statistically independent if
$$P(X_i \le x_i \text{ and } X_k \le x_k) = P(X_i \le x_i)P(X_k \le x_k)$$
or
$$f_{ik}(x_i, x_k) = f_i(x_i)f_k(x_k).$$
• The p continuous random variables $X_1, X_2, \ldots, X_p$ are mutually statistically independent if
$$f_{1,2,\ldots,p}(x_1, x_2, \ldots, x_p) = f_1(x_1)f_2(x_2)\cdots f_p(x_p)$$
• Xi and Xk are linearly independent (exhibit no linear association) if
$$\mathrm{Cov}(X_i, X_k) = 0$$
• Population correlation coefficient ρik:
$$\rho_{ik} = \frac{\sigma_{ik}}{\sqrt{\sigma_{ii}}\sqrt{\sigma_{kk}}}$$
The correlation coefficient measures the amount of linear association between the random variables Xi and Xk.
• The population correlation matrix ρ collects the pairwise coefficients ρik in a p × p matrix.
Example 2.12 (Computing the covariance matrix) Find the covariance matrix for the two random variables X1 and X2 introduced in Example 2.11 when their joint probability function p12(x1, x2) is represented by the entries in the body of the following table:

x1 \ x2     0      1      p1(x1)
-1          0.24   0.06   0.3
 0          0.16   0.14   0.3
 1          0.40   0.00   0.4
p2(x2)      0.80   0.20   1

Example 2.13 (Computing the correlation matrix from the covariance matrix) Suppose
$$\Sigma = \begin{bmatrix} 4 & 1 & 2 \\ 1 & 9 & -3 \\ 2 & -3 & 25 \end{bmatrix} = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{12} & \sigma_{22} & \sigma_{23} \\ \sigma_{13} & \sigma_{23} & \sigma_{33} \end{bmatrix}$$
Obtain the population correlation matrix ρ.
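A sketch of both examples in NumPy. For Example 2.13 it uses the identity ρ = V^(-1/2) Σ V^(-1/2), where V is the diagonal matrix of variances; this identity is a standard shortcut rather than something spelled out on the slide:

```python
import numpy as np

# Example 2.12: joint probability table for (X1, X2).
x1_vals = np.array([-1.0, 0.0, 1.0])
x2_vals = np.array([0.0, 1.0])
p12 = np.array([[0.24, 0.06],
                [0.16, 0.14],
                [0.40, 0.00]])

mu1 = x1_vals @ p12.sum(axis=1)   # marginal mean of X1
mu2 = x2_vals @ p12.sum(axis=0)   # marginal mean of X2
# sigma_12 = E(X1 - mu1)(X2 - mu2), summed over the joint table.
sig12 = sum(p12[i, j] * (x1_vals[i] - mu1) * (x2_vals[j] - mu2)
            for i in range(3) for j in range(2))
print(sig12)    # -0.08

# Example 2.13: correlation matrix from the covariance matrix.
Sigma = np.array([[4.0, 1.0, 2.0],
                  [1.0, 9.0, -3.0],
                  [2.0, -3.0, 25.0]])
V_inv_half = np.diag(1 / np.sqrt(np.diag(Sigma)))
print(V_inv_half @ Sigma @ V_inv_half)   # unit diagonal; rho12 = 1/6, rho13 = 0.2, rho23 = -0.2
```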
Partitioning the Covariance Matrix
• Let
$$X = \begin{bmatrix} X_1 \\ \vdots \\ X_q \\ X_{q+1} \\ \vdots \\ X_p \end{bmatrix} = \begin{bmatrix} X^{(1)} \\ X^{(2)} \end{bmatrix} \quad \text{and then} \quad \mu = E(X) = \begin{bmatrix} \mu_1 \\ \vdots \\ \mu_q \\ \mu_{q+1} \\ \vdots \\ \mu_p \end{bmatrix} = \begin{bmatrix} \mu^{(1)} \\ \mu^{(2)} \end{bmatrix}$$
• Define
$$E(X - \mu)(X - \mu)' = E\begin{bmatrix} (X^{(1)} - \mu^{(1)})(X^{(1)} - \mu^{(1)})' & (X^{(1)} - \mu^{(1)})(X^{(2)} - \mu^{(2)})' \\ (X^{(2)} - \mu^{(2)})(X^{(1)} - \mu^{(1)})' & (X^{(2)} - \mu^{(2)})(X^{(2)} - \mu^{(2)})' \end{bmatrix} = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix}$$
• It is sometimes convenient to use the notation $\mathrm{Cov}(X^{(1)}, X^{(2)})$, where
$$\mathrm{Cov}(X^{(1)}, X^{(2)}) = \Sigma_{12} = \Sigma_{21}'$$
is a matrix containing all of the covariances between a component of $X^{(1)}$ and a component of $X^{(2)}$.
The Mean Vector and Covariance Matrix for Linear Combinations of Random Variables
• The linear combination $c'X = c_1X_1 + \cdots + c_pX_p$ has
$$\text{mean} = E(c'X) = c'\mu, \qquad \text{variance} = \mathrm{Var}(c'X) = c'\Sigma c$$
where µ = E(X) and Σ = Cov(X).
• Let C be a matrix; then the linear combinations Z = CX have
$$\mu_Z = E(Z) = E(CX) = C\mu_X$$
$$\Sigma_Z = \mathrm{Cov}(Z) = \mathrm{Cov}(CX) = C\Sigma_X C'$$
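A sketch of these rules, with a Monte Carlo cross-check. The values of µ and C are illustrative (not from the notes), Σ is borrowed from Example 2.13, and the multivariate normal is used only as a convenient sampling device:

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, 2.0, 0.0])            # illustrative mean vector
Sigma = np.array([[4.0, 1.0, 2.0],
                  [1.0, 9.0, -3.0],
                  [2.0, -3.0, 25.0]])     # Sigma of Example 2.13
C = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, 0.0]])          # two linear combinations, Z = CX

mu_Z = C @ mu                 # E(Z) = C mu
Sigma_Z = C @ Sigma @ C.T     # Cov(Z) = C Sigma C'
print(mu_Z, Sigma_Z, sep="\n")

# Monte Carlo check: sample X, form Z = CX, compare moments.
X = rng.multivariate_normal(mu, Sigma, size=200_000)
Z = X @ C.T
print(Z.mean(axis=0))         # close to mu_Z
print(np.cov(Z.T))            # close to Sigma_Z
```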
• Sample mean:
$$\bar{x}' = [\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_p]$$
• Sample covariance matrix:
$$S_n = \begin{bmatrix} s_{11} & \cdots & s_{1p} \\ \vdots & \ddots & \vdots \\ s_{1p} & \cdots & s_{pp} \end{bmatrix} = \begin{bmatrix} \frac{1}{n}\sum_{j=1}^{n}(x_{j1} - \bar{x}_1)^2 & \cdots & \frac{1}{n}\sum_{j=1}^{n}(x_{j1} - \bar{x}_1)(x_{jp} - \bar{x}_p) \\ \vdots & \ddots & \vdots \\ \frac{1}{n}\sum_{j=1}^{n}(x_{j1} - \bar{x}_1)(x_{jp} - \bar{x}_p) & \cdots & \frac{1}{n}\sum_{j=1}^{n}(x_{jp} - \bar{x}_p)^2 \end{bmatrix}$$
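A sketch computing the sample mean and Sn on a small invented data matrix (the data values are illustrative only). Note the divisor n, matching the Sn defined above, whereas np.cov defaults to n − 1:

```python
import numpy as np

# Toy data matrix: n = 4 observations on p = 2 variables (illustrative values).
X = np.array([[4.0, 1.0],
              [3.0, 2.0],
              [5.0, 4.0],
              [4.0, 1.0]])
n = X.shape[0]

x_bar = X.mean(axis=0)       # sample mean vector
D = X - x_bar                # deviations from the mean
S_n = D.T @ D / n            # divisor n, matching the slide's S_n
print(x_bar)
print(S_n)

# np.cov uses divisor n - 1 by default; bias=True gives divisor n.
assert np.allclose(S_n, np.cov(X.T, bias=True))
```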
2.7 Matrix Inequalities and Maximization
• Cauchy-Schwarz Inequality
Let b and d be any two p × 1 vectors. Then
$$(b'd)^2 \le (b'b)(d'd)$$
with equality if and only if b = cd or d = cb for some constant c.
• Extended Cauchy-Schwarz Inequality
Let b and d be any two p × 1 vectors, and let B be a positive definite matrix. Then
$$(b'd)^2 \le (b'Bb)(d'B^{-1}d)$$
with equality if and only if $b = cB^{-1}d$ or d = cBb for some constant c.
• Maximization Lemma
Let $B_{(p\times p)}$ be positive definite and $d_{(p\times 1)}$ be a given vector. Then, for an arbitrary nonzero vector x,
$$\max_{x \ne 0} \frac{(x'd)^2}{x'Bx} = d'B^{-1}d$$
with the maximum attained when $x = cB^{-1}d$ for any constant c ≠ 0.
• Maximization of Quadratic Forms for Points on the Unit Sphere
Let B be a positive definite matrix with eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0$ and associated normalized eigenvectors $e_1, e_2, \ldots, e_p$. Then
$$\max_{x \ne 0} \frac{x'Bx}{x'x} = \lambda_1 \quad (\text{attained when } x = e_1)$$
$$\min_{x \ne 0} \frac{x'Bx}{x'x} = \lambda_p \quad (\text{attained when } x = e_p)$$
Moreover,
$$\max_{x \perp e_1, \ldots, e_k} \frac{x'Bx}{x'x} = \lambda_{k+1} \quad (\text{attained when } x = e_{k+1},\ k = 1, 2, \ldots, p - 1)$$
where the symbol ⊥ is read "perpendicular to."
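A numerical sketch of the quadratic-form maximization result. B is a randomly generated positive definite matrix (illustrative only; M'M + I is positive definite for any real M):

```python
import numpy as np

rng = np.random.default_rng(1)

# An illustrative positive definite B: M'M + I for a random M.
M = rng.standard_normal((4, 4))
B = M.T @ M + np.eye(4)

vals, vecs = np.linalg.eigh(B)          # eigenvalues in ascending order
lam_p, lam_1 = vals[0], vals[-1]        # smallest and largest eigenvalues

def rayleigh(x):
    """The quotient x'Bx / x'x being maximized on the unit sphere."""
    return (x @ B @ x) / (x @ x)

# Maximum attained at e1 (largest eigenvalue), minimum at ep (smallest).
assert np.isclose(rayleigh(vecs[:, -1]), lam_1)
assert np.isclose(rayleigh(vecs[:, 0]), lam_p)

# Random vectors never exceed lambda_1.
for _ in range(1000):
    assert rayleigh(rng.standard_normal(4)) <= lam_1 + 1e-9
```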
