UNIVERSITY OF DUHOK
FACULTY OF EDUCATIONAL SCIENCE
SCHOOL OF BASIC EDUCATION
DEPARTMENT OF MATHEMATICS

Matrices and their Applications to Some Methods of
Solving Systems of Linear Equations

A project submitted to the council of the Department of
Mathematics, School of Basic Education, University of Duhok, in
partial fulfillment of the requirements for the B.Sc. degree in
mathematics

Prepared by:                                  Supervised by:
ðšðð ðððð ð¯ððð                    ðŽðððððð ðŽððð ð ðºðððð
ð²ððððð ð¯ððð

1436 A.H.        2015 A.D.        2715 K.
Acknowledgement

First of all, thanks to Allah for all His almighty kindness and
loveliness for letting us finish our project.

We would like to express our thanks to our supervisor
ðŽðððððð ðŽððð ð ðºðððð for giving us the opportunity to write this
research under his friendly support. He made our research go smoothly
with his discerning ideas and suggestions.

Also, we would like to thank all our friends and those people who
helped us during our work.
Contents

Chapter One
Basic Concepts in Matrices
(1.1) Matrices
(1.2) Some Special Types of Matrices
(1.3) Operations on Matrices
(1.4) The Inverse of a Square Matrix
(1.5) Some Properties of Determinants

Chapter Two
Systems of Linear Equations
(2.1) Linear Equations
(2.2) Linear Systems
(2.2.1) Homogeneous Systems
(2.2.2) Gaussian Elimination
(2.2.3) Gauss-Jordan Elimination
(2.2.4) Cramer's Rule

References
Abstract

In this research we introduce matrices and their types, and finding the
inverse and the determinant of a matrix is also covered. Then the
applications of matrices to some methods of solving systems of linear
equations, such as homogeneous systems, Gaussian elimination,
Gauss-Jordan elimination and Cramer's rule, are illustrated.
Introduction

Information in science and mathematics is often organized into rows
and columns to form rectangular arrays called "matrices" (plural of
matrix). Matrices are often tables of numerical data that arise from
physical observation, but they also occur in various mathematical
contexts.

Linear algebra is a subject of crucial importance to mathematicians and
users of mathematics. When mathematics is used to solve a problem, it
often becomes necessary to find a solution to a so-called system of linear
equations. Applications of linear algebra are found in subjects as diverse
as economics, physics, sociology and management; consultants use linear
algebra to express ideas, solve problems and model real activities.

The aim of this work is to apply matrices to solve some
types of systems of linear equations.

In the first chapter of this work matrices are introduced. Then
operations on matrices (such as addition and multiplication) are
defined and the concept of the matrix inverse is discussed. In the
second chapter, theorems are given which provide additional insight
into the relationship between matrices and solutions of linear systems.
Then we apply matrices to solve linear systems by methods such as
homogeneous systems, Gaussian elimination, Gauss-Jordan elimination and
Cramer's rule.
CHAPTER ONE
Basic Concepts in Matrices

In this chapter we begin our study of matrices, some special types of
matrices, operations on matrices, and finding the inverse and the
determinant of a matrix.
(1.1) Matrix

An m x n matrix A is a rectangular array of m.n numbers arranged in m
rows and n columns:

    A = [ a11  a12  ...  a1j  ...  a1n ]
        [ a21  a22  ...  a2j  ...  a2n ]
        [  :    :         :         :  ]
        [ ai1  ai2  ...  aij  ...  ain ]
        [  :    :         :         :  ]
        [ am1  am2  ...  amj  ...  amn ]

The ijth component of A, denoted aij, is the number appearing in the
ith row and jth column of A. We will sometimes write the matrix A as
A = (aij). An m x n matrix is said to have size m x n.

Examples:

    (1) [ 1  2 ]            (2) [ 1  4  3  2  2 ]
        [ 3  4 ]                [ 3  5  6  4  3 ]
        [ 5  6 ](3x2)           [ 5  1  2  0  7 ]
                                [ 1  2  1  9  8 ](4x5)

        m = 3, n = 2            m = 4, n = 5
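The sizes above can be checked numerically. The following sketch uses Python with NumPy (our choice for illustration; the project itself contains no code) to build the two example matrices and read off their sizes and entries:

```python
import numpy as np

# The two example matrices above; .shape returns (rows, columns) = (m, n).
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])           # a 3x2 matrix: m = 3, n = 2
B = np.array([[1, 4, 3, 2, 2],
              [3, 5, 6, 4, 3],
              [5, 1, 2, 0, 7],
              [1, 2, 1, 9, 8]])  # a 4x5 matrix: m = 4, n = 5

# A[i, j] is the entry a_{i+1, j+1}, since NumPy indexes from 0.
entry = A[0, 1]  # a12 = 2
```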
(1.2) Some Special Types of Matrices

1. Square Matrix
A matrix whose number of rows equals its number of columns is called a
square matrix. That is, for

    A = [ a11  a12  ...  a1n ]
        [ a21  a22  ...  a2n ]
        [  :    :         :  ]
        [ am1  am2  ...  amn ](mxn)

when m = n,

    A = [ a11  a12  ...  a1n ]
        [ a21  a22  ...  a2n ]
        [  :    :         :  ]
        [ an1  an2  ...  ann ](nxn)

is a square matrix.

Example:

    K = [ 1  6  9 ]
        [ 2  5  8 ]
        [ 3  4  7 ](3x3)

2. Unit (Identity) Matrix
The n x n matrix I_n = (aij), defined by aij = 1 if i = j and aij = 0 if
i ≠ j, is called the n x n identity matrix.

    I = [ 1  0  ...  0 ]
        [ 0  1  ...  0 ]
        [ :  :       : ]
        [ 0  0  ...  1 ]

Example:

    I2 = [ 1  0 ]       I3 = [ 1  0  0 ]
         [ 0  1 ]            [ 0  1  0 ]
                             [ 0  0  1 ]
4. Diagonal Matrix
A diagonal matrix is a square matrix in which all the elements not on the
main diagonal are zeros.

    A = [ a11   0   ...   0  ]
        [  0   a22  ...   0  ]
        [  :    :         :  ]
        [  0    0   ...  ann ]

The elements of a square matrix A whose subscripts are equal,
namely a11, a22, ..., ann, form the main diagonal.

Example:

    A = [ 1  0  0 ]
        [ 0  2  0 ]      main diagonal = 1, 2, 3
        [ 0  0  3 ]
5. Commutative Matrices
We say that the matrices A and B are commutative under the product
operation if A and B are square matrices of the same size and A.B = B.A.

Example:

    A = [ 5  1 ],   B = [ 2  4 ]
        [ 1  5 ]        [ 4  2 ]

    A.B = [ 5  1 ][ 2  4 ] = [ 10+4  20+2 ] = [ 14  22 ]
          [ 1  5 ][ 4  2 ]   [ 2+20  4+10 ]   [ 22  14 ]

    B.A = [ 2  4 ][ 5  1 ] = [ 10+4  2+20 ] = [ 14  22 ]
          [ 4  2 ][ 1  5 ]   [ 20+2  4+10 ]   [ 22  14 ]

Then A.B = B.A.

Note: A square matrix A is said to be invertible if there exists B such
that AB = BA = I. Such a B is denoted A^(-1) and is unique.
If det(A) = 0 then the matrix is not invertible.
6. Triangular Matrix
A square n x n matrix A (n ≥ 2) is a triangular matrix iff aij = 0
whenever i ≥ j + 1, or whenever j ≥ i + 1.
There are two types of triangular matrices:

i. Upper Triangular Matrix
A square matrix is called an upper triangular matrix if all the elements
below the main diagonal are zero.

Example:

    A = [ 1  5  9 ]
        [ 0  2  1 ]
        [ 0  0  3 ]

ii. Lower Triangular Matrix
A square matrix is called a lower triangular matrix if all the elements
above the main diagonal are zero.

Example:

    A = [ 1  0  0 ]
        [ 6  2  0 ]
        [ 9  7  3 ]
Transpose of a Matrix
The transpose of an m x n matrix A, denoted A^T, is the n x m matrix with
(A^T)ij = Aji.

    A = [ a11  a12  ...  a1n ]        A^T = [ a11  a21  ...  am1 ]
        [ a21  a22  ...  a2n ]              [ a12  a22  ...  am2 ]
        [  :    :         :  ]              [  :    :         :  ]
        [ am1  am2  ...  amn ](mxn)         [ a1n  a2n  ...  amn ](nxm)

The rows and columns of A are interchanged in A^T.

Example:

    A = [ 0  4 ]       A^T = [ 0  7  3 ]
        [ 7  0 ]             [ 4  0  1 ]
        [ 3  1 ]

Note: transposing converts row vectors to column vectors, and vice versa.

Properties of Transpose
Let A and B be matrices and c be a scalar. Assume that the sizes of the
matrices are such that the operations can be performed.
- (A + B)^T = A^T + B^T      (transpose of a sum)
- (cA)^T = cA^T              (transpose of a scalar multiple)
- (AB)^T = B^T A^T           (transpose of a product)
- (A^T)^T = A
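The four transpose properties can be verified on concrete matrices. A small NumPy sketch (an illustration of ours, not part of the project) checks each one; note how the order of the factors reverses in the product rule:

```python
import numpy as np

A = np.array([[0, 4], [7, 0], [3, 1]])   # 3x2, the example matrix above
B = np.array([[1, 2, 3], [4, 5, 6]])     # 2x3, chosen so that AB exists
c = 5

assert np.array_equal((A + A).T, A.T + A.T)   # (A + B)^T = A^T + B^T (with B = A)
assert np.array_equal((c * A).T, c * A.T)     # (cA)^T = c A^T
assert np.array_equal((A @ B).T, B.T @ A.T)   # (AB)^T = B^T A^T, order reverses
assert np.array_equal(A.T.T, A)               # (A^T)^T = A
```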
7. Symmetric Matrix
A real matrix A is called symmetric if A^T = A. In other words, A is
square (n x n) and aij = aji for all 1 ≤ i ≤ n, 1 ≤ j ≤ n.

Example:

    A = [ 1  0  5 ]       A^T = [ 1  0  5 ]
        [ 0  2  6 ]             [ 0  2  6 ]
        [ 5  6  3 ]             [ 5  6  3 ]

Note: if A = A^T then A is a symmetric matrix.
8. Skew-Symmetric Matrix
A real matrix A is called skew-symmetric if A^T = -A. In other words, A
is square (n x n) and aij = -aji for all 1 ≤ i ≤ n, 1 ≤ j ≤ n.

Example:

    A = [  0   5  6 ]     -A = [  0  -5  -6 ]     A^T = [  0  -5  -6 ]
        [ -5   0  8 ]          [  5   0  -8 ]           [  5   0  -8 ]
        [ -6  -8  0 ]          [  6   8   0 ]           [  6   8   0 ]

Therefore A^T = -A.
Determinant of a Matrix
The determinant of a square matrix A = [aij] is a number denoted by |A|
or det(A), through which important properties such as singularity can
be briefly characterized. This number is defined as the following
function of the matrix elements:

    |A| = det(A) = ∑ ± a1j1 a2j2 ... anjn

where the column indices j1, j2, ..., jn are taken from the set
{1, 2, ..., n} with no repetitions allowed. The plus (minus) sign is
taken if the permutation (j1 j2 ... jn) is even (odd).
Some properties of determinants will be discussed later in this chapter.
9. Singular and Nonsingular Matrices
A square matrix A is said to be singular if det(A) = 0;
A is nonsingular if det(A) ≠ 0.

Theorem:
Let A be a square matrix. Then A is singular if
(a) all elements of a row (column) are zero;
(b) two rows (columns) are equal;
(c) two rows (columns) are proportional.

Note: (b) is a special case of (c), but we list it separately to give it
special emphasis.

Example: we show that the following matrices are singular.

    (a) A = [  2  0  -7 ]      (b) B = [ 2  -1  3 ]
            [  3  0   1 ]              [ 1   2  4 ]
            [ -4  0   9 ]              [ 2   4  8 ]

(a) All the elements in column 2 of A are zero. Thus det(A) = 0.
(b) Observe that every element in row 3 of B is twice the corresponding
element in row 2: (row 3) = 2(row 2). Rows 2 and 3 are proportional.
Thus det(B) = 0.
10. Orthogonal Matrix
We say that a matrix A is orthogonal if A.A^T = I = A^T.A.

Example:

    A = [ 1    0      0   ]       A^T = [ 1    0      0   ]
        [ 0   1/2   √3/2  ]             [ 0   1/2   -√3/2 ]
        [ 0  -√3/2   1/2  ]             [ 0   √3/2   1/2  ]

    A.A^T = [ 1    0      0   ] [ 1    0      0   ]
            [ 0   1/2   √3/2  ] [ 0   1/2   -√3/2 ]
            [ 0  -√3/2   1/2  ] [ 0   √3/2   1/2  ]

          = [ 1        0                        0            ]
            [ 0   1/4 + 3/4    (1/2)(-√3/2) + (√3/2)(1/2)    ]
            [ 0   (-√3/2)(1/2) + (1/2)(√3/2)    3/4 + 1/4    ]

          = [ 1  0  0 ]
            [ 0  1  0 ] = I
            [ 0  0  1 ]

Therefore A is an orthogonal matrix.
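The defining condition A.A^T = I = A^T.A is easy to test numerically, allowing for floating-point rounding. A brief NumPy sketch (an illustration of ours) checks the rotation-like matrix from the example:

```python
import numpy as np

s = np.sqrt(3) / 2
A = np.array([[1, 0,    0],
              [0, 0.5,  s],
              [0, -s, 0.5]])

# A is orthogonal iff A @ A.T and A.T @ A both equal I (up to rounding).
is_orthogonal = (np.allclose(A @ A.T, np.eye(3))
                 and np.allclose(A.T @ A, np.eye(3)))
```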
    A = [  1  -2  -6 ]
        [ -3   2   9 ],     k = 2
        [  2   0  -3 ]

    A^(k+1) = A^(2+1) = A^3

    A.A = [ -5  -6  -6 ]          A^2.A = [  1  -2  -6 ]
          [  9  10   9 ]                  [ -3   2   9 ] = A
          [ -4  -4  -3 ]                  [  2   0  -3 ]

Therefore A is a periodic matrix (of period k = 2), since A^(k+1) = A.
14. Stochastic Matrix
An n x n matrix A is called stochastic if each element is a number
between 0 and 1 and each column of A adds up to 1.

    A = [ 1/4  1/3   0  ]
        [ 1/2  2/3  3/4 ]
        [ 1/4   0   1/4 ]

    ∑ column(1) = 1,   ∑ column(2) = 1,   ∑ column(3) = 1
15. Trace of a Matrix
Let A be a square matrix. The trace of A, denoted tr(A), is the sum of
the diagonal elements of A. Thus if A is an n x n matrix,

    tr(A) = a11 + a22 + ... + ann

Example: The trace of the matrix

    A = [ 4   1  -2 ]
        [ 2  -5   6 ]
        [ 7   3   0 ]

is tr(A) = 4 + (-5) + 0 = -1.

Properties of Trace
Let A and B be matrices and c be a scalar. Assume that the sizes of the
matrices are such that the operations can be performed.
- tr(A + B) = tr(A) + tr(B)
- tr(AB) = tr(BA)
- tr(cA) = c.tr(A)
- tr(A^T) = tr(A)

Note: if A is not square then the trace is not defined.
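The trace properties can be confirmed on the worked example. A short NumPy sketch (our illustration; the second matrix B is an arbitrary choice of ours) checks each property, including tr(AB) = tr(BA), which holds even when AB ≠ BA:

```python
import numpy as np

A = np.array([[4, 1, -2], [2, -5, 6], [7, 3, 0]])  # the example matrix above
B = np.array([[1, 0, 2], [3, 1, 1], [0, 2, 4]])    # an arbitrary 3x3 companion
c = 3

tr = np.trace(A)  # 4 + (-5) + 0 = -1

assert np.trace(A + B) == np.trace(A) + np.trace(B)
assert np.trace(A @ B) == np.trace(B @ A)   # holds even though AB != BA
assert np.trace(c * A) == c * np.trace(A)
assert np.trace(A.T) == np.trace(A)
```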
16. Hermitian Matrix
A square matrix A is said to be Hermitian if (conj A)^T = A.
Note: The conjugate of a complex number z = a + ib is defined and
written conj(z) = a - ib.

Example:

    A = [   3     7+2i ]   is Hermitian.
        [ 7-2i    -2   ]

Taking the complex conjugate of each of the elements in A gives

    conj A = [   3     7-2i ]
             [ 7+2i    -2   ]

Now taking the transpose of conj A, we get

    (conj A)^T = [   3     7+2i ]
                 [ 7-2i    -2   ]

So we can see that (conj A)^T = A.
(1.3) Operations on Matrices

1. Addition
If A and B are m x n matrices such that

    A = [ a11  a12  ...  a1n ]        B = [ b11  b12  ...  b1n ]
        [ a21  a22  ...  a2n ]            [ b21  b22  ...  b2n ]
        [  :    :         :  ]            [  :    :         :  ]
        [ am1  am2  ...  amn ](mxn)       [ bm1  bm2  ...  bmn ](mxn)

then

    A + B = [ a11+b11  a12+b12  ...  a1n+b1n ]
            [ a21+b21  a22+b22  ...  a2n+b2n ]
            [    :        :              :   ]
            [ am1+bm1  am2+bm2  ...  amn+bmn ]

Note: Addition of matrices of different sizes is not defined.

Example:

    [ 0  4 ]   [ 1  2 ]   [ 1  6 ]
    [ 7  0 ] + [ 2  3 ] = [ 9  3 ]
    [ 3  1 ]   [ 0  4 ]   [ 3  5 ]

Properties of Matrix Addition
- A + B = B + A                   (commutative)
- (A + B) + C = A + (B + C)       (associative), so we can write A + B + C
- A + 0 = 0 + A = A
- (A + B)^T = A^T + B^T
2. Subtraction
Matrix subtraction is defined for two matrices A = [aij] and B = [bij] of
the same size in the usual way; that is,

    A - B = [aij] - [bij] = [aij - bij].

If A and B are m x n matrices, then

    A - B = [ a11-b11  a12-b12  ...  a1n-b1n ]
            [ a21-b21  a22-b22  ...  a2n-b2n ]
            [    :        :              :   ]
            [ am1-bm1  am2-bm2  ...  amn-bmn ]

Note: Subtraction of matrices of different sizes is not defined.

Example:

    [ 0  4 ]   [ 1  2 ]   [ -1   2 ]
    [ 7  0 ] - [ 2  3 ] = [  5  -3 ]
    [ 3  1 ]   [ 0  4 ]   [  3  -3 ]
3. Negative
Let C be a matrix. The negative of C, denoted -C, is defined as (-1)C,
where each element of C is multiplied by (-1).

Example:

    C = [ 3  -2  4 ]      -C = [ -3  2  -4 ]
        [ 7  -3  0 ]           [ -7  3   0 ]
4. Multiplication
We can form the product of two matrices A and B if the number of columns
of A equals the number of rows of B. The element in row i and column j of
AB is obtained by multiplying the corresponding elements of row i of A
and column j of B and adding the products.

If A is m x 3 and B is 3 x n, then AB is m x n.

Note: The product of A and B cannot be obtained if the number of columns
in A does not equal the number of rows in B.

Let A have n columns and B have n rows. The ith row of A is
[ai1 ai2 ... ain] and the jth column of B is

    [ b1j ]
    [ b2j ]
    [  :  ]
    [ bnj ]

Thus if C = AB, then cij = ai1 b1j + ai2 b2j + ... + ain bnj.
Properties of Matrix Multiplication
- 0A = 0, A0 = 0               (here 0 can be a scalar, or a compatible zero matrix)
- IA = AI = A
- (AB)C = A(BC), so we can write ABC
- α(AB) = (αA)B, where α is a scalar
- A(B + C) = AB + AC,   (A + B)C = AC + BC
- (AB)^T = B^T A^T
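The entry formula and these properties can be exercised on small matrices. A NumPy sketch (our illustration, with matrices chosen by us) computes one entry by hand and shows that, unlike addition, multiplication is generally not commutative:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
I = np.eye(2, dtype=int)

C = A @ B  # entry (i, j) is the dot product of row i of A and column j of B
assert C[0, 1] == A[0, 0] * B[0, 1] + A[0, 1] * B[1, 1]

# IA = AI = A, but AB != BA in general:
assert np.array_equal(A @ I, A)
assert not np.array_equal(A @ B, B @ A)
```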
Scalar Multiplication
Let A be a matrix and c be a scalar. The scalar multiple of A by c,
denoted cA, is the matrix obtained by multiplying every element of A by
c; the matrix cA is the same size as A. Thus if B = cA, then bij = c.aij.

Example: let A = [ 1  -2  4 ]; determine 3A.
                 [ 7  -3  0 ]

Multiplying every element of A by 3, we get

    3A = [  3  -6  12 ]
         [ 21  -9   0 ]

Observe that A and 3A are both 2 x 3 matrices.
Remark:
If A is a square matrix, then A multiplied by itself k times is written A^k:

    A^k = A.A. ... .A    (k times)

Familiar rules of exponents of real numbers hold for matrices.

Theorem:
If A is an n x n square matrix and r and s are nonnegative integers, then
1. A^r A^s = A^(r+s)
2. (A^r)^s = A^(rs)
3. A^0 = I_n (by definition)

We verify the first rule; the proof of the second rule is similar:

    A^r A^s = (A...A, r times)(A...A, s times) = A...A, r+s times = A^(r+s)

Example: If A = [  1  -2 ], compute A^4.
                [ -1   0 ]

This example illustrates how the above rules can be used to reduce the
amount of computation involved in multiplying matrices. We know that
A^4 = AAAA, so we could perform three matrix multiplications to arrive at
A^4. However, we can apply rule 2 above to write A^4 = (A^2)^2 and thus
arrive at the result using two products. We get

    A^2 = [  1  -2 ][  1  -2 ] = [  3  -2 ]
          [ -1   0 ][ -1   0 ]   [ -1   2 ]

    A^4 = [  3  -2 ][  3  -2 ] = [ 11  -10 ]
          [ -1   2 ][ -1   2 ]   [ -5    6 ]
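The two-product shortcut A^4 = (A^2)^2 can be replayed in code. A NumPy sketch (our illustration) reproduces the worked example and cross-checks it against the library's built-in matrix power:

```python
import numpy as np

A = np.array([[1, -2], [-1, 0]])

A2 = A @ A    # first product
A4 = A2 @ A2  # second product: A^4 = (A^2)^2, two products instead of three

assert np.array_equal(A2, np.array([[3, -2], [-1, 2]]))
assert np.array_equal(A4, np.array([[11, -10], [-5, 6]]))
assert np.array_equal(A4, np.linalg.matrix_power(A, 4))
```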
The usual index laws hold provided AB = BA:
- (AB)^n = A^n B^n
- A^n B^n = B^n A^n
- (A + B)^2 = A^2 + 2AB + B^2
- (A + B)^n = ∑ (n choose i) A^i B^(n-i), summed over i = 0, ..., n
- (A + B)(A - B) = A^2 - B^2
Equality
The matrices A and B are said to be equal if A and B have the same size
and corresponding elements are equal; that is, A and B are m x n matrices
with A = [aij], B = [bij] and aij = bij for 1 ≤ i ≤ m, 1 ≤ j ≤ n.

Example:

    A = [ 1  2  3 ]    B = [ 3  1  2 ]    C = [ 1  1+1   3   ]
        [ 4  5  6 ]        [ 4  5  6 ]        [ 4   5   12/2 ]
        [ 7  8  9 ]        [ 7  8  9 ]        [ 7  5+3  3.3  ]

    A = C
Minor
Let A be an n x n square matrix. The determinant of the
(n-1) x (n-1) matrix obtained from A by deleting the ith row and jth
column of A, denoted Mij, is called the ijth minor of A.

    A = [ a11  a12  a13 ]
        [ a21  a22  a23 ]
        [ a31  a32  a33 ]

    M11 = | a22  a23 |    M12 = | a21  a23 |    M13 = | a21  a22 |
          | a32  a33 |          | a31  a33 |          | a31  a32 |
Cofactors
The cofactor Cij is defined as the coefficient of aij in the determinant
of A. It is given by the formula

    Cij = (-1)^(i+j) Mij

where the minor Mij is the determinant of order (n-1) x (n-1) formed by
deleting the row and column containing aij.

    C11 = (-1)^(1+1) M11 = +1.|a22 a23; a32 a33| = a22 a33 - a32 a23
    C12 = (-1)^(1+2) M12 = -1.|a21 a23; a31 a33| = -(a21 a33 - a31 a23)
    C13 = (-1)^(1+3) M13 = +1.|a21 a22; a31 a32| = a21 a32 - a31 a22
Definition:
Let A be an n x n matrix and Cij be the cofactor of aij. The matrix whose
(i, j)th element is Cij is called the matrix of cofactors of A. The
transpose of this matrix is called the adjoint of A and is denoted adj(A).

    [ C11  C12  ...  C1n ]           [ C11  C12  ...  C1n ]^T
    [ C21  C22  ...  C2n ]           [ C21  C22  ...  C2n ]
    [  :    :         :  ]           [  :    :         :  ]
    [ Cn1  Cn2  ...  Cnn ]           [ Cn1  Cn2  ...  Cnn ]
    matrix of cofactors              adjoint matrix

Example: give the matrix of cofactors and the adjoint matrix of the
following matrix A.

    A = [  2   0   3 ]
        [ -1   4  -2 ]
        [  1  -3   5 ]

The cofactors of A are as follows:

    C11 =  |4 -2; -3 5| = 14     C12 = -|-1 -2; 1 5| = 3     C13 =  |-1 4; 1 -3| = -1
    C21 = -|0 3; -3 5|  = -9     C22 =  |2 3; 1 5|   = 7     C23 = -|2 0; 1 -3|  = 6
    C31 =  |0 3; 4 -2|  = -12    C32 = -|2 3; -1 -2| = 1     C33 =  |2 0; -1 4|  = 8

The matrix of cofactors of A is

    [  14   3  -1 ]
    [  -9   7   6 ]
    [ -12   1   8 ]

The adjoint of A is the transpose of this matrix:

    adj(A) = [ 14  -9  -12 ]
             [  3   7    1 ]
             [ -1   6    8 ]
(1.4) The Inverse of a Square Matrix

The inverse b^(-1) of a scalar (= a number) b is defined by

    b.b^(-1) = 1.

For square matrices we use a similar definition: the inverse A^(-1) of an
n x n matrix A fulfils the relation

    A.A^(-1) = I

where I is the n x n unit matrix defined earlier.

Example: we show that B is the inverse of A, where I is, as usual, the
identity matrix of the appropriate size.

    A = [ -1  1 ],      B = (1/2) [ 0  -1 ]
        [ -2  0 ]                 [ 2  -1 ]

All we need do is check that AB = BA = I:

    AB = [ -1  1 ] . (1/2) [ 0  -1 ] = (1/2) [ 2  0 ] = [ 1  0 ]
         [ -2  0 ]         [ 2  -1 ]         [ 0  2 ]   [ 0  1 ]

The reader should check that BA = I also.

Note: if A^(-1) exists then

    det(A).det(A^(-1)) = det(A.A^(-1)) = det(I) = 1
Hence det(A^(-1)) = (det A)^(-1).

Example:

    A = [ 3  7 ]
        [ 2  6 ]

    det(A) = 18 - 14 = 4,   (det(A))^(-1) = 1/4

    A^(-1) = (1/4) [  6  -7 ] = [ 6/4  -7/4 ]
                   [ -2   3 ]   [ -2/4  3/4 ]

    det(A^(-1)) = 18/16 - 14/16 = 4/16 = 1/4

Therefore det(A^(-1)) = (det A)^(-1), 1/4 = 1/4.

Remark:
- Non-square matrices do not have an inverse.
- The inverse of A is usually written A^(-1).
- Not all square matrices have an inverse.
- A square matrix A is invertible if and only if det(A) ≠ 0.
- A^(-1) exists if and only if A is nonsingular.
Finding the Inverse of a Matrix

1. The Inverse of a 2 x 2 Matrix
If ad - bc ≠ 0 then the 2 x 2 matrix

    A = [ a  b ]
        [ c  d ]

has a (unique) inverse given by

    A^(-1) = 1/(ad - bc) [  d  -b ]
                         [ -c   a ]

Note: if ad - bc = 0 then A has no inverse.

In words: to find the inverse of a 2 x 2 matrix A we effectively
interchange the diagonal elements a and d, change the sign of the other
two elements and then divide by the determinant of A.

Example:

    A = [ 3  7 ]
        [ 2  6 ]

    det(A) = 18 - 14 = 4,   A^(-1) = (1/4) [  6  -7 ] = [ 6/4  -7/4 ]
                                           [ -2   3 ]   [ -2/4  3/4 ]
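The swap/negate/divide rule above translates directly into a small function. This sketch (our illustration; the function name is our own) implements it and reproduces the worked example:

```python
import numpy as np

def inverse_2x2(M):
    """Inverse of a 2x2 matrix via the swap/negate/divide rule; None if singular."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    if det == 0:
        return None  # ad - bc = 0: no inverse exists
    # Interchange a and d, negate b and c, divide by the determinant.
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[3, 7], [2, 6]])
A_inv = inverse_2x2(A)  # det(A) = 4, so A^(-1) = (1/4) [[6, -7], [-2, 3]]
```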
2. The Inverse of a 3 x 3 Matrix - The Determinant Method
Given a square matrix A:
- Find det(A). If det(A) = 0 then, as we know, A^(-1) does not exist. If
  det(A) ≠ 0 we can proceed to find the inverse matrix.
- Replace each element of A by its cofactor:

    C11 = +|a22 a23; a32 a33|   C12 = -|a21 a23; a31 a33|   C13 = +|a21 a22; a31 a32|
    C21 = -|a12 a13; a32 a33|   C22 = +|a11 a13; a31 a33|   C23 = -|a11 a12; a31 a32|
    C31 = +|a12 a13; a22 a23|   C32 = -|a11 a13; a21 a23|   C33 = +|a11 a12; a21 a22|

    matrix of cofactors C = [ C11  C12  C13 ]
                            [ C21  C22  C23 ]
                            [ C31  C32  C33 ]

- Transpose the result to form the adjoint matrix:

    adj(A) = C^T = [ C11  C21  C31 ]
                   [ C12  C22  C32 ]
                   [ C13  C23  C33 ]

- Then A^(-1) = (1/det(A)) . adj(A).
Example: find the inverse of

    A = [  1  -1   2 ]
        [ -3   1   2 ]
        [  3  -2  -1 ]

    det(A) = 1.|1 2; -2 -1| - (-1).|-3 2; 3 -1| + 2.|-3 1; 3 -2|
           = 1 x 3 + 1 x (-3) + 2 x 3 = 6

Since the determinant is non-zero, an inverse exists. Calculate the
matrix of minors:

    M = [ |1 2; -2 -1|    |-3 2; 3 -1|   |-3 1; 3 -2| ]     [  3  -3   3 ]
        [ |-1 2; -2 -1|   |1 2; 3 -1|    |1 -1; 3 -2| ]  =  [  5  -7   1 ]
        [ |-1 2; 1 2|     |1 2; -3 2|    |1 -1; -3 1| ]     [ -4   8  -2 ]

Modify the signs according to whether i + j is even or odd to calculate
the matrix of cofactors:

    C = [  3   3   3 ]
        [ -5  -7  -1 ]
        [ -4  -8  -2 ]

It follows that

    A^(-1) = (1/6) C^T = (1/6) [ 3  -5  -4 ]
                               [ 3  -7  -8 ]
                               [ 3  -1  -2 ]

To check that we have made no mistake we can compute

    A^(-1).A = (1/6) [ 3  -5  -4 ] [  1  -1   2 ]   [ 1  0  0 ]
                     [ 3  -7  -8 ] [ -3   1   2 ] = [ 0  1  0 ]
                     [ 3  -1  -2 ] [  3  -2  -1 ]   [ 0  0  1 ]

This way of computing the inverse is only useful for hand calculations in
the cases of 2 x 2 or 3 x 3 matrices.
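The minors/cofactors/adjoint pipeline of the determinant method can be automated for the same matrix. A NumPy sketch (our illustration) builds the cofactor matrix entry by entry and recovers A^(-1) = adj(A)/det(A):

```python
import numpy as np

A = np.array([[1, -1, 2], [-3, 1, 2], [3, -2, -1]])  # the example matrix

# Cofactor matrix: C[i, j] = (-1)^(i+j) * det(A with row i and column j deleted).
C = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)

A_inv = C.T / np.linalg.det(A)  # A^(-1) = adj(A) / det(A), with adj(A) = C^T
```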
Definition:
A matrix is said to be in row-echelon form if
1. Any rows consisting entirely of zeros are at the bottom of the matrix.
2. If a row does not consist entirely of zeros, then its first non-zero
   entry (i.e. the leftmost nonzero entry) is a 1. This 1 is called a
   leading 1.
3. In any two successive rows, neither of which consists entirely of
   zeros, the leading 1 of the lower row is to the right of the leading 1
   of the higher row.

Example: The following matrices are all in row-echelon form.

    [ 1  -6  9   1   0 ]    [ 1  0  5 ]    [ 1  -8  10  5  -3 ]
    [ 0   0  1  -4  -5 ]    [ 0  1  3 ]    [ 0   1  13  9  12 ]
    [ 0   0  0   1   2 ]    [ 0  0  1 ]    [ 0   0   0  1   1 ]
                                           [ 0   0   0  0   0 ]
Definition:
A matrix is in reduced row-echelon form if
1. Any rows consisting entirely of zeros are grouped at the bottom of the
   matrix.
2. The first nonzero element of each other row is 1. This element is
   called a leading 1.
3. The leading 1 of each row after the first is positioned to the right
   of the leading 1 of the previous row.
4. All other elements in a column that contains a leading 1 are zero.

Example: The following matrices are all in reduced row-echelon form.

    [ 1  0  8 ]    [ 1  0  0  7 ]    [ 1  4  0  0 ]    [ 1  2  3  0 ]
    [ 0  1  2 ]    [ 0  1  0  3 ]    [ 0  0  1  0 ]    [ 0  0  0  1 ]
    [ 0  0  0 ]    [ 0  0  1  9 ]    [ 0  0  0  1 ]    [ 0  0  0  0 ]
The following matrices are not in reduced row-echelon form.

    [ 1  2  0  4 ]      row of zeros not at bottom of matrix
    [ 0  0  0  0 ]
    [ 0  0  1  3 ]

    [ 1  2  0  3  0 ]   first nonzero element in row 2 is not 1
    [ 0  0  3  4  0 ]
    [ 0  0  0  0  1 ]

    [ 1  0  0  2 ]      leading 1 in row 3 not to the right of leading 1 in row 2
    [ 0  0  1  4 ]
    [ 0  1  0  3 ]

    [ 1  7  0  8 ]      nonzero element above leading 1 in row 2
    [ 0  1  0  3 ]
    [ 0  0  1  2 ]
    [ 0  0  0  0 ]
3. The Inverse of a 3 x 3 Matrix - Gauss-Jordan Elimination Method
Let A be an n x n matrix.
1. Adjoin the identity n x n matrix I_n to A to form the matrix [A : I_n].
2. Compute the reduced echelon form of [A : I_n].
If the reduced echelon form is of the type [I_n : B], then B is the
inverse of A. If the reduced echelon form is not of the type [I_n : B],
in that the first n x n submatrix is not I_n, then A has no inverse.

The allowed row operations of the method are:
- Interchanging two rows.
- Adding a multiple of one row to another row.
- Multiplying one row by a non-zero constant.

The following example illustrates the method.

Example: determine the inverse of the matrix

    A = [  1  -1  -2 ]
        [  2  -3  -5 ]
        [ -1   3   5 ]

Applying the method of Gauss-Jordan elimination, we get

    [A : I_n] = [  1  -1  -2  |  1  0  0 ]
                [  2  -3  -5  |  0  1  0 ]
                [ -1   3   5  |  0  0  1 ]

    R2 := R2 + (-2)R1        [  1  -1  -2  |   1  0  0 ]
    R3 := R3 + R1        ->  [  0  -1  -1  |  -2  1  0 ]
                             [  0   2   3  |   1  0  1 ]
Properties of Inverse
- If AB = I and CA = I, then B = C. Consequently A has at most one
  inverse.
- (AB)^(-1) = B^(-1) A^(-1) (assuming A, B are invertible).
- (A^T)^(-1) = (A^(-1))^T (assuming A is invertible).
- (A^(-1))^(-1) = A, i.e. the inverse of the inverse is the original
  matrix (assuming A is invertible).
- I^(-1) = I.
- (hA)^(-1) = (1/h) A^(-1) (assuming A invertible, h ≠ 0).
- If y = Ax, where x is in R^n and A is invertible, then x = A^(-1) y:
      A^(-1) y = A^(-1) A x = I x = x.
- If A1, A2, ..., Am are all invertible, so is their product A1 A2 ... Am,
  and (A1 A2 ... Am)^(-1) = Am^(-1) ... A2^(-1) A1^(-1).
- If A is invertible, so is A^k for k ≥ 1, and (A^k)^(-1) = (A^(-1))^k.
Corollary 1: If AB = I and CA = I, then B = C; consequently A has at most
one inverse.

Proof: If AB = I and CA = I, then B = IB = CAB = CI = C. If B and C are
both inverses of A then, by definition, AB = BA = I and AC = CA = I; in
particular AB = I and CA = I, so that B = C.

Corollary 2: If A and B are both invertible, then so is AB, and
(AB)^(-1) = B^(-1) A^(-1).

Proof: We have a guess for (AB)^(-1); to check that the guess is correct,
we merely need to check the requirements of the definition:

    (AB)(B^(-1) A^(-1)) = A B B^(-1) A^(-1) = A I A^(-1) = A A^(-1) = I
    (B^(-1) A^(-1))(AB) = B^(-1) A^(-1) A B = B^(-1) I B = B^(-1) B = I

Corollary 3: If A is invertible, then so is A^T, and
(A^T)^(-1) = (A^(-1))^T.

Proof: Let us use B to denote the inverse of A (so there won't be so many
superscripts around). By definition

    AB = BA = I

These three matrices are the same, so their transposes are the same.
Since (AB)^T = B^T A^T, (BA)^T = A^T B^T and I^T = I, we have
B^T A^T = A^T B^T = I, which is exactly the definition of "the inverse of
A^T is B^T".
(1.5) Some Properties of Determinants

1. Rows and columns can be interchanged without affecting the value of a
determinant. Consequently det(A) = det(A^T).

Example:

    A = [ 3  4 ],   A^T = [ 3  1 ]
        [ 1  2 ]          [ 4  2 ]

    det(A) = 2, det(A^T) = 2.  Therefore det(A) = det(A^T).

2. If two rows, or two columns, are interchanged, the sign of the
determinant is reversed.

Example: if A = [ 3   4 ], then
                [ 1  -2 ]

    det[ 3   4 ] = -10,     det[ 1  -2 ] = 10
       [ 1  -2 ]               [ 3   4 ]

3. If a row (or column) is changed by adding to or subtracting from its
elements the corresponding elements of any other row (or column), the
determinant remains unaltered.

Example:

    det[ 3   4 ] = det[ 3+1  4-2 ] = det[ 4   2 ] = -10
       [ 1  -2 ]      [  1   -2  ]      [ 1  -2 ]

4. If the elements in any row (or column) have a common factor α, then
the determinant equals the determinant of the corresponding matrix in
which α = 1, multiplied by α.

Example:

    A = [ 6   8 ],   α = 2
        [ 1  -2 ]

    det[ 6   8 ] = -20 = 2.det[ 3   4 ] = 2 x (-10) = -20
       [ 1  -2 ]              [ 1  -2 ]

5. The determinant of an upper triangular or lower triangular matrix is
the product of the main diagonal entries.

Example: A upper triangular, B lower triangular:

    A = [ 2  2   1 ]            B = [ 2   0  0 ]
        [ 0  2  -1 ]                [ 3  -3  0 ]
        [ 0  0   4 ]                [ 4   1  4 ]

    det(A) = 2 x 2 x 4 = 16,    det(B) = 2 x (-3) x 4 = -24

This rule is easily verified from the definition
det(A) = ∑ ± a1j1 a2j2 ... anjn, because all terms vanish except
j1 = 1, j2 = 2, ..., jn = n, which gives the product of the main diagonal
entries. Diagonal matrices are a particular case of this rule.
6. The determinant of the product of two square matrices is the product
of the individual determinants: det(AB) = det(A).det(B).

Example:

    A = [ 6   8 ],   B = [ 6   0 ]
        [ 1  -2 ]        [ 1  -2 ]

    det(A) = 6 x (-2) - 8 x 1 = -20
    det(B) = 6 x (-2) - 0 x 1 = -12

    det(AB) = det[ 44  -16 ] = 240
                 [  4    4 ]

    det(A).det(B) = (-20) x (-12) = 240

This rule can be generalized to any number of factors. One immediate
application is to matrix powers: |A^2| = |A||A| = |A|^2, and more
generally |A^n| = |A|^n for integer n.

7. Let A be an n x n matrix and c be a scalar. Then
det(cA) = c^n det(A).

Example: For the matrix below we compute both det(A) and det(2A).

    A = [  4  -2   5 ]          2A = [  8   -4  10 ]
        [ -1  -7  10 ]               [ -2  -14  20 ]
        [  0   1  -3 ]               [  0    2  -6 ]

    det(A) = 45,   det(2A) = 360 = (8)(45) = 2^3 det(A)

8. Suppose that A is an invertible matrix. Then det(A^(-1)) = 1/det(A).

Example: For the given matrix we compute det(A) and det(A^(-1)).

    A = [ 8  -9 ]       A^(-1) = [ 5/58   9/58 ]
        [ 2   5 ]                [ -1/29  4/29 ]

Here are the determinants of both of these matrices:

    det(A) = 58,   det(A^(-1)) = 1/58

9. Suppose that A is an n x n triangular matrix. Then
det(A) = a11 a22 ... ann.
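Several of these properties can be spot-checked on the 2 x 2 examples used above. A NumPy sketch (our illustration) verifies properties 1, 2, 6, 7 and 8 numerically, with `np.isclose` absorbing floating-point rounding:

```python
import numpy as np

A = np.array([[6., 8.], [1., -2.]])   # det(A) = -20
B = np.array([[6., 0.], [1., -2.]])   # det(B) = -12

det = np.linalg.det
assert np.isclose(det(A), det(A.T))                  # property 1
assert np.isclose(det(A[::-1]), -det(A))             # property 2: rows swapped
assert np.isclose(det(A @ B), det(A) * det(B))       # property 6
assert np.isclose(det(2 * A), 2 ** 2 * det(A))       # property 7 with n = 2
assert np.isclose(det(np.linalg.inv(A)), 1 / det(A)) # property 8
```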
Finding the Determinant

1. The Determinant of a 2 x 2 Matrix

    A = [ a11  a12 ],   det(A) = a11 a22 - a12 a21
        [ a21  a22 ]

Example:

    A = [ 1   2 ],   det(A) = -7 - 8 = -15
        [ 4  -7 ]

2. The Determinant of a 3 x 3 Matrix

    det(A) = det[ a11  a12  a13 ]
                [ a21  a22  a23 ]
                [ a31  a32  a33 ]

           = a11 |a22 a23; a32 a33| - a12 |a21 a23; a31 a33| + a13 |a21 a22; a31 a32|

           = a11(a22 a33 - a32 a23) - a12(a21 a33 - a31 a23) + a13(a21 a32 - a31 a22)

That is, the 3 x 3 determinant is defined in terms of determinants of
2 x 2 sub-matrices of A. Alternatively, repeating the first two columns
to the right of the matrix and taking products along the diagonals gives

    det(A) = a11 a22 a33 + a12 a23 a31 + a13 a21 a32
             - a13 a22 a31 - a12 a21 a33 - a11 a23 a32.

Example: find the determinant of

    A = [ 1  2  3 ]
        [ 1  0  2 ]
        [ 3  2  1 ]

    det(A) = 1 x det|0 2; 2 1| - 2 x det|1 2; 3 1| + 3 x det|1 0; 3 2|
           = 1 x (0 - 4) - 2 x (1 - 6) + 3 x (2 - 0) = 12

Or, by the diagonal rule,

    det(A) = (1x0x1) + (2x2x3) + (3x1x2) - (3x0x3) - (1x2x2) - (2x1x1)
           = 0 + 12 + 6 - 0 - 4 - 2 = 12
3. Cofactor Expansion
The determinant of an n x n matrix may be found by choosing a row (or
column) and summing the products of the entries with their cofactors:

    det(A) = a1j C1j + a2j C2j + ... + anj Cnj
             (cofactor expansion along the jth column)

    det(A) = ai1 Ci1 + ai2 Ci2 + ... + ain Cin
             (cofactor expansion along the ith row)

where Cij is the determinant of A with row i and column j deleted,
multiplied by (-1)^(i+j). The matrix of elements Cij is called the
cofactor matrix.

Example: evaluate det(A) by cofactor expansion along the first column of

    A = [  3   1   0 ]
        [ -2  -4   3 ]
        [  5   4  -2 ]
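Cofactor expansion is naturally recursive: each n x n determinant reduces to n determinants of size (n-1) x (n-1). This sketch (our illustration; practical only for small matrices, since the cost grows factorially) expands along the first row and applies it to the example matrix:

```python
import numpy as np

def det_by_cofactors(M):
    """Determinant by cofactor expansion along the first row (small matrices only)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j; the sign is (-1)^(0+j).
        minor = np.delete(np.delete(M, 0, axis=0), j, axis=1)
        total += (-1) ** j * M[0][j] * det_by_cofactors(minor)
    return total

A = np.array([[3, 1, 0], [-2, -4, 3], [5, 4, -2]])  # the example matrix
d = det_by_cofactors(A)
```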
CHAPTER TWO
Systems of Linear Equations

In this chapter we study systems of linear equations. Then we illustrate
the use of matrices to solve systems of linear equations by the methods
of homogeneous systems, Gaussian elimination, Gauss-Jordan elimination
and Cramer's rule.
(2.1) Linear Equations

Definition:
Let a1, ..., an, y be elements of a field F, and let x1, ..., xn be
unknowns (also called variables or indeterminates). Then an equation of
the form

    a1 x1 + ... + an xn = y

is called a linear equation in n unknowns (over F); the scalars ai are
called the coefficients of the unknowns, and y is called the constant
term of the equation.

Example:

    x1 + 2x2 = 5,   a1 = 1, a2 = 2, y = 5   (this equation is linear)

but

    x1^2 + 3√x2 = 5   (this equation is not linear)

Note: not all ai are zero.

Definition:
A linear equation in the two variables x1 and x2 is an equation that can
be written in the form a1 x1 + a2 x2 = b, where a1, a2 and b are numbers.
In general, a linear equation in the n variables x1, x2, ..., xn is an
equation that can be written in the form

    a1 x1 + a2 x2 + ... + an xn = b

where the coefficients a1, a2, ..., an and the constant term b are
numbers.

Example: x1 + 7x2 = 3,   x1 + x2 + ... + xn = 4

Some examples of equations that are not linear are:

    x1^2 + x1 x3 = 3,   1/x1 + x2 + x3 = 5/2,   e^(x1) + x2 = 1/2,
    (x1 + x2)/(x3 + x4) = x5 + 7
(2.2) Linear Systems

In general, a system of linear equations (also called a linear system) in
the variables x1, x2, ..., xn consists of a finite number of linear
equations in these variables. The general form of a system of m equations
in n unknowns is

    a11 x1 + a12 x2 + ... + a1n xn = b1
    a21 x1 + a22 x2 + ... + a2n xn = b2
      :        :              :      :
    am1 x1 + am2 x2 + ... + amn xn = bm

We will call such a system an m x n (m by n) linear system.

Example:

     6x1 + 2x2 -  x3 = 5
     3x1 +  x2 - 4x3 = 9
     -x1 + 3x2 + 2x3 = 0

Definition:
The coefficients of the variables form a matrix called the matrix of
coefficients of the system:

    [ a11  a12  ...  a1n ]
    [ a21  a22  ...  a2n ]
    [  :    :         :  ]
    [ am1  am2  ...  amn ](mxn)

Definition:
The coefficients, together with the constant terms, form a matrix called
the augmented matrix of the system:

    aug A = [ a11  a12  ...  a1n  b1 ]
            [ a21  a22  ...  a2n  b2 ]
            [  :    :         :   :  ]
            [ am1  am2  ...  amn  bm ](m x (n+1))

Example: The matrix of coefficients and the augmented matrix of the
following system of linear equations are as shown:

     x1 +  x2 +  x3 =  2
    2x1 + 3x2 +  x3 =  3
     x1 -  x2 - 2x3 = -6

    [ 1   1   1 ]                    [ 1   1   1   2 ]
    [ 2   3   1 ]                    [ 2   3   1   3 ]
    [ 1  -1  -2 ]                    [ 1  -1  -2  -6 ]
    matrix of coefficients           augmented matrix

In solving systems of equations we are allowed to perform operations of
the following types:
1. Multiply an equation by a non-zero constant.
2. Add one equation (or a non-zero constant multiple of one equation) to
   another equation.
These correspond to the following operations on the augmented matrix:
1. Multiply a row by a non-zero constant.
2. Add a multiple of one row to another row.
3. We also allow operations of the following type: interchange two rows
   in the matrix (this only amounts to writing down the equations of the
   system in a different order).
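Once a system is written as a coefficient matrix and a constant-term vector, it can be handed to a library solver. This sketch (our illustration) solves the example system above with NumPy:

```python
import numpy as np

# Coefficient matrix and constant-term vector of the example system above.
A = np.array([[1, 1, 1],
              [2, 3, 1],
              [1, -1, -2]])
b = np.array([2, 3, -6])

x = np.linalg.solve(A, b)  # solves Ax = b for the unique solution
```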
Back substitution:

    (C)  z = -1
    (B)  y = -1 - 2z  ->  y = -1 - 2(-1) = 1
    (A)  x = 5 - 2y + z  ->  x = 5 - 2(1) + (-1) = 2

Therefore x = 2, y = 1, z = -1.
(2.2.1) Homogeneous Systems
A system of linear equations is said to be homogeneous if all the constant
terms are zeros. A system of homogeneous linear equations is a system of
the form

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = 0
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = 0
      ...
    a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = 0

Such a system is always consistent, since x_1 = 0, x_2 = 0, ..., x_n = 0 is a
solution. This solution is called the trivial (or zero) solution; any other
solution is called a non-trivial solution.
Example: The system

    x - y = 0
    x + y = 0

has only the trivial solution, whereas the homogeneous system

    x - y + z = 0
    x + y + z = 0

has the complete solution x = -z, y = 0, z arbitrary. In particular,
taking z = 1 gives the non-trivial solution x = -1, y = 0, z = 1.
There is a simple but fundamental theorem concerning homogeneous
systems.
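The complete solution of the second system can be checked numerically; the short check below is illustrative only and not part of the original text.

```python
# Check that (x, y, z) = (-t, 0, t) satisfies both equations
#   x - y + z = 0  and  x + y + z = 0
# for several values of the parameter t.
for t in [-3, -1, 0, 1, 2, 5]:
    x, y, z = -t, 0, t
    assert x - y + z == 0
    assert x + y + z == 0
print("all checks passed")
```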
(2.2.2) Gaussian Elimination
This method solves the linear system AX = B by transforming the
augmented matrix [A : B] to row echelon form and then using back
substitution to obtain the solution. This procedure is somewhat more
efficient than Gauss-Jordan reduction.
Consider a linear system:
1. Construct the augmented matrix for the system.
2. Use elementary row operations to transform the augmented matrix
into a triangular one.
3. Write down the new linear system for which the triangular matrix is
the associated augmented matrix.
4. Solve the new system; you may need to assign parametric values to
some unknowns, and then apply the method of back substitution to
solve the new system.
Example: We use Gaussian elimination to solve the system of linear
equations

    2x_2 + x_3 = -8
    x_1 - 2x_2 - 3x_3 = 0
    -x_1 + x_2 + 2x_3 = 3

The augmented matrix is

    [  0   2   1 | -8 ]
    [  1  -2  -3 |  0 ]
    [ -1   1   2 |  3 ]

Swap Row1 and Row2:

    [  1  -2  -3 |  0 ]
    [  0   2   1 | -8 ]
    [ -1   1   2 |  3 ]

Add Row1 to Row3:

    [ 1  -2  -3 |  0 ]
    [ 0   2   1 | -8 ]
    [ 0  -1  -1 |  3 ]

Swap Row2 and Row3:

    [ 1  -2  -3 |  0 ]
    [ 0  -1  -1 |  3 ]
    [ 0   2   1 | -8 ]

Add twice Row2 to Row3:

    [ 1  -2  -3 |  0 ]
    [ 0  -1  -1 |  3 ]
    [ 0   0  -1 | -2 ]

Multiply Row3 by -1:

    [ 1  -2  -3 |  0 ]
    [ 0  -1  -1 |  3 ]
    [ 0   0   1 |  2 ]

This gives the triangular system

    x_1 - 2x_2 - 3x_3 = 0 ......(a)
    -x_2 - x_3 = 3 ......(b)
    x_3 = 2 ......(c)

From (c), x_3 = 2. Substituting (c) into (b) gives x_2 = -5, and
substituting (b) and (c) into (a) gives x_1 = -4.
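The whole procedure (augmented matrix, elimination with row swaps, back substitution) can be sketched in Python. `gaussian_solve` is a hypothetical helper, not from the project itself; it uses exact rational arithmetic and is applied here to the system just solved.

```python
from fractions import Fraction

def gaussian_solve(A, b):
    """Solve A x = b by Gaussian elimination with row swaps and back
    substitution (a sketch for systems with a unique solution)."""
    n = len(A)
    # Step 1: build the augmented matrix over exact rationals.
    M = [[Fraction(v) for v in row] + [Fraction(c)] for row, c in zip(A, b)]
    # Step 2: reduce to triangular (row echelon) form.
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]     # swap a non-zero pivot up
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # Steps 3-4: back substitution on the triangular system.
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# The system from the example above.
A = [[0, 2, 1], [1, -2, -3], [-1, 1, 2]]
b = [-8, 0, 3]
print(gaussian_solve(A, b))  # [Fraction(-4, 1), Fraction(-5, 1), Fraction(2, 1)]
```

Using `Fraction` keeps every intermediate entry exact, so the result matches the hand computation x_1 = -4, x_2 = -5, x_3 = 2 with no rounding error.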
(2.2.3) Gauss-Jordan Elimination
The Gauss-Jordan elimination method to solve a system of linear
equations is described in the following steps.
1. Write the augmented matrix of the system.
2. Use row operations to transform the augmented matrix into the
form described below, which is called the reduced row echelon
form (RREF).
3. Stop the process in step 2 if you obtain a row whose elements are
all zeros except the last one on the right. In that case, the
system is inconsistent and has no solutions. Otherwise, finish
step 2 and read the solutions of the system from the final
matrix.
Note: When doing step 2, row operations can be performed in any order.
Try to choose row operations so that as few fractions as possible are
carried through the computation; this makes calculation easier when
working by hand.
Example: We solve the following system by using the Gauss-Jordan
elimination method.

    x + y + z = 5
    2x + 3y + 5z = 8
    4x + 5z = 2

The augmented matrix of the system is the following. We will now
perform row operations until we obtain a matrix in reduced row echelon
form.

    [ 1  1  1 |  5 ]
    [ 2  3  5 |  8 ]
    [ 4  0  5 |  2 ]

R2 - 2R1:

    [ 1  1  1 |  5 ]
    [ 0  1  3 | -2 ]
    [ 4  0  5 |  2 ]

R3 - 4R1:

    [ 1  1  1 |   5 ]
    [ 0  1  3 |  -2 ]
    [ 0 -4  1 | -18 ]

R3 + 4R2:

    [ 1  1  1  |   5 ]
    [ 0  1  3  |  -2 ]
    [ 0  0  13 | -26 ]

(1/13)R3:

    [ 1  1  1 |  5 ]
    [ 0  1  3 | -2 ]
    [ 0  0  1 | -2 ]

R2 - 3R3:

    [ 1  1  1 |  5 ]
    [ 0  1  0 |  4 ]
    [ 0  0  1 | -2 ]

R1 - R3:

    [ 1  1  0 |  7 ]
    [ 0  1  0 |  4 ]
    [ 0  0  1 | -2 ]

R1 - R2:

    [ 1  0  0 |  3 ]
    [ 0  1  0 |  4 ]
    [ 0  0  1 | -2 ]

From this final matrix, we can read the solution of the system. It is
x = 3, y = 4, z = -2.
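The Gauss-Jordan steps above can be sketched as a small routine that reduces an augmented matrix to RREF. `gauss_jordan` is a hypothetical name, not from the project; the sketch assumes the system has a unique solution.

```python
from fractions import Fraction

def gauss_jordan(M):
    """Reduce an augmented matrix to reduced row echelon form (RREF).
    A sketch that assumes the system has a unique solution."""
    rows = len(M)
    M = [[Fraction(v) for v in row] for row in M]   # work on an exact copy
    for col in range(rows):
        pivot = next(r for r in range(col, rows) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]         # swap a non-zero pivot up
        pv = M[col][col]
        M[col] = [x / pv for x in M[col]]           # make the pivot equal 1
        for r in range(rows):                       # clear the rest of the column
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return M

M = [[1, 1, 1, 5],
     [2, 3, 5, 8],
     [4, 0, 5, 2]]
rref = gauss_jordan(M)
print([row[-1] for row in rref])  # last column holds x = 3, y = 4, z = -2
```

Unlike Gaussian elimination, each pivot column is cleared both above and below the pivot, so the solution can be read directly from the last column with no back substitution.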
(2.2.4) Cramer's Rule
If AX = B is a system of linear equations in n unknowns such that
det(A) != 0, then the system has a unique solution, namely

    x_1 = det(A_1)/det(A),  x_2 = det(A_2)/det(A),  ...,  x_n = det(A_n)/det(A)

where A_i is the matrix obtained by replacing the entries in the i-th
column of A by the constant terms b_1, ..., b_n. For the system

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
      ...
    a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n

with augmented matrix

    [ a_11  a_12  ...  a_1n | b_1 ]
    [ a_21  a_22  ...  a_2n | b_2 ]
    [  ...                    ... ]
    [ a_n1  a_n2  ...  a_nn | b_n ]

we have

    A = [ a_11  a_12  ...  a_1n ]
        [ a_21  a_22  ...  a_2n ]
        [  ...               ...]
        [ a_n1  a_n2  ...  a_nn ]   (n x n)

    A_1 = [ b_1  a_12  ...  a_1n ]      A_2 = [ a_11  b_1  ...  a_1n ]
          [ b_2  a_22  ...  a_2n ]            [ a_21  b_2  ...  a_2n ]
          [ ...               ...]            [  ...              ...]
          [ b_n  a_n2  ...  a_nn ]            [ a_n1  b_n  ...  a_nn ]

    ... ,  A_n = [ a_11  a_12  ...  b_1 ]
                 [ a_21  a_22  ...  b_2 ]
                 [  ...              ...]
                 [ a_n1  a_n2  ...  b_n ]
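Cramer's rule can be sketched in Python with a cofactor-expansion determinant. `det` and `cramer` are hypothetical helpers, not from the project; they are checked here on the 3 x 3 system solved earlier by Gauss-Jordan elimination.

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row
    (fine for the small matrices used here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in M[1:]]  # delete row 0, column j
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def cramer(A, b):
    """Solve A x = b by Cramer's rule; requires det(A) != 0."""
    d = det(A)
    assert d != 0, "Cramer's rule needs a non-singular coefficient matrix"
    xs = []
    for i in range(len(A)):
        # A_i: replace the i-th column of A by the constant terms b.
        Ai = [row[:i] + [bi] + row[i+1:] for row, bi in zip(A, b)]
        xs.append(Fraction(det(Ai), d))
    return xs

A = [[1, 1, 1], [2, 3, 5], [4, 0, 5]]
b = [5, 8, 2]
print(cramer(A, b))  # [Fraction(3, 1), Fraction(4, 1), Fraction(-2, 1)]
```

Here det(A) = 13, det(A_1) = 39, det(A_2) = 52, and det(A_3) = -26, so the rule reproduces x = 3, y = 4, z = -2, agreeing with the Gauss-Jordan computation.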
References
1. Dennis M. Schneider, Manfred Steeg, and Frank H. Young, Linear
Algebra: A Concrete Introduction, Macmillan Co., United States of
America, 1982.
2. S. Barry and S. Davis, Essential Mathematical Skills, National
Library of Australia, 2002.
3. Gareth Williams, Linear Algebra with Applications, Jones and
Bartlett, Canada, 2001.
4. P. Dawkins, Linear Algebra.
http://www.cs.cornell.edu/courses/cs485/2006sp/linalg_complete.pdf