NUMERICAL SOLUTIONS OF LINEAR ALGEBRAIC
            SYSTEMS USING MATLAB




                         BY



                 ODEH FRIDAY ODEH

                 REG. NO. : 04153024




    BEING A PROJECT SUBMITTED TO THE MATHEMATICS,
    STATISTICS AND COMPUTER SCIENCE DEPARTMENT,
    UNIVERSITY OF ABUJA, IN PARTIAL FULFILLMENT OF
    THE REQUIREMENTS FOR THE AWARD OF BACHELOR OF
        SCIENCE (BSc.) DEGREE IN MATHEMATICS.




                   NOVEMBER 2008


DEDICATION
I dedicate this work (the first of its kind by me) to GOD, my Creator. Also, to

my parents, Mr. & Mrs. S. U. ODEH, my sisters, and my one and only late

Brother Emmanuel.




ACKNOWLEDGEMENT
My profound gratitude goes to God, whose report I have always believed, for
giving me grace to complete this race. Also, to my parents, Mr. & Mrs. Odeh
who have also made this race possible by their words of advice. Also, to
my wonderful sisters, Aunty Mercy (big sis), Sister Joy (#2), Alice (baby-
doll), and Esther (Agbani), thanks for the motivation and inspiration.

The merit from this research is due to the information I tapped from the
fountain of one of our own scholars, my humble supervisor, a professor in
the making, Dr. F. O. Ogunfiditimi. I am grateful to you for your time and
energy; despite your tight schedule, you still took out time for me. Thank
you, Sir! Also, many thanks to the MEN of the department, Professor M. O.
Adewale and Dr. E. O. Ukpong, for always making us keep our heads up. Also,
to all the lecturers in the department.

My appreciation also goes to my vision helpers, Rev. (Dr) and Mrs. F. Ug-
bong for their prayers. My crew, Bulus Sidi, George Ehiagwina, Kefas Bako,
John Ukhai, Gwaz Akut, Peterson Okoro, Efe Bestman, Emeka Okorie thanks
for the brotherly love. Adeoye Oreoluwa David (LasB), Odion Irene (the only
female friend I have in the department), Igwe Chima, Ade (Class rep), how
could I have survived without you guys in the department? Thanks to my
teams (MACOSSA, University of Abuja Basketball Team, ELOHIM Choir & In-
strumentalists, ELOHIM FYB 2008) you brought out the leader in me by be-
lieving in me. To Jigga Banty, Gene, A Y…. keep the music playing. Gift, Glo-
ria, Joy, Esther, Aderonke, Ibimina, and… thank you for always being there.
To my Guy! Saleh Agyo B., four years and still kicking, big up bro! Last but
not the least, words cannot express you: I LOVE YOU, Seyi Soyemi!

Finally, to my roommates, housemates, street mates, course mates, de-
partment mates, and all whom I cannot mention, I appreciate you all.
ABSTRACT

This project is not a comprehensive introduction to MATLAB. Instead, it focuses
on the specific features of MATLAB that are useful for analyzing linear equations.
The two basic goals of this project are, first, to deepen students' understanding
of linear equations by giving them a new tool, a mathematical software system,
for analyzing linear equations and, second, to bring students to a level of
expertise in the mathematical software system that would allow them to use it in
other mathematics, engineering, or science courses. The emphasis of this research
is to show how the solution of Ax = b is arrived at using MATLAB, where A is the
coefficient matrix, x the vector of unknowns, and b the right-hand-side vector of
constants.




TABLE OF CONTENTS

TITLE PAGE…………………………………………………………………………………………………………i

CERTIFICATION PAGE…………………………………………………………………………………….…ii
DEDICATION PAGE………………………………………………………………………………………..…iii
ACKNOWLEDGEMENT PAGE…………………………………………………………………………….iv
ABSTRACT PAGE……………………………………………………………………………………………….v

CHAPTER ONE: INTRODUCTION

1.1 INTRODUCTION TO LINEAR ALGEBRAIC SYSTEMS………………………………….1

1.2 OVERVIEW OF NUMERICAL SOLUTIONS.............................................3

1.3 MATLAB........................................................................................3

1.4 DEFINITION OF TERMS...................................................................4

1.4.1 REVIEW OF VECTOR, SCALAR AND MATRICES.................................5

1.4.2 MATRIX OPERATION.....................................................................9

1.4.3 DETERMINANT...........................................................................11

1.4.4 MATRIX TRANSPOSE, MATRIX INVERSE, AND PSEUDO- INVERSE.....11

1.4.5 VECTOR AND MATRIX NORMS......................................................12

1.4.6 EIGENVALUE.............................................................................13

1.4.7 STABILITY AND CONDITIONING...................................................14

1.4.8 ERRORS...................................................................................14

1.5 SCOPE OF STUDY.........................................................................15

1.6 LIMITATIONS...............................................................................16
1.7 METHODOLOGY............................................................................17

CHAPTER TWO: LITERATURE REVIEW

2.1 HISTORICAL BACKGROUND OF LINEAR ALGEBRA..............................20

2.2 MATRIX THEORY...........................................................................25

2.3 MATLAB HISTORY.........................................................................27

CHAPTER THREE: METHODOLOGY

3.1 INTRODUCTION……………………………………………………………………………………….32

3.2 FACTORIZATIONS (DECOMPOSITION)..............................................33

3.2.1 LU FACTORIZATION...................................................................34

3.2.2 CHOLESKY FACTORIZATION........................................................43

3.3 ITERATIVE METHOD......................................................................48

3.3.1 GAUSS-SEIDEL ITERATION METHOD............................................49

3.4 MATRIX EIGENVALUE PROBLEM......................................................52

3.5 ERROR AND ACCURACY……………………………………………………………………………58
CHAPTER FOUR: APPLICATIONS
4.1 APPLICATIONS OF LINEAR ALGEBRAIC SYSTEMS………………………………….62

4.1.1 ECONOMICS………………………………………………………………………………………….62

4.1.2 MARKOV CHAINS………………………………………………………………………………….70

4.1.3 ELECTRICAL NETWORK…………………………………………………………………………73

CHAPTER FIVE: SUMMARY, CONCLUSION, AND RECOMMENDATIONS

5.1 SUMMARY………………………………………………………………………………………………….81


5.2 CONCLUSION……………………………………………………………………………………………82

5.3 RECOMMENDATIONS……………………………………………………………………………….82

REFERENCES……………………………………………………………………………………………………84




CHAPTER ONE

                                    INTRODUCTION



1.1 INTRODUCTION TO LINEAR ALGEBRAIC SYSTEMS


A linear algebraic system can be said to be a set of linear equations. A linear
equation is an equation in which each term is either a constant or the product of
a constant and a single variable raised to the first power. Therefore, a linear
system, or system of linear equations, is a collection of linear equations involving
the same set of variables.


Linear Systems are always of the form;


a11 x1 + a12 x2 + … + a1n xn = b1


a21 x1 + a22 x2 + … + a2n xn = b2


.      .        .    .      .



am1 x1 + am2 x2 + … + amn xn = bm


This is a general system of m linear equations in n unknowns. The system can also

be expressed in a more compact form as:


∑ aij xj = bi ,   summing over j = 1, 2, . . . , n, for each i = 1, 2, . . . , m


Or we use the matrix equation, which may be written as AX = B, where A = (aij), X = (xj), and B = (bi).


A is called the coefficient matrix, X the vector of unknowns (variables), and B the vector of constants.


In the double subscript notation, the first subscript denotes the row and the second the

column in which the entry stands. Thus a22 is the entry in the second row and

second column.


Therefore, in order to solve a set of equations, we express the equations in the form
of a matrix equation.


A system of equations with a common solution is said to be consistent, that is, there
exist values of the variables that satisfy all of the equations simultaneously. When no
such common set of values exists, the system has no common solution and is said to be
inconsistent.




1.2 OVERVIEW OF NUMERICAL SOLUTIONS


Numerical solution involves the development and evaluation of methods for solving a
mathematical problem numerically, rather than by the methods of algebraic analysis and
theorems. It involves computing numerical results from given mathematical data. The
data are known as the input, while the obtained result is the output. Recent trends
emphasize the use of computers to carry out the computation rather than manual
processes. The increasing power of computer systems and of scientific computing is a
big boost to the use and development of numerical solutions.


Scientific computing is the field of study concerned with constructing mathemati-

cal models and numerical solution techniques and using computers to analyze and

solve scientific and engineering problems.


1.3 MATLAB


Jack Little and Cleve Moler, the founders of The MathWorks, recognized the need

among engineers and scientists for more powerful and productive computation en-

vironments beyond those provided by languages such as FORTRAN (FORmula

TRANslator) and C. In response to that need, they combined their expertise in

mathematics, engineering, and computer science to develop MATLAB, a high-


performance technical computing environment. MATLAB combines comprehen-

sive math and graphics functions with a powerful high-level language.


The name MATLAB stands for MATrix LABoratory. It is a program that has been
purpose-designed to make computations easy, fast, and reliable.


Basic MATLAB is good for;


a) Computations, including linear algebra, data analysis, signal processing, poly-

nomials and interpolation, numerical integration, and numerical solution of differ-

ential equations.


b) Graphics, in two and three dimensions, including colour, lighting, and
animation.


1.4 DEFINITION OF TERMS


The convention that will be used is that a lower-case letter denotes a scalar or a
vector and a capital letter denotes a matrix. Typically, the MATLAB variables are
named with a capital letter if they will be used as matrices and with lower-case
letters for scalars and vectors.




The MATLAB environment uses the term matrix to indicate a variable containing

real or complex numbers arranged in a two-dimensional grid. An array is, more

generally, a vector, matrix, or higher-dimensional grid of numbers


MATLAB has many functions that create different types of matrices, row vectors, and
column vectors.


1.4.1 REVIEW OF VECTOR, SCALAR AND MATRICES


A vector is a matrix with a single row or a single column. A column vector is an m-by-1 matrix given as;

a11

a21

.
.

am1


a row vector is a 1-by-n matrix given as;


[a11   a12    . . .     a1n]


A diagonal matrix is a matrix that has non-zero entries only on the main diagonal,

i.e. a11, a22 … ann.




A diagonal matrix whose diagonal entries are all equal, i.e. a11 = a22 = . . . = ann = d,
is called a scalar matrix, because multiplying by it has the same effect as multiplying
by the scalar d (the 1-by-1 matrix [d]):

D=     d       0       0             =             [d]

       0       d       0

       0       0       d

When these entries equal 1, it is known as an Identity Matrix.

Generally accepted mathematical notation uses I to denote the identity matrices of
different sizes. MATLAB could not use I for this purpose because it did not dis-
tinguish between uppercase and lowercase letters and i already served as a sub-
script and as the complex unit. So, an English language pun was introduced: the
function eye(m,n) returns an m-by-n identity matrix (if m = n, eye(n) returns the
square n-by-n identity matrix).

There are also several other special matrices, each created with its own command,
e.g.


a) The Pascal matrix is built from the rows of Pascal's triangle, in which the apex is
1, each row starts and ends with 1, and every other number is obtained by adding
together the two numbers on either side of it in the row above; in the matrix each
interior entry is the sum of the entry above it and the entry to its left, i.e.




P = pascal(3) =     1     1       1


                    1     2       3


                    1     3       6


b) A magic matrix (magic square) is a matrix in which the entries of every row, every
column, and both main diagonals add up to the same sum, i.e.


M = magic(4) =      16     2     3    13

                     5    11    10     8

                     9     7     6    12

                     4    14    15     1


c) The Hilbert matrix has its entries aij = 1/(i+j-1)


sym(hilb(3)) =      1     1/2     1/3               hilb(3) =    1.0000 0.5000 0.3333

                    1/2   1/3     1/4          =                 0.5000 0.3333 0.2500

                    1/3   1/4     1/5                            0.3333 0.2500 0.2000


sym(hilb(3)) displays the same matrix as hilb(3) above, but with the entries as exact fractions.


d) The Ones Matrix has all its entries equal to 1


ones(3) =     1       1        1

              1       1        1

              1       1        1

e) A random matrix has randomly generated entries; rand(3) gives a 3-by-3 matrix of
uniformly distributed numbers between 0 and 1, and fix(10*rand(3)) gives random
integers between 0 and 9, for example:

                                   6     3        7

                                   0     8        9

                                   4     0        9

A symmetric matrix is a square matrix whose transpose equals the matrix, i.e.


AT = A =                  -4       1         8

                          1        0         -3

                          8        -3        5

A skew-symmetric matrix is a matrix whose transpose equals the negative of the matrix,
i.e. AT = –A. Every skew-symmetric matrix has its entire main diagonal equal to 0, e.g.

A =                       0        4         5

                          -4       0         10

                          -5       -10       0

An orthogonal matrix is a matrix whose transpose equals its inverse, i.e. AT = A-1.
For example, the matrix below is orthogonal:

A =                 2/3   1/3    2/3

                    -2/3  2/3    1/3

                    1/3   2/3    -2/3


Triangular Matrix is a matrix where all entries above the main diagonal equal 0

(lower triangular Matrix) or with all entries below the main diagonal equal to 0

(upper triangular Matrix).


Sparse Matrix is a matrix with relatively few non zero entries. Sparse matrix algo-

rithms can tackle huge sparse matrices that are utterly impractical for dense matrix

algorithms.
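
The special matrices described above can all be generated directly at the MATLAB
prompt. The commands below are only a brief illustration; the sizes chosen are
arbitrary:

>> I = eye(3);            % 3-by-3 identity matrix
>> P = pascal(3);         % Pascal matrix
>> M = magic(4);          % magic square; every row and column adds up to 34
>> H = hilb(3);           % Hilbert matrix with entries 1/(i+j-1)
>> J = ones(3);           % 3-by-3 matrix of all ones
>> R = fix(10*rand(3));   % 3-by-3 matrix of random integers between 0 and 9
>> T = tril(ones(3));     % lower triangular matrix of ones
>> S = sparse(eye(3));    % the identity matrix held in sparse storage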


1.4.2 MATRIX OPERATION


Addition and subtraction of matrices is defined just as it is for arrays, element-by-

element. Addition and subtraction require both matrices to have the same dimen-

sion, or one of them is a scalar. If the dimensions are incompatible, an error results
in the MATLAB environment. Two row vectors or two column vectors can be added only if
they have the same number of components. For example, addition of a scalar and a row
vector:


    w=s+v


w=    [8]    +      [2        0         -1]


w=    [2+8 0+8      -1+8] =             [10   8         7]


A row vector and a column vector of the same length can be multiplied in either
order. The result is either a scalar (the inner product) or a matrix (the outer product).


Multiplication of matrices is defined in a way that reflects composition of the un-

derlying linear transformations and allows compact representation of systems of

simultaneous linear equations. The matrix product C = AB is defined when the

column dimension of A is equal to the row dimension of B, or when one of them is

a scalar. If A is m-by-p and B is p-by-n, their product C is an m-by-n matrix. For ex-

ample, multiplying two vectors, a row vector and a column vector.


u = [3; 1; 4] =     3

                    1

                    4

v = [2 0 -1] =      [2    0       -1]


x=v*u


x=    [3*2 +        1*0       +         4*-1] =         [6 + 0 – 4] =   [2]




It should be noted that a 3-by-2 matrix cannot be multiplied on the right by a 3-by-3
matrix, since the inner dimensions do not agree. Also, for two matrices A and B, in
general AB ≠ BA.
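
These rules are easy to check in MATLAB. The small matrices below are arbitrary and
serve only as an illustration:

>> A = [1 2; 3 4];
>> B = [0 1; 1 0];
>> A + B              % element-by-element sum; both matrices are 2-by-2
>> C = A*B            % the product of a 2-by-2 and a 2-by-2 is 2-by-2
>> D = B*A
>> isequal(C, D)      % returns 0 (false): in general AB is not equal to BA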


1.4.3 DETERMINANT


Determinants were originally introduced for solving linear systems. Every square

matrix A has associated with it a number called determinant of the matrix A de-

noted by |A|. The function det computes the determinant of a square matrix. A de-

terminant of order three is a scalar associated with a 3-by-3 matrix A such that;

        det A = |A| =   | a11   a12   a13 |
                        | a21   a22   a23 |
                        | a31   a32   a33 |

              =  a11 | a22   a23 |  -  a12 | a21   a23 |  +  a13 | a21   a22 |
                     | a32   a33 |         | a31   a33 |         | a31   a32 |

              =  a11(a22a33 - a23a32) - a12(a21a33 - a23a31) + a13(a21a32 - a22a31)
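
In MATLAB the determinant is returned by the det function mentioned above. A small
check, using an arbitrary 3-by-3 matrix:

>> A = [1 2 3; 4 5 6; 7 8 10];
>> det(A)             % expanding along the first row as above gives -3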


1.4.4 MATRIX TRANSPOSE, MATRIX INVERSE, AND PSEUDO- INVERSE.


For real matrices, the transpose operation interchanges aij and aji. MATLAB uses

the apostrophe operator (') to perform a complex conjugate transpose, while the

general mathematical notation is a superscript T, e.g.

v = [2     0     -1]        vT = 2

                                    0


                                    -1


If A is square matrix and nonsingular (det A ≠ 0), the equations AX = I and XA = I

have the same solution, X. This solution is called the inverse of A, it is denoted by

A-1 and computed by the function inv(A) in MATLAB. It has the property that:

A-1A= A A-1 = I             (I = Identity Matrix)


It should be noted that rectangular matrices do not have inverses and if A has no

inverse, it is a singular matrix, i.e. det A = 0.


A partial replacement for the inverse is provided by the Moore-Penrose pseudo-

inverse, which is computed by the pinv function. This shows that rectangular matrices
do have pseudo-inverses, which can be computed using MATLAB.
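
A short sketch of these functions in use (the matrices are arbitrary; A is square and
nonsingular, B is rectangular):

>> A = [4 7; 2 6];
>> A'                 % transpose of A
>> X = inv(A);        % inverse, defined since det(A) = 10 is nonzero
>> A*X                % gives the 2-by-2 identity matrix, up to round-off
>> B = [1 2; 3 4; 5 6];
>> pinv(B)            % 2-by-3 Moore-Penrose pseudo-inverse of the 3-by-2 matrix B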


1.4.5 VECTOR AND MATRIX NORMS


A vector norm for a column vector x = [xj], j = 1, 2, . . . , n, is a generalized length;
it is denoted by ||x||. The most important norm in computation is the p-norm of a
vector x, given by;

||x||p = [∑ |xj|^p]^(1/p)   =    (|x1|^p + |x2|^p + . . . + |xn|^p)^(1/p)


It is computed by norm(x,p). This is defined for any value of p ≥ 1, but the most
common values of p are 1, 2, and ∞. The default value is p = 2, which corresponds
to the Euclidean length, given by;

||x||2 = √(x1² + x2² + . . . + xn²)


The p-norm of a matrix A, ||A||p = max over x ≠ 0 of ||Ax||p / ||x||p, can be computed for p = 1, 2, and

∞ by norm(A,p) in MATLAB. Again, the default value is p = 2.
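
As a small illustration (the vector and matrix below are arbitrary):

>> x = [3; -4];
>> norm(x,1)          % |3| + |-4| = 7
>> norm(x)            % Euclidean length, sqrt(9 + 16) = 5
>> norm(x,inf)        % largest absolute entry, 4
>> A = [1 2; 3 4];
>> norm(A,1)          % largest absolute column sum, 6
>> norm(A,inf)        % largest absolute row sum, 7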


1.4.6 EIGENVALUE


Matrix Eigenvalue problems deal with vector equations of the form Ax = λx, where

A is a given square matrix, x is an unknown vector and λ an unknown scalar. For

non-trivial solutions, i.e. x ≠ 0, the values of λ are called Eigenvalues of the matrix

A and the corresponding solutions of the equation Ax = λx are called the Eigenvec-

tors.


Ax = λx:    | a11   a12   . . .   a1n |  | x1 |        | x1 |
            | a21   a22   . . .   a2n |  | x2 |  =  λ  | x2 |
            | .     .             .   |  | .  |        | .  |
            | an1   an2   . . .   ann |  | xn |        | xn |

Ax = λx => Ax – λx = 0 => (A – λI) x = 0


The unit matrix is introduced since we cannot subtract λ (scalar) from matrix A.
1.4.7 STABILITY AND CONDITIONING


In numerical analysis, the condition number associated with a problem is a
measure of that problem's amenability to computation, i.e. how numerically well
conditioned the problem is. A problem with a low condition number is well
conditioned, while a problem with a high condition number is ill conditioned. The
condition number associated with a linear system gives a bound on how inaccurate
the computed solution may be; it measures how much the solution x changes with
respect to small changes in b (and in A).


This leads to the notion of numerical stability. An algorithm is numerically stable

if an error once generated does not grow too much during computation. This is

possible when the problem is well conditioned. The art of numerical computation

is to find a stable algorithm for solving a well-posed problem. For example,
computing √2 = 1.41421... is a well-conditioned problem. For the function
f(x) = 1/(x-1), however, f(1.1) = 10 and f(1.001) = 1000, so evaluating f(x) near
x = 1 is an ill-conditioned problem: a small change in the input produces a large
change in the output.
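
In MATLAB the condition number of a matrix is returned by the built-in function cond
(only the related estimator condest appears in the command table of Chapter Two). The
Hilbert matrices of Section 1.4.1 are a standard example of ill conditioning; the
values quoted in the comments are approximate:

>> cond(eye(3))       % a perfectly conditioned matrix; the condition number is 1
>> cond(hilb(3))      % already about 5.2e+02
>> cond(hilb(10))     % roughly 1.6e+13, so hilb(10) is severely ill conditioned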


1.4.8 ERRORS


Errors are deviations from the expected or exact value. Assessing the accuracy of

the output results of computations is a paramount goal in Numerical solutions.




There are different types of errors encountered that affect the accuracy of the re-

sults obtained.


The various types of errors that may limit accuracy are:


a. Experimental Errors: errors from inputting mathematical data; this occurs not

only in the use of wrong data but also in inherent imperfections of physical meas-

urements and quantities.


b. Round-off Errors: the error arises because of calculations with numbers per-

formed with approximate representation of the actual numbers e.g., 3.142517 rep-

resented as 3.143.


c. Approximation (truncation) Errors: these occur as a result of truncating values or
stopping an infinite process (such as an iteration or a series) after finitely many
steps; they depend on the computation method used.


In MATLAB, ordinary arithmetic is performed in double precision, so the result of each
arithmetic operation is rounded to about 16 significant decimal digits by default;
higher precision is available through variable-precision arithmetic (the vpa function).
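
This round-off behaviour can be observed directly; the commands below show standard
IEEE double-precision behaviour rather than anything specific to this project:

>> eps                % spacing from 1 to the next larger double, about 2.2204e-16
>> 0.1 + 0.2 - 0.3    % not exactly zero: 0.1, 0.2 and 0.3 are stored approximately
>> format long
>> 4/3                % displayed as 1.333333333333333, about 16 significant digits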


1.5 SCOPE OF STUDY


The main focus of this study is to show that Ax = b has a unique solution. There-
fore, we want to show how MATLAB works to give solutions to linear algebraic
systems. A little familiarity with MATLAB is required; also, an understanding of the
principle behind an algorithm for a problem is needed.


Linear Algebra is one of the cornerstones of modern computational mathematics.

One of the most important problems in technical computing is the solution of sys-

tems of linear equations. In matrix notation, the general problem which is the scope

of our study takes the following form:


Given two matrices A and b, does there exist a unique solution or matrix x such

that Ax = b or xA = b?


1.6 LIMITATIONS


The concept of a matrix was historically introduced to simplify the solution of lin-

ear systems although they today have much greater and broad reaching applica-

tions.


We know that linear equations in two or three variables can be solved using tech-
niques such as the elimination and substitution methods, but these techniques are
not practical for dealing with large systems involving large numbers of variables.
Therefore, our restriction will be to methods of solving linear equations using
MATLAB. Also, every coefficient matrix in our study will be a square matrix which is
non-singular, so that a unique solution exists.

1.7 METHODOLOGY


Computational difficulties are encountered with theoretical linear algebra methods
such as Cramer’s rule, the least squares method, explicitly finding the inverse of a
matrix, etc. For example, using Cramer’s rule for Ax = b with the cofactor definition
of the determinant, it might take years to solve a 20-by-20 matrix problem. Therefore, we

introduce numeric methods relevant to this study. They are,


    a) Iterative Methods


    b) Factorization (Decomposition): LU Factorization, Cholesky Factorization


    c) Matrix Inverse


    d) Eigenvalues


The German mathematician Carl Friedrich Gauss proved that every polynomial equation
has at least one root, a result known as the fundamental theorem of algebra. It was
first stated as: every polynomial f(z) with complex coefficients can be factored into
linear factors over the complex numbers. A polynomial in general is defined as:


f(z) = zn + cn-1zn-1 + cn-2zn-2 + . . . . . + c0


where z is a complex variable and

c0, c1, . . . , cn-1 are complex constants.


Carl Friedrich Gauss reformulated this theorem to avoid the notion of complex
numbers, stating: every polynomial f(x) Є ℝ[x] with real coefficients can be fac-
tored into linear and quadratic factors. The name fundamental theorem of algebra
was given at a time when algebra was basically about solving polynomial equa-
tions with complex or real coefficients. In spite of its name Fundamental Theorem

of Algebra, there is no known proof which is purely algebraic, and many mathema-

ticians believe such a proof does not exist. Besides, it is not fundamental for mod-

ern algebra. It suffices to prove that every polynomial has at least one root because

by ordinary algebraic division, we can divide by the corresponding linear factor to

give another polynomial whose degree is reduced by 1. Repeating the whole proc-

ess, we get all n of the root. A Simple illustration is x2 = 4. Solving:


x = √4 => x = 2


This shows x = 2 is a solution, but x2 = -4 when solved:


x =√-4 => x = 2√-1


What is the physical meaning of √-1? Without complex numbers, we would say that x2 = -4 has no solution.


Gauss in his dissertation said x2 = -4 has a solution:


x =√-4 => x = 2√-1 => x = 2i


The following formulas where also introduced in solving an equation:


    i) ax + b = 0


       x = -b/a


    ii) ax2 + bx + c = 0


       x = (-b ± √(b2 – 4ac)) / 2a


Therefore, in our methods of solving systems of linear equations, there exists a

unique solution of Ax = b.




CHAPTER TWO


                                LITERATURE REVIEW


2.1 HISTORICAL BACKGROUND OF LINEAR ALGEBRA.


The history of modern linear algebra followed the introduction and development of

the notion of a matrix. The subject of linear algebra followed the development of

determinants, which arose from the study of coefficients of systems of linear equa-

tions.


Gottfried Wilhelm Leibnitz (or Leibniz), born at Leipzig on June 21 (O.S.), 1646,
and died in Hanover on November 14, 1716, one of the two founders of calculus,
used determinants in 1693, and Cramer presented his determinant-based

formula for solving systems of linear equations (today known as Cramer's Rule) in

1750. Cramer's rule is a theorem in linear algebra, which gives the solution of a

system of linear equations in terms of determinants. It is named after Gabriel

Cramer (1704 - 1752), who published the rule in his 1750 Introduction à l'analyse

des lignes courbes algébriques. Let a system be represented in matrix

multiplication form as Ax = b, then the theorem states that:


xi = det Ai / det A (i = 1, 2 . . . . n)


Ai = matrix formed by replacing the ith column of A by the column vector b
Considering the linear system:


ax1 + bx2 = e

cx1 + dx2 = f

| a   b | | x1 |   | e |
| c   d | | x2 | = | f |

Then by Cramer’s rule:

x1 = | e   b |  ÷  | a   b |  =  (ed - bf) / (ad - bc)
     | f   d |     | c   d |

x2 = | a   e |  ÷  | a   b |  =  (af - ec) / (ad - bc)
     | c   f |     | c   d |


The rules for all n-by-n matrices are similar.
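
Although Cramer's rule is far too expensive for large systems (see Section 1.7), it
can be written as a short MATLAB sketch for illustration. The function name cramer
and the test data below are my own choices, not part of the historical material:

function x = cramer(A, b)
% Solve A*x = b by Cramer's rule (illustration only; impractical for large n).
n = length(b);
d = det(A);              % determinant of the coefficient matrix
x = zeros(n,1);
for i = 1:n
    Ai = A;              % copy of the coefficient matrix
    Ai(:,i) = b;         % replace the i-th column by the right-hand side b
    x(i) = det(Ai)/d;    % x_i = det(A_i)/det(A)
end
end

Called, for example, as x = cramer([2 1; 1 3], [3; 5]), it returns x = [0.8; 1.4].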




Johann Carl Friedrich Gauss (30 April 1777 – 23 February 1855) was a German
mathematician and scientist who contributed significantly to many fields, including

number theory, statistics, analysis, differential geometry, geodesy, electrostatics,

astronomy, and optics. He was sometimes known as the princeps mathematicorum

(Latin, usually translated as "the Prince of Mathematicians") and "greatest

mathematician since antiquity". Gauss had a remarkable influence in many fields

of mathematics and science and is ranked as one of history's most influential

mathematicians. He developed Gaussian elimination around 1800 and used it to

solve least squares problems in celestial computations. Even though Gauss' name is

associated with this technique for successively eliminating variables from systems

of linear equations, Chinese manuscripts from several centuries earlier have been

found that explain how to solve a system of three equations in three unknowns by

Gaussian elimination.


For matrix algebra to fruitfully develop, one needed both proper notation and defi-

nition of matrix multiplication. James Joseph Sylvester (September 3, 1814

London – March 15, 1897 Oxford) an English mathematician, made fundamental

contributions to matrix theory, invariant theory, number theory, partition theory

and combinatorics. In 1848, he first introduced the term ''matrix,'' which was the

Latin word for womb, as a name for an array of numbers. Matrix algebra was nur-

tured by the work of Arthur Cayley (August 16, 1821 - January 26, 1895). He

helped found the modern British school of pure mathematics in 1855. Cayley stud-

ied compositions of linear transformations and was led to define matrix multiplica-

tion so that the matrix of coefficients for the composite transformation ST is the

product of the matrix S times the matrix T. He went on to study the algebra of

these compositions including matrix inverses. Cayley in his 1858 Memoir on the

Theory of Matrices gave the famous Cayley-Hamilton theorem that asserts that a

square matrix is a root of its characteristic polynomial. The use of a single letter A

to represent a matrix was crucial to the development of matrix algebra. Early in

the development, the formula det(AB) = det(A)det(B) provided a connection be-

tween matrix algebra and determinants. Cayley wrote, ''There would be many

things to say about this theory of matrices which should, it seems to me, precede

the theory of determinants.'' Mathematicians also attempted to develop an algebra

of vectors but there was no natural definition of the product of two vectors that

held in arbitrary dimensions.


Hermann Günther Grassmann (April 15, 1809, Stettin (Szczecin) – September

26, 1877, Stettin) was a German polymath, renowned in his day as a linguist and

now admired as a mathematician. He was also a physicist, neohumanist, general

scholar, and publisher. He proposed the first vector algebra that involved a non-

commutative vector product (that is, v x w need not equal w x v) in his book Aus-

dehnungslehre (1844). Grassmann's text also introduced the product of a column

matrix and a row matrix, which resulted in what is now called a simple or a rank-

one matrix. In the late 19th century, American mathematical physicist Josiah

Willard Gibbs (February 11, 1839 – April 28, 1903) published his famous treatise

on vector analysis. In that treatise, Gibbs represented general matrices, which he

called dyadics, as sums of simple matrices, which Gibbs called dyads. Later the

theoretical physicist Paul Adrien Maurice Dirac (August 8, 1902 – October 20,

1984) introduced the term ''bra-ket'' for what we now call the scalar product of a

''bra'' (row) vector times a ''ket'' (column) vector and the term ''ket-bra'' for the

product of a ket times a bra, resulting in what we now call a simple matrix, as

above.


There was renewed interest in matrices, particularly on the numerical analysis of

matrices, after World War II with the development of modern digital computers.

John von Neumann (December 28, 1903 – February 8, 1957) a Hungarian

American mathematician and Herman Heine Goldstine (September 13, 1913 –

June 16, 2004), introduced condition numbers in analyzing round-off errors in

1947. Alan Mathison Turing (23 June 1912 – 7 June 1954) and von Neumann

were the 20th century giants in the development of stored-program computers.

Turing introduced the LU decomposition of a matrix in 1948. The L is a lower tri-

angular matrix with 1's on the diagonal and the U is an echelon matrix. The benefit

of the QR decomposition was realized a decade later. The Q is a matrix whose

columns are orthonormal vectors and R is a square upper triangular invertible ma-

trix with positive entries on its diagonal. The QR factorization is used in computer

algorithms for various computations, such as solving equations and finding eigenval-

ues.


2.2 MATRIX THEORY


A matrix which is a rectangular array of numbers has one of its principal uses in

representing systems of equations of the first degree in several unknowns. Each

matrix row represents one equation, and the entries in a row are the coefficients of

the variables in the equations, in some fixed order. Given the general form of sys-

tem of equations below:


a11 x1 + a12 x2 + … + a1n xn = b1 . . . . . . . . . Equation (1)


a21 x1 + a22 x2 + … + a2n xn = b2 . . . . . . . . . Equation (2)


.      .      .      .       .      .


am1 x1 + am2 x2 + … + amn xn = bm . . . . . . . . . Equation (m)


This is given in matrix form as:

| a11   a12   . . .   a1n |  | x1 |     | b1 |
| a21   a22   . . .   a2n |  | x2 |  =  | b2 |
| .     .             .   |  | .  |     | .  |
| am1   am2   . . .   amn |  | xn |     | bm |

and x1, x2, . . . , xn are the unknowns.


For example, given the linear system in three unknowns:


3x1 + 2x2 – x3 = 1 . . . . . equation (1)


2x1 -2x2 + 4x3 =-2 . . . . . equation (2)


-x1 + 1/2x2 –x3 = 0 . . . . . equation (3)


The coefficient matrix is:  3      2         -1

                            2      -2        4

                            -1     ½         -1

The unknowns are given in vector form as: x =           x1

                                                        x2


                                                        x3



Addition and multiplication can be defined so that certain sets of matrices
form algebraic systems. The sum of two matrices is defined only if they are of
the same size, i.e. both m-by-n with the same m and n. If A = [aij] and B = [bij]
are of the same size, then C = A + B => cij = aij + bij is defined. This is done
simply by adding corresponding elements.


The set of all matrices of a fixed size has the property that addition is closed,

associative, and commutative. A unique matrix 0 exists such that for any matrix

A;


A+0=0+A=A


And corresponding to any matrix A, there exists a unique matrix B such that;


A+B=B+A=0


The product AB of two matrices A and B, is defined only if the number of

columns of the left factor A is the same as the number of rows of the right

factor B. Note that AB ≠ BA


2.3 MATLAB HISTORY


In the mid-1970s, Cleve Moler and several colleagues developed the FORTRAN

subroutine libraries called LINPACK and EISPACK. LINPACK is a collection of

FORTRAN subroutines for solving linear equations, while EISPACK contains

subroutines for solving eigenvalue problems. Together, LINPACK and EISPACK

represented the state-of-the-art software for matrix computation at the time.


In the late 1970s, Cleve, who was then chairman of the computer science depart-

ment at the University of New Mexico, wanted to be able to teach students in his

linear algebra courses using the LINPACK and EISPACK software. However, he

didn't want them to have to program in FORTRAN, because this wasn't the pur-

pose of the course. So, as a "hobby" on his own time, he started to write a program

that would provide simple interactive access to LINPACK and EISPACK. He

named his program MATLAB, for MATrix LABoratory. Over the next several

years, when Cleve would visit another university to give a talk, or as a visiting pro-

fessor, he would end up by leaving a copy of his MATLAB on the university ma-

chines. Within a year or two, MATLAB started to catch on by word of mouth

within the applied math community as a “cult” phenomenon. In early 1983, John

Little was exposed to MATLAB because of a visit Cleve made to Stanford Univer-

sity. Little, an engineer recognized the potential application of MATLAB to engi-

neering applications. So in 1983, Little teamed up with Moler and Steve Bangert to

develop a second generation, professional version of MATLAB written in C and

integrated with graphics. The MathWorks, Inc. was founded in 1984 to market and

continue development of MATLAB. The very first version was written in the late

1970s for use in courses in matrix theory, linear algebra, and numerical analysis.

The basic MATLAB data element is a matrix.


The department of Scientific Computing at Uppsala University, Sweden, started to

use MATLAB in 1986 as a teaching tool in numerical analysis, especially in nu-

merical linear algebra. The MATLAB software at that time was an academic ver-

sion, written in FORTRAN code. It has since then been updated and extended.

Solving problems in MATLAB is, therefore, generally much quicker than pro-

gramming in a high-level language such as C or FORTRAN. The focus in MAT-

LAB is on computation, not mathematics. The heart and soul of MATLAB is linear

algebra.


Now, there are two basic versions of the software, the professional version, and the

student edition. Since then, the software has evolved into an interactive system and

programming language for general scientific and technical computation and visu-

alization. There are hundreds of built-in functions that come with the basic MAT-

LAB and there are optional "toolboxes" of functions for specific purposes such as

Controls, Signal Processing, and Optimization. Most of the functions in MATLAB

and the Toolboxes are written in the MATLAB language and the source code is


readable. MATLAB commands are expressed in a form very similar to that used in

mathematics and engineering.


Some commonly used MATLAB commands are given below for any matrix A:


Command      Meaning                                        Syntax/Usage
chol         Cholesky factorization of A                    C=chol(A)
condest      1-norm condition number estimate               C=condest(A)
det          Determinant of matrix A                        det(A)
diag         Diagonal matrices and diagonals of a matrix    diag(A)
eig          Eigenvalues and eigenvectors of A              [V,E]=eig(A)
eye          Identity matrix                                eye(n)
hilb         Hilbert n-by-n matrix                          hilb(n)
inv          Matrix inverse of A                            inv(A)
lu           LU factorization of A                          [L,U]=lu(A)
linsolve     Solve a system of linear equations             x = linsolve(A,b)
magic        Magic n-by-n matrix                            magic(n)
norm         Vector and matrix norms                        [norm(v,1) norm(v) norm(v,inf)]
normest      2-norm estimate                                nrm = normest(S)
ones         Create array of all ones                       ones(n)
pascal       Pascal n-by-n matrix                           pascal(n)
pinv         Moore-Penrose pseudoinverse of a matrix        pinv(A)
poly         Characteristic polynomial of A                 P=poly(A)
rand         Uniformly distributed pseudorandom numbers     rand(n)
rref         Reduced row echelon form of A                  rref(A)
trace        Sum of diagonal elements of A                  b = trace(A)
tril         Lower triangular part of a matrix              tril(A)
triu         Upper triangular part of a matrix              triu(A)
zeros        Create array of all zeros                      zeros(n)



Linear algebra functions are located in the MATLAB matfun directory. For a

complete list, brief descriptions, and links to reference pages, type:

        help matfun




CHAPTER THREE


                                 METHODOLOGY


3.1 INTRODUCTION


One of the most important problems in technical computing is the solution of

systems of simultaneous linear equations. Considering the equation 9x = 27 which

is of the form Ax = b, that is, a 1-by-1 example.

Does the equation 9x = 27 have a unique solution?

The answer is definitely yes. The equation has the unique solution x = 3. The

solution is easily obtained by division.

9x = 27 => 9x/9 = 27/9 => x = 3

The solution is not easily obtained by computing the inverse of 9, that is,
9-1 = 0.1111... (using Ax = b => x = A-1b), and then multiplying 9-1 by 27. This would

be more work and, if 9-1 is represented to a finite number of digits, less accurate.

Similar considerations apply to sets of linear equations with more than one

unknown; the MATLAB software solves such equations without computing the

inverse of the matrix.




The two division symbols used are slash, /, and backslash, \, which describe the
solution of a general system of linear equations in the two situations where the
unknown matrix appears on the left or on the right of the coefficient matrix:


x = b/A = bA-1 denotes the solution to the matrix equation xA = b


x = A\b = A-1b denotes the solution to the matrix equation Ax = b


The coefficient matrix A is always in the denominator. The dimension

compatibility conditions for x = A\b require the two matrices A and b to have the

same number of rows. The solution x then has the same number of columns as b

and its row dimension is equal to the column dimension of A. For x = b/A, the

roles of rows and columns are interchanged.

In practice, linear equations of the form Ax = b occur more frequently than those

of the form xA = b. So we concentrate more on the backslash operator using

MATLAB. The most common situation involves a square coefficient matrix A and

a single right-hand side column vector b.
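
As a minimal illustration of the backslash operator (the 2-by-2 system here is
arbitrary):

>> A = [3 1; 1 2];
>> b = [9; 8];
>> x = A\b            % solves Ax = b without forming inv(A); here x = [2; 3]
>> A*x - b            % the residual, which is zero up to round-off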

3.2 FACTORIZATIONS (DECOMPOSITION): The core of solving a system of

linear algebraic equations is decomposing the coefficient matrix. Through the de-

composition process, the coupled equations are decoupled and the solution can be

obtained with much less effort.
Matrix factorization which will be discussed in this section will make use of

triangular matrices, where all the entries either above or below the main diagonal

are 0. Systems of linear equations involving triangular matrices are easily and

quickly solved using either forward or back substitution.

3.2.1 LU FACTORIZATION: This is one of methods used to solve system of

linear algebraic equation. It expresses any square matrix A as the product of a

permutation of a lower triangular matrix and an upper triangular matrix. It is given

by;

A = LU

L = lower triangular matrix with ones on its diagonal

U = upper triangular matrix

The LU factorization of a matrix A allows the linear system Ax = b to be solved
quickly with x = U\(L\b) in MATLAB:

Ax = b => LUx = b

Ly = b,   then   Ux = y

Example 3.1: Determine an LU decomposition for the matrix




A=    3     6      -9

      2     5      -3

      -4    1      10

Solution: we first go through the row operation

=>    3     6      -9                 1          2    -3

      2     5      -3    R1/3 =>      2          5    -3

      -4    1      10                 -4         1    10

=>    1     2      -3                            1    2    -3

      2     5      -3    R2 – 2R1     =>         0    1    3

      -4    1      10                            -4   1    10

      1     2      -3                            1    2    -3

      0     1      3     R3 + 4R1     =>         0    1    3

      -4    1      10                            0    9    -2

      1     2      -3                            1    2    -3

      0     1      3     R3 – 9R2     =>         0    1    3

      0     9      -2                            0    0    -29

      1     2      -3                            1    2    -3

      0     1      3     -R3/29       =>         0    1    3

      0     0      -29                           0    0    1

U=    1     2      -3

      0     1      3

      0     0      1
Now, we need to get L. We will need the elementary matrices for each of these, or
more precisely their inverses. The inverse matrix can be found by applying the
inverse operation to the identity matrix:

L = E1-1 E2-1 . . . . . Ek-1        (E = Elementary Matrix)

R1/3 => E1 =           1/3     0    0             E1-1 = 3    0   0

                       0       1    0                   0     1   0

                       0       0    1                   0     0   1

R2 – 2R1 => E2 = 1             0    0             E2-1 = 1    0   0

                       -2      1    0                   2     1   0

                       0       0    1                   0     0   1

R3 + 4R1 => E3 = 1             0    0             E3-1 = 1    0   0

                       0       1    0                   0     1   0

                       4       0    1                   -4    0   1

R3 – 9R2 => E4 = 1             0    0             E4-1 = 1    0   0

                       0       1    0                   0     1   0

                       0       -9   1                   0     9   1

-R3/29 => E5 =         1       0    0             E5-1 = 1    0   0

                       0       1    0                   0     1   0

                       0       0    -1/29               0     0   -29


L=    3        0        0

      2        1        0            (definition of L = E1-1 E2-1 E3-1 E4-1 E5-1)

      -4       9        -29

Finally, we can verify the LU Factorization

LU = A =       3        0      0     1      2         -3         3         6   -9

               2        1      0     0      1         3    =     2         5   -3

               -4       9      -29   0      0         1          -4        1   10

LU = A =       3(1) + 0 + 0          3(2) + 0 + 0          3(-3) + 0 + 0

               2(1) + 0 + 0          2(2) + 1 + 0          2(-3) + 3 + 0

               -4(1) + 0 + 0         -4(2) + 9 + 0         -4(-3) + 27 - 29

LU = A =       3        6      -9

               2        5      -3

               -4       1      10

In MATLAB environment,

>> A=[3 6 -9;2 5 -3;-4 1 10]

A=    3    6 -9

      2    5       -3

      -4   1 10

>> [L,U]=lu(A)

L=

    -0.7500 1.0000                  0

    -0.5000 0.8148 1.0000

    1.0000             0       0

U=

    -4.0000 1.0000 10.0000

         0       6.7500 -1.5000

         0         0       3.2222

The theoretical computation of the LU Factorization of A is different from the

MATLAB computation, but when verified:

>> L*U

ans =

     3       6    -9

     2       5    -3

    -4       1    10
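
The difference arises because MATLAB's lu performs partial pivoting, reordering the
rows of A for numerical stability; with only two output arguments the row permutation
is folded into L, which is why the L shown above is not strictly lower triangular.
Requesting the permutation matrix explicitly makes this clear (a short check on the
same matrix A):

>> [L,U,P]=lu(A)      % P is a permutation matrix such that P*A = L*U
>> P*A - L*U          % the zero matrix (up to round-off), confirming the factorization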

Example 3.2: Using the LU factorization above, find the solution of the following

system of equations:

3x1 + 6x2 -9x3 = 0

2x1 + 5x2 -3x3 = -4

-4x1 + x2 + 10x3 = 3

Solution: First we write the matrix form of the system

3     6      -9       x1           0

2     5      -3       x2     =     -4

-4    1      10       x3           3

From example 3.1, we found the LU-decomposition given as:

LU = A =     3        0      0     1    2         -3       3    6   -9

             2        1      0     0    1         3    =   2    5   -3

             -4       9      -29   0    0         1        -4   1   10

From our definition, Ly = b:

3     0      0        y1           0

2     1      0        y2     =     -4

-4    9      -29      y3           3

Solving using forward substitution:

3y1                   =0                => y1 = 0

2y1 + y2              = -4              => y2 = -4

-4y1 +9y2 – 29y3      =3                => y3 =-39/29

The second system, Ux =y

1      2         -3       x1           0

0      1         3        x2       =   -4

0      0         1        x3           -39/29

Solving using backward substitution:

x1 + 2x2 – 3x3 = 0                     => x1 = -119/29

       x2 + 3x3 = -4                   => x2 = 1/29

                 x3 = -39/29           => x3 = -39/29

In MATLAB environment,

>> A=[3 6 -9;2 5 -3;-4 1 10]

A=      3    6 -9

       2     5       -3

       -4    1 10

>> [L,U]=lu(A)

L=

    -0.7500 1.0000                 0

    -0.5000 0.8148 1.0000

    1.0000            0        0
U=

     -4.0000 1.0000 10.0000

         0    6.7500 -1.5000

         0      0   3.2222

>> b=[0;-4;3]

b=       0

         -4

         3

>> x=U\(L\b)

x=       -4.1034

         0.0345

         -1.3448

This computed solution using MATLAB equals the theoretical computation.

Example 3.3: Solve the system of equations using MATLAB:

2x1 – 5x2 + 2x3 = 7

    x1 + 2x2 – 4x3 = 3

3x1 – 4x2 – 6x3 = 5


Solution:

>> A=[2 -5 2;1 2 -4;3 -4 -6]

A=

    2       -5     2

    1       2     -4

    3       -4    -6

>> [L,U]=lu(A)

L=

    0.6667 -0.7000 1.0000

    0.3333 1.0000                   0

    1.0000             0       0

U=

    3.0000 -4.0000 -6.0000

        0        3.3333 -2.0000

        0          0       4.6000

>> b=[7;3;5]




b=

    7

    3

    5

>> x=U\(L\b)

x=

    5.0000

    1.0000

    1.0000

The solution to the given system is => x1 = 5, x2 = 1, x3 = 1

3.2.2 CHOLESKY FACTORIZATION

Cholesky factorization requires the coefficient matrix A to be symmetric and positive
definite, i.e. xTAx > 0 for all x ≠ 0. All the diagonal elements of such a matrix are
positive and the off-diagonal elements are not too big. The Cholesky factorization expresses a

symmetric matrix as the product of a triangular matrix and its transpose:

A = L U = C CT = c11      0      0              c11   c21   c31

                   c21    c22    0              0     c22   c32

                   c31    c32    c33            0     0     c33

C = L = Lower Triangular Matrix

CT = U = Upper Triangular Matrix = Transpose of Matrix C

The Cholesky factorization allows the linear system Ax = b to be written as C CT x = b, which is solved by forward substitution (Cy = b) followed by back substitution (CT x = y).

The formulas for factoring matrix A to get the lower triangular matrix C are:

c11 = √a11                   cj1 = aj1/c11                       j = 2, . . . , n

cjj = √(ajj - ∑ cjs²)                                            s = 1, . . . , j-1

cpj = (1/cjj)( apj - ∑ cjs cps )                                 p = j+1, . . . , n,   s = 1, . . . , j-1
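
These formulas can also be written out as a short MATLAB sketch. The function below,
chol_lower, is my own illustrative implementation; MATLAB's built-in chol performs the
same factorization but returns the upper triangular factor CT by default:

function C = chol_lower(A)
% Cholesky factorization A = C*C' of a symmetric positive definite matrix A,
% computed column by column from the formulas above (illustration only).
n = size(A,1);
C = zeros(n);
for j = 1:n
    s = A(j,j) - sum(C(j,1:j-1).^2);          % a_jj minus the sum of c_js^2
    C(j,j) = sqrt(s);                         % diagonal entry c_jj
    for p = j+1:n
        C(p,j) = (A(p,j) - sum(C(j,1:j-1).*C(p,1:j-1))) / C(j,j);
    end
end
end

For the matrix of Example 3.4 below, chol_lower(A) reproduces the lower triangular
factor computed by hand.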

Example 3.4: Solve the linear system by cholesky method:

x1 – 3x2 + 2x3 = 27

-3x1 + 10x2 -5x3 = -78

2x1 – 5x2 + 6x3 = 64

Solution: the above system is rewritten in Ax = b as:

1       -3      2                x1      =       27

-3      10      -5               x2      =       -78

2       -5      6                x3      =       64

A=      1       -3      2                c11     0         0             c11        c21   c31

        -3      10      -5       =       c21     c22       0             0          c22   c32

        2       -5      6                c31     c32       c33           0          0     c33

Calculating the values of c11, c21, c31, c22, c32, c33

c11 = √a11 = √1 = 1

c21 = a21/c11 = -3/1 = -3

c31 = a31/c11 = 2/1 = 2

c22 = √(a22 – c21²) = √(10 – (-3)²) = √(10 – 9) = √1 = 1

c32 = (1/c22)(a32 – c31 c21) = (1/1)(-5 – (2×(-3))) = -5 + 6 = 1

c33 = √(a33 – c31² – c32²) = √(6 – 4 – 1) = √1 = 1

A=     1      -3      2             1      0         0         1   -3   2

       -3     10      -5    =       -3     1         0         0   1    1

       2      -5      6             2      1         1         0   0    1

Using Cy = b => Ly = b

=>     1      0       0             y1     =         27

       -3     1       0             y2     =         -78

       2      1       1             y3     =         64

=>     y1                   = 27           => y1 = 27

       -3y1 + y2            = -78          => y2 = 3

       2y1 + y2 + y3 = 64           => y3 = 7

Then CTx = y => Ux = y




=>       1     -3    2            x1       =         27

         0     1     1            x2       =         3

         0     0     1            x3       =         7

=>       x1 - 3x2 + 2x3 = 27 =>   x1 = 1

              x2 + x3 = 3 =>      x2 = -4

                    x3 = 7 =>     x3 = 7

In MATLAB environment, the above example is given as:

>> A=[1 -3 2;-3 10 -5;2 -5 6]

A=

     1   -3   2

    -3 10 -5

     2   -5   6

>> C=chol(A)

C =

     1   -3   2

     0    1   1

     0    0   1

>> chol(A)'




ans =

     1    0   0

    -3    1   0

     2    1   1

>> b=[27;-78;64]

b=

    27

    -78

    64

>> x=C\(C'\b)

x=

     1

    -4

     7

Where x1 = 1, x2 = -4, x3 = 7

It can be seen that both the theoretical and the MATLAB solutions give the same result. It

should be noted that if the system is not positive definite, MATLAB generates an

error.

Example 3.5: Solve the system below using MATLAB:

2x3 + 3 = x2 + 3x1
x1 – 3x3 = 2x2 + 1

3x2 + x3 = 2 – 2x1

Solution: Rearranging the system, we get:

3x1 + x2 - 2x3 = 3

x1 - 2x2 - 3x3 = 1

2x1 + 3x2 + x3 = 2

>> A=[3 1 -2;1 -2 -3;2 3 1]

A=

    3    1   -2

    1   -2   -3

    2    3    1

>> C=chol(A)

??? Error using ==> chol

Matrix must be positive definite.

3.3 ITERATIVE METHOD

This is a step by step process of achieving the desired result of the solution of Ax =

b, i.e. by repeating a sequence of steps successively until the result is achieved. We

use this method when the matrix is diagonally dominant, i.e. when the entries on the
main diagonal are large in absolute value compared with the off-diagonal entries.


3.3.1 GAUSS-SEIDEL ITERATION METHOD

Assuming that the system has been scaled so that ajj = 1 for j = 1, 2, . . . , n (this
requires ajj ≠ 0), the matrix A is given as:

A = I + L + U

I = n-by-n identity matrix

L = Lower Triangular Matrix                        (main diagonal = 0)

U = Upper Triangular Matrix                        (main diagonal = 0)

Ax = b => Ax = (I + L + U)x = b

Ax = Ix + Lx + Ux =b                               (Ix = x)

Ax = x + Lx + Ux = b

x = Ax – Lx – Ux = b – Lx – Ux                     (Ax = b)

The iterative formula for the (m+1)st iteration becomes:

x(m+1) = b – Lx(m+1) – Ux(m)

x(m) = mth approximation (the current values)

x(m+1) = (m+1)st approximation (the new values, which are used as soon as they are available)

Example 3.6: Apply Gauss-Seidel iteration to the system below. Do 5 steps starting

from 1, 1, 1 using 5-significant digits in the computation.

10x1 + x2 + x3 = 6

x1 + 10x2 + x3 = 6
x1 + x2 + 10x3 = 6

Solution:

Dividing through the system by 10

=>     x1 + 0.1x2 + 0.1x3 = 0.6

       0.1x1 + x2 + 0.1x3 = 0.6

       0.1x1 + 0.1x2 + x3 = 0.6

Putting the system in form of x = b – Lx – Ux

x1 = 0.6 -           0.1x2 - 0.1x3

x2 = 0.6 - 0.1x1            - 0.1x3

x3 = 0.6 - 0.1x1 - 0.1x2

Iterating from 1, 1, 1 => x1 = x2 = x3 = 1

1st Iteration

x1(1) = 0.6 - 0.1(1)    - 0.1(1)    = 0.40000

x2(1) = 0.6 - 0.1(0.4)  - 0.1(1)    = 0.46000

x3(1) = 0.6 - 0.1(0.4)  - 0.1(0.46) = 0.51400

2nd Iteration

x1(2) = 0.6 -               0.1(0.46) - 0.1(0.514) = 0.50260

x2(2) = 0.6 - 0.1(0.5026)             - 0.1(0.514) = 0.49834


x3(2) = 0.6 - 0.1(0.5026) - 0.1(0.49834)             = 0.49990

3rd Iteration

x1(3) = 0.6 -       0.1(0.49834) - 0.1(0.49990) = 0.50017

x2(3) = 0.6 - 0.1(0.50017)          - 0.1(0.49990) = 0.50014

x3(3) = 0.6 - 0.1(0.50017) - 0.1(0.50014)                   =0.49996

4th Iteration

x1(4) = 0.6 -                0.1(0.50014) - 0.1(0.49996) = 0.49999

x2(4) = 0.6 - 0.1(0.49999)                  - 0.1(0.49996) = 0.50000

x3(4) = 0.6 - 0.1(0.49999) - 0.1(0.50000)                         = 0.50000

5th Iteration

x1(5) = 0.6 -                0.1(0.50000) - 0.1(0.50000) = 0.50000

x2(5) = 0.6 - 0.1(0.50000)                  - 0.1(0.50000) = 0.50000

x3(5) = 0.6 - 0.1(0.50000) - 0.1(0.50000)                         = 0.50000

Therefore, x =      x1             0.5

                    x2       =     0.5

                    x3             0.5
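
The same sweeps are easy to carry out in MATLAB. The script below is my own
illustrative sketch (not a built-in routine), applied to the system of Example 3.6:

% Gauss-Seidel iteration for the system of Example 3.6 (illustration only).
A = [10 1 1; 1 10 1; 1 1 10];
b = [6; 6; 6];
x = [1; 1; 1];                          % starting vector
for m = 1:5                             % five sweeps, as in the hand computation
    for i = 1:3
        j = [1:i-1, i+1:3];             % indices of the other two unknowns
        x(i) = (b(i) - A(i,j)*x(j)) / A(i,i);   % new value used immediately
    end
end
x                                       % converges to [0.5; 0.5; 0.5]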




3.4 MATRIX EIGENVALUE PROBLEM

In this section, we will be looking at the situation where, for a given square matrix
A and a nonzero vector x, the product Ax is the same as the scalar multiple λx for
some scalar λ.

λ is the eigenvalue of A, and x the eigenvector of A. We will often call x the

eigenvector corresponding to or associated with λ, and λ the eigenvalue

corresponding to or associated with x.

Ax = λx has a non-trivial solution x ≠ 0

Ax – λx = 0 => (A – λI)x = 0                            (I = n-by-n identity matrix)

The characteristic determinant is given as: det(A – λI) = 0

det(A – λI) =   | a11 – λ     a12        . . .    a1n     |

                | a21         a22 – λ    . . .    a2n     |

                | .           .                   .       |

                | an1         an2        . . .    ann – λ |   = 0

We also obtain a characteristic polynomial of A from the characteristic

determinant; it is of nth degree in λ. The nth degree polynomial in λ is of the form:

P(λ) = λn + Cn-1λn-1 + . . . . . + C1λ + C0

Eigenvalues are denoted by λ1, λ2 . . . λn. When λ occurs exactly once, it is called a

simple eigenvalue.

The sum of the n eigenvalues equals the sum of the main diagonal entries of the matrix, which is called the TRACE of the matrix:

Trace A = ∑ ajj = ∑ λk                 (j = 1, 2 . . . n, k = 1, 2 . . . n)

The product of the eigenvalues equals the determinant of A => det(A) = λ1λ2. . .λn

For a square matrix that is triangular, the eigenvalues are the diagonal entries a11, a22, . . . , ann.

The set of all the solutions to (A – λI) x = 0 is called the EIGENSPACE of A

corresponding to λ.


In MATLAB, the command [V,E]=eig(A) gives the eigenvectors and eigenvalues of matrix A.


With the eigenvalues on the diagonal of a diagonal matrix E and the corresponding

eigenvectors forming the columns of a matrix V, we have AV = VE


If V is nonsingular, this becomes the eigenvalue decomposition A =VEV-1


Example 3.7: Find all eigenvalues of A

A=      5       8    16

        4       1    8

        -4      -4   -11


Solution: The determinant of A – λI gives us the characteristic polynomial

A – λI =      5     8       16                                λ     0     0

              4     1       8             -                   0     λ     0

              -4    -4      -11                               0     0     λ

=     5-λ     8     16

      4       1-λ   8

      -4      -4    -11-λ

det(A – λI) = (5–λ) |1–λ 8|          -8   |4         8|       +16   |4    1-λ|

                    |-4     -11-λ|        |-4        -11-λ|         |-4   -4|

= (5-λ) [(1-λ)(-11-λ)+32]-8[4(-11-λ)+32]+16[-16+4(1-λ)]=0

= (5-λ) (λ2+10λ+21)-8(-4λ-12) +16(-4λ-12) =0

=-λ3-10λ2-21λ+5λ2+50λ+105+32λ+96-64λ-192=0

=-λ3-10λ2+5λ2-21λ+50λ+32λ-64λ+105+96-192=0

=-λ3-5λ2-3λ+9=0

=λ3+5λ2+3λ-9=0

Putting λ=1

(1)3 + 5(1)2 + 3(1) – 9 = 1 + 5 + 3 - 9 = 0

λ = 1 is an eigenvalue

Putting λ = -3
(-3)3 + 5(-3)2 + 3(-3) – 9 = -27 +45 – 9 – 9 = 0

λ = -3 is also an eigenvalue, of multiplicity two (it occurs twice):

(λ + 3)2 (λ - 1) = 0

In MATLAB environment, the solution to our problem is given as:

>> A=[5 8 16;4 1 8;-4 -4 -11]

A=

    5       8   16

    4       1   8

    -4 -4 -11

>> [T,E]=eig(A)

T=

    0.8165 -0.5774 -0.7573

    0.4082 -0.5774 0.6509

    -0.4082 0.5774 0.0532

E=

    1.0000           0   0

        0 -3.0000        0

        0       0 -3.0000

It can be seen that the entries on the main diagonal of the matrix E correspond to our eigenvalues, and that the columns of matrix T are the eigenvectors of A.

The characteristic polynomial is given by the function poly(A):

>> P=poly(A)

P=

    1.0000 5.0000     3.0000 -9.0000

It can also be seen that the coefficients of our characteristic polynomial calculated theoretically equal the coefficients returned by MATLAB.

Example 3.8: Find the eigenvalues and eigenvectors of the matrix below using

MATLAB

A=       −3    1    −3
         −8    3    −6
         2     −1    2
Solution:

>> A=[-3 1 -3;-8 3 -6; 2 -1 2]

A=

    -3   1    -3
    -8   3    -6
    2    -1   2
>> poly(A)

ans =

    1.0000 -2.0000 -1.0000 2.0000


This vector, [1 -2 -1 2], contains the coefficients of the characteristic polynomial.

Our characteristic polynomial now becomes:

λ3 – 2λ2 – λ + 2 = 0

To find the eigenvalues we must find the zeroes of this polynomial:

>> E=eig(A)

E=

    -1.0000
    2.0000
    1.0000
λ=-1, λ=2, λ=1 => (λ+1) (λ-2) (λ-1) =0

Using the [V, D] =eig(A) command MATLAB gives the eigenvalues, eigenvectors

and the diagonal form of the matrix in one step:

>> [V,D]=eig(A)

V=

    0.4472 -0.4082 -0.5774
    0.8944 -0.8165 -0.5774
       0 0.4082 0.5774
D=

    -1.0000        0       0
       0     2.0000       0
       0       0      1.0000
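As a quick check of the decomposition (a sketch only; the values noted in the comments are what one should expect, at round-off level):

% Verify AV = VD and A = V*D*inv(V) for Example 3.8
A = [-3 1 -3; -8 3 -6; 2 -1 2];
[V, D] = eig(A);
norm(A*V - V*D)     % close to zero
norm(A - V*D/V)     % V*D/V computes V*D*inv(V) without forming inv(V) explicitly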


3.5 ERROR AND ACCURACY

As indicated in chapter one, matrix and vector norms are needed to measure errors in approximate solutions of linear systems. The three common vector norms, each of which has an associated matrix norm, are:

1 – norm => ||x||1 = ∑ |xi|                      i = 1. . . . n

2 – norm => ||x||2 = √ ∑ |xi|2

∞ - norm => ||x||∞ = max {|xi|: 1 ≤ i ≤ n}

For any vector norm, the associated matrix norm is given as:

||A|| = max {||Ax||: ||x|| = 1}

For the 1- and ∞- norms associated matrix norms take the form:

||A||1 = max {∑i |aij|: 1 ≤ j ≤ n}          (maximum absolute column sum)

||A||∞ = max {∑j |aij|: 1 ≤ i ≤ n}          (maximum absolute row sum)

A measure of the sensitivity of a linear system to round-off error is obtained from the condition number of the matrix, defined as c(A) = ||A|| ||A-1||.

The residual vector is given by r = b – Ax, where x is the computed solution of the

system Ax = b. The true solution is denoted by x*.

||r||/(c(A)||b||) ≤ ||x* - x||/||x*|| ≤ c(A) ||r||/||b||
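These quantities are easy to compute in MATLAB. The sketch below uses an ill-conditioned Hilbert test matrix purely for illustration (the matrix, right-hand side, and variable names are ours, not taken from the examples that follow):

% Residual and condition-number bound on the relative error (illustrative data)
A = hilb(6);                      % an ill-conditioned test matrix
b = ones(6, 1);
x = A\b;                          % computed solution
r = b - A*x;                      % residual vector r = b - Ax
cA = cond(A, 1);                  % condition number ||A||1 * ||inv(A)||1
lower = norm(r,1)/(cA*norm(b,1));
upper = cA*norm(r,1)/norm(b,1)    % bounds on the relative error ||x* - x||/||x*||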

Example 3.9: Find the 1- and ∞- norms of the matrix A, given by:


A=     1      2      3      4        7.0001

       2      3      4      5        9.0002

       3      3      4      5        9.0004

       4      4      4      5        9.0004

       5      5      5      5        10.0003

Solution:

||A||1 is calculated from the maximum column sum

||A||1 = max {∑ |aij|: 1 ≤ j ≤ n }

||A||1 = 7.0001 + 9.0002 + 9.0004 + 9.0004 + 10.0003

||A||1 = 44.0014

||A||∞ is the maximum row sum

||A||∞ = max {∑ |aij|: 1 ≤ i ≤ n}

||A||∞ = 5 + 5 + 5 + 5 + 10.0003

||A||∞ = 30.0003

In MATLAB environment, we create the matrix A by:

>> A=[1 2 3 4 7.0001;2 3 4 5 9.0002;3 3 4 5 9.0004;4 4 4 5 9.0004;5 5 5 5 10.0003]




A=

    1.0000 2.0000     3.0000 4.0000 7.0001
    2.0000 3.0000     4.0000 5.0000 9.0002
    3.0000 3.0000     4.0000 5.0000 9.0004
    4.0000 4.0000     4.0000 5.0000 9.0004
    5.0000 5.0000     5.0000 5.0000 10.0003
>> N=[norm(A,1) norm(A,inf)]

N=

    44.0014 30.0003

It can be clearly seen that the MATLAB results agree exactly with our hand computation:

norm(A,1) = ||A||1 = 44.0014            (maximum column sum)

norm(A,inf) = ||A||∞ = 30.0003         (maximum row sum)

Example 3.10: Consider the Hilbert Matrix and its inverse given from MATLAB:

H =sym(hilb(3))= [ 1, 1/2, 1/3]

                    [1/2, 1/3, 1/4]

                    [1/3, 1/4, 1/5]

H-1 =inv(sym(hilb(3)))= [      9, -36, 30]

                           [-36, 192, -180]

                           [30, -180, 180]

Calculate the ∞-norms (maximum row sums) of both matrices and find the condition number.

Solution:
Maximum norm of H => ||H|| = 1 + ½ + 1/3 =11/6

Maximum norm of H-1 => ||H-1|| = 36 + 192 + 180 = 408

C(H) = ||H|| × ||H-1|| = 11/6(408) = 748

Example 3.11: If E is the perturbation of the matrix H in the example above due to internal round-off, calculate the round-off error estimate given by:

||x* - x||/||x*|| ≤ c(H) ||E||/||H||

where ||E|| = 0.003, c(H) = 748, and ||H|| = 11/6 ≈ 1.8333

Solution:

||x* - x||/||x*|| ≤ 748(0.003/1.8333) = 1.2240

This is an error bound of more than 100%. Therefore, it can be seen that more digits are required in the calculation to reduce floating-point errors.
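The condition number used above can be checked directly in MATLAB (a minimal sketch):

% ∞-norm condition number of the 3-by-3 Hilbert matrix
H = hilb(3);
c = norm(H, inf) * norm(inv(H), inf)   % gives ≈ 748, matching the hand computation
% equivalently: cond(H, inf)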




CHAPTER FOUR


                                APPLICATIONS


4.1 APPLICATIONS OF LINEAR ALGEBRAIC SYSTEMS


The solution of linear systems and the calculation of eigenvalues and eigenvectors are two linear algebra problems of special importance in applications. But, because of the sizes of the matrices involved, they must be done on a computer; it is only for very small matrices, or matrices of special type, that these two problems can be solved by hand.


Applications of linear algebra to science and real life problems are numerous. The

solutions to many problems in physics, chemistry, biology, engineering, medicine,

economics, and computer graphics require tools from linear algebra, as do all branches of modern mathematics.


We will discuss a few of these in this chapter and see how they are applied to real

life problems.


4.1.1 ECONOMICS: In order to manipulate and understand the economy of a country or region, one needs to come up with a model based on the various sectors of the economy.


The Leontief model is an attempt in this direction. Wassily W. Leontief (1906 – 1999), a Russian economist, was cited for his creation, the input–output technique. He defined input as the goods and services that each industry buys from all the other industries, and output as the products an industry sells to others. By this method, the variation in the flow of goods from one industry to another can be graphically demonstrated. Economists use his method to analyze, plan, and predict economic changes. The Leontief model represents the economy as a system of linear equations, and it comes in two types:


a) The Leontief Closed Model: Consider an economy consisting of n interdependent industries (sectors) s1, . . . , sn, meaning each industry consumes some of the goods produced by other industries, including itself. For example, the Power Holding Company of Nigeria (PHCN) consumes some of its own power. We say such an economy is closed if it satisfies its own needs.


Let mij be the number of units produced by industry si that are necessary to produce one unit of output of industry sj. If pk is the production level of industry sk, then mijpj represents the number of units produced by si and consumed by industry sj. The total number of units produced by industry si is then given by:


p1mi1 + p2mi2 + . . . . . + pnmin


In order to have a balanced economy, the total production of each industry must be

equal to its total consumption. This gives the linear system:


m11p1 + m12p2 + . . . . . + m1npn = p1


m21p1 + m22p2 + . . . . . + m2npn = p2


.     .      .      .      .      .


mn1p1 + mn2p2 + . . . . . + mnnpn = pn


If M =       m11    m12    . . . m1n

             m21    m22    . . . m2n

             .      .      .      .

             mn1    mn2    . . . mnn

And P =      p1

             p2

             .

             pn

The system can be rewritten as Mp = p, where M is called the input-output matrix.




Example 4.1: Suppose the economy of Gwagwalada depends on three industries;

service, electricity, and oil production. Monitoring these three industries over a pe-

riod of one year, the following observations are drawn.


    1. To produce one unit worth of service, the service industry must consume 0.3

       units of its own production, 0.3 units of electricity, and 0.3 units of oil to run

       its operation.

    2. To produce one unit of electricity, the power generating plant must buy

       0.4 units of service, 0.1 unit of its own production, and 0.5 units of oil.

    3. Finally, the oil production company requires 0.3 units of service, 0.6 units of

       electricity and 0.2 units of its own production to produce one unit of oil.


Find the production level of each of these industries in order to satisfy the internal demands, assuming that the model is closed.


Solution: Considering the following variables:


    1. p1 = production level for the service industry

    2. p2 = production level for the power generating plant

    3. p3 = production level for the oil production company


Since the model is closed, the total consumption of each industry must equal its to-

tal production.

0.3p1 + 0.3p2 + 0.3p3 = p1

        0.4p1 + 0.1p2 + 0.5p3 = p2

        0.3p1 + 0.6p2 + 0.2p3 = p3


Then M =       0.3   0.3    0.3      p=     p1

               0.4   0.1    0.5             p2

               0.3   0.6    0.2             p3

Note that each column of M sums to one, as required for a closed model. We write the above system as (M – I)p = 0.


That is, Mp = p => Mp – p = 0 => Mp – Ip = 0 => (M-I) p=0


M-I = -0.7 0.3       0.3

        0.4    -0.9 0.5

        0.3    0.6   -0.8

Using MATLAB rref command,


>> rref(M-I)


ans =


    1.0000        0 -0.8235

        0   1.0000 -0.9216

        0     0      0

To solve the system, we let p3 = t, the solution is:


p1 = 0.8235t

p2 = 0.9216t

p3 = t

The values must be non-negative for the model to make sense, i.e. t ≥ 0. Taking t = 100,


p1 = 82.35units

p2 = 92.16units

p3 = 100units
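As a quick sanity check in MATLAB (a sketch; the numbers are the production levels obtained above):

% Check that the production vector balances the closed economy: M*p ≈ p
M = [0.3 0.3 0.3; 0.4 0.1 0.5; 0.3 0.6 0.2];
p = [82.35; 92.16; 100];
M*p - p        % entries are near zero (small round-off from the 4-digit values)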

b) The Leontief Open Model: The first model treats the case where no goods leave or enter the economy, but this does not happen often in reality. So let di be the demand placed on the ith industry from outside the economy, and let pi and mij be as in the closed model discussed above; then:

pi = mi1p1 + mi2p2 + . . . . . + minpn + di           (for i = 1 . . . n)

         p = Mp + d                           (M and p are same as the closed model)

         d=    d1                             (d = demand vector)

               d2

               .

               dn

To solve the linear system;


p = Mp + d => d = p-Mp=> d = Ip-Mp => d = (I-M) p


p = (I-M)-1 d


The matrix (I – M) is required to be invertible; this might not always be the case.


Example 4.2: Consider an open economy with three industries: a coal-mining operation, an electricity-generating plant, and an auto-manufacturing plant. To produce one naira of coal, the mining operation must purchase 10 kobo of its own production, 30 kobo of electricity, and 10 kobo worth of automobile for its transportation. To produce one naira of electricity, it takes 25 kobo of coal, 40 kobo of electricity, and 15 kobo of automobile. Finally, to produce one naira worth of automobile, the auto-manufacturing plant must purchase 20 kobo of coal, 50 kobo of electricity, and consume 10 kobo of automobile. Assume that during a period of one week the economy has an exterior demand for 50,000 naira worth of coal, 75,000 naira worth of electricity, and 125,000 naira worth of autos. Find the production level of each of the three industries in that week that exactly satisfies both the internal and external demands.


Solution: The input-output matrix of the economy



M = 0.1       0.25 0.2

      0.3     0.4     0.5

      0.1     0.15 0.1

The demand vector d =         50,000

                              75,000

                              125,000

Using p = (I-M)-1 d


I-M = 1-0.1         -0.25          -0.2           0.9    -0.25   -0.2

      -0.3            1-0.4        -0.5 =         -0.3   0.6     -0.5

      -0.1            -0.15        1-0.1          -0.1   -0.15   0.9

In MATLAB,


(I-M)-1 = inv(I-M) =          1.4646 0.8031 0.7717

                              1.0079 2.4882 1.6063

                              0.3307 0.5039 1.4646

Multiplying with the demand vector d to get p


p=    1.4646 0.8031 0.7717                 50,000

      1.0079 2.4882 1.6063                 75,000

      0.3307 0.5039 1.4646                 125,000
P=    1.0e+005 *

      2.2992

      4.3780

      2.3740

Written out in full figures, this is approximately:


      p1 = 229925

      p2 = 437797.5

      p3 = 237402.5

The total output of coal mining operation =229,925 naira


The total output cost of electricity generating plant = 437,797.50 naira


The total output cost of the auto-manufacturing plant = 237,402.50 naira
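Instead of forming the inverse explicitly, the same production vector can be obtained with MATLAB's backslash operator, which solves (I – M)p = d directly; a minimal sketch:

% Leontief open model: solve (I - M)*p = d without computing inv(I - M)
M = [0.1 0.25 0.2; 0.3 0.4 0.5; 0.1 0.15 0.1];
d = [50000; 75000; 125000];
p = (eye(3) - M) \ d       % displays as 1.0e+05 * [2.2992; 4.3780; 2.3740], as above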


4.1.2 MARKOV CHAINS


A Markov process is a random process in which events are discrete rather than continuous, and the future development of each event is independent of all past events, or depends only on the immediately preceding event. Suppose there is a physical quantity or mathematical system that has n possible states and, at any time, the system is in one and only one of its n states. Assume also that at a given observation period, say the kth period, the probability of the system being in a particular state depends only on its state at the (k-1)st period. Such a system is a Markov chain or Markov process. It deals with a square matrix that has non-negative entries and row sums all equal to 1. This type of matrix is known as a stochastic matrix or transition matrix.


Example 4.3


Suppose that the 2003 state of land use in a local government area of 50square

miles of area is:


     I.   Residential Use = 30%


    II.   Commercial Use = 20%


III.      Industrial Use = 50%


Find the states in 2009, 2014, 2019 assuming the transition probabilities for 5year

intervals are given by the matrix below:


S=        0.8   0.1   0.1

          0.1   0.7   0.2

          0     0.1   0.9

Solution:


From the matrix S and the 2003 state of land use, we compute the 2009 state of

land use. If a = vector for land use

aT = [a1    a2 a3] = [30 20      50]


Multiplying aT and S


=     [30    20    50]           0.8   0.1     0.1

                                 0.1   0.7     0.2

                                 0     0.1     0.9

Using MATLAB,


aT*S = bT = [26 22       52]


Therefore, in 2009 =>      Residential Use = 26% * 50 = 13square miles


                           Commercial Use = 22% * 50 =11square miles


                           Industrial Use = 52% * 50 = 26square miles


bT * S = [26 22 52]        0.8   0.1   0.1

                           0.1   0.7   0.2

                           0     0.1   0.9

bT * S = cT = [23.0 23.2 53.8]


It shows that in 2014,


Residential Use => 23.0% * 50 =11.5square miles


Commercial Use => 23.2% * 50 = 11.6square miles


Industrial Use => 53.8% * 50 = 26.9square miles


cT * S = dT = [23.0 23.2 53.8]           0.8       0.1   0.1

                                         0.1       0.7   0.2

                                         0         0.1   0.9

dT = [20.72 23.92 55.36]


In 2019,


Residential Use => 20.72% * 50 =10.36square miles


Commercial Use => 23.92% * 50 = 11.96square miles


Industrial Use => 55.36% * 50 = 27.68square miles


From the calculation we can see that, over time, the share of land put to residential use decreases, while the shares put to commercial and industrial use increase every five years.
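The repeated multiplications above are easily scripted in MATLAB; the loop below is only an illustrative sketch (the variable names are ours):

% Propagate the land-use distribution through three 5-year transitions
S = [0.8 0.1 0.1; 0.1 0.7 0.2; 0 0.1 0.9];   % transition (stochastic) matrix
a = [30 20 50];                              % 2003 state, in percent
for k = 1:3                                  % gives the 2009, 2014 and 2019 states
    a = a*S;
    disp(a)
end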


4.1.3 ELECTRICAL NETWORK


    A network consists of branches and nodes. A typical example is the street network

where the branches are the streets and the nodes are the intersections. Many network problems can be modeled by systems of linear equations. The basic laws governing this process are:


a) Ohm’s Law: The voltage drop E across a resistor is proportional to the instantaneous current I, i.e. E = IR, where I is the current and R is the constant of proportionality, called the resistance.


b) Kirchhoff’s Voltage Law: The algebraic sum of all the instantaneous voltage drops around any closed loop is 0; in other words, the voltage impressed on a closed loop is equal to the sum of the voltage drops in the rest of the loop (a closed loop being a complete circuit for an electrical current).


c) Kirchhoff’s Current Law: At any point of a circuit, the sum of the inflowing currents is equal to the sum of the outflowing currents.


The linearity of this model is seen from the matrix equation Ri = v. The simplest sort of electrical network involves only batteries and resistors; examples of resistors include light bulbs, motors, and heaters.


Example 4.4: Determine the branch currents I1, I2, I3 in the network shown below:




Solution: Kirchhoff’s current law provides one equation for each node




NODE                         Current In                   Current Out


A                            I1                           I2 + I3


B                            I2 + I3                      I1


From Kirchhoff’s Current Law:


Node A => I1 = I2 + I3


Node B => I2 + I3 = I1


The voltage law provides one equation for each loop. In the loop at the top, we

choose a counterclockwise direction which coincides with the direction shown for

I1 and I3, and gives the current flow from the battery.
Ri Voltage Drops                          Voltage Sources


20I1 + 10I3                               60


From Kirchhoff’s Voltage Law:


20I1 + 10I3 = 60


In the bottom loop, the counterclockwise direction coincides with the direction of I2 but is opposite to I3.


Ri Voltage Drops                          Voltage Sources


5I2 – 10I3                                50


5I2 – 10I3 = 50


For the third loop, again choosing the counterclockwise direction, Kirchhoff’s Voltage Law gives:


20I1 + 5I2 = 60 +50


Rearranging, we have:


      I1 – I 2 – I 3 = 0


       20I1+10I3 = 60


        5I2–10I3 = 50
Putting the above equation in the form Ri = v


       1        -1   -1            I1     =         0


       20       0    10            I2     =         60


       0        5    -10           I3     =         50


Solving by LU factorization, Ri = LUi =v


R=     1        -1   -1            1      0         0      u11   u12     u13


       20       0    10    =       l21    1         0      0     u22     u23


       0        5    -10           l31    l32       1      0     0       u33


r11 = u11 = 1                  r12 = u12 = -1                    r13 = u13 = -1


r21 = l21u11 = 20              r22 = l21u12 + u22 = 0            r23 = l21u13 + u23 = 10


l21 = 20/1 = 20                u22 = 20                          u23 = 30


r31 = l31u11 = 0               r32 = l31u12 + l32u22 = 5         r33 = l31u13 + l32u23 + u33 = -10


l31 = 0/1 = 0                  l32 = 5/20 = 0.25                 u33 = -10 - 0 - 7.5 = -17.5


R = LU =      1    -1     -1             1         0      0     1         -1     -1

              20   0      10       =     20        1      0     0         20     30

              0    5      -10            0         0.25   1     0         0      -17.5

First, we solve Ly = v


      1       0     0      y1             0


      20      1     0      y2       =     60


      0       0.25  1      y3             50


y1                 =0              => y1 = 0

20y1 + y2          = 60            => y2 = 60

       0.25y2 +y3 = 50             => y3 = 35

Then Ui = y


      1       -1   -1      I1       =     0

      0       20   30      I2       =     60

      0       0    -17.5   I3       =     35

I1    –       I2   –       I3 = 0                   => I1 = 6 + (-2) = 4

              20I2 +       30I3 = 60                => I2 = (60 + 60)/20 = 6

                           -17.5I3 = 35             => I3 = 35/(-17.5) = -2


An electric current’s overall power depends on the amount of current flowing. Electric currents flow because atoms and molecules contain two types of electrical charge, positive and negative, and these opposite charges attract each other. Direct current (DC) is the flow of electricity in one direction; alternating current (AC) intermittently reverses direction because of the way it is generated. In a purely resistive battery network such as this one the currents are direct currents, and the sign of a computed branch current simply indicates its direction relative to the direction assumed in the diagram.

The branch currents I1 = 4 amps and I2 = 6 amps flow in the directions assumed in the network.

For I3 = -2 amps, the negative sign shows that this current actually flows opposite to the assumed direction.

With MATLAB,

>> A=[1 -1 -1;20 0 10;0 5 -10]

A=

    1    -1   -1

    20    0   10

    0    5 -10

>> b= [0; 60; 50]

b=

    0

    60

    50

>> [L, U] =lu(A)
L=

    0.0500 -0.2000 1.0000

    1.0000        0     0

         0   1.0000     0

U=

    20.0000       0 10.0000

         0   5.0000 -10.0000

         0     0 -3.5000

>> i=U\(L\b)

i=

     4

     6

    -2

The results agree with the hand computation: I1 = 4 amps, I2 = 6 amps, and I3 = -2 amps, with the negative sign of I3 again indicating that the current in that branch flows opposite to the assumed direction.
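The same currents can also be obtained in a single step with the backslash operator, which performs the factorization and solution internally; a minimal sketch:

% Solve R*i = v directly; MATLAB factors R internally
R = [1 -1 -1; 20 0 10; 0 5 -10];
v = [0; 60; 50];
i = R\v        % returns [4; 6; -2]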




CHAPTER FIVE

            SUMMARY, CONCLUSION, RECOMMENDATIONS

5.1   SUMMARY

As we know, one of the most important problems in technical computing is the solution of systems of simultaneous linear equations. The objective of this research is to show that numerical computation is the best way to deal with large systems; that is, to solve large systems of algebraic equations, numerical computing using MATLAB is one of the best alternatives.

Different methods of solving systems of algebraic equations have been used, such as LU factorization, Cholesky factorization, the Gauss-Seidel iterative method, and the eigenvalue method. For each method, problems have been solved theoretically and the same answers were arrived at when solved using MATLAB.

Chapter one deals with the introduction, scope, limitations, and methodology of solutions of linear algebraic systems.

In chapter two, the history and literature review of our research are discussed. Here, mathematical scholars who contributed immensely to linear algebra and the solution of linear systems were identified. MATLAB as a means of enhancing solutions of linear systems was also identified.

Chapter three is based on the methods compatible with MATLAB for solving linear systems to obtain a unique solution. At the end of each problem, MATLAB commands are given to simplify the theoretical method.




Chapter four shows how real-life problems are reduced to linear systems, and how a solution is arrived at using the methods of solving linear systems. The solution is then interpreted.

5.2   CONCLUSION

Numerical linear algebra deals with numerical solutions of linear algebra problems. It has grown into an independent topic for research and teaching in recent years. The reason is, of course, obvious: numerical linear algebra techniques are essential ingredients in scientific computing and are routinely used to solve practical, real-life problems (signal processing, control theory, heat transfer, fluid dynamics, biomedical engineering, vibration, statistics, bioscience, and economics).


As can be seen, the theoretical (hand) approach to solving large systems of linear equations is lengthy and tedious. Therefore, numerical methods are considered the best approach for dealing with large systems of linear equations.

Although problems arise in computing, such as errors from wrong input of data and floating-point round-off errors, it is still regarded as the best approach in terms of accuracy and speed, and it is considered reliable.

Therefore, computers can be used with minimum effort to solve systems of linear equations to a reasonable degree of accuracy and efficiency using computing programs such as MATLAB.

5.3   RECOMMENDATIONS

The MATLAB high-performance language for technical computing integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. It is a very useful mathematical computing package, recommended for use at the tertiary level of study and at every level of computing involving large amounts of data. A little familiarity is all that is needed to start using the program.

The guiding principle behind MATLAB is "Do the Right Thing." This means do-
ing what is best for everyone for the long term, and believing that "right" answers
exist.




REFERENCES


    •   Bronson, Richard (July 1988) Schaum's Outline of Matrix Operations;
        McGraw-Hill New York


    •   Heath, Michael T. (2002) Scientific Computing: An Introductory Sur-
        vey; McGraw-Hill Companies, Inc. New York


    •   Kreyszig, Erwin (March 11, 1999) Advanced Engineering Mathematics;
        WILEY ACADEMIC (John Wiley & Sons Ltd; 8th Edition, International
        Edition)



    •   Lay, David C. (August 22, 2005) Linear Algebra and Its Applications;
        Addison Wesley Publishing Company

    •   Ogunfiditimi, F. O. (2006) MTH 206(Numerical Analysis) Lecture Notes

    •   Recktenwald, Gerald W. (2000) Introduction to Numerical Methods
        with MATLAB: Implementations and Applications. Prentice-Hall Upper
        Saddle River, NJ

    •   Scheid, Francis (January 1989) Schaum's Outline of Numerical Analy-
        sis; McGraw-Hill New York



    •   Turner, Peter R. (December 1994) Numerical Analysis; Scholium Inter-
        national, Inc.



•   Ukpong, E. O. (2006) MTH 204 (Linear Algebra II) Lecture Notes


    •   Dawkins, Paul (2000) Linear Algebra; http://tutorial.math.lamar.edu/terms.aspx

    •   Wikipedia; http://en.wikipedia.org/wiki

    •   MIT 18.06 Linear Algebra, Spring 2008; http://web.mit.edu/18.06/www/

    •   Microsoft Encarta (Students Edition) 2008

    •   The MathWorks; www.mathworks.com




  • 9. Which may be written as AX = B, where A = (aij), X = (xj), and B = (bi) A is called the coefficient matrix, X the matrix variables. In the double script notation, the first subscript denotes the row and the second the column in which the entry stands. Thus a22 is the entry in the second row and second column. Therefore, In order to solve a set of equations, we express the equations in form of a matrix equation or form. A system of equation with a common solution is said to be consistent, that is, the variables have values. When the variables have no consistent value, it has no common solution, and then it is not consistent. 9   
  • 10. 1.2 OVERVIEW OF NUMERICAL SOLUTIONS Numerical Solutions involves the development and evaluation of methods for solv- ing a problem or mathematical problem numerically, rather than using the methods of algebraic mathematical analysis and theorems. It involves computing numerical results from given mathematical data. The data is known as the input while the ob- tained result is the output. Recent trends emphasize the use and computers to im- plement the computation rather than manual process. The increasing power of computer system and scientific computing is a big boost to the use and develop- ment of numerical solutions. Scientific computing is the field of study concerned with constructing mathemati- cal models and numerical solution techniques and using computers to analyze and solve scientific and engineering problems. 1.3 MATLAB Jack Little and Cleve Moler, the founders of The MathWorks, recognized the need among engineers and scientists for more powerful and productive computation en- vironments beyond those provided by languages such as FORTRAN (FORmula TRANslator) and C. In response to that need, they combined their expertise in mathematics, engineering, and computer science to develop MATLAB, a high- 10   
  • 11. performance technical computing environment. MATLAB combines comprehen- sive math and graphics functions with a powerful high-level language. The name MATLAB stands for MATrix LABoratory.A program that has been purpose-designed to make computations easy, fast, and reliable. Basic MATLAB is good for; a) Computations, including linear algebra, data analysis, signal processing, poly- nomials and interpolation, numerical integration, and numerical solution of differ- ential equations. b) Graphics, in 2-Dimension and 3-Dimension, including colour, lighting, and animation. 1.4 DEFINITION OF TERMS The conventions that will be used are that of lower case letter which denotes a sca- lar or a vector and a capital letter denotes a matrix. Typically, the MATLAB vari- ables are named with a capital letter if they will be used as matrices and lower case for scalar and vectors. 11   
  • 12. The MATLAB environment uses the term matrix to indicate a variable containing real or complex numbers arranged in a two-dimensional grid. An array is, more generally, a vector, matrix, or higher-dimensional grid of numbers MATLAB has a lot of functions that create different types of matrices, row vector and column vector. 1.4.1 REVIEW OF VECTOR, SCALAR AND MATRICES A vector is a matrix. A column vector is a m-by-1 matrix given as; a11 a21 - am1 a row vector is a 1-by-n matrix given as; [a11 a12 .. . a1n] A diagonal matrix is a matrix that has non-zero entries only on the main diagonal, i.e. a11, a22 … ann. 12   
  • 13. A diagonal matrix whose diagonal entries are equal is also a scalar 1-by-1 matrix, i.e. a11=a22= . . . = ann. D= d 0 0 = [d] 0 d 0 0 0 d When these entries equal 1, it is known as an Identity Matrix. Generally, accepted mathematical notation uses I to denote the identity matrices of different sizes. MATLAB could not use I for this purpose because id did not dis- tinguish between uppercase and lowercase letters and i already served as a sub- script and a complex unit. So, an English language pun was introduced. The Func- tion eye(m, n) returns a m-by-n matrix(if m=n, eye(n) returns a square matrix). There are also different matrices created with their different commands respec- tively, e.g. a) Pascal Matrix which consists of rows of numbers, the apex is 1 and each row starts and ends with 1, others obtained by adding together two numbers on either side in the row above, i.e. 13   
  • 14. P = pascal(3) = 1 1 1 1 2 3 1 3 6 b) Magic Matrix is a matrix where the entries in the column when added equal the same, i.e. M = magic(4) = 16 2 3 13 5 11 10 8 9 7 6 12 4 14 15 1 c) Hilbert Matrix has its entries aij = 1/(1+j-1) sym(hilb(3)) = 1 ½ 1/3 1.0000 0.5000 0.3333 ½ 1/3 ¼ = 0.5000 0.3333 0.2500 1/3 ¼ 1/5 0.3333 0.2500 0.2000 sym(hilb(3)) = hilb(3) as above. d) The Ones Matrix has all its entries equal to 1 14   
  • 15. ones(3) = 1 1 1 1 1 1 1 1 1 e) Random Matrix is a matrix with random integers: fix(10*rand(3)) = ` 6 3 7 0 8 9 4 0 9 A symmetric matrix is a square matrix whose transpose equals the matrix, i.e. AT = A = -4 1 8 1 0 -3 8 -3 5 A skew-symmetric matrix is a matrix whose transpose equals minus the matrix, i.e. –A = AT. Every skew-symmetric matrix has its entire main diagonal equal to 0 –A = AT = 0 4 5 -4 0 10 -5 -10 0 Orthogonal Matrix is a matrix where the transposition gives the inverse of the ma- trix, i.e. AT = A-1 15   
  • 16. AT = A-1 = 2/3 1/3 -2/3 -2/3 2/3 1/3 1/3 2/3 -2/3 Triangular Matrix is a matrix where all entries above the main diagonal equal 0 (lower triangular Matrix) or with all entries below the main diagonal equal to 0 (upper triangular Matrix). Sparse Matrix is a matrix with relatively few non zero entries. Sparse matrix algo- rithms can tackle huge sparse matrices that are utterly impractical for dense matrix algorithms. 1.4.2 MATRIX OPERATION Addition and subtraction of matrices is defined just as it is for arrays, element-by- element. Addition and subtraction require both matrices to have the same dimen- sion, or one of them is a scalar. If the dimensions are incompatible, an error results in MATLAB environment. The sum of two row vector or two column vectors must have the same number of components. For example, addition of a scalar and a row vector w=s+v 16   
  • 17. w= [8] + [2 0 -1] w= [2+8 0+8 -1+8] = [10 8 7] A row vector and a column vector of the same length can be multiplied in either order. The result is either a scalar, inner product, or a matrix. Multiplication of matrices is defined in a way that reflects composition of the un- derlying linear transformations and allows compact representation of systems of simultaneous linear equations. The matrix product C = AB is defined when the column dimension of A is equal to the row dimension of B, or when one of them is a scalar. If A is m-by-p and B is p-by-n, their product C is m-by-n matrix. For ex- ample, multiplying two vectors, a row vector and a column vector. u = [3; 1; 4] = 3 1 4 v = [2 0 -1] = [2 0 -1] x=v*u x= [3*2 + 1*0 + 4*-1] = [6 + 0 – 4] = [2] 17   
  • 18. It should be noted that a 3-by-2 matrix cannot be multiplied with a 3-by-3 matrix. Also, the multiplication of two matrices A and B, AB≠BA. 1.4.3 DETERMINANT Determinants were originally introduced for solving linear systems. Every square matrix A has associated with it a number called determinant of the matrix A de- noted by |A|. The function det computes the determinant of a square matrix. A de- terminant of order three is a scalar associated with a 3-by-3 matrix A such that; detA = |A| = a11 a12 a13 a21 a22 a23 a31 a32 a33 = a11| a22 a23| - a12| a21 a23| + a13| a21 a22| |a32 a33| |a31 a33| | a31 a32| = [a11 (a22a33 - a23a32) - a12 (a21a33 - a23a31) + a13 (a21a32 - a22a31)] 1.4.4 MATRIX TRANSPOSE, MATRIX INVERSE, AND PSEUDO- INVERSE. For real matrices, the transpose operation interchanges aij and aji. MATLAB uses the apostrophe operator (') to perform a complex conjugate transpose, while the general is notation ( T ), e.g. 18   
  • 19. v = [2 0 -1] vT = 2 0 -1 If A is square matrix and nonsingular (det A ≠ 0), the equations AX = I and XA = I have the same solution, X. This solution is called the inverse of A, it is denoted by A-1 and computed by the function inv(A) in MATLAB. It has the property that: A-1A= A A-1 = I (I = Identity Matrix) It should be noted that rectangular matrices do not have inverses and if A has no inverse, it is a singular matrix, i.e.det A = 0. A partial replacement for the inverse is provided by the Moore-Penrose pseudo- inverse, which is computed by the pinv function. This show rectangular matrices have pseudo-inverses using MATLAB. 1.4.5 VECTOR AND MATRIX NORMS A vector norm for column vectors x = [xj] for j = 1, 2 . . . n is a generalized length, it is denoted by||x||. The most important connection with computation is the p-norm of a vector x given by; ||x|| = [∑|xi|P] 1/P = (|x1|P + |x2|P + . . . . . + |xn|P) 1/P 19   
• 20. It is computed by norm(x,p). This is defined for any value of p ≥ 1, but the most common values of p are 1, 2, and ∞. The default value is p = 2, which corresponds to the Euclidean length given by
||x||2 = √(x1² + x2² + . . . + xn²)
The p-norm of a matrix A,
||A||p = max over x ≠ 0 of ||Ax||p / ||x||p
can be computed for p = 1, 2, and ∞ by norm(A,p) in MATLAB. Again, the default value is p = 2.
1.4.6 EIGENVALUE
Matrix eigenvalue problems deal with vector equations of the form Ax = λx, where A is a given square matrix, x is an unknown vector and λ an unknown scalar. For non-trivial solutions, i.e. x ≠ 0, the values of λ are called the eigenvalues of the matrix A and the corresponding solutions of the equation Ax = λx are called the eigenvectors.
Ax = λx:
| a11 a12 . . . a1n |   | x1 |       | x1 |
| a21 a22 . . . a2n |   | x2 |       | x2 |
|  .   .         .  |   |  . |  = λ  |  . |
| an1 an2 . . . ann |   | xn |       | xn |
Ax = λx => Ax – λx = 0 => (A – λI)x = 0
The unit matrix I is introduced since we cannot subtract the scalar λ from the matrix A. 20
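As a small MATLAB illustration of this definition (the 2-by-2 matrix below is an arbitrary example, not one from the text), eig returns the eigenvalues and eigenvectors, and each pair satisfies Ax = λx:

A = [2 1; 1 2];          % arbitrary symmetric example matrix
[V,E] = eig(A);          % columns of V are eigenvectors, diagonal of E the eigenvalues
A*V(:,1) - E(1,1)*V(:,1) % Ax - lambda*x, equal to the zero vector (up to round-off)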
• 21. 1.4.7 STABILITY AND CONDITIONING
In numerical analysis, the condition number associated with a problem is a measure of that problem's amenability to computation, i.e. how numerically well conditioned the problem is. A problem with a low condition number is well conditioned, while a problem with a high condition number is ill conditioned. The condition number associated with a linear system gives a bound on how much the computed solution x can change in response to small changes in b, and hence on how accurate an approximate solution can be. This leads to the notion of numerical stability. An algorithm is numerically stable if an error, once generated, does not grow too much during the computation. This is possible when the problem is well conditioned. The art of numerical computation is to find a stable algorithm for solving a well posed problem. For example, computing √2 = 1.41421... is a well posed problem. On the other hand, for the function f(x) = 1/(x-1), f(1.1) = 10 while f(1.001) = 1000, so evaluating f(x) near x = 1 is an ill conditioned problem: a small change in x produces a large change in f(x).
1.4.8 ERRORS
Errors are deviations from the expected or exact value. Assessing the accuracy of the output results of computations is a paramount goal in numerical solutions. 21
  • 22. There are different types of errors encountered that affect the accuracy of the re- sults obtained. The various types of errors that may limit accuracy are: a. Experimental Errors: errors from inputting mathematical data; this occurs not only in the use of wrong data but also in inherent imperfections of physical meas- urements and quantities. b. Round-off Errors: the error arises because of calculations with numbers per- formed with approximate representation of the actual numbers e.g., 3.142517 rep- resented as 3.143. c. Approximation Errors: this occurs as a result of truncating values. It is as a result of the computation method used. In MATLAB, the decimal point representations of individual entries are the signal to use variable-precision arithmetic (vpa). The result of each arithmetic operation is rounded to 16-significant decimal digits as default in MATLAB. 1.5 SCOPE OF STUDY The main focus of this study is to show that Ax = b has a unique solution. There- fore, we want to show how MATLAB works to give solution to Linear Algebraic 22   
• 23. Systems. A little familiarity with MATLAB is required; an understanding of the principle behind an algorithm for a problem is also needed. Linear Algebra is one of the cornerstones of modern computational mathematics. One of the most important problems in technical computing is the solution of systems of linear equations. In matrix notation, the general problem which is the scope of our study takes the following form: Given two matrices A and b, does there exist a unique solution or matrix x such that Ax = b or xA = b?
1.6 LIMITATIONS
The concept of a matrix was historically introduced to simplify the solution of linear systems, although matrices today have much greater and broader-reaching applications. We know that linear equations in two or three variables can be solved using techniques such as the elimination method and the substitution method, but these techniques are not practical for dealing with large systems where there are large numbers of variables. Therefore, our restriction will be to methods of solving linear equations using MATLAB. Also, every matrix in our study will be a square matrix which is non-singular, so that a unique solution exists. 23
  • 24. 1.7 METHODOLOGY Computational difficulties encountered with theoretical linear algebra methods are the use of Cramer’s rule, least square method, explicitly finding the inverse of a matrix, e.t.c. An example is using Cramer’s rule for Ax = b, it might take years to solve a 20-by-20 matrix problem using the definition of determinant. Therefore, we introduce numeric methods relevant to this study. They are, a) Iterative Methods b) Factorization (Decomposition): LU Factorization, Cholesky Factorization c) Matrix Inverse d) Eigenvalues German mathematician, Carl Friedrich Gauss, proved that every equation has at least one solution which is known as the fundamental theorem of algebra. It was first stated as every polynomial f(x) Є |R[x] with complex coefficients can be fac- tored into linear factors over the complex number. A polynomial in general is de- fined as: f(z) = zn + cn-1zn-1 + cn-2zn-2 + . . . . . + c0 Where x = complex variable 24   
  • 25. c0, c1, . . . . .cn-1 are complex constants Carl Friedrich Gauss reformulated this theorem to avoid the notion of complex number stating: Every polynomial f(x) Є |R[x] with real coefficients can be fac- tored into linear and quadratic form. The name fundamental theorem of algebra was given at a time which algebra was basically about solving polynomial equa- tions with complex or real coefficients. In spite of its name Fundamental Theorem of Algebra, there is no known proof which is purely algebraic, and many mathema- ticians believe such a proof does not exist. Besides, it is not fundamental for mod- ern algebra. It suffices to prove that every polynomial has at least one root because by ordinary algebraic division, we can divide by the corresponding linear factor to give another polynomial whose degree is reduced by 1. Repeating the whole proc- ess, we get all n of the root. A Simple illustration is x2 = 4. Solving: x = √4 => x = 2 This shows x = 2 is a solution, but x2 = -4 when solved: x =√-4 => x = 2√-1 What is the physical meaning of √-1? Then we say x2 = -4 Gauss in his dissertation said x2 = -4 has a solution: 25   
• 26. x = √-4 => x = 2√-1 => x = 2i
The following formulas were also introduced for solving an equation:
i) ax + b = 0  =>  x = -b/a
ii) ax2 + bx + c = 0  =>  x = (-b ± √(b2 – 4ac)) / 2a
Therefore, in our methods of solving systems of linear equations, there exists a unique solution of Ax = b. 26
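A brief MATLAB illustration (added here as a sketch; the polynomials below are small examples chosen for illustration, not taken from the text) confirms these formulas with the built-in roots function:

roots([1 0 4])        % roots of x^2 + 4 = 0, returns the complex pair 2i and -2i
a = 1; b = -3; c = 2;
roots([a b c])        % roots of x^2 - 3x + 2 = 0, returns 2 and 1
x = (-b + sqrt(b^2 - 4*a*c)) / (2*a)   % the quadratic formula gives the root x = 2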
  • 27. CHAPTER TWO LITERATURE REVIEW 2.1 HISTORICAL BACKGROUND OF LINEAR ALGEBRA. The history of modern linear algebra followed the introduction and development of the notion of a matrix. The subject of linear algebra followed the development of determinants, which arose from the study of coefficients of systems of linear equa- tions. Gottfried Wilhelm Leibnitz (or Leibniz) born at Leipzig on June 21 (O.S.), 1646, and died in Hanover on November 14, 1716, who was one of the two founders of calculus, used determinants in 1693 and Cramer presented his determinant-based formula for solving systems of linear equations (today known as Cramer's Rule) in 1750. Cramer's rule is a theorem in linear algebra, which gives the solution of a system of linear equations in terms of determinants. It is named after Gabriel Cramer (1704 - 1752), who published the rule in his 1750 Introduction à l'analyse des lignes courbes algébriques. Let a system be represented in matrix multiplication form as Ax = b, then the theorem states that: xi = det Ai / det A (i = 1, 2 . . . . n) Ai = matrix formed by replacing the ith column of A by the column vector b 27   
• 28. Considering the linear system:
ax1 + bx2 = e
cx1 + dx2 = f
| a b | | x1 |   | e |
| c d | | x2 | = | f |
Then by Cramer's rule:
x1 = | e  b |  /  | a  b |  =  (ed - bf) / (ad - bc)
     | f  d |     | c  d |
x2 = | a  e |  /  | a  b |  =  (af - ec) / (ad - bc)
     | c  f |     | c  d |
The rules for all n-by-n matrices are similar. 28
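A short MATLAB sketch of Cramer's rule (the 2-by-2 system below is a made-up example): it replaces the ith column of A by b and takes the ratio of determinants, exactly as in the theorem xi = det Ai / det A.

A = [2 1; 5 3];           % coefficient matrix (example values)
b = [4; 11];              % right-hand side
A1 = A; A1(:,1) = b;      % replace column 1 of A by b
A2 = A; A2(:,2) = b;      % replace column 2 of A by b
x1 = det(A1)/det(A)       % x1 = (4*3 - 1*11)/(2*3 - 1*5) = 1
x2 = det(A2)/det(A)       % x2 = (2*11 - 4*5)/(2*3 - 1*5) = 2
x  = A\b                  % the backslash operator gives the same solution [1; 2]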
  • 29. Johann Carl Friedrich Gauss (30 April 1777 – 23 February 1855) a German mathematician and scientist who contributed significantly to many fields, including number theory, statistics, analysis, differential geometry, geodesy, electrostatics, astronomy, and optics. He was sometimes known as the princeps mathematicorum (Latin, usually translated as "the Prince of Mathematicians") and "greatest mathematician since antiquity". Gauss had a remarkable influence in many fields of mathematics and science and is ranked as one of history's most influential mathematicians. He developed Gaussian elimination around 1800 and used it to solve least squares problems in celestial computations. Even though Gauss' name is associated with this technique for successively eliminating variables from systems of linear equations, Chinese manuscripts from several centuries earlier have been found that explain how to solve a system of three equations in three unknowns by Gaussian elimination. For matrix algebra to fruitfully develop, one needed both proper notation and defi- nition of matrix multiplication. James Joseph Sylvester (September 3, 1814 London – March 15, 1897 Oxford) an English mathematician, made fundamental contributions to matrix theory, invariant theory, number theory, partition theory and combinatorics. In 1848, he first introduced the term ''matrix,'' which was the Latin word for womb, as a name for an array of numbers. Matrix algebra was nur- 29   
  • 30. tured by the work of Arthur Cayley (August 16, 1821 - January 26, 1895). He helped found the modern British school of pure mathematics in 1855. Cayley stud- ied compositions of linear transformations and was led to define matrix multiplica- tion so that the matrix of coefficients for the composite transformation ST is the product of the matrix S times the matrix T. He went on to study the algebra of these compositions including matrix inverses. Cayley in his 1858 Memoir on the Theory of Matrices gave the famous Cayley-Hamilton theorem that asserts that a square matrix is a root of its characteristic polynomial. The use of a single letter A to represent a matrix was crucial to the development of matrix algebra. Early in the development, the formula det(AB) = det(A)det(B) provided a connection be- tween matrix algebra and determinants. Cayley wrote, ''There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants.'' Mathematicians also attempted to develop of algebra of vectors but there was no natural definition of the product of two vectors that held in arbitrary dimensions. Hermann Günther Grassmann (April 15, 1809, Stettin (Szczecin) – September 26, 1877, Stettin) was a German polymath, renowned in his day as a linguist and now admired as a mathematician. He was also a physicist, neohumanist, general scholar, and publisher. He proposed the first vector algebra that involved a non- 30   
  • 31. commutative vector product (that is, v x w need not equal w x v) in his book Aus- dehnungslehre (1844). Grassmann's text also introduced the product of a column matrix and a row matrix, which resulted in what is now called a simple or a rank- one matrix. In the late 19th century, American mathematical physicist Josiah Willard Gibbs (February 11, 1839 – April 28, 1903) published his famous treatise on vector analysis. In that treatise, Gibbs represented general matrices, which he called dyadics, as sums of simple matrices, which Gibbs called dyads. Later the theoretical physicist Paul Adrien Maurice Dirac (August 8, 1902 – October 20, 1984) introduced the term ''bra-ket'' for what we now call the scalar product of a ''bra'' (row) vector times a ''ket'' (column) vector and the term ''ket-bra'' for the product of a ket times a bra, resulting in what we now call a simple matrix, as above. There was renewed interest in matrices, particularly on the numerical analysis of matrices, after World War II with the development of modern digital computers. John von Neumann (December 28, 1903 – February 8, 1957) a Hungarian American mathematician and Herman Heine Goldstine (September 13, 1913 – June 16, 2004), introduced condition numbers in analyzing round-off errors in 1947. Alan Mathison Turing (23 June 1912 – 7 June 1954) and von Neumann were the 20th century giants in the development of stored-program computers. 31   
  • 32. Turing introduced the LU decomposition of a matrix in 1948. The L is a lower tri- angular matrix with 1's on the diagonal and the U is an echelon matrix. The benefit of the QR decomposition was realized a decade later. The Q is a matrix whose columns are orthonormal vectors and R is a square upper triangular invertible ma- trix with positive entries on its diagonal. The QR factorization is used in computer algorithms for various computations, such as solving equations and find eigenval- ues. 2.2 MATRIX THEORY A matrix which is a rectangular array of numbers has one of its principal uses in representing systems of equations of the first degree in several unknowns. Each matrix row represents one equation, and the entries in a row are the coefficients of the variables in the equations, in some fixed order. Given the general form of sys- tem of equations below: a11 x1 + a12 x2 + … + a1n xn = b1 . . . . . . . . . Equation (1) a21 x1 + a22 x2 + … + a2n xn = b2 . . . . . . . . . Equation (2) . . . . . . am1 x1 + am2 x2 + … + amn xn = bm . . . . . . . . . Equation (n) 32   
• 33. This is given in matrix form as:
| a11 a12 . . . a1n | | x1 |   | b1 |
| a21 a22 . . . a2n | | x2 |   | b2 |
|  .   .         .  | |  . | = |  . |
| am1 am2 . . . amn | | xn |   | bm |
and x1, x2, . . . xn are the unknowns. For example, given the linear system in three unknowns:
3x1 + 2x2 – x3 = 1 . . . . . equation (1)
2x1 - 2x2 + 4x3 = -2 . . . . . equation (2)
-x1 + 1/2x2 – x3 = 0 . . . . . equation (3)
The matrix of coefficients is:
 3   2   -1
 2  -2    4
-1  1/2  -1
The unknowns are given in vector form as:
x =
x1
x2
x3 33
• 34. Addition and multiplication can be defined so that certain sets of matrices form algebraic systems. The sum of two matrices is defined only if they are of the same size, i.e. both m-by-n with the same m and n. If A = [aij] and B = [bij] are of the same size, then C = A + B with cij = aij + bij is defined. This is done simply by adding corresponding elements. The set of all matrices of a fixed size has the property that addition is closed, associative, and commutative. A unique matrix 0 exists such that for any matrix A,
A + 0 = 0 + A = A
and corresponding to any matrix A, there exists a unique matrix B such that
A + B = B + A = 0
The product AB of two matrices A and B is defined only if the number of columns of the left factor A is the same as the number of rows of the right factor B. Note that in general AB ≠ BA.
2.3 MATLAB HISTORY
In the mid-1970s, Cleve Moler and several colleagues developed the FORTRAN subroutine libraries called LINPACK and EISPACK. LINPACK is a collection of 34
  • 35. FORTRAN subroutines for solving linear equations, while EISPACK contains subroutines for solving eigenvalue problems. Together, LINPACK and EISPACK represent the state of the art software for matrix computation. In the late 1970s, Cleve, who was then chairman of the computer science depart- ment at the University of New Mexico, wanted to be able to teach students in his linear algebra courses using the LINPACK and EISPACK software. However, he didn't want them to have to program in FORTRAN, because this wasn't the pur- pose of the course. So, as a "hobby" on his own time, he started to write a program that would provide simple interactive access to LINPACK and EISPACK. He named his program MATLAB, for MATrix LABoratory. Over the next several years, when Cleve would visit another university to give a talk, or as a visiting pro- fessor, he would end up by leaving a copy of his MATLAB on the university ma- chines. Within a year or two, MATLAB started to catch on by word of mouth within the applied math community as a “cult” phenomenon. In early 1983, John Little was exposed to MATLAB because of a visit Cleve made to Stanford Univer- sity. Little, an engineer recognized the potential application of MATLAB to engi- neering applications. So in 1983, Little teamed up with Moler and Steve Bangert to develop a second generation, professional version of MATLAB written in C and integrated with graphics. The MathWorks, Inc. was founded in 1984 to market and 35   
  • 36. continue development of MATLAB. The very first version was written in the late 1970s for use in courses in matrix theory, linear algebra, and numerical analysis. The basic MATLAB data element is a matrix. The department of Scientific Computing at Uppsala University, Sweden, started to use MATLAB in 1986 as a teaching tool in numerical analysis, especially in nu- merical linear algebra. The MATLAB software at that time was an academic ver- sion, written in FORTRAN code. It has since then been updated and extended. Solving problems in MATLAB is, therefore, generally much quicker than pro- gramming in a high-level language such as C or FORTRAN. The focus in MAT- LAB is on computation, not mathematics. The heart and soul of MATLAB is linear algebra. Now, there are two basic versions of the software, the professional version, and the student edition. Since then, the software has evolved into an interactive system and programming language for general scientific and technical computation and visu- alization. There are hundreds of built-in functions that come with the basic MAT- LAB and there are optional "toolboxes" of functions for specific purposes such as Controls, Signal Processing, and Optimization. Most of the functions in MATLAB and the Toolboxes are written in the MATLAB language and the source code is 36   
  • 37. readable. MATLAB commands are expressed in a form very similar to that used in mathematics and engineering. Some commonly used MATLAB commands are given below for any matrix A: Command Meaning Syntax/Usage chol Cholesky Factorization of A C=chol(A) condest 1-norm condition number estimate C=condest(A) det Determinant of matrix A det(A) diag Diagonal matrices and diagonals of diag(A) matrix eig Eigenvalues and Eigenvectors of A [V,E]=eig(A) eye Identity matrix eye(n) hilb Hilbert n-by-n matrix hilb(n) inv Matrix Inverse of A inv(A) lu LU Factorization of A [L,U]=lu(A) linsolve Solve a system of linear equations x = linsolve(A,b) magic Magic n-by-n Matrix magic(n) norm Vector and matrix norms [norm(v,1) norm(v) norm(v,inf)] 37   
  • 38. normest 2-norm estimate nrm = normest(S) ones Create array of all ones ones(n) pascal Pascal n-by-n Matrix pascal(n) pinv Moore-Penrose pseudoinverse of ma- pinv(A) trix poly Characteristic Polynomial of A P=poly(A) rand Uniformly distributed pseudorandom rand(n) numbers rref Reduced Row Echelon Form Of A rref(A) trace Sum of diagonal elements of A b = trace(A) tril Lower triangular part of matrix tril(A) triu Upper triangular part of matrix triu(A) zeros Create array of all zeros zeros(n) Linear algebra functions are located in the MATLAB matfun directory. For a complete list, brief descriptions, and links to reference pages, type: help matfun 38   
  • 39. CHAPTER THREE METHODOLOGY 3.1 INTRODUCTION One of the most important problems in technical computing is the solution of systems of simultaneous linear equations. Considering the equation 9x = 27 which is of the form Ax = b, that is, a 1-by-1 example. Does the equation 9x = 27 have a unique solution? The answer is definitely yes. The equation has the unique solution x = 3. The solution is easily obtained by division. 9x = 27 => 9x/9 = 27/9 => x = 3 The solution is not easily obtained by computing the inverse of 9, that is, 9- 1 = 0.1111... (Using Ax = b => x =A-1b) and then multiplying 9-1 by 27. This would be more work and, if 9-1 is represented to a finite number of digits, less accurate. Similar considerations apply to sets of linear equations with more than one unknown; the MATLAB software solves such equations without computing the inverse of the matrix. 39   
• 40. The two division symbols used are slash, /, and backslash, \, used to describe the solution of a general system of linear equations for the two situations where the unknown matrix appears on the left or on the right of the coefficient matrix:
X = b/A = bA-1 denotes the solution to the matrix equation xA = b
X = A\b = A-1b denotes the solution to the matrix equation Ax = b
The coefficient matrix A is always in the denominator. The dimension compatibility conditions for x = A\b require the two matrices A and b to have the same number of rows. The solution x then has the same number of columns as b and its row dimension is equal to the column dimension of A. For x = b/A, the roles of rows and columns are interchanged. In practice, linear equations of the form Ax = b occur more frequently than those of the form xA = b, so we concentrate more on the backslash operator in MATLAB. The most common situation involves a square coefficient matrix A and a single right-hand side column vector b.
3.2 FACTORIZATIONS (DECOMPOSITION): The core of solving a system of linear algebraic equations is decomposing the coefficient matrix. Through the decomposition process, the coupled equations are decoupled and the solution can be obtained with much less effort. 40
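Before turning to the individual factorizations, a minimal sketch of the two division operators just described (the 2-by-2 system below is an illustrative example, not one taken from the text):

A = [4 1; 2 3];      % square coefficient matrix (example values)
b = [9; 11];         % right-hand side column vector
x = A\b              % solves Ax = b; here x = [1.6; 2.6]
y = b'/A             % solves the row-vector equation y*A = b'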
• 41. The matrix factorizations which will be discussed in this section make use of triangular matrices, where all the entries either above or below the main diagonal are 0. Systems of linear equations involving triangular matrices are easily and quickly solved using either forward or back substitution.
3.2.1 LU FACTORIZATION: This is one of the methods used to solve systems of linear algebraic equations. It expresses any square matrix A as the product of (a permutation of) a lower triangular matrix and an upper triangular matrix. It is given by
A = LU
L = lower triangular matrix with ones on its diagonal
U = upper triangular matrix
The LU factorization of a matrix A allows the linear system Ax = b to be solved quickly with x = U\(L\b) in MATLAB:
Ax = b => LUx = b
Ly = b
Ux = y
Example 3.1: Determine an LU decomposition for the matrix 41
• 42. A =
 3  6  -9
 2  5  -3
-4  1  10
Solution: we first go through the row operations:
 3  6  -9              1  2  -3
 2  5  -3   R1/3   =>  2  5  -3
-4  1  10             -4  1  10

 1  2  -3                  1  2  -3
 2  5  -3   R2 - 2R1  =>   0  1   3
-4  1  10                 -4  1  10

 1  2  -3                  1  2  -3
 0  1   3   R3 + 4R1  =>   0  1   3
-4  1  10                  0  9  -2

 1  2  -3                  1  2  -3
 0  1   3   R3 - 9R2  =>   0  1   3
 0  9  -2                  0  0 -29

 1  2  -3                  1  2  -3
 0  1   3   -R3/29    =>   0  1   3
 0  0 -29                  0  0   1

U =
1  2  -3
0  1   3
0  0   1 42
• 43. Now, we need to get L. We will need the elementary matrices for each of these row operations, or more precisely their inverses. The inverse of each elementary matrix can be found by applying the inverse row operation to the identity matrix:
L = E1-1 E2-1 . . . . . Ek-1 (E = Elementary Matrix)
R1/3      =>  E1 =  1/3 0 0      E1-1 =  3  0  0
                    0   1 0              0  1  0
                    0   0 1              0  0  1
R2 - 2R1  =>  E2 =  1  0  0      E2-1 =  1  0  0
                   -2  1  0              2  1  0
                    0  0  1              0  0  1
R3 + 4R1  =>  E3 =  1  0  0      E3-1 =  1  0  0
                    0  1  0              0  1  0
                    4  0  1             -4  0  1
R3 - 9R2  =>  E4 =  1  0  0      E4-1 =  1  0  0
                    0  1  0              0  1  0
                    0 -9  1              0  9  1
-R3/29    =>  E5 =  1  0  0      E5-1 =  1  0  0
                    0  1  0              0  1  0
                    0  0 -1/29           0  0 -29 43
• 44. L =
 3  0   0
 2  1   0     (definition of L = E1-1 E2-1 E3-1 E4-1 E5-1)
-4  9 -29
Finally, we can verify the LU factorization:
LU =  3  0   0     1  2  -3      3  6  -9
      2  1   0     0  1   3   =  2  5  -3   = A
     -4  9 -29     0  0   1     -4  1  10
LU = 3(1) + 0 + 0    3(2) + 0 + 0    3(-3) + 0 + 0
     2(1) + 0 + 0    2(2) + 0 + 0    2(-3) + 0 + 0
    -4(1) + 0 + 0   -4(2) + 9 + 0   -4(-3) + 27 - 29
LU = A =
 3  6  -9
 2  5  -3
-4  1  10
In the MATLAB environment,
>> A=[3 6 -9;2 5 -3;-4 1 10]
A =
 3  6  -9
 2  5  -3
-4  1  10
>> [L,U]=lu(A) 44
  • 45. L= -0.7500 1.0000 0 -0.5000 0.8148 1.0000 1.0000 0 0 U= -4.0000 1.0000 10.0000 0 6.7500 -1.5000 0 0 3.2222 The theoretical computation of the LU Factorization of A is different from the MATLAB computation, but when verified: >> L*U ans = 3 6 -9 2 5 -3 -4 1 10 Example 3.2: Using the LU factorization above, find the solution of the following system of equations: 3x1 + 6x2 -9x3 = 0 45   
  • 46. 2x1 + 5x2 -3x3 = -4 -4x1 + x2 + 10x3 = 3 Solution: First we write the matrix form of the system 3 6 -9 x1 0 2 5 -3 x2 = -4 -4 1 10 x3 3 From example 3.1, we found the LU-decomposition given as: LU = A = 3 0 0 1 2 -3 3 6 -9 2 1 0 0 1 3 = 2 5 -3 4 9 -29 0 0 1 -4 1 10 From our definition, Ly = b: 3 0 0 y1 0 2 1 0 y2 = -4 -4 9 -29 y3 3 Solving using forward substitution: 3y1 =0 => y1 = 0 2y1 + y2 = -4 => y2 = -4 -4y1 +9y2 – 29y3 =3 => y3 =-39/29 46   
  • 47. The second system, Ux =y 1 2 -3 x1 0 0 1 3 x2 = -4 0 0 1 x3 -39/29 Solving using backward substitution: x1 + 2x2 – 3x3 = 0 => x1 = -119/29 x2 + 3x3 = -4 => x2 = 1/29 x3 = -39/29 => x3 = -39/29 In MATLAB environment, > A=[3 6 -9;2 5 -3;-4 1 10] A= 3 6 -9 2 5 -3 -4 1 10 >> [L,U]=lu(A) L= -0.7500 1.0000 0 -0.5000 0.8148 1.0000 1.0000 0 0 47   
• 48. U =
-4.0000   1.0000  10.0000
      0   6.7500  -1.5000
      0        0   3.2222
>> b=[0;-4;3]
b =
 0
-4
 3
>> x=U\(L\b)
x =
-4.1034
 0.0345
-1.3448
This computed solution using MATLAB equals the theoretical computation (x1 = -119/29, x2 = 1/29, x3 = -39/29).
Example 3.3: Solve the system of equations using MATLAB:
2x1 – 5x2 + 2x3 = 7
x1 + 2x2 – 4x3 = 3
3x1 – 4x2 – 6x3 = 5 48
  • 49. Solution: >> A=[2 -5 2;1 2 -4;3 -4 -6] A= 2 -5 2 1 2 -4 3 -4 -6 >> [L,U]=lu(A) L= 0.6667 -0.7000 1.0000 0.3333 1.0000 0 1.0000 0 0 U= 3.0000 -4.0000 -6.0000 0 3.3333 -2.0000 0 0 4.6000 >> b=[7;3;5] 49   
• 50. b =
7
3
5
>> x=U\(L\b)
x =
5.0000
1.0000
1.0000
The solution to the given system is x1 = 5, x2 = 1, x3 = 1.
3.2.2 CHOLESKY FACTORIZATION
Cholesky Factorization requires the coefficient matrix to be positive definite, i.e. xTAx > 0 for all x ≠ 0. All the diagonal elements of the matrix A are positive and the off-diagonal elements are not too big. The Cholesky Factorization expresses a symmetric matrix as the product of a triangular matrix and its transpose:
A = L U = C CT =  c11  0    0      c11  c21  c31
                  c21  c22  0       0   c22  c32
                  c31  c32  c33     0    0   c33 50
• 51. C = L = Lower Triangular Matrix
CT = U = Upper Triangular Matrix = Transpose of Matrix C
The Cholesky Factorization turns the linear system Ax = b into C CT x = b.
The formulas for factoring the matrix A to get the lower triangular matrix C are:
c11 = √a11
cj1 = aj1/c11, j = 2, . . . n
cjj = √(ajj - ∑ cjs2), the sum running over s = 1, . . . j-1
cpj = (1/cjj)(apj - ∑ cjs cps), the sum running over s = 1, . . . j-1, for p = j+1, . . . n
Example 3.4: Solve the linear system by the Cholesky method:
x1 – 3x2 + 2x3 = 27
-3x1 + 10x2 - 5x3 = -78
2x1 – 5x2 + 6x3 = 64
Solution: the above system is rewritten as Ax = b:
 1  -3   2     x1      27
-3  10  -5     x2  =  -78
 2  -5   6     x3      64
A =  1  -3   2     c11  0    0      c11  c21  c31
    -3  10  -5  =  c21  c22  0       0   c22  c32
     2  -5   6     c31  c32  c33     0    0   c33 51
• 52. Calculating the values of c11, c21, c31, c22, c32, c33:
c11 = √a11 = √1 = 1
c21 = a21/c11 = -3/1 = -3
c31 = a31/c11 = 2/1 = 2
c22 = √(a22 – c21²) = √(10 – (-3)²) = √(10 – 9) = √1 = 1
c32 = (1/c22)(a32 – c31c21) = (1/1)(-5 – (2)(-3)) = -5 + 6 = 1
c33 = √(a33 – c31² – c32²) = √(6 – 4 – 1) = √1 = 1
A =  1  -3   2     1  0  0     1  -3  2
    -3  10  -5  = -3  1  0     0   1  1
     2  -5   6     2  1  1     0   0  1
Using Cy = b => Ly = b:
 1  0  0    y1     27
-3  1  0    y2  = -78
 2  1  1    y3     64
=> y1 = 27               => y1 = 27
-3y1 + y2 = -78          => y2 = 3
2y1 + y2 + y3 = 64       => y3 = 7
Then CTx = y => Ux = y 52
• 53. =>
1  -3  2    x1     27
0   1  1    x2  =   3
0   0  1    x3      7
=> x1 - 3x2 + 2x3 = 27   => x1 = 1
   x2 + x3 = 3           => x2 = -4
   x3 = 7                => x3 = 7
In the MATLAB environment, the above example is given as:
>> A=[1 -3 2;-3 10 -5;2 -5 6]
A =
 1  -3   2
-3  10  -5
 2  -5   6
>> C=chol(A)
C =
1  -3  2
0   1  1
0   0  1
>> C' 53
• 54. ans =
 1  0  0
-3  1  0
 2  1  1
>> b=[27;-78;64]
b =
 27
-78
 64
>> x=C\(C'\b)
x =
 1
-4
 7
where x1 = 1, x2 = -4, x3 = 7. It can be seen that both the theoretical and the MATLAB solution give the same result. It should be noted that if the matrix is not positive definite, MATLAB generates an error.
Example 3.5: Solve the system below using MATLAB:
2x3 + 3 = x2 + 3x1 54
  • 55. x1 – 3x3 = 2x2 + 1 3x2 + x3 = 2 – 2x1 Solution: Rearranging the system, we get: 3x1 + x2 + 2x3 = 3 x1 + 2x2 - 3x3 = 1 -2x1 + 3x2 + x3 = 2 >> A=[3 1 2;1 2 -3;-2 3 1] A= 3 1 2 1 2 -3 -2 3 1 >> C=chol(A) ??? Error using ==> chol Matrix must be positive definite. 3.3 ITERATIVE METHOD This is a step by step process of achieving the desired result of the solution of Ax = b, i.e. by repeating a sequence of steps successively until the result is achieved. We use this method if the entries on the main diagonal of the matrix are large. 55   
• 56. 3.3.1 GAUSS-SEIDEL ITERATION METHOD
Assuming that ajj = 1 for j = 1, 2 . . . n (which can always be arranged when ajj ≠ 0 by dividing each equation by its diagonal coefficient), the matrix A is written as
A = I + L + U
I = n-by-n identity matrix
L = Lower Triangular Matrix (main diagonal = 0)
U = Upper Triangular Matrix (main diagonal = 0)
Ax = b => (I + L + U)x = b
Ix + Lx + Ux = b (Ix = x)
x + Lx + Ux = b
x = b – Lx – Ux
The iterative formula for the (m+1)st iteration now becomes:
x(m+1) = b – Lx(m+1) – Ux(m)
x(m) = mth approximation
x(m+1) = (m+1)st approximation
Example 3.6: Apply Gauss-Seidel iteration to the system below. Do 5 steps starting from 1, 1, 1 using 5 significant digits in the computation.
10x1 + x2 + x3 = 6
x1 + 10x2 + x3 = 6 56
• 57. x1 + x2 + 10x3 = 6
Solution: Dividing each equation of the system by 10 =>
x1 + 0.1x2 + 0.1x3 = 0.6
0.1x1 + x2 + 0.1x3 = 0.6
0.1x1 + 0.1x2 + x3 = 0.6
Putting the system in the form x = b – Lx – Ux:
x1 = 0.6 - 0.1x2 - 0.1x3
x2 = 0.6 - 0.1x1 - 0.1x3
x3 = 0.6 - 0.1x1 - 0.1x2
Iterating from 1, 1, 1 => x1 = x2 = x3 = 1
1st Iteration
x1(1) = 0.6 - 0.1(1) - 0.1(1) = 0.40000
x2(1) = 0.6 - 0.1(0.4) - 0.1(1) = 0.46000
x3(1) = 0.6 - 0.1(0.4) - 0.1(0.46) = 0.51400
2nd Iteration
x1(2) = 0.6 - 0.1(0.46) - 0.1(0.514) = 0.50260
x2(2) = 0.6 - 0.1(0.5026) - 0.1(0.514) = 0.49834 57
  • 58. x3(2) = 0.6 - 0.1(0.5026) - 0.1(0.49834) = 0.49990 3rd Iteration x1(3) = 0.6 - 0.1(0.49834) - 0.1(0.49990) = 0.50017 x2(3) = 0.6 - 0.1(0.50017) - 0.1(0.49990) = 0.50014 x3(3) = 0.6 - 0.1(0.50017) - 0.1(0.50014) =0.49996 4th Iteration x1(4) = 0.6 - 0.1(0.50014) - 0.1(0.49996) = 0.49999 x2(4) = 0.6 - 0.1(0.50017) - 0.1(0.49996) = 0.50000 x3(4) = 0.6 - 0.1(0.49999) - 0.1(0.50000) = 0.50000 5th Iteration x1(5) = 0.6 - 0.1(0.50000) - 0.1(0.50000) = 0.50000 x2(5) = 0.6 - 0.1(0.50000) - 0.1(0.50000) = 0.50000 x3(5) = 0.6 - 0.1(0.50000) - 0.1(0.50000) = 0.50000 Therefore, x = x1 0.5 x2 = 0.5 x3 0.5 58   
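The same computation can be written as a short MATLAB script; this is a minimal sketch of the hand iteration above (tril and triu are used only to pick out the strictly lower and upper triangular parts):

% Gauss-Seidel iteration for the scaled system of Example 3.6
A = [1 0.1 0.1; 0.1 1 0.1; 0.1 0.1 1];   % system after dividing each row by 10
b = [0.6; 0.6; 0.6];
x = [1; 1; 1];                            % starting vector
L = tril(A,-1);                           % strictly lower triangular part
U = triu(A,1);                            % strictly upper triangular part
for m = 1:5                               % five sweeps, as in the example
    for i = 1:3
        % each component uses the newest available values (Gauss-Seidel)
        x(i) = b(i) - L(i,:)*x - U(i,:)*x;
    end
end
x        % converges to [0.5; 0.5; 0.5]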
  • 59. 3.4 MATRIX EIGENVALUE PROBLEM In this section, we will be looking at a situation where a given matrix A (square matrix) and vector x, the product Ax will be the same as the scalar multiplication λx for some scalar λ. λ is the eigenvalue of A, and x the eigenvector of A. We will often call x the eigenvector corresponding to or associated with λ, and λ the eigenvalue corresponding to or associated with x. Ax = λx has a non-trivial solution x ≠ 0 Ax – λx = 0 => (A – λI)x = 0 (I = n-by-n identity matrix) The characteristic determinant is given as: det(A – λI) = 0 det(A – λI) = a11 – λ a12 ...... a1n a21 a22 – λ ...... a2n . . . . . . . =0 an1 an2 ...... ann - λ We also obtain a characteristic polynomial of A from the characteristic determinant; it is of nth degree in λ. The nth degree polynomial in λ is of the form: P(λ) = λn + Cn-1λn-1 + . . . . . + C1λ + C0 Eigenvalues are denoted by λ1, λ2 . . . λn. When λ occurs exactly once, it is called a simple eigenvalue. 59   
  • 60. The sum of n eigenvalues equals the sum of the main diagonal entries of the matrix and it is called the TRACE of the matrix. The Trace of a matrix is: Trace A = ∑ ajj = ∑ λk (j = 1, 2 . . . n, k = 1, 2 . . . n) The product of the eigenvalues = determinant of A => det(A) = λ1λ2. . .λn A square matrix which is triangular, the eigenvalue will be the diagonal entries a11, a22. . . ann. The set of all the solutions to (A – λI) x = 0 is called the EIGENSPACE of A corresponding to λ. In MATLAB, the command [V,E]=eig(A) gives the eigenvector and eigenvalue of matrix A. With the eigenvalues on the diagonal of a diagonal matrix E and the corresponding eigenvectors forming the columns of a matrix V, we have AV = VE If V is nonsingular, this becomes the eigenvalue decomposition A =VEV-1 Example 3.7: Find all eigenvalues of A A= 5 8 16 4 1 8 -4 -4 -11 60   
  • 61. Solution: The determinant of A gives us the characteristic polynomial A – λI = 5 8 16 λ 0 0 4 1 8 - 0 λ 0 -4 -4 -11 0 0 λ = 5-λ 8 16 4 1-λ 8 -4 -4 -11-λ det(A – λI) = (5–λ) |1–λ 8| -8 |4 8| +16 |4 1-λ| |-4 -11-λ| |-4 -11-λ| |-4 -4| = (5-λ) [(1-λ)(-11-λ)+32]-8[4(-11-λ)+32]+16[-16+4(1-λ)]=0 = (5-λ) (λ2+10λ+21)-8(-4λ-12) +16(-4λ-12) =0 =-λ3-10λ2-21λ+5λ2+50λ+105+32λ+96-64λ-192=0 =-λ3-10λ2+5λ2-21λ+50λ+32λ-64λ+105+96-192=0 =-λ3-5λ2-3λ+9=0 =λ3+5λ2+3λ-9=0 Putting λ=1 (1)3 + 5(1)2 + 3(1) – 9 = 1 + 5 + 3 - 9 = 0 λ = 1 is an eigenvalue Putting λ = -3 61   
  • 62. (-3)3 + 5(-3)2 + 3(-3) – 9 = -27 +45 – 9 – 9 = 0 λ = -3 is also an eigenvalue twice (λ + 3)2 (λ - 1) = 0 In MATLAB environment, the solution to our problem is given as: >> A=[5 8 16;4 1 8;-4 -4 -11] A= 5 8 16 4 1 8 -4 -4 -11 >> [T,E]=eig(A) T= 0.8165 -0.5774 -0.7573 0.4082 -0.5774 0.6509 -0.4082 0.5774 0.0532 E= 1.0000 0 0 0 -3.0000 0 0 0 -3.0000 It can be seen that the entries on the main diagonal of the matrix E corresponds to our eigenvalues and the columns of matrix T is the eigenvector of A. 62   
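Continuing in the same session, the relation AV = VE stated earlier can be verified directly; this small check is added here and uses the matrices T and E just computed:

% Verification of AV = VE with the matrices T and E computed above
R = A*T - T*E;
norm(R)          % close to zero, so each column of T is an eigenvector of A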
  • 63. The characteristic polynomial is given by the function poly(A): >> P=poly(A) P= 1.0000 5.0000 3.0000 -9.0000 It can also be seen that the coefficients of our characteristic polynomial calculated theoretically equals our MATLAB solution. Example 3.8: Find the eigenvalues and eigenvectors of the matrix below using MATLAB A= −3 1 −3 −8 3 −6 2 −1 2 Solution: >> A=[-3 1 -3;-8 3 -6; 2 -1 2] A= -3 1 -3 -8 3 -6 2 -1 2 >> poly(A) ans = 1.0000 -2.0000 -1.0000 2.0000 63   
  • 64. This vector p= [1 -2 -1 2] contains the coefficients of the characteristic polynomial. Our characteristic polynomial now becomes: λ3 – 2λ2 – λ + 2 = 0 To find the eigenvalues we must find the zeroes of this polynomial: >> E=eig(A) E= -1.0000 2.0000 1.0000 λ=-1, λ=2, λ=1 => (λ+1) (λ-2) (λ-1) =0 Using the [V, D] =eig(A) command MATLAB gives the eigenvalues, eigenvectors and the diagonal form of the matrix in one step: >> [V,D]=eig(A) V= 0.4472 -0.4082 -0.5774 0.8944 -0.8165 -0.5774 0 0.4082 0.5774 D= -1.0000 0 0 0 2.0000 0 0 0 1.0000 64   
• 65. 3.5 ERROR AND ACCURACY
As indicated in chapter one, vector and matrix norms are needed to measure errors in approximate solutions to linear systems. The three common vector norms, each of which has an associated matrix norm, are:
1 – norm => ||x||1 = ∑ |xi|, i = 1, . . . n
2 – norm => ||x||2 = √(∑ |xi|²)
∞ – norm => ||x||∞ = max {|xi| : 1 ≤ i ≤ n}
For any vector norm, the associated matrix norm is given as:
||A|| = max {||Ax|| : ||x|| = 1}
For the 1- and ∞- norms the associated matrix norms take the form:
||A||1 = max over j of ∑i |aij| (the maximum column sum)
||A||∞ = max over i of ∑j |aij| (the maximum row sum)
A measure of the sensitivity of a numerical problem to round-off error is obtained from the condition number of the matrix, defined as
c(A) = ||A|| ||A-1||
The residual vector is given by r = b – Ax, where x is the computed solution of the system Ax = b. The true solution is denoted by x*. Then
||r|| / (c(A)||b||) ≤ ||x* - x|| / ||x*|| ≤ c(A)||r|| / ||b||
Example 3.9: Find the 1- and ∞- norms of the matrix A, given by: 65
• 66. A =
1  2  3  4   7.0001
2  3  4  5   9.0002
3  3  4  5   9.0004
4  4  4  5   9.0004
5  5  5  5  10.0003
Solution: ||A||1 is the maximum column sum; the largest column sum is that of the fifth column:
||A||1 = 7.0001 + 9.0002 + 9.0004 + 9.0004 + 10.0003
||A||1 = 44.0014
||A||∞ is the maximum row sum; the largest row sum is that of the fifth row:
||A||∞ = 5 + 5 + 5 + 5 + 10.0003
||A||∞ = 30.0003
In the MATLAB environment, we create the matrix A by:
>> A=[1 2 3 4 7.0001;2 3 4 5 9.0002;3 3 4 5 9.0004;4 4 4 5 9.0004;5 5 5 5 10.0003] 66
• 67. A =
1.0000   2.0000   3.0000   4.0000   7.0001
2.0000   3.0000   4.0000   5.0000   9.0002
3.0000   3.0000   4.0000   5.0000   9.0004
4.0000   4.0000   4.0000   5.0000   9.0004
5.0000   5.0000   5.0000   5.0000  10.0003
>> N=[norm(A,1) norm(A,inf)]
N =
44.0014  30.0003
It can be clearly seen that the MATLAB results agree with our hand computation:
norm(A,1) = ||A||1 = 44.0014 (maximum column sum)
norm(A,inf) = ||A||∞ = 30.0003 (maximum row sum)
Example 3.10: Consider the Hilbert matrix and its inverse given from MATLAB:
H = sym(hilb(3)) =
[  1,  1/2, 1/3]
[1/2,  1/3, 1/4]
[1/3,  1/4, 1/5]
H-1 = inv(sym(hilb(3))) =
[  9,  -36,   30]
[-36,  192, -180]
[ 30, -180,  180]
Calculate the maximum (∞-) norms of both matrices and find the condition number.
Solution: 67
• 68. Maximum (∞-) norm of H => ||H||∞ = 1 + 1/2 + 1/3 = 11/6
Maximum (∞-) norm of H-1 => ||H-1||∞ = 36 + 192 + 180 = 408
c(H) = ||H||∞ × ||H-1||∞ = (11/6)(408) = 748
Example 3.11: Let E be the perturbation of the matrix H in the example above due to internal round-offs. Calculate the round-off error estimate given by
||x* - x|| / ||x*|| ≤ c(H) ||E|| / ||H||
where ||E|| = 0.003, c(H) = 748, ||H|| = 11/6 ≈ 1.8333.
Solution: ||x* - x|| / ||x*|| ≤ 748 (0.003/1.8333) ≈ 1.22
This is an error of more than 100%. Therefore, it can be seen that more digits are required in the calculation to reduce floating point errors. 68
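A short MATLAB sketch of Examples 3.10 and 3.11 (built-in functions only; the value 0.003 for ||E|| is the one assumed in Example 3.11):

H = hilb(3);
nH    = norm(H,inf)           % maximum row sum = 11/6
nHinv = norm(inv(H),inf)      % maximum row sum of the inverse = 408
c     = nH*nHinv              % condition number in the infinity norm = 748
cond(H,inf)                   % the same value computed directly
relerr_bound = c*0.003/nH     % the error bound of Example 3.11, about 1.22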
• 69. CHAPTER FOUR
APPLICATIONS
4.1 APPLICATIONS OF LINEAR ALGEBRAIC SYSTEMS
The solution of linear systems and the calculation of eigenvalues and eigenvectors are two linear algebra problems of special importance in applications. But, because of the sizes of the matrices involved, they must be done on a computer. It is only for very small matrices, or matrices of special type, that these two problems can be solved by hand. Applications of linear algebra to science and real life problems are numerous. The solutions to many problems in physics, chemistry, biology, engineering, medicine, economics and computer graphics require tools from linear algebra, and so do all branches of modern mathematics. We will discuss a few of these in this chapter and see how they are applied to real life problems.
4.1.1 ECONOMICS: In order to manipulate and understand the economy of a country or region, one needs to come up with a model based on the various sections or sectors of that economy. 69
  • 70. The Leontief Model is an attempt in this direction. Wassily W. Leontief (1906 – 1999), Russian economist was cited for his creation, the input – output technique. Input was defined by him as the goods and services that each industry buys from all the other industries, Output as the products industry sell to others. By this me- thod, the variation in the flow of goods from one industry to another can be graphi- cally demonstrated. Economists use his method to analyze, plan, and predict eco- nomic changes. The Leontief represents the economy as a system of linear equa- tions. They are of two types: a)The Leontief Closed Model: Consider an ,economy consisting n interdependent industries (sectors) s1. . . . .sn, meaning each industry consumes some of the goods produced by other industries, including itself. For example, Power Holding Com- pany of Nigeria (PHCN) consumes some of its own power. We say such an econ- omy is closed if it satisfies its own needs. Let mij be the number of units produced by the industry si, and necessary to pro- duce one unit of industry sj. If pk is the production level of the industry sk, then mijpj represents the number of units produced by si and consumed by industry sj. Then, the total number of units produced by industry si is given by: p1mi1 + p2mi2 + . . . . . + pnmin 70   
  • 71. In order to have a balanced economy, the total production of each industry must be equal to its total consumption. This gives the linear system: m11p1 + m12p2 + . . . . . + m1npn = p1 m21p1 + m22p2 + . . . . . + m2npn = p2 . . . . . . mn1p1 + mn2p2 + . . . . . + mnnpn = pn If M = m11 m12 . . . m1n m21 m22 . . . m2n . . . . mn1 mn2 . . . mnn And P = p1 P2 . pn The system can be rewritten as Mp = p where M is called the input-output matrix 71   
  • 72. Example 4.1: Suppose the economy of Gwagwalada depends on three industries; service, electricity, and oil production. Monitoring these three industries over a pe- riod of one year, the following observations are drawn. 1. To produce one unit worth of service, the service industry must consume 0.3 units of its own production, 0.3 units of electricity, and 0.3 units of oil to run its operation. 2. To produce one unit of electricity, the power generating plant must but 0.4units of service, 0.1 unit of its own production, and 0.5 units of oil. 3. Finally, the oil production company requires 0.3 units of service, 0.6 units of electricity and 0.2 units of its own production to produce one unit of oil. Find the production level of each of these industries in order to satisfy the external and the internal demands assuming that the model is closed Solution: Considering the following variables: 1. p1 = production level for the service industry 2. p2 = production level for the power generating plant 3. p3 = production level for the oil production company Since the model is closed, the total consumption of each industry must equal its to- tal production. 72   
  • 73. 0.3p1 + 0.3p2 + 0.3p3 = p1 0.4p1 + 0.1p2 + 0.5p3 = p2 0.3p1 + 0.6p2 + 0.2p3 = p3 Then M = 0.3 0.3 0.3 p= p1 0.4 0.1 0.5 p2 0.3 0.6 0.2 p3 Since each column sums up to one, we write the above system as (M–I) p=0 That is, Mp = p => Mp – p = 0 => Mp – Ip = 0 => (M-I) p=0 M-I = -0.7 0.3 0.3 0.4 -0.9 0.5 0.3 0.6 -0.8 Using MATLAB rref command, >> rref(M-I) ans = 1.0000 0 -0.8235 0 1.0000 -0.9216 0 0 0 73   
  • 74. To solve the system, we let p3 = t, the solution is: p1 = 0.8235t p2 = 0.9216t p3 = t The values must be non-negative for the model to make sense, i.e. t≥0. Taking t≥100, p1 = 82.35units p2 = 92.16units p3 = 100units b) The Leontief Open Model: The fist model treats the case where no goods leave or enter the economy, but this does not happen often in reality. For example, let di be the demand from an ith outside industry, pi and mij be as in the closed model dis- cussed above, then: pi = mi1p1 + mi2p2 + . . . . . + minpn + di (for i = 1 . . . n) p = Mp + d (M and p are same as the closed model) d= d1 (d = demand vector) d2 . dn 74   
  • 75. To solve the linear system; p = Mp + d => d = p-Mp=> d = Ip-Mp => d = (I-M) p p = (I-M)-1 d (I-M) matrix is required to be invertible; this might not be the case always. Example 4.2: An open economy with three industries; coal mining operation, elec- tricity generating plant, and an auto-manufacturing plant. To produce one naira of coal, the mining operation must purchase 10kobo of its own production, 30kobo of electricity and 10kobo worth of automobile for its transportation. To produce one naira of electricity, it takes 25kobo of coal, 40kobo of electricity and 15kobo of au- tomobile. Finally, to produce one naira worth of automobile, the auto- manufacturing company plant must purchase 20kobo of coal, 50kobo of electricity, and consume 10kobo of automobile. Assuming that during a period of one week, the economy has an exterior demand for 50,000naira worth of coal, 75,000naira worth of electricity, and 125,000naira worth of autos. Find the production level of each of the three industries in that period of one week in order to exactly satisfy both the internal and external demands Solution: The input-output matrix of the economy 75   
  • 76. M = 0.1 0.25 0.2 0.3 0.4 0.5 0.1 0.15 0.1 The demand vector d = 50,000 75,000 125,000 Using p = (I-M)-1 d I-M = 1-0.1 -0.25 -0.2 0.9 -0.25 -0.2 -0.3 1-0.4 -0.5 = -0.3 0.6 -0.5 -0.1 -0.15 1-0.1 -0.1 -0.15 0.9 In MATLAB, (I-M)-1 = inv(I-M) = 1.4646 0.8031 0.7717 1.0079 2.4882 1.6063 0.3307 0.5039 1.4646 Multiplying with the demand vector d to get p p= 1.4646 0.8031 0.7717 50,000 1.0079 2.4882 1.6063 75,000 0.3307 0.5039 1.4646 125,000 76   
  • 77. P= 1.0e+005 * 2.2992 4.3780 2.3740 This is approximately equal to our theoretical approach which is p1 = 229925 p2 = 437797.5 p3 = 237402.5 The total output of coal mining operation =229,925 naira The total output cost of electricity generating plant = 437,797.50 naira The total output cost of the auto-manufacturing plant = 237,402.50 naira 4.1.2 MARKOV CHAINS There is a random process in which events are discrete rather continuous, and the future development of each event is independent of all historical events, or depen- dent on only the immediate preceding event. Suppose there is a physical quantity or mathematical system that has n possible states and at any time, the system is in one and only one of its n states. Also, assuming at a given observation period, say kth period, the probability of the system being in a particular state depends only on 77   
  • 78. its status at the k-1st period. Such system is the markov chain or markov process. It deals with a square matrix that has non-negative entries and row sums al equal to1.This type of matrix is known as the Stochastic matrix or Transition matrix. Example 4.3 Suppose that the 2003 state of land use in a local government area of 50square miles of area is: I. Residential Use = 30% II. Commercial Use = 20% III. Industrial Use = 50% Find the states in 2009, 2014, 2019 assuming the transition probabilities for 5year intervals are given by the matrix below: S= 0.8 0.1 0.1 0.1 0.7 0.2 0 0.1 0.9 Solution: From the matrix S and the 2003 state of land use, we compute the 2009 state of land use. If a = vector for land use 78   
  • 79. aT = [a1 a2 a3] = [30 20 50] Multiplying aT and S = [30 20 50] 0.8 0.1 0.1 0.1 0.7 0.2 0 0.1 0.9 Using MATLAB, aT*S = bT = [26 22 52] Therefore, in 2009 => Residential Use = 26% * 50 = 13square miles Commercial Use = 22% * 50 =11square miles Industrial Use = 52% * 50 = 26square miles bT * S = [26 22 52] 0.8 0.1 0.1 0.1 0.7 0.2 0 0.1 0.9 bT * S = cT = [23.0 23.2 53.8] It shows that in 2014, Residential Use => 23.0% * 50 =11.5square miles 79   
  • 80. Commercial Use => 23.2% * 50 = 11.6square miles Industrial Use => 53.8% * 50 = 26.9square miles cT * S = dT = [23.0 23.2 53.8] 0.8 0.1 0.1 0.1 0.7 0.2 0 0.1 0.9 dT = [20.72 23.92 55.36] In 2019, Residential Use => 20.72% * 50 =10.36square miles Commercial Use => 23.92% * 50 = 11.96square miles Industrial Use => 55.36% * 50 = 26.68square miles We can see the future from our calculation that for residential purpose, the land use decreases with time. For commercial, the land use appreciates every five years. For industrial purpose, fluctuates with time. 4.1.3 ELECTRICAL NETWORK A network consists of branches and nodes. A typical example is the street network where the branches are the streets and the nodes are the intersections. Many net- 80   
  • 81. work problems can be modified by systems of linear equations. The basic laws go- verning this process are: a) Ohms’ Law: The voltage drop Er across a resistor is proportional to the instanta- neous current I => E = IR, where I is the current and R is the constant of propor- tionality called the resistance. b)Kirchhoff’s’ Voltage Law: The algebraic sum of all the instantaneous voltage drops around any close loop is 0, or the voltage impressed on a closed loop is equal to the sum of the voltage drops in the rest of the loop ( A complete circuit for an electrical current). c)Kirchhoff’s’ Current Law: At any point of a circuit, the sum of the inflowing cur- rent is equal to the sum of the out flowing currents. Linearity of this model is seen from the matrix Ri = v. The simplest sort of elec- trical network involves only batteries and resistors. Examples of resistors include light bulbs, motors, and heaters. Example 4.4: Determine the branch currents I1, I2, I3 in the network shown below: 81   
  • 82. Solution: Kirchhoff’s current law provides one equation for each node NODE Current In Current Out A I1 I2 + I3 B I2 + I3 I1 From Kirchhoff’s Current Law: Node A => I1 = I2 + I3 Node B => I2 + I3 = I1 The voltage law provides one equation for each loop. In the loop at the top, we choose a counterclockwise direction which coincides with the direction shown for I1 and I3, and gives the current flow from the battery. 82   
  • 83. Ri Voltage Drops Voltage Sources 20I1 + 10I3 60 From Kirchhoff’s Voltage Law: 20I1 + 10I3 = 60 The bottom loop, the counterclockwise direction coincides with the direction of I2 but opposite to I3 Ri Voltage Drops Voltage Sources 5I2 – 10I3 50 5I2 – 10I3 = 50 The third loop, for which we choose the counterclockwise direction by Kirchhoff’s Voltage Law: 20I1 + 5I2 = 60 +50 Rearranging, we have: I1 – I 2 – I 3 = 0 20I1+10I3 = 60 5I2–10I3 = 50 83   
  • 84. Putting the above equation in the form Ri = v 1 -1 -1 I1 = 0 20 0 10 I2 = 60 0 5 -10 I3 = 50 Solving by LU factorization, Ri = LUi =v R= 1 -1 -1 1 0 0 u11 u12 u13 20 0 10 = l21 1 0 0 u22 u23 0 5 -10 l31 l32 1 0 0 u33 r11 = u11 = 1 r12 = u12 = -1 r13 = u13 = -1 r21 = l21u11 =20 r22 =l21u12 + u22 = 0 r23 = l21u13 + u23 = 0 l21 =20/1 = 20 u22 = 20 u23 = 30 r31 = l31u11 = 0 r32 = l31u12 + l32u22 = 5 r33 = l31u13 + l32u23 +u33 l31 = 0/1 = 0 l32 = 5/20 = 0.25 l33 = 0-7.5+10 = 2.5 84   
  • 85. R = LU = 1 -1 -1 1 0 0 1 -1 -1 20 0 10 = 20 1 0 0 20 30 0 5 -10 0 0.25 1 0 0 2.5 First, we solve Ly = v 1 0 0 y1 0 20 0 0 y2 = 60 0 0.25 1 y3 50 y1 =0 => y1 = 0 20y1 + y2 = 60 => y2 = 60 0.25y2 +y3 = 50 => y3 = 35 Then Ui = y 1 -1 -1 I1 = 0 0 20 30 I2 = 60 0 0 2.5 I3 = 35 I1 – I2 – I3 = 0 => I1 = -18 +14 = -4 20I2 + 30I3 = 60 => I2 =-360/20 = -18 2.50I3 = 35 => I3 = 35/2.5 = 14 85   
  • 86. An electric current’s overall power depends on the amount of current flowing. Electric currents flow because atoms and molecules contain two types of electrical charge, positive and negative, and these opposite charges attract each other. Direct current (DC) is the flow of electricity in one direction. Alternating current (AC) intermittently reverses direction because of the way it is generated. The branch current in the network I1 = -4amps can be said to be alternating current The branch current in network I2 = -18amps, also exhibit alternating current. For I3 = 14amps, the current flows in one direction (Direct Current) With MATLAB, >> A=[1 -1 -1;20 0 10;0 5 -10] A= 1 -1 -1 20 0 10 0 5 -10 >> b= [0; 60; 50] b= 0 60 50 >> [L, U] =lu(A) 86   
  • 87. L= 0.0500 -0.2000 1.0000 1.0000 0 0 0 1.0000 0 U= 20.0000 0 10.0000 0 5.0000 -10.0000 0 0 -3.5000 >> i=U (Lb) i= 4 6 -2 The results are different, but still satisfy the system. The branch current in the network I1 = 4amps, direct current. The branch current in network I2 = 6amps, also exhibit direct current. For I3 = -2amps, alternating current. 87   
  • 88. CHAPTER FIVE SUMMARY, CONCLUSION, RECOMMENDATIONS 5.1 SUMMARY As we know, one of the most important problems in technical computing is the solution of systems of simultaneous linear equations. In this research, the objective shows numerical computation is the best alternative to deal with large systems, i.e. to solve large systems of algebraic equations, numerical computing using MAT- LAB is one of the best alternatives. However, different methods of solving systems of algebraic equations have been used such as: LU Factorization, Cholesky Factorization, Iterative method, and Ei- genvalue method. In each method, problems have been solved theoretically and the same answers were arrived at when solved using MATLAB. Chapter one deals with introduction, scope, limitation, and methodology of solu- tions of linear algebraic systems. In chapter two, the history and literature review of our research is discussed. Here, Mathematical scholars who contributed immensely to linear algebra and its solu- tions to linear systems were identified. Also, MATLAB as a method of enhancing solutions of linear systems was identified. Chapter three is based on the methods compatible with MATLAB for solving li- near systems to obtain a unique solution. At the end of a problem, MATLAB commands are given to simplify the theoretical method. 88   
  • 89. Chapter four provides the way real life problems are resolved to linear systems, and how a solution is arrived at using the methods of solving linear systems. The Solution is then interpreted 5.2 CONCLUSION Numerical Linear Algebra deals with numerical solutions of linear algebra prob- lems. It has grown into an independent topic for research and teaching in recent years. The reason is, of course, obvious. Numerical Linear Algebra techniques are essential ingredients in scientific computing that are routinely used to solve prac- tical-life problems (Signal Processing, Control Theory, Heat Transfer, Fluid Dy- namics, Biomedical Engineering, Vibration, Statistics, Bioscience, and Economics) As it can be seen, theoretical approach to methods of solving large systems of li- near equations is lengthy and tedious. Therefore, in consideration, numerical me- thod is considered the best approach in dealing with large problems in systems of linear equations. Although problems arise from computing, such as errors from wrong input of data, errors from floating point numbers, e.t.c. It is still regarded as the best in accuracy and speed. Also, it is considered reliable. Therefore, computers can be used with minimum effort to solve systems of linear equations to a reasonable degree of accuracy and efficiency using computing pro- grams, e.g. MATLAB. 5.3 RECOMMENDATIONS The MATLAB high-performance language for technical computing integrates computation, visualization, and programming in an easy-to-use environment where 89   
  • 90. problems and solutions are expressed in familiar mathematical notation. It is a very useful mathematical computing software recommended for use at tertiary level of study and all level of computing involving large data. A little familiarity is just what is needed to start using the program. The guiding principle behind MATLAB is "Do the Right Thing." This means do- ing what is best for everyone for the long term, and believing that "right" answers exist. 90   
  • 91. REFERENCES • Bronson, Richard (July 1988) Schaum's Outline of Matrix Operations; McGraw-Hill New York • Heath, Michael T. (2002) Scientific Computing: An Introductory Sur- vey; McGraw-Hill Companies, Inc. New York • Kreyszig, Erwin (March 11, 1999) Advanced Engineering Mathematics; WILEY ACADEMIC (John Wiley & Sons Ltd; 8th Edition, International Edition) • Lay, David C. (August 22, 2005) Linear Algebra and Its Applications; Addison Wesley Publishing Company • Ogunfiditimi, F. O. (2006) MTH 206(Numerical Analysis) Lecture Notes • Recktenwald, Gerald W. (2000) Introduction to Numerical Methods with MATLAB: Implementations and Applications. Prentice-Hall Upper Saddle River, NJ • Scheid, Francis (January 1989) Schaum's Outline of Numerical Analy- sis; McGraw-Hill New York • Turner, Peter R. (December 1994) Numerical Analysis; Scholium Inter- national, Inc. 91   
  • 92. Ukpong, E. O. (2006) MTH 204 (Linear Algebra II) Lecture Notes • Dawkins Paul (2000) LINEAR ALGEBRA: http://tutorial.math.lamar.edu/terms.aspx, • http://en.wikipedia.org/wiki • http://web.mit.edu/18.06/www/ • 18.06 Linear Algebra, spring 2008 • MICROSOFT ENCARTA (Students Edition) 2008 • www.mathworks.com 92