System of Linear Algebraic Equations (NSM)
1. Subject: Numerical & Statistical Methods
for Computer Engineering.
Topic: System of Linear Algebraic
Equations
2. Serial No Topic
01 Introduction
02 Solutions to the equations: graphical representation
03 Elementary transformations
04 Numerical solutions: graphical representation
05 Direct and iterative methods
06 Gauss elimination and methodology
07 Gauss-Jordan and methodology
08 Gauss-Jacobi & Gauss-Seidel
09 Applications
3. • A system of linear algebraic equations is a system of n
algebraic equations satisfied by a set of n unknown quantities. The
aim is to find these n unknown quantities satisfying the n equations.
• It is a very common practice to write the system of n equations in
matrix form as
• Ax = b, where A is an n x n non-singular matrix
and x and b are n x 1 column vectors, of which b is known. For
small n, elementary methods like Cramer's rule and matrix inversion
are very convenient for obtaining the unknown vector x from the
system Ax = b. However, for large n these methods become
computationally very expensive because of the evaluation of the matrix
determinants involved in these methods.
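As a concrete illustration (not from the slides), Cramer's rule for the smallest non-trivial case, n = 2, can be sketched in Python; the function name and the example system are my own:

```python
def solve_2x2(A, b):
    """Solve a 2x2 system Ax = b via Cramer's rule."""
    # determinant of the coefficient matrix A
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    if det == 0:
        raise ValueError("matrix is singular")
    # each unknown is a ratio of two determinants (Cramer's rule)
    x0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return [x0, x1]

# 2x + y = 3 and x + 3y = 4 have the solution x = 1, y = 1
print(solve_2x2([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0]))
```

For n = 2 this needs only three determinants, but a naive Cramer's rule for general n evaluates n + 1 determinants of size n, which is exactly the cost blow-up the slide warns about.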
5. • Elementary Operations
• There are three kinds of elementary matrix operations.
• Interchange two rows (or columns).
• Multiply each element in a row (or column) by a non-zero number.
• Multiply a row (or column) by a non-zero number and
add the result to another row (or column).
• When these operations are performed on rows, they
are called elementary row operations; and when they
are performed on columns, they are called elementary
column operations.
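The three row operations above can be sketched as small Python helpers acting on a matrix stored as a list of rows (the helper names are my own, for illustration only):

```python
def swap_rows(M, i, j):
    """Interchange rows i and j."""
    M[i], M[j] = M[j], M[i]

def scale_row(M, i, k):
    """Multiply every element of row i by a non-zero number k."""
    assert k != 0, "scaling by zero is not an elementary operation"
    M[i] = [k * x for x in M[i]]

def add_row_multiple(M, src, k, dst):
    """Add k times row src to row dst."""
    M[dst] = [d + k * s for d, s in zip(M[dst], M[src])]

M = [[1, 2], [3, 4]]
swap_rows(M, 0, 1)             # M becomes [[3, 4], [1, 2]]
scale_row(M, 1, 2)             # M becomes [[3, 4], [2, 4]]
add_row_multiple(M, 1, -1, 0)  # M becomes [[1, 0], [2, 4]]
```

Each operation is reversible (swap again, scale by 1/k, add -k times the same row), which is why they preserve the solution set of the system.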
9. • In linear algebra, Gaussian elimination (also known as
row reduction) is an algorithm for solving systems of linear
equations. It is usually understood as a sequence of
operations performed on the associated matrix of
coefficients. This method can also be used to find the
rank of a matrix, to calculate the determinant of a matrix,
and to calculate the inverse of an invertible square matrix.
The method is named after Carl Friedrich Gauss (1777–1855),
although it was known to Chinese mathematicians
as early as 179 CE.
10. • To perform row reduction on a matrix, one uses a sequence
of elementary row operations to modify the matrix until the lower
left-hand corner of the matrix is filled with zeros, as much as
possible. There are three types of elementary row operations: 1)
swapping two rows, 2) multiplying a row by a non-zero number, 3)
adding a multiple of one row to another row. Using these
operations, a matrix can always be transformed into an upper
triangular matrix, and in fact one that is in row echelon form. Once
all of the leading coefficients (the left-most non-zero entry in each
row) are 1, and every column containing a leading coefficient has
zeros elsewhere, the matrix is said to be in reduced row echelon
form. This final form is unique; in other words, it is independent of
the sequence of row operations used.
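The procedure above, followed by back substitution, can be sketched as a short Python routine. The partial-pivoting step (swapping in the largest available pivot to improve numerical stability) is a standard refinement I have added; the slide does not mention it:

```python
def gauss_eliminate(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # build the augmented matrix [A | b]
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # partial pivoting: bring the largest entry in this column to the top
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # eliminate everything below the pivot
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * p for a, p in zip(M[r], M[col])]
    # back substitution on the upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x
```

For example, `gauss_eliminate([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0])` returns the solution of 2x + y = 3, x + 3y = 4, namely x = y = 1.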
11. • The Gauss-Jordan elimination method to solve a
system of linear equations is described in the
following steps. 1. Write the augmented matrix
of the system. 2. Use row operations to
transform the augmented matrix into the form
described below, which is called the reduced
row echelon form (RREF).
12. • A matrix is in reduced row echelon form (RREF) when: (a) the rows (if
any) consisting entirely of zeros are grouped together
at the bottom of the matrix; (b) in each row that does
not consist entirely of zeros, the leftmost nonzero
element is a 1 (called a leading 1 or a pivot); (c) each
column that contains a leading 1 has zeros in all other
entries; (d) the leading 1 in any row is to the left of
any leading 1's in the rows below it.
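A minimal Python sketch of Gauss-Jordan elimination follows; unlike plain Gaussian elimination, it scales each pivot row to make the leading entry 1 and clears the pivot column both above and below, so no back substitution is needed. The partial-pivoting swap is my own addition for stability:

```python
def gauss_jordan(A, b):
    """Solve Ax = b by reducing the augmented matrix [A | b] to RREF."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # bring the largest available pivot into position (stability)
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # scale the pivot row so the leading entry is 1 (condition (b))
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # zero out the pivot column in every other row (condition (c))
        for r in range(n):
            if r != col:
                k = M[r][col]
                M[r] = [a - k * q for a, q in zip(M[r], M[col])]
    # the last column of the RREF is the solution vector
    return [M[i][n] for i in range(n)]
```

Once the matrix is in RREF, the solution can be read straight off the augmented column, which is the practical payoff of the extra work compared with row echelon form.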
14. • Perhaps the simplest iterative method for
solving Ax = b is Jacobi's method. Note that the
simplicity of this method is both good and bad: good,
because it is relatively easy to understand and thus is a
good first taste of iterative methods; bad, because it is
not typically used in practice (although its potential
usefulness has been reconsidered with the advent of
parallel computing). Still, it is a good starting point for
learning about more useful, but more complicated,
iterative methods.
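A minimal sketch of Jacobi's method in Python (the fixed iteration count and example system are my own choices; real implementations stop when the residual is small enough):

```python
def jacobi(A, b, iterations=50):
    """Jacobi iteration: every component is updated from the PREVIOUS iterate."""
    n = len(A)
    x = [0.0] * n  # initial guess
    for _ in range(iterations):
        x_new = []
        for i in range(n):
            # sum over all off-diagonal terms using the old x
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new.append((b[i] - s) / A[i][i])
        x = x_new  # the whole vector is replaced at once
    return x

# diagonally dominant system 4x + y = 5, x + 3y = 4; solution approaches [1, 1]
x = jacobi([[4.0, 1.0], [1.0, 3.0]], [5.0, 4.0])
```

Because every component of the new iterate depends only on the previous iterate, all n updates can be computed independently, which is exactly why the method parallelizes well, as the slide notes.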
15. • In numerical linear algebra, the Gauss–Seidel
method, also known as the Liebmann method or
the method of successive displacement, is
an iterative method used to solve a linear system
of equations. It is named after
the German mathematicians Carl Friedrich
Gauss and Philipp Ludwig von Seidel, and is
similar to the Jacobi method. It can be applied to
any matrix with non-zero elements on the
diagonal, though convergence is guaranteed only when the matrix
is strictly diagonally dominant or symmetric positive definite.
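The difference from Jacobi is small but important: Gauss-Seidel overwrites each component in place, so later components in the same sweep already use the newest values. A sketch under the same assumptions as the Jacobi example above (fixed iteration count, names my own):

```python
def gauss_seidel(A, b, iterations=50):
    """Gauss-Seidel iteration: updates use new values as soon as they exist."""
    n = len(A)
    x = [0.0] * n  # initial guess
    for _ in range(iterations):
        for i in range(n):
            # x[0..i-1] are already updated values from THIS sweep
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]  # overwrite in place
    return x

# same diagonally dominant system: 4x + y = 5, x + 3y = 4
x = gauss_seidel([[4.0, 1.0], [1.0, 3.0]], [5.0, 4.0])
```

Reusing fresh values typically makes Gauss-Seidel converge in fewer sweeps than Jacobi on the same system, at the cost of the easy parallelism Jacobi enjoys.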
16. • The solutions of some linear systems (that can be
represented by systems of linear equations) are
more sensitive to round-off error than others. For
some linear systems, a small change in one of the
values of the coefficient matrix or the right-hand
side vector causes a large change in the solution
vector. Such systems are called ill-conditioned.
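This sensitivity is easy to demonstrate. In the sketch below (my own example, using Cramer's rule for a 2x2 system), the two rows of A are nearly parallel; perturbing one entry of b by 0.0001 moves the solution by a full unit:

```python
def solve_2x2(A, b):
    # Cramer's rule for a 2x2 system
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

# nearly singular (ill-conditioned) coefficient matrix
A = [[1.0, 1.0], [1.0, 1.0001]]

x1 = solve_2x2(A, [2.0, 2.0001])  # solution is roughly [1, 1]
x2 = solve_2x2(A, [2.0, 2.0002])  # b nudged by 0.0001 -> roughly [0, 2]
```

A change of one part in twenty thousand in the data shifts the answer by 100%, which is the behavior the slide describes.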
17. • Linear algebra shows up in the theory of many fields
in computer science. Statistical learning models
frequently rely on matrix algebra and decompositions.
Image manipulation relies on vector manipulation and
matrix transformations. Anything involving physics uses
vector manipulation and differential equations, which
require linear algebra to truly understand.
To get into the theory of it all, you need to know linear
algebra. If you want to read white papers and consider
cutting-edge new algorithms and systems, you need to
know a lot of math.