Computational Physics 301: Exercise 1 - Matrices and Linear Algebra

Thomas Wigg
February 14, 2014
University of Bristol, Bristol, UK
This exercise involves using programs written in the C language to calculate the inverse of a square matrix in three different ways: analytically, using LU decomposition, and using singular value (SV) decomposition.
1 Matrix Inversion for Linear Algebra

1.1 Analytical Calculation of Inverse Matrix
A set of simultaneous equations can be solved using matrices by writing it as a corresponding matrix equation. In the simple case of a system with two unknowns, the equations

    a_{11}x_1 + a_{12}x_2 = c_1    (1)
    a_{21}x_1 + a_{22}x_2 = c_2    (2)

can be written in matrix form as

    \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
    \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} =
    \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.    (3)
This format for two equations can be expanded to incorporate a set of simultaneous equations of any size. This can be generalised as

    Ax = c,    (4)

where A is the matrix of coefficients, x is the vector of unknown variables and c is the vector of constants.
The solution to the set of simultaneous equations can be found by multiplying both sides of equation 4 by the inverse of A to give

    A^{-1}Ax = A^{-1}c,    (5)

which then simplifies to

    x = A^{-1}c.    (6)
The inverse of A is given by

    A^{-1} = \frac{\mathrm{adj}\,A}{|A|},    (7)

where adj A is the adjoint of A, which is the transpose of the matrix of cofactors of A, and |A| is the determinant of A.
The determinant of a matrix of any order can be calculated by Laplace expansion [1] according to

    |A| = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in} = \sum_{j=1}^{n} a_{ij}C_{ij},    (8)

where C_{ij} is the cofactor of a_{ij}. The Laplace expansion in equation 8 is performed along the ith row; it can equally be performed along the jth column. The cofactor, C_{ij}, is calculated by

    C_{ij} = (-1)^{i+j}|A_{ij}|,    (9)

where |A_{ij}| is the determinant of the matrix A with row i and column j removed. The transpose of the matrix of cofactors is simply found by

    C^{T}_{ij} = C_{ji},    (10)

that is, by swapping the row and column indices of each cofactor value.
1.2 Task 1 - Inverting a Matrix using Analytical Methods
The first task we are asked to complete is to write a program to calculate the inverse of a square matrix of chosen order using the analytical techniques described above. Due to the recursive nature of this method of calculating determinants by Laplace expansion, it was necessary to write a function which is able to call upon itself. The determinant function used performs Laplace expansion along the first row for simplicity, and continues to call upon itself until the input matrix is 1 x 1 (a single value), at which point an if() statement returns that single input value. The program also contains an if() statement that reports that the inverse of the entered matrix is not mathematically possible should the determinant be 0.
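The routine itself is not reproduced in this section, so the following is a minimal sketch of a recursive determinant of this kind, assuming C99 variable-length arrays; all names are illustrative rather than the author's.

/* Minimal sketch (not the author's code): recursive determinant by
   Laplace expansion along the first row. */
double determinant(int n, double m[n][n])
{
    if (n == 1)
        return m[0][0];                       /* base case: 1 x 1 matrix */

    double det = 0.0;
    double minor[n - 1][n - 1];

    for (int j = 0; j < n; j++) {             /* expand along the first row */
        for (int r = 1; r < n; r++)           /* build the minor: drop row 0 */
            for (int c = 0, cc = 0; c < n; c++)
                if (c != j)
                    minor[r - 1][cc++] = m[r][c];   /* ...and column j */
        det += ((j % 2 == 0) ? 1.0 : -1.0)    /* sign (-1)^{i+j}, eq. (9) */
               * m[0][j] * determinant(n - 1, minor);
    }
    return det;
}

An if() test on the returned value being 0 can then report that no inverse exists, as described above.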
A very similar process is then used to progress through each of the elements of the original matrix and calculate the matrix of cofactors; this calls upon the determinant function to calculate the determinant of the reduced matrix for each element. Finally, the transpose of the cofactor matrix is computed and the inverse matrix calculated and returned. The program initially read the user-entered matrix elements from the command window. This would be a useful feature were I releasing the program as a standalone matrix inverter, but as the order increases, entering all the matrix elements for, say, a 10 x 10 matrix becomes impractical. Instead, I adapted the program to compute the inverse of a matrix written into the routine itself. Because of this, there are some inefficient variable assignments, but as the program works effectively, I did not see a need to clean it up.
To remove the need for me to write each matrix by hand, I wrote a routine to generate a matrix of pseudo-random numbers of user-entered order using the rand() function and the RAND_MAX macro. The program writes the comma-separated matrix to a file, which can then be copied into the analytical inversion program. This routine can be seen in appendix A.
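Appendix A is not reproduced in this section; the sketch below shows one way such a generator might look, assuming entries scaled into [0, 1] and an illustrative output file name.

/* Sketch of the pseudo-random matrix generator (file name and entry
   range are assumptions, not taken from the report). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    int n;
    printf("Enter matrix order: ");
    if (scanf("%d", &n) != 1 || n < 1)
        return 1;

    srand((unsigned) time(NULL));             /* seed the generator */

    FILE *f = fopen("matrix.csv", "w");       /* illustrative file name */
    if (f == NULL)
        return 1;

    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            fprintf(f, "%f%c", (double) rand() / RAND_MAX,
                    (j == n - 1) ? '\n' : ',');   /* comma-separated rows */

    fclose(f);
    return 0;
}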
Figure 1 shows a comparison of the execution times for computing the inverses of matrices of increasing order using analytical methods. For each order, the pseudo-random matrix was inverted five times and the mean execution time calculated. There was very little fluctuation in the execution time; what little there was can be attributed to external processes on the system using a fraction of the available processing power.
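The report does not state how the timings were taken; a plausible sketch using the standard clock() function, with a hypothetical stand-in for the inversion routine under test, is:

#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in: replace with the inversion routine under test. */
static void invert_under_test(void) { /* ... */ }

int main(void)
{
    double total = 0.0;
    for (int run = 0; run < 5; run++) {               /* five runs, as above */
        clock_t start = clock();
        invert_under_test();
        total += (double) (clock() - start) / CLOCKS_PER_SEC;
    }
    printf("Mean execution time: %f s\n", total / 5.0);
    return 0;
}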
Figure 1 shows that the execution time to compute the inverse of matrices of order up to eleven remains very small, but past this the execution time rises exponentially, with the inversion of a 13 x 13 matrix taking nearly an hour and a half. This result implies that the calculation of the inverse of a higher-order matrix will take many hours and, as such, I did not attempt any matrices of order greater than 13. Whilst the execution times will vary greatly between systems (due to technical specifications), the same exponential increase in calculation time will inevitably be seen beyond some order, due to the growth in the number of calculations required to compute the inverse of a matrix of increasing order.
Figure 1: Graph showing the execution time (in seconds) against the matrix order for a program which computes the inverse of an entered matrix analytically. The fit is of exponential form, y = 0.34204 + 4.74445 \times 10^{-12} e^{2.6637x}.

In fact, the exponential fit used is only a good approximation to the true relationship; the total number of calculations required to compute the inversion (and hence the execution time) scales according to

    N = (n - 1)\,n! + n! - 1,    (11)
where N is the total number of operations required and n is the order of the entered matrix. This relationship follows from the fact that the computation of the determinant involves n! summands, each summand requiring (n - 1) multiplications, and finally combining this into the inverted matrix requires n! - 1 additions [1]. As is obvious from this relationship (and is corroborated by the results seen in figure 1), a small increase in the order of the matrix results in a large increase in the number of required computations.
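To make the scale of equation 11 concrete, note that it simplifies to N = n \cdot n! - 1, so each unit increase in order multiplies the work by roughly a factor of n:

% Worked example of equation (11), using (n-1)\,n! + n! - 1 = n \cdot n! - 1:
N_{12} = 12 \cdot 12! - 1 \approx 5.7 \times 10^{9}, \qquad
N_{13} = 13 \cdot 13! - 1 \approx 8.1 \times 10^{10}

a jump of roughly a factor of fourteen between orders 12 and 13, consistent with the sharp rise seen in figure 1.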
To test the accuracy of this analytical method, the results were compared to hand-calculated inverse matrices of order 3 (for higher orders, matrix inversion by hand becomes a tiresome exercise in bookkeeping), which were verified against the values returned by an online matrix calculator [2]. The precision of a double-precision floating-point number is seventeen significant figures [3], which would logically imply that this is the maximum precision to which the matrix elements can be stored. This was indeed the case: when the routine was modified to print the elements to more than seventeen significant figures, the program simply returned zeros for the additional places. Considering the accuracy of the returned inverse elements, it was found that they were accurate up to the number of significant figures of the true answer. As an example, one of the true (hand-calculated) inverse matrix elements of a matrix
produced by the pseudo-random generator was 0.845723, and the program returned the value 0.84572324197412796, which rounds to the true value. I am unsure as to why the routine generates these additional digits rather than returning zeros, but I presume it must be due to the recursive nature of the calculations and rounding errors being carried forward.
2 Algorithms for Linear Algebra
2.1 LU Decomposition
LU decomposition describes breaking down the matrix A in equation 4 into the product of two matrices such that

    A = LU,    (12)

where the matrix L is lower triangular (it only has non-zero elements on the main diagonal and below) and U is upper triangular (as with L, but with non-zero elements only on and above the main diagonal) [4]. For a 3 x 3 matrix in element form, equation 12 would be written as

    \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} =
    \begin{pmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{pmatrix}
    \begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix}.    (13)
Equation 4 can now be written as

    Ax = LUx = c.    (14)

By making the substitution

    Ux = y    (15)

such that

    Ly = c,    (16)

solving equation 14 becomes trivial. Considering equation 13 as an example, it can be seen from the first row of the L matrix that this simply reduces to the expression y_1 = c_1 / l_{11}. This can then be substituted into the calculation for y_2, and so on. This can be generalised to give an expression for the value of any unknown variable as

    y_k = \frac{c_k - \sum_{j=1}^{k-1} l_{kj} y_j}{l_{kk}}, \qquad k = 2, 3, \cdots, n.    (17)
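As a concrete illustration of equation 17, a minimal sketch of this forward substitution in C (names illustrative, assuming C99 variable-length arrays and non-zero diagonal elements):

/* Sketch of forward substitution, Ly = c (equation 17); not taken
   from the report's code. */
void forward_substitute(int n, double l[n][n], const double c[n], double y[n])
{
    for (int k = 0; k < n; k++) {
        double sum = c[k];
        for (int j = 0; j < k; j++)
            sum -= l[k][j] * y[j];    /* subtract the already-known terms */
        y[k] = sum / l[k][k];         /* y_k = (c_k - sum) / l_kk */
    }
}

Back substitution on Ux = y then recovers x in the same manner, working from the last row upwards.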
2.2 Singular Value Decomposition
Singular value decomposition is again a means of simplifying the solution of equation 4. This is achieved by decomposing A into the product of three matrices such that

    A = UDV^{T},    (18)

where the matrix D is diagonal (non-zero elements only along the main diagonal) and the matrices U and V are orthogonal [4]. This equation is easier to solve, as the inverse of an orthogonal matrix is simply its transpose, and the inverse of a diagonal matrix is also simply found:

    \begin{pmatrix} d_{11} & 0 & \cdots & 0 \\ 0 & d_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_{nn} \end{pmatrix}^{-1} =
    \begin{pmatrix} 1/d_{11} & 0 & \cdots & 0 \\ 0 & 1/d_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1/d_{nn} \end{pmatrix}.    (19)
From this, the inverse of A is given by

    A^{-1} = VD^{-1}U^{T}.    (20)
2.3 Task 2 - Comparing the Performance of Analytical, LUD and SVD Methods for Inverting Matrices

2.3.1 Comparison of Speed of Execution of Analytical, LUD and SVD Methods for Inverting Matrices
The second task required programs to be written that compute the inverse of an entered matrix using LUD and SVD routines from the GNU Scientific Library (GSL) [5]. The GSL matrix functions only recognise vectors and matrices in the gsl_vector and gsl_matrix forms respectively, so any matrix arrays or vectors to be manipulated by the GSL functions must first be stored in these forms.
LU Decomposition - The matrix is first decomposed using the gsl_linalg_LU_decomp function and then simply inverted using the gsl_linalg_LU_invert function, as in the sketch below.
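A minimal sketch of this two-call sequence (the wrapper name and allocation pattern are my own, not the report's):

#include <gsl/gsl_linalg.h>

/* Sketch: invert an n x n matrix with GSL's LU routines. Note that
   gsl_linalg_LU_decomp overwrites `a` with its LU factors. */
void invert_lu(gsl_matrix *a, gsl_matrix *inverse, size_t n)
{
    int signum;
    gsl_permutation *p = gsl_permutation_alloc(n);

    gsl_linalg_LU_decomp(a, p, &signum);   /* a -> L and U (plus permutation) */
    gsl_linalg_LU_invert(a, p, inverse);   /* inverse = A^-1 */

    gsl_permutation_free(p);
}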
SV Decomposition - The matrix is first decomposed into the matrices U, D and V using the gsl_linalg_SV_decomp function. As the GSL functions only allow the multiplication of two matrices at a time, it is then necessary to use a sequence of two gsl_blas_dgemm calls to compute A^{-1} = VD^{-1}U^{T}.
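A corresponding sketch for the SVD route, forming D^{-1} explicitly so that two gsl_blas_dgemm calls suffice (again my own wrapper, not the report's code):

#include <gsl/gsl_linalg.h>
#include <gsl/gsl_blas.h>

/* Sketch: invert an n x n matrix as A^-1 = V D^-1 U^T. After
   gsl_linalg_SV_decomp, `a` holds U and `s` the singular values. */
void invert_svd(gsl_matrix *a, gsl_matrix *inverse, size_t n)
{
    gsl_matrix *v    = gsl_matrix_alloc(n, n);
    gsl_matrix *dinv = gsl_matrix_calloc(n, n);   /* zeroed; D^-1 is diagonal */
    gsl_matrix *vd   = gsl_matrix_alloc(n, n);    /* workspace for V D^-1 */
    gsl_vector *s    = gsl_vector_alloc(n);
    gsl_vector *work = gsl_vector_alloc(n);

    gsl_linalg_SV_decomp(a, v, s, work);          /* a -> U */

    for (size_t i = 0; i < n; i++)
        gsl_matrix_set(dinv, i, i, 1.0 / gsl_vector_get(s, i));

    /* Two dgemm calls: vd = V D^-1, then inverse = vd U^T. */
    gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, v, dinv, 0.0, vd);
    gsl_blas_dgemm(CblasNoTrans, CblasTrans,   1.0, vd, a,   0.0, inverse);

    gsl_matrix_free(v);  gsl_matrix_free(dinv);  gsl_matrix_free(vd);
    gsl_vector_free(s);  gsl_vector_free(work);
}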
Figure 2 shows a comparison of the execution times for the LUD and SVD methods, together with the projection of the analytical method from figure 1. As with the analytical method, each matrix was produced using the matrix generator and its inverse computed five times, with the mean execution time shown on the graph.
Figure 2: Graph comparing the execution times (in seconds) of a program calculating the inverse of a matrix using analytical methods (cyan points) and programs utilising the GSL functions to compute the inverse using LU decomposition (blue points) and singular value decomposition (red points). The cyan line is the exponential projection of the analytical method seen in figure 1. The blue line is a fit of y = 0.19043 + 0.000170032\,x^{1.978564} projecting the execution time of matrix inversion by LUD methods. The red line is a fit of y = 0.16435 + 0.000129852\,x^{2.00012} projecting the execution time of matrix inversion using SVD.
It is evident from figure 2 that the LUD and SVD methods of computing the matrix inverse are much faster than the analytical method. The analytical method became impractically slow at an order of 13, whereas the LUD and SVD methods were still able to calculate the inverse of a 350 x 350 matrix in under 20 seconds. It is also clear that whilst at lower orders the LUD and SVD methods return the matrix inverse in roughly the same time, the LUD method becomes more efficient at higher orders. I would have tested this further, for matrices of much higher order, but the routine crashed as soon as a matrix order of 361 was entered. I can only assume this is because the GSL functions are limited in the dimensions of the matrices they can manipulate. It is also worth noting that the execution time for both the LUD and SVD methods scales approximately with the square of the matrix order.
k             Inverse accuracy (significant figures)
1 × 10^-1     16
1 × 10^-2     13
1 × 10^-3     12
1 × 10^-4     6
1 × 10^-5     7
1 × 10^-6     6
1 × 10^-7     8
1 × 10^-8     6
1 × 10^-9     7
1 × 10^-10    5
1 × 10^-11    4
1 × 10^-12    3
1 × 10^-13    2
1 × 10^-14    0

Table 1: Accuracy of the inverted matrix elements (in significant figures) returned by the SVD method of matrix inversion for various values of k, compared to the values returned by an online matrix inverse calculator.
2.3.2 Accuracy of LUD and SVD Routines When Entered Matrix is Close to Singular
The final part of this task asked us to compare the inverse matrices returned by the LUD and SVD methods with the true inverse when the entered matrix is close to singular, and to compare the accuracies of the two methods. For the purposes of this sub-task, I will be using the close-to-singular matrix

    A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & -1 \\ 2 & 3 & k \end{pmatrix},    (21)
where k is a small number. For the LUD method of inversion, the routine always returns each element of the inverse to seventeen significant figures (the precision of a double-precision floating-point number). The routine is always accurate to the number of significant figures of the true value and, should the true value have fewer than seventeen significant figures, the routine generates the remaining figures such that, if rounded to the number of significant figures of the true value, the returned value matches the true value; why this occurs is curious. For a value of k = 1 × 10^-16 the routine crashes and reports that the determinant of the input matrix is equal to zero (and hence that the input matrix is singular), presumably because the routine is rounding the value of k to zero.
The SVD method of computing the inverse of the close-to-singular matrices behaved somewhat differently to its LUD counterpart. Whereas the LUD routine computed the inverse matrix elements to the same number of significant figures as the true value for all values of k up to the point at which it crashes, the accuracy of the SVD routine became gradually worse as the value of k was reduced in factors of ten. The accuracy of the inverse matrix elements returned by the SVD routine, with their corresponding values of k, can be seen in table 1. For a value of k = 1 × 10^-14, the SVD routine returned a completely inaccurate inverted matrix. For both the LUD and SVD routines, the results were compared to an online matrix inverse calculator [2], and the routines were adjusted to print the values of the inverse matrix elements to the highest possible precision.
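For reference, this highest possible precision can be requested with the %.17g format specifier; a minimal sketch, reusing the element value quoted in section 1.2:

#include <stdio.h>

int main(void)
{
    /* Value quoted in section 1.2; %.17g prints the 17 significant
       digits needed to round-trip a double exactly. */
    double element = 0.84572324197412796;
    printf("%.17g\n", element);
    return 0;
}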
3 Physics Problem: Football Stadium Camera

A remote overhead camera at a football stadium is suspended by three cables attached to the roof. Each cable is fed from a motorised drum so the camera can be moved around by changing the lengths of the cables. The camera has a mass of 50 kg and the attachment points are all 70 m from the centre of the pitch, forming an equilateral triangle in a horizontal plane. Using an appropriate matrix algorithm, calculate and plot the tension in one of the cables as a function of the camera's position as it is moved in a horizontal plane a constant depth (10 m) below the attachment points. What is the maximum tension and where does it occur? [You may ignore the mass of the cables.] - Exercise 1 handout
Figures 3, 4 and 5 show the physical setup of the above problem. In figure 3, the length of the sides of the equilateral triangle was simply calculated using the cosine rule, and consequently the height of the triangle was found using Pythagoras' theorem. Using figures 4 and 5 it is possible to construct a set of simultaneous equations relating the components of the tension in each wire. These equations are as follows:

    T_1\sin\theta_1 + T_2\sin\theta_2 + T_3\sin\theta_3 = 50g    (22)
    -T_1\cos\theta_1\cos\phi_1 + T_2\cos\theta_2\cos\phi_2 + T_3\cos\theta_3\sin\phi_3 = 0    (23)
    -T_1\cos\theta_1\sin\phi_1 + T_2\cos\theta_2\sin\phi_2 + T_3\cos\theta_3\cos\phi_3 = 0    (24)
From these it is possible to write a corresponding matrix equation such that

    \begin{pmatrix} \sin\theta_1 & \sin\theta_2 & \sin\theta_3 \\ -\cos\theta_1\cos\phi_1 & \cos\theta_2\cos\phi_2 & \cos\theta_3\sin\phi_3 \\ -\cos\theta_1\sin\phi_1 & \cos\theta_2\sin\phi_2 & \cos\theta_3\cos\phi_3 \end{pmatrix}
    \begin{pmatrix} T_1 \\ T_2 \\ T_3 \end{pmatrix} =
    \begin{pmatrix} 50g \\ 0 \\ 0 \end{pmatrix}.    (25)
Figure 3: Diagram showing the stadium from above with the camera centred and attached to the three anchor points on the stadium roof by negligible-mass cables.
It would be possible to solve this expression as it stands, but for simplicity of coding and understanding I expressed the trigonometric terms in equation 25 in terms of the x and y coordinates of the camera, where the x and y axes are defined in figure 3.
Figure 4: Diagram showing the camera at an arbitrary position above the pitch. The relative angles and cable tensions are also noted.
Figure 5: Diagram showing one of the cables and the camera in the vertical plane. The tension in the cable and the weight of the camera are also noted.
This results in the matrix expression

    \begin{pmatrix} \frac{10}{a} & \frac{10}{b} & \frac{10}{c} \\ \frac{-x}{a} & \frac{70\sqrt{3}-x}{b} & \frac{35\sqrt{3}-x}{c} \\ \frac{-y}{a} & \frac{-y}{b} & \frac{105-y}{c} \end{pmatrix}
    \begin{pmatrix} T_1 \\ T_2 \\ T_3 \end{pmatrix} =
    \begin{pmatrix} 50g \\ 0 \\ 0 \end{pmatrix},    (26)
where a = \sqrt{x^2 + y^2 + 100}, b = \sqrt{(70\sqrt{3} - x)^2 + y^2 + 100} and c = \sqrt{(35\sqrt{3} - x)^2 + (105 - y)^2 + 100} are the cable lengths (substituted for ease of use).
Unfortunately, due to time constraints, I have been unable to apply the routines written earlier to this physics problem. Had I had time, I would have used the LU decomposition method to solve equation 26, simply because the code for this routine is marginally simpler. Premultiplying both sides of equation 26 by the inverse of the coefficient matrix would allow the tensions in each wire to be calculated. I would then have asked the routine to compute the tension in one of the wires at many points over the pitch, varying the coordinates in increments of, say, ten centimetres. This information could then be represented as a topographic map of the stadium. I am disappointed that my time management is such that I will not be able to complete this problem, as I feel I was progressing well with the exercise.
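Although the report stops here, the sketch below illustrates the approach just described: solving equation 26 at a single camera position using GSL's LU routines. The value of g, the test position (taken as the pitch centre implied by the geometry of figure 3) and all names are my assumptions, not the author's code.

#include <math.h>
#include <stdio.h>
#include <gsl/gsl_linalg.h>

int main(void)
{
    const double g = 9.81;                        /* assumed value of g */
    const double x = 35.0 * sqrt(3.0), y = 35.0;  /* assumed pitch centre */

    /* Cable lengths a, b, c as defined after equation 26. */
    double a = sqrt(x * x + y * y + 100.0);
    double b = sqrt(pow(70.0 * sqrt(3.0) - x, 2) + y * y + 100.0);
    double c = sqrt(pow(35.0 * sqrt(3.0) - x, 2) + pow(105.0 - y, 2) + 100.0);

    /* Coefficient matrix of equation 26, row major. */
    double m[9] = { 10.0 / a, 10.0 / b, 10.0 / c,
                    -x / a, (70.0 * sqrt(3.0) - x) / b, (35.0 * sqrt(3.0) - x) / c,
                    -y / a, -y / b, (105.0 - y) / c };
    double rhs[3] = { 50.0 * g, 0.0, 0.0 };

    gsl_matrix_view M = gsl_matrix_view_array(m, 3, 3);
    gsl_vector_view C = gsl_vector_view_array(rhs, 3);
    gsl_vector *t = gsl_vector_alloc(3);
    gsl_permutation *p = gsl_permutation_alloc(3);
    int signum;

    gsl_linalg_LU_decomp(&M.matrix, p, &signum);
    gsl_linalg_LU_solve(&M.matrix, p, &C.vector, t);   /* solve for tensions */

    for (int i = 0; i < 3; i++)
        printf("T%d = %f N\n", i + 1, gsl_vector_get(t, i));

    gsl_permutation_free(p);
    gsl_vector_free(t);
    return 0;
}

Looping x and y over a 10 cm grid and recording one component of t would then produce the tension map described above.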
References

[1] Linear Algebra: A Modern Introduction, D. Poole, Third Edition, Cengage Learning, 4, 277-280, (2011).

[2] Online Matrix Calculator (with inversion functionality), http://www.bluebit.gr/matrix-calculator/, (accessed 14/02/2014).

[3] Programming C# 4.0, I. Griffiths, M. Adams, J. Liberty, Sixth Edition, O'Reilly Media Inc., 2, 32, (2010).

[4] Numerical Recipes: The Art of Scientific Computing, W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery, Third Edition, Cambridge University Press, 2, 49-75, (2007).

[5] GNU Scientific Library (GSL), http://www.gnu.org/software/gsl/, version 1.16, (accessed 14/02/2014).