On the Numerical Solution of
Differential Equations
Kyle Poe, University of the Pacific
January 7, 2019
This report is written to fulfill the requirements of ENGR 219, Numerical Methods
Contents

1 Introduction
2 The Runge-Kutta Method of Solution to Ordinary Differential Equations
  2.1 Example solution of a system of differential equations using the classical RK4 method (Problem 25.26)
3 Stiffness and Multistep Methods
  3.1 Stiffness and Stability
  3.2 Multistep Methods
  3.3 Error Estimates
  3.4 Variable Step-Size ODE Solvers
4 Boundary Value Problems, Finite Differences, and Eigenvalues
  4.1 Boundary Value Problems
    4.1.1 The Shooting Method
  4.2 Finite-Difference Methods
  4.3 Eigenvalue Problems
    4.3.1 An exposition on linear algebra
  4.4 Eigenvalue Solution Methods
    4.4.1 The Polynomial Method
    4.4.2 The Power Method
    4.4.3 The QR Algorithm
  4.5 MATLAB Solvers
5 Partial Differential Equations: The Finite Difference Approach
  5.1 Application of Finite Differences to PDEs
  5.2 Boundary Conditions
    5.2.1 Dirichlet Conditions
    5.2.2 Neumann Conditions
    5.2.3 The Control-Volume Approach
  5.3 Partial Discretization
    5.3.1 Parabolic PDEs
    5.3.2 The Eigenvalue Method for Time-Dependent Partially-Discretized PDEs
    5.3.3 Implementation of Boundary Conditions
    5.3.4 Hyperbolic PDEs
  5.4 Full Discretization
    5.4.1 Explicit Methods
    5.4.2 Implicit Methods
6 The Finite Element Method
  6.1 General Approach
  6.2 One-Dimensional Application to Elliptic ODEs
  6.3 Finite Element Methods in Higher Dimensions
7 Afterword
1 Introduction

In this report, I will address the topics broached in ENGR 219 as listed on the syllabus, discussing each topic and, where applicable, how it may be applied to the solution of problems in semiconductor physics and engineering, starting with the development and justification of the Runge-Kutta method. As a general rule, I try to develop things as mathematically consistently as I can, and use
big-O notation and tensor operators as well as other mathematical objects where applicable. I pay
particular attention to the development of the finite element method, the theory of eigenvalues and
eigenvectors, and the generalization of finite-element schemes to N dimensional systems, although
I only treat up to 2-dimensional systems here. The general outline of this work is based upon the
exposition in Chapra and Canale’s Numerical Methods for Engineers, and I reference it frequently
as Chapra’s book or simply Chapra.
It is all too likely that I may eventually return to this report and adapt it into a crash course on numerical methods. While I based the contents of this report on Chapra's book, I found that my philosophy differed from his in a few key respects with regard to the development of the methods contained within, and I did my best to develop things based on my understanding of the mathematical subtleties not treated in his work (which is not to say that I feel prepared to undertake that kind of journey as an undergraduate!).
As a disclaimer, I do my best to reference outside work in my derivations and to give credit to Chapra where necessary. This is not intended to be a comprehensive discussion of numerical methods, as many steps that would be useful in pedagogical applications are omitted or treated with haste.
In the unlikely event that someone other than Dr. Gary Litton stumbles across this report,
then, please understand that this was written under an intense time crunch and there are likely to
be mistakes. All the same, I hope you find it to be interesting.
Honor Code Statement In accordance with University of the Pacific’s honor code, I certify
that this is my own work, and I do my best to attribute appropriate authorship to all works ref-
erenced within. Furthermore, this work was performed independently, without outside assistance.
Kyle Poe, 2018
2 The Runge-Kutta Method of Solution to Ordinary Differential Equations
One of the most fundamental problems in engineering is the solution to ordinary differential equa-
tions (ODEs) without simple closed-form solutions. Often, it is far more efficient to pursue an
approximation using numerical methods than to attempt an exact solution. The go-to class of
solution approaches for these equations is the Runge-Kutta (RK) family of methods, named after German mathematicians Carl Runge and Martin Kutta. These methods are termed iterative methods, as they take the form
$$y_{i+1} = y_i + f(x_i, y_i, h)$$
and thus iteration $i+1$ depends on the previous iteration $i$. The function which relates each step is of the form $f(x, y, h) = \phi(x, y, h) \cdot h$, where $h = \Delta x$ is some small perturbation of the system's independent variable $x$. Optimally, $\phi(x, y, h)$ is defined such that $f$ is the difference between adjacent points $y_i$, $y_{i+1}$:
• Where $y : \mathbb{R} \to \mathbb{R}$ is a univariate scalar valued function,
$$\phi(x, y, h) \approx \frac{1}{h}\int_x^{x+h} y'\, dx$$
where $y' = \frac{dy}{dx}$ denotes the derivative.

• Where $y : \mathbb{R}^n \to \mathbb{R}$ is a scalar valued functional,
$$\phi(x, y, h) \approx \frac{1}{h}\int_x^{x+h} \nabla y \cdot \frac{d\alpha}{dx}\, dx$$
where $\nabla y$ denotes the gradient and $\alpha \in \mathbb{R}^n$ denotes the vector of parameters $\alpha_1(x), \alpha_2(x), \cdots$

• Where $y : \mathbb{R}^n \to \mathbb{R}^m$ is a vector field,
$$\phi(x, y, h) \approx \frac{1}{h}\int_x^{x+h} D(y)\, \frac{d\alpha}{dx}\, dx$$
where $D(y)$ denotes the Jacobian matrix of $y$.
The RK methods are defined such that this function $\phi$ may be written
$$\phi = \sum_{i=1}^{N} a_i k_i$$
for an $N$th order method, where $a \in \mathbb{R}^N$. The $k_i$'s are related by the recurrence relation
$$k_1 = f(x_i, y_i), \qquad k_n = f\left(x_i + p_{n-1}h,\ y_i + \sum_{j=1}^{n-1} q_{n-1,j}\,k_j h\right)$$
where $f = \frac{dy}{dx}$, $p \in \mathbb{R}^{N-1}$, and $q \in \mathbb{R}^{(N-1)\times(N-1)}$ is lower triangular. This recurrence relationship
makes RK methods very computationally efficient, as each $k_i$ only needs to be calculated once and stored per step. As the actual coefficients involved are derived from $N$th degree Taylor approximations, the local error for an $N$th degree method is $O(h^{N+1})$ and the global error is $O(h^N)$. To compute the solution to systems of equations using RK methods, define
$$f(x, y) = \begin{bmatrix} g_1(x, y) \\ g_2(x, y) \\ \vdots \\ g_n(x, y) \end{bmatrix}$$
Well known examples of RK methods include the following:

Name              Order   a                      p               q
Euler's Method    1st     [1]                    N/A             N/A
Heun's Method     2nd     [0.5, 0.5]             [1]             [1]
Midpoint Method   2nd     [0, 1]                 [0.5]           [0.5]
Ralston's Method  2nd     [1/3, 2/3]             [3/4]           [3/4]
Classical RK4     4th     [1/6, 1/3, 1/3, 1/6]   [1/2, 1/2, 1]   [1/2 0 0; 0 1/2 0; 0 0 1]
2.1 Example solution of a system of differential equations using the classical RK4 method (Problem 25.26)
From Numerical Methods for Engineers by Chapra, the last problem in chapter 25 details the
following coupled differential equations
$$m_1 \frac{d^2 x_1}{dt^2} = m_1 g + k_2(x_2 - x_1) - k_1 x_1$$
$$m_2 \frac{d^2 x_2}{dt^2} = m_2 g + k_3(x_3 - x_2) + k_2(x_1 - x_2)$$
$$m_3 \frac{d^2 x_3}{dt^2} = m_3 g + k_3(x_2 - x_3)$$
to represent linked bungee jumpers, with parameters $m_1 = 60$ kg, $m_2 = 70$ kg, $m_3 = 80$ kg, $k_1 = k_3 = 50$ N/m, $k_2 = 100$ N/m, $g = 9.81$ m/s$^2$, and initial conditions $x_i(0) = \dot{x}_i(0) = 0$. To solve this, we first rewrite the system as a system of first order differential equations by considering the substitution $\frac{dx_i}{dt} = \dot{x}_i$. As it turns out, this is a linear system of equations, so we could fairly easily find an exact solution using linear algebra, but we shall proceed as if it is not to demonstrate the utility of RK methods:
$$\frac{d}{dt}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ (m_1 g + k_2(x_2 - x_1) - k_1 x_1)/m_1 \\ (m_2 g + k_3(x_3 - x_2) + k_2(x_1 - x_2))/m_2 \\ (m_3 g + k_3(x_2 - x_3))/m_3 \end{bmatrix}$$
In MATLAB, this is implemented using the following code:
% Define parameters
m = {60,70,80};   % Masses
k = {50,100,50};  % Spring constants
g = 9.81;         % Gravitational constant

% Coupled differential equations of the system
f = @(t,x) [x(4); ...
            x(5); ...
            x(6); ...
            (m{1}*g + k{2}*(x(2)-x(1)) - k{1}*x(1))/m{1}; ...
            (m{2}*g + k{3}*(x(3)-x(2)) + k{2}*(x(1)-x(2)))/m{2}; ...
            (m{3}*g + k{3}*(x(2)-x(3)))/m{3}];

% Set time interval of interest
tspan = [0 30];

% Set initial position vector
x0 = zeros(6,1);

% Solve system
[t,x] = RK4_driver(f,tspan,1000,x0);

function [t,x] = RK4_driver(f,tspan,N,x0)
% This function acts as the main handle to the Runge-Kutta method by:
% - Initializing the time vector
% - Initializing the position matrix
% - Calling the "step" function for each time step

% Time vector
t = linspace(tspan(1),tspan(2),N);

% Small perturbation of time, "h"
dt = t(2)-t(1);

% Initialization of position data matrix
x = zeros(length(x0),N);

% Setting initial value
x(:,1) = x0;

% Run the system
for i = 2:N
    x(:,i) = RK4_step(f,t(i-1),x(:,i-1),dt); % each iteration
end
end

function [x_new] = RK4_step(f, t, x, h)
% This function does the brunt of the work, by setting up parameters
% unique to the classical RK4 method and computing each step

% "k" coefficients
a = [1 2 2 1]'/6;

% Time difference weights
p = [0.5 0.5 1]';

% k-dependency coefficients
Q = diag(p);

% Initialize k matrix
k = [];

% Perform the step
for i = 1:4
    if i == 1
        k(:,i) = f(t,x);
    else
        k(:,i) = f(t + p(i-1)*h, x + h*k*Q(1:(i-1),i-1));
    end
end

% Return new value
x_new = x + k*a*h;
end
As output, this gives us a nice view of the system behavior (open in Adobe Reader for the animation).

[Figure: animation of the linked bungee jumpers.]
3 Stiffness and Multistep Methods
3.1 Stiffness and Stability
Often when dealing with the numerical solution to differential equations, particular equations will display an extreme sensitivity to the step size h chosen. While there does not seem to be a precise definition of "stiffness", the description adopted by Chapra is "A stiff system is one involving rapidly changing components together with slowly changing ones". The components that Chapra alludes to are the eigenvalues of the differential equation. While eigenvalues will be discussed more in the next section, suffice it to say that if a system has eigenvalues significantly less than zero, or if the ratio between the largest and smallest eigenvalue is large, then the system is said to be stiff. With this said, this leaves the concept unclear for the subject of nonlinear differential equations, although the Lyapunov exponent does seem to hold some answers if one desired to venture substantially further than Chapra's text.
Consider the ordinary differential equation $y' = -1000y$. It is clear by inspection that the only eigenvalue of the system is $\lambda = -1000$, for the eigenfunction $v = e^{\lambda t}$. If Euler's method is applied in the solution of this equation, then we find $y_{i+1} = y_i - 1000 y_i h = y_i(1 - 1000h)$. More generally, at step $n$, we have
$$y_n = y_0(1 - 1000h)^n$$
We note here that for a positive $h$, clearly the change between subsequent steps should decrease in magnitude. Therefore, this method will only be stable for $|1 - 1000h| < 1$. For more general differential equations with eigenvalue $\lambda$,
$$|1 + \lambda h| < 1, \qquad \lambda \in \mathbb{C}$$
defines the stability domain $\{h\lambda\} = D$ for which Euler's method will be stable, in this case given as a circle in the complex plane of radius 1 centered at $\lambda h = -1$. For $n$th order differential equations, it becomes clear that the limiting factor on stability of a method with such a limited stability domain is determined by the most negative eigenvalue for a fixed $h > 0$.
Due to the limited stability domain of Euler's method, investigation of methods which are better equipped to handle stiff ODEs should be undertaken. Such methods are deemed "A-stable", meaning the stability domain contains the subset $\{h\lambda \in \mathbb{C} : \mathrm{Re}(\lambda) < 0\} \subseteq D$. What this means is that for any choice of $h$, assuming $\mathrm{Re}(\lambda) < 0$ (most interesting cases), the method will be stable. One example of such a method is the backward Euler method, for which we have
$$y_{i+1} = y_i + f(x_{i+1}, y_{i+1})h \implies y_{i+1}(1 - h\lambda) = y_i \implies y_n = \frac{y_0}{(1 - h\lambda)^n}$$
Since $\mathrm{Re}(h\lambda) < 0$, $\lim_{n \to \infty} y_n = 0$ and thus the method is A-stable.¹
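To make the stability contrast concrete, the following is a minimal MATLAB sketch of my own (the step size h = 0.003 is an arbitrary choice placing hλ just outside Euler's stability circle):

% Stability demo (illustrative sketch): y' = -1000y, y(0) = 1
lambda = -1000; h = 0.003; nSteps = 20;
yf = 1; yb = 1;                        % forward and backward Euler states
for n = 1:nSteps
    yf(n+1) = yf(n)*(1 + lambda*h);    % |1 + lambda*h| = 2, so this diverges
    yb(n+1) = yb(n)/(1 - lambda*h);    % decays for any h > 0 (A-stable)
end
semilogy(0:nSteps, abs(yf), 0:nSteps, abs(yb))
legend('Forward Euler','Backward Euler'), xlabel('step'), ylabel('|y_n|')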
3.2 Multistep Methods
While many of the Runge-Kutta methods discussed until now are very effective, they do not utilize
previous points in their calculation, and thus discard much of the information from the trajectory of
the solution. These methods which utilize retrospect to great effect are termed multistep methods;
appropriately, as they utilize information from multiple steps in their calculations.
¹ For more on this description, visit https://math.oregonstate.edu/~restrepo/475B/Notes/sourcehtml/node17.html
Broadly, these methods may be grouped into two categories: Newton-Cotes and Adams methods. Newton-Cotes methods are some of the "most common" methods for solving ODEs, and operate by using information about previous points to estimate the next point, generally through polynomial interpolation. Each category comes in two flavors, open and closed: "open" methods use information from the inside of a given interval to estimate the integral over that interval, whereas "closed" methods incorporate information from the boundary, or closure, of the interval.
First, we will discuss a particular open Newton-Cotes formula, the non-self-starting Heun method. As discussed by Chapra, one issue with the Heun method is that the predictor step is based on the Euler method and thus is of $O(h^2)$, while the corrector step is based on the trapezoidal method and is of $O(h^3)$. To increase the efficiency of the method, a midpoint rule may be used instead for the predictor step, by having $y_{i+1} = y_{i-1} + 2hf(x_i, y_i)$. This is the so-called "non-self-starting Heun" method, named for its inability to start itself without some starting value for $y_{i-1}$. As a case study, consider the separable differential equation $\frac{dy}{dx} = x - 3$. In theory, the original Heun method should exhibit a small degree of error in implementing this, but the non-self-starting method should have zero error if supplied a correct $y_{-1}$, since the Taylor expansion of $\frac{dy}{dx}$ has no terms past $h^2$.
%% Open Newton-Cotes Demonstration
a = @(x,y) x - 3;          % Differential equation
f = @(x) 1/2*x.^2 - 3*x;   % Exact solution

[xSS,ySS] = selfStartingHeun(a,[0 5],0.2,0);
[xNS,yNS] = nonSelfStartingHeun(a,[0 5],0.2,0,f(0.2));

figure, hold on
plot(xSS,f(xSS) - ySS,'LineWidth',2)
plot(xNS,f(xNS) - yNS,'LineWidth',2)
ylim([-0.7 0.1])
legend('Self starting','Non-Self Starting')
xlabel('x')
ylabel('Error')

function [t,x] = selfStartingHeun(a,tspan,h,x0)
t = tspan(1):h:tspan(2);
N = length(t);
x = zeros(1,N);
x(1) = x0;
for i = 2:(N-1)
    xtmp = x(i) + h*a(t(i),x(i));
    x(i+1) = x(i) + h*(a(t(i),x(i)) + a(t(i)+h,xtmp))/2;
end
end

function [t,x] = nonSelfStartingHeun(a,tspan,h,x0,x1)
t = tspan(1):h:tspan(2);
N = length(t);
x = zeros(1,N);
x(1) = x0;
x(2) = x1;
for i = 2:(N-1)
    xtmp = x(i-1) + 2*h*a(t(i),x(i));
    x(i+1) = x(i) + h*(a(t(i),x(i)) + a(t(i)+h,xtmp))/2;
end
end
Figure 1: Error for the self-starting and non-self-starting Heun methods for a simple polynomial differential equation. The non-self-starting method appears to integrate exactly.
The closed Newton-Cotes formulas are based on a closed integral approximation. The book uses the trapezoidal/Simpson's 1/3 rule for this, but other more complicated (and perhaps occasionally useful) methods do exist.
Next, we discuss Adams formulas, which are derived not by integrating across previous points but through a forward Taylor expansion about the point of interest. The open variety, often called Adams-Bashforth formulas, may be generally written as
$$y_{i+1} = y_i + h\sum_{k=0}^{n-1} \beta_k f_{i-k} + O(h^{n+1})$$
whereas the Adams-Moulton methods may be written
$$y_{i+1} = y_i + h\sum_{k=0}^{n-1} \beta_k f_{i+1-k} + O(h^{n+1})$$
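As a concrete instance, taking $n = 2$ with the standard coefficients $\beta_0 = 3/2$, $\beta_1 = -1/2$ gives the second-order Adams-Bashforth formula $y_{i+1} = y_i + h(3f_i - f_{i-1})/2$; the sketch below (my own, seeded with a single Euler step) applies it to the test equation $y' = -y$:

% Second-order Adams-Bashforth sketch (illustrative)
f = @(x,y) -y;                    % test equation with exact solution e^{-x}
h = 0.1; x = 0:h:5; N = length(x);
y = zeros(1,N); y(1) = 1;
y(2) = y(1) + h*f(x(1),y(1));     % one Euler step to start the multistep method
for i = 2:N-1
    y(i+1) = y(i) + h*(3*f(x(i),y(i)) - f(x(i-1),y(i-1)))/2;
end
max(abs(y - exp(-x)))             % global error, O(h^2)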
The concept of stability domains broached in the first part of this section may be applied to all
multistep methods. For a more general description of multistep methods than that broached in
Chapra, see here.
3.3 Error Estimates
Most multistep methods rely on what may be formulated as predictor and corrector steps. When they have the same order, the local truncation error may be easily estimated by equating each method's error term. For example, for the non-self-starting Heun's predictor and corrector steps, the error is given by
$$E_p = \frac{1}{3}h^3 f''(\xi_p)$$
$$E_c = -\frac{1}{12}h^3 f''(\xi_c)$$
Asserting $\xi_p = \xi_c = \xi$, we may approximate the local truncation error by equating the expressions for the actual value²
$$y_{i+1}^m + E_c = y_{i+1}^0 + E_p = y_{i+1}^0 - 4E_c \implies E_c = \frac{y_{i+1}^0 - y_{i+1}^m}{5}$$
We may then design methods which iterate the corrector step until a desired tolerance is reached, using this error estimate as a criterion. Perhaps more commonly, it could be used to indicate when the step size should be altered. Alternatively, it may be used to modify the solution at a point.
3.4 Variable Step-Size ODE Solvers
Often, it is not advisable to create a solver which relies on a fixed step size, for several reasons: foremost, without the ability to change the step size, a step size must somehow be chosen with purpose. Often this just defaults to choosing a very small step size to maximize accuracy, but that can come at great computational cost.
Rather, a solver with variable step size will often be chosen. To decide if the step size should
change, a few factors are at play:
• Does the corrector reach the defined tolerance, such that $\left|\frac{y_{i+1}^m - y_{i+1}^{m-1}}{y_{i+1}^{m-1}}\right| < \varepsilon$ within a reasonable number of iterations?
• Once the tolerance is met, is the estimated error of the accepted value under the threshold $E_{max}$?
• By how much should the step size change?
Adaptive Heun Method Here I implement an adaptive Heun solver with variable step size:
² The book troubles me here in its derivation. This is not negative, based on the equations it provides.
Figure 2: Comparison of exact solution to adaptive Heun solution of the differential equation $\dot{x} = f(t, x) = -x$ with initial condition $x_0 = 10$.
%% Adaptive Heun Method Demo

[t,x] = heunMethod(@(t,x) -x,[0 10],10);

function [t,x] = heunMethod(f,tspan,x0)
% Initial time
t = tspan(1);

% Initial position
x = x0;

% Initialization of initial step size
h = dot(tspan,[-1 1])/8;

% Iterate procedure
while t(end) < tspan(2)
    % Ensures that the last time step is correct length
    if (tspan(2) - t(end)) < h
        h = tspan(2) - t(end);
    end

    % Takes a step of the method
    [tNext,xNext] = heunStep(f,t(end),x(end),h);

    % Concatenates output to initial conditions
    x = [x xNext];
    t = [t tNext];
end
end

function [tNext,xNext] = heunStep(f,t,x,h)

% Tolerance of the corrector step
correctorTol = 1e-10;

% Maximum number of iterations of the corrector step
maxCorrector = 100;

% Error tolerance for a single step of the method
stepTol = 1e-5;

% Computes predictor step
xPredict = x + h*f(t,x);
xCorrect = xPredict;

% Iterates corrector
for i = 1:maxCorrector
    xPrev = xCorrect;

    % Computes corrector
    xCorrect = x + h*(f(t,x) + f(t+h,xCorrect))/2;

    % Measures iteration convergence
    E_iter = abs((xCorrect - xPrev)/xPrev);

    % Break when convergence has been achieved
    if E_iter < correctorTol
        break
    elseif i == maxCorrector
        error(['Maximum iteration count for corrector step met, ',...
               sprintf('Iteration error = %.2d',E_iter)])
    end
end

% Measures error of convergent corrector step
E_corrector = (xPredict - xCorrect)/5;

% If the error is within tolerance, accepts the value
if abs(E_corrector) <= stepTol
    tNext = t + h;
    xNext = xCorrect + E_corrector;

% If the error is not within tolerance, recurse with step size h = h/2
else
    [t1,x1] = heunStep(f,t,x,h/2);
    [t2,x2] = heunStep(f,t1(end),x1(end),h/2);
    tNext = [t1, t2];
    xNext = [x1, x2];
end
end
4 Boundary Value Problems, Finite Differences, and Eigenvalues
4.1 Boundary Value Problems
It is a well known fact that solutions to $n$th order differential equations are uniquely constrained by $n$ parameters. Often, these are initial values; for some second order differential equation, for instance, you are given $y(0)$ and $y'(0)$. By contrast, some ODEs are prescribed multiple values at different points. One case of this is the boundary value problem (BVP). Such problems prescribe information about the endpoints, usually in the same degree. Three examples of boundary conditions are:
• Dirichlet Conditions: The explicit values at the endpoints are given. An example would
be a rod with fixed temperature at either end.
• Neumann Conditions: Information about the first derivative is given. An example would
be a rod insulated at the ends.
• Robin Conditions: A mix of Dirichlet and Neumann. Typically the worst to deal with.
There are a few different ways that these kinds of problems may be solved. First, let us examine
the shooting method.
4.1.1 The Shooting Method
At its heart, the shooting method is a recasting of the root-finding problem. Given some informa-
tion about the solution, i.e. one endpoint, an approximate solution is obtained of the form y(x, a)
for choice of parameter a. The task is then to determine the appropriate value of a such that
$$y(x_b, a) - y_b = 0$$
For a linear ODE, this is simply done by making two guesses, and then interpolating to find the correct solution
$$a = \frac{y_b - y(x_b, a_1)}{y(x_b, a_2) - y(x_b, a_1)}(a_2 - a_1) + a_1$$
In practice, this $a$ parameter naturally arises as the constant of integration for a second-order ODE.
For nonlinear ODEs, the problem is a little more difficult, but one can look to the root-finding
literature: the Newton-Raphson method, the method of bisection, and polynomial interpolation
are all viable candidates. Since this is not the focus of this chapter, I will omit an example for the
time being.
4.2 Finite-Difference Methods
The first weapon in our arsenal for dealing with more gnarly differential equations is discretiza-
tion. The simplest form of discretization of a system is in finite differences, where the domain is
partitioned into chunks such that
$$x_i = i \cdot \Delta x, \qquad y(x_i) \to y_i$$
While not all finite difference schemes are partitioned evenly, it is simplest to do so.
To develop the underlying theory of finite differences, we start with a small perturbation of $y$ in the forward direction, which we will call $\Delta_+ y$:³
$$\Delta_+ y = y(x + h) - y(x)$$
Using a forward Taylor expansion, we may expand this to
$$\Delta_+ y = y + y'h + \frac{1}{2}y''h^2 + O(h^3) - y = y'h + \frac{1}{2}y''h^2 + O(h^3)$$
We therefore find that the derivative may be expressed as
$$y' = \frac{\Delta_+ y}{h} + O(h)$$
Alternatively, we may formulate this in terms of a backward difference operator $\Delta_-$:
$$\Delta_- y = y(x) - y(x - h) = y - \left(y - y'h + \frac{1}{2}y''h^2 + O(h^3)\right) = y'h - \frac{1}{2}y''h^2 + O(h^3)$$
$$\therefore\ y' = \frac{\Delta_- y}{h} + O(h)$$
Adopting the language introduced at the beginning of this section, we may assert
$$\Delta_+ y(x_i) \to (\Delta_+ y)_i = y_{i+1} - y_i$$
$$\Delta_- y(x_i) \to (\Delta_- y)_i = y_i - y_{i-1}$$
In this way, we have come up with an estimate for the derivative by utilizing points in our discretization. By equating these values, we may develop insight into the second derivative without any information required about the first:
$$\frac{\Delta_- y}{h} + \frac{1}{2}y''h + O(h^2) = \frac{\Delta_+ y}{h} - \frac{1}{2}y''h + O(h^2)$$
$$y'' = \frac{1}{h}\left(\frac{\Delta_+ y}{h} - \frac{\Delta_- y}{h}\right) + O(h) = \frac{y(x + h) - 2y(x) + y(x - h)}{h^2} + O(h)$$
Which in discrete form may be approximated as
$$(y'')_i \approx \frac{1}{h^2}(y_{i+1} - 2y_i + y_{i-1}) = \frac{1}{h^2}\begin{bmatrix} 1 & -2 & 1 \end{bmatrix}\begin{bmatrix} y_{i-1} \\ y_i \\ y_{i+1} \end{bmatrix}$$

³ This notation is borrowed from A First Course in the Numerical Analysis of Differential Equations, by Arieh Iserles.
This is a powerful result, for many reasons. Foremost, it suggests that if a system is discretized in this way, then the second derivative operator may be recast as an $n \times n$ matrix of the following form, which we will call $L$:
$$L = \frac{1}{h^2}\begin{bmatrix} -2 & 1 & 0 & \cdots & 0 \\ 1 & -2 & 1 & \cdots & 0 \\ 0 & 1 & -2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & -2 \end{bmatrix}$$
By making this substitution, we may rewrite ordinary differential equations involving the second derivative as a linear system of algebraic equations.
Example: 1D Poisson's Equation To provide an example of a problem from physics which may be solved with finite differences, we look to Poisson's equation. In 1D, it takes on the form
$$\frac{d^2 V}{dx^2} = -\frac{Q}{\epsilon}$$
Where $V(x)$ is the electric potential, $Q(x)$ is the charge distribution, and $\epsilon$ is the dielectric constant of the medium. A common problem that arises in semiconductor physics is when the carrier distribution and thus the charge distribution is known, but the potential is not. For example, if we consider an imaginary semiconductor with known dielectric constant and charge density, we may solve for the potential by numerical inversion.
The only trouble not yet addressed is that of incorporating the boundary conditions. Here we will consider two cases: one where the voltage difference across the semiconductor is fixed (Dirichlet) and one where the electromotive force is fixed (Neumann).

For both cases, we know information about the boundary; that is to say, the points immediately off of our domain of interest. If $V_0$ and $V_N$ are the potential at the endpoints of our semiconductor, we have information about $V_{-1}$ and $V_{N+1}$. For Dirichlet conditions, those are prescribed directly. From Poisson's equation, $V_0$ is given by
$$\frac{1}{h^2}(V_1 - 2V_0 + V_{-1}) = -\frac{Q_0}{\epsilon}$$
In this case, we assert $V_{-1} = V_A$ and then have
$$\frac{1}{h^2}(V_1 - 2V_0) = -\frac{Q_0}{\epsilon} - \frac{V_A}{h^2}$$
Similarly, for the other boundary, we have
$$\frac{1}{h^2}(-2V_N + V_{N-1}) = -\frac{Q_N}{\epsilon} - \frac{V_B}{h^2}$$
In summary, we now have a system of equations with the appearance
$$LV = -\frac{Q}{\epsilon} - \frac{1}{h^2}(V_A e_0 + V_B e_N)$$
Where $e_0$ and $e_N$ are unit vectors with a 1 at the subscript index and 0 everywhere else. Alternatively, we may directly set the value at the boundary by replacing the first and last lines of the system with $V_0 = V_A$ and $V_N = V_B$, which may produce cleaner results at the cost of directly altering the Laplacian of the system.
In the Neumann case, the prescribed data manifests as a fixed electric field, as $E = -\nabla V = -\frac{dV}{dx}$. Recalling that the derivative may be expressed as a quotient of a difference and the step size $h$, here we opt to describe the first derivative with a central difference. The reason this is useful quickly manifests in the derivation of the formula
$$y_{i+1} - y_{i-1} = \left(y_i + hy_i' + \frac{1}{2}y_i''h^2 + O(h^3)\right) - \left(y_i - hy_i' + \frac{1}{2}y_i''h^2 + O(h^3)\right) = 2hy_i' + O(h^3)$$
$$\therefore\ y_i' = \frac{y_{i+1} - y_{i-1}}{2h} + O(h^2)$$
which has error on the order of $h^2$ as opposed to $h$ for the forward and backward difference quotients. For endpoint A, we prescribe $V_0' = -E_A$. Working from the derivative equation and dropping the error term gives
$$-E_A = \frac{V_1 - V_{-1}}{2h} \implies V_{-1} = V_1 + 2hE_A$$
which may then be substituted into Poisson's equation at the endpoint to find
$$\frac{1}{h^2}(V_1 - 2V_0 + V_1 + 2hE_A) = -\frac{Q_0}{\epsilon} \implies \frac{2}{h^2}(V_1 - V_0) = -\frac{Q_0}{\epsilon} - \frac{2}{h}E_A$$
and an analogous expression for the other endpoint. One should take careful note of the fact that the Laplacian has been modified in this case, where $L_{1,2}$ and $L_{N,N-1}$ are $2/h^2$ instead of $1/h^2$. We note this fact by $L \to L^*$. The system of equations becomes, similarly to the Dirichlet case,
$$L^* V = -\frac{Q}{\epsilon} - \frac{2}{h}(E_A e_0 + E_B e_N)$$
Consider a strange semiconductor where the charge distribution is Gaussian and positive, the right end is at ground, and the electric field has a value of $E_A$ at the left end. We may calculate the potential with the following code:
%% Semiconductor Potential

E_A = -8;   % J/umC
V_B = 0;    % J/C (V)
L = 5;      % um
Q_0 = 10;   % C
eps = 1;

% Charge distribution
Q = @(x) Q_0 * exp(-(x - L/2).^2/(L/4));

% Initialize discretization with 100 points
x = linspace(0,L)';

% Extract step size
h = x(2) - x(1);

% Extract total number of points (100 by default)
N = length(x);

% Construction of the Laplacian
Lap = 1/h^2*toeplitz([-2 1 zeros(1,N-2)]);

% Modification for Neumann boundary condition
Lap(1,2) = 2/h^2;

% Modification for Dirichlet boundary condition
Lap(N,:) = [zeros(1,N-1) 1];

% Right hand side with boundary conditions
rhs = -Q(x)/eps - [2/h*E_A;...           % Neumann bound
                   zeros(N-2,1);...      % Intermediate points
                   -Q(x(end))/eps-V_B];  % Dirichlet bound

% Solution of the system
V = Lap\rhs;

% Plotting
plot(x,V,'LineWidth',2), hold on
plot(x,-E_A*x + V(1),'--')
xlabel('x (\mum)')
ylabel('Potential (V)')
legend('Potential','V_x(0)=-E_A')
Figure 3: Solution to Poisson's equation with mixed boundary conditions (top: the potential $V(x)$ and the reference line with slope $-E_A$; bottom: the charge distribution $Q(x)$).
4.3 Eigenvalue Problems
One of the most important ideas in all of the study of differential equations is that of eigenvalues
and eigenvectors. I will now foray briefly into eigen-theory.
4.3.1 An exposition on linear algebra
Consider the following simple differential equation:
$$\frac{d}{dx}y = \lambda y$$
I write it this way instead of the typical $\frac{dy}{dx}$ for an important reason, which is in viewing $\frac{d}{dx}$ as an operator whose action is identical to multiplication by $\lambda$. This is a deeper concept than it may first appear, and it is tied to the theory of linear equations and linear transformations. Without getting too far into it, in this equation $y$ is termed the eigenvector and $\lambda$ the eigenvalue. Strictly speaking, $y$ need not be a vector in the traditional $y \in \mathbb{R}^n$ sense. It could be infinite-dimensional, or even a function (the usual case for differential equations).
To illustrate the parallels between eigenvalues and eigenvectors in differential equations and linear algebra, consider the differential equation
$$\frac{d^2 y}{dx^2} + \lambda^2 y = 0$$
As a second order linear, homogeneous differential equation, we may split this into two first order differential equations by making a substitution
$$\frac{dy}{dx} = v, \qquad \frac{dv}{dx} = -\lambda^2 y$$
Which may then be cast into matrix-vector form
$$\frac{d}{dx}\begin{bmatrix} y \\ v \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\lambda^2 & 0 \end{bmatrix}\begin{bmatrix} y \\ v \end{bmatrix} = M\begin{bmatrix} y \\ v \end{bmatrix}$$
As the simple differential equation $\frac{dy}{dx} = \lambda y$ has its solution $y = e^{\lambda x}y_0$, the solution is given by
$$\begin{bmatrix} y \\ v \end{bmatrix} = \exp(Mx)\begin{bmatrix} A \\ B \end{bmatrix} = A\exp(Mx)e_1 + B\exp(Mx)e_2$$
Evidently, the solutions occupy a two-dimensional vector space with solutions that may be determined by the coordinates $(A, B)$. While this is nice, it is not as simple as it could be. By diagonalizing the matrix $M = VDV^{-1}$, we find that our solution may be rewritten
$$\begin{aligned}
\begin{bmatrix} y \\ v \end{bmatrix} &= V\left(A\exp\left(\begin{bmatrix} i\lambda & 0 \\ 0 & -i\lambda \end{bmatrix}x\right)V^{-1}e_1 + B\exp\left(\begin{bmatrix} i\lambda & 0 \\ 0 & -i\lambda \end{bmatrix}x\right)V^{-1}e_2\right) \\
&= V\exp\left(\begin{bmatrix} i\lambda & 0 \\ 0 & -i\lambda \end{bmatrix}x\right)V^{-1}\begin{bmatrix} A \\ B \end{bmatrix}, \qquad \begin{bmatrix} A_V \\ B_V \end{bmatrix} = V^{-1}\begin{bmatrix} A \\ B \end{bmatrix} \\
&= V\begin{bmatrix} \exp(i\lambda x) & 0 \\ 0 & \exp(-i\lambda x) \end{bmatrix}\begin{bmatrix} A_V \\ B_V \end{bmatrix} = V\begin{bmatrix} A_V\exp(i\lambda x) \\ B_V\exp(-i\lambda x) \end{bmatrix}, \qquad V = [v_1\ v_2] \\
&= A_V\exp(i\lambda x)v_1 + B_V\exp(-i\lambda x)v_2
\end{aligned}$$
By utilizing the fact that $M$ may be diagonalized by switching to its eigenspace, we have changed the basis of the solution space to the eigenvectors of the matrix $M$, transformed our coordinates $A, B$ into the eigenspace coordinates $A_V, B_V$, and written our final answer as a clean linear combination of eigenvectors of the system matrix. We are now equipped to examine the initial value problem. To start, we invoke Euler's identity
$$\begin{bmatrix} y \\ v \end{bmatrix} = A_V(\cos(\lambda x) + i\sin(\lambda x))v_1 + B_V(\cos(-\lambda x) + i\sin(-\lambda x))v_2$$
And simplify based on symmetries of $\sin$ and $\cos$
$$\begin{bmatrix} y \\ v \end{bmatrix} = A_V(\cos(\lambda x) + i\sin(\lambda x))v_1 + B_V(\cos(\lambda x) - i\sin(\lambda x))v_2 \qquad (1)$$
$$= (A_V v_1 + B_V v_2)\cos(\lambda x) + i(A_V v_1 - B_V v_2)\sin(\lambda x) \qquad (2)$$
Applying an example condition that $y(0) = 0$, we find that
$$\begin{bmatrix} 0 \\ v(0) \end{bmatrix} = A_V v_1 + B_V v_2$$
And recalling that $\begin{bmatrix} A_V \\ B_V \end{bmatrix} = V^{-1}\begin{bmatrix} A \\ B \end{bmatrix}$,
$$\begin{bmatrix} 0 \\ v(0) \end{bmatrix} = [v_1\ v_2]\,V^{-1}\begin{bmatrix} A \\ B \end{bmatrix} = \begin{bmatrix} A \\ B \end{bmatrix}$$
Therefore, we find that $A = 0$, $B = v(0)$. Furthermore, we find that
$$\begin{bmatrix} A_V \\ B_V \end{bmatrix} = V^{-1}\begin{bmatrix} 0 \\ v(0) \end{bmatrix} = v(0)\,v_2^{-1}$$
Where $v_2^{-1}$ denotes the second column of $V^{-1}$. With this new information we simplify our solution a bit more, noting that $Vv_2^{-1} = e_2$:
$$\begin{bmatrix} y \\ v \end{bmatrix} = \big(\cos(\lambda x)\,e_2 + i\sin(\lambda x)\,[v_1\ \ {-v_2}]\,v_2^{-1}\big)\,v(0)$$
We therefore may now see that $y$ does not depend on the $\cos(\lambda x)$ term. Therefore, should we posit that some $y(L) = 0$, unless either $v(0) = 0$ (the trivial solution), the matrix-vector term, or $\sin(\lambda L)$ is 0, then validity is lost. To take the simplest way out, let us propose
$$\sin(\lambda L) = 0 \implies \lambda L = n\pi,\ n \in \mathbb{Z}$$
We then have infinitely many possible values for $\lambda$.
Example: Schrödinger's Equation One example of a classic eigenvalue problem is Schrödinger's equation for the particle in a potential. Schrödinger's equation (time-invariant version) manifests in 1D as
$$E\Psi = -\frac{\hbar^2}{2m^*}\frac{d^2\Psi}{dx^2} + V(x)\Psi$$
Where $E$ is the energy of the particle, $|\Psi|^2$ is the probability density for the particle's location, $\hbar$ is Planck's constant, $m^*$ is the effective mass of the particle, and $V(x)$ is the potential.

To solve this equation numerically, we can first cast it in finite differences
$$E\Psi = \left(-\frac{\hbar^2}{2m^*}L + \mathrm{diag}(V)\right)\Psi \implies E\Psi = H\Psi$$
Where $H$ is the matrix of the discretized Hamiltonian. For simplicity, here we assume that the potential $V$ is infinite outside of the domain of consideration, and therefore $\Psi(x) = 0,\ x \notin \{x_i\}$. To solve the wavefunctions $\Psi$ for the potential generated in 4.2.1, we implement the following MATLAB routine:
function [Psi,E] = schrodinger(V,x)
%% 1D Schrodinger Equation Solver

% Definition of Constants
h = 1;
m = 1;

% Definition of Domain
dx = x(2) - x(1);
N = length(x);

% Creation of Second derivative Matrix
r = [-2 1 zeros(1,N-2)];
DD = 1/dx^2 * toeplitz(r,r);

% Solve Eigenvalue Problem
A = -h^2/(2*m)*DD + diag(V);
[Psi,D] = eig(A);
Psi = Psi * diag(1./sqrt(sum(Psi.^2)));  % normalize each eigenvector to unit norm
E = diag(D);
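As a quick sanity check of the routine (my own test, not part of the report's problem set), a harmonic well $V = x^2/2$ should yield energies approaching $E_n = n + 1/2$ for $\hbar = m = \omega = 1$:

% Usage sketch: harmonic test potential on a wide domain
x = linspace(-10,10,400)';
[Psi,E] = schrodinger(0.5*x.^2, x);
E(1:4)'   % should approximate 0.5, 1.5, 2.5, 3.5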
Open in Adobe Reader for a nice animation of all the different wavefunctions!
4.4 Eigenvalue Solution Methods

As much as I would like to explore the particularities of my favorite eigenvalue finder, MATLAB's eig() function, it seems that to see the source code you need to start here. That being said, there are several ways to compute eigenvalues and eigenvectors for a system. Outlined in Chapra's book are the polynomial method and the power method, with Hotelling's method discussed as an aside. What is likely closest to what MATLAB uses, however, is the QR algorithm, for which I will discuss an example.
4.4.1 The Polynomial Method
Let us consider the earlier ODE discussed once again, in its discretized form
$$(L + \lambda^2 I)y = 0$$
Assuming we are not interested in the trivial case where $y = 0$, we may approximate the smallest $N$ eigenvalues of the system by finding the roots of the polynomial
$$\det(L + \lambda^2 I) = 0$$
This is a straightforward method to approximate the first few eigenvalues of a system, as long as
a robust root-finding method may be implemented.
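A minimal MATLAB sketch of the idea (my own; it uses the built-ins poly() and roots(), and deliberately a small matrix, since the characteristic polynomial becomes badly conditioned as the matrix grows):

% Polynomial-method sketch for (L + lambda^2 I)y = 0 on [0,1]
N = 10; h = 1/(N+1);
L = toeplitz([-2 1 zeros(1,N-2)])/h^2;   % 1D discrete Laplacian, Dirichlet ends
c = poly(L);                              % coefficients of det(L - mu*I)
mu = roots(c);                            % roots are the eigenvalues mu = -lambda^2
lambda = sort(sqrt(-mu));
lambda(1:3)'                              % should approach pi, 2*pi, 3*pi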
4.4.2 The Power Method
The power method is a very functional way to determine the largest eigenvalue and corresponding
eigenvector of a system. The way it works is by iterating the matrix transformation of interest on
a particular vector, for example some random vector p, and repeatedly normalizing that vector,
and recording the magnitude after each iteration as an estimate for the eigenvalue.
$$Ap_i = \tilde{p}_{i+1}, \qquad \lambda_{i+1} = \|\tilde{p}_{i+1}\|, \qquad p_{i+1} = \frac{\tilde{p}_{i+1}}{\|\tilde{p}_{i+1}\|}$$
To get a glimpse into why this works, assuming the matrix $A$ is diagonalizable, we may write it as $A = VDV^{-1}$ ($V$ is a matrix of eigenvectors and $D$ is diagonal, comprised of the eigenvalues of the matrix). After applying $A$ to a vector $p$ $n$ times, we get
$$A^n p = VD^n V^{-1}p = VD^n q = \sum_{i=1}^N q_i \lambda_i^n v_i$$
Where $q$ is just $p$ with the basis changed to the eigenbasis of $A$. As $n \to \infty$, $\lambda_{max}^n \gg \lambda_{other}^n$, and therefore
$$\lim_{n \to \infty} \sum_{i=1}^N q_i \lambda_i^n v_i \propto v_k, \qquad \lambda_k = \lambda_{max}$$
As a result, after many iterations, subsequent iterations will yield the maximum eigenvalue due to the iterating vector approaching the span of the corresponding eigenvector.
To find the smallest eigenvalue with this approach, one needs only to consider the algorithm for $A^{-1}$. This follows directly from the inversion of the eigenvalues as a result of matrix inversion:
$$A^{-1} = (VDV^{-1})^{-1} = VD^{-1}V^{-1}, \qquad D^{-1} = \mathrm{diag}([\lambda_1^{-1}\ \lambda_2^{-1}\ \cdots\ \lambda_N^{-1}])$$
Therefore the power method will yield $\lambda_N^{-1}$.
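A compact sketch of both variants (my own illustration on a small symmetric test matrix; the backslash solve applies the $A^{-1}$ iteration without forming the inverse):

% Power method sketch: largest- and smallest-magnitude eigenvalues
A = toeplitz([-2 1 0 0 0]);            % small symmetric test matrix
p = rand(5,1); q = rand(5,1);
for i = 1:500
    pt = A*p;  lamMax = norm(pt);  p = pt/lamMax;   % power iteration
    qt = A\q;  lamInv = norm(qt);  q = qt/lamInv;   % inverse power iteration
end
lamMax       % magnitude of the largest eigenvalue
1/lamInv     % magnitude of the smallest eigenvalue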
4.4.3 The QR Algorithm

The QR algorithm is a highly accurate but computationally expensive method that relies on iterative QR factorization of a matrix to obtain its eigenvectors and eigenvalues. While I will not discuss QR factorization here, it will suffice to say that it factors a matrix $A = QR$, where $Q$ is an orthogonal matrix of vectors obtained by the Gram-Schmidt orthogonalization process, and $R$ is an upper-triangular matrix corresponding to the coefficients of each column of $A$ in the basis of orthogonal vectors in $Q$. The key element of the QR algorithm is that every matrix has a Schur decomposition $A = UTU^{-1}$, so $A$ is unitarily similar⁴ to an upper triangular matrix $T$. It has been shown that the matrix sequence $A_k = Q_k^{-1}A_{k-1}Q_k$ converges to an upper triangular matrix, which has the same spectrum as $A$ due to being unitarily similar, hence giving us the eigenvalues.
Example: Allowed Energies for a Particle As an example, I will solve for the energy spectrum of a particle in a well. In theory, the energy levels should agree with⁵
$$E_n = \hbar\omega_n = \frac{n^2\pi^2\hbar^2}{2mL^2}$$
If the mass is 0.5, the Planck constant is 1, and the length of the domain considered is $L = \pi$, then we should have, very simply, $E = n^2$. In other words, the eigenspectrum should follow a parabola. We will check this by implementing MATLAB code to compute the QR factorization, QR algorithm⁶, and plot against the predicted parabolic energy distribution (see fig 4).

⁴ This verbiage and approach follows the exposition here.
⁵ See the Wikipedia entry.
⁶ The QR code was written while taking Math 145.
%% QR Algorithm Implementation

% Definition of Constants
h = 1;
m = .5;
L = pi;

figure, hold on
for N = (10*(2:2:20)+100)
    % Definition of Domain
    x = linspace(0,L,300);
    dx = x(2) - x(1);
    V = 0;

    % Creation of Second derivative Matrix
    r = [-2 1 zeros(1,N-2)];
    DD = 1/dx^2 * toeplitz(r,r);

    % Create matrix
    A = -h^2/(2*m)*DD + V;

    % Calculate energy levels and plot
    [~,D] = qralgorithm(A,N);
    ddiag = diag(D);
    plot(10:-1:1, ddiag(end-9:end),'Color',[(1-N/300) 0 N/300])
end
plot(1:10,(1:10).^2,'-o','LineWidth',3)
legend('N = 120','N = 140','N = 160','N = 180','N = 200',...
       'N = 220','N = 240','N = 260','N = 280','N = 300','Actual')

function [V,D] = qralgorithm(A,N)
%QRALGORITHM implementation of qr algorithm
V = eye(size(A));
for i = 1:N
    [Q,R] = qrFact(A);
    A = R*Q;
    V = V*Q;
end
D = A;
end

function [Q,R] = qrFact(A)
%QRFACT implementation of a qr factorization routine
[~,N] = size(A);
Q = A;
R = zeros(N);

R(1,1) = norm(A(:,1));
Q(:,1) = A(:,1)/R(1,1);
for j = 2:N
    R(1:(j-1),j) = Q(:,1:(j-1))'*A(:,j);
    temp = A(:,j)-Q(:,1:(j-1))*R(1:(j-1),j);
    R(j,j) = norm(temp);
    Q(:,j) = temp/R(j,j);
end
end
Figure 4: Plot of the first 10 energy levels predicted by various iterations of the QR algorithm, and the actual energy distribution. Convergence is evident!
4.5 MATLAB Solvers

MATLAB contains a wealth of ODE solvers in the ode* function family, listed here. Its members include

• ode45 - A nonstiff, medium accuracy, general purpose solver.
• ode23 - A nonstiff, low-accuracy solver which can be efficient at crude tolerances or for mildly stiff ODEs.
• ode113 - A variable accuracy nonstiff ODE solver whose efficacy varies depending on error tolerance and ODE complexity.
• ode15s - A stiff ODE solver; the go-to of its family.
• ode23s, ode23t, ode23tb - Special purpose stiff ODE solvers.
• ode15i - Used for fully implicit ODEs.

As mentioned before, MATLAB also has the eig() function, which numerically computes eigenvalues.
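As a usage sketch, the system from section 2.1 drops directly into this family (assuming f, tspan, and x0 are still defined as in that section):

% Solving the bungee system of section 2.1 with a built-in solver (sketch)
opts = odeset('RelTol',1e-8);          % optionally tighten the tolerance
[t,x] = ode45(f,tspan,x0,opts);        % row i of x is the state at time t(i)
plot(t,x(:,1:3))                       % positions of the three jumpers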
5 Partial Differential Equations: The Finite Difference Approach

While ODEs are differential equations where derivatives are taken with respect to a single independent variable, PDEs are defined by their composition of derivatives with respect to multiple variables; hence they are comprised of partial derivatives. In this section, I will frequently make use of abbreviated notation for partial derivatives such that $\partial_i \equiv \frac{\partial}{\partial x_i}$ and $\nabla^2 \equiv \sum_i \partial_{ii}$ (the Laplacian).

In the first part of this section, I will derive the general framework behind discretization schemes, focusing on the Laplacian. In the following portions, I will describe and solve elliptic, parabolic, and hyperbolic PDEs of the linear, second-order type using finite difference schemes.
5.1 Application of Finite Differences to PDEs

While partial differential equations are often much scarier looking than ODEs, much of the approach for finite difference solution is the same. The domain is discretized, usually in space and sometimes also in time, and then either an eigenvalue problem is solved or the system is inverted. In any case, since the domain is typically not one dimensional, a little extra sophistication is required for discretization. Specifically, if the dependent variable is to be treated as a function such that $f : \mathbb{R}^n \to \mathbb{R}$, and if it is to be represented as a typical one-dimensional vector, we must number all the points in the domain, i.e., construct a map $D^2 \to \mathbb{N}$. A common approach, illustrated for a 10x10 domain, is to enumerate the points row by row such that $(n, m) \to n + 10(m - 1)$. For an $N$ by $M$ by $P$ domain:
$$(n, m, p) \to n + N(m - 1 + M(p - 1))$$
While assembling the method, it is common to retain the multidimensionality of the data for readability.
For ODEs, approximations of derivatives were developed using finite differences:
$$(y'')_i \approx \frac{(\Delta_+ y)_i - (\Delta_- y)_i}{h^2}$$
Unsurprisingly, this approximation is also valid for partial differential equations. For a two-dimensional domain of points $(x, y)$ and dependent variable $u$, we first discretize the domain as
$$u(x_i, y_j) \to u_{i,j}, \qquad x_i = i \cdot \Delta x, \qquad y_j = j \cdot \Delta y$$
We begin by generalizing the difference operators to multiple dimensions as
$$(\Delta_{i+}u)_{i,j} = u_{i+1,j} - u_{i,j}, \qquad (\Delta_{i-}u)_{i,j} = u_{i,j} - u_{i-1,j}$$
$$(\Delta_{j+}u)_{i,j} = u_{i,j+1} - u_{i,j}, \qquad (\Delta_{j-}u)_{i,j} = u_{i,j} - u_{i,j-1}$$
We then have
$$\partial_{xx}u = \frac{(\Delta_{i+}u)_{i,j} - (\Delta_{i-}u)_{i,j}}{(\Delta x)^2}, \qquad \partial_{yy}u = \frac{(\Delta_{j+}u)_{i,j} - (\Delta_{j-}u)_{i,j}}{(\Delta y)^2}$$
Should we make the choice to set $\Delta x = \Delta y = h$, we may write the discretized Laplacian as
$$\nabla^2 u = \frac{1}{h^2}(u_{i,j-1} + u_{i-1,j} - 4u_{i,j} + u_{i+1,j} + u_{i,j+1})$$
This is the infamous five-point formula, so named for using five points (shocking!). Let us consider
our mapping earlier, assigning i → n and j → m. At some fixed j (in other words, some y value),
we find ourselves with the following matrix-vector equation
$$\nabla^2\begin{bmatrix} u_{1,j} \\ u_{2,j} \\ \vdots \\ u_{N,j} \end{bmatrix} = \frac{1}{h^2}\begin{bmatrix} -2 & 1 & \cdots & 0 \\ 1 & -2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & -2 \end{bmatrix}\begin{bmatrix} u_{1,j} \\ u_{2,j} \\ \vdots \\ u_{N,j} \end{bmatrix} + \frac{1}{h^2}\begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}\left(\begin{bmatrix} u_{1,j-1} \\ u_{2,j-1} \\ \vdots \\ u_{N,j-1} \end{bmatrix} - 2\begin{bmatrix} u_{1,j} \\ u_{2,j} \\ \vdots \\ u_{N,j} \end{bmatrix} + \begin{bmatrix} u_{1,j+1} \\ u_{2,j+1} \\ \vdots \\ u_{N,j+1} \end{bmatrix}\right)$$
Substituting the vectors with fixed $j$ values $u_{*,j} \to u_j$, $u_{*,j-1} \to u_{j-1}$, and $u_{*,j+1} \to u_{j+1}$, this may be succinctly rewritten
$$\nabla^2 u_j = L_N u_j + \frac{1}{h^2}I(u_{j-1} - 2u_j + u_{j+1})$$
Where $L_N$ is the $N$-point version of the 1D discretized Laplacian developed in the ODE section.
Therefore, we may write the full discretization as
$$\nabla^2 u = \left(\begin{bmatrix} L_N & 0 & \cdots & 0 \\ 0 & L_N & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & L_N \end{bmatrix} + \frac{1}{h^2}\begin{bmatrix} -2I & I & \cdots & 0 \\ I & -2I & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & -2I \end{bmatrix}\right)u = (I \otimes L_N + L_M \otimes I)u = (L_M \oplus L_N)u$$
Where $\otimes$ denotes the Kronecker product and $\oplus$ the Kronecker sum. This is a powerful result, and it may be generalized to $n$-dimensional finite difference schemes by simply taking the Kronecker sum of multiple single-dimensional discretized Laplacians. While it will not be shown here, a similar derivation may be undertaken for other operators.
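As a quick numerical check of this identity (my own sketch, not from Chapra), the Kronecker-assembled Laplacian applied to $u = \sin(\pi x)\sin(\pi y)$, a Dirichlet eigenfunction, should return approximately $-2\pi^2 u$:

% Kronecker-sum assembly of the 2D Laplacian (illustrative check)
N = 60; h = 1/(N+1);
L1 = toeplitz([-2 1 zeros(1,N-2)])/h^2;      % 1D Laplacian, Dirichlet ends
L2 = kron(eye(N),L1) + kron(L1,eye(N));      % the Kronecker sum L1 ⊕ L1
[X,Y] = ndgrid((1:N)*h,(1:N)*h);
u = sin(pi*X).*sin(pi*Y);
max(abs(L2*u(:) + 2*pi^2*u(:)))              % small: O(h^2) discretization error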
5.2 Boundary Conditions

When solving any PDE defined on some domain $\Omega$, boundary conditions are needed such that the boundary data $g$ is defined for all points on $\partial\Omega$. This is easily done for the square, as we can simply use single-variable functions, but for a more general domain a non-obvious parametrization may be required. To demonstrate the general approach for defining boundary conditions, the Laplace equation will be used as a case study:
$$\nabla^2 u = 0$$
5.2.1 Dirichlet Conditions
Dirichlet conditions are incorporated by either a) making substitutions in the system of equations
by simply setting the value of certain points or b) using “ghost nodes” as before. Since the former
is not very mathematically interesting, I will consider the case of the latter. Consider boundary
conditions on the $L$ by $L$ square such that we have
$$u(x, 0) = \alpha(x), \quad u(x, L) = \beta(x), \quad u(0, y) = \gamma(y), \quad u(L, y) = \delta(y)$$
We must develop modifications to the system of equations at every boundary point, but to start we will consider a point $(x_i, 0)$ on the bottom side of the square.
$$\frac{1}{h^2}(u_{i,j-1} + u_{i-1,j} - 4u_{i,j} + u_{i+1,j} + u_{i,j+1}) = 0$$
$$\implies \frac{1}{h^2}(\alpha_i + u_{i-1,1} - 4u_{i,1} + u_{i+1,1} + u_{i,2}) = 0$$
$$\implies \frac{1}{h^2}(u_{i-1,1} - 4u_{i,1} + u_{i+1,1} + u_{i,2}) = -\frac{1}{h^2}\alpha_i$$
Thus for the vector $u_{i,1} = u_1$ for the bottom side we have
$$Lu_1 + \frac{1}{h^2}I(-2u_1 + u_2) = -\frac{1}{h^2}\alpha$$
Considering the organization of the earlier applied boundary conditions for Poisson's equation in 1D,
$$LV = -\frac{Q}{\epsilon} - \frac{1}{h^2}(V_A e_0 + V_B e_N)$$
it seems natural that there would be a similar way to implement Dirichlet conditions for PDEs. Using the Kronecker product, there is!
$$\nabla^2 u = -\frac{1}{h^2}(e_1 \otimes \alpha + e_N \otimes \beta + \gamma \otimes e_1 + \delta \otimes e_N)$$
The Liebmann Method One way to solve the Laplace equation numerically is by use of the iterative Liebmann method. After setting up the system of equations, it will usually be found that the majority of entries in the matrix-vector system are zero. As a result, it can be overly computationally expensive to use full-matrix methods. The Gauss-Seidel method, while usually used for ODEs, may be used under the Liebmann name for the solution of elliptic PDEs.

To use the method, one iterates over the independent variable, at each point examining the nonzero terms and using them to solve for the point on the diagonal, such that the point $u_{i,j}$ is given by
$$u_{i,j} = \frac{u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1}}{4}$$
Since the system is diagonally dominant⁷, the system will converge. To speed up convergence, over-relaxation techniques may be employed; for example, by taking each new point and mapping it such that
$$\lambda u_{new} + (1 - \lambda)u_{old} \to u_{new}$$
at each step. Percent error between subsequent steps may be used as a metric for convergence.
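A bare-bones sketch of Liebmann iteration with over-relaxation (my own illustration; the single hot Dirichlet edge and λ = 1.5 are arbitrary choices):

% Liebmann (Gauss-Seidel) sketch for Laplace's equation on a square
N = 50; u = zeros(N); u(:,1) = 100;    % left edge held at 100, others at 0
lambda = 1.5;                          % over-relaxation factor, 1 < lambda < 2
for it = 1:2000
    du = 0;
    for i = 2:N-1
        for j = 2:N-1
            unew = (u(i+1,j) + u(i-1,j) + u(i,j+1) + u(i,j-1))/4;
            unew = lambda*unew + (1-lambda)*u(i,j);
            du = max(du, abs(unew - u(i,j)));
            u(i,j) = unew;
        end
    end
    if du < 1e-6, break, end           % stop when a full sweep changes little
end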
5.2.2 Neumann Conditions

To implement Neumann conditions, we will attempt a slightly more subtle approach, building on the Dirichlet implementation. First, we need to compute values for the ghost nodes, based on the central difference approximation of the first derivative, as was done before. Now, we consider the following boundary conditions
$$u_y(x, 0) = a(x), \quad u_y(x, L) = b(x), \quad u_x(0, y) = c(y), \quad u_x(L, y) = d(y)$$
Conveniently, since we are working with the square, we don't need to worry about the other first derivatives, since those would only utilize points along the boundary and thus are not relevant for this particular problem⁸.
$$a_i = \frac{u_{i,2} - u_{i,0}}{2h} \implies u_{i,0} = u_{i,2} - 2ha_i \implies u_0 = u_2 - 2ha$$
$$b_i = \frac{u_{i,N+1} - u_{i,N-1}}{2h} \implies u_{i,N+1} = u_{i,N-1} + 2hb_i \implies u_{N+1} = u_{N-1} + 2hb$$
Considering that $\alpha = u_0$ in the Dirichlet case, we may use substitution to work directly from the final result, with analogous substitutions for the other boundaries:
$$\nabla^2 u = -\frac{1}{h^2}\big(e_1 \otimes (u_2 - 2ha) + e_N \otimes (u_{N-1} + 2hb) + (u_2 - 2hc) \otimes e_1 + (u_{N-1} + 2hd) \otimes e_N\big)$$
$$\nabla^2 u = -\frac{2}{h}(e_N \otimes b - e_1 \otimes a + d \otimes e_N - c \otimes e_1) - \frac{1}{h^2}(\delta_{1,2} \otimes I + \delta_{N,N-1} \otimes I + I \otimes \delta_{1,2} + I \otimes \delta_{N,N-1})u$$
$$\nabla^2 u = -\frac{2}{h}(e_N \otimes b - e_1 \otimes a + d \otimes e_N - c \otimes e_1) - \frac{1}{h^2}\big((\delta_{1,2} + \delta_{N,N-1}) \oplus (\delta_{1,2} + \delta_{N,N-1})\big)u$$
$$\therefore\ \left(\nabla^2 + \frac{1}{h^2}(\delta_{1,2} + \delta_{N,N-1}) \oplus (\delta_{1,2} + \delta_{N,N-1})\right)u = -\frac{2}{h}(e_N \otimes b - e_1 \otimes a + d \otimes e_N - c \otimes e_1)$$
Where $\delta_{i,j}$ denotes the Kronecker delta, a matrix of zeros with a 1 at position $(i, j)$. We thus have the form of the equation which applies the Neumann conditions.

⁷ Verbiage used by Chapra.
⁸ There is more nuance here, but I am not sure how to articulate it. It is not discussed in Chapra, so I figure it was not discussed during the course.
As an aside, before I started this section on PDEs, it hadn’t occurred to me that the tensor
algebra would be so instrumental in a clean presentation of this theory. Granted, the mathematical
bar for understanding is set a bit higher than in Chapra’s book, but I feel that it makes things
much more clear. This took a great deal of thinking to derive and present, so in the interest of
time, I will be a little less general moving forward.
Solution of Elliptic PDEs Elliptic PDEs are often considered to be the simplest type of nontrivial PDE to solve, as they describe steady-state phenomena. All the machinery really needed to solve one of these using finite differences has already been described. To solve such equations, boundary conditions are needed. Here we will discuss the 2D Laplace-Poisson equation as a model case:
$$\nabla^2 u = f(x, y)$$
Let us now solve an example with Neumann conditions.

Normally, I would scrap my work here and move on, but I would like to comment on a particular difficulty I ran into while doing this. Without thinking much about it, I set up the Laplace equation with all Neumann bounds, and found that the solution was horribly divergent. I was puzzled, thinking I had done something wrong. Then, I realized that Laplace's equation with only Neumann conditions is an ill-posed problem, as there are infinitely many solutions--consider the constant of integration that would naturally arise when prescribing slope at all edges. For this reason, I will do a different version of this problem, with the following conditions:
$$x \in [0, 5], \qquad y \in [0, 3]$$
$$\partial_y u(x, 0) = \partial_y u(x, 3) = x, \qquad f(x, y) = e^{-(x - \frac{5}{2})^2}e^{-(y - \frac{3}{2})^2}$$
$$u(0, y) = 0, \qquad u(L_x, y) = \sin\left(\frac{2\pi}{3}y\right)$$
This could be interpreted as the potential on some strange sheet of metal with a linear electric field on two sides, grounded on one, and with a sinusoidal potential on the remaining side, with a Gaussian charge distribution in the center.
%% Laplace-Poisson Example Problem

%% Create functions for Kronecker sum, delta, and unit vector
% Kronecker sum
ksum = @(A,B) kron(A,eye(size(B))) + kron(eye(size(A)),B);

% Kronecker delta
kdelt = @(i,j,N) sparse(i,j,1,N,N);

% Unit vector
e = @(i,N) sparse(i,1,1,N,1);

%% Set problem parameters
xLen = 5;   % Length in x direction
yLen = 3;   % Length in y direction
N = 500;    % Number of points for x
M = 500;    % Number of points for y

% Gaussian distribution at the center
f = @(x,y) exp(-(x-xLen/2).^2) .* exp(-(y-yLen/2).^2);

% "Flux" (Neumann) Condition functions
up_x0 = @(x) x;
up_xL = up_x0;

% Dirichlet conditions
u_0y = @(y) 0*y;
u_Ly = @(y) sin(y*2*pi/yLen);

% Create grid
x = linspace(0,xLen,N)';
y = linspace(0,yLen,M)';
[xx,yy] = meshgrid(x,y);
dx = x(2) - x(1);
dy = y(2) - y(1);

% Define the Laplacian for x and y
Lx = 1/dx^2 * sparse(toeplitz([-2 1 zeros(1,N-2)]));
Ly = 1/dy^2 * sparse(toeplitz([-2 1 zeros(1,M-2)]));

% Define the system Laplacian
Laplacian = ksum(Lx,Ly);

% Modify Laplacian for Neumann conditions
laplaceMod = kron(eye(N),(kdelt(1,2,M) + kdelt(M,M-1,M))/dy^2);
A = Laplacian + laplaceMod;

% Create RHS (the Neumann data acts across y, hence the 2/dy factor)
rhs = reshape(f(xx,yy),N*M,1) - 2/dy*(kron(up_xL(x),e(M,M)) - kron(up_x0(x),e(1,M))) + ...
      -1/dx^2*(kron(e(N,N),u_Ly(y)) + kron(e(1,N),u_0y(y)));

% Solve and plot solution
u = reshape(A\rhs,M,N);
surf(x,y,u)
xlabel('x')
ylabel('y')
zlabel('u')
shading interp
axis equal
light, light

Figure 5: Solution to the Laplace-Poisson equation given the above conditions.
5.2.3 The Control-Volume Approach

Situations may arise where a particular node in the finite-difference scheme may lie at a troublesome point, where it is entirely non-obvious how to implement the boundary conditions directly. By instead treating the immediately surrounding volume as its own system and using the solution on that volume for that node, these difficulties may be efficiently addressed. This is called the control-volume approach. By isolating the problematic point, only one line in the system of equations needs to be modified. To write the equations for each node, we first rewrite the heat equation as its average over a volume element
$$\frac{1}{\Delta V}\int_{\Delta V} u_t\, d(\Delta V) = \langle u_t \rangle = \frac{\alpha}{\Delta V}\int_{\Delta V} \nabla\cdot(\nabla u)\, d(\Delta V)$$
By the divergence theorem,
$$\int_{\Delta V} \nabla\cdot(\nabla u)\, d(\Delta V) = \oint_{\partial\Delta V} \nabla u \cdot n\, dA$$
Over a prismatic, polygonal element, we find
$$\langle u_t \rangle = \frac{h\alpha}{\Delta V}\sum_i \int_{S_i} (\nabla u \cdot n_i)\, dx$$
In the standard case of a rectangular element, this may be rewritten as
$$\langle u_t \rangle = \frac{\alpha}{\Delta x \Delta y}\left(\int_{\uparrow} \partial_y u\, dx + \int_{\rightarrow} \partial_x u\, dy - \int_{\downarrow} \partial_y u\, dx - \int_{\leftarrow} \partial_x u\, dy\right)$$
Where arrows denote values on the corresponding side of the element. For the steady-state case $\langle u_t \rangle = 0$ and, using the definition of heat flux $q_i = -k\nabla u_i$,
$$0 = \int_{\uparrow} k_{\uparrow}q_y\, dx + \int_{\rightarrow} k_{\rightarrow}q_x\, dy - \int_{\downarrow} k_{\downarrow}q_y\, dx - \int_{\leftarrow} k_{\leftarrow}q_x\, dy$$
In finite differences, this may be written as a summation over each side. Similar expressions may be derived for dealing with the non-steady-state case.
Figure 6: Write equations for the darkened nodes in the grid in Fig. P29.10. Note that all units are cgs. The convection coefficient is $h_c = 0.01$ cal/(cm²·°C·s) and the thickness of the plate is 2 cm.
Problem 29.10 To illustrate these concepts, we solve the above problem. Node (0,0) may be written using the heat convection coefficient for the left side (going halfway up to the next node) and 0 for the bottom, since it is insulated. The middle nodes do not involve the boundary, and have corresponding expressions that are written more readily, with the exception of the heat source (still fairly obvious):
$$(0, 0):\quad \frac{\Delta y}{2}\cdot h_c(T_a - T_{0,0}) + \frac{\Delta x}{2}\cdot k'(T_{0,1} - T_{0,0}) + \frac{\Delta y}{2}\cdot k'(T_{1,0} - T_{0,0})$$
$$(1, 1):\quad T_{1,2} + T_{2,1} + T_{1,0} + T_{0,1} - 4T_{1,1}$$
$$(2, 1):\quad \left(\frac{\Delta y_1 + \Delta y_2}{2}\big(k'(T_{3,1} - T_{2,1}) + k'(T_{1,1} - T_{2,1})\big) + 1.5\Delta x\big(k'(T_{2,2} - T_{2,1}) + k'(T_{2,0} - T_{2,1})\big)\right)\Delta z + q_z\left(1.5\Delta x + \frac{\Delta y_1 + \Delta y_2}{2}\right)$$
5.3 Partial Discretization
In this section, I will discuss the extension of the established theory to the solution of time-
dependent phenomena, without discretization in time. These methods are very nice and even
work well for hyperbolic PDEs, but are comparably more computationally expensive.
5.3.1 Parabolic PDEs

Parabolic PDEs are most commonly encountered when dealing with time-dependent phenomena, especially those involving some sort of diffusive physics. They are so named because the form they take is defined by a parabolic equation, where the dependent variable is replaced by the $\partial_t$ operator and the independent variables are replaced by the corresponding partial derivative operators. An example would be
$$z = x^2 + y^2 \to \partial_t u = \partial_{xx}u + \partial_{yy}u = \nabla^2 u$$
This simple example is the basic form of the diffusion PDE. As a matter of fact, being able to write a PDE as the equation of the partial derivative with respect to time and an elliptic operator is a sufficient condition⁹ for a PDE to be considered parabolic. For this reason, a parabolic PDE may be written generally as
$$u_t = \mathcal{L}[u]$$
Where $\mathcal{L}$ is an elliptic operator. In the interest of a succinct presentation of the numerical methods behind their solution, I will omit a derivation of the mathematical nuance behind parabolic and elliptic PDEs in favor of jumping straight to solution methods.
5.3.2 The Eigenvalue Method for Time-Dependent Partially-Discretized PDEs
We have discussed in previous sections how an $n$th-order linear ODE may be broken apart into a system of $n$ first-order differential equations and then rewritten as a matrix-vector system. Appealing to the framework established in the last section, we will consider in this section the 2D heat equation
$$\frac{\partial u}{\partial t} = \alpha \nabla^2 u$$
Using the language developed previously, this may be discretized over $x, y$ by[10]
$$\frac{d}{dt}\mathbf{u} = \alpha (L_x \oplus L_y)\mathbf{u}$$
Naturally, we may solve this equation in the same way that any other system of first-order differential equations may be solved, by exponentiation:
$$\mathbf{u} = e^{\alpha (L_x \oplus L_y) t}\, \mathbf{u}_0 = \left(e^{\alpha L_x t} \otimes e^{\alpha L_y t}\right) \mathbf{u}_0$$
with simplification resulting from exploiting properties of the Kronecker sum[11]. Since $L_x, L_y$ are symmetric matrices, their eigenvectors are orthogonal, and therefore
$$L_x = V_x D_x V_x^{-1} = V_x D_x V_x^T, \qquad L_y = V_y D_y V_y^{-1} = V_y D_y V_y^T$$
We may then further simplify our expression:
$$\mathbf{u} = \left(V_x e^{\alpha D_x t} V_x^T\right) \otimes \left(V_y e^{\alpha D_y t} V_y^T\right)\mathbf{u}_0 = (V_x \otimes V_y)\left(e^{\alpha D_x t} \otimes e^{\alpha D_y t}\right)(V_x \otimes V_y)^T\, \mathbf{u}_0 = (V_x \otimes V_y)\, e^{\alpha (D_x \oplus D_y) t}\, (V_x \otimes V_y)^T\, \mathbf{u}_0$$
[9] Unsure if this condition is also necessary.
[10] While the previous ordering of $L_x$ and $L_y$ was valid for the previous derivation, this ordering also works and comes out cleaner in the code.
[11] Properties of the Kronecker product may be read about here.
By making the substitutions $W = \bigotimes_{i=1}^{N} V_{x_i}$ and $\Lambda = \bigoplus_{i=1}^{N} D_{x_i}$, this may be extended to an $N$-dimensional space as
$$\mathbf{u} = W e^{\alpha \Lambda t} W^T \mathbf{u}_0$$
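As a minimal sketch of this method in MATLAB (my own illustration, assuming a unit square domain with zero Dirichlet boundaries and arbitrarily chosen parameters), the 2D propagator can be applied in grid form using only the 1D eigendecomposition:

% Eigenvalue-method sketch for u_t = alpha*lap(u), zero Dirichlet boundaries
N = 50; h = 1/(N+1); alpha = 1;
L1 = toeplitz([-2 1 zeros(1,N-2)])/h^2;    % 1D Laplacian (same in x and y here)
[V,D] = eig(L1);                           % L1 symmetric, so V is orthogonal
d = diag(D);
Lam = d + d.';                             % Kronecker-sum eigenvalues d_i + d_j
[x,y] = meshgrid(h*(1:N));
U0 = exp(-100*((x-0.5).^2 + (y-0.5).^2));  % Gaussian initial condition
C = V.'*U0*V;                              % grid form of (V (x) V)^T u0
t = 0.01;
U = V*(exp(alpha*Lam*t).*C)*V.';           % u(t) = (V (x) V) e^{a Lam t} (V (x) V)^T u0

This exploits the identity $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^T)$, so the Kronecker products never need to be formed explicitly.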
5.3.3 Implementation of Boundary Conditions
The result that has just been obtained is, while pretty, quite complex mathematically. With that said, there is a fairly simple way to interpret it that reveals how to apply boundary conditions. The first thing to consider is that the eigenfunctions of the 2D Laplacian are planar harmonics. By changing the basis of the initial condition into the eigenspace of the Laplacian (the $W^T \mathbf{u}_0$ term), we are approximating the initial condition as a Fourier series. From that point, any state of the system must be a superposition of planar harmonics with coefficients that evolve over time according to their corresponding eigenvalues. For this reason, boundary conditions are still determined by characteristics of the Laplacian (or whatever elliptic operator is implicated), and there is a straightforward relationship between applying boundary conditions to elliptic PDEs and to parabolic PDEs.
Consider that the differential equation for a parabolic PDE may be written as
$$\frac{d\mathbf{u}}{dt} = L[\mathbf{u}]$$
Here, I will only consider boundary conditions that are constant in time. To apply Dirichlet conditions, we know that $\frac{d\mathbf{u}}{dt}\big|_{\partial\Omega} = 0$. Therefore, we must modify the right-hand side such that $L[\mathbf{u}]\big|_{\partial\Omega} = 0$ when $\mathbf{u}\big|_{\partial\Omega} = g(x, y)$. This is a problem we already solved earlier, using ghost nodes. For the heat equation where $L[\mathbf{u}] = \alpha \nabla^2 \mathbf{u}$,
$$\alpha \nabla^2 \mathbf{u} \rightarrow \alpha\Bigg(\nabla^2 \mathbf{u} + \frac{1}{h^2}\Big(\underbrace{e_1 \otimes \boldsymbol{\alpha}}_{\text{Bottom}} + \underbrace{e_N \otimes \boldsymbol{\beta}}_{\text{Top}} + \underbrace{\boldsymbol{\gamma} \otimes e_1}_{\text{Left}} + \underbrace{\boldsymbol{\delta} \otimes e_N}_{\text{Right}}\Big)\Bigg)$$
where we choose which of the Dirichlet terms to apply.
And for Neumann conditions (with analogous placement of the boundary vectors),
$$\alpha \nabla^2 \mathbf{u} \rightarrow \alpha\left[\nabla^2 + \frac{1}{h^2}\big((\delta_{1,2} + \delta_{N,N-1}) \oplus (\delta_{1,2} + \delta_{N,N-1})\big)\right]\mathbf{u} + \frac{2\alpha}{h}\left(e_N \otimes \mathbf{b} - e_1 \otimes \mathbf{a} + \mathbf{d} \otimes e_N - \mathbf{c} \otimes e_1\right)$$
Now, all we need to do is solve the inhomogeneous differential equation. Recall that
$$\frac{d\mathbf{u}}{dt} = A\mathbf{u} + \mathbf{b}$$
has a solution of the form
$$\mathbf{u} = e^{At}\left(\mathbf{u}_0 + A^{-1}\mathbf{b}\right) - A^{-1}\mathbf{b}$$
Therefore, once the proper substitutions are made, the PDE is as good as solved.
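As a quick check of this formula (an added step, not in Chapra), substituting the expression back into the equation confirms it:
$$\frac{d\mathbf{u}}{dt} = A e^{At}\left(\mathbf{u}_0 + A^{-1}\mathbf{b}\right) = A\left(\mathbf{u} + A^{-1}\mathbf{b}\right) = A\mathbf{u} + \mathbf{b},$$
and the initial condition $\mathbf{u}(0) = \mathbf{u}_0$ is recovered at $t = 0$.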
The following is a solution for the heat equation with the conditions (matching the code below)
$$x \in [0, 10], \quad y \in [0, 4], \quad t \in [0, 3)$$
$$\partial_y u(x, 0) = \partial_y u(x, 4) = 0, \qquad u_0(x, y) = 5 e^{-(x-5)^2} e^{-(y-2)^2}$$
$$u(0, y) = 0, \qquad u(L_x, y) = \sin\left(\frac{\pi}{2} y\right)$$
Figure 7: Heat evolution PDE at t = 0.1.
%% Heat PDE

%% Create functions for Kronecker sum, delta, and unit vector
% Kronecker sum
ksum = @(A,B) kron(A,eye(size(B))) + kron(eye(size(A)),B);

% Kronecker delta
kdelt = @(i,j,N) sparse(i,j,1,N,N);

% Unit vector
e = @(i,N) sparse(i,1,1,N,1);

%% Set problem parameters
xLen = 10;  % Length in x direction
yLen = 4;   % Length in y direction
N = 200;    % Number of points for x
M = 80;     % Number of points for y
alpha = 5;  % Thermal diffusivity

% Initial condition
f = @(x,y) 5*exp(-(x-xLen/2).^2) .* exp(-(y-yLen/2).^2);

% "Flux" (Neumann) condition functions
up_x0 = @(x) 0*x;
up_xL = up_x0;

% Dirichlet conditions
u_0y = @(y) 0*y;
u_Ly = @(y) sin(y*2*pi/yLen);

% Create grid
x = linspace(0,xLen,N)';
y = linspace(0,yLen,M)';
[xx,yy] = meshgrid(x,y);
dx = x(2) - x(1);
dy = y(2) - y(1);

% Define the Laplacian for x and y
Lx = 1/dx^2 * sparse(toeplitz([-2 1 zeros(1,N-2)]));
Ly = 1/dy^2 * sparse(toeplitz([-2 1 zeros(1,M-2)]));

% Define the system Laplacian
Laplacian = ksum(Lx,Ly);

% Modify Laplacian for Neumann conditions
laplaceMod = kron(eye(N),(kdelt(1,2,M) + kdelt(M,M-1,M))/dy^2);
A = alpha*(Laplacian + laplaceMod);

% Inhomogeneous part of Laplacian
b = alpha*(2/dy*(kron(up_xL(x),e(M,M)) - kron(up_x0(x),e(1,M))) + ...
    1/dx^2*(kron(e(N,N),u_Ly(y)) + kron(e(1,N),u_0y(y))));

% Compute eigenvalues and eigenvectors of Laplacian
[V,D] = eig(full(A));
u0 = reshape(f(xx,yy),N*M,1);
beta = A\b;          % particular solution A^{-1} b
V0 = V\(u0 + beta);  % change of basis into the eigenspace
D = diag(D);

% Solution
u = @(t) V*(diag(exp(D*t))*V0) - beta;

h = surf(xx,yy,reshape(u(0),M,N));
xlabel('x')
ylabel('y')
zlabel('u')
shading interp
axis equal
light

% Animate results
t = linspace(0,3,1000);

for i = 1:1000
    h.ZData = reshape(u(t(i)),M,N);
    drawnow
end
5.3.4 Hyperbolic PDEs
Since this wasn't really part of the class, I'll keep this brief, but I do want to show how the methods built up to this point can easily handle the wave equation. For ease of exposition, I will not discuss how to apply boundary conditions; suffice it to say, it is a similar process.
We begin by considering the spatially discretized form of the wave equation
$$\frac{d^2 \mathbf{u}}{dt^2} = (L_x \oplus L_y)\mathbf{u}$$
As with a typical second-order ODE of this form, this may be rewritten as
$$\frac{d}{dt}\begin{bmatrix} \mathbf{u} \\ \dot{\mathbf{u}} \end{bmatrix} = \begin{bmatrix} 0 & I \\ L_x \oplus L_y & 0 \end{bmatrix}\begin{bmatrix} \mathbf{u} \\ \dot{\mathbf{u}} \end{bmatrix}$$
Therefore, we may immediately assert the solution to be
$$\begin{bmatrix} \mathbf{u} \\ \dot{\mathbf{u}} \end{bmatrix} = \exp\left(\begin{bmatrix} 0 & I \\ L_x \oplus L_y & 0 \end{bmatrix} t\right)\begin{bmatrix} \mathbf{u}_0 \\ \dot{\mathbf{u}}_0 \end{bmatrix}$$
It should be noted that this is the "default" solution, with Dirichlet conditions of zero on all sides. As such, this is the natural equation of a vibrating square membrane. Furthermore, I do not claim this to be anywhere close to the most computationally efficient approach (view in Adobe Reader for the animation).
Figure 8: Solution to the wave equation with Gaussian initial conditions.

%% Wave PDE

%% Create functions for Kronecker sum, delta, and unit vector
% Kronecker sum
ksum = @(A,B) kron(A,eye(size(B))) + kron(eye(size(A)),B);
% Kronecker delta
kdelt = @(i,j,N) sparse(i,j,1,N,N);
% Unit vector
e = @(i,N) sparse(i,1,1,N,1);

%% Set problem parameters
xLen = 10;  % Length in x direction
yLen = 10;  % Length in y direction
N = 40;     % Number of points for x
M = 40;     % Number of points for y

% Initial condition
f = @(x,y) 5*exp(-(x-xLen/2).^2) .* exp(-(y-yLen/2).^2);

% Initial velocity
g = @(x,y) x*y*0;

% Create grid
x = linspace(0,xLen,N)';
y = linspace(0,yLen,M)';
[xx,yy] = meshgrid(x,y);
dx = x(2) - x(1);
dy = y(2) - y(1);

% Define the Laplacian for x and y
Lx = 1/dx^2 * sparse(toeplitz([-2 1 zeros(1,N-2)]));
Ly = 1/dy^2 * sparse(toeplitz([-2 1 zeros(1,M-2)]));

% Define the system Laplacian
A = ksum(Lx,Ly);

% Compute eigenvalues and eigenvectors of the system
[V,D] = eig([zeros(N*M), eye(N*M) ; full(A), zeros(N*M)]);
D = diag(D);
u0 = reshape(f(xx,yy),N*M,1);
up0 = reshape(g(xx,yy),N*M,1);
V0 = V\[u0 ; up0];  % block matrix is not symmetric, so invert V rather than transpose

% Solution
u = @(t) real(V(1:N*M,:)*(diag(exp(D*t))*V0));
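The listing ends with the solution handle; a short usage sketch (my own addition, mirroring the plotting loop from the heat example, with illustrative time and axis limits) animates the membrane:

% Animate the vibrating membrane (illustrative usage of u(t) above)
hSurf = surf(xx,yy,reshape(u(0),M,N));
zlim([-5 5]), shading interp
for t = linspace(0,3,300)
    hSurf.ZData = reshape(u(t),M,N);
    drawnow
end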
5.4 Full Discretization
Up to this point, we have discussed the solution of PDEs with a partial discretization of the domain, that being only in space. Here, I will present the methods broached in Chapra for discretization and solution in both space and time. To complement Chapra's exposition, I will drop the second spatial dimension and work in one variable of space for this section.
5.4.1 Explicit Methods
Just as finite differences could be constructed for the space domain, so too may they be applied to the time variable. For ease of notation, I will use the following convention in this section unless specified otherwise:
$$u_i(t_l) \equiv u_i^l$$
Sensibly, there is a great deal of back-and-forth between the solution of temporally dependent ODEs and of PDEs that are discretized in time. For now, we will exclusively examine the simplified heat PDE $\partial_t u = L(u)$. Notably, at each point $i$ in the domain, we essentially have a first-order differential equation
$$f(x_i, u_i, t_l) = \left(\nabla^2 u(t_l)\right)_i$$
As a result, it stands to reason that we should be able to use something in our arsenal of ODE solvers to solve the PDE, treating each point in the spatial domain as its own unique ODE. The explicit method discussed by Chapra is a manifestation of Euler's method, such that
$$u_i^{l+1} = u_i^l + \Delta t\, L_i^l, \qquad L_i^l = \frac{u_{i+1}^l - 2u_i^l + u_{i-1}^l}{h^2}$$
As long as $\Delta t \le \frac{1}{2} h^2$, this method will be convergent and stable. Other explicit ODE solution methods may also be used; as Chapra notes, the implementation of Heun's method for this application is called MacCormack's method.
Explicit Method Example
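A minimal sketch of the scheme above (my own illustration, assuming a unit domain, zero Dirichlet boundaries, and a step size at the stability limit $\Delta t = h^2/2$):

% Explicit (FTCS) solution of u_t = u_xx with zero Dirichlet boundaries
N = 101;  x = linspace(0,1,N)';  h = x(2) - x(1);
dt = 0.5*h^2;                   % largest stable step, dt <= h^2/2
u = exp(-100*(x-0.5).^2);       % Gaussian initial condition
u([1 N]) = 0;                   % boundary values
for l = 1:round(0.1/dt)
    L = (u(3:N) - 2*u(2:N-1) + u(1:N-2))/h^2;  % discrete Laplacian, interior nodes
    u(2:N-1) = u(2:N-1) + dt*L;                % forward Euler step
end
plot(x,u)

Increasing dt past the stability limit makes the iterates oscillate and blow up, which is exactly the stiffness behavior discussed in Section 3.1.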
5.4.2 Implicit Methods
In Section 3.1 of this report, it was discussed that Euler's method does not have a particularly large stability domain, and thus suffers for poor choices of step size. This is also the case for its application to solving PDEs. By opting instead to use so-called A-stable methods, improved stability may be found. Just as the backward Euler method was given by
$$y_{i+1} = y_i + h f(x_{i+1}, y_{i+1})$$
it may be used for PDE solution as well, manifesting as
$$u_i^{l+1} = u_i^l + \Delta t\, L_i^{l+1}$$
By writing this in matrix-vector form for the entire system, we obtain the following (with $u \equiv \mathbf{u}$[12]):
$$u^{l+1} - \Delta t\, L u^{l+1} = u^l \implies (I - \Delta t\, L) u^{l+1} = u^l \implies u^{l+1} = (I - \Delta t\, L)^{-1} u^l$$
We therefore now have a general solution at any point in time, by induction:
$$u(t_n) = (I - \Delta t\, L)^{-n} u_0$$
For added fanciness, we may generalize this to $N$ dimensions as
$$u(t_n) = \Big(I - \Delta t \bigoplus_{i=1}^{N} L_{x_i}\Big)^{-n} u_0$$
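In practice one solves the linear system at each step rather than forming the inverse; a minimal sketch (my own, with the same zero-boundary setup as the explicit example):

% Backward Euler for u_t = u_xx on interior nodes, zero Dirichlet boundaries
N = 99;  h = 1/(N+1);  dt = 0.01;        % dt is not limited by stability here
L = toeplitz([-2 1 zeros(1,N-2)])/h^2;   % interior-node Laplacian
x = h*(1:N)';
u = exp(-100*(x-0.5).^2);                % initial condition
M = eye(N) - dt*L;                       % could be factored once and reused
for l = 1:round(0.1/dt)
    u = M\u;                             % solve (I - dt*L) u^{l+1} = u^l
end
plot(x,u)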
The Crank-Nicolson Method The Crank-Nicolson method appears to be an implementation of Heun's method for solving PDEs. The pointwise equation is
$$u_i^{l+1} = u_i^l + \frac{1}{2}\Delta t\left(L_i^l + L_i^{l+1}\right)$$
In matrix-vector form, this may be simplified to an iterative procedure as follows:
$$u^{l+1} = u^l + \tfrac{1}{2}\Delta t\, L\left(u^l + u^{l+1}\right)$$
$$\left(I - \tfrac{1}{2}\Delta t\, L\right) u^{l+1} = \left(I + \tfrac{1}{2}\Delta t\, L\right) u^l$$
$$u^{l+1} = \left(I - \tfrac{1}{2}\Delta t\, L\right)^{-1}\left(I + \tfrac{1}{2}\Delta t\, L\right) u^l$$
$$\therefore\ u(t_n) = \left[\left(I - \tfrac{1}{2}\Delta t\, L\right)^{-1}\left(I + \tfrac{1}{2}\Delta t\, L\right)\right]^n u_0$$
To apply boundary conditions to these methods, we alter the Laplacian and add terms as before, such that the Laplacian reflects the boundary conditions we desire. This is accomplished by the substitution
$$L u^l \rightarrow L u^l + A u^l + b$$
For the heat PDE,
$$u^{l+1} = u^l + \tfrac{1}{2}\Delta t\left[(L + A)\left(u^l + u^{l+1}\right) + 2b\right]$$
$$\left(I - \tfrac{1}{2}\Delta t\, (L + A)\right) u^{l+1} = \left(I + \tfrac{1}{2}\Delta t\, (L + A)\right) u^l + \Delta t\, b$$
$$u^{l+1} = \left(I - \tfrac{1}{2}\Delta t\, (L + A)\right)^{-1}\left[\left(I + \tfrac{1}{2}\Delta t\, (L + A)\right) u^l + \Delta t\, b\right]$$
where for Dirichlet conditions
$$A = 0, \qquad b = \frac{1}{h^2}\left(u_1 e_1 + u_N e_N\right)$$
and for Neumann conditions
$$A = \frac{1}{h^2}\left(\delta_{1,2} + \delta_{N,N-1}\right), \qquad b = \frac{2}{h}\left((\partial_x u)_N e_N - (\partial_x u)_1 e_1\right)$$

[12] I did this because the vector notation I had been using conflicts with the superscripts.
These may all be naturally generalized to multiple dimensions using the tensor language developed
earlier.
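As a small sketch (my own illustration; the boundary values `u1`, `uN` and fluxes `du1`, `duN` are assumed given), these operators are one-liners in MATLAB:

% Boundary-condition operators for the 1D schemes above (illustrative values)
N = 100;  h = 1/(N-1);
e = @(i) sparse(i,1,1,N,1);
kdelt = @(i,j) sparse(i,j,1,N,N);
% Dirichlet, with boundary values u1 and uN
u1 = 1;  uN = 0;
A_dir = sparse(N,N);                      % A = 0
b_dir = (u1*e(1) + uN*e(N))/h^2;
% Neumann, with boundary fluxes du1 and duN
du1 = 0;  duN = 0;
A_neu = (kdelt(1,2) + kdelt(N,N-1))/h^2;
b_neu = 2/h*(duN*e(N) - du1*e(1));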
Example implementation of the Crank-Nicolson Method Here I will examine usage of the Crank-Nicolson method to solve the minority carrier diffusion equation, which describes the evolution of the concentration of n-type carriers in a p-doped semiconductor. First, we must modify the form of the equation to reflect the new PDE, where $u = \Delta n_p$:
$$\frac{\partial u}{\partial t} = D_N \frac{\partial^2 u}{\partial x^2} - \frac{u}{\tau_n} + G_L$$
In discretized form,
$$D_N \frac{\partial^2 u}{\partial x^2} - \frac{u}{\tau_n} + G_L \rightarrow D_N L u - \frac{u}{\tau_n} + G_L$$
By applying boundary conditions to the Laplacian and correcting for the $u/\tau_n$ term at the boundary, we have
$$\left[D_N (L + A) - \frac{1}{\tau_n}\big(I - (\delta_{1,1} + \delta_{N,N})\big)\right] u + (D_N b + G_L)$$
Accordingly, the implementation of the Crank-Nicolson method has the following structure:
$$M = D_N (L + A) - \frac{1}{\tau_n}\big(I - (\delta_{1,1} + \delta_{N,N})\big)$$
$$u^{l+1} = \left(I - \tfrac{1}{2}\Delta t\, M\right)^{-1}\left[\left(I + \tfrac{1}{2}\Delta t\, M\right) u^l + \Delta t\, (D_N b + G_L)\right]$$
We will now solve this numerically as an equation of evolution for the following conditions, as implemented in the code below ($H(x)$ is the Heaviside function):
$$x \in [0, 5], \quad D_N = 10, \quad \tau_n = 0.1$$
$$t \in [0, 0.3], \quad G_L = 0.1$$
$$u_0(x) = H\!\left(\tfrac{5}{2} - x\right), \quad u(0) = 1, \quad u(5) = 0.01$$
%% Crank-Nicolson Minority Carrier Demo

%% Create functions for Kronecker delta and unit vector
% Kronecker delta
kdelt = @(i,j,N) sparse(i,j,1,N,N);

% Unit vector
e = @(i,N) sparse(i,1,1,N,1);

%% Set problem parameters
xLen = 5;  % Length in x direction
N = 50;    % Number of points for x
u_p = 0;
D_N = 10;
tau = .1;
G_L = 0.1;
t_max = .3;

% Create grid
x = linspace(0,xLen,N)';
h = x(2) - x(1);

% Define time step size
dt = h^2/6;

% Define the Laplacian
Lx = 1/h^2 * sparse(toeplitz([-2 1 zeros(1,N-2)]));

% Implementation of Neumann conditions on both sides
%A = 1/h^2 * (kdelt(1,2,N) + kdelt(N,N-1,N));
A = zeros(N);
%b = 2/h * (u_p * e(N,N) - u_p * e(1,N));
b = 1/h^2*(1*e(1,N) + .01*e(N,N));

% Derivative matrix
M = D_N * (Lx + A) - 1/tau*(eye(N) - (kdelt(1,2,N) + kdelt(N,N-1,N)));

% Time vector
t = 0:dt:t_max;

% Initial condition
u0 = @(x) 1*(x < xLen/2);
% u0 = @(x) 1 + x*0;

u = zeros(N,length(t));
u(:,1) = u0(x);

for i = 2:length(t)
    uTmp = (eye(N) + 0.5*dt*M)*u(:,i-1) + dt*(D_N*b + G_L);
    u(:,i) = (eye(N) - 0.5*dt*M)\uTmp;
end

figure, hold on
colormap hot
color = hot(length(t));
for i = 1:5:length(t)
    plot(x,u(:,i),'LineWidth',2,'Color',color(length(t)+1-i,:))
end
colorbar('TickLabels',linspace(t_max,0,6),...
    'Ticks',(0:5)/5,'Direction','reverse')
xlabel('x')
ylabel('Delta n_p')
set(gca,'Color',.5*[1 1 1])

Figure 9: Minority carrier distribution as a function of time, solved with the Crank-Nicolson method.
The finite element method is one of the most powerful numerical methods for solving PDEs. Not only does it allow for finer control over the discretization of the domain, but it also accommodates highly irregular geometries, such as those seen in most engineering applications. It is, however, much more mathematically complex than the finite difference method.
6.1 General Approach
The application of the finite element method is usually divided into six steps:
1. Discretization of the domain
2. Generation of element equations
3. Assembly of system
4. Application of boundary conditions to system
5. Solution of the system
6. Post-processing
Discretization As was the case with the finite-difference methods, it is important to discretize the domain such that the eventual system of equations will be finite in size and thereby solvable using a numerical method. The simplest and most easily computed methods typically employ objects with planar geometry:
• 1D: Lines
• 2D: Quadrilaterals, triangles
• 3D: Parallelepipeds, triangular prisms
Shared vertices between these objects are referred to as nodes, and the planes that separate them
are called nodal planes.
Development of Equations After a choice of element is made, elements are drawn onto the system such that they form a kind of mesh. For piecewise polynomial discretizations, interpolating polynomials are found as a linear combination of $d + 1$ (where $d \equiv \dim$) Lagrange polynomials $N_i$ of degree $n$, with $n = 1$ for piecewise planar/linear elements:
$$u = \sum_{i=1}^{d+1} u_i N_i$$
It should be noted that for nonlinear interpolating polynomials, especially in higher dimensions, this becomes much more complex.
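A minimal sketch (my own illustration) of the two linear shape functions on a 1D element $[x_1, x_2]$, which satisfy $N_i(x_j) = \delta_{ij}$ and sum to one everywhere on the element:

% Linear (n = 1) shape functions on a 1D element [x1, x2] (illustrative values)
x1 = 0;  x2 = 0.5;  hEl = x2 - x1;
N1 = @(x) (x2 - x)/hEl;        % equals 1 at x1, 0 at x2
N2 = @(x) (x - x1)/hEl;        % equals 0 at x1, 1 at x2
u1 = 2;  u2 = 3;               % nodal values
u = @(x) u1*N1(x) + u2*N2(x);  % linear interpolant through (x1,u1) and (x2,u2)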
Assembly and Application of Boundary Conditions We aim to write the above as a system of equations of the form
$$A\mathbf{u} = \mathbf{b}$$
where $A$ is the assemblage property matrix and $\mathbf{b}$ is the vector representing outside forces. As was done in the finite-difference section, we then take this system of equations and modify it to apply the boundary conditions.
Solution and Postprocessing Ideally, the system may be assembled such that it is linear and may be solved with an appropriate method. The results may then be examined as raw data output or visualized.
6.2 One-Dimensional Application to Elliptic ODEs
Here, I will develop the steady-state solution of the 1D minority carrier equation using finite elements:
$$0 = D_N \frac{d^2 u}{dx^2} - \frac{u}{\tau_n} + G_L$$
Discretization For demonstrative purposes, I will be using a random discretization.
Development of Equations Mirroring the derivation in Chapra for Poisson's equation, we begin by asserting that an approximate solution $\tilde{u}$ leaves behind a residual $R$:
$$R = D_N \frac{d^2 \tilde{u}}{dx^2} - \frac{\tilde{u}}{\tau_n} + G_L$$
Using Galerkin's method, we then opt to constrain the parameter space by minimization of the residual weighted by the interpolating basis functions:
$$\int_D R\, N_i\, dx = 0$$
where $D = [x_a, x_b]$ is the subinterval under consideration. By substitution of the residual, we factor this into three separate integrals:
$$\int_D \left(D_N \frac{d^2 \tilde{u}}{dx^2} - \frac{\tilde{u}}{\tau_n} + G_L\right) N_i\, dx = D_N \int_D \frac{d^2 \tilde{u}}{dx^2} N_i\, dx - \frac{1}{\tau_n} \int_D \tilde{u} N_i\, dx + G_L \int_D N_i\, dx$$
We first direct our attention to the integral of the second derivative. By integration by parts, we find that
$$\int_D \frac{d^2 \tilde{u}}{dx^2} N_i\, dx = \left[N_i(x) \frac{d\tilde{u}}{dx}\right]_D - \int_D \frac{d\tilde{u}}{dx} \frac{dN_i}{dx}\, dx$$
where, per the definition of $N_i$,
$$\left[N_i(x) \frac{d\tilde{u}}{dx}\right]_D = \begin{cases} -\dfrac{d\tilde{u}(x_1)}{dx}, & i = 1 \\[2mm] \dfrac{d\tilde{u}(x_2)}{dx}, & i = 2 \end{cases}$$
Recognizing the linear shape of $N_i$, we may also assert that
$$\int_{x_1}^{x_2} \frac{d\tilde{u}}{dx} \frac{dN_i}{dx}\, dx = \frac{(-1)^i}{h}(-u_1 + u_2), \qquad i \in \{1, 2\}$$
Moving on to something that was not done in Chapra, we now examine the middle term. Noting that $\tilde{u} = u_1 N_1(x) + u_2 N_2(x)$, we rewrite the integral as
$$\int_D \tilde{u} N_i\, dx = u_1 \int_D N_1 N_i\, dx + u_2 \int_D N_2 N_i\, dx$$
where, with $h = x_b - x_a$,
$$\int_{x_a}^{x_b} N_i N_j\, dx = \frac{h}{6}(1 + \delta_{ij}), \qquad i, j \in \{1, 2\}$$
and therefore
$$\int_D \tilde{u} N_i\, dx = \frac{h}{6}\begin{cases} 2u_1 + u_2, & i = 1 \\ u_1 + 2u_2, & i = 2 \end{cases}$$
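As a quick check of the mass-matrix identity above (an added step), with $N_1 = (x_b - x)/h$ and $N_2 = (x - x_a)/h$:
$$\int_{x_a}^{x_b} N_1^2\, dx = \frac{1}{h^2}\int_0^h s^2\, ds = \frac{h}{3}, \qquad \int_{x_a}^{x_b} N_1 N_2\, dx = \frac{1}{h^2}\int_0^h s(h - s)\, ds = \frac{h}{6},$$
matching $\frac{h}{6}(1 + \delta_{ij})$ in both cases.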
The final integrand is simply evaluated:
$$\int_D N_i\, dx = \frac{1}{2} h$$
The system of equations governing the element may then be assembled:
$$0 = D_N \left(\begin{bmatrix} -\dfrac{d\tilde{u}(x_1)}{dx} \\[2mm] \dfrac{d\tilde{u}(x_2)}{dx} \end{bmatrix} - \frac{1}{h}\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}\begin{bmatrix} u_1 \\ u_2 \end{bmatrix}\right) - \frac{1}{\tau_n} \frac{h}{6}\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} + G_L \frac{h}{2}\begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
and rewritten as an inhomogeneous linear matrix-vector equation for the element:
$$\left(\frac{D_N}{h}\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} + \frac{h}{6\tau_n}\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}\right)\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = D_N \begin{bmatrix} -\dfrac{d\tilde{u}(x_1)}{dx} \\[2mm] \dfrac{d\tilde{u}(x_2)}{dx} \end{bmatrix} + \frac{h}{2}\begin{bmatrix} G_L \\ G_L \end{bmatrix}$$
Assembly At this point, we consider the containing system. To begin, we rewrite the last equation to treat it as the $i$th element in the system, strung between points $i$ and $i + 1$:
$$\left(\frac{D_N}{h_i}\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} + \frac{h_i}{6\tau_n}\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}\right)\begin{bmatrix} u_i \\ u_{i+1} \end{bmatrix} = D_N \begin{bmatrix} -\dfrac{d\tilde{u}(x_i)}{dx} \\[2mm] \dfrac{d\tilde{u}(x_{i+1})}{dx} \end{bmatrix} + \frac{h_i}{2}\begin{bmatrix} G_L \\ G_L \end{bmatrix}$$
To assemble the system, we first invoke the following shorthand:
$$A_i = \frac{D_N}{h_i}\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} + \frac{h_i}{6\tau_n}\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}, \qquad b_i = k_i + \frac{h_i}{2}\begin{bmatrix} G_L \\ G_L \end{bmatrix}$$
where $k_i$ denotes a variable argument that is only nonzero for $i = 1$ and $i = N - 1$, due to the cancellation of boundary conditions at interior nodes. Next, we recognize that a system with $N - 1$ elements will consist of $N$ points. To generate the full system $A\mathbf{u} = \mathbf{b}$, we start with $A$ and $\mathbf{b}$ as an $N \times N$ zero matrix and an $N$-dimensional zero vector, respectively. Then, we invoke the following algorithm, iterating over $i \in \{k \in \mathbb{Z}^+ : k \le N - 1\}$, where subscript bracket notation denotes sub-matrix indexing:
$$A_{[i,i+1],[i,i+1]} = A_{[i,i+1],[i,i+1]} + A_i$$
$$b_{[i,i+1]} = b_{[i,i+1]} + b_i$$
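In MATLAB, the bracket notation above maps directly onto sub-matrix indexing. A minimal sketch of the bare assembly loop (my own stripped-down version of the full listing below; boundary terms omitted, and `x`, `N`, `D_N`, `tau`, `G_L` assumed defined):

% Bare assembly loop for N points / N-1 elements (boundary terms omitted)
AA = zeros(N);  bb = zeros(N,1);
for i = 1:(N-1)
    hi = x(i+1) - x(i);                                % element length
    Ai = D_N/hi*[1 -1; -1 1] + hi/(6*tau)*[2 1; 1 2];  % element matrix
    bi = hi/2*G_L*[1; 1];                              % element load vector
    AA([i i+1],[i i+1]) = AA([i i+1],[i i+1]) + Ai;    % scatter into system
    bb([i i+1]) = bb([i i+1]) + bi;
end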
Figure 10: Solution by the finite element method at increasingly finer random discretizations (N = 5, 10, 15, 20, 25).
Boundary Conditions By inspection, as with Chapra's example, the inner boundary conditions cancel, and thus only the external boundary conditions need to be applied in the case of Neumann conditions. For Dirichlet conditions, there is a modification at the endpoints such that the $k_i$ term is generated by a central difference:
$$b_1 = \frac{D_N}{2h_1}\begin{bmatrix} 0 & -1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} + \frac{D_N}{2h_1}\begin{bmatrix} u_a \\ 0 \end{bmatrix}$$
$$b_{N-1} = \frac{D_N}{2h_{N-1}}\begin{bmatrix} 0 & 0 \\ -1 & 0 \end{bmatrix}\begin{bmatrix} u_{N-1} \\ u_N \end{bmatrix} + \frac{D_N}{2h_{N-1}}\begin{bmatrix} 0 \\ u_b \end{bmatrix}$$
Evidently, this contributes a term that must modify $A_1$ and $A_{N-1}$.
Solution By numerical solution of the assembled system, selecting random discretizations with $N \in \{5, 10, 15, 20, 25\}$ points, we find that the solution quickly converges to the steady state found by the Crank-Nicolson scheme after sufficient time has passed. The following code implements this solution.
%% Finite Element Solution of Minority Carrier Equation

%% Create functions for Kronecker delta and unit vector
% Kronecker delta
kdelt = @(i,j,N) sparse(i,j,1,N,N);

% Unit vector
e = @(i,N) sparse(i,1,1,N,1);

%% Set problem parameters
xLen = 5;  % Length in x direction
N = 5;     % Number of points for x
D_N = 10;
tau = .1;
G_L = 0.1;
t_max = .3;
ua = 1;
ub = 0.01;

figure, hold on
colormap hot
color = hot(5);

NN = 5:5:25;
for j = 1:length(NN)
    N = NN(j);

    % Create random grid for inside points
    x_ins = sort(rand(N,1))*xLen;
    x = [0; x_ins; xLen];

    % Define functions
    h = @(i) x_ins(i+1) - x_ins(i);
    A = @(i) D_N/h(i) * [1 -1; -1 1] + h(i)/(6*tau) * [2 1 ; 1 2];
    b = @(i) h(i)/2 * G_L*[1;1];

    AA = zeros(N);
    bb = zeros(N,1);
    for i = 1:(N-1)
        % Determine applicability of and apply boundary conditions at point
        AAbcs = zeros(2);
        bbbcs = zeros(2,1);
        if i == 1
            AAbcs = -D_N/(2*h(i))*[0 -1; 0 0];
            bbbcs = D_N/(2*h(i))*[ua; 0];
        elseif i == N-1
            AAbcs = -D_N/(2*h(i))*[0 0; -1 0];
            bbbcs = D_N/(2*h(i))*[0; ub];
        end

        % Apply ith element to system matrix
        AA([i i+1],[i i+1]) = AA([i i+1],[i i+1]) + A(i) + AAbcs;
        bb([i i+1]) = bb([i i+1]) + b(i) + bbbcs;
    end
    u = AA\bb;
    plot(x,[ua;u;ub],'-o','LineWidth',2,'Color',color(length(NN)+1-j,:))
end
colorbar('TickLabels',linspace(25,0,5),...
    'Ticks',(0:4)/4,'Direction','reverse')
xlabel('x')
ylabel('Delta n_p')
set(gca,'Color',.5*[1 1 1])
6.3 Finite Element Methods in Higher Dimensions
Higher-dimensional finite-element methods may be devised in a manner very similar to the one-dimensional case; however, doing so quickly becomes much more complex. Chapra provides a nice introduction to the derivation of the element equations, but truly understanding the underlying mathematics requires considerably more mathematical sophistication than that assumed in Chapra.
7 Afterword
I would like to thank Dr. Gary Litton for allowing me the option to complete this report. Along the way, I have strengthened my numerical abilities considerably and made concrete some things that I had been starting to forget. I also learned a few more methods of solving ODEs than I had known in the past, and began to see a deeper application of tensor analysis, a field that I had not previously expected to intersect so strongly with numerical differential equations. I have also begun to be able to formulate solution methods creatively, having been afforded the opportunity to think critically about each method and how they are interrelated. Truly, I feel that I now better understand numerical methods and their application to the solution of differential equations.
53

Weitere ähnliche Inhalte

Was ist angesagt?

Continutiy of Functions.ppt
Continutiy of Functions.pptContinutiy of Functions.ppt
Continutiy of Functions.pptLadallaRajKumar
 
A brief introduction to finite difference method
A brief introduction to finite difference methodA brief introduction to finite difference method
A brief introduction to finite difference methodPrateek Jha
 
applications of first order non linear partial differential equation
applications of first order non linear partial differential equationapplications of first order non linear partial differential equation
applications of first order non linear partial differential equationDhananjaysinh Jhala
 
ADVANCED OPTIMIZATION TECHNIQUES META-HEURISTIC ALGORITHMS FOR ENGINEERING AP...
ADVANCED OPTIMIZATION TECHNIQUES META-HEURISTIC ALGORITHMS FOR ENGINEERING AP...ADVANCED OPTIMIZATION TECHNIQUES META-HEURISTIC ALGORITHMS FOR ENGINEERING AP...
ADVANCED OPTIMIZATION TECHNIQUES META-HEURISTIC ALGORITHMS FOR ENGINEERING AP...Ajay Kumar
 
Second order homogeneous linear differential equations
Second order homogeneous linear differential equations Second order homogeneous linear differential equations
Second order homogeneous linear differential equations Viraj Patel
 
Presentation on Numerical Method (Trapezoidal Method)
Presentation on Numerical Method (Trapezoidal Method)Presentation on Numerical Method (Trapezoidal Method)
Presentation on Numerical Method (Trapezoidal Method)Syed Ahmed Zaki
 
Introduction to optimization technique
Introduction to optimization techniqueIntroduction to optimization technique
Introduction to optimization techniqueKAMINISINGH963
 
Quadratic Programming : KKT conditions with inequality constraints
Quadratic Programming : KKT conditions with inequality constraintsQuadratic Programming : KKT conditions with inequality constraints
Quadratic Programming : KKT conditions with inequality constraintsMrinmoy Majumder
 
Numerical solution of ordinary differential equations GTU CVNM PPT
Numerical solution of ordinary differential equations GTU CVNM PPTNumerical solution of ordinary differential equations GTU CVNM PPT
Numerical solution of ordinary differential equations GTU CVNM PPTPanchal Anand
 
Solution of non-linear equations
Solution of non-linear equationsSolution of non-linear equations
Solution of non-linear equationsZunAib Ali
 
Differential Equations
Differential EquationsDifferential Equations
Differential EquationsKrupaSuthar3
 
Numerical Analysis (Solution of Non-Linear Equations) part 2
Numerical Analysis (Solution of Non-Linear Equations) part 2Numerical Analysis (Solution of Non-Linear Equations) part 2
Numerical Analysis (Solution of Non-Linear Equations) part 2Asad Ali
 
Linear Algebra Applications
Linear Algebra ApplicationsLinear Algebra Applications
Linear Algebra ApplicationsRamesh Shashank
 
Differential equations
Differential equationsDifferential equations
Differential equationsUzair Saiyed
 
Lagrange's method
Lagrange's methodLagrange's method
Lagrange's methodKarnav Rana
 

Was ist angesagt? (20)

Unit vi
Unit viUnit vi
Unit vi
 
Continutiy of Functions.ppt
Continutiy of Functions.pptContinutiy of Functions.ppt
Continutiy of Functions.ppt
 
A brief introduction to finite difference method
A brief introduction to finite difference methodA brief introduction to finite difference method
A brief introduction to finite difference method
 
applications of first order non linear partial differential equation
applications of first order non linear partial differential equationapplications of first order non linear partial differential equation
applications of first order non linear partial differential equation
 
ADVANCED OPTIMIZATION TECHNIQUES META-HEURISTIC ALGORITHMS FOR ENGINEERING AP...
ADVANCED OPTIMIZATION TECHNIQUES META-HEURISTIC ALGORITHMS FOR ENGINEERING AP...ADVANCED OPTIMIZATION TECHNIQUES META-HEURISTIC ALGORITHMS FOR ENGINEERING AP...
ADVANCED OPTIMIZATION TECHNIQUES META-HEURISTIC ALGORITHMS FOR ENGINEERING AP...
 
Second order homogeneous linear differential equations
Second order homogeneous linear differential equations Second order homogeneous linear differential equations
Second order homogeneous linear differential equations
 
Euler's and picard's
Euler's and picard'sEuler's and picard's
Euler's and picard's
 
Presentation on Numerical Method (Trapezoidal Method)
Presentation on Numerical Method (Trapezoidal Method)Presentation on Numerical Method (Trapezoidal Method)
Presentation on Numerical Method (Trapezoidal Method)
 
Introduction to optimization technique
Introduction to optimization techniqueIntroduction to optimization technique
Introduction to optimization technique
 
Fourier series 2
Fourier series 2Fourier series 2
Fourier series 2
 
Quadratic Programming : KKT conditions with inequality constraints
Quadratic Programming : KKT conditions with inequality constraintsQuadratic Programming : KKT conditions with inequality constraints
Quadratic Programming : KKT conditions with inequality constraints
 
Numerical solution of ordinary differential equations GTU CVNM PPT
Numerical solution of ordinary differential equations GTU CVNM PPTNumerical solution of ordinary differential equations GTU CVNM PPT
Numerical solution of ordinary differential equations GTU CVNM PPT
 
Introduction of Partial Differential Equations
Introduction of Partial Differential EquationsIntroduction of Partial Differential Equations
Introduction of Partial Differential Equations
 
Solution of non-linear equations
Solution of non-linear equationsSolution of non-linear equations
Solution of non-linear equations
 
Introduction to optimization Problems
Introduction to optimization ProblemsIntroduction to optimization Problems
Introduction to optimization Problems
 
Differential Equations
Differential EquationsDifferential Equations
Differential Equations
 
Numerical Analysis (Solution of Non-Linear Equations) part 2
Numerical Analysis (Solution of Non-Linear Equations) part 2Numerical Analysis (Solution of Non-Linear Equations) part 2
Numerical Analysis (Solution of Non-Linear Equations) part 2
 
Linear Algebra Applications
Linear Algebra ApplicationsLinear Algebra Applications
Linear Algebra Applications
 
Differential equations
Differential equationsDifferential equations
Differential equations
 
Lagrange's method
Lagrange's methodLagrange's method
Lagrange's method
 

Ähnlich wie On the Numerical Solution of Differential Equations

Compiled Report
Compiled ReportCompiled Report
Compiled ReportSam McStay
 
Introduction to Computational Mathematics (2nd Edition, 2015)
Introduction to Computational Mathematics (2nd Edition, 2015)Introduction to Computational Mathematics (2nd Edition, 2015)
Introduction to Computational Mathematics (2nd Edition, 2015)Xin-She Yang
 
On Inexact Newton Directions in Interior Point Methods for Linear Optimization
On Inexact Newton Directions in Interior Point Methods for Linear OptimizationOn Inexact Newton Directions in Interior Point Methods for Linear Optimization
On Inexact Newton Directions in Interior Point Methods for Linear OptimizationSSA KPI
 
Fundamentals of computational fluid dynamics
Fundamentals of computational fluid dynamicsFundamentals of computational fluid dynamics
Fundamentals of computational fluid dynamicsAghilesh V
 
Introduction to the Finite Element Method
Introduction to the Finite Element MethodIntroduction to the Finite Element Method
Introduction to the Finite Element MethodMohammad Tawfik
 
Algorithms for Reinforcement Learning
Algorithms for Reinforcement LearningAlgorithms for Reinforcement Learning
Algorithms for Reinforcement Learningmustafa sarac
 
MSc Thesis_Francisco Franco_A New Interpolation Approach for Linearly Constra...
MSc Thesis_Francisco Franco_A New Interpolation Approach for Linearly Constra...MSc Thesis_Francisco Franco_A New Interpolation Approach for Linearly Constra...
MSc Thesis_Francisco Franco_A New Interpolation Approach for Linearly Constra...Francisco Javier Franco Espinoza
 
A Graph Theoretic Approach To Matrix Functions And Quantum Dynamics (PhD Thesis)
A Graph Theoretic Approach To Matrix Functions And Quantum Dynamics (PhD Thesis)A Graph Theoretic Approach To Matrix Functions And Quantum Dynamics (PhD Thesis)
A Graph Theoretic Approach To Matrix Functions And Quantum Dynamics (PhD Thesis)Amy Cernava
 
Seminar- Robust Regression Methods
Seminar- Robust Regression MethodsSeminar- Robust Regression Methods
Seminar- Robust Regression MethodsSumon Sdb
 
Algorithmic Mathematics.
Algorithmic Mathematics.Algorithmic Mathematics.
Algorithmic Mathematics.Dr. Volkan OBAN
 
NOVEL NUMERICAL PROCEDURES FOR LIMIT ANALYSIS OF STRUCTURES: MESH-FREE METHODS
NOVEL NUMERICAL PROCEDURES FOR LIMIT ANALYSIS OF STRUCTURES: MESH-FREE METHODSNOVEL NUMERICAL PROCEDURES FOR LIMIT ANALYSIS OF STRUCTURES: MESH-FREE METHODS
NOVEL NUMERICAL PROCEDURES FOR LIMIT ANALYSIS OF STRUCTURES: MESH-FREE METHODSCanh Le
 

Ähnlich wie On the Numerical Solution of Differential Equations (20)

Compiled Report
Compiled ReportCompiled Report
Compiled Report
 
Introduction to Computational Mathematics (2nd Edition, 2015)
Introduction to Computational Mathematics (2nd Edition, 2015)Introduction to Computational Mathematics (2nd Edition, 2015)
Introduction to Computational Mathematics (2nd Edition, 2015)
 
t
tt
t
 
On Inexact Newton Directions in Interior Point Methods for Linear Optimization
On Inexact Newton Directions in Interior Point Methods for Linear OptimizationOn Inexact Newton Directions in Interior Point Methods for Linear Optimization
On Inexact Newton Directions in Interior Point Methods for Linear Optimization
 
Discontinuous Galerkin Timestepping for Nonlinear Parabolic Problems
Discontinuous Galerkin Timestepping for Nonlinear Parabolic ProblemsDiscontinuous Galerkin Timestepping for Nonlinear Parabolic Problems
Discontinuous Galerkin Timestepping for Nonlinear Parabolic Problems
 
Fundamentals of computational fluid dynamics
Fundamentals of computational fluid dynamicsFundamentals of computational fluid dynamics
Fundamentals of computational fluid dynamics
 
Introduction to the Finite Element Method
Introduction to the Finite Element MethodIntroduction to the Finite Element Method
Introduction to the Finite Element Method
 
Thesis lebanon
Thesis lebanonThesis lebanon
Thesis lebanon
 
Algorithms for Reinforcement Learning
Algorithms for Reinforcement LearningAlgorithms for Reinforcement Learning
Algorithms for Reinforcement Learning
 
MSc Thesis_Francisco Franco_A New Interpolation Approach for Linearly Constra...
MSc Thesis_Francisco Franco_A New Interpolation Approach for Linearly Constra...MSc Thesis_Francisco Franco_A New Interpolation Approach for Linearly Constra...
MSc Thesis_Francisco Franco_A New Interpolation Approach for Linearly Constra...
 
Barret templates
Barret templatesBarret templates
Barret templates
 
Applied Math
Applied MathApplied Math
Applied Math
 
MScThesis1
MScThesis1MScThesis1
MScThesis1
 
Thesis_JR
Thesis_JRThesis_JR
Thesis_JR
 
tamuthesis
tamuthesistamuthesis
tamuthesis
 
Report_Final
Report_FinalReport_Final
Report_Final
 
A Graph Theoretic Approach To Matrix Functions And Quantum Dynamics (PhD Thesis)
A Graph Theoretic Approach To Matrix Functions And Quantum Dynamics (PhD Thesis)A Graph Theoretic Approach To Matrix Functions And Quantum Dynamics (PhD Thesis)
A Graph Theoretic Approach To Matrix Functions And Quantum Dynamics (PhD Thesis)
 
Seminar- Robust Regression Methods
Seminar- Robust Regression MethodsSeminar- Robust Regression Methods
Seminar- Robust Regression Methods
 
Algorithmic Mathematics.
Algorithmic Mathematics.Algorithmic Mathematics.
Algorithmic Mathematics.
 
NOVEL NUMERICAL PROCEDURES FOR LIMIT ANALYSIS OF STRUCTURES: MESH-FREE METHODS
NOVEL NUMERICAL PROCEDURES FOR LIMIT ANALYSIS OF STRUCTURES: MESH-FREE METHODSNOVEL NUMERICAL PROCEDURES FOR LIMIT ANALYSIS OF STRUCTURES: MESH-FREE METHODS
NOVEL NUMERICAL PROCEDURES FOR LIMIT ANALYSIS OF STRUCTURES: MESH-FREE METHODS
 

Kürzlich hochgeladen

Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Call Girls in Nagpur High Profile
 
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...ranjana rawat
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxupamatechverse
 
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Bookingdharasingh5698
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxupamatechverse
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxAsutosh Ranjan
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)simmis5
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxpranjaldaimarysona
 
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxBSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxfenichawla
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingrknatarajan
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Dr.Costas Sachpazis
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdfKamal Acharya
 

Kürzlich hochgeladen (20)

Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
 
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptx
 
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
 
Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptx
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptx
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
 
Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
 
Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptx
 
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxBSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
 
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
 
Roadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and RoutesRoadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and Routes
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdf
 

On the Numerical Solution of Differential Equations

  • 1. On the Numerical Solution of Differential Equations Kyle Poe, University of the Pacific January 7, 2019 This report is written to fulfill the requirements of ENGR 219, Numerical Methods
  • 2. Contents 1 Introduction 3 2 The Runge-Kutta Method of Solution to Ordinary Differential Equations 4 2.1 Example solution of a system of differential equations using the classical RK4 method (Problem 25.26) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 3 Stiffness and Multistep Methods 9 3.1 Stiffness and Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 3.2 Multistep Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 3.3 Error Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 3.4 Variable Step-Size ODE Solvers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 4 Boundary Value Problems, Finite Differences, and Eigenvalues 16 4.1 Boundary Value Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 4.1.1 The Shooting Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 4.2 Finite-Difference Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 4.3 Eigenvalue Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 4.3.1 An exposition on linear algebra . . . . . . . . . . . . . . . . . . . . . . . . . 21 4.4 Eigenvalue Solution Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 4.4.1 The Polynomial Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 4.4.2 The Power Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 4.4.3 The QR Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 4.5 MATLAB Solvers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 5 Partial Differential Equations: The Finite Difference Approach 29 5.1 Application of Finite Differences to PDEs . . . . . . . . . . . . . . . . . . . . . . . 29 5.2 Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 5.2.1 Dirichlet Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 5.2.2 Neumann Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 5.2.3 The Control-Volume Approach . . . . . . . . . . . . . . . . . . . . . . . . . 35 5.3 Partial Discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 5.3.1 Parabolic PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 5.3.2 The Eigenvalue Method for Time-Dependent Partially-Discretized PDEs . . 37 5.3.3 Implementation of Boundary Conditions . . . . . . . . . . . . . . . . . . . . 38 5.3.4 Hyperbolic PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 5.4 Full Discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 5.4.1 Explicit Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 5.4.2 Implicit Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 6 The Finite Element Method 48 6.1 General Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 6.2 One-Dimensional Application to Elliptic ODEs . . . . . . . . . . . . . . . . . . . . . 49 6.3 Finite Element Methods in Higher Dimensions . . . . . . . . . . . . . . . . . . . . . 53 7 Afterword 53 2
  • 3. 1 Introduction In this report, I will address the topics broached in ENGR 219 as listed on the syllabus, and discuss each topic, and how they may be applied to the solution of problems in semiconductor physics and engineering, where applicable, starting with the development and justification of the Runge-Kutta method. As a general rule, I try to develop things as mathematically consistently as I can, and use big-O notation and tensor operators as well as other mathematical objects where applicable. I pay particular attention to the development of the finite element method, the theory of eigenvalues and eigenvectors, and the generalization of finite-element schemes to N dimensional systems, although I only treat up to 2-dimensional systems here. The general outline of this work is based upon the exposition in Chapra and Canale’s Numerical Methods for Engineers, and I reference it frequently as Chapra’s book or simply Chapra. It is all too likely that I may eventually return to this report and adapt it into a crash course on numerical methods, as while I based the contents of this report on Chapra’s book, I found that my philosophy differed from his in a few key respects with regard to the development of the methods contained within, and I did my best to develop things based on my understanding of the mathematical subtleties not treated in his work (Which is not to say that I feel prepared to undertake this kind of journey as an undergraduate!). As a disclaimer, I do my best to reference outside work in my derivations and give credit to Chapra where necessary. Nor is this intended to be a comprehensive discussion of numerical methods, as many steps that would be useful in pedagogical applications are omitted or treated with haste. In the unlikely event that someone other than Dr. Gary Litton stumbles across this report, then, please understand that this was written under an intense time crunch and there are likely to be mistakes. All the same, I hope you find it to be interesting. Honor Code Statement In accordance with University of the Pacific’s honor code, I certify that this is my own work, and I do my best to attribute appropriate authorship to all works ref- erenced within. Furthermore, this work was performed independently, without outside assistance. Kyle Poe, 2018 3
  • 4. 2 The Runge-Kutta Method of Solution to Ordinary Dif- ferential Equations One of the most fundamental problems in engineering is the solution to ordinary differential equa- tions (ODEs) without simple closed-form solutions. Often, it is far more efficient to pursue an approximation using numerical methods than to attempt an exact solution. The go-to class of solution approaches for these equations are termed the Runge-Kutta (RK) methods, named af- ter German mathematicians Carl Runge and Martin Kutta. These methods are termed iterative methods, as they take the form yi+1 = yi + f(xi, yi, h) and thus iteration i + 1 depends on the previous iteration i. The function which relates each step is of the form f(x, y, h) = φ(x, y, h) · h, where h = ∆x is some small perturbation of the system’s independent variable x. Optimally, φ(x, y, h) is defined such that f is the difference between adjacent points yi, yi+1. • Where y : R → R is a univariate scalar valued function, φ(x, y, h) ≈ 1 h x+h x y dx where y = dy dx denotes the derivative. • Where y : Rn → R is a scalar valued functional, φ(x, y, h) ≈ 1 h x+h x y · dα dx dx where y denotes the gradient and α ∈ Rn denotes the vector of parameters α1(x), α2(x), · · · • Where y : Rn → Rm is a vector field, φ(x, y, h) ≈ 1 h x+h x D(y) dα dx dx where D(y) denotes the Jacobian matrix of y. The RK methods are defined such that this function φ may be written φ = N i=1 aiki for an Nth order method, where a ∈ RN . The ki’s are related by the recurrence relation k1 = f(xi, yi), kn = f(xi + pn−1h, yi + n−1 j=1 qn−1,jkjh) where f = dy dx , p ∈ RN−1 , and q ∈ RN−1 ×RN−1 and is lower triangular. This recurrence relationship makes RK methods very computationally efficient, as each ki only needs to be calculated once and stored per step. As the actual coefficients involved are derived from Nth degree Taylor 4
  • 5. approximations, the local error for an Nth degree method is O(hN+1 ) and the global error is O(hN ). To compute the solution to systems of equations using RK methods, define f(x, y) =      g1(x, y) g2(x, y) ... gn(x, y)      Well known examples of RK methods include the following Name Order a p q Euler’s Method 1st [1] N/A N/A Heun’s Method 2nd 0.5 0.5 [1] [1] Midpoint Method 2nd 0 1 [0.5] [0.5] Ralston’s Method 2nd 1/3 2/3 [3/4] [3/4] Classical RK4 4th     1/6 1/3 1/3 1/6       1/2 1/2 1     1/2 0 0 0 1/2 0 0 0 1   2.1 Example solution of a system of differential equations using the classical RK4 method (Problem 25.26) From Numerical Methods for Engineers by Chapra, the last problem in chapter 25 details the following coupled differential equations m1 d2 x1 dt2 = m1g + k2 (x2 − x1) − k1x1 m2 d2 x2 dt2 = m2g + k3 (x3 − x2) + k2 (x1 − x2) m3 d2 x3 dt2 = m3g + k3 (x2 − x3) to represent linked bungee jumpers, with parameters m1 = 60kg, m2 = 70kg, m3 = 80kg, k1 = k3 = 50 k2 = 100(N/m), g = 9.81, and initial conditions xi(0) = ˙xi(0) = 0. To solve this, we first rewrite the system as a system of first order differential equations by considering the substitution dxi dt = ˙xi. As it turns out, this is a linear system of equations, so we could fairly easily find an exact solution using linear algebra, but we shall proceed as if it is not to demonstrate the utility of RK methods: d dt         x1 x2 x3 ˙x1 ˙x2 ˙x3         =         ˙x1 ˙x2 ˙x3 (m1g + k2 (x2 − x1) − k1x1)/m1 (m2g + k3 (x3 − x2) + k2(x1 − x2))/m2 (m3g + k3 (x2 − x3))/m3         In matlab, this is implemented using the following code 5
  • 6. 1 % Define parameters 2 m = {60 ,70 ,80}; % Masses 3 k = {50 ,100 ,50}; % Spring constants 4 g = 9.81; % Gravitational constant 5 6 % Coupling differential equations of the system 7 f = @(t,x) [x(4) ; ... 8 x(5) ; ... 9 x(6) ; ... 10 (m{1}*g + k{2}*(x(2)-x(1)) - k{1}*x(1))/m{1} ; ... 11 (m{2}*g + k{3}*(x(3)-x(2)) + k{2}*(x(1) - x(2)))/m{2}; ... 12 (m{3}*g + k{3}*(x(2) - x(3)))/m{3}]; 13 14 % Set time interval of interest 15 tspan = [0 30]; 16 17 % Set initial position vector 18 x0 = zeros (6,1); 19 20 % Solve system 21 [t,x] = RK4_driver(f,tspan ,1000 ,x0); 22 23 function [t,x] = RK4_driver(f,tspan ,N,x0) 24 % This function acts as the main handle to the Runge -Kutta method by: 25 % - Initializing the time vector 26 % - Initializing the position matrix 27 % - Calling the "step" function for each time step 28 29 % Time vector 30 t = linspace(tspan (1),tspan (2),N); 31 32 % Small perturbation of time , "h" 33 dt = t(2)-t(1); 34 35 % Initialization of postiion data matrix 36 x = zeros(length(x0),N); 37 38 % Setting initial value 39 x(:,1) = x0; 40 41 % Run the system 42 for i = 2:N 43 x(:,i) = RK4_step(f,t(i-1),x(:,i-1),dt); % each iteration 44 end 6
  • 7. 45 end 46 47 function [x_new] = RK4_step(f, t, x, h) 48 % This function does the brunt of the work , by setting up parameters unique 49 % to the classical RK4 method and computing each step 50 51 % "k" coefficients 52 a = [1 2 2 1] '/6; 53 54 % Time difference weights 55 p = [0.5 0.5 1]'; 56 57 % k-dependency coefficients 58 Q = diag(p); 59 60 % Initialize k matrix 61 k = []; 62 63 % Perform the step 64 for i = 1:4 65 if i == 1 66 k(:,i) = f(t,x); 67 else 68 k(:,i) = f(t + p(i-1)*h,x + h*k*Q(1:(i-1),i-1)); 69 end 70 end 71 72 % Return new value 73 x_new = x + k*a*h; 74 end As output, this gives us a nice view of the system behavior (open in Adobe Reader for anima- tion) 7
  • 8. Animation of the linked skydivers 8
  • 9. 3 Stiffness and Multistep Methods 3.1 Stiffness and Stability Often when dealing with the numerical solution to differential equations, particular equations will display an extreme sensitivity to the step size h chosen. While there does not seem to be a precise definition of ”stiffness”, the description adopted by Chabra is “A stiff system is one involving rapidly changing components together with slowly changing ones”. components that Chabra al- ludes to are the eigenvalues of the differential equation. While eigenvalues will be discussed more in the next section, suffice to say that if a system has eigenvalues significantly less than zero, or if the ratio between the largest and smallest eigenvalue is large, then the system is said to be stiff. With this said, it leaves the concept unclear for the subject of nonlinear differential equations, although the Lyapunov exponent does seem to hold some answers if one desired to venture substantially further than Chabra’s text. Consider the ordinary differential equation y = −1000y. It is clear by inspection that the only eigenvalue of the system is λ = −1000, for the eigenfunction v = eλt . If Euler’s method is applied in the solution of this equation, then we find yi+1 = yi − 1000yi h = yi(1 − 1000h). More generally, at step n, we have yn = y0(1 − 1000h)n We note here that for a positive h, clearly the change between subsequent steps should decrease in magnitude. Therefore, this method will only be stable for |1 − 1000h| < 1. For more general differential equations with eigenvalue λ, |1 + λh| < 1, λ ∈ C defines the stability domain {hλ} = D for which Euler’s method will be stable, in this case given as a circle in the complex plane of radius 1 centered at λh = −1. For nth order differential equations, it becomes clear that the limiting factor on stability of a method with such a limited stability domain is determined by the most negative eigenvalue for a fixed h > 0. Due to the limited stability domain of Euler’s method, investigation of methods which are better equipped to handle stiff ODEs should be undertaken. Such methods are deemed “A-Stable”, where the stability domain has the subset {hλ ∈ C : Re(λ) < 0} ⊆ D. What this means is that for any choice of h, assuming Re(λ) < 0 (most interesting cases), the method will be stable. One example of such a method is the backwards Euler method, for which we have yi+1 = yi + f(xi+1, yi+1)h =⇒ yi+1(1 − hλ) = yi =⇒ yn = y0 (1 − hλ)n Since hλ < 0, lim n→∞ yn = 0 and thus the method is A-stable.1 3.2 Multistep Methods While many of the Runge-Kutta methods discussed until now are very effective, they do not utilize previous points in their calculation, and thus discard much of the information from the trajectory of the solution. These methods which utilize retrospect to great effect are termed multistep methods; appropriately, as they utilize information from multiple steps in their calculations. 1 For more on this description, visit https://math.oregonstate.edu/ restrepo/475B/Notes/sourcehtml/node17.html 9
Broadly, these methods may be grouped into two categories: Newton-Cotes and Adams methods. Newton-Cotes methods are some of the "most common" methods for solving ODEs, and operate by using information about previous points to estimate the next point, generally through polynomial interpolation. Each comes in two flavors, open and closed: "open" methods use information from the inside of a given interval to estimate the integral over that interval, whereas "closed" methods incorporate information from the boundary, or closure, of the interval.

First, we will discuss a particular open Newton-Cotes formula, the non-self-starting Heun method. As discussed by Chapra, one issue with the Heun method is that the predictor step is based on the Euler method and thus is of $O(h^2)$, while the corrector step is based on the trapezoidal method and is of $O(h^3)$. To increase the efficiency of the method, a midpoint rule may be used instead for the predictor step, by having

$$y_{i+1} = y_{i-1} + 2hf(x_i, y_i).$$

This is the so-called "non-self-starting Heun" method, named for its inability to start itself without some starting value for $y_{i-1}$. As a case study, consider the separable differential equation

$$\frac{dy}{dx} = x - 3.$$

In theory, the original Heun method should exhibit a small degree of error in implementing this, but the non-self-starting method should have zero error if supplied a correct $y_{-1}$, since the Taylor expansion of $dy/dx$ has no terms past $h^2$.

%% Open Newton-Cotes Demonstration
a = @(x,y) x - 3;           % Differential equation
f = @(x) 1/2*x.^2 - 3*x;    % Exact solution

[xSS,ySS] = selfStartingHeun(a,[0 5],0.2,0);
[xNS,yNS] = nonSelfStartingHeun(a,[0 5],0.2,0,f(0.2));

figure, hold on
plot(xSS,f(xSS) - ySS,'LineWidth',2)
plot(xNS,f(xNS) - yNS,'LineWidth',2)
ylim([-0.7 0.1])
legend('Self starting','Non-Self Starting')
xlabel('x')
ylabel('Error')

function [t,x] = selfStartingHeun(a,tspan,h,x0)
t = tspan(1):h:tspan(2);
N = length(t);
x = zeros(1,N);
x(1) = x0;
for i = 2:(N-1)
    xtmp = x(i) + h*a(t(i),x(i));
    x(i+1) = x(i) + h*(a(t(i),x(i)) + a(t(i)+h,xtmp))/2;
end
end

function [t,x] = nonSelfStartingHeun(a,tspan,h,x0,x1)
t = tspan(1):h:tspan(2);
N = length(t);
x = zeros(1,N);
x(1) = x0;
x(2) = x1;
for i = 2:(N-1)
    xtmp = x(i-1) + 2*h*a(t(i),x(i));
    x(i+1) = x(i) + h*(a(t(i),x(i)) + a(t(i)+h,xtmp))/2;
end
end

Figure 1: Error for the self-starting and non-self-starting Heun methods for a simple polynomial differential equation. The non-self-starting method appears to integrate exactly.

The closed Newton-Cotes formulas are based on a closed integral approximation. The book used the trapezoidal/Simpson's 1/3 rule for this, but other more complicated (and perhaps occasionally useful) methods do exist.

Next, we discuss Adams formulas, which do not rest on an integral spanning previous intervals, and are often derived through a forward Taylor expansion about the point of interest. The open variety, often called Adams-Bashforth formulas, may be generally written as

$$y_{i+1} = y_i + h\sum_{k=0}^{n-1} \beta_k f_{i-k} + O(h^{n+1})$$
whereas the Adams-Moulton methods may be written

$$y_{i+1} = y_i + h\sum_{k=0}^{n-1} \beta_k f_{i+1-k} + O(h^{n+1})$$

The concept of stability domains broached in the first part of this section may be applied to all multistep methods. For a more general description of multistep methods than that broached in Chapra, see here.

3.3 Error Estimates

Most multistep methods rely on what may be formulated as predictor and corrector steps. When they have the same order, the local truncation error may be easily estimated by equating each method's error term. For example, for the non-self-starting Heun's predictor and corrector steps, the error is given by

$$E_p = \frac{1}{3}h^3 y'''(\xi_p), \qquad E_c = -\frac{1}{12}h^3 y'''(\xi_c)$$

Asserting $\xi_p = \xi_c = \xi$, we may approximate the local truncation error by equating the exact values²

$$y^m_{i+1} + E_c = y^0_{i+1} + E_p = y^0_{i+1} - 4E_c \implies E_c = \frac{y^0_{i+1} - y^m_{i+1}}{5}$$

We may then design methods which iterate the corrector step until a desired tolerance is reached, using this error estimate as a criterion. Perhaps more commonly, it could be used to indicate when the step size should be altered. Alternatively, it may be used to modify the solution at a point.

3.4 Variable Step-Size ODE Solvers

Often, it is not advisable to create a solver which relies on a fixed step size. This is for several reasons: foremost, without the ability to change the step size, a step size must somehow be chosen with purpose. Often this just defaults to choosing a very small step size to maximize accuracy, but that can come at computational cost. Rather, a solver with variable step size will often be chosen. To decide if the step size should change, a few factors are at play:

• Does the corrector reach the defined tolerance, such that $\left|\frac{y^m_{i+1} - y^{m-1}_{i+1}}{y^{m-1}_{i+1}}\right| < \varepsilon$ within a reasonable number of iterations?

• Is the guess after the defined tolerance is met under the threshold for error $E_{max}$?

• By how much should the step size change?

Adaptive Heun Method

Here I implement an adaptive Heun solver with variable step size:

² The book troubles me here in its derivation. This is not negative, based on the equations it provides.
Figure 2: Comparison of exact solution to adaptive Heun solution of the differential equation $f(t,x) = -x$ with initial condition $x_0 = 10$
%% Adaptive Heun Method Demo

[t,x] = heunMethod(@(t,x) -x,[0 10],10);

function [t,x] = heunMethod(f,tspan,x0)
% Initial time
t = tspan(1);

% Initial position
x = x0;

% Initialization of initial step size
h = dot(tspan,[-1 1])/8;

% Iterate procedure
while t(end) < tspan(2)
    % Ensures that the last time step is correct length
    if (tspan(2) - t(end)) < h
        h = tspan(2) - t(end);
    end

    % Takes a step of the method
    [tNext,xNext] = heunStep(f,t(end),x(end),h);

    % Concatenates output to initial conditions
    x = [x xNext];
    t = [t tNext];
end
end

function [tNext,xNext] = heunStep(f,t,x,h)

% Tolerance of the corrector step
correctorTol = 1e-10;

% Maximum number of iterations of the corrector step
maxCorrector = 100;

% Error tolerance for a single step of the method
stepTol = 1e-5;

% Computes predictor step
xPredict = x + h*f(t,x);
xCorrect = xPredict;

% Iterates corrector
for i = 1:maxCorrector
    xPrev = xCorrect;

    % Computes corrector
    xCorrect = x + h*(f(t,x) + f(t+h,xCorrect))/2;

    % Measures iteration convergence
    E_iter = abs((xCorrect - xPrev)/xPrev);

    % Break when convergence has been achieved
    if E_iter < correctorTol
        break
    elseif i == maxCorrector
        error(['Maximum iteration count for corrector step met, ',...
            sprintf('Iteration error = %.2d',E_iter)])
    end
end

% Measures error of convergent corrector step
E_corrector = (xPredict - xCorrect)/5;

% If the error is within tolerance, accepts the value
if abs(E_corrector) <= stepTol
    tNext = t + h;
    xNext = xCorrect + E_corrector;

% If the error is not within tolerance, recurse with step size h = h/2
else
    [t1,x1] = heunStep(f,t,x,h/2);
    [t2,x2] = heunStep(f,t1(end),x1(end),h/2);
    tNext = [t1, t2];
    xNext = [x1, x2];
end
end
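A short driver of my own along these lines reproduces Figure 2 (the plotting commands are an assumed reconstruction, not the original script):

% Compare the adaptive Heun solution with the exact solution x(t) = 10*exp(-t)
[t,x] = heunMethod(@(t,x) -x,[0 10],10);
plot(t,x,'o',t,10*exp(-t),'-')
legend('Heun','Exact'), xlabel('t'), ylabel('x')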
4 Boundary Value Problems, Finite Differences, and Eigenvalues

4.1 Boundary Value Problems

It is a well known fact that solutions to $n$th order differential equations are uniquely constrained by $n$ parameters. Often, these are initial values; for some second order differential equation, you would have $y(0)$ and $y'(0)$. By contrast, some ODEs are prescribed multiple values at different points. One case of this is the boundary value problem (BVP). Such problems prescribe information about the endpoints, usually of the same degree. Three examples of boundary conditions are:

• Dirichlet Conditions: The explicit values at the endpoints are given. An example would be a rod with fixed temperature at either end.

• Neumann Conditions: Information about the first derivative is given. An example would be a rod insulated at the ends.

• Robin Conditions: A mix of Dirichlet and Neumann. Typically the worst to deal with.

There are a few different ways that these kinds of problems may be solved. First, let us examine the shooting method.

4.1.1 The Shooting Method

At its heart, the shooting method is a recasting of the root-finding problem. Given some information about the solution, i.e. one endpoint, an approximate solution is obtained of the form $y(x,a)$ for a choice of parameter $a$. The task is then to determine the appropriate value of $a$ such that

$$y(x_b, a) - y_b = 0$$

For a linear ODE, this is simply done by making two guesses, and then interpolating to find the correct solution

$$a = \frac{y_b - y(x_b, a_1)}{y(x_b, a_2) - y(x_b, a_1)}(a_2 - a_1) + a_1$$

In practice, this $a$ parameter naturally arises as the constant of integration for a second-order ODE. For nonlinear ODEs, the problem is a little more difficult, but one can look to the root-finding literature: the Newton-Raphson method, the method of bisection, and polynomial interpolation are all viable candidates. Since this is not the focus of this chapter, I will omit an example for the time being.

4.2 Finite-Difference Methods

The first weapon in our arsenal for dealing with more gnarly differential equations is discretization. The simplest form of discretization of a system is in finite differences, where the domain is partitioned into chunks such that

$$x_i = i \cdot \Delta x, \qquad y(x_i) \to y_i$$

While not all finite difference schemes are partitioned evenly, it is simplest to do so.
To develop the underlying theory of finite differences, we start with a small perturbation of $y$ in the forward direction, which we will call $\Delta_+ y$³

$$\Delta_+ y = y(x+h) - y(x)$$

Using a forward Taylor expansion, we may expand this to

$$\Delta_+ y = y + y'h + \frac{1}{2}y''h^2 + O(h^3) - y = y'h + \frac{1}{2}y''h^2 + O(h^3)$$

We therefore find that the derivative may be expressed as

$$y' = \frac{\Delta_+ y}{h} + O(h)$$

Alternatively, we may formulate this in terms of a backward difference operator $\Delta_-$:

$$\Delta_- y = y(x) - y(x-h) = y - y + y'h - \frac{1}{2}y''h^2 + O(h^3) = y'h - \frac{1}{2}y''h^2 + O(h^3)$$

$$\therefore\ y' = \frac{\Delta_- y}{h} + O(h)$$

Adopting the language introduced at the beginning of this section, we may assert

$$\Delta_+ y(x_i) \to (\Delta_+ y)_i = y_{i+1} - y_i, \qquad \Delta_- y(x_i) \to (\Delta_- y)_i = y_i - y_{i-1}$$

In this way, we have come up with an estimate for the derivative by utilizing points in our discretization. By equating these values, we may develop insight into the second derivative without any information required about the first

$$\frac{\Delta_- y}{h} + \frac{1}{2}y''h + O(h^2) = \frac{\Delta_+ y}{h} - \frac{1}{2}y''h + O(h^2)$$

$$y'' = \frac{1}{h}\left(\frac{\Delta_+ y}{h} - \frac{\Delta_- y}{h}\right) + O(h) = \frac{y(x+h) - 2y(x) + y(x-h)}{h^2} + O(h)$$

Which in discrete form may be approximated as

$$(y'')_i \approx \frac{1}{h^2}(y_{i+1} - 2y_i + y_{i-1}) = \frac{1}{h^2}\begin{bmatrix}1 & -2 & 1\end{bmatrix}\begin{bmatrix}y_{i-1}\\ y_i\\ y_{i+1}\end{bmatrix}$$

³ This notation is borrowed from A First Course in the Numerical Analysis of Differential Equations, by Arieh Iserles
This is a powerful result, for many reasons. Foremost, it suggests that if a system is discretized in this way, then the second derivative operator may be recast as an $n$ by $n$ matrix of the following form, which we will call $L$

$$L = \frac{1}{h^2}\begin{bmatrix} -2 & 1 & 0 & \cdots & 0\\ 1 & -2 & 1 & \cdots & 0\\ 0 & 1 & -2 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & -2 \end{bmatrix}$$

By making this substitution, we may rewrite ordinary differential equations involving the second derivative as a linear system of algebraic equations.

Example: 1D Poisson's Equation

To provide an example of a problem from physics which may be solved with finite differences, we look to Poisson's equation. In 1D, it takes on the form

$$\frac{d^2V}{dx^2} = -\frac{Q}{\epsilon}$$

Where $V(x)$ is the electric potential, $Q(x)$ is the charge distribution, and $\epsilon$ is the dielectric constant of the medium. A common problem that arises in semiconductor physics is when the carrier distribution, and thus the charge distribution, is known, but the potential is not. For example, if we consider an imaginary semiconductor with known dielectric constant and charge density, we may solve for the potential by numerical inversion.

The only trouble not yet addressed is that of incorporating the boundary conditions. Here we will consider two cases: one where the voltage difference across the semiconductor is fixed (Dirichlet) and one where the electromotive force is fixed (Neumann). For both cases, we know information about the boundary; that is to say, the points immediately off of our domain of interest. If $V_0$ and $V_N$ are the potential at the endpoints of our semiconductor, we have information about $V_{-1}$ and $V_{N+1}$. For Dirichlet conditions, those are prescribed directly. From Poisson's equation, $V_0$ is given by

$$\frac{1}{h^2}(V_1 - 2V_0 + V_{-1}) = -\frac{Q_0}{\epsilon}$$

In this case, we assert $V_{-1} = V_A$ and then have

$$\frac{1}{h^2}(V_1 - 2V_0) = -\frac{Q_0}{\epsilon} - \frac{V_A}{h^2}$$

Similarly, for the other boundary, we have

$$\frac{1}{h^2}(-2V_N + V_{N-1}) = -\frac{Q_N}{\epsilon} - \frac{V_B}{h^2}$$

In summary, we now have a system of equations with the appearance

$$LV = -\frac{Q}{\epsilon} - \frac{1}{h^2}(V_A e_0 + V_B e_N)$$

Where $e_0$ and $e_N$ are unit vectors with a 1 at the subscript index and 0 everywhere else. Alternatively, we may directly set the value at the boundary by replacing the first and last lines of the
system with $V_0 = V_A$ and $V_N = V_B$, which may result in cleaner results at the cost of directly altering the Laplacian of the system.

In the Neumann case, this manifests as a fixed electric field, as $E = -\nabla V = -\frac{dV}{dx}$. Recalling that the derivative may be expressed as a quotient of a difference and the step size $h$, here we opt to describe the first derivative with a central difference. The reason this is useful quickly manifests in the derivation of the formula

$$y_{i+1} - y_{i-1} = y_i + hy_i' + \frac{1}{2}y_i''h^2 + O(h^3) - y_i + hy_i' - \frac{1}{2}y_i''h^2 + O(h^3) = 2hy_i' + O(h^3)$$

$$\therefore\ y_i' = \frac{y_{i+1} - y_{i-1}}{2h} + O(h^2)$$

which has error on the order of $h^2$ as opposed to $h$ for the forward and backward difference quotients. For endpoint A, we prescribe $V_0' = -E_A$. Working from the derivative equation and dropping the error term gives

$$-E_A = \frac{V_1 - V_{-1}}{2h} \implies V_{-1} = V_1 + 2hE_A$$

which may then be substituted into Poisson's equation at the endpoint to find

$$\frac{1}{h^2}(V_1 - 2V_0 + V_1 + 2hE_A) = -\frac{Q_0}{\epsilon} \implies \frac{2}{h^2}(V_1 - V_0) = -\frac{Q_0}{\epsilon} - \frac{2}{h}E_A$$

and an analogous expression for the other endpoint. One should take careful note of the fact that the Laplacian has been modified in this case, where $L_{1,2}$ and $L_{N,N-1}$ are $2/h^2$ instead of $1/h^2$. We note this fact by $L \to L^*$. The system of equations becomes, similarly to the Dirichlet case,

$$L^* V = -\frac{Q}{\epsilon} - \frac{2}{h}(E_A e_0 + E_B e_N)$$

Consider a strange semiconductor where the charge distribution is Gaussian and positive, the right end is at ground, and the electric field has a value of $E_A$ at the left end. We may calculate the potential with the following code

%% Semiconductor Potential

E_A = -8;   % J/(um C)
V_B = 0;    % J/C (V)
L = 5;      % um
Q_0 = 10;   % C
eps = 1;

% Charge distribution
Q = @(x) Q_0 * exp(-(x - L/2).^2/(L/4));

% Initialize discretization with 100 points
x = linspace(0,L)';

% Extract step size
h = x(2) - x(1);

% Extract total number of points (100 by default)
N = length(x);

% Construction of the Laplacian
L = 1/h^2*toeplitz([-2 1 zeros(1,N-2)]);

% Modification for Neumann boundary condition
L(1,2) = 2/h^2;

% Modification for Dirichlet boundary condition
L(N,:) = [zeros(1,N-1) 1];

% Right hand side with boundary conditions
rhs = -Q(x)/eps - [2/h*E_A;...           % Neumann bound
                   zeros(N-2,1);...      % Intermediate points
                   -Q(x(end))/eps-V_B];  % Dirichlet bound

% Solution of the system
V = L\rhs;

% Plotting
plot(x,V,'LineWidth',2), hold on
plot(x,-E_A*x + V(1),'--')
xlabel('x (\mum)')
ylabel('Potential (V)')
legend('Potential','V_x(0)=-E_A')
Figure 3: Solution to Poisson's equation with mixed boundary conditions

4.3 Eigenvalue Problems

One of the most important ideas in all of the study of differential equations is that of eigenvalues and eigenvectors. I will now foray briefly into eigen-theory.

4.3.1 An exposition on linear algebra

Consider the following simple differential equation:

$$\frac{d}{dx}y = \lambda y$$

I write it this way instead of the typical $\frac{dy}{dx}$ for an important reason, which is in viewing the $\frac{d}{dx}$ as an operator whose action is identical to multiplication by $\lambda$. This is a deeper concept than it may first appear, and it is tied to the theory of linear equations and linear transformations. Without getting too far into it, in this equation $y$ is termed the eigenvector and $\lambda$ the eigenvalue. Strictly speaking, $y$ need not be a vector in the traditional $y \in \mathbb{R}^n$ sense. It could be infinite-dimensional, or even a function (the usual case for differential equations).
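As a one-line sanity check of this operator viewpoint, note that the exponential is an eigenfunction of differentiation:

$$\frac{d}{dx}e^{\lambda x} = \lambda e^{\lambda x}$$

so $y = e^{\lambda x}$ plays the role of an "eigenvector" of $\frac{d}{dx}$ with eigenvalue $\lambda$, despite living in an infinite-dimensional function space.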
To illustrate the parallels between eigenvalues and eigenvectors in differential equations and linear algebra, consider the differential equation

$$\frac{d^2y}{dx^2} + \lambda^2 y = 0$$

As a second order, linear, homogeneous differential equation, we may split this into two first order differential equations by making a substitution

$$\frac{dy}{dx} = v, \qquad \frac{dv}{dx} = -\lambda^2 y$$

Which may then be cast into matrix-vector form

$$\frac{d}{dx}\begin{bmatrix}y\\v\end{bmatrix} = \begin{bmatrix}0 & 1\\ -\lambda^2 & 0\end{bmatrix}\begin{bmatrix}y\\v\end{bmatrix} = M\begin{bmatrix}y\\v\end{bmatrix}$$

As the simple differential equation $\frac{dy}{dx} = \lambda y$ has its solution $y = e^{\lambda x}y_0$, the solution is given by

$$\begin{bmatrix}y\\v\end{bmatrix} = \exp(Mx)\begin{bmatrix}A\\B\end{bmatrix} = A\exp(Mx)e_1 + B\exp(Mx)e_2$$

Evidently, the solutions occupy a two-dimensional vector space with solutions that may be determined by the coordinates $(A,B)$. While this is nice, it is not as simple as it could be. By diagonalizing the matrix $M = VDV^{-1}$, we find that our solution may be rewritten

$$\begin{aligned}
\begin{bmatrix}y\\v\end{bmatrix} &= A\,V\exp\!\left(\begin{bmatrix}i\lambda & 0\\0 & -i\lambda\end{bmatrix}x\right)V^{-1}e_1 + B\,V\exp\!\left(\begin{bmatrix}i\lambda & 0\\0 & -i\lambda\end{bmatrix}x\right)V^{-1}e_2\\
&= V\exp\!\left(\begin{bmatrix}i\lambda & 0\\0 & -i\lambda\end{bmatrix}x\right)V^{-1}\begin{bmatrix}A\\B\end{bmatrix}, \qquad \begin{bmatrix}A_V\\B_V\end{bmatrix} = V^{-1}\begin{bmatrix}A\\B\end{bmatrix}\\
&= V\begin{bmatrix}\exp(i\lambda x) & 0\\0 & \exp(-i\lambda x)\end{bmatrix}\begin{bmatrix}A_V\\B_V\end{bmatrix} = V\begin{bmatrix}A_V\exp(i\lambda x)\\B_V\exp(-i\lambda x)\end{bmatrix}, \qquad V = [v_1\ v_2]\\
&= A_V\exp(i\lambda x)v_1 + B_V\exp(-i\lambda x)v_2
\end{aligned}$$

By utilizing the fact that $M$ may be diagonalized by switching to its eigenspace, we have changed the basis of the solution space to the eigenvectors of the matrix $M$, transformed our coordinates $A, B$ into the eigenspace coordinates $A_V, B_V$, and written our final answer as a clean linear combination of eigenvectors of the system matrix. We are now equipped to examine the initial value problem. To start, we invoke Euler's identity

$$\begin{bmatrix}y\\v\end{bmatrix} = A_V(\cos(\lambda x) + i\sin(\lambda x))v_1 + B_V(\cos(-\lambda x) + i\sin(-\lambda x))v_2$$

And simplify based on symmetries of $\sin$ and $\cos$

$$\begin{aligned}
\begin{bmatrix}y\\v\end{bmatrix} &= A_V(\cos(\lambda x) + i\sin(\lambda x))v_1 + B_V(\cos(\lambda x) - i\sin(\lambda x))v_2\\
&= (A_V v_1 + B_V v_2)\cos(\lambda x) + i(A_V v_1 - B_V v_2)\sin(\lambda x)
\end{aligned}$$
Applying an example condition that $y(0) = 0$, we find that

$$\begin{bmatrix}0\\v(0)\end{bmatrix} = A_V v_1 + B_V v_2$$

And recalling that $\begin{bmatrix}A_V\\B_V\end{bmatrix} = V^{-1}\begin{bmatrix}A\\B\end{bmatrix}$,

$$\begin{bmatrix}0\\v(0)\end{bmatrix} = \begin{bmatrix}v_1 & v_2\end{bmatrix}V^{-1}\begin{bmatrix}A\\B\end{bmatrix} = \begin{bmatrix}A\\B\end{bmatrix}$$

Therefore, we find that $A = 0$, $B = v(0)$. Furthermore, we find that

$$\begin{bmatrix}A_V\\B_V\end{bmatrix} = V^{-1}\begin{bmatrix}0\\v(0)\end{bmatrix} = v(0)\,v_2^{-1}$$

Where $v_2^{-1}$ denotes the second column of $V^{-1}$. With this new information we simplify our solution a bit more, noting that $Vv_2^{-1} = e_2$

$$\begin{bmatrix}y\\v\end{bmatrix} = \left(\cos(\lambda x)e_2 + i\sin(\lambda x)\,[v_1\ {-v_2}]\,v_2^{-1}\right)v(0)$$

We therefore may now see that $y$ does not depend on the $\cos(\lambda x)$ term. Therefore, should we posit that some $y(L) = 0$, unless either $v(0) = 0$ (the trivial solution), the matrix-vector term, or $\sin(\lambda x)$ is 0, then validity is lost. To take the simplest way out, let us propose

$$\sin(\lambda L) = 0 \implies \lambda = \frac{n\pi}{L}, \quad n \in \mathbb{Z}$$

We then have infinitely many possible values for $\lambda$.

Example: Schrödinger's Equation

One example of a classic eigenvalue problem is Schrödinger's equation for the particle in a potential. Schrödinger's equation (time-invariant version) manifests in 1D as

$$E\Psi = -\frac{\hbar^2}{2m^*}\frac{d^2\Psi}{dx^2} + V(x)\Psi$$

Where $E$ is the energy of the particle, $|\Psi|^2$ is the probability density for the particle's location, $\hbar$ is the reduced Planck constant, $m^*$ is the effective mass of the particle, and $V(x)$ is the potential. To solve this equation numerically, we can first cast it in finite differences

$$E\Psi = \left(-\frac{\hbar^2}{2m^*}L + \mathrm{diag}(V)\right)\Psi \implies E\Psi = H\Psi$$

Where $H$ is the matrix of the discretized Hamiltonian. For simplicity, here we assume that the potential $V$ is infinite outside of the domain of consideration, and therefore $\Psi(x) = 0,\ x \notin \{x_i\}$. To solve for the wavefunctions $\Psi$ for the potential generated in 4.2.1, we implement the following MATLAB routine

function [Psi,E] = schrodinger(V,x)
%% 1D Schrodinger Equation Solver
% Definition of Constants
h = 1;   % hbar
m = 1;

% Definition of Domain
dx = x(2) - x(1);
N = length(x);

% Creation of Second derivative Matrix
r = [-2 1 zeros(1,N-2)];
DD = 1/dx^2 * toeplitz(r,r);

% Solve Eigenvalue Problem
A = -h^2/(2*m)*DD + diag(V);
[Psi,D] = eig(A);
Psi = Psi * diag(1./sum(Psi.^2));   % Scale each column by its squared magnitude
E = diag(D);

Open in Adobe Reader for a nice animation of all the different wavefunctions!

4.4 Eigenvalue Solution Methods

As much as I would like to explore the particularities of my favorite eigenvalue finder, MATLAB's eig() function, it seems that to see the source code you need to start here. That being said, there are several ways to compute eigenvalues and eigenvectors for a system. Outlined in Chapra's book are the polynomial method and the power method, with Hotelling's method discussed as an aside. What is likely closest to what MATLAB uses, however, is the QR algorithm, for which I will discuss an example.
4.4.1 The Polynomial Method

Let us consider the earlier ODE discussed once again, in its discretized form

$$(L + \lambda^2 I)y = 0$$

Assuming we are not interested in the trivial case where $y = 0$, we may approximate the smallest $N$ eigenvalues of the system by finding the roots of the polynomial

$$\det(L + \lambda^2 I) = 0$$

This is a straightforward method to approximate the first few eigenvalues of a system, as long as a robust root-finding method may be implemented.

4.4.2 The Power Method

The power method is a very functional way to determine the largest eigenvalue and corresponding eigenvector of a system. The way it works is by iterating the matrix transformation of interest on a particular vector, for example some random vector $p$, repeatedly normalizing that vector, and recording the magnitude after each iteration as an estimate for the eigenvalue.

$$Ap_i = \tilde{p}_{i+1}, \qquad \lambda_{i+1} = \|\tilde{p}_{i+1}\|, \qquad p_{i+1} = \frac{\tilde{p}_{i+1}}{\|\tilde{p}_{i+1}\|}$$

To get a glimpse into why this works, assuming the matrix $A$ is diagonalizable, we may write it as $A = VDV^{-1}$ ($V$ is a matrix whose columns are eigenvectors and $D$ is diagonal, comprised of the eigenvalues of the matrix). After applying $A$ to a vector $p$ a total of $n$ times, we get

$$A^n p = VD^nV^{-1}p = VD^n q = \sum_{i=1}^N q_i\lambda_i^n v_i$$

Where $q$ is just $p$ with the basis changed to the eigenbasis of $A$. As $n \to \infty$, $\lambda_{max}^n \gg \lambda_{other}^n$, and therefore

$$\lim_{n\to\infty}\sum_{i=1}^N q_i\lambda_i^n v_i \propto v_k, \qquad \lambda_k = \lambda_{max}$$

As a result, after many iterations, subsequent iterations will yield the maximum eigenvalue due to the iterating vector approaching the span of the corresponding eigenvector. To find the smallest eigenvalue with this approach, one needs only to consider the algorithm for $A^{-1}$. This follows directly from the inversion of the eigenvalues as a result of matrix inversion:

$$A^{-1} = (VDV^{-1})^{-1} = VD^{-1}V^{-1}, \qquad D^{-1} = \mathrm{diag}([\lambda_1^{-1}\ \lambda_2^{-1}\ \cdots\ \lambda_N^{-1}])$$

Therefore the power method will yield $\lambda_N^{-1}$.
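The iteration above translates into a few lines of MATLAB. This is a minimal sketch of my own (the tolerance, iteration cap, and random start are arbitrary choices):

function [lambda,p] = powerMethod(A,tol,maxIter)
% POWERMETHOD  Estimate the dominant eigenvalue and eigenvector of A
p = rand(size(A,1),1);         % Random starting vector
lambda = 0;
for i = 1:maxIter
    pTilde = A*p;              % Apply the transformation
    lambdaNew = norm(pTilde);  % Magnitude estimate of |lambda_max|
    p = pTilde/lambdaNew;      % Renormalize
    if abs(lambdaNew - lambda) < tol
        lambda = lambdaNew;
        return
    end
    lambda = lambdaNew;
end
end

Note that the norm only recovers $|\lambda_{max}|$; the sign may be recovered afterwards from the Rayleigh quotient $p^TAp/(p^Tp)$.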
4.4.3 The QR Algorithm

The QR algorithm is a highly accurate but computationally expensive method that relies on iterative QR factorization of a matrix to obtain its eigenvectors and eigenvalues. While I will not discuss QR factorization here, it will suffice to say that it factors a matrix $A = QR$, where $Q$ is an orthogonal matrix of vectors obtained by the Gram-Schmidt orthogonalization process, and $R$ is an upper-triangular matrix corresponding to the coefficients of each column in $A$ for the basis of orthogonal vectors in $Q$. The key element of the QR algorithm is that for a Schur decomposition $A = UTU^{-1}$, $A$ is unitarily similar⁴ to $T$. It has been shown that the matrix sequence

$$A_k = Q_k^{-1}A_{k-1}Q_k$$

converges to an upper triangular matrix, which has the same spectrum as $A$ due to being unitarily similar, hence giving us the eigenvalues.

Example: Allowed Energies for a Particle

As an example, I will solve for the energy spectrum of a particle in a well. In theory, the energy levels should agree with⁵

$$E_n = \hbar\omega_n = \frac{n^2\pi^2\hbar^2}{2mL^2}$$

If the mass is 0.5, the Planck constant is 1, and the length of the domain considered is $L = \pi$, then we should have, very simply, $E_n = n^2$. In other words, the eigenspectrum should follow a parabola. We will check this by implementing MATLAB code to compute the QR factorization and QR algorithm⁶, and plot against the predicted parabolic energy distribution (see Fig. 4).

%% QR Algorithm Implementation

% Definition of Constants
h = 1;
m = .5;
L = pi;

figure, hold on
for N = (10*(2:2:20)+100)
    % Definition of Domain
    x = linspace(0,L,300);
    dx = x(2) - x(1);
    V = 0;

    % Creation of Second derivative Matrix
    r = [-2 1 zeros(1,N-2)];
    DD = 1/dx^2 * toeplitz(r,r);

    % Create matrix
    A = -h^2/(2*m)*DD + V;

    % Calculate energy levels and plot
    [~,D] = qralgorithm(A,N);
    ddiag = diag(D);
    plot(10:-1:1, ddiag(end-9:end),'Color',[(1-N/300) 0 N/300])
end
plot(1:10,(1:10).^2,'-o','LineWidth',3)
legend('N = 120','N = 140','N = 160','N = 180','N = 200',...
    'N = 220','N = 240','N = 260','N = 280','N = 300','Actual')

function [V,D] = qralgorithm(A,N)
%QRALGORITHM implementation of qr algorithm
V = eye(size(A));
for i = 1:N
    [Q,R] = qrFact(A);
    A = R*Q;
    V = V*Q;
end
D = A;
end

function [Q,R] = qrFact(A)
%QRFACT implementation of a qr factorization routine
[~,N] = size(A);
Q = A;
R = zeros(N);

R(1,1) = norm(A(:,1));
Q(:,1) = A(:,1)/R(1,1);
for j = 2:N
    R(1:(j-1),j) = Q(:,1:(j-1))'*A(:,j);
    temp = A(:,j)-Q(:,1:(j-1))*R(1:(j-1),j);
    R(j,j) = norm(temp);
    Q(:,j) = temp/R(j,j);
end
end

⁴ This verbiage and approach follows the exposition here.
⁵ See the Wikipedia entry.
⁶ The QR code was written while taking Math 145.
Figure 4: Plot of the first 10 energy levels predicted by various iterations of the QR algorithm, and the actual energy distribution. Convergence is evident!

4.5 MATLAB Solvers

MATLAB contains a wealth of ODE solvers in the ode* function family, listed here. Its members include

• ode45 - A nonstiff, medium accuracy, general purpose solver.

• ode23 - A nonstiff, low-accuracy solver which can be efficient at crude tolerances and for mildly stiff ODEs.

• ode113 - A variable-order nonstiff ODE solver whose efficacy varies depending on error tolerance and ODE complexity.

• ode15s - A stiff ODE solver; the go-to of its family.

• ode23s, ode23t, ode23tb - Special purpose stiff ODE solvers.

• ode15i - Used for fully implicit ODEs.

As mentioned before, MATLAB also has the eig() function, which numerically computes eigenvalues.
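For completeness, a minimal call of my own, solving the same test problem as the adaptive Heun demo of Section 3.4:

% Solve x' = -x, x(0) = 10 on [0,10] with ode45
[t,x] = ode45(@(t,x) -x,[0 10],10);
plot(t,x)

The solver chooses its own variable step sequence internally, much like the adaptive Heun routine implemented earlier.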
5 Partial Differential Equations: The Finite Difference Approach

While ODEs are differential equations where derivatives are taken with respect to a single independent variable, PDEs are defined by their composition of derivatives with respect to multiple variables; hence, they are comprised of partial derivatives. In this section, I will frequently make use of abbreviated notation for partial derivatives such that $\partial_i \equiv \frac{\partial}{\partial x_i}$ and $\nabla^2 \equiv \sum_i \partial_{ii}$ (the Laplacian). In the first part of this section, I will derive the general framework behind discretization schemes, focusing on the Laplacian. In the following portions, I will describe and solve elliptic, parabolic, and hyperbolic PDEs of the linear, second-order type using finite difference schemes.

5.1 Application of Finite Differences to PDEs

While partial differential equations are often much scarier looking than ODEs, much of the approach for finite difference solution is the same. The domain is discretized, usually in space and sometimes also in time, and then either an eigenvalue problem is solved or the system is inverted. In any case, since the domain is typically not one dimensional, a little extra sophistication is required for discretization. Specifically, if the dependent variable is to be treated as a function such that $f : \mathbb{R}^n \to \mathbb{R}$, and it is to be represented as a typical one-dimensional vector, we must number all the points in the domain such that $D^2 \to \mathbb{N}$. A common approach, illustrated for a 10x10 domain, is to enumerate the points row by row such that $(n,m) \to n + 10(m-1)$. For an $N$ by $M$ by $P$ domain:

$$(n,m,p) \to n + N(m - 1 + M(p-1))$$

While assembling the method, it is common to retain the multidimensionality of the data for readability.
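In MATLAB, this bookkeeping coincides with column-major linear indexing, so the mapping can be checked in a few lines (a sketch of my own; the grid size and sample point are arbitrary):

% Map between grid subscripts (n,m) and the linear index n + N*(m-1)
N = 10; M = 10;
n = 3; m = 7;
idx = sub2ind([N M],n,m);   % Equals n + N*(m-1) = 63
U = rand(N,M);              % Grid-shaped data
u = U(:);                   % Column-major flattening matches the mapping
isequal(u(idx),U(n,m))      % Returns true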
For ODEs, approximations of derivatives were developed using finite differences.

$$(y'')_i \approx \frac{(\Delta_+ y)_i - (\Delta_- y)_i}{h^2}$$

Unsurprisingly, this approximation is also valid for partial differential equations. For a two-dimensional domain of points $(x,y)$ and dependent variable $u$, we first discretize the domain as

$$u(x_i, y_j) \to u_{i,j}, \qquad x_i = i\cdot\Delta x, \quad y_j = j\cdot\Delta y$$

We begin by generalizing the difference operators to multiple dimensions as

$$(\Delta_{i+}u)_{i,j} = u_{i+1,j} - u_{i,j} \qquad (\Delta_{i-}u)_{i,j} = u_{i,j} - u_{i-1,j}$$
$$(\Delta_{j+}u)_{i,j} = u_{i,j+1} - u_{i,j} \qquad (\Delta_{j-}u)_{i,j} = u_{i,j} - u_{i,j-1}$$

We then have

$$\partial_{xx}u = \frac{(\Delta_{i+}u)_{i,j} - (\Delta_{i-}u)_{i,j}}{(\Delta x)^2}, \qquad \partial_{yy}u = \frac{(\Delta_{j+}u)_{i,j} - (\Delta_{j-}u)_{i,j}}{(\Delta y)^2}$$

Should we make the choice to set $\Delta x = \Delta y = h$, we may write the discretized Laplacian as

$$\nabla^2 u = \frac{1}{h^2}(u_{i,j-1} + u_{i-1,j} - 4u_{i,j} + u_{i+1,j} + u_{i,j+1})$$

This is the infamous five-point formula, so named for using five points (shocking!). Let us consider our mapping from earlier, assigning $i \to n$ and $j \to m$. At some fixed $j$ (in other words, some $y$ value), we find ourselves with the following matrix-vector equation

$$\nabla^2\begin{bmatrix}u_{1,j}\\u_{2,j}\\\vdots\\u_{N,j}\end{bmatrix} = \frac{1}{h^2}\begin{bmatrix}-2 & 1 & \cdots & 0\\1 & -2 & \cdots & 0\\\vdots & \vdots & \ddots & \vdots\\0 & 0 & \cdots & -2\end{bmatrix}\begin{bmatrix}u_{1,j}\\u_{2,j}\\\vdots\\u_{N,j}\end{bmatrix} + \frac{1}{h^2}I\left(\begin{bmatrix}u_{1,j-1}\\u_{2,j-1}\\\vdots\\u_{N,j-1}\end{bmatrix} - 2\begin{bmatrix}u_{1,j}\\u_{2,j}\\\vdots\\u_{N,j}\end{bmatrix} + \begin{bmatrix}u_{1,j+1}\\u_{2,j+1}\\\vdots\\u_{N,j+1}\end{bmatrix}\right)$$

Substituting the vectors with fixed $j$ values $u_{*,j} \to u_j$, $u_{*,j-1} \to u_{j-1}$, and $u_{*,j+1} \to u_{j+1}$, this may be succinctly rewritten

$$\nabla^2 u_j = L_N u_j + \frac{1}{h^2}I(u_{j-1} - 2u_j + u_{j+1})$$

Where $L_N$ is the $N$-point version of the 1D discretized Laplacian developed in the ODE section. Therefore, we may write the full discretization as

$$\nabla^2 u = \left(\begin{bmatrix}L_N & 0 & \cdots & 0\\0 & L_N & \cdots & 0\\\vdots & \vdots & \ddots & \vdots\\0 & 0 & \cdots & L_N\end{bmatrix} + \frac{1}{h^2}\begin{bmatrix}-2I & I & \cdots & 0\\I & -2I & \cdots & 0\\\vdots & \vdots & \ddots & \vdots\\0 & 0 & \cdots & -2I\end{bmatrix}\right)u = (I\otimes L_N + L_M\otimes I)u = (L_M\oplus L_N)u$$

Where $\otimes$ denotes the Kronecker product and $\oplus$ the Kronecker sum. This is a powerful result, and it may be generalized to $n$-dimensional finite difference schemes by simply taking the Kronecker sum of multiple single-dimensional discretized Laplacians. While it will not be shown here, a similar derivation may be undertaken for other operators.
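This construction translates directly into MATLAB via kron (a sketch of my own; the grid sizes and spacing are arbitrary):

% Assemble the 2D discretized Laplacian as the Kronecker sum L_M (+) L_N
N = 50; M = 40; h = 0.1;
LN = (1/h^2)*sparse(toeplitz([-2 1 zeros(1,N-2)]));
LM = (1/h^2)*sparse(toeplitz([-2 1 zeros(1,M-2)]));
Lap2D = kron(speye(M),LN) + kron(LM,speye(N));   % (M*N)-by-(M*N) five-point operator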
5.2 Boundary Conditions

When solving any PDE defined on some domain $\Omega$, boundary conditions are needed such that $g$ is defined for all points on $\partial\Omega$. This is easily done for the square, as we can simply use single-variable functions, but for a more general domain a non-obvious parametrization may be required. To demonstrate the general approach for defining boundary conditions, the Laplace equation will be used as a case study.

$$\nabla^2 u = 0$$

5.2.1 Dirichlet Conditions

Dirichlet conditions are incorporated by either a) making substitutions in the system of equations by simply setting the value of certain points or b) using "ghost nodes" as before. Since the former is not very mathematically interesting, I will consider the case of the latter. Consider boundary conditions on the $L$ by $L$ square such that we have

$$u(x,0) = \alpha(x) \qquad u(x,L) = \beta(x) \qquad u(0,y) = \gamma(y) \qquad u(L,y) = \delta(y)$$

We must develop modifications to the system of equations at every boundary point, but to start we will consider a point $(x_i, 0)$ on the bottom side of the square.

$$\frac{1}{h^2}(u_{i,j-1} + u_{i-1,j} - 4u_{i,j} + u_{i+1,j} + u_{i,j+1}) = 0$$
$$\implies \frac{1}{h^2}(\alpha_i + u_{i-1,1} - 4u_{i,1} + u_{i+1,1} + u_{i,2}) = 0$$
$$\implies \frac{1}{h^2}(u_{i-1,1} - 4u_{i,1} + u_{i+1,1} + u_{i,2}) = -\frac{1}{h^2}\alpha_i$$

Thus for the vector $u_{i,1} = u_1$ for the bottom side we have

$$Lu_1 + \frac{1}{h^2}I(-2u_1 + u_2) = -\frac{1}{h^2}\alpha$$

Considering the organization of the earlier applied boundary conditions for Poisson's equation in 1D,

$$LV = -\frac{Q}{\epsilon} - \frac{1}{h^2}(V_Ae_0 + V_Be_N)$$

it seems natural that there would be a similar way to implement Dirichlet conditions for PDEs. Using the Kronecker product, there is!

$$\nabla^2 u = -\frac{1}{h^2}(e_1\otimes\alpha + e_N\otimes\beta + \gamma\otimes e_1 + \delta\otimes e_N)$$
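Since the right hand side is now fully known, the Dirichlet problem can be solved by a single matrix inversion. The following is a minimal sketch of my own (the boundary functions and grid size are arbitrary illustrations):

% Laplace's equation on the unit square with Dirichlet data, solved directly
N = 60; h = 1/(N+1);
x = h*(1:N)';                                  % Interior points only
L1 = (1/h^2)*sparse(toeplitz([-2 1 zeros(1,N-2)]));
Lap = kron(speye(N),L1) + kron(L1,speye(N));   % Kronecker-sum Laplacian
e1 = sparse(1,1,1,N,1); eN = sparse(N,1,1,N,1);
alphaB = sin(pi*x); betaB = 0*x;               % u(x,0) and u(x,L)
gammaB = 0*x;       deltaB = 0*x;              % u(0,y) and u(L,y)
rhs = -(1/h^2)*(kron(e1,alphaB) + kron(eN,betaB) ...
              + kron(gammaB,e1) + kron(deltaB,eN));
u = reshape(Lap\rhs,N,N);
surf(x,x,u'), shading interp

An iterative alternative to this direct inversion follows.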
The Leibmann Method

One way to solve the Laplace equation numerically is by use of the iterative Leibmann method. After setting up the system of equations, it will usually be found that the majority of entries in the matrix-vector system are zero. As a result, it can be overly computationally expensive to use full-matrix methods. The Gauss-Seidel method, while usually used for ODEs, may be used under the Leibmann name for the solution of elliptic PDEs. To use the method, one iterates over the independent variable, at each point examining the nonzero terms and using them to solve for the point on the diagonal, such that the point $u_{i,j}$ is given by

$$u_{i,j} = \frac{u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1}}{4}$$

Since the system is diagonally dominant⁷, the system will converge. To speed up convergence, over-relaxation techniques may be employed; for example, by taking each new point and mapping it such that $u_{new}\lambda + (1-\lambda)u_{old} \to u_{new}$ at each step. Percent error between subsequent steps may be used as a metric for convergence.

5.2.2 Neumann Conditions

To implement Neumann conditions, we will attempt a slightly more subtle approach, building on the Dirichlet implementation. First, we need to compute values for the ghost nodes, based on the central difference approximation of the first derivative, as was done before. Now, we consider the following boundary conditions

$$u_y(x,0) = a(x) \qquad u_y(x,L) = b(x) \qquad u_x(0,y) = c(y) \qquad u_x(L,y) = d(y)$$

Conveniently, since we are working with the square, we don't need to worry about the other first derivatives, since those would only utilize points along the boundary and thus are not relevant for this particular problem⁸.

$$a_i = \frac{u_{i,2} - u_{i,0}}{2h} \implies u_{i,0} = u_{i,2} - 2ha_i \implies u_0 = u_2 - 2ha$$
$$b_i = \frac{u_{i,N+1} - u_{i,N-1}}{2h} \implies u_{i,N+1} = u_{i,N-1} + 2hb_i \implies u_{N+1} = u_{N-1} + 2hb$$

Considering that $\alpha = u_0$ in the Dirichlet case, we may use substitution to work directly from the final result, with analogous substitutions for the other boundaries

$$\nabla^2 u = -\frac{1}{h^2}\left(e_1\otimes(u_2 - 2ha) + e_N\otimes(u_{N-1} + 2hb) + (u_{2,j} - 2hc)\otimes e_1 + (u_{N-1,j} + 2hd)\otimes e_N\right)$$

$$\nabla^2 u = -\frac{2}{h}(e_N\otimes b - e_1\otimes a + d\otimes e_N - c\otimes e_1) - \frac{1}{h^2}(\delta_{1,2}\otimes I + \delta_{N,N-1}\otimes I + I\otimes\delta_{1,2} + I\otimes\delta_{N,N-1})u$$

⁷ Verbiage used by Chapra
⁸ There is more nuance here, but I am not sure how to articulate it. It is not discussed in Chapra, so I figure it was not discussed during the course.
$$\nabla^2 u = -\frac{2}{h}(e_N\otimes b - e_1\otimes a + d\otimes e_N - c\otimes e_1) - \frac{1}{h^2}\left((\delta_{1,2} + \delta_{N,N-1})\oplus(\delta_{1,2} + \delta_{N,N-1})\right)u$$

$$\therefore\ \left(\nabla^2 + \frac{1}{h^2}(\delta_{1,2} + \delta_{N,N-1})\oplus(\delta_{1,2} + \delta_{N,N-1})\right)u = -\frac{2}{h}(e_N\otimes b - e_1\otimes a + d\otimes e_N - c\otimes e_1)$$

Where $\delta_{i,j}$ denotes the Kronecker delta, an empty matrix with a 1 at position $i,j$. We thus have the form of the equation which applies the Neumann conditions.

As an aside, before I started this section on PDEs, it hadn't occurred to me that the tensor algebra would be so instrumental in a clean presentation of this theory. Granted, the mathematical bar for understanding is set a bit higher than in Chapra's book, but I feel that it makes things much more clear. This took a great deal of thinking to derive and present, so in the interest of time, I will be a little less general moving forward.

Solution of Elliptic PDEs

Elliptic PDEs are often considered to be the simplest type of nontrivial PDE to solve, as they describe steady-state phenomena. All the mechanism really needed to solve one of these using finite differences has already been described. To solve such equations, boundary conditions are needed. Here we will discuss the 2D Laplace-Poisson equation as a model case.

$$\nabla^2 u = f(x,y)$$

Let us now solve an example with Neumann conditions. Normally, I would scrap my work here and move on, but I would like to comment on a particular difficulty I ran into while doing this. Without thinking much about it, I set up the Laplace equation with all Neumann bounds, and found that the solution was horribly divergent. I was puzzled, thinking I had done something wrong. Then, I realized that Laplace's equation with only Neumann conditions is an ill-posed problem, as there are infinitely many solutions; consider the constant of integration that would naturally arise when prescribing slope at all edges. For this reason, I will do a different version of this problem, with the following conditions:

$$x\in[0,5], \quad y\in[0,3]$$
$$\partial_y u(x,0) = \partial_y u(x,3) = x, \qquad f(x,y) = e^{-(x-\frac{5}{2})^2}e^{-(y-\frac{3}{2})^2}$$
$$u(0,y) = 0, \qquad u(L_x,y) = \sin\left(\frac{2\pi}{3}y\right)$$

This could be interpreted as the potential on some strange sheet of metal with a linear electric field on two sides, grounded on one, and with a sinusoidal potential on the remaining side, with a Gaussian charge distribution in the center.

%% Laplace-Poisson Example Problem

%% Create functions for Kronecker sum, delta, and unit vector
% Kronecker sum
ksum = @(A,B) kron(A,eye(size(B))) + kron(eye(size(A)),B);

% Kronecker delta
kdelt = @(i,j,N) sparse(i,j,1,N,N);

% Unit vector
Figure 5: Solution to Laplace-Poisson equation given the above conditions

e = @(i,N) sparse(i,1,1,N,1);

%% Set problem parameters
xLen = 5;   % Length in x direction
yLen = 3;   % Length in y direction
N = 500;    % Number of points for x
M = 500;    % Number of points for y

% Gaussian distribution at the center
f = @(x,y) exp(-(x-xLen/2).^2) .* exp(-(y-yLen/2).^2);

% "Flux" (Neumann) Condition functions
up_x0 = @(x) x;
up_xL = up_x0;

% Dirichlet conditions
u_0y = @(y) 0*y;
u_Ly = @(y) sin(y*2*pi/yLen);

% Create grid
x = linspace(0,xLen,N)';
y = linspace(0,yLen,M)';
[xx,yy] = meshgrid(x,y);
dx = x(2) - x(1);
dy = y(2) - y(1);

% Define the Laplacian for x and y
Lx = 1/dx^2 * sparse(toeplitz([-2 1 zeros(1,N-2)]));
Ly = 1/dy^2 * sparse(toeplitz([-2 1 zeros(1,M-2)]));

% Define the system Laplacian
Laplacian = ksum(Lx,Ly);

% Modify Laplacian for Neumann conditions
laplaceMod = kron(eye(N),(kdelt(1,2,M) + kdelt(M,M-1,M))/dy^2);
A = Laplacian + laplaceMod;

% Create RHS
rhs = reshape(f(xx,yy),N*M,1) - 2/dx*(kron(up_xL(x),e(M,M)) - kron(up_x0(x),e(1,M))) + ...
    -1/dx^2*(kron(e(N,N),u_Ly(y)) + kron(e(1,N),u_0y(y)));

% Solve and plot solution
u = reshape(A\rhs,M,N);
surf(x,y,u)
xlabel('x')
ylabel('y')
zlabel('u')
shading interp
axis equal
light, light

5.2.3 The Control-Volume Approach

Situations may arise where a particular node in the finite-difference scheme may lie at a troublesome point, where it is entirely non-obvious how to implement the boundary conditions directly. By instead opting to treat the immediately surrounding volume as its own system and using the solution on that volume for that node, these difficulties may be efficiently addressed. This is called the control-volume approach. By isolating the problematic point, only one line in the system of equations needs to be modified. To write the equations for each node, we first rewrite the heat equation as its average over a volume element

$$\frac{1}{\Delta V}\int_{\Delta V} u_t\, d(\Delta V) = \langle u_t\rangle = \frac{\alpha}{\Delta V}\int_{\Delta V} \nabla\cdot(\nabla u)\, d(\Delta V)$$

By the divergence theorem,

$$\int_{\Delta V} \nabla\cdot(\nabla u)\, d(\Delta V) = \oint_{\partial\Delta V} \nabla u\cdot n\, dA$$

Over a prismatic, polygonal element, we find

$$\langle u_t\rangle = \frac{h\alpha}{\Delta V}\sum_i \int_{S_i} (\nabla u\cdot n_i)\, dx$$

In the standard case of a rectangular element, this may be rewritten as

$$\langle u_t\rangle = \frac{\alpha}{\Delta x\Delta y}\left(\int_\uparrow \partial_y u\, dx + \int_\rightarrow \partial_x u\, dy - \int_\downarrow \partial_y u\, dx - \int_\leftarrow \partial_x u\, dy\right)$$
Where arrows denote values on the corresponding side of the element. For the steady-state case $\langle u_t\rangle = 0$, and using the definition of heat flux $q = -k'\nabla u$,

$$0 = \int_\uparrow k'_\uparrow q_y\, dx + \int_\rightarrow k'_\rightarrow q_x\, dy - \int_\downarrow k'_\downarrow q_y\, dx - \int_\leftarrow k'_\leftarrow q_x\, dy$$

In finite differences, this may be written as a summation over each side. Similar expressions may be derived for dealing with the non-steady-state case.

Figure 6: Write equations for the darkened nodes in the grid in Fig. P29.10. Note that all units are cgs. The convection coefficient is $h_c = 0.01\ \mathrm{cal/(cm^2\cdot C\cdot s)}$ and the thickness of the plate is 2 cm. (Problem 29.10)

To illustrate these concepts, we solve the above problem. Node (0,0) may be written using the heat convection coefficient for the left side (going halfway up to the next node) and 0 for the bottom, since it is insulated. The middle nodes do not involve the boundary, and have corresponding expressions that are written more readily, with the exception of the heat source (still fairly obvious)

$$(0,0):\quad \frac{\Delta y}{2}\cdot h_c(T_a - T_{0,0}) + \frac{\Delta x}{2}\cdot k'(T_{0,1} - T_{0,0}) + \frac{\Delta y}{2}\cdot k'(T_{1,0} - T_{0,0})$$

$$(1,1):\quad T_{1,2} + T_{2,1} + T_{1,0} + T_{0,1} - 4T_{1,1}$$

$$(2,1):\quad \frac{\Delta y_1 + \Delta y_2}{2}\left(k'(T_{3,1} - T_{2,1}) + k'(T_{1,1} - T_{2,1})\right) + 1.5\Delta x\left(k'(T_{2,2} - T_{2,1}) + k'(T_{2,0} - T_{2,1})\right)\Delta z + q_z\left(1.5\Delta x + \frac{\Delta y_1 + \Delta y_2}{2}\right)$$

5.3 Partial Discretization

In this section, I will discuss the extension of the established theory to the solution of time-dependent phenomena, without discretization in time. These methods are very nice and even work well for hyperbolic PDEs, but are comparably more computationally expensive.
5.3.1 Parabolic PDEs

Parabolic PDEs are most commonly encountered when dealing with time-dependent phenomena, especially those involved with some sort of diffusive physics. They are so named for the form they take being defined by a parabolic equation, where the dependent variable is replaced by the $\partial_t$ operator and the independent variables replaced by the corresponding partial derivative operators. An example would be

$$z = x^2 + y^2 \ \to\ \partial_t u = \partial_{xx}u + \partial_{yy}u = \nabla^2 u$$

This simple example is the basic form of the diffusion PDE. As a matter of fact, being able to write a PDE as the equation of the partial derivative with respect to time and an elliptic operator is a sufficient condition⁹ for a PDE to be considered parabolic. For this reason, a parabolic PDE may be written generally as

$$u_t = L[u]$$

Where $L$ is an elliptic operator. In the interest of a succinct presentation of the numerical methods behind their solution, I will omit a derivation of the mathematical nuance behind parabolic and elliptic PDEs in favor of jumping straight to solution methods.

5.3.2 The Eigenvalue Method for Time-Dependent Partially-Discretized PDEs

We have discussed in previous sections how an $n$th-order linear ODE may be broken apart into a system of $n$ first order differential equations and then rewritten as a matrix-vector system. Appealing to the framework established in the last section, we will consider in this section the 2D heat equation

$$\frac{\partial u}{\partial t} = \alpha\nabla^2 u$$

Using the language developed previously, this may be discretized over $x, y$ by¹⁰

$$\frac{d}{dt}u = \alpha(L_x\oplus L_y)u$$

Naturally, we may solve this equation in the same way that any other system of first order differential equations may be solved, by exponentiation

$$u = e^{\alpha(L_x\oplus L_y)t}u_0 = \left(e^{\alpha L_x t}\otimes e^{\alpha L_y t}\right)u_0$$

with simplification resulting by exploiting properties of the Kronecker sum¹¹. Since $L_x, L_y$ are symmetric matrices, their eigenvectors are orthogonal, and therefore

$$L_x = V_xD_xV_x^{-1} = V_xD_xV_x^T, \qquad L_y = V_yD_yV_y^{-1} = V_yD_yV_y^T$$

We may then further simplify our expression

$$u = \left((V_xe^{\alpha D_xt}V_x^T)\otimes(V_ye^{\alpha D_yt}V_y^T)\right)u_0 = (V_x\otimes V_y)(e^{\alpha D_xt}\otimes e^{\alpha D_yt})(V_x\otimes V_y)^Tu_0 = (V_x\otimes V_y)\,e^{\alpha(D_x\oplus D_y)t}\,(V_x\otimes V_y)^Tu_0$$

⁹ Unsure if also necessary
¹⁰ While the previous ordering of $L_x$ and $L_y$ was valid for the previous derivation, it also works in this way and comes out cleaner in the code
¹¹ Properties of the Kronecker product may be read about here
By making the substitutions $W = \bigotimes_{i=1}^N V_{x_i}$ and $\Lambda = \bigoplus_{i=1}^N D_{x_i}$, this may be extended to an $N$-dimensional space as

$$u = We^{\alpha\Lambda t}W^Tu_0$$

5.3.3 Implementation of Boundary Conditions

The result that has just been obtained is, while pretty, quite complex mathematically. With that said, there is a fairly simple way to interpret it to reveal how to apply boundary conditions. The first thing to be considered is that the eigenfunctions of the 2D Laplacian are planar harmonics. By changing the basis of the initial condition into the eigenspace of the Laplacian (the $W^Tu_0$ term), we are approximating the initial condition as a Fourier series. From that point, any state of the system must be a superposition of planar harmonics with coefficients that evolve over time proportionally to their corresponding eigenvalue. For this reason, boundary conditions are still determined by characteristics of the Laplacian (or whatever elliptic operator is implicated). For this reason, there is a straightforward relationship between applying boundary conditions to elliptic PDEs and parabolic PDEs. Consider that the differential equation for a parabolic PDE may be written as

$$\frac{du}{dt} = L[u]$$

Here, I will only consider boundary conditions that are constant in time. To apply Dirichlet conditions, we know that $\frac{du}{dt}\big|_{\partial\Omega} = 0$. Therefore, we must modify the right hand side such that $L[u]\big|_{\partial\Omega} = 0$ when $u\big|_{\partial\Omega} = g(x,y)$. This is a problem we already solved earlier, using ghost nodes. For the heat equation where $L[u] = \alpha\nabla^2u$,

$$\alpha\nabla^2u \ \to\ \alpha\Big(\nabla^2u + \frac{1}{h^2}\big(\underbrace{e_1\otimes\alpha}_{\text{Bottom}} + \underbrace{e_N\otimes\beta}_{\text{Top}} + \underbrace{\gamma\otimes e_1}_{\text{Left}} + \underbrace{\delta\otimes e_N}_{\text{Right}}\big)\Big)$$

(choosing which Dirichlet conditions to apply). And for Neumann (analogous placement)

$$\alpha\nabla^2u \ \to\ \alpha\left(\nabla^2 + \frac{1}{h^2}(\delta_{1,2} + \delta_{N,N-1})\oplus(\delta_{1,2} + \delta_{N,N-1})\right)u + \frac{2\alpha}{h}(e_N\otimes b - e_1\otimes a + d\otimes e_N - c\otimes e_1)$$

Now, all we need to do is solve the inhomogeneous differential equation. Recall that the following

$$\frac{du}{dt} = Au + b$$

has a solution in the form of

$$u = e^{At}(u_0 + A^{-1}b) - A^{-1}b$$

Therefore, once the proper substitutions are made, the PDE is as good as solved. The following is a solution for the heat equation with the conditions

$$x\in[0,10], \quad y\in[0,4], \quad t\in[0,3)$$
$$\partial_y u(x,0) = \partial_y u(x,4) = 0, \qquad u_0(x,y) = e^{-(x-5)^2}e^{-(y-2)^2}$$
$$u(0,y) = 0, \qquad u(L_x,y) = \sin\left(\frac{\pi}{2}y\right)$$
Figure 7: Heat evolution PDE at t=0.1

%% Heat PDE

%% Create functions for Kronecker sum, delta, and unit vector
% Kronecker sum
ksum = @(A,B) kron(A,eye(size(B))) + kron(eye(size(A)),B);

% Kronecker delta
kdelt = @(i,j,N) sparse(i,j,1,N,N);

% Unit vector
e = @(i,N) sparse(i,1,1,N,1);

%% Set problem parameters
xLen = 10;  % Length in x direction
yLen = 4;   % Length in y direction
N = 200;    % Number of points for x
M = 80;     % Number of points for y
alpha = 5;  % Thermal diffusivity

% Initial condition
f = @(x,y) 5*exp(-(x-xLen/2).^2) .* exp(-(y-yLen/2).^2);

% "Flux" (Neumann) Condition functions
up_x0 = @(x) 0*x;
up_xL = up_x0;

% Dirichlet conditions
u_0y = @(y) 0*y;
u_Ly = @(y) sin(y*2*pi/yLen);
% Create grid
x = linspace(0,xLen,N)';
y = linspace(0,yLen,M)';
[xx,yy] = meshgrid(x,y);
dx = x(2) - x(1);
dy = y(2) - y(1);

% Define the Laplacian for x and y
Lx = 1/dx^2 * sparse(toeplitz([-2 1 zeros(1,N-2)]));
Ly = 1/dy^2 * sparse(toeplitz([-2 1 zeros(1,M-2)]));

% Define the system Laplacian
Laplacian = ksum(Lx,Ly);

% Modify Laplacian for Neumann conditions
laplaceMod = kron(eye(N),(kdelt(1,2,M) + kdelt(M,M-1,M))/dy^2);
A = alpha*(Laplacian + laplaceMod);

% Inhomogeneous part of Laplacian
b = alpha*(2/dy*(kron(up_xL(x),e(M,M)) - kron(up_x0(x),e(1,M))) + ...
    1/dx^2*(kron(e(N,N),u_Ly(y)) + kron(e(1,N),u_0y(y))));

% Compute eigenvalues and eigenvectors of Laplacian
[V,D] = eig(full(A));
u0 = reshape(f(xx,yy),N*M,1);
beta = A\b;
V0 = V\(u0 + beta);
D = diag(D);

% Solution
u = @(t) V*(diag(exp(D*t))*V0) - beta;

h = surf(xx,yy,reshape(u(0),M,N));
xlabel('x')
ylabel('y')
zlabel('u')
shading interp
axis equal
light

% Animate results
t = linspace(0,3,1000);

for i = 1:1000
    h.ZData = reshape(u(t(i)),M,N);
    drawnow
end

5.3.4 Hyperbolic PDEs

Since this wasn't really part of the class, I'll make this brief, but I do want to show how the methods built up to this point can easily handle the wave equation. For ease of exposition, I will not be discussing how to apply boundary conditions. Suffice to say, it is a similar process. We begin by considering the spatially discretized form of the wave equation

$$\frac{d^2u}{dt^2} = (L_x\oplus L_y)u$$

Similar to a typical second order ODE of this form, this may be rewritten as

$$\frac{d}{dt}\begin{bmatrix}u\\\dot{u}\end{bmatrix} = \begin{bmatrix}0 & I\\L_x\oplus L_y & 0\end{bmatrix}\begin{bmatrix}u\\\dot{u}\end{bmatrix}$$

Therefore, we may immediately assert the solution to be

$$\begin{bmatrix}u\\\dot{u}\end{bmatrix} = \exp\left(\begin{bmatrix}0 & I\\L_x\oplus L_y & 0\end{bmatrix}t\right)\begin{bmatrix}u_0\\\dot{u}_0\end{bmatrix}$$

It should be noted that this is the "default" solution, with Dirichlet conditions of zero on all sides. As such, this is the natural equation of a vibrating square membrane. Furthermore, I do not claim this to be anywhere close to the most computationally efficient approach (view in Adobe Reader for animation).

%% Wave PDE

%% Create functions for Kronecker sum, delta, and unit vector
% Kronecker sum
ksum = @(A,B) kron(A,eye(size(B))) + kron(eye(size(A)),B);
% Kronecker delta
kdelt = @(i,j,N) sparse(i,j,1,N,N);
% Unit vector
e = @(i,N) sparse(i,1,1,N,1);

%% Set problem parameters
xLen = 10;  % Length in x direction
yLen = 10;  % Length in y direction
N = 40;     % Number of points for x
M = 40;     % Number of points for y

% Initial condition
f = @(x,y) 5*exp(-(x-xLen/2).^2) .* exp(-(y-yLen/2).^2);

% Initial velocity
g = @(x,y) x*y*0;
Figure 8: Solution to the wave equation with Gaussian initial conditions

% Create grid
x = linspace(0,xLen,N)';
y = linspace(0,yLen,M)';
[xx,yy] = meshgrid(x,y);
dx = x(2) - x(1);
dy = y(2) - y(1);

% Define the Laplacian for x and y
Lx = 1/dx^2 * sparse(toeplitz([-2 1 zeros(1,N-2)]));
Ly = 1/dy^2 * sparse(toeplitz([-2 1 zeros(1,M-2)]));

% Define the system Laplacian
A = ksum(Lx,Ly);

% Compute eigenvalues and eigenvectors of the system
[V,D] = eig([zeros(N*M), eye(N*M) ; full(A), zeros(N*M)]);
D = diag(D);
u0 = reshape(f(xx,yy),N*M,1);
up0 = reshape(g(xx,yy),N*M,1);
V0 = V\[u0 ; up0];   % The block matrix is not symmetric, so invert V rather than transpose

% Solution
u = @(t) real(V(1:N*M,:)*(diag(exp(D*t))*V0));

5.4 Full Discretization

For the last bit: we have discussed the solution of PDEs with a partial discretization of the domain, that being only in space. Here, I will present the methods broached in Chapra for discretization and solution in both space and time. To complement Chapra's exposition, I will drop the second spatial dimension and work in one variable of space for this section.

5.4.1 Explicit Methods

Just as finite differences could be constructed for the space domain, so too may they be applied to the time variable. For ease of notation I will use the following convention in this section unless specified otherwise

$$u_i(t_l) \equiv u_i^l$$

Sensibly, there is a great deal of back-and-forth between the solution of temporally dependent ODEs and PDEs that are discretized in time. For now, we will exclusively examine the simplified heat PDE $\partial_t u = L(u)$. Notably, at each point $i$ in the domain, we essentially have a first order differential equation

$$f(x_i, u_i, t_l) = (\nabla^2u(t_l))_i$$

As a result, it stands to reason that we should be able to use something in our arsenal of ODE solvers to solve the PDE, treating each point in the spatial domain as its own unique ODE. The explicit method discussed by Chapra is a manifestation of Euler's method, such that

$$u_i^{l+1} = u_i^l + \Delta t\, L_i^l, \qquad L_i^l = \frac{u_{i+1}^l - 2u_i^l + u_{i-1}^l}{h^2}$$

As long as $\Delta t \le \frac{1}{2}h^2$, this method will be convergent and stable. Other explicit ODE solution methods may also be used; as Chapra notes, the implementation of Heun's method for this application is called MacCormack's method.

Explicit Method Example

A sketch of this scheme is given at the end of this section, following the Crank-Nicolson example.

5.4.2 Implicit Methods

In section 3.1 of this report, it was discussed that Euler's method did not have a particularly large stability domain, and thus suffered for poor choices of step size. This is also the case for its application to solving PDEs. By opting to use so-called A-stable methods instead, improved stability may be found. Just as the backward Euler method was given by

$$y_{i+1} = y_i + hf(x_{i+1}, y_{i+1})$$
It may be used for PDE solution as well, manifesting as

$$u_i^{l+1} = u_i^l + \Delta t\, L_i^{l+1}$$

By writing this in matrix-vector form for the entire system, we obtain the following (with $u \equiv \vec{u}$¹²)

$$u^{l+1} - \Delta tLu^{l+1} = u^l \implies (I - \Delta tL)u^{l+1} = u^l \implies u^{l+1} = (I - \Delta tL)^{-1}u^l$$

We therefore now have a general solution at any point in time, by induction

$$u(t_n) = (I - \Delta tL)^{-n}u_0$$

For added fanciness, we may generalize this to $n$ dimensions as

$$u(t_n) = \left(I - \Delta t\bigoplus_{i=1}^N L_{x_i}\right)^{-n}u_0$$

The Crank-Nicolson Method

The Crank-Nicolson method appears to be an implementation of Heun's method for solving PDEs. The pointwise equation is

$$u_i^{l+1} = u_i^l + \frac{1}{2}\Delta t(L_i^l + L_i^{l+1})$$

In matrix-vector form, this may be simplified to an iterative procedure as follows

$$u^{l+1} = u^l + \tfrac{1}{2}\Delta tL(u^l + u^{l+1})$$
$$(I - \tfrac{1}{2}\Delta tL)u^{l+1} = (I + \tfrac{1}{2}\Delta tL)u^l$$
$$u^{l+1} = (I - \tfrac{1}{2}\Delta tL)^{-1}(I + \tfrac{1}{2}\Delta tL)u^l$$
$$\therefore\ u(t_n) = \left((I - \tfrac{1}{2}\Delta tL)^{-1}(I + \tfrac{1}{2}\Delta tL)\right)^nu_0$$

To apply boundary conditions to these methods, we alter the Laplacian and add terms as before, such that the Laplacian reflects the boundary conditions we desire. This is accomplished by the substitution of the Laplacian

$$Lu^l \ \to\ Lu^l + Au^l + b$$

For the heat PDE,

$$u^{l+1} = u^l + \tfrac{1}{2}\Delta t\left[(L + A)(u^l + u^{l+1}) + 2b\right]$$
$$(I - \tfrac{1}{2}\Delta t(L + A))u^{l+1} = (I + \tfrac{1}{2}\Delta t(L + A))u^l + \Delta tb$$
$$u^{l+1} = (I - \tfrac{1}{2}\Delta t(L + A))^{-1}\left[(I + \tfrac{1}{2}\Delta t(L + A))u^l + \Delta tb\right]$$

Where for Dirichlet conditions

$$A = 0, \qquad b = \frac{1}{h^2}(u_1e_1 + u_Ne_N)$$

¹² I did this because the vector notation I had been using conflicts with the superscripts
And for Neumann conditions

$$A = \frac{1}{h^2}(\delta_{1,2} + \delta_{N,N-1}), \qquad b = \frac{2}{h}\left((\partial_xu)_Ne_N - (\partial_xu)_1e_1\right)$$

These may all be naturally generalized to multiple dimensions using the tensor language developed earlier.

Example implementation of Crank-Nicolson Method

Here I will examine usage of the Crank-Nicolson method to solve the minority carrier diffusion equation, which describes the evolution of the concentration of n-type carriers in a p-doped semiconductor. First, we must modify the form of the equation to reflect the new PDE, where $u = \Delta n_p$

$$\frac{\partial u}{\partial t} = D_N\frac{\partial^2u}{\partial x^2} - \frac{u}{\tau_n} + G_L$$

In discretized form,

$$D_N\frac{\partial^2u}{\partial x^2} - \frac{u}{\tau_n} + G_L \ \to\ D_NLu - \frac{u}{\tau_n} + G_L$$

By applying boundary conditions to the Laplacian and correcting for the $u/\tau_n$ term at the boundary, we have

$$\left(D_N(L + A) - \frac{1}{\tau_n}\left(I - (\delta_{1,1} + \delta_{N,N})\right)\right)u + (D_Nb + G_L)$$

Accordingly, implementation of the Crank-Nicolson method has the following structure

$$M = D_N(L + A) - \frac{1}{\tau_n}\left(I - (\delta_{1,1} + \delta_{N,N})\right)$$

$$u^{l+1} = \left(I - \tfrac{1}{2}\Delta tM\right)^{-1}\left[\left(I + \tfrac{1}{2}\Delta tM\right)u^l + \Delta t(D_Nb + G_L)\right]$$

We will now solve this numerically as an equation of evolution for the following conditions ($H(x)$ is the Heaviside function)

$$x\in[0,5], \quad D_N = 10, \quad \tau_n = 0.1$$
$$t\in[0,0.3], \quad G_L = 0.1$$
$$u_0(x) = H(x-3), \quad u(0) = 1, \quad u(5) = 0$$

%% Crank-Nicolson Minority Carrier Demo

%% Create functions for Kronecker delta, and unit vector
% Kronecker delta
kdelt = @(i,j,N) sparse(i,j,1,N,N);

% Unit vector
e = @(i,N) sparse(i,1,1,N,1);

%% Set problem parameters
Figure 9: Minority carrier distribution as a function of time, solved with the Crank-Nicolson method.

xLen = 5;   % Length in x direction
N = 50;     % Number of points for x
u_p = 0;
D_N = 10;
tau = .1;
G_L = 0.1;
t_max = .3;

% Create grid
x = linspace(0,xLen,N)';
h = x(2) - x(1);

% Define time step size
dt = h^2/6;

% Define the Laplacian
Lx = 1/h^2 * sparse(toeplitz([-2 1 zeros(1,N-2)]));
% Implementation of Neumann conditions on both sides
%A = 1/h^2 * (kdelt(1,2,N) + kdelt(N,N-1,N));
A = zeros(N);
%b = 2/h * (u_p * e(N,N) - u_p * e(1,N));
b = 1/h^2*(1*e(1,N) + .01*e(N,N));

% Derivative matrix
M = D_N * (Lx + A) - 1/tau*(eye(N) - (kdelt(1,2,N) + kdelt(N,N-1,N)));

% Time vector
t = 0:dt:t_max;

% Initial condition
u0 = @(x) 1*(x < xLen/2);
% u0 = @(x) 1 + x*0;

u = zeros(N,length(t));
u(:,1) = u0(x);

for i = 2:length(t)
    uTmp = (eye(N) + 0.5*dt*M)*u(:,i-1) + dt*(D_N*b + G_L);
    u(:,i) = (eye(N) - 0.5*dt*M)\uTmp;
end

figure, hold on
colormap hot
color = hot(length(t));
for i = 1:5:length(t)
    f = t(i)/t_max;
    plot(x,u(:,i),'LineWidth',2,'Color',color(length(t)+1-i,:))
end
colorbar('TickLabels',linspace(t_max,0,6),...
    'Ticks',(0:5)/5,'Direction','reverse')
xlabel('x')
ylabel('\Delta n_p')
set(gca,'Color',.5*[1 1 1])
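Circling back to the explicit scheme of 5.4.1 for contrast (and standing in for the Explicit Method Example named there), here is a minimal sketch of my own of the forward Euler (FTCS) update for the simplified heat PDE, with zero Dirichlet boundaries and arbitrary parameters:

% Explicit (FTCS) update u^{l+1} = u^l + dt*L*u^l for du/dt = d2u/dx2
N = 50; h = 1/(N-1);
x = linspace(0,1,N)';
L = (1/h^2)*toeplitz([-2 1 zeros(1,N-2)]);   % Zero Dirichlet boundaries
dt = h^2/4;                                  % Safely inside dt <= h^2/2
u = exp(-100*(x-0.5).^2);                    % Gaussian initial condition
for l = 1:200
    u = u + dt*(L*u);                        % One explicit step
end
plot(x,u)

Raising dt above $h^2/2$ here makes the highest spatial mode grow from step to step, which is exactly the instability discussed in 5.4.1.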
6 The Finite Element Method

The finite element method is one of the most powerful numerical methods for solving PDEs. Not only does it allow for finer control over the discretization of the domain, but it also accommodates highly irregular geometries, such as those seen in most engineering applications. It is, however, much more mathematically complex than the finite-difference method.

6.1 General Approach

The application of the finite element method is usually divided into six steps:

1. Discretization of the domain
2. Generation of element equations
3. Assembly of the system
4. Application of boundary conditions to the system
5. Solution of the system
6. Post-processing

Discretization As was the case with the finite-difference methods, it is important to discretize the domain such that the eventual system of equations will be finite in size and thereby solvable using a numerical method. The simplest and cheapest-to-compute methods typically employ objects with planar geometry:

• 1D: Lines
• 2D: Quadrilaterals, triangles
• 3D: Parallelepipeds, triangular prisms

Shared vertices between these objects are referred to as nodes, and the planes that separate them are called nodal planes.

Development of Equations After a choice of element is made, elements are drawn onto the system such that they form a kind of mesh. For piecewise polynomial discretizations, interpolating polynomials are found as a linear combination of Lagrange polynomials $N_i$ of degree $n$ (with $n = 1$ for piecewise planar/linear interpolation), where $d \equiv \dim$ is the dimension of the domain:

$$u = \sum_{i=1}^{d+1} u_i N_i$$

It should be noted that for nonlinear interpolating polynomials, especially in higher dimensions, this becomes much more complex. A short illustration of the linear one-dimensional case follows.
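To make the linear one-dimensional case concrete, the following sketch (an added illustration; the element endpoints and nodal values are arbitrary choices, not from the original derivation) evaluates the two Lagrange shape functions on a single element and the interpolant they produce.

%% Linear Lagrange shape functions on a single 1D element (illustration)
x1 = 0.2; x2 = 0.7;   % element endpoints (arbitrary)
u1 = 1.0; u2 = 0.4;   % nodal values (arbitrary)

N1 = @(x) (x2 - x)/(x2 - x1);   % N1(x1) = 1, N1(x2) = 0
N2 = @(x) (x - x1)/(x2 - x1);   % N2(x1) = 0, N2(x2) = 1

% The interpolant u = u1*N1 + u2*N2 is exactly the straight line
% through (x1,u1) and (x2,u2)
u = @(x) u1*N1(x) + u2*N2(x);

xx = linspace(x1,x2,100);
plot(xx,u(xx),'LineWidth',2), xlabel('x'), ylabel('u')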
Assembly and Application of Boundary Conditions We aim to write the above as a system of equations of the form

$$A u = b$$

where $A$ is the assemblage property matrix and $b$ is the vector which represents outside forces. As was done in the finite-difference section, we then modify this system of equations to apply the boundary conditions.

Solution and Postprocessing Ideally, the system may be assembled such that the resulting system is linear and may be solved with an appropriate method. The results may then be viewed either as raw data output or visualized in a more polished form.

6.2 One-Dimensional Application to Elliptic ODEs

Here, I will develop the steady-state solution of the 1D minority carrier equation using finite elements:

$$0 = D_N \frac{d^2 u}{dx^2} - \frac{u}{\tau_n} + G_L$$

Discretization For demonstrative purposes, I will be using a random discretization.

Development of Equations Mirroring the derivation in Chapra for Poisson's equation, we begin by asserting that an approximate solution $\tilde{u}$ leaves behind a residual $R$:

$$R = D_N \frac{d^2 \tilde{u}}{dx^2} - \frac{\tilde{u}}{\tau_n} + G_L$$

Using Galerkin's method, we then opt to constrain the parameter space by minimization of the residual weighted by the interpolating basis functions:

$$\int_D R\, N_i\, dx = 0$$

where $D = [x_a, x_b]$ is the subinterval of consideration. By substitution of the residual, we factor this into three separate integrals:

$$\int_D \left(D_N \frac{d^2\tilde{u}}{dx^2} - \frac{\tilde{u}}{\tau_n} + G_L\right) N_i\, dx = D_N \int_D \frac{d^2\tilde{u}}{dx^2} N_i\, dx - \frac{1}{\tau_n}\int_D \tilde{u}\, N_i\, dx + G_L \int_D N_i\, dx$$

We first direct our attention to the integral of the second derivative. By integration by parts, we find that

$$\int_D \frac{d^2\tilde{u}}{dx^2} N_i\, dx = \left[N_i(x)\frac{d\tilde{u}}{dx}\right]_D - \int_D \frac{d\tilde{u}}{dx}\frac{dN_i}{dx}\, dx$$

where, per the definition of $N_i$,

$$\left[N_i(x)\frac{d\tilde{u}}{dx}\right]_D = \begin{cases} -\dfrac{d\tilde{u}(x_1)}{dx}, & i = 1 \\[2mm] \dfrac{d\tilde{u}(x_2)}{dx}, & i = 2 \end{cases}$$
Recognizing the linear shape of $N_i$, we may also assert that

$$\int_{x_1}^{x_2} \frac{d\tilde{u}}{dx}\frac{dN_i}{dx}\, dx = \frac{(-1)^i}{h}(-u_1 + u_2), \quad i \in \{1, 2\}$$

Moving into something that was not done in Chapra, we now examine the middle term. Noting that $\tilde{u} = u_1 N_1(x) + u_2 N_2(x)$, we rewrite the integral:

$$\int_D \tilde{u}\, N_i\, dx = u_1 \int_D N_1 N_i\, dx + u_2 \int_D N_2 N_i\, dx$$

where, with $h = b - a$,

$$\int_a^b N_i N_j\, dx = \frac{h}{6}(1 + \delta_{ij}), \quad i, j \in \{1, 2\}$$

and therefore

$$\int_D \tilde{u}\, N_i\, dx = \frac{h}{6}\begin{cases} 2u_1 + u_2, & i = 1 \\ u_1 + 2u_2, & i = 2 \end{cases}$$

The final integrand is simply evaluated:

$$\int_D N_i\, dx = \frac{1}{2}h$$

The system of equations governing the element may then be assembled:

$$0 = D_N\begin{pmatrix} -\dfrac{d\tilde{u}(x_1)}{dx} \\[2mm] \dfrac{d\tilde{u}(x_2)}{dx} \end{pmatrix} - \frac{D_N}{h}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \end{pmatrix} - \frac{1}{\tau_n}\frac{h}{6}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \end{pmatrix} + G_L\frac{h}{2}\begin{pmatrix} 1 \\ 1 \end{pmatrix}$$

and rewritten as an inhomogeneous linear matrix-vector equation for the element:

$$\left[\frac{D_N}{h}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} + \frac{h}{6\tau_n}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\right]\begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = D_N\begin{pmatrix} -\dfrac{d\tilde{u}(x_1)}{dx} \\[2mm] \dfrac{d\tilde{u}(x_2)}{dx} \end{pmatrix} + \frac{h}{2}\begin{pmatrix} G_L \\ G_L \end{pmatrix}$$

Assembly At this time, we consider the containing system. To begin, we rewrite the last equation to treat it as if it were the $i$th element in the system, strung between points $i$ and $i+1$:

$$\left[\frac{D_N}{h_i}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} + \frac{h_i}{6\tau_n}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\right]\begin{pmatrix} u_i \\ u_{i+1} \end{pmatrix} = D_N\begin{pmatrix} -\dfrac{d\tilde{u}(x_i)}{dx} \\[2mm] \dfrac{d\tilde{u}(x_{i+1})}{dx} \end{pmatrix} + \frac{h_i}{2}\begin{pmatrix} G_L \\ G_L \end{pmatrix}$$

To assemble the system, we first invoke the following shorthand:

$$A_i = \frac{D_N}{h_i}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} + \frac{h_i}{6\tau_n}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \qquad b_i = k_i + \frac{h_i}{2}\begin{pmatrix} G_L \\ G_L \end{pmatrix}$$

where $k_i$ denotes a variable argument that is only nonzero for $i = 1$ and $i = N - 1$, due to the cancellation of the interior boundary terms. Next, we recognize that a system of $N - 1$ elements will consist of $N$ points. To generate the full system $Au = b$, we start with $A$ and $b$ as an $N$-dimensional zero matrix and zero vector, respectively. Then, we invoke the following algorithm, iterating over $i \in \{k \in \mathbb{Z}^+ : k \leq N - 1\}$, where subscript bracket notation denotes sub-matrix indexing:

$$A_{[i,i+1],[i,i+1]} = A_{[i,i+1],[i,i+1]} + A_i$$
$$b_{[i,i+1]} = b_{[i,i+1]} + b_i$$
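The $\frac{h}{6}(1 + \delta_{ij})$ identity used for the middle term above is easy to verify numerically. The following sketch (an added check, not part of the original derivation; the element endpoints are arbitrary) compares MATLAB's adaptive quadrature against the closed form.

%% Numerical check of  int_a^b Ni*Nj dx = h/6*(1 + delta_ij)
a = 1.3; b = 2.1; h = b - a;              % arbitrary element endpoints
Nf = {@(x) (b - x)/h, @(x) (x - a)/h};    % linear shape functions N1, N2

for i = 1:2
    for j = 1:2
        quadVal  = integral(@(x) Nf{i}(x).*Nf{j}(x), a, b);
        exactVal = h/6 * (1 + (i == j));
        fprintf('i=%d, j=%d: quadrature = %.6f, closed form = %.6f\n', ...
            i, j, quadVal, exactVal);
    end
end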
[Figure 10: Solution by finite element method at increasingly finer random discretizations; curves for N = 5, 10, 15, 20, 25 points over x from 0 to 5, Δn_p from 0 to 1.]

Boundary Conditions By inspection, as with Chapra's example, the inner boundary terms cancel, and thus only the external boundary conditions need to be applied in the case of Neumann conditions. For Dirichlet conditions, there is a modification at the endpoints such that the $k_i$ term is generated by a central difference:

$$b_1 = \frac{D_N}{2h_1}\begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \end{pmatrix} + \frac{D_N}{2h_1}\begin{pmatrix} u_a \\ 0 \end{pmatrix}$$
$$b_{N-1} = \frac{D_N}{2h_{N-1}}\begin{pmatrix} 0 & 0 \\ -1 & 0 \end{pmatrix}\begin{pmatrix} u_{N-1} \\ u_N \end{pmatrix} + \frac{D_N}{2h_{N-1}}\begin{pmatrix} 0 \\ u_b \end{pmatrix}$$

Evidently, this contributes terms that must modify $A_1$ and $A_{N-1}$.

Solution By numerical solution of the assembled system, selecting 5 random discretizations with $N \in \{5, 10, 15, 20, 25\}$ points, we find that the solution quickly converges to the steady state found by the Crank-Nicolson scheme after sufficient time has passed. The following code implements this solution.

%% Finite Element Solution of Minority Carrier Equation
%% Create functions for Kronecker delta and unit vector
% Kronecker delta
kdelt = @(i,j,N) sparse(i,j,1,N,N);

% Unit vector
e = @(i,N) sparse(i,1,1,N,1);

%% Set problem parameters
xLen = 5;      % Length in x direction
N = 5;         % Number of points for x
D_N = 10;
tau = .1;
G_L = 0.1;
t_max = .3;
ua = 1;
ub = 0.01;

figure, hold on
colormap hot
color = hot(5);

NN = 5:5:25;
for j = 1:length(NN)
    N = NN(j);

    % Create random grid for inside points
    x_ins = sort(rand(N,1))*xLen;
    x = [0; x_ins; xLen];

    % Define element length, element matrix, and element load vector
    h = @(i) x_ins(i+1) - x_ins(i);
    A = @(i) D_N/h(i) * [1 -1; -1 1] + h(i)/(6*tau) * [2 1; 1 2];
    b = @(i) h(i)/2 * G_L*[1;1];

    AA = zeros(N);
    bb = zeros(N,1);
    for i = 1:(N-1)
        % Build boundary-condition corrections where they apply
        AAbcs = zeros(2);
        bbbcs = zeros(2,1);
        if i == 1
            AAbcs = -D_N/(2*h(i))*[0 -1; 0 0];
            bbbcs = D_N/(2*h(i))*[ua; 0];
        elseif i == N-1
            AAbcs = -D_N/(2*h(i))*[0 0; -1 0];
            bbbcs = D_N/(2*h(i))*[0; ub];
        end

        % Apply ith element to system matrix
        AA([i i+1],[i i+1]) = AA([i i+1],[i i+1]) + A(i) + AAbcs;
        bb([i i+1]) = bb([i i+1]) + b(i) + bbbcs;
    end
    u = AA\bb;
    plot(x,[ua;u;ub],'-o','LineWidth',2,'Color',color(length(NN)+1-j,:))
end
colorbar('TickLabels',linspace(25,0,5),...
    'Ticks',(0:4)/4,'Direction','reverse')
xlabel('x')
ylabel('\Delta n_p')
set(gca,'Color',.5*[1 1 1])

6.3 Finite Element Methods in Higher Dimensions

Higher-dimensional finite element methods may be devised in a manner very similar to the one-dimensional case; however, doing so quickly becomes much more complex. Chapra provides a nice introduction to the derivation of the element equations, but truly understanding the underlying mathematics requires considerably more sophistication than Chapra assumes.

7 Afterword

I would like to thank Dr. Gary Litton for allowing me the option to complete this report. Along the way, I have strengthened my numerical abilities considerably and made concrete some things that I had been starting to forget. I also learned a few more methods of solving ODEs than I had known in the past, and began to see a deeper application of tensor analysis, a field that I had not previously expected to intersect substantially with numerical differential equations. I have also begun to be able to formulate solution methods creatively, having been afforded the opportunity to think critically about each method and how the methods are interrelated. Truly, I feel that I now better understand numerical methods and their application to the solution of differential equations.