ADVANCED CONTROL SYSTEM DESIGN OF
AIRCRAFT AND SIMULATION OF ITS
TRAJECTORY
INTERNSHIP PROJECT REPORT
By
SIDDHARTH PUJARI
ROLL NO.-010
NIT ROURKELA
Department of Mathematics
Indian Institute of Space Science and Technology (IIST)
Thiruvananthapuram
December – 2015
BONAFIDE CERTIFICATE
This is to certify that this project report entitled "ADVANCED CONTROL SYSTEM DESIGN OF AIRCRAFT AND SIMULATION OF ITS TRAJECTORY" submitted to
Indian Institute of Space Science and Technology, Thiruvananthapuram, is a bonafide
work done by Siddharth Pujari under my supervision from 10th December 2015 to 31st
December 2015.
Dr. Raju K. George
Dean (R and D)
Sr. Professor and Head, Department of Mathematics
Indian Institute of Space Science and Technology (IIST)
Valiamala P.O.
Trivandrum 695547
Place : Thiruvananthapuram
Date : 29/12/2015
DECLARATION BY AUTHOR
This is to declare that this report has been written by me. No part of the report is plagiarized
from other sources. All information included from other sources has been duly
acknowledged. I am aware that if any part of the report is found to be plagiarized, I shall take full
responsibility for it.
Siddharth Pujari
NIT Rourkela
Place : Thiruvananthapuram
Date : 29/12/2015
ABSTRACT
In this report I have mainly focused on the controllability of linearized systems using their
state space models. As an example, an aircraft model is simulated using Matlab. The
derivation of the state space model itself is not considered here; given just the control
matrices A and B, we can readily test the controllability of the aircraft and plot the
corresponding graph. The control matrices are taken for the longitudinal motion of the
aircraft. Apart from this, I have also covered the theoretical side of advanced control
design: the stability of linear systems, linearization of systems of equations, computing
the transition matrix, the solution of the controlled system using the transition matrix, and
Kalman's criterion.
The simulation in Matlab took a substantial amount of time, owing to the time-consuming
computation of the matrix exponential, the controllability Grammian matrix, and the
controller. In fact, I had to run the code overnight just to get the desired graph. This was
the major problem I encountered while doing the project. But I ultimately obtained
satisfactory results, and the system could be controlled.
TABLE OF CONTENTS

ABSTRACT
NOMENCLATURE
CONTENTS
1. Brief Review of Differential Equations
1.2. Directional Fields
2. Systems of Linear Differential Equations
2.1. Introduction to Systems of Linear Differential Equations
2.2. Review of Matrix Theory
2.3. Eigenvalues and Eigenvectors
2.4. Stability of Linear Systems
3. Existence and Uniqueness Theorem
3.1. Picard's Theorem
3.2. Picard Iteration for IVP
3.3. Cauchy Peano's Theorem
4. Linearization of Non-Linear Models
4.1. Theory
4.2. Example
5. Controllability of Linear Systems
5.1. Motivation behind Controllability
5.2. Kalman's Criterion
5.3. The Matrix Exponential (Transition Matrix)
5.3.1. Matrix Exponential in Matlab
5.4. Solution of the Controlled System using Transition Matrix
5.5. Kalman Condition Revisited (Proof)
6. Controllability of Aircraft
6.1. Introduction
6.2. Simulation in Matlab
6.2.1. Code for Simulation
6.2.2. Output
1. BRIEF REVIEW OF DIFFERENTIAL EQUATIONS
A differential equation is an equation which contains derivatives of the unknown.
(Usually it is a mathematical model of some physical phenomenon.)
Two classes of differential equations:
• O.D.E. (ordinary differential equations): linear and non-linear;
• P.D.E. (partial differential equations).
Some concepts related to differential equations:
• System: a collection of several equations with several unknowns.
• Order of the equation: the highest order of derivatives.
• Linear or non-linear equations: Let y(t) be the unknown.
Then
a₀(t) y^(n) + a₁(t) y^(n−1) + · · · + aₙ(t) y = g(t)   (∗)
is a linear equation. If the equation cannot be written as (∗), then it is non-linear.
Two things you must know: identify the linearity and the order of an equation.
Example 1. Let y(t) be the unknown. Identify the order and linearity of the following
equations.
(a). (y + t)y′ + y = 1
(b). 3y′ + (t + 4)y = t² + y′′
(c). y′′′ = cos(2ty)
(d). y^(4) + √t y′′′ + cos t = eᵗ

Problem (a): order 1, non-linear.
Problem (b): order 2, linear.
Problem (c): order 3, non-linear.
Problem (d): order 4, non-linear.
What is a solution? A solution is a function that satisfies the equation and whose derivatives
exist.
Example 2. Verify that y(t) = e^{at} is a solution of the IVP (initial value problem)
y′ = ay,  y(0) = 1.
Here y(0) = 1 is called the initial condition.
Answer. Let's check whether y(t) satisfies the equation and the initial condition:
y′ = a e^{at} = ay,  y(0) = e⁰ = 1.
Both hold, so it is a solution.
Example 3. Verify that y(t) = 10 − c e^{−t}, with c a constant, is a solution to y′ + y = 10.
Answer.
y′ = −(−c e^{−t}) = c e^{−t},  y′ + y = c e^{−t} + 10 − c e^{−t} = 10. OK.
Let’s try to solve one equation.
Example 4. Consider the equation
(t + 1)y′ = t².
We can rewrite it as (for t ≠ −1)
y′ = t²/(t + 1) = (t² − 1 + 1)/(t + 1) = ((t + 1)(t − 1) + 1)/(t + 1) = (t − 1) + 1/(t + 1).
To find y, we need to integrate y′:
y = ∫ y′(t) dt = ∫ [ (t − 1) + 1/(t + 1) ] dt = t²/2 − t + ln|t + 1| + c,
where c is an arbitrary integration constant. This means there are infinitely many
solutions.
Additional condition: the initial condition y(0) = 1 (meaning: y = 1 when t = 0). Then
y(0) = 0 + ln|1| + c = c = 1, so
y(t) = t²/2 − t + ln|t + 1| + 1.
So for equation like 𝑊 ′ = 𝑓(𝑡), we can solve it by integration: 𝑊 = ∫ 𝑓(𝑡)𝑑𝑡.
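Integrations like the one in Example 4 can be checked with a computer algebra system. A small Python/sympy sketch (my own illustration, not part of the report):

```python
import sympy as sp

t, c = sp.symbols('t c')

# Right-hand side of Example 4 after the rewrite: y' = (t - 1) + 1/(t + 1)
yprime = (t - 1) + 1/(t + 1)

# Antiderivative; sympy leaves out the arbitrary constant c
y = sp.integrate(yprime, t)              # t**2/2 - t + log(t + 1)

# Impose the initial condition y(0) = 1 to determine c
c_val = sp.solve(sp.Eq(y.subs(t, 0) + c, 1), c)[0]
y_ivp = y + c_val                        # t**2/2 - t + log(t + 1) + 1
```

(sympy writes log(t + 1) without the absolute value, which agrees with the solution above for t > −1.)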
1.2. Directional Fields
Directional field: for first order equations y′ = f(t, y).
Interpret y′ as the slope of the tangent to the solution y(t) at the point (t, y) in the t–y plane.
• If 𝑊 ′ = 0, the tangent line is horizontal;
• If 𝑊 ′ > 0, the tangent line goes up;
• If 𝑊 ′ < 0, the tangent line goes down;
• The value of |𝑊 ′ | determines the steepness.
Example 5. Consider the equation y′ = (1/2)(3 − y). We know the following:
• If y = 3, then y′ = 0: flat slope;
• If y > 3, then y′ < 0: down slope;
• If y < 3, then y′ > 0: up slope.
See the directional field below (with some solutions sketched in red):
We note that, if y(0) = 3, then y(t) = 3 is the solution.
Asymptotic behavior: as t → ∞, we have y → 3.
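The sign pattern of the slopes can be spot-checked numerically; a minimal Python sketch (my own addition, not from the report):

```python
def slope(y):
    """Right-hand side of Example 5: y' = (1/2)(3 - y)."""
    return 0.5 * (3 - y)

# Flat slope at the equilibrium y = 3; down slope above it, up slope below it
checks = (slope(3) == 0.0, slope(4) < 0, slope(2) > 0)
```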
Remarks:
(1). For the equation y′(t) = a(b − y) with a > 0, the behavior is similar to Example 5,
where b = 3 and a = 1/2. The solution approaches y = b as t → +∞.
(2). Now consider y′(t) = a(b − y), but with a < 0. This changes the sign of y′.
We now have
– If 𝑊(0) = 𝑏, then 𝑊(𝑡) = 𝑏;
– If 𝑊(0) > 𝑏, then 𝑊 → +∞ as 𝑡 → +∞;
– If 𝑊(0) < 𝑏, then 𝑊 → −∞ as 𝑡 → +∞.
Example 6: Let 𝑊 ′ (𝑡) = (𝑊 − 1)(𝑊 − 5). Then,
• If 𝑊 = 1 or 𝑊 = 5, then 𝑊 ′ = 0.
• If 𝑊 < 1, then 𝑊 ′ > 0;
• If 1 < 𝑊 < 5, then 𝑊 ′ < 0;
• If 𝑊 > 5, then 𝑊 ′ < 0.
Directional field looks like:
What can we say about the solutions?
• If 𝑊(0) = 1, then 𝑊(𝑡) = 1;
• If 𝑊(0) = 5, then 𝑊(𝑡) = 5;
• If 𝑊(0) < 1, then 𝑊 → 1 as 𝑡 → +∞;
• If 1 < 𝑊(0) < 5, then 𝑊 → 1 as 𝑡 → +∞;
• If 𝑊(0) > 5, then 𝑊 → +∞ as 𝑡 → +∞.
Remark: If we have 𝑊 ′ (𝑡) = 𝑓(𝑊), and for some 𝑊0 we have 𝑓(𝑊0) = 0, then, 𝑊(𝑡) =
𝑊0 is a solution.
Example 7: Given the plot of a directional field, which of the following ODEs could have
generated it?
(a). y′(t) = (y − 2)(y − 4)
(b). y′(t) = (y − 1)²(y − 3)
(c). y′(t) = (y − 1)(y − 3)²
(d). y′(t) = −(y − 1)(y − 3)²
We first check the constant solutions, y = 1 and y = 3. Then (a) cannot be the answer.
Next, we check the sign of y′ on the intervals y < 1, 1 < y < 3, and y > 3, to match the
directional field. We find that (c) could be the equation.
2. Systems of Linear Differential Equations
2.1. Introduction to Systems of Linear Differential Equations
Right after the invention of calculus, differential equations replaced algebraic equations
(which in turn replaced counting) as the major tool in mathematically modelling
everything. A single differential equation (also called “scalar differential equation”) is a
mathematical model of the time-evolution/spatial variation of one single substance (can
be the population of a single species, the amount of a single chemical, etc.). A system of
differential equations, on the other hand, models the time-evolution of more than one quantity.
One example is Newton's second law:
m d²x/dt² = F   (1)
which looks like a single equation but is actually a system, because both x and F have more
than one component. Traditionally, systems of ordinary differential equations arose from the
study of mechanics. Modern examples also abound, especially from biology, sociology,
economics, etc.
The general form of a system involving n unknown functions is
ẋ₁ = f₁(x₁, . . . , xₙ)   (2)
ẋ₂ = f₂(x₁, . . . , xₙ)   (3)
⋮
ẋₙ = fₙ(x₁, . . . , xₙ)   (4)
where the evolution of n quantities is described. Such a system is usually referred to as
an n × n first order system.
Remark 1. When n = 2 or 3, x, y (respectively x, y, z) are often used instead of x₁, . . . , xₙ.
When all f₁, . . . , fₙ are linear in their variables x₁, . . . , xₙ, the system is called linear;
otherwise it is called nonlinear. So an n × n first order linear system has the general form
ẋ₁ = a₁₁(t)x₁ + · · · + a₁ₙ(t)xₙ + g₁(t)   (5)
⋮
ẋₙ = aₙ₁(t)x₁ + · · · + aₙₙ(t)xₙ + gₙ(t)   (6)
If furthermore all aᵢⱌ(t) are constants, that is,
ẋ₁ = a₁₁x₁ + · · · + a₁ₙxₙ + g₁(t)   (7)
⋮
ẋₙ = aₙ₁x₁ + · · · + aₙₙxₙ + gₙ(t)   (8)
the system is said to have "constant coefficients". As usual, when g₁(t) = · · · = gₙ(t) = 0,
the above linear systems are called "homogeneous".
Remark 2. In almost all practical cases, the first order system will be nonlinear. There is
no systematic way to solve all general nonlinear systems. In fact, even for an n × n first
order linear system, no simple solution formula exists (unless n = 1, which can be
solved through application of an appropriate integrating factor). Only linear systems with
constant coefficients enjoy good formulas for solutions.
Nevertheless, as we will see soon, one important way to understand the general nonlinear
system is to derive from it one or more related linear, constant-coefficient systems. Once
a good understanding is reached for these constant-coefficient systems, the behaviours of
the solutions to the original nonlinear problem often can be obtained.
Write out the general form of a system of first order ODEs, with x₁, x₂ as unknowns.
Given
a y′′ + b y′ + c y = g(t),  y(0) = α, y′(0) = β,
we can do a variable change: let
x₁ = y,  x₂ = x₁′ = y′.
Then
x₁′ = x₂,  x₁(0) = α
x₂′ = y′′ = (1/a)(g(t) − b x₂ − c x₁),  x₂(0) = β.
Observation: any 2nd order equation can be rewritten as a system of 2 first order
equations.
Example 1. Given
𝑊 ′′ + 5𝑊 ′ − 10𝑊 = 𝑠𝑖𝑛 𝑡, 𝑊(0) = 2, 𝑊′ (0) = 4
Rewrite it into a system of first order equations: let 𝑥1 = 𝑊 and 𝑥2 = 𝑊 ′ = 𝑥′1 , then
𝑥′1 = 𝑥2 𝑥1 (0) = 2
𝑥′2 = 𝑊 ′′ = −5𝑥2 + 10𝑥1 + sin t 𝑥2 (0) = 4
We can do the same thing for any higher order equation. For the n-th order differential
equation
y^(n) = F(t, y, y′, · · · , y^(n−1)),
define the variable change
x₁ = y, x₂ = y′, . . . , xₙ = y^(n−1).
We get
x₁′ = y′ = x₂
x₂′ = y′′ = x₃
⋮
xₙ₋₁′ = y^(n−1) = xₙ
xₙ′ = y^(n) = F(t, x₁, x₂, · · · , xₙ)
with the corresponding initial conditions.
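This reduction is exactly the form numerical integrators expect. A Python sketch (using scipy's solve_ivp; my own illustration, not part of the report) applying it to Example 1, y′′ + 5y′ − 10y = sin t with y(0) = 2, y′(0) = 4:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    # x[0] = y, x[1] = y'; then x1' = x2 and x2' = -5*x2 + 10*x1 + sin(t)
    return [x[1], -5.0 * x[1] + 10.0 * x[0] + np.sin(t)]

sol = solve_ivp(rhs, (0.0, 1.0), [2.0, 4.0], rtol=1e-8, atol=1e-10)
y_final = sol.y[0, -1]   # y(1); it grows, since the equation has an unstable mode
```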
(Optional) Reversely, we can convert a 1st order system into a high order equation.
2.2. Review of Matrix Theory
A matrix of size m × n: A = (aᵢⱌ), 1 ≀ i ≀ m, 1 ≀ j ≀ n.
We consider only square matrices, i.e., m = n, in particular for n = 2 and 3.
Basic operations: A, B are two square matrices of size n.
• Addition: 𝐎 + 𝐵 = (𝑎𝑖𝑗 ) + (𝑏𝑖𝑗 )
• Scalar multiple: 𝛌𝐎 = (𝛌 · 𝑎𝑖𝑗 )
• Transpose: Aᵀ swaps aᵢⱌ with aⱌᵢ; (Aᵀ)ᵀ = A.
• Product: for A · B = C, the entry cᵢⱌ is the inner product of the i-th row of A and the
j-th column of B. Example:

[ a  b ] [ x  y ]   [ ax + bu   ay + bv ]
[ c  d ] [ u  v ] = [ cx + du   cy + dv ]
We can express system of linear equations using matrix product.
Example 1. The system
x₁ − x₂ + 3x₃ = 4
2x₁ + 5x₃ = 0
x₂ − x₃ = 7
can be expressed as:

[ 1  −1   3 ] [ x₁ ]   [ 4 ]
[ 2   0   5 ] [ x₂ ] = [ 0 ]
[ 0   1  −1 ] [ x₃ ]   [ 7 ]
Some properties:
• Identity 𝐌: I = 𝑑𝑖𝑎𝑔(1, 1,· · · ,1), 𝐎𝐌 = 𝐌𝐎 = 𝐎.
• Determinant det(A):

det [ a  b ]
    [ c  d ] = ad − bc,

det [ a  b  c ]
    [ u  v  w ]
    [ x  y  z ] = avz + bwx + cuy − cvx − awy − buz.
• Inverse inv(A) = A⁻¹: A⁻¹A = AA⁻¹ = I.
• The following statements are all equivalent: (optional)
– (1) 𝐎 is invertible;
– (2) 𝐎 is non-singular;
– (3) 𝑑𝑒𝑡 (𝐎) ≠ 0;
– (4) row vectors in 𝐎 are linearly independent;
– (5) Column vectors in 𝐎 are linearly independent.
– (6) All eigenvalues of 𝐎 are non-zero.
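These equivalences can be spot-checked numerically on the 3 × 3 matrix of Example 1; a Python/numpy sketch (my own addition, not part of the report):

```python
import numpy as np

A = np.array([[1.0, -1.0,  3.0],
              [2.0,  0.0,  5.0],
              [0.0,  1.0, -1.0]])

d = np.linalg.det(A)              # non-zero, so A is invertible
r = np.linalg.matrix_rank(A)      # 3: rows and columns are independent
eigs = np.linalg.eigvals(A)       # all eigenvalues are non-zero
A_inv = np.linalg.inv(A)          # exists since det(A) != 0
```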
2.3. Eigenvalues and Eigenvectors
Eigenvalues and eigenvectors of A (illustrated here for A of size 2 × 2):
λ: a scalar value, v⃗: a column vector, v⃗ ≠ 0.
If Av⃗ = λv⃗, then (λ, v⃗) is an (eigenvalue, eigenvector) pair of A.
They are also called an eigen-pair of A.
Remark: If v⃗ is an eigenvector, then αv⃗ for any α ≠ 0 is also an eigenvector, because
A(αv⃗) = αAv⃗ = αλv⃗ = λ(αv⃗).
How to find (λ, v):
A𝑣⃗ − λ𝑣⃗ = 0, (𝐎 − 𝜆𝐌)𝑣⃗⃗⃗⃗⃗ = 0, 𝑑𝑒𝑡(𝐎 − 𝜆𝐌) = 0.
We see that 𝑑𝑒𝑡(𝐎 − 𝜆𝐌) is a polynomial of degree 2 (if 𝐎 is 2 × 2) in λ, and it is also called
the characteristic polynomial of 𝐎. We need to find its roots.
Example 1. Eigenvalues can be complex numbers. Let

A = [ 2  −9 ]
    [ 4   2 ]

Let's first find the eigenvalues:

det(A − λI) = det [ 2 − λ    −9   ]
                  [ 4      2 − λ ] = (2 − λ)² + 36 = 0  ⇒  λ₁,₂ = 2 ± 6i.

We see that λ₂ is the complex conjugate of λ₁. The same happens to the eigenvectors,
i.e., v⃗₂ is the conjugate of v⃗₁, so we only need to find one of them. Take λ₁ = 2 + 6i
and compute v⃗ = (v₁, v₂)ᵀ:

(A − λ₁I)v⃗ = 0,   [ −6i   −9 ] [ v₁ ]
                   [ 4    −6i ] [ v₂ ] = 0.

From the first row, −6i·v₁ − 9v₂ = 0; choose v₁ = 1, so v₂ = −(2/3)i. Hence

v⃗₁ = ( 1, −(2/3)i )ᵀ,   v⃗₂ = ( 1, (2/3)i )ᵀ.
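The same eigen-pairs come out of a numerical solver; a Python/numpy sketch (my own addition, not from the report):

```python
import numpy as np

A = np.array([[2.0, -9.0],
              [4.0,  2.0]])

eigvals, eigvecs = np.linalg.eig(A)              # complex eigen-pairs
lam = sorted(eigvals, key=lambda z: z.imag)      # [2 - 6i, 2 + 6i]

# Each pair satisfies A v = lambda v
residual = A @ eigvecs[:, 0] - eigvals[0] * eigvecs[:, 0]
```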
2.4. Stability of Linear Systems
For the 2 × 2 system
x⃗′ = A x⃗
we see that x⃗ = (0, 0) is the only critical point if A is invertible. In a more general setting,
the system
x⃗′ = A x⃗ − b⃗
has a critical point at x⃗ = A⁻¹b⃗. The type and stability of the critical point are solely
determined by the eigenvalues of A.
Eigenvalues λ₁,₂ — type of critical point — stability:
• Real, λ₁ · λ₂ < 0: saddle point; unstable.
• Real, λ₁ > 0, λ₂ > 0, λ₁ ≠ λ₂: node (source); unstable.
• Real, λ₁ < 0, λ₂ < 0, λ₁ ≠ λ₂: node (sink); asymptotically stable.
• Real, λ₁ = λ₂ = λ: improper node; asymptotically stable if λ < 0, unstable if λ > 0.
• Complex, λ₁,₂ = ±iβ: center; stable but not asymptotically stable.
• Complex, λ₁,₂ = α ± iβ: spiral point; asymptotically stable if α < 0, unstable if α > 0.
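The table can be packaged as a small classifier; a Python sketch (my own addition, not part of the report; the tolerance `tol` for the borderline cases is my own choice):

```python
def classify(l1, l2, tol=1e-12):
    """Type and stability of the critical point of x' = Ax,
    given the eigenvalues l1, l2 of A, following the table above."""
    if abs(l1.imag) > tol or abs(l2.imag) > tol:      # complex pair a +/- i*b
        a = l1.real
        if abs(a) <= tol:
            return ("center", "stable but not asymptotically")
        return ("spiral point",
                "asymptotically stable" if a < 0 else "unstable")
    r1, r2 = l1.real, l2.real                         # real eigenvalues
    if r1 * r2 < 0:
        return ("saddle point", "unstable")
    if abs(r1 - r2) <= tol:
        return ("improper node",
                "asymptotically stable" if r1 < 0 else "unstable")
    if r1 > 0 and r2 > 0:
        return ("node (source)", "unstable")
    return ("node (sink)", "asymptotically stable")
```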
Example 1. We now consider again the prey–predator model, with values set for the
constants:
x′(t) = x(10 − 5y)
y′(t) = y(−6 + x)
which has 2 critical points: (0, 0) and (6, 2). The Jacobian matrix is

J(x, y) = [ 10 − 5y    −5x   ]
          [ y        −6 + x ].

At (0, 0) we have

J(0, 0) = [ 10   0 ]
          [ 0   −6 ],  λ₁ = 10, λ₂ = −6: saddle point, unstable.

At (6, 2) we have

J(6, 2) = [ 0  −30 ]
          [ 2    0 ],  λ₁,₂ = ±i√60: center, stable but not asymptotically stable.
To see more detailed behavior of the model, we compute the two eigenvectors of J(0, 0)
and get v⃗₁ = (1, 0) and v⃗₂ = (0, 1). We sketch the trajectories of solutions in the (x₁, x₂)-
plane in the next plot, where the trajectories rotate around the center counterclockwise.
One can interpret these as "circles of life". In particular, the big circles can be interpreted
as follows: when there are very few predators, the prey grows exponentially, very quickly. As
the population of the prey becomes very large, there is a lot of food for the predators, and this
triggers a sudden growth of the predator population. As the predators increase in number, the
prey population shrinks, until there is very little prey left. Then the predators starve, and
their population decays exponentially (dies out). The cycle continues periodically,
forever!
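The eigenvalue computations at the two critical points can be reproduced numerically; a Python/numpy sketch (my own addition, not from the report):

```python
import numpy as np

def jacobian(x, y):
    """Jacobian of the prey-predator system x' = x(10 - 5y), y' = y(-6 + x)."""
    return np.array([[10.0 - 5.0 * y, -5.0 * x],
                     [y,              -6.0 + x]])

eig_origin = np.linalg.eigvals(jacobian(0.0, 0.0))  # 10 and -6: saddle
eig_center = np.linalg.eigvals(jacobian(6.0, 2.0))  # +/- i*sqrt(60): center
```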
3. Existence and Uniqueness Theorem
3.1. Picard's Theorem
Here we concentrate on the solution of the first order IVP
y′ = f(x, y),  y(x₀) = y₀   (1)
We are interested in the following questions:
1. Under what conditions does a solution to (1) exist?
2. Under what conditions does a unique solution to (1) exist?
Comment: An ODE may have no solution, a unique solution, or infinitely many solutions. For
example, y′² + y² + 1 = 0, y(0) = 1, has no solution. The ODE y′ = 2x, y(0) = 1, has the
unique solution y = 1 + x², whereas the ODE xy′ = y − 1, y(0) = 1, has infinitely many
solutions y = 1 + αx, where α is any real number. (I only state the theorems; for proofs, one
may see 'An Introduction to Ordinary Differential Equations' by E. A. Coddington.)
Theorem 1 (Existence theorem): Suppose that f(x, y) is a continuous function in some region
R = {(x, y) : |x − x₀| ≀ a, |y − y₀| ≀ b},  (a, b > 0).
Since f is continuous in a closed and bounded domain, it is necessarily bounded in R, i.e.,
there exists K > 0 such that |f(x, y)| ≀ K for all (x, y) ∈ R. Then the IVP (1) has at least one
solution y = y(x) defined in the interval |x − x₀| ≀ α, where
α = min{a, b/K}.
(Note that the solution exists possibly in a smaller interval.)
Theorem 2 (Uniqueness theorem): Suppose that f and ∂f/∂y are continuous functions in R
(defined in the existence theorem). Hence, both f and ∂f/∂y are bounded in R, i.e.,
(a) |f(x, y)| ≀ K and (b) |∂f/∂y| ≀ L for all (x, y) ∈ R.
Then the IVP (1) has at most one solution y = y(x) defined in the interval |x − x₀| ≀ α, where
α = min{a, b/K}.
Combining this with the existence theorem, the IVP (1) has a unique solution y = y(x)
defined in the interval |x − x₀| ≀ α.
Comment: Condition (b) can be replaced by a weaker condition, known as a Lipschitz
condition. Thus, instead of continuity of ∂f/∂y, we require
|f(x, y₁) − f(x, y₂)| ≀ L|y₁ − y₂|  for all (x, yᵢ) ∈ R.
If ∂f/∂y exists and is bounded, then it necessarily satisfies a Lipschitz condition. On the other
hand, a function f(x, y) may be Lipschitz continuous while ∂f/∂y does not exist. For example,
f(x, y) = x²|y|, |x| ≀ 1, |y| ≀ 1, is Lipschitz continuous in y, but ∂f/∂y does not exist at
(x, 0).
*Note 1: The existence and uniqueness theorems stated above are local in nature, since the
interval |x − x₀| ≀ α where the solution exists may be smaller than the original interval
|x − x₀| ≀ a where f(x, y) is defined. However, in some cases this restriction can be
removed. Consider the linear equation
y′ + p(x)y = r(x),   (2)
where p(x) and r(x) are defined and continuous in the interval a ≀ x ≀ b. Here
f(x, y) = −p(x)y + r(x). If L = max_{a≀x≀b} |p(x)|, then
|f(x, y₁) − f(x, y₂)| = |−p(x)(y₁ − y₂)| ≀ L|y₁ − y₂|.
Thus, f is Lipschitz continuous in y in the infinite vertical strip a ≀ x ≀ b, −∞ < y <
∞. In this case, the IVP (2) has a unique solution in the original interval a ≀ x ≀ b.
*Note 2: Though the theorems are stated in terms of interior point 𝑥0, the point 𝑥0 could be
left/right end point.
Comment: The conditions of the existence and uniqueness theorem are sufficient but not
necessary. For example, consider
y′ = √y + 1,  y(0) = 0,  x ∈ [0, 1].
Clearly f does not satisfy a Lipschitz condition near the origin, but the IVP still has a unique
solution. [Hint: Let y₁ and y₂ be two solutions and consider
z(x) = ( y₁(x)^{1/2} − y₂(x)^{1/2} )².]
Comment: The existence and uniqueness theorems are also valid for certain systems of first
order equations. They are therefore also applicable to certain higher order ODEs, since a
higher order ODE can be reduced to a system of first order ODEs.
Example 1. Consider the ODE
y′ = 1 + y²,  y(0) = 0,
and the rectangle
S = {(x, y) : |x| ≀ 100, |y| ≀ 1}.
Clearly f and ∂f/∂y are continuous in S. Hence, there exists a unique solution in a
neighbourhood of (0, 0). Now f = 1 + y² and |f| ≀ 2 in S, so α = min{100, 1/2} = 1/2.
Hence, the theorems guarantee the existence of a unique solution in |x| ≀ 1/2, which
is much smaller than the original interval |x| ≀ 100. Since the above equation is separable,
we can solve it exactly and find y(x) = tan(x). This solution is valid only in (−π/2, π/2),
which is also much smaller than [−100, 100], but nevertheless bigger than the interval
predicted by the existence and uniqueness theorems.
3.2. Picard Iteration for IVP
This method gives an approximate solution to the IVP (1). Note that the IVP (1) is equivalent to
the integral equation
y(x) = y₀ + ∫_{x₀}^{x} f(t, y(t)) dt   (3)
A rough approximation to the solution y(x) is given by the function y₀(x) = y₀, which is
simply a horizontal line through (x₀, y₀) (don't confuse the function y₀(x) with the constant y₀).
We insert this into the RHS of (3) in order to obtain a (perhaps) better approximate solution,
say y₁(x). Thus,
y₁(x) = y₀ + ∫_{x₀}^{x} f(t, y₀(t)) dt = y₀ + ∫_{x₀}^{x} f(t, y₀) dt
At the n-th stage we find
yₙ(x) = y₀ + ∫_{x₀}^{x} f(t, yₙ₋₁(t)) dt
Theorem 3. If the function f(x, y) satisfies the existence and uniqueness theorem for the
IVP (1), then the successive approximations yₙ(x) converge to the unique solution y(x)
of the IVP (1).
Example 2. Apply Picard iteration to the IVP
y′ = 2x(1 − y),  y(0) = 2.
Solution: Here y₀(x) = 2. Now
y₁(x) = 2 + ∫₀ˣ 2t(1 − 2) dt = 2 − x²
y₂(x) = 2 + ∫₀ˣ 2t(t² − 1) dt = 2 − x² + x⁎/2
y₃(x) = 2 + ∫₀ˣ 2t(t² − t⁎/2 − 1) dt = 2 − x² + x⁎/2 − x⁶/3!
By induction, it can be shown that
yₙ(x) = 2 − x² + x⁎/2 − x⁶/3! + · · · + (−1)ⁿ x²ⁿ/n!
Hence, yₙ(x) → 1 + e^{−x²} as n → ∞. Now y(x) = 1 + e^{−x²} is the exact solution of the
given IVP. Thus, the Picard iterates converge to the unique solution of the given IVP.
Comment: Picard iteration has more theoretical value than practical value. It is used in the
proof of existence and uniqueness theorem. On the other hand, finding approximate solution
using this method is almost impractical for complicated function 𝑓(𝑥, 𝑊).
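The iterates of Example 2 can be generated mechanically with a computer algebra system; a Python/sympy sketch (my own addition, not part of the report):

```python
import sympy as sp

x, t = sp.symbols('x t')

def picard(f, y0, n):
    """n-th Picard iterate for y' = f(x, y), y(0) = y0."""
    y = sp.Integer(y0)
    for _ in range(n):
        y = y0 + sp.integrate(f(t, y.subs(x, t)), (t, 0, x))
    return sp.expand(y)

# Example 2: y' = 2x(1 - y), y(0) = 2
f = lambda xv, yv: 2 * xv * (1 - yv)
y3 = picard(f, 2, 3)     # 2 - x**2 + x**4/2 - x**6/6
```

For growing n the iterates reproduce the partial sums of 1 + e^{−x²}, as claimed above.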
3.3. Cauchy Peano's Theorem
The Picard existence theorem provides a locally unique solution to the differential equation
y′ = f(y, t),  y(t₀) = y₀
under the assumption that f is continuous and satisfies a Lipschitz condition in its first
variable. The Peano existence theorem has weaker hypotheses than the Picard existence
theorem: f is only assumed to be continuous. The conclusion is weaker too: there is a solution,
but it may not be unique.
Theorem: Let D be an open subset of R × R, let f : D → R be a continuous function, and let
y′(x) = f(x, y(x))
be a continuous, explicit first-order differential equation defined on D. Then every initial value
problem
y(x₀) = y₀
for f with (x₀, y₀) ∈ D has a local solution
z : I → R,
where I is a neighbourhood of x₀ in R, such that z′(x) = f(x, z(x)) for all x ∈ I.
The solution need not be unique: one and the same initial value (x₀, y₀) may give rise to
many different solutions z.
4. Linearization of Non Linear Models
4.1. Theory
Most differential equations and systems of differential equations encountered in practice are
non-linear, and most real-life problems are based on non-linear systems. But often we are
unable to solve a non-linear differential equation directly, so we linearize the system to get a
linear equation that can be solved easily. So our first concern here is to linearize the
non-linear system. After linearizing, we can apply the numerous linear analysis methods to
study the nature of the system.
Consider the non-linear system
ż(t) = f(t, z(t)),  z(t₀) = z₀,
where the state z(t) is an n-dimensional vector and f is a non-linear function.
Suppose the system has a solution φ(t, z₀, t₀) corresponding to a particular initial condition
z(t₀) = z₀. If the initial data z₀ is slightly changed, then it is expected that the solution
φ(t, z₀, t₀) will also change only slightly. If f(t, z) is continuously differentiable with respect to
z, then we can expand f(t, z) in a Taylor series about the solution φ(t, z₀, t₀), i.e.
f(t, φ(t, z₀, t₀) + ÎŽz(t)) = f(t, φ(t, z₀, t₀)) + A(t)ÎŽz(t) + higher order terms,
where
A(t) = (aᵢⱌ(t))ₙ×ₙ = (∂fᵢ/∂zⱌ).
Note: φ(t, z₀, t₀) + ÎŽz(t) is a solution of the given system. Since φ(t, z₀, t₀) is a solution,
we have
φ̇(t, z₀, t₀) + ÎŽż(t) = f(t, φ(t, z₀, t₀) + ÎŽz(t)) = f(t, φ(t, z₀, t₀)) + A(t)ÎŽz(t) + higher order terms,
and since φ̇(t, z₀, t₀) = f(t, φ(t, z₀, t₀)), subtracting gives
ÎŽż(t) = A(t)ÎŽz(t) + higher order terms.   (1.9)
Neglecting the higher order terms, the equation
ż(t) = A(t)z(t)   (1.10)
is called the linearized system of (1.6) about the solution φ(t, z₀, t₀).
Remark (1.3): If we do not neglect the higher order terms in (1.9), then we are left with a
semi-linear system
ż(t) = A(t)z(t) + g(t, z(t))   (1.11)
where g(t, z(t)) is the sum of the higher order terms.
Remark (1.4): If the original system (1.6) is time-invariant, that is,
f(t, z(t)) = f(z(t))   (1.12)
then the corresponding linearized system will be of the form
ż(t) = Az(t).   (1.13)
Systems of the form (1.13) are known as autonomous homogeneous linear systems.
4.2. Example
The state equations of an inverted pendulum are
ż₁(t) = z₂(t)
ż₂(t) = (g/l) sin(z₁(t))
What are the linearized equations about the equilibrium solution z₁(t) = z₂(t) = 0?
First write the equations in state representation form as
ż(t) = ( ż₁(t), ż₂(t) )ᵀ = f(t, z₁(t), z₂(t)) = ( f₁(t), f₂(t) )ᵀ,
where
( f₁(t), f₂(t) )ᵀ = ( z₂(t), (g/l) sin(z₁(t)) )ᵀ.
∂f₁/∂z₁ = 0,  ∂f₁/∂z₂ = 1,  ∂f₂/∂z₁ = (g/l) cos(z₁(t)),  ∂f₂/∂z₂ = 0
Hence the required linearized form is
ż(t) = A(t)z(t),
where
A(t) = [ ∂f₁/∂z₁   ∂f₁/∂z₂ ]
       [ ∂f₂/∂z₁   ∂f₂/∂z₂ ]  and  ż(t) = ( ż₁(t), ż₂(t) )ᵀ.
Putting all the partial derivatives into the matrix, we get

( ż₁(t) )   [ 0                 1 ] ( z₁(t) )
( ż₂(t) ) = [ (g/l) cos(z₁(t))  0 ] ( z₂(t) )

Evaluating A(t) at the equilibrium solution z₁(t) = 0, where cos(z₁(t)) = 1, gives

A = [ 0    1 ]
    [ g/l  0 ]

Hence the required linearized form of the non-linear system is

( ż₁(t) )   [ 0    1 ] ( z₁(t) )
( ż₂(t) ) = [ g/l  0 ] ( z₂(t) )
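The Jacobian computation in this example can be reproduced symbolically; a Python/sympy sketch (my own addition, not from the report):

```python
import sympy as sp

z1, z2, g, l = sp.symbols('z1 z2 g l')

# Inverted pendulum state equations from Section 4.2
f = sp.Matrix([z2, (g / l) * sp.sin(z1)])

# Jacobian with respect to the state (z1, z2)
J = f.jacobian([z1, z2])

# Evaluate at the equilibrium z1 = z2 = 0: cos(0) = 1
A = J.subs({z1: 0, z2: 0})
```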
5. Controllability of Linear Systems
5.1. Motivation behind Controllability
Control theory is basically concerned with influencing the behaviour of a dynamical system so
as to achieve a desired goal. Many physical systems are controlled by the manipulation of their
inputs based on the simultaneous observation of their outputs.
Ex. An airplane is controlled by the pilot's actions based on instrument readings and visual
observations.
The control problem is to determine, on the basis of the available data, the input necessary to
achieve a given goal.
Mathematical control theory exhibits a wide variety of techniques that go beyond those
associated with traditional applied mathematics.
In the simplest case, consider a vibrating system consisting of a single mass on a linear spring.
If the displacement from equilibrium is x at time t, then Newton's law asserts that the
acceleration satisfies
d²x/dt² = −x.
Now we can introduce a control force u, depending on x and dx/dt, so that every solution of
d²x/dt² = −x + u
returns to rest at x = 0.
This is the problem and the motivation behind controllability.
5.2. Kalman’s Criterion
Definition: Consider the linear system ẋ = Ax + Bu, where x ∈ Rⁿ is the state vector and
u ∈ Rᵐ is the input vector; A is of size n × n and B of size n × m.
The pair (A, B) is controllable if, given a duration T > 0 and two arbitrary points x₀, x_T ∈
Rⁿ, there exists a piecewise continuous function t → ū(t) from [0, T] to Rᵐ, such that the
integral curve x̄(t) generated by ū with x̄(0) = x₀ satisfies x̄(T) = x_T.
In other words,
e^{AT} x₀ + ∫₀^{T} e^{A(T−t)} B ū(t) dt = x_T.
This property depends only on A and B:
Theorem (Kalman):
A necessary and sufficient condition for (A, B) to be controllable is
rank C = rank [ B | AB | · · · | Aⁿ⁻¹B ] = n.
C is called Kalman's controllability matrix (of size n × nm).
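The rank test is easy to implement; a Python/numpy sketch (my own addition, not part of the report), illustrated on a double integrator:

```python
import numpy as np

def controllability_matrix(A, B):
    """Kalman's controllability matrix C = [B | AB | ... | A^(n-1) B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A, B):
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# Double integrator: controllable through the force input
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
```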
5.3. The Matrix Exponential (Transition Matrix)
For each n × n complex matrix A, define the exponential of A to be the matrix
e^A = ∑_{k=0}^{∞} Aᵏ/k! = I + A + (1/2!)A² + (1/3!)A³ + · · ·   (1)
It is not difficult to show that this sum converges for all complex matrices 𝐎 of any finite
dimension.
If A is a 1 × 1 matrix [t], then e^A = [eᵗ], by the Maclaurin series formula for the function
y = eᵗ. More generally, if D is a diagonal matrix with diagonal entries d₁, d₂, . . . , dₙ, then
e^D = I + D + (1/2!)D² + · · · = diag(e^{d₁}, e^{d₂}, . . . , e^{dₙ}).
The situation is more complicated for matrices that are not diagonal. However, if a matrix A
happens to be diagonalizable, there is a simple algorithm for computing 𝑒 𝐎
, a consequence
of the following lemma.
Lemma 1. Let A and P be complex n × n matrices, and suppose that P is invertible. Then
e^{P⁻¹AP} = P⁻¹ e^A P.
Proof: Recall that, for all integers m ≥ 0, we have (P⁻¹AP)ᵐ = P⁻¹AᵐP. The definition
(1) then yields
e^{P⁻¹AP} = I + P⁻¹AP + (P⁻¹AP)²/2! + · · ·
          = I + P⁻¹AP + (P⁻¹A²P)/2! + · · ·
          = P⁻¹ (I + A + A²/2! + · · ·) P = P⁻¹ e^A P.
If a matrix A is diagonalizable, then there exists an invertible P so that A = P D P⁻¹, where D
is a diagonal matrix of eigenvalues of A, and P is a matrix having eigenvectors of A as its
columns. In this case, e^A = P e^D P⁻¹.
Example: Let A denote the matrix

A = [ 5   1 ]
    [ −2  2 ]

The reader can easily verify that 4 and 3 are eigenvalues of A, with corresponding
eigenvectors w₁ = (1, −1)ᵀ and w₂ = (1, −2)ᵀ. It follows that

A = P D P⁻¹ = [  1   1 ] [ 4  0 ] [  2   1 ]
              [ −1  −2 ] [ 0  3 ] [ −1  −1 ]

so that

e^A = [  1   1 ] [ e⁎  0 ] [  2   1 ]   [ 2e⁎ − e³     e⁎ − e³ ]
      [ −1  −2 ] [ 0  e³ ] [ −1  −1 ] = [ 2e³ − 2e⁎   2e³ − e⁎ ]
5.3.1. Matrix Exponential in Matlab
Y = expm(X) computes the matrix exponential of X. Although it is not computed this way,
if X has a full set of eigenvectors V with corresponding eigenvalues D, then [V, D] =
eig(X) and
expm(X) = V*diag(exp(diag(D)))/V
For example:
A = [1 1 0; 0 0 2; 0 0 -1];
expm(A)
ans =
2.7183 1.7183 1.0862
0 1.0000 1.2642
0 0 0.3679
5.4. Solution of the Controlled System using Transition Matrix
Consider the n-dimensional linear control system:
ẋ = A(t)x + B(t)u;  x(t₀) = x₀.
Let φ(t, t₀) be the transition matrix of the homogeneous system ẋ = A(t)x. The solution of
the control system is given by (using the variation of parameters method)
x(t) = φ(t, t₀) x₀ + ∫_{t₀}^{t} φ(t, τ) B(τ) u(τ) dτ.
The system is controllable iff for arbitrary initial and final states x₀, x₁ there exists a control
function u such that
x₁ = φ(t₁, t₀) x₀ + ∫_{t₀}^{t₁} φ(t₁, τ) B(τ) u(τ) dτ.
The controllability Grammian for the linear system is given by
W(t₀, t₁) = ∫_{t₀}^{t₁} φ(t₁, τ) B(τ) B*(τ) φ*(t₁, τ) dτ.
Theorem: The linear control system is controllable iff W(t₀, t₁) is invertible, and the
steering control that moves x₀ to x₁ is given by
u(t) = B*(t) φ*(t₁, t) W⁻¹(t₀, t₁) [ x₁ − φ(t₁, t₀) x₀ ].
Proof: The controllability part was already proved earlier. We now show that the steering control
defined above actually performs the transfer of states. The controlled state is given by
x(t) = φ(t, t₀) x₀ + ∫_{t₀}^{t} φ(t, τ) B(τ) u(τ) dτ
     = φ(t, t₀) x₀ + ∫_{t₀}^{t} φ(t, τ) B(τ) B*(τ) φ*(t₁, τ) W⁻¹(t₀, t₁) [ x₁ − φ(t₁, t₀) x₀ ] dτ,
so that
x(t₁) = φ(t₁, t₀) x₀ + W(t₀, t₁) W⁻¹(t₀, t₁) [ x₁ − φ(t₁, t₀) x₀ ] = x₁.
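The Grammian and the steering control can be exercised end-to-end on a toy time-invariant system (a Python sketch with a double integrator, my own stand-in example rather than the report's aircraft model; the Riemann-sum quadrature is my own choice):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
T = 1.0

def phi(t):
    """Transition matrix e^{At} (time-invariant case)."""
    return expm(A * t)

# Controllability Grammian W(0, T) via a left Riemann sum
ts, dt = np.linspace(0.0, T, 2001, retstep=True)
W = sum(phi(T - t) @ B @ B.T @ phi(T - t).T * dt for t in ts[:-1])

x0 = np.array([[1.0], [0.0]])    # start at position 1, velocity 0
x1 = np.array([[0.0], [0.0]])    # steer to rest at the origin

def u(t):
    """Steering control u(t) = B* phi(T,t)* W^{-1} [x1 - phi(T,0) x0]."""
    return B.T @ phi(T - t).T @ np.linalg.solve(W, x1 - phi(T) @ x0)

# Controlled state at time T, using the same quadrature
xT = phi(T) @ x0 + sum(phi(T - t) @ B @ u(t) * dt for t in ts[:-1])
```

Because the same quadrature is used for W and for the state propagation, xT matches x1 to machine precision.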
5.5. Kalman Condition Revisited (Proof)
System: Ẋ = AX + BU.
Solution:
x(t) = e^{At} x₀ + ∫_{t₀}^{t} e^{A(t−τ)} B u(τ) dτ.
Assuming x(t₁) = 0,
0 = e^{At₁} x₀ + ∫₀^{t₁} e^{A(t₁−τ)} B u(τ) dτ
x₀ = −∫₀^{t₁} e^{−Aτ} B u(τ) dτ.
By the Cayley–Hamilton theorem: for an n × n matrix A, let p(λ) = det(λI − A) be the
characteristic polynomial of A; then p(A) = 0.
It follows that e^{−Aτ} can be written using only the first n powers of A:
e^{−Aτ} = ∑_{k=0}^{n−1} γₖ(τ) Aᵏ
for some scalar functions γₖ. Substituting,
x₀ = −∑_{k=0}^{n−1} Aᵏ B ∫₀^{t₁} γₖ(τ) u(τ) dτ
   = −∑_{k=0}^{n−1} Aᵏ B βₖ,  where βₖ = ∫₀^{t₁} γₖ(τ) u(τ) dτ
   = −[ B  AB  · · ·  Aⁿ⁻¹B ] [ β₀  β₁  · · ·  βₙ₋₁ ]ᵀ.
For this to be solvable for an arbitrary initial state x₀, the rank of
C = [ B  AB  · · ·  Aⁿ⁻¹B ]
must be n. Then the system is controllable.
6. Controllability of Aircraft
6.1. Introduction
In the following section I examine the application of state space modelling and control theory
to aircraft problems. Controllability is concerned with whether the states of a dynamic system
are affected by the control input. The mathematical definition of controllability is easy to
compute with, but somewhat abstract. An alternative way of looking at controllability is to
transform the state equation to a canonical form. If the state equations are transformed so that
the new plant matrix is a diagonal matrix, then the equations governing the system are said to be
decoupled. The control matrix can then be examined using Kalman's criterion, and the
controllability of the system can be checked.
6.2. Simulation in Matlab
Here I have considered the dynamics of a STOL transport aircraft. After the theoretical
modelling of the longitudinal motion of the aircraft, I obtained the following system matrices:

A = [  -1.3970    1.0000     0         0
       -5.4700   -3.2700     0         0
        0         1.0000     0         0
     -400.0000    0        400.0000    0 ]

B = [  -0.1240
      -13.2000
        0
        0 ]
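The Kalman test for these particular matrices can also be checked outside MATLAB. The plain-Python sketch below (a stand-in for the `rank(Q)` call in the script that follows) builds Q = [B AB A²B A³B] and counts pivots by Gaussian elimination; it should report rank 4, in agreement with the output in Section 6.2.2.

```python
A = [[-1.397, 1.0, 0.0, 0.0],
     [-5.47, -3.27, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [-400.0, 0.0, 400.0, 0.0]]
B = [-0.124, -13.2, 0.0, 0.0]
n = 4

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

# Columns of the controllability matrix: B, AB, A^2 B, A^3 B
cols = [list(B)]
for _ in range(n - 1):
    cols.append(matvec(A, cols[-1]))
Q = [[cols[j][i] for j in range(n)] for i in range(n)]  # assemble row by row

def rank(M, tol=1e-9):
    # Gaussian elimination with partial pivoting; count usable pivots.
    M = [row[:] for row in M]
    r = 0
    for c in range(n):
        p = max(range(r, n), key=lambda i: abs(M[i][c]))
        if abs(M[p][c]) < tol:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, n):
            f = M[i][c] / M[r][c]
            for j in range(c, n):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

print(rank(Q))  # 4 -> the longitudinal model is controllable
```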
6.2.1 Code for Simulation
clear
clc
disp('Linear System dot(x) = Ax + Bu where A and B are given as follows:')
A = input('The matrix A');
B = input('The matrix B');
n = input('The order of matrix');
pause
disp('Kalman Test:The controllability Matrix of the system is:')
% Build the controllability matrix Q = [B AB ... A^(n-1)B]
C = B;
Q = B;
for i = 1:n-1
    C = (A^i)*B
    Q = [Q C]
end
pause
disp('The rank of the controllability matrix is:')
r = rank(Q)
pause
if r ~= n
    disp('The system is not controllable');
else
    disp('The system is controllable')
    t = sym('t')
    s = sym('s')
    disp('The initial state is:')
    x0 = input('Enter the initial state')
    pause
    disp('The final state is:')
    x1 = input('Enter the final state')
    pause
    disp('We want to reach the final state in time')
    T = input('Enter the final time')
    pause
    disp('The transition matrix is:')
    phi = expm(A*t)
    disp('The controllability Gramian is:')
    W = int((expm(A*(T-t))*B*B'*expm(A'*(T-t))), t, 0, T)
    W = subs(W);
    pause
    disp('The controller is taken as:')
    u = B'*expm(A'*(T-t))*inv(W)*(x1 - expm(A*T)*x0)
    U = subs(u, s);  % rewrite the controller in the dummy variable s
    disp('The solution of the system using the above controller is:')
    E = B*U;
    x = (expm(A*t))*x0 + int(expm(A*(t-s))*E, s, 0, t)
    disp('The Graph of the solution is ')
    z = linspace(0, T, 10);
    for i = 1:n
        plot(z, subs(x(i), z))
        hold on
    end
    xlabel('Time')
    ylabel('x(t)')
end
6.2.2. OUTPUT
Linear System dot(x) = Ax + Bu where A and B are given as follows:
The matrix A[ -1.397 1 0 0; -5.47 -3.27 0 0;0 1 0 0;-400 0 400 0]
The matrix B[-0.124;-13.2;0;0]
The order of matrix4
Kalman Test:The controllability Matrix of the system is:
C =
-13.0268
43.8423
-13.2000
49.6000
Q =
-0.1240 -13.0268
-13.2000 43.8423
0 -13.2000
0 49.6000
C =
62.0407
-72.1078
43.8423
-69.2912
Q =
-0.1240 -13.0268 62.0407
-13.2000 43.8423 -72.1078
0 -13.2000 43.8423
0 49.6000 -69.2912
C =
1.0e+03 *
-0.1588
-0.1036
-0.0721
-7.2794
Q =
1.0e+03 *
-0.0001 -0.0130 0.0620 -0.1588
-0.0132 0.0438 -0.0721 -0.1036
0 -0.0132 0.0438 -0.0721
0 0.0496 -0.0693 -7.2794
The rank of the controllability matrix is:
r =
4
The system is controllable
t =
t
s =
s
The initial state is:
Enter the initial state[1;2;3;4]
x0 =
1
2
3
4
The final state is:
Enter the final state[4;3;2;1]
x1 =
4
3
2
1
We want to reach the final state in time
Enter the final time1
T =
1
P.S.: Since the complete output runs beyond the page bounds, it is not reproduced here in full.
The graph is as follows.
REFERENCES
1. Wen Shen, Introduction to Ordinary and Partial Differential Equations, Spring 2013, pp. 1-8
2. Wen Shen, Introduction to Ordinary and Partial Differential Equations, Spring 2013, pp. 88-95
3. S. Ghorai, Picard's Existence and Uniqueness Theorem, Picard's Iteration, pp. 1-4
4. https://en.wikipedia.org/wiki/Peano_existence_theorem
5. Brian L. Stevens and Frank L. Lewis, Aircraft Control and Simulation, pp. 143-201
6. Michael V. Cook, Flight Dynamics Principles, Elsevier, pp. 123-145
7. Wayne Durham, Aircraft Flight Dynamics and Control, Wiley, pp. 183-221

Weitere Àhnliche Inhalte

Was ist angesagt?

Numerical solution of ordinary differential equations GTU CVNM PPT
Numerical solution of ordinary differential equations GTU CVNM PPTNumerical solution of ordinary differential equations GTU CVNM PPT
Numerical solution of ordinary differential equations GTU CVNM PPTPanchal Anand
 
Partial differential equations
Partial differential equationsPartial differential equations
Partial differential equationsmuhammadabullah
 
engineeringmathematics-iv_unit-ii
engineeringmathematics-iv_unit-iiengineeringmathematics-iv_unit-ii
engineeringmathematics-iv_unit-iiKundan Kumar
 
Finite difference method
Finite difference methodFinite difference method
Finite difference methodDivyansh Verma
 
Calculus ppt on "Partial Differentiation"#2
Calculus ppt on "Partial Differentiation"#2Calculus ppt on "Partial Differentiation"#2
Calculus ppt on "Partial Differentiation"#2L.D College of Engineering
 
L4 one sided limits limits at infinity
L4 one sided limits limits at infinityL4 one sided limits limits at infinity
L4 one sided limits limits at infinityJames Tagara
 
Numerical Solution of Ordinary Differential Equations
Numerical Solution of Ordinary Differential EquationsNumerical Solution of Ordinary Differential Equations
Numerical Solution of Ordinary Differential EquationsMeenakshisundaram N
 
Number theory
Number theoryNumber theory
Number theorymanikanta361
 
Sturm liouville problems6
Sturm liouville problems6Sturm liouville problems6
Sturm liouville problems6Nagu Vanamala
 
Ordinary differential equations
Ordinary differential equationsOrdinary differential equations
Ordinary differential equationsAhmed Haider
 
Mcq differential and ordinary differential equation
Mcq differential and ordinary differential equationMcq differential and ordinary differential equation
Mcq differential and ordinary differential equationSayyad Shafi
 
Gamma and betta function harsh shah
Gamma and betta function  harsh shahGamma and betta function  harsh shah
Gamma and betta function harsh shahC.G.P.I.T
 
Power Series - Legendre Polynomial - Bessel's Equation
Power Series - Legendre Polynomial - Bessel's EquationPower Series - Legendre Polynomial - Bessel's Equation
Power Series - Legendre Polynomial - Bessel's EquationArijitDhali
 
introduction to differential equations
introduction to differential equationsintroduction to differential equations
introduction to differential equationsEmdadul Haque Milon
 
4 stochastic processes
4 stochastic processes4 stochastic processes
4 stochastic processesSolo Hermelin
 
Application of Integrals
Application of IntegralsApplication of Integrals
Application of Integralssarcia
 

Was ist angesagt? (20)

Numerical solution of ordinary differential equations GTU CVNM PPT
Numerical solution of ordinary differential equations GTU CVNM PPTNumerical solution of ordinary differential equations GTU CVNM PPT
Numerical solution of ordinary differential equations GTU CVNM PPT
 
Partial differential equations
Partial differential equationsPartial differential equations
Partial differential equations
 
engineeringmathematics-iv_unit-ii
engineeringmathematics-iv_unit-iiengineeringmathematics-iv_unit-ii
engineeringmathematics-iv_unit-ii
 
Finite difference method
Finite difference methodFinite difference method
Finite difference method
 
Calculus ppt on "Partial Differentiation"#2
Calculus ppt on "Partial Differentiation"#2Calculus ppt on "Partial Differentiation"#2
Calculus ppt on "Partial Differentiation"#2
 
L4 one sided limits limits at infinity
L4 one sided limits limits at infinityL4 one sided limits limits at infinity
L4 one sided limits limits at infinity
 
Numerical Solution of Ordinary Differential Equations
Numerical Solution of Ordinary Differential EquationsNumerical Solution of Ordinary Differential Equations
Numerical Solution of Ordinary Differential Equations
 
Number theory
Number theoryNumber theory
Number theory
 
Sturm liouville problems6
Sturm liouville problems6Sturm liouville problems6
Sturm liouville problems6
 
Ordinary differential equations
Ordinary differential equationsOrdinary differential equations
Ordinary differential equations
 
Mcq differential and ordinary differential equation
Mcq differential and ordinary differential equationMcq differential and ordinary differential equation
Mcq differential and ordinary differential equation
 
Gamma and betta function harsh shah
Gamma and betta function  harsh shahGamma and betta function  harsh shah
Gamma and betta function harsh shah
 
Runge kutta
Runge kuttaRunge kutta
Runge kutta
 
Power Series - Legendre Polynomial - Bessel's Equation
Power Series - Legendre Polynomial - Bessel's EquationPower Series - Legendre Polynomial - Bessel's Equation
Power Series - Legendre Polynomial - Bessel's Equation
 
the inverse of the matrix
the inverse of the matrixthe inverse of the matrix
the inverse of the matrix
 
introduction to differential equations
introduction to differential equationsintroduction to differential equations
introduction to differential equations
 
Runge Kutta Method
Runge Kutta MethodRunge Kutta Method
Runge Kutta Method
 
Ch05 4
Ch05 4Ch05 4
Ch05 4
 
4 stochastic processes
4 stochastic processes4 stochastic processes
4 stochastic processes
 
Application of Integrals
Application of IntegralsApplication of Integrals
Application of Integrals
 

Ähnlich wie doc

NPDE-TCA
NPDE-TCANPDE-TCA
NPDE-TCArishav rai
 
BSC_COMPUTER _SCIENCE_UNIT-5_DISCRETE MATHEMATICS
BSC_COMPUTER _SCIENCE_UNIT-5_DISCRETE MATHEMATICSBSC_COMPUTER _SCIENCE_UNIT-5_DISCRETE MATHEMATICS
BSC_COMPUTER _SCIENCE_UNIT-5_DISCRETE MATHEMATICSRai University
 
BSC_COMPUTER _SCIENCE_UNIT-5_DISCRETE MATHEMATICS
BSC_COMPUTER _SCIENCE_UNIT-5_DISCRETE MATHEMATICSBSC_COMPUTER _SCIENCE_UNIT-5_DISCRETE MATHEMATICS
BSC_COMPUTER _SCIENCE_UNIT-5_DISCRETE MATHEMATICSRai University
 
Laplace & Inverse Transform convoltuion them.pptx
Laplace & Inverse Transform convoltuion them.pptxLaplace & Inverse Transform convoltuion them.pptx
Laplace & Inverse Transform convoltuion them.pptxjyotidighole2
 
New Information on the Generalized Euler-Tricomi Equation
New Information on the Generalized Euler-Tricomi Equation New Information on the Generalized Euler-Tricomi Equation
New Information on the Generalized Euler-Tricomi Equation Lossian Barbosa Bacelar Miranda
 
Comparative analysis of x^3+y^3=z^3 and x^2+y^2=z^2 in the Interconnected Sets
Comparative analysis of x^3+y^3=z^3 and x^2+y^2=z^2 in the Interconnected Sets Comparative analysis of x^3+y^3=z^3 and x^2+y^2=z^2 in the Interconnected Sets
Comparative analysis of x^3+y^3=z^3 and x^2+y^2=z^2 in the Interconnected Sets Vladimir Godovalov
 
Study Material Numerical Solution of Odinary Differential Equations
Study Material Numerical Solution of Odinary Differential EquationsStudy Material Numerical Solution of Odinary Differential Equations
Study Material Numerical Solution of Odinary Differential EquationsMeenakshisundaram N
 
Sistempertidaksamaanduavariabel2122
Sistempertidaksamaanduavariabel2122Sistempertidaksamaanduavariabel2122
Sistempertidaksamaanduavariabel2122Franxisca Kurniawati
 
Lecture-1-Mech.pptx . .
Lecture-1-Mech.pptx                   . .Lecture-1-Mech.pptx                   . .
Lecture-1-Mech.pptx . .happycocoman
 
BSC_COMPUTER _SCIENCE_UNIT-2_DISCRETE MATHEMATICS
BSC_COMPUTER _SCIENCE_UNIT-2_DISCRETE MATHEMATICSBSC_COMPUTER _SCIENCE_UNIT-2_DISCRETE MATHEMATICS
BSC_COMPUTER _SCIENCE_UNIT-2_DISCRETE MATHEMATICSRai University
 
One solution for many linear partial differential equations with terms of equ...
One solution for many linear partial differential equations with terms of equ...One solution for many linear partial differential equations with terms of equ...
One solution for many linear partial differential equations with terms of equ...Lossian Barbosa Bacelar Miranda
 
Study Material Numerical Differentiation and Integration
Study Material Numerical Differentiation and IntegrationStudy Material Numerical Differentiation and Integration
Study Material Numerical Differentiation and IntegrationMeenakshisundaram N
 
Analysis of a self-sustained vibration of mass-spring oscillator on moving belt
Analysis of a self-sustained vibration of mass-spring oscillator on moving beltAnalysis of a self-sustained vibration of mass-spring oscillator on moving belt
Analysis of a self-sustained vibration of mass-spring oscillator on moving beltVarun Jadhav
 
Ecuaciones lineales de orden superior
Ecuaciones lineales de orden superiorEcuaciones lineales de orden superior
Ecuaciones lineales de orden superiormariarivas114
 
Ejercicios resueltos de analisis matematico 1
Ejercicios resueltos de analisis matematico 1Ejercicios resueltos de analisis matematico 1
Ejercicios resueltos de analisis matematico 1tinardo
 
Lecture 2.1 Echelon method
Lecture 2.1 Echelon methodLecture 2.1 Echelon method
Lecture 2.1 Echelon methodTaoufik Ben Jabeur
 
Lecture 2.1 Echelon method
Lecture 2.1 Echelon methodLecture 2.1 Echelon method
Lecture 2.1 Echelon methodTaoufik Ben Jabeur
 
Btech_II_ engineering mathematics_unit3
Btech_II_ engineering mathematics_unit3Btech_II_ engineering mathematics_unit3
Btech_II_ engineering mathematics_unit3Rai University
 
Complex differentiation contains analytic function.pptx
Complex differentiation contains analytic function.pptxComplex differentiation contains analytic function.pptx
Complex differentiation contains analytic function.pptxjyotidighole2
 

Ähnlich wie doc (20)

NPDE-TCA
NPDE-TCANPDE-TCA
NPDE-TCA
 
Four Point Gauss Quadrature Runge – Kuta Method Of Order 8 For Ordinary Diffe...
Four Point Gauss Quadrature Runge – Kuta Method Of Order 8 For Ordinary Diffe...Four Point Gauss Quadrature Runge – Kuta Method Of Order 8 For Ordinary Diffe...
Four Point Gauss Quadrature Runge – Kuta Method Of Order 8 For Ordinary Diffe...
 
BSC_COMPUTER _SCIENCE_UNIT-5_DISCRETE MATHEMATICS
BSC_COMPUTER _SCIENCE_UNIT-5_DISCRETE MATHEMATICSBSC_COMPUTER _SCIENCE_UNIT-5_DISCRETE MATHEMATICS
BSC_COMPUTER _SCIENCE_UNIT-5_DISCRETE MATHEMATICS
 
BSC_COMPUTER _SCIENCE_UNIT-5_DISCRETE MATHEMATICS
BSC_COMPUTER _SCIENCE_UNIT-5_DISCRETE MATHEMATICSBSC_COMPUTER _SCIENCE_UNIT-5_DISCRETE MATHEMATICS
BSC_COMPUTER _SCIENCE_UNIT-5_DISCRETE MATHEMATICS
 
Laplace & Inverse Transform convoltuion them.pptx
Laplace & Inverse Transform convoltuion them.pptxLaplace & Inverse Transform convoltuion them.pptx
Laplace & Inverse Transform convoltuion them.pptx
 
New Information on the Generalized Euler-Tricomi Equation
New Information on the Generalized Euler-Tricomi Equation New Information on the Generalized Euler-Tricomi Equation
New Information on the Generalized Euler-Tricomi Equation
 
Comparative analysis of x^3+y^3=z^3 and x^2+y^2=z^2 in the Interconnected Sets
Comparative analysis of x^3+y^3=z^3 and x^2+y^2=z^2 in the Interconnected Sets Comparative analysis of x^3+y^3=z^3 and x^2+y^2=z^2 in the Interconnected Sets
Comparative analysis of x^3+y^3=z^3 and x^2+y^2=z^2 in the Interconnected Sets
 
Study Material Numerical Solution of Odinary Differential Equations
Study Material Numerical Solution of Odinary Differential EquationsStudy Material Numerical Solution of Odinary Differential Equations
Study Material Numerical Solution of Odinary Differential Equations
 
Sistempertidaksamaanduavariabel2122
Sistempertidaksamaanduavariabel2122Sistempertidaksamaanduavariabel2122
Sistempertidaksamaanduavariabel2122
 
Lecture-1-Mech.pptx . .
Lecture-1-Mech.pptx                   . .Lecture-1-Mech.pptx                   . .
Lecture-1-Mech.pptx . .
 
BSC_COMPUTER _SCIENCE_UNIT-2_DISCRETE MATHEMATICS
BSC_COMPUTER _SCIENCE_UNIT-2_DISCRETE MATHEMATICSBSC_COMPUTER _SCIENCE_UNIT-2_DISCRETE MATHEMATICS
BSC_COMPUTER _SCIENCE_UNIT-2_DISCRETE MATHEMATICS
 
One solution for many linear partial differential equations with terms of equ...
One solution for many linear partial differential equations with terms of equ...One solution for many linear partial differential equations with terms of equ...
One solution for many linear partial differential equations with terms of equ...
 
Study Material Numerical Differentiation and Integration
Study Material Numerical Differentiation and IntegrationStudy Material Numerical Differentiation and Integration
Study Material Numerical Differentiation and Integration
 
Analysis of a self-sustained vibration of mass-spring oscillator on moving belt
Analysis of a self-sustained vibration of mass-spring oscillator on moving beltAnalysis of a self-sustained vibration of mass-spring oscillator on moving belt
Analysis of a self-sustained vibration of mass-spring oscillator on moving belt
 
Ecuaciones lineales de orden superior
Ecuaciones lineales de orden superiorEcuaciones lineales de orden superior
Ecuaciones lineales de orden superior
 
Ejercicios resueltos de analisis matematico 1
Ejercicios resueltos de analisis matematico 1Ejercicios resueltos de analisis matematico 1
Ejercicios resueltos de analisis matematico 1
 
Lecture 2.1 Echelon method
Lecture 2.1 Echelon methodLecture 2.1 Echelon method
Lecture 2.1 Echelon method
 
Lecture 2.1 Echelon method
Lecture 2.1 Echelon methodLecture 2.1 Echelon method
Lecture 2.1 Echelon method
 
Btech_II_ engineering mathematics_unit3
Btech_II_ engineering mathematics_unit3Btech_II_ engineering mathematics_unit3
Btech_II_ engineering mathematics_unit3
 
Complex differentiation contains analytic function.pptx
Complex differentiation contains analytic function.pptxComplex differentiation contains analytic function.pptx
Complex differentiation contains analytic function.pptx
 

doc

  • 1. 1 ADVANCED CONTROL SYSTEM DESIGN OF AIRCRAFT AND SIMULATION OF ITS TRAJECTORY INTERNSHIP PROJECT REPORT By SIDDHARTH PUJARI ROLL NO.-010 NIT ROURKELA Department of Mathematics Indian Institute of Space Science and Technology (IIST) Thiruvananthapuram December – 2015
  • 2. 2 BONAFIDE CERTIFICATE This is to certify that this project report entitled “ ADVANCED CONTROL SYSTEM DESIGN OF AIRCRAFT AND SIMULATION OF ITS TRAJECTORY ” submitted to Indian Institute of Space Science and Technology, Thiruvananthapuram, is a bonafide work done by Siddharth Pujari under my supervision from 10th December 2015 to 31st December 2015. Dr. Raju K. George Dean(R and D) Sr. Professor and Head, Department of Mathematics Indian Institute of Space Science and Technology(IIST) Valiamala P.O. Trivandrum 695547 Place : Thiruvananthapuram Date : 29/12/2015
  • 3. 3 DECLARATION BY AUTHOR This is to declare that this report has been written by me. No part of the report is plagiarized from other sources. All information included from other sources have been duly acknowledged. I am that if any part of the report is found to be plagiarized, I shall take full responsibility for it. Siddharth Pujari NIT Rourkela Place : Thiruvananthapuram Date : 29/12/2015
  • 4. 4 ABSTRACT In this report I have mainly focused on the controllability of the linearized systems using its state space models. Here as an example the aircraft model is simulated using Matlab. Modelling of the state space model is not taken into consideration here. By just getting the control matrices 𝐎 and 𝐵 we can easily obtain results for the controllability of the aircraft and also plot the corresponding graph. The control matrices are taken for the longitudinal motion of the aircraft. Apart from this, I have also focused on the theoretical aspect of advanced control design like the Stability of Linear System, Linearization of System of equation, solving the transition matrix, Solution of the controlled system using Transition Matrix, Kalman’s Criterion. The results of the simulation in Matlab took a substantiate amount of time considering the time consuming calculation of the exponential of a matrix and the calculation of the Controllability Grammian Matrix and the controller. In fact I had to run the code overnight just to get the desired graph. This was the major problem which I encountered while doing the project. But I successfully obtained satisfactory results and the system could be controlled.
  • 5. 5 TABLE OF CONTENTS CHAPTER NO. TITLE PAGE NO. ABSTRACT iii NOMENCLATURE xvii CONTENTS xx 1. Brief Reviewof Differential Equations 1.2. Directional Fields 2. Systems of Linear Differential Equations 2.1 Introduction to Systems of Linear Differential Equations 2.2 Review of Matrix Theory. 2.3. Eigen Values and Eigen Vectors 2.4. Stability of Linear Systems 3. Existence and uniqueness theorem 3.1 Picard’s Theorem 3.2 Picard iteration for IVP 3.3. Cauchy Peano’s Theorem 4. Linearization of Non Linear Models 4.1 Theory 4.2. Example 5. Controllability of Linear Systems 5.1. Motivation behind Controllability 5.2. Kalman’s Criterion 5.3. The Matrix Exponential (Transition Matrix) 5.3.1. Matrix Exponential In Matlab
  • 6. 6 5.4. Solution of the Controlled System using Transition Matrix 5.5. Kalman Condition Revisited (Proof) 6. Controllability of Aircraft 6.1. Introduction 6.2. Simulation in Matlab 6.2.1. Code for simulation 6.2.2 Output.
  • 7. 7
  • 8. 8 BRIEF REVIEW OF DIFFERENTIALEQUATIONS A differential equation is an equation which contains derivatives of the unknown. (Usually it is a mathematical model of some physical phenomenon.) Two classes of differential equations: • O.D.E. (ordinary differential equations): linear and non-linear; • P.D.E. (partial differential equations). Some concepts related to differential equations: • System: a collection of several equations with several unknowns. • Order of the equation: the highest order of derivatives. • Linear or non-linear equations: Let y(t) be the unknown. Then, 𝑎0(𝑡)𝑊 (𝑛) + 𝑎1(𝑡)𝑊 (𝑛 − 1) + · · · + 𝑎 𝑛(𝑡)𝑊 = 𝑔(𝑡),(∗) is a linear equations. If the equation cannot be written as (∗), then it’s non-linear. Two things you must know: identify the linearity and the order of an equation. Example 1. Let 𝑊(𝑡) be the unknown. Identify the order and linearity of the following equations. (a). (𝑊 + 𝑡)𝑊 ′ + 𝑊 = 1 (b). 3𝑊 ′ + (𝑡 + 4)𝑊 = 𝑡2 + 𝑊 ′′ (c). 𝑊 ′′′ = 𝑐𝑜𝑠(2𝑡𝑊) (d). 𝑊(4) + √ 𝑡𝑊′′′ + 𝑐𝑜𝑠𝑡 = 𝑒 𝑡 Problem order linear? (a). (𝑊 + 𝑡)𝑊 ′ + 𝑊 = 1 1 No (b). 3𝑊 ′ + (𝑡 + 4)𝑊 = 𝑡2 + 𝑊 ′′ 2 Yes (c). 𝑊 ′′′ = 𝑐𝑜𝑠(2𝑡𝑊) 3 No (d). 𝑊(4) + √ 𝑡𝑊′′′ + 𝑐𝑜𝑠𝑡 = 𝑒 𝑡 4 No What is a solution? Solution is a function that satisfied the equation and the derivatives exist. Example 2. Verify that 𝑊(𝑡) = 𝑒 𝑡 is a solution of the IVP (initial value problem) 𝑊 ′ = 𝑎𝑊, 𝑊(0) = 1.
  • 9. 9 Here 𝑊(0) = 1 is called the initial condition. Answer.Let’s check if 𝑊(𝑡) satisfies the equation and the initial condition: 𝑊 ′ = 𝑎𝑒 𝑎𝑡 = 𝑎𝑊, 𝑊(0) = 𝑒0 = 1. They are both OK. So it is a solution. Example 3. Verify that 𝑊(𝑡) = 10 − 𝑐𝑒 − 𝑡 with c a constant, is a solution to 𝑊 ′ + 𝑊 = 10. Answer. 𝑊 ′ = −(−𝑐𝑒 − 𝑡 ) = 𝑐𝑒 − 𝑡 , 𝑊′ + 𝑊 = 𝑐𝑒 − 𝑡 + 10 − 𝑐𝑒 − 𝑡 = 10. OK. Let’s try to solve one equation. Example 4. Consider the equation (𝑡 + 1)𝑊 ′ = 𝑡 2 We can rewrite it as (𝑓𝑜𝑟 𝑡 ≠ −1) 𝑊 ′ = 𝑡2 𝑡 + 1 + 1 = 𝑡2 − 1 +1 𝑡 + 1 = ( 𝑡+ 1)( 𝑡− 1)+ 1 𝑡 + 1 = ( 𝑡 −1) + 1 𝑡 + 1 To find y, we need to integrate y ′ : 𝑊 = ∫ 𝑊 ′ (𝑡)𝑑𝑡 = ∫ [ (𝑡 − 1) + 1 𝑡 + 1] 𝑑𝑡 = 𝑡2 2 − 𝑡 + 𝑙𝑛 |𝑡 + 1| + 𝑐 where 𝑐 is an integration constant which is arbitrary. This means there are infinitely many solutions. Additional condition: initial condition 𝑊(0) = 1. (Meaning: 𝑊 = 1 when 𝑡 = 0) Then 𝑊(0) = 0 + 𝑙𝑛 |1| + 𝑐 = 𝑐 = 1, so 𝑊(𝑡) = 𝑡2 2 − 𝑡 + 𝑙𝑛 |𝑡 + 1| + 1. So for equation like 𝑊 ′ = 𝑓(𝑡), we can solve it by integration: 𝑊 = ∫ 𝑓(𝑡)𝑑𝑡. 1.2 DirectionalFields Directional field: for first order equations 𝑊 ′ = 𝑓(𝑡, 𝑊). Interpret 𝑊 ′ as the slope of the tangent to the solution 𝑊(𝑡) at point (𝑡, 𝑊) in the 𝑊 − 𝑡 plane. • If 𝑊 ′ = 0, the tangent line is horizontal; • If 𝑊 ′ > 0, the tangent line goes up; • If 𝑊 ′ < 0, the tangent line goes down; • The value of |𝑊 ′ | determines the steepness.
  • 10. 10 Example 5. Consider the equation 𝑊 ′ = 1 2 (3 − 𝑊).We know the following: • If 𝑊 = 3, then y ′ = 0, flat slope, • If 𝑊 > 3, then 𝑊 ′ < 0, down slope, • If 𝑊 < 3, then 𝑊 ′ > 0, up slope. See the directional field below (with some solutions sketched in red): We note that, if 𝑊(0) = 3, then 𝑊(𝑡) = 3 is the solution. Asymptotic behavior: As 𝑡 → ∞, we have 𝑊 → 3 Remarks: (1). For equation 𝑊 ′ (𝑡) = 𝑎(𝑏 − 𝑊) with 𝑎 > 0, it will have similar behavior as Example 5, where 𝑏 = 3 and 𝑎 = 1 2 . Solution will approach 𝑊 = 𝑏 as 𝑡 → +∞. (2). Now consider 𝑊 ′ (𝑡) = 𝑎(𝑏 − 𝑊), but with 𝑎 < 0. This changes the sign of ′ . We now have – If 𝑊(0) = 𝑏, then 𝑊(𝑡) = 𝑏; – If 𝑊(0) > 𝑏, then 𝑊 → +∞ as 𝑡 → +∞; – If 𝑊(0) < 𝑏, then 𝑊 → −∞ as 𝑡 → +∞. Example 6: Let 𝑊 ′ (𝑡) = (𝑊 − 1)(𝑊 − 5). Then, • If 𝑊 = 1 or 𝑊 = 5, then 𝑊 ′ = 0. • If 𝑊 < 1, then 𝑊 ′ > 0; • If 1 < 𝑊 < 5, then 𝑊 ′ < 0; • If 𝑊 > 5, then 𝑊 ′ < 0. Directional field looks like:
  • 11. 11 What can we say about the solutions? • If 𝑊(0) = 1, then 𝑊(𝑡) = 1; • If 𝑊(0) = 5, then 𝑊(𝑡) = 5; • If 𝑊(0) < 1, then 𝑊 → 1 as 𝑡 → +∞; • If 1 < 𝑊(0) < 5, then 𝑊 → 1 as 𝑡 → +∞; • If 𝑊(0) > 5, then 𝑊 → +∞ as 𝑡 → +∞. Remark: If we have 𝑊 ′ (𝑡) = 𝑓(𝑊), and for some 𝑊0 we have 𝑓(𝑊0) = 0, then, 𝑊(𝑡) = 𝑊0 is a solution. Example 7: Given the plot of a directional field, which of the following ODE could have generate it? (a). 𝑊 ′ (𝑡) = (𝑊 − 2)(𝑊 − 4)
  • 12. 12 (b). 𝑊 ′ (𝑡) = (𝑊 − 1)2 (𝑊 − 3) (c). 𝑊 ′ (𝑡) = (𝑊 – 1)(𝑊 − 3)2 (d). 𝑊 ′ (𝑡) = −(𝑊 – 1) (𝑊− 3)2 We first check the constant solution, 𝑊 = 1 and 𝑊 = 3. Then (a) can not be. Then, we check the sign of 𝑊 ′ on the intervals: 𝑊 < 1, 1 < 𝑊 < 3, 𝑎𝑛𝑑 𝑊 > 3, to match the directional field. We found that (𝑐) could be the equation.
  • 13. 13 2. Systems of Linear Differential Equations 2.1. Introduction to Systems of Linear Differential Equations Right after the invention of calculus, differential equations replaced algebraic equations (which in turn replaced counting) as the major tool in mathematically modelling everything. A single differential equation (also called “scalar differential equation”) is a mathematical model of the time-evolution/spatial variation of one single substance (can be population of a single species, amount of a single chemical, etc.); On the other hand, a system of differential equations models the time-evolution of more than one quantities. One example is Newton’s second law: 𝑑2 𝑥 𝑑𝑡2 = 𝑚𝑎 (1) Which looks like a single equation but is actually a system because both 𝑥 and 𝑎 has more than one components. Traditionally, systems of ordinary differential equations arise from study of mechanics. Modern examples also abound, especially from biology, sociology, economics, etc. The general form of a system involving n unknown functions is 𝑥1 = 𝑓1(𝑥1,, 𝑥𝑛) (2) 𝑥2 = 𝑓2(𝑥1,, 𝑥𝑛) (3) 𝑥3 = 𝑓𝑛(𝑥1,, 𝑥𝑛) (4) where the evolution of n quantities are described. Such a system is usually referred to as an n × n first order system. Remark 1. When 𝑛 = 2 or 3, 𝑥, 𝑊 (respectively𝑥, 𝑊, 𝑧) are often used instead of 𝑥1,, 𝑥𝑛. When all 𝑓1, 

 
 
 , 𝑓𝑛 are linear in their variables 𝑥1,

 
 
 , 𝑥𝑛, the system is called linear, otherwise it’s called nonlinear. So an 𝑛 × 𝑛 first order linear system has the general form 𝑥1̇ = 𝑎11 ( 𝑡) 𝑥1 + ⋯  + 𝑎 𝑛1(𝑡) 𝑥 𝑛 + 𝑔1(𝑡) (5) 𝑥 𝑛̇ = 𝑎 𝑛1( 𝑡) + ⋯  + 𝑎 𝑛𝑛 (𝑡) 𝑥 𝑛 + 𝑔 𝑛 (𝑡). (6) If furthermore all 𝑎𝑖𝑗 (t) are constants, that is 𝑥1̇ = 𝑎11 𝑥1 + ⋯  + 𝑎 𝑛1 𝑥 𝑛 + 𝑔1(𝑡) (7) 𝑥 𝑛̇ = 𝑎 𝑛1 𝑥1 + ⋯  + 𝑎 𝑛𝑛 (𝑡) 𝑥 𝑛 + 𝑔 𝑛 (𝑡) (8)
  • 14. 14 The system is said to have “constant coefficients”. As usual, when 𝑔1(𝑡) 

 𝑔 𝑛(𝑡) = 0, the above linear systems are called “homogeneous”. Remark 2. In almost all practical cases, the first order system will be nonlinear. There is no systematic way to solve all general nonlinear system. In fact, even for 𝑛 × 𝑛 first order linear system, no simple formula exists (of course unless 𝑛 = 1, which can be solved through application of appropriate integrating factors).Only linear systems with constant coefficients enjoy good formulas for solutions. Nevertheless, as we will see soon, one important way to understand the general nonlinear system is to derive from it one or more related linear, constant-coefficient systems. Once a good understanding is reached for these constant-coefficient systems, the behaviours of the solutions to the original nonlinear problem often can be obtained. Write out the general form of a system of first order ODE, with 𝑥1, 𝑥2 as unknowns. Given 𝑎𝑊′′ + 𝑏𝑊′ + 𝑐𝑊 = 𝑔(𝑡), 𝑊(0) = 𝛌, 𝑊′ (0) = 𝛜 we can do a variable change: let 𝑥1 = 𝑊, 𝑥2 = 𝑥′1 = 𝑊 ′ then 𝑥′ 1 = 𝑥2 𝑥1(0) = ∝ 𝑥′2 = 𝑊 ′′ = 1 𝑎 (𝑔(𝑡) − 𝑏𝑥2 − 𝑐𝑥1 ) 𝑥2(0) = 𝛜 Observation: For any 2nd order equation, we can rewrite it into a system of 2 first order equations. Example 1. Given 𝑊 ′′ + 5𝑊 ′ − 10𝑊 = 𝑠𝑖𝑛 𝑡, 𝑊(0) = 2, 𝑊′ (0) = 4 Rewrite it into a system of first order equations: let 𝑥1 = 𝑊 and 𝑥2 = 𝑊 ′ = 𝑥′1 , then 𝑥′1 = 𝑥2 𝑥1 (0) = 2 𝑥′2 = 𝑊 ′′ = −5𝑥2 + 10𝑥1 + sin t 𝑥2 (0) = 4 We can do the same thing to any high order equations. For 𝑛 − 𝑡ℎ order differential equation: 𝑊 (𝑛) = 𝐹(𝑡, 𝑊, 𝑊′ ,· · · , 𝑊(𝑛 − 1)) define the variable change:
  • 15. 15 𝑥1 = 𝑊, 𝑥1 = 𝑊 ′ ,

 𝑥 𝑛 = 𝑊(𝑛−1) we get 𝑥′1= y ′ = 𝑥2 𝑥′2 = y ′′ = 𝑥3 . . 𝑥′ 𝑛−1 = 𝑊(𝑛−1) = 𝑥 𝑛 𝑥′ 𝑛 = 𝑊(𝑛) = 𝐹( 𝑡, 𝑥1, 𝑥2,· · · , 𝑥 𝑛) With corresponding source terms. (Optional) Reversely, we can convert a 1st order system into a high order equation. 2. 2. Review of Matrix Theory. A matrix of size m × n: ≀ m, 1 ≀ j ≀ n. We consider only square matrices, i.e., m = n, in particular for n = 2 and 3. Basic operations: A, B are two square matrices of size n. • Addition: 𝐎 + 𝐵 = (𝑎𝑖𝑗 ) + (𝑏𝑖𝑗 ) • Scalar multiple: 𝛌𝐎 = (𝛌 · 𝑎𝑖𝑗 ) • Transpose: 𝐎 𝑇 switch the 𝑎𝑖,𝑗 with 𝑎𝑖𝑗. (𝐎 𝑇 ) 𝑇 = 𝐎. • Product: For 𝐎 · 𝐵 = 𝐶, it means 𝑐𝑖,𝑗 is the inner product of (𝑖𝑡ℎ row of 𝐎) and (𝑗𝑡ℎ column of 𝐵). Example: [ 𝑎 𝑏 𝑐 𝑑 ]· [ 𝑥 𝑊 𝑢 𝑣 ]= [ ax + bu ay + bv cx + du cy + dv ] We can express system of linear equations using matrix product.
  • 16. 16 Example 1. 𝑥1− 𝑥2 + 3𝑥3 = 4 2𝑥1 + 5𝑥3 = 0 𝑥2− 𝑥3 = 7 can be expressed as: · [ 1 −1 3 2 0 5 0 1 −1 ]. [ 𝑥 𝑊 𝑧 ]=[ 4 0 7 ] Some properties: • Identity 𝐌: I = 𝑑𝑖𝑎𝑔(1, 1,· · · ,1), 𝐎𝐌 = 𝐌𝐎 = 𝐎. • Determinant det(A): 𝑑𝑒𝑡 [ 𝑎 𝑏 𝑐 𝑑 ] = ad − bc, 𝑑𝑒𝑡 ( 𝑎 𝑏 𝑐 𝑢 𝑣 𝑀 𝑥 𝑊 𝑧 ) = 𝑎𝑣𝑥 + 𝑏𝑀𝑥 + 𝑐𝑢𝑊 − 𝑥𝑣𝑐 − 𝑊𝑀𝑎 − 𝑧𝑢𝑏. • Inverse 𝑖𝑛𝑣(𝐎) = 𝐎−1 : 𝐎−1 𝐎 = 𝐎𝐎−1 = 𝐌. • The following statements are all equivalent: (optional) – (1) 𝐎 is invertible; – (2) 𝐎 is non-singular; – (3) 𝑑𝑒𝑡 (𝐎) ≠ 0; – (4) row vectors in 𝐎 are linearly independent; – (5) Column vectors in 𝐎 are linearly independent. – (6) All eigenvalues of 𝐎 are non-zero.
  • 17. 17 2.3. Eigen Values and Eigen Vectors Eigenvalues and eigenvectors of 𝐎 (only when A is 2 × 2) λ: scalar value, 𝑣⃗: column vector, 𝑣⃗ ≡ 0. If 𝐎𝑣⃗ = λ𝑣⃗, then (λ, 𝑣⃗) is the (eigenvalue, eigenvector) of 𝐎. They are also called an eigen-pair of 𝐎. Remark: If 𝑣⃗ is an eigenvector, then α𝑣⃗ for any α ≠ 0 is also an eigenvector, because 𝐎(α𝑣⃗ ) = α𝐎𝑣⃗ = αλ𝑣⃗ = λ(α𝑣⃗). How to find (λ, v): A𝑣⃗ − λ𝑣⃗ = 0, (𝐎 − 𝜆𝐌)𝑣⃗⃗⃗⃗⃗ = 0, 𝑑𝑒𝑡(𝐎 − 𝜆𝐌) = 0. We see that 𝑑𝑒𝑡(𝐎 − 𝜆𝐌) is a polynomial of degree 2 (if 𝐎 is 2 × 2) in λ, and it is also called the characteristic polynomial of 𝐎. We need to find its roots. Example 1. Eigenvalues can be complex numbers. A = [ 2 −9 4 2 ] Let’s first find the eigenvalues. 𝑑𝑒𝑡(𝐎 − 𝜆𝐌) = 𝑑𝑒𝑡 [2 − λ 9 4 2 − λ ] = (2 − λ) 2 + 36 = 0, ⇒ 𝜆1,2 = 2 ± 6𝑖 We see that 𝜆2 = 𝜆1 ̅̅̅, complex conjugate. The same will happen to the eigenvectors, i.e., 𝑣2⃗⃗⃗⃗⃗ = 𝑣1⃗⃗⃗⃗⃗. So we need to only find one. Take 𝜆1 = 2 + 6𝑖, we compute 𝑣⃗ = (𝑣1 , 𝑣2 ) 𝑇 : (𝐎 − 𝜆1 𝐌) 𝑣⃗ = 0, [ −𝑖6 −9 4 −𝑖6 ]· [ 𝑣1 𝑣2]= 0, −6i𝑣1 − 9𝑣2 = 0, choose 𝑣1 = 1, so 𝑣2 = − 2 3𝑖 , so 𝑣1⃗⃗⃗⃗⃗ =( 1 −2/3𝑖 ), 𝑣2⃗⃗⃗⃗⃗ = 𝑣1⃗⃗⃗⃗⃗= ( 1 − 2 3𝑖 ) .
  • 18. 18 2.4. Stability of Linear Systems For the 2 × 2 system 𝑥′⃗⃗⃗⃗ = 𝐎 𝑥⃗ we see that 𝑥⃗ = (0,0) is the only critical point if 𝐎 is invertible. In a more general setting: the system 𝑥′⃗⃗⃗⃗ = 𝐎 𝑥⃗−𝑏⃗⃗ would have a critical point at 𝑥⃗= 𝐎−1 𝑏. The type and stability of the critical point is solely determined by the eigenvalues of A. 𝜆1,2 eigenvalues Type of C.P Stability Real 𝜆1. 𝜆2 < 0 Saddle point unstable Real 𝜆1 > 0, 𝜆2 > 0, 𝜆2 ≠ 𝜆1 Node(source) Unstable Real 𝜆1 < 0, 𝜆2 < 0, 𝜆2 ≠ 𝜆1 Node(sink) asymptotically stable Real 𝜆1 = 𝜆2 = 𝜆 Improper node asymptotically stable if λ < 0, unstable if λ > 0 Complex 𝜆1,2 = ±𝑖𝛜 Centre stable but not asymptotically Complex 𝜆1,2 = α ± iβ Spiral point asymptotically stable if α <0, unstable if α > 0 Example 1. We now consider again the prey-predator model, and set in values for the constants. We consider { x ′ (t) = x(10 − 5y) y ′ (t) = y(−6 + x)
  • 19. 19 which has 2 critical points (0,0) 𝑎𝑛𝑑 (6,2). The Jacobian matrix is 𝐜(𝑥, 𝑊) = [ 10 − 5y −5x y −6 + x ]. At (0,0) we have 𝐜(0, 0) =[ 10 0 0 −6 ], λ1= 10, λ2 = −6, saddle point, unstable. At (6, 2) we have 𝐜(6, 2) =[ 0 −30 2 0 ], λ1,2 = ±𝑖 √ 60, center, stable but not asymp.. To see more detailed behavior of the model, we compute the two eigenvector for 𝐜(0,0), and get 𝑣1⃗⃗⃗⃗⃗ = (1, 0) and 𝑣1⃗⃗⃗⃗⃗ = (0,1). We sketch the trajectories of solution in (x1, x2)- plane in the next plot, where the trajectories rotate around the center counter clock wise. One can interpret these as “circles of life”. In particular, the big circles can be interpreted as: When there are very little predators, the prey grows exponentially, very quickly. As the population of the prey becomes very large, there is a lot of food for the prey, and this triggers an sudden growth of the predator. As the predators increase their numbers, the prey population shrinks, until there is very little prey left. Then, the predators starve, and its population decays exponentially (dies out). The circle continuous in a periodic way, forever!
• 20. 20 3. Existence and Uniqueness Theorem 3.1. Picard's Theorem Here we concentrate on the solution of the first order IVP

y′ = f(x, y), y(x0) = y0 (1)

We are interested in the following questions: 1. Under what conditions does a solution to (1) exist? 2. Under what conditions does a unique solution to (1) exist? Comment: An ODE may have no solution, a unique solution, or infinitely many solutions. For example, (y′)² + y² + 1 = 0, y(0) = 1 has no solution; the ODE y′ = 2x, y(0) = 1 has the unique solution y = 1 + x²; whereas the ODE xy′ = y − 1, y(0) = 1 has infinitely many solutions y = 1 + αx, α any real number. (I only state the theorems. For proofs, one may see 'An Introduction to Ordinary Differential Equations' by E. A. Coddington.)

Theorem 1 (Existence theorem): Suppose that f(x, y) is a continuous function in some region R = {(x, y) : |x − x0| ≀ a, |y − y0| ≀ b}, (a, b > 0). Since f is continuous in a closed and bounded domain, it is necessarily bounded in R, i.e. there exists K > 0 such that |f(x, y)| ≀ K for all (x, y) ∈ R. Then the IVP (1) has at least one solution y = y(x) defined in the interval |x − x0| ≀ α, where α = min{a, b/K}. (Note that the solution exists possibly in a smaller interval.)

Theorem 2 (Uniqueness theorem): Suppose that f and ∂f/∂y are continuous functions in R (defined in the existence theorem). Hence both f and ∂f/∂y are bounded in R, i.e.

(a) |f(x, y)| ≀ K and (b) |∂f/∂y| ≀ L for all (x, y) ∈ R
• 21. 21 Then the IVP (1) has at most one solution y = y(x) defined in the interval |x − x0| ≀ α, where α = min{a, b/K}. Combining with the existence theorem, the IVP (1) has a unique solution y = y(x) defined in the interval |x − x0| ≀ α.

Comment: Condition (b) can be replaced by a weaker condition, known as the Lipschitz condition. Thus, instead of continuity of ∂f/∂y, we require

|f(x, y1) − f(x, y2)| ≀ L|y1 − y2| for all (x, yi) ∈ R.

If ∂f/∂y exists and is bounded, then f necessarily satisfies the Lipschitz condition. On the other hand, a function f(x, y) may be Lipschitz continuous while ∂f/∂y does not exist. For example, f(x, y) = x²|y|, |x| ≀ 1, |y| ≀ 1, is Lipschitz continuous in y, but ∂f/∂y does not exist at (x, 0).

*Note 1: The existence and uniqueness theorems stated above are local in nature, since the interval |x − x0| ≀ α where the solution exists may be smaller than the original interval |x − x0| ≀ a where f(x, y) is defined. However, in some cases this restriction can be removed. Consider the linear equation

y′ + p(x)y = r(x), (2)

where p(x) and r(x) are defined and continuous in the interval a ≀ x ≀ b. Here f(x, y) = −p(x)y + r(x). If L = max over a ≀ x ≀ b of |p(x)|, then

|f(x, y1) − f(x, y2)| = |−p(x)(y1 − y2)| ≀ L|y1 − y2|

Thus f is Lipschitz continuous in y in the infinite vertical strip a ≀ x ≀ b, −∞ < y < ∞. In this case, the IVP (2) has a unique solution in the original interval a ≀ x ≀ b.

*Note 2: Though the theorems are stated in terms of an interior point x0, the point x0 could be a left/right end point.

Comment: The conditions of the existence and uniqueness theorem are sufficient but not necessary. For example, consider

y′ = √y + 1, y(0) = 0, x ∈ [0, 1]
• 22. 22 Clearly f does not satisfy a Lipschitz condition near the origin, but the IVP still has a unique solution. [Hint: let y1 and y2 be two solutions and consider z(x) = (y1(x)^{1/2} − y2(x)^{1/2})².]

Comment: The existence and uniqueness theorems are also valid for certain systems of first order equations. These theorems are also applicable to certain higher order ODEs, since a higher order ODE can be reduced to a system of first order ODEs.

Example 1. Consider the ODE y′ = 1 + y², y(0) = 0, and the rectangle S = {(x, y) : |x| ≀ 100, |y| ≀ 1}. Clearly f and ∂f/∂y are continuous in S. Hence there exists a unique solution in the neighbourhood of (0, 0). Now f = 1 + y² and |f| ≀ 2 in S, so α = min{100, 1/2} = 1/2. Hence the theorems guarantee existence of a unique solution in |x| ≀ 1/2, which is much smaller than the original interval |x| ≀ 100. Since the above equation is separable, we can solve it exactly and find y(x) = tan(x). This solution is valid only in (−π/2, π/2), which is also much smaller than [−100, 100] but nevertheless bigger than predicted by the existence and uniqueness theorems.

3.3. Picard Iteration for the IVP This method gives an approximate solution to the IVP (1). Note that the IVP (1) is equivalent to the integral equation

y(x) = y0 + ∫ from x0 to x of f(t, y(t)) dt (3)

A rough approximation to the solution y(x) is given by the function y0(x) = y0, which is simply a horizontal line through (x0, y0). (Do not confuse the function y0(x) with the constant y0.) We insert this into the RHS of (3) in order to obtain a (perhaps) better approximate solution, say y1(x). Thus,

y1(x) = y0 + ∫ from x0 to x of f(t, y0(t)) dt = y0 + ∫ from x0 to x of f(t, y0) dt
• 23. 23 At the n-th stage we find

y_n(x) = y0 + ∫ from x0 to x of f(t, y_{n−1}(t)) dt

Theorem 3. If the function f(x, y) satisfies the existence and uniqueness theorem for IVP (1), then the successive approximations y_n(x) converge to the unique solution y(x) of the IVP (1).

Example 2. Apply Picard iteration to the IVP y′ = 2x(1 − y), y(0) = 2. Solution: Here y0(x) = 2. Now

y1(x) = 2 + ∫ from 0 to x of 2t(1 − 2) dt = 2 − x²
y2(x) = 2 + ∫ from 0 to x of 2t(t² − 1) dt = 2 − x² + x⁎/2
y3(x) = 2 + ∫ from 0 to x of 2t(t² − t⁎/2 − 1) dt = 2 − x² + x⁎/2 − x⁶/3!

By induction, it can be shown that

y_n(x) = 2 − x² + x⁎/2 − x⁶/3! + · · · + (−1)ⁿ x^{2n}/n!

Hence y_n(x) → 1 + e^{−x²} as n → ∞. Now y(x) = 1 + e^{−x²} is the exact solution of the given IVP. Thus the Picard iterates converge to the unique solution of the given IVP.

Comment: Picard iteration has more theoretical value than practical value. It is used in the proof of the existence and uniqueness theorem. On the other hand, finding an approximate solution by this method is almost impractical for a complicated function f(x, y).
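The iterates of Example 2 can also be generated symbolically. A sketch with SymPy, assuming a helper `picard` of our own devising:

```python
import sympy as sp

x, t, y = sp.symbols('x t y')

# Picard iteration for y' = f(x, y), y(x0) = y0:
# y_n(x) = y0 + integral from x0 to x of f(t, y_{n-1}(t)) dt
def picard(f, x0, y0, n):
    yn = sp.sympify(y0)
    for _ in range(n):
        integrand = f.subs(y, yn.subs(x, t)).subs(x, t)
        yn = y0 + sp.integrate(integrand, (t, x0, x))
    return sp.expand(yn)

f = 2 * x * (1 - y)       # the IVP from Example 2, with y(0) = 2
y3 = picard(f, 0, 2, 3)   # 2 - x**2 + x**4/2 - x**6/6, as computed above
```

The third iterate already matches the first four terms of the series for 1 + e^{−x²}.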
• 24. 24 3.4. Cauchy-Peano's Theorem The Picard existence theorem provides a locally unique solution to the differential equation y′ = f(y, t), y(t0) = y0, under the assumption that f is continuous and satisfies a Lipschitz condition in its first variable. The Peano existence theorem has weaker hypotheses than the Picard existence theorem: f is only assumed to be continuous. The conclusion is also weaker: a solution exists, but it may not be unique.

Theorem: Let D be an open subset of R × R, let f : D → R be continuous, and let y′(x) = f(x, y(x)) be a continuous, explicit first-order differential equation defined on D. Then every initial value problem y(x0) = y0 for f with (x0, y0) ∈ D has a local solution z : I → R, where I is a neighbourhood of x0 in R, such that z′(x) = f(x, z(x)) for all x ∈ I. The solution need not be unique: one and the same initial value (x0, y0) may give rise to many different solutions z.
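Non-uniqueness under the Peano theorem can be seen concretely. A sketch with SymPy for the classical IVP y′ = 2√y, y(0) = 0 (an illustration we add here; it is not from the report): f is continuous but not Lipschitz at y = 0, and the IVP has at least two solutions on x ≥ 0.

```python
import sympy as sp

x = sp.symbols('x', nonnegative=True)

# Residual of y' = 2*sqrt(y); a solution makes this identically zero
def residual(yexpr):
    return sp.simplify(sp.diff(yexpr, x) - 2 * sp.sqrt(yexpr))

y_zero = sp.Integer(0)   # the trivial solution y(x) = 0
y_par = x**2             # a second solution on x >= 0

r_zero = residual(y_zero)
r_par = residual(y_par)
```

Both residuals vanish and both functions satisfy y(0) = 0, so the initial value (0, 0) gives rise to two distinct solutions, exactly as the theorem allows.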
• 25. 25 4. Linearization of Non-Linear Models 4.1. Theory Most of the differential equations and systems of differential equations encountered in practice are non-linear, and most real-life problems are based on non-linear systems. Often, however, we cannot solve a non-linear differential equation exactly, so we linearize the system to obtain a linear equation that can be solved easily. Our first concern is therefore to linearize the non-linear system; once that is done, the numerous linear analysis methods can be applied to study the nature of the system. Consider the non-linear system

ż(t) = f(t, z(t)), z(t0) = z0

where the state z(t) is an n-dimensional vector and f is a non-linear function. Suppose the system has a solution φ(t, z0, t0) corresponding to a particular initial condition z(t0) = z0. If the initial data z0 is slightly changed, then it is expected that the solution φ(t, z0, t0) will also change slightly. If f(t, z) is continuously differentiable with respect to z, then we can expand f(t, z) in a Taylor series about the solution φ(t, z0, t0), i.e.

f(t, φ(t, z0, t0) + ÎŽz(t)) = f(t, φ(t, z0, t0)) + A(t) ÎŽz(t) + higher order terms

where A(t) = (a_ij(t)), an n × n matrix, with a_ij = ∂f_i/∂z_j evaluated along φ(t, z0, t0). Note that φ(t, z0, t0) + ÎŽz(t) is a solution of the given system, so

φ̇(t, z0, t0) + ÎŽż(t) = f(t, φ(t, z0, t0) + ÎŽz(t)) = f(t, φ(t, z0, t0)) + A(t) ÎŽz(t) + higher order terms

Since φ(t, z0, t0) is itself a solution, φ̇(t, z0, t0) = f(t, φ(t, z0, t0)), and hence

f(t, φ(t, z0, t0)) + ÎŽż(t) = f(t, φ(t, z0, t0)) + A(t) ÎŽz(t) + higher order terms
• 26. 26 ÎŽż(t) = A(t) ÎŽz(t) + higher order terms (1.9)

Hence the equation

ż(t) = A(t) z(t) (1.10)

is called the linearized system of (1.6) about the solution φ(t, z0, t0).

Remark (1.3): If we do not neglect the higher order terms in (1.8), then we are left with a semi-linear system

ż(t) = A(t) z(t) + g(t, z(t)) (1.11)

where g(t, z(t)) is the sum of the higher order terms.

Remark (1.4): If the original system (1.6) is time-invariant, that is,

f(t, z(t)) = f(z(t)) (1.12)

then the corresponding linearized system will be of the form

ż(t) = A z(t) (1.13)

Systems of the form (1.13) are known as autonomous homogeneous linear systems.

4.2. Example The state equations of an inverted pendulum are

ż1(t) = z2(t)
ż2(t) = (g/l) sin(z1(t))

What are the linearized equations about the equilibrium solution z1(t) = z2(t) = 0? First write the equations in state representation form as

ż(t) = (ż1(t), ż2(t))^T = f(t, z1(t), z2(t)) = (f1, f2)^T

where f1 = z2(t) and f2 = (g/l) sin(z1(t)). Then

∂f1/∂z1 = 0, ∂f1/∂z2 = 1, ∂f2/∂z1 = (g/l) cos(z1(t)), ∂f2/∂z2 = 0

Hence the required linearized form is ż(t) = A(t) z(t)
• 27. 27 where A(t) = [∂f1/∂z1, ∂f1/∂z2; ∂f2/∂z1, ∂f2/∂z2] and ż(t) = (ż1(t), ż2(t))^T. Putting all the partial derivatives into the matrix, we get

(ż1(t), ż2(t))^T = [0, 1; (g/l) cos(z1(t)), 0] (z1(t), z2(t))^T

Evaluating A(t) at the equilibrium z1(t) = 0 gives cos(z1(t)) = 1, so

A = [0, 1; g/l, 0]

Hence the required linearized form of the non-linear system is

(ż1(t), ż2(t))^T = [0, 1; g/l, 0] (z1(t), z2(t))^T
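The Jacobian computation above can be automated. A sketch with SymPy (variable names are ours):

```python
import sympy as sp

z1, z2, g, l = sp.symbols('z1 z2 g l')

# Inverted pendulum right-hand side: z1' = z2, z2' = (g/l) sin(z1)
f = sp.Matrix([z2, (g / l) * sp.sin(z1)])

# A(t) is the Jacobian (df_i/dz_j); the linearization about the
# equilibrium z1 = z2 = 0 evaluates it there, where cos(z1) = 1
A_t = f.jacobian([z1, z2])
A_eq = A_t.subs({z1: 0, z2: 0})   # Matrix([[0, 1], [g/l, 0]])
```

This reproduces the hand computation: the (2, 1) entry (g/l) cos(z1) collapses to g/l at the equilibrium.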
• 28. 28 5. Controllability of Linear Systems 5.1. Motivation behind Controllability Control theory is basically concerned with influencing the behaviour of a dynamical system so as to achieve a desired goal. Many physical systems are controlled by the manipulation of their inputs based on the simultaneous observation of their outputs. For example, an airplane is controlled by the pilot's actions based on instrument readings and visual observations. The control problem is to determine, on the basis of the available data, the input necessary to achieve a given goal. Mathematical control theory exhibits a wide variety of techniques that go beyond those associated with traditional applied mathematics. In the simplest case, consider the vibrating system consisting of a single mass on a linear spring. If the displacement from equilibrium is x at time t, then Newton's law asserts that

d²x/dt² = −x

Now we can introduce a control force u, depending on x and dx/dt, so that every solution of

d²x/dt² = −x + u

returns to rest at x = 0. This is the problem of, and the motivation behind, controllability.

5.2. Kalman's Criterion Definition: Consider the linear system ẋ = Ax + Bu, where x ∈ Rⁿ is the state vector and u ∈ R^m is the input vector; A is of size n × n and B of size n × m. The pair (A, B) is controllable if, given a duration T > 0 and two arbitrary points x0, x_T ∈ Rⁿ, there exists a piecewise continuous function t → ū(t) from [0, T] to R^m such that the integral curve x̄(t) generated by ū with x̄(0) = x0 satisfies x̄(T) = x_T. In other words,

e^{AT} x0 + ∫ from 0 to T of e^{A(T−t)} B ū(t) dt = x_T.

This property depends only on A and B:
• 29. 29 Theorem (Kalman): A necessary and sufficient condition for (A, B) to be controllable is

rank C = rank [B | AB | · · · | A^{n−1}B] = n.

C is called Kalman's controllability matrix (of size n × nm).

5.3. The Matrix Exponential (Transition Matrix) For each n × n complex matrix A, define the exponential of A to be the matrix

e^A = ∑ from k = 0 to ∞ of A^k/k! = I + A + A²/2! + A³/3! + · · · (1)

It is not difficult to show that this sum converges for all complex matrices A of any finite dimension. If A is a 1 × 1 matrix [t], then e^A = [e^t], by the Maclaurin series formula for the function y = e^t. More generally, if D is a diagonal matrix with diagonal entries d1, d2, . . . , dn, then

e^D = I + D + D²/2! + · · · = diag(e^{d1}, e^{d2}, . . . , e^{dn}).

The situation is more complicated for matrices that are not diagonal. However, if a matrix A happens to be diagonalizable, there is a simple algorithm for computing e^A, a consequence of the following lemma.

Lemma 1: Let A and P be complex n × n matrices, and suppose that P is invertible. Then

e^{P⁻¹AP} = P⁻¹ e^A P.

Proof: Recall that, for all integers m ≥ 0, we have (P⁻¹AP)^m = P⁻¹ A^m P. The definition (1) then yields
• 30. 30 e^{P⁻¹AP} = I + P⁻¹AP + (P⁻¹AP)²/2! + · · · = I + P⁻¹AP + (P⁻¹A²P)/2! + · · · = P⁻¹(I + A + A²/2! + · · ·)P = P⁻¹ e^A P.

If a matrix A is diagonalizable, then there exists an invertible P such that A = PDP⁻¹, where D is a diagonal matrix of eigenvalues of A, and P is a matrix having eigenvectors of A as its columns. In this case, e^A = P e^D P⁻¹.

Example: Let A denote the matrix

A = [5, 1; −2, 2]

The reader can easily verify that 4 and 3 are eigenvalues of A, with corresponding eigenvectors w1 = (1, −1) and w2 = (1, −2). It follows that

A = PDP⁻¹ = [1, 1; −1, −2] [4, 0; 0, 3] [2, 1; −1, −1]

so that

e^A = [1, 1; −1, −2] [e^4, 0; 0, e^3] [2, 1; −1, −1] = [2e^4 − e^3, e^4 − e^3; 2e^3 − 2e^4, 2e^3 − e^4]

5.3.1. Matrix Exponential in Matlab Y = expm(X) computes the matrix exponential of X. Although it is not computed this way, if X has a full set of eigenvectors V with corresponding eigenvalues D, then [V, D] = eig(X) and expm(X) = V*diag(exp(diag(D)))/V.

A = [1 1 0; 0 0 2; 0 0 -1];
expm(A)

ans =
    2.7183    1.7183    1.0862
         0    1.0000    1.2642
         0         0    0.3679
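The diagonalization algorithm e^A = P e^D P⁻¹ can be checked numerically against the worked example. A sketch with NumPy, mirroring the eig-based formula above:

```python
import numpy as np

A = np.array([[5.0, 1.0], [-2.0, 2.0]])

# Diagonalize: eigenvalues in d, eigenvectors in the columns of P
d, P = np.linalg.eig(A)                          # eigenvalues 4 and 3
eA = P @ np.diag(np.exp(d)) @ np.linalg.inv(P)   # e^A = P e^D P^{-1}

# Closed form from the worked example
e4, e3 = np.exp(4.0), np.exp(3.0)
expected = np.array([[2 * e4 - e3,     e4 - e3],
                     [2 * e3 - 2 * e4, 2 * e3 - e4]])
```

The numerically diagonalized exponential agrees with the hand computation to machine precision.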
• 31. 31 5.4. Solution of the Controlled System using the Transition Matrix Consider the n-dimensional linear control system:

ẋ = A(t)x + B(t)u, x(t0) = x0

Let Ί(t, t0) be the transition matrix of the homogeneous system ẋ = A(t)x. The solution of the control system is given by (using the variation of parameters method)

x(t) = Ί(t, t0) x0 + ∫ from t0 to t of Ί(t, τ) B(τ) u(τ) dτ

The system is controllable iff for arbitrary initial and final states x0, x1 there exists a control function u such that

x1 = Ί(t1, t0) x0 + ∫ from t0 to t1 of Ί(t1, τ) B(τ) u(τ) dτ

The controllability Gramian for the linear system is given by

W(t0, t1) = ∫ from t0 to t1 of Ί(t1, τ) B(τ) B*(τ) Ί*(t1, τ) dτ

Theorem: The linear control system is controllable iff W(t0, t1) is invertible, and the steering control that moves x0 to x1 is given by

u(t) = B*(t) Ί*(t1, t) W⁻¹(t0, t1) [x1 − Ί(t1, t0) x0]

Proof: The controllability part is already proved earlier. We now show that the steering control defined above actually performs the transfer of states. The controlled state is given by

x(t) = Ί(t, t0) x0 + ∫ from t0 to t of Ί(t, τ) B(τ) u(τ) dτ
x(t) = Ί(t, t0) x0 + ∫ from t0 to t of Ί(t, τ) B(τ) B*(τ) Ί*(t1, τ) W⁻¹(t0, t1) [x1 − Ί(t1, t0) x0] dτ

Evaluating at t = t1,

x(t1) = Ί(t1, t0) x0 + W(t0, t1) W⁻¹(t0, t1) [x1 − Ί(t1, t0) x0]
x(t1) = x1
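The Gramian can be approximated by quadrature for a concrete system. A minimal sketch for the double integrator (an example we add here, with a hand-rolled midpoint rule):

```python
import numpy as np

# LTI double integrator: A is nilpotent, so Phi(t1, tau) = e^{A(t1 - tau)} = I + A(t1 - tau)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

def phi(s):
    return np.eye(2) + A * s   # exact here, since A @ A = 0

# W(t0, t1) = integral of Phi(t1, tau) B B^T Phi(t1, tau)^T dtau, midpoint rule
def gramian(t0, t1, n=2000):
    h = (t1 - t0) / n
    W = np.zeros((2, 2))
    for k in range(n):
        tau = t0 + (k + 0.5) * h
        M = phi(t1 - tau) @ B
        W += (M @ M.T) * h
    return W

W = gramian(0.0, 1.0)
# Closed form for this A, B: W(0, T) = [[T^3/3, T^2/2], [T^2/2, T]]
```

W is invertible, so by the theorem the double integrator is controllable and the steering control formula applies.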
• 32. 32 5.5. Kalman Condition Revisited (Proof) System: ẋ = Ax + Bu. Its solution is

x(t) = e^{At} x0 + ∫ from t0 to t of e^{A(t−τ)} B u(τ) dτ

Assuming x(t1) = 0 (with t0 = 0),

0 = e^{At1} x0 + ∫ from 0 to t1 of e^{A(t1−τ)} B u(τ) dτ
x0 = −∫ from 0 to t1 of e^{−Aτ} B u(τ) dτ

By the Cayley-Hamilton theorem, for an n × n matrix A with characteristic polynomial p(λ) = det(λI − A), we have p(A) = 0. Hence e^{−Aτ} can be written as

e^{−Aτ} = ∑ from k = 0 to n−1 of γ_k(τ) A^k

so that

x0 = −∑ from k = 0 to n−1 of A^k B β_k, where β_k = ∫ from 0 to t1 of γ_k(τ) u(τ) dτ
   = −[B  AB  · · ·  A^{n−1}B] [β0  β1  · · ·  β_{n−1}]^T

This equation can be solved for the β_k for an arbitrary x0 iff the rank of C = [B  AB  · · ·  A^{n−1}B] is n, in which case the system is controllable.
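The rank criterion just proved is easy to apply to the spring-mass system of Section 5.1. A sketch in NumPy (the helper `ctrb` is ours, named after the Matlab function of the same purpose):

```python
import numpy as np

# Controlled spring-mass: x'' = -x + u, in state form with state (x, x')
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Kalman controllability matrix C = [B | AB | ... | A^{n-1} B]
def ctrb(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

C = ctrb(A, B)                                          # [[0, 1], [1, 0]]
controllable = np.linalg.matrix_rank(C) == A.shape[0]   # full rank: controllable
```

Here rank C = 2 = n, so a control force on the mass suffices to steer both displacement and velocity.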
• 33. 33 6. Controllability of Aircraft 6.1. Introduction In the following section I examine the application of state space modelling and control theory to aircraft problems. Controllability is concerned with whether the states of a dynamic system are affected by the control input. The mathematical definition of controllability is easy to compute with but somewhat abstract. An alternate way of looking at controllability is to transform the state equation to a canonical form. If the state equations are transformed so that the new plant matrix is a diagonal matrix, then the equations governing the system are said to be decoupled. The control matrix can then be examined using Kalman's criterion and the controllability of the system can be checked.

6.2. Simulation in Matlab Here I have considered the dynamics of a STOL transport aircraft. From the theoretical model of the longitudinal motion of the aircraft I obtained the following control matrices.

A =
    -1.3970    1.0000         0         0
    -5.4700   -3.2700         0         0
          0    1.0000         0         0
  -400.0000         0  400.0000         0

B =
    -0.1240
   -13.2000
          0
          0
• 34. 34 6.2.1 Code for Simulation

clear
clc
disp('Linear System dot(x) = Ax + Bu where A and B are given as follows:')
A = input('The matrix A');
B = input('The matrix B');
n = input('The order of matrix');
pause
disp('Kalman Test: The controllability matrix of the system is:')
C = B;
Q = B;
for i = 1:n-1
    C = (A^i)*B
    Q = [Q C]
end
pause
disp('The rank of the controllability matrix is:')
r = rank(Q)
pause
if r ~= n
    disp('The system is not controllable');
else
    disp('The system is controllable')
    t = sym('t')
    s = sym('s')
    disp('The initial state is:')
    x0 = input('Enter the initial state')
    pause
    disp('The final state is:')
    x1 = input('Enter the final state')
    pause
    disp('We want to reach the final state in time')
    T = input('Enter the final time')
    pause
    disp('The transition matrix is:')
    phi = expm(A*t)
    disp('The controllability Gramian is:')
    W = int((expm(A*(T-t))*B*B'*expm(A'*(T-t))), t, 0, T)
    W = subs(W);
    pause
    disp('The controller is taken as:')
    u = B'*expm(A'*(T-t))*inv(W)*(x1 - expm(A*T)*x0)
    U = subs(u, s);   % rewrite the controller in the dummy variable s
    disp('The solution of the system using the above controller is:')
    E = B*U;
    x = (expm(A*t))*x0 + int(expm(A*(t-s))*E, s, 0, t)
    disp('The graph of the solution is:')
    z = linspace(0, T, 10);
    for i = 1:n
        plot(z, subs(x(i), z))
        hold on
    end
    xlabel('Time')
    ylabel('x(t)')
end
• 35. 35 6.2.2. OUTPUT

Linear System dot(x) = Ax + Bu where A and B are given as follows:
The matrix A [-1.397 1 0 0; -5.47 -3.27 0 0; 0 1 0 0; -400 0 400 0]
The matrix B [-0.124; -13.2; 0; 0]
The order of matrix 4

Kalman Test: The controllability matrix of the system is:

C =
  -13.0268
   43.8423
  -13.2000
   49.6000

Q =
   -0.1240  -13.0268
  -13.2000   43.8423
         0  -13.2000
         0   49.6000

C =
   62.0407
  -72.1078
   43.8423
  -69.2912

Q =
   -0.1240  -13.0268   62.0407
  -13.2000   43.8423  -72.1078
         0  -13.2000   43.8423
         0   49.6000  -69.2912

• 36. 36

C =
  1.0e+03 *
   -0.1588
   -0.1036
   -0.0721
   -7.2794

Q =
  1.0e+03 *
   -0.0001   -0.0130    0.0620   -0.1588
   -0.0132    0.0438   -0.0721   -0.1036
         0   -0.0132    0.0438   -0.0721
         0    0.0496   -0.0693   -7.2794

The rank of the controllability matrix is:

r =
     4

The system is controllable

t =
t

s =
s

The initial state is:
Enter the initial state [1;2;3;4]

x0 =
     1
     2
     3
     4

The final state is:
Enter the final state [4;3;2;1]

x1 =
     4
     3
     2
     1

We want to reach the final state in time
Enter the final time 1

T =
• 37. 37
     1

P.S.: Since the complete output runs beyond the page bounds, it is difficult to reproduce it in full on paper. The graph is as follows.
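The Kalman rank test in the Matlab output above can be cross-checked independently. A minimal NumPy sketch with the same A and B:

```python
import numpy as np

# Longitudinal STOL transport model from Section 6.2
A = np.array([[  -1.397,  1.0,   0.0, 0.0],
              [  -5.47,  -3.27,  0.0, 0.0],
              [   0.0,    1.0,   0.0, 0.0],
              [-400.0,    0.0, 400.0, 0.0]])
B = np.array([[-0.124], [-13.2], [0.0], [0.0]])

# Kalman controllability matrix Q = [B, AB, A^2 B, A^3 B]
Q = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(4)])
rank = np.linalg.matrix_rank(Q)   # 4, so the aircraft model is controllable
```

The second column reproduces the first C vector of the Matlab transcript (AB = [-13.0268, 43.8423, -13.2, 49.6]ᵀ) and the rank is 4, confirming controllability of the longitudinal dynamics.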
• 38. 38 REFERENCES

1. Wen Shen, Introduction to Ordinary and Partial Differential Equations, Spring 2013, pp. 1-8.
2. Wen Shen, Introduction to Ordinary and Partial Differential Equations, Spring 2013, pp. 88-95.
3. S. Ghorai, Picard's Existence and Uniqueness Theorem, Picard's Iteration, pp. 1-4.
4. https://en.wikipedia.org/wiki/Peano_existence_theorem
5. Brian L. Stevens and Frank L. Lewis, Aircraft Control and Simulation, pp. 143-201.
6. Michael Cook, Flight Dynamics Principles, Elsevier, pp. 123-145.
7. Wayne Durham, Aircraft Flight Dynamics and Control, Wiley, pp. 183-221.