ADVANCED CONTROL SYSTEM DESIGN OF
AIRCRAFT AND SIMULATION OF ITS
TRAJECTORY
INTERNSHIP PROJECT REPORT
By
SIDDHARTH PUJARI
ROLL NO.-010
NIT ROURKELA
Department of Mathematics
Indian Institute of Space Science and Technology (IIST)
Thiruvananthapuram
December 2015
BONAFIDE CERTIFICATE
This is to certify that this project report entitled "ADVANCED CONTROL SYSTEM
DESIGN OF AIRCRAFT AND SIMULATION OF ITS TRAJECTORY", submitted to
Indian Institute of Space Science and Technology, Thiruvananthapuram, is a bonafide
work done by Siddharth Pujari under my supervision from 10th December 2015 to 31st
December 2015.
Dr. Raju K. George
Dean (R&D)
Sr. Professor and Head, Department of Mathematics
Indian Institute of Space Science and Technology (IIST)
Valiamala P.O.
Trivandrum 695547
Place : Thiruvananthapuram
Date : 29/12/2015
DECLARATION BY AUTHOR
This is to declare that this report has been written by me. No part of the report is plagiarized
from other sources. All information included from other sources has been duly
acknowledged. I am aware that if any part of the report is found to be plagiarized, I shall take full
responsibility for it.
Siddharth Pujari
NIT Rourkela
Place : Thiruvananthapuram
Date : 29/12/2015
ABSTRACT
In this report I have mainly focused on the controllability of linearized systems using their
state space models. As an example, an aircraft model is simulated using Matlab.
Modelling of the state space model itself is not taken into consideration here: given the
control matrices A and B, we can readily test the controllability of the aircraft and
plot the corresponding graph. The control matrices are taken for the longitudinal motion
of the aircraft. Apart from this, I have also focused on the theoretical aspects of advanced
control design: the stability of linear systems, linearization of systems of equations,
computing the transition matrix, the solution of the controlled system using the transition
matrix, and Kalman's criterion.
The simulation in Matlab took a substantial amount of time, owing to the
time-consuming calculation of the matrix exponential, the controllability Grammian
matrix, and the controller. In fact, I had to run the code overnight just to get the desired
graph. This was the major problem I encountered while doing the project. Nevertheless, I
obtained satisfactory results and the system could be controlled.
TABLE OF CONTENTS
CHAPTER NO. TITLE
ABSTRACT
NOMENCLATURE
CONTENTS
1. Brief Review of Differential Equations
1.2. Directional Fields
2. Systems of Linear Differential Equations
2.1 Introduction to Systems of Linear Differential Equations
2.2 Review of Matrix Theory.
2.3. Eigen Values and Eigen Vectors
2.4. Stability of Linear Systems
3. Existence and uniqueness theorem
3.1. Picard's Theorem
3.2. Picard Iteration for IVP
3.3. Cauchy–Peano's Theorem
4. Linearization of Non-Linear Models
4.1 Theory
4.2. Example
5. Controllability of Linear Systems
5.1. Motivation behind Controllability
5.2. Kalman's Criterion
5.3. The Matrix Exponential (Transition Matrix)
5.3.1. Matrix Exponential In Matlab
5.4. Solution of the Controlled System using Transition Matrix
5.5. Kalman Condition Revisited (Proof)
6. Controllability of Aircraft
6.1. Introduction
6.2. Simulation in Matlab
6.2.1. Code for simulation
6.2.2 Output.
1. BRIEF REVIEW OF DIFFERENTIAL EQUATIONS
A differential equation is an equation which contains derivatives of the unknown.
(Usually it is a mathematical model of some physical phenomenon.)
Two classes of differential equations:
• O.D.E. (ordinary differential equations): linear and non-linear;
• P.D.E. (partial differential equations).
Some concepts related to differential equations:
• System: a collection of several equations with several unknowns.
• Order of the equation: the highest order of derivatives.
• Linear or non-linear equations: Let y(t) be the unknown. Then
a0(t) y^(n) + a1(t) y^(n-1) + · · · + an(t) y = g(t)   (*)
is a linear equation. If the equation cannot be written as (*), then it is non-linear.
Two things you must know: identify the linearity and the order of an equation.
Example 1. Let y(t) be the unknown. Identify the order and linearity of the following
equations.
(a) (y + t) y' + y = 1
(b) 3y' + (t + 4) y = t^2 + y''
(c) y''' = cos(2ty)
(d) y^(4) + √(t y''') + cos t = e^t

Problem                              Order   Linear?
(a) (y + t) y' + y = 1                 1       No
(b) 3y' + (t + 4) y = t^2 + y''        2       Yes
(c) y''' = cos(2ty)                    3       No
(d) y^(4) + √(t y''') + cos t = e^t    4       No
What is a solution? A solution is a function that satisfies the equation and whose
derivatives exist.
Example 2. Verify that y(t) = e^t is a solution of the IVP (initial value problem)
y' = y,   y(0) = 1.
Here ðŠ(0) = 1 is called the initial condition.
Answer. Let us check whether y(t) satisfies the equation and the initial condition:
y' = (e^t)' = e^t = y,   y(0) = e^0 = 1.
Both hold, so it is a solution.
Example 3. Verify that y(t) = 10 − c e^(−t), with c a constant, is a solution to y' + y = 10.
Answer.
y' = −(−c e^(−t)) = c e^(−t),   y' + y = c e^(−t) + 10 − c e^(−t) = 10. OK.
Let us try to solve one equation.
Example 4. Consider the equation
(t + 1) y' = t^2.
We can rewrite it as (for t ≠ −1)
y' = t^2/(t + 1) = (t^2 − 1 + 1)/(t + 1) = ((t + 1)(t − 1) + 1)/(t + 1) = (t − 1) + 1/(t + 1).
To find y, we need to integrate y':
y = ∫ y'(t) dt = ∫ [(t − 1) + 1/(t + 1)] dt = t^2/2 − t + ln|t + 1| + c,
where c is an arbitrary integration constant. This means there are infinitely many
solutions.
Additional condition: the initial condition y(0) = 1 (meaning y = 1 when t = 0). Then
y(0) = 0 + ln|1| + c = c = 1, so
y(t) = t^2/2 − t + ln|t + 1| + 1.
So for an equation like y' = f(t), we can solve it by integration: y = ∫ f(t) dt.
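As a quick numerical cross-check of the closed form just derived, here is a short sketch in Python (the report's own code is Matlab; the helper names `f`, `rk4`, and `exact` are mine, not from the report):

```python
import math

def f(t, y):
    # right-hand side of y' = t^2 / (t + 1), valid for t > -1
    return t**2 / (t + 1)

def rk4(f, t0, y0, t1, n=1000):
    """Classical 4th-order Runge-Kutta from t0 to t1 with n steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# closed form derived above: y(t) = t^2/2 - t + ln|t + 1| + 1
exact = lambda t: t**2 / 2 - t + math.log(abs(t + 1)) + 1

y_num = rk4(f, 0.0, 1.0, 1.0)
print(abs(y_num - exact(1.0)) < 1e-8)  # True: numerics agree with the formula
```

The numerical and analytic answers agree to many digits, confirming the integration.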
1.2. Directional Fields
Directional field: for first order equations y' = f(t, y).
Interpret y' as the slope of the tangent to the solution y(t) at the point (t, y) in the
t–y plane.
• If y' = 0, the tangent line is horizontal;
• If y' > 0, the tangent line goes up;
• If y' < 0, the tangent line goes down;
• The value of |y'| determines the steepness.
Example 5. Consider the equation y' = (1/2)(3 − y). We know the following:
• If y = 3, then y' = 0: flat slope;
• If y > 3, then y' < 0: downward slope;
• If y < 3, then y' > 0: upward slope.
See the directional field below (with some solutions sketched in red):
We note that, if y(0) = 3, then y(t) = 3 is the solution.
Asymptotic behavior: as t → ∞, we have y → 3.
Remarks:
(1) An equation y'(t) = a(b − y) with a > 0 behaves similarly to Example 5, where b = 3
and a = 1/2. The solution approaches y = b as t → +∞.
(2) Now consider y'(t) = a(b − y), but with a < 0. This changes the sign of y'.
We now have:
– If y(0) = b, then y(t) = b;
– If y(0) > b, then y → +∞ as t → +∞;
– If y(0) < b, then y → −∞ as t → +∞.
Example 6. Let y'(t) = (y − 1)(y − 5). Then,
• If y = 1 or y = 5, then y' = 0;
• If y < 1, then y' > 0;
• If 1 < y < 5, then y' < 0;
• If y > 5, then y' > 0.
The directional field looks like:
What can we say about the solutions?
• If y(0) = 1, then y(t) = 1;
• If y(0) = 5, then y(t) = 5;
• If y(0) < 1, then y → 1 as t → +∞;
• If 1 < y(0) < 5, then y → 1 as t → +∞;
• If y(0) > 5, then y → +∞ as t → +∞.
Remark: If we have y'(t) = f(y), and for some y0 we have f(y0) = 0, then y(t) = y0 is a
solution.
Example 7: Given the plot of a directional field, which of the following ODEs could have
generated it?
(a) y'(t) = (y − 2)(y − 4)
(b) y'(t) = (y − 1)^2 (y − 3)
(c) y'(t) = (y − 1)(y − 3)^2
(d) y'(t) = −(y − 1)(y − 3)^2
We first check the constant solutions, y = 1 and y = 3; so (a) cannot be the answer. Then we
check the sign of y' on the intervals y < 1, 1 < y < 3, and y > 3 to match the
directional field. We find that (c) could be the equation.
2. Systems of Linear Differential Equations
2.1. Introduction to Systems of Linear Differential Equations
Right after the invention of calculus, differential equations replaced algebraic equations
(which in turn replaced counting) as the major tool in mathematically modelling
everything. A single differential equation (also called âscalar differential equationâ) is a
mathematical model of the time-evolution/spatial variation of one single substance (can
be population of a single species, amount of a single chemical, etc.); On the other hand, a
system of differential equations models the time-evolution of more than one quantities.
One example is Newton's second law:
m (d^2 x / dt^2) = F   (1)
which looks like a single equation but is actually a system, because both x and F have more
than one component. Traditionally, systems of ordinary differential equations arise from the
study of mechanics. Modern examples also abound, especially in biology, sociology,
economics, etc.
The general form of a system involving n unknown functions is
ẋ1 = f1(x1, . . . , xn)   (2)
ẋ2 = f2(x1, . . . , xn)   (3)
. . .
ẋn = fn(x1, . . . , xn)   (4)
where the evolution of n quantities is described. Such a system is usually referred to as
an n × n first order system.
Remark 1. When n = 2 or 3, x, y (respectively x, y, z) are often used instead of x1, . . . , xn.
When all f1, . . . , fn are linear in their variables x1, . . . , xn, the system is called linear;
otherwise it is called nonlinear. So an n × n first order linear system has the general form
ẋ1 = a11(t) x1 + · · · + a1n(t) xn + g1(t)   (5)
. . .
ẋn = an1(t) x1 + · · · + ann(t) xn + gn(t).   (6)
If furthermore all aij(t) are constants, that is,
ẋ1 = a11 x1 + · · · + a1n xn + g1(t)   (7)
. . .
ẋn = an1 x1 + · · · + ann xn + gn(t),   (8)
the system is said to have "constant coefficients". As usual, when g1(t) = · · · = gn(t) = 0,
the above linear systems are called "homogeneous".
Remark 2. In almost all practical cases, the first order system will be nonlinear. There is
no systematic way to solve all general nonlinear systems. In fact, even for an n × n first
order linear system, no simple formula exists (unless n = 1, which can be solved through
application of an appropriate integrating factor). Only linear systems with constant
coefficients enjoy good formulas for solutions.
Nevertheless, as we will see soon, one important way to understand a general nonlinear
system is to derive from it one or more related linear, constant-coefficient systems. Once
a good understanding is reached for these constant-coefficient systems, the behaviour of
the solutions to the original nonlinear problem can often be obtained.
Write out the general form of a system of first order ODEs, with x1, x2 as unknowns. Given
a y'' + b y' + c y = g(t),   y(0) = α, y'(0) = β,
we can do a variable change: let
x1 = y,   x2 = x1' = y'.
Then
x1' = x2,   x1(0) = α
x2' = y'' = (1/a)(g(t) − b x2 − c x1),   x2(0) = β.
Observation: any 2nd order equation can be rewritten as a system of 2 first order
equations.
Example 1. Given
y'' + 5y' − 10y = sin t,   y(0) = 2, y'(0) = 4,
rewrite it as a system of first order equations: let x1 = y and x2 = y' = x1'; then
x1' = x2,   x1(0) = 2
x2' = y'' = −5 x2 + 10 x1 + sin t,   x2(0) = 4.
We can do the same for any higher order equation. For an n-th order differential equation
y^(n) = F(t, y, y', · · · , y^(n−1)),
define the variable change
x1 = y, x2 = y', . . . , xn = y^(n−1).
We get
x1' = y' = x2
x2' = y'' = x3
. . .
x'_{n−1} = y^(n−1) = xn
x'_n = y^(n) = F(t, x1, x2, · · · , xn),
with corresponding source terms.
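The variable change of Example 1 above can be exercised numerically. Below is an illustrative Python sketch (the report itself uses Matlab); the RK4 integrator and all names are my own, not part of the report:

```python
import math

# Example 1 rewritten as a first-order system:
#   x1' = x2,                       x1(0) = 2
#   x2' = -5*x2 + 10*x1 + sin(t),   x2(0) = 4
def f(t, x):
    x1, x2 = x
    return [x2, -5 * x2 + 10 * x1 + math.sin(t)]

def rk4_system(f, t0, x0, t1, n=2000):
    """Classical 4th-order Runge-Kutta for vector-valued systems."""
    h = (t1 - t0) / n
    t, x = t0, list(x0)
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h / 2, [xi + h * ki / 2 for xi, ki in zip(x, k1)])
        k3 = f(t + h / 2, [xi + h * ki / 2 for xi, ki in zip(x, k2)])
        k4 = f(t + h, [xi + h * ki for xi, ki in zip(x, k3)])
        x = [xi + (h / 6) * (a + 2 * b + 2 * c + d)
             for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
        t += h
    return x

y1, dy1 = rk4_system(f, 0.0, [2.0, 4.0], 1.0)
print(y1, dy1)   # approximations of y(1) and y'(1) of the 2nd-order IVP
```

Because the state vector carries both y and y', one integration of the first-order system yields the solution and its derivative simultaneously.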
(Optional) Reversely, we can convert a 1st order system into a high order equation.
2.2. Review of Matrix Theory
A matrix of size m × n: A = (aij), 1 ≤ i ≤ m, 1 ≤ j ≤ n.
We consider only square matrices, i.e., m = n, in particular n = 2 and 3.
Basic operations: A, B are two square matrices of size n.
• Addition: A + B = (aij + bij)
• Scalar multiple: αA = (α · aij)
• Transpose: A^T switches aij with aji; (A^T)^T = A.
• Product: for A · B = C, the entry cij is the inner product of the i-th row of A and the
j-th column of B. Example:
[a b; c d] · [x y; u v] = [ax + bu  ay + bv; cx + du  cy + dv]
We can express a system of linear equations using the matrix product.
Example 1. The system
x1 − x2 + 3x3 = 4
2x1 + 5x3 = 0
x2 − x3 = 7
can be expressed as
[1 −1 3; 2 0 5; 0 1 −1] · [x1; x2; x3] = [4; 0; 7].
Some properties:
• Identity I: I = diag(1, 1, · · · , 1), AI = IA = A.
• Determinant det(A):
det [a b; c d] = ad − bc,
det [a b c; u v w; x y z] = avz + bwx + cuy − cvx − awy − buz.
• Inverse inv(A) = A^(−1): A^(−1) A = A A^(−1) = I.
• The following statements are all equivalent: (optional)
– (1) A is invertible;
– (2) A is non-singular;
– (3) det(A) ≠ 0;
– (4) the row vectors of A are linearly independent;
– (5) the column vectors of A are linearly independent;
– (6) all eigenvalues of A are non-zero.
2.3. Eigenvalues and Eigenvectors
Eigenvalues and eigenvectors of A (here for A of size 2 × 2):
λ: a scalar; v: a column vector, v ≠ 0.
If Av = λv, then (λ, v) is an (eigenvalue, eigenvector) pair of A.
They are also called an eigen-pair of A.
Remark: If v is an eigenvector, then αv for any α ≠ 0 is also an eigenvector, because
A(αv) = αAv = αλv = λ(αv).
How to find (λ, v):
Av − λv = 0,   (A − λI)v = 0,   det(A − λI) = 0.
We see that det(A − λI) is a polynomial of degree 2 (if A is 2 × 2) in λ; it is called
the characteristic polynomial of A. We need to find its roots.
Example 1. Eigenvalues can be complex numbers. Let
A = [2 −9; 4 2].
First we find the eigenvalues:
det(A − λI) = det [2 − λ  −9; 4  2 − λ] = (2 − λ)^2 + 36 = 0  ⇒  λ1,2 = 2 ± 6i.
We see that λ2 is the complex conjugate of λ1. The same happens to the eigenvectors,
i.e., v2 is the conjugate of v1, so we only need to find one. Take λ1 = 2 + 6i and compute
v = (v1, v2)^T:
(A − λ1 I) v = 0,   [−6i −9; 4 −6i] · [v1; v2] = 0,
−6i v1 − 9 v2 = 0; choose v1 = 1, so v2 = −(2/3)i,
so
v⃗1 = (1, −(2/3)i)^T,   v⃗2 = conj(v⃗1) = (1, (2/3)i)^T.
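The eigen-pair found above can be verified directly with complex arithmetic; a minimal Python check (illustrative only, not from the report):

```python
# Check the eigen-pair computed above for A = [[2, -9], [4, 2]]:
# lambda1 = 2 + 6i with eigenvector v = (1, -(2/3)i).
A = [[2, -9], [4, 2]]
lam = 2 + 6j
v = [1, -2j / 3]

# compute A v and lambda v by hand and compare entry by entry
Av = [A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1]]
lv = [lam * v[0], lam * v[1]]

print(all(abs(a - b) < 1e-12 for a, b in zip(Av, lv)))  # True: A v = lambda v
```

The conjugate pair (2 − 6i, (1, (2/3)i)) passes the same check, as the remark predicts.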
2.4. Stability of Linear Systems
For the 2 × 2 system
ẋ = A x
we see that x = (0, 0) is the only critical point if A is invertible. In a more general
setting, the system
ẋ = A x − b
has a critical point at x = A^(−1) b. The type and stability of the critical point are solely
determined by the eigenvalues of A:

Eigenvalues λ1,2                        Type of C.P.     Stability
Real, λ1 · λ2 < 0                       saddle point     unstable
Real, λ1 > 0, λ2 > 0, λ1 ≠ λ2           node (source)    unstable
Real, λ1 < 0, λ2 < 0, λ1 ≠ λ2           node (sink)      asymptotically stable
Real, λ1 = λ2 = λ                       improper node    asymptotically stable if λ < 0,
                                                         unstable if λ > 0
Complex, λ1,2 = ±iβ                     centre           stable but not asymptotically
Complex, λ1,2 = α ± iβ                  spiral point     asymptotically stable if α < 0,
                                                         unstable if α > 0
Example 1. We now consider again the prey–predator model, and set values for the
constants. Consider
x'(t) = x(10 − 5y)
y'(t) = y(−6 + x),
which has two critical points, (0, 0) and (6, 2). The Jacobian matrix is
J(x, y) = [10 − 5y  −5x; y  −6 + x].
At (0, 0) we have
J(0, 0) = [10 0; 0 −6],   λ1 = 10, λ2 = −6: saddle point, unstable.
At (6, 2) we have
J(6, 2) = [0 −30; 2 0],   λ1,2 = ±i√60: centre, stable but not asymptotically.
To see more detailed behavior of the model, we compute the two eigenvectors of J(0, 0)
and get v⃗1 = (1, 0) and v⃗2 = (0, 1). We sketch the trajectories of solutions in the (x, y)-
plane in the next plot, where the trajectories rotate counterclockwise around the centre.
One can interpret these as "circles of life". In particular, the big circles can be interpreted
as follows: when there are very few predators, the prey grows exponentially, very quickly. As
the population of the prey becomes very large, there is a lot of food for the predators, and this
triggers a sudden growth of the predator population. As the predators increase their numbers,
the prey population shrinks, until there is very little prey left. Then the predators starve, and
their population decays exponentially (dies out). The cycle then continues, periodically,
forever!
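The eigenvalue claims at the two critical points can be confirmed with the quadratic formula for 2 × 2 matrices; a small Python sketch (the helper name `eig2` is my own, for illustration):

```python
import cmath

def eig2(a, b, c, d):
    """Eigenvalues of the 2x2 matrix [[a, b], [c, d]] from its characteristic polynomial."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# Jacobian of the prey-predator model at the two critical points:
print(eig2(10, 0, 0, -6))   # eigenvalues 10 and -6: saddle at (0, 0)
print(eig2(0, -30, 2, 0))   # eigenvalues +/- i*sqrt(60): centre at (6, 2)
```

A saddle has real eigenvalues of opposite sign; a purely imaginary pair gives the centre, matching the table in Section 2.4.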
3. Existence and Uniqueness Theorem
3.1. Picard's Theorem
Here we concentrate on the solution of the first order IVP
y' = f(x, y),   y(x0) = y0.   (1)
We are interested in the following questions:
1. Under what conditions does there exist a solution to (1)?
2. Under what conditions does there exist a unique solution to (1)?
Comment: An ODE may have no solution, a unique solution, or infinitely many solutions.
For example, y'^2 + y^2 + 1 = 0, y(0) = 1 has no solution. The ODE y' = 2x, y(0) = 1 has
the unique solution y = 1 + x^2, whereas the ODE x y' = y − 1, y(0) = 1 has infinitely many
solutions y = 1 + αx, where α is any real number. (I only state the theorems; for proofs, one
may see "An Introduction to Ordinary Differential Equations" by E. A. Coddington.)
Theorem 1 (Existence theorem): Suppose that f(x, y) is a continuous function on some region
S = {(x, y) : |x − x0| ≤ a, |y − y0| ≤ b},   (a, b > 0).
Since f is continuous on a closed and bounded domain, it is necessarily bounded on S,
i.e., there exists K > 0 such that |f(x, y)| ≤ K for all (x, y) ∈ S. Then the IVP (1) has at
least one solution y = y(x) defined on the interval |x − x0| ≤ α, where
α = min{a, b/K}.
(Note that the solution exists possibly in a smaller interval)
Theorem 2 (Uniqueness theorem): Suppose that f and ∂f/∂y are continuous functions on S
(defined in the existence theorem). Hence, both f and ∂f/∂y are bounded on S, i.e.,
(a) |f(x, y)| ≤ K   and   (b) |∂f/∂y| ≤ L   for all (x, y) ∈ S.
Then the IVP (1) has at most one solution y = y(x) defined on the interval |x − x0| ≤ α,
where
α = min{a, b/K}.
Combining this with the existence theorem, the IVP (1) has a unique solution y = y(x)
defined on the interval |x − x0| ≤ α.
Comment: Condition (b) can be replaced by a weaker condition, known as the Lipschitz
condition: instead of continuity of ∂f/∂y, we require
|f(x, y1) − f(x, y2)| ≤ L|y1 − y2|   for all (x, yi) ∈ S.
If ∂f/∂y exists and is bounded, then it necessarily satisfies the Lipschitz condition. On the
other hand, a function f(x, y) may be Lipschitz continuous even though ∂f/∂y does not exist.
For example, f(x, y) = x^2 |y|, |x| ≤ 1, |y| ≤ 1, is Lipschitz continuous in y, but ∂f/∂y does
not exist at (x, 0).
*Note 1: The existence and uniqueness theorems stated above are local in nature, since the
interval |x − x0| ≤ α where the solution exists may be smaller than the original interval
|x − x0| ≤ a where f(x, y) is defined. However, in some cases this restriction can be
removed. Consider the linear equation
y' + p(x) y = g(x),   (2)
where p(x) and g(x) are defined and continuous on an interval a ≤ x ≤ b. Here f(x, y)
= −p(x) y + g(x). If L = max_{a ≤ x ≤ b} |p(x)|, then
|f(x, y1) − f(x, y2)| = |−p(x)(y1 − y2)| ≤ L|y1 − y2|.
Thus, f is Lipschitz continuous in y in the infinite vertical strip a ≤ x ≤ b, −∞ < y <
∞. In this case, the IVP (2) has a unique solution on the original interval a ≤ x ≤ b.
*Note 2: Though the theorems are stated in terms of an interior point x0, the point x0 could
also be a left/right end point.
Comment: The conditions of the existence and uniqueness theorems are sufficient but not
necessary. For example, consider
y' = √y + 1,   y(0) = 0,   x ∈ [0, 1].
Clearly f does not satisfy the Lipschitz condition near the origin, but the IVP still has a
unique solution. [Hint: let y1 and y2 be two solutions and consider
z(x) = (y1(x)^(1/2) − y2(x)^(1/2))^2.]
Comment: The existence and uniqueness theorems are also valid for certain systems of first
order equations. These theorems are likewise applicable to certain higher order ODEs, since a
higher order ODE can be reduced to a system of first order ODEs.
Example 1. Consider the ODE
y' = 1 + y^2,   y(0) = 0,
on the rectangle
S = {(x, y) : |x| ≤ 100, |y| ≤ 1}.
Clearly f and ∂f/∂y are continuous on S. Hence there exists a unique solution in a
neighbourhood of (0, 0). Now f = 1 + y^2 and |f| ≤ 2 on S, so α = min{100, 1/2} = 1/2.
Hence the theorems guarantee existence of a unique solution on |x| ≤ 1/2, which
is much smaller than the original interval |x| ≤ 100. Since the above equation is separable,
we can solve it exactly and find y(x) = tan(x). This solution is valid only on (−π/2, π/2),
which is also much smaller than [−100, 100], but nevertheless bigger than the interval
predicted by the existence and uniqueness theorems.
3.2. Picard Iteration for IVP
This method gives approximate solutions to the IVP (1). Note that the IVP (1) is equivalent to
the integral equation
y(x) = y0 + ∫_{x0}^{x} f(t, y(t)) dt.   (3)
A rough approximation to the solution y(x) is given by the function y0(x) = y0, which is
simply a horizontal line through (x0, y0) (don't confuse the function y0(x) with the constant
y0). We insert this into the RHS of (3) in order to obtain a (perhaps) better approximate
solution, say y1(x). Thus,
y1(x) = y0 + ∫_{x0}^{x} f(t, y0(t)) dt = y0 + ∫_{x0}^{x} f(t, y0) dt.
At the n-th stage we find
yn(x) = y0 + ∫_{x0}^{x} f(t, y_{n−1}(t)) dt.
Theorem 3: If the function f(x, y) satisfies the hypotheses of the existence and uniqueness
theorem for the IVP (1), then the successive approximations yn(x) converge to the unique
solution y(x) of the IVP (1).
Example 2. Apply Picard iteration to the IVP
y' = 2x(1 − y),   y(0) = 2.
Solution: Here y0(x) = 2. Now
y1(x) = 2 + ∫_0^x 2t(1 − 2) dt = 2 − x^2,
y2(x) = 2 + ∫_0^x 2t(t^2 − 1) dt = 2 − x^2 + x^4/2,
y3(x) = 2 + ∫_0^x 2t(t^2 − t^4/2 − 1) dt = 2 − x^2 + x^4/2 − x^6/3!.
By induction, it can be shown that
yn(x) = 2 − x^2 + x^4/2 − x^6/3! + · · · + (−1)^n x^(2n)/n!.
Hence yn(x) → 1 + e^(−x^2) as n → ∞. Now y(x) = 1 + e^(−x^2) is the exact solution of the
given IVP. Thus, the Picard iterates converge to the unique solution of the given IVP.
Comment: Picard iteration has more theoretical value than practical value. It is used in the
proof of the existence and uniqueness theorem. On the other hand, finding approximate
solutions by this method is almost impractical for a complicated function f(x, y).
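The iterates above can also be generated numerically by evaluating the integral in (3) with a quadrature rule; an illustrative pure-Python sketch (grid size and all names are my own choices):

```python
# Numerical Picard iteration for y' = 2x(1 - y), y(0) = 2 (Example 2).
# Each iterate y_n is stored on a grid; the integral in (3) is evaluated
# cumulatively with the trapezoid rule.
N, X = 1000, 1.0
xs = [i * X / N for i in range(N + 1)]

def picard_step(y):
    """y_{n+1}(x) = 2 + int_0^x 2t (1 - y_n(t)) dt, via the trapezoid rule."""
    f = [2 * t * (1 - yt) for t, yt in zip(xs, y)]
    out, acc = [2.0], 0.0
    for i in range(1, N + 1):
        acc += (f[i - 1] + f[i]) * (xs[i] - xs[i - 1]) / 2
        out.append(2.0 + acc)
    return out

y = [2.0] * (N + 1)            # y_0(x) = 2
for _ in range(3):
    y = picard_step(y)

# third iterate should match 2 - x^2 + x^4/2 - x^6/3! up to quadrature error
x = 1.0
y3_exact = 2 - x**2 + x**4 / 2 - x**6 / 6
print(abs(y[-1] - y3_exact) < 1e-4)  # True
```

Repeating the step drives the grid values toward 1 + e^(−x^2), exactly as Theorem 3 promises.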
3.3. Cauchy–Peano's Theorem
The Picard existence theorem provides a locally unique solution to the differential equation
y' = f(y, t),   y(t0) = y0,
under the assumption that f is continuous and satisfies a Lipschitz condition in its first
variable. The Peano existence theorem has weaker hypotheses than the Picard existence
theorem: f is only assumed to be continuous. The conclusion is also weaker: there is a
solution, but it may not be unique.
Theorem: Let D be an open subset of R × R, let
f : D → R
be a continuous function, and let
y'(x) = f(x, y(x))
be a continuous, explicit first-order differential equation defined on D. Then every initial
value problem
y(x0) = y0
for f with (x0, y0) ∈ D has a local solution
z : I → R,
where I is a neighbourhood of x0 in R, such that z'(x) = f(x, z(x)) for all x ∈ I.
The solution need not be unique: one and the same initial value (x0, y0) may give rise to
many different solutions z.
4. Linearization of Non-Linear Models
4.1. Theory
Most differential equations and systems of differential equations encountered in practice are
nonlinear, and most real-life problems are based on nonlinear systems. Since we are often
unable to solve a nonlinear differential equation exactly, we linearize the system to obtain a
linear equation that can be solved easily. So our first concern here is to linearize the
nonlinear system. Once the system is linearized, we can readily apply the numerous linear
analysis methods to study the nature of the system.
Consider the nonlinear system
ż(t) = f(t, z(t)),   z(t0) = z0,
where the state z(t) is an n-dimensional vector and f is a nonlinear function.
Suppose the system has a solution z̄(t, z0, t0) corresponding to the particular initial condition
z(t0) = z0. If the initial data z0 is slightly changed, then it is expected that the solution
z̄(t, z0, t0) will also change only slightly. If f(t, z) is continuously differentiable with respect
to z, then we can expand f(t, z) in a Taylor series about the solution z̄(t, z0, t0), i.e.,
f(t, z̄(t, z0, t0) + δz(t)) = f(t, z̄(t, z0, t0)) + A(t) δz(t) + higher order terms,
where
A(t) = (aij(t))_{n×n} = (∂fi/∂zj),
the Jacobian evaluated along z̄.
Note: z̄(t, z0, t0) + δz(t) is a solution of the given system. Hence
z̄̇(t, z0, t0) + δż(t) = f(t, z̄(t, z0, t0) + δz(t)) = f(t, z̄(t, z0, t0)) + A(t) δz(t) + higher order terms.
Since z̄(t, z0, t0) is itself a solution, z̄̇(t, z0, t0) = f(t, z̄(t, z0, t0)), and subtracting gives
δż(t) = A(t) δz(t) + higher order terms.   (1.9)
Hence the equation
ż(t) = A(t) z(t)   (1.10)
is called the linearized system of (1.6) about the solution z̄(t, z0, t0).
Remark (1.3): If we do not neglect the higher order terms in (1.8), then we are left with a
semi-linear system
ż(t) = A(t) z(t) + g(t, z(t)),   (1.11)
where g(t, z(t)) is the sum of the higher order terms.
Remark (1.4): If the original system (1.6) is time-invariant, that is,
f(t, z(t)) = f(z(t)),   (1.12)
then the corresponding linearized system will be of the form
ż(t) = A z(t).   (1.13)
Systems of the form (1.13) are known as autonomous homogeneous linear systems.
4.2. Example
The state equations of an inverted pendulum are
ż1(t) = z2(t)
ż2(t) = (g/l) sin(z1(t)).
What are the linearized equations about the equilibrium solution z1(t) = z2(t) = 0?
First write the equations in state representation form as
ż(t) = (ż1(t), ż2(t))^T = f(t, z1(t), z2(t)) = (f1(t), f2(t))^T,
where
(f1(t), f2(t))^T = (z2(t), (g/l) sin(z1(t)))^T.
The partial derivatives are
∂f1/∂z1 = 0,   ∂f1/∂z2 = 1,   ∂f2/∂z1 = (g/l) cos(z1(t)),   ∂f2/∂z2 = 0.
Hence the required linearized form is
ż(t) = A(t) z(t),
where
A(t) = [∂f1/∂z1  ∂f1/∂z2; ∂f2/∂z1  ∂f2/∂z2]   and   ż(t) = (ż1(t), ż2(t))^T.
Putting all the partial derivatives into the matrix, we get
(ż1(t), ż2(t))^T = [0  1; (g/l) cos(z1(t))  0] (z1(t), z2(t))^T.
Evaluating at the equilibrium z1 = 0, where cos(z1) = 1, gives
A = [0  1; g/l  0].
Hence our required linearized form of the nonlinear system is
(ż1(t), ż2(t))^T = [0  1; g/l  0] (z1(t), z2(t))^T.
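A quick numerical sanity check of the linearization: near the equilibrium, the nonlinear and linearized right-hand sides agree to third order in z1. The values of g and l below are assumed for illustration only (the report leaves them symbolic):

```python
import math

g_over_l = 9.81 / 1.0   # assumed g = 9.81, l = 1.0 (not given in the report)

def f_nonlinear(z):
    """Right-hand side of the inverted pendulum state equations."""
    z1, z2 = z
    return [z2, g_over_l * math.sin(z1)]

def f_linearized(z):
    """Right-hand side of z' = A z with A = [[0, 1], [g/l, 0]]."""
    z1, z2 = z
    return [z2, g_over_l * z1]

# near the equilibrium the two right-hand sides agree, since sin(z1) ~ z1
z = [0.001, 0.0]
err = abs(f_nonlinear(z)[1] - f_linearized(z)[1])
print(err < 1e-6)  # True: the discrepancy is of order z1^3
```

This is exactly the "higher order terms" of (1.9): the error shrinks like z1^3/6 as the state approaches the equilibrium.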
5. Controllability of Linear Systems
5.1. Motivation behind Controllability
Control theory is basically concerned with influencing the behaviour of a dynamical system
so as to achieve a desired goal. Many physical systems are controlled by the manipulation of
their inputs based on the simultaneous observation of their outputs.
Example: an airplane is controlled by the pilot's actions, based on instrument readings and
visual observation.
The control problem is to determine, on the basis of the available data, the input necessary to
achieve a given goal.
Mathematical control theory exhibits a wide variety of techniques that go beyond those
associated with traditional applied mathematics.
In the simplest case, consider a vibrating system consisting of a single mass on a linear
spring. If the displacement from equilibrium is x at time t, then Newton's law asserts that
the acceleration satisfies
d^2 x / dt^2 = −x.
Now we can introduce a control force u, depending on x and dx/dt, so that every solution of
d^2 x / dt^2 = −x + u
returns to rest at x = 0.
This is the problem of, and the motivation behind, controllability.
5.2. Kalman's Criterion
Definition: Consider the linear system ẋ = Ax + Bu, where x ∈ R^n is the state vector and
u ∈ R^m the input vector; A is of size n × n and B of size n × m.
The pair (A, B) is controllable if, given a duration T > 0 and two arbitrary points x0, xf ∈
R^n, there exists a piecewise continuous function t → ū(t) from [0, T] to R^m such that the
integral curve x̄(t) generated by ū with x̄(0) = x0 satisfies x̄(T) = xf.
In other words,
e^{AT} x0 + ∫_0^T e^{A(T−t)} B ū(t) dt = xf.
This property depends only on A and B:
Theorem (Kalman): A necessary and sufficient condition for (A, B) to be controllable is
rank C = rank [B | AB | · · · | A^{n−1} B] = n.
C is called Kalman's controllability matrix (of size n × nm).
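Kalman's rank condition is straightforward to check numerically; a sketch in Python with numpy (the double-integrator example and all names below are mine, not from the report):

```python
import numpy as np

def controllability_matrix(A, B):
    """Kalman's controllability matrix C = [B | AB | ... | A^(n-1) B]."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A, B):
    C = controllability_matrix(A, B)
    return np.linalg.matrix_rank(C) == np.atleast_2d(A).shape[0]

# toy double integrator: x1' = x2, x2' = u
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(is_controllable(A, B))  # True: C = [[0, 1], [1, 0]] has rank 2
```

The rank is computed via the SVD inside `matrix_rank`, which is numerically safer than counting pivots of the (often badly scaled) block matrix C.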
5.3. The Matrix Exponential (Transition Matrix)
For each n × n complex matrix A, define the exponential of A to be the matrix
e^A = Σ_{k=0}^∞ A^k/k! = I + A + (1/2!) A^2 + (1/3!) A^3 + · · ·   (1)
It is not difficult to show that this sum converges for all complex matrices A of any finite
dimension.
If A is the 1 × 1 matrix [t], then e^A = [e^t], by the Maclaurin series formula for the
function y = e^t. More generally, if D is a diagonal matrix with diagonal entries
d1, d2, . . . , dn, then
e^D = I + D + (1/2!) D^2 + · · · = diag(e^{d1}, e^{d2}, . . . , e^{dn}).
The situation is more complicated for matrices that are not diagonal. However, if a matrix A
happens to be diagonalizable, there is a simple algorithm for computing e^A, a consequence
of the following lemma.
Lemma 1: Let A and P be complex n × n matrices, and suppose that P is invertible. Then
e^{P^(-1) A P} = P^(-1) e^A P.
Proof: Recall that, for all integers k ≥ 0, we have (P^(-1) A P)^k = P^(-1) A^k P. The
definition (1) then yields
e^{P^(-1) A P} = I + P^(-1) A P + (P^(-1) A P)^2 / 2! + · · ·
             = I + P^(-1) A P + (P^(-1) A^2 P) / 2! + · · ·
             = P^(-1) (I + A + A^2/2! + · · ·) P = P^(-1) e^A P.
If a matrix A is diagonalizable, then there exists an invertible P such that A = P D P^(-1),
where D is a diagonal matrix of eigenvalues of A, and P is a matrix having eigenvectors of A
as its columns. In this case, e^A = P e^D P^(-1).
Example: Let A denote the matrix
A = [5 1; −2 2].
The reader can easily verify that 4 and 3 are eigenvalues of A, with corresponding
eigenvectors w1 = (1, −1)^T and w2 = (1, −2)^T. It follows that
A = P D P^(-1) = [1 1; −1 −2] [4 0; 0 3] [2 1; −1 −1],
so that
e^A = [1 1; −1 −2] [e^4 0; 0 e^3] [2 1; −1 −1] = [2e^4 − e^3   e^4 − e^3; 2e^3 − 2e^4   2e^3 − e^4].
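The worked example can be confirmed numerically; a short numpy check of e^A = P e^D P^(-1) (illustrative only, not from the report):

```python
import numpy as np

P = np.array([[1.0, 1.0], [-1.0, -2.0]])
D = np.diag([4.0, 3.0])
Pinv = np.linalg.inv(P)          # equals [[2, 1], [-1, -1]]

# e^A = P e^D P^(-1) for diagonalizable A = P D P^(-1);
# e^D is just the exponential of the diagonal entries
eA = P @ np.diag(np.exp(np.diag(D))) @ Pinv

e4, e3 = np.exp(4), np.exp(3)
expected = np.array([[2 * e4 - e3, e4 - e3],
                     [2 * e3 - 2 * e4, 2 * e3 - e4]])
print(np.allclose(eA, expected))  # True
```

The entry-by-entry match confirms the closed form stated above.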
5.3.1. Matrix Exponential in Matlab
Y = expm(X) computes the matrix exponential of X. Although it is not computed this way,
if X has a full set of eigenvectors V with corresponding eigenvalues D, then [V, D] =
eig(X) and
expm(X) = V*diag(exp(diag(D)))/V
A = [1 1 0; 0 0 2; 0 0 -1];
expm(A)
ans =
    2.7183    1.7183    1.0862
         0    1.0000    1.2642
         0         0    0.3679
31. 31
5.4. Solution of the Controlled System using Transition Matrix
Consider the ð-dimensional linear control system:
ð¥Ì= A(t)x + B(t)u; x(ð¡0) = ð¡0
Let â (ð¡, ð¡0) be the transition matrix of the homogeneous system ð¥Ì = ðŽ(ð¡)x. The solution of
the control system is given by (using variation of parameter method)
ð¥(ð¡) = â (ð¡, ð¡0) ð¥0 + â« â ( ð¡, ð) ðµ(
ð¡
ð¡0
ð)ð¢(ð)ðð
The system is controllable iff for arbitrary initial and final states ð¥0,ð¥1 there exists a control
function ð¢ such that
ð¥1 = â (ð¡1, ð¡0 ) ð¥0 + â« â ( ð¡1, ð) ðµ(
ð¡
ð¡0
ð)ð¢(ð)ðð
Controllability Grammian for the linear system and is given by
ð(ð¡ ð, ð¡1) = â« â ( ð¡1, ð) ðµ(
ð¡1
ð¡0
ð)ðµâ
(ð)â â( ð¡1, ð) ðð
Theorem: The linear control system is controllable iff ð(ð¡ ð, ð¡1) is invertible and the
steering control that move ð¥0 to ð¥1 is given by
ð¢(ð¡) = ðµâ
(ð)â â( ð¡1, ð¡) ðâ1
(ð¡0, ð¡1)[ð¥1 â â (ð¡0, ð¡1)ð¥0]
Proof: The controllability part was already proved earlier. We now show that the steering
control defined above actually transfers the states. The controlled state is given by

x(t) = Φ(t, t0) x0 + ∫_{t0}^{t} Φ(t, s) B(s) u(s) ds

Substituting the steering control,

x(t) = Φ(t, t0) x0 + ∫_{t0}^{t} Φ(t, s) B(s) B*(s) Φ*(t1, s) W^(-1)(t0, t1) [x1 - Φ(t1, t0) x0] ds

Setting t = t1, the integral becomes the Grammian, so

x(t1) = Φ(t1, t0) x0 + W(t0, t1) W^(-1)(t0, t1) [x1 - Φ(t1, t0) x0]

x(t1) = x1
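To see the steering-control formula in action, the following Python sketch (using NumPy/SciPy, with a double-integrator system chosen here purely for illustration, not the aircraft model) builds the Grammian by numerical quadrature and checks that the resulting control actually drives x0 to x1:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative system (a double integrator, assumed for this sketch):
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
x0 = np.array([1.0, 0.0])   # initial state
x1 = np.array([0.0, 1.0])   # desired final state
T, N = 1.0, 2000
s = np.linspace(0.0, T, N + 1)
ds = T / N

def trapz(ys, ds):
    # trapezoidal rule over equally spaced samples (first axis)
    return ds * (ys[0] / 2 + ys[1:-1].sum(axis=0) + ys[-1] / 2)

# Grammian W(0, T) = ∫ Φ(T,s) B B* Φ*(T,s) ds, with Φ(T,s) = e^{A(T-s)}
PhiB = np.array([expm(A * (T - si)) @ B for si in s])   # shape (N+1, 2, 1)
W = trapz(np.array([pb @ pb.T for pb in PhiB]), ds)

# Steering control u(s) = B* Φ*(T,s) W^{-1} [x1 - Φ(T,0) x0]
c = np.linalg.solve(W, x1 - expm(A * T) @ x0)
u = np.array([pb.T @ c for pb in PhiB])                 # shape (N+1, 1)

# Resulting state x(T) = Φ(T,0) x0 + ∫ Φ(T,s) B u(s) ds
xT = expm(A * T) @ x0 + trapz(np.array([pb @ ui for pb, ui in zip(PhiB, u)]), ds)
print(np.round(xT, 3))  # approximately x1
```

Because both integrals use the same quadrature weights, the cancellation W W^(-1) holds essentially exactly and x(T) matches x1 to machine precision.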
5.5. Kalman Condition Revisited (Proof)
System: x' = Ax + Bu

Solution:

x(t) = e^(At) x0 + ∫_0^t e^(A(t-s)) B u(s) ds

Assuming x(t1) = 0,

0 = e^(A t1) x0 + ∫_0^{t1} e^(A(t1-s)) B u(s) ds

Multiplying through by e^(-A t1),

x0 = - ∫_0^{t1} e^(-As) B u(s) ds

By the Cayley-Hamilton theorem: for an n × n matrix A, let p(λ) = det(λI - A) be the
characteristic polynomial of A; then p(A) = 0. It follows that e^(-As) can be written as
a finite sum

e^(-As) = Σ_{k=0}^{n-1} γ_k(s) A^k

so that

x0 = - Σ_{k=0}^{n-1} A^k B ∫_0^{t1} γ_k(s) u(s) ds = - Σ_{k=0}^{n-1} A^k B β_k,
where β_k = ∫_0^{t1} γ_k(s) u(s) ds

= - [ B  AB  ...  A^(n-1)B ] [ β0  β1  ...  β_(n-1) ]^T

Since this must be solvable for an arbitrary initial state x0, the column space of
C = [ B  AB  ...  A^(n-1)B ] must be all of R^n, i.e. the rank of C must be n.
Then the system is controllable.
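The rank condition is easy to check numerically. The following Python sketch (using NumPy) applies it to the longitudinal aircraft matrices used in Section 6.2:

```python
import numpy as np

# Longitudinal aircraft matrices from Section 6.2 of this report
A = np.array([[  -1.397,   1.0,    0.0,  0.0],
              [  -5.47,   -3.27,   0.0,  0.0],
              [   0.0,     1.0,    0.0,  0.0],
              [-400.0,     0.0,  400.0,  0.0]])
B = np.array([[-0.124],
              [-13.2],
              [0.0],
              [0.0]])

n = A.shape[0]
# Controllability matrix C = [B, AB, A^2 B, ..., A^(n-1) B]
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
rank = np.linalg.matrix_rank(C)
print(rank == n)  # True: the system is controllable
```

This reproduces the rank-4 result obtained by the MATLAB simulation below.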
6. Controllability of Aircraft
6.1. Introduction
In the following section I examine the application of state space modelling and control theory
to aircraft problems. Controllability is concerned with whether the states of a dynamic system
can be affected by the control input. The mathematical conditions for controllability are easy
to compute but somewhat abstract. An alternative way of looking at controllability is to
transform the state equation to a canonical form. If the state equations are transformed so
that the new plant matrix is diagonal, the equations governing the system are said to be
decoupled. The control matrix can then be examined using Kalman's criterion and the
controllability of the system can be checked.
6.2. Simulation in Matlab
Here I have considered the dynamics of a STOL transport aircraft. From the theoretical
modelling of the longitudinal motion of the aircraft I obtained the following control
matrices:

A = [   -1.3970    1.0000    0         0
        -5.4700   -3.2700    0         0
         0         1.0000    0         0
      -400.0000    0       400.0000    0 ]

B = [   -0.1240
       -13.2000
         0
         0 ]
6.2.1 Code for Simulation
clear
clc
disp('Linear System dot(x) = Ax + Bu where A and B are given as follows:')
A = input('The matrix A');
B = input('The matrix B');
n = input('The order of matrix');
pause
disp('Kalman Test: The controllability matrix of the system is:')
C = B;
Q = B;
for i = 1:n-1
    C = (A^i)*B          % i-th block A^i * B
    Q = [Q C]            % accumulate [B, AB, ..., A^(n-1)B]
end
pause
disp('The rank of the controllability matrix is:')
r = rank(Q)
pause
if r ~= n
    disp('The system is not controllable');
else
    disp('The system is controllable')
    t = sym('t')
    s = sym('s')
    disp('The initial state is:')
    x0 = input('Enter the initial state')
    pause
    disp('The final state is:')
    x1 = input('Enter the final state')
    pause
    disp('We want to reach the final state in time')
    T = input('Enter the final time')
    pause
    disp('The transition matrix is:')
    phi = expm(A*t)
    disp('The controllability Gramian is:')
    W = int((expm(A*(T-t))*B*B'*expm(A'*(T-t))), t, 0, T)
    W = subs(W);
    pause
    disp('The controller is taken as:')
    u = B'*expm(A'*(T-t))*inv(W)*(x1 - expm(A*T)*x0)
    U = subs(u, s);       % rewrite the controller in the dummy variable s
    disp('The solution of the system using the above controller is:')
    E = B*U;
    x = (expm(A*t))*x0 + int(expm(A*(t-s))*E, s, 0, t)
    disp('The Graph of the solution is ')
    z = linspace(0, T, 10);
    for i = 1:n
        plot(z, subs(x(i), z))
        hold on
    end
    xlabel('Time')
    ylabel('x(t)')
end
6.2.2. Output
Linear System dot(x) = Ax + Bu where A and B are given as follows:
The matrix A [-1.397 1 0 0; -5.47 -3.27 0 0; 0 1 0 0; -400 0 400 0]
The matrix B [-0.124; -13.2; 0; 0]
The order of matrix 4
Kalman Test: The controllability matrix of the system is:

C =
  -13.0268
   43.8423
  -13.2000
   49.6000

Q =
   -0.1240  -13.0268
  -13.2000   43.8423
         0  -13.2000
         0   49.6000

C =
   62.0407
  -72.1078
   43.8423
  -69.2912

Q =
   -0.1240  -13.0268   62.0407
  -13.2000   43.8423  -72.1078
         0  -13.2000   43.8423
         0   49.6000  -69.2912

C =
  1.0e+03 *
   -0.1588
   -0.1036
   -0.0721
   -7.2794

Q =
  1.0e+03 *
   -0.0001   -0.0130    0.0620   -0.1588
   -0.0132    0.0438   -0.0721   -0.1036
         0   -0.0132    0.0438   -0.0721
         0    0.0496   -0.0693   -7.2794

The rank of the controllability matrix is:

r =
     4

The system is controllable

t =
t

s =
s

The initial state is:
Enter the initial state [1; 2; 3; 4]

x0 =
     1
     2
     3
     4

The final state is:
Enter the final state [4; 3; 2; 1]

x1 =
     4
     3
     2
     1

We want to reach the final state in time
Enter the final time 1

T =
     1
P.S. Since the complete symbolic output runs beyond the page bounds, it is difficult to
reproduce it here in full.
The graph of the solution is as follows.
REFERENCES
1. Wen Shen, Introduction to Ordinary and Partial Differential Equations, Spring 2013, pp. 1-8.
2. Wen Shen, Introduction to Ordinary and Partial Differential Equations, Spring 2013, pp. 88-95.
3. S. Ghorai, Picard's Existence and Uniqueness Theorem, Picard's Iteration, pp. 1-4.
4. https://en.wikipedia.org/wiki/Peano_existence_theorem
5. Brian L. Stevens and Frank L. Lewis, Aircraft Control and Simulation, pp. 143-201.
6. Michael Cook, Flight Dynamics Principles, Elsevier, pp. 123-145.
7. Wayne Durham, Aircraft Flight Dynamics and Control, Wiley Aerospace Series, pp. 183-221.