These are notes I prepared for a course on control that I taught at King Abdulaziz University.
The notes did not go through the intended revision after the end of the semester due to my schedule, so they remain a rough first draft.
They are largely (but not completely) inspired by a control course taught by Dr. Gregory Shaver at the ME dept. at Purdue.
Much of the information was gleaned through a variety of textbooks/papers/experiences, too many to mention here.
Although a reasonable attempt has been made to ensure all the facts are correct, these notes are as-is with no guarantee of accuracy. Use at your own risk.
Chapter 1
Introduction to feedback control
We introduce feedback in terms of what will be discussed in these notes.
1.1 Basic notion of feedback control
A block diagram illustration of an ideal standard feedback control system is shown
in Fig. 1.1.
Figure 1.1: Block diagram illustration of ideal feedback control system.
The idea of feedback is straightforward: we monitor the actual output of the system
or plant, then generate an error signal based on the difference between the desired
output and the actual output of the dynamic system and finally use this error term
in a compensator which gives a control signal that can act as an input to the actual
system.
The compensator is a general term: it can be anything, such as an algorithm, a function or just a simple gain, i.e., a constant value multiplying the error. The
goal of control design is to determine an appropriate compensator to achieve performance objectives. The plant in the system is the dynamic system that is being
controlled. We model this system via mathematical tools and use the model to design
the compensator. A mathematical model is always an approximation of the real
system, since we cannot effectively model everything present in the real world, nor can we take into account variations in parameters such as geometries, forces
and operating conditions. Thus, there always exists some uncertainty in the plant
model, which directly affects compensator design.
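As a minimal sketch of the loop just described, the following simulates a hypothetical first-order plant under a simple-gain compensator; the plant model (dy/dt = −y + u), the gain, and the time step are illustrative assumptions, not a system from these notes.

```python
# Sketch of the basic feedback loop: monitor the output, form the
# error, and feed it through a compensator (here a simple gain).
# Plant dy/dt = -y + u, gain Kp, and step size are illustrative.

def simulate(Kp, r=1.0, dt=0.001, t_end=5.0):
    y = 0.0                     # actual output of the plant
    for _ in range(int(t_end / dt)):
        e = r - y               # error: desired output minus actual output
        u = Kp * e              # compensator: constant gain times error
        y += dt * (-y + u)      # forward-Euler step of the plant dynamics
    return y

print(simulate(Kp=10.0))        # settles near r, with a small steady-state error
```

Even this crude sketch hints at the trade-offs discussed below: for this plant a larger gain shrinks the steady-state error (here r/(1 + Kp)) but makes the control action more aggressive.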
Feedback goes beyond mathematical functions and abstractions. It is a fact of
everyday life. An obvious example of human feedback is when you adjust the
water temperature in the shower. The hot-cold knob is adjusted until a comfortable
position is reached. In this case, the skin monitors the output of the system, i.e.,
the temperature of the water. It then sends this signal to the brain that judges the
difference between the desired water temperature and actual water temperature.
This difference is then used by the brain to generate a control signal that commands
the hand to adjust the knobs to reduce this difference. If the water is cold, you
shiver, the brain determines that you need to be warmer, it sends a signal to
your hand to adjust the knob for warmer water until you stop shivering. Control
engineering is then simply the attempt to mimic the elegant, efficient and effective
control of processes apparent in nature.
We mentioned that the goal of control design is to determine an appropriate compensator to achieve performance objectives. Some common performance objectives
are listed below:
• Stability of the closed-loop system.
• Dynamic response to be satisfactory, e.g.,
– Settling time (time of response).
– Percent overshoot.
• Tracking/steady-state error small.
• Remain within hardware limits, e.g., forces and voltages.
• Eliminate impact of disturbances such as wind gusts in UAVs and inclines
in a car cruise control system.
• Eliminate impact of measurement noise.
• Eliminate impact of model uncertainty.
The above wish list is not usually completely achievable due to the existence of fundamental limitations in the system; there is no such thing as a free lunch. However, we can, via feedback control and other strategies, trade off these fundamental limitations against each other to achieve the best
possible scenario based on requirements. This is the essence of control engineering.
An example of this trade-off arises when we try to have aggressive control, i.e., small
response time, and also eliminate the impact of model uncertainty. The bandwidth
of the system is a measure of the aggressiveness of the control action and/or
the dynamic response. A higher bandwidth implies a higher frequency at which
we have good response and therefore more aggressive control. But at the same
time, in most models, the magnitude of the uncertainty becomes large at higher
frequencies. Hence we limit the bandwidth or frequency of response depending
on the frequency at which uncertainty begins to become significant. We want low
magnitude of response at frequencies where model uncertainty magnitude becomes
high to reduce the system sensitivity to this uncertainty. This is a situation where,
since we cannot avoid model uncertainties, we have to limit the aggressiveness of
the response to avoid the uncertainty becoming significant.
Actual control systems
We often treat the control and feedback of systems as an idealized scenario to aid
in the design process. It is important for the control engineer to appreciate the
actual complex nature of control systems. It is instructive to do this through a
simple example.
Recently, there has been a growing research interest in applications of piezoelectric materials. These are very sensitive materials that produce a voltage signal when they sense a force. The converse phenomenon is also true, i.e., a
force/displacement is produced in the material when voltage is applied to it. This
characteristic is considered particularly advantageous and is being tested in a wide
variety of applications. Among these is their use as a variable valve actuator in
fuel injector systems.
Figure 1.2: Schematic of a proposed piezoelectric variable valve injector system.
Fuel injection for an internal combustion engine should ideally be tightly controlled
to produce an air-fuel mixture that reduces pollutants and increases efficiency while
at the same time mitigating undesirable phenomena such as combustion instability.
This is possible through a variable valve that regulates the amount of fuel flowing
through the cylinder nozzle.
One type of assembly of such a variable valve available in the literature is shown
in Fig. 1.2.¹ The piezo stack at the top consists of a “stacked” combination
of piezoelectric materials that produces the desired displacement characteristics
when voltage is applied. The voltage is applied by the piezo stack driver. When
the piezo stack expands (due to voltage via the piezo stack driver), it moves the
top link down which in turn pushes the bottom link down. This displaces fluid
out of the needle upper volume into the injector body and we would expect the
needle lower volume to be reduced as well. However, assuming incompressibility,
this cannot happen because the needle lower volume does not have an opening
into the injector body. Instead, to compensate for the loss of volume induced by
the bottom link moving down, the needle is pushed up by the fluid. This opens the
nozzle. In a similar fashion, with appropriate voltage, the nozzle is closed as well.
The piezo stack, via mechanical means, controls the nozzle state and it is hence
called a variable valve actuator (VVA). The chief benefits of using piezoelectric
materials here are their quick response (high bandwidth) and the fact that incorporating a piezoelectric actuator helps eliminate many mechanical moving parts, which in
turn reduces cost and increases reliability. The disadvantage is that the maximum
expansion achievable by piezoelectric materials is usually low. This is remedied
by using the stair-like “ledge” shown in Fig. 1.2 which acts as a displacement
multiplier. A larger ledge causes a larger loss of needle lower volume when the
bottom link moves down which makes the needle displace more to compensate for
lost volume.
The idealized block diagram that is used in control design is shown in Fig. 1.3. It
is supposed to be as simple as possible in order to capture only the most essential
characteristics of the system. The actual valve position and the desired valve
position together generate an error signal and the compensator is used to determine
the appropriate voltage signal that will eventually reduce this error to zero. Here
the VVA is actually a five-state control-oriented model, i.e., it needs five variables to be
fully described in the time domain. These states include the position of the actuator,
the velocity of the actuator and the pressure. A full model description of the VVA would
entail many more states that do not have as much effect on the system response
but greatly affect the complexity of the problem. Such models are sometimes called
simulation models because we use them to simulate the response of the system
rather than use them to determine an appropriate compensator (control design).
Figure 1.3: Block diagram of an ideal closed-loop variable valve actuator system.
¹ Chris Satkoski, Gregory M. Shaver, Ranjit More, Peter Meckl, and Douglas Memering, “Dynamic Modeling of a Piezo-electric Actuated Fuel Injector,” IFAC Workshop on Engine and Powertrain Control Simulation and Modeling, 11/30-12/2/2009, IFP, Rueil-Malmaison, France.

The actual, or close to actual, block diagram is shown in Fig. 1.4. Here we do not
ignore any known elements of our system. The valve position is measured by the linear variable displacement transducer (LVDT) in analog form. This is
then converted to a digital signal by an analog to digital converter (A/D). It is
then used to generate an error signal based on the desired valve position. The error
is fed into the compensator which outputs a digital signal which is converted to
an analog voltage via a digital to analog converter (D/A). This signal is amplified
using a pulse-width modulation (PWM) amplifier and then supplied to the VVA
actuator.
Figure 1.4: Block diagram of a closed-loop variable valve actuator system.
It is obvious that this is a far more complicated system than the one considered
earlier. Every element in the system has its own dynamics, which may or may
not include time delay. Furthermore, we have errors in measurement and digitization/quantization error. We cannot remove any elements from consideration in
this system without first verifying their effect on the entire system. If the
dynamics of the element are fast enough to approximate as a straight line rather
than a block in the diagram, then we can often ignore the element. For sensors
such as the LVDT, this may be accomplished by checking manufacturer published
information about them such as bandwidth, damping ratio, etc. This is a fairly
simple example but more complicated systems such as flight control systems may
have hundreds of these blocks. These notes only deal with the situation in Fig.
1.3.
1.2 Control architectures
We now consider several commonly used control architectures. In the following,
the reference command is denoted by R(s), the plant transfer function by G(s),
the output by Y(s), the compensator by C(s), disturbances by D(s) and noise by
N(s).
Open-loop/Feedforward control
We now consider the simplest case of control, called open-loop or feedforward
control. As the name suggests, there is no feedback of the output back to the
compensator; rather, everything is done without output measurements. A block
diagram representing a simple feedforward scheme is shown in Fig. 1.5.
Figure 1.5: Block diagram of a feedforward variable valve actuator system.
The transfer function from the input R(s) to the output Y(s) is simply

Y/R = CG ⇒ Y = CGR (1.1)
where the arguments have been dropped for convenience. Since ideally we want
Y(s) = R(s), a natural choice for the compensator would be C(s) = G⁻¹(s),
which leads to

Y = CGR = G⁻¹GR = R. (1.2)
This looks like a perfect scenario, but we made some critical assumptions which
make the previous result problematic. One assumption was that there are no
disturbances anywhere in the system. This assumption is false since we always
have disturbances in the system.
Although there may be multiple disturbances in different parts of the system, we
now consider one type of disturbance in the system for simplicity and for illustrative
purposes. Let there be a disturbance in the control signal being fed into the plant,
i.e., the control signal input into the plant is not the same as the compensator
output. This situation is illustrated in Fig. 1.6. For such a system we can derive
Figure 1.6: Block diagram of a feedforward variable valve actuator system with
disturbance.
the transfer function from the reference R(s) to the output Y (s) as
Y = G(CR + D) = GCR + GD (1.3)
and substituting the previous compensator choice C(s) = G⁻¹(s) gives

Y = GG⁻¹R + GD = R + GD. (1.4)
Therefore, even in the case of a single source of disturbance, feedforward fails to
provide good reference tracking since now we have an extra G(s)D(s) term that
is unwanted and perturbs the output from the reference signal.
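At DC (steady-state gains), the failure in (1.4) is easy to check numerically; the plant gain, reference and disturbance values below are arbitrary illustrative numbers.

```python
# Steady-state check of (1.4): with the inverting compensator C = 1/G,
# feedforward gives y = r + G*d, so an input disturbance d reaches the
# output scaled by the plant gain. All numbers are illustrative.

G = 2.0          # plant DC gain
C = 1.0 / G      # feedforward compensator: plant inversion
r = 1.0          # reference command

for d in (0.0, 0.1, 0.5):      # input disturbances
    y = G * (C * r + d)        # output, per equation (1.3)
    print(d, y)                # y = r only when d = 0
```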
Another assumption is that of having no model uncertainty. This is always false, since
there is always uncertainty in a mathematical model for a variety of reasons, such
as leaving certain dynamics unmodeled for simplicity and physical variation between
systems. Thus, we do not have the actual true model G(s) but a perturbed model G̃(s),
and therefore we can only implement G̃⁻¹(s). Substituting this into (1.1) gives

Y = CGR = G̃⁻¹GR = εR (1.5)

where ε is some factor dependent on the model uncertainty. Feedforward fails
when we have uncertainty in the model. A more acute problem occurs when G(s),
and hence G̃(s), have right-half plane (RHP) zeros, i.e., roots of the numerator
with positive real parts. In this situation, G̃⁻¹(s) will have RHP poles, i.e., roots
of the denominator with positive real parts; something that characterizes unstable
systems. And since G̃⁻¹ ≠ G⁻¹, there will never be perfect cancellation of the RHP
poles and zeros. Hence, the entire feedforward system (even without disturbances)
will be unstable because of the existence of RHP poles.
Even though the previous discussion has painted a bleak picture, feedforward control is still a
useful tool as long as it is used in an intelligent manner. We sometimes combine
feedback and feedforward control to achieve excellent results, as will be shown later.
Feedback/closed-loop control
We consider the previous problem of having input disturbance but now we employ
a feedback scheme to tackle the control and reference tracking problem. We also
add noise to the measurement of the output; something that is expected and
unavoidable. The block diagram of such a closed-loop system is shown in Fig. 1.7.
Figure 1.7: Block diagram of a feedback variable valve actuator system with disturbance and noise.
Following basic procedures, we determine the system transfer function as

Y = GC(R − Y − N) + GD (1.6)
  = [GC/(1 + GC)]R + [G/(1 + GC)]D − [GC/(1 + GC)]N (1.7)
  = YR + YD − YN (1.8)
where YR (s) is the transfer function from R(s) to Y (s), YD (s) is the transfer
function from D(s) to Y (s) and YN (s) is the transfer function from N (s) to
Y(s). For the output to perfectly track the reference input, we want YR(s) → 1,
YD(s) → 0 and YN(s) → 0. Let the compensator gain be large, i.e., C(s) → ∞; then

G/(1 + GC) → 0 ⇒ YD → 0 (1.9)
which implies complete disturbance rejection, i.e., the feedforward problem is
solved. Furthermore,
GC/(1 + GC) → 1 ⇒ YR → 1 (1.10)
which is exactly what we want for perfect reference tracking. This is the motivation
behind high gain feedback. But we also have
GC/(1 + GC) → 1 ⇒ YN → 1 (1.11)
which means the noise passes straight through to the output; something we certainly do not want. There
is an inherent problem here because the coefficients of R(s) and N(s) are the
same. Thus, we cannot have YR(s) → 1 and YN(s) → 0 simultaneously; we
have a fundamental contradiction. The only way to get rid of the noise is to set
C(s) = 0, but then there is no control action.
Feedback helps us deal with disturbances and plant uncertainty. High gain feedback
looks similar to plant inversion in feedforward control, only better: it gives
us a plant-inversion-like process while also mitigating the effect of disturbances. The
drawback, however, is that mitigating noise and perfect tracking cannot be achieved
simultaneously.
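The trade-off can be made concrete by evaluating the three coefficients in (1.7) at DC for increasing compensator gain; the plant gain G below is an arbitrary illustrative number.

```python
# DC gains of (1.7): as C grows, GC/(1+GC) -> 1 (good tracking but
# noise passes straight through) while G/(1+GC) -> 0 (disturbance
# rejected). The plant gain is illustrative.

G = 2.0
for C in (1.0, 10.0, 100.0):
    YR = G * C / (1 + G * C)    # reference -> output (want 1)
    YD = G / (1 + G * C)        # disturbance -> output (want 0)
    YN = G * C / (1 + G * C)    # noise -> output (want 0, but equals YR)
    print(C, round(YR, 4), round(YD, 4), round(YN, 4))
```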
In many situations, the reference to be tracked lies in a low frequency range. An example
of this would be the reference signal for the bank angle of a commercial airliner.
Also, for most sensors, noise becomes prevalent mainly in the high frequency ranges.
Thus we can design C(s) such that the transfer function

YR = YN = GC/(1 + GC) (1.12)
is 1 at low frequencies (to aid in reference tracking) and 0 at high frequencies (to
mitigate measurement noise). The frequency response for such a transfer function
is shown in Fig. 1.8.
Figure 1.8 shows a transfer function with a bandwidth (i.e., approximate roll-off frequency) of 7 radians per second. It implies, for perfect tracking and noise mitigation, that the reference signal must be predominant in frequencies below 7 radians
per second and the measurement noise must be predominant in frequencies above 7
radians per second. Thus, the extent to which you can be aggressive with control,
i.e., have high bandwidth, depends on two factors:
Figure 1.8: Typical frequency response magnitude of YR = YN to mitigate noise and provide good tracking.
• The frequency range at which the noise is present.
• The reference signal frequency range.
This is an example of a fundamental limitation in a feedback situation. The aggressiveness of the control is limited by the measurement noise and the reference
signal frequency range.
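A transfer function with the shape of Fig. 1.8 can be sketched numerically. Below, a hypothetical first-order plant G(jω) = 1/(1 + jω) with a pure gain C gives YR = YN = GC/(1 + GC), which is near 1 at low frequency and rolls off at higher frequencies; the plant and gain are illustrative assumptions.

```python
# Magnitude of YR = YN = GC/(1+GC) versus frequency for an
# illustrative first-order plant G(jw) = 1/(1+jw) and gain C.

import cmath  # complex arithmetic for evaluating G at s = jw

def T_mag(w, C=9.0):
    G = 1 / complex(1.0, w)     # plant frequency response G(jw)
    T = G * C / (1 + G * C)     # closed-loop YR = YN
    return abs(T)

for w in (0.1, 1.0, 10.0, 100.0):
    print(w, round(T_mag(w), 3))   # near C/(1+C) at low w, small at high w
```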
Closing the loop allows us to trade off the impact of disturbances, noise,
reference tracking and stability, among others. It gives us the freedom to
trade off quantities which we might not have been able to otherwise, such as
plant uncertainty, noise mitigation and tracking accuracy. These notes deal with
developing a sophisticated outlook and precise tools to make these tradeoffs.
Feedback with feedforward
Another way to tackle the above problems is to include a feedforward term in
the feedback architecture as shown in Fig. 1.9. Here Gf(s) is the feedforward
transfer function. The transfer function of the system is

Y = GC(R − Y − N) + GGf R + GD
⇒ Y(1 + GC) = (GC + GGf)R + GD − GCN (1.13)
and hence
Y = [(GC + GGf)/(1 + GC)]R + [G/(1 + GC)]D − [GC/(1 + GC)]N (1.14)
  = YR + YD − YN. (1.15)
As before, we want YR (s) → 1, YD (s) → 0 and YN (s) → 0. Notice we have an
extra degree of freedom in the design of YR (s) due to the feedforward term Gf (s).
Figure 1.9: Block diagram of a two degree of freedom feedback-feedforward variable
valve actuator system with disturbance and noise.
Let Gf(s) = G⁻¹(s); thus

(GC + GGf)/(1 + GC) = (GC + GG⁻¹)/(1 + GC) = (GC + 1)/(1 + GC) = 1 (1.16)
which gives accurate steady-state tracking although we still run into problems of
model uncertainty as in the feedforward architecture. But most importantly, since
we are not using feedforward or feedback alone, accurate steady-state tracking
does not imply noise amplification. Now the challenge is the tradeoff between the
noise and disturbance. If we set C(s) → ∞, then

G/(1 + GC) → 0 (1.17)

which implies YD(s) → 0. But for YN(s) → 0, we need C(s) → 0 so that

GC/(1 + GC) → 0. (1.18)
These seemingly contradictory/conflicting requirements represent a tradeoff that
we cannot avoid. If our model is excellent, feedforward is beneficial in this case,
but we still have tradeoffs. Thus, control cannot change the underlying problem
but gives us tools to play with the fundamental tradeoffs to get an acceptable
design.
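A quick DC check of (1.16) shows the extra degree of freedom at work; the plant gain below is an arbitrary illustrative number.

```python
# DC check of (1.16): with the feedforward term Gf = 1/G, the
# reference-to-output gain (GC + G*Gf)/(1 + GC) is exactly 1 for any
# compensator gain C, so tracking no longer needs high gain.
# The plant gain is illustrative.

G = 2.0
Gf = 1.0 / G                    # feedforward term: plant inversion
for C in (0.5, 5.0, 50.0):
    YR = (G * C + G * Gf) / (1 + G * C)
    print(C, YR)                # 1.0 in every case
```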
Feedback with disturbance compensation
Assuming we know the disturbance D(s), will this buy us anything? Consider the
feedback architecture shown in Fig. 1.10. Here GD (s) is an additional transfer
function called the disturbance compensation transfer function.
Figure 1.10: Block diagram of a combined feedback and disturbance feedback
variable valve actuator system with disturbance and noise.
From Fig. 1.10 we have

Y = GC(R − Y − N) + GCGD D + GD
⇒ Y(1 + GC) = GCR + (GCGD + G)D − GCN (1.19)
and hence
Y = [GC/(1 + GC)]R + [(GGD C + G)/(1 + GC)]D − [GC/(1 + GC)]N (1.20)
  = YR + YD − YN. (1.21)
Again we want YR(s) → 1, YD(s) → 0 and YN(s) → 0. Now we seem to have
more flexibility because setting GD(s) = −C⁻¹(s) gives

GGD C + G = −GC⁻¹C + G = −G + G = 0 ⇒ (GGD C + G)/(1 + GC) = 0 (1.22)
which implies that we have perfect disturbance rejection. But again this method
has its own challenges because we need to have knowledge of the disturbance.
This can come through estimation, especially if we have some idea about the nature
of the disturbance; for example, a disturbance in a manufacturing process caused by
the floor shaking in a repeating pattern.
Chapter 2
System models and representation
2.1 Model classification
There are many approaches to developing system models and a similarly large
number of classifications of model types. System models can be classified as white,
grey or black box models. Models where the underlying physics of the system are
first considered to help develop the model are known as white box models. Black
box models, on the other hand, are entirely data driven: the output of the system
subject to a given input is measured and a corresponding model is formed using tools
such as Fourier analysis. Grey box models combine the above two approaches in
that the model form is derived from physical principles while the model parameters
are determined using experimental data.
Models can also be classified as nominal/control models or simulation/calibration
models. The nominal/control model form is a simplified dynamic model where
the desire is to capture the dynamic coupling between control inputs and system
outputs. These models are directed towards usage for controller design since a
simplified model aids in this process. Conversely, simulation models are typically
generated to capture as many aspects of the system behavior as accurately as
possible. The intended use of these types of models is for system and controller
validation, intuition development and assumption interrogation.
2.2 State-space representation
Consider a set of 1st-order ordinary differential equations,

ẋ1 = f1(x1, ..., xn, u1, ..., um) (2.1)
⋮
ẋn = fn(x1, ..., xn, u1, ..., um) (2.2)
where x1 , ..., xn are called the system states and u1 , ..., um are the system inputs.
Next consider a set of algebraic equations relating outputs to state variables and
inputs,

y1 = g1(x1, ..., xn, u1, ..., um) (2.3)
⋮
yp = gp(x1, ..., xn, u1, ..., um). (2.4)
If we let x = [x1, ..., xn]^T, u = [u1, ..., um]^T, and y = [y1, ..., yp]^T, then the above
relationships can be written in the compact state-space form

ẋ = f(x, u) (2.5)
y = g(x, u). (2.6)

If the system considered is linear then it can be written in the linear parameter
varying (LPV) form

ẋ = A(t)x + B(t)u (2.7)
y = C(t)x + D(t)u (2.8)

or, if the system is linear time-invariant (LTI),

ẋ = Ax + Bu (2.9)
y = Cx + Du (2.10)
where A, B, C and D are the relevant matrices.
States are the smallest set of n variables (state variables) such that knowledge
of these n variables at t = t0 together with knowledge of the inputs for t ≥ t0
determines system behaviour for t ≥ t0 . The state vector is the n-th order vector
with states as components and the state-space is the n-dimensional space with
coordinates as the state variables. Correspondingly, the state trajectory is the path
produced in the state-space as the state vector evolves over time.
The advantages of the state-space representation are that the dynamic model is
represented in a compact form with regular notation, the internal behaviour of the
system is given treatment, the model can easily incorporate complicated output
functions, the definition of states helps build intuition, and MIMO systems are easily
dealt with. There are many tools available for control design with this type of model
form. The drawback of the state-space representation is that the particular equations themselves may not always be very intuitive.
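As a small sketch of the LTI form (2.9)–(2.10), a mass-spring-damper m·ÿ + c·ẏ + k·y = u has states x1 = y (position) and x2 = ẏ (velocity); the parameter values below are illustrative assumptions.

```python
# Mass-spring-damper m*y'' + c*y' + k*y = u in state-space form:
# states x1 = position, x2 = velocity, so x1' = x2 and
# x2' = (u - c*x2 - k*x1)/m. Parameter values are illustrative.

m, c, k = 1.0, 2.0, 4.0

def f(x, u):
    x1, x2 = x
    return (x2, (u - c * x2 - k * x1) / m)   # x_dot = f(x, u)

def simulate(u=4.0, dt=1e-3, t_end=10.0):
    x = (0.0, 0.0)
    for _ in range(int(t_end / dt)):
        dx = f(x, u)
        x = (x[0] + dt * dx[0], x[1] + dt * dx[1])  # forward Euler
    return x

pos, vel = simulate()
print(pos)   # settles near the static deflection u/k = 1.0
```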
2.3 Input/output differential equation
The input/output differential equation relates the outputs directly to the inputs.
It is usually obtained by taking the Laplace transform of the state-space
representation, substituting/rearranging appropriately, and finally taking the inverse
Laplace transform to obtain a high-order differential equation. A linear time-invariant input/output differential equation is given by

y^(n) + an−1 y^(n−1) + ... + a1 ẏ + a0 y = bm u^(m) + bm−1 u^(m−1) + ... + b1 u̇ + b0 u. (2.11)
The advantages of input/output differential equations are that they are conceptually simple, can easily be converted to transfer functions, and many tools are
available in this context for control design. They are, however, difficult to solve in the
time domain, since that generally requires inverting the Laplace transform, which is
not an easy task.
2.4 Transfer functions
Recall from the study of Laplace transforms the following important transformations

L[ḟ(t)] = sF(s) − f(0) (2.12)
L[f̈(t)] = s^2 F(s) − sf(0) − ḟ(0) (2.13)
⋮
L[f^(n)(t)] = s^n F(s) − s^(n−1) f(0) − s^(n−2) ḟ(0) − ... − s f^(n−2)(0) − f^(n−1)(0). (2.14)
Now consider a generic LTI input/output differential equation given by

y^(n) + an−1 y^(n−1) + ... + a1 ẏ + a0 y = bm u^(m) + bm−1 u^(m−1) + ... + b1 u̇ + b0 u. (2.15)
Applying the Laplace transforms from above to this differential equation yields

s^n Y(s) + an−1 s^(n−1) Y(s) + ... + a1 sY(s) + a0 Y(s) + fy(s, t = 0) =
bm s^m U(s) + ... + b1 sU(s) + b0 U(s) + fu(s, t = 0) (2.16)

where fy(s, t = 0) and fu(s, t = 0) are functions of the initial conditions as given
by the Laplace transform. Rearranging the above gives

Y(s) = [(bm s^m + ... + b1 s + b0)/(s^n + an−1 s^(n−1) + ... + a0)] U(s)
     + [(fu(s, t = 0) − fy(s, t = 0))/(s^n + an−1 s^(n−1) + ... + a0)] (2.17)

where the first term is box 1 and the second term is box 2.
Box 1 represents the transfer function and it describes the forced response of the
system. Box 2 represents the free response of the system depending on the initial
conditions. Depending on the system (whether it has control input or not) and the
exact system scenario (whether the initial conditions are zero or not), Y (s) may
comprise box 1 or box 2 or both. For control analysis, we usually assume box 2
is zero and thus box 1 represents the entire system response. It will be seen later
that many results which are true for this case are also true when box 1 is zero and
box 2 is not.
The roots of the numerator of a transfer function are called zeros, since they send
the transfer function to zero, whereas the roots of the denominator of a transfer
function are called poles, since they send the transfer function to ∞. The poles and
zeros are very important in analyzing system response and for designing appropriate
compensators.
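As a small illustration, consider a hypothetical transfer function H(s) = (s + 3)/(s² + 2s + 5); its zero and poles follow directly from the numerator and denominator roots.

```python
# Zero and poles of the illustrative transfer function
# H(s) = (s + 3) / (s^2 + 2s + 5).

import cmath  # complex square root for the quadratic formula

zero = -3.0                              # root of the numerator s + 3
a, b, c = 1.0, 2.0, 5.0                  # denominator s^2 + 2s + 5
disc = cmath.sqrt(b * b - 4 * a * c)     # discriminant (imaginary here)
poles = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
print(zero, poles)   # poles -1 ± 2j lie in the LHP, so H(s) is stable
```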
Chapter 3
Dynamic response of systems
3.1 First-order system response
A first-order system (such as a spring-damper system) takes on the general form

τẏ + y = ku (3.1)
⇒ ẏ + y/τ = (k/τ)u (3.2)
⇒ ẏ + ay = bu,  a = 1/τ, b = k/τ (3.3)
⇒ τ[sY(s) − y(0)] + Y(s) = kU(s) (3.4)
⇒ Y(s) = [(k/τ)/(s + 1/τ)] U(s) + y(0)/(s + 1/τ) (3.5)
where τ is the time constant of the system response and k is a constant.

The roots of the denominator of the transfer function (the characteristic equation)
are called poles, and in this case the pole is −a = −1/τ. If the pole lies in the
left-half of the complex s-plane (LHP), it is stable (exponentially decreasing). If it
is in the right-half of the complex s-plane (RHP) then it is unstable (exponentially
increasing). A pole on the imaginary axis (with real part equal to 0) is marginally
stable.
The free response of this system (due to the initial conditions) is given by
yfree(t) = y(0)e^(−t/τ) = y(0)e^(−at).

The forced response of the system to a step input is given by
ystep(t) = k + [y(0) − k]e^(−t/τ). The value of y at the time constant value τ is given by
y(τ) = 0.368 y(0) + 0.632 k. The lower τ is, the faster the response. Thus, making |a|
bigger gives a faster response.
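The rules of thumb above can be checked against the analytic step response; τ and k below are illustrative values, with y(0) = 0.

```python
# Step response of tau*y' + y = k*u with y(0) = 0:
# y(t) = k + [y(0) - k]*exp(-t/tau), so y(tau) = 0.632*k and the
# response is within 2% of k after 4*tau. tau and k are illustrative.

import math

tau, k = 0.5, 2.0

def y_step(t, y0=0.0):
    return k + (y0 - k) * math.exp(-t / tau)

print(y_step(tau))       # 0.632 * k
print(y_step(4 * tau))   # within 2% of the final value k
```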
3.2 Second-order systems
The general form of a second-order system is given by

ÿ + 2ζωn ẏ + ωn² y = kωn² u (3.6)
where k is a constant, ωn is the natural frequency of the system, and ζ is the
damping ratio. Taking the Laplace transform as before yields the system in terms
of its forced and free response,

Y(s) = [kωn²/(s² + 2ζωn s + ωn²)] U(s) + [((s + 2ζωn)y(0) + ẏ(0))/(s² + 2ζωn s + ωn²)] (3.7)

where the common denominator is the open-loop characteristic equation, whose
roots give the system poles. For such a canonical form of second-order system, the
poles are given by −ζωn ± ωn√(ζ² − 1). If 0 < ζ < 1, then we have two complex
poles (since ζ² − 1 is negative), implying oscillation in the system free response. In
this case the poles are given by −ζωn ± jωn√(1 − ζ²).
The free response of a second-order system (0 < ζ < 1) is given by

yfree(t) = [y(0)/√(1 − ζ²)] e^(−ζωn t) sin(ωd t + tan⁻¹(√(1 − ζ²)/ζ)) (3.8)
It can be easily seen from the above that we need ζωn > 0 or alternatively the
poles in the LHP for stability.
The forced response of a second-order system (0 < ζ < 1) to a step is given by

ystep(t) = k[1 − (e^(−ζωn t)/√(1 − ζ²)) sin(ωd t + tan⁻¹(√(1 − ζ²)/ζ))] (3.9)

where ωd = ωn√(1 − ζ²).
For a second-order system, the rise time tr is given by

tr = (π − β)/ωd (3.10)
where β is the angle such that cos β = ζ. The time to maximum overshoot is
given by
tp = π/ωd = Td/2 (3.11)

where Td = 2π/ωd is the damped period,
and the maximum overshoot Mp itself is given by

Mp = e^(−πζ/√(1 − ζ²)) ≈ e^(−πζ) (3.12)

where the percent overshoot can be determined by multiplying the above expression
by 100.
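Formulas (3.10)–(3.12) can be evaluated directly; the damping ratio and natural frequency below are illustrative values.

```python
# Rise time, peak time, and maximum overshoot from (3.10)-(3.12)
# for an underdamped second-order system. zeta and wn are illustrative.

import math

zeta, wn = 0.5, 4.0
wd = wn * math.sqrt(1 - zeta ** 2)      # damped natural frequency
beta = math.acos(zeta)                  # angle with cos(beta) = zeta
tr = (math.pi - beta) / wd              # rise time (3.10)
tp = math.pi / wd                       # time to maximum overshoot (3.11)
Mp = math.exp(-math.pi * zeta / math.sqrt(1 - zeta ** 2))  # overshoot (3.12)

print(round(tr, 3), round(tp, 3), round(100 * Mp, 1))  # Mp as a percent
```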
If the system is nonminimum phase (RHP zeros), then the response will have
undershoot instead of overshoot. When the system has a RHP zero at s = c, a
lower bound for the undershoot Mu is given by

Mu ≥ (1 − δ)/(e^(c ts) − 1) (3.13)

where ts is the settling time and δ is the maximum allowable steady-state error in the
response beyond the settling time. Thus for a 2% settling time, δ = 0.02.
3.3 Design considerations
We consider the case where we require a certain percent overshoot in our system.
We know that the damping ratio ζ is a critical factor in this, but how must it be
chosen such that we have a certain maximum percent overshoot? Moreover, how
must we choose our poles so that this choice of ζ is satisfied?
We can use the relationship for percent overshoot given previously to determine
the damping ratio that will ensure a certain maximum overshoot, ζ%OS. As long
as we choose a damping ratio greater than ζ%OS, we will not exceed the specified
maximum overshoot. In the complex s-plane, the following can easily be shown to
be true

cos θ = ζ (3.14)

where θ is a counterclockwise angle made with the negative real axis. Hence any
poles that lie on or between the lines described by θ = ±cos⁻¹(ζ%OS) satisfy
ζ ≥ ζ%OS.
For a LHP pole at s = −a (with a > 0), the time constant is τ = 1/a. Thus, a larger
magnitude of a LHP pole signifies a faster response, and such poles are known as
"fast poles". The 2% settling time is given by t2% = 4τ and the 5% settling time
is given by t5% = 3τ, where τ is the time constant. Hence, to achieve a settling
time of t2% or less requires placing the poles of the system to the left of s = −4/t2%.
A similar result can be easily derived for t5%.
We can combine the above two results for maximum overshoot and settling time to
determine precise pole locations for a desired system response.
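As a sketch of this combination, the following inverts (3.12) for a desired overshoot and adds the settling-time constraint; the 10% overshoot and 1 s settling-time specifications are illustrative numbers.

```python
# Admissible pole region for at most 10% overshoot and a 2% settling
# time of 1 s: pole angle from the negative real axis must not exceed
# acos(zeta_min), and Re(s) <= -4/t2. Specification numbers are illustrative.

import math

Mp_max = 0.10                # allowed maximum overshoot (10%)
t2 = 1.0                     # required 2% settling time in seconds

# Invert Mp = exp(-pi*zeta/sqrt(1-zeta^2)) for the minimum damping ratio.
lnM = math.log(Mp_max)
zeta_min = -lnM / math.sqrt(math.pi ** 2 + lnM ** 2)
theta_max = math.acos(zeta_min)         # max angle from negative real axis
sigma = 4.0 / t2                        # need Re(s) <= -4/t2

def pole_ok(p):
    return p.real <= -sigma and math.atan2(abs(p.imag), -p.real) <= theta_max

print(round(zeta_min, 3), pole_ok(complex(-5, 3)), pole_ok(complex(-5, 10)))
```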
3.4 Routh-Hurwitz stability criterion
As discussed previously, poles in the RHP indicate instability of a system. There-
fore, the roots of the denominator of a transfer function hold key information
regarding the stability and response characteristics of the system.
Consider the denominator of a closed-loop transfer function, also known as the
closed-loop characteristic equation,
CLCE(s) = s^n + a1 s^(n−1) + . . . + a(n−1) s + an = 0     (3.15)
where a1 , . . . , an are real constants. The system is stable if the roots of the
CLCE(s) are all in the LHP. If the system is stable, then a1, . . . , an > 0, but
a1, . . . , an > 0 does not necessarily imply stability. Furthermore, if any
ak ≤ 0, then the system has at least one pole with a positive or zero real part,
which implies instability or marginal stability, respectively.
The Routh-Hurwitz stability criterion helps determine not only whether a system
is stable or not but it also specifies how many unstable poles are present without
explicitly solving the characteristic equation, CLCE(s). It can also be used for
multiple gain closed-loop systems.
By the Routh-Hurwitz stability criterion, if all of a1, . . . , an are nonzero
and of the same sign (all positive or, equivalently, all negative), then we can
use the Routh array to determine stability and the number of unstable poles.
To form the Routh array, first arrange the coefficients in two rows as follows

Row
n      1    a2   a4   . . .  0
n-1    a1   a3   a5   . . .  0

Then we determine the rest of the elements in the array. For the third row, the
coefficients are computed as

b1 = (a1 a2 − a3)/a1    b2 = (a1 a4 − a5)/a1    b3 = (a1 a6 − a7)/a1    . . .     (3.16)

each being the negative of a 2 × 2 determinant formed from the two rows above,
divided by the first element of the row directly above. This is continued until
the remaining elements are all zero. Similarly, for the fourth row

c1 = (b1 a3 − a1 b2)/b1    c2 = (b1 a5 − a1 b3)/b1    c3 = (b1 a7 − a1 b4)/b1    . . .     (3.17)

and for the fifth row

d1 = (c1 b2 − b1 c2)/c1    d2 = (c1 b3 − b1 c3)/c1    d3 = (c1 b4 − b1 c4)/c1    . . .     (3.18)

where each series of elements is continued until the remaining elements are all
zero. The Routh array is then given by

Row
n      1    a2   a4   . . .  0
n-1    a1   a3   a5   . . .  0
n-2    b1   b2   b3   . . .  0
n-3    c1   c2   c3   . . .  0
n-4    d1   d2   d3   . . .  0
.      .    .
.      .    .
2      e1   e2
1      f1
0      g1
The number of roots of the closed-loop characteristic equation in the RHP is equal
to the number of changes in sign of the coefficients of the first column of the array
[1, a1 , b1 , c1 , . . .]. The system is stable if and only if a1 , . . . , an > 0 and all the
terms in [1, a1 , b1 , c1 , . . .] are positive.
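The construction above can be sketched in code. The following is a minimal Python sketch (the notes otherwise use MATLAB, but the arithmetic is the same); it assumes no first-column element ever becomes zero, so neither of the special cases treated below is handled, and the function names are illustrative only.

```python
def routh_first_column(coeffs):
    """coeffs: [1, a1, ..., an] of CLCE(s); returns the first column of the
    Routh array (assumes no zero first elements appear)."""
    row1 = coeffs[0::2]           # 1, a2, a4, ...
    row2 = coeffs[1::2]           # a1, a3, a5, ...
    width = max(len(row1), len(row2))
    row1 += [0.0] * (width - len(row1))
    row2 += [0.0] * (width - len(row2))
    col = [row1[0], row2[0]]
    while any(abs(x) > 0 for x in row2):
        nxt = []
        for k in range(width - 1):
            # Eq. (3.16)-(3.18) pattern: negative 2x2 determinant over pivot
            nxt.append((row2[0] * row1[k + 1] - row1[0] * row2[k + 1]) / row2[0])
        nxt.append(0.0)
        row1, row2 = row2, nxt
        if any(abs(x) > 0 for x in row2):
            col.append(row2[0])
    return col

def rhp_root_count(coeffs):
    col = routh_first_column(coeffs)
    return sum(1 for a, b in zip(col, col[1:]) if a * b < 0)

# s^4 + 2s^3 + 3s^2 + 4s + 5: first column 1, 2, 1, -6, 5 -> 2 sign changes
print(rhp_root_count([1, 2, 3, 4, 5]))  # -> 2 RHP roots
```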
We have a special case when the first element in a row becomes zero, since this
implies division by 0 for succeeding rows. For this case, replace the 0 with ε
(arbitrarily small constant) and continue with obtaining expressions for the elements
in the succeeding rows. Finally, when all expressions are obtained, let ε → 0 and
evaluate the sign of the expressions. This is all that is needed since we are only
concerned with sign changes. If we find no sign changes, then the 0 element
signifies a pair of poles on the imaginary axis (no real parts) implying marginal
stability.
Another special case is when all the elements in a row become zero. In this case
we may still proceed in constructing the Routh array using the derivative of an
auxiliary polynomial defined using the previous (nonzero) row. If a row is all
zero, first form an auxiliary polynomial in s using the elements of the row
above it as coefficients. Then obtain the derivative of this polynomial with
respect to s, i.e. apply d/ds, and replace the zero row with the coefficients of
this derivative. The rest of the array is completed as usual.
3.5 Pole/Zero effects
The location of the closed-loop system poles determines the nature of the system
models but the location of the closed-loop zeros determines the proportion in
which these modes are combined. This can be easily verified via partial fraction
expansions. Poles and zeros far away from the imaginary axis are known as fast
poles and zeros, respectively. Conversely, poles and zeros close to the imaginary
axis are known as slow poles and zeros. Fast zeros, RHP or LHP, have little to no
impact on system response.
Slow zeros have a significant effect on system response. Slow LHP zeros lead to
overshoot and slow RHP zeros lead to undershoot. A lower bound Mu for the
maximum undershoot (due to the presence of a RHP zero at s = c) is given by
Mu ≥ (1 − δ)/(e^(c·ts) − 1)     (3.19)
where δ = 0.02 for a 2% settling time ts. From the denominator of the above, it
is obvious that fast zeros (large c) lead to a smaller lower bound for the
maximum undershoot Mu and vice versa. Alternatively, a short settling time ts,
implying a high bandwidth, would lead to more undershoot. Hence, for a given
RHP zero, we can only lower ts so much before getting unacceptable undershoot.
To avoid undershoot, the closed-loop bandwidth is limited by the magnitude of
the RHP zero. We determine the maximum acceptable bandwidth below.
The approximate 2% settling time is given by ts ≈ 4τ = 4/a where τ is the time
constant and a is the magnitude of the real part of the slowest LHP closed-loop
system pole. Thus Mu can then be expressed as

Mu ≳ (1 − δ)/(e^(4c/a) − 1).     (3.20)
Thus if the magnitude of the real part of the slowest LHP closed-loop pole is
smaller than the RHP zero, undershoot will remain low since the denominator of
the above will become large. The converse is true as well. Keep real part of slowest
LHP pole less than real part of any RHP zero to avoid undershoot.
The above results are true for LHP zeros as well in an analogous manner but for
overshoot instead. Thus, in order to avoid overshoot, we must keep |a| < |c| where
a is the slowest LHP pole and c is the slowest LHP zero.
If the magnitude of the real part of the dominant closed-loop poles is less than the
magnitude of the largest unstable open-loop pole, then significant overshoot will
occur. Keep real part of slowest LHP pole greater than real part of any RHP pole
to avoid overshoot.
Chapter 4
Frequency response tools
4.1 Frequency response
We consider the input to any transfer function G(s) = B(s)/A(s) to be u(t) =
A sin ωt where A is a constant amplitude and ω is the input frequency. Then it
can be shown that the steady-state output is given by

yss(t) = |G(jω)| A sin(ωt + ∠G(jω))     (4.1)
where |G(jω)| is simply the magnitude of G(s) when evaluated at jω and the
phase ∠G(jω) is given by
∠G(jω) = ∠B(jω) − ∠A(jω) = tan⁻¹(Im B(jω)/Re B(jω)) − tan⁻¹(Im A(jω)/Re A(jω))     (4.2)
for all ω ∈ R+ . Much useful and generalizable information can be gleaned from
the above (the frequency response function G(jω) or FRF).
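Evaluating the FRF at a given frequency can be sketched directly in code (Python here, though the notes use MATLAB elsewhere; `freq_response` is an illustrative helper name).

```python
import cmath
import math

def freq_response(num, den, w):
    """Evaluate G(jw) for G(s) = num(s)/den(s); num and den are coefficient
    lists in descending powers of s. Returns magnitude and phase (degrees)."""
    s = 1j * w
    B = sum(c * s**(len(num) - 1 - k) for k, c in enumerate(num))
    A = sum(c * s**(len(den) - 1 - k) for k, c in enumerate(den))
    G = B / A
    return abs(G), math.degrees(cmath.phase(G))

# G(s) = 1/(s + 1): at w = 1 rad/s, |G| = 1/sqrt(2) and phase = -45 deg, so by
# Eq. (4.1) the input A sin(t) yields yss(t) = (A/sqrt(2)) sin(t - 45 deg)
mag, ph = freq_response([1.0], [1.0, 1.0], 1.0)
print(mag, ph)  # -> 0.7071..., -45.0
```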
4.2 Bode plots
The pair of plots of magnitude |G(jω)| vs. frequency ω and phase ∠G(jω) vs.
frequency ω are collectively known as the Bode plots of G(s). The magnitude
is usually plotted in decibels (dB) and the frequency for both plots is on the
logarithmic scale. The decibel value of the FRF G(jω) is given by
|G(jω)|dB = 20 log10 |G(jω)|     (4.3)
and the logarithmic scale value of ω is log10 ω. The appropriate values for the
magnitude and phase are determined as previously shown.
The benefits of using a logarithmic scale are manifold but most importantly,
multiplication on a linear scale becomes addition on the logarithmic scale.
Thus, since the
poles/zeros of a transfer function can be expressed as an arbitrarily long multiplica-
tion of factors, in the logarithmic scale we can decompose a transfer function into
a summation of elementary transfer functions based on the original poles/zeros.
This helps in plotting the function and obtaining fundamental insights into the
behaviour of the plant.
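The additivity claim is easy to check numerically: the decibel magnitudes of factored terms simply add. A two-line Python check (`db` is an illustrative helper name):

```python
import math

def db(x):
    # decibel value of a magnitude, Eq. (4.3)
    return 20.0 * math.log10(x)

# |G1*G2| in dB equals |G1| in dB plus |G2| in dB, which is why factored
# transfer functions can be summed term by term on a Bode magnitude plot
g1, g2 = 0.5, 8.0
print(db(g1 * g2), db(g1) + db(g2))  # both ~12.04 dB
```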
Example Describe the frequency response of the following closed-
loop transfer function using Bode plots

G(s) = (45s + 237)/((s² + 3s + 1)(s + 13)).     (4.4)
The bode plots for the transfer function can be generated using the
bode command in MATLAB. These are shown in Figs. (4.1) and (4.2).
Figure (4.1) shows the frequency response bode plots on the standard
logarithmic scale and Fig. (4.2) shows the frequency response bode
plots on a linear scale. The benefits of the logarithmic scale are obvi-
ous from the figures since they show the response of the system better
when compared to the linear scale plots. This is especially true at very
low and very high frequencies where the linear scale response grows
asymptotically.
Figure 4.1: Logarithmic scale bode plot of G(s).
Note that the frequency is shown in Hz and not in rad/sec to make
the linear scale bode plots distinguishable.
Many systems, including control systems, have their performance specified in terms
of frequency response criteria. An important one of these is the bandwidth of
the system. We define it, in general terms, as the frequency range for which
the magnitude of the transfer function from input to output is close to unity (in
absolute terms) or zero (in dB). This implies that the bandwidth is the frequency
range for which the system response is close to ideal.
The previous definition can be expressed more exactly for control systems since they
normally behave like low-pass systems. Low-pass systems are systems for which
the response begins near zero decibels and then rolls off with increasing frequency.
Figure 4.2: Linear scale bode plot of G(s).
It is usually the case that the magnitude response does not roll off immediately but
rather remains level for some frequency range. For such systems, the bandwidth is
the frequency at which the magnitude has rolled off by 3 dB from its low-frequency
level value. Another definition, which is the one used in the rest of these
notes, is that the bandwidth is the magnitude of the real part of the slowest
LHP pole. The bandwidths from the two definitions usually closely coincide.
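The 3 dB definition can be applied numerically by scanning the magnitude response. A Python sketch for a first-order low-pass G(s) = 1/(τs + 1), whose 3 dB bandwidth should come out near 1/τ; the helper name and scan parameters are illustrative only.

```python
import math

def bandwidth_3db(mag, w_lo=1e-3, w_hi=1e3, n=200000):
    """First frequency where the magnitude has rolled off 3 dB below its
    low-frequency value (low-pass assumption, simple linear scan)."""
    level_db = 20.0 * math.log10(mag(w_lo))
    for k in range(n):
        w = w_lo + k * (w_hi - w_lo) / n
        if 20.0 * math.log10(mag(w)) <= level_db - 3.0:
            return w
    return float("inf")

tau = 0.5
mag = lambda w: 1.0 / math.sqrt(1.0 + (tau * w)**2)  # |G(jw)| for 1/(tau*s+1)
print(bandwidth_3db(mag))  # close to 1/tau = 2 rad/s
```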
4.3 Gain and phase margins
As has been previously discussed, a system is stable if all of its poles are in the LHP.
Otherwise it is unstable. Not all stable systems are the same and some systems are
more stable than others. Thus, we are faced with the question of the extent of
stability of a system. The concepts of gain and phase margins are useful in
this regard and bode plots in particular are helpful in determining these. It must
be noted that gain and phase margins are only meaningful for stable closed-loop
systems that become unstable with increasing gain.
Consider a simple stable feedback system with the plant H(s) = B(s)/A(s), the
compensator C(s) = P(s)/L(s) and a constant loop gain K. Let G(s) be the
closed-loop transfer function given by

G(s) = KH(s)C(s)/(1 + KH(s)C(s)).     (4.5)

Then the closed-loop characteristic equation CLCE, obtained by setting the
denominator of the closed-loop transfer function to zero, is

CLCE(s) = 1 + KH(s)C(s) = 0     (4.6)
where the roots of 1 + KH(s)C(s) = 0 are the closed-loop poles of the system.
These are given by the solutions to

KH(s)C(s) = −1.     (4.7)

Since angles in the complex s-plane are measured in a counterclockwise direction
from the positive real axis, in polar coordinates the point −1 lies at an angle
∠−1 = −180° with magnitude |−1| = 1. Thus, if we evaluate the open-loop
frequency response KH(jω)C(jω) (written G(jω) below for brevity), the system
will be on the verge of instability precisely at a point where both
∠G(jω) = −180° and |G(jω)| = 1. It can also be shown that the closed-loop system
is stable if |G(jω)| < 1 wherever ∠G(jω) = −180°. Since we are assuming a stable
closed loop, there is no point where |G(jω)| = 1 and ∠G(jω) = −180° hold
simultaneously. The gain crossover frequency is where only |G(jω)| = 1 (0 dB)
holds, and the phase crossover frequency is where only ∠G(jω) = −180° holds.
It also follows that since systems with RHP closed-loop poles are a priori
unstable, the concepts of phase and gain margins are meaningless for them.
It follows that we can define the gain and phase margins as how much we will
need to change the system (in terms of magnitude and angle) before we reach
instability. Moreover, we can exploit the bode plots to formulate succinct definitions
of these. Thus, the phase margin is defined as the amount of additional phase lag
needed at the gain crossover frequency which will bring the system to the verge
of instability (i.e. ∠G(jω) = −180◦ or ∠G(jω) = +180◦ ). Correspondingly, the
gain margin is defined as the reciprocal of the magnitude |G(jω)|, or
equivalently the negative decibel magnitude −|G(jω)|dB, at the phase crossover
frequency. The decibel form follows since 20 log10 1 = 0 dB marks the stability
boundary.
For minimum-phase systems, the phase margin must be positive for stability; a
negative phase margin implies instability. Note that ∠G(jω) = −180° and
∠G(jω) = +180° refer to the same angle in the complex s-plane, so either may be
used as the stability criterion.
The gain margin describes the maximum multiple of the gain K that can be applied
in feedback before the system becomes unstable. A gain margin in decibels that
is positive implies that the system is stable and conversely, a negative decibel gain
margin implies instability of the underlying system.
For a stable minimum-phase system, the gain margin indicates how much we can
increase the gain before the onset of instability. Conversely, for an unstable
minimum-phase system, it indicates how much we must decrease the gain to regain
stability. Since an unstable system yields a negative decibel gain margin, this
corresponds to scaling the gain down to 0 < K < 1 times its current value. Such
a reduction is not feasible for many systems, and therefore the gain margin is
not a meaningful measure for many unstable systems.
For systems that never approach the phase crossover frequency, the gain margin is
either ∞ or −∞ indicating that the system is stable for all gains or unstable for
all gains, respectively. Furthermore, the system may have more than one crossover
frequency indicating that the system is stable only for gain values within a certain
range. For stable systems with more than one gain crossover frequency, the phase
margin is measured at the highest gain crossover frequency.
For nonminimum-phase systems, or systems with undefined phase/gain margins, the
best recourse is to use the Nyquist stability criterion.
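Both margins can be read off numerically from a frequency scan of the open-loop response. The sketch below (Python; helper names illustrative) detects the phase crossover as a crossing of the negative real axis and the gain crossover as |L| falling through 1, using the open loop 1/(s(s+1)²) as an assumed example; it assumes a single crossover of each kind.

```python
import cmath
import math

def margins(loop, n=100000, w_max=20.0):
    """Scan the open-loop frequency response loop(w) for gain/phase margins.
    Assumes one crossover of each kind in (0, w_max)."""
    gm, pm = float("inf"), None
    prev = None
    for k in range(1, n):
        w = k * w_max / n
        L = loop(w)
        if prev is not None:
            # phase crossover: L crosses the negative real axis -> GM = 1/|L|
            if prev.imag < 0 <= L.imag and L.real < 0:
                gm = 1.0 / abs(L)
            # gain crossover: |L| falls through 1 -> PM = 180 deg + phase
            if abs(prev) > 1.0 >= abs(L):
                pm = 180.0 + math.degrees(cmath.phase(L))
        prev = L
    return gm, pm

# open loop K H(s)C(s) = 1/(s(s+1)^2) with K = 1
gm, pm = margins(lambda w: 1.0 / ((1j * w) * (1j * w + 1)**2))
print(gm, pm)  # gain margin near 2 (about 6 dB), phase margin near 21 deg
```

The negative-real-axis test avoids the ±180° wrap-around of the computed phase at the phase crossover.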
4.4 Phase margin and second-order systems
For a strict canonical second-order closed-loop system, the phase margin is related
to the damping ratio, ζ. We must be careful to make sure we have a true second-
order closed-loop system before using the following results. The second-order
closed-loop transfer function CLTF(s) is given by

CLTF(s) = ωn²/(s² + 2ζωn s + ωn²)     (4.8)

and the related second-order open-loop transfer function OLTF(s) is given by

OLTF(s) = ωn²/(s(s + 2ζωn)).     (4.9)
The phase margin for this open-loop transfer function in closed loop can then
be related to ζ as follows

Phase margin = tan⁻¹( 2ζ / √(√(1 + 4ζ⁴) − 2ζ²) )     (4.10)
             ≈ 100ζ for 0 < Phase margin < 70°     (4.11)
The gain crossover frequency ωgc for such a second-order system can also be
determined as

ωgc = ωn √(√(1 + 4ζ⁴) − 2ζ²)     (4.12)
This allows us to determine the required gain crossover for a specified percent
overshoot (since percent overshoot can be used to determine ζ).
The bandwidth ωBW of the system is given by

ωBW = ωn √(1 − 2ζ² + √(4ζ⁴ − 4ζ² + 2))     (4.13)
It can also be shown for the above system that ωgc ≤ ωBW ≤ 2ωgc .
Since the time constant τ = 1/ωBW , a higher bandwidth implies a faster response
(smaller time constant). It also allows us to specify ωBW in terms of τ and de-
termine the corresponding range of ωgc . We can use this and the previous result
to design a system in terms of phase margins and crossover gains using specified
design requirements.
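Eqs. (4.10)–(4.13) are straightforward to evaluate, and doing so also checks the 100ζ approximation and the bound ωgc ≤ ωBW ≤ 2ωgc. A minimal Python sketch (function names illustrative):

```python
import math

def pm_exact(zeta):
    # Eq. (4.10): PM = atan( 2*zeta / sqrt( sqrt(1+4*zeta^4) - 2*zeta^2 ) )
    return math.degrees(math.atan(
        2 * zeta / math.sqrt(math.sqrt(1 + 4 * zeta**4) - 2 * zeta**2)))

def wgc(zeta, wn):
    # Eq. (4.12): gain crossover frequency
    return wn * math.sqrt(math.sqrt(1 + 4 * zeta**4) - 2 * zeta**2)

def wbw(zeta, wn):
    # Eq. (4.13): bandwidth
    return wn * math.sqrt(1 - 2 * zeta**2 + math.sqrt(4 * zeta**4 - 4 * zeta**2 + 2))

zeta, wn = 0.5, 1.0
print(pm_exact(zeta), 100 * zeta)     # ~51.8 deg vs 50 deg approximation
print(wgc(zeta, wn), wbw(zeta, wn))   # and wgc <= wBW <= 2*wgc holds
```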
4.5 Root-locus
The basic characteristics of the transient response of a system are closely related
to pole locations. Pole locations depend on the value of the loop gain in a
simple feedback setting where the compensator is a constant gain. Hence, it
becomes
important to know how the closed-loop poles move in the complex s-plane as the
gain is varied. Once the desired poles are determined using previously discussed
techniques, the design problem then only involves determining the appropriate gain
to place the system poles at their desired locations. In many cases, a simple
gain will not suffice and we would need to add a more complex compensator.
Root-locus addresses the design problem when dealing with adjusting a simple
parameter (such as a gain but could be otherwise). It was developed by W. R.
Evans and involves plotting the roots of the characteristic equation of the closed-
loop system for all values (0 to ∞) of an adjustable system parameter that is
usually the loop gain. Root-locus means the locus of roots of the characteristic
equation.
Consider a simple feedback system with plant H(s) and compensator C(s). The
closed-loop transfer function G(s) is then given by
H(s)C(s)
G(s) = . (4.14)
1 + H(s)C(s)
The characteristic equation is the denominator of the above equated to 0 and it
satisfies the following
H(s)C(s) = −1. (4.15)
We can express this in polar form as

∠H(s)C(s) = ±180°,   |H(s)C(s)| = 1.     (4.16)
Only the values of s that satisfy the above satisfy the characteristic equation and
it follows that these are the poles of the closed-loop transfer function G(s).
Let H(s)C(s) have poles at p1, . . . , pn and zeroes at z1, . . . , zm. Then the
angle at a given point s may be determined using

∠H(s)C(s) = ∠(s − z1) + . . . + ∠(s − zm) − ∠(s − p1) − . . . − ∠(s − pn)     (4.17)

where the angle of an arbitrary complex number x is given by
∠x = tan⁻¹(Im x/Re x) and its magnitude by |x| = √((Re x)² + (Im x)²). These can
be used to determine ∠H(s)C(s) and |H(s)C(s)|.
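The angle and magnitude conditions can be checked numerically at a candidate point. A small Python sketch (with the hypothetical helper name `angle_and_gain`), using an open loop H(s)C(s) = 1/(s(s+1)²) as an illustration:

```python
import cmath
import math

def angle_and_gain(s, zeros, poles):
    """Check the root-locus conditions at a candidate point s: the angle of
    H(s)C(s) must be +/-180 deg, and the magnitude condition then gives the
    gain K = 1/|H(s)C(s)| that places a closed-loop pole at s."""
    HC = 1.0
    for z in zeros:
        HC *= (s - z)
    for p in poles:
        HC /= (s - p)
    return math.degrees(cmath.phase(HC)), 1.0 / abs(HC)

# H(s)C(s) = 1/(s(s+1)^2): the point s = j satisfies the angle condition, and
# the magnitude condition gives the gain K = 2 at that point
ang, K = angle_and_gain(1j, zeros=[], poles=[0, -1, -1])
print(ang, K)
```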
• Angles are measured counterclockwise from the positive real axis.
• Root-loci are symmetric about the real-axis, due to the presence of complex
conjugates.
• If the difference between the orders of the numerator and denominator of
H(s)C(s) is greater or equal to two, then the sum of the poles is a constant.
This implies that if some poles move toward the right, others have to move
toward the left.
• A slight change in pole-zero configurations may cause significant changes in
the root-locus.
• The patterns of the root-loci depend only on the relative separation of the
open-loop poles and zeros.
• If a pole in the root-locus plot moves into the RHP (implying instability)
only for a bounded interval of the gain, then the system is called
conditionally stable. This is not desirable in a control system since the
danger always exists of becoming unstable by either increasing or decreasing
the gain too much.
• If the construction of G(s) involved a cancellation of poles with zeros
because of the interaction between the plant and compensator, not all the roots
of the true characteristic equation will show (since we are dealing with a
reduced equation). To avoid this problem, add the canceled closed-loop pole
back to the closed-loop poles obtained from the root-locus plot of G(s). The
canceled pole is still a pole of the closed-loop system; it is only canceled
inside the loop.
Example For the feedback system shown in Fig. (4.3) with
H(s) = 1/(s(s + 1)²)     (4.18)
find the root-locus plot and the gain values that ensure the system is stable.
Figure 4.3: Simple gain feedback system.
The root-locus plot for this transfer function can be approximately
drawn by hand or plotted using the rlocus command in MATLAB as
shown in Fig. (4.4). The three poles and their movement with increasing
gain are indicated on the plot.
Figure 4.4: Root-locus plot for H(s).
Figure 4.5: Maximum stabilizing gain for closed-loop system (plot data tip:
Gain: 2, Pole: 0.000468 + 1i, Damping: −0.000468, Overshoot (%): 100,
Frequency (rad/sec): 1).
The branch moving left along the real axis from s = −1 never becomes
unstable. However, the branch starting at the marginally stable pole
at s = 0 and one of the branches from the double pole at s = −1 meet,
break away and form a complex pair that crosses the imaginary axis
into the RHP for a sufficiently large value of K. This maximum value
of the gain is shown in Fig. (4.5). Thus, for any gain value K > 2,
the system becomes unstable.
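The K > 2 boundary can be cross-checked with the Routh-Hurwitz criterion from Section 3.4: the closed-loop characteristic equation here is s³ + 2s² + s + K = 0, and for a cubic s³ + as² + bs + c the Routh conditions reduce to a, b, c > 0 and ab > c. A minimal Python sketch (helper name illustrative):

```python
def stable_gain(K):
    # CLCE for K/(s(s+1)^2) in unity feedback: s^3 + 2s^2 + s + K = 0.
    # Routh conditions for s^3 + a s^2 + b s + c: a, b, c > 0 and a*b > c,
    # i.e. here K > 0 and 2*1 > K.
    a, b, c = 2.0, 1.0, K
    return a > 0 and b > 0 and c > 0 and a * b > c

print(stable_gain(1.9), stable_gain(2.1))  # -> True False
```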
4.6 Nyquist stability criterion
The Nyquist criterion is a helpful tool for many kinds of systems, including
time-delay systems, and for assessing robust stability. Given a feedback system
with open-loop transfer function G(s), we want to determine closed-loop
stability/instability and also the number of unstable closed-loop poles.
Nyquist plots are obtained by plotting G(jω) for all ω on the complex plane (real
vs. imaginary). The direction of the plotted line is also indicated in terms of
increasing ω. These can usually be done with the aid of bode plots by reading off
values of magnitude/phase for a certain frequency from the bode plots and then
plotting these points in the complex plane using polar coordinates (i.e. magnitude
and direction/angle). Angles are measured in a counterclockwise manner from the
positive real axis. The Nyquist plot is symmetric about the real axis due to the
presence of complex conjugates.
The number of unstable closed-loop poles NCL is then given by the equation

NCL = NCW + NOL     (4.19)
where NOL is the number of open-loop unstable poles and NCW is the number of
clockwise encirclements of the point −1/K by the Nyquist plot. The variable K
is an independent gain on the plant and is usually considered 1 when the Nyquist
plot is not being used for design purposes. A counterclockwise encirclement is the
same as a negative clockwise encirclement.
Example Consider the simple feedback system shown in Fig. (4.6).
Given
Figure 4.6: Block diagram of simple single degree of freedom feedback system.

G(s) = 3/(s − 1),   C(s) = K/(s + 3)     (4.20)
where K is a gain to be determined, find the appropriate Nyquist plot
and comment on the stability of the closed loop system.
Since Nyquist plotting requires an independent gain on the plant, we
rearrange and manipulate the feedback system to obtain the closed-
loop transfer function as
GCL(s) = KH(s) = 3K/((s + 3)(s − 1)).     (4.21)
The corresponding block diagram is shown in Fig. (4.7). The bode
Figure 4.7: Block diagram of feedback system manipulated for generating the
Nyquist plot.
diagram is then plotted as in Fig. (4.8) and values at relevant points
are picked out in order to make the Nyquist plot. Typical points to get
the angle/magnitude of the frequency response are when the angle is
at multiples of 90◦ . A more complex Nyquist plot would require more
points to get an accurate representation. The corresponding Nyquist
plot is shown in Fig. (4.9) with the directions put in according to
increasing frequency.
It is clear from the expression of H(s) that we have 1 unstable open-
loop pole at s = 1, i.e. NOL = 1. From the Nyquist plot we have 1
Figure 4.8: Bode plot of closed-loop system.
Figure 4.9: Nyquist plot of closed-loop system.
counterclockwise encirclement, or −1 clockwise encirclements, of points −1/K
with −1 < −1/K < 0, i.e. NCW = −1 there, while points with −1/K < −1 are not
encircled, i.e. NCW = 0. It follows that

NCL = NCW + NOL = −1 + 1 = 0   for −1 < −1/K < 0
NCL = NCW + NOL =  0 + 1 = 1   for −1/K < −1     (4.22)

and thus

NCL = 0 if K > 1,   NCL = 1 if 0 < K < 1.     (4.23)
In other words, any gain greater than 1 ensures stability of the closed-
loop system.
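The Nyquist conclusion can be cross-checked directly: the closed-loop characteristic equation is (s + 3)(s − 1) + 3K = s² + 2s + (3K − 3) = 0, whose roots are available in closed form. A minimal Python sketch (helper names illustrative):

```python
import cmath

def closed_loop_poles(K):
    # CLCE: (s+3)(s-1) + 3K = s^2 + 2s + (3K - 3) = 0, solved in closed form
    a, b, c = 1.0, 2.0, 3.0 * K - 3.0
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def is_stable(K):
    # stable iff both closed-loop poles lie strictly in the LHP
    return all(p.real < 0 for p in closed_loop_poles(K))

print(is_stable(1.5), is_stable(0.5))  # -> True False
```

Both poles have negative real part exactly when 3K − 3 > 0, i.e. K > 1, matching the encirclement count of Eq. (4.23).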
When there is a pole at the origin or anywhere else on the imaginary axis, the
Nyquist plot will become singular which leads to a problem of counting the encir-
clements. This problem is countered by evaluating G(s) at a point which is not
singular but is very close to the singular point. If the singular point is given by ωs j,
then the point close to it is usually taken to be ε + ωs j, where ε is an arbitrarily
small positive real number as shown in Fig. (4.10).
Figure 4.10: s-plane showing path taken to determine how symmetric singularities
join when making Nyquist plots.
This point is then located on the plot and is made arbitrarily large in magnitude
(tending to ∞) which helps indicate in which direction the singular points join at
infinite distance.
Example Consider a feedback system similar to the one shown in
Fig. (4.7) with closed-loop transfer function
GCL(s) = KH(s) = K/(s(s + 1)²)     (4.24)
where K is a gain to be determined. Find the appropriate Nyquist plot.
The bode diagram is then plotted as in Fig. (4.11) and values at rele-
vant points are picked out in order to make the Nyquist plot. Typical
points to get the angle/magnitude of the frequency response are when
the angle is at multiples of 90◦ . A more complex Nyquist plot would
require more points to get an accurate representation.
20
0
Magnitude (dB)
−20
−40
−60
−90
−135
Phase (deg)
−180
−225
−270
−1 0 1
10 10 10
Frequency (rad/sec)
Figure 4.11: Bode plot of closed-loop system.
Since this transfer function has a pole on the imaginary axis at s = 0,
we do not evaluate GCL(j0) but instead evaluate GCL(ε + j0),
where ε is an arbitrarily small positive number. This is in order to
determine the asymptotic direction of the closed contour of the Nyquist
plot. The corresponding Nyquist plot with directions is shown in Fig. (4.12).
Figure 4.12: Nyquist plot of closed-loop system.
The stability of the closed-loop system may be determined as before
using the Nyquist plot.
4.7 Feedback with disturbances
Consider a single degree of freedom feedback system (shown below) with input
disturbance Di (disturbance between the compensator C(s) output U (s) and the
input of the plant G(s)), output disturbance Do (disturbance in the plant output)
and measurement noise/disturbance Dm . Let the system input be R(s).
Figure 4.13: Block diagram of a single degree-of-freedom feedback system with
disturbances.
For such a system, we can write the following two equations

Y(s) = T(s)R(s) + S(s)Do(s) + Si(s)Di(s) − T(s)Dm(s)     (4.25)
U(s) = Su(s)R(s) − Su(s)Do(s) − T(s)Di(s) − Su(s)Dm(s)     (4.26)
where S(s) is the sensitivity function, T (s) is the complementary sensitivity func-
tion, Si (s) is the input-disturbance sensitivity function and Su (s) is the control
sensitivity function. These are given by the following relations
T(s) = G(s)C(s)/(1 + G(s)C(s))     (4.27)
S(s) = 1/(1 + G(s)C(s))     (4.28)
Si(s) = G(s)/(1 + G(s)C(s)) = G(s)S(s)     (4.29)
Su(s) = C(s)/(1 + G(s)C(s)) = C(s)S(s)     (4.30)
We can use the above to derive transfer functions between any input/output pair.
Furthermore, we can independently assess the stability of each input/output pair
by evaluating the relevant transfer function.
Example Find the closed-loop sensitivity functions for a closed-loop
system with plant transfer function
G(s) = 3/((s + 4)(−s + 2))     (4.31)

and compensator

C(s) = (−s + 2)/s.     (4.32)

Also comment on the stability of the system.
Since G(s) = B(s)/A(s) and C(s) = P(s)/L(s),

B(s) = 3,   A(s) = (s + 4)(−s + 2),   P(s) = −s + 2,   L(s) = s     (4.33)
which gives us, after cancelling the common factor (−s + 2) in
AL + BP = (−s + 2)(s² + 4s + 3) where possible,

T(s) = GC/(1 + GC) = BP/(AL + BP) = 3/(s² + 4s + 3)     (4.34)
S(s) = 1/(1 + GC) = AL/(AL + BP) = s(s + 4)/(s² + 4s + 3)     (4.35)
Su(s) = SC = (s + 4)(−s + 2)/(s² + 4s + 3)     (4.36)
Si(s) = SG = 3s/((s² + 4s + 3)(−s + 2)).     (4.37)
All the above closed-loop transfer functions are stable except for the
input-disturbance sensitivity Si(s), since it has a RHP pole at s = 2.
Thus, the input-disturbance to output path is not stable and therefore
the entire system is unstable.
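The instability found above can be traced to the pole-zero cancellation between C(s) and G(s): the full characteristic polynomial Acl = AL + BP retains the cancelled RHP pole at s = 2 as a root. A Python sketch with minimal polynomial helpers (coefficient lists in descending powers of s; helper names are illustrative):

```python
def polymul(p, q):
    # product of two polynomials given as coefficient lists
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def polyadd(p, q):
    n = max(len(p), len(q))
    p = [0.0] * (n - len(p)) + list(p)
    q = [0.0] * (n - len(q)) + list(q)
    return [a + b for a, b in zip(p, q)]

def polyval(p, s):
    # Horner evaluation
    r = 0.0
    for c in p:
        r = r * s + c
    return r

# G = B/A with B = 3, A = (s+4)(-s+2); C = P/L with P = -s+2, L = s
B = [3.0]
A = polymul([1.0, 4.0], [-1.0, 2.0])
P = [-1.0, 2.0]
L = [1.0, 0.0]
Acl = polyadd(polymul(A, L), polymul(B, P))  # Acl = AL + BP
print(polyval(Acl, 2.0))  # -> 0.0: s = 2 is still a root of Acl, so the
                          # cancelled RHP pole remains a closed-loop pole
```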
As can be seen, the complementary sensitivity function T(s) is the basic
transfer function from the input R(s) to the output Y(s). Thus, for good
tracking we need T(s) → 1. But as can be seen from the first equation for Y(s),
this amplifies the measurement noise Dm. The sensitivity function S(s), on the
other hand, is the transfer function from the output disturbance Do(s) to the
output Y(s). Moreover, S(s) is directly related to the input-disturbance
sensitivity Si(s) and the control sensitivity Su(s), both of which we want to
keep as low as possible. Therefore, we want S(s) → 0 and T(s) → 1. These two
aims are consistent; in fact the two functions are constrained by the easily
shown identity

T(s) + S(s) = 1     (4.38)
But as mentioned earlier, T (s) → 1 amplifies measurement noise. If T (s) = 1
(and thus S(s) = 0) for all frequencies ω, then the system has infinite bandwidth
(fast response). However, it amplifies the measurement noise at all frequencies
which is even more noticeable at high frequencies since measurement noise is
usually significant at higher frequencies. Therefore, having a high bandwidth is
not necessarily something desirable from a control design standpoint.
Infinite bandwidth leads to high measurement noise because T (s) = 1 ∀ ω.
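The constraint of Eq. (4.38) holds at every frequency, and evaluating S and T numerically also illustrates the trade-off: |T| ≈ 1 at low frequency (good tracking) while |S| ≈ 1 at high frequency. A Python sketch with an illustrative plant G(s) = 1/(s + 1) and proportional compensator C(s) = 10 (both assumed for the example):

```python
def S_and_T(w, G, C):
    # sensitivity and complementary sensitivity at frequency w,
    # Eqs. (4.27)-(4.28)
    Lw = G(1j * w) * C(1j * w)
    S = 1.0 / (1.0 + Lw)
    T = Lw / (1.0 + Lw)
    return S, T

G = lambda s: 1.0 / (s + 1.0)  # assumed plant
C = lambda s: 10.0             # assumed proportional compensator
for w in (0.1, 1.0, 10.0, 100.0):
    S, T = S_and_T(w, G, C)
    print(w, abs(S), abs(T), abs(S + T))  # S + T = 1 at every frequency
```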
In real systems, T (s) cannot be 1 for all frequencies and rolls off at some finite
bandwidth frequency.
• The bandwidth is defined as ωBW = 1/τ where τ is the time constant of the
slowest LHP pole or in an alternate definition is the 3 dB point on a bode
magnitude plot.
• When the true plant transfer function is not available, we proceed with our
analysis using the nominal plant transfer function G0 (s). In this case the
sensitivity functions remain the same but with G0 (s) instead of G(s). Thus
we have T0 (s), S0 (s), Si0 (s) and Su0 (s).
• For a plant G(s) = B(s)/A(s) and a compensator C(s) = P(s)/L(s), the
closed-loop characteristic polynomial Acl is, as before, the denominator of all
the sensitivity functions above, and the closed-loop characteristic equation is

Acl = AL + BP = 0     (4.39)
• We need all the sensitivity functions to be stable for stability, i.e. we need
all the roots of Acl to be in the LHP.
• The polynomial Acl captures all the closed-loop poles only if there is no
cancellation of unstable poles between C(s) and G(s). Canceling unstable poles
is not recommended: since we never have a perfect model, the intended
cancellation fails and we end up with a nonminimum-phase zero in addition to
the unstable pole. The result is a RHP zero and a RHP pole very close to each
other, which leads to severe performance and robustness problems, as will be
seen later.
4.8 Trends from Bode & Poisson’s integral constraints
Consider the case when we have no open-loop RHP poles or zeros. Then for
stability, the following is satisfied

∫₀^∞ ln|S(jω)| dω = 0 for τd > 0,   ∫₀^∞ ln|S(jω)| dω = −kπ/2 for τd = 0     (4.40)
where k = lims→∞ sG(s)C(s) and τd is the time delay in the system. This
is the first of Bode’s integral constraints on sensitivity. The next equation for
complementary sensitivity also needs to be satisfied for stability
∫₀^∞ ln|T(j/ω)| d(1/ω) = πτd − π/(2kr)     (4.41)

where kr = lim_{s→0} sG(s)C(s). Since an integral is the area under a graph,
the above constraints signify how much of the graphs of ln|T(jω)| and ln|S(jω)|
must lie above or below the 0 axis. Upon inspection of the equations, it can be
seen that a time delay makes the integrals more positive, forcing both |T(jω)|
and |S(jω)| to rise further above unity over some frequency range in order to
satisfy the constraints. This is undesirable as mentioned before, and therefore
time delays are undesirable.