Introduction to Control Topics, Proportional Controller
Design, Compensator Style Controller Design
Instructed by:
Marcello Napolitano
Figures and Text by:
Andrew Wilhelm
Table of Contents
Chapter 1: Introduction to Control and the Open Loop Transfer Function
Chapter 2: Laplace Transformation
  2.1 Real Distinct Roots
  2.2 Real Distinct Roots with Multiplicity
  2.3 Complex Conjugate Roots
Chapter 3: Mathematical Modeling of Physical Systems
  3.1 Translational System
    3.1.1 Single Mass System
    3.1.2 Multiple Mass System
  3.2 Rotational Systems
Chapter 4: Analysis of Time Response for Different Systems
  4.1 Generic First Order System
    4.1.1 Specifications of a Generic First Order System
    4.1.2 Non-Generic First Order System
  4.2 Generic Second Order System
    4.2.1 Specifications of a Generic Second Order System
    4.2.2 Non-Generic Second Order System
    4.2.3 Blended Systems
Chapter 5: Steady State Error and Stability Analysis
  5.1 Steady State Error Constants
    5.1.1 Type Zero System
    5.1.2 Type One System
    5.1.3 Type Two System
  5.2 Routh-Hurwitz Stability Criterion
    5.2.1 Routh-Hurwitz Array
    5.2.2 Stability of a Closed Loop Response
Chapter 6: Complex Block Diagram Reduction
  6.1 Block Diagram Rules
  6.2 Reduction Examples
Chapter 7: Controller Design using Root-Locus Method
  7.1 Manual Construction of Root Locus
  7.2 Computer Construction of Root Locus
Chapter 8: Compensator Design
  8.1 Phase Lead Compensator
  8.2 Proportional + Integral (PI) Compensator
Chapter 1: Introduction to Control and
the Open Loop Transfer Function
To begin analysis of a control problem, the idea of an open loop transfer function must be
formulated. The idea is to relate a specific input to the output of the system. This is done by a
transfer function. This transfer function, in essence, represents the system that the input is
applied to. Once applied, this system will produce an output based upon that specific transfer
function. This relation is shown below.
Figure 1: Open Loop System
The transfer function contained is represented mathematically as shown in the following
formula.
Transfer Function = L[Output] / L[Input] = G(s) = Num(s) / Den(s)
From here it is evident that the transfer function is made up of a numerator and a denominator, both of which come from linear, constant coefficient differential equations. These equations are solved in the s-domain using the Laplace transformation, so the transfer function is written in the s-domain, switching back and forth between domains when going from input to output. When written in the s-domain, the numerator and denominator essentially define the system. The numerator depends on the initial conditions of the system and yields the zeros of the system. The denominator is just as important, if not more so. The denominator of the transfer function is known as the characteristic equation of the system; it is the equation obtained from the differential equation without involving the initial conditions. It is the roots of the characteristic equation that ultimately define how the system will behave and what type of response is expected. These roots are known as the
poles of the system.
The system described above only shows the input/output relationship of a system. It does not involve any kind of control. In order to control the system, a more complex arrangement must be generated. The key to controlling a system is feedback from the output back to the input; control is impossible without it. This is done by closing the loop on the system, producing a closed loop system that provides feedback. This relationship is shown in the following figure.
Figure 2: Closed Loop System
As seen in the figure above, feedback flows from the output channel back to a summing junction at the input. The controller is located in the control loop in front of the system, which is represented by a transfer function. This controller adjusts the signal entering the system by a given value to yield more desirable results. The primary goal of automatic control is to design the controller placed in front of the system, also known as the gain. In simple cases this gain is a constant value, but in more complex control systems the gain is itself another transfer function.
When designing this controller, two key factors must be taken into account. First, the stability of the system is the most important consideration. The gain given to the system must not be such that the system is driven unstable. Instability can be caused by a number of factors, including too high a constant gain value, along with dominance issues caused by the gain acting on the system. The second factor is how the system will respond and whether or not that response is within the set of specifications. These specifications are provided by the customer and typically come in two forms: the transient, or immediate, response and the steady state response. When all of these are taken into account, the proper type of controller can be selected and then designed.
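The effect of a constant gain can be made concrete with a short numerical sketch. The example below is not from the original notes: it assumes Python with numpy/scipy and an arbitrary example plant G(s) = 1/(s^2 + 3s + 2), and forms the unity-feedback closed loop with a proportional gain K.

import numpy as np
from scipy import signal

num = np.array([1.0])            # Num(s) of an assumed example plant
den = np.array([1.0, 3.0, 2.0])  # Den(s) = s^2 + 3s + 2 (assumed example)
K = 10.0                         # constant gain placed in front of the plant

# Unity feedback: T(s) = K*Num(s) / (Den(s) + K*Num(s))
cl_num = K * num
cl_den = np.polyadd(den, K * num)

t = np.linspace(0.0, 10.0, 500)
t, y = signal.step((cl_num, cl_den), T=t)
print("closed-loop steady state ≈", y[-1])   # ≈ K/(2 + K) ≈ 0.833 for this example

Raising K in this sketch speeds up the response but also changes the steady state value and, for other plants, can push the closed loop toward instability, which is exactly the trade-off described above.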
After the design aspects of a controller are understood, the different types of dynamic
systems that are controlled must be defined. These dynamic systems are typically broken into 5
categories and they can be represented in the figure below.
Figure 3: Classifications of Dynamic System
The systems on the left column of the figure above are much simpler than those on the
right. When dealing with more complex control systems, some attributes associated with the
right column must be taken into account. An example of a time varying system would be an
aircraft in flight. As the aircraft progresses along its flight path, fuel is burned. This means the mass of the aircraft varies with respect to time, making it a much more complicated system to
control. Along with that, an unknown system uses prior knowledge to predict system behavior in
the future rather than using a defined system model. Finally, a system with noise is much more
complicated. This is important because all systems have some sort of noise or interference
within the system. A noise free system is the ideal system and is never fully achieved.
Only the systems on the left side will be used here. These are the simplest systems and must be understood before more complex schemes can be created. From here the systems must be modeled and then converted into transfer functions in the s-domain. Once this is done, analysis will be performed to find the specifications of the systems and to see how changes in the systems affect those specifications. After the specifications are understood, controller design is described.
Chapter 2: Laplace Transformation
The modeling of control systems must begin with the Laplace transformation. The primary goal of modeling is to derive the equation of motion for the system. Since systems are governed by differential equations, it is necessary to reduce these differential equations to a single algebraic expression. To do this a Laplace transformation is carried out. A Laplace transformation can be used on constant coefficient, linear differential equations. When applied, the differentials are reduced to an algebraic expression. Along with this, the independent variable, typically time (t), switches to a new domain. This domain is the s-domain. A Laplace transformation essentially maps the differential equation onto the s-domain, where it becomes an algebraic expression rather than a set of complex differentials. This is what makes the Laplace transformation ideal for mapping the equations of motion for control systems.
To apply this Laplace transformation, a few steps must be performed. The equations of motion for the system are first written as differential equations in the t-domain. The Laplace transformation is then performed on these equations. Once in the s-domain, a partial fraction expansion is performed on the algebraic equations of motion. After this expansion is done, the inverse Laplace transformation is carried out, yielding the final solution in the t-domain. This process is represented in the figure below.
Figure 4: Laplace Transformation of Systems
Along with the method of applying a Laplace transformation, several properties of
Laplace must be defined.
1) Linearity
𝐿[ 𝑐1 𝑓1( 𝑡) + 𝑐2 𝑓2 ( 𝑡)] = 𝐿[ 𝑐1 𝑓1( 𝑡)] + 𝐿[ 𝑐2 𝑓2( 𝑡)] = 𝑐1 𝐹1 ( 𝑠) + 𝑐2 𝐹2( 𝑠)
This property states that the Laplace transform of a sum of scaled functions is the sum of the Laplace transforms of the individual scaled functions.
2) Derivatives of a Function
𝐿[𝑓̇( 𝑡)] = 𝑠𝐹( 𝑠) − 𝑓(0)
𝐿[𝑓̈( 𝑡)] = 𝑠2
𝐹( 𝑠) − 𝑠𝑓(0) − 𝑓̇(0)
𝐿[ 𝑓 𝑛( 𝑡)] = 𝑠 𝑛
𝐹( 𝑠) − 𝑠 𝑛−1
𝑓(0)… − 𝑓 𝑛−1(0)
The second property describes the most useful tool of the Laplace transformation. This
describes how the Laplace transformation takes differentials and transforms them into algebraic
polynomials. This is the primary function of a Laplace transformation
3) Integral
L[∫ f(t) dt] = F(s)/s
This property shows how the Laplace of an integral is expressed in the s-domain.
4) Initial Value Theorem
f(0) = lim (t→0) f(t) = lim (s→∞) s·F(s)
The initial value theorem is useful for finding the starting value of the function without having to perform the full partial fraction expansion. This theorem only holds if the function is continuous over the domain.
5) Final Value Theorem
f_ss = lim (t→∞) f(t) = lim (s→0) s·F(s)
The final property of Laplace allows for the solution of the steady state value without having to perform the partial fraction expansion. For this to hold, the function must be stable in the s-domain. This, in conjunction with the initial value theorem, is very useful when trying to define how a system behaves without solving the complete equations of motion.
After the properties of Laplace are understood, the Laplace transformations of some simple functions can be listed. There are many Laplace transformation pairs, but the most common ones are given in the table below.
Table 1: Laplace Transformation Table for Common Functions

Function             Time Domain f(t)    s-Domain F(s)
Unit Impulse         δ(t)                1
Unit Step            1                   1/s
Ramp                 t                   1/s^2
Nth Power            t^n                 n!/s^(n+1)
Exponential Decay    e^(-at)             1/(s+a)
Ramped Exponential   t·e^(-at)           1/(s+a)^2
Sine                 sin(ωt)             ω/(s^2+ω^2)
Cosine               cos(ωt)             s/(s^2+ω^2)
From here an example problem is worked to form a transfer function from an ordinary,
linear, constant coefficient differential equation. This is shown in the following steps.
Example #1
𝑥̈( 𝑡) + 𝑎𝑥̇( 𝑡) + 𝑏𝑥( 𝑡) = 1
𝑥(0) = 𝑐1 , 𝑥̇(0) = 𝑐2
1) Use the second property of Laplace on the differentials in the equation
𝐿[ 𝑥̈( 𝑡)] = 𝑠2
𝑋( 𝑠) − 𝑠𝑥(0) − 𝑥̇(0) = 𝑠2
𝑋( 𝑠) − 𝑠𝑐1 − 𝑐2
𝐿[ 𝑥̇( 𝑡)] = 𝑠𝑋( 𝑠) − 𝑥(0) = 𝑠𝑋( 𝑠) − 𝑐1
𝐿[ 𝑥( 𝑡)] = 𝑋( 𝑠)
𝐿[1] =
1
𝑠
2) Substitute these expressions back into the original differential equation
(s^2·X(s) − s·c1 − c2) + a(s·X(s) − c1) + b·X(s) = 1/s
3) Separate terms that are constant and those that contain a function
s^2·X(s) + a·s·X(s) + b·X(s) = 1/s + s·c1 + c2 + a·c1
s^2·X(s) + a·s·X(s) + b·X(s) = (c1·s^2 + c2·s + a·c1·s + 1)/s
4) Pull the function out and set all other terms on the other side of the equation
X(s)·[s^2 + a·s + b] = (c1·s^2 + c2·s + a·c1·s + 1)/s
X(s) = (c1·s^2 + c2·s + a·c1·s + 1)/(s·[s^2 + a·s + b]) = Num(s)/Den(s)
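As a cross-check of Example #1, the same algebra can be reproduced symbolically. The sketch below is an illustration added here, not part of the original notes; it assumes the sympy library, and the symbol Xs stands in for X(s).

import sympy as sp

s, a, b, c1, c2 = sp.symbols('s a b c1 c2')
Xs = sp.symbols('Xs')   # stands in for X(s)

# Apply the derivative property: L[x''] = s^2 Xs - s*c1 - c2, L[x'] = s Xs - c1, L[1] = 1/s
eq = sp.Eq((s**2*Xs - s*c1 - c2) + a*(s*Xs - c1) + b*Xs, 1/s)
Xs_solved = sp.solve(eq, Xs)[0]
print(sp.simplify(Xs_solved))
# -> (c1*s**2 + (a*c1 + c2)*s + 1)/(s*(s**2 + a*s + b)), matching the result above up to rearrangement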
Once the transfer function is acquired, the more complicated partial fraction expansion
must be performed. The expansion is dependent on the nature of the roots in the denominator.
There are three cases that these roots can follow. These cases are listed below in order of
increasing complexity.
2.1 Real Distinct Roots
The first case is the case of real distinct roots. This is when the roots of the denominator
are only made up of real numbers that are non-repeating. In the s-domain these roots are
described as shown in the following figure.
Figure 5: Real Distinct Roots
As shown in the figure, these roots lie purely on the real axis. This is the simplest case of
the Laplace transformation. An example using real distinct roots is described next.
Example #2
𝑦̈( 𝑡) + 7𝑦̇( 𝑡) + 10𝑦( 𝑡) = 2
𝑦(0) = 1 , 𝑦̇(0) = 2
Step #1
𝐿[ 𝑦̈( 𝑡)] = 𝑠2
𝑌( 𝑠) − 𝑠𝑦(0) − 𝑦̇(0) = 𝑠2
𝑌( 𝑠) − 𝑠 − 2
𝐿[ 𝑦̇( 𝑡)] = 𝑠𝑌( 𝑠) − 𝑦(0) = 𝑠𝑌( 𝑠) − 1
𝐿[ 𝑦( 𝑡)] = 𝑌( 𝑠)
𝐿[2] =
2
𝑠
Step #2
(s^2·Y(s) − s − 2) + 7(s·Y(s) − 1) + 10·Y(s) = 2/s
Step #3
s^2·Y(s) + 7s·Y(s) + 10·Y(s) = 2/s + s + 7 + 2
s^2·Y(s) + 7s·Y(s) + 10·Y(s) = (s^2 + 9s + 2)/s
Step #4
Y(s)·[s^2 + 7s + 10] = (s^2 + 9s + 2)/s
Y(s) = (s^2 + 9s + 2)/(s·[s^2 + 7s + 10]) = (s^2 + 9s + 2)/(s(s + 2)(s + 5))
The steps described above yield the transfer function defining the system. Next, the partial fraction expansion must be carried out. This example shows a function with real distinct roots, as displayed in the following figure.
Figure 6: Pole Locations of Example #2
To begin the partial fraction expansion, the roots of the denominator must be split up into separate fractions. Each of these depends on a constant coefficient located in the numerator. This can be seen below.
Y(s) = k1/s + k2/(s + 2) + k3/(s + 5)
From here each coefficient must be solved for. This is done by multiplying the entire function by the factor containing the root and then evaluating the result at the value of that root, the value that would otherwise drive the function undefined (i.e., make the specific partial fraction divide by zero). The solution for the partial fraction coefficients for this example is shown as follows.
k1 = [s·Y(s)]|s=0 = [(s^2 + 9s + 2)/((s + 2)(s + 5))]|s=0 = ((0)^2 + 9(0) + 2)/(((0) + 2)((0) + 5)) = 2/(2·5) = 2/10 = 0.2
k2 = [(s + 2)·Y(s)]|s=−2 = [(s^2 + 9s + 2)/(s(s + 5))]|s=−2 = ((−2)^2 + 9(−2) + 2)/((−2)((−2) + 5)) = (4 − 18 + 2)/((−2)(3)) = −12/−6 = 2
k3 = [(s + 5)·Y(s)]|s=−5 = [(s^2 + 9s + 2)/(s(s + 2))]|s=−5 = ((−5)^2 + 9(−5) + 2)/((−5)((−5) + 2)) = (25 − 45 + 2)/((−5)(−3)) = −18/15 = −1.2
After these coefficients are solved for, they are substituted back into the partial fraction
expansion of the function.
Y(s) = 0.2·(1/s) + 2·(1/(s + 2)) − 1.2·(1/(s + 5))
Finally, this function in the s-domain is rewritten back in the t-domain by taking the
inverse Laplace transformation.
y(t) = 0.2 + 2·e^(−2t) − 1.2·e^(−5t)
To verify the results of the partial fraction expansion and the overall Laplace transformation, the final function is evaluated at the initial condition. If the initial condition checks out, then the solution is correct.
y(0) = 0.2 + 2·e^(−2(0)) − 1.2·e^(−5(0)) = 0.2 + 2(1) − 1.2(1) = 1
Once the solution is verified, it is possible to plot the time order response of the given
transfer function. This time order response is shown in the next figure.
Figure 7: Time Order Response for Example #2
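The expansion in Example #2 can also be verified with a few lines of symbolic computation. The sketch below assumes sympy and is an illustration added here, not part of the original notes.

import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)
Y = (s**2 + 9*s + 2) / (s*(s + 2)*(s + 5))

print(sp.apart(Y, s))
# -> 1/(5*s) + 2/(s + 2) - 6/(5*(s + 5)), i.e. the coefficients 0.2, 2 and -1.2 found above

y_t = sp.inverse_laplace_transform(Y, s, t)
print(sp.simplify(y_t))
# -> 1/5 + 2*exp(-2*t) - 6*exp(-5*t)/5 (sympy may include a Heaviside(t) factor, which is 1 for t > 0)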
2.2 Real Distinct Roots with Multiplicity
The next case taken into account is when the roots of the denominator are both real and
repeating. This is shown in the s-domain by the following figure.
Figure 8: Roots with Multiplicity
As shown in the figure, these roots lie purely on the real axis but occur at the same
locations. This is the next hardest case of the Laplace transformation. An example using real
distinct roots with multiplicity is described next.
Example #3
𝑦̈( 𝑡) + 10𝑦̇( 𝑡) + 25𝑦( 𝑡) = 𝑒−5𝑡
𝑦(0) = 1 , 𝑦̇(0) = 1
Step #1
𝐿[ 𝑦̈( 𝑡)] = 𝑠2
𝑌( 𝑠) − 𝑠𝑦(0) − 𝑦̇(0) = 𝑠2
𝑌( 𝑠) − 𝑠 − 1
𝐿[ 𝑦̇( 𝑡)] = 𝑠𝑌( 𝑠) − 𝑦(0) = 𝑠𝑌( 𝑠) − 1
𝐿[ 𝑦( 𝑡)] = 𝑌( 𝑠)
𝐿[ 𝑒−5𝑡] =
1
𝑠 + 5
Step #2
(s^2·Y(s) − s − 1) + 10(s·Y(s) − 1) + 25·Y(s) = 1/(s + 5)
Step #3
s^2·Y(s) + 10s·Y(s) + 25·Y(s) = 1/(s + 5) + s + 1 + 10
s^2·Y(s) + 10s·Y(s) + 25·Y(s) = 1/(s + 5) + s(s + 5)/(s + 5) + 11(s + 5)/(s + 5) = (s^2 + 16s + 56)/(s + 5)
Step #4
Y(s)·[s^2 + 10s + 25] = (s^2 + 16s + 56)/(s + 5)
Y(s) = (s^2 + 16s + 56)/((s + 5)·[s^2 + 10s + 25]) = (s^2 + 16s + 56)/((s + 5)(s + 5)(s + 5)) = (s^2 + 16s + 56)/(s + 5)^3
This function has a repeated root in the denominator. These are seen in the following
figure.
Figure 9: Location of Roots for Example #3
This makes the partial fraction expansion that of the second case. The partial fraction
expansion for this repeated root example is shown below.
Y(s) = k11/(s + 5) + k12/(s + 5)^2 + k13/(s + 5)^3
From this partial fraction expansion it is evident that the coefficients will be slightly different from the distinct root case, although the process used to find them is very similar.
k13 = [(s + 5)^3·Y(s)]|s=−5 = [s^2 + 16s + 56]|s=−5 = (−5)^2 + 16(−5) + 56 = 25 − 80 + 56 = 1
k12 = (1/1!)·d/ds[(s + 5)^3·Y(s)]|s=−5 = d/ds[s^2 + 16s + 56]|s=−5 = [2s + 16]|s=−5 = −10 + 16 = 6
k11 = (1/2!)·d^2/ds^2[(s + 5)^3·Y(s)]|s=−5 = (1/2)·d/ds[2s + 16]|s=−5 = (1/2)·(2) = 1
As shown above, the higher coefficients are worked out by taking derivatives. Along with the derivative, a factorial must divide the result depending on which coefficient is being solved for. After the coefficients are found, they are substituted back into the expression and the inverse Laplace transformation is applied, just as in case #1.
Y(s) = 1·(1/(s + 5)) + 6·(1/(s + 5)^2) + 1·(1/(s + 5)^3)
y(t) = e^(−5t) + 6t·e^(−5t) + (t^2/2)·e^(−5t)
To verify the results of the partial fraction expansion and the overall Laplace transformation, the final function is evaluated at the initial condition. If the initial condition checks out, then the solution is correct.
y(0) = 1·e^(−5(0)) + 6·(0)·e^(−5(0)) + ((0)^2/2)·e^(−5(0)) = 1 + 0 + 0 = 1
Once the solution is verified, it is possible to plot the time order response of the given
transfer function. This time order response is shown in the next figure.
Figure 10: Time Order Response for Example #3
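A quick symbolic check of the repeated-root coefficients is possible as well; the sketch below again assumes sympy and is an added illustration, not part of the original notes.

import sympy as sp

s = sp.symbols('s')
Y = (s**2 + 16*s + 56) / (s + 5)**3

print(sp.apart(Y, s))
# -> 1/(s + 5) + 6/(s + 5)**2 + 1/(s + 5)**3, i.e. k11 = 1, k12 = 6, k13 = 1

# the derivative formula for k12: d/ds[(s + 5)^3 Y(s)] evaluated at s = -5
k12 = sp.diff(s**2 + 16*s + 56, s).subs(s, -5)
print(k12)   # -> 6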
2.3 Complex Conjugate Roots
The final case discussed is the case of complex conjugate roots. This is where the roots
of the denominator have both a real and an imaginary component. The roots always act in pairs, one on the positive and one on the negative side of the imaginary axis. This is described by the figure
below.
Figure 11: Complex Conjugate Roots
The figure above shows how the roots act in pairs and act off the real axis. Roots of this
nature are the most complicated to deal with due to the complex number. A numerical example
involving complex conjugate roots is described as follows.
Example #4
𝑦̈( 𝑡) + 3𝑦̇( 𝑡) + 11𝑦( 𝑡) = 5
𝑦(0) = 2 , 𝑦̇(0) = 4
Step #1
𝐿[ 𝑦̈( 𝑡)] = 𝑠2
𝑌( 𝑠) − 𝑠𝑦(0) − 𝑦̇(0) = 𝑠2
𝑌( 𝑠) − 2𝑠 − 4
𝐿[ 𝑦̇( 𝑡)] = 𝑠𝑌( 𝑠) − 𝑦(0) = 𝑠𝑌( 𝑠) − 2
𝐿[ 𝑦( 𝑡)] = 𝑌( 𝑠)
L[5] = 5/s
Step #2
(s^2·Y(s) − 2s − 4) + 3(s·Y(s) − 2) + 11·Y(s) = 5/s
Step #3
s^2·Y(s) + 3s·Y(s) + 11·Y(s) = 5/s + 2s + 4 + 6
s^2·Y(s) + 3s·Y(s) + 11·Y(s) = (2s^2 + 10s + 5)/s
Step #4
Y(s)·[s^2 + 3s + 11] = (2s^2 + 10s + 5)/s
Y(s) = (2s^2 + 10s + 5)/(s·[s^2 + 3s + 11])
Since the quadratic in the denominator is not factorable, the quadratic formula must be used to find its roots.
s = (−3 ± √(3^2 − 4(1)(11)))/(2(1)) = (−3 ± √−35)/2 = −3/2 ± (√35/2)i
The roots from the quadratic equation are then put back into the function.
Y(s) = (2s^2 + 10s + 5)/(s·(s + 3/2 + (√35/2)i)·(s + 3/2 − (√35/2)i))
This function has a complex conjugate pair in the denominator. The roots of this
equation are shown in the following figure.
Figure 12: Location of Roots for Example #4
Now the partial fraction expansion of the function is performed. Since the complex
conjugate pair is related, the coefficients are also a conjugate pair.
Y(s) = k1/s + k2/(s + 3/2 + (√35/2)i) + k2*/(s + 3/2 − (√35/2)i)
The first coefficient solved for is that of the real distinct root.
k1 = [s·Y(s)]|s=0 = [(2s^2 + 10s + 5)/(s^2 + 3s + 11)]|s=0 = (2(0)^2 + 10(0) + 5)/((0)^2 + 3(0) + 11) = 5/11 ≈ 0.4545
The next step is to move to the complex conjugate roots.
k2 = [(s + 3/2 + (√35/2)i)·Y(s)]|s=−3/2−(√35/2)i
   = [(2s^2 + 10s + 5)/(s·(s + 3/2 − (√35/2)i))]|s=−3/2−(√35/2)i
   = [2(−3/2 − (√35/2)i)^2 + 10(−3/2 − (√35/2)i) + 5] / [(−3/2 − (√35/2)i)(−3/2 − (√35/2)i + 3/2 − (√35/2)i)]
   = [2(9/4 + (3√35/2)i − 35/4) + (−15 − 5√35·i) + 5] / [(−3/2 − (√35/2)i)(−√35·i)]
   = (−23 − 2√35·i)/(−35/2 + (3√35/2)i)
At this point the fraction must be rationalized. This is done to remove the imaginary part
from the denominator of the fraction.
k2 = [(−23 − 2√35·i)/(−35/2 + (3√35/2)i)]·[(−35/2 − (3√35/2)i)/(−35/2 − (3√35/2)i)] = (595/2 + (139√35/2)i)/385 ≈ 0.77 + 1.07i
Since the coefficient is a conjugate pair the following holds true.
k2* = 0.77 − 1.07i
Substituting the coefficients back into the partial fraction expansion of the function and
taking the inverse Laplace transformation yields the following.
Y(s) = 0.4545·(1/s) + (0.77 + 1.07i)·(1/(s + 3/2 + (√35/2)i)) + (0.77 − 1.07i)·(1/(s + 3/2 − (√35/2)i))
y(t) = 0.4545 + (0.77 + 1.07i)·e^((−3/2 − i√35/2)t) + (0.77 − 1.07i)·e^((−3/2 + i√35/2)t)
Once the inverse Laplace transformation is carried out, some simplifications are
necessary to remove the imaginary numbers from the function. The first way to do this is to
apply Moivre’s formula to the imaginary coefficient pair. The method of applying this formula
is shown below.
From the complex relationship:
x = a + ib
x* = a − ib
Plotting these functions:
From this plot the following relationships can be formed:
x = a + ib = M·e^(iΦ)
x* = a − ib = M·e^(−iΦ)
Where:
M = √(a^2 + b^2)
Φ = tan⁻¹(b/a)
Applying this formula to the example reduces the coefficients of the complex roots as follows.
k2 = 0.77 + 1.07i = M·e^(iΦ)
k2* = 0.77 − 1.07i = M·e^(−iΦ)
M = √((0.77)^2 + (1.07)^2) = 1.32
Φ = tan⁻¹(1.07/0.77) = 0.94
k2 = 1.32·e^(0.94i)
k2* = 1.32·e^(−0.94i)
Once this is done, the inverse Laplace transformation of the function can be rewritten.
y(t) = 0.4545 + 1.32·e^(0.94i)·e^((−3/2 − i√35/2)t) + 1.32·e^(−0.94i)·e^((−3/2 + i√35/2)t)
At this point, another formula is used to reduce the equation further. This simplification uses the Euler sine and cosine formulas. To perform the reduction, the equation must first be rearranged as shown.
y(t) = 0.4545 + 1.32·e^(−1.5t)·[e^(−i((√35/2)t − 0.94)) + e^(i((√35/2)t − 0.94))]
From here the relationships below are used to reduce the equation.
e^(ix) = cos(x) + i·sin(x)
e^(−ix) = cos(x) − i·sin(x)
So:
e^(ix) + e^(−ix) = cos(x) + i·sin(x) + cos(x) − i·sin(x) = 2cos(x)
e^(ix) − e^(−ix) = cos(x) + i·sin(x) − cos(x) + i·sin(x) = 2i·sin(x)
Using the expressions above the following reduction can be made.
e^(−i((√35/2)t − 0.94)) + e^(i((√35/2)t − 0.94)) = 2cos((√35/2)t − 0.94)
Therefore:
y(t) = 0.4545 + 1.32·e^(−1.5t)·[2cos((√35/2)t − 0.94)]
y(t) = 0.4545 + 2.64·e^(−1.5t)·cos((√35/2)t − 0.94)
From here it is possible to plot the time order response for the given transfer function.
This time order response is shown in the next figure.
Figure 13: Time Order Response for Example #4
This is the final solution for the complex conjugate case of partial fraction expansion.
Although difficult, these functions are very useful in analyzing an aircraft in flight.
Once all the cases for breaking down a partial fraction expansion are understood, the
system can be created. The Laplace transformation sets up the transfer functions necessary to
describe the motion of a mechanical system. With these transformations the solution of these
systems can be easily generated. This will be analyzed in depth next.
Chapter 3: Mathematical Modeling of
Physical Systems
To better understand the concept of a transfer function, some physical systems are broken
down and modeled. The two main types of systems analyzed are translational systems and
rotational systems. These systems behave in the same way just under different loads. Once
these systems are modeled an equilibrium equation is applied to solve the system. Eventually
these equations are used to form the transfer functions that represent how the mechanical system
translates or rotates.
In order to solve for the transfer functions governing the system, a six step process is
necessary. These steps are described as follows.
Step #1: Evaluate the type of system presented and identify the input/output relationships
Step #2: Draw the free body diagram of the system in the t-domain
Step #3: Draw the free body diagram of the system in the s-domain
Step #4: Perform Laplace transformation and write the equations of motion for the system
assuming equilibrium conditions
Step #5: For multipart systems use Cramer’s rule to solve for displacements
𝑋1( 𝑠), 𝑋2( 𝑠), … 𝑋 𝑛( 𝑠)
Step #6: Divide displacements by the applied force to form transfer functions
Once the steps necessary to solve for the transfer functions are understood, the physical model is broken down into three components.
3.1 Translational System
The translational system is made up of three components which are described as such.
1) Inertial (mass)
F_I(t) = m·ẍ(t)
F_I(s) = m·s^2·X(s)
2) Friction (Viscous)
𝐹𝐹 ( 𝑡) = 𝑓𝑣 𝑥̇( 𝑡)
𝐹𝐹 ( 𝑠) = 𝑓𝑣 𝑠𝑋( 𝑠)
3) Elastic
𝐹𝐸 ( 𝑡) = 𝑘𝑥( 𝑡)
𝐹𝐸 ( 𝑠) = 𝑘𝑋( 𝑠)
To better understand how all of these components interact with each other a few
examples of a translational system should be solved. The first example represents a simple single
mass system.
3.1.1 Single Mass System
In order to solve for the transfer function, the steps outlined prior must be carried out.
Step #1
𝐼𝑛𝑝𝑢𝑡: 𝐹𝐴 ( 𝑡)
𝑂𝑢𝑡𝑝𝑢𝑡: 𝑥1( 𝑡)
Transfer Function: X1(s)/F_A(s)
After the input/output relationships are found, the free body diagram in the t-domain must
be constructed.
Step #2
After the free body diagram is expressed in the t-domain, it must be rewritten in the s-
domain by carrying out a simple Laplace transformation.
Step #3
Once the free body diagrams are drawn and understood, the equations of motion for the system are derived. This system only has one mass; therefore there is only one equation of motion. In this case the equation of motion can be represented as follows.
Step #4
M·s^2·X1(s) + fv·s·X1(s) + k·X1(s) = F_A(s)
Since this system has only one equation of motion, Cramer’s rule is not necessary and the
final two steps can be combined.
Step #5 + #6
X1(s)·[M·s^2 + fv·s + k] = F_A(s)
X1(s)/F_A(s) = 1/(M·s^2 + fv·s + k)
This final step yields the transfer function for the given system. With this transfer
function the model can be simulated. To see how the system performs under varying conditions
a sensitivity analysis is performed. The first analysis will consider variations in the mass of the
system. The response for this analysis is shown in the following figure.
Figure 14: Analysis of Mass Variation for Example #1
As shown, increasing the mass of the system decreases the transient response of the
system. It also increases the time it takes for the system to converge to its steady state value.
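The mass sweep of Figure 14 can be reproduced with a short simulation. The sketch below is an added illustration: it assumes scipy and matplotlib, a unit step force, and fv = 4 and k = 2, since the notes do not list the parameter values actually used for the figures.

import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fv, k = 4.0, 2.0                       # assumed damper and spring values
t = np.linspace(0.0, 20.0, 2000)
for M in [1, 3, 5, 7, 9]:
    # X1(s)/FA(s) = 1/(M s^2 + fv s + k) from the derivation above
    sys = signal.TransferFunction([1.0], [M, fv, k])
    _, y = signal.step(sys, T=t)
    plt.plot(t, y, label=f"M = {M} kg")
plt.xlabel("Time (s)"); plt.ylabel("Response"); plt.legend(); plt.show()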
Next, variations of the damping experienced by the mass are shown. This analysis is found
below.
Figure 15: Analysis of Damper Variation for Example #1
This analysis shows how increasing the damper value decreases the transient response of
the system. Also, this increase causes the system to dampen out to its steady state value faster.
Finally, the analysis of the spring constant is taken into consideration. The effects of varying the
spring constant are shown in the following figure.
Figure 16: Analysis of Spring Variation for Example #1
This figure shows that increasing the spring constant decreases the transient response of
the system. As well, it increases the time it takes for the system to converge to its steady state
value.
Once this simple, single mass system is understood, a more complicated multiple mass
system must be introduced. Although more complicated, the system follows the same procedure
as the simple single mass system.
3.1.2 Multiple Mass System
Step #1
𝐼𝑛𝑝𝑢𝑡: 𝐹𝐴 ( 𝑡)
𝑂𝑢𝑡𝑝𝑢𝑡: 𝑥1( 𝑡) , 𝑥2( 𝑡)
Transfer Functions: X1(s)/F_A(s), X2(s)/F_A(s)
Step #2
Step #3
Step #4
M1·s^2·X1(s) + fv·s·X1(s) + (k1 + k2)·X1(s) − fv·s·X2(s) − k2·X2(s) = 0
−fv·s·X1(s) − k2·X1(s) + M2·s^2·X2(s) + fv·s·X2(s) + k2·X2(s) = F_A(s)
This system of equations is then represented in matrix format.
[ M1·s^2 + fv·s + (k1 + k2)    −(fv·s + k2)           ] [ X1(s) ]   [    0    ]
[ −(fv·s + k2)                 M2·s^2 + fv·s + k2     ] [ X2(s) ] = [ F_A(s)  ]
Simplifying the matrix yields the following.
[ A  C ] [ X1(s) ]   [    0    ]
[ C  B ] [ X2(s) ] = [ F_A(s)  ]
Step #5
X1(s) = |0 C; F_A(s) B| / |A C; C B| = ((0)·B − C·F_A(s))/(AB − C^2) = −C·F_A(s)/(AB − C^2)
X2(s) = |A 0; C F_A(s)| / |A C; C B| = (A·F_A(s) − (0)·C)/(AB − C^2) = A·F_A(s)/(AB − C^2)
Where:
AB − C^2 = (M1·s^2 + fv·s + (k1 + k2))·(M2·s^2 + fv·s + k2) − (fv·s + k2)^2
         = M1·M2·s^4 + (M1 + M2)·fv·s^3 + (M1·k2 + fv^2 + M2·k1 + M2·k2 − fv^2)·s^2 + (fv·k2 + fv·k1 + fv·k2 − 2fv·k2)·s + (k1·k2 + k2^2 − k2^2)
         = M1·M2·s^4 + (M1 + M2)·fv·s^3 + (M1·k2 + M2·k1 + M2·k2)·s^2 + (fv·k1)·s + (k1·k2) = Den(s)
Step #6
X1(s)/F_A(s) = (fv·s + k2)/(M1·M2·s^4 + (M1 + M2)·fv·s^3 + (M1·k2 + M2·k1 + M2·k2)·s^2 + (fv·k1)·s + (k1·k2))
X2(s)/F_A(s) = (M1·s^2 + fv·s + (k1 + k2))/(M1·M2·s^4 + (M1 + M2)·fv·s^3 + (M1·k2 + M2·k1 + M2·k2)·s^2 + (fv·k1)·s + (k1·k2))
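The matrix solution of Step #5 can also be carried out symbolically; the sketch below is an added illustration that assumes sympy and lets a linear solve play the role of Cramer's rule.

import sympy as sp

s, M1, M2, fv, k1, k2, FA = sp.symbols('s M1 M2 f_v k1 k2 F_A')

A = sp.Matrix([[M1*s**2 + fv*s + (k1 + k2), -(fv*s + k2)],
               [-(fv*s + k2),               M2*s**2 + fv*s + k2]])
b = sp.Matrix([0, FA])

X1, X2 = A.LUsolve(b)                  # same result Cramer's rule gives
print(sp.simplify(X1/FA))              # -> (fv*s + k2)/Den(s)
print(sp.simplify(X2/FA))              # -> (M1*s^2 + fv*s + k1 + k2)/Den(s)
print(sp.expand(A.det()))              # Den(s), matching the expansion above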
Once the transfer functions for both masses are found, it is possible to find the time order
response for them. These responses are found by varying the parameters involved in the system.
The first parameters to be analyzed are the masses in the system. The time order response with
respect to varying masses is shown as follows.
Figure 17: Analysis of Mass Variation for Example #2
As seen in the first example, increasing the mass decreases the transient response while
increasing the time it takes to reach steady state. These graphs also show that changes to the first
mass cause the transient response of the system to remain fairly constant, but they cause
significant changes in the steady state response of the system. When changing the second mass
the effects are opposite. The steady state response remains constant while changes occur during
the transient response. Along with analyzing the effect of mass variation, it is possible to
analyze the effects of the damper between the masses. This is shown by the next figure.
Figure 18: Analysis of Damper Variation for Example #2
As shown in the above figure changing the damper between the two masses causes slight
differences in both the transient and steady state responses of both masses. These effects are
minimal and impact the second mass more than the first. Increasing the damper value causes a
decreased transient response, as well as, a decreased time to reach steady state for both masses.
Finally, after the effects of the damper are understood, the spring constants are taken into
account. This is shown in the following figure.
Figure 19: Analysis of Spring Variation for Example #2
When looking at the results, increasing the constant of the first spring causes a decreased transient response along with a decreased time to steady state. This holds true for both masses. Increasing the second spring constant causes the transient response for both masses to remain fairly constant, but causes a longer time to reach the steady state value. This example represents a more complex system. As the number of masses, or degrees of freedom, increases, the transfer functions become higher order and more complex.
3.2 Rotational Systems
After translational systems, rotational systems are analyzed. These systems are nearly
identical to the translational systems presented prior. The major difference is the use of an angular
reference frame and applied torques rather than forces. Along with that, the components are
slightly different. These differences are shown below.
1) Moment of Inertia
𝑀𝐼( 𝑡) = 𝐼 𝑅𝑅 𝜃̈( 𝑡)
M_I(s) = I_RR·s^2·θ(s)
2) Dampers
𝑀 𝐷( 𝑡) = 𝐷𝜃̇( 𝑡)
𝑀 𝐷( 𝑠) = 𝐷𝑠𝜃( 𝑠)
3) Springs
𝑀 𝐸( 𝑡) = 𝑘 𝑅 𝜃( 𝑡)
𝑀 𝐸( 𝑠) = 𝑘 𝑅 𝜃( 𝑠)
After the components of these systems are understood, an example problem can be
solved.
Example #3
Step #1
𝐼𝑛𝑝𝑢𝑡: 𝑇𝐴( 𝑡)
𝑂𝑢𝑡𝑝𝑢𝑡: 𝜃1( 𝑡) , 𝜃2 ( 𝑡)
Transfer Functions: θ1(s)/T_A(s), θ2(s)/T_A(s)
Step #2
Step #3
Step #4
I_RR1·s^2·θ1(s) + D1·s·θ1(s) + k_R1·θ1(s) + k_R2·θ1(s) − k_R2·θ2(s) = 0
I_RR2·s^2·θ2(s) + D2·s·θ2(s) + k_R2·θ2(s) − k_R2·θ1(s) = T_A(s)
This system of equations is represented in matrix format.
[ I_RR1·s^2 + D1·s + (k_R1 + k_R2)    −k_R2                      ] [ θ1(s) ]   [    0    ]
[ −k_R2                               I_RR2·s^2 + D2·s + k_R2    ] [ θ2(s) ] = [ T_A(s)  ]
Simplifying the matrix yields the following.
[ A  C ] [ θ1(s) ]   [    0    ]
[ C  B ] [ θ2(s) ] = [ T_A(s)  ]
Step #5
θ1(s) = |0 C; T_A(s) B| / |A C; C B| = ((0)·B − C·T_A(s))/(AB − C^2) = −C·T_A(s)/(AB − C^2)
θ2(s) = |A 0; C T_A(s)| / |A C; C B| = (A·T_A(s) − (0)·C)/(AB − C^2) = A·T_A(s)/(AB − C^2)
Where:
AB − C^2 = (I_RR1·s^2 + D1·s + (k_R1 + k_R2))·(I_RR2·s^2 + D2·s + k_R2) − (k_R2)^2
         = I_RR1·I_RR2·s^4 + (I_RR1·D2 + I_RR2·D1)·s^3 + (I_RR1·k_R2 + D1·D2 + I_RR2·k_R1 + I_RR2·k_R2)·s^2 + (D1·k_R2 + D2·k_R1 + D2·k_R2)·s + k_R1·k_R2 = Den(s)
Step #6
θ1(s)/T_A(s) = k_R2/(I_RR1·I_RR2·s^4 + (I_RR1·D2 + I_RR2·D1)·s^3 + (I_RR1·k_R2 + D1·D2 + I_RR2·k_R1 + I_RR2·k_R2)·s^2 + (D1·k_R2 + D2·k_R1 + D2·k_R2)·s + k_R1·k_R2)
θ2(s)/T_A(s) = (I_RR1·s^2 + D1·s + (k_R1 + k_R2))/(I_RR1·I_RR2·s^4 + (I_RR1·D2 + I_RR2·D1)·s^3 + (I_RR1·k_R2 + D1·D2 + I_RR2·k_R1 + I_RR2·k_R2)·s^2 + (D1·k_R2 + D2·k_R1 + D2·k_R2)·s + k_R1·k_R2)
Once these transfer functions are formed, it is possible to plot the time response of the
system. This time order response is found by running a sensitivity analysis on the system. The
first variables that are analyzed are the masses in the system. This is shown in the following figure.
Figure 20: Analysis of Mass Variation for Example #3
When looking at the responses of the system it is evident that the first mass has more
influence over the system. Increasing the first mass completely changes the shape of the
transient response, as well as increasing the time required to reach steady state. Increasing the second mass has less effect on the transient response but still increases the time required to reach
steady state. Once the mass variation is analyzed, it is possible to run the simulation varying the
friction or damper experienced by each mass. This is shown next.
Figure 21: Analysis of Damper Variation for Example #3
The above figures show that varying the damper on the second rotation mass has the most
effect on the system. It completely changes the shape of the transient response of the first mass
and greatly reduces that of the second mass. When changing the first damper, slight decreases
are experienced in the transient response while the time to steady state is increased. Finally,
once the effects of the dampers are found, the sensitivity analysis is run by varying the spring
constants in the system. These results are shown in the following figure.
Figure 22: Analysis of Spring Variation for Example #3
The above figure shows that varying the first spring has more influence on the system. As this spring constant increases, the transient response of both masses decreases. Along with this, the masses converge to steady state faster as the spring constant increases. The second spring influences the system in the same way, but the impact is not as severe.
This shows how physical systems are modeled mathematically. The system is broken down into a series of equations that are put into matrix form. This matrix is then solved, creating transfer functions for the given degrees of freedom. It is these transfer functions that are solved to find the time order response of the system. The next chapter will further expand on the properties of a system's time order response.
Chapter 4: Analysis of Time Response for
Different Systems
After mechanical systems are analyzed, it becomes evident that the system results are
dependent on the transfer function modeling the system. This transfer function is made up of
both a numerator and a denominator as shown below.
Y(s)/U(s) = Num(s)/Den(s)
The nature of this system is dependent on the roots of the denominator as shown
previously. The order of the polynomial in the denominator is what plays the main role in
defining the system. Any nth order system can be described as a blend of first and second order
systems. This is described in the following example.
Example #1
A 7th order system can be broken into:
1) 3(2nd order) + 1(1st order)
2) 2(2nd order) + 3(1st order)
3) 1(2nd order) + 5(1st order)
4) 7(1st order)
These decompositions of a higher order system show that both first and second order systems are essential to defining the time response of systems. The first type of response to be taken
into account is a generic first order response. This case is a simple expression and is derived in
the following section.
4.1 Generic First Order System
The generic first order system is the easiest to analyze. It is made up of a single real
distinct pole. Along with this, the steady state value must converge to the value one to be
considered generic. The derivation of this system is shown below.
GFOS(s) = G(s) = a/(s + a)
u(t) = 1 → U(s) = 1/s
Y(s) = U(s)·G(s) = (1/s)·(a/(s + a)) = k1/s + k2/(s + a)
Where:
k1 = [s·Y(s)]|s=0 = [s·(1/s)·(a/(s + a))]|s=0 = a/((0) + a) = a/a = 1
k2 = [(s + a)·Y(s)]|s=−a = [(s + a)·(1/s)·(a/(s + a))]|s=−a = a/(−a) = −1
Substituting:
Y(s) = 1/s − 1/(s + a)
y(t) = 1 − e^(−at)
This derivation yields the expression for a generic first order system. After its derivation
is understood, two key properties of this type of system must be defined. First, the initial value
of the function must be zero. Second, the steady state value is equal to one. This can be proved
by applying both the initial value and final value theorems to the function. This is represented
mathematically below.
Initial Value Theorem
lim (t→0) y(t) = lim (t→0) (1 − e^(−at)) = 1 − e^0 = 1 − 1 = 0
Final Value Theorem
lim (t→∞) y(t) = lim (t→∞) (1 − e^(−at)) = 1 − 0 = 1
After these two theorems are verified, the function can be plotted with varying root
values. This trend is seen in the following graph.
Figure 23: Generic First Order Response for Varying Root Values
This graph shows that no matter what the value of the root, the initial and final values are
the same. Along with showing the trend, plotting the generic first order system allows for
solution of the specifications of the system.
4.1.1 Specifications of a Generic First Order System
For this type of system there are three specifications and they are listed below.
1) Time Constant – This is the time required to reach 63% of the steady state value.
Figure 24: Time Constant
T = 1/a
2) Rise Time – This is the time required to go from 10% to 90% of the steady state value.
Figure 25: Rise Time
Tr = t2 − t1
y(t1) = 0.1 = 1 − e^(−a·t1)          y(t2) = 0.9 = 1 − e^(−a·t2)
e^(−a·t1) = 0.9                      e^(−a·t2) = 0.1
−a·t1 = ln(0.9) ≈ −0.1               −a·t2 = ln(0.1) ≈ −2.3
t1 = 0.1/a                           t2 = 2.3/a
Tr = 2.3/a − 0.1/a = 2.2/a
3) Settling Time – This is the time required to reach 98% of the steady state value.
Figure 26: Settling Time
y(Ts) = 1 − e^(−a·Ts) = 0.98
−a·Ts = ln(0.02)
Ts = 3.91/a
The specifications for a generic first order system can be seen in the derivations above.
These specifications help define how the first order system behaves and what kind of system
output is expected.
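These three specifications are easy to confirm numerically. The sketch below is an added illustration (assuming numpy, with an arbitrary example root a = 2) that reads the time constant, rise time, and settling time directly off the response 1 − e^(−at).

import numpy as np

a = 2.0
t = np.linspace(0.0, 10.0, 200001)
y = 1.0 - np.exp(-a*t)

T  = t[np.argmax(y >= 0.63)]                           # time constant
Tr = t[np.argmax(y >= 0.9)] - t[np.argmax(y >= 0.1)]   # rise time
Ts = t[np.argmax(y >= 0.98)]                           # settling time

print(T,  1/a)       # both ≈ 0.50 s
print(Tr, 2.2/a)     # both ≈ 1.10 s
print(Ts, 3.91/a)    # both ≈ 1.96 s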
4.1.2 Non-Generic First Order System
Once the generic case is understood, a more common case for the first order system is
analyzed. This case is known as a non-generic first order system and is derived below.
FOS(s) = G(s) = b/(s + a) = (c·a)/(s + a)
u(t) = 1 → U(s) = 1/s
Y(s) = U(s)·G(s) = (c/s)·(a/(s + a)) = k1/s + k2/(s + a)
From the final value theorem the steady state value can be found.
y_ss = lim (s→0) [s·(1/s)·(c·a/(s + a))] = c·a/((0) + a) = c
Where:
k1 = [s·Y(s)]|s=0 = [s·(c/s)·(a/(s + a))]|s=0 = c·a/((0) + a) = c
k2 = [(s + a)·Y(s)]|s=−a = [(s + a)·(c/s)·(a/(s + a))]|s=−a = c·a/(−a) = −c
Substituting:
Y(s) = c/s − c/(s + a)
y(t) = c − c·e^(−at) = c(1 − e^(−at))
From this derivation it is evident that the generic and non-generic responses are almost
identical. The only difference is the factor in front of the response. Furthermore, the factor in
front of the non-generic response is the steady state value for the system. This is the only
difference between the two first order cases and changes the final value of the response.
4.2 Generic Second Order System
Now that the first order system is understood, the second order system is analyzed. A
second order system is made up of two poles of varying location in the s-domain. In addition, the system must have a steady state value of one to be considered generic. The generic second order
system is derived below.
GSOS(s) = ωn^2/(s^2 + 2ξωn·s + ωn^2) = c2/(s^2 + c1·s + c2)
Where:
ωn = Natural Frequency
ξ = Damping Coefficient
Once the basic layout of this system is understood, there are several different possibilities
for the roots of the system. The roots can follow a series of cases. These cases are presented as
follows.
Case #1
Figure 27: Real Distinct Roots
This case is when the roots of the denominator are real and distinct. Under this condition
the system is over damped. A time response of an overdamped system can be seen below.
Figure 28: Overdamped System
From this figure it is evident that an over damped system will never exceed the steady state value. The behavior is similar to that of a first order system. This system is highly damped and does not oscillate.
Case #2
Figure 29: Repeated Roots
This case is when the roots of the denominator are real but repeated. Under this
condition the system is critically damped. A time response of a critically damped system is
shown below.
Figure 30: Critically Damped System
From this figure it is evident that a critically damped system will never exceed the steady state value either. The difference, however, is that this is the damping at which the system crosses from first order behavior to that of a second order system. It is the lowest damping a system can have before it begins to oscillate.
Case #3
Figure 31: Complex Conjugate Roots
This case is when the roots of the denominator are a complex conjugate pair. Under this
condition the system is under damped. A time response of an under damped system can be seen
below.
Figure 32: Underdamped System
This figure shows that an under damped system behaves much differently than the prior cases. The system will exceed the steady state value and then oscillate about that value until it eventually converges. This is the most typical response of a second order system.
Case #4
Figure 33: Purely Complex Roots
This case is when the roots of the denominator are a purely complex pair with no real
component. Under this condition the system is undamped. A time response of an undamped
system can be seen below.
Figure 34: Undamped System
This shows how an undamped system will behave. Unlike all the other cases previously
defined, this case does not converge on a steady state value. This type of system represents free
oscillatory motion.
After the nature of the roots is analyzed, the system can be solved for by taking the
inverse Laplace transformation. This transformation is more complex than the first order system.
The solution is presented below.
Y(s)/U(s) = ωn^2/(s^2 + 2ξωn·s + ωn^2)
Where:
U(s) = 1/s
So:
Y(s) = (1/s)·(ωn^2/(s^2 + 2ξωn·s + ωn^2))
y(t) = 1 − (1/√(1 − ξ^2))·e^(−ξωn·t)·cos[ωn√(1 − ξ^2)·t − φ]
Where:
φ = tan⁻¹(ξ/√(1 − ξ^2))
Once the solution for the generic second order system is found, two key parameters must
be noted. Both the generic second and first order systems have these in common. First, the
initial value of the function must be zero. Second, the steady state value is equal to one. This
can be proved by applying both the initial value and final value theorems to the function. This is
represented mathematically below.
Initial Value Theorem
y(0) = lim (t→0) y(t) = lim (s→∞) s·Y(s) = lim (s→∞) [s·(1/s)·(ωn^2/(s^2 + 2ξωn·s + ωn^2))] = ωn^2/∞ = 0
Final Value Theorem
y_ss = lim (t→∞) y(t) = lim (s→0) s·Y(s) = lim (s→0) [s·(1/s)·(ωn^2/(s^2 + 2ξωn·s + ωn^2))] = ωn^2/ωn^2 = 1
Along with providing these parameters, the solution of the generic second order system allows for an analysis of the poles produced. These poles can be plotted in the s-domain as shown.
Figure 35: Poles of a Second Order System
The location of the pole in this figure shows how the natural frequency and damping ratio define the pole position. The location of this pole is found using the expression below.
P1,2 = −ξωn ± i·ωn√(1 − ξ^2)
Along with this, the following relation can be used to find the damping ratio based on the angle of the pole location.
𝜉 = cos 𝛩
Finally, two special cases can be found when analyzing the location of the pole. First, should the damping become zero, the poles would lie on the imaginary axis. This, in turn, would make the system undamped and move in free oscillatory motion. Second, if the damping becomes one, the poles become repeated on the real axis. When this occurs the system is critically damped.
Once the makeup of the generic second order system is understood, the specifications for
this type of system are defined. Like the generic first order system, these specifications help
define how the system behaves and what type of output is expected.
(In Figure 35 the pole lies a distance ωn from the origin at an angle Θ from the negative real axis, with real component ωn·cos Θ and imaginary component ωn·sin Θ.)
4.2.1 Specifications of a Generic Second Order System
For the second order case there are four specifications. These specifications are useful in
describing the response of the system in common terms. These specifications are derived as shown next.
1) Rise Time – This is the time required to go from 10% to 90% of the steady state value.
Tr = (1 + 1.1ξ + 1.4ξ^2)/ωn
Figure 36: Rise Time for a Second Order System
2) Peak Time – This is the time required to reach the maximum value( 𝑦 𝑚𝑎𝑥).
Tp = π/(ωn√(1 − ξ^2))
Figure 37: Peak Time for a Second Order System
3) Percent Overshoot (%OS) – This is the amount that the waveform overshoots the steady
state value to reach the maximum value.
%OS = ((y_max − y_ss)/y_ss)·100
4) Settling Time – This is the time required to reach and stay within ±2% of the steady state
value.
Ts = 4/(ξωn)
Figure 38: Settling Time for a Second Order System
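The four specifications can be evaluated directly from ωn and ξ and compared against a simulated step response. The sketch below is an added illustration that assumes numpy/scipy and uses ωn = 3 rad/s and ξ = 0.4 as arbitrary example values.

import numpy as np
from scipy import signal

wn, zeta = 3.0, 0.4
Tr = (1 + 1.1*zeta + 1.4*zeta**2)/wn          # approximate rise time
Tp = np.pi/(wn*np.sqrt(1 - zeta**2))          # peak time
Ts = 4.0/(zeta*wn)                            # 2% settling time

t = np.linspace(0.0, 10.0, 5000)
_, y = signal.step(signal.TransferFunction([wn**2], [1, 2*zeta*wn, wn**2]), T=t)
OS = (y.max() - 1.0)*100                      # percent overshoot measured from the response

print(Tr, Tp, Ts, OS)   # OS should be close to 100*exp(-zeta*pi/sqrt(1-zeta^2)) ≈ 25%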
After the specifications for a generic second order system are understood, the system is
fully described. This leads to the discussion on a non-generic second order system. This system
is not much different from a generic second order system. Along with that, the change between
the two is identical to the change between a generic and non-generic first order system. This
change can be seen below.
4.2.2 Non-Generic Second Order System
Once the generic second order system is understood, a more general system is taken into
account. This is the non-generic second order system and is derived next.
SOS(s) = c·GSOS(s) = c·ωn^2/(s^2 + 2ξωn·s + ωn^2)
Where:
c = Steady State Value
As was the case with the first order system, the non-generic second order system only changes by multiplying the function by the steady state value. This, ultimately, moves the steady state value from one to the value dictated by the non-generic function.
4.2.3 Blended Systems
The final topic on the time response of systems is the blending of systems. This is
analyzed by blending two first order systems as shown below.
G(s) = (a/(s + a))·(b/(s + b)) = ab/((s + a)(s + b))
Typically this type of system would behave like a second order system. This, however, is
only true if the roots are complex conjugates. Since both roots are real distinct roots the system,
in theory, should not behave like an underdamped system. The response for this system with
varying root values is shown below.
Figure 39: Blending Two First Order Systems with Varying Roots
This graph illustrates the idea of dominance in a system. Dominance occurs when the
roots of the system are far from each other on the negative real axis. The root that dominates
the time response is the root closest to the origin. As the roots of the system
move closer to each other they begin to interfere with the time response. When both
roots are the same, as demonstrated previously, the system is critically damped.
This idea applies to any higher order system as well. Since higher order systems are
nothing more than a blend of first and second order systems, dominance plays a role in dictating
the system output. To demonstrate this, a third order system is taken into account. This system
is broken into a second order and a first order component, as shown in the following formula.
𝐺(𝑠) = (𝑎/(𝑠 + 𝑎)) ∗ 𝜔𝑛² / (𝑠² + 2𝜁𝜔𝑛𝑠 + 𝜔𝑛²)
Figure 40: Blended First and Second Order System
This figure shows how dominance affects the system. When the pole of the first order
system is far on the negative side of the real axis, it barely affects the second order response.
This changes though when the pole moves close to the complex conjugate roots of the second
order system. As the pole moves closer it causes interference in the second order response.
Once the first order pole moves past the complex conjugate roots of the second order system, the
first order response begins to dominate the overall system response.
The final analysis made on the time response of a system is adding a zero to the
second order response. This changes the governing equation as follows.
𝐺(𝑠) = (𝑠 + 𝑎) ∗ 𝜔𝑛² / (𝑠² + 2𝜁𝜔𝑛𝑠 + 𝜔𝑛²)
This system is then plotted for varying zero locations. This is shown in the next figure.
Figure 41: Adding a Zero to the Second Order Response
As shown in the figure, adding a zero to the second order response causes the response
time to change. Should the zero lie far to the left on the negative real axis, it will have little
effect on the overall system output. As the zero moves closer to the complex conjugate roots of
the second order system the response time increases. Should the zero lie on the positive side of
the real axis, the response of the system changes sign. Along with this, the response increases as
the zero moves farther right on the positive axis.
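A minimal sketch of this comparison is given below. It is an assumed reconstruction, not the simulation used for Figure 41: the second order parameters 𝜔𝑛 = 1 and 𝜁 = 0.5 are chosen arbitrarily, and the zero is introduced in the normalized form (s + a)/a so that every variant settles at one. The zero is placed at a = Z·(real part of the poles), with a negative Z giving a right half plane zero.

import numpy as np
from scipy import signal

wn, zeta = 1.0, 0.5                         # assumed second order parameters
rp = zeta * wn                              # magnitude of the real part of the poles
den = [1, 2*zeta*wn, wn**2]
t = np.linspace(0, 8, 800)
for Z in [20, 5, 2, 1, -2]:                 # multiples of the pole real part, as in Figure 41
    a = Z * rp
    num = np.polymul([1/a, 1], [wn**2])     # (s + a)/a * wn^2 -> unity steady state value
    _, y = signal.step(signal.TransferFunction(num, den), T=t)
    print(f"Z={Z:3}:  min = {y.min():.2f},  max = {y.max():.2f},  final = {y[-1]:.2f}")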
This summarizes the time response of systems. Since higher order systems are made
up of first and second order systems, it is possible to apply these techniques to a system of any
order. The main difficulty in finding the time response is dominance: should
the poles of the system lie close to each other, they will interfere with the time response.
Chapter 5: Steady State Error and
Stability Analysis
The time response of systems yields the general specifications of both generic and non-
generic first and second order systems, and it provides insight into how blended systems behave.
Once these ideas are understood, steady state error is introduced. Steady state error occurs when
a system converges to a value different from the expected value.
Figure 42: Feedback Control Loop
The figure above shows how a simple unity feedback loop is executed. The output of the
transfer function not only defines the system at that instant, it is also fed back to a summing
junction at the input; this is how the control loop is formed. Once this is understood, the idea of
steady state error can be formed. When the output of the system meets the input at the summing
junction it produces an error function. This is where the steady state error lies and what causes
the steady state value to converge to a value other than the one predicted. For this type of error
to exist the system must be stable and it must be closed loop. Steady state error can be modeled
mathematically as shown below.
𝑒( 𝑡) = 𝑢( 𝑡) − 𝑦( 𝑡)
Or in the s-domain
𝐸( 𝑠) = 𝑈( 𝑠) − 𝑌( 𝑠)
To find the steady state value from this function a limit must be taken.
𝑒𝑠𝑠 = lim(𝑡→∞) 𝑒(𝑡) = lim(𝑡→∞) [𝑢(𝑡) − 𝑦(𝑡)] = lim(𝑠→0) 𝑠𝐸(𝑠) = lim(𝑠→0) 𝑠[𝑈(𝑠) − 𝑌(𝑠)]
The formula above represents the generic expression for steady state error. This equation
can be reduced based on the type of system in the transfer function. To define the type of system
present, the formula below is analyzed.
𝐺(𝑠) = 𝐾(𝑠 − 𝑧1)⋯(𝑠 − 𝑧𝑚) / [𝑠^𝐿 (𝑠 − 𝑝1)⋯(𝑠 − 𝑝𝑛−𝐿)] = 𝐾 ∏𝑖(𝑠 − 𝑧𝑖) / [𝑠^𝐿 ∏𝑗(𝑠 − 𝑝𝑗)]
Where:
𝐿 = number of poles at the origin = system type
𝐾 = system gain
The equation above is the generic representation of any order transfer function. The
important part of this expression is the number of poles at the origin. This defines the type of
system the transfer function represents.
5.1 Steady State Error Constants
Once the system type is understood, the corresponding error constant can be found. The
definition of the error constant changes as the system type changes; these differences are minor
but important. The error constants are defined as follows.
1) Position Error Constant
𝐾𝑝 = lim(𝑠→0) 𝐺(𝑠) = 𝐾 ∏ 𝑧𝑖 / ∏ 𝑝𝑗   for 𝐿 = 0
𝐾𝑝 = ∞   for 𝐿 ≥ 1
From this expression for the position error constant it is evident that this constant is only
defined when the system is a zero type. This type of system has no roots at the origin and is
associated with a step input. Should a root at the origin exist, the position error constant will be
infinite. Along with the error constant, steady state error for a step input is derived below.
𝐸(𝑠) = 𝑈(𝑠) − 𝑌(𝑠) = 𝑈(𝑠) − (𝐾𝐺(𝑠)/(1 + 𝐾𝐺(𝑠)))𝑈(𝑠) = 𝑈(𝑠)(1/(1 + 𝐾𝐺(𝑠))) = (1/𝑠)(1/(1 + 𝐾𝐺(𝑠)))
𝑒𝑠𝑠 = lim(𝑡→∞) 𝑒(𝑡) = lim(𝑠→0) 𝑠𝐸(𝑠) = lim(𝑠→0) (𝑠 ∗ (1/𝑠)(1/(1 + 𝐾𝐺(𝑠)))) = lim(𝑠→0) (1/(1 + 𝐾𝐺(𝑠))) = 1/(1 + 𝐾𝐾𝑝)
2) Velocity Error Constant
𝐾𝑣 = lim(𝑠→0) 𝑠𝐺(𝑠) = 0   for 𝐿 = 0
𝐾𝑣 = 𝐾 ∏ 𝑧𝑖 / ∏ 𝑝𝑗   for 𝐿 = 1
𝐾𝑣 = ∞   for 𝐿 ≥ 2
From this expression for the velocity error constant, a few conclusions are drawn. First,
the velocity error constant is zero if there are no roots at the origin. Second, the error constant is
infinite if there are two or more roots at the origin. This error constant is therefore only finite and
nonzero for systems with exactly one root at the origin, and it is associated with a ramp input.
Finally, the steady state error for a ramp input is derived below.
𝐸(𝑠) = 𝑈(𝑠) − 𝑌(𝑠) = 𝑈(𝑠) − (𝐾𝐺(𝑠)/(1 + 𝐾𝐺(𝑠)))𝑈(𝑠) = 𝑈(𝑠)(1/(1 + 𝐾𝐺(𝑠))) = (1/𝑠²)(1/(1 + 𝐾𝐺(𝑠)))
𝑒𝑠𝑠 = lim(𝑡→∞) 𝑒(𝑡) = lim(𝑠→0) 𝑠𝐸(𝑠) = lim(𝑠→0) (𝑠 ∗ (1/𝑠²)(1/(1 + 𝐾𝐺(𝑠)))) = lim(𝑠→0) ((1/𝑠)(1/(1 + 𝐾𝐺(𝑠)))) = 1/(𝐾𝐾𝑣)
3) Acceleration Error Constant
𝐾𝑎 = lim(𝑠→0) 𝑠²𝐺(𝑠) = 0   for 𝐿 ≤ 1
𝐾𝑎 = 𝐾 ∏ 𝑧𝑖 / ∏ 𝑝𝑗   for 𝐿 = 2
𝐾𝑎 = ∞   for 𝐿 ≥ 3
From this expression for the acceleration error constant, a few conclusions are drawn.
First, the acceleration error constant is zero if the system has one or no roots at the origin.
Second, the error constant is infinite if there are three or more roots at the origin. This error
constant is therefore only finite and nonzero for systems with exactly two roots at the origin, and
it is associated with a parabolic input. Finally, the steady state error for a parabolic input is
defined by the formula below.
𝐸(𝑠) = 𝑈(𝑠) − 𝑌(𝑠) = 𝑈(𝑠) − (𝐾𝐺(𝑠)/(1 + 𝐾𝐺(𝑠)))𝑈(𝑠) = 𝑈(𝑠)(1/(1 + 𝐾𝐺(𝑠))) = (1/𝑠³)(1/(1 + 𝐾𝐺(𝑠)))
𝑒𝑠𝑠 = lim(𝑡→∞) 𝑒(𝑡) = lim(𝑠→0) 𝑠𝐸(𝑠) = lim(𝑠→0) (𝑠 ∗ (1/𝑠³)(1/(1 + 𝐾𝐺(𝑠)))) = lim(𝑠→0) ((1/𝑠²)(1/(1 + 𝐾𝐺(𝑠)))) = 1/(𝐾𝐾𝑎)
After the error constants and steady state error for a variety of systems are defined, three
numerical examples are analyzed. Each of the system types is taken into account.
5.1.1 Type Zero System
The first example is with respect to a type zero system.
𝐺(𝑠) = (𝑠 + 15) / ((𝑠 + 2)(𝑠 + 5))
𝑒𝑠𝑠 = 0.05
Given the closed loop transfer function above along with the desired steady state error,
find the error constant and the gain associated with that steady state error.
Solving for the error constant:
𝐾𝑝 = lim(𝑠→0) 𝐺(𝑠) = lim(𝑠→0) ((𝑠 + 15)/((𝑠 + 2)(𝑠 + 5))) = 15/10 = 1.5
Solving for the gain:
𝑒𝑠𝑠 = 1/(1 + 𝐾𝐾𝑝) = 1/(1 + 1.5𝐾) = 0.05
1 = 0.05 + 0.075𝐾
𝐾 = 0.95/0.075 ≈ 12.7
This result is verified by the following figure.
Figure 43: Steady State Error for a Type Zero System
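The result can also be checked numerically. The sketch below is a minimal example, assuming a unity feedback loop around K·G(s) as in Figure 42; the closed loop step response should settle about 0.05 below the reference.

import numpy as np
from scipy import signal

K = 12.7                                        # gain found above
num = np.polymul([K], [1, 15])                  # K(s + 15)
den = np.polymul([1, 2], [1, 5])                # (s + 2)(s + 5)
cl = signal.TransferFunction(num, np.polyadd(den, num))   # KG/(1 + KG)
t = np.linspace(0, 10, 1000)
_, y = signal.step(cl, T=t)
print("steady state error =", round(1 - y[-1], 3))        # about 0.05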
5.1.2 Type One System
The second example focuses on the velocity error constant and a type one system.
𝐺(𝑠) = 1 / (𝑠(𝑠 + 5)(𝑠 + 20))
𝑒𝑠𝑠 = 0.05
Given the closed loop transfer function above along with the desired steady state error,
find the error constant and the gain associated with that steady state error.
Solving for the error constant:
𝐾𝑣 = lim(𝑠→0) 𝑠𝐺(𝑠) = lim(𝑠→0) (𝑠 ∗ 1/(𝑠(𝑠 + 5)(𝑠 + 20))) = 1/100 = 0.01
Solving for the gain:
𝑒𝑠𝑠 = 1/(𝐾𝐾𝑣) = 1/(0.01𝐾) = 0.05
1 = (0.05)(0.01)𝐾
𝐾 = 1/0.0005 = 2000
This result is verified by the following figure.
Figure 44: Steady State Error for a Type One System
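As a numerical check, again assuming unity feedback around K·G(s), the following minimal sketch applies a unit ramp input with scipy.signal.lsim and reports the tracking error at the end of the simulation.

import numpy as np
from scipy import signal

K = 2000.0
num = [K]                                                 # K*G(s) = K/(s(s + 5)(s + 20))
den = np.polymul([1, 0], np.polymul([1, 5], [1, 20]))
cl = signal.TransferFunction(num, np.polyadd(den, num))   # unity feedback closed loop
t = np.linspace(0, 20, 4000)
u = t                                                     # unit ramp input
_, y, _ = signal.lsim(cl, U=u, T=t)
print("ramp tracking error =", round(u[-1] - y[-1], 3))   # about 0.05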
5.1.3 Type Two System
The third example focuses on the acceleration error constant and a type two system.
𝐺( 𝑠) =
( 𝑠 + 5)
𝑠2( 𝑠 + 2)( 𝑠 + 4)
𝑒 𝑠𝑠 = 0.025
Given the closed loop transfer function above along with the desired steady state error,
find the error constant and the gain associated with that steady state error.
Solving for the error constant:
𝐾𝑎 = lim(𝑠→0) 𝑠²𝐺(𝑠) = lim(𝑠→0) (𝑠² ∗ (𝑠 + 5)/(𝑠²(𝑠 + 2)(𝑠 + 4))) = 5/8 = 0.625
Solving for the gain:
𝑒𝑠𝑠 = 1/(𝐾𝐾𝑎) = 1/(0.625𝐾) = 0.025
1 = (0.625)(0.025)𝐾
𝐾 = 1/0.015625 = 64
Once these error constants are found, a number of conclusions can be drawn about the
steady state error of a system. First, the type of system determines which error constant is
defined for the system. Second, a larger gain value yields a lower steady state error. Finally,
although increasing the gain improves the steady state error, it can also drive the system
unstable, and the increased gain will disturb the other specifications of the system.
5.2 Routh-Hurwitz Stability Criterion
The steady state value of the system brings about the importance of the gain of the system
and starts to show how all of the specifications of the system play into each other. To find what
gains are ideal for a specific system, stability analysis is useful. It is known that too large of a
gain value will drive the system unstable. With this in mind, there is a range of gain values for
which the system is stable. To find this range of gain values a Routh-Hurwitz stability analysis
is performed. For the system to be stable, in general, it must have a bounded input/bounded
output relationship. If this requirement is met, the Routh-Hurwitz criterion involves a simple
inspection of the denominator of the transfer function. The Routh-Hurwitz array is set up as
follows.
Given:
𝐷𝑒𝑛(𝑠) = 𝑠^𝑛 + 𝑄𝑛−1𝑠^(𝑛−1) + 𝑄𝑛−2𝑠^(𝑛−2) + ⋯ + 𝑄1𝑠 + 𝑄0
𝑠^𝑛      | 1       𝑄𝑛−2    𝑄𝑛−4   ⋯
𝑠^(𝑛−1)  | 𝑄𝑛−1    𝑄𝑛−3    𝑄𝑛−5   ⋯
𝑠^(𝑛−2)  | 𝑏1      𝑏2      ⋯
𝑠^(𝑛−3)  | 𝑐1      ⋯
⋮        | ⋮
𝑠^1      | ∗
𝑠^0      | 𝑄0
Figure 45: Routh-Hurwitz Array
Where:
𝑏1 = (𝑄𝑛−1 ∗ 𝑄𝑛−2 − (1) ∗ 𝑄𝑛−3) / 𝑄𝑛−1
𝑏2 = (𝑄𝑛−1 ∗ 𝑄𝑛−4 − (1) ∗ 𝑄𝑛−5) / 𝑄𝑛−1
𝑐1 = (𝑏1 ∗ 𝑄𝑛−3 − 𝑄𝑛−1 ∗ 𝑏2) / 𝑏1
The usefulness of this array lies in its first column. The number of sign changes in this
column equals the number of unstable roots in the system; if there are no sign changes, the
system is stable. The effectiveness of this method is shown by the next example.
5.2.1 Routh-Hurwitz Array
To find if a system is stable, the Routh-Hurwitz array is constructed. This construction is
described as follows.
𝐷𝑒𝑛(𝑠) = 𝑠⁵ + 3𝑠⁴ + 5𝑠³ + 8𝑠² + 11𝑠 + 3
Set up the Routh-Hurwitz array.
𝑠⁵ | 1    5    11
𝑠⁴ | 3    8    3
𝑠³ | 𝑏1   𝑏2
𝑠² | 𝑐1   𝑐2
𝑠¹ | 𝑑1
𝑠⁰ | 𝑒1
Where:
𝑏1 = ((3)(5) − (1)(8)) / 3 = 2.33
𝑏2 = ((3)(11) − (1)(3)) / 3 = 10
𝑐1 = (𝑏1(8) − (3)𝑏2) / 𝑏1 = ((2.33)(8) − (3)(10)) / 2.33 = −4.86
𝑐2 = (𝑏1(3)) / 𝑏1 = 3
𝑑1 = (𝑐1𝑏2 − 𝑏1𝑐2) / 𝑐1 = ((−4.86)(10) − (2.33)(3)) / (−4.86) = 11.44
𝑒1 = (𝑑1𝑐2) / 𝑑1 = 𝑐2 = 3
Substituting the values back into the array yields the following.
𝑠⁵ | 1      5      11
𝑠⁴ | 3      8      3
𝑠³ | 2.33   10
𝑠² | −4.86  3
𝑠¹ | 11.44
𝑠⁰ | 3
From the Routh-Hurwitz array it is possible to conclude that the system is unstable.
Furthermore, the system has two unstable roots, corresponding to the two sign changes in the
first column. These roots lie to the right of the imaginary axis in the s-domain.
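This tabulation is easy to automate. The sketch below is a minimal pure numpy version (it does not handle the special case of a zero appearing in the first column) that reproduces the array above and counts the sign changes.

import numpy as np

def routh_array(coeffs):
    # coeffs: polynomial coefficients in descending powers of s, leading term first
    n = len(coeffs) - 1
    cols = n // 2 + 1
    R = np.zeros((n + 1, cols))
    R[0, :len(coeffs[0::2])] = coeffs[0::2]          # s^n row
    R[1, :len(coeffs[1::2])] = coeffs[1::2]          # s^(n-1) row
    for i in range(2, n + 1):
        for j in range(cols - 1):
            R[i, j] = (R[i-1, 0]*R[i-2, j+1] - R[i-2, 0]*R[i-1, j+1]) / R[i-1, 0]
    return R

R = routh_array([1, 3, 5, 8, 11, 3])
print(np.round(R, 2))
print("sign changes:", int(np.sum(np.diff(np.sign(R[:, 0])) != 0)))   # -> 2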
The final useful tool of the Routh-Hurwitz array is the ability to find the range of gain
values for which a closed loop system is stable. In order to do this the control loop containing
the transfer function must be closed. This incorporates the gain and the feedback loop into the
transfer function representing the system. An example showing this process is shown below.
5.2.2 Stability of a Closed Loop Response
Now that the Routh-Hurwitz array is understood, the stability of a closed loop system is
analyzed. This is described by the following example.
𝐺(𝑠) = 1 / ((𝑠 − 5)(𝑠 + 10)(𝑠 + 15)) = 1 / ((𝑠 − 5)(𝑠² + 25𝑠 + 150)) = 1 / (𝑠³ + 20𝑠² + 25𝑠 − 750)
The closed loop transfer function is represented as follows.
𝑌(𝑠)/𝑈(𝑠) = 𝐾𝐺(𝑠)/(1 + 𝐾𝐺(𝑠)) = [𝐾/(𝑠³ + 20𝑠² + 25𝑠 − 750)] / [1 + 𝐾/(𝑠³ + 20𝑠² + 25𝑠 − 750)] = 𝐾 / (𝑠³ + 20𝑠² + 25𝑠 + (𝐾 − 750))
The Routh-Hurwitz criterion is applied here.
𝐷𝑒𝑛(𝑠) = 𝑠³ + 20𝑠² + 25𝑠 + (𝐾 − 750)
𝑠³ | 1    25
𝑠² | 20   𝐾 − 750
𝑠¹ | 𝑏1
𝑠⁰ | 𝑐1
Where:
𝑏1 = ((20)(25) − (1)(𝐾 − 750)) / 20 = (1250 − 𝐾)/20
𝑐1 = 𝐾 − 750
For the system to be stable all of the values in the first column of the array must be
positive. This holds true when the following relationships are valid.
(1250 − 𝐾)/20 > 0  →  𝐾 < 1250
𝐾 − 750 > 0  →  𝐾 > 750
750 < 𝐾 < 1250
These results are also verified by the following simulation results.
Figure 46: System Response for Varying Gain Values
This shows which gain values will yield a stable system. As shown in the figure, the
system is stable inside the specified range and unstable outside of it. The boundary values also
mark where the system response transitions between bounded and unbounded. This tool is very
useful in selecting an appropriate gain value for the system. Using these values, together with
the specifications described previously, a control scheme starts to form.
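The same sweep of gain values can be reproduced with a short script. The sketch below is a minimal example using the closed loop denominator derived above; it simply reports the size of the step response at the end of the simulation window for each gain in Figure 46.

import numpy as np
from scipy import signal

t = np.linspace(0, 15, 3000)
for K in [650, 750, 900, 1100, 1250, 1350]:        # gains used in Figure 46
    den = [1, 20, 25, K - 750]                     # closed loop denominator
    _, y = signal.step(signal.TransferFunction([K], den), T=t)
    print(f"K={K:5}:  |y(15 s)| = {abs(y[-1]):.3g}")
# only gains strictly inside 750 < K < 1250 settle to a finite steady state value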
Chapter 6: Complex Block Diagram
Reduction
When dealing with multiple transfer functions in control loops, block diagram reduction
becomes important. Complex block diagrams can end up being large and hard to manage, which
in turn makes simulating the system more difficult. To help manage complex block diagrams, a
few rules can be used to simplify the diagram down to a single transfer function that represents
the overall system. To break down complex block diagrams, three different architectures are
taken advantage of; a short computational sketch of all three follows the list.
1) Transfer functions in series
𝐺 𝐸𝑇𝑆( 𝑠) = 𝐺1( 𝑠) ∗ 𝐺2( 𝑠)
2) Transfer functions in parallel
𝐺 𝐸𝑇𝑆 ( 𝑠) = 𝐺1( 𝑠) + 𝐺2( 𝑠)
3) Feedback loop
𝑌(𝑠) = 𝐺(𝑠)𝐸(𝑠) = 𝐺(𝑠)(𝑈(𝑠) − 𝐻(𝑠)𝑌(𝑠))
𝑌(𝑠) + 𝐺(𝑠)𝐻(𝑠)𝑌(𝑠) = 𝐺(𝑠)𝑈(𝑠)
𝑌(𝑠)(1 + 𝐺(𝑠)𝐻(𝑠)) = 𝐺(𝑠)𝑈(𝑠)
𝑌(𝑠) = 𝐺(𝑠)𝑈(𝑠) / (1 + 𝐺(𝑠)𝐻(𝑠))
𝑌(𝑠)/𝑈(𝑠) = 𝐺(𝑠) / (1 + 𝐺(𝑠)𝐻(𝑠))
Should the system contain a gain, the transfer function is expressed as follows.
𝑌(𝑠)/𝑈(𝑠) = 𝐾𝐺(𝑠) / (1 + 𝐾𝐺(𝑠)𝐻(𝑠))
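As promised above, the three architectures can be sketched in a few lines of Python using polynomial coefficient arrays (highest power of s first). This is an illustrative helper set, not the tool used to produce the figures in this chapter.

import numpy as np

def series(g1, g2):
    # G1(s)*G2(s), each transfer function given as a (num, den) pair
    return np.polymul(g1[0], g2[0]), np.polymul(g1[1], g2[1])

def parallel(g1, g2):
    # G1(s) + G2(s)
    num = np.polyadd(np.polymul(g1[0], g2[1]), np.polymul(g2[0], g1[1]))
    return num, np.polymul(g1[1], g2[1])

def feedback(g, h=([1], [1])):
    # G/(1 + GH); the default H(s) = 1 gives a unity feedback loop
    num = np.polymul(g[0], h[1])
    den = np.polyadd(np.polymul(g[1], h[1]), np.polymul(g[0], h[0]))
    return num, den

G = ([1], [1, 3])              # example: G(s) = 1/(s + 3) with unity feedback
print(feedback(G))             # -> 1/(s + 4)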
6.1 Block Diagram Rules
Along with these architectural advantages, two rules can be used to reduce block
diagrams. These rules involve both pickoff points and summing junctions; they are useful for
moving transfer functions about these points and help with the reduction. The two rules are
shown below.
Rule #1 – Moving transfer functions around a pickoff point
Rule #2 – Moving transfer functions around a summing junction
After these rules and architectures are understood, it is possible to reduce virtually any
structure into a single transfer function.
6.2 Reduction Examples
To show the effectiveness of these reductions several examples are performed. Once the
transfer functions are reduced, the output of the effective transfer function is compared to the
original system output to find the percent difference. If the reduction was done properly the
percent error should be close to zero.
Example #1
Step #1: Reduce inner most feedback loop
Step #2: Combine blocks in series
Step #3: Combine blocks in parallel
Step #4: Combine blocks in series
Step #5: Reduce inner feedback loop
Step #6: Combine blocks in series
Step #7: Reduce final feedback loop
After the block diagram is reduced, it is possible to verify the results by comparing the
system output after each step. This is shown by the following figure.
Figure 47: Results Simulation of Example #1
As the above figure shows, the system output for each reduction is identical. Along with
that, the error between the final and original output is zero. This verifies the results of the block
diagram reduction. Once this is understood, another example is carried out.
Example #2
Step #1: Simplify both inner feedback loops
Step #2: Combine in Series
Step #3: Remove Final Feedback Loop
After the block diagram is reduced, it is possible to verify the results by comparing the
system output after each step. This comparison is made in the following figure.
Figure 48: Results Simulation of Example #2
As the above figure shows, the system output for each reduction is identical. Along with
that, the error between the final and original output is zero. This verifies the results of the block
diagram reduction. Once this is understood, another example is carried out.
Example #3
Step #1: Consolidate inner loops
Step #2: Combine transfer functions in series and simplify right hand feedback loop
Step #3: Combine loops
Step #4: Combine in series
Step #5: Remove final feedback loop
After the block diagram is reduced, it is possible to verify the results by comparing the
system output after each step. This comparison is made in the following figure.
Figure 49: Results Simulation of Example #3
The above figure shows that the system response is the same for each of the reductions
performed. Along with that, the error between the original and final block diagram is zero, so
the solution is correct.
Chapter 7: Controller Design using Root-
Locus Method
Now that single input/single output systems are understood, controller design can be
taken into consideration. To do this, the closed loop characteristic equation must be tracked as a
function of the gain. This means that the behavior of the roots of the characteristic equation
needs to be plotted in the s-domain with respect to a change in the gain value. The method of
doing this is known as the Root Locus method. To define the Root Locus for a system, seven
rules along with the magnitude lemma are important. These are listed below, and a short
computational sketch follows them.
Rule #1 – The Root Locus is always symmetric with respect to the real axis.
Rule #2 – A Root Locus is made of “N” branches starting at the poles. These branches will
proceed to either the zeros of the system or follow the asymptotic lines.
Rule #3 – The interesting points for the Root Locus are the pole and zero locations. A segment
of the real axis is part of the Root Locus if the number of interesting points (real poles and
zeros) to its right is odd; if that number is even, the segment is not part of the Root Locus.
Rule #4 – The location of the intersection of the asymptotic lines with the real axis is located at
the point provided by the following formula.
𝜎 = (∑ Real Parts of the Poles − ∑ Real Parts of the Zeros) / (𝑁 − 𝑀)
Where:
𝑁 = Order of the Denominator
𝑀 = Order of the Numerator
Rule #5 – This provides the angles that the asymptotic lines make with the real axis at the
intersection point.
𝜙𝐴𝑗 = (2𝑗 + 1) ∗ 180 / (𝑁 − 𝑀)
Where:
𝑗 = 𝑖 − 1
𝑖 = 1, … , (𝑁 − 𝑀)
Rule #6 – The following equation yields the location of the breakaway points. These points are
where the Root Locus leaves the real axis and becomes imaginary.
𝑑/𝑑𝑠 [𝐺(𝑠)𝐻(𝑠)] = 0
Rule #7 – This provides the location of the transition points for the system. These points are
where the Root Locus crosses the imaginary axis. To find them, the closed loop transfer
function must be solved for. After this function is found, set up the Routh-Hurwitz array for the
system, then set the auxiliary polynomial formed from the 𝑠² row equal to zero and solve for s.
Magnitude Lemma – The formula below is used to find the gain value at any point on the Root
Locus.
|1 + 𝐾𝐺( 𝑠) 𝐻( 𝑠)| = 0
| 𝐾𝐺( 𝑠) 𝐻( 𝑠)| = 1
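The asymptote rules above lend themselves to a quick computation. The sketch below is a minimal helper (not part of the original notes) that returns the centroid and asymptote angles from a list of poles and zeros; it is checked against Example #2 of Section 7.2.

import numpy as np

def asymptotes(poles, zeros):
    # centroid (Rule #4) and angles in degrees (Rule #5) of the asymptotic lines
    n, m = len(poles), len(zeros)
    sigma = (np.sum(np.real(poles)) - np.sum(np.real(zeros))) / (n - m)
    angles = [(2*j + 1) * 180.0 / (n - m) for j in range(n - m)]
    return sigma, angles

# G(s) = 1/((s - 2)(s + 4)(s + 14)) from Example #2
print(asymptotes([2, -4, -14], []))      # -> (-5.33..., [60.0, 180.0, 300.0])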
7.1 Manual Construction of Root Locus
After the rules for building the Root Locus are understood, a number of examples are
performed to illustrate the method. The first example shows how to build the Root Locus by
hand. The other three use computer simulation in place of the hand drawing.
Example #1
𝐺(𝑠) = (𝑠 + 15) / ((𝑠 − 2)(𝑠 + 4))
Step #1: Carry out rule #3
This figure shows how the Root Locus is defined on the real axis.
Step #2: Carry out rules #4 and #5
𝑁 − 𝑀 = 1
𝑖 = 1,2,
𝑗 = 𝑖 − 1 = 0,1,
Where i is the order of the denominator as well. After these calculations are made, the
asymptotic lines can be found using the following expression.
𝜙𝐴𝑗 = (2𝑗 + 1) ∗ 180 / (𝑁 − 𝑀)
Solving the equation for the given values of j yields the asymptotic lines. The angles of
the asymptotic lines for this system are derived below.
𝜙𝐴1 = (1)(180)/1 = 180°
𝜙𝐴2 = (3)(180)/1 = 540°
After the angles of the asymptotic lines are found, the intersection point with the real axis
must be determined. This can be done using the following equation.
𝜎 = (∑ 𝑅𝑃 𝑃𝑜𝑙𝑒𝑠 − ∑ 𝑅𝑃 𝑍𝑒𝑟𝑜𝑠) / (𝑁 − 𝑀) = (2 − 4 + 15)/1 = 13
In this case the asymptotic lines are not important in defining the Root Locus.
Step #3: Carry out rule #6
𝑑/𝑑𝑠 𝐺(𝑠) = 0
For this system:
𝑑/𝑑𝑠 𝐺(𝑠) = 𝑑/𝑑𝑠 [(𝑠 + 15)/(𝑠² + 2𝑠 − 8)] = [(1)(𝑠² + 2𝑠 − 8) − (𝑠 + 15)(2𝑠 + 2)] / (𝑠² + 2𝑠 − 8)² = (−𝑠² − 30𝑠 − 38) / (𝑠² + 2𝑠 − 8)² = 0
Analyzing the numerator to find its zeros yields:
−𝑠² − 30𝑠 − 38 = 0  →  𝑠² + 30𝑠 + 38 = 0
𝑠 = (−30 ± √(30² − 4(1)(38))) / (2 ∗ 1) = (−30 ± √748)/2 = −1.3253, −28.6748
𝑠𝐵1 = −1.3253
𝑠𝐵2 = −28.6748
Using these points to further define the Root Locus graph yields the following.
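The breakaway points can also be checked numerically. The short sketch below differentiates G(s) as a polynomial quotient using numpy's poly1d objects and finds the roots of the resulting numerator.

import numpy as np

num = np.poly1d([1, 15])                 # s + 15
den = np.poly1d([1, 2, -8])              # (s - 2)(s + 4)
# breakaway points satisfy d/ds G(s) = 0, i.e. num'*den - num*den' = 0
candidates = (np.polyder(num)*den - num*np.polyder(den)).roots
print(candidates)                        # -> approximately -28.675 and -1.325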
Step #4: Carry out rule #7
𝑌(𝑠)/𝑈(𝑠) = 𝑘𝐺(𝑠)/(1 + 𝑘𝐺(𝑠))
In this case:
𝑌(𝑠)/𝑈(𝑠) = 𝑘(𝑠 + 15) / (𝑠² + (𝑘 + 2)𝑠 + (15𝑘 − 8))
The array can now be constructed as shown in the table below.
𝑠² | 1        15𝑘 − 8
𝑠¹ | 𝑘 + 2
𝑠⁰ | 15𝑘 − 8
From this array the stable values of k are calculated below.
𝑘 > 0.533
This system only has one transition point, where s is equal to zero.
Step #5: Perform Magnitude Lemma to find the gain values to fully define the DNA analysis of
the system
𝑘𝑠∗ = |𝑠∗ − 2||𝑠∗ + 4| / |𝑠∗ + 15|
For the breakaway points:
𝑘𝑠=−1.325 = |−1.325 − 2||−1.325 + 4| / |−1.325 + 15| = (3.325 ∗ 2.675) / 13.675 = 0.65
𝑘𝑠=−28.67 = |−28.67 − 2||−28.67 + 4| / |−28.67 + 15| = (30.67 ∗ 24.67) / 13.67 = 55.35
After these calculations are performed the “DNA” analysis of the root locus can be
performed. This is done as shown below.
𝑘 = 0              Marginal System
𝑘 ∈ ]0, 𝑘1[        Unstable System, 2 FOS
𝑘 = 𝑘1             Marginal System, 2 FOS
𝑘 ∈ ]𝑘1, 𝑘2[       Stable System, 2 FOS
𝑘 = 𝑘2             Stable System, 2 FOS Repeated
𝑘 ∈ ]𝑘2, 𝑘3[       Stable System, 1 SOS
𝑘 = 𝑘3             Stable System, 2 FOS Repeated
𝑘 > 𝑘3             Stable System, 2 FOS
Where:
𝑘1 = 0.533
𝑘2 = 0.65
𝑘3 = 55.35
After the DNA analysis is carried out, the Root Locus for the system is fully defined.
The full Root Locus is shown as follows.
7.2 Computer Construction of Root Locus
After this example is performed by hand, three more are performed using computer
simulation. This method is useful for examining more complex Root Locus plots.
Example #2
Perform Root Locus on the following system to meet the desired specifications:
𝐺(𝑠) = 1 / ((𝑠 − 2)(𝑠 + 4)(𝑠 + 14))
𝜔𝑛 = 1.5 − 2.5
𝜁 = 0.25 − 0.4
Expanding the denominator:
𝐺(𝑠) = 1 / (𝑠³ + 16𝑠² + 20𝑠 − 112)
From here the asymptotic lines and the location of the intersection of these lines on the
real axis must be determined. To do this a series of minor calculations must be made first.
𝑁 − 𝑀 = 3
Where:
𝑁 = Order of the Denominator
𝑀 = Order of the Numerator
Also:
𝑖 = 1, 2, 3
𝑗 = 𝑖 − 1 = 0, 1, 2
Where i is the order of the denominator as well. After these calculations are made, the
asymptotic lines can be found using the following expression.
𝜙𝐴𝑗 = (2𝑗 + 1) ∗ 180 / (𝑁 − 𝑀)
Solving the equation for the given values of j yields the asymptotic lines. The angles of
the asymptotic lines for this system are derived below.
𝜙𝐴1 = (1)(180)/3 = 60°
𝜙𝐴2 = (3)(180)/3 = 180°
𝜙𝐴3 = (5)(180)/3 = 300° = −60°
After the angles of the asymptotic lines are found, the intersection point with the real axis
must be determined. This can be done using the following equation.
𝜎 = (∑ 𝑅𝑃 𝑃𝑜𝑙𝑒𝑠 − ∑ 𝑅𝑃 𝑍𝑒𝑟𝑜𝑠) / (𝑁 − 𝑀) = (2 − 4 − 14)/3 = −5.33
Following the generation of the asymptotic relationship, the break points for the root
locus must be found. These are the points where the root locus leaves the real axis and becomes
some form of a second order system. The break points can be found using the following
equation.
𝑑/𝑑𝑠 𝐺(𝑠) = 0
For this system:
𝑑/𝑑𝑠 𝐺(𝑠) = 𝑑/𝑑𝑠 [1/(𝑠³ + 16𝑠² + 20𝑠 − 112)] = −(3𝑠² + 32𝑠 + 20) / (𝑠³ + 16𝑠² + 20𝑠 − 112)² = 0
Analyzing the numerator to find its zeros yields:
3𝑠² + 32𝑠 + 20 = 0
𝑠 = (−32 ± √(32² − 4(3)(20))) / (2 ∗ 3) = (−32 ± √784)/6 = −0.667, −10
Upon further inspection of the root locus it is found that −10 is not part of the root locus,
so it is disregarded. These are the points at which the root locus leaves the real axis. From here
a Routh-Hurwitz analysis of the system is necessary. This will provide the k values for which
the system is stable. This is done by constructing the Routh-Hurwitz array based on the
denominator of the system. In order to construct this array the closed loop transfer function for
the system must be found. This can be done using the following equation.
𝑌(𝑠)/𝑈(𝑠) = 𝑘𝐺(𝑠)/(1 + 𝑘𝐺(𝑠))
In this case:
𝑌(𝑠)/𝑈(𝑠) = 𝑘 / (𝑠³ + 16𝑠² + 20𝑠 − 112 + 𝑘)
The array can now be constructed as shown in the table below.
𝑠³ | 1    20
𝑠² | 16   𝑘 − 112
𝑠¹ | 𝑏1
𝑠⁰ | 𝑘 − 112
From this array the stable values of k are calculated below.
𝑘 − 112 > 0  →  𝑘 > 112
𝑏1 = (432 − 𝑘)/16 > 0  →  𝑘 < 432
𝑘 ∈ ]112, 432[
This shows that the system will be stable for a range of k values from 112 to 432.
After this is found, a further calculation must be done. Using the 𝑠² row of the Routh-Hurwitz
array, the intersection points of the root locus and the imaginary axis can be found. This is done
as shown below.
16𝑠² + (𝑘 − 112) = 0
Where k in this case is the value at the crossing point. For this system:
𝑘 = 432
16𝑠² + (432 − 112) = 16𝑠² + 320 = 0
𝑠 = (0 ± √(0² − 4(16)(320))) / (2 ∗ 16) = (±√(−20480))/32 = ±√20 𝑖 ≈ ±4.47𝑖
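Since this example is meant to be completed with computer simulation, a minimal sketch of tracking the closed loop poles over a sweep of gain values is given below. It is an illustrative script (not the original simulation) that confirms the stable range and the imaginary axis crossing found above.

import numpy as np

# closed loop denominator for this example: s^3 + 16s^2 + 20s - 112 + k
for k in [50, 112, 250, 432, 500]:
    poles = np.roots([1, 16, 20, -112 + k])
    stable = np.all(poles.real < 0)
    print(f"k={k:4}:  poles={np.round(poles, 2)}  stable={stable}")
# k = 112 places a pole at the origin, k = 432 places a pair at about +/-4.47j,
# and only gains strictly inside ]112, 432[ give a stable closed loop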
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design
Introduction to Control System Design

Weitere ähnliche Inhalte

Was ist angesagt?

Modern Control - Lec 02 - Mathematical Modeling of Systems
Modern Control - Lec 02 - Mathematical Modeling of SystemsModern Control - Lec 02 - Mathematical Modeling of Systems
Modern Control - Lec 02 - Mathematical Modeling of SystemsAmr E. Mohamed
 
Lecture 2 transfer-function
Lecture 2 transfer-functionLecture 2 transfer-function
Lecture 2 transfer-functionSaifullah Memon
 
Time Domain and Frequency Domain
Time Domain and Frequency DomainTime Domain and Frequency Domain
Time Domain and Frequency Domainsajan gohel
 
State space analysis shortcut rules, control systems,
State space analysis shortcut rules, control systems, State space analysis shortcut rules, control systems,
State space analysis shortcut rules, control systems, Prajakta Pardeshi
 
Transfer function, determination of transfer function in mechanical and elect...
Transfer function, determination of transfer function in mechanical and elect...Transfer function, determination of transfer function in mechanical and elect...
Transfer function, determination of transfer function in mechanical and elect...Saad Mohammad Araf
 
Dcs lec01 - introduction to discrete-time control systems
Dcs   lec01 - introduction to discrete-time control systemsDcs   lec01 - introduction to discrete-time control systems
Dcs lec01 - introduction to discrete-time control systemsAmr E. Mohamed
 
STate Space Analysis
STate Space AnalysisSTate Space Analysis
STate Space AnalysisHussain K
 
Exercise 1a transfer functions - solutions
Exercise 1a   transfer functions - solutionsExercise 1a   transfer functions - solutions
Exercise 1a transfer functions - solutionswondimu wolde
 
Control system lectures
Control system lectures Control system lectures
Control system lectures Naqqash Sajid
 
Pe 3032 wk 1 introduction to control system march 04e
Pe 3032 wk 1 introduction to control system  march 04ePe 3032 wk 1 introduction to control system  march 04e
Pe 3032 wk 1 introduction to control system march 04eCharlton Inao
 
linear algebra in control systems
linear algebra in control systemslinear algebra in control systems
linear algebra in control systemsGanesh Bhat
 
Chapter 4 time domain analysis
Chapter 4 time domain analysisChapter 4 time domain analysis
Chapter 4 time domain analysisBin Biny Bino
 
Ch2 mathematical modeling of control system
Ch2 mathematical modeling of control system Ch2 mathematical modeling of control system
Ch2 mathematical modeling of control system Elaf A.Saeed
 
Control System Design
Control System DesignControl System Design
Control System DesignHitesh Sharma
 
IC8451 Control Systems
IC8451 Control SystemsIC8451 Control Systems
IC8451 Control Systemsrmkceteee
 
Root locus method
Root locus methodRoot locus method
Root locus methodRavi Patel
 
Stability of Control System
Stability of Control SystemStability of Control System
Stability of Control Systemvaibhav jindal
 
3 modelling of physical systems
3 modelling of physical systems3 modelling of physical systems
3 modelling of physical systemsJoanna Lock
 

Was ist angesagt? (20)

Modern Control - Lec 02 - Mathematical Modeling of Systems
Modern Control - Lec 02 - Mathematical Modeling of SystemsModern Control - Lec 02 - Mathematical Modeling of Systems
Modern Control - Lec 02 - Mathematical Modeling of Systems
 
Lecture 2 transfer-function
Lecture 2 transfer-functionLecture 2 transfer-function
Lecture 2 transfer-function
 
Time Domain and Frequency Domain
Time Domain and Frequency DomainTime Domain and Frequency Domain
Time Domain and Frequency Domain
 
State space analysis shortcut rules, control systems,
State space analysis shortcut rules, control systems, State space analysis shortcut rules, control systems,
State space analysis shortcut rules, control systems,
 
Basics of control system
Basics of control system Basics of control system
Basics of control system
 
Transfer function, determination of transfer function in mechanical and elect...
Transfer function, determination of transfer function in mechanical and elect...Transfer function, determination of transfer function in mechanical and elect...
Transfer function, determination of transfer function in mechanical and elect...
 
Dcs lec01 - introduction to discrete-time control systems
Dcs   lec01 - introduction to discrete-time control systemsDcs   lec01 - introduction to discrete-time control systems
Dcs lec01 - introduction to discrete-time control systems
 
STate Space Analysis
STate Space AnalysisSTate Space Analysis
STate Space Analysis
 
Exercise 1a transfer functions - solutions
Exercise 1a   transfer functions - solutionsExercise 1a   transfer functions - solutions
Exercise 1a transfer functions - solutions
 
Control system lectures
Control system lectures Control system lectures
Control system lectures
 
Pe 3032 wk 1 introduction to control system march 04e
Pe 3032 wk 1 introduction to control system  march 04ePe 3032 wk 1 introduction to control system  march 04e
Pe 3032 wk 1 introduction to control system march 04e
 
linear algebra in control systems
linear algebra in control systemslinear algebra in control systems
linear algebra in control systems
 
Chapter 4 time domain analysis
Chapter 4 time domain analysisChapter 4 time domain analysis
Chapter 4 time domain analysis
 
Ch2 mathematical modeling of control system
Ch2 mathematical modeling of control system Ch2 mathematical modeling of control system
Ch2 mathematical modeling of control system
 
Modern control system
Modern control systemModern control system
Modern control system
 
Control System Design
Control System DesignControl System Design
Control System Design
 
IC8451 Control Systems
IC8451 Control SystemsIC8451 Control Systems
IC8451 Control Systems
 
Root locus method
Root locus methodRoot locus method
Root locus method
 
Stability of Control System
Stability of Control SystemStability of Control System
Stability of Control System
 
3 modelling of physical systems
3 modelling of physical systems3 modelling of physical systems
3 modelling of physical systems
 

Andere mochten auch

Lag Compensator
Lag CompensatorLag Compensator
Lag CompensatorIslam Naqi
 
control system Lab 01-introduction to transfer functions
control system Lab 01-introduction to transfer functionscontrol system Lab 01-introduction to transfer functions
control system Lab 01-introduction to transfer functionsnalan karunanayake
 
CONTROL SYSTEM LAB MANUAL
CONTROL SYSTEM LAB MANUALCONTROL SYSTEM LAB MANUAL
CONTROL SYSTEM LAB MANUALPRINCE SHARMA
 
8178001772 control
8178001772 control8178001772 control
8178001772 controlMaRwa Hamed
 
Control systems engineering. by i.j. nagrath
Control systems engineering. by i.j. nagrathControl systems engineering. by i.j. nagrath
Control systems engineering. by i.j. nagrathSri Harsha
 
computer Integrated Manufacturing التصنيع المتكامل باستخدام الحاسب(CIM)
computer Integrated Manufacturing التصنيع المتكامل باستخدام الحاسب(CIM)computer Integrated Manufacturing التصنيع المتكامل باستخدام الحاسب(CIM)
computer Integrated Manufacturing التصنيع المتكامل باستخدام الحاسب(CIM)HayyanSayyed
 
High Performance Digital Control Presentation Apec 2016 Dr. Hamish Laird
High Performance Digital Control Presentation Apec 2016 Dr. Hamish LairdHigh Performance Digital Control Presentation Apec 2016 Dr. Hamish Laird
High Performance Digital Control Presentation Apec 2016 Dr. Hamish LairdHamish Laird
 
Multivariable Control System Design for Quadruple Tank Process using Quantita...
Multivariable Control System Design for Quadruple Tank Process using Quantita...Multivariable Control System Design for Quadruple Tank Process using Quantita...
Multivariable Control System Design for Quadruple Tank Process using Quantita...IDES Editor
 
Slope Stability Analysis Using Graphical Method تبسيط الطريقة البيانية في تحل...
Slope Stability Analysis Using Graphical Method تبسيط الطريقة البيانية في تحل...Slope Stability Analysis Using Graphical Method تبسيط الطريقة البيانية في تحل...
Slope Stability Analysis Using Graphical Method تبسيط الطريقة البيانية في تحل...Ali A. Alzahrani
 
استخدام الكمبيوتر بشكل كامل في التصنيع والأتمتة Cim
استخدام الكمبيوتر بشكل كامل في التصنيع والأتمتة Cimاستخدام الكمبيوتر بشكل كامل في التصنيع والأتمتة Cim
استخدام الكمبيوتر بشكل كامل في التصنيع والأتمتة Cimhayyansa
 
static series synchronus compensator
static series synchronus compensatorstatic series synchronus compensator
static series synchronus compensatorbhupendra kumar
 

Andere mochten auch (20)

Lead-lag controller
Lead-lag controllerLead-lag controller
Lead-lag controller
 
Lag Compensator
Lag CompensatorLag Compensator
Lag Compensator
 
control system Lab 01-introduction to transfer functions
control system Lab 01-introduction to transfer functionscontrol system Lab 01-introduction to transfer functions
control system Lab 01-introduction to transfer functions
 
LINAC 4 – Control System Design (small)
LINAC 4 – Control System Design (small)LINAC 4 – Control System Design (small)
LINAC 4 – Control System Design (small)
 
Control chap8
Control chap8Control chap8
Control chap8
 
CONTROL SYSTEM LAB MANUAL
CONTROL SYSTEM LAB MANUALCONTROL SYSTEM LAB MANUAL
CONTROL SYSTEM LAB MANUAL
 
8178001772 control
8178001772 control8178001772 control
8178001772 control
 
Control systems engineering. by i.j. nagrath
Control systems engineering. by i.j. nagrathControl systems engineering. by i.j. nagrath
Control systems engineering. by i.j. nagrath
 
Compensation ppt
Compensation pptCompensation ppt
Compensation ppt
 
Pole Placement in Digital Control
Pole Placement in Digital ControlPole Placement in Digital Control
Pole Placement in Digital Control
 
computer Integrated Manufacturing التصنيع المتكامل باستخدام الحاسب(CIM)
computer Integrated Manufacturing التصنيع المتكامل باستخدام الحاسب(CIM)computer Integrated Manufacturing التصنيع المتكامل باستخدام الحاسب(CIM)
computer Integrated Manufacturing التصنيع المتكامل باستخدام الحاسب(CIM)
 
High Performance Digital Control Presentation Apec 2016 Dr. Hamish Laird
High Performance Digital Control Presentation Apec 2016 Dr. Hamish LairdHigh Performance Digital Control Presentation Apec 2016 Dr. Hamish Laird
High Performance Digital Control Presentation Apec 2016 Dr. Hamish Laird
 
Multivariable Control System Design for Quadruple Tank Process using Quantita...
Multivariable Control System Design for Quadruple Tank Process using Quantita...Multivariable Control System Design for Quadruple Tank Process using Quantita...
Multivariable Control System Design for Quadruple Tank Process using Quantita...
 
Slope Stability Analysis Using Graphical Method تبسيط الطريقة البيانية في تحل...
Slope Stability Analysis Using Graphical Method تبسيط الطريقة البيانية في تحل...Slope Stability Analysis Using Graphical Method تبسيط الطريقة البيانية في تحل...
Slope Stability Analysis Using Graphical Method تبسيط الطريقة البيانية في تحل...
 
Control chap10
Control chap10Control chap10
Control chap10
 
Control chap9
Control chap9Control chap9
Control chap9
 
Control chap7
Control chap7Control chap7
Control chap7
 
استخدام الكمبيوتر بشكل كامل في التصنيع والأتمتة Cim
استخدام الكمبيوتر بشكل كامل في التصنيع والأتمتة Cimاستخدام الكمبيوتر بشكل كامل في التصنيع والأتمتة Cim
استخدام الكمبيوتر بشكل كامل في التصنيع والأتمتة Cim
 
Control chap5
Control chap5Control chap5
Control chap5
 
static series synchronus compensator
static series synchronus compensatorstatic series synchronus compensator
static series synchronus compensator
 

Ähnlich wie Introduction to Control System Design

Introduction to Automatic Control Systems
Introduction to Automatic Control SystemsIntroduction to Automatic Control Systems
Introduction to Automatic Control SystemsMushahid Khan Yusufzai
 
Open Loop and close loop control system ppt.pptx
Open Loop and close loop control system ppt.pptxOpen Loop and close loop control system ppt.pptx
Open Loop and close loop control system ppt.pptxAmritSingha5
 
Control systemengineering notes.pdf
Control systemengineering notes.pdfControl systemengineering notes.pdf
Control systemengineering notes.pdfk vimal kumar
 
CS Mod1AzDOCUMENTS.in.pptx
CS Mod1AzDOCUMENTS.in.pptxCS Mod1AzDOCUMENTS.in.pptx
CS Mod1AzDOCUMENTS.in.pptxShruthiShillu1
 
Meeting w12 chapter 4 part 2
Meeting w12   chapter 4 part 2Meeting w12   chapter 4 part 2
Meeting w12 chapter 4 part 2Hattori Sidek
 
Control Systems Feedback.pdf
Control Systems Feedback.pdfControl Systems Feedback.pdf
Control Systems Feedback.pdfLGGaming5
 
Basic elements in control systems
Basic elements in control systemsBasic elements in control systems
Basic elements in control systemsSatheeshCS2
 
Unity Feedback PD Controller Design for an Electronic Throttle Body
Unity Feedback PD Controller Design for an Electronic Throttle BodyUnity Feedback PD Controller Design for an Electronic Throttle Body
Unity Feedback PD Controller Design for an Electronic Throttle BodySteven Ernst, PE
 
Integral Backstepping Sliding Mode Control of Chaotic Forced Van Der Pol Osci...
Integral Backstepping Sliding Mode Control of Chaotic Forced Van Der Pol Osci...Integral Backstepping Sliding Mode Control of Chaotic Forced Van Der Pol Osci...
Integral Backstepping Sliding Mode Control of Chaotic Forced Van Der Pol Osci...ijctcm
 
Control systems
Control systems Control systems
Control systems Dr.YNM
 
Meeting w1 chapter 1
Meeting w1   chapter 1Meeting w1   chapter 1
Meeting w1 chapter 1Hattori Sidek
 
Introduction to control system 1
Introduction to control system 1Introduction to control system 1
Introduction to control system 1turna67
 
KNL3353_Control_System_Engineering_Lectu.ppt
KNL3353_Control_System_Engineering_Lectu.pptKNL3353_Control_System_Engineering_Lectu.ppt
KNL3353_Control_System_Engineering_Lectu.pptSherAli984263
 
Some important tips for control systems
Some important tips for control systemsSome important tips for control systems
Some important tips for control systemsmanish katara
 

Ähnlich wie Introduction to Control System Design (20)

ME416A_Module 1.pdf
ME416A_Module 1.pdfME416A_Module 1.pdf
ME416A_Module 1.pdf
 
Introduction to Automatic Control Systems
Introduction to Automatic Control SystemsIntroduction to Automatic Control Systems
Introduction to Automatic Control Systems
 
Control Systems
Control SystemsControl Systems
Control Systems
 
Open Loop and close loop control system ppt.pptx
Open Loop and close loop control system ppt.pptxOpen Loop and close loop control system ppt.pptx
Open Loop and close loop control system ppt.pptx
 
Control systemengineering notes.pdf
Control systemengineering notes.pdfControl systemengineering notes.pdf
Control systemengineering notes.pdf
 
lecture1423904331 (1).pdf
lecture1423904331 (1).pdflecture1423904331 (1).pdf
lecture1423904331 (1).pdf
 
lecture1423904331 (1).pdf
lecture1423904331 (1).pdflecture1423904331 (1).pdf
lecture1423904331 (1).pdf
 
Lec3
Lec3Lec3
Lec3
 
CS Mod1AzDOCUMENTS.in.pptx
CS Mod1AzDOCUMENTS.in.pptxCS Mod1AzDOCUMENTS.in.pptx
CS Mod1AzDOCUMENTS.in.pptx
 
Chapter 1
Chapter 1Chapter 1
Chapter 1
 
Meeting w12 chapter 4 part 2
Meeting w12   chapter 4 part 2Meeting w12   chapter 4 part 2
Meeting w12 chapter 4 part 2
 
Control Systems Feedback.pdf
Control Systems Feedback.pdfControl Systems Feedback.pdf
Control Systems Feedback.pdf
 
Basic elements in control systems
Basic elements in control systemsBasic elements in control systems
Basic elements in control systems
 
Unity Feedback PD Controller Design for an Electronic Throttle Body
Unity Feedback PD Controller Design for an Electronic Throttle BodyUnity Feedback PD Controller Design for an Electronic Throttle Body
Unity Feedback PD Controller Design for an Electronic Throttle Body
 
Integral Backstepping Sliding Mode Control of Chaotic Forced Van Der Pol Osci...
Integral Backstepping Sliding Mode Control of Chaotic Forced Van Der Pol Osci...Integral Backstepping Sliding Mode Control of Chaotic Forced Van Der Pol Osci...
Integral Backstepping Sliding Mode Control of Chaotic Forced Van Der Pol Osci...
 
Control systems
Control systems Control systems
Control systems
 
Meeting w1 chapter 1
Meeting w1   chapter 1Meeting w1   chapter 1
Meeting w1 chapter 1
 
Introduction to control system 1
Introduction to control system 1Introduction to control system 1
Introduction to control system 1
 
KNL3353_Control_System_Engineering_Lectu.ppt
KNL3353_Control_System_Engineering_Lectu.pptKNL3353_Control_System_Engineering_Lectu.ppt
KNL3353_Control_System_Engineering_Lectu.ppt
 
Some important tips for control systems
Some important tips for control systemsSome important tips for control systems
Some important tips for control systems
 

Mehr von Andrew Wilhelm

Infrastructure Requirements for Urban Air Mobility: A Financial Evaluation
Infrastructure Requirements for Urban Air Mobility: A Financial EvaluationInfrastructure Requirements for Urban Air Mobility: A Financial Evaluation
Infrastructure Requirements for Urban Air Mobility: A Financial EvaluationAndrew Wilhelm
 
Additive Manufacturing in the Aerospace Sector: An Intellectual Property Case...
Additive Manufacturing in the Aerospace Sector: An Intellectual Property Case...Additive Manufacturing in the Aerospace Sector: An Intellectual Property Case...
Additive Manufacturing in the Aerospace Sector: An Intellectual Property Case...Andrew Wilhelm
 
Forecasting Hybrid Aircraft: How Changing Policy is Driving Innovation
Forecasting Hybrid Aircraft: How Changing Policy is Driving InnovationForecasting Hybrid Aircraft: How Changing Policy is Driving Innovation
Forecasting Hybrid Aircraft: How Changing Policy is Driving InnovationAndrew Wilhelm
 
eCommerce and the Third-Party Logistics Sector
eCommerce and the Third-Party Logistics SectoreCommerce and the Third-Party Logistics Sector
eCommerce and the Third-Party Logistics SectorAndrew Wilhelm
 
Cmu financial analysis
Cmu financial analysisCmu financial analysis
Cmu financial analysisAndrew Wilhelm
 
Market Assessment of Commercial Supersonic Aviation
Market Assessment of Commercial Supersonic AviationMarket Assessment of Commercial Supersonic Aviation
Market Assessment of Commercial Supersonic AviationAndrew Wilhelm
 
Delphi Forecast for Curing Down Syndrome
Delphi Forecast for Curing Down SyndromeDelphi Forecast for Curing Down Syndrome
Delphi Forecast for Curing Down SyndromeAndrew Wilhelm
 

Mehr von Andrew Wilhelm (9)

Infrastructure Requirements for Urban Air Mobility: A Financial Evaluation
Infrastructure Requirements for Urban Air Mobility: A Financial EvaluationInfrastructure Requirements for Urban Air Mobility: A Financial Evaluation
Infrastructure Requirements for Urban Air Mobility: A Financial Evaluation
 
Additive Manufacturing in the Aerospace Sector: An Intellectual Property Case...
Additive Manufacturing in the Aerospace Sector: An Intellectual Property Case...Additive Manufacturing in the Aerospace Sector: An Intellectual Property Case...
Additive Manufacturing in the Aerospace Sector: An Intellectual Property Case...
 
Forecasting Hybrid Aircraft: How Changing Policy is Driving Innovation
Forecasting Hybrid Aircraft: How Changing Policy is Driving InnovationForecasting Hybrid Aircraft: How Changing Policy is Driving Innovation
Forecasting Hybrid Aircraft: How Changing Policy is Driving Innovation
 
eCommerce and the Third-Party Logistics Sector
eCommerce and the Third-Party Logistics SectoreCommerce and the Third-Party Logistics Sector
eCommerce and the Third-Party Logistics Sector
 
Cmu financial analysis
Cmu financial analysisCmu financial analysis
Cmu financial analysis
 
Market Assessment of Commercial Supersonic Aviation
Market Assessment of Commercial Supersonic AviationMarket Assessment of Commercial Supersonic Aviation
Market Assessment of Commercial Supersonic Aviation
 
Delphi Forecast for Curing Down Syndrome
Delphi Forecast for Curing Down SyndromeDelphi Forecast for Curing Down Syndrome
Delphi Forecast for Curing Down Syndrome
 
State space design
State space designState space design
State space design
 
Flight_Vehicle_Design
Flight_Vehicle_DesignFlight_Vehicle_Design
Flight_Vehicle_Design
 

Kürzlich hochgeladen

Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Service
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best ServiceTamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Service
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Servicemeghakumariji156
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . pptDineshKumar4165
 
Generative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTGenerative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTbhaskargani46
 
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
COST-EFFETIVE  and Energy Efficient BUILDINGS ptxCOST-EFFETIVE  and Energy Efficient BUILDINGS ptx
COST-EFFETIVE and Energy Efficient BUILDINGS ptxJIT KUMAR GUPTA
 
data_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfdata_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfJiananWang21
 
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills KuwaitKuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwaitjaanualu31
 
Minimum and Maximum Modes of microprocessor 8086
Minimum and Maximum Modes of microprocessor 8086Minimum and Maximum Modes of microprocessor 8086
Minimum and Maximum Modes of microprocessor 8086anil_gaur
 
notes on Evolution Of Analytic Scalability.ppt
notes on Evolution Of Analytic Scalability.pptnotes on Evolution Of Analytic Scalability.ppt
notes on Evolution Of Analytic Scalability.pptMsecMca
 
Double Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torqueDouble Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torqueBhangaleSonal
 
Air Compressor reciprocating single stage
Air Compressor reciprocating single stageAir Compressor reciprocating single stage
Air Compressor reciprocating single stageAbc194748
 
Introduction to Control System Design

  • 2. 2 | P a g e Table of Contents Chapter 1: Introduction to Control and the Open Loop Transfer Function .................................... 4 Chapter 2: Laplace Transformation ................................................................................................ 7 2.1 Real Distinct Roots ..................................................................................................... 10 2.2 Real Distinct Roots with Multiplicity......................................................................... 14 2.3 Complex Conjugate Roots.......................................................................................... 18 Chapter 3: Mathematical Modeling of Physical Systems ............................................................. 25 3.1 Translational System .................................................................................................. 25 3.1.1 Single Mass System ................................................................................................ 26 3.1.2 Multiple Mass System............................................................................................. 29 3.2 Rotational Systems ..................................................................................................... 34 Chapter 4: Analysis of Time Response for Different Systems ..................................................... 42 4.1 Generic First Order System ........................................................................................ 42 4.1.1 Specifications of a Generic First Order System...................................................... 44 4.1.2 Non-Generic First Order System ............................................................................ 46 4.2 Generic Second Order System.................................................................................... 47 4.2.1 Specifications of a Generic Second Order System ................................................. 54 4.2.2 Non-Generic Second Order System........................................................................ 56 4.2.3 Blended Systems..................................................................................................... 57 Chapter 5: Steady State Error and Stability Analysis ................................................................... 61 5.1 Steady State Error Constants ...................................................................................... 62 5.1.1 Type Zero System................................................................................................... 64 5.1.2 Type One System.................................................................................................... 65 5.1.3 Type Two System ................................................................................................... 66 5.2 Routh-Hurwitz Stability Criterion.............................................................................. 66 5.2.1 Routh-Hurwitz Array.............................................................................................. 67 5.2.2 Stability of a Closed Loop Response ...................................................................... 69 Chapter 6: Complex Block Diagram Reduction ........................................................................... 71 6.1 Block Diagram Rules.................................................................................................. 72 6.2 Reduction Examples ................................................................................................... 
76 Chapter 7: Controller Design using Root-Locus Method ............................................................. 85
  • 3. 3 | P a g e 7.1 Manual Construction of Root Locus........................................................................... 86 7.2 Computer Construction of Root Locus....................................................................... 90 Chapter 8: Compensator Design ................................................................................................. 110 8.1 Phase Lead Compensator.......................................................................................... 112 8.2 Proportional + Integral (PI) Compensator ................................................................ 125
  • 4. 4 | P a g e Chapter 1: Introduction to Control and the Open Loop Transfer Function To begin the analysis of a control problem, the idea of an open loop transfer function must be formulated. The idea is to relate a specific input to the output of the system, and this is done by a transfer function. The transfer function, in essence, represents the system that the input is applied to. Once the input is applied, the system produces an output determined by that specific transfer function. This relation is shown below. Figure 1: Open Loop System The transfer function is represented mathematically by the following formula. Transfer Function = L[Output] / L[Input] = G(s) = Num(s) / Den(s) From here it is evident that the transfer function is made up of both a numerator and a denominator. Both come from the linear, constant coefficient differential equation that describes the system, which must be solved in the s-domain using a Laplace transformation. With this in mind, the transfer function is written in the s-domain, switching back and forth between domains when going from input to output. When written in the s-domain, the numerator and denominator essentially define the system. The numerator carries the input and the initial conditions of the system, and its roots are the zeros of the system. The denominator is just as important, or more so. The denominator of the transfer function is known as the characteristic equation of the system; it is found from the differential equation without involving the initial conditions of the system. It is the roots of the characteristic equation that ultimately define how the system will behave and what type of response is expected. These roots are known as the poles of the system. The system described above only shows the input/output relationship of a system. It does not involve any kind of control whatsoever. In order to control the system, a more complex
  • 5. 5 | P a g e system must be generated. The key to controlling a system is feedback from the output back to the input; control is impossible without it. This is done by closing the loop on the system: to have control there must be a closed loop system that feeds the output back to the input. This relationship is shown in the following figure. Figure 2: Closed Loop System As seen in the figure above, the feedback flows from the output channel back to a summing junction at the input. The controller is located in front of the system, which is represented by a transfer function, in the control loop. This controller adjusts the signal entering the system by a given value to yield more desirable results. The primary goal of automatic control is to design the controller placed in front of the system. This controller is also known as the gain. In simple cases this gain is a constant value, but in more complex control systems the gain is itself another transfer function. When designing this controller, two key factors must be taken into account. First, the stability of the system is the most important requirement for the control system. The gain given to the system must not be such that the system is driven unstable. Instability can be caused by a number of factors, including too high a constant gain value, along with dominance issues caused by the gain on the system. The second factor is how the system will respond and whether or not that response is within the set of specifications. These specifications are provided by the customer and typically come in two forms: the transient, or immediate, response and the steady state response. When all of these are taken into account, the proper type of controller can be selected and then designed. After the design aspects of a controller are understood, the different types of dynamic systems that are controlled must be defined. These dynamic systems are typically broken into five categories, which are represented in the figure below.
  • 6. 6 | P a g e Figure 3: Classifications of Dynamic Systems The systems in the left column of the figure above are much simpler than those on the right. When dealing with more complex control systems, some of the attributes associated with the right column must be taken into account. An example of a time varying system would be an aircraft in flight. As the aircraft progresses along its flight path, fuel is burned. This means the mass of the aircraft varies with respect to time, making it a much more complicated system to control. Along with that, an unknown system requires prior knowledge to predict future system behavior rather than a defined system model. Finally, a system with noise is much more complicated. This is important because all real systems have some sort of noise or interference within them; a noise free system is the ideal case and is never fully achieved. The systems used here will only be those on the left side. These are the simplest systems and must be understood before more complex schemes can be created. From here the systems must be modeled and then converted into transfer functions in the s-domain. Once this is done, analysis is carried out to find the specifications of the systems and to see how changes in the systems affect those specifications. After the specifications are understood, controller design is described.
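Before moving into the Laplace machinery, the closed loop idea above can be made concrete with a short numerical sketch. The plant G(s) = 1/(s^2 + 3s + 2) used below is an illustrative assumption, not a system taken from the text; the point is only that, with unity feedback and a constant gain K in front of the plant, the closed loop denominator becomes Den(s) + K*Num(s), so changing K moves the poles and therefore the stability and shape of the response.

```python
# A hedged sketch, not a design from the text: close the loop around an assumed
# plant G(s) = 1/(s^2 + 3s + 2) with a constant gain K and unity feedback.  The
# closed-loop transfer function is K*Num(s) / (Den(s) + K*Num(s)), so the
# closed-loop poles are the roots of Den(s) + K*Num(s).
import numpy as np

num = [1.0]                # Num(s) = 1           (assumed illustrative plant)
den = [1.0, 3.0, 2.0]      # Den(s) = s^2 + 3s + 2, open-loop poles at -1, -2

for K in (1.0, 10.0, 100.0):
    cl_den = np.polyadd(den, K * np.array(num))   # Den(s) + K*Num(s)
    poles = np.roots(cl_den)
    print(f"K = {K:6.1f}   closed-loop poles: {np.round(poles, 3)}")
# The real parts of these poles decide stability and the speed of the response;
# raising K reshapes them, which is exactly what controller design must manage.
```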
  • 7. 7 | P a g e Chapter 2: Laplace Transformation The modeling of control systems must begin with the Laplace transformation. The primary goal of modeling is to derive the equations of motion for the system. Since these equations are differential equations, it is necessary to reduce the differentials into a single algebraic equation. To do this a Laplace transformation is carried out. A Laplace transformation can be used on linear, constant coefficient differential equations. When applied, the differentials are reduced to an algebraic expression. Along with this, the independent variable, typically time (t), switches to a new domain: the s-domain. A Laplace transformation essentially re-expresses the dependent variable, originally defined through differentials, in the s-domain. As stated above, the s-domain involves only an algebraic expression rather than the complex differentials. This is what makes the Laplace transformation ideal for mapping the equations of motion of control systems. To apply the Laplace transformation, a few steps must be performed. The equations of motion for the system are written first; these are differential equations in the t-domain. The Laplace transformation is then performed on these equations. Once in the s-domain, a partial fraction expansion is performed on the algebraic equations of motion. After this expansion is done, the inverse Laplace transformation is carried out, which yields the final solution in the t-domain. This process is represented in the figure below. Figure 4: Laplace Transformation of Systems (t-domain to s-domain and back to the t-domain)
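The three-step path sketched in Figure 4 can also be walked through symbolically. The sketch below is a minimal example using SymPy; the first order equation y' + 2y = 1 with y(0) = 0 is an illustrative choice, not one of the worked examples that follow.

```python
# A minimal SymPy sketch of the t-domain -> s-domain -> t-domain workflow of
# Figure 4, for the assumed example y' + 2y = 1 with y(0) = 0.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = sp.symbols('Y')

# Step 1: transform the equation.  With y(0) = 0 it becomes s*Y + 2*Y = 1/s.
transformed = sp.Eq(s*Y + 2*Y, sp.Rational(1, 1)/s)
Y_of_s = sp.solve(transformed, Y)[0]      # = 1/(s*(s + 2))

# Step 2: partial fraction expansion in the s-domain.
expanded = sp.apart(Y_of_s, s)            # = 1/(2*s) - 1/(2*(s + 2))

# Step 3: inverse Laplace transformation back to the t-domain.
y_of_t = sp.inverse_laplace_transform(expanded, s, t)
print(expanded)
print(sp.simplify(y_of_t))                # y(t) = 1/2 - exp(-2*t)/2
                                          # (SymPy may keep a Heaviside(t) factor)
```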
  • 8. 8 | P a g e Along with the method of applying a Laplace transformation, several properties of Laplace must be defined. 1) Linearity 𝐿[ 𝑐1 𝑓1( 𝑡) + 𝑐2 𝑓2 ( 𝑡)] = 𝐿[ 𝑐1 𝑓1( 𝑡)] + 𝐿[ 𝑐2 𝑓2( 𝑡)] = 𝑐1 𝐹1 ( 𝑠) + 𝑐2 𝐹2( 𝑠) This property basically describes how the Laplace of a sum of functions is the same as the Laplace of both parts separately. 2) Derivatives of a Function 𝐿[𝑓̇( 𝑡)] = 𝑠𝐹( 𝑠) − 𝑓(0) 𝐿[𝑓̈( 𝑡)] = 𝑠2 𝐹( 𝑠) − 𝑠𝑓(0) − 𝑓̇(0) 𝐿[ 𝑓 𝑛( 𝑡)] = 𝑠 𝑛 𝐹( 𝑠) − 𝑠 𝑛−1 𝑓(0)… − 𝑓 𝑛−1(0) The second property describes the most useful tool of the Laplace transformation. This describes how the Laplace transformation takes differentials and transforms them into algebraic polynomials. This is the primary function of a Laplace transformation 3) Integral 𝐿 [∫ 𝑓( 𝑡) 𝑑𝑡] = 𝐹( 𝑠) 𝑠 This property shows how the Laplace of an integral is expressed in the s-domain. 4) Initial Value Theory 𝑓(0) = lim 𝑡→0 𝑓( 𝑡) = lim 𝑠→∞ 𝑠𝐹( 𝑠) The initial value theory is useful for finding the starting value of the function without having to perform the full partial fraction expansion. This theory is only true if the function is continuous over the domain space. 5) Final Value Theory 𝑓𝑠𝑠 = lim 𝑡→∞ 𝑓( 𝑡) = lim 𝑠→0 𝑠𝐹( 𝑠) The final property of Laplace allows for the solution of the steady state value without having to perform the partial fraction expansion. For this to hold valid the function must be stable in the s-domain. This, in conjunction with the initial value theory, is very useful when trying to define how a system behaves without solving the complex equations of motion. After the properties of Laplace are better understood, some simple Laplace transformations of function are found. There are many Laplace transformations for different functions but the common Laplace transformations are listed in the table below.
  • 9. 9 | P a g e Table 1: Laplace Transformation Table for Common Functions Function Time Domain f(t) s-Domain F(s) Unit Impulse δ(t) 1 Unit Step 1 1/s Ramp T 1/s2 Nth power tn n!/sn+1 Exponential Decay e-at 1/(s+a) te-at 1/(s+a)2 Sine sin(ωt) ω/(s2+ω2) Cosine cos(ωt) s/(s2+ω2) From here an example problem is worked to form a transfer function from an ordinary, linear, constant coefficient differential equation. This is shown in the following steps. Example #1 𝑥̈( 𝑡) + 𝑎𝑥̇( 𝑡) + 𝑏𝑥( 𝑡) = 1 𝑥(0) = 𝑐1 , 𝑥̇(0) = 𝑐2 1) Use the second property of Laplace on the differentials in the equation 𝐿[ 𝑥̈( 𝑡)] = 𝑠2 𝑋( 𝑠) − 𝑠𝑥(0) − 𝑥̇(0) = 𝑠2 𝑋( 𝑠) − 𝑠𝑐1 − 𝑐2 𝐿[ 𝑥̇( 𝑡)] = 𝑠𝑋( 𝑠) − 𝑥(0) = 𝑠𝑋( 𝑠) − 𝑐1 𝐿[ 𝑥( 𝑡)] = 𝑋( 𝑠) 𝐿[1] = 1 𝑠 2) Substitute these expressions back into the original differential equation ( 𝑠2 𝑋( 𝑠) − 𝑠𝑐1 − 𝑐2) + 𝑎( 𝑠𝑋( 𝑠) − 𝑐1) + 𝑏 𝑋( 𝑠) = 1 𝑠 3) Separate terms that are constant and those that contain a function 𝑠2 𝑋( 𝑠) + 𝑎𝑠𝑋( 𝑠) + 𝑏 𝑋( 𝑠) = 1 𝑠 + 𝑠𝑐1 + 𝑐2 + 𝑎𝑐1 𝑠2 𝑋( 𝑠) + 𝑎𝑠𝑋( 𝑠) + 𝑏 𝑋( 𝑠) = 𝑐1 𝑠2 + 𝑐2 𝑠 + 𝑎𝑐1 𝑠 + 1 𝑠 4) Pull the function out and set all other terms on the other side of the equation
  • 10. 10 | P a g e 𝑋( 𝑠)[ 𝑠2 + 𝑎𝑠 + 𝑏] = 𝑐1 𝑠2 + 𝑐2 𝑠 + 𝑎𝑐1 𝑠 + 1 𝑠 𝑋( 𝑠) = 𝑐1 𝑠2 + 𝑐2 𝑠 + 𝑎𝑐1 𝑠 + 1 𝑠[ 𝑠2 + 𝑎𝑠 + 𝑏] = 𝑁𝑢𝑚( 𝑠) 𝐷𝑒𝑛( 𝑠) Once the transfer function is acquired, the more complicated partial fraction expansion must be performed. The expansion is dependent on the nature of the roots in the denominator. There are three cases that these roots can follow. These cases are listed below in order of increasing complexity. 2.1 Real Distinct Roots The first case is the case of real distinct roots. This is when the roots of the denominator are only made up of real numbers that are non-repeating. In the s-domain these roots are described as shown in the following figure. Figure 5: Real Distinct Roots As shown in the figure, these roots lie purely on the real axis. This is the simplest case of the Laplace transformation. An example using real distinct roots is described next. -7 -6 -5 -4 -3 -2 -1 0 1 -1 -0.8 -0.6 -0.4 -0.2 0 0.2 0.4 0.6 0.8 1 Real Axis ImaginaryAxis
  • 11. 11 | P a g e Example #2 𝑦̈( 𝑡) + 7𝑦̇( 𝑡) + 10𝑦( 𝑡) = 2 𝑦(0) = 1 , 𝑦̇(0) = 2 Step #1 𝐿[ 𝑦̈( 𝑡)] = 𝑠2 𝑌( 𝑠) − 𝑠𝑦(0) − 𝑦̇(0) = 𝑠2 𝑌( 𝑠) − 𝑠 − 2 𝐿[ 𝑦̇( 𝑡)] = 𝑠𝑌( 𝑠) − 𝑦(0) = 𝑠𝑌( 𝑠) − 1 𝐿[ 𝑦( 𝑡)] = 𝑌( 𝑠) 𝐿[2] = 2 𝑠 Step #2 ( 𝑠2 𝑌( 𝑠) − 𝑠 − 2) + 7( 𝑠𝑌( 𝑠) − 1) + 10𝑌( 𝑠) = 2 𝑠 Step #3 𝑠2 𝑌( 𝑠) + 7𝑠𝑌( 𝑠) + 10𝑌( 𝑠) = 2 𝑠 + 𝑠 + 7 + 2 𝑠2 𝑌( 𝑠) + 7𝑠𝑌( 𝑠) + 10𝑌( 𝑠) = 𝑠2 + 9𝑠 + 2 𝑠 Step #4 𝑌( 𝑠)[ 𝑠2 + 7𝑠 + 10] = 𝑠2 + 9𝑠 + 2 𝑠 𝑌( 𝑠) = 𝑠2 + 9𝑠 + 2 𝑠[ 𝑠2 + 7𝑠 + 10] = 𝑠2 + 9𝑠 + 2 𝑠( 𝑠 + 2)( 𝑠 + 5) After the steps described above are used to find the transfer function defining the system. The partial fraction expansion must be carried out. This example shows a function with real distinct roots shown by the following figure.
  • 12. 12 | P a g e Figure 6: Pole Locations of Example #2 To begin the partial fraction the roots of the denominator must be split up into separate fractions. Each of these will be dependent on a constant coefficient which is located in the numerator. This can be seen below. 𝑌( 𝑠) = 𝑘1 𝑠 + 𝑘2 ( 𝑠 + 2) + 𝑘3 ( 𝑠 + 5) From here each coefficient must be solved for. This is done by multiplying the entire function, or transfer function, by the root. Then the function is solved for at the value which makes the root drive the function undefined. This number would also be the number that makes the specific partial fraction divide by zero. The solution for the partial fraction coefficients for this example is shown as following. 𝑘1 = [ 𝑠𝑌( 𝑠)] 𝑠=0 = [𝑠 𝑠2 + 9𝑠 + 2 𝑠( 𝑠 + 2)( 𝑠 + 5) ] 𝑠=0 = [ 𝑠2 + 9𝑠 + 2 ( 𝑠 + 2)( 𝑠 + 5) ] 𝑠=0 = (0)2 + 9(0) + 2 ((0) + 2)((0)+ 5) = 2 (2)(5) = 2 10 = 0.2 -6 -5 -4 -3 -2 -1 0 1 -1 -0.8 -0.6 -0.4 -0.2 0 0.2 0.4 0.6 0.8 1 Real Axis ImaginaryAxis
  • 13. 13 | P a g e 𝑘2 = [( 𝑠 + 2) 𝑌( 𝑠)] 𝑠=−2 = [( 𝑠 + 2) 𝑠2 + 9𝑠 + 2 𝑠( 𝑠 + 2)( 𝑠 + 5) ] 𝑠=−2 = [ 𝑠2 + 9𝑠 + 2 𝑠( 𝑠 + 5) ] 𝑠=−2 = (−2)2 + 9(−2) + 2 (−2)((−2)+ 5) = 4 − 18 + 2 (−2)(3) = −12 −6 = 2 𝑘3 = [( 𝑠 + 5) 𝑌( 𝑠)] 𝑠=−5 = [( 𝑠 + 5) 𝑠2 + 9𝑠 + 2 𝑠( 𝑠 + 2)( 𝑠 + 5) ] 𝑠=−5 = [ 𝑠2 + 9𝑠 + 2 𝑠( 𝑠 + 2) ] 𝑠=−5 = (−5)2 + 9(−5)+ 2 (−5)((−5)+ 2) = 25 − 45 + 2 (−5)(−3) = −18 15 = −1.2 After these coefficients are solved for, they are substituted back into the partial fraction expansion of the function. 𝑌( 𝑠) = 0.2 ∗ 1 𝑠 + 2 ∗ 1 𝑠 + 2 − 1.2 ∗ 1 𝑠 + 5 Finally, this function in the s-domain is rewritten back in the t-domain by taking the inverse Laplace transformation. 𝑦( 𝑡) = 0.2 + 2 ∗ 𝑒−2𝑡 − 1.2 ∗ 𝑒−5𝑡 To verify the results of the partial fraction expansion overall Laplace transformation, the final function is solved for at the initial condition. If the initial conditions checks then the solution is correct. 𝑦(0) = 0.2 + 2 ∗ 𝑒−2(0) − 1.2 ∗ 𝑒−5(0) 𝑦(0) = 0.2 + 2(1)− 1.2(1) 𝑦(0) = 1 Once the solution is verified, it is possible to plot the time order response of the given transfer function. This time order response is shown in the next figure.
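Before looking at the plotted response, the expansion can be cross-checked numerically. The sketch below uses SciPy's residue routine, which returns the partial fraction coefficients and poles of Num(s)/Den(s) directly; it should reproduce the hand-calculated values k1 = 0.2, k2 = 2 and k3 = -1.2.

```python
# Numerical cross-check of the Example #2 expansion:
# Y(s) = (s^2 + 9s + 2) / (s(s + 2)(s + 5)), denominator expanded below.
from scipy.signal import residue

num = [1, 9, 2]          # s^2 + 9s + 2
den = [1, 7, 10, 0]      # s(s + 2)(s + 5) = s^3 + 7s^2 + 10s

r, p, k = residue(num, den)
for ri, pi in zip(r, p):
    print(f"pole s = {pi.real:5.1f}   coefficient k = {ri.real:6.2f}")
# Expected: k = 0.2 at s = 0, k = 2 at s = -2, k = -1.2 at s = -5,
# i.e. y(t) = 0.2 + 2*exp(-2t) - 1.2*exp(-5t).
```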
  • 14. 14 | P a g e Figure 7: Time Order Response for Example #2 2.2 Real Distinct Roots with Multiplicity The next case taken into account is when the roots of the denominator are real and repeated. This is shown in the s-domain by the following figure. Figure 8: Roots with Multiplicity
  • 15. 15 | P a g e As shown in the figure, these roots lie purely on the real axis but occur at the same locations. This is the next hardest case of the Laplace transformation. An example using real distinct roots with multiplicity is described next. Example #3 𝑦̈( 𝑡) + 10𝑦̇( 𝑡) + 25𝑦( 𝑡) = 𝑒−5𝑡 𝑦(0) = 1 , 𝑦̇(0) = 1 Step #1 𝐿[ 𝑦̈( 𝑡)] = 𝑠2 𝑌( 𝑠) − 𝑠𝑦(0) − 𝑦̇(0) = 𝑠2 𝑌( 𝑠) − 𝑠 − 1 𝐿[ 𝑦̇( 𝑡)] = 𝑠𝑌( 𝑠) − 𝑦(0) = 𝑠𝑌( 𝑠) − 1 𝐿[ 𝑦( 𝑡)] = 𝑌( 𝑠) 𝐿[ 𝑒−5𝑡] = 1 𝑠 + 5 Step #2 ( 𝑠2 𝑌( 𝑠) − 𝑠 − 1) + 10( 𝑠𝑌( 𝑠) − 1) + 25𝑌( 𝑠) = 1 𝑠 + 5 Step #3 𝑠2 𝑌( 𝑠) + 10𝑠𝑌( 𝑠) + 25𝑌( 𝑠) = 1 𝑠 + 5 + 𝑠 + 1 + 10 𝑠2 𝑌( 𝑠) + 10𝑠𝑌( 𝑠) + 25𝑌( 𝑠) = 1 𝑠 + 5 + 𝑠( 𝑠 + 5) 𝑠 + 5 + 11( 𝑠 + 5) 𝑠 + 5 = 𝑠2 + 16𝑠 + 56 𝑠 + 5 Step #4 𝑌( 𝑠)[ 𝑠2 + 10𝑠 + 25] = 𝑠2 + 16𝑠 + 56 𝑠 + 5 𝑌( 𝑠) = 𝑠2 + 16𝑠 + 56 ( 𝑠 + 5)[ 𝑠2 + 10𝑠 + 25] = 𝑠2 + 16𝑠 + 56 ( 𝑠 + 5)( 𝑠 + 5)( 𝑠 + 5) = 𝑠2 + 16𝑠 + 56 ( 𝑠 + 5)3 This function has a repeated root in the denominator. These are seen in the following figure.
  • 16. 16 | P a g e Figure 9: Location of Roots for Example #3 This makes the partial fraction expansion that of the second case. The partial fraction expansion for this repeated root example is shown below. 𝑌( 𝑠) = 𝑘11 ( 𝑠 + 5) + 𝑘12 ( 𝑠 + 5)2 + 𝑘13 ( 𝑠 + 5)3 From this partial fraction expansion it is evident that the coefficients will be slightly different than the distinct roots. Although, the process used to find these roots is very similar. 𝑘13 = [( 𝑠 + 5)3 𝑌( 𝑠)] 𝑠=−5 = [( 𝑠 + 5)3 𝑠2 + 16𝑠 + 56 ( 𝑠 + 5)3 ] 𝑠=−5 = [ 𝑠2 + 16𝑠 + 56] 𝑠=−5 = (−5)2 + 16(−5) + 56 = 25 − 80 + 56 = 1 𝑘12 = 1 1! 𝑑 𝑑𝑠 [( 𝑠 + 5)3 𝑌( 𝑠)] 𝑠=−5 = 𝑑 𝑑𝑠 [( 𝑠 + 5)3 𝑠2 + 16𝑠 + 56 ( 𝑠 + 5)3 ] 𝑠=−5 = 𝑑 𝑑𝑠 [ 𝑠2 + 16𝑠 + 56] 𝑠=−5 = [2𝑠 + 16] 𝑠=−5 = −10 + 16 = 6 𝑘11 = 1 2! 𝑑2 𝑑𝑠2 [( 𝑠 + 5)3 𝑌( 𝑠)] 𝑠=−5 = 1 2 𝑑2 𝑑𝑠2 [( 𝑠 + 5)3 𝑠2 + 16𝑠 + 56 ( 𝑠 + 5)3 ] 𝑠=−5 = 1 2 𝑑 𝑑𝑠 [2𝑠 + 16] 𝑠=−5 = 1 2 ∗ [2] = 1 -6 -5 -4 -3 -2 -1 0 1 -1 -0.8 -0.6 -0.4 -0.2 0 0.2 0.4 0.6 0.8 1 Real Axis ImaginaryAxis
  • 17. 17 | P a g e As shown above the coefficients are worked out by taking a derivative. Along with this derivative, a factorial must divide the final result depending on which coefficient is being solved for. After the coefficients are solved for then they are substituted back into the expression and the inverse Laplace transformation is applied just as case #1. 𝑌( 𝑠) = 1 ∗ 1 ( 𝑠 + 5) + 6 ∗ 1 ( 𝑠 + 5)2 + 1 ∗ 1 ( 𝑠 + 5)3 𝑦( 𝑡) = 𝑒−5𝑡 + 6 ∗ 𝑡𝑒−5𝑡 + 𝑡2 2 𝑒−5𝑡 To verify the results of the partial fraction expansion overall Laplace transformation, the final function is solved for at the initial condition. If the initial conditions checks then the solution is correct. 𝑦(0) = 1 ∗ 𝑒−5(0) + 6 ∗ (0) 𝑒−5(0) + 1 ∗ (0)2 2 𝑒−5(0) 𝑦(0) = 1 + 0 + 0 𝑦(0) = 1 Once the solution is verified, it is possible to plot the time order response of the given transfer function. This time order response is shown in the next figure. Figure 10: Time Order Response for Example #3 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 -0.2 0 0.2 0.4 0.6 0.8 1 1.2 1.4 Time (s) Responce
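The repeated-root expansion can be checked the same way. A minimal sketch is given below; SciPy's residue routine reports one coefficient per power of the repeated factor, so three values are returned for the (s + 5), (s + 5)^2 and (s + 5)^3 terms.

```python
# Numerical check of the Example #3 expansion with a repeated root at s = -5:
# Y(s) = (s^2 + 16s + 56) / (s + 5)^3.
from scipy.signal import residue

num = [1, 16, 56]            # s^2 + 16s + 56
den = [1, 15, 75, 125]       # (s + 5)^3 expanded

r, p, k = residue(num, den)
print("poles:", p)           # three repeated poles at s = -5
print("coefficients:", r)    # expected k11 = 1, k12 = 6, k13 = 1
```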
  • 18. 18 | P a g e 2.3 Complex Conjugate Roots The final case discussed is the case of complex conjugate roots. This is where the roots of the denominator have both a real and imaginary component. The roots always act in pairs, one on both the negative and positive side of the imaginary axis. This is described by the figure below. Figure 11: Complex Conjugate Roots The figure above shows how the roots act in pairs and act off the real axis. Roots of this nature are the most complicated to deal with due to the complex number. A numerical example involving complex conjugate roots is described as follows. Example #4 𝑦̈( 𝑡) + 3𝑦̇( 𝑡) + 11𝑦( 𝑡) = 5 𝑦(0) = 2 , 𝑦̇(0) = 4 Step #1 𝐿[ 𝑦̈( 𝑡)] = 𝑠2 𝑌( 𝑠) − 𝑠𝑦(0) − 𝑦̇(0) = 𝑠2 𝑌( 𝑠) − 2𝑠 − 4 𝐿[ 𝑦̇( 𝑡)] = 𝑠𝑌( 𝑠) − 𝑦(0) = 𝑠𝑌( 𝑠) − 2 𝐿[ 𝑦( 𝑡)] = 𝑌( 𝑠) -4 -3.5 -3 -2.5 -2 -1.5 -1 -0.5 0 0.5 1 -5 -4 -3 -2 -1 0 1 2 3 4 5 Real Axis ImaginaryAxis
  • 19. 19 | P a g e 𝐿[5] = 5 𝑠 Step #2 ( 𝑠2 𝑌( 𝑠) − 2𝑠 − 4) + 3( 𝑠𝑌( 𝑠) − 2) + 11𝑌( 𝑠) = 5 𝑠 Step #3 𝑠2 𝑌( 𝑠) + 3𝑠𝑌( 𝑠) + 11𝑌( 𝑠) = 5 𝑠 + 2𝑠 + 4 + 6 𝑠2 𝑌( 𝑠) + 3𝑠𝑌( 𝑠) + 11𝑌( 𝑠) = 2𝑠2 + 10𝑠 + 5 𝑠 Step #4 𝑌( 𝑠)[ 𝑠2 + 3𝑠 + 11] = 2𝑠2 + 10𝑠 + 5 𝑠 𝑌( 𝑠) = 2𝑠2 + 10𝑠 + 5 𝑠[ 𝑠2 + 3𝑠 + 11] Since the quadratic in the numerator is not factorable, the quadratic equation must be used to find the roots of the equation. −3 ± √32 − 4(1)(11) 2(1) = −3 ± √−35 2 = − 3 2 ± √35 2 𝑖 The roots from the quadratic equation are then put back into the function. 𝑌( 𝑠) = 2𝑠2 + 10𝑠 + 5 𝑠 (𝑠 + 3 2 + √35 2 𝑖)(𝑠 + 3 2 − √35 2 𝑖) This function has a complex conjugate pair in the denominator. The roots of this equation are shown in the following figure.
  • 20. 20 | P a g e Figure 12: Location of Roots for Example #4 Now the partial fraction expansion of the function is performed. Since the complex conjugate pair is related, the coefficients are also a conjugate pair. 𝑌( 𝑠) = 𝑘1 𝑠 + 𝑘2 (𝑠 + 3 2 + √35 2 𝑖) + 𝑘2 ∗ (𝑠 + 3 2 − √35 2 𝑖) The first coefficient solved for is that of the real distinct root. 𝑘1 = [ 𝑠𝑌( 𝑠)] 𝑠=0 = [𝑠 2𝑠2 + 10𝑠 + 5 𝑠[ 𝑠2 + 3𝑠 + 11] ] 𝑠=0 = [ 2𝑠2 + 10𝑠 + 5 [ 𝑠2 + 3𝑠 + 11] ] 𝑠=0 = 2(0)2 + 10(0)+ 5 (0)2 + 3(0) + 11 = 5 11 The next step is to move to the complex conjugate roots. 𝑘2 = [(𝑠 + 3 2 + √35 2 𝑖) 𝑌( 𝑠)] 𝑠=− 3 2 − √35 2 𝑖 = [ (𝑠 + 3 2 + √35 2 𝑖) 2𝑠2 + 10𝑠 + 5 𝑠 (𝑠 + 3 2 + √35 2 𝑖) (𝑠 + 3 2 − √35 2 𝑖) ] 𝑠=− 3 2 − √35 2 𝑖 -4 -3.5 -3 -2.5 -2 -1.5 -1 -0.5 0 0.5 1 -3 -2 -1 0 1 2 3 Real Axis ImaginaryAxis
  • 21. 21 | P a g e = [ 2𝑠2 + 10𝑠 + 5 𝑠 (𝑠 + 3 2 − √35 2 𝑖) ] 𝑠=− 3 2 − √35 2 𝑖 = 2 (− 3 2 − √35 2 𝑖) 2 + 10 (− 3 2 − √35 2 𝑖) + 5 (− 3 2 − √35 2 𝑖)(− 3 2 − √35 2 𝑖 + 3 2 − √35 2 𝑖) = 2 ( 9 4 − 3√35 2 𝑖 − 35 2 ) + (− 30 2 − 10√35 2 𝑖) + 5 ( 3√35 2 𝑖 − 35 2 ) = − 81 2 − 8√35𝑖 − 35 2 + 3√35 2 𝑖 At this point the fraction must be rationalized. This is done to remove the imaginary part from the denominator of the fraction. 𝑘2 = − 81 2 − 8√35𝑖 − 35 2 + 3√35 2 𝑖 ∗ ( − 35 2 − 3√35 2 𝑖 − 35 2 − 3√35 2 𝑖 ) = 1155 4 + 803√35 4 𝑖 385 = 0.75 + 3.085𝑖 Since the coefficient is a conjugate pair the following holds true. 𝑘2 = 𝑘2 ∗ = 0.75 − 3.085𝑖 Substituting the coefficients back into the partial fraction expansion of the function and taking the inverse Laplace transformation yields the following. 𝑌( 𝑠) = 0.4545 ∗ 1 𝑠 + (0.75 + 3.085𝑖) ∗ 1 (𝑠 + 3 2 + √35 2 𝑖) + (0.75 − 3.085𝑖) ∗ 1 (𝑠 + 3 2 − √35 2 𝑖) 𝑦( 𝑡) = 0.4545 + (0.75 + 3.085𝑖) 𝑒 (− 3 2 −𝑖 √35 2 ) 𝑡 + (0.75 − 3.085𝑖) 𝑒 (− 3 2 +𝑖 √35 2 ) 𝑡 Once the inverse Laplace transformation is carried out, some simplifications are necessary to remove the imaginary numbers from the function. The first way to do this is to apply Moivre’s formula to the imaginary coefficient pair. The method of applying this formula is shown below. From the complex relationship: 𝑥 = 𝑎 + 𝑖𝑏 𝑥∗ = 𝑎 − 𝑖𝑏 Plotting these functions:
  • 22. 22 | P a g e From this plot the following relationships can be formed: 𝑥 = 𝑎 + 𝑖𝑏 = 𝑀𝑒 𝑖𝛷 𝑥∗ = 𝑎 − 𝑖𝑏 = 𝑀𝑒−𝑖𝛷 Where: 𝑀 = √ 𝑎2 + 𝑏2 𝛷 = tan−1 ( 𝑏 𝑎 ) Applying this formula to the example will reduce the coefficients of the complex roots as following. 𝑘2 = 0.75 + 3.085𝑖 = 𝑀𝑒 𝑖𝛷 𝑘2 ∗ = 0.75 − 3.085𝑖 = 𝑀𝑒−𝑖𝛷 𝑀 = √(0.75)2 + (3.085)2 = 3.175 𝛷 = tan−1 3.085 0.75 = 1.33 𝑘2 = 3.175𝑒1.33𝑖 𝑘2 ∗ = 3.175𝑒−1.33𝑖 Once this is done, the inverse Laplace transformation of the function can be rewritten. 𝑦( 𝑡) = 0.4545 + 3.175𝑒1.33𝑖 𝑒 (− 3 2 −𝑖 √35 2 ) 𝑡 + 3.175𝑒−1.33𝑖 𝑒 (− 3 2 +𝑖 √35 2 ) 𝑡 a ib -ib Φ M
  • 23. 23 | P a g e At this point, another formula is used to reduce the equation farther. This simplification is known as the Euler sine and cosine formulas. To perform the reduction the equation must first be rearranged as shown. 𝑦( 𝑡) = 0.4545 + 3.175𝑒−1.5𝑡 [𝑒 −𝑖(√35 2 𝑡+1.33) + 𝑒 𝑖(√35 2 𝑡+1.33) ] From here the relationships below are used to reduce the equation. 𝑒 𝑖𝑥 = cos( 𝑥) + 𝑖 sin( 𝑥) 𝑒−𝑖𝑥 = cos( 𝑥) − 𝑖 sin( 𝑥) So: 𝑒 𝑖𝑥 + 𝑒−𝑖𝑥 = cos( 𝑥) + 𝑖 sin( 𝑥) + cos( 𝑥) − 𝑖 sin( 𝑥) = 2cos( 𝑥) 𝑒 𝑖𝑥 − 𝑒−𝑖𝑥 = cos( 𝑥) + 𝑖 sin( 𝑥) − cos( 𝑥) + 𝑖 sin( 𝑥) = 𝑖 ∗ 2 sin( 𝑥) Using the expressions above the following reduction can be made. [𝑒 −𝑖(√35 2 𝑡+1.33) + 𝑒 𝑖(√35 2 𝑡+1.33) ] = 2cos( √35 2 𝑡 + 1.33) Therefore: 𝑦( 𝑡) = 0.4545 + 3.175𝑒−1.5𝑡 [2 cos( √35 2 𝑡 + 1.33)] 𝑦( 𝑡) = 0.4545 + 6.35𝑒−1.5𝑡 cos( √35 2 𝑡 + 1.33) From here it is possible to plot the time order response for the given transfer function. This time order response is shown in the next figure.
  • 24. 24 | P a g e Figure 13: Time Order Response for Example #4 This is the final solution for the complex conjugate case of partial fraction expansion. Although difficult, these functions are very useful in analyzing an aircraft in flight. Once all the cases for breaking down a partial fraction expansion are understood, the system can be created. The Laplace transformation sets up the transfer functions necessary to describe the motion of a mechanical system. With these transformations the solution of these systems can be easily generated. This will be analyzed in depth next. 0 1 2 3 4 5 6 -3 -2 -1 0 1 2 3 Time (s) Responce
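As a closing note on this chapter, the complex conjugate case also lends itself to a quick numerical check. In the sketch below the residues of the Example #4 transfer function are computed with SciPy and each conjugate pair is converted directly into the damped cosine form 2M e^(sigma*t) cos(omega*t + Phi) that the Moivre and Euler reductions arrive at by hand; a check of this kind is a convenient way to confirm the rationalization algebra.

```python
# Residues of Y(s) = (2s^2 + 10s + 5) / (s(s^2 + 3s + 11)) and conversion of
# the conjugate pair into a single damped cosine term.
import numpy as np
from scipy.signal import residue

num = [2, 10, 5]
den = [1, 3, 11, 0]                      # s(s^2 + 3s + 11)

r, p, k = residue(num, den)
for ri, pi in zip(r, p):
    if abs(pi.imag) < 1e-9:              # real pole: constant / exponential term
        print(f"real pole {pi.real:6.3f}: coefficient {ri.real:6.4f}")
    elif pi.imag > 0:                    # report each conjugate pair once
        M, phi = abs(ri), np.angle(ri)
        sigma, omega = pi.real, pi.imag
        print(f"pair at {sigma:.3f} +/- {omega:.3f}j: "
              f"term 2*{M:.4f}*exp({sigma:.3f}t)*cos({omega:.3f}t + {phi:.4f})")
```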
  • 25. 25 | P a g e Chapter 3: Mathematical Modeling of Physical Systems To better understand the concept of a transfer function, some physical systems are broken down and modeled. The two main types of systems analyzed are translational systems and rotational systems. These systems behave in the same way, just under different loads. Once these systems are modeled, an equilibrium equation is applied to solve the system. Eventually these equations are used to form the transfer functions that represent how the mechanical system translates or rotates. In order to solve for the transfer functions governing the system, a six step process is necessary. These steps are described as follows. Step #1: Evaluate the type of system presented and identify the input/output relationships Step #2: Draw the free body diagram of the system in the t-domain Step #3: Draw the free body diagram of the system in the s-domain Step #4: Perform the Laplace transformation and write the equations of motion for the system assuming equilibrium conditions Step #5: For multipart systems use Cramer's rule to solve for the displacements X1(s), X2(s), ... Xn(s) Step #6: Divide the displacements by the applied force to form the transfer functions Once the steps necessary to solve for the transfer functions are understood, the physical model is broken down into three components. 3.1 Translational System The translational system is made up of three components, which are described as follows. 1) Inertial (mass) FI(t) = m ẍ(t) FI(s) = m s² X(s) 2) Friction (Viscous)
  • 26. 26 | P a g e 𝐹𝐹 ( 𝑡) = 𝑓𝑣 𝑥̇( 𝑡) 𝐹𝐹 ( 𝑠) = 𝑓𝑣 𝑠𝑋( 𝑠) 3) Elastic 𝐹𝐸 ( 𝑡) = 𝑘𝑥( 𝑡) 𝐹𝐸 ( 𝑠) = 𝑘𝑋( 𝑠) To better understand how all of these components interact with each other a few examples of a translation system should be solved. The first example represents a simple single mass system. 3.1.1 Single Mass System In order to solve for the transfer function, the steps outline prior must be carried out. Step #1 𝐼𝑛𝑝𝑢𝑡: 𝐹𝐴 ( 𝑡) 𝑂𝑢𝑡𝑝𝑢𝑡: 𝑥1( 𝑡) 𝑇𝑟𝑎𝑛𝑠𝑓𝑒𝑟 𝐹𝑢𝑛𝑐𝑡𝑖𝑜𝑛: 𝑋1( 𝑠) 𝐹𝐴 ( 𝑠) After the input/output relationships are found, the free body diagram in the t-domain must be constructed.
  • 27. 27 | P a g e Step #2 After the free body diagram is expressed in the t-domain, it must be rewritten in the s-domain by carrying out a simple Laplace transformation. Step #3 Once the free body diagrams are drawn and understood, the equations of motion for the system are derived. This system only has one mass; therefore there is only one equation of motion. In this case the equation of motion can be represented as follows. Step #4 M s² X1(s) + fv s X1(s) + k X1(s) = FA(s) Since this system has only one equation of motion, Cramer's rule is not necessary and the final two steps can be combined. Step #5 + #6 X1(s)[M s² + fv s + k] = FA(s) X1(s)/FA(s) = 1/[M s² + fv s + k] This final step yields the transfer function for the given system. With this transfer function the model can be simulated. To see how the system performs under varying conditions, a sensitivity analysis is performed. The first analysis considers variations in the mass of the system; a short simulation sketch is given below, and the response for this analysis is shown in the figure that follows.
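A minimal simulation sketch of this sensitivity analysis, using SciPy's step response for X1(s)/FA(s) = 1/(M s² + fv s + k). The damper and spring values and the list of masses are illustrative assumptions; the text does not state the parameter values behind its plots.

```python
# Step response of the single mass system for several masses; fv and k are
# assumed illustrative values.
import numpy as np
from scipy import signal

fv, k = 4.0, 500.0                        # assumed damper and spring constants
t = np.linspace(0.0, 20.0, 2000)

for M in (1.0, 3.0, 5.0, 7.0, 9.0):
    system = ([1.0], [M, fv, k])          # Num(s) = 1, Den(s) = M s^2 + fv s + k
    _, x1 = signal.step(system, T=t)
    print(f"M = {M:3.0f} kg: value at t = 20 s: {x1[-1]:.5f}, peak: {x1.max():.5f}")
# With fv and k fixed, increasing M leaves the steady state value 1/k unchanged
# but slows the response and lowers the effective damping ratio.
```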
  • 28. 28 | P a g e Figure 14: Analysis of Mass Variation for Example #1 As shown, increasing the mas of the system decreases the transient response of the system. It also increases the time it takes for the system to converge to its steady state value. Next, variations of the damping experienced by the mass are shown. This analysis is found below. Figure 15: Analysis of Damper Variation for Example #1 0 2 4 6 8 10 12 14 16 18 20 -5 0 5 10 15 20 x 10 -4 Time (s) Responce System Responce for Varrying Mass M=1 (kg) M=3 (kg) M=5 (kg) M=7 (kg) M=9 (kg) 0 1 2 3 4 5 6 -1 -0.5 0 0.5 1 1.5 2 2.5 3 x 10 -3 Time (s) Responce System Responce for Varying Damper fv=2 (N/m) fv=4 (N/m) fv=6 (N/m) fv=8 (N/m) fv=10 (N/m)
  • 29. 29 | P a g e This analysis shows how increasing the damper value decreases the transient response of the system. Also, this increase causes the system to dampen out to its steady state value faster. Finally, the analysis of the spring constant is taken into consideration. The effects of varying the spring constant are shown in the following figure. Figure 16: Analysis of Spring Variation for Example #1 This figure shows that increasing the spring constant decreases the transient response of the system. As well, it increases the time it takes for the system to converge to its steady state value. Once this simple, single mass system is understood, a more complicated multiple mass system must be introduced. Although more complicated, the system follows the same procedure as the simple single mass system. 3.1.2 Multiple Mass System 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 -5 0 5 10 15 20 x 10 -4 Time (s) Responce System Responce for Varying Spring k=2 (N) k=4 (N) k=6 (N) k=8 (N) k=10 (N)
  • 30. 30 | P a g e Step #1 𝐼𝑛𝑝𝑢𝑡: 𝐹𝐴 ( 𝑡) 𝑂𝑢𝑡𝑝𝑢𝑡: 𝑥1( 𝑡) , 𝑥2( 𝑡) 𝑇𝑟𝑎𝑛𝑠𝑓𝑒𝑟 𝐹𝑢𝑛𝑐𝑡𝑖𝑜𝑛: 𝑋1( 𝑠) 𝐹𝐴 ( 𝑠) , 𝑋2( 𝑠) 𝐹𝐴 ( 𝑠) Step #2 Step #3 Step #4 𝑀1 𝑠2 𝑋1( 𝑠) + 𝑓𝑣 𝑠𝑋1( 𝑠) + ( 𝑘1 + 𝑘2) 𝑋1( 𝑠) − 𝑓𝑣 𝑠𝑋2( 𝑠) − 𝑘2 𝑋2( 𝑠) = 0 −𝑓𝑣 𝑠𝑋1( 𝑠) − 𝑘2 𝑋1( 𝑠) + 𝑀2 𝑠2 𝑋2( 𝑠) + 𝑘2 𝑋2( 𝑠) = 𝐹𝐴 ( 𝑠)
  • 31. 31 | P a g e This system of equations is then represented in matrix format. [ 𝑀1 𝑠2 + 𝑓𝑣 𝑠 + ( 𝑘1 + 𝑘2) −( 𝑓𝑣 𝑠 + 𝑘2) −( 𝑓𝑣 𝑠 + 𝑘2) 𝑀2 𝑠2 + 𝑓𝑣 𝑠 + 𝑘2 ] [ 𝑋1( 𝑠) 𝑋2( 𝑠) ] = [ 0 𝐹𝐴 ( 𝑠) ] Simplifying the matrix yields the following. [ 𝐴 𝐶 𝐶 𝐵 ][ 𝑋1( 𝑠) 𝑋2( 𝑠) ] = [ 0 𝐹𝐴 ( 𝑠) ] Step #5 𝑋1( 𝑠) = | 0 𝐶 𝐹𝐴 ( 𝑠) 𝐵 | | 𝐴 𝐶 𝐶 𝐵 | = (0) 𝐵 − 𝐶𝐹𝐴 ( 𝑠) 𝐴𝐵 − 𝐶2 = −𝐶𝐹𝐴 ( 𝑠) 𝐴𝐵 − 𝐶2 𝑋2( 𝑠) = | 𝐴 0 𝐶 𝐹𝐴 ( 𝑠) | | 𝐴 𝐶 𝐶 𝐵 | = 𝐴𝐹𝐴 ( 𝑠) − (0) 𝐶 𝐴𝐵 − 𝐶2 = 𝐴𝐹𝐴 ( 𝑠) 𝐴𝐵 − 𝐶2 Where: 𝐴𝐵 − 𝐶2 = (𝑀1 𝑠2 + 𝑓𝑣 𝑠 + ( 𝑘1 + 𝑘2))( 𝑀2 𝑠2 + 𝑓𝑣 𝑠 + 𝑘2) − ( 𝑓𝑣 𝑠 + 𝑘2)2 = 𝑀1 𝑀2 𝑠4 + ( 𝑀1 + 𝑀2) 𝑓𝑣 𝑠3 + ( 𝑀1 𝑘2 + 𝑓𝑣 2 + 𝑀2 𝑘1 + 𝑀2 𝑘2 − 𝑓𝑣 2) 𝑠2 + ( 𝑓𝑣 𝑘2 + 𝑓𝑣 𝑘1 + 𝑓𝑣 𝑘2 − 2𝑓𝑣 𝑘2) 𝑠 + ( 𝑘1 𝑘2 + 𝑘2 2 − 𝑘2 2) = 𝑀1 𝑀2 𝑠4 + ( 𝑀1 + 𝑀2) 𝑓𝑣 𝑠3 + ( 𝑀1 𝑘2 + 𝑀2 𝑘1 + 𝑀2 𝑘2) 𝑠2 + ( 𝑓𝑣 𝑘1) 𝑠 + ( 𝑘1 𝑘2) = 𝐷𝑒𝑛( 𝑠) Step #6 𝑋1( 𝑠) 𝐹𝐴 ( 𝑠) = 𝑓𝑣 𝑠 + 𝑘2 𝑀1 𝑀2 𝑠4 + ( 𝑀1 + 𝑀2) 𝑓𝑣 𝑠3 + ( 𝑀1 𝑘2 + 𝑀2 𝑘1 + 𝑀2 𝑘2) 𝑠2 + ( 𝑓𝑣 𝑘1) 𝑠 + ( 𝑘1 𝑘2) 𝑋2( 𝑠) 𝐹𝐴 ( 𝑠) = 𝑀1 𝑠2 + 𝑓𝑣 𝑠 + ( 𝑘1 + 𝑘2) 𝑀1 𝑀2 𝑠4 + ( 𝑀1 + 𝑀2) 𝑓𝑣 𝑠3 + ( 𝑀1 𝑘2 + 𝑀2 𝑘1 + 𝑀2 𝑘2) 𝑠2 + ( 𝑓𝑣 𝑘1) 𝑠 + ( 𝑘1 𝑘2) Once the transfer functions for both masses are found, it is possible to find the time order response for them. These responses are found by varying the parameters involved in the system. Te first parameters to be analyzed are the masses in the system. The time order response with respect to varying masses is shown as follows.
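The Cramer's rule step above can also be carried out symbolically before the numerical studies. A minimal SymPy sketch, using the A, B, C shorthand of the simplified matrix (the symbols follow the text's M1, M2, fv, k1, k2):

```python
# Symbolic Step #5 for the two-mass system: build the 2x2 s-domain matrix and
# apply Cramer's rule, which reproduces the common denominator A*B - C^2.
import sympy as sp

s, M1, M2, fv, k1, k2, FA = sp.symbols('s M1 M2 f_v k1 k2 F_A')

A = M1*s**2 + fv*s + (k1 + k2)        # diagonal term for mass 1
B = M2*s**2 + fv*s + k2               # diagonal term for mass 2
C = -(fv*s + k2)                      # coupling term

Amat = sp.Matrix([[A, C], [C, B]])

X1 = sp.Matrix([[0, C], [FA, B]]).det() / Amat.det()   # Cramer's rule
X2 = sp.Matrix([[A, 0], [C, FA]]).det() / Amat.det()

print(sp.expand(Amat.det()))          # the fourth order Den(s) derived above
print(sp.simplify(X1 / FA))           # X1(s)/FA(s)
print(sp.simplify(X2 / FA))           # X2(s)/FA(s)
```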
  • 32. 32 | P a g e Figure 17: Analysis of Mass Variation for Example #2 As seen in the first example, increasing the mass decreases the transient response while increasing the time it takes to reach steady state. These graphs also show that changes to the first mass cause the transient response of the system to remain fairly constant, but they cause significant changes in the steady state response of the system. When changing the second mass the effects are opposite. The steady state response remains constant while changes occur during the transient response. Along with analyzing the effect of mass variation, it is possible to analyze the effects of the damper between the masses. This is shown by the next figure. 0 10 20 30 40 50 60 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 x 10 -3 Time (s) Mass1Responce Mass 1 Responce for Varying Mass 1 Mass 1=1 (kg) Mass 1=3 (kg) Mass 1=5 (kg) 0 10 20 30 40 50 60 -3 -2 -1 0 1 2 3 x 10 -3 Time (s) Mass1Responce Mass 1 Responce for Varying Mass 2 Mass 2=1 (kg) Mass 2=3 (kg) Mass 2=5 (kg) 0 10 20 30 40 50 60 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 2.5 x 10 -3 Time (s) Mass2Responce Mass 2 Responce for Varying Mass 1 Mass 1=1 (kg) Mass 1=3 (kg) Mass 1=5 (kg) 0 10 20 30 40 50 60 -3 -2 -1 0 1 2 3 x 10 -3 Time (s) Mass2Responce Mass 2 Responce for Varying Mass 2 Mass 2=1 (kg) Mass 2=3 (kg) Mass 2=5 (kg)
  • 33. 33 | P a g e Figure 18: Analysis of Damper Variation for Example #2 As shown in the above figure changing the damper between the two masses causes slight differences in both the transient and steady state responses of both masses. These effects are minimal and impact the second mass more than the first. Increasing the damper value causes a decreased transient response, as well as, a decreased time to reach steady state for both masses. Finally, after the effects of the damper are understood, the spring constants are taken into account. This is shown in the following figure. 0 10 20 30 40 50 60 -3 -2 -1 0 1 2 3 x 10 -3 Time (s) Mass1Responce Mass 1 Responce for Varying Damper Damper=2 (N/m) Damper=4 (N/m) Damper=6 (N/m) Damper=8 (N/m) Damper=10 (N/m) 0 10 20 30 40 50 60 -4 -3 -2 -1 0 1 2 3 4 x 10 -3 Time (s) Mass2Responce Mass 2 Responce for Varying Damper Damper=2 (N/m) Damper=4 (N/m) Damper=6 (N/m) Damper=8 (N/m) Damper=10 (N/m)
  • 34. 34 | P a g e Figure 19: Analysis of Spring Variation for Example #2 When looking at the results increasing the spring constant in the first spring causes a decreased transient response, along with, a decreased time to steady state. This holds true for both masses. Increasing the second spring constant causes the transient response for both masses to remain fairly constant, but causes a longer time to reach the steady state value. This example represents a more complex system. Increasing the number of masses, or numbers of degrees of freedom, the transfer functions become of higher order and more complex. 3.2 Rotational Systems After translational systems, rotational systems are analyzed. These systems are nearly identical to the translational systems present prior. The major difference is the use of an angular reference frame and applied torques rather than forces. Along with that, the components are slightly different. These differences are shown below. 0 10 20 30 40 50 60 -3 -2 -1 0 1 2 3 x 10 -3 Time (s) Mass1Responce Mass 1 Responce for Varying Spring 1 Spring 1=4 (N) Spring 1=8 (N) Spring 1=12 (N) 0 10 20 30 40 50 60 -3 -2 -1 0 1 2 3 x 10 -3 Time (s) Mass1Responce Mass 1 Responce for Varying Spring 2 Spring 2=6 (N) Spring 2=10 (N) Spring 2=14 (N) 0 10 20 30 40 50 60 -3 -2 -1 0 1 2 3 x 10 -3 Time (s) Mass2Responce Mass 2 Responce for Varying Spring 1 Spring 1=4 (N) Spring 1=8 (N) Spring 1=12 (N) 0 10 20 30 40 50 60 -3 -2 -1 0 1 2 3 x 10 -3 Time (s) Mass2Responce Mass 2 Responce for Varying Spring 2 Spring 2=6 (N) Spring 2=10 (N) Spring 2=14 (N)
  • 35. 35 | P a g e 1) Moment of Inertia 𝑀𝐼( 𝑡) = 𝐼 𝑅𝑅 𝜃̈( 𝑡) 𝑀𝐼( 𝑠) = 𝐼 𝑅𝑅 𝑠2 𝜃( 𝑠) 2) Dampers 𝑀 𝐷( 𝑡) = 𝐷𝜃̇( 𝑡) 𝑀 𝐷( 𝑠) = 𝐷𝑠𝜃( 𝑠) 3) Springs 𝑀 𝐸( 𝑡) = 𝑘 𝑅 𝜃( 𝑡) 𝑀 𝐸( 𝑠) = 𝑘 𝑅 𝜃( 𝑠) After the components of these systems are understood, an example problem can be solved. Example #3 Step #1 𝐼𝑛𝑝𝑢𝑡: 𝑇𝐴( 𝑡) 𝑇𝐴( 𝑡)
  • 36. 36 | P a g e 𝑂𝑢𝑡𝑝𝑢𝑡: 𝜃1( 𝑡) , 𝜃2 ( 𝑡) 𝑇𝑟𝑎𝑛𝑠𝑓𝑒𝑟 𝐹𝑢𝑛𝑐𝑡𝑖𝑜𝑛: 𝜃1 ( 𝑠) 𝑇𝐴 ( 𝑠) , 𝜃2( 𝑠) 𝑇𝐴( 𝑠) Step #2
  • 37. 37 | P a g e Step #3 Step #4 𝐼 𝑅𝑅1 𝑠2 𝜃1( 𝑠) + 𝐷1 𝑠𝜃1( 𝑠) + 𝑘 𝑅1 𝜃1( 𝑠) − 𝑘 𝑅2 𝜃2( 𝑠) + 𝑘 𝑅2 𝜃1( 𝑠) = 0 𝑘 𝑅2 𝜃2( 𝑠) − 𝑘 𝑅2 𝜃1( 𝑠) + 𝐷2 𝑠𝜃2( 𝑠) + 𝐼 𝑅𝑅2 𝑠2 𝜃2( 𝑠) = 𝑇𝐴 ( 𝑠)
  • 38. 38 | P a g e This system of equations is represented in matrix format. [ 𝐼 𝑅𝑅1 𝑠2 + 𝐷1 𝑠 + (𝑘 𝑅1 + 𝑘 𝑅2 ) −𝑘 𝑅2 −𝑘 𝑅2 𝐼 𝑅𝑅2 𝑠2 + 𝐷2 𝑠 + 𝑘 𝑅2 ][ 𝜃1 ( 𝑠) 𝜃2 ( 𝑠) ] = [ 0 𝑇𝐴( 𝑠) ] Simplifying the matrix yields the following. [ 𝐴 𝐶 𝐶 𝐵 ][ 𝜃1( 𝑠) 𝜃2( 𝑠) ] = [ 0 𝑇𝐴 ( 𝑠) ] Step #5 𝜃1( 𝑠) = | 0 𝐶 𝑇𝐴( 𝑠) 𝐵 | | 𝐴 𝐶 𝐶 𝐵 | = (0) 𝐵 − 𝐶𝑇𝐴( 𝑠) 𝐴𝐵 − 𝐶2 = −𝐶𝑇𝐴 ( 𝑠) 𝐴𝐵 − 𝐶2 𝜃2( 𝑠) = | 𝐴 0 𝐶 𝑇𝐴( 𝑠) | | 𝐴 𝐶 𝐶 𝐵 | = 𝐴𝑇𝐴( 𝑠) − (0) 𝐶 𝐴𝐵 − 𝐶2 = 𝐴𝑇𝐴( 𝑠) 𝐴𝐵 − 𝐶2 Where: 𝐴𝐵 − 𝐶2 = (𝐼 𝑅𝑅1 𝑠2 + 𝐷1 𝑠 + (𝑘 𝑅1 + 𝑘 𝑅2 )) (𝐼 𝑅𝑅2 𝑠2 + 𝐷1 𝑠 + 𝑘 𝑅2 ) − (𝑘 𝑅2 ) 2 = 𝐼 𝑅𝑅1 𝐼 𝑅𝑅2 𝑠4 + 𝐼 𝑅𝑅1 𝐷2 𝑠3 + 𝐼 𝑅𝑅2 𝐷1 𝑠3 + 𝐼 𝑅𝑅1 𝑘 𝑅2 𝑠2 + 𝐷1 𝐷2 𝑠2 + 𝐼 𝑅𝑅2 𝑘 𝑅1 𝑠2 + 𝐼 𝑅𝑅2 𝑘 𝑅2 𝑠2 + 𝐷1 𝑘 𝑅2 𝑠 + 𝐷2 𝑘 𝑅1 𝑠 + 𝐷2 𝑘 𝑅2 𝑠 + 𝑘 𝑅1 𝑘 𝑅2 + 𝑘 𝑅2 2 − 𝑘 𝑅2 2 = 𝐼 𝑅𝑅1 𝐼 𝑅𝑅2 𝑠4 + (𝐼 𝑅𝑅1 𝐷2 + 𝐼 𝑅𝑅2 𝐷1)𝑠3 + (𝐼 𝑅𝑅1 𝑘 𝑅2 + 𝐷1 𝐷2 + 𝐼 𝑅𝑅2 𝑘 𝑅1 + 𝐼 𝑅𝑅2 𝑘 𝑅2 )𝑠2 + (𝐷1 𝑘 𝑅2 + 𝐷2 𝑘 𝑅1 + 𝐷2 𝑘 𝑅2 )𝑠 + 𝑘 𝑅1 𝑘 𝑅2 = 𝐷𝑒𝑛( 𝑠) Step #6 𝜃1 ( 𝑠) 𝑇𝐴 ( 𝑠) = 𝑘 𝑅2 𝐼 𝑅𝑅1 𝐼 𝑅𝑅2 𝑠4 + ( 𝐼 𝑅𝑅1 𝐷2 + 𝐼 𝑅𝑅2 𝐷1) 𝑠3 + ( 𝐼 𝑅𝑅1 𝑘 𝑅2 + 𝐷1 𝐷2 + 𝐼 𝑅𝑅2 𝑘 𝑅1 + 𝐼 𝑅𝑅2 𝑘 𝑅2 ) 𝑠2 + ( 𝐷1 𝑘 𝑅2 + 𝐷2 𝑘 𝑅1 + 𝐷2 𝑘 𝑅2 ) 𝑠 + 𝑘 𝑅1 𝑘 𝑅2 𝜃2 ( 𝑠) 𝑇𝐴 ( 𝑠) = 𝐼 𝑅𝑅1 𝑠2 + 𝐷1 𝑠 +( 𝑘 𝑅1 + 𝑘 𝑅2 ) 𝐼 𝑅𝑅1 𝐼 𝑅𝑅2 𝑠4 + ( 𝐼 𝑅𝑅1 𝐷2 + 𝐼 𝑅𝑅2 𝐷1) 𝑠3 + ( 𝐼 𝑅𝑅1 𝑘 𝑅2 + 𝐷1 𝐷2 + 𝐼 𝑅𝑅2 𝑘 𝑅1 + 𝐼 𝑅𝑅2 𝑘 𝑅2 ) 𝑠2 + ( 𝐷1 𝑘 𝑅2 + 𝐷2 𝑘 𝑅1 + 𝐷2 𝑘 𝑅2 ) 𝑠 + 𝑘 𝑅1 𝑘 𝑅2 Once these transfer functions are formed, it is possible to plot the time response of the system. This time order response is found by running a sensitivity analysis on the system. The first variable hat is analyzed are the masses in the system. This is shown in the following figure.
  • 39. 39 | P a g e Figure 20: Analysis of Mass Variation for Example #3 When looking at the responses of the system it is evident that the first mass has more influence over the system. Increasing the first mass completely changes the shape of the transient response, along with, increasing the time required to reach steady state. Increasing the second has less effect on the transient response but still increases the time required to reach steady state. Once the mass variation is analyzed, it is possible to run the simulation varying the friction or damper experienced by each mass. This is shown next. 0 5 10 15 -5 0 5 10 15 x 10 -4 Time (s) Mass1Responce Mass 1 Responce for Varying Mass 1 Mass 1=1 (kg) Mass 1=5 (kg) Mass 1=9 (kg) 0 5 10 15 -5 0 5 10 15 x 10 -4 Time (s) Mass1Responce Mass 1 Responce for Varying Mass 2 Mass 2=1 (kg) Mass 2=5 (kg) Mass 2=9 (kg) 0 5 10 15 -5 0 5 10 15 x 10 -4 Time (s) Mass2Responce Mass 2 Responce for Varying Mass 1 Mass 1=1 (kg) Mass 1=5 (kg) Mass 1=9 (kg) 0 5 10 15 -5 0 5 10 15 x 10 -4 Time (s) Mass2Responce Mass 2 Responce for Varying Mass 2 Mass 2=1 (kg) Mass 2=5 (kg) Mass 2=9 (kg)
  • 40. 40 | P a g e Figure 21: Analysis of Damper Variation for Example #3 The above figures show that varying the damper on the second rotation mass has the most effect on the system. It completely changes the shape of the transient response of the first mass and greatly reduces that of the second mass. When changing the first damper, slight decreases are experienced in the transient response while the time to steady state is increased. Finally, once the effects of the dampers are found, the sensitivity analysis is run by varying the spring constants in the system. These results are shown in the following figure. 0 5 10 15 -5 0 5 10 15 x 10 -4 Time (s) Mass1Responce Mass 1 Responce for Varying Damper 1 Damper 1=2 (N/m) Damper 1=6 (N/m) Damper 1=10 (N/m) 0 5 10 15 -5 0 5 10 15 x 10 -4 Time (s) Mass1Responce Mass 1 Responce for Varying Damper 2 Damper 2=2 (N/m) Damper 2=6 (N/m) Damper 2=10 (N/m) 0 5 10 15 -5 0 5 10 15 x 10 -4 Time (s) Mass2Responce Mass 2 Responce for Varying Damper 1 Damper 1=2 (N/m) Damper 1=6 (N/m) Damper 1=10 (N/m) 0 5 10 15 -5 0 5 10 15 x 10 -4 Time (s) Mass2Responce Mass 2 Responce for Varying Damper 2 Damper 2=2 (N/m) Damper 2=6 (N/m) Damper 2=10 (N/m)
  • 41. 41 | P a g e Figure 22: Analysis of Spring Variation for Example #3 The above figure shows that varying the first spring has more influence on the system. As this spring constant increases, less transient response occurs for both masses. Along with this, the masses converge to steady state faster as the spring constant increases. The second spring also influences the system in the same way, but the impact is not as severe. This shows how physical systems are modeled mathematically. The system is broken down into a series of equations that are put into matrix form. This matrix is then solved, creating transfer functions for the given degrees of freedom. It is these transfer functions that are solved to find the time order response of the system. The next chapter will further expand on the properties of a system's time order response. (Figure 22 panels: Mass 1 and Mass 2 responses for varying Spring 1 and Spring 2.)
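The sensitivity studies in this chapter all follow the same recipe: build the numerator and denominator polynomials from the physical parameters, sweep one parameter, and look at the step response. A minimal numerical sketch for the rotational transfer functions just derived is given below; the parameter values are illustrative assumptions, and only the polynomial structure follows the text.

```python
# Parameter sweep of the rotational two-degree-of-freedom system using the
# Den(s), theta1 and theta2 numerators derived in Step #6.  All numeric values
# are assumed for illustration.
import numpy as np
from scipy import signal

I1, I2, D1, D2, kR2 = 1.0, 1.0, 2.0, 2.0, 5.0
t = np.linspace(0.0, 15.0, 1500)

for kR1 in (1.0, 5.0, 9.0):
    den = [I1 * I2,
           I1 * D2 + I2 * D1,
           I1 * kR2 + D1 * D2 + I2 * kR1 + I2 * kR2,
           D1 * kR2 + D2 * kR1 + D2 * kR2,
           kR1 * kR2]
    num1 = [kR2]                          # theta1(s)/TA(s) numerator
    num2 = [I1, D1, kR1 + kR2]            # theta2(s)/TA(s) numerator
    _, th1 = signal.step((num1, den), T=t)
    _, th2 = signal.step((num2, den), T=t)
    print(f"kR1 = {kR1:3.0f}: theta1(15 s) = {th1[-1]:.4f}, "
          f"theta2(15 s) = {th2[-1]:.4f}")
# The steady state deflections 1/kR1 and 1/kR1 + 1/kR2 shrink as the first
# spring stiffens.
```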
  • 42. 42 | P a g e Chapter 4: Analysis of Time Response for Different Systems After mechanical systems are analyzed, it becomes evident that the system results are dependent on the transfer function modeling the system. This transfer function is made up of both a numerator and a denominator as shown below. 𝑌( 𝑠) 𝑈( 𝑠) = 𝑁𝑢𝑚( 𝑠) 𝐷𝑒𝑛( 𝑠) The nature of this system is dependent on the roots of the denominator as shown previously. The order of the polynomial in the denominator is what plays the main role in defining the system. Any nth order system can be described as a blend of first and second order systems. This is described in the following example. Example #1 7th order system 1) 3(2 𝑛𝑑 𝑜𝑟𝑑𝑒𝑟)+ 2(1 𝑠𝑡 𝑜𝑟𝑑𝑒𝑟) 2) 2(2 𝑛𝑑 𝑜𝑟𝑑𝑒𝑟)+ 3(1 𝑠𝑡 𝑜𝑟𝑑𝑒𝑟) 3) 1(2 𝑛𝑑 𝑜𝑟𝑑𝑒𝑟)+ 5(1 𝑠𝑡 𝑜𝑟𝑑𝑒𝑟) 4) 7(1 𝑠𝑡 𝑜𝑟𝑑𝑒𝑟) These descriptions of higher order system shows that both first and second order systems are essential to defining the time response for systems. The first type of response to be taken into account is a generic first order response. This case is a simple expression and is derived in the following section. 4.1 Generic First Order System The generic first order system is the easiest to analyze. It is made up of a single real distinct pole. Along with this, the steady state value must converge to the value one to be considered generic. The derivation of this system is shown below. 𝐺𝐹𝑂𝑆( 𝑠) = 𝐺( 𝑠) = 𝑎 𝑠 + 𝑎 𝑢( 𝑡) = 1 → 𝑈( 𝑠) = 1 𝑠
  • 43. 43 | P a g e 𝑌( 𝑠) = 𝑈( 𝑠) 𝐺( 𝑠) = 1 𝑠 ∗ 𝑎 𝑠 + 𝑎 = 𝑘1 𝑠 + 𝑘2 𝑠 + 𝑎 Where: 𝑘1 = [ 𝑠𝑌( 𝑠)] 𝑠=0 = [𝑠 ∗ 1 𝑠 ∗ 𝑎 𝑠 + 𝑎 ] 𝑠=0 = 𝑎 (0) + 𝑎 = 𝑎 𝑎 = 1 𝑘2 = [( 𝑠 + 𝑎) 𝑌( 𝑠)] 𝑠=−𝑎 = [( 𝑠 + 𝑎) ∗ 1 𝑠 ∗ 𝑎 𝑠 + 𝑎 ] 𝑠=−𝑎 = 𝑎 (−𝑎) = −1 Substituting: 𝑌( 𝑠) = 1 𝑠 + −1 𝑠 + 𝑎 𝑦( 𝑡) = 1 − 𝑒−𝑎𝑡 This derivation yields the expression for a generic first order system. After its derivation is understood, two key properties of this type of system must be defined. First, the initial value of the function must be zero. Second, the steady state value is equal to one. This can be proved by applying both the initial value and final value theorems to the function. This is represented mathematically below. Initial Value Theorem lim 𝑡→0 𝑦( 𝑡) = lim 𝑡→0 (1 − 𝑒−𝑎𝑡) = 1 − 𝑒0 = 1 − 1 = 0 Final Value Theorem lim 𝑡→∞ 𝑦( 𝑡) = lim 𝑡→∞ (1 − 𝑒−𝑎𝑡) = 1 − 𝑒∞ = 1 − 0 = 1 After these two theorems are verified, the function can be plotted with varying root values. This trend is seen in the following graph.
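The same family of curves can be generated numerically. A minimal sketch is given below; the values of a mirror the legend of the figure that follows.

```python
# Generic first order step responses y(t) = 1 - exp(-a*t) for several roots a.
import numpy as np

t = np.linspace(0.0, 6.0, 601)
for a in (1, 3, 5, 7, 9, 11):
    y = 1.0 - np.exp(-a * t)
    print(f"a = {a:2d}: y(0) = {y[0]:.1f}, y(6 s) = {y[-1]:.4f}")
# Every curve starts at 0 and settles at 1; only the speed of convergence
# changes with the root location a.
```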
  • 44. 44 | P a g e Figure 23: Generic First Order Response for Varying Root Values This graph shows that no matter what the value of the root, the initial and final values are the same. Along with showing the trend, plotting the generic first order system allows for solution of the specifications of the system. 4.1.1 Specifications of a Generic First Order System For this type of system there are three specifications and they are listed below. 1) Time Constant – This is the time required to reach 63% of the steady state value. Figure 24: Time Constant 0 1 2 3 4 5 6 0 0.2 0.4 0.6 0.8 1 Time (s) Response Generic First Order Response for Varying Root Values a=1 a=3 a=5 a=7 a=9 a=11
  • 45. 45 | P a g e 𝑇 = 1 𝑎 2) Rise Time – This is the time required to go from 10% to 90% of the steady state value. Figure 25: Rise Time 𝑇𝑟 = 𝑡2 + 𝑡1 𝑦( 𝑡1) = 0.1 = 1 − 𝑒−𝑎 𝑡1 𝑦( 𝑡2) = 0.9 = 1 − 𝑒−𝑎𝑡2 𝑒−𝑎 𝑡1 = 0.9 𝑒−𝑎 𝑡2 = 0.1 −𝑎𝑡1 = ln(0.9) −𝑎𝑡2 = ln(0.1) −𝑎𝑡1 = −0.1 −𝑎𝑡2 = −2.3 𝑡1 = 0.1 𝑎 𝑡2 = 2.3 𝑎 𝑇𝑟 = 2.3 𝑎 − 0.1 𝑎 = 2.2 𝑎 3) Settling Time – This is the time required to reach 98% of the steady state value.
  • 46. 46 | P a g e Figure 26: Settling Time 𝑦( 𝑇𝑠) = 1 − 𝑒−𝑎 𝑇𝑠 = 0.98 −𝑎𝑇𝑠 = ln(0.02) 𝑇𝑠 = 3.91 𝑎 The specifications for a generic first order system can be seen in the derivations above. These specifications help define how the first order system behaves and what kind of system output is expected. Once the generic case is understood, a more common case for the first order system is analyzed. This case is known as a non-generic first order system. The derivation for the non- generic case is similar to that of the generic case. This can be seen below. 4.1.2 Non-Generic First Order System Once the generic case is understood, a more common case for the first order system is analyzed. This case is known as a non-generic first order system and is derived below. 𝐹𝑂𝑆( 𝑠) = 𝐺( 𝑠) = 𝑏 𝑠 + 𝑎 = 𝑐 ∗ 𝑎 𝑠 + 𝑎 𝑢( 𝑡) = 1 → 𝑈( 𝑠) = 1 𝑠 𝑌( 𝑠) = 𝑈( 𝑠) 𝐺( 𝑠) = 𝑐 𝑠 ∗ 𝑎 𝑠 + 𝑎 = 𝑘1 𝑠 + 𝑘2 𝑠 + 𝑎
  • 47. 47 | P a g e From the final value theory the steady state value can be found. 𝑦𝑠𝑠 = lim 𝑠→0 (𝑠 ∗ 1 𝑠 ∗ 𝑐 ∗ 𝑎 𝑠 + 𝑎 ) = 𝑐 ∗ 𝑎 (0) + 𝑎 = 𝑐 Where: 𝑘1 = [ 𝑠𝑌( 𝑠)] 𝑠=0 = [𝑠 ∗ 𝑐 𝑠 ∗ 𝑎 𝑠 + 𝑎 ] 𝑠=0 = 𝑐 ∗ 𝑎 (0)+ 𝑎 = 𝑐 ∗ 𝑎 𝑎 = 𝑐 𝑘2 = [( 𝑠 + 𝑎) 𝑌( 𝑠)] 𝑠=−𝑎 = [( 𝑠 + 𝑎) ∗ 𝑐 𝑠 ∗ 𝑎 𝑠 + 𝑎 ] 𝑠=−𝑎 = 𝑐 ∗ 𝑎 (−𝑎) = −𝑐 Substituting: 𝑌( 𝑠) = 𝑐 𝑠 + −𝑐 𝑠 + 𝑎 𝑦( 𝑡) = 𝑐 − 𝑐𝑒−𝑎𝑡 = 𝑐(1 − 𝑒−𝑎𝑡) From this derivation it is evident that the generic and non-generic responses are almost identical. The only difference is the factor in front of the response. Furthermore, the factor in front of the non-generic response is the steady state value for the system. This is the only difference between the two first order cases and changes the final value of the response. 4.2 Generic SecondOrder System Now that the first order system is understood, the second order system is analyzed. A second order system is made up of two poles of varying location in the s-domain. As well as, the system must have a steady state value of one to be considered generic. The generic second order system is derived below. 𝐺𝑆𝑂𝑆( 𝑠) = 𝜔 𝑛 2 𝑠2 + 2𝜉𝜔 𝑛 𝑠 + 𝜔 𝑛 2 = 𝑐2 𝑠2 + 𝑐1 𝑠 + 𝑐2 Where: 𝜔 𝑛 = 𝑁𝑎𝑡𝑢𝑟𝑎𝑙 𝐹𝑟𝑒𝑞𝑢𝑒𝑛𝑐𝑦 𝜉 = 𝐷𝑎𝑚𝑝𝑖𝑛𝑔 𝐶𝑜𝑒𝑓𝑓𝑖𝑐𝑖𝑒𝑛𝑡 Once the basic layout of this system is understood, there are several different possibilities for the roots of the system. The roots can follow a series of cases. These cases are presented as follows. Case #1
  • 48. 48 | P a g e Figure 27: Real Distinct Roots This case is when the roots of the denominator are real and distinct. Under this condition the system is overdamped. A time response of an overdamped system can be seen below. Figure 28: Overdamped System From this figure it is evident that an overdamped system will never exceed the steady state value. The behavior is similar to that of a first order system. This system is highly damped and does not oscillate. Case #2
  • 49. 49 | P a g e Figure 29: Repeated Roots This case is when the roots of the denominator are real but repeated. Under this condition the system is critically damped. A time response of a critically damped system is shown below. Figure 30: Critically Damped System From this figure it is evident that a critically damped system will never exceed the steady state value as well. The difference, although, is that this is the dampening at which the system -4 -3.5 -3 -2.5 -2 -1.5 -1 -0.5 0 0.5 1 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 Real Axis ImaginaryAxis Repeated Roots 0 1 2 3 4 5 6 0 0.2 0.4 0.6 0.8 1 Time (s) Response Critically System
  • 50. 50 | P a g e crosses from a first order behavior to that of a second order. It is the lowest dampening a system can have before it begins to oscillate. Case #3 Figure 31: Complex Conjugate Roots This case is when the roots of the denominator are a complex conjugate pair. Under this condition the system is underdamped. A time response of an underdamped system can be seen below. Figure 32: Underdamped System
  • 51. 51 | P a g e This figure shows that an underdamped system behaves much differently from the prior cases. The system will exceed the steady state value and then oscillate about that value until it eventually converges. This is the most typical response of a second order system. Case #4 Figure 33: Purely Complex Roots This case is when the roots of the denominator are a purely complex pair with no real component. Under this condition the system is undamped. A time response of an undamped system can be seen below. Figure 34: Undamped System
  • 52. 52 | P a g e This shows how an undamped system will behave. Unlike all the other cases previously defined, this case does not converge on a steady state value. This type of system represents free oscillatory motion. After the nature of the roots is analyzed, the system can be solved for by taking the inverse Laplace transformation. This transformation is more complex than the first order system. The solution is presented below. 𝑌( 𝑠) 𝑈( 𝑠) = 𝜔 𝑛 2 𝑠2 + 2𝜉𝜔 𝑛 𝑠 + 𝜔 𝑛 2 Where: 𝑈( 𝑠) = 1 𝑠 So: 𝑌( 𝑠) = 1 𝑠 ∗ 𝜔 𝑛 2 𝑠2 + 2𝜉𝜔 𝑛 𝑠 + 𝜔 𝑛 2 𝑦( 𝑡) = 1 − 1 √1 − 𝜉2 𝑒−𝜉𝜔 𝑛 cos [𝜔 𝑛√1 − 𝜉2 𝑡 − 𝜙] Where: 𝜙 = tan−1 𝜉 √1 − 𝜉2 Once the solution for the generic second order system is found, two key parameters must be noted. Both the generic second and first order systems have these in common. First, the initial value of the function must be zero. Second, the steady state value is equal to one. This can be proved by applying both the initial value and final value theorems to the function. This is represented mathematically below. Initial Value Theorem 𝑦(0) = lim 𝑡→0 𝑦( 𝑡) = lim 𝑠→∞ 𝑠𝑌( 𝑠) = lim 𝑠→∞ [𝑠 ∗ 1 𝑠 ∗ 𝜔 𝑛 2 𝑠2 + 2𝜉𝜔 𝑛 𝑠 + 𝜔 𝑛 2 ] = 𝜔 𝑛 2 ∞ = 0 Final Value Theorem 𝑦(0) = lim 𝑡→∞ 𝑦( 𝑡) = lim 𝑠→0 𝑠𝑌( 𝑠) = lim 𝑠→0 [𝑠 ∗ 1 𝑠 ∗ 𝜔 𝑛 2 𝑠2 + 2𝜉𝜔 𝑛 𝑠 + 𝜔 𝑛 2 ] = 𝜔 𝑛 2 𝜔 𝑛 2 = 1
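A short numerical sketch of this generic second order step response for the four damping cases discussed above; the value of the natural frequency (2 rad/s) is an arbitrary illustrative choice.

```python
# Step response of wn^2 / (s^2 + 2*xi*wn*s + wn^2) for the four damping cases.
import numpy as np
from scipy import signal

wn = 2.0                                   # assumed natural frequency
t = np.linspace(0.0, 10.0, 2000)

for xi, label in [(1.5, "overdamped"), (1.0, "critically damped"),
                  (0.3, "underdamped"), (0.0, "undamped")]:
    num = [wn**2]
    den = [1.0, 2.0 * xi * wn, wn**2]
    _, y = signal.step((num, den), T=t)
    print(f"xi = {xi:3.1f} ({label:18s}) y(0) = {y[0]:.2f}, "
          f"peak = {y.max():.2f}, y(10 s) = {y[-1]:.2f}")
# The underdamped case overshoots 1 and rings, the undamped case never settles,
# and the other two approach the steady state value of 1 without overshoot.
```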
  • 53. 53 | P a g e Along with providing these parameters, solution of the generic second order system allows for an analyses of the poles produced. These poles can be plotted in the s-domain as shown. Figure 35: Poles of a Second Order System The location of the pole in this figure shows how the natural frequency and dampening ratio define the location. The location of this pole is solved for by using the expression below. 𝑃1,2 = −( 𝜉𝜔 𝑛) ± 𝑖𝜔 𝑛√1 − 𝜉2 Along with this, the following relation can be used to find the dampening ratio based on the angle of the pole location. 𝜉 = cos 𝛩 Finally, two special cases can be found when analyzing the location of the pole. First, should the dampening become zero, the poles would lie on the imaginary axis. This, in turn, would make the system undamped and move in free oscillatory motion. Second, if the dampening becomes one, then poles would become repeated on the real axis. When this occurs the system is critically damped. Once the makeup of the generic second order system is understood, the specifications for this type of system are defined. Like the generic first order system, these specifications help define how the system behaves and what type of output is expected. 𝜔 𝑛 𝛩 𝜔 𝑛 sin 𝛩 𝜔 𝑛 cos 𝛩
4.2.1 Specifications of a Generic Second Order System

For the second order case there are four specifications. These specifications are useful in describing the response of the system in common terms. They are defined as shown next.

1) Rise Time – the time required to go from 10% to 90% of the steady state value.

$$T_r = \frac{1 + 1.1\xi + 1.4\xi^2}{\omega_n}$$

Figure 36: Rise Time for a Second Order System

2) Peak Time – the time required to reach the maximum value ($y_{max}$).

$$T_P = \frac{\pi}{\omega_n\sqrt{1-\xi^2}}$$
Figure 37: Peak Time for a Second Order System

3) Percent Overshoot (%OS) – the amount by which the waveform exceeds the steady state value in reaching the maximum value.

$$\%OS = \frac{y_{max} - y_{ss}}{y_{ss}}\cdot 100$$

4) Settling Time – the time required to reach and stay within ±2% of the steady state value.

$$T_s = \frac{4}{\xi\omega_n}$$

Figure 38: Settling Time for a Second Order System

A short numeric sketch of these four formulas follows.
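The sketch below evaluates the four specification formulas for an illustrative $(\xi, \omega_n)$ pair and reads the peak time and overshoot off a simulated step response as a rough cross-check. The analytical overshoot expression used here, $\%OS = 100\,e^{-\xi\pi/\sqrt{1-\xi^2}}$, is the standard closed-form result for a step input; it is not stated in the text and is included only for comparison.

```python
# Evaluate Tr, Tp, %OS, and Ts for an assumed (zeta, omega_n) and cross-check
# the peak against a simulated step response.
import numpy as np
from scipy import signal

zeta, wn = 0.3, 2.0                                       # assumed values
Tr = (1 + 1.1 * zeta + 1.4 * zeta**2) / wn                # rise time (approximation)
Tp = np.pi / (wn * np.sqrt(1 - zeta**2))                  # peak time
OS = 100 * np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))   # percent overshoot (standard form)
Ts = 4 / (zeta * wn)                                      # 2% settling time

t = np.linspace(0, 15, 5000)
_, y = signal.step(signal.TransferFunction([wn**2], [1, 2 * zeta * wn, wn**2]), T=t)
print(f"Tr={Tr:.3f}s  Tp={Tp:.3f}s  %OS={OS:.1f}  Ts={Ts:.3f}s")
print(f"measured peak time={t[np.argmax(y)]:.3f}s  measured %OS={100 * (y.max() - 1):.1f}")
```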
After the specifications for a generic second order system are understood, the system is fully described. This leads to the discussion of the non-generic second order system. This system is not much different from the generic second order system; the change between the two is identical to the change between a generic and non-generic first order system. This change can be seen below.

4.2.2 Non-Generic Second Order System

Once the generic second order system is understood, a more general system is taken into account. This is the non-generic second order system, derived next.

$$SOS(s) = c\cdot GSOS(s) = c\cdot\frac{\omega_n^2}{s^2 + 2\xi\omega_n s + \omega_n^2}$$

Where: $c = \text{Steady State Value}$
As with the first order system, the non-generic second order system only changes by multiplying the function by the steady state value. This, ultimately, moves the steady state value from one to the location dictated by the non-generic function.

4.2.3 Blended Systems

The final topic on the time response of systems is the blending of systems. This is analyzed by blending two first order systems as shown below.

$$G(s) = \frac{a}{s+a}\cdot\frac{b}{s+b} = \frac{ab}{(s+a)(s+b)}$$

One might expect this product to behave like a typical second order system. That, however, is only true if the roots are complex conjugates. Since both roots here are real and distinct, the system should not behave like an underdamped system. The response of this system for varying root values is shown below.

Figure 39: Blending Two First Order Systems with Varying Roots (step responses for b = 2 and a = 20, 5, 2, 1, 0.7, 0.5, 0.3)

This graph illustrates the idea of dominance in a system. Dominance occurs when the roots of the system are far from each other on the negative side of the real axis. The root that dominates the time response is the root closest to the origin.
As the roots of the system move closer to each other, they begin to interfere with the time response. When both roots are the same, as demonstrated previously, the system is critically damped.

This idea applies to any higher order system as well. Since higher order systems are nothing more than a blend of first and second order systems, dominance plays a role in dictating the system output. To demonstrate this, a third order system is taken into account. This system is broken up into a second order and a first order component, as shown in the following formula.

$$G(s) = \frac{a}{s+a}\cdot\frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$

Figure 40: Blended First and Second Order System (step responses for the pure second order system and for a = 20, 5, 2, 1, 0.5, 0.3 times the real part of the second order poles)

This figure shows how dominance affects the system. When the pole of the first order system is far out on the negative real axis, it barely affects the second order response. This changes as the pole moves close to the complex conjugate roots of the second order system; the approaching pole causes interference in the second order response. Once the first order pole moves past the complex conjugate roots of the second order system, the first order response begins to dominate the overall system response. A short simulation sketch of this dominance behavior follows.
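The sketch below illustrates dominance with the blended first order pair $ab/((s+a)(s+b))$: the fixed pole at $-b$ dominates until $a$ moves inside it. The specific values of $a$ and $b$ are illustrative.

```python
# Step the blended first-order system ab/((s+a)(s+b)) for several a values
# while b is held fixed, and report a crude speed-of-response indicator.
import numpy as np
from scipy import signal

b = 2.0
t = np.linspace(0, 12, 2000)
for a in [20.0, 5.0, 2.0, 0.5]:
    num = [a * b]
    den = np.polymul([1.0, a], [1.0, b])      # (s + a)(s + b)
    _, y = signal.step(signal.TransferFunction(num, den), T=t)
    t95 = t[np.argmax(y >= 0.95)]             # first time the response reaches 95% of 1
    print(f"a={a:>5}: time to 95% of steady state ~ {t95:.2f} s")
```

With a = 20 the response is essentially that of the pole at -2; as a shrinks toward and below b, the slower pole takes over and the response time grows.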
The final analysis made on the time response of a system is adding a zero to the second order response. This changes the governing equation as follows.

$$G(s) = \frac{s+a}{1}\cdot\frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$

This system is then plotted for varying zero locations, as shown in the next figure.

Figure 41: Adding a Zero to the Second Order Response (step responses for the pure second order system and zero locations of 20, 5, 2, 1, -2, and -5 times the real part of the second order poles)

As shown in the figure, adding a zero to the second order response changes the shape and speed of the response. Should the zero lie far to the left on the negative real axis, it has little effect on the overall system output. As the zero moves closer to the complex conjugate roots of the second order system, its effect grows, producing a faster initial rise and a larger overshoot relative to the final value. Should the zero lie on the positive side of the real axis, the response of the system changes sign, and this effect increases as the zero moves farther right along the positive axis. A brief simulation sketch of this zero effect follows.
This summarizes the time response of systems. Since higher order systems are made up of first and second order systems, these techniques can be applied to a system of any order. The main difficulty in finding the time response is dominance: should the poles of the system lie relatively close to each other, they will interfere with the time response.
Chapter 5: Steady State Error and Stability Analysis

The time response of systems yields the general specifications of both generic and non-generic first and second order systems, along with insight into how blended systems behave. Once these ideas are understood, steady state error is introduced. Steady state error occurs when a system converges to a value different from the expected value.

Figure 42: Feedback Control Loop (unity feedback loop with input U(s), error E(s), and output Y(s))

The figure above shows how a simple unity feedback loop is executed. After the input passes through the transfer function it produces an output. This output not only defines the system at that instant, it is also fed back to a summing junction at the input. This is how the control loop is formed. Once this is understood, the idea of steady state error can be defined. When the output of the system meets the input at the summing junction it produces an error function. This is where the steady state error lies, and it is what causes the steady state value to converge to a value other than predicted. For this type of error to exist the system must be stable and it must be closed loop. Steady state error is modeled mathematically as shown below.

$$e(t) = u(t) - y(t)$$

Or, in the s-domain,

$$E(s) = U(s) - Y(s)$$

To find the steady state value of this function a limit must be taken.

$$e_{ss} = \lim_{t\to\infty} e(t) = \lim_{t\to\infty}\big[u(t) - y(t)\big] = \lim_{s\to 0} sE(s) = \lim_{s\to 0} s\big[U(s) - Y(s)\big]$$

The formula above is the generic expression for steady state error. This equation can be reduced based on the type of system in the transfer function. To define the type of system present, the formula below is analyzed.
$$G(s) = \frac{k(s - z_1)\cdots(s - z_n)}{s^L (s - p_1)\cdots(s - p_{n-L})} = \frac{k\prod_{i=1}^{n}(s - z_i)}{s^L\prod_{j=1}^{n-L}(s - p_j)}$$

Where:
$L$ = number of poles at the origin = system type
$K$ = system gain

The equation above is the generic representation of a transfer function of any order. The important part of this expression is the number of poles at the origin, which defines the type of system the transfer function represents.

5.1 Steady State Error Constants

Once the system type is understood, the corresponding error constant can be found. The definition of the error constant changes as the system type changes. These differences are minor but important. The error constants are defined as follows.

1) Position Error Constant

$$K_p = \lim_{s\to 0} G(s) = \begin{cases} \dfrac{K\prod_{i=1}^{n} z_i}{\prod_{j=1}^{n-L} p_j} & \text{for } L = 0 \\[4pt] \infty & \text{for } L \geq 1 \end{cases}$$

From this expression it is evident that the position error constant is only finite when the system is type zero. This type of system has no roots at the origin and is associated with a step input. Should a root at the origin exist, the position error constant is infinite. Along with the error constant, the steady state error for a step input is derived below.

$$E(s) = U(s) - Y(s) = U(s) - \left(\frac{KG(s)}{1 + KG(s)}\right)U(s) = U(s)\left(\frac{1}{1 + KG(s)}\right) = \frac{1}{s}\left(\frac{1}{1 + KG(s)}\right)$$

$$e_{ss} = \lim_{t\to\infty} e(t) = \lim_{s\to 0} sE(s) = \lim_{s\to 0}\left(s\cdot\frac{1}{s}\cdot\frac{1}{1+KG(s)}\right) = \lim_{s\to 0}\frac{1}{1+KG(s)} = \frac{1}{1 + KK_p}$$

2) Velocity Error Constant

$$K_v = \lim_{s\to 0} sG(s) = \begin{cases} 0 & \text{for } L = 0 \\[2pt] \dfrac{K\prod_{i=1}^{n} z_i}{\prod_{j=1}^{n-L} p_j} & \text{for } L = 1 \\[4pt] \infty & \text{for } L \geq 2 \end{cases}$$
From this expression for the velocity error constant, a few conclusions can be drawn. First, the velocity error constant is zero if there are no roots at the origin. Second, it is infinite if there are two or more roots at the origin. This error constant is therefore only finite for systems with exactly one root at the origin, and it is associated with a ramp input. Finally, the steady state error for a ramp input is derived below.

$$E(s) = U(s) - Y(s) = U(s)\left(\frac{1}{1 + KG(s)}\right) = \frac{1}{s^2}\left(\frac{1}{1 + KG(s)}\right)$$

$$e_{ss} = \lim_{s\to 0} sE(s) = \lim_{s\to 0}\left(s\cdot\frac{1}{s^2}\cdot\frac{1}{1+KG(s)}\right) = \lim_{s\to 0}\left(\frac{1}{s}\cdot\frac{1}{1+KG(s)}\right) = \frac{1}{KK_v}$$

3) Acceleration Error Constant

$$K_a = \lim_{s\to 0} s^2 G(s) = \begin{cases} 0 & \text{for } L \leq 1 \\[2pt] \dfrac{K\prod_{i=1}^{n} z_i}{\prod_{j=1}^{n-L} p_j} & \text{for } L = 2 \\[4pt] \infty & \text{for } L \geq 3 \end{cases}$$

From this expression for the acceleration error constant, a few conclusions can be drawn. First, the acceleration error constant is zero if the system has one root or no roots at the origin. Second, it is infinite if there are three or more roots at the origin. This error constant is therefore only finite for systems with exactly two roots at the origin, and it is associated with a parabolic input. Finally, the steady state error for a parabolic input is defined by the formula below.

$$E(s) = U(s) - Y(s) = U(s)\left(\frac{1}{1 + KG(s)}\right) = \frac{1}{s^3}\left(\frac{1}{1 + KG(s)}\right)$$

$$e_{ss} = \lim_{s\to 0} sE(s) = \lim_{s\to 0}\left(s\cdot\frac{1}{s^3}\cdot\frac{1}{1+KG(s)}\right) = \lim_{s\to 0}\left(\frac{1}{s^2}\cdot\frac{1}{1+KG(s)}\right) = \frac{1}{KK_a}$$

After the error constants and the steady state errors for the various system types are defined, three numerical examples are analyzed, one for each system type. A small computational sketch of these definitions is given first.
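The sketch below determines the system type and the three error constants directly from the polynomial coefficients of an open-loop $G(s)$. The helper name `error_constants` is hypothetical, and the example transfer function anticipates the type one case analyzed later.

```python
# Compute the system type L and the corresponding error constants (Kp, Kv, Ka)
# from numerator/denominator coefficients given highest order first.
import numpy as np

def error_constants(num, den, tol=1e-12):
    """Return (system type L, Kp, Kv, Ka) for G(s) = num(s)/den(s)."""
    den = np.asarray(den, dtype=float)
    # System type = number of poles at the origin = trailing zero coefficients
    L = 0
    while L < len(den) and abs(den[len(den) - 1 - L]) < tol:
        L += 1
    # lim s->0 of s^L * G(s): numerator constant over the lowest surviving den coefficient
    finite = np.polyval(num, 0.0) / den[len(den) - 1 - L]
    Kp = finite if L == 0 else np.inf
    Kv = 0.0 if L == 0 else (finite if L == 1 else np.inf)
    Ka = 0.0 if L <= 1 else (finite if L == 2 else np.inf)
    return L, Kp, Kv, Ka

# Type-one example used later in the text: G(s) = 1 / (s(s+5)(s+20))
den = np.polymul([1.0, 0.0], np.polymul([1.0, 5.0], [1.0, 20.0]))
print(error_constants([1.0], den))   # expect L = 1, Kp = inf, Kv = 0.01, Ka = 0
```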
5.1.1 Type Zero System

The first example is with respect to a type zero system.

$$G(s) = \frac{s+15}{(s+2)(s+5)},\qquad e_{ss} = 0.05$$

Given this transfer function in a unity feedback loop, along with the specified steady state error, find the error constant and the gain associated with that steady state error.

Solving for the error constant:

$$K_p = \lim_{s\to 0} G(s) = \lim_{s\to 0}\frac{s+15}{(s+2)(s+5)} = \frac{15}{10} = 1.5$$

Solving for the gain:

$$e_{ss} = \frac{1}{1 + KK_p} = \frac{1}{1 + 1.5K} = 0.05 \;\Rightarrow\; 1 = 0.05 + 0.075K \;\Rightarrow\; K = \frac{0.95}{0.075} \approx 12.7$$

This result is verified by the following figure.

Figure 43: Steady State Error for a Type Zero System (error response vs. time)
5.1.2 Type One System

The second example focuses on the velocity error constant and a type one system.

$$G(s) = \frac{1}{s(s+5)(s+20)},\qquad e_{ss} = 0.05$$

Given this transfer function in a unity feedback loop, along with the specified steady state error, find the error constant and the gain associated with that steady state error.

Solving for the error constant:

$$K_v = \lim_{s\to 0} sG(s) = \lim_{s\to 0}\left(s\cdot\frac{1}{s(s+5)(s+20)}\right) = \frac{1}{100} = 0.01$$

Solving for the gain:

$$e_{ss} = \frac{1}{KK_v} = \frac{1}{0.01K} = 0.05 \;\Rightarrow\; 1 = (0.05)(0.01)K \;\Rightarrow\; K = \frac{1}{0.0005} = 2000$$

This result is verified by the following figure.

Figure 44: Steady State Error for a Type One System (error response vs. time)
5.1.3 Type Two System

The third example focuses on the acceleration error constant and a type two system.

$$G(s) = \frac{s+5}{s^2(s+2)(s+4)},\qquad e_{ss} = 0.025$$

Given this transfer function in a unity feedback loop, along with the specified steady state error, find the error constant and the gain associated with that steady state error.

Solving for the error constant:

$$K_a = \lim_{s\to 0} s^2 G(s) = \lim_{s\to 0}\left(s^2\cdot\frac{s+5}{s^2(s+2)(s+4)}\right) = \frac{5}{8} = 0.625$$

Solving for the gain:

$$e_{ss} = \frac{1}{KK_a} = \frac{1}{0.625K} = 0.025 \;\Rightarrow\; 1 = (0.625)(0.025)K \;\Rightarrow\; K = \frac{1}{0.015625} = 64$$

Once these error constants are found, a number of conclusions can be drawn about the steady state error of a system. First, the system type determines which error constant is finite for the system. Second, a larger gain value yields a smaller steady state error. Finally, although it improves the steady state error, increasing the gain of the system can drive the system unstable, and the increased gain also disturbs the other specifications of the system. A time-domain check of the first of these examples is sketched below.
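The sketch below is a quick time-domain check of the type zero example. It assumes unity feedback, so the error transfer function is $E(s)/U(s) = 1/(1 + KG(s))$, and it uses the gain $K \approx 12.7$ found above; the step response of the error should settle near 0.05.

```python
# Time-domain check of the type-zero example: with unity feedback the error
# transfer function is E/U = den_G / (den_G + K*num_G).
import numpy as np
from scipy import signal

K = 12.7
num_G = [1.0, 15.0]                                   # (s + 15)
den_G = np.polymul([1.0, 2.0], [1.0, 5.0])            # (s + 2)(s + 5)

den_E = np.polyadd(den_G, K * np.asarray(num_G))      # den_G + K*num_G
t = np.linspace(0, 10, 2000)
_, e = signal.step(signal.TransferFunction(den_G, den_E), T=t)
print("steady-state error ~", e[-1])                  # expect roughly 0.05
```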
5.2 Routh-Hurwitz Stability Criterion

The steady state error of the system brings out the importance of the system gain and starts to show how all of the specifications of the system play into each other. To find which gains are suitable for a specific system, stability analysis is useful. It is known that too large a gain value will drive the system unstable. With this in mind, there is a range of gain values for which the system is stable. To find this range of gain values a Routh-Hurwitz stability analysis is performed.

Stability here means a bounded input/bounded output relationship. To check this, the Routh-Hurwitz criterion involves a simple inspection of the denominator of the transfer function. The Routh-Hurwitz array is set up as follows.

Given:

$$Den(s) = s^n + Q_{n-1}s^{n-1} + Q_{n-2}s^{n-2} + \cdots + Q_1 s + Q_0$$

$s^n$ row: $1,\; Q_{n-2},\; Q_{n-4},\;\ldots$
$s^{n-1}$ row: $Q_{n-1},\; Q_{n-3},\; Q_{n-5},\;\ldots$
$s^{n-2}$ row: $b_1,\; b_2,\;\ldots$
$s^{n-3}$ row: $c_1,\;\ldots$
⋮
$s^0$ row: $Q_0$

Figure 45: Routh-Hurwitz Array

Where:

$$b_1 = \frac{Q_{n-1}Q_{n-2} - (1)Q_{n-3}}{Q_{n-1}},\qquad b_2 = \frac{Q_{n-1}Q_{n-4} - (1)Q_{n-5}}{Q_{n-1}},\qquad c_1 = \frac{b_1 Q_{n-3} - Q_{n-1}b_2}{b_1}$$

The usefulness of this array lies in the first column. The number of sign changes in this column equals the number of unstable roots in the system; if there are no sign changes, the system is stable. The effectiveness of this method is shown by the next example.
5.2.1 Routh-Hurwitz Array

To determine whether a system is stable, the Routh-Hurwitz array is constructed. The construction proceeds as follows.

$$Den(s) = s^5 + 3s^4 + 5s^3 + 8s^2 + 11s + 3$$

Set up the Routh-Hurwitz array:

$s^5$ row: $1,\; 5,\; 11$
$s^4$ row: $3,\; 8,\; 3$
$s^3$ row: $b_1,\; b_2$
$s^2$ row: $c_1,\; c_2$
$s^1$ row: $d_1$
$s^0$ row: $e_1$

Where:

$$b_1 = \frac{(3)(5) - (1)(8)}{3} = 2.33,\qquad b_2 = \frac{(3)(11) - (1)(3)}{3} = 10$$

$$c_1 = \frac{b_1(8) - (3)b_2}{b_1} = \frac{(2.33)(8) - (3)(10)}{2.33} = -4.86,\qquad c_2 = \frac{b_1(3)}{b_1} = 3$$

$$d_1 = \frac{c_1 b_2 - b_1 c_2}{c_1} = \frac{(-4.86)(10) - (2.33)(3)}{-4.86} = 11.44,\qquad e_1 = \frac{d_1 c_2}{d_1} = c_2 = 3$$

Substituting the values back into the array yields the following.

$s^5$ row: $1,\; 5,\; 11$
$s^4$ row: $3,\; 8,\; 3$
$s^3$ row: $2.33,\; 10$
$s^2$ row: $-4.86,\; 3$
$s^1$ row: $11.44$
$s^0$ row: $3$

From the Routh-Hurwitz array it is possible to conclude that the system is unstable. Furthermore, the system has two unstable roots, corresponding to the two sign changes in the first column. These roots lie in the right half of the s-plane. A small sketch that automates this construction is shown below.
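This is a minimal sketch of the array construction above; the helper name `routh_first_column` is hypothetical, and special cases (a zero appearing in the first column, or an entire row of zeros) are not handled.

```python
# Build the Routh-Hurwitz table for a polynomial (coefficients highest order
# first) and count sign changes in the first column.
import numpy as np

def routh_first_column(coeffs):
    c = np.asarray(coeffs, dtype=float)
    n = len(c)
    cols = (n + 1) // 2
    table = np.zeros((n, cols))
    table[0, :len(c[0::2])] = c[0::2]        # s^n row: 1, Q_{n-2}, Q_{n-4}, ...
    table[1, :len(c[1::2])] = c[1::2]        # s^(n-1) row: Q_{n-1}, Q_{n-3}, ...
    for i in range(2, n):
        for j in range(cols - 1):
            table[i, j] = (table[i-1, 0] * table[i-2, j+1]
                           - table[i-2, 0] * table[i-1, j+1]) / table[i-1, 0]
    first = table[:, 0]
    changes = int(np.sum(np.sign(first[:-1]) != np.sign(first[1:])))
    return first, changes

first_col, sign_changes = routh_first_column([1, 3, 5, 8, 11, 3])
print(first_col)                              # compare with the hand-built array above
print("unstable (RHP) roots predicted:", sign_changes)
```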
The final useful feature of the Routh-Hurwitz array is the ability to find the range of gain values for which a closed loop system is stable. To do this, the control loop containing the transfer function must be closed, which incorporates the gain and the feedback loop into the transfer function representing the system. An example of this process is shown below.

5.2.2 Stability of a Closed Loop Response

Now that the Routh-Hurwitz array is understood, the stability of a closed loop system is analyzed. This is described by the following example.

$$G(s) = \frac{1}{(s-5)(s+10)(s+15)} = \frac{1}{(s-5)(s^2+25s+150)} = \frac{1}{s^3 + 20s^2 + 25s - 750}$$

The closed loop transfer function is represented as follows.

$$\frac{Y(s)}{U(s)} = \frac{KG(s)}{1+KG(s)} = \frac{\dfrac{K}{s^3+20s^2+25s-750}}{1+\dfrac{K}{s^3+20s^2+25s-750}} = \frac{K}{s^3+20s^2+25s+(K-750)}$$

The Routh-Hurwitz criterion is applied to the closed loop denominator.

$$Den(s) = s^3 + 20s^2 + 25s + (K-750)$$

$s^3$ row: $1,\; 25$
$s^2$ row: $20,\; K-750$
$s^1$ row: $b_1$
$s^0$ row: $c_1$

Where:

$$b_1 = \frac{(20)(25) - (1)(K-750)}{20} = \frac{1250-K}{20},\qquad c_1 = K - 750$$

For the system to be stable, all of the values in the first column of the array must be positive. This holds true when the following relationships are satisfied.

$$\frac{1250-K}{20} > 0 \;\Rightarrow\; K < 1250,\qquad K - 750 > 0 \;\Rightarrow\; K > 750$$

$$750 < K < 1250$$

A quick numerical check of this range is sketched below.
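The sketch below checks the gain range by computing the closed-loop roots of $s^3 + 20s^2 + 25s + (K - 750)$ for a few gain values inside and outside the predicted interval.

```python
# Check 750 < K < 1250 by inspecting the closed-loop root locations directly.
import numpy as np

for K in [700.0, 900.0, 1100.0, 1300.0]:
    roots = np.roots([1.0, 20.0, 25.0, K - 750.0])
    stable = np.all(roots.real < 0)
    print(f"K={K:>6}: stable={stable}, roots={np.round(roots, 3)}")
```

The interior gains (900 and 1100) give roots entirely in the left half plane, while the gains outside the range do not.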
These results are also verified by the following simulation results.

Figure 46: System Response for Varying Gain Values (closed loop step responses for K = 650, 750, 900, 1100, 1250, and 1350)

This shows which gain values yield a stable system. As shown in the figure, the system is stable inside the specified range and unstable outside of it, and the boundary values are where the response transitions between bounded and unbounded behavior. This tool is very useful in selecting an appropriate gain value for the system. Using these values, together with the specifications described previously, a control scheme starts to form.
Chapter 6: Complex Block Diagram Reduction

When dealing with multiple transfer functions in control loops, block diagram reduction becomes important. Complex block diagrams can end up being large and hard to manage, which in turn makes simulation of the system more difficult. To help manage complex block diagrams, a few rules can be used to simplify the block diagram to a single transfer function representing the overall system. To break down complex block diagrams, three different architectures are used.

1) Transfer functions in series

$$G_{ETS}(s) = G_1(s)\cdot G_2(s)$$

2) Transfer functions in parallel

$$G_{ETS}(s) = G_1(s) + G_2(s)$$

3) Feedback loop
$$Y(s) = G(s)E(s) = G(s)\big(U(s) - H(s)Y(s)\big)$$

$$Y(s) + G(s)H(s)Y(s) = G(s)U(s) \quad\Rightarrow\quad Y(s)\big(1 + G(s)H(s)\big) = G(s)U(s)$$

$$\frac{Y(s)}{U(s)} = \frac{G(s)}{1 + G(s)H(s)}$$

Should the loop contain a gain, the transfer function is expressed as follows.

$$\frac{Y(s)}{U(s)} = \frac{KG(s)}{1 + KG(s)H(s)}$$

A small computational sketch of these three compositions is given below.
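The sketch below expresses the three architectures as operations on (numerator, denominator) coefficient pairs. The helper names and the transfer functions G1, G2, and H are illustrative; any proper rational blocks would work the same way.

```python
# Series, parallel, and (negative) feedback composition of transfer functions
# represented as (num, den) coefficient pairs, highest order first.
import numpy as np

def series(g1, g2):
    (n1, d1), (n2, d2) = g1, g2
    return np.polymul(n1, n2), np.polymul(d1, d2)

def parallel(g1, g2):
    (n1, d1), (n2, d2) = g1, g2
    return np.polyadd(np.polymul(n1, d2), np.polymul(n2, d1)), np.polymul(d1, d2)

def feedback(g, h):
    # Y/U = G / (1 + G*H)
    (ng, dg), (nh, dh) = g, h
    return np.polymul(ng, dh), np.polyadd(np.polymul(dg, dh), np.polymul(ng, nh))

G1 = ([1.0], [1.0, 2.0])          # 1/(s+2), illustrative
G2 = ([1.0], [1.0, 5.0])          # 1/(s+5), illustrative
H  = ([1.0], [1.0])               # unity feedback

num, den = feedback(series(G1, G2), H)
print("closed-loop numerator:  ", num)
print("closed-loop denominator:", den)   # expect s^2 + 7s + 11
```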
6.1 Block Diagram Rules

Along with these architectures, two rules can be used to reduce block diagrams. These rules involve pick-off points and summing junctions, and they allow transfer functions to be moved around these points to aid the reduction. The two rules are shown below.

Rule #1 – Moving a transfer function around a pick-off point (equivalent block diagrams)
Rule #2 – Moving a transfer function around a summing junction (equivalent block diagrams)
After these rules and architectures are understood, it is possible to reduce virtually any structure to a single transfer function.

6.2 Reduction Examples

To show the effectiveness of these reductions, several examples are performed. Once each block diagram is reduced, the output of the effective transfer function is compared to the original system output to check the percent difference; if the reduction was done properly the error should be essentially zero.
Example #1

Step #1: Reduce the innermost feedback loop

Step #2: Combine blocks in series
Step #3: Combine blocks in parallel

Step #4: Combine blocks in series

Step #5: Reduce the inner feedback loop

Step #6: Combine blocks in series

Step #7: Reduce the final feedback loop
After the block diagram is reduced, it is possible to verify the results by comparing the system output after each step. This is shown by the following figure.

Figure 47: Results Simulation of Example #1 (system responses for the original diagram and after each of the seven reduction steps, plus the final result error)

As the figure shows, the system output after each reduction is identical, and the error between the final and original outputs is zero. This verifies the results of the block diagram reduction. With this understood, another example is carried out.
Example #2

Step #1: Simplify both inner feedback loops

Step #2: Combine blocks in series

Step #3: Remove the final feedback loop
After the block diagram is reduced, it is possible to verify the results by comparing the system output after each step. This comparison is made in the following figure.

Figure 48: Results Simulation of Example #2 (system responses for the original diagram and after each of the three reduction steps, plus the final result error)

As the figure shows, the system output after each reduction is identical, and the error between the final and original outputs is zero. This verifies the results of the block diagram reduction. With this understood, another example is carried out.
Example #3

Step #1: Consolidate the inner loops

Step #2: Combine transfer functions in series and simplify the right-hand feedback loop
Step #3: Combine loops

Step #4: Combine blocks in series

Step #5: Remove the final feedback loop

After the block diagram is reduced, it is possible to verify the results by comparing the system output after each step. This comparison is made in the following figure.
Figure 49: Results Simulation of Example #3 (system responses for the original diagram and after each of the five reduction steps, plus the final result error)

The figure shows that the system response is the same after each of the reductions performed. Along with that, the error between the original and final block diagrams is zero, so the reduction is correct.
Chapter 7: Controller Design Using the Root Locus Method

Now that single input/single output systems are understood, controller design can be taken into consideration. To do this, the closed loop characteristic equation must be tracked as a function of the gain; that is, the behavior of the roots of the characteristic equation must be plotted in the s-domain as the gain value changes. The method for doing this is known as the Root Locus method. To define the Root Locus for a system, the following rules and lemmas are important.

Rule #1 – The Root Locus is always symmetric with respect to the real axis.

Rule #2 – A Root Locus is made of N branches starting at the poles. These branches proceed either to the zeros of the system or along the asymptotic lines.

Rule #3 – The interesting points for the Root Locus are the pole and zero locations. A segment of the real axis is part of the Root Locus if the total number of interesting points to its right is odd; if that number is even, the segment is not part of the Root Locus.

Rule #4 – The intersection of the asymptotic lines with the real axis is located at the point given by the following formula.

$$\sigma = \frac{\sum \text{Real Parts of the Poles} - \sum \text{Real Parts of the Zeros}}{N - M}$$

Where:
$N$ = order of the denominator
$M$ = order of the numerator

Rule #5 – The following formula provides the angles that the asymptotic lines make with the real axis at the intersection point.

$$\phi_{A_j} = \frac{(2j+1)\cdot 180}{N-M}$$

Where: $j = i - 1$, $i = 1,\ldots,(N-M)$
Rule #6 – The following equation yields the location of the breakaway points, the points where the Root Locus leaves the real axis and becomes complex.

$$\frac{d}{ds}\big[G(s)H(s)\big] = 0$$

Rule #7 – This rule provides the location of the transition points of the system, the points where the Root Locus crosses the imaginary axis. To find these points, the closed loop transfer function must be found and the Routh-Hurwitz array set up for its denominator. The polynomial formed from the $s^2$ row is then set equal to zero and solved for s.

Magnitude Lemma – The characteristic equation $1 + KG(s)H(s) = 0$ implies $|KG(s)H(s)| = 1$, so the gain at any point on the Root Locus is found from

$$K = \frac{1}{|G(s)H(s)|}$$

A small computational sketch of rules #4 and #5 is given below.
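This is a minimal sketch of the asymptote centroid and angle calculations; the pole and zero lists are those of Example #1, which follows.

```python
# Asymptote centroid (rule #4) and angles (rule #5) from pole/zero locations.
import numpy as np

poles = [2.0, -4.0]       # poles of Example #1: (s - 2)(s + 4)
zeros = [-15.0]           # zero of Example #1: (s + 15)
N, M = len(poles), len(zeros)

sigma = (sum(poles) - sum(zeros)) / (N - M)
angles = [(2 * j + 1) * 180.0 / (N - M) for j in range(N - M)]   # N - M distinct angles
print("asymptote centroid sigma =", sigma)    # (2 - 4 + 15) / 1 = 13
print("asymptote angles (deg)   =", angles)   # a single 180-degree asymptote here
```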
7.1 Manual Construction of the Root Locus

After the rules for building the Root Locus are understood, a number of examples are performed to illustrate the method. The first example shows how to build the Root Locus by hand; the remaining three use computer simulation in place of the hand drawing.

Example #1

$$G(s) = \frac{s+15}{(s-2)(s+4)}$$

Step #1: Carry out rule #3. The resulting sketch shows where the Root Locus lies on the real axis.

Step #2: Carry out rules #4 and #5.

$$N - M = 1$$

Since $N - M = 1$ there is a single asymptotic direction. Solving the asymptote-angle expression

$$\phi_{A_j} = \frac{(2j+1)\cdot 180}{N-M}$$

for $j = 0$ gives $\phi_{A_1} = 180°$ (the next value, $540°$, is the same direction). The intersection point of the asymptote with the real axis is

$$\sigma = \frac{\sum RP_{Poles} - \sum RP_{Zeros}}{N-M} = \frac{2 - 4 + 15}{1} = 13$$

In this case the asymptotic lines are not important in defining the Root Locus.

Step #3: Carry out rule #6.

$$\frac{d}{ds}G(s) = \frac{d}{ds}\frac{s+15}{s^2+2s-8} = \frac{(1)(s^2+2s-8) - (s+15)(2s+2)}{(s^2+2s-8)^2} = \frac{-s^2 - 30s - 38}{(s^2+2s-8)^2} = 0$$

Setting the numerator to zero yields the breakaway and break-in points:

$$s^2 + 30s + 38 = 0 \;\Rightarrow\; s = \frac{-30 \pm \sqrt{30^2 - 4(1)(38)}}{2} = \frac{-30 \pm \sqrt{748}}{2} = -1.325,\; -28.675$$

$$s_{B1} = -1.325$$
$$s_{B2} = -28.675$$

Using these points further defines the Root Locus graph.

Step #4: Carry out rule #7.

$$\frac{Y(s)}{U(s)} = \frac{kG(s)}{1+kG(s)} = \frac{k(s+15)}{s^2 + (k+2)s + (15k-8)}$$

The Routh-Hurwitz array can now be constructed.

$s^2$ row: $1,\; 15k-8$
$s^1$ row: $k+2$
$s^0$ row: $15k-8$

From this array the stable values of k are found:

$$k > 0.533$$

This system has a single transition point, at $s = 0$, which is reached when $k = 0.533$.

Step #5: Apply the Magnitude Lemma to find the gain values needed to fully define the “DNA” analysis of the system.
$$k_{s^*} = \frac{|s^*-2|\,|s^*+4|}{|s^*+15|}$$

For the breakaway and break-in points:

$$k_{s=-1.325} = \frac{|-1.325-2|\,|-1.325+4|}{|-1.325+15|} = \frac{(3.325)(2.675)}{13.675} = 0.650$$

$$k_{s=-28.675} = \frac{|-28.675-2|\,|-28.675+4|}{|-28.675+15|} = \frac{(30.675)(24.675)}{13.675} = 55.35$$

After these calculations are performed, the “DNA” analysis of the Root Locus can be carried out, as shown below.

$k = 0$: roots at the open loop poles (one unstable), 2 FOS
$k\;\epsilon\;]0, k_1[$: unstable system, 2 FOS
$k = k_1$: marginal system, 2 FOS
$k\;\epsilon\;]k_1, k_2[$: stable system, 2 FOS
$k = k_2$: stable system, 2 FOS repeated
$k\;\epsilon\;]k_2, k_3[$: stable system, 1 SOS
$k = k_3$: stable system, 2 FOS repeated
$k > k_3$: stable system, 2 FOS

Where:

$$k_1 = 0.533,\qquad k_2 = 0.650,\qquad k_3 = 55.35$$

After the DNA analysis is carried out, the Root Locus for the system is fully defined. The full Root Locus is shown in the accompanying figure, and the breakaway calculations are cross-checked in the sketch below.
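The sketch below recomputes the breakaway points from $\frac{d}{ds}G(s) = 0$ and the corresponding gains from the magnitude lemma $K = 1/|G(s)|$ for Example #1; the printed gains should come out near 0.65 and 55.4.

```python
# Breakaway points and their gains for G(s) = (s + 15) / ((s - 2)(s + 4)).
import numpy as np

num = np.array([1.0, 15.0])                     # s + 15
den = np.polymul([1.0, -2.0], [1.0, 4.0])       # (s - 2)(s + 4)

# d/ds [num/den] = (num'*den - num*den') / den^2 ; breakaways are roots of the numerator
d_num = np.polyder(num)
d_den = np.polyder(den)
breakaways = np.roots(np.polysub(np.polymul(d_num, den), np.polymul(num, d_den)))

for s in breakaways:
    K = abs(np.polyval(den, s)) / abs(np.polyval(num, s))   # magnitude lemma
    print(f"breakaway at s = {s.real:.3f}, gain K ~ {K:.3f}")
```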
7.2 Computer Construction of the Root Locus

After this example is performed by hand, three more are performed using computer simulation. This method is useful for examining more complex Root Loci.

Example #2

Perform a Root Locus analysis on the following system to meet the desired specifications:

$$G(s) = \frac{1}{(s-2)(s+4)(s+14)},\qquad \omega_n = 1.5\text{–}2.5,\qquad \zeta = 0.25\text{–}0.4$$

Expanding the denominator:

$$G(s) = \frac{1}{s^3 + 16s^2 + 20s - 112}$$

From here the asymptotic lines and the location of their intersection with the real axis must be determined. To do this, a few preliminary calculations are made first.

$$N - M = 3$$

Where:
$N$ = order of the denominator
$M$ = order of the numerator

Also:
$$i = 1, 2, 3,\qquad j = i - 1 = 0, 1, 2$$

After these calculations are made, the asymptotic lines can be found from the following expression.

$$\phi_{A_j} = \frac{(2j+1)\cdot 180}{N-M}$$

Solving this for the given values of j yields the angles of the asymptotic lines:

$$\phi_{A_1} = \frac{(1)(180)}{3} = 60°,\qquad \phi_{A_2} = \frac{(3)(180)}{3} = 180°,\qquad \phi_{A_3} = \frac{(5)(180)}{3} = 300° = -60°$$

After the angles of the asymptotic lines are found, their intersection point on the real axis is determined.

$$\sigma = \frac{\sum RP_{Poles} - \sum RP_{Zeros}}{N-M} = \frac{2 - 4 - 14}{3} = -5.33$$

Following the generation of the asymptotes, the breakaway points of the Root Locus must be found; these are the points where the Root Locus leaves the real axis and becomes some form of a second order system. They are found from the following equation.

$$\frac{d}{ds}G(s) = \frac{d}{ds}\frac{1}{s^3+16s^2+20s-112} = \frac{-(3s^2+32s+20)}{(s^3+16s^2+20s-112)^2} = 0$$

Setting the numerator to zero yields:

$$3s^2 + 32s + 20 = 0 \;\Rightarrow\; s = \frac{-32 \pm \sqrt{32^2 - 4(3)(20)}}{2(3)} = \frac{-32 \pm \sqrt{784}}{6} = -0.667,\; -10$$

Upon further inspection of the Root Locus it is found that $s = -10$ is not part of the locus, so it is disregarded; $s = -0.667$ is the point at which the Root Locus leaves the real axis. From here a Routh-Hurwitz analysis of the system is necessary to provide the k values for which the system is stable. The array is constructed from the denominator of the closed loop transfer function, which is found as follows.

$$\frac{Y(s)}{U(s)} = \frac{kG(s)}{1+kG(s)} = \frac{k}{s^3 + 16s^2 + 20s - 112 + k}$$

The array can now be constructed.

$s^3$ row: $1,\; 20$
$s^2$ row: $16,\; k-112$
$s^1$ row: $b_1$
$s^0$ row: $k-112$

Where:

$$b_1 = \frac{(16)(20) - (1)(k-112)}{16} = \frac{432-k}{16}$$

From this array the stable values of k are calculated:

$$k - 112 > 0 \;\Rightarrow\; k > 112,\qquad 432 - k > 0 \;\Rightarrow\; k < 432,\qquad k\;\epsilon\;]112, 432[$$

This shows that the system is stable for gain values between 112 and 432. A further calculation uses the $s^2$ row of the Routh-Hurwitz array to find the intersection points of the Root Locus with the imaginary axis.

$$16s^2 + (k-112) = 0$$

where k here is the value at the crossing point, $k = 432$ for this system:

$$16s^2 + (432-112) = 16s^2 + 320 = 0 \;\Rightarrow\; s = \pm\sqrt{20}\,i$$
  • 92. 92 | P a g e a Routh-Hurwitz analysis of the system is necessary. This will provide the k values for which the system is stable. This is done by constructing the Routh-Hurwitz array based on the denominator of the system. In order to construct this array the closed loop transfer function for the system must be found. This can be done using the following equation. 𝑌( 𝑠) 𝑈( 𝑠) = 𝑘𝐺( 𝑠) 1 + 𝑘𝐺( 𝑠) In this case: 𝑌( 𝑠) 𝑈( 𝑠) = 𝑘 1𝑠3 + 16𝑠2 + 20𝑠 − 112 + 𝑘 The array can now be constructed as shown in the table below. 𝑠3 1 20 𝑠2 16 k-112 𝑠1 𝑏1 -- 𝑠0 k-112 -- From this array the stable values of k are calculated below. 𝑘 > 112 432 − 𝑘 > 0 → 𝑘 < 432 𝑘 𝜖 ]112,432[ This shows that the system will be stable for a range of k values going from 112 to 432. After this is found, a further calculation must be done. Using the 𝑠2 row of the Routh-Hurwitz array, the intersection points of the root locus and the imaginary axis can be found. This is done as shown below. 16𝑠2 + ( 𝑘 − 112) Where k in this case is the value at the crossing point. For this system: 𝑘 = 432 16𝑠2 + (432 − 112) = 16𝑠2 + 320 =0 0 ± √02 − 4(16)(320) 2 ∗ 16 = 0 ± √−20480 32 = ±√20𝑖