Taking the Laplace transform and then applying the inverse transformation, the following holds for x(t), y(t):

x(t) = e^{At} x(0) + ∫_0^t e^{A(t-τ)} B u(τ) dτ    (3.2.10)

y(t) = C e^{At} x(0) + C ∫_0^t e^{A(t-τ)} B u(τ) dτ

e^{At} = L^{-1}{(sI − A)^{-1}}    (3.2.11)

The equation (3.2.10) shows some important properties and features. Its solution consists of two parts: an initial conditions term (zero-input response) and an input term dependent on u(t) (zero-state response).
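The zero-input/zero-state decomposition can be cross-checked numerically. The sketch below (Python with NumPy/SciPy; the matrices A, B, the initial state, and the unit-step input are illustrative assumptions, not taken from the text) evaluates both terms of the solution separately:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative stable system with a single input and a unit-step u(t) = 1
A = np.array([[-1.0, -1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [0.0]])
t, dtau = 2.0, 1e-3

# Zero-input response: e^{At} x(0)
x_zi = expm(A * t) @ x0

# Zero-state response: integral of e^{A(t - tau)} B u(tau) dtau (rectangle rule)
taus = np.arange(0.0, t, dtau)
x_zs = sum(expm(A * (t - tau)) @ B * dtau for tau in taus)

# Complete solution is the sum of the two parts
x_total = x_zi + x_zs
```

For a step input the zero-state term has the closed form A^{-1}(e^{At} − I)B, which the numerical integral reproduces to within the discretisation error.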
The solution of (3.2.5) for a free system (u(t) = 0) is

x(t) = e^{At} x(0)

and the exponential term is defined as

e^{At} = Σ_{i=0}^∞ A^i t^i / i!

The matrix

Φ(t) = e^{At} = L^{-1}{(sI − A)^{-1}}    (3.2.14)

is called the state transition matrix (fundamental matrix, matrix exponential). The solution of (3.2.5) for nonzero u(t) is then

x(t) = Φ(t) x(0) + ∫_0^t Φ(t − τ) B u(τ) dτ

The matrix exponential satisfies the following identities:

Φ(0) = I,   Φ(t1 + t2) = Φ(t1) Φ(t2),   Φ^{-1}(t) = Φ(−t),   dΦ(t)/dt = AΦ(t)
The equation (3.2.14) shows that the system matrix A plays a crucial role in the solution of the state-space equations. Elements of this matrix depend on coefficients of mass and heat transfer, activation energies, flow rates, etc. The solution of the state-space equations is therefore influenced by the physical and chemical properties of the processes.
The solution of the state-space equations depends on the roots of the characteristic equation det(sI − A) = 0. This will be clarified in the next example.
Example 3.2.2: Calculation of matrix exponential
Consider a matrix

A = [ −1  −1 ]
    [  0  −2 ]

The matrix exponential corresponding to A is defined in equation (3.2.14) as

Φ(t) = L^{-1}{ (sI − A)^{-1} }

     = L^{-1}{ [ s+1    1  ]^{-1} }
              [  0    s+2 ]

     = L^{-1}{ (1 / det [ s+1   1  ]) [ s+2  −1  ] }
                        [  0   s+2 ]  [  0   s+1 ]

     = L^{-1}{ 1/((s+1)(s+2)) [ s+2  −1  ] }
                              [  0   s+1 ]

     = L^{-1}{ [ 1/(s+1)   −1/((s+1)(s+2)) ] }
              [    0            1/(s+2)   ]

The elements of Φ(t) are found from Table 3.1.1 as

Φ(t) = [ e^{-t}   e^{-2t} − e^{-t} ]
       [   0          e^{-2t}      ]
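The closed-form Φ(t) above can be checked against a numerical matrix exponential; a minimal sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, -1.0], [0.0, -2.0]])

def phi_analytic(t):
    """Closed-form state transition matrix derived in Example 3.2.2."""
    return np.array([[np.exp(-t), np.exp(-2.0 * t) - np.exp(-t)],
                     [0.0,        np.exp(-2.0 * t)]])

# The inverse-Laplace result agrees with a direct numerical matrix exponential
for t in (0.0, 0.5, 1.0, 3.0):
    assert np.allclose(expm(A * t), phi_analytic(t))
```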
3.2.3 Canonical Transformation
Eigenvalues of A, λ1, ..., λn, are given as solutions of the equation

det(λI − A) = 0

If the eigenvalues of A are distinct, then a nonsingular matrix T exists such that

Λ = T^{-1} A T    (3.2.21)

is a diagonal matrix of the form

Λ = [ λ1  0   ...  0  ]
    [ 0   λ2  ...  0  ]
    [ ...             ]
    [ 0   0   ...  λn ]

The canonical transformation (3.2.21) can be used for the direct calculation of e^{At}. Substituting A = T Λ T^{-1} from (3.2.21) into the equation

dx(t)/dt = A x(t)

gives

d(T^{-1} x)/dt = Λ (T^{-1} x)

The solution of the above equation is

T^{-1} x(t) = e^{Λt} T^{-1} x(0)

or

x(t) = T e^{Λt} T^{-1} x(0)

and therefore

e^{At} = T e^{Λt} T^{-1}

where

e^{Λt} = [ e^{λ1 t}    0     ...     0     ]
         [    0     e^{λ2 t} ...     0     ]
         [ ...                             ]
         [    0        0     ...  e^{λn t} ]
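A sketch of this canonical-transformation route to e^{At}, assuming a matrix with distinct eigenvalues (the matrix here is illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative matrix with distinct eigenvalues (-1 and -2)
A = np.array([[-1.0, -1.0], [0.0, -2.0]])

lam, T = np.linalg.eig(A)          # A T = T diag(lam), so Lambda = T^{-1} A T
t = 1.5
eLt = np.diag(np.exp(lam * t))     # e^{Lambda t} = diag(e^{lambda_i t})
eAt = T @ eLt @ np.linalg.inv(T)   # e^{At} = T e^{Lambda t} T^{-1}

assert np.allclose(eAt, expm(A * t))
```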
3.2.4 Stability, Controllability, and Observability of Continuous-Time Systems
Stability, controllability, and observability are basic properties of systems closely related to state-space models. These properties can be utilised for system analysis and synthesis.
Stability of Continuous-Time Systems
An important aspect of system behaviour is stability. A system can be defined as stable if its response to bounded inputs is also bounded. The concept of stability is of great practical interest, as unstable control systems are unacceptable. Stability can also be determined without an analytical solution of the process equations, which is important for nonlinear systems.
Consider a system

dx(t)/dt = f(x(t), u(t), t)

Such a system is called forced, as the vector of input variables u(t) appears on the right-hand side of the equation. However, stability can be studied on free (zero-input) systems given by the equation

dx(t)/dt = f(x(t), t)    (3.2.30)

u(t) does not appear in the previous equation, which corresponds to processes with constant inputs. If time t appears explicitly as an argument in the process dynamics equations, we speak about a nonautonomous system, otherwise about an autonomous system.
In our discussion about stability of (3.2.30) we will consider stability of a motion xs(t) that corresponds to constant values of the input variables. Let us for this purpose investigate any solution (motion) x(t) of the forced system that is at t = 0 in the neighbourhood of xs(t). The problem of stability is closely connected to the question whether x(t) remains for t ≥ 0 in the neighbourhood of xs(t). Let us define the deviation

˜x(t) = x(t) − xs(t)

then

d˜x(t)/dt + dxs(t)/dt = f(˜x(t) + xs(t), u(t), t)

d˜x(t)/dt = f(˜x(t) + xs(t), u(t), t) − f(xs(t), u(t), t)    (3.2.32)

The solution xs(t) in (3.2.32) corresponds for all t > 0 to the relations ˜x(t) = 0 and ˙˜x(t) = 0. Therefore the state ˜x(t) = 0 is called the equilibrium state of the system described by (3.2.32). This equation can always be constructed, and stability of the equilibrium point can be interpreted as stability at the origin of the state-space.
Stability theorems given below are valid for nonautonomous systems. However, such systems are very rare in common processes. In connection with the above ideas about the equilibrium point, we will restrict our discussion to systems given by

dx(t)/dt = f(x(t))    (3.2.33)

The equilibrium state xe = 0 of this system obeys the relation f(0) = 0, as dx/dt = 0.
We assume that the solution of the equation (3.2.33) exists and is unique.
Stability can be intuitively defined as follows: if xe = 0 is the equilibrium point of the system (3.2.33), then we may say that xe = 0 is a stable equilibrium point if the solution x(t) = x[x(t0), t] of (3.2.33) that begins in some state x(t0) "close" to the equilibrium point xe = 0 remains in the neighbourhood of xe = 0, or the solution approaches this state.
The equilibrium state xe = 0 is unstable if the solution x(t) = x[x(t0), t] that begins in some state x(t0) diverges from the neighbourhood of xe = 0.
Next, we state the definitions of Lyapunov stability, asymptotic stability, and asymptotic stability in the large.
Lyapunov stability: The system (3.2.33) is stable in the equilibrium state xe = 0 if for any given ε > 0 there exists δ(ε) > 0 such that ||x(t0)|| ≤ δ implies ||x[x(t0), t]|| ≤ ε for all t ≥ 0.
Asymptotic (internal) stability: The system (3.2.33) is asymptotically stable in the equilibrium state xe = 0 if it is Lyapunov stable and if all solutions x(t) = x[x(t0), t] that begin sufficiently close to the equilibrium state xe = 0 satisfy the condition lim_{t→∞} ||x(t)|| = 0.
Asymptotic stability in the large: The system (3.2.33) is asymptotically stable in the large in the equilibrium state xe = 0 if it is asymptotically stable for all initial states x(t0).
In the above definitions, the notation ||x|| has been used for the Euclidean norm of a vector x(t), defined as the distance of the point given by the coordinates of x from the equilibrium point xe = 0 and given as ||x|| = (x^T x)^{1/2}.
Note 3.2.1 A norm of a vector is a function transforming any vector x ∈ R^n to a real number ||x|| with the following properties:
1. ||x|| ≥ 0,
2. ||x|| = 0 iff x = 0,
3. ||kx|| = |k| ||x|| for any scalar k,
4. ||x + y|| ≤ ||x|| + ||y||.
Some examples of norms are ||x|| = (x^T x)^{1/2}, ||x|| = Σ_{i=1}^n |x_i|, ||x|| = max_i |x_i|. It can be proven that all these norms satisfy properties 1-4.
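A quick numerical spot-check of properties 1-4 for the three example norms (a sketch, not a proof; random vectors only probe the properties):

```python
import numpy as np

# The three norms listed in Note 3.2.1
norms = {
    "euclidean": lambda x: float(np.sqrt(x @ x)),     # (x^T x)^{1/2}
    "sum":       lambda x: float(np.sum(np.abs(x))),  # sum of |x_i|
    "max":       lambda x: float(np.max(np.abs(x))),  # max of |x_i|
}

rng = np.random.default_rng(0)
for name, n in norms.items():
    for _ in range(100):
        x = rng.normal(size=5)
        y = rng.normal(size=5)
        k = float(rng.normal())
        assert n(x) >= 0                            # property 1
        assert n(np.zeros(5)) == 0                  # property 2
        assert np.isclose(n(k * x), abs(k) * n(x))  # property 3 (homogeneity)
        assert n(x + y) <= n(x) + n(y) + 1e-12      # property 4 (triangle inequality)
```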
Example 3.2.3: Physical interpretation – U-tube
Consider a U-tube as an example of a second order system. A mathematical model of this system can be derived from Fig. 3.2.2 considering the equilibrium of forces.
We assume that if the specific pressure changes, the force with which the liquid flow is inhibited is proportional to the speed of the liquid. Furthermore, we assume that Newton's second law is applicable. The following equation holds for the equilibrium of forces
F pv = 2Fgρh + kF dh/dt + FLρ d^2h/dt^2

or

d^2h/dt^2 + (k/(Lρ)) dh/dt + (2g/L) h = (1/(Lρ)) pv

where
Figure 3.2.2: A U-tube
F - inner cross-sectional area of tube,
k - coefficient,
pv - specific pressure,
g - acceleration of gravity,
ρ - density of liquid
If the input is zero, then the mathematical model is of the form

d^2x1/dt^2 + a1 dx1/dt + a0 x1 = 0

where x1 = h − hs, a0 = 2g/L, a1 = k/(Lρ). The speed of the liquid flow will be denoted by x2 = dx1/dt. If x1, x2 are elements of the state vector x, then the dynamics of the U-tube is given as

dx1/dt = x2

dx2/dt = −a0 x1 − a1 x2
If we consider a0 = 1, a1 = 1, x(0) = (1, 0)^T, then the solution of the differential equations is shown in Fig. 3.2.3. At any time instant the total system energy is given as the sum of the kinetic and potential energies of the liquid

V(x1, x2) = FLρ x2^2/2 + ∫_0^{x1} 2Fgρ x dx

Energy V satisfies the following conditions: V(x) > 0 for x ≠ 0 and V(0) = 0.
These conditions show that the sum of kinetic and potential energies is positive except when the liquid is in the equilibrium state xe = 0, where dx1/dt = dx2/dt = 0. The change of V in time is given as

dV/dt = (∂V/∂x1)(dx1/dt) + (∂V/∂x2)(dx2/dt)

dV/dt = 2Fgρ x1 x2 + FLρ x2 (−(2g/L) x1 − (k/(Lρ)) x2)

dV/dt = −Fk x2^2
Figure 3.2.3: Time response of the U-tube for initial conditions (1, 0)^T

Figure 3.2.4: Constant energy curves and state trajectory of the U-tube in the state plane
As k > 0, the time derivative of V is always negative except if x2 = 0, when dV/dt = 0, and hence V cannot increase. If x2 = 0, the dynamics of the tube shows that

dx2/dt = −(2g/L) x1

is nonzero (except at xe = 0). The system cannot remain in a nonequilibrium state for which x2 = 0 and always reaches the equilibrium state, which is stable. The sum of the energies V is given as
V(x1, x2) = 2Fgρ x1^2/2 + FLρ x2^2/2

V(x1, x2) = (Fρ/2)(2g x1^2 + L x2^2)

Fig. 3.2.4 shows the state plane with curves of constant energy levels V1 < V2 < V3 and the state trajectory corresponding to Fig. 3.2.3, where x1, x2 are plotted as functions of the parameter t. Conclusions about system behaviour and about the state trajectory in the state plane can be generalised to the general state-space. It is clear that some results about system properties can also be derived without an analytical solution of the state-space equations.
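The energy argument can be reproduced numerically. The sketch below simulates the U-tube with a0 = a1 = 1 (as in the example) and checks that x1^2 + x2^2, which is proportional to V for these parameter values, never increases along the trajectory:

```python
import numpy as np
from scipy.integrate import solve_ivp

# U-tube dynamics in state-space form, with a0 = a1 = 1 as in the example
a0, a1 = 1.0, 1.0

def rhs(t, x):
    x1, x2 = x
    return [x2, -a0 * x1 - a1 * x2]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True,
                rtol=1e-8, atol=1e-10)

# For a0 = 2g/L = 1 and a1 = k/(L*rho) = 1, the energy V = (F*rho/2)(2g x1^2 + L x2^2)
# is proportional to x1^2 + x2^2, so monotonicity can be checked on that expression
ts = np.linspace(0.0, 10.0, 200)
Vs = [float(x1**2 + x2**2) for x1, x2 in (sol.sol(t) for t in ts)]

# dV/dt = -F k x2^2 <= 0: energy never increases along the trajectory
assert all(Vs[i + 1] <= Vs[i] + 1e-6 for i in range(len(Vs) - 1))
```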
The stability theory of Lyapunov assumes the existence of a Lyapunov function V(x). The continuous function V(x) with continuous derivatives is called positive definite in some neighbourhood ∆ of the state origin if

V(0) = 0

and

V(x) > 0    (3.2.36)

for all x ≠ 0 within ∆. If (3.2.36) is replaced by

V(x) ≥ 0

for all x ∈ ∆, then V(x) is positive semidefinite. Definitions of negative definite and negative semidefinite functions follow analogously.
Various definitions of stability for the system dx(t)/dt = f(x), f(0) = 0, lead to the following theorems:
Stability in the Lyapunov sense: If a positive definite function V(x) can be chosen such that

dV/dt = (∂V/∂x)^T f(x) ≤ 0

then the system is stable in the origin in the Lyapunov sense.
The function V(x) satisfying this theorem is called the Lyapunov function.
Asymptotic stability: If a positive definite function V(x) can be chosen such that

dV/dt = (∂V/∂x)^T f(x) < 0,   x ≠ 0

then the system is asymptotically stable in the origin.
Asymptotic stability in the large: If the conditions of asymptotic stability are satisfied for all x and if V(x) → ∞ for ||x|| → ∞, then the system is asymptotically stable in the large in the origin.
There is no general procedure for the construction of the Lyapunov function, and if such a function exists, it is not unique. Often it is chosen in the form
V(x) = Σ_{k=1}^n Σ_{r=1}^n K_rk x_r x_k    (3.2.40)

K_rk are real constants, K_rk = K_kr, so (3.2.40) can be written as

V(x) = x^T K x

and K is a symmetric matrix. V(x) is positive definite if and only if the determinants (leading principal minors)

K11,   | K11  K12 |,   | K11  K12  K13 |,   ...
       | K21  K22 |    | K21  K22  K23 |
                       | K31  K32  K33 |

are greater than zero.
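A sketch of this determinant (Sylvester) test, with two illustrative matrices:

```python
import numpy as np

def is_positive_definite(K):
    """Sylvester's criterion: all leading principal minors must be positive."""
    K = np.asarray(K, dtype=float)
    return all(np.linalg.det(K[:i, :i]) > 0 for i in range(1, K.shape[0] + 1))

K_pd  = np.array([[2.0, -1.0], [-1.0, 2.0]])  # minors: 2 and 3, both positive
K_not = np.array([[1.0,  2.0], [ 2.0, 1.0]])  # minors: 1 and -3

assert is_positive_definite(K_pd)
assert not is_positive_definite(K_not)
# For symmetric K the criterion agrees with the eigenvalue test
assert (np.linalg.eigvalsh(K_pd) > 0).all()
```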
Asymptotic stability of linear systems: The linear system

dx(t)/dt = A x(t)    (3.2.43)

is asymptotically stable (in the large) if and only if one of the following properties is valid:
1. the Lyapunov equation

A^T K + K A = −µ    (3.2.44)

where µ is any symmetric positive definite matrix, has a unique positive definite symmetric solution K,
2. all eigenvalues of the system matrix A, i.e. all roots of the characteristic polynomial det(sI − A), have negative real parts.
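Both criteria can be exercised numerically. The sketch below (illustrative stable A, the common choice µ = I) solves the Lyapunov equation with SciPy and checks that the solution K is symmetric positive definite:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, -1.0], [0.0, -2.0]])  # stable: eigenvalues -1 and -2
mu = np.eye(2)                             # a symmetric positive definite choice

# Solve A^T K + K A = -mu (scipy solves a X + X a^H = q, hence the transpose)
K = solve_continuous_lyapunov(A.T, -mu)

# K is symmetric positive definite, consistent with asymptotic stability
assert np.allclose(K, K.T)
assert (np.linalg.eigvalsh(K) > 0).all()
# Criterion 2 agrees: all eigenvalues of A have negative real parts
assert (np.linalg.eigvals(A).real < 0).all()
```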
Proof: We prove only the sufficiency part of 1. Consider the Lyapunov function of the form

V(x) = x^T K x

If K is positive definite, then V(x) > 0 for all x ≠ 0, and for dV/dt holds

dV(x)/dt = (dx/dt)^T K x + x^T K (dx/dt)

Substituting dx/dt from Eq. (3.2.43) yields

dV(x)/dt = x^T A^T K x + x^T K A x

dV(x)/dt = x^T (A^T K + K A) x

Applying (3.2.44) we get

dV(x)/dt = −x^T µ x

and because µ is a positive definite matrix,

dV(x)/dt < 0

for all x ≠ 0, and the system is asymptotically stable in the origin. The Lyapunov function can be written as V(x) = x^T K x, and the corresponding norm is therefore defined as (x^T K x)^{1/2}. It can easily be shown that K exists and all conditions of the theorem on asymptotic stability in the large in the origin are fulfilled. The second part of the proof, necessity, is much harder to prove.
The choice of µ for computations is usually µ = I.
Controllability of continuous systems
The concept of controllability, together with observability, is of fundamental importance in the theory of automatic control.
Definition of controllability of the linear system

dx(t)/dt = A x(t) + B u(t)    (3.2.56)

is as follows: A state x(t0) ≠ 0 of the system (3.2.56) is controllable if the system can be driven from this state to the state x(t1) = 0 by applying a suitable u(t) within a finite time t1 − t0, t ∈ [t0, t1]. If every state is controllable, then the system is completely controllable.
Definition of reachability of linear systems: A state x(t1) of the system (3.2.56) is reachable if the system can be driven from the state x(t0) = 0 to x(t1) by applying a suitable u(t) within a finite time t1 − t0, t ∈ [t0, t1]. If every state is reachable, then the system is completely reachable.
For linear systems with constant coefficients (linear time-invariant systems), all reachable states are controllable and it is sufficient to speak about controllability. Often the definitions are simplified, and we say that the system is completely controllable (shortly, controllable) if there exists a u(t) that drives the system from an arbitrary initial state x(t0) to the final state x(t1) within a finite time t1 − t0, t ∈ [t0, t1].
Theorem (Controllability of linear continuous systems with constant coefficients): The system

dx(t)/dt = A x(t) + B u(t)    (3.2.57)

y(t) = C x(t)    (3.2.58)

is completely controllable if and only if the rank of the controllability matrix Qc is equal to n. Qc [n × nm] is defined as

Qc = (B  AB  A^2 B  ...  A^{n-1} B)

where n is the dimension of the vector x and m is the dimension of the vector u.
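The rank test can be sketched as follows (illustrative A, B; NumPy assumed):

```python
import numpy as np

def controllability_matrix(A, B):
    """Build Qc = (B  AB  A^2 B ... A^{n-1} B), an n x (n*m) matrix."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Illustrative second-order system with a single input
A = np.array([[-1.0, -1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])

Qc = controllability_matrix(A, B)
# rank Qc = n  <=>  the system is completely controllable
assert np.linalg.matrix_rank(Qc) == A.shape[0]
```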
Proof: We prove only the "if" part. The solution of Eq. (3.2.57) with initial condition x(t0) is

x(t) = e^{At} x(t0) + ∫_0^t e^{A(t-τ)} B u(τ) dτ

For t = t1 follows

x(t1) = e^{At1} x(t0) + e^{At1} ∫_0^{t1} e^{-Aτ} B u(τ) dτ    (3.2.61)

The function e^{-Aτ} can be rewritten with the aid of the Cayley-Hamilton theorem as

e^{-Aτ} = k0(τ) I + k1(τ) A + k2(τ) A^2 + ... + k_{n-1}(τ) A^{n-1}    (3.2.62)

Substituting for e^{-Aτ} from (3.2.62) into (3.2.61) yields

x(t1) = e^{At1} x(t0) + e^{At1} ∫_0^{t1} [k0(τ) B + k1(τ) AB + k2(τ) A^2 B + ... + k_{n-1}(τ) A^{n-1} B] u(τ) dτ    (3.2.63)
or

x(t1) = e^{At1} x(t0) + e^{At1} ∫_0^{t1} (B  AB  A^2 B  ...  A^{n-1} B) (k0(τ)u(τ), k1(τ)u(τ), k2(τ)u(τ), ..., k_{n-1}(τ)u(τ))^T dτ
Complete controllability means that for all x(t0) ≠ 0 there exists a finite time t1 − t0 and a suitable u(t) such that

−x(t0) = (B  AB  A^2 B  ...  A^{n-1} B) ∫_0^{t1} (k0(τ)u(τ), k1(τ)u(τ), k2(τ)u(τ), ..., k_{n-1}(τ)u(τ))^T dτ    (3.2.64)
From this equation it follows that any vector −x(t0) must be expressible as a linear combination of the columns of Qc. The system is controllable if the integrand in (3.2.64) allows the influence of u to reach all the states x. Hence complete controllability is equivalent to the condition that the rank of Qc is equal to n.
The controllability theorem enables a simple check of system controllability with regard to x. The test with regard to y can be derived analogously and is given below.
Theorem (Output controllability of linear systems with constant coefficients): The system output y of (3.2.57), (3.2.58) is completely controllable if and only if the rank of the output controllability matrix Qyc [r × nm] is equal to r (with r being the dimension of the output vector), where

Qyc = (CB  CAB  CA^2 B  ...  CA^{n-1} B)
We note that the controllability conditions are also valid for linear systems with time-varying coefficients if A(t), B(t) are known functions of time. The conditions for nonlinear systems are derived only for some special cases. Fortunately, in the majority of practical cases, controllability of nonlinear systems is satisfied if the corresponding linearised system is controllable.
Example 3.2.4: CSTR - controllability
The linearised state-space model of the CSTR (see Example 2.4.2) is of the form

dx1(t)/dt = a11 x1(t) + a12 x2(t)

dx2(t)/dt = a21 x1(t) + a22 x2(t) + b21 u1(t)

or

dx(t)/dt = A x(t) + B u1(t)

where

A = [ a11  a12 ],   B = [  0  ]
    [ a21  a22 ]        [ b21 ]

The controllability matrix Qc is

Qc = (B | AB) = [  0    a12 b21 ]
                [ b21   a22 b21 ]

and has rank equal to 2 (as a12 ≠ 0 and b21 ≠ 0), so the system is completely controllable. It is clear that this is valid for all steady-states, and hence the corresponding nonlinear model of the reactor is controllable.
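With hypothetical numeric values for the linearised coefficients (the symbols a11, ..., b21 follow the example; the numbers are made up), the rank claim can be checked directly:

```python
import numpy as np

# Hypothetical numeric values; the structure (zero in the first row of B)
# matches Example 3.2.4
a11, a12, a21, a22, b21 = -2.0, 0.5, 1.0, -3.0, 1.5

A = np.array([[a11, a12], [a21, a22]])
B = np.array([[0.0], [b21]])

Qc = np.hstack([B, A @ B])
# Qc = [[0, a12*b21], [b21, a22*b21]]; rank 2 whenever a12 != 0 and b21 != 0
assert np.allclose(Qc, [[0.0, a12 * b21], [b21, a22 * b21]])
assert np.linalg.matrix_rank(Qc) == 2
```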
Observability
States of a system are in the majority of cases only partially measurable, or they are not measurable at all. Therefore it is not possible to realise a control that assumes knowledge of the state variables. In this connection a question arises whether it is possible to determine the state vector from output measurements. We speak about observability and reconstructibility. To investigate observability, only a free system needs to be considered.
Definition of observability: A state x(t0) of the system

dx(t)/dt = A x(t),   y(t) = C x(t)

is observable if it can be determined from knowledge of y(t) within a finite time t ∈ [t0, t1]. If every state x(t0) can be determined from the output vector y(t) within an arbitrary finite interval t ∈ [t0, t1], then the system is completely observable.
Definition of reconstructibility: A state x(t0) of a system is reconstructible if it can be determined from knowledge of y(t) within a finite time t ∈ [t0', t0]. If every state x(t0) can be determined from the output vector y(t) within an arbitrary finite interval t ∈ [t0', t0], then the system is completely reconstructible.
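Observability is commonly checked by the dual of the controllability rank test, using the observability matrix Qo = (C; CA; ...; CA^{n-1}); that test is not stated in the text above, so the sketch below assumes it (illustrative A, C):

```python
import numpy as np

def observability_matrix(A, C):
    """Build Qo by stacking C, CA, CA^2, ..., CA^{n-1} row-wise."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

A = np.array([[-1.0, -1.0], [0.0, -2.0]])
C = np.array([[1.0, 0.0]])  # only the first state is measured

Qo = observability_matrix(A, C)
# rank Qo = n  <=>  the system is completely observable (standard dual test)
assert np.linalg.matrix_rank(Qo) == A.shape[0]
```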