Assumption 4 The approximation error ε is bounded as follows:

|ε| ≤ ε_N, (15)

where ε_N > 0 is an unknown constant.
Let M̂ and N̂ be the estimates of M and N, respectively. Based on these estimates, define Z̃ = diag[M̃, Ñ] and Ẑ = diag[M̂, N̂] for convenience. Then the following inequality holds:

‖ω‖ ≤ ρ_ω ϑ_ω, (21)

where ρ_ω = max{‖M‖, ‖N‖_F, ‖M‖_1}. Here ρ_ω is an unknown coefficient, whereas ϑ_ω is a known function of the available estimates.
3.2 Parameter update law and stability analysis
Substituting (14) and (16) into (13), we obtain the closed-loop error dynamics. Define the lumped unknown parameter φ from ρ_ω and ε_N, and let φ̃ = φ − φ̂ denote the estimation error of φ. The proposed update laws then guarantee that all signals in the system are uniformly bounded and that the tracking error converges to a neighborhood of the origin.
Proof. Consider a positive definite Lyapunov function candidate whose time derivative is

L̇ = τ^T τ̇ + tr{M̃^T F^{-1} dM̃/dt} + tr{Ñ^T R^{-1} dÑ/dt} + γ^{-1} φ̃ dφ̃/dt. (26)
Substituting (23) and the first two terms of (24) into (26), after some straightforward manipulation we obtain an upper bound on L̇ in which c_1, c_2, c_3 are positive constants.
Using (11) and the last two terms of (24), we obtain
an inequality involving k_1, the tanh robustifying term, and tr{Z̃^T Ẑ}, from which φ̃, Z̃ and τ converge to the compact sets Ω_φ, Ω_Z and Ω_τ, respectively. Furthermore, this implies that e is bounded and converges to a neighborhood of the origin, and that all signals in the system are uniformly bounded.
The input vector of the neural network is x_nn = [1, x^T, e, ψ̂]^T, and the number of hidden-layer nodes is 25. The initial weights of the neural network are M̂(0) = 0, N̂(0) = 0. The initial condition of the controlled plant is x(0) = [0.1, 0.2]^T. The other parameters are chosen as follows: k_1 = 0.01, γ = 0.1, λ = 0.01, α = 10, Λ = 2, F = 8I_M, R = 5I_N, with I_M, I_N the corresponding identity matrices.
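As an aside on these parameter choices, the filtered-error and robustifying quantities can be sketched in code. This is a minimal illustrative sketch: the filtered-error form τ = ė + Λe and the tanh-type robustifying term are assumptions (the function names are ours, not from the text), using the gains quoted above.

```python
import numpy as np

# gains quoted in the simulation above
LAMBDA = 2.0   # Lambda
K1 = 0.01      # k1
ALPHA = 10.0   # alpha, width parameter of the tanh robustifying term

def filtered_error(e, e_dot, lam=LAMBDA):
    """Filtered tracking error, assumed form tau = e_dot + Lambda * e (scalar case)."""
    return e_dot + lam * e

def robust_term(tau, alpha=ALPHA):
    """Smooth tanh robustifying term: bounded in (-1, 1) and zero at tau = 0."""
    return np.tanh(tau / alpha)

tau = filtered_error(0.1, -0.05)
print(tau, robust_term(tau))
```

The tanh term is what keeps the control signal smooth near τ = 0, in contrast to a discontinuous sign-function robustifier.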
Figs. 1, 2, and 3 compare the tracking errors, output tracking, and control inputs of the PD controller and the proposed NN-based adaptive controller, respectively. These results indicate that the proposed NN-based adaptive controller achieves better control performance than the PD controller. Fig. 4 depicts the output of the NN and the norms of M̂ and N̂, illustrating the boundedness of the estimates M̂, N̂ and the control role of the NN. As the figures show, the learning rate of the neural network is rapid: it tracks the objective in less than 2 seconds. Moreover, as desired, all signals in the system, including the control signal, tend to be smooth.
4 Decentralized Adaptive Neural Network Control of a Class of Large-Scale Nonlinear Systems with linear function interconnections
In this section, the scheme proposed above is extended to large-scale decentralized nonlinear systems whose subsystems belong to the class of non-affine nonlinear systems described earlier. Two schemes are proposed. The first is an RBFN-based adaptive control scheme under the assumption that the interconnections between subsystems are bounded linearly by the norms of the filtered tracking errors. In the second, the interconnections are allowed to be stronger nonlinear functions.
We consider a large-scale system composed of nonlinear subsystems described by differential equations of the following form:
The control objective is to determine a control law forcing the output y_i to follow a given desired output x_di with acceptable accuracy, while all signals involved remain bounded.
Define the desired trajectory vector x_di = [y_di, ẏ_di, …, y_di^(l_i−1)]^T. For the ideal control input u_i*, δ_i = f_i(x_i, u_i*) represents the ideal control inverse.
Adding and subtracting δ_i on the right-hand side of ẋ_il_i = f_i(x_i, u_i) + g_i in (33), one obtains

ẋ_il_i = f_i(x_i, u_i) + g_i + δ_i − k_i τ_i − Y_di, (38)

which yields
Trang 8i ki i i x u ui i i uci i i vri gi
τ & = − τ + Δ % ∗ − + ψ δ − − + , (42)
where Δ %i( , , x u ui i i∗) = f x ui( , )i i − f x ui( ,i i∗)is error between nonlinear function and its ideal control function, we can use the RBFN to approximate it
4.1.1 Neural network-based approximation
Given a multi-input, single-output RBFN, let n_1i and m_1i be the numbers of input-layer and hidden-layer nodes, respectively. The activation function used in the RBFN is the Gaussian function

S_k(z_i) = exp[−0.5‖z_i − μ_k‖²/σ_k²], k = 1, …, m_1i,

where z_i ∈ R^(n_1i×1) is the input vector of the RBFN, μ_k and σ_k are the center and width of the k-th hidden node, and W_i is the weight vector.

Assumption 8 The approximation error ε_i(x_nn) is bounded by |ε_i| ≤ ε_Ni, where ε_Ni > 0 is an unknown constant.
where ρ_ωi = max(‖W_i‖, ‖μ_i‖_F, ‖σ_i‖, 2‖W_i‖_1) and ϑ_ωi = ‖Ŝ′_i μ̂_i‖_F + ‖Ŝ′_i σ̂_i‖_F + ‖Ŵ_i^T Ŝ′_i μ̂_i‖_F + ‖Ŵ_i^T Ŝ′_i σ̂_i‖_F + 1, with ‖·‖_1 the 1-norm. Notice that ρ_ωi is an unknown coefficient, whereas ϑ_ωi is a known function.
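The Gaussian basis and the single-output RBFN described above can be sketched directly. A minimal NumPy sketch; the vectorized ‖z − μ_k‖ form is our reading of the elementwise definition, and the dimensions are illustrative (8 hidden nodes, widths 5, and zero initial weights, as in the simulation later in this section):

```python
import numpy as np

def rbf_basis(z, mu, sigma):
    """Gaussian basis S_k(z) = exp(-0.5 * ||z - mu_k||^2 / sigma_k^2).

    z     : (n1,)     RBFN input vector z_i
    mu    : (n1, m1)  center matrix, column k is mu_k
    sigma : (m1,)     widths
    """
    d2 = np.sum((z[:, None] - mu) ** 2, axis=0)  # squared distances ||z - mu_k||^2
    return np.exp(-0.5 * d2 / sigma ** 2)

def rbfn_output(W, z, mu, sigma):
    """Single-output RBFN: u = W^T S(z), with weight vector W in R^{m1}."""
    return W @ rbf_basis(z, mu, sigma)

# illustrative sizes: 3 inputs, 8 hidden nodes
rng = np.random.default_rng(0)
z = rng.standard_normal(3)
mu = rng.standard_normal((3, 8))
sigma = np.full(8, 5.0)   # widths initialized to 5, as in the simulation
W = np.zeros(8)           # weights initialized to zero
print(rbfn_output(W, z, mu, sigma))
```

With zero initial weights the network output is zero, so the robustifying and feedback terms alone drive the plant during the initial learning transient.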
4.1.2 Controller design and stability analysis
Substituting (43) and (44) into (42), we obtain the closed-loop dynamics of τ_i. Consider a Lyapunov candidate containing the terms γ_Wi^(-1) tr{Z̃_i^T Z̃_i}, γ_φi^(-1) φ̃_i² and γ_di^(-1) d̃_i²; differentiating it along the closed-loop dynamics gives the bound (62).
Inserting (56) and (58) into the above inequality, we obtain
a bound whose quadratic part can be written in terms of the matrix E = K − D − (1/4)Γ^T Γ, where λ_min(E) denotes the minimum singular value of E. Then L̇ ≤ 0 as long as k_i > c_2i and d_i is sufficiently large, so that E is positive definite, and
Trang 14( )
2 min 2
σ = − − + − The desired trajectoryx d11=0.1 [sin(2 ) cos( )]π t − t ,
21 0.1 cos(2 )
d
Trang 15Input vectors of neural networks are [ , , ] ,T ˆ T 1, 2
i i i i
z = x τ ψ i = , and number of hidden layer nodes both 8 The initial weight of neural network isW ˆ (0) (0)i = The center values and the widths of Gaussian function are initialized as zeroes, and 5, respectively The initial condition of controlled plant isx1(0) [0.1,0.2] = T x2(0) [0,0] = T The other parameters are chosen as follows:
5, 5
i ki
Λ = = γWi= 0.001, γφi = 1, γdi= 1, λφi= 0.01, λdi= 0.01 , αi=10 , Fi= 10 IWi ,
2 ,i 2 i
G = I Hμ = Iσ , with IWi, I Iμi, σi corresponding identity matrices
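For concreteness, the simulation settings listed above can be collected into an illustrative configuration. The dictionary layout and names are ours, and the input dimension 4 assumes z_i = [x_i^T, τ_i, ψ̂_i]^T with x_i ∈ R²:

```python
import numpy as np

# illustrative setup of the two decentralized RBFN controllers,
# using the simulation settings quoted above (8 hidden nodes,
# centers initialized to zero, widths to 5, zero initial weights)
N_SUB, N_HIDDEN, N_IN = 2, 8, 4

controllers = []
for i in range(N_SUB):
    controllers.append({
        "W": np.zeros(N_HIDDEN),            # W_hat_i(0) = 0
        "mu": np.zeros((N_IN, N_HIDDEN)),   # Gaussian centers, initialized to zero
        "sigma": np.full(N_HIDDEN, 5.0),    # Gaussian widths, initialized to 5
        "Lambda": 5.0, "k": 5.0,
        "gamma_W": 0.001, "gamma_phi": 1.0, "gamma_d": 1.0,
        "lambda_phi": 0.01, "lambda_d": 0.01, "alpha": 10.0,
    })

print(len(controllers), controllers[0]["sigma"][0])
```

Each subsystem gets an identical controller structure; only the measured signals x_i and τ_i driving it differ, which is the essence of the decentralized design.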
Fig. 5 compares the tracking errors of the two subsystems. Fig. 6 gives the control inputs of the two subsystems, and Figs. 7 and 8 show the output tracking of the two subsystems, respectively. Figs. 9 and 10 illustrate the outputs of the two RBFNs and the evolution of the norms of Ŵ, μ̂, σ̂, respectively. These results validate the effectiveness of the proposed scheme: the tracking errors converge to a neighborhood of zero and all signals in the system are bounded. Furthermore, the learning rate of the neural network controllers is rapid; they track the desired trajectories in about 1 second. The control inputs, after a short initial transient, become smoother; this is because the neural networks have no knowledge of the objective in the initial stage.
4.2 RBFN-based decentralized adaptive control for the class of large-scale nonlinear systems with nonlinear function interconnections
Assumption 10 The interconnection effect is bounded by the following function:
Define the desired trajectory vector x_di = [y_di, ẏ_di, …, y_di^(l_i−1)]^T, the vector X_di = [y_di, ẏ_di, …, y_di^(l_i)]^T, and the tracking error e_i = [e_i1, e_i2, …, e_il_i]^T.
for (x_i, u_i) ∈ Ω_i × R, such that f_i(x_i, u_i*) − δ_i = 0, i.e., δ_i = f_i(x_i, u_i*) holds; δ_i represents an ideal control inverse. Adding and subtracting δ_i on the right-hand side of ẋ_il_i = f_i(x_i, u_i) + g_i in (33), one obtains
ẋ_il_i = f_i(x_i, u_i) + g_i + δ_i − Y_di − k_i τ_i, (80)

which yields
τ̇_i = −k_i τ_i + f_i(x_i, u_i) + g_i + δ_i. (81)
Similar to equation (40) above, ψ̂_i = f_i(x_i, û_i) holds.
Based on the above conditions, in order to control the system and keep it stable, we design the approximate pseudo-control input ψ̂_i as follows:
ψ̂_i = −k_i τ_i − Y_di − u_ci − Ŵ_gi^T S_gi(|τ_i|) τ_i − v_ri, (82)

where u_ci is the output of a neural network controller (an RBFN), v_ri is a robustifying control term designed in the stability analysis, and Ŵ_gi^T S_gi(|τ_i|) τ_i compensates for the interconnection nonlinearity (defined later).
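Equation (82) is a straightforward sum of the listed terms. A scalar sketch, with the RBFN output u_ci, the interconnection compensator gain Ŵ_gi^T S_gi(|τ_i|), and the robustifying term v_ri passed in as precomputed values (the function name is ours):

```python
def pseudo_control(k_i, tau_i, Y_di, u_ci, Wg_Sg, v_ri):
    """psi_hat_i = -k_i*tau_i - Y_di - u_ci - (Wg^T Sg(|tau_i|))*tau_i - v_ri, per (82)."""
    return -k_i * tau_i - Y_di - u_ci - Wg_Sg * tau_i - v_ri

# before learning, the NN terms u_ci, Wg_Sg and v_ri are all zero,
# so the pseudo-control reduces to the linear feedback -k_i*tau_i - Y_di
print(pseudo_control(5.0, 0.2, 0.1, 0.0, 0.0, 0.0))
```

This makes explicit that the design degrades gracefully: with untrained networks the loop is still a stabilizing linear feedback.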
Adding and subtracting ψ̂_i on the right-hand side of (81), with δ_i = k_i τ_i + Y_di = f_i(x_i, u_i*), we obtain (83),
where Δ̃_i(x_i, u_i, u_i*) = f_i(x_i, u_i) − f_i(x_i, u_i*) is the error between the nonlinear function and its ideal control function; we can use the RBFN to approximate it.
4.2.1 Neural network-based approximation
Based on the approximation property of the RBFN, Δ̃_i(x_i, u_i, u_i*) can be written as

Δ̃_i(x_i, u_i, u_i*) = W_i^T S_i(z_i) + ε_i(z_i), (84)

where W_i is the weight vector, S_i(z_i) is the Gaussian basis function vector, ε_i(z_i) is the approximation error, and z_i ∈ R^q is the input vector, with q the number of input nodes.
Assumption 12 The approximation error ε_i(z_i) is bounded by |ε_i| ≤ ε_Ni, where ε_Ni > 0 is an unknown constant. The input of the RBFN is chosen as z_i = [x_i^T, τ_i, ψ̂_i]^T. Moreover, the output of the RBFN is designed as
with Ŵ_i the estimate of the ideal W_i, given by the RBFN tuning algorithm.
Assumption 13 The ideal value of W_i satisfies

‖W_i‖ ≤ W_iM, (86)

where W_iM is a known positive constant, and the estimation error is W̃_i = W_i − Ŵ_i.
4.2.2 Controller design and stability analysis
Substituting (84) and (85) into (83), we obtain the closed-loop error dynamics, and the weight update law is chosen as

dŴ_i/dt = F_i S_i τ_i − γ_Wi |τ_i| Ŵ_i. (88)
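A forward-Euler step of the tuning law (88) can be sketched as follows. The |τ_i|-weighted leakage (σ-modification-style) term is an assumption of this sketch, and the function name is ours:

```python
import numpy as np

def weight_update_step(W_hat, S, tau, F, gamma_W, dt):
    """One Euler step of dW_hat/dt = F*S*tau - gamma_W*|tau|*W_hat, per (88).

    W_hat   : (m,) current weight estimate
    S       : (m,) Gaussian basis vector S_i(z_i)
    tau     : scalar filtered tracking error
    F       : scalar adaptation gain; gamma_W leakage gain
    """
    W_dot = F * S * tau - gamma_W * abs(tau) * W_hat
    return W_hat + dt * W_dot

W = np.zeros(4)
S = np.array([0.5, 0.2, 0.1, 0.05])
W = weight_update_step(W, S, tau=0.1, F=10.0, gamma_W=0.001, dt=0.01)
print(W)  # one small step in the direction of F*S*tau
```

The leakage term prevents unbounded weight drift when τ_i hovers near a nonzero residual, which is what the boundedness claims in the stability analysis rely on.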
Proof. Consider the following positive definite Lyapunov function candidate:

L_i = (1/2)τ_i² + (1/2)W̃_i^T F_i^(-1) W̃_i + (1/2)W̃_gi^T G_i^(-1) W̃_gi + (1/(2λ_i))φ̃_i². (92)

The time derivative of the above equation is given by
Since ξ_ij(·) is a smooth function, there exists a smooth function ζ_ij(|τ_j|), 1 ≤ i, j ≤ n, such that ξ_ij(|τ_j|) = |τ_j| ζ_ij(|τ_j|) holds; for example, ξ(s) = s² + s sin²(s) factors as s·ζ(s) with ζ(s) = s + sin²(s). Thus, we have
an upper bound in which c_1i, c_2i, c_3i, c_4i are positive constants. Moreover, using the fact that ã^T â ≤ ‖ã‖‖a‖ − ‖ã‖², (101) can be rewritten as
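The fact used here follows from â = a − ã and Cauchy–Schwarz: ã^T â = ã^T a − ‖ã‖² ≤ ‖ã‖‖a‖ − ‖ã‖². A quick numerical check over random vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    a = rng.standard_normal(5)       # ideal parameter vector
    a_hat = rng.standard_normal(5)   # its estimate
    a_til = a - a_hat                # estimation error a_tilde = a - a_hat
    lhs = a_til @ a_hat              # a_tilde^T a_hat
    rhs = np.linalg.norm(a_til) * np.linalg.norm(a) - np.linalg.norm(a_til) ** 2
    assert lhs <= rhs + 1e-12        # tolerance for floating-point rounding
print("inequality holds on all samples")
```

This is the standard step that turns the leakage term of the update law into a negative quadratic in ‖ã‖ plus a bounded remainder.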
Furthermore, this implies that e_i is bounded and converges to a neighborhood of the origin, and that all signals in the system are bounded.
4.2.3 Simulation study
To validate the effectiveness of the proposed scheme, we implement an example and assume that the large-scale system is composed of the following two subsystems:
[Figure: output tracking of subsystem 2, showing the desired trajectory x_d21 and the output x_21.]