The linearized plant P = [A_p, B_p, C_p] is described by

ẋ_p = A_p x_p + B_p u,    y = C_p x_p,    C_p = [I_{2×2}  0_{2×2}]        (30.201)–(30.203)
The system poles are s = ±14.1050 and s = ±4.5299. Eigenvector analysis shows that the fast instability at s = 14.1050 is primarily associated with the upper (shorter) link, while the slower instability at s = 4.5299 is primarily associated with the lower (longer) link. The system does not possess any natural integrators (i.e., no zero eigenvalues) and, as expected, the singular values σ_i[P(jω)] are flat at low frequencies (see Figure 30.9).
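The pole locations and the low-frequency behavior of σ_i[P(jω)] can be checked numerically. The following is a minimal sketch, assuming the matrices A_p, B_p, and C_p are available as NumPy arrays of the dimensions given above; the function and variable names are illustrative, not taken from the original text.

```python
import numpy as np

def analyze_plant(Ap, Bp, Cp, freqs=np.logspace(-2, 3, 200)):
    """Open-loop poles, eigenvectors, and singular values of P(jw) = Cp (jwI - Ap)^-1 Bp."""
    # Poles and eigenvectors of the linearized dynamics
    poles, eigvecs = np.linalg.eig(Ap)

    # Singular values of the frequency response at each frequency point
    n = Ap.shape[0]
    sv = np.array([
        np.linalg.svd(Cp @ np.linalg.inv(1j * w * np.eye(n) - Ap) @ Bp, compute_uv=False)
        for w in freqs
    ])
    return poles, eigvecs, freqs, sv
```

Plotting each column of the returned singular-value array against frequency on a log-log scale reproduces a plot of the type shown in Figure 30.9.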
Closed Loop Objectives
A controller to be implemented within a negative feedback loop is sought. The closed loop system should exhibit the following properties: (1) closed loop stability, (2) zero steady state error to step reference commands, (3) good low frequency reference command following (step commands followed with little overshoot within 3 s), (4) good low frequency disturbance attenuation, (5) good high frequency noise attenuation, and (6) good stability robustness margins at the plant output.
Each step of the control system design process is now described. A central idea is the formation of a so-called design plant P_d from the original plant P. The design plant P_d is what is submitted to our H2-LQG/LTR design machinery.
Step 1: Augment Plant P with Integrators to Get Design Plant P_d = [A, B, C]
In order to guarantee zero steady-state error to step reference commands, we begin by augmenting the plant P = [A_p, B_p, C_p] with integrators, one in each control channel, to form the design plant P_d = [A, B, C]; i.e., P_d = P(I_{2×2}/s). This is done as follows:
A = [ 0_{2×2}  0_{2×4} ;  B_p  A_p ],    B = [ I_{2×2} ;  0_{4×2} ],    C = [ 0_{2×2}  C_p ]        (30.204)

FIGURE 30.8 Two degree-of-freedom PUMA 560 robotic manipulator.
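As a concrete illustration of Step 1, the following minimal sketch builds the augmented design plant matrices, mirroring Equation 30.204. It assumes A_p (4×4), B_p (4×2), and C_p (2×4) are available as NumPy arrays; the function name and array layout are illustrative only.

```python
import numpy as np

def augment_with_integrators(Ap, Bp, Cp):
    """Form the design plant P_d = P * (I/s): one integrator in each control channel.

    The augmented state is [integrator states; plant states], so that
    A = [[0, 0], [Bp, Ap]], B = [[I], [0]], C = [0, Cp].
    """
    n, m = Ap.shape[0], Bp.shape[1]          # n = 4 plant states, m = 2 control channels
    A = np.block([[np.zeros((m, m)), np.zeros((m, n))],
                  [Bp,               Ap]])
    B = np.vstack([np.eye(m), np.zeros((n, m))])
    C = np.hstack([np.zeros((Cp.shape[0], m)), Cp])
    return A, B, C
```

With this ordering of the augmented state, the two integrators sit at the plant input, which is exactly the factorization P_d = P(I_{2×2}/s).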
Adaptive and Nonlinear Control Design

Maruthi R. Akella
The University of Texas at Austin
31.1 Introduction
31.2 Lyapunov Theory for Time-Invariant Systems
31.3 Lyapunov Theory for Time-Varying Systems
31.4 Adaptive Control Theory
Regulation and Tracking Problems • Certainty Equivalence Principle • Direct and Indirect Adaptive Control • Model Reference Adaptive Control (MRAC) • Self-Tuning Controller (STC)
31.5 Nonlinear Adaptive Control Systems
31.6 Spacecraft Adaptive Attitude Regulation Example
31.7 Output Feedback Adaptive Control
31.8 Adaptive Observers and Output Feedback Control
31.9 Concluding Remarks
31.1 Introduction
The most important challenge for modern control theory is that it should deliver acceptable performance while dealing with poor models, high nonlinearities, and low-cost sensors under a large number of operating conditions. The difficulties encountered are not peculiar to any single class of systems, and they appear in virtually every industrial application. Invariably, these systems contain such a large amount of model and parameter uncertainty that “fixed” controllers can no longer meet the stability and performance requirements. Any reasonable solution for such problems must be a suitable amalgamation of nonlinear control theory, adaptive elements, and information processing. Such are the factors behind the birth and evolution of the field of adaptive control theory, strongly motivated by several practical applications such as chemical process control and the design of autopilots for high-performance aircraft, which operate with proven stability over a wide variety of speeds and altitudes.
A commonly accepted definition of an adaptive system is that it is any physical system that has been designed from an adaptive standpoint! [1] All existing stability and convergence results in the field of adaptive control theory hinge on the crucial assumption that the unknown parameters occur linearly within a plant containing known nonlinearities. Conceptually, the overall process treats the parameter estimates themselves as state variables, thus enlarging the dimension of the state space of the original system. By nature, adaptive control solutions for both linear and nonlinear dynamical systems lead to nonlinear time-varying formulations wherein the estimates of the unknown parameters are updated using input–output data. A parameter adaptation mechanism (typically nonlinear) is used to update the parameters within the control law. Given the nonlinearity introduced by adaptive feedback, there is a need to ensure that closed-loop stability is preserved. It is thus an unmistakable fact that the fields of adaptive control and nonlinear system stability are intrinsically related to one another, and any new insights gained in one are likely to benefit the other.
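To make the adaptation mechanism concrete, the following is a minimal sketch of a scalar model-reference adaptive controller for a first-order plant with one unknown parameter. It is an illustrative example only, not the design developed later in this chapter; all names and numerical values are assumptions.

```python
import numpy as np

# Scalar MRAC sketch: plant  x' = a_p*x + u  with a_p unknown to the controller,
# reference model  x_m' = a_m*x_m + b_m*r  with a_m < 0.
a_p = 2.0                    # true (unknown) plant parameter
a_m, b_m = -3.0, 3.0         # stable reference model
gamma = 5.0                  # adaptation gain
dt, T = 1e-3, 10.0

x = x_m = 0.0
theta_x, theta_r = 0.0, 0.0  # parameter estimates, treated as extra state variables
for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if t < 5.0 else -1.0           # square-wave reference command
    u = theta_x * x + theta_r * r          # certainty-equivalence control law
    e = x - x_m                            # tracking error
    # Lyapunov-based (gradient) adaptation laws
    theta_x += dt * (-gamma * e * x)
    theta_r += dt * (-gamma * e * r)
    # Euler integration of plant and reference model
    x   += dt * (a_p * x + u)
    x_m += dt * (a_m * x_m + b_m * r)

# Tracking error converges; the estimates need not reach the ideal values
# (a_m - a_p and b_m) unless the reference is persistently exciting.
print("tracking error:", e, "estimates:", theta_x, theta_r)
```

The estimates theta_x and theta_r are updated by the input–output data alone, which is precisely the sense in which the parameter estimates become additional states of a nonlinear time-varying closed loop.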
Neural Networks and Fuzzy Systems

Bogdan M. Wilamowski
University of Wyoming
32.1 Neural Networks and Fuzzy Systems
32.2 Neuron Cell
32.3 Feedforward Neural Networks
32.4 Learning Algorithms for Neural Networks
Hebbian Learning Rule • Correlation Learning Rule • Instar Learning Rule • Winner Takes All (WTA) • Outstar Learning Rule • Widrow–Hoff LMS Learning Rule • Linear Regression • Delta Learning Rule • Error Backpropagation Learning
32.5 Special Feedforward Networks
Functional Link Network • Feedforward Version of the Counterpropagation Network • WTA Architecture • Cascade Correlation Architecture • Radial Basis Function Networks
32.6 Recurrent Neural Networks
Hopfield Network • Autoassociative Memory • Bidirectional Associative Memories (BAM)
32.7 Fuzzy Systems
Fuzzification • Rule Evaluation • Defuzzification • Design Example
32.8 Genetic Algorithms
Coding and Initialization • Selection and Reproduction • Reproduction • Mutation
32.1 Neural Networks and Fuzzy Systems
New and better electronic devices have inspired researchers to build intelligent machines operating in a fashion similar to the human nervous system. Fascination with this goal started when McCulloch and Pitts (1943) developed their model of an elementary computing neuron and when Hebb (1949) introduced his learning rules. A decade later Rosenblatt (1958) introduced the perceptron concept. In the early 1960s Widrow and Hoff (1960, 1962) developed intelligent systems such as ADALINE and MADALINE. Nilsson (1965) in his book Learning Machines summarized many developments of that time. The publication of the Minsky and Papert (1969) book, with some discouraging results, stopped for some time the fascination with artificial neural networks, and achievements in the mathematical foundation of the backpropagation algorithm by Werbos (1974) went unnoticed. The current rapid growth in the area of neural networks started with the Hopfield (1982, 1984) recurrent network, Kohonen (1982) unsupervised training algorithms, and a description of the backpropagation algorithm by Rumelhart et al. (1986).