Specialized gripper design is an active area of research, where quick retooling enables the robot during the production cycle. Automated palletizing operations may require additional degrees of freedom with more sophisticated drive mechanisms, controllers, and expert programming features [12]. In GE's Electrical Distribution and Control plant (Morristown, TN), five CG-120 gantry robots (C&D Robotics) palletize a range of product sizes (cartons range from 8 × 8 in. to 12 × 40 in.). Since 1989, GE's robots have provided versatility over manual and conventional palletizing and eliminated injuries caused by manual lifting [13].
Spray Coating. This is the process of applying paint or a coating agent in thin layers to an object, resulting in a smooth finish. Industrial robots are suitable for such applications, where a human worker is in constant exposure to hazardous fumes and mist which can cause illness and fire. In addition, industrial robots provide a higher level of consistency than the human operator. Continuous-path control is required to emulate the motions of a human worker, with flexible multiple programming features for quick changeovers. Hydraulic drives are recommended to minimize electrical spark hazards. Chrysler Corp. has found an alternative process to fill the seams on its new LH vehicles to eliminate warping and inconsistent filling. In 1992, a four-robot station (Nachi Robotic Systems Inc.) at Chrysler's completely retooled plant (Ontario, Canada) successfully replaced the manual filling of the seams with silicon-bronze wire [14].
Assembly. Many products designed for human assembly cannot be assembled automatically by industrial robots. The integration of product design and assembly design belongs to the concept of design for manufacturability [15]. More recent research in design for assembly has been completed at the University of Cincinnati [16]. Design for manufacturability results in the design of factories for robots. Fewer parts, complex molding, and subassemblies which allow a hierarchical approach to assembly have led to robotic applications. For example, the IBM Proprinter, which was designed for automatic assembly, uses 30 parts with mating capabilities (screwless) to assemble and test in less than 5 min. For part-mating applications, such as inserting a semiconductor chip into a circuit board (peg-in-hole), remote center compliance (RCC) devices have proved to be an excellent solution. More recently, Reichle (Wetzikon, Switzerland), a midsize manufacturer of telecommunications switching equipment, needed a system to automate the labor-intensive assembly of electronic connectors. Using hardware and software from Adept Technology Inc. (San Jose, CA), three AdeptOne robots reduce personpower requirements from 10 to 2. In addition, the system provides speed and a high degree of robotic accuracy [17].
Inspection and Measurement. With a growing interest in product quality, the focus has been on "zero defects." However, the human inspection system has somehow failed to achieve its objectives. Robot applications of vision systems have provided services in part location, completeness and correctness of assembly products, and collision detection during navigation. Current vision systems, typically two-dimensional systems, compare extracted information from objects to previously trained patterns for achieving their goals. Co-ordinate measuring machines (CMMs) are probing machining centers used for measuring various part features such as concentricity, perpendicularity, flatness, and size in three-dimensional rectilinear or polar co-ordinate systems. As an integrated part of a flexible manufacturing system, CMMs have reduced inspection time and cost considerably when applied to complex part measurement.

Machine vision applications require the ability to control both position and appearance in order to become a productive component of an automated system. This may require a three-dimensional vision capability, which is an active research area [18]. At Loranger Manufacturing (Warren, PA), 100% inspection of the rim of an ignition part is required for completeness. Using back lighting and with the camera mounted in line, each rim is viewed using pixel connectivity. When a break in pixels is detected, an automatic reject arm takes the part off the line [19].

Other processing applications for robot use include machining (grinding, deburring, drilling, and wire brushing) and water-jet cutting operations. These operations employ powerful spindles attached to the robot end effector, rotating against a stationary piece. For example, Hydro-Abrasive Machining (Los Angeles, CA) uses two gantry robots with abrasive water-jet machining heads. They cut and machine anything from thin sheet metal to composites several inches thick, with tolerances of 0.005 in. for small parts to 0.01 in. for larger parts [19]. Flexible manufacturing systems combined with robotic assembly and inspection, on the one hand, and intelligent robots with improved functionality and adaptability, on the other, will initiate a structural change in the manufacturing industry for improved productivity for years to come.
5.3 ROBOT CHARACTERISTICS
In this section an industrial robot is considered to be an open kinematic chain of rigid bodies, called links, interconnected by joints with actuators to drive them. A robot can also be viewed as a set of integrated subsystems [10]:

1. Manipulator: the mechanical structure that performs the actual work of the robot, consisting of links and joints with actuators.
2. Feedback devices: transducers that sense the positions of the various linkages and/or joints and transmit this information to the controller.
3. Controller: computer used to generate signals for the drive system so as to reduce response error in positioning and applying force during robot assignments.
4. Power source: electric, pneumatic, and hydraulic power systems used to provide and regulate the energy needed for the manipulator's actuators.

The manipulator configuration is an important consideration in the selection of a robot. It is based on the kinematic structure of the various joints and links and their relationships with each other. There are six basic motions or degrees of freedom to arbitrarily position and orient an object in three-dimensional space (three arm and body motions and three wrist movements). The first three links, called the major links, carry the gross manipulation tasks (positioning). Examples are arc welding, spray painting, and water-jet cutting applications. The last three links, the minor links, carry the fine manipulation tasks (force/tactile). Robots with more than six axes of motion are called redundant degree-of-freedom robots. The redundant axes are used for greater flexibility, such as obstacle avoidance in the workplace. Examples are parts assembly and machining applications. Typical joints are revolute (R) joints, which provide rotational motion about an axis, and prismatic (P) joints, which provide sliding (linear) motion along an axis. Using this notation, a robot with three revolute joints would be abbreviated as RRR, while one with two revolute joints followed by one prismatic joint would be denoted RRP.
There are five major mechanical configurations commonly used for robots: cartesian, cylindrical, spherical, articulated, and the selective compliance articulated robot for assembly (SCARA). Workplace coverage, particular reach, and collision avoidance are important considerations in the selection of a robot for an application. Table 2 provides a comparative analysis of the most commonly used robot configurations along with their percent of use. Details for each configuration are documented by Ty and Tien [20]. Figure 2 shows the arm geometries for the most commonly used robot configurations: (a) cartesian (PPP), (b) cylindrical (RPP), (c) articulated (RRR), (d) spherical (RRP), and (e) SCARA.

Table 2 Comparisons of Robot Configurations (work envelope, speed of control, and typical applications such as machine loading for each configuration). Source: V. D. Hunt, Robotics Sourcebook. New York: Elsevier, 1988.
Motion trajectory control of the end effector applies to the tasks in the first category, gross manipulation, as defined by Yoshikawa. Robots, in general, use the first three axes for gross manipulation (position control) while the remaining axes orient the tool during the fine manipulation (force or tactile control). The dynamic equations of an industrial robot are a set of highly nonlinear differential equations. For an end effector to move in a particular trajectory at a particular velocity, a complex set of torque (force) functions must be applied by the joint actuators. Instantaneous feedback information on position, velocity, acceleration, and other physical variables can greatly enhance the performance of the robot.

In most systems, conventional single-loop controllers track the tasks, which are defined in terms of a joint space reference trajectory. In practice, the tracking error is compensated through an iterative process which adjusts the reference input so that the actual response (Y) of the manipulator is close to the desired trajectory (Yd). When following a planned trajectory, control at time t will be more accurate if the controller can account for the end effector's position at an earlier time. Figure 3 represents the basic block diagram of a robot trajectory system interacting with its environment.

With increasing demands for faster, more accurate, and more reliable robots, the field of robotics has faced the challenges of reducing the required online computational power, calibration time, and engineering cost when developing new robot controllers. If the robot is to be controlled in real time, the algorithms used must be efficient and robust. Otherwise, we will have to compromise the robot control strategies, such as reducing the frequency content of the velocity profile at which the manipulator moves.
The robot arm position control is a complex kinematic and dynamic problem and has received researchers' attention for quite some time. During the last several years, most research on robot control has resulted in effective but computationally expensive algorithms. A number of approaches have been proposed to develop controllers that are robust and adaptive to the nonlinearities and structural uncertainties. However, they are also computationally very difficult and expensive algorithms to solve. As of this day, most robot controllers use joint controllers that are based on traditional linear controllers and are ineffective in dealing with nonlinear terms such as friction and backlash.

One popular robot control scheme is called "computed-torque control" or "inverse-dynamics control." Most robot control schemes found in robust, adaptive, or learning control strategies can be considered as special cases of computed-torque control. The computed-torque-like control technique involves the decomposition of the control design problem into two parts [23]:

1. Primary controller: a feedforward (inner-loop) design to track the desired trajectory under ideal conditions.
2. Secondary controller: a feedback (outer-loop) design to compensate for undesirable deviations (disturbances) of the motion from the desired trajectory based on a linearized model.
Figure 3 Basic control block for a robot trajectory system
The primary controller compensates for the nonlinear dynamic effects, and attempts to cancel the nonlinear terms in the dynamic model. Since the parameters in the dynamic model of the robot are not usually exact, undesired motion errors are expected. These errors can be corrected by the secondary controller. Figure 4 represents the decomposition of the robot controller showing the primary and secondary controllers.
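The following sketch illustrates this decomposition in code. It is a minimal illustration, not the chapter's implementation: the model terms M, C, and G are placeholder functions standing in for an identified dynamic model (Sec. 5.7), and the gain matrices Kp and Kv are assumed PD gains for the secondary (feedback) loop.

```python
import numpy as np

# Placeholder model terms for an n-joint arm; in practice these come from
# the robot's identified dynamic model (see Sec. 5.7).
def M(q):             # inertia (mass) matrix, n x n
    return np.eye(len(q))

def C(q, qd):         # Coriolis/centripetal torque vector
    return np.zeros(len(q))

def G(q):             # gravity torque vector
    return np.zeros(len(q))

def computed_torque(q, qd, q_des, qd_des, qdd_des, Kp, Kv):
    """Primary (feedforward) plus secondary (feedback) computed-torque law."""
    e  = q_des - q                      # position tracking error
    ed = qd_des - qd                    # velocity tracking error
    a = qdd_des + Kv @ ed + Kp @ e      # outer loop corrects the desired acceleration
    return M(q) @ a + C(q, qd) + G(q)   # inner loop cancels the nonlinear dynamics

# example: a 3-joint arm regulating toward a small set point
n = 3
tau = computed_torque(np.zeros(n), np.zeros(n),
                      np.ones(n) * 0.1, np.zeros(n), np.zeros(n),
                      Kp=np.eye(n) * 100.0, Kv=np.eye(n) * 20.0)
print(tau)
```

If the model terms were exact, the feedback torque would only have to correct for disturbances; model error shifts more of the burden onto the secondary controller.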
It is well known that humans perform control functions much better than machinelike robots. In order to control voluntary movements, the central nervous system must determine the desired trajectory in visual co-ordinates, transform its co-ordinates to body co-ordinates, and finally generate the motor commands [24]. The human information processing device (brain) has been the motivation for many researchers in the design of intelligent computers, often referred to as neural computers.

Psaltis et al. [25] describe the neural computer as a large interconnected mass of simple processing elements (artificial neurons). The functionality of this mass, called the artificial neural network (ANN), is determined by modifying the strengths of the connections during the learning phase. This basic generalization of the morphological and computational features of the human brain has been the abstract model used in the design of neural computers.

Researchers interested in neural computers have been successful in computationally intensive areas such as pattern recognition and image interpretation problems. These problems are generally static mappings of input vectors into corresponding output classes using a feedforward neural network. The feedforward neural network is specialized for static mapping problems, whereas in the robot control problem, nonlinear dynamic properties need to be dealt with and a different type of neural network structure must be used. Recurrent neural networks have the dynamic properties, such as feedback architecture, needed for the appropriate design of such robot controllers.
5.5 ARTIFICIAL NEURAL NETWORKS

Artificial neural networks are highly parallel, adaptive, and fault-tolerant dynamic systems, modeled like their biological counterparts. The phrases "neural networks" or "neural nets" are also used interchangeably in the literature, which refer to neurophysiology, the study of how the brain and its nervous system work. Artificial neural networks are specified by the following definitions [26]:

1. Topology: this describes the networked architecture of a set of neurons. The set of neurons is organized into layers, which are then classified as either feedforward networks or recurrent networks. In feedforward layers, each output in a layer is connected to each input in the next layer. In a recurrent ANN, each neuron can receive as its input a weighted output from other layers in the network, possibly including itself. Figure 5 illustrates these simple representations of the ANN topologies.
2. Neuron: a computational element that defines the characteristics of input/output relationships. A simple neuron is shown in Fig. 6, which sums N weighted inputs (called the activation) and passes the result through a nonlinear transfer function to determine the neuron output. Some functions that are often used to mimic biological neurons are the unit step function and the linear transfer function. A very common formula for determining a neuron's output is through the use of sigmoidal (squashing) functions:

y = f(a) = 1 / (1 + e^(-a))
Figure 4 Controller decomposition in primary and secondary controllers
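A minimal numeric sketch of such a neuron follows. The weights, bias, and inputs are illustrative values, not taken from the chapter; the point is only the weighted sum followed by the sigmoidal squashing function.

```python
import numpy as np

def neuron(x, w, b):
    """Sum N weighted inputs (the activation) and squash with a sigmoid."""
    a = np.dot(w, x) + b                # activation
    return 1.0 / (1.0 + np.exp(-a))     # sigmoidal output, bounded in (0, 1)

x = np.array([0.5, -1.0, 2.0])          # N = 3 inputs
w = np.array([0.8,  0.2, -0.4])         # connection strengths (weights)
print(neuron(x, w, b=0.1))
```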
McCulloch and Pitts proved that a synchronous network of neurons (M-P network), described above, is capable of performing simple logical tasks (computations) that are expected of a digital computer. In 1958, Rosenblatt introduced the "perceptron," in which he showed how an M-P network with adjustable weights can be trained to classify sets of patterns. His work was based on Hebb's model of adaptive learning rules in the human brain [29], which stated that the neuron's interconnecting weights change continuously as it learns [30].

In 1960, Bernard Widrow introduced ADALINE (ADAptive LINear Element), a single-layer perceptron, and later extended it to what is known as MADALINE, a multilayer ADALINE [31]. In MADALINE, Widrow introduced the steepest descent method to stimulate learning in the network. His variation of learning is referred to as the Widrow-Hoff rule or delta rule.

In 1969, Minsky and Papert [32] reported on the theoretical limitations of the single-layer M-P network, by showing the inability of the network to classify the exclusive-or (XOR) logical problem. They left the impression that neural network research is a farce and went on to establish the "artificial intelligence" laboratory at MIT. Hence, the research activity related to ANNs went to sleep until the early 1980s, when the work by Hopfield, an established physicist, on neural networks rekindled the enthusiasm for this field. Hopfield's autoassociative neural network (a form of recurrent neural network) solved the classic hard optimization problem (traveling salesman) [33].

Other contributors to the field, Steven Grossberg and Teuvo Kohonen, continued their research during the 1970s and early 1980s (referred to as the "quiet years" in the literature). During the "quiet years," Steven Grossberg [34,35] worked on the mathematical development necessary to overcome one of the limitations reported by Minsky and Papert [32]. Teuvo Kohonen [36] developed the unsupervised training method, the self-organizing map. Later, Bart Kosko [37] developed bidirectional associative memory (BAM) based on the works of Hopfield and Grossberg. Robert Hecht-Nielsen [38] pioneered the work on neurocomputing.

It was not until 1986 that the two-volume book edited by Rumelhart and McClelland, titled Parallel Distributed Processing (PDP), exploded the field of artificial neural networks [38]. In this book (PDP), a new training algorithm called the backpropagation method (BP), using the gradient search technique, was used to train a multilayer perceptron to learn the XOR mapping problem described by Minsky and Papert [39]. Since then, ANNs have been studied for both design procedures and training rules (supervised and unsupervised), and are current research topics. An excellent collection of theoretical and conceptual papers on neural networks can be found in books edited by Vemuri [30] and Lau [40]. Interested readers can also refer to a survey of neural network books by Chapnick [41], categorized by theory, hardware and software, and how-to books.
The multilayer feedforward networks, using the BP method, represent a versatile nonlinear map of a set of input vectors to a set of desired output vectors in the spatial context (space). During the learning process, an input vector is presented to the network and propagates forward from input layers to output layers to determine the output signal. The output signal vector is then compared with the desired output vector, resulting in an error signal. This error signal is backpropagated through the network in order to adjust the network's connecting strengths (weights). Learning stops when the error vector has reached an acceptable minimum [26]. An example of a feedforward network consisting of three layers is shown in Fig. 7.
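The sketch below shows this forward/backward cycle on the XOR problem mentioned earlier, using a tiny 2-2-1 sigmoid network trained by gradient descent. The network size, learning rate, and number of passes are arbitrary choices for illustration; like any BP run, it can occasionally settle in a local minimum.

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda a: 1.0 / (1.0 + np.exp(-a))

# XOR training set: input vectors and desired outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 2)); b1 = np.zeros(2)   # input -> hidden weights
W2 = rng.normal(0, 1, (2, 1)); b2 = np.zeros(1)   # hidden -> output weights
lr = 0.5

for epoch in range(20000):
    # forward pass: propagate the input through the layers
    H = sig(X @ W1 + b1)                  # hidden-layer outputs
    O = sig(H @ W2 + b2)                  # network output
    E = Y - O                             # error signal vector
    # backward pass: propagate the error and adjust the weights
    dO = E * O * (1 - O)                  # output-layer delta
    dH = (dO @ W2.T) * H * (1 - H)        # hidden-layer delta
    W2 += lr * H.T @ dO;  b2 += lr * dO.sum(axis=0)
    W1 += lr * X.T @ dH;  b1 += lr * dH.sum(axis=0)

print(np.round(sig(sig(X @ W1 + b1) @ W2 + b2), 2))   # typically close to [0, 1, 1, 0]
```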
Many studies have been undertaken in order to apply both the flexibility and the learning ability of backpropagation to robot control on an experimental scale [42-44]. In a recent study, an ANN utilizing an adaptive step-size algorithm, based on a random-search technique, improved the convergence speed of the BP method for solving the inverse kinematic problem for a two-link robot [45]. The robot control problem is a dynamic problem, where the BP method only provides a static mapping of the input vectors into output classes. Therefore, its benefits are limited. In addition, like any other numerical method, this learning method has limitations (slow convergence rate, local minima). Attempts to improve the learning rate of BP have resulted in many novel approaches [46,47]. It is necessary to note that the most important behavior of feedforward networks using the BP method is their classification ability, or generalization to fresh data, rather than the temporal utilization of past experiences.

A recurrent network is a multilayer network in which the activity of the neurons flows both from the input layer to the output layer (feedforward) and also from the output layer back to the input layer (feedback) in the course of learning [38]. In a recurrent network each activity of the training set (input
Figure 8 Artificial neural network (recurrent)
Figure 9 The direct and inverse kinematic problems
Trang 7end-eector position and orientation for a given set of
joint displacements is referred to as the direct kinematic problem. Thus, for a given joint co-ordinate vector q and the global co-ordinate space x, it is to solve

x = f(q)    (1)

where f is a nonlinear, continuous, and differentiable function. This equation has a unique solution. On the other hand, given the end-effector position and orientation, the inverse kinematic problem calculates the corresponding joint variables to drive the joint servo controllers by solving

q = f^{-1}(x)    (2)

The solution to this equation, also called the arm solution, is not unique. Since trajectory tasks are usually stated in terms of the reference co-ordinate frame, the inverse kinematics problem is used more frequently.
5.6.1 Homogeneous Transformation Matrix
Before further analyzing the robot kinematic problem, a brief review of matrix transformations is needed. Figure 10 illustrates a single vector defined in the {i} co-ordinate frame, P_i = (x_0, y_0, z_0). The task is to transform the vector defined in the {i} co-ordinate frame to a vector in the {i-1} co-ordinate frame, P_{i-1} = (x, y, z). Simply, this transformation is broken up into a rotational part and a translational part:

P_{i-1} = R_i P_i + d_i    (5)

Since rotation is a linear transformation, the rotation between the two co-ordinate frames is given by P_{i-1} = R_i P_i where, for example, a rotation through an angle \theta about the z-axis is

R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
The following general statements can be made:

1. A co-ordinate transformation R represents a rotation of a co-ordinate frame to a new position.
2. The columns of R give the direction cosines of the new frame axes expressed in the old frame.
3. It can be extended to a product of more than two transformation matrices.
To complete the transformation in Fig. 10, the translation between frames {i} and {i-1} still needs to take place:

d_i = \begin{bmatrix} d_x \\ d_y \\ d_z \end{bmatrix}
Figure 10 Transformation between two co-ordinate frames
In component form,

\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R_i \begin{bmatrix} x_0 \\ y_0 \\ z_0 \end{bmatrix} + \begin{bmatrix} d_x \\ d_y \\ d_z \end{bmatrix}

or, collecting the rotation and translation into a single matrix,

P_{i-1} = \begin{bmatrix} R_i & d_i \end{bmatrix} \begin{bmatrix} P_i \\ 1 \end{bmatrix}    (9)
Equation (9), where [R_i  d_i] is a (3 × 4) matrix and [P_i^T  1]^T is a (4 × 1) vector, can only be used to transform the components of P from frame {i} to {i-1}. Due to the singularity of the matrix above, the inverse of the transformation cannot be achieved. To incorporate the inverse transformation in Eq. (9), the concept of homogeneous co-ordinate representation (in which an n-component position vector is represented by an (n + 1)-component vector) replaces the (3 × 4) transformation matrix with a (4 × 4) transformation matrix by simply appending a final (1 × 4) row, defined as [0 0 0 1], to [R_i  d_i]. Correspondingly, the P_{i-1} vector is replaced by the (4 × 1) vector [x y z 1]^T and P_i by [x_0 y_0 z_0 1]^T:

\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} R_i & d_i \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_0 \\ y_0 \\ z_0 \\ 1 \end{bmatrix}    (10)
It can be seen that the transformation equation (5) is equivalent to the matrix equation (10). The (4 × 4) transformation matrix, denoted H_i, contains all the information about the final frame, expressed in terms of the original frame:

H_i = \begin{bmatrix} R_i & d_i \\ 0 & 1 \end{bmatrix}    (11)

Using the fact that R_i is orthogonal, it is easy to show that the inverse transformation is

H_i^{-1} = \begin{bmatrix} R_i^T & -R_i^T d_i \\ 0 & 1 \end{bmatrix}    (12)

The matrix H_i has homogenized the representation of translations and rotations of a co-ordinate frame. Therefore, matrix H_i is called the homogeneous transformation matrix. The upper left (3 × 3) submatrix represents the rotation matrix; the upper right (3 × 1) submatrix represents the position vector of the origin of the rotated co-ordinate system with respect to the reference system; the lower left (1 × 3) submatrix represents the perspective transformation for visual sensing with a camera; and the fourth diagonal element is the global scaling factor. In the robot manipulator, the perspective transformation is always a zero vector and the scale factor is 1. The frame transformation is now given by

P_{i-1} = H_i P_i

This transformation, represented by the matrix H_i, is obtained from simpler transformations representing the three basic translations along (three entries of d_i), and three rotations about (three independent entries of R_i), the frame axes x, y, and z. They form the six degrees of freedom associated with the configuration of P. These fundamental transforms, expressed in 4 × 4 matrix notation, are shown as

Trans(d_x, d_y, d_z) = \begin{bmatrix} 1 & 0 & 0 & d_x \\ 0 & 1 & 0 & d_y \\ 0 & 0 & 1 & d_z \\ 0 & 0 & 0 & 1 \end{bmatrix}

Rot(z, \theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

with similar expressions for Rot(x, \theta) and Rot(y, \theta).
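As a brief numeric check of Eqs. (11) and (12), the sketch below builds a composite homogeneous transformation from a translation and a rotation about z and verifies the closed-form inverse. The numbers are arbitrary illustrative values.

```python
import numpy as np

def rot_z(theta):
    """4x4 homogeneous rotation about the z-axis, Rot(z, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1.0]])

def trans(dx, dy, dz):
    """4x4 homogeneous translation, Trans(dx, dy, dz)."""
    H = np.eye(4)
    H[:3, 3] = [dx, dy, dz]
    return H

def inverse(H):
    """Inverse via Eq. (12): H^-1 = [[R^T, -R^T d], [0, 1]]."""
    R, d = H[:3, :3], H[:3, 3]
    Hi = np.eye(4)
    Hi[:3, :3] = R.T
    Hi[:3, 3] = -R.T @ d
    return Hi

H = trans(1.0, 2.0, 0.5) @ rot_z(np.pi / 4)    # composite frame transformation
P_i = np.array([0.3, 0.0, 0.1, 1.0])           # point in homogeneous co-ordinates
print(H @ P_i)                                 # the same point in the previous frame
print(np.allclose(inverse(H) @ H, np.eye(4)))  # True: Eq. (12) inverts Eq. (11)
```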
1. The z_{i-1}-axis lies along the axis of motion of the ith joint.
2. The x_i-axis is normal to the z_{i-1}-axis, pointing away from it.
3. The y_i-axis completes the right-hand-rule co-ordinate system.

This is illustrated in Fig. 11. Note that joint i joins link i-1 with link i. Frame i, which is the body frame of link i, has its z-axis located at joint i+1. If the joint is revolute, then the rotation is about the z-axis. If the joint is prismatic, the joint translation is along the z-axis.
The D-H representation depends on four geometrical parameters associated with each link to completely describe the position of successive link co-ordinates:

a_i      the shortest distance between z_i and z_{i-1}, along x_i
\alpha_i the twist angle between z_i and z_{i-1}, about x_i
d_i      the shortest distance between x_i and x_{i-1}, along z_{i-1}
\theta_i the angle between x_{i-1} and x_i, about z_{i-1}

For a revolute joint, \theta_i is the variable representing the joint displacement, where the adjacent links rotate with respect to each other about the joint axis. In prismatic joints, in which the adjacent links translate linearly with respect to each other along the joint axis, d_i is the joint displacement variable, while \theta_i is constant. In both cases, the parameters a_i and \alpha_i are constant, determined by the geometry of the link.

In general we denote the joint displacement by q_i, which is defined as

q_i = \theta_i  for a revolute joint
q_i = d_i       for a prismatic joint

Then, a (4 × 4) homogeneous transformation matrix can easily relate the ith co-ordinate frame to the (i-1)th co-ordinate frame by performing the following successive transformations:
1. Rotate about the z_{i-1}-axis by an angle \theta_i: Rot(z_{i-1}, \theta_i).
2. Translate along the z_{i-1}-axis a distance d_i: Trans(0, 0, d_i).
3. Translate along the x_i-axis a distance a_i: Trans(a_i, 0, 0).
4. Rotate about the x_i-axis by an angle \alpha_i: Rot(x_i, \alpha_i).

The operations above result in four basic homogeneous matrices. The product of these matrices yields a composite homogeneous transformation matrix, ^iA_{i-1}. The ^iA_{i-1} matrix is known as the D-H transformation matrix for adjacent co-ordinate frames {i} and {i-1}. Thus,

^iA_{i-1} = Trans(0, 0, d_i)\, Rot(z_{i-1}, \theta_i)\, Trans(a_i, 0, 0)\, Rot(x_i, \alpha_i)

Figure 11 Denavit-Hartenberg frame assignment
Writing out the four basic matrices and multiplying them together gives

^iA_{i-1} = \begin{bmatrix} \cos\theta_i & -\cos\alpha_i\sin\theta_i & \sin\alpha_i\sin\theta_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\alpha_i\cos\theta_i & -\sin\alpha_i\cos\theta_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}    (15)

Using the ^iA_{i-1} matrix in Fig. 10, the vector P_i, expressed in homogeneous co-ordinates with respect to co-ordinate system {i}, relates to the vector P_{i-1} in co-ordinate system {i-1} by

P_{i-1} = {}^iA_{i-1} P_i    (16)

where P_{i-1} = [x, y, z, 1]^T and P_i = [x_0, y_0, z_0, 1]^T.
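The sketch below builds the single-link D-H matrix of Eq. (15) and chains several of them, anticipating the arm matrix of the next section. The D-H parameter values in the example are illustrative only, not taken from any particular robot's data sheet.

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Single-link D-H transformation iA_{i-1} of Eq. (15)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -ca * st,  sa * st, a * ct],
                     [st,  ca * ct, -sa * ct, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def arm_matrix(dh_rows):
    """Chain the link matrices: nT0 = 1A0 2A1 ... nA_{n-1}."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        T = T @ dh_matrix(theta, d, a, alpha)
    return T

# illustrative D-H rows: (theta, d, a, alpha) for each joint
rows = [(0.3, 0.4, 0.425, np.pi), (0.5, 0.0, 0.375, 0.0)]
print(arm_matrix(rows).round(3))
```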
5.6.3 Manipulator Arm Kinematic Equations
The transformation matrix of Eq. (16) relates points defined in frame {i} to frame {i-1}. For a robot manipulator with six links, the position of the end effector (last link) with respect to the base is determined by successively multiplying together the single (D-H) transformation matrices that relate frame {6} to frame {0}. In general,

^nT_0 = {}^1A_0(q_1)\,{}^2A_1(q_2)\,{}^3A_2(q_3)\cdots{}^nA_{n-1}(q_n)    (18)

The final transformation matrix ^nT_0, also called the arm matrix, defines the final configuration of any end effector with respect to the inertial frame {0}, as depicted in Fig. 12. The tool origin represents any appropriate point associated with the tool frame (or the transported object). The origin (O_t) frame can be taken either at the wrist or at the tool tip, or placed symmetrically between the fingers of the end effector (gripper). The ^nT_0 matrix may be written as

^nT_0 = \begin{bmatrix} n & s & a & d \\ 0 & 0 & 0 & 1 \end{bmatrix}    (19)

where three mutually perpendicular unit vectors, as shown in Fig. 12, represent the tool frame in a cartesian co-ordinate system. In the above equation:

n  normal unit vector, normal to the fingers of the robot arm, following the right-hand rule
s  sliding unit vector, pointing in the direction of the sideways motion of the fingers (open and close)
a  approach unit vector, pointing in the direction from which the tool approaches the workpiece
d  position vector of the tool frame origin with respect to the base frame
^4A_3 = \begin{bmatrix} \cos\theta_4 & -\sin\theta_4 & 0 & 0 \\ \sin\theta_4 & \cos\theta_4 & 0 & 0 \\ 0 & 0 & 1 & d_4 \\ 0 & 0 & 0 & 1 \end{bmatrix}

In Eq. (22) the rotation matrix ^{tool}R_{base}, the (3 × 3) upper-left submatrix, expresses the orientation of the tool frame {4} relative to the base frame {0} as

^{tool}R_{base} = [n\ s\ a] = \begin{bmatrix} n_x & s_x & a_x \\ n_y & s_y & a_y \\ n_z & s_z & a_z \end{bmatrix} = \begin{bmatrix} \cos(\theta_1+\theta_2-\theta_4) & \sin(\theta_1+\theta_2-\theta_4) & 0 \\ \sin(\theta_1+\theta_2-\theta_4) & -\cos(\theta_1+\theta_2-\theta_4) & 0 \\ 0 & 0 & -1 \end{bmatrix}    (23)

Note that the approach vector a = [0  0  -1]^T is fixed and independent of the joint variables. This is one of the characteristics of the AdeptOne robot, and indeed of all SCARA robots, which are designed to manipulate objects directly from above. Industrial applications such as circuit board assembly are the common area of use for this robot.

The vector d_i, the right column vector in Eq. (22), represents the position of the tool frame {4} relative to the base frame {0} as

d_i = \begin{bmatrix} d_x \\ d_y \\ d_z \end{bmatrix} = \begin{bmatrix} a_1\cos\theta_1 + a_2\cos(\theta_1+\theta_2) \\ a_1\sin\theta_1 + a_2\sin(\theta_1+\theta_2) \\ d_1 - d_3 - d_4 \end{bmatrix}    (24)
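A compact forward-kinematics sketch for this arm follows. It assumes the geometry implied by Eqs. (23) and (24), i.e., tool roll angle \phi = \theta_1 + \theta_2 - \theta_4 and the approach vector pointing straight down; the link values in the example are illustrative, not AdeptOne specifications.

```python
import numpy as np

def scara_fk(q, a1, a2, d1, d4):
    """Tool position (dx, dy, dz) and roll angle phi for a four-axis SCARA,
    following Eqs. (23)-(24)."""
    t1, t2, q3, t4 = q
    dx = a1 * np.cos(t1) + a2 * np.cos(t1 + t2)
    dy = a1 * np.sin(t1) + a2 * np.sin(t1 + t2)
    dz = d1 - q3 - d4                 # vertical slide lowers the tool as q3 grows
    phi = t1 + t2 - t4                # gripper orientation about the vertical
    return np.array([dx, dy, dz]), phi

pos, phi = scara_fk([0.4, -0.7, 0.10, 0.2], a1=0.425, a2=0.375, d1=0.877, d4=0.200)
print(pos.round(4), round(phi, 4))
```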
5.6.4 The Inverse Kinematic Solution

For the purpose of driving and controlling a robot, it is necessary to solve Eq. (20) for the joint variables, since the actuator variables are the joint variables. This is the inverse (backward) kinematic problem associated with a robot manipulator: given the position and orientation of the end effector (X) in the base frame, find the joint displacement (\theta):

Figure 13 (a) A four-axis SCARA robot (AdeptOne). (b) The AdeptOne industrial robot.

\theta = f^{-1}(X)    (25)
The backward solution algorithm is generally more difficult than the forward solution. In three-dimensional space, six co-ordinates are needed to specify a rigid body (three position co-ordinates and three angles of rotation). Since the six equations generated are nonlinear trigonometric functions and are coupled, a simple and unique solution for q may not even exist. For a high number of degrees of freedom the inverse kinematic problem can result in very complicated solution algorithms [58].

Some simplification is possible by properly designing the robot geometry. For example, when the axes for the three revolute joints of a six degree-of-freedom robot coincide at the wrist of the end effector, it is possible to decouple the six equations in Eq. (15) into two sets of three simpler equations [61]. The first set of equations decides the position of the first three joint variables. Once the first three joint variables are determined, the last three joint variables are obtained such that the end effector has the correct orientation.
Recall the four-axis SCARA (AdeptOne) example whose forward kinematic solution is defined by Eq. (22). Suppose that the position and orientation of the final frame (tool frame) are given as in Eq. (19). To find the corresponding joint variables (\theta_1, \theta_2, d_3, \theta_4), we must solve the simultaneous set of nonlinear trigonometric equations obtained by equating the elements of Eq. (19) with those of Eq. (22). The solution for finding the joint variables \theta_1, \theta_2, d_3, \theta_4 in terms of the end-effector position and orientation for the SCARA manipulator is shown in Table 4.
This inverse kinematics solution is not unique, due to the multiplicity of q_2. The complete derivation of the solution for the four-axis SCARA (AdeptOne) robot manipulator can be found in textbooks by Spong and Vidyasagar [52] and Schilling [53]. In general, the inverse kinematic problem can be solved by various methods. The methods used are usually algebraic, geometrical, or iterative. The common approach is to use a closed-form solution to the inverse kinematic problem. However, these solutions are manipulator-dependent and still often too difficult to solve in closed form. In many cases (nonclosed form) the iterative method is required, as with kinematically redundant robots [58]. One popular method is to use the N-dimensional Newton-Raphson algorithm or its modified version to solve the nonlinear equations [62]. Fast iterative techniques for the inverse kinematic problems have been reported based on nonlinear least-squares minimization with accurate and rapid convergence [63]. Other techniques based on the geometrical relationship between the various links and joints have been reported, depending on the points of reference chosen. In general, most researchers resort to numerical methods to obtain the inverse solution.
Table 4 Inverse Kinematics of a Four-Axis SCARA Robot

Base joint:               q_1 = arctan2(d_y, d_x) - arctan2(a_2 sin q_2, a_1 + a_2 cos q_2)
Elbow joint:              q_2 = arctan2(± sqrt(1 - r^2), r), where r = (d_x^2 + d_y^2 - a_1^2 - a_2^2) / (2 a_1 a_2)
Vertical extension joint: q_3 = d_1 - d_4 - d_z
Tool roll joint:          q_4 = q_1 + q_2 - arctan2(n_y, n_x)
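The sketch below implements these closed-form expressions, assuming the standard arctan2(y, x) convention and the same illustrative link values used in the forward-kinematics example; the elbow argument selects the +/- branch responsible for the non-uniqueness noted above.

```python
import numpy as np

def scara_ik(dx, dy, dz, phi, a1, a2, d1, d4, elbow=+1):
    """Closed-form solution following Table 4 (elbow = +1 or -1 picks the branch)."""
    r = (dx**2 + dy**2 - a1**2 - a2**2) / (2.0 * a1 * a2)     # cos(q2)
    q2 = np.arctan2(elbow * np.sqrt(max(0.0, 1.0 - r**2)), r)
    q1 = np.arctan2(dy, dx) - np.arctan2(a2 * np.sin(q2), a1 + a2 * np.cos(q2))
    q3 = d1 - d4 - dz                                         # vertical extension
    q4 = q1 + q2 - phi                                        # tool roll, phi = atan2(ny, nx)
    return np.array([q1, q2, q3, q4])

# recovers the joint values of the forward-kinematics example (elbow-down branch)
q = scara_ik(0.7497, 0.0547, 0.577, -0.5,
             a1=0.425, a2=0.375, d1=0.877, d4=0.200, elbow=-1)
print(q.round(3))   # approximately [0.4, -0.7, 0.1, 0.2]
```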
The use of iterative techniques raises the problem of accurate real-time implementation of manipulator kinematic control. The need to compute the inverse Jacobian at several points, the importance of closeness of the initial solution to the exact solution (otherwise the algorithm diverges), and the accumulation of the linearization error reflect the computational complexity of these methods for online use.

Artificial neural network (ANN) theory has provided an alternative solution for solving the inverse kinematic problem. Artificial neural networks are highly parallel, adaptive, and fault-tolerant dynamic systems modeled like their biological counterparts [39]. An application of ANNs to this problem is to train the network with input data in the form of pairs of end-effector positions and orientations and the corresponding joint values. After the training is completed, the ANN can generalize and give good results (joint angles) at new data points (position and orientation).

Recently, ANNs have been augmented with an iterative procedure using the Newton-Raphson technique, resulting in a twofold increase in computational efficiency for the PUMA 560 robot [44]. An ANN-based scheme has also been proposed for computing manipulator inverse kinematics where no prior knowledge of the manipulator kinematics is required [64]. In the event that the physical structure of a robot is changed (or damaged) during an operation, an ANN architecture can supplement the existing robot controller to learn the new transformations quickly without repairing the robot physically [42]. In a recent study, an ANN utilizing an adaptive step-size algorithm, based on a random-search technique, improved the convergence speed for solving the inverse kinematic problem for a two-link robot [45].
5.6.5 Manipulator Motion Kinematic Equations
Previously, we have described the forward and inverse arm solutions relating joint positions and end-effector position and orientation. However, the manipulator is stationary at a specific configuration. Typically, a robot is in motion to perform tasks such as spray painting, arc welding, and sealing. Continuous-path motion control is required in these applications, where the tool must travel a specific path in a prescribed time (trajectory). One must then determine and control the velocity and acceleration of the end effector between points on the path.
The forward kinematic equation (20) relates the end-effector position to the joint displacements. When the end effector is in motion, an infinitesimal directional change in its position is determined by differentiating the kinematic equations (20) with respect to time; this yields

\dot X = J(q)\,\dot q    (27)

where J(q) is the (m × n) Jacobian matrix, n is the number of joints of the manipulator, and m is the dimensionality of the cartesian co-ordinates of the tool under consideration. The Jacobian is one of the most important quantities in the analysis and control of robot motion. It is used for smooth trajectory planning and execution, in the derivation of the dynamic equations, and in the transformation of forces applied by the end effector into the resultant torques generated at each joint. The generalized cartesian velocity vector \dot X is defined as

\dot X = \begin{bmatrix} v \\ \omega \end{bmatrix}

where v = (v_x, v_y, v_z) represents the linear velocity and \omega = (\omega_x, \omega_y, \omega_z) the angular velocity of the tool. Consider the four-axis SCARA robot whose kinematic description has been developed. The Jacobian of the SCARA robot is
J(q) = \begin{bmatrix} -a_1\sin\theta_1 - a_2\sin(\theta_1+\theta_2) & -a_2\sin(\theta_1+\theta_2) & 0 & 0 \\ a_1\cos\theta_1 + a_2\cos(\theta_1+\theta_2) & a_2\cos(\theta_1+\theta_2) & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & -1 \end{bmatrix}    (28)

The first three rows of J(q) correspond to linear tool displacement, while the last three correspond to angular tool displacement. Then a joint space trajectory q(t) corresponding to x(t) can be obtained by inverting the Jacobian along with the inverse kinematic solution:
\dot\theta = J(\theta)^{-1}\,\dot X    (29)

By differentiating Eq. (29), the desired joint accelerations are found as well:

\ddot\theta = \frac{d}{dt}\!\left[J(\theta)^{-1}\right]\dot X + J(\theta)^{-1}\,\ddot X    (30)
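The following sketch applies Eq. (29) to the SCARA arm. Because the full 6 × 4 Jacobian of Eq. (28) is not square, the sketch keeps only the four nontrivial rows (v_x, v_y, v_z, \omega_z), which is an assumption of this illustration rather than something stated in the chapter; the joint and velocity values are arbitrary.

```python
import numpy as np

def scara_jacobian(q, a1, a2):
    """Reduced 4x4 Jacobian from Eq. (28): rows (vx, vy, vz, wz), columns (q1..q4)."""
    t1, t2 = q[0], q[1]
    s1, c1 = np.sin(t1), np.cos(t1)
    s12, c12 = np.sin(t1 + t2), np.cos(t1 + t2)
    return np.array([[-a1 * s1 - a2 * s12, -a2 * s12,  0.0,  0.0],
                     [ a1 * c1 + a2 * c12,  a2 * c12,  0.0,  0.0],
                     [ 0.0,                 0.0,      -1.0,  0.0],
                     [ 1.0,                 1.0,       0.0, -1.0]])

q = np.array([0.4, -0.7, 0.10, 0.2])
xdot = np.array([0.05, 0.00, -0.02, 0.0])     # desired (vx, vy, vz, wz)
J = scara_jacobian(q, a1=0.425, a2=0.375)
qdot = np.linalg.solve(J, xdot)               # Eq. (29): joint rates from tool rates
print(qdot.round(4))
```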
The problem with solving for the joint space differentials (velocity and acceleration) using the Jacobian is that at certain points in joint space, the tool Jacobian may lose its rank. That is, there is a reduction in the number of linearly independent rows and columns. The numerical solutions of Eqs. (29) and (30) then produce very large differential values. The points at which J(q) loses rank are called joint-space singularities. There are two types of joint-space singularities:

1. Boundary singularities, which occur when the tool tip is on the surface of the work envelope. These are not particularly serious, because they can be avoided by mechanical constraints.
2. Interior singularities, which occur inside the work envelope when two or more axes become collinear. These are more serious, because cancellation of counteracting rotations about an axis causes the tool tip position to remain constant.

From inspection of the Jacobian for the SCARA robot, Eq. (28), J(q) loses rank if and only if the upper-left 2 × 2 submatrix becomes singular, |J(q)| = 0.
|J(q)| = \left[-a_1\sin\theta_1 - a_2\sin(\theta_1+\theta_2)\right]a_2\cos(\theta_1+\theta_2) + a_2\sin(\theta_1+\theta_2)\left[a_1\cos\theta_1 + a_2\cos(\theta_1+\theta_2)\right]
       = a_1 a_2\left[\sin(\theta_1+\theta_2)\cos\theta_1 - \sin\theta_1\cos(\theta_1+\theta_2)\right]
       = a_1 a_2 \sin\theta_2

If \sin\theta_2 = 0, the Jacobian matrix is singular and has no inverse. This occurs when the elbow angle q_2 is an integer multiple of \pi. When q_2 = 0 the tool tip is on the outer surface of the work envelope (the arm is reaching straight out), whereas when |q_2| = \pi the arm is folded inside the surface of the work envelope.
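A few lines of code make the condition concrete; the link lengths are the same illustrative values used earlier.

```python
import numpy as np

a1, a2 = 0.425, 0.375                       # illustrative link lengths
for q2 in [0.0, 0.1, 1.5, np.pi]:
    detJ = a1 * a2 * np.sin(q2)             # |J| of the upper-left 2x2 block
    flag = "  <- singular (arm straight out or folded)" if abs(detJ) < 1e-9 else ""
    print(f"q2 = {q2:5.3f}  |J| = {detJ: .4f}{flag}")
```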
5.7 ROBOT ARM DYNAMICS
Similar to the robot kinematic problem, there are two types of robot dynamic problems, a direct (forward) dynamic problem and an inverse dynamic problem, as shown in Fig. 14. The problem of direct or forward dynamics is to calculate the joint trajectories (position, velocity, and acceleration), q(t), given the force (torque) profile, \tau(t), which causes the desired motion trajectory.

This section presents the inverse dynamic equation of motion for robot manipulators. The robot arm kinematic solution, along with the velocity and the acceleration of the link co-ordinates, will be used to obtain the inverse dynamic model for the four-axis SCARA robot (AdeptOne).

In order to control the robot manipulator in real time, at an adequate sampling frequency, it is necessary to balance the generalized torques accurately and frequently from four sources [7]:

1. Dynamical source, arising from the motion of the robot:
   a. Inertia, the mass property resisting change in motion, proportional to joint acceleration.
   b. Coriolis forces, derived from the link interactions, proportional to the product of the joint velocities.
   c. Centripetal forces, constraining rotation about a point, proportional to the square of the joint velocity.
2. Static source, arising from friction in the joint mechanism.
3. Gravity source, arising from the force of gravity on each link.
4. External source, arising from external loads (tasks) on the end effector.

We can formulate the following expression for the inverse dynamics problem:

\tau = M(\theta)\ddot\theta + C(\theta, \dot\theta) + F(\dot\theta) + G(\theta) + \tau_d    (34)

where M is the inertia matrix, C the Coriolis/centripetal vector, F the friction term, G the gravity loading, and \tau_d the disturbance (external load) torque.
The recursive Newton-Euler (N-E) formulation propagates the joint forces (torques) exerted on each link
from the end effector back to the base frame. The N-E formulation is simple and very fast. However, the derivation is messy (vector cross-product terms) and the recursive equations destroy the structure of the dynamic model, which is very important for the design of robot controllers [51].

The advantage of using the recursive form (numerical) over the closed form (analytical) is only the speed of computation, particularly as the number of axes increases. The L-E method has a computational complexity of order O(n^4), n being the number of axes. This is in contrast to the N-E formulation, which has a computational complexity of order O(n).

Lee et al. [68] also obtained an efficient set of closed-form solutions based on the generalized d'Alembert (G-D) principle that retain the structure of the problem with a moderate computational complexity of order O(n^3). A symbolic program called algebraic robot modeler (AMR) has also been reported to generate efficient customized dynamic equations for a variety of robots [70].
In the following section, the complete dynamic model of the four-axis SCARA robot (AdeptOne) is derived based on the L-E formulation (see Fig. 13). The AdeptOne SCARA robot is kinematically simple, and its unique direct-drive actuating system eliminates gear friction and backlash. It has closed-form dynamic solutions which provide the simple physical interpretations (inertia, centrifugal and Coriolis forces, friction, and gravity) necessary to design the robot controller.
5.7.1 The Lagrange-Euler Formulation

The Lagrange-Euler method describes the dynamic behavior of a robot in terms of the work and energy stored in the system. The constraining forces are eliminated during the formulation and the closed-form solution is derived independent of any co-ordinate system. This method is based on the utilization of [52]:

1. The (4 × 4) Denavit-Hartenberg matrix representation, ^iA_{i-1}, which describes the spatial relationship between the ith and (i-1)th link co-ordinate frames.
2. Fundamental properties of kinetic energy (K) and potential energy (P).
3. Lagrange's equation of motion:

\tau_i = \frac{d}{dt}\left(\frac{\partial L}{\partial \dot q_i}\right) - \frac{\partial L}{\partial q_i}, \qquad i = 1, \ldots, n    (35)

where L = K - P is the Lagrangian and \tau_i is the generalized force applied at joint i: when the joint is revolute it is composed of a torque vector, or when prismatic it is a force vector.

Since potential energy is only position dependent, Eq. (35) can be further defined as

\tau_i = \frac{d}{dt}\left(\frac{\partial K}{\partial \dot q_i}\right) - \frac{\partial K}{\partial q_i} + \frac{\partial P}{\partial q_i}    (36)

The kinetic energy K_i of link i is expressed in terms of its mass m_i, its linear and angular velocities, and ^iI_0, the (3 × 3) inertia tensor matrix of link i (Eq. (37)). Since energy is additive, the total kinetic energy stored in the whole arm linkage is then given by

K = \sum_{i=1}^{n} K_i    (38)

Recall the homogeneous transformation matrix, ^iT_0, that gives the position and orientation of any point ^ir on link i with respect to the base frame as

^ir_0 = {}^iT_0\,{}^ir    (39)

Differentiation with respect to time gives the velocity of the link position, ^iv_0, with respect to the base frame as

^iv_0 = \frac{d}{dt}\left({}^iT_0\,{}^ir\right) = \sum_{j=1}^{i} \frac{\partial\,{}^iT_0}{\partial q_j}\,\dot q_j\,{}^ir    (40)

Substituting Eq. (40) in Eq. (37) and subsequently in Eq. (38) yields
K = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{i}\sum_{k=1}^{i}\left(\frac{\partial\,{}^iT_0}{\partial q_j}\,{}^ir\right)^{T}\,{}^iI_0\,\left(\frac{\partial\,{}^iT_0}{\partial q_k}\,{}^ir\right)\dot q_j\,\dot q_k    (41)

Asada and Slotine [57] suggest rewriting the expressions in Eq. (41) by using the (n × n) manipulator inertia tensor matrix M, whose elements are

M_{jk} = \sum_{i=1}^{n} m_i\left(\frac{\partial\,{}^iT_0}{\partial q_j}\,{}^ir\right)^{T}\left(\frac{\partial\,{}^iT_0}{\partial q_k}\,{}^ir\right)    (42)
The matrix M incorporates all the mass properties of the entire arm linkage. The manipulator inertia matrix, also called the mass matrix, is symmetrical and in quadratic form. Since the kinetic energy is positive unless the robot is at rest, the inertia matrix is positive definite:

K = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} M_{ij}\,\dot q_i\,\dot q_j    (43)

Note that M_{ij}, a component of the inertia matrix M, is a function of the joint variables q and represents the coupling between links i and j. The diagonal term M_{ii} represents the self-inertial coefficients. Since the mass matrix M is positive definite, the coefficient of coupling falls between 0 (no inertial interaction) and 1 (tightly coupled).
Since the mass matrix M involves Jacobian matrices, which are configuration dependent and can vary, two links can be highly coupled in one configuration and completely decoupled in another. Desirably, the manipulator inertia matrix would be diagonal with constant self-inertial terms. This would allow the dynamic properties to stay the same for all configurations and keep the control algorithms simple [71].
The kinetic energy depends on q and dq/dt, whereas the potential energy depends only on the configuration:

P = \sum_{i=1}^{n} m_i\, g^{T}\, {}^iT_0\, {}^ir

Therefore, the arm dynamic equation is

\tau = M(q)\ddot q + \dot M(q)\dot q - \frac{1}{2}\,\frac{\partial}{\partial q}\!\left[\dot q^{T} M(q)\,\dot q\right] + \frac{\partial P}{\partial q}    (47)

Defining the Coriolis/centripetal vector as

C(q, \dot q) = \dot M(q)\,\dot q - \frac{1}{2}\,\frac{\partial}{\partial q}\!\left[\dot q^{T} M(q)\,\dot q\right]    (48)

the final dynamic form defined in Eq. (34) is derived. The friction and disturbance terms can be added to complete the dynamic equation. The robot dynamic equation represents nonlinearities due to co-ordinate transformations, which are trigonometric. Additional nonlinearities appear in both the kinetic and potential energy terms due to Coriolis and centrifugal terms. The Coriolis terms represent the velocity coupling of links j and k felt at joint i, in torque or force form, whereas the centrifugal terms reflect the velocity coupling of only link j at joint i. The Coriolis and centrifugal coefficients are small and become important when the robot is moving at high speed.
The gravity coefficients arise from the potential energy stored within the individual links. These coefficients also vary with configuration. Equation (47) provides the dynamic properties based on the assumption that no loads are attached to the arm. The effect of a load on the dynamic coefficients is additive and therefore can be considered as a point mass and an extension of the last link (tool). Robot manipulators use actuators (electric or hydraulic) to move. Since the actuator and motor inertias are decoupled values acting only at the joints, they can be treated as additive terms to the mass matrix [82]. Also, note that each torque (force) computation involves a summation over a possible range of n joints. This creates a computational complexity of order O(n^4). This high order of calculation is very slow for online implementation. In practice, the manipulator dynamics must be modeled accurately for precise and high-speed motion control.
5.7.2 Dynamic Model of the SCARA Robot
Tasks performed by robots were defined previously as gross manipulation and fine manipulation tasks. Motion trajectory control of the end effector applies to the tasks in the first category. Robots, in general, use the first three axes for gross manipulation (position control), while the remaining axes orient the tool during the fine manipulation (force or tactile control). Robots in parts assembly applications are to pick a component up with a vertical movement, move it horizontally, and then move downwards vertically for insertion. These motions can be achieved by four-axis SCARA robots. Examples of robots which belong to this class include the AdeptOne robot, the Intelledex 440 robot, and the IBM 7545 robot. The first three axes of a SCARA robot position the end effector, while the fourth axis orients the tool through a roll motion [53].

This section provides the dynamic equations of the AdeptOne SCARA robot, based on the L-E formulation discussed above. The AdeptOne robot is a direct-drive robot. As there is no gearbox, the friction at each joint is very small. The vector of joint variables is

q = [\theta_1\ \ \theta_2\ \ d_3\ \ \theta_4]^T

where the third joint is prismatic, while the remaining three joints are revolute.

From reviewing the results of Table 4, it is clear that the tool roll angle \theta_4 has no effect on the end-effector position; therefore \theta_4 can be ignored in our investigation of the motion trajectory control. The mass of the fourth link and the attached end effector is also assumed to be very small in comparison with the masses of the other links.
The link-co-ordinate diagram for this robot is shown in Fig. 13 for the first three axes. The three links are assumed to be thin cylinders of mass m_1, m_2, and m_3. The vertical column of height d_1 is stationary, and when joint 1 is activated, only the rod of length a_1 (mass m_1) rotates. The dynamic relationships are governed by Eq. (47). Carrying out Eq. (47) for the first two joints yields torque expressions that contain configuration-dependent inertia terms together with Coriolis/centripetal terms proportional to a_1 a_2 \sin\theta_2\,\dot\theta_1\dot\theta_2. Joint 3 is a prismatic joint, used to establish the vertical tool position, and is completely independent of the first two joints. Thus, there are no Coriolis and centrifugal forces on this joint. However, the mass of the third joint affects the motion of the first two joints by acting
One needs to know the resulting motion trajectories, \theta, \dot\theta, \ddot\theta, before one can calculate the joint torques, \tau. These equations are time-varying, nonlinear, and coupled differential equations. The trajectory problem of this robot manipulator revolves around finding the joint torques \tau_i(t) such that the joint angles \theta_i(t) track the desired trajectories \theta_{id}(t). Torque-based control techniques are built directly on the dynamic models of the robot manipulators.
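The sketch below evaluates such a torque computation for the first three SCARA joints. It is a simplified model under stated assumptions, not the chapter's exact coefficients: links 1 and 2 are treated as thin rods, the vertical slide plus payload as a point mass at the tool, and q_3 is measured downward; all numeric values are illustrative.

```python
import numpy as np

def scara_torques(q, qd, qdd, m1, m2, m3, a1, a2, g=9.81):
    """tau = M(q) qdd + C(q, qd) + G(q) for the first three SCARA joints."""
    t1, t2, q3 = q
    c2, s2 = np.cos(t2), np.sin(t2)

    # configuration-dependent inertia (mass) matrix
    M11 = m1*a1**2/3 + m2*(a1**2 + a2**2/3 + a1*a2*c2) + m3*(a1**2 + a2**2 + 2*a1*a2*c2)
    M12 = m2*(a2**2/3 + a1*a2*c2/2) + m3*(a2**2 + a1*a2*c2)
    M22 = m2*a2**2/3 + m3*a2**2
    M = np.array([[M11, M12, 0.0],
                  [M12, M22, 0.0],
                  [0.0, 0.0,  m3]])

    # Coriolis/centripetal vector (velocity coupling of joints 1 and 2)
    h = (m2/2 + m3) * a1 * a2 * s2
    C = np.array([-h*qd[1]**2 - 2*h*qd[0]*qd[1],
                   h*qd[0]**2,
                   0.0])

    # joints 1-2 move in the horizontal plane, so gravity loads only joint 3
    G = np.array([0.0, 0.0, -m3 * g])
    return M @ np.asarray(qdd) + C + G

tau = scara_torques(q=[0.4, -0.7, 0.1], qd=[1.0, -0.5, 0.0], qdd=[2.0, 1.0, 0.2],
                    m1=25.0, m2=15.0, m3=10.0, a1=0.425, a2=0.375)
print(tau.round(2))
```

Note how the Coriolis/centripetal coefficient h carries the a_1 a_2 \sin\theta_2 factor discussed above, and how joint 3 appears only through its mass and gravity terms.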
5.8 ROBOT NEURAL CONTROLLER
In order to design intelligent robot controllers, one must also provide the robot with a means of responding to problems in the temporal context (time) as well as the spatial context (space). It is the goal of intelligent robot researchers to design a neural learning controller that utilizes the data available from repetition in the robot operation. The neural learning controller based on the recurrent network architecture has the time-varying feature that once a trajectory is learned, it should learn a second one in a shorter time.

In Fig. 15, the time-varying recurrent network provides the learning block (primary controller) for the inverse dynamic equations discussed above. The network compares the desired trajectories (\theta_d, \dot\theta_d, \ddot\theta_d) with the continuous paired values for the three-axis robot (\theta, \dot\theta, \ddot\theta) at every instant in a sampling time period. The new trajectory parameters are then combined with the error signal from the secondary controller (feedback controller) for actuating the robot manipulator arm.

Neural networks can be applied in two ways in the design of the robot controller described in Fig. 4: (1) system identification model and (2) control. ANNs can be used to obtain the system model identification, which can then be used to design the appropriate controller. They can also be used directly in the design of the controller [72] once the real system model is available. Neural network approaches to robot control are discussed in general by Psaltis et al. [25] and Yabuta and Yamada [73]. These approaches can be classified into:
1. Supervised control: a trainable controller, unlike the old teaching pendant, allows responsiveness to sensory inputs. A trainable neuromorphic controller reported by Guez and Selinsky [74] provides an example of a fast, real-time, and robust controller.
2. Direct inverse control: the network is trained to learn the inverse dynamics of the robot. Kung and Hwang [75] used two networks online in their design of the controller.
3. Neural adaptive control: neural nets combined with adaptive controllers result in greater robustness and the ability to handle nonlinearity. Chen [76] reported the use of the BP method for a nonlinear self-tuning adaptive controller.
4. Backpropagation of utility: involves information flowing backward through time. Werbos's backpropagation through time is an example of such a technique [77].
5. Adaptive critic method: uses a critic to evaluate robot performance during training. This is a very complex method and requires more testing [78].
In the direct inverse control approach, the recurrent neural network learns the inverse dynamics of the robot in order to improve the controller performance. The neural network model replaces the primary controller (see Fig. 4). In this approach a feedback controller (secondary controller) is used to teach the network initially. As learning takes place, the neural network takes full control of the system.
Kawato and his research group were successful in using this approach in the trajectory control of a three-degree-of-freedom robot [24,79]. Their approach is known as feedback-error-learning control. However, their neural network structure was simply a linear collection of all the nonlinear dynamic terms (they called them subsystems) in the dynamic motion equation. Learning was purely for the estimates of the subsystems. As the degrees of freedom increase, the network size needs to increase (order of n^4). For example, for six degrees of freedom 942 subsystems are needed, compared with 43 for three degrees of freedom. However, due to the parallel processing capability of the neural network, the implementation of Kawato's method is still an attractive method.
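The sketch below conveys the feedback-error-learning idea in miniature. It is not Kawato's implementation: the feature vector is a hypothetical set of "subsystem" terms for a two-joint arm, and the learned map is linear in those features; the defining property shown is that the feedback controller's own output serves as the error signal that trains the feedforward network.

```python
import numpy as np

def features(q, qd, qdd_des):
    """Hypothetical nonlinear 'subsystem' terms for a 2-joint arm."""
    return np.array([qdd_des[0], qdd_des[1],
                     np.cos(q[1]) * qdd_des[0], np.cos(q[1]) * qdd_des[1],
                     np.sin(q[1]) * qd[0] * qd[1], np.sin(q[1]) * qd[0]**2,
                     qd[0], qd[1]])

class FeedbackErrorLearner:
    """Feedforward torque = W @ phi; the PD feedback torque is the training error,
    so the network gradually takes over from the feedback loop."""
    def __init__(self, n_joints, n_feat, lr=1e-3):
        self.W = np.zeros((n_joints, n_feat))
        self.lr = lr

    def control(self, q, qd, q_des, qd_des, qdd_des, Kp, Kv):
        phi = features(q, qd, qdd_des)
        u_ff = self.W @ phi                             # learned feedforward torque
        u_fb = Kp @ (q_des - q) + Kv @ (qd_des - qd)    # secondary (feedback) controller
        self.W += self.lr * np.outer(u_fb, phi)         # feedback error drives learning
        return u_ff + u_fb

ctrl = FeedbackErrorLearner(n_joints=2, n_feat=8)
u = ctrl.control(q=np.zeros(2), qd=np.zeros(2),
                 q_des=np.array([0.2, -0.1]), qd_des=np.zeros(2), qdd_des=np.zeros(2),
                 Kp=np.eye(2) * 50.0, Kv=np.eye(2) * 10.0)
print(u)
```

As the learned term improves, the feedback torque (and hence the weight update) shrinks, which is the sense in which the network "takes full control" over repeated trials.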
Goldberg and Pearlmutter [80] have demonstrated the utility of the feedback-error-learning approach for the motion control of the first two joints of the CMU
Figure 15 Recurrent neural learning controller