Biomimetics - Biologically Inspired Technologies - Yoseph Bar-Cohen (Part 5)


engineering, will lead to a deeper understanding of ourselves and will be significant for constructing the next generation of advanced artificial systems such as human-friendly robots.

The following sections of this chapter will introduce recent biomimetic system control research. From the point of view of the system's self-organization, we will describe in Section 16.2 the nonlinear and redundant sensory-motor learning problem. We will introduce the problem of optimal motion formation under environmental constraints in Section 16.3. In Section 16.4, we will study the system's mechanical interaction and environmental adaptation, and show a novel biologically inspired two-degree-of-freedom adaptive control theory with its application to a robot's force tracking control. The conclusion will be given in Section 16.5.

Animals survive in complex natural environments using their sensory-motor behavior. The organization and development of the brain nervous system's motor control functions largely depend on physical interaction with the external environment. Self-organization of environmentally adaptive motor function is one of the most interesting characteristics that we should learn from in biomimetic control research.

Charles T. Snowdon, a former President of the Animal Behavior Society, described animal behavior as follows: "Animal behavior is the bridge between the molecular and physiological aspects of biology and the ecological. Behavior is the link between organisms and environment, and between the nervous system and the ecosystem. Behavior is one of the most important properties of animal life. Behavior plays a critical role in biological adaptations. Behavior is how we humans define our own lives. Behavior is that part of an organism by which it interacts with its environment. Behavior is as much a part of an organism as its coat, wings, etc. The beauty of an animal includes its behavioral attributes" (Snowdon).

Historically, there are two broad approaches to studying animal behavior: (1) the ethological approach and (2) the experimental physiological approach (http://salmon.psy.plym.ac.uk/). Ethologists are mainly concerned with the problems of how to identify and describe species-specific behaviors. They try to understand the evolutionary pathway through which the genetic basis for the behavior came about. They use field experiments and make observations of animal behavior under natural conditions. On the other hand, behaviorists and comparative psychologists concentrate on how we learn new behaviors, using statistical methods and carefully controlled experimental variables for a restricted number of species, principally rats and pigeons, under laboratory conditions.

The famous Russian physiologist Pavlov, who is recognized as the founder of behaviorism, trained a dog by ringing a bell before mealtime. Over the course of time, he discovered that simply by ringing the bell, the dog would salivate. This is now known as the concept of the conditioned reflex (Pavlov, 1923). A similar conceptual approach was also developed by him to study human behavior. Sherrington, on the other hand, studied spinal reflexes and put forward his theory of the reciprocal innervation of agonist and antagonist skeletal muscles, which is known as Sherrington's Law (Sherrington, 1906). Bernstein, another Russian scientist, further pointed out different important problems in motor learning and organization. Dealing with the redundancy problem in motor behavior, he proposed the concept of synergy in muscles' coordinative actions so as to constrain the motion D.O.F. with respect to the required tasks. He suggested that it is such a synergy that results in the reflex motions between each D.O.F. Moreover, this synergy changes with respect to environmental variations, which goes beyond the philosophy of Pavlov's conditioned reflex (Bernstein, 1967).

Today, with the development of information science, robotics, and control engineering, it becomes easier to study the motor behaviors and the principal control mechanisms of the brain nervous system more quantitatively and systematically.

Trang 2

16.2.1 Nonlinear and Redundant Sensory-Motor Organization

Even simple reaching movements that can be performed by a 5-month-old baby are never simple in cybernetics. At the least, they require solving several nonlinear coordinate transformations from the object space to the muscle space. The transformations may also contain the problem of redundancy. Figure 16.1 shows a 3D computer simulation model of the whole-body dynamic musculo-skeletal system of a human developed at RIKEN BMC. As seen from this model, in order to realize natural human motions, it is necessary to control more than 105 D.O.F. by over 300 muscles. Figure 16.2 shows another research example on how to control the 3 D.O.F. position of an object by whole-arm cooperative manipulation under the influence of external forces. Here, each arm has 4 D.O.F. and interacts with the object through all of its links, not just the end-effectors. The human body is such a super-redundant system, and the redundancy exists at many levels of the motor coordinates. The inverse solution of the redundancy problem generally forms a solution manifold in the motor control space; the solution is not unique and thus is not easy to define. Therefore, although the redundant D.O.F. provide the biological system with a powerful hardware foundation to realize various smooth and delicate motions that have high tolerance (fault-tolerance to functional disability in some parts of the system) and adaptability (adaptation to environmental uncertainties, variations, and different objectives), in order to enjoy these benefits we have to overcome the ill-posed nonlinear problems that arise while organizing the sensory-motor coordination. These problems come not only from the kinematics but also from the dynamics. By now, many approaches have been proposed from the viewpoint of robotic engineering as well as from biologically inspired learning theory. The proposed approaches can be largely summarized as: (1) learning approaches based on neural networks; and (2) Jacobian approaches from robotic engineering.
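To make the redundancy concrete, the following sketch (an illustration added here, using a hypothetical three-link planar arm rather than the RIKEN model) fixes a single end-effector target and prints a one-parameter family of joint configurations that all reach it: the inverse image of a task-space point is a manifold, not a point.

```python
import numpy as np

LINKS = (1.0, 0.8, 0.6)          # hypothetical link lengths

def forward_kinematics(theta, lengths=LINKS):
    """End-effector position of a planar arm with joint angles theta."""
    angles = np.cumsum(theta)    # absolute orientation of each link
    return np.array([np.sum(lengths * np.cos(angles)),
                     np.sum(lengths * np.sin(angles))])

# Fix one task-space target and sweep the first joint: for each value of q1 the
# remaining two links form a 2-link arm whose inverse kinematics is analytic.
target = np.array([1.2, 0.9])
l1, l2, l3 = LINKS
for q1 in np.linspace(-0.5, 1.0, 7):
    elbow = l1 * np.array([np.cos(q1), np.sin(q1)])   # position of the second joint
    rest = target - elbow
    d = np.linalg.norm(rest)
    c3 = (d**2 - l2**2 - l3**2) / (2 * l2 * l3)
    if abs(c3) > 1.0:
        continue                                      # target unreachable for this q1
    q3 = np.arccos(c3)
    q2 = np.arctan2(rest[1], rest[0]) - q1 - np.arctan2(l3*np.sin(q3), l2 + l3*np.cos(q3))
    theta = np.array([q1, q2, q3])
    print(theta.round(3), forward_kinematics(theta).round(3))
# Every printed configuration reaches (1.2, 0.9): the inverse image is a manifold.
```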

Figure 16.1 (See color insert following page 302) A 3D computer simulation model of the whole-body human dynamic musculo-skeletal system.


16.2.2 Motor Learning Using Neural Network

In the learning-based approach, the main efforts have been made through: (1) supervised learning; and (2) self-organization.

Fundamentally, supervised learning depends closely on the availability of an external teacher. In this approach, we first construct a neural network and define a smooth nonlinear function for a set of neurons. Then, for a given set of inputs, we use the error between the desired response from the teacher and the network's actual output to adjust the interconnection weights between the neurons. Research on supervised learning resulted in the later biological discovery of long-term depression (LTD) in the cerebellum (Rosenblatt, 1962; Ito, 1984), which in turn clarified one of the basic functions of the cerebellum in motor learning and adaptation. However, the later developments of supervised learning in artificial neural networks may not match in detail the workings of real neural networks (Rumelhart et al., 1986).

One of the important abilities of supervised learning is the so-called generality, which means that, after sufficient learning, the network can generate proper output for a new input that was not learned before. It has been proved for multi-layered artificial neural networks that, with a sufficient number of neurons in the hidden layer, the network can approximate any continuous mapping from input to output (Funahashi, 1989). For motor learning, however, the condition of sufficient learning implies that we have to perform sufficient physical trial motions with the body. This is necessary in supervised learning but is not efficient for motor learning in biological systems. For motor learning, the main target is rather to realize the generality of motion with limited physical trials.
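As a minimal illustration of this kind of supervised learning (not code from the chapter), the sketch below trains a one-hidden-layer network with plain gradient descent to approximate the forward relation x = g(y) of a hypothetical two-link arm, and then checks its output on joint angles it never saw during training. The network size, learning rate, and sampling ranges are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(theta):
    """Forward map of a planar 2-link arm (the 'teacher' physical system)."""
    return np.stack([np.cos(theta[:, 0]) + 0.8*np.cos(theta[:, 0] + theta[:, 1]),
                     np.sin(theta[:, 0]) + 0.8*np.sin(theta[:, 0] + theta[:, 1])], axis=1)

# One-hidden-layer network x_hat = W2 tanh(W1 y + b1) + b2, trained by gradient descent.
H = 40
W1, b1 = rng.normal(0, 1, (H, 2)), np.zeros(H)
W2, b2 = rng.normal(0, 0.1, (2, H)), np.zeros(2)

Y = rng.uniform([0.0, 0.3], [1.5, 2.0], (500, 2))   # training joint angles (trial motions)
X = g(Y)                                            # teacher output for each trial
lr = 0.05
for _ in range(5000):
    h = np.tanh(Y @ W1.T + b1)
    err = h @ W2.T + b2 - X                         # network output error vs the teacher
    gW2 = err.T @ h / len(Y); gb2 = err.mean(0)
    back = (err @ W2) * (1 - h**2)                  # backpropagated error signal
    gW1 = back.T @ Y / len(Y); gb1 = back.mean(0)
    W2 -= lr*gW2; b2 -= lr*gb2; W1 -= lr*gW1; b1 -= lr*gb1

# Generality: evaluate on joint angles that were never in the training set.
Y_test = rng.uniform([0.0, 0.3], [1.5, 2.0], (5, 2))
pred = np.tanh(Y_test @ W1.T + b1) @ W2.T + b2
print(np.abs(pred - g(Y_test)).max())               # small error on unseen inputs
```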

By modifying supervised learning, three models have been proposed for the specific problem of motor learning: (1) direct inverse (Kuperstein, 1988); (2) distal supervised learning (Jordan and Rumelhart, 1992); and (3) feedback error learning (Kawato et al., 1987; Miyamoto et al., 1988). The main considerations of the modifications concern the selection of a suitable teacher signal and the concave property of the nonlinear transformation. However, these three models have two common disadvantages derived more or less from supervised learning. Firstly, in applying an algorithm such as backpropagation, global information of the network's output error is used to adjust all weights between nerve cells. This requires massive connections among all neurons, which is difficult to realize artificially. Secondly, the resultant motor output may not have the topology-conserving property with respect to the sensory input, or even any spatial optimality, as we will show in the next subsection. Because of these disadvantages, in tasks such as moving the hand smoothly in the task space, there may exist dramatic changes in the joint angles (Guez and Ahmad, 1988; Gorinevsky, 1993).

Figure 16.2 Whole body cooperative manipulation of an object.


Compared with supervised learning, the self-organization approach does not depend on any external teacher. It focuses on the spatial order of the input data and organizes the learning system so that neighboring nodes have similar outputs (Amari, 1980; Kohonen, 1982). By considering the spatial characteristics of motor learning, the self-organization algorithm has also been extended to generate a topology-conserving sensory-motor map (Ritter et al., 1989). In this approach, we first construct a three-dimensional lattice and assign to each node within the lattice a sensory input vector, the corresponding inverse Jacobian matrix, and a joint angle vector. The lattice then outputs desired joint angles for the arm to perform many physical trial motions. For each trial motion, a visual system is used to input the end-effector position of the arm in the task space. The algorithm then searches for a winner node whose sensory vector is closest to the visual input. After that, the sensory vector, the inverse Jacobian matrix, and the joint angle vector of the winner node, together with those of its neighbor nodes, are adjusted, respectively. The neighborhood of adjustment decreases as the learning proceeds. As a result, the vectors (or matrix) stored in one node are similar to those of its neighbor nodes; that is, a topology-conserving map is self-organized without any supervisor's command. In this algorithm, for every adjustment step, the arm has to perform real physical trial motions. Since these are still within the learning process, such trial motions are sometimes dangerous or may even be impossible due to the incorrectness of the map. In addition, both in searching for the winner node and in adjusting the neighbor nodes, the approach requires a centralized gating network that interacts with all nodes, which makes the learning algorithm centralized rather than parallel from the computational point of view. Finally, besides the fact of topology conservation, we cannot obtain any information about the map's spatial optimality.
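The following is a much-simplified sketch of this self-organizing idea, written for illustration only: a 2-D lattice and a simulated two-link arm stand in for the real trial motions and the camera, and the inverse-Jacobian entries of Ritter et al. are omitted. A winner node is found for each trial, and the winner and its shrinking neighborhood are pulled toward the observed sensory and joint vectors, yielding a topology-conserving map.

```python
import numpy as np

rng = np.random.default_rng(1)

def fk(theta):
    """Forward kinematics of a 2-link arm (stand-in for the visual feedback)."""
    return np.array([np.cos(theta[0]) + 0.8*np.cos(theta[0] + theta[1]),
                     np.sin(theta[0]) + 0.8*np.sin(theta[0] + theta[1])])

N = 10                                           # lattice size (N x N nodes)
grid = np.stack(np.meshgrid(np.arange(N), np.arange(N), indexing="ij"), axis=-1)
sens = rng.uniform(-1.8, 1.8, (N, N, 2))         # sensory vector stored at each node
joint = rng.uniform(0.0, 2.0, (N, N, 2))         # joint-angle vector stored at each node

for t in range(4000):
    theta = rng.uniform([0.0, 0.3], [1.5, 2.0])  # one physical trial motion
    x = fk(theta)                                # visual input of the end-effector
    # winner node: the node whose sensory vector is closest to the visual input
    d = np.linalg.norm(sens - x, axis=-1)
    w = np.unravel_index(np.argmin(d), d.shape)
    # neighborhood and learning rate shrink as learning proceeds
    sigma = 3.0 * np.exp(-t / 1500)
    eta = 0.5 * np.exp(-t / 2000)
    nb = np.exp(-np.sum((grid - np.array(w))**2, axis=-1) / (2 * sigma**2))[..., None]
    sens += eta * nb * (x - sens)                # adjust the winner and its neighbors
    joint += eta * nb * (theta - joint)

# Neighbor nodes end up with similar joint vectors: a topology-conserving map.
print(np.abs(np.diff(joint, axis=0)).mean(), np.abs(np.diff(joint, axis=1)).mean())
```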

16.2.3 Diffusion-Based Learning

Research on motor learning in biological systems is not limited to the two learning approaches of the above subsection. In order to overcome their drawbacks, we presented a diffusion-based motor learning approach, in which each neuron interacts only with its neighbor neurons and generates a sensory-motor map with a degree of spatial optimality.

In detail, we consider the spatial optimality of the coordination: to minimize the motor control error of the system as well as the variation of the motor control with respect to the sensory input over the whole bounded task space. Using variational calculus, we derive a partial differential equation (PDE) of the motor control with respect to the task space. The equation includes a diffusion term. For the given boundary conditions and initial conditions, this PDE can be solved uniquely, and the solution is a well-coordinated map (Luo and Ito, 1998).

From the motor learning point of view, our approach contains aspects of both supervised learning and self-organization. Firstly, we assume that the forward many-to-one relation from the hand system's motor control to the task space sensory input can be obtained using supervised learning, and that at the boundary the supervisor can provide correct motor teacher information. Then, by evolving the diffusion equation, we can obtain the sensory-motor coordination over the whole bounded task space.

16.2.3.1 Robotic Researches of Kinematic Redundancy

Before describing diffusion-based learning, we first briefly review the redundancy problem and summarize previous robotic approaches. Without loss of generality, we only consider the kinematic nonlinear relation between the work space and the joint space, which is represented as

x = g(u)

where u = [u_1, u_2, ..., u_m]^T, x = [x_1, x_2, ..., x_n]^T, m > n, and the Jacobian J(u) = ∂g(u)/∂u ∈ R^(n×m) satisfies dim R(J) + dim N(J) = m.

Assuming the Jacobian J is known, we summarize five typical inverse kinematics approaches:

1. By using the transpose of the matrix J, we calculate

   u̇ = J^T (x_d − x)

   where x_d is the desired end-effector position (Chiacchio et al., 1991).

2. For the case when rank(J) = n, we use J⁺, the pseudo-inverse of J, to obtain

   u̇ = J⁺ ẋ_d + (I − J⁺J) h

   where J J⁺ = I and the vector (I − J⁺J) h ∈ N(J). When rank(J(u)) < n, J is singular and the joint configuration u is a singular configuration (Klein and Huang, 1983).

3. By specifying additional task constraints to extend J into a full-rank square matrix J_e, we have (Baillieul, 1985)

   u̇ = J_e^(−1) ẋ_e

4. The regularization method, which minimizes the cost function ‖dx − J du‖ + λ ‖du‖.

5. An approach based on compliance control, using the relations between the end-effector's contact force and its displacement.

All of these approaches assume that the Jacobian (or the kinematic model behind it) is known a priori, which seems unlikely in a biological system. In addition, the cost functions and task constraints considered in some of these approaches may not really be the ones applied by the biological system. There are also several other drawbacks: (1) all the approaches need extensive computation of the Jacobian matrix and/or its pseudo-inverse; (2) approaches 2 and 3 may be numerically unstable; and (3) the so-called quasicyclic problem (Lee and Kil, 1994). Therefore, research on how the biological system organizes its sensory-motor coordination should reflect not only the mathematical aspects of the algorithm and its computational efficiency, but also the biological reality.
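A small numerical sketch of approaches 1 and 2 above, for a hypothetical three-link planar arm (the gain, link lengths, and target are illustrative values chosen here): both the Jacobian-transpose and the pseudo-inverse updates drive the end-effector to x_d, but they arrive at different joint configurations, which is exactly the redundancy discussed in Section 16.2.1.

```python
import numpy as np

L = np.array([1.0, 0.8, 0.6])

def fk(theta):
    a = np.cumsum(theta)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(theta):
    # dx/dtheta_k = -sum_{i>=k} L_i sin(a_i),  dy/dtheta_k = sum_{i>=k} L_i cos(a_i)
    a = np.cumsum(theta)
    J = np.zeros((2, 3))
    for k in range(3):
        J[0, k] = -np.sum(L[k:] * np.sin(a[k:]))
        J[1, k] = np.sum(L[k:] * np.cos(a[k:]))
    return J

def solve(theta0, xd, mode, steps=500, gain=0.2):
    theta = theta0.copy()
    for _ in range(steps):
        e = xd - fk(theta)
        J = jacobian(theta)
        if mode == "transpose":            # approach 1: u_dot = J^T (x_d - x)
            dtheta = gain * J.T @ e
        else:                              # approach 2: pseudo-inverse of J
            dtheta = gain * np.linalg.pinv(J) @ e
        theta = theta + dtheta
    return theta

theta0 = np.array([0.3, 0.5, 0.4])
xd = np.array([1.2, 0.9])
for mode in ("transpose", "pinv"):
    th = solve(theta0, xd, mode)
    print(mode, th.round(3), fk(th).round(3))   # same target, different joint solutions
```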

16.2.3.2 Diffusion-Based Learning Algorithm

Consider again the nonlinear and redundant relation represented by an unknown function

x = g(y) (16.8)

We try to obtain the inverse y = g^(−1)(x) that minimizes a spatial criterion V(y) over a bounded task space, penalizing both the position error of the motor control and the variation of the motor control with respect to the sensory input, weighted by two adjustment coefficients a(t) and b(t); A is the inverse Jacobian that will be mentioned later. Using the variational method, it can be proved that the optimal solution of the inverse y = g^(−1)(x) follows a PDE of the form

∂y/∂t = a(t) ∇²y + b(t) A (x − g(y))

This PDE has two terms. The first is a diffusion term that interpolates the solutions y over the task space x, while the second acts to reduce the position errors. The discrete version of the equation updates each lattice node from its neighbor nodes and its local position error (Equations (16.11) and (16.12)).

One of the main points in this approach is how to set the adjustment coefficients a(t) and b(t) during the learning process. In our study, in order to learn the inverse Jacobian matrix A, we set the time functions a(t) and b(t) so that b(t) = 1 − a(t): initially we select a = 1 and b = 0 for diffusion only, and after that set a = 0 and b = 1 for error correction. During the diffusion process, the inverse matrix A of each node can be obtained by locally minimizing E_{i,j} = (1/2) ‖Δy_{i,j} − A_{i,j} Δx_{i,j}‖², where the increments Δx_{i,j} and Δy_{i,j} are calculated during the two learning steps using the forward relation of Equation (16.8).

The final learning algorithm is summarized as follows:

1. Use supervised learning to learn the forward relation x = g(y).

2. Select a boundary range in the task space x and divide it into an N × N lattice.

3. Perform trial motions on the boundary and remember the corresponding y_{i,j}.

4. Initialize the lattice with these boundary values and the initial inverse Jacobians A⁰_{i,j}, respectively, for all i, j = 1, 2, ..., N, and set the time functions a(t) and b(t) initially as a = 1 and b = 0 for diffusion only; after that, set a = 0 and b = 1 for error correction.

5. Calculate Δy^t_{i,j}, Δx^t_{i,j}, and ∂E_{i,j}/∂A^t_{i,j} = −(Δy^t_{i,j} − A^t_{i,j} Δx^t_{i,j}) (Δx^t_{i,j})^T for E_{i,j} = (1/2) ‖Δy^t_{i,j} − A^t_{i,j} Δx^t_{i,j}‖².

6. Adjust y^{t+1}_{i,j} and the inverse Jacobian matrix A^{t+1}_{i,j} as in Equations (16.11) and (16.12).

Note that, for step 1, since x = g(y) is a function from a higher to a lower dimension, it is possible to learn it using general supervised learning. If the system's forward relation has already been learned in step 1, then while performing learning steps 5 and 6 the motor system does not need to perform physical trial motions. Figure 16.5 shows the resultant map for a three-link robot arm obtained using the above learning approach. It is clear that the arm not only reaches its desired positions over the whole task space, but also that the joints change smoothly with respect to the change of the arm's end-effector position.

This approach has three advantages:

1. It does not require too many trial motions of the sensory-motor system.

2. During the map formation process, it requires only local interactions between neighboring nodes.

3. It guarantees the final map's spatial optimality over the whole bounded task space.

The detailed proof of the above diffusion-based learning algorithm using the variational technique is given in Luo and Ito (1998).
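The sketch below runs the summarized procedure for a hypothetical three-link planar arm. Two simplifications are introduced here and are not part of the published algorithm: the boundary "trial motions" are simulated by an iterative solver, and the inverse Jacobian A_{i,j} is computed from the known kinematics with a damped pseudo-inverse instead of being learned from Δx and Δy as in steps 5 and 6. The two-phase schedule (a = 1, b = 0, then a = 0, b = 1) and the neighbor-only interaction follow the text.

```python
import numpy as np

L = np.array([1.0, 0.8, 0.6])

def fk(th):
    a = np.cumsum(th)
    return np.array([L @ np.cos(a), L @ np.sin(a)])

def jac(th):
    a = np.cumsum(th)
    return np.array([[-np.sum(L[k:] * np.sin(a[k:])) for k in range(3)],
                     [ np.sum(L[k:] * np.cos(a[k:])) for k in range(3)]])

def inv_jac(th, lam=0.01):
    """Damped pseudo-inverse, used here as a stand-in for the learned A_ij."""
    J = jac(th)
    return J.T @ np.linalg.inv(J @ J.T + lam * np.eye(2))

def trial_ik(x, th=np.array([0.3, 0.8, 0.6])):
    """Stands in for a physical trial motion that finds a joint solution for x."""
    th = th.copy()
    for _ in range(400):
        th = th + 0.3 * inv_jac(th) @ (x - fk(th))
    return th

N = 11
xs, ys = np.linspace(0.6, 1.4, N), np.linspace(0.4, 1.2, N)
X = np.stack(np.meshgrid(xs, ys, indexing="ij"), axis=-1)   # lattice of target positions
Y = np.zeros((N, N, 3))                                     # motor vector stored at each node
on_bnd = np.zeros((N, N), bool)
on_bnd[[0, -1], :] = on_bnd[:, [0, -1]] = True
for i, j in zip(*np.where(on_bnd)):                         # trial motions on the boundary only
    Y[i, j] = trial_ik(X[i, j])

for t in range(600):
    a, b = (1.0, 0.0) if t < 300 else (0.0, 1.0)            # diffusion first, then error correction
    Ynew = Y.copy()
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            nbr = (Y[i-1, j] + Y[i+1, j] + Y[i, j-1] + Y[i, j+1]) / 4.0 - Y[i, j]
            err = X[i, j] - fk(Y[i, j])
            Ynew[i, j] = Y[i, j] + a * nbr + b * 0.5 * inv_jac(Y[i, j]) @ err
    Y = Ynew

errs = [np.linalg.norm(X[i, j] - fk(Y[i, j])) for i in range(N) for j in range(N)]
print(max(errs))   # every lattice node reaches its target without interior trial motions
```

Only the boundary nodes ever "move the arm"; the interior of the map is filled in purely by the local diffusion and error-correction updates, which is why few trial motions are needed.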

It should be noted that the redundancy considered here involves only the kinematic aspect. For the redundancy problem considering the system's dynamics, refer to Arimoto's recent research (Arimoto, 2004).

[Figure: the task-space lattice used in diffusion-based learning. Each node (i, j) interacts only with its neighbor nodes (i − 1, j), (i + 1, j), (i, j − 1), and (i, j + 1); the lattice nodes are used to show the target positions that the robot should reach.]


16.2.3.3 Diffusion-Based Generalization of Optimal Control

Diffusion-based learning can also be effectively applied to generalize an optimal control for a robot manipulator (Luo et al., 2001).

Generally, in optimal control we have to solve a two-point boundary value problem, integrating both forward and backward in time. However, it is very difficult to solve this analytically, especially for a nonlinear system like a robot.

By now, there are many numerical approaches to solving the optimal control problem for a given set of initial and terminal conditions. However, these approaches require enormous computation. For every change in the initial and terminal conditions, they have to perform the complex computation again, which makes it difficult to realize optimal control for the robot in real time.

In our approach, we assume that, for some initial and terminal conditions, we have already obtained the optimal solutions. Then, by using the diffusion-based learning algorithm, these optimal solutions can be generalized over the whole bounded task space. For example, as shown in Figure 16.6, assume that for the initial condition S and the four terminal conditions T1 to T4, the optimal control inputs

Figure 16.5 Resultant map of the 3 D.O.F. robot reaching its end-effector to different positions of the x space with different configurations. For a smooth change of the end-effector's position, the robot's joint angles also change smoothly.


are already obtained, then, by using the diffusion-based algorithm, we can obtain semioptimal control solutions for all initial and terminal conditions within a bounded work space, as shown in Figure 16.7, without solving the nonlinear two-point boundary value problem.

Our approach greatly reduces the computational cost. In addition, since the diffusion-based learning process is completely parallel and distributed, it requires only local interactions between the nodes of a learning network (a lattice) and can therefore be realized easily with modern integrated circuit technology.
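The next sketch illustrates the generalization idea numerically. To keep the result checkable, the "already solved" optimal solutions stored on the boundary of the terminal-condition lattice are minimum-jerk trajectories, whose optimum is known in closed form; the chapter's actual case stores numerically solved two-point boundary value solutions of the robot's nonlinear optimal control problem, but the diffusion step that fills in the interior nodes is the same. All sizes and positions are illustrative.

```python
import numpy as np

# Known optimal (minimum-jerk) profile s(t), used only to generate the "already
# solved" boundary solutions and to check the interpolated result afterwards.
T, K = 1.0, 51
t = np.linspace(0.0, T, K)
s = 10*(t/T)**3 - 15*(t/T)**4 + 6*(t/T)**5

N = 9
gx = np.linspace(-0.5, 0.5, N)
gy = np.linspace(-0.5, 0.5, N)
start = np.array([0.0, -1.0])
traj = np.zeros((N, N, K, 2))                        # one trajectory per terminal node

boundary = np.zeros((N, N), bool)
boundary[[0, -1], :] = boundary[:, [0, -1]] = True
for i in range(N):
    for j in range(N):
        if boundary[i, j]:                           # "already obtained" optimal solutions
            goal = np.array([gx[i], gy[j]])
            traj[i, j] = start + np.outer(s, goal - start)

for _ in range(2000):                                # diffusion fills the interior nodes
    traj[1:-1, 1:-1] = 0.25 * (traj[:-2, 1:-1] + traj[2:, 1:-1] +
                               traj[1:-1, :-2] + traj[1:-1, 2:])

# Compare a semioptimal interior solution against the true optimum for that terminal point.
i, j = 4, 6
goal = np.array([gx[i], gy[j]])
exact = start + np.outer(s, goal - start)
print(np.abs(traj[i, j] - exact).max())              # close to zero: generalization succeeds
```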

Recent neuroscientific discoveries show that nitric oxide (NO), a gas that diffuses locally between neuron cells, can modulate local synaptic plasticity and thus plays an important role in motor learning and generalization (Yanagihara and Kondo, 1996). We expect that our diffusion-based learning theory may provide some mathematical understanding of the function of NO in neural information processing and motor learning.

16.3 OPTIMAL MOTION FORMATION

In the previous section, we described how to solve the sensory-motor organization from the redundant sensory space input to the motor control output. In this section, we consider the optimal motion formation problem for the arm to move from one position to another in the task space.

16.3.1 Optimal Free Motion Formation

For a simple human arm point-to-point (PTP) reaching movement in free motion space, it has been found experimentally that the path of the human arm tends to be straight, and the velocity profile of the arm trajectory is smooth and bell-shaped (Morasso, 1981; Abend et al., 1982). These invariant features give us hints about the internal representation of motor control in the central nervous system (CNS). One of the main approaches adopted in computational neuroscience is to account for these invariant features via optimization theory. Specifically, Flash and Hogan (1985) proposed the minimum jerk criterion


Figure 16.6 Diffusion-based spatial generalization of the optimal control. Here S is an initial position and T1 to T4 are four terminal positions for which we already have the optimal controls. We can then obtain the semioptimal controls from S to any terminal position, such as O, without solving the complex two-point boundary value problems.

J = (1/2) ∫_0^Tf ‖d³x(t)/dt³‖² dt

which penalizes the time integral of the squared jerk of the hand position x(t) over the movement duration Tf.


Figure 16.7 Comparison of the semioptimal solutions of the diffusion-based approach with the optimal ones. Here, (c) shows the robot's end-effector trajectories in the task space, while (a) and (b) show two examples of the time responses (joint angles, joint velocities, and joint torques) for the motions from point S to points E1 and E2 given in (c), respectively. It is clear that the solutions of our diffusion-based approach are almost the same as those obtained by solving the complex two-point boundary value problem.


Minimizing this criterion for a point-to-point movement of duration Tf yields

x(t) = x(0) + (x(Tf) − x(0)) (10s³ − 15s⁴ + 6s⁵) (16.15)

where s = t/Tf.
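A quick numerical check of Equation (16.15) (with hypothetical start and goal positions): the resulting path is a straight line in the task space and the speed profile is bell-shaped, peaking at mid-movement with zero velocity at both ends, matching the experimental observations cited above.

```python
import numpy as np

def min_jerk(x0, xT, Tf, n=101):
    """Point-to-point trajectory of Equation (16.15) and its velocity profile."""
    t = np.linspace(0.0, Tf, n)
    s = t / Tf
    shape = 10*s**3 - 15*s**4 + 6*s**5
    x = x0 + (xT - x0) * shape[:, None]                 # straight-line path in task space
    v = (xT - x0) * ((30*s**2 - 60*s**3 + 30*s**4) / Tf)[:, None]
    return t, x, v

t, x, v = min_jerk(np.array([0.0, 0.0]), np.array([0.3, 0.2]), Tf=0.5)
speed = np.linalg.norm(v, axis=1)
print(speed.argmax(), len(t) // 2)     # peak speed at mid-movement: the bell-shaped profile
print(speed[0], speed[-1])             # zero velocity at the start and at the end
```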

Uno et al., on the other hand, proposed to take the arm's dynamics into account as a constraint condition when performing optimal motion planning. Based on this idea, the minimum joint torque-change criterion

J = (1/2) ∫_0^Tf ‖dτ/dt‖² dt

was presented (Uno et al., 1989a), which implies that humans implicitly plan PTP reaching movements in the body space based on the arm's dynamic model. Here τ is the combined vector of the joint torques. They also extended this model to a muscle model (Uno et al., 1989b) and proposed the minimum muscle force change criterion, showing that the CNS may generate a unique hand trajectory by minimizing a global performance criterion of

J = ∫_0^Tf ‖df/dt‖² dt

where f is the combined vector of the muscle forces.
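To make concrete how such "change" criteria are evaluated, the short sketch below computes a finite-difference version of the torque-change cost for two hypothetical torque profiles of a two-joint arm with the same endpoints; the smoother profile scores far lower. The profiles and the weight are invented purely for illustration.

```python
import numpy as np

def change_cost(signal, dt, weight=0.5):
    """Numerically evaluate a 'change' criterion such as (1/2) * integral ||d tau/dt||^2 dt."""
    rate = np.diff(signal, axis=0) / dt                # finite-difference time derivative
    return weight * np.sum(rate**2) * dt               # rectangle-rule integral

# Hypothetical joint-torque trajectories for a 2-joint arm over Tf = 0.5 s.
t = np.linspace(0.0, 0.5, 101)
dt = t[1] - t[0]
smooth = np.stack([np.sin(np.pi*t/0.5), 0.5*np.sin(np.pi*t/0.5)], axis=1)
jagged = smooth + 0.05*np.sign(np.sin(40*np.pi*t))[:, None]   # same endpoints, abrupt changes

print(change_cost(smooth, dt))   # low torque-change cost
print(change_cost(jagged, dt))   # much higher: the criterion penalizes abrupt torque changes
```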

Kawato et al. also presented a cascade neural network model by which the nervous system might solve such a minimum torque-change problem (Kawato et al., 1987; Miyamoto et al., 1988).

16.3.2 Optimal Motion Formation under Environmental Constraints

The studies in the above section considered only simple PTP human arm movements in the free motion space. However, what is the optimal criterion for more complex constrained motions such as opening a door, turning a steering wheel, or rotating a coffee mill?

To address this question, we performed experiments on a crank rotation task. As shown in Figure 16.8, rotating a crank requires only a one-degree-of-freedom force; however, we have to define the torques for the two joints of the arm. This is therefore also a force-redundant problem.

At the same time, we performed optimal computations for many different kinds of criteria, including the minimum jerk, minimum torque change, minimum muscle force change, and minimum end-effector interaction force change criteria, as well as our proposed criterion that minimizes the combination of the end-effector's interaction force change and the muscle force change.

Figure 16.8 (See color insert following page 302) Experiments of human motion formation in crank rotation tasks.


Compared with the experimental result of the measured force shown in Figure 16.9(a), the computational results for the minimum muscle force change criterion and for the combination of the hand interaction force change and muscle force change criterion are shown in Figure 16.9(b) and (c), respectively (Ohta et al., 2004).

From Figure 16.9, it is observed that the contact force vectors predicted using the minimum muscle force change criterion (which was proposed for PTP motion in the free motion space) are inadequate here. Instead, the human arm tends to minimize

J = ∫_0^Tf ( Ḟ^T Ḟ + w ḟ^T ḟ ) dt

the combination of the hand interaction force F change and the muscle force f change, as in Figure 16.9(c) and Figure 16.10. Therefore, we strongly suggest that human arm movement is realized by different optimal criteria according to different task conditions and task requirements.

The combined criterion also captures well the muscle activities in constrained multi-joint motions. It covers both motions in the free motion space and in the constrained motion space, since in the free motion space the interaction force at the end-effector is zero, and the combined criterion then reduces to the minimum muscle force change criterion. How the central nervous system measures the hand contact force, and how it solves the optimal constrained dynamic motion control problem, are left as open questions.
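As a small numerical illustration (with invented force profiles, not the measured data of Figure 16.9), the sketch below evaluates the combined criterion by finite differences and confirms the reduction just described: when the interaction force F is identically zero, the combined cost equals the weighted muscle force change cost.

```python
import numpy as np

def combined_cost(F, f, dt, w=1.0):
    """J = integral (Fdot^T Fdot + w * fdot^T fdot) dt, evaluated by finite differences."""
    Fdot = np.diff(F, axis=0) / dt
    fdot = np.diff(f, axis=0) / dt
    return np.sum(Fdot**2) * dt + w * np.sum(fdot**2) * dt

t = np.linspace(0.0, 0.5, 101)
dt = t[1] - t[0]
f = np.stack([np.sin(np.pi*t/0.5), 0.4*(1 - np.cos(2*np.pi*t/0.5))], axis=1)  # muscle forces
F_contact = np.stack([0.2*np.sin(2*np.pi*t/0.5), np.zeros_like(t)], axis=1)   # crank contact force
F_free = np.zeros_like(F_contact)                                             # free motion: F = 0

print(combined_cost(F_contact, f, dt, w=0.5))
# In the free motion space the interaction force term vanishes, so the combined
# criterion reduces to the (weighted) minimum muscle force change criterion:
print(combined_cost(F_free, f, dt, w=0.5), 0.5*np.sum((np.diff(f, axis=0)/dt)**2)*dt)
```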

16.4 MECHANICAL INTERACTION AND ENVIRONMENTAL ADAPTATION

This section goes further to describe motor control functions in mechanical interaction with a dynamic environment. It is well known that humans can skillfully perform physical interactions with uncertain dynamic environments. In fact, through force feedback from the tendons and the co-activation of antagonist muscles, humans can adaptively control the arm's mechanical impedance with respect to the environmental dynamics so as to realize the desired time response of the motion as well as the contact force (Hogan, 1984).

In order to realize such adaptive motor functions in a robot, we should not only search for soft artificial actuators comparable to biological muscles, but also discover the control principles of the motor functions. Technically, according to the task requirements, contact tasks can be classified into two classes: those that require compliant interaction with the environment, such as pushing open a door, and those that require imposing some exact force on the environment. With respect to these two different contact requirements, impedance control and explicit force control have been proposed, respectively. In this section, we summarize these two control approaches and introduce their recent developments.

Figure 16.9 Comparison of the interaction force vectors between the human hand and the crank in the experiment (a), and in numerical simulations using (b) the minimum muscle force change criterion and (c) the combination of the hand interaction force change and the muscle force change criterion.


16.4.1 Impedance Control

Let us formulate the robot's dynamic equation in the contact task space as

I_r(x) ẍ + c_r(x, ẋ) ẋ = u_r + f

where I_r(x) is the robot's inertia matrix, c_r(x, ẋ) ẋ is the centrifugal and Coriolis force vector, u_r is the robot's control input vector, and f is the contact force from the environment. For simplicity, let us consider the environmental dynamics as a simple linear system.

In detail, the control input is designed as

u = C_x (x_d − x) + C_f f

so that the robot dynamics as seen from the environment becomes

M_rd ẍ + D_rd ẋ + K_rd (x − x_d) = f

where C_x and C_f are the robot's position and force feedback controllers, respectively.
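The one-dimensional sketch below shows the idea of the control law u = C_x(x_d − x) + C_f f under simplifying assumptions that are added here, not taken from the chapter: a point robot of known inertia m with no Coriolis term, a purely elastic environment, and illustrative gains. The input is chosen so that the closed loop reproduces the target impedance relation above, and the simulated contact settles at the equilibrium that relation predicts.

```python
import numpy as np

# One-dimensional sketch: robot m*xdd = u + f pressing against a spring environment.
m = 2.0                          # robot inertia (assumed known for this sketch)
Md, Dd, Kd = 1.0, 20.0, 100.0    # desired impedance parameters M_rd, D_rd, K_rd
ke, xe = 500.0, 0.05             # environment stiffness and surface location
xd = 0.10                        # desired position, deliberately "inside" the surface

def contact_force(x):
    return -ke * (x - xe) if x > xe else 0.0

x, v, dt = 0.0, 0.0, 1e-4
for _ in range(int(1.0 / dt)):
    f = contact_force(x)
    # u = Cx(xd - x) + Cf*f, chosen so that the closed loop obeys the target
    # impedance  Md*xdd + Dd*xdot + Kd*(x - xd) = f  from the text.
    u = (m / Md) * (Kd * (xd - x) - Dd * v) + (m / Md - 1.0) * f
    a = (u + f) / m
    v += a * dt
    x += v * dt

# At steady state the target impedance gives Kd*(x - xd) = f = -ke*(x - xe).
x_ss = (Kd * xd + ke * xe) / (Kd + ke)
print(round(x, 4), round(x_ss, 4), round(contact_force(x), 2))
```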

From the stability point of view, we usually require the robot to be passive with respect to the environmental interactions. Passivity is defined as the property that the system does not supply energy to the outside. The robot's passivity as seen from its environment or from the manipulated object is very useful for stable and safe mechanical interaction. When applying impedance control, if the desired position x_d is constant, then the robot is passive. However, if x_d changes with time, then the robot may lose passivity as seen from the environment.

In order for the robot to retain passivity while performing time-varying interactions, Li and Horowitz (1999) proposed passive velocity field control (PVFC); they also suggested applying PVFC to the control of human-interactive robots and smart exercise machines. Unlike the passivity-based control scheme of Slotine and Li (1991), which considers the passivity of a tracking error system, PVFC keeps the robot passive with respect to the external environment by adding a virtual flywheel that exchanges mechanical energy with the real robot. However, PVFC has the following two main problems. Firstly, when specifying the desired velocity vector field, PVFC does not consider the uncertainties of the environmental geometric constraints. Secondly, although PVFC maintains the


passivity, the contact task performance cannot be adjusted with respect to the environment dynamics. In most contact tasks, we do not know the environmental constraints before performing the tasks: specifically, their shape, size, and location, as well as their mechanical dynamics. Therefore, the environmental uncertainties will influence the robot's task performance even when using PVFC.

Mussa-Ivaldi et al., on the other hand, investigated the organization of motor output in spinally dissected frogs. They stimulated the spinal cord with microelectrodes and observed the isometric forces produced by the muscles of the legs. It is reported that, elicited by a single stimulation, the ankle position converges to a single equilibrium point of a force vector field, and that the point of convergence is shifted by superposing several vector fields resulting from multiple stimulations (Mussa-Ivaldi and Giszter, 1992). Inspired by these biological studies on primitive motor behavior, we proposed an adaptive PVFC approach in which learning of the unknown environmental geometry is based on vector field interpolation theory (Luo et al., 1995; Saitoh et al., 2004). In our approach, we first parameterize the desired velocity vector field by a weighted combination of a set of basis vector fields according to the environmental model. Then, in order to overcome the influence of the model uncertainties of the environment, we use force feedback to adjust the weight parameters of the desired velocity field so that the robot approaches the real environment.
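Finally, a highly simplified sketch of the weight-adaptation idea only; it is not an implementation of PVFC itself (there is no virtual flywheel and no passivity machinery). A desired velocity field is written as a weighted sum of a tangential and a radial basis field around a circular constraint whose radius the controller does not know, and the measured contact force is fed back to adjust the radial weight until the commanded field is tangent to the real constraint. The constraint shape, gains, and time step are all invented for illustration.

```python
import numpy as np

# The real constraint (think of a crank handle) is a circle of radius R_true; the
# robot's initial model is wrong, so its desired velocity field is not tangent to it.
R_true = 0.30
k_c = 800.0        # constraint stiffness generating the measured contact force
gamma = 0.002      # adaptation gain for the field weight
dt = 1e-3

def basis_fields(x):
    r = np.linalg.norm(x)
    e_r = x / r                          # radial basis field direction
    e_t = np.array([-e_r[1], e_r[0]])    # tangential basis field direction
    return e_r, e_t

phi, w_r = 0.0, 0.5                      # crank angle; weight of the radial basis field
for _ in range(4000):
    x = R_true * np.array([np.cos(phi), np.sin(phi)])
    e_r, e_t = basis_fields(x)
    v_des = 0.4 * e_t + w_r * e_r        # desired field V(x) as a weighted sum of basis fields
    # The rigid constraint blocks the radial component of the commanded motion; pressing
    # against it shows up as a measurable radial contact force on the end-effector.
    f = -k_c * (v_des @ e_r) * dt * e_r
    # Force feedback adjusts the radial weight so the field approaches the real
    # environment, i.e., becomes tangent to the true circle.
    w_r += gamma * (f @ e_r)
    phi += (v_des @ e_t) / R_true * dt   # the tangential component moves along the crank

print(round(w_r, 4))                     # -> near 0: the adapted field follows the constraint
```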

For a dynamic environment, in order to maintain the passivity of impedance control with a time-varying impedance center, a sufficient condition for adjusting the impedance center is derived as follows (Kishi et al., 2003).

Theorem: Define V(x) as the robot's desired velocity vector of the impedance center in the task space without considering the environment uncertainties, and ẋ_0 = aV(x) as the adjustable robot impedance center, where a is an adjustment parameter. For a given constant r > 0, if we adjust
