Sensing Intelligence Motion - How Robots & Humans Move - Vladimir J. Lumelsky Part 13 docx


DOCUMENT INFORMATION

Title: Sensing Intelligence Motion - How Robots & Humans Move
Field: Motion Planning and Human Performance
Type: article
Pages: 30
Size: 498.89 KB


Content



… computer mouse. Every time the cursor approaches a labyrinth wall within some small distance—that is your “radius of vision”—the part of the wall within this radius becomes visible, and so you can decide where to turn to continue the motion. Once you step back from the wall, that piece of the wall disappears from the screen.

Your performance in this new setting will of course deteriorate compared to the case with complete information above. You will likely wander around, hitting dead ends and passing some segments of the path more than once. Because you cannot now see the whole labyrinth, there will be no hope of producing a near-optimal solution; you will struggle just to get somehow to point T. This is demonstrated in two examples of tests with human subjects shown in Figure 7.3. Among the many such samples with human subjects that were obtained in the course of this study (see the following sections), these two are closest to the best and worst performance, respectively. Most subjects fell somewhere in between. While this performance is far from what we saw in the test with complete information, it is nothing to be ashamed of—the test is far from trivial. Those who had a chance to participate in youth wilderness training know how hard one has to work to find a specific spot in the forest, with or without a map. And many of us know the frustration of looking for a specific room in a large unfamiliar building, in spite of its well-structured design.

Human Versus Computer Performance in a Labyrinth. How about comparing the human performance we just observed with the performance of a decent motion planning algorithm? The computer clearly wins. For example, the Bug2 algorithm developed in Section 3.3.2, operating under the same conditions as for the human subjects, in the version with incomplete information produces the elegant solutions shown in Figure 7.4: In case (a) the “robot” uses tactile information, and in case (b) it uses vision, with a limited radius of vision rv, as shown. Notice the remarkable performance of the algorithm in Figure 7.4b: The path produced by algorithm Bug2, using very limited input information—in fact, a fraction of complete information—almost matches the nearly optimal solution in Figure 7.2a that was obtained with complete information.

We can only speculate about the nature of the inferior performance of humans in motion planning with incomplete information. The examples above suggest that humans tend to be inconsistent (one might say, lacking discipline): Some new idea catches the eye of the subject, and he or she proceeds to try it, without thinking much about what this change will mean for the overall outcome.

The good news is that it is quite easy to teach human subjects how to use a good algorithm, and hence acquire consistency and discipline. With a little practice with the Bug2 algorithm, for example, the subjects started producing paths very similar to those shown in Figure 7.4.

This last point—that humans can easily master motion planning algorithms for moving in a labyrinth—is particularly important. As we will see in the next section, the situation changes dramatically when human subjects attempt motion planning for arm manipulators.


Figure 7.3 Two examples of human performance when operating in the labyrinth of Figure 7.1 with incomplete information about the scene. Sample (a) is closer to the best performance, while sample (b) is closer to the worst performance observed in this study.


Figure 7.4 Performance of algorithm Bug2 (Chapter 3) in the labyrinth of Figure 7.1. (a) With tactile sensing and (b) with vision that is limited to radius rv.


We will want to return to this comparison when discussing the corresponding tests, so let us repeat the conclusion from the above discussion:

When operating in a labyrinth, humans have no difficulty learning and using motion planning algorithms with incomplete information.

7.2.2 Moving an Arm Manipulator

Operating with Complete Information. We are now approaching the main point of this discussion. There was nothing surprising about the human performance in a labyrinth; by and large, the examples of maze exploration above agree with our intuition. We expected that humans would be good at moving in a labyrinth when seeing all of it (moving with complete information), not so good when moving in a labyrinth “in the dark” (moving with incomplete information), and quite good at mastering a motion planning algorithm, and this is what happened. We can use these examples as a kind of benchmark for assessing human performance in motion planning.

We now turn to testing human performance in moving a simple two-link revolute–revolute arm, shown in Figure 7.5. As before, the subject is sitting in front of the computer screen and controls the arm motion using the computer mouse. The first link, l1, of the arm rotates about its joint J0, located at the fixed base of the arm. The joint of the second link, J1, is attached to the first link, and the link rotates about point J1, which moves together with link l1. Overall, the arm looks like a human arm, except that the second link, l2, has a piece that extends outside the “elbow” J1. (This kinematics is quite common in industrial and other manipulators.) And, of course, the arm moves only in the plane of the screen.

Figure 7.5 (caption fragment): … target positions in the test; P is the arm endpoint in its current position; O1, O2, O3, and O4 are obstacles.

How does one control the arm motion in this setup? By positioning the cursor on link l1 and holding down the mouse button, the subject will make the link rotate about joint J0 and follow the cursor. At this time link l2 will be “frozen” relative to link l1 and hence move with it. Similarly, positioning the cursor on link l2 and holding down the mouse button will make the second link rotate about joint J1, with link l1 being “frozen” (and hence not moving at all). Each such motion causes the appropriate link endpoint to rotate on a circular arc.

Or—this is another way to control the arm motion—one can position the cursor at the endpoint P of link l2 and drag it to whatever position in the arm workspace one desires, instantaneously or in a smooth motion. The arm endpoint will follow the cursor motion, with both links moving accordingly. During this motion the corresponding positions of both links are computed automatically in real time, using the inverse kinematics equations. (Subjects are not told about this mechanism; they just see that the arm moves as they expect.) This second option allows one to control both links' motion simultaneously. It is as if someone moves your hand on the table—your arm will follow the motion.
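The inverse kinematics computation mentioned above is the standard closed-form solution for a planar two-link arm, and a minimal Python sketch may make the dragging interface more concrete. This is an illustration only, not the test software's actual code; the function name and the unit link lengths in the example are assumptions.

    import math

    def two_link_ik(x, y, l1, l2):
        """Closed-form inverse kinematics of a planar two-link arm.

        Given a desired endpoint (x, y) and link lengths l1, l2, return the two
        joint-angle solutions (theta1, theta2), elbow 'up' and elbow 'down',
        or an empty list if the point is out of reach.
        """
        c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)  # law of cosines
        if not -1.0 <= c2 <= 1.0:
            return []  # target lies outside the arm's annular workspace
        solutions = []
        for sign in (1.0, -1.0):  # the two elbow configurations
            theta2 = sign * math.acos(c2)
            theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                                   l1 + l2 * math.cos(theta2))
            solutions.append((theta1, theta2))
        return solutions

    # Example: drag the endpoint P to (1.0, 0.5) with unit-length links.
    print(two_link_ik(1.0, 0.5, 1.0, 1.0))

A dragging interface of the kind described would presumably call such a routine on every mouse event and keep the solution closest to the current joint angles, so that both links appear to follow the cursor smoothly.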

We will assume that, unlike in the human arm, there are no limits to the motion of each joint in Figure 7.5. That is, each link can in principle rotate clockwise or counterclockwise indefinitely. Of course, after every 2π each link returns to its initial position, so one may or may not want to use this capability. [Looking ahead, sometimes this property comes in handy. When struggling with moving around an obstacle, a subject may produce more than one rotation of a link. Whether or not the same motion could be done without the more-than-2π link rotation, not having to deal with a constraint on joint angle limits makes the test psychologically easier for the subject.]
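Since each joint is allowed to wind past 2π, the software only needs the wrapped angle to draw the arm, while the accumulated angle can still be kept for analysis. A one-line reduction, shown here as a generic illustration rather than the actual test code, is enough:

    import math

    def wrapped(theta):
        """Reduce an accumulated joint angle (possibly several full turns) to [0, 2*pi)."""
        return theta % (2.0 * math.pi)

    # Two extra full turns leave the link in the same drawn configuration.
    print(wrapped(0.3 + 2 * 2.0 * math.pi))  # approximately 0.3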

The difficulty of the test is, of course, that the arm workspace contains obstacles. When attempting to move the arm to a specified target position, the subjects will need to maneuver around those obstacles. In Figure 7.5 there are four obstacles. One can safely guess, for example, that obstacle O1 may interfere with the motion of link l1 and that the other three obstacles may interfere with the motion of link l2.

Similar to the test with a labyrinth, in the arm manipulator test with complete information the subject is given the equivalent of the bird's-eye view: One has a complete view of the arm and the obstacles, as shown in Figure 7.5. Imagine you are that subject. You are asked to move the arm, collision-free, from its starting position S to the target position T. The arm may touch an obstacle, but the system will not let you move the arm “through” an obstacle. Take your time—time is not a consideration in this test.

Three examples of performance by human subjects in controlled experiments are shown in Figure 7.6.[3] Shown are the arm's starting and target positions S and T, along with the trajectory (dotted line) of the arm endpoint on its way from S to T. The examples represent what one might call an “average” performance by human subjects.[4]

The reader will likely be surprised by these samples. Why is human performance so unimpressive? After all, the subjects had complete information about the scene, and the problem was formally of the same (rather low) complexity as in the labyrinth test. The difference between the two sets of tests is indeed dramatic: Under similar conditions the human subjects produced almost optimal paths in the labyrinth (Figure 7.2) but produced rather mediocre results in the test with the arm (Figure 7.6).

Why, in spite of seeing the whole scene with the arm and obstacles (Figure 7.5), did the subjects exhibit such low skills and so little understanding of the task? Is there perhaps something wrong with the test protocol, or with the control means of the human interface—or is it indeed real human skills that are represented here? Would the subjects improve with practice? Given enough time, would they perhaps be able to work out a consistent strategy? Can they learn an existing algorithm if offered this opportunity? Finally, subjects themselves might comment that whereas the arm's work space seemed relatively uncluttered with obstacles, in the test they had a sense that the space was very crowded and “left no room for maneuvering.”

The situation becomes clearer in the arm's configuration space (C-space, Figure 7.7). As explained in Section 5.2.1, the C-space of this revolute–revolute arm is a common torus (see Figure 5.5). Figure 7.7 is obtained by cutting the torus at point T along the axes θ1 and θ2 and flattening it. This produces four points T in the resulting square, all identified, and two pairs of identified C-space boundaries, each pair corresponding to the opposite sides of the C-space square. For reference, four “shortest” paths (M-lines) between points S and T are shown (they also appear in Figure 5.5; see the discussion on this in Section 5.2.1). The dark areas in Figure 7.7 are C-space obstacles that correspond to the four obstacles in Figure 7.5.

Note that the C-space is quite crowded, much more than one would think when looking at Figure 7.5. By mentally following in Figure 7.7 the obstacle outlines across the C-space square boundaries, one will note that all four workspace obstacles actually form a single obstacle in C-space.

[3] The experimental setup used in Figure 7.6c differs slightly from the other two; this played no visible role in the test outcomes.

[4] The term “average” here has no formal meaning: It signifies only that some subjects did better and some did worse. A more formal analysis of human performance in this task will be given in the next section. A few subjects did not finish the test and gave up, citing tiredness or hopelessness (“There is no solution here”, “You cannot move from S to T here” …).



Figure 7.7 C-space of the arm and obstacles shown in Figure 7.5.

This simply means that when touching one obstacle in work space, the arm may also touch some other obstacle, and this is true sequentially, for pairs (O1, O2), (O2, O3), (O3, O4), (O4, O1). No wonder the subjects found the task difficult. In real-world tasks, such interaction happens all the time; and the difficulties only increase with more complex multilink arms and in three-dimensional space.
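How crowded Figure 7.7 really is can be checked numerically: rasterize the configuration space by sampling (θ1, θ2) over [0, 2π)² and mark every configuration in which either link intersects an obstacle. The Python sketch below illustrates that idea under simplifying assumptions that are not the book's setup: the obstacles are circles, the elbow extension of link l2 is ignored, and the scene values are made up.

    import math

    def point_seg_dist(px, py, ax, ay, bx, by):
        """Distance from point (px, py) to the segment from (ax, ay) to (bx, by)."""
        abx, aby = bx - ax, by - ay
        denom = abx * abx + aby * aby
        t = ((px - ax) * abx + (py - ay) * aby) / denom if denom else 0.0
        t = max(0.0, min(1.0, t))
        return math.hypot(px - (ax + t * abx), py - (ay + t * aby))

    def cspace_map(l1, l2, obstacles, n=180):
        """Boolean n-by-n grid over (theta1, theta2) in [0, 2*pi)^2.

        True marks a colliding configuration: one of the two link segments passes
        closer to a circular obstacle's center than that obstacle's radius.
        """
        grid = [[False] * n for _ in range(n)]
        for i in range(n):
            th1 = 2.0 * math.pi * i / n
            elbow = (l1 * math.cos(th1), l1 * math.sin(th1))      # joint J1
            for j in range(n):
                th2 = 2.0 * math.pi * j / n
                tip = (elbow[0] + l2 * math.cos(th1 + th2),       # endpoint P
                       elbow[1] + l2 * math.sin(th1 + th2))
                links = (((0.0, 0.0), elbow), (elbow, tip))
                grid[i][j] = any(
                    point_seg_dist(cx, cy, a[0], a[1], b[0], b[1]) <= r
                    for (cx, cy, r) in obstacles for (a, b) in links)
        return grid

    # Hypothetical scene: unit-length links and two circular obstacles.
    occupied = cspace_map(1.0, 1.0, [(1.2, 0.6, 0.2), (-0.8, 0.9, 0.25)])
    print(sum(map(sum, occupied)), "of", 180 * 180, "sampled configurations collide")

Plotting such a grid as an image gives a picture qualitatively like Figure 7.7 and makes it easy to see how workspace obstacles that look well separated can merge into one large C-space obstacle.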

Operating the Arm with Incomplete Information. Similar to the test with incomplete information in the labyrinth, here the subject would at all times see points S and T, along with the arm in its current position. Obstacles would be hidden. Thus the subject moves the arm “in the dark”: When, during its motion, the arm comes in contact with an obstacle—or, in the second version of the test, some parts of the obstacle come within a given “radius of vision” rv from some of the arm's points—those obstacle parts become temporarily visible. Once the contact is lost—or, in the second version, once the arm-to-obstacle distance increases beyond rv—the obstacle is again invisible.

The puzzling observation in such tests is that, unlike in the tests with the labyrinth, the subjects' performance in moving the arm “in the dark” is on average indistinguishable from the test with complete information. In fact, some subjects performed better when operating with complete information, while others performed better when operating “in the dark.” One subject did quite well “in the dark,” then was not even able to finish the task when operating with a completely visible scene, and refused to accept that in both cases he had dealt with the same scene: “This one [with complete information] is much harder; I think it has no solution.” It seems that extra information doesn't help. What's going on?

Human Versus Computer Performance with the Arm. As we did above with the labyrinth, we can attempt a comparison between the human and computer performance when moving the arm manipulator, under the same conditions. Since in previous examples human performance was similar in tests with complete and incomplete information, it is not important which to consider: For example, the performance shown in Figure 7.6 is representative enough for our informal comparison. On the algorithm side, however, the input information factor makes a tremendous difference—as it should. The comparison becomes interesting when the computer algorithm operates with incomplete (“sensing”) information.

Shown in Figure 7.8 is the path generated in the same work space of Figure 7.5 by the motion planning algorithm developed in Section 5.2.2. The algorithm operates under the model with incomplete information. To repeat, its sole input information comes from the arm sensing; known at all times are only the arm positions S and T and its current position. The arm's sensing is assumed to allow the arm to sense surrounding objects at every point of its body, within some modest distance rv from that point. In Figure 7.8, radius rv is equal to about half of the link l1 thickness; such sensing is readily achievable today in practice (see Chapter 8).

Similar to Figure 7.6, the resulting path in Figure 7.8 (dotted line) is the path traversed by the arm endpoint when moving from position S to position T.

Recall that the algorithm takes as its base path (called the M-line) one of the four possible “shortest” straight lines in the arm's C-space (see lines M1, M2, M3, M4 in Figure 5.5); distances and path lengths are measured in C-space in radians. In the example in Figure 7.8, the shortest of these four is chosen (it is shown as line M1, a dashed line). In other words, if no obstacles were present, under the algorithm the arm endpoint would have moved along the curve M1; given the obstacles, it went along the dotted line path.
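The four M-line candidates exist because each joint angle can reach its target value by rotating either way around the torus. As an illustration only (not the book's code, and with made-up start and target angles), the sketch below builds the four candidate C-space displacements and picks the shortest, measuring length in radians as described above.

    import math

    TWO_PI = 2.0 * math.pi

    def m_line_candidates(s, t):
        """Four straight-line candidates in C-space from s=(s1, s2) to t=(t1, t2).

        Each candidate is (length_in_radians, (d1, d2)), where d1 and d2 are the
        signed joint displacements; each joint may rotate either clockwise or
        counterclockwise toward its target value.
        """
        candidates = []
        for k1 in (0, -1):
            for k2 in (0, -1):
                d1 = (t[0] - s[0]) % TWO_PI + k1 * TWO_PI
                d2 = (t[1] - s[1]) % TWO_PI + k2 * TWO_PI
                candidates.append((math.hypot(d1, d2), (d1, d2)))
        return candidates

    def shortest_m_line(s, t):
        """Pick the shortest of the four candidates, as is done for M1 in Figure 7.8."""
        return min(m_line_candidates(s, t), key=lambda c: c[0])

    # Hypothetical start and target joint angles, in radians.
    length, (d1, d2) = shortest_m_line((0.3, 5.8), (2.9, 0.4))
    print(f"shortest M-line: {length:.2f} rad, joint moves ({d1:+.2f}, {d2:+.2f})")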

The elegant algorithm-generated path in Figure 7.8 is not only shorter than those generated by human subjects (Figure 7.6). Notice the dramatic difference between the corresponding (human versus computer) arm test and the labyrinth test: While a path produced in the labyrinth by the computer algorithm (Figure 7.4) presents no conceptual difficulty for an average human subject, human subjects find the path in Figure 7.8 incomprehensible. What is the logic behind those sweeping curves? Is this a good way to move the arm from S to T? The best way? Consequently, while human subjects can easily master the algorithm in the labyrinth case, they find it hard—in fact, seemingly impossible—to make use of the algorithm for the arm manipulator.

7.2.3 Conclusions and Plan for Experiment Design

We will now summarize the observations made in the previous section, and will pose a few questions that will help us design a more comprehensive study of human cognitive skills in space reasoning and motion planning:

1. The labyrinth test is a good easy-case benchmark for testing one's general space reasoning abilities, and it should be included in the battery of tests. There are a few reasons for this: (a) If a person finds it difficult to move in the labyrinth—which happens rarely—he or she will be unlikely to handle the arm manipulator test. (b) By starting with an easier task, the labyrinth test prepares a subject for the arm test, making the switch more gradual. (c) A subject's successful operation in the labyrinth test suggests that whatever difficulty the subject may have with the arm test, it likely relates to the subject's cognitive difficulties rather than to the test design or test protocol.

2. When moving the arm, subjects exhibit different tastes for control means: Some subjects, for example, prefer to change both joint angles simultaneously, “pulling” the arm endpoint in the direction they desire, whereas other subjects prefer to move one joint at a time, thus producing circular arcs in the path; see Figure 7.6. Because neither technique seems inherently better or easier than the other, for the subjects' convenience both types of control should be available to them during the test.

3. Since working with a bird's-eye view (complete information) as opposed to “in the dark” (incomplete information) makes a difference—clearly so in the labyrinth test and seemingly less so in the arm manipulator test—this dichotomy should be consistently checked in the comprehensive study.

4. In the arm manipulator test it has been observed that the direction of arm motion may have a consistent effect on the subjects' performance. Obviously, in the labyrinth test this effect appears only when operating with incomplete information (“moving in the dark”). This effect is, however, quite pronounced in either test with the arm manipulator, with complete or with incomplete information. Namely, in the setting of Figure 7.5, the generated path and the time to finish were noted to be consistently longer when moving from position T to S than when moving from S to T. This suggests that it is worthwhile to include the direction of motion as a factor in the overall test battery. (And the test protocol should be set up so that the order of subtests has no effect on the test results.) One possible reason for this peculiar phenomenon is a psychological effect of one's paying more attention to route alternatives that are closer to the direction of the intended route than to those in other directions. Consider the simple “labyrinth” shown in Figure 7.9: The task is to reach one point (S or T) from the other while moving “in the dark.” When walking from S to T, most subjects will be less inclined to explore the dead-end corridor A because it leads in a direction almost opposite to the direction toward T, and they will on average produce shorter paths. On the other hand, when walking from T to S, more subjects will perceive corridor A as a promising direction and will, on average, produce longer paths. Such considerations are harder to pinpoint for the arm test, but they do seem to play a role.[5]

5. The less-than-ideal performance of the subjects in the arm manipulator tests makes one wonder if something else is at work here. Can it be that the human–computer interface offered to the subjects is somewhat “unnatural” for them—and that this fact, rather than their cognitive abilities, is to blame for their poor performance? Some subjects did indeed blame the computer interface for their poor performance.[6] Some subjects believed that their performance would improve dramatically if they had a chance to operate a physical arm rather than a virtual arm on the computer screen (“if I had a real thing to grab and move in physical space, I would do much better…”). This is a serious argument; it suggests that adding a physical test to the overall test battery might provide interesting results.

[5] Of course, no such effect can be expected for the computer algorithm.

[6] An “unscientific” observation made here was that older subjects, such as visiting professors who graciously agreed to participate in the experiment, were more critical of the human–computer interface than younger subjects. The latter were more willing than the former to accept the test results as measuring their real spatial reasoning abilities.


• Performance as a function of gender (consider the proverbial proficiency of men in handling maps).

• Performance as a function of age: For example, are children better than adults in spatial reasoning tasks (as they seem to be in some computer games or with the Rubik's Cube)?

• Performance as a function of educational level and professional orientation: For example, wouldn't we expect students majoring in mechanical engineering to do better in our tests than students majoring in comparative literature?

7. Finally, there is an important question of training and practice. We all know that with proper training people achieve miracles in motion planning; just think of an acrobat on a high trapeze. In the examples above, subjects were given a chance to get used to the task before a formal test was carried out, but no attempt was made to consistently study the effect of practice on human performance. The effect of training is especially serious in the case of arm operation, in view of the growing area of teleoperation tasks (consider the arm operator on the Space Shuttle, or a partially disabled person commanding an arm manipulator to take food from the refrigerator). This suggests that the training factor must be a part of the larger study.

This list covers a good number of issues and consequently calls for a rather ambitious study. In the specific study described below, not all questions on the list have been addressed thoroughly enough, due to the difficulty of arranging a statistically representative group of subjects. Some questions were addressed only cursorily. For example, attempts to enlist a local kindergarten or a primary school in the experiment had limited success, as did an attempt to round up enough subjects over the age of 60.

The very limited number of tests carried out for these insufficiently studied issues provides these observations: (a) Children do not seem to do better than adults in our tests. (b) Subjects aged 60 and over seem to have significantly more difficulty carrying out the tests: in the arm test, in particular, they would give up more often than younger subjects before reaching the solution. (c) The level of one's education and professional orientation seems to play an insignificant role: Secretaries do as well or as poorly as mechanical engineering PhDs or professional pilots, who pride themselves on their spatial reasoning.

Figure 7.10 The physical two-link arm used in the tests of human performance.

The Physical Arm Test Setup. This experimental system has been set up in a special booth, with about 5 ft by 5 ft floor area, enough to accommodate a table with the two-link arm and obstacles, and a standing subject. The inside of the booth is painted black, to help with the “move in the dark” test. For a valid comparison of subjects' performance with the virtual environment test, the physical arm and obstacles (Figure 7.10) are proportionally similar to those in Figure 7.5. (Only two obstacles can be clearly seen in Figure 7.10; obstacle O1 of Figure 7.5 was replaced for technical reasons by two stops; see Figure 7.10.) For the subjects' convenience the arm is positioned on a slightly slanted table. Each arm link is about 2 ft long. A subject moves one or both arm links using the handles shown. During the test the arm positions are sampled by potentiometers mounted on the joint axes, and they are documented in the host computer for further analysis, together with the corresponding timing information.[7]

Special features have been added for testing the scene visibility factor. Opening the booth doors and turning on its light produces the visible scene; closing the door and turning off the light makes it an invisible environment. For the latter test, the side surfaces of the arm links and of the obstacles are equipped with densely spaced contacts and LED elements located along the perimeters of both links (Figure 7.10). There are 117 such LEDs on the inner link (link 1) and 173 LEDs on the outer link (link 2). When a link touches an obstacle, one or more LEDs light up, informing the subject of a collision and giving its exact location. Visually, the effect is similar to how a contact is shown in the virtual arm test.[8]

[7] The physical arm and the booth system, including hardware, electronics, and related software, have been designed by Branimir Stankovic and Steve Seaney at the University of Wisconsin Robotics Laboratory [120].

7.3.1 The Setup

Two batteries of tests, called Experiment One and Experiment Two, have been carried out to address the issues listed in the previous section. Experiment One addresses the effect of three factors on human performance: the interface factor, which focuses on the effect of a virtual versus physical interface; the visibility factor, which relates to the subject's seeing the whole scene versus the subject's “moving in the dark”; and the direction factor, which deals with the effect of the direction of motion in the same scene. Each factor is therefore a dichotomy with two levels. We are especially interested in the effects of interface and visibility, since these affect most directly one's performance in motion planning tasks. The direction of motion is a secondary factor, added to help clarify the effect of the other two factors.

Experiment Two is devoted specifically to the effect of training on one's performance. The effect is studied in the context of the factors described above. One additional factor here, serving an auxiliary role, is the object-to-move factor, which distinguishes between moving a point robot in a labyrinth and moving a two-link arm manipulator among obstacles. The arm test is the primary focus of this study; the labyrinth test is used only as a benchmark, to introduce the human subjects to the tests' objectives.

The complete list of factors, each with two levels (settings), is therefore as follows[9]:

A. Object-to-move factor, with two levels:

1. Moving a point robot in a labyrinth, as in Figure 7.1.

2. Moving a two-link revolute–revolute arm manipulator in a planar workspace with obstacles, as in Figure 7.5.

B. Interface factor, with two levels:

1. In this test, called the virtual test, the subject operates on the computer screen, moving the arm links with the computer mouse; all necessary help is provided by the underlying software. Both the labyrinth test (Figure 7.1) and the arm test (Figure 7.5) are done in this version.

[8] In addition to this arm, a wooden mockup of the arm, of the same dimensions as the test arm, was built and installed outside the booth, to help subjects practice their motor skills in the task.

[9] More details on the experiment design and test conditions can be found in Ref. 121.

2. In this test, called the physical test, the subject works in the booth, moving the physical arm (Figure 7.10). Only the arm tests, and no labyrinth tests, were done in this version.

C. Visibility factor, with two levels:

1. Visible environment: The object (one of those in factor A) and its environment are fully visible.

2. Invisible environment: Obstacles cannot be seen by the subject, except when the robot (in the case of the point robot) or a part of its body (in the case of the arm) is close enough to an obstacle, in which case a small part of the obstacle near the contact point becomes visible for the duration of the contact. The arm is visible at all times.

D. Direction factor (for the arm manipulator test only), with two levels:

1. “Left-to-right” motion (denoted below LtoR), as in Figure 7.8.

2. “Right-to-left” motion (denoted RtoL); in Figure 7.8 this would correspond to moving the arm from position T to position S.

E. Training factor. The goal here is to study the effect of prior learning and practice on human performance. This factor is studied in combination with all prior factors and has two levels:

1. Subjects' performance with no prior training: Here the subjects only have the rules and controls explained to them, and are given the opportunity to try the setup and get comfortable with it before the actual test starts.

2. Subjects' performance measured after substantial prior training and practice.

Therefore, the focus of Experiment One is on factors B, C, and D, and the focus of Experiment Two is on factor E (with the tests based on the same factors B, C, and D). Because each factor is a dichotomy with two levels, all possible combinations of levels for factors B, C, and D produce eight tasks that each subject can be subjected to (a short enumeration sketch follows the list):

Task 1: Virtual, visible, left-to-right

Task 2: Virtual, visible, right-to-left

Task 3: Virtual, invisible, left-to-right

Task 4: Virtual, invisible, right-to-left

Task 5: Physical, visible, left-to-right

Task 6: Physical, visible, right-to-left

Task 7: Physical, invisible, left-to-right

Task 8: Physical, invisible, right-to-left
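Because factors B, C, and D are binary, the eight tasks above are simply the Cartesian product of the three factor levels. The small Python sketch below reproduces the list programmatically; it is an illustration only, with label strings taken from the list above.

    from itertools import product

    INTERFACE = ("Virtual", "Physical")                 # factor B
    VISIBILITY = ("visible", "invisible")               # factor C
    DIRECTION = ("left-to-right", "right-to-left")      # factor D

    # The iteration order matches Tasks 1-8 as listed above.
    for n, (i, v, d) in enumerate(product(INTERFACE, VISIBILITY, DIRECTION), start=1):
        print(f"Task {n}: {i}, {v}, {d}")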

In addition to these tasks, a smaller study was carried out to measure the effect of three auxiliary variables:
