
Who Needs Emotions? The Brain Meets the Robot (Fellous & Arbib), Part 18




explicit, intended communication or by the intended actions they take in the world. Further, emotional signals are communicated across a variety of channels, verbally and nonverbally. These channels vary in capacity, in the specificity of the information effectively communicated, and in the cognitive overhead of using them. A person can smile at a cute baby without much thought but may need more resources to verbally express happiness. Agent teams typically have two channels: communication and action. These differences suggest potential benefits for using emotions in pure agent teams. For instance, there might be an advantage to having agent teams communicate attitudinal or emotional information, as well as an advantage to exposing this information to teammates automatically, through low-cost channels. Consider building agents so that they could not only communicate and act deliberately after an accurate and possibly computationally intensive assessment of the state but also emit some low-cost emotional signal based on an approximate state assessment. For example, a robot could have hardwired circuitry that triggers light-emitting diodes representing emotional cues: fear to indicate a state where the robot is in danger, worry to indicate a low likelihood of success, and helplessness to indicate that it needs help. These emotional cues can be computed and transmitted quickly and could result in the team being able to coordinate itself without having to wait for the accurate state estimation to be performed. If, for example, agents could use these emotional cues to determine the action selection of the other agents in the team, the result could be greater synchronization and, consequently, better teamwork.
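As a concrete illustration of this two-channel idea, the following minimal sketch (in Python; the state fields, thresholds, and cue names are our own illustrative assumptions, not the authors' design) shows how cheap, threshold-based cues could be computed from an approximate state assessment while the slower, accurate estimation runs elsewhere:

```python
# A minimal sketch, assuming an approximate state with three crude
# summary fields; all names and thresholds here are illustrative.
from dataclasses import dataclass

@dataclass
class ApproxState:
    threat_level: float      # crude sensor fusion, 0..1
    success_estimate: float  # rough probability of mission success
    capability: float        # fraction of subtasks the robot can still do

def emotional_cues(s: ApproxState) -> set[str]:
    """Cheap cues a robot could expose to teammates (e.g., via LEDs)
    without waiting for full state estimation."""
    cues = set()
    if s.threat_level > 0.7:
        cues.add("fear")       # robot believes it is in danger
    if s.success_estimate < 0.3:
        cues.add("worry")      # low likelihood of success
    if s.capability < 0.2:
        cues.add("helpless")   # robot needs help from teammates
    return cues

# Teammates can react to the cue immediately, while the expensive,
# accurate assessment proceeds in the background.
print(emotional_cues(ApproxState(threat_level=0.9,
                                 success_estimate=0.5,
                                 capability=0.8)))   # {'fear'}
```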

EXPERIMENTAL ILLUSTRATION

In this section, as an illustration of the effect of emotions on multiagent teamwork, we demonstrate how the allocation of roles in a team is affected by emotions like fear. Our approach is to introduce an RMTDP (Nair, Tambe, & Marsella, 2003) for the team of agents and then to model the agents such that their emotional states are included.

We now demonstrate how emotions can affect decision making in a team of helicopters. To this end, recall the RMTDP analysis of TOPs mentioned above. The emotional state of the agent could skew how the agent sees the world; this could result in the agent applying different transition, observation, or reward functions. In this discussion, we will focus on how fear may affect the reward function used in the RMTDP. For instance, in a fearful state, agents may consider the risk of failure to be much higher than in a nonfearful state. In the helicopter domain, such agents might penalize heavily those states where a helicopter crashes. We now demonstrate how such a change in the emotional state of the agents would affect the best role allocation.
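A minimal sketch of this reward-skewing idea, with an assumed state encoding and invented penalty values rather than the authors' actual parameters:

```python
# Sketch: fear skews the RMTDP reward function by scaling the
# penalty attached to crash states. Values below are assumptions.
BASE_CRASH_PENALTY = -10.0
FEAR_MULTIPLIER = 5.0   # assumed: fear scales the crash penalty

def reward(state: dict, scouting_reward: float, fearful: bool) -> float:
    """Reward for one agent; `state` records how many helicopters crashed."""
    crashes = state.get("crashed_helicopters", 0)
    penalty = BASE_CRASH_PENALTY * (FEAR_MULTIPLIER if fearful else 1.0)
    return scouting_reward + crashes * penalty

# The same outcome looks much worse to a fearful agent:
outcome = {"crashed_helicopters": 1}
print(reward(outcome, scouting_reward=8.0, fearful=False))  # -2.0
print(reward(outcome, scouting_reward=8.0, fearful=True))   # -42.0
```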

We consider a team of six helicopters and vary the number of agents that fear losing a helicopter to enemy fire. These agents place a heavy penalty on those states where one or more helicopters crashed. Figure 11.4a,b shows the number of scouts allocated to each route (X-axis) as we vary the number of fearful agents in the team (Y-axis) from none to all six, for two different penalties for helicopter crashes. In Figure 11.4a, when all the agents were fearless, the number of scouts sent out was three, all on route 2; however, when fearful agents were introduced, the number of scouts sent out changed to four, also on route 2, because the team was now prepared to lose out on the chance of a higher reward if it could ensure that each scout sent out would be safer. In Figure 11.4b, we reduced the penalty the agents ascribed to a helicopter crash. When fearful agents were introduced, the number of scouts remained unchanged, but the scouts now used route 1, a safer albeit longer route, instead of route 2, which was more dangerous but allowed the mission to be completed more quickly. Thus, with the introduction of fear, we found that the team's decision-making behavior changed such that the members either deployed more scouts or assigned the scouts to a safer route.
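The kind of search behind these results can be sketched as a toy reconstruction: enumerate candidate role allocations (how many scouts, which route) and keep the one with the highest expected team reward, where fearful agents raise the weight placed on crashes. All route statistics, rewards, and penalties below are invented and will not reproduce Figure 11.4, but they show the same qualitative shift toward safer allocations as fear spreads through the team:

```python
# Toy allocation search under a fear-weighted team reward.
# All numeric parameters are illustrative assumptions.
from itertools import product

ROUTES = {1: {"crash_prob": 0.05, "time_cost": 10.0},  # safer, longer
          2: {"crash_prob": 0.20, "time_cost": 1.0}}   # riskier, faster
MISSION_REWARD = 40.0
PENALTY = {"fearless": 10.0, "fearful": 60.0}  # weight per expected crash
TEAM_SIZE = 6

def expected_reward(n_scouts: int, route: int, n_fearful: int) -> float:
    stats = ROUTES[route]
    # Scouting value grows with diminishing returns in the number of scouts.
    coverage = MISSION_REWARD * (1 - 0.5 ** n_scouts)
    # The team's crash-penalty weight mixes the fearless and fearful
    # agents' reward functions in proportion to team composition.
    w = (PENALTY["fearless"] * (TEAM_SIZE - n_fearful)
         + PENALTY["fearful"] * n_fearful) / TEAM_SIZE
    return coverage - stats["time_cost"] - n_scouts * stats["crash_prob"] * w

def best_allocation(n_fearful: int):
    candidates = product(range(1, 5), ROUTES)  # 1-4 scouts, each route
    return max(candidates, key=lambda c: expected_reward(*c, n_fearful))

for k in range(TEAM_SIZE + 1):
    n, route = best_allocation(k)
    print(f"{k} fearful agents -> {n} scout(s) on route {route}")
```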

Figure 11.4 Role allocations in fearful teams with different reward functions. (a) Increasing the number of fearful agents results in more scouts being sent together, to increase the safety of the scouting team. (b) Increasing the number of fearful agents results in moving scouts from a shorter but more risky route to a longer but safer route. [Figure: plots of the number of scouts on route 1 and route 2 against the number of fearful agents, one panel per reward function.]


Although the emotion "fear" was modeled simply as a penalty for states where a helicopter crashes, the purpose of the experiment was to show that emotional response affects what the team perceives to be its best allocation. In order to evaluate teams where emotions are represented more realistically, we would need the following:

• A more realistic model of how an agent's emotional state would change based on new percepts. This model of how the emotional state transitions can be incorporated as part of the transition function in the RMTDP model in order to evaluate the team's performance in the presence of emotion.

• A more realistic model of how humans (whom the agents are simulating) would respond based on their emotional state. This would form part of the TOP, where the individual agent's action selection is specified.

Both the model of how the emotional state changes and the model of human behavior in the presence of emotion should ideally be informed by human behavior in such task domains.
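A minimal sketch of the first requirement above: folding an emotion-transition model into the RMTDP transition function by making the emotional state part of each agent's state. The percepts, probabilities, and appraisal rule here are placeholders, not a model from the chapter:

```python
# Emotion-augmented transition: world dynamics and emotional dynamics
# evolve together, so policy evaluation over this model automatically
# accounts for emotion-dependent behavior. All details are assumed.
import random

def world_dynamics(world_state: dict, action: str) -> dict:
    """Stub domain model: just record the last action taken."""
    return {**world_state, "last_action": action}

def emotion_transition(emotion: str, percept: str) -> str:
    """Placeholder appraisal rule: percepts move an agent between
    'calm' and 'fearful' with assumed probabilities."""
    if percept == "teammate_crashed":
        return "fearful" if random.random() < 0.8 else emotion
    if percept == "area_clear" and emotion == "fearful":
        return "calm" if random.random() < 0.3 else emotion
    return emotion

def transition(world_state: dict, emotion: str, action: str, percept: str):
    """One step of the augmented RMTDP state (world, emotion)."""
    return (world_dynamics(world_state, action),
            emotion_transition(emotion, percept))

print(transition({}, "calm", "scout_route_2", "teammate_crashed"))
```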

CONCLUSION

This chapter represents a first step in introducing emotions into multiagent teamwork. We examined the role of emotions in three different kinds of team: first, teams of simulated humans, where introducing emotions results in more believable agent behavior and consequently better simulations; second, virtual organizations, where agents could simulate emotions to be more believable and engaging to the human and could anticipate the human's needs by modeling the human; and third, pure agent teams, where the introduction of emotions could help bring in the same advantages that emotions bring to human teams.

Teams of simulated agents and mixed human–agent teams can benefit greatly from computational models of emotion. In particular, to evaluate and improve such teams, we would need the following:

• A model of how an agent's emotional state would change based on new percepts
• A model of how humans would respond based on their emotional state

Acknowledgment: This research was supported by grant 0208580 from the National Science Foundation.


References

André, E., Rist, T., Mulken, S. V., & Klesen, M. (2000). The automated design of believable dialogues for animated presentation teams. In J. Cassell, J. Sullivan, S. Prevost, & E. Churchill (Eds.), Embodied conversational agents (pp. 220–255). Cambridge, MA: MIT Press.

Bernstein, D. S., Zilberstein, S., & Immerman, N. (2000). The complexity of decentralized control of MDPs. In C. Boutilier & M. Goldszmidt (Eds.), Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence (pp. 32–37), Stanford University. Stanford, CA: Morgan Kaufmann.

Boutilier, C. (1996). Planning, learning and coordination in multiagent decision processes. In Y. Shoham (Ed.), Proceedings of the Sixth Conference on Theoretical Aspects of Rationality and Knowledge (pp. 195–210). De Zeeuwse Stromen, The Netherlands: Morgan Kaufmann.

Campos, J., & Sternberg, C. (1981). Perception, appraisal and emotion: The onset of social referencing. In M. Lamb & L. Sherrod (Eds.), Infant social cognition (pp. 273–314). Hillsdale, NJ: Erlbaum.

Cavazza, M., Charles, F., & Mead, S. J. (2002). Interacting with virtual characters in interactive storytelling. In Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems (pp. 15–19). Bologna, Italy: ACM.

Chalupsky, H., Gil, Y., Knoblock, C., Lerman, K., Oh, J., Pynadath, D., Russ, T., & Tambe, M. (2001). Electric elves: Applying agent technology to support human organizations. In H. Hirsh & S. Chien (Eds.), Proceedings of the Thirteenth Innovative Applications of Artificial Intelligence Conference. Seattle, WA: AAAI.

Cohen, P. R., & Levesque, H. J. (1991). Teamwork. Noûs, 25(4), 487–512.

Darwin, C. (1998). The expression of emotions in man and animals (3rd ed.). New York: Oxford University Press. (Original work published 1872)

Ekman, P. (2001). Telling lies: Clues to deceit in the marketplace, politics and marriage. New York: Norton.

Elliott, C. (1992). The Affective Reasoner: A process model of emotions in a multi-agent system. Doctoral dissertation, Northwestern University Institute for the Learning Sciences, Evanston, IL.

El Nasr, M. S., Yen, J., & Ioerger, T. (2000). FLAME: Fuzzy logic adaptive model of emotions. Journal of Autonomous Agents and Multiagent Systems, 3, 219–257.

Frijda, N. (1987). Emotion, cognitive structure, and action tendency. Cognition and Emotion, 1, 115–143.

Goleman, D. (1995). Emotional intelligence. New York: Oxford University Press.

Grosz, B., & Kraus, S. (1996). Collaborative plans for complex group action. Artificial Intelligence, 86, 269–357.

Hatfield, E., Cacioppo, J., & Rapson, R. (1994). Emotional contagion. Cambridge: Cambridge University Press.

Howard, R. A. (1960). Dynamic programming and Markov processes. Cambridge, MA: MIT Press.

Hunsberger, L., & Grosz, B. (2000). A combinatorial auction for collaborative planning. In Proceedings of the Fourth International Conference on Multiagent Systems (ICMAS-2000) (pp. 151–158). Boston: IEEE Computer Society.

Izard, C. (1993). Four systems for emotion activation: Cognitive and noncognitive processes. Psychological Review, 100, 68–90.

Jennings, J. (1990). Teamwork: United in victory. Englewood Cliffs, NJ: Silver Burdett Press.

Jennings, N. (1995). Controlling cooperative problem solving in industrial multi-agent systems using joint intentions. Artificial Intelligence, 75, 195–240.

Katzenbach, J., & Smith, D. K. (1994). The wisdom of teams. New York: Harper Business.

Kitano, H., Asada, M., Kuniyoshi, Y., Noda, I., & Osawa, E. (1997). RoboCup: The robot world cup initiative. In Proceedings of the First International Conference on Autonomous Agents (Agents '97) (pp. 340–347), Marina del Rey, CA. New York: ACM Press.

Kitano, H., Tadokoro, S., & Noda, I. (1999). RoboCup-Rescue: Search and rescue for large scale disasters as a domain for multiagent research. In Proceedings of the IEEE Conference on Systems, Man, and Cybernetics (SMC-99). Tokyo, Japan: IEEE Systems, Man, and Cybernetics Society.

Lazarus, R. (1991). Emotion and adaptation. New York: Oxford University Press.

Lisetti, C. L., & Schiano, D. (2000). Facial expression recognition: Where human–computer interaction, artificial intelligence, and cognitive science intersect. Pragmatics and Cognition, 8, 185–235.

Marsella, S., & Gratch, J. (2002). A step toward irrationality: Using emotion to change belief. In Proceedings of the First International Joint Conference on Autonomous Agents and Multi-agent Systems (AAMAS-02) (pp. 334–341). Bologna, Italy: ACM.

Marsella, S., & Gratch, J. (2003). Modeling coping behavior in virtual humans: Don't worry, be happy. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multi-agent Systems (AAMAS-03) (pp. 313–320). Melbourne: ACM.

Marsella, S., Johnson, W. L., & LaBore, C. (2000). Interactive pedagogical drama. In Proceedings of the Fourth International Conference on Autonomous Agents (pp. 301–308). Barcelona, Spain: ACM.

Moffat, D., & Frijda, N. (1994). Where there's a will there's an agent. In Proceedings of the Workshop on Agent Theories, Architectures and Languages (ATAL-95) (pp. 245–260). Amsterdam: Springer.

Nair, R., Tambe, M., & Marsella, S. (2003). Role allocation and reallocation in multiagent teams: Towards a practical analysis. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multi-agent Systems (AAMAS-03) (pp. 552–559). Melbourne: ACM.

Oatley, K. (1992). Best laid schemes: The psychology of emotions. Cambridge: Cambridge University Press.

Ortony, A., Clore, G., & Collins, A. (1988). The cognitive structure of emotions. Cambridge: Cambridge University Press.

Picard, R. W. (1997). Affective computing. Cambridge, MA: MIT Press.

Pynadath, D. V., & Tambe, M. (2002). The communicative multiagent team decision problem: Analyzing teamwork theories and models. Journal of Artificial Intelligence Research, 16, 389–423.

Reilly, N. (1996). Believable social and emotional agents. Doctoral dissertation, Carnegie Mellon University, Pittsburgh.

Rickel, J., Marsella, S., Gratch, J., Hill, R., Traum, D., & Swartout, W. (2002). Toward a new generation of virtual humans for interactive experiences. IEEE Intelligent Systems, 17(4), 32–38.

Scerri, P., Pynadath, D. V., & Tambe, M. (2002). Towards adjustable autonomy for the real world. Journal of Artificial Intelligence Research, 17, 171–228.

Scherer, K. (1984). On the nature and function of emotion: A component process approach. In K. R. Scherer & P. Ekman (Eds.), Approaches to emotion (pp. 293–317). Hillsdale, NJ: Erlbaum.

Schut, M. C., Wooldridge, M., & Parsons, S. (2001). Reasoning about intentions in uncertain domains. In Proceedings of the European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty, Toulouse, France. Lecture Notes in Computer Science, 2143, 84–85.

Smith, C. A., & Scott, H. S. (1997). A componential approach to the meaning of facial expressions. In J. A. Russell & J. M. Fernández-Dols (Eds.), The psychology of facial expression (pp. 229–254). Cambridge: Cambridge University Press.

Tambe, M. (1997). Towards flexible teamwork. Journal of Artificial Intelligence Research, 7, 83–124.

Tidhar, G. (1993). Team-oriented programming: Social structures. Technical Report 47. Melbourne, Australia: Australian A.I. Institute.

Velásquez, J. (1998). When robots weep: Emotional memories and decision-making. In Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98) (pp. 70–75), Madison, WI. Cambridge, MA: MIT Press.

Wooldridge, M. (2000). Intelligent agents. In G. Weiss (Ed.), Multiagent systems: A modern approach to distributed AI (pp. 27–78). Cambridge, MA: MIT Press.


PART IV

CONCLUSIONS


12

Beware the Passionate Robot

Michael A. Arbib

The warning, "Beware the Passionate Robot," comes from the observation that human emotions sometimes have unfortunate effects, raising the concern that robot emotions might not always be optimal. However, the bulk of the chapter is concerned with biology: analyzing brain mechanisms for vision and language to ground an evolutionary account relating motivational systems to emotions and the cortical systems which elaborate them. Finally, I address the issue of whether and how to characterize emotions in such a way that one might say that a robot has emotions even if they are not empathically linked to human emotions.

A CAUTIONARY TALE

On Tuesday, I had an appointment with N at 3 P.M., but when I phoned his secretary at 2:45 to check the place of the meeting, I learned that she had forgotten to confirm the meeting with N. I was not particularly upset; we rescheduled the meeting for 4 P.M. the next day, and I proceeded to make contented use of the unexpected free time to catch up on my correspondence.

On Wednesday, I decided in midafternoon to put together a chart to discuss with N at our meeting; but the printer gave me some problems, and it was already after 4 when I left my office for the meeting, feeling somewhat flustered but glad that I had a useful
