
Workshop on European Scientific and Industrial Collaboration on Promoting Advanced … (docx)


DOCUMENT INFORMATION

Basic information

Title: Vision-guided intelligent robots for automating manufacturing, materials handling and services
Authors: Rainer Bischoff, Volker Graefe
Institution: Bundeswehr University Munich
Field: Robotics
Document type: Conference paper
Year of publication: 1998
City: Girona
Format: docx
Number of pages: 5
File size: 780.62 KB



Vision-guided Intelligent Robots for Automating Manufacturing, Materials Handling and Services

Rainer Bischoff and Volker Graefe

Bundeswehr University Munich, Institute of Measurement Science, Werner-Heisenberg-Weg 39, 85577 Neubiberg, Germany
Tel.: +49-89-6004-3589, Fax: +49-89-6004-3074, E-Mail: {Rainer.Bischoff | Graefe}@unibw-muenchen.de

Abstract

"Seeing" machines and "intelligent" robots have been the focus of research conducted by the Institute of Measurement Science since 1977 Our goal is to gain a basic understanding of vision, autonomy and intelligence of technical systems, and to construct seeing intelligent robots These should be able to operate robustly and at an acceptable speed in the real world, to survive in a dynamically changing natural environment, and to perform autonomously a wide variety of tasks

In this paper we report on three autonomous robots that have been developed during recent research projects for automating manufacturing, materials handling, and services In the order of commissioning we have set up an autonomous vehicle, a stationary manipulator and a humanoid robot with omnidirectional motion capability, a sensor head and two arms We use standard video cameras on all robots as the main sensing modality We focused our research on navigation in known and unknown environments, machine learning, and manipulator control without any knowledge of quantitative models

1 Introduction

As a result of the increasing demands of automating manufacturing processes and services with greater flexibility, intelligent robots with the ability to adapt to new environments and various circumstances are key factors for success. To develop such robots, manifold competencies are required in disciplines such as mechanical engineering, electrical engineering, computer science and mathematics.

Our expertise is to build modular robotic systems with various kinematic chains that use vision sensors to perceive their environment and to perform user-defined tasks efficiently. To put the robots into operation, no or only minor modification of the infrastructure is necessary, because our approach uses vision as the main sensing modality and does not depend on any a priori knowledge of quantitative models. We have developed powerful image processing hardware, as well as software and control algorithms, to enable robots to operate autonomously (section 2). Our AGV ATHENE II is able to navigate in partly structured environments, e.g., in factories and office buildings, making it suitable for all kinds of transportation tasks that are required to automate manufacturing and services (section 3). Our stationary articulated manipulator is equipped with an uncalibrated stereo-vision system and is able to handle diverse objects without calculating its inverse kinematics (section 4). In our current research project we have developed a prototype of a future service robot, a mobile manipulator with 18 degrees of freedom. Because of its modularity in both hardware and software it can be adapted to customers’ requirements, e.g., to meet their needs for tasks like transporting and handling of goods, surveillance, inspection, or maintenance (section 5).

Applied Research and Practical Relevance

From the very beginning our research work has been essentially guided by the rule that every result had to be proved and demonstrated in practical experiments and in the real world. While this approach is rather demanding, it has, compared to mere computer simulations, the great advantage of yielding far more reliable and valuable results. The fact that most of our research has been conducted in cooperation with industrial partners has greatly helped us in directing our work towards results that lend themselves to practical applications in the real world.

2 Digital Image Processing and Real-Time Vision Systems

All our robots use digital image processing as a powerful means of perception. Standard video cameras provide images to a multi-processor system that evaluates them in real time. Such a multi-processor system may consist of simple microprocessors, digital signal processors, transputers, or a combination of those. Communication bottlenecks are avoided by using high-bandwidth video busses and high-performance data links between the processors. Together with controlled correlation, an exceptionally fast and robust feature extraction algorithm developed by our institute, fast and reliable image processing is possible.
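
The paper does not describe the controlled correlation algorithm itself. Purely as an illustration of correlation-based feature extraction in general, the following minimal Python sketch (all names hypothetical, not the institute's algorithm) locates a small feature template inside a search window by maximizing the normalized cross-correlation score, the kind of low-level operation such a vision system performs many times per frame:

import numpy as np

def normalized_cross_correlation(window, template):
    """Return the NCC score map of `template` over `window` (2-D grayscale arrays)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    scores = np.full((window.shape[0] - th + 1, window.shape[1] - tw + 1), -1.0)
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = window[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum()) * t_norm
            if denom > 1e-9:
                scores[y, x] = (p * t).sum() / denom
    return scores

def track_feature(window, template):
    """Locate the best match of a feature template inside a small search window."""
    scores = normalized_cross_correlation(window, template)
    y, x = np.unravel_index(np.argmax(scores), scores.shape)
    return (int(y), int(x)), float(scores[y, x])

# Example: find an 8x8 feature inside a 32x32 search window of the current image.
rng = np.random.default_rng(0)
image = rng.random((32, 32))
template = image[10:18, 12:20].copy()
pos, score = track_feature(image, template)
print(pos, round(score, 3))   # expected: (10, 12) with a score close to 1.0

Restricting the search to a small window around the predicted feature position is what keeps correlation-based trackers fast enough for real-time use on modest hardware.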

Autonomous Road Vehicles

A first major demonstration experiment that in 1987 caught much international attention was a road vehicle equipped with our real-time vision system BVV 2 that allowed it to run on the German Autobahn. Although at that time no other traffic was allowed on the road, the achieved speed of 96 km/h constituted a world record for autonomous road vehicles. Notably, in contrast to all other autonomous vehicles known at that time, the driving speed was limited only by the performance of the vehicle’s engine and not by the vision system. Key to this success was our real-time vision system BVV 2, which anticipated in its architecture the concept of object-oriented vision that was only formulated explicitly later. Its two successors, BVV 3 and 4, with their 100 times higher performance, enabled us to fully implement object-oriented vision algorithms [Graefe 1993]. Thus, a simultaneous recognition of various objects in complex dynamic scenes has been made possible. This constituted the basis for an accurate perception of normal traffic situations (Figure 1).

Figure 1: Typical traffic situation with various objects in a complex dynamic scene, recognized in real time by the vision system BVV 3.

Obstacle Avoidance

Obstacle avoidance is a major concern for all mobile robots. We have developed an obstacle detection and classification system suitable for high-speed driving, and a motion stereo algorithm that allows an accurate distance measurement from a moving vehicle to an obstacle or other stationary target without knowing the size of the target object or any parameter of the camera used.
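
The paper does not state the formulas behind this motion stereo algorithm; the sketch below only illustrates one standard form of the underlying idea, under the simplifying assumptions of a pinhole camera translating straight towards the scene and a stationary target. If the same feature appears at offsets y1 and y2 from the focus of expansion in two images taken a known distance apart (e.g., from odometry), their ratio already determines the range; neither the focal length nor the size of the object is needed. All names are hypothetical:

def range_from_motion_stereo(y1: float, y2: float, traveled: float) -> float:
    """Distance to a stationary target at the time of the first image.

    y1, y2   : image offsets of the same feature from the focus of expansion
               (pixels) in two consecutive images; |y2| > |y1| when the camera
               approaches the target.
    traveled : distance the camera moved between the two images (metres).
    """
    if abs(y2 - y1) < 1e-9:
        raise ValueError("no measurable image motion; target too far or vehicle stationary")
    # Pinhole model with forward translation: y_i = f * Y / Z_i and Z2 = Z1 - traveled.
    # Eliminating the unknown product f * Y gives  Z1 = traveled * y2 / (y2 - y1).
    return traveled * y2 / (y2 - y1)

# Example: a feature moves from 20 px to 25 px off the focus of expansion
# while the vehicle advances 2 m; the target was 10 m away in the first image.
print(range_from_motion_stereo(20.0, 25.0, 2.0))   # -> 10.0

The same relation also shows why distant targets, which produce hardly any image motion, need a longer travelled baseline before the range estimate becomes accurate.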

3 Mobile Robots ATHENE I and II

Navigation concepts for factory buildings and office environments have been investigated with our vision-guided mobile robots ATHENE I and II (Figure 2). These robots are able to perform various transportation tasks in extensive environments. We developed the concept of object-oriented and behavior-based navigation. Its main characteristic is that the selection of the behaviors to be executed in each moment is based on a continuous recognition and evaluation of the robot’s dynamically changing situation. This situation essentially depends on the perceived states of relevant objects, the robot’s repertoire of available behaviors and its actual goals to be accomplished. The navigation system relies on topological maps that the robot learns during exploration runs. An operator informs the robot of the names of relevant mission locations, e.g., “copy machine” or “laboratory”. Other users may then use those common location names in communicating with the robot [Bischoff et al. 1996].

Figure 2: ATHENE II, an intelligent mobile robot, mainly used for studying indoor navigation and machine learning.

Executing a complex navigation task

Figure 3 shows, as an example, the mission description that the robot was given in an experiment, and the resulting course followed by the robot. To make the task more complex for demonstration purposes, the robot was instructed to pass a rather large number of intermediate locations on its way to its final destination, the e-lab. The mission description is simply a list of all the locations that should be passed by the robot, and it ends with the final destination.

At the start of the experiment the robot knew that it was somewhere between the e-lab and the kitchen, facing the kitchen. It had a map of the environment that it had acquired in previous experiments, and that did not contain any gross errors in its metric attributes. (In other experiments the robot completed similar missions with maps into which errors of several meters for the lengths of some corridors had been introduced.)

Figure 3: The course traveled by ATHENE II according to the mission description shown on the right (name list: mechanics, exit 1, exit 2, mani-lab, rob-lab, e-lab, mani-lab, stairwell, exit 2, workshop, e-lab).
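
Neither the representation of the learned topological maps nor the behavior selection scheme is given here beyond what is quoted above. Purely as an illustration of how a mission description like the name list of Figure 3 could be expanded into a route over a topological map, the following Python sketch plans over an invented place graph; the connectivity and the starting place are hypothetical and are not the actual map of the building:

from collections import deque

# A learned topological map: named places and the corridors connecting them
# (connectivity invented for illustration only).
TOPOLOGICAL_MAP = {
    "kitchen":   ["e-lab", "stairwell"],
    "e-lab":     ["kitchen", "rob-lab", "workshop"],
    "rob-lab":   ["e-lab", "mani-lab"],
    "mani-lab":  ["rob-lab", "exit 2", "mechanics"],
    "mechanics": ["mani-lab", "exit 1"],
    "exit 1":    ["mechanics", "exit 2"],
    "exit 2":    ["exit 1", "mani-lab", "stairwell", "workshop"],
    "stairwell": ["kitchen", "exit 2"],
    "workshop":  ["e-lab", "exit 2"],
}

def shortest_route(graph, start, goal):
    """Breadth-first search over the topological map (fewest corridor segments)."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        place = frontier.popleft()
        if place == goal:
            route = [place]
            while came_from[place] is not None:
                place = came_from[place]
                route.append(place)
            return route[::-1]
        for nxt in graph[place]:
            if nxt not in came_from:
                came_from[nxt] = place
                frontier.append(nxt)
    raise ValueError(f"no route from {start} to {goal}")

def expand_mission(graph, start, mission):
    """Turn a mission description (list of places, the last one being the
    destination) into the full sequence of places the robot will pass."""
    route = [start]
    for waypoint in mission:
        route += shortest_route(graph, route[-1], waypoint)[1:]
    return route

# The mission description of Figure 3 (the name list given to the robot),
# starting, for illustration, at the kitchen:
mission = ["mechanics", "exit 1", "exit 2", "mani-lab", "rob-lab", "e-lab",
           "mani-lab", "stairwell", "exit 2", "workshop", "e-lab"]
print(expand_mission(TOPOLOGICAL_MAP, "kitchen", mission))

Each corridor segment of the resulting route would then be executed by behaviors selected at run time from the robot's repertoire according to the recognized situation, as described above.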

4 Calibration-Free Manipulator

We have realized a calibration-free manipulator robot that consists of an articulated arm and a stereo vision system (Figure 4). For this robot, we have developed a manipulation method that does not rely on any prior calibration of any parameters of the system, in sharp contrast to conventional methods. Our method does not require any knowledge of the parameters of the manipulator (e.g., the lengths of its links or the relationship between commanded control words and actual movements of the arm) or of the cameras (e.g., focal length, distortion characteristics, position relative to the manipulator). Even severe disturbances that would make other robots fail, such as arbitrary changes of the cameras’ orientations, are tolerated while the robot is operating. Key to the system’s extraordinary robustness are the renunciation of model knowledge and a direct transition from image data to motor control words. Because no calibration is needed, such a robot is well suited for environments like homes or offices that require a high degree of robustness in dealing with unexpected situations and where maintenance personnel is not readily available [Graefe, Ta 1995]. Currently we are studying methods of knowledge representation suitable for machine learning. The goal is a robot that accumulates experience in the course of its normal operations and thus continuously improves its skills (learning by doing). Moreover, whenever changing conditions invalidate past experience, the robot should automatically modify what it has learned.

Figure 4: Calibration-free manipulator with five degrees of freedom and a stereo vision system.
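
The control law of [Graefe, Ta 1995] is not reproduced in this overview. One common way to realize the general idea of mapping image data directly to motor commands without calibrated models is uncalibrated visual servoing with an image Jacobian estimated online, for instance through a Broyden-style secant update. The toy Python sketch below illustrates that idea only; it is not the authors' method, and all matrices and names are invented:

import numpy as np

def servo_step(J, image_error, gain=0.3):
    """One uncalibrated visual-servoing step: compute a joint increment from the
    observed image error using the current estimate J of the image Jacobian."""
    return -gain * np.linalg.pinv(J) @ image_error

def broyden_update(J, dq, dy):
    """Refine the Jacobian estimate from the image change dy actually caused by dq."""
    denom = float(dq @ dq)
    if denom > 1e-12:
        J = J + np.outer(dy - J @ dq, dq) / denom
    return J

# Toy simulation: the true (unknown) mapping from 2 joints to the 2-D image
# position of the gripper; the controller never sees this matrix directly.
true_J = np.array([[2.0, 0.3],
                   [-0.4, 1.5]])
q = np.array([1.0, -0.5])          # joint configuration
features = true_J @ q              # image position of the gripper (simulated cameras)
target = np.zeros(2)               # image position of the object to be reached
J = np.eye(2)                      # deliberately crude initial Jacobian estimate

for _ in range(60):
    dq = servo_step(J, features - target)
    new_features = true_J @ (q + dq)             # what the cameras observe next
    J = broyden_update(J, dq, new_features - features)
    q, features = q + dq, new_features

print(np.linalg.norm(features - target))         # image error, now close to zero

In this toy loop the controller only observes how the image features move in response to its own joint increments, which is the kind of information a system without calibrated models can gather during normal operation.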

5 Service Robot HERMES

The humanoid service robot HERMES, with its two arms and two “eyes”, resembles a human in size and shape (Figure 5). It already possesses many characteristics that are needed by future service robots. HERMES’ two arms are attached to a bendable body. This manipulation system enables the robot to open drawers and doors, and to pick up objects both from the ground and from tables. HERMES perceives its environment with two video cameras that are mounted on a moveable sensor head. The cameras’ images are processed by a multi-processor system in real time. Visual feedback enables HERMES to carry out various transportation, manipulation and supervision tasks. A user-friendly and situation-sensitive human interface allows even inexperienced users to communicate with the robot effectively and in a natural way. A specially designed drive system with two powered and steered wheels guarantees free manoeuverability in all directions [Bischoff 1997].

Figure 5: HERMES, a humanoid service robot with two arms, an omnidirectionally mobile base and a stereo vision system.

Central building blocks of the robot are compact drive modules that incorporate, in double cubes, powerful motor-gear combinations, the necessary power electronics, various sensors (angle encoder, current converter, temperature sensor), a microcontroller for motion control and state supervision, and an intelligent bus interface (CAN). With these modules and various mechanical links and adapters, many different kinematic structures can be built. The electrical links for power and communication lines are realized by uniform cables and connectors along the kinematic chain of the robot structure.
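
The drive kinematics of the omnidirectional base are not given in the paper. As a sketch of how a commanded chassis motion could be distributed to two independently powered and steered wheels, the following Python fragment assumes, purely hypothetically, that the wheels are mounted fore and aft of the body centre; each wheel is then steered along, and driven at the speed of, the velocity of its contact point under rigid-body kinematics:

import math

# Hypothetical mounting points of the two steered and driven wheels in the
# body frame (metres); the real geometry of HERMES is not given in the paper.
WHEELS = {"front": (0.25, 0.0), "rear": (-0.25, 0.0)}

def wheel_commands(vx, vy, omega):
    """Steering angle (rad) and rolling speed (m/s) for each wheel so that the
    base moves with velocity (vx, vy) and yaw rate omega; the contact point at
    (x, y) must move with v + omega x r."""
    commands = {}
    for name, (x, y) in WHEELS.items():
        wx = vx - omega * y          # contact-point velocity, body frame
        wy = vy + omega * x
        commands[name] = (math.atan2(wy, wx), math.hypot(wx, wy))
    return commands

# Example: translate sideways at 0.3 m/s while turning at 0.2 rad/s.
for wheel, (angle, speed) in wheel_commands(0.0, 0.3, 0.2).items():
    print(f"{wheel}: steer {math.degrees(angle):6.1f} deg, drive {speed:.2f} m/s")

Because the steering angle and the rolling speed of each wheel are commanded independently, such a base can translate in any direction while simultaneously rotating, which is the free manoeuverability mentioned above.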


6 Conclusions

The ultimate goal of our research work is the development and construction of a robot that has a practical intelligence similar to that of animals. We are convinced that in the future such robots will have a great significance for society by performing many and diverse services for humans. Towards this goal we have developed, and presented here, three of our robots:

• a vision-guided mobile robot that navigates in structured environments based on the recognition of its current situation,
• a completely uncalibrated manipulator that handles various objects by using an uncalibrated stereo vision system, and
• a humanoid service robot that combines the abilities of the two robots mentioned before and can be used for transporting and handling goods at different locations of extensive environments.

Main Research Topics

The following list gives an overview of the principal working areas of the Institute of Measurement Science at the Bundeswehr University Munich:

• architecture and design of real-time vision systems
• recognition, classification and tracking of objects in dynamic scenes
• motion stereo for distance measurement and spatial interpretation of image sequences
• calibration-free robots (i.e., robots not requiring quantitatively correct models)
• object- and behavior-oriented stereo vision as a basis for the control of such robots
• recognition of dynamically changing situations in real time as the basis for behavior selection by robots and for man-machine communication
• system architectures for behavior-based mobile robots
• machine learning, e.g., for object recognition, motion control and knowledge acquisition for navigation

Offer of Cooperation and Services

We offer services and cooperation in our principal working areas, e.g., expert reports, studies, mid-term development cooperations and scientific project backing. We welcome tasks that enable us to put new scientific discoveries into practice. We have extensive knowledge in the areas of machine vision and the development of intelligent robotic control. We possess powerful computer systems, state-of-the-art equipped laboratories, experimental fields and workshops that we could provide for joint research and development purposes. We address our offer above all to technologically ambitious small and medium-sized companies. We are eager to continue contributing to an effective technology transfer from science to industry, as we have done in the past.

References

Bischoff, R.; Graefe, V.; Wershofen, K. P. (1996). Combining Object-Oriented Vision and Behavior-Based Robot Control. Robotics, Vision and Parallel Processing for Industrial Automation. Ipoh, pp. 222-227.

Bischoff, R. (1997). HERMES - A Humanoid Mobile Manipulator for Service Tasks. International Conference on Field and Service Robotics. Canberra, pp. 508-515.

Graefe, V. (1993). Vision for Intelligent Road Vehicles. Proceedings, IEEE Symposium on Intelligent Vehicles. Tokyo, pp. 135-140.

Graefe, V.; Ta, Q. (1995). An Approach to Self-learning Manipulator Control Based on Vision. IMEKO International Symposium on Measurement and Control in Robotics, ISMCR '95. Smolenice, pp. 409-414.
