
Mobile Robots Navigation


Edited by Alejandra Barrera

In-Tech

intechweb.org


Published by In-Teh

In-Teh

Olajnica 19/2, 32000 Vukovar, Croatia

Abstracting and non-profit use of the material is permitted with credit to the source. Statements and opinions expressed in the chapters are those of the individual contributors and not necessarily those of the editors or publisher. No responsibility is accepted for the accuracy of information contained in the published articles. The publisher assumes no responsibility or liability for any damage or injury to persons or property arising out of the use of any materials, instructions, methods or ideas contained inside. After this work has been published by In-Teh, authors have the right to republish it, in whole or in part, in any publication of which they are an author or editor, and to make other personal use of the work.

Technical Editor: Goran Bajac

Cover designed by Dino Smrekar

Mobile Robots Navigation,

Edited by Alejandra Barrera

p. cm.

ISBN 978-953-307-076-6


a path towards a goal location being optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes.

The book addresses those activities by integrating results from the research work of several authors all over the world. Research cases are documented in 32 chapters organized within seven categories, described next.

Sensory perception

The accurate perception of sensory information by the robot is critical to support the correct construction of spatial representations to be exploited for navigational purposes. Different types of sensor devices are introduced in this part of the book, together with interpretation methods for the acquired sensory information. Specifically, Chapter 1 presents the design of a sensor combining omni-directional and stereoscopic vision to facilitate the 3D reconstruction of the environment.

Chapter 2 describes the prototype of an optical azimuth angular sensor based on infrared linear polarization to compute the robot’s position while navigating within an indoor arena.

Chapter 3 depicts the design of a stereoscopic vision module for a wheeled robot, where left and right images of the same scene are captured and one of two appearance-based pixel descriptors for ground surface extraction is employed, luminance or hue, depending on the environment’s particular characteristics. This vision module also detects obstacle edges and provides the reconstruction of the scene based on the stereo image analysis.

Chapter 4 presents a sensor setup for a 3D scanner to promote a fast 3D perception of those regions in the robot’s vicinity that are relevant for collision avoidance. The acquired 3D data is projected into the XY-plane in which the robot is moving and used to construct and update egocentric 2.5D maps storing either the coordinates of closest obstacles or environmental structures.

Closing this first part of the book, Chapter 5 depicts a sensor fusion technique where perceived data are optimized and fully used to build navigation rules.


Robot localization

In order to perform successful navigation through any given environment, robots need to localize themselves within the corresponding spatial representation. A proper localization allows the robot to exploit the map to plan a trajectory to navigate towards a goal destination.

In the second part of the book, four chapters address the problem of robot localization from visual perception. In particular, Chapter 6 describes a localization algorithm using information from a monocular camera and relying on separate estimations of rotation and translation to provide an uncertainty feedback for both motion components while the robot navigates in outdoor environments.

Chapter 7 proposes a self-localization method using a single visual image, where the relationship between artificial or natural landmarks and known global reference points is identified by a parallel projection model.

Chapter 8 presents computer simulations of robot heading and position estimation using a single vision sensor system to complement the encoders’ function during robot motion.

By means of experiments with a robotic wheelchair, Chapter 9 demonstrates the localization ability within a topological map built using only an omni-directional camera, where environmental locations are recognized by identifying natural landmarks in the scene.

Path planning

Several chapters focus on discussing path planning algorithms within static and dynamic environments, and two of them deal with multiple robots. In this way, Chapter 10 presents a path planning algorithm based on the use of a neural network to build up a collision penalty function. Results from simulations show proper obstacle avoidance in both static and dynamic arenas.

Chapter 11 proposes a path planning algorithm that avoids obstacles by classifying them according to their size to decide the next robot navigation action. The algorithm starts by considering the shortest path, which is then expanded on either side, spreading out according to obstacle type and proximity.

In the context of indoor semi-structured environments full of corridors connecting offices and laboratories, Chapter 12 compares several approaches developed for door identification based on handle recognition, where doors are defined as goals for the robot during the path planning process. The chapter describes a two-step multi-classifier that combines region detection and feature extraction to increase the computational efficiency of the object recognition procedure.

In the context of planetary exploration vehicles, Chapter 13 describes a path planning and navigation system based on the recognition of occluded areas in a local map. Experimental results show the performance of a vehicle navigating through an irregular rocky terrain by perceiving its environment, determining the next sensing position that maximizes the non-occluded region within each local map, and executing the local path generated.

Chapter 14 presents a robotic architecture based on the integration of diverse computation and communication processes to support the path planning and navigation of service robots. Applied to the flock traffic navigation context, Chapter 15 introduces an algorithm capable of planning paths for multiple agents on partially known and changing environments.


Chapter 16 studies the problem of path planning and navigation of robot formations in static environments, where a formation is defined, composed and repaired according to a proposed mereological method.

Obstacle avoidance

One of the basic capabilities that mobile robots need to exhibit when navigating within any given environment is obstacle detection and avoidance. This part of the book is dedicated to reviewing diverse mechanisms to deal with obstacles, whether static and/or dynamic, implemented on robots with different purposes, from service robots in domestic or office-like environments to car-like vehicles in outdoor arenas. Specifically, Chapter 17 proposes an approach to reactive obstacle avoidance for service robots using the concept of an artificial protection field, which is understood as a dynamic geometrical neighborhood of the robot together with a set of situation assessment rules that determine whether the robot needs to evade an object that was not present in its map when its path was planned.

Chapter 18 describes a hierarchical action-control method for omni-directional mobile robots to achieve a smooth obstacle avoidance ensuring safety in the presence of moving obstacles, including humans.

Chapter 19 presents a contour-following controller that allows a wheeled robot to follow discontinuous wall contours. This controller integrates a standard wall-following controller with two complementary controllers to avoid collisions and find lost contours.

Chapter 20 introduces a fuzzy decision-making method to control the motion of car-like vehicles in dynamic environments, showing their ability to park in spatial configurations with different placements of static obstacles, to run in the presence of dynamic obstacles, and to reach a final target from any given initial position.

Chapter 21 presents a qualitative vision-based method to follow a path while avoiding obstacles.

Analysis of navigational behavior

A correct evaluation of the navigational behavior of a mobile robotic system is required prior to its use in solving real tasks in real-life scenarios. This part of the book stresses the importance of employing qualitative and quantitative measures to analyze the robot’s performance. From diverse perspectives, five chapters provide analysis metrics and/or results from comparative analyses of existing methods to assess different behavioral aspects, from positioning underwater vehicles to transmitting video signals from tele-operated robots.

From an information theory perspective, Chapter 22 studies the robot learning performance in terms of the diversity of information available during training. Authors employ motivational measures and entropy-based environmental measures to analyze the outcome of several robotic navigation experiments.

Chapter 23 focuses on the study of positioning as a navigation problem where GPS reception is limited or non-existent, as in the case of autonomous underwater vehicles that are forced to use dead reckoning in between GPS sightings in order to navigate accurately. The authors provide an analysis of different position estimators, aiming to allow vehicle designers to improve performance and efficiency as well as to reduce vehicle instrumentation costs.


Chapter 24 provides results from analyzing several performance metrics to contrast mobile robot navigation algorithms, including the safety, dimension and smoothness of the planned trajectory.

Chapter 25 analyses the performance of different codecs in transmitting video signals from a teleoperated mobile robot. Results are shown from robot tests in an indoor scenario.

With the aim of supporting educational and research activities, in Chapter 26 the authors provide a virtual environment to develop mobile robot systems, including tools to simulate kinematics, dynamics and control conditions, and to monitor the robot’s performance in real time during navigation tasks.

Inspiration from nature

Research cycles involving studies of living organisms, computational modeling, and robotic experimentation have for many years advanced the understanding of the underlying physiology and psychology of biological systems, while also inspiring new robotic architectures and applications. This part of the book describes two different studies that have taken inspiration from nature to design and implement robotic systems exhibiting navigational capabilities, from visual perception and map building to place recognition and goal-directed behavior.

Firstly, Chapter 27 presents a computational system-level model of rat spatial cognition relating rat learning and memory processes through the interaction of different brain structures, to endow a mobile robot with skills associated with global and relative positioning in space, integration of the traveled path, use of kinesthetic and visual cues during orientation, generation of a topological-metric spatial representation of the unknown environment, management of rewards, learning and unlearning of goal locations, navigation towards the goal from any given departure location, and on-line adaptation of the cognitive map to changes in the physical configuration of the environment. From a biological perspective, this work aims at providing neurobiologists and neuroethologists with a technological platform to test with robots biological experiments whose results can predict rodents’ spatial behavior.

Secondly, Chapter 28 proposes an approach inspired by developmental psychology and some findings in neuroscience that allows a robot to use motor representations for learning a complex task through imitation. This framework relies on development, understood as the process whereby the robot acquires sophisticated capabilities over time as a sequence of simpler learning steps. At the first level, the robot learns about sensory-motor coordination. Then, motor actions are identified based on lower-level, raw signals. Finally, these motor actions are stored in a topological map and retrieved during navigation.

A sociological application is introduced in Chapter 30, consisting of providing powered wheelchairs able to predict and avoid risky situations and to navigate safely through congested areas and confined spaces in the public transportation environment. The authors propose a high-level architecture that facilitates terrain surveillance and intelligence gathering through laser sensors mounted on the wheelchair, in order to anticipate accidents by identifying obstacles and unusual patterns of movement.

Chapter 31 describes the communication, sensory, and artificial intelligence systems implemented on the CAESAR (Contractible Arms Elevating Search And Rescue) robot, which supplies rescuers with critical information about the environment, such as gas detection, before they enter and risk their lives in unstable conditions.

Finally, another monitoring system is depicted in Chapter 32. A mobile robot controlled by this system is able to measure physical variables, such as high temperatures that are potentially hazardous for humans, while navigating within a known environment by following a predefined path.

The successful research cases included in this book demonstrate the progress of devices, systems, models and architectures in supporting the navigational behavior of mobile robots while performing tasks within several contexts. Without doubt, the overview of the state of the art provided by the book may be a good starting point for acquiring knowledge of intelligent mobile robotics.

Alejandra Barrera

Mexico’s Autonomous Technological Institute (ITAM)

Mexico

A 3D Omnidirectional Sensor For Mobile Robot Applications

Rémi Boutteau, Xavier Savatier, Jean-Yves Ertaud and Bélahcène Mazari

Institut de Recherche en Systèmes Electroniques Embarqués (IRSEEM)

France

1 Introduction

In most of the missions a mobile robot has to achieve – intervention in hostile environments, preparation of military intervention, mapping, etc. – two main tasks have to be completed: navigation and 3D environment perception. Therefore, vision-based solutions have been widely used in autonomous robotics because they provide a large amount of information useful for detection, tracking, pattern recognition and scene understanding. Nevertheless, the main limitations of this kind of system are the limited field of view and the loss of depth perception.

A 360-degree field of view offers many advantages for navigation, such as easier motion estimation using specific properties of the optical flow (Mouaddib, 2005) and more robust feature extraction and tracking. The interest in omnidirectional vision has therefore grown significantly over the past few years, and several methods are being explored to obtain a panoramic image: rotating cameras (Benosman & Devars, 1998), multi-camera systems and catadioptric sensors (Baker & Nayar, 1999). Catadioptric sensors, i.e. the combination of a camera and a mirror with a revolution shape, are nevertheless the only systems that can provide a panoramic image instantaneously without moving parts, and are thus well adapted for mobile robot applications.

Depth perception can be retrieved using a set of images taken from at least two different viewpoints, either by moving the camera or by using several cameras at different positions. The use of the camera motion to recover the geometrical structure of the scene and the camera’s positions is known as Structure From Motion (SFM). Excellent results have been obtained in recent years with SFM approaches (Pollefeys et al., 2004; Nister, 2001), but with off-line algorithms that need to process all the images simultaneously. SFM is consequently not well adapted to the exploration of an unknown environment, because the robot needs to build the map and to localize itself in this map during its world exploration. The in-line approach, known as SLAM (Simultaneous Localization and Mapping), is one of the most active research areas in robotics since it can provide real autonomy to a mobile robot. Some interesting results have been obtained in the last few years, but principally to build 2D maps of indoor environments using laser range-finders. A survey of these algorithms can be found in the tutorials of Durrant-Whyte and Bailey (Durrant-Whyte & Bailey, 2006; Bailey & Durrant-Whyte, 2006).


Vision-based SLAM algorithms are generally dedicated to monocular systems, which are cheaper, less bulky, and easier to implement than stereoscopic ones. Stereoscopic systems have, however, the advantage of working in dynamic environments since they can grab two images simultaneously. Calibration of the stereoscopic sensor moreover enables the recovery of the Euclidean structure of the scene, which is not always possible with only one camera.

In this chapter, we propose the design of an omnidirectional stereoscopic system dedicated to mobile robot applications, and a complete scheme for localization and 3D reconstruction.

This chapter is organized as follows. Section 2 describes our 3D omnidirectional sensor. Section 3 is dedicated to the modelling and the calibration of the sensor. Our main contribution, a Simultaneous Localization and Mapping algorithm for an omnidirectional stereoscopic sensor, is then presented in Section 4. The results of the experimental evaluation of each step, from calibration to SLAM, are then exposed in Section 5. Finally, conclusions and future works are presented in Section 6.

2 System overview

2.1 Sensor description

Among all possible configurations of central catadioptric sensors described by Nayar and Baker (Baker & Nayar, 1999), the combination of a hyperbolic mirror and a camera is preferable for the sake of compactness, since a parabolic mirror needs a bulky telecentric lens.

Although it is possible to reconstruct the environment with only one camera, a stereoscopic sensor can produce a 3D reconstruction instantaneously (without displacement) and will give better results in dynamic scenes. For these reasons, we developed a stereoscopic system dedicated to mobile robot applications using two catadioptric sensors, as shown in Figure 1.

Fig. 1. View of our catadioptric stereovision sensor mounted on a Pioneer robot. The baseline is around 20 cm for indoor environments and can be extended for outdoor environments. The overall height of the sensor is 40 cm.

2.2 Imposing the Single-Viewpoint (SVP) Constraint

The formation of images with catadioptric sensors is based on the Single-Viewpoint theory (Baker & Nayar, 1999). Respecting the SVP constraint permits the generation of geometrically correct perspective images. In the case of a hyperbolic mirror, the optical center of the camera has to coincide with the second focus F’ of the hyperbola, located at a distance of 2e from the mirror focus, as illustrated in Figure 2. The eccentricity e is a parameter of the mirror given by the manufacturer.

Fig. 2. Image formation with a hyperbolic mirror. The camera center has to be located at 2e from the mirror focus to respect the SVP constraint.

A key step in designing a catadioptric sensor is to respect this constraint as much as possible. To achieve this, we first calibrate our camera with a standard calibration tool to determine the central point and the focal length. Knowing the parameters of both the mirror and the camera, the image of the mirror on the image plane can easily be predicted if the SVP constraint is respected, as illustrated in Figure 2. The expected mirror boundaries are superposed on the image, and the mirror then has to be moved manually to fit this estimation, as shown in Figure 3.

Fig. 3. Adjustment of the mirror position to respect the SVP constraint. The mirror border has to fit the estimation (green circle).
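This alignment check can be scripted. Below is a minimal sketch of the boundary prediction, assuming a simple pinhole projection of the mirror rim and that the rim radius and its axial distance from the camera centre are read from the mirror datasheet; the parameter values and the OpenCV overlay are illustrative only, not the authors' actual tool.

```python
import cv2

# Assumed parameters (illustrative values only).
f_px = 700.0            # focal length in pixels, from the standard camera calibration
u0, v0 = 320.0, 240.0   # principal point, from the standard camera calibration
rim_radius_m = 0.035    # radius of the mirror rim (metres), from the mirror datasheet
rim_dist_m = 0.055      # axial distance of the rim from the camera centre when the
                        # SVP constraint holds (derived from 2e and the mirror height)

# Pinhole projection of the rim circle: r_px = f * R / d.
expected_radius_px = int(round(f_px * rim_radius_m / rim_dist_m))

# Overlay the expected boundary (green circle) on a live image of the mirror,
# so the mirror can be moved manually until its border fits the circle.
frame = cv2.imread("catadioptric_view.png")
cv2.circle(frame, (int(u0), int(v0)), expected_radius_px, (0, 255, 0), 2)
cv2.imshow("SVP alignment check", frame)
cv2.waitKey(0)
```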



3 Modelling of the sensor

The modelling of the sensor is a necessary step to obtain metric information about the scene from the camera. It establishes the relationship between the 3D points of the scene and their projections into the image (pixel coordinates). Although there are many calibration methods, they can be classified into two main categories: parametric and non-parametric. The first family consists in finding an appropriate model for the projection of a 3D point onto the image plane. Non-parametric methods associate one projection ray with each pixel (Ramalingam et al., 2005) and provide a “black box model” of the sensor. They are well adapted for general purposes, but they make the minimization algorithms commonly used in computer vision (gradient descent, Gauss-Newton, Levenberg-Marquardt, etc.) more difficult to apply.

3.1 Projection model

Using a parametric method requires the choice of a model, which is very important because it has an effect on the complexity and the precision of the calibration process. Several models are available for catadioptric sensors: the complete model, the polynomial approximation of the projection function, and the generic model.

The complete model relies on the mirror equation, the camera parameters and the rigid transformation between them to calculate the projection function (Gonzalez-Barbosa & Lacroix, 2005). The large number of parameters to be estimated leads to an error function which is difficult to minimize because of numerous local minima (Mei & Rives, 2007). The polynomial approximation of the projection function was introduced by Scaramuzza (Scaramuzza et al., 2006), who proposed a calibration toolbox for his model. The generic model, also known as the unified model, was introduced by Geyer (Geyer & Daniilidis, 2000) and Barreto (Barreto, 2006), who proved its validity for all central catadioptric systems. This model was then modified by Mei (Mei & Rives, 2007), who generalized the projection matrix and also took the distortions into account. We chose to work with the unified model introduced by Mei because any catadioptric system can be used and the number of parameters to be estimated is quite reasonable.

Fig. 4. Unified projection model.

As shown in Figure 4, the projection $p = (u, v)^T$ of a 3D point $X$ with coordinates $X_w = (X_w, Y_w, Z_w)^T$ in the world frame can be computed using the following steps:

• The coordinates of the point $X$ are first expressed in the sensor frame by the rigid transformation $W$, which depends on the seven parameters of the vector $V_1 = (q_w, q_x, q_y, q_z, t_x, t_y, t_z)^T$. The first four parameters are the rotation $R$, parameterised by a quaternion, and the three others are those of the translation $T$. The coordinates of $X$ in the mirror frame are thus given by:

$$\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = R \begin{pmatrix} X_w \\ Y_w \\ Z_w \end{pmatrix} + T \qquad (1)$$

• The point $X = (X, Y, Z)^T$ in the mirror frame is projected from the center onto the unit sphere, giving $X_S = (X_S, Y_S, Z_S)^T = X / \lVert X \rVert$. This point is then projected onto the normalized plane from a point located at a distance $\xi$ from the center of the sphere. These transformations are combined in the function $H$, which depends on only one parameter: $V_2 = (\xi)$. The projection onto the normalized plane, written $m = (x, y)^T$, is consequently obtained by:

$$x = \frac{X_S}{Z_S + \xi} = \frac{X}{Z + \xi\sqrt{X^2 + Y^2 + Z^2}}, \qquad y = \frac{Y_S}{Z_S + \xi} = \frac{Y}{Z + \xi\sqrt{X^2 + Y^2 + Z^2}}$$

• Distortions are then added to $m$ by the distortion function $D$, with radial coefficients $k_1$, $k_2$, $k_5$ and tangential coefficients $k_3$, $k_4$. Writing $\rho^2 = x^2 + y^2$, the distorted coordinates are:

$$\begin{aligned} x_d &= x\,(1 + k_1\rho^2 + k_2\rho^4 + k_5\rho^6) + 2k_3 xy + k_4(\rho^2 + 2x^2) \\ y_d &= y\,(1 + k_1\rho^2 + k_2\rho^4 + k_5\rho^6) + k_3(\rho^2 + 2y^2) + 2k_4 xy \end{aligned}$$



• The final projection is a perspective projection involving the projection matrix $K$. This matrix contains five parameters: the generalized focal lengths $\gamma_u$ and $\gamma_v$, the coordinates of the principal point $u_0$ and $v_0$, and the skew $\alpha$. Let $K$ also denote this projection function, with parameter vector $V_3 = (\gamma_u, \gamma_v, u_0, v_0, \alpha)^T$.

The global projection function of a 3D point $X$, written $P(X, V)$, is obtained by chain composition of the different functions presented before:

$$P(X, V) = (K \circ D \circ H \circ W)(X, V)$$
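To make the chain of steps concrete, here is a minimal sketch of the forward projection in Python. It is only an illustrative reading of the unified model as summarized above: the function name, the parameter packing and the layout of K (in particular the placement of the skew term) are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def project(X_w, q, t, xi, dist, K):
    """Unified-model projection of a 3D world point onto the image plane.

    X_w  : 3D point in the world frame
    q    : unit quaternion (qw, qx, qy, qz) of the rotation R (rigid transform W)
    t    : translation T of the rigid transform W
    xi   : mirror parameter of the unified model (V_2)
    dist : distortion coefficients (k1, k2, k3, k4, k5) of D
    K    : 3x3 generalized projection matrix (gamma_u, gamma_v, u0, v0, skew)
    """
    # Step 1 (W): express the point in the mirror frame, eq. (1).
    qw, qx, qy, qz = q
    R = np.array([
        [1 - 2 * (qy**2 + qz**2), 2 * (qx*qy - qw*qz),     2 * (qx*qz + qw*qy)],
        [2 * (qx*qy + qw*qz),     1 - 2 * (qx**2 + qz**2), 2 * (qy*qz - qw*qx)],
        [2 * (qx*qz - qw*qy),     2 * (qy*qz + qw*qx),     1 - 2 * (qx**2 + qy**2)],
    ])
    X = R @ np.asarray(X_w, dtype=float) + np.asarray(t, dtype=float)

    # Step 2 (H): projection onto the unit sphere, then onto the normalized plane.
    rho = np.linalg.norm(X)
    x = X[0] / (X[2] + xi * rho)
    y = X[1] / (X[2] + xi * rho)

    # Step 3 (D): radial (k1, k2, k5) and tangential (k3, k4) distortion.
    k1, k2, k3, k4, k5 = dist
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k5 * r2**3
    xd = x * radial + 2 * k3 * x * y + k4 * (r2 + 2 * x * x)
    yd = y * radial + k3 * (r2 + 2 * y * y) + 2 * k4 * x * y

    # Step 4 (K): perspective projection with the generalized camera matrix.
    u, v, w = K @ np.array([xd, yd, 1.0])
    return np.array([u / w, v / w])
```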

These steps allow the computation of the projection onto the image plane of a 3D point, knowing its coordinates in 3D space. In a 3D reconstruction framework, it is necessary to do the inverse operation, i.e. to compute the direction of the luminous ray corresponding to a pixel. This step consists in computing the coordinates of the point $X_S$ belonging to the sphere, given the coordinates $(x, y)^T$ of a 2D point on the normalized plane. This step of retro-projection, also known as lifting, is achieved using formula (6):

$$X_S = \begin{pmatrix} \dfrac{\xi + \sqrt{1 + (1 - \xi^2)(x^2 + y^2)}}{x^2 + y^2 + 1}\, x \\[2mm] \dfrac{\xi + \sqrt{1 + (1 - \xi^2)(x^2 + y^2)}}{x^2 + y^2 + 1}\, y \\[2mm] \dfrac{\xi + \sqrt{1 + (1 - \xi^2)(x^2 + y^2)}}{x^2 + y^2 + 1} - \xi \end{pmatrix} \qquad (6)$$
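A correspondingly small sketch of the lifting step of formula (6), with the same illustrative naming as in the projection sketch above; applying it to the undistorted normalized coordinates of a pixel gives the direction of its luminous ray.

```python
import numpy as np

def lift(x, y, xi):
    """Back-project a point (x, y) of the normalized plane onto the unit sphere, formula (6)."""
    r2 = x * x + y * y
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    return np.array([eta * x, eta * y, eta - xi])
```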

3.2 Calibration

Calibration consists in the estimation of the parameters of the model which will be used for 3D reconstruction algorithms. Calibration is commonly achieved by observing a planar pattern at different positions. With the tool we have designed, the pattern can be freely moved (the motion does not need to be known) and the user only needs to select the four corners of the pattern. Our calibration process is similar to that of Mei (Mei & Rives, 2007). It consists of a minimization, over all the model parameters, of an error function between the estimated projection of the pattern corners and the measured projection, using the Levenberg-Marquardt algorithm (Levenberg, 1944; Marquardt, 1963).

If $n$ is the number of 3D points $X_i$ and $x_i$ are their projections in the images, we are looking for the parameter vector $V$ which minimizes the cost function $E(V)$:

$$E(V) = \frac{1}{2} \sum_{i=1}^{n} \left\lVert x_i - P(X_i, V) \right\rVert^2$$
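As an illustration, the minimization can be run with an off-the-shelf Levenberg-Marquardt solver. The sketch below is a hedged example, not the authors' toolbox: the packing of the parameter vector V (pattern pose, ξ, distortion, intrinsics), the skew placement in K, and the input arrays `corners_3d`/`corners_2d` are assumptions, and `project()` is the unified-model sketch given in Section 3.1.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(V, pts_3d, pts_2d):
    """Stacked reprojection errors x_i - P(X_i, V) over all pattern corners."""
    q = V[0:4] / np.linalg.norm(V[0:4])   # pattern pose: quaternion ...
    t = V[4:7]                            # ... and translation
    xi = V[7]                             # mirror parameter
    dist = V[8:13]                        # k1..k5
    gu, gv, u0, v0, alpha = V[13:18]      # generalized intrinsics
    K = np.array([[gu, gu * alpha, u0],   # assumed layout of the skew term
                  [0.0, gv, v0],
                  [0.0, 0.0, 1.0]])
    err = [pts_2d[i] - project(X, q, t, xi, dist, K) for i, X in enumerate(pts_3d)]
    return np.concatenate(err)

# corners_3d: known 3D corners of the planar pattern; corners_2d: their measured projections.
# Initial guess: identity rotation, nominal intrinsics, no distortion, xi = 1.
V0 = np.r_[1, 0, 0, 0,   0, 0, 0.5,   1.0,   np.zeros(5),   700, 700, 320, 240, 0]

# least_squares with method="lm" minimizes 0.5 * sum(residuals**2), i.e. E(V).
result = least_squares(residuals, V0, args=(corners_3d, corners_2d), method="lm")
V_hat = result.x
```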

3.3 Relative pose estimation

The estimation of the intrinsic parameters presented in the previous section establishes the relationship between 3D points and their projections for each sensor of the stereoscopic system. To obtain metric information from the scene, for example by triangulation, the relative pose of the two sensors has to be known.

This step is generally performed by a pixel matching between both images, followed by the estimation of the essential matrix. This matrix, originally introduced by Longuet-Higgins (Longuet-Higgins, 1981), has the property of containing the information on the epipolar geometry of the sensor. It is then possible to decompose this matrix into a rotation matrix and a translation vector, but the latter can only be determined up to a scale factor (Bunschoten & Kröse, 2003). The geometrical structure of the scene can consequently be recovered only up to this scale factor.
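For illustration, the classical SVD-based decomposition of an essential matrix into a rotation and a unit-norm translation can be sketched as follows. This is the textbook construction, shown here only to make the up-to-scale property concrete; it is not the authors' implementation, and for an omnidirectional sensor the matrix would be estimated from matched lifted rays rather than pixel coordinates.

```python
import numpy as np

def decompose_essential(E):
    """Split an essential matrix into two candidate rotations and the translation
    direction; the norm of the translation (the scale factor) cannot be recovered."""
    U, _, Vt = np.linalg.svd(E)
    # Flipping the third singular vectors does not change E (its third singular
    # value is zero) but guarantees proper rotations.
    if np.linalg.det(U) < 0:
        U[:, 2] *= -1
    if np.linalg.det(Vt) < 0:
        Vt[2, :] *= -1
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]           # translation direction only, up to an unknown scale
    return R1, R2, t
```

The physically valid pair among (R1, ±t) and (R2, ±t) is usually kept by triangulating a few matches and checking that they lie in front of both sensors; the missing scale is then the quantity fixed by the calibration pattern described below.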

Although in some applications, especially for 3D visualization, the scale factor is not needed, it is required for the preparation of an intervention or for navigation. To accomplish these tasks, the size of the objects and their distance from the robot must be determined. The 3D reconstruction therefore has to be Euclidean.

Thus, we suggest in this section a method to estimate the relative pose of the two sensors, with particular attention to the estimation of the scale factor. The estimation of the relative pose of two vision sensors requires a partial knowledge of the environment to determine the scale factor. For this reason, we propose a method based on the use of a calibration pattern whose dimensions are known and which must be visible simultaneously by both sensors. Let $(C_1, x_1, y_1, z_1)$ and $(C_2, x_2, y_2, z_2)$ be the frames associated with the two sensors of the stereoscopic system, and $M$ be a 3D point, as shown in Figure 5.
