The Voronoi diagram referred to in this work is the VD generated for sets of points, where a generator is defined as a set of points. In this case, the region segmentation based on the minimum Euclidean distance computation between a location and a set of points is called a Generalized Voronoi Diagram (GVD).
Definition 5 (Generalized Voronoi Diagram). Given G = {g_1, g_2, ..., g_n}, a collection of n point series in the plane which do not overlap:

g_i ⊂ ℝ², i = 1, 2, ..., n (6.20)

for every point p ∈ ℝ², the minimum Euclidean distance from p to a point belonging to the series g_i is called d_E(p, g_i):

d_E(p, g_i) = min{ d_E(p, p_i) : p_i ∈ g_i } (6.21)

The Voronoi region will be defined as:

V(g_i) = { p ∈ ℝ² : d_E(p, g_i) ≤ d_E(p, g_j) for all j ≠ i } (6.22)

and the given sequence:

{ V(g_1), V(g_2), ..., V(g_n) } (6.23)

will be the generalized Voronoi diagram generated by G.
From now on, the term GVD is used when the generators are series of points instead of isolated points. Those series of points will be called generator groups.
The algorithm we present in this work is based on the GVD and is implemented in a sensor-based way, because it is defined in terms of a metric function (d_E(p, g_i)) that measures the Euclidean distance to the closest object (g_i), represented as a set of points supplied by a sensor system.
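As a sketch of this metric (our code, not the authors'), the distance d_E(p, g_i) of equation 6.21 and the region assignment of equation 6.22 can be written directly:

```python
import math

def d_E(p, g):
    """Minimum Euclidean distance from point p to the generator g,
    a series of points (equation 6.21)."""
    return min(math.dist(p, q) for q in g)

def gvd_region(p, generators):
    """Index of the generator whose Voronoi region contains p (eq. 6.22)."""
    dists = [d_E(p, g) for g in generators]
    return dists.index(min(dists))

# Two generator groups (series of points) and a query point
g1 = [(0.0, 0.0), (1.0, 0.0)]
g2 = [(5.0, 0.0), (5.0, 1.0)]
p = (2.0, 0.0)
print(d_E(p, g1))               # 1.0: the closest point of g1 is (1, 0)
print(gvd_region(p, [g1, g2]))  # 0: p lies in the Voronoi region of g1
```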
C Castejón et al
Suppose the robot is modelled as a point operating in a subset of the two-dimensional Euclidean space (in our particular case; this holds in n dimensions too). The space W, called the Workspace, is populated with obstacles {C_1, C_2, ..., C_n} that will be considered as closed sets. The set of points where the robot can manoeuvre freely will be called free space and is defined in [5] as:

FS = W \ (C_1 ∪ C_2 ∪ ... ∪ C_n) (6.24)

The workspace W is represented as a two-dimensional binary image B(i, j), where each position (i, j) has an assigned field value (0 or 1) that indicates whether a pixel belongs to a generator (field 0) or not (field 1). For each point (i, j) ∈ FS belonging to the free space there is at least one closest point in the occupied space, which will be called its base point [50].
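The base-point concept can be illustrated with a brute-force sketch (the function name is ours, not from the text) that finds, for a free-space cell, the closest occupied pixel in the binary image B(i, j):

```python
import math

def base_point(cell, B):
    """Return the occupied-space pixel (field 0) closest to a
    free-space cell: the cell's 'base point'. Brute-force scan."""
    occupied = [(i, j) for i, row in enumerate(B)
                for j, v in enumerate(row) if v == 0]
    return min(occupied, key=lambda q: math.dist(cell, q))

# 0 = generator (occupied) pixel, 1 = free space
B = [[1, 1, 1, 1],
     [1, 1, 1, 0],
     [1, 1, 1, 1]]
print(base_point((1, 0), B))  # (1, 3) is the only occupied pixel
```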
6.4.3 TRM Algorithm

The LVD is obtained from the distances to the generator points which belong to the objects' borders. To obtain the local model, the algorithm is executed in the three steps presented in the next paragraphs.

1. Clustering

Data representing borders between traversable and non-traversable regions are grouped in clusters. The cluster determination, in the three-dimensional information case, is not trivial. In other works, the sorted two-dimensional information from a laser scanner is easily separated into clusters using the distances between successive scanned points [6], [27]. In this approach, the 3D information cannot be treated as other authors do, and a labelling technique is used to obtain the clusters. The kernel applied is a circumference whose radius is the robot's size. If there is a distance greater than the robot's size between two points which belong to the non-traversable region border, then the robot can cross between them, and the points will be considered as belonging to different clusters.

In figure 6.41 the data grouping of the real environment presented in [38] is shown. The result of this step is three generators, called A, B and C.

Fig 6.41 Data grouping to obtain generators
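The grouping step can be sketched as a single-linkage flood fill, which has the same effect as the circumference kernel described above: points joined by a chain of neighbours closer than the robot's size end up in the same generator (a sketch under that reading, not the authors' implementation):

```python
import math

def cluster_border_points(points, robot_size):
    """Group border points into generators: two points share a cluster
    iff they are linked by a chain of points at mutual distance
    <= robot_size (single-linkage / flood fill)."""
    clusters, unvisited = [], set(range(len(points)))
    while unvisited:
        seed = unvisited.pop()
        stack, members = [seed], [seed]
        while stack:
            i = stack.pop()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) <= robot_size]
            for j in near:
                unvisited.discard(j)
            stack.extend(near)
            members.extend(near)
        clusters.append([points[i] for i in members])
    return clusters

pts = [(0, 0), (0.3, 0), (0.6, 0),   # one border segment
       (5, 5), (5.2, 5)]             # another one, far away
print(len(cluster_border_points(pts, robot_size=0.7)))  # 2 generators
```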
2. Distance to the Generator Elements Computing

For the VD construction, the assignment rule is the Euclidean distance to the generators, which are the clusters. For each free space cell (i, j) ∈ FS, that is, the cells where the robot can move, the distance between the cell and each point belonging to each cluster is calculated. The minimum Euclidean distance between the cell and the points of a cluster is used to assign the cell to the corresponding Voronoi region.

3. LVD Algorithm

The cell label is evaluated, based on the minimum distance, as follows (see figure 6.42):
- If the distance to a cluster A is less than the distance to all the rest, the cell is evaluated as belonging to the Voronoi region associated to cluster A.
- If two clusters are equidistant from the cell (for example A and B), then the cell is labelled as a Voronoi edge.
- If more than two clusters are equidistant, the cell is labelled as a Voronoi node.

Fig 6.42 Label cell evaluation
Error Caused by Discretizing

The labelling of each cell belonging to the FS is carried out by computing the distance between the centres of the cells. Nevertheless, the real measurement can be located at any position inside the cell. This must be taken into account when the distance to the generators is calculated. The maximum discretization error is calculated, based on figure 6.43:

d_max = √( (x_2 − x_1)² + (y_2 − y_1)² ) (6.25)

Fig 6.43 Maximum distance between two cells

In the same figure 6.43 it can be seen that:

x_2 − x_1 = 2·R_c,  y_2 − y_1 = 2·R_c (6.26)

and replacing 6.26 in 6.25 we obtain:

d_max = 2·√2·R_c (6.27)

Therefore, d_max is the maximum distance that we must consider when the labelling step is performed. That is, the maximum error E when two cells are evaluated is:

E = d_max
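Assuming, as in the derivation above, that R_c denotes half the cell side (our reading of figure 6.43, not stated explicitly), the error bound of equation 6.27 can be evaluated for the two cell sizes used in the experiments:

```python
import math

# Discretization bound (6.27), assuming R_c is half the cell side.
for c in (0.20, 0.50):                 # the 20 cm and 50 cm cells
    R_c = c / 2
    d_max = 2 * math.sqrt(2) * R_c     # equation 6.27
    print(f"cell {c*100:.0f} cm: E = d_max = {d_max:.3f} m, "
          f"labelling tolerance 2E = {2*d_max:.3f} m")
```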
And the LVD generation step is modified as follows. For each cell (i, j) belonging to the free space:

1. The distance to all the points belonging to each generator is computed.
2. The minimum distance to each generator is obtained. Consider, for example in figure 6.42, the distance difference between two generators A and B, E_c(A, B) = |d_E(g_A, p) − d_E(g_B, p)|, where p is the centre of the cell (i, j) with coordinates (x, y). Then:
   - If E_c(A, B) > 2·E for all A ≠ B, the cell is considered as belonging to the Voronoi region associated to the object A.
   - If E_c(A, B) ≤ 2·E and E_c(H, B) > 2·E for all H ≠ A ≠ B, the cell is considered as a Voronoi edge between the two objects A and B.
   - If more than two generators are equidistant (within the 2·E tolerance), the cell is considered as a Voronoi node.
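One way to implement the tolerance rule above (a sketch; the names and the tie-set formulation are ours) is to collect all generators whose minimum distance lies within 2·E of the best one:

```python
import math

def label_cell(p, generators, E):
    """Label a free-space cell centre p against the generator groups.
    Returns ('region', k), ('edge', (a, b)) or ('node', ids),
    following the 2*E tolerance rule of the text."""
    d = sorted((min(math.dist(p, q) for q in g), k)
               for k, g in enumerate(generators))
    # generators whose distance is within 2E of the minimum are "tied"
    tied = [k for dist, k in d if dist - d[0][0] <= 2 * E]
    if len(tied) == 1:
        return ('region', tied[0])
    if len(tied) == 2:
        return ('edge', tuple(sorted(tied)))
    return ('node', tuple(sorted(tied)))

A = [(0.0, 0.0)]
B = [(4.0, 0.0)]
print(label_cell((1.0, 0.0), [A, B], E=0.1))  # ('region', 0)
print(label_cell((2.0, 0.0), [A, B], E=0.1))  # ('edge', (0, 1))
```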
In figure 6.44 the result of the LVD computation from figure 6.41 is shown. In the figure, the Voronoi edges and nodes are highlighted.

Fig 6.44 LVD representation
6.4.4 Cell Size Influence

A priori, the cell size has an influence on the computing time and on the model accuracy. To test this influence, two cell sizes (suitable for the environment, the robot size and the robot speed) have been chosen: 20 cm and 50 cm. The experiments have been carried out in the environment presented in figure 6.45, and the TNM is shown in figures 6.46 and 6.47.

Fig 6.45 Environment with high obstacle density

Fig 6.46 TNM Top view

Fig 6.47 TNM Side view
The LVD obtained are presented in figure 6.50, where a 20 cm cell resolution is implemented, and in figure 6.51, with a 50 cm resolution. In figures 6.48 and 6.49 the visibility maps are presented to compare the cell sizes.

Fig 6.48 Visibility Map. Cell size of 20 cm

Fig 6.49 Visibility Map. Cell size of 50 cm

Fig 6.50 LVD for 20 cm cell size

Fig 6.51 LVD for 50 cm cell size

Apparently, the graphical result does not change; that is, the LVD shape has not been lost, only the number of points has decreased. With the 50 cm cell size the number of nodes decreases and, above all, we notice that the difference between the two resolutions increases with the distance to the sensor system. Of course, the processing time of the steps involving digital image processing algorithms is reduced because of the increase in cell size: the image size changes from 100 × 200 to 40 × 80 (see table 6.1). In table 6.1 the computing time for the 50 cm cell size is reduced to 20% of the time for the 20 cm cell size.
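The quoted image sizes are consistent with a mapped area of roughly 20 m × 40 m (our inference from the 100 × 200 grid at 20 cm; the text does not state the area explicitly):

```python
# Grid dimensions for the two cell sizes over an assumed 20 m x 40 m area.
area_m = (20.0, 40.0)
for cell in (0.20, 0.50):
    rows, cols = round(area_m[0] / cell), round(area_m[1] / cell)
    print(f"{cell*100:.0f} cm cells -> {rows} x {cols} = {rows*cols} cells")
# 20 cm gives 100 x 200 = 20000 cells; 50 cm gives 40 x 80 = 3200 cells
```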
Tab 6.1 Computing times for different cell sizes

6.5 Incremental Model Construction

In order to explore a large unknown environment, the robot needs to build a global model, to know where it is at each instant and where it can go. The model can be introduced a priori in the robot, or it can be built while the robot is travelling, fusing local models. The model merge consists of assembling a set of data, obtained in different acquisitions, into a unique model [17]. In this article, an incremental global map is built by merging the different LVDs while the robot is moving.
To obtain an incremental model, based on the robot's successive perceptions, the following steps must be carried out [38]:

1. Environment information sensing: the robot, while stopped, uses its external sensors (in this case a 3D laser scanner and a compass) to acquire all the needed information and build a local model (LVD).
2. The local model (LVD), useful for navigation, is transformed into a global coordinate system with a differential GPS, and the local model is integrated with the current global model.
3. The robot moves to the next position, by path planning or guiding, and goes back to step 1.

The exact global model construction, based on successive perceptions, is in general difficult to obtain due to sensor uncertainty. Besides, in the majority of cases it is not interesting: not only because of the imprecision, but also because the CPU computing time and the high memory storage make the model unsuitable for real-time objectives.
For the integration, a transformation into the global Cartesian system is needed. A Global Positioning System (GPS) is used to perform the transformation for each LVD point (x_i, y_i). The algorithm is the following.

For the first local map (transformed into the global reference system):

1. Obtain a seed point P(x_CM, y_CM) ∈ LVD, and set it as a Centre of Mass (CM).
2. Calculate the Euclidean distance between all the points P_i(x_i, y_i) ∈ LVD and the CM. If the distance is less than an error value called r_max, the point P_i is included in the list belonging to the CM and a new CM is calculated. If the distance is greater than r_max, then a new list is formed and it is set as another CM.

For the rest of the LVD maps (transformed into the global reference system):

1. For each point P_i ∈ LVD we try to find the CM in the global map to which P_i belongs (the Euclidean distance between P_i and the CM is less than r_max).
2. If we do not find it, a new list is formed with CM = P_i.
Equation 6.29 calculates the CM for a set of N points; the Euclidean distance calculated in equation 6.30 is used to know whether a point belongs to the CM:

CM = (1/N) · Σ_{i=1}^{N} P_i (6.29)

d_E(P_i, CM) = √( (x_i − x_CM)² + (y_i − y_CM)² ) (6.30)

The r_max value must be chosen considering the map cell resolution, and it always depends on the robot's movement velocity.
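The merging procedure of equations 6.29 and 6.30 can be sketched as follows (the function name and data layout are ours, not the authors'):

```python
import math

def merge_points(points, r_max, cms=None):
    """Incrementally cluster LVD points into centres of mass (CM).
    A point joins the first CM closer than r_max (eq. 6.30), and that
    CM is recomputed as the mean of its list (eq. 6.29); otherwise
    the point seeds a new CM."""
    cms = cms if cms is not None else []   # list of (centre, members)
    for p in points:
        for k, (c, members) in enumerate(cms):
            if math.dist(p, c) < r_max:
                members.append(p)
                n = len(members)
                c = (sum(q[0] for q in members) / n,
                     sum(q[1] for q in members) / n)
                cms[k] = (c, members)
                break
        else:                              # no CM close enough
            cms.append((p, [p]))
    return cms

lvd = [(0.0, 0.0), (0.1, 0.0), (3.0, 3.0)]
cms = merge_points(lvd, r_max=0.5)
print(len(cms))     # 2 centres of mass
print(cms[0][0])    # (0.05, 0.0)
```

Passing the list returned by a previous call as `cms` integrates a new LVD into the existing global map, as in the second half of the algorithm.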
In figures 6.52, 6.53 and 6.54 the steps performed to build the global map are presented.

Fig 6.52 LVD

Fig 6.53 Data map clustering

Fig 6.54 Incremental Voronoi map
In the next figures (first, see figure 6.55), an exploration task for an outdoor robot is presented. The robot senses the environment and builds the LVD (figure 6.56); then it moves three metres ahead and repeats the process eleven times, building the incremental model (figures 6.57 to 6.66).

In the global model built, the good results can be appreciated thanks to the additional information provided by the GPS, which supplies global measures. The local model integration is easy and allows building topo-geometrical models in real time.
Fig 6.55 Real environment

Fig 6.56 Robot exploration sequence. Position 1

Fig 6.57 Robot exploration sequence. Position 2

Fig 6.58 Robot exploration sequence. Position 3
Fig 6.59 Robot exploration sequence. Position 4

Fig 6.60 Robot exploration sequence. Position 5

Fig 6.61 Robot exploration sequence. Position 6

Fig 6.62 Robot exploration sequence. Position 7

Fig 6.63 Robot exploration sequence. Position 8

Fig 6.64 Robot exploration sequence. Position 9
Fig 6.65 Robot exploration sequence. Position 10

Fig 6.66 Robot exploration sequence. Position 11

During the sequence, we can see that the robot acquires new local models, and regions that in previous maps were considered as non-traversable or not visible change to traversable regions with the arrival of new data and the robot's different points of view. With this algorithm, the global model can be built off-line, from different local models, or in real time, as the robot travels in autonomous or guided mode.
6.6 Conclusions

A new methodology to model outdoor environments, based on three-dimensional information and a topographical analysis, has been presented. The model can be built in real time while the robot is moving, and it is possible to carry out the integration of local models in order to build an environment knowledge database in a global map, using a GPS system onboard the robot, as work [9] presents. The computing time is low, taking into account the cell size used (20 cm), that is, the minimum size considered for an outdoor environment and for a large robot. The computing time decreases by 80% for a 50 cm cell size.
As conclusions based on the experimental results presented in this paper, we can highlight the following:
- The 3D laser scanner is a good choice to obtain information to model environments with a certain degree of complexity. It senses the terrain surface with precision and obtains a dense 3D map.
- Nevertheless, the sensor system presents problems on negatively sloped terrain, where the information density is lower as the distance between the measurements and the sensor increases. This is because of the laser scanner's non-linearity when a vertical scan with a constant increment is performed. Different solutions can be set out to solve this problem, such as: reducing the local model size, placing the sensor in a more elevated position, or using a non-constant increment in the vertical scans to obtain more information in the slope area.
- The cell size used in the model discretization has an influence on the computing time. For the same environment size, an increase in the cell size will decrease the number of cells present on the map, and then all the digital image algorithms will reduce their processing time. Based on the experimental results, we have concluded that, for the robot size and the type of environment we are going to work in, a good cell size will be between 20 and 50 cm.
- The LVD processing time increases when the number of free space cells increases and when the number of points belonging to the generators increases.
6.7 Acknowledgements
The authors gratefully acknowledge the funds provided by the Spanish Government through the CICYT project TAP 1997-0296 and the DPYT project DPI 2000-0425. Further, we thank Angela Nombela for her assistance.
References
1. Howard, A. and Seraji, H., Vision-based terrain characterization and traversability assessment, Journal of Robotic Systems 18 (2001), no. 10, 577–587.
2. Betgé-Brezetz, S., Chatila, R., and Devy, M., Natural scene understanding for mobile robot navigation, IEEE International Conference on Robotics and Automation, vol. 1, 1994, pp. 730–736.
3. Betgé-Brezetz, S., Modélisation incrémentale et localisation pour la navigation d'un robot mobile autonome en environnement naturel, Ph.D. thesis, Laboratoire d'analyse et d'architecture des systèmes du CNRS, Université Paul Sabatier de Toulouse, February 1996.
4. Blanco, D., Boada, B.L., Castejón, C., Balaguer, C., and Moreno, L.E., Sensor-based path planning for a mobile manipulator guided by the humans, The 11th International Conference on Advanced Robotics, ICAR 2003, vol. 1, 2003.
5. Blanco, D., Boada, B.L., and Moreno, L., Localization by Voronoi diagrams correlation, IEEE International Conference on Robotics and Automation, vol. 4, 2001, pp. 4232–4237.