The use of terrain information will be the key source of accuracy with this method. For areas in which the map information is sparse, the observation of altitude will not provide much information with which to bound the uncertainty in position. The positioning accuracy achievable will also depend on the terrain over which the vehicle is travelling. A flat, uniform bottom yields no information to bound the estimate of the filter. In this case, the uncertainty in the estimate will grow along the arc subtended by the range observation. If, on the other hand, some unique terrain features are present, such as hills and crevices, the probability distribution will converge to a compact estimate. As shown in the example presented here, the 2σ uncertainty bounds converge to approximately 10 m in the X and Y position estimates. The depth accuracy remains constant and is a function of the accuracy of the depth sensor.
The uncertainty in the position estimate will grow while the body is not receiving altimeter readings. As shown in Figure 4, the error in the lateral position of the towed body relative to the ship grows large since there is no information available to the filter. Once the altimeter readings are received, the uncertainty in both the X and Y positions is reduced. So long as the trend in the terrain elevation remains fairly unique, the uncertainty will remain small.
4.1 Sydney Harbour Demonstration
In order to facilitate the demonstration of these techniques, data sets taken in Sydney Harbour have been acquired. This data includes a detailed bathymetric map of the harbour, shown in Figure 5 (a), and ship transect data, including GPS and depth soundings, shown in Figure 5 (b). This data has kindly been donated by the Australian Defence Science and Technology Organization (DSTO) in relation to their hosting of the 3rd Shallow Water Survey Conference held in Sydney in November 2003.

The particle filter based techniques described in this paper have been applied to these data sets. Figure 6 shows results of these tests. The ship location is initially assumed to be unknown and particles are distributed randomly across the extent of the Harbour. The GPS fixes were used to generate noisy velocity and heading control inputs to drive the filter predictions. Observations of altitude using the ship's depth sounder were then used to validate the estimated particle locations using a simple Gaussian height model relative to the bathymetry in the map. As can be seen in the figure, the filter is able to localise the ship and successfully track its motion throughout the deployment. The particle clouds converge to the true ship position within the first 45 observations and successfully track the remainder of the ship track. Figure 7 shows the errors between the ship position and heading estimates generated by the filter and the GPS positions.
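The predict/update/resample cycle described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the noise levels, the resampling trigger, and the `bathymetry` lookup are assumptions.

```python
import numpy as np

def pf_step(particles, weights, v, heading, dt, depth_obs, bathymetry,
            sigma_v=0.5, sigma_h=0.05, sigma_d=1.0):
    """One predict/update/resample cycle of a terrain-aided particle filter.

    particles: (N, 2) array of candidate (x, y) positions.
    bathymetry: callable mapping (x, y) arrays to map depth.
    """
    n = len(particles)
    # Predict: propagate particles with noisy velocity/heading control inputs.
    v_n = v + np.random.normal(0.0, sigma_v, n)
    h_n = heading + np.random.normal(0.0, sigma_h, n)
    particles = particles + dt * np.column_stack((v_n * np.cos(h_n),
                                                  v_n * np.sin(h_n)))
    # Update: weight by a Gaussian height model relative to the bathymetric map.
    expected = bathymetry(particles[:, 0], particles[:, 1])
    weights = weights * np.exp(-0.5 * ((depth_obs - expected) / sigma_d) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```

Starting from particles drawn uniformly over the map extent, repeating this step with each depth sounding reproduces the behaviour described: the cloud first collapses onto cells of the harbour with a depth consistent with the observations, then tracks the vessel.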
The assumption that the initial ship location is unknown is somewhat unrealistic for the target application of these techniques, as submersibles will generally be deployed from a known initial location with good GPS fixes. This represents a worst-case scenario, however, and it is encouraging to see that the technique is able to localise the ship even in the absence of an initial estimate of its position.
(a) (b)

Fig. 5. Sydney Harbour bathymetric data. (a) The Harbour contains a number of interesting features, including the Harbour tunnel on the right hand side and a number of large holes which will present unique terrain signatures to the navigation filter. (b) The ship path for the Sydney Harbour transect. Shown are the contours of the harbour together with the path of the vehicle. Included in this data set are the GPS position and depth sounder observations at 5 s intervals.
5 Conclusions
The proposed terrain-aided navigation scheme has been shown to reliably track a ship position in a harbour situation given depth soundings and a bathymetric map of the harbour. This technique is currently being augmented to support observations using a multi-beam or scanning sonar in preparation for deployment using the Unmanned Underwater Vehicle Oberon available at the University of Sydney's Australian Centre for Field Robotics.
Following successful demonstration of the map based navigation approach, the techniques developed will be applied to building terrain maps from the information available solely from the vehicle's on-board sensors. There is considerable information contained in strong energy sonar returns received from the sea floor as well as in the images supplied by on-board vision systems. This information can be combined to aid in the identification and classification of natural features present in the environment, allowing detailed maps of the sea floor to be constructed. These maps can then be used for the purposes of navigation in a similar manner to that of the more traditional, parametric feature based SLAM algorithm [13,12].

Acknowledgements
The authors wish to acknowledge the support provided under the ARC Centre of Excellence Program by the Australian Research Council and the New South Wales government. Thanks must also go to the staff of the Defence Science and Technology Organization (DSTO) for making the Sydney Harbour bathymetry and transect data available.
(a) (b)

Fig. 6. Monte Carlo localisation example using the Sydney Harbour bathymetric map. The line represents the ship track in this deployment and the particles are shown overlaid on the figure. (a) The particles are initially drawn from the uniform distribution across the extent of the harbour. (b) The potential locations of the ship are reduced to areas of the harbour with a common depth to the start of the trial and (c) begin to converge on the true ship location. (d) Once the particles have converged to the actual position of the ship, its motion is tracked as additional observations are taken. As can be seen, the particle clouds track the true ship path over the extent of the run in spite of there being no absolute observations of the ship position.
References
1. N. Bergman, L. Ljung, and F. Gustafsson. Terrain navigation using Bayesian statistics. IEEE Control Systems Magazine, 19(3):33–40, 1999.
2. F. Gustafsson, N. Bergman, U. Forssell, J. Jansson, R. Karlsson, and P.-J. Nordlund. Particle filters for positioning, navigation and tracking. IEEE Trans. on Signal Processing, 1999.
3. A.E. Johnson and M. Hebert. Seafloor map generation for autonomous underwater vehicle navigation. Autonomous Robots, 3(2-3):145–168, 1996.
4. D. Langer and M. Hebert. Building qualitative elevation maps from underwater sonar data for autonomous underwater navigation. In Proc. IEEE Intl. Conf. on Robotics and Automation, volume 3, pages 2478–2483, 1991.
Fig. 7. The error between the mean of the particle densities and the GPS positions. The errors are bounded by the 2σ error bounds for the distributions.
5. S. Majumder, S. Scheding, and H.F. Durrant-Whyte. Sensor fusion and map building for underwater navigation. In Proc. Australian Conf. on Robotics and Automation, pages 25–30. Australian Robotics Association, 2000.
6. C. De Moustier and H. Matsumoto. Seafloor acoustic remote sensing with multibeam echo-sounders and bathymetric sidescan sonar systems. Marine Geophysical Researches, 15(1):27–42, 1993.
7. V. Rigaud and L. Marc. Absolute location of underwater robotic vehicles by acoustic data fusion. In Proc. IEEE Intl. Conf. on Robotics and Automation, volume 2, pages 1310–1315, 1990.
8. R. Karlsson, F. Gustafsson, and T. Karlsson. Particle filtering and Cramér-Rao lower bound for underwater navigation. Internal Report LiTH-ISY-R-2474, 2002.
9. S. Thrun, D. Fox, and W. Burgard. A probabilistic approach to concurrent mapping and localization for mobile robots. Machine Learning and Autonomous Robots (joint issue), 1998.
10. S. Thrun, D. Fox, W. Burgard, and F. Dellaert. Robust Monte Carlo localization for mobile robots. Artificial Intelligence, 2000.
11. L. Whitcomb, D. Yoerger, H. Singh, and J. Howland. Advances in underwater robot vehicles for deep ocean exploration: Navigation, control and survey operations. The Ninth International Symposium on Robotics Research, pages 346–353, 1999.
12. S.B. Williams, G. Dissanayake, and H.F. Durrant-Whyte. Constrained initialisation of the simultaneous localisation and mapping algorithm. In Proc. Intl. Conference on Field and Service Robotics, pages 315–320, 2001.
13. S.B. Williams, G. Dissanayake, and H.F. Durrant-Whyte. Towards terrain-aided navigation for underwater robotics. Advanced Robotics, 15(5):533–550, 2001.
Experimental Results in Using Aerial LADAR Data for Mobile Robot Navigation
Nicolas Vandapel, Raghavendra Donamukkala, and Martial Hebert
The Robotics Institute
Carnegie Mellon University
vandapel@ri.cmu.edu
Abstract. In this paper, we investigate the use of high resolution aerial LADAR (LAser Detection And Ranging) data for autonomous mobile robot navigation in natural environments. The use of prior maps from aerial LADAR survey is considered for enhancing system performance in two areas. First, the prior maps are used for registration with the data from the robot in order to compute accurate localization in the map. Second, the prior maps are used for computing detailed traversability maps that are used for planning over long distances. Our objective is to assess the key issues in using such data and to report on a first batch of experiments in combining high-resolution aerial data and on-board sensing.
1 Introduction
Autonomous mobility in unstructured, natural environments is still a daunting challenge due to the difficulty in analyzing the data from mobility sensors, such as stereo cameras or laser range finders. Recent developments make it possible and economical to acquire high-resolution aerial data of an area before a ground robot traverses it. The resolution of former digital elevation maps is too limited to be used effectively for local robot navigation. New high-resolution aerial mapping systems open the door to preprocessing a terrain model at a resolution level comparable to the one produced by on-board sensors.

In this paper, we investigate the use of high resolution aerial LADAR data for autonomous mobile robot navigation in natural environments. The use of prior maps from aerial survey is considered for enhancing system performance in two areas. First, the prior maps are used for registration with the data from the robot in order to compute accurate localization in the map. Second, the prior maps are used for computing detailed traversability maps that are used for planning over long distances. Our objective is to assess the key issues in using such data and to report on a first set of experiments in combining high-resolution aerial data and on-board 3-D sensing.
In the application considered here, the typical mission scenario aims at performing waypoint navigation over hundreds of meters in a rough terrain cluttered with various types of vegetation. The ground vehicle system, built by General Robotics Dynamic Systems (GDRS), is equipped with a three dimensional (3-D) laser sensor, an inertial navigation unit and a Global Positioning System (GPS). Overhead LADAR data is provided prior to the mission.

S. Yuta et al. (Eds.): Field and Service Robotics, STAR 24, pp. 103–112, 2006.
© Springer-Verlag Berlin Heidelberg 2006

Using aerial LADAR data poses two main challenges: the volume of data and the nature of the terrain. The aerial LADAR data typically contains 44 million 3-D points, each associated with an identifier and a reflectance value. The area mapped covers 2.5 km × 3.0 km. Such a data set is an enormous amount of data to use effectively on-board a robot. In addition, the data includes two components of the terrain - the vegetation cover and the terrain surface - which need to be discriminated from one another. The work reported here is a first step toward using this type of high-resolution aerial data in conjunction with data from a robot's mobility sensor.

The paper is organized as follows: The second section presents the field test sites and the sensors we used. The third section deals with the details of vegetation filtering both with ground and aerial LADAR data. Sections 4 and 5 present our work on vegetation filtering in the context of two problems: robot position estimation using 3-D terrain registration and path planning.
2.2 Sensors
We used the GDRS mobility laser scanner [18], mounted on a ground mobile robot, and a Saab TopEye mapping system, mounted on a manned helicopter [1], to collect data.

The ground LADAR is mounted on a turret sitting at the front of the ground vehicle. The vehicle is built using the chassis of a 4-wheel-drive All Terrain Vehicle (ATV). The laser has a maximum range of 80 meters and a 7.5 cm range resolution. In the configuration used in these experiments, its field of view is 90° × 15°. In addition, the sensor can pan and tilt by ±90° and ±15° to increase terrain coverage.

The aerial mapping system is operated at 400 meters above the ground. The laser beam is deflected by an oscillating mirror which produces a Z-shaped laser track along the flight path. The range resolution is 1 cm and the point position accuracy varies between 10 and 30 cm, depending on the altitude. The laser records two echoes per pulse (first and last), but only objects taller than 1.8 meters will produce two echoes. For each demonstration, the helicopter flew over the test area along several directions to produce a higher point density, 4 to 30 points per square meter.
3 Vegetation Filtering
In this section, we justify the need for removing the vegetation from the raw data, review the issues of perceiving vegetation with laser sensors, present the state of the art of the techniques commonly used, and explain the two methods implemented for the aerial and the ground LADAR data.
3.1 Motivation
Vegetation constitutes a major challenge for robot navigation for the following reasons: 1) Vegetated areas are unstable features, susceptible to natural changes and human activities; 2) It is difficult to recover and model the shape of trees and bushes; 3) Vegetation can prevent the detection of hazards such as trenches or rocks; 4) Vegetation such as trees and bushes might constitute a hazard, whereas grass might not, and overhanging branches are sometimes difficult to detect; 5) Vegetation prevents the detection of the terrain surface, which is used in terrain-based localization and is required when the canopies produce GPS drop-offs.
3.2 Vegetation and Active Range Sensors
Issues related to the interaction of an aerial LADAR sensor with ground cover can be summarized into three broad categories: sensing, data interpretation, and ground versus aerial sensing.

The foliage penetration rate will depend on the canopy density (winter preferable), the scanning angle (nadir preferable), and the laser footprint on the ground (small aperture and low altitude preferable). The signal returned might contain several echoes, one for the canopy and one for the ground.

A change in elevation can be interpreted as a change in the ground cover (terrain versus trees) or as a change in the terrain elevation. Because of the range dispersion when the laser beam hits multiple objects at different depths, the range measurements will be erroneous, depending on the flight path and the scanning angle.

Ground scans are less sensitive to the effects described above because the laser range is shorter (dozens versus hundreds of meters) and so the beam footprint on the target is smaller. Another difference is the geometry of the scene. Aerial LADAR will most likely map the ground and the top of the canopy; trunks and small obstacles will not be perceived. With a ground LADAR, the bottom of the canopy, small obstacles and trunks will be seen. Trunks will produce range shadows, occluding a large part of the terrain. The last difference is the density of points, with a difference of two orders of magnitude.
3.3 State of the Art
Filtering LADAR data has mainly been studied in the remote sensing community with three objectives: producing surface terrain models [13] (in urban or natural environments), studying forest biomass [14], and inventorying forest resources [8]. To filter LADAR data, authors have used linear prediction [13], mathematical morphology (grey opening) [5], dual rank filters [15], texture [6], and adaptive window filtering [17]. All these methods are sensitive to the terrain slope. In the computer vision community, Mumford [10] pioneered the work on ground range images. Macedo [16] and Matthies [3] focused on obstacle detection among grass. In [7], Hebert proposed to use the local shape distribution of 3-D points to segment natural scenes into linear, scatter and solid surface features.
3.4 Methods Implemented
We implemented two methods to filter the vegetation. The first one takes advantage of the aerial LADAR's capability to detect multiple echoes per emitted laser pulse. The ground LADAR does not have this capability, so a second filtering method had to be implemented.
Multi-echo based filtering. The LADAR scans from multiple flights are gathered and the terrain is divided into a grid with 1 m × 1 m cells. Each point falling within a given cell is classified as ground or vegetation by k-means clustering on the elevation. Laser pulses with multiple echoes (first and last) are used to seed the two clusters (vegetation and ground respectively). Single-echo pulses are initially assigned to the ground cluster. After convergence, if the difference between the mean values of the two clusters is less than a threshold, both clusters are merged into the ground cluster. The clustering is performed in groups of 5 × 5 cells centered at every cell in the grid. As we sweep the space, each point is classified 25 times and a majority vote defines the cluster to which the point is assigned. This method has been used to produce the results presented in Section 5.
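The clustering scheme above can be sketched as follows. This is a simplified illustration, not the authors' code: the cluster seeding uses only first-echo returns, and the parameter values (1 m cells, 0.5 m merge threshold, a fixed number of k-means iterations) are assumptions.

```python
import numpy as np
from collections import defaultdict

def filter_vegetation(points, multi_echo, cell=1.0, merge_thresh=0.5):
    """Label LADAR returns as vegetation (True) or ground (False).

    points: (N, 3) array of x, y, z; multi_echo: (N,) bool, True for the
    first echo of a two-echo pulse (used to seed the vegetation cluster).
    """
    cells = defaultdict(list)
    for k, (i, j) in enumerate(np.floor(points[:, :2] / cell).astype(int)):
        cells[(i, j)].append(k)
    votes = np.zeros(len(points), dtype=int)   # vegetation votes per point
    counts = np.zeros(len(points), dtype=int)  # times each point was classified
    for (i, j) in list(cells):
        # Gather the 5 x 5 block of cells centred on (i, j).
        idx = [k for di in range(-2, 3) for dj in range(-2, 3)
               for k in cells.get((i + di, j + dj), [])]
        z = points[idx, 2]
        seed = multi_echo[idx]
        # Two-means clustering on elevation, seeded by the multi-echo pulses.
        m_veg = z[seed].mean() if seed.any() else z.max()
        m_gnd = z[~seed].mean() if (~seed).any() else z.min()
        for _ in range(10):
            is_veg = np.abs(z - m_veg) < np.abs(z - m_gnd)
            if is_veg.any():
                m_veg = z[is_veg].mean()
            if (~is_veg).any():
                m_gnd = z[~is_veg].mean()
        if abs(m_veg - m_gnd) < merge_thresh:
            is_veg[:] = False  # clusters too close: merge both into ground
        votes[idx] += is_veg.astype(int)
        counts[idx] += 1
    return votes * 2 > counts  # majority vote over the overlapping blocks
```

Because the 5 × 5 blocks overlap, an interior point is classified once per block that contains it, and the majority vote smooths out unstable decisions at cluster boundaries.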
Cone based filtering. We had to implement a new method for filtering the vegetation from ground data. Our approach is inspired by [19] and is based on a simple fact: the volume below a ground point will be free of any LADAR return. For each LADAR point, we estimate the density of data points falling into a cone oriented downward and centered at the point of interest.

While the robot traverses a part of the terrain, LADAR frames are registered using the Inertial Navigation System (INS). The INS is sensitive to shocks (e.g., a high velocity wheel hitting a rock), which cause misalignment of consecutive scans. In order to deal with slightly misaligned frames, we introduce a blind area defined by the parameter ρ (typically 15 cm). The opening of the cone (typically 10-20°) depends on the expected maximum slope in the terrain and the distribution of the points.

This approach has been used to produce the results presented in Section 4. Our current implementation filters 67,000 points spread over 100 m × 100 m in 25 s on a Pentium III, 1.2 GHz.
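The cone test can be illustrated with a brute-force sketch. The paper estimates a point density inside the cone; for simplicity, this version flags a point as vegetation as soon as any return falls inside its downward cone, and the O(N²) search and default parameters are assumptions (a practical version would use a spatial index).

```python
import numpy as np

def cone_filter(points, rho=0.15, opening_deg=15.0):
    """Flag points that have LADAR returns inside a downward cone as vegetation.

    rho: blind zone below the cone apex, absorbing slight INS misalignment
    between consecutive scans. opening_deg: full opening angle of the cone.
    """
    tan_half = np.tan(np.radians(opening_deg) / 2.0)
    is_veg = np.zeros(len(points), dtype=bool)
    for i in range(len(points)):
        # Vertical drop from the candidate point, past the blind zone.
        dz = points[i, 2] - points[:, 2] - rho
        # Horizontal distance to every other return.
        r = np.hypot(points[:, 0] - points[i, 0], points[:, 1] - points[i, 1])
        inside = (dz > 0.0) & (r < dz * tan_half)
        is_veg[i] = inside.any()  # a true ground point has an empty cone
    return is_veg
```

A wider opening angle tolerates steeper terrain at the cost of misclassifying ground points on slopes, which is why the paper ties the angle to the expected maximum slope.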
4 Terrain Registration
In this section we briefly present our method for 3-D terrain registration and show registration results. These results are obtained in an area in which registration would not be possible without the vegetation filtering and ground recovery algorithms described above.
4.1 Terrain Registration Method
The objective of terrain registration is to recover the vehicle position in the map by matching a local map built from 3-D data from on-board sensors with 3-D data from the prior map. The core of our registration approach involves the computation of pose-invariant surface signatures in the neighborhood of feature points in both the robot and the prior map. Correspondences between the features are established by comparing the signatures, and the most consistent set of correspondences is retained for computing the registration transformation. An initial description of this class of approaches can be found in [12]. Initial extensions to terrain matching are described in [9]. Details of the current version of the map registration approach are described in [20]. Importantly, this approach does not require accurate prior knowledge of vehicle pose. In fact, we have performed registration with 20 m initial error in robot position and ±10° error in orientation.
Key to the current approach is the automatic selection of interest points, or feature points, in the terrain maps. This is challenging because the points must be selected in a consistent manner between the aerial and ground data. The approach can be summarized as follows. Points are selected using three criteria computed from the configuration of the 3-D surface in the neighborhood of each point. The first criterion uses the configuration of range shadows in the vicinity of the point; essentially, points with a low density of range shadows in their neighborhood are selected in priority. The second criterion evaluates the amount of variation of the terrain surface in the vicinity of each candidate point, so that only points with sufficiently curvy terrain are selected. Finally, the signatures are analyzed so that only those points whose signatures contain enough information are retained. Given a 3-D map constructed from the on-board sensors, the algorithm that combines the three criteria extracts a small set of points that are used as the feature points for matching. Andrew Johnson introduced a more elaborate but more expensive landmark point selection strategy in [11]. By contrast, our implementation runs on-board the mobile robot.
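The first two selection criteria lend themselves to a simple grid-based sketch; the third (signature information content) is omitted here because it depends on the signature representation. The thresholds and window size are illustrative assumptions, not values from the paper.

```python
import numpy as np

def select_feature_points(elev, shadow_max=0.1, var_min=0.2, win=5):
    """Select candidate feature cells from a gridded elevation map.

    elev: 2-D elevation array, with NaN marking range shadows (no data).
    Returns (row, col) pairs passing the shadow-density and variation tests.
    """
    rows, cols = elev.shape
    r = win // 2
    feats = []
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            patch = elev[i - r:i + r + 1, j - r:j + r + 1]
            # Criterion 1: few range shadows in the neighborhood.
            if np.isnan(patch).mean() > shadow_max:
                continue
            # Criterion 2: sufficiently "curvy" local terrain.
            if np.nanstd(patch) < var_min:
                continue
            feats.append((i, j))
    return feats
```

Flat cells fail the variation test and heavily shadowed cells fail the shadow test, leaving only well-observed relief as candidate landmarks.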
We present the performance evaluation below for the registration using the data described above. The key conclusion of the experiments is that vegetation filtering and ground recovery are critical to reliable terrain registration, for two reasons. First, the vegetation generates very different data sets when viewed from the vantage point of the robot as opposed to the vantage point of the aerial sensor. This is true in obvious cases such as tree canopies, but it is also true of more mundane structures such as bushes. Second, this problem is compounded by the fact that selecting features for registration is based on the amount of variation in 3-D data around each point, which causes feature points to be selected near vegetation areas, the least reliable areas for matching. For example, bushes and trees tend to be the most prominent 3-D structures in the scene, occluding other terrain surface features, such as rock formations and ledges. Furthermore, the last stage in registration involves minimizing the registration error between the two 3-D data sets, after transformation by the transformation computed from correspondences between feature points. Because of the stochastic nature of the data in vegetation areas, this minimization is, in practice, unreliable.
4.2 Example with the Yuma Data Set
Ledge course. Figure 1-(a) shows an example of air-ground registration. The ground area to be registered does not contain any vegetation, but the aerial data does. Without filtering the vegetation, the points retained for performing the registration are mostly located in the wash, visible in the rear of the scene. The green surface represents the aerial data; it is 100 m × 100 m. The hill that is visible is roughly 10 m high and the ledge cliff is at most 3 m high. The data was collected during the demonstration performed at Yuma, Arizona, in May 2002.
Wash course. In this second example the ground data contains bushes and trees. Aerial and ground data are represented as point clouds in Figure 1-(b) and (c) respectively. The robot drove 9 m. The filtering method works correctly even in the presence of a steep slope, as shown on the side of the ledge in Figure 1-(c). The vegetation points are plotted in color while the ground points are in white. The ground (aerial) data contains 50,000 (541,136) 3-D points; 9,583 (387,393) of them are classified as vegetation and 40,417 (153,721) as ground. The processing times are respectively 15 seconds and 117 seconds.
One added difficulty in this example is that, after filtering the vegetation in the robot data, the terrain surface contains a number of empty areas with no data, termed "range shadows". This is because in many cases there is not enough data to recover the ground surface after filtering. This makes the feature selection more difficult because we cannot use the range shadows as a criterion to reject potential feature points. In the aerial data, on the other hand, the ground surface can be recovered more reliably.
Figure 1-(d) contains the aerial data without vegetation (in grey) registered with the ground data without vegetation (in red). The ground (aerial) mesh has a resolution of 0.27 m (1.2 m), covers 40 m × 25 m (100 m × 100 m), and contains 1,252 (9,000) vertices, 400 (637) of which are used to produce spin-images. The registration procedure lasts 2.3 s.
5 Path Planning
In this section, we describe the use of prior maps from high-resolution aerial data for path planning. We consider the problem of planning from a start position to a distant goal point through a set of intermediate waypoints. We evaluate different options
(a) Ledge: registration (b) Wash: aerial data
(c) Wash: ground data (d) Wash: registration

Fig. 1. Terrain registration with vegetation filtering, Yuma data set. (a) Registration example from the Ledge course; (b),(c),(d) registration example from the Wash course. In (b) the elevation is color coded from blue to red. In (c) the white area corresponds to the terrain surface and the colored points to the segmented vegetation. In (d), the grey represents the aerial data and the red mesh the ground data. Both in (c) and (d) the vegetation has been filtered.
for processing the prior map data and show the effect of vegetation filtering on the performance.
5.1 Traversability Maps
Our task is to provide a path between waypoints based on aerial LADAR data. Starting with the raw data, we first divide the space into 1 m × 1 m cells. Points are then segmented into two clusters: vegetation and ground. For each cell, the ratio between the number of vegetation points and the total number of points defines a confidence criterion on the terrain reconstructed below the tree canopy. We call it the vegetationess. This new criterion allows the path planner to consider new trajectories, usually prohibited, because vegetation is not treated a priori as an obstacle.
The traversability at each location in the map is evaluated using the standard approach of convolving a vehicle model with the elevation map. More precisely, using the ground points we compute eight traversability maps (one every 45° in heading) as follows: 1) we interpolate the position of each tire on top of the ground surface; 2) we fit a plane and extract the current roll, pitch, and remaining ground clearance of the vehicle; 3) these values are remapped between 0 (non-traversable) and 1 (traversable) and then thresholded, using a sigmoid function and the static performance limits of the vehicle (maximal frontal and side slopes, and the ground clearance). The final traversability value assigned to the cell is the value of the least favorable of the three criteria. If one of the three criteria exceeds the robot's limits, the cell is marked as non-traversable.
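A minimal version of steps 1-3 might look like the following. The wheel geometry, sigmoid gains and vehicle limits are illustrative assumptions, and the footprint is reduced to the four wheel contacts plus the chassis centre rather than a full convolution of the vehicle body.

```python
import numpy as np

def cell_traversability(elev, cell, i, j, heading, wheelbase=2.0, track=1.5,
                        max_slope_deg=30.0, clearance=0.3):
    """Traversability in [0, 1] for one cell and heading (1 = fully traversable).

    elev: 2-D elevation grid in metres; cell: grid resolution in metres.
    """
    c, s = np.cos(heading), np.sin(heading)
    # Wheel contact offsets of the vehicle model, rotated into the heading.
    offs = np.array([[wheelbase / 2, track / 2], [wheelbase / 2, -track / 2],
                     [-wheelbase / 2, track / 2], [-wheelbase / 2, -track / 2]])
    xy = offs @ np.array([[c, s], [-s, c]])
    rows = np.clip(np.round(i + xy[:, 0] / cell).astype(int), 0, elev.shape[0] - 1)
    cols = np.clip(np.round(j + xy[:, 1] / cell).astype(int), 0, elev.shape[1] - 1)
    z = elev[rows, cols]  # step 1: terrain height under each tire
    # Step 2: least-squares plane z = a*x + b*y + d through the four wheels.
    A = np.column_stack((xy, np.ones(4)))
    (a, b, d), *_ = np.linalg.lstsq(A, z, rcond=None)
    pitch, roll = np.degrees(np.arctan(a)), np.degrees(np.arctan(b))
    bulge = elev[i, j] - d  # terrain rising toward the chassis centre
    # Step 3: sigmoid remap of each criterion against the vehicle's limits.
    def sig(val, limit, k):
        return 1.0 / (1.0 + np.exp(k * (val - limit)))
    t = min(sig(abs(pitch), max_slope_deg, 0.5),
            sig(abs(roll), max_slope_deg, 0.5),
            sig(bulge, clearance, 20.0))
    return 0.0 if t < 0.05 else t  # least favorable criterion, then thresholded
```

Evaluating this for the eight discrete headings at every cell yields the directional traversability maps used by the planner.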
5.2 Planner
To test our approach, we implemented a grid-based path planner to determine the path of least cost in our gridded map. The cost at each node is computed as follows:
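The original node-cost formula is not reproduced here; as a stand-in, the sketch below assumes a cost combining the traversability and vegetationess maps (higher vegetationess and lower traversability cost more, and non-traversable cells are impassable) and searches the 8-connected grid with Dijkstra's algorithm. The cost expression and weight are assumptions.

```python
import heapq

def plan(trav, veg, start, goal, w_veg=2.0, eps=1e-3):
    """Dijkstra search over an 8-connected grid of cost cells.

    trav, veg: 2-D lists of traversability in [0, 1] and vegetationess in
    [0, 1]. Assumed cell cost: 1/(traversability + eps) + w_veg * vegetationess.
    Returns the path as a list of (row, col), or None on failure.
    """
    h, w = len(trav), len(trav[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == goal:
            break
        if d > dist.get((i, j), float("inf")):
            continue  # stale queue entry
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di, dj) == (0, 0) or not (0 <= ni < h and 0 <= nj < w):
                    continue
                if trav[ni][nj] <= 0.0:
                    continue  # non-traversable cell is impassable
                step = (di * di + dj * dj) ** 0.5
                nd = d + step * (1.0 / (trav[ni][nj] + eps) + w_veg * veg[ni][nj])
                if nd < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)] = nd
                    prev[(ni, nj)] = (i, j)
                    heapq.heappush(pq, (nd, (ni, nj)))
    if goal not in dist:
        return None  # failure: no path can be generated from the cost map
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

Returning `None` when the goal is unreachable matches the failure criterion used in the evaluation below: a failure is recorded whenever a path cannot be generated from the cost map.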
We tested the relative influence of the traversability and vegetationess maps on the results produced by the path planner with the Yuma data set. We performed 3 different tests using 47 pairs of starting/ending points, selected randomly in the scene. We computed a path for each of them using three different sets of maps: 1) the 8 directional traversability maps and the vegetationess map; 2) one directional map and the vegetationess map; 3) the 8 directional traversability maps only. Each path produced, 141 in total, has been evaluated visually using a high resolution aerial image (17 cm) as ground truth. The corresponding failure rates are 4.2%, 19.2% and 4.2% for an average path length of 529 m, 535 m and 522 m respectively. In all cases, a valid path is known to exist and a failure is recorded whenever a path cannot be generated from the cost map.
The importance of using the directional maps is clear from the experiments. The role of the vegetation map is not critical on the Yuma data set because of the nature of the terrain: desert with tall bushes and trees and sparse vegetation. There is no continuous tree canopy as in the example presented in the next section.
5.3 Example with the APHill Data Set
Figure 2 presents an example of a path obtained with our planner using the APHill data set. The terrain is made of tall trees producing a continuous canopy cover of the ground. The density of foliage is such that sufficient ground points were sensed to reconstruct the terrain below the canopy. Figure 2-(a) represents the vegetationess map. The path computed by the path planner, between 15 waypoints, is overlaid in black. The total cumulative distance is 152 m. Figure 2-(b) shows the traversability map computed after filtering. The green areas are traversable regions and the dark red terrain is non-traversable. Blue points are part of a water pond. Without filtering the vegetation, only the dirt roads appear traversable. The interest of the method is explicit in this example. During the demonstration a different path planner, developed by NIST, was used to produce a similar path passing through the same waypoints [2]. The robot actually navigated autonomously along this path, avoiding local obstacles, such as small obstacles and overhanging branches, not perceived in the aerial LADAR data.
(a) Vegetation map (b) Traversability map and path planned

Fig. 2. Path planning and vegetation filtering, APHill data set. (a) Vegetationess map with a color scale from green to red for no vegetation to high density vegetation; the path planned is in black. (b) Traversability map, from green to red for traversable to non-traversable terrain.
6 Conclusion
In this article, we presented results on the use of aerial LADAR data for 3-D terrain registration and path planning, obtained during two different field demonstrations using a ground mobile robot. We showed that because vegetation hides potential obstacles, masks the ground terrain features, and introduces artifacts which mislead point selection strategies, it is a major problem for mobile robot navigation in natural environments. We presented the methods used to filter vegetation both for ground and aerial LADAR data. Once the ground terrain is recovered, we have been able to produce ground-ground and air-ground 3-D terrain registration for terrains in which registration would not be possible without vegetation filtering. We proposed to use the ground terrain recovered below the canopy for path planning. We demonstrated with an autonomous ground robot that this new approach is viable.
Although encouraging, these results are still limited, and work still needs to be done to further evaluate performance and to extend the approach to reach operational performance. Key directions of future work include: systematic evaluation of our approach with correlation to terrain ground truth, making the vegetation filtering method invariant to the terrain slope, and improvement of path planning by taking into account the clearance between the terrain and the bottom of the canopy.
Acknowledgments
This project was funded by DARPA under the PerceptOR program, under subcontract to General Dynamics Robotic Systems. This work would not have been possible without the help of William Klarquist from PercepTEK.
References
1. E.P. Baltsavias, "Airborne laser scanning: existing systems and firms and other resources", ISPRS Journal of Photogrammetry & Remote Sensing, vol. 54, 1999.
2. S. Balakirsky and A. Lacaze, "World modeling and behavior generation for autonomous ground vehicle", IEEE International Conference on Robotics and Automation, 2000.
3. A. Castano and L. Matthies, "Foliage Discrimination using a Rotating Ladar", International Conference on Robotics and Automation, 2003.
4. D. Coombs, K. Murphy, A. Lacaze and S. Legowik, "Driving autonomously offroad up to 36 km/hr", IEEE Intelligent Vehicles Symposium, 2000.
5. W. Eckstein and O. Munkelt, "Extracting objects from digital terrain models", Remote Sensing and Reconstruction for Three-Dimensional Objects and Scenes, SPIE Proceedings vol. 2572, 1995.
6. S.O. Elberink and H.G. Maas, "The use of anisotropic height texture measures for the segmentation of airborne laser scanner data", International Archives of Photogrammetry and Remote Sensing, vol. XXXIII.
7. M. Hebert and N. Vandapel, "Terrain Classification Techniques from LADAR Data for Autonomous Navigation", Collaborative Technology Alliance Conference, 2003.
8. J. Hyyppa, O. Kelle, M. Lehikoinen and M. Inkinen, "A segmentation-based method to retrieve stem volume estimates from 3-D tree height models produced by laser scanners", IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 5, May 2001.
9. D. Huber and M. Hebert, "A new approach to 3-D terrain mapping", IEEE/RSJ International Conference on Intelligent Robots and Systems, 1999.
10. J. Huang, A.B. Lee and D. Mumford, "Statistics of range images", IEEE International Conference on Computer Vision and Pattern Recognition, 2000.
11. A. Johnson, "Surface landmark selection and matching in natural terrain", IEEE International Conference on Computer Vision and Pattern Recognition, 2000.
12. A. Johnson, "Spin-Images: a representation for 3-D surface matching", Ph.D. Thesis, Carnegie Mellon University, 1997.
13. K. Kraus and N. Pfeifer, "Determination of terrain models in wooded areas with airborne laser scanner data", ISPRS Journal of Photogrammetry & Remote Sensing, vol. 53, 1998.
14. M.A. Lefsky et al., "Lidar remote sensing of the canopy structure and biophysical properties of Douglas-fir western hemlock forests", Remote Sensing of Environment, vol. 70, 1999.
15. P. Lohmann, A. Koch and M. Shaeffer, "Approaches to the filtering of laser scanner data", International Archives of Photogrammetry and Remote Sensing, vol. XXXIII, 2000.
16. J. Macedo, R. Manduchi and L. Matthies, "Laser-based discrimination of grass from obstacles for autonomous navigation", International Symposium on Experimental Robotics, 2000.
17. B. Petzold, P. Reiss and W. Stossel, "Laser scanning: surveying and mapping agencies are using a new technique for the derivation of digital terrain models", ISPRS Journal of Photogrammetry and Remote Sensing, 2000.
18. M. Shneier et al., "A repository of sensor data for autonomous driving research", SPIE Aerosense Conference, 2003.
19. G. Sithole, "Filtering of laser altimetry data using a slope adaptive filter", ISPRS Workshop on Land Surface Mapping and Characterization Using Laser Altimetry, 2001.
20. N. Vandapel and M. Hebert, "3-D rover localization in airborne LADAR data", International Symposium on Experimental Robotics, 2002.
S. Yuta et al. (Eds.): Field and Service Robotics, STAR 24, pp. 113–122, 2006.
© Springer-Verlag Berlin Heidelberg 2006
Autonomous Detection of Untraversability of the Path
on Rough Terrain for the Remote Controlled Mobile Robots
Kazuma Hashimoto and Shin'ichi Yuta
Abstract. A mobile robot that traverses rough terrain should have the ability to recognize the shape of the ground surface and to examine traversability by itself. Even if the robot is remotely controlled by an operator, such an ability is still very important to reduce the operator's load and to keep the robot safe. For this purpose, the robot measures the ground in front of it, generates a local elevation map representing the area it is going to pass, and examines whether the wheels can pass through and whether the bottom surface of the robot body does not contact the ground. This paper reports a simple method of traversability testing for a wheeled mobile robot and shows an experimental system with some results.
1 Introduction
We are investigating mobile robots that work based on a combination of remote operation and autonomous control on rough terrain. The lunar rover is one example. For such vehicles, the operator remotely gives a path, or a sequence of sub-goals, which the robot should track. However, since the environment is irregular and not well known, the operator may not always give a safe path. Therefore, the robot has to keep itself safe even while it moves according to the operator's commands.

For this purpose, the robot should continuously observe the ground surface in front of it and check its traversability. When the robot judges that it is not safe to traverse, it should stop and wait for a new command from the operator.

This paper reports a simple method by which a wheeled mobile robot tests traversability by itself, and shows an experimental system with some results.
Fig. 2. Sensing front
The traversability of the front ground is tested by the following steps while the robot moves on rough terrain:

1. While moving forward, the robot always observes a single lateral plane with a certain inclination in front of the robot, and measures the range to the ground surface on this plane from the robot (Fig. 1).
2. The robot continuously generates a local elevation map of the ground surface in the area in front of the robot by integrating the measured data.
3. The robot examines the possibility of traveling over the given path at a certain distance in front of the robot.
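The check in step 3 can be sketched in code. The sketch below is ours, not the paper's: it assumes the elevation map is a dictionary from grid-cell indices to heights, and the cell size, step threshold, and helper names are illustrative values.

```python
CELL = 0.10      # map resolution in metres (illustrative assumption)
MAX_STEP = 0.15  # height change the wheels can climb (illustrative assumption)

def cell(x, y):
    """Index of the grid cell containing the ground point (x, y)."""
    return (round(x / CELL), round(y / CELL))

def path_is_traversable(elev_map, path_xy, max_step=MAX_STEP):
    """Step 3: walk the commanded path through the elevation map and
    reject it if the height jump between successive known cells is too
    large for the wheels. This is a simplified stand-in for the paper's
    full test of wheel passage and body-bottom contact."""
    heights = [elev_map[cell(x, y)] for x, y in path_xy if cell(x, y) in elev_map]
    return all(abs(b - a) <= max_step for a, b in zip(heights, heights[1:]))

# usage: a gentle 5 cm rise passes, a 40 cm step does not
emap = {cell(0.0, 0.0): 0.00, cell(0.5, 0.0): 0.05, cell(1.0, 0.0): 0.45}
print(path_is_traversable(emap, [(0.0, 0.0), (0.5, 0.0)]))              # True
print(path_is_traversable(emap, [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]))  # False
```

The paper's actual test also checks that the robot's bottom surface does not contact the ground; this sketch reduces the decision to a single height-step limit for brevity.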
2.1 Sensing Front
The robot uses a line distance sensor, which is attached to the front top of the robot body and heads towards the front ground. It measures the range to the ground surface along a single lateral line ahead of the robot. The measurements are converted to position data of the ground surface in the global coordinate frame, using the robot position and posture information, and stored to generate a two-dimensional elevation map.
Let the global coordinate frame be denoted (XGL, YGL, ZGL) and the robot coordinate frame (XR, YR, ZR), whose origin is the center of the robot bottom surface and whose X axis points towards the robot front (Figure 2). The line range sensor is attached to the robot at (xs, 0, zs) in robot coordinates, facing the front ground with a slant angle β rotated around the Y axis. The sensor measures the distance L to the ground point, shown as P in Figure 2, for each direction α.
When the robot position in the global coordinate frame is given as (xr, yr, zr) and the robot posture is given as (φ, θ, ψ) by Z-Y-X Euler angles, the position of point P is calculated as

  [Px  Py  Pz]ᵀ = T [L cos α  L sin α  1]ᵀ  (1)

where

  T11 = cos ψ cos θ cos β − sin β (sin ψ sin φ + cos ψ sin θ cos φ)  (2)
  T12 = cos ψ sin θ sin φ − sin ψ cos φ  (3)
  T13 = xr + xs cos ψ cos θ + zs (sin ψ sin φ + cos ψ sin θ cos φ)  (4)
  T21 = sin ψ cos θ cos β − sin β (−cos ψ sin φ + sin ψ sin θ cos φ)  (5)
  T22 = sin ψ sin θ sin φ + cos ψ cos φ  (6)
  T23 = yr + xs sin ψ cos θ + zs (−cos ψ sin φ + sin ψ sin θ cos φ)  (7)
  T31 = −sin θ cos β − sin β cos θ cos φ  (8)
  T32 = cos θ sin φ  (9)
  T33 = zr − xs sin θ + zs cos θ cos φ  (10)
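These expressions can be cross-checked numerically by composing the rotations directly. The sketch below is our code, assuming the measurement model implied by the elements above: the sensor-frame point (L cos α, L sin α, 0) is tilted by β about the Y axis, offset by (xs, 0, zs), rotated by the Z-Y-X Euler rotation, and translated by the robot position.

```python
import numpy as np

def ground_point(robot_pos, euler_zyx, sensor_offset, beta, alpha, L):
    """Position of ground point P in the global frame.

    robot_pos     -- (xr, yr, zr), robot position in the global frame
    euler_zyx     -- (phi, theta, psi): Z-Y-X Euler angles of the posture
    sensor_offset -- (xs, 0, zs), sensor mount point in robot coordinates
    beta          -- sensor slant angle about the robot Y axis
    alpha, L      -- scan direction and measured range
    """
    phi, theta, psi = euler_zyx
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    # Z-Y-X Euler rotation: R = Rz(psi) @ Ry(theta) @ Rx(phi)
    R = np.array([[cp*ct, cp*st*sf - sp*cf, cp*st*cf + sp*sf],
                  [sp*ct, sp*st*sf + cp*cf, sp*st*cf - cp*sf],
                  [-st,   ct*sf,            ct*cf]])
    cb, sb = np.cos(beta), np.sin(beta)
    Ry_beta = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    # measured point in the sensor's scan plane
    p_sensor = np.array([L * np.cos(alpha), L * np.sin(alpha), 0.0])
    return np.asarray(robot_pos) + R @ (np.asarray(sensor_offset) + Ry_beta @ p_sensor)
```

With zero posture angles this reduces to (xr + xs + L cos α cos β, yr + L sin α, zr + zs − L cos α sin β), a point ahead of and below the sensor, matching the elements T11 and T13 above when ψ = θ = φ = 0.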
As a sensor for such range measurement, a stereo camera, a mechanically scanning laser range sensor, or a light-plane intersecting method can be considered. The robot posture (φ, θ, ψ) can be measured by accumulating the angular velocities obtained from a gyro sensor and/or by using the direction to reference landmarks. The robot position is estimated by accumulating the motion data together with the posture information.

2.2 Making Elevation Map
The robot continuously collects data on the front ground while it travels and integrates the data to generate a local elevation map. The elevation map represents the height of the ground surface over a global two-dimensional plane [1]. To generate the map, a two-dimensional array of cells is defined first; when the robot obtains three-dimensional position information of points on the ground surface, it registers them as height information in the corresponding cells. Through continuous measurement while the robot moves forward, each cell of the map acquires an elevation value.
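The cell registration described above can be sketched as follows. The cell size and data layout are our assumptions; each cell simply accumulates its height samples, and the averaging in height() is a placeholder, since the paper's own rule for a cell with multiple height data is specified after this passage.

```python
import math
from collections import defaultdict

CELL = 0.10  # illustrative grid resolution in metres (assumption)

class ElevationMap:
    """Two-dimensional array of cells holding ground-surface height samples."""

    def __init__(self):
        self.cells = defaultdict(list)   # (i, j) cell index -> list of heights

    def register(self, x, y, z):
        """Store a measured ground point (global frame) in its cell."""
        i, j = math.floor(x / CELL), math.floor(y / CELL)
        self.cells[(i, j)].append(z)

    def height(self, x, y):
        """Elevation of the cell containing (x, y); None if never observed."""
        samples = self.cells.get((math.floor(x / CELL), math.floor(y / CELL)))
        if not samples:
            return None
        # placeholder combination rule: average of the samples
        return sum(samples) / len(samples)

m = ElevationMap()
m.register(0.32, 0.11, 0.05)
m.register(0.34, 0.13, 0.07)          # lands in the same cell as the first point
print(round(m.height(0.33, 0.12), 3))  # 0.06
print(m.height(2.0, 2.0))              # None
```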
In case a grid cell has multiple height data, the resultant height of the cell is decided as: