A Combination of Terrain Prediction and Correction for Search and Rescue Robot Autonomous Navigation
International Journal of Advanced Robotic Systems, Vol. 6, No. 3 (2009), ISSN 1729-8806, pp. 207-214
Yan Guo1, Aiguo Song1, Jiatong Bao1, Hongru Tang2 and Jianwei Cui1
1 School of Instrument Science and Engineering, Southeast University, China
2 School of Energy and Power Engineering, Yangzhou University, China
Corresponding author E-mail: a.g.song@seu.edu.cn, y.guo@seu.edu.cn
Abstract: This paper presents a novel two-step autonomous navigation method for a search and rescue robot. First, a vision-based algorithm is proposed for terrain identification to give a prediction of the safest path, using a support vector regression machine (SVRM) trained off-line on texture and color features. Second, a correction algorithm for the prediction, based on vibration information, is applied while the robot travels, using the judgment function given in the paper. A region with a faulty prediction is corrected with its real traversability value and used to update the SVRM. The experiment demonstrates that this method helps the robot find the optimal path and protects it from traps caused by the error between the prediction and the real environment.
Keywords: mobile robot, image analysis, terrain prediction/correction, navigation
1 Introduction
The search and rescue robot is a special type of mobile robot, applied in search and rescue work after natural or man-made disasters such as earthquakes, hurricanes, debris flows and mine collapses. Most search and rescue robots are remotely operated, and more and more research has focused on autonomous navigation, which means running from the start point to a specified point autonomously and safely. Autonomous navigation reflects the intelligence level of a search and rescue robot. Working in an unstructured environment, the robot has to sense its surroundings and decide how to reach the target safely; the main difficulties are obstacles, such as rocks and vegetation, and untraveled regions. For this reason it is beneficial for a search and rescue robot to know which way is the safest.
To find the safest path, ladar sensors were used to segment the ground surface from vegetation and from rocks and trunks (Talukder, A., Manduchi, R., Rankin, A., Matthies, L., 2002) (Hebert, M., Vandapel, N., 2003) (Vandapel, N., Huber, D., Kapuria, A., Hebert, M., 2004) (Manduchi, R., Castano, A., Taluker, A., Matthies, L., 2005). This method depends on the positions of the targets, in other words the distance between the obstacles and the robot. It can only identify whether and where obstacles are; it cannot describe their types and gives no indication of whether the obstacles are fatal or traversable. Vision-based terrain classification and prediction have received more and more attention from researchers in the last few years (Bellutta, P., Manduchi, R., Matthies, L., Owens, K., Rankin, A., 2000) (Castano, R., Manduchi, R., Fox, J., 2001). One approach uses natural features, such as color, shape and texture, to divide the terrain into two classes, travelable and non-travelable (Angelova, A., Matthies, L., Helmick, D., Perona, P., 2007). Robot-terrain interaction parameters associated with training images have been used to visually forecast terrain traversability (Seraji, H., 1999) (Kim, D., Sang, M. O., James, M. R., 2007) (Poppingga, J., Birk, A., Pathak, K., 2008). All these methods try to build a perfect prediction of the terrain in front of the robot, but it is hard to verify that the prediction is always correct. For example, a road and water both covered with leaves can hardly be distinguished just from ladar data and visual images.
Some other mobile robot research groups focus on vibration-based methods. Vibration-based terrain classification was first suggested by Iagnemma and Dubowsky (Iagnemma, K., Dubowsky, S., 2002). The method collects vibration data while the robot is running, reduces the dimensionality of the data by Principal Component Analysis (PCA) and uses Linear Discriminant Analysis (LDA) for classification (Brooks, C. A., Iagnemma, K., 2005). A Support Vector Machine (SVM) has also been used to classify terrain for a mobile robot (Weiss, C., Fröhlich, H., Zell, A., 2006). But all these methods recognize the terrain type through vibration data only while the robot covers the region of interest. This means the robot can only know the terrain types of the regions it has already passed through or is standing on, and it has no ability to understand the future path.
We propose an alternative approach which includes two steps for the autonomous navigation of a search and rescue robot. First, a vision-based method is proposed for terrain identification to give a prediction of the safest path. In an off-line training phase, the Support Vector Regression Machine (SVRM) is trained on a set of features extracted from images in our terrain database. Once the SVRM is trained, newly collected images during the running of the robot can be evaluated online and the optimal solution of the safest path resolved. Then, while traveling along the prediction, the robot collects vibration data to judge the abovementioned prediction. If the judgment indicates a mismatch, the robot has enough reason to believe the previous prediction incorrect and must stop. For the region with the faulty prediction, the features and the real traversability are collected as a new data point added to the training database. The SVRM is then recalculated and updated, and the robot computes the prediction with the up-to-date SVRM.
The rest of this paper is organized as follows. In section 2, we describe our approach to terrain prediction. In section 3, the method for correcting the prediction of section 2 is developed. Section 4 presents our experimental results. Section 5 concludes the paper and suggests future work.
2 Terrain prediction
As humans, we recognize paths with our vision: we find the optimal path from the images captured by our eyes, depending on experience established in the past. Following this idea, we extract several features from the images captured by the onboard camera, and the optimal path is determined by a classification based on the extracted features. The classifier is trained with the method of support vector regression.
2.1 Features Extraction
The color and texture features are considered significant for the images captured by the onboard camera. The entries of the feature representation are the following (Gonzalez, R. C., Woods, R. E., Eddins, S. L., 2005):
1. The average value r of the red content in the image.
2. The average value g of the green content in the image.
3. The average value b of the blue content in the image.
4. The mean m of the gray image. This feature is a measurement of average intensity:

m = ∑_{z_i ∈ H} z_i p(z_i)    (1)

5. The standard deviation σ of the gray image. This feature is a measurement of average contrast:

σ = ( ∑_{z_i ∈ H} (z_i − m)² p(z_i) )^{1/2}    (2)
6. The smoothness R of the gray image. This feature is a measurement of the relative smoothness of the intensity in a region; R ∈ [0, 1], R is 0 for a region of constant intensity and approaches 1 for regions with large excursions in the values of its intensity levels:

R = 1 − 1 / ( 1 + ∑_{z_i ∈ H} (z_i − m)² p(z_i) )    (3)
7. The third moment μ₃. This feature is a measurement of the skewness of a histogram; μ₃ is 0 for symmetric histograms, positive for histograms skewed to the right about the mean, and negative for histograms skewed to the left:

μ₃ = ∑_{z_i ∈ H} (z_i − m)³ p(z_i)    (4)
8. The uniformity U. This feature is a measurement of the uniformity of the intensity histogram and is maximal when all the gray levels are equal:

U = ∑_{z_i ∈ H} p²(z_i)    (5)
9. The entropy e. This feature is a measurement of randomness over all gray levels of the intensity histogram:

e = −∑_{z_i ∈ H} p(z_i) log₂ p(z_i)    (6)
In equations (1) ~ (6), H is the set of intensity levels, z_i is a random variable indicating intensity, and p(z_i) is the histogram of the intensity levels.
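As an illustration, the nine features can be computed from an image and its gray-level histogram as in the sketch below. This is a hypothetical helper, not the authors' code; the gray conversion (channel mean) and the use of the unnormalized variance in the smoothness term are assumptions.

```python
import numpy as np

def terrain_features(rgb):
    """Nine color/texture features of an RGB image (H x W x 3, uint8).

    Features 4-9 follow the statistical texture descriptors of
    Gonzalez/Woods/Eddins, computed from the gray-level histogram p(z_i).
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0].mean(), rgb[..., 1].mean(), rgb[..., 2].mean()

    gray = rgb.mean(axis=2).astype(np.uint8)      # simple gray conversion (assumption)
    hist = np.bincount(gray.ravel(), minlength=256)
    p = hist / hist.sum()                         # normalized histogram p(z_i)
    z = np.arange(256, dtype=np.float64)

    m = np.sum(z * p)                             # (1) mean intensity
    var = np.sum((z - m) ** 2 * p)
    sigma = np.sqrt(var)                          # (2) standard deviation
    R = 1.0 - 1.0 / (1.0 + var)                   # (3) smoothness (variance often normalized in practice)
    mu3 = np.sum((z - m) ** 3 * p)                # (4) third moment (skewness)
    U = np.sum(p ** 2)                            # (5) uniformity
    e = -np.sum(p[p > 0] * np.log2(p[p > 0]))     # (6) entropy

    return np.array([r, g, b, m, sigma, R, mu3, U, e])
```

For a constant-intensity image the sketch returns σ = 0, R = 0, U = 1 and e = 0, matching the limiting cases stated above.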
Using these nine features, we create the training and test raw vector v to describe the feature information of each image:

v = (r, g, b, m, σ, R, μ₃, U, e)    (7)
To describe the traversability of the terrain the robot covers, the standard deviations of the angular accelerations of roll and pitch are adopted. As shown in Fig. 1, φ is the roll and θ is the pitch. The traversability is then

T = 1 − ( (1/N) ∑_{i=1}^{N} φ_i² + (1/N) ∑_{i=1}^{N} θ_i² )^{1/2}    (8)

where φ_i and θ_i are the sampled roll and pitch angular accelerations and N is the sample number. The traversability T represents the difficulty with which the robot passes through the region.
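A minimal sketch of equation (8), under the assumption that the mean angular accelerations are near zero so the standard deviations reduce to root-mean-square values; the exact scaling in the paper may differ.

```python
import numpy as np

def traversability(roll_acc, pitch_acc):
    """Traversability T from sampled roll/pitch angular accelerations (eq. 8).

    Smooth ground (small vibrations) gives T near 1; rough ground
    drives T toward 0.
    """
    roll_acc = np.asarray(roll_acc, dtype=float)
    pitch_acc = np.asarray(pitch_acc, dtype=float)
    n = roll_acc.size
    return 1.0 - np.sqrt(np.sum(roll_acc ** 2) / n + np.sum(pitch_acc ** 2) / n)
```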
2.2 SVRM Training
The Support Vector Regression Machine (SVRM) belongs to the family of kernel methods. The key idea is to transfer the nonlinear problem to a high-dimensional feature space, via a mapping based on a kernel function, where an approximately linear relationship between inputs and targets can be found. SVRM training is a convex quadratic optimization, so its solution is globally optimal.
Fig 1 The coordinate frame of the robot
Given the dataset points {(v₁, T₁), (v₂, T₂), …, (v_n, T_n)}, where n is the number of dataset points, v_i ∈ R⁹, i = 1, 2, …, n, is the ith input (the nine-feature vector) and T_i ∈ R, i = 1, 2, …, n, is the ith target output. The standard formulation of SVRM (Vapnik, V., 1998) is:
min over w, b, ξ, ξ*:  (1/2) wᵀw + C ∑_{i=1}^{n} (ξ_i + ξ_i*)

subject to:
T_i − (wᵀφ(v_i) + b) ≤ ε + ξ_i
(wᵀφ(v_i) + b) − T_i ≤ ε + ξ_i*
ξ_i, ξ_i* ≥ 0,  i = 1, …, n    (9)
The dual is:

min over α, α*:  (1/2)(α − α*)ᵀ Q (α − α*) + ε ∑_{i=1}^{n} (α_i + α_i*) − ∑_{i=1}^{n} T_i (α_i − α_i*)

subject to:
∑_{i=1}^{n} (α_i − α_i*) = 0,  0 ≤ α_i, α_i* ≤ C,  i = 1, …, n    (10)
where Q_ij = K(v_i, v_j). The approximate function is:

f(v) = ∑_{i=1}^{n} (−α_i + α_i*) K(v_i, v) + b    (11)
In off-line training, the image of one region is captured by the onboard camera and the features of equation (7) are extracted. Then the robot traverses this region and the onboard Inertia Measurement Unit (IMU) records the roll and pitch angular accelerations. The standard deviations of the angular accelerations of roll and pitch, in other words the real traversability of this region, are calculated with equation (8). Following this method, the training dataset points (v, T) are collected for several different regions, and the result of the training is used in the prediction of the optimal path. As the SVRM implementation we use LIBSVM (Chang, C.C., Lin, C.J., 2009).
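Once a solver such as LIBSVM has produced the dual coefficients, the prediction of equation (11) is a kernel expansion over the support vectors. The sketch below evaluates that expansion with an RBF kernel; the coefficient values are supplied directly for illustration rather than obtained from a trained model.

```python
import numpy as np

def rbf_kernel(a, b, sigma=0.1):
    """K(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / (2.0 * sigma ** 2))

def svr_predict(v, support_vectors, dual_coefs, bias, sigma=0.1):
    """Evaluate f(v) = sum_i (alpha_i* - alpha_i) K(v_i, v) + b (eq. 11).

    dual_coefs holds the differences (alpha_i* - alpha_i), as an SVR
    solver such as LIBSVM would produce them.
    """
    return sum(c * rbf_kernel(sv, v, sigma)
               for sv, c in zip(support_vectors, dual_coefs)) + bias
```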
2.3 Optimal Path
The robot takes a photo of the scene in front of it and divides it into M×N sub-regions. The features of each sub-region image are extracted with the abovementioned method, and the trained SVRM then calculates the traversability prediction from these features. When the traversability predictions of all the sub-regions have been received, the optimal regions are taken to be the sub-regions with the highest traversability prediction value in each row, and the optimal path is the one covering these optimal regions, as shown in Fig. 2.

Fig 2 The optimal regions and path
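The row-wise selection of optimal regions described above can be sketched as:

```python
import numpy as np

def optimal_path(pred):
    """Pick the sub-region with the highest traversability prediction in
    each row of an M x N prediction grid; the optimal path is the
    resulting sequence of (row, col) regions from near to far."""
    pred = np.asarray(pred, dtype=float)
    return [(i, int(np.argmax(row))) for i, row in enumerate(pred)]
```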
3 Correction of prediction
The optimal path developed in section 2 is a prediction of the terrain ahead, depending on the experience the robot received before, i.e., the off-line training. Obviously, the prediction cannot match the real situation completely. Because the robot's experience is limited, some traps cannot be identified just from the image features. So we develop a method to correct the error between the prediction and the real environment.
Slip is one of the fatal situations for the search and rescue robot; it results in the loss of traveling ability and failure to complete the task. The slip is defined as follows:

S = ( rω − ( ∫_{τ−1}^{τ} a_Y dt + v_{τ−1} ) ) / (rω)    (12)

where r is the radius of the driving wheel, ω is the angular velocity measured with the onboard encoder, a_Y is the acceleration along the Y axis measured with the IMU, τ is the sample time, and v_{τ−1} is the actual velocity of the robot at the last sample time. S ∈ [0, 1]; S = 0 means no slip has occurred and S = 1 means there is complete slip between the robot and the ground.
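A sketch of the slip estimate in equation (12), under the assumption that the body velocity is obtained by integrating a_Y over one sample period; the function and parameter names are illustrative.

```python
def slip_ratio(r, omega, a_y_samples, dt, v_prev):
    """Slip S (eq. 12): compare the wheel speed r*omega with the actual
    body speed integrated from the IMU acceleration a_Y.

    S = 0 -> no slip; S = 1 -> tracks spin with no forward motion.
    """
    # Integrate a_Y over the sample period, starting from the last velocity.
    v_actual = v_prev + sum(a * dt for a in a_y_samples)
    s = (r * omega - v_actual) / (r * omega)
    return min(max(s, 0.0), 1.0)  # clamp to [0, 1]
```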
We develop a judgment function to judge the error between the prediction and the real traversability, using the parameters X, X* and S, which denote the real traversability measured with the onboard IMU, the traversability prediction and the slip. The judgment function is:

f(X, X*) = −K(X, X*) + βφ(S)    (13)

K(X, X*) is a kernel function; we use the radial basis function exp(−‖X − X*‖² / 2σ²) here. φ(S) is a response function and β is a scale coefficient; we use the sign function sgn(S − α) here, where α is the threshold of slip.
Substituting, the judgment function becomes:

f(X, X*) = −exp(−‖X − X*‖² / 2σ²) + β sgn(S − α)    (14)
The threshold of the judgment function consists of the kernel function threshold and the response function threshold:

f_threshold(X, X*) = −K_threshold(X, X*) + βφ_threshold(S)    (15)

Because the value of φ(S) is 1 or −1, for the robot to travel safely φ(S) must be −1, so φ_threshold(S) = −1. The threshold of the judgment function then depends only on the kernel function threshold and the scale coefficient:

f_threshold(X, X*) = −(K_threshold(X, X*) + β)    (16)
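Equations (14) and (16) can be sketched for scalar traversability values as follows; the parameter defaults follow the experiment in section 4, and the scalar treatment of X and X* is an assumption.

```python
import math

def judgment(X, X_star, S, sigma=0.1, alpha=0.2, beta=0.2):
    """f(X, X*) = -exp(-(X - X*)^2 / (2 sigma^2)) + beta * sgn(S - alpha)  (eq. 14),
    for the real traversability X and the predicted traversability X*."""
    k = math.exp(-(X - X_star) ** 2 / (2.0 * sigma ** 2))  # RBF kernel K(X, X*)
    phi = 1.0 if S > alpha else -1.0                       # response phi(S) = sgn(S - alpha)
    return -k + beta * phi

def judgment_threshold(k_threshold=0.8, beta=0.2):
    """f_threshold = -(K_threshold + beta)  (eq. 16, with phi_threshold = -1)."""
    return -(k_threshold + beta)
```

When the prediction matches reality and there is no slip, f stays below the threshold; a large mismatch or serious slip pushes it above.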
When the robot travels along the optimal path, it keeps measuring the slip and the real traversability and calculates the error using equation (14). If the result of the judgment function is below the given threshold, the prediction and the real situation match. Otherwise, the prediction does not describe the situation of the region and is inaccurate. To prevent the robot from losing control in dangerous terrain, our strategy is that the robot must stop and move to the block beside the current one. Moving to the left or right depends on the location of the next optimal region: we minimize the distance between the new block and the next optimal region. For the region with the faulty prediction, the features and the real traversability are collected as a new data point added to the training database. The SVRM is recalculated and updated, and the robot computes the prediction with the up-to-date SVRM. This process is shown in Fig. 3.
Fig 3 Error correction of the prediction
4 Experiment
The proposed method has been applied to a field terrain test for the purpose of autonomous navigation. The search and rescue robot we designed ourselves (Guo, Y., Bao, J.T., Song, A.G., 2009), shown in Fig. 4, is used in the experiment. The robot is driven with tracks and carries a CCD camera on top and an IMU (Crossbow VG400) inside.
4.1 Off-line Training
We use the onboard camera of the robot to take photos of the terrain and extract the features described in section 2 to form the feature vectors. Then the robot traverses the terrain shown in the photos and collects the standard deviations of the angular accelerations of pitch and roll. The feature vectors and the standard deviations compose the training points. We collect 300 training points for the off-line training; part of them are shown in Table 1.
For the parameters of SVRM, σ = 0.1, C ∈ [0.1, 1] and ε ∈ [0.01, 0.5]. Using different values of C and ε, the figure of mean squared error is developed and shown in Fig. 5. In Fig. 5, we obtain the optimal parameters C and ε by searching the mesh for the parameter point with the minimal mean squared error. The optimal parameter point is (C, ε) = (1, 0.5).
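The mesh search over (C, ε) can be sketched generically as below; `train` and `evaluate` are placeholders standing in for the actual SVRM fitting and mean-squared-error evaluation.

```python
def grid_search(train, evaluate, Cs, epsilons):
    """Search the (C, epsilon) mesh for the minimal mean squared error,
    as in Fig. 5. train(C, eps) fits a model; evaluate(model) returns
    its MSE. Returns (best_mse, best_C, best_eps)."""
    best = None
    for C in Cs:
        for eps in epsilons:
            mse = evaluate(train(C, eps))
            if best is None or mse < best[0]:
                best = (mse, C, eps)
    return best
```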
Fig 4 The picture of the search and rescue robot
Fig 5 Mean squared error for different values of C and ε
No r g b m σ R μ₃ U e T
1 91.56 69.69 62.35 75.31 34.02 0.0175 0.515 0.00933 6.975 0.7812
2 104.70 84.37 77.11 89.78 40.42 0.0245 0.596 0.00778 7.240 0.7761
3 119.50 94.88 85.80 101.36 33.28 0.0167 0.424 0.0094 8.988 0.8977
4 121.44 98.72 90.76 104.55 33.71 0.0172 0.395 0.0091 7.019 0.9056
5 110.45 83.36 72.22 90.22 32.02 0.0155 0.388 0.0100 6.918 0.8813
6 97.28 69.67 58.53 76.89 30.42 0.0140 0.478 0.0113 6.787 0.8673
7 83.14 58.55 46.23 64.34 23.33 0.0083 0.164 0.0135 6.479 0.8096
8 71.19 51.79 41.33 56.33 22.24 0.0075 0.147 0.0143 6.400 0.7834
9 63.48 45.98 36.33 50.42 22.48 0.0077 0.209 0.0147 6.361 0.6897
10 58.19 48.48 43.31 51.29 26.50 0.0107 0.153 0.0112 6.6425 0.5485
… … … … … …
Table 1 The partial training points
4.2 Autonomous Navigation
The start point and the man-specified goal points are marked in the picture of the experiment field, shown in Fig. 6, which is covered with rubble and sand. The robot should travel from the start point to the first goal point and then to the second goal point. At first, the robot turns around to face the goal, and the camera carried on the robot captures one image ahead, shown in Fig. 7. The image is linearly divided into 5×5 sub-images as the optimal region candidates. Noise is removed from each sub-image with a Gaussian filter.
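The division into 5×5 sub-images can be sketched as below; dropping the edge pixels left over by an uneven split is an assumption.

```python
import numpy as np

def split_subregions(img, rows=5, cols=5):
    """Divide a captured image into rows x cols sub-images, the
    optimal-region candidates. Pixels beyond an even split are dropped."""
    h, w = img.shape[0] // rows, img.shape[1] // cols
    return [[img[i * h:(i + 1) * h, j * w:(j + 1) * w] for j in range(cols)]
            for i in range(rows)]
```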
Fig 6 The experiment field
Fig 7 The image in front of robot
Features are extracted from the sub-images using the algorithm of section 2, and the feature vectors are sent to the off-line trained SVRM to calculate the traversability prediction T; the result is shown in Fig. 8. Searching the mesh of prediction results, the sub-images with the highest traversability prediction in each row are picked as the optimal regions. All the optimal regions compose the optimal path, shown in Fig. 9, where the regions in black boxes are the optimal regions. The optimal regions selected using the color and texture features show that the result of this prediction algorithm is approximately identical to that found using human experience.
Fig 8 The result of prediction
Fig 9 The optimal path based on prediction
Fig 10 Navigation based on the optimal path, including (a)(b)(c)(d)
Fig 11 The acceleration of roll and pitch measured by
IMU
This prediction algorithm also avoids interference in the result from obstacles whose color or texture features differ from those of the terrain environment.
Having received the optimal path, the robot travels to the goal along this path using inertial navigation. The process is shown in Fig. 10. During the traveling, the angular accelerations of roll and pitch are measured by the onboard IMU and shown in Fig. 11. The real traversability of the current region is developed from these data.
For the slip estimation, the real velocity, the actual speed of the robot, is measured by the IMU and shown in Fig. 12 with a blue line. The measured velocity is calculated from the angular velocity measured with the encoder on the robot and describes the actual speed of the driving tracks; it is shown in Fig. 12 with a red line. According to the definition in equation (12), the slip estimation can then be obtained.
To judge the error between the traversability prediction and the real traversability, we use the judgment function of equation (14) with the parameters σ = 0.1, α = 0.2 and β = 0.2. The result of the judgment function is shown in Fig. 13.
Fig 12 The real velocity and measured velocity
Fig 13 The result of judgment function
For the safety of navigation, K_threshold(X, X*) = 0.8, so from equation (16) the threshold is f_threshold(X, X*) = −(K_threshold(X, X*) + β) = −1.0. From the result of the judgment function, we find that the value stays under the threshold while the robot travels in the regions shown in Fig. 10(a)(b)(c), because there the traversability prediction and the real traversability are approximately identical. However, when the robot travels in the region shown in Fig. 10(d), the sand in this region leads to serious slip and the response function φ(S) becomes active, so the value of the judgment function immediately jumps over the threshold.
This region is one with a faulty prediction. Its real traversability value and its color and texture features are collected as a new data point added to the training database, and the SVRM is recalculated and updated.
Because the threshold was exceeded, the robot stops and moves to the block on the right. The camera recaptures an image and the optimal path is calculated with the up-to-date SVRM. The robot then safely travels to the final goal along the new optimal path, as shown in Fig. 14.
We repeated this experiment ten times in the same environment, and the success rate reached 90%.
Fig 14 Recaptured image and new optimal path
5 Conclusion
In this paper, we propose an alternative approach which includes two steps for the autonomous navigation of a search and rescue robot. First, to find features related to the difficulty of traveling, we pick nine color and texture features from the image as feature vectors. The Support Vector Regression Machine (SVRM) is trained to find the relationship between the traveling difficulty and the features. Using the off-line trained SVRM, the traversability prediction is calculated and the optimal path developed. While traveling along the optimal path, the real traversability is obtained from the vibration information measured by the onboard IMU. The slip of the robot is recognized from the real velocity measured by the IMU and the measured velocity calculated from the angular velocity given by the internal encoder. We develop a judgment function with the traversability prediction, real traversability and slip to find prediction faults. It protects the robot from traps caused by prediction error. For the region with a faulty prediction, the features and the real traversability are collected as a new data point added to the training database, and the SVRM is recalculated and updated.
Our method resolves the problem that the prediction algorithm cannot check its prediction result while traveling along the prediction. The experiment demonstrates that this method is effective. However, limited by the performance of the embedded computer system on the robot, the processing speed of the algorithm is not fast enough to allow the robot to travel at high speed. In the future, we will continue working to increase the algorithm's efficiency and decrease its running time.
6 Acknowledgement
This research is made possible with support from the Project under the Science Innovation Program of the Chinese Education Ministry (No. 708045).
7 References
Talukder, A., Manduchi, R., Rankin, A., Matthies, L. (2002). Fast and Reliable Obstacle Detection and Segmentation for Cross-country Navigation. IEEE Intelligent Vehicles Symposium, Versailles, France, 2002.
Hebert, M., Vandapel, N. (2003). Terrain Classification Techniques from Ladar Data for Autonomous Navigation. Collaborative Technology Alliances Conference, 2003.
Vandapel, N., Huber, D., Kapuria, A., Hebert, M. (2004). Natural Terrain Classification using 3-D Ladar Data. IEEE International Conference on Robotics and Automation, New Orleans, USA, 2004.
Manduchi, R., Castano, A., Taluker, A., Matthies, L. (2005). Obstacle Detection and Terrain Classification for Autonomous Off-road Navigation. Autonomous Robots, Vol. 18, pp. 81-102, 2005.
Bellutta, P., Manduchi, R., Matthies, L., Owens, K., Rankin, A. (2000). Terrain Perception for Demo III. IEEE Intelligent Vehicles Symposium, Dearborn, USA, 2000.
Castano, R., Manduchi, R., Fox, J. (2001). Classification Experiments on Real-World Textures. Workshop on Empirical Evaluation in Computer Vision, Kauai, USA, 2001.
Angelova, A., Matthies, L., Helmick, D., Perona, P. (2007). Fast Terrain Classification Using Variable-Length Representation for Autonomous Navigation. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Minneapolis, USA, 2007.
Seraji, H. (1999). Traversability Index: A New Concept for Planetary Rovers. IEEE International Conference on Robotics and Automation, Detroit, USA, 1999.
Kim, D., Sang, M. O., James, M. R. (2007). Traversability Classification for UGV Navigation: A Comparison of Patch and Superpixel Representations. IEEE International Conference on Robotics and Automation, San Diego, USA, 2007.
Poppingga, J., Birk, A., Pathak, K. (2008). Hough Based Terrain Classification for Realtime Detection of Drivable Ground. Journal of Field Robotics, Vol. 25, No. 1, pp. 67-88, 2008.
Iagnemma, K., Dubowsky, S. (2002). Terrain Classification for High-Speed Rough-Terrain Autonomous Vehicle Navigation. SPIE Conference on Unmanned Ground Vehicle Technology IV, 2002.
Brooks, C. A., Iagnemma, K. (2005). Vibration-Based Terrain Classification for Planetary Exploration Rovers. IEEE Transactions on Robotics, Vol. 21, No. 6, pp. 1185-1191, 2005.
Weiss, C., Fröhlich, H., Zell, A. (2006). Vibration-Based Terrain Classification Using Support Vector Machines. IEEE International Conference on Intelligent Robots and Systems, Beijing, China, 2006.
Gonzalez, R. C., Woods, R. E., Eddins, S. L. (2005). Digital Image Processing Using MATLAB. Prentice Hall, Upper Saddle River, NJ, 2005.
Vapnik, V. (1998). Statistical Learning Theory. Wiley, New York, NY, 1998.
Chang, C.C., Lin, C.J. (2009). LIBSVM: a Library for Support Vector Machines. http://www.csie.ntu.edu.tw/~cjlin/libsvm, 2009.
Guo, Y., Bao, J.T., Song, A.G. (2009). Design and Implementation of a Semi-autonomous Search Robot. IEEE International Conference on Mechatronics and Automation, Changchun, China, 2009.