When the robot executes an action primitive, i.e., when the robot moves, it updates the belief as shown in (4). The action model defines p(s'|s, a) as the probability of reaching state s', starting at state s and executing action a:

(7)

This probability p(s'|s, a) represents our action model and is calculated a priori, depending on the possible actions the robot can perform in that state space, according to Table 1.
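As an illustration only, the prediction step that applies such an action model to a discrete belief can be sketched as follows. This is a minimal sketch, not the chapter's implementation; the three-state example and the transition matrix are assumptions standing in for Table 1 and equation (7).

import numpy as np

def predict_belief(belief, transition, action):
    """Prediction step of a discrete (topological) Markov localization.

    belief: 1-D array, belief[s] = Bel(s) over the discrete states.
    transition: dict mapping an action to a matrix T with T[s_next, s] = p(s_next | s, action).
    action: the action primitive just executed, e.g. "go_forward".
    """
    T = transition[action]
    new_belief = T @ belief               # Bel'(s') = sum_s p(s'|s, a) Bel(s)
    return new_belief / new_belief.sum()  # keep the belief normalized

# Hypothetical 3-state corridor: the robot usually reaches the next state,
# but sometimes stays where it is (action failure).
T_forward = np.array([[0.2, 0.0, 0.0],
                      [0.8, 0.2, 0.0],
                      [0.0, 0.8, 1.0]])
belief = np.array([1.0, 0.0, 0.0])
belief = predict_belief(belief, {"go_forward": T_forward}, "go_forward")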
The robot's navigation is also an important aspect of this model:
- The robot must be centered in the corridor, as much as possible, in order to get correct observations from the environment.
- The robot must avoid bumping into any dynamic obstacle and, of course, walls and doors.
- Transitions from one state to the next must be detected.
We use information from the camera and infrared sensors to achieve these objectives (a sketch of this sensing loop follows the list):
- The robot's head sweeps horizontally through an arc of [-90º, 90º] (figure 4) at 1 Hz.
- The robot continuously obtains infrared readings at each arc position. It detects any obstacle in order to avoid collisions, and it also uses this information to stay centered in the corridor.
- The robot captures an image when the head angle is near 90º or -90º (blue zone in figure 4). This lets us know which elements (doors or walls) are on both sides of the robot. The robot maintains a memory of the elements detected on each side; when any of these elements changes, the state changes too. For example, if the robot detects a door on its right and a wall on its left, it becomes aware of a state change when it then detects a wall on its right.
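A rough sketch of this active-sensing loop is given below. The robot-access functions (set_head_pan, read_infrared, avoid_and_center, grab_image, notify_state_change) and the door/wall classifier are hypothetical placeholders, since the chapter does not expose the robot's API; only the structure of the sweep, the use of the infrared readings, and the side-landmark memory follow the description above.

import math
import time

def detect_side_element(image):
    """Placeholder classifier: should return "door" or "wall" for a side image."""
    raise NotImplementedError  # stands in for the chapter's door/wall detection

def sensing_loop(robot, period=1.0):
    """Simplified 1 Hz head sweep over [-90, 90] degrees with active sensing."""
    side_memory = {"left": None, "right": None}       # last landmark seen on each side
    while True:
        for angle in range(-90, 91, 10):              # assumed sampling of the arc
            robot.set_head_pan(math.radians(angle))   # hypothetical robot call
            distance = robot.read_infrared()          # hypothetical robot call
            robot.avoid_and_center(distance, angle)   # stay centered, avoid obstacles
            if abs(angle) >= 80:                      # "blue zone" near +/-90 degrees
                element = detect_side_element(robot.grab_image())
                side = "left" if angle > 0 else "right"   # assumed convention
                if element != side_memory[side]:
                    side_memory[side] = element
                    robot.notify_state_change()       # a transition between states
        time.sleep(period)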
Fig. 3 shows the ground-truth path that the robot followed along the corridor, avoiding the obstacles (persons and walls) and keeping itself centered. This information was recorded manually by placing marks on the floor behind the robot at a constant rate of one mark every 10 seconds and interpolating the path.

Fig. 3. Ground-truth path followed by the robot.

Fig. 4. Active sensing. The robot senses the whole area with its infrared sensor and analyzes the images captured in the blue area.
3.2 Sensor Model
Our sensor model takes three types of sensations from the image obtained by the robot's camera (a detection sketch follows Fig. 5):
- Depth. The main goal of this observation is to measure how far the robot is from the wall when it is oriented toward the end of the corridor. For this purpose we detect the number of ceiling lights that the robot perceives: if the number of ceiling lights is high, the robot is far from the end; if this number is low, the robot is near the end. In figure 5 we can see the original image and the image that shows the detection of the ceiling lights (green boxes).
- Doors. The robot is able to count the number of doors it can observe ahead using a color filter. The doors are assumed to be perpendicular to the floor and the jambs parallel to them. If a region of the image fulfills these specifications, it is assumed to be a door. In figure 6 we can see the original image and the image with the result of the door detection (blue boxes).
- Nearby landmarks. This observation gives us information about which landmarks are around the robot. We define landmarks as the doors or walls situated on the right and left sides of the robot. For example, an observation may be that there is a door on the left side and a wall on the right side.
Fig. 5. Detecting 6 ceiling lights and 8 doors.
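The chapter does not list the detection code, but the ceiling-light count described above can be approximated by a brightness threshold followed by connected-component labelling, as sketched below; the threshold and minimum blob area are illustrative assumptions, not the authors' values.

import numpy as np
from scipy import ndimage

def count_ceiling_lights(gray_image, threshold=230, min_area=50):
    """Count bright blobs (candidate ceiling lights) in a grayscale image.

    gray_image: 2-D uint8 array; threshold and min_area must be tuned for the real camera.
    """
    bright = gray_image > threshold                       # mask of very bright pixels
    labels, n = ndimage.label(bright)                     # connected components
    areas = ndimage.sum(bright, labels, range(1, n + 1))  # pixel count of each blob
    return int(np.sum(np.asarray(areas) >= min_area))     # ignore tiny specks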
Fig. 6. Image information extraction results.
Once the data is collected, we apply equation (5) to correct the belief according to the information obtained by the sensors. The types of sensations described before are independent of each other, so we can apply equation (6):

(8)

(9)
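The correction step can be sketched as follows: because the three sensations (depth, doors, nearby landmarks) are treated as independent, the observation likelihood is the product of the per-sensation likelihoods. The likelihood functions below are placeholders, not the chapter's actual sensor model.

import numpy as np

def correct_belief(belief, observations, likelihoods):
    """Correction step of a discrete Markov localization.

    belief: 1-D array with the predicted belief.
    observations: dict, e.g. {"depth": 4, "doors": 2, "sides": ("door", "wall")}.
    likelihoods: dict mapping a sensation name to a function (obs, state) -> p(obs | state).
    """
    posterior = belief.copy()
    for name, obs in observations.items():
        p_obs = np.array([likelihoods[name](obs, s) for s in range(len(belief))])
        posterior *= p_obs                 # independence: multiply the likelihoods
    return posterior / posterior.sum()     # normalization constant of equation (5)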
4 Experimental setup and results
Several experiments were conducted to test the quality of the localization algorithm. The experiments were carried out in our office environment during our normal daily work, so the environment is noisy because of the normal activity: people walking in the corridor, doors opening, and so on. This sometimes produces erroneous sensory information, which will show how robust the model being used is. This is a requirement presented in section 1.
According to the desiderata presented in section 1, we evaluate the robot's localization performance based on:
- The robot's robustness in detecting state changes.
- Overall accuracy.
4.1 Test for detecting state changes
In order to test the state change detection, we designed a task in which the robot was required to go forward along the corridor and to stop when a new state was detected. Fig. 7 shows an example of this test. The correct state change detections are displayed as blue circles and the wrong state change detections as red boxes. We obtained 82% accuracy in state change detection. This information was obtained by an operator monitoring the process and verifying when the robot had correctly detected the new state.

Fig. 7. Correct state change detections (blue circles) and incorrect state change detections (red boxes).
4.2 Accuracy
To test the overall accuracy, we designed two experiments: in the first one the initial position is not known, and in the second one it is known. In each experiment we measured two values (a sketch of both measures follows equation (10)):
- Error in localization. To measure the error in localization, we count the nodes that the robot must traverse from the position that the localization algorithm calculates as the most probable (the mode of the Bel distribution) to the robot's actual position.
- Entropy. The entropy of the state probabilities is a usual measure for determining when the belief about the robot's pose is concentrated in a few states. This does not mean the robot is well localized; it is only a dispersion measure that indicates whether the belief is able to determine the robot's pose. When this value is near 0, the Bel probabilities are accumulated in a single node, which is considered to be the robot's position. If the entropy value is near 1, the robot's position cannot be determined and the Bel information is not useful (the error is not meaningful either). We must also consider the situation in which continuous errors in perception make the pose estimate converge to a wrong pose with low entropy.
(10)
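Both evaluation measures can be computed as in the sketch below. The normalization of the entropy by log(number of states), which keeps the value in [0, 1] as described above, is our assumption about the exact form of equation (10); the graph distance uses an assumed adjacency list of the topological map.

import numpy as np
from collections import deque

def normalized_entropy(belief):
    """Entropy of the belief, scaled to [0, 1] (assumed form of equation (10))."""
    p = np.asarray(belief, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(belief)))

def localization_error(belief, true_state, adjacency):
    """Number of nodes between the mode of Bel and the actual node (breadth-first distance)."""
    mode = int(np.argmax(belief))
    dist, queue = {mode: 0}, deque([mode])
    while queue:
        s = queue.popleft()
        if s == true_state:
            return dist[s]
        for nxt in adjacency[s]:
            if nxt not in dist:
                dist[nxt] = dist[s] + 1
                queue.append(nxt)
    return None  # true_state unreachable; should not happen in a connected map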
4.2.1 Initial position unknown
In this test (Figure 8), the initial position is not known. Figure 9 shows how the error and the entropy (figure 10) evolve in the localization process. At the initial step, the entropy is high, because the belief is distributed over several states, making it difficult to determine the robot's actual
position. In state 2, the entropy is low, which means that the belief is concentrated in a few states (typically two in our case, reflecting the symmetry of the environment); this shows up as instability in the error graph. When the error stabilizes at step 15 and the entropy is still low, the robot has localized itself. Some localization error can still occur due to observation or motion errors (which increase the entropy), but the system recovers quickly.

Fig. 8. The robot moves from position 1 to position 2. The robot does not know its initial position.

Fig. 9. Error measured as the distance from the Bel mode to the actual robot position.

Fig. 10. Bel entropy.
4.2.2 Initial position known
In this test (Fig. 11), the initial position is known. The robot starts at a known position, which is why the error (Fig. 12) starts at low values. Some perception errors in state 24 cause the robot to get lost for a few movements, but when these errors disappear, the robot recovers its position knowledge.
The experimental results lead to interesting conclusions. First of all, our approach works and is able to carry out the localization task. The results also show that knowledge of the initial state does not influence the process in the long term.

Fig. 11. The robot moves from position 1 to position 2. The robot knows its initial position.
Fig. 12. Error measured as the distance from the Bel mode to the actual robot position.

Fig. 13. Bel entropy.
5 Conclusions
In this chapter we have presented the performance of a localization method for legged AIBO robots in non-engineered environments, using vision as an active input sensor. This method is based on the classic Markovian approach, but it had not previously been used with legged robots in indoor office environments. We have shown that the robot is able to localize itself in real time, even in environments with noise produced by human activity in a real office. It deals with uncertainty in its actions and uses perceived natural landmarks of the environment as the main sensor input.
We have also shown that the data obtained from the sensors, mainly the camera, are discriminative enough and allow a fast convergence from an initial unknown state, in which the belief has been distributed uniformly over the set of states. We have also shown that the robot can overcome action failures while localizing, and that it recovers from them in an efficient way.
The set of observations we have chosen has been descriptive enough to be efficient in the localization process. We think that the way we determine the number of doors and ceiling lights has been the key to the success of the localization system. This approach depends on a priori knowledge of the environment, but we are studying new types of useful observations to make this approach applicable in other environments where such a priori knowledge is not available.
Despite these results, there are some limitations that deserve future research. One of the key limitations arises from the low accuracy of the localization due to the granularity of the large areas defined as states when the map is built. Granularities closer to a metric approximation could be more useful for many indoor applications.
We believe that probabilistic navigation techniques hold great promise for enabling legged robots to become reliable enough to operate in real office environments.
6 References
J. Borenstein, H. R. Everett, and L. Feng, Navigating Mobile Robots: Systems and Techniques. Wellesley, MA: A K Peters, Ltd., 1996.
I. Nourbakhsh, R. Powers, and S. Birchfield, Dervish - an office-navigating robot, AI Magazine, vol. 16, no. 2, pp. 53–60, 1995.
B. Kuipers, The spatial semantic hierarchy, Artificial Intelligence, vol. 119, no. 1-2, pp. 191–233, 2000.
D. Schulz and D. Fox, Bayesian color estimation for adaptive vision-based robot localization, in IROS-04, Sept. 2004.
S. Enderle, M. Ritter, D. Fox, S. Sablatnog, G. K. Kraetzschmar, and G. Palm, Soccer-robot localization using sporadic visual features, in Intelligent Autonomous Systems 6 (IAS-6) (E. Pagello, F. Groen, T. Arai, R. Dillmann, and A. Stentz, eds.), Amsterdam, The Netherlands: IOS Press, pp. 959–966, 2000.
S. Enderle, H. Folkerts, M. Ritter, S. Sablatnog, G. K. Kraetzschmar, and G. Palm, Vision-based robot localization using sporadic features, in Robot Vision (R. Klette, S. Peleg, and G. Sommer, eds.), vol. 1998 of Lecture Notes in Computer Science, Berlin, Heidelberg: Springer-Verlag, 2001.
M. Veloso, E. Winner, S. Lenser, J. Bruce, and T. Balch, Vision-servoed localization and behavior-based planning for an autonomous quadruped legged robot, in Proceedings of the Fifth International Conference on Artificial Intelligence Planning Systems, Breckenridge, CO, pp. 387–394, April 2000.
R. Simmons and S. Koenig, Probabilistic navigation in partially observable environments, in Proceedings of the 1995 International Joint Conference on Artificial Intelligence, Montreal, Canada, pp. 1080–1087, July 1995.
D. Radhakrishnan and I. Nourbakhsh, Topological localization by training a vision-based transition detector, in Proceedings of IROS 1999, vol. 1, pp. 468–473, October 1999.
H. Choset and K. Nagatani, Topological simultaneous localization and mapping (SLAM): toward exact localization without explicit localization, IEEE Transactions on Robotics and Automation, vol. 17, pp. 125–137, April 2001.
D. Fox, W. Burgard, and S. Thrun, Markov localization for mobile robots in dynamic environments, Journal of Artificial Intelligence Research, vol. 11, pp. 391–427, 1999.
J. Kosecká and F. Li, Vision based topological Markov localization, in Proceedings of the 2004 IEEE International Conference on Robotics and Automation, Barcelona, Spain, Apr. 2004.
M. E. López, L. M. Bergasa, and M. S. Escudero, Visually augmented POMDP for indoor robot navigation, Applied Informatics, pp. 183–187, 2003.
A. R. Cassandra, L. P. Kaelbling, and J. A. Kurien, Acting under uncertainty: Discrete Bayesian models for mobile robot navigation, in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 1996.
F. Gechter, V. Thomas, and F. Charpillet, Robot localization by stochastic vision based device, in The 5th World Multi-Conference on Systemics, Cybernetics and Informatics - SCI 2001 / The 7th International Conference on Information Systems Analysis and Synthesis - ISAS 2001, Orlando, FL, USA, Jul. 2001.
M. Sridharan, G. Kuhlmann, and P. Stone, Practical vision-based Monte Carlo localization on a legged robot, in IEEE International Conference on Robotics and Automation, April 2005.
P. Guerrero and J. Ruiz-del-Solar, Auto-localización de un robot móvil AIBO mediante el método de Monte Carlo, Anales del Instituto de Ingenieros de Chile, vol. 115, no. 3, pp. 91–102, 2003.
S. Thrun, D. Fox, W. Burgard, and F. Dellaert, Robust Monte Carlo localization for mobile robots, Artificial Intelligence, vol. 128, no. 1-2, pp. 99–141, 2001.
S. Thrun, Robotic mapping: a survey, pp. 1–35, 2003.
Mobile Robot Positioning Based on ZigBee Wireless Sensor Networks and Vision Sensor
Wang Hongbo
College of Mechanical Engineering, Yanshan University, Qinhuangdao, 066004, China
1 Introduction
In the practical application of mobile robots, the first problem encountered is positioning. Positioning is an important basic function for an autonomous mobile robot: it is the premise for the robot to complete its mission, and it is currently a closely watched research topic (Meng et al., 2000; Wang et al., 2008; Panzieri et al., 2008). Positioning technology is intended to obtain an accurate position and orientation of the robot based on a prior map of the environment and on information from multiple sensors. Positioning methods can be roughly categorized as relative position measurements and absolute position measurements (Zhang et al., 2005; Guccione et al., 2000). Relative positioning includes the use of encoders, gyroscopes and accelerometers (Zunaidi et al., 2006). Absolute positioning involves using beacons (active or passive) (Venet et al., 2002), global positioning systems (GPS) (Willgoss et al., 2003), landmark recognition (Borenstein et al., 1997) and model matching methods (Fang et al., 2006).
GPS is widely used for the absolute positioning of mobile robots, but it cannot properly receive satellite signals indoors and its accuracy is low there, which makes its indoor application to a mobile robot difficult (Kim and Park, 2008). In recent years, robot positioning based on ZigBee wireless sensor networks has become a new research hotspot (Leland et al., 2006; Lai et al., 2007). ZigBee is a new wireless sensor network standard with the following features: low cost, low power, low data rate, short time delay, large network capacity and high reliability (Yang and Liu, 2007). Although ZigBee has higher positioning accuracy than GPS, it cannot meet the needs of accurate positioning for a mobile robot. Therefore, a more accurate positioning method has to be employed.
Many references have approached the positioning problem of mobile robots by employing landmarks (Se et al., 2002; Armingol et al., 2002; Bais and Sablatnig, 2006). The key idea of this positioning method is to use special marks that carry a wealth of geometric information under perspective projection, so that the camera location can easily be computed from the image of the guide marks.
Researchers have chosen ceiling lights as landmarks because they can be easily detected thanks to the high contrast between the light and the ceiling surface, and because ceiling lights do not need to be installed specially (Wang and Ishimatsu, 2004; Nguyen et al., 2007). Therefore, we choose ceiling lights as landmarks to realize the accurate positioning of the mobile robot.
This chapter presents a positioning method for a mobile robot based on ZigBee wireless sensor networks and a vision sensor. Using ZigBee, the absolute position of the mobile robot can be estimated. To obtain an accurate position, we use a vision sensor (Jiang, 2008; Yang et al., 2008) and choose ceiling lights as landmarks to realize the relative positioning of the mobile robot. The method not only overcomes the low positioning accuracy of ZigBee wireless sensor networks, but also decreases the computational complexity of absolute positioning with landmarks. To guide the mobile robot, the path planning and navigation control methods are described. Finally, a mobile robot system is introduced and positioning experiments are conducted.
2 The Positioning System Based on ZigBee
ZigBee is an emerging short-range, low-rate wireless network technology based on IEEE 802.15.4, a standard of the IEEE wireless personal area network working group.
2.1 The Composition of the ZigBee Wireless Channel
IEEE 802.15.4 works in the industrial, scientific and medical (ISM) bands. It defines two working bands, the 868/915 MHz and 2.4 GHz frequency bands, which correspond to two physical layer standards; both physical layers are based on direct sequence spread spectrum (DSSS) technology. The 868 MHz band is an additional ISM band in Europe and the 915 MHz band is an additional ISM band in America; they cannot be used in China. In IEEE 802.15.4, the ISM bands are divided into 27 channels with three data rates: the 2.4 GHz band has 16 channels at 250 kb/s with a channel spacing of 5 MHz; the 915 MHz band has 10 channels at 40 kb/s with a channel spacing of 2 MHz; the 868 MHz band has only one channel at 20 kb/s. The wireless channels used in ZigBee are shown in Fig. 1.

Fig. 1. The distribution of frequencies and channels.
2.2 Positioning System of ZigBee
The ZigBee wireless sensor network positioning system includes two categories of nodes: reference nodes and positioning nodes (blind nodes, also known as mobile nodes).
A node with a static location is called a reference node (also referred to as an anchor node). This node must be configured with the X_i and Y_i values corresponding to its physical location, and its main task is to provide an information packet containing these X_i and Y_i coordinates to the blind node. A reference node can run on either a CC2430 chip or a CC2431 chip.
A blind node communicates with the neighbouring reference nodes, collecting the X_i, Y_i and RSSI (Received Signal Strength Indicator) values from each of these nodes, and calculates its own position X, Y from these inputs using the location engine hardware. The calculated position is then sent to a control station, which can be a PC or another node in the system. A blind node must use a CC2431 chip.
The biggest difference between the CC2431 and the CC2430 chip is that the CC2431 includes a location engine hardware core licensed from Motorola. With this core, a positioning accuracy of about 0.25 m, much higher than that of GPS, can be achieved, and the positioning time is about 40 μs.
The distance measurement based on the theoretical model relies on wireless signal transmission theory. In free space, the signal strength received by the receiver is given by the following formula:

P_r(d) = P_t G_t G_r λ² / ((4π)² d² L),

where P_r(d) is the received signal power; P_t is the transmitted power; G_t and G_r are the transmitting antenna gain and receiving antenna gain, respectively; d is the distance between the sender and the receiver; L is the system loss factor; λ is the wavelength.
Based on this principle, IEEE 802.15.4 gives a simplified channel model: the received signal strength is a function of the transmitted power and of the distance between the sender and the receiver, and it decreases as the distance increases. It can be expressed in the following log-distance form:

RSSI(d) = A − 10 n lg(d),

where A is the received signal strength at a reference distance of one metre, which depends on the transmitter power; n is the signal propagation (path loss) exponent; and d is the distance between the sender and the receiver.
Figure 2 shows how the curve of the received signal strength value changes with the distance between the transmitter and the receiver.
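For illustration, the inverse of this channel model (estimating a distance from a measured RSSI value) can be sketched as follows. The parameter values A and n are placeholders that would have to be calibrated for the actual environment, and the model form follows the simplified log-distance equation above, which is itself a reconstruction.

def rssi_to_distance(rssi_dbm, a_1m=-45.0, n=2.5):
    """Estimate the distance (in metres) from a measured RSSI value.

    rssi_dbm: measured received signal strength in dBm.
    a_1m: assumed RSSI one metre from the transmitter (calibration constant).
    n: assumed path-loss exponent of the environment.
    """
    return 10 ** ((a_1m - rssi_dbm) / (10.0 * n))

# Example: a reading of -65 dBm with the assumed parameters gives about 6.3 m.
d = rssi_to_distance(-65.0)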
2.4 Positioning Process
The CC2431 location engine uses distributed computing methods and hardware to compute the location, which consumes fewer CPU resources. The location engine collects messages from 3 to 16 reference nodes; if it receives messages from more than 16 nodes, it arranges the
values of d in ascending order and uses the first 16 nodes (those with the smallest distances). Network communications between adjacent nodes exchange position coordinates and RSSI information. In this process, the blind node obtains the RSSI values and the coordinates of the adjacent reference nodes of known address. From the RSSI values, the blind node can calculate the distances between itself and the reference nodes. Finally, the blind node uses the Maximum Likelihood Estimation method to calculate its own coordinates.

Fig. 2. The curve of the received signal strength value.
The RSSI-based localization algorithm can be divided into the following steps (a sketch of steps (4) and (5) follows the list):
(1) The blind node sends a request to all the reference nodes to obtain their location information. When a blind node in the sensor network needs to determine its position, it broadcasts to all the surrounding reference nodes, requesting their coordinates and ID information.
(2) The reference nodes send their coordinates and ID information after the blind node's request is received.
(3) The blind node receives the coordinates from the reference nodes with different IDs and measures the RSSI value of each reference node.
(4) When the blind node has received the coordinates and the RSSI values of m nodes, it converts the RSSI values into the corresponding distances d. It then arranges the values of d in ascending order and establishes the corresponding collections of distances and coordinates:
Distances collection: Distance = {d1, d2, ..., dm}, with d1 < d2 < ... < dm;
Coordinates collection: Coordinate = {(x1, y1), (x2, y2), ..., (xm, ym)}.
(5) In order to improve the positioning accuracy and reduce the positioning error caused by the distance measurement, the blind node checks the measured distances. A distance d is invalid when it is greater than a limit value; such a distance and its coordinate must be removed from the collections. The blind node retains the remaining n (n ≥ 3) groups of information.
(6) With the information of the n remaining reference nodes, the Maximum Likelihood Estimation method is used to calculate the blind node's coordinates.
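The data-preparation part of this procedure (steps (4) and (5)) can be sketched as follows; the distance limit and the maximum number of nodes are placeholders, and the RSSI-to-distance conversion reuses the earlier sketch.

def prepare_measurements(readings, limit=10.0, max_nodes=16, rssi_model=None):
    """Steps (4)-(5): convert RSSI readings to distances, sort, and filter them.

    readings: list of tuples ((x_i, y_i), rssi_i) collected from the reference nodes.
    limit: assumed maximum valid distance in metres.
    rssi_model: function mapping an RSSI value to a distance estimate.
    """
    rssi_model = rssi_model or rssi_to_distance           # from the earlier sketch
    pairs = [((x, y), rssi_model(r)) for (x, y), r in readings]
    pairs.sort(key=lambda p: p[1])                        # ascending distances
    pairs = [p for p in pairs[:max_nodes] if p[1] <= limit]
    coords = [p[0] for p in pairs]
    dists = [p[1] for p in pairs]
    return coords, dists                                  # input for the MLE of section 2.5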
2.5 The Maximum Likelihood Estimation
In three-dimensional space, if the distances from one point to 4 reference points are known, the coordinates of the unknown point can be determined. In the sensor network the coordinate system is two-dimensional, so as long as the distances from the blind node to 3 reference nodes are known, the position of the blind node can be calculated. The positioning algorithms based on ranging include Trilateration, Triangulation, the Weighted Centroid Algorithm and Maximum Likelihood Estimation. In this chapter, we use the Maximum Likelihood Estimation algorithm.
The known coordinates of the n reference nodes are (x1, y1), (x2, y2), (x3, y3), ..., (xn, yn), respectively, and the distances between the blind node O and the n reference nodes are d1, d2, d3, ..., dn. The coordinates of the blind node are (x, y). From Fig. 3, we have

(x1 − x)² + (y1 − y)² = d1²
(x2 − x)² + (y2 − y)² = d2²
...
(xn − x)² + (yn − y)² = dn²

Fig. 3. Maximum Likelihood Estimation.

Subtracting the last equation from each of the others yields a linear system A X = b, whose least-squares solution is

X = (AᵀA)⁻¹ Aᵀ b,    (6)

where:

A = 2 [ x1 − xn      y1 − yn
        x2 − xn      y2 − yn
        ...
        x(n−1) − xn  y(n−1) − yn ],

b = [ x1² − xn² + y1² − yn² + dn² − d1²
      x2² − xn² + y2² − yn² + dn² − d2²
      ...
      x(n−1)² − xn² + y(n−1)² − yn² + dn² − d(n−1)² ],

and X = [x  y]ᵀ is the estimated coordinate vector of the blind node.
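A compact numerical sketch of this least-squares solution is given below. It assumes the linearization above and uses NumPy's least-squares solver instead of forming (AᵀA)⁻¹ explicitly, which is numerically more stable but mathematically equivalent.

import numpy as np

def mle_position(coords, dists):
    """Estimate the blind node position (x, y) from n >= 3 reference nodes.

    coords: list of (x_i, y_i) reference-node coordinates.
    dists:  list of measured distances d_i to those nodes.
    """
    coords = np.asarray(coords, dtype=float)
    dists = np.asarray(dists, dtype=float)
    xn, yn = coords[-1]
    dn = dists[-1]
    # Linearized system A @ [x, y] = b from equation (6)
    A = 2.0 * (coords[:-1] - coords[-1])
    b = (coords[:-1, 0] ** 2 - xn ** 2
         + coords[:-1, 1] ** 2 - yn ** 2
         + dn ** 2 - dists[:-1] ** 2)
    X, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(X[0]), float(X[1])

# Example with three assumed reference nodes; the true position is (5, 5).
print(mle_position([(0, 0), (10, 0), (0, 10)], [7.07, 7.07, 7.07]))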
3 Positioning Based on Vision Sensor
In order to obtain accurate positioning, ceiling lights are chosen as landmarks and the vision sensor is used to recognize them.
3.1 Landmark Recognition
Using the vision system, the central line and the contour of a ceiling light can be obtained. However, the camera may also acquire some unwanted images, such as sunlight through windows or other light sources; these have to be eliminated.

(a) The raw image from the camera. (b) Landmark recognition from all light sources.
Fig. 4. An example of landmark recognition.
By means of the labelling program, the geometrical data (area, length, width, position, minimum and maximum horizontal/vertical coordinates) of every white cluster, and the number of white clusters in the image, can be obtained. From these geometrical data and the distance between neighbouring white clusters, the desired regions can be picked out from among the others. An example of landmark recognition is shown in Fig. 4. Figure 4(a) shows a raw image from the camera. From the labelling and the geometric calculation, the ceiling lights can be selected as landmarks from all the light sources, as shown in Fig. 4(b).
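A minimal sketch of this labelling-and-filtering step is shown below; the thresholds on area and aspect ratio are illustrative assumptions, not the values used in the actual system.

import numpy as np
from scipy import ndimage

def select_light_clusters(binary_image, min_area=200, max_aspect=8.0):
    """Label the white clusters and keep those whose geometry matches a ceiling light."""
    labels, n = ndimage.label(binary_image)
    selected = []
    for i, obj_slice in enumerate(ndimage.find_objects(labels), start=1):
        mask = labels[obj_slice] == i            # pixels of this cluster only
        area = int(mask.sum())
        height, width = mask.shape
        aspect = max(height, width) / max(1, min(height, width))
        if area >= min_area and aspect <= max_aspect:
            selected.append(obj_slice)           # bounding box of a candidate light
    return selected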
3.2 The Coordinate System of a Single Light
On the basis of the raster coordinates of the single fluorescent light, we have the position vectors p_i = [x_ui, y_ui, f]ᵀ (i = 1, 2) of the projection points P_i of the light ends Q_i in the camera coordinate system o_c-x_c y_c z_c, as shown in Fig. 5. From the perspective transformation, the relation between q_i and p_i can be expressed as

Fig. 5. Relation between the coordinate systems of the camera and a single light.
q_i = k_i p_i    (i = 1, 2),

where q_i is the position vector of the light end Q_i in the camera coordinate system and k_i is a positive scale factor.
Trang 173 Positioning Based on Vision Sensor
In order to obtain the accurate positioning, the ceiling lights are chosen as landmarks and
the vision sensor is used to recognize the landmarks
3.1 Landmark Recognition
Using a vision system, the central line and contour of the ceiling light can be obtained
However, the camera may acquire some unnecessary images such as sunlight through
windows or any other light sources These unnecessary images have to be eliminated
(a) The raw image from camera
(b) Land recognition from all light sources Fig 4 An example of landmark recognition
By means of the labelling program, the geometrical data (area, length, width, position, minimum and maximum horizontal/vertical coordinates) of every white cluster and the number of white clusters in the images can be obtained From the geometrical data and the distance of the neighbouring white cluster, the desired images can be picked out from among others An example of landmark recognition is shown in Fig 4 Figure 4(a) shows a raw image from camera From the labelling and geometric calculation, the ceiling lights can
be selected as landmarks from all light sources as shown in Fig 4(b)
3.2 The Coordinate System of a Single Light
On the basis of the raster coordinates of the single fluorescent light, we have the position vectors pi x ui y ui f (i = 1, 2) of projection points P iof light endsQ iin the camera coordinate systemo x y z c c c cas shown in Fig 5 From the perspective transformation, the relation betweenqiandpican be expressed as
Fig 5 Relation between the coordinate systems of the camera and a single light
2 2 2 ' 2
q k k
1 1