This work presents two main parts: an indoor positioning system for humans and Simultaneous Localization and Mapping (SLAM) for a robot in disaster relief. In the first part, a hybrid algorithm th
Introduction
According to the World Health Organization (WHO), "a disaster is an occurrence disrupting the normal conditions of existence and causing a level of suffering that exceeds the capacity of adjustment of the affected community" [1]. Disasters around the world, including natural and man-made hazards, have been damaging to communities and countries. They can be classified into two main types, as shown in Table 1.1.

Type     | Disasters
Natural  | Earthquakes, typhoons, hurricanes (storms), cyclones, volcanic eruptions
Man-made | Fire, explosion, collision, shipwreck, structural collapse, environmental pollution

Table 1.1: The taxonomy of disasters

In the case of natural disasters, such as the well-known Hurricane Katrina in 2005 and the Haiti earthquake in 2010, the affected victims and areas were sizeable, so that the rescue required worldwide and long-term action. However, minor to moderate natural disasters may bring slight damage to a building and hurt the people inside. In the case of man-made disasters, fire in a building is the most frequent one. Figure 1.1 shows the trends in fires, deaths, injuries and dollar loss from U.S. fire statistics from 2008 to 2017. More than a million fires are reported every year, and they cause more than 18,000 casualties [2].
Indoor Positioning System
The general scheme of the proposed framework is shown in Figure 1.2. The IPS deploys different sensors of a smart-phone to build an effective indoor positioning system that can work in the different indoor environments mentioned previously. In particular, the proposed framework focuses on fusing different indoor techniques (fingerprinting, path loss, PDR) and technologies (WiFi, BLE, inertial sensors) together with the building map to improve position accuracy and efficiency as the infrastructure gradually collapses. This IPS is low-cost, easy to integrate into the building, and convenient for the people inside. It can also use a dynamic map (robot map) provided by a robot when the environment changes. Furthermore, in the offline phase of the WiFi fingerprinting technique, a robot can be used to collect WiFi RSS at the reference points to reduce the training time. The general scheme of the IPS in the building for disaster relief is shown in Figure 1.3. The proposed framework aims at achieving location robustness and accuracy for disaster relief by combining a number of techniques:

Figure 1.3: The hybrid framework of the indoor positioning system in the building for disaster relief

- RSS-based path loss for BLE iBeacons: the BLE beacons are deployed and an RSS-based path-loss model is used to measure the distance between mobile users and these beacons. The noise of the RSS measurements can be reduced by a Kalman filter (KF). The benefit of BLE beacons is their low energy consumption, so they can keep working in blackout conditions.
- Constructing a "WiFi map": with the aim of reducing the time needed for data training during the offline phase and improving the accuracy of WiFi fingerprinting, a Gaussian process (GP) regression is employed. This makes it possible to obtain the mean and variance of the considered WiFi map based on the correlation between the RSS of sparse training points. Moreover, an efficient method is proposed to evaluate the user's position from real-time RSS measurements and the WiFi map.
- Motion estimation with PDR: to detect motion and calculate the movement of a pedestrian using smart-phones, we aim at improving the step detection and stride length algorithms using only the accelerometer. Besides, instead of using the absolute heading from the compass, a Madgwick filter [21] is applied to combine values from the accelerometer and the gyroscope, avoiding the effect of magnetic fields on the magnetometer and estimating the relative heading.
- Location hybrid method: a hybrid method based on a particle filter combines the WiFi and BLE estimates with the PDR and the features of the building map (robot map). This hybrid makes it possible for the indoor positioning system to achieve high accuracy and robustness.

Figure 1.4: The architecture of the robot using SLAM for disaster relief
Simultaneous Localization and Mapping for Robot
It is clear that a wheeled robot is difficult to operate in some disaster situations. However, in this thesis, taking into account the SLAM technique and the goal of a low-cost platform, a two-wheeled robot is designed to implement SLAM techniques. The robot uses the Hector SLAM technique for fast online learning of occupancy grid maps requiring low computational resources. The architecture of the robot is shown in Figure 1.4. The robot uses a LIDAR, an Inertial Measurement Unit (IMU), a camera and the odometry of its two wheels to draw a 2D map. The main processing unit is a Raspberry Pi 3B+ running the Robot Operating System (ROS), which is a framework for implementing different algorithms, for instance the differential controller, Hector SLAM, navigation, sensor fusion using an Extended Kalman Filter (EKF), and a filter for the IMU. In addition, a microcontroller is used to implement PID velocity control for the robot as well as to communicate with ROS. A key challenge in this scenario is that when the robot moves across different infrastructures, odometry information may not be available. Therefore, in this work, the robot uses hector_slam, an open-source ROS package using EKF-based SLAM, to generate a two-dimensional map with the LIDAR and without odometry information.
The rest of the thesis is organized as follows:
- Chapter 2 provides an overview of related works on indoor positioning and SLAM.
- Chapter 3 describes the methodology of the indoor positioning system.
- Chapter 4 describes the methodology for the robot using SLAM.
- Chapter 5 presents experiments and results of the IPS.
- Chapter 6 presents experiments and results of SLAM for the robot.
- Chapter 7 concludes the thesis with a summary and future work.
Background and Related Works

This chapter presents the general indoor positioning approaches, related works on IPS for disaster relief, and the SLAM techniques for robots.
According to the suggestion by Pahlavan et al. [22], a basic diagram of an indoor positioning system, describing its components and their relationships, is shown in Figure 2.1. It includes a number of location sensing devices, a positioning algorithm, and a display system. First, the location sensing devices measure metrics related to the relative position of the transmitter or receiver with respect to a known reference point (RP). They can use different types of sensing technologies such as infrared (IR), ultrasound, WiFi, Bluetooth, Zigbee, RFID, UWB, or visible light. The location metrics express the relationship of direction (angle) or distance, in terms of time, phase, or received signal strength level, between the reference points and the mobile target (MT). These metrics are the angle of arrival (AOA), time of arrival (TOA), time difference of arrival (TDOA), carrier signal phase of arrival (POA), or received signal strength (RSS) [16]. Then, the positioning algorithm processes the location metrics and estimates the coordinates of the MT. Finally, the display system converts such coordinates into a suitable format for the end user. The literature review diagram of IPS and the corresponding related works are shown in Figure 2.2 and Table 2.2, respectively.
Figure 2.1: A functional block diagram of a positioning system
Figure 2.2: A literature review diagram of IPS approaches (RF-based: WiFi, BLE, UWB, RFID; filter-based: Kalman filter, EKF, UKF, particle filter; deterministic: NN, KNN, WKNN; probabilistic; hybrid: PDR + fingerprinting, BLE + WiFi; neural network; clustering)
General Indoor Positioning System
Wireless localization can be categorized into two main approaches: geometrical-calculation based and fingerprinting based (or scene-analysis based) [53]. The former approach relies on the measurement of geometrical parameters (i.e., distance, angle) using various physical properties of the radio signal, such as Time of Arrival (TOA) [26], Time Difference of Arrival (TDOA) [26], Angle of Arrival (AOA) [27], and Received Signal Strength (RSS). Systems that incorporate AOA, TOA or TDOA usually achieve high localization accuracy with errors lower than 1 m, but they require complicated synchronization between transmitters and receivers [44] and hence are not generally available. RSS is the parameter used in most of these studies; however, achieving robustness and accuracy is challenging because the radio signal is affected by reflection, refraction, shadowing and scattering in the indoor environment.
Fingerprinting/scene analysis is a technique that estimates the position based on scene analysis [25]. This method can directly exploit existing infrastructure to estimate the user's position at a lower cost. The WiFi fingerprinting technique takes advantage of the similarities between RSS values. It is usually conducted in two phases [25], [40]: an offline (training or calibration) phase and an online (tracking) phase. The basic operation of fingerprinting [25] is described in Figure 2.3. In the offline phase, the "radio map" is built from the observed RSS of all the detected WiFi signals from different access points (APs) at many reference points (RPs) of known locations, and it is saved into a database. In the online phase, a mobile device (or target) measures the RSS vector from the available APs and estimates its position by using the fingerprinting database and positioning algorithms. Based on the information contained in the database regarding the "radio map", several research works have been proposed to solve the indoor positioning problem. A diagram summarizing some of these approaches is shown in Figure 2.4. Firstly, the deterministic approaches usually estimate the position by matching the real-time measurements to the closest RSS values in the pre-stored radio map. The most typical algorithms for this approach are K-nearest neighbours (KNN) [39, 4, 34], nearest neighbour [40, 16], weighted K-nearest neighbours (WKNN) [10, 41], and median filtering [42]. The Euclidean distance is normally used to measure the similarity between the observed RSS and the mean of the fingerprints collected at each training point. There are also other distances, for example the Logarithmic Gaussian Distance (LGD) [23], the Penalized Logarithmic Gaussian Distance (PLGD) [10], and the cosine distance [52], that can be used for these methods.
Figure 2.4: Indoor positioning approaches using WiFi fingerprinting
Secondly, probabilistic approaches concentrate on a more precise distance measure that takes into account the variability of the RSS training vectors. These methods estimate a probability density for the training RSS and then calculate likelihood or a posteriori estimates during the online phase using the observed RSS and the estimated densities. The user position is obtained by maximum-likelihood [43, 25, 44] or maximum a posteriori (MAP) estimation [45, 16], or by histogram matching that generates fingerprinting distributions relying on radio-map fingerprints [46]. Compared to deterministic approaches, they often require larger computational resources and training sets.
Thirdly, pattern recognition techniques are based on classifiers that estimate the most likely location of the user by discriminating the RSS observed during the online phase against the surveyed fingerprinting data. Support vector machines [47, 48], neural networks [49, 48], deep learning with CSI [50], and DeepFi [51] are examples of pattern recognition schemes.
Lastly, the basic idea of clustering approaches is to reduce the high computational load when the number of RPs increases with the area size. These algorithms reduce the search space for the user location to a smaller number of RPs, based on the dependence of the RSS fingerprinting characteristics on the environment features and the available RPs. In [10], the authors applied a K-means clustering approach using the Euclidean distance to find the centroid of each cluster. To enhance the classification between clusters, He, Suining et al. in [52] used the cosine metric for K-means clustering by evaluating the similarity between two signal vectors, and Cramariuc et al. in [10] used a penalized logarithmic Gaussian distance approach.
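As an illustration of the deterministic approach discussed above, the sketch below shows a minimal weighted K-nearest-neighbour (WKNN) position estimate from a pre-stored radio map. The array shapes, variable names and sample values are assumptions made for illustration only, not the implementations used in the cited works.

```python
import numpy as np

def wknn_estimate(rss_online, radio_map_rss, radio_map_xy, k=3, eps=1e-6):
    """Estimate a 2D position by weighted K-nearest neighbours in signal space.

    rss_online    : (n_aps,) RSS vector measured online (dBm).
    radio_map_rss : (n_rp, n_aps) mean RSS fingerprints at the reference points.
    radio_map_xy  : (n_rp, 2) coordinates of the reference points (m).
    """
    # Euclidean distance between the observed RSS and each stored fingerprint
    d = np.linalg.norm(radio_map_rss - rss_online, axis=1)
    # Indices of the K closest fingerprints in signal space
    idx = np.argsort(d)[:k]
    # Weight each neighbour by the inverse of its signal distance
    w = 1.0 / (d[idx] + eps)
    w /= w.sum()
    # Weighted average of the neighbour coordinates
    return w @ radio_map_xy[idx]

# Toy example: a 3-point radio map seen by 2 access points
radio_map_rss = np.array([[-40.0, -70.0], [-55.0, -60.0], [-70.0, -45.0]])
radio_map_xy = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]])
print(wknn_estimate(np.array([-52.0, -62.0]), radio_map_rss, radio_map_xy, k=2))
```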
The major disadvantages of the fingerprinting method include the need for dense training coverage and the poor extrapolation to areas not covered during training [32]. During the offline phase, it can be extremely time-consuming and labor-intensive to build substantially large fingerprinting databases [54]. In contrast to fingerprinting, a "path-loss" model of the RF signal can be applied to compute the relationship between the RSS and the distance between the APs and the user's position.
Shchekotov, Maxim in [54] uses a simple signal propagation model as follows:

$P = P_0 - 10\gamma \log_{10}\!\left(\frac{d}{d_0}\right),$

where $P$ is the signal power at an RF distance $d$, $P_0$ is the known signal power at the reference distance $d_0$, and $\gamma$ is the path-loss exponent. $P_0$, $d_0$ and $\gamma$ can be calculated from experimental results. Then, the trilateration technique can be applied to estimate the user's position. Dao, Trung-Kien et al. in [15] added more parameters related to the effects of walls and floors and used a genetic algorithm to search for these parameters in the experiment. Yang et al. in [55] used linear regression and a correlation constraint-based method to improve the location accuracy of lateration methods. However, the relationship between RSS and position is highly complex due to multi-path, metal reflection, and interference noise [56]. Thus, the path-loss behaviour may not be adequately captured by a fixed, invariant model.
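A minimal sketch of how the log-distance path-loss model above can be evaluated and inverted to estimate the distance to a transmitter from a measured RSS. The parameter values are illustrative assumptions, not calibrated values from the cited studies.

```python
import math

def rss_from_distance(d, p0=-45.0, d0=1.0, gamma=2.5):
    """Log-distance path-loss model: P = P0 - 10*gamma*log10(d/d0)."""
    return p0 - 10.0 * gamma * math.log10(d / d0)

def distance_from_rss(p, p0=-45.0, d0=1.0, gamma=2.5):
    """Invert the model to recover the transmitter-receiver distance."""
    return d0 * 10.0 ** ((p0 - p) / (10.0 * gamma))

# With these assumed parameters, a -60 dBm reading maps to roughly 4 m
print(distance_from_rss(-60.0))
```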
Another widely adopted localization approach is pedestrian dead reckoning (PDR) [12, 28, 29], which leverages inertial sensors to measure the pedestrian displacement relative to the previous position. The main challenge in these approaches is that the inertial sensors in commercial smart-phones often suffer from imperfect calibration and noisy measurements [8]. Step counting is currently a major method to capture the walking path and the movement of pedestrians [12], [11]. A number of variants of probabilistic Bayesian inference approaches have appeared in the literature [30]-[33], which sequentially estimate the unknown state from noisy observations using a dynamic predictive model based on the pedestrian's stride and direction and the observation likelihood; they can also provide an uncertainty measure of the estimates. The Kalman filter [30], the Extended Kalman filter [31], the sigma-point Kalman smoother [32, 33], and the particle filter [4, 13] have been applied to improve the accuracy of indoor localization systems. Kalman filtering and its variants are the most efficient in terms of memory and computation, while particle filters can converge to the true posterior state distribution for non-Gaussian and multimodal cases [32].
Indoor Positioning in Disaster Relief
Pierfrancesco Bellini et al. [57] present a solution for guiding personnel during maintenance and/or emergency conditions. They propose integrating indoor/outdoor positioning and navigation in the system they are developing for hospital emergency management. Its purpose is to support teams in getting details on how to reach the event location, to help the involved personnel find the closest updated exit, and to help registered users reach points of interest. This indoor navigation is based on low-cost mobile sensors and an adaptive Extended Kalman filter. In order to estimate the current position, the mobile device uses its sensors to perform adjustments with respect to a position set using a QR code; the subsequent movements are then tracked by the device's sensors, such as the gyroscope, magnetic compass and accelerometer, acting as an inertial navigation system. This system can be low cost and obtains better results compared to classical Kalman filtering and dead reckoning. The final error is lower than 20 cm at the end of a path of 40 m length.
Yoon, Hyungchul, et al. [58] present an in-building emergency response assistance system that focuses on obtaining the location and physical status of trapped victims inside a building during a disaster. It comprises two subsystems: a Victim Positioning System (VPS) and a Victim Assessment System (VAS). The VPS is developed for smart-phones using the RSS of the WiFi signal and the fingerprinting technique, referencing a pre-established WiFi fingerprinting map of the building. The VAS uses patterns obtained from measured 3D acceleration changes according to the status of a victim. The VPS finds the locations of smart-phones inside a building following a disaster, relying on WiFi signals from wireless access points. It is further assumed that many wireless access points will survive a minor-to-moderate disaster and continue working. The location information can be displayed locally on the smart-phone as part of a system that guides the victim to the nearest safe exit, and it can be transmitted for use by the on-site emergency responders.

The victim assessment system is designed to assess and inform the emergency responders of the status of the victims by collecting real-time data from sensors, such as the accelerometer, gyroscope, and magnetic field sensor embedded in the smart-phone. A passive VAS recognizes eight different types of activities: walking, running, standing, sitting/lying, rolling, fainting, stepping up stairs, and stepping down stairs. A Naive Bayes classifier estimates the activities linked to the four physical statuses of victims (highly ambulatory, ambulatory, nonambulatory, and unconscious) in order to aid emergency responders in coordinating evaluation and rescue efforts.
Son, Donghyun, et al. [4] present an indoor positioning system for an emergency rescue evacuation support system with partial infrastructure. It deploys PDR and RSS-based WiFi fingerprinting combined with a particle filter algorithm to improve the accuracy. The mobile users use smart-phones to measure heading and distance from inertial sensors and collect RSS values from WiFi access points. Then, a wall filter and an RSS particle filter are combined to estimate the user position. The simulation result shows an average error distance from 0.94 m to 2.81 m, depending on the amount of available WiFi infrastructure. The benefits of this system are its low cost and its ability to work with partial infrastructure.
Lee, Hyo Won et al. [59] propose an indoor localization scheme for disaster relief applications such as rescuing people in a building. It comprises two phases. In the first phase, probabilities based on the path-loss model are used to estimate the subarea where the victim nodes are located. In the second phase (or online phase), mobile nodes corresponding to rescuers go around the building, utilizing the pedestrian dead reckoning (PDR) technique with the internal sensor measurements of a smart-phone to track their positions, and send signals to nearby stationary nodes (victims). When the stationary nodes receive a signal from a mobile node, they report their RSS to the mobile node. Then, the rescuers estimate the location of the stationary node based on a path-loss model and a subarea database constructed in the offline phase. The subarea of each stationary node is determined by the localization server estimating the probabilities based on the path-loss model. This scheme can reduce the cost of the offline phase compared with fingerprinting-based methods because it does not measure RSS in that phase; another advantage is that it uses communication between devices instead of pre-installed facilities.
Simultaneous Localization and Mapping Problem
SLAM is an abbreviation for Simultaneous Localization and Mapping, also known as Concurrent Mapping and Localization (CML), which is a technique for estimating sensor motion and reconstructing the structure of an unknown environment [24]. SLAM deals with the necessity of building a map of the environment while simultaneously determining the location of the robot within this map [60]. The accuracy of the map depends on the accuracy of the localization, and vice versa. This technique was originally proposed to achieve autonomous control of robots in robotics [24].
Consider a robot moving in an unknown environment, as depicted in Figure 2.5, with:
- Given: the robot's controls $u_k$ used to drive the robot at time $k$, and the observations $z_{i,k}$ taken from the robot of the location of the $i$-th landmark at time $k$.
- Wanted: the map of the environment $m_i$ describing the location of the $i$-th landmark, which is time invariant, and the state vector $x_k$ describing the location and orientation of the vehicle (the path of the robot).

Figure 2.5: A robot in an unknown environment [61]: landmarks are observed at different positions along the robot's trajectory

Figure 2.6: A literature review for SLAM (Kalman-based SLAM: EKF, EIF, UKF, Sparse Extended Information Filter; graph-based SLAM; ROS support for 2D maps: Gmapping, Hector SLAM)
SLAM consists of multiple parts: landmark extraction, data association, state estimation, state update and landmark update [62]. The literature review for SLAM is shown in Figure 2.6.
The most common sensors used for SLAM can be categorized into laser-based, sonar-based, and vision-based systems. Besides these, some RF technologies are also used for SLAM, such as WiFi [63], UWB [64], and RFID [65]. The solutions to the SLAM problem fall into three main categories: Kalman filter (KF) based, particle based and graph-based SLAM. Kalman filters are Bayes filters that represent posteriors using Gaussians [60], i.e., unimodal, multivariate distributions. KF SLAM relies on the assumptions that the observations and the state transition functions are linear with added Gaussian noise and that the initial posteriors are also Gaussian. There are some main variations of the KF that are commonly used for SLAM: the Extended Kalman Filter (EKF) [66], the Extended Information Filter (EIF) [67], the Sparse Extended Information Filter (SEIF) [68], and the Unscented Kalman Filter (UKF) [69]. The main drawbacks of EKF and KF implementations are their high computational complexity and large linearization errors [70]. The Particle Filter (PF) is another implementation of a Bayes filter. In contrast to the KF-based approaches, it does not use a parametric model for the probability distributions, which makes it capable of handling highly nonlinear sensors and non-Gaussian noise. The PF can be combined with other techniques to deal with the SLAM problem, for example FastSLAM [71], FastSLAM 2.0 [72], and the unscented particle filter (UFastSLAM) [73]. FastSLAM relaxes the limitation of Gaussian-distributed noise and divides the SLAM problem into two parts: robot path and landmark estimation. A main characteristic of FastSLAM is that each particle makes its own local data association, so the computational complexity and memory usage are improved. Another widely used solution for SLAM is graph-based SLAM [74], which represents robot positions and observations as nodes and measurement constraints as edges [61]. This algorithm solves the full SLAM problem, in which the entire map and path are recovered, instead of just the recent pose and map as in online SLAM [75]. This difference allows considering dependencies between previous and current poses. Graph-based SLAM can work with all of the data at once to find the optimal SLAM solution.
In ROS, there are several packages that implement 2D mapping using LIDAR, such as hector_slam (2011), which uses EKF SLAM, Gmapping (2007), which uses FastSLAM, Karto SLAM (2010), which uses graph-based SLAM [76], and ETHZASL-ICP-Mapper (2013). hector_slam does not detect loop closures, but it can work without odometry information. In disaster scenarios, loop closure is not really important during the short period of SAR. Therefore, it can be a promising solution for this work.
SLAM for Disaster Relief
Alexander Kleiner in [65] proposes a novel method for real-time exploration and SLAM based on RFID tags for robot search and rescue. The tags are autonomously distributed in the environment. This approach allows the computationally efficient construction of a map within harsh environments and the coordinated exploration of a team of robots [65].
In [77], the authors use hector_slam, which is an open-source module for autonomous mapping and navigation with rescue robots. This robot, using ROS, solves the SLAM problem to generate sufficiently accurate metric maps useful for the navigation of first responders or a robot system. The system also handles the unreliable odometry information in SAR by relying purely on fast LIDAR data scan matching at the full LIDAR update rate.
In Urban Search and Rescue (USAR), SLAM using different perception techniques can be applied to robots to generate 3D maps and localize themselves. In [67], an extended information filter (EIF) based SLAM algorithm was proposed for building dense 3D maps in indoor USAR environments via the use of mobile rescue robots. The data association is performed using a combination of scale-invariant feature transform (SIFT) feature detection and matching. This SLAM technique uses a camera for a robot moving in 6D with no odometry information. In [66], an EKF SLAM technique using laser range finders is proposed for constructing a 3D map of rubble by teleoperated mobile robots. In [78], an online multi-robot SLAM system for 3D LIDARs is proposed for disaster scenarios.
Methods for Indoor Positioning of Humans
WiFi RSS based
In indoor environments, the WiFi signals fluctuate highly because of shadowing, multi-path, metal reflection, and interference noise [56]. Figure 3.1 shows the experimental result of RSS measurements from one access point. The cyan stars show the RSS measurements at distances from 0.5 m to 9 m. These values vary widely over this range, so using a path-loss model for these RSS values makes it quite hard to achieve high accuracy.

Figure 3.2 shows the histograms of the RSS from two different training datasets, at 1 meter and 2 meters, for three WiFi access points. The values of the different beacons in line-of-sight conditions are heterogeneous. Therefore, in this thesis, we use the fingerprinting method for WiFi based on pre-training data.

Figure 3.1: The relationship between RSS and distance using the path-loss model
iBeacon based Localization
The iBeacon technology is built upon BLE 4.0; thus, it is very energy efficient and can be utilized for localization based on the RSS of BLE devices, i.e., a smart-phone [38]. An iBeacon periodically broadcasts an advertisement packet containing a unique ID and a calibrated RSS value at a one-meter distance. This value allows us to determine the distance between an iBeacon and a device. Note that no paired connections are required for receiving these packets. More details can be found in [79]. The main advantage of the iBeacon is its very long battery life; for example, the Estimote iBeacon uses four CR2477 batteries with Estimote Location enabled by default, which allows for up to 5 years of battery life [80].
iBeacon-based localization leverages the RSS of BLE. The signal propagation of an iBeacon can be formulated using equation (3.1). Therefore, we can obtain the distance $d_i$ as:

$d_i = d_0 \cdot 10^{\frac{P_0 - P_i}{10\gamma}}.$

Based on the estimated distances between the iBeacons and a device, lateration techniques can be applied for localization (a lateration sketch is given below). In this work, these distances are used for fusing with the PDR technique.
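A possible least-squares lateration step from the beacon distances is sketched below; the beacon positions and measured ranges are made-up illustrative values, and a real deployment would feed in the Kalman-filtered RSS-based distances described next.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares 2D position from anchor coordinates and measured ranges.

    Linearises the range equations by subtracting the first anchor's equation,
    giving A p = b with p = (x, y).
    """
    x0, y0 = anchors[0]
    d0 = dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2.0 * (xi - x0), 2.0 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p

beacons = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
ranges = [3.6, 3.6, 3.6]             # noisy distances estimated from BLE RSS
print(trilaterate(beacons, ranges))  # close to (2.5, 2.5)
```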
By this fusing, the number of iBeacons can be reduced. However, RSS values are affected by the environment and, consequently, have high levels of noise. In order to deal with this noise, a Kalman filter can be applied to filter the RSS measurements [81, 82]. RSS values change randomly; therefore, the transition matrix $F$ and the measurement matrix $H$ are set to one. Moreover, there is no external control input. Denote by $\hat{r}_t^{BLE}$ and $z_t^{BLE}$ the estimated RSS and the raw RSS at time $t$, respectively; $Q$ is the measurement noise and $R$ is the process noise. With these assumptions, the prediction and update phases can be written as:
- Prediction phase:
  $\bar{r}_t^{BLE} = \hat{r}_{t-1}^{BLE}, \qquad \bar{P}_t = P_{t-1} + R$
- Update phase:
  $K_t = \frac{\bar{P}_t}{\bar{P}_t + Q}, \qquad \hat{r}_t^{BLE} = \bar{r}_t^{BLE} + K_t\,(z_t^{BLE} - \bar{r}_t^{BLE}), \qquad P_t = (1 - K_t)\,\bar{P}_t$

Figure 3.2: Histograms of RSS from two different training datasets of three beacons (access points)
The result of the Kalman filter on two samples of raw RSSI data from two iBeacons can be seen in Figure 3.3. The Kalman filter is able to remove a large part of the noise from the data but, as a trade-off, has to give up a bit of responsiveness.
Figure 3.3: Raw and filtered RSS at a distance of 1 m from an experiment with two BLE beacons (R = 0.001 and Q = 1)
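A minimal sketch of the scalar Kalman filter described above (F = H = 1, no control input), following the document's convention that R is the process noise and Q the measurement noise; the noise values mirror the R = 0.001, Q = 1 setting of Figure 3.3, and the sample RSS trace is invented for illustration.

```python
def kalman_rss(measurements, r_process=0.001, q_measure=1.0, x0=None, p0=1.0):
    """Smooth a raw RSS stream with a 1D Kalman filter (F = H = 1)."""
    x = measurements[0] if x0 is None else x0  # initial RSS estimate
    p = p0                                     # initial estimate covariance
    filtered = []
    for z in measurements:
        # Prediction: the RSS is assumed to stay constant, covariance grows
        p = p + r_process
        # Update: blend the prediction with the new measurement
        k = p / (p + q_measure)                # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        filtered.append(x)
    return filtered

raw = [-61, -64, -58, -70, -62, -63, -59, -65]   # invented raw RSS samples (dBm)
print([round(v, 1) for v in kalman_rss(raw)])
```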
Figure 3.4: A Gaussian process (GP) regression for an indoor positioning system
WiFi Fingerprinting Using Gaussian Process Regression
WiFi fingerprinting includes two phases. In the offline phase, WiFi RSS values are collected into a database by smart-phones or a robot at specific points to build the "WiFi map". In the online phase, the real-time RSS measurements are compared with the "WiFi map" to estimate the user position. This section presents the two phases of the WiFi technique used to build the "WiFi map" and the algorithm to estimate the position from this map.
Offline Phase: Building WiFi Fingerprinting Maps Using Gaussian Process Regression
Fingerprinting techniques require high-density training data to achieve high accuracy. However, the data collection is labour intensive. In the proposed framework, a GP regression is used to minimise the training time as well as to improve the effectiveness of WiFi fingerprinting [17]. GP regression has many advantages that make it applicable to indoor positioning systems using WiFi RSS [17, 83, 84]: it is non-parametric, continuous, and correctly handles uncertainty in both the process and the estimation [85]. A GP is especially useful because the WiFi RSS measurements are noisy due to various phenomena such as reflection, scattering and diffraction.
To generate a WiFi map using GP regression for an indoor positioning system from the training data, the GP relies on a covariance function (kernel) that establishes the correlation of values at different points, as shown in Figure 3.4. The conjugate gradient descent method is utilised to optimise the hyper-parameters of the kernel function. Finally, the GP generates prediction points by estimating the posterior distribution of the WiFi map over the space of interest.
Assume that $r = \{r_i, i = 1, \dots, n\}$ is the observed RSS vector of $n$ measurements received from an access point (AP) at the corresponding coordinate points in $d$ dimensions, $x = \{x_i, i = 1, \dots, n\}$, $x_i \in \mathbb{R}^d$, so that the pair $(x_i, r_i)$ represents the training data. Each observation $r_i$ can be related to a transformation $f(x_i)$ through a Gaussian noise model of a noisy process:

$r_i = f(x_i) + \epsilon, \qquad (3.5)$

where $\epsilon$ is the measurement noise generated from a Gaussian distribution with zero mean and variance $\sigma_n^2$. Any two output values $r_p$ and $r_q$ are assumed to be correlated by a covariance function based on their input values $x_p$ and $x_q$:

$\mathrm{cov}(r_p, r_q) = k(x_p, x_q) + \sigma_n^2\,\delta_{pq}, \qquad (3.6)$

where $k(x_p, x_q)$ is a kernel, $\sigma_n^2$ is the noise variance, and $\delta_{pq}$ is 1 if $p = q$ and 0 otherwise. The kernel function considered in this work is the squared exponential kernel:

$k(x_p, x_q) = \sigma_f^2 \exp\!\left(-\frac{\|x_p - x_q\|^2}{2l^2}\right), \qquad (3.7)$

where $\sigma_f^2$ and $l$ are called the hyper-parameters: $\sigma_f^2$ is the signal variance and $l$ is the length scale that determines how strongly the correlation between points drops off.
From equation (3.6), the covariance over the corresponding observations $r$ for all input values $x$ becomes:

$\mathrm{cov}(r) = K + \sigma_n^2 I, \qquad (3.8)$

where $K$ is the $n \times n$ covariance matrix of all pairs of training points. Then, predicted points are generated by the posterior distribution over the function at $x_* = \{x_{*,i}, i = 1, \dots, m\}$, given the training data $x, r$:

$p(r_* \mid x_*, x, r) \sim \mathcal{N}(\mu_*, \sigma_*^2), \qquad (3.9)$

$\mu_* = k_*^T (K + \sigma_n^2 I)^{-1} r, \qquad \sigma_*^2 = k_{**} - k_*^T (K + \sigma_n^2 I)^{-1} k_*, \qquad (3.10)$

where $k_{**} = \mathrm{cov}(x_*, x_*)$ is the variance of the generated points $x_*$ and $k_* = \mathrm{cov}(x_*, x)$ is the vector of covariances between $x_*$ and the training points $x$. In this work, the WiFi map is built from predicted points spaced 1 m x 1 m apart.
An example of a generated WiFi map for one access point is shown in Figures 3.5a and 3.5b: the mean is generated for the signal strength, and the variance is higher in areas far from the training points.
Let $\theta = (\sigma_f^2, l, \sigma_n^2)$ denote the hyper-parameters we need to estimate. The log likelihood of the observations is given by [84]:

$\log p(r \mid x, \theta) = -\frac{1}{2} r^T (K + \sigma_n^2 I)^{-1} r - \frac{1}{2} \log\left|K + \sigma_n^2 I\right| - \frac{n}{2} \log 2\pi, \qquad (3.11)$

which follows directly from the fact that the observations are jointly Gaussian. Equation (3.11) can be maximized using conjugate gradient descent. To do so, we need to compute the partial derivatives of the log likelihood:

$\frac{\partial}{\partial \theta_j} \log p(r \mid x, \theta) = \frac{1}{2}\,\mathrm{tr}\!\left(\left(\alpha\alpha^T - (K + \sigma_n^2 I)^{-1}\right)\frac{\partial K}{\partial \theta_j}\right), \quad \text{with } \alpha = (K + \sigma_n^2 I)^{-1} r. \qquad (3.12)$
Figure 3.5: An example of a generated WiFi map for one access point

Figure 3.6: An experiment of GP hyper-parameter estimation, illustrating the posterior RSS mean and confidence interval
Consider the partial derivatives of the kernel function with respect to its parameters. The partial derivatives of each element of the Gaussian kernel function $k(x_p, x_q)$ follow as:

$\frac{\partial k(x_p, x_q)}{\partial \sigma_f} = 2\sigma_f \exp\!\left(-\frac{\|x_p - x_q\|^2}{2l^2}\right), \qquad \frac{\partial k(x_p, x_q)}{\partial l} = \sigma_f^2 \exp\!\left(-\frac{\|x_p - x_q\|^2}{2l^2}\right)\frac{\|x_p - x_q\|^2}{l^3}. \qquad (3.13)$

An experiment of GP hyper-parameter estimation is shown in Figure 3.6b. The length scale $l$ characterizes how smooth the predicted mean is, and $\sigma_f$ represents the WiFi signal variance: the smaller $\sigma_f$ is, the slower the variation of the function.
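A compact sketch of the GP prediction step in equations (3.7)-(3.10), with fixed (not optimised) hyper-parameters and a toy set of training points; the variable names and values are assumptions for illustration, not the thesis data.

```python
import numpy as np

def se_kernel(xa, xb, sigma_f=8.0, length=3.0):
    """Squared exponential kernel between two sets of 2D points."""
    d2 = ((xa[:, None, :] - xb[None, :, :]) ** 2).sum(-1)
    return sigma_f**2 * np.exp(-d2 / (2.0 * length**2))

def gp_predict(x_train, r_train, x_star, sigma_n=2.0, **kern):
    """GP posterior mean and variance of the RSS at the query points x_star."""
    K = se_kernel(x_train, x_train, **kern) + sigma_n**2 * np.eye(len(x_train))
    k_star = se_kernel(x_star, x_train, **kern)          # cov(x_*, x)
    k_ss = se_kernel(x_star, x_star, **kern)             # cov(x_*, x_*)
    alpha = np.linalg.solve(K, r_train)
    mean = k_star @ alpha
    var = np.diag(k_ss - k_star @ np.linalg.solve(K, k_star.T))
    return mean, var

# Sparse training points (m) with their mean RSS (dBm), plus a 1 m grid to predict
x_train = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [3.0, 3.0]])
r_train = np.array([-45.0, -55.0, -52.0, -60.0])
gx, gy = np.meshgrid(np.arange(0.0, 4.0), np.arange(0.0, 4.0))
x_star = np.c_[gx.ravel(), gy.ravel()]
mu, var = gp_predict(x_train, r_train, x_star)
print(mu.reshape(4, 4).round(1))
```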
Online Phase: WiFi Position Estimation
In the online phase, the WiFi fingerprinting position is estimated by measuring the probabilities of the newly observed RSS values at the generated training points (the WiFi map). The steps of this algorithm are as follows:

- Step 1: for each access point $i$, the likelihood is computed as:

  $p(r_i \mid x_*) = \frac{1}{\sqrt{2\pi\sigma_{x_*}^2}} \exp\!\left(-\frac{(r_i - \mu_{x_*})^2}{2\sigma_{x_*}^2}\right), \qquad (3.14)$

  where $\mu_{x_*}$ and $\sigma_{x_*}^2$ are the means and variances of the predicted points $x_*$ from equation (3.10).

- Step 2: given the location $x_*$, if each access point is considered independently, we can compute the weight of the $L$ visible access points at each of the $m = [0, M]$ predicted training points as the sum of the log probabilities from equation (3.14):

  $w_m = \sum_{l=1}^{L} \log p(r_l \mid x_m). \qquad (3.15)$

- Step 3: sort the weights and select the $K$ largest weights $w_k$, $k = [1, K]$, to estimate the WiFi position from the corresponding predicted points.

In the hybrid localization, these WiFi and BLE estimates are fused with the PDR using a particle filter, in which each particle $x_t^i$ at time $t$ carries a weight $w_t^i$. The state estimation can then be determined by:

$\hat{x}_t = \sum_{i=1}^{N} w_t^i\, x_t^i.$

- Step 4 (resampling): generate a new set of particles $\{x_t^{i*}\}_{i=1}^{N}$ by resampling with replacement $N$ times from $\{x_t^i\}_{i=1}^{N}$ with probability $p\{x_t^{i*} = x_t^i\} = w_t^i$, using Sequential Importance Resampling (SIR) [88], which tries to estimate the probability distribution.
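A sketch of the online estimation steps 1-3 above: the log-likelihood of the observed RSS vector is evaluated at every predicted grid point of the WiFi map, and the position is taken from the K best-scoring points. The map arrays are assumed to come from a GP prediction as sketched earlier; averaging the selected points by their normalised likelihoods is one reasonable choice and is an assumption here, as are the names and toy values.

```python
import numpy as np

def wifi_position(rss_obs, grid_xy, grid_mean, grid_var, k=4):
    """Estimate the user position from a GP-generated WiFi map.

    rss_obs   : (L,) RSS observed online from L visible access points.
    grid_xy   : (M, 2) coordinates of the predicted grid points.
    grid_mean : (M, L) GP posterior mean RSS per grid point and AP.
    grid_var  : (M, L) GP posterior variance per grid point and AP.
    """
    # Steps 1-2: sum of per-AP Gaussian log-likelihoods at each grid point
    log_w = -0.5 * np.log(2.0 * np.pi * grid_var) \
            - (rss_obs - grid_mean) ** 2 / (2.0 * grid_var)
    w = log_w.sum(axis=1)                       # (M,) weights
    # Step 3: keep the K best grid points and average their coordinates,
    # weighted by their normalised likelihoods
    idx = np.argsort(w)[-k:]
    p = np.exp(w[idx] - w[idx].max())
    p /= p.sum()
    return p @ grid_xy[idx]

# Toy example: 3 grid points, 2 access points
grid_xy = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
grid_mean = np.array([[-45.0, -60.0], [-50.0, -55.0], [-60.0, -45.0]])
grid_var = np.full_like(grid_mean, 4.0)
print(wifi_position(np.array([-49.0, -56.0]), grid_xy, grid_mean, grid_var, k=2))
```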
Methods for ROBOT using SLAM
Hector SLAM
The Hector SLAM algorithm is selected as the framework for this work. This is primarily because this SLAM algorithm is suited to conditions in which odometer information cannot be acquired or the odometer error exceeds the tolerance. The approach has the ability to estimate a 6-Degree-of-Freedom (DOF) state comprising the translation and rotation of the platform. Hector SLAM not only relies on fast LIDAR data scan matching at the full LIDAR update rate, but also combines it with an attitude estimation system and an optional pitch/roll unit to stabilize the laser scanner, so the system can build environment maps even if the ground is not flat [7]. A comprehensive discussion of Hector SLAM is available in [89, 90].
Pose Estimation
For estimating the 6D pose of the platform, an EKF is applied to the IMU measurements; however, the velocity and position update is a pure integration of the measured accelerations, and the system would be unstable without additional feedback through measurement updates. Therefore, the full 6DOF robot pose and twist are estimated by implementing the EKF and fusing the measurements from the IMU, the 2D pose from the laser scan matcher, and optionally the magnetometer or the odometry from the encoders. A comprehensive explanation of this method is given in [89].
Kinematics for the Two-Wheeled Robot
The scheme for the kinematic control of the robot is shown in Figure 4.2. To control the robot, there are three main parts. The first part is the differential controller, which is a node in ROS. This node receives the velocity command (geometry_msgs/Twist message) and then computes the target tangential velocities for the two wheels. The second part is closed-loop control using a PID controller. It converts the target velocities from the differential controller into Pulse Width Modulation (PWM) values for the two motors. Furthermore, it updates the velocity state of each motor from the encoders and makes sure that the actual motor velocity tracks the input target velocity. The last part is the odometry, which is also a ROS node. It estimates the odometry data (the pose $(x', y', \theta')$) of the robot from the actual angular velocities of the two wheels.
Differential Controller
The differential controller takes the velocity command (/cmd_vel) and computes the tangential velocities for the left and right wheels. A comprehensive discussion of the differential drive is available in [91].
Let’s define the inputs as:
- $v_c$: target linear velocity of the robot body.
- $\omega_c$: target rotational velocity of the robot body.
- $L$: distance between the robot wheels.

Figure 4.2: The scheme of the robot for closed-loop control and odometry
Define the following outputs:
- $v_r$: tangential velocity of the right wheel.
- $v_l$: tangential velocity of the left wheel.
According to [91], the tangential velocity of each wheel can be computed as:

$v_r = \frac{2 v_c + \omega_c L}{2}, \qquad v_l = \frac{2 v_c - \omega_c L}{2}. \qquad (4.1)$
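A minimal sketch of the differential controller computation in equation (4.1), mapping a body velocity command to the two wheel tangential velocities; the wheel separation and command values are assumed examples.

```python
def cmd_vel_to_wheels(v_c, w_c, wheel_sep):
    """Convert a body velocity command to left/right wheel tangential velocities.

    v_c       : target linear velocity of the robot body (m/s)
    w_c       : target rotational velocity of the robot body (rad/s)
    wheel_sep : distance L between the wheels (m)
    """
    v_r = (2.0 * v_c + w_c * wheel_sep) / 2.0
    v_l = (2.0 * v_c - w_c * wheel_sep) / 2.0
    return v_l, v_r

# 0.2 m/s forward while turning at 0.5 rad/s with L = 0.25 m
print(cmd_vel_to_wheels(0.2, 0.5, 0.25))   # (0.1375, 0.2625)
```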
Closed-Loop Control Using PID

Motor state update
The motor state is the angular velocity of each motor, and it can be computed from the encoder. Define the following:
- $NP$: number of pulses from the wheel encoder.
- $ER$: encoder resolution.
- $\Delta t$: sampling time for reading the wheel speed (s).
- $\omega$: angular velocity, in rad/s.
- $r$: radius of the wheel, in meters.
- $v$: linear velocity, in m/s.

The angular velocity obtained from the encoder resolution and the number of pulses read in a time $\Delta t$ is calculated as:

$\omega = \frac{2\pi \cdot NP}{ER \cdot \Delta t}. \qquad (4.2)$
A PID controller is applied to the robot to correct the error between the target angular velocity and the feedback angular velocity from the motors. The control scheme is shown in Figure 4.3. PID control is widely used in industrial control systems because of the reduced number of parameters to be tuned [92]. The PID controller is basically a sum of three separate control adjustment terms that cover different situations:
- Proportional (P): the control command is proportional to the current error.
- Integral (I): the control command is proportional to the accumulated past error.
- Differential (D): the control command is proportional to the predicted future change in the error.

The velocity PID controller is illustrated in Algorithm 3.
Algorithm 3: Velocity PID algorithm
Data: tangential velocity $v$
Result: PWM
1  while loop every 10 ms do
2      compute the target angular velocity: $\omega_t^{target} = v / r$;
3      update the actual angular velocity $\omega_t^{enc}$ from equation (4.2);
4      calculate the error: $error = \omega_t^{target} - \omega_t^{enc}$;
5      $errSum = errSum + error$;
6      $errRate = error - preErr$;
7      compute the output PWM: $PWM_t = K_p \times error + K_i \times errSum + K_d \times errRate$;
8      $preErr = error$;
9  end
Two experiments with different PID coefficients $(K_p, K_i, K_d)$ for the left motor are shown in Figure 4.4. These coefficients directly affect the response of the output.
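A sketch of the velocity PID loop of Algorithm 3 in Python rather than microcontroller firmware; the gains, wheel radius and encoder readings are illustrative assumptions, not the tuned values used on the robot.

```python
import math

def encoder_angular_velocity(pulses, encoder_resolution, dt):
    """Equation (4.2): wheel angular velocity from encoder pulses counted in dt seconds."""
    return 2.0 * math.pi * pulses / (encoder_resolution * dt)

class VelocityPID:
    """Velocity PID controller for one wheel, following Algorithm 3."""

    def __init__(self, kp, ki, kd, wheel_radius):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.r = wheel_radius
        self.err_sum = 0.0
        self.pre_err = 0.0

    def step(self, target_tangential_vel, measured_angular_vel):
        target_w = target_tangential_vel / self.r        # line 2 of Algorithm 3
        error = target_w - measured_angular_vel          # line 4
        self.err_sum += error                            # line 5
        err_rate = error - self.pre_err                  # line 6
        pwm = self.kp * error + self.ki * self.err_sum + self.kd * err_rate
        self.pre_err = error                             # line 8
        return pwm

# Example use with assumed gains, radius and encoder reading
pid = VelocityPID(kp=0.6, ki=0.02, kd=0.01, wheel_radius=0.035)
w_meas = encoder_angular_velocity(pulses=2, encoder_resolution=360, dt=0.01)
print(pid.step(target_tangential_vel=0.2, measured_angular_vel=w_meas))
```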
Odometry Estimation
The odometry consists of the 2D pose information $(x', y', \theta')$ that can be calculated from the encoder data of the two wheels. The kinematic equations for the differential drive of this robot follow the theory in [91, 93]. Two scenarios are assumed when the robot is moving: either the robot is moving straight, or it is moving around an Instantaneous Center of Curvature (ICC), as shown in Figure 4.5.

- Moving straight forward: if the robot moves straight forward, both wheels move at the same speed and the dynamics are simple. The robot's pose at time $t$, $(x_t, y_t)$, can be updated as:

  $x_t = x_{t-1} + v\,\Delta t \cos(\theta_{t-1}), \qquad y_t = y_{t-1} + v\,\Delta t \sin(\theta_{t-1}), \qquad \theta_t = \theta_{t-1},$

  where $\theta_t$ is the robot's orientation at time $t$.
Figure 4.3: Speed control using the PID algorithm for the two-wheeled robot

- Rotating around a circle: the distance between the ICC and the midpoint of the wheel axis, $R_{ICC}$, is computed by:
  $R_{ICC} = \frac{L}{2} \cdot \frac{v_l + v_r}{v_r - v_l}, \qquad (4.5)$

  and the rotational velocity $\omega_t$ is:

  $\omega_t = \frac{v_r - v_l}{L}.$

  The ICC can be computed as:

  $ICC = \left[\, x_{t-1} - R_{ICC}\sin(\theta_{t-1}),\;\; y_{t-1} + R_{ICC}\cos(\theta_{t-1}) \,\right].$

  The robot's pose at time $t$ is updated by:

  $\begin{bmatrix} x_t \\ y_t \\ \theta_t \end{bmatrix} = \begin{bmatrix} \cos(\omega_t \Delta t) & -\sin(\omega_t \Delta t) & 0 \\ \sin(\omega_t \Delta t) & \cos(\omega_t \Delta t) & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_{t-1} - ICC_x \\ y_{t-1} - ICC_y \\ \theta_{t-1} \end{bmatrix} + \begin{bmatrix} ICC_x \\ ICC_y \\ \omega_t \Delta t \end{bmatrix}, \qquad (4.8)$

  which, writing $R = R_{ICC}$, becomes:

  $\begin{bmatrix} x_t \\ y_t \\ \theta_t \end{bmatrix} = \begin{bmatrix} \cos(\omega_t \Delta t) & -\sin(\omega_t \Delta t) & 0 \\ \sin(\omega_t \Delta t) & \cos(\omega_t \Delta t) & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R\sin(\theta_{t-1}) \\ -R\cos(\theta_{t-1}) \\ \theta_{t-1} \end{bmatrix} + \begin{bmatrix} x_{t-1} - R\sin(\theta_{t-1}) \\ y_{t-1} + R\cos(\theta_{t-1}) \\ \omega_t \Delta t \end{bmatrix}.$
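A sketch of the odometry update combining the straight-line case and the ICC rotation of equations (4.5)-(4.8); the wheel separation, velocities and time step are example values.

```python
import math

def update_pose(x, y, theta, v_l, v_r, L, dt, eps=1e-6):
    """Dead-reckoning pose update for a differential-drive robot."""
    if abs(v_r - v_l) < eps:
        # Moving straight: both wheels at (almost) the same speed
        v = 0.5 * (v_r + v_l)
        return x + v * dt * math.cos(theta), y + v * dt * math.sin(theta), theta

    # Rotating around the instantaneous centre of curvature (ICC)
    R = (L / 2.0) * (v_r + v_l) / (v_r - v_l)     # eq. (4.5)
    w = (v_r - v_l) / L                           # rotational velocity
    icc_x = x - R * math.sin(theta)
    icc_y = y + R * math.cos(theta)
    dth = w * dt
    # eq. (4.8): rotate the pose about the ICC by w*dt
    x_new = math.cos(dth) * (x - icc_x) - math.sin(dth) * (y - icc_y) + icc_x
    y_new = math.sin(dth) * (x - icc_x) + math.cos(dth) * (y - icc_y) + icc_y
    return x_new, y_new, theta + dth

# One 10 ms step with the left wheel slightly slower than the right
print(update_pose(0.0, 0.0, 0.0, v_l=0.18, v_r=0.22, L=0.25, dt=0.01))
```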
Figure 4.4: The experiment of PID velocity control of the left motor using different coefficients
The sensors used in this robot include a LIDAR, an IMU, a camera and the motors' encoders. The encoders were already described in the kinematics section above; this section describes general information about the remaining sensors.
Laser Range Finder
Laser range finders (LIDAR) can be used to measure the distance between the robot and objects in its vicinity by illuminating the target with pulsed laser light and measuring the reflected pulses. The parameters related to the LIDAR used in this robot include the following [94]:
- Minimum angle: start angle of the scan.
- Maximum angle: end angle of the scan.
- Angular increment (angular resolution): angular distance between measurements.
- Scan time: time between two scans.
- Minimum range: minimum observable range value.
- Maximum range: maximum observable range value.
- List of ranges: list of all measurements in a scan.
- List of intensities: list of all intensities in a scan.
Figure 4.5: Two instantaneous velocity scenarios: straight forward or turning along a circle

Figure 4.6: The RPLIDAR A1 frame broadcast (the laser frame is the default output on ROS, with the Z-axis directed outwards), from [94]
Inertial Measurement Unit (IMU)
The IMU is used to measure accelerations, angular rates and sometimes the magnetic field. These measurements are used for the pose estimation described earlier. This robot uses the MPU-9250 from InvenSense, which serves as a 9-axis accelerometer, gyroscope and magnetometer.
A Raspberry Pi camera is used in this robot to transfer moving images of the environment to the rescuer or to search for victims. It is very useful for rescuers when controlling the robot in unknown environments to build the maps. Some information about the camera is as follows:
- Optical sensor type: Exmor RS CMOS
- Optical sensor resolution: 8 megapixels
- Sensor name: Sony Exmor RS IMX219
- Focus adjustment: focus free
- Still image resolution: 3280 x 2464
- Video resolution: 1920 x 1080 at 30 fps
Navigation comprises the algorithms that use the measurements from the LIDAR sensor and the pose estimation to control the robot so that it moves autonomously without problems such as crashing, getting stuck at some location, or getting lost. It also uses the maps that were explored previously.
The robot needs to be controlled remotely by the rescuer. Therefore, it is necessary to have applications on computers or smart-phones. This part includes the ROS package running the user interface to control the robot as well as to show the moving images from the robot. An Android application is also developed to connect with the robot server for robot missions.
Experiments and Results of the Indoor Positioning System
This chapter comprises two main sections: the first section presents the general architecture of the IPS for the real experiments, and the second section discusses the performance of the proposed framework based on the results from the different test-beds.
The experimental architecture for the IPS is shown in Figure 5.1. There are five main parts in this system:
- WiFi access point: it is considered to be the RSS transmitter, which broadcasts a signal at a specific interval for the receiver (smart-phone). This access point can be a deployed WiFi router of the building or be built from the WiFi modules shown in Figure 5.2.
- BLE beacon: it has the same functionality as the WiFi access point. This system uses iBeacons from the Estimote company, shown in Figure 5.2.
- Android application: the app running on the smart-phone is used for updating the building maps, registering new users, sending and receiving WiFi data, and estimating the user position based on the different sensors of the smart-phone.
- Server: the server is responsible for receiving and processing requests from the clients (smart-phone users), such as creating a new map, updating the map, accessing the database, WiFi position estimation, collecting the database, etc.
- Database: the database runs on the server to store different data such as users, maps, beacons, RSSI training data and the WiFi map.
Android Application

The main functionalities of the Android application are:
- Measure data from the WiFi, BLE and IMU sensors.
- Implement the positioning algorithms (Kalman filter, particle filter, motion detection, KNN, etc.).
- Collect training data.
- Create a user interface (UI) for the users.

Figure 5.1: The experimental architecture of the Indoor Positioning System
The summary architecture of the Android application for its two main functions is shown in Figure 5.3. The map view creates the interface to show the map of the building and the user's position on the map. The map controller interacts with the different models (floor map, movement, WiFi access point, BLE beacon) and implements the positioning algorithms presented in Chapter 3 to estimate the user position. The map and the user's position can be sent to the server and shown on the GUI of the smart-phone. The sequence diagram for the map view is illustrated in Figure 5.4. The WiFi access point and BLE beacon models have the same attributes to identify the wireless sensor; the movement model holds the information of the step detection, step length and heading; the floor map model is used for the different map areas of the building; the socket data model consists of the information in JSON format used to communicate with the web server. Another important task for the Android application in this thesis is collecting training data. The sequence diagram of this part is illustrated in Figure 5.5. The training controller collects data from WiFi and BLE and then saves it to CSV files on Android. The purpose of this is also the evaluation of the proposed framework. In the real experiments, all WiFi RSS training data are sent to the server and saved to the database. Some results of this application are shown in Figure A.1 in the appendix.
Figure 5.2: The wireless sensor modules used for the IPS
Web Server and Database: Sails.js MVC
The server uses Sails.js, which is a comprehensive MVC-style framework for Node.js [95]. In particular, it is designed for rapid development of server-side applications in JavaScript. The MVC model is shown in Figure 5.6. The controller processes the client requests, connects with the database via the models and updates the view. The database models and their relationships are shown in a separate figure, and the API services of the web server are listed in Table A.1 in the appendix.
In this section, in order to evaluate the effectiveness and the improvements of the proposed framework, experiments are performed in two different test-beds: the first test-bed evaluates the performance of different WiFi techniques (GP and the KNN of [34]), and the second test-bed compares the proposed framework with WiFi fingerprinting and PDR, respectively. Moreover, in test-bed 2, different scenarios are performed to evaluate the effects of infrastructure breakdown. All experimental evaluation is implemented in the Python programming language.
Experimental Setup

Training data: For collecting training data, the Android application running on the smart-phone is used to collect WiFi and BLE RSS at the specific points and log them to a CSV file.
Figure 5.3: The summary architecture of the Android application for the IPS
Figure 5.4: The sequence diagram of the map view
Testing data: It is collected similarly to the training data; however, the heading estimation using the Madgwick filter is performed on the Android device, and then all the data of the inertial sensors, time and heading are logged to the CSV file while collecting the test data.
The first test-bed was a laboratory room of size 11 m x 9.5 m, using three access points built from WiFi modules that broadcast a signal every 500 ms. The training positions and the testing data were collected with an application on a Samsung SG-G395F running Android 8.0.0. The training positions are shown in Figure 5.7.
The second test-bed was on the fifth floor of the Krona building at the University of South-Eastern Norway (USN), with size 56.1 m x 61.5 m, as shown in Figure 5.8. The training points were placed 2.0 meters apart along the corridors. The RSS and the accelerometer and gyroscope values were collected using a data logging application running on the smart-phone while the user was walking along a specific trajectory.

Figure 5.5: The sequence diagram of training data collection
Evaluation Methodology

To evaluate our IPS framework, we use two metrics:
- Accuracy: the root mean square error between the estimated position and the real position is used:

  $error_i = \sqrt{(x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2}, \qquad (5.1)$

  where $(x_i, y_i)$ represents the true physical position and $(\hat{x}_i, \hat{y}_i)$ represents the estimated position. The mean error over the test set is:

  $\overline{error} = \frac{1}{n} \sum_{i=1}^{n} error_i, \qquad (5.2)$

  where $n$ is the number of test points.

- Precision: the cumulative distribution function (CDF) of the error data is used, which is known as the success probability of the position estimates.
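A small sketch of how the accuracy and precision metrics above can be computed from logged test points; the sample arrays are invented for illustration.

```python
import numpy as np

def position_errors(true_xy, est_xy):
    """Per-point Euclidean errors (eq. 5.1) between true and estimated positions."""
    return np.linalg.norm(np.asarray(true_xy) - np.asarray(est_xy), axis=1)

def error_cdf(errors):
    """Empirical CDF of the errors, used as the precision metric."""
    e = np.sort(errors)
    return e, np.arange(1, len(e) + 1) / len(e)

true_xy = [[0, 0], [2, 0], [4, 1], [6, 1]]
est_xy = [[0.5, 0.3], [2.4, -0.2], [3.2, 1.9], [6.1, 0.7]]
err = position_errors(true_xy, est_xy)
print("mean error:", err.mean().round(2))
e, p = error_cdf(err)
print("error at 80% CDF:", round(np.interp(0.8, p, e), 2))
```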
Results and Evaluation
Test-bed 1: evaluation of sparse training data for WiFi fingerprinting
Data sets of training points were collected for two maps in our laboratory room: map-1 consists of 48 data samples spatially distributed with 1.5 m x 1.5 m grid spacing, and map-2 consists of 12 data samples spatially distributed with 3 m x 3 m grid spacing. The results of the two experiments with 22 testing samples are presented in Table 5.1. Regarding map-1, the mean errors of the GP method are slightly lower than those of the KNN method. However, the position error at 80% confidence probability is approximately 2.7 m for both methods, as shown in Figure 5.9a. In the case of map-2 with sparse sample data, the average error of the GP method is 2.05 m, while the average error of KNN is 2.3 m. Furthermore, at the same 80% error confidence probability, the error of KNN is about 3 m, which is nearly 0.6 m higher than the error of GP, as shown in Figure 5.9b. The results are summarised in Table 5.1.

Figure 5.6: MVC model for Sails.js

Method | Mean errors (1.5 m) | Mean errors (3 m)
GP     | 1.762               | 2.05
WKNN   | 1.86                | 2.3

Table 5.1: Mean errors of the two methods using training data spatially distributed with 1.5 m x 1.5 m and 3 m x 3 m grid spacing
Test-bed 2: Evaluation of the performance of the proposed framework
Comparison of the hybrid method with WiFi fingerprinting and PDR

The trajectory results of the three different methods for the two experiments are shown in Figure 5.10. The first experiment, shown in the left column, applies the proposed framework in a structured environment. The corresponding cumulative distribution functions of the total positioning errors for the three approaches are also illustrated in Figure 5.10. The mean errors of the three methods are shown in Table 5.2. The green line in the figure shows the result of GP fingerprinting, whose mean error is 2.58 m; the mean error of the PDR method is 3.9 m because of inaccurate heading estimation during walking, while the hybrid method obtains a mean error of 1.02 m by using 500 particles combined with wall checking from the map and without an initial position. The average positioning error of the proposed framework was reduced by 60.4% and 73.8% compared with the WiFi fingerprinting and the PDR methods, respectively.
The second experiment is shown in the right column of Figure 5.10. It is similar to the first experiment; however, it starts from a known point, so the particles are already converged. This situation is considered because the mean error decreases significantly, from 1.02 m to 0.62 m, for the proposed framework. This can be used by rescuers or victims who may already know their start position in the building. If no WiFi access point is working and there are only a few BLE beacons, setting the initial position is really useful for this system.

Figure 5.7: Training positions of one access point in the 11 m x 9.5 m room
In addition, as the results in Table 5.2 show, the wall-filter algorithm derived from the environment map is also a significant factor in improving the position accuracy. The mean error of the proposed framework in the normal environment increases from 1.02 m to 1.25 m when changing from using the wall filter to using no wall filter.
The last scenario of the experiments in Table 5.2 is the hybrid method using only BLE and PDR, which means that no WiFi access point is working. In the three testing situations, the mean error increases by approximately a factor of two compared to the tests using all WiFi access points. One of the main reasons for this is that only three BLE beacons were used in these experiments; the result can be improved by increasing the number of beacons. However, the result (1.1 m) from the test with an initial position and the wall filter is good enough to localize people in disaster relief.
In order to evaluate the effects of partial infrastructure, eight WiFi access points are chosen from the test data collected on the fifth floor. Then, these APs are gradually removed from the database, so that the number of WiFi access points is varied for evaluation purposes. It is assumed that the three BLE beacons all keep working in these conditions.
The results in Figure 5.11 and Table 5.3 describe the performance of the IPS in the different scenarios. It is clear that the number of WiFi APs enhances the overall localization performance: as the APs are removed, the mean error increases from 1.02 m to 2.03 m for the case without an initial position and from 0.62 m to 1.1 m for the case with an initial position. The mean errors are quite stable at the highest error in the scenarios with fewer than 3 available APs; in these scenarios the position estimation relies on the BLE beacons only. When more than 3 APs are available, the mean error gradually decreases in proportion to the number of APs.

Figure 5.8: Training positions on the fifth floor. The circles with different colors represent the reference positions and the signal power strength
Figure 5.9: CDF results of test-bed 1: (a) cumulative probability functions of the errors of the two methods using map-1; (b) cumulative probability functions of the errors of the two methods using map-2
Scenario                                | WiFi | PDR | BLE and PDR | BLE, WiFi, PDR
No initial position, avoiding walls     | 2.58 | 3.9 | 2.03        | 1.02
No initial position, no avoiding walls  | 2.58 | 3.9 | 1.25        | 1.25

Table 5.2: Results of the mean error in m for the GP, PDR and hybrid methods in the different experiments
Figure 5.11: The localization performance with partially available infrastructure, for the two cases (with and without an initialized position)
Scenario            | 8 AP | 7 AP | 6 AP | 5 AP | 4 AP | 3 AP | 2 AP | 1 AP | Only BLE
No initial position | 1.02 | 1.2  | 1.4  | 1.5  | 1.54 | 1.9  | 2.02 | 2.03 | 2.03
Initial position    | 0.62 | 0.7  | 0.78 | 0.85 | 0.9  | 1.02 | 1.01 | 1.1  | 1.1

Table 5.3: The localization performance (mean error in m) with partially available infrastructure
Considering the computational time of the particle filter, different experimental results are shown in Table 5.4. A higher number of particles in the filter achieves better accuracy; however, the computational time is also higher. Constraining the wall penetration of each particle adds some additional computation time.
Method | Computation Time (ms) | Number of Particles
PDR + GP | 58 | 200
PDR + GP + map | 85 | 200
PDR + GP | 220 | 500
PDR + GP + map | 278 | 500
PDR + GP + map | 830 | 1000
Bang 5.4: Results of the implemented hybrid methods considering the number of particles used in the filter and the corresponding computational time.
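The wall constraint can be implemented as a 2D segment-intersection test between a particle's predicted motion and the wall segments of the building map. The following is only a minimal sketch; the wall representation and the weight handling are assumptions for illustration, not the exact implementation evaluated above.

def segments_intersect(p1, p2, q1, q2):
    """Return True if segment p1-p2 crosses segment q1-q2 (2D orientation test)."""
    def ccw(a, b, c):
        return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])
    return (ccw(p1, q1, q2) != ccw(p2, q1, q2)) and (ccw(p1, p2, q1) != ccw(p1, p2, q2))

def crosses_wall(prev_pos, new_pos, walls):
    """walls: iterable of ((x1, y1), (x2, y2)) wall segments taken from the building map."""
    return any(segments_intersect(prev_pos, new_pos, w0, w1) for w0, w1 in walls)

# Example: a particle whose predicted step crosses a wall receives (near-)zero weight.
walls = [((0.0, 2.0), (5.0, 2.0))]            # one illustrative wall segment
prev_pos, new_pos = (1.0, 1.0), (1.0, 3.0)    # PDR-predicted particle motion
weight = 1e-6 if crosses_wall(prev_pos, new_pos, walls) else 1.0
print(weight)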
Comparison of Average Location Errors
Considering the location accuracy of different approaches, Table 5.5 shows the accuracy of several approaches in different environments. In [17], the authors used only GPR in a supermarket environment and attained an accuracy of 2.3m, which is considerably higher than that of the hybrid methods. The hybrid methods in [34] and [96], which combine KNN fingerprinting with a particle filter and a Kalman filter, achieved accuracies of 1.52m and 1.96m, respectively; compared to these, the proposed framework improves the mean error by about 32% and 48%, respectively. In [60], the authors fuse WiFi weighted path loss, PDR and landmarks using a Kalman filter. Their result is quite accurate in an area of 27.5m x 16.4m; however, it requires setting up the start point and correcting the drift at each new landmark. In that case, the proposed framework achieves 0.65m when the initial position is given. In summary, the proposed method achieves high accuracy.
Study | Environment | Method | Error
[17] | Supermarket | Gaussian Process Regression | 2.3m
[34] | 41.26m x 26.10m | Hybrid KNN, Particle | 1.52m
[96] | Building | KNN, Kalman | 1.96m
[60] | 27.5m x 16.4m | WiFi using weighted path loss, Kalman filter (initial point) | -
Proposed | 56.1m x 61.5m | WiFi (GP), iBeacon (Kalman), PDR, Particle | 1.02m
Bang 5.5: Mean error of different approaches
From the results of the different test-beds, the proposed framework is significantly improved compared to using only WiFi fingerprinting or only PDR. The accuracy depends on the number of access points and BLE beacons available in the building. The accuracy of the PDR method also depends on whether the smart-phone is held in the hand or carried in a pocket.
However, a benefit of the particle filter is that the location can still be estimated even when no heading information is available. In the case of no WiFi access points and no BLE beacons, people can use only PDR to localize themselves in the building; however, the smart-phone should then be held in the hand and the initial position must be set.
Additionally, this system can be applied to different location-based services, for example in hospitals, museums and airports. The system therefore not only works in disaster situations but also works well under normal conditions. The positions of all users can be monitored on the server, which supplies useful information for assessing the victims in the building. This IPS can thus be a reasonable solution for minimizing the human damage of a disaster.
(a) CDF of errors; (b) CDF of errors. The plots compare the proposed method and PDR; the trajectory plots show the WiFi access points and the true trajectory.
Experiments and Results of SLAM
The designed robot is shown in Figure 6.1b. It includes a Raspberry Pi 3B+ as a host to run all algorithms, an RPLidar, an Arduino mini, an IMU 9250, and two motor drivers and motors with encoders.
The experiment was carried out on the fifth floor of the Krona building; the map is shown in Figure 6.2. The aim was to study the performance of the SLAM in detecting objects and building the map under real-world conditions. The robot was controlled remotely to move around the corridor, covering as many areas as possible. The odometry, laser_scan and IMU data were logged during the robot's operation, and the SLAM is then run offline based on the recorded data.
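As a sketch of this offline processing, the recorded topics can be read back with the rosbag Python API. The bag file name and the laser topic name below are assumptions for illustration; only the /imu/data topic appears explicitly in the node graph of Figure 6.4.

import rosbag

# Read the logged laser scans and IMU messages from the recorded bag file.
bag = rosbag.Bag("slam_test.bag")
for topic, msg, t in bag.read_messages(topics=["/scan", "/imu/data"]):
    if topic == "/scan":
        print(t.to_sec(), "laser ranges:", len(msg.ranges))
    elif topic == "/imu/data":
        print(t.to_sec(), "yaw rate:", msg.angular_velocity.z)
bag.close()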
1.3 Remote Controller using Smart-phone
The robot can be controlled by an application on a smart-phone running Android OS, shown in Figure 6.3. This simple application displays video from the robot's camera over WiFi and provides a joystick to send velocity commands to the robot. In particular, the app subscribes to camera/rgb_compressed and publishes the velocity command (vel_cmd) to control the robot from the joystick.
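The Android app itself is not reproduced here, but a minimal rospy sketch of the same topic interface (subscribing to camera/rgb_compressed and publishing vel_cmd) looks roughly as follows; the message types and the fixed joystick values are assumptions for illustration only.

import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import CompressedImage

def image_callback(msg):
    # The app displays the JPEG frames received on the compressed image topic.
    rospy.loginfo("received frame of %d bytes", len(msg.data))

rospy.init_node("smartphone_teleop")
rospy.Subscriber("camera/rgb_compressed", CompressedImage, image_callback)
cmd_pub = rospy.Publisher("vel_cmd", Twist, queue_size=10)

rate = rospy.Rate(10)
while not rospy.is_shutdown():
    cmd = Twist()
    cmd.linear.x = 0.2   # forward speed from the virtual joystick (placeholder value)
    cmd.angular.z = 0.0  # turn rate from the virtual joystick (placeholder value)
    cmd_pub.publish(cmd)
    rate.sleep()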
Hinh 6.2: The real map of the fifth floor of the Krona building
1.4 ROS Nodes for Hector Slam
The active ROS nodes for Hector SLAM are illustrated in Figure 6.4. From the laser data and the transform of the IMU data, hector_mapping estimates the pose output. The hector_trajectory_server node keeps track of the tf trajectories extracted from the tf data and makes them accessible via a service and a topic; hector_geotiff is the node that allows saving the map and the trajectory data.
The trajectory and the map built in the SLAM experiments on the fifth floor are shown in Figure 6.5. The maps built from laser_scan appear quite accurate for both Hector SLAM and Gmapping. Because the laser scanner cannot identify glass walls well, the quality of the maps may be degraded in some cases. It can be seen from the results in Figure 6.5c and the other panels of Figure 6.5 that the number of map updates for Gmapping is lower than for Hector SLAM in all experiments. The objects in the environment are drawn quite well with Hector SLAM. Furthermore, the average CPU load during SLAM was about 15.6% for Hector SLAM, while for Gmapping it was 32%.
Hình 6.3: The Android application on smart-phone to control robot
Hình 6.4: ROS node network of Hector SLAM running from a bag file.
In addition, the output of Hector SLAM is the estimated pose, and its update rate is significantly faster than that of Gmapping even without odometry data, whereas the output of Gmapping is a pose correction instead of the full pose, and this correction is updated only every 5 to 6 seconds.
(a) The experiment of Hector SLAM on the fifth floor
(c) The experiment of Hector SLAM in the small area
Hinh 6.5: The maps built from Hector SLAM and Gmapping
Conclusion and Future Work
In this project, a framework combining WiFi fingerprinting, BLE and Pedestrian Dead Reckoning (PDR) by means of a particle filter is proposed. Gaussian Process Regression for WiFi fingerprinting is deployed to generate a WiFi map, which partially reduces the training-data collection time while keeping high accuracy in indoor environments. The proposed hybrid approach makes it possible to address the well-known drift problem of the PDR approach: by combining PDR with fingerprinting, high accuracy can be achieved in a given area based on the Received Signal Strength (RSS). The approach also has the advantage that it can be easily deployed in real situations. Moreover, a particle filter is leveraged in the proposed framework to update each particle based on the PDR motion model, using effective algorithms for step detection, stride length and heading; the position estimates from WiFi fingerprinting are then combined with the map features to correct each particle. The experiments were conducted in a real building without additional WiFi infrastructure and with three BLE beacons. The results indicate the effectiveness of the proposed framework in different scenarios: the mean error ranges from 2.03m down to 0.62m depending on the number of WiFi access points and on whether the start point is known.
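For illustration, a minimal sketch of one cycle of such a hybrid particle filter is given below. It is a simplified outline only: the noise values, array layout and the fixed step, heading and WiFi/BLE inputs are assumptions standing in for the framework's actual step-detection, stride-length and RSS-estimation modules.

import numpy as np

NP = 200                                   # number of particles
particles = np.zeros((NP, 3))              # [x, y, heading] per particle
weights = np.ones(NP) / NP

def pf_step(particles, weights, stride, d_heading, z_xy, sigma_z=1.5):
    # Prediction: apply the PDR motion model with small per-particle noise.
    particles[:, 2] += d_heading + np.random.normal(0, 0.05, NP)
    noisy_stride = stride + np.random.normal(0, 0.1, NP)
    particles[:, 0] += noisy_stride * np.cos(particles[:, 2])
    particles[:, 1] += noisy_stride * np.sin(particles[:, 2])

    # Correction: weight by the Gaussian likelihood of the WiFi/BLE position estimate z_xy.
    d2 = np.sum((particles[:, :2] - z_xy) ** 2, axis=1)
    weights = weights * np.exp(-d2 / (2 * sigma_z ** 2)) + 1e-300
    weights /= weights.sum()

    # Resampling (simple multinomial resampling for brevity).
    idx = np.random.choice(NP, NP, p=weights)
    return particles[idx], np.ones(NP) / NP

# One cycle: a detected step of 0.7 m, a 0.1 rad heading change, a WiFi/BLE fix at (3, 4).
particles, weights = pf_step(particles, weights, 0.7, 0.1, np.array([3.0, 4.0]))
print(np.average(particles[:, :2], axis=0))   # position estimate after this cycle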
In the second part of the work, a robot is designed that uses Hector SLAM to explore and map indoor environments for the indoor positioning system and for searching for and rescuing victims in disaster scenarios. The experimental results show the advantages of Hector SLAM under these conditions. The Hector SLAM technique consumes little computational resources and can therefore run on a light-weight, low-power and low-cost processor, which are very important factors for disaster scenarios. Moreover, combining the pose output of Hector SLAM, which does not require odometry information, with the IMU can make the robot map better in non-flat environments.
A limitation of this work is that the SLAM system and the indoor positioning system work separately. The dynamic map from the robot can be updated directly to the server, but it would be much more convenient to integrate the two into one system. The robot could then map automatically as well as collect WiFi data for fingerprinting, so the accuracy of the WiFi fingerprinting techniques could be improved.
Additionally, in large areas the WiFi fingerprinting database is also large, so the computational cost increases. In order to reduce this cost, clustering techniques can be applied to divide the whole area into several subareas and improve the performance of the system (a minimal sketch follows this paragraph). Moreover, the stride lengths are currently computed by offline calibration; we will therefore consider calibrating these parameters while people are moving inside the building.
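As a rough illustration of the clustering idea, the sketch below partitions hypothetical fingerprint reference points with k-means (scikit-learn) so that the online search is restricted to one subarea; the data, cluster count and coarse estimate are placeholders, not values from the system.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical fingerprint reference points (x, y) collected during the offline phase.
ref_points = np.random.uniform(0, 60, size=(400, 2))

# Divide the area into subareas; online, only the fingerprints of the matched
# cluster need to be compared, which reduces the computational cost.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(ref_points)
labels = kmeans.labels_

# A coarse position estimate (e.g., from BLE path loss) selects the subarea to search.
coarse_estimate = np.array([[12.0, 30.0]])
subarea = kmeans.predict(coarse_estimate)[0]
candidates = ref_points[labels == subarea]
print("search only", len(candidates), "of", len(ref_points), "reference points")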
These problems are left as future work for the indoor positioning system.
For the robot, besides the 2D map, other solutions such as building a 3D map using inertial-visual SLAM and adding victim detection can be deployed for these scenarios. Furthermore, future work will focus on developing a snake robot, which is much more suitable for such conditions.
Some views of the Android application are shown in Figure A.1.
The database models are shown in Figure A.2:
Route | Handler
PUT /trilateration | Rssi_collectController.trilateration (receive RSSI collected from devices)
- | MapController.updatecurrentmap
Bang A.1: The web service API designed for the server
(a) Map view  (b) BLE beacon list  (c) WiFi list
(d) Calibration for movement detection  (e) Testing IMU and logging data for the experiment  (f) Collecting WiFi and BLE training data
Hinh A.1: Some user interfaces of the Android application used in the IPS experiments
Users: id, name, email, password, role, map, posX, posY, online, rssi_collects
Map: id, name, beacons, users, width, height
Rssi_collect: id, beacon, user, distance, angle
Beacon: name, model, map, bssid, ssid, posX, posY, working, rssi_trainings
Rssi_Train: id, beacon, cluster, distance, rssi_mean, rssi_variance, posX, posY
Hinh A.2: The database schema of the web server
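As a rough sketch of how two of these tables could be declared in Python, the snippet below uses SQLAlchemy with column types inferred from the field names in Figure A.2; it is not the server's actual implementation.

from sqlalchemy import Column, Integer, String, Float, Boolean, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    email = Column(String)
    password = Column(String)
    role = Column(String)
    map = Column(Integer)        # id of the map the user is currently on (assumed)
    posX = Column(Float)
    posY = Column(Float)
    online = Column(Boolean)

class RssiTrain(Base):
    __tablename__ = "rssi_trainings"
    id = Column(Integer, primary_key=True)
    beacon = Column(Integer)     # id of the transmitting beacon / access point (assumed)
    cluster = Column(Integer)
    distance = Column(Float)
    rssi_mean = Column(Float)
    rssi_variance = Column(Float)
    posX = Column(Float)
    posY = Column(Float)

# Create the tables in a local SQLite file for testing.
engine = create_engine("sqlite:///ips.db")
Base.metadata.create_all(engine)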
# -*- coding: utf-8 -*-
"""
Created on Fri Nov 30 22:14:26 2018

@author: THONG_KTDT
"""
import numpy as np
from matplotlib import pyplot as plt
import pandas as pd
from StepDetection import AccStepDetection
from HeadingEstimation import DirectionEstimate
from MagneticSensor import MagneticEstimate
from particle import Particle

# Input files: raw IMU log and the RSSI log recorded during the walk.
# path = './data1/imu_data_15steps.csv'
path = './data2/raw_imu5.csv'
pathRss = './data2/rssi_collect3_12steps.csv'
imu_data = pd.read_csv(path)
rssi_data = pd.read_csv(pathRss)
print(imu_data.head())

# Time stamps, compass orientation and the raw sensor columns.
t = imu_data["t"]
Orientation = imu_data["direction"]
Ax = imu_data["Ax"]
Ay = imu_data["Ay"]
Az = imu_data["Az"]
Gx = imu_data["Gx"]
Gy = imu_data["Gy"]
Gz = imu_data["Gz"]
Mx = imu_data["Mx"]
My = imu_data["My"]
Mz = imu_data["Mz"]

# Gyroscope readings converted from rad/s to deg/s.
Gx_deg = Gx * 180 / np.pi
Gy_deg = Gy * 180 / np.pi
Gz_deg = Gz * 180 / np.pi
t_rss = rssi_data["time"]

# plt.plot(t, Gx_deg, 'b-', markersize=1)
# plt.plot(t, Gy_deg, 'g-', markersize=1)
# plt.plot(t, Gz_deg, 'r-', markersize=1)
# plt.plot(t, Mz, 'r-', markersize=1)

# Buffers for the current sample of each sensor.
currentAcc = np.array(np.zeros((3, 1)))
currentGyro = np.array(np.zeros((3, 1)))
currentMag = np.array(np.zeros((3, 1)))

# finalAcc = np.array(np.zeros((len(t), 1)))
THRESHOLD = np.array(np.zeros((len(t), 1)))
velocity = np.array(np.zeros((len(t), 1)))
velocity2 = np.array(np.zeros((len(t), 1)))

# Create the step-detection, heading and magnetic estimators.
accSensor = AccStepDetection()
gySensor = DirectionEstimate()
angular_velocity = np.array(np.zeros((len(t), 1)))
angular_velocity_deviation = np.array(np.zeros((len(t), 1)))
magSensor = MagneticEstimate()
mag_magnitude = np.array(np.zeros((len(t), 1)))
mag_deviation = np.array(np.zeros((len(t), 1)))
heading = np.array(np.zeros((len(t), 1)))

# ################ TEST KALMAN HERE ################
# Initialise matrices and variables
C = np.array([[1, 0]])

# Calculate stride length
steps = 0
totalStrideLength = 0
last_time = 0.
startGetStride = 0

# State vector [x, y, orientation]
xTrue = np.matrix(np.zeros((3, 1)))
xDR = np.matrix(np.zeros((3, 1)))
particlePdr = Particle(10)
px = np.matrix(np.zeros((3, particlePdr.NP)))  # particle store
pw = np.matrix(np.zeros((3, particlePdr.NP)))  # particle weights
direction = 0
lastDirection = 0

# History of the dead-reckoned positions for plotting.
hxDR = xDR
for i in range(len(t) - 1):
    time = t[i]
    currentAcc[0] = Ax[i]
    currentAcc[1] = Ay[i]
    currentAcc[2] = Az[i]
    currentGyro[0] = Gx_deg[i]
    currentGyro[1] = Gy_deg[i]
    currentGyro[2] = Gz_deg[i]
    currentMag[0] = Mx[i]
    currentMag[1] = My[i]
    currentMag[2] = Mz[i]

    # test g-h filter
    dt = time - last_time
    # pre_est, dx = g_h_filter_t(Orientation[i], pre_est, dx, 0.1, 0.001, dt)

    # Step detection from the accelerometer magnitude, plus gyro/magnetometer processing.
    velocity2[i], velocity[i], stepDectect = accSensor.detectStep2(currentAcc, time)
    angular_velocity_deviation[i], angular_velocity[i] = gySensor.CalculateDeviation(currentGyro)
    mag_deviation[i], mag_magnitude[i] = magSensor.CalculateDeviation(currentMag)
    heading[i] = magSensor.computeCompassHeading(currentMag)

    if stepDectect == 1:
        startGetStride = 1
        print("Orientation = ", Orientation[i])
        direction = Orientation[i]
        steps += stepDectect

    strideLength = 0
    if startGetStride == 1:
        finish, Amax = accSensor.getAmax(velocity[i])
        if finish == 1:
            print("Amax = ", Amax)
            strideLength = accSensor.calculateStrideLength(Amax, 0)
            print("Stride Length = ", strideLength)
            startGetStride = 0
            print("orientation at max = ", Orientation[i])
            theta = Orientation[i] - lastDirection
            print("lastDirection = ", lastDirection)
            print("current direction = ", direction)
            print("theta = ", theta)
            # Propagate the dead-reckoned state with the detected stride and heading change.
            xDR = particlePdr.predictionMotion(xDR, strideLength, theta)
            print("xDr = ", xDR)
            hxDR = np.hstack((hxDR, xDR))
            lastDirection = Orientation[i]
            totalStrideLength += strideLength
    last_time = time

print("number of steps = ", steps)
print("total stride length = ", totalStrideLength)

THRESHOLD = np.array(np.zeros((len(t), 1)))
for i in range(len(THRESHOLD)):
    THRESHOLD[i] = 4

fig = plt.figure()
plt.title("Step Detection")
# plt.plot(t, finalAcc, 'b-', markersize=1)
plt.plot(t, THRESHOLD, 'b-', markersize=1, label="Threshold")
plt.plot(t, velocity, 'g-', markersize=1, label="Magnitude after window")
plt.plot(t, velocity2, 'ro', markersize=2, label="Magnitude before window")
plt.ylabel("Magnitude of acceleration m/s^2")
plt.xlabel("Time samples")
plt.legend(loc='upper left')

fig = plt.figure()
plt.title("Compass orientation")
plt.plot(t, Orientation, 'b-', markersize=1)

fig = plt.figure()
plt.title("trajectory PDR")
plt.plot(np.array(hxDR[0, :]).flatten(), np.array(hxDR[1, :]).flatten(), 'b-', markersize=2)

# plt.title("Angular velocity deviation")
# plt.plot(t, angular_velocity_deviation, 'b-', markersize=1)
# plt.plot(t, angular_velocity, 'r-', markersize=1)
# plt.ylabel("Magnitude of angular acceleration rad/s^2")
# plt.legend(loc='upper left')

fig = plt.figure()
plt.title("Magnitude Magnetic")
plt.plot(t, heading, 'b-', markersize=1)
# plt.plot(t, mag_magnitude, 'r-', markersize=1)
plt.show()
[1] World Health Organization (WHO). [Online]. Available: http://apps.who.int/disasters/repo/
[2] U.S. Fire Administration, U.S. fire statistics. [Online]. Available: https://www.usfa.fema.gov/data/statistics/tab-2
[3] J. R. Hall et al., Burns, Toxic Gases and Other Fire-like Hazards in Non-fire Situations. National Fire Protection Association, Quincy, MA, 2004.
[4] D. Son, E. Cho, M. Choi, K. Khine, and T. T. Kwon, "Effects of partial infrastructure on indoor positioning for emergency rescue evacuation support system," in Proceedings of the First CoNEXT Workshop on ICT Tools for Emergency Networks and DisastEr Relief.
[5] A. F. G. Ferreira, D. M. A. Fernandes, A. P. Catarino, and J. L. Monteiro, "Localization and positioning systems for emergency responders: A survey," IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2836-2870, 2017.
[6] A. Alarifi, A. Al-Salman, M. Alsaleh, A. Alnafessah, S. Al-Hadhrami, M. Al-Ammar, and H. Al-Khalifa, "Ultra wideband indoor positioning technologies: Analysis and recent advances," Sensors, vol. 16, no. 5, p. 707, 2016.
[7] N. Nakajima and K. Hattori, "Autonomous pedestrian positioning using ultrasound sensor for stride measurement," in Proc. of the International Conference on Indoor Positioning and Indoor Navigation (IPIN), 2015.
[8] M. Bhattacharya, C.-H. Chu, and T. Mullen, "RFID implementation in retail industry: Current status, issues, and challenges," in Proc. of the 38th Annual Meeting of the Decision Sciences Institute, Phoenix, AZ, 2007, pp. 2171-2176.
[9] Z. Jianyong, L. Haiyong, C. Zili, and L. Zhaohui, "RSSI based Bluetooth low energy indoor positioning," in Proc. of the IEEE International Conference on Indoor Positioning and Indoor Navigation (IPIN), 2014, pp. 526-533.
[10] A. Cramariuc, H. Huttunen, and E. S. Lohan, "Clustering benefits in mobile-centric WiFi positioning in multi-floor buildings," in 2016 International Conference on Localization and GNSS (ICL-GNSS). IEEE, 2016, pp. 1-6.
[11] A. R. Jimenez, F. Seco, C. Prieto, and J. Guevara, "A comparison of pedestrian dead-reckoning algorithms using a low-cost MEMS IMU," pp. 37-42, 2009.
[12] R. Feliz Alonso, E. Zalama Casanova, and J. Gomez Garcia-Bermejo, "Pedestrian tracking using inertial sensors," Journal of Physical Agents, vol. 3, no. 1, pp. 35-42, 2009.
[13] F. Li, C. Zhao, G. Ding, J. Gong, C. Liu, and F. Zhao, "A reliable and accurate indoor localization method using phone inertial sensors," in Proceedings of the 2012 ACM Conference on Ubiquitous Computing. ACM, 2012, pp. 421-430.
[14] T. T. T. Pham, T.-L. Le, and T.-K. Dao, "Improvement of person tracking accuracy in camera network by fusing WiFi and visual information," Informatica, vol. 41, no. 2, 2017.
[15] T.-K. Dao, T.-T. Pham, and E. Castelli, "A robust WLAN positioning system based on probabilistic propagation model," in 2013 9th International Conference on Intelligent Environments. IEEE, 2013, pp. 24-29.
[16] A. Khalajmehrabadi, N. Gatsis, and D. Akopian, "Modern WLAN fingerprinting indoor positioning methods and deployment challenges," IEEE Communications Surveys & Tutorials, vol. 19, no. 3, pp. 1974-2002, 2017.
[17] S. Kumar, R. M. Hegde, and N. Trigoni, "Gaussian process regression for fingerprinting based localization," Ad Hoc Networks, vol. 51, pp. 1-10, 2016.
[18] S. He and S.-H. G. Chan, "Wi-Fi fingerprint-based indoor positioning: Recent advances and comparisons," IEEE Communications Surveys & Tutorials, vol. 18, no. 1, pp. 466-
[19] J. Yang, Z. Wang, and X. Zhang, "An iBeacon-based indoor positioning systems for hospitals," International Journal of Smart Home, vol. 9, no. 7, pp. 161-168, 2015.
[20] Y. Liu and G. Nejat, "Robotic urban search and rescue: A survey from the control perspective," Journal of Intelligent & Robotic Systems, vol. 72, no. 2, pp. 147-165, 2013.
BRIEF CURRICULUM VITAE
Full name: Hồ Sỹ Thông
Year of birth: 1988. Place of birth: Daklak. Email: sythongmta@gmail.com
EDUCATION
1. Undergraduate: Military Technical Academy (Học viện Kỹ thuật Quân sự), major in Electrical-Electronics Engineering, 2006-
2. Postgraduate: Ho Chi Minh City University of Technology (Đại học Bách Khoa TP Hồ Chí Minh), major in Electronics Engineering, second intake of 2016.