GRADUATION THESIS
MAJOR: MECHATRONICS ENGINEERING TECHNOLOGY
INSTRUCTOR: HA LE NHU NGOC THANH
HUYNH QUOC AN
PHAM DANG KHOA
STUDY ON POSITION ESTIMATION BY USING SENSOR FUSION ALGORITHM FOR
INTRODUCTION
Brief history of the formation of Robots
The concept of robots began as a dream, rooted in fantasy, aimed at creating machines that mimic human characteristics and assist in various tasks. This vision led to the invention and enhancement of robots over the centuries, with notable advancements such as a robot capable of movement, swimming, and digestion in 1738, and another that could write with goose feathers in 1774. The term "robot" was first introduced in 1921 by Czechoslovak playwright Karel Čapek in his play, marking a significant moment in the evolution of humanoid machines.
The robot age commenced in 1945 with the invention of the first manipulator featuring program memory, created by George Devol. This groundbreaking device was capable of moving repeatedly between designated points, marking a significant advancement in robotics technology.
Robots are increasingly utilized across various professions, ranging from simple tasks to complex operations. Robotics engineering is a multidisciplinary field that integrates advanced developments and applications in numerous areas of science and technology.
Figure 1.1 Relationship between robotics engineering and other fields
Classify robots according to usage needs
Robots are increasingly integrated into various facets of human life and production, primarily categorized into two types: service robots and industrial robots. Their design and functionality are tailored to specific manufacturing applications.
Figure 1.2 Robot arm assembles cars
Purpose of choosing the topic
The rapid advancement of science and technology has led to an increasing demand for robots in various production sectors. Robots offer significant benefits, including enhanced productivity and precision, while effectively taking over dangerous and labor-intensive tasks. Although they previously struggled to handle unique situations like humans, advancements in artificial intelligence are progressively addressing these limitations.
The research aims to explore position, navigation, and control systems for autonomous vehicles utilizing the Sensor Fusion algorithm, specifically highlighting their effectiveness in outdoor environments.
The primary objective is to enhance the Sensor Fusion algorithm tailored for autonomous vehicles, enabling precise perception of their surroundings and efficient navigation in complex outdoor environments. This advancement will empower autonomous vehicles to make well-informed decisions, resulting in improved performance, greater reliability, and heightened safety.
The next objective focuses on the practical implementation and seamless integration of the Sensor Fusion algorithm into real-world autonomous vehicle systems for outdoor scenarios. This aims to tackle technical challenges posed by varying environmental conditions, including different lighting, obstacles, and terrains. By effectively integrating Sensor Fusion, the research seeks to enhance the adaptability and usability of autonomous vehicles, ensuring smooth and efficient operation across diverse environments.
This research aims to enhance autonomous vehicle technology, promoting its successful implementation in diverse environments, both indoors and outdoors. By focusing on safe and autonomous operation, the study has the potential to transform the future of autonomous vehicles, allowing them to navigate various real-world situations, from warehouses to urban streets.
Scope of the topic
After studying the facts and applying the knowledge they have learned, the group decided on the tasks to be done and limited the scope of the topic as follows:
This thesis explores various methods utilizing the Sensor Fusion algorithm to enhance the accuracy and reliability of positioning for a mobile robot model developed in the laboratory. By researching and implementing these techniques, we aim to improve the robot's movement control, ensuring more precise navigation and operational efficiency.
Develop a multi-sensor mobile robot model featuring two guided wheels and four self-adjusting wheels, serving as a foundational platform for research and application of software control algorithms utilizing sensor fusion techniques.
Research Methods
To carry out this UGV robot project, the research team combined theory and testing to complete it.
Regarding theory: apply learned knowledge combined with referencing documents from sources such as books, the internet, etc., about theoretical foundations and structural and safety standards.
This experiment focuses on programming a control system to accurately estimate the position and heading of a robot by integrating data from IMU sensors, GPS, a compass, and a sensor fusion algorithm. Communication with the Pixhawk is facilitated through the MAVLink protocol, allowing for real-time data exchange. The study involves experimenting with and adjusting parameters to identify the optimal settings for the robot's performance.
Research situation in the country and abroad
The multi-sensor mobile robot developed at Hanoi National University utilizes advanced sensor fusion techniques, integrating four modern sensors: rotation axis coding, compass direction, laser measurement, and camera. Research employing an extended Kalman filter has significantly enhanced the accuracy of the robot's positioning during navigation, showcasing the effectiveness of sensor fusion in robotic applications.
Figure 1.4 Multi-sensor mobile robot applying Sensor-Fusion, implemented at Hanoi National University
The Spot smart robot, developed by Boston Dynamics, is designed to assist users in various activities, including entertainment. Equipped with advanced Lidar sensors, Spot effectively scans objects, road surfaces, and its surroundings. Utilizing Sensor Fusion technology, it integrates data from lidar, image, and temperature sensors to navigate and explore geological environments, particularly in mining. Additionally, by incorporating a 360-degree camera, Spot enhances its ability to analyze terrain and environmental conditions, aiding in weather forecasting.
Figure 1.5 Spot Robot of Boston Dynamics
The content will be presented in the following chapters:
CHAPTER 2: THEORETICAL BASIS - presents the theories on which the robot is built.
CHAPTER 3: CALCULATION - SELECTION OF MECHANICAL SYSTEM DESIGN OPTIONS - presents calculations to select the necessary components.
CHAPTER 4: ESTIMATOR AND CONTROLLER DESIGN - builds a controller for the robot.
CHAPTER 5: EXPERIMENTAL RESULTS - EVALUATION - test runs the machine according to the set criteria and then evaluates the quality of the robot.
CHAPTER 6: CONCLUSION AND DEVELOPMENT DIRECTION - reviews what has been done and what has not been done.
THEORETICAL BASIS
Introduction to the Research Subject
2.1.1 Definition of Sensor Fusion algorithm
Different sensor types, or modalities, possess unique strengths and weaknesses. Radars excel at measuring distance and speed, even in adverse weather, but lack the ability to interpret street signs or recognize traffic light colors. Cameras are effective at reading signs and classifying objects like pedestrians and vehicles, yet they can be hindered by dirt, sunlight, rain, snow, or low light conditions. Lidars offer precise object detection but fall short in range and cost-effectiveness compared to cameras and radars.
Sensor fusion integrates data from various radars, lidars, and cameras to create a comprehensive model of a vehicle's surroundings. This enhanced model improves accuracy by leveraging the unique strengths of each sensor. Consequently, vehicle systems can utilize the insights gained from sensor fusion to enable smarter decision-making and actions.
The Sensor Fusion algorithm is a critical technique in computer science and engineering that integrates data from various sensors to enhance the accuracy and depth of information regarding the environment or objects being observed. This project aims to explore the application of the Sensor Fusion algorithm specifically for estimating the position and heading of a robot.
Figure 2.1 Applying the sensor fusion algorithm so the robot can know more clearly about external environmental conditions
Since the early 1990s, research on sensor fusion methods for mobile robot navigation has gained significant interest. Ultrasonic sensors are commonly utilized for indoor operations and object detection; however, they often require supplementation with camera sensors to address their limitations. Many researchers employ Bayesian probability methods alongside mesh maps for data synthesis. The advent of laser sensors has introduced new techniques, where lasers project onto objects at specific angles, and specialized cameras capture the reflected light to calculate positions through triangulation. Despite challenges with glossy and transparent surfaces, integrating 3D visual guidance sensors can mitigate these issues. Some studies focus on enhancing 3D information collection through advanced algorithms like Lacroix processing and Perceptron dynamic triangulation. While much of this research emphasizes global mapping for obstacle avoidance, there is less focus on accurately determining the robot's precise location.
Recent research has focused on enhancing robot state estimation by integrating the extended Kalman filter (EKF) with odometric measurements and various sensors, including laser rangefinders, gyroscopes, compasses, and cameras. This combination aims to improve the reliability and accuracy of robotic navigation and positioning.
1. Sensor Fusion applications in AR/VR and mobile devices:
Sensor Fusion is essential for enhancing virtual reality (VR) and augmented reality (AR) experiences by integrating data from various sensors, including accelerometers, gyroscopes, light sensors, and distance sensors. This technology enables AR/VR devices to precisely monitor users' movements and locations in 3D space, resulting in more immersive and realistic interactive experiences.
Figure 2.2 Sensor Fusion applications in AR/VR and mobile devices
2. Application of Sensor Fusion in automation devices:
Sensor Fusion technology is crucial for automated devices like self-driving cars, as it integrates data from various sensors, including radar, lidar, cameras, and accelerometers. This combination enables vehicles to precisely identify and assess the location, speed, and movement direction of nearby objects, significantly enhancing the safety and performance of automation systems.
Figure 2.3 Application of Sensor Fusion in automation devices
3. Sensor Fusion technology and applications in the field of image recognition and environmental information:
Sensor Fusion is crucial for image recognition and environmental analysis, as it integrates data from visual, audio, and proximity sensors. This technology allows for the analysis of images, object recognition, and the assessment of their location and condition within a 3D environment. Its applications span various fields, including security monitoring, commodity sorting, and machine vision.
Figure 2.4 Apply Sensor Fusion in the field of image recognition and environmental information
Dynamic model
The following table displays the parameters and symbols of the differential drive model which are used to describe the state and motion equations:

    Symbol | Description
    ω      | Angular rate of the robot (rad/s)
    L      | Distance between the two wheels
    v_L    | Linear velocity of the left wheel
    v_R    | Linear velocity of the right wheel
    ω_L    | Angular velocity of the left wheel
    ω_R    | Angular velocity of the right wheel
Figure 2.5 The top-down view of the robot
The top-down view of the robot illustrates both the global and robot coordinate frames, highlighting the angular velocity (ω in rad/s) and linear velocity (v in m/s). The robot's motion in a 2D plane is described by the subsequent state equation.
Generally, the state of the robot at some time t can be described as a vector of three elements: position x, position y, and the orientation θ:

    [x(t), y(t), θ(t)]^T
In robotics, the velocities of the robot (v), the left wheel (v_L), and the right wheel (v_R) are crucial for understanding movement. When the robot turns around the instantaneous center of curvature I_c with turning radius R and angular velocity ω, the velocities of the left and right wheels are:

    v_L = ω (R - L/2)
    v_R = ω (R + L/2)

From the previous equations, we find R and ω:

    R = (L/2) (v_R + v_L) / (v_R - v_L)
    ω = (v_R - v_L) / L

From the figure, the kinematic model of the robot is:

    dx/dt = v cos θ
    dy/dt = v sin θ        (2.2.7)
    dθ/dt = ω

The kinematic equation can be written in matrix form:

    [dx/dt]   [cos θ   0]
    [dy/dt] = [sin θ   0] [v]
    [dθ/dt]   [  0     1] [ω]

where (x, y) is the coordinate of the center point of the robot and the angle θ characterizes the robot's chassis orientation. In this equation, v represents the vehicle's longitudinal velocity and ω is the velocity of rotation. The longitudinal velocity is the average of the two wheel velocities:

    v = (v_R + v_L) / 2        (2.2.10)

Substituting v and ω in terms of the wheel velocities, we finally get:

    dx/dt = (v_R + v_L)/2 · cos θ
    dy/dt = (v_R + v_L)/2 · sin θ
    dθ/dt = (v_R - v_L) / L
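The kinematic model above can be integrated numerically with simple Euler steps. The sketch below is illustrative only; the function name, wheel separation, and time step are assumed values, not the thesis robot's parameters:

```python
import math

def diffdrive_step(x, y, theta, v_l, v_r, wheel_base, dt):
    """One Euler-integration step of the differential-drive kinematic model.

    v_l, v_r : linear wheel velocities (m/s)
    wheel_base : distance L between the two wheels (m)
    """
    v = (v_r + v_l) / 2.0            # longitudinal velocity, eq. (2.2.10)
    omega = (v_r - v_l) / wheel_base # angular rate
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Driving straight: both wheels at 1 m/s for 1 s moves the robot 1 m along x.
x, y, th = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, th = diffdrive_step(x, y, th, 1.0, 1.0, wheel_base=0.4, dt=0.01)
```

Unequal wheel speeds would make θ change each step, curving the path, which is exactly the behavior the matrix equation above describes.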
Odometry is a technique for estimating the position and orientation of robots and vehicles using sensor data. It primarily relies on encoders attached to the vehicle's wheels, which measure wheel rotation. This information allows for the calculation of the distance traveled and changes in orientation.
The velocity of each wheel is the angular velocity of the wheel times the wheel's radius r:

    v_L = ω_L · r
    v_R = ω_R · r

The velocities of the robot can be described in the following matrix form:

    [v]   [ r/2   r/2 ] [ω_R]
    [ω] = [ r/L  -r/L ] [ω_L]
Wheel Encoders: Encoders measure how many times a wheel has turned. By knowing the circumference of the wheel, you can calculate the distance each wheel has traveled.
Most robots are equipped with wheel encoders for navigation; however, these encoders can experience slippage and noise, resulting in inaccurate measurements. Consequently, relying solely on wheel encoder data makes it challenging to determine the robot's exact location and orientation with complete certainty.
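A hedged sketch of dead-reckoning from incremental encoder ticks. The tick resolution, wheel radius, and wheel base below are illustrative assumptions, not the parameters of the thesis robot:

```python
import math

def odometry_update(x, y, theta, ticks_l, ticks_r,
                    ticks_per_rev=360, wheel_radius=0.05, wheel_base=0.4):
    """Update the pose (x, y, theta) from incremental encoder tick counts."""
    # Distance each wheel rolled = circumference * fraction of a revolution.
    d_l = 2 * math.pi * wheel_radius * ticks_l / ticks_per_rev
    d_r = 2 * math.pi * wheel_radius * ticks_r / ticks_per_rev
    d = (d_r + d_l) / 2.0              # distance traveled by the robot center
    dtheta = (d_r - d_l) / wheel_base  # change in heading
    # Midpoint-heading approximation reduces the error of large steps.
    x += d * math.cos(theta + dtheta / 2)
    y += d * math.sin(theta + dtheta / 2)
    return x, y, theta + dtheta

# One full revolution on both wheels: straight motion of one circumference.
x, y, th = odometry_update(0.0, 0.0, 0.0, 360, 360)
```

Because slippage adds the same kind of tick error every update, the pose error of this dead-reckoning accumulates without bound, which is why the thesis fuses it with other sensors.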
PID Controller
The Proportional-Integral-Derivative (PID) controller is a fundamental component in industrial feedback control systems, designed to minimize the error between a desired setpoint and the actual output. By tuning its parameters, the PID controller effectively reduces oscillations and overshooting, leading to a more stable system. The controller consists of three essential components: proportional, integral, and derivative, each playing a crucial role in achieving optimal performance.
Proportional (P): Based on the difference between the desired value and the process variable, it generates a control signal proportional to the current error, aiming to minimize this error.
Integral (I): Calculates the total accumulated error over time. The shorter the sampling time, the greater the impact of the integral adjustment, effectively reducing the steady-state deviation.
Derivative (D): Generates a control signal proportional to the rate of change of the input error, enabling the controller to react swiftly to changes in the input.
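The three terms above combine into a simple discrete-time control law. The following is an illustrative sketch only; the class name, gains, and sampling time are assumptions, not the controller used in the thesis:

```python
class PID:
    """Minimal discrete PID controller sketch."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # I: accumulated error
        derivative = (error - self.prev_error) / self.dt  # D: rate of change
        self.prev_error = error
        # P + I + D contributions summed into one control signal.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.1)
u = pid.update(setpoint=1.0, measurement=0.0)
```

On the first call the error is 1.0, the accumulated integral is 0.1, and the derivative term sees a step of 10.0 per second, so the three contributions are 2.0, 0.1, and 1.0 respectively.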
There are many methods for optimizing a PID controller; here are some popular parameter tuning methods:
Manual tuning is a method where operators manually adjust PID parameters to achieve optimal control behavior. By observing the system's response, they incrementally modify the proportional (P), integral (I), and derivative (D) settings to minimize errors. While this approach is simple, it can be time-consuming and necessitates experienced personnel to ensure effective tuning.
Ziegler-Nichols Method: This is a popular classical method for setting PID parameters.
To tune a control system, start by setting the integral (I) and derivative (D) gains to zero. Gradually increase the proportional (P) gain until stable oscillations are observed in the output; the gain at this point is the ultimate gain Ku, and the oscillation period is Pu. These values are then used to calculate the P, I, and D gains from standard formulas. There are two approaches in this method: the open-loop method (reaction curve method) and the closed-loop method:
Table 2.2 Parameters tuned by the Ziegler-Nichols (closed-loop) method:

    Controller | Kp      | Ki       | Kd
    PID        | 0.6·Ku  | 2·Kp/Pu  | Kp·Pu/8
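Once Ku and Pu have been measured from the sustained oscillation, Table 2.2 can be applied mechanically. The numeric values below are purely illustrative:

```python
def ziegler_nichols_pid(ku, pu):
    """Closed-loop Ziegler-Nichols PID gains from the ultimate gain Ku
    and the oscillation period Pu (per Table 2.2)."""
    kp = 0.6 * ku
    ki = 2 * kp / pu
    kd = kp * pu / 8
    return kp, ki, kd

# Example: oscillation observed at Ku = 10 with period Pu = 2 s.
kp, ki, kd = ziegler_nichols_pid(ku=10.0, pu=2.0)
```

These gains are a starting point rather than an optimum; in practice they are usually refined by further manual tuning.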
PID controllers play a crucial role in motion control systems for robots, CNC machines, and motors by measuring variables such as speed and position to fine-tune inputs. Their primary objective is to ensure that the object adheres to a predetermined path or arrives at a specific destination. By continuously adjusting parameters based on real-time feedback, PID controllers enhance the system's accuracy and stability, enabling swift and precise adaptation to varying operating conditions.
Figure 2.9 Motor control system using PID
PID controllers play a crucial role in process control across various industries, including chemical manufacturing and food processing. By regulating key variables such as temperature, pressure, and flow rate, these controllers help maintain consistent product quality. This regulation ensures that all parameters remain within specified limits, ultimately optimizing production processes and improving overall efficiency.
Figure 2.10 Control liquid level by using PID
Energy control systems are essential for optimizing voltage, power factor, and motor performance. These controllers measure the energy output of the system and intelligently calculate and adjust the necessary inputs to sustain the desired energy levels efficiently.
Navigation
An Inertial Measurement Unit (IMU) is a sophisticated device designed to measure and report both the specific force and the angular rate of an attached object. Typically, an IMU comprises various sensors that work together to provide accurate motion data:
Accelerometers: These measure specific force or acceleration
Magnetometers (optional): These are used to measure the magnetic field around the system
Gyroscopes: These measure the angular rate.
Figure 2.12 Apollo Inertial Measurement Unit
Figure 2.13 The VECTORNAV Inertial Measurement Unit
Integrating a magnetometer and filtering algorithms into an Inertial Measurement Unit (IMU) converts it into an Attitude and Heading Reference System (AHRS), which delivers detailed orientation data by synthesizing measurements from magnetic fields, angular rates, and accelerations.
Accelerometers are instruments designed to measure acceleration, indicating the rate at which an object's speed changes. They provide measurements in units such as meters per second squared (m/s²) or G-forces (g), with one G-force equivalent to approximately 9.8 m/s² on Earth.
Accelerometers are electromechanical sensors that measure various forces, including static forces like gravity and dynamic forces such as movement and vibrations. With advancements in technology and decreasing costs, three-axis accelerometers are increasingly prevalent in various applications.
In an AHRS, accelerations are used to determine the orientation (pitch and roll) of objects.
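A minimal sketch of recovering pitch and roll from a static accelerometer reading, assuming gravity is the only specific force acting on the sensor (i.e. the robot is not accelerating); the function name and axis convention are illustrative assumptions:

```python
import math

def pitch_roll_from_accel(ax, ay, az):
    """Estimate pitch and roll (rad) from one accelerometer sample (m/s^2),
    assuming the only measured specific force is gravity."""
    roll = math.atan2(ay, az)                      # rotation about the x-axis
    pitch = math.atan2(-ax, math.hypot(ay, az))    # rotation about the y-axis
    return pitch, roll

# Level and at rest, the accelerometer reads roughly (0, 0, 9.8) m/s^2.
pitch, roll = pitch_roll_from_accel(0.0, 0.0, 9.8)
```

Note that yaw cannot be recovered this way: rotating about the gravity vector does not change the accelerometer reading, which is why an AHRS adds a magnetometer.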
Gyroscopes, commonly known as gyros, are essential devices used to track and control rotation. MEMS (microelectromechanical system) gyroscopes are compact and cost-effective sensors that measure the spinning speed of an object, referred to as angular velocity or angular rate, which is quantified in degrees per second (°/s) or radians per second (rad/s). These angular rate measurements are integrated to provide an estimate of the system's attitude, making gyros crucial for various applications in navigation and motion sensing.
A triple axis MEMS gyroscope measures rotation across three axes: x, y, and z. While single and dual axis gyros are available, the compact, cost-effective triple axis gyro on a single chip is gaining popularity in various applications.
A magnetometer is a sensor designed to measure both the strength and direction of magnetic fields. Among the various types available, MEMS magnetometers predominantly utilize magnetoresistance technology to detect the surrounding magnetic environment.
Figure 2.16 Simple model of E-Compass
An Attitude and Heading Reference System (AHRS) utilizes a magnetometer to measure yaw, complementing the pitch and roll data provided by an accelerometer. By comparing the magnetic field it measures to the Earth's magnetic field, the magnetometer functions similarly to a traditional compass, ensuring accurate orientation and navigation.
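As a minimal sketch of the e-compass idea, assuming the sensor is held level (a full AHRS would first tilt-compensate the magnetometer reading using pitch and roll); the function name and sign convention are illustrative assumptions:

```python
import math

def heading_from_mag(mx, my):
    """Yaw (rad, in [0, 2*pi)) from the horizontal magnetometer components,
    assuming the sensor is level so no tilt compensation is needed."""
    yaw = math.atan2(-my, mx)
    return yaw % (2 * math.pi)

# Pointing along magnetic north (mx > 0, my = 0) gives a heading of zero.
yaw = heading_from_mag(0.4, 0.0)
```

Real deployments also need hard-iron/soft-iron calibration and the local magnetic declination to convert magnetic heading to true heading.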
An individual inertial sensor captures movement along a single axis, but when three sensors are arranged orthogonally in a triad configuration, they form a 3-axis inertial sensor capable of measuring motion in three dimensions. Combining a 3-axis accelerometer with a 3-axis gyroscope yields a 6-axis system, providing two measurements for each axis and a total of six measurements. An inertial measurement unit (IMU) gathers and outputs data regarding the angular velocity and specific force or acceleration experienced by the attached object, including acceleration relative to the object's frame, angular rotation rates, and occasionally magnetic field data. To interpret the object's orientation from these outputs, users can utilize mathematical methods such as an AHRS estimation algorithm, including the Kalman Filter.
2.4.2.1 Attitude and Heading Reference System
An Attitude and Heading Reference System (AHRS) employs an Inertial Measurement Unit (IMU) to gather data on angular rate, acceleration, and the Earth's magnetic field, enabling the estimation of an object's orientation. Typically, an AHRS consists of a 3-axis gyroscope, a 3-axis accelerometer, and a 3-axis magnetometer, which work together to accurately determine the system's orientation.
An Attitude and Heading Reference System (AHRS) combines measurements from gyroscopes, accelerometers, and magnetometers to estimate a system's orientation. Utilizing algorithms such as Kalman, Mahony, or Madgwick filters, these methods process raw sensor data to generate optimized attitude estimates based on predefined sensor assumptions. By integrating data through an AHRS filter algorithm, the system achieves a high-frequency, drift-free solution for accurate orientation determination.
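As a hedged single-axis illustration of the fusion idea, here is a complementary filter: it high-passes the integrated gyro rate (good short-term, drifts long-term) and low-passes the accelerometer angle (noisy short-term, stable long-term). This is a simpler stand-in for the Kalman/Mahony/Madgwick filters named above, and the blending factor alpha is an assumed value:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One-axis complementary filter step: mostly trust the integrated gyro,
    but pull slowly toward the accelerometer-derived angle."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# With a still gyro, the estimate converges to the accelerometer angle (0.1 rad).
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate=0.0, accel_angle=0.1, dt=0.01)
```

During fast motion the gyro term dominates, so transient accelerometer disturbances barely affect the estimate, while the small (1 - alpha) correction removes gyro drift over time.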
The method developed by Swiss mathematician Leonhard Euler defines the orientation of one reference system relative to another using three distinct rotations, each characterized by a specific angle. In navigation systems, these angles are referred to as {ϕ, θ, ψ}, which correspond to roll, pitch, and yaw, and are typically expressed in radians (rad).
Figure 2.18 Definition of Euler angles as used in this work with left: rotation ψ around the z-axis, middle: rotation θ around the y-axis, and right: rotation ϕ around the x-axis
Quaternions, introduced by Hamilton in 1843, are a widely utilized method for representing orientation in algorithms like Mahony and Madgwick. This 4-dimensional representation of orientation consists of components w, x, y, and z, making quaternions a powerful tool for orientation estimation.
A quaternion is written q = q_w + q_x i + q_y j + q_z k, in which q_w ∈ R is the scalar (real) part, q_v = [q_x, q_y, q_z]^T is the vector (imaginary) part, and {i, j, k} are the three imaginary unit numbers satisfying:

    i² = j² = k² = ijk = -1
    ij = -ji = k,   jk = -kj = i,   ki = -ik = j

A unitary quaternion is a quaternion whose norm is one, ||q|| = 1, i.e. q_w² + q_x² + q_y² + q_z² = 1.

The product of two quaternions can be expressed as an equivalent matrix multiplication:

    p ⊗ q = [ p_w   -p_v^T          ] [ q_w ]
            [ p_v    p_w I + [p_v]× ] [ q_v ]

where I is an identity matrix and [p_v]× is a skew-symmetric (cross-product) matrix [0], [1], [2], [3], [11].
A Direction Cosine Matrix (DCM), also known as a rotation matrix, signifies the transformation between two reference frames, encapsulating the rotations and orientations of objects in space. In terms of the orientation quaternion q, the rotational transformation from the vehicle's Body-frame to the Navigation-frame is:

    C_b^n = [ 1 - 2(q_y² + q_z²)     2(q_x q_y - q_w q_z)   2(q_x q_z + q_w q_y) ]
            [ 2(q_x q_y + q_w q_z)   1 - 2(q_x² + q_z²)     2(q_y q_z - q_w q_x) ]
            [ 2(q_x q_z - q_w q_y)   2(q_y q_z + q_w q_x)   1 - 2(q_x² + q_y²)   ]

The rotational transformation from the vehicle's Navigation-frame to the Body-frame is its transpose:

    C_n^b = (C_b^n)^T
The quaternion q is used as an orientation quaternion in the navigation frame. A point p^n in the navigation frame n and a point p^b in the body frame b are related as follows:

    p^n = q ⊗ p^b ⊗ q*

where the points are written as pure quaternions [0, p]^T and q* denotes the conjugate of q.
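A small sketch of rotating a body-frame point into the navigation frame via the quaternion sandwich product p^n = q ⊗ p^b ⊗ q*; the function names and the 90-degree example values are illustrative:

```python
import math

def quat_mul(p, q):
    """Hamilton product of two quaternions given as [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return [pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw]

def rotate_point(q, p_b):
    """Rotate body-frame point p_b into the navigation frame: q (x) p (x) q*."""
    p = [0.0] + list(p_b)                  # embed the point as a pure quaternion
    q_conj = [q[0], -q[1], -q[2], -q[3]]   # conjugate of a unit q is its inverse
    return quat_mul(quat_mul(q, p), q_conj)[1:]

# A 90-degree rotation about z maps the body x-axis onto the navigation y-axis.
q = [math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4)]
p_n = rotate_point(q, (1.0, 0.0, 0.0))
```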
The quaternion-to-Euler-angles conversion. Using the standard aerospace convention (yaw, pitch, roll), the conversion from a quaternion to Euler angles is:

    ϕ = atan2( 2(q_w q_x + q_y q_z), 1 - 2(q_x² + q_y²) )
    θ = asin( 2(q_w q_y - q_z q_x) )
    ψ = atan2( 2(q_w q_z + q_x q_y), 1 - 2(q_y² + q_z²) )
The Euler-angles-to-quaternion conversion:

    q_w = cos(ϕ/2) cos(θ/2) cos(ψ/2) + sin(ϕ/2) sin(θ/2) sin(ψ/2)
    q_x = sin(ϕ/2) cos(θ/2) cos(ψ/2) - cos(ϕ/2) sin(θ/2) sin(ψ/2)
    q_y = cos(ϕ/2) sin(θ/2) cos(ψ/2) + sin(ϕ/2) cos(θ/2) sin(ψ/2)
    q_z = cos(ϕ/2) cos(θ/2) sin(ψ/2) - sin(ϕ/2) sin(θ/2) cos(ψ/2)
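The two conversions above can be written directly in code. This sketch follows the same standard aerospace (ZYX) convention; the test angles are arbitrary illustrative values:

```python
import math

def euler_to_quat(roll, pitch, yaw):
    """Euler angles (roll phi, pitch theta, yaw psi) to quaternion [w, x, y, z]."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return [cr * cp * cy + sr * sp * sy,
            sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy]

def quat_to_euler(q):
    """Quaternion [w, x, y, z] back to (roll, pitch, yaw) in radians."""
    w, x, y, z = q
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    # Clamp before asin to guard against tiny floating-point overshoot.
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw

# Round-trip check with arbitrary small angles.
angles = (0.1, -0.2, 0.3)
roundtrip = quat_to_euler(euler_to_quat(*angles))
```

The clamp before `asin` matters in practice: accumulated floating-point error can push the argument marginally outside [-1, 1] near pitch = ±90°.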
The attitude kinematic differential equation links the change over time in the orientation of an object to its angular velocity ω (as measured by a gyroscope).

The angular velocity vector ω ∈ R³ can be represented as a pure quaternion in Hamilton space, expressed as ω_q = [0, ω_x, ω_y, ω_z]^T. Utilizing quaternion algebra, the quaternion product can be equivalently represented through matrix multiplication. The expanding operator Ω(ω) is defined as [0], [2], [9], [11]:

    Ω(ω) = [ 0    -ω^T  ]
           [ ω    -[ω]× ]
From equation (2.4.3), we have the following kinematic equation [0], [1], [2], [8], [7], [9], [11], [13], [14]:

    dq/dt = (1/2) Ω(ω) q

where ω is the angular rate (rad/s) and dq/dt is the change of the quaternion over time.
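A minimal sketch of integrating the kinematic equation dq/dt = ½ Ω(ω) q with Euler steps and renormalization; the spin axis, rate, and step size are illustrative assumptions:

```python
import math

def quat_integrate(q, omega, dt):
    """One Euler step of dq/dt = 0.5 * Omega(omega) * q, then renormalize.
    q = [w, x, y, z]; omega = (wx, wy, wz) in rad/s (body frame)."""
    w, x, y, z = q
    wx, wy, wz = omega
    # Expanded form of the quaternion kinematic equation.
    dw = 0.5 * (-wx * x - wy * y - wz * z)
    dx = 0.5 * ( wx * w + wz * y - wy * z)
    dy = 0.5 * ( wy * w - wz * x + wx * z)
    dz = 0.5 * ( wz * w + wy * x - wx * y)
    q = [w + dw * dt, x + dx * dt, y + dy * dt, z + dz * dt]
    n = math.sqrt(sum(c * c for c in q))   # renormalize to keep ||q|| = 1
    return [c / n for c in q]

# Spin at pi/2 rad/s about z for 1 s: yaw should approach 90 degrees.
q = [1.0, 0.0, 0.0, 0.0]
for _ in range(1000):
    q = quat_integrate(q, (0.0, 0.0, math.pi / 2), dt=0.001)
```

Without the renormalization step, numerical error would slowly pull q off the unit sphere, which is one reason AHRS implementations renormalize every update.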
2.4.3 Measurement model for orientation determination [6-11]
Fusion algorithm
The Kalman filter, introduced by Rudolf Kalman in 1960, is a mathematical method that provides accurate estimates of unknown variables over time, particularly in systems affected by random noise. Its applications span various fields, including aerospace and robotics. The filter gained prominence through its implementation in the Apollo Guidance Computer, which was crucial for the successful navigation of the Apollo spacecraft to the moon and marked humanity's first steps on another celestial body.
Figure 2.26 Apollo Guidance Computer integrated with Kalman filter
The Kalman filter is described as a two-step process:
Prediction step: propagate the state and covariance through the dynamic model from one time step to the next:

    x_k^- = F x_{k-1} + G u_{k-1}
    P_k^- = F P_{k-1} F^T + Q_{k-1}

where x_k^- is the predicted state vector at time k and F is the transition matrix in the motion prediction model. The input matrix is denoted by G, while u_{k-1} represents the input vector derived from sensors. P_k^- is the predicted state covariance matrix, which assesses the uncertainty of the predicted state vector and is crucial for calculating the filter gain; it offers a quantitative measure of confidence in the state estimate. Q_{k-1} is the process noise covariance matrix.
Correction step: process a measurement update at each time step where one exists:

    K_k = P_k^- H^T (H P_k^- H^T + R)^(-1)
    v_k = y_k - H x_k^-
    x_k = x_k^- + K_k v_k
    P_k = (I - K_k H) P_k^-

where K_k is the filter gain, H is the transition matrix of the measurement model, and R is the measurement noise covariance. The innovation v_k is the difference between the actual measurement y_k from an observation sensor such as GPS and the measurement predicted from the current state estimate.
To determine the position and velocity of a vehicle moving in one-dimensional space at time k, we need to estimate its state vector, represented as x = [p, v]^T. This estimation allows us to understand the vehicle's movement from the previous time point k-1 to the current time k.
The goal is to utilize a motion model derived from accelerometer data to predict the new state of the vehicle, where position (p) and velocity (v) are the key variables. Due to noise and integral errors in the accelerometer readings over time, we employ a GPS observation model to correct the predictions at each time step. Each component (initial estimate, predicted state, and final corrected state) is treated as a random variable characterized by its mean and covariance. Thus, the Kalman filter serves as a method to integrate data from various sensors, yielding a refined estimate of an unknown state while accounting for uncertainties in both motion and measurements.
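The 1-D position/velocity example above can be sketched in a few lines. This is a toy illustration under assumed noise values (q, r), a constant-velocity motion model, and made-up measurements; none of the numbers come from the thesis:

```python
def kalman_1d(z_measurements, dt=1.0, accel=0.0, q=0.01, r=1.0):
    """Toy 1-D Kalman filter fusing noisy position measurements (e.g. GPS)
    with a constant-acceleration motion model. State x = [p, v]."""
    x = [0.0, 0.0]                       # state estimate [position, velocity]
    P = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    for z in z_measurements:
        # --- prediction: p += v*dt + 0.5*a*dt^2, v += a*dt
        x = [x[0] + x[1] * dt + 0.5 * accel * dt * dt, x[1] + accel * dt]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # --- correction with a position measurement z (H = [1, 0])
        S = P[0][0] + r                  # innovation covariance
        K = [P[0][0] / S, P[1][0] / S]   # Kalman gain
        v = z - x[0]                     # innovation
        x = [x[0] + K[0] * v, x[1] + K[1] * v]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x

# A stationary vehicle measured repeatedly at position 5:
# the estimate converges toward p = 5, v = 0.
est = kalman_1d([5.0] * 50)
```

Note how the gain K weighs the innovation by how confident the filter is in its prediction (P) versus the measurement (r), which is exactly the fusion idea described above.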
The Kalman filter is designed mainly for linear systems. In reality, however, truly linear systems rarely exist; even a simple system like a resistor with a voltage applied isn't truly linear.
Figure 2.28 The correlation between current and voltage of a resistor
The standard Kalman filter is an effective estimation tool, but it struggles with nonlinear systems. The extended Kalman filter (EKF) addresses this limitation by adapting the standard algorithm through linearization. This process relies on the principle that a nonlinear function can be approximated as linear in a small vicinity around a specific operating point, utilizing the first-order terms from a Taylor series expansion of the nonlinear function.
Figure 2.29 Slope of tangent line
Linearizing a system involves selecting an operating point, denoted as a, and finding a linear approximation of the nonlinear function in the vicinity of a. In two dimensions, this corresponds to determining the tangent line to the function f(x) at x = a, given by the first-order Taylor series expansion:

    f(x) ≈ f(a) + f'(a)(x - a)
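The first-order Taylor linearization can be checked numerically. The function, operating point, and finite-difference step below are illustrative assumptions:

```python
def linearize(f, a, h=1e-6):
    """Return the first-order Taylor approximation of f around point a:
    f(x) ~= f(a) + f'(a)(x - a), with f'(a) from a central difference."""
    fprime = (f(a + h) - f(a - h)) / (2 * h)
    return lambda x: f(a) + fprime * (x - a)

f = lambda x: x * x            # a simple nonlinear function
approx = linearize(f, a=2.0)   # tangent line to x^2 at x = 2
```

Near the operating point the tangent tracks f closely (approx(2.1) ≈ 4.4 versus f(2.1) = 4.41), but the error grows quadratically with distance from a, which is why the EKF re-linearizes around the latest state estimate at every step.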
From (2.5.7), the motion prediction and measurement models are linearized about the current estimate, so the transition matrices F and H are now Jacobian matrices:

    F = ∂f/∂x evaluated at x_{k-1},    H = ∂h/∂x evaluated at x_k^-

where the Jacobian of a vector function f: R^n → R^m is

    J = [ ∂f_1/∂x_1  ...  ∂f_1/∂x_n ]
        [     ...    ...      ...   ]
        [ ∂f_m/∂x_1  ...  ∂f_m/∂x_n ]
The primary distinction between the Extended Kalman Filter (EKF) and the linear Kalman filter lies in the replacement of the state transition matrix with a Jacobian matrix. This Jacobian matrix comprises the first-order partial derivatives of the equations that define the nonlinear system model.
The motion model can be written as:

    x_k = f(x_{k-1}, u_{k-1}) + w_k

The predicted measurement model can be written as:

    y_k = h(x_k) + n_k

where w_k and n_k are the process and measurement noise, respectively.
From the general motion model equation (2.5.11), we have the prediction model:

    x_k^- = f(x_{k-1}, u_{k-1})
    P_k^- = F P_{k-1} F^T + Q

where x_k^- is the predicted state vector at time k, x_{k-1} is the initial estimate of the state vector at time k-1, F is the Jacobian matrix serving as the transition matrix, P_k^- is the predicted state covariance matrix at time k, which estimates the uncertainty in the predicted state vector, P_{k-1} is the initial estimate covariance at time k-1, and Q is the process noise covariance matrix.
Based on the measurement model (2.5.9), the correction step is:

    K_k = P_k^- H^T (H P_k^- H^T + R)^(-1)
    v_k = y_k - h(x_k^-)
    x_k = x_k^- + K_k v_k
    P_k = (I - K_k H) P_k^-

where h(x_k^-) is the measurement predicted from the current state estimate x_k^-, H is the Jacobian matrix of the measurement model evaluated at the predicted state, K_k is the filter gain at time k, and y_k is the actual measurement vector from the observation sensor at time k. The innovation v_k is the difference between the predicted measurement and the actual measurement. The last two equations update the state estimate vector and the state covariance at time k.
Ardupilot overview
ArduPilot is an open-source, unmanned-vehicle autopilot software suite capable of controlling drones, autonomous ground vehicles, boats, etc.
ArduPilot, initially created by enthusiasts for controlling model airplanes and autonomous vehicles, has transformed into a robust and comprehensive autopilot system widely utilized by industries, research organizations, and hobbyists alike.
Figure 2.30 A Rover with ArduPilot
ArduPilot's initial version was designed exclusively for fixed-wing aircraft, utilizing thermoelectric sensors to gauge the horizon's position by measuring temperature differences between the sky and ground. This system underwent significant enhancements, transitioning to an Inertial Measurement Unit (IMU) that integrates accelerometers, gyroscopes, and magnetometers for improved performance.
Between 2013 and 2014, ArduPilot expanded its compatibility to various hardware platforms and operating systems beyond the original Atmel Arduino architecture. This evolution was marked by the commercial launch of the Pixhawk hardware flight controller, developed collaboratively by PX4, 3DRobotics, and the ArduPilot team. Additionally, it included support for Parrot's Bebop2 and Linux-based controllers like the Raspberry Pi-based NAVIO2 and the BeagleBone-based ErleBrain. A significant milestone during this period was the first flight of a Linux-based aircraft in mid-2014.
As of 2018, ArduPilot code development has advanced significantly, focusing on seamless integration and communication with robust companion computers for enhanced autonomous navigation. The platform now supports various VTOL architectures, incorporates ROS integration, and extends its capabilities to gliders and submarines, ensuring a comprehensive solution for diverse vehicle types.
ArduPilot offers many features, including those common to all of the following vehicles:
Fully autonomous, semi-autonomous and fully manual flight modes, programmable missions with 3D waypoints, optional geofencing
Stabilization options to negate the need for a third party co-pilot
Simulation with a variety of simulators, including ArduPilot SITL
Support for a wide array of navigation sensors, including various RTK GPS models, traditional L1 GPS units, barometers, magnetometers, laser and sonar rangefinders, optical flow sensors, ADS-B transponders, infrared sensors, airspeed sensors, and computer vision and motion capture devices
Sensor communication via SPI, I²C, CAN Bus, Serial communication, SMBus
Failsafes for loss of radio contact, loss of GPS, breaching a predefined boundary, and minimum battery power level
Support for navigation in GPS denied environments, with vision-based positioning, optical flow, SLAM, Ultra Wide Band positioning
Support for actuators such as parachutes and magnetic grippers
Support for brushless and brushed motors
Rich documentation through ArduPilot wiki
Support and discussion through ArduPilot discourse forum, Gitter chat channels, GitHub, Facebook
Figure 2 31 Flowchart structure for the Ardupilot autopilot
CALCULATION - SELECTION OF MECHANICAL SYSTEM DESIGN OPTIONS
Technical requirements
After evaluating the outdoor environment for robotic applications and establishing project criteria for speed, flexibility, and safety, the team determined that the robot's mechanical design needed to be robust yet compact to navigate obstacles effectively. Consequently, they decided to design the robot with the following essential parameters in mind.
The maximum weight of the Robot is 15kg
The maximum speed the robot can achieve is 1.5m/s
Select movement navigation structure
The moving navigation mechanism is crucial for a robot's movement, and selecting the right structure depends on usage needs and the operating environment. The team considered the following structures:
Option 1: Move with 4-wheel drive
Figure 3 1 Moving mechanism with four motors
Easy to manufacture and control
Make sure the vehicle is balanced when carrying heavy loads
Option 2: Three-wheel drive with a single motor that both steers and drives
Figure 3 2 Moving mechanism with three wheels
Easily unbalanced under heavy loads
Difficult to control
Cannot move in both directions
Option 3: Move with four wheels (two front wheels, two rear wheels).
Figure 3 3 Moving mechanism with four motors
Keeps good balance when carrying heavy loads
Difficult to move in two directions
Option 4: Move on six wheels with 2 guiding and driving wheels and 4 self-aligning wheels
Figure 3 4 Moving mechanism with 2 motors
Keeps good balance under heavy load
Low cost
The six wheels may not all stay in contact with uneven ground
To effectively test the Sensor Fusion algorithm, the team selected option four, which features a six-wheeled design comprising two guiding and driving wheels along with four self-aligning wheels.
Controlling the direction of movement: with the movement direction mechanism selected above, we combine the rotation of the wheels to create the direction of movement for the vehicle
The combination of rotation speed and rotation direction of the two wheels creates the desired direction of vehicle movement:
Figure 3 5 The combination of rotation between the two wheels creates the direction of movement for the vehicle
Forward: wheels 1 and 2 simultaneously rotate forward in the same direction, at the same speed
Backward: wheels 1 and 2 simultaneously rotate backward in the same direction, at the same speed
To navigate obliquely to the right, wheels 1 and 2 advance simultaneously, with wheel 1 operating at a higher speed than wheel 2, resulting in a speed differential that varies based on the cornering angle.
To navigate obliquely to the left, both wheels 1 and 2 advance simultaneously, with wheel 1 operating at a slower speed than wheel 2; the speed difference between the two wheels depends on the cornering angle.
Rotate in place clockwise: wheels 1 and 2 move simultaneously at the same speed, but wheel 1 rotates forward and wheel 2 rotates backward
Rotate in place counterclockwise: wheels 1 and 2 move simultaneously at the same speed, but wheel 1 rotates backward and wheel 2 rotates forward
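The wheel combinations listed above are the standard differential-drive mapping from a desired body velocity to the two driving wheels. A minimal sketch, assuming a 0.4 m wheel base (a value not stated in the text) and the 50 mm wheel radius:

```python
def wheel_speeds(v, omega, wheel_base=0.4, wheel_radius=0.05):
    """Differential-drive mapping (illustrative sketch).

    v            : forward speed (m/s), positive = forward
    omega        : yaw rate (rad/s)
    wheel_base   : distance between the two driving wheels (assumed, m)
    wheel_radius : driving wheel radius (100 mm wheel -> 0.05 m)
    Returns angular speeds (rad/s) of wheel 1 and wheel 2.
    Equal speeds -> straight line; unequal -> oblique motion;
    opposite signs -> rotate in place.
    """
    v1 = v + omega * wheel_base / 2.0
    v2 = v - omega * wheel_base / 2.0
    return v1 / wheel_radius, v2 / wheel_radius
```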
Transmission
After the design plan has been drawn up, the automatic robot must meet the set requirements including:
The sturdy robot structure ensures safety when moving
Simple design, high stability and aesthetics
From the above requirements, our team offers a car model with the following parameters:
Table 3 1 Parameters set out when designing:
The robot is engineered for outdoor use, necessitating adaptability to navigate unexpected obstacles and pedestrian traffic. Its operating mechanism must allow for seamless movements, including rotation and turns to the left and right, ensuring consistent performance in dynamic environments.
We need to choose the right mechanism to adjust the speed and rotation of the motor so that the robot can run more optimally
There are currently three types of drives: electric drives, mechanical drives, and hydraulic/pneumatic drives
The proposed solution for the outdoor vehicle model is a mechanical transmission, which transmits mechanical energy from the motor to the various robot components. This mechanism plays a crucial role in altering velocity, torque, force, and motion, ensuring optimal performance in outdoor environments.
In practice, three common types of motion transmission mechanisms are belt drive, chain drive, and gear drive. Each offers unique advantages that suit different applications.
A belt drive operates on the principle of friction between a belt and a pulley, allowing torque and speed to be transmitted over longer distances than a gear reducer allows. Its transmission ratio can be adjusted easily by varying the pulley diameters. The system comprises three essential components: the driving wheel, the driven wheel, and the belt.
Chain drives consist of a chain with leading and driven sprockets, transmitting motion and load from the driving shaft to the driven shaft through the engagement of chain links with sprocket teeth. These systems typically feature parallel shafts, and multiple driven sprockets can be integrated into the transmission.
A gear drive transmits rotation between shafts, with the gear teeth interlocking tightly to prevent slipping. This mechanism can transmit both small forces, as in precision engineering, and large forces, such as those required in rolling mills, while maintaining a precise transmission ratio. Gear systems also require minimal shaft clearance and can modify speed, torque, or direction of rotation.
Based on the above analysis, we have evaluated the appropriate selection criteria according to the table below:
Table 3 2 Criteria for evaluating transmitters in the model
Criterion | Belt drive | Chain drive | Gear drive
Operating distance | Long and wide | Medium | Short
Transmission ratio | Flexible | Fixed | Large
Force applied to the shaft | - | - | -
Structure | Simple, easy to replace | - | Easy, but requires high precision
Quietness | High, no noise | Noisy | Very noisy
Maintenance | Easy, cost-effective | Check sprockets and chain links | Check the smoothness of the system
Despite this analysis, the team chose to connect the shaft directly to the wheel mechanism.
Figure 3 10 The transmission that we have proposed
A direct transmission from the axle to the wheel keeps the wheels precisely aligned, and compared with the transmissions analyzed above it is more affordable and simpler to assemble. The design also incorporates additional shock absorbers, ensuring both functionality and aesthetic appeal.
Choose a motor for the robot
For outdoor settings with high foot traffic, precision is essential, while the required moving speed can be moderate and the equipment relatively compact.
Initial parameters set for the robot:
Estimated maximum weight of robot: m = 15 (kg)
Maximum speed that can be achieved: v = 1.5 (m/s)
Wheel diameter is d = 100 mm (aluminum wheel weight 220 grams, width 30mm, shaft diameter 8mm)
Forces that can act on the robot when moving in the working environment:
Figure 3 12 Forces acting on the Robot
With possible parameters in the reference system:
θ: Tilt of the robot compared to the horizontal (x axis)
FN: Normal force of the ground acting on the robot (N)
Fms: Friction force between wheel and ground (N)
Because the Robot is moved outdoors but is tested mainly on flat terrain, the slope is usually negligible
Acceleration of the vehicle when changing state from v0 = 0 to vmax = 1.5 m/s in 3 s:
We have the formula for velocity: v = v0 + at (3.2), which gives a = (1.5 − 0)/3 = 0.5 m/s²
According to Newton's second law, combined with the velocity equation above, we have the total forces acting on the robot according to the following equation:
The traction force required for the robot to reach the required velocity is:
The friction coefficient of the wheel μ = 0.01
The acceleration due to gravity is 9.8 m/s2
We have the linear velocity calculated using the following formula: v = r*ω (3.7)
Maximum angular speed of the wheel:
With the motor power necessary for the vehicle to operate as required:
So the power required for each motor is:
* With η = 0.99 the gearbox efficiency
With the parameters calculated from the initial requirements, torque T = 0.3995 N·m, power P = 6.11 W, and motor speed after the gearbox N = 293 rev/min, we choose a planetary gear motor: N = 320 rpm, V = 12 V, P = 6.11 W, with a 12-pulse encoder
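The motor sizing above can be reproduced approximately with the sketch below. The results differ slightly from the thesis values (T = 0.3995 N·m, P = 6.11 W, N = 293 rev/min), presumably because some intermediate assumptions such as safety factors are not shown in the extracted text:

```python
import math

# Design inputs taken from the thesis
m = 15.0        # robot mass, kg
v_max = 1.5     # target speed, m/s
t_acc = 3.0     # time to reach v_max, s
mu = 0.01       # rolling friction coefficient
g = 9.8         # gravitational acceleration, m/s^2
r = 0.05        # wheel radius (100 mm diameter), m
eta = 0.99      # gearbox efficiency
n_motors = 2    # two driving wheels

a = v_max / t_acc                 # required acceleration = 0.5 m/s^2
F_ms = mu * m * g                 # rolling friction = 1.47 N
F = m * a + F_ms                  # total traction force, N
P_total = F * v_max / eta         # total mechanical power, W
P_per_motor = P_total / n_motors  # power each motor must deliver, W
omega = v_max / r                 # wheel angular speed at v_max, rad/s
rpm = omega * 60 / (2 * math.pi)  # about 286 rpm at full speed
T_per_motor = (F / n_motors) * r  # traction torque per wheel, N*m
```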
Motor type DC Planet motor with gearbox
Encoder: 12 pulses per revolution, 2 channels (A and B), 5 V DC power
Figure 3 13 Connection wire of DC Servo motor
Figure 3 15 Cross-sectional drawing of DC Servo Motor
Design of Robot motion transmission system
Designed wheel diameter 100 mm, mounting shaft 8 mm
Constructed from lightweight cast aluminum alloy, this vehicle features a robust plating that enhances both its durability and aesthetic appeal.
The vehicle's tire is covered with a layer of elastic rubber, which functions as a shock absorber tire when the vehicle is in motion
Crafted from lightweight cast aluminum alloy, this product features a durable plating that enhances both its longevity and aesthetic appeal, ensuring an attractive and resilient vehicle design.
The connection between the motor and the wheel is damped by a pair of springs
Figure 3 17 Robot's main wheel connection assembly
Designed with dimensions of 100x100x58x3mm
Made from 304 stainless steel, and covered with a layer of plating that both increases vehicle durability and helps the vehicle have high aesthetics
This is the main load-bearing part of the motion transmission system
3.5.2 Structure of motion transmission system
Installed on both sides of the robot frame
Figure 3 20 Cross-section of the transmission system
Figure 3 21 Analysis of applied forces
Operating principle: this is the main part that transmits motion to the entire system
Using the motor to drive directly through the wheel:
Advantage: Easy and suitable calculation structure
Disadvantage: the direct coupling is vulnerable to shock loads that could disrupt motion transmission; however, the pair of shock absorbers mitigates the risk of shock during movement.
Check the durability of the shaft
Designed on a model with shaft diameter d = ϕ8 mm, connected directly from the motor to the wheel:
Figure 3 23 The motor connects to the wheel with a slider d = 8mm
The shaft material is chromium-plated C45 steel, with an endurance limit σb of about 610 MPa, a yield limit σch = 340 MPa, and an allowable torsional stress τ = 23 MPa
Given C45 carbon steel's endurance limit of 610 N/mm², we get the allowable stress value [σ] = 48 N/mm²
Corresponding to the allowable safety factor:
The shaft is 50 mm long and connected by a rigid coupling
The total load of 15 kg is divided equally among the 6 wheels
With the above parameters, the preliminary shaft diameter is calculated:
With τ taken as the smallest value, 20 MPa
Calculate the preliminary diameter of the shaft
The vehicle features a balanced design with 4 lifting wheels and 2 steering wheels, strategically positioned on its base. This configuration keeps the load centered, allowing an even distribution of force across all 6 wheels.
Assuming the vehicle is carrying a load weighing 15kg, the force acting on the vehicle will be:
With 6 wheels in contact with the floor (N = 6), the friction force of the vehicle is calculated:
Shaft moment balance equation at A (motor):
Force balance equation in the x direction:
⇔ F_AX = F_OX − F_ms = 5.145 − 1.47 = 3.675 (N) (3.20)
Force balance equation in the y direction:
⇔ F_AY = F_OY − F_BY = 85.75 − 24.5 = 61.25 (N) (3.21)
Bending moment on the motor shaft at the coupling:
Torque: T = F_ms · d_dc/2 = 1.47 × 0.05 = 0.0735 N·m = 73.5 N·mm (3.25)
Bending and torsion resistance conditions:
+ Testing of flexural strength conditions:
F: is the force acting on the motor shaft
A: total area of the motor shaft
=> Meets allowable stress requirements with motor shaft d=8 (mm)
+ Preliminary calculation of the shaft: Based on the allowable torsional stress of the material: d ≥ formula (3.15)
(Based on formula 10.9 in the book "Tính toán thiết kế hệ dẩn động cơ khí - Trịnh Chất -
To meet the allowable torsional stress requirements with the proposed requirements, we need to choose a shaft with diameter d ≥7.88mm
Therefore, choose a motor shaft with d = 8mm that meets the load-bearing requirements under the set conditions
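The torsional sizing can be checked with the standard formula d ≥ (T / (0.2 τ))^(1/3). The design torque is not stated explicitly in the extracted text, so the sketch below treats it as a parameter; the thesis result d ≥ 7.88 mm with τ = 20 MPa implies a design torque of roughly 1.96 N·m:

```python
def min_shaft_diameter(T_nmm, tau_mpa):
    """Preliminary shaft diameter (mm) from allowable torsional stress.

    Standard sizing formula d >= (T / (0.2 * tau))^(1/3),
    with T in N*mm and tau in MPa (N/mm^2).
    """
    return (T_nmm / (0.2 * tau_mpa)) ** (1.0 / 3.0)

# With tau = 20 MPa, the thesis result d >= 7.88 mm corresponds to a
# design torque of about 0.2 * 20 * 7.88**3 ≈ 1957 N*mm (≈ 1.96 N*m).
```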
Other components
After determining the conditions for the shaft connecting to the d = 8 mm motor, to connect the two moving shafts, choose a flexible coupling with 8x8 holes
Comparing the two coupling types during operation, a rigid coupling transmits any misalignment of the rotation shafts, leading to errors during operation
Inner diameter of both ends: 8-8 mm
The flexible coupling includes 1 coupling body and 4 shaft-fixing bolts; the bolts have 2 mm hexagonal heads
ESTIMATOR AND CONTROLLER DESIGN
Position Controller Design
As mentioned in the previous chapter (2.2.1), the motion model of the robot in the 2-D plane is:
The position controller, which drives the robot to a goal in the plane, takes the desired x, desired y, and desired heading, computed from the current and desired vehicle positions, as setpoints
The position error (x, y) measures the distance between the desired and current vehicle positions and serves as the input for controlling the vehicle's velocity, while the heading error is used to regulate the vehicle's yaw rate
Figure 4 1 Position controller with EKF
Position error: e_position = sqrt((x_goal − x_current)² + (y_goal − y_current)²)
Desired heading: heading_goal = arctan2(y_goal − y_current, x_goal − x_current)
The error of lateral control: e_heading = heading_goal − heading_current
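A minimal sketch of these error computations, with the heading error wrapped to [−π, π] (a common practical refinement so the robot always turns the short way):

```python
import math

def position_errors(x, y, heading, x_goal, y_goal):
    """Errors used by the position controller (illustrative sketch).

    Returns (distance error, heading error); the heading error is
    wrapped to [-pi, pi].
    """
    e_pos = math.hypot(x_goal - x, y_goal - y)          # Euclidean distance
    heading_des = math.atan2(y_goal - y, x_goal - x)    # desired heading
    e_heading = heading_des - heading
    e_heading = math.atan2(math.sin(e_heading), math.cos(e_heading))  # wrap
    return e_pos, e_heading
```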
Estimator Design
Based on equation (2.4.11), we have the following quaternion-based kinematic equation
Based on the 1st-order Taylor expansion series mentioned in chapter (2.5.7):
And from (4.1) We have the dynamic model of quaternion:
The equation can be written in form of equation (2.5.11)
This is the general form of the predicted state vector at time t+1
The updated quaternion at t+1, q_{t+1}, is equivalent to the predicted state vector x_k at time k
From the general form of equation (2.5.11), or (4.3):
The Jacobian matrix of the process model with respect to the quaternion
The Jacobian matrix of the process model with respect to the angular rate
The process noise covariance of the gyroscope: [5-14]
The corrected state is computed as:
where y_{t+1} is the corrected measurement from the sensors
We start by defining the corrected measurement vector as:
Alternatively, because in this case we only estimate the quaternion for heading, the linear accelerations a_x^b and a_y^b can be considered 0, so we can write the corrected measurement vector as:
These are the values obtained from the sensors, where g is the tri-axial accelerometer measurement in m/s² and m is the tri-axial magnetometer sample in milli-Gauss
The quaternion-based rotation matrix that transforms a vector from the NED navigation frame to the body frame is written as: [6-9]
As mentioned in the Theoretical Foundation chapter, (2.4.14) and (2.4.15), the gravitational acceleration vector in the global NED frame is normally defined as: [7-11]
The geomagnetic field vector as:
For the acceleration and magnetic field, only their directions are needed, not their magnitudes
To compare these vectors against their corresponding observations, we must also normalize the sensors’ measurements
The expected gravitational acceleration in the sensor frame can be estimated from the orientation as described in (2.4.14): â^b = C_n^b g^n
Similarly, from (2.4.15), the expected magnetic field in the sensor frame: m̂^b = C_n^b m^n
The measurement model h and its Jacobian H can be used to correct the predicted model. The measurement model is directly defined with these transformations as
The Jacobian matrix of the measurement model with respect to the state vector:
The measurement noise covariance matrix is derived from the unique noise characteristics of each sensor Given that the sensor noises are assumed to be uncorrelated and consistent across all directions, the resulting matrix is structured as a diagonal matrix.
Apply correction phase (4.8) to update the corrected state vector
Convert the updated quaternion after estimation to the vehicle orientation, based on equation (2.4.8)
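The quaternion-to-heading conversion can be sketched as follows, assuming the standard [w, x, y, z] quaternion convention and ZYX Euler angles (only yaw is shown, since the estimator above uses the quaternion mainly for heading):

```python
import math

def quat_to_yaw(q):
    """Heading (yaw) angle from a unit quaternion q = [w, x, y, z].

    Standard ZYX Euler extraction of the yaw component.
    """
    w, x, y, z = q
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
```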
Figure 4 3 Heading angle from Ardupilot EKF
Figure 4 4 Heading angle from custom EKF
Figure 4 5 The INS/GPS fusion
Figure 4 6 The quaternion kinematics model
From equation (4.1), the dynamic model of quaternion is
From the quaternion, the INS motion model is: [1,11,14-18]
The initial estimate state vector to estimate
From equation (2.5.13) and INS motion model:
is the general form of predicted state vector
From the results of equations (4.16) and (4.19), the predicted state vector x_{t+1}
From the previous quaternion attitude model and the INS motion model, the control input vector is u_t = [a_x^b, a_y^b, a_z^b, ω_x^b, ω_y^b, ω_z^b]^T (4.21)
The Jacobian of the predicted state vector with respect to positions and quaternion
The Jacobian of the predicted state vector with respect to the input data
The corrected state is computed as:
where y_{t+1} is the corrected measurement from the sensors
We start by defining the corrected measurement vector as:
The predicted measurement model of the position
The predicted position-and-attitude measurement model of the vehicle, y_{t+1}, comprising position p, acceleration a, and magnetic field m
The Jacobian matrix of the measurement model with respect to the predicted state vector:
Figure 4 7 Result of the simulation
Figure 4 8 Results of the simulation
Figure 4 9 Results of the simulation
EXPERIMENTAL RESULTS - EVALUATION
Simulation with SITL result
SITL (Software In The Loop) with MavProxy is an essential simulation environment that enables developers and researchers to test and develop rover control algorithms without requiring physical hardware. SITL simulates rover behavior, while MavProxy serves as a powerful command-line ground control station, ensuring effective communication between the simulated rover and the operator. It offers real-time data visualization and control, enhancing the development process for rover systems.
Figure 5 1 The geodetic waypoint in simulation
In this simulation environment, users can emulate GPS, IMU, and Magnetometer data while utilizing the Extended Kalman Filter (EKF) for accurate position and orientation estimation The rover's position is available in geodetic coordinates (longitude, latitude, altitude) and can be transformed into the NED (North, East, Down) frame for detailed analysis Additionally, the simulation allows for the integration of waypoints, facilitating the evaluation of navigation and path-planning algorithms.
Figure 5 3 GPS in NED Coordinates Frame
GPS data is regarded as accurate and reliable in this setting, with the GPS position firmly aligned along the waypoint line. Both the waypoints and the GPS deliver positional information in the geodetic frame, which includes longitude, latitude, and altitude. Once the data is extracted from the binary file, it is transformed into the NED (North, East, Down) frame, where the x-axis indicates north and the y-axis indicates east.
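The geodetic-to-NED conversion mentioned above can be sketched with a flat-earth approximation, which is sufficient over the few hundred metres covered in these experiments (a full conversion would go through ECEF coordinates):

```python
import math

def geodetic_to_ned(lat, lon, lat0, lon0, R_earth=6378137.0):
    """Small-area geodetic -> local NED conversion (flat-earth sketch).

    lat/lon and the reference lat0/lon0 are in degrees; altitude is
    ignored for a ground rover. R_earth is the WGS-84 equatorial radius.
    """
    d_lat = math.radians(lat - lat0)
    d_lon = math.radians(lon - lon0)
    north = d_lat * R_earth
    east = d_lon * R_earth * math.cos(math.radians(lat0))
    return north, east
```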
In addition to raw GPS data, the SITL simulation offers raw IMU and raw magnetometer data, as well as EKF-derived position and rover orientation information, all conveniently stored in a binary file for easy extraction and analysis.
Figure 5 5 Yaw heading by gyroscope
This vehicle's heading measurement is obtained by integrating the angular rate data from a gyroscope around the z-axis. However, the gyroscope drifts, and integrating angular rates over time to determine heading accumulates errors, potentially leading to inaccuracies in the heading measurement
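The drift behavior described above can be illustrated with a simple dead-reckoning sketch; the bias value in the comment is hypothetical, chosen only for illustration:

```python
def integrate_yaw(yaw0, gyro_z, dt, bias=0.0):
    """Dead-reckon heading by integrating gyro z-axis rates (sketch).

    Any constant bias left in gyro_z accumulates linearly as heading
    drift, which is why the gyro alone cannot provide long-term heading.
    """
    yaw = yaw0
    for w in gyro_z:
        yaw += (w - bias) * dt
    return yaw

# A stationary gyro with an uncorrected 0.01 rad/s bias, sampled at
# 100 Hz, drifts 0.6 rad after one minute of integration.
```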
2) Heading angle from raw magnetic data:
Figure 5 7 Yaw heading by magnetic field
This vehicle's heading measurement is derived from the raw magnetic field readings of a compass, which is commonly used in navigation to gauge magnetic fields. However, magnetic measurements can be influenced by environmental disturbances, leading to potential errors in heading accuracy.
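For a level sensor, the magnetic heading computation reduces to a single atan2; the NED convention below is an assumption, and a tilted sensor would additionally need roll/pitch compensation and a declination correction for true north:

```python
import math

def mag_heading(mx, my):
    """Magnetic heading from the horizontal magnetometer axes (sketch).

    Assumes the sensor is level (roll = pitch = 0) and NED axes;
    the result points toward magnetic, not true, north.
    """
    return math.atan2(-my, mx)
```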
3) Heading angle from AHRS Mahony:
Figure 5 9 Yaw heading by AHRS Mahony
The AHRS Mahony algorithm slightly improves heading estimation, but it still needs further refinement for better accuracy
Figure 5 11 Yaw heading by EKF
The heading results from the SITL MavProxy ArduPilot EKF estimation are excellent and will be used as a benchmark for our custom EKF development
Figure 5 13 Yaw heading by EKF
This result showcases our custom-developed EKF sensor fusion algorithm, utilizing raw data from virtual magnetometers, accelerometers, and gyroscopes extracted from the Mavproxy SITL simulation
5.1.2 Position and orientation estimation:
Figure 5 15 Comparison between the heading estimation of the custom EKF and the Ardupilot EKF
This result demonstrates our custom-developed sensor fusion algorithm that combines data from magnetometers, gyroscopes, accelerometers, and GPS
Figure 5 16 Comparison between the position estimation of the custom EKF and the Ardupilot EKF
Figure 5 17 Comparison between the position estimation of the custom EKF and the Ardupilot EKF
Because the GPS in the virtual environment is noise-free, we generate some random noise and add it to the GPS data
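Noise injection can be done with a simple Gaussian model; the 1.5 m standard deviation below is an assumed value for illustration, not the one used in the thesis:

```python
import random

def add_gps_noise(positions, sigma_m=1.5, seed=42):
    """Corrupt noise-free simulated GPS with Gaussian noise (sketch).

    positions : list of (north, east) tuples in metres
    sigma_m   : assumed 1-sigma horizontal GPS error, metres
    seed      : fixed seed so the corrupted track is reproducible
    """
    rng = random.Random(seed)
    return [(n + rng.gauss(0.0, sigma_m), e + rng.gauss(0.0, sigma_m))
            for n, e in positions]
```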
Figure 5 19 Noise GPS with EKF position estimation
Experiment with real raw data result
5.2.1 Raw data from t-log file
The robot's path data, which follows its movement toward set waypoints, is carefully saved in a tlog file after the robot finishes its planned journey
Figure 5 20 Rover image from satellite
Let the rover move in auto mode:
Figure 5 21 Set waypoint for rover
Figure 5 22 Set waypoint for rover
Figure 5 23 Set waypoint for rover
The raw IMU and GPS data are extracted from the tlog to a csv file:
Figure 5 25 Waypoint and raw GPS
Convert raw GPS geodetic coordinate to NED:
Figure 5 26 Raw GPS geodetic coordinate to NED
5.2.2 Raw data from data flash bin file
Data from a Pixhawk dataflash binary file offers a more comprehensive and detailed record of a mission than tlog data. Unlike tlog files, which may miss critical details due to transmission issues, dataflash files store all data directly on the vehicle's onboard storage, ensuring that every piece of information is captured in real time. This reliability makes dataflash files essential for thorough analysis and troubleshooting, as they deliver a clearer and more complete perspective on the vehicle's performance throughout its mission.
The rover's journey involved completing over two full rounds, which caused the same positional data to be recorded multiple times, leading to numerous instances of overwritten position information.
Figure 5 33 Waypoint and raw GPS
Convert the GPS geodetic coordinates to the local NED frame position:
Figure 5 34 GPS in NED Frame
We utilize actual experimental raw IMU and GPS data alongside our custom Extended Kalman Filter (EKF) fusion to accurately estimate both position and orientation The performance of our method is then assessed by comparing these results with those obtained from the Ardupilot EKF.
The graph illustrates a comparison of three position estimation methods: raw GPS data represented by green dots, the Ardupilot EKF indicated by a red line, and our custom Extended Kalman Filter (EKF) shown as a blue line.
Figure 5 35 Compare position data of GPS raw, EKF and custom estimation
Figure 5 36 GPS and custom estimation
Figure 5 37 Ardupilot heading and custom heading
The graph illustrates a comparison between the heading estimates from the Ardupilot Extended Kalman Filter (EKF) and our custom EKF. The Ardupilot EKF, represented by the blue line, demonstrates a smoother and more stable performance, indicating superior handling of sensor noise and data fluctuations compared to our custom EKF, shown by the red line, which exhibits erratic changes and instability. This analysis underscores the need for enhancements in our custom EKF, particularly in smoothing and stability, to improve its overall performance in similar conditions.
We manually operate the rover using remote control (RC) while it transmits raw data to our computer through telemetry On the computer, we utilize the Extended Kalman Filter (EKF) to process the raw sensor data, enabling real-time estimation of the rover's position and quaternion Additionally, we store the GPS raw data in the North, East, Down (NED) frame along with the estimated states.
Figure 5 38 Real-time heading estimation
Figure 5 39 Real-time position estimation
Figure 5 40 Real-time position estimation
Figure 5 41 Real-time heading estimation
Figure 5 42 Real-time position estimation
Figure 5 43 Real-time position estimation
Apply EKF state estimation into position controller
We apply EKF fusion into position controller in SITL simulation:
Set goal position North 150 m, East 100 m; result: North 147.22 m, East 98.519 m
Set goal position North 150 m, East 100 m; result: North 152.925 m, East 102.704 m
Set goal position North 150 m, East 100 m; result: North 154.75 m, East 100.88 m
Set goal position North 150 m, East 0 m; result: North 155.75 m, East 3 m
Set goal position North 150 m, East 0 m; result: North 154.1 m, East 1 m
CONCLUSION AND DEVELOPMENT DIRECTION
Conclusion
The team has researched and designed a suitable structure for outdoor autonomous vehicles
Successfully applied the Sensor Fusion algorithm to design a position controller for the vehicle
Perform system testing and evaluation during robot assembly, ensuring the robot can operate at high intensity
- Limited resources, time and skills, limited access to technology
Although the team completed only a little more than half of the tasks initially set, they successfully applied the Sensor Fusion algorithm to control autonomous vehicles
Investing in Unmanned Ground Vehicles (UGVs) may require a significant initial outlay, but their deployment in exploring hazardous or challenging environments offers numerous advantages UGVs enhance productivity, feature adaptable designs, and allow for remote operation and monitoring, making them an invaluable asset in various applications.
Development Direction
Experimental obstacle avoidance for outdoor self-propelled vehicles
Add more sensors to increase the robot's accuracy
Reinforce the self-propelled vehicle's frame to make it more durable
[1] Joan Solà, Quaternion Kinematics for the Error-State Kalman Filter
[3] J. R. Wertz, Spacecraft Attitude Determination and Control
[4] AHRS readthedocs, Extended Kalman Filter, https://ahrs.readthedocs.io/en/latest/filters/ekf.html
[5] Jay A. Farrell, Aided Navigation: GPS with High Rate Sensors
[6] Università di Pisa, Attitude Estimation Using a Quaternion-Based Kalman Filter With Adaptive and Norm-Based Estimation of External Acceleration
[7] Orientation Estimation Using a Quaternion-Based Indirect Kalman Filter With Adaptive Estimation of External Acceleration
[8] Kaiqiang et al., A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm
[9] A. M. Sabatini, Quaternion-Based Extended Kalman Filter for Determining Orientation by Inertial and Magnetic Sensing
[10] Manon Kok, Jeroen D. Hol and Thomas B. Schön, Using Inertial Sensors for Position and Orientation Estimation
[11] Intuitive Understanding of Kalman Filtering with MATLAB
[12] Timothy D. Barfoot, State Estimation for Robotics
[13] Jesús García, Engineering UAS Applications, Chapter 4: Navigation
[14] University of Toronto (Coursera), State Estimation and Localization for Self-Driving Cars
[15] Error-State Extended Kalman Filter Localization for Underground Mining
[16] INS/GNSS Tightly-Coupled Integration Using Quaternion-Based AUPF for USV
[17] Mingu Kim, The Design of GNSS/IMU Loosely-Coupled Integration Filter for Wearable EPTS of Football Players