
Intelligent Systems, Control and Automation:

Science and Engineering

Yasmina Bestaoui Sebbane

Planning and Decision Making for Aerial Robots

Intelligent Systems, Control and Automation: Science and Engineering

Volume 71

Series editor
S. G. Tzafestas, Zografou, Athens, Greece

Editorial Advisory Board
P. Antsaklis, Notre Dame, IN, USA
P. Borne, Lille, France
D. G. Caldwell, Salford, UK
C. S. Chen, Akron, OH, USA
T. Fukuda, Nagoya, Japan
S. Monaco, Rome, Italy
G. Schmidt, Munich, Germany
S. G. Tzafestas, Athens, Greece
F. Harashima, Tokyo, Japan
N. K. Sinha, Hamilton, ON, Canada
D. Tabak, Fairfax, VA, USA
K. Valavanis, Lafayette, LA, USA

For further volumes:

http://www.springer.com/series/6259

Yasmina Bestaoui Sebbane

Planning and Decision Making for Aerial Robots

Yasmina Bestaoui Sebbane
UFR Sciences and Technologies

Université d’Evry Val-D’Essone

Evry, Essonne

France

ISSN 2213-8986 ISSN 2213-8994 (electronic)

ISBN 978-3-319-03706-6 ISBN 978-3-319-03707-3 (eBook)

DOI 10.1007/978-3-319-03707-3

Springer Cham Heidelberg New York Dordrecht London

Library of Congress Control Number: 2013956331

© Springer International Publishing Switzerland 2014

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


To my family


This book provides an introduction to the emerging field of planning and decision making of aerial robots. An aerial robot is the ultimate of Unmanned Aerial Vehicles, an aircraft endowed with built-in intelligence, no direct human control, and able to perform a specific task. It must be able to fly within a partially structured environment, to react and adapt to changing environmental conditions, and to accommodate the uncertainty that exists in the physical world. An aerial robot can be termed a physical agent that exists and flies in the real 3D world, can sense its environment, and act on it to achieve some goals. So throughout this book, an aerial robot will also be termed an agent.

Fundamental problems in aerial robotics are the tasks of moving through space, sensing about space, and reasoning about space. Reasoning in the case of a complex environment represents a difficult problem. The issues specific to reasoning about space are planning and decision making. Planning deals with the trajectory algorithmic development based on the available information. Decision making determines the most important requirements and evaluates possible environment uncertainties.

The issues specific to planning and decision making of aerial robots in their environment are examined in this book, leading to its contents: motion planning, deterministic decision making, decision making under uncertainty, and finally multi-robot planning. A variety of techniques are presented in this book, and some case studies are developed. The topics considered are multidisciplinary and lie at the intersection of Robotics, Control Theory, Operational Research, and Artificial Intelligence.



1 Introduction 1

1.1 Motivation 1

1.2 Aerial Robots 3

1.3 Aerial Robotics and Artificial Intelligence 6

1.4 Preliminaries 9

1.4.1 Probability Fundamentals 9

1.4.2 Uncertainty Fundamentals 12

1.4.3 Nonlinear Control Fundamentals 18

1.4.4 Graph Theory Fundamentals 21

1.4.5 Linear Temporal Logic Fundamentals 24

1.4.6 Rough Sets 30

1.5 Modeling 31

1.5.1 Modeling of the Environment 37

1.5.2 Modeling of the Aerial Robot 37

1.5.3 Aerial Robots in Winds 43

1.6 Conflict Detection 46

1.6.1 Deterministic Approach 46

1.6.2 Probabilistic Approach 51

1.7 Conclusions 52

References 53

2 Motion Planning 59

2.1 Introduction 59

2.2 Controllability 60

2.3 Trajectory Planning 64

2.3.1 Trim Trajectory Generation 66

2.3.2 Leg-Based Guidance 68

2.3.3 Dubins and Zermelo Problems 69

2.3.4 Optimal Control Based Approaches 77

2.3.5 Parametric Curves 87

2.4 Nonholonomic Motion Planning 99

2.4.1 Differential Flatness 99

2.4.2 Nilpotence 103

2.4.3 Constrained Motion Planning 106



2.4.4 Motion Planning for Highly Congested Spaces 109

2.5 Obstacle/Collision Avoidance 111

2.5.1 Problem Formulation 111

2.5.2 Discrete Search Methods 115

2.5.3 Continuous Search Methods 139

2.6 Replanning Approaches 148

2.6.1 Incremental Replanning 149

2.6.2 Anytime Algorithms 157

2.7 Conclusions 163

References 164

3 Deterministic Decision Making 171

3.1 Introduction 172

3.2 Symbolic Planning 173

3.2.1 Hybrid Automaton 174

3.2.2 Temporal Logic Motion Planning 180

3.3 Computational Intelligence 181

3.3.1 Neural Networks 182

3.3.2 Evolution Algorithms 185

3.3.3 Decision Tables 189

3.3.4 Fuzzy Systems 190

3.4 Arc Routing Methods 192

3.4.1 Traveling Salesman Problem 193

3.4.2 Dubins Traveling Salesman Problem 199

3.4.3 Chinese Postman Problem 202

3.4.4 Rural Postman Problem 203

3.5 Case Studies 207

3.5.1 Surveillance Mission 207

3.5.2 Evolutionary Planner 213

3.5.3 Bridge Monitoring 220

3.5.4 Soaring Flight for Fixed Wing Aerial Robot 231

3.6 Conclusions 239

References 240

4 Decision Making Under Uncertainty 245

4.1 Introduction 246

4.2 Generic Framework for Dynamic Decisions 248

4.2.1 Problem Formulation 248

4.2.2 Utility Theory 252

4.2.3 Decision Trees and Path Utility 252

4.2.4 Bayesian Inference and Bayes Nets 252

4.2.5 Influence Diagrams 255

4.3 Markov Approach 255


4.3.1 Markov Models 256

4.3.2 Markov Decision Process Presentation 256

4.3.3 Partially Observable Markov Decision Process 262

4.3.4 Bayesian Connection with Partially Observable Markov Decision Process 264

4.3.5 Learning Processes 265

4.3.6 Monte Carlo Value Iteration 266

4.3.7 Markov Logic 268

4.3.8 Belief Space Approach 270

4.4 Stochastic Optimal Control Theory 275

4.4.1 Bayesian Connection with State Space Models 276

4.4.2 Learning to Control 277

4.4.3 Chance Constrained Algorithms 278

4.4.4 Probabilistic Traveling Salesman Problem 280

4.4.5 Type 2 Fuzzy Logic 283

4.5 Motion Grammar 284

4.5.1 Description of the Approach 286

4.5.2 Grammars for Aerial Robots 287

4.5.3 Temporal Logic Specifications 289

4.6 Case Studies 292

4.6.1 Robust Orienteering Problem 292

4.6.2 Exploration of an Uncertain Terrain 299

4.6.3 Rescue Path Planning in Uncertain Adversarial Environment 301

4.6.4 Receding Horizon Path Planning with Temporal Logic Constraints 303

4.7 Real-Time Applications 306

4.8 Conclusions 311

References 312

5 Multi Aerial Robot Planning 317

5.1 Introduction 318

5.2 Team Approach 319

5.2.1 Cooperation 321

5.2.2 Cascade-Type Guidance Law 323

5.2.3 Consensus Approach 327

5.2.4 Flocking Behavior 329

5.2.5 Connectivity and Convergence of Formations 333

5.3 Deterministic Decision Making 336

5.3.1 Distributed Receding Horizon Control 338

5.3.2 Conflict Resolution 340

5.3.3 Artificial Potential 342

5.3.4 Symbolic Planning 346


5.4 Association with Limited Communications 348

5.4.1 Introduction 348

5.4.2 Problem Formulation 349

5.4.3 Genetic Algorithms 353

5.4.4 Games Theory Reasoning 355

5.5 Multi-Agent Decision Making Under Uncertainty 356

5.5.1 Decentralized Team Decision Problem 357

5.5.2 Algorithms for Optimal Planning 365

5.5.3 Task Allocation: Optimal Assignment 368

5.5.4 Distributed Chance Constrained Task Allocation 374

5.6 Case Studies 377

5.6.1 Reconnaissance Mission 377

5.6.2 Expanding Grid Coverage 381

5.6.3 Optimization of Perimeter Patrol Operations 383

5.6.4 Stochastic Strategies for Surveillance 388

5.7 Conclusions 391

References 392

6 General Conclusions 397

Index 399


AEMS Anytime Error Minimization Search

ATSP Asymmetric Traveling Salesman Problem

CBHD Campbell Baker Hausdorff Dynkin

CCPP Capacitated Chinese Postman Problem

DEC-MDP DECentralized Markov Decision Process

DEC-POMDP DECentralized Partially Observable Markov Decision Process

DEC-POMDP-COM DECentralized Partially Observable Markov Decision Process with COMmunication

DMOC Discrete Mechanics and Optimal Control

DRRT Dynamic Rapidly extending Random Tree

DTRP Dynamic Traveling Repairman Problem

DTSP Dubins Traveling Salesman Problem



EGPSO Evolutionary Graph-based Particle Swarm Optimization

ETSP Euclidean Traveling Salesman Problem

GEST Guided Expansive Search Trees

GVRP Generalized Vehicle Routing Problem

IID Independent Identically Distributed

I-POMDP Informed Partially Observable Markov Decision Process

IT2-FS Interval Type-2 Fuzzy System

LQG-MP Linear Quadratic Gaussian Motion Planning

LQR-B Linear Quadratic Regulator in Belief space

MDLe Motion Description Language extended

MDLn Motion Description Language networked

MDTRP Multi-vehicle Dynamic Traveling Repairman

MILP Mixed Integer LInear Programming

MMDP Multiagent Markov Decision Process

MMKP MultiChoice Multidimensional Knapsack Problem

MS-RRT Multi-Sample Rapidly expanding Random Tree

NURBS Non-Uniform Rational B Splines


OWR Optimal Wind Routing

PDF Probability Distribution Function

SIPP Safe Interval Path Planning

SISO Single Input Single Output

SQP Sequential Quadratic Programming

STLC Small Time Locally Controllable

STSP Symmetric Traveling Salesman Problem

VRPTW Vehicle Routing Problem with Time Windows

WCSPP Weight Constrained Shortest Path Problem


This book is organized as follows:

1. Chapter 1: Introduction: Aerial robots being the ultimate of the unmanned aerial vehicles, aerial robotics is a rapidly evolving and expanding field. As reasoning in the case of a complex dynamic environment represents a difficult problem, planning and decision making are two fundamental problems in robotics that have been addressed from different perspectives. Combined planning and decision making is becoming increasingly important due to the recognition that a diverse set of applications involve reasoning with both discrete actions and continuous motions. The topics considered are multidisciplinary and lie at the intersection of robotics, control theory, operational research, and artificial intelligence. This chapter introduces aerial robotics and the relationship of planning and decision making of aerial robots with artificial intelligence. Aerial robotics follows the weak hypothesis of artificial intelligence, which means that an aerial robot should be able to simulate the human cognitive process but cannot experience mental states by itself. Then, some basic notions about probability, uncertainty, nonlinear control theory, graph theory, linear temporal logic, and rough sets are given. Many tasks requiring an internal representation of the aerial robot and its environment, modeling of the environment and of airplane/helicopter-like aerial robots follows. As the relative size of most aerial robots generally makes them susceptible to variations in wind conditions, the state of the atmosphere is considered. Finally, collision/obstacle avoidance being a critical requirement for the safe operation of aerial robots, conflict detection is analyzed, addressing the problem of collision detection with deterministic and probabilistic approaches.

2. Chapter 2: Motion Planning: Motion planning, still an active research topic, is presented in this chapter. It is a fundamental task for an aerial robot that must plan complex strategies and establish long-term plans. Many topics are considered in this chapter. Controllability concerns the possibility for an aerial robot to fly from an initial position to another one. It provides an answer to the question of whether the state can be driven to a specific point from any (nearby) initial condition under an appropriate choice of control. Controllability of an aerial robot represented by its translational kinematic model is studied. The problem of trajectory planning, an important aspect of aerial robot guidance, follows: trim trajectories and maneuvers are introduced, as well as the Dubins and Zermelo problems. The trajectories must be flyable and safe. Thus, nonholonomic motion planning is studied using the notions of differential flatness and nilpotence. As aerial robots are likely to operate in a dynamic environment, collision avoidance is a fundamental part of motion planning. The operational environment is considered to be three-dimensional; it may contain zones that the robot is not allowed to enter, and these zones may not be fully characterized at the start of a mission. 3D aerial path planning has the objective of completing the given mission efficiently while maximizing the safety of the aircraft. To solve this problem, deterministic and probabilistic approaches are presented. To cope with imperfect information and dynamic environments, efficient replanning algorithms must be developed that correct previous solutions for a fraction of the computation required to generate such solutions from scratch. Thus, replanning methods such as incremental and anytime algorithms are studied.

3. Chapter 3: Deterministic Decision Making: Decision making is mission-level autonomy and intelligence. Given an agent that can fly and sense its environment, the considered task is to plan intelligent motions and take decisions when required. If one has perfect information about the environmental conditions that will be encountered, a safe path can be constructed. Symbolic planning methods such as hybrid automata and linear temporal logic are first presented. Symbolic motion planning is the problem of automatic construction of robot control strategies from task specifications given in a high-level human-like language. Some computational intelligence approaches follow, such as neural networks, evolution algorithms, decision tables, and fuzzy systems. Intelligent decision making, the discipline where planning algorithms are developed by emulating certain characteristics of intelligent biological systems, is an emerging area of planning. One important application in aerial robotics being the choice of the waypoints, some operations research methods such as the traveling salesman problem, the Chinese postman problem, and the rural postman problem are presented. They enable the formulation and solution of such flight planning problems. Finally, some case studies are discussed. The first case study concerns a surveillance mission using neural networks as function approximation tools to improve the computational efficiency of a direct trajectory optimization. The second proposes a flight route planning technique for autonomous navigation of an aerial robot based on the combination of evolutionary algorithms and virtual potential fields. The third application concerns bridge monitoring. The aerial robot is required to take photos of thousands of points located on a bridge, so the problem of choosing adequate subsets of waypoints appears while the environmental constraints must be verified; the proposed solution uses operational research techniques. The final case is soaring flight for an airplane-like robot, as appropriate flight techniques are expected to allow extraction of energy from the atmosphere.

4. Chapter 4: Decision Making Under Uncertainty: Realistic mission planning scenarios include uncertainty on obstacles and other no-fly zones. Additionally, the aerial robot must be able to overcome environmental uncertainties such as modeling errors, external disturbances, and an incomplete situational awareness. If there are uncertainties in the information, usually several scenarios are considered and a trade-off solution is offered. In this chapter, probabilistic trajectory prediction is introduced and some generic frameworks for dynamic decisions are presented, such as utility theory, Bayesian inference, Bayes nets, and influence diagrams. Then, the Markov approach is considered. In aerial robotics, observations are incomplete and action consequences are sometimes non-deterministic. So some planning approaches such as the Markov Decision Process, the Partially Observable Markov Decision Process, stochastic optimal control theory, and Markov logic are presented. Planning with uncertain winds is an important topic in aeronautics and is presented within this framework. The stochastic routing problems, being the generalization of the arc routing methods introduced in the previous chapter, are presented. The probabilistic traveling salesman problem is essentially a traveling salesman problem in which the number of points to be visited in each problem instance is a random variable. Another challenge is the development of mathematical frameworks that formally integrate high-level planning with continuous control primitives. Another topic, motion grammar, enables aerial robots to handle uncertainty in the outcomes of control actions through online parsing. Some case studies are hence developed. The first case is the robust orienteering problem, which is a generalization of the traveling salesman problem. The second application concerns the exploration of an uncertain terrain, while the third is rescue path planning in an uncertain adversarial environment. The final case study is receding horizon path planning with temporal logic constraints. Finally, some details about real-time applications are given at the end of the chapter.

5. Chapter 5: Multi Aerial Robot Planning: Multi-robot systems are a major research topic in robotics. Designing, testing, and deploying in the real world a large number of aerial robots is a concrete possibility due to the recent technological advances. The first section of this chapter treats the different aspects of cooperation in multi-agent systems. A cooperative control should be designed in terms of the available feedback information. A cascade-type guidance law is proposed, followed by the consensus approach and flocking behavior. Since information flow over the network changes over time, cooperative control must react accordingly but ensure group cooperative behavior, which is the major issue in analysis and synthesis. Connectivity and convergence of formations are also studied. The team approach is followed by deterministic decision making. Plans may be required for a team of aerial robots to plan for sensing, plan for action, or plan for communication. Distributed receding horizon control as well as conflict resolution, artificial potentials, and symbolic planning are thus analyzed. Then, association with limited communications is studied, followed by genetic algorithms and game theory reasoning. Next, multi-agent decision making under uncertainty is considered, formulating the Bayesian decentralized team decision problem, with and without explicit communication. Algorithms for optimal planning are then introduced, as well as for task allocation and distributed chance constrained task allocation. Finally, some case studies are presented, such as a reconnaissance mission that can be defined as the road search problem or the general vehicle routing problem. Then, an approach is considered to coordinate a group of aerial robots without central supervision, by using only local interactions between the robots. The third case is the optimization of perimeter patrol operations. If an aerial robot must be close to a location to monitor it correctly and the number of aerial robots does not allow covering each site simultaneously, a path planning problem arises. Finally, stochastic strategies for surveillance are presented.

6. Chapter 6: General Conclusions: Final conclusions and prospects are presented in this last chapter.


Abstract Aerial robots being the ultimate of the unmanned aerial vehicles, aerial robotics is a rapidly evolving and expanding field. As reasoning in the case of a complex dynamic environment represents a difficult problem, planning and decision making are two fundamental problems in robotics that have been addressed from different perspectives. Combined planning and decision making is becoming increasingly important due to the recognition that a diverse set of applications involve reasoning with both discrete actions and continuous motions. The topics considered are multidisciplinary and lie at the intersection of robotics, control theory, operational research and artificial intelligence. This chapter introduces aerial robotics and the relationship of planning and decision making of aerial robots with artificial intelligence. Aerial robotics follows the weak hypothesis of artificial intelligence, which means that an aerial robot should be able to simulate the human cognitive process but cannot experience mental states by itself. Then, some basic notions about probability, uncertainty, nonlinear control theory, graph theory, linear temporal logic and rough sets are given. Many tasks requiring an internal representation of the aerial robot and its environment, modeling of the environment and of airplane/helicopter-like aerial robots follows. As the relative size of most aerial robots generally makes them susceptible to variations in wind conditions, the state of the atmosphere is considered. Finally, collision/obstacle avoidance being a critical requirement for the safe operation of aerial robots, conflict detection is analysed, addressing the problem of collision detection with deterministic and probabilistic approaches.


… airborne animals such as birds or insects. Thus, aerial robots may be categorized into different types: fixed (heavier or lighter than air), rotary, and flapping wing vehicles [87, 113]. A principal reason for the interest in aerial robots is the desire to perform civil missions more efficiently and at lower cost than with manned vehicles, and to reduce risks to humans. New concepts of operation demand more versatility in the face of changing mission and environment specifications, adaptability to events in the immediate and broader environment, and good performance and robustness in changing operating conditions. Applications focus on the civilian and public domains and include observation, surveillance and reconnaissance, search and rescue, wildfire mapping, monitoring (agriculture, weather, environment, structures, ...), disaster management, thermal infrared power line surveys, TV news coverage, telecommunication, aerial imaging/mapping, and highway speed control [5, 66].

The aim of current research is to have a fully autonomous flight vehicle that selects the tasks to be performed for a given mission and performs them. To select the tasks, an aerial robot must make plans and take decisions. Decision making determines the most important requirements, such as the choice of the waypoints, and evaluates possible environment uncertainties. In sequential decision making, the aerial robot seeks to choose the best actions based on its observations of the world to optimize an objective function over the course of a series of such decisions, depending on its mission. Planning deals with the trajectory algorithmic development based on the available information. Depending upon the mission requirements, the trajectory can contain low speed parts. Moreover, in special cases, hovering aerial robots should stay in a particular position/area for a certain period of time [9, 24, 26, 30]. Dealing with uncertainty in complex dynamic environments is a challenge to the operation of real world aerial robots. Such systems must be able to form plans of intelligent actions [3].

Multi-robot systems can play a big role in the near future, as they can perform tasks that a single aerial robot may have difficulties completing. In the case of a multi-agent system, each aerial robot needs to make inferences about the others as well, possibly under limited communications, over a course of repeated interactions. Methods are currently developed that exploit the structure present in aerial robots' interactions and methods that distribute computation among the agents [10, 106, 119]. With recent advances in hardware and software, distributed multi-robot systems are becoming more versatile and applicable in different domains [44, 86]. The computational abilities provide the strongest constraints on current aerial robots. Although more advances in the hardware mechanisms are to be expected, improvements in software are essential [33]. Multi-agent sequential decision making under uncertainty is a relevant topic for real-world application of autonomous multi-robot systems. Multi-agent sequential decision making typically comprises: problem representation, planning, coordination, and learning.


1.2 Aerial Robots

Aerial robots are unmanned aerial vehicles capable of performing complex missions without human intervention [63, 77, 100]. In order to be an aerial robot, an unmanned aircraft must be able to [19, 76, 114]:

• Perceive its environment, consequently update its activity and respond to changes to its own state,

• Command the actuators and keep as close as possible to the currently planned trajectory despite un-modeled dynamics, parametric uncertainties and sensor noise,

• Regulate its motion with respect to the local environment,

• Be able to avoid obstacles and other aircraft and assess the information from the multiple sensors about the environment,

• Be able to assess its capability and determine if that capability meets the requirement of the current mission,

• Follow mission plans, account for changing weather and changes in its operating environment,

• Move within a partially structured environment where the map does not contain all objects and is not necessarily accurate, or move within an unstructured environment where no map is available.

Current unmanned aerial vehicles are in general remote controlled, while some can fly autonomously based on pre-programmed flight plans. Missions currently undertaken by unmanned aerial vehicle systems are predefined [5, 46, 67]. The vehicle follows a flight plan initially developed as for a commercial aircraft with a flight management system. The flight plan is a sequence of waypoints defined by latitude and longitude, based on the objectives of the mission. This series of waypoints is assumed to be joined by straight line segments, originating at the climb phase just after take-off and terminating at the landing phase. During the autonomous flight, the airborne computer system collects the data in real-time from the sensors to generate the commands to the autopilot controlling the flight [55]. Currently the amount of manpower is still intense. Aerial robots must begin to do more on their own to ultimately think [20, 34, 96]. The research was initially directed towards the development of entities acting alone (systems consisting of a single aerial robot) [12, 21, 39]. This work has demonstrated the feasibility and potential development of such intelligent entities. More recently, research in aerial robotics began to be interested in the implementation of teams of such entities and the complexity of interaction behavior that results. Work on teams of aerial robots has been developed around this idea. The decisions taken by an entity belonging to such systems are more complex because of the interactions. It becomes more difficult to obtain collectively intelligent behavior [27, 32, 58, 65].

Motion planning and decision making are two fundamental problems in robotics that have been addressed from different perspectives. Bottom-up motion planning techniques concentrate on creating control inputs or closed loop controllers that fly the aerial robot from one configuration to another while taking into account different dynamics and motion constraints. On the other hand, top-down task planning approaches are usually focused on finding coarse, typically discrete, robot actions in order to achieve more complex tasks. A hierarchical structure of the system architecture is a natural approach for reducing the complexity of large scale systems, whether this is a single-robot or a multi-robot system.

Fig. 1.1 The mono-vehicle hierarchy

• For a single aerial robot (see Fig. 1.1), the lowest level is reference trajectory tracking. The autopilot uses as references the trajectory and path generated by the second level (from the bottom), while satisfying vehicle constraints and clearance [56]. These paths and trajectories have been calculated using the set of waypoints or flight plan that has been determined depending on the assigned mission [93]. The highest level is mission planning; it determines the mission target areas, probable obstacles and restricted areas of flight. This level uses decision making, either deterministic or under uncertainty.

• For a multi-agent system (see Fig. 1.2), trajectory generation and tracking are still considered in the lowest levels. While the highest level is still mission planning, the fourth level (from the bottom) is the multi-objective decision making level. The team must be able to plan its mission, to choose what goals it will address among those proposed, in what order, and how it will perform its operations [42, 88, 105]. The cooperation of the team of vehicles is on the third level, while resource allocation and waypoint selection have to be performed on the second level. The vehicles must be able to generate a new plan in response to events occurring during the mission that may invalidate the flight plan in progress. There are multiple objectives that can be utilized throughout the course of an autonomous mission [89, 99]. Providing mission autonomy for a team of aerial robots has become a major challenge, requiring the development of decision-making methods for the various operations of a mission.

Different levels use different models. This hierarchical structure uses models of the aerial robot and the environment with different levels of detail. They become more detailed as the hierarchy is lower. The different levels do not share the same information on the aerial robot and the environment [14, 16, 18]. Tasks can be defined as a set of actions to accomplish a job or an assignment. A task can be represented by a number of lower level goals that are necessary for achieving the overall goal of the system.

Fig. 1.2 The multi-vehicles hierarchy

An important task of the 'built-in intelligence' is to monitor the robot's health and prevent flight envelope violations for safety. An aerial robot's health condition depends on its aerodynamic loading, actuator operating status, structural fatigue, ... Any technique to maintain specific flight parameters within the operational envelope of a vehicle falls under envelope protection. Even though a key feature of envelope protection is to identify a relationship between operational limits and command/control inputs, it is difficult to obtain an exact analytic relationship by online or offline training. Furthermore, since aerial robots are expected to be operated more aggressively than their manned counterparts, envelope protection is very important in aerial robots and must be done automatically due to the absence of a pilot [59].

Optimality may be defined as minimizing the flight time between initial and goal points, or the energy consumed, or a mix between them. Weighting factors are needed for cost functions that have more than one term. The shortest path that uses the least amount of fuel is often neither the shortest possible path, nor the path that uses the least fuel, but one which reaches a balance between them. The relative weights of these terms determine what sort of balance results. Substantial oversight is often required to analyze the sensitivity of solution characteristics to cost function weights, and this sensitivity analysis may be specific to particular problems rather than fully generalizable. A lexicographic approach can also be used. In time-varying environment problems, the vehicle has to avoid obstacles that are moving in time. Optimal planners for time-varying environments generally attempt to minimize path length or time.


1.3 Aerial Robotics and Artificial Intelligence

Robotics is one of the most appealing and natural application areas for Artificial Intelligence. The fast development of aerial robotics applications poses planning and decision making as central issues in robotics research, with several real world challenges for the Artificial Intelligence community [54, 81].

Autonomy is the ability to make one's own decisions and act on them. An autonomous system reacts to its external inputs and takes some action without operator control. Intelligence can be thought of as a very general mental capability that involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience [35, 51, 53]. An intelligent system is defined as a rational system, which allows abstracting questions about the nature of intelligence and the criteria characterizing the intelligence of a machine. An important aspect of intelligence is inductive inference. Induction is the process of predicting the future from the past [68, 79]. It is the process of finding rules in past data and using these rules to guess future data. All induction problems can be phrased as sequence prediction tasks. An intelligent system uses some internal algorithms to emulate a human expert in determining its course of action. The inputs may be generated by an operator, as in the case of a scheduling system that inputs requested activities and uses a heuristic search engine to produce an optimum, conflict free schedule. Both types of systems perform tasks that would otherwise require human operators. If the input is generated automatically by the operational environment and fed into an intelligent system, then this system is both autonomous and intelligent. An underlying assumption is that autonomous and intelligent systems increase operational efficiency with acceptable levels of risk to the mission [4].

Traditionally, the following topics illustrate the relevance of intelligent systems technologies to aerospace sciences (the list is not exhaustive):

• Autonomous systems, intelligent and adaptive behavior

• Planning and scheduling algorithms

• Data fusion and reasoning, intelligent data/image processing,

• Evolutionary (genetic) algorithms, neural networks, fuzzy logic,

• Expert systems, knowledge-based systems and knowledge engineering,

• Model-based reasoning, machine learning techniques

• Human-machine interaction

• Qualitative simulation …

In computer science, the first approximation to the concept of intelligent object presented the object specific reasoning paradigm, where an object's inherent properties and object-avatar interactions were stored in a database. The drawback was that each new extension of the object interaction properties would need an adjustment of the data stored in the interaction database. Posterior research included intrinsic features, interaction data and information relative to both the avatar and object behaviors during their interaction. The objective is to avoid working with fixed behaviors but to generate interaction plans. Efficient algorithms that can update the trajectory on-line in response to both sensory data and task specifications are therefore needed [52, 83, 101]. Agents as a computational abstraction have replaced 'objects' in software and have provided the necessary ingredients to move to societies of interacting intelligent entities, based on concepts like agent societies and game theory. Such abstractions are dispersed throughout the scientific world, depending largely on applications. Here, aerial robotics is interested in this concept. Indeed, agent researchers are sometimes inspired by robots, sometimes use robots in motivating examples, and sometimes make contributions to robotics. Both practical and analytical techniques in agent research influence, and are influenced by, research into autonomous robots and multi-robot systems [85, 94]. Robots are agents, too. An agent is a computer system that is situated in some environment and that is capable of autonomous action in this environment in order to meet its design objectives [91, 92]. It is a physical or virtual entity:

• Which is capable of acting in its environment, whose behavior tends to meet its objectives, taking into account the resources and skills available to it and depending on its perception, its representations and the communications it receives,

• That can communicate directly with other agents,

• Which is driven by a set of tendencies in the form of individual objectives or of a satisfaction or survival function to be optimized,

• Which is able to perceive its environment, but has only a partial representation of this environment,

• Which has its own resources and limitations, and which can provide services.

This definition refers to the concepts of competence, objectives, limited perception and communication. An agent is equipped with sensors allowing it to perceive its environment and effectors to act in this environment. Based on these sensors, the agent learns about its environment, which can enrich the knowledge provided a priori by the designer. This information then allows it to decide how to act. This decision follows a more or less complex deliberative process. An agent may, for example, base its decisions on its current perceptions, or on the history of its perceptions. After deliberation, the agent executes, through its effectors, the action to be done [112, 121]. The set of actions executed by an agent depends on the available effectors. Some actions can be performed only in specific situations, so they require that a set of pre-conditions be satisfied. Sensors and actuators can be inaccurate. Therefore, the information collected by sensors may be inaccurate or incomplete. By diversifying and multiplying the sensors of an agent, it is possible in some cases to increase the information collected and its accuracy. Moreover, the multiplication of sensors requires merging information from different sources. Unfortunately, information may be contradictory or have different scales [69, 127]. Regarding effectors, imprecision may also arise [75]. Indeed, the result of the execution of an action is not completely controlled by the agent. It also depends on the conditions under which the action is performed. The inaccuracy of sensors and effectors highlights two types of uncertainty that an agent faces: the first concerns the observability of the environment and the second relates to the results of the action. These uncertainties are related to certain characteristics of the environment, in particular observability and deterministic aspects [95, 131].


The environment may have different properties:

• The environment can be static or dynamic. A dynamic environment is likely to change during the deliberation. Otherwise, it is static. In a static environment, the agent does not have to worry about the fact that the environment may have changed since it began deliberating and that its decision has therefore become inappropriate.

• The environment can be continuous or discrete. An environment is discrete if there is a finite number of possible actions and perceptions. Otherwise, it is continuous.

• The environment can be deterministic or stochastic. When the next state of the environment is completely determined by its current state and the action taken, the environment is deterministic: P(s, a, s′) ∈ {0, 1} is the probability of moving from state s to state s′ when the agent performs the action a. If, from the same state and the same action, it is possible to arrive in different states, then the environment is stochastic; thus P(s, a, s′) ∈ [0, 1]. A probability distribution can then represent the uncertainty on the unknown or unmodelled aspects of the environment (a minimal sketch of both cases is given after this list).

• The environment can be fully observable or partially observable. It is completely observable if the agent receives all the information necessary for the decision.

• In a sequential or episodic environment, the experience of the agent can be divided into episodes. Each episode consists of a perception phase, followed by the execution of a single action. Each episode is independent of the others; the result of the execution of an action depends only on the current episode. In a sequential environment, the execution of an action can influence all future actions.

Complex environments are partially observable, stochastic, sequential, dynamic and continuous. Such environments are called open [115, 129].
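The distinction between deterministic and stochastic transitions can be made concrete with a minimal Python sketch; the grid-world states, actions and wind-induced failure probability below are illustrative assumptions, not an example taken from the book.

```python
import random

def deterministic_step(state, action):
    # Deterministic environment: P(s, a, s') is 0 or 1, so the successor state
    # is fully determined by the current state and the chosen action.
    moves = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}
    dx, dy = moves[action]
    return (state[0] + dx, state[1] + dy)

def stochastic_step(state, action, p_success=0.8):
    # Stochastic environment: P(s, a, s') lies in [0, 1]; here a hypothetical
    # wind gust keeps the robot in place with probability 1 - p_success.
    if random.random() < p_success:
        return deterministic_step(state, action)
    return state

print(deterministic_step((0, 0), "north"))  # always (0, 1)
print(stochastic_step((0, 0), "north"))     # (0, 1) with prob. 0.8, else (0, 0)
```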

Removing the human from the flight planning tasks and replacing him by software systems is a challenge. Autonomy is enabled by intelligence and automation. For unmanned systems, it is the ability to sense the situation and act on it appropriately. It is based on distributed decision making, situation awareness, decentralized intelligence leading to emergent behaviors, multiple robot control algorithms, and event states related to prescribed pre-programmed behaviors. Most decisions involve some considerations of unknown factors [109]. Often, the decision making process takes the form of relatively straightforward scenario analysis, or 'what if' reasoning, in which the outcomes of different decisions are evaluated in the light of various possible end values of the unknown factors [43]. In aerial robotics, the uncertainties are complex and ad hoc decision making is not sufficient [69, 90].

Combined planning and decision making is becoming increasingly important due to the recognition that a diverse set of applications involve reasoning with both discrete actions and continuous motions. This issue poses unique computational challenges: a vast hybrid discrete/continuous space must be considered while accounting for complex geometries and dynamics, physical constraints, collision avoidance, and robot-environment interactions [111]. As a result, continued progress relies on an integrative approach that brings together techniques from robotics, Artificial Intelligence, control theory and logic. Areas of particular recent cross-fertilization include:


• Motion planning for single and multiple aerial robots

• Decision-theoretic decision making for single and multiple aerial robots

• Decision making under uncertainty for single and multiple aerial robots

Algorithms are constructed based on different theoretical assumptions and requirements concerning the following issues [70]:

• Environment-vehicle: What is the structure of the environment, the vehicle's capabilities, its shape?

• Soundness: is the planned trajectory guaranteed to be collision free?

• Completeness: is the algorithm guaranteed to find a path if one exists?

• Optimality : What is the cost of the actual path obtained versus the optimal path?

• Space or time complexity: What is the storage space or computer time taken to find a solution?

Aerial robotics follows the weak hypothesis of Artificial Intelligence. That means that an aerial robot is able to simulate the human cognitive process but cannot experience mental states by itself. By opposition, the strong hypothesis of Artificial Intelligence leads to the construction of machines that are able to reach cognitive mental states, a machine conscious of its own existence, with emotions and consciousness [1, 2]. This strong hypothesis approach is outside the scope of this book. Aspects of situational awareness are also outside the scope of this book.

1.4 Preliminaries

This section presents the fundamentals of probability, uncertainty, nonlinear control theory, graph theory, linear temporal logic and rough sets necessary to comprehend the methods presented in the following chapters.

1.4.1 Probability Fundamentals

Probability finds application throughout many scientific fields, but finds particular use in the modeling of uncertainty in terms of an aerial robot configuration, interpreting sensor data and mapping. This section reviews some of the fundamental uses of probability as related to aerial robotics. Probability theory provides a general technique for describing random or chance variations [117]. Observations vary depending on what appears to be random variations. If there is an experiment whose outcome depends on chance, the outcome of the experiment can be described as a random variable X. The sample space Ω is the space from which X can be drawn. If the sample space is finite or countably infinite, then the random variable and sample space are said to be discrete; otherwise, they are said to be continuous [38].


1.4.1.1 Discrete Events

The discrete probability function Prob(X) scores that a discrete event X ∈ Ω has occurred.

Definition 1.1 Discrete probability. The discrete probability Prob(X) can be defined in terms of three axioms:

1. Nonnegativity: Prob(A) ≥ 0 for every event A.

2. Additivity: If A1, A2, ..., AN is a set of disjoint events, then
Prob(A1 ∪ A2 ∪ ... ∪ AN) = Prob(A1) + Prob(A2) + ... + Prob(AN)
where A ∪ B denotes the event A or the event B.

3. Normalization: The probability of the entire sample space Ω is equal to 1, that is, Prob(Ω) = 1.

Two events A and B are said to be statistically independent if

Prob(A ∩ B) = Prob(A) Prob(B)

where A ∩ B denotes the event A and the event B. The conditional probability of A taking on certain values, given that B has a specific value or range of values, is defined as Prob(A|B); it is the probability of event A, given B. The Bayes rule is given by

Prob(A|B) = η Prob(B|A) Prob(A)

where η is the normalizing term.
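As an illustration of the Bayes rule and its normalizing term η, the following Python sketch computes a posterior over two hypotheses; the obstacle-detection scenario, priors and likelihoods are invented for the example.

```python
def bayes_posterior(prior, likelihood):
    # prior[h] = Prob(h); likelihood[h] = Prob(observation | h)
    unnormalized = {h: likelihood[h] * prior[h] for h in prior}
    eta = 1.0 / sum(unnormalized.values())      # normalizing term eta
    return {h: eta * v for h, v in unnormalized.items()}

prior = {"obstacle": 0.1, "free": 0.9}          # hypothetical prior
likelihood = {"obstacle": 0.7, "free": 0.2}     # hypothetical Prob(sensor hit | h)
print(bayes_posterior(prior, likelihood))       # {'obstacle': 0.28, 'free': 0.72}
```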

1.4.1.2 Continuous Events

Definition 1.2 Continuous probability. The probability of a continuous event is defined in terms of a cumulative distribution function F(x) that scores the probability that the random variable takes on a value less than or equal to x.

The cumulative distribution function F(x) has the following properties:

1. F(x) is continuous.

2. F′(x) exists except at most at a finite number of points.

3. F′(x) is at least piecewise continuous.


Definition 1.3 Probability density function. The probability density function (PDF) f(x) is the derivative of the cumulative distribution function:

f(x) = dF(x)/dx

The probability density function has the following properties:

1. f(x) = 0 implies x is not in the range of X.

2. f(x) ≥ 0 for all x.

3. The integral of f(x) over the whole range of X equals 1.

Definition 1.4 Expected value. The expected value of the random variable X is

E[x] = ∫ x f(x) dx

where f(x) is its probability distribution function.

Definition 1.5 Variance index. The variance V(x) or σ² is given by

σ² = E[(x − E[x])²]     (1.6)

The variance is an estimate of the region about the expected value within which likely values of x are to be found. The standard deviation S(x) is defined as σ.
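A short numerical sketch of these definitions, estimating the expected value, variance and standard deviation from samples of a random variable; the Gaussian parameters are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=0.5, size=10_000)   # samples of X (illustrative)

expected = x.mean()                               # estimate of E[x]
variance = ((x - expected) ** 2).mean()           # sigma^2 = E[(x - E[x])^2]
std_dev = np.sqrt(variance)                       # S(x) = sigma

print(expected, variance, std_dev)                # approx. 2.0, 0.25, 0.5
```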

1.4.1.3 Sampling from Random Variables

Approximating the probability distribution of a random variable using samples or particles can lead to tractable algorithms for estimation and control [23], given two multivariate probability distributions p(x) and q(x). The expectation of the target distribution is

E_p[f(x)] = ∫ f(x) p(x) dx

In many cases this integral cannot be evaluated in closed form. Instead, this value is approximated by drawing N independent identically distributed random samples x^(1), ..., x^(N) from the proposal distribution q(x). The weighted sample mean is

(1/N) Σ_{i=1}^{N} w_i f(x^(i)),   with w_i = p(x^(i)) / q(x^(i))

where w_i is known as the importance weight. As long as q(x) > 0 for all x such that p(x) > 0, and under weak assumptions on the boundedness of f(.) and the moments of p(x), from the strong law of large numbers the convergence property holds:

(1/N) Σ_{i=1}^{N} w_i f(x^(i)) → E_p[f(x)]   as N → ∞     (1.9)

Hence, the expectation, which could not be evaluated exactly in closed form, can be approximated as a summation over a finite number of particles. In addition, the expectation over the target distribution has been approximated by drawing samples from the proposal distribution. The convergence property (1.9) can also be used to approximate the probability of a certain event, such as the event f(x) ∈ A. This is

Prob(f(x) ∈ A) ≈ (1/N) Σ_{i=1}^{N} w_i 1{f(x^(i)) ∈ A}

This expression is simply the weighted fraction of particles for which f(x^(i)) ∈ A.
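The importance-sampling scheme above can be sketched in a few lines of Python; the target p = N(0, 1), the proposal q = N(1, 2), the test function f and the event A are illustrative choices, not prescribed by the text.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    # Density of a univariate Gaussian, used here for both p(x) and q(x).
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
N = 100_000

x = rng.normal(loc=1.0, scale=2.0, size=N)              # x^(i) drawn from q
w = normal_pdf(x, 0.0, 1.0) / normal_pdf(x, 1.0, 2.0)   # importance weights w_i

f = lambda v: v ** 2                                     # test function f(x) = x^2
estimate = np.mean(w * f(x))                             # -> E_p[x^2] = 1
event_prob = np.mean(w * (f(x) > 4.0))                   # -> Prob_p(|x| > 2) ~ 0.046

print(estimate, event_prob)
```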


1.4.2.1 Aleatory Uncertainty

Aleatory uncertainty refers to the inherent variation associated with the aerial robot and its environment; it is not due to a lack of knowledge and cannot be reduced. Aleatory uncertainty is also known as variability, irreducible uncertainty, stochastic or random uncertainty [7, 15, 23].

1.4.2.2 Epistemic Uncertainty

Epistemic uncertainty refers to the lack of knowledge or incomplete information about the aerial robot and its environment. It is also known as subjective uncertainty, state of knowledge uncertainty or reducible uncertainty [28, 29, 60]. Evidential reasoning is the task of assessing a certain hypothesis when certain pieces of evidence are given; the result is a new piece of evidence representing the consensus of the original pieces of evidence, which in turn can be applied to another stage of reasoning. There are three major frameworks of evidential reasoning in the literature: Bayesian probability theory, fuzzy set theory and the Dempster-Shafer theory of evidence [126]. These three methods are introduced below.

Probability Theory

In the probabilistic approach, uncertainties are characterized by the probabilities associated with the events. The probability can be represented by the frequency of occurrences, if sufficient data are available, from which a probability distribution is generated for statistical analyses. Probabilistic analysis is the most widely used method for characterizing uncertainty, which could be due to stochastic disturbance such as noise, variability conditions or risk considerations. Uncertainty can be modeled using either discrete or continuous probability distribution functions (PDF). In general, there are three approaches to calculate the probability of event A [72, 75]:

3. Subjective determination (Bayesian approach): P(A) = the value given by experts. It is usually used for random events that cannot be repeated in large quantity. Any probability measure P(.) on a finite set Θ can be uniquely determined by a probability distribution function from the formula given below:

p(A) = Σ_{θ∈A} p(θ)


Possibility Theory or Fuzzy Sets

Fuzzy sets theory allows uncertainty modeling when training or historical data are not available for analysis. Fuzzy theory facilitates uncertainty analysis when the uncertainty arises from lack of knowledge or fuzziness rather than randomness alone. Classical set theory allows for crisp membership while, on the other hand, fuzzy theory allows for gradual degrees of membership. The approach has been used to analyze uncertainty associated with incomplete and subjective information. Fuzzy sets can be used as a basis for possibility, since a proposition that associates an uncertain parameter to a fuzzy set induces a possibility distribution for this quantity, which provides information about the values that this quantity can assume. Given the set Θ, the possibility is defined on the subsets of Θ and is a set function with values in [0, 1]. A possibility measure Pos is characterized by the property

Pos(∪_{i∈I} A_i) = sup_{i∈I} Pos(A_i)     (1.16)

for any family of subsets of Θ defined by an arbitrary index set I; it is semi-continuous from below. The possibility measure satisfies the following conditions:

Pos(∅) = 0,  Pos(Θ) = 1
Pos(V ∪ W) = max(Pos(V), Pos(W))

where V and W are two non-intersecting sets in Θ.

A necessity measure Nec is characterized by the following property:

Nec(∩_{i∈I} A_i) = inf_{i∈I} Nec(A_i)     (1.18)

for any family of subsets of Θ defined by an arbitrary index set I; it is semi-continuous from above.
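A small sketch of how a possibility distribution over a finite frame induces the possibility and necessity of an event; the weather-related labels and values are hypothetical.

```python
# Possibility distribution pi over a finite frame Theta, with sup pi = 1.
pi = {"clear": 1.0, "light_wind": 0.7, "storm": 0.2}

def pos(event):
    # Pos(A) = sup over theta in A of pi(theta)
    return max(pi[t] for t in event) if event else 0.0

def nec(event):
    # Nec(A) = 1 - Pos(complement of A)
    complement = set(pi) - set(event)
    return 1.0 - pos(complement)

flyable = {"clear", "light_wind"}
print(pos(flyable), nec(flyable))   # 1.0 and 0.8
```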

The essential difference between possibility theory and probability theory is that the probability sum of all non-intersecting events in probability theory is 1, while it may not be 1 in possibility theory. Furthermore, possibility theory may be viewed as a special branch of imprecise probabilities, in which focal elements are always nested. The imprecision in probabilities and utilities is modelled in fuzzy sets through membership functions defined on the sets of possible probabilities and utilities. The theory of fuzzy sets aims to model ambiguity by assigning a degree of membership μ(x) ∈ [0, 1]. The parameter x is represented as a fuzzy number with support A_0; the wider the support of the membership function, the higher the uncertainty.

Definition 1.6 Fuzzy Set. The fuzzy set A in a given, non-empty space X is the set of pairs

A = {(x, μ_A(x)); x ∈ X}

where μ_A : X → [0, 1] is the membership function of the fuzzy set A. This function assigns to each element x ∈ X its membership degree to the fuzzy set A.


Three cases can be distinguished:

1. μ_A(x) = 1 means the full membership of element x in the fuzzy set A, x ∈ A.

2. μ_A(x) = 0 means the lack of membership of element x in the fuzzy set A, x ∉ A.

3. 0 < μ_A(x) < 1 means a partial membership of element x in the fuzzy set A.

Symbolic notation can be used as follows:

A = μ_A(x_1)/x_1 + μ_A(x_2)/x_2 + ... + μ_A(x_n)/x_n

The elements x_i ∈ X may be not only numbers, but also objects or other notions. The fraction line does not symbolize division but means the assignment of membership degrees to particular elements. Similarly, the + sign does not mean addition but the union of sets.

The fuzzy sets theory describes uncertainty in a different sense than the probability theory. The only similarity between the fuzzy sets theory and the probability theory is the fact that both the fuzzy set membership function and the probability take values in the interval [0, 1]. In some applications, standard forms of membership functions are used.

Gaussian Membership Function

μ_A(x) = exp(−((x − x̄)/σ)²)     (1.22)

where x̄ is the middle and σ defines the width of the Gaussian curve.


Membership Function of Class π

It is given by the following equation.

Membership Function of Class t

In some applications, the membership function of class t may be an alternative to the function of class π.

The fuzzy set that contains all elements with a membership of α ∈ [0, 1] and above is called the α-cut of the membership function. At a resolution level of α, it will have support A_α; the higher the value of α, the higher the confidence in the parameter. This makes possibility theory naturally suited for representing evidence in fuzzy sets, since α-cuts of fuzzy sets are also nested. The membership function is cut horizontally at a finite number of α-levels between 0 and 1. For each α-level of the parameter, the model is run to determine the minimum and maximum possible values of the outputs. This information is then directly used to construct the corresponding fuzziness (membership function).
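The Gaussian membership function (1.22) and the α-cut construction can be illustrated numerically; the centre, width and α value below are arbitrary.

```python
import numpy as np

def mu_gauss(x, x_bar=5.0, sigma=1.5):
    # Gaussian membership function: mu(x) = exp(-((x - x_bar) / sigma)^2)
    return np.exp(-((x - x_bar) / sigma) ** 2)

x = np.linspace(0.0, 10.0, 1001)
alpha = 0.5
cut = x[mu_gauss(x) >= alpha]       # support A_alpha of the alpha-cut

print(cut.min(), cut.max())         # approx. 3.75 and 6.25
```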

Evidence theory or Dempster-Shafer evidence

Dempster-Shafer evidence theory is based on the concept that information about a given problem can be imprecise by nature. The basic probability assignment of a belief function of discrete type cannot always provide a precise description of the evidence for all situations. The continuous form of belief function is more appropriate for expressing the vagueness of evidence in many situations. Hence, the bound result that consists of both belief values, credibility (the lower bound of probability) and plausibility (the upper bound of probability), is presented.

Definition 1.7 Rule. A piece of evidence or a rule in a rule-based system is represented by a subset A of the frame of discernment Θ, and a belief function associated with A is represented by a probability density function ρ_A(Θ), where Θ is a variable indicating the degree of truth for the evidence. Ā represents the complement of A, and 1 is used to denote the truth of the evidence.

Unlike possibility or fuzzy set and probability theory, in evidence theory there is no need to make an assumption or approximation for the imprecise information. Epistemic plausibility can be represented in evidence theory via belief functions, where the degrees of belief may or may not have the mathematical properties of probabilities. In comparison to probability theory, instead of using probability distributions to capture system uncertainties, a belief function is used to represent a degree of belief. Probability values are assigned to sets of possibilities rather than to a single event. The basic entity in the Dempster-Shafer theory is a set of exclusive and exhaustive hypotheses about some problem domain: the frame of discernment Θ. A basic belief assignment (BBA) is then a function m : Ψ → [0, 1], where Ψ = 2^Θ is the power set of Θ, i.e., the set of all subsets of Θ. The function m can be interpreted as distributing belief to each subset, with the following criteria satisfied: m(∅) = 0 and Σ_{A⊆Θ} m(A) = 1, where the sum runs over all subsets of Θ.
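A minimal Python sketch of these notions is given below; it uses the standard formulas Bel(A) = Σ_{B⊆A} m(B) for the credibility (lower bound) and Pl(A) = Σ_{B∩A≠∅} m(B) for the plausibility (upper bound). The frame of discernment and the mass values are made-up examples.

```python
# Belief (credibility) and plausibility bounds from a basic belief assignment m.
# The frame of discernment and the mass values are made-up illustrative numbers.
theta = frozenset({"safe", "unsafe"})
m = {
    frozenset({"safe"}): 0.5,
    frozenset({"unsafe"}): 0.2,
    theta: 0.3,                      # mass on the whole frame models ignorance
}

def bel(A):
    """Credibility: total mass of subsets entirely contained in A."""
    return sum(v for B, v in m.items() if B <= A)

def pl(A):
    """Plausibility: total mass of subsets intersecting A."""
    return sum(v for B, v in m.items() if B & A)

A = frozenset({"safe"})
print(bel(A), pl(A))   # 0.5 0.8: lower and upper probability bounds on A
```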

1.4.2.3 Uncertainty in Vehicle Dynamics and Environment Knowledge

Under uncertainty in vehicle dynamics, the future aerial robot configuration cannot be predicted accurately [35]. This could be due to the inherent characteristics of the vehicle dynamics itself or to limited precision in the aerial robot's tracking performance. This type of uncertainty is referred to as uncertainty in the action effect. Under environment knowledge uncertainty, the aerial robot has incomplete or imperfect information about its environment. This could be due to inaccuracy in the a priori map or to imperfect and noisy exteroceptive sensory information that is provided to the aerial robot in order to map the environment.



1.4.3 Nonlinear Control Fundamentals

An aerial robot being represented in general by nonlinear affine models, some notions about nonlinear control are introduced in this paragraph. The nonlinear equivalence of both finite and infinite zero structures of linear systems has been well understood for Single Input Single Output systems [73]. Consider a Multiple Input Multiple Output nonlinear system of the form

ẋ = f(x) + g(x)u
y = h(x)   (1.27)

where x ∈ R^n, u ∈ R^m, y ∈ R^p are the state, input and output, respectively. Let the mappings f, g, h be smooth in an open set U ⊂ R^n containing the origin x = 0, with f(0) = 0 and h(0) = 0. Denote g = [g_1, g_2, …, g_m] and h = col{h_1, h_2, …, h_p}.



Definition 1.8 Relative degree for a SISO system A SISO system, with m = p = 1 in (1.27), has relative degree r at x = 0 if

L_g L_f^k h(x) = 0 for all k < r - 1

in a neighborhood of x = 0 and

L_g L_f^{r-1} h(0) ≠ 0

where L_f and L_g denote the Lie derivatives along f and g.

If system (1.27) has a relative degree r, then on an appropriate set of coordinates in a neighborhood of x = 0, it takes the following normal form:

η̇ = f_0(η, ξ)
ξ̇_1 = ξ_2, …, ξ̇_{r-1} = ξ_r
ξ̇_r = a_1(η, ξ) + b_1(η, ξ)u
y = ξ_1

where ξ = col{ξ_1, ξ_2, …, ξ_r}, b_1(0, 0) ≠ 0 and η̇ = f_0(η, 0) is the zero dynamics. With a state feedback, this normal form reduces to the zero dynamics cascaded with a ‘clean’ chain of integrators; ‘clean’ means that no other signal enters the middle of the chain.
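The relative degree test of Definition 1.8 can be checked symbolically. The SymPy sketch below applies it to a hypothetical second-order pendulum-like SISO system chosen only for illustration, not a model taken from this chapter.

```python
import sympy as sp

# Hypothetical SISO affine system xdot = f(x) + g(x) u, y = h(x).
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -sp.sin(x1)])
g = sp.Matrix([0, 1])
h = x1

def lie(v, phi, x):
    """Lie derivative of the scalar field phi along the vector field v."""
    return (sp.Matrix([phi]).jacobian(x) * v)[0]

# Relative degree r: smallest r such that L_g L_f^(r-1) h is not identically zero.
Lf_h = h
r = None
for k in range(1, 5):
    if sp.simplify(lie(g, Lf_h, x)) != 0:
        r = k
        break
    Lf_h = lie(f, Lf_h, x)
print("relative degree:", r)   # 2 for this example
```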


Definition 1.10 Controllability The system is controllable from x if for any x_f ∈ M, there exists a T > 0 such that x_f ∈ R^M(x, ≤ T), where

R^V(x, ≤ T) = ⋃_{0 ≤ t ≤ T} R^V(x, t)

Any goal state is reachable from x in finite time.

Definition 1.11 Small Time Local Controllability The system is small time locally controllable (STLC) from x if R^V(x, ≤ T) contains a neighborhood of x for all neighborhoods V and all T > 0.

Small Time Local Controllability is of particular interest because it implies that the system can locally maneuver in any direction, and if the system is small time locally controllable at all x ∈ M, then the system can follow any curve on M arbitrarily closely. This allows the system to maneuver through cluttered space, since any motion of a system with no motion constraints can be approximated by a system that is Small Time Locally Controllable everywhere.

Definition 1.12 Controllability Lie algebra The controllability Lie algebra C of an affine control system is defined to be the span, over the ring of smooth functions, of elements of the form

[X_k, [X_{k-1}, [⋯, [X_2, X_1]⋯]]], X_i ∈ {f, g_1, …, g_m}, k = 0, 1, 2, …

that is, Lie(G) is the span of all iterated Lie brackets of vector fields in G. Each of its terms is called a Lie product and the degree of a Lie product is the total number of original vector fields in the product.

Lie products satisfy the Jacobi identity, and this fact can be used to find a Philip Hall basis, a subset of all possible Lie products that spans the Lie algebra. A Lie subalgebra of a module is a submodule that is closed under the Lie bracket. A Lie algebra of vector fields over a manifold M is said to be transitive if it spans the whole tangent space at each point of M.
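Lie brackets of vector fields are also easy to compute symbolically. The SymPy sketch below uses a hypothetical planar unicycle-like example to show a first-order bracket generating a direction outside the span of the input vector fields.

```python
import sympy as sp

# Lie bracket [v, w] = (dw/dq) v - (dv/dq) w for a planar unicycle (x, y, theta).
x, y, th = sp.symbols('x y theta')
q = sp.Matrix([x, y, th])
g1 = sp.Matrix([sp.cos(th), sp.sin(th), 0])   # forward motion
g2 = sp.Matrix([0, 0, 1])                     # rotation

def lie_bracket(v, w, q):
    return w.jacobian(q) * v - v.jacobian(q) * w

b = sp.simplify(lie_bracket(g1, g2, q))
print(b.T)   # [sin(theta), -cos(theta), 0]: a sideways direction not in span{g1, g2}
```

Together with g1 and g2, this bracket spans the whole tangent space at every point of this example's configuration space, so the Lie Algebra Rank Condition holds there.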

Definition 1.13 Accessibility distribution The controllability Lie algebra may be used to define the accessibility distribution as the distribution

C(X) = span{v(X) : v ∈ C}

From the definition of C, it follows that C(X) is an involutive distribution. The controllability Lie algebra is the smallest Lie algebra containing {g_1, …, g_m} which is closed under Lie bracketing. The controllability distribution contains valuable information about the set of states that are accessible from a given starting point X_0.


x, that is, Lie({f, g_1, …, g_m})(x) = T_xM. If f = 0 and, by assumption, the control set is U = R^m, then the system is symmetric and the Lie Algebra Rank Condition also implies small time local controllability.

Definition 1.14 Bad bracket A Lie product term is defined to be a bad bracket if the drift term f appears an odd number of times in the product and each control vector field g_i, i = 1, …, m, appears an even number of times (including zero). A good bracket is any Lie product that is not bad.

Definition 1.15 Reachable set Let R^U(X_0, T) ⊂ R^n be the subset of all states accessible from state X_0 in time T, with the trajectories being confined to a neighborhood U of X_0. This is called the reachable set from X_0.

Theorem 1.2 The system is small time locally controllable at X_0 if:

1. f(X_0) = 0,

2. the Lie Algebra Rank Condition is satisfied by good Lie brackets up to degree k,

3. every bad bracket of degree j ≤ k can be expressed as a linear combination of good brackets of degree < j.

Theorem 1.3 If the aerial robot is represented by the system


If system (1.27) has a relative degree {r_1, r_2, …, r_m} at x = 0, then with an appropriate change of coordinates it can be described by a normal form which contains m clean chains of integrators. Moreover, if the distribution spanned by the column vectors of g(x) is involutive in a neighborhood of x = 0, a set of local coordinates can be selected such that g_0(x) = 0. The clean chains of integrators are called a prime form for linear systems [73].

1.4.4 Graph Theory Fundamentals

Graph theory being extensively used in planning, some definitions are introduced in this paragraph.

Definition 1.16 Directed graph A directed graph G consists of a vertex set V(G) and an edge set E(G) ⊆ V(G) × V(G). For an edge e = (u, v) ∈ E(G), u is called the head vertex of e and v is called the tail vertex.

If (u, v) ∈ E(G) whenever (v, u) ∈ E(G), then the graph is undirected. The graph is simple if there are no edges of the form (u, u) for u ∈ V(G).

Two standard ways exist to represent a graph G = (V, E): as a collection of adjacency lists or as an adjacency matrix. Either way applies to both directed and undirected graphs. For the adjacency-matrix representation of a graph G = (V, E), the vertices are numbered 1, 2, …, |V|, where |V| represents the size of V.

Definition 1.17 Adjacency matrix The adjacency-matrix representation of a graph G = (V, E) consists of a |V| × |V| matrix A = (a_ij) such that

a_ij = 1 if (i, j) ∈ E, and a_ij = 0 otherwise.

Remark 1.1 The adjacency matrix A of an undirected graph is its own transpose: A = A^T. An adjacency matrix can also represent a weighted graph. When the graph is dense, an adjacency-matrix representation is preferred. When the graph is sparse, the adjacency list is preferred.
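Both representations are straightforward to build. The Python sketch below uses a small made-up directed graph on three vertices, following the numbering convention 1, 2, …, |V| used above.

```python
import numpy as np

# Adjacency matrix and adjacency list of a made-up directed graph.
n = 3
edges = [(1, 2), (2, 3), (3, 1)]

# Adjacency matrix: entry (i, j) is 1 if (i, j) is an edge (vertices numbered from 1).
A = np.zeros((n, n), dtype=int)
for (u, v) in edges:
    A[u - 1, v - 1] = 1

# Adjacency list: one list of out-neighbours per vertex.
adj = {u: [] for u in range(1, n + 1)}
for (u, v) in edges:
    adj[u].append(v)

print(A)
print(adj)   # {1: [2], 2: [3], 3: [1]}
```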



Definition 1.18 Indegree The indegree of a vertex u is the number of edges that have u as their head vertex. The indegree matrix of a graph G, denoted D(G), is a diagonal matrix of size |V(G)| × |V(G)| whose diagonal entry d_ii is the indegree of vertex i, all off-diagonal entries being zero.

Definition 1.19 Outdegree The outdegree of a vertex u for a graph G is defined analogously, as the number of edges having u as the tail vertex.

Definition 1.20 Laplacian matrix Given a graph G, the Laplacian matrix associated with it is given by

L(G) = D(G) - A_d(G)

where D(G) is the indegree matrix and A_d(G) is the adjacency matrix associated with the graph. The diagonal entry d_ii of the Laplacian matrix is then the indegree of vertex i and the negative entries in row i correspond to the neighbors of vertex i.

Let G be a graph representing the communication links between a team of aerial robots. The properties of such a communication graph are captured by its adjacency matrix A_d(G). For any communication graph considered, the row sums of the corresponding Laplacian matrix are always zero, and hence the all-ones vector is always an eigenvector corresponding to the zero eigenvalue of any Laplacian matrix considered. In the case of undirected graphs, the Laplacian matrix is symmetric and all its eigenvalues are non-negative. The smallest eigenvalue of the Laplacian, λ_1, is zero and its multiplicity equals the number of connected components of the graph. The second eigenvalue λ_2 is directly related to the connectivity of the graph. This is also the case for directed graphs. One property of the eigenvalues of the Laplacian matrix of such a communication graph can be derived from the Gershgorin theorem [50]. Since all diagonal entries of the Laplacian are the indegrees of the corresponding vertices, it follows that all its eigenvalues are located in the disc centered at d = max_i(indegree(i)) of radius d, so for any eigenvalue λ of the Laplacian, |λ| ≤ 2d.
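These spectral properties are easy to verify numerically. The sketch below builds the Laplacian of a hypothetical four-robot communication graph (an undirected path) and inspects its two smallest eigenvalues.

```python
import numpy as np

# Hypothetical undirected communication graph on four aerial robots (a path).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))   # (in)degree matrix
L = D - A                    # Laplacian matrix

eig = np.sort(np.linalg.eigvalsh(L))
print(eig[0])   # smallest eigenvalue: 0 up to round-off, eigenvector of all ones
print(eig[1])   # lambda_2 > 0 if and only if the graph is connected
```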

Definition 1.21 Connectivity The edge (vertex) connectivity of a graph G is the smallest number of edge (vertex) deletions sufficient to disconnect G.

There is a close relationship between both quantities. The vertex connectivity is always no larger than the edge connectivity, since deleting one vertex incident on each edge in a cut set succeeds in disconnecting the graph. Of course, smaller vertex subsets may be possible. The minimum vertex degree is an upper bound on both the edge and vertex connectivity, since deleting all its neighbors (or the edges to all its neighbors) disconnects the graph into one big component and one single-vertex component. Communication graph connectivity generally refers to the vertex connectivity of the defined communication graph.
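Edge and vertex connectivity can be computed with standard graph libraries. The sketch below relies on NetworkX routines, applied to a made-up communication graph (a five-vertex cycle with one chord).

```python
import networkx as nx

# Hypothetical communication graph: a 5-cycle with one additional chord.
G = nx.cycle_graph(5)
G.add_edge(0, 2)

print(nx.node_connectivity(G))   # vertex connectivity
print(nx.edge_connectivity(G))   # edge connectivity, never smaller than the vertex connectivity
```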


Definition 1.22 Directed path Given a directed graph G, P = (u_1, …, u_k) is a directed path in G if for every 1 ≤ i < k, there exists an edge (u_i, u_{i+1}) ∈ E(G).

Definition 1.23 Tree A tree is a directed graph with no cycles, in which a node may have several outgoing edges, each of which leads to another node, called a child. A child node can in turn have other children. All nodes, excluding the root, have one and only one incoming edge, which comes from the parent. The root has no parent.

Definition 1.24 Rooted Directed Spanning Tree Given a directed graph G, T is called a rooted directed spanning tree of G if T is a subgraph of G, it has no cycles and there exists a directed path from at least one vertex, the root, to every other vertex in T.

Definition 1.25 Clique A clique in an undirected graph G = (V, E) is a subset V′ ⊆ V of vertices, each pair of which is connected by an edge in E; it is a complete subgraph of G. The size of a clique is the number of vertices it contains.

The clique problem is the optimization problem of finding a clique of maximum size in a graph. A naive algorithm for determining whether a graph G = (V, E) with |V| vertices has a clique of size k is to list all k-subsets of V and check each one to see whether it forms a clique.

Definition 1.26 Matroid A matroid is an ordered pair M = (S, I) satisfying the following conditions:

1. S is a finite set.

2. I is a nonempty family of subsets of S, called the independent subsets, such that if B ∈ I and A ⊆ B, then A ∈ I. I is said to be hereditary; in particular, the empty set ∅ is necessarily a member of I.

3. If A ∈ I, B ∈ I and |A| < |B|, then there exists some element x ∈ B − A such that A ∪ {x} ∈ I. M satisfies the exchange property.

Definition 1.27 Graphic Matroid The graphic matroid M_G = (S_G, I_G) is defined in terms of a given undirected graph G = (V, E) as follows:

1. The set S_G is defined to be E, the set of edges of G.

2. If A is a subset of E, then A ∈ I_G if and only if A is acyclic, that is, the set of edges A is independent if and only if the subgraph G_A = (V, A) forms a forest.
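Independence in the graphic matroid reduces to an acyclicity test, which a union-find structure performs efficiently. The Python sketch below checks a hypothetical four-vertex edge set.

```python
# Graphic-matroid independence test: an edge set A is independent iff (V, A)
# is acyclic, which is detected with a union-find (disjoint-set) structure.
def is_independent(num_vertices, edge_subset):
    parent = list(range(num_vertices))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    for (u, v) in edge_subset:
        ru, rv = find(u), find(v)
        if ru == rv:          # adding (u, v) would close a cycle
            return False
        parent[ru] = rv
    return True

print(is_independent(4, [(0, 1), (1, 2), (2, 3)]))   # True: a forest
print(is_independent(4, [(0, 1), (1, 2), (2, 0)]))   # False: contains a cycle
```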

Definition 1.28 Hierarchical product graph Given n graphs G_1, …, G_n, the graph G = G_1 × G_2 × ⋯ × G_n is called their hierarchical product graph if the vertices of G_{i+1} are replaced by a copy of G_i such that only the vertex labeled 1 from each G_i replaces each of the vertices of G_{i+1}, ∀i ∈ [1, n − 1].

Definition 1.29 Sum graph Given graphs G_1, …, G_n with vertex sets V_i = V(G_i) = {1_{G_i}, …, n_{G_i}}, distinct ∀i, and edge sets E_i = E(G_i), a graph G is called their sum graph G_1 + ⋯ + G_n if there exists a map f : V(G) → ⋃_{i=1}^{n} V_i such that

1. V(G) = ⋃_{i=1}^{n} V_i

