HANOI UNIVERSITY OF SCIENCE AND TECHNOLOGY
Department: Department of Software Engineering
Institute: School of Information and Communication Technology
Hanoi, 2022
Declaration of Authorship and Topic Sentences
1. Personal information
Full name: La Van Quan
Phone:
Supervisor: Dr. Nguyen Phi Le
I would like to thank my supervisor, Dr. Nguyen Phi Le, for her continued support and guidance throughout the course of my Master's studies. She has been a great teacher and mentor for me since my undergraduate years, and I am proud to have completed this thesis under her supervision.

I want to thank my family and my friends, who have given me their unconditional love and support to finish my Master's studies.

Finally, I would like to again thank Vingroup and the Vingroup Innovation Foundation, who have supported my studies through their Domestic Master/Ph.D. Scholarship program.
Parts of this work were published in the paper "Q-learning-based, Optimized On-demand Charging Algorithm in WRSN" by La Van Quan, Phi Le Nguyen, Thanh-Hung Nguyen, and Kien Nguyen, in the Proceedings of the 19th IEEE International Symposium on Network Computing and Applications, 2020.
La Van Quan was funded by Vingroup Joint Stock Company and supported
by the Domestic Master/Ph.D. Scholarship Programme of Vingroup Innovation Foundation (VINIF), Vingroup Big Data Institute, code VINIF.2020.ThS.BK.03.
In recent years, Wireless Sensor Networks (WSNs) have attracted great attention worldwide. WSNs consist of sensor nodes deployed over a surveillance area to monitor and control the physical environment. In WSNs, every sensor node needs to perform several important tasks, two of which are sensing and communication. Each time these tasks are performed, the sensor loses energy; therefore, some sensor nodes may die. A sensor node is considered dead when it runs out of energy. Correspondingly, the lifetime of a WSN is defined as the time from the start of operation until a sensor dies [1]. Thus, one of the important issues in improving the quality of WSNs is maximizing the network lifetime.
In classical WSNs, sensor nodes have a fixed energy budget that degrades over time. The limited battery capacity of the sensors is a "bottleneck" that greatly affects the lifetime of the network. To solve this problem, Wireless Rechargeable Sensor Networks (WRSNs) were born. WRSNs include sensors equipped with rechargeable batteries and one or more Mobile Chargers (MCs) responsible for replenishing the sensors' power. In WRSNs, MCs move around the network, stopping at specific locations (called charging sites) and charging the sensors. Thus, it is necessary to find a charging route for the MC that improves the lifetime of WRSNs [2], [3].
Keywords: Wireless Rechargeable Sensor Network, Fuzzy Logic, Reinforcement Learning, Q-Learning, Network Lifetime
Author
La Van Quan
Contents

1 Introduction
1.1 Problem overview
1.2 Thesis contributions
1.3 Thesis structure
2 Theoretical Basis
2.1 Wireless Rechargeable Sensor Networks
2.2 Q-learning
2.3 Fuzzy Logic
3 Literature Review
3.1 Related Work
3.2 Problem definition
4 Fuzzy Q-charging algorithm
4.1 Overview
4.2 State space, action space and Q table
4.3 Charging time determination
4.4 Fuzzy logic-based safe energy level determination
4.4.1 Motivation
4.4.2 Fuzzification
4.4.3 Fuzzy controller
4.4.4 Defuzzification
4.5 Reward function
4.6 Q table update
5 Experimental Results
5.1 Impacts of parameters
5.1.1 Impacts of
5.1.2 Impacts of
5.2 Comparison with existing algorithms
5.2.1 Impacts of the number of sensors
5.2.2 Impacts of the number of targets
5.2.3 Impacts of the packet generation frequency
5.2.4 Non-monitored targets and dead sensors over time
List of Figures

2.1 A wireless sensor network
2.2 A sensor structure
2.3 Network model
2.4 Q-learning overview
3.1 Network model
4.1 The flow of the Fuzzy Q-learning-based charging algorithm
4.2 Illustration of the Q-table
4.3 Fuzzy input membership functions
4.4 Fuzzy output membership function
5.1 Impact of on the network lifetime
5.2 Impact of on the network lifetime
5.3 Network lifetime vs. the number of sensors
5.4 Network lifetime vs. the number of targets
5.5 Network lifetime vs. the packet generation frequency
5.6 Comparison of non-monitored targets over time
5.7 Comparison of dead sensors over time
List of Tables

4.1 Input variables with their linguistic values and corresponding membership function
4.2 Output variable with its linguistic values and membership function
4.3 Fuzzy rules for safe energy level determination
4.4 Inputs of linguistic variables
4.5 Fuzzy rules evaluation
5.1 System parameters
Chapter 1

Introduction

1.1 Problem overview

…a sensor node, aiming to guarantee both the target coverage and connectivity.
In normal operation, the MC moves around the network and performs charging strategies, which can be classified into periodic [9]–[12] or on-demand charging [2], [13]–[18]. In the former, the MC, with a predefined trajectory, stops at charging locations to charge the nearby sensors' batteries. In the latter, the MC moves and charges upon receiving requests from sensors whose remaining energy is below a threshold. The periodic strategy is limited since it cannot adapt to the dynamics of the sensors' energy consumption rates. On the contrary, the on-demand charging approach can potentially deal with the uncertainty of the energy consumption rate. Since a sensor with a draining battery triggers the on-demand operation, the MC's charging strategy faces a new time-constraint challenge. The MC needs to handle two crucial issues: deciding the next charging location and the staying period at that location.

Although numerous, the existing on-demand charging schemes in the literature face two serious problems. The first is that they assign the same role to all sensor nodes in a WRSN. That is somewhat unrealistic since, intuitively, several sensors, depending on their locations, impact the target coverage and the connectivity more significantly than others. Hence, the existing charging schemes may enrich unnecessary sensors' power while letting necessary ones run out of energy, leading to the charging algorithms' inefficiency. It is of great importance to take the target coverage and connectivity into account simultaneously. The second problem concerns the MC's charging amount, which is either a full capacity (of the sensor battery) or a fixed amount of energy. The former case may cause: 1) a long waiting time for other sensors staying near the charging location; 2) quick exhaustion of the MC's energy. In contrast, charging too small an amount to a node may leave it without enough power to operate until the next charging round. Therefore, the charging strategy should adjust the transferred energy level dynamically following the network condition.
1.2 Thesis contributions
Motivated by the above, this thesis proposes a novel on-demand charging scheme for WRSNs that assures the target coverage and connectivity and adjusts the energy level charged to the sensors dynamically. My proposal, named Fuzzy Q-charging, aims to maximize the network lifetime, which is the time until the first target is no longer monitored. First, this work exploits Fuzzy logic in an optimization algorithm that determines the optimal charging time at each charging location, aiming to maximize the numbers of alive sensors and monitored targets. Fuzzy logic is used to cope with network dynamics by taking various network parameters into account when determining the optimal charging time. Second, this thesis leverages the Q-learning technique in a new algorithm that selects the next charging location so as to maximize the network lifetime. The MC maintains a Q-table containing the charging locations' Q-values, which represent the charging locations' goodness. The Q-values are updated in a real-time manner whenever there is a new charging request from a sensor. I design the Q-value to prioritize charging locations at which the MC can charge a node depending on its critical role. After finishing tasks at one location, the MC chooses the next one with the highest Q-value and determines an optimal charging time there. The main contributions of this thesis are as follows.
• This thesis proposes a Fuzzy logic-based algorithm that determines the energy level to be charged to the sensors. The energy level is adjusted dynamically following the network condition.

• Based on the above algorithm, this thesis introduces a new method that optimizes the charging time at each charging location. It considers several parameters (i.e., remaining energy, energy consumption rate, and sensor-to-charging-location distance) to maximize the number of alive sensors.

• This thesis proposes Fuzzy Q-charging, which uses Q-learning in its charging scheme to guarantee the target coverage and connectivity. Fuzzy Q-charging's reward function is designed to maximize the amount charged to essential sensors and the number of monitored targets.
1.3 Thesis structure
The rest of this thesis is organized as follows.

• Chapter 2 introduces the theoretical basis of this study, including an overview of WSNs and WRSNs, Q-learning, and Fuzzy logic.

• Chapter 3 reviews the previous works on the charging scheme optimization problem and defines the problem studied in this thesis.

• Chapter 4 presents the proposed algorithm, which combines Fuzzy logic and Q-learning.

• Chapter 5 evaluates and compares the performance of the proposed algorithm with existing research.

• The last chapter concludes the thesis and discusses future work.
Chapter 2
Theoretical Basis
2.1 Wireless Rechargeable Sensor Networks
A Wireless Sensor Network (WSN) is a network that consists of several spatially distributed and specialized sensors connected by a communications infrastructure to monitor and record physical conditions in a variety of situations. The sensors in WSNs collectively convey the sensing data to the Base Station (BS), also known as a sink, where it is gathered, processed, and/or acted upon as needed. A typical WSN connecting with end-users is shown in Fig. 2.1.
Sensors play an important role in a sensor network. To monitor the physical environment and communicate with others efficiently, sensors have many requirements. They not only need to record their surroundings accurately and precisely and be capable of computing, analyzing, and storing the sensing data, but they also have to be small in size, low in cost, and efficient in power consumption.

Sensors commonly comprise four fundamental units: a sensing unit, which monitors the environment and converts analog signals into digital signals; a processing unit, which processes the digital data and stores it in memory; a transceiver unit, which provides communication capability; and a power unit, which supplies energy to the sensor [1]. In addition, some sensors also have a navigation system to determine the positions of themselves, other sensors, and sensing targets, a mobilizer to add mobility, etc. A sensor structure is shown in Fig. 2.2.

WSNs have a wide range of applications in a variety of fields. They were first deployed in the 1950s as part of a sound surveillance system designed by the US Navy to detect and track Soviet submarines. WSNs are now used in a variety of civilian applications, including environmental monitoring, health monitoring, smart agriculture, and so on. A WSN can be used in a forest, for example, to alert authorities to the risk of a forest fire. Furthermore, a WSN can track the location of a fire before it has a chance to expand out of control. WSNs also have a lot of potential in the area of health monitoring. Scientists, for example, have developed a WSN-based sugar monitoring device. The system can record the fluctuation rate of glucose in
Figure 2.1: A wireless sensor network
Figure 2.2: A sensor structure
diabetes patients' blood and warn them in time.
Despite its many benefits, a WSN has several limits. Because a WSN must maintain its low-cost characteristic, some features must be eliminated. A sensor in a WSN, for example, has a low-capacity battery that is often non-rechargeable. Battery replacements are impossible to obtain if WSNs are installed in remote terrain. When a sensor's battery dies, it can no longer record, send data, or communicate, causing the network to malfunction. Furthermore, WSN sensors are tiny, resulting in limited computing and storage capabilities. Although there are many challenges with WSNs, such as communication range, signal attenuation, security, and so on, this thesis focuses on the energy difficulty as one of the most important. As a result, avoiding the sensor nodes' energy depletion has gotten a lot of interest from researchers and network users all around the world.
Figure 2.3: Network model
Many efforts have been made to reduce the energy usage of WSNs. Researchers have attempted to optimize radio signals using cognitive radio standardization, lower the data rate using data aggregation, save more energy using sleep/wake-up schemes, and pick energy-efficient routing protocols. However, none of them completely solves the energy problem of the sensor nodes in WSNs. The battery will ultimately run out if there is no external source of electricity for the sensors. Gathering energy from the environment is another way to overcome the sensor's energy depletion problem. Each sensor has an energy harvester that converts power from external sources such as solar, thermal, wind, kinetic, and other forms of energy into electrical power. Sensors can use the converted power to recharge their batteries, extending the network's lifespan. However, this strategy is overly reliant on an unstable and uncontrolled ambient energy supply.
In recent years, thanks to advancements in wireless energy transfer and rechargeable battery technology, a recharging device can be used to recharge the batteries of sensors in WSNs. As a result, WRSNs, a new generation of sensor networks, were born (Fig. 2.3). The sensor nodes in WRSNs are equipped with a wireless energy receiver based on electromagnetic radiation and magnetic resonant coupling technology, giving them an edge over standard WSNs. WRSNs use one or more chargers to recharge sensor nodes on a regular basis. As a result, the lifetime of the network can be prolonged for perpetual operation. WRSNs are easier to maintain for long-term and reliable operation than typical WSNs because they provide a more flexible, customizable, and dependable energy replenishment.
A new network generation, on the other hand, brings new issues that have never been faced before. WRSNs, in particular, necessitate a charger deployment approach. Charging terminals and MCs are the two types of chargers available. A charging terminal is a device that has a fixed placement in the network and can recharge many sensors. Because the network scale is normally large, a significant number of charging terminals would be required. Figuring out how many charging terminals are needed to recharge all of the sensors is analogous to the coverage issue in WSNs. Furthermore, a charging terminal does not have an infinite energy supply, thus it will need to be recharged after some time. As a result, the charging terminal is not a dependable device for long-term operation. It is a better idea to use MCs to recharge sensors. An MC can travel through the network, allowing it to cover a large region. If it runs out of power, it returns to the BS to replenish its battery. As a result, the only issue is that we need to figure out the MC's charging strategy.
2.2 Q-learning

The standard Q-learning framework consists of four components: an environment, one or more agents, a state space, and an action space, as shown in Fig. 2.4. The Q-value represents the approximate goodness of an action with respect to the agent's goal. An agent chooses actions according to its policy and the Q-values. After performing an action, the agent modifies its policy to attain its goal. The Q-value is updated using the Bellman equation as follows:

Q(s, a) ← Q(s, a) + α (r + γ max_{a′} Q(s′, a′) − Q(s, a)),   (2.1)

where α is the learning rate, γ is the discount factor, r is the obtained reward, and s′ is the next state.
An explicit procedure implementing the Q-learning algorithm is provided in Algorithm 1.
Figure 2.4: Q-learning overview

Algorithm 1: Q-Learning Algorithm
1. Initialize Q(s, a);
2. for each episode do
3.   Get initial state s;
4.   Select a using the policy derived from Q;
5.   Take action a, observe the next state s′ and obtain reward r;
6.   Update Q(s, a) via the Bellman equation and set s ← s′;
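The loop of Algorithm 1 can be made concrete with a short, runnable sketch. The tiny chain-walk environment, the reward of 1 at the goal state, and the constants ALPHA, GAMMA, and EPSILON are illustrative assumptions, not part of the thesis; only the update line itself follows Equation (2.1).

```python
import random

random.seed(0)

N_STATES, N_ACTIONS = 5, 2          # actions: 0 = step left, 1 = step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(s, a):
    """Chain environment: reaching the last state yields reward 1."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]     # 1. initialize Q(s, a)

for episode in range(200):                           # 2. for each episode
    s = 0                                            # 3. get initial state s
    while s != N_STATES - 1:
        if random.random() < EPSILON:                # 4. epsilon-greedy policy
            a = random.randrange(N_ACTIONS)          #    derived from Q
        else:
            a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
        s2, r = step(s, a)                           # 5. act, observe s', r
        # 6. Bellman update, Equation (2.1)
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
```

After training, the greedy policy at every non-terminal state points right, toward the rewarded end of the chain.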
2.3 Fuzzy Logic

In Boolean logic, a classic logical statement is a declarative sentence that delivers factual information. If the information is correct, the statement is true; if the information is erroneous, the statement is false. However, sometimes true or false values are not enough.
Lotfi Zadeh [7] coined the term "fuzzy logic" in the 1960s to describe a type of logic processing that contains more than two truth values. The fact that some assertions contain imprecise or non-numerical information motivates fuzzy logic. The term "fuzzy" is also used to describe ambiguous and unclear information. As a result, fuzzy logic can describe and manipulate ambiguous and uncertain data, and it has been used in a variety of industries.

Following the fuzzy method, fuzzy logic uses particular input values, such as multi-numeric values or linguistic variables, to produce a specific output. The fuzzy technique determines whether an object fully or partially possesses a property, even if the property is ambiguous. For example, the term "extremely strong engine" is based on the fuzzy method: there are hidden degrees of intensity ("very") of the trait in question ("strong").
A fuzzy logic system consists of three components: fuzzification, a fuzzy logic controller, and defuzzification. The first component converts the crisp values of the variables into their fuzzy form using membership functions. The second one is responsible for simulating the human reasoning process by making fuzzy inferences based on the inputs and a set of defined IF-THEN rules. This module can itself be separated into two subcomponents, namely the Knowledge Base and the Inference Engine. The Knowledge Base is a set of specifically designed rules so that, together with the input states of the variables, they produce consistent results. Each rule has the form "IF {set of inputs} THEN {output}". More explicitly, a fuzzy rule R_i with k inputs and one output has the following form:

R_i: IF (I_1 is A_i1) ⊗ (I_2 is A_i2) ⊗ … ⊗ (I_k is A_ik) THEN (O is B_i),   (2.2)

where {I_1, …, I_k} are the crisp inputs to the rule, {A_i1, …, A_ik} and B_i are linguistic variables, and the operator ⊗ can be AND, OR, or NOT. The Inference Engine is in charge of estimating the fuzzy output set. It calculates the membership degree (μ) of the output for all linguistic variables by applying the rule set described in the Knowledge Base. For fuzzy rules with many inputs, the output calculation depends on the operators used inside them, i.e., AND, OR, or NOT. Typically, AND is evaluated as the minimum, OR as the maximum, and NOT as the complement (1 − μ) of the membership degrees involved.
The last component, defuzzification, converts the fuzzy output set into a crisp value. A widely used method is the Center of Gravity (CoG):

CoG(B) = ( ∫_{−∞}^{+∞} μ_B(z) · z dz ) / ( ∫_{−∞}^{+∞} μ_B(z) dz ),   (2.3)

where μ_B(z) is the output membership function of the linguistic variable B.
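The three stages — fuzzification, rule-based inference, and CoG defuzzification — can be sketched end to end. The triangular membership functions, the single input variable, and the two IF-THEN rules below are illustrative assumptions standing in for the thesis's actual rule base; only the min/max rule evaluation and the discretized form of Equation (2.3) follow the text above.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# 1) Fuzzification: crisp input -> membership degrees of linguistic values.
def fuzzify(e):                        # e: normalized residual energy in [0, 1]
    return {"low": tri(e, -0.5, 0.0, 0.5), "high": tri(e, 0.5, 1.0, 1.5)}

# 2) Inference Engine: two toy IF-THEN rules.
#    IF energy is low THEN output is high; IF energy is high THEN output is low.
def infer(e):
    mu = fuzzify(e)
    return {"low": mu["high"], "high": mu["low"]}   # strength per output set

# 3) Defuzzification: discrete Center of Gravity, Equation (2.3).
def defuzzify(strength, n=101):
    num = den = 0.0
    for i in range(n):
        z = i / (n - 1)
        # Clip each output set by its rule strength; aggregate with max.
        mu_b = max(min(strength["low"], tri(z, -0.5, 0.0, 0.5)),
                   min(strength["high"], tri(z, 0.5, 1.0, 1.5)))
        num += mu_b * z
        den += mu_b
    return num / den if den else 0.0
```

A nearly depleted input (e = 0.1) defuzzifies to a high crisp output, while a well-charged one (e = 0.9) maps low.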
Chapter 3

Literature Review

3.1 Related Work

…an MC cannot fulfill all sensors' demand in dense networks; W. Xu et al. in [11] introduce a multi-charger approximation model to increase the charging speed. In [12], C. Lin et al. derive a new energy transfer model with distance and angle factors. They also consider the problem of minimizing the total charging delay for all nodes; they use linear programming and obtain the optimal solution. As the charging schedule is always fixed, the periodic scheme fails to adapt to the dynamics of the sensors' energy consumption.
Regarding on-demand charging, the authors in [17] address the node failure problem. They first propose to choose the next charging node based on a charging probability. Second, they introduce a charging node selection method that minimizes the number of other requesting nodes suffering from energy depletion. In [2, 14], aiming to maximize the charging throughput, the authors propose a double warning threshold charging scheme: two dynamic warning thresholds are triggered depending on the residual energy of the sensors. The authors in [18] studied how to optimize the serving order of the charging requests waiting in the queue using the gravitational search algorithm. In [16], X. Cao et al. introduce a new metric (i.e., the charging reward), which quantifies the charging scheme's quality. The authors then address the problem of maximizing the total reward in each charging tour under the constraints of the MC's energy and the sensors' charging time windows. They use a deep reinforcement learning-based on-demand charging algorithm to solve the addressed problem.
The existing charging algorithms have two serious problems. First, the charging time problem has not been thoroughly considered. Most of the charging schemes leverage either the full charging approach [1, 2, 9, 10, 13, 14, 17] or the partial charging one [21]. I want to emphasize that the charging time is an essential factor that decides how much a charging algorithm can prolong the network lifetime. Moreover, there is no existing work considering the target coverage and connectivity constraints concurrently. Most previous works treat all sensors in WRSNs evenly; hence, the MC may charge unnecessary sensors while necessary ones may run out of energy. Unlike them, this work addresses the target coverage and connectivity constraints in charging schedule optimization. This thesis uniquely considers the optimization of the charging time and the charging location simultaneously. I use Fuzzy logic and Q-learning in my proposal.
Fuzzy logic has been applied in many fields such as signal processing [22, 23], robotics [24], and embedded controllers [25]. In WSNs, Fuzzy logic is a promising technique for dealing with various problems, including localization, routing [26, 27], clustering [19], and data aggregation [28, 29]. R. M. Al-Kiyumi et al. in [26] propose a Fuzzy logic-based routing scheme for lifetime enhancement in WSNs, which maps the network status into corresponding cost values to calculate the shortest path. In [20], the authors also leverage Fuzzy logic and Q-learning, but in a cooperative multi-agent system for controlling the energy of a microgrid. In [30], Fuzzy logic and Q-learning are combined to address the problem of thermal unit commitment. Specifically, each input state vector is mapped with the Fuzzy rules to determine all the possible actions with corresponding Q-values. Recently, the authors in [15] use Fuzzy logic in an algorithm for adaptively determining the charging threshold and deciding the charging schedule. Different from the others, I use Fuzzy logic and Q-learning in my unique Fuzzy Q-charging proposal. An earlier version of this work has been published in [31], which considers only Q-charging.
3.2 Problem definition
Figure 3.1 shows the considered network model, in which a WRSN monitors several targets. The network has three main components: an MC, sensor nodes, and a base station. The MC is a robot that can move and carries a wireless power charger. The sensor nodes can receive charged energy from the MC via a wireless medium. The base station is static and responsible for gathering sensing information. We assume that there are n sensors S_j (j = 1, …, n) and m targets T_k (k = 1, …, m). We call a sensor a target-covering sensor if it covers at least one target. Moreover,
Figure 3.1: Network model
if there exists an alive routing path between a sensor and the base station, the sensor is connected to the base station. A target is defined to be monitored when at least one sensor connected to the base station covers it.
A sensor node whose remaining energy is below E_th (i.e., a predefined threshold) sends a charging request to the MC. We target a non-preemptive charging schedule, in which charging requests from sensors are queued at the MC. We assume that there are l charging locations denoted by D_1, …, D_l in the network. When the MC completes its tasks at a charging location, it runs the proposed algorithm to select the next optimal charging location from D_1, …, D_l. Moreover, the MC also determines the optimal charging time at that charging location. When the energy of the MC goes below a threshold, it returns to the depot to recharge itself. Besides gathering the sensing information, the base station is also responsible for collecting information about the sensors' remaining energy. Based on that, the MC estimates every sensor's energy consumption rate using a weighted averaging method. Given the locations of all sensors and targets, the on-demand charging algorithm aims to maximize the number of monitored targets.
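The definition of a monitored target above combines coverage with base-station connectivity, which can be checked with a breadth-first search over alive sensors. The coordinates, the sensing range r_sense, and the communication range r_comm below are illustrative assumptions, not values from the thesis.

```python
from collections import deque

def dist2(p, q):
    """Squared Euclidean distance between two points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def monitored_targets(sensors, targets, base, r_sense, r_comm):
    """sensors: {name: ((x, y), alive)}; targets: {name: (x, y)}."""
    alive = {n: pos for n, (pos, ok) in sensors.items() if ok}
    # BFS from the base station over alive sensors within communication range.
    connected, queue = set(), deque([base])
    while queue:
        u = queue.popleft()
        for n, pos in alive.items():
            if n not in connected and dist2(pos, u) <= r_comm ** 2:
                connected.add(n)
                queue.append(pos)
    # A target counts as monitored when a base-connected sensor covers it.
    return {t for t, loc in targets.items()
            if any(dist2(alive[n], loc) <= r_sense ** 2 for n in connected)}

sensors = {"S1": ((5, 0), True), "S2": ((14, 0), True), "S3": ((30, 0), True)}
targets = {"T1": (7, 0), "T2": (16, 0), "T3": (32, 0)}
print(monitored_targets(sensors, targets, (0, 0), r_sense=5, r_comm=10))
```

In the sample topology, S3 covers T3 but has no routing path to the base station, so T3 is not monitored even though it is covered.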
Chapter 4
Fuzzy Q-charging algorithm
4.1 Overview
This thesis follows the on-demand charging strategy, in which a sensor sends a charging request to the MC when its energy falls below a predefined threshold E_th. The request is inserted into the waiting list at the MC. The MC then performs the following procedures to update the Q-table.
• The MC leverages Fuzzy logic to calculate a so-called safe energy level, which is sufficiently higher than E_th. The MC then uses the algorithm described in Section 4.3 to determine the charging time at each charging location. The charging time is optimized to maximize the number of sensors that reach the safe energy level.

• The MC calculates the reward of every charging location using (4.9) and updates the Q-table using Equation (4.1).
After finishing charging at a charging location, the MC selects the next charging location as the one with the highest Q-value. Finally, the MC moves to the next charging location and charges for the determined charging time. When the energy of the MC goes below a threshold, it returns to the depot to recharge itself. Figure 4.1 presents an overview of the charging algorithm.
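The selection step above — move to the charging location with the highest Q-value in the current row of the Q-table — can be sketched as follows. The three locations and their Q-values are illustrative assumptions.

```python
# Q[current][candidate]: learned goodness of moving from the current
# charging location to a candidate one (cf. the Q-table of Figure 4.2).
Q = {"D1": {"D1": 0.0, "D2": 0.7, "D3": 0.4},
     "D2": {"D1": 0.2, "D2": 0.0, "D3": 0.9},
     "D3": {"D1": 0.5, "D2": 0.3, "D3": 0.0}}

def next_location(current):
    """Greedy choice: the candidate with the highest Q-value in this row."""
    return max(Q[current], key=Q[current].get)

print(next_location("D1"))   # -> D2
```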
4.2 State space, action space and Q table
In the Q-learning-based model, the network is considered the environment while the MC is the agent. A state is defined by the current charging location of the MC, and an action is a move to the next charging location. The MC maintains its own Q-table, which is a two-dimensional array. Each row represents a state, and each column represents an action. An item Q(D_j, D_i) in the j-th row and i-th column represents the Q-value corresponding to the action in which the MC moves from the current charging location D_j to the next charging location D_i. Figure 4.2 shows
Figure 4.1: The flow of the Fuzzy Q-learning-based charging algorithm
Figure 4.2: Illustration of the Q-table
an illustration of the Q-table. In the figure, the gray row represents the Q-values concerning all possible actions when the MC stays at the charging location D_c. The green cell depicts the maximum Q-value regarding the next charging location.
Let D_c be the current charging location and D_i be an arbitrary charging location; then the Q-value of the action of moving from D_c to D_i is iteratively updated using the Bellman equation as follows:

Q(D_c, D_i) ← Q(D_c, D_i) + α ( r(D_i) + γ max_{1≤j≤l} Q(D_i, D_j) − Q(D_c, D_i) ),   (4.1)

where α is the learning rate and γ is the discount factor. The equation's right side consists of two elements: the current Q-value and the temporal difference. The temporal difference measures the gap between the estimated target, i.e., r(D_i) + γ max_{1≤j≤l} Q(D_i, D_j), and the old Q-value, i.e., Q(D_c, D_i).
I present the details of the reward function and the mechanism for updating the Q-table in Sections 4.5 and 4.6.
4.3 Charging time determination
This work aims to design a charging strategy such that, after each charging round, the number of sensors reaching a safe energy level is as large as possible. Here, the safe energy level means an energy amount that is sufficiently greater than E_th. The safe energy level, E_sf, is determined by the Fuzzy logic-based procedure described in Section 4.4.
A sensor node has critical status if its remaining energy is smaller than E_sf; such a sensor is called a critical sensor. Otherwise, a sensor node is called a normal sensor. For each charging location D_i (1 ≤ i ≤ l), we want to determine the optimal charging time T_i that minimizes the number of critical sensors.
We adopt the multi-node charging model, in which the MC can simultaneously charge all sensors. According to [32], the per-second energy p_ij that a sensor S_j receives when the MC stays at D_i is given by

p_ij = α / ( d(S_j, D_i) + β )²,

where α and β are known constants decided by the hardware of the charger and the receiver, and d(S_j, D_i) is the Euclidean distance between S_j and D_i. We denote by e_j the energy consumption rate of S_j, which is estimated by the MC. Suppose that the MC charges S_j
at D_i, and denote the remaining energy of S_j when the charging process starts and finishes as E_j and E′_j, respectively; then E′_j = E_j + (p_ij − e_j) · T_i. At the charging location D_i, we call p_ij − e_j the energy gain of S_j. The remaining energy of S_j will increase if its energy gain is positive and decrease otherwise. Note that the energy of S_j equals the safe energy level if the charging time equals (E_sf − E_j) / (p_ij − e_j), which is named the safety charging time of S_j with respect to the charging location D_i and denoted as τ_ij. The sensors can be classified into four groups. The first and second groups contain normal sensors with positive energy gain and critical sensors with negative energy gain, respectively. The third and fourth groups contain normal sensors with negative energy gain and critical sensors with positive energy gain, respectively. Obviously, the sensors of the first and second groups do not change their status no matter how long the MC charges at D_i. In contrast, a sensor S_j in the third group will fall into critical status, and a sensor in the fourth group can escape critical status, if the charging time is sufficiently long.
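The quantities defined in this section — the received power p_ij, the energy gain p_ij − e_j, the four sensor groups, and the safety charging time (E_sf − E_j)/(p_ij − e_j) — can be sketched as follows. The hardware constants (named ALPHA_C and BETA_C here to avoid clashing with the learning rate of Equation (4.1)) and all numeric values are illustrative assumptions.

```python
ALPHA_C, BETA_C = 36.0, 3.0     # charger/receiver hardware constants of [32]

def received_power(d):
    """Per-second energy p_ij a sensor receives at distance d from the MC."""
    return ALPHA_C / (d + BETA_C) ** 2

def classify(E_j, e_j, d, E_sf):
    """Return the sensor's group and, when meaningful, its safety charging time."""
    gain = received_power(d) - e_j       # energy gain: p_ij - e_j
    critical = E_j < E_sf
    if gain > 0 and critical:            # group 4: can escape critical status
        return "critical-recoverable", (E_sf - E_j) / gain
    if gain > 0:                         # group 1: normal, stays normal
        return "normal-gaining", 0.0
    if critical:                         # group 2: cannot be saved at this D_i
        return "critical-losing", None
    return "normal-losing", None         # group 3: will turn critical over time
```

For example, a critical sensor with E_j = 2 and e_j = 0.5 sitting 1 m from the MC gains 36/16 − 0.5 = 1.75 units per second, so it needs (5 − 2)/1.75 ≈ 1.71 s of charging to reach a safe level E_sf = 5.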