do not scale well. Typically, MANET nodes are resource constrained, e.g. battery power can be limited, and therefore higher data throughput cannot always be achieved by increasing transmission power. Overall data throughput for MANETs is also an issue, as communications are often multi-hop in nature. It is therefore a challenging research problem to send more information with low power in order to optimize the throughput.
To improve network throughput, a novel idea of network coding for current packet-switched networks has been proposed by Ahlswede et al. [1]. Traditionally, the intermediate node in the network just forwards the input packets to the intended nodes. However, network coding allows the intermediate node to combine some input packets into one or several output packets based on the assumption that the intended nodes are able to decode these combined packets. Figure 1 is a simple illustration of this promising idea, which shows how network coding can save a great deal of transmissions, thus improving the overall wireless network throughput.
In the three-node scenario in Figure 1, Alice and Bob want to send packets to each other with the help of a relay. Without network coding, Alice sends packet p1 to the relay and then the relay sends it to Bob. Likewise, Bob sends packet p2 to Alice. Therefore, a total of 4 transmissions are required for Alice and Bob to exchange one pair of packets. With network coding, the relay combines packets p1 and p2 together simply using XOR and then broadcasts the result to both Alice and Bob. Alice and Bob then extract the packets they want by performing the required decoding operation. The total number of transmissions is therefore reduced to 3. This illustrates the basic idea of how network coding is able to improve the network throughput.
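To make the XOR relay operation concrete, the following C++ fragment is a toy illustration of this exchange; it is not part of our implementation, and the buffer contents and names are arbitrary. It shows the relay combining two equal-length packets and Bob recovering p1 using his own packet p2 (padding of unequal packets is discussed in Chapter 2).

#include <cassert>
#include <cstdint>
#include <vector>

// XOR two byte buffers of equal length; the result is the coded packet.
std::vector<uint8_t> xorPackets(const std::vector<uint8_t>& a,
                                const std::vector<uint8_t>& b)
{
    assert(a.size() == b.size());
    std::vector<uint8_t> coded(a.size());
    for (size_t i = 0; i < a.size(); ++i)
        coded[i] = a[i] ^ b[i];
    return coded;
}

int main()
{
    std::vector<uint8_t> p1 = {0x11, 0x22, 0x33};   // Alice's packet
    std::vector<uint8_t> p2 = {0xAA, 0xBB, 0xCC};   // Bob's packet

    // The relay broadcasts p1 XOR p2 in a single transmission.
    std::vector<uint8_t> coded = xorPackets(p1, p2);

    // Bob already holds p2, so XOR-ing again recovers p1 (and vice versa for Alice).
    std::vector<uint8_t> decodedAtBob = xorPackets(coded, p2);
    assert(decodedAtBob == p1);
    return 0;
}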
Recently the research focus on network coding has shifted towards practical aspects, in particular for wireless mesh networks. For example, COPE [2] is regarded as the first practical implementation of network coding in wireless mesh networks. In [2-5] the authors introduce COPE as a new packet forwarding architecture which combines several packets together by the bit-wise exclusive OR (XOR) operation,
Figure 1: A simplified illustration of network coding, showing how network coding saves bandwidth. Alice and Bob exchange a pair of packets using 3 transmissions instead of 4. (a) No coding. (b) Coding.
coupled with a completely opportunistic approach to routing packets. COPE inserts a coding shim between the IP and MAC layers, which identifies coding opportunities and benefits from them by forwarding multiple packets in a single transmission. What is more,
by taking advantage of the opportunistic property and the simple XOR coding algorithm, COPE manages to address the practical issues of integrating network coding into the current communication protocol stack. The details of the opportunistic property and these practical considerations will be described in the next chapter. Experimental results [2-5] have shown that COPE can substantially improve the throughput of wireless mesh networks, by 3 to 4 times in simple one-hop topologies, and that it also slightly improves the throughput in large-scale multi-hop wireless mesh networks.
In addition, few works have been reported on designing a practical network coding scheme for wireless mobile ad-hoc networks. MANETs play a very important role in many fields of application nowadays, for example emergency situations and military applications. However, the throughput and robustness of wireless mobile ad-hoc networks are limited by many factors, such as node resources and changing topologies. We are therefore interested in investigating the outstanding issues described here with the aim of improving the performance of opportunistic network coding in wireless mobile ad-hoc networks.
In this thesis we propose to first study and investigate the performance behavior and key control parameters of opportunistic network coding in large-scale static wireless mesh networks. Our proposed approach is designed to be independent of the routing protocol, so that this opportunistic network coding scheme can take advantage of any newly designed or existing routing protocol. To validate our proposed schemes, we choose QualNet as our pseudo-implementation environment and simulation platform, since QualNet provides a highly realistic simulation environment in which protocols are implemented in a manner close to actual implementations. Our proposed solutions and simulation results should therefore be much closer to real system implementations.
Due to the time restriction of this project, our experiments only deal with UDP traffic rather than the various kinds of traffic found in the real world, and they could not be carried out in large-scale mobile ad-hoc network scenarios. However, the simulation results and analysis presented in this thesis should shed light on further research into more complex traffic situations in mobile environments.
1.3 Thesis Contributions
This thesis has carried out the following work and contributes to the understanding of the performance of practical network coding in wireless mesh networks. In addition, detailed studies on the behavior and key control parameters of COPE on simple topologies have shed light on how to address the issue in large-scale wireless networks. Lastly, based on these valuable insights, we have proposed and designed an intelligent opportunistic network coding scheme which is particularly suitable for the MANET environment. We summarize our contributions as follows:
1. Extend the QualNet simulator to include opportunistic network coding functionalities – We have designed and developed the new functionalities in the existing protocol stack of QualNet and integrated them as a new, enhanced network layer protocol. Through this implementation, the performance of any opportunistic network coding scheme, for any network size, can be easily studied through QualNet simulations.
2. Evaluate and study the behavior of the opportunistic network coding scheme via our enhanced QualNet, on both simple and large-scale topologies – We enhance the original COPE functionality and simulate its behavior in Alice-and-Bob (i.e. 1-to-1), X and Cross topologies. Based on the simulation results from these simple topologies, we further study its performance in a 20-node multi-hop wireless mesh network.
3. Evaluate and study the key parameters of the opportunistic network coding scheme with respect to overall network throughput, e.g. the impact of control messages on the performance of the opportunistic network coding scheme – We use these findings for further study of the network coding scheme on large-scale wireless ad-hoc networks, and we find that there is an optimal value for the control message interval that maximally improves the network throughput.
4. Propose an intelligent opportunistic network coding scheme suitable for large-scale wireless ad-hoc networks and demonstrate the effectiveness of our solutions through simulations – The proposed intelligent algorithm manages to reduce the overhead and interference caused by control messages in large-scale multi-hop networks without degrading the benefit brought by network coding.
1.4 Related Works
Network coding is regarded as a promising technique for improving network throughput. It originates from Ahlswede et al. [1], who demonstrated that intermediate nodes in a network may combine several received packets into one or several output packets. Much theoretical work has been done to optimize network coding schemes in information and networking systems. Li et al. [6] showed that linear codes are sufficient for multicast traffic to achieve the maximum capacity bounds. At the same time, Koetter and Medard [7] proposed an algebraic approach and showed that coding and decoding can be done in polynomial time. Ho et al. [8] presented the concept of random linear network coding, which makes network coding more practical, especially in distributed networks such as wireless networks. In [4] an intra-flow network coding scheme is proposed to deal with intra-flow traffic, which can effectively handle the reliability issue. Joon-Sang [9] presented a network-coding-based ad-hoc multicast protocol, CodeCast, which is especially well suited for multimedia applications in wireless networks. All of the above works have shown results either analytically and/or through extensive simulations.
In the last few years, many researchers have focused on developing practical network coding techniques in wireless networks [10-12] for inter-flow traffic, to significantly improve the network capacity. A great deal of attention has been paid to dealing with the practical issues and designing implementable protocols with network coding [13-15]. Generally, the common practical issues are how to integrate the network coding technique into the current network protocol stacks and how to achieve low complexity in the coding and decoding scheme. In [2] S. Katti et al. proposed COPE, which is regarded as the first practical network coding implementation for wireless mesh networks dealing with inter-flow and unicast traffic. With its opportunistic listening and opportunistic coding characteristics, COPE exploits the broadcast nature of the wireless channel. Through eavesdropping and sharing information among neighbors, an intermediate node running COPE can simply XOR multiple packets into one packet and broadcast it to several neighbors. All the neighbors are then able to decode their specific packets from the combined packet by the same simple XOR method. The authors show that this opportunistic network coding scheme improves network throughput several times over in wireless mesh networks. In addition, K. Ajoy et al. [21] further evaluated the performance of COPE with two different routing protocols, i.e. AODV and OLSR, using three different queue management schemes, namely FIFO, RED and RIO. The authors show that OLSR provides better performance than AODV for COPE, while FIFO achieves the shortest packet delay among the three queue management schemes. Finally, paper [16], which is a valuable primer on network coding, explicitly explains some popular network coding schemes and also describes the advantages and challenges in both research and practical implementation. The authors also enumerate some other promising fields where network coding could be applied, from peer-to-peer (P2P) file distribution networks to wireless ad-hoc networks, and from the improvement of network capacity to the increase of network security.
1.5 Thesis Organization
The rest of this thesis is organized as follows.
Chapter 2 presents an overview of opportunistic network coding, including the two components of the opportunistic character, i.e. opportunistic listening and opportunistic coding. We also explain the packet coding and decoding algorithms implemented in this specific opportunistic network coding scheme.
In Chapter 3, we present the design of the opportunistic network coding algorithm and architecture. A detailed description of the control flow is presented to show how we design this opportunistic network coding scheme.
Chapter 4 shows our implementation of the opportunistic network coding scheme in the QualNet simulator. We first introduce the architecture of the QualNet simulator, including the protocol model and the application program interface. Then we enumerate some key programming abstractions of our implementation in QualNet, which show how we implement the opportunistic network coding scheme and how we integrate it into the QualNet simulator.
In Chapter 5, we present our simulation results and analyze the observations, including the parameters that affect the results and the ways to improve the coding scheme.
In Chapter 6, an intelligent version of the opportunistic network coding scheme is proposed based on the conclusions drawn in the previous chapter. We further demonstrate why this intelligent scheme is suitable for MANETs.
Chapter 7 presents the performance evaluation of the intelligent opportunistic network coding scheme proposed in the previous chapter.
In the last chapter, Chapter 8, we draw some conclusions based on these simulations. We also highlight a number of areas where further enhancements can be explored.
Chapter 2
Opportunistic Network Coding Overview
COPE is a new forwarding architecture for current packet-switched networks, especially for wireless mesh networks with unicast traffic. Traditionally, the intermediate node in the wireless network directly forwards the input packet to the next hop. However, in COPE the intermediate node may XOR several input packets into one output packet and then broadcast the combined packet to several intended neighbors, based on the assumption that all the intended neighbors are able to decode the combined packet. No synchronization or prior knowledge of senders, receivers, or traffic rates is necessary, i.e. all of these may vary at any time. COPE depends highly on the local information shared by neighbors to detect and exploit coding opportunities whenever they arise. To successfully exchange this information among neighbors in a wireless environment, an opportunistic mechanism is introduced which has two main components, i.e. opportunistic listening and opportunistic coding.
2.1 Opportunistic Listening
In opportunistic listening, nodes take advantage of the broadcast nature of the wireless channel to overhear packets that are not addressed to them, and they store all overheard packets in their local buffers, called the packet pool. To achieve this, all
nodes in the network should be equipped with omni-directional antennas and set to "promiscuous" listening mode. A specific callback function is necessary to handle the packets heard in promiscuous mode; we will describe how this callback function is implemented in the QualNet simulator in Chapter 4. The packet pool, in our implementation, is a FIFO queue with a fixed capacity of N0. The larger N0 is, the more packets the node can store, and therefore the more opportunities there are to perform network coding. However, if N0 is too large, the packets at the head of the packet pool become too old, thus degrading the coding efficiency. In our implementation, N0 is set to 128. The total amount of storage required is less than 192 kilobytes, which is easily available on today's PCs, laptops or PDAs. This constitutes the Opportunistic Listening function.
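A minimal sketch of such a bounded packet pool is given below. It is our own illustration rather than the implementation code, and the PacketId representation and member names are assumptions; it only shows the FIFO behaviour described above, where the oldest overheard packet is evicted once N0 packets are buffered.

#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

struct OverheardPacket {
    uint32_t packetId;            // identifier derived from source address and IP sequence number (assumed)
    std::vector<uint8_t> payload; // copy of the native packet
};

class PacketPool {
public:
    explicit PacketPool(std::size_t capacity) : capacity_(capacity) {}

    // Insert an overheard packet; evict the oldest one when the pool is full.
    void insert(const OverheardPacket& pkt) {
        if (pool_.size() >= capacity_)
            pool_.pop_front();          // the oldest packet at the head is dropped
        pool_.push_back(pkt);
    }

    // Look up a stored packet by its identifier (used later for decoding).
    const OverheardPacket* find(uint32_t packetId) const {
        for (const auto& p : pool_)
            if (p.packetId == packetId)
                return &p;
        return nullptr;
    }

private:
    std::size_t capacity_;              // N0, set to 128 in our implementation
    std::deque<OverheardPacket> pool_;  // FIFO order: front = oldest
};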
In addition to promiscuous listening, each node periodically sends reception reports to its neighbors to share information that is important for the coding decision. A reception report contains information on the packets that the host has heard and stored in its packet pool. Once the neighbors have received these reception reports, they extract all the relevant information and store it in another local buffer, called the report pool. The reception reports are normally inserted into data packets as an extra packet header and sent together with the data. However, when nodes have no data packets to send, they periodically broadcast "Hello" messages containing the reception reports to their neighbors. In the original COPE implementation, this Hello message is the same as the control packet, but in our implementation, control packets are treated as a separate kind of packet. The Hello message is part of an intelligent scheduling scheme, which will be described in Chapter 6.
In summary, through the Opportunistic Listening technique, the nodes learn and share their states with each other, thus contributing to the nodes' network coding decisions.
2.2 Opportunistic Coding
Opportunistic coding allows nodes to combine multiple packets into a single packet based on the assumption that all intended recipients can extract the packets they want from the combined packet. However, the main issue is which packets to code, and how to code them. Each node should answer this question based on local information and without further consulting other nodes. A basic method to address the coding issue in wireless networks is for each node to maintain a FIFO queue of packets to be forwarded. When the MAC indicates that the node can send, the node picks the packet at the head of the queue, checks which other packets in the queue may be encoded with this packet, XORs those packets together, and broadcasts the single combined packet.
However, the question is which packets should be combined together to maximize network throughput, since a node may have multiple coding options. It should pick the one that maximizes the number of packets delivered in a single transmission. In [2] the authors provide a good example to illustrate this situation. In Figure 2(b) node E has eight packets, P1-P8, in its output queue. The list in Figure 2(a) shows the next hop of each packet in E's output queue. When the MAC notifies node E to transmit, node E de-queues packet P1 from the head of the output queue and tries to code it with other packets in the output queue. With the help of the information in the reception reports, node E knows what packets its neighbors have in their packet pools. Node E then has several coding options, as shown in Figure 2(c).
The first option is P1 ⊕ P2, which is not a good option because none of the recipient nodes can decode P1 ⊕ P2. The second option in Figure 2(c), P1 ⊕ P3, is a better coding decision: node C has packet P1, so it can successfully decode packet P3, and node D has packet P3, so it can successfully decode packet P1. As for the third and fourth options, it can be seen that these are bad coding decisions, as none of the recipient nodes can decode either P1 ⊕ P3 ⊕ P5 or P1 ⊕ P3 ⊕ P4. One interesting observation is that the second option is a better coding decision than the third and fourth options, even though the third and fourth options code more packets than the second. The fifth option, P1 ⊕ P3 ⊕ P6, is a much better option than the second, as three of the recipient nodes (B, C and D) can decode their intended packets successfully, i.e. node B has P1 and P3 so it can decode P6, node C has P1 and P6 so it can decode P3, and node D has P3 and P6 so it can decode P1. The sixth coding option is the worst, as it would code four packets and yet none of the intended next hops could decode the encoded packet. The seventh option, P1 ⊕ P3 ⊕ P6 ⊕ P8 in Figure 2(c), is the best coding decision for this scenario, with the maximum number of packets XORed together: node A has P1, P3 and P6, so it can decode P8 successfully, and similarly nodes B, C and D can decode their intended packets P6, P3 and P1, respectively, from the encoded packet P1 ⊕ P3 ⊕ P6 ⊕ P8.
As can be seen from this simple example, a general opportunistic coding rule, as proposed in [2], can be stated as follows: a node can XOR n packets p1, ..., pn together to transmit to n next-hops r1, ..., rn only if each next-hop ri has all n-1 packets pj for j ≠ i. Each time a node is ready to send, it tries to find the maximum n in order to code and transmit as many packets as possible in a single transmission. This coding scheme has a few other important characteristics. First, there is no scheduling or assumed synchronization. Second, no packet is delayed; every time the node sends a packet it picks the head of the queue, as it would have done in the conventional approach. The difference is that, whenever possible, the node tries to load each transmission with additional information through coding. Third, the scheme does not cause packet reordering, as it considers the packets according to their order in the FIFO queue, for both transmission and coding. This characteristic is particularly important for TCP flows, which may mistake packet reordering for a congestion signal.
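The coding rule above can be expressed as a simple check against the reception-report state, as in the following sketch. This is an illustrative example only; the CandidatePacket type and the hasPacket lookup are our own assumptions standing in for the per-neighbour report pool, not code from COPE or our implementation.

#include <cstdint>
#include <functional>
#include <vector>

struct CandidatePacket {
    uint32_t id;       // packet identifier
    int      nexthop;  // identifier of the intended next hop
};

// 'hasPacket(neighbour, packetId)' abstracts the reception-report lookup.
// A node may XOR packets p1..pn for next hops r1..rn only if every next hop
// ri already has the other n-1 packets pj (j != i).
bool CanAllNexthopsDecode(
    const std::vector<CandidatePacket>& packets,
    const std::function<bool(int, uint32_t)>& hasPacket)
{
    for (const CandidatePacket& target : packets) {
        for (const CandidatePacket& other : packets) {
            if (other.id == target.id)
                continue;                          // ri does not need its own packet
            if (!hasPacket(target.nexthop, other.id))
                return false;                      // some pj is missing at ri, coding is invalid
        }
    }
    return true;
}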
2.3 Packet Coding Algorithm
In this section, we introduce the details of the packet coding algorithm based on the opportunistic coding idea described above. Some practical issues in the implementation and our specific solutions are also presented.
First, the coding scheme in the original COPE does not introduce additional delay. The node always de-queues the head of its output queue and checks for coding opportunities when it is ready to send. If there is no coding opportunity, the node sends the packet without waiting for the arrival of a matching codable packet. However, in our design, we have added a waiting scheme before the coding procedure, shown in Figure 3. This waiting scheme takes action immediately before the network coding procedure: when the node is ready to send, it checks the packets in its output queue and executes this waiting scheme before it executes the network coding. The waiting scheme works as follows: a node checks for coding opportunities if and only if the number of packets in its output queue is greater than N and no new packet has arrived during the last T seconds. N and T are called the queue threshold and the waiting duration of the waiting scheme, respectively. The values of N and T significantly impact the performance of the algorithm. It is understandable that the waiting scheme accumulates N packets in the output queue, thus improving the coding opportunities, at the price of delaying each packet by up to T.
Figure 2: Node E's output queue (P1-P8), the packet pools of neighbors A, B, C and D, and the candidate coding options, indicating for each option whether it is a good choice (example from [2]).
However, the improvement in network throughput may shorten the overall packet delay, which can compensate for the extra delay introduced by the waiting scheme.
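As a concrete illustration of the waiting scheme, the sketch below shows the check a node could perform before attempting to code. It is our own example; the parameter names are assumptions rather than identifiers from the implementation.

#include <cstddef>

// Returns true when the node should look for coding opportunities:
// more than N packets are queued and no new packet has arrived in the
// last T seconds (timestamps are given in seconds for simplicity).
bool ShouldAttemptCoding(std::size_t queueLength,
                         double now,
                         double lastArrival,
                         std::size_t N,   // queue threshold
                         double T)        // waiting duration in seconds
{
    bool enoughPackets = queueLength > N;
    bool queueIsQuiet  = (now - lastArrival) >= T;
    return enoughPackets && queueIsQuiet;
}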
Second, the coding scheme gives preference to XOR-ing packets of similar lengths, because XOR-ing small packets with larger ones reduces the overall bandwidth savings. Empirical studies show that the packet-size distribution in the Internet is bimodal, with peaks at 40 and 1500 bytes [17]. We can therefore limit the overhead of searching for packets with the right sizes by distinguishing between small and large packets. We still XOR packets of different sizes when necessary; in this case, the shorter packets are padded with zeros, and the receiving node can easily remove the padding by checking the packet-size field in the IP header of each native packet.
Third, the coding scheme never encodes together packets headed to the same nexthop, since that nexthop would not be able to decode them. Hence, we only need to consider packets headed to different next hops, and the relay node maintains a virtual queue for each neighbor. When a new packet is inserted into the output queue, an entry is added to the virtual queue of the intended neighbor.
Figure 3: A simplified illustration of the waiting scheme, showing a threshold N on the output queue. A node sends a packet only when the threshold N is fulfilled and no packets have arrived during the last T seconds.
Finally, we want to ensure that each intended neighbor is able to decode its native packet from the combined packet. Thus, for each packet in the output queue, the relay node checks whether each of its neighbors has already heard the packet. The neighbors' information is shared and learned through the reception reports mentioned previously.
In our implementation, each node maintains the following data structures.
Each node has 3 FIFO queues of packets to be forwarded, which together we call the output queue; this is the default node configuration. These 3 queues have different priorities, from 0 to 2. In our implementation, data packets have the lowest priority, 0, while Hello messages and control messages have the highest priority, 2.
For each neighbor, the node maintains two per-neighbor virtual queues, one for small packets (e.g. smaller than 100 bytes) and the other for large packets. The virtual queues for a neighbor A contain pointers to the packets in the output queue whose nexthop is A.
Each node maintains two extra buffers, named the packet pool and the reception report pool. The packet pool is used to store overheard packets, and the reception report pool is used to store the reception reports from neighbors.
The packet pool stores native packets, while the reception report pool stores reports indicating the specific details of the packets overheard by each neighbor. One report entry contains the packet's ID and the addresses of its previous hop and next hop. The details of the packet format will be explained in the next chapter.
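The per-node state described above can be summarised in a structure of the following shape. This is a simplified sketch under our own naming and type assumptions; the actual implementation structures are described in Chapter 4.

#include <cstdint>
#include <deque>
#include <list>
#include <map>
#include <vector>

using PacketId = uint32_t;     // derived from source address and IP sequence number
using NodeAddress = uint32_t;  // simplified IPv4 address

struct QueuedPacket {
    PacketId id;
    NodeAddress nexthop;
    std::vector<uint8_t> payload;
};

struct NeighborVirtualQueues {
    std::list<QueuedPacket*> smallPackets;  // pointers into the output queue, size < 100 bytes
    std::list<QueuedPacket*> largePackets;  // pointers into the output queue, size >= 100 bytes
};

struct ReportEntry {
    PacketId id;          // packet overheard by the neighbour
    NodeAddress prevHop;
    NodeAddress nextHop;
};

struct NodeState {
    std::deque<QueuedPacket> outputQueue;                        // FIFO of packets to forward
    std::map<NodeAddress, NeighborVirtualQueues> virtualQueues;  // one pair of queues per neighbour
    std::deque<QueuedPacket> packetPool;                         // overheard native packets
    std::map<NodeAddress, std::vector<ReportEntry>> reportPool;  // reception reports per neighbour
};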
The specific coding procedure is illustrated by the following pseudo-code.
Coding procedure:
    Pick packet p at the head of the output queue
    SendingPackets = {p}
    Nexthops = {nexthop(p)}
    if size(p) > 100 bytes then
        queue = virtual queue for large packets
    else
        queue = virtual queue for small packets
    end if
    for neighbor i = 1 to M do
        Pick packet p_i at the head of virtual queue Q(i, queue)
        if ∀ n ∈ Nexthops ∪ {i}, n can decode p ⊕ p_i based on the reception reports then
            p = p ⊕ p_i
            SendingPackets = SendingPackets ∪ {p_i}
            Nexthops = Nexthops ∪ {i}
        end if
    end for
    queue = !queue
    for neighbor i = 1 to M do
        Pick packet p_i at the head of virtual queue Q(i, queue)
        if ∀ n ∈ Nexthops ∪ {i}, n can decode p ⊕ p_i based on the reception reports then
            p = p ⊕ p_i
            SendingPackets = SendingPackets ∪ {p_i}
            Nexthops = Nexthops ∪ {i}
        end if
    end for
    return SendingPackets
In the above pseudo-code, p represents a specific packet in the output queue, while Q denotes the overall virtual queue structure for all neighbours. The variable queue is a two-state variable indicating which of the two queues, for large or small packets, is currently selected. The algorithm uses the "!" operation to switch between these two states; for example, queue = !queue means that if queue originally indicates the queue for large packets, after the operation it indicates the queue for small packets, and vice versa. Given the values of i and queue, Q(i, queue) locates the specific virtual queue for neighbour i.
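For readers who prefer code to pseudo-code, the following C++ sketch mirrors the procedure above. It is an illustrative reimplementation under assumed types (Packet, VirtualQueues) and an assumed decodability predicate corresponding to the check of Section 2.2; it is not the QualNet code.

#include <cstdint>
#include <deque>
#include <functional>
#include <set>
#include <vector>

struct Packet {
    uint32_t id;
    uint32_t nexthop;
    std::vector<uint8_t> payload;
};

struct VirtualQueues {
    std::deque<Packet> smallQ;   // queued packets <= 100 bytes for one neighbour
    std::deque<Packet> largeQ;   // queued packets > 100 bytes for one neighbour
};

struct CodingResult {
    Packet combined;              // XOR of all chosen native packets
    std::vector<Packet> natives;  // native packets, used to build the coding header
    std::set<uint32_t> nexthops;  // intended next hops of the combined packet
};

// XOR q's payload into p's, zero-padding the shorter packet (Section 2.3).
static void xorInto(Packet& p, const Packet& q)
{
    if (q.payload.size() > p.payload.size())
        p.payload.resize(q.payload.size(), 0);
    for (size_t i = 0; i < q.payload.size(); ++i)
        p.payload[i] ^= q.payload[i];
}

// 'decodable(nexthops, candidate)' must return true only if every node in
// 'nexthops' plus the candidate's next hop can decode the resulting combination.
CodingResult CodingProcedure(
    std::deque<Packet>& outputQueue,
    std::vector<VirtualQueues>& perNeighbor,
    const std::function<bool(const std::set<uint32_t>&, const Packet&)>& decodable)
{
    CodingResult result;
    Packet p = outputQueue.front();
    outputQueue.pop_front();

    result.combined = p;
    result.natives  = {p};
    result.nexthops = {p.nexthop};

    bool startWithLarge = p.payload.size() > 100;

    // Two passes: first the size class of p, then the other one (queue = !queue).
    for (int pass = 0; pass < 2; ++pass) {
        bool useLarge = (pass == 0) ? startWithLarge : !startWithLarge;
        for (auto& vq : perNeighbor) {
            std::deque<Packet>& q = useLarge ? vq.largeQ : vq.smallQ;
            if (q.empty())
                continue;
            Packet pi = q.front();          // only the head of each virtual queue is tried
            if (decodable(result.nexthops, pi)) {
                xorInto(result.combined, pi);
                result.natives.push_back(pi);
                result.nexthops.insert(pi.nexthop);
                q.pop_front();
            }
        }
    }
    return result;                          // the caller transmits result.combined with the coding header
}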
In the original coding scheme, the authors introduced an intelligent guessing scheme based on the integrated ETX [18] routing metric. In congested situations, a reception report may be dropped on the wireless channel or may reach the intended node too late, so the relay node may miss some coding opportunities. Depending on the packet delivery probability of the wireless link, the relay node may guess whether the intended neighbour has received the packets or not. Even though the authors show that this intelligent guessing technique can increase the total number of coding opportunities under congestion in static wireless mesh networks, we do not implement it in our design, for several reasons. First, we would like to design an opportunistic network coding scheme that is independent of the routing protocol, so that the algorithm remains flexible in different scenarios by cooperating with a suitable routing protocol instead of being integrated with an ETX-based routing algorithm. Furthermore, the ETX algorithm calculates its metric by measuring the loss rate of broadcast packets between a pair of neighbour nodes, indicating the link quality. This method is not suitable for the wireless mobile environment, because the topology is always changing, as is the link quality. In contrast, AODV [19] and OLSR [20] are two routing protocols that are more practical for wireless mobile ad-hoc networks than those based on ETX.
2.4 Packet Decoding
The packet decoding scheme is much simpler than the coding side. As mentioned above, in COPE each node maintains an extra buffer, named the packet pool, to store a copy of the packets it has overheard or sent out. Each packet is identified by a specific packet ID consisting of the packet's source address and IP sequence number. When a node receives an encoded packet consisting of n native packets, the node goes through the IDs of the native packets one by one in its local packet pool and retrieves the corresponding n-1 packets. It then XORs these n-1 packets with the received encoded packet to obtain its intended packet.
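The decoding step can be sketched as follows. This is an illustrative example with assumed types; the lookup function stands in for a search of the local packet pool and is not a function from the implementation.

#include <cstdint>
#include <functional>
#include <vector>

// Decode one native packet from an encoded packet that combines n natives.
// 'lookup(id)' returns the stored copy of a native packet from the local
// packet pool, or nullptr if it was never overheard (assumed helper).
// 'wantedId' is the identifier of the packet this node is the next hop for.
bool DecodePacket(
    std::vector<uint8_t> encoded,            // payload of the coded packet
    const std::vector<uint32_t>& nativeIds,  // IDs listed in the coding header
    uint32_t wantedId,
    const std::function<const std::vector<uint8_t>*(uint32_t)>& lookup,
    std::vector<uint8_t>& decodedOut)
{
    for (uint32_t id : nativeIds) {
        if (id == wantedId)
            continue;                        // we do not need (or have) our own packet
        const std::vector<uint8_t>* stored = lookup(id);
        if (stored == nullptr)
            return false;                    // a missing native packet makes decoding impossible
        if (stored->size() > encoded.size())
            encoded.resize(stored->size(), 0);
        for (size_t i = 0; i < stored->size(); ++i)
            encoded[i] ^= (*stored)[i];      // XOR out the n-1 known packets
    }
    decodedOut = encoded;                    // what remains is the wanted native packet
    return true;                             // any zero padding is removed using the IP length field
}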
2.5 Pseudo Broadcast
In COPE, the node sends packets in an encoded manner, i.e. a single encoded packet contains information from several packets with several different next hops. Moreover, for opportunistic listening, all nodes snoop on the network to monitor all packets transmitted among their neighbors. The natural way to do this would be broadcast. However, one of the biggest disadvantages of the 802.11 MAC protocol is that a recipient does not send an acknowledgement in response to a broadcast packet. In the absence of acknowledgements, the broadcast mode offers no retransmissions and consequently very low reliability. In addition, a broadcast source does not detect collisions, and thus does not back off and retransmit. If several nodes sharing the same wireless channel broadcast data packets to their neighbors, the total network throughput would be severely degraded due to the resulting congestion. On the other hand, unicast mode provides both retransmission and back-off at the sending node, but only towards one specific destination at a time. Unicast alone, however, does not help with opportunistic listening and consequently not with opportunistic coding.
To address this problem, pseudo broadcast is introduced in COPE. Pseudo broadcast is actually unicast and therefore benefits from the reliability and back-off mechanisms. The link layer destination field of the encoded packet is set to the MAC address of one of the intended recipients, and an extra header is added after the link layer header, listing all other next hops of the encoded packet (except the link layer destination). Recall that all nodes in the network listen in promiscuous mode; they snoop on the network by eavesdropping on all packets transmitted among neighbors. In this way a node is able to process packets not addressed to it. When a node hears an encoded packet, it checks the link layer destination field to determine whether it is the intended receiver. If it is, it processes the packet directly. If not, the node further checks the next-hop list in the extra packet header to see whether it is an intended next hop. If it is not, it simply stores a copy of that packet, as a native packet, in its packet pool. If it is an intended next hop, it processes the encoded packet further to retrieve its intended packet and then stores a copy of the decoded native packet in its packet pool. As all packets are sent using 802.11 unicast, the MAC layer is able to detect collisions and back off properly. Pseudo broadcast is therefore more reliable than simple broadcast while inheriting all the advantages of broadcast.
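The receive-side decision of pseudo broadcast can be summarised by the sketch below; the structure and names are illustrative assumptions, not the actual frame format or implementation code.

#include <algorithm>
#include <cstdint>
#include <vector>

struct ReceivedFrame {
    uint32_t macDestination;             // link layer destination (one chosen recipient)
    std::vector<uint32_t> otherNexthops; // extra header: remaining next hops of the coded packet
};

enum class RxAction {
    ProcessAsPrimaryReceiver,  // we are the link layer destination
    DecodeAsListedNexthop,     // we appear in the extra next-hop list
    StoreOverheardCopy         // plain opportunistic listening: keep a copy in the packet pool
};

RxAction ClassifyFrame(const ReceivedFrame& frame, uint32_t myAddress)
{
    if (frame.macDestination == myAddress)
        return RxAction::ProcessAsPrimaryReceiver;
    bool listed = std::find(frame.otherNexthops.begin(),
                            frame.otherNexthops.end(),
                            myAddress) != frame.otherNexthops.end();
    return listed ? RxAction::DecodeAsListedNexthop
                  : RxAction::StoreOverheardCopy;
}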
In this chapter, we have presented an overview of opportunistic network coding for the wireless environment. In the next chapter, we will show the specific architecture of opportunistic network coding in our implementation, including the specific packet header structure and the overall control flow of the algorithm.
Chapter 3
Opportunistic Network Coding Architecture
In this chapter, we introduce the architecture of the opportunistic network coding scheme which we have implemented in the QualNet simulator. The details of the architecture are based on the overview of the opportunistic characteristics and the coding and decoding algorithms introduced in the previous chapter. First the packet header structure will be shown and the functionality of each field in the header will be explained. Next, the overall control flowchart will be presented to illustrate the structure of the opportunistic network coding algorithm in our implementation.
3.1 Packet header
Figure 4 shows the modified variable-length coding header for the opportunistic network coding scheme, which is inserted into each packet. If the routing protocol has its own header, our coding header sits between the routing and MAC layer headers; otherwise, it sits between the IP and MAC headers. Only the shaded fields in Figure 4 are required in every coding header (the constant block). Besides this, there are two other header blocks, containing the identifiers of the coded native packets and the reception reports.
Constant block: The first block records some constant values for the whole coding header. For example, it records the number of coded native packets in this encoded packet, the number of reception reports attached to this header, the packet sequence number and the total length of the header. Besides these, protocol serial information and parity can also be inserted in this constant block. In our implementation, we have added version information and check-sum fields to this block.
Identifiers (IDs) of the coded native packets: This block records the metadata that enables packet decoding. The number of entries is indicated in the first constant block. Each entry contains the information of the corresponding native packet. It begins with the packet ID, which is a 32-bit hash of the packet's source IP address and IP sequence number. This is followed by the IP address of the native packet's nexthop. When a node hears an XOR-ed packet, it checks the list of nexthops in this block to see whether it is an intended nexthop of this XOR-ed packet, in which case it decodes the packet and processes it further.
Reception reports: As shown in Figure 4, reception reports form the last block in the header, and the number of report entries is also recorded in the first constant block. Each report entry specifies the source of the reported packets, SRC_IP, which is followed by the IP sequence number of the last packet received from that source, LAST_PKT, and a bit-map of recently heard packets. This bit-map technique for reception reports has two advantages: compactness and effectiveness. In particular, it allows the nodes to report each packet multiple times with minimal overhead, which prevents reception reports from being lost in highly congested situations.
Figure 4: Packet header for our algorithm. The first constant block indicates the number of entries in the following blocks. The second block identifies the native packets encoded and their next hops. The last block contains reception reports; each entry identifies a source, the last IP sequence number received from that source, and a 32-bit bit-map of the most recent packets seen from that source.
Our packet header structure by and large follows the original COPE header structure; however, in order to make this opportunistic mechanism fit QualNet and the mobile environment, we make some modifications in our implementation. First, we remove the asynchronous acknowledgment scheme of the original COPE. Originally, COPE exploits hop-by-hop ACKs and retransmissions to guarantee reliability in a hop-by-hop fashion, thus adding an ACK block at the end of the header structure. However, we found that this asynchronous ACK technique is not a good solution; sometimes it even makes the performance worse. So in our header structure there is no ACK block, resulting in a smaller overhead. Besides this, we rearrange the header structure, placing the constant block at the beginning of the header as shown in Figure 4. In addition, we use a 32-bit bitmap instead of an 8-bit one in the reception report block. The bitmap is used to represent packets; for example, if the first bit of the bitmap indicates packet 10, then the second bit indicates packet 11, and so on. A longer bitmap can indicate more packets than a short one, which compensates for the delay associated with the reception report. This is particularly useful in a mobile environment, where a node can provide more information to a newly arrived neighbor which has no information about its neighbor nodes, thereby potentially improving the coding opportunities. Another modification is that we replace the MAC address in the Nexthop field of the second header block with an IP address. An IPv4 address, which is 16 bits shorter than a MAC address, means less overhead in the header, thus compensating for the longer bitmap. Moreover, this replacement makes our solution independent of the underlying MAC layer, making it suitable for different networks.
3.2 Control Flow
This section describes the overall packet flow of our opportunistic network coding scheme, which mainly consists of two parts, i.e. the sending side and the receiving side.
3.2.1 Sending Side
Figure 5: Flowcharts for our opportunistic network coding implementation
Figure 5(a) shows the flowchart of the sending side. When a node is ready to send a packet, it first checks whether it needs to wait for the next new input packet. Based on the waiting scheme introduced in the previous chapter, the node checks the number of packets in its output queue and whether it has received new packets in the last T seconds. If the number of packets in the output queue is less than the threshold N, or if the waiting period of T seconds has not yet expired, the node waits for additional new packets. Otherwise, the node immediately proceeds to the next step and de-queues the packet at the head of the output queue. Next, the node traverses the packets in the output queue to pick those that can be coded with the head packet according to the coding algorithm. After the packets are XOR-ed together, the node constructs the header block with the IDs of the coded native packets, followed by the reception reports. Finally, the combined packet with the extra coding header is transmitted. Alternatively, if no other packets can be coded with the head packet, the coding header is simply added to the native head packet and it is transmitted without any further delay.
3.2.2 Receiving Side
On the receiving side, whenever a node receives a packet, it checks whether the packet is a coded packet. If the packet is native, without the extra coding header, the node processes it in the usual way. Otherwise, the node processes it according to the flowchart of the receiving side illustrated in Figure 5(b). First, the node extracts the reception reports from the header and updates the neighbors' states recorded in its report pool. Next, the node checks whether more than one packet is combined in this packet, in which case it tries to decode it. After the node obtains the native packet, it stores a copy of the native packet in its packet pool and then checks whether it is the intended next hop. If not, it simply stops handling this packet; otherwise, it passes the native packet to the higher layer protocol for further processing.
In the next chapter, we will describe the implementation details of this opportunistic network coding architecture in the QualNet simulator.
Chapter 4
Implementation in QualNet Simulator
In this chapter, we describe the details of our implementation of the opportunistic network coding scheme in the QualNet simulator. Before describing the detailed programming, it is useful to present a brief description of the QualNet simulator.
4.1 Simulator Abstraction
The QualNet simulator is a commercial network simulation tool derived from GloMoSim; it was first released in 2000 by Scalable Network Technologies (SNT). We use QualNet 4.0 as our performance evaluation platform. QualNet is known for its high fidelity, scalability, portability, and extensibility. It can simulate scenarios with thousands of network nodes because it takes full advantage of the multi-threading capability of multi-core 64-bit processors. In addition, the source code and configuration files are organized according to the OSI protocol stack model, as in a real communication system. Furthermore, the protocol architecture in QualNet is very close to the real TCP/IP network structure, consisting of the Application, Transport, Network, Link (MAC) and Physical layers, from top to bottom. Compared to other open source network simulation tools, QualNet is closer to a real system implementation and is thus capable of generating more realistic and accurate simulation results.
4.1.1 Discrete event simulator
Compared to continuous time simulators, discrete event simulators are much more popular in both industrial implementation and academic research. There are many articles providing convincing evidence and analysis for this statement, so we do not discuss it further in this thesis. QualNet is also a discrete event simulator, where the system state changes over time only when an event occurs. An event can be anything in the network system, such as a packet generation request, a collision or a timeout, which triggers the system to change its state or perform a specific operation. In QualNet there are two event types: packet events and timer events. Packet events are used to simulate the exchange of data packets between layers or nodes. To send a packet to an adjacent layer, the QualNet kernel passes the handle to the specific node, the node schedules a packet event for the adjacent layer, and the handle is then returned to the kernel. After a pre-set delay, the occurrence of the packet event simulates the arrival of the data packet, which triggers the QualNet kernel to pass the handle to this node again so that the adjacent layer in this node can process the data packet further. The data packet is then passed on to the next adjacent layer, and so on, until it is freed. Packet events are also used for modeling communication between different nodes in the network; in fact, communication among nodes in the network can only be achieved by scheduling data packets and exchanging them with each other. Timer events, on the other hand, are used to perform the function of alarms. For example, a timer alarm can be used to trigger the periodic broadcast of a control message every second. Timer events are very useful and important for the simulator to schedule more complex event patterns, as in a real application system.
In QualNet, both packet events and timer events are defined via the same message data structure. A message contains the information about an event, such as the event type, sequence number, generating node and the associated data. Figure 6 shows the message data structure in QualNet.
struct message_str
{
    Message* next;       // For kernel use only
    short layerType;     // Layer which will receive the message
    short protocolType;  // Protocol which will receive the message in the layer
    short instanceId;    // Instance of the protocol which will receive the message
    // ... further fields include infoArray, packet, packetSize,
    // numberOfHeaders, headerProtocols and headerSizes (explained below)
};

Some of the fields of the message data structure are explained below.
layerType: Layer associated with the event, indicating which layer will receive this message.
protocolType: Protocol associated with the event, indicating which protocol will process this message in the layer.
instanceId: For multiple instances of a protocol, this field indicates which instance will receive this message.
infoArray: Stores additional information that is used in the processing of events, as well as information that needs to be transported between layers.
Packet: This field is optional. If the event is for an actual data packet, this field holds the data; headers added by the different layers are included in this field.
packetSize: The total size of the packet field.
numberOfHeaders: Records how many headers have been added.
headerProtocols: An array storing the specific protocol that added each header.
headerSizes: An array storing the specific size of each header.
The last three fields are used for packet tracing. If the packet tracing function is enabled, these fields are filled in during the simulation to facilitate analysis afterwards, but this slows down the simulation. The fields listed above are only some key parts of the message data structure; more details can be found in the API reference provided by QualNet. We have added some new entries to the message structure to facilitate our implementation of the network coding scheme, which will be described later.
4.1.2 Protocol Model in QualNet
As mentioned above, each node in QualNet runs a protocol stack just like a physical communication device in the real world, and each protocol operates at one of the layers in the stack. Before we implement our own protocol in QualNet, we describe how a protocol is modeled in QualNet. Figure 7 shows the general protocol model as a finite state machine in QualNet.
Trang 32A general protocol model in QualNet consists of three states, Initialization, Event Dispatcher and Finalization In the Initialization state, the protocol reads parameters from the simulation configuration file to configure its initial state Then the protocol transfers to the Event Dispatcher state, which is the kernel of the protocol model This state contains two sub-states, Wait For Event state and Event Handler state, which construct a loop Initially, all protocols are waiting for the events headed to them to happen, in which case the protocol transfers to the Event Handler state to call the specific handler function to process the event Afterwards, the protocol transfers back to the Wait For Event state to wait for the next event to happen After all potential events in the simulation are processed, the protocol transfers to the last state called Finalization, in which the protocol may print out the packet tracing data and also the simulation statistics into some output files
Figure 7: Protocol model in QualNet
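A protocol following this three-state model can be skeletonised as below; the function names and the event-type encoding are our own illustration, not QualNet's required signatures.

#include <cstdio>

// Minimal skeleton of the three-state protocol model:
// Initialization -> Event Dispatcher (loop) -> Finalization.
struct MyProtocolState {
    int packetsSent = 0;   // example statistic collected during the simulation
};

void MyProtocolInit(MyProtocolState& state /*, configuration input */)
{
    // Read parameters from the configuration and set the initial state.
    state.packetsSent = 0;
}

void MyProtocolHandleEvent(MyProtocolState& state, int eventType)
{
    // Event Handler sub-state: dispatch on the event type, then the protocol
    // conceptually returns to the Wait For Event sub-state.
    switch (eventType) {
        case 0: /* packet event: process or forward the packet  */ state.packetsSent++; break;
        case 1: /* timer event: e.g. periodic Hello broadcast    */ break;
        default: break;
    }
}

void MyProtocolFinalize(const MyProtocolState& state)
{
    // Print the statistics collected during the simulation.
    std::printf("Packets sent: %d\n", state.packetsSent);
}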
4.1.3 Application Program Interface
QualNet provides several Application Program Interface (API) functions for event operations. Some of these APIs can be called from any layer, while others can only be called by specific layers. The complete list of APIs and their detailed descriptions can be found in the API reference provided by QualNet. Here we select some examples which are helpful for our implementation in QualNet.
MESSAGE_Alloc: Allocates a new Message structure; it is called when a new message has to be sent through the system.
MESSAGE_Free: Called to free a message when the message is no longer needed in the system.
MESSAGE_AddInfo: Allocates one "info" field with a given info type for the message.
MESSAGE_RemoveInfo: Removes one "info" field with a given info type from the info array of the message.
MESSAGE_PacketAlloc: Allocates the "payLoad" field for the packet to be delivered; it is called when a message is a packet event carrying specific data.
MESSAGE_AddHeader: Adds a new header with the specified header size to the packet enclosed in the message.
MESSAGE_RemoveHeader: Removes the header from the packet enclosed in the message.
MESSAGE_Send: Called to pass a message within QualNet.
IO_ReadNodeInput: Called to read parameters from the external configuration file.
IO_PrintStat: Called to print out the statistics collected during the simulation into an output file.
NetworkIpSneakPeekAtMacPacket: Called directly by the MAC layer; this allows a routing protocol to "sneak a peek" at, or "tap", messages it would not normally see from the MAC layer.
Besides these APIs for processing messages, there are many other APIs, enclosed in the Scheduler and Queue classes, that deal with the queuing system. For example, some APIs are used to insert a packet into or de-queue a packet from a queue, while others are used to construct various kinds of queuing systems. There are also APIs for different queue management schemes, such as FIFO, RIO or RED queues.
4.1.4 QualNet Simulator Architecture
From the section on the protocol model in QualNet, we have learned that a protocol is modeled as a finite state machine with three states: Initialization, Event Dispatcher and Finalization. However, how are those protocols managed as a stack in QualNet, similar to the TCP/IP protocol stack in the real world? The protocols are grouped into layers in the protocol stack of the TCP/IP model; in QualNet this is achieved by registering the protocols in the event type table and protocol type table managed by QualNet. What is more, the corresponding event handler function should be embedded in each layer's entrance function. Take the AODV routing protocol as an example: in a wireless ad-hoc network whose nodes run the AODV routing protocol, when a packet is passed to the network layer from the transport layer, the entrance function in the network layer is checked to determine whether this packet needs to be routed. If it does, the network layer entrance function calls the embedded event handler function of the AODV protocol registered in the protocol table to process this packet further.
Another issue is how this protocol stack operates in QualNet. As mentioned above, a protocol model in QualNet has three components, Initialization, Event Dispatcher and Finalization, which operate in a hierarchical manner: first at the node level, then at the layer level and finally at the protocol level.
At the start of the simulation, each node in the network is initialized by the QualNet kernel. The initialization function of the node calls the initialization function of each layer. The layers are initialized in bottom-up order. All the layers are initialized one node at a time, except the MAC layer, which is initialized locally. Each layer initialization function then calls the initialization functions of all the protocols running in that layer. The initialization function of a protocol creates and initializes the protocol state variables; if the value of a variable is not given by the user, a default value is selected during the Initialization state. After all the nodes are initialized, the simulator is ready to generate and process events.
When an event occurs, the QualNet kernel passes the handle to the node where the event occurs. The node calls a dispatcher function to determine which layer should process the event further. The event dispatcher function of that layer then calls the event dispatcher function of the appropriate protocol, based on the protocol type information enclosed in the event, normally in the packet header. The protocol event dispatcher function then calls the corresponding event handler function to perform the actions for the event.
At the end of the simulation, finalization functions are called automatically and hierarchically, in a manner similar to the initialization functions. Finalization functions usually print out the statistics collected during the simulation.
4.2 Programming Abstraction in QualNet
Since a native QualNet installation does not support any network coding functionality, we must first implement the opportunistic network coding functionality in QualNet's protocol stack before we can evaluate its performance via simulations. In this section we describe the implementation details of our opportunistic network coding scheme in QualNet. As mentioned in Chapter 2, this opportunistic network coding scheme works between the MAC and IP layers by inserting an extra coding header between the MAC and IP headers. However, in order to be consistent with the real network architecture, we do not create an extra independent communication layer between the Link and Network layers. Instead, we implement the opportunistic network coding protocol in the network layer, but at its bottom entrance. This means the opportunistic network coding protocol processes a packet passed up from the MAC layer before passing it to the IP protocol. In the other direction, before the IP protocol passes an IP packet down to the MAC layer, the network coding protocol traverses the packets in the output queue to search for an opportunity to combine several packets together.
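Conceptually, the coding module therefore acts as a shim at the bottom of the network layer, as sketched below. The class and method names are our own illustration of the two interception points and do not correspond to actual QualNet entry points.

#include <cstdint>
#include <vector>

struct IpPacket {
    std::vector<uint8_t> bytes;
};

// Illustrative interfaces standing in for the IP protocol and the MAC layer.
struct IpLayer  { void deliver(const IpPacket&)  {} };
struct MacLayer { void transmit(const IpPacket&) {} };

class NetworkCodingShim {
public:
    NetworkCodingShim(IpLayer& ip, MacLayer& mac) : ip_(ip), mac_(mac) {}

    // Upward path: packets handed up by the MAC layer are examined (reception
    // reports extracted, coded packets decoded) before reaching the IP protocol.
    void fromMac(const IpPacket& received) {
        IpPacket native = decodeIfCoded(received);
        ip_.deliver(native);
    }

    // Downward path: before IP hands a packet to the MAC layer, the shim looks
    // for coding opportunities in the output queue and attaches the coding header.
    void toMac(const IpPacket& outgoing) {
        IpPacket combined = tryCodeWithQueuedPackets(outgoing);
        mac_.transmit(combined);
    }

private:
    IpPacket decodeIfCoded(const IpPacket& p) { return p; }            // placeholder for Section 2.4
    IpPacket tryCodeWithQueuedPackets(const IpPacket& p) { return p; } // placeholder for Section 2.3
    IpLayer& ip_;
    MacLayer& mac_;
};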
Before describing the details of the functions, we present the states and variables that our network coding protocol maintains. Figure 8 shows the overall map of the structures created for the protocol.
The main structure maintained by the protocol is CopeData, which contains several smaller data structures. It is initialized by the protocol initialization function, CopeInit(), at the beginning of the simulation. Recall from the protocol model in QualNet that the protocol initialization function is called hierarchically to read the parameters from the external configuration files and initialize the protocol state. As can be seen, there are two local buffers in CopeData, i.e. "ppScheduler" and "reportInfo". "ppScheduler" is managed as a FIFO queue and is the packet pool storing the packets overheard from neighbors. There are three functions to manage this queue: CopePpQueueInit() initializes the queue system, CopePpQueueInsert() inserts a copy of a newly overheard packet into the local packet pool, and CopePpQueueExtract() extracts specific packets from this packet pool for decoding. The capacity of this queue is set to the default value of QualNet, and when the queue is full, the head of the queue is dropped automatically due to the FIFO property. The other local buffer is "reportInfo", which is in fact a hash table keeping the reception report information for the neighbors and for the node itself. The head of this table is the node's own reception report record. When a new neighbor sends a reception report to this node, the CreateSelfReport() function is called to insert a new element into the "reportInfo" hash table. If an existing neighbor sends new information to this node, the CopeReportUpdate() and Cope_SubReportUpdate() functions are called to update the records for this neighbor. To facilitate debugging, we add another function, CopePrintReportInfo(), to print out the entire content of this local buffer. The length of this hash table is not limited, since the number of neighbors of one specific node would not be large enough to consume a large amount of memory; however, the array for each element in this hash table is upper bounded by MAX_Entry, which is set to 128 in our implementation. Besides these two local buffers, the protocol also keeps a structure, CopeStats, to collect statistics during the simulation. These statistics are printed out by the CopeFinalize() function at the end of the simulation. Another important variable is pkt_seq_no, which records the local packet IP sequence number and, together with the packet's original source address, identifies a specific IP packet in the network. Some other entries in the CopeData structure are variables storing the parameter values configured in the external configuration file, such as "helloInterval", storing the Hello message interval, and "processHello", storing the Boolean value indicating whether to process Hello messages, and so on. Besides the main data structure CopeData, which the protocol uses to maintain its state, the CopeHeaderType structure is used by the protocol to represent the extra coding header. It is created according to the packet header structure discussed in Chapter 3. The functions CopeAddHeader() and CopeRemoveHeader() are used to add the extra coding header to an IP packet or remove it.
Having explained the overall data structures required by this opportunistic network coding scheme, we next describe the programming details of the algorithm. For the sender side, the challenging part is checking the coding opportunity and intelligently combining packets together. Based on the flowchart shown in Figure 5(a) in Chapter 3, we construct a more detailed modeling diagram for the sender-side function CopeCoding(), shown in Figure 9.