Artificial neural networks have been used in areas of communication systems including signal processing and call management (Kartalopolus, 1994). This chapter suggests a further use of neural networks for the maintenance of application-tailored communication systems. In this context, a neural network minimises the difference between an application's required Quality of Service (QoS) and that provided by the end-to-end connection.
Communication systems based on the ISO Open System Interconnection (OSI) model historically suffered inefficiencies such as function duplication and excessive data copying.
Telecommunications Optimization: Heuristic and Adaptive Techniques, edited by D. Corne, M.J. Oates and G.D. Smith.
Copyright © 2000 John Wiley & Sons Ltd. ISBNs: 0-471-98855-3 (Hardback); 0-470-84163-X (Electronic)
However, a combination of modern protocol implementation techniques and an increase in the power and resources of modern computers has largely eliminated these overheads. Zitterbart (1993) defines the characteristics of various distributed applications and identifies four possible classes based on their communication requirements. Table 12.1 illustrates these classifications and highlights the broad range of transport services required by modern distributed applications. In the face of such diversity, the challenge of optimizing performance shifts from the efficiency of individual mechanisms to the provision of a service that best satisfies the broad range of application requirements. Providing such a service is further complicated by external factors such as end-to-end connection characteristics, host heterogeneity and fluctuations in network utilization. Traditional protocols, such as TCP/IP, do not contain the broad functionality necessary to satisfy all application requirements in every operating environment. In addition, the QoS required by an application may change over the lifetime of a connection. If a protocol provides a greater QoS than is required then processor time and network bandwidth may be wasted. For these reasons, applications that use existing protocols do not necessarily receive the communication services they require.
Table 12.1 Diversity of application transport requirements.

Application              Throughput  Burst   Delay  Jitter  Order  Loss  Priority
                                     factor  sens.  sens.   sens.  tol.  delivery
Interactive Voice        Low         Low     High   High    Low    High  No
Time Critical Teleconf.  Mod         Mod     High   High    Low    Mod   Yes
Configurable protocols offer customised communication services that are tailored to a particular set of application requirements and end-to-end connection characteristics. They may be generated manually, through formal languages or graphical tools, or automatically with code-scanning parsers that determine application communication patterns.
12.1.2 Adaptable Communication Systems
Whilst configurable communication systems provide a customized service, they are unable to adapt should the parameters on which they were based change. Adaptable protocols support continuously varying application requirements by actively selecting internal protocol processing mechanisms at runtime. There are several advantages in this:
1. Application QoS: it is not uncommon for an application to transmit data with variable QoS requirements. For example, a video conferencing application may require different levels of service depending upon the content of the session. Consider a video sequence that consists of a highly dynamic set of action scenes followed by a relatively static close-up sequence. The first part, due to rapid camera movement, is reasonably tolerant of data loss and corruption, but intolerant of high jitter. In contrast, the static close-up scenes are tolerant of jitter but require minimal data loss and corruption.
2. Connection QoS: adaptable protocols are able to maintain a defined QoS over varying network conditions. Whilst certain architectures offer guaranteed or statistical services, the heterogeneous mix of interconnection devices that form the modern Internet does little to cater for end-to-end QoS. The adverse effects of variables such as throughput, delay and jitter can be minimised by using appropriate protocol mechanisms.
3. Lightweight: certain environments are able to support service guarantees such as defined latency and transfer rates. Once these are ascertained, an adaptable protocol may remove unnecessary functions to achieve higher transfer rates.
The Dynamic Reconfigurable Protocol Stack (DRoPS) (Fish et al., 1998) defines an architecture supporting the implementation and operation of multiple runtime adaptable communication protocols. Fundamental protocol processing mechanisms, termed microprotocols, are used to compose fully operational communication systems. Each microprotocol implements an arbitrary protocol processing operation. The complexity of a given operation may range from a simple function, such as a checksum, to a complex layer of a protocol stack, such as TCP. The runtime framework is embedded within an operating system and investigates the benefits that runtime adaptable protocols offer in this environment. Mechanisms are provided to initialize a protocol, configure an instantiation for every connection, manipulate the configuration during communication and maintain consistent configurations at all end points. Support is also provided for runtime adaptation agents that automatically reconfigure a protocol on behalf of an application. These agents execute control mechanisms that optimize the configuration of the associated protocol. The remainder of this chapter will address the optimization of protocol configuration. Other aspects of the DRoPS project are outside the scope of this chapter, but may be found in Fish et al. (1998; 1999) and Megson et al. (1998).
12.2 Optimising Protocol Configuration
The selection of an optimal protocol configuration for a specific, but potentially variable, set of application requirements is a complex task. The evaluation of an appropriate configuration should at least consider the processing overheads of all available microprotocols and their combined effect on protocol performance. Additional consideration should be paid to the characteristics of the end-to-end connection. This is due to the diversity of modern LANs and WANs, which are largely unable to provide guaranteed services on an end-to-end basis. An application using an adaptable protocol may manually modify its connections to achieve an appropriate service (work on ReSource reserVation Protocols (RSVP) addresses this issue).
Whilst providing complete control over the functionality of a communication system, the additional mechanisms and extra knowledge required for manual control may deter developers from using an adaptable system. History has repeatedly shown that the simplest solution is often favoured over the more complex, technically superior, one. For example, the success of BSD Sockets may be attributed to its simple interface and abstraction of protocol complexities. Manual adaptation relies on the application being aware of protocol-specific functionality, the API calls to manipulate that functionality and the implications of reconfiguration. The semantics of individual microprotocols are likely to be meaningless to the average application developer. This is especially true in the case of highly granular protocols such as those advocated by the DRoPS framework. As previously stated, protocol configuration is dependent as much on end-to-end connection characteristics as on application requirements. Manual adaptation therefore requires network performance to be monitored by the application, or extracted from the protocol through additional protocol-specific interfaces. Both approaches increase the complexity of an application and reduce its efficiency. Finally, it is unlikely that the implications of adaptation are fully understood by anyone but the protocol developer themselves. These factors place additional burdens on a developer, who may subsequently decide that an adaptable protocol is just not worth the effort. If it is considered that the 'application knows best' then manual control is perhaps more appropriate. However, it is more likely to be a deterrent in the more general case.
It would be more convenient for an application to specify its requirements in more abstract QoS terms (such as tolerated levels of delay, jitter, throughput, loss and error rate) and allow some automated process to optimize the protocol configuration on its behalf.
A process wishing to automate protocol optimization must evaluate the most appropriate protocol configuration with respect to the current application requirements as well as end-to-end connection conditions. These parameters refer to network characteristics (such as error rates), host resources (such as memory and CPU time) and scheduling constraints for real-time requirements. The complexity of evaluating an appropriate protocol configuration is determined by the number of conditions and requirements, the number of states that each may assume, and the total number of unique protocol configurations.
Within DRoPS, a protocol graph defines default protocol structure, basic function dependencies and alternative microprotocol implementations. In practice, a protocol developer will specify this in a custom Adaptable Protocol Specification Language (APSL). Defining such a graph reduces the number of possible protocol configurations to a function of the number of objects in the protocol graph and the number of alternative mechanisms provided by each. This may be expressed as:
\[
\prod_{k=1}^{K} F_k \qquad (12.1)
\]
where $F_k$ is the number of states of configuration $k$, and $K$ the total number of functions in the protocol graph. The automated process must therefore consider $N$ combinations of requirements, conditions and configurations, which is defined as:
\[
N = \prod_{i=1}^{I} C_i \prod_{j=1}^{J} R_j \prod_{k=1}^{K} F_k \qquad (12.2)
\]
where $C_i$ is the number of states of condition $i$, $R_j$ the number of states of requirement $j$, and $I$ and $J$ are the total numbers of conditions and requirements. This represents the total number of evaluations necessary to determine the most appropriate configuration for each combination of requirements and conditions. The complexity of this task increases relentlessly with small increases in the values of $I$, $J$ and $K$, as illustrated in Figure 12.1. Part (a) shows the effect of adding extra protocol layers and functions, and part (b) the effect of increasing the condition and requirement granularity.
12.2.1 Protocol Control Model
The runtime framework supports mechanisms for the execution of protocol-specific adaptation policies. These lie at the heart of a modular control system that automatically optimises the configuration of a protocol. The methods used to implement these policies are arbitrary and of little concern to the architecture itself. However, the integration of DRoPS within an operating system places several restrictions on the characteristics of these policies. The adaptation policy must possess a broad enough knowledge to provide a good solution for all possible inputs. However, in the execution of this task it must not degrade performance by squandering system-level resources. Therefore, any implementation must be small, to prevent excessive kernel code size, and lightweight, so as not to degrade system performance.
Adaptation policies are embedded within a control system, as depicted in Figure 12.2. Inputs consist of QoS requirements from the user and performance characteristics from the functions of the communication system. Before being passed to the adaptation policy, both sets of inputs are shaped. This ensures that values passed to the policy are within known bounds and are appropriately scaled to the expectations of the policy.
User requirements are passed to the control system through DRoPS in an arbitrary range of 0 to 10. A value of 0 represents a 'don't care' state, 1 a low priority and 10 a high priority. These values may not map 1:1 to the policy, i.e. the policy may only expect 0 to 3. The shaping function normalizes control system inputs to account for an individual policy's interpretation.
End-to-end performance characteristics are collected by individual protocol functions. Before being used by the policy, the shaping function scales these values according to the capability of the reporting function. For example, an error detected by a weak checksum function should carry proportionally more weight than one detected by a strong function. The shaped requirements and conditions are passed to the adaptation policy for evaluation. Based on the policy heuristic, an appropriate protocol configuration is suggested.
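The shaping step described above can be sketched as follows. The function names, policy range and checksum weights are illustrative assumptions, not the DRoPS implementation:

```python
# Hypothetical sketch of input shaping: user requirements arrive in the
# DRoPS range 0-10 and are rescaled to the range a particular policy
# expects; reported error counts are weighted by the strength of the
# detecting checksum.

def shape_requirement(value, policy_max=3.0):
    """Normalize a 0-10 user requirement to the policy's 0..policy_max
    range; 0 remains the 'don't care' state."""
    value = max(0.0, min(10.0, value))  # bound to the known input range
    return value * policy_max / 10.0

# Weight error reports inversely to checksum strength: an error that even
# a weak checksum caught suggests a noisier channel (weights assumed).
CHECKSUM_WEIGHT = {"block": 2.0, "crc32": 1.0}

def shape_error_report(errors, detector):
    """Scale a raw error count by the capability of the reporting function."""
    return errors * CHECKSUM_WEIGHT[detector]

print(shape_requirement(10))           # 3.0 (high priority)
print(shape_error_report(4, "block"))  # 8.0
```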
The existing and suggested configurations are compared and appropriate adaptation commands issued to convert the former into the latter. Protocol functions, drawn from a library of protocol mechanisms, are added, removed and exchanged, and the updated protocol configuration is used for subsequent communication. The DRoPS runtime framework ensures that changes in protocol configuration are propagated and implemented at all end points of communication. The new configuration should provide a connection with characteristics that match the required performance more closely than the old configuration. Statistics on the new configuration will be compiled over time and, if it fails to perform adequately, it will be adapted.
Figure 12.1 Increasing complexity of the configuration task.
Figure 12.2 Model of automatic adaptation control system.
12.2.2 Neural Networks as Adaptation Policies
Various projects have attempted to simplify the process of reconfiguration by mapping application-specified QoS requirements to protocol configurations. Work by Box et al. (1992) and Zitterbart (1993) classified applications into service classes according to Table 12.1 and mapped each to a predefined protocol configuration. The DaCaPo project uses a search-based heuristic, CoRA (Plagemann et al., 1994), for evaluation and subsequent renegotiation of protocol configuration. The classification of building blocks and measurement of resource usage are combined in a structured search approach, enabling CoRA to find suitable configurations. The properties of component functions, described in a proprietary language L, are based on tuples of attribute types such as throughput, delay and loss probability. CoRA configures protocols for new connections at runtime with respect to an application's requirements, the characteristics of the offered transport service and the availability of end-system resources. The second approach provides a greater degree of customisation, but the time permitted to locate a new configuration determines the quality of solution found. Beyond these investigations there is little work on heuristics for the runtime optimisation of protocol configuration.
In the search for a more efficient method of performing this mapping, an approach similar to that used in Bhatti and Knight (1998) for processing QoS information about media flows was considered. However, the volume of data required to represent and reason about QoS rendered this solution intractable for fine-grained protocol configuration in an operating system environment. Although impractical, this served to highlight the highly consistent relationships between conditions, requirements and the actual performance of individual configurations. For example, consider two requirements, bit error tolerance and required throughput, and a protocol with variable error checking schemes. The more comprehensive the error checking, the greater the impact it has on throughput. This is the case both for processing overhead (raw CPU usage) and for knock-on effects from the detection of errors (packet retransmission). As emphasis is shifted from correctness to throughput, the selection of error function should move from complete to non-existent, depending on the level of error in the end-to-end connection.
12.2.3 Motivation
If requirements and conditions are quantized and represented as a vector, the process of mapping to protocol configurations may be reduced to a pattern matching exercise. Initial interest in the use of neural networks was motivated by this fact, as pattern matching is an application at which neural networks are particularly adept. The case for neural network adaptation policies is strengthened by the following factors:
1. Problem data: the problem data is well suited to representation by a neural network. Firstly, extrapolation is never performed, due to shaping and bounding in the control mechanism. Secondly, following shaping, the values presented at the input nodes may not necessarily be discrete. Rather than rounding, as one would in a classic state table, the network's ability to interpolate allows the suggestion of protocol configurations for combinations of characteristics and requirements not explicitly trained.
2. Distribution of overheads: the largest overhead in the implementation and operation of a neural network is the training process. For this application, the overheads in off-line activities, such as the time taken to code a new protocol function or adaptation policy, do not adversely affect the more important runtime performance of the protocol. Thus, the overheads are moved from performance-sensitive online processing to off-line activities, where the overheads of generating an adaptation policy are minimal compared to the time required to develop and test a new protocol.
3. Execution predictability: the execution overheads of a neural network are constant and predictable. The quality of solution found does not depend upon an allotted search time and always results in the best configuration being found (quality of solution is naturally dependent on the training data).
12.2.4 The Neural Network Model
The aim of using a neural network is to capitalise on the factors of knowledge representation and generalisation to produce small, fast, knowledgeable and flexible adaptation heuristics. In its most abstract form, the proposed model employs a neural network to map an input vector, composed of quantized requirements and conditions, to an output vector representing desired protocol functionality.
A simple example is illustrated in Figure 12.3. Nodes in the input layer receive requirements from the application and connection characteristics from the protocol. The value presented to an input node represents the quantized state (for example low, medium or high) of that QoS characteristic. No restrictions are placed on the granularity of these states, and as more are introduced the ability of an application to express its requirements increases. Before being passed to the network input node, values are shaped to ensure they stay within a certain range expected by the policy. It should be noted that this process does not round these values to the closest state, as would be required in a state table. The network's ability to generalise allows appropriate output to be generated for input values not explicitly trained.
Figure 12.3 Mapping QoS parameters to protocol configuration.
When the network is executed, the values written to the nodes of the output layer represent the set of functions that should appear in a new protocol configuration. To achieve this, output nodes are logically grouped according to the class of operation they perform; individual nodes represent a single function within that class. Output nodes also represent non-existent functions, such as the node representing no error checking in the example. This forms a simple YES/NO pattern on the output nodes, represented by 1 and 0 respectively. For example, if error checking is not required, the node representing no error checking will exhibit a YES whilst the other nodes in this logical class will exhibit NO.

In many cases, the values presented at the output nodes will not be black and white, 1 or 0, due to non-discrete input values and the effect of generalisation. Therefore, the value in each node represents a degree of confidence that the function represented should appear in any new configuration. When more than one node in a logical group assumes a non-zero value, the function represented by the highest confidence value is selected. To reduce processing overhead, only protocol functions that have alternative microprotocols are represented in the output layer.
12.3 Example Neural Controller
This section outlines the steps taken to implement a neural network based adaptation policy for the Reading Adaptable Protocol (RAP). RAP is a complete communication system composed of multiple microprotocols; it contains a number of adaptable functions, summarised in Table 12.2, a subset of which are used in the example adaptation policy.
Table 12.2 Adaptable functionality of the Reading Adaptable Protocol.

Protocol mechanism            Alternative implementations
Buffer allocation             preallocated cache, dynamic
Fragmentation and reassembly  stream based, message based
Sequence control              none, complete
Flow control                  none, window based
Acknowledgement scheme        IRQ, PM-ARQ
Checksums                     none, block checking, full CRC
12.3.1 Adaptation Policy Training Data
A neural network gains knowledge through the process of learning. In this application, the training data should represent the most appropriate protocol configuration for each combination of application requirements and operating conditions. The development of a neural network adaptation controller is a three-stage process:
1. Evaluate protocol performance: this process determines the performance of each protocol configuration in each operating environment. Network QoS parameters are varied and the response of individual configurations logged.

2. Evaluate appropriate configurations: the result of performance evaluation is used to determine the most appropriate configuration for each set of requirements in each operating environment. This requires development of an appropriate fitness function.

3. Generate a policy: having derived an ideal set of protocol configurations, a neural network must be trained and embedded within an adaptation policy.
The result of these three stages is an adaptation policy that may be loaded into the DRoPS runtime framework and used to control the configuration of a RAP-based system.
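The fitness function of stage 2 can be sketched as a weighted score over measured metrics. The metric names, normalization and weighting scheme here are assumptions for illustration; the chapter does not specify the actual RAP fitness function:

```python
# Hypothetical sketch of stage 2: score each measured configuration
# against weighted requirements and keep the best one per set of
# operating conditions.

def fitness(measured, requirements):
    """measured: {metric: normalized 0-1 score, higher is better}.
    requirements: {metric: weight 0-10, 0 = don't care}.
    Returns a weighted-average fitness in [0, 1]."""
    total = sum(requirements.values())
    if total == 0:
        return 0.0  # application doesn't care about anything
    return sum(measured.get(m, 0.0) * w
               for m, w in requirements.items()) / total

def best_configuration(performance_log, requirements):
    """performance_log: {config_name: measured metrics under one
    operating condition}. Picks the highest-fitness configuration."""
    return max(performance_log,
               key=lambda c: fitness(performance_log[c], requirements))

log = {
    "crc+window":  {"throughput": 0.4, "loss": 0.95},
    "none+window": {"throughput": 0.9, "loss": 0.60},
}
# a throughput-hungry, loss-tolerant application
print(best_configuration(log, {"throughput": 8, "loss": 2}))  # none+window
```

Repeating this selection over every combination of requirements and conditions yields the input/output pairs used to train the network in stage 3.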
12.3.2 Evaluating Protocol Performance
The evaluation of protocol performance is performed by brute-force experimentation. During protocol specification, a configuration file is used to identify microprotocol resources and default protocol configurations. Using this file, it is possible for the APSL parser to automatically generate client and server applications that evaluate the performance characteristics of all valid protocol configurations.
Evaluating every protocol configuration in a static environment, where connection characteristics remain fixed, does not account for the protocol's performance over real-world connections in which connection characteristics are potentially variable. To function correctly in such circumstances, an adaptation policy requires knowledge of how different configurations perform under different conditions. To simulate precisely defined network characteristics, a traffic shaper is introduced. This intercepts packets traversing a host's