Distributed Event Monitoring for Software Defined Networks

Quan Vuong, Ha Manh Tran, Son Thanh Le
School of Computer Science and Engineering
International University – HCMC Vietnam National University
Email: {vquan, tmha, ltson}@hcmiu.edu.vn
Abstract—Software defined networking separates the data and control planes, which facilitates network management functions, especially by enabling programmable network control. Event monitoring is a fault management function that involves collecting and filtering event notification messages from network devices. This study presents an approach to distributed event monitoring for software defined networks. Monitoring events usually deals with a large amount of event log data; log collecting and filtering processes thus require a high degree of automation and efficiency. This approach takes advantage of the OpenFlow and syslog protocols to collect and store log events obtained from network devices on a syslog server. It also uses the adaptive semantic filtering method to filter and present non-trivial events for system administrators to take further actions. We have evaluated this approach on a network simulation platform and provide log collection and filtering results with analysis.
Keywords: Event Monitoring, Event Filtering, Syslog, Adaptive
Semantic Filtering, Software Defined Network
I. INTRODUCTION

Software defined network (SDN) [1], [2] is an emerging computer networking paradigm that separates the control and data planes for more efficient network management. Compared with traditional networks, SDN enables programmability for network control and abstraction for network infrastructure, thus assisting system administrators in managing large networks with a high level of efficiency and scalability. Event monitoring is one of the main network management functions, required by various functions ranging from performance monitoring and fault diagnosis to high availability and intrusion detection. It involves collecting and filtering a huge number of log events. Traditional network devices, during execution, report operational data and log event data to the managing systems through network management protocols such as SNMP [3], syslog [4], NETCONF [5] and the CLI. Monitoring events on SDN with automation and efficiency is thus demanding.
A common approach to deal with this demand in SDN is to upload all operational data and log event data to a centralized data store that is available for several SDN controllers. The changes of network devices need to be propagated to the entire network as quickly as possible, while a controller only manages a part of the network. We have proposed an approach to distributed event monitoring for SDN. This approach applies existing protocols and applications to collect and store log events on a syslog server. It also uses an enhanced method of adaptive semantic filtering to filter non-trivial events for system administrators to take further actions. The contribution is thus threefold:
1) Proposing an approach to distributed event monitoring for SDN focusing on collection and filtering.
2) Implementing the approach using the existing rsyslog tool and the ASF-BDT method.
3) Evaluating the approach on the ONOS simulation platform and providing result analysis.
The rest of the paper is structured as follows: the next section introduces some background on SDN focusing on architecture, planes and layers, an overview of the standard OpenFlow protocol [6] and simulation platforms. Section III presents the approach to distributed event monitoring for SDN, including the architecture, the syslog protocol and application, and the semantic filtering method. Several experiments in Section IV report the results of collecting and filtering log events before the paper is concluded in Section V.
II. BACKGROUND

A. SDN Architecture

Software Defined Network (SDN) [1] is an emerging approach for the next generation of computer networks. Contrary to traditional networks, SDN decouples the control and data planes, which are integrated in conventional network devices. The data plane only contains the forwarding devices (white box devices), and the control plane consists of controllers (the network brain) along with a network operating system (NOS) which is installed on the controllers to regulate the forwarding devices. The two planes communicate using the OpenFlow protocol, which is the first open southbound standard for SDN.
2015 International Conference on Advanced Computing and Applications
The controllers interact with the forwarding devices in the Network Infrastructure to coordinate network traffic or detect network faults via the Southbound Interface, which belongs to the data plane, as described in Figure 1.
As a well-defined paradigm, SDN provides many benefits for campuses and enterprises. Enterprises can feasibly apply SDN to build an efficient network that reduces operational cost, optimizes computing resources and improves business continuity. However, network faults usually occur when an increasing number of forwarding devices are connected to the network. Besides, controllers are logically centralized and regularly consulted by forwarding devices, which causes much overhead on the network. SDN controllers can therefore possibly fail.
The communication between the data and control planes goes through the Southbound Interface layer, which integrates with the OpenFlow protocol and contains several forwarding devices. Likewise, the management plane interacts with the control plane via the Northbound Interface, which provides some types of APIs such as REST APIs for developers to program applications for different purposes. Despite the fact that OpenFlow is the mainstream protocol for SDN, many other protocols are supported in the Southbound Interface layer, such as NETCONF, SNMP or OVSDB. An SDN controller contains a network operating system as a core responsible for tracking and distributing network environment information to the applications and synchronizing operational information among controllers.

Fig. 1. Software Defined Networks in (a) planes, (b) layers, and (c) system design architecture [7].
Generally, there are many types of NOS which support single or distributed instances. As single-instance NOSs, there are several controller frameworks such as NOX, Beacon, Ryu, Floodlight, etc. Moreover, some NOSs support both single and distributed deployment, including OpenDaylight, Onix and ONOS. Their architecture fosters multiple controllers which can tolerate network failures. When one of the controllers is down, the policy management of the distributed NOS promotes another controller to replace the failed controller in the shortest time. Due to high network resilience, such a NOS can serve large scale network systems. ONOS and OpenDaylight also support graphical user interfaces to visualize the network topology through web applications. This feature allows system administrators to manage and interact with the SDN network easily. Table I compares different controllers based on their specifications [12].
Onix is the first distributed SDN controller to implement a global network view. It originally targets network virtualization in data centers. It remains closed source and further development is not revealed in publications.

TABLE I. COMPARISON OF SDN CONTROLLERS [12]

Name          Architecture   Northbound API    Prog. Language
Floodlight    centralized    RESTful API       Java
Onix          distributed    NVP NBAPI         Python, C
OpenDaylight  distributed    REST, RESTCONF    Java
ONOS          distributed    RESTful API       Java
Floodlight is the first open source SDN controller that gained much attention in research and industry. It supports high availability in the manner of its proprietary sibling, Big Switch's Big Network Controller, via a hot standby system. It does not support a distributed architecture for large scale performance.

OpenDaylight is another open source SDN controller backed by a large consortium of networking companies. It implements a number of vendor-driven features. Similarly to ONOS, OpenDaylight runs on a cluster of servers for high availability using a distributed data store and leader election. The OpenDaylight clustering architecture currently evolves and has become one of the most used architectures.
B. OpenFlow Protocol

As mentioned above, OpenFlow [6] is a popular protocol which is integrated in many forwarding devices, such as Open vSwitch, in the SDN architecture. The controllers maintain the forwarding devices through the OpenFlow protocol. An OpenFlow device has many flow tables controlled by a pipeline process, and every flow table entry has three parts, as shown in Fig. 2: (i) a matching rule; (ii) actions to execute on matching packets; (iii) counters that keep the statistics of matched packets. When a packet arrives at an OpenFlow device, the lookup process searches the flow tables for matching rules. If the packet does not match, the device drops the packet or transfers it to the controller. Otherwise, the packet is processed by a set of actions which include: (i) forward the packet to the outgoing port(s); (ii) encapsulate and forward the packet to the controller; (iii) drop the packet; (iv) send the packet to the normal processing pipeline; (v) send the packet to the next flow table or to special tables, such as group or metering tables.
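This lookup logic can be illustrated with a deliberately simplified sketch (not the actual OpenFlow pipeline; real switches match on many header fields, carry priorities, and chain several tables, all of which are collapsed into one table here):

```python
# Simplified sketch of an OpenFlow-style flow table lookup.
class FlowTable:
    def __init__(self):
        self.entries = []          # list of (match_dict, actions, counters)

    def add_flow(self, match, actions):
        self.entries.append((match, actions, {"packets": 0}))

    def lookup(self, packet):
        """Return the actions of the first matching entry and update its
        counters, or hand the packet to the controller on a table miss."""
        for match, actions, counters in self.entries:
            if all(packet.get(k) == v for k, v in match.items()):
                counters["packets"] += 1
                return actions
        return ["CONTROLLER"]      # table miss: forward to the controller

table = FlowTable()
table.add_flow({"ip_dst": "10.0.0.2"}, ["OUTPUT:2"])
print(table.lookup({"ip_src": "10.0.0.1", "ip_dst": "10.0.0.2"}))  # ['OUTPUT:2']
print(table.lookup({"ip_dst": "10.0.0.9"}))                        # ['CONTROLLER']
```

The table-miss branch corresponds to action (ii) above: unmatched packets are encapsulated and sent to the controller, which decides how to install a new rule.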
Fig. 2. OpenFlow-enabled SDN devices [7].
The communication between the controllers and forwarding devices goes through the Secure Channel integrated in an OpenFlow-enabled switch, as shown in Fig. 3. The Secure Channel allows commands and packets to be sent between controllers and switches using the OpenFlow protocol and the Flow Table inside the forwarding devices. It is necessary to categorize switches into dedicated OpenFlow switches, which do not support normal Layer 2 and Layer 3 processing, and OpenFlow-enabled switches. The dedicated OpenFlow switches only support the OpenFlow protocol for forwarding packets between controllers and switches, whereas OpenFlow-enabled switches are commercial routers and switches enhanced with OpenFlow features. Typically, the Flow Table re-uses existing hardware, such as TCAM (Ternary Content Addressable Memory). The authors of the study [8] have presented the two current types of OpenFlow switches. The first generation of OpenFlow switches is referred to as Type 0, which supports the header formats shown in Fig. 2 and the four basic actions mentioned above. Furthermore, OpenFlow switches can rewrite portions of the packet header, for example for NAT, to obfuscate addresses on intermediate links, or to map packets to a priority class. Likewise, some Flow Tables can match on arbitrary fields in the packet header, enabling experiments with new non-IP protocols. Type 1 switches will be defined as a particular set of additional features emerges. The detailed requirements of an OpenFlow switch are defined by the OpenFlow Switch Specification [6].
Fig. 3. Controller access to the Flow Table using the Secure Channel [8].
To monitor flows in an SDN network, the authors of the study [9] have presented PayLess, a monitoring framework which provides a flexible RESTful API for flow statistics collection at different aggregation levels. The monitoring framework includes several components, such as a Request Interpreter, Scheduler, Switch Selector, Aggregator and Data Store, to process the flow statistics collection. The goal of the study is to achieve accurate and timely statistics while incurring little network overhead. Therefore, the authors proposed an adaptive monitoring algorithm which optimizes the time to collect flow statistics and improves the efficiency of the network.
In another approach, the authors of the study [10] have proposed a Distributed and Collaborative Monitoring (DCM) system for per-flow monitoring in the network. It allows the switches to achieve flow monitoring tasks and balance the measurement load collaboratively. DCM also uses novel two-stage Bloom filters to categorize groups of flows, because different types of flow groups require different actions. When switches receive a packet, they check the matching rules to monitor the flow of the packet. DCM uses a first-stage Bloom filter, referred to as the admission Bloom filter, to group the monitored flows; a second-stage Bloom filter, referred to as the action Bloom filter, then decides the corresponding monitoring actions. This technique saves switch processing resources and improves the performance of network monitoring.
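The two-stage idea can be sketched as follows (a minimal illustration of the admission/action filter structure, not the DCM implementation; the filter sizes, hash construction and flow-key format are arbitrary choices made for this sketch):

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter: no false negatives, small false-positive rate."""
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = [False] * size

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.md5(("%d:%s" % (i, item)).encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

class TwoStageMonitor:
    def __init__(self):
        self.admission = BloomFilter()   # stage 1: is this flow monitored?
        self.action_filters = {}         # stage 2: one filter per action

    def install(self, flow, action):
        self.admission.add(flow)
        self.action_filters.setdefault(action, BloomFilter()).add(flow)

    def classify(self, flow):
        if flow not in self.admission:
            return None                  # not a monitored flow
        for action, bf in self.action_filters.items():
            if flow in bf:
                return action
        return None
```

A switch would query `classify` for each packet's flow key; the cheap admission check filters out unmonitored traffic before the per-action lookups run.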
C. Simulation Platforms

To support distributed network systems, a NOS has to meet demanding requirements of scalability, performance and availability. ONOS (Open Network Operating System) from ON.Lab is a network operating system that can meet these challenges. ONOS uses the Apache Karaf [11] feature mechanism for its application subsystem to easily facilitate software delivery and management across all ONOS instances in a cluster. The authors of the study [12] have presented two prototypes of ONOS. Prototype 1 focuses on a distributed platform and fault tolerance, while Prototype 2 concentrates on improving performance by notifying events and minimizing latency.

With Prototype 1, the global network view allows system administrators to control and manage the network with visualization. Through web technology, they can monitor the whole network topology. Furthermore, ONOS supports multiple server instances, each of which acts as a master to manage a subset of switches. Consequently, the network achieves high availability, which is also a key feature requirement of ONOS. The distributed architecture of ONOS allows the network to operate continuously when one of the ONOS instances fails. To achieve this, the ONOS architecture has multiple redundant instances waiting for the failure of the master instance. The system then elects one of the redundant instances to become the new leader for managing the entire network.
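The failover step can be illustrated with a deliberately simplified election rule (ONOS actually performs mastership election through a distributed coordination service; the "lowest live identifier wins" rule below is only an assumption made for this sketch):

```python
def elect_leader(instances, alive):
    """Pick a replacement master from the instances that are still alive.
    Here the live instance with the smallest identifier wins."""
    candidates = sorted(i for i in instances if i in alive)
    return candidates[0] if candidates else None

cluster = ["onos1", "onos2", "onos3"]
print(elect_leader(cluster, {"onos1", "onos2", "onos3"}))  # onos1
print(elect_leader(cluster, {"onos2", "onos3"}))           # onos1 down -> onos2
```

The essential property is the one the paper relies on: every surviving instance deterministically agrees on the same new leader without the failed master's participation.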
In Prototype 2, event notification is built on inter-instance communication to publish events immediately based on the Hazelcast [13] distributed system. Events are raised when ONOS instances receive information about a topology change or flow installation. This mechanism improves the performance and resilience of the network system by discovering network events early, thus reducing the latency of network services.
To visualize the network topology in a web application, ONOS provides the Global Network View system, which is an application with a view of the network. The LLDP (Link Layer Discovery Protocol) protocol is used to collect the network information to visualize the network topology. The elements of the network include hosts, switches, links, and any state associated with the network, such as utilization. System administrators can program this network view through APIs. An API lets an application look at the view as a network graph. Some examples of network graph applications include: (i) creating a simple application to calculate the shortest path; (ii) optimizing network utilization by monitoring the network view and programming changes to paths to adjust load (traffic engineering); (iii) isolating a part of the network that is being upgraded or quarantined for virus detection.
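The first example, computing a shortest path on the network graph, can be sketched with a plain breadth-first search over an adjacency list (a generic illustration; ONOS exposes its graph through its own APIs, which are not reproduced here):

```python
from collections import deque

def shortest_path(graph, src, dst):
    """Breadth-first search on an unweighted graph {node: [neighbors]}.
    Returns the hop-minimal path from src to dst, or None."""
    prev, frontier = {src: None}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:                      # reconstruct path backwards
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nb in graph.get(node, []):
            if nb not in prev:
                prev[nb] = node
                frontier.append(nb)
    return None

topo = {"s1": ["s2", "s3"], "s2": ["s4"], "s3": ["s4"], "s4": []}
print(shortest_path(topo, "s1", "s4"))  # ['s1', 's2', 's4']
```

For unweighted topologies BFS already yields hop-minimal paths; a traffic-engineering application would replace it with a weighted search over link utilization.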
Fig. 4. ONOS distributed controller architecture [14].
Mininet [15] is one of the popular network simulation tools, which especially supports SDN. Mininet includes a lot of functionality, such as creating networks, customizing networks, a CLI and sharing networks. It also provides scalability and deployment of hundreds of nodes. For creating a network, it supports scripts to establish the network. The Python programming language is used to write scripts in Mininet. These scripts run from the CLI, and Mininet sets up the virtual network with switches, hosts and routers behaving like real devices. These devices can also be interacted with and controlled via the CLI. On the other hand, Mininet provides functions that allow remote controllers to connect to the network. Mininet can also be shared on many virtual machine platforms, such as VirtualBox, VMware or Xen, by pre-installing it in the first one and then exporting it to the new ones.
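Such a script typically builds the topology programmatically and attaches it to a remote controller. A minimal sketch (requires a Mininet installation and root privileges; the controller address and port are placeholders, not values from the paper):

```python
#!/usr/bin/env python
# Minimal Mininet script: two hosts on one switch, remote SDN controller.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.cli import CLI

net = Mininet(controller=RemoteController)
net.addController('c0', ip='127.0.0.1', port=6633)   # placeholder address
s1 = net.addSwitch('s1')
h1, h2 = net.addHost('h1'), net.addHost('h2')
net.addLink(h1, s1)
net.addLink(h2, s1)
net.start()
CLI(net)        # drop into the Mininet CLI to interact with the network
net.stop()
```

Pointing `addController` at an external IP is how Mininet hands control of the simulated switches to a controller such as ONOS.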
Docker [16] is an open platform for developers and system administrators to build, ship and run distributed applications. At its core, Docker provides a way to run almost any application securely isolated in a container. The isolation and security allow users to run many containers simultaneously on their host. The lightweight nature of containers, which run without the extra load of a hypervisor, means users can get more out of their hardware. Docker is similar to a virtual machine, but unlike a virtual machine, instead of creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system that they run on, and it only requires applications to be shipped with packages not already running on the host computer. This gives a significant performance boost and reduces the size of applications, as shown in Fig. 5.
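For instance, two controller containers can be started from a prebuilt image (illustrative commands only; the image name `onosproject/onos` and the container names are assumptions, not taken from the paper):

```
# start two detached ONOS controller containers on one host
docker run -t -d --name onos1 onosproject/onos
docker run -t -d --name onos2 onosproject/onos
```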
Fig. 5. Containers vs VMs [17].

III. DISTRIBUTED EVENT MONITORING

Fig. 6 shows an architecture design of distributed log event monitoring on SDN. This design allows hosts and switches to send log events to controllers, which in turn send them to a syslog server using the syslog protocol. Filtering functions work on the log event datasets to eliminate trivial events and provide the remaining events to system administrators or analysis supporting systems. The monitoring process is autonomous.
Fig. 6. An architecture of SDN distributed log event monitoring.

The controller manages several OF switches, or a part of a network with several hosts connected to a switch. The connection between a controller and a switch is a link based on the OF channel, as explained in the OpenFlow protocol. Controllers synchronize to share the operational data of forwarding devices, such as state and traffic. An operation deployed on each controller monitors the state of switches and hosts. The controller regularly sends messages to check whether a host is alive. It also exchanges messages to maintain the links among controllers, switches and hosts. When a switch or a host goes offline, an event is raised to the controller and saved in the store. The controller then informs the others of the device state change for data synchronization. The controllers maintain similar stores, and the log events are pushed to the syslog server using the syslog protocol.
The Rsyslog [18] application supports simple expression-based filtering functions for filtering messages. It generally offers four types of filtering conditions: severity-based and facility-based selectors, property-based and expression-based filters, and BSD-style blocks [19]. The expression-based filter facilitates filtering on arbitrarily complex expressions that contain boolean, arithmetic and string operations. This filter contains if-then rules and may possibly evolve into a full configuration scripting language. A rule is represented in the format: if expr then action-part-of-selector-line, where if and then are fixed keywords that must be present, expr is a (potentially quite complex) expression such as the comparison of messages and keywords, and action-part-of-selector-line is the saving path for filtered events. We have written multiple filtering scripts based on the expression-based filter to filter the event datasets by severity.
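For example, the following fragment (written against rsyslog's documented filter syntax; the file paths are placeholders) routes messages by content and by property:

```
# expression-based filter: keep messages that mention errors
if $msg contains 'error' then /var/log/sdn/error-events.log

# property-based filter: collect everything reported by one controller host
:hostname, isequal, "onos1" /var/log/sdn/onos1.log
```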
Fig. 7. Processes of the adaptive semantic filtering method.
The syslog protocol [4] is a standard transport protocol that allows a device to send event notification messages over IP networks to event message collectors or syslog servers. A syslog message carries the following information: facility, severity, hostname, timestamp and message. The facility identifies the source of the message. These sources can be operating systems, applications or processes. The facility is broadly categorized by the type of source and represented by an integer, e.g., 0 for kernel messages. The severity is also represented by a single-digit integer. It describes the importance of the event message, such as info, warn, error, debug, etc., which allows system administrators to manage the types of messages easily and filter the non-trivial event messages immediately. The hostname field contains the host name or IP address configured on the host. The message field contains the text of the syslog message with some additional information about the application that generated the message. The timestamp is the local time at which the event message is generated. The timestamp needs to be accurate because system administrators configure the network devices with synchronized clocks and conveniently look up the important event messages at that time. The generation time of messages also influences the precision of error messages: when two events are generated very close in time, they may correspond to the same error. The authors of the study [20] have proposed an approach of adaptive semantic filtering (ASF) for dealing with this problem.
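On the wire, facility and severity are packed into the priority value at the head of the message as facility × 8 + severity, so decoding them is a single division (a sketch following RFC 5424's PRI encoding):

```python
def parse_pri(message):
    """Decode the leading "<PRI>" of a syslog message into
    (facility, severity), where PRI = facility * 8 + severity."""
    pri = int(message[message.index("<") + 1 : message.index(">")])
    return pri // 8, pri % 8

# "<34>" encodes facility 4 (security/auth) and severity 2 (critical)
print(parse_pri("<34>Oct 11 22:14:15 onos1 app: failure detected"))  # (4, 2)
```

This is exactly the value the severity-based selectors in the rsyslog configuration test against.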
The ASF-BDT filtering application has applied adaptive semantic filtering with a bounded dynamic threshold [21] to correlate events in the log event datasets collected from the syslog server. This method is an enhanced version of the ASF method. It contains multiple filtering functions: simple filtering functions using specific fields and rules, temporal filtering functions using time stamps, and semantic filtering functions using the Φ coefficient and Pearson's correlation coefficient [22], as shown in Fig. 7. The result of this application is a set of non-trivial log events that allows system administrators to take actions for system reliability and stability.
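The semantic step relies on the Φ coefficient, which for two binary event indicators is Pearson's correlation computed from a 2×2 co-occurrence table (a generic sketch of the statistic itself; how ASF-BDT windows and counts events is not reproduced here):

```python
from math import sqrt

def phi_coefficient(n11, n10, n01, n00):
    """Φ for two binary event indicators.
    n11: windows where both events occur, n00: neither occurs,
    n10/n01: exactly one occurs. Equals Pearson's r for binary data."""
    num = n11 * n00 - n10 * n01
    den = sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return num / den if den else 0.0

print(phi_coefficient(10, 0, 0, 10))  # 1.0  (events always co-occur)
print(phi_coefficient(0, 10, 10, 0))  # -1.0 (events never co-occur)
```

Event pairs whose Φ falls inside the configured threshold range are treated as correlated, so one representative can stand in for the group.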
IV. EVALUATION

A. SDN Configuration

We have configured a topology with 6 switches and 2 ONOS controllers on Mininet, as shown in Fig. 8. Every switch connects to 6 hosts, except for the 2 switches in the middle. The SDN simulation offers a distributed network system running on an Inspiron 5447 with a Core i5-4210U processor at 1.70GHz (4 CPUs), 6GB RAM and a 1TB HDD with Ubuntu Server 14.04 LTS.

Fig. 8. A configured SDN topology using ONOS.

The controllers run as 2 ONOS controller instances created using the Docker platform. The syslog server is installed and run with Rsyslog [18]. We have also configured the Rsyslog server with a remote IP address and the default port 514 to collect log events from the ONOS controllers in the syslog message format. Note that the ONOS controllers send the event messages to the remote IP using the syslog protocol. The event messages are recorded and stored in the log files for fault analysis and detection.
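A setup of this shape is usually expressed in two small rsyslog fragments (illustrative only; the collector address is a placeholder): the server loads the TCP input module and listens on port 514, while each sender forwards all messages to the collector.

```
# server side: accept syslog over TCP on port 514
module(load="imtcp")
input(type="imtcp" port="514")

# sender side: forward every message to the collector (@@ = TCP)
*.* @@192.0.2.10:514
```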
We have created several activities and checked the events raised on the hosts. Since the hosts connect to ports on the switches, events thus reflect changes on the switches, such as host, port and link status. These events are raised and pushed to the syslog server. System administrators apply tools to the events from the log files for fault analysis and detection.
The Rsyslog tool provides a set of rules to filter event messages following the severity levels, such as info, warn, error, system and debug. The severity level assists system administrators in reducing a huge number of events. Fig. 9 describes the filtering rules in the rsyslog configuration used to collect events in log files.

Fig. 9. Filtering rules in the rsyslog configuration.
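Severity-based selectors of this kind split the incoming stream into per-level files; for example (a sketch using standard syslog selector syntax; the paths are placeholders):

```
# "=" matches exactly one severity instead of "this level and above"
*.=info     /var/log/sdn/info.log
*.=warning  /var/log/sdn/warn.log
*.=err      /var/log/sdn/error.log
```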
B. Event Collection

Fig. 10. Size of log events for a period of 5 days (x-axis: time in days; series: log data, info, warn and error events).
To evaluate the SDN automated log system as configured above, the log system automatically collects a large number of log datasets. A log dataset is collected from several log files that contain various types of log events. The total size of these datasets is more than 1.2 GB, including log events with info, warn, error and system messages. We have created several failure scenarios over the period of 5 days. Some scenarios include manually shutting down one or multiple switches and hosts during execution to record the non-trivial warning and failure event messages. The log dataset is separated by severity level into 3 smaller datasets: the info, warn and error datasets.
Fig. 10 reports the size of the different datasets collected over a period of 5 days. The total log dataset size increases exponentially, and over the last three days the dataset approximately reaches 800 MB. The warn dataset size increases similarly to the total log dataset size, while the remaining datasets grow slowly per day. The total number of events reaches 5.2 million for the whole log dataset, including approximately 2.4 million warn events and 50 thousand error events, as shown in Fig. 11.

Fig. 11. Number of log events for a period of 5 days (x-axis: time in days; series: log data, info, warn and error events).
Fig. 12. Log datasets for warn messages for a period of 5 days.

Specific statistics report the details of the warn and error datasets over the same period. While error events definitely require actions from system administrators, some warn events imply potential problems that need to be considered. The error dataset is also much smaller than the warn dataset. The warn dataset quickly increases to approximately 700 MB after 2 days, but many warn events are trivial, as shown in Fig. 12. We created several failure situations on network devices, so the automated log system emits a large number of warn events. The error dataset records error events after 3 days with few errors, as shown in Fig. 13.
Fig. 13. Log datasets for error messages for a period of 5 days.

C. Event Filtering

We have used three datasets for comparison: the whole log dataset, the warn dataset and the error dataset. Fig. 14 presents the numbers of resulting events for these datasets using ASF-BDT with a Φ range of (0.5, 0.8). Except for the error dataset, the number of resulting events increases considerably as the threshold increases. The whole log dataset reduces significantly after using ASF-BDT (by approximately 75%), and the info dataset contributes a large number of correlated events. The warn dataset reduces at the same rate as the whole log dataset, while the error dataset also contains several correlated events, such as I/O Error: Broken pipe. We observe that the experimental dataset possesses a high degree of event correlation.
Fig. 14. Resulting events for different datasets using ASF-BDT (x-axis: threshold; series: log data, warn and error events).
V. CONCLUSION

We have proposed an approach of distributed log event monitoring that assists system administrators in managing log events for SDN. The approach is characterized by the capability of collecting and filtering a large number of log events from network devices autonomously and efficiently. Collecting events requires the configuration of rsyslog and simulation applications, and the implementation of scripts that run with the existing OpenFlow and syslog protocols. Filtering events applies the ASF-BDT method to obtain non-trivial events precisely. We have configured an SDN network topology with Mininet and ONOS controllers on the Docker platform. The network system runs with multiple testing scenarios, such as shutting down some switches and hosts during execution to record warn and error event messages. Some experiments have collected a large volume of log datasets with different types of severities; the whole dataset is approximately 1.2 GB for a period of 5 collection days. Other experiments have correlated events in the dataset to provide non-trivial events. Future work focuses on setting up a larger SDN network topology with more controllers, switches and hosts, so that we can collect a huge amount of event log data with a high level of event diversity. We also plan to configure NETCONF, supported by the next version of ONOS, to improve monitoring of the SDN network.
ACKNOWLEDGEMENTS

This research activity is funded by Vietnam National University in Ho Chi Minh City (VNU-HCM) under the grant number C2015-28-02.
REFERENCES

[1] N. Feamster, J. Rexford, and E. Zegura. The Road to SDN. Queue, 11(12):20:20–20:40, December 2013.
[2] M. Boucadair and C. Jacquenet. Software-Defined Networking: A Perspective from within a Service Provider Environment. RFC 7149, March 2014.
[3] J. Case, M. Fedor, M. Schoffstall, and J. Davin. A Simple Network Management Protocol (SNMP). RFC 1157, May 1990.
[4] R. Gerhards. The Syslog Protocol. RFC 5424, March 2009.
[5] R. Enns, M. Bjorklund, J. Schönwälder, and A. Bierman. Network Configuration Protocol (NETCONF). RFC 6241, June 2011.
[6] The OpenFlow Switch Specification. URL: http://OpenFlowSwitch.org. Last access in May 2015.
[7] D. Kreutz, F. M. V. Ramos, P. E. Verissimo, C. E. Rothenberg, S. Azodolmolky, and S. Uhlig. Software-Defined Networking: A Comprehensive Survey. Proc. IEEE, 103(1):14–76, January 2015.
[8] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner. OpenFlow: Enabling Innovation in Campus Networks. SIGCOMM Comput. Commun. Rev., 38(2):69–74, March 2008.
[9] S. R. Chowdhury, M. F. Bari, R. Ahmed, and R. Boutaba. PayLess: A Low Cost Network Monitoring Framework for Software Defined Networks. In Proc. Network Operations and Management Symposium (NOMS'14), pages 1–9. IEEE, May 2014.
[10] Y. Yu, C. Qian, and X. Li. Distributed and Collaborative Traffic Monitoring in Software Defined Networks. In Proc. 3rd Workshop on Hot Topics in Software Defined Networking (HotSDN '14), pages 85–90. ACM, New York, NY, USA, 2014.
[11] The Apache Karaf. URL: http://karaf.apache.org/. Last access in May 2015.
[12] P. Berde, M. Gerola, J. Hart, Y. Higuchi, M. Kobayashi, T. Koide, B. Lantz, B. O'Connor, P. Radoslavov, W. Snow, and G. Parulkar. ONOS: Towards an Open, Distributed SDN OS. In Proc. 3rd Workshop on Hot Topics in Software Defined Networking (HotSDN '14), pages 1–6. ACM, New York, NY, USA, 2014.
[13] Hazelcast Distributed Data Structure. URL: http://docs.hazelcast.org/docs/latest/manual/html/distributed-data-structures.html. Last access in May 2015.
[14] An Open-Source Distributed SDN Operating System. URL: http://www.slideshare.net/albertspijkers/onos-sdn-open-networking. Last access in May 2015.
[15] B. Lantz, B. Heller, and N. McKeown. A Network in a Laptop: Rapid Prototyping for Software-Defined Networks. In Proc. 9th ACM SIGCOMM Workshop on Hot Topics in Networks (Hotnets-IX), pages 19:1–19:6. ACM, New York, NY, USA, 2010.
[16] The Docker Platform. URL: https://docs.docker.com/. Last access in May 2015.
[17] Containers vs VMs. URL: http://www.linuxfeed.org/2015/07/presentazione-a-docker/. Last access in May 2015.
[18] The Rocket Fast System for Log Processing. URL: http://www.rsyslog.com/. Last access in May 2015.
[19] stable/configuration/filters.html. Last access in May 2015.
[20] Y. Liang, Y. Zhang, H. Xiong, and R. K. Sahoo. An Adaptive Semantic Filter for Blue Gene/L Failure Log Analysis. In Proc. 21st International Parallel and Distributed Processing Symposium (IPDPS '07), pages 1–8. IEEE Computer Society, 2007.
[21] Adaptive Semantic Filtering with Bounded Dynamic Threshold for Log Data Analytics. Journal of Science and Technology, Vietnamese Academy of Science and Technology, 52:122–130, 2014.
[22] H. T. Reynolds. The Analysis of Cross-Classifications. The Free Press, New York, 1977.