

MINISTRY OF EDUCATION AND TRAINING

HANOI UNIVERSITY OF SCIENCE AND TECHNOLOGY

TRAN MANH NAM

CÁC PHƯƠNG PHÁP TIẾT KIỆM NĂNG LƯỢNG SỬ DỤNG CÔNG NGHỆ MẠNG ĐIỀU KHIỂN BẰNG PHẦN MỀM TRONG MÔI TRƯỜNG ĐIỆN TOÁN ĐÁM MÂY

SDN-BASED ENERGY-EFFICIENT NETWORKING IN CLOUD COMPUTING ENVIRONMENTS

DOCTORAL THESIS OF TELECOMMUNICATIONS ENGINEERING

HANOI - 2018


CONTENTS

LIST OF FIGURES viii

LIST OF TABLES x

INTRODUCTION 1

CHAPTER 1 AN OVERVIEW OF ENERGY-EFFICIENT APPROACHES IN CLOUD COMPUTING ENVIRONMENTS 6

1.1 Today's Internet 6

1.1.1 Cloud Computing Services and Infrastructures 6

1.1.2 Energy consumption problems 6

1.2 An Overview of Energy-Efficient Approaches 8

1.2.1 Energy consumption characteristics 8

1.2.2 Energy-Efficient Approaches' Classification 9

1.3 Software-defined Networking (SDN) technology 10

1.3.1 SDN Architecture 10

1.3.2 SDN Southbound API - OpenFlow Protocol 11

1.3.3 SDN Controllers 12

1.4 Difficulties on Network Energy Efficiency and Motivations 13

1.5 Dissertation’s Contributions 14

1.5.1 Proposing an energy-aware and flexible data center network that is based on the SDN technology 14

1.5.2 Proposing energy-efficient approaches in a network virtualization for cloud environments 14

1.5.3 Proposing an energy-aware data center virtualization for cloud environments 15

CHAPTER 2 SDN-BASED ENERGY-AWARE DATA CENTER NETWORK 16

2.1 Background Technologies 16

2.1.1 DCN technique and architecture 16

2.1.2 Existing system 22

2.2 Power-Control System of a DC Network 22

2.2.1 Energy modeling of a network 23

2.2.2 The Diagram of the Power-Control System 25

2.3 Energy-Aware Routing based on Power Profile of Devices in Data Center Networks using SDN 29

2.3.1 Energy-Aware Routing and Topology Optimization Algorithm 30

2.3.2 Performance evaluation 36

2.4 Green Data Center using centralized Power-control of the Network and servers 39

2.4.1 Extended Power-Control System 40

2.4.2 Use case 41

2.4.3 Topology-aware VM migration algorithm 43

2.4.4 VM Migration cost and Power modeling of a Server 45

2.4.5 Experimental Results 45

2.5 Conclusion 48

CHAPTER 3 ENERGY-EFFICIENT NETWORK VIRTUALIZATION FOR CLOUD ENVIRONMENTS 49


3.1 Network Virtualization and Virtual Network Embedding 51

3.2 Constructing Energy-Aware SDN-based Network Virtualization System 51

3.2.1 System’s Diagram 52

3.2.2 System’s workflow 53

3.3 Modeling and Problem Formulation 54

3.3.1 VNE Modeling 54

3.3.2 Objective and Constraints 55

3.3.3 Time-based Embedding Strategies 57

3.4 Energy-efficient VNE algorithms 58

3.4.1 Energy-cost Coefficient of Capacity 58

3.4.2 Virtual Node Mapping algorithms 59

3.4.3 Virtual Link Mapping (VLiM) Algorithm 62

3.5 Performance Evaluation 63

3.6 Conclusion 67

CHAPTER 4 AN ENERGY-AWARE DATA CENTER VIRTUALIZATION FOR CLOUD ENVIRONMENTS 68

4.1 Virtual DC Technologies 69

4.1.1 Virtual data center embedding 69

4.1.2 Virtual machine migration and server consolidation 71

4.1.3 Discussion 71

4.2 Design Objectives 73

4.3 Problem Formulation 74

4.3.1 Data Center Modeling 74

4.3.2 Energy Modeling of DC Components 75

4.3.3 Energy-Efficient Problem Formulation 76

4.4 A New Concept for VDC Embedding 77

4.4.1 Energy-aware VDC architecture 77

4.4.2 Energy-aware VDC embedding algorithm 78

4.4.3 Joint VDC Embedding and VM Migration Algorithms 81

4.5 Performance Evaluation 84

4.5.1 Performance criteria 84

4.5.2 Numerical results 85

4.6 Conclusion 91

CHAPTER 5 CONCLUSION AND FUTURE WORK 92

5.1 Major contributions 92

5.2 Future research directions 93

LIST OF PUBLICATIONS 94

REFERENCES 96


ABBREVIATIONS

ACPI Advanced Configuration & Power Interface

ASIC Application specific integrated circuits

D-ITG Distributed internet traffic generator

EA-NV Energy-aware network virtualization

EA-VDC Energy-aware Virtual Data Center

IaaS Infrastructure-as-a-service

ICT Information and communication technologies

PSnEP Power scaling and energy-profile-aware

RMN-EE Reducing middle node energy efficiency

SDSN Software-Defined Substrate Network


SNMP Simple network management protocol


LIST OF FIGURES

Figure 1.1: Estimate of the global carbon footprint of ICT (including PCs, telcos’ networks and devices, printers and datacenters) [15] 7

Figure 1.2: Energy consumption estimation for the European telcos’ network infrastructures in the ”Business-As-Usual” (BAU) and in the Eco-sustainable (ECO) scenarios, and cumulative energy savings between the two scenarios [16] 7

Figure 1.3: Operating Expenses (OPEX) estimation related to energy costs for the European telcos’ network infrastructures in the ”Business-As-Usual” (BAU) and in the Eco-sustainable (ECO) scenarios, and cumulative savings between the two scenarios [17] 8

Figure 1.4: SDN Architecture 11

Figure 1.5: OpenFlow controller and switches 12

Figure 2.1: DCN Architecture [43] 18

Figure 2.2: Three-tier DCN Architecture [45] 18

Figure 2.3: Fat-tree DCN Topology 19

Figure 2.4: Dcell DCN Architecture [53] 19

Figure 2.5: BCube DCN Architecture [54] 20

Figure 2.6: Fat-tree architecture with k = 4 21

Figure 2.7: Diagram of the ElasticTree system [57] 22

Figure 2.8: Energy – Utilization relation of a network [58] 23

Figure 2.9: Power-control System of a Network 26

Figure 2.10: Fat-tree topology with Minimum Spanning Tree 28

Figure 2.11: Power Scaling Algorithm 32

Figure 2.12: Power Scaling and Energy-Profile-Aware (PSnEP) algorithm (Proposed Algorithm 1). The flowchart describes the process between Edge and Aggregation switches 34

Figure 2.13: Use case with the PSnEP algorithm in a DCN 35

Figure 2.14: PSnEP vs Power scaling (PS) with k=6 Fat-tree, mix scenario 38

Figure 2.15: Energy-saving level ratio of the PSnEP algorithm to the PS algorithm in different sizes 39

Figure 2.16: Extended Power-Control system (Ext-PCS) 40

Figure 2.17: Example 42

Figure 2.18: First-fit Migration [67] Algorithm 42


Figure 2.19: Topology-Aware Placement Algorithm 43

Figure 2.20: K=8, comparison with full mesh scenario 46

Figure 2.21: K=16, comparison with full mesh scenario 47

Figure 2.22: K=8, comparison with Honeyguide 47

Figure 2.23: K=16, comparison with Honeyguide 48

Figure 3.1: FlowVisor – Hypervisor-like Network Layer [71] 50

Figure 3.2: Example of a virtual network on top of a physical network 51

Figure 3.3: Energy-Aware Network Virtualization system’s Diagram 52

Figure 3.4: Online VNE mapping method 57

Figure 3.5: Online using Time Window method 58

Figure 3.6: The GUI of an Energy-aware network virtualization platform 64

Figure 3.7: AR – Online 65

Figure 3.8: AR – Online using Time Windows 65

Figure 3.9: Percentage of Power Consumption to Full State in Online Strategy 65

Figure 3.10: Percentage of Power Consumption to Full State in OuTW Strategy 65

Figure 3.11: Comparison of consumed energy between Online and OuTW strategies 66

Figure 3.12: Comparison of acceptance ratio between Online and OuTW strategies 66

Figure 4.1: Traditional cloud service provider vs NaaS 68

Figure 4.2: Embedding virtual data center requests on a physical data center 70

Figure 4.3: Virtual data center embedding - Static mapping 72

Figure 4.4: Virtual data center embedding - Dynamic mapping 72

Figure 4.5: Energy proportional property of energy-aware data centers 73

Figure 4.6: Energy-Aware VDC Architecture 78

Figure 4.7: VDC Embedding Flowchart 79

Figure 4.8: Flowchart of Partial Migration (PM) 83

Figure 4.9: Migration on Arrival 84

Figure 4.10: Fluctuation of system utilization (SecondNet) 86

Figure 4.11: DC Utilization per Load 87

Figure 4.12: Acceptance Ratio per VM 87

Figure 4.13: Acceptance Ratio per VDC 88


Figure 4.14: Total power consumption of the physical DC 88

Figure 4.15: Average consumed power per serving VDC 89

Figure 4.16: Number of migrations for different strategies 90

Figure 4.17: Comparison of embedding - migration strategies 90

Figure 4.18: Different embedding-migration strategies: (a) GreenHead, (b) SecondNet, (c) Partial Migration, (d) Migration on Arrival, (e) Full Migration 91

LIST OF TABLES

Table 1.1: The Internet’s users in the world [1] 6

Table 1.2: Estimated power consumption sources in a generic platform of IP router 8

Table 1.3: Classification of energy-efficient approaches of the future Internet [4] 9

Table 2.1: Power Summary For A 48-Port Pronto 3240 30

Table 2.2: Energy consumption of NetFPGA-Based OpenFlow Switch 31

Table 2.3: Energy-saving ratio of PSnEP to Power scaling algorithm in different topology’s sizes 39

Table 2.4: Traffic demand 41

Table 2.5: Power profile of server Dell PowerEdge R710 46

Table 3.1: Virtual Network Embedding Terminology 54

Table 3.2: Acceptance ratio and power consumption of the system under different window size in OuTW 67

Table 4.1: Standard deviation of system utilization 86


INTRODUCTION

1 Overview of Network Energy Efficiency in Cloud Computing Environments

The advances in Cloud Computing services as well as Information and Communication Technologies (ICT) in the last decades have massively influenced economies and societies around the world. The Internet infrastructure and services are growing day by day and play a considerable role in all aspects of life, including business, education and entertainment. In the last four years, the percentage of people using the Internet has witnessed an annual growth of 3.5%, from 39% of the world population in Dec-2013 to 51.7% in June-2017 [1].

To support the demand for cloud network infrastructure and Internet services under this rapid growth of users, Internet providers need a large number of devices and complex designs and architectures capable of performing an increasing number of operations for scalability. Consequently, many huge cloud infrastructures have been deployed by Telcos, Internet Service Providers (ISPs) and enterprises to serve the exploding demand for various applications and cloud data services such as YouTube, Dropbox, e-learning, cloud office etc. To meet the requirements of these booming services all around the world, cloud network infrastructures have been built at a very large scale, even as geographically distributed data centers with a huge number of network devices and servers.

In addition, maintaining these systems at a high availability and reliability level requires a notable redundancy of devices such as routers, switches, links etc. As a result, such a large infrastructure consumes a huge volume of energy, which leads to consequent environmental and economic issues:

- Environmentally, the energy consumption and carbon footprint of the ICT sector is remarkable. The manufacture of ICT equipment, together with its use and disposal, is estimated to account for 2% of global CO2 emissions, which is equivalent to the contribution of the aviation industry [2]. Networking devices and components account for around 37% of the total ICT carbon emission [3];

- Economically, the huge power consumption leads to costs sustained by the providers/operators to keep the network up and running at the desired service level, and to their need to counterbalance the ever-increasing cost of energy.

Although network energy efficiency has recently attracted much attention from the research community [4], there are still many issues in the realization of energy-efficient networks, including the inflexibility of networks and the lack of energy-aware networks. The main difficulties of network energy efficiency, as well as the research motivations, are shortly described as follows:

- Inflexible network: first, an important issue of the networks in cloud data centers (DC) nowadays is inflexibility. To change the processing algorithms and the control plane of a network, its administrators must carefully re-design,


re-configure and migrate the network over a long period of time. In many cases, it is technically challenging for an administrator to apply new approaches and evaluate their efficiency. Consequently, a flexible and programmable network is strictly necessary. Secondly, there are difficulties in evaluating the energy-saving levels of new energy-efficient approaches in a network due to the lack of a centralized power-control system. Such a system allows administrators and developers to monitor, control and manage the working states as well as the power consumption of all network devices in real time.

- Energy-aware networking for virtualization technologies in cloud environments: cloud computing has emerged in the last few years as a promising paradigm that facilitates new service models such as Infrastructure-as-a-Service (IaaS), Storage-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Network-as-a-Service (NaaS).

For such kinds of cloud services, virtualization techniques including network virtualization [5] [6] [7] and data center virtualization [8] [9] [10] have quickly developed and attracted much attention from research and industrial communities. Currently, research in virtualization technologies mainly focuses on resource optimization and resource provisioning approaches [8] [9]. There are very few works focusing on the energy efficiency of a network. With the benefits of flexible control and resource management offered by virtualization technologies, as well as new network technologies such as Software-defined Networking (SDN) [11] [12] [13], research on network energy efficiency in virtualization is an important and promising direction.

Additionally, the SDN technology, the emergence of a new trend in networking, provides a new way to realize and optimize network energy efficiency. Software-defined networking [11] aims to change the inflexible state of networking by breaking vertical integration, separating the network’s control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. Consequently, SDN is an important key to resolving the aforementioned difficulties.

2 Research Scope and Methodology

a) Research Scope

The scope of this research is network energy efficiency in cloud computing environments, including: (1) energy efficiency in a centralized data center network; (2) energy efficiency in network virtualization; and (3) energy efficiency in data center virtualization. The proposed energy-efficient approaches are based on the Software-defined Networking technology [11] [12] [13].

b) Research Methodology: the research methodology follows the reference [14]:


o Step 1: Problem formulation:

▪ Interrogative form

▪ Describe relations among constructs

o Step 2: Hypothesis formulation: answering the problem statements

o Step 3: Research design: building a research plan for the research process, including survey, related work and experiments

o Step 4: Sampling and Data Collection

o Step 5: Data analysis

o Step 6: Manuscript Writing

3 Contributions and Structure of the Dissertation

Recently, Software-defined Networking technology [5] has emerged as an evolutionary step in Internet technologies that makes networking more flexible and programmable. SDN is an important key to resolving the difficulties of energy efficiency. This technology can also quickly realize virtualization technologies, including network virtualization and data center virtualization. Consequently, this dissertation focuses on SDN-based energy-efficient networking approaches in cloud environments with the following contributions:

- The SDN technology is used as the core technology in this dissertation for proposing energy-efficient network approaches. The first contribution of this dissertation resolves the lack of an energy-aware network in a DC by (1) proposing an SDN-based power-control system (PCS) of a network. The proposed system allows the network administrator to flexibly control and monitor the state of network devices and the energy consumption of the whole network infrastructure. Thanks to the flexibility and availability of this PCS system, several energy-efficient algorithms are proposed and successfully evaluated on it.

- The network virtualization (NV) technology in cloud environments has become more popular and plays an important role for cloud services such as Network-as-a-Service (NaaS) and Infrastructure-as-a-Service (IaaS). An energy-aware NV platform is necessary for network energy efficiency. Accordingly, (2) an SDN-based energy-aware network virtualization (EA-NV) platform is proposed in this dissertation. The platform is aware of the power consumption of the network virtualization environment. Two novel energy-efficient virtual network embedding algorithms are also proposed and implemented in this platform; they focus on increasing the energy-saving level while maintaining reasonable resource optimization of a network.

- Virtual data center technology is a concept of network virtualization in cloud environments that allows creating multiple separated virtual data centers (VDCs) on top of a physical data center [8] [9] [10]. Consequently, (3) an energy-aware virtual data center platform is deployed. On this system, novel energy-aware algorithms are also proposed, which focus on the following objectives: (1) resource


efficiency, which deals with efficient mapping of virtual resources onto substrate resources in terms of CPU, memory and network bandwidth; and (2) energy efficiency, which deals with minimizing the energy consumption of the virtual data center while meeting the virtual data center mapping demands.

The above contributions of this dissertation are organized as a collection of SDN-based network energy-efficient approaches, presented in five chapters as follows:

- The first chapter presents an overview of energy-efficient networking in cloud environments and the classification of existing approaches. The difficulties of the network energy efficiency area as well as the background of the Software-defined Networking technology are also described in detail.

- In the second chapter, an SDN-based power-control system (PCS) of a data center network is proposed. Based on this platform, developers can propose, implement and evaluate several network energy-saving algorithms. Two energy-efficient approaches, which are applied on the PCS system, are also proposed, with their results and algorithms published in:

✓ Tran Manh Nam, Nguyen Huu Thanh, Doan Anh Tuan, “Green Data Center Using Centralized Power-Management Of Network And Servers”, The 15th International Conference on Electronics, Information, and Communication (IEEE ICEIC), Jan 2016, Da Nang, Vietnam.

✓ Tran Manh Nam, Nguyen Huu Thanh, Ngo Quynh Thu, Hoang Trung Hieu, Stefan Covaci, “Energy-Aware Routing based on Power Profile of Devices in Data Center Networks using SDN”, the 12th IEEE ECTI-CON conference, Hua-Hin, Thailand, Jun 2015 - Achieved a Student Grant of ECTI-CON.

✓ Tran Manh Nam, Truong Thu Huong, Nguyen Huu Thanh, Pham Van Cong, Ngo Quynh Thu, Pham Ngoc Nam, “A Reliable Analyzer for Energy-Saving Approaches in Large Data Center Networks”, IEEE ICCE - The International Conference on Communications and Electronics, 2014, Da Nang, Vietnam.

✓ Tran Manh Nam, Tran Hoang Vu, Vu Quang Trong, Nguyen Huu Thanh, Pham Ngoc Nam, “Implementing Rate Adaptive Algorithm in Energy-Aware Data Center Network”, National Conference on Electronics and Communications (REV2013-KC01), Hanoi, Vietnam.

- The third chapter describes an energy-aware network virtualization concept with power monitoring and control abilities. The proposed concept is SDN-based, which allows developers to implement several energy-efficient virtual network embedding algorithms. Two energy-efficient embedding algorithms, namely heuristic energy-efficient node mapping and reducing middle node energy efficiency, are proposed in this chapter. The results and algorithms of this chapter are published in:


✓ Tran Manh Nam, Nguyen Huu Thanh, Nguyen Hong Van, Kim Bao Long, Nguyen Van Huynh, Nguyen Duc Lam, Nguyen Van Ca, “Constructing Energy-Aware Software-Defined Network Virtualization”, Proceedings of the Asia-Pacific Advanced Network Research Workshop (APAN-NRW), August 10th-14th 2015, Kuala Lumpur, Malaysia - (best student paper award).

✓ Thanh Nguyen Huu, Anh-Vu Vu, Duc-Lam Nguyen, Van-Huynh Nguyen, Manh-Nam Tran, Quynh-Thu Ngo, Thu-Huong Truong, Tai-Hung Nguyen, Thomas Magedanz, “A Generalized Resource Allocation Framework in Support of Multilayer Virtual Network Embedding based on SDN”, Elsevier Computer Networks, 2015.

✓ Nam T.M., Huynh N.V., Thanh N.H. (2016), “Reducing Middle Nodes Mapping Algorithm for Energy Efficiency in Network Virtualization”, In: Advances in Information and Communication Technology, ICTA 2016, Advances in Intelligent Systems and Computing, vol 538, Springer, Cham. https://doi.org/10.1007/978-3-319-49073-1_54

✓ Tran Manh Nam, Nguyen Tien Manh, Truong Thu Huong, Nguyen Huu Thanh (2018), “Online Using Time Window Embedding Strategy in Green Network Virtualization”, International Conference on Information and Communication Technology and Digital Convergence Business (ICIDB-2018), Hanoi, Vietnam (presented).

- The SDN-based energy-aware Virtual Data Center (VDC) approach is presented in the fourth chapter. The VDC technology and its main problem, namely the VDC embedding problem, are described in detail. Three joint VDC embedding and VM migration strategies are proposed and evaluated on top of this SDN-based VDC concept. The experimental results and detailed algorithms of this chapter are published in:

✓ Tran Manh Nam, Nguyen Van Huynh, Le Quang Dai, Nguyen Huu Thanh, “An Energy-Aware Embedding Algorithm for Virtual Data Centers”, ITC28 - International Teletraffic Congress, Sep 2016, Wurzburg, Germany.

✓ Tran Manh Nam, Nguyen Huu Thanh, Hoang Trung Hieu, Nguyen Tien Manh, Nguyen Van Huynh, Tuan Hoang (2017), “Joint Network Embedding and Server Consolidation for Energy-Efficient Dynamic Data Center Virtualization”, Elsevier Computer Networks, 2017. doi.org/10.1016/j.comnet.2017.06.007

- In the last chapter, the conclusion of the dissertation and its future work are presented.


CHAPTER 1 AN OVERVIEW OF ENERGY-EFFICIENT APPROACHES IN CLOUD COMPUTING ENVIRONMENTS

This chapter provides an overview of today's Internet and of the energy-efficient approaches in cloud computing environments on which the networking community is currently focusing. The chapter also addresses the difficulties and motivations of network energy efficiency and the future Internet technologies in cloud computing environments, including the Software-Defined Networking technology, network virtualization technology and data center virtualization technology. In a nutshell, the research approaches and contributions of this dissertation are summarized in this chapter.

1.1 Today's Internet

1.1.1 Cloud Computing Services and Infrastructures

The advances in Information and Communication Technologies (ICT) in the last decades have massively influenced economies and societies around the world. The Internet services as well as cloud computing services are growing day by day and play a considerable role in all aspects of life, including education, business and entertainment. As shown in Table 1.1, in the last four years the percentage of people using the Internet has witnessed an annual growth of 3.5%, from 39% of the world population in Dec-2013 to 51.7% in June-2017 [1].

Table 1.1: The Internet’s users in the world [1]

Date Number of users World population’s percentage

1.1.2 Energy consumption problems

Although the benefits of having such an infrastructure are considerable, such a large system consumes a high volume of energy and leads to consequent issues:


Figure 1.1: Estimate of the global carbon footprint of ICT (including PCs, telcos’ networks and devices, printers and datacenters) [15]

- Environmentally, the energy consumption and carbon footprint of the ICT sector is remarkable (Figure 1.1). Gartner, the ICT research and advisory company, estimates that the manufacture of ICT equipment, its use and its disposal account for 2% of global CO2 emissions, which is equivalent to the contribution of the aviation industry [2]. Networking devices and components account for around 37% of the total ICT carbon emission [3];

- Economically, the huge power consumption leads to costs sustained by the providers/operators to keep the network up and running at the desired service level, and to their need to counterbalance the ever-increasing cost of energy (Figure 1.2 and Figure 1.3).

Figure 1.2: Energy consumption estimation for the European telcos’ network infrastructures in the ”Business-As-Usual” (BAU) and in the Eco-sustainable (ECO) scenarios, and cumulative energy savings between the two scenarios [16]

Because of these issues, the requirement of designing a high-performance and energy-efficient network has become a crucial matter for Telcos and ISPs on the way towards greener cloud environments.


Figure 1.3: Operating Expenses (OPEX) estimation related to energy costs for the European telcos’ network infrastructures in the ”Business-As-Usual” (BAU) and in the Eco-sustainable (ECO) scenarios, and cumulative savings between the two scenarios [17]

1.2 An Overview of Energy-Efficient Approaches

In this section, first, the most significant sources of energy consumption in network devices are characterized, together with the existing research. Secondly, a taxonomy of the energy-efficient approaches that are currently undertaken is presented.

1.2.1 Energy consumption characteristics

Table 1.2: Estimated power consumption sources in a generic platform of IP router

Efficient energy use, sometimes simply called energy efficiency, is far from a new concept in computing systems. To the best of our knowledge, the first support for power management was published in 1999, namely the “Advanced Configuration & Power Interface” (ACPI) standard [18]. Thenceforth, more energy-saving mechanisms were developed and introduced, especially hardware enhancements with new CPUs that are more efficient and consume less energy. Tucker [19] and Neilson [20] estimated that, in IP routers, the control plane accounts for 11% of the power consumption, the data plane for 54%, and power and heat management for 35%. Tucker and Neilson also broke down the energy consumption of the data plane in more detail, as described in Table 1.2: of the 54% consumed by the data plane, buffer management accounts for 5%, packet processing for about 32%, the network interfaces for about 7%, and the switching fabric for about 10%. This estimation provides a clear indication for developers on how to increase the energy-saving level of networks in further research.


1.2.2 Energy-Efficient Approaches' Classification

From a general point of view, existing approaches are founded on a few basic concepts. As shown in the surveys of Raffaele Bolla et al. [4] and Aruna Bianzino et al. [21], the largest part of the undertaken energy-efficient concepts is founded on a few energy-saving mechanisms and power management criteria that are already partially available in computing systems. These approaches, which are depicted in Table 1.3, are classified as (1) re-engineering; (2) dynamic adaptation; and (3) smart sleeping [4].

Table 1.3: Classification of energy-efficient approaches of the future Internet [4]

1.2.2.1 Re-Engineering

The re-engineering approaches focus on introducing and designing more energy-efficient elements inside network equipment architectures. Novel technologies mainly consist of new silicon (e.g., for Application Specific Integrated Circuits (ASICs) [22], Field Programmable Gate Arrays (FPGAs) [23], etc.) and memory technologies (e.g., Ternary Content-Addressable Memory (TCAM), etc.) for packet processing engines, as well as novel network media technologies (energy-efficient lasers for fiber channels, etc.). The approaches can be divided into two sub-approaches: (1) energy-efficient silicon, which focuses on developing new silicon technologies [24]; and (2) complexity reduction, which focuses on reducing equipment complexity in terms of header processing, buffer size, switching fabric speedup and memory access bandwidth speedup [25] [26].

1.2.2.2 Dynamic Adaptation

The dynamic adaptation approaches aim at modulating the capacities of network resources and devices (working speeds, computational capabilities of packet processing, etc.) according to the current traffic demand [4]. These approaches are founded on two main kinds of power management capabilities provided at the hardware level, namely power scaling and idle logic.

Power scaling capabilities allow dynamically reducing the working rate of processing engines or of link interfaces [27] [28]. This is usually accomplished by tuning the clock frequency and/or the voltage of processors, or by throttling the CPU clock (i.e., the clock signal is gated or disabled for some number of cycles at regular intervals). On the other hand, idle logic allows reducing power consumption by rapidly turning off sub-components when no activities are performed, and by re-waking them up when the system receives new activities. In detail, wake-up instants may be triggered by external events in a pre-emptive


mode (e.g., “wake-on-packet”), and/or by an internal scheduling process (e.g., the system wakes itself up periodically and checks whether there are new activities to process).
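To make the two primitives concrete, the sketch below models a link interface that either scales its working rate to the offered load or goes to sleep after an idle timeout. The rate set, power figures and the 20% headroom threshold are illustrative assumptions, not values from the cited works.

```python
# Minimal sketch of the two dynamic-adaptation primitives described above.
LINK_RATES_MBPS = [100, 1000, 10000]                  # supported working rates
RATE_POWER_W = {100: 0.5, 1000: 1.5, 10000: 5.0}      # assumed per-rate power
IDLE_POWER_W = 0.1                                    # deep idle / sleeping interface

def power_scaling(offered_load_mbps):
    """Pick the lowest working rate that still carries the offered load."""
    for rate in LINK_RATES_MBPS:
        if offered_load_mbps <= 0.8 * rate:           # keep 20% headroom
            return rate, RATE_POWER_W[rate]
    return LINK_RATES_MBPS[-1], RATE_POWER_W[LINK_RATES_MBPS[-1]]

def idle_logic(offered_load_mbps, idle_timeout_s, silent_time_s):
    """Put the interface to sleep once it has been silent long enough."""
    if offered_load_mbps == 0 and silent_time_s >= idle_timeout_s:
        return "sleep", IDLE_POWER_W
    rate, watts = power_scaling(offered_load_mbps)
    return f"active@{rate}Mbps", watts

print(power_scaling(300))                                 # -> (1000, 1.5)
print(idle_logic(0, idle_timeout_s=2, silent_time_s=5))   # -> ('sleep', 0.1)
```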

1.2.2.3 Sleeping/Standby

Sleeping and standby approaches are founded on power management primitives that allow devices, or parts of them, to turn themselves almost completely off and enter very low energy states while all their functionalities are frozen [4]. Thus, sleeping/standby states can be thought of as deeper idle states, characterized by higher energy savings and much larger wake-up times. In more detail, the applications and services of a device (or of a part of it) stop working and lose their network connectivity [29] [30] when it goes to sleep. As a result, the sleeping device loses its network “presence”, since it cannot maintain network connectivity or answer application/service-specific messages. Moreover, when the device wakes up, it has to re-initialize its applications and services by sending a non-negligible amount of signaling traffic.

1.3 Software-defined Networking (SDN) technology

Recently, the future Internet technologies for cloud computing environments, such as Software-defined Networking [11], Network Virtualization (NV) [6] [7], Network Function Virtualization (NFV) [31] and Virtual Data Centers (VDC) [32], have been booming and are strongly implemented in cloud environments [8] [9] [10]. To realize these technologies and transfer them to the industrial market, a flexible network is mandatory. SDN technology, with its characteristics of programmability and centralized management, plays a very important role in the innovation of all the other techniques. In this section, an overview of the SDN technology is given.

1.3.1 SDN Architecture

Software-defined Networking (SDN) [11] is an emerging networking paradigm that gives hope to overcome the limitations of current network infrastructures. First, it breaks the vertical integration by separating the network’s control logic (the control plane) from the underlying routers and switches that forward the traffic (the data plane) [33]. Second, with the separation of the control and data planes, network switches become simple forwarding devices and the control logic is implemented in a logically centralized controller (or network operating system), simplifying policy enforcement and network re-configuration and evolution.

A simplified view of this architecture is shown in Figure 1.4. It is important to emphasize that a logically centralized programmatic model does not postulate a physically centralized system. In fact, the need to guarantee adequate levels of performance, scalability, and reliability would preclude such a solution. Instead, production-level SDN network designs resort to physically distributed control planes. The separation of the control plane and the data plane is realized by a well-defined programming interface between the switches and


the SDN controller. The controller exercises direct control over the state in the data plane elements via this well-defined application programming interface (API), as depicted in Figure 1.4. The most notable example of such an API is OpenFlow [34], [35].

[Figure 1.4 labels: network applications (routing, centralized management, monitoring) run on top of a Network OS / SDN controller (e.g. POX, Floodlight, ODL); the controller is reached via a northbound API (e.g. REST API) and programs SDN data forwarding elements (e.g. OF switch, OVS) via a southbound API (e.g. OpenFlow).]

Figure 1.4: SDN Architecture

1.3.2 SDN Southbound API - OpenFlow Protocol

OpenFlow [34] [35] is the first and also the most widely known SDN southbound API; it provides the communication protocol between the control plane on the SDN controller and the forwarding planes on the OpenFlow switches. OpenFlow specifies how these planes communicate and interact with each other, from the moment the connection is set up until it ends. The OpenFlow protocol is layered above the Transmission Control Protocol and leverages Transport Layer Security (TLS). The default port on which controllers listen for switches that want to connect is 6653.

An OpenFlow switch has one or more tables of packet-handling rules (flow tables) (Figure 1.5). Each rule matches a subset of the traffic and performs certain actions (dropping, forwarding, modifying, etc.) on it. Depending on the rules installed by a controller application, an OpenFlow switch can be instructed by the controller to behave like a router, switch or firewall, or to perform other roles (e.g., load balancer, traffic shaper, and in general those of a middlebox). A flow table contains several flow entries, and each flow entry consists of three main parts (a minimal data-structure sketch follows this list):

- Match rule: this includes various fields to match on a packet: IP source address, IP destination address, MAC source address, MAC destination address, TCP source port, etc. A field can be left empty, which means that any packet matches this field.


- Action: the action applied to the matched packet. Actions include forwarding the packet to another port, dropping the packet, etc.

- Stats: this part records the number of packets and bytes that have matched this flow entry. It also records the duration from the entry's creation until the current time. The stats component is usually used for monitoring and management functions.

[Figure 1.5 labels: an SDN controller with a global network view and centralized management connects over a secure channel (control path) to several OpenFlow switches, whose data path consists of flow tables and a meter table.]

Figure 1.5: OpenFlow controller and switches

When a packet arrives, it is matched against the first matching flow entry in the flow table. If the packet does not match any entry, the switch sends an OpenFlow PacketIn message to the controller, which takes appropriate actions afterwards. The controller then sends an OpenFlow FlowMod message back to the switch in order to create a new entry matching this packet, together with some actions. That way, if similar packets later arrive at the switch, the switch does not need to ask the controller for further action.

1.3.3 SDN Controllers

In Software-defined Networking, the SDN controller does exactly what its name suggests: it controls the network as its “brain”. It has a global view of the network, with all information about the network topology, the flow tables of the OpenFlow switches, etc. Using this information, the SDN controller manages the OpenFlow switches via southbound APIs (e.g. OpenFlow) and enables the deployment of applications and business logic “above” it via northbound APIs.

The first SDN controller to be developed was NOX, introduced by Natasha Gude et al. in [36]. Subsequently, other open source controllers were also developed, e.g. POX [37], Beacon [38], and Floodlight (forked from Beacon) [39]. Later, multiple vendors such as Cisco, IBM, HPE, VMware and Juniper joined the SDN controller market, each with their own products. Starting from Beacon, the HPE, Cisco, and IBM controllers have moved towards OpenDaylight (ODL) [40]. Despite being one of the early controllers and less popular than its counterparts, the POX controller, written in Python, is still fully functional and easy to grasp, install and configure, which makes it ideal for academic researchers in their experiments. That also explains why the POX controller is selected for the SDN architecture in this dissertation.

1.4 Difficulties on Network Energy Efficiency and Motivations

Although the concept of network energy efficiency is not new, there are still issues in the realization of energy-efficient networks due to the inflexibility of networks and the lack of energy-aware networks. These difficulties are depicted as follows:

- Inflexible network: First, cloud services are updated frequently and lead to changes in the network infrastructure. In contrast, an important characteristic of today's networks is their inflexibility. Administrators must plan and prepare carefully for any change in the network, which might require re-designing, re-configuring and migrating. In many cases, it is a big challenge for developers to apply new approaches and evaluate them. Consequently, network flexibility is vitally necessary. Secondly, there are difficulties in evaluating the energy-saving levels of new energy-efficient approaches in a network due to the lack of a power-control system of the network. Developers struggle when they propose and evaluate a new energy-saving approach.

- Cloud computing has been blooming in the last few years as a promising paradigm that facilitates new service models such as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Network-as-a-Service (NaaS). In cloud computing environments, virtualization techniques such as network virtualization [5] [6] [7] and data center virtualization [8] [9] [10] have been rapidly developed and have attracted much attention from industrial communities. Currently, virtualization works mainly focus on resource optimization and resource provisioning approaches [7] [41], while only a few works focus on energy efficiency. One of the main difficulties of network energy efficiency in virtualization technologies is the lack of an energy measurement method for the network infrastructure in cloud environments. Consequently, the implementation of energy-aware platforms that work well for network virtualization and data center virtualization is an important and promising approach in the network energy efficiency area.

The above difficulties, as well as the potential of SDN technology, are great motivations for the construction of SDN-based energy-efficient networking in cloud computing environments. In this dissertation, several energy-efficient networking approaches are proposed with specific algorithms and, equally important, experimental results. The detailed contributions are described in the next section.


1.5 Dissertation’s Contributions

The contributions of this dissertation are: (1) proposing an energy-aware and flexible data center network based on the SDN technology. A power-control system (PCS), which can be easily extended and adapted to several situations, is proposed and developed based on SDN technology. Two energy-efficient algorithms are also proposed on this PCS system and their performance is evaluated and compared with other algorithms; (2) proposing energy-efficient approaches in network virtualization for cloud environments; and (3) proposing an energy-aware data center virtualization for cloud environments. More detailed information on these contributions is given in the next sections.

1.5.1 Proposing an energy-aware and flexible data center network that is based on the SDN technology

In the ideal case for energy efficiency, devices should consume energy proportionally to their traffic demand (load). That is, energy consumption in a low-utilization scenario should be much lower than in a high-utilization scenario. The energy consumption of the whole network depends on the number of active network devices and their current working states. Consequently, understanding the power profiles of network devices is an important issue in order to contribute to the energy-efficient approach and to build a power-control system of a network. To achieve this target, the following works are carried out: (1) profiling the energy consumption of a single network device as well as of the whole network; (2) constructing a power-control system for a network that allows administrators to monitor and control the energy states of each network device as well as of the whole network; (3) proposing an energy-aware routing algorithm based on the power profiles of network devices; and (4) integrating the power-control system of the network devices with the power-control system of the physical machines, and then proposing a VM migration technique for the optimization of energy consumption. The detailed information on the above contributions is described in Chapter 2.

1.5.2 Proposing energy-efficient approaches in a network virtualization for cloud environments

Cloud computing has emerged in recent years as a promising paradigm that facilitates new service models such as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Network-as-a-Service (NaaS). For such kinds of cloud services, virtualization techniques including network virtualization [5] [6] [7] and data center virtualization [8] [9] [10] have been rapidly developed and have attracted much attention from the research communities as well as the industrial market. As for network virtualization, an important question is how to realize and evaluate the energy-saving level of network virtualization mechanisms in cloud environments. The current lack of an energy-aware network virtualization constitutes a significant difficulty in deploying and evaluating energy-efficient networks. With the above motivations, an energy-aware network


virtualization concept is proposed with power monitoring and control abilities. The detailed contributions are described as follows:

- Proposing an SDN-based Energy-Aware Network Virtualization (EA-NV) platform. Based on incoming virtual network requests (VNRs), the system performs separate Virtual Network Embedding (VNE) algorithms and evaluates their performance as well as their power-saving level.

- Proposing a novel heuristic energy-efficient (HEE) virtual network embedding algorithm and a reducing middle node energy efficiency (RMN-EE) virtual network embedding algorithm. The experimental results of these two VNE algorithms show that the energy-saving level of the system increases while the acceptance ratio of the system, understood as resource optimization, is maintained.

The detailed information on the above contributions is described in Chapter 3.

1.5.3 Proposing an energy-aware data center virtualization for cloud environments

Besides network virtualization, data center virtualization in cloud environments is a new trend of cloud services which aims to create several virtual data centers on top of a physical data center. A major challenge of network virtualization in data centers is the virtual data center embedding (VDCE) problem, as solving VDCE is NP-hard. For that reason, current research mostly follows heuristic and meta-heuristic approaches. In this research, energy-efficient data center virtualization is emphasized with the following contributions:

- Proposing an energy-aware data center virtualization platform and addressing challenges in providing energy-efficient VDCE. This platform works under the condition of dynamic VDC requests, in which virtual data center requests arrive at and leave the physical data center dynamically. The evaluation results show that the performance of conventional static VDCE algorithms is unstable and degraded under dynamic conditions.

- Proposing a novel VDC embedding algorithm, namely the HEA-E algorithm, with the following objectives: (1) resource efficiency, which deals with efficient mapping of virtual resources onto substrate resources in terms of CPU, memory and network bandwidth; and (2) energy efficiency, which deals with minimizing the energy consumption of the virtual data center while satisfying the mapping demands. The proposed VDC embedding algorithm is also integrated with new remapping and server consolidation strategies, which are developed to overcome the dynamic VDC mapping problem and to mitigate the complexity of the joint embedding-migration approach. Evaluation results show that our approach performs better than some existing ones in terms of acceptance ratio, resource utilization and energy consumption.

The detailed information on the above contributions is described in Chapter 4.


CHAPTER 2 SDN-BASED ENERGY-AWARE DATA CENTER NETWORK

For network energy efficiency, most efforts have focused on re-engineering approaches that apply to a single network device [24] [25] [26]. Although these approaches have achieved good power-saving results, they only address the energy of a single device. In fact, a cloud data center network (DCN) nowadays consists of thousands of devices and is designed with different topologies. The traffic demands of a DCN change continuously, minute by minute, and the DCN is typically provisioned for peak workload while running well below capacity most of the time [42]. Consequently, the performance of a DCN strongly depends on topology optimization and traffic routing. This property also helps improve energy efficiency in low-traffic scenarios by optimizing the DC network topology, turning on only the necessary part and re-routing the traffic onto it. The remaining network components are then put into sleeping mode in order to reduce power consumption.

From this point of view, a centralized power-control system with monitoring, topology optimization and traffic routing abilities is necessary for a DCN. Based on this system, several energy-efficient algorithms can be proposed and deployed with worthwhile power savings and optimal performance effects. Consequently, the power-control system (PCS) of a DCN is proposed with the following contributions:

- Propose a power-control system with the following capabilities: (1) monitor the energy consumption status as well as its efficiency; (2) control the working states of the devices according to the energy consumption of the system; and (3) implement several energy-efficient topology optimization and traffic routing approaches.

- Propose a novel energy-aware routing algorithm that works efficiently with different types of network devices in terms of power saving. The algorithm routes a traffic demand based on the power profile of a network device and also based on the

2.1 Background Technologies

2.1.1 DCN technique and architecture

A DCN in a data center creates the links among the elements inside this network and provides connectivity among them. The DCN architecture, which lays out the network components and installs networking techniques within a data center, is usually designed from two


technical points. First, selecting the networking techniques inside the DCN that satisfy the bandwidth demands and service requirements. Secondly, designing a network topology that satisfies these requirements and builds a cost-effective DCN to scale up the data center. Therefore, in this section, the existing DCN architecture models are described. The DCN networking technique and topology that satisfy the requirements of building an energy-aware network platform in this dissertation are also presented.

2.1.1.1 DCN Technique

To build a power-control system of a data center, the DCN should have the flexibility to manage and upgrade its resources. For example, the DCN should quickly determine the necessary topology that satisfies the traffic requirements and re-route the traffic onto this topology, or the DCN could quickly detect starved VMs and schedule residual resources, e.g., migrating these VMs to an idle server with low overhead. Both examples require a centralized and flexible control plane to coordinate the DCN devices.

The traditional networking model, despite being effective to a certain extent even though it has to use antiquated methods of passing data, cannot meet the flexibility level required to deliver today's massive amounts of data. Moreover, when the hardware and software are coupled, the network becomes expensive to maintain and scale, and harder for users to innovate and for administrators to tune applications.

To address these issues, we turn to Software-defined Networking (SDN) technology. SDN services, typically controlled and monitored from centrally located sources, have a global view of the entire network. With SDN, traffic flow is managed with software applications, which are significantly more dynamic, providing optimization and tuning options that are not available in the local management of switches and routers. On the other hand, scalability is easier to achieve in SDN, since the software scales to as many switches or routers as there are in the network. Adding hardware simply creates new pathways for the software to manage, monitor, and use to create the most efficient traffic flow. With a central SDN solution, the network routing can also be customized more easily, shaping it to the specific interests and needs of that data center. By using algorithms to create a solution, SDN relies on OpenFlow, Puppet, and other protocols to remain agile, flexible, and cost-efficient.

2.1.1.2 DCN Architecture

Recent research works have shown that DCN topologies can be categorized into hierarchical models, recursive models, and rack-to-rack models (Figure 2.1) [43] [44].


[Figure 2.1 groups DCN architectures by topology design: the hierarchical model (Fat-tree, VL2), the recursive model (DCell, BCube), and the rack-to-rack model (Scafida, Jellyfish).]

Figure 2.1: DCN Architecture [43]

2.1.1.2.1 Hierarchical Model

In hierarchical model networks, the elements (devices) are arranged in multiple layers that characterize network traffic differently. One of the main advantages of this model is reduced congestion within the network, because an upper-layer switch prevents an overload of traffic that would otherwise all go through the same switch in a lower layer. The three-tier architecture [45] [46] is the most widely deployed DCN architecture; it follows a layered approach to arrange the network switches in three layers. The network elements (switches and routers) are arranged in three layers, namely: (a) the access layer, (b) the aggregation layer, and (c) the core layer (Figure 2.2).

Figure 2.2: Three-tier DCN Architecture [45]

The legacy three-tier DCN architecture does not have the capability to meet the current data center bandwidth and growth trends. The major shortcomings of the legacy DCN architecture can be expressed in terms of scalability, cost, energy consumption, cross-section bandwidth, and agility [47]. To overcome the shortcomings of the legacy DCN architecture, new architectures have been proposed by the research community.

Fat-tree [48] [49], one of the new architectures, was proposed by Al-Fares et al. and consists of core, aggregation, and edge layers (Figure 2.3). This architecture aims to maximize the end-to-end bisection bandwidth. In a DCN with a Fat-tree topology, switches at the aggregation and edge layers are arranged in blocks, namely Performance Optimized Data Centers (PODs), which are responsible for routing end-to-end communications. Core switches in the Fat-tree topology simply maintain the connectivity among these PODs, so the Fat-tree topology reduces the traffic load on the core layer.
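For intuition about the scale of this topology, the sketch below computes the element counts of a k-ary Fat-tree following the standard Al-Fares construction; it is a sketch for orientation, not the dissertation's own notation.

```python
# Element counts of a k-ary Fat-tree (k even), standard Al-Fares construction.
def fat_tree_size(k: int) -> dict:
    assert k % 2 == 0, "k must be even"
    return {
        "pods": k,
        "core switches": (k // 2) ** 2,
        "aggregation switches": k * (k // 2),
        "edge switches": k * (k // 2),
        "hosts": (k ** 3) // 4,
    }

print(fat_tree_size(4))   # 4 pods, 4 core, 8 aggregation, 8 edge, 16 hosts (Figure 2.6)
print(fat_tree_size(6))   # the k = 6 size used in the Chapter 2 evaluation
```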


Figure 2.3: Fat-tree DCN Topology

Another example of the hierarchical model is the VL2 DCN architecture [50]. The architecture uses a flat automated addressing scheme that facilitates the placement of servers anywhere in the network without manual address configuration. VL2 also uses commodity network switches for cost reduction and energy efficiency. It mainly focuses on: (a) automated addressing, (b) the potential for transparent service migration, (c) load-balancing traffic flows for high cross-section bandwidth, and (d) end-device-based address resolution.

The hierarchical model is the most widely used in DCN networks and shows many realistic evaluation results. Nowadays, most commodity network devices support these architectures, which are used to connect a massive number of servers to each other. Facebook [51] and Google [52] are typical examples of using the hierarchical model in their DCNs, where both of these technology groups use the Fat-tree topology.

2.1.1.2.2 Recursive Model

A recursive DCN consists of individual cells, each of which contains a single switch and a number of servers, with each server bridging different cells. DCell [53] and BCube [54] are typical examples of the recursive model and have been implemented in many DCNs.

Figure 2.4: Dcell DCN Architecture [53]


DCell [53] is a server-centric hybrid DCN architecture where one server is directly connected to other servers (Figure 2.4). DCell follows a recursively built hierarchy of cells, and each server in this network has multiple network interface cards (NICs). In DCell, a server is connected to a number of other servers and switches via communication links, which are assumed to be bidirectional.

The BCube network architecture [54] is a server-centric approach and contains two types of devices: (1) servers with multiple ports; and (2) switches that connect a constant number of servers. BCube is a recursively defined structure. A BCube_0 is simply n servers connecting to an n-port switch. A BCube_1 is constructed from n BCube_0s and n n-port switches. More generically, a BCube_k (k ≥ 1) is constructed from n BCube_{k−1}s and n^k n-port switches. Each server in a BCube_k has k + 1 ports, which are numbered from level-0 to level-k. It is easy to see that a BCube_k has N = n^{k+1} servers and k + 1 levels of switches, with each level having n^k n-port switches.

Figure 2.5: BCube DCN Architecture [54]

Figure 2.5 shows a BCube_1 with n = 4 and two levels. Source-based routing is performed using intermediate nodes as packet forwarders, ensuring that the Hamming distance from each consecutive intermediate host to the destination decreases. Periodic searching for the optimal path is performed in order to cope with any failures in the network. One-to-all, all-to-one and all-to-all traffic can also be routed by using the redundant (k + 1) ports on each host.
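To make the counting rule above concrete, the following short Python sketch (an illustrative helper, not part of the cited BCube work) computes the number of servers and switches of a BCube_k:

def bcube_counts(n: int, k: int):
    """Return (#servers, #switches) of a BCube_k built from n-port switches.

    A BCube_k has N = n^(k+1) servers and (k+1) levels of switches,
    each level containing n^k n-port switches.
    """
    servers = n ** (k + 1)
    switches = (k + 1) * n ** k
    return servers, switches

# Example matching Figure 2.5: BCube_1 with n = 4
print(bcube_counts(4, 1))  # (16, 8): 16 servers, 2 levels x 4 switches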

Although most recursive DCN architectures are highly scalable, which allows DC expansion, and are also cost-effective because they use cheap switches, they are not commonly seen in a DC. In particular, most of the recursive DCNs employ a computational server as a network device and lack adequate field testing of their designs.

2.1.1.2.3 Rack-to-rack Model

Due to the bottlenecks that occur in the backbone links of the two previous models, recent research on DCN architectures focuses on making direct connections between different racks instead of building trunk layers. In [55], a DCN topology, namely Jellyfish, uses a random graph to build end-to-end communication. Instead of a fixed topology, the Jellyfish network simply guarantees that each switch has a number of ports connecting to other switches, while the remaining ports are used to connect to servers. In [56], L. Gyarmati and T. A. Trinh proposed a rack-to-rack architecture, namely Scafida, which addresses large node degrees within a DCN topology. The Scafida architecture is a scale-free topology in which the longest path has a fixed upper bound. Scafida provides methodologies to construct such a topology for data centers while making reasonable modifications to the original scale-free network paradigm. Scafida consists of a heterogeneous set of switches and hosts in terms of the number of ports/links/interfaces. The topology is built incrementally by adding a node and then randomly connecting all of its available ports to existing empty ports. The number of connections is limited by the available ports on a node, unlike in original scale-free networks. Such a network provides high fault tolerance.

2.1.1.2.4 Using Fat-tree topology as reference architecture

Figure 2.6: Fat-tree architecture with k = 4

Among the above DCN architecture models, the Fat-tree [49] topology is the most promising topology for deploying Cloud data centers. Because it provides parallel paths that support all-to-all, one-to-all, or one-to-many communications, many large data centers such as Facebook [51] and Google [52] use the Fat-tree topology for their DCN. Moreover, the Fat-tree topology offers a good oversubscription ratio (1:1), which is usually provided to ensure a size-independent quality of service.

For the Fat-tree topology, four traffic scenarios are defined as follows: the near traffic scenario, the middle traffic scenario, the far traffic scenario, and the mixed traffic scenario.

- Near traffic scenario: every flow has a source and a destination in the same POD and attached to the same edge switch, so in this scenario the exchanged traffic traverses only edge switches.


- Middle traffic scenario: the source and destination of every flow reside in the same POD but under different edge switches, so in this scenario all flows traverse edge and aggregation switches.

- Far traffic scenario: the source and destination of every flow reside in different PODs, so in this situation all flows traverse edge, aggregation and core switches.

- Mixed traffic scenario: a mix of the three scenarios above.

These four traffic scenarios are used throughout this chapter for evaluating the performance as well as the energy-saving level (a simple classification sketch is given below).
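The scenario of a single flow can be derived from the positions of its endpoints in the Fat-tree. The sketch below is a hypothetical helper written for illustration; the pod and edge-switch indices it takes as input are assumptions, not identifiers defined in this dissertation:

def classify_flow(src_pod, src_edge, dst_pod, dst_edge):
    """Classify a flow as 'near', 'middle' or 'far' in a Fat-tree DCN.

    src_pod/dst_pod  : POD index of the source/destination server
    src_edge/dst_edge: index of the edge switch the server attaches to
    """
    if src_pod == dst_pod and src_edge == dst_edge:
        return "near"    # traverses only the shared edge switch
    if src_pod == dst_pod:
        return "middle"  # traverses edge and aggregation switches
    return "far"         # traverses edge, aggregation and core switches

print(classify_flow(0, 0, 0, 0))  # near
print(classify_flow(0, 0, 0, 1))  # middle
print(classify_flow(0, 0, 2, 1))  # far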

2.1.2 Existing system

In [57], Heller et al. proposed the ElasticTree system for dynamically adapting the energy consumption of a data center network. ElasticTree consists of three logical modules: optimizer, routing, and power control, as shown in Figure 2.7. The role of the optimizer module is to find the minimum-power network subset that satisfies the current traffic conditions. Its inputs are the topology, the traffic matrix, a power model for each switch, and the desired fault-tolerance properties (spare switches and spare capacity). The optimizer outputs a set of active components to both the power control and routing modules. Power control toggles the power states of ports, linecards, and entire switches, while the routing module chooses paths for all flows and then pushes the routes into the network.

Figure 2.7: Diagram of the ElasticTree system [57]

Although the ElasticTree system works well for dynamic adaptation, its design still lacks a monitoring module that would provide the system with monitoring as well as visualization capabilities. Consequently, in this dissertation, the ElasticTree system is extended by adding a monitoring module and by optimizing the Optimizer module. The next section describes this extended system in more detail.

2.2 Power-Control System of a DC Network

In this section, a power-control system (PCS) is proposed, which extends the ElasticTree system [57] by adding a new module and a new function. This system allows an administrator to monitor and control the working state as well as the energy consumption of a network. In the next sections, the energy modeling of the whole DC network is presented first, and then the detailed diagram and components of this system are described.


2.2.1 Energy modeling of a network

2.2.1.1 Energy modeling and profiling of a single network device

In the ideal case of energy efficiency [58], devices should consume energy proportionally to their utilization (Figure 2.8). That means energy consumption in a low-utilization scenario should be much lower than in the case of high traffic utilization. In Figure 2.8, U(%) and P(%) are the utilization of a device and its power consumption, both in percentage, respectively. U = 100% means that a device is working at full resource capacity, and P = 100% means that a device is consuming its maximum energy.

Figure 2.8: Energy – Utilization relation of a network [58]

Recently, research communities [58] have focused on answering an important question: how to make the energy consumed by a device proportional to its actual load. The energy consumption of the whole network depends on the number of active network devices and their current working states. Consequently, understanding the power profile of a switch is an important research point, which leads to energy-efficient approaches and to the establishment of a power-management system for a network. As an initial step towards understanding the energy consumption patterns of a variety of networking devices, a detailed power instrumentation study is conducted.

In [59], Priya et al. developed a power model to estimate the power consumed by any switch. The linear power model of a switch, P_sw, is defined as:

P_{sw} = P_{chassis} + num_{linecard} \times P_{linecard} + \sum_{i=0}^{configs} num_{ports}(config_i) \times P_{config_i} \qquad (2.1)

where P_{chassis} is the power consumed by the switch's chassis; P_{linecard} is the power consumed by a line card with no active ports; and num_{linecard} is the actual number of line cards plugged into the switch. The variable config in the summation represents the possible configurations of port working speeds; num_{ports}(config_i) and P_{config_i} are the number of ports running at working speed i and the power consumed by one such port, respectively.
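As an illustration of equation (2.1), the following Python sketch evaluates the linear switch power model; the numeric values in the example are placeholders rather than measured figures from [59]:

def linear_switch_power(p_chassis, p_linecard, num_linecards, port_configs):
    """Linear switch power model of equation (2.1).

    port_configs: list of (num_ports, p_port) pairs, one per port-speed
    configuration, where p_port is the power drawn by one port at that speed.
    """
    return (p_chassis
            + num_linecards * p_linecard
            + sum(n * p for n, p in port_configs))

# Placeholder example: 60 W chassis, one 35 W linecard,
# 24 ports at 1 Gbps (0.9 W each) and 24 idle ports (0.1 W each).
print(linear_switch_power(60.0, 35.0, 1, [(24, 0.9), (24, 0.1)]))  # 119.0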


In another reference [58], Pham et al. proposed an energy model for a NetFPGA-based OpenFlow switch [60]. This model supports a power scaling method on the NetFPGA card, which makes this switch energy-aware. The NetFPGA is a low-cost reconfigurable hardware platform optimized for high-speed networking. The NetFPGA includes all of the logic resources, memory, and Gigabit Ethernet interfaces required to build a complete programmable switch, router, and/or security device. Because the entire data path is implemented in hardware, the system can support back-to-back packets at full Gigabit line rates and has a processing latency of only a few clock cycles. The components of this linear model include:

- P_FPGA-Core: the power consumption of the FPGA core.

In other work, reference [61] provided an energy model that also divides the power consumption of a port into two parts, a static consumed power and a dynamic consumed power, denoted as P_{port}^{s} and P_{port}^{d}, respectively. This energy model is defined as follows:

P_{sw} = P_{chassis} + P_{cards} + \sum_{j} \left( P_{port}^{s} + P_{port}^{f} \times \frac{d_{j}^{in} + d_{j}^{out}}{2C} \right) \qquad (2.4)

where:

- P_{sw}: the power of one switch, including the power consumed by the chassis, the line cards and the ports;
- P_{chassis}: the power consumed by a single switch chassis;
- P_{cards}: the power consumed by all line cards on a switch;
- P_{port}: the power consumed by each port, which includes both the static power P_{port}^{s} and the dynamic power P_{port}^{d};
- P_{port}^{f}: the dynamic power of a port at full link capacity;
- d_{j}^{in}: the data rate of the incoming flow at port j;
- d_{j}^{out}: the data rate of the outgoing flow at port j;
- C: the maximum link capacity.


As we can see, there are a few methods for modeling the energy consumption of networking devices. These methods have many similarities in their components; therefore, in this dissertation the above methods are summarized and a general energy model is proposed for a network switch, which is used in the complete analytical model. The general model is described as follows:

P_{sw} = P_{st} + \sum_{p \in \dot{P}} n_{p} \times P_{p} + P_{ext} \qquad (2.5)

where P_{st} denotes the static power of the switch (e.g., the chassis and line cards); n_{p} is the number of ports working at state p, while P_{p} is the power consumption of a port working at state p; \dot{P} denotes the set of working speeds of a port. Currently, a switch port can run at several working speeds such as 40Gbps, 10Gbps, 1Gbps, 100Mbps, and idle. P_{ext} denotes an extension consumed power; for example, P_{ext} is P_FPGA-Core in the case of the Gigabit NetFPGA-based switch.

2.2.1.2 Energy modeling of a DC Network

Currently, there are many switches whose ports work at several forwarding speeds: 40Gbps, 10Gbps, 1Gbps, 100Mbps, and idle. The energy consumed at each port state is different. Consequently, the energy consumption of a network is calculated as the total consumed energy of all switches with their states, including the ports' forwarding speeds. From the general model of one switch in equation (2.5), the energy model of the whole network is described by the equations below:

P_{NW} = \sum_{i=0}^{k} P_{sw_{i}} \qquad (2.6)

P_{NW} = \sum_{i=0}^{k} P_{st_{i}} + \sum_{i=0}^{k} \sum_{p \in \dot{P}} n_{p} \times P_{p} + \sum_{i=0}^{k} s \times P_{ext} \qquad (2.7)
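The sketch below evaluates the general models (2.5) and (2.7) for a set of switches. The data layout (a per-switch dictionary of port counts per working speed) and all numeric values are assumptions made for illustration only:

def general_switch_power(p_static, ports_per_state, port_power, p_ext=0.0):
    """General switch model of equation (2.5).

    ports_per_state: {state: number of ports at that state}, e.g. {"1G": 2, "idle": 46}
    port_power     : {state: power of one port at that state}
    """
    return (p_static
            + sum(n * port_power[state] for state, n in ports_per_state.items())
            + p_ext)

def network_power(switches):
    """Network model of equation (2.7): the sum of the per-switch powers."""
    return sum(general_switch_power(**sw) for sw in switches)

# Illustrative values only (watts).
port_power = {"1G": 0.9, "100M": 0.26, "idle": 0.06}
switches = [
    {"p_static": 60.0, "ports_per_state": {"1G": 2, "idle": 46}, "port_power": port_power},
    {"p_static": 60.0, "ports_per_state": {"100M": 4, "idle": 44}, "port_power": port_power},
]
print(round(network_power(switches), 2))  # 128.24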

2.2.2 The Diagram of the Power-Control System

In this dissertation, the power-control system (PCS) is proposed, which extends the ElasticTree system [57]. In ElasticTree, Heller et al. proposed to use the Simple Network Management Protocol (SNMP) [62] as the exchange protocol for switch controlling and monitoring. Although SNMP is a widely used protocol for network monitoring, it is still inflexible and has many limitations for controlling a network in real time. To deal with these limitations, the SDN-based PCS is proposed with several extensions over the ElasticTree system: (1) the power control module is extended to support the OpenFlow protocol, the core protocol of SDN technology; implementing the OpenFlow protocol on both the controller and the switches creates a seamless protocol for controlling and monitoring the network; (2) a monitoring module is added for real-time monitoring of the network state and traffic by using the OpenFlow protocol.

The diagram of the PCS architecture is depicted in Figure 2.9. The data center network consists of SDN switches with their connections, the network topology and the power profiles of the switches. The SDN controller of this PCS consists of four main modules, namely: monitoring, optimizer, routing and power control. The monitoring module collects information from the DCN and visualizes its traffic state, topology and current energy consumption status. The optimizer module finds the most energy-efficient subnet that satisfies the currently offered traffic, based on the traffic flows, the topology and the energy profiles of the switches of the DCN. After this calculation, the optimizer module outputs the active topology, which contains the active devices and the connections among them, to the routing module and the power control module. Afterwards, the power control module changes the power states of switches, line cards, and interfaces, whereas the routing module chooses the paths for all flows.


Figure 2.9: Power-control System of a Network

The SDN controller, which contains the optimizer, routing, monitoring and power control modules, communicates with the DCN via a secure channel using the OpenFlow protocol. The detailed descriptions of the DCN and the SDN controller are given in the next sections.

2.2.2.1 Data Center Network Components

As mentioned above, the Fat-tree network topology has many advantages in a DC network. Accordingly, in this dissertation, the DCN topology of the PCS is the Fat-tree topology. A k-ary Fat-tree is a DC network architecture with three layers, namely edge, aggregation and core, built from k-port switches. There are k PODs, and each POD contains k/2 edge switches and k/2 aggregation switches. Each k-port switch in the edge layer uses k/2 ports to connect directly to k/2 servers, while the remaining k/2 ports are connected to the upper aggregation layer of the hierarchy. There are (k/2)^2 k-port core switches, and each core switch has one port connected to each of the k PODs. The i-th port of any core switch is connected to POD i, so that consecutive ports in the aggregation layer of each POD switch are connected to core switches in strides of (k/2).

The number of core switches: n_sw^core = (k/2)^2.

The number of aggregation switches: n_sw^agg = k (PODs) x k/2 = k^2/2.

The number of edge switches: n_sw^edge = k (PODs) x k/2 = k^2/2.

Each edge switch connects to k/2 servers, so the number of servers that a k-ary Fat-tree topology supports is n_server = (k^2/2) x (k/2) = k^3/4.
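These counts can be computed directly from k; the helper below is illustrative only:

def fat_tree_counts(k: int):
    """Element counts of a k-ary Fat-tree (k even)."""
    core = (k // 2) ** 2
    aggregation = k * (k // 2)   # k PODs x k/2 aggregation switches
    edge = k * (k // 2)          # k PODs x k/2 edge switches
    servers = edge * (k // 2)    # each edge switch serves k/2 servers
    return {"core": core, "aggregation": aggregation, "edge": edge, "servers": servers}

print(fat_tree_counts(4))  # {'core': 4, 'aggregation': 8, 'edge': 8, 'servers': 16}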

2.2.2.2 Minimum Spanning Tree - MST

In order to minimize the energy consumption of a network, a minimum spanning tree (MST) topology is used, which puts a part of the data center network into sleep mode in under-utilized load situations. In the case of no traffic demand, the DC network maintains an MST for minimum connectivity between servers. As depicted in Figure 2.10, at the initial phase, when there is no traffic demand among servers:

- all servers are turned off;

- only the leftmost core switch Sw_core and the leftmost aggregation switch Sw_agg of each POD are turned on, with their network interfaces running at the lowest operating speed;

- all access switches Sw_acc are turned on and run at the lowest operating speed.

Figure 2.10: Fat-tree topology with Minimum Spanning Tree

In a k-ary Fat-tree topology, the switches remaining in the MST working topology are: one core switch; k aggregation switches (one for each POD); and k^2/2 edge switches (all edge switches).

2.2.2.3 Software-Defined Networking Controller

In the diagram of the power-control system, the SDN controller is built as the central component that monitors and controls all DCN devices. The SDN controller performs different functionalities such as: collecting DCN information; network monitoring; routing and defining flow tables for the OpenFlow switches; and executing the optimization algorithm for energy efficiency. As mentioned above, the SDN controller is extended from POX and supports energy-aware functionalities. Figure 2.9 illustrates the SDN controller with its components, Optimizer, Monitoring, Power Control, and Routing, whose roles are listed below; a control-loop sketch combining them follows the list.

- Monitoring module: collects network information from the DCN switches and monitors all mandatory states of these switches and of the links among them, including traffic utilization, working speeds, power states and the DCN topology. This information is exchanged between the OpenFlow switches and the SDN controller by using OpenFlow messages.

- Optimizer module: after receiving information from the monitoring module, this module is in charge of optimizing the routes in the data center based on the current topology derived from the Monitoring module. Besides some routing mechanisms supported by current SDN controllers, such as Dynamic All-Pairs Shortest Path, Spanning Tree, and a hierarchical load-balancing routing algorithm, researchers can also develop further algorithms focusing on energy efficiency. The outputs of this module are: (1) the topology optimization result, which is used to command the power control module to change the topology status, including turning on or off the necessary devices; and (2) the energy-aware routing strategy, which sends the routing decision to the routing module.

- Power Control module: toggles the power states of ports, line cards, and entire switches through OpenFlow messages and the APIs of the OpenFlow switches to "command" them "ON or OFF" or to change to an appropriate power-saving mode (e.g., changing clock frequencies or working speeds).

- Routing module: based on the optimization results from the optimizer module as well as the traffic demand of the DCN, the routing module routes the traffic demand on the active sub-network, which contains the turned-on switches.
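To make the interaction of these four modules concrete, the following minimal control-loop sketch is given; the class and method names are hypothetical and do not correspond to the actual POX-based implementation:

class PowerControlSystem:
    """Minimal sketch of the PCS control loop (hypothetical names)."""

    def __init__(self, monitoring, optimizer, power_control, routing):
        self.monitoring = monitoring        # collects traffic, topology, power states
        self.optimizer = optimizer          # computes the energy-efficient subset and routes
        self.power_control = power_control  # toggles switches/ports, changes link speeds
        self.routing = routing              # installs flow rules on active switches

    def control_step(self):
        # 1. Monitoring: current topology, per-link utilization, power states.
        state = self.monitoring.collect()
        # 2. Optimizer: active topology and energy-aware routing decisions.
        active_topology, routes = self.optimizer.optimize(state)
        # 3. Power control: apply ON/OFF and link-speed changes via OpenFlow.
        self.power_control.apply(active_topology)
        # 4. Routing: push the flow entries onto the active sub-network.
        self.routing.install(routes)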

2.3 Energy-Aware Routing based on Power Profile of Devices in Data Center Networks using SDN

The consumed power of a DCN is calculated as the total power consumption of its devices. The power consumed by each device depends on the following factors: the number of active ports, and the capacity rate at which each port operates (normally, a maximum-speed port consumes the largest amount of power, while an idle port consumes the least [58]). This consumed power also depends on the specific device, since devices differ individually according to their power profiles. In contrast, the amount of traffic that goes through a port does not have any significant effect on its power consumption.


In consequence, the power consumption of a DCN depends on the number of active links and switches as well as on the routing algorithm applied. For instance, [61] shows that a switch port of a commercial OpenFlow-enabled Pronto switch consumes 63 mW, 260 mW and 913 mW at the working rates of 10 Mbps, 100 Mbps and 1 Gbps, respectively (Table 2.1). As we can see, the ratio of the energy consumption of a 1 Gbps port to that of a 100 Mbps port is approximately 3.5, which means that three 100 Mbps ports consume less than one 1 Gbps port. In contradiction to the common consensus that the energy consumption of a network can only be saved by turning off as many links and switches as possible, this argument is not always true. From the above example, in order to increase the energy-saving level, in some cases several low-speed ports can be used instead of one high-speed port. In other words, instead of accumulating the traffic onto one high-speed link, a routing algorithm can distribute the traffic over several low-speed ports, so that more energy can be saved. For the NetFPGA-based OpenFlow switch, the corresponding ratio is approximately 9.6, which is much greater than the ratio of 3.5 of the Pronto switch. This difference in consumed energy has an impact on the routing and topology optimization processes, such as accumulating and distributing traffic. From this point of view, the routing and topology optimization process must be based on the energy profile of the devices.
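As a numeric illustration using the Pronto figures quoted above (the absolute port powers of the NetFPGA-based switch are not reproduced here; only its ratio of 9.6 reported in the text is used):

# Port power of the Pronto switch at different rates, in mW (Table 2.1 values quoted above).
pronto = {"10M": 63, "100M": 260, "1G": 913}

r_pce_pronto = pronto["1G"] / pronto["100M"]
print(round(r_pce_pronto, 1))  # ~3.5

# Distributing 900 Mbps over nine 100 Mbps ports vs. one 1 Gbps port:
print(9 * pronto["100M"], "mW vs", pronto["1G"], "mW")  # 2340 mW vs 913 mW
# For the Pronto (ratio 3.5 < 9), accumulation on the 1 Gbps port is cheaper;
# for the NetFPGA-based switch (ratio 9.6 > 9), distribution would be cheaper.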

On top of this power-control system, an energy-aware routing algorithm is proposed based on the power profiles of devices and a topology optimization approach. The routing algorithm can flexibly implement routing and topology optimization and can work effectively with different network devices. The energy-aware routing and optimization algorithm, which is presented in the next section, is embedded in the optimizer and routing modules.

2.3.1 Energy-Aware Routing and Topology Optimization Algorithm

In this section, the energy-aware routing and topology optimization algorithm is described, which aims to save the consumed energy of the whole network and is able to be aware of and adapt to different devices. Although there are some existing multipath routing strategies in SDN-based DCNs [63] [64], these strategies do not focus on network energy efficiency. In this dissertation, the proposed algorithm focuses on the energy efficiency of a network; other multipath routing problems are left for future work and are not analyzed here. The algorithm is extended from the power scaling strategy, a commonly used strategy [27] [28], and integrates well with the power-control system. In order to adapt to different network devices, this dissertation proposes the ratio of consumed energy between different working states of ports, R_PCE, which expresses the difference between the working rates of a port of each device. The algorithm and the R_PCE ratios are described in the next sections.

2.3.1.1 Port Consumed-Energy Ratio (RPCE)

As described above, the difference in the power consumption of a port at different link rates is remarkable, and it also affects the energy efficiency of the routing and topology optimization processes. Therefore, in this dissertation, the port consumed-energy ratio, R_PCE, between switch-port forwarding rates is proposed. This ratio can be defined between 1Gbps and 100Mbps, namely R_PCE^{1G:100M}; or between 10Gbps and 1Gbps, namely R_PCE^{10G:1G}; and it is also extendable to 40Gbps versus 10Gbps. The ratios are defined in the equations below:

R_{PCE}^{1G:100M} = \frac{P_{port}^{1G}}{P_{port}^{100M}}, \qquad R_{PCE}^{10G:1G} = \frac{P_{port}^{10G}}{P_{port}^{1G}}

For example, the ratio of the NetFPGA-based switch, R_PCE^{1G:100M}(FPGA) (Table 2.2), is 9.6, and the ratio of the commercial Pronto 3240 switch, R_PCE^{1G:100M}(Pron), is 3.5. This means that if the offered traffic is up to 900Mbps, distributing this traffic over nine lower-speed (100Mbps) ports is more energy-efficient than routing the traffic through only one 1Gbps port (in the case of the NetFPGA-based switch). The power consumption of this NetFPGA-based switch at the 1Gbps, 100Mbps, 10Mbps and idle states is described in the table below.

Table 2.2: Energy consumption of NetFPGA-Based OpenFlow Switch

2.3.1.2 Power Scaling Algorithm

The role of the optimizer module is to find a network subset that satisfies the current traffic demand. Its inputs include the topology, the network traffic utilization, the power profiles of the switches, and the desired fault-tolerance properties. In this dissertation, the power scaling approach is implemented, which supports reducing the energy consumption significantly by adaptively changing the working rate of processing engines or links, for example by reducing the operating clock of devices or decreasing the link rate of a switch port.

(Flowchart of the power scaling algorithm: starting from the measured traffic demand T_link on a link, T_link is compared against the link capacities LC_0, LC_10, LC_100, LC_1G, and the link speed is changed to the lowest rate whose capacity covers T_link.)

Figure 2.11: Power Scaling Algorithm

The diagram in Figure 2.11 shows the operation of the power scaling algorithm. Based on the traffic state measured by the monitoring module, the optimizer determines which state must be used on each link and then sends the result to the power control and routing modules. The link speed can be changed to 10Mbps, 100Mbps, 1Gbps or 10Gbps adaptively to the traffic demand. In this flowchart:

- T_Link: the traffic demand on a link;

- LC_0, LC_10, LC_100, LC_1G, LC_10G: the link capacities at the working speeds 0Mbps, 10Mbps, 100Mbps, 1Gbps and 10Gbps, respectively.

Although the described flowchart is limited to the 10Gbps speed, the algorithm is easily extended to any higher working speed of a device.
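A minimal sketch of this speed-selection step is given below; the list of supported rates and the function name are assumptions made for illustration:

# Supported link rates in Mbps, in increasing order (0 denotes an idle link).
LINK_RATES = [0, 10, 100, 1000, 10000]

def select_link_speed(t_link_mbps, rates=LINK_RATES):
    """Return the lowest supported rate that can carry the measured demand.

    Mirrors the power scaling flowchart: if LC_prev < T_link <= LC_next,
    the link speed is changed to LC_next.
    """
    for rate in rates:
        if t_link_mbps <= rate:
            return rate
    return rates[-1]  # demand above the highest rate: keep the maximum speed

print(select_link_speed(0))    # 0 (idle)
print(select_link_speed(42))   # 100
print(select_link_speed(950))  # 1000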

2.3.1.3 Power Scaling with Energy-Profile-Aware algorithm - PSnEP

As shown in the previous sections, the port consumed-energy ratios have an impact on the routing and topology optimization processes, such as accumulating and distributing traffic. Consequently, a further scheme is developed by adding the topology optimization algorithm and the device profiling method, resulting in the proposed power scaling and energy-profile-aware (PSnEP) algorithm. By using this algorithm, the scheme can automatically determine under which conditions the traffic will be routed via separate lower-speed ports, port_ls, instead of one higher-speed port, port_hs.
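A sketch of this decision rule in terms of R_PCE is shown below. It assumes that distributing is preferred whenever the number of required lower-speed ports is smaller than R_PCE; it is an illustrative reading of the PSnEP idea, not the exact pseudocode of the algorithm:

import math

def prefer_low_speed_ports(t_mbps, c_ls_mbps, r_pce):
    """Decide whether to distribute a demand over lower-speed ports.

    t_mbps   : offered traffic demand
    c_ls_mbps: capacity of one lower-speed port
    r_pce    : ratio P(port_hs) / P(port_ls) of the device (its energy profile)

    n ports at power P_ls cost n * P_ls, while one higher-speed port costs
    r_pce * P_ls, so distributing saves energy whenever n < r_pce.
    """
    n = math.ceil(t_mbps / c_ls_mbps)  # lower-speed ports needed
    return n < r_pce

print(prefer_low_speed_ports(900, 100, 9.6))  # True  (NetFPGA-like profile)
print(prefer_low_speed_ports(900, 100, 3.5))  # False (Pronto-like profile)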
