
Thread and Data Mapping for Multicore Systems: Improving Communication and Memory Accesses


SpringerBriefs in Computer Science

Series editors

Stan Zdonik, Brown University, Providence, RI, USA

Shashi Shekhar, University of Minnesota, Minneapolis, MN, USA

Xindong Wu, University of Vermont, Burlington, VT, USA

Lakhmi C. Jain, University of South Australia, Adelaide, SA, Australia

David Padua, University of Illinois Urbana-Champaign, Urbana, IL, USA

Xuemin Sherman Shen, University of Waterloo, Waterloo, ON, Canada

Borko Furht, Florida Atlantic University, Boca Raton, FL, USA

V. S. Subrahmanian, University of Maryland, College Park, MD, USA

Martial Hebert, Carnegie Mellon University, Pittsburgh, PA, USA

Katsushi Ikeuchi, University of Tokyo, Tokyo, Japan

Bruno Siciliano, University of Naples Federico II, Napoli, Italy

Sushil Jajodia, George Mason University, Fairfax, VA, USA

Newton Lee, Woodbury University, Burbank, CA, USA


SpringerBriefs present concise summaries of cutting-edge research and practical applications across a wide spectrum of fields. Featuring compact volumes of 50 to 125 pages, the series covers a range of content from professional to academic. Typical topics might include:

• A timely report of state-of-the-art analytical techniques

• A bridge between new research results, as published in journal articles, and a contextual literature review

• A snapshot of a hot or emerging topic

• An in-depth case study or clinical example

• A presentation of core concepts that students must understand in order to make independent contributions

Briefs allow authors to present their ideas and readers to absorb them with minimal time investment. Briefs will be published as part of Springer’s eBook collection, with millions of users worldwide. In addition, Briefs will be available for individual print and electronic purchase. Briefs are characterized by fast, global electronic dissemination, standard publishing contracts, easy-to-use manuscript preparation and formatting guidelines, and expedited production schedules. We aim for publication 8–12 weeks after acceptance. Both solicited and unsolicited manuscripts are considered for publication in this series.

More information about this series at http://www.springer.com/series/10028


Eduardo H. M. Cruz • Matthias Diener • Philippe O. A. Navaux


Eduardo H M Cruz

Federal Institute of Parana (IFPR)

Paranavai, Parana, Brazil

ISSN 2191-5768 ISSN 2191-5776 (electronic)

SpringerBriefs in Computer Science

ISBN 978-3-319-91073-4 ISBN 978-3-319-91074-1 (eBook)

https://doi.org/10.1007/978-3-319-91074-1

Library of Congress Control Number: 2018943692

© The Author(s), under exclusive licence to Springer International Publishing AG, part of Springer Nature 2018

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by the registered company Springer International Publishing AG, part of Springer Nature.

The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland


Preface

This book has its origin in our research. Starting in 2010, we began researching better ways to perform thread mapping to optimize communication in parallel architectures. In 2012, we extended the research to data mapping, as multicore architectures with multiple memory controllers were becoming more popular. It is now the year 2018 and the research is still ongoing.

In this book, we explain all the theory behind thread and data mapping and how it can be used to reduce the memory access latency. We also give an overview of the state of the art, showing how early mechanisms, dependent on expensive procedures such as simulation and source code modifications, evolved to modern mechanisms, which are transparent to programmers and have such a low overhead that they are able to run online (during the execution of the applications).

We would like to thank our families and friends who supported us during this long journey. We also thank our colleagues at the Parallel and Distributed Processing Group (GPPD) of UFRGS, who discussed research ideas with us, analyzed and criticized our work, and supported our research.

Paranavai, Brazil    Eduardo Henrique Molina da Cruz

Urbana, IL, USA    Matthias Diener

Porto Alegre, Brazil    Philippe Olivier Alexandre Navaux


Contents

1 Introduction 1

1.1 Improving Memory Locality with Sharing-Aware Mapping 2

1.2 Monitoring Memory Accesses for Sharing-Aware Mapping 6

1.3 Organization of the Text 8

2 Sharing-Aware Mapping and Parallel Architectures 9

2.1 Understanding Memory Locality in Shared Memory Architectures 9

2.1.1 Lower Latency When Sharing Data 10

2.1.2 Reduction of the Impact of Cache Coherence Protocols 10

2.1.3 Reduction of Cache Misses 11

2.1.4 Reduction of Memory Accesses to Remote NUMA Nodes 13

2.1.5 Better Usage of Interconnections 14

2.2 Example of Shared Memory Architectures Affected by Memory Locality 14

2.2.1 Intel Harpertown 14

2.2.2 Intel Nehalem/Sandy Bridge 15

2.2.3 AMD Abu Dhabi 15

2.2.4 Intel Montecito/SGI NUMAlink 16

2.3 Locality in the Context of Network Clusters and Grids 16

3 Sharing-Aware Mapping and Parallel Applications 19

3.1 Parallel Applications and Sharing-Aware Thread Mapping 19

3.1.1 Considerations About the Sharing Pattern 19

3.1.2 Sharing Patterns of Parallel Applications 22

3.1.3 Varying the Granularity of the Sharing Pattern 23

3.1.4 Varying the Number of Sharers of the Sharing Pattern 26

3.2 Parallel Applications and Sharing-Aware Data Mapping 28

3.2.1 Parameters that Influence Sharing-Aware Data Mapping 28

3.2.2 Analyzing the Data Mapping Potential of Parallel Applications 30


3.2.3 Influence of the Page Size on Sharing-Aware Data Mapping 32

3.2.4 Influence of Thread Mapping on Sharing-Aware Data Mapping 33

4 State-of-the-Art Sharing-Aware Mapping Methods 35

4.1 Sharing-Aware Static Mapping 35

4.1.1 Static Thread Mapping 36

4.1.2 Static Data Mapping 36

4.1.3 Combined Static Thread and Data Mapping 37

4.2 Sharing-Aware Online Mapping 38

4.2.1 Online Thread Mapping 38

4.2.2 Online Data Mapping 39

4.2.3 Combined Online Thread and Data Mapping 41

4.3 Discussion on Sharing-Aware Mapping and the State-of-the-Art 42

4.4 Improving Performance with Sharing-Aware Mapping 45

4.4.1 Mapping Mechanisms 46

4.4.2 Methodology of the Experiments 47

4.4.3 Results 47

5 Conclusions 49

References 51


Acronyms

DBA Dynamic Binary Analysis

IBS Instruction-Based Sampling

ILP Instruction Level Parallelism

IPC Instructions per Cycle

MPI Message Passing Interface

NUMA Non-Uniform Memory Access

PMU Performance Monitoring Unit

QPI QuickPath Interconnect

TLB Translation Lookaside Buffer

TLP Thread Level Parallelism


Chapter 1

Introduction

Since the beginning of the information era, the demand for computing power has been unstoppable. Whenever the technology advances enough to fulfill the needs of a time, new and more complex problems arise, such that the technology is again insufficient to solve them. In the past, the increase of the performance happened mainly due to instruction level parallelism (ILP), with the introduction of several pipeline stages, out-of-order and speculative execution. The increase of the clock rate frequency was also an important way to improve performance. However, the available ILP exploited by compilers and architectures is reaching its limits (Caparros Cabezas and Stanley-Marbell 2011). The increase of clock frequency is also reaching its limits because it raises the energy consumption, which is an important issue for current and future architectures (Tolentino and Cameron 2012).

To keep performance increasing, processor architectures are becoming more dependent on thread level parallelism (TLP), employing several cores to compute in parallel. These parallel architectures put more pressure on the memory subsystem, since more bandwidth to move data between the cores and the main memory is required. To handle the additional bandwidth, current architectures introduce complex memory hierarchies, formed by multiple cache levels, some composed by multiple banks connected to a memory controller. The memory controller interfaces a Uniform or Non-Uniform Memory Access (UMA or NUMA) system. However, with the upcoming increase of the number of cores, a demand for an even higher memory bandwidth is expected (Coteus et al. 2011).

In this context, the reduction of data movement is an important goal for future architectures to keep performance scaling and to decrease energy consumption (Borkar and Chien 2011). Most data movement in current architectures occurs due to memory accesses and communication between the threads. The communication itself in shared memory environments is performed through accesses to blocks of memory shared between the threads. One of the solutions to reduce data movement consists of improving the memory locality (Torrellas 2009).




In this book, we analyze techniques that improve memory locality by performing a global scheduling (Casavant and Kuhl 1988) of threads and data of parallel applications considering their memory access behavior. In thread mapping, threads that share data are mapped to cores that are close to each other. In data mapping, data is mapped to NUMA nodes close to the cores that are accessing it. This type of thread and data mapping is called sharing-aware mapping.
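As a concrete illustration of the thread mapping part (a minimal sketch, not a mechanism from this book), the C program below pins two threads that share a counter to cores 0 and 1, assuming those cores share a cache on the target machine; the core numbers are illustrative, and data mapping would be handled separately through the operating system's NUMA allocation policies.

```c
/* Minimal sketch (not from the book): sharing-aware thread mapping with the
 * Linux pthread affinity API. Two threads that share data are pinned to two
 * cores that are assumed to share a cache (core ids 0 and 1 are illustrative). */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static int shared_counter;              /* data shared by both threads */

static void *worker(void *arg) {
    int core = *(int *)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);                /* pin this thread to its core */
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    for (int i = 0; i < 1000000; i++)
        __sync_fetch_and_add(&shared_counter, 1);   /* touch the shared data */
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int cores[2] = {0, 1};              /* assumed to share an L2/L3 cache */
    pthread_create(&t0, NULL, worker, &cores[0]);
    pthread_create(&t1, NULL, worker, &cores[1]);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d\n", shared_counter);
    return 0;
}
```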

In the rest of this chapter, we first describe how sharing-aware mapping improves memory locality and thereby the performance and energy efficiency. Afterwards, we explain the challenges of detecting the necessary information to perform sharing-aware mapping. Finally, we show how we organized the text.

1.1 Improving Memory Locality with Sharing-Aware Mapping

During the execution of a multi-threaded application, the mapping of its threads and their data can have a great impact on the memory hierarchy, both for performance and energy consumption (Feliu et al. 2012). The potential for improvements depends on the architecture of the machines, as well as on the memory access behavior of the parallel application. Considering the architecture, each processor family uses a different organization of the cache hierarchy and the main memory system, such that the mapping impact varies among different systems. In general, the cache hierarchy is formed by multiple levels, where the levels closer to the processor cores tend to be private, followed by caches shared by multiple cores. For NUMA systems, besides the cache hierarchy, the main memory is also clustered between cores or processors. These complex memory hierarchies introduce differences in the memory access latencies and bandwidths, and thereby in the memory locality, which vary depending on the core that requested the memory operation, the target memory bank and, if the data is cached, which cache resolved the operation.

An example of an architecture that is affected by sharing-aware thread and data mapping is the Intel Sandy Bridge architecture (Intel 2012), illustrated in Fig. 1.1. Sandy Bridge is a NUMA architecture: each processor is connected to a local memory bank. An access to a remote memory bank, a remote access, has an extra performance penalty because of the interchip interconnection latency. Virtual cores within a core can share data using the L1 and L2 cache levels, and all cores within a processor can share data through the L3 cache. Data that is accessed by cores from different caches requires the cache coherence protocol to keep the consistency of the data due to write operations.

Regarding the parallel applications, the main challenge of sharing-aware mapping in shared memory based parallel applications is to detect data sharing. This happens because, in these applications, the source code does not express explicitly which memory accesses happen to a block of memory accessed by more than one thread. This data sharing is then considered as an implicit communication between the threads, and occurs whenever a thread accesses a block of memory that is also accessed by another thread.


Fig. 1.1 Sandy Bridge architecture with two processors. Each processor consists of eight 2-way SMT cores, an L3 cache, and a local main memory; the processors are connected by an interchip interconnection.

Programming environments that are based on implicit communication include Pthreads and OpenMP (2013). On the other hand, there are parallel applications that are based on message passing, where threads send messages to each other using specific routines provided by a message passing library, such as MPI (Gabriel et al. 2004). In such applications, the communication is explicit and can be monitored by tracking the messages sent. Due to the implicit communication, parallel applications using shared memory present more challenges for mapping and are the focus of this book.

In the context of shared memory based parallel applications, the memory access behavior influences sharing-aware mapping because threads can share data among themselves differently for each application. In some applications, each thread has its own data set, and very little memory is shared between threads. On the other hand, threads can share most of their data, imposing a higher overhead on cache coherence protocols to keep consistency among the caches. In applications that have a lot of shared memory, the way the memory is shared is also important to be considered. Threads can share a similar amount of data with all the other threads, or can share more data within a subgroup of threads. In general, sharing-aware data mapping is able to improve performance in all applications except when all data of the application is equally shared between the threads, while sharing-aware thread mapping improves performance in applications whose threads share more data within a subgroup of threads.

In Diener et al. (2015b), it is shown that, on average, 84.9% of the memory accesses to a given page are performed from a single NUMA node, considering a machine with 4 NUMA nodes and 4 KByte memory pages. Results regarding this potential for data mapping are shown in Fig. 1.2a. This result indicates that most applications have a high potential for sharing-aware data mapping, since, on average, 84.9% of all memory accesses could be improved by a more efficient mapping of pages to NUMA nodes. Nevertheless, the potential differs between applications, as we can observe in the applications shown in Fig. 1.2a, where the values range from 76.6% in CG to 97.4% in BT. These different behaviors between applications impose a challenge to mapping mechanisms.
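The 84.9% figure is the average of a per-page metric: the fraction of a page's accesses issued from the NUMA node that accesses it most. The sketch below (not taken from the book; all counts and sizes are invented for illustration) shows how this metric could be computed from per-page, per-node access counts.

```c
/* Minimal sketch (not from the book): computing the per-page "exclusivity"
 * metric discussed above, i.e., the fraction of accesses to a page that come
 * from the NUMA node that accesses it most. The counts are assumed to have
 * been collected beforehand (e.g., by instrumentation) and are illustrative. */
#include <stdio.h>

#define NUM_PAGES 4
#define NUM_NODES 4

/* accesses[p][n] = number of accesses to page p issued from NUMA node n */
static unsigned long accesses[NUM_PAGES][NUM_NODES] = {
    { 90,  5,  3,  2 },   /* mostly accessed from node 0 */
    { 10, 70, 10, 10 },
    { 25, 25, 25, 25 },   /* equally shared: no data mapping potential */
    {  0,  0,  0, 80 },
};

int main(void) {
    double sum = 0.0;
    for (int p = 0; p < NUM_PAGES; p++) {
        unsigned long total = 0, max = 0;
        for (int n = 0; n < NUM_NODES; n++) {
            total += accesses[p][n];
            if (accesses[p][n] > max)
                max = accesses[p][n];
        }
        double excl = total ? (double)max / total : 0.0;
        printf("page %d: %.1f%% of accesses from a single node\n", p, 100.0 * excl);
        sum += excl;
    }
    printf("average: %.1f%%\n", 100.0 * sum / NUM_PAGES);
    return 0;
}
```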


Fig. 1.2 Analysis of parallel applications, adapted from Diener et al. (2015b). (a) Average number of memory accesses to a given page performed by a single NUMA node; higher values indicate a higher potential for data mapping. (b) Example sharing patterns of applications ((1) SP, (2) Vips); axes represent thread IDs, cells show the amount of accesses to shared pages for each pair of threads, and darker cells indicate more accesses.

Regarding the mapping of threads to cores, Diener et al. (2015b) also show that parallel applications can present a wide variety of data sharing patterns. Several patterns that can be exploited by sharing-aware thread mapping were found in the applications. For instance, in Fig. 1.2b1, neighbor threads share data, which is common in applications parallelized using domain decomposition. Other patterns suitable for mapping include patterns in which distant threads share data, or even patterns in which threads share data in clusters due to a pipeline parallelization model. On the other hand, in applications such as Vips, whose sharing pattern is shown in Fig. 1.2b2, each thread shares a similar amount of data with all threads, such that no thread mapping can optimize the communication. Nevertheless, in these applications, data mapping can still improve performance by optimizing the memory accesses to the data private to each thread. The analysis reveals that most parallel applications can benefit from sharing-aware mapping.

Sharing-aware thread and data mapping improve performance and energy efficiency of parallel applications by optimizing memory accesses (Diener et al. 2014). Improvements happen for three main reasons. First, cache misses are reduced by decreasing the number of invalidations that happen when write operations are performed on shared data (Martin et al. 2012). For read operations, the effective cache size is increased by reducing the replication of cache lines on multiple caches (Chishti et al. 2005). Second, the locality of memory accesses is increased by mapping data to the NUMA node where it is most accessed. Third, the usage of interconnections in the system is improved by reducing the traffic on slow and power-hungry interchip interconnections, using more efficient intrachip interconnections instead. Although there are several benefits of using sharing-aware thread and data mapping, the wide variety of architectures, memory hierarchies and memory access behavior of parallel applications restricts its usage in current systems.


Fig. 1.3 Results obtained with sharing-aware mapping, adapted from Diener et al. (2014). (a) Average reduction of execution time, L3 cache misses, interchip interconnection traffic and energy consumption when using combined thread and data mappings. (b) Average execution time reduction provided by a combined thread and data mapping, and by using thread and data mapping separately.

The performance and energy consumption improvements that can be obtained by sharing-aware mapping are evaluated in Diener et al. (2014). The results using an Oracle mapping, which generates a mapping considering all memory accesses performed by the application, are shown in Fig. 1.3a. Experiments were performed on a machine with 4 NUMA nodes capable of running 64 threads simultaneously. The average execution time reduction reported was 15.3%. The reduction of execution time was possible due to a reduction of 12.8% of cache misses, and 19.8% of interchip interconnection traffic. Results were highly dependent on the characteristics of the parallel application. In applications that exhibit a high potential for sharing-aware mapping, execution time was reduced by up to 39.5%. Energy consumption was also reduced, by 11.1% on average.

Another notable result is that thread and data mapping should be performed together to achieve higher improvements (Diener et al. 2014). Figure 1.3b shows the average execution time reduction of using thread and data mapping together, and of using them separately. The execution time reduction of a combined thread and data mapping was 15.3%. On the other hand, the reduction of using thread mapping alone was 6.4%, and of using data mapping alone 2.0%. It is important to note that the execution time reduction of a combined mapping is higher than the sum of the reductions of using the mappings separately. This happens because each mapping has a positive effect on the other mapping.

1.2 Monitoring Memory Accesses for Sharing-Aware Mapping

The main information required to perform sharing-aware mapping is the memory addresses accessed by each thread. This can be collected statically or online. When memory addresses are monitored statically, the application is usually executed in controlled environments such as simulators, where all memory addresses can be tracked (Barrow-Williams et al. 2009). These techniques are not able to handle applications whose sharing pattern changes between executions. Also, the variety of different memory hierarchies present in current and future architectures limits the applicability of static mapping techniques. In online mapping, the sharing information must be detected while running the application. As online mapping mechanisms require memory access information to infer memory access patterns and make decisions, different information collection approaches have been employed with varying degrees of accuracy and overhead. Although capturing all memory accesses from an application would provide the best information for mapping algorithms, the overhead would surpass the benefits from better task and data mappings. For this reason, this is only done for static mapping mechanisms.

In order to achieve a smaller overhead, most traditional methods for collecting memory access information are based on sampling. Memory access patterns can be estimated by tracking page faults (Diener et al. 2014, 2015a; LaRowe et al. 1992; Corbet 2012a,b), cache misses (Azimi et al. 2009), TLB misses (Marathe et al. 2010; Verghese et al. 1996), or by using hardware performance counters (Dashti et al. 2013), among others. Sampling based mechanisms present an accuracy lower than intended, as we show in the next paragraphs. This is due to the small number of memory accesses captured (and their representativeness) in relation to all of the memory accesses. For instance, a thread may wrongly appear to access a page less than another thread because its memory accesses were undersampled. In another scenario, a thread may have few accesses to a page, while having cache or TLB misses in most of these accesses, leading to bad mappings in mechanisms based on cache or TLB misses. Other work proposes hardware-based mechanisms (Cruz et al. 2016c), which have a higher accuracy but do not work in existing hardware.

Following the hypothesis that a more accurate estimation of the memory access pattern of an application can result in a better mapping and performance improvements, we performed experiments varying the amount of memory samples used to calculate the mapping. Experiments were run using the NAS parallel benchmarks (Jin et al. 1999) and the PARSEC benchmark suite (Bienia et al. 2008b).


Fig. 1.4 Results obtained with different amounts of memory access samples for sharing-aware mapping. (a) Accuracy of the final mapping (higher is better). (b) Execution time (lower is better). The amount of samples is given relative to the total number of memory accesses.

The accuracy and execution time obtained with the different methods for benchmarks BT, FT, SP, and Facesim are presented in Fig. 1.4. To calculate the accuracy, we compare if the NUMA node selected for each page of the application is equal to the NUMA node that performed most accesses to the page (the higher the percentage, the better). The execution time is calculated as the reduction compared to using 1:10^7 samples (the bigger the reduction, the better). The accuracy results, presented in Fig. 1.4a, show that accuracy increases with the number of memory access samples used, as expected. They also show that some applications require more samples than others. For instance, BT and FT require at least 1:10 of samples to achieve a good accuracy, while SP and Facesim achieve a good accuracy using 1:10^4 of samples.
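To make the accuracy metric concrete, the following sketch (not the authors' evaluation code; all counts are invented) picks, for each page, the NUMA node suggested by sampled access counts and compares it with the node that performs most accesses in the full trace; undersampled pages can easily be attributed to the wrong node, which is what degrades the accuracy of sampling-based mechanisms.

```c
/* Minimal sketch (not the book's evaluation code): computing the mapping
 * accuracy described above. For each page we compare the NUMA node chosen
 * from sampled access counts with the node that performed most accesses in
 * the full trace; accuracy is the fraction of pages where they match. */
#include <stdio.h>

#define NUM_PAGES 3
#define NUM_NODES 4

/* Full-trace vs. sampled counts; all values are illustrative. */
static unsigned long full[NUM_PAGES][NUM_NODES] = {
    { 900,  50,  30,  20 },
    {  10, 700, 100, 100 },
    {  40,  45,   5,  10 },  /* nodes 0 and 1 are close: easy to mispredict */
};
static unsigned long sampled[NUM_PAGES][NUM_NODES] = {
    {   9,   1,   0,   0 },
    {   0,   7,   1,   1 },
    {   1,   0,   0,   0 },  /* undersampling picks node 0 instead of node 1 */
};

static int preferred_node(const unsigned long counts[NUM_NODES]) {
    int best = 0;
    for (int n = 1; n < NUM_NODES; n++)
        if (counts[n] > counts[best])
            best = n;
    return best;
}

int main(void) {
    int correct = 0;
    for (int p = 0; p < NUM_PAGES; p++)
        if (preferred_node(sampled[p]) == preferred_node(full[p]))
            correct++;
    printf("mapping accuracy: %.1f%%\n", 100.0 * correct / NUM_PAGES);
    return 0;
}
```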

The execution time results, shown in Fig. 1.4b, indicate a clear trend: a higher accuracy in estimating memory access patterns translates into a shorter execution time. However, most sampling mechanisms are limited to a small number of samples due to the high overhead of the techniques used for capturing memory accesses (Azimi et al. 2009; Diener et al. 2014), harming accuracy and thereby performance. Given these considerations, it is expected that hardware-based approaches have higher performance improvements, since they have access to a larger number of memory accesses.

1.3 Organization of the Text

The remainder of this book is organized as follows. Chapter 2 explains how sharing-aware mapping affects the hardware. Chapter 3 explains how sharing-aware mapping affects parallel applications. Chapter 4 describes several mechanisms proposed to perform sharing-aware mapping. Finally, Chap. 5 draws the conclusions of this book.


Chapter 2

Sharing-Aware Mapping and Parallel Architectures

… improved, which leads to an increase of performance and energy efficiency. For optimal performance improvements, data and thread mapping should be performed together (Terboven et al. 2008).

In this section, we first analyze the theoretical benefits that an improved memory locality provides to shared memory architectures. Afterwards, we present examples of current architectures, and how memory locality impacts their performance. Finally, we briefly describe how locality affects the performance in network clusters and grids.

2.1 Understanding Memory Locality in Shared Memory Architectures

Sharing-aware thread and data mapping is able to improve the memory locality in shared memory architectures, and thereby the performance and energy efficiency. In this section, we explain the main reasons for that.



Fig. 2.1 System architecture with two processors. Each processor consists of eight 2-way SMT cores and three cache levels, and has a local main memory; the processors are connected by an interchip interconnection.

2.1.1 Lower Latency When Sharing Data

To illustrate how the memory hierarchy influences memory locality, Fig. 2.1 shows an example of an architecture where there are three possibilities for sharing between threads. Threads running on the same core (Fig. 2.1 A) can share data through the fast L1 or L2 caches and have the highest sharing performance. Threads that run on different cores (Fig. 2.1 B) have to share data through the slower L3 cache, but can still benefit from the fast intrachip interconnection. When threads share data across physical processors in different NUMA nodes (Fig. 2.1 C), they need to use the slow interchip interconnection. Hence, the sharing performance in case C is the slowest in this architecture.

2.1.2 Reduction of the Impact of Cache Coherence Protocols

Cache coherence protocols are responsible for keeping data integrity in shared memory parallel architectures. They keep track of the state of all cache lines, identifying which cache lines are exclusive to a cache, shared between caches, and if the cache line has been modified. Most coherence protocols employed in current architectures derive from the MOESI protocol. Coherence protocols are optimized by thread mapping because data that would be replicated in several caches in a bad mapping, and thereby considered as shared, can be allocated in a cache shared by the threads that access the data, such that the data is considered as private. This is depicted in Fig. 2.2, where two threads that share data are running on separate cores. When using a bad thread mapping, as in Fig. 2.2a, the threads are running on cores that do not share any cache, and due to that the shared data is replicated in both caches, and the coherence protocol needs to send messages between the caches to keep data integrity. On the other hand, when the threads use a good mapping, as in Fig. 2.2b, the shared data is stored in the same cache, and the coherence protocol does not need to send any messages.


Fig. 2.2 The relation between cache coherence protocols and sharing-aware thread mapping. (a) Bad thread mapping: the cache coherence protocol needs to send messages to keep the integrity of data replicated in the caches, degrading the performance and energy efficiency. (b) Good thread mapping: the data shared between the threads is stored in the same shared cache, such that the cache coherence protocol does not need to send any messages.

2.1.3 Reduction of Cache Misses

Since thread mapping influences the shared data stored in cache memories, it also affects cache misses. We identified three main types of cache misses that are reduced by an efficient thread mapping (Cruz et al. 2012), which we discuss next.


Fig. 2.3 How a good mapping can reduce the amount of invalidation misses in the caches. In this example, core 0 reads the data, then core 1 writes, and finally core 0 reads the data again. (a) Bad thread mapping: the cache coherence protocol needs to invalidate the data in the write operation, and then performs a cache-to-cache transfer for the read operation. (b) Good thread mapping: the cache coherence protocol does not need to send any messages.

Since the threads do not share a cache, when thread 1 writes, it misses the cache and thereby needs first to retrieve the cache line to its cache, and then invalidates the copy in the cache of thread 0. When thread 0 reads the data again, it will generate another cache miss, requiring a cache-to-cache transfer from the cache of thread 1.

On the other hand, when the threads share a common cache, as shown in Fig. 2.3b, the write operation of thread 1 and the second read operation of thread 0 do not generate any cache miss. Summarizing, we reduce the number of invalidations of cache lines accessed by several threads, since these lines would be considered as private by cache coherence protocols (Martin et al. 2012).


Fig. 2.4 Impact of sharing-aware mapping on replication misses. (a) Bad thread mapping: the replicated data leaves almost no free space in the cache memories. (b) Good thread mapping: since the data is not replicated in the caches, there is more free space to store other data.

By mapping applications that share large amounts of data to cores that share a cache, the space wasted with replicated cache lines is minimized, leading to a reduction of cache misses. In Fig. 2.4, we illustrate how thread mapping affects cache line replication, in which two pairs of threads share data within each pair.

In case of a bad thread mapping, in Fig. 2.4a, the threads that share data are mapped to cores that do not share a cache, such that the data needs to be replicated in the caches, leaving very little free cache space to store other data. In case of a good thread mapping, in Fig. 2.4b, the threads that share data are mapped to cores that share a common cache. In this way, the data is not replicated and there is much more free space to store other data in the cache.

2.1.4 Reduction of Memory Accesses to Remote NUMA Nodes

In NUMA systems, the time to access the main memory depends on the core that requested the memory access and the NUMA node that contains the destination memory bank (Ribeiro et al. 2009). We show in Fig. 2.5 an example of a NUMA architecture. If the core and the destination memory bank belong to the same node, we have a local memory access, as in Fig. 2.5 A. On the other hand, if the core and the destination memory bank belong to different NUMA nodes, we have a remote memory access, as in Fig. 2.5 B. Local memory accesses are faster than remote memory accesses. By mapping the threads and data of the application in such a way that we increase the number of local memory accesses over remote memory accesses, the average latency of the main memory is reduced.
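The sketch below (not from the book) illustrates this idea on Linux using the libnuma and scheduling APIs: the calling thread is pinned to a core and its data is allocated on that core's local NUMA node, so that subsequent accesses are local. The core number, allocation size, and the availability of libnuma are assumptions.

```c
/* Minimal sketch (not from the book): pinning a thread to a core and placing
 * its data on that core's local NUMA node with the Linux libnuma API, so that
 * its memory accesses stay local. Core id and sizes are illustrative. */
#define _GNU_SOURCE
#include <numa.h>      /* link with -lnuma */
#include <sched.h>
#include <stdio.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA not available on this system\n");
        return 1;
    }

    int core = 0;                            /* illustrative core id */
    int node = numa_node_of_cpu(core);       /* NUMA node local to that core */

    /* Bind the calling thread to the chosen core. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    sched_setaffinity(0, sizeof(set), &set);

    /* Allocate the thread's data on the core's local node. */
    size_t size = 64 * 1024 * 1024;
    char *data = numa_alloc_onnode(size, node);
    if (!data) {
        perror("numa_alloc_onnode");
        return 1;
    }

    /* Touching the data from this thread now results in local accesses only. */
    for (size_t i = 0; i < size; i += 4096)
        data[i] = 0;

    numa_free(data, size);
    return 0;
}
```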


Fig. 2.5 System architecture with two processors. Each processor consists of eight 2-way SMT cores and three cache levels, and has a local main memory. (A) marks a local memory access and (B) a remote memory access.

2.1.5 Better Usage of Interconnections

The objective is to make better use of the available interconnections in the processors. We can map the threads and data of the application in such a way that we reduce interchip traffic and use intrachip interconnections instead, which have a higher bandwidth and lower latency. In order to reach this objective, the cache coherence related traffic, such as cache line invalidations and cache-to-cache transfers, has to be reduced. The reduction of remote memory accesses also improves the usage of interconnections.

2.2 Example of Shared Memory Architectures Affected by Memory Locality

Most architectures are affected by the locality of memory accesses. In this section, we present examples of such architectures, and how memory locality impacts their performance.

2.2.1 Intel Harpertown

The Harpertown architecture (Intel 2008) is a Uniform Memory Access (UMA) architecture, where all cores have the same memory access latency to any main memory bank. We illustrate the hierarchy of Harpertown in Fig. 2.6. Each processor has four cores, where every core has private L1 instruction and data caches, and each L2 cache is shared by two cores. Threads running on cores that share an L2 cache (Fig. 2.6 A) have the highest data sharing performance, followed by threads that share data through the intrachip interconnection (Fig. 2.6 B). Threads running on cores that make use of the Front Side Bus (FSB) (Fig. 2.6 C) have the lowest data sharing performance. To access the main memory, all cores also need to access the FSB, which generates a bottleneck.


Fig. 2.6 Intel Harpertown architecture

2.2.2 Intel Nehalem/Sandy Bridge

Intel Nehalem (Intel 2010b) and Sandy Bridge (Intel 2012), as well as their current successors, follow the same memory hierarchy. They are NUMA architectures, in which each processor in the system has one integrated memory controller. Therefore, each processor forms a NUMA node. Each core is 2-way SMT, using a technology called Hyper-Threading. The interchip interconnection is called QuickPath Interconnect (QPI) (Ziakas et al. 2010). The memory hierarchy is the same as the ones illustrated in Figs. 2.1 and 2.5.

2.2.3 AMD Abu Dhabi

The AMD Abu Dhabi (AMD 2012) is also a NUMA architecture. The memory hierarchy is depicted in Fig. 2.7, using two AMD Opteron 6386 processors. Each processor forms 2 NUMA nodes, as each one has 2 memory controllers. There are four possibilities for data sharing between the cores. Every pair of cores shares an L2 cache (Fig. 2.7 A). The L3 cache is shared by eight cores (Fig. 2.7 B). Cores within a processor can also share data using the intrachip interconnection (Fig. 2.7 C). Cores from different processors need to use the interchip interconnection, called HyperTransport (Conway 2007) (Fig. 2.7 D).

Regarding the NUMA nodes, cores can access the main memory with three different latencies. A core may access its local node (Fig. 2.7 E), using its local memory controller. It can access the remote NUMA node using the other memory controller within the same processor (Fig. 2.7 F). Finally, a core can access a main memory bank connected to a memory controller located in a different processor (Fig. 2.7 G).


Fig. 2.7 AMD Abu Dhabi architecture

2.2.4 Intel Montecito/SGI NUMAlink

It is a NUMA architecture, but with characteristics different from the previously mentioned ones. We illustrate the memory hierarchy in Fig. 2.8, considering that each processor is a dual-core Itanium 9030 (Intel 2010a). There are no shared cache memories, and each NUMA node contains two processors. Regarding data sharing, cores can exchange data using the intrachip interconnection (Fig. 2.8 A), the intranode interconnection (Fig. 2.8 B), or the SGI NUMAlink interconnection (Woodacre et al. 2005) (Fig. 2.8 C). Regarding the NUMA nodes, cores can access their local memory bank (Fig. 2.8 D), or the remote memory banks (Fig. 2.8 E).

2.3 Locality in the Context of Network Clusters and Grids

Until this section, we analyzed how the different memory hierarchies influence memory locality, and how improving memory locality benefits performance and energy efficiency. Although it is not the focus of this book, it is important to contextualize the concept of locality in network clusters and grids, which use message passing instead of shared memory. In such architectures, each machine represents a node, and the nodes are connected by routers, switches and network links with different latencies, bandwidths and topologies. Each machine is actually a shared memory architecture, as described in the previous sections, and can benefit from increasing memory locality.


Fig. 2.8 Intel Montecito/SGI NUMAlink architecture

Parallel applications running in network clusters or grids are organized in processes instead of threads.¹ Processes, contrary to threads, do not share memory among themselves by default. In order to communicate, the processes send and receive messages to each other. This parallel programming paradigm is called message passing. The discovery of the communication pattern of message passing based applications is straightforward compared to discovering the memory access pattern in shared memory based applications. This happens because the messages keep fields that explicitly identify the source and destination, which is an explicit communication, such that communication can be detected by monitoring the messages. On the other hand, in shared memory based applications, the focus of this book, the communication is implicit and happens when different threads access the same data.

¹ … processes. However, it would not be possible to share the same virtual memory address space, requiring operating system support to communicate using shared memory.


Chapter 3

Sharing-Aware Mapping and Parallel Applications

3.1 Parallel Applications and Sharing-Aware Thread Mapping

To analyze the memory locality in the context of sharing-aware thread mapping, we must investigate how threads of parallel applications share data, which we call the sharing pattern (Cruz et al. 2014). For thread mapping, the most important information is the number of memory accesses to shared data, and with which threads the data is shared. The memory address of the data is not relevant. In this section, we first expose some considerations about factors that influence thread mapping. Then, we analyze the data sharing patterns of parallel applications. Finally, we vary some parameters to verify how they impact the detected sharing patterns.

3.1.1 Considerations About the Sharing Pattern

We identified the following items that influence the detection of the data sharing pattern between the threads:


Fig. 3.1 How the granularity of the sharing pattern affects the spatial false sharing problem. (a) Coarse granularity: the data accessed by the threads belong to the same memory block. (b) Fine granularity: the data accessed by the threads belong to different memory blocks.

3.1.1.1 Dimension of the Sharing Pattern

The sharing pattern can be analyzed by grouping different numbers of threads. To calculate the amount of data sharing between groups of threads of any size, the time and space complexity rises exponentially, which discourages the use of thread mapping for large groups of threads. Therefore, we evaluate the sharing pattern only between pairs of threads, generating a sharing matrix, which has a quadratic space complexity.

3.1.1.2 Granularity of the Sharing Pattern (Memory Block Size)

To track which threads access each piece of data, we need to divide the entire memory address space into blocks. A thread that accesses a memory block is called a sharer of the block. The size of the memory block directly influences the spatial false sharing problem (Cruz et al. 2012), depicted in Fig. 3.1.

Using a large block, as shown in Fig. 3.1a, threads accessing the same memory block, but different parts of it, will be considered sharers of the block. Therefore, the usage of large blocks increases the spatial false sharing. Using smaller blocks, as in Fig. 3.1b, decreases the spatial false sharing, since it increases the possibility that accesses to different memory positions are performed in different blocks.

In UMA architectures, the memory block size usually used is the cache line size, as the most important resource optimized by thread mapping is the cache. In NUMA architectures, the memory block is usually set to the size of the memory page, since the main memory banks are spread over the NUMA nodes using a page level granularity.
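A minimal sketch of this bookkeeping (not from the book) is shown below: a memory address is translated into a block identifier by dividing it by the block size, using a cache-line-sized block as on UMA systems and a page-sized block as on NUMA systems; the sizes and addresses are illustrative.

```c
/* Minimal sketch (not from the book): mapping a memory address to a block
 * identifier for the two granularities discussed above. The block size would
 * be the cache line size on UMA systems and the page size on NUMA systems. */
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE_SIZE 64u        /* typical, architecture dependent */
#define PAGE_SIZE       4096u      /* typical 4 KByte page */

/* Block id = address divided by the block size. */
static inline uintptr_t block_id(uintptr_t addr, unsigned block_size) {
    return addr / block_size;
}

int main(void) {
    uintptr_t addr1 = 0x1000, addr2 = 0x1040;   /* 64 bytes apart */

    /* With page granularity both addresses fall into the same block (possible
     * spatial false sharing); with cache line granularity they do not. */
    printf("page blocks: %lu %lu\n",
           (unsigned long)block_id(addr1, PAGE_SIZE),
           (unsigned long)block_id(addr2, PAGE_SIZE));
    printf("line blocks: %lu %lu\n",
           (unsigned long)block_id(addr1, CACHE_LINE_SIZE),
           (unsigned long)block_id(addr2, CACHE_LINE_SIZE));
    return 0;
}
```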

3.1.1.3 Data Structure Used to Store the Thread Sharers of a Memory Block

We store the threads that access each memory block in a vector we call the sharers vector. There are two main ways to organize the sharers vector.


In the first way, we reserve one element for each thread of the parallel application, each one containing a time stamp of the last time the corresponding thread accessed the memory block. In this way, at a certain time T, a thread is considered as a sharer of a memory block if the difference between T and the time stored in the element of the corresponding thread is lower than a certain threshold. In other words, a thread is a sharer of a block if it has accessed the block recently.

The second way to organize the structure is simpler: we store the IDs of the last threads that accessed the block. The most recent thread is stored in the first element of the vector, and the oldest thread in the last element of the vector. In this organization, a thread is considered a sharer of the memory block if its ID is stored in the vector.

The first method to organize the thread sharers of a memory block has the advantage of storing information about all threads of the parallel application, as well as being more flexible to adjust when a thread is considered a sharer. The second method has the advantage of being much simpler to implement. In this book, we focus our analysis of the sharing pattern using the second method.
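The sketch below (a hypothetical implementation, not the book's code) shows the second organization: each block keeps the IDs of the last few threads that accessed it, with the vector size fixed at 2 sharers per block, as used later in the book's experiments.

```c
/* Minimal sketch (not from the book): the second sharers vector organization,
 * which keeps the IDs of the last N threads that accessed each memory block.
 * N (the vector size) controls how much history is kept. */
#include <stdio.h>
#include <string.h>

#define MAX_SHARERS 2              /* 2 sharers per block, as in the book */

struct block_info {
    int sharers[MAX_SHARERS];      /* most recent thread first, -1 = empty */
};

/* Record that thread `tid` accessed the block. */
static void block_access(struct block_info *b, int tid) {
    if (b->sharers[0] == tid)
        return;                                   /* already the most recent */
    /* Shift older sharers down and insert the new one at the front. */
    memmove(&b->sharers[1], &b->sharers[0], (MAX_SHARERS - 1) * sizeof(int));
    b->sharers[0] = tid;
}

/* A thread is a sharer of the block if its ID is stored in the vector. */
static int is_sharer(const struct block_info *b, int tid) {
    for (int i = 0; i < MAX_SHARERS; i++)
        if (b->sharers[i] == tid)
            return 1;
    return 0;
}

int main(void) {
    struct block_info b = { .sharers = { -1, -1 } };
    block_access(&b, 0);
    block_access(&b, 3);
    printf("thread 0 sharer? %d, thread 3 sharer? %d, thread 5 sharer? %d\n",
           is_sharer(&b, 0), is_sharer(&b, 3), is_sharer(&b, 5));
    return 0;
}
```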

3.1.1.4 History of the Sharing Pattern

Another important aspect of the sharing pattern is how much of the past events we should consider for each memory block. This aspect influences the temporal false sharing problem (Cruz et al. 2012), illustrated in Fig. 3.2. Temporal false sharing happens when two threads access the same memory block, but at very distant times during the execution, as shown in Fig. 3.2a. To be actually considered as data sharing, the time difference between the memory accesses to the same block should not be long, as in Fig. 3.2b. Using the first method to organize the sharers of each memory block (explained in the previous item), the temporal sharing can be controlled by adjusting the threshold that defines if a memory access is recent enough to determine if a thread is a sharer of a block. Using the second implementation, we can control the temporal sharing by setting how many thread IDs can be stored in the sharers vector. The more elements in the sharers vector, the more thread IDs can be tracked, but this also increases the temporal false sharing.

Fig. 3.2 The temporal false sharing problem. (a) False temporal sharing. (b) True temporal sharing.


Fig. 3.3 Sharing patterns of parallel applications from the OpenMP NAS Parallel Benchmarks and the PARSEC Benchmark Suite. Axes represent thread IDs. Cells show the number of accesses to shared data for each pair of threads; darker cells indicate more accesses. The sharing patterns were generated using a memory block size of 4 KBytes, and storing 2 sharers per memory block. (a) BT, (b) CG, (c) DC, (d) EP, (e) FT, (f) IS, (g) LU, (h) MG, (i) SP, (j) UA, (k) Blackscholes, (l) Bodytrack, (m) Canneal, (n) Dedup, (o) Facesim, (p) Ferret, (q) Fluidanimate, (r) Freqmine, (s) Raytrace, (t) Streamcluster, (u) Swaptions, (v) Vips, (w) x264.

3.1.2 Sharing Patterns of Parallel Applications

The sharing matrices of several parallel applications are illustrated in Fig. 3.3, where axes represent thread IDs and cells show the number of accesses to shared data for each pair of threads. Darker cells indicate more accesses. The sharing patterns were generated using a memory block size of 4 KBytes, and storing 2 sharers per memory block. Since the absolute values of the contents of the sharing matrices vary significantly between the applications, we normalized each sharing matrix to its own maximum value and colored the cells according to the amount of data sharing. The data used to generate the sharing matrices was collected by instrumenting the applications using Pin (Bach et al. 2010), a dynamic binary instrumentation tool. The instrumentation code monitors all memory accesses, keeping track of which threads access each memory block. We analyze applications from the OpenMP NAS Parallel Benchmarks (Jin et al. 1999) and the PARSEC Benchmark Suite (Bienia et al. 2008b).
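The sketch below (not the authors' Pin tool; all sizes and names are illustrative) shows how such an instrumentation callback could maintain both structures: on every memory access, the accessing thread is paired with the block's previous sharers to increment the sharing matrix, and the block's sharers vector is then updated. A real tool would call on_memory_access from its memory access hooks.

```c
/* Minimal sketch (not the book's tool): building a sharing matrix from a
 * stream of (thread, address) memory accesses, using a per-block sharers
 * vector with 2 entries and 4 KByte blocks, as used for Fig. 3.3. */
#include <stdint.h>
#include <stdio.h>

#define NUM_THREADS 32
#define BLOCK_SIZE  4096u          /* 4 KByte blocks */
#define NUM_BLOCKS  (1u << 20)     /* illustrative table size */
#define MAX_SHARERS 2              /* 2 sharers per block */

static int      sharers[NUM_BLOCKS][MAX_SHARERS];         /* -1 = empty */
static uint64_t sharing_matrix[NUM_THREADS][NUM_THREADS];

/* Must be called once before tracking starts: mark all blocks as unshared. */
static void init_sharing_tracking(void) {
    for (uintptr_t blk = 0; blk < NUM_BLOCKS; blk++)
        for (int i = 0; i < MAX_SHARERS; i++)
            sharers[blk][i] = -1;
}

/* Called for every memory access performed by thread `tid` at `addr`. */
static void on_memory_access(int tid, uintptr_t addr) {
    uintptr_t block = (addr / BLOCK_SIZE) % NUM_BLOCKS;

    /* Every previous sharer of this block shares data with `tid`. */
    for (int i = 0; i < MAX_SHARERS; i++) {
        int other = sharers[block][i];
        if (other >= 0 && other != tid) {
            sharing_matrix[tid][other]++;
            sharing_matrix[other][tid]++;
        }
    }

    /* Update the sharers vector: most recent thread first. */
    if (sharers[block][0] != tid) {
        for (int i = MAX_SHARERS - 1; i > 0; i--)
            sharers[block][i] = sharers[block][i - 1];
        sharers[block][0] = tid;
    }
}

int main(void) {
    init_sharing_tracking();
    /* Simulate a few accesses: threads 0 and 1 touch the same 4 KByte block. */
    on_memory_access(0, 0x1000);
    on_memory_access(1, 0x1010);
    on_memory_access(0, 0x1abc);
    printf("accesses shared by threads 0 and 1: %llu\n",
           (unsigned long long)sharing_matrix[0][1]);
    return 0;
}
```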
