
Parallel Computing (Thoại Nam)
Parallel Processing 10: Parallel Paradigms and Programming Models


Outline

– Implicit parallelism
– Explicit parallel models
– Other programming models


Parallel Programming Paradigms

 Parallel programming paradigms/models are the ways to
– Structure the algorithm of a parallel program

 Commonly-used algorithmic paradigms
– Phase parallel (a sketch follows below)
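As a rough illustration (not from the slides), a phase-parallel program alternates computation phases, in which all processes work independently, with interaction phases, in which they synchronize and exchange data. A minimal C/OpenMP sketch, where compute_step and exchange are hypothetical stand-ins for the two phases:

    #include <omp.h>

    void compute_step(int tid);   /* hypothetical: one thread's work        */
    void exchange(void);          /* hypothetical: shared interaction phase */

    void phase_parallel(int steps) {
        #pragma omp parallel
        for (int s = 0; s < steps; s++) {
            compute_step(omp_get_thread_num());  /* computation phase        */
            #pragma omp barrier                  /* all threads finish phase */
            #pragma omp single
            exchange();                          /* interaction phase        */
        }                                        /* implicit barrier after single */
    }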


Parallel Programmability Issues

 The programmability of a parallel programming model is determined by properties such as its structuredness and its portability


Structuredness

 A program is structured if it is composed of structured constructs, each of which has these three properties (see the example below):
– It is a single-entry, single-exit construct
– Its different semantic entities are clearly identified
– Related operations are enclosed in one construct

 Structuredness mostly depends on
– The programming language
– The design of the program
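As a small illustration (not from the slides; names are hypothetical), the first C function below is a structured construct: single entry, single exit, with the related operations enclosed in one loop. The goto version computes the same thing but scatters the logic across labels, so its boundaries and semantics are much harder to identify:

    /* Structured: one entry, one exit, related operations in one construct. */
    int sum_positive(const int *a, int n) {
        int sum = 0;
        for (int i = 0; i < n; i++)
            if (a[i] > 0)
                sum += a[i];
        return sum;
    }

    /* Unstructured: the same computation written with gotos. */
    int sum_positive_goto(const int *a, int n) {
        int sum = 0, i = 0;
    loop:
        if (i >= n) goto done;
        if (a[i] <= 0) goto next;
        sum += a[i];
    next:
        i++;
        goto loop;
    done:
        return sum;
    }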


Portability

 A program is portable across a set of computer systems if it can be transferred from one machine to another with little effort

 Portability largely depends on
– The language of the program
– The target machine’s architecture

 Levels of portability, from least to most portable:
1. Users must change the program’s algorithm
2. Users only have to change the source code
3. Users only have to recompile and relink the program
4. Users can use the executable directly


Parallel Programming Models

 Widely accepted programming models are
– Implicit parallelism
– The data-parallel model
– The message-passing model
– The shared-variable model


Implicit Parallelism

 The compiler and the run-time support system automatically exploit the parallelism of the sequential-like program written by the user

 Ways to implement implicit parallelism
– Parallelizing compilers
– User direction
– Run-time parallelization


Parallelizing Compiler

 A parallelizing (restructuring) compiler must
– Perform dependence analysis on a sequential program’s source code
– Use transformation techniques to convert the sequential code into native parallel code

 Dependence analysis is the identification of
– Data dependence
– Control dependence (both kinds are illustrated below)
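For illustration (not from the slides; the array names are hypothetical), a minimal C example of both kinds of dependence:

    /* Data dependence: iteration i reads a[i-1], which the previous
       iteration wrote, so the loop cannot run in parallel as written. */
    void data_dep(double *a, const double *b, double *c, int n) {
        for (int i = 1; i < n; i++) {
            a[i] = a[i-1] + b[i];   /* S1: writes a[i], reads a[i-1]    */
            c[i] = a[i] * 2.0;      /* S2: reads the a[i] written by S1 */
        }
    }

    /* Control dependence: whether S3 executes depends on the branch on
       b[i], so control flow must be analyzed as well as data flow. */
    void control_dep(const double *b, double *c, int n) {
        for (int i = 0; i < n; i++)
            if (b[i] > 0.0)
                c[i] = b[i];        /* S3 */
    }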


 Data dependence
 Control dependence
 When dependencies do exist, transformation and optimizing techniques should be used
– To eliminate those dependencies, or
– To make the code parallelizable, if possible


Some Optimizing Techniques for Eliminating Data Dependencies

Example: privatization (a sketch follows below)
– Before: statement Q needs the value A computed by statement P, so the N iterations of the Do loop cannot be parallelized
– After: each iteration of the Do loop has a private copy A(i), so we can execute the Do loop in parallel
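The slide’s loop code did not survive extraction; a minimal C sketch of the transformation its annotations describe, with the arrays b, c, x assumed for illustration:

    #define M 1000

    /* Before privatization: every iteration writes and then reads the one
       shared scalar a, so the iterations conflict with each other. */
    void before(double *b, double *c, double *x) {
        double a;
        for (int i = 0; i < M; i++) {
            a = b[i] + c[i];      /* P: writes the shared scalar a        */
            x[i] = a * a;         /* Q: reads the value of a written by P */
        }
    }

    /* After privatization: each iteration owns its private copy a[i], so
       no cross-iteration conflict remains and the loop can run in parallel. */
    void after(double *b, double *c, double *x) {
        static double a[M];       /* one private copy per iteration */
        for (int i = 0; i < M; i++) {
            a[i] = b[i] + c[i];   /* P */
            x[i] = a[i] * a[i];   /* Q */
        }
    }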


Some Optimizing Techniques for Eliminating Data Dependencies (cont’d)

Example: parallel reduction (a sketch follows below)
– The Do loop cannot be executed in parallel as written, since computing Sum in the i-th iteration needs the value of Sum from the previous iteration
– A parallel reduction function is used to avoid the data dependency
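The slide’s loop code was also lost; a minimal C sketch, using OpenMP’s reduction clause as one concrete form of a parallel reduction function (the array name a is assumed):

    #include <omp.h>
    #define N 1000000

    /* Sequential: sum in iteration i needs the sum from iteration i-1. */
    double sum_seq(const double *a) {
        double sum = 0.0;
        for (int i = 0; i < N; i++)
            sum = sum + a[i];
        return sum;
    }

    /* Parallel reduction: each thread accumulates a private partial sum;
       the partials are combined when the loop ends, which removes the
       loop-carried dependence on sum. */
    double sum_par(const double *a) {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];
        return sum;
    }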


User Direction

 Users help the compiler to parallelize by
– Providing additional information to guide the parallelization process
– Inserting compiler directives (pragmas) into the source code

 The user is responsible for ensuring that the code is correct after parallelization

 Example (Convex Exemplar C)

#pragma_CNX loop_parallel
for (i = 0; i < 1000; i++) {
    A[i] = foo(B[i], C[i]);
}
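For comparison (not from the slides), the same directive-driven style is standardized today in OpenMP; exactly as in the Convex example, the user asserts that the loop iterations are independent:

#pragma omp parallel for
for (i = 0; i < 1000; i++) {
    A[i] = foo(B[i], C[i]);
}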


Run-Time Parallelization

 The compiler and the run-time system recognize and exploit parallelism at both compile time and run time

 Example: the Jade language (Stanford Univ.)
– More parallelism can be recognized
– Irregular and dynamic parallelism is exploited automatically


Conclusion - Implicit Parallelism

 Advantages of the implicit programming model
– Ease of use for users (programmers)
– Reusability of old code and legacy sequential programs

 Disadvantages
– Automatic parallelization still requires a lot of research and study
– Research outcomes show that automatic parallelization is not very efficient (from 4% to 38% of parallel code)


Explicit Programming Models

 Data-Parallel

 Message-Passing

 Shared-Variable


Data-Parallel Model

 Applies to either SIMD or SPMD modes

 The same instruction or program segment executes over different data sets simultaneously

 Massive parallelism is exploited at data set level

 Has a single thread of control

 Has a global naming space

 Uses loosely synchronous operations


Data-Parallel: An Example

Example: a data-parallel program to compute the constant “pi” (only its first line survives; a sketch follows below):

main() { double local[N], tmp[N], pi, w; ...
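A hedged reconstruction in plain C, patterned on the surviving first line and on the message-passing version of the same computation later in the deck; in a true data-parallel dialect the first loop would be a single forall over all elements and the sum a built-in reduction:

    #include <stdio.h>
    #define N 1000000

    int main(void) {
        static double local[N], tmp[N];   /* per-element data, as in the fragment */
        double pi = 0.0, w = 1.0 / N;
        /* In a data-parallel language this loop is a forall: the same
           statements execute over all data elements simultaneously. */
        for (long i = 0; i < N; i++) {
            local[i] = (i + 0.5) * w;                      /* P: midpoint of strip i */
            tmp[i] = 4.0 / (1.0 + local[i] * local[i]);    /* Q                      */
        }
        /* Global reduction; a built-in sum(tmp) in data-parallel dialects. */
        for (long i = 0; i < N; i++)
            pi += tmp[i];
        printf("pi is %f\n", pi * w);
        return 0;
    }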


Message-Passing Model

 Multithreading: the program consists of multiple processes
– Each process has its own thread of control
– Both control parallelism (MPMD) and data parallelism (SPMD) are supported

 Asynchronous parallelism
– All processes execute asynchronously
– Special operations must be used to synchronize processes

 Multiple address spaces


Message-Passing Model (cont’d)

 Explicit interactions
– The programmer must resolve all interaction issues: data mapping, communication, synchronization, and aggregation

 Explicit allocation
– Both workload and data are explicitly allocated to the processes by the user


Message-Passing Model: An Example

Example: a message-passing program to compute the constant “pi”. The MPI_* calls are the message-passing operations; each task accumulates its partial sum in local (statements B, P, Q), and the reduction at C combines the partial sums into pi on task 0:

#include <stdio.h>
#include <mpi.h>
#define N 1000000

int main(int argc, char *argv[]) {
    double local, pi, w, x;
    long i;
    int taskid, numtask;
A:  w = 1.0 / N; local = 0.0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &taskid);
    MPI_Comm_size(MPI_COMM_WORLD, &numtask);
B:  for (i = taskid; i < N; i = i + numtask) {
P:      x = (i + 0.5) * w;                      /* midpoint of strip i     */
Q:      local = local + 4.0 / (1.0 + x * x);    /* this task's partial sum */
    }
C:  MPI_Reduce(&local, &pi, 1, MPI_DOUBLE,
               MPI_SUM, 0, MPI_COMM_WORLD);     /* combine partial sums    */
D:  if (taskid == 0) printf("pi is %f\n", pi * w);
    MPI_Finalize();
    return 0;
}
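A typical way to build and run such a program with an MPI implementation (command names vary between installations; the file name pi_mpi.c is hypothetical):

mpicc pi_mpi.c -o pi_mpi
mpirun -np 4 ./pi_mpi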


Shared-Variable Model

 Has a single address space
 Is multithreaded and asynchronous
 Data reside in a single shared address space and thus do not have to be explicitly allocated
 Workload can be implicitly or explicitly allocated
 Communication is done implicitly
– Through reading and writing shared variables
 Synchronization is explicit


Shared-Variable: An Example

Example: a shared-variable program to compute the constant “pi” (a sketch follows below); the update of the shared variable is:

pi = pi + local;
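Only the line above survives from this slide; a minimal sketch of how such a program typically looks, written with OpenMP here as an assumption (the deck does not name a threading system). Communication happens implicitly through the shared variable pi, while the update of pi needs explicit synchronization:

    #include <stdio.h>
    #include <omp.h>
    #define N 1000000

    int main(void) {
        double pi = 0.0, w = 1.0 / N;    /* pi is shared by all threads */
        #pragma omp parallel
        {
            double local = 0.0;          /* each thread's private partial sum */
            #pragma omp for
            for (long i = 0; i < N; i++) {
                double x = (i + 0.5) * w;
                local += 4.0 / (1.0 + x * x);
            }
            #pragma omp critical         /* explicit synchronization...      */
            pi = pi + local;             /* ...implicit communication via pi */
        }
        printf("pi is %f\n", pi * w);
        return 0;
    }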


Comparison of Four Models


Comparison of Four Models (cont’d)

 Implicit parallelism

– Easy to use

– Can reuse existing sequential programs

– Programs are portable among different architectures


Comparison of Four Models (cont’d)

 Message-passing model
– Is harder to use for applications that need to manage a global data structure
– Also runs on machines with a native shared-variable model (multiprocessors: DSMs, PVPs, SMPs)

 Shared-variable model
– … programs


Other Programming Models
