
Speedup

Thoai Nam

SinhVienZone.Com


• Speedup & Efficiency

• Amdahl’s Law

• Gustafson’s Law

• Sun & Ni’s Law


Speedup & Efficiency

• Speedup:

S = Time(the most efficient sequential algorithm) / Time(parallel algorithm)

• Efficiency:

E = S / N, where N is the number of processors
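As a quick sketch (the function names and timing values here are hypothetical, not from the slides), the two definitions can be computed directly:

```python
def speedup(t_seq: float, t_par: float) -> float:
    """S = Time(most efficient sequential algorithm) / Time(parallel algorithm)."""
    return t_seq / t_par

def efficiency(s: float, n: int) -> float:
    """E = S / N, where N is the number of processors."""
    return s / n

# Hypothetical run: 100 s sequentially, 25 s on 8 processors.
s = speedup(100.0, 25.0)   # S = 4.0
e = efficiency(s, 8)       # E = 0.5, i.e. each processor is used at 50% on average
print(s, e)
```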


Amdahl’s Law – Fixed Problem Size (1)

• The main objective is to produce the results as soon as possible

– (ex) video compression, computer graphics, VLSI routing, etc.

• Implications

– Upper bound on speedup is 1/α, where α is the sequential fraction

– Make the sequential bottleneck as small as possible

– Optimize the common case

• Modified Amdahl’s law for fixed problem size including the overhead


Amdahl’s Law – Fixed Problem Size (2)

[Figure: a fixed-size workload. On one processor, T(1) consists of a sequential part Ts and a parallel part Tp; on N processors (P0–P9), the parallel part is divided among the processors, giving T(N). The x-axis is the number of processors.]

Ts = αT(1), Tp = (1-α)T(1)

T(N) = αT(1) + (1-α)T(1)/N


Amdahl’s Law – Fixed Problem Size (3)

Speedup = Time(1) / Time(N)
        = T(1) / [αT(1) + (1-α)T(1)/N]
        = 1 / [α + (1-α)/N]
        ≤ 1/α
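The closed form above can be checked numerically; this is a minimal sketch assuming α is the sequential fraction (the value α = 0.05 is hypothetical):

```python
def amdahl_speedup(alpha: float, n: int) -> float:
    """Amdahl's law, fixed problem size: S = 1 / (alpha + (1 - alpha)/n)."""
    return 1.0 / (alpha + (1.0 - alpha) / n)

# With a 5% sequential fraction the speedup saturates near 1/alpha = 20,
# no matter how many processors are added:
for n in (1, 10, 100, 10_000):
    print(n, round(amdahl_speedup(0.05, n), 2))
```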


Enhanced Amdahl’s Law

Speedup = T(1) / [αT(1) + (1-α)T(1)/N + T_overhead]
        = 1 / [α + (1-α)/N + T_overhead/T(1)]

The overhead includes parallelism and interaction overheads.
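A sketch of the modified law; T(1) = 100 s, α = 0.05 and a 2 s overhead are hypothetical values chosen for illustration:

```python
def enhanced_amdahl_speedup(alpha: float, n: int, t1: float, t_overhead: float) -> float:
    """Amdahl's law with overhead: S = 1 / (alpha + (1 - alpha)/n + t_overhead/t1)."""
    return 1.0 / (alpha + (1.0 - alpha) / n + t_overhead / t1)

# The overhead term lowers the asymptotic bound from 1/alpha = 20
# to 1/(alpha + t_overhead/t1) = 1/0.07 ≈ 14.3:
print(enhanced_amdahl_speedup(0.05, 100, 100.0, 2.0))
```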


Gustafson’s Law – Fixed Time (1)

• User wants more accurate results within a time limit

– Execution time is fixed as the system scales

– (ex) FEM (finite element method) for structural analysis, FDM (finite difference method) for fluid dynamics

• Properties of a work metric

– Easy to measure

– Architecture-independent

– Easy to model with an analytical expression

– No additional experiment needed to measure the work

– The measure of work should scale linearly with the sequential time complexity of the algorithm

• The time-constrained model seems to be the most generally viable!


Gustafson’s Law – Fixed Time (2)

[Figure: fixed-time scaling. The sequential run and the parallel run on processors P0–P9 take the same fixed time; as N grows, the workload W(N) grows, with Ws denoting the sequential work.]

α = Ws / W(N)
W(N) = αW(N) + (1-α)W(N)
W(1) = αW(N) + (1-α)W(N)·N


Gustafson’s Law – Fixed Time without overhead

Time = Work · k (k constant), W(N) = W

Speedup = T(1) / T(N)
        = k·W(1) / (k·W(N))
        = [αW + (1-α)N·W] / W
        = α + (1-α)N
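The scaled-speedup formula above can be sketched as follows (α = 0.05 is a hypothetical sequential fraction):

```python
def gustafson_speedup(alpha: float, n: int) -> float:
    """Gustafson's law, fixed time: S = alpha + (1 - alpha) * n."""
    return alpha + (1.0 - alpha) * n

# Unlike Amdahl's fixed-size bound, the scaled speedup grows linearly with n:
for n in (10, 100, 1000):
    print(n, gustafson_speedup(0.05, n))  # 0.05 + 0.95*n
```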


Gustafson’s Law – Fixed Time with overhead

W(N) = W + W0

Speedup = T(1) / T(N)
        = k·W(1) / (k·W(N))
        = [αW + (1-α)N·W] / (W + W0)
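With the overhead W0 added to the parallel workload, the formula can be sketched as follows (the workload values W = 100 and W0 = 5 are hypothetical):

```python
def gustafson_speedup_with_overhead(alpha: float, n: int, w: float, w0: float) -> float:
    """Fixed-time speedup with overhead: S = (alpha*W + (1-alpha)*n*W) / (W + W0)."""
    return (alpha * w + (1.0 - alpha) * n * w) / (w + w0)

# Hypothetical: W = 100 units of work, W0 = 5 units of overhead, alpha = 0.05.
# Without overhead the speedup would be 9.55; the overhead reduces it:
print(gustafson_speedup_with_overhead(0.05, 10, 100.0, 5.0))  # 955/105 ≈ 9.10
```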


Sun and Ni’s Law – Fixed Memory (1)

• Scale to the largest possible solution limited by the memory space, or fix the memory usage per processor

• Speedup:

– Time(1)/Time(N) for the scaled-up problem is not appropriate

– For a simple profile, G(N) is the increase of the parallel workload as the memory capacity increases N times


Sun and Ni’s Law – Fixed Memory (2)

Speedup_MC = [α + (1-α)G(N)] / [α + (1-α)G(N)/N]

• W = αW + (1-α)W

• Let M be the memory capacity of a single node. With N nodes:

– the increased memory is N·M

– the scaled work is W* = αW + (1-α)G(N)W


Sun and Ni’s Law – Fixed Memory (3)

• Definition:

A function g is a homomorphism if there exists a function ḡ such that for any real number c and variable x, g(cx) = ḡ(c)·g(x).

• Theorem:

If W = g(M) for some homomorphism function g, then with N available processors the simplified memory-bounded speedup is

S*_N = [αW + (1-α)ḡ(N)W] / [αW + (1-α)ḡ(N)W/N]
     = [α + (1-α)ḡ(N)] / [α + (1-α)ḡ(N)/N]


Sun and Ni’s Law – Fixed Memory (4)

Proof:

Let the memory requirement of Wn be M: Wn = g(M). M is the memory requirement when 1 node is available. With N nodes available, the memory capacity increases to N·M. Using all of the available memory, the scaled parallel portion is

W*n = g(N·M) = ḡ(N)·g(M) = ḡ(N)·Wn

so

S*_N = [αWn + (1-α)ḡ(N)Wn] / [αWn + (1-α)ḡ(N)Wn/N]
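The homomorphism condition used in the proof can be illustrated numerically; g(x) = x**b with ḡ(c) = c**b is one concrete choice (the exponent b = 1.5 is a hypothetical value, not from the slides):

```python
def g(x: float, b: float = 1.5) -> float:
    """g(x) = x**b is a homomorphism in the slides' sense,
    since g(c*x) = (c*x)**b = c**b * x**b = g_bar(c) * g(x)."""
    return x ** b

def g_bar(c: float, b: float = 1.5) -> float:
    """The companion function g_bar(c) = c**b."""
    return c ** b

# Check g(c*x) == g_bar(c) * g(x), e.g. scaling the memory by c = 4:
c, x = 4.0, 9.0
print(g(c * x), g_bar(c) * g(x))  # both ≈ 216 (i.e. 36**1.5)
```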


– When the problem size is independent of the system, the problem size is fixed: G(N) = 1 → Amdahl’s Law

– When memory is increased N times, the workload also increases N times: G(N) = N → Gustafson’s Law

– For most scientific and engineering applications, the computation requirement increases faster than the memory requirement: G(N) > N

Speedup:

S*_N = [αW + (1-α)G(N)W] / [αW + (1-α)G(N)W/N]
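The two special cases can be verified numerically; this sketch assumes α = 0.05 and N = 10 (hypothetical values):

```python
def memory_bounded_speedup(alpha: float, n: int, g_n: float) -> float:
    """Sun and Ni's memory-bounded speedup:
    S = (alpha + (1 - alpha)*G(N)) / (alpha + (1 - alpha)*G(N)/N)."""
    return (alpha + (1.0 - alpha) * g_n) / (alpha + (1.0 - alpha) * g_n / n)

alpha, n = 0.05, 10

# G(N) = 1: fixed problem size, reduces to Amdahl's law, 1/(alpha + (1-alpha)/n):
print(memory_bounded_speedup(alpha, n, 1.0))

# G(N) = N: work scales with memory, reduces to Gustafson's law, alpha + (1-alpha)*n:
print(memory_bounded_speedup(alpha, n, float(n)))
```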


Examples

[Figure: speedup versus number of processors (0 to 10), comparing ideal linear speedup S(Linear) with typical measured speedup S(Normal).]


Scalability

• Parallelizing a code does not always result in a speedup; sometimes it actually slows the code down! This can be due to a poor choice of algorithm or to poor coding.

• The best possible speedup is linear, i.e. proportional to the number of processors: T(N) = T(1)/N, where N = number of processors and T(1) = time for the serial run.

• A code whose speedup grows linearly as the number of processors increases is said to be scalable. Many codes scale up to some number of processors, but adding more processors then brings no improvement. Very few, if any, codes are indefinitely scalable.


Factors That Limit Speedup

• Software overhead

Even with a completely equivalent algorithm, software overhead arises in the concurrent implementation (e.g. there may be additional index calculations necessitated by the manner in which data are "split up" among processors), i.e. there are generally more lines of code to execute in the parallel program than in the sequential program.

• Load balancing

• Communication overhead


Posted: 30/01/2020, 22:30