
Advanced Computer Architecture - Lecture 39: Input/Output systems




DOCUMENT INFORMATION

Basic information

Title: Input Output Systems
Instructor: Prof. Dr. M. Ashraf Chughtai
School: mac/vu
Subject: Advanced Computer Architecture
Type: Lecture
Pages: 48
Size: 1.26 MB


Contents

Advanced Computer Architecture - Lecture 39: Input/Output systems. This lecture will cover the following: bus structures connecting I/O devices; I/O interconnect trends; I/O performance measurement; bus-based interconnect; bus standards; CPU–memory buses; bus transaction protocols; ...

Page 1

CS 704

Advanced Computer Architecture

Lecture 39

Input Output Systems

(Bus Structures Connecting I/O Devices)

Prof. Dr. M. Ashraf Chughtai

Page 3

Recap: I/O System

Last time we noticed that the overall performance of a computer is measured by its throughput, which is strongly influenced by the systems external to the processor.

The effect of neglecting the I/Os on the overall performance of a computer system is best visualized by Amdahl's Law, which tells us that system speed-up is limited by the slowest part!

We noticed that an I/O system comprises storage I/Os and communication I/Os.

Page 4

Recap: I/O Systems

The storage I/Os consist of secondary and tertiary storage devices; and

the communication I/O consists of the I/O bus system, which interconnects the microprocessor and memory with the I/O devices.

The developments in processing affected the storage industry and motivated the development of:

– smaller, cheaper, more reliable and lower-power embedded storage for ubiquitous computing; and …

Page 5

Recap: I/O Systems

– high-capacity, hierarchically managed storage as data utilities.

We noticed that diversity, capacity, latency and bandwidth are the most important parameters of I/O performance measurement.

The I/O system works on the principle of the producer-server model, which comprises an area called the queue, wherein tasks accumulate while waiting to be serviced.

The metrics of disk I/O performance are:

Page 6

Recap: I/O Systems

Response time, which is the time in the queue plus the device service time; and

Throughput, which is the percent of the total …

Page 7

I/O Performance Measurement: Example

Flash memory takes:

– 65 ns to read 1 byte
– 1.5 µs to write 1 byte, and
– 5 ms to erase 4 KB

Disk storage has:

– Average seek time = 4.0 ms
– Average rotational delay = 8.3 ms
– Transfer rate = 4.2 MB/sec
– Controller overhead = 0.1 ms

Page 8

I/O Performance Measurement: Example

The average read or write time for the disk is the same, and is calculated as:

= Average seek time + Average rotational delay + Transfer time + Controller overhead
= 4.0 ms + 8.3 ms + 64 KB / 4.2 MB/sec + 0.1 ms
= 27.3 ms

The read time for flash is the ratio of the flash size to the read bandwidth:

= 64 KB / (1 byte / 65 ns) = 4.3 ms

Flash is about 6 times faster than the disk for reading 64 KB.

Page 9

I/O Performance Measurement: Example

The write time for flash is the sum of the erase time and the ratio of the flash size to the write bandwidth:

= (64 KB / 4 KB) × 5 ms + 64 KB / (1 byte / 1.5 µs)
= 80 ms + 98.3 ms = 178.3 ms

The disk is about 6 times faster than the flash for writing 64 KB.
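The arithmetic above can be checked directly. Below is a minimal Python sketch that recomputes the disk and flash access times for a 64 KB transfer from the figures in the example; it assumes binary units (1 KB = 1024 bytes), which reproduces the 27.3 ms disk result.

```python
# Recompute the 64 KB disk vs. flash example using the slide figures.
KB = 1024                        # binary kilobyte assumed
block = 64 * KB                  # transfer size in bytes

# Disk: seek + rotational delay + transfer + controller overhead (ms)
seek_ms, rotation_ms, overhead_ms = 4.0, 8.3, 0.1
transfer_rate = 4.2 * 1024 * KB              # 4.2 MB/sec in bytes/sec
disk_ms = seek_ms + rotation_ms + (block / transfer_rate) * 1000 + overhead_ms

# Flash: 65 ns per byte to read; 5 ms per 4 KB erase plus 1.5 us per byte to write
flash_read_ms = block * 65e-9 * 1000
flash_write_ms = (block // (4 * KB)) * 5.0 + block * 1.5e-6 * 1000

print(f"disk read/write: {disk_ms:6.1f} ms")         # ~27.3 ms
print(f"flash read:      {flash_read_ms:6.1f} ms")   # ~4.3 ms, about 6x faster
print(f"flash write:     {flash_write_ms:6.1f} ms")  # ~178.3 ms, about 6x slower
```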

Page 10

Interconnect Trends

The I/O interconnect is the glue that interfaces the computer system components.

I/O interconnects are built from high-speed hardware interfaces and logical protocols.

Based on the desired communication distance, bandwidth, latency and reliability, interconnects are classified as:

Backplanes, channels, and networks

Page 11

Interconnect Trends Cont'd

(Table comparing backplanes, channels and networks by distance, bandwidth, latency and reliability; e.g., networks are message-based, use narrow pathways, and are distributed.)

Page 12

Bus-Based Interconnect

The advantages of using buses are:

– Low cost: a single set of wires is shared in multiple ways.

– Versatility: it is easy to add new devices, and peripherals may even be ported between computers that use a common bus.

Page 13

Bus-Based Interconnect

Disadvantage

The major disadvantage of a bus is that it creates a communication bottleneck, possibly limiting the maximum I/O throughput.

In server systems, where I/O is frequent, designing a bus system capable of meeting the demands of the processor is a real challenge.

Page 14

Bus-Based Interconnect

Bus speed is limited by physical factors, such as:

– the bus length, and
– the bus loading, i.e., the number of devices connected to the bus.

These physical limits prevent arbitrary bus speedup, which makes bus design difficult.

Page 15

Bus-Based Interconnect

Buses are classified into two generic types:

I/O buses: lengthy, able to connect many types of devices, offer a wide range of data bandwidths, and follow a bus standard (an I/O bus is sometimes called a channel).

CPU–memory buses: high speed, matched to the memory system to maximize memory–CPU bandwidth, and connect a single device (sometimes called a backplane).

Page 16

Bus Transactions

Bus transactions are usually defined with reference to the memory, i.e., by what they do with memory: a memory read or a memory write.

A bus transaction includes two parts: sending the address and receiving the data.

Read transaction:

The address is first sent down the bus to the memory, together with asserting the read signal; and …

Page 17

Bus Transactions

The memory responds by sending the data and de-asserting the wait signal.

Write transaction:

The address and data are sent down the bus to the memory, together with asserting the write signal.

The memory stores the data and de-asserts the wait signal.
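As an illustration only (not any particular bus standard), the sketch below models the two transaction types just described on a toy memory-mapped bus: a read sends the address with the read signal and waits for the memory to return data and de-assert wait, while a write sends address and data together. The SimpleBus class and its word-addressed memory are hypothetical.

```python
# Toy model of the read and write bus transactions described above
# (illustrative only; not a real bus protocol).
class SimpleBus:
    def __init__(self, size=1024):
        self.memory = [0] * size     # the slave: a word-addressed memory
        self.wait = False            # wait signal driven by the memory

    def read(self, address):
        # Master sends the address and asserts the read signal.
        self.wait = True             # memory is busy during the access
        data = self.memory[address]
        self.wait = False            # memory returns data, de-asserts wait
        return data

    def write(self, address, data):
        # Master sends address and data together with the write signal.
        self.wait = True
        self.memory[address] = data  # memory stores the data ...
        self.wait = False            # ... and de-asserts wait

bus = SimpleBus()
bus.write(0x10, 0xCAFE)
assert bus.read(0x10) == 0xCAFE
```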

Page 18

Bus Transaction Protocols

Bus transaction or bus communication protocols specify the sequence of events and the timing requirements in transferring information.

Synchronous bus transfers follow a sequence of operations relative to a common clock.

Asynchronous bus transfers are not clocked; they use control lines (req, ack) that provide handshaking among the devices taking part in the bus transaction.

Page 19

Synchronous Bus Protocols

Page 20

Synchronous Bus Protocols

Page 22

Asynchronous Handshake

t0: The master has obtained control and asserts address, direction and data; it waits a specified amount of time for the slaves to decode the target.

t1: The master asserts the request line.

t2: The slave asserts ack, indicating the data has been received.

t3: The master releases req.

t4: The slave releases ack.

Page 24

Read Transaction

t0: The master has obtained control and asserts address and direction; it waits a specified amount of time for the slaves to decode the target.

t1: The master asserts the request line.

t2: The slave asserts ack, indicating it is ready to transmit the data.

t3: The master releases req once the data is received.

t4: The slave releases ack.
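The t0–t4 handshake can be written out as a toy event trace. The sketch below is illustrative only (booleans stand in for the req/ack wires and a list of strings stands in for time) and follows the read sequence above.

```python
# Toy trace of the asynchronous read handshake (t0..t4 from the slide).
def async_read(slave_memory, address):
    trace = []
    req = ack = False

    trace.append("t0: master asserts address and read direction")  # slaves decode

    req = True                                   # t1: master asserts req
    trace.append("t1: master asserts req")

    data = slave_memory[address]                 # slave fetches the word
    ack = True                                   # t2: slave asserts ack
    trace.append("t2: slave asserts ack (ready to transmit data)")

    req = False                                  # t3: master has the data
    trace.append("t3: master releases req (data received)")

    ack = False                                  # t4: slave releases ack
    trace.append("t4: slave releases ack")
    return data, trace

data, trace = async_read({0x40: 1234}, 0x40)
print(data)                # 1234
print("\n".join(trace))
```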

Page 25

Bus Arbitration Protocols

Having understood bus transactions, the most important question is: how is the bus reserved by a device that wishes to communicate when multiple devices need bus access?

This is accomplished by introducing one or more bus masters into the system.

A bus master has the ability to control bus requests and initiate a bus transaction.

A bus slave is a module activated by the master for the transaction.

Page 26

Bus Arbitration Protocols

In a simple system, the processor is a bus master, as it initiates bus requests; the memory is usually a slave.

Alternatively, a system may have multiple bus masters, each of which may initiate a bus transfer to the same slave.

This will create chaos; it is similar to a number of students (masters) in a classroom starting to ask questions of the instructor (slave) at the same time. How will the instructor handle this situation?

Page 27

Bus Arbitration Protocols

The instructor must have a protocol to decide who is the next (master) to talk.

Similarly, the protocol to manage bus transactions by more than one master is referred to as a bus arbitration protocol.

A bus arbitration protocol provides the mechanism for arbitrating (deciding) access to the bus so that it is used in a cooperative manner.

Here, a device or processor (master) wanting to use the bus signals a bus request and is later granted the bus.

Page 28

Bus Arbitration Protocols

Once the bus is granted, the master uses the bus and, when the transaction is finished, signals the arbiter that the bus is no longer required.

The arbiter may then grant the bus to another master.

A multiple-master bus has a set of three control lines for performing the request, grant and release operations.

(Diagram: masters and slaves on the bus, connected to the arbiter by the Grant, Request and Release lines.)
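A minimal sketch of the request/grant/release exchange follows; the Arbiter class is hypothetical and grants the bus to waiting masters in request order, only to illustrate the three control lines named above.

```python
# Sketch of the request/grant/release exchange (hypothetical arbiter).
from collections import deque

class Arbiter:
    def __init__(self):
        self.waiting = deque()       # masters that have asserted bus-request
        self.owner = None            # master currently granted the bus

    def request(self, master):
        self.waiting.append(master)  # assert the request line
        self._grant_next()

    def release(self, master):
        assert master == self.owner  # only the owner may release the bus
        self.owner = None
        self._grant_next()

    def _grant_next(self):
        if self.owner is None and self.waiting:
            self.owner = self.waiting.popleft()   # assert the grant line

arb = Arbiter()
arb.request("CPU"); arb.request("DMA")
print(arb.owner)     # CPU holds the bus
arb.release("CPU")
print(arb.owner)     # DMA is granted next
```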

Page 29

Bus Arbitration Schemes

Bus arbitration schemes usually try to balance two factors:

Bus priority: every device has a certain priority; the device with the highest priority should be serviced first.

Fairness: every device that wants to use the bus is guaranteed to get the bus eventually.

The bus arbitration schemes can be classified as:

Daisy Chain Arbitration

Centralized Parallel Arbitration

Distributed Arbitration

Page 30

Bus Arbitration

Parallel (Centralized) Arbitration

Serial Arbitration (daisy chaining)

(Diagrams: arbitration unit (A.U.) with Bus Request (BR) and Bus Grant (BG) lines.)

Page 31

Bus Arbitration Schemes

Daisy Chain Arbitration

The bus-grant line is run through the devices from highest to lowest priority.

(Fig. 8.13, p. 670, Computer Organization and Design)

If a device has requested bus access, it uses the grant line to determine that access has been given to it.

Page 32

Daisy Chain Arbitration

Sequence of daisy chain arbitration:

1. Signal the request line.

2. Wait for a transition on the grant line from low to high (it indicates that the bus is being reassigned).

3. Intercept the grant signal and do not allow the lower-priority devices to see it (stop asserting the request line).

4. Use the bus.

5. Signal that the bus is no longer required by asserting the release line.
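The five-step sequence can be condensed into a small function. The sketch assumes the devices are listed from highest to lowest priority along the grant line and simply shows how the first requester on the chain intercepts the grant.

```python
# Daisy-chain arbitration sketch: the grant propagates from highest to
# lowest priority and is intercepted by the first requesting device.
def daisy_chain_grant(requests):
    """requests: booleans ordered highest -> lowest priority.
    Returns the index of the device that wins the bus, or None."""
    for position, requesting in enumerate(requests):
        if requesting:
            return position          # intercept the grant and use the bus
        # a non-requesting device passes the grant to the next device
    return None

# Devices 2 and 4 both request; device 2, closer to the arbiter, wins.
print(daisy_chain_grant([False, False, True, False, True]))   # -> 2
```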

Page 33

Bus Arbitration Schemes

Centralized Parallel Arbitration

This scheme uses multiple request lines.

The devices independently request the bus.

A centralized arbiter chooses from among the devices requesting bus access and notifies the selected device that it is now the bus master.
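A sketch of a centralized parallel arbiter with one request line per device is shown below; round-robin selection is used only as one possible policy, chosen because it also provides the fairness mentioned earlier.

```python
# Centralized parallel arbitration sketch: each device has its own
# request line; the arbiter picks the next bus master (round-robin here).
class ParallelArbiter:
    def __init__(self, n_devices):
        self.n = n_devices
        self.last = self.n - 1          # last device granted the bus

    def choose(self, request_lines):
        """request_lines: list of n booleans; returns the granted device."""
        for offset in range(1, self.n + 1):
            candidate = (self.last + offset) % self.n
            if request_lines[candidate]:
                self.last = candidate   # rotate priority for fairness
                return candidate
        return None                     # no device is requesting

arb = ParallelArbiter(4)
print(arb.choose([True, False, True, False]))   # -> 0
print(arb.choose([True, False, True, False]))   # -> 2 (device 0 not repeated)
```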

Page 34

Bus Arbitration Schemes

Distributed arbitration schemes are classified as:

Distributed arbitration by self-selection

Distributed arbitration by collision detection

Page 35

Bus Arbitration Schemes

Distributed arbitration by self-selection

This scheme also uses multiple request lines.

The devices requesting bus access determine who will be granted the access.

Here, each device wanting access places a code indicating its identity on the bus.

By examining this code, the devices can determine the highest-priority device that has made a request.
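A sketch of self-selection, assuming (as in SCSI) that a higher device number means higher priority: every requester drives the bus line matching its own identity, and each device inspects the bus to decide locally whether it has won.

```python
# Distributed arbitration by self-selection (sketch). Each requesting
# device asserts the line matching its ID; a device wins only if no
# higher-priority ID is also asserted. Higher ID = higher priority here.
def self_select(requesting_ids):
    bus_lines = set(requesting_ids)           # wired-OR of asserted IDs
    winners = [dev for dev in requesting_ids
               if all(other <= dev for other in bus_lines)]
    return winners[0] if winners else None

print(self_select([2, 5, 3]))   # device 5 selects itself; 2 and 3 back off
```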

Page 36

Bus Arbitration Schemes

Distributed arbitration by collision detection

In this scheme each device independently requests the bus.

Multiple simultaneous requests result in a collision.

A device is selected from among the collided devices based on priority.

Page 37

Bus Options: Design Decisions

Page 38

Bus Design Decisions

The decisions regarding the design of a bus system depend on:

1. Bus bandwidth

2. Data width

3. Transfer size

Based on the bus bandwidth, separate address and data buses are used for high performance, while multiplexed address and data lines are used for low-cost designs.

Page 39

Bus Design Decisions

Based on the data width, a wider (64-bit) data bus is recommended for high-performance systems, while a narrow (8-bit) bus offers a cheap solution.

Based on the transfer size, multiple words are transferred in high-performance computing, as this involves less overhead, while single-word transfers are used for low-cost designs because they are simpler.

Split transaction, bus masters, and clocking are other important parameters in bus design decisions.
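To see why transfer size matters, the sketch below compares single-word and multi-word (block) transfers on a hypothetical bus; all cycle counts are assumptions chosen for illustration, not figures from the lecture.

```python
# Illustrative comparison of single-word vs. block transfers. The cycle
# counts are assumed values, used only to show how larger transfers
# amortize the per-transaction overhead.
ADDRESS_CYCLES = 1      # send address and command
MEMORY_CYCLES  = 4      # memory access latency per transaction
DATA_CYCLES    = 2      # cycles to move one word over the bus

def cycles_to_move(words, words_per_transfer):
    transfers = -(-words // words_per_transfer)          # ceiling division
    per_transfer = (ADDRESS_CYCLES + MEMORY_CYCLES
                    + DATA_CYCLES * words_per_transfer)
    return transfers * per_transfer

block = 64   # words to move
print("single-word transfers :", cycles_to_move(block, 1))   # 448 cycles
print("4-word block transfers:", cycles_to_move(block, 4))   # 208 cycles
```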

Page 40

Bus Design Decisions

Based on the bus masters, multiple masters are used in high-performance computing, while a single master, which involves no arbitration, is used for low-cost systems.

Split transaction is used for high-performance designs, where separate request and reply packets give higher bandwidth; it involves multiple masters.

The synchronous multiple-master protocols are described hereafter.

Page 41

Synchronous Bus Protocols – Multiple Masters

(Timing diagram: address and data lines.)

When a bus has multiple masters, the multiple processors or I/O devices can initiate bus transactions.

Page 42

Synchronous Bus Protocols – Multiple Masters

Here, the bus can offer higher bandwidth by using packets, as opposed to holding the bus for the full transaction.

This technique is called a split-transaction or pipelined bus.

Here, the bus events are divided into a number of requests and replies, so the bus can be used in the time between a request and its reply.

The split transaction makes the bus available to other masters while the memory reads the word.
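The benefit of a split-transaction bus can be sketched with a simple occupancy model. The cycle counts and the assumption of perfect overlap between one master's memory latency and the other masters' packets are illustrative only.

```python
# Why a split-transaction bus helps: the bus is released during the
# memory access, so other masters can interleave their packets.
REQUEST_CYCLES = 1      # send the address / request packet
MEMORY_CYCLES  = 4      # memory latency (bus not needed)
REPLY_CYCLES   = 2      # return the data / reply packet

def bus_cycles(n_reads, split):
    if split:
        # Bus is occupied only by request and reply packets; memory
        # latency overlaps with other masters' traffic (ideal case).
        busy = n_reads * (REQUEST_CYCLES + REPLY_CYCLES)
        return max(busy, REQUEST_CYCLES + MEMORY_CYCLES + REPLY_CYCLES)
    # Non-split: the bus is held for the whole transaction.
    return n_reads * (REQUEST_CYCLES + MEMORY_CYCLES + REPLY_CYCLES)

for n in (1, 8):
    print(n, "reads:", bus_cycles(n, split=False), "bus cycles held vs.",
          bus_cycles(n, split=True), "with split transactions")
```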

Page 43

Bus Standards

SCSI: Small Computer System Interface

Clock rate: 5 MHz / 10 MHz (fast) / 20 MHz (ultra); Width: n = 8 bits / 16 bits (wide)

Up to n – 1 devices can communicate on a bus or "string"

Devices can be slaves ("targets") or masters ("initiators")

SCSI protocol: a series of "phases", during which specific actions are taken by the controller and the SCSI disks

Page 44

SCSI: Small Computer System Interface

Bus Free: no device is currently accessing the bus

Arbitration: when the SCSI bus goes free, multiple devices may request (arbitrate for) the bus; fixed priority by address

Selection: informs the target that it will participate (Reselection if disconnected)

Command: the initiator reads the SCSI command bytes from host memory and sends them to the target

Page 45

SCSI: Small Computer System Interface

Data Transfer: data in or out, between initiator and target

Message Phase: message in or out, between initiator and target (identify, save/restore data pointer, disconnect, command complete)

Status Phase: target reports status, just before command complete
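The phase ordering can be captured as a simple walk through the list above. The sketch below traces one successful command and deliberately omits disconnection, reselection and error handling.

```python
# Walk one SCSI transaction through the phases listed above (simplified).
PHASES = ["Bus Free", "Arbitration", "Selection",
          "Command", "Data Transfer", "Status", "Message"]

def scsi_transaction(initiator_id, target_id, command):
    log = []
    for phase in PHASES:
        if phase == "Arbitration":
            detail = f"initiator {initiator_id} wins the bus (fixed priority by address)"
        elif phase == "Selection":
            detail = f"target {target_id} is informed that it will participate"
        elif phase == "Command":
            detail = f"initiator sends the command bytes: {command}"
        elif phase == "Data Transfer":
            detail = "data moves between initiator and target"
        elif phase == "Status":
            detail = "target reports status just before completion"
        elif phase == "Message":
            detail = "target sends COMMAND COMPLETE"
        else:
            detail = "no device is using the bus"
        log.append(f"{phase}: {detail}")
    return log

print("\n".join(scsi_transaction(7, 2, "READ(10)")))
```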

Page 46

1993 I/O Bus Survey (P&H, 2nd Ed)

Page 47

1993 MP Server Memory Bus Survey

Page 48

Thanks

and Allah Hafiz
