



In Praise of Computer Architecture: A Quantitative Approach

Fourth Edition

“The multiprocessor is here and it can no longer be avoided. As we bid farewell to single-core processors and move into the chip multiprocessing age, it is great timing for a new edition of Hennessy and Patterson’s classic. Few books have had as significant an impact on the way their discipline is taught, and the current edition will ensure its place at the top for some time to come.”

—Luiz André Barroso, Google Inc.

“What do the following have in common: Beatles’ tunes, HP calculators, chocolate chip cookies, and Computer Architecture? They are all classics that have stood the test of time.”

—Robert P. Colwell, Intel lead architect

“Not only does the book provide an authoritative reference on the concepts that all computer architects should be familiar with, but it is also a good starting point for investigations into emerging areas in the field.”

—Krisztián Flautner, ARM Ltd.

“The best keeps getting better! This new edition is updated and very relevant to the key issues in computer architecture today. Plus, its new exercise paradigm is much more useful for both students and instructors.”

—Norman P. Jouppi, HP Labs

“Computer Architecture builds on fundamentals that yielded the RISC revolution, including the enablers for CISC translation. Now, in this new edition, it clearly explains and gives insight into the latest microarchitecture techniques needed for the new generation of multithreaded multicore processors.”

—Marc Tremblay, Fellow & VP, Chief Architect, Sun Microsystems

“This is a great textbook on all key accounts: pedagogically superb in exposing the ideas and techniques that define the art of computer organization and design, stimulating to read, and comprehensive in its coverage of topics. The first edition set a standard of excellence and relevance; this latest edition does it again.”

—Miloš Ercegovac, UCLA

“They’ve done it again. Hennessy and Patterson emphatically demonstrate why they are the doyens of this deep and shifting field. Fallacy: Computer architecture isn’t an essential subject in the information age. Pitfall: You don’t need the 4th edition of Computer Architecture.”

—Michael D. Smith, Harvard University


“Hennessy and Patterson have done it again! The 4th edition is a classic encore that has been adapted beautifully to meet the rapidly changing constraints of ‘late-CMOS-era’ technology. The detailed case studies of real processor products are especially educational, and the text reads so smoothly that it is difficult to put down. This book is a must-read for students and professionals alike!”

—Pradip Bose, IBM

“This latest edition of Computer Architecture is sure to provide students with the architectural framework and foundation they need to become influential architects of the future.”

—Ravishankar Iyer, Intel Corp.

“As technology has advanced, and design opportunities and constraints have changed, so has this book. The 4th edition continues the tradition of presenting the latest in innovations with commercial impact, alongside the foundational concepts: advanced processor and memory system design techniques, multithreading and chip multiprocessors, storage systems, virtual machines, and other concepts. This book is an excellent resource for anybody interested in learning the architectural concepts underlying real commercial products.”

—Gurindar Sohi, University of Wisconsin–Madison

“I am very happy to have my students study computer architecture using this fantastic book and am a little jealous for not having written it myself.”

—Mateo Valero, UPC, Barcelona

“Hennessy and Patterson continue to evolve their teaching methods with the changing landscape of computer system design. Students gain unique insight into the factors influencing the shape of computer architecture design and the potential research directions in the computer systems field.”

—Dan Connors, University of Colorado at Boulder

“With this revision, Computer Architecture will remain a must-read for all computer architecture students in the coming decade.”

—Wen-mei Hwu, University of Illinois at Urbana–Champaign

“The 4th edition of Computer Architecture continues in the tradition of providing a relevant and cutting-edge approach that appeals to students, researchers, and designers of computer systems. The lessons that this new edition teaches will continue to be as relevant as ever for its readers.”

—David Brooks, Harvard University

“With the 4th edition, Hennessy and Patterson have shaped Computer Architecture back to the lean focus that made the 1st edition an instant classic.”

—Mark D. Hill, University of Wisconsin–Madison


Computer Architecture

A Quantitative Approach

Fourth Edition


John L. Hennessy is the president of Stanford University, where he has been a member of the faculty since 1977 in the departments of electrical engineering and computer science. Hennessy is a Fellow of the IEEE and ACM, a member of the National Academy of Engineering and the National Academy of Sciences, and a Fellow of the American Academy of Arts and Sciences. Among his many awards are the 2001 Eckert-Mauchly Award for his contributions to RISC technology, the 2001 Seymour Cray Computer Engineering Award, and the 2000 John von Neumann Award, which he shared with David Patterson. He has also received seven honorary doctorates.

In 1981, he started the MIPS project at Stanford with a handful of graduate students. After completing the project in 1984, he took a one-year leave from the university to cofound MIPS Computer Systems, which developed one of the first commercial RISC microprocessors. After being acquired by Silicon Graphics in 1991, MIPS Technologies became an independent company in 1998, focusing on microprocessors for the embedded marketplace. As of 2006, over 500 million MIPS microprocessors have been shipped in devices ranging from video games and palmtop computers to laser printers and network switches.

David A. Patterson has been teaching computer architecture at the University of California, Berkeley, since joining the faculty in 1977, where he holds the Pardee Chair of Computer Science. His teaching has been honored by the Abacus Award from Upsilon Pi Epsilon, the Distinguished Teaching Award from the University of California, the Karlstrom Award from ACM, and the Mulligan Education Medal and Undergraduate Teaching Award from IEEE. Patterson received the IEEE Technical Achievement Award for contributions to RISC and shared the IEEE Johnson Information Storage Award for contributions to RAID. He then shared the IEEE John von Neumann Medal and the C & C Prize with John Hennessy. Like his co-author, Patterson is a Fellow of the American Academy of Arts and Sciences, ACM, and IEEE, and he was elected to the National Academy of Engineering, the National Academy of Sciences, and the Silicon Valley Engineering Hall of Fame. He served on the Information Technology Advisory Committee to the U.S. President, as chair of the CS division in the Berkeley EECS department, as chair of the Computing Research Association, and as President of ACM. This record led to a Distinguished Service Award from CRA.

At Berkeley, Patterson led the design and implementation of RISC I, likely the first VLSI reduced instruction set computer. This research became the foundation of the SPARC architecture, currently used by Sun Microsystems, Fujitsu, and others. He was a leader of the Redundant Arrays of Inexpensive Disks (RAID) project, which led to dependable storage systems from many companies. He was also involved in the Network of Workstations (NOW) project, which led to cluster technology used by Internet companies. These projects earned three dissertation awards from the ACM. His current research projects are the RAD Lab, which is inventing technology for reliable, adaptive, distributed Internet services, and the Research Accelerator for Multiple Processors (RAMP) project, which is developing and distributing low-cost, highly scalable, parallel computers based on FPGAs and open-source hardware and software.


Publisher Denise E. M. Penrose

Project Manager Dusty Friedman, The Book Company

In-house Senior Project Manager Brandy Lilly

Developmental Editor Nate McFadden

Editorial Assistant Kimberlee Honjo

Cover Design Elisabeth Beller and Ross Carron Design

Cover Image Richard I’Anson’s Collection: Lonely Planet Images

Composition Nancy Logan

Text Design Rebecca Evans & Associates

Technical Illustration David Ruppe, Impact Publications

Copyeditor Ken Della Penta

Proofreader Jamie Thaman

Indexer Nancy Ball

Printer Maple-Vail Book Manufacturing Group

Morgan Kaufmann Publishers is an Imprint of Elsevier

500 Sansome Street, Suite 400, San Francisco, CA 94111

This book is printed on acid-free paper.

© 1990, 1996, 2003, 2007 by Elsevier, Inc.

All rights reserved.

Published 1990. Fourth edition 2007.

Designations used by companies to distinguish their products are often claimed as trademarks or registered trademarks. In all instances in which Morgan Kaufmann Publishers is aware of a claim, the product names appear in initial capital or all capital letters. Readers, however, should contact the appropriate companies for more complete information regarding trademarks and registration. Permissions may be sought directly from Elsevier’s Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: permissions@elsevier.com. You may also complete your request on-line via the Elsevier Science homepage (http://elsevier.com), by selecting “Customer Support” and then “Obtaining Permissions.”

Library of Congress Cataloging-in-Publication Data

Hennessy, John L.

Computer architecture : a quantitative approach / John L. Hennessy, David A. Patterson ; with contributions by Andrea C. Arpaci-Dusseau [et al.].—4th ed.

p. cm.

Includes bibliographical references and index.

ISBN 13: 978-0-12-370490-0 (pbk. : alk. paper)

ISBN 10: 0-12-370490-1 (pbk. : alk. paper) 1. Computer architecture. I. Patterson, David A. II. Arpaci-Dusseau, Andrea C. III. Title.

QA76.9.A73P377 2006

004.2'2—dc22

2006024358

For all information on all Morgan Kaufmann publications, visit our website at www.mkp.com or www.books.elsevier.com.

Printed in the United States of America

06 07 08 09 10 5 4 3 2 1


To Andrea, Linda, and our four sons


Foreword

by Fred Weber, President and CEO of MetaRAM, Inc.

This book is dense in facts and figures, in rules of thumb and theories, in examples and descriptions. It is stuffed with acronyms, technologies, trends, formulas, illustrations, and tables. And, this is thoroughly appropriate for a work on architecture. The architect’s role is not that of a scientist or inventor who will deeply study a particular phenomenon and create new basic materials or techniques. Nor is the architect the craftsman who masters the handling of tools to craft the finest details. The architect’s role is to combine a thorough understanding of the state of the art of what is possible, a thorough understanding of the historical and current styles of what is desirable, a sense of design to conceive a harmonious total system, and the confidence and energy to marshal this knowledge and available resources to go out and get something built. To accomplish this, the architect needs a tremendous density of information with an in-depth understanding of the fundamentals and a quantitative approach to ground his thinking. That is exactly what this book delivers.

As computer architecture has evolved—from a world of mainframes, minicomputers, and microprocessors, to a world dominated by microprocessors, and now into a world where microprocessors themselves are encompassing all the complexity of mainframe computers—Hennessy and Patterson have updated their book appropriately. The first edition showcased the IBM 360, DEC VAX, and Intel 80x86, each the pinnacle of its class of computer, and helped introduce the world to RISC architecture. The later editions focused on the details of the 80x86 and RISC processors, which had come to dominate the landscape. This latest edition expands the coverage of threading and multiprocessing, virtualization and memory hierarchy, and storage systems, giving the reader context appropriate to today’s most important directions and setting the stage for the next decade of design. It highlights the AMD Opteron and Sun Niagara as the best examples of the x86 and SPARC (RISC) architectures brought into the new world of multiprocessing and system-on-a-chip architecture, thus grounding the art and science in real-world commercial examples.

The first chapter, in less than 60 pages, introduces the reader to the taxonomies of computer design and the basic concerns of computer architecture, gives an overview of the technology trends that drive the industry, and lays out a quantitative approach to using all this information in the art of computer design. The next two chapters focus on traditional CPU design and give a strong grounding in the possibilities and limits in this core area. The final three chapters build out an understanding of system issues with multiprocessing, memory hierarchy, and storage. Knowledge of these areas has always been of critical importance to the computer architect. In this era of system-on-a-chip designs, it is essential for every CPU architect. Finally, the appendices provide a great depth of understanding by working through specific examples in great detail.

In design it is important to look at both the forest and the trees and to move easily between these views. As you work through this book you will find plenty of both. The result of great architecture, whether in computer design, building design or textbook design, is to take the customer’s requirements and desires and return a design that causes that customer to say, “Wow, I didn’t know that was possible.” This book succeeds on that measure and will, I hope, give you as much pleasure and value as it has me.


Contents

1.8 Measuring, Reporting, and Summarizing Performance 28

1.10 Putting It All Together: Performance and Price-Performance 44

2.1 Instruction-Level Parallelism: Concepts and Challenges 66

2.4 Overcoming Data Hazards with Dynamic Scheduling 89
2.5 Dynamic Scheduling: Examples and the Algorithm 97

2.7 Exploiting ILP Using Multiple Issue and Static Scheduling 114



Case Studies with Exercises by Robert P. Colwell 142

3.4 Crosscutting Issues: Hardware versus Software Speculation 170
3.5 Multithreading: Using ILP Support to Exploit Thread-Level Parallelism

Case Study with Exercises by Wen-mei W. Hwu and John W. Sias

4.3 Performance of Symmetric Shared-Memory Multiprocessors 218
4.4 Distributed Shared Memory and Directory-Based Coherence 230

4.6 Models of Memory Consistency: An Introduction 243

4.8 Putting It All Together: The Sun T1 Multiprocessor 249

5.2 Eleven Advanced Optimizations of Cache Performance 293


5.4 Protection: Virtual Memory and Virtual Machines 315
5.5 Crosscutting Issues: The Design of Memory Hierarchies 324
5.6 Putting It All Together: AMD Opteron Memory Hierarchy 326

Case Studies with Exercises by Norman P. Jouppi 342

6.3 Definition and Examples of Real Faults and Failures 366
6.4 I/O Performance, Reliability Measures, and Benchmarks 371

6.7 Designing and Evaluating an I/O System—The Internet Archive Cluster

6.8 Putting It All Together: NetApp FAS6000 Filer 397

Case Studies with Exercises by Andrea C. Arpaci-Dusseau and Remzi H. Arpaci-Dusseau

A.2 The Major Hurdle of Pipelining—Pipeline Hazards A-11

A.5 Extending the MIPS Pipeline to Handle Multicycle Operations A-47
A.6 Putting It All Together: The MIPS R4000 Pipeline A-56


B.9 Putting It All Together: The MIPS Architecture B-32

Companion CD Appendices

Updated by Thomas M. Conte

Revised by Timothy M. Pinkston and José Duato

Revised by Krste Asanovic

by David Goldberg

Online Appendix (textbooks.elsevier.com/0123704901)


Preface

Why We Wrote This Book

Through four editions of this book, our goal has been to describe the basic principles underlying what will be tomorrow’s technological developments. Our excitement about the opportunities in computer architecture has not abated, and we echo what we said about the field in the first edition: “It is not a dreary science of paper machines that will never work. No! It’s a discipline of keen intellectual interest, requiring the balance of marketplace forces to cost-performance-power, leading to glorious failures and some notable successes.”

Our primary objective in writing our first book was to change the way people learn and think about computer architecture. We feel this goal is still valid and important. The field is changing daily and must be studied with real examples and measurements on real computers, rather than simply as a collection of definitions and designs that will never need to be realized. We offer an enthusiastic welcome to anyone who came along with us in the past, as well as to those who are joining us now. Either way, we can promise the same quantitative approach to, and analysis of, real systems.

As with earlier versions, we have strived to produce a new edition that will continue to be as relevant for professional engineers and architects as it is for those involved in advanced computer architecture and design courses. As much as its predecessors, this edition aims to demystify computer architecture through an emphasis on cost-performance-power trade-offs and good engineering design. We believe that the field has continued to mature and move toward the rigorous quantitative foundation of long-established scientific and engineering disciplines.

This Edition

The fourth edition of Computer Architecture: A Quantitative Approach may be the most significant since the first edition. Shortly before we started this revision, Intel announced that it was joining IBM and Sun in relying on multiple processors or cores per chip for high-performance designs. As the first figure in the book documents, after 16 years of doubling performance every 18 months, single-processor performance improvement has dropped to modest annual increments. This fork in the computer architecture road means that for the first time in history, no one is building a much faster sequential processor. If you want your program to run significantly faster, say, to justify the addition of new features, you’re going to have to parallelize your program.

Hence, after three editions focused primarily on higher performance by exploiting instruction-level parallelism (ILP), an equal focus of this edition is thread-level parallelism (TLP) and data-level parallelism (DLP). While earlier editions had material on TLP and DLP in big multiprocessor servers, now TLP and DLP are relevant for single-chip multicores. This historic shift led us to change the order of the chapters: the chapter on multiple processors was the sixth chapter in the last edition, but is now the fourth chapter of this edition.

The changing technology has also motivated us to move some of the content from later chapters into the first chapter. Because technologists predict much higher hard and soft error rates as the industry moves to semiconductor processes with feature sizes 65 nm or smaller, we decided to move the basics of dependability from Chapter 7 in the third edition into Chapter 1. As power has become the dominant factor in determining how much you can place on a chip, we also beefed up the coverage of power in Chapter 1. Of course, the content and examples in all chapters were updated, as we discuss below.

In addition to technological sea changes that have shifted the contents of this edition, we have taken a new approach to the exercises in this edition. It is surprisingly difficult and time-consuming to create interesting, accurate, and unambiguous exercises that evenly test the material throughout a chapter. Alas, the Web has reduced the half-life of exercises to a few months. Rather than working out an assignment, a student can search the Web to find answers not long after a book is published. Hence, a tremendous amount of hard work quickly becomes unusable, and instructors are denied the opportunity to test what students have learned.

To help mitigate this problem, in this edition we are trying two new ideas. First, we recruited experts from academia and industry on each topic to write the exercises. This means some of the best people in each field are helping us to create interesting ways to explore the key concepts in each chapter and test the reader’s understanding of that material. Second, each group of exercises is organized around a set of case studies. Our hope is that the quantitative example in each case study will remain interesting over the years, robust and detailed enough to allow instructors the opportunity to easily create their own new exercises, should they choose to do so. Key, however, is that each year we will continue to release new exercise sets for each of the case studies. These new exercises will have critical changes in some parameters so that answers to old exercises will no longer apply.

Another significant change is that we followed the lead of the third edition of Computer Organization and Design (COD) by slimming the text to include the material that almost all readers will want to see and moving the appendices that some will see as optional or as reference material onto a companion CD. There were many reasons for this change:

1. Students complained about the size of the book, which had expanded from 594 pages in the chapters plus 160 pages of appendices in the first edition to 760 chapter pages plus 223 appendix pages in the second edition and then to 883 chapter pages plus 209 pages in the paper appendices and 245 pages in online appendices. At this rate, the fourth edition would have exceeded 1500 pages (both on paper and online)!

2. Similarly, instructors were concerned about having too much material to cover in a single course.

3. As was the case for COD, by including a CD with material moved out of the text, readers could have quick access to all the material, regardless of their ability to access Elsevier’s Web site. Hence, the current edition’s appendices will always be available to the reader even after future editions appear.

4. This flexibility allowed us to move review material on pipelining, instruction sets, and memory hierarchy from the chapters and into Appendices A, B, and C. The advantage to instructors and readers is that they can go over the review material much more quickly and then spend more time on the advanced topics in Chapters 2, 3, and 5. It also allowed us to move the discussion of some topics that are important but are not core course topics into appendices on the CD. Result: the material is available, but the printed book is shorter. In this edition we have 6 chapters, none of which is longer than 80 pages, while in the last edition we had 8 chapters, with the longest chapter weighing in at 127 pages.

5. This package of a slimmer core print text plus a CD is far less expensive to manufacture than the previous editions, allowing our publisher to significantly lower the list price of the book. With this pricing scheme, there is no need for a separate international student edition for European readers.

Yet another major change from the last edition is that we have moved the embedded material introduced in the third edition into its own appendix, Appendix D. We felt that the embedded material didn’t always fit with the quantitative evaluation of the rest of the material, plus it extended the length of many chapters that were already running long. We believe there are also pedagogic advantages in having all the embedded information in a single appendix.

This edition continues the tradition of using real-world examples to demonstrate the ideas, and the “Putting It All Together” sections are brand new; in fact, some were announced after our book was sent to the printer. The “Putting It All Together” sections of this edition include the pipeline organizations and memory hierarchies of the Intel Pentium 4 and AMD Opteron; the Sun T1 (“Niagara”) 8-processor, 32-thread microprocessor; the latest NetApp Filer; the Internet Archive cluster; and the IBM Blue Gene/L massively parallel processor.

Topic Selection and Organization

As before, we have taken a conservative approach to topic selection, for there are many more interesting ideas in the field than can reasonably be covered in a treatment of basic principles. We have steered away from a comprehensive survey of every architecture a reader might encounter. Instead, our presentation focuses on core concepts likely to be found in any new machine. The key criterion remains that of selecting ideas that have been examined and utilized successfully enough to permit their discussion in quantitative terms.

Our intent has always been to focus on material that is not available in equivalent form from other sources, so we continue to emphasize advanced content wherever possible. Indeed, there are several systems here whose descriptions cannot be found in the literature. (Readers interested strictly in a more basic introduction to computer architecture should read Computer Organization and Design: The Hardware/Software Interface, third edition.)

An Overview of the Content

Chapter 1 has been beefed up in this edition. It includes formulas for static power, dynamic power, integrated circuit costs, reliability, and availability. We go into more depth than prior editions on the use of the geometric mean and the geometric standard deviation to capture the variability of the mean. Our hope is that these topics can be used through the rest of the book. In addition to the classic quantitative principles of computer design and performance measurement, the benchmark section has been upgraded to use the new SPEC2006 suite.
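To make those two summary statistics concrete, here is a minimal sketch of how they are computed over a set of benchmark performance ratios. The code and the sample ratios are ours, not the book’s; the definitions used (n-th root of the product, and the exponential of the standard deviation of the logs) are the standard ones.

```python
import math

def geometric_mean(ratios):
    # n-th root of the product, computed in log space for numerical stability
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

def geometric_stddev(ratios):
    # exponential of the standard deviation of the logs;
    # a dimensionless multiplicative spread, always >= 1
    logs = [math.log(r) for r in ratios]
    mean = sum(logs) / len(logs)
    variance = sum((x - mean) ** 2 for x in logs) / len(logs)
    return math.exp(math.sqrt(variance))

# Hypothetical SPEC-style ratios (reference time / measured time)
ratios = [12.0, 20.0, 8.0, 15.0]
print(round(geometric_mean(ratios), 1))    # ~13.0
print(round(geometric_stddev(ratios), 2))  # ~1.4
```

Because the geometric standard deviation is a multiplicative factor, a value of 1.4 says the individual ratios typically fall within a factor of 1.4 of the geometric mean, which is exactly the kind of variability statement the chapter uses it for.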

Our view is that the instruction set architecture is playing less of a role today than in 1990, so we moved this material to Appendix B. It still uses the MIPS64 architecture. For fans of ISAs, Appendix J covers 10 RISC architectures, the 80x86, the DEC VAX, and the IBM 360/370.

Chapters 2 and 3 cover the exploitation of instruction-level parallelism in high-performance processors, including superscalar execution, branch prediction, speculation, dynamic scheduling, and the relevant compiler technology. As mentioned earlier, Appendix A is a review of pipelining in case you need it. Chapter 3 surveys the limits of ILP. New to this edition is a quantitative evaluation of multithreading. Chapter 3 also includes a head-to-head comparison of the AMD Athlon, Intel Pentium 4, Intel Itanium 2, and IBM Power5, each of which has made separate bets on exploiting ILP and TLP. While the last edition contained a great deal on Itanium, we moved much of this material to Appendix G, indicating our view that this architecture has not lived up to the early claims.

Given the switch in the field from exploiting only ILP to an equal focus on thread- and data-level parallelism, we moved multiprocessor systems up to Chapter 4, which focuses on shared-memory architectures. The chapter begins with the performance of such an architecture. It then explores symmetric and distributed-memory architectures, examining both organizational principles and performance. Topics in synchronization and memory consistency models are next. The example is the Sun T1 (“Niagara”), a radical design for a commercial product. It reverted to a single-instruction issue, 6-stage pipeline microarchitecture. It put 8 of these on a single chip, and each supports 4 threads. Hence, software sees 32 threads on this single, low-power chip.

As mentioned earlier, Appendix C contains an introductory review of cache principles, which is available in case you need it. This shift allows Chapter 5 to start with 11 advanced optimizations of caches. The chapter includes a new section on virtual machines, which offers advantages in protection, software management, and hardware management. The example is the AMD Opteron, giving both its cache hierarchy and the virtual memory scheme for its recently expanded 64-bit addresses.

Chapter 6, “Storage Systems,” has an expanded discussion of reliability and availability, a tutorial on RAID with a description of RAID 6 schemes, and rarely found failure statistics of real systems. It continues to provide an introduction to queuing theory and I/O performance benchmarks. Rather than go through a series of steps to build a hypothetical cluster as in the last edition, we evaluate the cost, performance, and reliability of a real cluster: the Internet Archive. The “Putting It All Together” example is the NetApp FAS6000 filer, which is based on the AMD Opteron microprocessor.

This brings us to Appendices A through L. As mentioned earlier, Appendices A and C are tutorials on basic pipelining and caching concepts. Readers relatively new to pipelining should read Appendix A before Chapters 2 and 3, and those new to caching should read Appendix C before Chapter 5.

Appendix B covers principles of ISAs, including MIPS64, and Appendix J describes 64-bit versions of Alpha, MIPS, PowerPC, and SPARC and their multimedia extensions. It also includes some classic architectures (80x86, VAX, and IBM 360/370) and popular embedded instruction sets (ARM, Thumb, SuperH, MIPS16, and Mitsubishi M32R). Appendix G is related, in that it covers architectures and compilers for VLIW ISAs.

Appendix D, updated by Thomas M. Conte, consolidates the embedded material in one place.

Appendix E, on networks, has been extensively revised by Timothy M. Pinkston and José Duato. Appendix F, updated by Krste Asanovic, includes a description of vector processors. We think these two appendices are some of the best material we know of on each topic.

Appendix H describes parallel processing applications and coherence protocols for larger-scale, shared-memory multiprocessing. Appendix I, by David Goldberg, describes computer arithmetic.

Appendix K collects the “Historical Perspective and References” from each chapter of the third edition into a single appendix. It attempts to give proper credit for the ideas in each chapter and a sense of the history surrounding the inventions. We like to think of this as presenting the human drama of computer design. It also supplies references that the student of architecture may want to pursue. If you have time, we recommend reading some of the classic papers in the field that are mentioned in these sections. It is both enjoyable and educational to hear the ideas directly from the creators. “Historical Perspective” was one of the most popular sections of prior editions.

Appendix L (available at textbooks.elsevier.com/0123704901) contains solutions to the case study exercises in the book.

Navigating the Text

There is no single best order in which to approach these chapters and appendices, except that all readers should start with Chapter 1. If you don’t want to read everything, here are some suggested sequences:

ILP: Appendix A, Chapters 2 and 3, and Appendices F and G

Memory Hierarchy: Appendix C and Chapters 5 and 6

Thread- and Data-Level Parallelism: Chapter 4, Appendix H, and Appendix E

ISA: Appendices B and J

Appendix D can be read at any time, but it might work best if read after the ISA and cache sequences. Appendix I can be read whenever arithmetic moves you.

Chapter Structure

The material we have selected has been stretched upon a consistent framework that is followed in each chapter. We start by explaining the ideas of a chapter. These ideas are followed by a “Crosscutting Issues” section, a feature that shows how the ideas covered in one chapter interact with those given in other chapters. This is followed by a “Putting It All Together” section that ties these ideas together by showing how they are used in a real machine.

Next in the sequence is “Fallacies and Pitfalls,” which lets readers learn from the mistakes of others. We show examples of common misunderstandings and architectural traps that are difficult to avoid even when you know they are lying in wait for you. The “Fallacies and Pitfalls” sections are one of the most popular sections of the book. Each chapter ends with a “Concluding Remarks” section.

Case Studies with Exercises

Each chapter ends with case studies and accompanying exercises. Authored by experts in industry and academia, the case studies explore key chapter concepts and verify understanding through increasingly challenging exercises. Instructors should find the case studies sufficiently detailed and robust to allow them to create their own additional exercises.

Brackets for each exercise (<chapter.section>) indicate the text sections of primary relevance to completing the exercise. We hope this helps readers to avoid exercises for which they haven’t read the corresponding section, in addition to providing the source for review. Note that we provide solutions to the case study exercises in Appendix L. Exercises are rated, to give the reader a sense of the amount of time required to complete an exercise:

[10] Less than 5 minutes (to read and understand)

[15] 5–15 minutes for a full answer

[20] 15–20 minutes for a full answer

[25] 1 hour for a full written answer

[30] Short programming project: less than 1 full day of programming

[40] Significant programming project: 2 weeks of elapsed time

[Discussion] Topic for discussion with others

A second set of alternative case study exercises is available for instructors who register at textbooks.elsevier.com/0123704901. This second set will be revised every summer, so that early every fall, instructors can download a new set of exercises and solutions to accompany the case studies in the book.

Supplemental Materials

The accompanying CD contains a variety of resources, including the following:

■ Reference appendices—some guest authored by subject experts—covering a range of advanced topics

■ Historical Perspectives material that explores the development of the key ideas presented in each of the chapters in the text

■ Search engine for both the main text and the CD-only content

Additional resources are available at textbooks.elsevier.com/0123704901. The instructor site (accessible to adopters who register at textbooks.elsevier.com) includes:

■ Alternative case study exercises with solutions (updated yearly)

■ Instructor slides in PowerPoint

■ Figures from the book in JPEG and PPT formats

The companion site (accessible to all readers) includes:

■ Solutions to the case study exercises in the text

■ Links to related material on the Web

■ List of errata

New materials and links to other resources available on the Web will be added on a regular basis.


Helping Improve This Book

Finally, it is possible to make money while reading this book. (Talk about cost-performance!) If you read the Acknowledgments that follow, you will see that we went to great lengths to correct mistakes. Since a book goes through many printings, we have the opportunity to make even more corrections. If you uncover any remaining resilient bugs, please contact the publisher by electronic mail (ca4bugs@mkp.com). The first reader to report an error with a fix that we incorporate in a future printing will be rewarded with a $1.00 bounty. Please check the errata sheet on the home page (textbooks.elsevier.com/0123704901) to see if the bug has already been reported. We process the bugs and send the checks about once a year or so, so please be patient.

We welcome general comments to the text and invite you to send them to a separate email address at ca4comments@mkp.com.

Concluding Remarks

Once again this book is a true co-authorship, with each of us writing half the chapters and an equal share of the appendices. We can’t imagine how long it would have taken without someone else doing half the work, offering inspiration when the task seemed hopeless, providing the key insight to explain a difficult concept, supplying reviews over the weekend of chapters, and commiserating when the weight of our other obligations made it hard to pick up the pen. (These obligations have escalated exponentially with the number of editions, as one of us was President of Stanford and the other was President of the Association for Computing Machinery.) Thus, once again we share equally the blame for what you are about to read.

John Hennessy

David Patterson


Acknowledgments

Although this is only the fourth edition of this book, we have actually created nine different versions of the text: three versions of the first edition (alpha, beta, and final) and two versions of the second, third, and fourth editions (beta and final). Along the way, we have received help from hundreds of reviewers and users. Each of these people has helped make this book better. Thus, we have chosen to list all of the people who have made contributions to some version of this book.

Contributors to the Fourth Edition

Like prior editions, this is a community effort that involves scores of volunteers. Without their help, this edition would not be nearly as polished.

Reviewers

Krste Asanovic, Massachusetts Institute of Technology; Mark Brehob, University of Michigan; Sudhanva Gurumurthi, University of Virginia; Mark D. Hill, University of Wisconsin–Madison; Wen-mei Hwu, University of Illinois at Urbana–Champaign; David Kaeli, Northeastern University; Ramadass Nagarajan, University of Texas at Austin; Karthikeyan Sankaralingam, University of Texas at Austin; Mark Smotherman, Clemson University; Gurindar Sohi, University of Wisconsin–Madison; Shyamkumar Thoziyoor, University of Notre Dame, Indiana; Dan Upton, University of Virginia; Sotirios G. Ziavras, New Jersey Institute of Technology.

Focus Group

Krste Asanovic, Massachusetts Institute of Technology; José Duato, Universitat Politècnica de València and Simula; Antonio González, Intel and Universitat Politècnica de Catalunya; Mark D. Hill, University of Wisconsin–Madison; Lev G. Kirischian, Ryerson University; Timothy M. Pinkston, University of Southern California.


Appendices

Krste Asanovic, Massachusetts Institute of Technology (Appendix F); Thomas M. Conte, North Carolina State University (Appendix D); José Duato, Universitat Politècnica de València and Simula (Appendix E); David Goldberg, Xerox PARC (Appendix I); Timothy M. Pinkston, University of Southern California (Appendix E).

Case Studies with Exercises

Andrea C. Arpaci-Dusseau, University of Wisconsin–Madison (Chapter 6); Remzi H. Arpaci-Dusseau, University of Wisconsin–Madison (Chapter 6); Robert P. Colwell, R&E Colwell & Assoc., Inc. (Chapter 2); Diana Franklin, California Polytechnic State University, San Luis Obispo (Chapter 1); Wen-mei W. Hwu, University of Illinois at Urbana–Champaign (Chapter 3); Norman P. Jouppi, HP Labs (Chapter 5); John W. Sias, University of Illinois at Urbana–Champaign (Chapter 3); David A. Wood, University of Wisconsin–Madison (Chapter 4).

Additional Material

John Mashey (geometric means and standard deviations in Chapter 1); Chenming Hu, University of California, Berkeley (wafer costs and yield parameters in Chapter 1); Bill Brantley and Dan Mudgett, AMD (Opteron memory hierarchy evaluation in Chapter 5); Mendel Rosenblum, Stanford and VMware (virtual machines in Chapter 5); Aravind Menon, EPFL Switzerland (Xen measurements in Chapter 5); Bruce Baumgart and Brewster Kahle, Internet Archive (IA cluster in Chapter 6); David Ford, Steve Kleiman, and Steve Miller, Network Appliances (FX6000 information in Chapter 6); Alexander Thomasian, Rutgers (queueing theory in Chapter 6).

Finally, a special thanks once again to Mark Smotherman of Clemson University, who gave a final technical reading of our manuscript. Mark found numerous bugs and ambiguities, and the book is much cleaner as a result.

This book could not have been published without a publisher, of course. We wish to thank all the Morgan Kaufmann/Elsevier staff for their efforts and support. For this fourth edition, we particularly want to thank Kimberlee Honjo, who coordinated surveys, focus groups, manuscript reviews and appendices, and Nate McFadden, who coordinated the development and review of the case studies. Our warmest thanks to our editor, Denise Penrose, for her leadership in our continuing writing saga.

We must also thank our university staff, Margaret Rowland and Cecilia Pracher, for countless express mailings, as well as for holding down the fort at Stanford and Berkeley while we worked on the book.

Our final thanks go to our wives for their suffering through increasingly early mornings of reading, thinking, and writing.


Contributors to Previous Editions

Reviewers

George Adams, Purdue University; Sarita Adve, University of Illinois at Urbana–Champaign; Jim Archibald, Brigham Young University; Krste Asanovic, Massachusetts Institute of Technology; Jean-Loup Baer, University of Washington; Paul Barr, Northeastern University; Rajendra V. Boppana, University of Texas, San Antonio; Doug Burger, University of Texas, Austin; John Burger, SGI; Michael Butler; Thomas Casavant; Rohit Chandra; Peter Chen, University of Michigan; the classes at SUNY Stony Brook, Carnegie Mellon, Stanford, Clemson, and Wisconsin; Tim Coe, Vitesse Semiconductor; Bob Colwell, Intel; David Cummings; Bill Dally; David Douglas; Anthony Duben, Southeast Missouri State University; Susan Eggers, University of Washington; Joel Emer; Barry Fagin, Dartmouth; Joel Ferguson, University of California, Santa Cruz; Carl Feynman; David Filo; Josh Fisher, Hewlett-Packard Laboratories; Rob Fowler, DIKU; Mark Franklin, Washington University (St. Louis); Kourosh Gharachorloo; Nikolas Gloy, Harvard University; David Goldberg, Xerox Palo Alto Research Center; James Goodman, University of Wisconsin–Madison; David Harris, Harvey Mudd College; John Heinlein; Mark Heinrich, Stanford; Daniel Helman, University of California, Santa Cruz; Mark Hill, University of Wisconsin–Madison; Martin Hopkins, IBM; Jerry Huck, Hewlett-Packard Laboratories; Mary Jane Irwin, Pennsylvania State University; Truman Joe; Norm Jouppi; David Kaeli, Northeastern University; Roger Kieckhafer, University of Nebraska; Earl Killian; Allan Knies, Purdue University; Don Knuth; Jeff Kuskin, Stanford; James R. Larus, Microsoft Research; Corinna Lee, University of Toronto; Hank Levy; Kai Li, Princeton University; Lori Liebrock, University of Alaska, Fairbanks; Mikko Lipasti, University of Wisconsin–Madison; Gyula A. Mago, University of North Carolina, Chapel Hill; Bryan Martin; Norman Matloff; David Meyer; William Michalson, Worcester Polytechnic Institute; James Mooney; Trevor Mudge, University of Michigan; David Nagle, Carnegie Mellon University; Todd Narter; Victor Nelson; Vojin Oklobdzija, University of California, Berkeley; Kunle Olukotun, Stanford University; Bob Owens, Pennsylvania State University; Greg Papadapoulous, Sun; Joseph Pfeiffer; Keshav Pingali, Cornell University; Bruno Preiss, University of Waterloo; Steven Przybylski; Jim Quinlan; Andras Radics; Kishore Ramachandran, Georgia Institute of Technology; Joseph Rameh, University of Texas, Austin; Anthony Reeves, Cornell University; Richard Reid, Michigan State University; Steve Reinhardt, University of Michigan; David Rennels, University of California, Los Angeles; Arnold L. Rosenberg, University of Massachusetts, Amherst; Kaushik Roy, Purdue University; Emilio Salgueiro, Unysis; Peter Schnorf; Margo Seltzer; Behrooz Shirazi, Southern Methodist University; Daniel Siewiorek, Carnegie Mellon University; J. P. Singh, Princeton; Ashok Singhal; Jim Smith, University of Wisconsin–Madison; Mike Smith, Harvard University; Mark Smotherman, Clemson University; Guri Sohi, University of Wisconsin–Madison; Arun Somani, University of Washington; Gene Tagliarin, Clemson University; Evan Tick, University of Oregon; Akhilesh Tyagi, University of North Carolina, Chapel Hill; Mateo Valero, Universidad Politécnica de Cataluña, Barcelona; Anujan Varma, University of California, Santa Cruz; Thorsten von Eicken, Cornell University; Hank Walker, Texas A&M; Roy Want, Xerox Palo Alto Research Center; David Weaver, Sun; Shlomo Weiss, Tel Aviv University; David Wells; Mike Westall, Clemson University; Maurice Wilkes; Eric Williams; Thomas Willis, Purdue University; Malcolm Wing; Larry Wittie, SUNY Stony Brook; Ellen Witte Zegura, Georgia Institute of Technology.

Appendices

The vector appendix was revised by Krste Asanovic of the Massachusetts Institute of Technology. The floating-point appendix was written originally by David Goldberg of Xerox PARC.

Exercises

George Adams, Purdue University; Todd M. Bezenek, University of Wisconsin–Madison (in remembrance of his grandmother Ethel Eshom); Susan Eggers; Anoop Gupta; David Hayes; Mark Hill; Allan Knies; Ethan L. Miller, University of California, Santa Cruz; Parthasarathy Ranganathan, Compaq Western Research Laboratory; Brandon Schwartz, University of Wisconsin–Madison; Michael Scott; Dan Siewiorek; Mike Smith; Mark Smotherman; Evan Tick; Thomas Willis.

Special Thanks

Duane Adams, Defense Advanced Research Projects Agency; Tom Adams; Sarita Adve, University of Illinois at Urbana–Champaign; Anant Agarwal; Dave Albonesi, University of Rochester; Mitch Alsup; Howard Alt; Dave Anderson; Peter Ashenden; David Bailey; Bill Bandy, Defense Advanced Research Projects Agency; L. Barroso, Compaq’s Western Research Lab; Andy Bechtolsheim; C. Gordon Bell; Fred Berkowitz; John Best, IBM; Dileep Bhandarkar; Jeff Bier, BDTI; Mark Birman; David Black; David Boggs; Jim Brady; Forrest Brewer; Aaron Brown, University of California, Berkeley; E. Bugnion, Compaq’s Western Research Lab; Alper Buyuktosunoglu, University of Rochester; Mark Callaghan; Jason F. Cantin; Paul Carrick; Chen-Chung Chang; Lei Chen, University of Rochester; Pete Chen; Nhan Chu; Doug Clark, Princeton University; Bob Cmelik; John Crawford; Zarka Cvetanovic; Mike Dahlin, University of Texas, Austin; Merrick Darley; the staff of the DEC Western Research Laboratory; John DeRosa; Lloyd Dickman; J. Ding; Susan Eggers, University of Washington; Wael El-Essawy, University of Rochester; Patty Enriquez, Mills; Milos Ercegovac; Robert Garner; K. Gharachorloo, Compaq’s Western Research Lab; Garth Gibson; Ronald Greenberg; Ben Hao; John Henning, Compaq; Mark Hill, University of Wisconsin–Madison; Danny Hillis; David Hodges; Urs Hoelzle, Google; David Hough; Ed Hudson; Chris Hughes, University of Illinois at Urbana–Champaign; Mark Johnson; Lewis Jordan; Norm Jouppi; William Kahan; Randy Katz; Ed Kelly; Richard Kessler; Les Kohn; John Kowaleski, Compaq Computer Corp; Dan Lambright; Gary Lauterbach, Sun Microsystems; Corinna Lee; Ruby Lee; Don Lewine; Chao-Huang Lin; Paul Losleben, Defense Advanced Research Projects Agency; Yung-Hsiang Lu; Bob Lucas, Defense Advanced Research Projects Agency; Ken Lutz; Alan Mainwaring, Intel Berkeley Research Labs; Al Marston; Rich Martin, Rutgers; John Mashey; Luke McDowell; Sebastian Mirolo, Trimedia Corporation; Ravi Murthy; Biswadeep Nag; Lisa Noordergraaf, Sun Microsystems; Bob Parker, Defense Advanced Research Projects Agency; Vern Paxson, Center for Internet Research; Lawrence Prince; Steven Przybylski; Mark Pullen, Defense Advanced Research Projects Agency; Chris Rowen; Margaret Rowland; Greg Semeraro, University of Rochester; Bill Shannon; Behrooz Shirazi; Robert Shomler; Jim Slager; Mark Smotherman, Clemson University; the SMT research group at the University of Washington; Steve Squires, Defense Advanced Research Projects Agency; Ajay Sreekanth; Darren Staples; Charles Stapper; Jorge Stolfi; Peter Stoll; the students at Stanford and Berkeley who endured our first attempts at creating this book; Bob Supnik; Steve Swanson; Paul Taysom; Shreekant Thakkar; Alexander Thomasian, New Jersey Institute of Technology; John Toole, Defense Advanced Research Projects Agency; Kees A. Vissers, Trimedia Corporation; Willa Walker; David Weaver; Ric Wheeler, EMC; Maurice Wilkes; Richard Zimmerman.

John Hennessy

David Patterson


Fundamentals of Computer Design

And now for something completely different.

Monty Python’s Flying Circus


1.1 Introduction

Computer technology has made incredible progress in the roughly 60 years since the first general-purpose electronic computer was created. Today, less than $500 will purchase a personal computer that has more performance, more main memory, and more disk storage than a computer bought in 1985 for 1 million dollars. This rapid improvement has come both from advances in the technology used to build computers and from innovation in computer design.

Although technological improvements have been fairly steady, progress arising from better computer architectures has been much less consistent. During the first 25 years of electronic computers, both forces made a major contribution, delivering performance improvement of about 25% per year. The late 1970s saw the emergence of the microprocessor. The ability of the microprocessor to ride the improvements in integrated circuit technology led to a higher rate of improvement—roughly 35% growth per year in performance.

This growth rate, combined with the cost advantages of a mass-produced microprocessor, led to an increasing fraction of the computer business being based on microprocessors. In addition, two significant changes in the computer marketplace made it easier than ever before to be commercially successful with a new architecture. First, the virtual elimination of assembly language programming reduced the need for object-code compatibility. Second, the creation of standardized, vendor-independent operating systems, such as UNIX and its clone, Linux, lowered the cost and risk of bringing out a new architecture. These changes made it possible to develop successfully a new set of architectures with simpler instructions, called RISC (Reduced Instruction Set Computer) architectures, in the early 1980s. The RISC-based machines focused the attention of designers on two critical performance techniques, the exploitation of instruction-level parallelism (initially through pipelining and later through multiple instruction issue) and the use of caches (initially in simple forms and later using more sophisticated organizations and optimizations).

instruction-The RISC-based computers raised the performance bar, forcing prior tectures to keep up or disappear The Digital Equipment Vax could not, and so itwas replaced by a RISC architecture Intel rose to the challenge, primarily bytranslating x86 (or IA-32) instructions into RISC-like instructions internally,allowing it to adopt many of the innovations first pioneered in the RISC designs

archi-As transistor counts soared in the late 1990s, the hardware overhead of ing the more complex x86 architecture became negligible

translat-Figure 1.1 shows that the combination of architectural and organizationalenhancements led to 16 years of sustained growth in performance at an annualrate of over 50%—a rate that is unprecedented in the computer industry

The effect of this dramatic growth rate in the 20th century has been twofold. First, it has significantly enhanced the capability available to computer users. For many applications, the highest-performance microprocessors of today outperform the supercomputer of less than 10 years ago.


Second, this dramatic rate of improvement has led to the dominance of microprocessor-based computers across the entire range of the computer design. PCs and workstations have emerged as major products in the computer industry. Minicomputers, which were traditionally made from off-the-shelf logic or from gate arrays, have been replaced by servers made using microprocessors. Mainframes have been almost replaced with multiprocessors consisting of small numbers of off-the-shelf microprocessors. Even high-end supercomputers are being built with collections of microprocessors.

These innovations led to a renaissance in computer design, which emphasized both architectural innovation and efficient use of technology improvements. This rate of growth has compounded so that by 2002, high-performance microprocessors are about seven times faster than what would have been obtained by relying solely on technology, including improved circuit design.
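To see where that factor of seven comes from, compound the two growth rates over the 16-year window. A back-of-the-envelope sketch (ours, not the book's), using the roughly 35%-per-year technology-only rate and the 52%-per-year combined rate quoted in this chapter:

```python
# Sixteen years of architecture-plus-technology growth (about 52%/year)
# versus what technology scaling alone was delivering (about 35%/year).
years = 16
combined = 1.52 ** years
technology_alone = 1.35 ** years
print(round(combined / technology_alone, 1))  # ~6.7, i.e. "about a factor of seven"
```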

Figure 1.1 Growth in processor performance since the mid-1980s. This chart plots performance relative to the VAX 11/780 as measured by the SPECint benchmarks (see Section 1.8). Prior to the mid-1980s, processor performance growth was largely technology driven and averaged about 25% per year. The increase in growth to about 52% since then is attributable to more advanced architectural and organizational ideas. By 2002, this growth led to a difference in performance of about a factor of seven. Performance for floating-point-oriented calculations has increased even faster. Since 2002, the limits of power, available instruction-level parallelism, and long memory latency have slowed uniprocessor performance recently, to about 20% per year. Since SPEC has changed over the years, performance of newer machines is estimated by a scaling factor that relates the performance for two different versions of SPEC (e.g., SPEC92, SPEC95, and SPEC2000).

[Figure 1.1 itself is not reproduced here. Its labeled data points run from the VAX-11/780 (1), VAX-11/785 (1.5), VAX 8700 (5), and Sun-4/260 (9) through the MIPS M/120 (13), MIPS M2000 (18), IBM RS6000/540 (24), HP PA-RISC at 0.05 GHz (51), PowerPC 604 at 0.1 GHz (117), and the Alpha 21064–21264 line (80–993), with trend lines at 25%/year, 52%/year, and 20%/year.]


However, Figure 1.1 also shows that this 16-year renaissance is over. Since 2002, processor performance improvement has dropped to about 20% per year due to the triple hurdles of maximum power dissipation of air-cooled chips, little instruction-level parallelism left to exploit efficiently, and almost unchanged memory latency. Indeed, in 2004 Intel canceled its high-performance uniprocessor projects and joined IBM and Sun in declaring that the road to higher performance would be via multiple processors per chip rather than via faster uniprocessors. This signals a historic switch from relying solely on instruction-level parallelism (ILP), the primary focus of the first three editions of this book, to thread-level parallelism (TLP) and data-level parallelism (DLP), which are featured in this edition. Whereas the compiler and hardware conspire to exploit ILP implicitly without the programmer’s attention, TLP and DLP are explicitly parallel, requiring the programmer to write parallel code to gain performance.

This text is about the architectural ideas and accompanying compiler improvements that made the incredible growth rate possible in the last century, the reasons for the dramatic change, and the challenges and initial promising approaches to architectural ideas and compilers for the 21st century. At the core is a quantitative approach to computer design and analysis that uses empirical observations of programs, experimentation, and simulation as its tools. It is this style and approach to computer design that is reflected in this text. This book was written not only to explain this design style, but also to stimulate you to contribute to this progress. We believe the approach will work for explicitly parallel computers of the future just as it worked for the implicitly parallel computers of the past.

1.2 Classes of Computers

In the 1960s, the dominant form of computing was on large mainframes—computers costing millions of dollars and stored in computer rooms with multiple operators overseeing their support. Typical applications included business data processing and large-scale scientific computing. The 1970s saw the birth of the minicomputer, a smaller-sized computer initially focused on applications in scientific laboratories, but rapidly branching out with the popularity of time-sharing—multiple users sharing a computer interactively through independent terminals. That decade also saw the emergence of supercomputers, which were high-performance computers for scientific computing. Although few in number, they were important historically because they pioneered innovations that later trickled down to less expensive computer classes. The 1980s saw the rise of the desktop computer based on microprocessors, in the form of both personal computers and workstations. The individually owned desktop computer replaced time-sharing and led to the rise of servers—computers that provided larger-scale services such as reliable, long-term file storage and access, larger memory, and more computing power. The 1990s saw the emergence of the Internet and the World Wide Web, the first successful handheld computing devices (personal digital assistants or PDAs), and the emergence of high-performance digital consumer electronics, from video games to set-top boxes. The extraordinary popularity of cell phones has been obvious since 2000, with rapid improvements in functions and sales that far exceed those of the PC. These more recent applications use embedded computers, where computers are lodged in other devices and their presence is not immediately obvious.

These changes have set the stage for a dramatic change in how we view computing, computing applications, and the computer markets in this new century. Not since the creation of the personal computer more than 20 years ago have we seen such dramatic changes in the way computers appear and in how they are used. These changes in computer use have led to three different computing markets, each characterized by different applications, requirements, and computing technologies. Figure 1.2 summarizes these mainstream classes of computing environments and their important characteristics.

Desktop Computing

The first, and still the largest market in dollar terms, is desktop computing. Desktop computing spans from low-end systems that sell for under $500 to high-end, heavily configured workstations that may sell for $5000. Throughout this range in price and capability, the desktop market tends to be driven to optimize price-performance. This combination of performance (measured primarily in terms of compute performance and graphics performance) and price of a system is what matters most to customers in this market, and hence to computer designers. As a result, the newest, highest-performance microprocessors and cost-reduced microprocessors often appear first in desktop systems (see Section 1.6 for a discussion of the issues affecting the cost of computers).

Desktop computing also tends to be reasonably well characterized in terms of applications and benchmarking, though the increasing use of Web-centric, interactive applications poses new challenges in performance evaluation.

Feature                        | Desktop                                  | Server                                 | Embedded
Price of system                | $500–$5000                               | $5000–$5,000,000                       | $10–$100,000 (including network routers at the high end)
Price of microprocessor module | $50–$500 (per processor)                 | $200–$10,000 (per processor)           | $0.01–$100 (per processor)
Critical system design issues  | Price-performance, graphics performance  | Throughput, availability, scalability  | Price, power consumption, application-specific performance

Figure 1.2 A summary of the three mainstream computing classes and their system characteristics. Note the wide range in system price for servers and embedded systems. For servers, this range arises from the need for very large-scale multiprocessor systems for high-end transaction processing and Web server applications. The total number of embedded processors sold in 2005 is estimated to exceed 3 billion if you include 8-bit and 16-bit microprocessors. Perhaps 200 million desktop computers and 10 million servers were sold in 2005.


Servers

As the shift to desktop computing occurred, the role of servers grew to provide larger-scale and more reliable file and computing services. The World Wide Web accelerated this trend because of the tremendous growth in the demand and sophistication of Web-based services. Such servers have become the backbone of large-scale enterprise computing, replacing the traditional mainframe.

For servers, different characteristics are important. First, dependability is critical. (We discuss dependability in Section 1.7.) Consider the servers running Google, taking orders for Cisco, or running auctions on eBay. Failure of such server systems is far more catastrophic than failure of a single desktop, since these servers must operate seven days a week, 24 hours a day. Figure 1.3 estimates revenue costs of downtime as of 2000. To bring costs up-to-date, Amazon.com had $2.98 billion in sales in the fall quarter of 2005. As there were about 2200 hours in that quarter, the average revenue per hour was $1.35 million. During a peak hour for Christmas shopping, the potential loss would be many times higher.

Hence, the estimated costs of an unavailable system are high, yet Figure 1.3 and the Amazon numbers are purely lost revenue and do not account for lost employee productivity or the cost of unhappy customers.
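The downtime arithmetic above is easy to reproduce. The short C program below recomputes the Amazon revenue-per-hour figure from the text and then applies it to three availability levels in the spirit of Figure 1.3; the availability values are illustrative assumptions, not data taken from the figure.

    #include <stdio.h>

    int main(void) {
        /* Quarterly figures from the Amazon.com example in the text. */
        double revenue  = 2.98e9;             /* dollars per quarter  */
        double hours    = 2200.0;             /* hours per quarter    */
        double per_hour = revenue / hours;    /* about $1.35 million  */

        /* Illustrative availability levels (assumed). */
        double avail[] = { 0.99, 0.999, 0.9999 };

        printf("Revenue per hour: $%.2f million\n", per_hour / 1e6);
        for (int i = 0; i < 3; i++) {
            double down = hours * (1.0 - avail[i]);   /* downtime hours */
            printf("%7.2f%% available: %5.1f hours down, $%6.2f million lost\n",
                   avail[i] * 100.0, down, down * per_hour / 1e6);
        }
        return 0;
    }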

Figure 1.3 The cost of an unavailable system is shown by analyzing the cost of downtime (in terms of immediately lost revenue), assuming three different levels of availability, and that downtime is distributed uniformly. These data are from Kembel [2000] and were collected and analyzed by Contingency Planning Research.

A second key feature of server systems is scalability. Server systems often grow in response to an increasing demand for the services they support or an increase in functional requirements. Thus, the ability to scale up the computing capacity, the memory, the storage, and the I/O bandwidth of a server is crucial.

Lastly, servers are designed for efficient throughput. That is, the overall performance of the server—in terms of transactions per minute or Web pages served per second—is what is crucial. Responsiveness to an individual request remains important, but overall efficiency and cost-effectiveness, as determined by how many requests can be handled in a unit time, are the key metrics for most servers.
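As a rough illustration of why requests handled per unit time is the operative metric, a standard queueing identity (Little's law) relates throughput to the number of requests in flight and the time each takes: throughput = concurrency / latency. The sketch below uses hypothetical numbers; nothing here describes a specific server.

    #include <stdio.h>

    int main(void) {
        /* Hypothetical server: 400 requests in flight, 0.25 s each. */
        double concurrency = 400.0;           /* requests in service  */
        double latency     = 0.25;            /* seconds per request  */

        /* Little's law: throughput = concurrency / latency. */
        double per_second = concurrency / latency;
        printf("Throughput: %.0f requests/second (%.0f per minute)\n",
               per_second, per_second * 60.0);

        /* Halving latency doubles throughput at the same concurrency. */
        printf("At 0.125 s latency: %.0f requests/second\n",
               concurrency / 0.125);
        return 0;
    }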

We return to the issue of assessing performance for different types of computing environments in Section 1.8.

A related category is supercomputers. They are the most expensive computers, costing tens of millions of dollars, and they emphasize floating-point performance. Clusters of desktop computers, which are discussed in Appendix H, have largely overtaken this class of computer. As clusters grow in popularity, the number of conventional supercomputers is shrinking, as is the number of companies who make them.

Embedded Computers

Embedded computers are the fastest growing portion of the computer market. These devices range from everyday machines—most microwaves, most washing machines, most printers, most networking switches, and all cars contain simple embedded microprocessors—to handheld digital devices, such as cell phones and smart cards, to video games and digital set-top boxes.

Embedded computers have the widest spread of processing power and cost. They include 8-bit and 16-bit processors that may cost less than a dime, 32-bit microprocessors that execute 100 million instructions per second and cost under $5, and high-end processors for the newest video games or network switches that cost $100 and can execute a billion instructions per second. Although the range of computing power in the embedded computing market is very large, price is a key factor in the design of computers for this space. Performance requirements do exist, of course, but the primary goal is often meeting the performance need at a minimum price, rather than achieving higher performance at a higher price.

Often, the performance requirement in an embedded application is real-time execution. A real-time performance requirement is when a segment of the application has an absolute maximum execution time. For example, in a digital set-top box, the time to process each video frame is limited, since the processor must accept and process the next frame shortly. In some applications, a more nuanced requirement exists: the average time for a particular task is constrained as well as the number of instances when some maximum time is exceeded. Such approaches—sometimes called soft real-time—arise when it is possible to occasionally miss the time constraint on an event, as long as not too many are missed. Real-time performance tends to be highly application dependent.
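A soft real-time requirement of this kind is easy to express in code. The C sketch below checks, over a run of frames, that a per-frame budget is rarely exceeded; the 16.7 ms budget (one frame at 60 Hz), the 5% miss tolerance, and the stand-in measurement function are all assumptions for illustration, not values from any particular set-top box.

    #include <stdio.h>
    #include <stdlib.h>

    #define FRAME_BUDGET_MS   16.7   /* one frame at 60 Hz (assumed)       */
    #define MAX_MISS_FRACTION 0.05   /* soft real-time: tolerate 5% misses */

    /* Stand-in for a real measurement of one frame's processing time. */
    static double frame_time_ms(void) {
        return 15.0 + (rand() % 40) / 10.0;   /* 15.0 to 18.9 ms */
    }

    int main(void) {
        int frames = 10000, missed = 0;
        for (int f = 0; f < frames; f++)
            if (frame_time_ms() > FRAME_BUDGET_MS)
                missed++;

        double fraction = (double)missed / frames;
        printf("Missed %d of %d deadlines (%.1f%%): %s\n",
               missed, frames, fraction * 100.0,
               fraction <= MAX_MISS_FRACTION ? "soft real-time met"
                                             : "soft real-time violated");
        return 0;
    }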

Two other key characteristics exist in many embedded applications: the need to minimize memory and the need to minimize power. In many embedded applications, the memory can be a substantial portion of the system cost, and it is important to optimize memory size in such cases. Sometimes the application is expected to fit totally in the memory on the processor chip; other times the application needs to fit totally in a small off-chip memory. In any event, the importance of memory size translates to an emphasis on code size, since data size is dictated by the application.

Larger memories also mean more power, and optimizing power is often critical in embedded applications. Although the emphasis on low power is frequently driven by the use of batteries, the need to use less expensive packaging—plastic versus ceramic—and the absence of a fan for cooling also limit total power consumption. We examine the issue of power in more detail in Section 1.5.

Most of this book applies to the design, use, and performance of embedded processors, whether they are off-the-shelf microprocessors or microprocessor cores, which will be assembled with other special-purpose hardware.

Indeed, the third edition of this book included examples from embedded computing to illustrate the ideas in every chapter. Alas, most readers found these examples unsatisfactory, as the data that drives the quantitative design and evaluation of desktop and server computers has not yet been extended well to embedded computing (see the challenges with EEMBC, for example, in Section 1.8). Hence, we are left for now with qualitative descriptions, which do not fit well with the rest of the book. As a result, in this edition we consolidated the embedded material into a single appendix. We believe this new appendix (Appendix D) improves the flow of ideas in the text while still allowing readers to see how the differing requirements affect embedded computing.

1.3 Defining Computer Architecture

The task the computer designer faces is a complex one: Determine what attributes are important for a new computer, then design a computer to maximize performance while staying within cost, power, and availability constraints. This task has many aspects, including instruction set design, functional organization, logic design, and implementation. The implementation may encompass integrated circuit design, packaging, power, and cooling. Optimizing the design requires familiarity with a very wide range of technologies, from compilers and operating systems to logic design and packaging.

In the past, the term computer architecture often referred only to instruction set design. Other aspects of computer design were called implementation, often insinuating that implementation is uninteresting or less challenging.

We believe this view is incorrect. The architect's or designer's job is much more than instruction set design, and the technical hurdles in the other aspects of the project are likely more challenging than those encountered in instruction set design. We'll quickly review instruction set architecture before describing the larger challenges for the computer architect.

Instruction Set Architecture

We use the term instruction set architecture (ISA) to refer to the actual programmer-visible instruction set in this book. The ISA serves as the boundary between the software and hardware. This quick review of ISA will use examples from MIPS and 80x86 to illustrate the seven dimensions of an ISA. Appendices B and J give more details on MIPS and the 80x86 ISAs.

1. Class of ISA—Nearly all ISAs today are classified as general-purpose register architectures, where the operands are either registers or memory locations. The 80x86 has 16 general-purpose registers and 16 that can hold floating-point data, while MIPS has 32 general-purpose and 32 floating-point registers (see Figure 1.4). The two popular versions of this class are register-memory ISAs such as the 80x86, which can access memory as part of many instructions, and load-store ISAs such as MIPS, which can access memory only with load or store instructions. All recent ISAs are load-store.

2. Memory addressing—Virtually all desktop and server computers, including the 80x86 and MIPS, use byte addressing to access memory operands. Some architectures, like MIPS, require that objects must be aligned. An access to an object of size s bytes at byte address A is aligned if A mod s = 0. (See Figure B.5 on page B-9.) The 80x86 does not require alignment, but accesses are generally faster if operands are aligned. (A short sketch of this alignment test appears after this list.)

3. Addressing modes—In addition to specifying registers and constant operands, addressing modes specify the address of a memory object. MIPS addressing modes are Register, Immediate (for constants), and Displacement, where a constant offset is added to a register to form the memory address. The 80x86 supports those three plus three variations of displacement: no register (absolute), two registers (based indexed with displacement), and two registers where one register is multiplied by the size of the operand in bytes (based with scaled index and displacement). It has more like the last three, minus the displacement field: register indirect, indexed, and based with scaled index.

Figure 1.4 MIPS registers and usage conventions. In addition to the 32 general-purpose registers (R0–R31), MIPS has 32 floating-point registers (F0–F31) that can hold either a 32-bit single-precision number or a 64-bit double-precision number.

4. Types and sizes of operands—Like most ISAs, MIPS and 80x86 support operand sizes of 8-bit (ASCII character), 16-bit (Unicode character or half word), 32-bit (integer or word), 64-bit (double word or long integer), and IEEE 754 floating point in 32-bit (single precision) and 64-bit (double precision). The 80x86 also supports 80-bit floating point (extended double precision).

5. Operations—The general categories of operations are data transfer, arithmetic logical, control (discussed next), and floating point. MIPS is a simple and easy-to-pipeline instruction set architecture, and it is representative of the RISC architectures being used in 2006. Figure 1.5 summarizes the MIPS ISA. The 80x86 has a much richer and larger set of operations (see Appendix J).

6. Control flow instructions—Virtually all ISAs, including 80x86 and MIPS, support conditional branches, unconditional jumps, procedure calls, and returns. Both use PC-relative addressing, where the branch address is specified by an address field that is added to the PC. There are some small differences. MIPS conditional branches (BEQ, BNE, etc.) test the contents of registers, while the 80x86 branches (JE, JNE, etc.) test condition code bits set as side effects of arithmetic/logic operations. MIPS procedure call (JAL) places the return address in a register, while the 80x86 call (CALLF) places the return address on a stack in memory.

7. Encoding an ISA—There are two basic choices on encoding: fixed length and variable length. All MIPS instructions are 32 bits long, which simplifies instruction decoding (a short decoding sketch follows this review). Figure 1.6 shows the MIPS instruction formats. The 80x86 encoding is variable length, ranging from 1 to 18 bytes. Variable-length instructions can take less space than fixed-length instructions, so a program compiled for the 80x86 is usually smaller than the same program compiled for MIPS. Note that choices mentioned above will affect how the instructions are encoded into a binary representation. For example, the number of registers and the number of addressing modes both have a significant impact on the size of instructions, as the register field and addressing mode field can appear many times in a single instruction.
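The alignment rule from dimension 2 above (an access of size s bytes at byte address A is aligned if A mod s = 0) takes only a few lines of C to check. Since operand sizes in these ISAs are powers of two, the modulo reduces to a mask test; the addresses below are arbitrary examples.

    #include <stdio.h>
    #include <stdint.h>

    /* Aligned if A mod s == 0; for power-of-two s this equals a mask test. */
    static int is_aligned(uint64_t addr, uint64_t size) {
        return (addr & (size - 1)) == 0;      /* same as addr % size == 0 */
    }

    int main(void) {
        printf("%d\n", is_aligned(0x1000, 8));   /* 1: 8-byte aligned   */
        printf("%d\n", is_aligned(0x1004, 8));   /* 0: misaligned for 8 */
        printf("%d\n", is_aligned(0x1004, 4));   /* 1: 4-byte aligned   */
        return 0;
    }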

The other challenges facing the computer architect beyond ISA design are particularly acute at the present, when the differences among instruction sets are small and when there are distinct application areas. Therefore, starting with this edition, the bulk of instruction set material beyond this quick review is found in the appendices (see Appendices B and J).

We use a subset of MIPS64 as the example ISA in this book.
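Since the example ISA is MIPS64, a concrete look at what fixed-length encoding buys is easy to give. The sketch below decodes the fields of a 32-bit R-type instruction with shifts and masks; the field positions follow the standard MIPS R-format (6-bit opcode, three 5-bit register specifiers, a 5-bit shift amount, and a 6-bit function code), and the sample word is a hand-assembled DADDU R1,R2,R3 under the usual MIPS64 encoding.

    #include <stdio.h>
    #include <stdint.h>

    /* R-type fields sit at fixed bit positions in every 32-bit instruction. */
    struct rtype { unsigned op, rs, rt, rd, shamt, funct; };

    static struct rtype decode_rtype(uint32_t insn) {
        struct rtype r;
        r.op    = (insn >> 26) & 0x3F;   /* bits 31..26 */
        r.rs    = (insn >> 21) & 0x1F;   /* bits 25..21 */
        r.rt    = (insn >> 16) & 0x1F;   /* bits 20..16 */
        r.rd    = (insn >> 11) & 0x1F;   /* bits 15..11 */
        r.shamt = (insn >>  6) & 0x1F;   /* bits 10..6  */
        r.funct =  insn        & 0x3F;   /* bits  5..0  */
        return r;
    }

    int main(void) {
        /* DADDU R1,R2,R3: opcode 0, rs = 2, rt = 3, rd = 1, funct 0x2D. */
        uint32_t insn = (2u << 21) | (3u << 16) | (1u << 11) | 0x2Du;
        struct rtype r = decode_rtype(insn);
        printf("op=%u rs=R%u rt=R%u rd=R%u shamt=%u funct=0x%X\n",
               r.op, r.rs, r.rt, r.rd, r.shamt, r.funct);
        return 0;
    }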


[Figure 1.5 table; only partial rows survive in this extraction:

Instruction type/opcode | Instruction meaning
… | … registers; only memory address mode is 16-bit displacement + contents of a GPR
DMUL, DMULU, DDIV, DDIVU, MADD | Multiply and divide, signed and unsigned; multiply-add; all operations take and yield 64-bit values
DSLL, DSRL, DSRA, DSLLV, DSRLV, DSRAV | Shifts: both immediate (DS__) and variable form (DS__V); shifts are shift left logical, right logical, right arithmetic
… | … L (64-bit integer), W (32-bit integer), D (DP), or S (SP); both operands are FPRs]

Figure 1.5 Subset of the instructions in MIPS64. SP = single precision; DP = double precision. Appendix B gives much more detail on MIPS64. For data, the most significant bit number is 0; least is 63.
