
The Electrical Engineering Handbook


DOCUMENT INFORMATION

Basic information

Title: Section IX – Computer Engineering
Authors: John V. Oldfield, Vojin G. Oklobdzija
Editor: Richard C. Dorf
Institutions: Syracuse University; University of California
Field: Electrical Engineering
Type: Book Chapter
Year of publication: 2000
City: Boca Raton
Pages: 6
File size: 312.59 KB


Contents

The Electrical Engineering Handbook

Page 1

Oldfield, J.V., Oklobdzija, V.G. "Section IX – Computer Engineering"

The Electrical Engineering Handbook

Ed. Richard C. Dorf

Boca Raton: CRC Press LLC, 2000

Page 2

The ViewSonic® VP140 ViewPanel is about as flat as a board. This new active-matrix LCD flat panel has a 14-in. viewing area. Measuring only 2.5 in. deep and weighing just 12.1 lb, the VP140 weighs only a fraction of standard displays and uses 90% less desktop space. The display unit supports a maximum noninterlaced resolution of 1024 × 768 pixels at a 75-Hz refresh rate. Additionally, the VP140 monitor can be configured for both desktop and slim-line wall displays, and it supports up to 16.7 million colors in both PC and Macintosh® environments. This revolutionary view panel represents the future of monitors. (Photo courtesy of ViewSonic Corporation.)

Page 3

Computer Engineering

86 Organization R.F. Tinder, V.G. Oklobdzija, V.C. Hamacher, Z.G. Vranesic, S.G. Zaky, J. Raymond

Number Systems • Computer Arithmetic • Architecture • Microprogramming

87 Programming J.M. Feldman, E.W. Czeck, T.G. Lewis, J.J. Martin

Assembly Language • High-Level Languages • Data Types and Data Structures

88 Memory Systems D. Berger, J.R. Goodman, G.S. Sohi

Memory Hierarchies • Cache Memories • Parallel and Interleaved Memories • Virtual Memory • Research Issues

89 Input and Output S. Sherr, R.C. Durbeck, W. Suryn, M. Veillette

Input Devices • Computer Output Printer Technologies • Smart Cards

90 Software Engineering C.A. Argila, C. Jones, J.J. Martin

Tools and Techniques • Testing, Debugging, and Verification • Programming Methodology

91 Computer Graphics E.P. Rozanski

Graphics Hardware • Graphics Software

92 Computer Networks T.G. Robertazzi

Local Area Networks • Metropolitan Area Networks • Wide Area Networks • The Future

93 Fault Tolerance B.W. Johnson

Hardware Redundancy • Information Redundancy • Time Redundancy • Software Redundancy • Dependability Evaluation

94 Knowledge Engineering M. Abdelguerfi, R. Eskicioglu, J. Liebowitz

Databases • Rule-Based Expert Systems

95 Parallel Processors T. Feng

Classifications • Types of Parallel Processors • System Utilization

96 Operating Systems J. Boykin

Types of Operating Systems • Distributed Computing Systems • Fault-Tolerant Systems • Parallel Processing • Real-Time Systems • Operating System Structure • Industry Standards

97 Computer Security and Cryptography J.A. Cooper, O. Goldreich

Computer and Communications Security • Fundamentals of Cryptography

98 Computer Reliability C.G. Guy

Definitions of Failure, Fault, and Error • Failure Rate and Reliability • Relationship Between Reliability and Failure Rate • Mean Time to Failure • Mean Time to Repair • Mean Time Between Failures • Availability • Calculation of Computer System Reliability • Markov Modeling • Software Reliability • Reliability Calculations for Real Systems

99 The Internet and Its Role in the Future G.L. Hawke

History • The Internet Today • The Future

Page 4

John V. Oldfield
Syracuse University

Vojin G. Oklobdzija
University of California

COMPUTER ENGINEERING is a discipline that deals with the engineering knowledge required to build digital computers and special systems that communicate and/or process or transmit data. As such, computer engineering is a multidisciplinary field, because it involves many different aspects of engineering that are necessary in designing such complex systems. To illustrate this point, one can think of all the various parts of engineering that are involved in the design of a digital computer system. One can start with the knowledge of materials science that is necessary to process the materials of which integrated circuits are made. One also has to deal with devices and device physics in order to make the most efficient transistors, of which computing systems are built. The knowledge of electrical engineering, and of electronic circuits in particular, is necessary in order to design fast and efficient integrated circuits. One level further in the hierarchy of the required knowledge is logic design, which is the implementation of digital functions. Digital design involves not only an intimate knowledge of electrical engineering but also the use of computer-aided design tools and algorithms for the efficient implementation of computational structures.

Building a complex computer system is similar to building a house: at the very beginning one cannot be bothered with all the details involved in the process, such as plumbing and electrical wiring. Similarly, the process of designing an electronic computer starts with an architecture that specifies the functionality and the major blocks. Much as in building a house, those blocks are later designed by teams of engineers using the architectural specifications of the computer. Computer architecture thus stands at a crossroads between electrical engineering and computer science. On the one hand, one does not need to specify all the details of the implementation while defining an architecture. On the other hand, if one does not know the important aspects of the design which require the knowledge of electrical engineering, the architecture may not be a good one. Given that the implementation of the architecture has to serve as a platform for various applications, knowledge of software, compilers, and high-level languages is also necessary.

Computer engineering is not only a very diverse discipline; it is also a subject of very rapid change, reflecting the high rate of progress in the variety of disciplines it encompasses. The performance of digital computers has been doubling steadily every two years, while the capacity of semiconductor memory has been quadrupling every three years. The price-performance figure has dropped by two orders of magnitude in the last ten years. This trend has radically changed the way the computer is perceived today. From an exclusive and expensive machine, affordable to only a few, it has become a commodity. For example, an average automobile today contains on the order of 20 processors controlling various aspects of the machine's function, brake system, navigation, etc.
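To make these growth rates concrete, here is a small arithmetic sketch in Python; the doubling and quadrupling periods are the figures quoted above, and the ten-year horizon is chosen purely for illustration:

# Compound the growth rates quoted above over a ten-year span.

def growth_factor(factor_per_period, period_years, years):
    # Total multiplicative growth after `years`, growing by
    # `factor_per_period` every `period_years`.
    return factor_per_period ** (years / period_years)

print(growth_factor(2, 2, 10))  # performance: 2^(10/2) = 32x per decade
print(growth_factor(4, 3, 10))  # memory: 4^(10/3), roughly 102x per decade
print(100 ** (1 / 10))          # a 100x price-performance gain in 10 years
                                # is about 58% compound annual improvement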

Some of the technology-specific aspects of computer engineering were covered in Section VIII. This section, however, is concerned with higher-level aspects which are substantially independent of circuit technology. Chapter 86 reviews organizational matters which particularly affect computer processor design, such as the arithmetic and logical functions required. The next chapter considers the major topic of programming, which may be different in each "layer," using the previous analogy. Programming, too, has long been dominated by a particular paradigm, the so-called imperative model, in which the programmer expresses an algorithm, i.e., a process for solving a problem, as a sequence of instructions, either simple or complex, depending on the type of programming required. Recently other paradigms have emerged, such as rule-based programming, which has a declarative model: the user specifies the facts and rules of a situation and poses a question, leaving the computer (the "knowledge engine") to make its own inferences en route to finding a solution or set of solutions.
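The contrast between the two paradigms can be made concrete with a deliberately tiny Python sketch; the facts, the single rule, and the naive forward-chaining loop below are illustrative inventions, not the machinery of any particular expert-system shell:

# Imperative model: the programmer spells out each step of the process.
temps = [18, 22, 31, 27]
hot_days = 0
for t in temps:
    if t > 25:
        hot_days += 1
print(hot_days)  # 2

# Declarative, rule-based model: state facts and one rule, then let a
# generic forward-chaining loop draw inferences until nothing changes.
facts = {("parent", "ann", "bob"), ("parent", "bob", "cai")}
changed = True
while changed:  # rule: parent(X, Y) and parent(Y, Z) => grandparent(X, Z)
    changed = False
    for (_, x, y) in [f for f in facts if f[0] == "parent"]:
        for (_, y2, z) in [f for f in facts if f[0] == "parent"]:
            if y == y2 and ("grandparent", x, z) not in facts:
                facts.add(("grandparent", x, z))
                changed = True
print(("grandparent", "ann", "cai") in facts)  # True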

Computer memory systems are considered in Chapter 88. Early purists preferred the term storage systems, since the organization of a computer memory bears little resemblance to what we know of the organization of the human brain. For economic reasons, computer memories have been organized as a hierarchy of different technologies, with decreasing cost per bit as well as increasing access time as one moves away from the central processor. The introduction of virtual memory in the Manchester Atlas project (c. 1964) was a major breakthrough in removing memory management from the tasks of the programmer, but recently the availability of vast quantities of semiconductor memory at ultralow prices has reduced the need for this technique.
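The economics of the hierarchy can be expressed with the standard average-access-time calculation; the latencies and hit rate in this Python sketch are assumed figures, purely for illustration:

# Average access time for a two-level hierarchy: a small, fast cache
# backed by a larger, slower (and far cheaper per bit) main memory.
cache_ns = 2.0    # assumed cache hit time, nanoseconds
memory_ns = 60.0  # assumed additional time to reach main memory
hit_rate = 0.95   # assumed fraction of accesses satisfied by the cache

amat = hit_rate * cache_ns + (1 - hit_rate) * (cache_ns + memory_ns)
print(amat)  # 5.0 ns: nearly cache speed at nearly main-memory cost per bit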


Page 5

Chapter 89 deals with input and output: information in the outside world must be brought into a computer system, and be output by it in a correspondingly useful form. Information may vary in time, such as a temperature indication; in two dimensions, such as the user's action in moving a mouse; or even in three dimensions. Output may be as simple as closing a contact or as involved as drawing a picture containing a vast range of colors.

Software engineering, as discussed in Chapter 90, refers to the serious problem of managing the complexity of the layers of software. This problem has few parallels in other walks of life and is exacerbated by the rate of change in computing. It is dominated by the overall question "Is this computer system reliable?", which will be returned to in Chapter 98. Some parallels can be drawn with other complex human organizations, and, fortunately, the computer itself can be applied to the task.

Graphical input and output is the topic of Chapter 91. Early promise in the mid-1960s led to the pessimistic observation a decade later that this was "a solution looking for a problem," but as computer display technology improved in quality, speed, and, most importantly, cost, attention was focused on visualization algorithms, e.g., the task of producing a two-dimensional representation of a three-dimensional object. This is coupled with the need to provide a natural interface between the user and the computer, and has led to the development of interactive graphical techniques for drawing, pointing, etc., as well as consideration of the human factors involved.
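As a small illustration of the visualization task just mentioned, producing a two-dimensional representation of a three-dimensional object ultimately reduces to projections like the following minimal perspective sketch; the viewing distance d is an arbitrary assumption:

# Perspective projection of a 3-D point onto a 2-D image plane.
# The eye sits at the origin looking down +z; d is the eye-to-plane distance.
def project(x, y, z, d=1.0):
    # By similar triangles, a point at depth z maps to (d*x/z, d*y/z).
    if z <= 0:
        raise ValueError("point must be in front of the viewer")
    return (d * x / z, d * y / z)

print(project(1.0, 1.0, 2.0))  # (0.5, 0.5)
print(project(1.0, 1.0, 4.0))  # (0.25, 0.25): farther corners shrink
                               # toward the image centre, a depth cue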

As computers have extended their scope, it has become necessary for a computer to communicate with other computers, whether nearby, such as a file server, or across a continent or ocean, as in electronic mail. Chapter 92 reviews the major concepts of both local and wide area computer networks.

Many engineers were skeptical as to whether early computers would operate sufficiently long before a breakdown would prevent the production of useful results. Little recognition has been given to the pioneers of component and circuit reliability who have made digital systems virtually, but still not totally, fault-free. Critical systems, whether in medicine or national defense, must operate even if components and subsystems fail. The next chapter reviews the techniques employed to make computer systems fault-tolerant.
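One classical hardware-redundancy technique covered in Chapter 93 is triple modular redundancy (TMR), in which three identical modules compute the same result and a voter takes the majority. A minimal sketch, assuming a perfect voter and the illustrative module reliabilities shown:

# TMR survives if at least 2 of 3 modules work: R_tmr = 3R^2 - 2R^3.
def tmr_reliability(r):
    # r is the reliability of one module over the mission time.
    return 3 * r**2 - 2 * r**3

print(tmr_reliability(0.95))  # ~0.993, better than a single module
print(tmr_reliability(0.40))  # 0.352, worse: TMR only helps when r > 0.5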

The idea of a rule-based system, referred to earlier, is covered in Chapter 94. Application software naturally reflects the nature of the application, and the term knowledge engineering has been coined to include languages and techniques for particularly demanding tasks which cannot readily be expressed in a conventional scientific or business programming language.

Parallel systems are emerging as the power of computer systems is extended by using multiple units. The term unit may correspond to anything from a rudimentary processor, such as a "smart word" in a massively parallel "fine-grain" architecture, to a full-scale computer in a coarse-grain parallel system with a few tens of parallel units. Chapter 95 discusses the hardware and software approaches to a wide variety of parallel systems.

Operating systems, which are described in the next chapter, turn a "raw" computer into an instrument capable of performing useful low-level tasks, such as creating a file, starting a process corresponding to an algorithm, or transferring its output to a device such as a printer, which may be busy with other tasks.
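As a concrete note on the parallel systems described above, the speed-up ratio S (listed in the nomenclature at the end of this section) is commonly bounded by Amdahl's law; the parallel fraction p in this sketch is an assumed value:

# Amdahl's law: speed-up from n units when a fraction p of the work
# can be parallelized and the remaining (1 - p) stays serial.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 10, 100, 10000):
    print(n, round(speedup(0.9, n), 2))  # 1.82, 5.26, 9.17, 9.99
# Even with 10,000 units, S never exceeds 1 / (1 - p) = 10.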

As society has become more dependent upon the computer and computer technology, it has become increasingly concerned with protecting the privacy of individuals and maintaining the integrity of computer systems against infiltration by individuals, groups, and even, on occasion, governments. Techniques for protecting the security of a system and ensuring individual privacy are discussed in Chapter 97.
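The fundamentals of cryptography surveyed in Chapter 97 rest largely on modular arithmetic. The following toy RSA-style exchange uses deliberately tiny, insecure numbers chosen purely to show the mechanics; real keys are hundreds of digits long:

# Toy RSA: key generation, encryption, and decryption with small primes.
p, q = 61, 53            # two small primes (illustration only)
n = p * q                # public modulus, 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e (Python 3.8+)

m = 65                   # the "message", a number smaller than n
c = pow(m, e, n)         # encrypt: c = m^e mod n
print(c)                 # 2790
print(pow(c, d, n))      # decrypt: m = c^d mod n, back to 65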

Chapter 98 discusses the overall reliability of computer systems, based on the inevitable limitations of both hardware and software mentioned earlier. Given the inevitability of failure, human or component, what can be said about the probability of a whole computer system failing? This may not be an academic issue for a passenger reading this section while flying in a modern jet airliner, which may spend over 95% of a flight under the control of an automatic pilot. He or she may be reassured to know, however, that the practitioners of reliability engineering have reduced the risk of system failure to truly negligible proportions.
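Several of the quantities defined in the nomenclature below (the failure rate λ, availability, and the mean-time measures) combine in a few standard formulas; the failure and repair figures in this sketch are assumed purely for illustration:

import math

# Constant failure rate lambda: R(t) = exp(-lambda * t), MTTF = 1 / lambda.
lam = 1e-4                  # assumed failure rate, failures per hour
mttf = 1.0 / lam            # mean time to failure: 10,000 hours
print(math.exp(-lam * 10))  # reliability over a 10-hour flight, ~0.9990

# With repair: MTBF = MTTF + MTTR, steady-state availability = MTTF / MTBF.
mttr = 4.0                  # assumed mean time to repair, hours
print(mttf / (mttf + mttr)) # availability, ~0.9996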

Page 6

Nomenclature

A_m   main amplifier gain
A_p   preamplifier gain
A_v   availability
E_L   illuminance
f     proportionality factor
h     Planck's constant, 6.625 × 10⁻³⁴ J·s
λ     failure rate
μ_f   flip-flop sensitivity
μ_p   photodetector sensitivity
μ_s   Schmitt trigger sensitivity
η     hardware utilization
P     parallelism
P_c   character pitch
R_1   shaft radius
S     speed-up ratio
τ_L   optical loss
z(t)  hazard rate
