
Arithmetic Built-In Self-Test for Embedded Systems



Poznan University of Technology, Poland

To join a Prentice Hall PTR Internet mailing list, point to:

http://www.prenhall.com/mail_lists/

Prentice Hall PTR

Upper Saddle River, NJ 07458

ISBN 0137564384, October 1997


Contents

Preface vii

1 Built-in Self-Test 1

1.1 Introduction 1

1.2 Design for Testability 4

1.2.1 Controllability and Observability 4

1.2.2 Ad Hoc Techniques 6

1.2.3 Scan Designs 8

1.2.4 Boundary-Scan Architecture 12

1.2.5 Test Point Insertion 14

1.3 Generation of Test Vectors 17

1.4 Compaction of Test Responses 32

1.4.1 Objectives and Requirements 32

1.4.2 Compaction Schemes 33

1.4.3 Error Models and Aliasing 35

1.5 BIST Schemes for Random Logic 38

1.5.1 Design Rules for BIST 38

1.5.2 Serial BIST Architectures 42


1.6 BIST for Memory Arrays 53

1.6.1 Schemes Based on Deterministic Tests 55

1.6.2 Pseudo-Random Testing 57

1.6.3 Transparent BIST 57

2 Generation of Test Vectors 61

2.1 Additive Generators of Exhaustive Patterns 61

2.1.1 Basic Notions 62

2.1.2 Optimal Generators for Single Size Subspaces 65

2.1.3 Operand Interleaving 70

2.1.4 The Best Generators for Subspaces Within a Range of Sizes 72

2.2 Other Generation Schemes 76

2.2.1 Emulation of LFSRs and CAs 76

3.2.1 Steady State Analysis 90

3.3.3 The Compaction Quality 108

3.4 Cascaded Compaction Scheme 112

4 Fault Diagnosis 117

4.1 Analytical Model 117

4.2 Experimental Validation 121

4.3 The Quality of Diagnostic Resolution 122

4.4 Fault Diagnosis in Scan-Based Designs 126


5 BIST of Data-Path Kernel 135

5.1 Testing of ALU 135

5.1.1 Generation of Test Vectors 137

5.1.2 Test Application Phase 137

5.1.3 Compaction of Test Responses 139

5.1.4 Experimental Validation 139

5.2 Testing of the MAC Unit 140

5.3 Testing of the Microcontroller 141

6 Fault Grading 147

6.1 Fault Simulation Framework 148

6.2 Functional Fault Simulation 150

6.3.1 Performance of Building Block Models 164

6.3.2 High-Level Synthesis Benchmark Circuits 165

6.3.3 Comparison with PROOFS 166

8.2.2 Memory Array Faults 194

8.2.3 Read and Write Logic Faults 194

8.2.4 Address Decoder Faults 195

8.2.5 Multiple Faults 195


Preface

The semiconductor industry, driven by ever-increasing demands for higher performance and reliability, as well as greater functionality and speeds, continuously introduces new higher density technologies and new integrated circuits. These circuits, like any other complex systems, not only have to meet the performance and functionality requirements, but they also have to be manufacturable. In particular, they have to be highly testable in order to meet extremely high and constantly growing quality requirements. The quality of testing is often defined as the number of faulty chips that pass the test for one million chips declared as good. Many microelectronics companies have already set their testing quality goals to less than 100 dpm (defects per million), and there is intensive ongoing research to lower this number to less than 10 dpm, as targeted in the six sigma project pioneered by Motorola.

Many integrated circuits are produced in large volume and very often operate at high speeds. Since their manufacturing yield strongly depends on the silicon area, and their performance is directly related to the delays on critical paths, it is essential that the testing strategy provides a high fault coverage without a significant area overhead and performance degradation in order to build reliable and competitive products. It is a well-known fact that the costs associated with detecting faults rise over thousands of times from the time the product is specified to the time the product is released to customers. This is why the most effective way to prevent costly prototyping turns is to consider testing issues as early in the design cycle as possible. The tremendous practical importance of this problem generated an immense amount of research in an attempt to develop testing schemes of the ultimate quality. The increasing complexity of VLSI circuits, in the absence of a corresponding increase in the number of input and output pins, has made structured design for testability (DFT) and built-in self-test (BIST) two of the most important concepts in testing that profoundly influenced the area in recent years [16]. Scan design is a good example of structured DFT where, in the test mode, all memory elements are connected in scan chains, through which the test vectors can be shifted in and out. This solution enhances the controllability and observability of the circuit, and, as far as testing of combinational stuck-at faults is concerned, the circuit can be treated as a combinational network.

In BIST, the original circuit designed to perform the system function is appended with additional modules for generation of test patterns and compaction of test responses [16]. Thus, the BIST approach can be applied at all levels of testing, starting from wafer and device to system and field testing. It is widely accepted that appending these modules to the original circuit satisfies the high fault coverage requirement while reducing the dependence on expensive testing equipment. However, it is also agreed that this solution compromises a circuit's area and performance as it inevitably introduces either a hardware overhead or additional delays and increased latency. These delays may be excessive for high-speed circuits used in several new applications such as broadband packet switching, digital signal processing (DSP) for the asynchronous transfer mode (ATM), new generations of floating point processors, and others. Therefore, BIST schemes are evaluated thoroughly on the basis of the fault coverage they provide, the area overhead they require, and the performance penalty they introduce. A more detailed survey of existing DFT and BIST schemes is provided in Chapter 1. Further information can be found in [2], [6], [7], and [16].

With the cost of testing becoming a significant part of the cost of new microelectronics products, with the inevitably upcoming challenges of new deep submicron technologies, with the increasing role of hardware-software codesign, and, last but not least, with ever-changing customer expectations, the demand for new solutions and tools appears to be relentless. In particular, the unquestionable proliferation of high-performance data-path architectures clearly demonstrates how inadequate existing BIST schemes can be if they are to entail non-intrusive and at-speed testing and yet guarantee the portability of test procedures. Paradoxically, although the vast majority of data-path architectures consist of powerful building blocks such as adders, multipliers, or arithmetic and logic units (ALUs) offering a very high computational potential, existing data-path BIST schemes are unfortunate examples of having sophisticated modules on the chip but remaining unable to translate this advantage into efficient nonintrusive testing schemes.

The approach presented in Chapters 2 through 8 is fundamentally different from the solutions introduced so far. It uses several generic building blocks, which are already in the data path, as well as its very flexible and powerful control circuitry to generate patterns and compact test responses. This permits design of complex software-based, and thus very portable, BIST functions. These functions produce test vectors in the form of control signals, such as the type of ALU operation, the addresses of registers, the input to shifters, etc., rather than data, as it is done in all other systems. In such an environment, the need for extra hardware is either entirely eliminated or drastically reduced, test vectors can be easily distributed to different modules of the system, test responses can be collected in parallel, and there is virtually no performance degradation. Furthermore, the approach can be used for at-speed testing, thereby providing a capability to detect failures that may not be detected by conventional low-speed testing. These characteristics make this method an exceptionally attractive testing scheme for a wide range of circuits including high-performance DSP systems, microprocessors, and microcontrollers.

In the following chapters we will discuss several new fundamental concepts and practical scenarios concerned with test generation, test application, and test-response compaction performed by means of building blocks of high-performance data paths. We will show that even the simplest modules provide a very high potential for the integration of their features into a new generation of efficient and portable BIST schemes. As the described techniques rest predominantly on arithmetic operations, these schemes will be jointly referred to as the arithmetic built-in self-test (ABIST) methodology. We will demonstrate that the ABIST paradigm virtually eliminates the traditional dichotomy between the functional mode and the testing mode, as testing will be based on regular operations and with no interference into the circuit structure. It can be expected that it will create a next integration platform where off-line and on-line BIST schemes will be merged together.

Chapter 2 introduces several test generation schemes that can be easily implemented in data paths based on adders, multipliers, and ALUs. These schemes may replace commonly used LFSR-based test-pattern generators and, consequently, allow one to mimic several commonly used generation techniques. In particular, a new approach to generate pseudo-exhaustive test patterns by means of arithmetic operations is described. The resultant test patterns provide a complete state coverage on subspaces of contiguous bits.

The Accumulator-Based Compaction (ABC) scheme for parallel compaction of test responses is the subject of Chapter 3. We will demonstrate that the ABC scheme offers a quality of compaction similar to that of the best compactors based on multiple input signature registers (MISRs) or cellular automata (CA) of the same size. The presented characteristics can be used to estimate the fault coverage drop for a given circuit under test (CUT) characterized by its detection profile. The impact of the compactor's internal faults on the compaction quality is also examined.

Compaction schemes can also be used to perform fault diagnosis. Faults, especially single ones, can be easily identified by collecting signatures and comparing them with a dictionary of precomputed signatures. Chapter 4 examines the relationship between the size of the compactor, the size of the circuit, which determines the number of faults, and the quality of diagnostic resolution measured as the percentage of faults that have unique signatures. Moreover, an adaptive procedure to facilitate fault diagnosis in scan-based designs is also described. When running successive test experiments, it uses the ABC scheme to identify all scan flip-flops which are driven by erroneous signals.

Chapter 5 addresses testing of those data-path building blocks which play a key role in implementing more complex ABIST functions. The blocks analyzed are components forming multiply-and-accumulate structures; that is, the ALUs, multipliers, and register files. In addition, testability of simple microcontrollers is also discussed.

Chapter 6 deals with fault simulation techniques customized for ABIST applications. It starts with a method exploiting the hierarchy inherent in the data paths. Then it continues with an approach taking advantage of the architectural regularity of several building blocks and concludes with a comparison of the described technique with the best gate-level fault simulation tools.

Perspectives for integration of the ABIST approach with behavioral synthesis are examined in Chapter 7. A survey of methodology for incorporating ABIST elements into the high-level synthesis process is accompanied by the analysis of the relationship between input subspace state coverage and the structural fault coverage of various data-path building blocks.

In Chapter 8, several case studies are presented. First, schemes aimed at testing random logic accessible through multiple scan chains are examined. Next, the ABIST implementation of memory test algorithms is discussed along with customized arithmetic test-response compaction schemes adopted for this particular application. A scheme which is intended to enhance the testability of digital decimators is subsequently described. This scheme is built around the circuitry used for the normal function. It exploits operations offered by already existing functional blocks of the decimators to perform basic testing functions. Finally, yet another scheme to encapsulate test responses is shown. It employs leaking integrators appearing in a variety of DSP circuits. A quantitative characterization of this circuit acting as a compactor of test responses is provided together with modifications leading to a very competitive compaction quality.

In several places throughout this book we will use an assembly-level language. It will allow brief programs to be written for various data-path test scenarios presented in the chapters devoted to the ABIST methodology. A detailed description of the language is included in Appendix B. We urge the reader to spend a few minutes studying this section so that the test software will be easily comprehended. Furthermore, a careful analysis of some of the test programs may reveal interesting implementation details illustrating the effectiveness of the software-based self-test mechanisms.

This book is based on the results of research in ABIST, some of which have been presented in IEEE publications. We would like to acknowledge the IEEE … of Mentor Graphics. Robert Aitken of Hewlett Packard, Vivek Chickermane of IBM Microelectronics, and Sanjay Patel of Mentor Graphics provided useful suggestions and comments. It is our pleasure to acknowledge the support we received from the Cooperative Research and Development grant from the Natural Sciences and Engineering Research Council of Canada and Northern Telecom in the early stages of the research project leading to this book. Our special thanks go to Rod Favaron and Mark Olen of Mentor Graphics Corporation for providing support to complete this project. Last but not least, we would like to express our gratitude to Danusia Rajska for her help in preparation of the manuscript.

Janusz Rajski
Jerzy Tyszer


CHAPTER 1

Built-in Self-Test

Before we proceed to present the major elements of the ABIST methodology, we would like to provide the reader with a brief overview of existing built-in self-test (BIST) principles and the resulting practical solutions. Subsequently, this part of the book can be used as a reference when studying the remaining chapters, especially 2 and 3. Since BIST can be regarded as a natural outgrowth of design for testability (DFT), we begin this chapter by introducing several issues underlying DFT mechanisms and putting BIST into perspective by examining the reasons for its emergence. We will then look at a variety of BIST schemes which have evolved over the years for generating test patterns and analyzing the resultant circuit response. The chapter concludes with BIST applications and, in particular, testing approaches for general and structured logic which are used in custom and semicustom designs.

… as performance, area, power, manufacturability, etc., may significantly enhance the reliability of products and their overall quality.

Testability, although difficult to define and quantify because of the many different factors affecting the costs and quality of testing, reflects the ability of the circuit's tests to detect, and possibly locate, failures causing malfunctioning of the circuit. As the number and kind of faults that may occur depend on the type of device and the technology used to fabricate it, evaluation of test quality can be a difficult and often computationally intensive process. Ideally, we would like to measure a defect level representing the fraction of faulty chips within those passed as good by the tests. It is, however, difficult to obtain an accurate defect level, as it requires knowledge of the yield and statistical properties of defects. Consequently, an indirect and easier-to-estimate test quality measure is used. It is called fault coverage and is defined as the ratio of the number of faults that can be detected to the total number of faults in the assumed fault domain. As the complexity of electronic devices continues to increase, complete fault coverage, one of the primary quality requirements, becomes more and more difficult to achieve by virtue of only traditional testing paradigms.

The growth in the cost of developing and applying tests is attributable to the almost trivial observation that modern complex integrated circuits pose very serious problems in testing and debugging. Testing at the board or complete system level can be even more difficult. On the other hand, there is a continuing need for testing at these various architectural stages. At the chip level, testability problems include:

• a very high and still increasing logic-to-pin ratio, which points out a highly unbalanced relationship between a limited number of input/output ports and unprecedentedly complex semiconductor devices which are accessible only through these terminals,

• a circuit complexity which continues to grow as new submicron technologies offer higher densities and speeds,

• an increasingly long test-pattern generation and test application time; it has been repeatedly reported that functional and random tests for a general class of circuits containing memory elements have very low fault coverage; in the case of deterministic patterns, an extraordinary amount of processing time might be required to generate a test vector, and then it may take a large number of clock cycles to excite a fault and propagate it to primary outputs,

• a prohibitively large volume of test data that must be kept by testing equipment,

• an inability to perform at-speed testing through external testing equipment,

• incomplete knowledge of the gate level structure as designers are separated from the level of implementation by automated synthesis tools,


• lack of methods and metrics to measure the completeness of employed testing schemes,

• difficulties in finding skilled resources.

At the printed circuit board level, external testers would require a sophisticated bed-of-nails fixture in order to access the pins of the chips on the board, if these circuits were designed without addressing testability issues. This expensive technique becomes virtually impractical when a surface-mount technology is used with components mounted densely on both sides of the board. Also, as the board has to be removed from the system, system-level diagnosis becomes impossible.

It is certainly imperative to keep all costs related to testing, and originating from the above mentioned problems, in reasonable bounds. It appears that this can be accomplished at the expense of a modest amount of area and possible minimal performance degradation such that a uniform and structured solution can be used in debugging, manufacturing, and system testing. This desired approach, or rather a collection of techniques which make the final design economically testable, is known as a process of design for testability (DFT). DFT is expected to produce circuits with adequate controllability and observability, satisfying several design rules which reduce test development costs, increase fault coverage, and finally, reduce defect levels.

Although several testability problems can be alleviated by using certain DFT techniques, the actual testing still requires the application of test stimuli and the comparison of test responses with the correct reference. These operations are traditionally carried out by means of external testing equipment such that the tester applies test vectors one by one and then compares the responses with the fault-free responses, also one by one. For large circuits this approach becomes infeasible. As we have indicated above, the patterns can be difficult to generate, and the number of tests can be so large that it would be difficult to store and handle them efficiently by the tester hardware. The time to apply the vectors may become unacceptable. In addition, the testers are very expensive, and testing cannot be performed once the device is in the system.

An attractive alternative to the classical testing scenario, where test patterns are applied from an external tester, is built-in self-test (BIST). In BIST, additional "on-chip" circuitry is included to generate test vectors, evaluate test responses, and control the test. Random, or in fact pseudo-random, patterns can be generated by simple circuits, and test responses can be compacted into a short statistic by calculating a signature. This signature, obtained from the CUT, can be subsequently compared with a fault-free signature.
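To make the idea of a signature concrete, the sketch below compacts a stream of response words into a short statistic by repeated addition; this is only an illustrative toy (the accumulator width, the response values, and the function names are assumptions, not the book's ABC scheme, which is developed in Chapter 3).

```python
# Toy illustration of response compaction into a signature (not the book's
# ABC scheme): fold response words into a 16-bit accumulator and compare the
# result against a precomputed fault-free signature.

def compact_responses(responses, width=16):
    """Fold a sequence of response words into a single signature."""
    mask = (1 << width) - 1
    signature = 0
    for word in responses:
        signature = (signature + word) & mask   # add with wrap-around
    return signature

if __name__ == "__main__":
    fault_free = compact_responses([0x1A2B, 0x0F0F, 0x1234, 0xFFFF])
    observed = compact_responses([0x1A2B, 0x0F0E, 0x1234, 0xFFFF])  # one bit flipped
    print("pass" if observed == fault_free else "fail")
```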

BIST has revolutionized the way integrated circuits can be tested. It reduces the cost of manufacturing testing by shortening the test application time, minimizing the amount of test data stored, and lowering the cost of testing equipment. Its implementation can result in a reduction of the product development cycle and cost, as well as a reduction of the cost of system maintenance. The latter benefits may have a dramatic impact on the economics of testing. It follows from the fact that built-in test circuitry can test chips, boards, and the entire system virtually without very expensive external automatic test equipment. The ability to run tests at different levels of the system's hierarchy significantly simplifies diagnostic testing, which in turn improves troubleshooting procedures and sanity checks during assembly, integration, and field service. Since the BIST hardware is an integral part of the chip, BIST, in principle, could allow for at-speed testing, thus covering faults affecting circuit timing characteristics.

The basic BIST objectives are often expressed with respect to test-pattern generation and test-response compaction. It is expected that appending BIST circuitry to the circuit under test will result in high fault coverage, short test application time, small volume of test data, and compatibility with the assumed DFT methodology. High fault coverage in BIST can be achieved only if all faults of interest are detected and their effects are retained in the final signature after compaction. Numerous test generation and test-response compaction techniques have been proposed in the open literature and are used in industrial practice as implementation platforms to cope with these objectives for various types of failures, errors, and a variety of test scenarios. In the following subsections we will outline several schemes used in different BIST environments. They have gained wide acceptance by BIST practitioners, and their superiority over non-BIST approaches ensures a successful applicability of BIST in current and future technologies.

Clearly, the use of BIST is also associated with certain costs. Additional silicon area is required for the test hardware to perform test-pattern generation and test-response compaction. Some performance degradation may be introduced due to the presence of multiplexers needed to apply the test patterns in the test mode. Some testing equipment may still be needed to test the BIST hardware and to carry out the parametric testing. BIST also requires a more rigid design. In particular, unknown states are not allowed since they can produce unknown signatures. We will also return to these problems in the next subsections.

1.2 Design for Testability

1.2.1 Controllability and Observability

There are two major concepts which are commonly used in assessing and enhancing the testability of a circuit under test: controllability and observability. Controllability is a measure of how difficult it is to set a line to a value necessary to excite a fault. Observability is a measure of how difficult it is to propagate a faulty signal from a line to a primary output. Notice that controllability, in addition to its impact on fault activation, also indirectly affects the ease with which the required signals can be set to propagate fault effects. The essence of design for testability is to apply minor changes to the original circuit design such that the resultant controllability and observability will be improved. The frequently used set of characteristics for controllability and observability of each node in a circuit includes three values representing the relative degree of difficulty of:

• achieving 1 at the node (1-controllability),

• achieving 0 at the node (0-controllability),

• driving the fault effects from the node to a primary output.

The above measures have to be used with respect to whether the tests employed are pseudo-random or deterministic. In the latter case, all measures can be employed to guide an automatic test-pattern generation technique. In a BIST environment, however, a common test scenario is to use pseudo-random patterns. Under such circumstances, the definitions of controllability and observability can be restated in the following way [16]:

• 1-controllability (0-controllability) of a node is the probability that a randomly applied input vector will set the node to the value 1 (0),

• observability of a line is the probability that a randomly applied input vector will sensitize one or more paths from that line to a primary output.

It can be easily observed that a circuit node will have a low controllability and/or observability if a unique test vector or a long test sequence is required to establish the state of this node and then to propagate this state to the outputs of the circuit.
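A rough way to see these probabilistic measures in practice is to estimate them by random simulation; the toy netlist and node names below are hypothetical, and only 1-controllability is estimated (observability would additionally require path sensitization or fault simulation).

```python
# Monte Carlo estimate of 1-controllability on a toy gate-level netlist
# (hypothetical circuit: y1 = a AND b, y2 = y1 OR c, y3 = NOT y2).

import random

def simulate(a, b, c):
    y1 = a & b
    y2 = y1 | c
    y3 = y2 ^ 1
    return {"y1": y1, "y2": y2, "y3": y3}

def one_controllability(trials=100_000):
    hits = {"y1": 0, "y2": 0, "y3": 0}
    for _ in range(trials):
        values = simulate(*(random.randint(0, 1) for _ in range(3)))
        for node, v in values.items():
            hits[node] += v
    return {node: count / trials for node, count in hits.items()}

if __name__ == "__main__":
    for node, p in one_controllability().items():
        print(f"{node}: 1-controllability ~ {p:.3f}")   # y1 ~ 0.25, y2 ~ 0.625, y3 ~ 0.375
```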

EXAMPLE 1.1 A circuit shown in Fig. 1.1, although easily initializable, is nevertheless extremely difficult to control. It consists of a microprogram memory driven by a next-address counter. The counter can be reset. Its next state, however, can be either worked out by the counter itself (an increment function after applying a clock pulse) or provided by the memory as a "branch" address if a respective flag is set. In either case, it may take an enormous amount of time to force several bits of the counter to certain values, especially if the memory contains a lengthy program whose execution depends on data. In other words, there is no simple way to run all parts of the program in a systematic manner in order to exercise the entire address space. Furthermore, even totally sequential execution of the program still requires 2^(n-1) clock pulses to set the most significant bit of the counter to the value of 1, where n is the size of the counter.

Figure 1.1: Circuit difficult to control

The observations made so far indicate that the key to structural design for testability is to have the ability to control and observe the state variables directly. This can be accomplished by a number of DFT approaches described in the following sections.

1.2.2 Ad Hoc Techniques

Several design techniques have been used over the years to avoid potential problems with testing. They are termed "ad hoc" approaches, as they are mostly aimed at designers and do not provide any systematic (algorithmic) methodology which improves testability across the entire circuit. They do provide, however, certain rules that must always be followed in order to increase controllability and observability. These rules are, in different ways, implemented in more rigorous DFT designs, too. In the remainder of this section, we discuss some of these approaches that have traditionally been applied to simplify testing [2], [16].

Test points. Test points can be added to a circuit to make it easier either to sensitize faults (control points) or to observe them (observation points). Fig. 1.2 shows so-called 0- and 1-injection circuits, where two extra gates are used to achieve 0- and 1-controllability of a line connecting subcircuits C1 and C2. For example, in the 1-injection circuitry, when TEST = 0, the circuit operates in its normal mode. Setting TEST = 1 allows us to inject a 1 on line S, and subsequently on line b of subcircuit C2. Line S can be controlled by an extra primary input, or it can be driven by a flip-flop being a part of the internal scan path (see section 1.2.3). In general, optimal test point insertion in circuits with reconvergent fanout is an NP-complete problem [104], and therefore numerous empirical guidelines and approximate techniques have been proposed to identify locations in a circuit to introduce control and observation points [25], [39], [86], [147], [152]. In fact, almost every DFT technique listed below uses test point insertion inherently to implement its underlying philosophy of improving testability. Techniques for automatic test point insertion are also discussed in section 1.2.5.

Figure 1.2: Control points to force 0 and 1
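The behavior of such control points can be captured by two one-line Boolean functions; the sketch below is a simplified model (the exact gating of line S in Fig. 1.2 may differ), with an AND gate realizing 0-injection and an OR gate realizing 1-injection.

```python
# Simplified model of 0- and 1-injection control points: in normal mode
# (test = 0) the line value passes through; in test mode a constant is forced
# onto the line driving the downstream subcircuit.

def inject0(line, test):
    """AND-type control point: forces 0 when test = 1."""
    return line & (1 - test)

def inject1(line, test):
    """OR-type control point: forces 1 when test = 1."""
    return line | test

if __name__ == "__main__":
    for line in (0, 1):
        print(line, "->",
              inject0(line, 0), inject0(line, 1),   # normal mode / forced 0
              inject1(line, 0), inject1(line, 1))   # normal mode / forced 1
```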

Internally generated clocks, monostable multivibrators, and oscillators. In order to eliminate the need to synchronize the tester and pulses internal to a circuit, these devices should be disabled during test. Furthermore, testing can be performed at the speed of the tester rather than at the speed of the circuit.

Asynchronous logic. It should be avoided by designers, as circuits with asynchronous feedback loops are susceptible to hazards. Although it is possible to avoid a hazard by using an appropriate ATPG tool which takes into account timing delays in the circuit, it can be very expensive. In many cases (for example, pseudo-random testing) avoiding hazards is practically impossible.

Initialization. A sequential circuit must be brought into a known state before its actual testing. This can be achieved by using a customized initialization sequence. However, as such a sequence is usually devised by a designer, it is unlikely that it will exhibit enough simplicity to be recreated by ATPG software or to be used in a BIST environment. Thus, it is recommended to employ reset or set inputs to flip-flops or another simple presetting circuitry.

Logical redundancy. Unless added intentionally to eliminate hazards and races or to increase reliability, logical redundancy is a highly undesirable phenomenon which should be completely avoided. The presence of redundancy causes ATPG tools to waste a lot of time while trying to generate nonexistent tests for redundant faults. Moreover, redundant faults may invalidate tests for nonredundant faults. Unfortunately, redundancy is often introduced inadvertently and is therefore extremely difficult to identify and remove.

Global feedback paths. Since from the ATPG point of view the feedback paths may introduce very long gate paths, they should be eliminated. The simplest way of achieving this objective is to use control points or other logic to break the paths during testing.

Long counters and shift registers. As shown in the last example, a long counter may require an unacceptable number of clock cycles to change the most significant bits. A common remedy is to add control points such that the counter (or a shift register) is partitioned into smaller units, which can be clocked far fewer times to set the significant bits.

Memory arrays and other embedded structures. Memory arrays should be isolated from the remaining parts of a circuit for at least two reasons. First, it is very difficult to generate tests for circuits with memory blocks. Second, when separated, a stand-alone memory circuitry can be conveniently tested by means of a variety of test schemes developed particularly for these structures. The same methodology applies to other embedded logic blocks, such as PLAs, cores, etc.

Large combinational circuits. Because of the time complexity of test generation and fault simulation, it is justified to partition large circuits in order to test them separately. Partitioning simplifies the task of fault excitation, fault propagation, and line value justification in ATPG, in addition to increasing random pattern testability. The independent testing of the resultant partitions is carried out through the number of test points added to lines crossing partition boundaries. Fig. 1.3b illustrates such a partitioning [2] of a circuit shown in Fig. 1.3a. Control inputs T1 and T2 are used to test separately either C1 (T1T2 = 01) or C2 (T1T2 = 10), or to put the circuit into the normal mode (T1T2 = 00).

Figure 1.3: Partitioning of a circuit

1.2.3 Scan Designs

In order to test complex circuits in a time- and cost-effective manner, a number of structured design techniques have been proposed. They rest on the general concept of making all or some state variables (memory elements) directly controllable and observable. If this can be arranged, a circuit can be treated, as far as testing of combinational faults is concerned, as a combinational network.

Perhaps the most used and best-known is a family of techniques termed scan designs. They assume that during testing all registers (flip-flops and latches) in a sequential circuit are connected into one or more shift registers or scan paths. The circuit has two modes of operation:

• normal mode - the memory elements perform their regular functions (as in an unmodified circuit),


• test (scan) mode - all the memory elements connected into a shift register are used to shift in (or scan in) and out test data.

During testing the sequence of operations for scan-based designs is as follows:

1. Select the test mode (memory elements form a shift register).

2. Shift in test-pattern values into the flip-flops.

3. Set the corresponding values on the primary inputs.

4. Select the normal mode.

5. After the logic values have had time to settle, check the primary output values and subsequently capture a test response into the flip-flops.

6. Select the test mode. Shift out the flip-flop contents and compare them with the good response values. The next input vector can be shifted in at the same time.

7. Repeat steps 2 through 6 for successive test vectors.
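The sequence above can be mimicked in software on a toy design; the following sketch (the combinational function, chain length, and bit ordering are all assumptions) walks through shift-in, capture, and shift-out for a single test vector.

```python
# Behavioral sketch of one pass of the scan test sequence for a toy design
# with a 3-bit scan chain wrapped around a purely combinational block.

def comb_logic(pi, state):
    # toy combinational block: next-state bits and a primary output
    next_state = [state[1] ^ pi, state[2] & pi, state[0] | pi]
    po = state[0] ^ state[1] ^ state[2]
    return next_state, po

def scan_test(pattern, pi, chain_len=3):
    chain = [0] * chain_len
    # steps 1-2: test mode, shift the pattern into the chain bit by bit
    for bit in pattern:
        chain = [bit] + chain[:-1]
    # steps 3-5: normal mode, apply the primary input, observe PO, capture
    captured, po = comb_logic(pi, chain)
    # step 6: test mode again, shift the captured response out
    response = []
    for _ in range(chain_len):
        response.append(captured[-1])
        captured = [0] + captured[:-1]
    return po, response

if __name__ == "__main__":
    print(scan_test([1, 0, 1], pi=1))   # primary output and shifted-out response
```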

The flip-flops are tested by means of a "flush test" consisting of either a string of 1s followed by a string of 0s, or a serial pattern 00110011 used to check whether each flip-flop can hold 0 and 1 and make transitions.


The scan-based designs must comply with a set of design rules and constraints. Usually, they are related to design methods for scan cells and consequently determine the type of DFT style which is adopted for a given circuit. Nevertheless, several common advantages of scan-based DFT schemes can be easily pronounced. They include:

• simplified test-pattern generation and test-pattern evaluation - testing the network is essentially the same as testing a combinational circuit,

• simplified timing analysis - proper operation of the network is independent of clock characteristics and only requires the clock pulse to be active for …

… is usually controlled by an external clock, and there is a need to incorporate design rule checking into CAD software in order to automate the design process. Fig. 1.4 illustrates a basic scan-path design. As can be seen, the circuit features three extra pins (test mode, scan-in, scan-out) as well as area and performance overhead due to the multiplexers. When the test mode signal is low, the circuit operates in its normal, that is, parallel-latch mode, except for increased delays. In the test mode, test patterns are shifted in through the scan-in terminal, and test responses are subsequently shifted out through the scan-out pin.

Figure 1.4: Basic scan-design architecture

There are several forms of scan design, among them the scan path [63], the scan/set [157], the random-access scan [11], and the level-sensitive scan design (LSSD) [60] used in many IBM products. For the sake of illustration, we will briefly discuss the LSSD technique. The memory elements used by LSSD are implemented as latches in which the stored data cannot be changed by any input when the clocks are off. Moreover, each latch is augmented to form a shift-register latch (see Fig. 1.5 on page 12) by adding an extra latch (L2) with a separate clock input. Interconnection of the latches into a shift register structure is done as shown in Fig. 1.5, which demonstrates the general structure for a so-called LSSD double-latch design. In this approach, both latches, L1 and L2, are used as system latches, and the circuit output is taken from the outputs of L2. Note that, in the normal mode, clocks C2 and C3 are used, while in the test mode, nonoverlapping clocks C1 and C2 are used to prevent races. The scan path is denoted by the dashed line. Another LSSD approach, known as a single-latch design, is also used if it is desired to separate combinational circuits only by a single latch. In this solution, latches L2 are not employed to perform the system functions of the circuit. Several variations of the original LSSD design have also been proposed [2], mostly to reduce logic complexity.

Figure 1.5: LSSD double-latch design

To reduce the costs of using scan designs, especially area overhead and performance degradation, several partial scan techniques have been proposed in which only a subset of the circuit memory elements are included in the scan path. Among these methods, there are three common strategies used to select the flip-flops to scan such that the cost of test-pattern generation is reduced while the testability overheads are minimized. Chronologically, the first approach was to employ testability measures in the flip-flop selection process [171]. It does not guarantee the optimal solution as testability measures are usually not accurate and do not characterize the global effects. The second group of methods is based on breaking cyclic paths in the CUT [38] in order to reduce the number of feedback loops and their sequential depth. The rationale behind these techniques is to reduce the high cost of sequential ATPG originating from the presence of these loops. A special variant of this method cuts all the feedback loops, so that during test the resulting circuit works as a pipeline where faults can be processed by a combinational ATPG [71]. The third concept utilizes test-pattern generation techniques [110]. The partial scan can be conveniently integrated with various BIST schemes [109], similarly to solutions used in the full scan environment. The reader may find further details in section 1.5.

1.2.4 Boundary-Scan Architecture

As we have already mentioned, one of the major advantages of BIST is its ability to operate at different levels of a circuit's architectural hierarchy. However, in order to invoke the BIST procedures and facilitate their correct execution at the board, module, or system level, certain design rules must be applied. In 1990, a new testing standard was adopted by the Institute of Electrical and Electronics Engineers, Inc., and it is now defined as IEEE Standard 1149.1, IEEE Standard Test Access Port and Boundary-Scan Architecture. Its overview can be found in [111]. The basic architecture of the boundary scan is incorporated at the integrated circuit level and essentially consists of a protocol by which various test functions can be carried out. In particular, the standard defines four (or optionally, five) new pins forming the test access port (TAP - see Fig. 1.6): two of them (test clock TCK and test mode select TMS) are used to control the protocol, while the remaining two pins (test data in TDI and test data out TDO) are employed to serially shift data into and out of the circuit. Application of a 0 at the optional test reset input TRST* asynchronously forces the test logic into its reset state. The standard also specifies a simple finite state machine called the TAP controller, which is driven by TCK, TMS, and TRST*.

Figure 1.6: IEEE 1149.1 standard test access port

Every chip designed according to the standard contains a boundary-scan instruction register and an associated decode logic. It is used to set the mode of operation for selected data registers by means of boundary-scan instructions, which always place data registers between TDI and TDO. Two registers must always be present: the bypass register and the boundary-scan register. Additional registers are allowed under the optional clause of the 1149.1 standard, and they can be selected by sending the proper control sequences to the TAP controller. In particular, internal scan paths can be connected via this circuitry to the chip's scan-in and scan-out ports. This is illustrated in Fig. 1.6 by the block labeled Internal scan paths. It should be emphasized, however, that the normal input/output terminals of the mission logic are connected to the chip's input/output pads through boundary-scan cells.

If integrated circuits are mounted on a board during testing, a typical test session is carried out in the following way. All TDIs and TDOs are daisy-chained from chip to chip. First, a sequence of instructions is shifted through the system in such a way that every chip receives its own content of the instruction register. These instructions place the appropriate parts of data registers (for example, scan paths, user-defined registers, but also boundary-scan registers, etc.) between the TDI and TDO pins. Second, the test patterns are loaded, and the test instruction is executed. The resultant test response can be subsequently shifted out of the selected registers.

The boundary scan allows efficient testing of board interconnect using the EXTEST instruction, facilitates isolating and testing chips via the boundary registers or built-in self-test hardware using the INTEST or RUNBIST instructions (see also section 1.5.4), and makes it possible to capture snapshot observations of normal system data using the SAMPLE instruction. When no test actions are required in a given circuit, the BYPASS instruction puts a 1-bit bypass register between TDI and TDO, thus forming a minimal scan path to and from other targeted chips.

As can be seen, the boundary-scan architectures not only allow for efficient fault detection at the board, module, or system levels, but also create an environment in which fault diagnosis (location) at these architectural levels becomes a relatively simple task. A properly configured sequence of instructions may lead to isolation of a faulty chip in a timely fashion, thus enabling simple and fast repair procedures.

1.2.5 Test Point Insertion

To make circuits easier to test, they can be modified either during the synthesis process [166] or through test point insertion. The former technique accepts as an input a two-level representation of a circuit and a constraint on the minimum fault detection probability, and generates a multilevel implementation that satisfies the constraint while minimizing the literal count. The post-synthesis methods, on the other hand, designate locations in the circuit to introduce test points facilitating the detection of "hard-to-test" faults, using either exact fault simulation [25], [86], [168], or approximate testability measures [39], [147], [152]. The results of fault simulation can be used to determine signal correlations and places at which signal propagation stops, in order to insert test points eliminating the correlation and allowing fault excitation and fault effect propagation. Conversely, to avoid time-consuming simulation experiments, the second group of methods utilizes the controllability and observability measures to identify the hard-to-control and hard-to-observe sectors of the CUT, at which the test points are subsequently inserted. Regardless of the test point selection process, all of these techniques attempt to improve the detection probabilities of faults while minimizing the hardware overhead. Notice that, depending on how a test point is driven or observed, its insertion may require a few extra gates (compare Fig. 1.2) and a wire routed to or from an additional flip-flop to be included in the scan chain.


Figure 1.7: Test points activation

In [162], a constructive test point insertion technique for scan-based designs has been proposed. A divide-and-conquer approach is used to partition the entire test into multiple phases. Each phase contributes to the results achieved so far, moving the solution closer to complete fault coverage. Within each phase, a group of control and observation points are activated, as shown in Fig. 1.7, such that they maximize the fault coverage calculated over the set of still undetected faults. A probabilistic fault simulation, which computes the impact of a new control point in the presence of the control points already selected, is used as a vehicle to select the test points. In this way, in each test phase, a group of control points, driven by fixed values and operating synergistically, is enabled. In addition, observation points maximally enhancing the fault coverage are selected by a covering technique that utilizes the probabilistic fault simulation information.

EXAMPLE 1.2 Referring further to Fig. 1.7, the entire test experiment is divided into four distinct phases: Φ0, Φ1, Φ2, and Φ3. In each phase, a set of control points is enabled by the phase decoder outputs and a specific number of test patterns are applied. Sites c1 and c2, depicted in the figure, illustrate the implementation of an AND and an OR type control point, respectively. The value of the test signal gt is 0 during phases Φ1 and Φ2, forcing the control point c1 to 0 regardless of the value of line g. However, during phases Φ0 and Φ3, gt is 1 and the normal mode signal on line g is passed to c1. Similarly, ht's value is 1 during phases Φ2 and Φ3, forcing the output of c2, irrespective of the status of line h, to 1. During phases Φ0 and Φ1, ht is 0, thus allowing the normal mode signal on line h to reach the output of c2.

Information is retained about the polarity of propagation in the form of the values D and D̄. This reflects the scenario in a circuit with reconvergent fanout, where a fault can propagate to a node as a D or a D̄. In addition, keeping the polarity information is helpful in properly capturing the reconvergence of fault effects. The probability of detecting a fault at a node is then obtained by adding two components, the probability of detecting it as a D and as a D̄ at that node. The underlying probabilities necessary for forward fault propagation are obtained by a prior logic simulation of a prespecified number of random vectors.

The problem of selecting a number of observation points is carried out in such a way that the detection probability of a maximum number of faults meets a user-specified threshold. Given the detection profile, a greedy heuristic selection method is used, which continues until a prespecified number of observation points are chosen or no observation point satisfies the minimum-benefit-per-cost criterion. The set of candidate locations for control point insertion is subsequently determined by an estimation technique that computes two estimates, E0 and E1, for various nodes in the circuit. These measures give an indication (based on the detection profile) of the number of faults that could potentially be detected by placing an AND/OR type control point, respectively. Nodes for which E0 or E1 exceeds a minimum acceptable value are retained for subsequent evaluations. For each candidate node, a 0 or 1 value, depending on whether an AND or an OR type control point is simulated, is temporarily injected, and the resulting change in the propagation profile is determined. Since the insertion of a control point at a node perturbs the list of faults associated with itself or nodes in its fanout cone, it is necessary to recalculate the list for this set only. Accordingly, an incremental probabilistic fault simulation is performed to determine the new list of faults at various nodes in the fanout cone of the candidate signal. The rank of a candidate is defined as the number of additional faults that propagate to primary outputs or observation points. The candidate with the highest rank is then selected for control point insertion.

Experimental results indicate that it is possible to achieve complete or near-complete fault coverage by using the simple two-phase scheme with the insertion of very few test points. In addition, it has been shown that the fault coverage achieved can be further enhanced by adopting a multi-phase scheme. Again, experimental results indicate that the number of phases needed is small and demonstrate the constructive nature of these phases.


1.3 Generation of Test Vectors

Test pattern generation for BIST applications has to conform to two basic requirements: a high fault coverage and ease of incorporating the hardware necessary to produce test data onto a chip. Certainly, test vectors derived by means of off-line techniques, such as automatic test-pattern generation, may not be suitable in the BIST environment, as their use would require very large on-chip ROMs. In the next subsections, therefore, we will discuss those test generation schemes which meet the BIST criteria of effectiveness and implementation simplicity.

1.3.1 Exhaustive Testing

When an n-input circuit is tested in an exhaustive fashion, all possible 2^n input combinations are applied. Thus, the detection of all combinational faults is guaranteed. Other advantages of this approach include no need for fault simulation and test-pattern generation for this category of faults. However, faults that may cause combinational circuits to exhibit sequential behavior (for instance, CMOS stuck-open faults) may not be detected, even after applying 2^n test vectors. A similar rule applies to other time-related failures, such as delay faults.

Exhaustive testing can use very simple circuitry to produce patterns. In particular, an n-bit binary counter can deliver all required combinations, although some other structures are also employed (for example, a modified linear feedback shift register - see section 1.3.3). Unfortunately, as the number of input vectors grows exponentially with the number of primary inputs of the CUT, the approach becomes impractical for circuits with n greater than about 30, for which the test application time may exceed reasonable limits.

1.3.2 Pseudo-Exhaustive Testing

Pseudo-exhaustive testing retains most of the advantages of the exhaustive testing approach while reducing the test application time by requiring far fewer test patterns. It rests on the fact that every element of the n-input CUT is very often (or can be after some modification [88], [184]) controllable from no more than k inputs, k < n, thus decreasing the necessary number of test vectors to 2^k. Consequently, the actual performance of pseudo-exhaustive testing relies on various techniques of circuit partition or segmentation. The resultant blocks, disjoint or overlapping, usually feature a small input pin count, and therefore can be tested exhaustively.

There are several forms of pseudo-exhaustive testing which use different strategies to carry out segmentation of the CUT. Here, we briefly characterize four techniques:


• hardware partitioning (physical segmentation) [117]: this technique adds extra logic to the CUT in order to divide the circuit into smaller, directly controllable and observable, subcircuits, which can be subsequently tested exhaustively (an example of such a partition is shown in Fig. 1.3),

• sensitized-path segmentation [32], [35], [117], [173]: some circuits can be partitioned such that sensitizing paths are established from the circuit primary inputs to the inputs of a segment, and then from the segment outputs to the primary outputs; a given partition is tested separately while the remaining partitions are stimulated such that noncontrolling values occur at those places in the CUT where it is necessary to assure propagation conditions,

• partial hardware partitioning [174]: this technique combines the features of the former two methods by applying test patterns, as in the sensitized-path segmentation, and observing the segment outputs directly due to extra hardware added to the CUT (as implemented in the physical segmentation approach).

A number of test generators have been proposed for pseudo-exhaustive testing. They employ linear feedback shift registers (LFSRs) [89], [116], condensed LFSRs [179], constant-weight counters [164], devices using linear networks [10], [37] or linear codes [163], [186], combined LFSRs and extra shift registers [17], cyclic LFSRs [178], cellular automata [44], [48], syndrome-driven counters [18], or other circuitry [153]. In Chapter 2, another pseudo-exhaustive test generation technique is described in which an adder-based accumulator is used to produce a sequence of patterns by continuously accumulating a constant value.

EXAMPLE 1.3 A counter implementing the well-known k-out-of-n code can be used to generate pseudo-exhaustive test patterns such that every k-bit input space in an n-input CUT will be covered exhaustively. For instance, the 2-out-of-4 constant weight code consists of the following code words: 1100, 1010, 1001, 0110, 0101, 0011. As can be seen, all 2-bit binary combinations appear on every pair of positions.
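The claim in Example 1.3 is easy to check mechanically; the short sketch below (the function names are ours) enumerates the k-out-of-n code words and verifies that every pair of bit positions receives all four 2-bit combinations.

```python
# Enumerate the k-out-of-n constant-weight code and check pseudo-exhaustive
# coverage of every 2-bit subspace.

from itertools import combinations

def k_out_of_n(k, n):
    """All n-bit words with exactly k ones."""
    return [tuple(1 if i in ones else 0 for i in range(n))
            for ones in combinations(range(n), k)]

def covers_all_pairs(words, n):
    for i, j in combinations(range(n), 2):
        if len({(w[i], w[j]) for w in words}) < 4:
            return False
    return True

if __name__ == "__main__":
    code = k_out_of_n(2, 4)
    print(["".join(map(str, w)) for w in code])   # 1100, 1010, 1001, 0110, 0101, 0011
    print(covers_all_pairs(code, 4))              # True
```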

1.3.3 Pseudo-Random Testing

An advantage of pseudo-random testing stems from its potential for very simple hardware and the small design effort required to implement test generation means. By far the most popular devices in use as a source of pseudo-random test sequences are linear feedback shift registers (LFSRs). Typically, they consist of D flip-flops and linear logic elements (XOR gates), connected as shown in Fig. 1.8. An LFSR of length n can also be represented by its characteristic polynomial h_n x^n + h_{n-1} x^{n-1} + ... + h_0, where the term h_i x^i refers to the ith flip-flop of the register, such that, if h_i = 1, then there is a feedback tap taken from this flip-flop [16]. Also, h_0 = 1. For example, the LFSR shown in Fig. 1.8 is characterized by the polynomial x^8 + x^6 + x^5 + x + 1. The operation of an LFSR can be described by a rather complex algebra of polynomials. The interested reader may refer to [2], [16] for further details.
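As a concrete illustration, the sketch below simulates a Fibonacci-style LFSR whose tap set is taken from the example polynomial x^8 + x^6 + x^5 + x + 1 quoted above; the mapping of polynomial terms to flip-flop positions follows one common convention and is not necessarily the arrangement of Fig. 1.8.

```python
# Minimal Fibonacci-style LFSR simulator; the tap exponents encode the example
# characteristic polynomial x^8 + x^6 + x^5 + x + 1 (tap placement is one
# common convention, not necessarily that of Fig. 1.8).

def lfsr_sequence(n=8, taps=(8, 6, 5, 1), seed=0b00000001):
    """Yield successive n-bit LFSR states until the seed state recurs."""
    mask = 0
    for t in taps:
        mask |= 1 << (t - 1)
    state = seed
    while True:
        yield state
        feedback = bin(state & mask).count("1") & 1        # XOR of tapped bits
        state = ((state >> 1) | (feedback << (n - 1))) & ((1 << n) - 1)
        if state == seed:
            break

if __name__ == "__main__":
    states = list(lfsr_sequence())
    print("period:", len(states))          # 2^8 - 1 = 255 for a primitive polynomial
    print([format(s, "08b") for s in states[:5]])
```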

If an LFSR is initialized to a nonzero value, then it can cycle through a number of states before coming back to the initial state. A polynomial which causes an n-bit LFSR to go through all possible 2^n - 1 nonzero states is called a primitive characteristic polynomial. The corresponding LFSR is often referred to as a maximum-length LFSR, while the resultant sequence is termed a maximum-length sequence or m-sequence. Such a sequence has the following properties:

• … in 2^(n-1) - 1 positions and will differ in 2^(n-1) positions.

We will now use a nonhomogeneous Bernoulli process to model the LFSR-based generator of pseudo-random vectors [175]. In order to calculate any test-quality-related measure for pseudo-random testing, the detectability of every targeted fault is required. Let us define this information as the ratio of the number k of patterns that detect a given fault to the total number N of patterns which can be produced by the LFSR. The probability P_v of detecting a fault with v pseudo-random test vectors (a test confidence) is then given by:
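The displayed equation itself is not reproduced in this copy; a standard form consistent with the surrounding description (v vectors drawn without replacement from the N LFSR patterns, k of which detect the fault) is the sketch below, which may differ in notation from the book's equation (1.1):

$$P_v \;=\; 1 \;-\; \prod_{i=0}^{v-1} \frac{N - k - i}{N - i}$$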

where the product is the probability that a fault will not be detected by any of v

test vectors It is implicitly assumed that the LFSR produces all patterns, each test vector is chosen with equal probability, and it is not replaced Consequently, the expected fault coverage of a set of / faults, characterized by detectabilities

ki/N, i = 1,2, ,f, achieved by a set of v test vectors, is given by:

(1.2)

Since the test confidence is usually prespecified, one can determine a desired

test length as the smallest integer value of v that satisfies the inequality

(1.3)

(1.4)

Assuming N — v >> k, the above formula can be approximated as (N — v) k <=

N k (l-P v ), and finally
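To see how these expressions translate into concrete test lengths, the C sketch below evaluates the exact probability of (1.1) and the approximate bound of (1.4) for an illustrative fault with detectability k/N; the particular values of N, k, and the required confidence are arbitrary and serve only to exercise the formulas.

```c
#include <stdio.h>
#include <math.h>

/* Exact detection probability (1.1): sampling without replacement. */
static double p_detect(double N, double k, unsigned v) {
    double miss = 1.0;
    for (unsigned i = 0; i < v; i++)
        miss *= (N - k - i) / (N - i);   /* probability of surviving pattern i */
    return 1.0 - miss;
}

int main(void) {
    const double N  = 1u << 20;   /* patterns the generator can produce */
    const double k  = 16.0;       /* patterns that detect the fault      */
    const double Pv = 0.95;       /* required test confidence            */

    /* Approximate test length from (1.4). */
    double v_approx = N * (1.0 - pow(1.0 - Pv, 1.0 / k));
    printf("approximate test length (1.4): %.0f\n", ceil(v_approx));
    printf("confidence reached at that length, from (1.1): %f\n",
           p_detect(N, k, (unsigned)ceil(v_approx)));
    return 0;
}
```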


It is also of interest to determine the average test length of a sequence of pseudo-random vectors necessary to detect a fault with detectability k/N. This quantity can be obtained by summing, over successive test lengths, the probabilities that a fault is detected for the first time after applying exactly w test vectors:

E[T] = \sum_{w=1}^{N-k+1} w \cdot \frac{k}{N - w + 1} \prod_{i=0}^{w-2} \frac{N - k - i}{N - i} = \frac{N + 1}{k + 1}          (1.5)

Yet another property of the vectors generated by LFSRs is that they appear to be randomly ordered. In fact, they behave randomly with respect to many statistical tests, and hence, the theoretical results regarding expected test length and fault coverage derived above can be replaced by those of purely random testing, assuming sufficiently large LFSRs. Each test vector is now chosen out of N -> ∞ different vectors and is immediately replaced. Again, let us first consider the probability of detecting a fault with v random test vectors, assuming that the detection probability is p. This probability can be expressed as follows:

P_v = 1 - (1 - p)^v          (1.6)

where (1 - p)^v is the probability that a fault will not be detected by any of v test vectors. Thus, the expected fault coverage of a set of f faults characterized by detection probabilities p_i, i = 1, 2, ..., f, achieved by v test vectors, is given by:

E[FC] = \frac{1}{f} \sum_{i=1}^{f} \left[ 1 - (1 - p_i)^v \right]          (1.7)

Since ln(1 - x) ≈ -x for x << 1, the last expression can be rewritten as:

E[FC] ≈ \frac{1}{f} \sum_{i=1}^{f} \left( 1 - e^{-p_i v} \right)          (1.8)

The average test length of a sequence of random vectors, which is necessary to detect a fault with the detection probability p, can be obtained in a similar way to that of (1.5), as follows:

E[T] = \sum_{w=1}^{\infty} w \, p \, (1 - p)^{w-1} = \frac{1}{p}          (1.9)

Rearranging (1.6) to solve for the test length v, we have:

v = \frac{\ln(1 - P_v)}{\ln(1 - p)} ≈ -\frac{\ln(1 - P_v)}{p}          (1.10)
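A quick simulation makes the with-replacement model tangible. The C sketch below draws random patterns for a fault with a fixed detection probability p and estimates both the confidence after v vectors and the average test length, which should approach 1 - (1 - p)^v of (1.6) and 1/p of (1.9), respectively; the random source and the constants are arbitrary choices made only for illustration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void) {
    const double p = 0.01;          /* detection probability of the fault */
    const unsigned v = 200;         /* test length under consideration    */
    const unsigned trials = 100000;
    unsigned detected = 0;
    double total_length = 0.0;

    srand(1);
    for (unsigned t = 0; t < trials; t++) {
        unsigned w = 0;
        int hit = 0;
        /* Apply vectors until the fault is detected (a geometric experiment). */
        do {
            w++;
            hit = ((double)rand() / RAND_MAX) < p;
        } while (!hit);
        total_length += w;
        if (w <= v) detected++;     /* detected within the first v vectors */
    }

    printf("simulated P_v = %.4f, formula (1.6) = %.4f\n",
           (double)detected / trials, 1.0 - pow(1.0 - p, (double)v));
    printf("simulated average length = %.1f, formula (1.9) = %.1f\n",
           total_length / trials, 1.0 / p);
    return 0;
}
```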


Figure 1.9: Five-stage cellular automaton

The pseudo-random patterns can also be produced by cellular automata (CAs). A cellular automaton is a collection of cells connected in a regular fashion. Each cell is restricted to local neighborhood interactions, which are expressed by means of rules used to determine the next state based on information received from the neighbors. Assuming that a cell c can communicate only with its two neighbors, c - 1 and c + 1, the following rule, represented by a state transition table, can be established:

x_{c-1} x_c x_{c+1}:   111  110  101  100  011  010  001  000
next state of cell c:    0    1    0    1    1    0    1    0

This table is implemented by a combinational logic according to the formula known as rule 90:

x_c(t+1) = x_{c-1}(t) ⊕ x_{c+1}(t)          (1.11)

Another realization, termed rule 150, is as follows:

x_c(t+1) = x_{c-1}(t) ⊕ x_c(t) ⊕ x_{c+1}(t)          (1.12)

As can be noticed, the rule number is the decimal representation of the state transition function of the above-defined neighborhood [181]. An example of a hybrid CA using rules 90 and 150 at alternating sites is shown in Fig. 1.9. The work presented in [154] demonstrates the existence of an isomorphism between a one-dimensional linear hybrid CA and an LFSR having the same irreducible characteristic polynomial. Despite the same cycle structure, the sequencing of states may still be different between the CA and the LFSR, with the CA having the better randomness distribution [82]. This property originates from the generic structure of CAs, where no shift-induced correlation of bit values is observed, a phenomenon typical for LFSRs. A desired randomness can, however, be achieved in LFSR-based generators by using linear phase shifters, as shown in section 1.5.2. Further information can be found in [49].
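The behavior of such a hybrid automaton is easy to reproduce in software. The C sketch below models a five-cell, null-boundary CA with rules 90 and 150 at alternating sites, in the spirit of Fig. 1.9; the assignment of rules to sites, the boundary handling, and the seed are assumptions made for the example and need not match the figure exactly.

```c
#include <stdio.h>
#include <stdint.h>

#define CELLS 5

/* One synchronous step of a null-boundary hybrid CA.
 * Cells with rule[i] == 90 compute x[i-1] ^ x[i+1]; rule 150 adds the cell itself. */
static void ca_step(uint8_t x[CELLS], const int rule[CELLS]) {
    uint8_t next[CELLS];
    for (int i = 0; i < CELLS; i++) {
        uint8_t left  = (i > 0)         ? x[i - 1] : 0;   /* null boundary */
        uint8_t right = (i < CELLS - 1) ? x[i + 1] : 0;
        next[i] = (rule[i] == 90) ? (left ^ right) : (left ^ x[i] ^ right);
    }
    for (int i = 0; i < CELLS; i++) x[i] = next[i];
}

int main(void) {
    uint8_t x[CELLS] = {1, 0, 0, 0, 0};              /* nonzero seed */
    const int rule[CELLS] = {90, 150, 90, 150, 90};  /* alternating sites */

    for (int t = 0; t < 10; t++) {
        for (int i = 0; i < CELLS; i++) printf("%d", x[i]);
        printf("\n");
        ca_step(x, rule);
    }
    return 0;
}
```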

Another test generator, called a generalized LFSR (GLFSR), is presented in [132]. It essentially consists of a linear feedback shift register designed over a higher order Galois field GF(2^b), where b > 1. All its components, such as multipliers, adders, and storage elements, perform operations on b-bit binary numbers, interpreted as elements over GF(2^b). Each stage of an m-stage GLFSR has b storage cells. Their content is shifted to the next stage in parallel when a clock is applied. The feedback from a given stage consists of b bits which are sent to all the stages. The coefficients of a feedback polynomial are multiplied by the feedback input over GF(2^b), and this operation is implemented using XOR gates.
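The only nonstandard building block in such a generator is the multiplication of the feedback word by a fixed coefficient over GF(2^b). The C sketch below shows one way this can be modeled in software for b = 4, using the reduction polynomial x^4 + x + 1; both the polynomial and the coefficient value are arbitrary choices for illustration, and a hardware realization of a fixed-coefficient multiplier reduces to a small network of XOR gates, as noted above.

```c
#include <stdio.h>
#include <stdint.h>

/* Multiply two elements of GF(2^4) with reduction polynomial x^4 + x + 1
 * (bit pattern 0x13) using the shift-and-add method. */
static uint8_t gf16_mul(uint8_t a, uint8_t b) {
    uint8_t r = 0;
    while (b) {
        if (b & 1u) r ^= a;            /* add (XOR) the current multiple */
        b >>= 1;
        a <<= 1;
        if (a & 0x10u) a ^= 0x13u;     /* reduce modulo x^4 + x + 1 */
    }
    return r & 0x0Fu;
}

int main(void) {
    const uint8_t coeff = 0x7;         /* a fixed feedback coefficient */
    for (uint8_t x = 0; x < 16; x++)   /* multiplication table for coeff */
        printf("coeff * %2u = %2u\n", x, gf16_mul(coeff, x));
    return 0;
}
```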

1.3.4 Weighted Patterns

Conventional pseudo-random testing may fail to detect some faults, even in very simple circuits. Consider the AND-OR structure shown in Fig. 1.10a. This circuit contains random pattern resistant faults. In order to detect the output y stuck-at-0 fault, all inputs must be set to 1, and if uniformly distributed pseudo-random patterns are applied, the detection probability is 2^{-32}, which clearly leads to an unacceptable test application time. Generally, the coverage of pseudo-random patterns can seldom achieve 100 percent. Usually, the fault coverage curve levels off and reaches the "law of diminishing returns" region after a number of tests have been applied. The remaining faults can be covered using at least three different strategies. The first one modifies the CUT by means of test point insertion - the technique we have already discussed in sections 1.2.2 and 1.2.5. The second approach targets the design of the test generators such that they can be used to tackle more efficiently the random pattern resistant faults. In the next four sections, we will describe several techniques belonging to this class. The third method attacks the problem neither at the site of the faults nor at the source of the stimulus generation. Instead, it tries to arrange scan paths such that they can encode desired test patterns. We will describe this approach in section 1.3.8.

Returning to Fig. 1.10a, if each input is set to 1 with a probability of 31/32, the y stuck-at-0 fault can be detected with a probability of (31/32)^{32} ≈ 0.362, implying that, on the average, three test vectors would be needed to achieve this goal. At the same time, each of the stuck-at-1 faults on the inputs of the AND gate is detected with a probability of (1/32)(31/32)^{31} ≈ 0.01168, which means that, on the average, 86 vectors are required to detect it. This example illustrates the essence of weighted pseudo-random testing, which can be used to address the random pattern resistant faults.

Figure 1.10: Weighted pattern generator

Basically, this approach extends the LFSR-based pseudo-random test generators by biasing the probabilities of the input bits so that the tests needed for hard-to-test faults are more likely to occur. This concept is illustrated by the circuit shown in Fig. 1.10b. Let us assume that each stage of the LFSR has a probability of 0.5 of being either a 0 or a 1. More precisely, the probability of having a 1 is 2^{n-1}/(2^n - 1), where n is the size of the LFSR. Let us also assume that these events are statistically independent of which states occur on other bits. Then, feeding k such signals into an AND gate will result in a value of 1 at the output of the AND gate with the probability of 0.5^k. Other probabilities can also be obtained by using OR gates and inverters.

It is important to note that the stuck-at-1 fault at the output z of the circuit shown in Fig. 1.10a requires different weights than those for y s-a-0, as now each input should receive a 1 with a probability of 1 - 31/32 = 1/32. Unfortunately, there is no common weight set for both faults, and therefore two different weights have to be stored for each circuit input. In general, a circuit may require several sets of weights, and, for each weight set, a number of random patterns have to be applied. Thus, the major objective of the weight generation process is to reduce both quantities. Details of techniques based on structural analysis and deterministic test sets can be found, for instance, in [20], [90], [122], [129], [177], [182], and [183].
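The arithmetic behind these weight choices is summarized by the short C sketch below, which computes the detection probabilities and the corresponding expected numbers of vectors for the y s-a-0 fault and for an AND-gate input stuck-at-1 fault of Fig. 1.10a under several input weights; the 32-input structure comes from the example above, while the particular sweep of weights is added purely for illustration.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const int n = 32;                       /* inputs of the AND gate in Fig. 1.10a */
    const double weights[] = {0.5, 31.0 / 32.0, 1.0 / 32.0};

    for (unsigned i = 0; i < sizeof weights / sizeof weights[0]; i++) {
        double w = weights[i];
        /* y s-a-0 needs all inputs at 1; an input s-a-1 needs that input at 0
         * and the remaining n-1 inputs at 1. */
        double p_y_sa0  = pow(w, n);
        double p_in_sa1 = (1.0 - w) * pow(w, n - 1);
        printf("weight %.4f: P(y s-a-0) = %.3e (about %.0f vectors), "
               "P(input s-a-1) = %.3e (about %.0f vectors)\n",
               w, p_y_sa0, ceil(1.0 / p_y_sa0),
               p_in_sa1, ceil(1.0 / p_in_sa1));
    }
    return 0;
}
```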

1.3.5 Reseeding of Linear Feedback Shift Registers

An efficient test generation scheme is expected to guarantee a very high fault coverage while minimizing test application time and test data storage requirements. Unfortunately, in general, due to inherent limitations of pseudo-random or weighted-random patterns, schemes based on these vectors may not be able to detect some faults in some circuits in a cost-effective manner. In such a case, a mixed-mode generation scheme appears to be an attractive choice. It uses pseudo-random patterns to cover easy-to-test faults and, subsequently, deterministic patterns to target the remaining hard-to-test faults. As opposed to other approaches, this technique allows different trade-offs between test data storage and test application time by varying the relative number of deterministic and pseudo-random patterns. However, the overall efficiency of a BIST scheme resting on mixed-mode generators strongly depends on the methods employed to reduce the amount of test data.

There are two main approaches to reduce the quantity of test vectors: the reduction of the number of deterministic patterns by using dynamic compaction algorithms that target several single faults with a single pattern, and the compression of deterministic test cubes by exploiting the fact that they frequently feature a large number of unspecified positions. One of the methods used to compress test cubes is based on the reseeding of the LFSRs and was originally proposed in [96]. The following example elaborates on the proposed idea.

EXAMPLE 1.4 Consider a k-bit LFSR represented by its feedback polynomial h(x). The output sequence {a_i}, i ≥ 0, is completely determined by the feedback polynomial h(x) and the seed vector (a_0, ..., a_{k-1}). Applying the feedback equations, a symbolic expression for each a_i in the variables a_0, ..., a_{k-1} is obtained. Let the polynomial be of the form h(x) = x^3 + x^2 + 1. If the LFSR is to generate a test cube {xx1x01x}, where x denotes a "don't care" condition, then a corresponding seed can be determined by solving the following system of equations, in which every a_i, i ≥ 3, stands for its symbolic expression in the seed variables a_0, a_1, a_2:

a_0 = x
a_1 = x
a_2 = 1
a_3 = x
a_4 = 0
a_5 = 1
a_6 = x
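As a companion to Example 1.4, the C sketch below finds a seed for the test cube xx1x01x by brute force: it simulates every nonzero seed of the 3-bit LFSR and keeps those whose output sequence matches the specified bits. The recurrence a_{i+3} = a_{i+2} ⊕ a_i is one common reading of h(x) = x^3 + x^2 + 1; the structure assumed in the original example may differ, and for larger registers the seed would of course be computed by solving the linear system rather than by enumeration.

```c
#include <stdio.h>

#define K 3   /* LFSR length      */
#define L 7   /* test cube length */

int main(void) {
    /* Test cube xx1x01x: -1 marks a "don't care" position. */
    const int cube[L] = {-1, -1, 1, -1, 0, 1, -1};

    for (unsigned seed = 1; seed < (1u << K); seed++) {
        int a[L];
        /* Seed bits a_0, a_1, a_2 taken from the binary encoding of 'seed'. */
        for (int i = 0; i < K; i++) a[i] = (seed >> i) & 1;
        /* Expand the sequence with the recurrence a_{i+3} = a_{i+2} ^ a_i. */
        for (int i = K; i < L; i++) a[i] = a[i - 1] ^ a[i - 3];
        /* Check the specified positions of the cube. */
        int ok = 1;
        for (int i = 0; i < L; i++)
            if (cube[i] >= 0 && cube[i] != a[i]) ok = 0;
        if (ok) {
            printf("seed (a0,a1,a2) = (%d,%d,%d) produces ", a[0], a[1], a[2]);
            for (int i = 0; i < L; i++) printf("%d", a[i]);
            printf("\n");
        }
    }
    return 0;
}
```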


is loaded for each test group and has to be preserved during the decompression of each test cube within the group. Accordingly, the implementation of the decompressor may involve adding many extra flip-flops to avoid overwriting the content of the MP-LFSR during the decompression of a group of test patterns.

An alternative to concatenation was proposed in [189]. The underlying idea rests on the concept of variable-length seeds. Deterministic patterns are generated by an LFSR loaded with seeds whose lengths may be smaller than the size of the LFSR. Allowing such "shorter" seeds yields high encoding efficiency even for test cubes with a varying number of specified positions. The decompression hardware is loaded for each test pattern. Hence, it is possible to implement the decompressor by using scan flip-flops, as the state of the decompressor can be overwritten between applications of test cubes.

In order to decompress a set of deterministic patterns, some extra information has to be stored to specify the length of the seed. Usually, the test controller maintains the current length of the seed, and one extra bit is padded to each seed to indicate when the current length should be increased. Since only one bit is used, the length is increased with a constant increment d. Using a fixed increment may require some extra zeroes to be added to the seeds such that their length can always be expressed as b + id, i = 0, 1, 2, ..., where b is the length of the shortest seed. However, the value of the increment d can be chosen such that the number of extra zeroes is kept at a minimum.
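The effect of the increment d on storage is simple to quantify. The C sketch below pads a hypothetical, sorted list of seed lengths to the nearest value of the form b + id and reports the number of extra zeroes for a few candidate increments; the seed lengths themselves are made up for the sake of the example.

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical seed lengths (number of specified bits per test cube). */
    const int len[] = {12, 14, 14, 17, 19, 23, 23, 26, 31};
    const int count = sizeof len / sizeof len[0];
    const int b = len[0];                 /* shortest seed; the list is sorted */

    for (int d = 1; d <= 6; d++) {
        int extra = 0;
        for (int i = 0; i < count; i++) {
            /* Round len[i] up to the nearest b + j*d, j = 0, 1, 2, ... */
            int j = (len[i] - b + d - 1) / d;
            extra += (b + j * d) - len[i];
        }
        printf("increment d = %d: %d extra zeroes in total\n", d, extra);
    }
    return 0;
}
```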

A test data decompressor may consist of a k-bit LFSR feeding a single scan chain, as shown in Fig. 1.11. Deterministic patterns are generated by loading the LFSR with an s-bit seed (s < k) and applying enough cycles to fill the scan chain. The seed specifies the first s positions of the LFSR, while the other k - s positions are assumed to be reset. Hence, loading the seed can be performed by resetting the LFSR and shifting the s-bit seed serially, starting with its least significant bit. The content of the decompression LFSR is loaded for each pattern, and can be overwritten between the applications of patterns. Consequently, the LFSR can be implemented by using scan flip-flops.

Figure 1.11: Decompressor hardware

As can be seen in Fig. 1.11, the scheme requires only one extra feedback (controlled by means of an AND gate) from the scan chain and a multiplexer to allow the seed to be shifted in. During testing, there are two modes of operation: random and deterministic. In the random mode, the extra feedback from the scan chain is disabled and the (BIST) LFSR is used to generate random patterns. In the deterministic mode, the extra feedback from the scan is enabled, and two control signals are used to load the seed and perform decompression: signal Reset clears the decompression LFSR, while signal Load seed controls the multiplexer to allow the seeds to be shifted in. The seeds are loaded by first resetting the decompression LFSR and then shifting the seed variables serially through a multiplexer into the LFSR. Once loaded, the seeds are decompressed by exercising the decompression LFSR.
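The decompression step itself is nothing more than running the LFSR long enough to fill the scan chain. The C sketch below models this behavior for an illustrative 8-bit LFSR and a 16-bit scan chain: the register is reset, an s-bit seed occupies its low-order positions, and the bits shifted out while the register advances become the scan-chain contents. The feedback taps, scan length, and seed value are arbitrary assumptions, and real designs would add details such as phase shifters or multiple chains.

```c
#include <stdio.h>
#include <stdint.h>

#define K    8    /* LFSR length         */
#define S    5    /* seed length (s < k) */
#define SCAN 16   /* scan chain length   */

/* One step of a k-bit Fibonacci LFSR; 'taps' is a bit mask of tapped cells.
 * Returns the bit shifted out into the scan chain. */
static uint8_t lfsr_step(uint8_t *state, uint8_t taps) {
    uint8_t out = *state & 1u;
    uint8_t fb = 0, masked = *state & taps;
    while (masked) { fb ^= masked & 1u; masked >>= 1; }   /* parity of tapped bits */
    *state = (uint8_t)((*state >> 1) | (fb << (K - 1)));
    return out;
}

int main(void) {
    const uint8_t taps = 0xB8;      /* example tap mask, not a specific design */
    const uint8_t seed = 0x15;      /* 5-bit seed; remaining cells stay reset  */
    uint8_t state = seed & ((1u << S) - 1u);   /* reset LFSR, then load the seed */
    int scan[SCAN];

    for (int i = 0; i < SCAN; i++)             /* clock until the chain is full */
        scan[i] = lfsr_step(&state, taps);

    printf("scan chain contents: ");
    for (int i = 0; i < SCAN; i++) printf("%d", scan[i]);
    printf("\n");
    return 0;
}
```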

The seeds can be obtained from the predetermined test cubes. The calculation process is computationally simple and is equivalent to solving a system of s linear equations. This system has a solution with probability greater than 0.999999 provided a single-polynomial decompression LFSR of length greater than s + 20 is used, as was shown in [78]. On the other hand, only s + 4 bits are required to encode a test cube with s specified bits if an MP-LFSR of degree s with 16 polynomials is used [78]. Thus, the decompressor may comprise an additional decoder of polynomial IDs, which in turn is driven by extra test data provided with the actual seeds (Fig. 1.11). The encoding of the required feedback polynomial can also be done implicitly by grouping together the seeds for specific polynomials and using a "next bit" to indicate whether the feedback polynomial has to be changed. Thus, the number of bits required to encode a test cube with s specified bits can be limited to s + 1.
