Standardized Functional Verification – P3


… the faulty behavior before making the changes necessary to eliminate the faulty behavior completely.

Suitable coverage analysis enables the manager to reduce the risk of faulty behavior within practical limits imposed by available resources (people, compute assets, time, money). Reducing risk requires more resources or resources working more effectively.

1.6 Principles of Constrained Random Verification

Subjecting a target to overwhelming variability during simulation prior to tape-out is the core principle of constrained pseudo-random verification (typically referred to as CRV, dropping the “pseudo-” prefix). This technique is also referred to as “directed random” or as “focused random” verification, because the pseudo-random testing is preferentially directed or focused on functionality of interest. The CRV technique exercises the target by driving its inputs with signal values chosen pseudo-randomly but constrained such that they are meaningful to the target being exercised. This technique overcomes three challenges to verification:

• Extraordinary complexity: A complete verification of every single capability of an IC is an enormously time-consuming and insufficiently productive task. By “sampling” this vast functional space until no sample exhibits faulty behavior, the manager can gain sufficient confidence that the entire IC is free of bugs.

• Habits of human programmers: Programmers have habits (particular ways of writing loops or other common code sequences) that limit the novelty that their tests can present to the target being verified. A well-designed test generator will create tests with novelty unencumbered by programming habits.

• Inefficiency of human programmers: A well-designed test generator can generate hundreds of thousands of novel tests in the time that an engineer can write one single test.
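To make the technique concrete, here is a minimal sketch (in Python, since the book prescribes no particular language) of constrained pseudo-random stimulus generation. The interface fields, their legal ranges, and the function name are hypothetical, invented purely for illustration; a real environment would derive them from the target's specification.

```python
import random

# Hypothetical bus-write interface: the fields and constraints below are
# invented for illustration, not taken from any particular specification.
def generate_write_transaction(rng):
    address = rng.randrange(0, 2**32, 4)        # constrain to word-aligned addresses
    burst_len = rng.choice([1, 2, 4, 8])        # only burst lengths the target accepts
    data = [rng.getrandbits(32) for _ in range(burst_len)]
    return {"address": address, "burst_len": burst_len, "data": data}

if __name__ == "__main__":
    rng = random.Random(42)                     # seeded, so failures are reproducible
    # Each call yields a legal but unpredictable transaction, supplying far
    # more variability than hand-written directed tests.
    for _ in range(3):
        print(generate_write_transaction(rng))
```

Because value selection is seeded and constrained to legal values, a failing test can be replayed exactly while still presenting the target with stimulus a human author would be unlikely to write.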

Lacking the coding habits of a person, a test generator that chooses pseudo-randomly what to do next will wander about the functional space far and wide, exploring it much more thoroughly than the human-written tests. As these tests do their wandering (quite quickly, considering that ample verification resources providing over 1,000 MIPS will, figuratively speaking, leap about the cliffs and rifts of the functional space), they will now and then reveal the presence of a bug somewhere. The presence of a bug is revealed by faulty behavior – some violation of a known rule or guideline.

When all revisions have been made to the target to eliminate previously detected faulty behavior, the resulting target must survive some predetermined quantity of regression testing. This predetermined quantity of regression is determined such that it indicates sufficiently low risk of functional bugs and that it can be completed on a schedule that meets business needs. By subjecting the target to overwhelming variability during simulation prior to tape-out, the manager gains confidence that the resulting IC will function correctly.

1.7 Standardized Functional Verification

Often the target of functional verification is a single integrated circuit device. However, the standardized approach to verification accounts for digital systems comprising multiple devices and also IP (intellectual property – typically provided as RTL) from which distinctly different devices or systems can be derived. Likewise, sometimes the target is only an addition or modification to an existing device or system. This standardized approach is applicable to such projects as well.

The terms and techniques of this standardized approach to verification are applicable across all types of digital circuit – from I/O controllers to bus converters to microprocessors to fully integrated systems – and extensible to analog circuits and even software systems. It enables objective comparisons of targets, of verification results, and of engineering resources needed to execute the project. It also facilitates the movement of engineering talent among projects as the methodology (terminology, architecture, and analytical techniques) is standardized regardless of the target. It also provides a rigorous analytical foundation that facilitates the organization, collection, and analysis of coverage data.

Standardized verification consists of:

1. a comprehensive standard framework for analysis that serves the principles of constrained pseudo-random functional verification,
2. standard variables inherent in the design,
3. standard interpretation of the specification to define the variables and their ranges and the rules and guidelines that govern them,
4. the verification plan that details
5. what results to obtain, and
6. the standard measures and views of the results.


Figure 1.1 shows the relationships among these six elements.

What we really want to know is, does the RTL function as expected under all possible variations of its definitional and operational parameters? For that matter, what constitutes functional closure and how close are we to that elusive goal? Finally, what is the risk of faulty behavior? These same questions must be addressed for the prototype silicon device as well. Hence, this book.

Fig 1.1 Standardized Functional Verification

1.8 Summary

The use of the CRV technique for functional verification has proven to be extremely powerful as well as sufficiently economical to produce commercial IC products. However, costs remain high, both direct costs such as mask charges and indirect costs such as delayed time to market. Surely there must be more that we can do to drive down these costs and risk. Standardized functional verification will be seen to be extremely beneficial on both of these fronts, not only by increased efficiencies within verification teams, but also by more sophisticated verification tools as theory underlying functional closure is incorporated. Chapters 2 and 3 provide this necessary theory.

Of course, the prudent engineer or manager will recognize that some designs, or changes to existing designs, might be sufficiently simple that the costs associated with developing a pseudo-random test environment outweigh the benefits of such an approach. Many of the techniques described in the remaining chapters will not be useful for such simple designs.


Chapter 2 – Analytical Foundation

Solid software architecture grows from well-designed data structures, and this applies to verification software in particular. All digital systems share common, unifying characteristics, and it is these characteristics that form a single standard framework for the analysis of all digital systems. Data structures and algorithms that exploit this common framework yield architectures that are extensible and reusable. In addition, a common framework for verification greatly enables the sharing and moving of resources among projects rapidly and efficiently. And, most importantly, it provides the basis for data-driven risk assessment.

In this chapter we will learn a method for interpreting natural-language specification documents into a standard set of variables that is applicable to all possible digital systems.

2.1 A Note on Terminology

Every effort has been made to create a comprehensive, concise, and unambiguous nomenclature without inventing new terms. Contemporary usage of terms in verification varies widely, not only between corporations but also within single organizations. This leads to frequent confusion and misunderstanding and can impede efforts to meet aggressive goals. In addition, certain terms are greatly overused within the industry, with some terms having so many different meanings that they become obstacles to clear communications rather than vehicles of technically precise meaning. Where a given term may differ from other terms used in the industry, clarification is provided in footnotes.

2.2 DUTs, DUVs, and Targets

The terms DUT (Device Under Test) or DUV (Device Under Verification) are often used in the context of functional verification. However, DUT refers more precisely to an individual integrated circuit device undergoing test for manufacturing defects. And, the term DUV, while more precise, is used less frequently than DUT when referring to that which must be verified. Moreover, often the thing to be verified does not include all of the logic in a given IC device, such as the pins of a new device or when only a portion of an existing device is undergoing verification for certain changes to eliminate bugs or add new capabilities.

This book uses the term target to refer to the precisely bounded logic that must be verified. CRV is aimed at this target specifically. A target is defined very specifically such that there is no ambiguity as to that which is to be verified, and that which is presumed to be verified already. Are the pins and their control logic included in the target or not? Is the IP integrated into the new IC included in the target or not? Does the target comprise only the enhancements to some existing IC or does the target include the existing logic as well? Does the target consist of RTL that may be used to generate many different devices, such as is the case for commercial IP (intellectual property)? The answers to these questions establish the precise boundary between the target and everything else.

In addition, as we shall see later, it is useful to regard each clock domain of a given target as a separate sub-target for the purposes of coverage analysis. The reason for this should be clear when considering that logic is evaluated for each clock cycle, and logic in a slower clock domain will not be exercised as much as the logic in a faster clock domain over the identical period of time, measured in seconds (or fractions thereof). A design running at 100 MHz with a 33 MHz PCI interface is an example of a target with two clock domains. The faster domain will experience about three times as much exercise as the slower domain for an average test.
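A quick back-of-the-envelope check of that ratio, using the figures from the paragraph above (the length of the simulated window is arbitrary):

```python
# Cycles of exercise each clock domain receives over the same wall-clock
# interval, using the example figures from the text.
core_mhz, pci_mhz = 100, 33
window_us = 10.0                           # arbitrary simulated interval

core_cycles = core_mhz * window_us         # 1 MHz equals 1 cycle per microsecond
pci_cycles = pci_mhz * window_us

print(f"core domain: {core_cycles:.0f} cycles")    # 1000 cycles
print(f"PCI domain:  {pci_cycles:.0f} cycles")     # 330 cycles
print(f"ratio: {core_cycles / pci_cycles:.2f}x")   # about 3x, as stated above
```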

Finally, the analysis in this book applies to synchronous digital systems. Asynchronous signals are normally synchronized on entry to the device, and signals crossing clock domain boundaries are synchronized into the destination domain before being used.

2.3 Linear Algebra for Digital System Verification

A complete analysis of any digital system is based on the following premise:

From the standpoint of functional verification, the functionality of a digital system, including both the target of verification as well as its surrounding environment, is characterized completely by the variability inherent in the definition of the system.


An exhaustive list of these variables, their individual ranges, and the relationships among the variables is essential to ensuring that the verification software can exercise the system thoroughly. In addition to exercising the digital system in a pseudo-random manner, certain special cases, which may not appear to lend themselves to exercising pseudo-randomly, must also be identified. And finally, in addition to the variables and special cases, the specification for the digital system will also define (explicitly or implicitly) rules and guidelines for which the responses of the target during its operation must be monitored.

The variables related to the functionality of the digital system may be considered to lie within four subspaces that together constitute a basis for characterizing the entire range of variability that the digital system can experience. These four subspaces are:

• Connectivity

• Activation

• Conditions

• Stimuli & Responses

These four subspaces may be considered to constitute a “loosely orthogonal” system in a linear algebraic sense, and together they constitute the standard variables for interpreting a digital system for the purposes of functional verification.
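One way to reflect this framework directly in the verification software is to keep the four subspaces as explicit containers of variables. The sketch below assumes Python, and the class name and example variables are invented for illustration; they do not come from any particular specification.

```python
from dataclasses import dataclass, field

@dataclass
class StandardVariables:
    """The four 'loosely orthogonal' subspaces kept as buckets of variables."""
    connectivity: dict = field(default_factory=dict)        # how the target is wired into its context
    activation: dict = field(default_factory=dict)          # clocks, resets, and related controls
    conditions: dict = field(default_factory=dict)          # modes and settings governing operation
    stimuli_responses: dict = field(default_factory=dict)   # traffic driven in and behavior observed out

# Hypothetical entries: each variable maps to its legal range of values.
variables = StandardVariables(
    connectivity={"num_ports": [1, 2, 4]},
    activation={"core_clk_mhz": [100], "pci_clk_mhz": [33]},
    conditions={"parity_enabled": [True, False]},
    stimuli_responses={"burst_len": [1, 2, 4, 8]},
)
```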

The interpretation of the specifications for a verification project by different teams of engineers may indeed yield differing sets of variables. Other than differences in names of variables, the sets will be equivalent in a linear algebraic sense by way of some suitable mapping.

As a familiar example, we can describe the points in a 2-dimensional space with Cartesian coordinates (where the variables are the distance from the origin along orthogonal axes x and y) or with polar coordinates (where the variables are the radius r and the angle θ). The variables spanning the space are different but equivalent: a one-to-one mapping from each set of variables to the other set exists.
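The mapping behind this analogy, written out as a short sketch (a standard change of coordinates, nothing specific to verification):

```python
import math

def to_polar(x, y):
    return math.hypot(x, y), math.atan2(y, x)        # r, theta

def to_cartesian(r, theta):
    return r * math.cos(theta), r * math.sin(theta)

# Round trip: the two descriptions of the same point are interchangeable.
x, y = 3.0, 4.0
r, theta = to_polar(x, y)                            # r = 5.0, theta ~ 0.9273
x2, y2 = to_cartesian(r, theta)
assert math.isclose(x, x2) and math.isclose(y, y2)
```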

Similarly, interpreting the specification for a project by different teams will yield differing but equivalent sets of variables. But, if the sets of variables resulting from different teams truly do differ, then the specification is not sufficiently clear in its statement of requirements or, possibly, contains errors. In either case it needs revision.


Fig 2.1 Standard Framework for Interpretation

Each unique combination of values of these variables determines a single function point within the functional space. In the next chapter we shall see how the size of this function space can be measured using these standard variables.
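As a rough illustration of what “size” means here, the sketch below simply multiplies the sizes of a few hypothetical ranges. This naive product assumes every variable is independent; Chapter 3 develops the actual measure, which accounts for dependencies and condensation.

```python
from math import prod

# Hypothetical variables and the number of values each can take.
range_sizes = {
    "num_ports": 3,        # {1, 2, 4}
    "parity_enabled": 2,   # {off, on}
    "burst_len": 4,        # {1, 2, 4, 8}
    "address_region": 16,  # invented enumerated regions
}

# If every variable were independent, each unique combination of values
# would be one function point.
function_points = prod(range_sizes.values())
print(function_points)     # 3 * 2 * 4 * 16 = 384
```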

2.4 Standard Variables

All variability to which a verification target can be subjected can be classified among these four subspaces of functionality. The assignment of a given variable to one subspace or the other need not adhere rigidly to the definitions as given below. Instead, these subspaces are more usefully regarded as individual “buckets” which together contain all of the variables pertinent to the target’s behavior.

It is not so important that a variable be classified within one subspace or another, but rather that all variables be defined and suitably classified. Over the course of execution of a verification project, it may become more practical to reclassify one or more variables to enable implementation of better-architected verification software.

These precisely defined variables will in most cases be used to determine how to subject the target to variability – variability in the information it receives (stimuli), variability in the conditions governing its operation while receiving the information, variability in the particulars of the logical operation (as a consequence of its connectivity) performed on the information, and variability in the particular sequencing (as determined by the clocking) of the logical operations. As discussed earlier, subjecting a target to overwhelming variability during simulation is the core principle of constrained random verification.

The process of verification will also expose bugs in the target’s external context, and provisions must be in place to make changes as needed to achieve goals set for the target.

2.5 Ranges of Variables

The range of a variable may be contiguous (containing all possible values between some minimum and maximum value), piece-wise contiguous (containing two or more subsets of values), or enumerated (containing individual values representing some special meaning such as a command). Most variables are independent, and their ranges are subject to constrained random value selection. However, some variables have ranges that depend on the values of other, independent variables, in that those variables impose certain constraints on the range of values that may be assigned to the dependent variables. Their values can be chosen only after choosing values for the variables upon which their ranges depend.
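A minimal sketch of that ordering, with invented variables: the independent variable is chosen first, and the dependent variable's range is then constrained by the chosen value. The dependency itself is hypothetical, used only to show the order of choices.

```python
import random

rng = random.Random(7)

# Independent variable chosen first.
bus_width = rng.choice([32, 64])

# Hypothetical dependency: a narrower bus permits longer bursts (invented
# constraint, purely to illustrate the ordering of choices).
max_burst = 8 if bus_width == 32 else 4

# Dependent variable chosen only after its constraining variable is known.
burst_len = rng.choice([n for n in (1, 2, 4, 8) if n <= max_burst])

print(bus_width, burst_len)
```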

Ranges of nearly all variables will have inherent value boundaries – values which are special in that they determine the beginning or end of something, or cause some particular behavior to be manifest when that value is attained. Contiguous ranges will have at least two boundary values, the largest value and the smallest value. These two values, and perhaps the next one or two neighboring values, should be identified while defining the variable. Some intermediate values, such as high-water and low-water marks in queuing systems, must also be identified as boundary values.


Piece-wise contiguous variables will have multiple sets of boundary values, with each subrange in the overall range being regarded as a distinct range with its own largest and smallest values, etc.

Enumerated ranges (such as types of processor instructions or bus commands) may also have boundary values, but in a different sense. As will be seen in Chapter 3 in the discussion of condensation, various load and store instructions might be regarded as all lying on the single boundary value of “memory access instruction” for certain purposes, such as for coverage analysis.
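The following sketch shows one way boundary values might be collected from a range definition. The helper and the queue-depth example are hypothetical; the point is only that each (sub)range contributes its extremes and near-extremes, and that any special intermediate marks are added explicitly.

```python
def boundary_values(subranges, marks=()):
    """Collect boundary values: the extremes and their immediate neighbours
    for each contiguous sub-range, plus any explicitly named intermediate
    marks (for example, high-water and low-water marks)."""
    values = set(marks)
    for lo, hi in subranges:
        values.update({lo, lo + 1, hi - 1, hi})
    return sorted(values)

# Contiguous range: hypothetical queue depth 0..63 with a high-water mark at 48.
print(boundary_values([(0, 63)], marks=(48,)))       # [0, 1, 48, 62, 63]

# Piece-wise contiguous range: two sub-ranges, each with its own extremes.
print(boundary_values([(0, 15), (32, 63)]))          # [0, 1, 14, 15, 32, 33, 62, 63]
```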

2.6 Rules and Guidelines

Technical specifications, whether for some industry standard or for some particular device, define a number of requirements that must be met by any implementation. Each of these individual requirements constitutes a rule that must be obeyed by the target.

Specifications often include recommendations for how a given implementation should behave to ensure optimal performance. Each of these recommendations constitutes a guideline that should be followed by the target.

Each individual rule and guideline defined (whether explicitly or implicitly) within the various specification documents must be monitored by the verification software to ensure that the target’s behavior always obeys all rules and adheres as closely to the guidelines as project needs dictate.

Performance requirements (speed, throughput, latencies, etc.) will be described in terms of rules and guidelines.

2.6.1 Example – Rules and Guidelines

As an example of how to dissect rules and guidelines from a specification, consider the following few sentences from (say, section x.y of) a specification for the behavior of agents on a shared bus:

“When a read request has been asserted on READ, data must be returned to the requesting agent within at least 16 cycles. To meet the performance …”

Additionally, implicit in each rule or guideline will be one or more exercisable scenarios that will lead the target either to obey or to violate the rule or guideline. Exercising these scenarios thoroughly will reduce the risk of bugs in the target. As will be seen in Chapter 3, these scenarios will map to individual coverage models (Lachish et al. 2002).
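As a sketch of what “monitored by the verification software” can look like for the quoted rule, here is a simplified cycle-based checker in Python. In practice this would typically be a temporal assertion or a scoreboard check in the testbench language; the method names, the request-id bookkeeping, and the reading of the 16-cycle limit are all assumptions made for illustration.

```python
class ReadLatencyMonitor:
    """Checks one reading of the rule in section x.y: data must be returned
    to the requesting agent within 16 cycles of its read request."""
    LIMIT = 16

    def __init__(self):
        self.pending = {}                    # request id -> cycle it was observed

    def on_cycle(self, cycle, new_request=None, data_returned=None):
        if new_request is not None:          # READ asserted for a new request
            self.pending[new_request] = cycle
        if data_returned is not None:        # data came back for a request
            self.pending.pop(data_returned, None)
        for req, start in self.pending.items():
            if cycle - start > self.LIMIT:   # rule violated: flag the faulty behavior
                raise AssertionError(
                    f"rule x.y violated: request {req} still unanswered "
                    f"{cycle - start} cycles after it was issued")

# Example use inside a testbench loop (driving code omitted):
#   monitor = ReadLatencyMonitor()
#   monitor.on_cycle(cycle=0, new_request="r1")
#   monitor.on_cycle(cycle=17)               # raises: r1 exceeded the limit
```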
