Testing Computer Software, Part 9



• Dollars. Management can try to add features more quickly, rather than dropping them, by spending money. However, because new features haven't yet been fully specified or designed, the project's designers may become bottlenecks.

• Release date. The power of the evolutionary approach is best seen in the degree of schedule control that it gives management. The project manager can always postpone the release date. Often, though, he will choose to ship the product on time rather than adding those last 15 features.

Because the program is always stabilized before the next wave of features is added, a project manager who decides to stop adding features can probably finish the project within a few weeks.

• Reliability. The reliability of evolutionary products is high. Because the product is tested and stabilized as each new piece is added, most of the project's testing budget is spent before the end of the schedule. If management says stop adding features and get the thing ready to ship, it won't take much additional final testing before the product is ready for release. The incentive to skimp on testing and release something buggy is gone.

The testing cost under the evolutionary model threatens to be large because testing starts early. However, much of that cost is recouped because there isn't so much testing at the end. Further, all those features that weren't specified or coded weren't the subject of test plans or tests either.

One way to make a mess of an evolutionary project is to have the programmers keep writing fresh code rather than fixing the problems discovered during testing. This is very tempting when the project is running behind schedule, but the appearance of progress is deceiving. The problems will take longer to fix and retest later. The project manager will face last-minute tradeoffs between reliability and release date.

Another risk of adopting an evolutionary approach is that an inexperienced project manager may imagine that he doesn't have to do much initial planning, because the product will evolve over time. This is a big mistake. If the core is built inflexibly, it will need extensive reworking before key features can be added. Fundamental work might have to be redone many times. Also, at what should be the end of the project, the marketing group might realize that they forgot to identify some features as critical, so others were done instead. Now the product has to wait until those features are added.

Marketing and sales staff sometimes don't understand this approach well enough to realize that when the project manager says that some features might be in the product or might not, no one should sell the product as if it has a given feature until that feature has been added.

The test manager has another way to ruin the project. If he doesn't assign test resources to the project early (perhaps because they're still assigned to some other project that's behind schedule), then the core program doesn't go through the full test cycle, nor do the incremental changes. No one sees this slippage on the schedule until the testing staff are finally assigned, perhaps near what should be the very end of the project. At that point, the testers are in the traditional system test situation, with a big pile of untested code and very little testing time left in the schedule. The result looks like a badly managed waterfall project: testing costs more, the schedule falls apart, and lots of bugs are missed.

The Testing Group probably has more ability to destroy an evolutionary project than anyone else. Be mindful of this risk.


A DEVELOPMENT MODEL'S IMPLICATIONS FOR TESTING

Once you understand the project manager's development model from his point of view, think about its implications for the testing effort. Here are some implications of the waterfall method:

• Review the user interface early, perhaps by testing prototypes. If the user interface is specified and approved before the code is written, usability tests near the end of the project won't have much effect.

• Start writing the test plan as early as possible, because this will force you to critically analyze the specifications (or drafts of them). Any risks they pose for testability of the project must be raised early.

Figure caption: Theoretical models are important, but you must also be effective when managers throw out the rule book. Think about this figure as you read the tidy sequence of tasks in this chapter.


• You cannot start testing until late in the project. The odds are that you will start later than the initial schedule projects, because design and coding will fall behind schedule. Plan to spend extra time crafting your staffing plan. For example, you want to unleash a trained group of testers as soon as the product is ready for testing, but what should they do if the program is so unstable that you kick it back to the programmers for rework almost immediately? You should anticipate this and have alternate work ready; perhaps you can use the time to automate some tests or create additional test data files.

As another example, develop a strategy for incorporating new people at the last minute. You can delegate many simple but time-consuming tasks to new staff, making maximum use of your experienced testers during the limited time available. You and your testing team can make it easy to hand off these tasks by identifying them and organizing your work accordingly. If you don't do this planning in advance, you'll find it much harder to incorporate new testers at the end of a behind-schedule project, when no one has time to think.

• By the time you start testing, your work is on the critical path. Coding is complete, or nearly so. As soon as you stop finding bugs, the program ships. On a project large enough for a few testers, consider dedicating one to guerrilla raids and other relatively unstructured testing. This person's goal is to find one or two bugs each week that are so obviously bad that no one would ship the project with these problems in it. The time it takes to fix these bugs is time the rest of the test group can use to plod through the program more systematically. Projects that need this tactic for buying testing time are no fun, but this tactic might buy you several months of testing time, slipping the schedule week after week, one week at a time. (Warning: never delay reporting a critical bug. It might be tempting to keep a bug in reserve, to be reported when you can't find anything else that week. This buys you that one last week of testing (the week needed to fix this bug). Don't try this unless you like the idea of being hated, fired, and blacklisted. Don't do it. Don't think about it. And never, never joke about it.)

In contrast, consider these (quite different) implications of the evolutionary method:

• Plan to staff the project with testers very early, starting reliability testing as soon as the program reaches its first level of functionality.

• Plan waves of usability tests as the project grows more complex. You can probably argue successfully for design changes in recently added code.

• Plan to write the test plan as you go, rather than as a big effort before testing.

• Plan to do your most powerful testing as early as possible. Be careful about gambling that you'll have time later to conduct tests that you know are critical. The project manager might stop development at any time. We don't mean "any time after the planned release date." We mean any time; for example, two of us shipped a product three months early by stopping adding features earlier than planned.

QUALITY-RELATED COSTS

Quality-related costs include the costs of preventing, searching for, and coping with product errors and failures. From a business point of view, you have a testing budget because the cost of testing is lower than the cost of dealing with customer complaints about undiscovered bugs. If you can show that the company will save money if you conduct a certain type of test at a certain point in the project, you'll probably get funding to do that work.

As you envision ways to tailor your team's work to the project manager's development model, you'll want to propose testing assignments that might be unusual in your company. For example, your company might not be used to assigning testers to review the product's external design, or to giving them enough time to do the job competently. Other tasks that might require justification include analyzing customer support call records, automating test cases, delegating some types of last-minute work to newly added junior testers or administrative staff, testing compatibility with a broader range of printers, or conducting an outside beta test.

In each case, you can dramatically strengthen your argument with data that shows that this work will prevent significantly larger expenses later.


The more you know about your company's quality-related expenditures, the better you'll do at evaluating and advocating new testing procedures.

Quality-related costs are normally discussed in terms of four categories (Campanella, 1990):

• Prevention costs: everything the company spends to prevent software and documentation errors.

• Appraisal costs: all testing costs and the costs of everything else the company does to look for errors.

• Internal failure costs: all costs of coping with errors discovered during development and testing.

• External failure costs: all costs of coping with errors discovered, typically by your customers, after the product is released.

Figure 13.2 shows examples of the different types of costs. Feigenbaum (1991) estimates that the typical company spends 5 to 10 cents of every quality cost dollar on prevention, another 20 to 25 cents on appraisal, and the remaining 65 to 75 cents on internal and external failure costs.

Quality Assurance (QA) groups often systematically collect quality-related cost information. The typical software testing group has a narrower focus and mandate than a QA group. You will probably find much of this information hard to obtain. Don't despair. Even if you can't learn everything, you can often learn a lot just by pooling data with your company's technical support group. As to the rest of the data, make (but be ready to explain) educated guesses as needed. What you collect or reasonably estimate will often be sufficient to justify a proposal to reduce failure costs by increasing prevention or testing costs.
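To make Feigenbaum's proportions concrete, here is a minimal Python sketch that splits a hypothetical annual quality-cost total across the four categories. The $600,000 figure is invented for illustration; only the percentage ranges come from the estimate above.

```python
# Split a hypothetical annual quality-cost total using Feigenbaum's rough
# estimate: 5-10% prevention, 20-25% appraisal, 65-75% failure costs.
total_quality_cost = 600_000   # illustrative figure, not from the book

ranges = {
    "Prevention":                  (0.05, 0.10),
    "Appraisal (testing)":         (0.20, 0.25),
    "Internal + external failure": (0.65, 0.75),
}

for category, (low, high) in ranges.items():
    print(f"{category:28} ${low * total_quality_cost:>9,.0f} "
          f"to ${high * total_quality_cost:>9,.0f}")
# Prevention                   $   30,000 to $   60,000
# Appraisal (testing)          $  120,000 to $  150,000
# Internal + external failure  $  390,000 to $  450,000
```

Even rough numbers like these show why a proposal that trades a modest increase in appraisal spending for a reduction in failure costs is usually easy to defend.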

THE DEVELOPMENT TIME LINE

The typical project manager will publish schedules that contain a series of milestones, the most common of which are called "alpha" and "beta." The exact definitions vary (widely) from company to company, but in essence, alpha software is preliminary, buggy, but usable, while beta software is almost complete. Figure 13.3 is an example of a project time line, showing the milestones.

This milestone-based approach is pragmatic. It recognizes that programming, testing, manual writing, and many other activities are done in parallel, and it maps them all onto the same time line. In some companies, requirements writing, prototyping, specifying, etc., might be mapped in parallel with all of these tasks (e.g., under an evolutionary model), whereas in others this work might be considered preliminary and be put earlier on the line. But however the work is to be done in theory, the approach in practice involves a set of agreements: these people will get this work done by this time, and these people will be doing these things with the results of that work.

Figures 13.4 (13.4a, 13.4b, 13.4c, and 13.4d) are our rendition of a milestone chart. We stress that this is only an example, and that no company that we know of conforms exactly to this illustration. Every company defines its milestones its own way and organizes the work in its own way, but these ways are often reasonably close to what we show here. We think this is a useful way to think about the ordering of the tasks.

For the rest of this chapter, we will explore this table and map your testing and test planning strategy and priorities onto the time line. (Note: many of these terms were defined and discussed in Chapter 3.)

PRODUCT DESIGN

This is the start of the project, when all parties concerned figure out what this product should be. This phase includes requirements analysis and internal and external design. For thoughtful discussions, see DeMarco (1979), Gause & Weinberg (1989), Gilb (1988), Gould & Lewis (1985), Ould (1990), Weinberg (1982), Yourdon (1975), and Yourdon & Constantine (1979).

PROGRAMMING ACTIVITIES DURING PRODUCT DESIGN

If requirements documents, proposals, and contracts are written, they are written during this phase. Coding begins at the end of this phase. In waterfall projects, the internal and external specifications are written during this phase. Detailed internal and external design might be done in this phase or later.

MARKETING ACTIVITIES DURING PRODUCT DESIGN

The marketing department does research during this phase, to help define the product and to communicate the vision of the product to management and to the programmers.

Marketing might run ideas or very early prototypes through small groups of customers, in focus groups. They might also survey customers of this product (or competitors), asking what features these people like in this type of product, how they perceive competitors' quality (and why), and how much they would pay for a product they liked in this category. You may be asked to help with some of these early consumer tests.

DOCUMENTATION ACTIVITIES DURING PRODUCT DESIGN

Some documentation groups help draft specifications during this phase.

TESTING ACTIVITIES DURING PRODUCT DESIGN

If you're lucky, you might be asked to review the product design documents as they are written. This way, you'll learn about the product and be prepared for some early test planning.

See Chapter 3, "Testing during the design stages," for thoughts on design flaws that you can find in the various specifying documents. In practice, many testers don't make many useful comments during this review.


[...] to justify setting aside the time you need.

Prepare for test automation

Often, the most important thing you can do during the design period is to identify testing support code that you want in the program (or with it). You might not get this support, but if you ask early, and explain the value of the individual tools clearly, you stand a good chance of getting some of them. Here are some examples for your wish list:

• Printer automation. You want command-line control for test automation. The program will probably need the following parameters: the name of the input file that the program is to print, the name of the output file if printing is to be redirected to a file, the name of the printer (possibly including the printer driver), and setup information such as the initial font or printer output resolution. The program should exit after printing. Given this facility, you can build a batch file that feeds a file to the program, selects the printer, tells it to print the file to disk, compares the file to one saved previously, prints the result, and then starts the next test; a sketch of such a harness appears after this list. See "Some Tips On Automated Testing" in Chapter 8 for further discussion.

• Memory meter. You want to be able to press a key at any point in the program and get a display or printout of the program's memory usage. Something as simple as the number of bytes free is handy. This might be more useful if it also lists the sizes of the five or ten largest contiguous blocks of memory. It is amazing (testers never believe it until they try it) how many new tests you will invent and how many hard-to-reproduce bugs become easily reproducible once you know how much memory is available before you do something, while you're doing it, and after you stop (or after you erase what you did).

• Cheat keys. These take you to a specific place in the program. This is essential for testing arcade games and role-playing games, and it might be handy in other programs.

• Screen dump. You need an easy way to copy everything on the screen to a printer or file (or serial port, if you analyze the data in real time on another machine). The screen dump utility should capture cursor and mouse position. The tester will want to capture the screen put up by the program as it crashed, so the screen dump utility should be available even when the program under test crashes. If the computer you test on doesn't have excellent programs for this, ask the programmers to make one.
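Here is a minimal sketch of the print-to-disk comparison harness described in the printer automation item, written in Python rather than as a batch file. Every flag, file name, and directory in it (the --print, --printer, and --print-to-file switches, the ourapp.exe name, the baselines folder) is a made-up assumption; substitute whatever command-line interface the programmers actually provide.

```python
import subprocess
import filecmp
from pathlib import Path

# Hypothetical program under test and directory layout; adjust to your project.
APP = "ourapp.exe"                  # program under test (assumed name)
CASES_DIR = Path("print_cases")     # input documents to print
BASELINE_DIR = Path("baselines")    # previously reviewed print-to-disk output
OUTPUT_DIR = Path("output")         # fresh output from this run
PRINTER = "LaserJet4"               # printer / driver name to select

def run_case(case: Path) -> bool:
    """Print one document to disk and compare it with the saved baseline."""
    out_file = OUTPUT_DIR / (case.stem + ".prn")
    baseline = BASELINE_DIR / (case.stem + ".prn")

    # Assumed switches: --print <file> --printer <name> --print-to-file <out>.
    # The real program may use different flags, or need a wrapper script.
    result = subprocess.run(
        [APP, "--print", str(case), "--printer", PRINTER,
         "--print-to-file", str(out_file)],
        timeout=300,
    )
    if result.returncode != 0:
        print(f"{case.name}: program exited with code {result.returncode}")
        return False
    if not baseline.exists():
        print(f"{case.name}: no baseline; review {out_file} and save it")
        return False
    same = filecmp.cmp(out_file, baseline, shallow=False)
    print(f"{case.name}: {'PASS' if same else 'FAIL (output differs)'}")
    return same

if __name__ == "__main__":
    OUTPUT_DIR.mkdir(exist_ok=True)
    failures = [c.name for c in sorted(CASES_DIR.iterdir()) if not run_case(c)]
    print(f"{len(failures)} case(s) need attention: {failures}")
```

The same loop works for any output that can be redirected to a file: capture it once, review it by hand, save it as the baseline, and let the script flag differences from then on.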

Create contractual acceptance tests

If your company is custom-developing a product, it is extremely important to include an acceptance test in the sales contract. You and your company's customer should define a set of tests that the customer will run when development is complete. The customer must agree that if the program passes the tests, the contract has been fulfilled and any further changes or bug fixes are to be paid for separately. The test must be so clearly stated that you can execute the test, and confirm that the product is ready for delivery, before sending the product to the customer. It is important that you (Testing) work with the customer to define this test. The risk is too high that a non-tester will agree to something that is too vague, takes much too long to run, or guarantees too high a level of quality, given the contract price. Talk to your company's lawyer about this.

Analyze the stability of acquisitions

If your company is considering acquiring someone else's product, you should conduct an initial stability test (Chapter 3, "Black box testing: The usual black box sequence of events") before anyone signs the papers. So many disasters could have been avoided by companies if they had only done some preliminary testing before a bad acquisition.

Analyze customer data

Quality is a complex concept. A high-quality product has the features that customers want and is also free from deficiencies (e.g., bugs) (Juran, 1989). Customer feedback will give you insight into a product's quality.

If this is a second or later release of a product, start analyzing customer data as early as possible during development. Customer data comes from the following sources:

• Product reviews in newspapers, magazines, and user group newsletters.

• Letters from customers. Read every letter, or a large sample of them.

• Phone calls from customers. If your company is so large that there are many, many calls, ask your technical support group to track the types of complaints coming in, and the number of calls associated with each. Unless you're working with an exceptionally sophisticated group, the phone support staff won't be able to track more than about 15 categories of complaints. Work with them to develop the 15 most useful categories.

• Focus groups and other interviews of selected customers. Marketing departments often interview small groups of customers. They use some of these meetings to get reactions to new ideas, and other meetings to get feedback on the current product. If possible, attend all meetings in which customers describe their (positive and negative) experiences with the current version of the product. Comparable information often comes when you meet with user groups.

• Telephone surveys. Marketing might call 50 or 100 customers to determine what they'd like in a next version and what disappointed them about the current version. You might call some registered customers and some complaining customers to ask similar questions, but with an emphasis on the product's reliability.

Each of these sources will give you different, but important, indications of the product's quality. For example, reviewers complain about missing features that few of your current customers care about. Don't dismiss this: a year from now your customers might care a lot about these features. Reviewers also miss (or don't mention) egregious bugs. (Most reviewers mention relatively few bugs.) In contrast, callers and letter writers will point out many bugs but far fewer absent capabilities.

You have the following objectives in collecting these data:

• Identify bugs that you missed. You couldn't find every bug when you tested the last program. Customer data will reveal bugs you missed. Use these to help enhance your test plan.


• Assess the severity of the bugs you missed or that were deferred. Identify the 10 or 15 problems that cost the most customer support time and money. If you can attach a support cost to each problem, all the better. Project managers will schedule and fix old, expensive problems. Executives will supplement the programming budget, if necessary, to get rid of these problems.

• Develop an empirical basis for evaluating deferred bugs. Many programs include dozens of deferred bugs that no one ever complains about. By comparing your knowledge of the deferred bug list and the customer complaint list, you can predict which bugs will cause customer reaction. When the project manager defers a bug that you think will generate customer calls, approach her with the customer call records and explain the costs of similar deferrals last time.

• Justify the expense of necessary further testing. For example, suppose it would cost $20,000 to equip your lab with three particularly suspect brands of computers, so that you can expand your compatibility testing. If no one complains of incompatibility with these brands, don't bother setting up this lab. But if your technical support group spends $40,000 per year answering calls from irate owners of these machines, you fully justify the lab and testing expense by holding out the promise of eliminating this customer service expense; the short calculation after this list shows how quickly the lab pays for itself.
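The payback arithmetic for the lab example, using the figures from the last bullet; it assumes, as the text does, that the expanded testing eliminates the related support calls.

```python
lab_cost = 20_000                 # one-time cost to equip the compatibility lab
support_cost_per_year = 40_000    # current annual cost of handling these calls

# Assuming the expanded testing eliminates the calls entirely:
payback_months = 12 * lab_cost / support_cost_per_year
print(f"The lab pays for itself in about {payback_months:.0f} months")      # ~6 months
print(f"Net saving in the first year: ${support_cost_per_year - lab_cost:,}")  # $20,000
```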

Project managers and marketing staff are also interested in these data and may have further uses for them. They may be looking for design or feature suggestions, or for information about the changing demographics of the user base. We aren't considering these types of uses here, but in practice you may be able to do your research as part of a joint study.

We recommend that you set up a detailed, multi-page table of problems and customer requests. Count how many times each complaint or request is made. Keep a separate table for each type of data (reviewer, letter, call, etc.). In setting up the tables, make a fast pass through the letters, reviews, and printed survey results to identify problems to list on the tables, and leave room on them for things you missed. For design issues, track how often people said that they liked the way a feature works, along with the number of people who said they didn't.

You'll almost certainly find that most of the complaints come from a relatively small number of problems. This is the well-known Pareto Principle (Gryna, 1988; McCabe & Schulmeyer, 1987). After you've collected the data, sort the individual complaints in frequency order, from most often to least often mentioned. (Don't be surprised if the ordering differs significantly across data sources. Reviewers complain about different things than letter writers, and both complain about different things than randomly sampled customers.) A table showing the most frequent complaints, in order, is an effective way to present the information. Or, more classically, use a Pareto chart (Walton's 1986 explanation is the simplest we've seen).

If possible, also show each problem's support cost (cost of reading and answering letters, average cost per call, average customer call length for each type of problem, etc.).
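As a rough sketch of how such a table can be assembled, here is a short Python fragment that counts complaints by category, sorts them in Pareto order, and attaches an estimated support cost. The categories, sources, and per-contact costs are invented placeholders; the point is the counting-and-sorting structure, not the numbers.

```python
from collections import Counter

# Hypothetical complaint log: one (source, category) pair per customer contact.
# In practice you would build this from the letters, call records, and reviews,
# assigning each contact to one of the ~15 categories agreed with support.
complaints = [
    ("call", "printer output garbled"),
    ("call", "crash when saving"),
    ("letter", "crash when saving"),
    ("review", "missing mail-merge feature"),
    ("call", "printer output garbled"),
    # ... many more entries
]

# Assumed average handling cost per contact type; substitute your own figures.
COST_PER_CONTACT = {"call": 12.00, "letter": 8.00, "review": 0.00}

counts = Counter(category for _, category in complaints)
costs = Counter()
for source, category in complaints:
    costs[category] += COST_PER_CONTACT.get(source, 0.0)

total = sum(counts.values())
running = 0
print(f"{'Complaint':35} {'Count':>5} {'Cum %':>6} {'Support $':>10}")
for category, n in counts.most_common():   # Pareto order: most frequent first
    running += n
    print(f"{category:35} {n:>5} {100 * running / total:>5.0f}% "
          f"{costs[category]:>10.2f}")
```

Sorted this way, the table usually makes the Pareto pattern obvious: the first few rows account for most of the contacts and most of the support cost.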

Review the user interface for consistency

Some, but not all, testers are talented at spotting user interface inconsistencies early in a product's design. (Similarly for some technical writers.) If your test group includes someone with this skill, get that person into the design reviews as early as possible, and give her enough time to review the product so she can do this task well. This is a very valuable skill.


Negotiate early testing milestones

A good development team will want you to start testing before all the code is written. A disorganized team will waste a tremendous amount of your time in early testing. Negotiate some structure with the project manager.

A project manager is often delighted to prioritize programming tasks in order to make your early testing effective. With just a little encouragement from you, she may reorganize tasks in order to finish the highest-risk tasks, or the tasks that will take the most testing, first.

For example, if printing quality and multi-printer compatibility are high-risk areas for a particular product, decide with the project manager how to get this into Testing early and what parts of the printer code (which printers, which reports or other types of output) will be finished when the first versions come in.

Other early preparation for testing

Start setting up relationships with vendors of equipment that your program must be compatible with (see Chapter 8, "Printer testing: Setting up a printer test lab"). The sooner you start, the easier it will be to get loaners or free equipment.

Think about reviewing competitors' products. The lead tester for this project should be familiar with the competition and with the types of bugs common to this category of software, on this type of hardware. When will she get this expertise?

Start looking for beta testers. Good ones are hard to find. Beware that some capable volunteers will also be beta testers of competitive products. If you use them, they will tell your competitors about your products' strengths and progress months in advance. Others will give copies of the beta software to their friends or post a copy on public bulletin boards. We are speaking from experience and are not kidding about these risks. You must allow lead time to appraise potential beta testers.

FRAGMENTS CODED: FIRST FUNCTIONALITY

The program may work in only one video mode. It may print to only one type of printer. It lacks most features. It's full of bugs. In an evolutionary development group, the first functionality milestone is reached when the program's core functionality (with almost no features) is complete.

PROGRAMMING ACTIVITIES AFTER FIRST FUNCTIONALITY

The programmers keep specifying, designing (unless they're waterfallers who've done all this already, or programmer-anarchists who code first and specify later), and programming.
