
Document Information

Title: A Guide to Selecting Software Measures and Metrics
Author: Capers Jones
Publisher: CRC Press, Taylor & Francis Group
Type: Book
Year published: 2017
City: Boca Raton
Pages: 373
File size: 2.22 MB



A Guide to Selecting Software Measures and Metrics

Capers Jones

Boca Raton, FL 33487-2742

© 2017 by Taylor & Francis Group, LLC

CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works

Printed on acid-free paper

International Standard Book Number-13: 978-1-1380-3307-8 (Hardback)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged, please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at

http://www.taylorandfrancis.com

and the CRC Press Web site at

http://www.crcpress.com


Contents

Preface vii

Acknowledgments xi

About the Author xiii

1 Introduction 1

2 Variations in Software Activities by Type of Software 17

3 Variations in Software Development Activities by Type of Software 29

4 Variations in Occupation Groups, Staff Size, Team Experience 35

5 Variations due to Inaccurate Software Metrics That Distort Reality 45

6 Variations in Measuring Agile and CMMI Development 51

7 Variations among 60 Development Methodologies 59

8 Variations in Software Programming Languages 63

9 Variations in Software Reuse from 0% to 90% 69

10 Variations due to Project, Phase, and Activity Measurements 77

11 Variations in Burden Rates or Overhead Costs 83

12 Variations in Costs by Industry 87

13 Variations in Costs by Occupation Group 93

14 Variations in Work Habits and Unpaid Overtime 97

15 Variations in Functional and Nonfunctional Requirements 105


16 Variations in Software Quality Results 115

Missing Software Defect Data 116

Software Defect Removal Efficiency 117

Money Spent on Software Bug Removal 119

Wasted Time by Software Engineers due to Poor Quality 121

Bad Fixes or New Bugs in Bug Repairs 121

Bad-Test Cases (An Invisible Problem) 122

Error-Prone Modules with High Numbers of Bugs 122

Limited Scopes of Software Quality Companies 123

Lack of Empirical Data for ISO Quality Standards 134

Poor Test Case Design 135

Best Software Quality Metrics 135

Worst Software Quality Metrics 136

Why Cost per Defect Distorts Reality 137

Case A: Poor Quality 137

Case B: Good Quality 137

Case C: Zero Defects 137

Be Cautious of Technical Debt 139

The SEI CMMI Helps Defense Software Quality 139

Software Cost Drivers and Poor Quality 139

Software Quality by Application Size 140

17 Variations in Pattern-Based Early Sizing 147

18 Gaps and Errors in When Projects Start and When Do They End? 157

19 Gaps and Errors in Measuring Software Quality 165

Measuring the Cost of Quality 179

20 Gaps and Errors due to Multiple Metrics without Conversion Rules 221

21 Gaps and Errors in Tools, Methodologies, Languages 227

Appendix 1: Alphabetical Discussion of Metrics and Measures 233

Appendix 2: Twenty-Five Software Engineering Targets from 2016 through 2021 333

Suggested Readings on Software Measures and Metric Issues 343

Summary and Conclusions on Measures and Metrics 349

Index 351


Preface

This is my 16th book overall and my second book on software measurement.

My first measurement book was Applied Software Measurement, which was published by McGraw-Hill in 1991, had a second edition in 1996, and a third edition in 2008.

The reason I decided on a new book on measurement instead of the fourth edition of my older book is that this new book has a different vantage point. The first book was a kind of tutorial on software measurements with practical advice in getting started and advice on how to produce useful reports for management and clients.

This new book is not a tutorial on measurement, but rather a critique of a number of bad measurement practices, hazardous metrics, and huge gaps and omissions in the software literature that leave major topics uncovered and unexamined. In fact, the completeness of software historical data among more than 100 companies and 20 government groups is only about 37%.

In my regular professional work, I help clients collect benchmark data. In doing this, I have noticed major gaps and omissions that need to be corrected if the data are going to be useful for comparisons or estimating future projects.

Among the more serious gaps are leaks from software effort data that, if not corrected, will distort reality and make the benchmarks almost useless and possibly even harmful.

One of the most common leaks is that of unpaid overtime. Software is a very labor-intensive occupation, and many of us work very long hours. But few companies actually record unpaid overtime. This means that software effort is underreported by around 15%, which is too large a value to ignore.

Other leaks include the work of part-time specialists who come and go as needed. There are dozens of these specialists, and their combined effort can top 45% of total software effort on large projects. There are too many to show all of these specialists, but some of the more common include the following:

1 Agile coaches

2 Architects (software)

3 Architects (systems)


4 Architects (enterprise)

5 Assessment specialists

6 Capability maturity model integrated (CMMI) specialists

7 Configuration control specialists

8 Cost estimating specialists

9 Customer support specialists

10 Database administration specialists

11 Education specialists

12 Enterprise resource planning (ERP) specialists

13 Expert-system specialists

14 Function point specialists (certified)

15 Graphics production specialists

16 Human factors specialists

26 Project office specialists

27 Process improvement specialists

28 Quality assurance specialists

29 Scrum masters

30 Security specialists

31 Technical writing specialists

32 Testing specialists (automated)

33 Testing specialists (manual)

34 Web page design specialists

35 Web masters

Another major leak is that of failing to record the rather high costs for users when they participate in software projects, such as embedded users for agile projects. But users also provide requirements, participate in design and phase reviews, perform acceptance testing, and carry out many other critical activities. User costs can collectively approach 85% of the effort of the actual software development teams.

Without multiplying examples, this new book is somewhat like a medical book that attempts to discuss treatments for common diseases. This book goes through a series of measurement and metric problems and explains the damages they can cause. There are also some suggestions on overcoming these problems, but the main focus of the book is to show readers all of the major gaps and problems that need to be corrected in order to accumulate accurate and useful benchmarks for software projects. I hope readers will find the information to be of use.

Quality data are even worse than productivity and resource data and are only about 25% complete. The new technical debt metric is only about 17% complete. Few companies even start quality measures until after unit test, so all early bugs found by reviews, desk checks, and static analysis are invisible. Technical debt does not include consequential damages to clients, nor does it include litigation costs when clients sue for poor quality.

Hardly anyone measures bad fixes, or new bugs in the bug repairs themselves. About 7% of bug repairs have new bugs, and this can rise above 35% for modules with high cyclomatic complexity. Even fewer companies measure bad-test cases, or bugs in test libraries, which average about 15%.
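Because bad fixes themselves need repairs that can again go wrong, the repair totals compound geometrically. Below is a minimal sketch of that arithmetic, using the 7% and 35% bad-fix rates cited above; the starting bug count, function name, and loop cutoff are illustrative assumptions, not data from the book.

```python
# Sketch: compounding effect of bad fixes (new bugs injected by bug repairs).
# Uses the ~7% and ~35% bad-fix rates cited in the text; the starting bug count
# and the loop cutoff are illustrative assumptions.

def total_repairs(initial_bugs: float, bad_fix_rate: float = 0.07) -> float:
    """Total repair actions needed when each repair injects new bugs at bad_fix_rate."""
    repairs = 0.0
    wave = float(initial_bugs)
    while wave > 0.01:            # stop once the injected-bug tail is negligible
        repairs += wave
        wave *= bad_fix_rate      # each wave of repairs creates a smaller wave of new bugs
    return repairs

if __name__ == "__main__":
    print(round(total_repairs(1_000)))        # ~1075 repairs at a 7% bad-fix rate
    print(round(total_repairs(1_000, 0.35)))  # ~1538 repairs for high-complexity modules
```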

Yet another problem with software measurements has been the continuous usage, for more than 50 years, of metrics that distort reality and violate standard economic principles. The two most flagrant metrics with proven errors are cost per defect and lines of code (LOC). The cost per defect metric penalizes quality and makes buggy applications look better than they are. The LOC metric makes requirements and design invisible and, even worse, penalizes modern high-level programming languages.

Professional benchmark organizations such as Namcook Analytics, Q/P Management Group, Davids' Consulting, and TI Metricas in Brazil, which validate client historical data before logging them, can achieve measurement accuracy of perhaps 98%. Contract projects that need accurate billable hours in order to get paid are often accurate to within 90% for development effort (but many omit unpaid overtime, and they never record user costs).

Function point metrics are the best choice for both economic and quality analyses of software projects. The new SNAP metric (software nonfunctional assessment process) measures nonfunctional requirements but is difficult to apply and also lacks empirical data.

Ordinary internal information system projects and web applications developed under a cost-center model, where costs are absorbed instead of being charged out, are the least accurate and are the ones that average only 37%. Agile projects are very weak in measurement accuracy and often have less than 50% accuracy. Self-reported benchmarks are also weak in measurement accuracy and are often less than 35% accurate in accumulating actual costs.

A distant analogy to this book on measurement problems is Control of Communicable Diseases in Man, published by the U.S. Public Health Service. It has concise descriptions of the symptoms and causes of more than 50 common communicable diseases, together with discussions of proven effective therapies.

Another medical book with useful guidance for those of us in software is Paul Starr's excellent book, The Social Transformation of American Medicine.


This book won a Pulitzer Prize in 1982. Some of the topics on improving medical records and medical education have much to offer on improving software records and software education.

So as not to have an entire book filled with problems, Appendix 2 is a more positive section that shows 25 quantitative goals that could be achieved between now and 2026 if the industry takes measurements seriously and also takes quality seriously.


Acknowledgments

Thanks to my wife, Eileen Jones, for making this book possible. Thanks for her patience when I get involved in writing and disappear for several hours. Also thanks for her patience on holidays and vacations when I take my portable computer and write early in the morning.

Thanks to my neighbor and business partner Ted Maroney, who handles contracts and the business side of Namcook Analytics LLC, which frees up my time for books and technical work. Thanks also to Aruna Sankaranarayanan for her excellent work with our Software Risk Master (SRM) estimation tool and our website. Thanks also to Larry Zevon for the fine work on our blog and to Bob Heffner for marketing plans. Thanks also to Gary Gack and Jitendra Subramanyam for their work with us at Namcook.

Thanks to other metrics and measurement research colleagues who also attempt to bring order into the chaos of software development. Special thanks to the late Allan Albrecht, the inventor of function points, for his invaluable contribution to the industry and for his outstanding work. Without Allan's pioneering work on function points, the ability to create accurate baselines and benchmarks would probably not exist today in 2016.

The new SNAP team from the International Function Point Users Group (IFPUG) also deserves thanks: Talmon Ben-Canaan, Carol Dekkers, and Daniel French.

Thanks also to Dr. Alain Abran, Mauricio Aguiar, Dr. Victor Basili, Dr. Barry Boehm, Dr. Fred Brooks, Manfred Bundschuh, Tom DeMarco, Dr. Reiner Dumke, Christof Ebert, Gary Gack, Tom Gilb, Scott Goldfarb, Peter Hill, Dr. Steven Kan, Dr. Leon Kappelman, Dr. Tom McCabe, Dr. Howard Rubin, Dr. Akira Sakakibara, Manfred Seufort, Paul Strassman, Dr. Gerald Weinberg, Cornelius Wille, the late Ed Yourdon, and the late Dr. Harlan Mills for their own solid research and for the excellence and clarity with which they communicated ideas about software. The software industry is fortunate to have researchers and authors such as these.

Thanks also to the other pioneers of parametric estimation for software projects: Dr. Barry Boehm of COCOMO, Tony DeMarco and Arlene Minkiewicz of PRICE, Frank Freiman and Dan Galorath of SEER, Dr. Larry Putnam of SLIM and the other Putnam family members, Dr. Howard Rubin of Estimacs, Dr. Charles Turk (a colleague at IBM when we built DPS in 1973), and William Roetzheim of ExcelerPlan. Many of us started work on parametric estimation in the 1970s and brought out our commercial tools in the 1980s.

Thanks to my former colleagues at Software Productivity Research (SPR) for their hard work on our three commercial estimating tools (SPQR/20 in 1984; CHECKPOINT in 1987; and KnowledgePlan in 1990): Doug Brindley, Chas Douglis, Lynn Caramanica, Carol Chiungos, Jane Greene, Rich Ward, Wayne Hadlock, Debbie Chapman, Mike Cunnane, David Herron, Ed Begley, Chuck Berlin, Barbara Bloom, Julie Bonaiuto, William Bowen, Michael Bragen, Doug Brindley, Kristin Brooks, Tom Cagley, Sudip Charkraboty, Craig Chamberlin, Michael Cunnane, Charlie Duczakowski, Gail Flaherty, Richard Gazoorian, James Glorie, Scott Goldfarb, David Gustafson, Bill Harmon, Shane Hartman, Bob Haven, Steve Hone, Jan Huffman, Peter Katsoulas, Richard Kauffold, Scott Moody, John Mulcahy, Phyllis Nissen, Jacob Okyne, Donna O'Donnel, Mark Pinis, Tom Riesmeyer, Janet Russac, Cres Smith, John Smith, Judy Sommers, Bill Walsh, and John Zimmerman. Thanks also to Ajit Maira and Dick Spann for their service on SPR's board of directors.

Appreciation is also due to various corporate executives who supported the technical side of measurement and metrics by providing time and funding. From IBM, the late Ted Climis and the late Jim Frame both supported the author's measurement work and in fact commissioned several studies of productivity and quality inside IBM, as well as funding IBM's first parametric estimation tool in 1973. Rand Araskog and Dr. Charles Herzfeld at ITT also provided funds for metrics studies, as did Jim Frame, who became the first ITT VP of software.

Thanks are also due to the officers and employees of the IFPUG. This organization started almost 30 years ago in 1986 and has grown to become the largest software measurement association in the history of software. When the affiliates in other countries are included, the community of function point users is the largest measurement association in the world.

There are other function point associations such as the Common Software Measurement International Consortium, the Finnish Software Metrics Association, and the Netherlands Software Metrics Association, but all 16 of my software books have used IFPUG function points. This is in part due to the fact that Al Albrecht and I worked together at IBM and later at Software Productivity Research.


About the Author

Capers Jones is currently the vice president and chief technology officer of Namcook Analytics LLC (www.Namcook.com). Namcook Analytics LLC designs leading-edge risk, cost, and quality estimation and measurement tools. Software Risk Master (SRM)™ is the company's advanced estimation tool, with a patent-pending early sizing feature that allows sizing before requirements via pattern matching. Namcook Analytics also collects software benchmark data and engages in longer range software process improvement, quality, and risk-assessment studies. These Namcook studies are global and involve major corporations and some government agencies in many countries in Europe, Asia, and South America. Capers Jones is the author of 15 software books and several hundred journal articles. He is also an invited keynote speaker at many software conferences in the United States, Europe, and the Pacific Rim.


Chapter 1

Introduction

As the developer of a family of software cost-estimating tools, the author is often asked what seems to be a straightforward question: How accurate are the estimates compared to historical data?

The answer to this question is surprising. Usually the estimates from modern parametric estimation tools are far more accurate than the historical data used by clients for comparisons! This fact is surprising because much of what are called historical data are incomplete and omit most of the actual costs and work effort that were accrued.

In some cases historical data capture only 25% or less of the full amount of effort that was expended. Among the author's IT clients, the average completeness of historical effort data is only about 37% of the true effort expended, when calibrated by later team interviews that reconstruct the missing data elements such as unpaid overtime.

Quality data are incomplete too. Most companies do not even start measuring quality until after unit test, so all requirement and design defects are excluded, as are static analysis defects and unit test defects. The result is a defect count that understates the true numbers of bugs by more than 75%. In fact, some companies do not measure defects until after release of the software.

Thus when the outputs from an accurate parametric software cost-estimating tool such as ExcelerPlan, KnowledgePlan, True-Price, SEER, or SLIM are compared to what are called historical data, the results tend to be alarming and are also confusing to clients and client executives.

The outputs from the estimating tools often indicate higher costs, more effort, and longer schedules than the historical data indicate. It is seldom realized that the difference is because of major gaps and omissions in the historical data themselves, rather than because of errors in the estimates.


It is fair to ask: if historical data are incomplete, how is it possible to know the true amounts and evaluate the quantity of missing data that were left out?

In order to correct the gaps and omissions that are normal in cost-tracking systems, it is necessary to interview the development team members and the project managers. During these interview sessions, the contents of the historical data collected for the project are compared to a complete work breakdown structure derived from similar projects.

For each activity and task that occurs in the work breakdown structure, but which is missing from the historical data, the developers are asked whether or not the activity occurred. If it did occur, the developers are asked to reconstruct from memory or their informal records the number of hours that the missing activity accrued.
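Below is a minimal sketch of this reconciliation step, assuming the reference work breakdown structure can be expressed as a simple activity list; the activity names and hour values are hypothetical, chosen only to show the mechanics.

```python
# Sketch of the interview-based reconciliation described above: compare recorded
# activities against a reference work breakdown structure, then fold in hours
# reconstructed from team interviews. Activity names and hours are hypothetical.

REFERENCE_WBS = ["requirements", "design", "coding", "unit test",
                 "function test", "documentation", "project management"]

recorded_hours = {"design": 800, "coding": 2_400, "unit test": 600}    # tracking-system data

interview_hours = {"requirements": 500, "function test": 700,          # reconstructed from memory
                   "documentation": 300, "project management": 900}

def reconcile(recorded, reconstructed, wbs):
    """Merge tracked hours with interview reconstructions for missing WBS activities."""
    corrected = dict(recorded)
    for activity in wbs:
        if activity not in corrected:
            corrected[activity] = reconstructed.get(activity, 0)       # leave 0 if truly unknown
    return corrected

corrected = reconcile(recorded_hours, interview_hours, REFERENCE_WBS)
completeness = sum(recorded_hours.values()) / sum(corrected.values())
print(f"Tracking system captured {completeness:.0%} of the reconstructed effort")
```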

Problems with errors and leakage from software cost-tracking systems are as old as the software industry itself. The first edition of the author's book, Applied Software Measurement, was published in 1991; the third edition was published in 2008. Yet the magnitude of errors in cost- and resource-tracking systems is essentially the same today as it was in 1991. Following is an excerpt from the third edition that summarizes the main issues of leakage from cost-tracking systems:

It is a regrettable fact that most corporate tracking systems for effort and costs (dollars, work hours, person months, etc.) are incorrect and manage to omit from 30% to more than 70% of the real effort applied to software projects. Thus most companies cannot safely use their own historical data for predictive purposes. When benchmark consulting personnel go on-site and interview managers and technical personnel, these errors and omissions can be partially corrected by interviews.

The commonest omissions from historical data, ranked in order of significance, are given in Table 1.1.

Not all of these errors are likely to occur on the same project, but enough of them occur so frequently that ordinary cost data from project tracking systems are essentially useless for serious economic study, for benchmark comparisons between companies, or for baseline analysis to judge rates of improvement.

A more fundamental problem is that most enterprises simply do not record data for anything but a small subset of the activities actually performed. In carrying out interviews with project managers and project teams to validate and correct historical data, the author has observed the following patterns of incomplete and missing data, using the 25 activities of a standard chart of accounts as the reference model (Table 1.2).

When the author and his colleagues collect benchmark data, we ask the managers and personnel to try and reconstruct any missing cost elements. Reconstruction of data from memory is plainly inaccurate, but it is better than omitting the missing data entirely.

Unfortunately, the bulk of the software literature and many historical studies only report information to the level of complete projects, rather than to the level of specific activities. Such gross bottom-line data cannot readily be validated and are almost useless for serious economic purposes.

Table 1.3 illustrates the differences between full activity-based costs for a software project and the typical leaky patterns of software measurements normally carried out. Table 1.3 uses a larger 40-activity chart of accounts that shows typical work patterns for large systems of 10,000 function points or more.

As can be seen, measurement leaks degrade the accuracy of the information available to C-level executives and also make economic analysis of software costs very difficult unless the gaps are corrected.

To illustrate the effect of leakage from software tracking systems, consider what the complete development cycle would look like for a sample project. The sample is for a PBX switching system of 1,500 function points written in the C programming language. Table 1.4 illustrates a full set of activities and a full set of costs.

Table 1.1 Most Common Gaps in Software Measurement Data

Sources of Cost Errors (with magnitude of cost errors)

1. Unpaid overtime by exempt staff (up to 25% of reported effort)
2. Charging time to the wrong project (up to 20% of reported effort)
3. User effort on software projects (up to 50% of reported effort)
4. Management effort on software projects (up to 15% of reported effort)
5. Specialist effort on software projects (up to 45% of reported effort):
   business analysts, human factors specialists, database administration
   specialists, integration specialists, quality assurance specialists,
   technical writing specialists, education specialists, hardware or
   engineering specialists, marketing specialists, metrics and function
   point specialists
6. Effort spent prior to cost-tracking start-up (up to 10% of reported effort)
7. Inclusion/exclusion of nonproject tasks (up to 25% of reported effort):
   departmental meetings, courses and education, travel

Overall error magnitude: up to 175% of reported effort
Average accuracy of historical data: 37% of true effort and costs
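Below is a minimal sketch of how leak categories like those in Table 1.1 translate into a completeness figure for reported effort. The particular leak percentages chosen for this worked case are illustrative picks within the table's upper bounds, not calibrated averages from the book.

```python
# Sketch: correcting reported effort for leak categories like those in Table 1.1.
# The leak fractions below are illustrative picks within the table's upper bounds,
# not calibrated averages; real corrections come from team interviews.

reported_effort_hours = 10_000

leaks = {                          # fraction of *reported* effort that leaked out
    "unpaid overtime": 0.15,
    "specialist effort": 0.20,
    "management effort": 0.10,
    "work before tracking started": 0.05,
}

true_effort = reported_effort_hours * (1 + sum(leaks.values()))
completeness = reported_effort_hours / true_effort
print(f"True effort ~{true_effort:,.0f} hours; reported data are {completeness:.0%} complete")
# Heavier leak assumptions push completeness down toward the 37% average cited in the text.
```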


Table 1.2 Gaps and Omissions Observed in Data for a Software Chart of Accounts

Activities Performed                          Completeness of Historical Data
05 Initial analysis and design                Missing or Incomplete
09 Reusable code acquisition                  Missing or Incomplete
10 Purchased package acquisition              Missing or Incomplete
12 Independent verification and validation    Complete (defense only)
13 Configuration management                   Missing or Incomplete
15 User documentation                         Missing or Incomplete
21 Acceptance testing                         Missing or Incomplete
22 Independent testing                        Complete (defense only)
24 Installation and training                  Missing or Incomplete
25 Project management                         Missing or Incomplete
26 Total project resources, costs             Incomplete

Table 1.3 Measured Effort versus Actual Effort: 10,000 Function Points

Now consider what the same project would look like if only design, code, and unit test (DCUT) were recorded by the company's tracking system. This combination is called DCUT, and it has been a common software measurement for more than 50 years. Table 1.5 illustrates the partial DCUT results.

Instead of a productivity rate of 6.00 function points per staff month, Table 1.5 indicates a productivity rate of 18.75 function points per staff month. Instead of a schedule of almost 25 calendar months, Table 1.5 indicates a schedule of less than 7 calendar months. Instead of a cost per function point of U.S. $1,666, the DCUT results are only U.S. $533 per function point.
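The arithmetic behind these two views of the same 1,500 function point project is simple enough to show directly. The sketch below assumes the $10,000 burdened monthly rate used in the book's other examples and back-calculates the staff-month totals from the quoted productivity and cost-per-FP figures.

```python
# The arithmetic behind the two views of the same 1,500 function point PBX project.
# The $10,000 burdened monthly rate matches the book's other examples; the staff-month
# totals are back-calculated from the quoted productivity and cost-per-FP figures.

FUNCTION_POINTS = 1_500
MONTHLY_RATE = 10_000              # burdened cost per staff month (assumed)

def summarize(label: str, staff_months: float) -> None:
    productivity = FUNCTION_POINTS / staff_months                   # FP per staff month
    cost_per_fp = staff_months * MONTHLY_RATE / FUNCTION_POINTS
    print(f"{label}: {productivity:.2f} FP per staff month, ${cost_per_fp:,.0f} per FP")

summarize("Full 40-activity measurement", 250)   # 6.00 FP/month, ~$1,667 per FP (text rounds to $1,666)
summarize("DCUT-only measurement", 80)           # 18.75 FP/month, ~$533 per FP
```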

Yet both Tables 1.4 and 1.5 are for exactly the same project. Unfortunately, what passes for historical data far more often matches the partial results shown in Table 1.5 than the complete results shown in Table 1.4. This leakage of data is not economically valid, and it is not what C-level executives need and deserve to understand the real costs of software.

Internal software projects where the development organization is defined as a cost center are the most incomplete and inaccurate in collecting software data. Many in-house projects by both corporations and government agencies lack useful historical data. Thus such organizations tend to be very optimistic in their internal estimates because they have no solid basis for comparison. If they switch to a commercial estimating tool, they tend to be surprised at how much more costly the results might be.

External projects that are being built under contract, and projects where the development organization is a profit center, have stronger incentives to capture costs with accuracy. Thus contractors and outsource vendors are likely to keep better records than internal software groups.

Another major gap for internal software projects developed by companies for their own use is the almost total failure to measure user costs. Users participate in requirements, review documents, participate in phase reviews, perform acceptance tests, and are sometimes embedded in development teams if the agile methodology is used. Sometimes user costs can approach or exceed 75% of development costs.

Table 1.6 shows typical leakage for user costs for internal projects where users are major participants. Table 1.6 shows an agile project of 1,000 function points.

As can be seen in Table 1.6, user costs were more than 35% of development costs. This is too large a value to remain invisible and unmeasured if software economic analysis is going to be taken seriously.

Tables 1.3 through 1.6 show how wide the differences can be between full measurement and partial measurement. But an even wider range is possible, because many companies measure only coding and do not record unit test as a separate cost element.

Table 1.7 shows the approximate distribution of tracking methods noted at more than 150 companies visited by the author and around 26,000 projects.

Among the author's clients, about 90% of project historical data are wrong and incomplete until Namcook consultants help the clients to correct them. In fact, the average among the author's clients is that historical data are only about 37% complete for effort and less than 25% complete for quality.

Only 10% of the author’s clients actually have complete cost and resource data that include management and specialists such as technical writers These projects usually have formal cost-tracking systems and also project offices for larger projects They are often contract projects where payment depends on accurate records of effort for billing purposes

Leakage from cost-tracking systems and the wide divergence in what activities are included present a major problem to the software industry. It is very difficult to perform statistical analysis or create accurate benchmarks when so much of the reported data are incomplete, and there are so many variations in what gets recorded.


The gaps and variations in historical data explain why the author and his colleagues find it necessary to go on-site and interview project managers and technical staff before accepting historical data. Unverified historical data are often so incomplete as to negate the value of using them for benchmarks and industry studies.

When we look at software quality data, we see similar leakages. Many companies do not track any bugs before release. Only sophisticated companies such as IBM, Raytheon, and Motorola track pretest bugs.

Table 1.6 User Effort versus Development Team Effort: Agile


At IBM, there were even volunteers who recorded bugs found during desk check sessions, debugging, and unit testing, just to provide enough data for statistical analysis. (The author served as an IBM volunteer and recorded desk check and unit test bugs.) Table 1.8 shows the pattern of missing data for software defect and quality measurements for an application of a nominal 1,000 function points in Java.

Table 1.7 Distribution of Cost/Effort-Tracking Methods

Activities                                                       Percent of Projects
Design, coding, and unit test (DCUT)                             40.00
Requirements, design, coding, and testing                        20.00
All development, but not project management                      15.00
All development and project management including specialists     10.00
Total                                                            100.00

Table 1.8 Measured Quality versus Actual Quality: 1,000 Function Points

Out of the 25 total forms of defect removal, data are collected only for 13 of these under normal conditions. Most quality measures ignore all bugs found before testing, and they ignore unit test bugs too.

The apparent defect density of the measured defects is less than one-third of the true volume of software defects. In other words, true defect potentials would be about 3.50 defects per function point, but due to gaps in the measurement of quality, apparent defect potentials would seem to be just under 1.00 defects per function point.

Table 1.8 (Continued)

Defect Removal Activities          Defects Removed    Defects Measured
Defects per function point         3.50               0.98
Defect removal efficiency (DRE)    94.29%             79.59%


The apparent defect removal efficiency (DRE) is artificially reduced from more than 94% to less than 80% due to the missing defect data from static analysis, inspections, and other pretest removal activities.
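Below is a short sketch of the two calculations behind these figures, defect density and DRE. The internal-removal and delivered-defect counts are back-calculated so that the outputs match the percentages quoted above; they are illustrative assumptions, not rows from Table 1.8.

```python
# Defect density and defect removal efficiency (DRE) for the 1,000 function point
# Java example. The removal and delivered-defect counts are back-calculated so the
# outputs match the percentages quoted above; they are illustrative, not book data.

FUNCTION_POINTS = 1_000
DELIVERED_DEFECTS = 200            # bugs found by users after release (assumed)

def quality_metrics(internal_removals: int) -> tuple[float, float]:
    total = internal_removals + DELIVERED_DEFECTS
    density = total / FUNCTION_POINTS          # defects per function point
    dre = internal_removals / total            # defect removal efficiency
    return density, dre

for label, removed in [("Actual (all 25 removal activities counted)", 3_300),
                       ("Measured (only 13 activities counted)", 780)]:
    density, dre = quality_metrics(removed)
    print(f"{label}: {density:.2f} defects/FP, DRE {dre:.2%}")
# Actual:   3.50 defects per FP, DRE 94.29%
# Measured: 0.98 defects per FP, DRE 79.59%
```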

For the software industry as a whole, the costs of finding and fixing bugs are the top cost driver. It is professionally embarrassing for the industry to be so lax about measuring the most expensive kind of work since software began.

The problems illustrated in Tables 1.1 through 1.8 are just the surface manifestation of a deeper issue. After more than 50 years, the software industry lacks anything that resembles a standard chart of accounts for collecting historical data.

This lack is made more difficult by the fact that in real life, there are many variations of activities that are actually performed. There are variations due to application size, and variations due to application type.


Chapter 2

Variations in Software Activities by Type of Software

Building a small program compared to building a large software system is like building a rowboat versus building an 80,000 ton cruise ship.

A rowboat can be constructed by a single individual using only hand tools. But a large modern cruise ship requires more than 350 workers, including many specialists such as pipe fitters, electricians, steel workers, painters, and even interior decorators and a few fine artists.

Software follows a similar pattern: building a large system in the 10,000 to 100,000 function point range is more or less equivalent to building other large structures such as ships, office buildings, or bridges. Many kinds of specialists are utilized, and the development activities are quite extensive compared to smaller applications.

Table 2.1 illustrates the variations in development activities noted for six size plateaus, using the author's 25-activity checklist for development projects.

Below the plateau of 1,000 function points (which is roughly equivalent to 100,000 source code statements in a procedural language such as COBOL), less than half of the 25 activities are normally performed. But large systems in the 10,000 to 100,000 function point range perform more than 20 of these activities.
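A small sketch of this size conversion is shown below, using the roughly 100 logical statements per function point that the COBOL comparison implies; the ratios shown for the other languages are assumptions for illustration, not values from this book.

```python
# Rough size conversion from function points to logical source statements, using
# the ~100 statements per function point that the COBOL comparison above implies.
# The ratios for the other languages are illustrative assumptions, not book values.

STATEMENTS_PER_FP = {
    "COBOL": 100,      # implied by 1,000 FP ~ 100,000 logical statements
    "C": 128,          # assumed ratio for a low-level procedural language
    "Java": 53,        # assumed ratio for a higher-level language
}

def estimated_kloc(function_points: int, language: str) -> float:
    """Approximate logical KLOC for a given size and language ratio."""
    return function_points * STATEMENTS_PER_FP[language] / 1_000

for lang in STATEMENTS_PER_FP:
    print(f"1,000 FP in {lang}: ~{estimated_kloc(1_000, lang):.0f} KLOC")
```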

To illustrate these points, Table 2.2 shows more detailed quantitative variations in results for three size plateaus: 100, 1,000, and 10,000 function points.


Table 2.2 Variations by Powers of Ten (100, 1,000, and 10,000 Function Points)

                                      100 FP       1,000 FP           10,000 FP
Application example                   update       smart phone app    local system
Monthly burdened costs                $10,000      $10,000            $10,000
Effort in staff hours                 949.48       11,843.70          291,395.39
Plan/actual cost difference           $6,305       $84,750            $3,407,808
Plan/actual percent difference        8.77%        9.45%              15.44%
Planned cost per function point       $656.25      $812.50            $1,866.76
Actual cost per function point        $719.30      $897.25            $2,207.54
Defects per function point            2.31         4.09               5.75
Average total staff                   6.37         20.11              148.42
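The surviving rows above are internally consistent, which can be checked directly. The sketch below assumes 132 paid work hours per staff month (the value that makes the effort-hours row agree with the cost rows) together with the table's $10,000 burdened monthly rate; productivity and cost per function point then follow directly.

```python
# Cross-check of the surviving Table 2.2 rows. Assumes 132 paid work hours per staff
# month (the value that makes the effort and cost rows agree) and the table's
# $10,000 burdened monthly rate; productivity and cost per FP then follow directly.

MONTHLY_RATE = 10_000          # burdened cost per staff month (from the table)
HOURS_PER_MONTH = 132          # assumed work hours per staff month

effort_hours = {               # size in FP: effort in staff hours (from the table)
    100: 949.48,
    1_000: 11_843.70,
    10_000: 291_395.39,
}

for size_fp, hours in effort_hours.items():
    staff_months = hours / HOURS_PER_MONTH
    productivity = size_fp / staff_months                    # FP per staff month
    cost_per_fp = staff_months * MONTHLY_RATE / size_fp
    print(f"{size_fp:>6} FP: {productivity:5.2f} FP per staff month, ${cost_per_fp:,.2f} per FP")
# Reproduces the actual-cost row: about $719, $897, and $2,208 per function point.
```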
