1 Test Report
A Test Report is a document that is prepared once the testing of a software product is complete and the delivery is to be made to the customer. This document would contain a summary of the entire project and would have to be presented in such a way that any person who has not worked on the project would also get a good overview of the testing effort.
Contents of a Test Report
The contents of a test report are as follows:
1 Overview
This comprises two sections: Application Overview and Testing Scope.
Application Overview – This would include detailed information on the application under test, its end users, and a brief outline of the functionality as well.
Testing Scope – This would clearly outline the areas of the application that would / would not be tested by the QA team. This is done so that there would not be any misunderstandings between the customer and QA as regards what needs to be tested and what does not.
This section would also contain information on Operating System / Browser combinations if Compatibility testing is included in the testing effort.
2 Test Details
This section would contain the Test Approach, Types of Testing conducted, Test Environment and Tools Used.
Test Approach – This would discuss the strategy followed for executing the project. This could include information on how coordination was achieved between Onsite and Offshore teams, any innovative methods used for automation or for reducing repetitive workload on the testers, how information and daily / weekly deliverables were delivered to the client, etc.
Types of testing conducted – This section would mention any specific types of testing performed (e.g. Functional, Compatibility, Performance, Usability) along with related specifications.
Test Environment – This would contain information on the Hardware and Software requirements for the project (e.g. server configuration, client machine configuration, specific software installations required).
Tools used – This section would include information on any tools that were used for testing the project. They could be functional or performance testing automation tools, defect management tools, project tracking tools, or any other tools which made the testing work easier.
4 Graphs
This section is similar to the Metrics section, but is more for showcasing the salient features of the testing effort. In case many defects have been logged for the project, graphs can be generated accordingly and depicted in this section. The graphs can be for Defects per build, Defects based on severity, Defects based on status (i.e. how many were fixed and how many rejected), etc.
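As a hedged illustration, such graphs can be produced directly from exported defect data. The minimal Python sketch below plots defects by severity; the defect records and their field names are illustrative assumptions, and matplotlib is assumed to be available.

```python
# Minimal sketch: plotting defect counts by severity from exported
# defect data. The records and field names are illustrative, not the
# export format of any particular defect-tracking tool.
from collections import Counter
import matplotlib.pyplot as plt

defects = [
    {"build": "1.0", "severity": "High", "status": "Fixed"},
    {"build": "1.0", "severity": "Low", "status": "Rejected"},
    {"build": "1.1", "severity": "Critical", "status": "Fixed"},
]

by_severity = Counter(d["severity"] for d in defects)
plt.bar(list(by_severity.keys()), list(by_severity.values()))
plt.title("Defects by severity")
plt.xlabel("Severity")
plt.ylabel("Count")
plt.show()
```

The same counting pattern applies to defects per build or by status; only the key used in the Counter changes.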
5 Test Deliverables
This section would include links to the various documents prepared in the course of the testing project (e.g. Test Plan, Test Procedures, Test Logs, Release Report).
6 Recommendations
This section would include any recommendations from the QA team to the client on the product tested. It could also mention the list of known defects which have been logged by QA but not yet fixed by the development team, so that they can be taken care of in the next release of the application.
2 Defect Management
2.1 Defect
A mismatch between the application and its specification is a defect. A software error is present when the program does not do what its end user expects it to do.
2.2 Defect Fundamentals
A Defect is a product anomaly or flaw. Defects include such things as omissions and imperfections found during testing phases. Symptoms (flaws) of faults contained in software that is sufficiently mature for production are considered defects. A deviation from expectation that is to be tracked and resolved is also termed a defect.
An evaluation of defects discovered during testing provides the best indication of software quality. Quality is the indication of how well the system meets the requirements, so in this context defects are identified as any failure to meet the system requirements. Defect evaluation is based on methods that range from simple number counts to rigorous statistical modeling.
Rigorous evaluation uses assumptions about the arrival or discovery rates of defects during the testing process. The actual data about defect rates are then fit to the model. Such an evaluation estimates the current system reliability and predicts how the reliability will grow if testing and defect removal continue. This evaluation is described as system reliability growth modelling.
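To make such modelling concrete, the sketch below fits cumulative defect counts to the widely used Goel-Okumoto growth model mu(t) = a * (1 - exp(-b*t)). The model choice and the sample data are assumptions for demonstration, not taken from the text above.

```python
# Minimal sketch of reliability growth modelling: fit cumulative
# defect counts to the Goel-Okumoto model mu(t) = a * (1 - exp(-b*t)).
# 'a' estimates the eventual total defect count; 'b' is the detection rate.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

weeks = np.array([1, 2, 3, 4, 5, 6], dtype=float)              # test weeks
cum_defects = np.array([12, 21, 27, 31, 33, 34], dtype=float)  # illustrative data

(a, b), _ = curve_fit(goel_okumoto, weeks, cum_defects, p0=[40.0, 0.5])
remaining = a - cum_defects[-1]
print(f"Estimated total defects: {a:.1f}, still latent: {remaining:.1f}")
```

If the estimated number of latent defects is still high, the model predicts that continued testing will keep finding defects at a meaningful rate.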
2.2.1 Defect Life Cycle
2.3 Defect Tracking
After a defect has been found, it must be reported to development so that it can be fixed. The life cycle below can be read as a simple state machine; a sketch of it follows the list.
The initial state of a defect will be 'New'.
The Project Lead of the development team will review the defect and set it to one of the following statuses:
Open – Accepts the bug and assigns it to a developer.
Invalid Bug – The reported bug is not a valid one as per the requirements/design.
As Designed – This is intended functionality as per the requirements/design.
Deferred – This will be an enhancement.
Duplicate – The bug has already been reported.
Document – Once the defect is set to any of the above statuses apart from Open, and the testing team does not agree with the development team, it is set to Document status.
Once the development team has started working on the defect, the status is set to WIP (Work in Progress); or, if the development team is waiting for a go-ahead or some technical feedback, they will set it to Dev Waiting.
After the development team has fixed the defect, the status is set to FIXED, which means the defect is ready to re-test.
On re-testing the defect, if the defect still exists, the status is set to REOPENED, which will follow the same cycle as an open defect.
If the fixed defect satisfies the requirements/passes the test case, it is set to Closed.
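A minimal Python sketch of this life cycle as a state machine, using the status names above. The transition table is one interpretation of the cycle described, not a prescribed implementation.

```python
# Minimal sketch: the defect life cycle as a state machine.
# The allowed transitions are an interpretation of the cycle above.
ALLOWED = {
    "New": {"Open", "Invalid Bug", "As Designed", "Deferred", "Duplicate"},
    "Invalid Bug": {"Document"},
    "As Designed": {"Document"},
    "Deferred": {"Document"},
    "Duplicate": {"Document"},
    "Open": {"WIP", "Dev Waiting"},
    "Dev Waiting": {"WIP"},
    "WIP": {"Fixed"},
    "Fixed": {"Closed", "Reopened"},     # decided on re-test
    "Reopened": {"WIP", "Dev Waiting"},  # follows the same cycle as Open
}

class Defect:
    def __init__(self, summary):
        self.summary = summary
        self.status = "New"  # initial state of every defect

    def set_status(self, new_status):
        if new_status not in ALLOWED.get(self.status, set()):
            raise ValueError(f"Illegal transition {self.status} -> {new_status}")
        self.status = new_status

d = Defect("Save button does nothing on the Orders screen")
d.set_status("Open")
d.set_status("WIP")
d.set_status("Fixed")
d.set_status("Closed")
```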
2.4 Defect Classification
The severity of bugs will be classified as follows:
Critical – The problem prevents further processing and testing. The Development Team must be informed immediately, and they need to take corrective action immediately.
High – The problem affects selected processing to a significant degree, making it inoperable, causing data loss, or could cause a user to make an incorrect decision or entry. The Development Team must be informed that day, and they need to take corrective action within 0 – 24 hours.
Medium – The problem affects selected processing, but has a work-around that allows continued processing and testing. No data loss is suffered. These may be cosmetic problems that hamper usability or divulge client-specific information. The Development Team must be informed within 24 hours, and they need to take corrective action within 24 – 48 hours.
Low – The problem is cosmetic and/or does not affect further processing and testing. The Development Team must be informed within 48 hours, and they need to take corrective action within 48 – 96 hours.
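These classifications translate naturally into a lookup table. The sketch below encodes the notification and correction windows in hours; the structure and names are illustrative, not part of any standard tool.

```python
# Minimal sketch: severity levels mapped to notification and
# correction deadlines (in hours), per the classification above.
SEVERITY_SLA = {
    #           (notify_within_h, fix_within_h)
    "Critical": (0,  0),    # inform immediately, fix immediately
    "High":     (24, 24),   # inform that day, fix within 0-24 h
    "Medium":   (24, 48),   # inform within 24 h, fix within 24-48 h
    "Low":      (48, 96),   # inform within 48 h, fix within 48-96 h
}

def deadlines(severity):
    notify, fix = SEVERITY_SLA[severity]
    return f"notify dev team within {notify} h, correct within {fix} h"

print(deadlines("High"))
```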
2.5 Defect Reporting Guidelines
The key to making a good report is providing the development staff with as much information as necessary to reproduce the bug. This can be broken down into five points:
1) Give a brief description of the problem.
2) List the steps that are needed to reproduce the bug or problem.
3) Supply all relevant information such as version, project and data used.
4) Supply a copy of all relevant reports and data, including copies of the expected results.
5) Summarize what you think the problem is.
When you are reporting a defect, the more information you supply, the easier it will be for the developers to determine the problem and fix it.
Simple problems can have a simple report, but the more complex the problem, the more information the developer is going to need.
For example, cosmetic errors may only require a brief description of the screen, how to get to it, and what needs to be changed.
However, an error in processing will require a more detailed description, such as:
1) The name of the process and how to get to it.
2) Documentation on what was expected (expected results).
3) The source of the expected results, if available (this includes spreadsheets, an earlier version of the software, and any formulas used).
4) Documentation on what actually happened (perceived results).
5) An explanation of how the results differed.
6) Identification of the individual items that are wrong.
7) If specific data is involved, a copy of the data both before and after the process.
8) Copies of any output.
As a rule, the detail of your report will increase based on a) the severity of the bug, b) the level of the processing, and c) the complexity of reproducing the bug.
Anatomy of a bug report
Bug reports need to do more than just describe the bug. They have to give developers something to work with so that they can successfully reproduce the problem.
In most cases, the more information given, the better, so long as it is correct information. The report should explain exactly how to reproduce the problem and exactly what the problem is.
The basic items in a report are as follows:
Version: Since the build under test was delivered, developers will have been working on it, and if they've found a bug it may already have been reported or even fixed. In either case, they need to know which version to use when testing out the bug.
Program/module: The name of the program or module in question.
Data: Unless you are reporting a purely cosmetic error on a screen, you should include a dataset that exhibits the error. If you're reporting a processing error, you should include two versions of the dataset, one before the process and one after. If the dataset from before the process is not included, developers will be forced to try and find the bug based on forensic evidence. With the data, developers can trace what is happening.
Steps: List the steps taken to recreate the bug. Use exact names, don't abbreviate, and don't assume anything. After you've finished writing down the steps, follow them, and make sure you've included everything you type and do to get to the problem. If there are parameters, list them. If you have to enter any data, supply the exact data entered. Go through the process again and see if there are any steps that can be removed. When you report the steps, they should be the clearest steps to recreating the bug.
Description: Explain what is wrong. Try to weed out any extraneous information, but detail what is wrong. Include a list of what was expected. Remember to report one problem at a time; don't combine bugs in one report.
Supporting documentation: If available, supply documentation. If the process is a report, include a copy of the report with the problem areas highlighted. Include what you expected. If you have a report to compare against, include it and its source information (if it's a printout from a previous version, include the version number and the dataset used).
This information should be stored in a centralized location so that Developers and Testers have access to it. The developers need it to reproduce the bug, identify it and fix it. Testers will need this information for later regression testing and verification.
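Pulling the items together, a bug report can be modelled as a simple data structure. The sketch below is a hypothetical illustration; the field names follow the items above rather than the schema of any particular defect-tracking tool.

```python
# Minimal sketch: the basic items of a bug report as a data structure.
# Field names follow the items described above; they are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BugReport:
    version: str                 # build the bug was found in
    program: str                 # program or module in question
    steps: List[str]             # exact steps to reproduce
    description: str             # what is wrong and what was expected
    data_before: str = ""        # dataset before the process, if any
    data_after: str = ""         # dataset after the process, if any
    attachments: List[str] = field(default_factory=list)  # supporting docs

report = BugReport(
    version="2.3.1",
    program="Invoicing",
    steps=["Open the Invoicing module",
           "Post invoice #1001",
           "Run the totals report"],
    description="Report total is 10.00 too high; expected 120.00, got 130.00",
)
```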
2.5.1 Summary
A bug report is a case against a product. In order to work, it must supply all the information necessary not only to identify the problem but also what is needed to fix it.
It is not enough to say that something is wrong. The report must also say what the system should be doing.
The report should be written in clear, concise steps, so that someone who has never seen the system can follow the steps and reproduce the problem. It should include information about the product, including the version number and what data was used.
The more organized the information provided, the better the report will be.
3 Automation
What is Automation?
Automated testing is the automation of the manual testing process currently in use.
3.1 Why Automate the Testing Process?
Today, rigorous application testing is a critical part of virtually all software development projects. As more organizations develop mission-critical systems to support their business activities, the need is greatly increased for testing methods that support business objectives. It is necessary to ensure that these systems are reliable, built according to specification, and able to support business processes. Many internal and external factors are forcing organizations to ensure a high level of software quality and reliability.
In the past, most software tests were performed using manual methods. This required a large staff of test personnel to perform expensive and time-consuming manual test procedures. Owing to the size and complexity of today's advanced software applications, manual testing is no longer a viable option for most testing situations.
Every organization has unique reasons for automating software quality activities, but several reasons are common across industries.
Using Testing Effectively
By definition, testing is a repetitive activity. The very nature of application software development dictates that no matter which methods are employed to carry out testing (manual or automated), they remain repetitious throughout the development lifecycle. Automation of testing processes allows machines to complete the tedious, repetitive work while human personnel perform other tasks.
Automation also allows the tester to reduce or eliminate the "think time" or "read time" necessary for the manual interpretation of when or where to click the mouse or press the enter key. An automated test executes the next operation in the test hierarchy at machine speed, allowing tests to be completed many times faster than by the fastest individual. Furthermore, some types of testing, such as load/stress testing, are virtually impossible to perform manually.
Reducing Testing Costs
The cost of performing manual testing is prohibitive when compared to automated methods. The reason is that computers can execute instructions many times faster, and with fewer errors, than individuals. Many automated testing tools can replicate the activity of a large number of users (and their associated transactions) using a single computer. Therefore, load/stress testing using automated methods requires only a fraction of the computer hardware that would be necessary to complete a manual test.
Imagine performing a load test on a typical distributed client/server application on which 50 concurrent users were planned. To do the testing manually, 50 application users employing 50 PCs with associated software, an available network, and a cadre of coordinators to relay instructions to the users would be required. With an automated scenario, the entire test operation could be created on a single machine, with the ability to run and rerun the test as necessary, at night or on weekends, without having to assemble an army of end users. As another example, imagine the same application used by hundreds or thousands of users. It is easy to see why manual methods for load/stress testing are expensive and a logistical nightmare.
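To make the single-machine idea concrete, here is a minimal sketch that simulates 50 concurrent virtual users with threads from Python's standard library. The target URL is a hypothetical placeholder, and a real load-testing harness would track errors and throughput as well.

```python
# Minimal sketch: simulating 50 concurrent virtual users from one
# machine using threads. The URL is a hypothetical placeholder.
import threading
import time
import urllib.request

URL = "http://example.com/app/login"  # hypothetical endpoint
N_USERS = 50
latencies = []
lock = threading.Lock()

def virtual_user(user_id):
    start = time.time()
    try:
        urllib.request.urlopen(URL, timeout=10).read()
    except OSError:
        pass  # a real harness would count errors separately
    with lock:
        latencies.append(time.time() - start)

threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(N_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(latencies)} requests, avg latency {sum(latencies)/len(latencies):.2f}s")
```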
Replicating Testing Across Different Platforms
Automation allows the testing organization to perform consistent and repeatable tests. When applications need to be deployed across different hardware or software platforms, standard or benchmark tests can be created and repeated on target platforms to ensure that new platforms operate consistently.
Repeatability and Control
By using automated techniques, the tester has a very high degree of control over which types of tests are being performed and how the tests will be executed. Using automated tests enforces consistent procedures that allow developers to evaluate the effect of various application modifications as well as the effect of various user actions.
For example, automated tests can be built that extract variable data from external files or applications and then run a test using the data as an input value. Most importantly, automated tests can be executed as many times as necessary without requiring a user to recreate a test script each time the test is run.
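A minimal sketch of such a data-driven test is shown below; the CSV file, its columns and the function under test are hypothetical stand-ins.

```python
# Minimal sketch of a data-driven test: input values and expected
# results are pulled from an external CSV file rather than hard-coded.
# 'discount.csv' and calculate_discount() are hypothetical examples.
import csv

def calculate_discount(order_total: float) -> float:
    # stand-in for the application logic under test
    return order_total * 0.10 if order_total >= 100 else 0.0

with open("discount.csv", newline="") as f:
    # expected columns: order_total, expected_discount
    for row in csv.DictReader(f):
        actual = calculate_discount(float(row["order_total"]))
        expected = float(row["expected_discount"])
        assert actual == expected, f"{row}: got {actual}, expected {expected}"

print("All data-driven cases passed.")
```

Adding a new test case is then just a new row in the file; the script itself never changes.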
Greater Application Coverage
The productivity gains delivered by automated testing allow and encourage organizations to test more of their applications, more often. Some organizations are also subject to industry regulations, as well as being required to document their quality assurance efforts for all parts of their systems.
3.2 Automation Life Cycle
Identifying Tests Requiring Automation
Most, but not all, types of tests can be automated. Certain types of tests, such as those that rely on subjective user evaluation, are difficult to automate. The following criteria can be used to identify tests that are prime candidates for automation.
High Path Frequency - Automated testing can be used to verify the performance of application paths that are used with a high degree of frequency when the software is running in full production. Examples include: creating customer records, invoicing and other high-volume activities where software failures would occur frequently.
Critical Business Processes - In many situations, software applications can literally define or control the core of a company's business. If the application fails, the company can face extreme disruptions in critical operations. Mission-critical processes are prime candidates for automated testing. Examples include: financial month-end closings, production planning, sales order entry and other core activities. Any application with a high degree of risk associated with a failure is a good candidate for test automation.
Repetitive Testing - If a testing procedure can be reused many times, it is also a prime candidate for automation. For example, common outline files can be created to establish a testing session, close a testing session and apply testing values. These automated modules can be used again and again without having to rebuild the test scripts. This modular approach saves time and money when compared to creating a new end-to-end script for each and every test; a sketch of the idea follows below.
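A minimal sketch of such reusable modules in Python; the function names, the application actions and the test data are hypothetical.

```python
# Minimal sketch: reusable "outline" modules for opening a session,
# applying test values, and closing the session. Each test composes
# these modules instead of rebuilding an end-to-end script.
def open_session(user: str) -> dict:
    print(f"login as {user}")        # stand-in for real setup steps
    return {"user": user, "open": True}

def apply_values(session: dict, values: dict) -> None:
    print(f"entering {values} as {session['user']}")  # stand-in actions

def close_session(session: dict) -> None:
    session["open"] = False
    print("logout")                  # stand-in teardown

def test_create_customer():
    session = open_session("qa_user")
    try:
        apply_values(session, {"name": "Acme Corp", "region": "West"})
    finally:
        close_session(session)       # teardown runs even if the test fails

test_create_customer()
```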
Applications with a Long Life Span - The longer an application is planned to be in production, the greater the benefits from automation.
What to Look For in a Testing Tool
Choosing an automated software testing tool is an important step, and one which often has enterprise-wide implications. Here are several key issues which should be addressed when selecting an application testing solution.
Test Planning and Management
A robust testing tool should have the capability to manage the testing process, provide organization for testing components, and create meaningful end-user and management reports. It should also allow users to include non-automated testing procedures within automated test plans and test results.
A robust tool will allow users to integrate existing test results into an automated test plan. Finally, an automated test should be able to link business requirements to test results, allowing users to evaluate application readiness based upon the application's ability to support the business requirements.
Testing Product Integration
Testing tools should provide tightly integrated modules that support test component reusability. Test components built for performing functional tests should also support other types of testing, including regression and load/stress testing. All products within the testing product environment should be based upon a common, easy-to-understand language. User training and experience gained in performing one testing task should be transferable to other testing tasks. Also, the architecture of the testing tool environment should be open, to support interaction with other technologies such as defect or bug tracking packages.
Internet/Intranet Testing
A good tool will have the ability to support testing within the scope of a web browser. The tests created for testing Internet or intranet-based applications should be portable across browsers, and should automatically adjust for different load times and performance levels.
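As one possible illustration of browser portability and load-time adjustment, the sketch below uses Selenium to run the same test across browsers, with an explicit wait so the script adapts to different load times. Selenium and the matching browser drivers are assumed to be installed; the URL and element ID are hypothetical placeholders.

```python
# Minimal sketch: one test run unchanged against several browsers,
# using an explicit wait so the script adjusts to different load times.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def run_smoke_test(driver):
    try:
        driver.get("http://example.com/app")      # hypothetical URL
        # wait up to 30 s instead of assuming a fixed load time
        WebDriverWait(driver, 30).until(
            EC.presence_of_element_located((By.ID, "login"))  # hypothetical ID
        )
        print(f"{driver.name}: page loaded, login element present")
    finally:
        driver.quit()

# the same test, portable across browsers
for make_driver in (webdriver.Firefox, webdriver.Chrome):
    run_smoke_test(make_driver())
```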