Java Extreme Programming Cookbook (Part 8)





FAILURES!!!

Tests run: 2, Failures: 2, Errors: 0

The example output shows a timed test that fails immediately and another that waits until the method under test completes. The underlying results are the same (both tests fail), but the printed message is different. A nonwaiting test, or a test that fails immediately, is unable to print the actual time it took to complete the test:

Maximum elapsed time (1000 ms) exceeded!

On the other hand, a test that fails after the method under test completes provides a better message. This message shows the expected time and the actual time:

Maximum elapsed time exceeded! Expected 1000ms, but was 1002ms

As you can see from the previous output, this test is really close to passing. An important point to make here: when a test is repeatedly this close to passing, you may wish to increase the maximum allowed time by a few milliseconds.

Of course, it is important to understand that performance will vary from computer to computer and JVM to JVM. Adjusting the threshold to avoid spurious failures might break the test on another computer.

If you need to view some basic metrics about why a timed test failed, the obvious choice is to construct a timed test that waits for the completion of the method under test. This helps to determine how close or how far away you are from having the test pass. If you are more concerned about the tests executing quickly, construct a timed test that fails immediately.
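To make the two styles concrete, here is a minimal sketch, assuming the same TestSearchModel fixture used throughout this chapter, in which the third constructor argument of TimedTest selects between the two behaviors:

Test testCase = new TestSearchModel("testAsynchronousSearch");

// Nonwaiting: fails the instant 1000 ms elapse, without waiting for the
// search to finish, so no actual time can be reported.
Test failFast = new TimedTest(testCase, 1000, false);

// Waiting: lets the search finish, then reports expected versus actual
// time in the failure message.
Test waitAndReport = new TimedTest(testCase, 1000, true);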

Example 8-1 shows a complete JUnitPerf timed test. Notice the use of the public static Test suite( ) method. This is a typical idiom used when writing JUnit tests, and it proves invaluable when integrating JUnitPerf tests into an Ant buildfile. We delve into Ant integration in Recipe 8.7.

Example 8-1 JUnitPerf TimedTest

package com.oreilly.javaxp.junitperf;

import junit.framework.Test;
import junit.framework.TestSuite;
import com.clarkware.junitperf.TimedTest;

public class TestPerfSearchModel {

    public static Test suite( ) {
        Test testCase = new TestSearchModel("testAsynchronousSearch");
        TestSuite suite = new TestSuite( );
        suite.addTest(new TimedTest(testCase, 2000, false));
        return suite;
    }
}

JUnit's test decoration design brings about some limitations on the precision of a JUnitPerf timed test. The elapsed time recorded by a timed test that decorates a single test method includes the total time of the setUp( ), testXXX( ), and tearDown( ) methods. If JUnitPerf decorates a TestSuite, then the elapsed time recorded by a timed test includes the setUp( ), testXXX( ), and tearDown( ) methods of all Test instances in the TestSuite. The solution is to adjust the maximum allowed time to accommodate the time spent setting up and tearing down the tests.
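The adjustment itself is simple arithmetic. In the context of Example 8-1's suite( ) method, it might look like the following sketch; the 200 ms overhead is a hypothetical number you would measure for your own fixture:

// Hypothetical: setUp( ) and tearDown( ) together cost about 200 ms.
long fixtureOverhead = 200;
long testBudget = 2000;
suite.addTest(new TimedTest(testCase, testBudget + fixtureOverhead, false));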


By default, each simulated user executes the test once. For more flexibility, a load test may use a com.clarkware.junitperf.Timer to ramp up the number of concurrent users during test execution. JUnitPerf provides a ConstantTimer and a RandomTimer to simulate delays between user requests. By default, all threads are started at the same time by constructing a ConstantTimer with a delay of zero milliseconds.

If you need to simulate unique user information, each test must randomly choose a different user ID (for example). This can be accomplished using JUnit's setUp( ) method.
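Here is a minimal sketch of that idea; the userId field and its use inside the test are hypothetical, not part of the book's TestSearchModel:

import java.util.Random;
import junit.framework.TestCase;

public class TestSearchModel extends TestCase {

    private String userId;

    public TestSearchModel(String name) {
        super(name);
    }

    // Each concurrent "user" picks a random ID, so simultaneous runs
    // of the same test method do not collide on user data.
    protected void setUp( ) throws Exception {
        userId = "user-" + new Random().nextInt(10000);
    }

    public void testAsynchronousSearch( ) {
        // issue the search as userId...
    }
}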

Here is an example that constructs a LoadTest with 100 simultaneous users:

public static Test suite( ) {
    Test testCase = new TestSearchModel("testAsynchronousSearch");
    Test loadTest = new LoadTest(testCase, 100);
    TestSuite suite = new TestSuite( );
    suite.addTest(loadTest);
    return suite;
}

Here is the same load test with each of the 100 users executing the test method 10 times:

public static Test suite( ) {
    Test testCase = new TestSearchModel("testAsynchronousSearch");
    Test loadTest = new LoadTest(testCase, 100, 10);
    TestSuite suite = new TestSuite( );
    suite.addTest(loadTest);
    return suite;
}

And here is the same load test with a RandomTimer that staggers the start of each user's thread:

public static Test suite( ) {
    Test testCase = new TestSearchModel("testAsynchronousSearch");
    Timer timer = new RandomTimer(1000, 500);
    Test loadTest = new LoadTest(testCase, 100, 10, timer);
    TestSuite suite = new TestSuite( );
    suite.addTest(loadTest);
    return suite;
}

The Timer interface defines a single method, getDelay( ), that returns the time in milliseconds to wait before the next thread starts executing. The example above constructs a RandomTimer with a delay of 1,000 milliseconds (1 second) and a variation of 500 milliseconds (half a second). This means that a new user is added every one to one and a half seconds.
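Because Timer is a single-method interface, you can write your own ramp-up policy. The following BackoffTimer is a hypothetical example, not a class shipped with JUnitPerf:

import com.clarkware.junitperf.Timer;

public class BackoffTimer implements Timer {

    private long delay = 125;

    // Double the delay after each thread starts, up to a 4-second cap,
    // so load ramps up quickly at first and then more gradually.
    public long getDelay( ) {
        delay = Math.min(delay * 2, 4000);
        return delay;
    }
}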

Be careful when creating timers that wait long periods of time between starting new threads. The longer the wait period, the longer it takes for the test to complete, which may or may not be desirable. If you need to test this type of behavior, you may want to set up a suite of tests that runs automatically (perhaps at night).

There are commercial tools available for this type of performance test, but they are typically hard to use. JUnitPerf is simple and elegant, and any developer who knows how to write a JUnit test can sit down and write complex performance tests.

Example 8-2 shows how to create a JUnitPerf load test. As in the previous recipe, the use of the public static Test suite( ) method proves invaluable for integrating JUnitPerf tests into an Ant buildfile. More details on Ant integration are coming up in Recipe 8.7.

Example 8-2 JUnitPerf LoadTest

package com.oreilly.javaxp.junitperf;

import junit.framework.Test;
import junit.framework.TestSuite;
import com.clarkware.junitperf.LoadTest;
import com.clarkware.junitperf.RandomTimer;

public class TestPerfSearchModel {

    public static Test suite( ) {
        Test testCase = new TestSearchModel("testAsynchronousSearch");
        Test loadTest = new LoadTest(testCase,
                                     100,
                                     new RandomTimer(1000, 500));
        TestSuite suite = new TestSuite( );
        suite.addTest(loadTest);
        return suite;
    }
}


8.5 Creating a Timed Test for Varying Loads

JUnitPerf allows us to accomplish this task with ease. Example 8-3 shows how.

Example 8-3 Load and performance testing

package com.oreilly.javaxp.junitperf;

import junit.framework.Test;
import junit.framework.TestSuite;
import com.clarkware.junitperf.*;

public class TestPerfSearchModel {

    public static Test suite( ) {
        Test testCase = new TestSearchModel("testAsynchronousSearch");
        Test loadTest = new LoadTest(testCase, 100);
        Test timedTest = new TimedTest(loadTest, 3000, false);
        TestSuite suite = new TestSuite( );
        suite.addTest(timedTest);
        return suite;
    }
}

The suite decorates a load test of 100 simultaneous users, each performing an asynchronous search, and tests that it completes in less than 3 seconds. In other words, we are testing that the search algorithm handles 100 simultaneous searches in less than three seconds.


8.5.4 See Also

Recipe 8.3 shows how to create a JUnitPerf TimedTest. Recipe 8.4 shows how to create a JUnitPerf LoadTest. Recipe 8.6 shows how to write a stress test. Recipe 8.7 shows how to use Ant to execute JUnitPerf tests.

8.6 Testing Individual Response Times Under Load

Example 8-4 Stress testing

package com.oreilly.javaxp.junitperf;

import junit.framework.Test;
import junit.framework.TestSuite;
import com.clarkware.junitperf.*;

public class TestPerfSearchModel {

    public static Test suite( ) {
        Test testCase = new TestSearchModel("testAsynchronousSearch");
        // Reversing the decoration order of Example 8-3: the load test
        // wraps a timed test, so each user's search must individually
        // complete within the time limit.
        Test timedTest = new TimedTest(testCase, 3000, false);
        Test loadTest = new LoadTest(timedTest, 100);
        TestSuite suite = new TestSuite( );
        suite.addTest(loadTest);
        return suite;
    }
}


Recipe 8.3 shows how to create a JUnitPerf TimedTest. Recipe 8.4 shows how to create a JUnitPerf LoadTest. Recipe 8.7 shows how to use Ant to execute JUnitPerf tests.

8.7 Running a TestSuite with Ant

No matter how your project chooses to incorporate JUnitPerf tests, the technique is the same: use the junit Ant task. Example 8-5 shows an Ant target for executing only JUnitPerf tests. This example should look similar to what you have seen in other chapters. The only difference is the names of the files to include. This book uses the naming convention "Test" for all JUnit tests, modified to "TestPerf" for JUnitPerf tests, so Ant can easily separate normal JUnit tests from JUnitPerf tests.

Example 8-5 Executing JUnitPerf tests using Ant

<target name="junitperf" depends="compile">
  <junit printsummary="on" fork="false" haltonfailure="false">
    <classpath refid="classpath.project"/>
    <formatter type="plain" usefile="false"/>
    <batchtest fork="false" todir="${dir.build}">
      <fileset dir="${dir.src}">
        <include name="**/TestPerf*.java"/>
      </fileset>
    </batchtest>
  </junit>
</target>


JUnitTestRunner locates the tests to execute. First, JUnitTestRunner uses reflection to look for a suite( ) method. Specifically, it looks for the following method signature:

public static junit.framework.Test suite( )

If JUnitTestRunner locates this method, the returned Test is executed. Otherwise, JUnitTestRunner uses reflection to find all public methods starting with "test". This little trick allows us to provide continuous integration for any class that provides a valid JUnit suite( ) method.
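The lookup itself is plain reflection. Here is a minimal sketch of the general idea; it is not Ant's actual implementation:

import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import junit.framework.Test;

public class SuiteLocator {

    // Returns the Test built by a public static suite( ) method, or null
    // so the caller can fall back to discovering public test* methods.
    public static Test findSuite(Class testClass) {
        try {
            Method suiteMethod = testClass.getMethod("suite", new Class[0]);
            if (Modifier.isStatic(suiteMethod.getModifiers())
                    && Test.class.isAssignableFrom(suiteMethod.getReturnType())) {
                return (Test) suiteMethod.invoke(null, new Object[0]);
            }
        } catch (Exception e) {
            // No usable suite( ) method.
        }
        return null;
    }
}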

8.8 Generating JUnitPerf Tests

As we were writing this book, we came up with the idea of code-generating JUnitPerf tests to show how to extend the XDoclet framework. This recipe uses that code generator, aptly named JUnitPerfDoclet, to create JUnitPerf tests. The concept is simple: mark up existing JUnit tests with JUnitPerfDoclet tags and execute an Ant target to generate the code.

8.8.3.1 Creating a timed test

Here is how to mark up an existing JUnit test method to create a JUnitPerf TimedTest:

/**

* @junitperf.timedtest maxElapsedTime="2000"

* waitForCompletion="false"

*/

public void testSynchronousSearch( ) {

// details left out

}


The @junitperf.timedtest tag tells JUnitPerfDoclet that it should decorate the testSynchronousSearch( ) method with a JUnitPerf TimedTest.

The maxElapsedTime attribute is mandatory and specifies the maximum time, in milliseconds, that the test method is allowed to execute before the test fails.

The waitForCompletion attribute is optional and specifies when a failure should occur. If the value is "true", the total elapsed time is checked after the test method completes. A value of "false" causes the test to fail immediately if the test method exceeds the maximum allowed time.
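For the tag values above, the generated suite should look something like this sketch (the exact output of JUnitPerfDoclet may differ; TestSearch is the hypothetical test case class containing testSynchronousSearch( )):

public class TestPerfTestSearch {

    public static Test suite( ) {
        TestSuite suite = new TestSuite( );
        Test testCase = new TestSearch("testSynchronousSearch");
        // maxElapsedTime="2000", waitForCompletion="false"
        suite.addTest(new TimedTest(testCase, 2000, false));
        return suite;
    }
}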

8.8.3.2 Creating a load test

Here is how to mark up an existing JUnit test method to create a JUnitPerf LoadTest:

/**

* @junitperf.loadtest numberOfUsers="100"

* numberOfIterations="3"

*/

public void testAsynchronousSearch( ) {

// details left out

}

The @junitperf.loadtest tag tells JUnitPerfDoclet that it should decorate the testAsynchronousSearch( ) method with a JUnitPerf LoadTest.

The numberOfUsers attribute is mandatory and indicates the number of users, or threads, that simultaneously execute the test method.

The numberOfIterations attribute is optional. The value is a positive whole number that indicates how many times each user executes the test method.
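Again as a sketch of plausible generated output (not JUnitPerfDoclet's verbatim result), the tag values above map onto LoadTest's three-argument constructor of users and iterations:

Test testCase = new TestSearch("testAsynchronousSearch");
// numberOfUsers="100", numberOfIterations="3"
suite.addTest(new LoadTest(testCase, 100, 3));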

8.8.3.3 Generating the code

Example 8-6 shows how to generate the tests. First, a new task definition is created, called perfdoclet. This task is responsible for kick-starting the code-generation process. We exclude from the fileset any class that begins with "TestPerf" because there may be hand-coded JUnitPerf tests somewhere in the source tree. Finally, the junitperf subtask creates a new JUnitPerf class for each JUnit test case class that contains at least one test method with JUnitPerfDoclet tags. For example, if a JUnit test case class named TestSearch uses JUnitPerfDoclet tags, then the generated JUnitPerf test class is named TestPerfTestSearch.

Example 8-6 JUnitPerfDoclet setup

<target name="generate.perf"
        depends="prepare"
        description="Generates the JUnitPerf tests.">
  <taskdef name="perfdoclet" classname="xdoclet.DocletTask">
    <classpath>

A second target runs the generated tests:

        description="Runs the JUnitPerf tests.">
  <junit printsummary="on" fork="false" haltonfailure="false">
    <classpath refid="classpath.project"/>
    <formatter type="plain" usefile="false"/>
    <batchtest fork="false" todir="${dir.build}">


Chapter 9 XDoclet

Section 9.1 Introduction

Section 9.2 Setting Up a Development Environment for Generated Files

Section 9.3 Setting Up Ant to Run XDoclet

Section 9.4 Regenerating Files That Have Changed

Section 9.5 Generating the EJB Deployment Descriptor

Section 9.6 Specifying Different EJB Specifications

Section 9.7 Generating EJB Home and Remote Interfaces

Section 9.8 Creating and Executing a Custom Template

Section 9.9 Extending XDoclet to Generate Custom Files

Section 9.10 Creating an Ant XDoclet Task

Section 9.11 Creating an XDoclet Tag Handler

Section 9.12 Creating a Template File

Section 9.13 Creating an XDoclet xdoclet.xml File

Section 9.14 Creating an XDoclet Module

XDoclet provides direct support for generating many different types of files. The most popular use of XDoclet is to generate EJB files such as deployment descriptors, remote and home interfaces, and even vendor-specific deployment descriptors. If XDoclet does not provide what you need, you may define your own @ tags and template files. For ultimate flexibility, new Ant XDoclet tasks and new XDoclet tag handlers may be created, allowing for practically any kind of content.

One of the main goals of XDoclet is providing an active code-generation system through Ant. This means that XDoclet works directly with your Ant buildfile to generate the necessary files your project needs. For example, let's say you are working on an EJB called CustomerBean. Normally, you would have to write a minimum of four files: the bean implementation, remote interface, home interface, and the deployment descriptor. If a new public method is introduced, all four files must be kept in sync or the deployment of the bean fails. With XDoclet you simply write the bean implementation class and mark it up with XDoclet @ tags. During the build process an XDoclet Ant task generates the remaining three files for you. Since all files are based on the single bean implementation class, the files are always in sync.
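As a sketch of what that single hand-written file might look like (the tag names follow XDoclet's EJB module; the bean itself is hypothetical):

import javax.ejb.SessionBean;
import javax.ejb.SessionContext;

/**
 * The only file written by hand; XDoclet generates the home interface,
 * remote interface, and deployment descriptor from it.
 *
 * @ejb.bean name="Customer" type="Stateless"
 */
public class CustomerBean implements SessionBean {

    /**
     * Tagged methods are kept in sync with the generated remote
     * interface automatically.
     *
     * @ejb.interface-method
     */
    public String getName( ) {
        return "customer";
    }

    // Standard SessionBean callbacks.
    public void ejbCreate( ) { }
    public void ejbRemove( ) { }
    public void ejbActivate( ) { }
    public void ejbPassivate( ) { }
    public void setSessionContext(SessionContext ctx) { }
}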

9.2 Setting Up a Development Environment for Generated Files

9.2.1 Problem

You want to set up your development environment to handle generated files.

9.2.2 Solution

Create two directories at the same level as your source and build trees. The first directory contains generated source code and may be called something like src-generated. The second directory contains compiled code for the generated source and may be called something like build-generated.

9.2.3 Discussion

The best location for generated source files is in a directory at the same level as your source tree and build tree. Equally important is separating the compiled code for generated source files from the compiled code of nongenerated source files. This provides a convenient, easy-to-manage directory structure, as shown in Figure 9-1.

Figure 9-1 Directory structure for generated files


9.2.3.1 Why not place generated files in the source directory?

Placing generated files in the src directory causes version control tools to assume new files should be added to the repository, which is simply not true. Generated files should never be versioned; rather, the templates and scripts used to generate the files should be versioned.

• DO NOT check generated files into your version control tool.

• DO check in the templates and scripts used to generate the files.

9.2.3.2 Why not place generated files in the build directory?

Placing generated files in the build directory has its own problems as well. For starters, the build directory, by convention, contains compiled code, not source code. Another important reason to maintain separate directory structures is to keep your Ant buildfile simple and easy to manage. When you want to force code to recompile, simply delete the build directories. If you placed generated source files in the build directory, the Ant buildfile would need to exclude those files from being deleted. Introducing a directory specifically for generated files allows the Ant buildfile to remain simple.

9.2.3.3 Why not place the compiled generated code in the build directory?

The build directory may seem like a natural location for compiled generated code. This type of setup has its problems, though. Developers typically use an IDE for quick development. If a developer rebuilds the entire project through the IDE, then all of the compiled code may be deleted and the IDE has to rebuild all source code, including the generated code. This may not seem like a mammoth task until you are dealing with thousands of generated files. Keeping separate build directories ensures that your development environment remains stable and efficient.

1. Prepare the development environment by creating output directories.

2. Compile out-of-date code.

3. Package the code into a deployable unit (JAR, WAR, or EAR).

4. Execute the JUnit tests.
