This is the first in a series of blog posts aimed at documenting what's involved in setting up a performance test harness from scratch. In my next post, I will show how to deploy these performance tests using Maven 2 and how to automate the process using Bamboo. This post assumes you have already set up your application for initial testing; this should include a sample dataset. In this blog, I'm using a current version of FishEye. I've set the application up, given it a license key and added a sample CVS repository.
To start with, we’ll use a basic directory structure. Later, I’ll show you how to easily separate the data and tests, but for now I’ll assume you are working on a local copy. We’ll start to fill these with files as we go.
PerfTest/
PerfTest/resources
PerfTest/resources/repository
PerfTest/results
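If you're following along, the layout can be created from a shell in one step (directory names taken from the structure above):

```shell
# Create the test harness layout: resources (including a repository
# copy) and a results directory for JMeter's .jtl output files.
mkdir -p PerfTest/resources/repository
mkdir -p PerfTest/results
```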
The first thing to do before jumping in to write the tests is to define what actions you want your test to perform.
JMeter uses the concept of a “Thread Group” to represent a set of users performing a set of actions. For example, this could be users logging in and then browsing the dashboard. At a minimum, you need to document the Thread Groups (groups of simulated users), the URLs you plan to sample in each Thread Group, the order in which those URLs should be sampled, and the Response Assertions needed to determine whether an action was successful or not.
Skeleton JMeter test
JMeter runs the actions on the test plan in the order they appear on the plan. To add an action, right-click in the left-hand tree where you want the action and choose it from the Add menu. I start with the following skeleton. There are a few basics in place:
- Startup variables on the first screen, such as startup directories.
- External resources holds the locations of external data files such as CSVs.
- Thread details – the number of user threads and request delays.
- Server details – default settings for where the server resides, and which port and protocol to use.
- HTTP Cookie manager – keeps track of cookies for authenticated sessions.
Filling in the details
With this skeleton, it’s a matter of putting planned actions into JMeter as defined in your documentation. Have a look at the screen shots for the run order, but each thread group has at least:
- CSV Dataset
- a Pause
- Several Samples in order
Note: Pauses are applied between requests to even out the request stream. Users will wait between clicking links in your application and so should your tests.
Tip: To get started, take a look at the FishEye tests which I’ve been working on.
Below I’ve listed some of the common items I use in a test. I’ve highlighted the names where possible so you know what you’re looking for in the JMeter graphical interface.
JMeter works with a number of different protocols. Atlassian software mostly talks HTTP, but JMeter also supports JUnit, FTP, AJP, SOAP, JDBC, JMS, LDAP and more.
Using a Sampler for the correct protocol allows actions to be performed. In the case of the HTTP Sampler, this means making an HTTP request to a web server.
As with any test, it's important to make sure you get back the correct data. This is very important for ensuring correct operation of the application under high load. By attaching a Response Assertion to an HTTP Sampler, it's easy to check that the output matches some preset conditions. You can add two Response Assertions – one to test the HTTP response code (200) and another to check the response data for a text string.
The CSV Dataset module allows data to be stored in a CSV file on disk. This is read in at runtime and the data is made available as variables in the test plan. This makes it easy to change the data “behind” the test application. By keeping the tests generic to the software, and the actual details of the data being tested in CSV files, these CSV files can easily be replaced to use another application dataset. Using sets of CSV files, you can then have tests for variations of data, such as production, test or development.
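As a sketch, a dataset for a login action might look like the file below. The file name (users.csv) and column meanings are hypothetical; each column maps to one of the variable names you configure on the CSV Dataset element (e.g. username and password, referenced in samplers as ${username} and ${password}):

```shell
# Create a hypothetical CSV dataset file for the CSV Dataset element.
# One row per simulated user; columns map to the variable names
# configured on the element (here: username,password).
mkdir -p PerfTest/resources
cat > PerfTest/resources/users.csv <<'EOF'
admin,admin
alice,s3cret
bob,changeme
EOF
cat PerfTest/resources/users.csv
```

Swapping in a different users.csv is then all that's needed to point the same test plan at another dataset.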
The Module Controller allows modules in other thread groups to be called. In the test plan I've created there is a Thread Group which is disabled so it does not run. To disable an element, right-click on the element you wish to disable and choose Disable. Common actions are then placed in this Thread Group to create a “function storage” area. Store the actions you want in a Simple Controller in the disabled Thread Group. Next, add a Module Controller in the Thread Group you want the action to be performed in and include the “function” from the disabled Thread Group – this allows for easy reuse of common actions (eg. Login).
Another useful controller is the Throughput Controller. It allows control of either the number of executions or the percentage of executions. The percentage mode is useful since you can set different weights on actions to control the traffic load. Using this controller, you can choose between actions – for example, browse the first of two links 10% of the time and the second link 90% of the time.
JMeter has the __P function, which allows variables to be controlled from the command line. This allows variations on traffic patterns to be scripted without needing to change the test plan.
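For example, a Throughput Controller's percentage field can hold a __P lookup with a default value, which is then overridden at launch with the -J switch. The property name browse.weight here is a hypothetical illustration, not one from the test plan in this post:

```
# In the test plan (e.g. a Throughput Controller percentage field),
# read the browse.weight property, defaulting to 90:
${__P(browse.weight,90)}

# Overridden on the command line for a particular run:
jmeter -n -t jmeter-test-fixedload.jmx -Jbrowse.weight=50
```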
JMeter is a bit light on user feedback from actions performed in the graphical interface. When starting a test, the only feedback that there’s something happening is the thread count which is displayed in the top right hand corner (and maybe the sound of a creaking application somewhere).
I find the following modules give enough debugging output and feedback to be useful without being overwhelming.
Generate Summary Results
This Post Processor prints some summary results to the command line once every few minutes. Later, I’m going to show you how to put this test into an automated build. This feedback on the command line is very useful to see how your build is going by watching the output logs produced by the build.
This Listener shows the samples as they happen. The downside is that it is CPU-expensive while the graphical interface is running. However, it really helps with debugging since it generates a log. You will use the output of this log to generate your graphs using the Perl script supplied below. Note: don't save the response data unless you really need to (see the image).
For each sampler you have, there should be a Response Assertion checking correctness, as above. I like to check for both a successful response code and some text on the page. With this Listener you can catch any assertion failures and print them. More usefully, you can save them to a file for later by specifying a filename in the available text field.
When configuring this module, make sure you select the checkbox to save the response data. Since this module is only configured to catch errors, the amount of data should be small unless something goes wrong. Later, if you have errors in your test, you can load this log file into the text field and the output from the test run will be displayed, showing which assertions failed.
This Listener is another good way of getting feedback. It provides a table of results including response time, throughput and sample counts. The data in these cells can be copied to a spreadsheet.
If you'd like metrics with a 90th percentile, you might prefer the Aggregate Report Listener instead, which provides similar functionality but with different metrics.
Now that you have a working test, it's time to run it and get some graphs.
Be sure to remove any stale logs from the results directory before running the test as JMeter will append to any existing results file which would result in two test runs in one file.
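A minimal sketch of clearing out old logs, assuming the results directory used in this post (the old-run.jtl file below is just a stand-in for a stale log):

```shell
# Ensure the results directory exists, plant a stand-in stale log,
# then clear all .jtl files so JMeter starts the run with fresh files.
mkdir -p PerfTest/results
touch PerfTest/results/old-run.jtl
rm -f PerfTest/results/*.jtl
```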
To preserve system resources during tests, I run JMeter on the command line. This avoids the resource cost of displaying and updating a graphical interface.
If you have any variables you want to override (See the __P function in the JMeter docs), you can specify these on the command line using the -J switch. They can also be set in the user.properties file which is in the JMeter bin directory.
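For example, the properties passed with -J in the run below could instead live in user.properties in the JMeter bin directory. The values here just mirror that example run; treat them as illustrative:

```
# bin/user.properties
fisheye.host=cheech
script.runtime=600
```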
Here’s an example of a test running on the command line:
~/PerfTest gbarnett$ jmeter -n -t jmeter-test-fixedload.jmx -Jfisheye.host=cheech -Jscript.runtime=600
Created the tree successfully using jmeter-test-fixedload.jmx
Starting the test @ Tue Sep 23 12:25:14 EST 2008 (1222136714842)
Display Summary Results During Run + 179 in 102.4s = 1.7/s Avg: 26 Min: 2 Max: 126 Err: 0 (0.00%)
Display Summary Results During Run + 1759 in 179.9s = 9.8/s Avg: 36 Min: 1 Max: 425 Err: 0 (0.00%)
Display Summary Results During Run = 1938 in 282.5s = 6.9/s Avg: 35 Min: 1 Max: 425 Err: 0 (0.00%)
Display Summary Results During Run + 3090 in 180.0s = 17.2/s Avg: 43 Min: 1 Max: 602 Err: 0 (0.00%)
Display Summary Results During Run = 5028 in 462.5s = 10.9/s Avg: 40 Min: 1 Max: 602 Err: 0 (0.00%)
Display Summary Results During Run + 2915 in 167.3s = 17.4/s Avg: 46 Min: 1 Max: 972 Err: 0 (0.00%)
Display Summary Results During Run = 7943 in 629.8s = 12.6/s Avg: 42 Min: 1 Max: 972 Err: 0 (0.00%)
Tidying up ... @ Tue Sep 23 12:35:47 EST 2008 (1222137347433)
... end of run
Results will be saved in .jtl output files, in the locations specified on the output Listeners in the test. I used the results directory:
~/PerfTest/results gbarnett$ ls -al
drwxr-xr-x 2 gbarnett staff     170 23 Sep 12:38 .
drwxr-xr-x 4 gbarnett staff     204 23 Sep 12:25 ..
-rw-r--r-- 1 gbarnett staff      83 23 Sep 12:48 jmeter-assertion-fixedload.jtl
-rw-r--r-- 1 gbarnett staff 6078035 23 Sep 12:48 jmeter-result-fixedload.jtl
-rw-r--r-- 1 gbarnett staff  636070 23 Sep 12:48 jmeter-summary-fixedload.jtl
Using the jmetergraph.pl script, graphs can be generated from these .jtl files.
The script requires the Perl ‘GD’ and ‘Chart’ packages. Once those are installed, running it is easy:
$ jmetergraph.pl jmeter-result-fixedload.jtl
This script will scan the specified .jtl output files and create a set of graphs in PNG format in the current directory.
Here's an example image. This shows the response times of various requests laid out as percentages.
In the next blog, I’m going to go over how to move these tests into a Maven 2 project. We’ll start with splitting up the data and tests and move onto running the test with the Chronos maven plugin.
Update: The second part of this blog is available – Automated performance testing using JMeter and Maven