This blog is made up of two parts. In this first part, I will cover JIRA 4 performance compared to JIRA 3.13. In the second part, Mark Lassau and Andreas Knecht will talk about the benefits of automated performance tests and the improvements to the JIRA codebase made possible by regular performance telemetry data.

The Tests

Atlassian has performance tests which are used internally to benchmark the speed of JIRA. These performance tests have been designed to work with a blank installation of JIRA. The tests consist of two components:

Setup test

The setup test isn’t so named because it sets up JIRA, but because its purpose is to prepare an installation of JIRA for performance testing. This script assumes that you’ve set JIRA up to the point where you’ve entered a license and are faced with a blank instance. It starts by creating a number of projects, then adds users and assigns them to the correct group so that they can work on issues in those projects. Finally, it creates a number of issues in the system and, during this creation process, adds comments and closes a percentage of the issues.
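
To make that flow concrete, here is a minimal sketch of the setup phase in Java. The figures, the group name and the helper methods are hypothetical stand-ins for illustration; the real setup test drives JIRA over HTTP:

// Illustrative sketch only. The helper methods are hypothetical stubs;
// the real setup test issues HTTP requests against a blank JIRA instance.
public class SetupTestSketch {

    public static void main(String[] args) {
        int projects = 20;        // matches the configuration described below
        int users = 100;          // assumed figure for illustration
        int issues = 10000;
        double closeRatio = 0.3;  // assumed percentage of issues to close

        for (int p = 0; p < projects; p++) {
            createProject("PROJ" + p);
        }
        for (int u = 0; u < users; u++) {
            String username = "user" + u;
            createUser(username);
            addUserToGroup(username, "jira-developers"); // group with permission to work on issues
        }
        for (int i = 0; i < issues; i++) {
            String key = createIssue("PROJ" + (i % projects));
            addComment(key, "Sample comment text");
            if (Math.random() < closeRatio) {
                closeIssue(key);
            }
        }
    }

    // Stubs so the sketch compiles and runs; each would be an HTTP call in practice.
    static void createProject(String key)                 { System.out.println("create project " + key); }
    static void createUser(String name)                   { System.out.println("create user " + name); }
    static void addUserToGroup(String name, String group) { System.out.println("add " + name + " to " + group); }
    static String createIssue(String projectKey)          { System.out.println("create issue in " + projectKey); return projectKey + "-1"; }
    static void addComment(String key, String text)       { System.out.println("comment on " + key); }
    static void closeIssue(String key)                    { System.out.println("close " + key); }
}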

Fixed load test

The fixed load test runs after the setup test. A number of simulated user groups log in using usernames created during setup and perform common actions such as browsing, searching, commenting on issues, or resolving issues. This test runs for a fixed amount of time at a level of load determined by the number of simulated users configured to run.
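
A minimal sketch of the idea follows, with assumed user counts, action mix, and think time; the real tests are JMeter test plans, not hand-rolled Java:

import java.util.Random;

// Illustrative sketch of a fixed load test: N simulated users perform
// weighted random actions for a fixed duration. The weights, duration
// and think time are assumptions, not the actual test profile.
public class FixedLoadSketch {

    static final String[] ACTIONS = {"browse", "search", "comment", "resolve"};
    static final double[] WEIGHTS = {0.5, 0.3, 0.15, 0.05}; // assumed distribution

    public static void main(String[] args) throws InterruptedException {
        int simulatedUsers = 20;      // load level = number of simulated users
        long durationMillis = 60_000; // fixed test duration
        Thread[] threads = new Thread[simulatedUsers];
        for (int i = 0; i < simulatedUsers; i++) {
            final String username = "user" + i; // created by the setup test
            threads[i] = new Thread(() -> {
                Random rnd = new Random();
                long end = System.currentTimeMillis() + durationMillis;
                while (System.currentTimeMillis() < end) {
                    perform(username, pickAction(rnd));
                    try { Thread.sleep(500); } catch (InterruptedException e) { return; } // think time
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
    }

    // Pick an action according to the weighted distribution above.
    static String pickAction(Random rnd) {
        double r = rnd.nextDouble(), cumulative = 0;
        for (int i = 0; i < ACTIONS.length; i++) {
            cumulative += WEIGHTS[i];
            if (r < cumulative) return ACTIONS[i];
        }
        return ACTIONS[ACTIONS.length - 1];
    }

    static void perform(String user, String action) {
        // In the real tests this is an HTTP request timed by JMeter.
        System.out.println(user + " -> " + action);
    }
}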

Load Profile & Test Data

It’s important to try to mirror the distribution of requests you expect to see in production. For Atlassian, that means one of the load profiles we test against is derived by analysing log files from our public JIRA instance, http://jira.atlassian.com.
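
As a rough illustration of that kind of log analysis, the sketch below tallies how often each URL path appears in an access log. The file name and field layout (the request path as the seventh whitespace-separated field, as in the common log format) are assumptions, and this is not the actual tooling we use:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: derive a request distribution from an access log.
public class LoadProfileSketch {
    public static void main(String[] args) throws IOException {
        Map<String, Integer> counts = new HashMap<>();
        long total = 0;
        for (String line : Files.readAllLines(Paths.get("access.log"))) {
            String[] fields = line.split(" ");
            if (fields.length < 7) continue;
            String path = fields[6];                  // e.g. /browse/JRA-123?param=x
            String action = path.split("[?#]")[0];    // strip query string and fragment
            counts.merge(action, 1, Integer::sum);
            total++;
        }
        // Print each path's share of total traffic; these percentages
        // would then drive the weighting of the simulated user actions.
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            System.out.printf("%-40s %6.2f%%%n", e.getKey(), 100.0 * e.getValue() / total);
        }
    }
}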

Although the load profile is real, the data used in the tests described below is synthetic. This is due to limitations in the performance tests that work with JIRA 3.13: while the JIRA 4 tests can be run against a clone of an existing installation, the 3.13 tests cannot, so there would be nothing to compare against!

For this test, the setup test was configured to create 10,000 issues across 20 projects. The default permission schemes were used, and the sample text for issues and comments was drawn from around 12MB of English-language text.

Load-wise, the JIRA 3.13 tests attempt to perform about 30-40 requests per second. In JIRA 4 this is higher, at 70-80 requests per second, due to the extra requests made to simulate dashboard gadgets. Under JIRA 4, this results in around 20,000 requests to view an issue and 20,000 requests to the issue navigator.

Speaking of the dashboard…

In JIRA 4 the dashboard is different: much of the heavy lifting is now done on the browser side. There are advantages to this – one of which is parallelisation. In the 3.13 dashboard, all the elements were rendered in serial. Now they’re all iframes making browser calls back to REST endpoints. This means lots of them can be fetched at once, but it also means you have to pay attention to JavaScript performance. JMeter works by timing HTTP requests and doesn’t run JavaScript, so it has no way of measuring browser performance and can’t determine all the HTTP requests a JavaScript dashboard gadget would make. The JIRA 4 “Dashboard” performance metric below is therefore made up of the following requests, which are at best an approximation of the calls that would happen in the real world:

  • The main Dashboard page
  • A request for each of the gadget iframe URLs served
  • 4 REST calls – 3 JQL searches and 1 project summary

The “Dashboard” result that’s returned is the sum of the response times for these requests in serial, not parallel. In a browser such as IE or Safari, these requests would be made in parallel, to a degree that depends on the number of connections the browser in question opens to the backend JIRA instance.
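
As an illustration of how such a metric is composed, the sketch below times a list of requests one after another and reports the serial sum. Every URL here is a hypothetical placeholder rather than an actual JIRA endpoint:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Illustrative sketch of the "Dashboard" metric: time each request in
// turn and sum the results. All URLs are hypothetical placeholders.
public class DashboardMetricSketch {
    public static void main(String[] args) throws Exception {
        String base = "http://localhost:8080";  // assumed JIRA base URL
        String[] requests = {
            "/secure/Dashboard.jspa",                        // main Dashboard page
            "/gadget/1/render", "/gadget/2/render",          // hypothetical gadget iframe URLs
            "/rest/search?jql=a", "/rest/search?jql=b",
            "/rest/search?jql=c",                            // 3 hypothetical JQL REST calls
            "/rest/project/PROJ"                             // 1 hypothetical project summary call
        };
        long sum = 0;
        for (String path : requests) {
            long start = System.nanoTime();
            fetch(base + path);
            sum += (System.nanoTime() - start) / 1_000_000; // elapsed ms
        }
        // Serial sum: a browser would issue many of these in parallel,
        // so the real-world dashboard time would typically be lower.
        System.out.println("Dashboard metric: " + sum + " ms");
    }

    static void fetch(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) { /* drain the response */ }
        }
    }
}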

Test Results

[Chart: average response times, JIRA 3.13 vs JIRA 4]
[Chart: 95th percentile response times, JIRA 3.13 vs JIRA 4]

Configuration

All testing was performed on the following hardware & software:

Server Platform | CPU | Physical Memory | Hard Disk
Dell R610 | 2 x Intel ‘Nehalem’ Xeon E5520 (Quad Core) | 32GB (8 x 4GB DDR3) | 2 x 15K 146GB SAS, RAID 1

Atlassian JIRA | MySQL Database | Tomcat Application Server | Java
4.0.0-RC1 | 5.0.45-7 | 5.5.27 | Java(TM) SE (build 1.6.0_07-b06), Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode)
3.13.5 | 5.0.45-7 | 5.5.27 | Java(TM) SE (build 1.6.0_07-b06), Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode)

Performance tests were conducted with Apache Jakarta JMeter 2.3.4.

All tests were run on Red Hat Enterprise Linux 5.3 (Tikanga) 64-bit (kernel 2.6.18-128.2.1.el5). The filesystem used for all tests was ext3 with the default options. The following tuning was applied to the OS (comments added for the settings relevant here) to allow for more memory usage by the database server and larger buffers for the network stack:

/etc/sysctl.conf:
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
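# Larger shared memory segments, allowing more memory usage by the database server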
kernel.shmmax = 1310720000
kernel.shmall = 4294967296
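# Larger buffers for the network stack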
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4098 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.core.netdev_max_backlog = 2500

Part II

Stay tuned for Part II of this blog by Mark Lassau and Andreas Knecht.
