This blog is made up of two parts. In this first part, I will cover Jira 4 performance compared to Jira 3.13. In the second part,
Mark Lassau and Andreas Knecht will talk about the benefits of automated performance tests and the improvements to the Jira codebase made possible by regular performance telemetry data.
Atlassian has performance tests which are used internally to benchmark the speed of Jira. These tests have been designed to work with a blank installation of Jira and consist of two components: a setup test and a fixed load test.
The setup test isn’t so named because it sets up Jira, but because its purpose is to prepare an installation of Jira for performance testing. It assumes that you’ve set Jira up to the point where you’ve entered a license and are faced with a blank instance. It starts by creating a number of projects, then adds users and assigns them to the correct group so they are able to work on issues in those projects. Finally, it creates a number of issues and, during this creation process, adds comments to and closes a percentage of them.
The fixed load test runs after the setup test. A number of simulated user groups log in with the usernames created during setup and perform common actions such as browsing, searching, commenting on issues, and resolving issues. The test runs for a fixed amount of time at a level of load determined by the number of simulated users configured to run.
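To make the shape of that fixed load test concrete, here is a minimal sketch in Python. The real tests are JMeter test plans, not Python, and the action weights, test duration, user count, and the `perform()` placeholder below are all illustrative assumptions rather than the values Atlassian actually uses.

```python
import random
import threading
import time

# Illustrative action mix; the real weights come from production access logs
# (discussed just below) and are wired into a JMeter test plan, not Python.
ACTION_WEIGHTS = {
    "view_dashboard": 0.30,
    "view_issue": 0.25,
    "search_issues": 0.25,
    "comment_on_issue": 0.15,
    "resolve_issue": 0.05,
}

TEST_DURATION_SECONDS = 600   # assumption: the post only says "a fixed amount of time"
SIMULATED_USERS = 40          # the knob that determines the level of load

def perform(action):
    """Placeholder for the HTTP request(s) behind one user action."""
    time.sleep(random.uniform(0.05, 0.2))   # stand-in for a real round trip

def simulated_user(stop_at):
    """One simulated user repeatedly picks a weighted action until time is up."""
    actions, weights = zip(*ACTION_WEIGHTS.items())
    while time.time() < stop_at:
        perform(random.choices(actions, weights=weights, k=1)[0])

def run_fixed_load_test():
    """Run a fixed number of simulated users for a fixed amount of time."""
    stop_at = time.time() + TEST_DURATION_SECONDS
    workers = [threading.Thread(target=simulated_user, args=(stop_at,))
               for _ in range(SIMULATED_USERS)]
    for worker in workers:
        worker.start()
    for worker in workers:
        worker.join()

if __name__ == "__main__":
    run_fixed_load_test()
```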
It’s important to try to mirror the distribution of requests you expect to see in production. For Atlassian, that means one of the load profiles we test against is derived by analysing log files from our public Jira instance, http://jira.atlassian.com.
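As a rough illustration of how a request-mix profile can be pulled out of web server access logs, the sketch below counts URL paths in a common/combined-format log file and turns the counts into traffic shares. The file name is a placeholder, and grouping by raw path is a simplification of the real analysis.

```python
import re
from collections import Counter

# Matches the request line of a common/combined-format access log entry.
REQUEST_RE = re.compile(r'"(?:GET|POST) (?P<path>\S+) HTTP/[\d.]+"')

def request_mix(log_path):
    """Return each URL path's share of total requests, as a fraction."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            match = REQUEST_RE.search(line)
            if match:
                # Strip query strings so /browse/JRA-123?page=... groups sensibly.
                counts[match.group("path").split("?")[0]] += 1
    total = sum(counts.values())
    return {path: count / total for path, count in counts.items()}

if __name__ == "__main__":
    # "access_log" is a placeholder file name.
    mix = request_mix("access_log")
    for path, share in sorted(mix.items(), key=lambda item: item[1], reverse=True)[:10]:
        print(f"{share:6.2%}  {path}")
```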
Although the load profile is real, the data used in the tests described below is synthetic. This is due to limitations in the performance tests that work with Jira 3.13. The Jira 4 tests can be run against a clone of an existing installation; however, since this cannot be done with the 3.13 tests, there would be nothing to compare against!
For this test, the setup test was configured to create 10,000 issues in 20 projects. The default permission schemes were used, and the sample text for issues and comments was drawn from around 12 MB of English-language text.
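As an illustration of what that setup configuration implies, here is a small sketch that plans the data set before anything is created. The comment and close probabilities are assumptions (the post only says "a percentage of issues"), and the remote calls that would actually create the projects, users, and issues are deliberately left out.

```python
import random

# Values stated in the post; the two probabilities are illustrative assumptions.
PROJECTS = 20
ISSUES = 10_000
COMMENT_PROBABILITY = 0.5
CLOSE_PROBABILITY = 0.3

def plan_issues():
    """Yield (project_index, add_comment, close_issue) for each issue to create."""
    for n in range(ISSUES):
        yield (
            n % PROJECTS,                             # spread issues evenly across projects
            random.random() < COMMENT_PROBABILITY,    # comment on some issues as they are created
            random.random() < CLOSE_PROBABILITY,      # close a percentage of issues
        )

if __name__ == "__main__":
    plan = list(plan_issues())
    commented = sum(1 for _, comment, _ in plan if comment)
    closed = sum(1 for _, _, close in plan if close)
    print(f"{len(plan)} issues across {PROJECTS} projects: "
          f"{commented} with comments, {closed} closed")
```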
Load-wise, the Jira 3.13 tests attempt to perform about 30-40 requests per second. For Jira 4 this rises to 70-80 requests per second, due to the extra requests made to simulate dashboard gadgets. Under Jira 4, this works out to around 20,000 requests to view an issue and 20,000 requests to the issue navigator.
Speaking of the dashboard, each simulated dashboard view in the Jira 4 tests is made up of the following requests:
- Main Dashboard page
- Request each of the served Gadget iframe URLs
- Make 4 REST calls – 3 JQL and 1 project summary
The “Dashboard” result that is reported is the sum of the response times for these requests made in serial, not in parallel. In a browser such as IE or Safari, these requests would be made in parallel, depending on how many connections the browser in question opens to the backend Jira instance.
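The sketch below shows the difference between that serial sum and the wall-clock time a browser with several parallel connections would see. The base URL, the URL list, and the connection count are assumptions; the real figure comes out of the JMeter results, not code like this.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Hypothetical sub-requests behind one dashboard view; the real URLs depend on
# the gadgets configured on the instance under test.
DASHBOARD_URLS = [
    "http://localhost:8080/secure/Dashboard.jspa",  # assumed base URL and dashboard path
    # ...each served gadget iframe URL and the 4 REST calls would follow here...
]

def timed_fetch(url):
    """Fetch one URL and return its response time in seconds."""
    start = time.perf_counter()
    urlopen(url).read()
    return time.perf_counter() - start

def serial_dashboard_time(urls):
    """The reported 'Dashboard' figure: individual response times, summed."""
    return sum(timed_fetch(url) for url in urls)

def parallel_dashboard_time(urls, connections=6):
    """Closer to a browser: wall-clock time with several concurrent connections."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=connections) as pool:
        list(pool.map(timed_fetch, urls))
    return time.perf_counter() - start
```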
All testing was performed on the following hardware & software:
| Server Platform | CPU | Physical Memory | Hard Disk |
| --- | --- | --- | --- |
| Dell R610 | 2 x Intel ‘Nehalem’ Xeon E5520 (Quad Core) | 32 GB (8 x 4 GB DDR3) | 2 x 15K 146 GB SAS, RAID 1 |
| Atlassian Jira | MySQL Database | Tomcat Application Server | Java |
| --- | --- | --- | --- |
| 4.0.0-RC1 | 5.0.45-7 | 5.5.27 | Java(TM) SE (build 1.6.0_07-b06), Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode) |
| 3.13.5 | 5.0.45-7 | 5.5.27 | Java(TM) SE (build 1.6.0_07-b06), Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode) |
Performance tests were conducted with Apache Jakarta JMeter 2.3.4.
All tests were run on Red Hat Enterprise Linux 5.3 (Tikanga) 64-bit (kernel 2.6.18-128.2.1.el5). The filesystem used for all tests was ext3 with the default options. Tuning was also applied to the OS to allow the database server to use more memory and to give the network stack larger buffers.
Stay tuned for Part II of this blog by Mark Lassau and Andreas Knecht.