When Crowd first became an Atlassian product, it was built with a collection of Ant scripts and lived in CVS. This is quite common for a lot of projects out there, but it is something that can be improved on. Over the last six months the Crowd team has taken a phased approach to moving Crowd into the world of continuous integration.

Basically we have taken the following steps:

  1. Move from CVS to SVN
  2. Move from Ant to Maven 2
  3. Add some level of test coverage, and build this into our development process
  4. Add integration tests for the Crowd Console
  5. Hook all this up with Cargo and Maven 2
  6. Drop it into a continuous integration server

Moving from CVS to SVN

This was made rather simple by cvs2svn, a Python script provided by the Tigris community. Check out the previous link for more information on doing this for your own project.

Moving from Ant to Maven 2

This was a little trickier and required a fair few iterations. The initial cut was handled by Justen, and the end result was that running mvn package would generate our packages (JARs and WARs), but that was about it.
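For context, here is a cut-down, purely illustrative sketch of what such a Maven 2 parent pom can look like; the module names are assumptions, not Crowd's actual layout:

```xml
<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.atlassian.crowd</groupId>
    <artifactId>crowd-parent</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>
    <!-- Hypothetical module names: running "mvn package" from the parent
         builds each module's jar/war in dependency order -->
    <modules>
        <module>crowd-core</module>
        <module>crowd-web-app</module>
    </modules>
</project>
```

The parent pom is what makes "mvn package generates our packages" a one-command affair, since Maven 2 works out the inter-module build order itself.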

The next step was a little more involved and required us to write our own Maven 2 plugin. The goal was to have the plugin build a releasable version of Crowd for our customers (similar to a JIRA standalone release). With the help of some of our awesome CompSci and BIT students who were undertaking the massive task of moving Confluence from Maven 1 to Maven 2, we were able to get a rather simple release plugin working.

The crux of it would do the following:

  1. Build all dependent modules of Crowd; this is all handled by the Maven 2 package phase
  2. Grab a zipped copy of Tomcat 5.5.20, unzip it and copy the war file and any other required libraries into Tomcat. This was all done using Ant tasks (which Maven 2 supports); if you want to check this out, take a look at our plugin
  3. Grab all the sources for Crowd and its dependent Atlassian libraries and package them into their own archive. This was done using a Java mojo and a custom XML file that referenced the dependent sources and their locations. Here is a quick example of what this file looks like:
    <sourceIncludes>
        <sourceInclude>
            <groupId>atlassian-bucket</groupId>
            <artifactId>bucket</artifactId>
            <source>bucket</source>
            <scm.repo>svn-private</scm.repo>
            <scm.tag>atlassian_bucket_2006_06_15</scm.tag>
        </sourceInclude>
        .....
    </sourceIncludes>
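To give a feel for the mojo side, here is a minimal, hypothetical sketch (not our actual plugin code) of how such a sourceIncludes file can be read with the JDK's DOM API; the real mojo would then go and check out each source tree at the referenced SCM tag:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class SourceIncludeParser
{
    public static class SourceInclude
    {
        public final String groupId;
        public final String artifactId;
        public final String scmTag;

        public SourceInclude(String groupId, String artifactId, String scmTag)
        {
            this.groupId = groupId;
            this.artifactId = artifactId;
            this.scmTag = scmTag;
        }
    }

    // Parse a sourceIncludes document into one entry per <sourceInclude>
    public static List<SourceInclude> parse(String xml) throws Exception
    {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        List<SourceInclude> includes = new ArrayList<SourceInclude>();
        NodeList nodes = doc.getElementsByTagName("sourceInclude");
        for (int i = 0; i < nodes.getLength(); i++)
        {
            Element e = (Element) nodes.item(i);
            includes.add(new SourceInclude(text(e, "groupId"), text(e, "artifactId"), text(e, "scm.tag")));
        }
        return includes;
    }

    private static String text(Element parent, String tag)
    {
        NodeList list = parent.getElementsByTagName(tag);
        return list.getLength() > 0 ? list.item(0).getTextContent() : null;
    }

    public static void main(String[] args) throws Exception
    {
        String xml = "<sourceIncludes><sourceInclude>"
                + "<groupId>atlassian-bucket</groupId><artifactId>bucket</artifactId>"
                + "<source>bucket</source><scm.repo>svn-private</scm.repo>"
                + "<scm.tag>atlassian_bucket_2006_06_15</scm.tag>"
                + "</sourceInclude></sourceIncludes>";
        for (SourceInclude inc : parse(xml))
        {
            // prints atlassian-bucket:bucket @ atlassian_bucket_2006_06_15
            System.out.println(inc.groupId + ":" + inc.artifactId + " @ " + inc.scmTag);
        }
    }
}
```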
    

At this point we were at a 4 or 5 step release process which we didn’t think was too bad.
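As an aside, the "Ant tasks inside Maven 2" trick from step 2 is typically wired up via the maven-antrun-plugin. A rough, hypothetical sketch, in which the paths and the Tomcat version are illustrative only:

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>run</goal>
            </goals>
            <configuration>
                <tasks>
                    <!-- Unzip a bundled Tomcat and drop the war in; paths are made up -->
                    <unzip src="${basedir}/tomcat/apache-tomcat-5.5.20.zip"
                           dest="${project.build.directory}/standalone"/>
                    <copy file="${project.build.directory}/crowd.war"
                          todir="${project.build.directory}/standalone/apache-tomcat-5.5.20/webapps"/>
                </tasks>
            </configuration>
        </execution>
    </executions>
</plugin>
```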

Add some level of test coverage to Crowd

Currently one of the larger unit-tested areas of Crowd is the DAO layer. Since I was given the task of replacing Crowd's Hibernate 3 code with Spring's Hibernate 3 support, this was a perfect opportunity to sit down and create some test coverage for the existing Hibernate 3 code base, then rip it out and replace it with Spring. Since this was a DAO layer and I didn't really want to mock out the database calls (I wanted to make sure that what I was doing would work), I decided to use a mix of DBUnit and Spring's AbstractTransactionalDataSourceSpringContextTests.

Below are some of the more interesting pieces you might want to borrow if you implement something like this yourself.

Here we override the onSetUpBeforeTransaction() method of AbstractTransactionalDataSourceSpringContextTests (which would have to be one of the longest class names I have seen!) and setup our database via DBUnit:

protected void onSetUpBeforeTransaction() throws Exception
{
    super.onSetUpBeforeTransaction();

    // Setup the in-memory database with some sample data for testing
    DataSource ds = jdbcTemplate.getDataSource();
    Connection con = DataSourceUtils.getConnection(ds);
    IDatabaseConnection dbUnitCon = new DatabaseConnection(con);

    DatabaseConfig config = dbUnitCon.getConfig();
    // This is being done to add Boolean support to DBUnit for HSQL DB
    config.setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new HsqlDataTypeFactory());

    // Grab the sample data from the classpath and perform
    // a clean insert into HSQL DB
    InputStream datasetStream =
            com.atlassian.core.util.ClassLoaderUtils.getResourceAsStream("sample-data.xml",
                    BaseSpringTestCase.class);
    IDataSet dataSet = new FlatXmlDataSet(datasetStream);
    try
    {
        DatabaseOperation.CLEAN_INSERT.execute(dbUnitCon, dataSet);
    }
    finally
    {
        DataSourceUtils.releaseConnection(con, ds);
    }
}
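For completeness, the sample-data.xml loaded above is a standard DBUnit flat XML dataset: one element per row, named after the table, with the columns as attributes. The table and column names below are made up purely for illustration, not taken from Crowd's schema:

```xml
<dataset>
    <remoteprincipal id="1" name="admin" active="true"/>
    <remoteprincipal id="2" name="bob" active="false"/>
</dataset>
```

CLEAN_INSERT deletes whatever is in each referenced table and inserts these rows, so every test starts from the same known state.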

We also prepare the Spring context using a custom datasource for the tests, in this case an in-memory HSQL database. Here is a quick sample of the Spring config and properties file we are using for this:

<bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="classpath:jdbc.test.properties"/>
</bean>

<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName" value="${hibernate.connection.driver_class}"/>
    <property name="url" value="${hibernate.connection.url}"/>
    <property name="username" value="${hibernate.connection.username}"/>
    <property name="password" value="${hibernate.connection.password}"/>
</bean>

And the jdbc.test.properties file it points at:

hibernate.connection.driver_class = org.hsqldb.jdbcDriver
hibernate.connection.url = jdbc:hsqldb:mem:crowddaotest
hibernate.connection.username = sa
hibernate.connection.password =
hibernate.dialect = org.hibernate.dialect.HSQLDialect
hibernate.transaction.factory_class = org.hibernate.transaction.JDBCTransactionFactory

Now we can simply write our unit tests and have a database state that's consistent for each test run, since after each test Spring will roll back the transaction.

Add integration tests for the Crowd Console

Not having integration tests meant that every time we were about to release Crowd we'd spend a good few hours monkey-clicking through the application to make sure it was all working fine. That might not sound too bad, but we churn out point releases every two weeks or so, and that adds up to a lot of good dev time wasted. We're now using JWebUnit to automate this monkey-business, so before we commit we run our small suite of unit tests and integration tests (like every good developer should) over the Crowd code base.

Hook all this up with Cargo and Maven 2

The next natural step was to get these integration tests running as part of our build life-cycle with Maven 2. In comes Cargo to the rescue. Cargo is an excellent little project that can take a build artifact (like a war file) and deploy it to an application server; from there we can point our integration tests at the deployed webapp and see if everything passes.

This did take a little research and source-code sniffing (especially around the datasource configuration) to get working properly. But in the end we now have Cargo grabbing our war artifact, using an HSQL DB datasource and hooking into the integration-test phase of Maven 2.
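The pingURL/pingTimeout pair in our Cargo configuration is what tells Cargo to wait until the webapp actually answers before the tests start. As a self-contained illustration of that style of readiness check (using the JDK's built-in HTTP server as a stand-in for the deployed webapp; none of this is Cargo's actual code):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class PingCheck
{
    // True when the URL answers with HTTP 200 -- the same style of check
    // Cargo performs against pingURL before the integration tests run
    public static boolean isUp(String url)
    {
        try
        {
            HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
            con.setConnectTimeout(2000);
            con.setReadTimeout(2000);
            return con.getResponseCode() == 200;
        }
        catch (IOException e)
        {
            return false;
        }
    }

    public static void main(String[] args) throws IOException
    {
        // Stand-in for the deployed webapp: a throwaway JDK HTTP server
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/crowd", exchange ->
        {
            exchange.sendResponseHeaders(200, -1);
            exchange.close();
        });
        server.start();
        try
        {
            String url = "http://localhost:" + server.getAddress().getPort() + "/crowd";
            System.out.println("webapp up: " + isUp(url));
        }
        finally
        {
            server.stop(0);
        }
    }
}
```

If the check times out (Cargo's pingTimeout, 240 seconds in our pom below), the build fails fast rather than running tests against a half-started container.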

Here is a quick sample of how this is configured in our Maven 2 pom; hopefully it can help a few others out there trying to do the same thing:

<plugin>
    <groupId>org.codehaus.cargo</groupId>
    <artifactId>cargo-maven2-plugin</artifactId>
    <configuration>
        <wait>false</wait>
        <container>
            <containerId>tomcat5x</containerId>
            <zipUrlInstaller>
                <url>http://apache.wildit.net.au/tomcat/tomcat-5/v5.5.23/bin/apache-tomcat-5.5.23.zip</url>
                <installDir>${installDir}</installDir>
            </zipUrlInstaller>
            <timeout>120000</timeout>
            <output>output.log</output>
            <log>cargo-log.log</log>
            <dependencies>
                <dependency>
                    <groupId>hsqldb</groupId>
                    <artifactId>hsqldb</artifactId>
                </dependency>
            </dependencies>
        </container>
        <configuration>
            <home>${project.build.directory}/tomcat5x/container</home>
            <properties>
                <cargo.servlet.port>8095</cargo.servlet.port>
                <cargo.logging>high</cargo.logging>
                <cargo.datasource.datasource>
                    cargo.datasource.url=jdbc:hsqldb:mem:crowd_cargo|
                    cargo.datasource.driver=org.hsqldb.jdbcDriver|
                    cargo.datasource.username=sa|
                    cargo.datasource.password=|
                    cargo.datasource.type=javax.sql.DataSource|
                    cargo.datasource.jndi=jdbc/CrowdDS
                </cargo.datasource.datasource>
            </properties>
            <deployables>
                <deployable>
                    <groupId>com.atlassian.crowd</groupId>
                    <artifactId>crowd-web-app</artifactId>
                    <type>war</type>
                    <properties>
                        <context>crowd</context>
                    </properties>
                    <pingURL>http://localhost:8095/crowd</pingURL>
                    <pingTimeout>240000</pingTimeout>
                </deployable>
            </deployables>
        </configuration>
    </configuration>
    <executions>
        <execution>
            <id>start-container</id>
            <phase>pre-integration-test</phase>
            <goals>
                <goal>start</goal>
            </goals>
        </execution>
        <execution>
            <id>stop-container</id>
            <phase>post-integration-test</phase>
            <goals>
                <goal>stop</goal>
            </goals>
        </execution>
    </executions>
</plugin>

The end result: we have Crowd building, running its unit tests, deploying to Tomcat 5.5 and running the integration tests. Awesome! Even fewer monkeys than before.

Now it is time to drop in some continuous integration

So now that we have tests of various flavours, it's time to hook it all up to a CI server so we get quick reports back when we've done something that breaks a test. Our CI server lets us know if our builds pass on Java 1.4/5/6, so we don't have to manually build with a particular JDK on our dev machines.

Currently we’re deploying to Tomcat 5.5 and using HSQL DB (which is our standalone release), but our CI server will let us have multiple build plans so we can start plugging in different application and data servers. So one of our next goals will be setting up different plans and configurations and having these run on every commit into SVN.

Some other positive side effects

  1. Our release process (i.e. building the zip and tar.gz files that our customers download) was still a 5+ step process. We have now knocked the building of these files down to a one-button/one-script release, which is one of the goals set out by Martin Fowler in his CI essay. There are still a few manual steps around uploading Javadocs and the archives, but we can probably knock these off in a further enhancement to our deployment script.
  2. Oh, and we now have a few fewer monkeys in our test/build/deploy process (even though peanuts are cheap!)
