I’m chuffed that the Human Capital Institute (HCI) and the Management Innovation eXchange (MIX) have announced Atlassian as a winner of the 2011 M-Prize. The prestigious prize, which celebrates innovation in HR, was awarded at one of the world’s largest HR events – the Human Capital Summit in Atlanta.
It’s been a long road since we announced our 12-month public trial of our new approach to performance reviews. Back then, we decided to “rip apart” the traditional performance review – a bi-annual time-sink with its dreaded forced rankings and, in our opinion, an underwhelming impact. We committed to sharing our findings openly with you, and it has been quite a ride since then! Our approach caught the attention of Fortune Magazine and generated plenty of conversation on the MIX post and in my first and second blog posts.
This blog: the Performance Check-in and the application we used
At Atlassian, we set out to create an approach to performance reviews that would energise and engage our people (rather than deplete and demoralise them). We replaced the traditional performance review with a series of monthly one-on-one meetings with specific topics of conversation, two of which are dedicated to discussing performance (what we refer to as performance check-ins). As promised, I’d like to take some time to zoom in on how we provide candid performance feedback without ratings during our performance check-ins.
Fast, visual, and effective.
How does our performance check-in work? We ask everyone to indicate how they have been going on two axes. By dragging a dot along one axis, you indicate your frequency of outstanding performance on a continuum from “never” to “always”. On the other axis, we ask the question “how often have you stretched yourself?” in order to acknowledge effort outside someone’s normal job responsibilities. The “never to always” scale helps people focus the conversation on how to improve the frequency of certain behaviours or results rather than focusing on a number.
Recently, I shared this idea with the cool team at Sonar6, who translated it into a blog post in their usual colourful way. Once a position on the axes is chosen, the logical next questions are: “why didn’t you position yourself more to the left or right, higher or lower?” The answers are written down in a simple text field below. The approach has encouraged a better, coaching-style review conversation.
The new application: Small-Improvements
With our different approach to performance reviews, we started scouting for a system that could replicate our model, after realising there was no system available that could support what we wanted. It was pretty obvious there was no way any of the traditional HR heavyweight software vendors like SuccessFactors or Taleo could deliver this. We finally found two awesome start-ups who have created really smart solutions: Small-Improvements and Cadence. Both are creating a new breed of continuous feedback and performance review tools. We felt Small-Improvements best fit our needs, as it packs all sorts of goodies into a nice lightweight app, can facilitate our performance check-in process, and captures continuous feedback/coaching.
Anyway, I can’t do the application justice here. We’ve been using it for a while now and have enjoyed the ride so far. Check out the videos and site.
Free Licenses: To celebrate the M-Prize win, Small-Improvements is giving away free licenses to 50 companies like Atlassian! First come, first served.
How long should a performance check-in take?
What is the appropriate amount of time someone should spend on a performance review, or in our case, a performance check-in? The idealists will probably say “none, as feedback should have been given throughout the year”. The perfectionists may say “however long it takes to provide a comprehensive review of all your actions/behaviours against a number of defined competencies”. All well and good, but reality is different.
I often find myself hesitating when asked how much time people should allocate. Truth is, I have more than once hunched over my computer for several hours to write my self-assessment. But here is the problem: the last hour I spent on my self-assessment was just adding polish – I wasn’t really adding any additional value.
The question is – how much time is reasonable? Every person is different and we’re not going to lay down a law. But it’s interesting to see that people add the most valuable information in the first 30 minutes. It takes some time to reflect and jot down notes that provide insight into why they’ve put themselves on a certain spot on the axes.
My hope is that we’ll get to the point where people write a great, pointed, and valuable self-evaluation in 20-30 minutes. After 30 minutes, substantial value can probably still be added, but reviews really shouldn’t take longer than 60 minutes (some special cases granted). I’d rather get people back to writing great code sooner than have them write the best, most polished reviews.