Sometimes you don't need statistics. Sometimes you don't need retrospectives. Sometimes you don't need superlatives.
Performance metrics are important. At school, in sports, and in the arts, people want to know how well they are doing. They get grades for their knowledge of math, languages, and geography; rankings for their performances in football, basketball, and tennis; and ratings for their books, plays, or TV shows. If you don’t know how you’re doing, you cannot tell whether you’re doing better next time. That’s why people want to know their score on a Microsoft certification exam. It’s why they hook up their Nike shoes to their iPods, tracking their running achievements. And it’s why I’m looking forward to your Amazon ratings for my book. 🙂
One responsibility of a manager is to make sure that employees know and understand how well they are doing their jobs. And whether you are producing metrics for individuals or for groups, there are a number of tips you may want to keep in mind when measuring performance:
1) Distinguish skill from discipline
In a previous blog post I discussed two rankings for maturity: skill and discipline. You may wish to evaluate people and teams separately on both. This reminds skilled people (who may think they’re too good to fail) not to neglect discipline. And it prevents overconfidence in disciplined people (who may think they’re good simply because they follow procedures). Some examples of measuring discipline: the task board is up to date, meetings start on time, code coverage always > 95%. Some examples of measuring skill: no build failures, few bugs reported, and customer demos always accepted.
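To make the separation concrete, here is a minimal sketch of a two-dimensional scorecard in Python. The structure and metric names are my own illustration, not a prescribed format:

```python
# Hypothetical scorecard rating one team separately for discipline and skill.
scorecard = {
    "discipline": {
        "task_board_up_to_date": True,
        "meetings_start_on_time": True,
        "code_coverage_above_95_percent": False,
    },
    "skill": {
        "no_build_failures": True,
        "few_bugs_reported": True,
        "customer_demos_accepted": True,
    },
}

# Report the two dimensions separately; never collapse them into one number.
for dimension, checks in scorecard.items():
    score = sum(checks.values()) / len(checks)
    print(f"{dimension}: {score:.0%}")
```

A team that scores 100% on skill but 67% on discipline calls for a very different conversation than a single overall score of 83% would suggest.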
2) Do not rate knowledge or experience
I see knowledge and experience as prerequisites for skill and discipline, but I believe measuring people’s knowledge and experience doesn’t make much sense. Knowledge and experience are about being; skill and discipline are about delivering. As a writer, I don’t get ratings for being a writer. I get ratings for delivering a book. Nobody in your organization should earn good ratings for their knowledge and experience while wasting their time playing Tetris.
3) Rate multiple activities
Each of us is good at some things and not at others. You can accept the humiliation of a bad rating for one activity when there is another on which you’ve scored well. Similarly, employees can accept criticism more easily when it is balanced by compliments in other areas. Having multiple ratings also makes it easier to be honest and fair to a person. Rate people and teams for the quality of a software release and its timeliness, for customer satisfaction and cost-effectiveness, for adherence to official standards and for team flexibility.
4) Rate multiple performances
One of my high school teachers had a system of at least ten test scores per student per year, and he promised not to count the lowest one, because “we all have a bad day sometimes.” In general, people prefer to be rated multiple times for similar activities. They want a chance to do better next time. Rate them for each project they do, and for each new release that goes into production.
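My teacher’s scheme is easy to express in code. A minimal sketch, assuming scores on a ten-point scale; the function name and the numbers are mine, purely for illustration:

```python
def average_dropping_lowest(scores):
    """Average a list of scores while ignoring the single lowest one,
    because 'we all have a bad day sometimes.'"""
    if len(scores) < 2:
        return sum(scores) / len(scores)  # nothing to drop yet
    kept = sorted(scores)[1:]  # discard the lowest score
    return sum(kept) / len(kept)

# Ten hypothetical test scores for one student in one year.
scores = [7.5, 8.0, 4.0, 9.0, 8.5, 7.0, 8.0, 9.5, 6.5, 8.0]
print(average_dropping_lowest(scores))  # 8.0 -- the bad day (4.0) doesn't count
```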
5) Use relative ratings where possible
Compare the performance of a team against their own previous performances over time (“you’re now doing 15% better than last time”); against other teams in the organization (“you’re doing 20% worse than the guys in project X”); or against external businesses (“we’re doing 32% better than company B”). With relative metrics, teams can strive to do better every time, instead of trying to meet one fixed target and then staying there.
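Computing a relative rating is trivial; the value lies in choosing a meaningful baseline. A quick sketch, with hypothetical numbers picked to match the first example above:

```python
def relative_change_percent(current, baseline):
    """Percentage change of a measurement versus a baseline.
    Positive means the number went up; negative means it went down."""
    return (current - baseline) / baseline * 100

# Hypothetical cycle times in days; lower is better, so a negative
# change is actually an improvement.
last_release, this_release = 20.0, 17.0
change = relative_change_percent(this_release, last_release)
print(f"{abs(change):.0f}% {'better' if change < 0 else 'worse'} than last time")
# -> 15% better than last time
```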
6) Keep the feedback loop as short as possible
There should be as little delay as possible between an activity and the feedback from the metrics. This is one of the reasons I started writing a blog before writing a book: I needed the immediate feedback from my blog’s readers to learn how to write better. Only a year and a half later did I feel confident enough to start writing a book, which has a much longer feedback cycle.
7) Use both leading and lagging indicators
Leading indicators are metrics that, when they change, suggest that you might be on the right track toward your goal. (Example: increased code coverage of unit tests might indicate higher quality in a product.) Lagging indicators are metrics that verify whether or not you have achieved a goal, after the work is done. (Example: a drop in defects reported by customers confirms quality after the product’s release.) In general, you should use both: leading indicators to steer the work while it happens, and lagging indicators to confirm the results afterwards.
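The distinction is easy to encode in a simple metrics dashboard. A toy sketch; all indicator names, values, and targets are made up:

```python
# Toy dashboard that keeps leading and lagging indicators apart.
indicators = {
    "leading": {   # observable while the work is still in progress
        "unit_test_coverage": (0.92, ">=", 0.95),
        "build_success_rate": (0.98, ">=", 0.95),
    },
    "lagging": {   # only measurable after the release has shipped
        "defects_reported_by_customers": (3, "<=", 5),
        "support_tickets_per_month": (12, "<=", 10),
    },
}

for kind, metrics in indicators.items():
    for name, (value, op, target) in metrics.items():
        ok = value >= target if op == ">=" else value <= target
        status = "on track" if ok else "off track"
        print(f"{kind:8} {name}: {value} (target {op} {target}) -> {status}")
```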
8) Never create the ratings yourself
The value of your opinion as a manager about the performance of a person or team is very, very, very small. Make sure that all ratings, whether qualitative or quantitative, are produced by the environment. Not by you. You may be the messenger sometimes, but not the assessor. Be the judge, not the prosecutor.
Speaking of judges… Yes, I plead guilty (again). Like many other naïve managers in the world, I have personally ranked and rated employees, once per year, using a single value on a five-level scale. But I regret that now. I believe that people should be rated with multiple ratings, multiple times, and as soon as possible. And not by me. Let the world know I’m sorry. It won’t happen again.
This article will be part of the book Management 3.0: Leading Agile Developers, Developing Agile Leaders. You can follow its progress here.