So we've all had those periodic performance appraisals ... objective setting, seeing how we've done against the last set of objectives, and so on, making sure those objectives are S*M*A*R*T, of course(!)
... well, my problem with this is that it's not really a level playing field. Sure, let's agree on some objectives for the future and see how we've done against them; but not all objectives were created equal. It's kind of like saying, "Hey Fred, you need to get Jim to run the length of Britain in two days", and that becomes Fred's objective. So, Fred's objective is to tell Jim to run the length of Britain, and Jim's objective is to run the length of Britain in two days. Now, it seems to me that Fred is likely to achieve his objective, so well done Fred. And Jim - well, not so likely. If we are measuring performance against objectives, then Jim fails and Fred succeeds.
Okay, in reality Jim is unlikely to agree to that objective, but more realistic cases are commonplace. Your job is to manage me whilst I build a new social network application; my job is to build the application. Is it fair that we both get judged in the same way against those objectives? How often have you seen 'objective complexity' included in the assessment? Should it be?
I think it's a bit like Olympic diving competitions. Each dive has a difficulty rating, and that caps how high a mark you can actually achieve. Even if you do the perfect dive at a difficulty of 2.3, you'll still score below someone doing a merely decent dive at a difficulty of 3.7. All I'm saying is that easier objectives should offer less reward. But have you seen this in practice? Who goes around normalising the objectives, rather than normalising the performance grades?
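To make the diving analogy concrete, here's a minimal sketch of difficulty-weighted scoring, assuming a diving-style rule where the final score is execution quality multiplied by difficulty. The function name, scales, and numbers are all illustrative, not taken from any real appraisal scheme or the official diving rules.

```python
def weighted_score(execution: float, difficulty: float) -> float:
    """Hypothetical difficulty-weighted score.

    execution: how well it was done, graded 0-10.
    difficulty: a multiplier reflecting how hard the objective was.
    """
    return round(execution * difficulty, 2)

# A perfect execution of an easy objective (difficulty 2.3)...
easy_perfect = weighted_score(10.0, 2.3)   # 23.0
# ...still scores below a decent execution of a harder one (difficulty 3.7).
hard_decent = weighted_score(7.0, 3.7)     # 25.9

print(easy_perfect, hard_decent)
```

The point of the multiplier is exactly the normalisation argued for above: the difficulty of the objective is baked into the score, rather than everyone being graded on execution alone.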
Let's even things out by recognising that some things are harder to achieve than others.