TrainingZone

Evaluation – beyond the happy sheet

Does training work? That's always the number one question, but we are still a long way from having a definitive answer.

It's nearly fifty years since Donald Kirkpatrick first put forward his four-level model for the evaluation of training, fifty years in which the idea of evaluation has taken hold, yet the practice is still a long way from full implementation. Most training evaluation makes some attempt to measure Kirkpatrick's first level, the participants' reaction to the training (Did they actively engage in the learning?). Unfortunately, 'happy sheets' are about as far as it gets in most cases, and that's often not very far! According to Kirkpatrick, evaluation should then go on to ask:

  1. Did learning take place? (Have the objectives of the learning and development activity been achieved – just because people got stuck into the learning activity doesn’t mean that they learnt anything, or that they learnt what they were supposed to learn!)
  2. Has learning transfer taken place? (Did the learning change how people behave in the workplace, in the way that it was intended to?)
  3. Has this changed performance brought about real improvements in the organisation’s performance? (For example, have errors been reduced, quality improved, money saved, etc?)

Unfortunately, training is often limited by the inability of trainers to stay actively involved beyond the initial stages of development: the identification of training needs (and sometimes even that's not in their control), and the design and delivery of training interventions. But beyond those are two more crucial stages:
  • The translation of what has been learnt into improved work performance

  • The opportunity for that improved performance to feed through into organisational improvement

Clearly, if training doesn't lead to improved individual performance, there is no way it can result in improved organisational performance. Establishing that learning has taken place is what assessment is intended to do, but it must be assessment that focuses on what learners can do with what they have learnt. And knowing what they should be doing doesn't mean that learners will make an effort to do it. That depends on whether or not they are committed and motivated to learn. Do they see the value of what they are doing? Has the learning activity been designed to meet their needs and expectations? Most importantly of all, do they want to be there? Without that connection between the learners and what is being learnt, there is little chance of it being translated into action.

So, if you know that people have learnt how to perform more effectively, and are motivated to apply what they have learnt, then you can have more confidence that their learning will translate into improved performance. However, for this to then be translated into organisational improvement there needs to be a real sense of understanding of the purpose and potential of the training by the management of the organisation – at all levels. What do managers want from the training? Why are they doing it? How does the training, and its expected changes in behaviour, fit into the general strategy of the organisation and the objectives and priorities of the individual managers involved?

All too often training is designed through a dialogue with training or HRD managers, but rarely with the line managers who will be in a position to influence directly how the outcomes of your programme are turned into action. Nor will you always get to meet the senior managers whose strategic objectives implicitly assume that the training will be effective. If you can't meet them, you need to question hard those people you do meet, to get a clear picture of the range of different expectations. And if you can't meet at least some of the learners beforehand, the least you can do is make sure that managers are asked searching questions about them, their current skill levels and their likely expectations.

Understanding Kirkpatrick's four levels of evaluation will help trainers to think through these questions. Measuring the impact of training at all four levels may not always be possible, but recognising that this chain of causal relationships exists is a significant step towards improving the likelihood that the training will have the desired impact. At ILM we are embarking on a significant programme of developing cost-effective ways of evaluating training at the four levels, and I'll keep you posted through this blog on the progress we're making.

David Pardey
