Charity Director. Consultant in OD, strategic HR and leadership development. Author of five books including Delivering E-Learning (Kogan Page, 2009). Interested in strategy, OD, leadership, technology, evidence and evaluation. Available for writing and speaking.
My discussion replies
I would go further than Clive Boorman and A P Chester, and suggest you consider abandoning your "evaluation sheets" altogether.
I don't think there are any questions that "have to be asked ... right after the training has been completed".
Clarify your aims: decide what you want to know, and from that decide what you will measure and how - not least who you are going to ask. If you want to know what learners think about training, interviews or focus groups with a small sample will probably give you the answers you need. You shouldn't go about p*ssing off large populations of learners by carpet-bombing them with reactionnaires or 'happy sheets', either immediately after training or after a respectful interval. How much of this data do you ever actually use anyway?
Shift the focus of your evaluation to the performance improvements you gain, and the business results that training/learning contributes to. Evaluation should be primarily about measuring the impact of learning, not learner satisfaction - far too much of our effort goes into the latter and far too little into the former.
Have a look at the free papers at www.airthrey.com/papers.htm - all previously published in TrainingZone.
Hi Neil,
The relative silence may be because people like me are wondering what the problem is. Perhaps part of the confusion is about what you mean by "metrics" - do you mean standard cost headings that make up a template? Are there costs you have difficulty quantifying? I suspect you've already hit on the solution by simply extracting the bit that doesn't apply - the cost of presentation input.
I came onto this topic because I thought you were having problems with the value (outputs) side of things, but measuring costs ought to be relatively straightforward, surely?
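If it helps make the template idea concrete, here's a minimal sketch of cost headings being totted up - the headings and figures are purely illustrative, not a standard taxonomy, and the "extract what doesn't apply" step is just a zeroed line:

```python
# Illustrative training cost template.
# Headings and figures are hypothetical examples, not a standard.

cost_headings = {
    "design_and_development": 4000.00,
    "presenter_fees": 0.00,       # the bit that doesn't apply, extracted
    "venue_and_catering": 1200.00,
    "materials": 300.00,
    "learner_time": 5600.00,      # e.g. 20 learners x 1 day x day rate
    "administration": 450.00,
}

total_cost = sum(cost_headings.values())
print(f"Total training cost: {total_cost:,.2f}")
```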
Happy to discuss further.
Ken
Why are we so obsessed with happy sheets? Is an immediate, on-the-spot reaction more valuable than a survey a day, a week, or a month later? (The comparison may be interesting, but it doesn't justify our blanket use of these surveys.) The captive audience is one reason, but I suspect most of the captives are none too happy about it!
One alternative I've used is ballot boxes, into which learners deposit tokens indicating their agreement (or not) with key statements. Quick and more fun.
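Collating the result takes minutes - a hypothetical sketch of the count, with invented statements and token numbers:

```python
# Hypothetical tally of ballot-box tokens per key statement.
# Statements and counts are invented for illustration.

ballots = {
    "The session was relevant to my job": {"agree": 18, "disagree": 4},
    "I can apply this within a month":    {"agree": 12, "disagree": 10},
}

for statement, tokens in ballots.items():
    total = tokens["agree"] + tokens["disagree"]
    pct = 100 * tokens["agree"] / total
    print(f"{statement}: {pct:.0f}% agreement ({total} tokens)")
```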
But I think the key question to ask is what do we hope to learn from learner reactions, and how will we use that information? I've yet to hear an answer to this question that justifies carpet-bombing with happy sheets...
Good question and good discussion.
It's all about value: defining what that value is, and how it may be measured (and no matter how difficult, almost anything can be measured - whether it's worthwhile is another question).
There are a number of free papers here: http://www.airthrey.com/papers.htm, most of which have been republished by TrainingZone. The seventh paper, to be published next week, is on Business Impact Modelling, a technique for identifying the key indicators and tracking the impact of learning activities on them.
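As a rough illustration of the indicator-tracking idea - not the Business Impact Modelling method itself, just a baseline-versus-after comparison with invented indicators and figures:

```python
# Rough sketch of tracking key business indicators before and
# after a learning activity. Indicator names and values are
# invented; this shows the general baseline-vs-after idea, not
# the Business Impact Modelling technique from the paper.

baseline = {"first_call_resolution_pct": 62.0, "avg_handle_time_min": 9.4}
after_training = {"first_call_resolution_pct": 71.0, "avg_handle_time_min": 8.1}

for indicator in baseline:
    change = after_training[indicator] - baseline[indicator]
    print(f"{indicator}: {baseline[indicator]} -> "
          f"{after_training[indicator]} (change {change:+.1f})")
```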
Hi Paula,
May I direct you to my 2009 article, The Seven Pillars of the Corporate University: http://www.learnforever.co.uk/articles.htm. If there is an omission from that article, it's that I don't discuss at any length the critical issue of winning hearts and minds - you appear, rightly, to have identified engagement as the most important question ahead of you. Having set up two corporate universities/academies, studied two others in depth, and failed to convince a further organisation of the concept's merits, I wish you well.
(The 'Seven Pillars' article, and its follow-up, 'How to Get Value from a Corporate University', both appear in my 2011 book, 101 Learning & Development Tools: http://www.learnforever.co.uk/books.htm.)
Good luck,
Ken
I agree this is a fabulous question. And I especially like FrancesMF's point, "but wouldn't it be wonderful if, as a profession, our goals were not just about scores on happy sheets, but about knowing what our learners have achieved following the training. Then we might just find out that the way we split our time naturally changes".
At Airthrey, we offer clients a rule of thumb that 10% to 15% of their training budget (counting time as well as money) should be committed to training evaluation. Clearly this will vary with circumstances, but the problem is that many L&D professionals largely ignore evaluation - see my recent blog post, Five Ostriches: http://learnforeverblog.blogspot.co.uk/2012/04/five-ostriches.html
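As a quick worked example of that rule of thumb (the budget figure is invented):

```python
# Worked example of the 10-15% evaluation rule of thumb.
# The budget figure is invented for illustration.

training_budget = 100_000  # total annual training budget (time + money)

low, high = 0.10 * training_budget, 0.15 * training_budget
print(f"Suggested evaluation commitment: {low:,.0f} to {high:,.0f}")
```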
If more people thought more about this question, I suspect more time would be spent on evaluation.