Navigating the learning evaluation maze

Kenneth Fee and Dr Alasdair Rutherford guide the community through the evaluation minefield.
The field of learning evaluation is not short of models or tools, nor of enthusiastic proponents of one model over another. The resulting debates can leave organisations that want to evaluate their learning activities confused about which way to turn.
 
This article reviews some of the most widely used learning evaluation methods and shows that each has advantages and disadvantages. It argues that the benefits of learning are wide and varied, and unlikely to be captured completely by any one model or tool. The starting point for organisations should be to identify what they need from evaluation. The Total Value Add approach, in which organisations select the combination of methods that addresses the value created by learning, helps you answer the questions that matter to your organisation.
 

Approaches to learning evaluation

There are two broad approaches to the evaluation of learning:
  • goal-based evaluation
  • system-based evaluation

Kirkpatrick Levels

The most commonly encountered learning evaluation framework is goal-based: the Kirkpatrick Levels (1959).
 
Kirkpatrick Levels break down the goals of learning evaluation as follows:
  1. Reactions of the learner – what did they think and feel about the training?
  2. Learning – what was the resulting increase in knowledge or capability?
  3. Behaviour – was there an improvement in behaviour and performance?
  4. Results – what effects on the organisation have resulted from the learner's performance?
The four levels are widely used in learning, and provide an accessible structure around which to build a learning evaluation system. However, the apparent simplicity of the levels masks the complexity that can be encountered in evaluating at each level. In particular, many organisations struggle to measure performance in levels 3 and 4.
 
"The starting point for organisations should be an identification of their needs from evaluation."

CIPP model

One of the more common system-based frameworks is the Context-Input-Process-Product, or CIPP, model. It provides a structure for designing a learning programme that places the design of the learning process in context. Evaluation questions can then be addressed to each of the four aspects to ensure the intervention is effective.
 
CIPP's focus on the learning programme as a system means it is well suited to formative evaluations, designed to test and improve learning interventions. System-based models are less effective, however, at measuring the link between learning and wider organisational goals, which makes summative evaluation more challenging.
 

Tools and techniques

Once you have selected an appropriate framework for your learning evaluation, there is then a wide selection of tools and techniques that can be applied to measure impact.
 

Return on Investment (ROI)

Return on Investment (ROI) is seen by some as a 'fifth level' in the Kirkpatrick model. It involves calculating the ratio of benefits to costs. The great advantage of this method is that it provides an easy-to-understand 'bottom line' figure for the impact of training on profitability. The challenge lies in accurately measuring both the full costs and the full benefits of learning, which can be particularly difficult where benefits are realised over time, or where there is uncertainty about potential benefits. ROI works best as a financial measure of learning performance to accompany other measures, or in combination with other evaluation methods to quantify wider and long-term benefits.
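As a rough illustration of the calculation, the basic training ROI figure is simply net benefit expressed as a percentage of cost. The figures below are hypothetical, not drawn from any real programme:

```python
# Hypothetical worked example of the basic training ROI calculation.
# All figures are illustrative only.

def training_roi(total_benefits, total_costs):
    """Return ROI as a percentage: net benefit relative to cost."""
    return (total_benefits - total_costs) / total_costs * 100

costs = 20_000      # e.g. course fees, materials, staff time released
benefits = 35_000   # e.g. estimated productivity gains over the period

print(f"ROI: {training_roi(benefits, costs):.0f}%")  # prints "ROI: 75%"
```

The arithmetic is trivial; as the article notes, the hard part is deciding what to count in `costs` and `benefits`, and over what period.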
 

Return on Expectation (ROE)

Return on Expectation (ROE) acknowledges that there are wider, difficult-to-measure impacts of learning and development that are not necessarily captured by the ROI method. Instead, the ROE method suggests that the expectations of what the learning intervention will achieve should be established, and the performance measured against these.
 
The process of measuring expectations can also be useful as part of the evaluation. For example, if the expectations of managers and learners diverge it is worth addressing this at the start.
 
The criticism of ROE is that expectations can be difficult to define, and it is also critical that they are realistic. To tackle this, expectations should be widely agreed on and should clearly fit the organisation's objectives. While on its own ROE will not necessarily provide the 'bottom line' measure required in some situations, it does capture the wider organisational benefits that a narrower measure of impact ignores.
 

Balanced Scorecard

Balanced Scorecard is a strategic planning and management tool for combining performance measurements. The Balanced Scorecard incorporates financial measures of performance along with customer, internal business processes, and learning and growth measures. Organisations can include metrics for the performance of their learning and development function within the learning and growth perspective.
 
The inclusion of learning as a top-level perspective shows its importance for an organisation's strategy and success. The Balanced Scorecard method allows you to view learning performance alongside other performance measures, without prescribing exactly how it should be measured. It is therefore ideal for combining with other learning evaluation tools.
 

Six Sigma

Six Sigma is a quality tool focusing on the removal of defects and errors. In the context of learning evaluation, it encourages evaluators to adopt the perspective, and the objectives, of the users of learning. The most prominent Six Sigma method of learning evaluation is DMADDI:
  • Define – what are the business requirements?
  • Measure – what targets do we need to meet?
  • Analyse – what needs to be learnt?
  • Design – how should we teach it?
  • Develop – does our prototype match our design?
  • Implement – did the implementation meet business and instructional requirements?
Six Sigma is likely to be used for learning evaluation mainly in organisations that use the method across their business. It is a thorough and detailed system, which focuses on quantitative measures, but it requires significant training and expertise in order to be implemented well.
 

Total Value Add

Total Value Add is not 'yet another' learning evaluation model. Instead, it is an attempt to deal with the complexity created by the wide variety of competing approaches, tools and techniques. The goal is to help organisations capture all the benefits of learning activities within their organisations.
 
Before choosing the right evaluation tools, organisations should identify the objectives that they intend to achieve through a learning programme, from the immediate impacts on learners through to the longer term business impacts. You need to ask:
  • What do we need to know?
  • How can we find it out?
  • What are we going to do with the findings?
Organisations should recognise the wide range of potential benefits from learning, including greater skills, improved quality, increased productivity, greater staff retention, improved morale, knowledge spillover and more. It is critical to be clear about who the stakeholders are and what objectives they are trying to achieve, and to identify overlaps or conflicts in these objectives.
 
"Don't be intimidated by what one expert says you 'must' evaluate. Do make sure that the learning evaluation you implement meets the needs of your organisation."
Evaluators then need to make clear the link between the learning activities, the benefits of the learning, and the strategic impacts or business outcomes. Armed with this information, the appropriate points and methods for evaluation can be selected. Robust research methods and digital technology will both play an important role. Evaluation can and should combine approaches to ensure that you accurately capture the impacts you are trying to measure.
 
Many learning impacts can't be measured directly and quantitatively, but this does not make them any less critical to organisational success. By focusing on gaining a better understanding of the impact of learning on these organisational outcomes through a range of approaches, the full contribution of learning to organisational outcomes can be captured. Without this, organisations risk missing out key benefits generated by their learning activities.
 
Don't be intimidated by what one expert says you 'must' evaluate. Do make sure that the learning evaluation you implement meets the needs of your organisation. With the Total Value Add approach organisations can ensure that they evaluate the aspects of learning that are important to their business, and successfully navigate the maze of learning evaluation models.
 
 
This article first appeared on the Airthrey website.
 
Kenneth Fee and Dr Alasdair Rutherford are the founding directors of learning evaluation firm Airthrey Ltd. Ken is a career learning and development professional, whose latest book, 101 Learning & Development Tools, deals with evaluation among other topics. Alasdair is an evaluation and econometrics specialist, and a Research Fellow at the University of Stirling.

 

Comments

paulkearns

The need for baseline measures doesn't get a mention? http://www.evidencebasedhr.com/?p=275

ken@airthrey.com

Hi Paul,

Thanks for your prompt comment. I agree that baseline measures are very important.

The purpose of this article is to review the most common methods and tools in learning evaluation and try to position them relative to each other. I would argue that taking a baseline measurement is common to many of the tools, but (as you yourself have argued in your book and your blog), it's usually not explicit in the basic model, e.g., the Kirkpatrick levels.

This is only the second of a series of articles, and we can't possibly hope to cover everything in one, but please look out for the others that will follow.  Meanwhile, thanks for highlighting the importance of the baseline.

Cheers,

Ken

Stephen J. Gill

I commend you on your effort to summarize all of the approaches to evaluation of learning interventions. This is not easy to do but you have done a nice job. I would suggest two additions. First, I would add the "Success Case Method" that was developed by Robert O. Brinkerhoff. It's a value-added approach that focuses on successful applications of learning in an organization and tries to understand how and why learning was applied to achieve important results. The other addition I would suggest is some discussion of unintended consequences of learning. These results should also be examined. My experience is that often the most important results of learning interventions were not and could not have been anticipated at the outset. However, stakeholders need to know how and why these changes occurred.

ken@airthrey.com

Hi Stephen,

Thanks for a useful contribution to the discussion.

We really didn't intend this to be a comprehensive review of all the possible approaches, methods and tools for learning evaluation, which simply wouldn't be possible for a short article.  But I think we should probably have included the Success Case Method, which is one of my personal favourites, and seems to be little understood or implemented in the UK.

What else have we missed?  Business Impact Modelling is another of my favourites.  Alternative goal-based approaches to Kirkpatrick, such as the nine outcome model proposed by Donovan and Townsend.  Dave Basarab's predictive evaluation.  The importance of an evidence-based approach.  Baseline measurement (as Paul Kearns said, above).  More on stakeholder consultation.  I could go on, but I won't.  This is a big subject, and we'll be picking up some of these themes in future articles in the series.

Thanks again,

Ken
