Mark Bouch

Leading Change

Managing Director

Measuring L&D impact: a realistic approach in a VUCA climate

Since 1954, when Dr Donald Kirkpatrick first published the ideas that led to the Kirkpatrick Model used by many learning and development professionals, considerable effort has been invested in measuring the effectiveness of L&D initiatives.

We now live and work in a dynamic and ever-changing world. It’s a volatile, uncertain, complex and ambiguous (VUCA) environment.

Businesses now need more agility to react to fleeting opportunities, adopt new strategies, and build (then absorb) new capabilities necessary to adapt to emergent trends.

L&D professionals often field questions from clients like ‘how will people apply it in practice?’ and ‘what impact will it have on business results?’

I suspect we’re not alone in finding that promising early discussions about measuring performance impact and return on investment (ROI) often taper to nothing when clients lack the time, resources or will to track measurable outcomes and establish their financial value.

Additionally, it can be hard to persuade clients to measure effectiveness in a disciplined way - so is there another way?

Kirkpatrick remains the gold standard

Kirkpatrick is the best-known M&E (measuring effectiveness) model, focused on four distinct levels of learning evaluation: reaction, learning, behaviour and results.

The model’s levels are closely related in a hierarchy to indicate how effective a learning intervention is in equipping learners with new skills and knowledge. It also looks at how they apply learning in the workplace and how those changes in behaviour lead to tangible benefits for the organisation.

Other derivatives include levels for evaluating ROI and intangibles (benefits that cannot readily be expressed in financial terms).

The problem is that many business leaders don’t pay enough attention to the ‘gold standard’.

Many myths surround M&E of learning effectiveness, including:

  • Measurement (especially ROI) is too complex or takes too much time/effort.
  • There’s no point - senior managers don’t require it.
  • It’s too subjective or imprecise.
  • Businesses are unwilling to pay for it.

As a result, correlating training delivery and business impact remains challenging for many.

Doing comprehensive Kirkpatrick-style evaluation can be complex, time-consuming and expensive.

While I’ve yet to meet a business leader who didn’t expect a positive return from L&D investment, I’ve met more than a few who didn’t want to spend time or money on M&E.

Common refrains include ‘we have too much to do already’, ‘we have too many surveys’ and ‘I’ll know when it’s working’. This suggests evaluation is not always a business priority, or that it is perceived as onerous.

Why evaluate?

It seems many are aware of their need to improve this – in a recent Towards Maturity study, 96% of respondents stated that they were looking to improve the way they gather and analyse data on learning impact. Unfortunately, only 17% were actually doing it, and that number is decreasing over time.

Much learning is now informal, self-directed and takes place ‘on-the-job’ as people learn from experience.

We need to ensure we can prove that contemporary approaches to learning are working, when they cannot easily be observed in training rooms.

As a consultant, I champion the need to prove my worth to business decision-makers.

Even if I don’t get handsomely paid for doing the evaluation, the exercise will enhance my credentials and ability to win more business.  

Ask yourself why you need to evaluate learning and what the benefit will be for you and the client.

Case study

A client (a high technology business designing, manufacturing and racing Formula 1 cars) wanted to implement a modular leadership development programme designed to address opportunities identified by executive managers.

They believed the programme would help them maintain or improve results in a competitive environment.

Each module was accompanied by simple programme evaluation to determine reaction and satisfaction with learning delivery.

Whilst learning was ‘hot’, we asked questions about how easy it would be to apply in the workplace and established learner commitment to do so.

We achieved M&E through three methods: 

  1. HR business partners and managers were asked to observe and note evidence of specific practices and behaviours indicating how the skills and knowledge were being applied in practice. We collated and compared their collective observations and insights.
     
  2. The business already conducted a peer-rated leadership ranking exercise. They were able to extract anonymised data for the learner cohort to see whether rankings changed after the learning intervention – and in fact they had.
     
  3. Before the intervention we identified a sub-set of questions from the client’s staff engagement survey linked to areas we expected to improve. Although multiple variables were in play, these proxy measures consistently improved when the survey was next run.

Adapting to the future

Evaluate relevance, but don’t be scared of the truth 

Traditional M&E tends to look at reaction to learning interventions. Small tweaks can be made to improve measurement of delivery.

If you’re doing an evaluation anyway, ask your learners questions about the relevance to their job, how easy it will be to apply this learning at work and the killer question, ‘would you recommend this programme to others?’ – adapting the Net Promoter Score (NPS) methodology.

Clients get it. The numbers don’t lie and they provide no place to hide.
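If it helps to see the arithmetic, here is a minimal sketch of the standard NPS calculation (the function and sample scores are illustrative, not taken from the case study): respondents scoring 9-10 count as promoters, 0-6 as detractors, and the score is the percentage of promoters minus the percentage of detractors.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Illustrative responses to 'would you recommend this programme to others?'
responses = [10, 9, 8, 7, 9, 10, 6, 8, 9, 3]
print(f"NPS: {nps(responses):.0f}")  # 5 promoters, 2 detractors -> NPS 30
```

Tracked over successive cohorts, the same calculation gives a simple trend line for programme relevance.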

Take account of variables but don’t worry too much about them

In an ideal world we would isolate all variables and measure only the impact of learning. Sadly, we don’t live in an ideal world!

You will worry yourself mad trying to identify every variable and perfect measures for each, even if your client has the time and money to invest - don’t bother.

Where multiple variables are in play, identify more influential factors up-front by asking the question ‘what else could have a critical impact on results?’

Examples include:

  • There may be other, related training programmes taking place.
  • An anticipated restructure could impact the learning cohort.

Once you’ve identified significant factors, estimate the impact in percentage terms each will have (or has had) on the results.

You achieve more credibility when clients themselves propose the extent to which the result should be ‘discounted’ for extraneous factors.  
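To make the discounting concrete, here is a small, purely hypothetical worked example (the figures and factor names are invented for illustration): if a KPI improved by 12% and the client attributes half of that improvement to other factors, credit the remaining 6% to the learning intervention.

```python
# Hypothetical illustration of discounting an observed result for extraneous factors.
observed_improvement = 12.0  # e.g. a 12% rise in a productivity KPI after the programme

# Client-estimated share of the result attributable to factors other than the learning:
other_factors = {"concurrent restructure": 0.30, "related training programme": 0.20}

discount = sum(other_factors.values())                        # 0.50 -> 50% attributed elsewhere
credited_to_learning = observed_improvement * (1 - discount)
print(f"Improvement credited to the learning intervention: {credited_to_learning:.1f}%")  # 6.0%
```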

Validate using different sources of data

Identify in advance what the client needs to see to ‘know it’s working’.

In line with Kirkpatrick principles, adopt a wide range of measures. Combine observational data with hard KPIs, embrace qualitative and quantitative data, measure inputs (completion of learning and learning quality) and outputs (productivity or improved delivery).

Don’t obsess about data quality. Use what you have and tell the client up-front: ‘if we deliver this intervention, we expect to see these changes…’.

Ask ‘how would you like us to measure them?’ to provide empirical linkage between cause and effect.

If you have scope, insist on a pilot group and control group to validate the outcomes. The control group don’t know who they are and receive no information about the intervention. The pilot group takes part in the intervention.

If you’ve been bold enough to forecast in advance what you expect to change, M&E then becomes very relevant to future decision-making rather than a proof of historic success.

Don’t focus on ROI, focus on learning impact

Evaluating learning impact is an easier proposition than full ROI measurement.

Once you’ve identified what you expect to change, put in place mechanisms to observe and measure whether they are changing.

Seek to use existing, client-validated measures where possible. Always prefer data showing impact if it’s available.

Probably the best and most obvious way to get great data on learning impact is to support application with manager coaching, support materials and action learning groups where possible.  

You’ll also need to identify any organisational barriers to implementation and do something positive about them, e.g. ensure reward and recognition systems encourage ‘new’ behaviours.

Conclusion

When organisations are stretched, people are time-poor and there is a backdrop of uncertainty and change, it can be hard to persuade clients to measure effectiveness in a disciplined way.

The fundamental principles of learning evaluation haven’t changed. We just need to find a way to get better at measuring learning impact and adapt to the realities of the world we’re in.

If you do nothing else:

  • Evaluate relevance, but don’t be scared of the truth.
  • Take account of variables but don’t worry too much about them.
  • Validate using different sources of data.
  • Focus on learning impact, not ROI.

Interested in this topic? Read Performance management: should we stop trying to measure the ROI of training and development?
