How to assess the organisational impact of L&D programmes through value profiling

Brought to you by TrainingZone

To prove the true value of learning at work, a shift towards measuring the impact that learning has on the business as a whole – rather than just on an individual – is key. 

Learning and development is a significant investment for most companies. To illustrate, the global corporate training market was estimated at $130 billion per annum in 2018, and eLearning is projected to reach $325 billion by 2025.

Despite these large sums, few companies assess the contribution of their L&D investments to desired business outcomes. Some will have tried the best-known methodology in this space – the Kirkpatrick Model – only to find that it suffers from two major drawbacks.

The first (as we will show) is that it is difficult to implement. The second is that it measures the impact of L&D programmes only from the perspective of the individuals undergoing the training, rather than the impact of the programme on the organisation as a whole.

This distinction is referred to as individual-level versus organisational-level impact (or, less formally, individual versus organisational impact).

For L&D teams to prove the real value of their work, it's imperative that they deliver organisational-level capability improvements, not merely individual-level ones.

To achieve this, learning practitioners would be wise to transition from the Kirkpatrick Model – and its focus on individual-level impact – towards a model better equipped to evaluate the organisational impact of L&D programmes.

The Kirkpatrick Model

In 1954, Donald Kirkpatrick introduced his four-level model for measuring the business impact of training programmes on individual participants. The model is based on the premise that, for training-programme participants to deliver measurable business improvements, the following assumptions must hold (see Figure 1):

1. Reaction

Trainees must experience a positive reaction to the training. The assumption here is that trainees who experience the programme negatively are less likely to learn from it.

2. Learning

Trainees must gain new knowledge as a result of the training. This seems reasonable because unless trainees learn something new, they will simply continue to behave as they did before the training.

3. Behavioural change

Trainee behaviour must change following the programme. Again, this seems reasonable because individuals whose behaviour does not change are unlikely to achieve improved business outcomes.

4. Business results

If all of the above conditions are met and an appropriate L&D programme was delivered, then a positive impact on business outcomes is expected.

If positive business outcomes are not achieved, then either the above conditions have not been met, an inappropriate programme was selected, or non-specified events have worked against the programme’s success. These might include, for example, a change in the economy or labour market, an inappropriate organisation design or a change in leadership.

Figure 1: The Kirkpatrick Model assumptions expressed as a causal chain


Source: The Kirkpatrick Partners

The Kirkpatrick Model is deployed as follows:

  1. Prior to the L&D programme, obtain baseline (pre-programme) measures for each participant’s current knowledge (learning), behaviour and business results

  2. Deliver the L&D programme

  3. After the programme, measure each participant's reaction, and re-measure their learning, behaviour and business results.

If the reaction is positive, and learning, behaviour and business results all show an improvement, the programme is said to have made a positive contribution to the business.

(For those interested, Jack Phillips extended the Kirkpatrick Model to calculate the ROI of the programme; however, his methodology also does not include organisational-level measures).

Limitations of the Kirkpatrick Model

While theoretically useful as a teaching tool, the Kirkpatrick Model presents a number of challenges:

1. Measurement of business result improvements

While measuring business result improvements is relatively straightforward in line functions such as sales or divisional management, it is not as simple in support functions such as HR, finance and marketing.

For example, how does one measure the business impact of updating a general ledger or processing administration for new joiners?

2. It is difficult to isolate the impact of the training programme

Consider a company that experiences improved business results after delivering employee productivity training. At the same time as that training, the company also underwent an organisational restructure and introduced significant automation as well as a new product-line.

How can the company be sure whether the improved business outcomes were the result of the training programme versus one of these other factors?

As Alec Levenson notes, most companies’ business results are simultaneously influenced by a variety of factors, making it difficult to isolate the business impact of any training delivered at the same time.

3. Impact is only measured at the level of the individual and not of the organisation

The Kirkpatrick Model measures improvements in the capabilities of each individual trainee participating in the programme. This means that the model provides no insights about potential improvements in organisational-level capabilities such as productivity or innovation.

This is an important deficit of the Kirkpatrick Model because it is quite possible to achieve significant individual-level improvements while only achieving marginal organisational-level improvements.

The Human Capital Value Profiler (HCVP)

A number of organisational-level models are available such as the Galbraith Star Model (Figure 2) or the Human Capital Value Profiler (HCVP) (Figure 3). These can be used to supplement or replace the Kirkpatrick Model in order to address the limitations mentioned above.

Figure 2: Galbraith Star Model


Source: Jay R. Galbraith: The Star Model™

Figure 3: The Human Capital Value Profiler


Source: Blumberg Partnership Ltd.

Here we’ll explore the HCVP as one example of how to assess the organisational impact of L&D.

The model holds that successfully implemented people processes and programmes (such as L&D or organisational design) interact with one another to create the workforce capabilities a company needs to enable the Key Performance Drivers (KPDs) required to deliver its desired business outcomes:       

People Processes → Workforce Capabilities → KPDs → Business Outcomes

The HCVP therefore implies the following:

  1. People processes interact with each other to eventually deliver an organisation’s desired business outcomes

  2. Problematic people processes will lead to problematic business outcomes

  3. Problematic business outcomes can be addressed by tracing a path back from the problematic business outcome to the problematic processes that are causing it
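The third implication – tracing a path back from a problematic outcome to its upstream causes – can be sketched as a simple graph walk. This is purely illustrative: the chain structure and all element names below are hypothetical, not part of the HCVP itself.

```python
# A minimal sketch of an HCVP-style causal chain, modelled as a mapping
# from each element to the downstream elements it enables.
# All names are illustrative assumptions, not prescribed by the HCVP.

chain = {
    "L&D programme": ["engagement", "leadership capability"],
    "engagement": ["productivity"],
    "leadership capability": ["innovation"],
    "productivity": ["revenue"],
    "innovation": ["profitability"],
}

def trace_back(outcome, chain):
    """Return the upstream elements that feed directly into an outcome."""
    return [cause for cause, effects in chain.items() if outcome in effects]

print(trace_back("revenue", chain))       # which KPDs feed revenue?
print(trace_back("productivity", chain))  # which capabilities feed productivity?
```

Repeating the lookup on each result walks the chain backwards, from a problematic business outcome to the people processes that may be causing it.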

Using the HCVP to assess the organisational impact of L&D programmes

There are many ways to use the HCVP for assessing the organisational value of an L&D programme. The simplest of these works as follows:

  1. Hypothesise the chain of impact from the L&D programme to business outcomes via workforce capabilities and KPDs

  2. Measure the pre-programme levels of the hypothesised capabilities, KPDs and business outcomes

  3. Deliver the L&D programme

  4. Measure the post-programme level of the hypothesised capabilities, KPDs and business outcomes

  5. Analyse the measurements to determine whether the L&D programme led to hypothesised improvements

These steps are described in greater detail below:

1. Hypothesise the programme’s chain of impact

The first step is to hypothesise a possible path linking the L&D programme to the business outcomes. This exercise is best performed as a joint undertaking between L&D professionals and operational managers affected by the programme, and proceeds as follows:

  1. Hypothesise which workforce capabilities are likely to improve if the L&D programme is effective

  2. Next, list the KPDs which you believe are likely to be enabled by your hypothesised workforce capabilities

  3. Finally, hypothesise which business outcomes are likely to improve as a result of the KPD improvements hypothesised in step 2.

For example, when assessing a communication programme for senior leaders, you might hypothesise improvements in the following workforce capabilities: engagement, leadership capability and employee experience.

In turn, you might hypothesise that these capability improvements will lead to improvements in productivity and innovation KPDs, and that these KPDs in turn will increase revenue and profitability (see Table 1).

Table 1: A causal chain for a leadership communication training programme

L&D Programme: Communication

Workforce Capabilities: Engagement, Leadership Capability, Employee Experience

KPDs: Productivity, Innovation

Business Outcomes: Revenue, Profitability

2. Obtain pre-programme baseline measures

Prior to delivering the programme, take baseline measures for each of the workforce capabilities, KPDs and business outcomes listed in Table 1. These will be measured again after the programme has been delivered to determine whether improvements have indeed occurred.

To obtain measures, we recommend asking a sample of managers to rate the current effectiveness of each capability and KPD and then using the average. (They do not need to rate business outcomes as these are financial.) Ratings checklists are provided as part of the HCVP, but any validated effectiveness rating scale can be used if preferred.

The idea of asking managers to rate workforce capabilities and KPDs is similar to having managers rate teams for performance reviews, having employees rate each other when using a 360° survey, or when employees rate their own engagement via a climate survey.

There are at least two key advantages to having several managers provide ratings and then averaging the scores: first, it engages multiple managers in the L&D measurement process; and second, it reduces the subjective bias inherent in a single rater (such as when a manager undertakes a performance review on her own).
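Crowdsourcing a baseline in this way amounts to averaging each capability's ratings across managers. A minimal sketch, assuming a hypothetical 1-10 effectiveness scale and invented scores:

```python
# Illustrative sketch of crowd-sourced baseline scores.
# The capability names, rating scale and scores are all hypothetical.
from statistics import mean

# Four managers each rate the current effectiveness of each capability.
ratings = {
    "engagement":            [6, 7, 5, 6],
    "leadership capability": [4, 5, 4, 6],
    "employee experience":   [7, 6, 7, 7],
}

# The baseline for each capability is simply the mean of its ratings.
baseline = {capability: mean(scores) for capability, scores in ratings.items()}

for capability, score in baseline.items():
    print(f"{capability}: {score:.2f}")
```

The same averaging is applied to the KPD ratings; business outcomes are taken directly from the financials, so no ratings are needed there.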

3. Deliver the L&D programme

Armed with the above baseline measures, the L&D programme is now delivered. If desired, the Kirkpatrick Model can be used simultaneously to obtain individual-level ratings.

4. Obtain post-programme improvement measures

Once the programme has been delivered and a suitable time period has passed, the hypothesised capabilities, KPDs and business outcomes are measured. As with the pre-programme measures, an average value for each should be crowdsourced from a group of managers.

5. Analyse differences in pre- and post-programme ratings

Calculate and analyse differences between pre- and post-programme ratings on capabilities, KPDs and business outcomes. If the hypotheses were correct and the programme made a positive contribution to business outcomes, there should be noticeable differences between the pre- and post-programme scores.

If desired, additional statistical advice can be obtained to determine whether these differences are statistically significant; however, in most instances, it is usually obvious when the programme has resulted in improvements or not.
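The pre/post comparison itself is simple arithmetic: subtract each baseline score from its post-programme counterpart. A small sketch with invented numbers:

```python
# Illustrative pre/post comparison; all scores are hypothetical,
# measured on the same effectiveness scale used for the baseline.

pre  = {"engagement": 6.0, "productivity": 5.5, "revenue": 5.0}
post = {"engagement": 7.2, "productivity": 6.4, "revenue": 5.9}

# Positive deltas indicate improvement along the hypothesised chain.
deltas = {element: round(post[element] - pre[element], 2) for element in pre}

for element, delta in deltas.items():
    print(f"{element}: {delta:+.2f}")
```

If the programme worked as hypothesised, improvements should appear at every link of the chain, not only in the capability ratings; a rise in capabilities with flat KPDs and outcomes would suggest the hypothesised chain is wrong or that other factors are at play.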

Transitioning your approach

Individual-level measures, such as the Kirkpatrick Model, are not sufficient for predicting improvements in overall business outcomes.

For L&D teams to up their game and demonstrate the effectiveness of their learning solutions at an organisational level, the HCVP explored above is a good route to take. The process is simpler to implement than Kirkpatrick's individual-level process and draws on information crowdsourced from the managers who are affected by programme outputs.

What’s more, the HCVP approach can easily be extended to include the simultaneous impact of non-L&D programmes such as organisational restructuring, digital transformation and new leadership programmes.

About Max Blumberg


Max Blumberg, Ph.D., bridges the worlds of business performance and analytics to improve strategy execution, design powerful people processes and increase sales force effectiveness. 

Max has deep experience and success with virtually every aspect of business design, performance, diagnosis, and execution. He first worked as a management consultant at Accenture. He next launched and successfully sold a large technology component distribution company. In the following chapter of his career he was new business director for two technology companies including IBM SPSS, simultaneously earning his Ph.D. in psychology at Goldsmiths College, University of London. While in graduate school he also launched the Blumberg Partnership, a Top 50 analytics consultancy which delivers analytics and machine learning solutions to organisations like Nestle, Lloyds Register, Hilton Hotels & Resorts, GB Group plc, Angle Technology, the BBC, Rentokil Initial, Barclays Corporate, Brit Insurance, the MOD, the CIPD, and Friends Provident.
