Tools to make you THINK differently about your business

You’ve spent big money, time, effort, blood, sweat and tears in creating your training program…but did it work?  Was there a payback, and how do you prove it? 

Payback is the holy grail of training and training evaluation and we’re now going to look at it in two blogs: 

  • The first, this blog, will focus on models for evaluating training 
  • The second will focus on the methods

Accompanying both blogs will be a download (see below) outlining a range of potential measures that could be employed… and talking of measures, the first stage is to…

Determine the measures 

Before you evaluate the effectiveness of any training program, you need to decide what the indicators of “effectiveness” are. Is training a success when employees become better at their jobs? Or is it, perhaps, a happier, healthier company culture? Is it, maybe, both, or is it something else?  The key is to start with the end in mind.

The point is, you’ll probably want to track more than one measure of training effectiveness. The more measures you include, the more information you’ll have to help you improve this or any future initiative.  Measures are discussed in more detail in the download below.

Determine the method

Books have been written on the subject of training evaluation and the methods for doing it, so we’re not short of techniques to choose from!  

More helpfully, there are a small number of tried and tested techniques and we’re going to explore a few of them now.

Kirkpatrick’s Four-level Training Evaluation Model

Donald Kirkpatrick developed his four-level training evaluation model as part of his Ph.D. in 1954, but it really took off after the publication of his 1994 book Evaluating Training Programs.  So, this model has been around for some time and has stood the test of time; other models are often refinements of his ideas. 

Let’s look, progressively deeper, at his four-level model:

Step 1: Reaction: Yep, our man Kirkpatrick is the guy responsible for those “happy sheets” you get after each and every training course you have ever been on.  The objective is to evaluate your reaction to the training at the point of training.  At this stage, whilst you may have learnt something, it won’t have been applied…you’ve not even left the (real or perhaps on-line) room yet!  This is about attempting to understand your overall satisfaction with the learning experience.

Step 2: Learning: Measure what was learned during training. Use assessments to measure how much knowledge and skills have changed from before to after training.
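
For instance (with purely illustrative numbers), if learners average 55% on a short quiz before the course and 80% on the same quiz afterwards, you have evidence of a 25-percentage-point learning gain, and the questions people still get wrong tell you where the material needs more work.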

Step 3: Behaviour: This step seeks to understand whether or not (and by how much) your behaviour has changed as a result of the training undertaken. This is usually an external assessment, made by others through observation of performance pre- and post-training.  

Step 4: Results: The fourth, final and most important step is to evaluate the impact of your training on results. This is often measured at different levels, for instance:

  • Individual – What was the impact on you? How did you benefit?
  • Team / department – What was the impact on your team / department? How did they benefit?
  • Company or organisational – What was the impact on your organisation? How did the organisation benefit?

The actual results being measured would have been decided at the objective-setting stage, but might include measures like sales conversion, productivity, quality, efficiency and customer satisfaction.

Kirkpatrick’s model is still in use for good reason. Its logical, staged approach is easy to apply, and once the evaluation is complete, you’ll have a deep and wide understanding of the learning obtained.

The Phillips ROI Model

A guy called JJ Phillips argued for a fifth step to be added to the Kirkpatrick model: evaluating the program’s return on investment (ROI). Here you measure the difference between your training “cost” and the “results” it delivered; when the results exceed the cost of undertaking the training, you’ve got a positive ROI. 
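
By way of a simple, made-up illustration (the figures below are invented purely for the example), ROI is usually expressed as a percentage of the cost:

  • Programme cost: £10,000
  • Measurable benefit delivered: £14,000
  • Net benefit: £14,000 − £10,000 = £4,000
  • ROI: £4,000 ÷ £10,000 × 100 = 40%

If the result comes out negative, the programme cost more than it returned.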

However, we’d suggest that, in practical terms, this is something you ought to be doing as part of the Kirkpatrick model anyway.  

Anderson’s Model of Learning Evaluation

The Anderson model adopts a focus that very much attempts to align the training, and its evaluation, to the strategy the company is employing.

The easiest way to explain the approach is probably by way of example.

Obviously, we are going to have to oversimplify a bit, but I hope you’ll get the picture.  Let’s take a host company, perhaps a manufacturer or a construction firm, with sufficient resource to keep about 100 customers extremely happy with the current level of service (on-time delivery / projects completed on time) it promises. 

The No. 1 strategy is a relentless and obsessive focus on delivering excellent service, tracked through customer satisfaction metrics whose results the directors are justifiably proud of. 

However, the directors now want to rapidly expand the business, but they can’t yet take the chance of adding to costs with more staffing resources. They know that additional pressure will be placed on staff and that the treasured customer feedback scores might suffer; they are worried that their (sterling) reputation may take a hit.  Now, suppose that their training manager develops one program to help the marketing team win new customers and another to cut waste from the existing processes.

If this model works for you, then there are three stages:

Stage 1: Critically review your strategies and your training plans against one another.   Does the training plan focus on the strategic priorities?  Looking at the example above, we can see that there is potentially a mismatch between the strategy to win new clients and the desire to keep delivering high-quality customer care.

Stage 2: Measure the contribution of the training programme to strategic ambitions. For example, a training program that helps you reduce waste could be measured by the percentage decrease in costs, or by other metrics as detailed in the table below.

Table: Manufacturing v construction waste (whatever metric is chosen, the list to choose from is long)
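
To make Stage 2 concrete (again, the numbers are made up purely for illustration): if waste currently costs the business £50,000 a year and falls to £40,000 after the waste-reduction programme, the programme’s measurable contribution is £10,000, a 20% reduction in waste costs.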

Stage 3: This is where you decide the ROI and this step depends on your approach. Examples include:

  • A comparison of the contribution identified in Stage 2 with the resources invested in the training programme, or, 
  • You might ask whether the percentage decrease in costs was big enough: did it meet your expectations?

So, if the numbers do not produce an ROI that you are happy with, then it is back to the drawing board to make improvements.  The approach promotes a virtuous circle that seeks to learn, align and improve.

Conclusion

All models are just that – models.  There is a great NLP presupposition (neuro-linguistic programming – look it up, it’s a great toolkit) which states that “the map is not the territory”… we’re very much advocates of making the models work for you, so in conclusion we’d say the critical components of training evaluation are:

  • Decide on the evaluation criteria in the beginning
  • Measure impact at the individual, team and company levels to see if you have got payback (as per the diagram)

In the next article we’re going to look at different methods of evaluation …


Downloadable resources


 To find out how Statius can help you deliver:

• Better strategies
• Better systems
• Better measurement and 
• Engaged people delivering 
• Better results

Call us now on 0208 460 3345 or email sales@statius.co.uk
