How do you know that the training you have delivered has been taken onboard and is having the desired results? Godfrey Parkin suggests you don a hat and sunglasses and go undercover…
I have recently been running a training evaluation project with a financial advice company in the midst of the global battle for share of the baby boomer bubble market.
There have never been so many people on the cusp of retirement, and financial advisors are circling them like sharks around a sardine run. The normally staid and sensible financial advertising imagery is giving way to flower-painted VW microbuses, long hair and lava lamps, richly underpinned by the evocative music of the '70s. We are urged to believe that the person in the dark suit who wants up to 4% of our liquid assets annually in return for helping us to buy a stairway to heaven is just a grown-up hippie at heart, man, who can really relate to our values.
The training challenge is significant if the advisors' behaviour is to sync with the marketing message. Does the repositioning taught in class actually make it through to discussions with potential clients?
The nature of the business makes it very difficult for managers to observe and objectively evaluate those reporting to them. Training effectiveness can be inferred from actual sales results, prospect conversion rates, and before-and-after data mining studies. But if advisors are losing a lot of potential converts, such empirical data do not help to diagnose where (if at all) the training may have been deficient. So how can you be sure that the training is actually leading to on-the-job application?
One solution is to "secret shop" for financial advice. Using secret shopping to test customer interaction skills is a widespread approach in industries ranging from cars to cosmetics to coffee. It is typically used to target and remediate poor behaviour in specific sales/service individuals, or to check up on the quality of management of establishments such as restaurants and retail outlets. It is rarely used to evaluate the effectiveness of training. That's because secret shopping tends to fall under the control of sales, marketing, or customer service departments, who simply assume that training has done its job and see implementation as a matter of personal choice or supervisor diligence. A direct link between individual on-the-job competence and training is rarely made by such departments, and training departments are reluctant to raise it.
For a wide range of customer contact skills at Level Three (on-the-job behaviour, in Kirkpatrick's terms), secret shopping can tell us a great deal about the strengths and weaknesses of our training and its impact on behaviour. It can also tell us a lot about the environmental and systemic obstacles to application of the learning.
Yet we rarely use it as part of our continuous improvement process. There are several reasons:
* Conceptual confusion: Beyond Level One, trainers are more accustomed to checking the learner than the learning. The frame of reference is typically testing and opinion surveys, to see how each individual is doing, and such studies are seen as vast and intrusive. But you don't need to look at every learner to get a useful indication of the impact of any particular course or curriculum – a small sample will do.
* Cost: It’s a lot more expensive to have a shopper talk with an ex-learner than it is to simply e-mail out a link to a Zoomerang questionnaire. Depending on what behaviour you need to observe, a secret shopper can run you anything from $5 to $50 per visit – and that’s in the simple retail customer field where no sophisticated observation is called for. In sophisticated scenarios, such as seeking financial or legal advice, a “shop” may require an hour or more of sustained credibility through complex discussions. You may have to pay qualified shoppers up to $500 per visit. Add to that the investment in training your shopper squad in observation skills, plus the potential travel expenses involved in getting a national representation in your sample, and the cost of your training QA project can rapidly get into the tens of thousands.
But it need not get higher than that, because you are looking for a diagnostic indication, not a statistically valid sample. You may only visit three or four dozen individuals, the same number you might pull into focus groups. And you may be able to get your sponsoring department to help with the costs.
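The cost arithmetic above can be sketched as a back-of-envelope model. The per-visit figures and sample size come from the article; the shopper-training and travel figures are illustrative assumptions only, not numbers the author gives.

```python
# Back-of-envelope cost model for a Level Three secret-shopping study.
# Per the article: $5-$50 per simple retail visit, up to $500 per
# specialist visit, and a diagnostic sample of three or four dozen.
# The training-squad and travel costs below are assumed for illustration.

def project_cost(visits, cost_per_visit, shopper_training=5000, travel_per_visit=150):
    """Estimate the total cost of a secret-shopping evaluation project."""
    return shopper_training + visits * (cost_per_visit + travel_per_visit)

# 48 specialist "shops" at the article's upper bound of $500 each:
total = project_cost(visits=48, cost_per_visit=500)
print(f"${total:,}")  # $36,200 -- comfortably in the "tens of thousands"
```

Even at the upper bound, the total stays in the tens of thousands rather than escalating further, which is the article's point: a diagnostic sample, not a statistically valid one, caps the budget.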
* Ethics: Unlike marketers, trainers feel that there is something underhanded, invasive, or unethical in collecting data by subterfuge. But your focus is the training, not the trainee, and the identity of anyone "secretly shopped" should never be divulged to his or her management. Indeed, identities need not be retained at all once the relevant demographics and training history have been appended to the performance observations.
* Liability: Do training departments really want to expose weaknesses in the quality of their training, particularly where each lost sale or disgruntled customer equates to a great deal of lost income? Level Three is the third rail of training evaluation, and most trainers want to stay away from it. The benefits of showing that training produces improved performance may not be significant enough to justify the risk of demonstrating that the opposite is true.
But when the time comes for organisational cut-backs, secret shopping at Level Three might buy trainers a stairway to heaven; relying on Level One smile sheets may be a highway to hell.