I think results come in lots of different ways: some of them you measure, and some of them you feel.
In the January issue of TD magazine, SAP CEO Bill McDermott makes the point that training results aren’t always numbers-driven. I’ve seen this first-hand.
An India-based colleague has spent the last several years holding monthly training sessions focused on our company values and on “soft” topics such as teamwork and collaboration. When I dropped by one of these sessions last month, one of her trainees commented: “In other organizations people try to pull other people down. Our organization is unique in that everybody tries to help each other and boost each other’s performance.”
Sometimes you can feel the results of a training program. But as I mentioned in Monday’s post, companies around the world spend over $75 billion (with a b!) on training annually and have no idea whether those efforts have produced any results. This isn’t good.
If you want to show other people (your boss, for example) that your training efforts don’t just feel good but make a measurable difference, here are four ways to do that:
1. Make sure you ask what should be different as a result of the training.
This one may sound like a no-brainer, but you’d be surprised at how many times training is planned and executed without anyone specifically identifying what should be done differently, or better, as a result.
2. Pay some attention to Kirkpatrick’s Four Levels of Evaluation…
About 60 years ago, Donald Kirkpatrick proposed four “levels” of evaluation to help training practitioners begin to quantify their results. First come post-training evaluation scores (“smile sheets”), then learning (most often measured through pre/post testing), then skill transfer on the job (perhaps a self-reported survey, or one completed by the trainee’s supervisor), and finally impact (did sales increase? did on-the-job safety accidents decrease?). Levels 1 and 2 are most common, but trainers and organizations can certainly strengthen their Level 3 and 4 efforts.
3. …and then go beyond Kirkpatrick.
According to a research paper entitled The Science of Training and Development in Organizations, Kirkpatrick’s Four Levels can be a helpful model, but there is data to suggest it is not the be-all and end-all that training professionals have pinned their evaluation hopes on. The authors offer the following example as a specific way to customize the measurement of a training program’s success or failure:
“If, as an example, the training is related to product features of cell phones for call center representatives, the intended outcome and hence the actual measure should look different depending on whether the goal of the training is to have trainees list features by phone or have a ‘mental model’ that allows them to generate recommendations for phones given customers’ statements of what they need in a new phone. It is likely that a generic evaluation (e.g., a multiple-choice test) will not show change due to training, whereas a more precise evaluation measure, tailored to the training content, might.”
4. Continue to boost retention while collecting knowledge and performance data.
Cognitive scientist Art Kohn offers a model he calls 2/2/2, a strategy to boost learner retention of content following a training program. Two days after a training program, send learners a few questions about the content (this gives you data on how much they still remember after leaving your session). Two weeks later, send a few short-answer questions (again, this keeps your content fresh in their minds and gives you a data point on how much they’ve retained). Finally, two months after the training program, ask a few questions about how your content has been applied on the job (which offers data on the training’s impact).
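If you run many cohorts, the 2/2/2 cadence is simple enough to automate. Here’s a minimal Python sketch of the scheduling logic; Kohn’s model specifies only the timing and the type of follow-up, so everything else here (the function name, the prompt text, and treating “two months” as 60 days) is an illustrative assumption, not his prescribed tooling.

```python
from datetime import date, timedelta

# The offsets and question types follow Kohn's 2/2/2 model; the wording of
# the prompts is illustrative, and "two months" is approximated as 60 days.
FOLLOW_UPS = [
    (timedelta(days=2), "recall questions (what do learners still remember?)"),
    (timedelta(weeks=2), "short-answer questions (how much has been retained?)"),
    (timedelta(days=60), "application questions (how has the content been used on the job?)"),
]

def schedule_follow_ups(training_date: date) -> list[tuple[date, str]]:
    """Return (send date, follow-up type) pairs for one training session."""
    return [(training_date + offset, prompt) for offset, prompt in FOLLOW_UPS]

if __name__ == "__main__":
    # Example: a session held on March 3, 2025.
    for send_on, prompt in schedule_follow_ups(date(2025, 3, 3)):
        print(f"{send_on}: send {prompt}")
```

Each touchpoint then doubles as a data-collection moment: the two-day and two-week responses give you retention data (Level 2), and the two-month responses give you on-the-job application data (Levels 3 and 4).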
If companies are spending billions of dollars on training without ever knowing whether those efforts were effective, there’s a problem. Spending a few hours thinking through your evaluation strategy before deploying your next training program can make your efforts literally worth your time.