A few weeks ago I finished reading Will Thalheimer’s book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form (here’s my brief review of the book).
A colleague recently teased me for reading “geeky books” in my spare time. That would be a fair charge if I only read books about smile sheets for fun. And while I did have fun reading this one (so I guess I am kind of geeky), I’ve also been putting lessons from the book into practice in my work.
Following are two examples of improvements I’ve made to existing smile sheets, along with the logic behind the changes (based on my interpretation of the book):
Example 1: Was the training a good use of your time?
Intent: I wanted a bottom-line metric: what percentage of participants felt that investing their precious time in this training session was worthwhile? Over the past several years, I’ve been able to smile while sharing post-training evaluation feedback with presenters and my supervisor: 100% of training participants responded that this was a good use of their time!
The problem: If my supervisor (or anyone else) ever asked: “What does that mean? Was it a good use of their time because they felt entertained? Because they were able to escape their office for a day or two? Because they’re able to do something better? Why was it a good use of their time?” I wouldn’t have a good answer.
Based on what I learned in the book, I’ve adjusted the question to read like this:
Why I like this better: True, it’s not as clean (on the surface, anyway) as a question that simply asks whether the training was a good use of participants’ time. However, it lets me identify the specific ways a training session may have been worth their time.
If the training is designed to ensure participants can carry out tasks appropriately, then it really isn’t a productive use of their time if they don’t feel able to put those concepts into practice.
Example 2: Was the training engaging?
Intent: I find lecture-style presentations super boring, and I wanted a way to show the world that lecture-based designs would pay for it on end-of-session evaluations when I asked about participants’ level of engagement, while well-designed, activity-rich presentations would be rewarded with high scores on that question.
The problem: In the end, presentations with activities designed to engage learners and get them to practice new skills sometimes scored higher on this question… marginally. Average scores for activity-based sessions often came in around 4.2 or 4.3, while lecture-based sessions came in around 4.0 or 4.2. Of course, what does a 4.2 even mean?!
Desperately seeking a better way to evaluate the level of engagement that the design of a training session would yield, I borrowed (stole?) this question from Will’s book:
Why I like this better: Instead of having to choose among a 3, a 4, or a 5, there is now a continuum of concrete choices describing how engaging a training program turned out to be. Anything less than “mostly engaged” is probably unacceptable.
One key point in Will Thalheimer’s book is that smile sheet feedback should be actionable. It’s tough to take action on a collection of numbers. After all, at what point do you act? When something averages out to a 3.9? A 4.2?
These are two examples of how spending a weekend reading a “geeky book” can change habits or practices in the workplace. I’d love to hear what you’ve read recently… and what you’ve done with that new knowledge.