Conference professionals suck at measurement.
If you trust your conference smile sheet evaluations as a barometer of how effective your conference education was, you are just being foolish, says learning research psychologist Dr. Will Thalheimer.
Ouch! The truth hurts!
The Emptiness Of Smile Sheets Evaluations
Thalheimer points to research showing that smile sheet evaluations have only a 0.09 correlation with whether learning occurred. That’s basically no correlation at all.
Smile sheets are not related to learning.
Smile-sheet ratings that are good don’t guarantee that the learning intervention was effective.
Smile-sheet ratings that are bad don’t necessarily mean that the learning intervention was poor. ~ Dr. Will Thalheimer.
Still, we praise and extol our stacks of smile sheet summaries. We hold them up to board members and prospective attendees as proof of how successful our event was. We promote vague realities.
We don’t realize how seriously flawed our measurement is. Those happy-face assessments cost us a lot, as they suck the life out of any quality-improvement process.
Encouraging Bias Towards Entertainment
To paraphrase Thalheimer,
By only seeking attendee opinions about education sessions, we’ve created a bias toward entertainment, motivation and charisma. We fail at evaluating attendees’ true learning, remembering and application as well as the session’s content validity.
By measuring only when the attendees are in conference sessions, we can’t evaluate if the attendees remember and apply the content at work.
We can measure if the participants have immediate understanding and recall. However, our evaluations don’t measure if the session supported long-term attitude, behavior and skill change.
In short, we don’t know if they ever applied what they heard at the conference. Or if it really helped them at all.
Smile Sheet Artificial Responses
Recently I attended a conference where the organizers texted a happy face and a frown face to attendees immediately after every session, meal and reception, along with a box for text comments. Participants were to click the appropriate face and text it back to the organizers.
Wow! What type of information could that possibly provide to the conference organizers beyond attendees’ immediate gratification? That evaluation process has very little to do with conference improvement or real learning.
Usually our smile sheet evaluations fail to examine whether participants can now apply the information in their profession. We don’t even know if our conference education improves workplace performance.
In short, we don’t get the right attendee feedback we need to make quality conference improvements.
The Bad News
Most of us will continue to use smile sheet evaluations anyway!
It’s easier. It’s what we’ve always done. We don’t know any other ways.
Unfortunately, many conference organizers and hosts don’t care if learning occurred. They only care if their paying attendees walked out of the room with temporary and possibly fleeting satisfaction. It’s all about the short-lived high, the cotton-candy fluff, the enjoyment of the experience.
And because we lack accurate information from valid evaluations, we make very poor decisions about future conference education and networking.
We’ve created a black hole of information with our current measurement procedures.
The Good News
Is there a better way? Yep! We’ll discuss better strategies in our next post.
What are some good questions you’ve seen on smile sheet evaluations that are an improvement over the traditional questions? What should conference organizers measure to help improve future conferences?