But planners need to tailor evaluations to their events
Knowing how attendees evaluate the effectiveness of an educational event is a critical component in keeping them engaged and in maximizing the host organization’s return on its meeting investment. But proper attendee evaluation is a multistep process that spans several time frames.
With so many other tasks required of planners leading up to and during an event, it’s understandable that many don’t spend a lot of time on strengthening evaluations. Here are some ideas on how to implement improvements.
Creating Better Daily Evaluations
Not surprisingly, technology has assumed a central role in on-the-spot evaluations of individual sessions. But moving from paper to the electronic medium requires some transitional work to fully tap into its benefits.
Allison Dixon, senior director of marketing and creative services for Meeting Expectations in Atlanta, sees a growing number of her clients gathering instant session feedback through mobile apps such as DoubleDutch, EventMobi and Zerista, along with registration-software products such as EventRebels that offer evaluation tools. Dixon is not looking for a Twitter-style stream of attendee opinions while the session is taking place; she prefers the evaluation form to become available in the app just five minutes before a session ends.
To gain maximum attendee participation, Dixon embeds a mention of the evaluation in each presentation, along with instructions on how to use that feature in the event app. She also briefs session leaders ahead of time on elements attendees will evaluate, to sharpen each presentation’s focus.
Dixon keeps the number of evaluation questions to five, with only two requiring numerical ratings. The other three are open-ended questions that ask about attendees’ key takeaways, deficiencies in the presentation or presenter, and why the topic should or shouldn’t remain on the agenda at the next meeting.
Electronic Versus Paper Evaluations
Everett Shupe, executive development program director for Goodwill Industries International in Rockville, Maryland, sometimes prefers to send a simple email to his attendees near the end of each day requesting feedback. “We find that people are more willing to spend a few minutes while back in their room to type in their thoughts,” he says.
Shupe limits these daily evaluations to three categories: one numerical rating and two open-ended items asking for the most valuable takeaways and what could be improved upon. “I’ve cut it down from more than 20 questions, so that, plus the ability to do it through your phone at your convenience, has improved our response rate versus using paper forms,” he notes.
Because the technology can search responses for repeated phrases and trending topics, open-ended questions in an electronic format become more useful and less time-consuming for planners to analyze. That makes it possible to refine future sessions, even those occurring the next day, to better meet attendee needs.
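The phrase-frequency scan these tools run can be illustrated with a short script. This is a minimal sketch, not any vendor's actual implementation, and the sample responses are invented for illustration:

```python
from collections import Counter
import re

def top_phrases(responses, n=3):
    """Count two-word phrases (bigrams) across open-ended responses
    and return the most frequent ones -- a rough stand-in for the
    trend analysis built into survey and event-app tools."""
    counts = Counter()
    for text in responses:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(zip(words, words[1:]))
    return [" ".join(pair) for pair, _ in counts.most_common(n)]

# Invented sample feedback for illustration
responses = [
    "Great session, but too many slides per topic.",
    "Too many slides and not enough discussion time.",
    "Loved the case studies; too many slides overall.",
]
print(top_phrases(responses))  # "too many" and "many slides" surface first
```

Even this crude counting surfaces the complaint shared across all three comments, which is the kind of signal a planner can act on before the next day's sessions.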
On the other hand, Dixon claims there’s still a place for paper evaluations in some meeting environments. “If we don’t anticipate wide adoption of the mobile event app across a specific meeting’s demographic, then we use paper,” she says.
When Shupe is a presenter rather than the planner at a meeting, he simply distributes sticky notes or index cards near the end of his session, asking for two thoughts about what attendees liked about the session and one about what could be improved. He then says, with a smile, that nobody can get past him at the exit without handing in the card.
“Of course I don’t hold anyone to it,” he says. “But I find that a little pressure of expectation gets most people to provide something that’s useful to me—especially if I am presenting again tomorrow.”
Improving Post-Event Evaluations
One week after an event, Dixon sends an email to attendees with a web link to an evaluation form that addresses the keynote sessions, as well as the logistical and other aspects of the event. “We use SurveyMonkey or FormSite because these evaluations are going to be more detailed, with about 30 questions,” she notes. “Many of them have open-text boxes for people to write in their thoughts, and we run keyword and trend analysis on those responses.”
Dixon provides a clear example of how open-ended questions benefited a meeting the following year: After an event hosted by a technology-industry client, there was a critical mass of attendee commentary stating that too many sessions were led by industry consultants rather than end users. So, she aimed the next call-for-proposals campaign toward current and potential buyers.
“We encouraged them through instructional articles on our website, and then gave them opportunities to collaborate with review-committee members via webinar to help refine their ideas and applications for presentation slots,” Dixon says. The changes were heavily marketed to attendees, and the new dynamic at the next event produced attendee feedback showing greater satisfaction and enthusiasm for attending more events.
The range of topics addressed in Dixon’s post-event evaluation also includes whether attendees liked and used various functions in the event app; how much they used the program guide; whether they found the exhibit hall experience satisfying, or lacking in certain types of vendors; and whether the housing and registration tasks were simple and smooth.
Producing Effective Long-Term Evaluations
A few months after an event, planners should also measure how attendees are applying what they learned at the meeting. When presenting, Shupe offers to provide additional resources via email that expand upon the material; in return, he collects business cards from interested attendees.
“I send the resources a week or two after the event, and then after 12 weeks I contact a random selection of those people to hear how what they learned is being applied in their work,” he says. “They’re willing to talk, and the qualitative feedback I get is so valuable for understanding the effectiveness of the meeting.”
It’s also useful for helping to market the next one.