How to Build Better Attendee Evaluations to Determine Event ROI


A big-picture approach and the right templates generate the most useful results

Last month, we addressed how planners can become more hands-on in content creation and speaker preparation. But the only way to prove to executive stakeholders the value the planner added is by effectively measuring the business benefit of that event’s formal and informal sessions.

Yes, almost every meeting has after-the-fact attendee evaluations. “But they too often focus on the job the planner did with event logistics: meals, entertainment, transportation and similar elements,” says John Nawn, founder of meeting design and measurement firm The Perfect Meeting, and former director of education for the Professional Convention Management Association (PCMA).

Instead, content-involved planners must develop both on-site evaluations and post-event evaluations that determine how much the meeting furthered the specific goals and objectives of the executive stakeholders who own the meeting. And the only way planners can do that is to know those goals and objectives far enough ahead of the live event.

This way, planners can not only help develop content that drives deeper understanding, critical-knowledge retention and key-skill development—all of which result in performance improvement—but also can work with executives to identify the proper metrics and measures for each type of evaluation they will conduct, Nawn says.

Real-Time Testing Instantly Improves Meeting Outcomes

For on-site evaluations, the objective is twofold: to ensure that attendee learning and understanding reaches a specific level in each session, and to ensure that each topic is addressed in ways that satisfy the audience.

For the first objective, sessions should contain quality checks along the way. “If you’re conducting knowledge- or skill-based sessions such as implementing a new medical procedure or a new sales approach,” Nawn says, “you need to see in real time that people understand each concept before moving on.” Such quizzes are an easy addition to an audience-response system or a conference app.
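The article doesn’t specify how any particular audience-response system scores these checks, but the logic is simple enough to sketch. The sketch below (plain Python, with made-up quiz data and a hypothetical 70 percent threshold) shows the kind of real-time comprehension check Nawn describes: tally the answers and flag the concept for re-teaching before the presenter moves on.

```python
from collections import Counter

# Hypothetical responses to one concept-check question, as an
# audience-response system or conference app might collect them.
responses = ["B", "B", "A", "B", "C", "B", "B", "A", "B", "B"]
correct_answer = "B"
threshold = 0.70  # assumed minimum share of correct answers before moving on

tally = Counter(responses)
share_correct = tally[correct_answer] / len(responses)

# Flag the concept for review if comprehension falls short of the threshold.
needs_review = share_correct < threshold
print(f"{share_correct:.0%} correct; revisit concept: {needs_review}")
```

In practice the tallying happens inside the response platform; the point is that the pass/fail signal is a single comparison, so it can be surfaced to the presenter the moment polling closes.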

For the second objective, send attendees a notice from the conference app in the final three minutes of a session saying the in-app evaluation must be done before the next agenda item begins.

Nawn often supplements this evaluation with data collected through “interview intercepts” on the show floor, conducted by a team wielding iPads.

“We collect and analyze the data in real time so the planner knows pretty quickly whether people were very satisfied or dissatisfied with certain sessions, that a topic they expected to hear about wasn’t sufficiently covered, or that a common question has come up about a topic,” he says. “Then, the planner can alter sessions on the fly to address those things, guaranteeing that key points are reinforced and therefore retained over the long term.”

Post-Event Evaluations Must Align with Executives’ Objectives

While many post-event evaluations go out within two weeks of a meeting’s end, Nawn says that’s almost never enough time to learn the true value of a business event. Instead, planners should use their premeeting collaboration with executives to determine when the desired changes in attendee behavior should result in performance improvement, and then distribute the event evaluation around that time.

“For each organization, the right template will focus on attendees’ practical use of the educational content and the behavioral changes to be achieved,” Nawn says. “You have to choose a time frame where you can see the change in business performance from the new knowledge and behavior of your workforce. In some cases, that could be two, three or even six months.”

To guide planners on building the right evaluation to match their objectives, Nawn has made available his post-event evaluation template at ThePerfectMeeting.com.

One feature of post-event evaluations that can yield rich information is the open-ended question. Planners generally avoid including more than one or two such questions, because mining the free-text responses for useful intelligence and trends takes many labor hours.

However, data-analytics tools have evolved to make it easier for planners to analyze the responses. For instance, IBM Watson Analytics offers a tool where planners can upload a spreadsheet with up to 50 columns and 100,000 rows of data, to be analyzed in various ways—for free. Further, the queries can be phrased in plain English for Watson to find correlations and display results in different ways, even as word clouds.
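The mechanics behind that kind of text analysis are not Watson-specific. The following minimal sketch, using only Python’s standard library and invented survey responses, shows the simplest version of the idea: strip common filler words, then count what remains so recurring themes surface.

```python
import re
from collections import Counter

# Hypothetical open-ended responses from a post-event survey export.
responses = [
    "The sales sessions were excellent, but the app kept crashing.",
    "More hands-on sales workshops, please.",
    "App crashing during sessions was frustrating.",
]

# Common filler words to ignore so the themes stand out.
stopwords = {"the", "but", "was", "were", "more", "during", "please", "and"}

words = []
for text in responses:
    words.extend(w for w in re.findall(r"[a-z']+", text.lower())
                 if w not in stopwords)

# The most frequent remaining terms hint at recurring themes.
print(Counter(words).most_common(4))
```

A dedicated tool like Watson Analytics goes much further (correlations, natural-language queries, word clouds), but even this crude frequency count turns a pile of free text into something a planner can scan in seconds.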

Nawn’s template is mostly quantitative. It asks attendees for rankings, but also for percentages—for instance, the share of meeting content that was relevant to their work and the extent to which their overall job was affected by what they learned. Sales figures and other organizational metrics can complement this data, as well.

The Final Test: Reporting to Executives

Nawn starts by reiterating to meeting owners the goals and objectives they originally agreed upon, as well as the metrics and measures they chose to track. He explains all the data-capture methods used, and only then reports the results.

“We run frequencies, bar charts, pie charts and cross tabs for basic results, and then we get into analysis to look at the relationship between variables. Last, we present our interpretations of all the data,” he says. “Once executives see a report like that and realize what kind of data they have access to, they start thinking of new questions to ask. They buy into the project for the next event, and then things start to get really interesting.”
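Nawn doesn’t detail his firm’s tooling, but the basic report he describes—frequencies first, then relationships between variables—can be sketched with nothing but Python’s standard library. The records below are invented; the shape of the analysis is the point.

```python
from collections import Counter, defaultdict

# Hypothetical evaluation records: (attendee role, session rating 1-5).
records = [
    ("Sales", 5), ("Sales", 4), ("Sales", 5),
    ("Marketing", 3), ("Marketing", 4),
    ("Engineering", 2), ("Engineering", 3),
]

# Frequencies: how many responses came from each role.
frequencies = Counter(role for role, _ in records)

# A simple cross tab: average rating broken out by role, to surface the
# relationship between two variables (here, role vs. satisfaction).
by_role = defaultdict(list)
for role, rating in records:
    by_role[role].append(rating)
avg_by_role = {role: sum(r) / len(r) for role, r in by_role.items()}

print(frequencies)
print(avg_by_role)
```

The bar charts and pie charts in an executive report are just visual renderings of these same counts and averages; the cross tab is where the “new questions” Nawn mentions tend to come from, because it shows which segments of the audience the event served well and which it didn’t.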

Rob Carey is a business journalist and principal of Meetings & Hospitality Insight, a content marketing firm for the group-business market.
