In November 2020, the Q team delivered the annual Q community event in a virtual format. Joined by over 600 members from across the UK and Ireland, we came away from the two-day event energised and excited. But just as running virtual events at this scale was new to us in 2020, so was evaluating them. I know that many community members have faced the challenge of creating engaging online events and virtual spaces for collaboration over the last year, so I wanted to share some insight we gained through this process.
Here are six things we learned from our experience of evaluating the success and impact of an online event.
Beware of data overload
When we’re online, we produce more data than when we attend physical events. From Zoom polls to chat sessions, every activity is a data collection opportunity – and on top of this we gathered all-important in-depth qualitative feedback from a smaller number of attendees. This should be a blessing but can also be a curse if you aren’t able to prioritise what’s important. We realised early on that we didn’t have time to sift through every data point. Clear evaluation questions helped us find a way through the vast amounts of data, and to prioritise where to put our energy. If you’re evaluating your own event, start by identifying your key questions, and then link each one to the data that can answer it – otherwise you risk drowning in data.
The rules of engagement are different online
We learned a lot more about engagement behaviour than we would have at an in-person event. For example, we encouraged busy delegates to “dip in and out” as required – and attendance patterns showed that delegates did so, particularly at the end of a segment. It was clear that the online platform made it easier for people to engage in this way, and we learned some key lessons from this data – including that Zoom’s ‘breakout’ function doesn’t suit everyone. Notably, this effect was not obvious in all sessions; the patterns below show that whereas one session held stable numbers until the end, the other had a disengagement spike between 40 and 50 minutes. Establishing early on what type of engagement you’re hoping for allows you to create a permission structure beforehand and build it into your event messaging.
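If you want to spot disengagement spikes in your own attendance data, a simple script can do the counting for you. Below is a minimal sketch in Python; the data format (a list of join/leave minutes per delegate, as you might extract from a platform’s attendance report) and the function names are our own illustration, not any platform’s API.

```python
from collections import Counter

def attendance_by_minute(joins_and_leaves):
    """Count concurrent attendees for each minute of a session.

    `joins_and_leaves` is a list of (join_minute, leave_minute) pairs,
    one per delegate, as might be extracted from an attendance export.
    """
    counts = Counter()
    for join, leave in joins_and_leaves:
        for minute in range(join, leave):
            counts[minute] += 1
    return counts

def biggest_drop(counts):
    """Return the minute at which attendance fell the most."""
    minutes = sorted(counts)
    drops = {m: counts[prev] - counts[m]
             for prev, m in zip(minutes, minutes[1:])}
    return max(drops, key=drops.get)

# Illustrative data: 30 delegates stay the full hour, 20 leave at
# minute 45, and 5 arrive late at minute 10.
records = [(0, 60)] * 30 + [(0, 45)] * 20 + [(10, 60)] * 5
print(biggest_drop(attendance_by_minute(records)))  # → 45
```

Plotting the per-minute counts over the session gives the kind of pattern we describe above, and makes a dip-out spike easy to see at a glance.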
Time is a cost
Considering venue, catering, travel, and other expenses, it would be easy to assume that in-person events are far more resource-intensive to run. However, our evaluation in 2020 showed that staff time is an easily overlooked cost during online event planning, and this year it made up over half of all costs incurred. This information enabled us to start having conversations about what appropriate staff resourcing looks like, and to fully recognise its substantial impact; without collecting this data, we would have been left guessing. We recommend keeping track of it, because staff time may be your greatest resource – and we found that the final numbers can be surprising.
Surveys are not enough
The first thought for evaluation is often simply to run a survey. Surveys are a useful tool after your event finishes, but it’s during the event that you have the largest captive audience. With this in mind, we integrated polls and other data collection during the event, making us less reliant on people answering a survey afterwards. We took five minutes at the end of every session to poll attendees for their feedback, using standardised questions so the results could be compared. Building in this time can also allow for a short ‘after-action review’, where delegates can say in the chat what worked well and what could have been even better. Looking back, we could have used this time better, and emphasised it more as a key mechanism for improving quality.
Impact is hard to demonstrate
You can do all the above and more, and it still may not be realistic to demonstrate impact by evaluating a one-off event. In many cases it will be better to consider impact in the context of a broader evaluation. For us, that means thinking about the role our community event plays in achieving impact through the Q community. You can also revisit your Theory of Change and ground yourself in what the evidence tells us about the type of change these events can create, so that your expectations are achievable.
Sharing, comparing, learning
Finally, you need a reference point; without one it can be hard to say anything very meaningful about the success of your event, even where you have lots of data. Refer back to any precedents you already have; online if possible, but in-person comparisons are still useful. For example, how does your delegate feedback compare to in-person events? How do the costs compare? We compared costs across years, and used similar survey questions to previous events; it really helped give a grounding of ‘what good looks like’. We plan to use some of the same measures in future years to continue comparing.
Ultimately, it’s good to acknowledge that evaluating online versions of usually in-person events is a new challenge for many of us. We are still figuring out how people engage online at a time when face-to-face events are not possible, and how our design can support that. But by sharing and comparing our learning we can grow a body of knowledge to help us all navigate the relatively new space of online-only events.
As we move through 2021, we’ll develop how we design events based on what we’ve learned and will continue to share our learning as we go. We’d also love to hear from you on what you’re doing to evaluate your online events. What’s worked and what hasn’t, and what have the challenges been? Share your thoughts in the comments below.