
When we hear the term evaluation, we often react in one of three ways: excitement, dread or bewilderment. Love it or hate it, most still agree that ‘good’ evaluation is essential for improvement. So, why does evaluation often feel like the elephant in the room?

In preparation for this blog I have been reflecting on my observations of evaluation practice over the course of my career, and how my experience of good and bad evaluation has shaped my attitudes and practice over time.

The role of evaluation

The work we do is often well-intentioned; most of us would not embark on a piece of work without a genuine belief in its potential impact. It seems logical that we want to know with some level of certainty that our efforts are not in vain. I often describe evaluation as ‘The Thing’. It’s the thing we know we should do but too often forget about. We convince ourselves that we are just too busy getting on with the work to focus on it. We may try to get to grips with it, but it feels too hard and slips off the radar.


At other times, we clearly see the importance of evaluation and put all our energies and resources into it, but are utterly disappointed by the output because it fails to provide clarity on the questions we need answered. Anecdotally, an improvement project may appear to make a positive impact, but the evaluation suggests the contrary and the project or initiative becomes a ‘dirty word’ which no one dares to mention. Or maybe we have delivered a great piece of work and only wish we had collected the data we needed to demonstrate its impact.

Most of my career has focused on developing interventions to improve the health and well-being outcomes of communities. Working in this mode, I have often been ‘evaluated’ as opposed to acting in the role of ‘evaluator’. I deliberately state that ‘I have been evaluated’ as I feel that the evaluative process is often presented as a means to objectively assess the quality, value and importance of an area of interest, but we forget that it can be an emotive process, which feels personal at times. After all, you are likely to be deeply invested: in the project’s purpose, the process, and the patients and communities whose lives you are working to improve. An evaluation that doesn’t tell you what you want to hear can feel like a personal failure, or if it fails to tell you what you need to know it can be frustrating. Yes, we know in principle an evaluation is about learning whatever the outcome, but it is only natural that at times people may feel under scrutiny and reluctant to fully engage in the process.


When it feels like everything has gone wrong, there is still so much that we can take away from the evaluation process itself that we can apply in the future:

  • With hindsight we can see the value in persevering with discussions about the role of evaluation when an idea has not been fully formed
  • We might be more inclined to seek help from others who have expertise
  • We may decide to be closer to the data to ensure we collect the information required to answer our key questions
  • We might need to think more carefully about the methods and measures we use to validate the impact we are observing anecdotally.

These challenges may be avoidable when an evaluation has been planned and implemented well. A good evaluation enables practitioners and stakeholders to understand whether an intervention has achieved the desired outcome. It surfaces learning that contributes to the existing improvement knowledge base, which in turn helps to inform decisions about where to focus investment and energy.

Within the community, it is clear that members see the importance of Q. Evaluation is often a popular topic for workshops at our events. In my conversations with members, they are often keen to evaluate their work but not sure where to start, especially when working on complex projects.

As a starting point, I often direct members to the Health Foundation’s quick guide ‘Evaluation: what to consider’ as it provides a helpful overview of how to approach evaluation and signposts the reader to a range of resources. Q’s Evaluation SIG (Special Interest Group) is also a useful resource for members. The SIG is led by Q members with expertise and enthusiasm for evaluation and provides an online space that allows members to connect and share learning.


How is Q evaluating?

Q is in an enviable position where its evaluation is concerned. Evaluation has been embedded in Q from its inception, and the developmental principles that have guided the initiative have helped to inform its ongoing development. As the initiative becomes more established, the evaluation will increasingly focus on the impact of Q. However, it remains a challenge to evaluate such a complex initiative. As the community continues to grow and members increasingly self-organise, it becomes more difficult for the evaluation to capture outcomes or attribute them directly to Q. The evaluation uses a range of methods, including surveys, focus groups and interviews, to help us to understand what is working well and how the project team and partners can respond to challenges.

An important feature of the evaluation is the annual member survey. The feedback we receive from members in the survey helps shape the future of Q. It provides invaluable insight into:

  • Why members join the community
  • How members experience their membership
  • The challenges members face within their improvement contexts.

Q members – we need your help

This year we would like to align our offer to better support members’ improvement efforts, so we will be asking you about your improvement priorities for the next 12 months. We are increasingly focusing on capturing the impact of Q, and this survey aims to surface members’ views on the impact their involvement has had on their practice and the wider health and care system.

An email from RAND Europe should now have arrived in your inbox inviting you to take part in the survey. Last year 39% of our members responded, and we are hoping that with your help we will increase the response rate this year. So, if you are curious about how other members are experiencing their time in Q, have ideas about how we could improve Q’s offer, or you would like to share a story about how Q has influenced your practice, please take the time to complete the survey. As a project team, we understand that our members lead busy lives and we really appreciate the time you take in supporting the evaluation. We want to thank you in advance and look forward to sharing the highlights from the survey with you!
