Q aims to develop better connected and better informed groups of people committed to, and able to, improve quality across the UK’s health and care system. A process of continuous evaluation was designed into Q to support independently informed ‘step-back moments’ to learn and do things better. This blog shares a little about our approach to evaluation and the things we have been focusing on.
During this phase of the evaluation, our key areas of focus include how well Q supports: building connections; governance and regional partnerships; recruitment; activities and events; the new Q improvement lab; learning; and managing change. In all of this we seek to understand not only whether these activities are well done in themselves but also how well they contribute to the overall aim of the Q initiative. The data that Q members provide us with through surveys, interviews and focus groups continues to be incredibly valuable. We work closely with the Q team to feed back the aggregate results and analysis – and the Evaluation and Insight Manager, Hannah O’Malley, will soon be providing you with an update on key findings and how these are informing the design and delivery of Q.
Q’s ultimate aim is nothing less than to contribute to a transformed health and social care sector in which an informed and empowered community of improvers has both the ability and the authority to deliver demonstrable improvements. This aim is also shared by other local and regional initiatives and national mandates. So part of our challenge is to understand not so much what Q has achieved on its own as how well Q has strengthened local and national action. This is obviously hard to measure but important to do. We think that detailed case studies will help us do this, and our first planned case study will look at how well this has worked in Scotland. We anticipate that future case studies will focus on smaller areas or groups, but by exploring the dynamic through which Q and other improvement efforts interact in Scotland we hope to generate exciting insights into the initiative as a whole.
As the team delivering the evaluation of the Q initiative, we have often talked about the importance of being both embedded and independent. The embedded part means fully understanding the perceptions, motivations and experiences of all the Q stakeholders. This involves engaging at events, through focus groups and interviews, and through participating in meetings. This not only gives us deeper knowledge of the initiative but also tells us as evaluators what might be useful to decision-makers and participants, and when they might need to know it. On the other hand, there would be little purpose in becoming so embedded that we lose our critical distance and the ability to use evidence to challenge the direction of the initiative. Therefore, we also ensure critical review of our work through quality assurance processes within RAND and by presenting our approach and findings to different forums of independent peers. For example, we will be presenting our approach at the Health Services UK conference in Nottingham on the 6th and 7th of July.
If you would like to learn more about evaluation, and the evaluation of Q in particular, do sign up to the new Evaluation Special Interest Group, led by Q member Emma Gibbard. We will be taking part in a teleconference at midday on 18 July to discuss the aims of this group and present further details about the methods and findings of the evaluation of Q, as well as plans for the next phase.