
Q Exchange

How much do patients and staff benefit?

We will help you measure how much your project benefits patients, staff and others, using validated person-reported outcome and experience measures (PROMs and PREMs). Ideal for peer-support schemes.

  • Idea
  • 2018

Meet the team: Measure What Matters

Also:

  • Andrew Liles, Director (former acute trust CEO)
  • Jess Done, project manager
  • Joe Sladen, senior analyst

The challenge

Evaluation is a core requirement of most innovation funding, including Q Exchange. However, it is time-consuming and difficult. An assessment criterion for Q Exchange is that all projects demonstrate, with some form of measurement, benefit to patients, staff and other beneficiaries. Our proposal is to help project teams do this.

Evaluation needs to consider a range of possible impacts, good and bad, to understand not only what works, but also the reasons for non-adoption, abandonment and challenges to scale-up, spread and sustainability (Greenhalgh et al., 2017).

Service innovations are often complex. We need to understand how they help or hinder patients, carers and staff, and to compare innovations with each other and with what happened previously. Evaluation methods need to be quick, easy to use, acceptable to all stakeholders and applicable to all types of condition (condition independence). Metrics may also track progress over time as key performance indicators (KPIs).

Projects need end-to-end solutions, including co-production, easy-to-use data collection, real-time reporting tools and dissemination of results.

Proposed idea

Our proposal is to help about six Q Exchange projects, especially those focusing on peer support and service innovations, with their evaluation. The offer includes choice of person-reported measures, data collection, results reporting and production of local and summary reports.

At a conceptual level, all people want the same things: for everyone to be as happy, healthy and in control as possible, to receive excellent service from and across providers, and for things to work as they expect. Our family of generic outcome, experience and innovation measures helps assess how well this is achieved.

Measures available

Our current measures are listed below, grouped by who reports them (see also http://r-outcomes.com).

Outcomes

  • Patient-reported: Health status, Subjective well-being, Health confidence, Self-care, Acceptance of change, Loneliness, Sleep quality
  • Carer-reported: Carer health status, Carer well-being, Carer confidence, Loneliness
  • Staff-reported: Staff health status, Work well-being, Job confidence

Experience

  • Patient-reported: Service received, Service integration, Wider determinants of health
  • Carer-reported: Service received, Service integration, Wider determinants of health
  • Staff-reported: Service provided, Service integration

Innovation

  • Patient-reported: Readiness for innovation, Digital literacy, User satisfaction
  • Carer-reported: Readiness for innovation, Digital literacy, User satisfaction
  • Staff-reported: Readiness for innovation, Digital literacy, User satisfaction, Innovation adoption process

These measures share a common look and feel. They are short, generic, quick to use, easy to understand and work well together. Each has four question items and four response options, which may be colour coded with emoji. Responses are collected using smartphones, tablets, computers, text messages, telephone calls or on paper.
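To make this structure concrete, here is a minimal sketch, in Python, of how one of these four-item, four-option measures might be represented and scored. The item wording, the 0–3 response coding and the scoring rule are illustrative assumptions, not the actual R-Outcomes content or scoring rules.

```python
from dataclasses import dataclass

# Assumed response labels; the real measures use their own wording,
# colour coding and emoji.
RESPONSE_OPTIONS = ["Strongly agree", "Agree", "Neutral", "Disagree"]

@dataclass
class Response:
    """One completed measure: four answers, each an index 0-3 into RESPONSE_OPTIONS."""
    measure: str        # e.g. "Health confidence" (name only; items are hypothetical)
    answers: list[int]  # exactly four items, one per question

    def score(self) -> int:
        """Illustrative summary score from 0 (worst) to 12 (best), assuming option 0 is best."""
        assert len(self.answers) == 4 and all(0 <= a <= 3 for a in self.answers)
        return sum(3 - a for a in self.answers)

r = Response(measure="Health confidence", answers=[0, 1, 0, 2])
print(r.score())  # -> 9
```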

Measures are mixed and matched for specific contexts. Validation and psychometric studies have been published for the longer-established measures, showing that they perform at least as well as much longer, well-established instruments.

These measures have been proven in the Wessex region in the evaluation of new care models and social prescribing, and in the AHSN Network’s mobile ECG (atrial fibrillation detection) programme.

Process

Projects involve four stages:

1. Choose which measures are needed.

2. Co-produce the workflow process used to collect the data.

3. Report findings without delay, using real-time interactive dashboards, for those who need to know (see the sketch after this list).

4. Make sense of the data collected and make changes to do things even better; this requires detailed local knowledge and leadership engagement.
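To illustrate stage 3, the sketch below shows the kind of aggregation a real-time dashboard might sit on top of: grouping before-and-after scores by project and measure, then reporting means and counts. The record layout and the "before"/"after" phase labels are assumptions for illustration, not a description of our actual reporting pipeline.

```python
from collections import defaultdict
from statistics import mean

# Toy records: (project, measure, phase, summary score). All values are invented.
records = [
    ("Project A", "Health confidence", "before", 6),
    ("Project A", "Health confidence", "after", 9),
    ("Project A", "Health confidence", "before", 5),
    ("Project A", "Health confidence", "after", 10),
]

# Group scores so a dashboard can show mean before/after per project and measure.
groups = defaultdict(list)
for project, measure, phase, score in records:
    groups[(project, measure, phase)].append(score)

for (project, measure, phase), scores in sorted(groups.items()):
    print(f"{project} | {measure} | {phase}: mean={mean(scores):.1f} (n={len(scores)})")
```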

We want to spread the use of these tools and see Q Exchange as a way to do this, which supports the Q initiative and helps other innovators.

Our team is based in Southern England, but operates nationally and internationally.

Deliverables

We will provide interim and final reports for each project and a final summary report at the end of one year, comparing the projects covered, including both process evaluation and the impact on patients, carers and staff as appropriate.

We will collaborate with and contribute to each project’s learning and spread plans, to help them maximise their impact, in addition to our own.

Benefits

A separate, independent evaluation project to measure the impact of Q Exchange innovations, as perceived by patients, staff and others affected, offers multiple benefits:

1. Shared economies of scale.

2. The costs are external to each project budget.

3. Each project can focus on its own innovation, spending less time and effort on evaluation.

4. Common methods for data collection and results reporting across projects facilitate comparison and simplify presentation of results.

5. The outcomes and experience in different projects can be better understood and new knowledge gleaned.

6. These tools are in themselves a valuable innovation.

7. This proposal does not prevent any project from undertaking its own evaluation in parallel.

Data protection issues are limited, because data collected from patients, carers and staff are either anonymous or use a meaningless, special-purpose token to link before-and-after measures for the same subjects.
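As a minimal sketch of the token idea, assuming a random opaque identifier issued once per subject and stored nowhere alongside their identity (the function name and record layout are hypothetical):

```python
import secrets

def new_token() -> str:
    """Generate an opaque token that carries no information about the subject."""
    return secrets.token_urlsafe(8)

token = new_token()  # given to the subject at baseline and reused at follow-up

before = {"token": token, "phase": "before", "score": 6}
after = {"token": token, "phase": "after", "score": 9}

# Later, before-and-after responses are paired purely on the token value,
# without ever knowing who the subject is.
assert before["token"] == after["token"]
print("Change in score:", after["score"] - before["score"])
```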

How you can contribute

  • The focus of this bid is collaboration with other projects.
  • Please contact Tim Benson (tim.benson@r-outcomes.com, 07855 682037) if you would like to know more or be involved.

Further information

PatientPoster (PDF, 391KB)

Comments

  1. Agree with the motivation to reduce the cost of data collection and to enable easy comparison – for which the project builds a good foundation. Other comments have raised the importance of experiential data that may not be adequately captured with survey-based methods. I’m wondering if the evaluation approach could be augmented by co-designing and tailoring outcome measures, and the methods needed to collect data on them, according to what is important for each evaluation?

    1. Thanks Dougal,

      We will strongly encourage the projects we collaborate with to do their own qualitative research, via case studies, interviews and so on, and also to collect cost and activity data. Our approach has been based on co-production with local projects; this has led to the development of a broad portfolio of measures. We are always keen to expand and validate our tools further. We have a good deal of experience in data collection methods and believe that it is very important to tailor data collection to local contexts.

  2. Hi Tim. Great to see a peer support focus to your bid. How do you envisage partnering with the 6 projects you’d be helping to evaluate – do you envisage that they will approach you to be involved, or would you approach them to ask if they wanted your assistance, or some other way? Also, how will you be evaluating the impact of your own project?

    As you already know from your involvement in the Q Lab, evidencing peer support was one of the main focuses of the Lab’s work this year. Although existing metrics can be used (e.g. the Patient Activation Measure (PAM), Social Return on Investment (SROI) etc.), the consensus was that they don’t provide the whole picture and, as you know, creating an accurate reflection of all of the benefits of peer support can be very difficult. I know you mention free-text answers, but I’m not sure what these would be for – do you think there’s scope for capturing lived-experience stories within your person-reported outcome and experience measures?

    1. Hi Hawys. We see ourselves as providing only one important part of the picture. We expect projects to do their own qualitative research, using case studies for example, and to do their own economic evaluation, especially of costs and activity.

      At the short-list stage, over the summer, we will reach out to other projects and ask them if they would like to take part with us. There is an element of chicken and egg because the nature of this competition means that one or both of us may fail to be selected.

  3. From my point of view as a new Q member, the how, the what and the time limitations around measurement are among my biggest areas of challenge. I think this would be a fantastic resource for supporting projects to demonstrate benefit and for teaching and equipping future projects.

    1. Thanks Susan, we specialise in evaluation and recognise that this is a difficult area, where experience and economies of scale can make a big difference. So much good practical work is not evaluated well and the results are simply lost.

  4. Hi Tim, the project sounds really interesting and something I think a lot of organisations would benefit from. We are often quite good at developing a project but not so good at evaluating and I like how you have broken that down further specifically for patients and for staff.

    1. I really hope so. A lot of what we do is evaluation work, but we would really like our tools to be used in routine care as key performance indicators (KPIs), to build a strong sustainable feedback loop from patients to clinicians and managers.

  5. Well, the Relational Coordination folks do offer training, a tool, etc.

    But I'd have thought it's actually a fairly standard Social Network Analysis survey, and they break down each relationship into 7 elements (things like shared mission, timely communication).

    You could do all their stuff, or just do your own version perhaps. Various Q members have been making use of RC, around Sheffield in particular.

    1. I like the scope statement on the RCRC website, which says it is good when work is interdependent, uncertain and time-constrained – as good a description of health care as any I have seen. It sounds like there is quite a bit of overlap with Normalisation Process Theory (NPT), which we are already using, but this is something I am keen to explore further. We managed to condense the core of NPT down to four questions asked of staff.

  6. Hi Tim - great to see a project that would collaborate and add value to other Q projects.
    The other element of any project or process that could do with being evaluated is the level of ‘Relational Coordination’ among the key participants/stakeholders (see Gittell’s work).

    It turns out that the quality of this relational coordination is strongly tied to many outcomes. It's a form of network analysis.

    Another query: I can see lots of measures, but is there a question about 'What improvements could we make that would make you more likely to recommend this service?' - so that there can be an ongoing improvement loop?

    1. Thanks Matt,

      Thanks for pointing me at Gittell’s work on Relational Coordination (RC). I have been looking in detail at Carl May’s Normalisation Process Theory and at our Service integration measure, but I think this is something different. I am not sure if RC can be measured using a survey without training.

      We usually add one or more free-text questions, covering things like what you like and what could be improved. This is all part of the continuing improvement loop.
