

Expert Insight: Common improvement project pitfalls, and how to get published

Each year around 1,600 papers are submitted to BMJ Quality & Safety, and only around 10% of them will make it to print. Here the journal's Editor-in-Chief, Dr Kaveh Shojania, shares some valuable insights with the Q team into what makes a publishable project.

Dr Kaveh Shojania

I’m often asked for advice on getting improvement work published. Of course, I understand the desire for such advice, but the more valuable knowledge I can share relates to the common pitfalls in developing, executing and evaluating projects from the outset.

We hope to outline some key tips in a piece for BMJ Quality & Safety* later this year, and it’s a subject I teach regularly, both in pragmatic improvement courses and in a graduate program I help run at the University of Toronto.

Common pitfalls

  1. A very common pitfall is rushing to a solution without first characterizing the causes of the target problem – for instance, deciding that the solution consists of a reminder, audit and feedback, or a checklist, with no initial diagnostics to determine why the desired state does not exist. Many improvement efforts thus suffer from having no theory for the intervention. By theory I don’t mean something grand or philosophical, but simply a mechanism for how and why the intervention ought to work. We published a review of this problem, Demystifying theory and its use in improvement, which includes some very helpful guidance.
  2. Another incredibly common problem lies with what I would call fake PDSA. The plan-do-study-act approach depends on rapid-cycle, small tests of change to refine multiple aspects of the proposed intervention or change idea. Yet the eventual intervention often looks essentially the same as the original idea, with no changes or refinements in response to the numerous implementation challenges that most improvement efforts encounter. Our primer, A primer on PDSA: executing plan-do-study-act cycles in practice, not just in name, provides a detailed example using a previously published improvement intervention.

Publishing success

Returning to the question of publishing success, the reality is that, for BMJ Quality & Safety, even reasonably well-done projects can be difficult for us to accept for publication. There are two predominant reasons for this.

A high bar

We receive around 1,600 papers a year from authors eager to see their work published in a respected peer-reviewed journal. Unfortunately, we have space to publish only about 10-15% of these submissions, and we must also publish lots of other content in addition to improvement reports, including research that advances the science of improvement. In that context, to be published, improvement reports should do at least one of the following:

  • Employ a novel or particularly intensive intervention
  • Have a robust evaluation that demonstrates important principles of improvement science
  • Reveal interesting barriers to implementation (or how they were resolved)

Again, setting the bar this high largely reflects the volume of submissions we receive. But these requirements also reflect the fact that the particular target of any given improvement intervention (for example, reducing dosing errors for antibiotics given to patients with cystic fibrosis, or improving adherence to a particular guideline in community-based mental health) will hold interest for only a small minority of readers, given the range of specialties, professions and clinical settings from which readers interested in QI work come.

There are exceptions, of course – for example, when the improvement intervention targets a problem of broad interest and meets one of the above criteria. A great example from last year is an improvement project involving a care bundle for patients with COPD, which not only achieved various specific process improvements but also reduced re-admissions.

Novel application

A lot of improvement projects represent the routine application of standard methods, making it difficult to justify publishing them in the peer-reviewed literature. Suppose a group implements a simple order set on an acute care ward and thereby shows an improvement in venous thromboembolism prophylaxis. This type of intervention has become so routine that we can’t publish such a report, no matter how large the improvement. It would be a little like publishing a clinical case report describing a routine case of pneumonia. One could justify using such a case in a textbook for students, but why report a typical case in the medical literature?

If the intervention is neither novel nor a particularly intensive example of a familiar intervention… then the application of the methods must be exemplary…

If the intervention is neither novel nor a particularly intensive example of a familiar intervention (say, medication reconciliation), then the application of the methods must be exemplary or there should be some important learning about implementation barriers encountered and how they were resolved. This doesn’t differ from similar situations in, say, medical education. For instance, no one would expect an educational journal to publish a description of a course from medical or nursing school unless it included some novel elements or a robust evaluation.

Broad benefit

It may seem like a terrible shame that we can’t publish more improvement reports than we do, but ultimately a published report must offer opportunities for learning. If a team at Hospital A succeeds in reducing X or increasing Y, readers at other hospitals will still have to carry out their own PDSA cycles, because differences in workflow and context mean different implementation challenges will arise. So the improvement report shows that Hospital A succeeded in its goal, but not exactly how that success can be replicated; those interested in replicating it must more or less do their own version of the project. This differs from clinical research, where, when we read a paper concluding that a particular treatment or intervention works, we can expect it to work in generally similar patients. But, for improvement interventions, so much depends on local context and implementation issues.

… for improvement interventions, so much depends on local context and implementation issues.

When I present this perspective to students, junior colleagues and others, it can sound demoralizing. However, I remind them of the following: if the motivation for the project lay in the importance of the target problem, then mission accomplished – the improvement itself is the main reward.

If, however, a key goal from the outset was to receive academic credit for the work, then this needs to be factored in when choosing the project from the get-go. In other words, consider the audience that would value the target of the intervention, and apply the same considerations to the intervention itself and its evaluation. Journals routinely reject papers with major, unfixable problems, but merely passing that bar is not on its own enough for publication – the paper must also hold interest for a broad readership.

*BMJ Quality & Safety is published monthly and is free for Q members to access. If you have a piece of work you’d like to see published in the journal, reach out to the editorial team. We also encourage all Q members to submit improvement reports to BMJ Open Quality for publication.
