I recently learned that the NIH is questioning 16 years of Alzheimer’s research after signs of falsified imaging were found in one of the field’s seminal studies. Apparently, the published images of an amyloid protein look suspiciously like copied-and-pasted Western blots.[1] One editorial in Science explained that “Journals are often uninterested in negative results, and researchers can be reluctant to contradict a famous investigator.”[2] Now, countless studies that built on the original research are being called into question.

Fabrications like these breed reluctance to trust even peer-reviewed findings. In the previous decade, a trio of researchers famously revealed that their deliberately falsified papers had passed peer review and been published, exposing ethical concerns across the industry.[3] What does it take to get published today?

When I was in college, I often wondered about all the research studies that were never published – the ones with nonsignificant findings, or where the null hypothesis could not be rejected. I wish there were a research journal – perhaps it would be called Null – where especially interesting null results and nonsignificant studies could be published. Researchers could continue to be promoted and published, but knowledge-building would take precedence. A library of negative findings could be established to prevent the duplication of negative studies. Risk management algorithms could query a rich history of negative findings to help us judge the cost or benefit of new therapies.

But human beings fear failure; it is one of our failures.

In quality improvement, we are so focused on finding and fixing failure. I sometimes joke that I find myself throwing together a corrective action plan just because the traffic light turned red. Unfortunately, the traffic signal is one metric that I cannot alter. But between the Christmas colors of our metric dashboards and the “go green!” slogans of our programs, we quickly learn that green is the symbol of success and red is the symbol of failure – ahem, an “opportunity for improvement,” rather.

As I look back on my career, I have learned a lot from the red moments. The red numbers revealed inequity in our maternal health program. The red metrics taught me to question percentages and samples in sepsis mortality data. Bright red failures prompt action, but we hardly ever study our “normal” results.

But a lot of terrible things used to be normal, and some studies appear normal until their results are studied on a larger scale. Sample size matters: the evidence on the sepsis protocol of Early Goal-Directed Therapy (EGDT), the basis for the Centers for Medicare & Medicaid Services (CMS) SEP-1 core measure, remained inconclusive until the PRISM study pooled patient-level data from three clinical trials conducted across seven countries. None of those trials could settle the mortality question on its own; only when they were combined in one library, one meta-analysis, was there enough statistical power to finally “see” what the mortality data said.[4]
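The statistical intuition here can be sketched with a toy calculation. The numbers below are hypothetical (not the PRISM data): three identically designed trials, each too small to detect the same underlying effect on its own, reach significance once their patients are pooled, which is the core idea behind a patient-level meta-analysis.

```python
import math

def two_prop_z_test(x1, n1, x2, n2):
    """Two-sided two-proportion z-test; returns (z, p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)          # pooled proportion under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal: p = erfc(|z| / sqrt(2))
    return z, math.erfc(abs(z) / math.sqrt(2))

# One small study: 20/100 deaths with the therapy vs. 30/100 with usual care.
_, p_single = two_prop_z_test(20, 100, 30, 100)

# Pool three such studies patient-by-patient: 60/300 vs. 90/300.
_, p_pooled = two_prop_z_test(60, 300, 90, 300)

print(f"single study p = {p_single:.3f}")   # above 0.05: "insignificant"
print(f"pooled p       = {p_pooled:.4f}")   # below 0.05: significant
```

The effect size is identical in every arm; only the sample size changes. That is why a library of individually “insignificant” studies is worth keeping: the answer may already be in the data, just spread too thin to see.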

These moments for improvement are meaningful, despite the unassuming nature of the initial statistics. And if anything, the discussion of falsified results assures us that the curtain on a sham study will eventually be pulled back. The sure way to fail is to not try, but if we hide and disguise our failures, we are destined to forget them, as though we never tried at all.

So let’s be honest. Let’s build the library of failures and see what solutions emerge from the seeming insignificance of our findings.


[1] Piller, C. (2022). Blots on a field? A neuroscience image sleuth finds signs of fabrication in scores of Alzheimer’s articles, threatening a reigning theory of the disease. Science, 377(6604). Retrieved Sept. 22, 2022, from https://www.science.org/content/article/potential-fabrication-research-images-thratens-key-theory-alzheimers-disease. doi:10.1126/science.ade0209.

[2] Ibid.

[3] Couronne, I. (2018, Oct. 5). Phys.org. Retrieved Sept. 22, 2022, from https://phys.org/news/2018-10-real-fake-hoodwinks-journals.html.

[4] The PRISM Investigators. (2017). Early, goal-directed therapy for septic shock – a patient-level meta-analysis. N Engl J Med, 376, 2223–2234. Retrieved Sept. 23, 2022, from https://www.nejm.org/doi/full/10.1056/nejmoa1701380. doi:10.1056/NEJMoa1701380.

2 Comments

  • Jennifer Sipert

    October 27, 2022

    Thank you for writing on something that is so rarely talked about in healthcare. On my last day at a job, a woman I worked closely with gave me my biggest professional compliment. She told me, “Jennifer, you taught us how to fail. We were so afraid of not having something to show for all that work we had done to gather and interpret the data.” We sure spend a lot of time on the null hypothesis (I still can’t believe I did all those calculations by hand in college!) and then we just walk away or look for a “significant finding” somewhere else. I think so many of us thrive when we find and solve problems that it is really easy to stay in that mode, because it can be so energizing. But it does not necessarily propel our research forward.

    • Gayle Porter

      November 3, 2022

      That is so true! There is risk in everything we do, but if we share our failures as well as our successes then we can improve. I like to think that some of our best solutions are just a few more failures away from being discovered.
      Thanks so much for sharing your experience, Jennifer!
