One of the first moments I knew that my husband was “the one” was during a game of Balderdash. In this game, an unfamiliar word is read aloud, and each player writes a made-up definition. Then the players vote for the definition they believe is the real dictionary entry (or, failing that, the silliest one).

I was surprised to hear a nearly identical version of my definition during the reading. Everyone in the room started murmuring, “That must be the real one — there’s no way there could be two that sound so similar by chance.”

Well, it wasn’t the right answer. I was wrong, and so was my future husband, who had written an almost identical definition by reasoning from the same Latin prefix. It was true love.

One of the great things about Balderdash is its wild assortment of unfamiliar words; the game wouldn’t be much fun if we already knew the truth. The dictionary answer stays hidden, and the playful fun of the game is possible precisely because the truth is obscured and every answer seems valid.

It’s not like that in speechwriting. Wouldn’t it be great if we could just hand out demerits to the audience when they failed to understand our meaning? But when a presenter fails to make a point, we do not shame the audience for misunderstanding; we counsel the speaker because the speech was not understandable. That puts a lot of pressure on the speaker to know the audience and prepare accordingly.

Likewise, when I taught patients how to administer insulin in the hospital, I used a long-standing teaching tool called “return demonstration,” in which you ask the patient to demonstrate what you just showed them. It helps identify any misunderstandings that may have occurred. One hilarious example was a nurse who showed the patient how to inject insulin into an orange, because the texture of an orange peel approximates that of human skin for a subcutaneous injection. The patient demonstrated the procedure flawlessly, and then asked why the orange needed insulin.

Miscommunication is a real problem in healthcare, and poor communication is often cited as a precursor to medical errors. Nurses and other practitioners are thoroughly educated on clear communication techniques, with a heavy focus on assessment and evaluation. What if the patient cannot read? What if they don’t speak English? What if they don’t understand the changes to their medication? We are taught how to check for these problems, as well as how to solve them with creativity and empathy.

Recently I saw an outcry on LinkedIn over a post about the new app, BILLY, which is geared toward helping patients find billing information for procedures by providing a search tool based on the Current Procedural Terminology (CPT) code.[1] The objection was that most patients do not know what their procedural code is, or how to obtain it. Without a crosswalk, data dictionary, or translation of some sort, how would they use the tool?

The app developer took the criticism well and asked for machine-readable documentation containing the information, stating that they would be happy to integrate it into the tool. But in the midst of this conversation, I started to wonder why the government agencies that use these codes do not provide such materials to the public in the first place. Why is it so hard to know what you are diagnosed with, or what you’re consenting to?

Then an article from the Association of Health Care Journalists (AHCJ) really got my attention.[2] The article advised consumers of healthcare to be more aware of nuance in health journalism, urging them to understand the finer details of biostatistics and epidemiology. Okay; that’s a nice sentiment, but it is unlikely to happen.

But then they gave the example of evidence-based studies on the rate of medical errors in the United States leading to hyperbolic claims that caused false concern about patient safety. I’m sure you’ve heard the claim, “Medical errors are the third leading cause of death in the U.S.”[3] There is some evidence that such claims are sensationalized. For example, are the deaths caused by medical errors, or do they merely coincide with medical errors? It can be hard to separate the two, especially when running a high-stakes code with crash carts and someone’s life hanging in the balance. While the code-based mortality data used in the studies often cited for these claims is vague (just as the CPT codes were vague), those studies aimed to urge further research and intervention on medical errors, which can be dangerous and deadly.

Should we proverbially throw the baby out with the bathwater and dismiss patient safety as an issue because the data is difficult to capture? Absolutely not. Even with vague coding and technical jargon, there are measurable problems.[4] The Joint Commission reported 173 sentinel event falls in 2020. Just two years later, in 2022, that number had increased to 611 falls resulting in serious injury or death.[5] That’s nearly 440 additional people who broke bones or died after falling while under the care of a healthcare provider.

In other words, those numbers were better at the height of COVID than they are now.

So, I am loath to disregard the urgency of medical errors. Yes, they are difficult to track with current reporting procedures. Yes, there are data issues. But there are also significant increases in harm that should not be waved away as “nuance.” Why not suggest ways to improve data accuracy and reporting instead? Because asking patients and consumers to master the finer points of biostatistics and epidemiology is not going to go well.

Meanwhile, multiple healthcare lobbies have joined task forces devoted to censoring health discussions that could potentially propagate misinformation. I cannot help but ask whether they are misguided in doing so. Misinformation propagates only when the truth is unknown, obscured, or when the speaker fails to communicate effectively the first time.

Misinformation about COVID thrived in an environment of unknowns and tightly controlled dialogue. Ordinary practitioners waited on phone lines for hours trying to find out what to do with their infected patients, and often left those calls without any real guidance or treatment recommendations. As we eagerly joined public health webinars, we often found that the authoritative answer changed by the week. To be fair to those in charge, it was a novel virus, and what was known about it kept shifting. Meanwhile, friends, family, and neighbors on social media were remarkably consistent in their messaging.

Amid the lack of real data, it is no wonder that people turned to social media and junk science to find answers — any answers at all. It was like a game of Balderdash with billions of players making up their own answers, and average patients and healthcare workers were asked to guess which one was correct. The “dictionary” playbook was not yet written, and in that environment, two people with the same answer carry more validity than one right answer poorly communicated.

Controlling “misinformation” in the public is balderdash, and it is a sorry excuse for communicating poorly.

In a time of chaos, people search for consistency and what seems right. The honest, unadulterated truth was that we didn’t know, that anyone who claimed to know was just guessing, and that we were trying to write the dictionary as quickly and accurately as possible.

What would happen if we just said that? When the authorities say they don’t know, I am far less likely to search through my neighbor’s DIY medicinal recipes for a cure.

Instead of controlling the flow of information after communicating poorly, we need consistency and transparency when we communicate with the public. We need to be honest when we are guessing or waiting for more data. We should say it clearly the first time and allow individuals to make up their own minds about the evidence we share.

And it’s our job to use plain language that our audience understands; it is not their job to learn our technical profession just to participate in healthcare. We cannot herald the importance of access to healthcare and then ask our most disenfranchised patients to learn biostatistics before we talk straight with them. We should give people the opportunity to ask questions and to read back what they heard against what we meant. And it doesn’t give anyone great confidence in our medical leaders if they cannot communicate effectively.

Consider the great physicist Richard Feynman, who famously articulated the simplicity test of understanding used by many engineers and teachers today.[6] Asked to prepare a freshman lecture on why spin one-half particles obey Fermi-Dirac statistics, Feynman later admitted: “I couldn’t do it. I couldn’t reduce it to the freshman level. That means I don’t really understand it.” The sentiment is echoed in the now-common idiom, “say it to me like I’m six.”

If we can’t explain these complicated medical issues in simple terms that everyone can understand, then it’s time to assess our own ability to communicate. A well-informed public does not need someone to filter misinformation for them; they are perfectly capable of that on their own.

Today, if an organization says it is controlling misinformation, I assume it is a poor communicator that lacks evidence. Why let the communication mistakes made during the COVID-19 pandemic rob us of excellence in health research, hiding it away for fear the public might misunderstand the nuance of our field’s inaccessible jargon and opaque data-analysis techniques?

Rather let’s teach our researchers and leaders to communicate better, so the public understands what the data means in the first place. Say it to them like they’re six.


[1] BILLY (2023). BILLY Info Page. www.trybilly.app.

[2] Jaklevic, M. C. (July 27, 2023). ‘Medical errors are the third leading cause of death’ and other statistics you should question. Association of Health Care Journalists. https://healthjournalism.org/blog/2023/07/medical-errors-are-the-third-leading-cause-of-death-and-other-statistics-you-should-question.

[3] Makary, M. A., & Daniel, M. (2016). Medical error — the third leading cause of death in the US. BMJ, 353, i2139. Retrieved August 7, 2022, from https://www.bmj.com/content/353/bmj.i2139.

[4] Heron, M. (2021). Deaths: Leading Causes for 2019. National Vital Statistics Reports, 70(9), 18. Retrieved October 27, 2022, from https://www.cdc.gov/nchs/data/nvsr/nvsr70/nvsr70-09-508.pdf.

[5] The Joint Commission (2023). Sentinel Event Data 2022 Annual Review. https://www.jointcommission.org/-/media/tjc/documents/resources/patient-safety-topics/sentinel-event/03162023_sentinel-event-_annual-review_final.pdf.

[6] Goodstein, D. L., & Goodstein, J. R. (1996). “Feynman’s Lost Lecture: The Motion of Planets Around the Sun.” Caltech’s Engineering and Science magazine. http://calteches.library.caltech.edu/563/2/Goodstein.pdf.
