LLM hallucinations and patient care don’t play well together

  • Burnout and medical errors: Physician burnout from long hours and administrative tasks causes preventable medical errors.

  • EMR and LLM challenges: EMRs, while useful, demand extra time; physicians currently spend twice as long on EMR work as on direct patient care, which increases the risk of errors. LLMs could be part of the solution; however, hallucinations are a major obstacle.


Medical errors are an overlooked consequence of the healthcare system’s complexity. Globally, they contribute to millions of injuries and hundreds of thousands of preventable deaths each year. While the focus has traditionally been on better training and stricter protocols, there’s a growing recognition that one root cause lies in something more subtle: physician burnout.

Physicians deal with long hours, administrative burdens, and complex systems. Over time, this leads to fatigue, miscommunication, and mistakes that harm patients. It’s important to note that the fault doesn’t lie with the individual physician.

One unexpected contributor to physician burnout is the reliance on electronic medical records (EMRs). While EMRs have revolutionized how patient data is stored and accessed, they also consume a large amount of a doctor’s time. Studies show that physicians spend two hours on administrative tasks for every one hour spent with patients. This imbalance not only reduces the quality of patient care but also leaves doctors mentally drained, increasing the likelihood of errors.

Innovative solutions are emerging to tackle this problem. Some point to large language models (LLMs) as the answer; however, LLMs hallucinate, and in patient care that is a fatal flaw.

LLMs sometimes fabricate information or give wrong answers, a phenomenon known as hallucination. In healthcare this is unacceptably dangerous: even small mistakes can lead to serious morbidity and mortality. These tools are excellent at generating fluent text, but the sources and reasoning behind their output are not clear. A tool that can “hallucinate” doesn’t fit with the careful, accurate work needed to keep patients safe.

If LLMs are to help doctors and treatment workflows, they need to be rigorously checked and tested to avoid dangerous mistakes. In addition, sources need to be stated and the code clearly traceable. How these models are trained is also not transparent, likely because it is proprietary knowledge. Personally, I think the human MD will always need to be the mediator between data and patients.
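To make the “checked, tested, and traceable” idea concrete, here is a minimal sketch of one possible guardrail, written in Python with hypothetical names (`DraftAnswer`, `check_citations`): before an LLM draft reaches a chart, require that every citation in it points to a source passage the system actually supplied, and route anything uncited or unverifiable back to the physician. This is an illustration of the principle, not a description of any particular product’s implementation.

```python
import re
from dataclasses import dataclass


@dataclass
class DraftAnswer:
    """A model-generated draft plus the source snippets it was given."""
    text: str                 # the LLM's draft response
    sources: dict[str, str]   # citation id -> source passage supplied at prompt time


def check_citations(draft: DraftAnswer) -> dict:
    """Flag a draft for physician review unless every citation maps to a known source.

    Deliberately conservative: a draft with no citations, or with citations to
    material that was never provided, is never surfaced as-is.
    """
    cited_ids = set(re.findall(r"\[(\w+)\]", draft.text))   # e.g. "[S1]", "[S2]"
    unknown = cited_ids - draft.sources.keys()               # citations to nothing we supplied
    return {
        "has_citations": bool(cited_ids),
        "unknown_citations": sorted(unknown),
        "requires_physician_review": (not cited_ids) or bool(unknown),
    }


if __name__ == "__main__":
    draft = DraftAnswer(
        text="Consider reassessing the dose at the next visit [S1]. Recheck renal function [S9].",
        sources={
            "S1": "Clinic note 2024-03-02: dose titration plan...",
            "S2": "Lab report 2024-02-27: creatinine within normal range...",
        },
    )
    print(check_citations(draft))
    # -> {'has_citations': True, 'unknown_citations': ['S9'], 'requires_physician_review': True}
```

The point of a check like this is that the human reviewer stays in the loop by default: the software only decides when a draft must go back to the physician, never when it can bypass them.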

PHRASEFIRE is a simple, physician-designed rapid-workflow platform. It reduces the real and perceived effort of daily EMR work.

Check us out at www.phrasefire.com

**This post and the information contained in the platform are not a recommendation to treat specific patients. The information is intended for attendings, residents, and students under supervision, for educational reference.
