Burnout and medical errors: Physician burnout from long hours and administrative tasks causes preventable medical errors.
EMR and LLM challenges: EMRs, while useful, demand extra time. EMR work currently takes roughly double the time of direct patient care, increasing the risk of errors. LLMs could be part of the solution, but hallucinations are a major issue.
Innovative solutions are emerging to tackle this problem. Some would point to LLMs; however, their sources are often not clearly shown and their answers can be hallucinated. That is a fatal flaw when dealing with patient care.
Large language models (LLMs) sometimes make up information or give wrong answers, a phenomenon called hallucination. This is unacceptably dangerous in healthcare, where even small mistakes can lead to serious health problems or death for patients. These tools are excellent at generating text, but the sources and reasoning behind their output are not clear. If a doctor trusts an LLM that gives wrong advice about a diagnosis, medication, or treatment, it could hurt or even kill a patient. A tool that can “hallucinate” does not fit with the careful, accurate work needed to keep patients safe.
Before LLMs can support doctors and treatment workflows, they need to be checked and tested carefully to make sure they do not make dangerous mistakes. In addition, their sources need to be stated and clearly traceable. Current LLMs do not allow this: it is not transparent how they are trained, as that information is likely proprietary.
We at Phrasefire decided we needed a simple, physician-designed system to reduce the real and perceived effort that goes into the EMR every day.
Check it out at www.phrasefire.com