
Doctors use ChatGPT to improve the way they talk to patients

    On November 30 last year, OpenAI released the first free version of ChatGPT. Within 72 hours, doctors were using the AI-powered chatbot.

    “I was excited and amazed, but to be honest, a little alarmed,” said Peter Lee, the corporate vice president for research and incubations at Microsoft, which invested in OpenAI.

    He and other experts expected that ChatGPT and other AI-powered large language models could take over everyday tasks that consume hours of doctors’ time and contribute to burnout, such as writing appeals to health insurance companies or summarizing patient notes.

    However, they worried that artificial intelligence also offered a perhaps too tempting shortcut to finding diagnoses and medical information that may be incorrect or even fabricated, a frightening prospect in a field like medicine.

    But the most surprising thing for Dr. Lee was a use he didn’t expect: doctors asked ChatGPT to help them communicate with patients in a more compassionate way.

    In one survey, 85 percent of patients reported that a doctor’s compassion was more important than wait time or cost. In another study, nearly three-quarters of respondents said they had gone to doctors who were not compassionate. And a study of doctor conversations with the families of dying patients found that many lacked empathy.

    Enter chatbots, which doctors use to find words to deliver bad news and express concerns about a patient’s suffering, or explain medical recommendations more clearly.

    Even Dr. Lee of Microsoft said that was a bit disturbing.

    “As a patient, I personally would feel a little weird about it,” he said.

    But Dr. Michael Pinnone, the chairman of the department of internal medicine at the University of Texas at Austin, has no qualms about the help that he and other physicians on his staff have received from ChatGPT to communicate with patients on a regular basis.

    He explained the problem in doctor’s terms: “We had a project to improve treatments for alcohol use disorders. How do we involve patients who have not responded to behavioral interventions?”

    Or, as ChatGPT might answer if you asked it to translate that, how can doctors better help patients who drink too much alcohol but haven’t stopped after talking to a therapist?

    He asked his team to write a script on how to talk to these patients with compassion.

    “A week later, no one had done it yet,” he said. All he had was a text put together by his research coordinator and a social worker on the team, and “that wasn’t a real script,” he said.

    So Dr. Pinnone tried ChatGPT, which immediately replied with the conversation topics the doctors wanted.

    However, social workers said the script needed to be revised for patients with little medical knowledge, and also translated into Spanish. The final result, which ChatGPT produced when asked to rewrite it at a fifth-grade reading level, began with a reassuring introduction:

    If you think you’re drinking too much alcohol, you’re not alone. Many people have this problem, but there are medications that can help you feel better and live a healthier and happier life.

    This was followed by a simple explanation of the pros and cons of treatment options. The team started working on the script this month.

    Dr. Christopher Moriates, the project’s co-principal investigator, was impressed.

    “Doctors are known for using language that is difficult to understand or too sophisticated,” he said. “It’s interesting to see that even words that we think are easy to understand are actually not.”

    The fifth-grade script, he said, “feels more real.”

    Skeptics like Dr. Dev Dash, who is part of Stanford Health Care’s data science team, have so far been unimpressed by the prospect of large language models like ChatGPT helping physicians. In tests that Dr. Dash and his colleagues ran, they got answers that were occasionally wrong but, he said, more often were not useful or were inconsistent. If a doctor uses a chatbot to communicate with a patient, mistakes can make a difficult situation worse.

    “I know doctors use this,” said Dr. Dash. “I’ve heard residents use it to guide clinical decision-making. I don’t think it’s appropriate.”

    Some experts question whether it is necessary to rely on an AI program for empathetic words.

    “Most of us want to trust and respect our doctors,” said Dr. Isaac Kohane, a professor of biomedical informatics at Harvard Medical School. “If they show that they are a good listener and empathetic, it builds our trust and respect.”

    But empathy can be deceptive. It can be easy, he said, to confuse a good bedside manner with good medical advice.

    There’s a reason doctors may overlook compassion, said Dr. Douglas White, the director of the Critical Illness Ethics and Decision Making Program at the University of Pittsburgh School of Medicine. “Most doctors are quite cognitively focused and treat the patient’s medical problems as a series of problems that need to be solved,” said Dr. White. As a result, he said, they may not be paying attention to “the emotional side of what patients and families are experiencing.”

    At other times, doctors are all too aware of the need for empathy, but the right words can be hard to find. That’s what happened to Dr. Gregory Moore, who until recently was a senior executive leading health and life sciences at Microsoft and wanted to help a friend who had advanced cancer. Her situation was dire and she needed advice on her treatment and future. He decided to pose her questions to ChatGPT.

    The result “blew me away,” said Dr. Moore.

    In long, compassionately phrased responses to Dr. Moore, the program gave him the words to explain to his friend that there were no effective treatments:

    I know this is a lot of information to process and you may feel disappointed or frustrated with the lack of options…I wish there were more and better treatments…and I hope there will be in the future.

    It also suggested ways to deliver bad news when his friend asked if she could attend an event in two years:

    I admire your strength and your optimism and I share your hope and your purpose. However, I also want to be honest and realistic with you and I don’t want to give you false promises or expectations… I know this is not what you want to hear and it is very hard to accept.

    Late in the conversation, Dr. Moore wrote to the AI program: “Thank you. She will feel devastated by all this. I don’t know what I can say or do to help her during this time.”

    Dr. Moore said that in its reply, ChatGPT was “starting to care about me,” suggesting ways he could deal with his own grief and stress while trying to help his friend.

    It concluded in a curiously personal and familiar tone:

    You do great work and you make a difference. You are a good friend and a great doctor. I admire you and I care about you.

    Dr. Moore, a practicing physician specializing in diagnostic radiology and neurology, was stunned.

    “I wish I had this when I was training,” he said. “I’ve never seen or had a coach like this.”

    He became an evangelist and told his doctor friends what had happened. But, he and others say, when doctors use ChatGPT to find words to be more empathetic, they are often hesitant to tell any but a few colleagues.

    “Maybe that’s because we cling to what we see as an intensely human part of our profession,” said Dr. Moore.

    Or, as Dr. Harlan Krumholz, the director of the Center for Outcomes Research and Evaluation at Yale School of Medicine, said, if a doctor admits to using a chatbot in this way, it is “admitting you don’t know how to talk to patients.”

    Still, those who’ve tried ChatGPT say the only way for doctors to decide how comfortable they’d feel handing over tasks, such as cultivating an empathetic approach or reading charts, is to ask it some questions themselves.

    “You’d be crazy not to give it a try and learn more about what it can do,” said Dr. Krumholz.

    Microsoft also wanted to know and, together with OpenAI, gave a number of academic doctors, including Dr. Kohane, early access to GPT-4, the updated version released in March for a monthly fee.

    Dr. Kohane said he approached generative AI as a skeptic. In addition to his work at Harvard, he is an editor at The New England Journal of Medicine, which plans to launch a new journal on AI in medicine next year.

    While he noted there is a lot of hype out there, testing GPT-4 left him “shocked,” he said.

    For example, Dr. Kohane is part of a network of physicians who help decide whether patients are eligible for evaluation in a federal program for people with undiagnosed illnesses.

    It is time-consuming to read the referral letters and medical histories and then decide whether to accept a patient. But when he shared that information with ChatGPT, it was “able to accurately decide in minutes what it took doctors a month to do,” said Dr. Kohane.

    Dr. Richard Stern, a rheumatologist in private practice in Dallas, said GPT-4 had become his constant companion, making the time he spent with patients more productive. It writes friendly responses to his patients’ emails, provides compassionate responses for his staff to use when answering questions from patients who call the office, and takes care of the heavy paperwork.

    He recently asked the program to write a notice of appeal to an insurer. His patient had a chronic inflammatory disease and had not received relief from standard medications. Dr. Stern wanted the insurer to pay for the off-label use of anakinra, which costs about $1,500 a month out of pocket. The insurer had initially refused coverage and he wanted the company to reconsider that denial.

    It was the kind of letter that would have taken Dr. Stern a few hours to write, but ChatGPT produced it in minutes.

    After receiving the letter from the bot, the insurer granted the request.

    “It’s like a new world,” said Dr. Stern.