ChatGPT can help doctors and hurt patients

    “Medical knowledge and practices change and evolve over time, and there’s no telling where in the timeline of medicine ChatGPT gets its information from when mentioning a typical treatment,” she says. “Is that information recent or dated?”

    Users should also be wary of how ChatGPT-style bots can present fabricated or “hallucinated” information in a superficially fluent manner, potentially leading to serious errors if a person doesn’t fact-check an algorithm’s answers. And AI-generated text can influence people in subtle ways. A non-peer-reviewed study published in January that posed ethical dilemmas to ChatGPT concluded that the chatbot makes for an inconsistent moral advisor that can influence human decision-making, even when people know the advice comes from AI software.

    Being a doctor is much more than spitting out encyclopedic medical knowledge. While many physicians are excited about using ChatGPT for low-risk tasks such as text summarization, some bioethicists worry that physicians will turn to the bot for advice when they need to make a tough ethical decision, such as whether surgery is the right choice for a patient with a low probability of survival or recovery.

    “You can’t outsource or automate those kinds of processes in a generative AI model,” said Jamie Webb, a bioethicist at the University of Edinburgh’s Centre for Technomoral Futures.

    Last year, Webb and a team of moral psychologists explored what it would take to build an AI-powered “moral advisor” for use in medicine, inspired by previous research that suggested the idea. Webb and his co-authors concluded that it would be difficult for such systems to reliably balance different ethical principles, and that doctors and other staff could suffer “moral de-skilling” if they became overly reliant on a bot instead of thinking through hard decisions themselves.

    Webb points out that doctors have been told before that language-processing AI would revolutionize their work, only to be disappointed. After the Jeopardy! wins in 2010 and 2011, IBM’s Watson division turned to oncology, making claims about AI’s effectiveness at fighting cancer. But that solution, initially called Memorial Sloan Kettering in a box, was not as successful in clinical settings as the hype suggested, and in 2020 IBM shut down the project.

    When hype rings hollow, there can be lasting consequences. During a February panel discussion at Harvard on AI’s potential in medicine, primary care physician Trishan Panch recalled seeing a colleague post on Twitter, shortly after the chatbot’s release, the results of asking ChatGPT to diagnose an illness.

    Excited clinicians quickly responded with commitments to use the technology in their own practices, Panch recalled, but around the 20th response, another doctor chimed in, saying every reference generated by the model was bogus. “It only takes one or two things to erode confidence in the whole thing,” said Panch, cofounder of healthcare software startup Wellframe.

    Despite AI’s sometimes glaring flaws, Robert Pearl, formerly of Kaiser Permanente, remains extremely optimistic about language models like ChatGPT. He believes that in the coming years, language models in healthcare will become more like the iPhone: packed with features and processing power that can augment physicians and help patients manage chronic illness. In fact, he suspects that language models like ChatGPT could help reduce the more than 250,000 deaths caused by medical errors in the US each year.

    Pearl considers some things off limits to AI. Helping people deal with grief and loss, end-of-life conversations with families, and discussions of procedures with a high risk of complications shouldn’t involve a bot, he says, because each patient’s needs are so variable that you have to have those conversations to get there.

    “Those are people-to-people conversations,” says Pearl, who predicts that what’s available today is only a small percentage of the potential. “If I’m wrong, it’s because I’m underestimating the rate of technology improvement. But every time I look, it goes faster than I thought.”

    For now, he compares ChatGPT to a medical student: able to care for and assist patients, but everything it does needs to be evaluated by an attending physician.