Robert Pearl: Medical malpractice in the age of AI: Who will bear the blame?

June 10, 2024

By Robert Pearl | May 30, 2024

More than two-thirds of U.S. physicians have changed their minds about generative artificial intelligence and now view the technology as beneficial to health care. But as AI grows more powerful and prevalent in medicine, apprehensions remain high among medical professionals.

For the last 18 months, I’ve examined the potential uses and misuses of generative AI in medicine — research that culminated in my new book, “ChatGPT, MD.” Over that time, I’ve seen the fears of clinicians evolve — from worries over AI’s reliability and, consequently, patient safety, to a new set of fears: Who will be held liable when something goes wrong?

Technology experts have grown increasingly optimistic that next generations of AI technology will prove reliable and safe for patients, especially under expert human oversight. As evidence, recall that Google’s first medical AI model, Med-PaLM, achieved a mere “passing score” (60 percent) on the U.S. medical licensing exam in late 2022. Five months later, its successor, Med-PaLM 2, scored at an “expert” doctor level (85 percent).

Since then, numerous studies have shown that generative AI increasingly outperforms medical professionals in various tasks, including diagnosis, treatment decisions, data analysis and even empathy.

Michelle’s Take: To minimize risk when using AI, it’s best to use it as a supportive tool and negotiate liability terms with the AI developers.
