A simple twist fooled AI and revealed a dangerous flaw in how it handles medical ethics


Even the most powerful AI models, including ChatGPT, can make surprisingly basic errors when navigating ethical medical decisions, a new study reveals. Researchers tweaked familiar ethical dilemmas and discovered that AI often defaulted to intuitive but incorrect responses—sometimes ignoring updated facts. The findings raise serious concerns about using AI for high-stakes health decisions and underscore the need for human oversight, especially when ethical nuance or emotional intelligence is involved.

Read the full story on ScienceDaily

Read more on: simple twist, dangerous flaw, medical ethics

Related news:

Japanese researchers argue ChatGPT is ready to teach medical ethics - EurekAlert

Orbital Electronics: See How a Simple Twist Is Rewiring the Future of Technology