🚀 Explore this awesome post from TechCrunch 📖
📂 **Category**: AI, Biotech & Health, Anthropic, Health, Healthcare, Medicine, OpenAI
📌 **What You’ll Learn**:
Dr. Seena Bari, a practicing surgeon and head of healthcare AI at data company iMerit, has seen firsthand how ChatGPT can mislead patients through faulty medical advice.
“I had a patient visit me recently, and when I recommended a medication, they had a printed dialogue from ChatGPT saying this medication has a 45% chance of causing a pulmonary embolism,” Dr. Bari told TechCrunch.
When Dr. Bari investigated further, he found that the statistic came from research on the drug’s effect in a specialized subgroup of patients with tuberculosis, which did not apply to his patient.
However, when OpenAI announced its ChatGPT Health chatbot last week, Dr. Bari was more excited than concerned.
ChatGPT Health, which will be rolled out in the coming weeks, allows users to talk to a chatbot about their health in a more private environment, where their messages will not be used as training data for the underlying AI model.
“I think it’s great,” Dr. Bari said. “It’s something that’s already happening, so it needs to be formalized to protect patient information and put some safeguards around it […] It will make it more powerful for patients to use.”
Users can get more personalized guidance from ChatGPT Health by uploading their medical records and syncing them with apps like Apple Health and MyFitnessPal. For the security-minded, this raises immediate red flags.
“Suddenly, medical data is being transferred from HIPAA-compliant organizations to non-HIPAA-compliant vendors,” Itai Schwartz, co-founder of data loss prevention company MIND, told TechCrunch. “So I’m curious to see how organizations will handle this.”
But in the view of some industry professionals, the cat is already out of the bag. Now, instead of searching for cold symptoms on Google, people are talking to AI-powered chatbots — more than 230 million people already talk to ChatGPT about their health every week.
“This was one of the biggest use cases for ChatGPT,” Andrew Bracken, a partner at Gradient who invests in health technology, told TechCrunch. “So it makes a lot of sense that they would want to create a more private, secure, and optimized version of ChatGPT for these healthcare issues.”
AI chatbots face an ongoing problem with hallucinations, a particularly sensitive issue in healthcare. According to Vectara’s hallucination evaluation leaderboard, OpenAI’s GPT-5 is more susceptible to hallucinations than many Google and Anthropic models. But AI companies see the potential to correct these shortcomings in healthcare (Anthropic also announced a health product this week).
For Dr. Nigam Shah, professor of medicine at Stanford University and chief data scientist at Stanford Healthcare, the inability of American patients to access care is more pressing than the threat of ChatGPT distributing bad advice.
“Right now, you go to any health system and you want to see your primary care doctor — the wait time will be three to six months,” Dr. Shah said. “If your choice was to wait six months to find a real doctor, or to talk to someone who is not a doctor but can do some things for you, which would you choose?”
Dr. Shah believes that the most obvious path to introducing AI into healthcare systems comes from the provider side, not the patient side.
Medical journals have often reported that administrative tasks can consume about half of a primary care physician’s time, reducing the number of patients they can see on a given day. If this type of work could be automated, doctors would be able to see more patients, perhaps reducing the need for people to use tools like ChatGPT Health without additional input from a real doctor.
Dr. Shah leads a team at Stanford University developing ChatEHR, software integrated into the electronic health record (EHR) system that allows clinicians to interact with a patient’s medical records in a simpler and more efficient way.
“Making the electronic medical record more user-friendly means that doctors can spend less time searching every nook and cranny of it to get the information they need,” said Dr. Sneha Jain, who tested ChatEHR early on, in a Stanford Medicine article. “ChatEHR can help them get this information upfront so they can spend time on what matters — talking to patients and finding out what’s going on.”
Anthropic is also working on AI products aimed at doctors and insurance companies, rather than just its public-facing chatbot, Claude. This week, Anthropic announced Claude for Healthcare, explaining how it will be used to reduce time spent on tedious administrative tasks, such as filing prior authorization requests with insurance providers.
“Some of you are seeing hundreds, even thousands, of these prior authorization cases weekly,” said Mike Krieger, Anthropic’s chief product officer, in a recent presentation at the JPMorgan Healthcare Conference. “So imagine cutting 20 or 30 minutes off of each one — it’s a huge time saver.”
As AI and medicine become increasingly intertwined, there is an inevitable tension between the two worlds – a doctor’s primary motivation is to help their patients, while technology companies are ultimately accountable to their shareholders, even if their intentions are noble.
“I think that tension is important,” Dr. Bari said. “Patients depend on us to be skeptical and conservative in order to protect them.”
🕒 **Posted on**: January 14, 2026
