The AI War on Healthcare

OpenAI's crusade to undermine trust in our doctors, scientists and experts.

Forces at Work - Designed and printed by Bart Fish

OpenAI recently released ChatGPT-5 and the top feature they're touting is health advice. They're so keen on this "capability" that they're openly undermining the expertise of doctors and attempting to position genAI as a more reliable health advisor. Sam Altman has publicly stated that this latest model is "the best model ever for health."

Fidji Simo, an OpenAI board member, shared a story about an OpenAI employee using ChatGPT-5 to decide whether to pursue radiation for his wife's cancer diagnosis. There's no caveat that ChatGPT-5 could be wrong or that it's not a replacement for an actual doctor's advice. No, Fidji frames the story as a proof point that ChatGPT-5 should be trusted more than actual doctors. That the model can give certainty when doctors cannot. Obviously this is marketing, but it's a kind of marketing that is insidious and borderline fraudulent.

Post by an OpenAI board member touting ChatGPT-5's expertise over doctors.

Felipe, the OpenAI employee whose story Fidji shares, also offers no warnings and advises no caution of any kind. Even as he "fact-checked each study," he never says whether he's qualified to evaluate those studies, or even who "we" is.

Ok, so OpenAI just happens to have an employee testimonial that is relevant to a product release that heavily markets its "health advice" capability. Except they have two. Kate Rouch, OpenAI's Chief Marketing Officer (CMO), was diagnosed with breast cancer shortly after joining OpenAI in December 2024. Kate has shared how ChatGPT helped her navigate the diagnosis, detailing how she used it to explain cancer in an age-appropriate manner to her kids, manage chemo side effects and create custom meditations.

Having two highly relevant cancer stories from OpenAI employees is quite the boon when you're releasing the "best model ever for health."

Elon Musk has of course chimed in, because everyone is dying to hear his hyperbolic take on AI and the future. Surprise: he believes AI is already better than most doctors. He parrots the AI industry's favorite marketing line, that AI will replace ALL jobs, even his own. Unsurprisingly, Grok does a pretty good imitation of Musk, spreading misinformation, white supremacist views and 4chan-style memes.

I could write an entire piece on the AI hype engine: how LinkedIn has played a large role in amplifying AI influencers and content and (big surprise) how its parent company Microsoft greatly benefits from this. How big names like Reid Hoffman (founder of LinkedIn), Eric Schmidt (former CEO of Google) and many others have massive investments in AI companies. Remember when influencers had to identify posts as advertising? Or when journalists would regularly point out conflicts of interest and question the trustworthiness of these tech leaders?

Back to OpenAI's ChatGPT-5 release: let's dig into some of their claims related to health advice. One major marketing bullet was that ChatGPT-5 scores higher than any previous model on HealthBench, an evaluation that, wait for it... OpenAI created. That's right, ChatGPT-5 scores the highest against the test OpenAI created itself.

HealthBench is based on synthetic data, and an arguably small dataset at that. It's built on 5,000 interactions, 60% of which are single-turn questions, meaning a user asked a chatbot one question with no follow-ups. This is all good and fine (is it?), but surely people aren't actually using ChatGPT-5 for medical advice?

Man sought diet advice from ChatGPT and ended up with 'bromide intoxication'

ChatGPT Is Telling People With Psychiatric Problems to Go Off Their Meds

ChatGPT Is Giving Teens Advice on Hiding Their Eating Disorders

Doctors Horrified After Google's Healthcare AI Makes Up a Body Part That Does Not Exist in Humans

CCDH's new research shows that ChatGPT is betraying young people by generating dangerous advice about self-harm and suicide, eating disorders and substance abuse.

Despite Sam Altman and members of OpenAI's board publicly praising ChatGPT-5's ability to give health advice, what statements does the company give when that advice goes terribly wrong? They direct the media to the company's service terms that state that its services are not intended for use in the diagnosis or treatment of any health condition and their terms of use which state, "You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice."

Legally they have CYAs (cover your ass) in place while prominent leaders in the company publicly make misleading and arguably borderline fraudulent claims that could endanger people's lives.

A recent study by McKinsey and Harvard researchers projects the U.S. could save up to $360 billion annually by implementing AI across our healthcare system. The paper also cites additional potential benefits like better healthcare quality, increased access to care and better patient and doctor satisfaction. It's worth noting that McKinsey has a reputation for making questionable recommendations, from being possibly the single greatest legitimizer of mass layoffs in history, to playing a large role in boosting opioid sales and fueling the opioid epidemic.

How would healthcare companies reap those savings? By ransacking hospitals, laying off hundreds of thousands of healthcare workers and replacing them with chatbot care. At this point, many people have become almost immune to caring about layoffs of human workers. It's tossed aside casually as a consequence of innovation, a necessary and expected side effect. Very few people ask if this consequence is really a side effect or if it is a feature, one aimed at removing accountability and transparency for AI companies.

What other side effects could we see if genAI was aggressively adopted across our healthcare system? These are hypothetical consequences and certainly not comprehensive:

  1. LLMs hallucinate and make bad recommendations that cost human lives.
  2. Human-led healthcare becomes exponentially more expensive. This might mean the rich have access to human doctors and care, while the masses must settle for chatbot care.
  3. Political bias is forced into LLMs, reinforcing dangerous ideas like the claim that vaccines cause autism.
  4. Bias is weaponized to provide poorer quality care for minorities.
  5. Healthcare deteriorates due to declining numbers of new doctors, researchers, nurses and other critical healthcare experts.
  6. Mental health issues skyrocket with more people seeing genAI as a legitimate source of advice, therapy and diagnosis.

One of the most common arguments people make in defense of deploying AI aggressively into critical industries is that people make mistakes too. Of course this is true, but those people face real consequences like malpractice suits and having their medical licenses stripped. What makes mass adoption of AI dangerous today is that technocrats have brokered a deal with the U.S. government to completely avoid regulation or any kind of accountability. The checks and balances in our government that are intended to protect its citizens have been rigged in favor of AI companies.

What's worse is that this lack of accountability isn't a bug, it's a feature. One that healthcare institutions will likely embrace completely, because it allows them to greatly increase their profit margins while absolving themselves of any liability.

Meanwhile, Sam Altman continues to make grandiose claims about how ChatGPT will solve cancer(s) in the future. Surely journalists challenge these claims, right? It's their job to point out that ChatGPT has not made any significant health-related discoveries, that it has not cured any diseases, and that there is really no reason to believe it will in the future except deep-seated techno-optimism. These so-called journalists do not call bullshit when Altman makes these claims. They sit back big-eyed, nodding along enthusiastically, while they platform his outlandish promises, which amount to sci-fi fantasies.

The healthcare industry is not unique in OpenAI's colonialist attempts to take over major industries in America and across the world. OpenAI is aggressively attacking higher ed institutions, targeting students, teachers and our entire education ecosystem. This is despite a deluge of studies that raise very valid concerns about both the short- and long-term effects of genAI use in educational settings. OpenAI has created a reality distortion field, fueled by an enormous amount of hype and hundreds of billions of investment dollars.

This war on critical industries like healthcare, higher ed and creative fields mirrors a political war raging at the same time. It's a gutting of our experts, of public trust and of the educated class. This is not a coincidence. It's a partnership between the American government and the technocratic billionaires. They have similar goals: a shared disdain for human rights, regulation and constitutional law. This so-called AI era is not about innovation, it's a power grab.