
January 21, 2026


ChatGPT Health Marks a Turning Point in Patient Expectations, Forcing Hard Choices for Innovators

Cure, Google Gemini

Overview

As patients turn to AI for health answers, founders and healthcare builders are facing tough decisions around trust, regulation, escalation to human care, and how quickly to move.

On January 7, OpenAI launched ChatGPT Health, a new feature within its existing platform. The tool, currently available to a limited group of users pending broader rollout, allows them to link personal medical records and data from wellness apps, including information collected from devices like the Apple Watch. Based on that data, ChatGPT Health generates personalized responses to users’ health and wellness questions. 

OpenAI says the feature reflects a clear pattern in how people already use the product. The company reports that roughly one in four of its more than 800 million regular ChatGPT users submits a healthcare-related prompt each week, and that more than 40 million turn to the platform daily with health questions.

“ChatGPT recently published a report capturing trends across a large number of healthcare-related conversations from the general public and healthcare professionals, showing the value large language models can bring to navigating the healthcare system and managing your own or someone else’s health,” says Ashley Nicodemus, principal product designer at UEGroup.

OpenAI has emphasized that ChatGPT Health is not intended to diagnose conditions or recommend treatment, and that it should not replace professional medical care.

“LLMs do not change users’ expectations of care but instead provide a tool for advocacy,” Nicodemus says. “They allow people to navigate information in a more natural way and process it in their own time, which makes them valuable for patients and caregivers. Creating dedicated systems like ChatGPT Health adds more security to an existing behavior pattern and addresses a gap in our complex healthcare system.”

AI’s Impact on Patient Behavior and Care Delivery

ChatGPT Health reflects a broader shift in patient behavior, from passive recipients of care to what Sarah Matt, MD, MBA, a health technology strategist and author of The Borderless Healthcare Revolution, describes as “CEOs of their own health.”

Patients, she says, will no longer be willing to tolerate dense medical jargon or wait days for responses through patient portals.

“They will expect the same on-demand, plain-language transparency from their doctor that they get from their bank or Amazon,” Matt says. “The expectation is shifting from simply having access to data to having an immediate, working understanding of it.”

For care delivery, she adds, tools like ChatGPT Health loosen the hospital’s grip as the primary place where a person’s health information lives.

“The innovation roadmap will pivot toward interoperability, connecting these AI ‘brains’ to the actual medical records so the guidance is personalized rather than generic,” Matt says.
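In practice, connecting an AI "brain" to the actual medical record is an interoperability problem. As a rough illustration only (the endpoint, authentication, and field handling below are assumptions for the sake of example, not a description of how ChatGPT Health or any vendor actually works), a personalization layer might pull a patient's recent results over a standard FHIR API and flatten them into plain-language context the model can ground its answer in:

    import requests

    FHIR_BASE = "https://fhir.example.org"  # hypothetical FHIR R4 endpoint

    def fetch_recent_observations(patient_id: str, token: str) -> list[dict]:
        """Pull a patient's most recent lab results from a FHIR server."""
        resp = requests.get(
            f"{FHIR_BASE}/Observation",
            params={"patient": patient_id, "category": "laboratory",
                    "_sort": "-date", "_count": 10},
            headers={"Authorization": f"Bearer {token}",
                     "Accept": "application/fhir+json"},
            timeout=10,
        )
        resp.raise_for_status()
        return [entry["resource"] for entry in resp.json().get("entry", [])]

    def build_prompt_context(observations: list[dict]) -> str:
        """Flatten FHIR Observation resources into plain-language lines."""
        lines = []
        for obs in observations:
            name = obs.get("code", {}).get("text", "unknown test")
            value = obs.get("valueQuantity", {})
            lines.append(f"{name}: {value.get('value', '?')} {value.get('unit', '')}".strip())
        return "\n".join(lines)

The point of a layer like this is the one Matt makes: the guidance becomes personalized because it is anchored to the record, rather than generic health information.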

Healthcare and Consumer AI Behave Differently

The consequences of an error in healthcare AI are different from those in consumer-facing tools, Matt says.

“The cost of an AI hallucination in healthcare isn’t just misinformation. It’s patient safety,” she says. “Consumer AI is built for creativity and speed. Healthcare AI has to be built for accuracy and empathy.”

That distinction has pushed healthcare AI into more tightly controlled environments, with higher guardrails and clearer boundaries. These systems, she says, are increasingly expected to act as intermediaries between patients and complex medical data, rather than as general-purpose content generators.

At a broader level, Eric Fish, JD, a partner at Hooper Lundy and former chief legal officer for the Federation of State Medical Boards, says healthcare AI differs from consumer AI because of the scale of its potential impact.

“Even in its early stages, healthcare AI shows promise of driving real change in how healthcare operates,” he says, noting that the industry accounts for nearly one-fifth of U.S. GDP.

Rather than simply improving workflows, Fish says, the technology could reshape how care is delivered and personalized.

“I’m optimistic that if regulatory challenges are navigated carefully in these early days, and trust is established, we could see a new paradigm emerge.”

Limits and Ethical Issues

Despite its promise, ChatGPT Health has clear limitations. One is the absence of human interaction.

“AI can summarize a chart,” Matt says. “But today it cannot hold a hand or read nonverbal cues of distress.”

Access is another concern. The digital divide, particularly around broadband and device access, could widen existing disparities. An analysis of telehealth use among Medicare beneficiaries, for example, found that Black, Hispanic, and Asian patients used telehealth at lower rates than White patients.

“We can’t allow these tools to create a new border where only the tech-savvy receive AI-assisted care while others are left behind,” Matt says.

Fish describes products like ChatGPT Health as cautious entries into a legally complex space, ones that will test existing regulatory frameworks.

“The success of OpenAI, Anthropic, and others in navigating these frameworks and earning trust from users and regulators will be critical to broader adoption and more innovative use cases,” he says.

Any AI system handling patient data must comply with HIPAA requirements governing protected health information. That creates challenges for models that learn from user interactions or store conversation histories.

“The tension between AI’s appetite for data and privacy protections is fundamental,” Fish says. “Long-term success will depend on whether companies can maintain strong data governance without undermining the tool’s usefulness.”
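The data-governance principle Fish describes can be sketched concretely. Purely as an illustration of the idea (this reflects nothing about OpenAI's actual pipeline, and simple pattern matching alone would not satisfy HIPAA de-identification standards, which cover 18 categories of identifiers), one minimal design is to scrub obvious identifiers before a conversation turn is ever written to storage:

    import re

    # Illustrative patterns only; production de-identification would use a
    # vetted tool and cover all HIPAA Safe Harbor identifier types.
    REDACTIONS = {
        r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",
        r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b": "[PHONE]",
        r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[EMAIL]",
        r"\b\d{1,2}/\d{1,2}/\d{2,4}\b": "[DATE]",
    }

    def scrub(text: str) -> str:
        """Replace obvious identifiers before a turn is persisted."""
        for pattern, label in REDACTIONS.items():
            text = re.sub(pattern, label, text)
        return text

    def persist_turn(store: list[dict], role: str, content: str) -> None:
        """Store only scrubbed text; raw input is never written to history."""
        store.append({"role": role, "content": scrub(content)})

The trade-off is exactly the tension Fish names: the more aggressively a system strips detail from stored conversations, the less useful those histories become for personalization and model improvement.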

The FDA’s framework for software as a medical device will also shape what these systems can do, he adds.

“If an AI system is used for diagnosis, treatment recommendations, or clinical decision support, it likely falls under FDA oversight,” Fish says. “Recent guidance issued just days before the launch of ChatGPT Health offers some clarity, but agentic models will increasingly blur the line between education and clinical direction.”

Sergei Polevikov, ABD, MBA, author of Advancing AI in Healthcare, raises additional questions about how patient data will ultimately be used.

“Even if OpenAI complies with current laws and promises not to use customer data, there’s no guarantee that business models won’t change,” he says, pointing to companies like 23andMe and Oura, which shifted how consumer health data was used as their business strategies changed.

He also notes that OpenAI relies on third parties, including b.well, to access and transfer patient data. “Even if OpenAI is compliant, who is ensuring that every third party is as well?” 

Regulatory Concerns

Even as patient-facing tools move quickly into clinical-adjacent use, clinician skepticism remains high. A study by QuestionPro, surveying 500 physicians across five specialties, found that doctors emphasized the need for medical AI to be traceable, evidence-based, and safe, particularly when it influences diagnoses, medications, or clinical decisions.

The survey highlighted uncertainty around several unresolved issues, including:

  • Who is responsible for errors

  • How patient data is used

  • What qualifies as medical advice versus education

  • Where AI support ends and clinical judgment begins

As tools like ChatGPT Health scale, the study suggested regulation will need to move toward human-in-the-loop governance and safety-first deployment.

Matt points to recent state-level efforts to draw clearer boundaries.

“California’s new AB 489 law, which took effect in January 2026, prohibits AI from using terms or designs that imply it is a licensed medical professional,” she says. “While many platforms operate as ‘information tools’ to avoid FDA classification, states are increasingly stepping in to prevent AI from impersonating doctors.”

Several states passed laws last year requiring licensed practitioners to sign off on AI-generated treatment plans, Fish says. Others restrict platforms from using titles like “Dr.” as a consumer safety measure.

What remains unresolved, he adds, is whether AI can be said to practice medicine when used directly by patients without clinical oversight.

“The prevailing view is that AI is a tool clinicians must use in line with their ethical obligations,” Fish says. “But regulators may need new frameworks to address patient-facing, agentic systems that don’t fit neatly into traditional definitions of medical practice.”

What Innovators Should Consider

With regulatory standards still evolving, Fish says developers should consider building escalation pathways that bring in human clinicians when certain thresholds are crossed.

“In areas like mental health, where LLMs are already acting as surrogate caregivers, developers could rely on existing telemedicine standards to transition users into live clinical care,” he says. With the right legal agreements in place, the handoff could be seamless.

Such designs may also reduce legal risk, he adds, by clearly defining where an AI system’s role ends and a clinician’s responsibility begins.
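As a rough sketch of the kind of escalation pathway Fish describes (the keywords, risk threshold, and handoff target here are assumptions chosen for illustration, not anything OpenAI or any vendor has published), a patient-facing system might gate each turn through a simple triage check before the model responds, and route high-risk conversations into an existing telehealth intake:

    from dataclasses import dataclass

    # Hypothetical triggers; real thresholds would be set with clinical input
    # and validated for the specialty (e.g., crisis language in mental health).
    ESCALATION_KEYWORDS = {"chest pain", "suicidal", "overdose", "can't breathe"}
    RISK_SCORE_THRESHOLD = 0.8

    @dataclass
    class TriageResult:
        escalate: bool
        reason: str

    def triage(user_message: str, model_risk_score: float) -> TriageResult:
        """Decide whether to route the conversation to a live clinician."""
        text = user_message.lower()
        for phrase in ESCALATION_KEYWORDS:
            if phrase in text:
                return TriageResult(True, f"keyword match: '{phrase}'")
        if model_risk_score >= RISK_SCORE_THRESHOLD:
            return TriageResult(True, f"risk score {model_risk_score:.2f} above threshold")
        return TriageResult(False, "continue AI-assisted self-service")

    def handle_turn(user_message: str, model_risk_score: float) -> str:
        result = triage(user_message, model_risk_score)
        if result.escalate:
            # Hand off to a telehealth workflow (hypothetical), logging the
            # reason so the boundary of responsibility is auditable.
            return f"ESCALATE -> telehealth intake ({result.reason})"
        return "CONTINUE -> AI response with educational framing"

Logging the escalation reason at the handoff is what makes the boundary auditable, which is the legal-risk point above: it documents where the AI system's role ended and a clinician's responsibility began.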

Fish also points to the growing use of regulatory sandboxes for high-stakes AI. States including Utah, Texas, Arizona, and Delaware have launched sandbox frameworks, with Utah recently announcing a pilot with Doctronic to explore AI-facilitated prescription refills.

At the federal level, Senator Ted Cruz introduced the SANDBOX Act in September 2025, proposing a program that would allow AI developers to receive temporary waivers from certain regulations to test new technologies.

“Greater collaboration between regulators and industry could help clarify both the benefits and limits of these tools,” Fish says. “That kind of cooperation may prevent reactionary policies that either halt innovation entirely or allow it to run ahead of accountability.”

