
April 2, 2025


How Cybercriminals Increasingly Exploit Healthcare’s ChatGPT

Overview

Healthcare’s growing reliance on generative AI has opened the door to new cybersecurity risks that are easy to miss until it’s too late.

Health systems now face AI-enabled threats from both inside and outside their firewalls

Artificial intelligence is one of the fastest-growing technologies in healthcare, and one that cybercriminals are exploiting to sneak into computer systems, steal information and demand ransoms.

Cybersecurity firm Veriti recently identified thousands of exploitation attempts in a single week against a year-old vulnerability in ChatGPT that has become a backdoor for cyberattacks targeting hospitals and health systems.

The flaw is being used to exploit gaps in AI security infrastructure, including misconfigured firewalls and intrusion prevention systems. Although rated only medium severity, it has already been weaponized in real-world attacks, the firm said.

“This research highlights a crucial takeaway: No vulnerability is too small to matter, attackers will exploit any weakness they can find,” Veriti wrote in a blog post.

The American Hospital Association (AHA) issued a warning that healthcare institutions could face data breaches, regulatory fines and reputational harm if AI-related vulnerabilities go unpatched.

“This could allow an attacker to steal sensitive data or impact the availability of the AI tool,” said Scott Gee, AHA Deputy National Advisor for Cybersecurity and Risk, in a statement. “The fact that the vulnerability is a year old and a proof of concept for exploitation has been published for some time is also a good reminder of the importance of timely patching of software.”

Healthcare remained the most expensive sector for data breaches in 2024, with an average cost of $9.77 million—roughly double the global average of $4.88 million, according to a report by IBM. Cyberattacks, particularly ransomware, have surged across the industry, with more than 1,600 attacks per week globally, according to a KnowBe4 report.

Change Healthcare Cyberattack: $6.3 Billion Loss

A major recent example involved Change Healthcare, which experienced a devastating ransomware attack in February 2024 that severely disrupted healthcare services nationwide. The cyberattack affected the health data of approximately 100 million Americans and impacted virtually every hospital in the country.

According to an AHA survey, 74 percent of hospitals experienced direct impacts on patient care, including critical delays in medical treatments. Financially, the attack caused hospital claims to drop by $6.3 billion within the first three weeks alone.

“The Change Healthcare cyberattack was the most consequential and debilitating cyberattack on health care in the history of the U.S.,” the hospital association said in a report on the issue. “The cyberattack made it clear that cybercriminals are seeking to maximize disruption to care delivery — by targeting mission-critical service providers and suppliers.”

Healthcare Cyberattacks Impact Patient Safety and Public Health

Criminals are not only stealing personal health information for financial fraud but also manipulating data or disrupting clinical research and care operations, according to the World Economic Forum. The organization’s 2025 cybersecurity outlook highlights a rising wave of AI-driven threats, calling out generative AI as a tool that accelerates everything from malware creation to social engineering. These threats have serious implications for patient safety and public health.

“As genomics continues to evolve as a critical field, securing sensitive biological data, the interconnected systems and the users becomes essential,” said Hoda Al Khazimi, PhD, Director, Center for Cybersecurity, New York University Abu Dhabi, in the report. “The protection of bioinformatics platforms, along with the prevention of misuse in biotechnical applications, is vital.”

Mitigating the Dual-Use Dilemma of AI in Healthcare

One of the central tensions in healthcare’s adoption of generative AI is that the same tool that boosts productivity can also amplify harm. It’s a dual-use dilemma: the ability to streamline chart documentation or summarize clinical research is real, as is the risk of introducing hallucinated facts or regulatory violations.

That productivity pull has led healthcare workers to feed AI chatbots sensitive material, sometimes including protected health information. Employees have used large language models (LLMs) to draft patient letters, rewrite lab protocols and even troubleshoot hospital software. That has made generative AI the latest “shadow IT” risk: an unapproved, often invisible layer of technology operating outside institutional controls.
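One place institutions can start is simply flagging obvious identifiers before a prompt leaves the network. The sketch below is purely illustrative (the patterns and names are this article's assumptions, not any vendor's product), and real PHI detection under HIPAA requires far broader coverage than a few regexes:

```python
import re

# Illustrative patterns only -- genuine PHI detection must cover all
# HIPAA Safe Harbor identifiers (names, addresses, dates, and more).
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def flag_phi(prompt: str) -> list[str]:
    """Return the names of the PHI patterns found in an outbound prompt."""
    return [name for name, pattern in PHI_PATTERNS.items()
            if pattern.search(prompt)]
```

A prompt such as "Draft a letter for patient MRN: 00123456, SSN 123-45-6789" would be flagged for both identifiers, while a request to summarize a published abstract would pass clean, giving security teams a low-cost signal about where shadow AI use is leaking data.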

From Policy to Practice

Hospitals are beginning to respond. Some are drafting internal AI usage policies that restrict which tools can be used and what data can be input. Others are turning to “private LLMs” — internally hosted versions of ChatGPT or open-source models — to regain control over data privacy and auditability.
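A policy like the ones described above can be enforced in code with a thin gateway that redacts identifiers and refuses to route prompts to unsanctioned models. A minimal sketch, in which the approved model IDs, endpoint and redaction patterns are all hypothetical:

```python
import re

APPROVED_MODELS = {"internal-llm-v1"}  # hypothetical sanctioned model IDs
INTERNAL_ENDPOINT = "https://llm.example.internal/v1/chat"  # hypothetical

# Substitute placeholder tokens for a few obvious identifiers.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE), "[MRN]"),
]

def redact(text: str) -> str:
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

def route_prompt(model: str, prompt: str) -> dict:
    """Build a redacted payload, rejecting models not on the approved list."""
    if model not in APPROVED_MODELS:
        raise ValueError(f"model {model!r} is not on the sanctioned list")
    # A real gateway would POST this payload to INTERNAL_ENDPOINT and log
    # the request for auditability; this sketch just returns it.
    return {"model": model, "prompt": redact(prompt)}
```

The design choice mirrors the enablement argument below: rather than blocking AI outright, the gateway gives staff a sanctioned path while keeping raw identifiers off external services and producing an audit trail.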

“Instead of outright bans, organizations must shift their focus to enablement,” according to a post by ClearDATA, a healthcare-focused security firm. “Just as enterprise IT departments eventually embraced the cloud by offering secure, sanctioned solutions, healthcare organizations must do the same with LLMs. The goal should not be to prevent employees from using AI but to provide them with safe, compliant alternatives.”
