Emerging Cyber Threats to AI-Based Diagnostics and Clinical Decision Support Tools

The following is a guest article by Ed Gaudet, Founder and CEO at Censinet
As hyperbolic words go, transformation ranks near the top of the list. Yet, when something is truly transformative, it’s undeniable. And that is exactly what we have been witnessing with the use of artificial intelligence (AI) within the healthcare industry: a true digital transformation revolution.
With the AI healthcare market valued at $26.69 billion in 2024 and projected to exceed $600 billion by 2034, this transformation is not only reducing operational friction and administrative burden across healthcare organizations; more importantly, it has the potential to improve patient outcomes through better diagnostics and clinical decision support.
However, this exciting transformation comes at a cost: increased cybersecurity risks — many of which healthcare professionals are not yet prepared to handle.
How AI Diagnostics and CDS Tools Could Be Targets
Before AI, cybersecurity for traditional diagnostic and clinical decision support (CDS) systems focused on protecting patient data. As AI-based systems become increasingly involved in interpreting data for care-related decisions, however, the stakes have changed: cyberattacks on these systems no longer mean only the potential loss of data; they can mean direct harm to the patient. Some of the techniques employed by bad actors include:
- Model Manipulation: In adversarial attacks, actors make small but targeted changes to input data that cause the model to produce the wrong output; for example, a malignant tumor may be classified as benign, with catastrophic consequences (a minimal sketch of such a perturbation follows this list)
- Data Poisoning: Attackers who gain access to the training data used for AI model development can corrupt it, leading the model to produce harmful or unsafe medical recommendations
- Model Theft and Reverse Engineering: Attackers can steal AI models outright or reconstruct them through systematic probing, allowing them to expose a model's weaknesses, build malicious variants, or replicate it for their own use
- Fake Inputs and Deepfakes: Injecting fabricated patient information, manipulated medical records, or doctored imaging results into clinical systems can lead to misdiagnosis and inappropriate treatment
- Operational Disruptions: Medical institutions use AI systems to make operational decisions, such as ICU triage; disabling or corrupting these systems creates serious operational disruptions that put patients at risk and cause critical delays throughout entire hospitals
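To make the model-manipulation threat above concrete, the following is a minimal Python sketch of the fast gradient sign method (FGSM), one of the simplest and best-known adversarial techniques. The toy classifier, placeholder scan, and epsilon value are illustrative assumptions, not a real diagnostic system.

```python
# A minimal FGSM sketch, assuming a PyTorch classifier. The model and
# data below are placeholders standing in for a diagnostic system.
import torch
import torch.nn as nn

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` perturbed in the direction that most
    increases the model's loss; the change is often imperceptible to a
    human reader yet enough to flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step by epsilon in the sign of the loss gradient (the FGSM step).
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage: a toy two-class model (e.g., benign vs. malignant).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))
scan = torch.rand(1, 1, 28, 28)   # placeholder "scan", values in [0, 1]
label = torch.tensor([1])         # placeholder ground-truth class
adversarial_scan = fgsm_perturb(model, scan, label)
```

The point is not the specific method but the asymmetry: a perturbation this small is trivial for an attacker with model access to compute, yet nearly impossible for a clinician to spot by eye.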
Why the Risk is Unique in Healthcare
A mistake in healthcare can easily mean the difference between life and death. A wrong diagnosis caused by a corrupted AI tool is therefore more than a financial liability; it is an immediate threat to people's lives. Furthermore, recognizing a cyberattack can take time, but the compromise of an AI tool can be instantly detrimental if clinicians use faulty information to make treatment decisions for their patients. Unfortunately, securing an AI system in this industry is extremely hard due to legacy infrastructure and limited resources, not to mention the complex vendor ecosystem.
What Healthcare Leaders Must Do Now
It is critical that leaders in the industry consider this threat carefully and prepare accordingly. Data is not the only asset that requires heavy protection; AI models, training processes, and the entire surrounding ecosystem need protecting as well.
Here are key steps to consider:
- Conduct Comprehensive AI Risk Assessments: Perform thorough security evaluations before implementing any AI-based diagnostic or CDS tool to understand its risks and vulnerabilities, and plan for extended downtime in these systems
- Implement AI-Specific Cybersecurity Controls: Adopt cybersecurity practices designed for AI systems, including adversarial attack monitoring, model output validation, and secure procedures for algorithm updates (a sketch of a simple output-validation check follows this list)
- Secure the Supply Chain: Require third-party vendors to provide detailed information about model security, training data, and update procedures; research by the Ponemon Institute has found that third-party vendor vulnerabilities account for 59% of healthcare breaches, so healthcare organizations must ensure that risk-focused contractual language enforces explicit cybersecurity measures for AI technologies
- Train Clinical and IT Staff on AI Risks: Both clinical personnel and IT staff need thorough training on approved use cases and the particular security weaknesses of AI systems, including how to recognize irregularities in AI output that may indicate cyber manipulation or model hallucinations
- Advocate for Standards and Collaboration: Healthcare organizations should advocate for rigorous AI-specific standards and regulations, as well as collaborate and share identified vulnerabilities in AI technologies; the Health Sector Coordinating Council and HHS 405(d) program provide essential foundations, yet additional measures are necessary
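As one illustration of the output-validation control in the second item above, here is a minimal Python sketch of a sanity check that routes out-of-range predictions to human review. The thresholds and the probability-vector interface are assumptions for illustration, not any particular vendor's behavior.

```python
# A minimal output-validation sketch: flag predictions whose class
# probabilities look malformed or whose confidence falls outside the
# range observed during validation. Thresholds here are assumed values.
import numpy as np

LOW_CONFIDENCE = 0.60    # assumed floor from validation data
HIGH_CONFIDENCE = 0.999  # suspiciously certain outputs also warrant review

def validate_output(probabilities: np.ndarray) -> bool:
    """Return True if the prediction passes basic sanity checks."""
    if not np.isclose(probabilities.sum(), 1.0, atol=1e-3):
        return False  # malformed distribution; possible tampering upstream
    top = float(np.max(probabilities))
    if top < LOW_CONFIDENCE or top > HIGH_CONFIDENCE:
        return False  # out-of-range confidence; route to human review
    return True

# Hypothetical usage with a model's class probabilities for one scan:
probs = np.array([0.02, 0.97, 0.01])
if not validate_output(probs):
    print("Prediction flagged for clinician review")
```

Checks like this will not stop a determined attacker on their own, but they create a tripwire that keeps a silently compromised model from acting autonomously.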
The Future of AI in Healthcare Depends on Trust
AI has significant potential to transform care delivery and hospital operations; however, if cyber threats compromise these advancements, trust among clinicians and patients can quickly erode—jeopardizing not only adoption but patient safety itself.
Security must be embedded at every stage of AI development and implementation—it is not only a clinical and operational imperative but a moral one. Healthcare leaders have a responsibility to safeguard AI-driven diagnostics and clinical decision support tools with the same rigor applied to other critical systems. The future of healthcare innovation depends on trust as its foundation. Without secure, reliable AI systems that enhance clinical performance, we cannot earn or sustain that trust.
About Ed Gaudet
Ed Gaudet is the Founder and CEO at Censinet, with over 25 years of leadership in software innovation, marketing, and sales across startups and public companies. Formerly CMO and GM at Imprivata, he led its expansion into healthcare and launched the award-winning Cortext platform. Ed holds multiple patents in authentication, rights management, and security, and serves on the HHS 405(d) Cybersecurity Working Group and several Health Sector Coordinating Council task forces.