Addressing Ethical Considerations of Implementing AI Solutions in Regard to Patient Data Privacy and Decision Making
The ethical concerns of artificial intelligence have been a recurring theme in science fiction, with the most famous example being Isaac Asimov's Three Laws of Robotics. Defining the rules and exploring the consequences of creating an AI so sophisticated that it operates on the same level as humans is crucial world-building for these authors. It is a fascinating theme in fiction that has prompted real-life ethical discussions about autonomy, what it means to be alive, and our relationship with technology. While we are not yet at the level of creating truly sentient artificial intelligence, it is vital that we think through the ethics we need to have in place for our AI applications.
We reached out to our brilliant Healthcare IT Today Community to ask — What ethical considerations should be addressed when implementing AI solutions in healthcare, particularly in areas such as patient data privacy and decision-making? The following are their answers.
Dr. Bruce Lieberthal, Chief Innovation Officer at Henry Schein, Inc.
The most important ethical consideration when utilizing AI in healthcare is to vet the algorithms and learning engines to ensure that the AI is predictable, precise, repeatable, and correct. Additionally, institutions that utilize AI need to make sure it is used for good and protect their systems so that the real threat of AI breaches exposing private data is minimized.
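As a rough illustration of what that vetting might look like in practice, here is a minimal sketch of a repeatability and accuracy check. The `predict` callable and the labeled validation set are hypothetical placeholders, not any vendor's actual interface:

```python
# Hypothetical vetting harness: checks that a model is repeatable
# (the same input always yields the same output) and that it clears
# a minimum accuracy bar on a held-out, labeled validation set.
# Outputs are assumed to be hashable labels (e.g., strings).

def vet_model(predict, validation_set, min_accuracy=0.95, runs=3):
    # Repeatability: identical inputs must yield identical outputs.
    for features, _ in validation_set:
        outputs = {predict(features) for _ in range(runs)}
        if len(outputs) != 1:
            raise AssertionError(f"Non-deterministic output for input: {features}")

    # Correctness: accuracy on labeled cases must clear the threshold.
    correct = sum(1 for features, label in validation_set if predict(features) == label)
    accuracy = correct / len(validation_set)
    if accuracy < min_accuracy:
        raise AssertionError(f"Accuracy {accuracy:.2%} below threshold {min_accuracy:.2%}")
    return accuracy
```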
Mark Thomas, Chief Technology Officer at MRO Corp
Implementing AI in healthcare demands a commitment to ethics, particularly in safeguarding patient data privacy and ensuring responsible decision-making. At MRO, we prioritize transparency and ensure that AI systems are explainable to both clients and patients. We also use a rigorous data governance program to protect sensitive information while adhering to regulations like HIPAA. Not only should all healthcare organizations have a rigorous governance program in place, but they should also ensure their business partners do as well. Ethical AI isn’t just a guideline—it’s a cornerstone for building trust and delivering better outcomes in healthcare.
David J. Sand, MD, MBA, Chief Medical Officer at ZeOmega
Most critically, transparency and honesty should play an overarching role in our AI decisions. AI has shown toxicity and bias and can hallucinate, drift, and develop thought bubbles even when the data sources are curated, and most certainly when they are not. Users, particularly vulnerable patients, must be clearly informed that they are interacting with a machine – a sophisticated machine, but a machine nonetheless – lacking emotions and values. We must also remember that once we introduce our own data, it becomes part of the AI universe, and AI cannot forget.
Rick Stevens, Chief Technology Officer at Vispa
Healthcare providers must exercise extreme caution to avoid sending Protected Health Information (PHI) to public generative AI services, such as OpenAI’s API or the ChatGPT interface, as these platforms often retain and use submitted data to train models unless explicitly configured otherwise. This practice could inadvertently expose sensitive patient information, resulting in HIPAA violations. Providers should implement strict policies to prevent PHI from being shared with such systems and educate staff about these risks.
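To make that concrete, one hedged sketch of such a policy control is a pre-flight guard that blocks obviously PHI-like text before it can reach an external API. The regex patterns below are illustrative placeholders, not a complete PHI detector; real de-identification should rely on dedicated tooling:

```python
import re

# Illustrative only: a crude pre-flight check that blocks text containing
# obvious PHI-like patterns before it is sent to a public generative AI API.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def assert_no_phi(text: str) -> str:
    """Raise before sending text to an external AI service if it looks like PHI."""
    hits = [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]
    if hits:
        raise ValueError(f"Possible PHI detected ({', '.join(hits)}); request blocked.")
    return text

# Usage: wrap every outbound prompt, e.g.
# response = some_llm_client.complete(assert_no_phi(prompt))  # hypothetical client
```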
Additionally, organizations must carefully vet AI vendors to ensure they do not send PHI to generative AI services or other external systems without robust safeguards. This includes requiring vendors to sign Business Associate Agreements (BAAs) that clearly define their compliance responsibilities, verifying that the vendor’s AI models are trained on secure, compliant datasets, and ensuring all data handling aligns with HIPAA’s technical and administrative safeguards.
This is an exciting time as the possibilities with emerging AI capabilities seem endless, offering transformative potential for healthcare innovation, but vigilance in managing AI-related risks is essential to maintaining patient trust and ensuring regulatory compliance.
Tina Joros, JD, Chair, EHR Association AI Task Force at Veradigm
One ethical consideration that needs more discussion is when and how to notify patients about the use of AI in their care, and exactly what role AI will play. It is unlikely that a general disclaimer at the beginning of a patient visit will provide enough detail for a patient to understand the technologies used in their care; however, a more detailed analysis of the inputs used by the AI system and how it was trained would be overwhelming.
Clinicians must be responsible for evaluating AI technologies, communicating with patients about their use, and obtaining patient consent when necessary to maintain patient trust. But when and how to communicate that detail is still an open question in many care settings.
Additionally, the EHR Association advocates for technologies that incorporate human-in-the-loop or human override capabilities. This means that a learned person always remains central to decisions involving patient care and that clinicians use AI recommendations, insights, or other information only to help inform, not make, those decisions.
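A minimal sketch of that human-in-the-loop pattern, with hypothetical names and fields, might look like the following: the AI may suggest, but only the clinician's explicit decision is ever persisted.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    patient_id: str
    suggestion: str   # what the AI proposes
    rationale: str    # explanation surfaced to the reviewing clinician

def finalize(rec: Recommendation, clinician_decision: str, clinician_note: str = "") -> dict:
    # Nothing enters the record on the AI's authority alone: the clinician's
    # decision (accept, override, or reject) is what gets persisted.
    assert clinician_decision in {"accept", "override", "reject"}
    return {
        "patient_id": rec.patient_id,
        "action": rec.suggestion if clinician_decision == "accept" else clinician_note,
        "decided_by": "clinician",
        "ai_rationale_shown": rec.rationale,
        "decision": clinician_decision,
    }
```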
Jim Ducharme, CTO at ClearDATA
When implementing AI solutions in healthcare, the most pressing ethical considerations revolve around patient data privacy and decision-making accuracy. As AI processes vast amounts of patient data, the risk of ‘AI hallucinations’—false or misleading outputs due to flawed data or algorithms—poses a serious concern. These hallucinations can result in clinical errors or poor decision-making, which could harm patients.
Also, bad actors can exploit AI vulnerabilities through data poisoning attacks, introducing false information into AI systems to manipulate their outputs for malicious purposes. Trust is a key issue—AI systems, if left unchecked, can be misused.
By focusing on rigorous data validation, maintaining human oversight, implementing robust feedback loops, and using multiple AI models to cross-reference outputs, we can ensure that AI is a tool that enhances healthcare rather than undermining it. At the end of the day, AI is a tool that can provide valuable insights, but it is up to healthcare professionals to interpret and apply those insights with the depth and care that only they can provide.
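One hedged sketch of the cross-referencing idea: query several independent models with the same question and escalate to a human reviewer whenever they fail to agree. The model clients here are placeholder callables, not any particular vendor's API:

```python
from collections import Counter

def cross_check(models, query, min_agreement=0.75):
    # Ask several independent models the same question; `models` is a list
    # of callables returning hashable answers (placeholders for real clients).
    answers = [model(query) for model in models]
    answer, count = Counter(answers).most_common(1)[0]
    agreement = count / len(answers)
    if agreement < min_agreement:
        # Disagreement is a signal of possible hallucination or drift:
        # route to a human reviewer instead of returning an answer.
        return {"status": "needs_human_review", "answers": answers}
    return {"status": "consensus", "answer": answer, "agreement": agreement}
```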
Ken Armstrong, InfoSec Manager at Tendo
The implementation of AI in healthcare brings tremendous potential but must address key ethical considerations to ensure responsible use. Patient data privacy is paramount, requiring robust security measures and compliance with regulations like HIPAA or GDPR to prevent breaches and unauthorized access. Transparency and accountability are critical, ensuring AI systems provide understandable insights and maintain human oversight in decision-making. Efforts to mitigate algorithmic bias are essential to prevent disparities in care, with diverse and representative datasets playing a pivotal role. Finally, patient autonomy must be upheld through informed consent and the assurance that AI serves to enhance, not replace, clinical judgment. By addressing these considerations, AI can improve patient outcomes while maintaining trust and equity in healthcare delivery.
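As one concrete example of the bias point, a simple audit is to compare a model's error rate across patient subgroups and flag large gaps. The field names and threshold below are illustrative assumptions, not a standard:

```python
def subgroup_error_rates(records, group_field="demographic_group", max_gap=0.05):
    # `records` is an iterable of dicts holding the model's prediction, the
    # true label, and a subgroup attribute; the field names are illustrative.
    totals, errors = {}, {}
    for r in records:
        g = r[group_field]
        totals[g] = totals.get(g, 0) + 1
        errors[g] = errors.get(g, 0) + (r["prediction"] != r["label"])
    rates = {g: errors[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    if gap > max_gap:
        print(f"Warning: error-rate gap of {gap:.2%} across groups: {rates}")
    return rates
```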
Gayathri Narayan, VP & General Manager of ModMed Scribe at ModMed
Health tech professionals currently have the opportunity, and the responsibility, to help raise AI ethically from the ground floor in a way that will be meaningful not only for provider workflows but also for patient outcomes. I've said before that AI is an entire world in itself. Once a training model is fed with data, endless possibilities can unfold – a prospect that is incredibly exciting for engineers and healthcare professionals alike.
At the same time, the AI we see today is still at an infant stage compared to what it will accomplish someday. This means that large quantities of clean, structured data are crucial, but so are diverse, well-sourced datasets and transparency from the start. Those who took shortcuts during the training of their models may eventually see shortcomings once in the market alongside platforms that started with higher standards. To stay ahead and succeed in the development of AI, health tech leaders will need to take an active role in approaching projects both thoughtfully and responsibly while the rules are still being written.
A lot of points to consider here! Huge thank you to Dr. Bruce Lieberthal, Chief Innovation Officer at Henry Schein, Inc., Mark Thomas, Chief Technology Officer at MRO Corp, David J. Sand, MD, MBA, Chief Medical Officer at ZeOmega, Rick Stevens, Chief Technology Officer at Vispa, Tina Joros, JD, Chair, EHR Association AI Task Force at Veradigm, Jim Ducharme, CTO at ClearDATA, Ken Armstrong, InfoSec Manager at Tendo, and Gayathri Narayan, VP & General Manager of ModMed Scribe at ModMed for taking the time out of your day to submit a quote! And thank you to all of you for taking the time out of your day to read this article! We could not do this without all of your support.
What ethical considerations do you think should be addressed when implementing AI solutions in healthcare, particularly in areas such as patient data privacy and decision-making? Let us know over on social media; we'd love to hear from all of you!