Is AI in Healthcare a Friend or a Monster? Ethics Will Decide Its Fate

Conversation with AI Ethics Lead Assessor Dr. Khulood Alsayegh

Image created by Rudy Chidiac. © Open Medical 2025. All Rights Reserved

“A lot of people think AI does everything and is amazing, forgetting that we are the ones that make it what it is,” says Dr. Khulood Alsayegh.

AI itself isn’t inherently good or bad. Its fate depends entirely on how we design, regulate, and implement it.

With AI expanding rapidly and seeping into our everyday lives, we must scrutinise its ethical implications closely, especially in healthcare, where lives are at stake.

To explore this, we spoke with Dr. Khulood Alsayegh, IEEE AI Ethics Lead Assessor, policymaker in a regulatory body, consultant clinician, and a member of Open Medical's Strategic Advisory Committee.

AI in healthcare: friend or foe?

While preparing for a presentation, Dr. Khulood Alsayegh had a slide with the question “What is AI?”

She turned to her 7-year-old son and asked him, curious what he thought.

“AI is my friend because he helps me with my homework,” he answered confidently.

But after a few more pointed questions and a moment of reflection, he added, “AI can also be a monster because he knows everything.”

His response captures the duality of AI, something Khulood emphasises people need to understand.

AI can be a knowledgeable and powerful friend, but it can also become a dangerous force without proper frameworks and safeguards for the knowledge it holds. What information should it have? Can its decisions be trusted? Does it work fairly for everyone? Who is accountable when AI gets it wrong?

And thus, the difference between AI as a friend and AI as a monster comes down to ethics.

AI is becoming more common in clinical support for decision-making, diagnostics, and patient management, and so the ethical foundation of these technologies must be carefully considered.

The four pillars of ethical AI

In her role as an IEEE AI Ethics Lead Assessor, Dr. Khulood uses four core principles to evaluate the ethics of AI tools, all of which are non-negotiable when patients are involved.

1. Accountability

“When AI works well, everyone takes credit. When it fails, everyone starts pointing fingers,” Khulood says.

If AI misdiagnoses a patient, recommends an incorrect treatment, or fails to detect a life-threatening condition, who is responsible? The developer? The hospital? The clinician?

“AI should not be the ultimate decision-maker because you cannot blame it at the end,” Khulood explains.

So human oversight is required for the use of AI in healthcare, but that raises another challenge: “If the human holds responsibility, how much information does he need, and to what extent? Because he needs to know a lot to be held accountable,” she explains.

It’s important to put the right roles and responsibilities in place from the beginning, ensuring everyone knows what they are accountable for.

2. Transparency

Can we trace how AI reaches its conclusions?

“If you cannot explain how an AI got from A to B and then back from B to A, then the transparency isn’t there,” Khulood says.

AI must be understandable to end-users, patients, clinicians, and policymakers. The moment AI’s decision-making process becomes a black box, transparency is lost, and so is trust.

IBM’s Watson for Oncology serves as a cautionary example. Initially celebrated as a groundbreaking innovation, it faltered because hospitals couldn’t verify how it arrived at its treatment recommendations. Clinicians, trained to be evidence-based and analytical, were reluctant to trust a system they couldn’t fully understand. And indeed, later reports revealed the AI had recommended unsafe and incorrect cancer treatments.

If AI is to be trusted in healthcare, clinicians must understand its decision-making process, and, as Dr. Khulood asserts, “AI in healthcare should never be a decision-maker.”

3. Applicability

AI should work for all patients, not just a select few.

“If an AI tool is meant to support women’s health, does it support all types of women? All ethnicities? Elderly women? Younger women?” Khulood asks.

AI trained on narrow datasets risks bias, potentially worsening healthcare inequalities.

For example, AI in dermatology has shown biases in skin tone recognition. Many AI algorithms are trained predominantly on images of lighter skin, leading to reduced accuracy in diagnosing conditions in people with darker skin tones. This imbalance exacerbates healthcare disparities and can result in misdiagnoses or delayed treatment for certain populations.

AI must be rigorously tested across diverse demographics to avoid reinforcing existing inequalities or creating new ones.

4. Privacy

Healthcare data is highly sensitive, so AI systems must be designed to protect patient confidentiality.

“If an AI system is breached, how quickly can it be fixed?” Khulood asks.

AI must have robust security measures, including clear protocols for rapid response to breaches. Where data is stored is also critical. For example, in the UAE, patient data must be stored within the country to ensure security and compliance with national regulations.

Balancing all four principles

These principles don’t exist in isolation. AI must balance all four, but depending on the use case, one may take priority over the others.

She gives two examples:

A smartwatch designed for children can communicate with other smartwatches of the same make.

“As a parent, you’d ask if you want to know the whereabouts of other kids. Some may say, Why not if they are friends? But what if it connects with someone’s watch and they are not friends? So in this case, the privacy principle is very important.”

She provides another example of AI embedded in a pacemaker, which tracks and transmits patient health data.

If the AI is trained to detect anomalies based on ‘normal’ behaviour, lifestyle changes—such as travelling more, changing sleep patterns, or weight fluctuations—could trigger false alerts, suggesting a heart issue where there is none. Conversely, if the AI overcorrects for variability, it could miss genuine warning signs, failing to alert clinicians when intervention is needed.

Now, if something goes wrong, who is responsible?

“The hospital blames the developer, the developer blames the administrator, and so on,” Khulood explains. “But in the end, it’s the patient who suffers.”

So in this case, accountability is the principle that takes priority.

Importantly, each AI tool must strike the right balance, ensuring one pillar isn’t upheld at the expense of another.

AI is what we make it

The difference between AI as a friend or a monster comes down to ethics. AI is not an independent force. It reflects how we train it, use it, and regulate it.

To ensure AI remains a trusted partner in healthcare, we must ensure that:

  • Humans remain the decision-makers, kept in the loop at every stage so they can be held accountable.

  • AI systems are transparent, so clinicians can understand and verify recommendations.

  • AI is trained on diverse datasets to prevent bias and ensure fair outcomes for all patients.

  • Strong security and privacy measures protect sensitive patient data.

AI’s potential in healthcare is enormous. But without ethical safeguards, it risks becoming a monster of our own making.
