
AI in Healthcare: Do the risks outweigh the benefits?

Let’s start the conversation

The other month I had the pleasure of attending a fascinating debate at the University of Melbourne. Between the spirited discussion and the free wine and food, I tried to grapple with the magnitude of potential that is ‘AI in Healthcare.’ In the spirit of being randomly assigned to a team, I have attempted to articulate my points for both the affirmative and the negative below.

Affirmative – The risks outweigh the benefits.

For the purpose of this debate, I define Artificial Intelligence (AI) as the capability for machines to complete tasks in a way that replicates human intelligence, underscored by machine learning. This technology also encompasses the realms of natural language processing, robotic process automation (RPA), computer vision and digital twins. 
Healthcare is one of the most mature industries in the market. The act of caring for one another, and of training specialists who master the art of healing, has been with us since the ancient Greeks. A physician’s oath commits medical practitioners to the protection of their patients and to confidentiality. However, the development and presence of AI in the healthcare system is creating a threat to that promise. At this time, the risks of AI in healthcare outweigh the benefits. Healthcare, for this debate, excludes pharmaceuticals and insurance; it encompasses the immediate practitioner-patient interaction and care within hospitals, doctors’ offices, nursing homes and ambulatory services.

My arguments are as follows: 

Australia does not yet have the appropriate government regulations in place to secure a safe and confidential way of using AI in healthcare. The Trustworthy AI framework articulates that proper regulation means individuals are informed when their data is collected and give consent.1 However, the sheer volume of data from different sources can make this tricky.2 The potential misuse of data also needs to be considered.

Australian healthcare data is not yet mature enough to appropriately support AI solutions. To date, there have been quite a few examples in which the data pool inaccurately represents the population. For example, while an algorithm may be highly successful at diagnosing cardiovascular illness in white men, it may perform poorly for non-white patients or women, owing to how clinical trial cohorts were pooled and to the inherent biases in our society.3 Being aware of these discrepancies will prompt mindful consideration of data quality and quantity, and in time we will have datasets that support equitable patient care – but not yet.
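The subgroup problem above can be sketched in a few lines. This is a purely illustrative toy: the subgroup names, labels and "model predictions" are all fabricated, standing in for a trained model evaluated on an unrepresentative dataset – no real clinical data or algorithm is implied.

```python
# Fabricated toy records: (subgroup, true_label, model_prediction).
# group_a dominates the data, so overall accuracy hides group_b's failures.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1),  # the model misses this group
]

def accuracy(rows):
    """Fraction of rows where the prediction matches the true label."""
    return sum(y == p for _, y, p in rows) / len(rows)

overall = accuracy(records)
per_group = {
    g: accuracy([r for r in records if r[0] == g])
    for g in ("group_a", "group_b")
}
print(f"overall: {overall:.0%}")               # 80% – looks fine
print(f"group_a: {per_group['group_a']:.0%}")  # 100%
print(f"group_b: {per_group['group_b']:.0%}")  # 0% – the hidden failure
```

The headline 80% accuracy is exactly the kind of number that conceals a model that fails the underrepresented group entirely – which is why per-subgroup evaluation matters.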

Another important implication of AI in healthcare is understanding who is liable when mistakes are made. If a physician relies on an algorithm to assist in diagnosing a patient, who is at fault if that diagnosis is wrong, or if a medication isn’t provided at the appropriate time? What about the data scientist behind the AI algorithm? Australia currently has no legal framework defining AI or assigning liability for it, and the affirmative believes this is essential should we wish to unlock the true potential of AI in healthcare.

Negative – The benefits outweigh the risks.

Technology doesn’t hurt people, people hurt people. AI is an incredible technology with the potential to take the robot out of the human. When AI works alongside human intelligence, a new collective intelligence is unleashed, enabling humans to be more innovative and empathetic. AI has the potential to absolutely transform our healthcare industry, and in some ways, it already is – the risks do not outweigh the benefits.4

When we combine AI with RPA, the software can be trained not only on how to complete repetitive tasks, but also on whether it should. Administrative tasks such as scheduling appointments and accessing medical records can be automated. Other AI use cases include dynamic routing to get a sick patient to the nearest site of care for immediate attention. This frees up time for healthcare staff and coordination teams to do more of the human work, such as caring for patients.
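As a purely hypothetical sketch of the dynamic-routing idea, the core of "nearest site of care" can be as simple as a distance-minimising lookup. The facility names and coordinates below are invented for illustration; a real system would weigh road travel times, live capacity and the required specialty, not straight-line distance.

```python
import math

# Invented facilities with (x, y) coordinates on an arbitrary grid.
facilities = {
    "City Hospital": (0.0, 0.0),
    "Northside Clinic": (3.0, 4.0),
    "Westgate ED": (1.0, 1.0),
}

def nearest_site(patient_xy, sites):
    """Return the facility with the smallest straight-line distance."""
    return min(sites, key=lambda name: math.dist(patient_xy, sites[name]))

print(nearest_site((0.5, 0.9), facilities))  # Westgate ED
```

Even this toy shows the shape of the decision a routing system automates, leaving coordinators free to handle the judgement calls the rule can’t.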

Individualised treatment plans are on the horizon, thanks to AI. By combining datasets – including a patient’s health history, lifestyle, genomic make-up, and personal preferences – AI will be able to make recommendations to the doctor based on wellbeing factors they may not otherwise be aware of. AI can be used for everything from image diagnosis to the prediction of future illness based on an individual’s lifestyle, environment, biometrics, and genomics.5 Personalisation of healthcare will create a multitude of opportunities for diagnostic accuracy and the management of preventable illness.
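A minimal sketch of the dataset-combining idea: every field, patient ID and rule below is invented for illustration, and the single hand-written rule stands in for what would in practice be a trained model over far richer data.

```python
# Two invented data sources keyed by a hypothetical patient ID.
history = {"patient_1": {"conditions": ["hypertension"]}}
lifestyle = {"patient_1": {"smoker": False, "exercise_days": 1}}

def combined_profile(pid):
    """Merge the per-source records for one patient into a single dict."""
    profile = {}
    profile.update(history.get(pid, {}))
    profile.update(lifestyle.get(pid, {}))
    return profile

def flag_for_doctor(profile):
    """Toy rule standing in for a trained recommendation model."""
    flags = []
    if ("hypertension" in profile.get("conditions", [])
            and profile.get("exercise_days", 0) < 3):
        flags.append("discuss exercise plan")
    return flags

profile = combined_profile("patient_1")
print(flag_for_doctor(profile))  # ['discuss exercise plan']
```

The point is the join: once history and lifestyle sit in one profile, a recommendation step can surface a factor – low exercise alongside hypertension – that neither source shows on its own.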

Advanced AI models, such as digital twin technologies, have the potential to assist in training and to improve patient outcomes. By replicating the real-life scenarios that occur in a surgical ward, a digital twin can be used to train surgeons on how best to approach a surgical procedure.6 By simulating a challenging or new surgery through VR/AR technology, a surgeon can plan their method and refine their skills. This can be valuable in training situations, or in circumstances where the required specialists aren’t available at a particular hospital and the AI can provide rapid training or instructions.

At this point in the debate, I’m starting to feel the pressure as I plan for the rebuttal. The benefits of AI in healthcare are astronomical, and we have yet to scratch the surface of what’s possible – but are we prepared to appropriately manage the risks? Watch this space.