
Using AI and automated decision-making to improve clinical care in the NHS

In the first of a series of blogs, we examine the benefits of using artificial intelligence (AI) in medicine with Dr Afzal Chaudhry, Chief Clinical Information Officer at Cambridge University Hospitals, Dr Arvind Madan, Hurley Group GP Partner and Co-Founder of eConsult and Dr Indra Joshi, Director of AI for NHSX.

What roles can AI and automation play in the NHS, and what are the key blockers or success factors?

Indra: AI is successful when it answers a real problem, rather than just doing something fun with a pile of data. For example, we've seen it work well in triage, where there is a huge number of images that need reporting on. If you're helping people to know that something is normal and fine, versus quite urgently not normal, then you're solving a real problem and people are more likely to adopt it.

Afzal: There's a great deal of opportunity from basic decision support tools directing people to order sets of investigations based upon the patient's chart, to patient care examples such as AI predicting patient readiness for discharge or likelihood of readmission, to higher level uses around planning a service for coming months and years (though these predictions have been disrupted by COVID-19). Two big challenges are the quality of data that feeds these tools in the first place, but also the required level of confidence in the nature of the algorithms among professionals, patients and society as a whole.

Arvind: In primary care there are lots of opportunities for AI to assist. COVID-19 and total triage (where patients contacting the practice first provide some information on the reasons for contact and are then triaged before making an appointment) have seen the widespread adoption of online consultations. The challenges include finding the right operating model for GP practices, reducing supply-led demand and, of course, the digital exclusion of patients who are less comfortable with technology.

Success depends on finding the right balance between enhancing patient convenience and driving practice efficiency.

Where have you seen AI and automation work particularly well?

Afzal: We’ve seen numerous successful cases, including order sets and recommendations for investigations and medication depending on the patient’s particular condition, as well as work on predicting levels of crowding within A&E (the emergency department) with the aim of intervening at peak times.

In my experience, the success of AI initiatives is often determined by how closely the clinical and operational teams are involved in the design and implementation of the solution, with people on the ground to support the roll-out until it becomes embedded. We saw a very good example of this when implementing a solution to improve the handling of patients who might have sepsis, a life-threatening reaction to infection, working very closely with the microbiology, infectious disease and acute medicine teams.

Arvind: We've been working closely with Deloitte in tackling general practice pain points. An example is our Hurley eHub model, in which we have a home-based multidisciplinary clinical team processing high volumes of online consults from a group of GP practices covering 115,000 patients. Historically, this has been administered by cobbling together email rules and Excel sheets. Following the joint work, we hope to have the ability to automatically interpret, prioritise and route online consults to the right member of the growing primary care team. Whilst the benefit in each individual case is small, the aggregated cost and time savings are substantial. Eventually, this may evolve into a network of interconnected AI-assisted triage systems, sitting at the front of the major doors into the NHS, e.g. general practice, outpatients and urgent care.
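The interview doesn't say how the interpret-prioritise-route step works in practice, so as a purely illustrative sketch (the keywords, priorities and team names below are hypothetical assumptions, not the actual eHub logic, which would likely use far richer clinical models) a minimal rule-based router might look like:

```python
# Illustrative, hypothetical rule-based triage router -- NOT the actual
# eHub implementation. Keywords, priorities and teams are assumptions.

RED_FLAGS = {"chest pain", "breathless", "suicidal"}  # escalate immediately
ROUTES = {
    "rash": "nurse practitioner",
    "repeat prescription": "clinical pharmacist",
    "back pain": "physiotherapist",
}

def triage(consult_text: str) -> tuple[str, str]:
    """Return (priority, team) for a free-text online consult.

    Uses naive substring matching purely for illustration.
    """
    text = consult_text.lower()
    if any(flag in text for flag in RED_FLAGS):
        return ("urgent", "duty GP")
    for keyword, team in ROUTES.items():
        if keyword in text:
            return ("routine", team)
    return ("routine", "GP")  # default when no rule matches

print(triage("I need a repeat prescription for my statins"))
# → ('routine', 'clinical pharmacist')
```

Even a toy router like this shows where the aggregated savings come from: each routing decision is small, but applied to high volumes of consults it removes a step the article describes as previously cobbled together from email rules and Excel sheets.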

Indra: There have been some really good success stories, both regionally and across ICS (Integrated Care System) footprints, of how people are using analytics, especially on the operational side.

We've got 10 companies that have gone through the Artificial Intelligence in Health and Care Award, some of them already rolled-out success stories, and the idea is to see how they progress. One challenge is that people come up with solutions that aren't always ready, and we've got a job to sift through that. This is what we've tried to do with the award: to say that these later-stage technologies have gone through a level of screening evaluation, and that they will go through a wider, more robust product evaluation with recognised bodies like the MHRA, NICE, etc.

What needs to be done to get buy-in from patients, clinicians, and administrators for AI solutions?

Indra: I think getting buy-in is about understanding the base level of how much people need to know to be both safe and conscious of what they're using.

This is where the NHSX AI lab looks to debunk myths through resources like the AI Buyer’s Guide and Artificial Intelligence: How to get it right to show that some of this is practical stuff that might automate mundane tasks for you, or help to spot things that may not have been evident to the human eye. An example I give is measuring nodules in an image and people can be amazed that this is done for them, but also concerned about whether it should have been done for them. We look to educate people that this is fine, this is part of the process, and nothing to worry about. 

Our job is really to make sure that the technology is safe and effective and does what it says it does versus making sure everybody understands every little bit of the technology.

Afzal: Automation or AI will only be as good as the data feeding into it, so buy-in must begin with this. Ultimately, I think gaining buy-in for the data quality that enables AI is a combination of showing the benefits, the ease with which data quality can be improved, and then occasionally a little bit of a professional ‘stick’ to reinforce the importance of recording things properly.

Data quality is essential to everything that we do but you can often be astonishingly imprecise as a doctor and still get your patient better.

If an old person comes in and they're a bit short of breath, a doctor might give them some antibiotics in case they've got a chest infection and some diuretics in case they've got heart failure and five or six days later, the person's better but it may not be clear whether this was related to the doctor at all. The doctor may never have actually committed themselves to a formal diagnosis, causing planning for next steps and discharge to become increasingly difficult.

The more difficult areas to manage in AI will be the more predictive work, whereby the machine is constantly learning, refining and applying weightings based on complex information provided by the patient. At that point, you’re moving from a position of a doctor being able to eyeball the decision from the computer and say “OK, I get how this was arrived at”, to “I just have to believe this metric” and then make a decision based on that. Some extra thought needs to be given to validation. Does it need an annual review, or ethical panel oversight to make sure there’s no inherent bias?
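One simple form the validation Afzal describes could take is comparing a model's performance across patient subgroups to surface inherent bias. The sketch below illustrates the idea only; the data is synthetic and the group labels are hypothetical, and a real review would use proper fairness metrics rather than raw accuracy alone:

```python
# Illustrative subgroup audit: compare a model's accuracy per patient group.
# Records and group labels are synthetic assumptions for demonstration only.

def accuracy(pairs):
    """Fraction of (predicted, actual) pairs that agree."""
    return sum(pred == actual for pred, actual in pairs) / len(pairs)

def audit_by_group(records):
    """records: iterable of (group, predicted, actual). Returns accuracy per group."""
    by_group = {}
    for group, pred, actual in records:
        by_group.setdefault(group, []).append((pred, actual))
    return {group: accuracy(pairs) for group, pairs in by_group.items()}

# Synthetic predictions for two hypothetical age bands.
records = [
    ("under 65", 1, 1), ("under 65", 0, 0), ("under 65", 1, 1), ("under 65", 0, 1),
    ("over 65", 1, 0), ("over 65", 0, 0), ("over 65", 1, 1), ("over 65", 0, 1),
]

report = audit_by_group(records)
print(report)  # a large gap between groups would prompt further review
```

Run periodically, say at the annual review Afzal suggests, a widening gap between groups is exactly the kind of signal that would escalate a model to ethical panel oversight.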

Arvind: In the same way that we envisage driverless cars one day taking over from driven cars, to the point where it may even be deemed irresponsible to drive your own car for safety reasons, I can see it becoming commonplace for clinicians to use AI in their daily work for the same reason. A limiting step may be the degree of trust from patients and clinicians, with debates on how much evidence is required to adopt each element.

I can't imagine in my lifetime that AI will replace everything clinicians do, as it lacks the emotional intelligence and empathy needed to address issues such as breaking bad news, domestic abuse or palliative care. My instinct is that AI will only ever supplement the doctor-patient relationship in general practice, not replace it.

There's also the fear of missing a patient’s hidden agenda, for example, the embarrassing red-flag symptom they only divulge after building up a relationship with the clinician. But we should also attempt to capture the benefit for other patients who are more willing to divulge these symptoms online. The sweet spot is how to optimise the use of AI to drive efficiency and health outcomes, whilst continuing to make the patient experience feel human.
