By John Gikopoulos, Global Head for Artificial Intelligence & Automation, Infosys Consulting
In recent years, the use of new technologies in healthcare has come under fire. Much of the focus has been on mobile technologies – such as the NHS’s partnership with Babylon earlier this year to provide digital-first integrated care via a mobile app. The problem? These technologies aren’t always accessible to vulnerable, elderly patients, who may not even own mobile devices. This is a far cry from the dreams of healthcare democratisation we often hear about.
But all is not lost. Frankly, not enough is known about the ways in which AI and automation are being used in healthcare. What people see day-to-day as “AI in healthcare” isn’t even the tip of the iceberg – it’s a whiff of smoke coming off it. In reality, the use of AI and automation is much more widespread and accessible, and is being used by doctors and nurses on a daily basis. From diagnostics to automating the back office, AI is transforming healthcare for good.
The first way new technologies are transforming healthcare is through Intelligent Process Automation (IPA), which combines Robotic Process Automation (RPA) with machine learning. Through data science techniques, simple tasks that were once rigidly sequential can now be handled intelligently.
RPA is largely used for invoice scanning on enterprise resource planning (ERP) systems, but IPA goes one step further, enabling the use of optical character recognition (OCR) technology to scan anything. This means entire patient record-keeping systems can be automated and, through image recognition, basic reports can be populated.
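To make this concrete, here is a minimal sketch in Python of the kind of extraction step an IPA pipeline might run after OCR has turned a scanned patient record into raw text. The field names, patterns and sample record are entirely invented for illustration; a production system would handle far messier input.

```python
import re

def extract_record_fields(ocr_text: str) -> dict:
    """Pull basic fields out of OCR'd patient-record text using simple patterns."""
    patterns = {
        "patient_id": r"Patient ID:\s*(\S+)",
        "name": r"Name:\s*([A-Za-z ,.'-]+)",
        "date_of_birth": r"DOB:\s*(\d{2}/\d{2}/\d{4})",
        "diagnosis": r"Diagnosis:\s*(.+)",
    }
    record = {}
    for field, pattern in patterns.items():
        match = re.search(pattern, ocr_text)
        # Leave the field empty rather than guessing when OCR output is incomplete
        record[field] = match.group(1).strip() if match else None
    return record

# Hypothetical OCR output from a scanned record
sample = """Patient ID: A12345
Name: Jane Doe
DOB: 01/02/1960
Diagnosis: Type 2 diabetes mellitus"""

print(extract_record_fields(sample))
```

Once the fields are structured like this, populating a basic report or record-keeping system becomes a routine data step rather than manual typing.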
This is the advent of intelligent automation in healthcare: freeing up doctors and nurses from manually entering details to instead spend their time working with vulnerable patients on the ground, and saving precious time and resources in a time when the NHS is more stretched than ever.
The virtuous spiral of standardisation
Right now, every clinician categorises evidence differently – listing the same diagnoses, diseases or predicted outcomes under different labels. As a result, the data available to analyse is effectively ‘corrupt’: because the evidence isn’t consistently classified, when you think you’re looking at the full picture, you may only be looking at one third of it. That’s why we must standardise the way things are classified – calling the same thing by the same name – a task that machine learning makes possible.
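As an illustration of what “calling the same thing by the same name” means in practice, here is a deliberately simple, rule-based sketch. The synonym table is invented; a real system would learn such mappings at scale (for example, onto coding schemes such as SNOMED CT) rather than hard-code them.

```python
# Toy standardiser: collapses the many ways clinicians might write the same
# diagnosis onto one canonical label. All entries below are invented examples.
SYNONYMS = {
    "mi": "myocardial infarction",
    "heart attack": "myocardial infarction",
    "acute mi": "myocardial infarction",
    "t2dm": "type 2 diabetes mellitus",
    "type ii diabetes": "type 2 diabetes mellitus",
    "adult-onset diabetes": "type 2 diabetes mellitus",
}

def standardise(label: str) -> str:
    """Return the canonical label for a free-text diagnosis, if one is known."""
    key = label.strip().lower()
    return SYNONYMS.get(key, key)

raw = ["Heart attack", "Acute MI", "MI", "T2DM", "Type II diabetes"]
print(sorted({standardise(x) for x in raw}))
# Five variant labels collapse to just two canonical diagnoses.
```

Only once variants collapse like this can analysis across patients see the full picture rather than a third of it.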
There are huge benefits to this, nowhere more visible than in clinical trials. Standardisation through machine learning can cut the time it takes for drugs to come to market by 10% to 30%, saving hundreds of millions of pounds for pharma companies – money they can reinvest in making new drugs. We’re already seeing faster results in drug manufacturing, with important drugs moving from ideation through human trials to pharmacy shelves much more quickly.
Machine learning also improves the odds of identifying beneficial side effects and of repurposing existing medications for other diseases, meaning much better patient outcomes.
Man or machine?
Machine learning analyses data in a completely different way, with machines taking on repetitive tasks that nonetheless demand a high level of skill. Take a scan carried out on a patient – an MRI or X-ray, for example. Thanks to the wealth of data at its disposal, the machine can apply filters to analyse what might be going wrong – prompting the operator to reposition the patient slightly, or flagging when a scan is likely to produce a poor result – meaning the machine stands a much stronger chance of spotting health issues.
Unsurprisingly, this area is highly contested. Will machines and robots be used to diagnose moving forward? Absolutely not. The machine is a tool used to identify potential problems and make more accurate predictions on what might be needed, but an experienced clinician still needs to diagnose and decide what must happen in every case.
These use cases apply to practically every part of a patient’s journey, inside or outside a hospital. They aren’t limited to static images or scans, but extend to collecting data on an ongoing basis in a dynamic fashion. The results are significant – recently, San Francisco University installed an AI-enabled sepsis detection system in an intensive care ward. There, the death rate fell by more than 12%, and patients monitored by the system were 58% less likely to die.
Machine learning allows clinicians to see what type of behaviour is producing better outcomes, leading to quicker and more accurate diagnosis and treatment. And because the technology is accessible even to patients at home, more of them stand to benefit.
Wave goodbye to AI
This is not the case for AI interfacing – a remote session where AI software diagnoses patients. There is widespread criticism of this technology, not least because it’s inaccessible for more vulnerable patients who may not have mobile phones or tablets.
As in hospitals themselves, it will always be crucial for a clinician to define what must happen in every case. Interfacing should be used at a service level, such as in logistics or hotels, rather than at the hardcore diagnosis and treatment level. This form of telematics is a way of getting in touch with a patient – but it isn’t artificial intelligence.
Data, data, data
Every single use case for AI and automation in healthcare relies on one thing: data. From capture and availability to structuring and cleansing, the information must be understandable and shareable across databases that can communicate with each other. Put bad information in and you get bad information out – which nobody wants in healthcare.
The second most important element is having a clear, cleansed data stream from different patients as they are diagnosed and treated. The system can then find similarities across diseases, suggest treatments and identify other kinds of treatment that could eliminate a disease in its infancy. This means healthcare professionals could more easily identify underlying treatments for diseases and get those drugs to patients more quickly. In the current COVID-19 pandemic, a multi-language FAQ avatar built on a clear data stream would not only provide consistent information – it could also map incoming questions, learn from them, and potentially predict outbreak hotspots.
The trajectory of AI adoption isn’t straightforward – there’s still widespread hesitation to try new things. When we think of AI in the enterprise, the risk-takers are often ambitious startups, companies undergoing major restructures, or companies in distress desperate for solutions. In other words, innovation is usually seen more as a dire need than a long-term plan. In healthcare, then, it’s understandable why adoption has been slow – the sensitive nature of saving lives is all about long-term planning, not last-minute needs.
However, supposedly “legacy” institutions like the NHS are far more advanced in adopting AI than private pharmaceutical companies. The NHS wants to reduce costs wherever possible, which necessitates some experimentation; pharmaceutical companies, by contrast, enjoy bigger margins and are more inclined to leave things as they are.
There has been no dramatic end-to-end overhaul of the patient journey thanks to AI and machine learning. But behind the scenes, things are changing, and we’re inching into real innovation. Developments in diagnostics and the back office aren’t visible to the public, or even sometimes to doctors and nurses, but everyone is benefitting. These technologies have huge potential, but they haven’t yet made a widespread impact. Buy-in from the clinicians who use the tech on a daily basis, and more excitement from the healthcare sector to embrace these technologies, will be all the fuel the fire needs.