The Covid-19 pandemic has been one of the most quantified health crises in history. This wealth of information is a powerful tool for the response, but only if mountains of data can be made sense of quickly. Artificial intelligence (AI) is potentially invaluable here: it can sift through and interpret vast amounts of data much as a human can, but where a human must take breaks or sleep, the AI keeps going. This gives it the superhuman efficiency and rapidity so vital to pandemic response. The coronavirus has highlighted the potential of AI, particularly in forecasting and treatment development, and while its use may not have been central to this crisis, its profile has certainly been raised for the future.
Pandemic warning systems
AI has proven that it can act faster than humans in predicting a pandemic. Machine learning company BlueDot flagged the cluster of ‘unusual pneumonia’ cases in Wuhan in December 2019 by trawling through and analysing global news reports. Within 2 hours of detecting the outbreak, BlueDot had sent a warning of the potential threat to its clients (public health officials in 12 countries, airlines and frontline hospitals), 9 days before the WHO’s first warning relating to Covid-19 was issued. The technology also correctly predicted, from air travel data, that Bangkok, Seoul, Taipei and Tokyo would be the first places the virus began to spread.
Within 2 hours of detecting the outbreak, BlueDot had sent a warning of the potential threat 9 days before the WHO’s first warning
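The idea of scanning news streams for early outbreak signals can be illustrated with a toy sketch. This is emphatically not BlueDot's actual system (which uses far richer natural-language processing); it simply shows the principle of flagging reports that contain disease-related signal phrases:

```python
# Toy sketch of outbreak-signal detection from news headlines.
# The signal terms and headlines below are invented for illustration.

SIGNAL_TERMS = {"unusual pneumonia", "unexplained pneumonia", "cluster of cases"}

def flag_reports(headlines):
    """Return the headlines that contain any outbreak signal term."""
    flagged = []
    for headline in headlines:
        text = headline.lower()
        if any(term in text for term in SIGNAL_TERMS):
            flagged.append(headline)
    return flagged

reports = [
    "Market reopens after holiday",
    "Cluster of cases of unusual pneumonia reported in Wuhan",
    "Local team wins championship",
]
print(flag_reports(reports))
```

A real system would have to cope with many languages, noisy phrasing and false alarms, which is where the machine learning comes in.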
Now that AI has shown it can, at least in this context, make accurate predictions, it could form part of a future pandemic warning system. It is important to recognise, though, that AI cannot replace human epidemiologists: all AI models make assumptions, so validation of their conclusions by epidemiologists remains crucial. On top of this, for the technology to be usefully applied, the methods used to generate predictions must be presented transparently, and any limitations of the conclusions clearly specified.
It is also important to note that while AI could be incredibly useful in predicting future outbreaks, as an outbreak progresses its potential usefulness decreases. As a disease spreads, the prediction task quickly becomes more daunting: the volume of information to assess and account for increases, and there are more humans whose behaviour must be considered. As the number of variables increases, so too does the margin of error on any predictions, and epidemiologists are much better suited than AI to weighing this up. However, especially with increased interest and development post-pandemic, perhaps AI will reach the point where it can accurately predict much more than a simple yes-or-no pandemic risk.
Healthcare resource management
BlueDot’s demonstration, among others, that AI is capable of making reliable predictions has seen its development fast-tracked in other fields too. Scientists at the University of Cambridge are working with NHS Digital and Public Health England to develop a tool known as the Coronavirus Capacity Planning and Analysis System, or CPAS. The AI behind this system uses aggregated data on patients admitted to hospitals to predict hospital resource use, enabling more effective planning, such as how much personal protective equipment to order. Technology like this has the potential to enable more efficient planning in future pandemics, and perhaps in the more ‘normal’ operation of the NHS too.
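To make the idea of resource forecasting concrete, here is a deliberately simple sketch. CPAS itself is a far more sophisticated model; this toy version just fits a linear trend to recent daily admissions and projects PPE demand, with all figures and the consumption rate invented for illustration:

```python
# Toy capacity-planning sketch: fit a least-squares line to recent
# daily admissions, extrapolate a week ahead, and size a PPE order.
# Not the CPAS model; data and rates below are made up.

def linear_trend(values):
    """Least-squares slope and intercept for y over x = 0..n-1."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    var = sum((x - mean_x) ** 2 for x in range(n))
    slope = cov / var
    return slope, mean_y - slope * mean_x

def forecast_admissions(history, days_ahead):
    """Project daily admissions for the next days_ahead days."""
    slope, intercept = linear_trend(history)
    return [max(0.0, intercept + slope * (len(history) + d))
            for d in range(days_ahead)]

admissions = [40, 44, 47, 53, 58, 61, 66]   # invented daily admissions
PPE_SETS_PER_PATIENT = 15                    # assumed consumption rate
projected = forecast_admissions(admissions, 7)
print("Projected admissions next week:", [round(p) for p in projected])
print("PPE sets to order:", round(sum(projected) * PPE_SETS_PER_PATIENT))
```

A real planning tool would model uncertainty, lengths of stay and local hospital capacity rather than a single straight-line trend.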
Failure to represent certain groups and to guard against systemic biases can mean that AIs do more harm than good
Developing AIs like CPAS is no easy task. As well as being clear about the model’s assumptions and limitations, developers must make the programme user-friendly and accessible to healthcare professionals, and the team at Cambridge who developed CPAS worked closely with clinicians to ensure this. Another key issue, especially important when dealing with patient data, is ensuring that the model is not ‘learning’ from biased datasets. Failure to represent certain groups and to guard against systemic biases can mean that AIs do more harm than good. This has been a big issue with AI in general, and is another reason why transparency about the algorithms, models and data used by AI systems is vitally important as the field progresses.
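One basic check for the representation problem described above is to compare each group's share of the training data with its share of the population the model will serve. The sketch below is a minimal illustration with invented group names and figures; real bias audits go far deeper than raw counts:

```python
# Toy dataset-representation check. Group labels, record counts and
# population shares are invented purely for illustration.

def representation_gap(dataset_counts, population_shares):
    """For each group, return (dataset share) - (population share)."""
    total = sum(dataset_counts.values())
    gaps = {}
    for group, pop_share in population_shares.items():
        data_share = dataset_counts.get(group, 0) / total
        gaps[group] = data_share - pop_share
    return gaps

counts = {"group_a": 800, "group_b": 150, "group_c": 50}        # records
population = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

for group, gap in representation_gap(counts, population).items():
    status = "under-represented" if gap < -0.05 else "ok"
    print(f"{group}: gap {gap:+.2f} ({status})")
```

Even a crude report like this makes the skew visible before the model ‘learns’ it; fixing the skew is the harder part.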
Drug discovery
Using AI to sift through large amounts of data can also speed up the drug discovery process. A UK biotech company, BenevolentAI, has built a machine learning programme to discover and develop new drugs, and harnessed it to identify possible candidates for Covid-19 treatments. It draws on the vast body of published biomedical literature as its data source, making connections between publications to build a picture of how a drug might interact with the body and target the disease, identifying connections that a human might not even think of.
AI has shown its potential to deliver the rapid response that is essential in a pandemic
In this case, the aim was to identify existing medicines that could be repurposed against Covid-19. Existing medicines have been a key research target because they have already passed the rigorous clinical trials required for safety approval, so it is just a case of testing their efficacy against the Covid-19 virus. Within a week, the AI returned a strong candidate: baricitinib, a drug currently used to treat rheumatoid arthritis. The drug has now passed the laboratory testing phase and is in clinical trials on infected hospital patients, highlighting an emerging role for AI in drug design moving forward.
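The connection-finding described above can be pictured as a path search through a graph of relationships mined from the literature: a chain of links leading from a drug, through biological mechanisms, to a disease. The tiny graph below is a hand-made sketch (not real extracted data, and much simplified from the actual biology), searched with an ordinary breadth-first search:

```python
# Toy literature knowledge graph: edges link a drug to a disease via
# intermediate mechanisms. The edges below are hand-made illustrations.
from collections import deque

edges = {
    "baricitinib": ["AAK1 inhibition"],
    "AAK1 inhibition": ["reduced endocytosis"],
    "reduced endocytosis": ["reduced viral entry"],
    "reduced viral entry": ["Covid-19"],
    "aspirin": ["COX inhibition"],
}

def find_path(graph, start, goal):
    """Breadth-first search for a chain of connections from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no connection found

print(" -> ".join(find_path(edges, "baricitinib", "Covid-19")))
```

The hard part in practice is not the graph search but extracting reliable edges from millions of papers, which is where the machine learning does its work.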
A framework for the future
Throughout the coronavirus outbreak, AI has shown its potential to deliver the rapid response that is essential in a pandemic and has proved its capacity for superhuman efficiency. This has focused even more attention on AI technology, paving the way for a surge in its development. However, there is an urgent need to ensure this development happens within a framework that prioritises transparency of methods and assumptions, and that guards against bias, before AI’s large-scale deployment.
Sources and further reading
Title image by Juliet Shapiro
https://www.technologyreview.com/2020/03/12/905352/ai-could-help-with-the-next-pandemicbut-not-with-this-one/
https://www.thelancet.com/pdfs/journals/landig/PIIS2589-7500(20)30054-6.pdf
https://www.ft.com/content/877b8752-6847-11ea-a6ac-9122541af204
https://www.nature.com/articles/d41587-020-00005-z