Would you trust a computer to correctly diagnose a health problem? Most of us would probably prefer to leave it in the hands of our highly trained general practitioner, emergency room doctor or surgeon. The narrative concerning the intersection between artificial intelligence (AI) and healthcare is often grossly distorted towards one extreme or the other: either the robots are coming to kill us and steal our jobs, or they herald some new utopian era and represent the only possible source of future prosperity for the human race. Reality – as in most instances – is far more nuanced and probably lies somewhere in between these two extremes.
We’re a long way from developing Star Trek-esque androids that can perfectly simulate human behaviour and supplant your current, fully human doctor. However, there are a few ways in which AI has already begun to supplement your friendly neighbourhood doctor’s practice and a few more in the pipeline…
Consider the humble Fitbit. We’re not entirely sure that it tracks our steps correctly all of the time or gets our heart rate right, but fitness trackers are increasingly popular and there is evidence that they do work. They monitor our fitness levels, warn us when we need to get more exercise and can also record abnormalities such as heart palpitations, potentially saving lives.
The information they record can be shared with healthcare professionals and AI systems to be analysed, giving doctors a more accurate picture of the habits and needs of their patients, especially when supplemented with medical histories and other useful patient information. This allows doctors to tailor treatments more carefully and accurately, making them more effective.
However, critics are concerned that this information could also be used by companies to discriminate against their employees should the data be used unethically. Experts have also voiced concerns about invasion of privacy if the data collected and stored by manufacturers of fitness trackers is either hacked or sold.
Healthcare professionals have already begun to use machine learning-based applications, support vector machines and optical character recognition programs such as MATLAB’s handwriting recognition technology and Google’s Cloud Vision API to assist in the process of digitising healthcare information. This helps to speed up diagnosis and treatment times as healthcare professionals are able to more quickly access complete sets of records on their patients.
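The handwriting-recognition side of this can be sketched with a support vector machine, one of the techniques named above. The snippet below is a minimal illustration, not any vendor's actual product: it trains an SVM on scikit-learn's bundled dataset of handwritten digits, a common stand-in for the kind of pattern recognition used when digitising handwritten records. The library and dataset choices here are illustrative assumptions.

```python
# Minimal handwriting-recognition sketch: an SVM classifier trained on
# scikit-learn's built-in digits dataset (8x8 grayscale images).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 1,797 labelled images of handwritten digits 0-9

# Hold out a quarter of the images to check how well the model generalises.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

clf = SVC(kernel="rbf", gamma=0.001)  # support vector machine classifier
clf.fit(X_train, y_train)

accuracy = clf.score(X_test, y_test)
print(f"Held-out accuracy: {accuracy:.2f}")
```

Real record-digitisation systems work on full pages of free-form handwriting rather than isolated digits, but the underlying idea – learn a mapping from pixel patterns to symbols from labelled examples – is the same.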
The Massachusetts Institute of Technology (MIT) Clinical Machine Learning Group is leading the pack in developing the next generation of intelligent electronic healthcare records by developing applications with built-in AI – specifically machine learning capabilities – that can help with the diagnostic process. In theory, this will allow healthcare professionals to quickly make clinical decisions and create individual treatment plans tailored to their patients.
According to MIT, there is an ever-growing need for “robust machine learning [that is] safe, interpretable, can learn from little labelled training data, understand natural language, and generalize well across medical settings and institutions”.
The term “AI” is somewhat misleading as it implies something more than the technology that we currently use it to describe. We don’t literally mean artificial intelligence – no true AI has been invented yet – but advanced algorithms that run on ever more powerful computers and can recognise patterns, pick information out of complex texts or even derive the meaning of an entire document from just a few sentences. This is known as artificial narrow intelligence (ANI) and comes nowhere close to artificial general intelligence (AGI) – aka the next step in developing a fully conscious AI or “superintelligence” – that can abstract concepts from limited experience and transfer knowledge from one place to another.
However, natural language processing and computer vision – the two main applications for ANI – are developing phenomenally quickly, and the latter, which is based on pattern recognition, is crucial for diagnostics in healthcare. Algorithms are trained to recognise various patterns seen in medical images and used to help doctors diagnose specific conditions in their patients, such as DNA mutations in tumours, heart disease, and skin cancer. This methodology does have limitations, however, as the medical evidence that the algorithms are trained to recognise tends to originate in highly developed regions and to reflect the subjective assumptions (or biases) of the team that assembled it. Furthermore, the forecasting and predictive elements of these algorithms are anchored in previous cases, and may therefore be useless in new cases of treatment resistance or drug side effects. Finally, the majority of AI research conducted so far has used training datasets collected from medical facilities, with doctors evaluating the algorithm's analysis of the same dataset, usually without any attempt to reproduce real clinical conditions.
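The generalisation problem described above can be made concrete with a toy experiment. The sketch below is purely illustrative – the data is synthetic, and the two "cohorts" and their features are invented for the example: a classifier is trained on image-derived features from one population, then evaluated both on more data from that population and on a second population whose feature distribution is shifted (as might happen with a different scanner, demographic, or region). Accuracy drops on the shifted cohort, which is exactly why models trained on data from highly developed regions can mislead elsewhere.

```python
# Toy demonstration of distribution shift: a model that works well on
# the population it was trained on can degrade on a shifted population.
# All data here is synthetic; nothing is a real diagnostic model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cohort(n, shift):
    """Two classes ('healthy'/'diseased') as 2-D Gaussian blobs of
    image-derived features; `shift` moves the whole cohort, mimicking
    a different scanner or population."""
    healthy = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    diseased = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))
    X = np.vstack([healthy, diseased])
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = make_cohort(500, shift=0.0)       # "source" hospital
X_same, y_same = make_cohort(500, shift=0.0)         # same population
X_shifted, y_shifted = make_cohort(500, shift=1.5)   # different population

clf = LogisticRegression().fit(X_train, y_train)
acc_same = clf.score(X_same, y_same)
acc_shifted = clf.score(X_shifted, y_shifted)
print(f"same-population accuracy:    {acc_same:.2f}")
print(f"shifted-population accuracy: {acc_shifted:.2f}")
```

The model's decision boundary is fitted to the source cohort, so when the whole feature distribution moves, many healthy cases land on the "diseased" side of that boundary – the statistical analogue of an algorithm confidently misreading scans from a population it never saw in training.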