#ClinicSpeak: the rise of the bots is inevitable

Let's automate the neurological examination ASAP and give you the power to challenge your neurologist. #ClinicSpeak #ResearchSpeak #MSBlog

Every now and again there is a research paper that has nothing to do with MS, but whose findings have broader implications so important for the field that I feel obliged to discuss them and put them in context for pwMS.

Google are the current masters of artificial intelligence (AI) and they are heavily into healthcare. Google taught one of their AI bots (robots) how to read retinal photographs so that it could diagnose diabetic retinopathy. Surprise, surprise: once the bot had learnt, it was able to detect referable diabetic retinopathy with remarkable accuracy, in fact better than most doctors. For the geeks reading this post, the sensitivity and specificity were both well over 90% and the areas under the receiver operating characteristic (ROC) curves were over 0.99. These numbers are quite staggering.
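For those who want to see what these metrics actually are, here is a minimal sketch in Python of how sensitivity, specificity and ROC AUC are computed; the labels and scores are invented for illustration and are not the paper's data.

```python
# Minimal sketch of how sensitivity, specificity and ROC AUC are computed.
# The labels and scores below are invented for illustration; they are not
# the paper's data.
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 0, 1]                            # 1 = referable retinopathy
y_score = [0.10, 0.60, 0.80, 0.90, 0.20, 0.45, 0.40, 0.95]   # model probabilities
y_pred = [1 if s >= 0.5 else 0 for s in y_score]             # classify at a 0.5 threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)          # true-positive rate
specificity = tn / (tn + fp)          # true-negative rate
auc = roc_auc_score(y_true, y_score)  # area under the ROC curve

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, AUC={auc:.2f}")
```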

What does this mean for medicine? If you have diabetes you could simply have a picture taken of your retina using relatively cheap technology, which could be made available to you in pharmacies or supermarkets, have the images uploaded to the cloud automatically and have Google assess whether or not you need to see an ophthalmologist. This technology is simply going to revolutionise the way people with diabetes are monitored and it is going to free up doctor time for more useful things.
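To make that workflow concrete, here is a purely hypothetical sketch of what such a screening client could look like in code; the endpoint URL, JSON field and referral threshold are all invented for illustration and are not a real Google (or any other) service.

```python
# Purely hypothetical sketch of the screening workflow described above.
# The endpoint URL, JSON fields and referral threshold are all invented;
# this is not a real Google (or any other) service.
import requests

API_URL = "https://example.com/retina/assess"  # hypothetical cloud endpoint

def assess_fundus_photo(path: str) -> bool:
    """Upload a retinal photograph; return True if referral is advised."""
    with open(path, "rb") as image:
        response = requests.post(API_URL, files={"image": image}, timeout=30)
    response.raise_for_status()
    result = response.json()  # e.g. {"referable_dr_probability": 0.93}
    return result["referable_dr_probability"] >= 0.5  # illustrative cut-off

if __name__ == "__main__":
    if assess_fundus_photo("fundus.jpg"):
        print("Please see an ophthalmologist.")
    else:
        print("No referable retinopathy detected; continue routine screening.")
```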

The implications of AI for the medical profession are profound, and it is coming to neurology and MS. I envisage us using the same technology to assess the retina and optic nerve in people with MS. This will allow you to know if your optic nerve is involved and will allow you to complete your online, or web-based, EDSS more accurately. We are currently working on a simple web app that will allow you to assess your own visual function. Maybe we should simply ask Google to rent us one of their AI bots and hand the process over to Google? I suspect that very soon I am going to need a new job; maybe I should go and work for Google and make this happen more quickly?


I often lament the automation of medicine, the demise of the traditional doctor-patient relationship and the dehumanising effect technology is having on our profession, but when technology does this to a field maybe it is time to stop fighting and join them on the other side. We are clearly living in a brave new world!

Gulshan et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17216

Importance: Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation.

Objective: To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs.

Design and Setting:  A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency.

Exposure: Deep learning–trained algorithm.

Main Outcomes and Measures:  The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity.

Results: The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0.990 (95% CI, 0.986-0.995) for Messidor-2. Using the first operating cut point with high specificity, for EyePACS-1, the sensitivity was 90.3% (95% CI, 87.5%-92.7%) and the specificity was 98.1% (95% CI, 97.8%-98.5%). For Messidor-2, the sensitivity was 87.0% (95% CI, 81.1%-91.0%) and the specificity was 98.5% (95% CI, 97.7%-99.1%). Using a second operating point with high sensitivity in the development set, for EyePACS-1 the sensitivity was 97.5% and specificity was 93.4% and for Messidor-2 the sensitivity was 96.1% and specificity was 93.9%.

Conclusions and Relevance:  In this evaluation of retinal fundus photographs from adults with diabetes, an algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy. Further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and to determine whether use of the algorithm could lead to improved care and outcomes compared with current ophthalmologic assessment.
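The two operating points mentioned in the abstract are simply two different thresholds on the algorithm's output score. Here is a rough sketch of how such points could be chosen from a development set, one favouring specificity and one favouring sensitivity; the scores are simulated and the paper's actual selection procedure may differ in detail.

```python
# Sketch of choosing two operating points from a development set: one
# favouring specificity, one favouring sensitivity. Scores are simulated;
# the paper's actual selection procedure may differ in detail.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Simulated development set: diseased eyes (label 1) score higher on average.
y_true = np.concatenate([np.zeros(900), np.ones(100)])
y_score = np.concatenate([rng.beta(2, 5, 900), rng.beta(5, 2, 100)])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
specificity = 1 - fpr

# Operating point 1: maximise sensitivity subject to specificity >= 98%.
mask = specificity >= 0.98
t_high_spec = thresholds[mask][np.argmax(tpr[mask])]

# Operating point 2: maximise specificity subject to sensitivity >= 97%.
mask = tpr >= 0.97
t_high_sens = thresholds[mask][np.argmax(specificity[mask])]

print(f"high-specificity threshold: {t_high_spec:.3f}")
print(f"high-sensitivity threshold: {t_high_sens:.3f}")
```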

CoI: We run and maintain a website called ClinicSpeak that aims to automate neurological self-assessment for pwMS. The objective is for pwMS to monitor and manage their own disease. I am also a Googophile (somebody who loves Google's technologies).
