Spotlight on artificial intelligence and diabetic retinopathy

One recent study reported on the development and validation of an artificial intelligence algorithm for detecting diabetic retinopathy, while another tested such technology in primary care.


Two recent studies from Australia looked at how well artificial intelligence systems diagnose diabetic retinopathy.

The first study, published online by Diabetes Care on Oct. 1, described the development and validation of an artificial intelligence–based, deep learning algorithm for the detection of vision-threatening referable diabetic retinopathy (defined as at least preproliferative diabetic retinopathy or diabetic macular edema). The study included separate training and validation sets of retinal photographs, 71,043 in total, of which 12,329 had been judged by clinicians to show referable diabetic retinopathy. In the internal validation set, the algorithm's area under the curve (AUC) was 0.989, the sensitivity was 97.0%, and the specificity was 91.4%. Testing against an independent, multiethnic (Malay, Caucasian Australian, and Indigenous Australian) data set yielded results of 0.955, 92.5%, and 98.5%, respectively. Most false positives (85.6%) were due to misclassification of mild or moderate diabetic retinopathy, and undetected intraretinal microvascular abnormalities caused 77.3% of false negatives.

The study authors concluded that the algorithm could be used with high accuracy to detect cases that should be referred to ophthalmology. “Thus it offers great potential as an efficient, low-cost solution for [diabetic retinopathy] screening,” they wrote. This study differed from previous similar analyses in its use of less strict, more real-world criteria for referral and a multiethnic population. The algorithm automatically classified the quality and location of the images and could potentially be trained to better recognize intraretinal microvascular abnormalities, the authors said. Additional research is needed to determine how such software should be incorporated into practice, for example, through telemedicine or by non–eye-trained professionals, they said.

The other study, published online by JAMA Network Open on Sept. 28, was a trial of artificial intelligence–based grading of retinal images in primary care practice. It included 193 patients with diabetes seen at a primary care practice with four physicians in Western Australia from Dec. 1, 2016, through May 31, 2017. The artificial intelligence system judged 17 of the patients as having diabetic retinopathy severe enough to require referral. Two were found to have true disease, and the other 15 were false positives. The resulting specificity was 92% (95% CI, 87% to 96%), and the positive predictive value was 12% (95% CI, 8% to 18%). Based on these results, the system appears to be effective at ruling out diabetic retinopathy and has potential to improve the efficiency of screening, according to the study authors. “Roughly 92% of all patients were immediately told at their primary care practice they had no [diabetic retinopathy] and therefore no referral was needed,” they noted. However, the high number of false positives, many of which resulted from inadequate image quality and sheen reflections, is likely to be an issue in practice, the authors said, though one that may be mitigated by further training of the artificial intelligence system.
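The reported specificity and positive predictive value follow directly from the trial's counts. As a minimal sketch, assuming all 176 patients the system did not flag were true negatives (consistent with the reported figures; the study's raw confusion matrix is not reproduced here):

```python
# Reproducing the trial's reported metrics from its counts
# (assumption: none of the 176 unflagged patients had referable disease).
patients = 193
flagged = 17                     # patients the AI system referred
true_pos = 2                     # flagged patients with confirmed disease
false_pos = flagged - true_pos   # 15 false positives
true_neg = patients - flagged    # 176, under the assumption above

specificity = true_neg / (true_neg + false_pos)  # TN / (TN + FP)
ppv = true_pos / (true_pos + false_pos)          # TP / (TP + FP)

print(f"specificity ≈ {specificity:.0%}")  # ≈ 92%
print(f"PPV ≈ {ppv:.0%}")                  # ≈ 12%
```

The low positive predictive value despite high specificity reflects the low prevalence of referable disease in this primary care sample: even a small false-positive rate produces more false alarms than true cases.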

An accompanying commentary noted that in April the FDA approved an artificial intelligence device that screens for diabetic retinopathy, the first device approved to provide a screening decision for any disease without requiring interpretation by a clinician. The primary care study demonstrates the potential of such systems, but the generalizability of its results is uncertain, and such tools should be evaluated in controlled clinical trials, the commentary said.