FDA Approves Non-Supervised Diagnostic AI

We’ve talked a lot about AI in healthcare recently, with a big focus on AI as a diagnostic tool that processes scans and images to flag potential issues. Until now, all of this technology has been built with the understanding that the AI’s results will be reviewed and evaluated by a trained, specialized medical professional. That is, the doctor is still the final decision-maker, and the AI is her assistant.

All that changed this week, when the FDA announced its approval of the first AI tool designed to operate and issue a diagnosis completely on its own, without any supervision from a specialized doctor. The software, called IDx-DR, detects diabetic retinopathy, a form of eye disease, by analyzing photos of the retina that a nurse or doctor uploads to the program. After checking that the image is of sufficient quality, the program evaluates the photo and returns a diagnosis.

On one level, this is great: any nurse or doctor can upload a photo, and patients don’t have to wait for a specialist to review the AI’s results before getting a diagnosis. In theory, that makes medical care more accessible and faster. But the flip side is a tricky ethical question… Who is responsible when the diagnosis is wrong?

Why It’s Hot: Wait, are robots actually coming for our jobs after all? And who do we blame when they screw it up?


Learn More: The Verge | FDA release