Healthcare IT News: How should AI be designed and regulated, and who should it serve?

by Mike Miliard, Healthcare IT News (October 17, 2019)

The U.S. Food and Drug Administration recently unveiled a new set of draft recommendations on clinical decision support software. In its guidance, the agency said it is taking a risk-based approach to categorizing these various CDS tools, many of them powered by artificial intelligence, which it hopes will provide an “appropriate regulatory framework that takes into account the realities of how technology advances play a crucial role in the efficient development of digital health technologies.”

Given the vast proliferation of AI and machine learning software across healthcare, and the speed at which it’s evolving, that certainly won’t be the last word from the FDA, or other regulatory agencies, on the subject.

A truly global framework

Indeed, said Robert Havasy, managing director of the Personal Connected Health Alliance, when he looks across the U.S. and around the world, he sees the beginnings of a “truly global framework emerging, with common principles among the U.S., Europe and other places” for the safe and effective deployment of AI in healthcare.

Havasy was speaking at the HIMSS Connected Health Conference during a roundtable discussion about developing approaches to AI regulation and design.

“We’re assessing risk with a global system,” said Havasy. “There are some common principles, one of which seems to be that the risk is presumed to be lower when there are competent individuals who can make their own decisions and understand how a system works.”

As Dr. Art Papier, a dermatologist at the University of Rochester and founder and CEO of the AI company VisualDx, explained: even if an AI algorithm says a mole is 99.9% benign, it’s getting removed if the patient says the mole has recently changed.

Healthcare is nowhere near the point where “these algorithms are reliable enough to trust” without skilled human intervention, he said.

An explainable process

At VisualDx, said Papier, “we are very process oriented. As we read the FDA guidance, we’re seeing that the FDA really wants to make sure that your process is explainable, that you’re running your tests and have the data to support the work.”

It’s critical, he said, for AI developers to be “explaining what you’re doing as best you can and surfacing that, so your users don’t have a sense that it’s just a big black box.”

Learn more about the rest of the roundtable discussion from Healthcare IT News here.

