In modern times, artificial intelligence (AI) seems to be everywhere you look. It powers our ever-present digital assistants, helps recommend entertainment options, and has even begun to reshape the way organizations carry out their everyday operations. As AI moves further into healthcare, regulatory challenges await.
The fact is, there isn't a single major industry that isn't being changed by the rapid development of AI-powered technology. One industry, though, stands apart from the rest: healthcare.
The worldwide healthcare industry arguably has more to gain from advances in AI than any other. AI is already being put to use aiding diagnoses, monitoring patient health data for early indicators of infection, and managing medication doses and prescriptions.
It has even proven adept at predicting patient mortality. At the same time, however, the adoption of AI in healthcare carries unique risks not found elsewhere, because any missteps can cost lives.
That reality is rapidly creating tension between the many businesses seeking to build healthcare-focused AI solutions and the regulators tasked with making certain the industry always puts the safety of patients first.
As an overview of what's happening with AI and healthcare, here's a look at the ways healthcare AI solutions are pushing the regulatory envelope, and the challenges they create for regulators to solve.
Securing the Underlying Data
To start with, AI solutions don't work in a vacuum. They rely on complex infrastructures that gather data from several disparate providers. In the healthcare industry, that data can come from medical practices, hospitals, drug makers, insurers, and any number of other intermediaries.
The first challenge is designing medical data integrations that adhere to existing medical privacy laws like the Health Insurance Portability and Accountability Act (HIPAA) in the United States and the General Data Protection Regulation (GDPR) in the EU.
The problem, as it pertains to regulation, is that there's no one-size-fits-all standardized platform for handling medical data. Most of what's already in use or in development consists of custom database solutions that were never built to be interoperable.
That makes every link between such systems a potential privacy nightmare. Fixing that alone will take time, and there's no telling what the ultimate solution will be. Once you add AI to the mix, things get even more muddled.
For example, many of today's medical AI systems have used real-world patient data to learn how to perform their intended functions. That data is normally anonymized before it's used for machine learning, but studies have already confirmed that such data can often be re-associated with the people who generated it. That means a major data privacy concern remains even after current standards are applied, and regulators will have to come up with entirely new rules for, and oversight of, new medical data sharing platforms and how they handle sensitive information.
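To make the re-identification risk concrete, here is a minimal sketch (with entirely hypothetical names, dates, and records) of the classic quasi-identifier linkage attack: "anonymized" medical records that still carry a ZIP code, birth date, and sex can be joined against a public roster, such as a voter registration list, to recover identities.

```python
# Hypothetical illustration of quasi-identifier linkage against
# "anonymized" medical records. All data here is invented.

anonymized_records = [
    {"zip": "02139", "dob": "1964-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1971-03-12", "sex": "M", "diagnosis": "asthma"},
]

public_roster = [  # e.g., a public voter registration list
    {"name": "A. Smith", "zip": "02139", "dob": "1964-07-31", "sex": "F"},
    {"name": "B. Jones", "zip": "02139", "dob": "1971-03-12", "sex": "M"},
]

def reidentify(records, roster):
    """Link records to names when ZIP + birth date + sex match exactly one person."""
    quasi = lambda r: (r["zip"], r["dob"], r["sex"])
    index = {}
    for person in roster:
        index.setdefault(quasi(person), []).append(person["name"])
    matches = []
    for rec in records:
        names = index.get(quasi(rec), [])
        if len(names) == 1:  # a unique match defeats the anonymization
            matches.append((names[0], rec["diagnosis"]))
    return matches

print(reidentify(anonymized_records, public_roster))
```

Even though no record contains a name, each unique combination of quasi-identifiers links a diagnosis back to an individual, which is why stripping direct identifiers alone does not satisfy regulators' privacy concerns.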
Approving a Moving Target
Another regulatory challenge that current medical AI developments create is that they're distinctly different from all the devices, medications, and technologies that have come before.
That's because the latest AI solutions in the medical field are built to learn as they gain exposure to new patient data, honing their ability to make diagnoses, assist doctors, or suggest treatments.
That means the capabilities, safety, and efficacy of some of the newest medical AI solutions can't be assessed a single time for regulators to grant approvals.
Unlike medications and standard medical devices, applied AI in medicine is a moving target. Whereas a non-AI device can undergo thorough testing and gain approval, an AI's performance may differ the day after it finishes testing. What's more, there's no telling whether those performance differences will make it more effective or worse.
That's why regulators like the US Food and Drug Administration (FDA) have thus far only begun to approve locked-algorithm solutions like IDx Technologies' IDx-DR eye scanner.
When medical AI has the ability to keep learning, however, the existing approval processes no longer suffice. To tackle the issue, the FDA has already proposed a whole new regulatory framework for AI in medical applications. It would add a preapproval process that would allow manufacturers some leeway in what changes (or how much machine learning) would be permissible without re-approval. It would also require manufacturers to submit ongoing performance data to the agency so it can intervene if necessary. That, however, will demand a drastic increase in manpower at the FDA, and nobody's sure whether it will get one.
Dealing with Black Boxes
Just as in the broader world of AI development, regulators of healthcare AI solutions are going to have to grapple with the prevalence of black-box AI software. AI seems to present a double-edged sword in healthcare. Restrict developers from protecting their work too much, and innovation stops.
However, give developers free rein, and there won't be any way to know whether the approaches in use are what's best for the patients who will rely on the technology.
To solve that problem, regulators will have to strike a delicate balance that allows developers some means of protecting trade secrets while providing enough transparency to allow thorough vetting of healthcare algorithms. That's going to require regulators across a variety of agencies to bring high-level AI developers into the fold, as they'll be the only ones qualified to determine what the AI solutions are doing and why.
Those developers will also need a medical or research background to understand the medical aspects of the technology. That in itself is a problem, because few people can satisfy both requirements at present, and there's no existing program designed to produce such experts.
A World of Innovation Complications
Although AI certainly holds the power to revolutionize almost everything about the modern healthcare industry, the regulatory dilemmas identified here must be solved for that to happen in a safe and controlled manner.
Solving those dilemmas will require something of a parallel revolution within the regulatory bodies that oversee the industry. It's going to take new approaches, expanded oversight, and the development of a new generation of medical AI experts. Needless to say, paying for all the new regulation won't be a trivial matter, either.
For all of those reasons, it's easy to foresee that the required changes won't happen overnight. There's no roadmap for regulators or developers to follow, which means they'll need to blaze a trail together into healthcare's AI-powered future.
Blazing that trail means all sides must be mindful, and the need to get things right on the first try may prove to be the greatest limiting factor in AI's spread into the industry. That, of course, is how it should be.
After all, the consequences of regulatory failure would be dire and irreversible, and in healthcare, real human lives hang in the balance.