AI and Healthcare: The Evolving Regulatory Environment

Last updated on December 22nd, 2023

No doubt, artificial intelligence (AI) has been one of the most talked-about topics of the year, especially with the release of GPT-4 in March.

Suddenly, it seemed everyone became aware of the very real possibility of AI replacing humans in a variety of professions, prompting a plethora of discussions about everything from ethics and global regulations to societal impacts and future industry disruption.

While the initial hype has evolved into an ongoing drumbeat of multi-faceted discussions, AI continues to advance, and in healthcare it has the potential to transform virtually every aspect of the industry. Now, governments worldwide are stepping up to address how AI will be monitored and regulated.

On December 8, European Union officials announced a provisional deal finalizing what will become the world’s first comprehensive laws regulating artificial intelligence. Called the AI Act, it seeks to regulate uses of AI rather than the technology itself. It also strives to protect democracy and uphold the rule of law and fundamental rights, while encouraging innovation and investment.

The Act’s rules work along a risk spectrum, with lighter rules for low-risk applications like content recommendations and stricter rules for high-risk applications, like medical devices. Violations could result in fines up to the equivalent of $38 million or 7% of a company’s global revenue.

The Act won’t take effect until two years after final approval, which is expected early next year. Still, many believe it will serve as a global framework for classifying risks, ensuring transparency, and penalizing non-compliance.

What about the U.S.? On October 30, President Biden issued an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Its purpose, as noted in Section 1, is as follows:

“Artificial intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.”

Interestingly, within the EO, an entire section (Section 8) is devoted to safe, responsible deployment and use of AI in healthcare, public health, and human services. Among other things, it includes key deadlines and deliverables (mostly driven by the Secretary of Health and Human Services):

  • Within 90 days of the EO, create an HHS AI Task Force. Within 365 days of creating the task force, develop a strategic plan including policies and frameworks, and possible regulations, on AI and AI-enabled technologies in the HHS sector, including research, discovery, drug and device safety, healthcare delivery, finance, and public health.
  • Within 180 days of the EO, develop an AI assurance policy to evaluate important aspects of AI-enabled healthcare tools’ performance, as well as infrastructure needed for pre-market and post-market oversight of algorithmic system performance against real-world data.
  • Within 180 days of the EO, consider actions to advance understanding of and compliance with Federal nondiscrimination laws related to AI by HHS providers receiving Federal financial assistance.
  • Within 365 days of the EO, establish an AI safety program with a common way to identify and capture clinical errors resulting from AI in healthcare settings. Create a central repository for incidents that cause harm, including through bias or discrimination. Analyze data and outcomes to create recommendations and best practices for avoiding harm, along with processes for disseminating them to stakeholders.
  • Within 365 days of the EO, develop a strategy to regulate the use of AI or AI-enabled tools in drug development processes.
  • Ongoing: create incentives under grantmaking authority to promote responsible AI development and use.

So, it looks like 2024 is going to be a landmark year for AI frameworks, potential regulations, and more. Stay tuned. As you consider what AI and related applications may mean to your organization, please remember RBT CPAs is here to provide accounting, audit, tax, and business advisory services. Interested in learning more? Give us a call today.

RBT CPAs is proud to say 100% of its work is prepared in America. Our company does not offshore work, so you always know who is handling your confidential financial data.