According to Chief Healthcare Executive, the challenges extend beyond technology as hospitals and health systems incorporate artificial intelligence (AI) more broadly. Providers must navigate evolving laws and regulations related to AI, says Kathleen Healy, an attorney with Bellingham, Washington-based Robinson+Cole who specializes in healthcare regulatory issues.

“One of the biggest challenges with AI is the proliferation of new laws and regulations,” Healy explains.

In a discussion with Chief Healthcare Executive, Healy outlines some legal complexities involving AI in healthcare and critical considerations for hospital leaders, including standards of care, liability, and patient consent.

Healy notes that hospitals increasingly use AI for business functions like reviewing and processing claims. AI is also making strides in patient care, notably in medical imaging. Speaking at the HIMSS Global Health Conference & Exhibition, Robert C. Garrett, CEO of Hackensack Meridian Health, highlighted AI’s potential to improve global health, particularly in enhancing access to care.

“In general, AI has the potential to build healthier communities on a scale and pace that were previously unimaginable,” he added. “We may detect disease earlier or develop drugs and vaccines faster and help clinical teams avoid burnout.”

However, hospital leaders must ensure AI enhances, rather than replaces, medical judgment while maintaining the standard of care.

“[Hospital management] need to be cautious about using AI as an enhancement and not a substitution for medical judgment, ensuring care meets the standard of care,” Healy emphasizes.

She adds that the standard of care may evolve due to AI, potentially leading to a “reasonable machine standard of care.”

Legal experts have noted that as AI becomes more prevalent, health systems might face higher standards in malpractice cases. Samuel Hodge, a law professor at Temple University, indicated that AI could elevate the duty of care to a national standard.

Determining liability when AI is involved in patient care is complex. Factors include the physician’s diligence, the AI tool’s performance, and potential biases.

The federal government also regulates AI in healthcare to address bias and discrimination. The U.S. Department of Health and Human Services recently adopted a rule requiring providers to mitigate AI-related discrimination risks.

Hospitals must also address patient consent with the increasing use of AI. Healy points out questions about consent when AI technologies record patient conversations. Health systems must consider obtaining consent, especially as AI becomes embedded in workflows.

Hospitals need to secure consent if AI is used to support a diagnosis in patient care. Consent requirements will likely evolve, including disclosures about AI use and associated risks.

Hospitals must also prepare for scenarios where patients refuse consent for AI use in diagnoses.

Surveys indicate Americans are cautious about AI in patient care. A KPMG survey revealed that while many are optimistic about AI for administrative tasks, only 33% believe AI will lead to better diagnoses.

Hospitals and health systems need to understand the new legal landscape regarding AI.

“They need to develop policies and procedures to comply with laws and regulations,” Healy advises.

Hospitals should inventory their AI tools and ensure staff are informed about relevant regulations. Regular monitoring for compliance is essential.

Healy suggests forming multi-disciplinary teams to address AI's legal and operational issues, with hospital CEOs identifying key individuals to oversee these concerns.

“Identifying issues and pointing people to monitor them is crucial,” Healy concludes.