Five hospitals in Perth’s South Metropolitan Health Service were advised in May to stop using ChatGPT to write medical records for patients. Photograph: Moyo Studio/Getty Images

AMA calls for stronger AI regulations after doctors use ChatGPT to write medical notes


Artificial intelligence protections should include ensuring that clinicians make final decisions, Australian Medical Association says

Australia’s peak medical association has called for strong rules and transparency around the use of artificial intelligence in the healthcare industry, after a warning to doctors in Perth-based hospitals not to write medical notes using ChatGPT.

The Australian Medical Association said in its submission to the federal government’s discussion paper on safe and responsible AI, seen by the Guardian, that Australia lags behind other comparable countries in regulating AI and noted that stronger rules are needed to protect patients and healthcare professionals, and to engender trust.

Five hospitals in Perth’s South Metropolitan Health Service were advised in May to stop using ChatGPT to write medical records for patients after it was discovered some staff had been using the large language model for the practice.

The ABC reported that in an email to staff, the service’s CEO Paul Forden said there was no assurance of patient confidentiality using such systems, and it must stop.

The AMA's submission said AI protections should include ensuring that clinicians make the final decisions, and that patients give informed consent for any treatment or diagnosis undertaken using AI.

The AMA also said patient data must be protected and appropriate ethical oversight is needed to ensure the system does not lead to greater health inequalities.

The proposed EU Artificial Intelligence Act – which would categorise the risks of different AI technologies and establish an oversight board – should be considered for Australia, the AMA said. Canada’s requirement for human intervention points for decision-making should also be considered, it said.

“Future regulation should ensure that clinical decisions that are influenced by AI are made with specified human intervention points during the decision-making process,” the submission states.

“The final decision must always be made by a human, and this decision must be a meaningful decision, not merely a tick box exercise.”

“The regulation should make clear that the ultimate decision on patient care should always be made by a human, usually a medical practitioner.”

The AMA president, Prof Steve Robson, said AI is a rapidly evolving field and that understanding of it varies from person to person.


“We need to address the AI regulation gap in Australia, but especially in healthcare where there is the potential for patient injury from system errors, systemic bias embedded in algorithms and increased risk to patient privacy,” he said in a statement.

Google’s chief health officer, Dr Karen DeSalvo, told Guardian Australia earlier this month that AI will ultimately improve health outcomes for patients, but stressed the importance of getting the systems right.

“We have a lot of things to work out to make sure the models are constrained appropriately, that they’re factual, consistent, and that they follow these ethical and equity approaches that we want to take – but I’m super excited about the potential.”

A Google research study published in Nature this month found Google’s own medical large language model generated answers on par with answers from clinicians 92.9% of the time when asked the most common medical questions searched online.
