- A patient in London was mistakenly invited to a diabetic screening after an AI-generated medical record falsely claimed he had diabetes and suspected heart disease. The summaries, created by Anima Health’s AI tool Annie, also included fabricated details like a fake hospital address. NHS officials have described the incident as a one-off human error, but the organization is already facing scrutiny over how AI tools are used and regulated.
AI use in healthcare has the potential to save time, money, and lives. But when technology that is known to occasionally fabricate information is introduced into patient care, it also raises serious risks.
One London-based patient recently experienced just how serious those risks can be after receiving a letter inviting him to a diabetic eye screening—a standard annual check-up for people with diabetes in the UK. The problem: He had never been diagnosed with diabetes or shown any signs of the condition.
After opening the appointment letter late one evening, the patient, a healthy man in his mid-20s, told Fortune he had briefly worried that he had been unknowingly diagnosed with the condition before concluding the letter must simply be an administrative error. The next day, at a pre-scheduled routine blood test, a nurse questioned the diagnosis and, when the patient confirmed he wasn’t diabetic, the pair reviewed his medical history.
“He showed me the notes on the system, and they were AI-generated summaries. It was at that point I realized something weird was going on,” the patient, who asked for anonymity to discuss private health information, told Fortune.
After requesting and reviewing his medical records in full, the patient noticed the entry that had introduced the diabetes diagnosis was listed as a summary that had been “generated by Annie AI.” The record appeared around the same time he had attended the hospital for a severe case of tonsillitis. However, the record in question made no mention of tonsillitis. Instead, it said he had presented with chest pain and shortness of breath, attributed to a “likely angina due to coronary artery disease.” In reality, he had none of those symptoms.
The records, which were reviewed by Fortune, also noted the patient had been diagnosed with Type 2 diabetes late last year and was currently on a series of medications, and included dosage and administration details for the drugs. However, none of these details were accurate, according to the patient and several other medical records reviewed by Fortune.
‘Health Hospital’ in ‘Health City’
Even stranger, the record listed the address of the medical document it appeared to be processing as a fictitious “Health Hospital” located at “456 Care Road” in “Health City.” The address also included an invented postcode.
A representative for the NHS, Dr. Matthew Noble, told Fortune the GP practice responsible employs a “limited use of supervised AI” and that the error was a “one-off case of human error.” He said a medical summariser had initially spotted the mistake in the patient’s record but had been distracted and “inadvertently saved the original version rather than the updated version [they] had been working on.”
However, the fictitious AI-generated record appears to have had downstream consequences, with the patient’s invitation to attend a diabetic eye screening appointment presumably based on the erroneous summary.
While most AI tools used in healthcare operate under strict human oversight, another NHS worker told Fortune that the leap from the original symptoms (tonsillitis) to what was returned (likely angina due to coronary artery disease) raised alarm bells.
“These human error mistakes are fairly inevitable if you have an AI system producing completely inaccurate summaries,” the NHS employee said. “Many elderly or less literate patients may not even know there was an issue.”
The company behind the technology, Anima Health, did not respond to Fortune’s questions about the issue. However, Dr. Noble said, “Anima is an NHS-approved document management system that assists practice staff in processing incoming documents and actioning any necessary tasks.”
“No documents are ever processed by AI, Anima only suggests codes and a summary to a human reviewer in order to improve safety and efficiency. Each and every document requires review by a human before being actioned and filed,” he added.
AI’s uneasy rollout in the health sector
The incident is somewhat emblematic of the growing pains around AI’s rollout in healthcare. As hospitals and GP practices race to adopt automation tools that promise to ease workloads and reduce costs, they’re also grappling with the challenge of integrating still-maturing technology into high-stakes environments.
The pressure to innovate and potentially save lives with the technology is high, but so is the need for rigorous oversight, especially as tools once seen as “assistive” begin influencing real patient care.
The company behind the tech, Anima Health, promises healthcare professionals can “save hours per day through automation.” The company offers services including automatically generating “the patient communications, clinical notes, admin requests, and paperwork that doctors deal with daily.”
Anima’s AI tool, Annie, is registered with the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) as a Class I medical device. This means it is regarded as low-risk, the same category as examination lights or bandages, and is intended to assist clinicians rather than automate medical decisions.
AI tools in this category require their outputs to be reviewed by a clinician before action is taken or entries are made in the patient record. In the case of the misdiagnosed patient, however, the practice appears to have failed to correct the factual errors before they were added to his records.
The incident comes amid increased scrutiny within the UK’s health service of the use and categorization of AI technology. Last month, bosses for the health service warned GPs and hospitals that some current uses of AI software could breach data protection rules and put patients at risk.
In an email first reported by Sky News and confirmed by Fortune, NHS England warned that unapproved AI software that breached minimum standards could put patients at risk of harm. The letter specifically addressed the use of Ambient Voice Technology, or “AVT,” by some doctors.
The main issue with AI transcribing or summarizing information is that it manipulates the original text, Brendan Delaney, professor of Medical Informatics and Decision Making at Imperial College London and a part-time general practitioner, told Fortune.
“Rather than just simply passively recording, it gives it a medical device purpose,” Delaney said. The recent guidance issued by the NHS, however, has meant that some companies and practices are playing regulatory catch-up.
“Most of the devices that were in common use now have a Class One [categorization],” Delaney said. “I know at least one, but probably many others are now scrambling to try and start their Class 2a, because they ought to have that.”
Whether a device should be defined as a Class 2a medical device essentially depends on its intended purpose and the level of clinical risk. Under U.K. medical device rules, if the tool’s output is relied upon to inform care decisions, it could require reclassification as a Class 2a medical device, a category subject to stricter regulatory controls.
Anima Health, along with other UK-based health tech companies, is currently pursuing Class 2a registration.
The U.K.’s AI for health push
The U.K. government is embracing the possibilities of AI in healthcare, hoping it can boost the country’s strained national health system.
In a recent “10-Year Health Plan,” the British government said it aims to make the NHS the most AI-enabled care system in the world, using the tech to reduce admin burden, support preventive care, and empower patients through technology.
But rolling out this technology in a way that meets current rules within the organization is complex. Even the U.K.’s health minister appeared to suggest earlier this year that some doctors may be pushing the limits when it comes to integrating AI technology in patient care.
“I’ve heard anecdotally down the pub, genuinely down the pub, that some clinicians are getting ahead of the game and are already using ambient AI to kind of record notes and things, even where their practice or their trust haven’t yet caught up with them,” Wes Streeting said, in comments reported by Sky News.
“Now, lots of issues there—not encouraging it—but it does tell me that contrary to this, ‘Oh, people don’t want to change, staff are very happy and they are really resistant to change’, it’s the opposite. People are crying out for this stuff,” he added.
AI certainly has enormous potential to dramatically improve speed, accuracy, and access to care, especially in areas like diagnostics, medical recordkeeping, and reaching patients in under-resourced or remote settings. However, walking the line between that promise and the technology’s risks is difficult in a sector like healthcare, which handles sensitive data and where errors can cause significant harm.
Reflecting on his experience, the patient told Fortune: “In general, I think we should be using AI tools to support the NHS. It has massive potential to save money and time. However, LLMs are still really experimental, so they should be used with stringent oversight. I would hate this to be used as an excuse to not pursue innovation but instead should be used to highlight where caution and oversight are needed.”