25 December 2025

AI in the NHS Christmas Special 2025

Compiled with 🤖 assistance

Brought to you by Curistica — Clinical Safety is for life, not just for Christmas

Executive Summary

Seven months. Fourteen thousand messages. Four hundred and sixty-two voices. One spectacular regulatory scandal. And precisely zero penguins were harmed in the making of these clinical notes (though we did discuss grilling them medium rare).

From the first tentative debates about "rehumanising" care in June to December's revelation that Santa himself runs an uncertified Class IIa medical device, the AI in the NHS WhatsApp group has evolved from discussion forum to something approaching a professional institution—complete with its first member removal, a defining community motto, and an encyclopaedic collection of transcription errors that would make any medical indemnity insurer weep.

The year's defining moments? The August declaration that "compliance is your only moat" set the philosophical tone. November's watershed removal of a member promoting an unregistered diagnostic tool proved we meant it. And throughout, the community maintained its signature blend of forensic regulatory rigour and gallows humour—because if you can't laugh about burning penguins for your clinical notes, what can you laugh about?

We've debated whether autonomous GPs will arrive by 2038 or 2040, watched enterprise Copilot rollouts crash and burn to the tune of £1.4 million, and collectively decided that the finger is mightier than the clanker. Welcome to the definitive audit of 2025.

The Official Naughty List

Dedicated to those who tried to bypass the Compliance Moat and fell in

The "Medome Memorial Award" for Regulatory Audacity

Date of Incident: 7 November 2025, 20:00

Classification: First substantive member removal in group history

The moment the community's ethos crystallised into action. A US-based physician founder had been sharing updates about an AI diagnostic tool for months. Enthusiasm was high—the founder's personal story was compelling, the technology promising. But when pressed repeatedly about MHRA registration and UK regulatory compliance ahead of a planned launch, answers were not forthcoming.

The sequence was damning: the product was self-described as "at least Class IIa" but "not yet registered with MHRA." Marketing materials explicitly referenced diagnostic capabilities. A soft launch was announced. And when group administrators asked directly for certification status, the silence spoke volumes.

At 20:00 on November 7th, the removal notification appeared. Within hours, CSUs had begun blocking the platform.

"There were three issues on my mind," explained a group moderator afterward. "First, the developer may have been unaware of the risks they were taking. Second, members might have used the product and been exposed to risk. Third, their actions were, potentially, against the law."

The aftermath saw the emergence of clear community standards: "Please do share your projects, but be upfront and clear about your regulatory compliance."

The Lesson: Innovation does not grant immunity from the law. The group proved it would enforce its values.

The "Ghost of Data Privacy" Award: OpenEvidence & The AI Slop Epidemic

Peak Discussion: August 2025

When OpenEvidence announced a $3.5 billion valuation on $50 million revenue, the community had questions. When it emerged the platform monetises user insights by selling them to pharmaceutical companies, those questions became concerns. As one contributor noted: "Nothing is free to use. When it's free—you're the product."

This sparked broader anxieties about LLM crawlers consuming professional communications, the creeping commodification of clinical expertise, and what one member termed "the false glorification" of AI platforms that appear benevolent but harvest data relentlessly.

Meanwhile, the AI Slop Epidemic reached fever pitch. The community's verdict on AI-generated email replies was brutal and near-unanimous. When asked about using AI to respond to messages, the consensus was crystalline: "Your writing will be your biggest moat in a world filled with AI Slop. Protect it."

The phrase became a rallying cry. The finger, it was decided, remains mightier than the clanker.

The "Productivity Theatre" Award: The ÂŁ1.4m Copilot Catastrophe

Date: 15 December 2025

A shared post achieved instant legendary status, capturing the absurdity of enterprise AI rollouts with surgical precision:

"Last quarter I rolled out Microsoft Copilot to 4,000 employees. $30 per seat per month. $1.4 million annually. I called it 'digital transformation.' The board loved that phrase. They approved it in eleven minutes. No one asked what it would actually do. Including me."

The confession continued: 47 people opened it, 12 used it more than once, one used it to summarise an email they could have read in 30 seconds. It took 45 seconds. Plus time to fix the hallucinations.

"Microsoft sent a case study team. They wanted to feature us as a success story. I told them we 'saved 40,000 hours.' I calculated that number by multiplying employees by a number I made up. They didn't verify it. They never do."

The piece struck a nerve because it captured the "magic unicorn" thinking that pervades NHS digital transformation: purchasing shiny tools without fixing 12-minute PC boot times or training staff. Investment as performance rather than purpose.
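For anyone who wants to check the maths, here is a back-of-the-envelope sketch using only the figures quoted in the post (4,000 seats, $30 per seat per month, 47 people who opened it, 12 who used it more than once); the cost-per-active-user framing is our own addition, not the author's.

    # Back-of-the-envelope maths using the figures quoted in the post
    seats = 4_000
    cost_per_seat_per_month = 30  # USD, as quoted
    annual_cost = seats * cost_per_seat_per_month * 12
    print(f"Annual cost: ${annual_cost:,}")  # $1,440,000, the quoted "$1.4 million"

    opened_it = 47            # people who opened Copilot at all
    used_more_than_once = 12  # people who came back for a second go
    print(f"Cost per person who opened it: ${annual_cost / opened_it:,.0f}")                        # ~$30,638
    print(f"Cost per person who used it twice or more: ${annual_cost / used_more_than_once:,.0f}")  # $120,000

However you slice it, the board's eleven-minute approval bought roughly £100,000 per genuinely active user.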

The "Carbon Conscience" Award: Burning Penguins for Clinical Notes

Date: 7 November 2025

Amid discussions of NHS net zero commitments and the environmental impact of AI, one contributor cut to the heart of the matter: "Every time we do a clinical note we have to burn a penguin."

The solution? "Spray it with 2.74% chlorine and grill it medium rare. It will taste like Salt Bae."

The gallows humour masked genuine concern: as AI scribes proliferate, their energy consumption accumulates. The NHS's net zero ambitions and its AI ambitions may yet prove uncomfortable bedfellows.

The Nice List

Tools and behaviours that actually made the meatsacks happier

The "Sanity Saver" Award: Plaud Note

Amid fierce debates about ambient voice technology, one device achieved near-universal approval. The Plaud Note—a pocket-sized transcription device—became the unlikely hero of appraisal season.

"Two-hour appraisal nightmare into 25-minute task," was the verdict. Users forgave its occasional lapses into Welsh ("Transcripty Ting," as it became known when the microwave is a "popty ping") because it delivered something tangible: time.

Not time for more patients. Time for pyjamas. Time for living. That's the happy path.

The "Listening Ear" Award: Accurx

In a market of deaf giants, one vendor distinguished itself by actually responding to community feedback. Accurx's "You Said, We Did" approach to product development earned genuine appreciation.

With 88 mentions across the period, the platform featured prominently in discussions about what good vendor engagement looks like. When clinicians speak and companies listen, everyone wins.

The "Truth Teller" Award: The Community Itself

Reddit was declared the only place for honest product reviews, per Cunningham's Law. But this community runs it close.

The willingness to ask uncomfortable questions—about certification, about data practices, about who really benefits from "productivity gains"—has created something valuable: a space where marketing claims meet clinical reality.

The price of admission is rigour. The reward is trust.

The "Compliance Champion" Recognition

With 164 mentions of DCB0129/0160 across seven months, the community proved clinical safety isn't just a checkbox—it's a culture.

"Compliance is a mindset and practice, not paperwork," was the formulation. The frequency of Yellow Card discussions, MHRA queries, and ISO 14971 debates demonstrated something important: this is a community that takes governance seriously, even when it's inconvenient.

Especially when it's inconvenient.

The "Magic Unicorn" Hall of Fame

Where AI tried its best, but failed spectacularly

The Transcription Hall of Shame

What Was Said → What AI Heard

Tramadol → Travelodge

Valley Wood Care Home → Bollywood Care Home

[Various clinical terms] → Welsh (unexpectedly, repeatedly)

"Scribe misquote of the day: Travelodge for Tramadol" arrived in December, instantly becoming group folklore. The Bollywood Care Home incident prompted the observation: "Much better."

The Autonomous GP Debate

The Question: Would you adopt a regulated AI tool that could work autonomously as a primary care clinician?

The Timeline (per Claude): Between 2038 and 2040 for full autonomy.

The Comfort: "At least 15 more Christmases of job security before the robots take over the performative human touch."

The debate that wouldn't die. Throughout the year, the community wrestled with autonomy, liability, and what happens when the AI makes a decision and the patient suffers harm.

"Dear CEO/Board, please sign here that you take personal liability if it all goes wrong because you removed the human," suggested one contributor. "That's what GP practices effectively do with unlimited liability."

The Teapot Test

"Hello ChatGPT, I have my face stuck in a teapot, how do I get it out without having to involve the fire brigade?"

The perfect encapsulation of AI's limitations. Some problems require human intervention. Some problems require the fire brigade. Knowing the difference is wisdom.

The Ghost of NHS Future: Predictions for 2026

Infrastructure Wagers

ICE and LIMS will still refuse to communicate. PC boot times will remain geological. The systems that "cannot fail" will continue to find innovative ways to fail. Somewhere, a printer queue will require CSU intervention.

Regulatory Forecast

More MHRA "airlock" sessions are coming. Class II device classification debates will intensify. DCB0129/0160 will remain the gold standard—or at least the only standard anyone actually enforces.

AI Outlook

Scribe market consolidation is inevitable. More enterprise Copilot disappointments await. The tension between "move fast" and "don't harm patients" will persist, with the community firmly in the latter camp.

Workforce Weather

The GP registrar job market bubble shows signs of bursting. Locum AI governance remains undefined. The liability question—who holds it when AI assists a decision?—will become more pressing with each passing month.

The Digital Detox Corner: Retro Computing Therapy

Because sometimes the only cure for AI fatigue is MS-DOS

With 59 mentions of floppy disks, Commodores, Spectrums, and BBC Micros, the community revealed its therapeutic preferences. When the weight of ambient voice technology governance becomes too heavy, there's always DOOM.

"Four CDs?! The luxury. I remember installing Windows 3.1 with SIX 1.44MB floppy disks back in the early 90s."

The nostalgia serves a purpose. Technology used to be tangible. If it broke, you blew on the cartridge. Now if it breaks, you file a Yellow Card report with the MHRA and wait for the airlock.

Someone even suggested seasonal AI scribe voices: "Need cheering up at Christmas? It's Santa taking your notes."

Quote Wall: The Year's Best Lines

"Compliance is your only moat (in UK and EU. US is weapons free)." — Digital Health & Clinical AI Specialist, August 2025

"Your writing will be your biggest moat in a world filled with AI Slop. Protect it." — Innovation-Focused GP, October 2025

"Better having to restrain wild horses than raise the dead." — Clinical Safety Expert, October 2025

"Every time we do a clinical note we have to burn a penguin." — Digital Transformation Lead, November 2025

"A good GP can tell a lot from a big sigh from a regular patient." — Clinical Safety Officer, June 2025

"Slow, inefficient meatsacks that refuse to move beyond versions more than once every geological Age." — System Analyst, September 2025

"Clinical safety is for life, not just for Christmas." — Group Moderator, July 2025

"Nothing is free to use. When it's free—you're the product." — Healthcare Technology Specialist, August 2025

Santa's Governance Framework (DCB0124-Dec-25)

A technical analysis of how Father Christmas manages his caseload

Following community speculation, Santa has admitted to deploying AI. His tool of choice: TOP PRESENT™ (Treat Option Picker – Present Response Emotional Signal Evaluation for Niceness of Treat).

The system analyses reactions to past gifts to boost joy and cut down on regifting. After months of training, it made some bold predictions:

• Adults who receive gift cards will be disappointed by receiving what they asked for

• Members of the NHS AI group would like safe and effective AI systems that improve patient care and doctor education

Impressed by its "accuracy," Santa rolled it out immediately.

The Audit Findings:

The elves have taken the rare step of going public. They note:

• Zero governance reviews were completed for the Naughty/Nice list

• Santa previously used Grok but abandoned it because "it said all views were nice"

• The algorithm is judging moral character without MHRA registration (Class IIa at minimum)

• Letters to Santa are now covered by Royal Mail's GDPR privacy policy—but the surveillance elves remain unaddressed

Risk Assessment:

• Initial Risk: HIGH (global deployment without clinical safety case)

• Residual Risk: MEDIUM (relies on magic rather than evidence)

• Control Measure: Letters from children under 13 now require parental consent
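For readers who prefer their elf grievances in structured form, here is a toy hazard-log entry loosely in the spirit of DCB0129, using only the findings listed above. The field names and risk wording are illustrative, not the official template, and Santa has not countersigned it.

    from dataclasses import dataclass, field

    @dataclass
    class HazardLogEntry:
        """Toy hazard-log entry, loosely modelled on a DCB0129-style safety case."""
        hazard: str
        cause: str
        effect: str
        initial_risk: str                               # rating before controls are applied
        controls: list[str] = field(default_factory=list)
        residual_risk: str = "UNRATED"                  # rating after controls are applied

    top_present_entry = HazardLogEntry(
        hazard="Naughty/Nice misclassification by TOP PRESENT(TM)",
        cause="Moral character judged by an unregistered algorithm (Class IIa at minimum)",
        effect="Wrong presents delivered; joy not boosted, regifting not reduced",
        initial_risk="HIGH (global deployment without a clinical safety case)",
        controls=["Letters from children under 13 require parental consent"],
        residual_risk="MEDIUM (relies on magic rather than evidence)",
    )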

Journal Watch: The Year's Essential Reading

Academic Papers & Key Studies

• AI-Enhanced Care, Diminished Connection — Healthcare Policy Analyst (Ghost.io)

• Human-AI Collectives Most Accurately Diagnose — Nature Digital Medicine

• Model Collapse and AI Pollution — The Register / ArXiv

• Dawn of a New Era of Primary Prevention — Leading Digital Medicine Academic (Substack)

Industry & News Coverage

• Sky News: Doctors Using Unapproved AI Software — June 2025

• EMIS and SystmOne Dominance a 'Barrier to Change' — Pulse

• OpenEvidence: The UpToDate Meets ChatGPT — Various

• Ten Mistakes Marred Firewall Upgrade at Australian Telco — Slashdot

Technical Resources

• BMJ Future Health Webinars on Prompt Engineering — BMJ

• MHRA AI Medical Device Classification Guidance — gov.uk

Looking Ahead: Unfinished Business for 2026

The Questions That Won't Go Away

Liability: When AI assists a clinical decision and harm results, who holds responsibility? The clinician who accepted the suggestion? The vendor who built the tool? The practice that deployed it? The question remains legally and ethically unresolved.

Consent: What does meaningful consent look like for audio recording and AI transcription? The "this call may be recorded for training purposes" message dates from the abacus era—and wasn't designed for model training.

Locums: How should practices manage AI scribe governance for sessional and locum GPs? One practice's solution: "No AVT unless we have the correct clinical safety case." But implementation remains patchy.

Anticipated Developments

The MHRA airlock sessions will yield classification guidance. More enterprise AI deployments will disappoint executives who approved them in eleven minutes. The scribe market will consolidate. And this community will continue to ask the questions vendors would rather avoid.

The Ongoing Debates

Can productivity gains be measured without gaming metrics? How do we preserve the "art" of medicine—the big sigh from a regular patient, the decades of pattern recognition—in an AI world? Will ICE ever talk to LIMS?

Some questions have no answers. We ask them anyway.

Group Personality Snapshot

The Demographics: 462 contributors across seven months. GPs, clinical safety officers, informaticians, practice managers, vendors, policy makers, educators, and the occasional bewildered bystander. Healthy sceptics and cautious optimists in roughly equal measure.