1 -- 8 November 2025

AI in the NHS Weekly Newsletter - Issue #22

Executive Summary

Week 22 marked a pivotal moment in the group's evolution as regulatory compliance took centre stage. A significant study on NHS organisations' adherence to DCB0129 and DCB0160 standards sparked deep reflection on safety culture, whilst a Friday evening confrontation over an uncertified medical device led to the group's first member removal -- a decision that tested the community's commitment to patient safety over innovation enthusiasm. Beyond the drama, substantive discussions ranged from EPR implementation disasters and AI agents' economic promises to the technical complexities of patient consent for AI tools. The week demonstrated that this community is willing to enforce its standards, even when it means difficult conversations and uncomfortable decisions.

Major Discussions

The Compliance Study: Revealing the Safety Gaps

Tuesday 4 November brought a landmark moment when a digital health specialist shared research on NHS organisations' compliance with DCB0129 and DCB0160 clinical safety standards. The study, focused primarily on acute trusts with ICBs serving as a proxy for primary care, revealed concerning gaps in awareness and implementation of mandatory safety frameworks.

The findings sparked immediate discussion about jurisdictional complexities. A clinical safety specialist highlighted a critical oversight: "NI/Scotland/Wales don't have a statutory requirement for Clinical Safety Officers / DCB0160 / DCB0129 in organisations. There's a tidal wave of products on the horizon. If we don't have clinicians who are trained in the safe deployment of AI and/or understand it we're in a really dangerous place."

This observation prompted a clinical safety officer to articulate the dual purpose of high regulatory standards: "One of my two main reasons for high regulatory standards is to create a bar that keeps out those who will not meet it, doors closed in their face. The other reason is the safety bar itself for patient care." The comment captured the tension between innovation enablement and patient protection that would define the week's debates.

A primary care digital lead praised the research whilst noting practical challenges: "Really interesting and a great study. Thank you. It is very much Trust focused and ICBs are probably the closest proxy to primary care. Although practices / PCNs probably vary so much in how much they understand the compliance themselves as well. Bottom line - there needs to be much more awareness of clinical safety and the standards rather than just picking any tool and running with it."

By Wednesday, the discussion had evolved to question self-regulation's adequacy. A clinical safety expert posed a fundamental challenge: "At what point do we accept self regulation is never appropriate and start to enforce this stuff independently and properly, with teeth?" The question lingered throughout the week, finding practical application in Friday's enforcement action.

The Medome Controversy: When Innovation Meets Regulation

The week's defining moment arrived Friday evening when tensions over an AI diagnostic tool called Medome escalated into the group's first member removal. What began as product promotion evolved into a masterclass in regulatory requirements and the consequences of non-compliance.

The situation developed gradually. A US-based medical technology entrepreneur had been discussing Medome across multiple weeks, describing it as a diagnostic decision support tool potentially classified as "at least Class 2a" whilst acknowledging it was "not yet registered with MHRA." On Friday afternoon, he announced the tool's imminent release to waitlisted users.

By early evening, the group moderator raised formal concerns to the entrepreneur: "Please be aware that deploying a software product that qualifies as a medical device onto the UK market is against the law. I understand that you are seeking certification as per your previous responses, which indicates you believe it to be a device, and your online marketing materials also make claims about diagnosis several times."

The entrepreneur's response proved decisive: "Please don't take this the wrong way, but I've been perfectly clear in previous responses and see no reason to further indulge your comments or questions. Thank you for your understanding."

When a system analyst and group admin pressed for clarification about UK certification status, the entrepreneur responded: "My apologies. I thought it was a discussion group." The admin then removed him from the group, explaining: "For anyone new to the group we have previously asked to not advertise, encourage use of or otherwise recruit people to use specifically non compliant medical devices in this group."

The moderator outlined three concerns driving the action: "(1) The entrepreneur was unaware of the risks he was taking (2) people may have used the product and themselves be exposed to risk (3) his actions were, potentially, against the law."

The aftermath revealed complexities. An academic clinician clarified that a trial at Imperial College involving Medome across 11 practices was "still ongoing" but actually "awaiting ethics approval" -- a significant distinction from previous characterisations. This revelation prompted a healthcare transformation specialist to note: "We've gone from live multi-centre trial and imminent regs to... 'Pending ideation'."

The group's response was mixed but ultimately supportive. A clinical safety officer reflected the next morning: "This group is AI in the NHS. It's both right and proper that people here challenge anyone who wants to deploy into it. I'd see it as a sign of not good faith that someone wouldn't take such advice given when wanting to get it into the NHS." The moderator responded: "Thank you - believe me, I debated this internally and with the admins as it progressed."

A recently qualified GP acknowledged the difficult balance: "Benefit of doubt since i dont think anyone here has actually seen the product or the defined intended use. Could have all been pre-launch marketing for all we can objectively determine. Re: attitude, posture etc...thats another story i guess...and the admin(s) have given sentence."

The moderator later framed the incident as educational: "For those watching, this is a very useful demonstration of the complexities of device regulation and how easy it is to potentially misstep." He concluded with aspiration: "This group can hopefully model best practice interaction between vendor, payor, and user. If we can achieve that, then innovation can genuinely flourish."

Postscript: A CSU digital lead reported that "medome now blocked by our csu," suggesting the regulatory concerns had immediate practical implications.

Medicus EPR: The F12 Debate and System Migration Realities

Monday's discussions centred on Medicus, a new EPR system being adopted by some practices. A general practitioner asked about macro capabilities (the "F12" function beloved by EMIS users) and protocol systems. The query prompted thoughtful responses about feature equivalence versus functional equivalence.

An EPR system representative explained: "We don't have F12 because we address those requirements in lots of other ways. It's not a like for like copy of EMIS!" This response highlighted the challenge of migrating users accustomed to specific workflows.

A healthcare transformation specialist made a crucial point about communication: "'We dont have F12' (whatever that means to GP-land) is less important than articulating what the function required is and how a new service meets (or improves) the service functionality. Otherwise upgrading infrastructure might as well be swapping a Mini for a Caddilac and say 'but this one has no pedals'."

The discussion revealed tensions inherent in healthcare innovation: balancing familiar workflows with improved functionality, managing user expectations during transitions, and communicating change effectively to diverse audiences.

AI Agents and Economic Promises: The £75m Question

Tuesday morning brought a Digital Health News article claiming "AI agents in GP practices could save NHS £75m annually." The report, produced by One Advanced, prompted immediate scepticism.

A clinical safety officer dissected the economics: "Proper management consultant speak there. The only way the 'NHS' could save the money is if they pay practices less and the practices decide whether to sack staff members or not. Also, is that a net 'saving' after the cost of the tech? Unlikely." He continued: "If it's used to safely provide more appts then that's great but it will cost more for the tools. To be net cost neutral means people lose jobs."

A health economics specialist echoed concerns: "That's often the key thing missing from health economic assessments. Saying it will save hospital beds only increases capacity - you only save cash if you close wards and staff as you say."

A healthcare administrator raised fundamental questions: "I'm still scratching my head and wondering why we need 'to process documents, and add codes, file etc' when we are on the cusp of rolling our new EPRs in hospital (where they also need to code diagnosis and handover care). If integrating between systems is that difficult, do we need a single patient record?"

The discussion exposed the gulf between vendor promises and operational realities, highlighting how "savings" often means "capacity increases" unless paired with actual resource reduction -- a politically and practically fraught proposition.

EPR Implementation Disasters: When Systems Fail Spectacularly

Tuesday evening brought sobering news of an EPR implementation that led to a critical incident. An innovation-focused GP shared the story with concern.

A system analyst shared experiences: "We had the same out our local with cerner. Initial switch on was a disaster." An early-adopting GP added darkly: "They'll probably be promoted in this case. And let loose on another unsuspecting part of the NHS..."

A clinical safety officer articulated private sector consequences: "In any private sector company, the CIO would be unemployed after this and the entire management chain would feel the consequences." He continued: "It's not acceptable and should never be seen as such. In my experience, 90% of it comes down to contracting. In every major project I did, the supplier knew exactly how much it would cost them if it didn't work on time. Highest I've seen is £250k/hr."

The discussion highlighted the accountability gap between public and private sectors, raising questions about incentive structures and risk management in major NHS technology projects.

The Welsh Language Mystery: When AI Goes Rogue

Wednesday morning brought lighter confusion when a GP educator reported: "Anyone else's Plaud generating notes in Welsh?" The AI transcription tool had inexplicably switched languages for multiple users.

The group moderator offered Welsh linguistic humour: "Microwave = 'popty ping' Plaud = 'transcripty ting'." An early-adopting GP shared frustration: "Yes! Done it a few times. Had to run through chat GPT to convert. Why Welsh though???"

The incident, whilst amusing, highlighted the unpredictability of AI tools and the importance of robust language detection systems. It also demonstrated how quickly the group can pivot from regulatory gravitas to troubleshooting technical quirks.

BMJ Future Health: Automation, Care, and Safety

Thursday and Friday featured discussions around the BMJ Future Health conference, where the moderator and a clinical AI implementation lead presented on practical AI deployment. The moderator sought group input on automation tools making real differences in daily work, particularly document management, summarisation, and coding systems beyond the well-covered audio-visual transcription tools.

The conversation revealed interesting tensions around automation priorities, with the moderator noting that administrative automation tools might be more impactful than the heavily discussed clinical scribes, yet receive far less attention.

Patient Consent for AI: The MRI Analogy

Saturday morning brought a sophisticated debate about consent requirements for AI tools. A GP and AI tool developer shared an article from their DPO arguing against simplistic consent: "Putting it bluntly: it is not acceptable to spring a one-line disclosure at the start of the consultation -- 'I'm using AI, is that ok?'"

The moderator reported that he and a clinical AI implementation lead had covered this topic at BMJ Future Health: "The audience didnt fall definitively either way, and Haris made a good argument for when you might not ask for consent." He cited MPS Foundation guidance on disclosure as "a matter of well-informed discretion."

A clinical AI implementation lead had made a compelling analogy: "MRI -- a technology that, at its core, uses such complex physics and computational tech that even experts would struggle to adequately convey." The moderator added: "The weak link, IMHO, is the understanding of the underlying tech by the clinicians themselves. Are we confident that those using AVT right now would be able to explain what it is and how it works in a way that the patient would be able to comprehend, consider, and communicate their preferences?"

One participant raised a historical precedent: "Do we consent patients for use of the EHR? I guess we are at the 'implied consent' stage, but I clearly remember consenting patients when I made the switch from paper to digital back in 2000."

The MPS Foundation recommendation stated: "Disclosure should be a matter of well-informed discretion... Given that the clinician is responsible for patient care, and that disagreement with an AI tool could end up worrying the patient, it should be at the clinician's discretion, depending on context, whether to disclose to the patient that their decision has been informed by an AI tool."

Green NHS and AI's Carbon Footprint

Friday evening brought discussion of NHS net zero commitments in the context of AI deployment. An innovation-focused GP shared a Telegraph article about the NHS's environmental agenda, asking: "What will happen to the Green NHS with Co-Pilot and AVT usage?"

A system analyst quipped: "Every time we do a clinical note we have to burn a penguin." The exchange highlighted the tension between technological advancement and environmental commitments -- a topic likely to gain prominence as AI adoption scales.

Lighter Moments

The Illuminaunties Strike Again

Saturday featured a delightful continuation of a previous week's joke when a clinical safety officer shared: "My wife pointed out that in India it's the illuminaunties who come and get you." The playful linguistic twist on conspiracy theories demonstrated the group's capacity for cross-cultural wordplay.

Popcorn at the Ready

As Friday's regulatory confrontation intensified, the moderator quipped: "Well, I hope no-one spilled their popcorn." A recently qualified GP admitted: "very close call here i must admit."

Later, a healthcare professional reported: "So literally just finished the traitors final and then read this... not sure which one was better tbh!" Another added: "I pop out to watch fireworks and come back here to bigger fireworks".

The Penguin Sacrifice

The discussion about NHS carbon footprint prompted a system analyst to observe: "Every time we do a clinical note we have to burn a penguin." An innovation-focused GP continued: "Most definitely, spray it with 2.74% chlorine and grill it medium rare. It will taste like Salt Bae."

The absurdist humour captured the group's ability to find levity in serious policy discussions.

Corporate Wardrobe Malfunctions

When discussing the moderator's conference appearance, a medical professional teased: "He's been interviewing for new jobs! Leather jacket doesn't cut it in those corporate situations."

Prompting Fail

A healthcare transformation specialist called out the moderator for an insufficiently specific question about AI applications: "Question wasnt specific enough... I expected better of you. Prompting fail."

Quote Wall

"At what point do we accept self regulation is never appropriate and start to enforce this stuff independently and properly, with teeth?" -- Clinical Safety Expert, questioning self-regulation adequacy

"One of my two main reasons for high regulatory standards is to create a bar that keeps out those who will not meet it, doors closed in their face. The other reason is the safety bar itself for patient care." -- Clinical Safety Officer, on the dual purpose of regulation

"Please don't take this the wrong way, but I've been perfectly clear in previous responses and see no reason to further indulge your comments or questions." -- US-Based Medical Technology Entrepreneur, shortly before removal

"This group is AI in the NHS. It's both right and proper that people here challenge anyone who wants to deploy into it." -- Clinical Safety Officer, reflecting on the removal decision

"For those watching, this is a very useful demonstration of the complexities of device regulation and how easy it is to potentially misstep." -- Digital Health and Clinical AI Specialist, framing the incident as educational

"The weak link, IMHO, is the understanding of the underlying tech by the clinicians themselves." -- Digital Health and Clinical AI Specialist, on patient consent for AI

"Every time we do a clinical note we have to burn a penguin." -- System Analyst, on NHS carbon footprint and AI usage

"We've gone from live multi-centre trial and imminent regs to... 'Pending ideation'." -- Healthcare Transformation Specialist, on shifting trial timelines

Journal Watch

Academic Papers and Key Studies

Is NHS Ready for AI? Survey of Healthcare Professionals

Industry and Policy Documents

NHS DCB0129 and DCB0160 Compliance Study

AI Agents in GP Practices Could Save NHS £75m Annually

MPS Foundation White Paper: AI in Clinical Practice

NHS Learning Hub: Generative AI Course Launch

Strategic Commissioning Framework

ICBs Told to Go Beyond Traditional Providers

Layered Approach to AI Consent in General Practice

Technical Resources and Guidelines

Patient Safety Framework - Digital Health and Care Wales

TOON: Token-Oriented Object Notation for LLMs

OpenAI Usage Policy Updates

Industry Analysis and Commentary

OpenAI's Request for Government Bailout

Mind-Boggling Valuations of AI Companies

Inside Salesforce's AgentForce Struggles

NHS Productivity Slump Costs Economy £20bn Annually

Inside the NHS's Insane Net Zero Push

Data Sovereignty and Cloud Computing

Your Data, Their Rules: Growing Risks of EU Data in US Cloud

Amazon Sues to Stop Perplexity From Using AI Tool

Conference and Educational Opportunities

NHS Digital Academy Cohort 5 Applications Open

Accurx Connects Manchester Event

Video Resources and Podcasts

The 8 Billion iPod Extrapolation

FT Unhedged: AI Company Cross-Holdings

RadioLab: Guts and Patient Zero Episodes

Emerging Technologies

Google Project Suncatcher: Space-Based Data Centres

COBOL Code Generation in IBM Watsonx

Looking Ahead

Unresolved Questions

Regulatory Enforcement: The Medome incident raises fundamental questions about group standards enforcement. Will this become a precedent for future interventions? How will the group balance openness to innovation with safety requirements?

Compliance Awareness: The DCB0129/DCB0160 study revealed concerning gaps. What mechanisms can improve awareness and implementation, particularly in devolved nations without statutory requirements?

Consent Complexity: The Saturday debate about AI consent opened more questions than it answered. How do we operationalise "well-informed discretion"? What training do clinicians need?

Economic Realities: The sceptical reception of the £75m savings claim highlights tension between vendor promises and operational truths. How do we develop more rigorous health economic assessments for AI tools?

Ongoing Conversations

EPR Implementation Accountability: The critical incident discussion exposed accountability gaps. Expect continued examination of contracting structures and risk allocation in major NHS IT projects.

Environmental Impact: The Green NHS discussion introduced a new dimension to AI deployment considerations. Carbon footprint of inference and training will likely gain prominence in procurement decisions.

Automation Priorities: The moderator's call for more focus on administrative automation beyond scribes may shift discussion priorities. Document management, coding, and workflow tools deserve deeper examination.

Community Evolution

The week marked the group's transition from pure discussion forum to standards-enforcing community. This evolution brings both opportunities (stronger safety culture, clearer expectations) and risks (potential chilling effects on innovation discussion, vendor wariness).

The group's willingness to enforce boundaries whilst maintaining educational framing ("useful demonstration of regulatory complexities") suggests maturation. Whether this balance holds as the community grows will define its character.

Group Personality Snapshot

Week 22 revealed a community at a crossroads, choosing safety culture over easy collegiality. The Friday removal was not a snap decision but the culmination of weeks of gentle challenge met with resistance. When a healthcare transformation specialist later observed "I felt it was WAY overdue," it captured the group's careful patience before action.