8–15 November 2025

AI in the NHS Weekly Newsletter - Issue #23

Executive Summary

Newsletter #23 captures a week of intense debate spanning Microsoft Copilot's clinical safety implications, the open-source versus proprietary AI divide, and transformative changes in primary care systems. The group wrestled with fundamental questions about LLM crawling of published content, explored innovative uses of autonomous vehicles in healthcare transport, and engaged in spirited discussions about the future of GP practice through the lens of modern clinical systems. Against this backdrop of technological transformation, members shared research on AI safety failures, debated conflicts of interest in clinical AI development, and considered whether GPs face replacement or reinvention. The period closed with passionate exchanges about regulatory capture, open-source AI models, and the NHS's path toward digital sovereignty.

Newsletter #22 Launch & LLM Crawler Ethics

The period opened with the release of Newsletter #22, which immediately sparked meta-discussion about the implications of publishing group conversations online. A recently qualified GP raised a crucial concern: the newsletter repository is indexed and crawled by LLMs, meaning vendor opinions expressed in the group could surface in AI responses to queries like "is [product] any good?" This could create unfair negative perceptions if LLMs treat subjective group discussion as ground truth.

The digital health specialist acknowledged the issue and explored solutions including password protection, crawler opt-out directives, and content-level disclaimers. A technical analyst suggested that addressing the opinionated tone of AI-generated summaries at the content level might be more effective than technical barriers, recommending clear disclaimers to prevent LLMs from treating newsletter content as objective fact.
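The crawler opt-out option discussed here is normally implemented via robots.txt directives. A minimal sketch, using the publicly documented user-agent strings of the major AI crawlers (note that compliance is voluntary, so this limits rather than prevents training use):

```text
# robots.txt — ask known AI training crawlers to skip the newsletter archive
User-agent: GPTBot            # OpenAI's training crawler
Disallow: /

User-agent: ClaudeBot         # Anthropic's crawler
Disallow: /

User-agent: Google-Extended   # opts out of Gemini training, not Search indexing
Disallow: /

User-agent: CCBot             # Common Crawl, widely used to build training corpora
Disallow: /
```

Because this blocks only cooperating crawlers, the content-level disclaimers suggested in the discussion remain the more robust safeguard.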

A systems analyst contributed additional SEO expertise, recommending FAQ/Q&A sections for common queries, proper heading hierarchy to guide web crawlers, and removal of infinite scroll patterns. This evolved into broader discussion about responsible publishing of professional discourse in the age of ubiquitous AI training data—a tension between transparency and the risk of decontextualisation.
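The FAQ/Q&A suggestion maps onto schema.org's FAQPage structured data, which crawlers parse explicitly. An illustrative fragment (the question and answer text here are made-up examples, not taken from the newsletter):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Are the opinions in this newsletter objective product reviews?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "No. They summarise informal, subjective discussion among group members and should not be treated as product evaluations."
    }
  }]
}
</script>
```

A fragment like this doubles as a machine-readable disclaimer, addressing both the SEO point and the ground-truth concern in one structure.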

The conversation demonstrated the group's maturity in considering second-order effects of their digital footprint, balancing the value of open knowledge sharing against potential misuse by AI systems that lack nuance.

Microsoft Copilot in the NHS: Clinical Safety Concerns

Tuesday morning saw the digital health specialist raise substantial concerns about Microsoft Copilot's deployment across NHS organisations, questioning whether sufficient thought had been given to clinical risk. The core anxiety centred on Copilot's deep embedding into clinical workflows and the difficulty of implementing adequate controls to mitigate risks when users are trusting it to handle clinical information.

"Such is the extent of this, and the deep embedding of the tool, it's hard to think of controls that could adequately mitigate this risk," noted the specialist, referencing concerns first voiced in March 2023 when Microsoft announced the assistant. Despite deeper technical understanding gained over 2.5 years, the fundamental worries about agentic AI in clinical contexts remained.

The discussion connected to broader themes about AI tool compliance and the role of clinical safety officers. An innovation-focused GP argued that blocking non-compliant tools isn't the answer, advocating instead for raising awareness so people simply don't use them. This sparked debate about whether education or technical controls should form the primary defence against unsafe AI adoption.

The Copilot conversation touched a nerve because it represents the collision of consumer technology with clinical safety requirements—familiar tools that feel safe but may carry hidden risks in healthcare contexts. The group's scepticism reflects hard-won experience that convenience and clinical appropriateness don't always align.

Open Source vs. Proprietary AI: The Sovereignty Debate

The final weekend ignited fierce debate about AI business models, regulatory capture, and digital sovereignty for the NHS. An innovation-focused GP shared concerns about a new Anthropic research paper on AI safety, interpreting it as part of a coordinated effort by major AI companies to discredit open-source models as security threats whilst positioning themselves as trustworthy alternatives.

"OpenAI & Claude both want regulations, but regulations that suit their business model," the GP argued. "They are heavily lobbying governments to ban open-source models as Chinese backdoor threats. Claude's paper, although using Claude, gives an impression: we are good guys, we will keep all secure. You can trust us, not the open-source community. A total nonsense!!!"

The GP made a forceful case for NHS adoption of open-source models, arguing it could save billions whilst ensuring complete UK government ownership and vertical integration. The comparison to Sam Altman's reported £2 billion OpenAI contract proposal with the UK government underscored the financial stakes.

The digital health specialist offered a more nuanced view, affirming the crucial importance of open-source alternatives whilst acknowledging that open-source must deliver equivalent security and safety. Drawing parallels to Linux's success in enterprise environments, the specialist suggested open-source AI can meet healthcare requirements if properly implemented.

A clinical safety expert added market analysis, suggesting that OpenAI, Nvidia, and others are "betting the shop that someone will intervene and give them insider treatment just so that they don't start a run on the market when they run out of cash"—a structural critique of the AI industry's financial sustainability.

The debate crystallised a fundamental tension: desire for NHS digital sovereignty versus concerns about security, safety, and the capacity to implement open-source solutions effectively. As one member noted tersely: "Sadly the NHS has failed to do this time and again."

Autonomous Vehicles in NHS Transport: Waymo Possibilities

Wednesday brought innovative thinking about autonomous vehicle technology in NHS transport services. A recently qualified GP floated the idea of using Waymo's autonomous taxis for certain hospital transport needs, noting that Waymo's UK expansion could offer alternatives to traditional ambulance services for non-emergency cases.

The proposal resonated with practical experience. The digital health specialist recalled using taxis for out-of-hours home-visit requests where there was no clinical justification for home attendance but a clear need for face-to-face assessment—a practice that worked until it was "heartily abused." Emergency department experiences echoed this: some patients receive funded taxis on discharge, particularly elderly individuals who shouldn't use public transport but don't require ambulance-level medical support.

Cost concerns emerged immediately. One member noted that Waymo is often more expensive than Uber, questioning why more expensive innovation would be adopted. However, a recently qualified GP countered that during off-peak times in San Francisco, Waymo is actually often cheaper—and price isn't everything. Another member framed it through an NHS lens: "An expensive transport solution may be better than no transport > missed appointments > worsened outcomes."

The discussion revealed creative problem-solving around system constraints, but also highlighted the pragmatic barriers to innovation adoption in resource-constrained healthcare. As one member pointed out bluntly: "There isn't any reason NOT to use cheaper and available transport options. Not as if taxis are a scarce resource—heck, we have plenty of unemployed drivers who would be delighted for the work."

The Great Clinical System Migration: Medicus/TORTUS in Action

Thursday and Friday saw extensive discussion about GP clinical system migration, triggered by a practice member considering the leap to Medicus with TORTUS integration. The conversation provided rare insight into the realities of abandoning legacy systems for modern alternatives.

Two GPs using the Medicus/TORTUS combination offered detailed implementation experiences. The upheaval question dominated: how disruptive is migration really? "Not much upheaval actually," reported one GP. "Adequate preparation and training (not a big deal and, with an intuitive, fresh modern system, staff pick it up very quickly), data migration sorted by team of experts, done and dusted in a few days."

An innovation-focused GP acknowledged bias ("tagging [the vendor representative] is likely to give you a biased—pro Medicus answer") whilst recommending practice visits to see the system in action. The GP urged: "Do not expect replica of EMIS or SystmOne. The workflows are different, cleaner, more intuitive. Be willing to learn a new way of doing things."

TORTUS's integration into Medicus revealed sophisticated functionality. The vendor representative detailed how consultations save to Medicus with "proper individual notes/bullet points under each proper heading" with observations coded using SNOMED CT, medications properly documented, and seamless EMIS integration via PFS IM1 messaging.
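The structure described—individual bullet points under proper headings, with observations carrying SNOMED CT codes—can be sketched as a data shape. This is an illustrative sketch only, not the TORTUS/Medicus schema (which is not public in this discussion); the heading names and note text are hypothetical, and the SNOMED code shown is the standard concept for "Fever (finding)":

```python
# Hypothetical structure for a scribe-generated consultation note with
# SNOMED CT-coded observations. Illustrative only — not a vendor schema.

consultation_note = {
    "headings": {
        "History": ["3-day history of fever and cough"],
        "Examination": ["Chest clear, no respiratory distress"],
        "Plan": ["Safety-netting advice given"],
    },
    "coded_observations": [
        # SNOMED CT concept "Fever (finding)"
        {"code": "386661006", "display": "Fever",
         "system": "http://snomed.info/sct"},
    ],
}

def coded_concepts(note):
    """Return the SNOMED CT codes attached to a note."""
    return [obs["code"] for obs in note["coded_observations"]]

print(coded_concepts(consultation_note))  # → ['386661006']
```

The point of the shape is that each bullet lives under an explicit heading and each clinical finding carries a machine-readable code, rather than everything landing in one free-text blob.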

One GP cut through to the philosophical core: "I didn't want the status quo to remain for us, which is much worse. Short term pain for long-term gain. People always underestimate the risk of not doing something and letting them remain as is. Status quo is far riskier."

The funding question proved critical. Practice payment for clinical systems varies dramatically by region—some get central funding, others pay directly, creating inequitable access to innovation. "In NI, AccuRx is purchased directly by practices," noted one member. "There is no tender or procurement process... practices are spending their own money."

The discussion showcased the growing divide between early-adopter practices embracing modern systems and those locked in legacy platforms by funding constraints, risk aversion, or the overwhelming inertia of established workflows.

Clinical AI Conflicts of Interest & Co-Development Tensions

Thursday morning brought nuanced discussion about conflicts of interest in clinical AI development, sparked by broader healthcare industry news. An industry specialist highlighted a fundamental tension: "We are constantly told as manufacturers/innovators to co-develop with doctors and clinical staff, they should be involved, engaged, etc. But then when they are, people say 'look, they have a conflict of interest!'"

An innovation-focused GP offered a realistic assessment: "The simple answer is you can't. That is why ABPI rules are so strict. The risk can be reduced/mitigated etc but can't be ZERO. We all need to accept some inherent compromise, trust, and responsibility that comes with co-development. The alternative is slow, expensive, and often produces tools clinicians won't actually use."

This sparked reflection on the different models for clinical engagement. A recently qualified GP proposed a clinical pool service similar to Prolific, where large numbers of clinicians could serve as "guinea pigs" to test software and provide structured feedback. The digital health specialist had actually proposed exactly this during Babylon days, noting that whilst it never took off, the scope remained for agency-style clinical SME pools for research and synthetic data generation.

The rate-limiting step? "Expectation that it should be reimbursed at the same (or higher) level than what they might get doing locum work," the specialist explained. "While I don't want to undervalue their time, running test cases is a very different risk and labour profile to running around in an unfamiliar emergency department at 3am."

An industry specialist suggested an alternative: companies pay NHS employers directly, with programmed activities allowing consultant-level involvement with backfill funding. However, one member warned: "Problem with any 'group' is eventually you revert to the Road to Abilene and individual innovative flair is lost. Everything then becomes part of the 'safe' consensus."

The discussion revealed the impossible trilemma: meaningful clinical engagement requires compensation and time, but payment creates conflicts of interest, and group structures can suppress innovation.

Journal Watch

Academic Papers & Key Studies

Anthropic's AI espionage research: "Disrupting AI Espionage"

NIHR award notice for health AI research: NIHR207533

Oxford Remote By Default study: "Remote By Default 2: Digital Tools Adoption Research"

Industry Articles & News

IBM Watson for Oncology: The Full Story

United Healthcare AI Triage Tool Bias

EPIC Sepsis Tool Issues

Peter Thiel interview: "Capitalism Isn't Working for Young People"

Personal health records: "Big Tech Killed the Personal Health Record, Now OpenAI ChatGPT"

Yann LeCun's departure: "Meta's Star AI Scientist Yann LeCun Plans to Leave for Own Startup"

EMIS acquisition: "Blackstone Among Suitors for GP System EMIS"

Digital pathways organ transplant error: "NHS Organ Transplant System Error"

Technical Resources & Guidance

AI credibility buzzwords guide: "AI Credibility Buzzwords to Watch For"

Waymo UK autonomous vehicle service: "Waymo in the UK"

SimChat AI: medical training simulation platform

Policy & Professional Resources

Scottish Deep End Project resources: "The Scottish Deep End Project"

Deep End Wales digital technology session: save-the-date event

Looking Ahead

Unresolved Debates:

Microsoft Copilot safety: Clinical risk assessment frameworks for deeply embedded AI assistants remain unclear.

Open-source vs. proprietary AI: The NHS's path toward digital sovereignty versus vendor dependency requires strategic decisions.

Clinical system migration barriers: Funding inequity across UK nations creates unequal access to modern platforms.

Emerging Themes:

LLM training data ethics: Publishing professional discourse online creates second-order effects through AI training.

Conflicts of interest in co-development: The impossible trilemma of meaningful clinical engagement, appropriate compensation, and avoiding perceived bias needs fresh thinking.

Autonomous technology in NHS logistics: Beyond immediate transport applications, what other operational inefficiencies could autonomous systems address?

Events & Opportunities:

Deep End Wales Digital & Tech Session: 3rd December face-to-face event on technology in deprived communities

Continued MHRA AI Airlock cohorts: Ongoing opportunities for medical device developers across all UK nations

EMIS acquisition watch: Private equity interest in NHS clinical systems will have sector-wide implications

Group Personality Snapshot

Newsletter #23's discussions revealed a community that refuses simple categorisation. Members demonstrated technical sophistication in parsing AI safety research whilst crafting science fiction horror stories for Sunday morning entertainment. They debated regulatory capture and open-source model economics with genuine expertise, then diverted seamlessly into whether quantum computers could run Crysis 4.