🏥 AI in the NHS Newsletter #12

Issue #12 | 16-23 August 2025

"Compliance is your only moat (in UK and EU. US is weapons free)"

Executive Summary

This week witnessed fierce debates about automated discharge summaries at Chelsea & Westminster, deep concerns about funding for AI governance in primary care, and revelations about the true costs of the Federated Data Platform. The community examined troubling patterns in NHS tech adoption whilst celebrating breakthrough personal AI tools and discussing the implications of OpenAI's GPT-5 performance claims.

Major Topic Sections

1. The Automated Discharge Summary Controversy 🏥

The week's most heated discussion centred on Chelsea & Westminster Hospital's implementation of AI-assisted discharge summaries (DSUM), revealing deep divisions about safety, governance, and the reality of NHS tech deployment.

A regulatory expert who had conducted due diligence on the system revealed concerning details about the implementation, noting that whilst Palantir insisted their solution wasn't a medical device, regulatory teams at NHS England disagreed. The tension between leadership wanting to side with suppliers and regulatory teams raising concerns exemplifies a recurring pattern in NHS tech adoption.

One GP shared the stark reality of current practice: "every single patient I ever saw for follow up after a hospital visit had no discharge information of any kind unless they turned up AT LEAST a fortnight later." This context frames the debate - even a flawed automated system might improve on the current situation where critical handover information simply doesn't exist.

A hospital doctor's confession highlighted the technological chaos: "To run a clinic I have to have 4 apps running and still look in paper notes for the GP letter 🤦🏽‍♂️"

The discussion raised fundamental questions about liability: junior doctors (FY1s) who sign off AI-generated summaries carry the legal responsibility, despite potentially not knowing the full details of the admission. As one clinical safety expert noted, "Any of those would be attributable to the (presumably junior) doctor signing it off."

2. The Funding Crisis for AI Governance 💰

A critical post from a digital health consultant highlighted an emerging crisis: whilst the 10-Year Plan pushes aggressively for AI adoption, support for compliance and governance has been systematically cut. "Big cuts to the orgs that previously supported compliance at scale. End result is all the work shifts left, back into shoulders of front-line."

The implications are stark - practices expected to implement AI safely without resources, training, or support. This prompted one frustrated clinician to observe: "They expect us to do this for free. I've politely pulled 2 fingers to that approach."

The discussion revealed how economic pressures distort implementation: "An extra 2 patients a month would cover the cost of the AVT" - but this assumes practices can simply see more patients, ignoring the reality of fixed funding models and stretched resources.

3. Building vs Buying: The Open Source Debate 🛠️

Multiple threads explored whether the NHS should develop its own AI capabilities or rely on commercial providers. A technical expert who'd spent five figures training their own medical LLM shared sobering insights: "the use cases where it truly is worth it are probably minimal and making money back via API sales is very difficult because the frontier labs are so cheap."

This sparked discussion about economic sovereignty, with one participant noting: "NHS should have been supporting innovators who are training opensource models, not only to keep IP but to support local talent. Literally most developed countries are doing this except UK."

The conversation also revealed an existential threat to UK-based AVT companies: EHR vendors moving into their space could shut down the entire startup ecosystem, particularly now that Epic is launching its own AI scribe capabilities.

Statistics & Engagement Patterns 📊

Activity Metrics

  • Total messages analysed: 287

  • Peak activity period: 21st August (morning discussions about discharge summaries)

  • Busiest day: 23rd August (52 messages)

  • Quietest period: Weekend evenings

Top 5 Contributors (by message volume)

  1. Clinical innovation lead (32 messages) - Driving discussions on governance and compliance

  2. Technical architect (28 messages) - Offering insights on model development and implementation

  3. GP practice partner (24 messages) - Sharing frontline perspectives on tech adoption

  4. Hospital consultant (21 messages) - Highlighting secondary care challenges

  5. Digital transformation specialist (19 messages) - Bridging technical and clinical worlds

Hottest Debate Topics (by engagement)

  1. Automated discharge summaries - 47 messages across 3 days

  2. Funding for AI governance - 31 messages

  3. Personal LLM development - 28 messages

  4. GPT-5 performance claims - 22 messages

  5. NHS economic growth through tech - 18 messages

Discussion Quality Metrics

  • Evidence-based claims: 42% (papers cited, personal experience shared)

  • Opinion/speculation: 35%

  • Questions/knowledge seeking: 23%

  • Cross-expertise engagement: High (clinical, technical, regulatory voices all represented)

  • Constructive debate measure: 8/10 (disagreements remained professional)

Lighter Moments & Group Dynamics 😄

The week wasn't all serious debate. A GP's attempt to use voice-to-text whilst walking the dog produced the memorable: "Good to turn away call back. Tomorow is my only positive read" - proving that even AI can't handle multitasking with a Labrador.

One member's father contributed a perfectly timed cartoon about AI replacing doctors and lawyers simultaneously, prompting the response: "Great so AI can diagnose me and sue me at the same time."

The community's dark humour emerged when discussing calorie requirements, with one member noting their "basal metabolic rate is 1500, and I'm fairly confident smashing away on a mechanical keyboard will account for a further 500 cal. 😂"

South Park's AI-focused episode announcement generated excitement, with speculation about whether it could top the previous week's content.

Quote Wall 💬

"Compliance is your only moat (in UK and EU. US is weapons free)" - Digital health consultant on competitive advantage

"I don't practice medicine with a gun to my head for anyone" - GP on patient charter requirements

"Benefits realisation is the most fun fiction during a procurement process" - Procurement specialist

"AI and tech can't fix a thing if the foundations, processes and culture mean it will fail at first sight" - Change management expert

"We are the standard we accept" - Clinical innovation lead on professional responsibility

"Until a number of prominent blocker Consultants are made unemployed and unemployable, this will not change" - Anonymous hospital administrator

"Problem is they expect us to do this for free" - Primary care clinician on unfunded mandates

"NHS business cases are often shiny brochures or puff-pieces" - Former civil servant

Journal Watch 📎

Academic Papers & Key Studies

📎 Nature Medicine - LLM Fine-tuning Vulnerabilities https://www.nature.com/articles/s41591-024-03445-1 A concerning paper revealing how small amounts of adversarial data can corrupt medical AI models. Critical reading for anyone involved in model customisation, highlighting that even well-intentioned fine-tuning can introduce dangerous biases. Directly relevant to discussions about NHS-specific model development.

📎 Google Personal Health LLM Nature paper via LinkedIn post Google's new conversational AI for sleep and fitness tracking. The group questioned the PR approach given community scepticism, with particular concerns about data privacy and the shift towards consumer health applications.

📎 JAMA Network - Clinical AI Implementation https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2837483 Pertinent to the DSUM discussion at Chelsea & Westminster, examining real-world outcomes of AI implementation in clinical settings. Provides evidence base for governance concerns raised.

📎 NEJM Catalyst - AI in Healthcare Delivery https://catalyst.nejm.org/doi/full/10.1056/CAT.23.0283 Comprehensive analysis of AI integration challenges in healthcare systems, supporting arguments about the need for proper funding and support structures.

Industry Articles & News

📎 HSJ - FDP True Costs Exceed £1bn https://www.hsj.co.uk/technology-and-innovation/exclusive-full-cost-of-federated-data-platform-to-exceed-1bn/7039871.article Explosive revelation that the Federated Data Platform's costs have tripled from the initial £330m estimate. The community noted the irony of seeking AI-driven efficiency whilst demonstrating such poor project management.

📎 Digital Health - NHS AI Safety Gap https://www.digitalhealth.net/2025/08/we-need-to-act-fast-to-close-the-nhs-ai-safety-gap/ Timely piece supporting group discussions about governance funding gaps and the risks of rapid deployment without proper safety infrastructure.

📎 Guardian - UK AI Strategy Critique https://www.theguardian.com/commentisfree/2025/aug/18/the-guardian-view-on-britains-ai-strategy-the-risk-is-that-it-is-dependency-dressed-up-in-digital-hype Critical analysis of UK's dependence on foreign AI models, supporting community arguments about need for domestic capability development.

📎 Medscape - UK Clinicians Want Clearer AI Guidance https://www.medscape.com/viewarticle/uk-clinicians-want-clearer-ai-guidance-and-oversight-2025a1000lz4 Survey data revealing clinician concerns about AI implementation, validating many of the governance issues raised in group discussions.

Technical Resources & Guidelines

📎 GSTT Quality Management System (Open Source) https://github.com/GSTT-CSC/QMS-Template Guy's and St Thomas' open-sourced their QMS template - invaluable resource for teams building compliant AI systems. Community praised this as genuine knowledge sharing.

📎 Anthropic Model Welfare Research https://www.anthropic.com/research/exploring-model-welfare https://www.anthropic.com/research/end-subset-conversations Fascinating exploration of AI consciousness and models' ability to end distressing conversations. Sparked philosophical discussions about model rights and treatment.

📎 IatroX Knowledge Centre Launch https://www.iatrox.com/knowledge-centre Community member's comprehensive repository of 2500+ UK/European clinical guidelines mapped to conditions. Free resource with no commercial interest, demonstrating community spirit of knowledge sharing.

📎 OneAdvanced Acquires INPS Vision https://www.oneadvanced.com/resources/oneadvanced-acquires-inps-vision-erp/ Major consolidation in the GP systems market, with implications for AI integration and competition dynamics discussed at length by the group.

Policy Documents & Reports

📎 NHS Patient Charter Rules https://www.england.nhs.uk/publication/you-and-your-general-practice/ https://www.pulsetoday.co.uk/news/2025-26-contract/nhse-sets-out-finalised-patient-charter-rules-that-practices-must-abide-by/ New requirements for GP practices generating frustration about unfunded mandates and lack of practical support tools.

📎 NHS FDP Data Protection Impact Assessment https://www.england.nhs.uk/publication/nhs-federated-data-platform-data-protection-impact-assessment-national-data-integration-tenant-minimum-viable-product/ Dense but critical reading on data governance for the FDP, relevant to discussions about Palantir's expanding role.

📎 Chelsea & Westminster AI-DSUM Information https://cw.is/ai-assisted-discharge-summary-ai-assisted-dsum/ Public information about the controversial automated discharge summary system, though community noted lack of transparency about supplier and governance arrangements.

Looking Ahead 🔮

Unresolved Questions:

  • Will NHS England address the governance funding gap before widespread AI deployment?

  • How will the Epic/AVT ecosystem evolve as EHRs develop native capabilities?

  • Can automated discharge summaries achieve safety standards without proper system integration?

Emerging Themes:

  • Growing tension between innovation pace and regulatory requirements

  • Shift towards personal AI tools and local implementations

  • Increasing scrutiny of large contract values and benefits realisation

Upcoming Events:

  • THE FIX Healthtech Festival - 18th September at Harwell Science Campus

  • Various members attending health tech events throughout September

  • South Park AI episode airing this week (community viewing party suggested!)

Group Personality Snapshot 🎭

This week showcased the community's unique ability to blend righteous anger about systemic failures with constructive problem-solving. The group's strength lies in its diversity - from clinicians sharing war stories about paper notes in 2025 to developers offering technical solutions, from regulatory experts raising red flags to innovators pushing boundaries.

What makes this community special is its refusal to accept the status quo whilst maintaining brutal honesty about the challenges. Members don't just complain - they build solutions, share resources freely, and support each other through the maze of NHS digital transformation.

The balance of gallows humour ("AI can diagnose me and sue me at the same time") with genuine passion for improvement creates a space where hard truths can be spoken and real progress can be made.

As one member perfectly summarised: "We are the standard we accept." This community consistently sets that standard higher, even when the system seems determined to lower it.

Next week's newsletter will return (assuming the editor isn't entirely replaced by Claude). Keep fighting the good fight, and remember - in the NHS, paranoia isn't a disorder, it's a survival strategy.

Newsletter compiled from 287 messages between midday 16th August and midday 23rd August 2025. Edited for clarity, anonymised for privacy, seasoned with just enough cynicism to be palatable.
