7–14 March 2026

AI in the NHS Weekly Newsletter - Issue #40

Executive Summary

A week that began with concerns about data centre security and AI infrastructure vulnerability evolved into one of the group's most ambitious conversations yet: a serious proposal for a sovereign NHS AI model, backed by potential £1m funding and near-unanimous enthusiasm. Between these bookends, the community debated whether AI erodes or evolves clinical skills, explored a wave of new tools from open-source voice recorders to ePortfolio platforms, and wrestled with the ethics of immigration data collection mandated through GP systems. The breadth of discussion — from hardware costs to the philosophy of medical education — captured the group at its most engaged and intellectually diverse.

Activity at a Glance

This period generated 587 messages across 8 days, with peak activity on Thursday 12 March (155 messages) driven by the sovereign AI model poll and infrastructure deep-dive. Tuesday 10 March was the second busiest day (137 messages) as an EMIS update requiring immigration data collection sparked ethical debate. Weekday traffic dominated at 83% of total messages, with a strong afternoon bias across the period.

🔥 The Skills Question: Should Doctors Be Encyclopaedias or Expert Users?

One of the period's most intellectually charged threads ran across several days, prompted by research on AI clinical decision support systems and their real-world performance. A Nature Health paper examining AI CDSS in Kenyan clinical settings reported a 3.4% hallucination rate and 8% harmful recommendation rate — figures that landed differently depending on whether you viewed them as reassuringly low or alarmingly high.

The debate crystallised on Wednesday 11 March when one GP offered what they acknowledged was "a spicy take":

"Do modern doctors actually need to be experts in information retention? Given that we have easy access to guidelines and AI is an information synthesis tool the likes of which we have never seen, wouldn't it be more useful to have doctors who are expert information USERS rather than medical encyclopaedias? We could focus more on communication skills, and ability to weigh competing priorities." — GP and clinical informatician

This provoked a rich exchange. A clinical safety and governance specialist warned about the parallels with legal training: "A lot of junior/paralegal time is the grunt-work that I think a well structured and focussed AI can do well. But, it's that grunt-work as a junior that makes a senior so effective. It's ladder pulling at its very worst without an effective succession strategy that builds that in."

A practice manager brought the aviation analogy into play: "Autopilot has been flying planes for years yet I wouldn't get on one that hadn't got a human being in the driving seat and a spare one sat next to them. I think human patients will want a highly skilled human in the loop." A clinical academic then shared a striking personal experience: on a recent flight, conditions were too difficult for human pilots, so autopilot took over — prompting reflections on a future where doctors might be required to get an AI second opinion before prescribing.

A radiology specialist pushed back against the "tasks not jobs" framing, noting that blind-read mammography and autonomous chest X-ray interpretation are already replacing specific radiologist functions. And a digital health specialist offered a grounding perspective: "Suspect all AI will do is reveal how much healthcare demand was never met due to system constraint. Roles will evolve but it'll be a while before personnel fully removed."

One contributor cut through the philosophical debate with characteristic directness: "AI won't replace doctors BUT the 💩 doctor using AI will still be 💩."

🏛️ Sovereign AI: From Frustration to a £1m Proposal

A thread that simmered all week erupted on Thursday 12 March when a poll landed in the group: "Is anyone here interested in co-developing a sovereign AI model for UK/NHS? Potential funding of up to 1 million to develop same." The results were emphatic — 17 voted "Bring it on", one selected "What is an AI model?", and nobody chose "You must be joking."

The conversation quickly deepened. One member laid out a detailed vision: "If I were suddenly made NHSE CEO, a very early thing I'd be stuffing into Wes Streeting's face would be a capital build project for a NHS AI platform. Two datacentres for sovereign data control, and a brutally strict procurement for the build process where the NHS would own every single line of the IP and access would be restricted to fully behind the NHS digital borders."

Practical barriers were explored with equal enthusiasm — hardware costs proved eye-watering (512GB DDR5 at £12,000), though members shared their own setups ranging from Mac Studio M3 Max rigs to dual-GPU Linux training machines. One member noted wryly that further hardware spending would make their divorce cost "far more than the PC." The group debated whether £1m was adequate (DeepSeek allegedly cost $10m with state backing), discussed post-training approaches using LoRA fine-tuning, and explored container-based cluster management.
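For readers who skimmed past the jargon: "post-training with LoRA" means freezing a model's existing weights and learning only a small low-rank update on top of them, which is why it is feasible on the kind of hardware members described. The arithmetic is simple enough to sketch in a few lines of plain Python. This is purely illustrative (toy matrices, not the group's actual approach); the names `lora_forward`, `alpha`, and `r` follow common convention but are our own.

```python
# LoRA (Low-Rank Adaptation): keep the pretrained weight matrix W
# (d_out x d_in) frozen, and train two small matrices B (d_out x r)
# and A (r x d_in) with rank r << min(d_out, d_in). The adapted
# layer computes y = W x + (alpha / r) * B (A x).

def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=16, r=2):
    base = matvec(W, x)              # frozen pretrained path
    delta = matvec(B, matvec(A, x))  # low-rank adapter path
    scale = alpha / r                # conventional LoRA scaling
    return [b + scale * d for b, d in zip(base, delta)]

# Toy example: 3x3 frozen weight, rank-2 adapter. Only the 12 adapter
# parameters (A: 2x3, B: 3x2) would be trained; for a real model the
# saving is several orders of magnitude.
W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
A = [[0.1, 0.0, 0.0], [0.0, 0.1, 0.0]]    # r x d_in
B = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]  # d_out x r, initialised to zero

x = [1.0, 2.0, 3.0]
# B starts at zero, so before any training the adapter is a no-op
# and the output is exactly W x.
print(lora_forward(W, A, B, x))  # -> [1.0, 2.0, 3.0]
```

Initialising B to zero is the standard trick: the adapted model starts out behaving identically to the base model, and fine-tuning only gradually bends it away.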

The discussion built on earlier conversations about data centre vulnerability (Saturday 7 March), where members had shared Guardian reporting on drone strikes raising doubts about Gulf data centres and debated homomorphic encryption approaches. By Friday, the conversation had matured into proposals for a dedicated technical subgroup to pursue this work seriously.
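Homomorphic encryption, the technique debated in that data-centre thread, allows computation on data that stays encrypted throughout. A toy additively homomorphic (Paillier-style) scheme makes the idea concrete: multiplying two ciphertexts yields a ciphertext of the *sum* of the plaintexts. The sketch below is illustrative only; the primes are absurdly small and nothing here reflects what the group would actually deploy.

```python
import math
import random

# Toy Paillier cryptosystem. Property demonstrated: ciphertexts can be
# MULTIPLIED to ADD the underlying plaintexts, with no decryption in
# between. Tiny parameters for illustration only -- never use as-is.

p, q = 47, 59                  # toy primes (real keys use 1024+ bits)
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:       # r must be coprime to n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Two parties encrypt their values; an untrusted third party combines
# them without ever seeing 5 or 7.
c1, c2 = encrypt(5), encrypt(7)
c_sum = (c1 * c2) % n2   # homomorphic addition in ciphertext space
print(decrypt(c_sum))    # -> 12
```

This additive flavour already supports useful workloads (aggregate statistics, federated counts); fully homomorphic schemes extend the idea to arbitrary computation at a much steeper performance cost, which is exactly the trade-off the group was weighing.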

🛡️ Ethics Under Pressure: EMIS, Immigration Data, and Patient Trust

A concerning development surfaced on Tuesday 10 March when members reported that an EMIS update was requiring GPs to collect immigration data — a mandate that immediately raised ethical alarm bells. Several members highlighted that everyone in England is entitled to register with a GP regardless of immigration status, sharing the relevant NHS guidance.

The discussion connected to broader concerns about the Federated Data Platform and Palantir's involvement, with members questioning the implications for vulnerable patient populations. One contributor expressed fear that "policy mandates override clinical ethics," while others noted the chilling effect on patients who might avoid seeking care if they believed their data was being shared for immigration purposes.

This thread exemplified the group's distinctive strength: combining technical knowledge with ethical sensitivity, and backing opinions with evidence and official guidance rather than speculation.

🛠️ Tool Watch: From Voice Recording to AI Portfolios

The group's role as an early-adopter community was on full display throughout the week, with a constant stream of tool discovery, testing, and real-world application.

OpenPlaud and lifelogging dominated Monday 9 March, when the group moderator demonstrated a successful integration of OpenPlaud with Claude's MCP server, using it to transcribe and fact-check a talk given to Aberdeen medical students. The group explored the privacy implications of local speech-to-text models versus cloud processing, with one practice manager cheerfully describing their lifelogging experiment as "Welcome to the world's least exciting thing, my life." Pop culture references flew thick and fast — Wall-E, Black Mirror, and the general question of whether we actually want to remember everything.

Folio, a new ePortfolio platform with a built-in reflective AI tutor, was announced on Sunday 8 March, addressing the "enforced reflection burden" that many see as a major contributor to trainee burnout. The platform was described as nearly ready for launch, with the group getting first access.

On the spreadsheet front, a member demonstrated creating a dynamic DPP Tracker dashboard in five minutes flat using Claude and Excel — a practical demonstration that drew admiration and highlighted how accessible AI-assisted productivity has become for non-technical users.

Apple's announcement of Claude, ChatGPT, and Gemini integration in CarPlay prompted one member to observe: "We got Apple clinical guidelines before GTA VI…"

😄 Lighter Moments

The group's personality shone through in several exchanges. When the sovereign model discussion reached peak technical density on Thursday evening, one member pleaded: "When are you nerds going to your separate nerd group as this is all way over my head 😂🤣" — to which a practice manager replied reassuringly: "Don't you worry, we can stay here and play with our etch-a-sketches whilst the clever kids go off and do clever kid stuff." A third member then asked: "What's an etch a sketch, asking for a friend 😅" — perfectly capturing the group's multigenerational dynamic.

The Claude loyalty club was in fine form, with members treating their preferred AI as a competitive secret. "Trouble is I feel like Claude is my secret ingredient when people are using ChatGPT etc so I don't really want to tell them about it," confided one member. Another offered: "There are two types of LLM users in the world. Those that use Claude and those that don't know they should use Claude." A third completed the trilogy: "Someone said, Claude is the new Alexa. She heard and now doesn't respond."

And Tuesday morning's commute report set the tone for the day: "The M25 car park is particularly scenic today, a rare sighting of a BMW with working indicators was spotted this morning, and average speeds of 10mph allows you time to mindfully breathe in the pollution."

💬 Quote Wall

"It's ladder pulling at its very worst without an effective succession strategy that builds that in." — Clinical safety and governance specialist, on AI replacing junior training tasks

"AI won't replace doctors BUT the 💩 doctor using AI will still be 💩" — Digital health educator

"Do modern doctors actually need to be experts in information retention?" — GP and clinical informatician

"If I were suddenly made NHSE CEO, a very early thing I'd be stuffing into Wes Streeting's face would be a capital build project for a NHS AI platform." — Clinical safety and governance specialist

"Welcome to the world's least exciting thing, my life." — Practice manager, on lifelogging

"There are two types of LLM users in the world. Those that use Claude and those that don't know they should use Claude." — Healthcare AI developer

"On my last flight there was an announcement that the landing conditions were too tricky for the human pilots so autopilot would take us in." — Clinical academic, on AI second opinions

"We got Apple clinical guidelines before GTA VI…" — Digital health specialist

📎 Journal Watch

Academic Papers & Key Studies

📎 AI-powered clinical decision support in Kenyan clinical settings (Nature Health) — Evaluation of AI CDSS performance reporting a 3.4% hallucination rate and an 8% harmful recommendation rate. Sparked the week's major debate on automation bias and the role of clinical reasoning. Read the paper

📎 Generative AI clinical decision support for general practice (BJGP) — Early research on GenAI CDSS applications in primary care, examining diagnosis, investigations, and management pathways. Read the paper

📎 Attitudes to technology and AI in health care (The Health Foundation) — Comprehensive report on public and professional attitudes toward AI in healthcare settings. Read the report

Industry & News Articles

📎 Introducing Copilot Health (Microsoft) — Microsoft's healthcare-specific Copilot announcement, discussed in the context of industry consolidation and NHS procurement. Read the article

📎 The Shape of the Thing (One Useful Thing, Ethan Mollick) — Newsletter on AI's evolving capabilities and implications. Read the article

📎 Doctors risk becoming 'liability sink' for AI errors (Medscape) — Analysis of legal liability frameworks as AI takes on more clinical tasks. Read the article

📎 Drone strikes raise doubts over Gulf as AI superpower (The Guardian) — Reporting on the vulnerability of data centres as military targets. Read the article

Technical Resources & Tools

📎 OpenPlaud — Open-source voice recording tool integrated with Claude MCP server for local transcription workflows. Visit the site

📎 CogStack — NHS-focused AI platform for population health, raised during sovereign model discussions. Visit the site

📎 Folio ePortfolio platform (MedicGenie) — New ePortfolio with a built-in reflective AI tutor. Visit the site

📎 Anthropic Learning Platform — Anthropic's educational resources for Claude users. Visit the site

🔮 Looking Ahead

NHSHackday Cardiff (21-22 March) — Tickets still available; free for participants. Several group members planning to attend.

Folio platform launch — The ePortfolio with reflective AI tutor is described as "almost ready" with the group getting first access.

Canva training courses (18 & 25 March) — Free two-hour sessions, London postcode required.

GenAI CDSS mini-conference (Summer 2026) — A one-day event focusing on diagnosis, investigations, and management in general practice is being organised.

CoWork technical subgroup — A dedicated space for advanced work on model orchestration, CLAW agents, and local model deployment is forming.

Sovereign NHS AI model — With 17 enthusiastic votes and £1m potential funding, discussions on scope, licensing, and training data continue.

🧬 Group Personality Snapshot

This was a week that showcased the community's evolution from a discussion forum into something closer to a distributed R&D lab. The sovereign model poll wasn't just a straw vote — it reflected genuine capability within the group, from members running inference on high-spec local hardware to those building clinical tools with AI assistance in under five minutes. The skills debate revealed a community comfortable with intellectual disagreement, where a "spicy take" is welcomed rather than shut down, and where aviation analogies from practising clinicians sit alongside peer-reviewed research citations. The etch-a-sketch exchange and Claude loyalty jokes showed a group that doesn't take itself too seriously, even whilst tackling questions about the future of British healthcare infrastructure. Five new members joined during the period, walking into a community that manages to be simultaneously welcoming and deeply technical.

APPENDIX A: Detailed Activity Analytics 📊

📬 Total Messages: 587 | 📈 Peak Day: Thursday 12 March (155 messages) | 🔥 Second Peak: Tuesday 10 March (137 messages) | 💬 Average/Active Day: 84 messages | 🏖️ Weekend Activity: 17% (101/587) | 💼 Weekday Activity: 83% (486/587) | 🔗 Messages with URLs: 74 (13%) | 👥 New Members: 5

[Chart images to be added via Webflow Designer]

APPENDIX B: Enhanced Statistics

Top 10 Contributors (Role Descriptors Only)

  1. Digital Health & Clinical AI Specialist (Group Moderator): 136 messages
  2. Healthcare AI Developer & Tool Tester: 62 messages
  3. Innovation-Focused GP & Geopolitics Commentator: 34 messages
  4. Clinical Safety & Governance Specialist: 28 messages
  5. Practice Designer & Digital Skills Trainer: 26 messages
  6. Digital Health Educator & Portfolio Career Advocate: 25 messages
  7. Clinical Academic & Implementation Lead: 24 messages
  8. Technical Implementation Specialist: 23 messages
  9. Clinical Innovator & Academic GP: 20 messages
  10. On-Premises Hardware Explorer & GP: 16 messages

Hottest Debate Topics

  1. 🔥🔥🔥 Sovereign NHS AI model and infrastructure (155+ messages across 4 days)
  2. 🔥🔥🔥 AI and clinical skills: encyclopaedias vs. expert users (98+ messages across 3 days)
  3. 🔥🔥 EMIS immigration data and FDP ethics (67+ messages across 3 days)
  4. 🔥🔥 Tool ecosystem: OpenPlaud, Folio, Claude workflows (80+ messages across 5 days)
  5. 🔥 Local model hardware and cost realities (45+ messages across 2 days)

Discussion Quality Metrics

  • Evidence-Based vs Opinion Ratio: Approximately 30% of messages referenced papers, guidelines, official data, or linked resources
  • Average Thread Depth: 5.3 messages per discussion thread
  • Constructive Challenge Rate: 25% of responses offered alternative viewpoints or counterarguments
  • External Resource Sharing: 74 unique links shared across the period
  • Cross-Expertise Engagement: At least 8 distinct professional backgrounds contributing

APPENDIX C: Daily Theme Summary

Saturday, 7 March

Primary Theme: Infrastructure vulnerability and data centre security
Key Discussion: The group explored data centre military targeting following Guardian reporting on Gulf AI infrastructure, debated homomorphic encryption for secure processing, and discussed AI token cost sustainability and VC-backed pricing models.
Secondary Discussions: Anthropic chart on legal/healthcare AI assumptions, AI scribe pricing models, newsletter #39 launch with retro gaming format
Notable: Group privacy settings updated; growing awareness of geopolitical dimensions to AI infrastructure choices.

Sunday, 8 March

Primary Theme: Educational innovation and appraisal system modernisation
Key Discussion: A detailed appraisal reflection coaching prompt was shared, the Folio ePortfolio platform was announced, and members discussed how 16-year-olds are already presenting on AI in medicine.
Secondary Discussions: NHSHackday Cardiff tickets, agentic AI safety incidents (OpenClaw), International Women's Day acknowledgements, HealthOrbit crowdfunding
Notable: Next-generation engagement with AI increasingly visible; Folio launch addressing the reflection burden in training.

Monday, 9 March

Primary Theme: Voice recording, lifelogging, and local AI integration
Key Discussion: OpenPlaud integration with Claude MCP was demonstrated, with the group exploring local speech-to-text models as an alternative to cloud processing.
Secondary Discussions: Ambient voice technology pilot results from Oxford, AACE ambulance survey, FDP safety documentation, Gamma presentation tool
Notable: Practical demonstration of privacy-preserving productivity tools using open-source components.

Tuesday, 10 March

Primary Theme: Tool ecosystem explosion and clinical AI ethics crisis
Key Discussion: An EMIS update requiring immigration data collection sparked major ethical concern, alongside wide-ranging tool exploration.
Secondary Discussions: Apple CarPlay AI integration, Canva training courses, DPP Tracker dashboard creation, LinkedIn professional badges, Perplexity Computer tool
Notable: Second-highest activity of the period (137 messages), driven by the ethical flashpoint.

Wednesday, 11 March

Primary Theme: Clinical decision support, skill degradation, and workforce impact
Key Discussion: The "spicy take" debate on doctors as information users vs. encyclopaedias dominated, supported by the Nature Health and BJGP research papers.
Secondary Discussions: Suvera platform demo, robot dentistry in China, NHS appraisal coaching tools, Health Foundation attitudes report
Notable: Most intellectually diverse day of the period.

Thursday, 12 March

Primary Theme: Sovereign AI development, infrastructure deep-dive, and CoWork formation
Key Discussion: The sovereign AI model poll (17/1/0) catalysed detailed technical discussion about NHS-owned infrastructure, hardware costs, post-training approaches, and container management.
Secondary Discussions: Claude loyalty and competitive advantage, hardware cost realities, CogStack as existing UK AI capability, gender representation in tech
Notable: Highest activity day (155 messages); the group moved from discussion to action planning.

Friday, 13 March

Primary Theme: Platform decisions, knowledge consolidation, and subgroup formation
Key Discussion: Spreadsheet automation workflows were shared, a hearing test app deployment was tested across platforms, and the CoWork technical subgroup began forming.
Secondary Discussions: Data privacy and the on-premises movement, NHE compliance, a suggested open letter to NHSE, Plaud lifelogging adoption
Notable: Transition from exploration to organisation; the group is maturing into a tiered engagement model.

Saturday, 14 March

Primary Theme: End of reporting period
Key Discussion: One system message recorded. Activity effectively concluded late Friday 13 March.

AI in the NHS Weekly Newsletter is produced by Curistica Ltd for members of the AI in the NHS WhatsApp community. All contributors are anonymised. Views expressed are those of individual community members and do not represent any organisation.