AI in the NHS Newsletter #21
Saturday 25th October to Saturday 1st November, 2025
Week 21 delivered a fascinating mix of existential debates and practical innovation challenges. The group wrestled with profound questions about AI replacing GPs (sparked by a provocative poll showing 13-3 against adoption), whilst simultaneously confronting real-world barriers as Ankit.ai faced widespread ICB blocking despite being a free, well-governed administrative tool. New members brought fresh perspectives from Skin Analytics, Praktiki, and medtutor.ai, whilst the Government's announcement of AI radiology funding drew sceptical responses about "magic unicorns" versus addressing actual bottlenecks. Academic discussions ranged from clinical coding's future to OpenAI's revelation that 0.15% of ChatGPT use involves suicidal intent, bookended by an unexpectedly delightful detour into 1990s computing nostalgia. The community's personality shone through: simultaneously serious about patient safety and gleefully reminiscing about floppy disks and DOOM.
📊 Weekly Activity Analytics
Activity Dashboard
Daily Message Distribution
Activity Insights
Weekend dominance: The community showed remarkable weekend engagement, with Sunday and Monday generating over half of all messages
Three major peaks: Sunday 26th, Monday 27th, and Wednesday 29th each exceeded 100 messages
Mid-week lull: Thursday 30th saw the quietest day with just 9 messages
Strong participation: 10 contributors with 10+ messages each, showing broad engagement
Hot topics drove activity: Ankit.ai blocking, GP replacement poll, and digital planning guidance correlated with activity spikes
🚫 The Ankit.ai Blocking Crisis: Innovation Meets Bureaucracy
Sunday 26 October brought a rallying cry from the community when it emerged that Ankit.ai - a free, well-governed administrative tool for contractual queries and policies - was being systematically blocked by multiple NHS ICBs. The creator, a working GP, had invested personal time and resources, completed a thorough DPIA, and removed all potentially problematic elements (iframe heavy design, "generative AI" labelling, even the "buy me a coffee" link), yet still faced inexplicable barriers.
The group's response revealed deep frustration with systemic inconsistency. A clinical safety expert noted that truly non-clinical administrative tools should fall outside DCB requirements, though a lightweight risk assessment and DPIA remain helpful. When the comprehensive DPIA was shared, the consensus emerged: the problem wasn't the documentation but procurement and governance teams defaulting to "not our template" responses without understanding data protection principles.
One commentator wryly observed that Twitter and Facebook operate freely without NHS-specific DPIAs, whilst HSJ subscriptions face no such scrutiny. The suggestion that a helpdesk operator simply saw "AI" and added it to a block list without deeper assessment gained traction. A technical analyst pointed out: "Someone will have seen 'AI' and added it to a block list. Ankit's tool has enough warnings and notes that it'd take someone deliberately going off-piste to cause any harm."
Regional variation emerged as a key theme. North East London reported no blocking, with local teams being "very responsive" - a testament to progressive leadership. This geographic lottery led to calls for standardised national assessment processes rather than each of 42 ICBs conducting separate evaluations. The proposed Innovation Passport from the 10-year plan was discussed, though concerns were raised about its readiness and whether the solution matches the problem.
The creator's candid reflection resonated: "I suspect I either have to launch it as a commercial product or simply let it die. It doesn't feel that the system is keen to let things that work stick around." The discussion highlighted broader tensions between innovation, governance theatre, and the challenge of scaling grassroots solutions across a complex system.
👑 Would You Bow to Your AI Overlords? The Great GP Replacement Debate
Late Saturday 26 October saw a GP innovator drop a philosophical hand grenade into the chat: a poll asking if colleagues would adopt a "regulated & certified tool that can work autonomously as Primary Care Clinicians." The results spoke volumes: 13 voted "no", just 3 "yes" (including one of the newsletter editors, who subsequently fielded questions).
The debate that unfolded across Saturday night into Sunday morning was remarkable in its depth. A newly CCT'd GP pleaded for "at least a couple of years before replacing me", whilst others raised fundamental questions about continuity of care, the performative aspects of medicine, and what "better" actually means in healthcare delivery.
The pragmatic perspective emerged from several quarters. One contributor noted: "I optimise for patient care. If something else genuinely does it better, I salute our replacement overlords" - though acknowledging that "better" does immense lifting in that statement. The counter-argument came swiftly: "I think we're whole generational science and tech breakthroughs away from a tech that can meet the standard of GP care I've had over the years."
The sociological angle proved particularly insightful. In a system free at point of delivery, with GPs as first contact and gatekeepers, "so much of medicine is ultimately performative/human-touch related and has little to do with true diagnostic, whether it be reassurance, signposting to other services, social issues." This observation sparked discussion about whether AI would force healthcare back to examining these fundamentals after decades of moving away from them.
Future scenarios ranged from dystopian to pragmatic. One participant suggested that "fewer GPs learning ultrasound" might ironically lead to GPs becoming radiologists whilst AI handles primary care. The liability question loomed large: manufacturers would need insurance, but who carries responsibility when a GP principal is simultaneously clinician, data controller, and practice owner?
Real-world context grounded the abstract. The observation that "differently trained people in the Medical Model" have already begun replacing traditional GP roles added bite. One experienced GP noted that straightforward presentations increasingly go to ARRS colleagues and ANPs, leaving GPs with trickier cases requiring System 2 thinking - potentially a pattern that intensifies rather than reverses.
The conversation concluded with a striking demonstration: asking Claude (via ChatGPT) 100 times when autonomous GP AI replacements might arrive in the NHS yielded a distribution centred on 2038-2040, with 58% of responses falling in this tight window. The mean was 2038.8 years - approximately 13-17 years from now. Whether reassuring or terrifying depends entirely on your perspective.
👋 Welcome Wagon: New Faces and Fresh Tools
Saturday 26 October saw the community expand with several significant additions, each bringing distinct perspectives and capabilities:
Medical Director from Skin Analytics (former Babylon GP) joined with characteristic directness from his early days: "Thanks - looking forward to hearing and learning from this community!" His subsequent contributions on liability and indemnity proved invaluable, drawing on Skin Analytics' real-world experience navigating medical device regulation and insurance frameworks.
A pharmacist and content writer from Praktiki was welcomed alongside Praktiki's CEO and co-founder. Their arrival prompted discussions about the critical intersection of AI training tools and quality control, particularly around automated feedback systems. The consensus: having a "human in the loop" - whether trainers or "celebrity educators" - remains essential for medical education tools to maintain standards whilst leveraging AI's scalability.
A GP who built medtutor.ai ("I got sick of hearing 'AI will replace doctors' so I built something where AI replaces the patient instead") showcased patient simulations for SCA preparation with instant feedback. The tool's clever approach - allowing trainees to practice with consistent AI patients, share recordings with trainers for unbiased assessment, then adjust and try again - earned widespread praise. One educator reported using it in tutorial on Friday and finding it "very useful," whilst offers of free trainer access were gratefully accepted for wider distribution.
The warm welcomes reflected the group's culture: professional respect, genuine curiosity about new tools, and immediate engagement with the hard questions (in medtutor.ai's case, how to ensure quality control in AI-generated feedback). The mix of clinical innovation (melanoma detection), education technology (safe prescribing, SCA prep), and diverse professional backgrounds (GP, pharmacist, medical directors) enriched ongoing conversations throughout the week.
📱 NHS Digital Planning: The Path to "Digital by Default"
Tuesday 29 October brought extensive discussion of the newly published three-year planning framework, with its boldest target: 95% of appointments bookable through the NHS App by end of 2028-29. A detailed community analysis highlighted both ambitions and concerns about this journey toward "digital by default" delivery.
Key targets include a national product adoption dashboard tracking electronic prescriptions, e-referral interfaces, and NHS App integration by March 2028. The directive to "terminate" direct-to-patient SMS services in favour of NHS App push notifications represents significant change, potentially freeing substantial resources whilst requiring careful transition management.
The Federated Data Platform (FDP) generated most controversy. ICBs "should use the FDP for data warehousing" and implement the canonical data model - language that community members interpreted as "mandating adoption". One commented: "It seems we're being taken for idiots... If the FDP is genuinely transformational, prove it. Publish an independent evaluation, total lifecycle cost, implementation burden, data governance risks, and measurable improvement." The full project cost exceeding £1bn raised further questions.
Ambient voice technology (AVT) received prominent placement: providers should adopt "at pace," though central support was ruled out for 2025-26 implementations. The expectation that business cases will "write themselves" due to productivity improvements drew scepticism. A system supplier warned against "rushed implementations of standalone solutions that aren't integrated with clinical systems" - appearances in "short-term pilots" versus "full potential productivity benefits" of integrated solutions.
The Health Foundation's caution resonated: with healthcare providers at "widely different levels of digital maturity," supporting organisations "lower down the curve to build the infrastructure to deploy AI effectively" remains crucial. The concern: AVT represents a "wild West" market where serious problems may emerge before regulatory frameworks catch up.
Strategic concerns extended beyond individual technologies. The single patient record's notable absence from planning guidance suggested unresolved decisions about architecture. Cybersecurity's omission seemed particularly curious. The relationship between national direction and local implementation - always fraught - faces new tests when central mandates ("should use") meet constrained local resources and varying digital maturity.
One CIO's observation captured the tension: the FDP has become an "obsession" for NHS England, yet many trusts face "a lot of push-back just due to the cost of change" whilst the canonical data model remains "only partially developed." The three-year visibility represents genuine improvement over annual instability, but successful delivery requires more than targets - it demands resources, training, integration support, and realistic acknowledgement of starting positions.
🦄 "Magic Unicorns" and Radiology Bottlenecks: AI Announcement Reality Check
Tuesday 29 October's government announcement about AI-powered radiological analysis funding triggered immediate scepticism from clinical informaticians who've seen this pattern before. An image circulated showing the announcement alongside a critical observation: addressing waiting times requires fixing actual bottlenecks, not just accelerating one step in the pathway.
A consultant's reaction set the tone: "Oh for god's sake. Paging Margaret McCartney for some sense." Another noted: "Only if it's radiology reporting waiting times that are the rate limiting factor" - pointing to the fundamental flaw in assuming faster image analysis automatically equals faster patient journeys.
The technical details mattered. Questions emerged about device classification, clinical effectiveness validation, and whether public assessment data exists for review. The companies mentioned (Lucida, Quibim) have solid credentials, and integrating with digital pathology AI could indeed "support better pathways following on from diagnostics" - but the announcement's framing as a major breakthrough ignored systemic constraints.
Implementation reality provided sobering context from a practice using Skin Analytics for melanoma detection. Their deliberate approach includes built-in delays: scanning clinicians don't get instant results, patients receive callbacks the next day, and pathways explicitly avoid "instant gratification" precisely because this is "a serious medical device and the pathways must reflect that on how it impacts patients." Average time from GP visit to outcome: 2.8 days, considerably better than hospital routes. The contrast with government announcements promising rapid transformation couldn't be starker.
One digital health lead characterised this as the "shallowness of health policy... prior to the election GPCE were concerned it would be all magic unicorns, and here we are." The phrase "magic unicorns" captured the frustration: announcements that sound transformative but ignore the unglamorous work of pathway redesign, staff training, system integration, and addressing actual rate-limiting steps.
The underlying pattern reflects broader tensions. New technology alone rarely solves complex system problems. MRI scanner capacity, radiologist numbers, pathway design, and onwards referral routes all constrain patient flow. Accelerating image analysis without addressing these factors simply moves the bottleneck elsewhere - potentially creating new problems (overflow, loss of contemplation time, information overload) whilst claiming victory.
The community's response demonstrated sophisticated understanding: celebrating genuine innovation (AI radiology has proven value) whilst maintaining scepticism about implementation rhetoric that oversells and underdelivers. As one commented: "This is absolutely the headline that highlights the shallowness of health policy."
🔤 The End of Clinical Coding? LLMs and Healthcare Language
Sunday 27 October saw renewed debate about whether AI will eliminate clinical coding, sparked by a LinkedIn post highlighting fundamental tensions between structured ontologies and natural clinical language. A digital health specialist's response captured decades of accumulated wisdom: "Clinical Coding is important, of course, but it has always been an effort to take humans towards speaking in a language that can be easily understood and transferred digitally."
The historical context matters. SNOMED codes now exceed common English vocabulary by 5:1 - "almost no human being can remember all the Snomed codes!" One contributor noted coding served two purposes: knowing exactly what clinicians meant, and enabling statistics (machine readable data). But with SNOMED's explosion, the system has become unwieldy whilst remaining useful for EHR searches and potentially as training data tokens for AI models.
The LLM revolution offers a different approach: "Why reduce the complexity of clinical reality to a hierarchical graph, impressive as they have become, when you can have it embedded in several thousand dimensions without the need to be limited by human comprehensibility?" This philosophical shift - from forcing human thought into structured codes to allowing machines to process natural language in its full complexity - represents genuine transformation.
Practical considerations tempered pure enthusiasm. One expert suggested codes still serve roles: triggering pathways and actions in rule-based systems, maintaining consistency across healthcare informatics infrastructure, and providing searchable structured data. "Ideally hospitals would have an orchestration bus connecting all systems that used (probably several) LLMs to code stuff but the edges still need coded info, unless you want an LLM in every system, with all the possibility of variance that comes with that."
The transition challenge emerged as key. Healthcare operates on existing infrastructure built around coded data. Moving to LLM-processed natural language requires careful orchestration: which systems need codes, which can work with semantic understanding, how to maintain interoperability, and how to avoid introducing new failure modes. The technical debt of decades of coding-centric systems won't disappear overnight.
The conversation reflected broader themes: technology enabling return to more natural human communication patterns, the challenge of maintaining consistency whilst embracing flexibility, and the tension between revolutionary possibility and evolutionary reality. Clinical coding may not disappear completely, but its role will fundamentally shift as LLMs mature - from primary data structure to legacy compatibility layer.
💬 AI Slop and the Value of Authentic Writing
Monday 27 October saw a Northern Ireland GP identify an emerging pattern: "Noticing a HUGE trend in emails in NHS and beyond in folks using AI to reply. Em dashes, three point phrasing in sentences, the lot." His poll showed overwhelming irritation: 16 voted "yes" (AI replies irritate), just 1 voted "no". The discussion that followed explored why authentic writing increasingly matters in an age of AI generation.
The nuanced reality emerged quickly. For "purely transactional things, absolutely not" irritating. But "for things that I expect a human eye on, yes I do and I will judge the company." The expectation that companies should "read my mind to get that line right" captured the impossibility: AI-generated content works for some contexts, fails dramatically in others, and the difference hinges on emotional stakes and relationship expectations.
The strategic insight came from an innovation-focused GP: "Your writing will be your biggest moat in the world filled with AI Slop. Protect it." The phrase "AI slop" - low-effort, generic AI-generated content - gained immediate traction. In a world where anyone can generate plausible-sounding text instantly, authentic human voice becomes differentiating. The comment "The finger is mightier than the Clanker" (a delightful inversion) captured this sentiment.
Practical applications varied. One experienced user employed Apple Intelligence powered by OpenAI or Perplexity: "I usually write the responses and then ask AI to enhance it. If simple responses, then give the prompt to reply with thanks." This augmentation approach - human thought, AI polish - represented a middle ground between pure automation and complete rejection.
The deletion principle offered pragmatic advice: immediately delete emails one has "no intention of responding to." AI-generated replies actually help this process: "It allows me to ignore them emails straight away." The ability to quickly identify auto-generated content creates an arms race between generation and detection, with implications for attention, trust, and communication effectiveness.
The discussion reflected deeper questions about authenticity, effort, and value in professional communication. In an era when generating plausible text costs nothing, what does it mean when someone invests time in careful writing? What signals does AI generation send about the sender's priorities? And how do recipients calibrate their responses when they can't be sure a human read their original message?
📊 Enhanced Statistics Section
Message Flow Patterns
Peak hours: Evening discussions (4pm-8pm) generated sustained engagement
Weekend warriors: Saturday-Sunday accounted for 54.5% of all traffic
Conversation triggers: Polls, provocative questions, and breaking news drove highest responses
Thread depth: Major topics like GP replacement and Ankit.ai blocking generated 30+ message exchanges
Top 10 Contributors
Digital Health & Clinical AI Specialist: 76 messages - Group moderator, newsletter creator, and facilitator of technical discussions
Innovation-Focused GP: 44 messages - Poll creator and provocateur, coffee philosopher
Recently Qualified GP: 28 messages - Technical analyst with gift for humour and GIFs
Practice-Side GP Lead: 25 messages - Systems thinker and Medicus co-founder
Clinical Safety & Systems Expert: 25 messages - Skin Analytics pathway designer, holiday reader
Northern Ireland GP & Digital Lead: 25 messages - Writing protector and meme curator
GP & Digital Health Enthusiast: 20 messages - Portfolio career advocate and "chAI" evangelist
System Analyst: 11 messages - Local infrastructure advocate and retro gaming expert
Clinical Safety Specialist: 10 messages - Norfolk GP and rational discussion facilitator
ICB Digital Lead: 10 messages - North East London innovator and council member
Hottest Debate Topics (by engagement)
GP Replacement by AI (50+ messages): Existential debate about autonomous clinicians, liability, future roles
Ankit.ai Blocking (40+ messages): Systemic barriers, governance theatre, regional variation
NHS Digital Planning (25+ messages): FDP mandate, AVT adoption, digital maturity concerns
Retro Computing Nostalgia (35+ messages): Floppy disks, DOOM, Wing Commander - therapeutic escapism
Clinical Coding Futures (20+ messages): LLMs versus SNOMED, natural language processing
AI-Generated Email Replies (18+ messages): Authenticity, AI slop, writing as moat
Government Radiology AI (15+ messages): Magic unicorns critique, pathway reality
LLM Clinical Performance (12+ messages): Nature Medicine study, conversational versus exam performance
Quality Metrics
Evidence-based contributions: 51 messages contained URLs to research papers, articles, or technical resources
Cross-expertise engagement: Clinical, technical, policy, and vendor perspectives all represented
Constructive tone: Even heated debates maintained professional respect and humour
Follow-through: Questions posed early in week received detailed responses days later
Resource sharing: Academic papers, GitHub links, industry reports, and practical tools exchanged freely
💡 Lighter Moments
The Nostalgia Explosion
What began as a simple question about local versus cloud computing devolved (or perhaps evolved) into the week's most therapeutic digression. Tuesday 29 October saw the group collectively regress to the 1990s, sharing memories of:
Installing Windows 3.1 from six 1.44MB floppy disks (luxury compared to Windows 95's 13 floppies!)
PC Gamer magazine with 3 floppies containing the first 9 levels of DOOM
The "del boys at school" who did a roaring trade in copied software
Glasgow's Barras market with its mysterious CD shops: "Give them a tenner, they'd duck out and come back with a CD"
The Soundblaster parrot demo ("Anyone remember?")
Leisure Suit Larry access questions that required "cracking"
Jet Set Willy loaded via tape on the BBC Micro
The profound observation: "In a system entirely free at the point of delivery, even the download scene had its own market dynamics"
One participant confessed: "I'm amazed that amongst the heroin trade there was a thriving floppy underground in 1990s Glasgow." The response: "The Barras. That is all."
The thread demonstrated the group's ability to pivot from existential AI debates to collective reminiscence with zero transition time. When someone shared a floppy disk emoji asking "what is this?", the genuine confusion about whether they were joking or serious captured generational divides perfectly.
The "chAI" Moment
When an innovation-focused GP shared coffee from his camping trip, a digital health enthusiast immediately corrected: "Wrong beverage for this group. It is officially chAI 🫖" The instant acceptance of this as official group beverage showed the community's talent for creating in-jokes that stick.
Clinical Safety Expert's Nightmare
After posting about enjoying his "brew break" to read the 23-page newsletter, the group descended into hundreds of messages about retro computing. His return: "Wondering what sort of group he's joined with hundreds of unread messages every time he goes away for a brew 🤣" Another member's response captured it: "Better having to restrain wild horses than raise the dead."
The Poll About Polls
An innovation-focused GP's poll asking if AI could autonomously replace GPs generated such intense discussion that someone joked: "Did I read NHS is a monopoly 🤔 Sorry wrong group, Need to turn on advanced features 🤣" The group's ability to pivot from heated debate to self-aware humour within seconds remained impressive.
Travel Updates
"Currently on a long boring flight, hence waffle" - followed by insightful technical analysis. Response: "What needs to happen to make plane more interesting?" Practice Manager's wisdom: "Planes need to stay boring, the last thing I want on my plane journey is for anything exciting to happen!"
💬 Quote Wall
"Your writing will be your biggest moat in the world filled with AI Slop. Protect it."
— Innovation-Focused GP on authentic communication in AI era
"I optimise for patient care. If something else genuinely does it better, I salute my replacement overlords. I appreciate that 'better' is doing alot of lifting."
— Digital Health Specialist on autonomous AI clinicians
"2025 luddites in full swing"
— Recently Qualified GP on Ankit.ai blocking
"So much of medicine is ultimately performative/human-touch related and has little to do with true diagnostic, whether it be reassurance, signposting to other services, social issues."
— Recently Qualified GP on the future of general practice
"This is absolutely the headline that highlights the shallowness of health policy... prior to the election GPCE were concerned it would be all magic unicorns, and here we are."
— Practice-Side GP Lead on radiology AI announcements
"Why reduce the complexity of clinical reality to a hierarchical graph, impressive as they have become, when you can have it embedded in several thousand dimensions without the need to be limited by human comprehensibility?"
— Digital Health Specialist on LLMs versus clinical coding
"I got sick of hearing 'AI will replace doctors' so I built something where AI replaces the patient instead."
— GP and medtutor.ai creator on education technology
"Someone will have seen 'AI' and added it to a block list. Ankit's tool has enough warnings and notes that it'd take someone deliberately going off-piste to cause any harm."
— Clinical Safety Expert on governance theatre
📎 Journal Watch
Academic Papers & Key Studies
📎 Nature Medicine: "Do AI guardians protect us from health information overload?"
Published online: 27 October 2025
https://www.nature.com/articles/s41746-025-02093-0
Explores how AI-enabled assistants might filter, contextualise, and personalise health information to address digital health fatigue and information overload, potentially supporting informed self-management whilst mitigating unintended harms. Shared during discussions about OpenAI's safety work and patients' increasing reliance on AI for health queries.
📎 Nature Medicine: "LLM Performance in Clinical Consultations"
Early 2025 study cited in Guardian article
https://www.nature.com/articles/s41591-024-03328-5.epdf
Researchers designed an AI agent simulating human patients to test LLMs' clinical capabilities across 12 specialties. Key finding: all LLMs performed significantly worse in conversational consultations compared to exam-style questions. Sparked Sunday discussion about the gap between benchmark performance and real-world diagnostic capability.
📎 ArXiv: "Illustrated Guide to Transformers"
Technical explainer shared 29 October
https://www.krupadave.com/articles/everything-about-transformers?x=v3
Accessible, illustrated technical explanation of how transformer models work. Shared as morning reading for "techie folk" wanting deeper understanding of architectures underlying modern LLMs.
📎 ArXiv: "Diffusion Models Deep Dive"
October 2025
https://arxiv.org/abs/2510.21890
More complex technical paper on diffusion models, shared alongside transformer explainer for those seeking comprehensive understanding of generative AI architectures.
📎 ArXiv: "Thermodynamic Computing and P-bits"
29 October announcement
https://arxiv.org/abs/2510.23972
Following Extropic.ai's announcement about thermodynamic compute, this paper explores P-bits and alternative computing paradigms. Generated discussion about whether this represents genuine breakthrough or another overhyped technology.
📎 ChatGPT Shared Conversation: "Autonomous GPAI Timeline"
27 October simulation
https://chatgpt.com/share/68feb296-4e0c-8005-951d-4001cbf58738
Extended thinking mode analysis of what it would take for autonomous GP AI systems to become viable in NHS. Lengthy, potentially biased by conversation history, but useful for understanding LLM reasoning about complex sociotechnical challenges.
📎 ChatGPT Shared Conversation: "Retinal Detachment Consultation"
Personal diagnostic use case
https://chatgpt.com/share/68d1c8a9-20a0-800c-bca5-0accf1cc3c3f
Real example of GP using ChatGPT for personal health concern (photopsia symptoms between two GP colleagues). Demonstrated LLM capability in clinical reasoning whilst highlighting users' increasing comfort with AI health consultations.
Industry Articles & News
📎 Guardian Long Read: "DeepSeek is humane, doctors are more like machines"
28 October 2025
https://www.theguardian.com/society/2025/oct/28/deepseek-is-humane-doctors-are-more-like-machines-my-mothers-worrying-reliance-on-ai-for-health-advice
In-depth article exploring patients' reliance on AI (particularly DeepSeek) for health advice when physician access is limited and consultations feel mechanical. Sparked discussion about AI providing time and attention impossible in current healthcare systems, and the emerging reality of anthropomorphised relationships with LLMs.
📎 CNBC: "Amazon announces sweeping corporate job cuts"
27 October 2025
https://www.cnbc.com/2025/10/27/amazon-to-announce-sweeping-corporate-job-cuts-starting-tuesday.html
Latest major tech layoffs explicitly attributed to AI capabilities. Middle management roles most impacted. Shared as evidence of AI's accelerating impact on knowledge work, with implications for similar roles in healthcare administration.
📎 The Hindu Business Line: "Narayana Hrudayalaya acquires UK's Practice Plus Group"
31 October 2025
https://www.thehindubusinessline.com/companies/narayana-hrudayalaya-to-acquire-uks-practice-plus-group-hospitals-for-18878-million/article70224310.ece/amp/
Indian hospital group with reputation for efficiency and innovation acquiring UK private hospitals. Community reaction: "This could be innovation & efficiency fireworks!!!" with predictions of NHS trust takeovers within 3-5 years.
📎 HSJ: "The Download - The path to 'digital by default' is now clearer"
Weekly newsletter excerpt, October 2025
Detailed analysis of three-year planning framework's digital targets: 95% appointments bookable through NHS App by 2028-29, FDP adoption mandates, AVT deployment "at pace", termination of SMS services. Sparked extensive discussion about ambition versus reality, digital maturity variations, and missing elements (single patient record, cybersecurity).
📎 LinkedIn Post: "Question of the year - generating positive ROI with AI agents"
26 October 2025
https://www.linkedin.com/posts/rakeshgohel01_question-of-the-year-how-to-generate-positive-activity-7387829230191075328-3MGf
Early morning provocation: "Is NHS ready for agents, MCPs, etc or still catching up with likes of RPA? Oh dear..." Set tone for week's discussions about implementation readiness versus technological possibility.
📎 Indeed Hiring Lab: "AI at Work Report 2025"
October 2025
https://share.google/Ek7kvh0jy8VXqTlBr
How GenAI is rewiring the DNA of jobs. Shared during discussions about workforce transformation, with particular attention to healthcare administration roles and clinical support functions.
Technical Resources & Guidelines
📎 Ankit.ai DPIA Documentation
Published on ankitkant.com
https://www.ankitkant.com/disclaimer/dpia
Comprehensive Data Protection Impact Assessment for free administrative tool, demonstrating thorough governance despite grassroots origins. Shared during blocking crisis discussion as evidence that documentation quality wasn't the barrier.
📎 OpenAI: "Strengthening ChatGPT responses in sensitive conversations"
28 October 2025
https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/
Technical blog post revealing 0.15% of weekly ChatGPT use involves element of suicidal intent. Details improved response protocols and safety measures. Generated discussion about duty to notify authorities, comparison with search engine traffic, and CarefulAI's potential involvement.
📎 Brave Browser Blog: "Comet Prompt Injection Vulnerabilities"
October 2025
https://brave.com/blog/comet-prompt-injection/
Security analysis of Comet browser's agentic capabilities revealing serious prompt injection vulnerabilities. Stark advice at BiteLabs talk: "I'll keep this simple: do not use comet browser in any clinical setting."
📎 VentureBeat: "When your AI browser becomes your enemy"
October 2025
https://venturebeat.com/ai/when-your-ai-browser-becomes-your-enemy-the-comet-security-disaster
Detailed coverage of Comet browser security issues, reinforcing concerns about agentic browsers in healthcare contexts where prompt injection could enable unauthorised actions.
📎 NHSE: "GP Clinical Systems Experience Survey"
Live survey, October 2025
https://euklas.qualtrics.com/jfe/form/SV_86Y2shbu08coXuS
National survey gathering GP feedback on clinical systems (primarily EMIS and TPP). Community response mixed: improvements from earlier drafts acknowledged, though design questions remained about addressing core known issues versus gathering more data.
📎 Machine Learning University Explainers
Educational resource collection
https://mlu-explain.github.io/
Visual, interactive explanations of ML concepts including train/test/validation sets. Shared as morning reading for those interested in foundations of AI-driven technology: "Statistics... might be too early in the morning."
📎 IEEE Spectrum: "MLPerf Trends - AI Growth vs Hardware Struggles"
30 October 2025
https://spectrum.ieee.org/mlperf-trends
Analysis of hardware struggling to keep pace with AI model growth and training demands. Shared alongside journal reading during holiday catch-up, prompting discussion about infrastructure constraints on AI advancement.
Policy Documents & Official Reports
📎 Companies House Filings: TPP Director Changes
Filed 5 September 2025
Public records showing directorship termination at The Phoenix Partnership (Leeds) Ltd, though control retained through group shareholding and director appointment rights. Sparked speculation about reasons, potential scandals, and implications for GP IT systems market (55% of England's primary care).
📎 Sky News: "UnitedHealth considers Optum UK sale"
October 2025
Report that UnitedHealth Group (owners of EMIS, 55% of England's GP IT) considering sale just two years after £1.2bn acquisition. Blackstone among potential buyers. Generated concern about private equity involvement in critical national infrastructure.
📎 Telegraph: "NHS staff sick days cost £1bn per month"
30 October 2025
https://www.telegraph.co.uk/business/2025/10/30/nhs-staff-sick-days-mental-health-month/
Statistics on NHS staff sickness absence costs. Simple reaction from community: "Healthcare Org, can't even even look after its own people."
📎 Digital Health News: "TPP Director Changes Confirmed"
30 October 2025
https://www.digitalhealth.net/2025/10/frank-hester-believed-to-have-stepped-down-from-tpp/
Industry coverage of TPP directorship changes, connecting Companies House filings to broader questions about GP IT systems market control and future.
Conferences & Events
📎 Bradford Quantum Hackathon 2025
Event announcement, November 2025
https://aqora.io/events/quantumbradford2025/discussions
https://youtu.be/YIgwGw4rb3Y
Open to "vibe coders" exploring quantum computing applications. Community member participating to explore "Dance & Quantum" - demonstrating group's breadth from clinical AI to bleeding-edge computer science.
📎 Four Nations Conference: AI for Education
Upcoming event
Clinitalk team planning to raise ICB assurance process variation issues. Community members welcomed to share experiences and thoughts on standardisation needs.
🔮 Looking Ahead
Unresolved Questions:
Will standardised national ICB assessment processes emerge from the Innovation Passport, or will geographic lottery persist?
How will practices balance mandated "digital by default" targets with varying digital maturity levels and constrained resources?
What happens when the first autonomous AI clinical tool reaches MHRA Class III approval and liability questions become immediate rather than theoretical?
Can the NHS adopt AVT "at pace" whilst maintaining integration with core systems and avoiding the "wild West" scenario?
Will private equity ownership of critical GP IT infrastructure (EMIS under potential Blackstone acquisition) create new vulnerabilities?
Emerging Themes:
Governance theatre versus innovation: The Ankit.ai blocking exemplifies systemic inability to distinguish between rigorous assessment and bureaucratic obstruction
Writing as moat: In AI slop era, authentic human communication becomes increasingly valuable differentiator
Anthropomorphised AI relationships: Patients' deepening connections with LLMs like DeepSeek/ChatGPT challenges assumptions about healthcare's irreducibly human elements
Middle management displacement: Amazon's AI-driven layoffs preview similar patterns likely to emerge in NHS administration
Geographic variation in digital leadership: North East London's progressive approach versus other regions' blocking behaviour highlights how local leadership shapes innovation adoption
Continuing Debates:
Clinical coding futures: Whether LLMs truly eliminate need for structured ontologies or merely reduce their centrality whilst maintaining necessity for legacy system compatibility
Autonomous clinicians timeline: Claude's 2038-2040 prediction sits uncomfortably between impossible-to-imagine and closer-than-comfortable
Liability frameworks: Who carries responsibility when AI makes autonomous clinical decisions - manufacturers, clinicians, practice owners, or some new model?
Digital maturity inequality: How to support lagging organisations whilst mandating adoption "at pace" for those ready to move faster
👥 Group Personality Snapshot
What Makes This Community Unique:
Therapeutic Nostalgia: The ability to pivot from existential debate about AI replacing GPs to collective reminiscence about floppy disks and DOOM demonstrates remarkable emotional range. The retro computing thread wasn't distraction - it was processing space, allowing the group to decompress from heavy topics whilst reinforcing shared generational experiences. "I'm amazed that amongst the heroin trade there was a thriving floppy underground in 1990s Glasgow" captures the group's gift for finding absurdist humour in unexpected places.
Respectful Combat: An innovation-focused GP's provocative poll asking if people would adopt autonomous GP AI replacements could have generated defensiveness or hostility. Instead, it sparked one of the most thoughtful exchanges yet about healthcare futures, with multiple perspectives represented respectfully. When a newsletter editor admitted voting "yes" and fielding questions, the response wasn't attack but genuine curiosity about reasoning. This intellectual generosity whilst holding strong opinions exemplifies the community's maturity.
Practical Idealism: The group refuses to choose between vision and pragmatism. Discussions about autonomous AI clinicians acknowledged both transformative potential and implementation reality. Ankit.ai blocking generated fury about bureaucratic barriers and specific, actionable suggestions about DPIAs, explainer docs, and MHRA classification. The NHS digital planning analysis celebrated bold targets whilst interrogating missing details, unfunded mandates, and digital maturity variations.
Cross-Pollination: A medical director's arrival from Skin Analytics brought real-world medical device liability experience to theoretical autonomous AI debates. Praktiki and medtutor.ai's education technology perspectives enriched discussions about AI's role in clinical training. The mix of GPs, specialists, informaticians, vendors, and practice managers means every major topic receives multiple expert angles rather than echo chamber reinforcement.
Dark Humour as Coping: When discussing whether AI would replace GPs, one contributor quipped: "I wouldn't worry about AI replacing you dude. 'Differently trained people in the Medical Model' already got there 1st 😉" The ability to acknowledge painful truths (workforce substitution, continuity losses) through gallows humour allows the group to process difficult realities without descending into bitterness.
Evidence Attachment: 51 URLs shared across 396 messages demonstrates commitment to grounding opinions in research, industry analysis, and technical documentation. From Nature Medicine studies to ArXiv papers to Guardian long reads to Companies House filings, the group's instinct is "show me the evidence" rather than "trust my assertion."
Meme Fluency: The seamless integration of GIFs, movie references (Spider-Man 2!), and in-jokes (chAI, GP to kindly, #gptokindly) creates linguistic shorthand that accelerates communication whilst maintaining levity. When a clinical safety expert returned from a brew break to hundreds of retro computing messages, the response meme ("better restraining wild horses than raising the dead") needed no explanation.
This is a community that can debate AI replacing doctors at 3am, share academic papers on LLM performance at breakfast, reminisce about Leisure Suit Larry at lunch, and still find energy to welcome new members and critique government policy by evening. The WhatsApp group dynamics would be studied by anthropologists if they knew what was happening here.
APPENDIX: Daily Theme Summary
Saturday, 25 October 2025
Primary Theme: Keikku Health hardware platform appreciation
Key Discussion: Brief praise for Keikku's stethoscope-integrated scribe as "genius" for ED and ward settings, with hardware design quality highlighted
Secondary Discussions:
Newsletter #20 praise and reflection on recurring themes
Proposal for 6-month review at issue 26
Desire for better infographics
Notable: Low-key day setting up weekend explosion; newsletter appreciation showing community values self-reflection
Sunday, 26 October 2025
Primary Theme: Ankit.ai blocking crisis
Key Discussion: NHS ICBs systematically blocking free, well-governed administrative tool; community rallying with technical advice, governance analysis, and frustration at systemic barriers. Detailed DPIA shared, regional variation identified (NEL not blocking), calls for national standardisation
Secondary Discussions:
Medical Director from Skin Analytics welcomed to group with discussion of liability frameworks
Pharmacist and Praktiki team joined
GP's medtutor.ai showcase with positive reception
Innovation-focused GP's "would you adopt autonomous AI GP?" poll posted evening, setting up Monday debate
Notable: Group demonstrated sophisticated understanding of governance theatre versus genuine safety concerns; new member welcomes showed inclusive, questioning culture
Monday, 27 October 2025
Primary Theme: GP replacement by AI - philosophical and practical debate
Key Discussion: Extensive overnight/morning exchange following poll (13 no, 3 yes). Explored continuity of care, performative aspects of medicine, liability questions, future timelines, and 100-iteration Claude simulation showing 2038-2040 consensus
Secondary Discussions:
Clinical coding futures with LinkedIn post sparking discussion
AI-generated email replies (poll showing 16-1 irritated)
TPP directorship changes discussed
Indemnity and liability deep dive with medical device expert
Newsletter #20 PDF shared
Notable: Community's ability to engage with existential questions whilst maintaining humour; technical demonstration of LLM uncertainty using repeated sampling
Tuesday, 28 October 2025
Primary Theme: Quiet consolidation day
Key Discussion: Minimal activity; recording shared of practice-side clinical safety work
Secondary Discussions:
Follow-up on TPP news clarifying group control retained
Brief deletions and topic uncertainty
Humour about guardrails and topic limits
Notable: Pattern of intense debate followed by quiet reflection; Tuesday served as recovery day after weekend intensity
Wednesday, 29 October 2025
Primary Theme: Government AI radiology announcement and "magic unicorns" critique
Key Discussion: Announcement about AI-powered radiological analysis funding met with immediate scepticism; "only if it's radiology reporting waiting times that are rate limiting factor" captured central flaw. Compared unfavourably to Skin Analytics' deliberate pathway design
Secondary Discussions:
NHS digital planning guidance with FDP mandate concerns
ICB Digital Lead's extensive HSJ Download analysis
Retro computing nostalgia explosion (floppy disks, DOOM, Wing Commander, Glasgow Barras, Soundblaster parrot)
GP clinical systems survey shared
AWS outage sparking local versus cloud debate
Telegram's Cocoon decentralised AI compute network
Notable: Single longest discussion thread about 1990s computing; therapeutic nostalgia serving as pressure valve after heavy policy discussions
Thursday, 30 October 2025
Primary Theme: Quiet mid-week lull
Key Discussion: Minimal activity (9 messages total); GP survey attempts, technical reading recommendations
Secondary Discussions:
Clinical Safety Expert's holiday journal reading with IEEE recommendations
Telegraph article on NHS staff sick days
Brief technical discussions
Notable: Quietest day of week; group energy clearly weekend-frontloaded
Friday, 31 October 2025
Primary Theme: Guardian article on patients relying on AI health advice
Key Discussion: Digital health specialist shared long-read about patients turning to DeepSeek when physician access limited. Discussion of AI providing time and attention impossible in current healthcare systems. Patient forum perspectives on anthropomorphised relationships with LLMs
Secondary Discussions:
Narayana Hrudayalaya acquiring UK Practice Plus Group hospitals
Nature Medicine study on LLM performance: worse in conversational consultations than exams
LLMs versus humans at clinical reasoning
In-flight musings on out-of-hours call data as training sets
Notable: Community engaging with uncomfortable reality that AI increasingly filling emotional and attentional gaps in healthcare; recognition that "wants" versus "needs" distinction becomes less clear when humans feel uncared for
Saturday, 1 November 2025
Primary Theme: Minimal Saturday activity
Key Discussion: Clinical safety expert's humorous image about plane boredom, brief discussion of GDPR and call recording permissions for training data
Secondary Discussions:
Practice-side GP turning off advanced chat privacy (end of period)
Questions about retrospective training data consent
Notable: Week ended quietly after intense debates; group preparing for next phase
Cross-Day Patterns
Recurring Themes: Innovation versus bureaucracy, future of GP work, governance theatre versus genuine safety, authentic communication in AI era, technical capability versus implementation reality
Evolution: Ankit.ai blocking (Sunday) → autonomous AI debate (Monday) → government radiology announcement (Wednesday) → patients turning to AI (Friday) showed narrative arc from systemic barriers to theoretical futures to current reality
Community Dynamics: Weekend warriors dominating activity, therapeutic nostalgia threads, new member welcomes with substantive engagement, evidence-heavy discussions with 51 URLs shared, humour as processing mechanism for difficult topics
Newsletter #21 compiled by Claude 4.5 Sonnet
Curated by Curistica
Where healthcare innovation meets reality, and floppy disks are never forgotten