🏥 AI in the NHS Newsletter #13
Issue #13 | 23-30 August 2025
"Nothing is free to use, when it's free—you're the product"
Executive Summary
This week witnessed a remarkable convergence of hope and despair as the group analysed 2.3 billion GP appointments revealing an exponential rise in online consultations, whilst confronting the implications of OpenEvidence selling user data and Claude's new data retention policies. Heated debate erupted over workforce displacement, with a reported 13% fall in US entry-level jobs attributed to AI, contrasted with members saving money on Sky subscriptions using AI negotiation tactics. The community grappled with fundamental questions about data sovereignty, economic sustainability, and whether the NHS's AI bubble will follow the dot-com trajectory.
Major Topic Sections
1. The Great Data Reckoning: When Free Isn't Free 📊
The week's most profound discussion centred on OpenEvidence's decision to monetise user data, prompting a clinical innovation lead to observe: "After Claude, OpenEvidence is selling user data. Nice!" This sparked fundamental questions about the economics of "free" AI tools in healthcare.
A digital transformation specialist crystallised the concern: "Nothing is free to use, when it's free—you're the product." The timing proved particularly pointed as Claude simultaneously announced new data retention policies: opt in and your data is retained for five years; opt out and it is kept for 30 days. One practice partner's response was visceral: "Big nope from me. Even if anonymised you wouldn't be able to guarantee your data couldn't still be used to identify you."
The discussion revealed OpenEvidence's extraordinary valuation metrics: $210m raised at $3.5bn valuation off $50m revenue with presumably negative EBITDA—a 70x revenue multiple that one venture specialist called "pretty unprecedented". As they noted: "their end game is clearly user influence and access, not ads."
This prompted deeper questions about who drives that influence. "My worry is—who is driving that influence? Pharma? Special interest groups?" asked one GP, highlighting concerns about the hidden economics of "free" clinical decision support tools.
2. The GPAD Revolution: 2.3 Billion Appointments Tell a Story 📈
A data scientist's analysis of NHS appointment data from 2018-2023 using Prophet forecasting revealed stark truths about primary care's future. Their charts showed face-to-face appointments plateauing whilst online consultations rose exponentially, prompting one GP working 90-hour weeks to comment: "If I book in every test result phone or SMS to inform of Vitamin D script, I will not be able to finish my work."
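Prophet itself is a third-party package, but the core idea behind the "exponential rise" finding can be sketched without it: fit a log-linear trend to monthly online-consultation counts and read off the implied month-on-month growth rate. The figures below are synthetic placeholders for illustration, not the real GPAD data.

```python
import math

# Synthetic monthly online-consultation counts (illustrative only):
# a clean 8% month-on-month rise over two years.
months = list(range(24))
online = [10_000 * 1.08 ** m for m in months]

# Fit y = a * exp(b * m) by least squares on log(y) (log-linear regression).
logs = [math.log(y) for y in online]
n = len(months)
mean_x = sum(months) / n
mean_y = sum(logs) / n
b = sum((x - mean_x) * (ly - mean_y) for x, ly in zip(months, logs)) \
    / sum((x - mean_x) ** 2 for x in months)

monthly_growth = math.exp(b) - 1  # implied month-on-month growth rate
print(f"fitted monthly growth: {monthly_growth:.1%}")
```

A near-zero fitted `b` on the face-to-face series would correspond to the plateau the charts showed; a clearly positive `b` on the online series is what "exponential growth" means in practice.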
The data exposed fundamental flaws in how the NHS measures activity. As one primary care lead explained: "GPAD is an underestimate as it looks at appointments booked and not consultations done." The bigger problem emerged through discussion—"appointment" itself has become an outdated term. One veteran GP reflected: "When I started everything was an appointment... Now so many interactions take place via SMS, Email, OC, NHS App etc. Are all of these appointments? Should they be?"
Despite known issues with both EMIS and TPP extracts, GPAD remains the metric NHSE regions and CQC use for assessment. One digital lead's frustration was palpable: "The London region's ranked tables of best and worst performing practices per ICB is enough to ruin anyone's day at the quality of the data they're using."
The statistics revealed primary care delivering 50% more appointments per 1,000 patients than in 2019, yet still facing criticism over access, prompting one exasperated voice: "General practice has done its bit, would be nice if the rest of the NHS could do the same before complaining at GPs."
3. The Employment Apocalypse: When AI Eats the Middle 💼
A sobering statistic emerged: a reported 13% reduction in US entry-level jobs due to AI, in the country leading AI R&D. "Imagine the impact on UK where we are only consuming GenAI," warned one digital strategist. The implications for healthcare proved particularly stark: 4,000 GPs are currently unable to find work, before AI's full impact has even hit.
One participant's prediction chilled the room: "The employment changes AI brings could easily be as massive as during the Industrial Revolution. The governments then took nearly two decades to realise they had to step up and do something with the left-behinds of society." The twist? "Except this time the changes will overwhelmingly impact those in offices, junior professionals, and 'middle class' occupations."
The music industry parallel emerged as potentially prophetic. One innovation lead suggested: "If you look at the music industry and how when their product became free (Napster) they evolved and changed—I suspect we'll see the same play out in healthcare." The fundamental question: "What are you willing to pay for and why?"
The bubble-burst narrative gained traction, with parallels to the dot-com era becoming unavoidable. As one member noted: "Following the path exactly of the dot com bubble." Yet optimism persisted: "But there will be real winners and entirely new industries born out of it—Amazon for example."
Enhanced Statistics Section 📊
Activity Metrics
Total message count: 412 messages across the week
Peak activity: morning of 30th August (GPAD data discussion)
Quietest period: Bank Holiday Monday afternoon
Top 5 Contributors
Data Analytics GP (38 messages) - "The 90-hour warrior revealing uncomfortable truths through Prophet"
Clinical Innovation Lead (35 messages) - "The compliance champion calling out data monetisation"
Digital Transformation Specialist (31 messages) - "Bridge between hope and dystopia"
Primary Care Partner (28 messages) - "Voice of frontline exhaustion and practical wisdom"
Technical Architect (26 messages) - "Reality checker on AI economics and sustainability"
Hottest Debate Topics
OpenEvidence data monetisation - 52 messages
GPAD methodology and flaws - 47 messages
Employment displacement by AI - 41 messages
Claude's data retention policies - 35 messages
AI bubble vs dot-com comparisons - 32 messages
Discussion Quality Metrics
Evidence-based contributions: 48% (papers cited, data analysed)
Cross-expertise engagement: 9/10 (clinical, technical, economic perspectives)
Constructive debate score: 8/10 (disagreements remained professional despite high emotions)
Gallows humour index: Peak levels detected
Lighter Moments 😄
The week's emotional support came via unexpected quarters. One member's annual Sky negotiation ritual, now AI-enhanced, saved enough "to get some good treats for the family!" Their joy was infectious: "Loving this use of AI!!" proving that sometimes the revolution arrives via the TV bill.
The sight of a Forbes contributor questioning Forbes's reliability sparked a minor existential crisis: "I can say it, I have a Forbes page 😂"
When discussing suspicious healthcare apps, the community's detective skills emerged. Companies House searches revealing "income and assets of £0" and missing privacy notices prompted the understated observation: "Red flags..."
The week's philosophical peak came when discussing a chatbot-related tragedy, with one member noting the meta-irony of AI potentially both being implicated in the harm and drafting the resulting legal action.
Quote Wall 💬
"Nothing is free to use, when it's free—you're the product" - On the economics of AI tools
"I don't have a problem with companies selling data. I just want users to make informed choices & that also include businesses like NHS" - On data transparency
"Following the path exactly of the dot com bubble" - Historical perspective
"General practice has done its bit, would be nice if the rest of the NHS could do the same" - Frustration with asymmetric criticism
"Forbes is not a reliable source tbh (I can say it, I have a Forbes page 😂)" - Self-aware media criticism
"If I book in every test result phone or SMS... I will not be able to finish my work in the 90 hours I do each week" - The reality of modern general practice
"appointment is an outdated term" - Recognising systemic measurement failures
"We are only consuming GenAI" - UK's position in the AI economy
Journal Watch 📎
Academic Papers & Key Studies
📎 Chain-of-Thought Reasoning Study - https://t.co/w51SvO6C2V
Interesting examination of how prompts influence accuracy in chain-of-thought (CoT) reasoning approaches. Directly relevant to clinical decision support implementations and the importance of prompt engineering in healthcare AI applications.
📎 MIT Report on GenAI Pilot Failures - https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
Sobering analysis showing 95% of corporate GenAI pilots failing. However, the 5% that succeeded shared common characteristics: clear use cases, integrated workflows, and realistic expectations. Essential reading for NHS digital teams.
📎 Healthcare AI Struggles Analysis - https://www.medscape.com/viewarticle/why-artificial-intelligence-healthcare-still-struggles-2025a1000muu
Comprehensive examination of why AI adoption in healthcare remains challenging, supporting group discussions about the gap between promise and reality.
Industry Articles & News
📎 OpenEvidence Data Monetisation - LinkedIn post by Rik Renard
Revelation that OpenEvidence is selling user insights to pharmaceutical companies and medical device manufacturers. Raises critical questions about the sustainability of "free" clinical AI tools and the hidden economics of decision support systems.
📎 News Agents: AI Economic Bloodbath - https://music.amazon.co.uk/podcasts/4f8a3bd0-c447-4fa4-8d5a-b1070d2c532f
Podcast exploring potential economic disruption from AI, particularly relevant to discussions about middle-class job displacement and the need for proactive policy responses.
📎 CNN: 23andMe Data Risk - https://edition.cnn.com/2025/03/25/tech/23andme-bankruptcy-how-to-delete-data
Timely warning about genetic data vulnerability when companies face financial difficulties. Parallels drawn to AI companies holding healthcare data raise important governance questions.
📎 Chatbot Tragedy Coverage - https://archive.ph/4IYc9
Disturbing case of chatbot involvement in murder-suicide, highlighting urgent need for safety protocols and ethical guidelines in consumer-facing AI applications.
Technical Resources & Guidelines
📎 Patchs Auto-Booking Integration - https://help.patchs.ai/hc/en-gb/articles/24442486604823
Solution for automatically booking online consultations into clinical system diaries, addressing GPAD recording challenges discussed extensively this week.
📎 Prophet Time Series Analysis - Referenced in GPAD analysis
Statistical forecasting tool used to analyse 2.3 billion GP appointments, revealing exponential growth in online consultations against a plateau in face-to-face appointments.
📎 Claude Data Retention Policies - Anthropic announcement
New opt-in/opt-out framework with 5-year versus 30-day retention periods. Critical reading for any organisation using Claude for healthcare applications.
Policy Documents & Reports
📎 GPAD Methodology Documentation - NHS England
Official guidance on General Practice Appointment Data collection, though group consensus suggests fundamental flaws in methodology failing to capture modern consultation patterns.
📎 Entry-Level Employment Statistics - Via Twitter/X
A reported 13% reduction in US entry-level positions attributed to AI, with implications for UK workforce planning and the 4,000 GPs currently seeking employment.
Looking Ahead 🔮
Unresolved Questions
Will "free" AI tools in healthcare survive without data monetisation?
Can GPAD methodology evolve to capture true primary care activity?
How will the NHS respond to exponential growth in online consultations?
Is the UK destined to remain an AI consumer rather than producer?
Emerging Themes
Data sovereignty becoming existential issue for healthcare AI
Workforce displacement moving from theoretical to immediate threat
Measurement systems failing to capture modern healthcare delivery
Economic sustainability of AI tools under scrutiny
Trends to Watch
Growing interest in alternatives to commercial AI platforms
Potential shift towards open-source and locally-hosted solutions
Increased scrutiny of data retention and usage policies
Group Personality Snapshot 🎭
This week revealed a community at an inflection point—simultaneously exhausted by systemic failures and energised by possibilities. The ability to conduct sophisticated data analysis (2.3 billion appointments!) whilst maintaining gallows humour about 90-hour weeks demonstrates remarkable resilience.
The group's superpower remains its refusal to accept comfortable lies. When told AI tools are "free," members immediately identify the hidden costs. When shown impressive valuations, they calculate the unsustainable multiples. When promised efficiency, they count their actual hours.
What makes this community unique is its combination of technical sophistication with frontline experience. Members don't just theorise about AI's impact—they live it, measure it, and call out the disconnects between marketing and reality.
As one member perfectly captured: "Nothing is free." In this community, that's not cynicism—it's wisdom earned through experience.
Next week's newsletter will examine whether humans remain necessary for newsletter writing (early indicators suggest yes, if only for the gallows humour). Keep questioning the "free" tools, and remember—in healthcare AI, if you're not paying for the product, your patients probably are.
Newsletter compiled from 412 messages between midday 23rd August and midday 30th August 2025
Edited for clarity, anonymised for privacy, seasoned with appropriate scepticism about "free" lunches