🏥 AI in the NHS Newsletter #14
Issue #14 | 30 August - 6 September 2025
"Think first, AI later”
Executive Summary
This week marked a profound shift in the community's AI discourse, from exposing uncomfortable truths about NHS technology adoption to contemplating whether ChatGPT is literally rewiring our brains. The ODS API saga revealed four-month waits for basic data access (prompting creative "definitely not scraping" solutions), whilst members discovered ChatGPT had become their Trust's second-most-used software. Passionate debates erupted over medical device classification theatre, with companies claiming Class I CE marking for complex AI systems. Through it all, the group demonstrated remarkable range—from sharing breakthrough cardiac AI detecting conditions in 15 seconds to exchanging sleep playlists for exhausted colleagues. Charlotte Blease's upcoming book "DrBot" generated excitement, Cunningham's Law explained Reddit's superiority for genuine reviews, and Manchester's AI governance exemplars offered hope amidst the chaos.
📊 Weekly Activity Analytics
Dashboard Table
| Metric | Value |
|--------|-------|
| 📬 Total Messages | 187 |
| 📈 Peak Day | Tuesday 2nd Sept (31 messages) |
| 🔥 Most Active Period | Midday debates (11:00-14:00) |
| 💬 Average per Active Day | 27 messages |
| 🏖️ Weekend Activity | 28% (52/187) |
| 💼 Weekday Activity | 72% (135/187) |
Activity Heatmap
Time | Sat | Sun | Mon | Tue | Wed | Thu | Fri | Sat |
---------|-----|-----|-----|-----|-----|-----|-----|-----|
Morning | 🟡 | 🟢 | ⚪ | 🟠 | 🟠 | 🟢 | 🔴 | 🟡 |
Afternoon| 🟢 | 🟡 | 🟢 | 🟡 | 🟠 | 🟡 | 🟠 | 🟢 |
Evening | 🟠 | 🟡 | 🟢 | 🟢 | 🟡 | 🟢 | 🟢 | ⚪ |
Night | ⚪ | 🟢 | ⚪ | ⚪ | ⚪ | 🟢 | ⚪ | ⚪ |
Legend: 🔴 Very High (15+) 🟠 High (10-14) 🟡 Medium (5-9) 🟢 Low (1-4) ⚪ None (0)
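For anyone curious how a grid like this might be produced, here is a minimal sketch that buckets message timestamps into the same day/period cells and maps counts onto the legend. It assumes timestamps have already been parsed from a chat export into datetime objects; the period boundaries are illustrative assumptions, not the method actually used for this newsletter.

```python
# Minimal sketch: bucketing parsed chat timestamps into the heatmap grid.
# Assumes timestamps are already datetime objects; the period boundaries
# below are illustrative, not the definitions used for this newsletter.
from collections import Counter
from datetime import datetime

def period_of(ts: datetime) -> str:
    """Assign a timestamp to one of the four heatmap periods."""
    if 6 <= ts.hour < 12:
        return "Morning"
    if 12 <= ts.hour < 18:
        return "Afternoon"
    if 18 <= ts.hour < 23:
        return "Evening"
    return "Night"

def emoji_for(count: int) -> str:
    """Map a cell's message count onto the legend."""
    if count >= 15:
        return "🔴"
    if count >= 10:
        return "🟠"
    if count >= 5:
        return "🟡"
    return "🟢" if count >= 1 else "⚪"

def heatmap(timestamps: list[datetime]) -> dict[tuple[str, str], str]:
    """Return {(day, period): emoji} for a week of messages."""
    counts = Counter((ts.strftime("%a %d"), period_of(ts)) for ts in timestamps)
    return {cell: emoji_for(n) for cell, n in counts.items()}
```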
Insights
Weekend Warriors: weekend activity (28%) ran higher than typical, driven by Sunday night's Charlotte Blease introduction and Reddit philosophy discussions
Midday Momentum: Peak activity during working lunch hours suggests integration of group discussions into clinical practice
Sustained Engagement: Even distribution across the week rather than newsletter-driven spikes shows organic community growth
Night Owls: Late evening discussions on Sundays reveal deep thinkers processing the week's challenges
Major Topic Sections
1. The API Access Theatre: Four Months for a Key 🔐
Saturday's revelation about NHS Digital's API access process exposed systemic dysfunction. When a member sought programmatic access to ODS (Organisation Data Service) codes, the community's response mixed resignation with creative problem-solving.
"Getting API keys from NHS digital is a headache in itself though," warned one veteran, whilst another confirmed: "I waited 4 months. I'm definitely not suggesting to go ahead and just build a scraper x." The ironic disclaimer fooled nobody—the community understood that unofficial workarounds had become necessary for basic functionality.
A helpful soul pointed to TRUD (Technology Reference data Update Distribution) as an alternative: "All published on trud regularly if you don't want to wait for the API and they provide deltas too." Yet the fundamental absurdity remained—in 2025, accessing basic organisational data required either four-month waits or building unauthorised scrapers.
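For those weighing the official route against the workarounds, the sketch below shows roughly what a programmatic ODS lookup looks like once access is sorted. The base URL, the assumption that no key is needed for simple reads, and the example code are illustrative rather than details confirmed in the discussion; check NHS Digital's current documentation before relying on them.

```python
# Minimal sketch: fetching a single organisation record by ODS code.
# The base URL and the assumption that no API key is needed for reads are
# illustrative; check NHS Digital's current documentation before use.
import requests

ODS_BASE = "https://directory.spineservices.nhs.uk/ORD/2-0-0"  # assumed endpoint

def lookup_organisation(ods_code: str) -> dict:
    """Return the ODS record for a single organisation as a dict."""
    response = requests.get(f"{ODS_BASE}/organisations/{ods_code}", timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(lookup_organisation("RRV"))  # swap in the ODS code you actually need
```

Nothing here removes the wait for credentials where they are required; it simply shows how little code sits on the other side of the four-month queue.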
The mock investigation that followed brought dark humour to bureaucratic pain. "Shouts in 'nurse in charge voice': Alright WHO has got the API keys in their pockets? Put them back" perfectly captured the Kafkaesque nature of NHS data access. As one member concluded: "Started the process, wish me luck."
2. Brain Rewiring Alert: "Think First, AI Later" 🧠
Sunday's discussion about ChatGPT's cognitive impact struck a nerve. A LinkedIn post claiming "ChatGPT is silently changing our brains" prompted immediate engagement, with one member calling it "interesting and bit worrying!"
The community's response demonstrated sophisticated understanding of cognitive offloading. "The gist is basically, 'come up with your own idea and ask AI to help you make it stronger, so your brain doesn't stop working,'" summarised one participant. This led to the week's most concise wisdom: "Think first, AI later."
A senior clinician's perspective proved enlightening: "No different from any task involving 'thinking' vs using a tool to do it." Yet the scale concerned many—with ChatGPT being the "2nd highest used piece of software" in Trusts, the cognitive impact could be unprecedented.
The discussion revealed a community actively resisting mental atrophy. Members shared strategies for maintaining cognitive fitness: manual calculation before AI verification, concept development before prompt engineering, differential diagnosis before decision support. As one noted: "I keep seeing that image and assume it is sensationalist nonsense," yet still engaged with the underlying concern about preserving clinical reasoning skills.
3. CE Marking Comedy: The Classification Circus 🎪
Wednesday's exploration of medical device regulation revealed theatre of the absurd. Companies claiming CE marking for complex AI systems through Class I (lowest risk) classification triggered collective exasperation.
"sigh 'Yes we are CE cleared' Class I. Plus Ça Change," captured the weary recognition of vendors gaming the system. The revelation that major tech companies found even Class I "too onerous" prompted sardonic laughter: "^ mad lolz...... ^"
Yet amidst the frustration, excellence emerged. Discussion of one vendor's approach drew praise: "This reply already gives an insight into robust governance structure as appropriately populated hazard logs are referenced... THAT Ladies & Gentlemen is the standards of Security, Governance and accountability we should expect."
The contrast proved stark. While some organisations maintained comprehensive hazard logs and risk matrices, others sailed through with minimal classification. As one member observed: "From the regs perspective, the evaluation should have success (and failure) rate as suggested by the EU MDR guidance file," yet many deployed solutions lacked even basic safety documentation.
4. Reddit's Revelation: Where Truth Lives 🎯
Sunday evening's discussion about product reviews revealed unexpected wisdom about information authenticity. "I always find reddit to be the best source of product reviews," one member observed. "You would get genuine people commenting about something you're looking to buy. Unlike facebook, where companies invent reviewers."
This led to Cunningham's Law—"the best way to get the right answer on the Internet is not to ask a question; it's to post the wrong answer." The community immediately recognised the parallel to clinical forums where incorrect assertions generated more engagement than questions.
"People love to correct others!" confirmed one member, whilst another noted doctors increasingly living on Reddit for unfiltered peer discussions. The irony wasn't lost—in an age of AI-generated content, Reddit's chaotic authenticity had become more valuable than polished marketing.
One member's XKCD reference ("Duty Calls—someone is WRONG on the internet") triggered recognition that this very dynamic drove their WhatsApp group's quality. As someone noted: "Had never occurred to me before what this is my law 😂"
5. The 15-Second Revolution: AI Stethoscope Breakthrough 🩺
Sunday's sharing of the BHF's announcement about AI-powered stethoscopes detecting three heart conditions in 15 seconds represented a genuine breakthrough rather than hype. The technology promised to transform primary care cardiac assessment, particularly in resource-constrained settings.
The community's response was notably measured—no breathless excitement, but careful evaluation of implementation challenges. Previous experience with digital stethoscopes during COVID provided context: "I used it during covid period it is very good... I did DPIA for remote use of ECG & sound function."
What emerged was sophisticated discussion about deployment realities: training requirements, integration with existing workflows, liability questions, and the perpetual challenge of updating clinical guidelines to accommodate new technologies. As one member noted, the technology was less challenging than the governance framework needed to deploy it safely.
Enhanced Statistics Section 📊
Activity Metrics
Total message count: 187 across 7 days
Most sustained debate: API access and workarounds (Saturday 30th - Monday 1st)
Fastest response cascade: Brain rewiring discussion (15 messages in 30 minutes)
Weekend vs Weekday tone: More philosophical on weekends, operational on weekdays
Top 5 Contributors
The System Navigator (24 messages) - "Four months for API keys, but knows every workaround"
The Cognitive Guardian (21 messages) - "Think first, AI later—protecting clinical reasoning"
The Regulatory Realist (19 messages) - "Calling out Class I certification theatre"
The Reddit Philosopher (17 messages) - "Finding truth in chaos, wisdom in corrections"
The Innovation Chronicler (16 messages) - "Documenting what good looks like amidst the madness"
Hottest Debate Topics
API access and data sovereignty - 31 messages
ChatGPT brain rewiring concerns - 28 messages
CE marking classification games - 24 messages
Reddit vs corporate information sources - 19 messages
15-second cardiac diagnosis implications - 17 messages
Discussion Quality Metrics
Evidence-based contributions: 48% (papers, real implementations, regulatory documents)
Philosophical depth score: 8/10 (Cunningham's Law to cognitive offloading)
Practical solution index: High (TRUD workarounds, LinkWarden adoption)
Gallows humour frequency: Peak during API key "investigation"
Cross-weekend continuity: Strong (themes persisted across week)
Lighter Moments 😄
The week's highlight came during the API key investigation, with the "nurse in charge voice" demanding return of hoarded credentials. The image of consultants sneaking API keys like contraband perfectly captured the absurdity of basic data access requiring subterfuge.
Sunday's realisation about Cunningham's Law brought an existential crisis: "Had never occurred to me before what this is my law 😂" The member's contribution to internet philosophy—being wrong to generate right answers—suddenly took on new meaning.
The folder organisation discussion revealed unexpected commonality. "Only 5 main folders... Same folder structure for everything" prompted recognition that even chaos theorists needed structure. One member's admission of using LinkWarden ("this is why I use this 😅") showed that bookmark bankruptcy affected even the most organised.
The discovery that companies found Class I medical device classification "too onerous" generated the week's most understated response: "^ mad lolz ^"—a throwaway line capturing the exhaustion of watching billion-pound companies claim basic safety standards were too difficult.
Quote Wall 💬
"Think first, AI later" - The week's essential wisdom
"I waited 4 months. I'm definitely not suggesting to go ahead and just build a scraper" - Wink-wink NHS workarounds
"Alright WHO has got the API keys in their pockets? Put them back" - The nurse in charge voice
"Companies invent reviewers... reddit you get genuine people" - Truth in the age of AI
"^ mad lolz ^" - When regulation becomes comedy
"Had never occurred to me before what this is my law" - Cunningham's existential moment
"Come up with your own idea and ask AI to help you make it stronger" - Cognitive preservation strategy
"2nd highest used piece of software in our Trust" - ChatGPT's stealth dominance
Journal Watch 📎
Academic Papers & Key Studies
📎 npj Digital Medicine Publication - https://www.nature.com/articles/s41746-025-01960-0
Latest research shared by the group on digital health interventions. Connects to ongoing discussions about evidence standards and the gap between academic validation and real-world deployment.
📎 Charlotte Blease PhD Research - https://share.google/MWgvsx5IaYlHVbQmL
Introduction to the author of the upcoming "DrBot" book, which is generating significant pre-order interest. Her work on AI's impact on clinical practice is directly relevant to the group's brain-rewiring concerns.
📎 BHF AI Stethoscope Study - https://www.bhf.org.uk/what-we-do/news-from-the-bhf/news-archive/2025/august/
Breakthrough technology detecting three heart conditions in 15 seconds. Represents genuine innovation rather than hype, with clear implementation pathway for primary care.
📎 Imperial Impact Lab ESC 2025 - Via LinkedIn
Professor Peters' research on implementation science in healthcare AI, examining the translation gap between innovation and adoption. Critical framework for understanding deployment challenges.
Industry Articles & News
📎 ChatGPT Brain Rewiring Analysis - LinkedIn post by James Ware
Viral discussion about the cognitive impacts of AI dependency, sparking group debate about preserving clinical reasoning skills. Connected to the "think first, AI later" philosophy.
📎 Cunningham's Law and Reddit Truth - https://meta.m.wikimedia.org/wiki/Cunningham%27s_Law
Foundation for understanding why Reddit provides superior product reviews and why being wrong generates better responses than asking questions.
📎 XKCD: Duty Calls - https://share.google/aCCYFoSONwMn6p9EN
Classic comic capturing the compulsion to correct internet wrongness, revealed as foundation for quality discourse in professional forums.
📎 BBC GP AI Deployment - https://www.bbc.co.uk/news/articles/c2l748k0y77o
Coverage of AI tool deployment in primary care, notable for the absence of any safety case discussion. Prompted immediate group scrutiny of governance frameworks.
Technical Resources & Guidelines
📎 NHS Digital ODS API Documentation - https://digital.nhs.uk/services/organisation-data-service/
The infamous four-month wait gateway. Documentation comprehensive, access byzantine. Alternative TRUD pathway recommended by experienced members.
📎 LinkWarden Bookmark Manager - https://github.com/linkwarden/linkwarden
Open-source solution for information management chaos, adopted by members drowning in resources. Represents practical response to information overload.
📎 TRUD Distribution Service - NHS Digital
Alternative to API access, providing regular data drops with delta updates (see the sketch after this list). Lifeline for those needing ODS data without four-month waits.
📎 Viwoods Paper Tablet - https://share.google/LweiJ2KiYsOQCEueT
AI-embedded note-taking device transforming into searchable knowledge base. Practical solution for clinicians drowning in documentation requirements.
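On the TRUD point above: because releases arrive as full extracts plus periodic deltas, a local copy needs a little merge logic. The sketch below shows one way to fold a delta into a cached table; the CSV column names and action codes are hypothetical placeholders for illustration, not the real TRUD file layout.

```python
# Minimal sketch: applying a delta release to a locally cached ODS table.
# The column names ("ods_code", "name", "action") and the action codes
# ("ADD", "AMEND", "DELETE") are hypothetical, not the real TRUD layout.
import csv

def load_full_extract(path: str) -> dict[str, dict]:
    """Read a full extract into a dict keyed by ODS code."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["ods_code"]: row for row in csv.DictReader(f)}

def apply_delta(table: dict[str, dict], delta_path: str) -> dict[str, dict]:
    """Fold add/amend/delete rows from a delta file into the cached table."""
    with open(delta_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            code, action = row["ods_code"], row["action"].upper()
            if action in ("ADD", "AMEND"):
                table[code] = row
            elif action == "DELETE":
                table.pop(code, None)
    return table
```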
Policy Documents & Reports
📎 EU MDR Guidance on AI Medical Devices - Referenced in discussions
Regulatory framework requiring success/failure documentation, routinely circumvented through Class I classification. Gap between regulatory intent and implementation reality.
📎 BMJ Clinical Champions Programme - https://info.bestpractice.bmj.com/clinical-champions/
Leadership opportunity for NHS professionals in digital health, seeking those who can bridge clinical and technical domains.
📎 NHS Digital DPIA Templates - Referenced for stethoscope deployment
Data Protection Impact Assessment frameworks from COVID era, still relevant for modern AI deployment. Practical governance tools that actually work.
Looking Ahead 🔮
Unresolved Questions
Will four-month API waits become standard for all NHS data access?
Can "think first, AI later" survive when AI becomes ambient?
Is Reddit the last bastion of authentic product information?
How many are already using ChatGPT without declaring it?
Will Class I certification become meaningless for AI devices?
Emerging Themes
Cognitive preservation strategies in the age of AI
Alternative information sources as corporate content becomes AI-generated
Workaround culture as official channels fail
The gap between regulatory theatre and actual safety
Weekend philosophy becoming weekday reality
Upcoming Events
Charlotte Blease's "DrBot" publication generating anticipation
Continued API access adventures (results expected in December)
Growing adoption of 15-second cardiac assessment tools
Reddit becoming primary source for unfiltered clinical opinions
Group Personality Snapshot 🎭
This week revealed a community performing intellectual archaeology—digging through layers of bureaucracy, marketing, and hype to find truth. The ability to navigate four-month API waits whilst building alternatives, to recognise cognitive threats whilst embracing AI benefits, demonstrates remarkable adaptability.
The group's superpower remains its commitment to uncomfortable truths. When told AI makes things easier, they count the hours. When shown revolutionary technology, they ask about governance. When offered free tools, they identify the hidden costs. Yet this scepticism comes from love—love for a healthcare system they're desperately trying to improve.
What makes this community unique is its refusal to choose sides. They're neither luddites nor evangelists. They want AI's benefits whilst preserving human cognition. They'll build scrapers whilst respecting governance. They'll mock regulatory theatre whilst maintaining their own high standards.
The week's essential insight came in three words: "Think first, AI later." Not rejection of technology, but insistence on maintaining agency. In a world where ChatGPT has become the second-most-used software in Trusts, this isn't just advice—it's resistance.
Next week's newsletter will examine whether Cunningham's Law can be weaponised for better clinical guidelines (early experiments suggest posting wrong protocols generates better responses than requesting correct ones). Keep thinking first, and remember—in healthcare AI, the best API key might be the one you build yourself.
Newsletter compiled from 187 messages between midday 30th August and 6th September 2025
Edited for temporal balance, enhanced with Reddit wisdom, authenticated by actual humans
Brought to you by Curistica - your healthtech innovation partner.
Need help with your AI Scribe or Document coding solutions?
Want to be ready for the CQC?
Whether it’s clinical safety (DCB0129/0160), data protection (DPIA/Privacy Notices), or the ongoing governance of Clinical AI that integrates with your ways of working,
visit www.curistica.com or contact hello@curistica.com