
This pre-Christmas week saw 415 messages across eight days, with Monday delivering a remarkable 122-message peak as the community dissected Microsoft Copilot's enterprise value proposition and the thorny question of medical device classification. The Palantir-powered FDP discharge summary tool faced renewed regulatory scrutiny after an HSJ article highlighted concerns from the Patient Safety Commissioner, whilst a much-anticipated AVT comparison webinar brought together four scribe providers for what one member called "excruciating" competitor questioning. As the year winds down, the group compiled an impressive Christmas reading list and reflected on the enduring gap between AI promise and NHS implementation reality—all whilst debating whether consultation audio recordings represent medicolegal protection or privacy overreach.
Major Topics
The Copilot Conundrum: When AI Investment Meets Reality
Monday morning ignited with a viral satirical post about Microsoft Copilot deployment that resonated deeply with the community. The piece—describing a hypothetical executive who rolled out Copilot to 4,000 employees at £1.4 million annually only to discover 47 people had opened it—sparked a vigorous debate about AI procurement in healthcare settings.
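Taken at face value, the satire's numbers are easy to sanity-check. A minimal back-of-envelope sketch, using only the figures quoted in the post (none independently verified):

```python
# Back-of-envelope check on the satirical Copilot figures.
# All inputs come from the post itself and are not verified.
annual_cost = 1_400_000   # £1.4m annual licence spend
seats = 4_000             # employees provisioned
opened = 47               # users who ever opened the tool

adoption = opened / seats                  # ~1.2% of seats
cost_per_opener = annual_cost / opened     # ~£29,787 each
print(f"Adoption: {adoption:.1%}; cost per user who opened it: £{cost_per_opener:,.0f}")
```

Roughly £30,000 a year for every user who so much as opened the tool, which is why the post landed.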
"I really do think that a lot of AI sales are not far off this. Senior folk shouting 'AI is the future!' while not even understanding what Excel does." — Healthcare Systems Analyst
The conversation evolved into broader concerns about AI investment without proper adoption strategies. Several members noted the gap between the "AI enablement" metrics organisations create and actual productivity gains. An NHS digital lead observed that the most common phrase for 2025/26 would be "fund from existing resources"—a wry acknowledgement that AI ambitions rarely come with matched budgets.
The satire touched a nerve because it reflects genuine tensions in NHS AI procurement: pressure to appear innovative, difficulty measuring genuine value, and the challenge of driving adoption in overstretched clinical environments.
Device Classification Wars: Palantir's FDP Under the Microscope
The week's most substantive regulatory debate centred on Palantir's Federated Data Platform discharge summary tool following an HSJ article revealing concerns raised by the Patient Safety Commissioner. A digital transformation lead shared key excerpts:
"Members challenged the design of the AI discharge summary tool, questioning whether the tool had been classified as a medical device by the MHRA. One GP member expressed particular concern on the product being considered Class I."
The group engaged in a nuanced discussion about appropriate device classification. A medical device specialist from a Class IIb-certified imaging AI company pointed out that the "generating draft summaries to be reviewed by a clinician" defence used by the FDP team would not exempt their own product from higher classification—suggesting inconsistent regulatory interpretation.
The debate revealed ongoing industry confusion about where the line sits between administrative assistance and clinical decision support. A cardiologist and clinical safety officer noted that pulling information "sight unseen" from EHRs constitutes autonomous functionality requiring higher regulatory scrutiny: "Class II for autonomous use cases of AI like this is the most likely outcome."
AVT Showdown: Four Scribes Enter, Awkward Questions Ensue
Wednesday's AVT comparison webinar, hosted by a prominent GP educator, brought together Accurx Scribe (powered by Tandem), Surgery Intellect (powered by Tortus), Heidi, and Lexacom for a live demonstration. The community's reaction ranged from genuinely impressed to thoroughly entertained.
The highlight—generating considerable pre-event anticipation—was the "killer question" suggested by a health tech entrepreneur: "If you couldn't use your own product, which competitor would you use and why?" One viewer described the moment as "excruciating to watch."
Beyond the theatre, substantive questions emerged about personalisation. A GP pioneer shared his experience of stopping AVT use after two weeks because the verbose output made scrolling through EMIS records impractical:
"The text was too verbose so I could no longer scroll down EMIS to see what consultations were about without having to open each one. Also the style was not as I would write it."
This raised the thorny question of whether AVTs could learn individual documentation styles—and the data protection implications of storing clinician writing samples to enable such learning.
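On the technical side, the most obvious route to style personalisation is also the one that creates the data-protection problem: conditioning the model on stored samples of the clinician's own notes. A minimal sketch, assuming a plain few-shot prompting approach (the sample notes and prompt wording below are hypothetical, not any vendor's method):

```python
# Illustrative few-shot style conditioning for an AI scribe. The stored
# samples are the clinician's own prior notes, i.e. exactly the writing
# data whose retention raises the data-protection question above.
STYLE_SAMPLES = [
    "c/o sore throat 3/7. O/E tonsils NAD. Adv fluids, safety-netted.",
    "BP 142/88. Med review done. Cont amlodipine. Rv 4/52.",
]

def build_prompt(transcript: str) -> str:
    """Assemble a prompt asking for a note in the clinician's terse style."""
    shots = "\n\n".join(f"Example note:\n{s}" for s in STYLE_SAMPLES)
    return (
        "Write a consultation note in the same terse style as these examples.\n\n"
        f"{shots}\n\n"
        f"Transcript:\n{transcript}\n\nNote:"
    )

print(build_prompt("Patient reports two weeks of right knee pain after running..."))
```

Whether the samples sit on a vendor's servers or stay on-device changes the governance picture entirely; the sketch simply shows why some store of the clinician's writing has to exist somewhere.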
Medical Device Drift: Defining the Indefinable
A practice-level discussion about Heidi AI "suggesting diagnoses" despite not being licensed for diagnostic support spawned a broader terminological debate. A digital health and clinical AI specialist proposed a working definition:
"Medical Device Drift describes the emergent phenomenon where GenAI-powered medical devices autonomously generate outputs or functionality beyond their documented intended purpose, without any change to the software version or user behaviour."
An AI safety book author pushed back on imprecise terminology, noting that "algorithmic drift" and "model drift" are often misused: "Incorrect use of these phrases does get my goat." For fixed-code imaging AI, the concern is less about device drift and more about human drift—clinicians becoming over-concordant with AI suggestions over time.
A GP founder offered a characteristically direct summary: GenAI-based devices should be Class IIa/b before release.
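Whatever the right classification, deployers still face the practical question of how they would notice drift at all. Purely as an illustration (the phrase list, the example note, and the routing step are all hypothetical, not any vendor's actual safeguard), a crude post-hoc scope check over scribe output might look like this:

```python
import re

# Hypothetical "scope check": flag generated notes containing phrases that
# stray into diagnostic territory the device is not licensed for.
# Phrase list and handling are illustrative only.
OUT_OF_SCOPE_PATTERNS = [
    r"\blikely diagnosis\b",
    r"\bdifferentials? include\b",
    r"\bconsistent with\b",
    r"\brecommend (?:starting|prescribing)\b",
]

def out_of_scope_hits(note: str) -> list[str]:
    """Return the out-of-scope phrases found in a generated note."""
    return [p for p in OUT_OF_SCOPE_PATTERNS if re.search(p, note, re.IGNORECASE)]

note = "Reports chest tightness on exertion; likely diagnosis is stable angina."
hits = out_of_scope_hits(note)
if hits:
    print(f"Scope alert ({len(hits)} match): hold for clinical review and log it")
```

A keyword screen is obviously blunt, but even this level of monitoring would surface the kind of out-of-licence diagnostic suggestions the practice discussion started from.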
EHR Quality: The Elephant in Every AI Room
Thursday's discussion about an apparent AI hallucination in a US EHR summarisation tool sparked reflection on underlying data quality challenges. A cardiologist and clinical trialist shared sobering statistics:
"Used to have to review these for clinical trials and can 100% say most summaries we got contained at least 2-3 significant errors—sometimes more. And these were generally healthy volunteers."
A health tech entrepreneur reinforced the point: "EHR data is far too noisy. Summarising from there programmatically is an exceptionally hard task."
The discussion illuminated a fundamental tension: AI tools require high-quality input data, but NHS records often contain working diagnoses that may later be disproven, incomplete documentation, and inconsistent coding. One analyst observed that "if absence of sinus rhythm documentation was considered AF," then AF appeared protective in heart failure—a statistical artefact of missing data rather than genuine clinical signal.
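To see how that artefact arises mechanically, consider a deliberately toy simulation (every rate below is invented) in which the flawed rule codes any record without documented sinus rhythm as AF. Because thinner records skew towards healthier patients in the toy model, coded AF ends up looking protective even though true AF raises mortality:

```python
import random

random.seed(0)

# Toy model: sicker patients get fuller documentation; true AF raises
# mortality. The flawed rule codes every undocumented record as AF.
rows = []
for _ in range(200_000):
    severity = random.random()                               # 0 = well, 1 = very unwell
    true_af = random.random() < 0.30
    documented = random.random() < 0.20 + 0.70 * severity    # sicker => more notes
    died = random.random() < 0.05 + 0.25 * severity + (0.05 if true_af else 0.0)
    coded_af = true_af if documented else True               # the flawed coding rule
    rows.append((true_af, coded_af, died))

def mortality(idx: int, value: bool) -> float:
    group = [r for r in rows if r[idx] == value]
    return sum(r[2] for r in group) / len(group)

print(f"True AF  : {mortality(0, True):.1%} vs non-AF {mortality(0, False):.1%}")
print(f"Coded AF : {mortality(1, True):.1%} vs non-AF {mortality(1, False):.1%}")
```

In the true data AF is harmful; under the flawed coding it flips to apparently protective, which is exactly the missing-data trap the analyst described.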
Lighter Moments
The week's levity peaked with a festive AI announcement: Santa has apparently been using "TOP PRESENT™️ AI"—a backronym for "Treat Option Picker – Present Response Emotional Signal Evaluation for Niceness of Treat." The system allegedly analysed reactions to past gifts, predicting that several named community members "would like safe and effective AI systems that improve patient care"—to which the elves noted "zero governance reviews were completed."
Scottish members rallied around a Burnistoun sketch about voice recognition failures, with one participant noting it "never gets old—I love how the subtitling illustrates the problem." The thread devolved into good-natured challenges about saying "purple burglar alarm."
The week also saw appreciation for the newsletter's role descriptor system, with one member noting he'd been assigned "Uber Driver" and "Factory Worker" across recent issues. "Same same but different," he observed philosophically.
Journal Watch
Academic Papers & Key Studies
• AI and Primary Care — New paper from a primary care researcher and colleagues examining the current state of AI implementation in general practice. ScienceDirect Link
• Palantir Software Risk Assessment (Swiss) — German-language investigation into "devastating risks" of Palantir deployment. Netzpolitik (Translated)
• Knowledge Graphs for LLM Context Management — Technical resource on using knowledge graphs for knowledge mapping and structured reasoning. Memgraph Blog
Industry & News Articles
• FDP AI Tool Regulatory Row — HSJ article covering Patient Safety Commissioner concerns about Palantir's discharge summary tool. HSJ Article
• China-US AI Infrastructure Analysis — Discussion of China's renewable energy approach to AI infrastructure needs. The Times
• EMIS AI in General Practice — EMIS Health's perspective on AI implementation in primary care settings. EMIS Health
• OpenAI Codex Vulnerability Detection — Overview of GPT-5.2 capabilities for identifying code vulnerabilities. Cyberpress
Technical Resources & Guidelines
• DTAC (Digital Technology Assessment Criteria) — Recommended as starting point for health tech compliance. NHS Transform
• Penicillin/Penicillamine Allergy Safety Alert — NHSE National Patient Safety Alert. NHS England Long Read
• Australian Telco Firewall Failure Report — Analysis of a botched upgrade that took down emergency services for 14 hours. Slashdot
Looking Ahead
Imminent Events
• Christmas Annual Newsletter — A bumper compilation covering 2025 highlights, arriving early next week
• LinkedIn Live: Clinical Safety — Friday 23 January 2026, 12-1pm (details to follow)
• Christmas Special Broadcast — Christmas Day at 6pm, featuring a GP duo in discussion
Unresolved Debates
The consultation audio storage question remains firmly unresolved. A digital health specialist reminded the group that out-of-hours services have recorded telephone encounters "since the early noughties and the world hasn't exploded." Others maintain that storage costs, rather than principled objections, drive the resistance. The debate will likely intensify as AI scribe vendors face questions about data retention for model training.
The Class I/II boundary for GenAI clinical tools demands regulatory clarification. Multiple members have flagged this to MHRA; expect continued pressure in 2026.
Community Notes
Several members are heading for Christmas leave; activity typically drops but "doesn't completely stop—nothing ever does in NHS AI." The group welcomed a new member: a Clinical Pharmacology Registrar from Glasgow interested in cardiometabolic medicine and health technology.
Quote Wall
"Your writing will be your biggest moat in a world filled with AI Slop." — Innovation-focused GP on authentic communication
"Fund from existing resources" — NHS 2025/26's most common phrase (sardonic prediction)
"Medical Device Drift describes the emergent phenomenon where GenAI-powered medical devices autonomously generate outputs beyond their documented intended purpose" — Digital Health & Clinical AI Specialist proposing terminology
"The NHS will never take a risk on a product that even if it's 1000% better than the human equivalent, holds a small risk of error" — GP and digital health consultant on institutional risk appetite
"Meatsack with a GMC registration gets it wrong, it's their fault and not the system" — Healthcare Systems Analyst on liability asymmetry
"There is only one Graham King (annoyingly there is actually two)" — AI Imaging Specialist on health IT naming collisions
"Better having to restrain wild horses than raise the dead" — On the group's characteristic high-activity periods
Group Personality Snapshot
This week showcased the community's distinctive blend of regulatory rigour and festive irreverence. A single satirical post about enterprise AI adoption generated more engagement than most policy papers—yet that same energy produced genuinely nuanced analysis of device classification and EHR data quality.


