3 – 10 January 2026

AI in the NHS Weekly Newsletter - Issue #31

Executive Summary

The first full week of 2026 opened with a familiar tension at the heart of healthcare AI: the irresistible momentum of consumer technology meeting the immovable demands of clinical safety. OpenAI's announcement of ChatGPT Health dominated discussion, raising existential questions about whether established regulatory frameworks can survive when 40 million daily health queries bypass traditional care pathways entirely. The group wrestled with ambient voice technology consent practices, debated whether AI coding assistants have genuinely democratised software development, and confronted the uncomfortable reality that mental health services may be so broken that AI-delivered CBT represents a genuine improvement. Throughout, a recurring theme emerged: the regulatory approaches that served medicine well for pharmaceuticals may be fundamentally mismatched to the pace and nature of digital innovation.

Major Topic Sections

1. ChatGPT Health: The Elephant Arrives in the Room

The week's defining moment came Wednesday evening when OpenAI announced ChatGPT Health—a feature enabling users to connect their Apple Health data and receive personalised health guidance. The announcement crystallised months of ambient anxiety into concrete concern.

"I'd be a bit sad if I was setting up a health query based company given this news," observed one digital health developer, articulating the existential threat to UK health tech startups.

The group rapidly identified the regulatory implications. "ChatGPT Health will struggle to get past the scrutiny of the regulators in EU and UK," argued a digital health and clinical AI specialist. "At the very, VERY least it will be class I over here, and only if they demonstrate it being very well guardrailed."

Others were less sanguine about regulatory defences holding. "If OpenAI wants, it will deal directly with Govt similar to FDP. They may even bundle OpenAI health in NHS App for patients," predicted an innovation-focused GP, noting that OpenAI's new managing director of international expansion is a former UK Chancellor.

The deeper concern wasn't regulatory capture but irrelevance. "AI companies have realised getting through governance of established medical systems will take a long time—they have now turned the tables and make the medics sweat for the money," observed a GP and health tech entrepreneur. "The clinician will carry the medico-legal liability still."

A pharmaceutical industry AI specialist framed the fundamental shift: "The most influential clinical opinion is no longer the clinician's. It's the first one." With 40 million daily health queries going to ChatGPT, the group debated whether this represented Dr Google 2.0 or something categorically different.

2. The Consent Question: How Do You Explain AI to Patients?

A poll early in the week asked medical members whether they consent patients before using ambient voice technology. The results—seventeen always consent, six do something in between, one never—prompted rich discussion about what consent even means in this context.

"Most people seem to wonder why I am asking permission," noted one GP using AVT tools. The practical challenges emerged quickly: "I usually start it before the person comes in and turn it off if they decline. Occasionally I consent them only to realise that I forgot to turn it on!"

A system analyst identified the deeper conceptual puzzle: "There is an interesting discussion we keep having across our local practices with our DPO around consent vs inform—depends on your DPIA and legal basis partly."

The discussion revealed a tension between legal requirements and clinical reality. One GP had topped 6,000 scribe-assisted consultations with no issues; others described practices where individual patient consent felt increasingly unsustainable. "We have been using Tortus for a month or so and it already feels like the biggest waste of time," joked one GP—clarifying they meant the individual consent process, not the tool itself.

3. Dr Google vs Dr LLM: Evolution or Revolution?

Friday's extended discussion tested whether AI health queries represent a qualitatively new problem or simply an improved version of existing patient behaviour.

"What's the difference between Dr Google and ChatGPT Health?" asked a technical analyst. "If ChatGPT Health starts to offer diagnosis with appropriate safety warnings, that's the next step. And if you are in a part of the world where you have no access to doctors anyway, it's a LOT better than nothing."

A clinical safety specialist pushed back: "Google still requires manual effort to search; LLMs make it near instantaneous. Natural language adds weight to LLM output which can be falsely reassuring whereas Google doesn't give that same feedback."

The group noted an evolution in patient presentations. "The giveaway is the end of the request saying 'would you like me to make this into a PDF for you,'" observed one GP, describing AI-assisted eConsults. Another noted that patient complaints had become "far more coherent and explain the outcome they want to achieve."

But one experienced GP raised concerns about the broader impact: "Much as we see with health tourism these days, we are heading towards an NHS that increasingly picks up the broken pieces from AI assisted self-care."

4. The Coding Revolution: Democratisation or Delusion?

Returning from leave, a learning and development professional sparked reflection on how fundamentally AI coding assistants have changed capability boundaries.

"My son is a software engineer doing a degree apprenticeship. I have zero experience coding, last was a very poor A-level project in Pascal. I'm now comfortably writing full stack web apps with databases that do genuinely useful things... This isn't 'AI is coming for software jobs.' It's 'AI has handed out software skills like free samples.'"

A digital health specialist endorsed this transformation while adding crucial caveats: "Code assistants are incredible, and the world has changed a great deal. There is still room for error though, and however you code things you have exactly the same responsibilities for data protection, security, and safety."

Practical experience with coding agents revealed limitations. "Was working on something with Claude Code and hit the limit for the week. Handed it over to Gemini and it demolished it," reported one technical contributor—meaning destroyed the working project rather than completed it. "Codex is now repairing the damage Gemini has wrought!"

The vision articulated: "My North Star is that anyone can safely build and use their own products for healthcare. Patients, parents, physicians, the whole lot. In this way, we close the demand:supply gap."

5. Mental Health: When Broken Services Make AI Look Good

Saturday morning's discussion turned unexpectedly poignant as GPs shared experiences of mental health service access—or its absence.

"I'm not sure we even have a mental health service in our parts," reflected one GP. "I suspect there's one psychiatrist locked in a basement with 100 people asking them for advice so they can say 'doctor informed.'"

A clinical safety specialist from Norfolk painted a grimmer picture: "I know some GPs who've never had a CAMHS referral accepted in their entire GP career." Another GP confirmed: "Not one yet since I've been in the profession, 13 years."

This context reframed discussions of AI-delivered CBT. "Regardless of whether AI driven or not, asynchronous CBT on demand in patients' homes at a time of their choosing fills a hole our service models can't," argued a GP. But scepticism remained about patient engagement: "It's like any long term intervention that requires effort and perseverance though. People remain hard to convince."

A long-serving GP reflected on the political drivers: "It's primarily political—we had community mental health stripped of resource by the CCG CEO to fund A&E based psych liaison, increasing costs to £5k per contact from around £200."

6. Regulation: The Speed of Pharma, The Pace of Tech

A question posed Saturday morning captured a week-long undercurrent: "How did the pharmaceutical industry get to the point where they accepted the only way to release a drug is to do 5-10 years of research?"

A health policy analyst offered perspective: "The pace of development is radically different between digital and pharma as well as the scaling dependencies. Much of it comes down to the consensus of acceptance of the risk—how much evidence is needed before 'we' think it's suitable to be used."

Others questioned whether pharma's model even worked as intended. "China, AFAICT, have industrialised rapid rollout. By keeping regulatory approval and time to rollout short, they massively decrease development costs compared to US/UK practice so the population benefits years earlier than the UK population would."

A regulatory affairs specialist framed the challenge practically: "It's clearly possible for LLMs to be certified to Class IIb level, for condition agnostic platforms to be certified to Class IIb level, and AI diagnostics up to Class III already are on the market. I'd be genuinely interested in views of what in current regs is too stringent for SaMD/AIaMD."

The counterpoint came from a GP deeply engaged in regulation debates: "The current regs are too lax. Class I isn't worth anything and very few software products need to be more than Class I. The things we are doing with AI would never be allowed in the world of pharmaceuticals."

Lighter Moments

The week's tech frustrations produced memorable exchanges. When the new WhatsApp Desktop update drew criticism, one contributor noted "pages and pages of 1 star reviews on the Store... I think this is a classic case of 'the world is mobile/web'... someone needs to be fired."

AI merch culture came under affectionate scrutiny. "I've had merch from Heidi, Tortus and Accurx," reported one GP. "Still not had a single TPP merch though..."

The gold standard of branded merchandise was established: "a plastic tumbler with the bottom 1 inch being stones embedded into the plastic and the phrase 'MICROSOFT ROCKS!' on it. I don't think that can be beaten."

NotebookLM sparked genuine enthusiasm: "A meeting transcript as source has just banged out flash cards, quizzes, podcasts and video blogs that would have taken me days to do. Blown away."

And a new pet acquisition prompted an excellent dad joke: "Bought a robot puppy. No bark. Only byte."

Quote Wall

"This isn't 'AI is coming for software jobs.' It's 'AI has handed out software skills like free samples.'" — Learning and development specialist on coding democratisation

"If anyone on here believes that OpenAI won't sell your data either openly or covertly at some point in the hunt for money then I've a PPE scheme you may be interested in investing in." — Clinical safety specialist on data privacy

"The most influential clinical opinion is no longer the clinician's. It's the first one." — Pharmaceutical industry AI specialist on the First Opinion Problem

"Meat sacks get ill and eventually die. Any savings you make from early diagnoses are at best deferrals of care to a later time." — Clinical safety specialist on prevention economics

"I suspect there's one psychiatrist locked in a basement with 100 people asking them for advice so they can say 'doctor informed.'" — GP on mental health service access

"The giveaway is the end of the request saying 'would you like me to make this into a PDF for you.'" — Innovation-focused GP on identifying AI-assisted patient queries

"We're entering what I call the First Opinion Problem." — GP on how 40 million daily ChatGPT health queries reshape care

"Asynchronous CBT on demand in patients' homes at a time of their choosing fills a hole our service models can't." — GP on AI-delivered mental health support

Journal Watch

Academic Papers & Research

AI Outperforms Generalist Physicians in Clinical Reasoning (MedRxiv) Another vignette study comparing AI to physician performance. MedRxiv

FDA-Cleared AI Medical Devices: Transparency Analysis (Nature Digital Medicine) Review of 1,012 FDA-cleared AI/ML medical devices finding an average transparency score of just 3.3/17. Nature

NHS Resolution Annual Report 2023-24 Annual statistics on clinical negligence claims. NHS Resolution

Mental Health Risks of AI Chatbots (PMC) Research exploring risks when patients use AI as therapeutic support. PMC

Imperial College: Health Data Research Research relevant to building UK health data infrastructure. Imperial Spiral

Industry & Policy News

OpenAI Introduces ChatGPT Health The week's most-discussed announcement. OpenAI

NHS Menopause and Prostate Conditions for Online Hospital NHS England announcing priority conditions for digital first pathways. NHS England

Utah AI Prescribing Pilot (Politico) US state piloting AI for repeat prescription reissue. Politico

NICE AI Spirometry Consultation Open consultation on AI-assisted spirometry technology. NICE

EMIS Rebrand to Optum Context for ongoing discussions about primary care system consolidation. EMIS Health

Tools & Resources

Okara AI — Privacy-focused open source clinical assistant okara.ai

Pipit Voice — Local Mac voice transcription pipitvoice.com

RADIANT CERSI Regulatory Journey Support radiant-cersi.org

Dr Jessica Morley: Beyond the Hype Lecture Series (Yale) Yale

Rudi Hennessy Clinical Safety Substack Substack

Looking Ahead

Unresolved Debates:

• Where ChatGPT Health sits in UK regulatory frameworks remains unclear

• The consent vs inform distinction for AVT continues to vary significantly between practices

• Whether current Class I self-declaration provides meaningful patient protection

Emerging Themes:

• The "First Opinion Problem"—when AI provides health guidance before clinicians

• Integration of multiple AI tools and their comparative strengths

• Proportionality of medical device regulation for software vs pharmaceuticals

Upcoming Events:

• World Health Expo Dubai attracting multiple group members in coming months

• Yale "Beyond the Hype" lecture series on critical AI evaluation

• NHS AI Ambassador Network continuing discussions

Group Personality Snapshot

This week captured the community at its characteristic best: rigorously technical yet deeply humane, simultaneously excited by AI's potential and clear-eyed about its risks. The mental health discussion thread showed healthcare professionals sharing frustration about broken services while genuinely considering whether AI might help—not as techno-solutionism but as pragmatic recognition that something that works imperfectly beats nothing at all.