
Dr Keith Grimes, Founder & CEO, Curistica
Last week, Youssof and I ran a lunchtime session at Hale House for a room of startup founders and health tech builders. The brief was simple: cut through the complexity of AI compliance and give people a clear picture of what they actually need to know.
What I love about these sessions is that they never go the way you planned. We had slides in the background, but within minutes we were deep in the questions that people genuinely needed answered. A paediatrician wanting to know what makes AI compliance different from traditional software. A founder building a lifestyle and wellness app, trying to understand whether he even needs to worry about regulation. Practical questions from people in the thick of building products. That is always more valuable than a lecture.
So here is what we covered, and what I think anyone building or deploying AI-enabled products needs to hear right now.
Data protection will blow up first
Whatever you are building, whether it is a regulated medical device or a lifestyle chatbot, data protection is the thing that will catch you out before anything else. Youssof made this point early on, and he is right. If you are using an LLM, you need to know where that processing is happening. Who is your foundation model provider? Where is the data going? If you are handling health data, or anything that falls under Article 9 special category data, the scrutiny from the ICO is significantly higher.
The biggest trap we see with AI products is data residency. You might have set up your cloud hosting in London, which is great. But have you checked the failover settings? We have seen situations where a service is configured to process data in the UK, but if that region goes down, processing silently fails over to the US. That is a compliance violation. It does not matter that it happened automatically. If the processing was not permitted under your data protection impact assessment, you may well need to report it, and you will certainly need to address it.
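If you want to make that concrete, here is a rough Python sketch of the idea: treat your approved regions as a hard precondition and fail closed, rather than letting the platform fail over silently. The region names and the `resolve_processing_region` and `send_to_llm` functions are hypothetical placeholders, not any particular provider's API.

```python
# Minimal sketch: make data residency a hard precondition and fail closed.
# ALLOWED_REGIONS, resolve_processing_region() and send_to_llm() are
# hypothetical placeholders, not a real provider's API.

ALLOWED_REGIONS = {"uk-south", "uk-west"}  # the regions named in your DPIA

class ResidencyError(RuntimeError):
    """Raised instead of silently failing over to a non-approved region."""

def resolve_processing_region(provider_status: dict) -> str:
    # In practice this would come from your provider's routing or health
    # metadata; here it is just a dictionary lookup.
    return provider_status["active_region"]

def send_to_llm(payload: str, region: str) -> str:
    # Stub standing in for the actual model call.
    return f"processed in {region}"

def send_if_permitted(payload: str, provider_status: dict) -> str:
    region = resolve_processing_region(provider_status)
    if region not in ALLOWED_REGIONS:
        # Do not process: queue the work, alert someone, record the near miss.
        raise ResidencyError(f"refusing to process in non-approved region: {region}")
    return send_to_llm(payload, region=region)

print(send_if_permitted("summarise this referral", {"active_region": "uk-south"}))
# send_if_permitted("summarise this referral", {"active_region": "us-east"})  # raises
```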
There is also the question of anonymisation, which trips people up more than you might expect. We worked with a manufacturer who said they anonymise everything immediately, using AI to strip out names and dates of birth before anything is stored. That sounds perfectly reasonable. But even that transient processing of identifiable data, even for a split second, still counts as processing under GDPR. We had a long conversation with the ICO about exactly this point. The answer was clear: data in flight is still data, and it is still subject to the same rules.
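A toy sketch makes it easier to see where the "in flight" processing actually happens. The regex redaction below is deliberately crude and purely illustrative; the point is that the raw, identifiable record reaches the redaction step, and that step alone is processing under GDPR.

```python
# Toy sketch of an "anonymise before storing" pipeline. The identifiable
# record reaches redact_identifiers(), so that step is itself processing
# under GDPR, even though only the redacted text is ever stored.

import re

def redact_identifiers(text: str) -> str:
    # Strip anything that looks like a date of birth; a real system would
    # use a proper de-identification service covering names and more.
    return re.sub(r"\b\d{2}/\d{2}/\d{4}\b", "[REDACTED DOB]", text)

def ingest(raw_record: str, store: list[str]) -> None:
    # raw_record still contains personal data here: this is data in flight,
    # and it needs a lawful basis even though it is transient.
    store.append(redact_identifiers(raw_record))

records: list[str] = []
ingest("Patient DOB 01/02/1984, reports chest pain", records)
print(records)  # ['Patient DOB [REDACTED DOB], reports chest pain']
```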
The fine for a serious breach is up to £17.5 million or 4% of annual global turnover, whichever is higher. In practice, the ICO tends to look at what you did when you discovered the problem, how seriously you took it, and what you had in place beforehand. I am not aware of anyone being hammered for an honest failover incident. But you really do not want to be in that position, and you certainly do not want to find out your entire legal basis is wrong because you did not think it through properly at the start.
Clinical safety is a process, not a document
If your product is being used in a healthcare setting and it could affect patient care, you need to meet the DCB0129 standard in England. This is not optional. It is a legal requirement under the Health and Social Care Act 2012.
At its heart, the standard is straightforward. What have you built? What does it do, to whom, and in what setting? Then the safety work begins: how can it go wrong and hurt someone? What controls have you put in place to stop that happening? Where is the evidence that those controls actually work? You document all of this in a hazard log, supported by a clinical risk management plan and a clinical safety case report. Together, these form your safety case: the structured argument that your product is safe enough to deploy.
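To make the hazard log less abstract, here is a minimal sketch of what a single entry might capture, written as a Python dataclass. The field names and risk wording are illustrative, not a prescribed DCB0129 schema.

```python
# Minimal sketch of a single hazard log entry. Field names and risk wording
# are illustrative, not a prescribed DCB0129 format; real hazard logs also
# carry identifiers, owners, review dates, and links to evidence.

from dataclasses import dataclass, field

@dataclass
class HazardLogEntry:
    hazard: str                 # how could the product go wrong?
    harm: str                   # who could be hurt, and how badly?
    cause: str                  # what could lead to the hazard?
    initial_risk: str           # risk rating before any controls
    controls: list[str] = field(default_factory=list)   # what stops it happening
    evidence: list[str] = field(default_factory=list)   # proof the controls work
    residual_risk: str = ""     # risk remaining after controls, with justification

entry = HazardLogEntry(
    hazard="Ambient scribe omits a stated drug allergy from the note",
    harm="Clinician prescribes a drug the patient is allergic to",
    cause="Transcription error, or summarisation dropping the allergy mention",
    initial_risk="High",
    controls=[
        "Clinician must review and sign every note before it is filed",
        "Allergy mentions are highlighted for explicit confirmation",
    ],
    evidence=[
        "Usability testing report",
        "Post-deployment audit of signed-off notes",
    ],
    residual_risk="Acceptable, subject to ongoing monitoring",
)
print(entry.hazard, "->", entry.residual_risk)
```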
I always make the point that the compliance document is just the last part. It is the proof that you have done the work, captured at a single point in time. Safety itself is a continuous process. You built something, you identified the risks, you controlled them. But once it is out in the real world, things change. People use it in ways you did not expect. The technology evolves. New risks emerge. You have to keep managing this actively.
What makes AI different?
This was the question that kicked off the whole session. Tom, a paediatrician and health tech consultant in the room, asked it well: what actually changes when AI is in the mix?
The core answer is that AI is probabilistic, not deterministic. With traditional software, you run the same input ten times and get the same output ten times. Testing is relatively straightforward. With an AI system, particularly one built on a large language model, the output will vary. It might give you a slightly different answer each time, and the range of possible inputs is essentially unlimited.
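A toy example shows why this changes testing. The `triage_chatbot` function below is a hypothetical stand-in for an LLM-backed component, with the variation faked using `random.choice`.

```python
# Toy illustration: deterministic code gives one answer you can assert on;
# a probabilistic component gives a distribution you have to characterise.
# triage_chatbot() is a hypothetical stand-in for an LLM-backed component.

import random

def bmi_category(weight_kg: float, height_m: float) -> str:
    # Deterministic: same input, same output, every single run.
    return "raised" if weight_kg / height_m ** 2 >= 25 else "normal"

def triage_chatbot(message: str) -> str:
    # Stand-in for an LLM: the wording, and occasionally the advice, varies.
    return random.choice([
        "This sounds urgent, please call 999.",
        "You should seek urgent medical help now.",
        "If symptoms worsen, consider seeing a GP this week.",  # the outlier you must find
    ])

# Deterministic test: one exact assertion is enough.
assert bmi_category(80, 1.75) == "raised"

# Probabilistic "test": sample repeatedly and characterise the behaviour.
outputs = [triage_chatbot("crushing chest pain and breathless") for _ in range(100)]
escalated = sum("999" in o or "urgent" in o.lower() for o in outputs)
print(f"{escalated}/100 responses escalated appropriately")
```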
Youssof summed it up nicely with the teapot analogy. You put a teapot on a table and tell someone to use it. Most people will pour from the spout. But someone, somewhere, will blow into it to make a noise. With a deterministic system, you can design it so that misuse is literally impossible. The system will not accept an input outside of what you have defined. With an AI chatbot, someone can type anything they like, and it will respond in some way. So much more of your safety effort has to shift to the post-deployment side: monitoring how it is being used, catching the unexpected, and feeding that back into your risk management.
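In code, that post-deployment shift can start as something very simple: log every exchange, flag anything outside the intended use, and route it back into your risk management. A minimal sketch, with a deliberately crude keyword check standing in for a real out-of-scope classifier:

```python
# Minimal sketch of post-deployment monitoring: log every exchange and flag
# anything outside the intended use, so it can feed back into the hazard log.
# The keyword check is a crude stand-in for a real out-of-scope classifier,
# and in production you would redact messages before logging them.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("post_market_surveillance")

INTENDED_USE_KEYWORDS = {"appointment", "prescription", "opening hours"}

def in_scope(user_message: str) -> bool:
    # Crude placeholder for whatever defines your intended use.
    return any(k in user_message.lower() for k in INTENDED_USE_KEYWORDS)

def record_exchange(user_message: str, bot_reply: str) -> None:
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "in_scope": in_scope(user_message),
        "message": user_message,
        "reply": bot_reply,
    }
    log.info(json.dumps(event))
    if not event["in_scope"]:
        # Someone just blew into the teapot: review it, and if needed add it
        # to the hazard log and adjust your controls.
        log.warning("Out-of-scope use detected; flag for clinical safety review")

record_exchange("can I take double my dose tonight?", "I can only help with appointments.")
```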
This does not mean AI products cannot meet the required standards. Tandem recently achieved a Class IIa classification for their clinical coding product. Derm has Class III for a skin lesion assessment tool. The regulators are technology-agnostic. They care about the evidence, not the underlying architecture. As long as you can demonstrate through robust evidence that the risk is manageable and tolerable, the pathway is there. You just need to be smarter about how you test, how much you test, and how you monitor.
The deployer's responsibility is real, and most are not ready
Here is the bit that surprises a lot of people. Meeting DCB0129 as a manufacturer is only half the picture. In England, the organisation deploying the technology, the GP practice, the trust, the ICB, also has a legal obligation under DCB0160 to do their own clinical safety work.
The logic is sound. A manufacturer can tell you a product is safe if used in a particular way. But the deploying organisation has to satisfy itself that it can actually use it that way, in its specific setting, with its specific staff and workflows. Can they get everyone trained? Do they have a business continuity plan if the system goes down? Are they capturing and reporting incidents back to the manufacturer?
We see this every day right now with ambient scribes. GP practices are desperate to use them, and frankly, they should. But they have been explicitly told they need to do this safety work, and most have no idea where to start. Just yesterday I was talking to a practice that planned to use a telephone's hands-free microphone to capture consultations for their scribe. The manufacturer had never tested it that way. Everything they had said about transcription accuracy might no longer apply. That is not the manufacturer's problem to solve. It is the practice's job to identify that risk, test it themselves if needed, and document what they found.
The encouraging thing is that most practices are already running governance that covers the basics. They meet weekly, they have monthly governance meetings, they log incidents. All we are asking them to do is extend that to their digital technologies. A two-minute standing item in a meeting that is already happening: what tech are we using, have there been any incidents, has anything changed? That is the foundation.
Over 70% of NHS technologies lack proper safety assurance
Last year, Youssof and I, together with colleagues, published a paper in JMIR using Freedom of Information data from English trusts and ICBs. We asked them how many digital technologies they had deployed, how many had the required DCB0129 and DCB0160 documentation, and how many clinical safety officers they employed.
The results were stark. Over 70% of the technologies in use did not have the necessary safety assurance in place. That could be anything from the software running an MRI machine to an electronic patient record. These are systems used every single day to deliver care to millions of people, and the evidence that they are safe to use simply is not readily available.
This is not about AI. This applies to all health IT. But AI raises the stakes, because the technology is dynamic, probabilistic, and capable of affecting care in real time. If we cannot get the basics right for the systems we have had for years, the challenge with AI is going to be significant.
The happy scenario and the sad one
Someone in the room asked where all of this is heading. I gave them two scenarios.
The sad scenario is a Horizon-type event. For those unfamiliar, the Post Office Horizon scandal saw hundreds of subpostmasters wrongly accused of theft, fraud, and false accounting because of defects in the Horizon accounting software. The system was producing errors, but there was a legal presumption that the computer output was correct. People lost their livelihoods, their reputations, and in some cases their lives, all because nobody questioned the technology.

Youssof made the point during the session that this legal presumption, that the output of a computer is correct, still exists and is only now being seriously challenged. Apply that same presumption to an AI system making clinical recommendations, and the stakes become very clear. Something catastrophic happens, there is a public outcry, everything changes overnight, and we wonder why on earth we did not do something about it sooner.

The truth is that people are already being harmed by technology in healthcare, often in very small ways, but at enormous scale. Electronic health records that crash regularly. Referrals that vanish into the system and are never received at the other end. HSSIB is already investigating some of these issues, including how electronic referrals can fail silently and the risks when electronic patient record systems lose functionality. A study from the VA in Philadelphia documented how misconfigured endpoints meant blood test results and referrals were going to the wrong place. People died. It is not hypothetical.
The happy scenario is the one I am working towards. AI is different from previous waves of health technology because people are already a bit cautious about it. They want to use ambient scribes, but they know it is AI, they know it might go wrong, and they have been told they need to do something about it. That combination of enthusiasm and caution almost never happens. It is a genuine window of opportunity to lay the groundwork for how we govern all health technology, not just AI.
Part of that is education. Right now, there is not a single medical school in the UK that includes AI, data science, or digital compliance as a core part of the curriculum. A student starting medical school in 2026 could qualify in 2031 without being taught any of this. Does anyone think there will be less technology in healthcare in five years? It is an extraordinary gap.
AI is good at the health part, not the care part
Youssof made a point towards the end that has stuck with me. He said that AI is very good at the health part, but not very good at the care part. I think that is exactly right.
People want to use AI for behaviour change: helping patients manage chronic disease, improve their diets, exercise more. And AI is genuinely good at delivering the right information, in the right way, at the right time. But behaviour change is not just about information. It requires capability, opportunity, and motivation. AI can help with some of that, but it cannot replace the human connection that actually drives change.
That is the deeper point here. AI presents an existential challenge to what we thought was important about clinical work. We prided ourselves on being able to diagnose, memorise guidelines, recall facts under pressure. Those are exactly the things AI is increasingly good at. What it cannot do is listen to a patient, make them feel cared for, and integrate all the messy complexity of being a person. The care that we deliver to patients today is, in many ways, better than it has ever been. Yet doctors, nurses, and patients feel miserable. Because it has become transactional. AI handles the transactional part well. That should free us up to do the bit that actually matters: caring for people.
I am an optimist. I think we will probably fall a bit short of that ideal, because we usually do. But the opportunity is there, and it is worth working towards.
What you can do right now
If you are building an AI product and you are not sure where to start, here is the practical takeaway from the session.
Start with data protection. Understand what data you are collecting, where it is being processed, and what your legal basis is. Get this wrong and it will be the first thing to cause you real problems.
If your product could affect patient care, understand the DCB0129 requirements. Clinical safety is not a box to tick at the end. It is a process that starts with understanding your risks and continues for as long as your product is in use.
Think about the people who will deploy your technology. The decisions you make as a manufacturer directly affect how easy or hard it is for a GP practice or a trust to use your product safely. Make their lives easier and you will have a commercial advantage as well as a safer product.
And do not try to keep up with ChatGPT. Whatever you build for healthcare will be a more constrained version of what people can get from a general-purpose model. That is the point. You are building something that is safe, governed, and fit for purpose. That is what the people buying and deploying your product actually need.
If you want to talk any of this through, Youssof and I are at Hale House most Wednesdays. Come and find us.
Dr Keith Grimes is the founder and CEO of Curistica, a specialist clinical AI and healthtech consultancy. Dr Youssof Oskrochi is Curistica's Head of Safety and Data Protection. For more information, visit curistica.com or get in touch at hello@curistica.com.
This article was written with the assistance of Plaud (recording/transcription) & Claude Opus 4.6 (summarisation/initial draft). All content was reviewed by the author before publication.

