Agentic Browser Plugins in Healthcare: A warning
Last night I received an email from Anthropic offering me the opportunity to join a waiting list for their new chrome plugin. It promises a great deal - an AI within the main tool I use to interact with the world and work. It comes on the back of similar offerings from Perplexity and Brave. These browsers and extensions are changing how we interact with the web. These tools can act on our behalf—booking appointments, managing emails, even accessing patient records. But they bring serious security risks that healthcare professionals need to understand.
What Are Agentic Browsers?
An agentic browser extension is an AI assistant built into your web browser. Unlike chatbots, these tools can take actions: clicking buttons, filling forms, and navigating websites automatically. Think of it as having a digital assistant with full access to everything you can see and do online.
In healthcare, this means these tools could potentially access patient portals, clinical systems, and confidential correspondence—making security crucial.
The Main Risk: Prompt Injection
Prompt injection is when attackers hide malicious instructions in web content that AI assistants then execute. The AI can't tell the difference between your legitimate request and hidden commands.
Consider this scenario: you ask your AI to summarise a webpage. Hidden in that page is an instruction telling the AI to forward all emails to an attacker's address. The AI reads and executes this command without you knowing. This isn't a hypothetical risk—it's a fundamental vulnerability in how these systems process information.
Healthcare-Specific Dangers
In healthcare settings, prompt injection creates cascading risks that extend far beyond typical data breaches. When an AI browser accesses patient records, it could expose entire medical histories to unauthorised parties, triggering GDPR violations with massive fines and destroying patient trust. The clinical risks are equally concerning: altered medical records could lead to wrong treatments, modified test results might misdirect diagnoses, and disrupted appointment systems could delay critical care.
The system-wide impact amplifies these dangers. One compromised browser with legitimate credentials becomes a master key to vast data repositories. In our interconnected NHS infrastructure, where systems communicate across trusts and primary care networks, a single breach can spread rapidly through multiple databases and clinical systems.
Why Healthcare Is Particularly Vulnerable
Healthcare systems present unique characteristics that make them especially susceptible to these attacks. Medical records contain comprehensive personal information that makes them exponentially more valuable than typical consumer data. These aren't just databases—they're life-critical systems where errors directly impact patient safety. The regulatory landscape adds another dimension: strict compliance standards under GDPR and the Data Protection Act mean that breaches carry severe legal and financial consequences beyond the immediate clinical impact.
The interconnected nature of modern healthcare infrastructure compounds these vulnerabilities. NHS Spine connects to local trust systems, which integrate with GP practices, creating a vast network where a single point of compromise can cascade through multiple layers of clinical data. Each connection represents both an opportunity for better care coordination and a potential attack vector.
Practical Solutions
For Healthcare Organisations
Healthcare organisations must implement technical controls that create clear boundaries between AI-assisted browsing and production clinical systems. This means isolating these tools in controlled environments where they can provide assistance without direct access to critical infrastructure. Every clinical action—from prescribing medications to accessing sensitive records—should require explicit human confirmation through secure channels that can't be manipulated by AI systems.
Monitoring becomes crucial in this new landscape. Organisations need sophisticated systems that can detect unusual AI behaviour patterns, tracking unexpected data access requests, anomalous API calls, and deviations from established workflows. These monitoring systems themselves can employ machine learning to identify potential prompt injection attempts before they execute, creating a defensive AI layer against malicious AI exploitation.
Policy measures must evolve alongside technical controls. Clear guidelines for AI tool usage need to be established, with defined boundaries for what these systems can and cannot access. Incident response procedures must be updated to account for AI-specific threats, and regular security assessments should specifically test for prompt injection vulnerabilities. Staff training becomes essential—not just IT teams but every healthcare professional using these tools needs to understand the risks and warning signs.
For Individual Practitioners
Practitioners using AI browsers need to maintain constant vigilance about the tools they employ and how they use them. This starts with source verification—only using AI extensions from established, trusted developers and keeping all software rigorously updated. The temptation to fully automate repetitive tasks must be balanced against the risk of losing oversight of critical clinical decisions.
Red flags that warrant immediate attention include unexpected browser behaviour, unusual data access requests, or the AI attempting actions that weren't explicitly requested. Practitioners should develop a healthy scepticism about automated processes, particularly when dealing with sensitive patient information or clinical decisions. Multi-factor authentication becomes non-negotiable for any system containing patient data.
The key is developing what we might call 'AI situational awareness'—understanding not just what the tool is doing, but what it could potentially do with the permissions and access it has been granted. This means regularly reviewing and limiting permissions, using separate browser profiles for different risk levels of work, and maintaining clear boundaries between AI-assisted research and direct clinical action.
The Path Forward
We stand at a critical juncture where the promise of AI-enhanced clinical practice meets the reality of evolving security threats. The solution isn't to retreat from these powerful tools but to develop frameworks for their safe adoption. This requires understanding that security in the age of AI isn't a technical problem alone: it's a challenge that spans technology, policy, education, and clinical governance.
Every healthcare professional using AI tools needs to understand prompt injection not as an abstract concept but as a real and present danger to patient safety and data security. Building secure workflows means designing processes that keep humans in control of critical decisions while allowing AI to handle the routine and administrative burden that consumes so much clinical time. Security becomes an ongoing discipline rather than a one-time implementation, requiring continuous monitoring, adjustment, and evolution as both the tools and threats develop.
The collaborative approach becomes essential. IT departments can't secure these systems in isolation from clinical users, just as clinicians can't safely adopt these tools without understanding their security implications. Management must provide the resources and governance structures that enable safe innovation rather than forcing a false choice between efficiency and security.
Key Takeaways
The integration of agentic browsers into healthcare represents both tremendous opportunity and significant risk. These tools offer capabilities that could transform how we deliver care, but they introduce vulnerabilities that could compromise the very foundations of clinical practice. Prompt injection isn't just another cybersecurity threat: it's a fundamental challenge to how we conceptualise the boundary between human intention and machine action in healthcare.
Healthcare data and systems are particularly attractive targets not just because of their value but because of their criticality. A breach here doesn't just mean stolen data; it can mean compromised clinical decisions, delayed treatments, and broken trust between patients and providers. Yet the answer isn't to avoid these tools but to implement them with eyes wide open to both their promise and their peril.
The goal remains clear: achieving the benefits of AI-enhanced healthcare while maintaining the security and integrity that patient care demands. This isn't about choosing between innovation and safety—it's about recognising that in healthcare, true innovation must inherently include safety as a core component. As we move forward, our success will be measured not just by the capabilities we deploy but by the trust we maintain and the harm we prevent.
The promise of AI in healthcare is real, transformative, and within reach. I have spent my entire career working to realise this promise, and know from experience that it requires more than technological adoption—it requires a fundamental rethinking of how we integrate intelligent systems into the practice of medicine. By understanding these vulnerabilities, implementing proper safeguards, and maintaining vigilant oversight, we can harness AI's benefits while protecting what matters most: patient safety, data integrity, and the trust that underpins all medical practice.
Oh, and I have signed up for the waiting list. I just might think twice about using it now.
References
All accessed 27 Aug 2025
https://brave.com/blog/comet-prompt-injection/
https://simonwillison.net/2025/Aug/25/agentic-browser-security/
https://simonwillison.net/2025/Apr/11/camel/
https://news.ycombinator.com/item?id=45004846
https://www.anthropic.com/news/claude-for-chrome