Clinical AI Boundaries

Why I’m exploring how clinicians define responsible AI use

Artificial intelligence is beginning to appear quietly inside clinical workflows.

Not through formal procurement processes or national programmes, but through curiosity. A clinician opens a browser tab, pastes in a paragraph, and asks for help structuring an explanation, summarising guidance, or clarifying a line of reasoning.

What’s striking is not the technology itself, but the speed at which it has arrived in professional environments without a clear language for how it should be used.

Recent surveys from organisations including the Nuffield Trust and the Alan Turing Institute suggest that around 28% of UK GPs and 36% of consultants have already experimented with AI tools in their work.¹

At the same time, clinicians consistently report uncertainty about how these tools intersect with professional responsibility. As many as 95% of GPs report receiving no formal training on the use of AI systems, while close to 89% express concern about liability when using them.²

In other words, adoption is rising quickly — but guidance at the level of individual practice remains thin.

The problem is not capability

Much of the public conversation about AI in healthcare focuses on capability.

Can models assist diagnosis?
Are hallucinations dangerous?
Should these systems be regulated as medical devices?

These questions matter. But they are not the questions most clinicians encounter day to day.

The practical questions are simpler:

When is it reasonable to use AI as a drafting tool?
What information should never be entered into consumer systems?
How should AI-generated output be treated when it influences documentation or explanation?

These are questions of professional boundaries, not technical capability.

A gap between policy and practice

Healthcare governance tends to operate at an institutional level. National bodies publish frameworks. Regulators outline principles. Organisations develop policies.

Yet the question an individual clinician often faces is more personal:

What are my own boundaries when using these tools?

Professional responsibility ultimately rests with the practitioner. But most existing guidance remains abstract, high-level, or written for organisations rather than individuals.

This creates a curious situation.

AI is already present in clinical environments, yet the language for articulating responsible use is still emerging.

Exploring a simple idea

The project I’m exploring through GABA is intentionally modest.

Rather than building software or offering training, I’m interested in a simple exercise: helping clinicians clearly state their boundaries around AI use.

Not as compliance documentation.
Not as certification.

But as a deliberate statement of professional judgement.

Such a statement might include:

• where AI may reasonably support workflow
• where AI should never influence decisions
• how outputs are verified before use
• what data is never entered into consumer systems

The goal is not to eliminate risk — which would be unrealistic — but to replace ambiguity with clarity.

Healthcare is one of the first professions where the intersection between AI tools and professional responsibility becomes visible.

The question is not whether clinicians will encounter AI in their work.

They already are.

The question is how the profession develops a language for using these tools deliberately rather than implicitly.

The small pilot I’m currently running with clinicians is simply an attempt to explore that question.

Because the most interesting changes in technology rarely begin with tools.

They begin with people deciding where the boundaries should be.

Sources

1. Nuffield Trust and Alan Turing Institute surveys on AI adoption among UK clinicians, 2024–2025.

2. Pulse GP survey on AI training and liability concerns among UK general practitioners, 2025.
