Clinical AI Boundaries Pilot

Exploring how clinicians can clearly define their professional boundaries when using AI tools.

Artificial intelligence tools such as ChatGPT are increasingly appearing in clinical workflows.

Clinicians are experimenting with them for practical tasks — drafting explanations, structuring notes, summarising research — yet clear guidance on how individual clinicians should responsibly use these tools remains limited.

GABA is currently exploring a simple question:

What would it look like for a clinician to clearly articulate their professional boundaries when using AI?

As part of this exploration, I’m speaking with a small number of UK clinicians who have begun experimenting with AI tools in their work.

If that describes you, I would value your perspective.

Why This Matters

Many clinicians are already using AI for everyday tasks such as:

• drafting patient explanations
• structuring notes or letters
• summarising guidance or research
• administrative support

The challenge is rarely the tool itself.

It is the absence of clearly defined professional boundaries around its use.

Questions clinicians frequently raise include:

• When should AI output be verified or ignored?
• What information should never be entered into consumer AI systems?
• How should AI-assisted work be approached responsibly?
• What constitutes reasonable professional conduct when using these tools?

These questions are emerging faster than formal guidance.

What This Pilot Explores

This pilot explores a simple idea:

Helping clinicians articulate a clear personal framework for responsible AI use.

The aim is not to audit behaviour, but to help clinicians define their own boundaries.

A typical outcome might include:

• where AI may reasonably support workflow
• where AI should never influence decisions
• escalation and verification habits
• safe data-handling practices
• a short transparency statement about AI use

Think of it as a professional clarity exercise, not a training programme.

What This Is Not

This pilot is intentionally narrow in scope.

It is not:

• CPD training
• legal advice or compliance certification
• a review of clinical cases
• a diagnostic tool or medical device

The goal is simply to explore how clinicians might clearly articulate responsible AI use in their own practice.

Who I’m Speaking With

I’m currently speaking with:

• NHS GPs
• private GPs
• consultants who have begun experimenting with AI tools

You do not need to be an AI expert.

Practical experience with AI tools — even occasional use — is more useful than technical knowledge.

About GABA

GABA is an independent AI practice run by Adam Martin.

It explores how emerging AI tools interact with real professional environments.

Rather than building software, GABA focuses on helping professionals think clearly about boundaries, responsibility, and practical use of AI systems.

This clinician pilot is an early exploration of how those ideas might apply in healthcare.

Request a Pilot Conversation

If you are open to a short conversation about how AI is appearing in your clinical workflow, I would value your perspective.

This pilot is simply an exploration of whether a useful framework can be developed.

FAQ

Will this expose how I currently use AI?

No. This is not an audit or assessment.
The pilot simply explores how clinicians think about boundaries when using AI tools.

Will you review patient records or identifiable information?

No. The pilot is designed specifically to avoid identifiable patient data.

Is this a training programme or compliance exercise?

No. This is an exploratory pilot about how clinicians might clearly define responsible AI use.



Adam Martin, Yorkshire, Spring 2026