AI · 25 March 2026 · 5 min read
AI in Practices: What's Useful and What's Noise
Most AI conversations are framed wrong for practices. Reframed operationally, the question gets much narrower, and the answers get much more useful.
Most AI conversations are framed wrong for practices.
The framing of “AI transformation” turns it into a strategic question — broad, slow, high-stakes. The framing that actually helps is much narrower: which specific tasks in this practice take time, repeat, have clear outputs, and have a human checkpoint?
That set of tasks is where AI tools currently earn their keep. Almost nothing else does.
What's useful
The applications that survive serious operational scrutiny in a practice tend to share a profile:
- The task is repetitive and rule-shaped.
- The output is short and reviewable.
- A human signs off before it goes anywhere consequential.
- Failure is visible, not silent.
Concretely, that looks like:
- Drafting routine correspondence. Referrals, follow-up letters, internal notes — assembled, then proofed by the person sending them.
- Internal knowledge lookup. “Where's the policy on…?” “What do we tell new patients about…?” A staff-facing assistant that answers from the practice's own documented procedures.
- Document handling. Sorting incoming correspondence, extracting structured information from referrals, flagging what needs attention.
- Operational summarising. Taking a noisy data export and turning it into a readable management summary.
These are not transformative. They are quiet, daily, and they return time.
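To make the "operational summarising" shape concrete, here is a minimal sketch: structured input in, short reviewable output out, nothing silent. This is a deterministic stand-in, not an AI tool, and the export format, column names, and categories are invented for illustration.

```python
# Hypothetical example: turn a noisy task-log export into a short
# management summary a practice owner can scan in seconds.
# All column names and data are invented for illustration.
import csv
import io
from collections import Counter

EXPORT = """date,category,minutes
2026-03-02,referral letter,12
2026-03-02,follow-up,8
2026-03-03,referral letter,15
2026-03-03,internal note,5
"""

def summarise(raw_csv: str) -> str:
    """Aggregate minutes per task category and format a readable summary."""
    rows = list(csv.DictReader(io.StringIO(raw_csv)))
    minutes_by_category: Counter = Counter()
    for row in rows:
        minutes_by_category[row["category"]] += int(row["minutes"])
    total = sum(minutes_by_category.values())
    lines = [f"{len(rows)} tasks logged, {total} minutes total."]
    # most_common() puts the biggest time sinks first, where a reviewer looks.
    for category, mins in minutes_by_category.most_common():
        lines.append(f"- {category}: {mins} min")
    return "\n".join(lines)

print(summarise(EXPORT))
```

The output is short enough that the person sending it upward can proof it in one glance, which is exactly the human-checkpoint property the list above describes.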
What's noise
The framing that wastes practice attention usually shows up as:
- “AI strategy” engagements that produce a 60-page document and a roadmap, with nothing live in the practice at the end.
- Generic chatbots bolted onto websites that can't actually answer the questions patients ask, and that staff end up ignoring.
- Tools that add work instead of removing it — anything that requires a new process layered on top of the existing ones to “feed the AI.”
- Anything that ignores privacy or governance. Patient data has constraints. Tools that don't respect them are a liability, not a productivity gain.
The hype is loud. The useful applications are quiet.
How to tell the difference
A simple test: if the proposed application were already working flawlessly tomorrow, what would change in the practice on Wednesday morning?
If the answer is “the front desk handles X with less interruption,” or “the owner spends 30 minutes less on Y,” that's a real application.
If the answer is “we'd be more innovative” or “we'd be ahead of the curve” — that's noise.
The realistic version
For most practices, the realistic version of “adopting AI” looks like one or two narrow, well-scoped applications that quietly remove load. Not a strategy. Not a transformation. Just less repeated work, with a human still in the loop.
That's the whole bar.
Discuss
Have a related operational problem in your practice?
Most of this writing starts as conversations. Happy to talk yours through.
