AI in healthcare: 10 takeaways for what matters now
What clinicians, product teams and health leaders should pay attention to as AI moves from promise to practice.
This is Clinical Product Thinking 🧠, a weekly newsletter featuring practical tips, frameworks and strategies from the frontlines of clinical product.
Welcome, friends, this is issue No. 036 of Clinical Product Thinking. This week we’re diving into practical takeaways on AI’s impact on healthcare.
This week I tuned into a talk by Dr Keith Grimes for BMJ Future Health on the future of AI in healthcare.
The discussion ranged from prompting and ambient scribes to clinical safety officers, AI agents, regulation and the rise of clinicians as builders.
For anyone thinking about how to approach this space, the message was clear: you do not need to become an AI expert overnight. What matters is becoming the kind of healthcare professional, product leader, or policymaker who can use these tools with curiosity, scepticism and responsibility.
Start with low-risk use cases. Learn actively. Ask better governance questions. And remember to consider who is accountable.
Here are my top 10 takeaways with actions you can take today:
1. Prompting matters less than context
The way we ask AI questions still matters, but the bigger determinant of useful output is increasingly the quality of the context we provide. In healthcare, that context might include the workflow, the patient group, the local policy, the intended use, the constraints of the service, and the specific decision or task we are trying to support.
Action point: Try prompting through voice rather than typing, because when we speak, we tend to give more natural background, more nuance and more useful context than when we type a short instruction into a chat box. I use Wispr Flow Pro and would never go back.
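To make the context point concrete, here is a minimal sketch contrasting a thin prompt with a context-rich one, using the OpenAI Python SDK. This is my own illustration rather than anything from the talk; the model name and the clinical details are placeholders, not recommendations.

```python
# A minimal sketch: same task, with and without context. The model name and
# the clinical details below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Thin prompt: the model has to guess the setting, audience and constraints.
bare = "Write a discharge summary for a patient with pneumonia."

# Context-rich prompt: workflow, patient group, local policy, intended use
# and constraints are all stated explicitly.
rich = (
    "Context: UK NHS acute medical ward; the audience is the patient's GP.\n"
    "Patient group: adults admitted with community-acquired pneumonia.\n"
    "Local policy: follow the trust's discharge-summary template headings.\n"
    "Intended use: a draft a clinician will review and edit before sending.\n"
    "Constraints: plain English, under 250 words, flag missing information.\n"
    "Task: draft the discharge summary from the notes below.\n"
    "Notes: <example notes go here>"
)

for label, prompt in [("bare", bare), ("rich", rich)]:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whatever your governance allows
        messages=[{"role": "user", "content": prompt}],
    )
    print(label, "->", response.choices[0].message.content[:200])
```

The point is that the context block, not clever wording, does most of the work, which is also why speaking a prompt aloud tends to produce better results.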
2. Digital governance cannot be someone else’s problem
Healthcare has mature ways of thinking about the governance of clinicians, drugs and devices, but digital tools have often been treated as though someone else must already have checked them properly. As AI tools move closer to clinical work, that assumption becomes increasingly unsafe.
Action point: Ask one simple governance question about any AI or digital tool you are using or considering: who is responsible for knowing whether this is safe in our setting? If the answer is vague, that is probably where the next conversation needs to start.
3. Clinical safety standards are becoming everyday knowledge
Clinical safety standards such as DCB0129 and DCB0160 have existed for years in England, but AI has made them much more visible, especially as ambient scribes and other tools begin to affect consultations, documentation and clinical workflows.
Action point: Consider getting clinical safety training, or at least learning the basics of clinical risk management for digital tools, because this is likely to become a core capability for clinicians, product teams and healthcare leaders working with AI.
4. Clinicians do not need to become AI engineers, but they do need to know enough
The expectation should not be that every clinician becomes a technical expert, but clinicians do need to understand the tools they use well enough to practise safely. That means knowing what the tool is for, what it is not for, how it can fail, what needs checking and what to do when something goes wrong.
Action point: Pick one AI tool you already use, or one your organisation is considering, and write down five things: what it does, what it does not do, how it might go wrong, what the user must check and how concerns should be reported.
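If you prefer to keep that record somewhere structured rather than on paper, here is a minimal sketch assuming nothing beyond the five headings in the action point above; the tool and every answer shown are invented examples.

```python
# A minimal sketch of the five-point record from the action point above.
# The example tool and its answers are invented for illustration.
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    what_it_does: str
    what_it_does_not_do: str
    how_it_might_go_wrong: str
    what_the_user_must_check: str
    how_to_report_concerns: str

scribe = AIToolRecord(
    what_it_does="Drafts a consultation note from ambient audio.",
    what_it_does_not_do="Diagnose, or replace the clinician's own record.",
    how_it_might_go_wrong="Fluent but incomplete or distorted output.",
    what_the_user_must_check="Every fact, medication and plan before signing.",
    how_to_report_concerns="Incident system; flag to the clinical safety officer.",
)
print(scribe)
```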
5. Ambient scribes show both the promise and the risk
Ambient scribes may reduce documentation burden and improve the flow of consultations, but they also create new responsibilities around consent, accuracy, review and record quality. Their outputs can sound fluent while still being incomplete, distorted or wrong.
Action point: Before using an ambient scribe, practise explaining it to a patient in plain English, including what it does, why you are using it, what happens to the information, and how the final clinical note will be checked.
6. Clinicians are now builders and that is both good and risky
AI has made it easier for clinicians and healthcare teams to prototype tools, calculators, chatbots and workflow aids without needing advanced coding skills. That is exciting, because frontline staff often understand the problems best, but a prototype is not the same as a safe clinical product.
Action point: When experimenting with AI-built tools, keep a clear boundary between experimenting, prototyping and deploying, and do not use real patient data or real clinical decisions until governance, safety, data protection and accountability have been properly considered.
7. Agents are coming, fast
AI agents are different from simple chatbots because they can pursue goals, use tools, plan steps, retrieve information and adjust their approach. They are likely to have a major impact on administrative, operational and back-office work before they are, quite rightly, trusted with higher-risk clinical tasks.
Action point: Look for one repetitive administrative task in your work that involves gathering information, drafting, checking or organising, and consider whether an agentic workflow could support part of it under human supervision.
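If “agent” still feels abstract, here is a minimal sketch of the loop that separates an agent from a chatbot: decide on a step, act with a tool, observe the result, repeat, under a hard step limit. The tools and the planner are toy stand-ins (a real system would put an LLM behind decide()), and the whole thing is illustrative rather than a pattern for handling real bookings.

```python
# A toy agent loop: plan -> call tool -> observe -> repeat until done.
# Tools and planner are stand-ins; a real agent would use an LLM to decide.
def lookup_clinic_slots(date: str) -> list[str]:
    """Toy tool: pretend to query a booking system."""
    return ["09:00", "11:30"] if date == "2025-05-12" else []

def draft_email(slot: str) -> str:
    """Toy tool: draft a confirmation message for human review."""
    return f"Draft: 'Your appointment is confirmed for {slot}.' (needs sign-off)"

TOOLS = {"lookup_clinic_slots": lookup_clinic_slots, "draft_email": draft_email}

def decide(goal: str, history: list[str]) -> tuple[str, str] | None:
    """Stand-in planner: first look up slots, then draft, then stop."""
    if not any("lookup" in h for h in history):
        return ("lookup_clinic_slots", "2025-05-12")
    if not any("Draft" in h for h in history):
        return ("draft_email", "09:00")  # naively takes the first slot found
    return None  # goal reached

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):  # hard step limit as a basic safety rail
        step = decide(goal, history)
        if step is None:
            break
        tool, arg = step
        result = TOOLS[tool](arg)
        history.append(f"{tool}({arg!r}) -> {result}")
    return history

for line in run_agent("book a follow-up and draft the confirmation"):
    print(line)
```

Note that even this toy keeps a human in the loop (the draft needs sign-off) and caps the number of steps, which is the kind of supervision the action point above is pointing at.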
8. AI should be compared with real care, not imaginary perfect care
AI tools are often judged against an idealised version of healthcare rather than the care patients actually receive. That matters because current processes are often delayed, inconsistent, poorly measured or unavailable, especially in pressured parts of the system.
Action point: Before dismissing an AI tool because it is imperfect, ask how the current process performs, how often it fails, whether that failure is measured, and what harm or burden patients already experience without the tool.
9. AI will amplify the system it enters
AI does not fix broken workflows by magic. If a service has clear processes, good measurement and strong accountability, AI may help it become faster or more consistent, but if the existing process is confused or unsafe, AI may simply scale that confusion.
Action point: Do a quick workflow review before introducing AI: map the current process, identify where patients or staff struggle, decide what outcome should improve, and only then ask whether AI is the right intervention.
10. Start learning somewhere low-risk
The safest way to build fluency with AI is not to begin in clinical care, but somewhere personally familiar and low-risk. Hobbies are useful because you usually know enough about the subject to spot when the model is helpful, vague or confidently wrong.
Action point: Start using AI for one of your own hobbies this week, whether that is cooking, running, writing, music or travel planning, and use that experience to learn how context, checking and judgement change the quality of the output.
The pace of change in AI is astounding and will only accelerate. If you’re feeling a bit left behind, you’re not the only one! You do not need to know everything, and you definitely do not need to chase every new tool. But it is worth getting curious, trying things in low-risk ways, and building the habit of asking good questions about safety, workflow, governance and accountability. That is probably where the real value starts. Happy AI-ing all.
Join the next clinical product panel 🎤
On 12th May, Danielle Brightman and I are hosting a panel covering some of the most pressing questions in clinical product management. Tickets have been going fast. 👉 Sign up here.
That’s all for this week. See you next time! 👋
🤝 Work with me | 📅 Attend an event | ✍️ Send a message
Written by Dr Louise Rix, Head of Clinical Product, doctor and ex-VC. Passionate about all things healthcare, healthtech and clinical product (…obviously). Based in London. You can find me on LinkedIn.
Made with 💜 for better, safer HealthTech.