Why Clinical Safety Failures Often Start in Product Decisions
The hidden safety impact of everyday design and prioritisation choices
This is Clinical Product Thinking 🧠, your weekly newsletter featuring practical tips, frameworks and strategies from the frontlines of clinical product.
Good afternoon friends, this is issue No. 024. This week, we're chatting with Dr Karim Sandid, GP turned clinical product manager and clinical safety officer, on what it means to build clinically safe products.
For years, startups have followed Mark Zuckerberg's mantra:
Move fast and break things.
In consumer tech, breaking things usually means a feature that doesn't work, a faulty release, or a short-lived outage. Annoying, but recoverable.
In healthtech, the things that break are different.
They include trust.
They include clinical confidence.
And sometimes, they include people.
So while that mantra may work in consumer tech, the clinical product version is something closer to: move deliberately, with an eye on downstream consequences (not quite as catchy, I'll admit).
That difference shaped much of my conversation with Karim:
Many clinical safety incidents aren't unexpected failures.
They're the unintended result of early product decisions.
Not software bugs.
Not freak edge cases.
But small, reasonable-seeming decisions that quietly shaped risk long before anyone called it a "safety issue".
Where teams go wrong: design without a safety lens
Most healthtech teams use some form of design thinking:
researching real user problems
prototyping and testing solutions
iterating towards better experiences
On paper, this should reduce risk.
But in reality, these processes often run without continuous clinical safety input, especially during everyday product decisions.
Safety tends to appear:
at the very start ("Does this make clinical sense?")
or right at the end ("Can someone sign this off?")
What's missing is the middle, where hundreds of seemingly minor choices accumulate into material risk.
This is where an important distinction gets blurred:
Clinical input ≠ clinical safety input
A clinician might help you answer:
Is this clinically sensible?
Is this aligned with guidelines?
Would this fit into a real workflow?
A clinical safety lens asks different questions:
How could this be misunderstood?
Where might a user place inappropriate trust in the product?
What happens if this is used incorrectly, partially or under pressure?
What assumptions are we making about behaviour, context or attention?
Those questions rarely come up unless someone is explicitly responsible for asking them.
A concrete example: when UI decisions become safety decisions
Karim shared a blood pressure product that looked, on the surface, very low risk.
The interface used:
green, yellow, and red zones
a clear disclaimer stating it was not intended to diagnose or treat medical conditions
From a product perspective, this felt sensible.
From a marketing perspective, it felt intuitive.
From a regulatory and safety perspective, it was a problem.
The colour coding alone implied normal versus abnormal.
That implication effectively positioned the product as a medical device, disclaimer or not.
Nothing about the data changed.
No new algorithm was added.
A single interface choice altered the productâs regulatory and safety profile.
This is the pattern I've seen a number of times:
clinical input exists
but safety implications of product decisions are only recognised after they're embedded
The "house you can't move" problem
Karim used an analogy that captures this perfectly.
Bringing clinical safety in late is like laying the foundations of a house, pouring the concrete, and setting the structure, only to be told afterwards that the house should sit two feet further up the hill.
You can move it.
But the cost is enormous.
In product development, early decisions harden quickly:
data models
escalation logic
defaults and thresholds
information architecture
By the time a product reaches beta or launch, many of these are effectively locked in.
Late safety review often means:
rework
scope reduction
significant delays
or uncomfortable compromises
So what does this mean day to day?
Clinical safety isn't just documentation or sign-off.
It's product decisions made long before launch, especially those that influence how users interpret, trust and act on what they see.
In practice, that means safety needs to be present:
when success metrics are defined
when defaults, thresholds and visual cues are chosen
when workflows are simplified or steps removed
when disclaimers are added instead of design constraints
These may not be moments that teams typically label as "safety decisions".
But they're exactly where risk is introduced.
The teams that do this well don't run separate safety processes.
They ensure someone in the room is consistently asking:
"What could go wrong here, in the real world, not the happy path?"
Not at the end.
Not just for compliance.
But while decisions are still cheap to change.
This is also why I believe clinical product managers should be clinical safety trained, to ensure safety-aware judgement is present throughout product development.
Clinical Product Dinner ✨
📅 4th March - Designing Virtual Care Pathways: Where Clinical Safety, Product, and Operations Collide
An intimate dinner on the product, safety and operational decisions that make or break virtual care pathways. We'll examine where teams underestimate risk as care moves into the home, and why early design choices matter most. Featuring Dr Sukrti Nagpal, Interim CMO at Doccla, on what actually happens at scale. 👉 Get your ticket here.
Hiring Spotlight 🔍
Dr Reinhold Innerhofer, co-founder and CMO of a new stealth healthtech company, is hiring a product-minded clinician to join the founding team. I caught up with Reinhold to talk about their direction: they're on a mission to help 100 million people live their healthiest lives by moving healthcare upstream. This is a fantastic opportunity to help shape how a new category in preventive health is built. 👉 Apply here.
That's the public post for this week. See you next time! 👋
🤝 Work with me | 📅 Attend an event | ✉️ Send a message
Written by Dr Louise Rix, Head of Clinical Product, doctor and ex-VC. Passionate about all things healthcare, healthtech and clinical product (…obviously). Based in London. You can find me on LinkedIn.
Made with 💙 for better, safer HealthTech.
[NEW] Want to Go Deeper? 🔓 Join Paid
Below is an extended, off-the-record conversation with Dr Karim Sandid (Semble) exploring how clinical safety decisions actually get made in product teams, including trade-offs that are hard to capture in writing.




