Clinical Product Thinking

Why Clinical Safety Failures Often Start in Product Decisions

The hidden safety impact of everyday design and prioritisation choices

Feb 08, 2026

This is Clinical Product Thinking 🧠, your weekly newsletter featuring practical tips, frameworks and strategies from the frontlines of clinical product.

Good afternoon friends, this is issue No. 024. This week, we’re chatting with Dr Karim Sandid, GP turned clinical product manager and clinical safety officer, on what it means to build clinically safe products.

For years, startups have followed Mark Zuckerberg’s mantra:

Move fast and break things.

In consumer tech, breaking things usually means a feature that doesn’t work, a faulty release, or a short-lived outage. Annoying, but recoverable.

In healthtech, the things that break are different.

They include trust.
They include clinical confidence.
And sometimes, they include people.

So while that mantra may work in consumer tech, the clinical product version is something closer to: move deliberately, with an eye on downstream consequences (not quite as catchy, I’ll admit).

That difference shaped much of my conversation with Karim:

Many clinical safety incidents aren’t unexpected failures.
They’re the unintended result of early product decisions.

Not software bugs.
Not freak edge cases.
But small, reasonable-seeming decisions that quietly shaped risk long before anyone called it a “safety issue”.

Where teams go wrong: design without a safety lens

Most healthtech teams use some form of design thinking:

  • researching real user problems

  • prototyping and testing solutions

  • iterating towards better experiences

On paper, this should reduce risk.

But in reality, these processes often run without continuous clinical safety input, especially during everyday product decisions.

Safety tends to appear:

  • at the very start (“Does this make clinical sense?”)

  • or right at the end (“Can someone sign this off?”)

What’s missing is the middle, where hundreds of seemingly minor choices accumulate into material risk.

This is where an important distinction gets blurred:

Clinical input ≠ clinical safety input

A clinician might help you answer:

  • Is this clinically sensible?

  • Is this aligned with guidelines?

  • Would this fit into a real workflow?

A clinical safety lens asks different questions:

  • How could this be misunderstood?

  • Where might a user place inappropriate trust in the product?

  • What happens if this is used incorrectly, partially or under pressure?

  • What assumptions are we making about behaviour, context or attention?

Those questions rarely come up unless someone is explicitly responsible for asking them.

A concrete example: when UI decisions become safety decisions

Karim shared a blood pressure product that looked, on the surface, very low risk.

The interface used:

  • green, yellow, and red zones

  • a clear disclaimer stating it was not intended to diagnose or treat medical conditions

From a product perspective, this felt sensible.
From a marketing perspective, it felt intuitive.

From a regulatory and safety perspective, it was a problem.

The colour coding alone implied normal versus abnormal.
That implication effectively positioned the product as a medical device, disclaimer or not.

Nothing about the data changed.
No new algorithm was added.

A single interface choice altered the product’s regulatory and safety profile.

This is a pattern I’ve seen a number of times:

  • clinical input exists

  • but safety implications of product decisions are only recognised after they’re embedded

The “house you can’t move” problem

Karim used an analogy that captures this perfectly.

Bringing clinical safety in late is like laying the foundations of a house, pouring the concrete, and setting the structure, only to be told afterwards that the house should sit two feet further up the hill.

You can move it.
But the cost is enormous.

In product development, early decisions harden quickly:

  • data models

  • escalation logic

  • defaults and thresholds

  • information architecture

By the time a product reaches beta or launch, many of these are effectively locked in.

Late safety review often means:

  • rework

  • scope reduction

  • significant delays

  • or uncomfortable compromises

So what does this mean day to day?

Clinical safety isn’t just documentation or sign-off.

It’s product decisions made long before launch, especially those that influence how users interpret, trust and act on what they see.

In practice, that means safety needs to be present:

  • when success metrics are defined

  • when defaults, thresholds and visual cues are chosen

  • when workflows are simplified or steps removed

  • when disclaimers are added instead of design constraints

These may not be moments that teams typically label as “safety decisions”.
But they’re exactly where risk is introduced.

The teams that do this well don’t run separate safety processes.
They ensure someone in the room is consistently asking:

“What could go wrong here, in the real world, not the happy path?”

Not at the end.
Not just for compliance.
But while decisions are still cheap to change.

This is also why I believe clinical product managers should be trained in clinical safety, so that safety-aware judgement is present throughout product development.


Clinical Product Dinner ✨

📆 4th March - Designing Virtual Care Pathways: Where Clinical Safety, Product, and Operations Collide

An intimate dinner on the product, safety and operational decisions that make or break virtual care pathways. We’ll examine where teams underestimate risk as care moves into the home, and why early design choices matter most. Featuring Dr Sukrti Nagpal, Interim CMO at Doccla, on what actually happens at scale. 👉 Get your ticket here.


Hiring Spotlight 🚀

Dr Reinhold Innerhofer, co-founder and CMO of a new stealth healthtech company, is hiring a product-minded clinician to join the founding team. I caught up with Reinhold to talk about their direction: they’re on a mission to help 100 million people live their healthiest lives by moving healthcare upstream. This is a fantastic opportunity to help shape how a new category in preventive health is built. 👉 Apply here.


That’s the public post for this week. See you next time! 👋

🤝 Work with me | 📅 Attend an event | ✍️ Send a message


Written by Dr Louise Rix, Head of Clinical Product, doctor and ex-VC. Passionate about all things healthcare, healthtech and clinical product (…obviously). Based in London. You can find me on LinkedIn.


Made with 💜 for better, safer HealthTech.


[NEW] Want to Go Deeper? 👇 Join Paid

Below is an extended, off-the-record conversation with Dr Karim Sandid (Semble) exploring how clinical safety decisions actually get made in product teams, including trade-offs that are hard to capture in writing.

© 2026 Louise Rix