Blogs
Mar 2025 · 14 min read

Stop Building MVPs. Build M-V-Ps.

Minimum, viable, product. All three words mean something. Most teams ship versions that miss one of them.

MVP is the most overused acronym in product. Everyone says they're shipping one. Most aren't.

The phrase has three words. M, V, P. All three are constraints. Skip any one and you ship something else.

A minimum that isn't viable is a prototype. A viable minimum that isn't a product is a script. A viable product that isn't minimum is just a v1 dressed up in the term. Each failure mode happens. Each produces a launch that doesn't do what an MVP is supposed to do — test market fit cheaply.

Minimum

Smallest scope that still tests the core hypothesis. Not 'small.' Not 'feels lightweight.' Smallest. Whatever you'd reflexively call the MVP, cut a third more.

The operational test: for every feature in scope, ask 'can the user complete the core task without this?' If yes, the feature isn't in the MVP. Move it to phase 2.

Most teams' first instinct is to over-scope because they want the MVP to feel polished. Polished MVPs delay the launch and confuse the test — you can't tell whether users like the feature or like the polish. Cut the polish. Polished comes later, when you know what to polish.

Polished MVPs confuse the test. Ship rough. Polish what works.

Viable

The MVP has to actually let a real user complete the core task. Not 'mostly.' Not 'with workarounds.' End to end, by themselves.

The trap here is the inverse of over-scoping. Teams cut so aggressively that the MVP becomes a prototype — it shows the idea, but can't actually run on real user input. The first real user hits it, encounters a missing edge case, and bounces. The MVP didn't get tested because it wasn't viable.

The test: take the worst real user you've talked to (the one with the messiest inputs, the most edge-case workflow). Could they complete the core task in your MVP without you sitting next to them? If no, the MVP isn't viable. It's a demo.

Product

It has to be a product, not a service or a script.

This sounds like a minor distinction. It's load-bearing. A 'product' is something the user can interact with repeatedly without your manual intervention. A 'service' is something you do for them. A 'script' is something that runs once and stops.

Many teams ship something that looks like an MVP but is operationally a service. The team is doing manual work in the background to make the magic happen. This is fine as a phase before the MVP — concierge MVPs are a legitimate validation technique — but it isn't the MVP test you think it is. The user is reacting to your manual delivery, not to the product. When you eventually automate the manual work, the experience changes, and the validation you got from the concierge phase may not transfer.

If the goal is to test product-market fit, the MVP has to be a product. If the goal is to test service-market fit, ship the concierge version openly and don't call it an MVP.

All three together

The discipline of getting all three right is what separates 0→1 efforts that ship and learn from ones that ship and confuse themselves.

Minimum without viable: pretty demo, no usage. Viable without minimum: shipped too late, learned too slow. Minimum and viable without product: you tested a service, and your users will be surprised when it productizes.

M, V, P. Three constraints. They all have to be true at the same time. The fastest path to genuine signal in early-stage product work is treating them as three separate gates and forcing yourself to clear all three before launch.