
How I Think About AI in Products

When AI adds value, when it's hype, and what to ship anyway.

Most PMs talk about AI. I've shipped it across recommendation, document generation, and conversational interfaces. The honest answer is: AI is great when it removes a step the user already does, and a liability when it adds one they have to learn.

01 Framework
01
User time saved

If the AI feature doesn't measurably shorten the path the user is already on, it's a demo, not a feature. I quantify the saved minutes before greenlight.
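As a hedged sketch of what "quantify the saved minutes" can look like before greenlight, here is an illustrative back-of-envelope calculation; the function name and all numbers are assumptions for illustration, not a tool from the original.

```python
# Illustrative pre-greenlight arithmetic: seconds the AI removes from the
# existing path, scaled to weekly usage across the user base.
def minutes_saved_per_week(seconds_saved_per_use: float,
                           uses_per_week: int,
                           users: int) -> float:
    """Total minutes saved per week across all users."""
    return seconds_saved_per_use * uses_per_week * users / 60

# e.g. 40 seconds saved per draft, 12 drafts a week, 500 users:
print(minutes_saved_per_week(40, 12, 500))  # 4000.0
```

If the honest estimate rounds to zero, the feature is a demo.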

02
Accuracy threshold

Some flows need 99% (financial reconciliation, compliance). Some can live at 75% with a clean correction loop (drafting, summarisation). Pick the threshold first; pick the model second.
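"Threshold first, model second" can be made concrete as a gate the candidate model must clear. This is a minimal sketch; the flow names and numbers mirror the examples above, and everything else is an illustrative assumption.

```python
# Thresholds are fixed per flow before any model is evaluated.
ACCURACY_THRESHOLDS = {
    "financial_reconciliation": 0.99,  # errors are costly, no correction loop
    "compliance": 0.99,
    "drafting": 0.75,                  # user reviews and edits anyway
    "summarisation": 0.75,
}

def model_clears_bar(flow: str, measured_accuracy: float) -> bool:
    """True only if the candidate model meets the flow's pre-set threshold."""
    return measured_accuracy >= ACCURACY_THRESHOLDS[flow]

print(model_clears_bar("drafting", 0.81))                  # True
print(model_clears_bar("financial_reconciliation", 0.81))  # False
```

The same 81%-accurate model ships for drafting and gets rejected for reconciliation; the decision lives in the table, not in the model choice.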

03
Fallback when AI fails

What does the user see at the moment the model is wrong? If the answer is 'an apologetic toast and a broken flow', the feature isn't ready.
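The alternative to the apologetic toast is a fallback that lands the user on the flow they had before the feature existed. A minimal sketch of that pattern, with all names and the confidence cut-off as illustrative assumptions:

```python
# When the model errors out or is unsure, return the pre-AI manual path
# instead of surfacing a failure to the user.
def generate_draft(call_model, manual_template: str, prompt: str,
                   min_confidence: float = 0.7) -> str:
    """Return a model draft if confident, else the manual template."""
    try:
        text, confidence = call_model(prompt)
    except Exception:
        return manual_template  # model outage: the flow still works
    if confidence < min_confidence:
        return manual_template  # likely-wrong draft: don't show it
    return text

def flaky_model(prompt):
    raise TimeoutError("model unavailable")

print(generate_draft(flaky_model, "[blank template]", "draft intro"))
# [blank template]
```

The user never learns the model timed out; they just get the old blank template.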

02 What I'd show in a working session
  • My decision tree for build vs. buy vs. wrap-an-API.
  • How I structure prompt iteration — the smallest test set I can defend.
  • When to expose model confidence to the user, and when to hide it.
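The last point, when to expose confidence, can be sketched as a simple banding rule. This is a hypothetical illustration; the band boundaries and labels are assumptions, not the author's actual policy.

```python
# Show the confidence score only where it changes user behaviour:
# below a floor, hide the output entirely; in the middle band, show the
# score so the user knows to verify; above a ceiling, the score is noise.
def confidence_display(confidence: float,
                       floor: float = 0.5,
                       ceiling: float = 0.9) -> str:
    if confidence < floor:
        return "hide_output"      # fall back to the manual flow
    if confidence < ceiling:
        return "show_with_score"  # user should double-check
    return "show_plain"           # score adds nothing actionable

print(confidence_display(0.3))   # hide_output
print(confidence_display(0.7))   # show_with_score
print(confidence_display(0.95))  # show_plain
```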
03 Generic example
Generic illustration

Built and deployed multiple AI agents across recommendation, document generation, and a conversational layer. The unifying thread: each one removed a step the user was already doing manually — and degraded gracefully when the model missed.

04 Artifact
AI feasibility scorecard

A one-page grid I score features on: time saved, accuracy needed, fallback severity, build/buy cost, defensibility. Anything that lands in the bottom-left quadrant gets killed before a sprint touches it.
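As a hedged sketch of how such a grid could collapse to a kill decision: score each dimension 1 to 5, fold them into a value axis and a cost axis, and kill anything in the low-value, high-cost quadrant. The axes, weights, and cut-offs below are illustrative assumptions, not the actual scorecard.

```python
# Each dimension is scored 1 (worst) to 5 (best/highest).
def score_feature(time_saved: int, accuracy_needed: int,
                  fallback_severity: int, build_cost: int,
                  defensibility: int):
    """Collapse the five scores into (value, cost, verdict)."""
    value = time_saved + defensibility                       # higher is better
    cost = accuracy_needed + fallback_severity + build_cost  # higher is worse
    verdict = "kill" if value <= 4 and cost >= 9 else "consider"
    return value, cost, verdict

# Little time saved, strict accuracy, severe fallback, expensive to build:
print(score_feature(time_saved=1, accuracy_needed=5, fallback_severity=4,
                    build_cost=4, defensibility=2))  # (3, 13, 'kill')
```

The point of encoding it is that the kill happens in a spreadsheet, before a sprint is spent.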

05 Takeaway

Most AI features fail because they confuse capability with utility. The right question isn't 'can the model do this?' — it's 'will the user notice the difference, and at what threshold of error?'