The Remove-a-Step Rule
Every AI feature should be tested against one question: does this remove a step the user is already taking? If no, kill it.
The single most useful filter for AI feature proposals is the remove-a-step rule. State the feature in one sentence. Ask: does this remove a step that real users are already taking, manually, today?
If yes: high adoption is likely. The user already does the thing; the AI just makes it faster. No new behavior required. The value proposition is felt on first use.
If no: the feature is asking the user to learn a new action AND trust an AI with it AND fit it into a workflow they don't currently have. Adoption will be a multi-quarter slog and possibly never happen.
Three worked examples.
'AI summarizes a long document.' Does this remove a step? Yes — users were skimming the document anyway, condensing in their head. The AI does it faster. Adopt.
'AI suggests new tags you should apply to your data based on patterns.' Does this remove a step? No — users weren't tagging this way before. The AI is creating new work that looks like analysis but actually requires the user to evaluate every suggestion. Don't adopt.
'AI drafts a reply to a customer support ticket.' Does this remove a step? Yes — support agents were drafting replies. AI drafts first; agent edits. Adopt.
The rule is fast. You can run it in 30 seconds. It will reject 40% of feature proposals before any design or spec work. That's the point.
AI features that remove steps adopt fast. AI features that add steps die slow.
Be specific about what counts as a step.
A step is a discrete unit of effort the user performs to make progress toward their goal. Reading a document is a step. Writing an email is a step. Searching a database is a step. Reviewing a list and picking the right item is a step.
Reading an AI suggestion to decide whether to accept it is also a step. This is the most common gotcha. Teams propose AI features that 'help' by surfacing suggestions, but evaluating each suggestion is itself a step. If the user has to evaluate ten suggestions, the AI has added ten new steps, even if accepting each one takes a single click.
The sharp version of the rule: does the AI reduce the total number of decisions the user has to make? Not 'does the AI offer help.' Help is irrelevant if it requires decision overhead.
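The decision-count version of the rule can be sketched as a trivial comparison. This is a minimal illustration, not a real tool; the function name and the example counts are hypothetical, and the counts must include every evaluation of an AI suggestion, per the gotcha above.

```python
def remove_a_step_verdict(decisions_before: int, decisions_after: int) -> str:
    """Compare total user decisions without vs. with the AI feature.

    decisions_after must count every discrete decision, including the
    decision to accept or reject each individual AI suggestion.
    """
    if decisions_after < decisions_before:
        return "adopt"  # the AI removed steps
    return "kill"       # the AI added steps, or merely broke even


# Illustrative counts for two of the worked examples (numbers are made up):
# Summarizing: skim 20 sections of a document -> read 1 summary.
print(remove_a_step_verdict(decisions_before=20, decisions_after=1))   # adopt
# Tag suggestions: 0 tagging decisions before -> 10 suggestions to evaluate.
print(remove_a_step_verdict(decisions_before=0, decisions_after=10))   # kill
```

The asymmetry is deliberate: a tie returns "kill", because a feature that leaves the decision count unchanged has not removed a step.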