Hadar Zeharia

Two ways to use AI in a product

A co-pilot is the shape most teams have shipped between 2023 and 2025. The user does the work, AI suggests, the user accepts or rejects. GitHub Copilot. Notion AI. Cursor. The product holds the user’s hand through the doing.

An autonomous system is the other shape, less visible because it does not announce itself. The user sets the rules once. The system does the work. The user reviews exceptions when something genuinely needs human judgment. Stripe’s Smart Retries. Amazon’s subscribe-and-save. Utility autopay. PM OS at my desk every morning at 07:08, drafting the briefing before coffee.

Both ship. Both work. They are different products with different relationships to the user’s attention, and the gap between them is what 2026 starts to surface.

What the co-pilot framing assumes

A co-pilot is a polite product. It assumes the user wants to drive, wants to remain in the loop, wants to verify each suggestion before it lands. For some categories that is exactly right. Coding inside a codebase the user knows better than any model. Editing a memo whose voice belongs to the sender. Therapy, where the human is doing the actual work and the model is just the prompter. The context that matters lives in the user, not in the system.

For most other categories, the co-pilot framing is a downgrade dressed up as empowerment.

The friction the user came to remove is the doing, not the deciding. A user who has set up autopay does not want a product that asks each month, should we pay your electric bill now? They wanted the bill paid. They set up autopay specifically so they could stop thinking about it. The polite version of the critique says co-pilots add friction. The harder version is more accurate: co-pilots ship cognitive load. Each suggestion is a decision point. Forty-seven suggestions in a morning is forty-seven decisions the user did not come to the product to make.

The user came to the product to make fewer decisions, not more.

What an autonomous system actually requires

Three pieces, none of them about the AI itself.

The first is a clear set of rules the user can set, change, and audit. Pay my electric bill from this account if it is under $X. Otherwise notify me. The user is in the system, not in the loop on every transaction. The rules are explicit, editable, and visible at any time the user wants to know what the system is doing on their behalf.
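What such a rule could look like, sketched in Python. Every name here is invented for illustration, not a real API; the point is only that the rule is explicit, bounded, and editable.

```python
# A minimal sketch of a user-editable rule, assuming a hypothetical
# billing domain. All names are illustrative.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    PAY = "pay"        # act on the user's behalf
    NOTIFY = "notify"  # fall back to asking the user


@dataclass
class AutopayRule:
    biller: str           # e.g. "electric"
    account_id: str       # which account to draw from
    limit_cents: int      # act only under this amount
    enabled: bool = True  # the user can switch the rule off at any time

    def decide(self, amount_cents: int) -> Action:
        """Apply the rule to one bill: pay under the limit, otherwise notify."""
        if self.enabled and amount_cents <= self.limit_cents:
            return Action.PAY
        return Action.NOTIFY


# "Pay my electric bill from this account if it is under $150. Otherwise notify me."
rule = AutopayRule(biller="electric", account_id="checking-01", limit_cents=150_00)
assert rule.decide(92_40) is Action.PAY
assert rule.decide(210_00) is Action.NOTIFY
```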

The second is a tiered autonomy model. Low-stakes actions run automatically. Medium-stakes actions are drafted and wait for approval. High-stakes actions remain human-only. The line is drawn explicitly. The system refuses actions it does not have authority for, and the refusal — the cannot, not the will not — is what makes the user trust the system to keep running when they are not watching.
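A sketch of the same idea in code, again with invented names. Low runs, medium drafts and waits, high refuses.

```python
# A minimal sketch of tiered autonomy, assuming a hypothetical action model.
# The tiers and the refusal are the point; the specific names are illustrative.
from enum import Enum


class Tier(Enum):
    LOW = "low"        # run automatically
    MEDIUM = "medium"  # draft, then wait for approval
    HIGH = "high"      # human-only; the system refuses


class NotAuthorized(Exception):
    """Raised when the system is asked to do something it cannot do, by design."""


def route(action: str, tier: Tier) -> str:
    if tier is Tier.LOW:
        return f"executed: {action}"
    if tier is Tier.MEDIUM:
        return f"drafted, awaiting approval: {action}"
    # HIGH: a hard refusal, not a softer suggestion. The "cannot" the user relies on.
    raise NotAuthorized(f"{action} requires a human")


print(route("retry declined payment", Tier.LOW))
print(route("email the customer about the failure", Tier.MEDIUM))
try:
    route("close the customer's account", Tier.HIGH)
except NotAuthorized as exc:
    print(f"refused: {exc}")
```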

The third is a complete audit log. What was done, when, why, and what alternative was considered. Surfaced when it matters: at the next decision point, in the monthly summary, in the moment of doubt. Buried in a settings panel does not count.
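A sketch of what one entry might carry, with the same caveat that every name is illustrative: the fields are the ones named above, what was done, when, why, and what alternative was considered.

```python
# A minimal sketch of an audit log entry. Illustrative names only.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class AuditEntry:
    action: str       # what was done
    reason: str       # why the system chose it
    alternative: str  # what it considered and did not do
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def summary_line(self) -> str:
        """One line for the monthly summary or the next decision point."""
        return f"{self.at:%Y-%m-%d %H:%M} {self.action} ({self.reason}; instead of: {self.alternative})"


log: list[AuditEntry] = []
log.append(AuditEntry(
    action="retried payment on card ending 4242",
    reason="card historically clears on Thursday afternoons",
    alternative="retry at midnight on a fixed schedule",
))
print("\n".join(entry.summary_line() for entry in log))
```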

When all three are in place, the user gets what they actually came for. Their problem solved, without their attention as the cost.

What this looks like, concretely

Stripe’s Smart Retries is the cleanest example. A naive system retries declined payments at midnight every day. Smart Retries retries when this specific card historically clears: Thursday at 3pm, when payday lands. The user does not approve each retry. The user sees the result at the end of the month: we recovered $X you would have lost. The audit log is the recovery summary itself.

Amazon’s subscribe-and-save sits in the same category. The product does not ask each month do you want more dish soap? The user set the cadence at signup. The product runs. The user reviews when consumption stops matching delivery, not before. The same shape works for utility autopay, for SaaS renewals, for any recurring action where the user has already decided what they want.

Each of these companies could have built a co-pilot version. AI noticed your card declined — want to retry now? It would not have been the better product. It would have been the more annoying one, sitting between the user and the outcome they had already signaled they wanted.

When co-pilot is the right shape

A co-pilot is the right shape when the user’s context is irreplaceable, when the output is high-stakes and irreversible, or when the experience of doing is the entire point. Coding inside a codebase the user knows better than the model. Sending an email to a regulator. Composing music with a generative tool. In those cases the AI is the prompter, the verifier, the suggestion engine. The user remains the actor.

For everything else, the long tail of I just want this handled, autonomous wins.

The PM bet of the decade

The next decade of product sorts along this axis. Companies that build co-pilots for every workflow will hit diminishing returns. Each new AI feature adds another decision point to the user’s day. Eventually the user notices that their day has become more effortful, not less. They downgrade or they churn.

Companies that build autonomous systems with serious audit logs compound. The user starts with one autonomous feature, adds another, adds another. The product is doing real work on the user’s behalf and the user is making fewer, better decisions per day.

That last clause is the bet. Fewer, better decisions per day. It is what users came for. It is what AI capability has finally made possible at scale.

The PM job, in this shape

The PM job is to draw the lines. Which decisions are sacred to the user. Which are automatic. Which are drafted and waiting for approval. Which are notification-only. Which are forbidden to the system entirely.

And to design the audit log carefully enough that the user trusts the system to do the rest.

That is the actual work. Not adding an AI feature. Drawing the lines.

Stop building polite assistants. Start building patient operators.