The 25 Best AI Prompts for Product Managers

February 27, 2026 by Promptzy
Tags: ai prompts product managers, chatgpt pm prompts, ai prompts prd, product management ai prompts

Product management work splits neatly into the parts where AI is genuinely useful and the parts where it is actively harmful. AI is useful for synthesis, for summarizing research, for cleaning up a messy PRD, for generating stakeholder updates from raw notes, and for stress testing a prioritization call. AI is harmful when a PM uses it to avoid the hard thinking that actually earns the title: deciding what not to build, understanding why a user is frustrated instead of just noting that they are, and holding an opinion when the room wants consensus. The prompts below are built for the useful half. The hard thinking is still on you.

Below are 25 prompts I run when I am doing the craft of the role. User interviews, raw roadmap lists, half written PRDs, metrics dumps, and competitive teardowns all go into {{clipboard}}. Pick the five or six that fit the parts of your week that are most tedious and keep them a keystroke away so you actually use them.


Writing PRDs and specs

1. Turn a rough idea into a PRD outline

I have an idea for a feature and I need to turn it into a proper PRD. I am not looking for a generic template. I want a PRD outline tailored to what I am actually trying to build.

Here is the idea and any context I have about the users, the problem, and the rough scope:

{{clipboard}}

Produce:

1. A working title for the feature in the voice of the user benefit, not the feature name.
2. The problem statement in two sentences, grounded in what I said about users.
3. The specific user or persona this is for.
4. Three to five success metrics ranked by importance, with the one I should commit to in bold.
5. An outline of the PRD with sections appropriate to this feature: goals, non goals, user stories, functional requirements, UX notes, risks and open questions, and launch plan.
6. A "why now" section in two or three sentences that a skeptical exec would accept.
7. The three open questions I need to answer before this PRD can move forward.

Do not fill in content I have not given you. Mark gaps with bracketed notes.

2. Critique a PRD for weaknesses before I share it

I am about to share a PRD with engineering and design and I want a critical read first. Not a rubber stamp, an actual critique.

Here is the PRD:

{{clipboard}}

Review:

1. Clarity of the problem statement: is it specific enough that a new engineer could repeat it back to me?
2. Scope: is there anything that is clearly scope creep, or anything missing that the team will discover in week one?
3. Success metrics: are they measurable, tied to user behavior, and does each one have a current baseline?
4. User stories: do they read like actual user situations, or like feature descriptions disguised as stories?
5. Non goals: are there any, and do they protect the scope?
6. Risks: are the real risks called out, or just the easy ones?
7. Handoff clarity: could this PRD be implemented without a follow up meeting? If not, what is missing?

For each weakness, propose a concrete fix. Be harsh. I would rather rewrite now than fix it in sprint planning.

3. Write the problem statement and goals section of a PRD

I have a lot of context about a problem I want to solve, but I cannot write a tight problem statement. Help me compress it.

Here is everything I know:

{{clipboard}}

Produce:

1. A two sentence problem statement in user voice, not internal voice.
2. A one paragraph background section explaining why this matters now.
3. Three goals for the project, each measurable and tied to user behavior.
4. Three non goals that explicitly exclude tempting scope creep.
5. The hypothesis underneath the project: "we believe that X, if we do Y, because Z."
6. Two or three open questions that the problem statement deliberately does not answer.

Keep each section short. A one page problem statement is better than a five page one.

User research synthesis

4. Synthesize a batch of user interviews into themes

I ran a batch of user interviews and I have the transcripts. Help me find the themes without flattening them into bland generalizations.

Here are the transcripts:

{{clipboard}}

Produce:

1. Three to five themes that appear across multiple interviews, ranked by how often they came up and how strongly they were expressed.
2. For each theme, two or three direct quotes that capture the strongest expression of it. Attribute them to the participant.
3. Tensions between themes, where different participants said contradictory things about the same topic.
4. Any surprising or unexpected theme that only one or two participants mentioned but that sounds important.
5. Any theme that I was probably listening for and might be over weighting.
6. A one paragraph summary of the strongest signal across the batch.

Do not invent quotes. If a theme is inferred rather than directly stated, say so.

5. Turn a single interview into an insight document

I just finished a user interview and I want to extract the insights while they are fresh, before I forget the texture.

Here is the interview:

{{clipboard}}

Produce:

1. A one paragraph summary of the participant's context and what they were trying to accomplish.
2. Three insights that challenge an assumption my team has been making.
3. Three quotes that capture the participant's actual language. Use them verbatim.
4. One moment in the conversation where the participant said something that does not fit our current product mental model, and what it implies.
5. The top two follow up questions I wish I had asked.
6. A one line headline for this interview that I could use in a slide or doc later.

Do not generalize. Stay close to what this specific person said.

6. Find the gap between what users say and what they do

I have user feedback from interviews and behavioral data from analytics, and the two do not tell the same story. Help me figure out where the gap is.

Here is both:

{{clipboard}}

Produce:

1. The main thing users say they want or do, based on the interviews.
2. The main thing users actually do, based on the data.
3. The biggest disagreement between the two.
4. The most likely reasons for the gap: users are wrong about themselves, the interview sample is biased, the data is misleading, or the question was framed in a way that invited a socially desirable answer.
5. Which of the two signals I should trust more for this specific decision, and why.
6. One new question I should investigate to resolve the gap.

Do not pretend both sources agree. Force me to confront the gap.

Prioritization and roadmapping

7. Run a RICE or ICE scoring on a list of features

I have a list of feature ideas and I need to prioritize them. I use ICE scoring (impact, confidence, ease) but I want a sanity check from a second brain.

Here is the list and any context I have about each one:

{{clipboard}}

For each item, produce:

1. An impact score from 1 to 10, with a one sentence justification.
2. A confidence score from 1 to 10 based on how much evidence I have for the impact estimate.
3. An ease score from 1 to 10 based on the rough effort.
4. A total ICE score (impact times confidence times ease, or whatever formula I specify).
5. A one sentence note on the riskiest assumption in my scoring.

At the end, give me:

1. A ranked list.
2. Any item I should split into smaller pieces so it scores better.
3. Any item I should kill because even its best case is not good enough.
4. A flag for any item where the scores are highly uncertain and I should run a quick discovery before committing.

Do not inflate scores to be polite. Be a critic, not a cheerleader.
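
If you want to check the arithmetic yourself instead of trusting the model's math, ICE is trivial to script. A minimal sketch in Python, with made-up feature names and scores:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    impact: int      # 1-10: how much this moves the goal metric
    confidence: int  # 1-10: how much evidence backs the impact estimate
    ease: int        # 1-10: higher means cheaper to build

    @property
    def ice(self) -> int:
        # Standard ICE multiplies all three factors; higher is better
        return self.impact * self.confidence * self.ease

ideas = [
    Idea("Bulk export", impact=6, confidence=7, ease=8),
    Idea("SSO", impact=8, confidence=5, ease=3),
    Idea("Dark mode", impact=3, confidence=9, ease=9),
]

for idea in sorted(ideas, key=lambda i: i.ice, reverse=True):
    print(f"{idea.name}: ICE = {idea.ice}")
# Bulk export: ICE = 336
# Dark mode: ICE = 243
# SSO: ICE = 120
```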

8. Stress test a prioritization call

I am about to commit to a prioritization decision and I want to pressure test it before I defend it in a meeting.

Here is the decision and my reasoning:

{{clipboard}}

Attack the decision:

1. What is the strongest argument against the thing I am picking?
2. What is the strongest argument for the thing I am not picking?
3. What assumption am I making that, if wrong, would flip the decision?
4. What would change if the time horizon was twice as long, or half as long?
5. What would change if the team was half the size, or twice the size?
6. Who on the team is most likely to disagree with this call, and what would they say?
7. Is there a cheaper experiment I could run that would inform the decision before committing fully?

After the attack, tell me whether the decision still holds. If it does, strengthen the reasoning. If it does not, propose a revised call.

9. Build a roadmap narrative from a list of projects

I have a list of projects for the next two quarters and I need to present them as a coherent roadmap, not a shopping list.

Here is the list with rough scope and owners:

{{clipboard}}

Produce:

1. The underlying theme or story that connects the projects. If there is no story, say so.
2. A reordering of the projects into a narrative: what we are trying to accomplish, the sequence of bets we are making, and why each project comes when it does.
3. Any project that does not fit the narrative and should be explained separately or dropped.
4. A one paragraph summary I could use to open a roadmap presentation.
5. The three projects I should lead with if I only have five minutes to present.
6. A list of the assumptions that, if challenged, would force us to re-sequence the roadmap.

Do not force a narrative on projects that genuinely do not share one. Honesty is better than a polished but fake story.

Stakeholder communication

10. Write a weekly product update from raw notes

I have a rough pile of notes from the week: what the team shipped, what is in progress, what is blocked, what I learned. Turn it into a clean weekly update my stakeholders will actually read.

Here are the notes:

{{clipboard}}

Produce an update with:

1. A headline for the week: one sentence that captures the most important thing.
2. Shipped: concrete items with a one line note on impact.
3. In progress: what is moving and rough expected dates.
4. Blockers: each with what is blocked, who is blocking, and what I need.
5. Learnings: one or two things the team learned, good or bad, framed as updates to our thinking.
6. What is next: the top one or two priorities for next week.

Voice: direct, honest, no corporate hedging. Under 350 words. If something is bad news, include it without burying it.

11. Draft an update for an exec who does not read long docs

I need to write an update for a senior exec who will skim this for under 30 seconds. I need them to take away the right things.

Here is the raw update and who it is for:

{{clipboard}}

Produce:

1. A subject line that would make them open the email.
2. A one sentence takeaway at the very top.
3. A three bullet summary: what changed, what it means, what I need.
4. A "more context" section they can skip, with the nuance.
5. The single number or metric they would want to see.
6. A clear ask, if I have one. If I do not, no ask.

Total length at the top (before "more context"): under 100 words. Execs read the top. Everything below is optional.

12. Handle a stakeholder request that does not fit the roadmap

A stakeholder is pushing for a feature that does not fit the current roadmap. I need to respond in a way that is respectful but does not commit to something I cannot deliver.

Here is the request and my roadmap context:

{{clipboard}}

Produce:

1. A response under 150 words that acknowledges the request and the reasoning behind it.
2. A clear statement of why it does not fit right now, grounded in priorities, not excuses.
3. A question that explores whether there is a cheaper version of what they actually need.
4. A concrete next step: when we will revisit, what would change the answer, or how to escalate if they disagree.
5. A tone that is collaborative, not defensive.

Do not say "no" in a way that closes the door. Do not say "yes" in a way that creates a commitment I cannot keep. Find the middle ground.

User stories and acceptance criteria

13. Write user stories from a feature description

I have a feature description and I need it broken into user stories that engineering can actually build against.

Here is the feature:

{{clipboard}}

Produce:

1. Five to eight user stories in the "as a [user], I want [action] so that [outcome]" format.
2. Each story should be small enough to fit in a sprint and independent enough to be shipped on its own.
3. Stories should be in a logical order that would allow for iterative delivery: ship story 1, then story 2 extends it, etc.
4. A flag for any story that depends on another story and cannot be built in parallel.
5. A note on any functionality in the feature description that does NOT fit into a user story and should be documented separately (infrastructure, tech debt, non functional requirements).
6. A prioritization: which stories are must have for an MVP versus can wait.

Do not write stories that are really just task descriptions. A story should describe a user outcome.

14. Write acceptance criteria for a user story

I have a user story and I need acceptance criteria that are specific enough for engineering to test against, without micromanaging the implementation.

Here is the story:

{{clipboard}}

Produce acceptance criteria in Given/When/Then format:

1. Three to six scenarios that cover the happy path and the main edge cases.
2. Each scenario is specific about the starting state, the action, and the expected outcome.
3. Edge cases: empty state, error state, permission issues, loading state, invalid input.
4. A note on any scenario that probably needs design review before it can be tested.
5. A flag for any scenario that implies a product decision I have not made yet.

Do not write acceptance criteria that are just a reworded version of the story. They should test it, not restate it.
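
A decent smell test for Given/When/Then criteria: if a scenario cannot be turned into an automated test, it is probably too vague. A minimal sketch of that mapping, using a hypothetical in-memory client in place of a real product API:

```python
class ProjectClient:
    """Minimal in-memory stand-in for a hypothetical product API."""

    def __init__(self):
        self._projects = {}  # project id -> archived flag

    def create_project(self, pid):
        self._projects[pid] = False

    def archive_project(self, pid):
        self._projects[pid] = True

    def list_projects(self, archived=False):
        return [pid for pid, is_archived in self._projects.items()
                if is_archived == archived]


def test_archived_project_hidden_from_default_list():
    # Given: an active project visible in the default list
    client = ProjectClient()
    client.create_project("proj-1")
    assert "proj-1" in client.list_projects()
    # When: the user archives the project
    client.archive_project("proj-1")
    # Then: it disappears from the default list...
    assert "proj-1" not in client.list_projects()
    # ...but is still reachable via the archived filter
    assert "proj-1" in client.list_projects(archived=True)
```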

15. Identify the edge cases a user story is not covering

I have a user story and I want a second set of eyes on the edge cases before I hand it off.

Here is the story:

{{clipboard}}

For this story, list:

1. Empty state: what happens when there is no data?
2. Error state: what happens when something goes wrong, and how does the user recover?
3. Loading state: what does the user see while waiting?
4. Permission state: what happens if the user does not have access?
5. Concurrent actions: what if two users do this at the same time?
6. Large data: what if the user has 10,000 of whatever this story is about?
7. Legacy users: what if the user has data that predates this feature?
8. Offline or network issues: what happens, and does the feature degrade gracefully?

For each edge case, tell me whether the story currently addresses it. If not, decide whether to add it to the story, defer it to a later story, or document it as a known limitation.

Competitive analysis

16. Produce a competitive teardown of a feature

A competitor just launched a feature that is relevant to my product. I want a useful teardown, not a marketing summary.

Here is the feature and what I know about it:

{{clipboard}}

Produce:

1. A one paragraph description of what the feature does, in plain language.
2. The user problem it is solving, in the user's voice.
3. The three things they probably did well, based on what is visible.
4. The three things that are probably rough or incomplete based on the launch scope.
5. The strategic bet this feature represents for them.
6. The implications for my product: do I need to match, beat, differentiate, or ignore? Pick one.
7. The one piece of information I would most want to know that is not public, and how I could probably find it out.

Do not assume they did this better than we could. Assume the opposite: what is the strongest version of my team's response?

17. Compare my product to a competitor on a specific axis

I need to position my product against a competitor on a specific dimension (pricing, onboarding, power features, whatever). Help me build the comparison honestly.

Here is the axis and what I know about both products:

{{clipboard}}

Produce:

1. A side by side comparison on the specific dimension, with concrete examples where possible.
2. The three things I do better, stated in the buyer's language not mine.
3. The three things they do better, stated honestly.
4. The one thing I do that is not better or worse, just different, and why that might matter to a specific buyer segment.
5. A positioning statement for my product on this axis, under 30 words, that is not a marketing slogan.
6. A sales or product adjustment I should consider based on the comparison.

If my product is genuinely weaker on this axis, say so and help me decide whether to compete there or route around it.

Data analysis and metrics

18. Interpret a confusing metrics dashboard

I am looking at a metrics dashboard and I cannot figure out what it is telling me. Help me read it carefully.

Here is the dashboard or the metric snapshot:

{{clipboard}}

Walk me through:

1. What each metric is measuring, in plain English.
2. Whether the trend is going in the right direction, the wrong direction, or flat.
3. Any metric that looks like a leading indicator versus a lagging indicator.
4. Any metric that is moving but does not matter (vanity metric).
5. The single most important signal in this dashboard and why.
6. Anything in the dashboard that is contradictory or implies a measurement error.
7. The one question I should be asking my data team about this view.

Do not just describe the numbers. Tell me what they mean.

19. Design a metrics tree for a feature

I am launching a feature and I need to instrument it. I want a metrics tree that starts from the business outcome and decomposes down to the specific events I should capture.

Here is the feature and the business outcome I care about:

{{clipboard}}

Produce:

1. The top level business metric (revenue, retention, activation, whatever).
2. The secondary metrics that decompose the top level (e.g., activation decomposes into sign up rate and time to first value).
3. The product metrics underneath each secondary metric.
4. The specific events I need to instrument to measure the product metrics.
5. The properties I need to capture on each event so I can slice it.
6. The one metric I would use as the north star for this feature, and why.
7. A flag for any metric that is important but hard to measure, with a proxy I could use instead.

Keep the tree tight. If I instrument twenty metrics, I will look at zero of them.
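
If it helps to picture the output, a metrics tree is just a nested structure with the business metric at the root and instrumented events at the leaves. A sketch in Python, with hypothetical metric and event names:

```python
# Business outcome at the root, product metrics in the middle,
# instrumented events (with their slicing properties) at the leaves.
metrics_tree = {
    "metric": "weekly retention",
    "children": [
        {
            "metric": "activation rate",
            "children": [
                {"event": "signup_completed",
                 "properties": ["plan", "referral_source"]},
                {"event": "first_value_reached",
                 "properties": ["days_since_signup", "feature_used"]},
            ],
        },
        {
            "metric": "repeat usage",
            "children": [
                {"event": "session_started",
                 "properties": ["platform", "entry_point"]},
            ],
        },
    ],
}

def leaf_events(node):
    """Walk the tree and collect the events that need instrumentation."""
    if "event" in node:
        return [node["event"]]
    return [e for child in node.get("children", []) for e in leaf_events(child)]

print(leaf_events(metrics_tree))
# ['signup_completed', 'first_value_reached', 'session_started']
```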

20. Write a hypothesis for an experiment

I am running an A/B test and I need a crisp hypothesis that my team will actually hold me accountable to.

Here is the change I am testing and my initial thinking:

{{clipboard}}

Produce:

1. A hypothesis in the form "we believe that [change] will cause [metric] to move by [direction and magnitude] because [reason]."
2. The primary metric, with the current baseline.
3. The secondary metrics that should also move if the hypothesis is correct.
4. A guardrail metric that should NOT move, and how much movement would be a red flag.
5. The minimum detectable effect and the sample size needed to see it.
6. A decision rule: what counts as a win, what counts as a loss, what counts as ambiguous.
7. What I will do in each of the three outcomes.

Do not write a hypothesis I cannot disprove. Force me to commit to a falsifiable claim.
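
For the sample size in item 5, the model's estimate is worth double checking. The standard two-proportion approximation fits in a few lines of Python using only the standard library; the baseline and effect here are illustrative:

```python
from statistics import NormalDist

def sample_size_per_arm(baseline, mde, alpha=0.05, power=0.8):
    """Approximate per-arm sample size for a two-proportion A/B test.

    baseline: current conversion rate, e.g. 0.10
    mde: minimum detectable effect in absolute terms, e.g. 0.01 (10% -> 11%)
    """
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return int(n) + 1

# Detecting a 1-point lift on a 10% baseline takes roughly 14,700 users per arm:
print(sample_size_per_arm(baseline=0.10, mde=0.01))
```

The punchline for PMs: sample size scales with the inverse square of the effect. Halving the minimum detectable effect quadruples the traffic you need, which is usually the real constraint on what you can test.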

Launch planning

21. Build a launch plan from a feature description

I have a feature that is nearly ready to ship and I need a launch plan that covers the parts PMs forget.

Here is the feature and the rough ship date:

{{clipboard}}

Produce a launch plan with:

1. Launch tier (hard launch, soft launch, gradual rollout, internal only) with justification.
2. Rollout plan: percentages, stages, and the criteria to advance.
3. Messaging: internal audience (employees, support, sales), external audience (users, prospects, press).
4. Support readiness: what docs need to exist, what training support needs.
5. Risks: the three most likely things that could go wrong on launch day.
6. Rollback plan: what the kill switch is and who owns it.
7. Success criteria: what we will measure in the first day, week, and month.
8. Post launch review: when we will look at the data and what we will do with it.

Keep the plan executable. A launch plan that is too long does not get used.
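
For the staged rollout in item 2, the usual mechanism behind those percentages is deterministic bucketing: hash each user into a stable slot from 0 to 99 and compare against the current rollout percentage, so raising the percentage only ever adds users. A minimal sketch, with hypothetical feature and user IDs:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically bucket a user into a 0-99 slot for a feature.

    The same user always lands in the same slot, so moving percent
    from 5 to 25 to 100 never flips anyone from on back to off.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Stage 1 of a rollout plan: 5% of users see the feature.
print(in_rollout("user-42", "bulk-export", percent=5))
```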

22. Write launch messaging for three different audiences

I need launch messaging for a feature, but the message needs to land differently with three audiences: users, sales, and leadership.

Here is the feature:

{{clipboard}}

Produce three versions:

1. For users: what it does, why they should care, how to use it. Short, concrete, in their voice. Not marketing speak.
2. For sales: what it does, what objections it addresses, what customer segment benefits most, the one thing they should lead with. Talk in terms of deals.
3. For leadership: the strategic bet this represents, the metric it moves, the risk it takes, the time horizon for results. Concise and honest.

Each version under 150 words. No version should feel like a rehash of the others.

Retrospectives and postmortems

23. Run a product retrospective from team notes

I have raw notes from a team retro session and I need to turn them into a clean document that tracks action items and themes.

Here are the raw notes:

{{clipboard}}

Produce:

1. Themes: three to five patterns that came up across different notes, grouped by topic.
2. Wins: concrete things the team did well, phrased in a way that makes them repeatable.
3. Frustrations: the friction points, stated without assigning blame.
4. Action items: each with a clear owner placeholder, a concrete deliverable, and a rough deadline.
5. Discussion items that need more time than the retro allowed: flagged for a follow up session.
6. The single most important change the team should make based on this retro.

Do not editorialize. Stick to what the team said. If the notes contradict themselves, flag it rather than smoothing it over.

24. Write a product postmortem for a launch that did not go well

A launch I owned did not hit its goals. I need to write a postmortem that is honest about what happened without being self flagellating.

Here is the data and my narrative:

{{clipboard}}

Produce:

1. Summary: one paragraph on what we launched and how it performed relative to the goal.
2. Hypothesis at launch: what we believed would happen and why.
3. What actually happened: the data, without spin.
4. Root cause analysis: the three most likely reasons the hypothesis was wrong.
5. What we learned: updates to our mental models, not platitudes.
6. What we would do differently: specific, not "be more careful next time."
7. What we should do now: kill the feature, iterate, pivot, or wait for more data.

No blame. No vague lessons. The goal is to update how we think so the next launch is less likely to miss.

25. Identify the decision that most likely caused a bad outcome

I have a project that went worse than expected and I want to trace the decision or assumption that was the biggest contributor.

Here is the context and my reconstruction of what happened:

{{clipboard}}

Walk me through:

1. The three to five major decisions made along the way.
2. For each, whether it was reasonable at the time given what we knew, and whether it was reasonable in hindsight.
3. The single decision that most likely changed the outcome.
4. Whether that decision was a process failure (we skipped something) or a judgment failure (we called it wrong).
5. The specific thing we could have done differently that was realistic, not a platitude.
6. The meta lesson: any pattern in how we make decisions that made the bad call more likely.

Do not assign blame to a person. The question is about decisions and process, not who made them.

Store and manage your prompts with Promptzy

Free prompt manager for Mac. Search with Cmd+Shift+P, auto-paste into any AI app.

Download Free for macOS