AI product marketing: how AI changes every phase of the PMM job
AI is changing product marketing from research to launch to measurement. Here is what actually works, what does not, and where human judgment still matters.

I have been a product marketing manager for a long time. I have written positioning documents on airplanes, on couches, in hotel lobbies at 2am before a launch. I have done competitive analysis with nothing but a browser, a spreadsheet, and too much coffee.
AI has changed every part of that job. The work is different now. The speed is different. The expectations are different. And the gap between a PMM who uses AI well and one who does not is getting wider every month.
This is not a post about AI tools. You can find a hundred of those. This is a post about what happens to the actual job of product marketing when AI enters the picture. Phase by phase. With honest assessments of what works, what fails, and where you still need a human brain making the calls.
I wrote about this idea more broadly in good marketing in the AI era. The short version: AI makes great marketers better and exposes the ones who were going through the motions. That pattern is especially true for product marketing, where judgment matters more than output.
What AI actually does well in product marketing
Before I walk through each phase, let me be direct about where AI earns its keep. I use AI in my own marketing workflow every day, and the places where it saves real time are specific.
Research acceleration. This is the biggest win. A competitive analysis that used to take me a full day now takes two hours. Not because the AI does the analysis for me. It does the gathering. It pulls together pricing pages, feature lists, positioning statements, analyst reports, and customer reviews into a structured document. I still have to read it all and form an opinion. But the gathering phase? That used to be 60% of the work.
Pattern recognition in large datasets. Feed AI twenty customer interview transcripts and ask it to find the top five recurring themes. It will find them faster than you will. It will also find themes you missed because you were tired during interview number fourteen. I have tested this against my own manual analysis and the AI catches patterns I skip.
First drafts of structured documents. Positioning frameworks, battle cards, launch plans, messaging matrices. Anything with a known structure and known inputs. AI produces a solid first draft in minutes. The draft needs editing. Sometimes heavy editing. But you are editing instead of staring at a blank page.
Data analysis and reporting. Campaign performance, win/loss trends, feature adoption metrics. AI can summarize data and surface the interesting signals faster than I can build pivot tables. When I need to pull together a quarterly business review, AI cuts the prep time in half.
Content generation at scale. Blog posts, email sequences, social copy, documentation. AI generates drafts that follow a consistent structure. I built an entire system for this with Claude and it changed how I work. Not because the output is publish-ready. Because it gives me something to react to instead of something to create from nothing.
Here is what all of these have in common: AI is doing the work that used to be manual, repetitive, and time-consuming. The gathering. The organizing. The first pass. What it is not doing is the thinking. The judgment. The decisions about what matters and what does not.
That distinction matters for everything that follows.
The PMM job, phase by phase
Product marketing has phases. Every PMM knows them. Research, positioning, launch, enablement, measurement. The exact names vary by company. The work does not.
Let me walk through each phase and show you where AI helps, where it falls short, and where the smart PMMs are gaining an edge.
Research: from weeks to days
The research phase is where AI has the most obvious impact. Customer research, market research, competitive research. All of it gets faster.
Customer research. I used to spend two weeks scheduling and conducting twelve customer interviews, then another week analyzing the transcripts. Now I still do the interviews. You cannot skip talking to customers. But the analysis is different. I feed the transcripts to Claude and ask for common themes, unexpected quotes, and contradictions between what customers say they want and what they describe doing. The AI finds things I would have found eventually. It finds them in twenty minutes instead of three days.
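If you want to see what that workflow looks like in practice, here is a minimal sketch using the Anthropic Python SDK. The folder layout, model name, and prompt wording are my own illustrative choices, not a canonical recipe.

```python
# Minimal sketch: theme extraction across interview transcripts.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
# The transcripts/ folder, model name, and prompt wording are illustrative.
from pathlib import Path

import anthropic

client = anthropic.Anthropic()

transcripts = "\n\n".join(
    f"--- {p.name} ---\n{p.read_text()}"
    for p in sorted(Path("transcripts").glob("*.txt"))
)

prompt = f"""Below are customer interview transcripts.

{transcripts}

Report back:
1. The five most common themes, each with supporting quotes.
2. Unexpected quotes worth a second look.
3. Contradictions between what customers say they want and what they
   describe actually doing."""

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # use whichever model you prefer
    max_tokens=4000,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```

The output is a starting point for your own read of the transcripts, not a replacement for it.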
A PMM at a Series B company told me she runs her customer advisory board notes through AI after every session. She gets a summary of themes within an hour. She used to wait two weeks to compile the same analysis. That speed matters because the insights reach the product team while the conversation is still fresh.
Competitive research. This is where AI really shines. I can ask it to compare my product's positioning against three competitors, using only public information. It pulls from websites, documentation, blog posts, analyst reports, and review sites. The output is not perfect. It misses nuance. It does not understand political context. But it gives me an 80% complete competitive overview in an hour.
The BCG and Harvard study from 2023 found that consultants using AI were 25% faster and produced 40% higher quality output on tasks within AI's capability range. Competitive research is squarely in that range. It is structured, information-dense, and benefits from broad data gathering. Exactly what AI does well.
Market research. AI is good at synthesizing analyst reports, survey data, and industry trends. It is bad at predicting what will happen next. I use it for the synthesis and do the prediction myself. When I was preparing a market analysis for a developer tools company last year, AI compiled data from six different analyst reports in thirty minutes. I spent the next four hours figuring out what the data actually meant for our positioning.
The risk in this phase is trusting the AI's synthesis without checking its work. AI hallucinates. It confidently states things that are not true. I have caught it inventing statistics, misattributing quotes, and conflating two different companies. Every fact that matters needs a human to verify it.
Positioning: AI generates options, humans make choices
Positioning is the most important job a product marketer does. It is also the job where AI is simultaneously the most useful and the most dangerous.
AI is useful because it can generate positioning options fast. Give it your product information, your target customer profile, your competitive situation, and your key differentiators. It will produce five or six positioning directions in ten minutes. Some will be obvious. One or two will be surprising. That speed of discovery is valuable.
I built a Claude skill that generates positioning frameworks following the methodology in my book. It asks the right questions, structures the output correctly, and produces a solid first draft. I wrote about how to build these kinds of tools if you want to try it yourself.
AI is dangerous because positioning is about choices. What do you say? What do you not say? Which audience do you prioritize? Which competitor do you position against? These are judgment calls that require understanding things AI cannot access: your CEO's vision, your sales team's reality, your engineering team's capacity, your board's expectations.
I once worked with a startup that used AI to generate their positioning. The output was technically correct. It hit all the right frameworks. But it positioned them as a "developer platform for building AI applications" when their actual differentiation was their pricing model. The AI did not know that pricing was the thing that made developers choose them. Because the AI had never sat in a sales call and heard a prospect say "wait, you charge HOW much less?"
The right way to use AI for positioning: generate options, then argue with them. Push back on every claim. Ask "would our competitor say the same thing?" If yes, it is not positioning. It is table stakes. Use AI to draft, use your brain to decide.
Launch: speed with guardrails
Product launches are where PMMs earn their salary. Everything comes together at once: messaging, content, sales enablement, analyst briefings, customer communications, partner updates, internal alignment. It is the most stressful and the most rewarding part of the job.
AI helps with launch execution in three specific ways.
Content generation. A launch needs blog posts, landing pages, email sequences, social posts, sales decks, customer-facing documentation, and internal FAQs. That is a lot of content. AI can draft all of it. I have used Claude to generate a complete launch content package, from announcement blog post to sales battle card, in under a day. Without AI, that same package takes a week.
The catch is quality control. AI-generated launch content tends to be generic. It uses the same superlatives, the same structure, the same tone. Left unedited, your launch sounds like every other company's launch. The PMM's job is to take the draft and make it specific. Add the customer quote that nails the value prop. Remove the claim you cannot back up. Replace the generic benefit statement with the specific metric from your beta program.
Timeline management. I feed my launch plan into AI and ask it to identify dependencies, gaps, and risks. It catches things like "you have the analyst briefing scheduled before the messaging is finalized" or "your sales enablement training is the same day as your product release." These are simple logic checks a human can do. But humans forget to do them when they are juggling forty tasks at once.
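The simplest of those checks do not even need a model. Here is a toy sketch in Python of the kind of ordering check I mean, with made-up tasks, dates, and dependencies; in practice I paste the real plan into Claude and ask for the same analysis in prose.

```python
# Toy sketch: ordering checks on a launch plan.
# Task names, dates, and dependency pairs are all made up.
from datetime import date

plan = {
    "messaging finalized": date(2025, 3, 3),
    "analyst briefing": date(2025, 3, 1),
    "sales enablement training": date(2025, 3, 10),
    "product release": date(2025, 3, 10),
}

# Each pair reads: the first task must land before the second.
dependencies = [
    ("messaging finalized", "analyst briefing"),
    ("sales enablement training", "product release"),
]

for before, after in dependencies:
    if plan[before] >= plan[after]:
        print(f"Risk: '{after}' ({plan[after]}) is scheduled "
              f"on or before '{before}' ({plan[before]}).")
```

Both of the failure modes above show up as flags: the briefing lands before the messaging, and the training collides with the release.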
Sales enablement materials. This is where AI saves the most time during a launch. Battle cards, objection handling guides, competitive comparisons, demo scripts. All structured documents with known inputs. AI generates them fast. I wrote about AI-powered sales enablement in detail if you want the specifics.
But here is the thing about launches that AI cannot touch: the politics. Launches require alignment across engineering, product, design, sales, marketing, leadership, and sometimes legal. That alignment happens in meetings, in hallway conversations, in Slack threads where someone is upset that their feature did not make the cut. No AI handles that. A great PMM reads the room and adjusts. A great PMM knows when to push and when to back down. A great PMM gets the VP of Engineering to agree to the launch date by framing it in terms that matter to engineering, not marketing. That is human work.
Enablement: from creation to curation
Sales enablement used to be a creation problem. PMMs had to write everything: the pitch deck, the one-pager, the competitive guide, the FAQ, the objection handling doc. It took weeks.
Now it is a curation problem. AI can generate all of those materials. The question is whether they are good enough. Whether they reflect reality. Whether the sales team will actually use them.
I have found that the best approach is a partnership. AI generates the first draft. I review and edit. Then I send it to a sales rep I trust and ask one question: "Would you actually use this in a deal?"
If the answer is no, I ask why. Usually it is because the AI missed something specific. A common objection the AI does not know about. A competitor's pricing change from last week. A feature limitation that the docs do not mention but every customer asks about.
Those gaps are where the PMM adds value. The AI handles the 80% of enablement content that follows known patterns. The PMM handles the 20% that requires real-world knowledge.
AI is also changing how PMMs work with sales call data. Tools like Gong and Chorus record and transcribe sales calls. AI analyzes those transcripts and surfaces patterns: common objections, frequently asked questions, competitor mentions, feature requests. A PMM who reviews AI-generated call analysis every week knows more about what is happening in the field than a PMM who sits in on two calls a month.
When you write a go-to-market plan, the enablement section is no longer the bottleneck it used to be. The bottleneck has moved upstream to strategy and positioning, where it should have been all along.
Measurement: better questions, not just better data
Measurement is the phase where most PMMs are weakest. Not because they are bad at it. Because the tools have historically been bad at connecting marketing activity to business outcomes.
AI helps in two ways. First, it makes data analysis faster. Pull your campaign data, your pipeline data, your win/loss data, and your feature adoption data into one place. Ask AI to find correlations and anomalies. It will surface patterns that would take hours of spreadsheet work to find manually.
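Here is the kind of pass I mean, sketched by hand in pandas so you can see what the AI is replacing. The merged CSV and its column names are assumptions for illustration.

```python
# Sketch of a correlation-and-anomaly pass over merged quarterly data.
# Assumes a CSV with illustrative columns: quarter, campaign_spend,
# content_pieces, pipeline_created, win_rate.
import pandas as pd

df = pd.read_csv("quarterly_metrics.csv")

# Which activity metrics move together with the outcomes we care about?
metrics = ["campaign_spend", "content_pieces", "pipeline_created", "win_rate"]
print(df[metrics].corr().round(2))

# Flag quarters where win rate sits more than two standard deviations
# from its mean -- candidates for a closer human look.
mean, std = df["win_rate"].mean(), df["win_rate"].std()
print(df.loc[(df["win_rate"] - mean).abs() > 2 * std, ["quarter", "win_rate"]])
```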
Second, and more important, AI helps you ask better questions. I feed AI my quarterly results and ask it to generate hypotheses about what drove the changes. Why did win rates drop in Q3? The AI might suggest three or four possible explanations based on the data. I still have to investigate each one. But having a starting list of hypotheses saves time.
The limitation: AI is good at measuring what happened. It is bad at measuring what mattered. Did that analyst briefing influence the deal? Did the customer case study change the prospect's mind? Did the competitive positioning shift win the account? These are attribution questions that require human judgment. Sometimes intuition. Sometimes a phone call to the sales rep who closed the deal.
The Wharton research on AI-assisted decision making found something interesting. People using AI for data analysis made better decisions on average, but the best decision makers without AI still outperformed the average decision makers with AI. The tool amplifies whatever you bring to it. A PMM with strong analytical instincts gets more out of AI measurement tools than a PMM who relies on the AI to tell them what to think.
What AI cannot do
I want to be direct about this because I think the industry conversation skews too optimistic. AI is a tool. A good tool. But there are things it cannot do that are central to the PMM job.
AI cannot make judgment calls. Product marketing is a job of judgment. Which market should we enter? Which competitor should we position against? Which customer segment should we prioritize? Should we launch now or wait for the next feature? These decisions require understanding context that AI does not have: company strategy, market timing, team capacity, competitive dynamics, customer relationships.
I have seen PMMs try to outsource these decisions to AI. They feed it all the data and ask "what should we do?" The AI gives a reasonable-sounding answer. Sometimes the answer is even correct. But the PMM cannot explain why it is correct. They cannot defend it in a meeting. They cannot adjust it when new information emerges. Judgment is not just about reaching the right answer. It is about understanding why it is right and knowing when it stops being right.
AI cannot read a room. Product marketing requires working with people who have different incentives. Engineering wants to ship features. Sales wants to close deals. The CEO wants to tell a big story. The CFO wants to see ROI. A great PMM finds the message that works for all of them. That requires empathy, political awareness, and the ability to adjust on the fly. AI cannot do any of those things.
I remember a launch where the head of sales was furious that we were positioning the product as an enterprise solution. "Our deals are all mid-market," he said. He was right about the current deals. But the CEO wanted to move upmarket. The PMM's job was to find the positioning that acknowledged the current reality and pointed toward the future. That required understanding both perspectives and building a bridge between them. No AI prompt generates that bridge.
AI cannot build relationships. So much of product marketing depends on trust. Trust with the product team so they tell you about features early. Trust with sales so they give you honest feedback. Trust with customers so they agree to be references. Trust with analysts so they take your briefing seriously. These relationships take months to build and are the infrastructure that makes everything else work.
AI cannot have taste. This is the hardest one to explain. Taste is knowing that a positioning statement is technically correct but feels wrong. It is knowing that a launch blog post hits all the right points but reads like it was written by a committee. It is reading a competitive battle card and knowing that the sales team will ignore it because it does not sound like how they actually talk. Taste is pattern recognition built on years of experience. It is the thing that separates a PMM who has done ten launches from a PMM who has done one launch ten times.
I wrote about what product marketing actually does and the common misconception that it is primarily a content creation role. AI reinforces that misconception. If you think PMM is about producing documents, then AI looks like it can replace PMMs. If you understand that PMM is about making judgment calls that connect product, market, and customer, then AI looks like what it is: a tool that makes good PMMs more productive.
A practical framework for AI-augmented PMM
I have been using AI in my product marketing work for over two years now. Here is the framework I have settled on. It is not complicated. The best frameworks never are.
Step 1: Know what you are good at and what AI is good at
Make two lists. List one: the things that require your judgment, your relationships, your taste. Positioning decisions. Stakeholder alignment. Customer empathy. Strategic bets.
List two: the things that are structured, repeatable, and information-dense. Competitive analysis. First drafts. Data synthesis. Content generation.
AI gets list two. You keep list one. If you try to give AI tasks from list one, you will get output that looks right and is wrong. If you keep doing tasks from list two yourself, you are wasting time you could spend on the work that actually matters.
Step 2: Build repeatable workflows
One-off prompts are a waste. If you do competitive analysis every quarter, build a workflow for it. Define the structure. Define the inputs. Define what good output looks like. Then turn that into something repeatable.
I built Claude Code skills for my most common tasks: positioning frameworks, competitive analysis, launch plans, content strategies. Each skill encodes my methodology so I get consistent output every time. The investment in building the workflow pays back on the second use.
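If you do not use Claude Code, you can approximate a skill with a plain prompt template. A hypothetical sketch: the product names, section list, and wording are all invented, but the point is that the structure gets defined once and reused every quarter.

```python
# Sketch: a repeatable competitive-analysis workflow as a prompt template.
# The section list and wording encode one methodology so the output
# structure is identical every time the workflow runs.
COMPETITIVE_ANALYSIS_TEMPLATE = """\
Compare {product} against {competitors} using only public information.

Structure the output exactly as:
1. Positioning summary, one paragraph per company.
2. Pricing model comparison.
3. Feature gaps in both directions.
4. Claims each company makes that the others could not make.
5. Open questions that need human verification.

Attach a source URL to every specific number or quote."""

def competitive_analysis_prompt(product: str, competitors: list[str]) -> str:
    """Render the quarterly competitive-analysis prompt."""
    return COMPETITIVE_ANALYSIS_TEMPLATE.format(
        product=product, competitors=", ".join(competitors)
    )

# Hypothetical usage:
print(competitive_analysis_prompt("Acme CI", ["BuildBot", "ShipFast"]))
```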
This is where most PMMs get stuck. They use AI as a chat tool. They type a prompt, get a response, type another prompt, get another response. That works for one-off questions. It does not work for the kind of structured, repeatable work that defines product marketing.
Step 3: Always be the last editor
This is the rule I never break. AI generates. I edit. Every positioning statement, every launch blog post, every battle card, every piece of content goes through my hands before it goes anywhere else.
Not because the AI output is bad. Often it is quite good. But because the editing is where I add the things AI cannot: the specific customer quote, the nuance about the competitive situation, the tone adjustment for a particular audience, the detail from a conversation I had with the sales team last week.
Editing is not a downgrade from writing. Editing AI output is a higher-value activity than writing from scratch, because you are applying your judgment and taste to a structured foundation instead of spending energy on structure.
Step 4: Verify everything
AI hallucinates. It invents statistics. It misquotes people. It conflates similar companies. It states opinions as facts. Every claim that matters needs to be verified by a human.
I have a simple rule: if it would be embarrassing to get wrong, check it. If it is going in a sales deck that a customer will see, check it. If it cites a specific number or study, find the original source.
This sounds obvious. But when AI generates a beautiful, well-structured document with confident-sounding citations, it is easy to trust it. Do not trust it. Verify it.
Step 5: Measure the impact
Track what AI actually changes about your work. Not in vague terms like "I am more productive." In specific terms.
How many hours did competitive analysis take before AI? How many now? What is the quality difference in your positioning documents? Are you producing more content? Is that content performing better? Are your sales enablement materials being used more?
The BCG study found that the quality improvement from AI was highest on tasks that were already structured and well-defined. That maps exactly to the PMM workflow. The structured parts get better and faster. The judgment parts stay the same. That is how it should be.
I track three metrics for my own AI usage: time saved per task, number of revision cycles (fewer is better, and it means the AI draft was closer to final), and quality scores from stakeholders who consume my work. After two years, the pattern is clear: AI saves me about 30% of my time on research and drafting. It saves me almost nothing on strategy and stakeholder alignment. That 30% lets me spend more time on the work that only I can do.
Step 6: Keep learning
The tools change every month. What AI could not do six months ago, it can do now. What it does poorly today, it will do well next quarter. Stay current. Experiment with new capabilities. But do not chase every new feature. The framework matters more than the tool.
I have changed my AI tools three times in two years. I have not changed my framework once. The tools are interchangeable. The methodology is not.
The widening gap
I said at the top that AI is making the gap between good PMMs and great PMMs wider. Here is why.
A good PMM produces solid work. Decent positioning. Competent launches. Adequate enablement materials. They follow the playbook. With AI, a good PMM produces the same solid work faster. They save time. They handle more products. They generate more content. The floor of their output rises.
A great PMM uses that saved time differently. They do not just produce more of the same. They use the extra hours to sit in more sales calls. To have longer conversations with customers. To think harder about positioning. To build deeper relationships with the product team. To develop the taste and judgment that separates adequate work from excellent work.
The great PMM's ceiling rises faster than the good PMM's floor. And over time, that gap becomes impossible to close.
I have seen this play out at three companies in the last year. The PMMs who treat AI as a tool for doing existing work faster are performing well. The PMMs who treat AI as a tool that frees them to do different, higher-value work are performing at a level I have never seen before. They are running competitive analysis on a weekly cadence instead of quarterly. They are producing enablement content within days of a competitive move. They are testing positioning variations faster than the product team ships features.
That speed and quality combination is new. It was not possible before AI. And the PMMs who figure it out first build an advantage that compounds.
I am not worried about AI replacing product marketers. I am watching AI make it very clear how much of each PMM's job was judgment and how much was assembly.
If you are preparing for PMM interviews, know this: the bar is rising. Companies want PMMs who can use AI as an accelerator and still bring the judgment, the taste, and the relationships that make product marketing work.
Put in the work. Build the judgment. Develop the taste. Then use AI to make all of it faster.
That is the job now.

Developer marketing expert with 30+ years of experience at Sun Microsystems, Microsoft, AWS, Meta, Twitter, and Supabase. Author of Picks and Shovels, the Amazon #1 bestseller on developer marketing.
