How to use AI for competitive analysis in developer tools
AI does not replace competitive analysis. It makes you faster at the tedious parts so you can spend more time on judgment calls. Here is my exact workflow.

AI-powered competitive analysis for developer tools cuts a three-week project to three days by handling the reading, structuring, and first drafts so you can spend your time on the judgment calls that actually matter.
Competitive analysis is the most tedious job in product marketing. I say this as someone who has done it for thirty years, and a lurk through r/productmarketing suggests plenty of PMMs agree. The work itself is straightforward. Track what your competitors say. Track what they ship. Track how they price. Build battlecards. Update them when things change.
The problem is that "straightforward" and "fast" are not the same thing. A thorough competitive analysis used to take me two to three weeks. Reading every page of a competitor's website. Downloading their docs. Signing up for free trials. Sitting through their webinars. Building spreadsheets. Writing battlecards that nobody would read.
AI cuts that timeline to two or three days. Not because it does the thinking for you. Because it does the reading for you. And the structuring. And the first-draft writing. That frees you up for the part that actually matters: figuring out what the competitive landscape means for your product and your customers.
Here is my exact workflow. I use it today. It works.
Why was competitive analysis so slow before AI?
Competitive analysis was slow because it required judgment that no tool can shortcut. You need to understand why a competitor positioned themselves a certain way, not just what they said. You need to read between the lines of a pricing page to understand their go-to-market motion. You need to talk to customers who evaluated your competitor and chose them. Or chose you.
None of that judgment work goes away with AI. The tedious parts do. The hours spent copying text from websites into spreadsheets. The time spent formatting battlecards. The days spent trying to get a coherent picture of a market with fifteen players.
I wrote about the role of AI in my marketing workflow in My AI Flow State. Competitive analysis is where the speed gains are most dramatic.
How do you map the competitive landscape with AI?
I start every competitive analysis with my LLM (sometimes Perplexity, sometimes Claude).
The first question is always broader than you think it should be. I want to find competitors I don't know about, not just confirm the ones I do.
Here is a prompt I actually use:
Who are the main alternatives to [your product] for [your target use case]? Include direct competitors, adjacent tools that developers use as substitutes, and open-source projects. For each, tell me their positioning, pricing model, and primary audience. Cite your sources.
This returns a landscape, not a list. Perplexity will find direct competitors, sure. But it also surfaces the indirect ones. The open-source project that solves 60% of the problem for free. The internal tooling pattern that your prospects build instead of buying. The adjacent product that is expanding into your category.
In Picks and Shovels, I describe four types of competitors: direct, indirect, status quo, and build-it-yourself. Most people only track the direct ones. The build-it-yourself competitor, the team that decides to write their own solution, is often the biggest threat for developer tools. Perplexity is good at surfacing all four types because it searches broadly.
I run three to five variations of this query, each from a different angle. "Best tools for [use case]." "Alternatives to [competitor name]." "[Use case] open source." This gives me overlapping results that I can cross-reference.
The output is a rough landscape. Maybe fifteen to twenty names. I spend thirty minutes pruning: removing dead projects, confirming that each company is still active, and sorting them into tiers. Tier 1 competitors get full battlecards. Tier 2 gets a brief profile. Tier 3 gets a mention in a watchlist.
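The cross-referencing step is mechanical enough to script. Here is a minimal sketch of the counting heuristic I use in my head: names that show up in every query are Tier 1 candidates, one-off mentions go straight to the watchlist. The tool names and the three-query structure are placeholders, and human pruning still decides the final tiers.

```python
from collections import Counter

# Results from three landscape queries run from different angles
# (hypothetical names; in practice, paste in what the LLM returned).
query_results = [
    ["ToolA", "ToolB", "ToolC", "OpenSourceX"],  # "Best tools for [use case]"
    ["ToolA", "ToolC", "ToolD"],                 # "Alternatives to [competitor]"
    ["OpenSourceX", "ToolA", "ToolE"],           # "[Use case] open source"
]

# Count how many independent queries surfaced each name.
mentions = Counter(name for results in query_results for name in results)

# Crude first-pass tiering: everywhere -> Tier 1 candidate,
# more than once -> Tier 2, a single mention -> watchlist.
tiers = {"tier1": [], "tier2": [], "tier3": []}
for name, count in mentions.most_common():
    if count == len(query_results):
        tiers["tier1"].append(name)
    elif count > 1:
        tiers["tier2"].append(name)
    else:
        tiers["tier3"].append(name)

print(tiers)
```

A tool surfaced by every query angle is almost always a real direct competitor; a tool surfaced once is often dead, niche, or adjacent, which is exactly what the thirty minutes of pruning is for.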
How do you analyze competitor positioning with AI?
Once I have my landscape, I go back to my LLM for deeper analysis.
For each Tier 1 competitor, I feed Claude their homepage copy, their pricing page, their documentation landing page, and their "about" page. Then I ask:
Based on this content, what is this company's positioning? Who are they targeting? What is their primary differentiator? How do they describe their product category? What is their pricing model and what does it signal about their go-to-market motion?
Claude is good at this. It will identify whether a competitor is positioning on price, performance, developer experience, or ecosystem. It will notice when a company says "enterprise-ready" on their pricing page but targets individual developers in their docs. Those contradictions are gold.
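The feeding step can also be scripted if you are doing this for many competitors. A sketch of assembling the page copy and the questions into one prompt; the labels, snippets, and function name are my own, and how you capture the page text (manual copy-paste works fine) is up to you.

```python
POSITIONING_QUESTIONS = (
    "Based on this content, what is this company's positioning? "
    "Who are they targeting? What is their primary differentiator? "
    "How do they describe their product category? What is their pricing "
    "model and what does it signal about their go-to-market motion?"
)

def build_positioning_prompt(pages: dict[str, str]) -> str:
    """Combine labeled page copy and the analysis questions into one prompt.

    `pages` maps a label ("homepage", "pricing", ...) to that page's text.
    """
    sections = [f"--- {label} ---\n{text.strip()}" for label, text in pages.items()]
    return "\n\n".join(sections) + "\n\n" + POSITIONING_QUESTIONS

# Hypothetical snippets standing in for real page copy.
prompt = build_positioning_prompt({
    "homepage": "Ship faster with the enterprise-ready platform.",
    "pricing": "Free for individuals. Contact sales for teams.",
})
print(prompt)
```

Paste the result into Claude, or send it through whatever API access you have; labeling each page keeps Claude able to spot contradictions between, say, the pricing page and the docs.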
I also ask Claude to compare the positioning of two or three competitors side by side:
Compare the positioning of [Competitor A] and [Competitor B]. Where do they overlap? Where do they diverge? If a developer were choosing between them, what would be the deciding factors?
This comparison is where the analysis starts to get useful. It reveals clusters. Maybe three competitors are all positioning on speed, and none of them are positioning on developer experience. That is an opening. Or maybe every competitor talks about "AI-native" capabilities, which tells you that AI is table stakes, not a differentiator. I wrote about this exact dynamic in Does positioning still matter?: when everyone sounds the same, you need sharper differentiation, not louder claims.
How do you build AI-assisted battlecards?
AI-assisted battlecards work best when structured around sales scenarios, not feature comparison tables. Or they don't work at all, depending on how you build them. I have seen too many battlecards that are just feature comparison tables. Sales reps ignore those. They need to know what to say in a conversation, not what to put in a spreadsheet.
I use Claude to structure battlecards around scenarios, not features. Here is the prompt:
Create a battlecard for [your product] vs [competitor]. Structure it around these sections: (1) Their positioning in one sentence, (2) Our counter-positioning in one sentence, (3) Three common objections a prospect raises when considering them, with our response to each, (4) Technical trade-offs where we win and where they win, (5) When to recommend them over us, (6) Proof points: customer quotes, benchmarks, or case studies that support our position.
That last section, "when to recommend them over us," is the one most marketers skip. It is the one that makes your battlecard credible. If your sales rep tells a prospect "our competitor is better for X use case," that rep earns trust. The competitive analysis Agent Skill from Picks and Shovels encodes this approach. It forces Claude to include honest assessments, not just cheerleading.
One warning: Claude will try to be diplomatic. It will write things like "both products have strengths." Push back. Ask for specific, concrete trade-offs. "In what specific scenario would a developer choose the competitor? Be precise."
I also ask Claude to write each objection response in conversational language. Not marketing language. The rep needs to be able to say it out loud in a meeting without sounding like they are reading from a script.
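If you keep battlecards in a repo rather than a doc, the scenario-first structure can be pinned down as a schema so every card has the same six sections. A sketch; the class and field names are my own invention, not part of any tool.

```python
from dataclasses import dataclass, field

@dataclass
class Objection:
    prospect_says: str  # the objection, in the prospect's words
    our_response: str   # conversational, sayable out loud in a meeting

@dataclass
class Battlecard:
    competitor: str
    their_positioning: str        # one sentence
    our_counter_positioning: str  # one sentence
    objections: list[Objection] = field(default_factory=list)  # aim for three
    tradeoffs_we_win: list[str] = field(default_factory=list)
    tradeoffs_they_win: list[str] = field(default_factory=list)
    when_to_recommend_them: str = ""  # the credibility section; never leave empty
    proof_points: list[str] = field(default_factory=list)

card = Battlecard(
    competitor="ExampleCo",
    their_positioning="Fastest managed option for small teams.",
    our_counter_positioning="More control once you outgrow the defaults.",
    when_to_recommend_them="Prototypes where setup time matters more than flexibility.",
)
```

A fixed schema also makes the quarterly update easier: you can diff a regenerated card against the old one section by section instead of rereading prose.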
How do you monitor competitor positioning changes over time?
Competitive analysis is not a one-time project. Markets move. Competitors ship features, change pricing, pivot messaging. Your battlecards decay the moment you finish them.
I wrote my own agent system to run periodic tasks, including competitive analysis. If you are looking for a rough equivalent, try Claude Cowork: give it a prompt like this and set it to run every ninety days or so.
What has [competitor name] announced or changed in the last 90 days? Include product launches, pricing changes, funding rounds, and any notable blog posts or positioning changes. Cite sources.
Always feed it what you already have, so it revises your battlecard instead of starting from scratch.
Here is our current battlecard for [competitor]. Here is what they have announced in the last 90 days. What needs to change in the battlecard? Are there new objections we should prepare for? Has their positioning shifted?
This quarterly update takes about two hours per competitor. Without AI, it took two days. That is the difference between actually doing the update and letting your battlecards go stale for a year.
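Even without an agent system, a few lines of stdlib Python can tell you which cards are due. The ninety-day cadence comes from the workflow above; the competitor names, dates, and function name are hypothetical.

```python
from datetime import date, timedelta

REVIEW_EVERY = timedelta(days=90)

# Last time each Tier 1 battlecard was refreshed (hypothetical dates).
last_reviewed = {
    "CompetitorA": date(2024, 1, 10),
    "CompetitorB": date(2024, 3, 2),
}

def stale_cards(cards: dict[str, date], today: date) -> list[str]:
    """Return competitors whose battlecard is past the review window."""
    return [name for name, reviewed in cards.items()
            if today - reviewed > REVIEW_EVERY]

print(stale_cards(last_reviewed, today=date(2024, 5, 1)))  # → ['CompetitorA']
```

Run it from a weekly cron job or a CI schedule and the "battlecards go stale for a year" failure mode mostly disappears.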
How do you turn competitive analysis into recommendations?
The competitive landscape is not useful until you turn it into decisions. I use Claude to synthesize everything into a brief for leadership.
Based on this competitive analysis, write a two-page brief for our leadership team. Include: (1) The top three competitive threats and why, (2) Positioning gaps we should address, (3) Product capabilities we are missing that competitors have, (4) Specific recommendations for product, marketing, and sales.
The key word is "specific." Not "we should improve our positioning." That is useless. Instead: "Competitor X is winning mid-market deals because they offer a self-serve onboarding flow. We require a sales call. Recommendation: build a self-serve path for teams under 50 developers."
This is where AI falls short, and where your judgment matters most. Claude can synthesize data. It cannot tell you which competitive threat matters most for your business. It does not know your roadmap, your team's capacity, or your board's priorities. The synthesis step is where you earn your salary as a product marketer.
What does AI get wrong in competitive analysis?
I would be dishonest if I told you AI handles all of this without problems. It does not. Here is what goes wrong regularly.
AI hallucinates features. Claude will often confidently state that a competitor offers a feature they have never built. It pulls from outdated blog posts, misreads documentation, or simply makes things up. Every factual claim about a competitor must be verified against their current website and docs. Every single one.
AI misses positioning nuance. A competitor's pricing page might say "starting at $99/month," but the real story is that their enterprise tier requires a $50,000 annual commitment. AI reads the surface. You need to read the subtext.
AI cannot judge market sentiment. You can ask Perplexity what developers think about a competitor, and it will find some Reddit threads and blog posts. But it cannot tell you whether the sentiment is shifting. It cannot tell you that a competitor's community is quietly frustrated with a recent API change. That requires human observation.
AI does not know your internal context. It does not know that your engineering team tried to build the feature your competitor just shipped and decided it was technically infeasible. It does not know that your CEO has a personal relationship with a competitor's founder. It does not know that your biggest customer is also evaluating the competitor.
Use AI for speed. Use your brain for judgment. That division of labor is what makes the workflow work.
Which AI prompts work for competitive analysis?
I have refined these prompts over dozens of competitive analysis cycles. Use them as starting points and adjust for your market.
Landscape mapping:
Who are all the alternatives to [product] for [use case]? Include direct competitors, open-source projects, and adjacent tools. For each, provide their one-line positioning, pricing model, and target audience.
Positioning extraction:
Read the following website content from [competitor]. Extract their positioning statement, primary differentiator, target audience, and go-to-market motion. Note any contradictions between their marketing copy and their documentation.
Battlecard generation:
Create a sales battlecard for [your product] vs [competitor]. Include their positioning, our counter-positioning, three objections with responses, honest trade-offs, and guidance on when they are the better choice.
Quarterly update:
Compare this battlecard from [date] with the following recent announcements from [competitor]. What has changed? What new objections should sales prepare for?
Leadership brief:
Synthesize these five battlecards into a two-page competitive brief. Identify the top three threats, our positioning gaps, and specific recommendations for product, marketing, and sales.
How does this connect to the Picks and Shovels Agent Skills?
If you have installed the Picks and Shovels Agent Skills, the competitive analysis skill encodes this entire workflow. It tells Claude to identify all four competitor types, structure battlecards around scenarios, include honest assessments, and use consequence framing for each competitive alternative.
The skill also connects to the positioning skill. Strong competitive analysis feeds directly into positioning work. If you discover that three competitors are all claiming the same differentiator, that insight should sharpen your own position. The skills are designed to work together. Run the competitive analysis, then feed the output into the positioning skill to pressure-test your own framework.
For the full methodology behind these skills and the competitive analysis frameworks they encode, Picks and Shovels covers it in depth. The skills give you the workflow. The book gives you the thinking behind it.
Visit the AI Marketing Hub for more prompts, workflows, and tools that accelerate marketing work for developer products.

Developer marketing expert with 30+ years of experience at Sun Microsystems, Microsoft, AWS, Meta, Twitter, and Supabase. Author of Picks and Shovels, the Amazon #1 bestseller on developer marketing.

Want the complete playbook?
Picks and Shovels is the definitive guide to developer marketing. Amazon #1 bestseller with practical strategies from 30 years of marketing to developers.