Does positioning still matter?
AI agents can read your docs, evaluate your capabilities, and match you to customer needs without a positioning statement. So does positioning still matter? The answer changed my thinking.

Every day, millions of people ask ChatGPT, Claude, and Gemini to recommend products and tools. They describe what they need, their constraints, and their budget. Within seconds, they get three or four options with clear trade-offs for each.
None of those AI models ask for a positioning statement. They don't need one. They read documentation, evaluate capabilities, cross-reference community sentiment, and match options to the specific use case. No one has to tell the AI who each product is for. It figures that out on its own based on the user's question and the user's profile.
I've spent thirty years driving positioning and messaging at some of the largest and most consequential companies in tech and at some of the most successful startups in our industry. I wrote the definitive book on technical marketing with frameworks and guidance on landing segmentation, positioning, and messaging.
If AI agents can do the cognitive work that positioning was designed to shortcut, the matching of customer need to product capability, then what am I actually doing? Should I turn out the lights and go eat Cheetos on the couch?
The case for irrelevance
Let's start with the strongest version of the argument against positioning.
AI agents are compressing the entire marketing funnel. McKinsey projects that "agentic commerce", where AI agents handle the full discovery-to-decision process on behalf of customers, could reach $5 trillion in global sales by 2030. Bain estimates AI agents could account for 15-25% of US e-commerce over the same period. Gartner predicts that by 2028, 90% of B2B buying will be AI-intermediated, pushing over $15 trillion in spend through AI agent exchanges. When an agent does the shopping, the traditional positioning touchpoints shrink or vanish. Your website hero section. Your competitive comparison page. Your analyst briefing. All less relevant.
Agents don't need your carefully crafted positioning framework. They read your documentation. They evaluate your API surface. They test your error handling. They cross-reference third-party reviews and community discussions. They do the cognitive work that positioning was designed to shortcut for busy humans. As I wrote in Your docs are for AI now, documentation page views are dropping 30-40% while API adoption metrics are rising 20-30%. The AI is doing the reading humans used to do.
And then there's AI Engine Optimization (AEO). If you simply describe your capabilities well and make your content AI-friendly, won't the LLM figure out who you're for? Businesses optimized for AI systems are achieving 25-40% higher visibility than competitors relying on conventional SEO alone. Maybe good AEO is all you need.
We're still in a transition. Many buyers still visit vendor sites, attend webinars, and consult human references. But the direction is clear, and the shift is accelerating. I found this argument compelling at first. In fact, the first draft of this post leaned in exactly that direction. And then I started researching.
Why the experts say it matters more
April Dunford, who literally wrote the book on positioning with Obviously Awesome, argued on LaunchPod earlier this year that positioning matters more in the AI age, not less. Her reasoning: when every company can claim "we use AI," technology claims become commoditized. The more similar everyone sounds, the more you need a sharp, differentiated position. She has a single question she uses to expose whether a company's AI positioning is genuine or hollow, and most companies fail it:
"Can your competitor say the exact same thing?"
If the answer is yes, you don't have positioning. You have table stakes.
She's right. AI creates sameness. When everyone feeds similar prompts into similar tools, the output converges. Without clear positioning, everything sounds the same.
The essence of positioning is differentiation.
AI agents also evaluate more than specifications. When an agent recommends a product, it considers community health, documentation quality, sentiment across reviews, and how the product is discussed in forums and social media. Your position creates the reputation that AI agents detect and relay to users.
Dunford's argument is strong. But there's data that complicates it.
Elena Verna, who helped grow Lovable to $200M in ARR within a year, says 60-70% of traditional growth tactics no longer apply in AI companies. Her team re-finds product-market fit every three months. If the market shifts that fast, a positioning statement you wrote last quarter might already be wrong.
And Rand Fishkin's research at SparkToro found that there is less than a one-in-one-hundred chance that an AI will return the same list of brand recommendations when you run the same prompt twice. Your "position" in AI-mediated discovery looks less like a fixed point and more like a probability distribution.
So positioning matters. But the rules around it are changing fast.
What Ariel laundry detergent teaches us about AI
INSEAD professor David Dubois and Jellyfish developed a framework they call "Share of Model," measuring how often, prominently, and favorably brands appear in AI-generated responses. Their research fascinated me.
Ariel, the laundry detergent, has a 24% recommendation share on Meta's Llama model. On Google's Gemini, it has less than 1%.
Same brand. Same product. Same positioning statement, presumably written by a team at Procter & Gamble and approved through seventeen layers of review. Completely different visibility depending on which AI model a customer happens to be using.
Every model was trained on different data, weights different signals, and interprets brand presence differently. Ariel's positioning team did nothing wrong; they just discovered that their position doesn't translate uniformly across models.
The Dubois research categorizes brands into archetypes. "Cyborgs" like Tesla and BMW are strong with both humans and AI. "Emergent" brands have low visibility with both, risking digital irrelevance. But the most interesting category is brands that perform well with humans and poorly with AI, or vice versa. These brands have a gap they don't even know about.
Unlike search engines, where you could rank on page two and still get some traffic, LLMs have no page two. If a model doesn't include you in its response, you simply don't exist for that query, that user, that moment. And the next time someone asks the same question, the response might be different.
A word of caution: Share of Model is a fascinating framework, but Fishkin's own research shows it is not yet a reliable KPI. AI outputs are too variable for precise measurement. Treat it as a directional signal, not a scorecard. The goal is to understand where your messaging is landing and where it has gaps, not to game a metric.
My first instinct was that positioning itself needed to multiply. Different positions for different audiences, different channels, different AI models. I was ready to coin a term for it.
Then I realized I was confusing two things.
Positioning versus messaging
I draw a hard line between positioning and messaging, two concepts that people confuse all the time. Positioning is strategic. It answers: who are you for, what do you do better than anyone else, and why does that matter? Messaging is how you express that position to specific audiences in specific contexts.
Positioning is singular. Messaging multiplies.
Consider a time-series database. The positioning might be: "The fastest time-series database for IoT workloads." Clear and sharp.
Now watch what happens at the messaging layer.
A developer building a monitoring dashboard needs to hear about query speed and visualization integrations. A platform team evaluating infrastructure needs to hear about reliability, clustering, and the ecosystem of tools around it. An AI agent recommending tools for a side project needs to find documentation about ease of getting started, free tier availability, and quick setup guides. A procurement team needs to see compliance certifications and vendor stability.
Same position. Four different messages that actually land. The position didn't change. The way it was expressed changed based on who was listening and what they cared about.
This trips up nearly everyone who works on positioning. Databases are the classic messaging conundrum: a single database can count several distinct user segments as customers. It's still the same database, fast and dependable, but it can be different things to different people.
This is where solution pages earn their keep. A /solution/oil-and-gas page and a /solution/fleet-management page for that same database express the same core position with different emphasis, terminology, and proof points. They are messaging made concrete, each one shaped for a specific problem domain.
LLMs respond to this structure. According to the Omnius AI Search Industry Report, 82.5% of AI citations link to topic-specific pages rather than homepages. When someone asks "best monitoring platform for oil and gas," the LLM looks for content that closely matches that query. A solution page with industry-specific language will match far better than a generic product page. Practitioner David Hunt documented this directly: after rebuilding B2B solution pages into narrow topical hubs, each with a problem statement, a "when to use this" section, and a comparison block, those pages started getting cited in LLM answers for "tools for X in Y industry" queries.
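As a sketch, a solution page in the style Hunt describes might be structured like this. The product, industry, and column entries are hypothetical placeholders, reusing the time-series database example from earlier:

```
# TimeFlux DB for Fleet Management

**Problem:** Fleet operators ingest millions of GPS and sensor
readings per minute and need sub-second queries for live maps.

## When to use this
- You track thousands of vehicles with second-level telemetry
- Dashboards must reflect vehicle state within seconds
- You retain raw telemetry long-term for compliance

## When not to use this
- Your workload is mostly transactional (orders, billing)

## How it compares
| Need                | TimeFlux DB | General-purpose SQL DB |
|---------------------|-------------|------------------------|
| High-rate ingest    | Built-in    | Requires tuning        |
| Time-window queries | Native      | Manual indexing        |
```

The point isn't the template itself; it's that every element (problem statement, "when to use this," comparison) expresses the same core position in the vocabulary of one specific buyer.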
The Ariel problem isn't a positioning failure either. Ariel's position is fine. What happened is that the messaging layer, the content and context that AI models trained on, was rich in some models' training data and absent from others'. Gemini didn't get the message. The position was there all along.
The stack builds from the bottom up.
- Segmentation is the foundation: who are you for?
- Positioning sits on top of it: what are you to those people, and why are you different?
- Messaging sits on top of positioning: how do you express it to each audience?
- And the implementation layer sits on top of messaging: your solution pages, your documentation, your llms.txt, your community content.
Each layer depends on the one below it. Messaging without positioning is noise. Positioning without segmentation is guessing. From there, the chain keeps going: implementation shapes customer experience, customer experience shapes what people say about you, and what people say is what AI learns from.
In Picks and Shovels, I present a positioning framework built on the Rule of Three: one statement, three supporting points, grounded in Aristotle's ethos, pathos, and logos. That framework holds. What changes in the AI age is the sheer number of messaging variants you need to build on top of it. One for the developer evaluating your free tier at midnight. One for the CTO comparing you against three competitors in a spreadsheet. One structured for the AI agent parsing your documentation on behalf of a user who typed "best database for real-time analytics" into Claude.
I've argued before that understanding customer behavior matters more than demographics. In How to identify your ideal customer profile, I made the case that if your customers have fundamentally different behaviors and use cases, they need different messages that speak to those behaviors. The AI age makes that even more true, because the AI itself is now an audience with its own behaviors and its own way of parsing what you say.
What this means for your marketing
Sharpen your position. Then build a messaging library on top of it. One clear statement. Then variants for each segment, use case, and channel. Your website, your docs, and your llms.txt should all say the same thing with different emphasis. If they contradict each other, AI will surface the contradiction.
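To make "say the same thing with different emphasis" concrete, here is a minimal llms.txt sketch following the llmstxt.org convention (an H1 title, a blockquote summary, then H2 sections of annotated links). The product name and example.com URLs are hypothetical; note that every one-line description carries the same position as the prose around it:

```
# TimeFlux DB

> The fastest time-series database for IoT workloads: millisecond
> queries over billions of sensor readings, with a free tier for
> side projects.

## Docs

- [Quickstart](https://example.com/docs/quickstart): A local instance running in five minutes
- [Architecture](https://example.com/docs/architecture): Why ingest and query paths are separated

## Solutions

- [Fleet management](https://example.com/solutions/fleet-management): Live vehicle telemetry at scale
- [Oil and gas](https://example.com/solutions/oil-and-gas): Sensor data from remote field equipment
```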
Optimize for AI interpretation, not just visibility. AEO gets you found. Messaging determines what the AI says about you once it finds you. If your docs only describe capabilities, AI agents will describe you generically. If your docs include scenarios ("if you're building a real-time dashboard, here's why we built it this way"), agents can match you to specific needs. I wrote a practical guide on making your content AI-friendly that covers this in detail.
Test your messaging across models. Run your product name through Claude, ChatGPT, Gemini, and Perplexity every month. Ask: "What is [your product] best for?" The answers will be different. If a model gets your story wrong, you have a messaging gap, not a positioning problem.
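One way to make that monthly check repeatable is a small script that sends the same question to each model and scores how much of your positioning language comes back. This is a sketch under stated assumptions, not a finished tool: the product name, the phrase list, and the stub "models" are all hypothetical, and you would wire in real API clients where the stubs are.

```python
# Sketch of a monthly cross-model messaging audit.
# Hypothetical names throughout: PHRASES, the product name, and the
# stub callables are placeholders; swap in real API calls yourself.

PROMPT = "What is {product} best for? Answer in one paragraph."

# Phrases from your positioning statement that every model should
# echo back. Example phrases for a time-series database.
PHRASES = ["time-series", "iot", "fast"]


def coverage(answer: str, phrases=PHRASES) -> float:
    """Fraction of positioning phrases that appear in the answer."""
    text = answer.lower()
    hits = sum(1 for phrase in phrases if phrase in text)
    return hits / len(phrases)


def audit(product: str, models: dict) -> dict:
    """Run the same prompt through each model and score the answers.

    `models` maps a model name to a callable that takes a prompt
    string and returns the model's answer as a string.
    """
    prompt = PROMPT.format(product=product)
    return {name: coverage(ask(prompt)) for name, ask in models.items()}


if __name__ == "__main__":
    # Stub "models" so the sketch runs without API keys.
    stubs = {
        "model-a": lambda p: "A fast time-series database for IoT fleets.",
        "model-b": lambda p: "A general-purpose SQL database.",
    }
    for name, score in audit("TimeFlux DB", stubs).items():
        print(f"{name}: {score:.0%} of positioning phrases present")
```

A model scoring low here is exactly the messaging gap the tip describes: the position exists, but that model never absorbed the language that expresses it.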
Update messaging quarterly. If Elena Verna is right that AI companies re-find product-market fit every three months, your messaging can't be a document you wrote last year. Your position may stay stable. Your messaging won't.
Let your customers do the messaging. The best product descriptions come from how real users talk about you. Capture that language. Use it everywhere. AI models pick up on customer language because it shows up across forums, social media, and community discussions. Polished marketing copy lives on your website. Customer language lives everywhere else. Guess which one AI trains on.
Positioning is dead, long live positioning
Lazy messaging was always the problem. Not positioning. The idea that you could write one statement with three bullet points and walk away was always a shortcut. It worked when humans were the only audience. It falls apart when AI models with different training data are recommending your product in ways you can't see or control.
The discipline of positioning hasn't changed. Know who you're for. Know what you do better than anyone else. Know why it matters. If anything, sameness makes that sharper thinking more necessary.
What changes is the work you build on top of it. More messaging variants. More solution pages. More scenario-based descriptions in your docs. More attention to how your community talks about you. More structured data in your llms.txt. One position, many messages, many audiences. The AI is one of them now.

Developer marketing expert with 30+ years of experience at Sun Microsystems, Microsoft, AWS, Meta, Twitter, and Supabase. Author of Picks and Shovels, the Amazon #1 bestseller on developer marketing.
