Your docs are for AI now (and this changes everything about Developer Relations)
AI coding agents are now the biggest consumers of your docs. The change this forces in how we write documentation is a harbinger of the changes coming to Developer Relations strategy as a whole.

Developers no longer read your documentation the way you think they do. While you focus on readability, structure, and accuracy:
- Claude Desktop consumes your API reference through MCP connections
- Cursor parses your quickstart guide to generate integration code
- GitHub Copilot indexes your examples to answer questions without visiting your site
AI coding assistants have become the primary consumers of developer documentation. The shift happened over 18 months as tools embedded documentation access directly into workflows.
For documentation teams already fighting for credibility, this creates a measurement problem:
- Page view metrics are dropping
- Tutorial completion rates are plummeting
- Yet API adoption is rising
- Traditional engagement metrics no longer tell the full story
- Executives question whether documentation investments are working
If you're a technical writer or documentation engineer, you might be wondering whether AI will automate your role. The honest answer: AI changes what you do, not whether you're needed. Documentation quality becomes more critical, not less, because AI amplifies both excellence and mediocrity. Poor documentation produces hallucinations and failed integrations. Excellent documentation drives adoption without human intervention. The craft of creating comprehensive, accurate, well-structured documentation matters more than ever.
I write a lot about Developer Relations in Picks and Shovels: Marketing to Developers During the AI Gold Rush. The book covers metrics, community, product marketing, go-to-market strategy, management and team-building, and, of course, documentation, all for developer-focused companies. Documentation is a critical component of any developer-focused go-to-market strategy and, as such, I spend an entire chapter (and my entire career) obsessing over it.
For similar reasons, this piece starts with documentation because that's where AI disruption is most visible and measurable. Page views drop while adoption rises. AI assistants consume docs without generating traffic. The feedback loop tightens from days to seconds. But the implications ripple through all of Developer Relations. When AI handles mechanical documentation consumption, it forces every part of DevRel—advocacy, community, positioning, measurement—to evolve from tactical execution to strategic impact.
Whether you're a technical writer, documentation engineer, developer advocate, or DevRel leader, the rise of AI coding agents is changing not only how documentation is consumed, but what makes your entire function effective.
So, let’s start at docs and see how that first domino affects the broader Developer Relations engine.
The consumption shift nobody saw coming
When Anthropic released the Model Context Protocol in late 2024, AI assistants gained the ability to access external data sources directly. Documentation became machine-readable context rather than human-readable instruction.
Developers now work differently:
- Describe what they want to build
- AI assistant reads your docs and understands your API
- AI writes the integration code
- Developer reviews output and asks follow-up questions
- They never open your documentation site
Companies tracking both traditional metrics and API adoption see a clear pattern:
- Documentation page views down 30-40%
- Time on page down by half
- API sign-ups up 20-30%
- First successful call metrics up 20-30%
Your documentation works better than ever. Traditional engagement metrics just cannot prove it anymore. This is where documentation quality becomes the differentiator.
Why AI makes bad docs worse but good docs invaluable
AI assistants expose documentation problems that humans worked around.
When humans cannot find information, they search differently. They ask in Discord. They figure it out through trial and error. When AI cannot find information, it hallucinates. It gives up. It returns errors. Eventually, developers blame your product, not their AI assistant.
Incomplete documentation becomes a blocking issue. Poorly structured documentation confuses AI context windows. Humans can skim and skip around. AI assistants need clear hierarchy and logical progression.
This reality validates what documentation professionals have always known: shortcuts hurt users. The difference now is that AI makes documentation problems visible faster. When a developer gets stuck because of missing documentation, they might file a support ticket days later. When AI hallucinates because of incomplete documentation, it generates broken code immediately. The feedback loop tightens.
But when documentation is comprehensive, well-structured, and technically accurate, both humans and AI coding agents benefit.
While it's still early days in terms of measuring AI Engine Optimization results, there is enough evidence to believe that good documentation results in AI assistants recommending your product when it fits. Particularly when your documentation not only explains what your product is for, but how it can be used in real-world scenarios.
If AI agents can better understand your docs:
- They write better integration code than most humans
- They help developers succeed faster
- They become your best advocates
This creates a higher standard for documentation quality. The technical writing community has long advocated for completeness, accuracy, and clarity as core principles. AI consumption doesn't change these principles. It makes them non-negotiable. You cannot paper over gaps with helpful support teams when AI is the first line of documentation consumption. The documentation must be right.
Good documentation becomes more valuable, not less. AI assistants amplify quality. They multiply the impact of excellent documentation and expose the weakness of mediocre documentation. This is why technical writers and documentation engineers remain essential. AI cannot fix incomplete documentation. It cannot create structure from chaos. It cannot ensure accuracy or completeness. Those skills require human expertise, domain knowledge, and the craft you've spent years developing.
The dual audience documentation framework
Your documentation now serves two readers with fundamentally different consumption patterns. Human developers skim, jump around, and search for specific answers. AI assistants parse sequentially, pull sections into context windows, and need complete information to avoid hallucination.
The breakthrough insight is that good structure for AI assistants is also good structure for humans. You do not need separate documentation. You need documentation that serves both audiences through careful structure and completeness.
This aligns with principles the technical writing community has championed for years: semantic structure, logical organization, self-contained sections, complete examples, and exhaustive reference material. AI consumption doesn't require abandoning best practices. It requires applying them rigorously.
- Semantic structure serves both audiences. AI assistants parse heading hierarchies to understand concept relationships. When you use consistent H2 for major concepts, H3 for specific implementations, and H4 for edge cases, AI can map your product's logical structure. Humans benefit from the same consistency. Random heading levels confuse both audiences for the same reason: neither can build a mental model of how concepts relate.
- Scannability serves both audiences. AI assistants often pull single sections into context windows rather than consuming entire documentation sets. Each section needs to make sense in isolation with brief contextual reminders. Humans scanning documentation appreciate this too. When a code example includes a comment like "requires authentication token from step 1," both AI and humans can use the example without hunting for context.
- Front-loading critical information serves both audiences. AI assistants make recommendations based on information encountered early in documentation. If authentication requirements appear buried on page 5, AI might recommend your product to developers who cannot meet those requirements. Humans scanning for blockers check the same places. Both audiences need prerequisites, rate limits, and authentication requirements upfront.
- Complete code examples serve both audiences. AI assistants need full request and response examples to generate accurate code. Show the complete HTTP request with headers, the full response body, and error cases. Not just the interesting parts. Humans copy-paste complete examples more successfully than fragments they must adapt. Both audiences struggle with code snippets that assume context.
- Exhaustive error documentation serves both audiences. When AI assistants cannot find error documentation, they hallucinate explanations. List every possible error code with causes and solutions. AI needs this to help developers debug without inventing information. Humans debugging at 2am need the same comprehensive error reference to resolve issues without support tickets.
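The structural principles above can be checked mechanically. As a rough sketch (assuming Markdown sources and a simple "no skipped heading levels" rule), a small linter can flag hierarchy jumps that confuse both human readers and AI parsers:

```python
import re

def find_heading_jumps(markdown: str) -> list[str]:
    """Flag headings that skip a level (e.g. an H2 followed directly
    by an H4), which breaks the concept hierarchy that both humans
    and AI context windows rely on."""
    problems = []
    last_level = 0
    for line in markdown.splitlines():
        match = re.match(r"^(#{1,6})\s+(.*)", line)
        if not match:
            continue
        level = len(match.group(1))
        # A heading may go deeper by at most one level at a time.
        if last_level and level > last_level + 1:
            problems.append(
                f"'{match.group(2)}' jumps from H{last_level} to H{level}"
            )
        last_level = level
    return problems

doc = """
## Authentication
#### Token expiry edge cases
"""
print(find_heading_jumps(doc))  # flags the H2 -> H4 jump
```

A check like this fits naturally into a docs-as-code pipeline, alongside link checking and example testing.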
Measuring documentation impact in the AI era
Traditional engagement metrics tell an incomplete story. Page views, time on page, and tutorial completion measured human behavior. When AI agents consume your docs through MCPs and context windows, those metrics miss half the picture.
But documentation quality metrics still matter. Error rates, completeness, accuracy, and clarity remain essential. The technical writing community has long understood that measuring documentation effectiveness is challenging. You don't need to abandon quality standards. You need to supplement engagement metrics with effectiveness metrics that prove AI-mediated adoption is working.
You need to measure whether your documentation actually works for AI consumption. Here's what matters:
Does AI-generated code actually work?
This is the ultimate test. When Claude reads your docs and writes integration code, does it work on the first try? If yes, your documentation is AI-optimized. If the code has errors, your docs have gaps or ambiguities that confuse AI.
Track this by monitoring:
- Error patterns in production from new integrations
- Support tickets about "followed AI-generated code and it didn't work"
- Time from API key generation to first successful call (shorter means AI code worked)
- Failed authentication attempts that indicate documentation confusion
If you see developers getting errors on steps that should be straightforward, your documentation is not serving AI assistants well. They are reading your docs, writing code, and producing failures.
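One way to operationalize the third signal above is to compute time-to-first-successful-call from event logs. A minimal sketch, assuming hypothetical event records with `developer_id`, `event`, and `timestamp` fields (your analytics schema will differ):

```python
from datetime import datetime
from statistics import median

def time_to_first_success(events):
    """Median hours from API key generation to first successful API
    call, per developer. `events` is an iterable of dicts with
    'developer_id', 'event' ('key_created' or 'call_succeeded'),
    and an ISO-8601 'timestamp'. Field names are illustrative."""
    key_created, first_success = {}, {}
    for e in sorted(events, key=lambda e: e["timestamp"]):
        ts = datetime.fromisoformat(e["timestamp"])
        dev = e["developer_id"]
        if e["event"] == "key_created":
            key_created.setdefault(dev, ts)   # keep earliest key creation
        elif e["event"] == "call_succeeded":
            first_success.setdefault(dev, ts)  # keep earliest success
    deltas = [
        (first_success[d] - key_created[d]).total_seconds() / 3600
        for d in key_created if d in first_success
    ]
    return median(deltas) if deltas else None

events = [
    {"developer_id": "a", "event": "key_created", "timestamp": "2025-01-01T10:00:00"},
    {"developer_id": "a", "event": "call_succeeded", "timestamp": "2025-01-01T12:00:00"},
    {"developer_id": "b", "event": "key_created", "timestamp": "2025-01-02T09:00:00"},
    {"developer_id": "b", "event": "call_succeeded", "timestamp": "2025-01-02T10:30:00"},
]
print(time_to_first_success(events))  # median of 2.0 and 1.5 hours -> 1.75
```

Watching this number shrink as you fix documentation gaps is direct evidence that AI-generated integration code is working.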
Are developers adopting without visiting your docs?
Look for API adoption patterns that do not correlate with documentation page views. If API signups are rising while documentation traffic stays flat or drops, AI-mediated discovery is working. Developers are succeeding without ever landing on your site.
This feels counterintuitive. You spent years optimizing for documentation traffic. Now you want adoption to happen without it. But this is the new reality. AI agents read your docs and help developers succeed without generating page views.
Track the ratio of API adoptions to documentation visits. When this ratio increases, AI agents are doing more of the work.
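Computing that ratio over time is straightforward. A sketch with illustrative numbers (adoption rising while doc traffic falls, the pattern described above):

```python
def adoption_per_visit(adoptions_by_month, visits_by_month):
    """Ratio of new API adoptions to documentation page visits.
    A rising ratio suggests AI-mediated adoption: developers
    succeeding without generating page views."""
    return {
        month: round(adoptions_by_month[month] / visits_by_month[month], 4)
        for month in adoptions_by_month
    }

# Illustrative numbers only: adoption grows while doc traffic falls.
adoptions = {"2025-01": 120, "2025-02": 150, "2025-03": 180}
visits = {"2025-01": 40000, "2025-02": 35000, "2025-03": 30000}
print(adoption_per_visit(adoptions, visits))
```

Plotted monthly, a steadily climbing curve here is the counter-evidence to bring when someone points at falling page views.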
What types of support questions are you getting?
AI assistants should handle basic questions. If you are still getting tickets about "how do I authenticate?" or "what are the rate limits?", your docs do not work well with AI. These are questions AI should answer without human support involvement.
If your support tickets are increasingly about sophisticated edge cases and specific use case implementations, AI is handling the basics well. Your documentation serves AI assistants who are fielding the repetitive questions.
Analyze your support ticket themes:
- Increase in basic questions means AI documentation optimization is failing
- Increase in advanced questions means AI is successfully handling basics
- Questions about "AI told me to do X but it didn't work" means documentation gaps
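A rough first pass at this theme analysis can be automated with keyword matching. The buckets and keywords below are illustrative placeholders; a real pipeline would use your product's vocabulary (and likely better classification than substring matching):

```python
# Illustrative keyword buckets; tune these to your product's vocabulary.
THEMES = {
    "basic": ["how do i authenticate", "rate limit", "api key", "getting started"],
    "advanced": ["webhook retry", "idempotency", "race condition", "pagination edge"],
    "ai_gap": ["ai told me", "copilot suggested", "claude generated", "chatgpt said"],
}

def classify_tickets(tickets):
    """Count support tickets per theme. Rising 'basic' counts hint
    that AI is not absorbing routine questions; rising 'ai_gap'
    counts point at documentation gaps that make assistants guess."""
    counts = {theme: 0 for theme in THEMES}
    for text in tickets:
        lowered = text.lower()
        for theme, keywords in THEMES.items():
            if any(k in lowered for k in keywords):
                counts[theme] += 1
                break  # count each ticket once, first matching theme
    return counts

tickets = [
    "How do I authenticate with a service account?",
    "Webhook retry storm after deploy",
    "Claude generated this snippet and it 401s",
]
print(classify_tickets(tickets))
```

Even this crude bucketing, run monthly, shows whether the basic-to-advanced ratio is moving in the right direction.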
Can you track AI citations?
Some AI assistants show which sources they referenced. If you have relationships with Anthropic, OpenAI, or GitHub, ask whether your docs appear in their systems. This is the new domain authority.
When AI assistants cite your documentation frequently, they trust it. When they cite competitors instead, your documentation is losing to theirs in AI evaluation.
This metric is hard to access today. Most AI assistant providers do not share citation data. But this will change. Start building relationships now with AI assistant teams. Offer to be a test case for documentation optimization. Get early access to citation analytics.
The nature of Developer Relations is changing
As with documentation, AI assistants do not replace Developer Relations. They force Developer Relations to become what it should have always been: strategic, influential, and revenue-impacting.
The mechanical work is becoming more automated. AI answers basic questions. It generates starter code. It debugs common errors. This frees Developer Relations to drive strategic efforts that really matter.
Documentation structure
A core strategic goal for Developer Relations now is structuring documentation that explains not just what your product does and how to use it, but what can be built with it.
Traditional documentation follows a reference model. You document endpoints, parameters, response codes. You show authentication flows and rate limits. You provide a quickstart guide that demonstrates basic usage. This approach assumes developers already know they need your product and are learning how to implement it.
AI-mediated discovery works differently. Developers describe problems to AI assistants before they know which products solve those problems. The AI assistant searches its knowledge base and recommends solutions. If your documentation only explains what your API does, AI cannot connect your product to developer problems.
Scenario-focused documentation changes this. When you document what can be built with your API, you insert your product into AI-mediated discovery. A developer asks Claude "how do I build a webhook system that processes payment events?" If your documentation includes a section on building webhook processors, AI assistants can recommend your product even when the developer never heard of you.
This requires documenting capabilities and scenarios comprehensively:
- Use case scenarios. Document real problems developers solve with your product. Not "how to call the API" but "how to build a notification system" or "how to create a data sync pipeline." Each scenario should explain the problem, why your product fits, and complete implementation examples.
- Architecture patterns. Show how your product fits into larger systems. Developers rarely build standalone integrations. They build authentication systems, data pipelines, automation workflows. Document these patterns so AI assistants understand where your product belongs in developer architectures.
- Problem-solution mapping. Connect developer pain points directly to your capabilities. If your API handles rate limiting, document how developers use it to solve the problem of third-party API quotas. If your product does real-time sync, explain how teams use it to solve the problem of stale data across systems.
- Complete working examples. Show full implementations, not code snippets. AI assistants need context to generate working code. A three-line example helps developers who already understand the system. A complete example helps AI assistants recommend your product and write working integration code for developers who are just discovering you.
- Capability boundaries. Document what your product cannot do as clearly as what it can do. AI assistants need this to make accurate recommendations. When you hide limitations, AI recommends your product for wrong use cases. This creates frustrated developers and support burden.
This is not a replacement for technical reference documentation. You still need accurate endpoint documentation, parameter definitions, and error codes. Reference documentation is foundational. The rigor you apply to API reference docs, the accuracy you ensure in parameter descriptions, the completeness of error code documentation—this remains essential work that requires deep technical expertise.
But reference documentation serves developers who already chose your product. Scenario documentation serves AI assistants helping developers discover which product to choose.
Documentation teams should audit documentation through this lens. For each major capability, ask whether a developer describing that problem to an AI assistant would discover your product. If not, you are losing mindshare to competitors who document scenarios more comprehensively.
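That audit can start as a simple coverage check. A sketch, assuming you maintain a list of capabilities and a set of scenario-doc titles (all names here are hypothetical):

```python
def scenario_coverage(capabilities, scenario_docs):
    """For each capability, report whether any scenario doc mentions it.
    Capabilities with no scenario coverage are invisible to AI-mediated
    discovery, however thorough their reference documentation is."""
    uncovered = []
    for cap in capabilities:
        if not any(cap.lower() in title.lower() for title in scenario_docs):
            uncovered.append(cap)
    return uncovered

# Hypothetical product capabilities and existing scenario-doc titles.
capabilities = ["webhooks", "rate limiting", "real-time sync"]
scenario_docs = [
    "Build a webhooks processor for payment events",
    "Solve third-party quota problems with rate limiting",
]
print(scenario_coverage(capabilities, scenario_docs))  # real-time sync lacks a scenario
```

The uncovered list becomes the scenario-documentation backlog: each entry is a problem description an AI assistant currently cannot connect to your product.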
The pattern applies to all of Developer Relations
The documentation shift reveals a broader transformation in Developer Relations. AI forces technical writers to move from describing systems to describing what systems enable. This is the first domino. The same principle applies to everything DevRel does.
Traditional DevRel work is descriptive. You document features. You answer questions. You explain how things work. You show developers what your product does. This worked when developers needed guidance through complexity.
AI handles descriptive work now. It reads your docs, answers questions, explains features, and generates implementation code. When AI commoditizes the descriptive layer, what remains is aspirational work. Show developers what becomes possible. Inspire them to build things they did not know they could build. Position your product not as a tool that does X, but as infrastructure that enables Y.
Documentation shifts from reference to scenarios. Community shifts from answering questions to showing possibilities. Advocacy shifts from feature explanations to outcome demonstrations. Content shifts from how-to tutorials to what-you-can-build inspiration. The entire function moves from describing what exists to envisioning what could exist.
This is why the documentation transformation matters beyond documentation teams. It's the clearest example of a pattern reshaping all Developer Relations work. When AI handles the mechanical and descriptive, DevRel must become strategic and aspirational.
Community is the moat
When AI assistants can recommend any technically equivalent solution, community becomes the deciding factor. Despite our rapidly changing world (or, maybe, because of it!), human connection is becoming more important than ever. Developers choose products where they feel connected, where they trust the team behind the technology, where they see a path to success beyond what documentation provides.
As I describe in Picks and Shovels, building this moat requires three pillars:
- Long-term engagement. Not campaigns. Not quarterly initiatives. Sustained presence over years. Developers remember who was there when they needed help. They remember who invested in the community before asking for anything in return. AI assistants can answer questions instantly. They cannot build relationships that span years.
- Authentic interaction. Developers can smell marketing from miles away. AI assistants are marketing by definition. Human Developer Advocates who genuinely care about developer success, who admit when their product is not the right fit, and who celebrate community achievements without corporate motives build trust AI cannot replicate.
- Technical credibility. You cannot fake your way into developer trust. Developer Advocates must be technically competent enough to earn respect. They must ship code, understand architecture decisions, and engage at a level that proves they belong in the conversation. AI assistants can generate code. They cannot earn technical respect from developers who judge credibility based on years of experience and battle scars.
Developer Relations as influencer work
Developer Relations has always been an influencer's game. Now that reality becomes explicit.
Building relationships and influence is the game. Not content volume. Not engagement metrics. Not community size. Influence means developers trust your judgment about technology choices. Influence means when you recommend an approach, developers listen because you earned credibility over time.
Developer Advocacy, in particular, needs to think like an influencer now. Inspire developers with what is possible. Motivate them to try new approaches. Teach them not just how to use your product, but why it matters and why now.
An old Microsoft colleague from the 90s once described Developer Advocacy as "the tip of the spear" in marketing. Your job is to be the first person to describe the possibilities of using your product. Not just how, but why. Not just features, but outcomes. Not just today, but why now.
Content AI cannot create
AI assistants can explain how to authenticate to your API. They cannot explain why your product exists or why developers should care right now.
This strategic content is invaluable because it is inextricably linked with product positioning and go-to-market strategy:
- Why does this product exist? What problem were you solving that drove you to build this?
- Why should developers care? What becomes possible with this that was impossible before?
- Why now? What changed in the market that makes this the right time to adopt?
- Who is this for? Not just "developers" but which developers with which problems?
These answers require deep understanding of market context, competitive landscape, and strategic positioning. AI assistants can regurgitate existing content. They cannot synthesize strategic narrative that connects product capabilities to market timing to developer pain points.
Strategic position to impact revenue
When Developer Relations defines possibilities, explains why and why now, and builds influential community presence, it directly impacts revenue in ways that matter to executives.
- Defining the Ideal Customer Profile becomes partially a Developer Relations function. Developer Advocates engaging deeply with community know which developers succeed with your product and which struggle. They identify patterns in who gets value fast and who churns. This intelligence shapes ICP definition more accurately than any analytics dashboard.
- Product positioning sharpens through community feedback. Advocates hear how developers describe your product to peers. They hear which benefits resonate and which fall flat. They spot positioning gaps where your message does not match market understanding. This feedback loop improves positioning faster than quarterly market research.
- Go-to-market strategy validates through community adoption patterns. Developer Advocates see which channels drive quality developers. They see which partnerships create mutual value. They identify ecosystem opportunities that sales teams miss. This ground-level intelligence makes GTM strategy execution more effective.
- AI assistants provide technical support. Developer Advocates provide strategic market intelligence that shapes product direction, positioning, and go-to-market execution. One is a cost center. The other is a revenue function.
Making this shift from tactical to strategic requires concrete action. Here's how to start.
Key steps for Developer Advocates and Developer Relations teams
For Developer Advocates
Master AI-assisted workflows yourself. Install Claude Desktop with MCP, use Cursor to build something with your own API, and watch how AI reads your docs. Understanding this workflow firsthand gives you credibility when advocating for documentation changes. You cannot guide others through territory you have not explored yourself.
Shift from content volume to community influence. Stop measuring success by blog posts published or talks delivered. Start measuring influence by how often developers cite your guidance when making technology decisions. Build relationships that span years, not campaigns that span quarters. When AI can answer technical questions instantly, your value comes from trust developers place in your judgment.
Create strategic positioning content AI cannot replicate. Document why your product exists, why developers should care now, and who benefits most from using it. This content connects product capabilities to market timing to developer pain points in ways AI assistants cannot synthesize. It directly supports product positioning and go-to-market strategy.
Become the voice of the developer inside your company. Your deep community engagement reveals patterns in who succeeds with your product and who struggles. This intelligence shapes ICP definition, sharpens positioning, and validates GTM strategy. Make this feedback loop visible to executives so they understand your strategic impact.
For documentation teams and technical writers
Advocate for quality metrics alongside engagement metrics. The work you do creating accurate, complete, well-structured documentation directly impacts adoption. Page views don't prove your value. Documentation quality does. Work with product and engineering teams to track metrics that prove effectiveness: error rates, support ticket reduction, time-to-first-successful-call. These metrics connect your work to business outcomes.
Embrace docs-as-code practices for AI optimization. If you're already using Git, version control, and treating documentation like code, you're ahead. Apply the same rigor to AI optimization. Test documentation with AI assistants the way you test code. Use branching and reviews to validate that AI can parse and understand structural changes. Your engineering approach to documentation becomes your competitive advantage.
Document the system, not just the API. Your technical expertise lets you explain not just what each endpoint does, but how pieces fit together. AI assistants need this systems thinking to make accurate recommendations. When you document architecture patterns, integration workflows, and system boundaries, you create documentation that serves both humans and AI effectively.
Maintain the quality standards AI cannot replicate. AI can draft content. It cannot ensure technical accuracy. It cannot maintain consistency across thousands of pages. It cannot understand when product behavior changes invalidate documentation. These quality assurance tasks require domain expertise, attention to detail, and institutional knowledge. This is why technical writers remain essential.
For Developer Relations teams
Rebuild measurement systems around effectiveness, not just engagement. Stop reporting page views and tutorial completion rates. Start tracking time-to-first-successful-call, error patterns in new integrations, and support ticket themes. Build dashboards that connect documentation quality to adoption velocity and revenue. Make it impossible for executives to cut budget without seeing business impact.
Audit documentation for AI consumption. Pick your most critical integration guides and test them with AI assistants. Feed documentation to Claude and ask it to implement the integration. Identify where AI succeeds and where it hallucinates. Fix gaps in structure, completeness, and clarity that confuse AI parsing. This work protects adoption as AI-mediated discovery becomes dominant.
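One lightweight way to run such an audit: hand the raw documentation to an assistant and ask it to implement the integration, then compare its output against a known-good implementation. A sketch of the prompt-building step (the task wording and doc excerpt are placeholders; sending the prompt to the assistant you are testing, via its API or UI, is left out):

```python
def build_audit_prompt(doc_text: str, task: str) -> str:
    """Build a prompt asking an AI assistant to implement an
    integration using ONLY the supplied documentation. Where the
    generated code fails, the documentation has a gap, not the model."""
    return (
        "Using only the documentation below, write working code for this task.\n"
        f"Task: {task}\n"
        "If any required detail is missing from the documentation, "
        "say MISSING and name it instead of guessing.\n"
        "--- DOCUMENTATION ---\n"
        f"{doc_text}\n"
        "--- END DOCUMENTATION ---"
    )

# Placeholder doc excerpt and task; feed the result to whichever
# assistant you are auditing (Claude, Copilot, etc.).
prompt = build_audit_prompt(
    doc_text="POST /v1/messages requires an Authorization: Bearer header.",
    task="Send a message and handle a 429 response.",
)
print(prompt)
```

The "say MISSING instead of guessing" instruction matters: it turns hallucinations into an explicit gap list you can hand back to the documentation team.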
Position documentation as AI infrastructure. Frame documentation not as a resource for humans, but as technical foundation that makes AI-assisted integration possible. Companies with AI-optimized documentation will dominate AI-assisted discovery. This positioning protects documentation budgets when executives question traditional metrics.
Build relationships with AI assistant providers. Anthropic, OpenAI, and GitHub want developer tools to work well with their assistants. Partner with them to optimize documentation. Offer to be a test case. Get early access to citation analytics. This visibility protects your program when budgets tighten and demonstrates you understand where the market is going.
For Developer Relations leadership
Reframe the function as strategic, not support. When your CMO questions Developer Relations value, show how community intelligence shapes ICP definition, sharpens product positioning, and validates GTM execution. Demonstrate that Developer Advocates provide market intelligence that shapes product direction. Position Developer Relations as revenue function, not cost center.
Invest in long-term community moats. Authentic, technically credible community engagement over years creates competitive advantages AI cannot replicate. When AI assistants can recommend any technically equivalent solution, community trust becomes the deciding factor. This requires sustained investment in relationships, not quarterly campaigns.
Educate executives on the AI shift. Share how AI consumption changes documentation metrics, adoption patterns, and value measurement. Teams that adapt early will thrive. Teams that ignore this will lose budget and relevance. Make the shift visible before executives question why traditional metrics are declining.
Your documentation is already being consumed by AI. Your community work already influences adoption decisions. The only question is whether you adapt measurement and positioning to reflect this reality before budget season forces the conversation.
The path forward
The shift from descriptive to aspirational work is not unique to documentation. It's the pattern reshaping all of Developer Relations.
When AI handles mechanical and descriptive tasks, what remains is showing developers what becomes possible. Technical writers move from describing APIs to documenting what can be built. Developer Advocates move from explaining features to demonstrating outcomes. Community builders move from answering questions to inspiring possibilities. The entire function elevates from tactical execution to strategic enablement.
This validates what you've always known about quality, completeness, and structure. AI makes these principles mandatory rather than aspirational. Documentation must be comprehensive because AI cannot fill gaps. Community must be authentic because AI cannot build trust. Advocacy must be credible because AI cannot earn respect. The craft you've developed matters more now, not less.
Those who adapt measurement systems to prove effectiveness rather than engagement, who optimize for dual audiences of humans and AI, and who position their work as strategic enablement rather than tactical support will demonstrate business impact executives cannot ignore.
Picks and Shovels: Marketing to Developers During the AI Gold Rush is available now, everywhere books are sold. It's a #1 Amazon Best-Seller. Did you get your copy yet?