Your documentation has two audiences: humans and AI. Not long ago, a user who needed help with your product would start by reading your documentation. If they couldn’t find what they needed, they’d open a support ticket or try a different solution.

That model changed. Users increasingly get answers from AI agents—ChatGPT, Claude, Perplexity, or your Mintlify assistant—that pull in your documentation as context and generate responses on your behalf. Developers building on your platform use coding agents like Claude Code or Cursor that read your documentation to understand your product.

Your documentation must still work well for humans. But you also need to consider the AI that mediates between your content and your users. Luckily, making documentation agent-friendly is similar to making it good for humans. Clearer pages, more explicit context, consistent terminology, and focused content always help. But there are some specific considerations that we’ll cover in this course.

Documentation Index
Fetch the complete documentation index at: https://learn.mintlify.com/llms.txt
Use this file to discover all available pages before exploring further.
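A small sketch of how a tool might consume that index, assuming the common llms.txt shape of one markdown link per entry (`- [Title](url): description`); the sample content and URLs here are illustrative, not real pages:

```python
import re

def parse_llms_index(text: str) -> list[dict]:
    """Extract title, url, and description from llms.txt-style
    markdown lines like '- [Title](https://...): description'."""
    pattern = re.compile(
        r"-\s*\[(?P<title>[^\]]+)\]\((?P<url>[^)]+)\)(?::\s*(?P<desc>.*))?"
    )
    entries = []
    for line in text.splitlines():
        match = pattern.match(line.strip())
        if match:
            entries.append({
                "title": match.group("title"),
                "url": match.group("url"),
                "description": match.group("desc") or "",
            })
    return entries

# Illustrative index content; a real tool would fetch llms.txt over HTTP.
sample = """# Example docs

> Documentation for an example product.

- [Quickstart](https://example.com/quickstart): Install and make your first request.
- [API reference](https://example.com/api): Every endpoint, with parameters.
"""

for entry in parse_llms_index(sample):
    print(entry["title"], "->", entry["url"])
```

An agent can scan this entry list first, then fetch only the pages whose titles and descriptions match the user's question.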
What this course covers
- How agents read your documentation — What happens when an agent processes your content, and why some documentation works better as context for them.
- Write content agents can use — The writing and content design choices that make your documentation reliable context for agents.
- Control what agents see — Which settings and files to configure, including `llms.txt`, `CLAUDE.md`, and `agents.md`.
- Keep content agent-friendly over time — A maintenance approach for a fast-changing AI landscape.
Who this is for
Developers, technical writers, and product teams who maintain documentation. Some lessons focus on Mintlify, but the principles apply to any documentation platform. You don’t need a background in AI or machine learning, just an interest in making your content work well for anyone (or anything) that reads it.

Lessons in this course
- How agents read your docs — What happens when an agent processes your content, and what determines whether it succeeds.
- Write content agents can use — Techniques for writing documentation that serves as reliable context for agents.
- Control what agents see — Configure what agents have access to, and give them the context to produce accurate answers.
- Keep content agent-friendly over time — How to maintain agent-friendly content as your product grows and best practices evolve.
Agent-friendly documentation checklist
Use this as a quick reference for your documentation.

Content
- Every page makes sense without reading adjacent pages
- Page titles are specific enough to match user queries (“Configure rate limits for API requests” instead of “Advanced configuration”)
- Page descriptions answer “why would someone read this?” with specific topics. No vague summaries
- Headings make sense without surrounding context and tell a narrative from just skimming them
- Each page covers one topic or task, focused on a single user goal
- Terminology is consistent throughout with one name per concept
Examples and references
- Code examples are complete and runnable
- Placeholder values are clear and explain what to substitute
- Cross-references name what they link to. No “click here,” “the above,” or similar
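To illustrate the first two items above, here is a hypothetical snippet in the style the checklist asks for: it runs as-is, and every placeholder says what to substitute. The API URL and key name are invented for the example:

```python
import urllib.request

API_KEY = "YOUR_API_KEY"  # Replace with the key from your account dashboard (hypothetical)
BASE_URL = "https://api.example.com/v1"  # Replace with your deployment's base URL

# Build a complete, well-formed request object.
request = urllib.request.Request(
    f"{BASE_URL}/projects",
    headers={"Authorization": f"Bearer {API_KEY}"},
)

# urllib.request.urlopen(request) would send it; we stop here so the
# snippet stays runnable without real credentials.
print(request.full_url)
```

Compare this with a fragment that omits imports or uses an unexplained `<key>` placeholder: an agent pasting the fragment into a user's answer would produce code that fails immediately.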
Configuration
- Your site has an `llms.txt` file
- `llms.txt` has a blockquote description that summarizes the content of your site
- Each entry in `llms.txt` has a description and a link
- Important pages appear first in `llms.txt`
- Deprecated, internal, and changelog pages are excluded from `llms.txt`
- `AGENTS.md` or `CLAUDE.md` defines audience, terminology, and content type rules for coding agents
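A minimal sketch of an `llms.txt` that satisfies the items above, following the common convention of an H1 title, a blockquote summary, and H2 sections of links; the product, URLs, and descriptions are invented:

```markdown
# Example Product

> Documentation for Example Product, an API for sending transactional email.
> Covers setup, the REST API reference, and integration guides.

## Getting started

- [Quickstart](https://docs.example.com/quickstart): Install the SDK and send your first message.
- [Authentication](https://docs.example.com/auth): Create API keys and scope their permissions.

## Reference

- [REST API](https://docs.example.com/api): Every endpoint, with parameters and example responses.
```

Note that the most important pages come first and nothing deprecated or internal is listed.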
Ongoing maintenance
- Review user conversations with agents regularly for failures and wrong answers
- Update documentation alongside product changes
- Review `llms.txt` when major sections are added or removed, unless your `llms.txt` automatically updates
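That review can be partly automated. A sketch, assuming you can produce two URL sets (from your `llms.txt` and from your site's current pages); the helper and URLs are hypothetical:

```python
def audit_llms_index(index_urls: set[str], live_urls: set[str]) -> dict:
    """Compare the URLs listed in llms.txt against the pages that
    currently exist, flagging stale entries and unlisted pages."""
    return {
        "stale": sorted(index_urls - live_urls),    # listed but removed
        "missing": sorted(live_urls - index_urls),  # exists but unlisted
    }

report = audit_llms_index(
    index_urls={
        "https://docs.example.com/quickstart",
        "https://docs.example.com/old-sdk",
    },
    live_urls={
        "https://docs.example.com/quickstart",
        "https://docs.example.com/webhooks",
    },
)
print(report["stale"])    # entries to drop from llms.txt
print(report["missing"])  # pages to consider adding
```

Running a check like this in CI catches an out-of-date index before agents start citing pages that no longer exist.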