
Documentation Index

Fetch the complete documentation index at: https://learn.mintlify.com/llms.txt

Use this file to discover all available pages before exploring further.
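The index itself is plain text with markdown-style links. As a rough sketch (assuming the common llms.txt convention of `[title](url)` link lists; the function names here are illustrative, not part of any Mintlify API), this is how a script or agent could fetch the index and enumerate the pages it points to:

```python
import re
import urllib.request

LLMS_TXT_URL = "https://learn.mintlify.com/llms.txt"

def parse_llms_index(text: str) -> list[tuple[str, str]]:
    """Extract (title, url) pairs from markdown-style links in an llms.txt index."""
    return re.findall(r"\[([^\]]+)\]\((https?://[^)\s]+)\)", text)

def fetch_index(url: str = LLMS_TXT_URL) -> list[tuple[str, str]]:
    """Download the index and return every page it lists."""
    with urllib.request.urlopen(url) as resp:
        return parse_llms_index(resp.read().decode("utf-8"))
```

An agent (or a link checker in CI) could call `fetch_index()` once, then decide which pages are worth fetching in full.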

When a human reads your documentation, they arrive with context. They have a goal in mind. They know what page they just came from or what task they were working on. They can skim for the section they need, follow a link, and build a mental model of your product over multiple sessions.

When an agent comes to your documentation, it takes a different approach. Agents process your content as chunks of text, often without knowing what’s adjacent to each chunk or how it fits into your overall content structure or product knowledge. They don’t have browsing sessions and don’t remember the last time they visited your site. They work with whatever prompt a user gave them, any configuration files, and whatever they discover while looking for an answer.
This course focuses on task-based agent consumption: creating high-quality content for agents that access your documentation in real time to answer a question or complete a task. A related but separate concern is optimizing for model training, where you make your content crawlable and well structured so it’s useful as training data for LLMs. The two share some practices, but have distinct requirements.

Pages should be self-contained

For agent-friendly documentation, each page must function as a standalone document. If an agent fetches a page to answer a question, it might have only that page, or even just parts of it. If the text says “as described in the previous section” or “once you’ve completed the setup above,” the agent may not have that chunk of context, which can lead to incorrect or incomplete answers. Documentation that relies on implied continuity is not agent-friendly. As you write, evaluate whether each page is understandable on its own. This is a best practice that also helps humans: people can arrive on any page of your documentation from a search, so the effort to make every page a viable entry point is a good investment.

Use metadata to guide agents

Page titles and descriptions are critical for search engines and navigation. They’re also the first signal an agent uses to determine what a page is about and whether it’s relevant to a user’s question. A title like “Advanced configuration” tells an agent almost nothing. Prefer titles like “Configure rate limits for API requests” that are specific enough to match real queries users might ask agents. Page descriptions work the same way: a concrete, specific description makes it much easier for an agent to route the right content to the right question.
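One way to put this into practice is a small lint pass over your page metadata. The sketch below is hypothetical (the `GENERIC_TITLES` list and the word-count heuristic are assumptions, not a Mintlify feature); it flags titles too vague to match a real query:

```python
# Hypothetical lint: flag page titles too generic for an agent to route on.
GENERIC_TITLES = {"overview", "introduction", "advanced configuration",
                  "getting started", "miscellaneous", "faq"}

def flag_vague_titles(pages: dict[str, str]) -> list[str]:
    """Return paths whose title is on the generic list or very short.

    `pages` maps a page path to its frontmatter title.
    """
    flagged = []
    for path, title in pages.items():
        words = title.strip().split()
        # Very short titles rarely carry enough signal to match a query.
        if title.strip().lower() in GENERIC_TITLES or len(words) < 3:
            flagged.append(path)
    return flagged
```

Running a check like this in CI keeps vague titles from creeping back in as the docs grow.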

Be consistent with terminology

Inconsistent terminology frustrates humans, but they can usually infer meaning from context. Agents often can’t. If your docs call the same thing both “workspace” and “organization” in different places, an agent may treat them as distinct concepts and fail to match a user’s question to relevant content. A user asking about “organizations” might need a page that only uses the term “workspace.” In agent-friendly docs, consistent terminology is key to accurate answers.
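A simple script can catch this kind of drift before it ships. In this sketch the synonym groups are illustrative examples; you would maintain your own list for your product’s vocabulary:

```python
# Hypothetical consistency check: flag pages that mix synonyms for one concept.
SYNONYM_GROUPS = [
    {"workspace", "organization"},
    {"api key", "access token"},
]

def find_mixed_terms(pages: dict[str, str]) -> dict[str, list[set[str]]]:
    """Return, per page path, the synonym groups where more than one term appears.

    `pages` maps a page path to its full text.
    """
    mixed = {}
    for path, text in pages.items():
        lower = text.lower()
        hits = [group for group in SYNONYM_GROUPS
                if sum(term in lower for term in group) > 1]
        if hits:
            mixed[path] = hits
    return mixed
```

Pages flagged here are candidates for a terminology pass: pick one term per concept and use it everywhere.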

Focus pages on a single topic or task

A page that covers installation, basic configuration, advanced options, and troubleshooting all at once is hard for an agent to use precisely. When a user asks a specific question, the agent has to extract the relevant portion from a lot of surrounding noise. Sometimes it does this well; often it doesn’t. Focused pages that cover one topic and serve one purpose give agents a much cleaner signal. But it’s not just about agents. Don’t get rid of tutorials and quickstarts that walk through multiple setup tasks; just make sure you also have focused how-to pages that cover the key tasks in your product.

Technical issues can cause silent failures

The principles above are about content quality. But some agent failures have nothing to do with how well you’ve written your content. Technical issues can prevent agents from accessing your content at all. Agents fetch pages directly rather than navigating your site like humans do. Here are some of the most common issues that can cause failures:
  • Page size: Pages over about 50,000 characters get truncated when an agent fetches them. The agent receives part of the page, but won’t know content was cut off. Focused, single-topic pages are much less likely to hit this limit.
  • Tabbed content: Agents typically only see the first tab on a page, so content in other tabs may be invisible to them. If you use tabs to organize variants, make sure the most important content is in the default tab, or restructure so critical information isn’t hidden behind a tab at all. Anything that applies to all tabs should come before the tabs.
  • JavaScript rendering: Agents can’t execute JavaScript. Sites that render content client-side may return little or nothing useful to an agent. Mintlify renders server-side by default, so this isn’t a concern for Mintlify-powered sites, but consider which external links you include.
  • Broken or moved content: Agents often have URLs from training data. If you move content to a different domain without maintaining redirects, those URLs fail silently. Same-host redirects work; cross-host redirects often don’t.
For humans, these problems produce a visible bad experience. For agents, they produce a silent failure where the agent just doesn’t have the information, and the user gets a wrong or incomplete answer with no indication that anything went wrong.
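Of these issues, the page-size limit is the easiest to check automatically. Here is a minimal sketch (the 50,000-character threshold is the rule of thumb above, so treat it as approximate, and the `.mdx` extension is an assumption about your source files):

```python
from pathlib import Path

# Approximate character count at which fetched pages get cut off (rule of thumb).
TRUNCATION_LIMIT = 50_000

def flag_oversized(root: str, limit: int = TRUNCATION_LIMIT) -> list[tuple[str, int]]:
    """Return (path, char_count) for docs under `root` that exceed the limit,
    largest first."""
    oversized = []
    for path in Path(root).rglob("*.mdx"):
        count = len(path.read_text(encoding="utf-8"))
        if count > limit:
            oversized.append((str(path), count))
    return sorted(oversized, key=lambda item: -item[1])
```

Any page this flags is a candidate for splitting into focused, single-topic pages, which helps agents for the content-quality reasons above as well.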
You’ve learned how agents read your documentation and how to make your content agent-friendly.

Next up: Write content agents can use — the specific techniques for putting these principles into practice.