AI coding agents are only as good as the information you put in front of them. Every byte of context you load into an agent’s context window is space that could have been used for the code, conversation, or task at hand. Getting this balance right requires the same design instinct that makes good UIs, great games, and well-structured APIs: Progressive Disclosure.
What Is Progressive Disclosure?
Progressive Disclosure is the principle of revealing complexity incrementally rather than all at once. Instead of bombarding an audience with everything they might ever need to know, you give them just enough to act now, with clear pathways to more detail as it becomes relevant.
In UX, it means hiding advanced settings until the user needs them. In storytelling, it means revealing world-building details through experience rather than exposition. In software architecture, it is why the C4 model has four levels rather than one enormous diagram. Even HTML links, like the ones you see in this article, follow this principle. You get to decide if and when you want to seek out more detail by following a link, or if it’s not needed for your current context.
For a full overview of the principle and where it applies, see Progressive Disclosure on DevIQ.
Agent Context Windows and Their Limits
Every large language model (LLM) has a context window: a finite amount of text it can hold in “working memory” at one time. This window is shared between your instructions, the conversation history, the code or documents the agent is reading, and the agent’s own output.
Context windows have grown significantly over the years, from 4K tokens to 32K to 128K and beyond, but this growth does not solve the underlying problem. As noted in Context Windows Won’t Grow Forever, we are approaching physical and practical limits. More importantly, a larger context window does not mean you should fill it indiscriminately. Noise in the context degrades response quality: the more irrelevant content the model has to reason around, the more likely it is to produce off-target results.
The Signal-to-Noise Ratio of agent context windows is critical to their success!
Every instruction, template, example, and background fact you load into an agent session has a cost. The goal is to load only what is relevant to the current task, and to make the rest of the detail accessible on demand.
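To make that cost concrete, here is a rough sketch using the common approximation of about four characters per token. The heuristic and the file sizes are illustrative assumptions, not any particular model's tokenizer:

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token on average."""
    return len(text) // 4

# Hypothetical sizes: a 40 KB monolithic AGENTS.md vs. a 1.2 KB summary.
monolithic = "x" * 40_000
summary = "x" * 1_200

window = 128_000  # tokens in a typical large context window
for label, doc in [("monolithic", monolithic), ("summary", summary)]:
    tokens = estimate_tokens(doc)
    print(f"{label}: ~{tokens:,} tokens ({tokens / window:.2%} of a 128K window)")
```

Even under this crude estimate, the monolithic file burns thousands of tokens on every session, relevant or not, while the summary costs a rounding error.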
Applying Progressive Disclosure to Agent Design
The key insight is that an agent should not need to know everything upfront, just as a developer does not memorize the entire codebase before writing a single line of code. Instead, the agent should be given:
- A concise summary of what it is and what it can do.
- Focused, actionable instructions for the current context.
- Pointers to additional detail that it can retrieve only when needed.
This approach maps directly onto how skills work in agent systems like agentskills.io. A skill has a name, a brief description that helps the agent select it, and a body of instructions that are only loaded when the skill is invoked. This is Progressive Disclosure built into the architecture: the agent sees the summary and fetches the detail on demand.
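The mechanics can be sketched in a few lines. This is a hypothetical illustration of the lazy-loading pattern, not the actual agentskills.io implementation; all names are invented:

```python
from dataclasses import dataclass
from pathlib import Path
import tempfile

@dataclass
class Skill:
    """A skill: cheap summary up front, detailed body loaded on demand."""
    name: str
    description: str  # short selector text the agent always sees
    body_path: Path   # detailed instructions on disk, not in context

    def summary(self) -> str:
        # Selection phase: only this one line costs context tokens.
        return f"{self.name}: {self.description}"

    def invoke(self) -> str:
        # Invocation phase: the full body enters the context only now.
        return self.body_path.read_text()

# Demo with a hypothetical skill whose body lives in a temp file.
with tempfile.TemporaryDirectory() as d:
    body = Path(d) / "skill.md"
    body.write_text("# Create API Endpoint\n\n1. Add a request DTO...\n")
    skill = Skill("create-api-endpoint",
                  "Creates a new HTTP API endpoint. Use when adding routes.",
                  body)
    print(skill.summary())      # always loaded
    print(len(skill.invoke()))  # loaded only when this skill is chosen
```

The design choice is the whole point: the registry of summaries is cheap to carry everywhere, and the expensive detail is paid for only by the sessions that need it.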
The same design principle applies to MCP (Model Context Protocol) servers, AGENTS.md files, CLAUDE.md files, GitHub Copilot’s copilot-instructions.md, and squad-style agent teams like Brady Gaster’s Squad, where each team member lives in its own file, reads only its own context, and contributes to a shared but well-organized knowledge base.
The Bad: Everything in One File
The most common mistake is dumping every possible instruction into a single file and calling it done. This might look like an AGENTS.md at the root of the repository that contains something like:
```markdown
# Agent Instructions

## Architecture Overview
This solution uses Clean Architecture with separate Core, UseCases,
Infrastructure, and Web projects. Core contains all domain entities...
(several paragraphs describing every project and its dependencies)

## Domain Model
Customer: a purchasing account that owns Orders and Addresses...
Order: an aggregate root containing OrderItems...
(an entry for every entity in the system)

## API Conventions
All endpoints return ProblemDetails on failure. Routes are named...
(hundreds of lines of conventions, examples, and edge cases)

## Testing, Logging, Deployment, Style Guide...
(and on it goes)
```
Every time the agent starts a session in this repository, it loads all of this. If the task is to fix a typo in a README, the agent still pays the full cost of all those domain model descriptions and API conventions. The signal-to-noise ratio drops with every line you add to a monolithic instructions file.
The Bad: Skills with Giant Descriptions
A subtler but equally damaging anti-pattern appears when developers discover skills or agent capabilities and then write verbose descriptions for each one:
```yaml
# Illustrative anti-pattern: a skill description that is a full manual
name: create-api-endpoint
description: >
  Use this skill whenever you need to create a new API endpoint. First,
  add a request DTO to the Contracts project, named {Verb}{Resource}Request.
  Then create a validator for the DTO, then the endpoint class itself,
  configuring its route and authorization. Write an integration test that
  covers the happy path and at least one failure case, update the OpenAPI
  annotations, and remember that list endpoints must support paging...
  (several more paragraphs of step-by-step instructions)
```
The agent now has a description that is itself a wall of text. The agent must read and process every word of that description for every task, even tasks that have nothing to do with API endpoint creation. The purpose of a skill description is to help the agent decide whether to use the skill, not to be a complete manual for using it.
The Good: Focused Descriptions with Linked Detail
The right approach treats each skill’s description as a selector, not a tutorial. Keep descriptions short and precise, and move the actual how-to guidance into separate files that the agent retrieves only when it invokes the skill:
```text
skills/
  create-api-endpoint/
    skill.md            # detailed how-to, loaded only on invocation
    examples/
      sample-endpoint.md
```
The skill description registered with the agent system stays concise, something like:

```yaml
name: create-api-endpoint
description: >
  Creates a new HTTP API endpoint following this repository's conventions.
  Use when adding or changing API routes.
```
And the skill.md file holds the detailed instructions, fetched only when the agent actually invokes the skill. It might contain something like:

```markdown
# Creating an API Endpoint

1. Add a request DTO to the Contracts project, named {Verb}{Resource}Request.
2. Create a validator for the DTO.
3. Add the endpoint class and configure its route and authorization.
4. Write an integration test covering the happy path and one failure case.

See examples/sample-endpoint.md for a complete worked example.
```
The top-level AGENTS.md or copilot-instructions.md now reads like a table of contents, not a reference manual. Something like:

```markdown
# Agent Instructions

This repository is an HTTP API organized with Clean Architecture.

- Architecture overview: docs/architecture.md
- Domain model: docs/domain-model.md
- API conventions: docs/api-conventions.md
- Testing guidance: docs/testing.md

Read a referenced file only when your current task requires it.
```
This structure is exactly what tools like Brady Gaster’s Squad encourage: each team member (agent) lives in its own file, reads only its own context, and links to resources rather than embedding them.
The agent carries a map, not the entire territory.
Key Takeaways
Progressive Disclosure in agent design comes down to a few concrete practices:
- Keep top-level instruction files minimal. The AGENTS.md, copilot-instructions.md, or CLAUDE.md should orient the agent, not exhaustively document the project.
- Write skill descriptions as selectors, not manuals. Use one or two sentences that help the agent decide when to use a skill; the detail lives in the skill's own folder.
- Use links and file references liberally. An agent can fetch a file when it needs it. Putting that file’s contents directly in the context “just in case” wastes tokens on every session where the task does not need it.
- Organize MCP server tools with the same discipline. Each tool’s description should say what it does and when to use it, with documentation or examples accessible via a separate resource rather than embedded in the tool description.
- Trust the agent to navigate. A well-organized file structure with clear names at every level lets the agent find what it needs through exploration, the same way a developer navigates an unfamiliar codebase.
The goal is an agent that arrives at each session carrying only what it needs for the current task, ready to reach out for more detail the moment it becomes relevant, and never wasting precious context on information that does not apply.