Optimizing AI Agents with Progressive Disclosure

AI coding agents are only as good as the information you put in front of them. Every byte of context you load into an agent’s context window is space that could have been used for the code, conversation, or task at hand. Getting this balance right requires the same design instinct that makes good UIs, great games, and well-structured APIs: Progressive Disclosure.

What Is Progressive Disclosure?

Progressive Disclosure is the principle of revealing complexity incrementally rather than all at once. Instead of bombarding an audience with everything they might ever need to know, you give them just enough to act now, with clear pathways to more detail as it becomes relevant.

In UX, it means hiding advanced settings until the user needs them. In storytelling, it means revealing world-building details through experience rather than exposition. In software architecture, it is why the C4 model has four levels rather than one enormous diagram. Even HTML links, like the ones you see in this article, follow this principle: you decide whether and when to follow a link for more detail, or to skip it when it is not relevant to your current context.

For a full overview of the principle and where it applies, see Progressive Disclosure on DevIQ.

Agent Context Windows and Their Limits

Every large language model (LLM) has a context window: a finite amount of text it can hold in “working memory” at one time. This window is shared between your instructions, the conversation history, the code or documents the agent is reading, and the agent’s own output.

Context windows have grown significantly over the years, from 4K tokens to 32K to 128K and beyond, but this growth does not solve the underlying problem. As noted in Context Windows Won’t Grow Forever, we are approaching physical and practical limits. More importantly, a larger context window does not mean you should fill it indiscriminately. Noise in the context degrades response quality: the more irrelevant content the model has to reason around, the more likely it is to produce off-target results.

The Signal-to-Noise Ratio of agent context windows is critical to their success!

Every instruction, template, example, and background fact you load into an agent session has a cost. The goal is to load only what is relevant to the current task, and to make the rest of the detail accessible on demand.
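To make that cost concrete, here is a small sketch that estimates how much of a context window a monolithic instructions file consumes. The ~4 characters-per-token ratio is a common rule of thumb for English text, not an exact tokenizer, and the function names are illustrative:

```python
def estimated_tokens(text: str) -> int:
    """Approximate token count using the ~4 chars/token heuristic."""
    return len(text) // 4

def context_budget_report(instructions: str, window_tokens: int = 128_000) -> str:
    """Report roughly how much of the window the instructions alone consume."""
    used = estimated_tokens(instructions)
    pct = 100 * used / window_tokens
    return f"~{used} tokens of instructions ({pct:.1f}% of a {window_tokens}-token window)"

# A 40,000-character AGENTS.md eats roughly 10,000 tokens before the
# agent has read a single line of the code it was asked to change.
print(context_budget_report("x" * 40_000))
```

Even on a large window, that budget is paid on every session, whether or not the task touches any of it.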

Applying Progressive Disclosure to Agent Design

The key insight is that an agent should not need to know everything upfront, just as a developer does not memorize the entire codebase before writing a single line of code. Instead, the agent should be given:

  1. A concise summary of what it is and what it can do.
  2. Focused, actionable instructions for the current context.
  3. Pointers to additional detail that it can retrieve only when needed.

This approach maps directly onto how skills work in agent systems like agentskills.io. A skill has a name, a brief description that helps the agent select it, and a body of instructions that are only loaded when the skill is invoked. This is Progressive Disclosure built into the architecture: the agent sees the summary and fetches the detail on demand.
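The split between "always in context" and "fetched on demand" can be sketched in a few lines. The `Skill` and `build_context` names below are hypothetical, not part of any particular agent framework:

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Skill:
    """Only the name and description live in context; the body stays on disk."""
    name: str
    description: str   # short selector text, always loaded
    body_path: Path    # detailed instructions, fetched only on invocation

    def load_body(self) -> str:
        # Read the full instructions only when the agent invokes the skill.
        return self.body_path.read_text()

def build_context(skills: list[Skill]) -> str:
    """The upfront context lists selectors only: a map, not the territory."""
    return "\n".join(f"- {s.name}: {s.description}" for s in skills)
```

The agent reasons over `build_context(...)` output when choosing a skill, and pays for `load_body()` only for the one skill it actually uses.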

The same design principle applies to MCP (Model Context Protocol) servers, AGENTS.md files, CLAUDE.md files, GitHub Copilot’s copilot-instructions.md, and squad-style agent teams like Brady Gaster’s Squad, where each team member lives in its own file, reads only its own context, and contributes to a shared but well-organized knowledge base.

The Bad: Everything in One File

The most common mistake is dumping every possible instruction into a single file and calling it done. This might look like an AGENTS.md at the root of the repository that contains something like:

# Project Instructions

## Build

Run `dotnet build` to build the project. Requires .NET 9. Always run `dotnet restore` first.
If you encounter NuGet authentication errors, check that your nuget.config has the right feed URLs.
The CI pipeline runs `dotnet build --configuration Release` and then runs all tests...

## Code Style

We use 4-space indentation. Namespace declarations should be file-scoped.
Interfaces should be prefixed with I. Private fields should use _camelCase.
Never use var when the type is not obvious. Always use expression-bodied members for single-line
properties. Use primary constructors where possible in C# 12+...(another 100 lines of things `.editorconfig` could cover)

## Domain Model

The core domain contains the following aggregates: Order, Customer, Product, Shipment.
An Order has a list of OrderItems. Each OrderItem has a ProductId, Quantity, and UnitPrice.
The Order aggregate enforces the invariant that TotalPrice equals the sum of (Quantity * UnitPrice)
for all items. Orders in the Pending state can be cancelled... (100s more lines covering every type in the domain model and its relationship to other types)

## Testing

We use xUnit. Test classes should mirror the production code namespace with a .Tests suffix.
Use the Arrange-Act-Assert pattern. Name tests using the format MethodName_Scenario_ExpectedBehavior.
Mock external dependencies using NSubstitute. The test project references the main project...(and more...)

## API Conventions

All endpoints return ProblemDetails on error. Validation uses FluentValidation. Controllers inherit
from ApiController and are decorated with [Authorize] by default unless marked [AllowAnonymous].
Route templates follow the pattern /api/v{version}/{resource}...

[...1000 more lines of everything the agent could ever need...]

Every time the agent starts a session in this repository, it loads all of this. If the task is to fix a typo in a README, the agent still pays the full cost of all those domain model descriptions and API conventions. The signal-to-noise ratio drops with every line you add to a monolithic instructions file.

The Bad: Skills with Giant Descriptions

A subtler but equally damaging anti-pattern appears when developers discover skills or agent capabilities and then write verbose descriptions for each one:

## Skill: create-api-endpoint

When you need to create a new REST API endpoint in this project, you should follow these steps.
First, identify the resource being exposed. Then create a new controller class in the Controllers
directory. The controller must inherit from ControllerBase and be decorated with [ApiController]
and [Route("api/v{version:apiVersion}/[controller]")]. Each action method should be decorated
with the appropriate HTTP method attribute ([HttpGet], [HttpPost], [HttpPut], [HttpDelete]).

Return types should use ActionResult<T> for methods that return data, or IActionResult for methods
that return only a status code. Always use async/await. Input validation is handled by FluentValidation;
create a corresponding validator class in the Validators folder. The validator should inherit from
AbstractValidator<T> where T is the request DTO. Always register the validator in Program.cs...

[...continues for 1000 more words...]

The agent now has a description that is itself a wall of text. The agent must read and process every word of that description for every task, even tasks that have nothing to do with API endpoint creation. The purpose of a skill description is to help the agent decide whether to use the skill, not to be a complete manual for using it.

The Good: Focused Descriptions with Linked Detail

The right approach treats each skill’s description as a selector, not a tutorial. Keep descriptions short and precise, and move the actual how-to guidance into separate files that the agent retrieves only when it invokes the skill:

.github/
  agents/
    skills/
      create-api-endpoint/
        skill.md          <- loaded when skill is selected
        template.cs       <- referenced from skill.md, fetched on demand
        conventions.md    <- referenced from skill.md, fetched on demand

The skill description registered with the agent system is concise:

## Skill: create-api-endpoint

Creates a new REST API endpoint following project conventions.
See `.github/agents/skills/create-api-endpoint/skill.md` for step-by-step instructions
and `.github/agents/skills/create-api-endpoint/template.cs` for the canonical controller template.

And the skill.md file contains the detailed instructions that are only fetched when the agent actually invokes this skill:

# Create API Endpoint

Follow these steps:
1. Create an endpoint in `src/Web/Endpoints/` using `template.cs` as a starting point.
2. Register the endpoint in `Program.cs` if adding a new route group.
3. Add a FluentValidation validator in `src/Web/Validators/` for any POST/PUT body.
4. Add integration tests in `tests/IntegrationTests/Endpoints/`.

See `conventions.md` for naming rules and return type guidance.

The top-level AGENTS.md or copilot-instructions.md now reads like a table of contents, not a reference manual:

# Project Agent Instructions

This is a .NET 10 web API using minimal APIs and REPR pattern. Key facts:
- Build: `dotnet build` (requires `dotnet restore` first)
- Test: `dotnet test`
- Lint: `dotnet format --verify-no-changes`

## Available Skills

- **create-api-endpoint** — Add a new REST endpoint. See `.github/agents/skills/create-api-endpoint/`
- **add-domain-entity** — Add a new domain entity with value objects. See `.github/agents/skills/add-domain-entity/`
- **write-unit-tests** — Generate TUnit tests for a class. See `.github/agents/skills/write-unit-tests/`

For architecture decisions, see `docs/architecture/decisions/`.
For domain model overview, see `docs/domain-model.md`.

This structure is exactly what tools like Brady Gaster’s Squad encourage: each team member (agent) lives in its own file, reads only its own context, and links to resources rather than embedding them.

The agent carries a map, not the entire territory.

Key Takeaways

Progressive Disclosure in agent design comes down to a few concrete practices:

  • Keep top-level instruction files minimal. The AGENTS.md, copilot-instructions.md, or CLAUDE.md should orient the agent, not exhaustively document the project.
  • Write skill descriptions as selectors, not manuals. Use one or two sentences that help the agent decide when to use a skill; the detail lives in the skill’s own folder.
  • Use links and file references liberally. An agent can fetch a file when it needs it. Putting that file’s contents directly in the context “just in case” wastes tokens on every session where the task does not need it.
  • Organize MCP server tools with the same discipline. Each tool’s description should say what it does and when to use it, with documentation or examples accessible via a separate resource rather than embedded in the tool description.
  • Trust the agent to navigate. A well-organized file structure with clear names at every level lets the agent find what it needs through exploration, the same way a developer navigates an unfamiliar codebase.

The goal is an agent that arrives at each session carrying only what it needs for the current task, ready to reach out for more detail the moment it becomes relevant, and never wasting precious context on information that does not apply.

Resources