# Agent Team Coordination
## Prompt construction
Front-load all context into the initial Agent() prompt. Never send follow-up messages for information that was available at launch time. Every follow-up costs a round-trip and risks the agent having already committed to a suboptimal approach.
Bad: spawn agent, then send "also use context7 tools"
Good: include "Use mcp__context7__resolve-library-id and mcp__context7__query-docs for current docs" in the original prompt
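
As a sketch, assuming the Python-style `Agent()` spawn call this document already uses (its exact signature, the task, and the file names are illustrative):

```python
# Hedged sketch: everything known at launch time goes into the one prompt.
# `Agent()` is the spawn call named in this document; its signature and
# the file names here are assumptions for illustration.
prompt = """\
Port codeflash-cpu.md to JavaScript as codeflash-js-cpu.md.

Context (complete; do not wait for follow-ups):
- Use mcp__context7__resolve-library-id and mcp__context7__query-docs for current docs.
- Match the section order of the existing Python agent files.
- Flag anything that cannot be ported 1:1 in your report.
"""
worker = Agent(prompt)  # one launch, no corrective round-trips
```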
## Reading selectively
When spawning agents that need to learn a pattern from existing files, point them at 2-3 structurally distinct examples — not every file. Most files in a category share 80% of their structure. Reading all N wastes context on redundancy.
Bad: "Read all 10 Python agent files to understand the pattern"
Good: "Read codeflash-python.md (router), codeflash-cpu.md (domain agent), and codeflash-deep.md (orchestrator) — these three cover all structural variants"
## Concise reporting
Agents report back to the team lead, who has full file access. Reports should confirm completion and flag issues, not restate file contents.
Bad: 500-word summary listing every section, antipattern, and line count
Good: "Done. 2 files written: codeflash-js-cpu.md (334 lines), codeflash-js-memory.md (386 lines). Both follow Python agent structure. No issues."
Only include detail when something deviated from the plan or needs the lead's attention.
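
To pin the shape down, a report can be reduced to a tiny builder. This is a sketch; the function and field shapes are illustrative, not a required schema:

```python
# Hedged sketch: confirm completion, flag issues, restate nothing.
def completion_report(files, issues=None):
    written = ", ".join(f"{name} ({lines} lines)" for name, lines in files.items())
    status = "; ".join(issues) if issues else "No issues."
    return f"Done. {len(files)} files written: {written}. {status}"

completion_report({"codeflash-js-cpu.md": 334, "codeflash-js-memory.md": 386})
# -> 'Done. 2 files written: codeflash-js-cpu.md (334 lines),
#     codeflash-js-memory.md (386 lines). No issues.'
```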
## Templating shared structure
When writing 3+ files with shared structure, extract the common skeleton first. Define the invariant sections (frontmatter shape, opening protocol reference, experiment loop structure, keep/discard tree, plateau detection) as a template, then fill in domain-specific content. This prevents inconsistencies in phrasing and formatting across files.
When using writer agents in parallel, include the skeleton in each agent's prompt rather than having each agent independently infer the structure from Python examples.
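
A sketch of what extracting the skeleton can look like, using `string.Template`; the frontmatter fields and section wording are illustrative stand-ins for the invariants listed above:

```python
# Hedged sketch: one skeleton carries the invariant sections; writer
# agents fill only the domain-specific slots. Field values are illustrative.
from string import Template

SKELETON = Template("""\
---
name: $name
description: $description
---

Follow the shared experiment loop, keep/discard tree, and plateau detection.

## Domain guidance

$domain_guidance
""")

js_cpu = SKELETON.substitute(
    name="codeflash-js-cpu",
    description="CPU optimization for JavaScript",
    domain_guidance="Profile first; only touch loops that appear in the profile.",
)
```

Pasting the same skeleton into each parallel writer's prompt is what keeps phrasing and section order from drifting between files.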
## Tool selection hierarchy
Default to the most direct tool for the job. Spawning an agent is expensive — it allocates a full context window, loads system prompts, and returns only a summary. Use it only when the simpler options can't do the job.
```
Direct tool (Grep, Glob, Read, Bash + jq)   ← first choice
  ↓ not enough
Subagent (Explore, general-purpose)         ← when you need multi-step search or the target is unknown
  ↓ not enough
Named teammate (Agent + TeamCreate)         ← when work is long-running and parallel
```
Concrete examples:

- Parse a known JSON file → `jq` via Bash, not an Agent
- Find which files define a class → Grep, not Explore
- Search for a pattern across 200 files, then read the top 5 matches → Explore
- Profile, implement, and benchmark in parallel → named teammates
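
A sketch of the first two tiers in practice; the class name and path are illustrative, and `explore` stands in for spawning an Explore subagent:

```python
# Hedged sketch: take the cheapest tier that settles the question, and
# escalate only when it can't.
import subprocess

# Tier 1: a known, direct question ("which files define this class?")
# is one grep away; no agent, no extra context window.
hits = subprocess.run(
    ["grep", "-rl", "class PaymentGateway", "src/"],  # illustrative target
    capture_output=True, text=True,
).stdout.splitlines()

# Tier 2: unknown target or multi-step search -> Explore subagent.
# (Pseudocode: `explore` stands in for spawning an Explore agent.)
# if not hits:
#     explore("Find where PaymentGateway is defined, aliased, or re-exported")

# Tier 3: named teammates only for long-running parallel work, e.g.
# profiling, implementing, and benchmarking at the same time.
```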