# Experiment Loop — Shared Base
Each domain's `experiment-loop.md` extends this base with domain-specific reasoning checklists, metrics, thresholds, and logging schemas. Read the domain file first — it references this file for the common framework.
## The Loop
LOOP (until plateau detected or user requests stop):

Print a status line before each step so the user can follow progress (see Progress Updates in the agent prompt).

1. Review git history. Before choosing a target, read recent experiment history to learn from past attempts:

   ```
   git log --oneline -20   # experiment sequence — what was tried
   git diff HEAD~1         # why the last change worked (or didn't)
   git log -20 --stat      # which files drive improvements
   ```

   Look for patterns: if 3+ commits that improved the metric all touched the same file or area, focus there. If a specific approach failed 3+ times, avoid it. If a successful commit used a technique (e.g., "replaced list with set"), look for similar opportunities elsewhere.

2. Choose target. Pick the next candidate from the ranked bottleneck list (see Bottleneck Ranking in the agent prompt), informed by patterns from step 1. Print `[experiment N] Target: <description> (<category>, <est. impact>)`. If the list is empty or stale (after a re-rank), rebuild it from profiling data (see domain file for sources).

3. Reasoning checklist. Answer all questions from the domain file. Unknown answers = research more.

4. Capture original output. Before changing anything, run the target function with representative inputs and save its output. This is your correctness oracle — the optimized version must produce identical results. (A minimal sketch of this capture/compare pattern follows this list.)

5. Micro-benchmark (when applicable). Print `[experiment N] Micro-benchmarking...`, then the result. (See the timing sketch after this list.)

6. Implement. Print `[experiment N] Implementing: <one-line summary of change>`.

7. Verify benchmark fidelity. Re-read the benchmark and confirm it exercises the exact code path and parameters you changed. If you modified function arguments, wrapper flags, pool sizes, or configuration, the benchmark must use the same values. If the benchmark was written before step 6, the implementation may have changed assumptions — update the benchmark to match. A benchmark that doesn't mirror the production change proves nothing.

8. Verify output equivalence. Run the optimized version with the same inputs from step 4 and compare outputs. If outputs differ, discard immediately — this is a correctness regression, not an optimization. Do not proceed to benchmarking.

9. Benchmark: run the target test. Print `[experiment N] Benchmarking...`. Always run for correctness, even for micro-only optimizations.

10. Guard (if configured). Run the guard command (see Guard Command below). If the guard fails, the optimization broke something — revert and rework (max 2 attempts), then discard if still failing.

11. Read results: pass/fail, metrics. Print the domain-specific result line (see domain file).
    - If crashed or regressed: fix or discard immediately.
    - Confirm small deltas: if improvement is below the domain's noise threshold, re-run to confirm it is not noise.

12. Record in `.codeflash/results.tsv` (schema in domain file).

13. Keep/discard (see decision tree in domain file). Print `[experiment N] KEEP` or `[experiment N] DISCARD — <reason>`.

14. E2E benchmark (after KEEP, when available). If `codeflash compare` is available (see `e2e-benchmarks.md`), run `$RUNNER -m codeflash compare <pre-opt-sha> HEAD` to get authoritative isolated measurements. Record e2e results alongside micro-bench results in `results.tsv`. If e2e contradicts micro-bench (e.g., micro showed 15% but e2e shows <2%), re-evaluate the keep decision — trust the e2e measurement. Print `[experiment N] E2E: <base>ms → <head>ms (<speedup>x)`.

15. Config audit (after KEEP). Check for related configuration flags that may have become dead or inconsistent after your change. Infrastructure changes (drivers, pools, middleware) often leave behind no-op config. Remove or update stale flags.

16. Milestones (every 3-5 keeps): Run the full benchmark (including `codeflash compare <baseline-sha> HEAD` for cumulative e2e measurement) and create a milestone branch. Print `[milestone] vN — <total kept>/<total experiments>, cumulative <metric>`.
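The capture/compare oracle in steps 4 and 8 can be as small as pickling the original output once and checking equality after every change. A minimal sketch, assuming a pure, deterministic target whose output pickles cleanly; the function, inputs, and file name below are stand-ins, not part of this framework:

```python
import pickle
from pathlib import Path

def process_records(records):
    """Stand-in for the real optimization target."""
    return sorted(set(records))

inputs = ["b", "a", "b", "c"]  # stand-in for representative production inputs

oracle_path = Path(".codeflash/oracle_process_records.pkl")
oracle_path.parent.mkdir(exist_ok=True)

# Step 4: before changing anything, capture the original output once.
if not oracle_path.exists():
    oracle_path.write_bytes(pickle.dumps(process_records(inputs)))

# Step 8: after implementing the optimization, re-run the same inputs and compare.
expected = pickle.loads(oracle_path.read_bytes())
actual = process_records(inputs)
if actual != expected:
    raise SystemExit("DISCARD: output differs from the original (correctness regression)")
print("Output matches the saved oracle")
```

If outputs contain floats or unordered collections, compare a normalized form rather than raw equality.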
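For step 5, a micro-benchmark can be a few lines of `timeit`. A sketch using the same stand-in function; the `repeat` and `number` values are illustrative, not prescribed by this document:

```python
import timeit

def process_records(records):
    """Stand-in for the candidate hot function."""
    return sorted(set(records))

inputs = ["b", "a", "c"] * 1000  # stand-in for representative inputs

# Best-of-5 reduces scheduler noise; keep the identical harness for before/after runs
# so step 7's fidelity check only has to confirm the call mirrors the production change.
per_call = min(timeit.repeat(lambda: process_records(inputs), repeat=5, number=100)) / 100
print(f"[experiment N] Micro-benchmarking... {per_call * 1e6:.1f} us/call")
```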
## Keep/Discard Decision Tree — Common Structure
```
Output matches original?
+-- NO  -> DISCARD immediately (correctness regression)
+-- YES -> Test passed?
    +-- NO  -> Fix or discard immediately
    +-- YES -> Guard passed? (skip if no guard configured)
        +-- NO  -> Revert, rework optimization (max 2 attempts)
        |   +-- Still fails -> DISCARD
        +-- YES -> Primary metric improved?
            +-- YES (>= domain threshold) -> KEEP
            +-- YES (< domain threshold)  -> Re-run to confirm not noise
            |   +-- Confirmed -> KEEP
            |   +-- Noise     -> DISCARD
            +-- Micro-bench only improved (>= domain micro threshold) -> KEEP (if on confirmed hot path)
            +-- NO -> DISCARD
```
Domain files specify the exact thresholds and any additional branches.
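For reference, the same branches expressed as a Python sketch. The threshold values, dictionary keys, and the function itself are placeholders; the domain file owns the real thresholds and any extra branches:

```python
def keep_or_discard(r, metric_threshold=0.05, micro_threshold=0.10):
    """Placeholder thresholds and keys; the domain file defines the real values and branches."""
    if not r["output_matches"]:
        return "DISCARD"  # correctness regression, never benchmark it
    if not r["test_passed"]:
        return "DISCARD"  # fix first or discard
    if r.get("guard_configured") and not r["guard_passed"]:
        return "DISCARD"  # reached only after the 2 allowed rework attempts
    delta = r["metric_delta"]  # fractional improvement in the primary metric
    if delta >= metric_threshold:
        return "KEEP"
    if 0 < delta < metric_threshold:
        return "KEEP" if r.get("confirmed_on_rerun") else "DISCARD"  # rule out noise
    if r.get("micro_delta", 0) >= micro_threshold and r.get("on_confirmed_hot_path"):
        return "KEEP"  # micro-bench-only win on a confirmed hot path
    return "DISCARD"
```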
## Guard Command
An optional secondary verification that must always pass — a regression safety net. The guard prevents optimizing one metric while silently breaking another.
Setup: During session initialization, ask the user if there's a command that must always pass (e.g., `pytest tests/`, `mypy .`, `npm run typecheck`). Store it in `.codeflash/conventions.md` under `## Guard`. If no guard is specified, skip step 10 in the loop.
Rules:
- The guard (step 10) runs AFTER benchmarking, not before — don't waste time guarding a change that didn't even improve the metric.
- If the metric improved but the guard fails: revert the change, rework the optimization to not break the guard, and re-run (max 2 attempts). If it still fails after 2 rework attempts, DISCARD.
- NEVER modify guard/test files to make the guard pass. Always adapt the implementation instead.
- Record guard status in `results.tsv`: add `guard_pass` or `guard_fail` to the status column.
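A sketch of how the guard step might be wired up mechanically, assuming the command is stored as a plain line under the `## Guard` heading in `.codeflash/conventions.md`; the helper names and storage details are assumptions, not a defined interface:

```python
import subprocess
from pathlib import Path

def read_guard_command(conventions=Path(".codeflash/conventions.md")):
    """Return the first non-empty line after '## Guard', or None if no guard is configured."""
    if not conventions.exists():
        return None
    lines = conventions.read_text().splitlines()
    if "## Guard" not in lines:
        return None
    for line in lines[lines.index("## Guard") + 1:]:
        if line.startswith("#"):  # next section reached; no command found
            return None
        if line.strip():
            return line.strip()
    return None

def run_guard():
    """Step 10: run after benchmarking; adapt the implementation, never the guard itself."""
    cmd = read_guard_command()
    if cmd is None:
        return "guard_skipped"  # no guard configured, skip this step
    return "guard_pass" if subprocess.run(cmd, shell=True).returncode == 0 else "guard_fail"
```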
## Strategy Rotation
If 3+ consecutive discards on the same type of optimization, switch strategy. Domain files list the rotation order.
## Plateau Detection & Stuck State Recovery
Universal checks (run after every experiment): See Stopping Criteria in the agent prompt — diminishing returns, user target reached, cumulative stall. If any fires, stop.
Domain-specific: 3+ consecutive discards across all strategies = check if remaining candidates are non-optimizable (see domain file for criteria). If top 3 candidates are all non-optimizable, stop and report to user with what's left and why.
### Stuck State Recovery
If 5+ consecutive discards (across all strategy rotations), trigger this recovery protocol before giving up:
- Re-read all in-scope files from scratch. Your mental model may have drifted — re-read the actual code, not your cached understanding.
- Re-read the full results log (`.codeflash/results.tsv`). Look for patterns (a quick tally like the sketch after this list can help):
  - Which files/functions appeared in successful experiments? Focus there.
  - Which techniques worked? Try variants of those techniques on new targets.
  - Which approaches failed repeatedly? Explicitly avoid them.
- Re-read the original goal. Has the focus drifted from what the user asked for?
- Try combining 2-3 previously successful changes that might compound (e.g., a data structure change + an algorithm change in the same hot path).
- Try the opposite of what hasn't worked. If fine-grained optimizations keep failing, try a coarser architectural change. If local changes keep failing, try a cross-function refactor.
- Check git history for hints: `git log --oneline -20 --stat` — do successful commits cluster in specific files or patterns?
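One way to do that results-log tally, assuming `results.tsv` is tab-separated with header columns roughly like `target`, `technique`, and `status`; the real schema comes from the domain file:

```python
import csv
from collections import Counter

keeps, discards = Counter(), Counter()
with open(".codeflash/results.tsv", newline="") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        key = (row.get("target", "?"), row.get("technique", "?"))  # assumed column names
        (keeps if "KEEP" in row.get("status", "").upper() else discards)[key] += 1

print("Worked, try variants elsewhere:", keeps.most_common(5))
print("Failed 3+ times, avoid:", [k for k, n in discards.items() if n >= 3])
```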
If recovery still produces no improvement after 3 more experiments, stop and report with a summary of what was tried and why the codebase appears to be at its optimization floor for this domain.
## Cross-Domain Escalation
During profiling or experimentation, you may discover the real bottleneck is in a different domain than the one you're optimizing. Watch for these signals:
| You are | Signal | Likely domain |
|---|---|---|
| CPU agent | Peak memory dominates runtime (GC pressure, swapping) | Memory |
| CPU agent | Hot function is await-heavy or serializes I/O | Async |
| Memory agent | Allocations are fast but algorithm is O(n^2) | CPU |
| Memory agent | Memory growth from connection/session accumulation | Async |
| Async agent | Individual coroutines are CPU-bound, not I/O-bound | CPU |
| Async agent | Coroutines hold large buffers that overlap at peak | Memory |
| Any agent | Import time or circular deps are the real bottleneck | Structure |
When you detect a cross-domain signal:
- Log it in `results.tsv`: `experiment N | ESCALATE | <signal description> | suggests <domain>`.
- Tell the user: "I'm finding that the real bottleneck is [description] — this is a [domain] issue, not [current domain]. Want me to switch?"
- Write it in `HANDOFF.md` so a resumed session picks it up.
Do NOT silently switch domains or attempt fixes outside your expertise.
## Session End — Learnings
When the session ends (plateau, user stop, or escalation), write `.codeflash/learnings.md` with insights that would help future sessions on this codebase. Append if the file already exists.
Format:

```
## <date> — <domain> session on <branch>

### What worked
- <technique> on <target> gave <improvement> (e.g., "dict index for dedup gave 12x on process_records")

### What didn't work
- <technique> on <target> — <why> (e.g., "generator pipeline for parse_rows — overhead exceeded savings at n<1000")

### Codebase insights
- <observation> (e.g., "ORM layer accounts for 60% of runtime — query optimization would have more impact than Python-level changes")
```
Keep entries concise. Future sessions read this file to avoid repeating failed approaches and to build on successful patterns.