# Pre-Submit Self-Review

Before sending `[complete]`, run this checklist against your full diff (`git diff <base>..<branch>`). Fix every finding before finalizing. This catches the issues that reviewers consistently flag on performance PRs.

Also read your language's `pre-submit-review.md` for language-specific checks.

## 1. Resource Ownership

For every resource release you added (closing handles, freeing memory, dropping references, early returns that release resources):

- **Is this object caller-owned?** If it was passed in as a parameter, the caller may still need it after your function returns. Check every call site of the function and verify no caller uses the object afterward.
- **Is this object shared?** If it is reached through instance state, a global, or a cache, other code paths may reference it concurrently.
- **Is this object behind a feature flag or alternate code path?** Check for environment-driven conditionals, conditional imports, and plugin/hook consumers. Regressions behind feature flags are invisible in standard tests.

**Fix pattern:** If you don't own the object, don't release it. Use scoped cleanup (language-appropriate RAII, context managers, or try/finally) only for objects your function creates.
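A minimal sketch of this pattern in Python (function and parameter names are illustrative, not from any particular codebase):

```python
def summarize(path):
    # This function CREATES the handle, so it owns cleanup: the context
    # manager closes it even on early return or exception.
    with open(path) as fh:
        return len(fh.read())

def transform(fh):
    # fh is caller-owned (passed in as a parameter): use it, never close it.
    # The caller may keep reading from it after we return.
    return fh.read().upper()
```

The dividing line is creation: `summarize` opened the handle, so it closes it; `transform` did not, so it must not.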
## 2. Concurrency & Production Safety

Assume this runs in a high-concurrency environment (web server, multiple threads/workers/tasks).

- **Shared mutable state:** Does your change modify module-level variables, class attributes, or globals? Is every access thread-safe?
- **Locking scope:** If you hold a lock, is the critical section minimal? Are you doing I/O under the lock?
- **Event loop safety:** Never start a new event loop from code that may already be running in an async context.
- **Resource lifecycle under concurrency:** File handles, images, arrays, sessions: what happens when many requests hit this code simultaneously? Does each request get its own resources, or are they shared?
- **Partial failure:** If your optimization crashes midway, does it leak resources or corrupt state? Check for missing cleanup paths.
- **Idempotency:** Can your changed function be called multiple times with the same input safely? Does it accumulate state across calls?
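The event-loop rule can be sketched with a small Python helper (the helper name is made up for illustration):

```python
import asyncio

async def fetch():
    return 42

def run_or_schedule(coro):
    """Run a coroutine from sync code without nesting event loops."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return asyncio.run(coro)        # no loop running: safe to start one
    # Already inside an async context: schedule it, never call asyncio.run().
    return asyncio.ensure_future(coro)
```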
## 3. Correctness vs Intent

Cross-check your implementation against what the PR claims:

- **Every claim in results.tsv has evidence.** If you recorded "45% speedup", the benchmark output must show it. If you recorded "KEEP", the tests must have passed.
- **No silent behavior changes.** If your optimization changes output format, error handling, logging, or edge-case behavior, even slightly, document it. Reviewers will diff the behavior, not just the code.
- **Quality tradeoffs disclosed.** If your change trades accuracy for speed, latency for memory, or precision for throughput, quantify both sides in your commit message and in HANDOFF.md.
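For example, a results.tsv row backing a "KEEP" decision might look like this (the column layout here is an assumption; follow whatever schema your results.tsv already uses):

```
experiment	decision	speedup	evidence
vectorized_resize	KEEP	45%	bench/resize_before_after.txt
```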
## 4. Abstraction Boundaries

- **API contracts preserved.** If you changed a function's behavior (e.g., it now releases a resource it didn't before, or returns a different type), check all callers. A function's implicit contract includes what it does NOT do.
- **No code duplication across paths.** If you added a parallel implementation (e.g., an async version of a sync function), the two will drift. Prefer making the existing function handle both cases over duplicating logic.
- **Don't inline what should stay encapsulated.** If you copied logic out of a helper to avoid a function call, the next person who fixes a bug in the helper won't know to fix your copy too.
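One way to avoid sync/async drift is a single shared core with thin wrappers, sketched here in Python (names are illustrative):

```python
import asyncio

def _normalize(items):
    # Single source of truth: a bug fixed here is fixed for both paths.
    return sorted(set(items))

def normalize(items):
    return _normalize(items)

async def normalize_async(items):
    # Thin wrapper over the same core; there is no duplicated logic to drift.
    return _normalize(items)
```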
## 5. Test Coverage of the Actual Path

- **Tests exercise the production code path**, not a test-local approximation. If the optimization runs through a framework-specific loader, factory, or plugin system in production, the test must go through it too.
- **Tests cover the alternate paths** your change affects. If the function is called from both sync and async contexts, test both.
- **Regression tests exist for the edge cases** mentioned in your analysis (e.g., empty input, a single element, concurrent access).
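A sketch of covering both calling contexts in one test module (`dedupe` is a stand-in for the function you changed); run it with `python -m unittest`:

```python
import asyncio
import unittest

def dedupe(items):
    return sorted(set(items))

async def dedupe_async(items):
    return dedupe(items)

class TestBothContexts(unittest.TestCase):
    def test_sync_path(self):
        self.assertEqual(dedupe([3, 1, 3]), [1, 3])

    def test_async_path(self):
        # Exercise the async entry point, not just the shared core.
        self.assertEqual(asyncio.run(dedupe_async([3, 1, 3])), [1, 3])
```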
## How to Run

At the end of the experiment loop, before sending `[complete]`:

1. Get the full diff: `git diff <base-branch>..HEAD`
2. List all modified files: `git diff <base-branch>..HEAD --name-only`
3. For each modified file, find all callers, focusing on functions where you added resource releases.
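For step 3, `grep` gives a quick first pass at call sites (`release_buffer` and the `*.py` glob are placeholders; substitute each function you touched and your language's file extension):

```shell
# List every call site of a function whose cleanup behavior you changed
grep -rn "release_buffer(" --include="*.py" .
```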

Walk through each KEEP commit against this checklist. If you find an issue:

1. Fix it.
2. Re-run the tests.
3. Amend the commit or add a new one.
4. Update results.tsv if the metrics changed.
5. Note the fix in HANDOFF.md under "Pre-submit review findings".

Only send `[complete]` after every checklist item passes.