# Error Handling in Pipeline Code

## No silent swallowing
Never write `except Exception: pass` or `except Exception: return`. Every `except` block must do one of the following:

- Log at WARNING or higher with `exc_info=True`
- Re-raise after cleanup
- Return a clearly documented sentinel value with at least DEBUG logging

The tracing code (`benchmarking/_tracing.py`) is the worst offender: seven bare `except` blocks with zero logging. Don't add more.
## Protect `ast.parse()` calls

`ast.parse()` raises `SyntaxError` on malformed input. Always wrap it:

```python
try:
    tree = ast.parse(source)
except SyntaxError:
    log.warning("Failed to parse %s", path)
    return None  # or skip this file
```

Known unprotected calls: `_state.py:70`, `_ranking.py:33`.
## XML/TOML parsing

Always use `recover=True` for the lxml `XMLParser`. Always wrap `tomlkit.parse()` in try/except. Log parsing failures at WARNING, not DEBUG -- config parsing failures matter.
## Format consistency: client-server boundary

The AI service returns markdown-fenced code blocks. Every endpoint response must be parsed with `CodeStringsMarkdown.parse_markdown_code()` before using the code. Currently only `/optimize` and `/optimize-line-profiler` do this correctly. The refinement, repair, and adaptive endpoints in `ai/_refinement.py` skip this step.
Pattern to follow (from `pipeline/_candidate_gen.py:83-91`):

```python
parsed = CodeStringsMarkdown.parse_markdown_code(c.code)
if not parsed.code_strings:
    continue
plain_code = "\n\n".join(cs.code for cs in parsed.code_strings)
```
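For illustration only, a self-contained stand-in for the parse step: a simple regex extractor assumed to approximate what `CodeStringsMarkdown.parse_markdown_code()` does. Pipeline code should use the real class, which this sketch does not replace:

```python
import re

_FENCE = re.compile(r"```(?:\w+)?\n(.*?)```", re.DOTALL)

def extract_fenced_code(response_text: str) -> str:
    """Join the bodies of all markdown code fences in an AI response."""
    blocks = _FENCE.findall(response_text)
    return "\n\n".join(b.strip() for b in blocks)

# A raw AI-service response still wrapped in markdown fences:
raw = "Here is the fix:\n```python\ndef f():\n    return 1\n```\n"
plain_code = extract_fenced_code(raw)  # the bare code, fences stripped
```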