Mirror of https://github.com/codeflash-ai/codeflash-internal.git, synced 2026-05-04 18:25:18 +00:00
## Summary

- Return **422 Unprocessable Entity** instead of 500 for known operational failures (LLM output parsing failures, no valid candidates produced, invalid rankings, etc.) across all aiservice endpoints
- Keep 500 for genuine internal errors (bare `except Exception` catch-alls that could include DB/network failures)
- Add `422` to the Django-Ninja response schemas so the framework serializes these responses correctly

## Endpoints changed

| Endpoint | Failure type | Old | New |
|---|---|---|---|
| `/ai/testgen` | `TestGenerationFailedError`, `ParserSyntaxError` | 500 | 422 |
| `/ai/optimize` | No valid candidates generated | 500 | 422 |
| `/ai/optimize-line-profiler` | No optimizations generated | 500 | 422 |
| `/ai/adaptive_optimize` | LLM parse error, no candidate | 500 | 422 |
| `/ai/code_repair` | LLM error, `ParserSyntaxError`, `ValidationError` | 500 | 422 |
| `/ai/rank` | Invalid ranking from LLM | 500 | 422 |
| `/ai/explain` | LLM failure, XML parse failure | 500 | 422 |
| `/ai/optimization_review` | JSON parse failure, no JSON block | 500 | 422 |

## Why

These endpoints were returning 500 for expected outcomes (e.g., the LLM returning unparseable output), which triggered Azure 5xx alerts and inflated error metrics. 422 correctly signals that the request was understood but the server couldn't produce a valid result.

## Test plan

- [x] `uv run pytest -x -q -k "optimizer or rank or explain or code_repair or review"` — 199 passed
- [ ] Verify the Azure 5xx alert rate drops after deploy

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
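The 422-vs-500 split above can be sketched as a plain-Python error-mapping pattern. This is a hypothetical, framework-free illustration, not the actual aiservice code: the `OperationalFailure` base class, the `handle` wrapper, and `flaky_testgen` are invented for the example, and only `TestGenerationFailedError` / `ParserSyntaxError` names come from the table above.

```python
class OperationalFailure(Exception):
    """Expected failure: the request was valid, but no usable result came back."""

# Hypothetical stand-ins for the exception types listed in the table above.
class TestGenerationFailedError(OperationalFailure): ...
class ParserSyntaxError(OperationalFailure): ...

def handle(endpoint, *args, **kwargs):
    """Run an endpoint and map outcomes to (status, body) per the policy above."""
    try:
        return 200, endpoint(*args, **kwargs)
    except OperationalFailure as exc:
        # Known operational failure (unparseable LLM output, no candidates, ...):
        # the request was understood, so report 422 rather than 500.
        return 422, {"error": str(exc)}
    except Exception:
        # Anything else is a genuine internal error (DB/network/bugs): keep 500.
        return 500, {"error": "internal error"}

def flaky_testgen(src: str):
    # Simulates an LLM response that the parser rejects.
    raise ParserSyntaxError("could not parse LLM output")

status, body = handle(flaky_testgen, "def f(): ...")
# → status == 422, body == {"error": "could not parse LLM output"}
```

In the real endpoints the same idea is expressed through Django-Ninja's per-status response schemas (hence adding `422` to them), so the framework serializes the error body instead of raising.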