## Summary

- Pass coverage details (unexecuted lines, threshold) to the review and repair prompts so the LLM can identify low-coverage tests
- Accept previous repair errors in the repair endpoint and include them in the prompt for retry cycles
- Parallelize per-test review LLM calls with `asyncio.TaskGroup`
- Conditionally include codeflash env var context (`CODEFLASH_TRACER_DISABLE`, etc.) in repair prompts when the function under test references them

## Test plan

- [x] Tested locally with the codeflash CLI against `Tracer.__enter__`: review, repair, and retry cycles all work
- [x] Coverage details and previous errors appear correctly in prompts
- [x] Review parallelization runs the per-test reviews concurrently instead of sequentially (~60s per test), reducing end-to-end latency