Mirror of https://github.com/codeflash-ai/codeflash-internal.git (synced 2026-05-04 18:25:18 +00:00)
Accept `baseline_runtime_ns`, `loop_count`, `line_profiler_results`, and `test_input_examples` on the optimize endpoint. Pass the runtime context and test examples into the user prompt so the LLM can generate better-informed candidates. Alternate line profiler data across parallel calls for diversity: odd-numbered calls include the line profiler results, even-numbered calls omit them.
| Name |
|---|
| .. |
| aiservice |
| .dockerignore |
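The alternation described in the commit message above can be sketched as follows. This is a minimal illustration, not the actual endpoint code: the function name `build_prompt_contexts` and the context-dict shape are assumptions; only the parameter names (`baseline_runtime_ns`, `loop_count`, `line_profiler_results`) come from the commit message.

```python
def build_prompt_contexts(n_calls, line_profiler_results, baseline_runtime_ns, loop_count):
    """Build one prompt context per parallel LLM call (hypothetical helper).

    Odd-numbered calls (1-based) include the line profiler results; even-numbered
    calls omit them, so the candidate pool mixes both prompt variants for diversity.
    """
    contexts = []
    for i in range(1, n_calls + 1):
        contexts.append({
            "baseline_runtime_ns": baseline_runtime_ns,
            "loop_count": loop_count,
            # Alternate: odd calls get line profiler data, even calls don't.
            "line_profiler_results": line_profiler_results if i % 2 == 1 else None,
        })
    return contexts
```

Under this sketch, four parallel calls would yield contexts 1 and 3 with profiler data and contexts 2 and 4 without, giving the LLM two distinct views of the same optimization target.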