codeflash-internal/tiles/codeflash-internal-rules/rules/optimization-patterns.md

Commit dfc56f19a0 by Kevin Turcios (2026-02-14) — feat: add Tessl tiles for codeflash-internal (rules, docs, skills)

Three private tiles published to the codeflash workspace:
- codeflash-internal-rules: 6 eager rules (code-style, architecture,
  optimization-patterns, git-conventions, testing-rules, multi-language-handlers)
- codeflash-internal-docs: 8 lazy doc pages (domain-types, optimization-pipeline,
  test-generation-pipeline, context-extraction, aiservice/cf-api endpoints,
  configuration-thresholds, llm-provider-abstraction)
- codeflash-internal-skills: 4 on-demand skills (debug-optimization-failure,
  add-language-support, add-api-endpoint, debug-test-generation)

# Optimization Patterns
## Router Dispatch
- Shared routers in `core/shared/` dispatch by the `language` field on the request schema
- Lazy imports inside the endpoint body to avoid circular dependencies:
```python
if data.language in ("javascript", "typescript"):
    from core.languages.js_ts.optimizer import optimize_javascript  # noqa: PLC0415

    return await optimize_javascript(request, data)
```
- Default language is Python
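A minimal self-contained sketch of the dispatch shape; `OptimizeRequest` and the string return values are illustrative stand-ins, not the real schema or API:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class OptimizeRequest:
    # Stand-in for the real request schema; only the `language`
    # field is taken from the rule above.
    language: str = "python"
    source: str = ""


async def optimize(data: OptimizeRequest) -> str:
    if data.language in ("javascript", "typescript"):
        # In the real router, the language package is imported lazily
        # here and its optimizer coroutine is awaited.
        return "js_ts path"
    # No match: fall through to the default Python path.
    return "python path"
```

The lazy import keeps `core/shared/` free of top-level dependencies on any language package, which is what breaks the circular-import cycle.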
## Context Extraction
- `BaseOptimizerContext` in `optimizer_context.py` handles prompt management and code extraction
- Two context types: `SingleOptimizerContext` (single-file) and `MultiOptimizerContext` (multi-file)
- `extract_code_and_explanation_from_llm_res()` parses LLM markdown response into code blocks
- `parse_and_generate_candidate_schema()` converts extracted code to `OptimizeResponseItemSchema`
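A sketch of the parsing step; the fence regex and the simplified signature are assumptions, and the real `extract_code_and_explanation_from_llm_res()` may differ:

```python
import re

_FENCE = re.compile(r"```[a-zA-Z_]*\n(.*?)```", re.DOTALL)


def extract_code_and_explanation(llm_response: str) -> tuple[str, str]:
    """Split an LLM markdown reply into (code, explanation). Sketch only."""
    # Capture the body of the first fenced code block, any language tag.
    match = _FENCE.search(llm_response)
    code = match.group(1).rstrip() if match else ""
    # Treat everything outside the fences as the explanation.
    explanation = _FENCE.sub("", llm_response).strip()
    return code, explanation
```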
## Model Distribution
- `get_model_distribution()` in `optimizer_config.py` splits calls between OpenAI and Anthropic
- Formula: `claude_calls = (total - 1) // 2`, `gpt_calls = total - claude_calls`
- `MAX_OPTIMIZER_CALLS = 6`, `MAX_OPTIMIZER_LP_CALLS = 7`
- Claude gets fewer calls as it's more expensive
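The split formula in plain code; only the arithmetic is from `optimizer_config.py`, the function signature and return shape here are assumptions:

```python
MAX_OPTIMIZER_CALLS = 6
MAX_OPTIMIZER_LP_CALLS = 7


def get_model_distribution(total: int) -> tuple[int, int]:
    # Claude gets the smaller share of the budget (it costs more);
    # GPT takes the remainder.
    claude_calls = (total - 1) // 2
    gpt_calls = total - claude_calls
    return gpt_calls, claude_calls
```

For `total=6` this yields 4 GPT / 2 Claude calls; for `total=7`, 4 GPT / 3 Claude.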
## Postprocessing
- `postprocess.py` handles deduplication and validation of optimization candidates
- Deduplication: normalize with `ast.parse()` + `ast.dump()`, skip duplicates with identical AST
- Equality check: compare optimized code to original to skip no-ops
- Uses `libcst` for all code transformations (preserves formatting)
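A minimal sketch of the AST-based dedup step; the function name and the list-in/list-out shape are illustrative, not the real `postprocess.py` API:

```python
import ast


def dedupe_candidates(candidates: list[str]) -> list[str]:
    """Drop candidates whose normalized AST matches one already seen.

    ast.dump(ast.parse(src)) normalizes away whitespace and comments,
    so candidates that differ only in formatting collapse to one.
    """
    seen: set[str] = set()
    unique: list[str] = []
    for src in candidates:
        try:
            key = ast.dump(ast.parse(src))
        except SyntaxError:
            continue  # unparseable candidates are dropped outright
        if key in seen:
            continue
        seen.add(key)
        unique.append(src)
    return unique
```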
## Prompt Templates
- Prompts stored as `.md` files alongside their modules
- Rendered with Jinja2 (e.g., `build_prompt()` in testgen)
- System and user prompts are constructed per-context type
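A hedged sketch of the rendering pattern, assuming only that Jinja2's `Template` is used; the real `build_prompt()` in testgen loads its `.md` template from alongside the module and may take different parameters:

```python
from jinja2 import Template


def build_prompt(template_text: str, **context: object) -> str:
    # In the real code the template text would come from a .md file next
    # to the module, e.g. Path(__file__).with_name("system_prompt.md").
    return Template(template_text).render(**context)
```

Usage: `build_prompt("Optimize {{ fn }} for speed.", fn="bubble_sort")` renders the placeholder with the supplied context.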