Three private tiles published to the codeflash workspace:

- codeflash-internal-rules: 6 eager rules (code-style, architecture, optimization-patterns, git-conventions, testing-rules, multi-language-handlers)
- codeflash-internal-docs: 8 lazy doc pages (domain-types, optimization-pipeline, test-generation-pipeline, context-extraction, aiservice/cf-api endpoints, configuration-thresholds, llm-provider-abstraction)
- codeflash-internal-skills: 4 on-demand skills (debug-optimization-failure, add-language-support, add-api-endpoint, debug-test-generation)
| name | description |
|---|---|
| debug-optimization-failure | Diagnose why an optimization produced no results or failed silently. Use when an optimization request returns errors, empty results, or all candidates are rejected. Walks through request validation, router dispatch, context extraction, LLM calls, postprocessing, and logging stages. |
# Debug Optimization Failure
Use this workflow when an optimization request fails or produces no results. Work through the stages sequentially — stop at the first failure found.
## Step 1: Validate the Request

Check that the incoming `OptimizeSchema` is well-formed.

- Read `core/shared/optimizer_models.py` — verify the request matches `OptimizeSchema` fields
- Check required fields: `source_code`, `trace_id` must be non-empty
- Check the `language` field — must be `"python"`, `"javascript"`, `"typescript"`, or `"java"`
- Check `n_candidates` — default is 5, must be positive

**Checkpoint:** If the request schema is invalid, the error comes from Pydantic validation. Check the 400 response for field-level errors.
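The field checks above can be sketched as a standalone preflight validator. This is an illustrative stdlib helper, not the real Pydantic `OptimizeSchema` model; the field names and constraints come from this doc, everything else is assumed.

```python
def validate_optimize_request(req: dict) -> list[str]:
    """Return field-level error messages for a request dict.

    Illustrative only -- the real service validates via the Pydantic
    OptimizeSchema model and returns these as 400 response errors.
    """
    errors = []
    # source_code and trace_id are required and must be non-empty
    for field in ("source_code", "trace_id"):
        if not req.get(field):
            errors.append(f"{field}: must be non-empty")
    # language defaults to "python" and must be a supported route
    if req.get("language", "python") not in {"python", "javascript", "typescript", "java"}:
        errors.append("language: unsupported")
    # n_candidates defaults to 5 and must be positive
    if req.get("n_candidates", 5) <= 0:
        errors.append("n_candidates: must be positive")
    return errors
```

Running it against a failing request surfaces the same field-level problems you would look for in the 400 response body.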
## Step 2: Check Router Dispatch

Verify the correct language handler is invoked.

- Read `core/shared/optimizer_router.py` — the `optimize()` endpoint dispatches by `data.language`
- Supported routes:
  - `"javascript"` / `"typescript"` → `core.languages.js_ts.optimizer.optimize_javascript`
  - `"java"` → `core.languages.java.optimizer.optimize_java`
  - Default → `core.languages.python.optimizer.optimizer.optimize_python`
- Check for import errors — lazy imports inside the function body may fail if a language module is missing

**Checkpoint:** If dispatch fails, you'll see an `ImportError`. Check that the language module exists under `core/languages/`.
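The routing table above can be modeled as a simple lookup. A minimal sketch, assuming the dotted paths from this doc; in the real router each route is a lazy import inside the function body, which is where a missing language module raises `ImportError`.

```python
def resolve_handler(language: str) -> str:
    """Return the dotted path of the optimizer for a language.

    Illustrative sketch of the dispatch in optimizer_router.py; the
    real code imports and calls the handler rather than returning
    its path.
    """
    routes = {
        "javascript": "core.languages.js_ts.optimizer.optimize_javascript",
        "typescript": "core.languages.js_ts.optimizer.optimize_javascript",
        "java": "core.languages.java.optimizer.optimize_java",
    }
    # Anything unrecognized falls through to the Python optimizer
    return routes.get(language, "core.languages.python.optimizer.optimizer.optimize_python")
```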
## Step 3: Check Context Extraction

Verify the optimization context is built correctly.

- Read `core/languages/python/optimizer/context_utils/optimizer_context.py`
- `BaseOptimizerContext.get_dynamic_context()` dispatches to Single or Multi context
- Check `get_system_prompt()` and `get_user_prompt()` — they should produce non-empty prompts
- Check `extract_code_and_explanation_from_llm_res()` — this parses markdown code blocks from the LLM response

**Checkpoint:** If context extraction returns empty prompts, check that `source_code` in the request is valid Python/JS code.
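The code-block parsing step can be reproduced with a small regex to sanity-check an LLM response by hand. This is an illustrative sketch; the real `extract_code_and_explanation_from_llm_res()` also separates out the explanation text around the blocks.

```python
import re

_FENCE = "`" * 3  # built this way so this snippet stays fence-safe

# Matches an optional language tag after the opening fence, then
# captures everything up to the closing fence.
CODE_BLOCK_RE = re.compile(_FENCE + r"(?:\w+)?\n(.*?)" + _FENCE, re.DOTALL)

def extract_code_blocks(llm_response: str) -> list[str]:
    """Return the contents of all fenced code blocks in a response."""
    return [m.strip() for m in CODE_BLOCK_RE.findall(llm_response)]
```

If this returns an empty list for a raw LLM response, the model did not emit a fenced code block, which would explain an empty extraction result.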
## Step 4: Check LLM Calls

Verify the LLM is called correctly and returns valid responses.

- Read `aiservice/llm.py` — `call_llm()` is the universal call handler
- Check `get_llm_client(model_type)` returns a valid client (not `None`)
- Environment variables required:
  - OpenAI: `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT`, `OPENAI_API_VERSION`
  - Anthropic: `ANTHROPIC_FOUNDRY_API_KEY`, `ANTHROPIC_FOUNDRY_BASE_URL`
- Check `optimizer_config.py` — `get_model_distribution()` determines how many calls per model
- Look for exceptions: `"LLM client for model type '...' is not available"`

**Checkpoint:** If LLM calls fail, check environment variables and API key validity. Network errors will raise exceptions.
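The environment checks above can be wrapped in a quick preflight helper. The variable names come from this doc; the helper itself is hypothetical and not part of `aiservice/llm.py`.

```python
import os
from typing import Optional

# Required variables per provider, as listed in this runbook
REQUIRED_ENV = {
    "openai": ("AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT", "OPENAI_API_VERSION"),
    "anthropic": ("ANTHROPIC_FOUNDRY_API_KEY", "ANTHROPIC_FOUNDRY_BASE_URL"),
}

def missing_env(model_type: str, env: Optional[dict] = None) -> list[str]:
    """List unset environment variables for a model type.

    A non-empty result is the likely cause of get_llm_client()
    returning None / the "client ... is not available" error.
    """
    env = os.environ if env is None else env
    return [var for var in REQUIRED_ENV.get(model_type, ()) if not env.get(var)]
```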
## Step 5: Check Postprocessing

Verify candidates survive postprocessing.

- Read `core/languages/python/optimizer/postprocess.py`
- `deduplicate_optimizations()` — removes candidates with identical ASTs (via `ast.parse()` + `ast.dump()`)
- `equality_check()` — removes candidates identical to the original code
- Check if ALL candidates were deduplicated or matched the original

**Checkpoint:** If all candidates are removed by postprocessing, the LLM is generating identical or no-op code. Try increasing `n_candidates` or checking prompt quality.
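The dedup and equality checks above can be sketched in one pass. This mirrors the `ast.parse()` + `ast.dump()` approach the doc names, but the function itself (name, signature, handling of unparseable candidates) is illustrative, not the real `deduplicate_optimizations()`.

```python
import ast

def deduplicate_by_ast(candidates: list[str], original: str) -> list[str]:
    """Drop candidates whose AST matches the original or an earlier candidate.

    ast.dump() normalizes formatting differences, so "x=1" and "x = 1"
    count as identical.
    """
    seen = {ast.dump(ast.parse(original))}
    kept = []
    for cand in candidates:
        try:
            dump = ast.dump(ast.parse(cand))
        except SyntaxError:
            continue  # unparseable candidates are dropped
        if dump not in seen:
            seen.add(dump)
            kept.append(cand)
    return kept
```

If this returns an empty list for the raw candidates, every candidate was a reformatting of the original, which is exactly the all-removed failure mode described above.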
## Step 6: Check Response Construction

Verify the response is properly constructed.

- Each successful candidate produces an `OptimizeResponseItemSchema`
- `parse_and_generate_candidate_schema()` converts extracted code to the schema
- `is_valid_code()` validates syntax — `cst.ParserSyntaxError` or `ValidationError` means malformed output
- If parsing fails, the candidate is dropped and a Sentry message is captured

**Checkpoint:** If candidates parse but the response is empty, check the validation step in the optimizer flow.
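The syntax gate can be reproduced for manual debugging. Note the real service parses with libcst (hence `cst.ParserSyntaxError`); this sketch substitutes the stdlib `ast` module, which accepts the same valid Python but is not byte-for-byte equivalent to libcst's parser.

```python
import ast

def is_valid_code(source: str) -> bool:
    """Return True if the candidate is syntactically valid Python.

    Stdlib stand-in for the libcst-based is_valid_code() named above.
    """
    try:
        ast.parse(source)
    except SyntaxError:
        return False
    return True
```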
## Step 7: Check Logging

Verify the optimization was logged for debugging.

- Read `core/log_features/models.py` — `OptimizationFeatures` stores per-trace-id data
- Check `optimizations_raw` (before postprocessing) vs `optimizations_post` (after)
- LLM calls are recorded via `record_llm_call()` in the `finally` block of `call_llm()`
- PostHog events track `aiservice-optimize-openai-usage`

**Checkpoint:** If logging shows raw candidates but no post candidates, postprocessing removed them all.
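The raw-vs-post comparison can be condensed into a one-line diagnosis. The field names come from `OptimizationFeatures` as described above; the helper itself is hypothetical.

```python
def diagnose_counts(optimizations_raw: list, optimizations_post: list) -> str:
    """Summarize where candidates were lost, from the logged lists."""
    if not optimizations_raw:
        # Nothing was ever generated: look at Steps 3-4 (LLM call / parsing)
        return "no raw candidates: failure is upstream (LLM call or extraction)"
    if not optimizations_post:
        # Candidates existed but none survived: look at Step 5
        return "raw but no post candidates: postprocessing removed them all"
    return f"{len(optimizations_post)}/{len(optimizations_raw)} candidates survived"
```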
## Key Files Reference

| File | What to check |
|---|---|
| `core/shared/optimizer_router.py` | Language dispatch |
| `core/shared/optimizer_models.py` | Request validation |
| `core/languages/python/optimizer/optimizer.py` | Optimization flow |
| `core/languages/python/optimizer/context_utils/optimizer_context.py` | Context extraction |
| `core/languages/python/optimizer/postprocess.py` | Dedup and validation |
| `aiservice/llm.py` | LLM calls and client setup |
| `core/shared/optimizer_config.py` | Model distribution |
| `core/log_features/models.py` | Logging and tracking |