Replace three `from X import support as _` patterns with a loop using
`importlib.import_module()`, eliminating the duplicate name binding.
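The loop shape can be sketched with stand-in dotted names (the real packages and their `support` submodules are not reproduced here):

```python
import importlib

# Stand-in dotted names: the original imported three `<pkg>.support`
# submodules for their side effects, each previously bound to a
# throwaway `_`. A loop needs no duplicate name binding at all.
for dotted in ("os.path", "urllib.parse", "email.utils"):
    module = importlib.import_module(dotted)
```
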
Co-authored-by: Kevin Turcios <KRRT7@users.noreply.github.com>
The function was removed in the dead code cleanup but the test file still
imported it and had a TestCreatePyprojectToml class, causing ImportError.
Co-authored-by: Kevin Turcios <KRRT7@users.noreply.github.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Function now lives alongside the console/logger it depends on.
Updated all 8 callers to import from cli_cmds.console instead.
Co-Authored-By: Oz <oz-agent@warp.dev>
inquirer_wrapper, inquirer_wrapper_path, split_string_to_cli_width,
and split_string_to_fit_width were replaced by direct inquirer usage
with CodeflashTheme and rich.prompt.Confirm. Only apologize_and_exit
remains.
Co-Authored-By: Oz <oz-agent@warp.dev>
- Fix duplicate type annotation for test_count_cache in optimizer.py
- Replace ast.walk() with stack-based traversal in _find_class_node_by_name (21x speedup)
- Use list instead of deque in CallGraph.ancestors (34% speedup, order doesn't matter for set result)
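The list-as-worklist change can be sketched as follows; `callers` is a hypothetical adjacency mapping standing in for CallGraph's caller edges:

```python
def ancestors(callers: dict, start: str) -> set:
    # A plain list works as the worklist: the result is returned as a
    # set, so LIFO vs FIFO visit order makes no difference, and
    # list.append/pop avoid deque's import and per-op overhead.
    seen = set()
    work = [start]
    while work:
        node = work.pop()
        for parent in callers.get(node, ()):
            if parent not in seen:
                seen.add(parent)
                work.append(parent)
    return seen
```
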
The optimization replaced recursive calls in `_get_expr_name` with an iterative loop that walks attribute chains once, collecting parts into a list and reversing them only at the end, eliminating the function-call overhead that accounted for 46% of the original runtime (the line profiler shows recursive calls at 1154 ns/hit vs. the new loop iterations at ~300 ns/hit). Additionally, `_expr_matches_name` now precomputes `"." + suffix` once instead of building it twice per invocation via f-strings, cutting redundant string allocations. The net 26% runtime improvement comes primarily from avoiding Python's recursion stack and reducing temporary object creation in the hot path, with all tests passing and only minor per-test slowdowns (typically 10–25%) offset by dramatic wins on deep attribute chains (up to 393% faster for 100-level nesting).
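The iterative shape of the rewrite looks roughly like this (a sketch, not the exact code):

```python
import ast

def get_expr_name(expr):
    # Walk the attribute chain iteratively instead of recursing,
    # collecting parts and reversing once at the end.
    parts = []
    node = expr
    while isinstance(node, ast.Attribute):
        parts.append(node.attr)
        node = node.value
    if not isinstance(node, ast.Name):
        return None
    parts.append(node.id)
    return ".".join(reversed(parts))
```
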
Expanded bug fix workflow to explicit 5-step sequence with subagent delegation, aligned type annotation rule with codebase conventions, simplified verification rule to reference prek, and expanded git/PR guidelines.
Runtime annotations in PR descriptions were broken in two ways:
1. add_runtime_comments() ignored class/method prefixes in keys, causing
annotations from unrelated test classes to leak across files and sum
incorrectly at the same line number. Now filters by class names found
in each test source file.
2. Test functions were removed before annotations were added, shifting
line numbers so annotations landed on wrong lines. Swapped ordering
so annotations are applied first, then function removal carries them
along correctly.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The optimization replaced a large multi-type `isinstance()` check (13 AST node types constructed into a tuple on every iteration) with a single `hasattr(node, "body")` test, then conditionally checked for `orelse`, `finalbody`, and `handlers` only when `body` exists. Line profiler shows the original `isinstance` block consumed ~40% of runtime across 7327 calls, while the new `hasattr` checks are ~3× cheaper per call. The nested conditionals avoid calling `getattr` with default values when attributes are absent (e.g., `orelse` is missing in 85% of nodes), cutting wasted attribute lookups from four unconditional `getattr` calls to typically one or two `hasattr` checks plus direct accesses. Across 59 test runs processing ~7300 AST nodes each, this yields a 109% speedup with identical correctness.
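The access pattern can be sketched with a hypothetical helper (the real traversal does more around it):

```python
import ast

def child_statement_lists(node):
    # Compound statements all carry `body`; test that one cheap
    # attribute first, then probe the rarer fields only when it exists.
    lists = []
    if hasattr(node, "body"):
        lists.append(node.body)
        if hasattr(node, "orelse") and node.orelse:
            lists.append(node.orelse)
        if hasattr(node, "finalbody") and node.finalbody:
            lists.append(node.finalbody)
        if hasattr(node, "handlers") and node.handlers:
            lists.append(node.handlers)
    return lists
```
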
re_extract_from_cache was always calling add_needed_imports_from_module,
but the HASHING context should instead use ast.unparse(ast.parse()) to
normalize whitespace for consistent hashing, matching
extract_all_contexts_from_files.
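The normalization round-trip itself is plain stdlib (Python 3.9+): parsing discards whitespace and unparsing emits a canonical layout, so two sources with the same AST hash identically.

```python
import ast

# Irregular spacing and blank lines disappear after a parse/unparse
# round-trip, yielding a canonical form suitable for hashing.
src = "x  =  1\n\n\ny =2"
normalized = ast.unparse(ast.parse(src))
```
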
Build test_count_cache once before ranking instead of calling
existing_unit_test_count 2N times. Guard against a None function_to_tests
and add debug logging when effort is escalated from medium to high.
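The caching shape, with hypothetical names mirroring the ones in the message:

```python
def rank_by_test_count(functions, function_to_tests):
    # Guard against None, then build the cache once (N lookups) rather
    # than recomputing the count inside every sort comparison.
    if function_to_tests is None:
        function_to_tests = {}
    test_count_cache = {
        fn: len(function_to_tests.get(fn, ())) for fn in functions
    }
    return sorted(functions, key=test_count_cache.__getitem__, reverse=True)
```
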
The optimization reorders checks in `_should_use_raw_project_class_context` to perform cheap O(1) checks before expensive body iterations. Moving the `decorator_list` check from near the end to the very start eliminates ~60% of body scans when decorators are present (line profiler shows the single-pass loop dropped from 2.84ms to 2.60ms per hit). Folding the manual `_class_has_explicit_init` and `_has_descriptor_like_class_fields` calls into one body traversal with early returns cuts redundant iterations, and checking for namedtuple/dataclass before computing size metrics avoids the `_get_class_start_line` computation in ~15% of cases. This achieves a 42% runtime improvement (737µs → 518µs) with no functional regressions.
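The reordering idea in miniature, with an invented condition set (the real predicate checks different properties):

```python
import ast

def should_use_raw_class_context(cls: ast.ClassDef) -> bool:
    # Cheap O(1) check first: bail out before any body scan.
    if cls.decorator_list:
        return True
    # One body pass with early returns replaces two separate scans.
    for stmt in cls.body:
        if isinstance(stmt, ast.FunctionDef) and stmt.name == "__init__":
            return True  # explicit __init__ (sketch)
        if isinstance(stmt, ast.AnnAssign):
            return True  # descriptor-like field (sketch)
    return False
```
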
The optimization replaced `any()` generator expressions with explicit early-return for-loops in four helper functions (`_is_namedtuple_class`, `_class_has_explicit_init`, `_has_descriptor_like_class_fields`, and `_has_non_property_method_decorator`), eliminating the overhead of building generator objects and calling the `any()` builtin. Line profiler data shows `_class_has_explicit_init` dropped from 1.85 ms to 0.96 ms (48% faster), and `_is_namedtuple_class` improved from 97 µs to 53 µs (46% faster), because the optimized code avoids allocating iterator state and returns immediately upon finding a match instead of completing the generator. The 51% overall runtime improvement (1.43 ms → 948 µs) comes from these cumulative reductions in per-call overhead across thousands of invocations during AST traversal. Test suite confirms no behavioral changes across all edge cases including dataclasses, decorators, and size-limit boundaries.
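A simplified stand-in for one of the four helpers; the `any()` form it replaces would wrap the same predicate in a generator expression:

```python
import ast

def is_namedtuple_class(cls: ast.ClassDef) -> bool:
    # Explicit loop with early return: no generator object is
    # allocated and no any() builtin call is made.
    for base in cls.bases:
        if isinstance(base, ast.Name) and base.id == "NamedTuple":
            return True
        if isinstance(base, ast.Attribute) and base.attr == "NamedTuple":
            return True
    return False
```
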
The optimization replaces `ast.walk(tree)` — which visits every node in the AST — with a manual stack-based traversal that only descends into container node types (`Module`, `ClassDef`, `FunctionDef`, control-flow statements, etc.) where `ClassDef` nodes can appear. This eliminates traversal of leaf nodes like `Name`, `Constant`, `Load`, and `Store`, which constitute the bulk of an AST but never contain class definitions. The profiler shows the original single-line comprehension spent 100% of runtime (117.7 ms) in `ast.walk`, while the optimized version completes in 36.1 ms (3.26× faster) by skipping ~60–80% of nodes depending on AST density. Tests confirm correctness across nested classes, control-flow scopes, and large trees with 1000+ classes.
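The traversal can be sketched like this: only statement lists that can contain a `ClassDef` are pushed, so leaf nodes are never visited (a sketch of the approach, not the exact code):

```python
import ast

def find_class_defs(tree: ast.Module) -> list:
    # Stack-based descent through statement containers only; leaf
    # nodes (Name, Constant, Load, Store, ...) never reach the stack.
    found = []
    stack = [tree]
    while stack:
        node = stack.pop()
        if isinstance(node, ast.ClassDef):
            found.append(node)
        for field in ("body", "orelse", "finalbody"):
            stack.extend(getattr(node, field, []))
        for handler in getattr(node, "handlers", []):
            stack.extend(handler.body)
    return found
```
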
The optimization replaced `ast.walk()` (which visits every node in the AST) with a custom `ImportCollector` visitor that only processes `ImportFrom` nodes, eliminating ~18,000 unnecessary node type-checks on a representative 48-parse benchmark where only ~3,400 nodes were actually relevant. This cuts the import-collection phase from 84.4 ms to 69.3 ms (18% faster) as seen in the profiler, while the broader pipeline improves 11% end-to-end. A few tests with minimal or deeply nested imports show slight regressions (~10%) because visitor dispatch overhead dominates when there are very few target nodes, but these cases are rare in production codebases where the hot-path callers (`build_testgen_context`, `enrich_testgen_context`) process multi-file contexts with dozens of imports.
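The visitor shape is standard `ast.NodeVisitor` usage (a sketch; the real collector likely records more than the raw nodes):

```python
import ast

class ImportCollector(ast.NodeVisitor):
    # Dispatch calls visit_ImportFrom only for ImportFrom nodes; every
    # other node type falls through generic_visit with no per-node
    # predicate evaluated in user code.
    def __init__(self):
        self.import_froms = []

    def visit_ImportFrom(self, node: ast.ImportFrom):
        self.import_froms.append(node)
        # An ImportFrom cannot nest further imports: skip generic_visit.

collector = ImportCollector()
collector.visit(ast.parse("from os import path\nimport sys\nfrom ast import walk"))
```
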
Parse each file once instead of up to 16 times by:
- Making remove_unused_definitions_by_function_names accept/return cst.Module
- Making parse_code_and_prune_cst and add_needed_imports_from_module accept cst.Module
- Threading the parsed Module through process_file_context
- Adding extract_all_contexts_from_files that processes all 4 context types
(READ_WRITABLE, READ_ONLY, HASHING, TESTGEN) in a single per-file pass
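The single-parse flow, sketched with stdlib `ast` in place of libcst's `cst.Module`, with a hypothetical `process` standing in for the per-context pruning:

```python
import ast

CONTEXT_KINDS = ("READ_WRITABLE", "READ_ONLY", "HASHING", "TESTGEN")

def process(tree: ast.Module, kind: str) -> str:
    # Stand-in for the per-context pruning: the real helpers accept
    # and return the already-parsed module, not source strings.
    return f"{kind}:{len(tree.body)}"

def extract_all_contexts(sources: dict) -> dict:
    contexts = {}
    for name, src in sources.items():
        tree = ast.parse(src)  # parsed once, reused for all four kinds
        contexts[name] = {kind: process(tree, kind) for kind in CONTEXT_KINDS}
    return contexts
```
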