Introducing this to address pain points in V1; it is not a complete rewrite and builds on V1.
---------
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Kevin Turcios <KRRT7@users.noreply.github.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
- Add RAF throttling to scroll handler to prevent excessive re-renders
- Memoize inline style objects in CandidateContent component
- Parallelize function search across files with Promise.all
- Remove useSearchParams hook to stabilize callback dependencies
- Add loading.tsx for Next.js route streaming
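The RAF throttling above can be sketched roughly as follows. This is a minimal standalone sketch, not the actual component code; `rafThrottle` and the injectable `schedule` parameter are hypothetical names added for illustration and testability:

```typescript
// Throttle a handler so it runs at most once per animation frame,
// always with the most recent arguments. `schedule` defaults to
// requestAnimationFrame but can be injected (e.g. for tests).
type Schedule = (cb: () => void) => void;

function rafThrottle<A extends unknown[]>(
  fn: (...args: A) => void,
  schedule: Schedule = (cb) => requestAnimationFrame(() => cb()),
): (...args: A) => void {
  let scheduled = false;
  let lastArgs: A | null = null;
  return (...args: A) => {
    lastArgs = args;          // remember the latest call's arguments
    if (scheduled) return;    // a frame is already pending; coalesce
    scheduled = true;
    schedule(() => {
      scheduled = false;
      fn(...(lastArgs as A)); // fire once per frame with latest args
    });
  };
}
```

A handler built this way would typically be attached with `window.addEventListener("scroll", rafThrottle(onScroll), { passive: true })`, so at most one re-render is triggered per frame regardless of scroll-event frequency.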
- Add full-width code blocks in ranking section for better review
- Add badges matching main: Rank #X (indigo), Best (emerald), Used for PR (blue)
- Filter out unknown candidates from ranking display and re-rank sequentially
- Strip markdown code block wrappers from displayed code
- Fix refinement parent lookup to include other refinements
- Fall back to original code when refinement parent not found
- Make search bar non-sticky (scrolls with content)
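The filter-and-re-rank step can be sketched like this (the `Candidate` shape and `rerank` helper are illustrative assumptions; the real types live in the app):

```typescript
interface Candidate {
  id: string;
  rank: number | null; // null = unknown; dropped from the ranking display
}

// Drop candidates without a known rank, then re-rank the remainder 1..N
// so displayed ranks stay sequential after filtering.
function rerank(candidates: Candidate[]): Candidate[] {
  return candidates
    .filter((c) => c.rank !== null)
    .sort((a, b) => (a.rank as number) - (b.rank as number))
    .map((c, i) => ({ ...c, rank: i + 1 }));
}
```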
- Add temporary auth bypass for /observability testing
- Rename src/app/observability2/ to src/app/observability/
- Rename src/components/observability2/ to src/components/observability/
- Update all route references from /observability2 to /observability
- Update all import paths from observability2 to observability
Delete all legacy observability code and consolidate on observability2:
- Remove src/app/observability/ routes and src/app/trace/ route
- Remove src/components/observability/ and src/components/trace/
- Remove src/lib/observability-utils.ts and observability-response-parse.ts
- Move copy-button and info-icon components to observability2
- Update middleware, sidebar, and conditional-layout to use /observability2
- Add tree-sitter for accurate Python function detection and highlighting
- Add summary-only view toggle for test generation section
- Clean up timeline view and types
- Replace LLMCallsTimeline with new TimelinePageView component
- Add scroll-based section tracking with sticky timeline dot
- Implement sci-fi pop-out animations when sections become active
- Add unified diff view with GitHub-style coloring for candidates
- Auto-expand code blocks when section is active (70vh max height)
- Collapse test sections by default
- Remove orphaned components (llm-calls-timeline, scrolling-timeline, timeline-utils)
- Update exports in index.ts
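The scroll-based section tracking reduces to a pure lookup like the following. This is a simplified sketch under assumed names (`activeSection`, a plain `Section` offset list); the real component presumably measures offsets via refs and feeds in the throttled scroll position:

```typescript
interface Section {
  id: string;
  top: number; // document offset of the section, in px
}

// Return the id of the last section whose top edge has scrolled past,
// with a small activation offset so a section becomes active slightly
// before it reaches the top of the viewport.
function activeSection(
  scrollY: number,
  sections: Section[],
  offset = 100,
): string | null {
  let active: string | null = null;
  for (const s of [...sections].sort((a, b) => a.top - b.top)) {
    if (scrollY + offset >= s.top) active = s.id;
  }
  return active;
}
```

The sticky timeline dot would then highlight whichever id this returns as the user scrolls.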
- Fix LP candidate ordering to display in numerical order (1-6)
- Add "Reverts to original" warning badge for refined candidates
- Rename "test_generation" section to "Generated Tests"
- Reorder sections: Generated Tests now appears before optimization candidates
- Collapse unit tests by default (only candidates expanded)
- Fix Function to Optimize section to work without filePath
- Prioritize metadata.function_to_optimize for function name extraction
- Add Response button in debug dialog to toggle raw LLM response view
- Collapse Instrumented Behavior Test and Instrumented Performance Test by default
Replace <img> with Next.js <Image> component for optimized loading,
remove unused imports, fix TypeScript any type, and add eslint-disable
comments for intentional hook dependency patterns.
Phase 1 of visual redesign - create token system for consistent theming:
- Add tokens.css with zinc color scale (50-950) and semantic colors
- Add typography.css with JetBrains Mono for technical data
- Add spacing.css with 8px grid system
- Update globals.css to import all token files
- Configure tailwind.config.ts with token integration
- Load JetBrains Mono font in layout.tsx
Requirements: CLR-01 through CLR-05, LAY-01, LAY-02, TYP-01 through TYP-03
Replace dropdown structure with flat card layout for all call types.
Each item now shows directly with a header containing title, ranking
badges, and a debug button. LLM call details (metrics, tokens, prompts,
response) are moved into a reusable debug dialog component.
- Fix operator precedence in language detection (llm-calls-timeline.tsx:65)
- Fix AttributeError by checking function_to_optimize is not None before accessing attributes (testgen.py:260)
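The exact expression in llm-calls-timeline.tsx is not reproduced here, but the class of precedence bug being fixed looks like the following hypothetical illustration: `??` (like `||`) binds tighter than the conditional operator, so without parentheses the whole left side becomes the ternary's condition:

```typescript
// Hypothetical illustration only, not the actual timeline code.
// Parsed as (fromFence ?? isPython) ? "python" : "text", so any truthy
// fence language collapses to "python" and the fence value is discarded.
const pickBuggy = (fromFence: string | null, isPython: boolean) =>
  fromFence ?? isPython ? "python" : "text";

// Parentheses restore the intended grouping: prefer the fence language,
// otherwise fall back to a heuristic.
const pickFixed = (fromFence: string | null, isPython: boolean) =>
  fromFence ?? (isPython ? "python" : "text");
```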
Co-authored-by: Kevin Turcios <KRRT7@users.noreply.github.com>
- Create model-context-windows.ts with model info and context limits
- Show context window usage bar when model is recognized
- Display warning when approaching (>75%) or near (>90%) context limit
- Add tooltips showing exact token counts
- Supports GPT-4.1, GPT-5-mini, GPT-4o, Claude Sonnet/Haiku models
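The threshold logic can be sketched as below. The model table here is an illustrative assumption (two entries with plausible limits), not the shipped model-context-windows.ts contents:

```typescript
// Illustrative context limits; the real table covers more models and
// the exact numbers should come from provider documentation.
const CONTEXT_WINDOWS: Record<string, number> = {
  "gpt-4o": 128_000,
  "claude-sonnet": 200_000,
};

type UsageLevel = "ok" | "approaching" | "near";

// Return usage fraction and warning level, or null for unrecognized
// models (in which case the usage bar is hidden).
function contextUsage(
  model: string,
  tokens: number,
): { pct: number; level: UsageLevel } | null {
  const limit = CONTEXT_WINDOWS[model];
  if (limit === undefined) return null;
  const pct = tokens / limit;
  const level: UsageLevel = pct > 0.9 ? "near" : pct > 0.75 ? "approaching" : "ok";
  return { pct, level };
}
```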
- Display Rank #N, Best, and Used for PR badges inline with model name
- Remove separate banner from candidate cards
- Matches V1 badge placement for consistency
- Add colored banner at top of candidate cards showing rank, best, and used-for-PR
- Banner uses emerald for best candidates, indigo for ranked candidates
- Move badges from inline to dedicated banner for better visibility
- Apply same treatment to refinement candidates
- Extract refinement candidates (source=REFINE) from optimizations_origin
- Pass refinementCandidates to LLMCallsTimeline component
- Map refinement LLM calls to their generated candidates
- Display refinement candidates with amber theme and parent reference
- Show diff comparing refined code to parent candidate code
- Support ranking/best/used-for-PR badges on refinement candidates
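The parent lookup (including the fix that lets a refinement's parent be another refinement, with a fallback to the original code) can be sketched as follows; the `Cand` shape and `parentCode` helper are hypothetical names for illustration:

```typescript
interface Cand {
  id: string;
  parentId?: string;
  code: string;
  source: "GENERATE" | "REFINE";
}

// Resolve the code a refinement should be diffed against: its parent
// candidate, which may itself be a refinement. When the parent cannot
// be found, fall back to the original function code.
function parentCode(c: Cand, all: Cand[], originalCode: string): string {
  const parent = all.find((p) => p.id === c.parentId);
  return parent?.code ?? originalCode;
}
```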
- Calculate totalTokens across all LLM calls
- Display tokens and candidates count in TraceSummary
- Update grid layout to accommodate 6 metrics on larger screens
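The token total is a straightforward reduction over the calls; a sketch with an assumed `LLMCall` shape (missing counts treated as zero):

```typescript
interface LLMCall {
  inputTokens?: number;
  outputTokens?: number;
}

// Sum input + output tokens across all LLM calls for the summary metric.
const totalTokens = (calls: LLMCall[]): number =>
  calls.reduce((sum, c) => sum + (c.inputTokens ?? 0) + (c.outputTokens ?? 0), 0);
```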
- Show Rank #N badge for ranked candidates
- Show "Best" badge for top-ranked candidate
- Show "Used for PR" badge when best candidate was used for pull request
- Highlight best candidate with green border/background
- Add ranking explanation section at the bottom of timeline
Replace basic syntax highlighting with custom DiffView component that shows:
- Green background for additions with + indicator
- Red background for deletions with − indicator
- Blue background for hunk headers
- Left border color coding for quick visual scanning
Add ability to switch between full optimized code view and unified diff
view for each optimization candidate, making it easier to understand
what changes were made during optimization.
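The per-line coloring reduces to classifying each unified-diff line; a sketch (the real DiffView component maps these kinds to background and left-border classes):

```typescript
type DiffKind = "add" | "del" | "hunk" | "context";

// Classify a unified-diff line for styling: additions green, deletions
// red, hunk headers blue, everything else plain. File headers
// ("+++"/"---") are deliberately treated as context, not changes.
function classifyDiffLine(line: string): DiffKind {
  if (line.startsWith("@@")) return "hunk";
  if (line.startsWith("+") && !line.startsWith("+++")) return "add";
  if (line.startsWith("-") && !line.startsWith("---")) return "del";
  return "context";
}
```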
- Add instrumented_perf_test field display in LLM calls timeline
- Add syntax highlighting with react-syntax-highlighter for all code blocks
- Parse markdown code blocks to extract filename and show clean code
- Create reusable CodeFileDisplay component for consistent code rendering
- Update code-context-section to parse files and display them separately
- Default code sections to expanded when viewing optimizations
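The fence parsing can be sketched as below. This is a hypothetical helper, not the app's actual parser; it assumes filenames arrive in the fence info string as `lang:path` (the fence delimiter is built dynamically to keep the sketch self-contained):

```typescript
// Strip a markdown code fence and pull a filename from the info string
// (e.g. "python:src/app.py"). Unfenced input is returned unchanged.
const FENCE = "`".repeat(3);

function parseCodeBlock(raw: string): { filename: string | null; code: string } {
  const re = new RegExp(`^${FENCE}([^\\n]*)\\n([\\s\\S]*?)\\n?${FENCE}\\s*$`);
  const m = raw.match(re);
  if (!m) return { filename: null, code: raw };
  const info = m[1].trim();
  const filename = info.includes(":") ? info.slice(info.indexOf(":") + 1) : null;
  return { filename, code: m[2] };
}
```

CodeFileDisplay would then render `filename` as a header and hand the clean `code` to the syntax highlighter.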