## Summary

- Restructure CLAUDE.md hierarchy so Claude Code auto-discovers project-specific instructions
- Delete dead `AGENTS.md` files (referenced non-existent `.tessl/RULES.md`)
- Rename `django/aiservice/AGENTS.md` → `CLAUDE.md` for auto-discovery
- Create `js/CLAUDE.md` with package commands and gotchas
- Move PR review guidelines to `.claude/rules/pr-review.md` (auto-loaded rule)
- Move prek workflow to `.claude/skills/fix-prek.md` (on-demand skill)
- Add path-scoped rules for Python and Next.js patterns
- Add domain glossary, service architecture diagram, and per-package gotchas

## Test plan

- Verify `CLAUDE.md` files exist at root, `django/aiservice/`, and `js/`
- Verify no remaining references to `AGENTS.md` or `.tessl/`
- Verify `.claude/rules/` and `.claude/skills/` files are committed
# Claude Code Instructions
## Monorepo Structure

```
codeflash-internal/
├── django/aiservice/   # Python backend — Django-Ninja API for LLM optimization
├── js/
│   ├── cf-api/         # Express API — GitHub webhooks, PR analysis, DB ops
│   ├── cf-webapp/      # Next.js 14 — Dashboard UI
│   ├── common/         # Shared library — Prisma schema, types, integrations
│   └── VSC-Extension/  # VS Code extension — in-editor optimization
├── cli/                # Java/Gradle — sample code-to-optimize for testing
├── deployment/         # Unified Docker container (on-prem) + Azure configs
└── experiments/        # R&D — Jupyter notebooks, analysis scripts
```
## Service Architecture

```
VSC-Extension / CLI → cf-api (Express, :3001) → aiservice (Django-Ninja, :8000)
cf-webapp (:3000) reads from the same PostgreSQL DB via Prisma
```
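As a rough illustration of the aiservice end of this flow, a Django-Ninja endpoint that cf-api might call could look like the sketch below. The route path, schema names, and fields are hypothetical, not the actual aiservice API.

```python
# Hypothetical sketch of an aiservice endpoint (Django-Ninja).
# The real routes, schemas, and field names in django/aiservice/ differ.
from ninja import NinjaAPI, Schema

api = NinjaAPI()

class OptimizeRequest(Schema):
    function_source: str          # the Function to Optimize
    read_write_context: str       # code the LLM may modify
    read_only_context: str = ""   # code provided as info only

class OptimizeResponse(Schema):
    candidate_source: str         # an Optimization Candidate

@api.post("/optimize", response=OptimizeResponse)
def optimize(request, payload: OptimizeRequest):
    # In the real service, the LLM call happens here; this stub just echoes.
    return OptimizeResponse(candidate_source=payload.function_source)
```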
## Shared Tooling

- Python: Use `uv` for dependency management and script execution. Never use `pip`.
- JavaScript: Use `npm` for all JS packages.
- Pre-commit: `uv run prek run --all-files` from the repo root.
## Git Commits

Use conventional commit format: `fix:`, `feat:`, `refactor:`, `docs:`, `test:`, `chore:`
## Glossary

- **Function to Optimize** — target function for optimization
- **Optimization Candidate** — LLM-generated code that may be faster
- **Read-Write Context** — code the LLM can modify
- **Read-Only Context** — code provided as info only (not modified)
- **Tracer** — collects input args for a Python function at runtime
- **Replay Test** — reruns traced inputs to verify behavior
- **Inspired Regression Test** — new tests generated by the LLM from existing tests + function code
- **Comparator** — compares two Python objects for equality
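To make the relationship between these terms concrete, here is a deliberately simplified Python sketch. It is not the actual codeflash implementation; every function name below is illustrative only.

```python
# Conceptual illustration of the glossary terms above -- all names are hypothetical.
from typing import Any, Callable

def trace_inputs(fn: Callable, calls: list[tuple]) -> list[tuple]:
    """Tracer (simplified): record the argument tuples a function is called with."""
    recorded = []
    for args in calls:
        fn(*args)
        recorded.append(args)
    return recorded

def comparator(a: Any, b: Any) -> bool:
    """Comparator (simplified): plain ==; the real one handles NaN, numpy arrays, etc."""
    return a == b

def replay_test(original: Callable, candidate: Callable, traced: list[tuple]) -> bool:
    """Replay Test: rerun traced inputs through both versions and compare outputs."""
    return all(comparator(original(*args), candidate(*args)) for args in traced)

# Usage: verify an Optimization Candidate against the Function to Optimize.
def slow_sum(xs):   # Function to Optimize
    total = 0
    for x in xs:
        total += x
    return total

def fast_sum(xs):   # Optimization Candidate
    return sum(xs)

traced = trace_inputs(slow_sum, [([1, 2, 3],), ([],)])
assert replay_test(slow_sum, fast_sum, traced)
```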