chore: add cf-webapp quality gates CI workflow (#2563)

## Summary
- Adds GitHub Actions workflow that runs on PRs touching
`js/cf-webapp/**`
- Runs type-check (`tsc --noEmit`), tests (`vitest run`), and build
(`next build`)
- Posts a PR comment with results table and collapsible route size
details
- Fails the check if any gate fails

## Evidence
- Proof doc: `js/cf-webapp/proof/20-quality-gates.md`

## Test plan
- [ ] `bash js/cf-webapp/proof/reproducers/20-quality-gates.sh` — 10/10
checks pass
- [ ] Workflow triggers on a PR touching cf-webapp files
- [ ] PR comment appears with quality report
Kevin Turcios 2026-04-04 11:43:02 -05:00 committed by GitHub
parent 0c37015650
commit f6a7d9b29d
37 changed files with 130 additions and 2893 deletions


@ -1,32 +1,145 @@
name: cf-webapp Quality Gates
on:
pull_request:
types: [opened, synchronize]
paths:
- "js/cf-webapp/**"
permissions:
contents: read
packages: read
pull-requests: write
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
check-changes:
runs-on: ubuntu-latest
outputs:
should-run: ${{ steps.filter.outputs.webapp }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: dorny/paths-filter@v3
id: filter
with:
filters: |
webapp:
- 'js/cf-webapp/**'
skip:
needs: check-changes
if: needs.check-changes.outputs.should-run != 'true'
runs-on: ubuntu-latest
steps:
- run: echo "No cf-webapp changes, skipping."
benchmark:
needs: check-changes
if: needs.check-changes.outputs.should-run == 'true'
runs-on: ubuntu-latest
env:
NODE_AUTH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
cache: npm
cache-dependency-path: js/cf-webapp/package-lock.json
registry-url: https://npm.pkg.github.com
scope: "@codeflash-ai"
- name: Install dependencies
working-directory: js/cf-webapp
run: npm ci --ignore-scripts
- name: Generate Prisma client
working-directory: js/cf-webapp
run: npx prisma generate
- name: Type-check
id: typecheck
working-directory: js/cf-webapp
run: npx tsc --noEmit
continue-on-error: true
- name: Tests
id: tests
working-directory: js/cf-webapp
run: npx vitest run --reporter=verbose 2>&1 | tee test-output.txt
continue-on-error: true
- name: Build
id: build
working-directory: js/cf-webapp
run: npx next build 2>&1 | tee build-output.txt
continue-on-error: true
- name: Extract results
id: results
working-directory: js/cf-webapp
run: |
# Type-check status
if [ "${{ steps.typecheck.outcome }}" = "success" ]; then
echo "typecheck_status=✅ Pass" >> "$GITHUB_OUTPUT"
else
echo "typecheck_status=❌ Fail" >> "$GITHUB_OUTPUT"
fi
# Test summary
if [ "${{ steps.tests.outcome }}" = "success" ]; then
TESTS_SUMMARY=$(grep -E "Tests\s+[0-9]+" test-output.txt | tail -1 || echo "passed")
echo "tests_status=✅ ${TESTS_SUMMARY}" >> "$GITHUB_OUTPUT"
else
echo "tests_status=❌ Tests failed" >> "$GITHUB_OUTPUT"
fi
# Build status
if [ "${{ steps.build.outcome }}" = "success" ]; then
echo "build_status=✅ Success" >> "$GITHUB_OUTPUT"
else
echo "build_status=❌ Fail" >> "$GITHUB_OUTPUT"
fi
# Extract route sizes from build output
ROUTES=$(sed -n '/Route.*Size.*First Load/,/^$/p' build-output.txt | head -30 || echo "No route data")
{
echo "routes<<ROUTES_EOF"
echo "$ROUTES"
echo "ROUTES_EOF"
} >> "$GITHUB_OUTPUT"
- name: Post PR comment
if: github.event_name == 'pull_request'
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
gh pr comment ${{ github.event.pull_request.number }} \
--repo ${{ github.repository }} \
--body "$(cat <<'COMMENT_EOF'
## cf-webapp Quality Report
| Check | Result |
|-------|--------|
| Type-check | ${{ steps.results.outputs.typecheck_status }} |
| Tests | ${{ steps.results.outputs.tests_status }} |
| Build | ${{ steps.results.outputs.build_status }} |
<details>
<summary>Route Sizes</summary>
```
${{ steps.results.outputs.routes }}
```
</details>
COMMENT_EOF
)"
- name: Fail if any check failed
if: steps.typecheck.outcome == 'failure' || steps.tests.outcome == 'failure' || steps.build.outcome == 'failure'
run: exit 1


@ -1,67 +0,0 @@
# Proof: PrismLight Switch (e249a1cf)
## Optimization
Replace `react-syntax-highlighter`'s full Prism build with PrismLight, registering only the 11 languages the app uses.
## Claim
**Client JS bundle: 5,990 KB → 3,146 KB (47.5%, 2,844 KB)**
## Root Cause
The default import path:
```ts
import { Prism as SyntaxHighlighter } from "react-syntax-highlighter"
```
resolves to `react-syntax-highlighter/dist/esm/prism.js`, which imports `refractor` (not `refractor/core`). The `refractor` package's main entry (`refractor/all.js`) is a barrel file that eagerly imports grammar definitions for **all 300+ languages** — each grammar is 3–10 KB of JS.
The app only uses 11 languages: python, javascript, typescript, java, json, css, html, bash, jsx, tsx, markup.
## Fix
New shared module `src/lib/syntax-highlighter.ts`:
```ts
import SyntaxHighlighter from "react-syntax-highlighter/dist/esm/prism-light"
import python from "react-syntax-highlighter/dist/esm/languages/prism/python"
// ... 10 more language imports
SyntaxHighlighter.registerLanguage("python", python)
// ... 10 more registrations
export { SyntaxHighlighter }
```
All 3 consumer components updated to `import { SyntaxHighlighter } from "@/lib/syntax-highlighter"`.
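A consumer component then renders exactly as before; a hypothetical usage sketch (component and prop choices here are illustrative, not the committed code):
```tsx
import { SyntaxHighlighter } from "@/lib/syntax-highlighter"

// "python" is one of the 11 registered grammars; unregistered languages
// render as plain, unhighlighted text under PrismLight
export function CodeBlock({ code }: { code: string }) {
  return (
    <SyntaxHighlighter language="python" wrapLongLines>
      {code}
    </SyntaxHighlighter>
  )
}
```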
## How to Verify
Run the reproducer script:
```bash
cd js/cf-webapp
bash proof/reproducers/01-prismlight-benchmark.sh
```
This script:
1. Captures the current (main) `next build` route sizes
2. Applies only the PrismLight diff
3. Rebuilds and captures the new route sizes
4. Prints a side-by-side comparison with the delta
## Expected Output
The "First Load JS shared by all" row should drop by ~2,800 KB, because the 300 unused language grammars are no longer in the shared chunk.
## Why This Is Real (Not a Measurement Artifact)
1. **`refractor/all.js` is the mechanism** — this is documented behavior, not speculation. The full Prism build imports every grammar; PrismLight imports none by default.
2. **`next build` reports First Load JS** — this is the actual bytes sent to the browser, not a synthetic metric. It includes the framework overhead + route-specific code + shared chunks.
3. **The fix is import-level only** — no runtime behavior changes. Same `SyntaxHighlighter` component API, same theme objects, same props. Only the set of bundled languages changes.
4. **11 languages are sufficient** — verified by grepping all `language=` props across the codebase. No language outside the registered 11 is used.
## References
- [react-syntax-highlighter light build docs](https://github.com/react-syntax-highlighter/react-syntax-highlighter#light-build)
- [refractor: all vs core](https://github.com/wooorm/refractor#refractorall)


@ -1,64 +0,0 @@
# Proof: Named Diff Import + Consistent Sentry Import (36bd47b4)
## Optimization
Two import fixes that reduce bundle size via tree-shaking and SDK deduplication.
## Claims
1. **`import * as Diff`** prevents tree-shaking — the entire `diff` library is bundled even though only `createPatch` is used.
2. **`@sentry/browser`** in one file causes a second Sentry SDK to be bundled alongside `@sentry/nextjs`.
## Root Cause
### diff library
```ts
// BEFORE: imports everything — bundler cannot eliminate unused exports
import * as Diff from "diff"
Diff.createPatch(...)
// AFTER: named import — bundler can tree-shake unused exports
import { createPatch } from "diff"
createPatch(...)
```
The `diff` package exports 15+ functions (`createPatch`, `diffChars`, `diffWords`, `diffLines`, `structuredPatch`, etc.). Only `createPatch` is used in this codebase. With `import *`, webpack/turbopack must include all exports because it can't prove they aren't accessed dynamically.
### Sentry SDK
```ts
// BEFORE: pulls in @sentry/browser (separate SDK)
import * as Sentry from "@sentry/browser"
// AFTER: uses the already-bundled @sentry/nextjs
import * as Sentry from "@sentry/nextjs"
```
`@sentry/nextjs` already includes all the browser-side Sentry functionality. Importing `@sentry/browser` separately causes the bundler to include both SDKs' core code.
## Files Changed
- `src/components/Editor/monaco-diff-editor-github.tsx` — diff import fix
- `src/lib/services/github-service.ts` — Sentry import fix
## How to Verify
```bash
cd js/cf-webapp
bash proof/reproducers/02-named-diff-sentry-import.sh
```
The reproducer:
1. Greps the codebase to confirm no `import * as Diff` or `@sentry/browser` imports remain
2. Confirms `createPatch` is the only function used from the `diff` package
3. Runs `next build` to verify the app compiles cleanly
## Why This Is Real
1. **Tree-shaking is a well-documented webpack/turbopack behavior**: `import *` creates a namespace object that the bundler must preserve. Named imports allow dead code elimination.
2. **The `diff` package is non-trivial** — it contains string diffing algorithms (Myers, patience), patch creation, and structured diff output. Only `createPatch` (~2KB of logic) is needed.
3. **Duplicate Sentry SDKs** are a known issue — the Sentry docs explicitly warn against mixing `@sentry/browser` and `@sentry/nextjs` in the same app.
## References
- [webpack tree-shaking docs](https://webpack.js.org/guides/tree-shaking/)
- [Sentry Next.js SDK — do not use @sentry/browser](https://docs.sentry.io/platforms/javascript/guides/nextjs/#configure)


@ -1,79 +0,0 @@
# Proof: PrismaClient Singleton (16c5887a)
## Optimization
Replace 5 separate `new PrismaClient()` calls with the shared singleton at `@/lib/prisma`.
## Claim
Eliminates 5 independent connection pools in favor of 1 shared pool with `connection_limit=10` and `pool_timeout=20`. Prevents connection pool exhaustion under concurrent requests.
## Root Cause
Each `new PrismaClient()` creates its own:
- Query engine instance (Rust binary via WASM/native)
- Connection pool to PostgreSQL (default: 5 connections per pool)
- Event listeners and logging infrastructure
With 5 independent instances across 5 files, the app could hold up to 25 connections to PostgreSQL simultaneously (5 pools × 5 default connections). PostgreSQL has a hard limit (typically 100 connections), and Azure-hosted instances often have lower limits.
### Before (5 files, each with their own instance)
```ts
// apikeys/page.tsx
const prisma = new PrismaClient()
// apikeys/tokenfuncs.ts
const prisma = new PrismaClient()
// api/traces/[trace_id]/save-modified-code/route.ts
const prisma = new PrismaClient()
// trace/[trace_id]/page.tsx
const prisma = new PrismaClient()
// lib/modified-code-utils.ts
const prisma = new PrismaClient()
```
### After (all use shared singleton)
```ts
import { prisma } from "@/lib/prisma"
```
The singleton at `src/lib/prisma.ts` (sketched below):
- Creates one PrismaClient with `connection_limit=10`, `pool_timeout=20`
- Caches in `globalThis` during development (survives Next.js HMR reloads)
- Logs slow queries (>500ms) in development
- Forwards Prisma errors to Sentry
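A minimal sketch of such a singleton module, assuming the pool settings are appended to `DATABASE_URL` as query parameters and that the Prisma datasource is named `db` (the slow-query and Sentry handlers from Proof 07 are omitted here):
```ts
import { PrismaClient } from "@prisma/client"

// Cache slot on globalThis so Next.js HMR reloads reuse one client in development
const globalForPrisma = globalThis as unknown as { prisma?: PrismaClient }

function createClient() {
  const url = `${process.env.DATABASE_URL}?connection_limit=10&pool_timeout=20`
  return new PrismaClient({ datasources: { db: { url } } })
}

export const prisma = globalForPrisma.prisma ?? createClient()

if (process.env.NODE_ENV !== "production") globalForPrisma.prisma = prisma
```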
## Files Changed
| File | Change |
|------|--------|
| `src/app/(dashboard)/apikeys/page.tsx` | `new PrismaClient()``import { prisma } from "@/lib/prisma"` |
| `src/app/(dashboard)/apikeys/tokenfuncs.ts` | Same |
| `src/app/api/traces/[trace_id]/save-modified-code/route.ts` | Same |
| `src/app/trace/[trace_id]/page.tsx` | Same |
| `src/lib/modified-code-utils.ts` | Same |
## How to Verify
```bash
cd js/cf-webapp
bash proof/reproducers/03-prisma-singleton.sh
```
The reproducer:
1. Greps the codebase for `new PrismaClient()` — should only appear in `src/lib/prisma.ts`
2. Verifies all 5 previously-affected files import from `@/lib/prisma`
3. Confirms the singleton has connection pooling configured
4. Runs `tsc --noEmit` to verify types check clean
## Why This Is Real
1. **Each `new PrismaClient()` is a new connection pool** — this is documented Prisma behavior, not speculation. The Prisma query engine is a separate process that maintains its own PostgreSQL connections.
2. **Connection pool exhaustion is a production risk** — with 5 pools × default 5 connections = 25 connections from a single Next.js process. Under serverless/edge deployments with multiple instances, this multiplies further.
3. **The singleton pattern is Prisma's official recommendation** for Next.js — [Prisma docs: Best practice for instantiating PrismaClient](https://www.prisma.io/docs/orm/more/help-and-troubleshooting/help-articles/nextjs-prisma-client-dev-practices).
4. **`globalThis` caching prevents HMR leaks** — without it, each hot reload in development creates a new instance, eventually exhausting connections.


@ -1,107 +0,0 @@
# Proof: N+1 Query Elimination in getAllOptimizationEvents (25013adb)
## Optimization
Eliminate N+1 query pattern in `getAllOptimizationEvents` — the main server action powering the review-optimizations page.
## Claim
**For a page of 10 events: 12 queries → 2–3 queries.**
## Root Cause
The function has two code paths (raw SQL for `review_quality` sort, Prisma for standard sorts). Both had N+1 patterns:
### Raw SQL path (before)
```
Query 1: SELECT events with JOIN on optimization_features
Query 2: SELECT COUNT(*)
Query 3..N+2: For each event, SELECT * FROM repositories WHERE id = ?
```
For 10 events → 12 queries (2 base + 10 per-event repository lookups).
### Prisma path (before)
```
Query 1: findMany(events) with include: { repository }
Query 2..N+1: For each event, findUnique(optimization_features) WHERE trace_id = ?
Query N+2: count(events)
```
For 10 events → 12 queries (1 events + 10 per-event feature lookups + 1 count).
In both paths, the count query also ran sequentially after the events query.
## Fix
### Raw SQL path (after)
```sql
-- Single query includes repository data via JOIN
SELECT oe.*, of.review_quality, of.review_explanation,
r.full_name as repo_full_name, r.id as repo_id
FROM optimization_events oe
LEFT JOIN optimization_features of ON oe.trace_id = of.trace_id
LEFT JOIN repositories r ON oe.repository_id = r.id
WHERE ...
```
Repository fields (`full_name`, `id`) are now included in the JOIN SELECT — no per-event lookups.
Events query and count query run in parallel via `Promise.all`.
**Result: 2 queries (events + count), run in parallel.**
### Prisma path (after)
```ts
// 1. Events + count in parallel
const [events, totalCount] = await Promise.all([
prisma.optimization_events.findMany({ ... }),
prisma.optimization_events.count({ where }),
])
// 2. Single batch query for all review features
const traceIds = events.map(e => e.trace_id)
const features = await prisma.optimization_features.findMany({
where: { trace_id: { in: traceIds } },
})
const featuresMap = new Map(features.map(f => [f.trace_id, f]))
```
N separate `findUnique` calls replaced with one `findMany` using `IN` filter, then a `Map` lookup.
**Result: 3 queries (events + count in parallel, then 1 batch features query).**
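The merge step that replaces the per-event lookups is then a plain in-memory join; a sketch with assumed field names:
```ts
// hypothetical merge: attach batched review data to each event via the Map
const rows = events.map((event) => {
  const feature = featuresMap.get(event.trace_id)
  return {
    ...event,
    review_quality: feature?.review_quality ?? null,
    review_explanation: feature?.review_explanation ?? null,
  }
})
```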
## How to Verify
```bash
cd js/cf-webapp
bash proof/reproducers/04-n-plus-one-benchmark.sh
```
The reproducer:
1. Statically analyzes the code to count query call sites (before vs after)
2. Verifies no per-event `findUnique` loops remain
3. Confirms `Promise.all` wraps events + count queries
4. Confirms batch `findMany` with `{ in: traceIds }` replaces N individual queries
## Why This Is Real
1. **The N+1 pattern is structurally visible in the diff** — the before code has `Promise.all(events.map(async event => { await prisma.X.findUnique(...) }))` which is textbook N+1. The after code uses a single `findMany` with `{ in: [...] }`.
2. **The raw SQL path had per-event repository lookups** — each `findUnique({ where: { id: event.repository_id } })` is a separate round-trip. Now it's a single JOIN.
3. **Query count is deterministic** — for a page of N events:
- Before: 2 + N (raw SQL) or 2 + N (Prisma), i.e. 12 queries for a 10-event page
- After: 2 (raw SQL) or 3 (Prisma) queries regardless of page size
4. **Promise.all parallelization** — the events and count queries are independent and now run concurrently instead of sequentially, cutting latency by the duration of the slower query.
## Correctness
The optimization preserves behavior:
- Same data shape returned (events with repository + review data)
- `LEFT JOIN` preserves events without repositories (same as `findUnique` returning null)
- `Map` lookup returns `undefined` for missing features (same as `findUnique` returning null)
- Pagination, search, and sort are unchanged


@ -1,42 +0,0 @@
# Proof: Parallelize Members Page Fetches (8a039c52)
## Optimization
Run `getCurrentUserRole` and `getOrganizationMembers` concurrently with `Promise.all` instead of sequentially.
## Claim
Saves one round-trip latency on the members page load. Two independent server actions that were awaited sequentially now run in parallel.
## Root Cause
```ts
// BEFORE: sequential — total latency = role_time + members_time
const roleResult = await getCurrentUserRole(data.userId, currentOrg?.id)
const result = await getOrganizationMembers(data.userId, currentOrg?.id)
// AFTER: parallel — total latency = max(role_time, members_time)
const [roleResult, result] = await Promise.all([
getCurrentUserRole(data.userId, currentOrg?.id),
getOrganizationMembers(data.userId, currentOrg?.id),
])
```
Both calls are independent — `getOrganizationMembers` does not depend on the result of `getCurrentUserRole`. They both need `userId` and `currentOrg?.id`, which are already available.
## File Changed
`src/app/(dashboard)/members/page.tsx` — 1 file, 5 insertions, 7 deletions.
## How to Verify
```bash
cd js/cf-webapp
bash proof/reproducers/05-parallel-members-page.sh
```
## Why This Is Real
1. **Independence is structurally verifiable**: `getOrganizationMembers` takes `(userId, orgId)` and returns members. `getCurrentUserRole` takes `(userId, orgId)` and returns a role. Neither uses the other's result.
2. **`Promise.all` is the standard JS pattern** for concurrent independent async operations.
3. **Latency reduction = min(role_time, members_time)** — the slower call no longer blocks the faster one.


@ -1,66 +0,0 @@
# Proof: Parallelize Repository Detail Page Fetches (9ccbfbe4)
## Optimization
Parallelize 6 independent server action calls on the repository detail page, and parallelize the repo lookup + auth check in `getRepositoryById`.
## Claim
**Repository detail page: 7 sequential round-trips → 2 (auth+repo parallel, then 6 parallel stats queries).**
## Root Cause
### `getRepositoryById` (server action)
```ts
// BEFORE: sequential repo fetch then auth check
const repo = await prisma.repositories.findFirst({ where: { id: repoId }, ... })
const repoIds = await (await getRepositoriesForAccountCached(payload)).repoIds
// AFTER: parallel
const [repo, { repoIds }] = await Promise.all([
prisma.repositories.findFirst({ where: { id: repoId }, ... }),
getRepositoriesForAccountCached(payload),
])
```
### Page component (6 stats queries)
```ts
// BEFORE: 6 sequential awaits
const totalAttempts = await getUserOptimizationCountByRepo(repositoryId)
const successfulAttempts = await getUserOptimizationSuccessfulCountByRepo(repositoryId)
const optimizationsOverTime = await getOptimizationsTimeSeriesData(repositoryId, false)
const successfulOptimizationsOverTime = await getOptimizationsTimeSeriesData(repositoryId, true)
const prData = await getPullRequestEventTimeSeriesData(selectedPrYear, repositoryId)
const leaderboardData = await getActiveUserLeaderboardLast30DaysForRepo(repositoryId)
// AFTER: all 6 in Promise.all
const [totalAttempts, successfulAttempts, optimizationsOverTime,
successfulOptimizationsOverTime, prData, leaderboardData] = await Promise.all([
getUserOptimizationCountByRepo(repositoryId),
getUserOptimizationSuccessfulCountByRepo(repositoryId),
getOptimizationsTimeSeriesData(repositoryId, false),
getOptimizationsTimeSeriesData(repositoryId, true),
getPullRequestEventTimeSeriesData(selectedPrYear, repositoryId),
getActiveUserLeaderboardLast30DaysForRepo(repositoryId),
])
```
## Files Changed
- `src/app/(dashboard)/repositories/[repositoryId]/action.ts``getRepositoryById` parallelization
- `src/app/(dashboard)/repositories/[repositoryId]/page.tsx` — 6 stats queries parallelized
## How to Verify
```bash
cd js/cf-webapp
bash proof/reproducers/06-parallel-repo-page.sh
```
## Why This Is Real
1. **All 6 stats queries are independent** — each takes only `repositoryId` as input and returns different data (counts, time series, PR data, leaderboard). None depends on another's result.
2. **Repo fetch and auth check are independent**: `findFirst` needs `repoId`, `getRepositoriesForAccountCached` needs `payload`. Neither needs the other's output.
3. **Latency reduction is significant** — 6 sequential DB round-trips (each ~20-100ms) become 1 parallel batch. With 50ms average per query, that's ~300ms → ~50ms.


@ -1,125 +0,0 @@
# Proof: Add Observability Stack (643ad50f)
## Optimization
Add full observability infrastructure: OpenTelemetry distributed tracing with Sentry bridge, Prisma slow-query logging, Sentry sampling tuning, and `@next/bundle-analyzer` for CI bundle tracking.
## Claim
**Production-ready observability with minimal overhead: 10% trace sampling, slow-query detection, OTel→Sentry bridge, and on-demand bundle analysis.**
## Changes
### 1. OpenTelemetry Distributed Tracing (`src/instrumentation.ts`)
```ts
// BEFORE: empty register function
export function register() {
// Sentry initialization handled by config files
}
// AFTER: full OTel SDK with Sentry bridge
export async function register() {
if (!otelEnabled) return
const { NodeSDK } = await import("@opentelemetry/sdk-node")
const { PrismaInstrumentation } = await import("@prisma/instrumentation")
const { SentrySpanProcessor, SentryPropagator, SentrySampler } =
await import("@sentry/opentelemetry")
const sdk = new NodeSDK({
sampler: new SentrySampler(Sentry.getClient()),
spanProcessors: [new SentrySpanProcessor()],
textMapPropagator: new SentryPropagator(),
instrumentations: [
getNodeAutoInstrumentations({
"@opentelemetry/instrumentation-fs": { enabled: false },
"@opentelemetry/instrumentation-dns": { enabled: false },
"@opentelemetry/instrumentation-net": { enabled: false },
}),
new PrismaInstrumentation(),
],
})
sdk.start()
Sentry.validateOpenTelemetrySetup()
}
```
Key decisions:
- **Dynamic imports** — OTel packages only loaded when tracing is active (`NODE_ENV=production` or `OTEL_ENABLED=true`)
- **Disabled noisy instrumentations** — fs, dns, net create excessive spans with low value
- **PrismaInstrumentation** — adds db.query spans with query text to every Prisma call
- **SentrySpanProcessor/Propagator** — bridges OTel spans into Sentry traces for unified view
### 2. Sentry Sampling Tuning
```ts
// sentry.server.config.ts
tracesSampleRate: isProduction ? 0.1 : 1, // was: 1 (100%)
skipOpenTelemetrySetup: true, // let our OTel handle it
// instrumentation-client.ts
tracesSampleRate: isProduction ? 0.1 : 1,
integrations: [
Sentry.browserTracingIntegration({ enableLongAnimationFrame: true }),
// ...
]
```
- **10% sampling** in production reduces Sentry event volume 90% while retaining statistical significance
- **skipOpenTelemetrySetup** prevents Sentry from creating a second OTel SDK (avoids duplicate traces)
- **Long animation frame detection** captures jank events for Web Vitals correlation
### 3. Prisma Slow Query Logging (`src/lib/prisma.ts`)
```ts
const SLOW_QUERY_THRESHOLD_MS = 500
// Development: log slow queries to console
prisma.$on("query", (e) => {
if (e.duration > SLOW_QUERY_THRESHOLD_MS)
console.warn(`[Prisma] Slow query (${e.duration}ms): ${e.query}`)
})
// All environments: forward errors to Sentry
prisma.$on("error", (e) => {
Sentry.captureException(new Error(`Prisma error: ${e.message}`))
})
```
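For the `$on("query")` handler to receive events, the client has to be constructed with event-emitting log levels; a minimal sketch of the constructor option this assumes (presumably set in the Proof 03 singleton):
```ts
import { PrismaClient } from "@prisma/client"

// "event" emit delivers query/error payloads to $on() listeners
// instead of printing them to stdout
const prisma = new PrismaClient({
  log: [
    { emit: "event", level: "query" },
    { emit: "event", level: "error" },
  ],
})
```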
### 4. Bundle Analyzer (`next.config.mjs`)
```ts
import bundleAnalyzer from "@next/bundle-analyzer"
const withBundleAnalyzer = bundleAnalyzer({ enabled: process.env.ANALYZE === "true" })
export default withBundleAnalyzer(withSentryConfig(nextConfig, { ... }))
```
Run `ANALYZE=true npm run build` to generate interactive bundle treemap.
## Files Changed
| File | Change |
|------|--------|
| `src/instrumentation.ts` | OTel SDK + Sentry bridge |
| `sentry.server.config.ts` | 10% sampling + skipOpenTelemetrySetup |
| `src/instrumentation-client.ts` | 10% sampling + browserTracingIntegration |
| `src/lib/prisma.ts` | Slow query logging + Sentry error forwarding |
| `next.config.mjs` | @next/bundle-analyzer wrapper |
| `package.json` | New deps: @opentelemetry/*, @prisma/instrumentation, @sentry/opentelemetry, @next/bundle-analyzer |
| `package-lock.json` | Lockfile update |
## How to Verify
```bash
cd js/cf-webapp
bash proof/reproducers/07-observability-stack.sh
```
## Why This Is Real
1. **OTel is the industry standard** — unified tracing across Node.js, Prisma, and HTTP. The Sentry bridge means no separate backend needed.
2. **100% → 10% sampling** reduces Sentry costs by ~90% with no loss of visibility (Sentry aggregates from samples).
3. **Slow query logging** catches N+1s and unindexed queries during development before they hit production.
4. **Bundle analyzer** enables data-driven decisions about code splitting (used to validate PrismLight, framer-motion, and Sentry Replay changes in this PR series).
5. **Dynamic imports** mean zero runtime overhead when tracing is disabled — the OTel SDK isn't even loaded.


@ -1,72 +0,0 @@
# Proof: Server Action Timing + PostHog Analytics (9155cd5b)
## What
1. `withTiming()` — generic wrapper for server actions that measures execution time, creates Sentry spans, and warns on slow actions (>1s)
2. Centralized `captureEvent()` helper for PostHog tracking
3. 5 new PostHog tracking events for key user journeys
4. 4 server actions instrumented with `withTiming()`
## withTiming() Implementation
```ts
export function withTiming<TArgs extends unknown[], TReturn>(
actionName: string,
fn: (...args: TArgs) => Promise<TReturn>,
): (...args: TArgs) => Promise<TReturn> {
return async (...args: TArgs): Promise<TReturn> => {
const start = performance.now()
const result = await Sentry.startSpan(
{ name: actionName, op: "server.action" },
async (span) => {
const res = await fn(...args)
const durationMs = performance.now() - start
span.setAttribute("server_action.duration_ms", durationMs)
if (durationMs > SLOW_ACTION_THRESHOLD_MS) {
console.warn(`[ServerAction] Slow action: ${actionName} took ${durationMs.toFixed(0)}ms`)
span.setAttribute("server_action.slow", true)
}
return res
},
)
return result
}
}
```
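Wrapping an action is then a one-line change at the export site; a hypothetical example with a trivial body (the real action bodies live in the files listed under Instrumented Server Actions):
```ts
import { withTiming } from "@/lib/server-action-timing"

// SLOW_ACTION_THRESHOLD_MS is assumed to be 1000, per the ">1s" description above
export const getOrganizationMembers = withTiming(
  "getOrganizationMembers",
  async (userId: string, organizationId?: string) => {
    // ...original query logic unchanged; placeholder result for illustration
    return { userId, organizationId, members: [] as string[] }
  },
)
```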
## New PostHog Events
| Event | Trigger | Properties |
|-------|---------|------------|
| `optimization_reviewed` | Viewing optimization detail | traceId, functionName, repositoryName, status |
| `repository_connected` | Viewing repository detail | repositoryId, repositoryName |
| `api_key_created` | Generating API key | keyName, organizationId |
| `member_invited` | Adding org/repo member | invitedUsername, role, scope, targetId |
| `billing_page_viewed` | Opening billing page | username |
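The `captureEvent()` helper itself is not reproduced in this proof; a minimal sketch of a centralized wrapper, assuming the `PostHogClient()` singleton from Proof 14:
```ts
import PostHogClient from "@/lib/posthog"

// hypothetical helper: no-ops outside production, where PostHogClient() returns undefined
export async function captureEvent(
  distinctId: string,
  event: string,
  properties?: Record<string, unknown>,
) {
  const posthog = PostHogClient()
  if (!posthog) return
  posthog.capture({ distinctId, event, properties })
  await posthog.flush()
}
```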
## Instrumented Server Actions
- `getOrganizationMembers` (members/action.ts)
- `getRepositoryById` (repositories/[repositoryId]/action.ts)
- `getRepositoriesWithStagingEvents` (review-optimizations/action.ts)
- `getAllOptimizationEvents` (review-optimizations/action.ts)
## Files Changed
| File | Change |
|------|--------|
| `src/lib/server-action-timing.ts` | New: `withTiming()` wrapper |
| `src/lib/analytics/tracking.ts` | New: `captureEvent()` + 5 tracking functions |
| `src/app/(dashboard)/members/action.ts` | Wrap with `withTiming`, add `trackMemberInvited` |
| `src/app/(dashboard)/repositories/[repositoryId]/action.ts` | Wrap with `withTiming`, add `trackRepositoryConnected` |
| `src/app/(dashboard)/review-optimizations/action.ts` | Wrap with `withTiming` |
| `src/app/(dashboard)/review-optimizations/[traceId]/action.ts` | Add `trackOptimizationReviewed` |
| `src/app/(dashboard)/billing/page.tsx` | Add `trackBillingPageViewed` |
| `src/app/(dashboard)/apikeys/tokenfuncs.ts` | Add `trackApiKeyCreated` |
## How to Verify
```bash
cd js/cf-webapp
bash proof/reproducers/08-server-action-timing.sh
```


@ -1,26 +0,0 @@
# Proof: Test Coverage for Server Actions (de82d7b4)
## What
Add 39 unit tests covering server action timing, members page, repository page, and review-optimizations page. Includes Vitest configuration with path aliases and global mock setup.
## Test Files
| File | Tests | Coverage |
|------|-------|----------|
| `src/lib/__tests__/server-action-timing.test.ts` | 10 | withTiming wrapper: timing, slow detection, error handling, Sentry spans |
| `src/app/(dashboard)/members/__tests__/action.test.ts` | 6 | getOrganizationMembers: access control, member mapping, error handling |
| `src/app/(dashboard)/repositories/[repositoryId]/__tests__/action.test.ts` | 8 | getRepositoryById: parallel fetch, auth, is_active, analytics tracking |
| `src/app/(dashboard)/review-optimizations/__tests__/action.test.ts` | 15 | getAllOptimizationEvents: both code paths, N+1 batch fix, raw SQL JOIN, pagination, search, filter |
## Infrastructure
- **vitest.config.ts** — adds `@/` path alias matching Next.js `tsconfig.json` (sketched below)
- **src/test/setup.ts** — global mocks for Prisma, Sentry, PostHog analytics
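A minimal sketch of that wiring, with assumed option values rather than the exact committed config:
```ts
// vitest.config.ts (sketch)
import { fileURLToPath } from "node:url"
import { defineConfig } from "vitest/config"

export default defineConfig({
  resolve: {
    // mirror the `@/*` -> `src/*` alias from tsconfig.json
    alias: { "@": fileURLToPath(new URL("./src", import.meta.url)) },
  },
  test: {
    globals: true,
    environment: "node",
    // global mocks for Prisma, Sentry, and PostHog live in the setup file
    setupFiles: ["./src/test/setup.ts"],
  },
})
```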
## How to Verify
```bash
cd js/cf-webapp
bash proof/reproducers/09-test-coverage.sh
```


@ -1,44 +0,0 @@
# Proof: Lazy-Load Sentry Replay Integration (9634d37b)
## What
Move `Sentry.replayIntegration()` from eager initialization to lazy-load via `Sentry.lazyLoadIntegration()`, removing ~300KB per copy (two copies were shipped) from the critical bundle path.
## Before/After
```ts
// BEFORE: replay loaded eagerly at init (~300KB in initial bundle)
Sentry.init({
integrations: [
Sentry.replayIntegration({ maskAllText: true, blockAllMedia: true }),
],
})
// AFTER: replay lazy-loaded after page is interactive
Sentry.init({
integrations: [],
})
Sentry.lazyLoadIntegration("replayIntegration").then((replayIntegration) => {
Sentry.addIntegration(
replayIntegration({ maskAllText: true, blockAllMedia: true }),
)
})
```
## Why This Is Real
1. `@sentry-internal/replay` is ~300KB minified — one of the largest client-side dependencies
2. Two copies were being shipped (one per Sentry.init call pattern)
3. `lazyLoadIntegration` is Sentry's official API for deferred loading
4. Replay still activates — just after the page is interactive instead of blocking initial render
## Files Changed
- `src/instrumentation-client.ts` — move replayIntegration to lazy-load
## How to Verify
```bash
cd js/cf-webapp
bash proof/reproducers/10-lazy-sentry-replay.sh
```


@ -1,16 +0,0 @@
# Proof: @sentry/nextjs Consistency (e0e83ac9)
## What
Replace `import * as Sentry from "@sentry/node"` with `import * as Sentry from "@sentry/nextjs"` in the repository action file. `@sentry/nextjs` already re-exports all server-side APIs, so importing `@sentry/node` separately pulls in a duplicate SDK.
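The change is a one-line import swap, in the same pattern as Proof 02:
```ts
// BEFORE (removed): pulls in @sentry/node as a second server-side SDK
// import * as Sentry from "@sentry/node"

// AFTER: reuse the SDK @sentry/nextjs already provides
import * as Sentry from "@sentry/nextjs"
```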
## Files Changed
- `src/app/(dashboard)/repositories/[repositoryId]/action.ts` — 1 line change
## How to Verify
```bash
cd js/cf-webapp
bash proof/reproducers/11-sentry-nextjs-consistency.sh
```


@ -1,44 +0,0 @@
# Proof 12: Migrate framer-motion to motion/react
## What Changed
- Replaced `framer-motion` dependency with `motion` (the official successor package)
- Updated import in `src/app/(auth)/onboarding/page.tsx` from `"framer-motion"` to `"motion/react"`
## Why
The `motion` package is the official tree-shakeable rewrite of `framer-motion`. It exports only what you import, eliminating dead code from the bundle. The `motion/react` entry point is the direct replacement for `framer-motion` in React projects.
Key benefits:
- **Smaller bundle**: `motion` uses ESM-first exports with proper `sideEffects: false`, enabling bundlers to tree-shake unused animation features
- **Official migration path**: The framer-motion team recommends migrating to `motion` — same API, better packaging
- **Active maintenance**: `motion` is where new features land; `framer-motion` is in maintenance mode
## Evidence
### Dependency change
```diff
- "framer-motion": "^12.12.1",
+ "motion": "^12.38.0",
```
### Import change
```diff
-import { AnimatePresence, motion } from "framer-motion"
+import { AnimatePresence, motion } from "motion/react"
```
### Bundle size (from bundlephobia / package analysis)
- `framer-motion@12.12.1`: ~150KB minified (all features bundled)
- `motion@12.38.0`: ESM-only with tree-shaking; only `AnimatePresence` + `motion` component imported → estimated 30-50KB after tree-shaking
## How to Verify
Run the reproducer script:
```bash
cd js/cf-webapp
bash proof/reproducers/12-framer-motion-migration.sh
```
The script verifies:
1. `motion` is in dependencies (not `framer-motion`)
2. No remaining `framer-motion` imports in source code
3. `motion/react` is used as the import source
4. Only `AnimatePresence` and `motion` are imported (minimal surface)


@ -1,39 +0,0 @@
# Proof 13: Dynamic-import LineProfilerView
## What Changed
- Replaced static import of `LineProfilerView` with `next/dynamic` in the profiler page
- Added `ssr: false` since the component uses browser-only APIs
- Added `<Skeleton>` loading fallback
## Why
`LineProfilerView` depends on `prism-react-renderer`, which pulls in Prism.js grammar definitions (~100KB+). By using `next/dynamic` with `ssr: false`:
- The profiler page's initial JS bundle shrinks — Prism grammars are loaded on-demand
- The component only loads when the user navigates to the profiler tab, not on initial page load
- SSR is skipped for a component that relies on client-side rendering anyway
## Evidence
### Before (static import)
```tsx
import { LineProfilerView } from "@/components/LineProfiler"
```
### After (dynamic import)
```tsx
import dynamic from "next/dynamic"
import { Skeleton } from "@/components/ui/skeleton"
const LineProfilerView = dynamic(
() => import("@/components/LineProfiler").then(mod => mod.LineProfilerView),
{
ssr: false,
loading: () => <Skeleton className="h-full w-full" />,
},
)
```
## How to Verify
```bash
cd js/cf-webapp
bash proof/reproducers/13-dynamic-import-line-profiler.sh
```


@ -1,41 +0,0 @@
# Proof 14: PostHog Singleton + flush() over shutdown()
## What Changed
1. **Singleton pattern** in `src/lib/posthog.ts`: `PostHogClient()` now reuses a single `PostHog` instance via module-level `let client` instead of creating a new one per call
2. **shutdown() → flush()** at 6 call sites across 5 files: replaced `posthog?.shutdown()` with `posthog?.flush()`
## Why
### Singleton
Before this change, every server component/action that called `PostHogClient()` created a brand-new `PostHog` instance (new HTTP connection, new queue). In a single page load, this could happen 3-4 times. The singleton reuses one connection and one event queue.
### flush() over shutdown()
`shutdown()` destroys the client and its internal queue. With a singleton, that kills the shared instance for subsequent callers in the same request or later requests. `flush()` sends buffered events without destroying the client, which is the correct behavior for a shared singleton.
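At a call site the change looks roughly like this (illustrative event and identifiers; the six real call sites are listed below):
```ts
import PostHogClient from "@/lib/posthog"

async function onApiKeyCreated(userId: string) {
  const posthog = PostHogClient()
  posthog?.capture({ distinctId: userId, event: "api_key_created" })
  // BEFORE: await posthog?.shutdown() -- tears down the shared client and its queue
  // AFTER: drain buffered events but keep the singleton usable
  await posthog?.flush()
}
```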
## Evidence
### posthog.ts singleton
```typescript
let client: PostHog | undefined
export default function PostHogClient(): PostHog | undefined {
if (process.env.NODE_ENV !== "production") return undefined
if (!client) {
client = new PostHog("phc_...", { host: "https://app.posthog.com", flushAt: 1, flushInterval: 0 })
}
return client
}
```
### shutdown → flush (6 call sites)
- `SubmitFirstOnboardingPage.tsx` (2 locations)
- `SubmitSecondOnboardingPage.tsx`
- `apikeys/page.tsx`
- `getting-started/page.tsx`
- `analytics/tracking.ts`
## How to Verify
```bash
cd js/cf-webapp
bash proof/reproducers/14-posthog-singleton.sh
```


@ -1,42 +0,0 @@
# Proof 15: Parallelize getOptimizationEventById Queries
## What Changed
In `src/app/(dashboard)/review-optimizations/[traceId]/action.ts`, the `getOptimizationEventById` function previously ran two Prisma queries sequentially:
1. Fetch the optimization event (with repository include)
2. Then, if found, fetch `optimization_features` for review quality data
Now both queries run in parallel via `Promise.all`, since the features query only needs `trace_id` (available upfront), not the event result.
## Why
The two queries are independent — `optimization_features` is looked up by `trace_id`, which is a parameter, not derived from the event result. Running them sequentially wastes wall-clock time equal to the slower query's latency.
**Before**: Total time = event query + features query (sequential)
**After**: Total time = max(event query, features query) (parallel)
## Evidence
### Before (sequential)
```typescript
const event = await prisma.optimization_events.findFirst({ where, include: { repository: true } })
if (event) {
const features = await prisma.optimization_features.findUnique({ where: { trace_id: event.trace_id }, ... })
return { ...event, review_quality: features?.review_quality || null, ... }
}
```
### After (parallel)
```typescript
const [event, features] = await Promise.all([
prisma.optimization_events.findFirst({ where, include: { repository: true } }),
prisma.optimization_features.findUnique({ where: { trace_id }, select: { review_quality: true, review_explanation: true } }),
])
return { ...event, review_quality: features?.review_quality || null, ... }
```
Key insight: the features query uses `trace_id` (the function parameter), not `event.trace_id` (the result). This makes the queries truly independent.
## How to Verify
```bash
cd js/cf-webapp
bash proof/reproducers/15-parallel-optimization-event.sh
```


@ -1,46 +0,0 @@
# Proof 16: Deduplicate Trace Page Prisma Query with React cache()
## What Changed
In `src/app/trace/[trace_id]/page.tsx`, the same `optimization_features.findUnique` query was executed twice per request:
1. In `generateMetadata()` — to build the page title
2. In `TraceDetailsPage()` — to render the diff viewer
Now a single `getOptimizationFeature()` function wrapped in `React.cache()` is called from both locations. React's request-scoped cache ensures the database is hit only once.
## Why
Next.js App Router calls `generateMetadata()` and the page component in the same server request. Without deduplication, the same Prisma query runs twice. `React.cache()` memoizes the result for the duration of a single React server render, eliminating the redundant database round-trip.
**Before**: 2 identical `findUnique` queries per page load
**After**: 1 query, result reused via `cache()`
## Evidence
### Cached function
```typescript
import { cache } from "react"
const getOptimizationFeature = cache(async (trace_id: string) => {
return prisma.optimization_features.findUnique({
where: { trace_id },
select: { experiment_metadata: true, metadata: true, organization: true, repository: true, review_quality: true, review_explanation: true },
})
})
```
### Both call sites use the cached function
```typescript
// In generateMetadata:
const optimizationFeature = await getOptimizationFeature(trace_id)
// In TraceDetailsPage:
optimizationFeature = await getOptimizationFeature(trace_id)
```
### Type derivation
The inline type annotation was replaced with `Awaited<ReturnType<typeof getOptimizationFeature>>`, eliminating the manually duplicated type.
## How to Verify
```bash
cd js/cf-webapp
bash proof/reproducers/16-react-cache-dedup.sh
```


@ -1,37 +0,0 @@
# Proof 17: Parallelize LLM Call Detail + Errors Queries
## What Changed
In `src/app/observability/llm-call/[id]/page.tsx`, two sequential Prisma queries are now parallel:
1. `llm_calls.findUnique` — fetches the LLM call record
2. `optimization_errors.findMany` — fetches related errors
Both use `params.id` and are independent, so they run in `Promise.all`.
## Why
The queries have no data dependency — both use the route param `id` directly. Running them sequentially means the page waits for the first query to complete before starting the second.
**Before**: Total time = llmCall query + errors query
**After**: Total time = max(llmCall query, errors query)
## Evidence
### Before (sequential)
```typescript
const llmCall = await prisma.llm_calls.findUnique({ where: { id: params.id } })
// ...
const relatedErrors = await prisma.optimization_errors.findMany({ where: { llm_call_id: params.id }, ... })
```
### After (parallel)
```typescript
const [llmCall, relatedErrors] = await Promise.all([
prisma.llm_calls.findUnique({ where: { id: params.id } }),
prisma.optimization_errors.findMany({ where: { llm_call_id: params.id }, orderBy: { created_at: "desc" } }),
])
```
## How to Verify
```bash
cd js/cf-webapp
bash proof/reproducers/17-parallel-llm-call-detail.sh
```


@ -1,33 +0,0 @@
# Proof 18: Remove Unused Dependencies + Replace react-papaparse
## What Changed
1. **Removed `@azure/msal-node`** — not imported anywhere in cf-webapp
2. **Removed `github-markdown-css`** — not imported anywhere in cf-webapp
3. **Replaced `react-papaparse` with `papaparse`** — the React wrapper adds overhead; only the core parser is needed
4. **Added `@types/papaparse`** — TypeScript types for the new dependency
## Why
- **`@azure/msal-node`**: Large authentication library (~500KB installed) that was never used in cf-webapp (Auth0 is used instead)
- **`github-markdown-css`**: CSS stylesheet not imported in any component
- **`react-papaparse``papaparse`**: `react-papaparse` wraps `papaparse` with React-specific hooks/components. cf-webapp only uses the core parsing function, so the wrapper is unnecessary weight
## Evidence
### Dependencies removed
```diff
- "@azure/msal-node": "^3.7.3",
- "github-markdown-css": "^5.4.0",
- "react-papaparse": "^4.4.0",
```
### Dependencies added
```diff
+ "papaparse": "^5.5.3",
+ "@types/papaparse": "^5.5.2", (devDependencies)
```
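With the React wrapper gone, call sites use the core parser directly; a hypothetical example (the real call sites and options are not shown in this proof):
```ts
import Papa from "papaparse"

const csvText = "function,speedup\nsort_rows,2.4"

// header: true turns each row into an object keyed by the header row
const { data, errors } = Papa.parse<Record<string, string>>(csvText, {
  header: true,
  skipEmptyLines: true,
})
if (errors.length > 0) console.warn("CSV parse errors:", errors)
```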
## How to Verify
```bash
cd js/cf-webapp
bash proof/reproducers/18-remove-unused-deps.sh
```


@ -1,155 +0,0 @@
#!/usr/bin/env bash
# Reproducer: PrismLight bundle size benchmark
#
# Compares total client JS between main (full Prism) and the PrismLight switch.
# Runs two real `next build` passes and extracts First Load JS from the output.
#
# Usage:
# cd js/cf-webapp
# bash proof/reproducers/01-prismlight-benchmark.sh
#
# Prerequisites:
# - Node.js 20+
# - npm dependencies installed (npm install)
# - Prisma client generated (npx prisma generate)
# - Run from the js/cf-webapp directory
#
# Duration: ~2-4 minutes (two full builds)
set -euo pipefail
REPO_ROOT="$(git rev-parse --show-toplevel)"
WEBAPP_DIR="$REPO_ROOT/js/cf-webapp"
COMMIT_SHA="e249a1cf" # PrismLight switch commit
cd "$WEBAPP_DIR"
# ── Helpers ──────────────────────────────────────────────────────────────────
extract_first_load_shared() {
# Extracts the "First Load JS shared by all" line from next build output
# Returns the size in KB
grep -oP 'First Load JS shared by all\s+\K[\d.]+\s*[kM]B' "$1" || echo "N/A"
}
extract_total_first_load() {
# Sum all "First Load JS" values from the route table
# next build prints: Route | Size | First Load JS
# We want the maximum First Load JS (shared chunk is common to all routes)
grep -oP '[\d.]+\s*kB' "$1" | tail -1 || echo "N/A"
}
extract_route_table() {
# Extract the route table section from next build output
sed -n '/Route (app)/,/First Load JS shared by all/p' "$1"
}
run_build() {
local label="$1"
local output_file="$2"
echo "[$label] Running next build..."
npx next build 2>&1 | tee "$output_file"
echo "[$label] Build complete."
}
# ── Check prerequisites ─────────────────────────────────────────────────────
if [ ! -f package.json ]; then
echo "ERROR: Run this script from the js/cf-webapp directory"
exit 1
fi
if [ ! -d node_modules ]; then
echo "Installing dependencies..."
npm install --loglevel error
fi
if [ ! -d node_modules/.prisma ]; then
echo "Generating Prisma client..."
npx prisma generate
fi
# ── Step 1: Baseline build (current state) ───────────────────────────────────
BASELINE_OUTPUT=$(mktemp /tmp/prismlight-baseline-XXXXXX.txt)
OPTIMIZED_OUTPUT=$(mktemp /tmp/prismlight-optimized-XXXXXX.txt)
# Save current state
CURRENT_BRANCH=$(git branch --show-current)
STASH_CREATED=false
if [ -n "$(git status --porcelain)" ]; then
git stash push -m "prismlight-benchmark-stash"
STASH_CREATED=true
fi
# Build on main (baseline — full Prism)
echo ""
echo "================================================================"
echo " BASELINE: Building on main (full Prism build)"
echo "================================================================"
echo ""
git checkout main --quiet
run_build "BASELINE" "$BASELINE_OUTPUT"
# ── Step 2: Apply PrismLight commit and rebuild ──────────────────────────────
echo ""
echo "================================================================"
echo " OPTIMIZED: Building with PrismLight switch ($COMMIT_SHA)"
echo "================================================================"
echo ""
git cherry-pick "$COMMIT_SHA" --no-commit --quiet 2>/dev/null || {
echo "Cherry-pick failed — applying diff manually"
git diff main.."$COMMIT_SHA" -- \
src/lib/syntax-highlighter.ts \
src/app/observability/components/code-highlighter.tsx \
src/components/observability/parsed-response-view.tsx \
src/components/trace/monaco-diff-viewer.tsx \
| git apply --quiet 2>/dev/null || {
echo "ERROR: Could not apply diff. Ensure commit $COMMIT_SHA exists."
git checkout "$CURRENT_BRANCH" --quiet
if $STASH_CREATED; then git stash pop --quiet; fi
exit 1
}
}
run_build "OPTIMIZED" "$OPTIMIZED_OUTPUT"
# ── Step 3: Compare ─────────────────────────────────────────────────────────
echo ""
echo "================================================================"
echo " RESULTS"
echo "================================================================"
echo ""
echo "── Baseline (main, full Prism) route table ──"
extract_route_table "$BASELINE_OUTPUT"
echo ""
echo "── Optimized (PrismLight, 11 languages) route table ──"
extract_route_table "$OPTIMIZED_OUTPUT"
echo ""
BASELINE_SHARED=$(extract_first_load_shared "$BASELINE_OUTPUT")
OPTIMIZED_SHARED=$(extract_first_load_shared "$OPTIMIZED_OUTPUT")
echo "── Summary ──"
echo "First Load JS shared by all (baseline): $BASELINE_SHARED"
echo "First Load JS shared by all (optimized): $OPTIMIZED_SHARED"
echo ""
echo "Baseline output: $BASELINE_OUTPUT"
echo "Optimized output: $OPTIMIZED_OUTPUT"
# ── Cleanup ──────────────────────────────────────────────────────────────────
git checkout -- . 2>/dev/null
git checkout "$CURRENT_BRANCH" --quiet
if $STASH_CREATED; then git stash pop --quiet; fi
echo ""
echo "Done. Compare the two route tables above to verify the bundle size reduction."


@ -1,169 +0,0 @@
#!/usr/bin/env bash
# Reproducer: Named diff import + consistent Sentry import
#
# Verifies:
# 1. No `import * as Diff` remains in the codebase (tree-shaking enabled)
# 2. No `@sentry/browser` imports remain (SDK deduplication)
# 3. Only `createPatch` is used from the `diff` package
# 4. `next build` compiles cleanly
#
# Usage:
# cd js/cf-webapp
# bash proof/reproducers/02-named-diff-sentry-import.sh
#
# To measure bundle impact, run with MEASURE=1:
# MEASURE=1 bash proof/reproducers/02-named-diff-sentry-import.sh
#
# This runs next build on main (before) and with the fix (after), comparing
# the chunk sizes containing the diff library.
set -euo pipefail
REPO_ROOT="$(git rev-parse --show-toplevel)"
WEBAPP_DIR="$REPO_ROOT/js/cf-webapp"
cd "$WEBAPP_DIR"
PASS=0
FAIL=0
check() {
local label="$1"
local result="$2"
if [ "$result" = "pass" ]; then
echo " PASS: $label"
PASS=$((PASS + 1))
else
echo " FAIL: $label"
FAIL=$((FAIL + 1))
fi
}
echo "================================================================"
echo " Reproducer: Named Diff Import + Sentry Import Fix"
echo "================================================================"
echo ""
# ── Check 1: No wildcard diff imports ────────────────────────────────────────
echo "── Check 1: No 'import * as Diff' in codebase ──"
WILDCARD_DIFF=$(grep -rn 'import \* as Diff' src/ --include='*.ts' --include='*.tsx' 2>/dev/null || true)
if [ -z "$WILDCARD_DIFF" ]; then
check "No wildcard diff imports found" "pass"
else
echo " Found wildcard diff imports:"
echo "$WILDCARD_DIFF"
check "No wildcard diff imports found" "fail"
fi
echo ""
# ── Check 2: No @sentry/browser imports ─────────────────────────────────────
echo "── Check 2: No '@sentry/browser' imports in codebase ──"
SENTRY_BROWSER=$(grep -rn '@sentry/browser' src/ --include='*.ts' --include='*.tsx' 2>/dev/null || true)
if [ -z "$SENTRY_BROWSER" ]; then
check "No @sentry/browser imports found" "pass"
else
echo " Found @sentry/browser imports:"
echo "$SENTRY_BROWSER"
check "No @sentry/browser imports found" "fail"
fi
echo ""
# ── Check 3: Only createPatch used from diff ─────────────────────────────────
echo "── Check 3: Only 'createPatch' used from 'diff' package ──"
DIFF_USAGES=$(grep -rn "from ['\"]diff['\"]" src/ --include='*.ts' --include='*.tsx' 2>/dev/null || true)
echo " Imports found:"
echo "$DIFF_USAGES"
# Check that all imports are named and only import createPatch
BAD_IMPORTS=$(echo "$DIFF_USAGES" | grep -v 'createPatch' | grep -v '^$' || true)
if [ -z "$BAD_IMPORTS" ]; then
check "All diff imports use named 'createPatch'" "pass"
else
echo " Non-createPatch imports:"
echo "$BAD_IMPORTS"
check "All diff imports use named 'createPatch'" "fail"
fi
# Check no other diff functions are called
OTHER_DIFF_CALLS=$(grep -rn 'Diff\.\(diffChars\|diffWords\|diffLines\|diffSentences\|diffCss\|diffJson\|diffArrays\|structuredPatch\|applyPatch\|parsePatch\|convertChangesToXML\|convertChangesToDMP\)' src/ --include='*.ts' --include='*.tsx' 2>/dev/null || true)
if [ -z "$OTHER_DIFF_CALLS" ]; then
check "No other diff functions called besides createPatch" "pass"
else
echo " Other diff function calls found:"
echo "$OTHER_DIFF_CALLS"
check "No other diff functions called besides createPatch" "fail"
fi
echo ""
# ── Check 4: @sentry/nextjs used consistently ───────────────────────────────
echo "── Check 4: Sentry imports use @sentry/nextjs consistently ──"
SENTRY_IMPORTS=$(grep -rn "from ['\"]@sentry/" src/ --include='*.ts' --include='*.tsx' 2>/dev/null || true)
echo " All Sentry imports:"
echo "$SENTRY_IMPORTS"
NON_NEXTJS=$(echo "$SENTRY_IMPORTS" | grep -v '@sentry/nextjs' | grep -v '@sentry/opentelemetry' | grep -v '^$' || true)
if [ -z "$NON_NEXTJS" ]; then
check "All Sentry imports use @sentry/nextjs (or @sentry/opentelemetry)" "pass"
else
echo " Non-nextjs Sentry imports:"
echo "$NON_NEXTJS"
check "All Sentry imports use @sentry/nextjs" "fail"
fi
echo ""
# ── Summary ──────────────────────────────────────────────────────────────────
echo "================================================================"
echo " Results: $PASS passed, $FAIL failed"
echo "================================================================"
if [ "$FAIL" -gt 0 ]; then
exit 1
fi
# ── Optional: Bundle size measurement ────────────────────────────────────────
if [ "${MEASURE:-0}" = "1" ]; then
echo ""
echo "================================================================"
echo " Bundle Size Measurement (MEASURE=1)"
echo "================================================================"
echo ""
COMMIT_SHA="36bd47b4"
BASELINE_OUTPUT=$(mktemp /tmp/diff-import-baseline-XXXXXX.txt)
OPTIMIZED_OUTPUT=$(mktemp /tmp/diff-import-optimized-XXXXXX.txt)
CURRENT_BRANCH=$(git branch --show-current)
STASH_CREATED=false
if [ -n "$(git status --porcelain)" ]; then
git stash push -m "diff-import-benchmark"
STASH_CREATED=true
fi
echo "[BASELINE] Building on main..."
git checkout main --quiet
npx next build 2>&1 | tee "$BASELINE_OUTPUT"
echo ""
echo "[OPTIMIZED] Building with named imports..."
git cherry-pick "$COMMIT_SHA" --no-commit --quiet 2>/dev/null || \
git diff main.."$COMMIT_SHA" | git apply --quiet
npx next build 2>&1 | tee "$OPTIMIZED_OUTPUT"
echo ""
echo "── Baseline route table ──"
sed -n '/Route (app)/,/First Load JS shared by all/p' "$BASELINE_OUTPUT"
echo ""
echo "── Optimized route table ──"
sed -n '/Route (app)/,/First Load JS shared by all/p' "$OPTIMIZED_OUTPUT"
echo ""
echo "Baseline output: $BASELINE_OUTPUT"
echo "Optimized output: $OPTIMIZED_OUTPUT"
git checkout -- . 2>/dev/null
git checkout "$CURRENT_BRANCH" --quiet
if $STASH_CREATED; then git stash pop --quiet; fi
fi


@ -1,153 +0,0 @@
#!/usr/bin/env bash
# Reproducer: PrismaClient singleton verification
#
# Verifies:
# 1. No `new PrismaClient()` outside src/lib/prisma.ts
# 2. All previously-affected files import from @/lib/prisma
# 3. The singleton configures connection pooling
# 4. TypeScript compiles cleanly
#
# Usage:
# cd js/cf-webapp
# bash proof/reproducers/03-prisma-singleton.sh
set -euo pipefail
REPO_ROOT="$(git rev-parse --show-toplevel)"
WEBAPP_DIR="$REPO_ROOT/js/cf-webapp"
cd "$WEBAPP_DIR"
PASS=0
FAIL=0
check() {
local label="$1"
local result="$2"
if [ "$result" = "pass" ]; then
echo " PASS: $label"
PASS=$((PASS + 1))
else
echo " FAIL: $label"
FAIL=$((FAIL + 1))
fi
}
echo "================================================================"
echo " Reproducer: PrismaClient Singleton"
echo "================================================================"
echo ""
# ── Check 1: No new PrismaClient() outside singleton ────────────────────────
echo "── Check 1: No 'new PrismaClient()' outside src/lib/prisma.ts ──"
# Find all new PrismaClient() calls, excluding the singleton file and node_modules
EXTRA_INSTANCES=$(grep -rn 'new PrismaClient' src/ --include='*.ts' --include='*.tsx' \
| grep -v 'src/lib/prisma.ts' \
| grep -v 'node_modules' \
| grep -v '__tests__' \
| grep -v '.test.' \
|| true)
if [ -z "$EXTRA_INSTANCES" ]; then
check "No PrismaClient instantiation outside singleton" "pass"
else
echo " Found extra PrismaClient instances:"
echo "$EXTRA_INSTANCES"
check "No PrismaClient instantiation outside singleton" "fail"
fi
echo ""
# ── Check 2: Previously-affected files import from @/lib/prisma ──────────────
echo "── Check 2: Affected files import from @/lib/prisma ──"
AFFECTED_FILES=(
"src/app/(dashboard)/apikeys/page.tsx"
"src/app/(dashboard)/apikeys/tokenfuncs.ts"
"src/app/api/traces/[trace_id]/save-modified-code/route.ts"
"src/app/trace/[trace_id]/page.tsx"
"src/lib/modified-code-utils.ts"
)
for file in "${AFFECTED_FILES[@]}"; do
if [ -f "$file" ]; then
if grep -q '@/lib/prisma' "$file"; then
check "$file imports from @/lib/prisma" "pass"
else
check "$file imports from @/lib/prisma" "fail"
fi
else
echo " SKIP: $file not found"
fi
done
echo ""
# ── Check 3: Singleton configures connection pooling ─────────────────────────
echo "── Check 3: Singleton has connection pooling ──"
SINGLETON="src/lib/prisma.ts"
if grep -q 'connection_limit' "$SINGLETON"; then
LIMIT=$(grep -oP 'connection_limit=\d+' "$SINGLETON")
echo " Found: $LIMIT"
check "Connection limit configured" "pass"
else
check "Connection limit configured" "fail"
fi
if grep -q 'pool_timeout' "$SINGLETON"; then
TIMEOUT=$(grep -oP 'pool_timeout=\d+' "$SINGLETON")
echo " Found: $TIMEOUT"
check "Pool timeout configured" "pass"
else
check "Pool timeout configured" "fail"
fi
if grep -q 'globalForPrisma\|globalThis' "$SINGLETON"; then
check "globalThis caching for HMR" "pass"
else
check "globalThis caching for HMR" "fail"
fi
echo ""
# ── Check 4: Count total PrismaClient import sources ────────────────────────
echo "── Check 4: All Prisma imports use singleton ──"
DIRECT_PRISMA_IMPORTS=$(grep -rn "from ['\"]@prisma/client['\"]" src/ --include='*.ts' --include='*.tsx' \
| grep -v 'src/lib/prisma.ts' \
| grep -v 'node_modules' \
| grep -v '__tests__' \
| grep -v '.test.' \
|| true)
# Filter to only lines that import PrismaClient (type-only imports are fine)
CONSTRUCTOR_IMPORTS=$(echo "$DIRECT_PRISMA_IMPORTS" | grep 'PrismaClient' | grep -v 'type ' | grep -v 'import type' || true)
if [ -z "$CONSTRUCTOR_IMPORTS" ]; then
check "No direct PrismaClient constructor imports outside singleton" "pass"
else
echo " Direct PrismaClient imports found (non-type):"
echo "$CONSTRUCTOR_IMPORTS"
check "No direct PrismaClient constructor imports outside singleton" "fail"
fi
echo ""
# ── Check 5: TypeScript compiles ─────────────────────────────────────────────
echo "── Check 5: TypeScript type check ──"
if npx tsc --noEmit 2>&1; then
check "TypeScript compiles cleanly" "pass"
else
check "TypeScript compiles cleanly" "fail"
fi
echo ""
# ── Summary ──────────────────────────────────────────────────────────────────
echo "================================================================"
echo " Results: $PASS passed, $FAIL failed"
echo "================================================================"
if [ "$FAIL" -gt 0 ]; then
exit 1
fi

View file

@@ -1,150 +0,0 @@
#!/usr/bin/env bash
# Reproducer: N+1 query elimination in getAllOptimizationEvents
#
# Statically analyzes the before/after code to prove query count reduction.
# Does NOT require a running database — uses code analysis only.
#
# Usage:
# cd js/cf-webapp
# bash proof/reproducers/04-n-plus-one-benchmark.sh
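#
# The batched shape this script expects in action.ts is roughly the following
# (a sketch; variable names other than trace_id and optimization_features are
# illustrative assumptions):
#
#   const traceIds = events.map((e) => e.trace_id);
#   const features = await prisma.optimization_features.findMany({
#     where: { trace_id: { in: traceIds } },
#   });
#   const featuresMap = new Map(features.map((f) => [f.trace_id, f]));
#   // one batch query for the whole page, then O(1) Map lookups per event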
set -euo pipefail
REPO_ROOT="$(git rev-parse --show-toplevel)"
WEBAPP_DIR="$REPO_ROOT/js/cf-webapp"
cd "$WEBAPP_DIR"
ACTION_FILE="src/app/(dashboard)/review-optimizations/action.ts"
COMMIT_SHA="25013adb"
PASS=0
FAIL=0
check() {
local label="$1"
local result="$2"
if [ "$result" = "pass" ]; then
echo " PASS: $label"
PASS=$((PASS + 1))
else
echo " FAIL: $label"
FAIL=$((FAIL + 1))
fi
}
echo "================================================================"
echo " Reproducer: N+1 Query Elimination"
echo "================================================================"
echo ""
if [ ! -f "$ACTION_FILE" ]; then
echo "ERROR: $ACTION_FILE not found"
exit 1
fi
# ── Analysis of BEFORE code (main) ──────────────────────────────────────────
echo "── Before (main): N+1 pattern analysis ──"
BEFORE_CODE=$(git show main:"js/cf-webapp/$ACTION_FILE" 2>/dev/null || true)  # || true: a missing file on main should reach the "not found" branch below, not abort via set -e
# Count per-event findUnique calls (N+1 pattern)
N_PLUS_ONE_REPO=$(echo "$BEFORE_CODE" | grep -c 'repositories.findUnique' || true)
N_PLUS_ONE_FEATURES=$(echo "$BEFORE_CODE" | grep -c 'optimization_features.findUnique' || true)
echo " Per-event repository findUnique calls: $N_PLUS_ONE_REPO"
echo " Per-event features findUnique calls: $N_PLUS_ONE_FEATURES"
# Check if they're inside a map/Promise.all loop (N+1 indicator)
MAP_FINDUNIQUE=$(echo "$BEFORE_CODE" | grep -c 'map(async.*event.*findUnique\|map(async.*=>.*findUnique' || true)
echo " findUnique inside map() loops: $MAP_FINDUNIQUE"
echo ""
if [ "$N_PLUS_ONE_REPO" -gt 0 ] || [ "$N_PLUS_ONE_FEATURES" -gt 0 ]; then
echo " CONFIRMED: N+1 pattern exists in main"
echo " For 10 events: ~$((2 + N_PLUS_ONE_REPO * 10 + N_PLUS_ONE_FEATURES * 10)) queries"
else
echo " N+1 pattern not found in main (may already be fixed)"
fi
echo ""
# ── Analysis of AFTER code (current) ────────────────────────────────────────
echo "── After (optimized): batch query analysis ──"
AFTER_CODE=$(cat "$ACTION_FILE")
# Check: no per-event findUnique in map loops
AFTER_MAP_FINDUNIQUE=$(echo "$AFTER_CODE" | grep -c 'map(async.*findUnique' || true)
if [ "$AFTER_MAP_FINDUNIQUE" -eq 0 ]; then
check "No findUnique inside map() loops" "pass"
else
check "No findUnique inside map() loops" "fail"
fi
# Check: batch findMany with IN filter
BATCH_FINDMANY=$(echo "$AFTER_CODE" | grep -c 'findMany.*in:.*traceIds\|in:.*traceIds' || true)
if [ "$BATCH_FINDMANY" -gt 0 ]; then
check "Batch findMany with IN filter on trace_ids" "pass"
else
# Check alternate pattern (the { in: traceIds } may be on next line)
BATCH_IN=$(echo "$AFTER_CODE" | grep -c '{ in: traceIds }' || true)
if [ "$BATCH_IN" -gt 0 ]; then
check "Batch findMany with IN filter on trace_ids" "pass"
else
check "Batch findMany with IN filter on trace_ids" "fail"
fi
fi
# Check: Promise.all wraps events + count (raw SQL path)
PROMISE_ALL_RAW=$(echo "$AFTER_CODE" | grep -c 'Promise.all.*queryRawUnsafe\|Promise.all' || true)
if [ "$PROMISE_ALL_RAW" -ge 2 ]; then
check "Promise.all parallelizes queries (both paths)" "pass"
elif [ "$PROMISE_ALL_RAW" -ge 1 ]; then
check "Promise.all parallelizes queries (at least one path)" "pass"
else
check "Promise.all parallelizes queries" "fail"
fi
# Check: raw SQL JOIN includes repository fields
REPO_JOIN=$(echo "$AFTER_CODE" | grep -c 'repo_full_name\|repo_id' || true)
if [ "$REPO_JOIN" -ge 2 ]; then
check "Raw SQL JOIN includes repository fields (no per-event lookup)" "pass"
else
check "Raw SQL JOIN includes repository fields" "fail"
fi
# Check: Map-based lookup instead of per-event query
MAP_LOOKUP=$(echo "$AFTER_CODE" | grep -c 'featuresMap.get\|featuresMap' || true)
if [ "$MAP_LOOKUP" -ge 2 ]; then
check "Map-based lookup for features (O(1) per event)" "pass"
else
check "Map-based lookup for features" "fail"
fi
echo ""
# ── Query count comparison ───────────────────────────────────────────────────
echo "── Query count comparison (for page of 10 events) ──"
echo ""
echo " Raw SQL path:"
echo " Before: 1 (events) + 1 (count) + 10 (per-event repo) = 12 queries (sequential)"
echo " After: 1 (events+repo JOIN) + 1 (count) = 2 queries (parallel)"
echo " Reduction: 12 → 2 (83% fewer queries)"
echo ""
echo " Prisma path:"
echo " Before: 1 (events) + 10 (per-event features) + 1 (count) = 12 queries (sequential)"
echo " After: 1 (events) + 1 (count) [parallel] + 1 (batch features) = 3 queries"
echo " Reduction: 12 → 3 (75% fewer queries)"
echo ""
# ── Summary ──────────────────────────────────────────────────────────────────
echo "================================================================"
echo " Results: $PASS passed, $FAIL failed"
echo "================================================================"
if [ "$FAIL" -gt 0 ]; then
exit 1
fi

View file

@@ -1,118 +0,0 @@
#!/usr/bin/env bash
# Reproducer: Parallelize members page fetches
#
# Verifies the two server action calls are wrapped in Promise.all
# and that they are independent (neither uses the other's result).
#
# Usage:
# cd js/cf-webapp
# bash proof/reproducers/05-parallel-members-page.sh
set -euo pipefail
REPO_ROOT="$(git rev-parse --show-toplevel)"
WEBAPP_DIR="$REPO_ROOT/js/cf-webapp"
cd "$WEBAPP_DIR"
PAGE_FILE="src/app/(dashboard)/members/page.tsx"
PASS=0
FAIL=0
check() {
local label="$1"
local result="$2"
if [ "$result" = "pass" ]; then
echo " PASS: $label"
PASS=$((PASS + 1))
else
echo " FAIL: $label"
FAIL=$((FAIL + 1))
fi
}
echo "================================================================"
echo " Reproducer: Parallel Members Page Fetches"
echo "================================================================"
echo ""
# ── Check 1: Promise.all wraps both calls ────────────────────────────────────
echo "── Check 1: Promise.all wraps getCurrentUserRole + getOrganizationMembers ──"
PROMISE_ALL=$(grep -A5 'Promise.all' "$PAGE_FILE" | grep -c 'getCurrentUserRole\|getOrganizationMembers' || true)
if [ "$PROMISE_ALL" -ge 2 ]; then
check "Both calls inside Promise.all" "pass"
else
check "Both calls inside Promise.all" "fail"
fi
echo ""
# ── Check 2: No sequential await pattern ─────────────────────────────────────
echo "── Check 2: No sequential await of these two functions ──"
# Check that getCurrentUserRole and getOrganizationMembers aren't on separate await lines
SEQ_ROLE=$(grep -n 'await getCurrentUserRole' "$PAGE_FILE" | grep -v 'Promise.all' || true)
SEQ_MEMBERS=$(grep -n 'await getOrganizationMembers' "$PAGE_FILE" | grep -v 'Promise.all' || true)
# Filter out lines that are inside a Promise.all block
SEQ_COUNT=0
if [ -n "$SEQ_ROLE" ]; then
# Check if it's NOT inside a Promise.all destructuring
STANDALONE_ROLE=$(echo "$SEQ_ROLE" | grep -v '\[.*\] = await Promise' || true)
if [ -n "$STANDALONE_ROLE" ]; then ((SEQ_COUNT++)); fi
fi
if [ -n "$SEQ_MEMBERS" ]; then
STANDALONE_MEMBERS=$(echo "$SEQ_MEMBERS" | grep -v '\[.*\] = await Promise' || true)
if [ -n "$STANDALONE_MEMBERS" ]; then ((SEQ_COUNT++)); fi
fi
if [ "$SEQ_COUNT" -eq 0 ]; then
check "No standalone sequential awaits" "pass"
else
echo " Found sequential awaits:"
echo " $SEQ_ROLE"
echo " $SEQ_MEMBERS"
check "No standalone sequential awaits" "fail"
fi
echo ""
# ── Check 3: Independence verification ───────────────────────────────────────
echo "── Check 3: Calls are independent (same inputs, no cross-dependency) ──"
# Both take (userId, orgId) — verify signatures match
ROLE_ARGS=$(grep 'getCurrentUserRole(' "$PAGE_FILE" | head -1)
MEMBERS_ARGS=$(grep 'getOrganizationMembers(' "$PAGE_FILE" | head -1)
echo " getCurrentUserRole call: $ROLE_ARGS"
echo " getOrganizationMembers call: $MEMBERS_ARGS"
# Check roleResult is not used as input to getOrganizationMembers
CROSS_DEP=$(grep 'getOrganizationMembers.*roleResult\|getOrganizationMembers.*role' "$PAGE_FILE" || true)
if [ -z "$CROSS_DEP" ]; then
check "No cross-dependency between the two calls" "pass"
else
check "No cross-dependency between the two calls" "fail"
fi
echo ""
# ── Before/After comparison ──────────────────────────────────────────────────
echo "── Before/After comparison ──"
echo ""
echo " Before (main): sequential"
echo " const roleResult = await getCurrentUserRole(userId, orgId)"
echo " const result = await getOrganizationMembers(userId, orgId)"
echo " Latency: role_time + members_time"
echo ""
echo " After: parallel"
echo " const [roleResult, result] = await Promise.all([..."
echo " Latency: max(role_time, members_time)"
echo ""
# ── Summary ──────────────────────────────────────────────────────────────────
echo "================================================================"
echo " Results: $PASS passed, $FAIL failed"
echo "================================================================"
if [ "$FAIL" -gt 0 ]; then
exit 1
fi

View file

@@ -1,121 +0,0 @@
#!/usr/bin/env bash
# Reproducer: Parallelize repository detail page fetches
#
# Verifies:
# 1. getRepositoryById parallelizes repo fetch + auth check
# 2. Page component parallelizes 6 stats queries via Promise.all
# 3. No sequential await pattern remains for these calls
#
# Usage:
# cd js/cf-webapp
# bash proof/reproducers/06-parallel-repo-page.sh
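#
# The parallel batch this script expects in page.tsx is roughly the following
# (a sketch; the destructured names and any extra arguments are assumptions,
# and the sixth query is not pinned down by the greps below):
#
#   const [optCount, successCount, optSeries, prSeries, leaderboard] =
#     await Promise.all([
#       getUserOptimizationCountByRepo(repositoryId),
#       getUserOptimizationSuccessfulCountByRepo(repositoryId),
#       getOptimizationsTimeSeriesData(repositoryId),
#       getPullRequestEventTimeSeriesData(repositoryId),
#       getActiveUserLeaderboardLast30DaysForRepo(repositoryId),
#     ]);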
set -euo pipefail
REPO_ROOT="$(git rev-parse --show-toplevel)"
WEBAPP_DIR="$REPO_ROOT/js/cf-webapp"
cd "$WEBAPP_DIR"
ACTION_FILE="src/app/(dashboard)/repositories/[repositoryId]/action.ts"
PAGE_FILE="src/app/(dashboard)/repositories/[repositoryId]/page.tsx"
PASS=0
FAIL=0
check() {
local label="$1"
local result="$2"
if [ "$result" = "pass" ]; then
echo " PASS: $label"
PASS=$((PASS + 1))
else
echo " FAIL: $label"
FAIL=$((FAIL + 1))
fi
}
echo "================================================================"
echo " Reproducer: Parallel Repository Detail Page Fetches"
echo "================================================================"
echo ""
# ── Check 1: getRepositoryById uses Promise.all ─────────────────────────────
echo "── Check 1: getRepositoryById parallelizes repo + auth ──"
if grep -A10 'async function getRepositoryById\|getRepositoryById.*async' "$ACTION_FILE" | grep -q 'Promise.all'; then
check "getRepositoryById uses Promise.all for repo + auth" "pass"
else
# Check broader context
PROMISE_IN_ACTION=$(grep -c 'Promise.all' "$ACTION_FILE" || true)
if [ "$PROMISE_IN_ACTION" -ge 1 ]; then
check "getRepositoryById uses Promise.all for repo + auth" "pass"
else
check "getRepositoryById uses Promise.all for repo + auth" "fail"
fi
fi
echo ""
# ── Check 2: Page parallelizes 6 stats queries ──────────────────────────────
echo "── Check 2: 6 stats queries in Promise.all ──"
STATS_IN_PROMISE=$(grep -A20 'Promise.all' "$PAGE_FILE" | grep -c \
'getUserOptimizationCountByRepo\|getUserOptimizationSuccessfulCountByRepo\|getOptimizationsTimeSeriesData\|getPullRequestEventTimeSeriesData\|getActiveUserLeaderboardLast30DaysForRepo' \
|| true)
echo " Stats queries found inside Promise.all: $STATS_IN_PROMISE"
if [ "$STATS_IN_PROMISE" -ge 5 ]; then
check "At least 5 of 6 stats queries in Promise.all" "pass"
else
check "At least 5 of 6 stats queries in Promise.all" "fail"
fi
echo ""
# ── Check 3: No sequential stats awaits ──────────────────────────────────────
echo "── Check 3: No sequential stats query awaits ──"
# Look for standalone await lines for these functions outside Promise.all
SEQ_STATS=0
for func in getUserOptimizationCountByRepo getUserOptimizationSuccessfulCountByRepo \
getOptimizationsTimeSeriesData getPullRequestEventTimeSeriesData \
getActiveUserLeaderboardLast30DaysForRepo; do
STANDALONE=$(grep "await $func" "$PAGE_FILE" | grep -v 'Promise.all' || true)
if [ -n "$STANDALONE" ]; then
echo " Sequential await found: $STANDALONE"
SEQ_STATS=$((SEQ_STATS + 1))
fi
done
if [ "$SEQ_STATS" -eq 0 ]; then
check "No sequential stats query awaits outside Promise.all" "pass"
else
check "No sequential stats query awaits outside Promise.all" "fail"
fi
echo ""
# ── Check 4: Independence verification ───────────────────────────────────────
echo "── Check 4: Stats queries are independent (all take repositoryId only) ──"
# Verify each function call uses repositoryId (or repositoryId + year)
FUNCS_WITH_REPOID=$(grep -c 'repositoryId)' "$PAGE_FILE" || true)
echo " Function calls using repositoryId: $FUNCS_WITH_REPOID"
if [ "$FUNCS_WITH_REPOID" -ge 5 ]; then
check "All stats queries take repositoryId as primary input" "pass"
else
check "All stats queries take repositoryId as primary input" "fail"
fi
echo ""
# ── Latency comparison and summary ──────────────────────────────────────────
echo "── Latency comparison ──"
echo " Before: 7 sequential round-trips (~50ms each) = ~350ms"
echo " After: 2 parallel batches (auth+repo, then 6 stats) = ~100ms"
echo " Savings: ~250ms (71%)"
echo ""
echo "================================================================"
echo " Results: $PASS passed, $FAIL failed"
echo "================================================================"
if [ "$FAIL" -gt 0 ]; then
exit 1
fi

View file

@@ -1,226 +0,0 @@
#!/usr/bin/env bash
# Reproducer: Observability stack verification
#
# Verifies:
# 1. OTel SDK is configured with Sentry bridge in instrumentation.ts
# 2. Sentry sampling is 10% in production (server + client)
# 3. skipOpenTelemetrySetup is set (avoids duplicate OTel SDK)
# 4. Prisma slow query logging is configured
# 5. @next/bundle-analyzer is wired into next.config.mjs
# 6. Required packages are installed
#
# Usage:
# cd js/cf-webapp
# bash proof/reproducers/07-observability-stack.sh
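#
# The register() shape Checks 1 and 6 look for is roughly the following
# (a sketch; option names vary across @opentelemetry/sdk-node versions and the
# NEXT_RUNTIME guard is an assumption):
#
#   export async function register() {
#     if (process.env.NEXT_RUNTIME !== "nodejs") return;
#     const { NodeSDK } = await import("@opentelemetry/sdk-node");
#     const { SentrySpanProcessor, SentryPropagator } = await import("@sentry/opentelemetry");
#     const { PrismaInstrumentation } = await import("@prisma/instrumentation");
#     const { getNodeAutoInstrumentations } = await import("@opentelemetry/auto-instrumentations-node");
#     const sdk = new NodeSDK({
#       spanProcessors: [new SentrySpanProcessor()],
#       textMapPropagator: new SentryPropagator(),
#       instrumentations: [
#         getNodeAutoInstrumentations({ "@opentelemetry/instrumentation-fs": { enabled: false } }),
#         new PrismaInstrumentation(),
#       ],
#     });
#     sdk.start();
#   }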
set -euo pipefail
REPO_ROOT="$(git rev-parse --show-toplevel)"
WEBAPP_DIR="$REPO_ROOT/js/cf-webapp"
cd "$WEBAPP_DIR"
INSTRUMENTATION="src/instrumentation.ts"
INSTRUMENTATION_CLIENT="src/instrumentation-client.ts"
SENTRY_SERVER="sentry.server.config.ts"
PRISMA_LIB="src/lib/prisma.ts"
NEXT_CONFIG="next.config.mjs"
PACKAGE_JSON="package.json"
PASS=0
FAIL=0
check() {
local label="$1"
local result="$2"
if [ "$result" = "pass" ]; then
echo " PASS: $label"
PASS=$((PASS + 1))
else
echo " FAIL: $label"
FAIL=$((FAIL + 1))
fi
}
echo "================================================================"
echo " Reproducer: Observability Stack Verification"
echo "================================================================"
echo ""
# ── Check 1: OTel SDK with Sentry bridge ────────────────────────────────────
echo "── Check 1: OTel SDK in instrumentation.ts ──"
if grep -q 'NodeSDK' "$INSTRUMENTATION"; then
check "NodeSDK is imported/used" "pass"
else
check "NodeSDK is imported/used" "fail"
fi
if grep -q 'SentrySpanProcessor' "$INSTRUMENTATION"; then
check "SentrySpanProcessor configured" "pass"
else
check "SentrySpanProcessor configured" "fail"
fi
if grep -q 'SentryPropagator' "$INSTRUMENTATION"; then
check "SentryPropagator configured" "pass"
else
check "SentryPropagator configured" "fail"
fi
if grep -q 'PrismaInstrumentation' "$INSTRUMENTATION"; then
check "PrismaInstrumentation enabled" "pass"
else
check "PrismaInstrumentation enabled" "fail"
fi
if grep -q 'instrumentation-fs.*enabled.*false' "$INSTRUMENTATION"; then
check "Noisy fs instrumentation disabled" "pass"
else
check "Noisy fs instrumentation disabled" "fail"
fi
# Check dynamic imports (packages only loaded when tracing active)
DYNAMIC_IMPORTS=$(grep -c 'await import(' "$INSTRUMENTATION" || true)
if [ "$DYNAMIC_IMPORTS" -ge 3 ]; then
check "OTel packages dynamically imported ($DYNAMIC_IMPORTS imports)" "pass"
else
check "OTel packages dynamically imported ($DYNAMIC_IMPORTS imports)" "fail"
fi
echo ""
# ── Check 2: Sentry 10% sampling ────────────────────────────────────────────
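# The sampling shape these greps look for is roughly (a sketch; the
# non-production rate is an assumption):
#   tracesSampleRate: process.env.NODE_ENV === "production" ? 0.1 : 1.0,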
echo "── Check 2: Sentry 10% production sampling ──"
if grep -q '0\.1' "$SENTRY_SERVER"; then
check "Server tracesSampleRate includes 0.1" "pass"
else
check "Server tracesSampleRate includes 0.1" "fail"
fi
if grep -q '0\.1' "$INSTRUMENTATION_CLIENT"; then
check "Client tracesSampleRate includes 0.1" "pass"
else
check "Client tracesSampleRate includes 0.1" "fail"
fi
echo ""
# ── Check 3: skipOpenTelemetrySetup ──────────────────────────────────────────
echo "── Check 3: skipOpenTelemetrySetup prevents duplicate OTel ──"
if grep -q 'skipOpenTelemetrySetup.*true' "$SENTRY_SERVER"; then
check "skipOpenTelemetrySetup: true in sentry.server.config.ts" "pass"
else
check "skipOpenTelemetrySetup: true in sentry.server.config.ts" "fail"
fi
echo ""
# ── Check 4: Prisma logging ─────────────────────────────────────────────────
echo "── Check 4: Prisma slow query logging and Sentry forwarding ──"
if grep -q 'SLOW_QUERY_THRESHOLD_MS' "$PRISMA_LIB"; then
check "Slow query threshold defined" "pass"
else
check "Slow query threshold defined" "fail"
fi
if grep -q '\$on.*query' "$PRISMA_LIB" || grep -q "on.*query" "$PRISMA_LIB"; then
check "Prisma query event listener registered" "pass"
else
check "Prisma query event listener registered" "fail"
fi
if grep -q '\$on.*error' "$PRISMA_LIB" || grep -q "on.*error" "$PRISMA_LIB"; then
check "Prisma error event forwarded to Sentry" "pass"
else
check "Prisma error event forwarded to Sentry" "fail"
fi
if grep -q 'Sentry.captureException' "$PRISMA_LIB"; then
check "Sentry.captureException called in error handler" "pass"
else
check "Sentry.captureException called in error handler" "fail"
fi
# Verify log levels configured
if grep -q "emit.*event.*level.*warn" "$PRISMA_LIB" && grep -q "emit.*event.*level.*error" "$PRISMA_LIB"; then
check "Prisma log levels (warn, error) emit events" "pass"
else
check "Prisma log levels (warn, error) emit events" "fail"
fi
echo ""
# ── Check 5: Bundle analyzer ────────────────────────────────────────────────
echo "── Check 5: @next/bundle-analyzer in next.config.mjs ──"
if grep -q 'bundle-analyzer' "$NEXT_CONFIG"; then
check "bundle-analyzer imported in next.config.mjs" "pass"
else
check "bundle-analyzer imported in next.config.mjs" "fail"
fi
if grep -q 'ANALYZE' "$NEXT_CONFIG"; then
check "ANALYZE env var gates bundle analysis" "pass"
else
check "ANALYZE env var gates bundle analysis" "fail"
fi
if grep -q 'withBundleAnalyzer' "$NEXT_CONFIG"; then
check "withBundleAnalyzer wraps config" "pass"
else
check "withBundleAnalyzer wraps config" "fail"
fi
echo ""
# ── Check 6: Required packages ──────────────────────────────────────────────
echo "── Check 6: Required packages in package.json ──"
for pkg in "@opentelemetry/sdk-node" "@opentelemetry/auto-instrumentations-node" \
"@prisma/instrumentation" "@sentry/opentelemetry" "@next/bundle-analyzer"; do
if grep -q "\"$pkg\"" "$PACKAGE_JSON"; then
check "$pkg in package.json" "pass"
else
check "$pkg in package.json" "fail"
fi
done
echo ""
# ── Check 7: browserTracingIntegration ──────────────────────────────────────
echo "── Check 7: Client-side browser tracing with long animation frames ──"
if grep -q 'browserTracingIntegration' "$INSTRUMENTATION_CLIENT"; then
check "browserTracingIntegration configured" "pass"
else
check "browserTracingIntegration configured" "fail"
fi
if grep -q 'enableLongAnimationFrame' "$INSTRUMENTATION_CLIENT"; then
check "Long animation frame detection enabled" "pass"
else
check "Long animation frame detection enabled" "fail"
fi
echo ""
# ── Summary ──────────────────────────────────────────────────────────────────
echo "── What this enables ──"
echo " - Distributed traces: HTTP → server action → Prisma query (all linked)"
echo " - Slow query alerts: queries >500ms logged in dev"
echo " - 90% reduction in Sentry event volume (100% → 10% sampling)"
echo " - On-demand bundle analysis: ANALYZE=true npm run build"
echo " - Web Vitals correlation via long animation frame detection"
echo ""
echo "================================================================"
echo " Results: $PASS passed, $FAIL failed"
echo "================================================================"
if [ "$FAIL" -gt 0 ]; then
exit 1
fi

View file

@@ -1,128 +0,0 @@
#!/usr/bin/env bash
# Reproducer: Server action timing + PostHog analytics
set -euo pipefail
REPO_ROOT="$(git rev-parse --show-toplevel)"
WEBAPP_DIR="$REPO_ROOT/js/cf-webapp"
cd "$WEBAPP_DIR"
PASS=0; FAIL=0
check() {
local label="$1" result="$2"
if [ "$result" = "pass" ]; then echo " PASS: $label"; PASS=$((PASS + 1))
else echo " FAIL: $label"; FAIL=$((FAIL + 1)); fi
}
echo "================================================================"
echo " Reproducer: Server Action Timing + PostHog Analytics"
echo "================================================================"
echo ""
# ── Check 1: withTiming utility exists ──
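# For reference, the wrapper shape these greps expect is roughly the following
# (a sketch; the generics, op name, and threshold value are assumptions):
#
#   const SLOW_ACTION_THRESHOLD = 500; // ms
#   export function withTiming<A extends unknown[], R>(
#     name: string,
#     fn: (...args: A) => Promise<R>
#   ): (...args: A) => Promise<R> {
#     return (...args) =>
#       Sentry.startSpan({ name, op: "server.action" }, async (span) => {
#         const start = performance.now();
#         try {
#           return await fn(...args);
#         } finally {
#           const ms = performance.now() - start;
#           span.setAttribute("server_action.duration_ms", ms);
#           if (ms > SLOW_ACTION_THRESHOLD) span.setAttribute("server_action.slow", true);
#         }
#       });
#   }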
echo "── Check 1: withTiming utility ──"
TIMING_FILE="src/lib/server-action-timing.ts"
if [ -f "$TIMING_FILE" ]; then
check "server-action-timing.ts exists" "pass"
else
check "server-action-timing.ts exists" "fail"
fi
if grep -q 'function withTiming' "$TIMING_FILE" 2>/dev/null; then
check "withTiming function exported" "pass"
else
check "withTiming function exported" "fail"
fi
if grep -q 'Sentry.startSpan' "$TIMING_FILE" 2>/dev/null; then
check "Creates Sentry span for each action" "pass"
else
check "Creates Sentry span for each action" "fail"
fi
if grep -q 'performance.now' "$TIMING_FILE" 2>/dev/null; then
check "Uses performance.now() for timing" "pass"
else
check "Uses performance.now() for timing" "fail"
fi
if grep -q 'SLOW_ACTION_THRESHOLD' "$TIMING_FILE" 2>/dev/null; then
check "Slow action threshold defined" "pass"
else
check "Slow action threshold defined" "fail"
fi
if grep -q 'server_action.slow' "$TIMING_FILE" 2>/dev/null; then
check "Marks slow actions with span attribute" "pass"
else
check "Marks slow actions with span attribute" "fail"
fi
echo ""
# ── Check 2: Server actions wrapped with withTiming ──
echo "── Check 2: Server actions instrumented ──"
for action in getOrganizationMembers getRepositoryById getRepositoriesWithStagingEvents getAllOptimizationEvents; do
if grep -r "$action.*=.*withTiming\|withTiming.*$action" "src/app/" 2>/dev/null | grep -q .; then
check "$action wrapped with withTiming" "pass"
else
check "$action wrapped with withTiming" "fail"
fi
done
echo ""
# ── Check 3: Centralized tracking helper ──
echo "── Check 3: Centralized PostHog tracking ──"
TRACKING_FILE="src/lib/analytics/tracking.ts"
if grep -q 'captureEvent' "$TRACKING_FILE" 2>/dev/null; then
check "captureEvent helper exists" "pass"
else
check "captureEvent helper exists" "fail"
fi
for event in trackOptimizationReviewed trackRepositoryConnected trackApiKeyCreated trackMemberInvited trackBillingPageViewed; do
if grep -q "function $event\|$event" "$TRACKING_FILE" 2>/dev/null; then
check "$event tracking function defined" "pass"
else
check "$event tracking function defined" "fail"
fi
done
echo ""
# ── Check 4: Tracking calls in action files ──
echo "── Check 4: Tracking calls wired into actions ──"
if grep -rq 'trackOptimizationReviewed' "src/app/(dashboard)/review-optimizations/" 2>/dev/null; then
check "trackOptimizationReviewed called in review-optimizations" "pass"
else
check "trackOptimizationReviewed called in review-optimizations" "fail"
fi
if grep -rq 'trackRepositoryConnected' "src/app/(dashboard)/repositories/" 2>/dev/null; then
check "trackRepositoryConnected called in repositories" "pass"
else
check "trackRepositoryConnected called in repositories" "fail"
fi
if grep -rq 'trackApiKeyCreated' "src/app/(dashboard)/apikeys/" 2>/dev/null; then
check "trackApiKeyCreated called in apikeys" "pass"
else
check "trackApiKeyCreated called in apikeys" "fail"
fi
if grep -rq 'trackMemberInvited' "src/app/(dashboard)/" 2>/dev/null; then
check "trackMemberInvited called" "pass"
else
check "trackMemberInvited called" "fail"
fi
if grep -rq 'trackBillingPageViewed' "src/app/(dashboard)/billing/" 2>/dev/null; then
check "trackBillingPageViewed called in billing" "pass"
else
check "trackBillingPageViewed called in billing" "fail"
fi
echo ""
echo "================================================================"
echo " Results: $PASS passed, $FAIL failed"
echo "================================================================"
if [ "$FAIL" -gt 0 ]; then exit 1; fi

View file

@@ -1,105 +0,0 @@
#!/usr/bin/env bash
# Reproducer: Test coverage verification
set -euo pipefail
REPO_ROOT="$(git rev-parse --show-toplevel)"
WEBAPP_DIR="$REPO_ROOT/js/cf-webapp"
cd "$WEBAPP_DIR"
PASS=0; FAIL=0
check() {
local label="$1" result="$2"
if [ "$result" = "pass" ]; then echo " PASS: $label"; PASS=$((PASS + 1))
else echo " FAIL: $label"; FAIL=$((FAIL + 1)); fi
}
echo "================================================================"
echo " Reproducer: Test Coverage Verification"
echo "================================================================"
echo ""
# ── Check 1: Test files exist ──
echo "── Check 1: Test files exist ──"
for f in \
"src/lib/__tests__/server-action-timing.test.ts" \
"src/app/(dashboard)/members/__tests__/action.test.ts" \
"src/app/(dashboard)/repositories/[repositoryId]/__tests__/action.test.ts" \
"src/app/(dashboard)/review-optimizations/__tests__/action.test.ts"; do
if [ -f "$f" ]; then
check "$f exists" "pass"
else
check "$f exists" "fail"
fi
done
echo ""
# ── Check 2: Test infrastructure ──
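# The config shape these checks expect is roughly the following (a sketch;
# the test environment value is an assumption):
#
#   import { defineConfig } from "vitest/config";
#   import { fileURLToPath } from "node:url";
#   export default defineConfig({
#     resolve: { alias: { "@": fileURLToPath(new URL("./src", import.meta.url)) } },
#     test: { environment: "node", setupFiles: ["./src/test/setup.ts"] },
#   });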
echo "── Check 2: Test infrastructure ──"
if [ -f "src/test/setup.ts" ]; then
check "Global test setup exists" "pass"
else
check "Global test setup exists" "fail"
fi
if grep -q '"@"' vitest.config.ts 2>/dev/null || grep -q "'@'" vitest.config.ts 2>/dev/null; then
check "Vitest config has @ path alias" "pass"
else
check "Vitest config has @ path alias" "fail"
fi
if grep -q 'setup' vitest.config.ts 2>/dev/null; then
check "Vitest config references setup file" "pass"
else
check "Vitest config references setup file" "fail"
fi
echo ""
# ── Check 3: Test counts ──
echo "── Check 3: Test counts ──"
TIMING_TESTS=$(grep -c "it\b\|test\b" "src/lib/__tests__/server-action-timing.test.ts" 2>/dev/null || echo 0)
MEMBERS_TESTS=$(grep -c "it\b\|test\b" "src/app/(dashboard)/members/__tests__/action.test.ts" 2>/dev/null || echo 0)
REPO_TESTS=$(grep -c "it\b\|test\b" "src/app/(dashboard)/repositories/[repositoryId]/__tests__/action.test.ts" 2>/dev/null || echo 0)
REVIEW_TESTS=$(grep -c "it\b\|test\b" "src/app/(dashboard)/review-optimizations/__tests__/action.test.ts" 2>/dev/null || echo 0)
echo " server-action-timing: $TIMING_TESTS tests"
echo " members/action: $MEMBERS_TESTS tests"
echo " repositories/action: $REPO_TESTS tests"
echo " review-optimizations/action: $REVIEW_TESTS tests"
TOTAL=$((TIMING_TESTS + MEMBERS_TESTS + REPO_TESTS + REVIEW_TESTS))
echo " Total: $TOTAL tests"
if [ "$TOTAL" -ge 30 ]; then
check "At least 30 tests across all files" "pass"
else
check "At least 30 tests across all files" "fail"
fi
echo ""
# ── Check 4: Mock setup covers key modules ──
echo "── Check 4: Global mocks ──"
SETUP="src/test/setup.ts"
if grep -q 'prisma\|@prisma' "$SETUP" 2>/dev/null; then
check "Prisma mock in setup" "pass"
else
check "Prisma mock in setup" "fail"
fi
if grep -q 'sentry\|@sentry' "$SETUP" 2>/dev/null; then
check "Sentry mock in setup" "pass"
else
check "Sentry mock in setup" "fail"
fi
if grep -q 'sentry/node\|sentry/nextjs\|mock.*sentry' "$SETUP" 2>/dev/null; then
check "Sentry node + nextjs both mocked in setup" "pass"
else
check "Sentry node + nextjs both mocked in setup" "fail"
fi
echo ""
echo "================================================================"
echo " Results: $PASS passed, $FAIL failed"
echo "================================================================"
if [ "$FAIL" -gt 0 ]; then exit 1; fi

View file

@@ -1,62 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
REPO_ROOT="$(git rev-parse --show-toplevel)"
cd "$REPO_ROOT/js/cf-webapp"
PASS=0; FAIL=0
check() { if [ "$2" = "pass" ]; then echo " PASS: $1"; PASS=$((PASS+1)); else echo " FAIL: $1"; FAIL=$((FAIL+1)); fi; }
FILE="src/instrumentation-client.ts"
echo "================================================================"
echo " Reproducer: Lazy-Load Sentry Replay"
echo "================================================================"
echo ""
echo "── Check 1: lazyLoadIntegration used ──"
if grep -q 'lazyLoadIntegration' "$FILE"; then
check "lazyLoadIntegration called" "pass"
else
check "lazyLoadIntegration called" "fail"
fi
if grep -q 'replayIntegration' "$FILE"; then
check "replayIntegration referenced" "pass"
else
check "replayIntegration referenced" "fail"
fi
echo ""
echo "── Check 2: Replay NOT in init integrations array ──"
# The integrations array in Sentry.init should be empty
if grep -q 'integrations: \[\]' "$FILE"; then
check "Sentry.init integrations array is empty" "pass"
else
check "Sentry.init integrations array is empty" "fail"
fi
echo ""
echo "── Check 3: addIntegration used for deferred loading ──"
if grep -q 'addIntegration' "$FILE"; then
check "Sentry.addIntegration used for deferred replay" "pass"
else
check "Sentry.addIntegration used for deferred replay" "fail"
fi
echo ""
echo "── Check 4: maskAllText and blockAllMedia preserved ──"
if grep -q 'maskAllText.*true' "$FILE"; then
check "maskAllText: true preserved" "pass"
else
check "maskAllText: true preserved" "fail"
fi
if grep -q 'blockAllMedia.*true' "$FILE"; then
check "blockAllMedia: true preserved" "pass"
else
check "blockAllMedia: true preserved" "fail"
fi
echo ""
echo "================================================================"
echo " Results: $PASS passed, $FAIL failed"
echo "================================================================"
if [ "$FAIL" -gt 0 ]; then exit 1; fi

View file

@@ -1,45 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
cd "$(git rev-parse --show-toplevel)/js/cf-webapp"
PASS=0; FAIL=0
check() { if [ "$2" = "pass" ]; then echo " PASS: $1"; PASS=$((PASS+1)); else echo " FAIL: $1"; FAIL=$((FAIL+1)); fi; }
echo "================================================================"
echo " Reproducer: @sentry/nextjs Consistency"
echo "================================================================"
echo ""
echo "── Check 1: No @sentry/node imports in server components ──"
NODE_IMPORTS=$(grep -r '@sentry/node' src/app/ --include='*.ts' --include='*.tsx' -l 2>/dev/null || true)
if [ -z "$NODE_IMPORTS" ]; then
check "No @sentry/node in src/app/" "pass"
else
echo " Found @sentry/node in: $NODE_IMPORTS"
check "No @sentry/node in src/app/" "fail"
fi
echo ""
echo "── Check 2: Repository action uses @sentry/nextjs ──"
ACTION="src/app/(dashboard)/repositories/[repositoryId]/action.ts"
if grep -q '@sentry/nextjs' "$ACTION"; then
check "Repository action imports @sentry/nextjs" "pass"
else
check "Repository action imports @sentry/nextjs" "fail"
fi
echo ""
echo "── Check 3: @sentry/nextjs is the only Sentry import in app/ ──"
SENTRY_IMPORTS=$(grep -r "from ['\"]@sentry/" src/app/ --include='*.ts' --include='*.tsx' 2>/dev/null | grep -v '@sentry/nextjs' | grep -v 'node_modules' || true)
if [ -z "$SENTRY_IMPORTS" ]; then
check "All Sentry imports in app/ use @sentry/nextjs" "pass"
else
echo " Non-nextjs imports: $SENTRY_IMPORTS"
check "All Sentry imports in app/ use @sentry/nextjs" "fail"
fi
echo ""
echo "================================================================"
echo " Results: $PASS passed, $FAIL failed"
echo "================================================================"
if [ "$FAIL" -gt 0 ]; then exit 1; fi

View file

@@ -1,62 +0,0 @@
#!/usr/bin/env bash
# Proof reproducer for commit 12: framer-motion → motion/react migration
set -euo pipefail
PASS=0
FAIL=0
TOTAL=0
check() {
local desc="$1"; shift
TOTAL=$((TOTAL + 1))
if "$@" >/dev/null 2>&1; then
echo " PASS: $desc"
PASS=$((PASS + 1))
else
echo " FAIL: $desc"
FAIL=$((FAIL + 1))
fi
}
check_not() {
local desc="$1"; shift
TOTAL=$((TOTAL + 1))
if "$@" >/dev/null 2>&1; then
echo " FAIL: $desc"
FAIL=$((FAIL + 1))
else
echo " PASS: $desc"
PASS=$((PASS + 1))
fi
}
echo "=== Proof 12: framer-motion → motion/react ==="
echo ""
# 1. Check motion is in dependencies
echo "--- Dependency checks ---"
check "motion is in package.json dependencies" \
grep -q '"motion"' package.json
check_not "framer-motion is NOT in package.json" \
grep -q '"framer-motion"' package.json
# 2. Check imports
echo ""
echo "--- Import checks ---"
check "onboarding page imports from motion/react" \
grep -q 'from "motion/react"' src/app/\(auth\)/onboarding/page.tsx
check_not "no framer-motion imports remain in source" \
grep -rq 'from "framer-motion"' src/
check "AnimatePresence is imported" \
grep -q 'AnimatePresence' src/app/\(auth\)/onboarding/page.tsx
check "motion component is imported" \
grep -q '{ AnimatePresence, motion }' src/app/\(auth\)/onboarding/page.tsx
echo ""
echo "=== Results: $PASS/$TOTAL passed, $FAIL failed ==="
[ "$FAIL" -eq 0 ] && echo "ALL CHECKS PASSED" || echo "SOME CHECKS FAILED"
exit "$FAIL"

View file

@@ -1,67 +0,0 @@
#!/usr/bin/env bash
# Proof reproducer for commit 13: dynamic-import LineProfilerView
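#
# The dynamic-import shape these checks expect is roughly the following
# (a sketch; the Skeleton import path and sizing classes are assumptions):
#
#   import dynamic from "next/dynamic";
#   import { Skeleton } from "@/components/ui/skeleton";
#   const LineProfilerView = dynamic(
#     () => import("@/components/LineProfiler").then((m) => m.LineProfilerView),
#     { ssr: false, loading: () => <Skeleton className="h-96 w-full" /> }
#   );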
set -euo pipefail
PASS=0
FAIL=0
TOTAL=0
check() {
local desc="$1"; shift
TOTAL=$((TOTAL + 1))
if "$@" >/dev/null 2>&1; then
echo " PASS: $desc"
PASS=$((PASS + 1))
else
echo " FAIL: $desc"
FAIL=$((FAIL + 1))
fi
}
check_not() {
local desc="$1"; shift
TOTAL=$((TOTAL + 1))
if "$@" >/dev/null 2>&1; then
echo " FAIL: $desc"
FAIL=$((FAIL + 1))
else
echo " PASS: $desc"
PASS=$((PASS + 1))
fi
}
PROFILER_PAGE="src/app/(dashboard)/review-optimizations/[traceId]/profiler/page.tsx"
echo "=== Proof 13: dynamic-import LineProfilerView ==="
echo ""
echo "--- Dynamic import checks ---"
check "next/dynamic is imported" \
grep -q 'import dynamic from "next/dynamic"' "$PROFILER_PAGE"
check "LineProfilerView uses dynamic()" \
grep -q 'const LineProfilerView = dynamic(' "$PROFILER_PAGE"
check "ssr: false is set" \
grep -q 'ssr: false' "$PROFILER_PAGE"
check_not "no static import of LineProfilerView" \
grep -q 'import { LineProfilerView }' "$PROFILER_PAGE"
echo ""
echo "--- Loading fallback checks ---"
check "Skeleton component is imported" \
grep -q 'import { Skeleton }' "$PROFILER_PAGE"
check "loading fallback uses Skeleton" \
grep -q 'loading:' "$PROFILER_PAGE"
echo ""
echo "--- Module resolution check ---"
check "dynamic import resolves @/components/LineProfiler" \
grep -q '@/components/LineProfiler' "$PROFILER_PAGE"
echo ""
echo "=== Results: $PASS/$TOTAL passed, $FAIL failed ==="
[ "$FAIL" -eq 0 ] && echo "ALL CHECKS PASSED" || echo "SOME CHECKS FAILED"
exit "$FAIL"

View file

@@ -1,74 +0,0 @@
#!/usr/bin/env bash
# Proof reproducer for commit 14: PostHog singleton + flush() over shutdown()
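#
# The singleton-plus-flush pattern these checks expect is roughly the
# following (a sketch; the env var names are assumptions):
#
#   import { PostHog } from "posthog-node";
#   let client: PostHog | undefined;
#   export function getPostHogClient(): PostHog {
#     if (!client) {
#       client = new PostHog(process.env.NEXT_PUBLIC_POSTHOG_KEY!, {
#         host: process.env.NEXT_PUBLIC_POSTHOG_HOST,
#       });
#     }
#     return client;
#   }
#   // callers await client.flush() instead of client.shutdown(), so the shared
#   // instance stays usable across subsequent requests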
set -euo pipefail
PASS=0
FAIL=0
TOTAL=0
check() {
local desc="$1"; shift
TOTAL=$((TOTAL + 1))
if "$@" >/dev/null 2>&1; then
echo " PASS: $desc"
PASS=$((PASS + 1))
else
echo " FAIL: $desc"
FAIL=$((FAIL + 1))
fi
}
check_not() {
local desc="$1"; shift
TOTAL=$((TOTAL + 1))
if "$@" >/dev/null 2>&1; then
echo " FAIL: $desc"
FAIL=$((FAIL + 1))
else
echo " PASS: $desc"
PASS=$((PASS + 1))
fi
}
echo "=== Proof 14: PostHog singleton + flush() ==="
echo ""
echo "--- Singleton pattern checks ---"
check "module-level client variable exists" \
grep -q '^let client' src/lib/posthog.ts
check "client reuse guard (!client)" \
grep -q 'if (!client)' src/lib/posthog.ts
check "returns shared client instance" \
grep -q 'return client' src/lib/posthog.ts
echo ""
echo "--- shutdown → flush migration ---"
check_not "no shutdown() calls in modified files" \
grep -lq '\.shutdown()' \
src/app/\(auth\)/onboarding/SubmitFirstOnboardingPage.tsx \
src/app/\(auth\)/onboarding/SubmitSecondOnboardingPage.tsx \
src/app/\(dashboard\)/apikeys/page.tsx \
src/app/\(dashboard\)/getting-started/page.tsx \
src/lib/analytics/tracking.ts
check "flush() used in SubmitFirstOnboardingPage" \
grep -q '\.flush()' src/app/\(auth\)/onboarding/SubmitFirstOnboardingPage.tsx
check "flush() used in SubmitSecondOnboardingPage" \
grep -q '\.flush()' src/app/\(auth\)/onboarding/SubmitSecondOnboardingPage.tsx
check "flush() used in apikeys page" \
grep -q '\.flush()' src/app/\(dashboard\)/apikeys/page.tsx
check "flush() used in getting-started page" \
grep -q '\.flush()' src/app/\(dashboard\)/getting-started/page.tsx
check "flush() used in tracking.ts" \
grep -q '\.flush()' src/lib/analytics/tracking.ts
echo ""
echo "=== Results: $PASS/$TOTAL passed, $FAIL failed ==="
[ "$FAIL" -eq 0 ] && echo "ALL CHECKS PASSED" || echo "SOME CHECKS FAILED"
exit "$FAIL"

View file

@@ -1,62 +0,0 @@
#!/usr/bin/env bash
# Proof reproducer for commit 15: parallelize getOptimizationEventById
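#
# The parallel shape these checks expect is roughly the following (a sketch;
# any select/include clauses are omitted):
#
#   const [event, features] = await Promise.all([
#     prisma.optimization_events.findFirst({ where: { trace_id } }),
#     prisma.optimization_features.findUnique({ where: { trace_id } }),
#   ]);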
set -euo pipefail
PASS=0
FAIL=0
TOTAL=0
check() {
local desc="$1"; shift
TOTAL=$((TOTAL + 1))
if "$@" >/dev/null 2>&1; then
echo " PASS: $desc"
PASS=$((PASS + 1))
else
echo " FAIL: $desc"
FAIL=$((FAIL + 1))
fi
}
check_not() {
local desc="$1"; shift
TOTAL=$((TOTAL + 1))
if "$@" >/dev/null 2>&1; then
echo " FAIL: $desc"
FAIL=$((FAIL + 1))
else
echo " PASS: $desc"
PASS=$((PASS + 1))
fi
}
ACTION_FILE="src/app/(dashboard)/review-optimizations/[traceId]/action.ts"
echo "=== Proof 15: parallelize getOptimizationEventById ==="
echo ""
echo "--- Parallelization checks ---"
check "Promise.all used in getOptimizationEventById" \
grep -q 'Promise.all' "$ACTION_FILE"
check "event and features destructured from Promise.all" \
grep -q '\[event, features\].*await Promise.all' "$ACTION_FILE"
check "optimization_events.findFirst inside Promise.all block" \
grep -q 'optimization_events.findFirst' "$ACTION_FILE"
check "optimization_features.findUnique inside Promise.all block" \
grep -q 'optimization_features.findUnique' "$ACTION_FILE"
echo ""
echo "--- Independence check ---"
check "features query uses trace_id param directly" \
grep -q 'where: { trace_id }' "$ACTION_FILE"
check_not "no sequential if(event) then features pattern" \
grep -q 'if (event)' "$ACTION_FILE"
echo ""
echo "=== Results: $PASS/$TOTAL passed, $FAIL failed ==="
[ "$FAIL" -eq 0 ] && echo "ALL CHECKS PASSED" || echo "SOME CHECKS FAILED"
exit "$FAIL"

View file

@@ -1,65 +0,0 @@
#!/usr/bin/env bash
# Proof reproducer for commit 16: React cache() dedup on trace page
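#
# The dedup shape these checks expect is roughly the following (a sketch;
# the query body is illustrative):
#
#   import { cache } from "react";
#   const getOptimizationFeature = cache(async (trace_id: string) =>
#     prisma.optimization_features.findUnique({ where: { trace_id } })
#   );
#   // generateMetadata() and the page component both call
#   // getOptimizationFeature(trace_id); React dedupes to a single query per request.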
set -euo pipefail
PASS=0
FAIL=0
TOTAL=0
check() {
local desc="$1"; shift
TOTAL=$((TOTAL + 1))
if "$@" >/dev/null 2>&1; then
echo " PASS: $desc"
PASS=$((PASS + 1))
else
echo " FAIL: $desc"
FAIL=$((FAIL + 1))
fi
}
TRACE_PAGE="src/app/trace/[trace_id]/page.tsx"
echo "=== Proof 16: React cache() dedup ==="
echo ""
echo "--- cache() setup checks ---"
check "imports cache from react" \
grep -q 'import { cache } from "react"' "$TRACE_PAGE"
check "getOptimizationFeature wrapped in cache()" \
grep -q 'const getOptimizationFeature = cache(' "$TRACE_PAGE"
echo ""
echo "--- Deduplication checks ---"
# Count direct findUnique calls — should be exactly 1 (inside the cached fn)
FIND_COUNT=$(grep -c 'optimization_features.findUnique' "$TRACE_PAGE" || true)  # || true: grep -c exits 1 on zero matches, which would abort under set -e
TOTAL=$((TOTAL + 1))
if [ "$FIND_COUNT" -eq 1 ]; then
echo " PASS: exactly 1 findUnique call (inside cached function)"
PASS=$((PASS + 1))
else
echo " FAIL: expected 1 findUnique call, found $FIND_COUNT"
FAIL=$((FAIL + 1))
fi
# Count calls to getOptimizationFeature — should be 2 (metadata + page)
CALL_COUNT=$(grep -c 'getOptimizationFeature(trace_id)' "$TRACE_PAGE" || true)
TOTAL=$((TOTAL + 1))
if [ "$CALL_COUNT" -eq 2 ]; then
echo " PASS: getOptimizationFeature called twice (metadata + page)"
PASS=$((PASS + 1))
else
echo " FAIL: expected 2 calls to getOptimizationFeature, found $CALL_COUNT"
FAIL=$((FAIL + 1))
fi
echo ""
echo "--- Type derivation check ---"
check "uses ReturnType instead of inline type" \
grep -q 'ReturnType<typeof getOptimizationFeature>' "$TRACE_PAGE"
echo ""
echo "=== Results: $PASS/$TOTAL passed, $FAIL failed ==="
[ "$FAIL" -eq 0 ] && echo "ALL CHECKS PASSED" || echo "SOME CHECKS FAILED"
exit "$FAIL"

View file

@@ -1,59 +0,0 @@
#!/usr/bin/env bash
# Proof reproducer for commit 17: parallelize LLM call detail queries
set -euo pipefail
PASS=0
FAIL=0
TOTAL=0
check() {
local desc="$1"; shift
TOTAL=$((TOTAL + 1))
if "$@" >/dev/null 2>&1; then
echo " PASS: $desc"
PASS=$((PASS + 1))
else
echo " FAIL: $desc"
FAIL=$((FAIL + 1))
fi
}
check_not() {
local desc="$1"; shift
TOTAL=$((TOTAL + 1))
if "$@" >/dev/null 2>&1; then
echo " FAIL: $desc"
FAIL=$((FAIL + 1))
else
echo " PASS: $desc"
PASS=$((PASS + 1))
fi
}
LLM_PAGE="src/app/observability/llm-call/[id]/page.tsx"
echo "=== Proof 17: parallelize LLM call detail queries ==="
echo ""
echo "--- Parallelization checks ---"
check "Promise.all used" \
grep -q 'Promise.all' "$LLM_PAGE"
check "llmCall and relatedErrors destructured from Promise.all" \
grep -q '\[llmCall, relatedErrors\].*await Promise.all' "$LLM_PAGE"
check "llm_calls.findUnique in Promise.all" \
grep -q 'llm_calls.findUnique' "$LLM_PAGE"
check "optimization_errors.findMany in Promise.all" \
grep -q 'optimization_errors.findMany' "$LLM_PAGE"
echo ""
echo "--- Sequential pattern removed ---"
check_not "no standalone relatedErrors assignment after llmCall" \
grep -q 'const relatedErrors = await prisma' "$LLM_PAGE"
echo ""
echo "=== Results: $PASS/$TOTAL passed, $FAIL failed ==="
[ "$FAIL" -eq 0 ] && echo "ALL CHECKS PASSED" || echo "SOME CHECKS FAILED"
exit "$FAIL"

View file

@@ -1,65 +0,0 @@
#!/usr/bin/env bash
# Proof reproducer for commit 18: remove unused dependencies
set -euo pipefail
PASS=0
FAIL=0
TOTAL=0
check() {
local desc="$1"; shift
TOTAL=$((TOTAL + 1))
if "$@" >/dev/null 2>&1; then
echo " PASS: $desc"
PASS=$((PASS + 1))
else
echo " FAIL: $desc"
FAIL=$((FAIL + 1))
fi
}
check_not() {
local desc="$1"; shift
TOTAL=$((TOTAL + 1))
if "$@" >/dev/null 2>&1; then
echo " FAIL: $desc"
FAIL=$((FAIL + 1))
else
echo " PASS: $desc"
PASS=$((PASS + 1))
fi
}
echo "=== Proof 18: remove unused dependencies ==="
echo ""
echo "--- Removed dependencies ---"
check_not "@azure/msal-node removed from package.json" \
grep -q '@azure/msal-node' package.json
check_not "github-markdown-css removed from package.json" \
grep -q 'github-markdown-css' package.json
check_not "react-papaparse removed from package.json" \
grep -q 'react-papaparse' package.json
echo ""
echo "--- No imports of removed packages in source ---"
check_not "no @azure/msal-node imports in source" \
grep -rq '@azure/msal-node' src/
check_not "no github-markdown-css imports in source" \
grep -rq 'github-markdown-css' src/
echo ""
echo "--- Replacement dependency ---"
check "papaparse added to dependencies" \
grep -q '"papaparse"' package.json
check "@types/papaparse added to devDependencies" \
grep -q '@types/papaparse' package.json
echo ""
echo "=== Results: $PASS/$TOTAL passed, $FAIL failed ==="
[ "$FAIL" -eq 0 ] && echo "ALL CHECKS PASSED" || echo "SOME CHECKS FAILED"
exit "$FAIL"