Wireframe · /optimize — xbow-modeled self-serve intent capture
Codeflash Lightspeed · Optimization Assessment

Autonomous, expert-reviewed performance optimization, on demand.

A traditional performance audit takes months. Codeflash Lightspeed runs in days. Our codeflash-agent generates candidates against your benchmark, our senior performance engineers review every one, and you receive merge-ready PRs on your repo.

Starts at $5,000 · credited against a full engagement if you convert within 60 days · pricing →
01 · Connect

GitHub App + benchmark

Install our GitHub App on one repo, point it at a benchmark file. Read-only, scoped, expires in 14 days.
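For scale, the benchmark can be a single pytest-benchmark test that exercises the hot path. A minimal sketch, with hypothetical module and function names (point it at whatever you want optimized):

    # benchmarks/test_hot_path.py: a minimal pytest-benchmark file.
    # `myservice.images.resize_batch` is a hypothetical target function.
    from myservice.images import resize_batch

    def test_resize_batch(benchmark):
        payload = [b"\x00" * 4096] * 256           # synthetic input, no production data
        result = benchmark(resize_batch, payload)  # fixture times repeated calls
        assert len(result) == 256                  # correctness check keeps wins honest

Run it with pytest benchmarks/ --benchmark-only. Any harness that produces a stable, repeatable number works.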

02 · Run

Agent runs 48–72h

codeflash-agent profiles your code, generates optimization candidates, and benchmarks each one in a sandbox on our hardware.

03 · Review

Engineer audit

A Codeflash performance engineer reviews every candidate. Rejects the fragile ones. Keeps the shippable ones.

04 · Merge

Reviewable PRs

Up to 5 draft PRs land on your repo, each with before/after benchmarks and reviewer rationale. Report delivered within 5 business days.

Corporate email required. We reject gmail.com, outlook.com, proton.me, etc.
If the repo is private, install the Codeflash GitHub App on it before submitting. If you skip this, we'll send the install link immediately after you submit.
The repo URL cannot be changed after submission. Double-check it before you submit.
Path or glob to the benchmark file in the repo. If the benchmark isn't in the repo, upload it here; we need a reproducible measurement before we start.
reCAPTCHA · "I'm not a robot"
We'll contact you within 24 hours with the GitHub App installation link and kick off the run. You'll receive your optimization report and draft PRs within 5 business days.
Form structure modeled on xbow/pentest. Key shape: self-serve intent capture, billing upfront, one immutable target field (repo URL replaces their URL), plan dropdown as qualifier, legal checkboxes identical in purpose. The 24-hour human follow-up is deliberate — same as xbow, this is human-gated activation disguised as self-serve.

Sample optimization report

See what you get back before you commit. Redacted report from a 7-week engagement — 24 PRs, 4 stacked bottlenecks, 90% infra cost cut.

Download sample (PDF) →

What we commit to

  • SOC 2 Type II
  • Your code is never used to train models. Ever.
  • Sandboxed, zero-egress execution
  • Read-only GitHub App, expires in 14 days
  • Every PR reviewed by a performance engineer
  • Report within 5 business days
  • Named a Gartner® Cool Vendor™ 2025

What we need from you

  • One repo
  • One benchmark file (or upload one)
  • Corporate email
  • Billing address
  • An objective worth measuring

Typical result

Lightspeed assessments deliver 3–8 reviewed PRs. [CONFIRM: step 04 says "up to 5 draft PRs"; reconcile the two numbers.] Typical speedup range: 2×–20× on the targeted workload. Ships in 5 business days.

No reviewed PRs? Assessment fee refunded in full.

Open design questions for this page:
· Price anchor: [CONFIRM $5,000]. xbow uses $4K for a pentest. For us, one reviewed PR is worth more than a pentest report — but higher than $10K probably breaks the "self-serve" framing.
· Credit-back mechanic: we're saying "credited against a full engagement within 60 days". [CONFIRM] this vs. "flat paid diagnostic with a 5× identification floor" (the §7.2 draft). Can't run both promises on the site without confusion.
· No-PR refund: added "no reviewed PRs = refund". [CONFIRM] — changes the economic model. Alternative: guarantee at least 1 PR or money back.
· Private-repo access: current copy says install GitHub App before submit. Worth A/B testing against "fill form first, we send install link" (lower friction, but higher drop-off between intent and activation).
· Benchmark-upload path: drafted as a link in help text. Real question: do we allow "no benchmark, we'll write one for you" as a paid add-on, or reject those runs outright?
· Do we replace the homepage primary CTA "Book a call" with "Start optimizing"? Or keep both and measure?

The short list.

What if you can't find any optimizations?

If no reviewed PRs land on your repo, the assessment fee is refunded in full. That's happened on zero Lightspeed runs to date. [CONFIRM phrasing + the "zero to date" claim]

Do you need access to our production data?

No. The agent runs against your benchmark in our sandbox. Benchmarks use synthetic or scrubbed inputs. We never touch your production systems.

What if we don't have a benchmark?

Benchmarks are non-negotiable — without one we can't measure a win. If you don't have one, our engineers can write one for you as a paid add-on. Alternatively, upload a py-spy / pprof / perf profile and we'll deliver a diagnostic report instead of PRs.

Will you use our code to train models?

Never. Not ours, not third-party.

Can we run it on-prem?

The Lightspeed assessment runs in our sandbox. If you need on-prem, that's a Full Engagement — talk to us.

What languages?

Python, Java, JavaScript, TypeScript, Go, and more.

Not sure Lightspeed is right?

Book a 20-minute diagnostic call with a performance engineer. We'll help you scope the right path.

Book a diagnostic