optimization Pipeline (#1860)

Co-authored-by: saga4 <saga4@codeflashs-MacBook-Air.local>
Sarthak Agarwal 2025-10-02 12:21:10 -07:00 committed by GitHub
parent 439afac72b
commit 0c16414301
GPG key ID: B5690EEEBB952194
16 changed files with 7706 additions and 0 deletions


@@ -0,0 +1,20 @@
.venv/
__pycache__/
*.pyc
*.pyo
*.DS_Store
.idea/
.vscode/
dist/
build/
*.log
*.trace
k8s/generated-job-*.yaml
/work/
.env
test_key.pem
tools/keys
server/logs
config
server/jobs.json
config/repos.csv


@@ -0,0 +1,152 @@
# Optimizer Factory for Codeflash — EC2-backed

## What this is

A minimal pipeline to run Codeflash optimizations across many Python repositories using on-demand EC2 instances. Configure a CSV, launch from the UI, stream logs directly from the instance, and approve results in Codeflash Staging.
## Prerequisites
- AWS account with permissions for EC2 and IAM
- AWS CLI installed and configured (`aws configure` with an IAM user/role)
- GitHub Personal Access Token (classic) with `public_repo` scope
- Codeflash API key from `app.codeflash.ai`
- Python 3.10+
## Project layout

- scripts/
  - run_optimization.sh — forks/clones, detects roots, runs Codeflash
  - detect_roots.py — simple heuristics for module/tests roots
- server/
  - app.py — Flask API serving the static UI and EC2 job actions
  - static/ — plain HTML/CSS/JS UI to manage repos and jobs
  - analyzer.py — Anthropic-powered analyzer to extract per-repo env config
- config/repos.csv — list of repos and resource tiers
- tools/requirements.txt — local deps for the server (Flask, boto3, paramiko)
- env.example — environment template for EC2 and secrets
## Step-by-step setup

1. Configure AWS
   - Copy `env.example` to `.env` and fill it in (AWS region, EC2 key pair, security group, AMI ID, SSH key path).
   - **Important for WSL users**: see the SSH Key Configuration section below for proper setup.
2. Install dependencies
   - `pip install -r tools/requirements.txt`
3. Provide tokens locally
   - Set the env vars `CODEFLASH_API_KEY` and `GITHUB_TOKEN` in your shell or `.env`.
4. Ensure networking
   - The security group must allow outbound HTTPS, and inbound SSH from your IP if you want direct access. The instance reaches GitHub and PyPI over the internet.
5. Configure repositories to process
   - Edit `config/repos.csv` and add rows:

     repo_url,module_root,tests_root,resource_tier
     https://github.com/psf/requests,requests,tests,small
     https://github.com/pallets/flask,src/flask,tests,medium
     https://github.com/numpy/numpy,numpy,numpy/tests,large
     https://github.com/user/small-util,auto,auto,small

   - Columns:
     - `repo_url` — upstream repository URL
     - `module_root`, `tests_root` — path, or `auto` to auto-detect
     - `resource_tier` — `small` | `medium` | `large` (selects the job definition)
6. Run jobs
   - Start the server: `python server/app.py`
   - Open the UI: `http://localhost:5000`
   - From the UI you can:
     - Add/update/delete repos (edits `config/repos.csv`)
     - Run optimization for one repo or for all (each launches a dedicated EC2 instance)
     - Check job status (instance state and exit code)
     - View logs (tail of `/var/log/codeflash-optimization.log` on the instance)
     - Analyze a repo via LLM (Anthropic) and apply the proposed config to the CSV
7. Monitor and review
   - EC2 console: watch instances launching and terminating
   - UI logs panel: streams the remote log file
   - Codeflash Staging: approve optimizations
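The `config/repos.csv` format above is easy to parse and validate before launching jobs. A minimal sketch, assuming the tier names and the `auto` sentinel described in the columns list (the validation rules themselves are an assumption, not part of the pipeline):

```python
import csv
import io

VALID_TIERS = {"small", "medium", "large"}

SAMPLE = """\
repo_url,module_root,tests_root,resource_tier
https://github.com/psf/requests,requests,tests,small
https://github.com/pallets/flask,src/flask,tests,medium
"""

def load_repos(text: str) -> list[dict]:
    """Parse repos.csv rows, normalizing 'auto' roots to None."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        if row["resource_tier"] not in VALID_TIERS:
            raise ValueError(f"unknown tier: {row['resource_tier']}")
        # 'auto' means root detection should run on the instance instead.
        row["module_root"] = None if row["module_root"] == "auto" else row["module_root"]
        row["tests_root"] = None if row["tests_root"] == "auto" else row["tests_root"]
        rows.append(row)
    return rows

repos = load_repos(SAMPLE)
```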
## Retries and tuning
- If a job fails with OOM, change the `resource_tier` in `config/repos.csv` to a larger tier and re-run the launcher.
- For more automation (e.g., automatic tier escalation), consider adding AWS Step Functions later.
## How it works (under the hood)
- The server launches an EC2 instance per job and waits for SSH.
- It uploads `scripts/run_optimization.sh` and `scripts/detect_roots.py`, exports env with analyzer hints, and starts the optimization.
- The job writes logs to `/var/log/codeflash-optimization.log`; the server tails this file.
- A background watcher terminates the instance after completion.
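The background watcher can be as simple as a polling thread that fires a terminate call once the job reports completion. A sketch with stubbed-in callables (the real server would poll the instance via boto3; the names here are illustrative):

```python
import threading
import time

def watch_and_terminate(job_done, terminate, poll_interval=0.01):
    """Poll until the job reports completion, then terminate the instance."""
    def _loop():
        while not job_done():
            time.sleep(poll_interval)
        terminate()
    t = threading.Thread(target=_loop, daemon=True)
    t.start()
    return t

# Simulated job: completes immediately; terminate just flips a flag.
state = {"terminated": False}
t = watch_and_terminate(lambda: True, lambda: state.update(terminated=True))
t.join(timeout=5)
```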
## Notes

- Ensure the Codeflash GitHub App is installed for your account/org so forks are covered.
- Choose a sufficiently large EC2 instance type; the default is `c7i.2xlarge`, adjust as needed.
## LLM-powered Repo Analysis (optional)

- Purpose: suggest per-repo configuration (module root, tests root, resource tier) and optional safe setup commands.
- Requirements:
  - `ANTHROPIC_API_KEY` set in the server's environment
  - `pip install -r tools/requirements.txt` (includes `anthropic`, `jsonschema`)
- How it works:
  - UI → Analyze (🧠) calls `/api/analyze_repo` and shows results once ready
  - Results are stored as `config/analysis/<org>-<repo>.json`
  - You can selectively apply `module_root`, `tests_root`, and `resource_tier` to the CSV
  - On job submit, if analysis exists, the backend passes sanitized overrides to the container via env:
    - `SYSTEM_PACKAGES`: allowlisted apt packages
    - `PRE_INSTALL_CMDS`, `INSTALL_CMDS`, `POST_INSTALL_CMDS`: safe, filtered commands joined with `&&`
    - Non-secret env vars, if provided
  - `scripts/entrypoint.sh` executes these overrides before running the default detection path
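Building those `&&`-joined override variables amounts to filtering commands against an allowlist and joining the survivors. A sketch, assuming a much smaller allowlist than the backend actually applies:

```python
# Hypothetical allowlist; the real backend's filter is broader.
ALLOWED_PREFIXES = ("pip install ", "python -m pip install ")

def sanitize_cmds(cmds: list[str]) -> str:
    """Keep only allowlisted commands and join them with '&&' for the env var."""
    safe = [c.strip() for c in cmds if c.strip().startswith(ALLOWED_PREFIXES)]
    return " && ".join(safe)

env_val = sanitize_cmds([
    "pip install numpy",
    "rm -rf /",                     # dropped: not on the allowlist
    "python -m pip install pytest",
])
```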
## SSH Key Configuration
**Critical for WSL Users**: If you're running this on Windows Subsystem for Linux (WSL), you must configure your SSH key properly to avoid permission errors.
1. **Copy SSH key to WSL filesystem**:
```bash
# Copy your SSH key from Windows to WSL home directory
cp /mnt/c/path/to/your/key.pem ~/.ssh/your_key_name.pem
```
2. **Set correct permissions**:
```bash
# Set restrictive permissions (required by SSH)
chmod 600 ~/.ssh/your_key_name.pem
```
3. **Update .env file**:
```bash
# Use WSL path, not Windows path
SSH_KEY_PATH=~/.ssh/your_key_name.pem
```
**Why this is necessary**: SSH requires strict file permissions (600) for private keys. Windows file permissions don't translate correctly to WSL, causing "Permissions are too open" errors. By copying the key to the WSL filesystem and setting permissions with `chmod`, you ensure SSH can read the key properly.
**Troubleshooting SSH Issues**:
- If you get "Permissions are too open" error: Ensure the key is in WSL filesystem (`~/.ssh/`) not Windows filesystem (`/mnt/c/`)
- If you get "No such file or directory": Verify the path in `.env` matches the actual key location
- If you get "Permission denied": Check that `chmod 600` was applied successfully with `ls -la ~/.ssh/`
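The permission requirement can also be checked programmatically before attempting SSH. A standard-library sketch (the helper name is illustrative; the demo uses a temporary file as a stand-in for the key):

```python
import os
import stat
import tempfile

def key_permissions_ok(path: str) -> bool:
    """True if the private key is readable/writable by the owner only (0600)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode == 0o600

# Demonstrate with a temporary stand-in for the key file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    key_path = f.name
os.chmod(key_path, 0o644)           # "Permissions are too open"
too_open = key_permissions_ok(key_path)
os.chmod(key_path, 0o600)           # the fix: chmod 600
ok = key_permissions_ok(key_path)
os.unlink(key_path)
```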
## Security and safety

- Commands from the LLM are filtered through an allowlist; risky patterns are dropped.
- Only non-secret env vars are passed through; secrets stay in AWS Secrets Manager.
- If analysis is unavailable, the system falls back to the current heuristic detection.


@@ -0,0 +1,105 @@
# Environment for EC2-based Optimizer Factory
# Copy this file to .env and fill in your actual values
# =============================================================================
# AWS CONFIGURATION
# =============================================================================
# AWS_REGION: AWS region where your EC2 instances will run
# Used by: server/app.py
# Example: us-east-1, us-west-2, eu-west-1
AWS_REGION=us-east-1
## EC2 INSTANCE SETTINGS
# AWS_KEY_NAME: Name of your EC2 Key Pair used for SSH
AWS_KEY_NAME=your_key_pair_name
# AWS_SECURITY_GROUP: Security group ID allowing SSH egress/ingress to instance
AWS_SECURITY_GROUP=sg-xxxxxxxx
# AWS_INSTANCE_TYPE: Instance type (example: c7i.2xlarge)
AWS_INSTANCE_TYPE=c7i.2xlarge
# AWS_AMI_ID: Ubuntu 22.04 AMI ID in your region
AWS_AMI_ID=ami-xxxxxxxx
# SSH_KEY_PATH: Local path to the private key for AWS_KEY_NAME
# IMPORTANT: This must be an absolute path or a path that resolves correctly
# Examples:
# - Linux/macOS: ~/.ssh/your_key_pair.pem or /home/user/.ssh/your_key_pair.pem
# - WSL: ~/.ssh/your_key_pair.pem (NOT /mnt/c/path/to/key.pem)
# - Windows: C:\Users\YourName\.ssh\your_key_pair.pem
#
# WSL USERS: You MUST copy your SSH key to the WSL filesystem and set correct permissions:
# 1. Copy key: cp /mnt/c/path/to/your/key.pem ~/.ssh/your_key_pair.pem
# 2. Set permissions: chmod 600 ~/.ssh/your_key_pair.pem
# 3. Use WSL path: SSH_KEY_PATH=~/.ssh/your_key_pair.pem
#
# The key file must have 600 permissions (readable only by owner) for SSH to work.
SSH_KEY_PATH=~/.ssh/your_key_pair.pem
## (Removed) AWS Batch configuration: no longer used
# =============================================================================
# SERVER CONFIGURATION
# =============================================================================
# PORT: Port number for the Flask web server
# Used by: server/app.py to start the web interface
# Default: 5000
PORT=5000
# ANTHROPIC_API_KEY: API key for Anthropic Claude LLM analysis
# Used by: server/analyzer.py for repository analysis and configuration
# Get this from: https://console.anthropic.com/ > API Keys
ANTHROPIC_API_KEY=your_anthropic_api_key_here
# GITHUB_TOKEN: GitHub Personal Access Token for repository access
# Used by: server/analyzer.py (for accessing private repos during analysis)
# scripts/run_optimization.sh (for GitHub operations during optimization via EC2)
# Get this from: GitHub Settings > Developer settings > Personal access tokens
# Required scopes: repo (for private repos), public_repo (for public repos)
GITHUB_TOKEN=your_github_token_here
# =============================================================================
# TROUBLESHOOTING SSH KEY ISSUES
# =============================================================================
#
# Common SSH key problems and solutions:
#
# 1. "Permissions are too open" error:
# - Problem: SSH key has incorrect permissions
# - Solution: Run `chmod 600 ~/.ssh/your_key_pair.pem`
# - WSL users: Ensure key is in WSL filesystem, not Windows filesystem
#
# 2. "No such file or directory" error:
# - Problem: SSH_KEY_PATH points to non-existent file
# - Solution: Verify the path exists with `ls -la ~/.ssh/your_key_pair.pem`
# - Check that the path in .env matches the actual file location
#
# 3. "Permission denied" error:
# - Problem: SSH key permissions are too restrictive or incorrect
# - Solution: Run `chmod 600 ~/.ssh/your_key_pair.pem` (not 700 or 644)
# - Verify with `ls -la ~/.ssh/your_key_pair.pem` (should show -rw-------)
#
# 4. WSL-specific issues:
# - Problem: Using Windows path (/mnt/c/...) causes permission issues
# - Solution: Copy key to WSL filesystem: `cp /mnt/c/path/to/key.pem ~/.ssh/`
# - Then set permissions: `chmod 600 ~/.ssh/your_key_pair.pem`
# - Update .env to use WSL path: `SSH_KEY_PATH=~/.ssh/your_key_pair.pem`
#
# 5. Testing SSH connectivity:
# - Test manually: `ssh -i ~/.ssh/your_key_pair.pem ubuntu@YOUR_EC2_IP`
# - If this works, the issue is in the application configuration
# - If this fails, the issue is with the SSH key or EC2 instance setup
# =============================================================================
# NOTES ABOUT REMOTE JOB VARIABLES
# =============================================================================
# The following variables are injected remotely on EC2 per job:
#
# GITHUB_REPO_URL: Set by server/app.py when launching the job
# MODULE_ROOT: Set by server/app.py
# TESTS_ROOT: Set by server/app.py
# CODEFLASH_API_KEY: Taken from local environment and exported on the instance
# FORK_OWNER: Determined by gh on the instance
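Injecting the per-job variables listed above typically means building an `export` prefix for the remote SSH command, with shell quoting so values survive the round trip. A sketch using `shlex.quote` (the helper itself is hypothetical; the variable names come from the comments above):

```python
import shlex

def build_remote_env(env: dict[str, str]) -> str:
    """Render shell-quoted 'export K=V;' pairs for prefixing a remote command."""
    return " ".join(f"export {k}={shlex.quote(v)};" for k, v in env.items())

prefix = build_remote_env({
    "GITHUB_REPO_URL": "https://github.com/psf/requests",
    "MODULE_ROOT": "requests",
})
```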


@@ -0,0 +1,44 @@
import os
import json


def find_module_root(cwd: str) -> str | None:
    # Prefer src layout
    src_dir = os.path.join(cwd, "src")
    if os.path.isdir(src_dir):
        for name in os.listdir(src_dir):
            path = os.path.join(src_dir, name)
            if os.path.isdir(path) and os.path.isfile(os.path.join(path, "__init__.py")):
                return os.path.relpath(path, cwd).replace("\\", "/")
    # Fallback: top-level package dir containing __init__.py
    for name in os.listdir(cwd):
        if name in {"tests", "test", "benchmarks", "venv", ".venv", "build", "dist", ".git"}:
            continue
        path = os.path.join(cwd, name)
        if os.path.isdir(path) and os.path.isfile(os.path.join(path, "__init__.py")):
            return name
    return None


def find_tests_root(cwd: str) -> str | None:
    for cand in ("tests", "test", "testing"):
        if os.path.isdir(os.path.join(cwd, cand)):
            return cand
    return None


def main() -> None:
    cwd = os.getcwd()
    module_root = find_module_root(cwd)
    tests_root = find_tests_root(cwd)
    print(json.dumps({
        "module_root": module_root or "",
        "tests_root": tests_root or "",
    }))


if __name__ == "__main__":
    main()
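A quick way to sanity-check the src-layout heuristic is to run it against a synthetic tree. This snippet re-declares the detection logic inline so it is self-contained (it mirrors detect_roots.py's src-layout branch):

```python
import os
import tempfile

def find_module_root(cwd):
    """Mirror of detect_roots.py's src-layout branch."""
    src_dir = os.path.join(cwd, "src")
    if os.path.isdir(src_dir):
        for name in os.listdir(src_dir):
            path = os.path.join(src_dir, name)
            if os.path.isdir(path) and os.path.isfile(os.path.join(path, "__init__.py")):
                return os.path.relpath(path, cwd).replace("\\", "/")
    return None

# Build src/mypkg/__init__.py in a temp dir and detect it.
with tempfile.TemporaryDirectory() as tmp:
    pkg = os.path.join(tmp, "src", "mypkg")
    os.makedirs(pkg)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    result = find_module_root(tmp)
```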


@@ -0,0 +1,60 @@
#!/bin/bash
set -e

echo "--- [ENTRYPOINT] Starting environment setup ---"
cd /repo

if [ -z "${GITHUB_REPO_URL:-}" ]; then echo "GITHUB_REPO_URL is required"; exit 1; fi

# Optional: run system packages installation if provided (allowlisted by BE)
if [ -n "${SYSTEM_PACKAGES:-}" ]; then
    echo "Installing system packages: ${SYSTEM_PACKAGES}"
    # Intentionally unquoted so the package list word-splits into separate arguments.
    sudo apt-get update && sudo apt-get install -y --no-install-recommends ${SYSTEM_PACKAGES} || true
fi

echo "Cloning ${GITHUB_REPO_URL} to inspect for dependencies..."
git clone "${GITHUB_REPO_URL}" .

echo "Detecting package manager..."
if [ -f "poetry.lock" ]; then
    echo "Poetry project detected. Installing dependencies with 'poetry install'."
    poetry install --with dev || poetry install
    export VENV_PATH=$(poetry env info --path)
elif [ -f "pyproject.toml" ] && grep -q "\[tool.uv\]" "pyproject.toml"; then
    echo "UV project detected. Installing dependencies with 'uv sync'."
    uv venv
    uv sync --all-extras || uv sync
    export VENV_PATH=$(pwd)/.venv
elif [ -f "requirements.txt" ]; then
    echo "requirements.txt detected. Installing with pip."
    python -m venv .venv
    source .venv/bin/activate
    pip install -r requirements.txt || true
    [ -f "requirements-test.txt" ] && pip install -r requirements-test.txt || true
    [ -f "requirements-dev.txt" ] && pip install -r requirements-dev.txt || true
    export VENV_PATH=$(pwd)/.venv
else
    echo "WARNING: No standard dependency file found. Proceeding without installation."
    python -m venv .venv
    export VENV_PATH=$(pwd)/.venv
fi

# Optional: LLM-provided pre/install/post commands (allowlisted and sanitized by BE)
run_cmds_if_any() {
    CMDS_VAR="$1"
    CMDS_VAL="${!CMDS_VAR}"
    if [ -n "$CMDS_VAL" ]; then
        echo "Running commands from $CMDS_VAR"
        /bin/bash -lc "$CMDS_VAL" || true
    fi
}
run_cmds_if_any PRE_INSTALL_CMDS
run_cmds_if_any INSTALL_CMDS
run_cmds_if_any POST_INSTALL_CMDS

echo "--- [ENTRYPOINT] Handing off to optimization script ---"
rm -rf /repo/*
/app/scripts/run_optimization.sh


@@ -0,0 +1,515 @@
#!/usr/bin/env python3
# Shebang line: specifies the interpreter to be used for executing the script (Python 3).
import os
# Imports the os module for interacting with the operating system, such as file paths and environment variables.
import sys
# Imports the sys module, providing access to system-specific parameters and functions, like sys.exit.
import json
# Imports the json module for working with JSON data (encoding and decoding).
import subprocess
# Imports the subprocess module for running new processes (shell commands).
from datetime import datetime
# Imports the datetime class from the datetime module, used for generating timestamps.
def sh(cmd: str) -> subprocess.CompletedProcess:
# Function to execute a shell command.
# cmd: The shell command string to execute.
# -> subprocess.CompletedProcess: Returns the result of the executed process.
return subprocess.run(
cmd, # The command to execute.
shell=True, # Execute the command through the shell (allows shell features like piping).
stdout=subprocess.PIPE, # Captures standard output.
stderr=subprocess.STDOUT, # Directs standard error to standard output (so both are captured in stdout).
text=True # Decodes stdout and stderr as text using the default encoding.
)
def list_tree(root: str, max_depth: int = 2, max_entries: int = 500) -> str:
# Function to generate a simplified file tree structure string for a given root directory.
# root: The starting directory path.
# max_depth: The maximum directory depth to traverse (default 2).
# max_entries: The maximum total number of file entries to include (default 500).
# -> str: Returns the file tree as a single string, with lines separated by newlines.
out_lines = []
# List to store the lines of the file tree output.
count = 0
# Counter for the number of files added to the output.
for cur_root, dirs, files in os.walk(root):
# os.walk iterates through the directory tree rooted at 'root'.
# cur_root: The current directory path.
# dirs: A list of subdirectory names in cur_root.
# files: A list of file names in cur_root.
# Calculate the current depth relative to the initial root.
depth = cur_root[len(root):].count(os.sep)
# Calculate indentation based on depth.
indent = " " * depth
# Add the current directory name to the output with indentation and a trailing slash.
out_lines.append(f"{indent}{os.path.basename(cur_root)}/")
# Process and list files in the current directory, limiting to the first 200 files in this directory.
for f in sorted(files)[:200]:
# Check if the maximum total entry count has been reached.
if count >= max_entries:
# If max entries reached, append a truncation message and return the output.
out_lines.append(" ... (truncated)")
return "\n".join(out_lines)
# Add the file name to the output with indentation.
out_lines.append(f"{indent} {f}")
# Increment the file count.
count += 1
# Prune the directory list if the maximum depth is reached.
if depth >= max_depth:
# Setting dirs[:] = [] prevents os.walk from descending into subdirectories of the current directory.
dirs[:] = []
# Join all collected lines into a single string and return it.
return "\n".join(out_lines)
def main() -> int:
# Main function to execute the LLM setup helper logic.
# -> int: Returns an exit code (0 for success, non-zero for error).
# Get the Anthropic API key from the environment variable.
api_key = os.getenv("ANTHROPIC_API_KEY", "").strip()
# Check if the API key is available. If not, log a message and exit gracefully (assuming no LLM interaction is possible).
if not api_key:
print("ANTHROPIC_API_KEY not set; skipping LLM setup helper.")
return 0
# Get work repository path from environment, defaulting to a specific path.
work_repo = os.environ.get("WORK_REPO", "/home/ubuntu/work/repo")
# Get the root directory for tests from environment variables, preferring LLM_TESTS_ROOT.
tests_root = os.environ.get("LLM_TESTS_ROOT") or os.environ.get("TESTS_ROOT") or ""
# Get the pytest command from environment variables, defaulting to "pytest".
pytest_cmd = os.environ.get("LLM_PYTEST_CMD") or os.environ.get("PYTEST_CMD") or "pytest"
# Define the directory for conversation logs.
conv_dir = "/home/ubuntu/app/logs"
# Create the log directory if it doesn't exist.
os.makedirs(conv_dir, exist_ok=True)
# Generate an ISO-formatted timestamp for the log file name.
ts = datetime.utcnow().isoformat(timespec="seconds").replace(":", "-")
# Define the path for the specific log file using the timestamp.
conv_file = os.path.join(conv_dir, f"llm-setup-{ts}.log")
# Define the path for a symbolic link that points to the latest log file.
symlink_path = os.path.join(conv_dir, "llm-setup.log")
# --- Symlink Creation ---
try:
# Check if the symlink path exists (either as a link or a regular file/dir).
if os.path.islink(symlink_path) or os.path.exists(symlink_path):
try:
# Attempt to delete the existing symlink or file/directory.
os.unlink(symlink_path)
except Exception:
# Ignore errors during unlinking.
pass
# Create a new symbolic link from symlink_path to conv_file.
os.symlink(conv_file, symlink_path)
except Exception:
# Ignore errors during symlink creation (e.g., permission issues).
pass
# --- End Symlink Creation ---
# Derive repo slug for memory files from GITHUB_REPO_URL if available, else work repo basename
repo_url_for_slug = os.environ.get("GITHUB_REPO_URL", "").strip()
slug = None
if repo_url_for_slug:
parts = repo_url_for_slug.rstrip("/").split("/")
if len(parts) >= 2:
org = parts[-2]
name = parts[-1].replace(".git", "")
slug = f"{org}-{name}"
if not slug:
slug = os.path.basename(work_repo.rstrip("/")) or "repo"
# Session JSONL file for full conversation logging
conv_jsonl = os.path.join(conv_dir, f"llm-setup-{ts}.jsonl")
# Persistent memory JSONL across runs for this repo
memory_file = os.path.join(conv_dir, f"llm-memory-{slug}.jsonl")
def _now_iso() -> str:
return datetime.utcnow().isoformat(timespec="seconds") + "Z"
def _append_jsonl(path: str, obj: dict) -> None:
try:
with open(path, "a", encoding="utf-8") as jf:
jf.write(json.dumps(obj, ensure_ascii=False) + "\n")
except Exception:
# JSONL logging errors should never crash the helper
pass
def _load_memory_tail(path: str, max_messages: int = 10) -> list[dict]:
try:
if not os.path.exists(path):
return []
with open(path, "r", encoding="utf-8") as jf:
lines = jf.readlines()[-max_messages:]
out = []
for line in lines:
line = line.strip()
if not line:
continue
try:
obj = json.loads(line)
role = obj.get("role")
content = obj.get("content")
if isinstance(role, str) and isinstance(content, str):
out.append({"role": role, "content": content})
except Exception:
continue
return out
except Exception:
return []
def log(msg: str) -> None:
# Inner function to append a message to the current log file.
with open(conv_file, "a", encoding="utf-8") as f:
# Open the log file in append mode ('a') with UTF-8 encoding.
f.write(f"[{_now_iso()}] {msg}\n")
# Write the message with timestamp followed by a newline.
# Initial logging of the helper start and key parameters.
log("=== LLM Setup Helper Started ===")
log(f"Repo: {work_repo}")
log(f"Tests root: {tests_root}")
log(f"Pytest cmd: {pytest_cmd}")
_append_jsonl(conv_jsonl, {"ts": _now_iso(), "event": "start", "repo": work_repo, "tests_root": tests_root, "pytest_cmd": pytest_cmd})
# Generate the file tree string for the repository.
tree = list_tree(work_repo)
# Prepare Anthropic client lazily (import only when needed, after API key check).
try:
from anthropic import Anthropic
# Attempt to import the Anthropic client library.
except Exception as e:
# If import fails, log the error and exit with an error code.
log(f"Anthropic client import failed: {e}")
return 1
# Initialize the Anthropic client with the retrieved API key.
client = Anthropic(api_key=api_key)
# Define the system prompt, which sets the LLM's role, goal, and output format/rules.
system_prompt = (
"You are a reliable setup assistant. Your goal: get tests running to at least 50% passing. "
"You can only respond with JSON following the schema: {\n"
" \"actions\": [ { \"type\": \"shell\", \"cmd\": \"string\" } ... ],\n"
" \"comment\": \"brief reasoning\"\n} . "
"Rules:\n- Only use safe package commands (pip install <pkg>), avoid destructive ops.\n"
"- DO NOT rewrite pyproject.toml or packaging files destructively. Prefer adding missing deps via pip.\n"
"- Avoid removing files or changing project layout; prefer the simplest fix.\n"
"- Assume venv is active.\n- Prefer installing missing Python packages explicitly.\n"
"- Use one or more actions; keep each action minimal.\n"
"- You are not allowed to change directories (cd), or use semicolons (;) or pipes (|).\n"
"- You MAY chain multiple allowed install commands using '&&' only. Each side of '&&' must be an allowed install command.\n"
"- Use the repo venv interpreter for all Python/pip: .venv/bin/python -m pip install <pkgs> (preferred).\n"
"- Allowed actions: {type: shell, cmd: '.venv/bin/python -m pip install ...' | 'python -m pip install ...' | 'pip install ...' | 'python devscripts/install_deps.py'} | {type: read, path: 'relative/path'} | {type: finish}.\n"
"- Do NOT run tests (pytest) or benchmarks; we will run tests after you finish.\n"
"- After you are done with installation steps, respond with a single {\"actions\":[{\"type\":\"finish\"}],\"comment\":\"...\"}.\n"
"- We will provide you the remaining tries_left each turn; optimize your plan accordingly.\n"
"- If you need to read a small file, request it using: {\"actions\":[{\"type\":\"read\",\"path\":\"relative/path\"}]} (limit to key config/tests).\n"
)
# Prepare the initial context data to send to the LLM.
user_context = {
"repo_path": work_repo,
"tests_root": tests_root,
"pytest_cmd": pytest_cmd,
"tree": tree,
}
# Initialize the conversation history with memory, then initial context message.
chat = []
prior_memory = _load_memory_tail(memory_file, max_messages=10)
if prior_memory:
chat.extend(prior_memory)
log(f"Loaded conversation memory: {len(prior_memory)} messages")
_append_jsonl(conv_jsonl, {"ts": _now_iso(), "event": "memory_loaded", "count": len(prior_memory)})
# Include minimal test tail if available
test_tail = ""
try:
test_log_path = os.environ.get("TEST_LOG_FILE", "").strip()
if test_log_path and os.path.isfile(test_log_path):
with open(test_log_path, "r", encoding="utf-8", errors="ignore") as tf:
content = tf.read()
test_tail = content[-4000:]
except Exception:
test_tail = ""
initial_payload = {"context": user_context}
if test_tail:
initial_payload["last_test_output_tail"] = test_tail
initial_payload["tries_left"] = int(os.environ.get("LLM_SETUP_MAX_STEPS", "10") or "10")
chat.append({"role": "user", "content": json.dumps(initial_payload, ensure_ascii=False)})
# Initialize a set to track executed commands for deduplication (guardrail).
executed_cmds: set[str] = set()
def add_msg(role: str, content: str) -> None:
# Inner function to add a new message (from user or assistant) to the chat history.
chat.append({"role": role, "content": content})
# Persist to session JSONL and memory JSONL
payload = {"ts": _now_iso(), "role": role, "content": content}
_append_jsonl(conv_jsonl, payload)
_append_jsonl(memory_file, payload)
# Also write a readable line to plaintext log
log(f"[{role.upper()}] {content}")
# Keep recent window to bound context size for the LLM.
if len(chat) > 12:
del chat[: len(chat) - 12]
# Define loop control variables.
max_loops = int(os.environ.get("LLM_SETUP_MAX_STEPS", "10") or "10")
pass_threshold = 0.5 # Target passing ratio (50%).
# Start the main loop for setup attempts.
for i in range(1, max_loops + 1):
# Log the start of the current step.
log(f"\n--- Setup step {i}/{max_loops} ---")
_append_jsonl(conv_jsonl, {"ts": _now_iso(), "event": "setup_step", "iteration": i, "max": max_loops})
# Prepare the message content for the LLM, including remaining tries and optional signals from last executions.
msg_text = json.dumps({
"context": user_context,
"tries_left": max_loops - i + 1,
}, ensure_ascii=False)
# Add the new message to the conversation history.
add_msg("user", msg_text)
# --- Call Anthropic API ---
try:
# Get the model name from the environment, with a default value.
model = os.environ.get("ANTHROPIC_MODEL", "claude-3-5-haiku-20241022")
# Provide summarized last outputs to reduce tokens
# (already appended test output tail above; keep conversation short via truncation in add_msg)
resp = client.messages.create(
model=model,
max_tokens=800, # Limit the LLM's response length.
system=system_prompt, # Provide the system prompt.
messages=chat, # Send the conversation history.
)
except Exception as e:
# Log API error and exit with an error code.
log(f"Anthropic API error: {e}")
_append_jsonl(conv_jsonl, {"ts": _now_iso(), "event": "api_error", "error": str(e)})
return 1
# Extract the text content from the LLM response.
text = ""
try:
# Iterate through content blocks (which should contain the JSON response).
for block in resp.content:
if getattr(block, "type", "text") == "text":
text += block.text
except Exception:
# Fallback for unexpected response structure.
text = str(resp)
# Log the raw text response from the LLM.
log("LLM raw response:\n" + text)
add_msg("assistant", text)
# --- End Anthropic API Call ---
# Handle optional file read requests from assistant (single small file)
try:
data_peek = json.loads(text)
reqs = data_peek.get("actions") if isinstance(data_peek, dict) else None
read_path = None
if isinstance(reqs, list):
for a in reqs:
if isinstance(a, dict) and a.get("type") == "read" and isinstance(a.get("path"), str):
read_path = a["path"]
break
if read_path:
# Allowlist: only small text files under repo root
safe = True
if read_path.startswith("/") or ".." in read_path or len(read_path) > 200:
safe = False
abs_path = os.path.join(work_repo, read_path)
content = ""
if safe and os.path.isfile(abs_path) and os.path.getsize(abs_path) <= 200*1024:
try:
with open(abs_path, "r", encoding="utf-8", errors="ignore") as rf:
content = rf.read()
except Exception:
content = ""
# Return file content (summarized if large) as a user message
if content:
snippet = content if len(content) <= 8000 else (content[:6000] + "\n...\n" + content[-1000:])
add_msg("user", json.dumps({"file": read_path, "content": snippet}, ensure_ascii=False))
else:
add_msg("user", json.dumps({"file": read_path, "error": "unavailable or too large"}, ensure_ascii=False))
# Continue to next loop iteration to let model act on the file
continue
except Exception:
pass
# --- Parse LLM Response and Execute Actions ---
# Robust JSON extraction: find the largest JSON object if raw isn't valid
def _extract_json(s: str) -> str:
try:
json.loads(s)
return s
except Exception:
pass
# best-effort bracket matching
start = s.find('{')
last = s.rfind('}')
while start != -1 and last != -1 and last > start:
candidate = s[start:last+1]
try:
json.loads(candidate)
return candidate
except Exception:
last = s.rfind('}', 0, last)
return ""
extracted = _extract_json(text)
if not extracted:
log("Failed to extract JSON from assistant output")
_append_jsonl(conv_jsonl, {"ts": _now_iso(), "event": "parse_error", "error": "no_json_in_output", "assistant_text": text[:16000]})
return 1
try:
data = json.loads(extracted)
except Exception as e:
log(f"Failed to parse LLM JSON: {e}")
_append_jsonl(conv_jsonl, {"ts": _now_iso(), "event": "parse_error", "error": str(e), "assistant_text": text[:16000]})
return 1
# Extract the 'actions' list from the parsed JSON data.
actions = data.get("actions") or []
# Validate that 'actions' is a non-empty list.
if not isinstance(actions, list) or not actions:
log("No actions provided; aborting.")
_append_jsonl(conv_jsonl, {"ts": _now_iso(), "event": "no_actions"})
return 1
# Detect finish action
if len(actions) == 1 and isinstance(actions[0], dict) and actions[0].get("type") == "finish":
log("Assistant requested finish; exiting setup loop.")
_append_jsonl(conv_jsonl, {"ts": _now_iso(), "event": "finish_requested", "iteration": i})
# Successful finish; orchestrator will re-run tests
return 0
# List to store the summary of executed commands for feedback to the LLM.
exec_summ = []
# Iterate over the proposed actions.
for a in actions:
# Validate the action structure
if not isinstance(a, dict):
continue
a_type = a.get("type")
if a_type == "shell":
# Extract and clean the command string.
cmd = str(a.get("cmd") or "").strip()
if not cmd:
continue
# Guardrails: determine if this shell command is allowed
def _is_allowed_single(c: str) -> bool:
lc = c.strip()
# Disallow directory changes
if lc.startswith("cd ") or " cd " in f" {lc} ":
return False
# Allow pip installs via multiple forms
allowed_prefixes = [
"pip install ",
"pip3 install ",
"python -m pip install ",
"python3 -m pip install ",
".venv/bin/python -m pip install ",
".venv/bin/python3 -m pip install ",
]
if any(lc.startswith(p) for p in allowed_prefixes):
return True
# Allow safe pip queries
allowed_pip_queries = [
"python -m pip list",
"python3 -m pip list",
".venv/bin/python -m pip list",
".venv/bin/python3 -m pip list",
"python -m pip show ",
"python3 -m pip show ",
".venv/bin/python -m pip show ",
".venv/bin/python3 -m pip show ",
"python -m pip freeze",
"python3 -m pip freeze",
".venv/bin/python -m pip freeze",
".venv/bin/python3 -m pip freeze",
]
if any(lc.startswith(p) for p in allowed_pip_queries):
return True
# Allow repo-provided dependency helper when present
if lc in {
"python devscripts/install_deps.py",
"python3 devscripts/install_deps.py",
".venv/bin/python devscripts/install_deps.py",
".venv/bin/python3 devscripts/install_deps.py",
}:
return True
return False
def _is_allowed_shell(c: str) -> bool:
# Disallow semicolons or pipes entirely
if ";" in c or "|" in c:
return False
# Permit '&&' as a chain of allowed singles
parts = [p.strip() for p in c.split("&&")]
if len(parts) > 1:
return all(_is_allowed_single(p) for p in parts)
return _is_allowed_single(c)
if not _is_allowed_shell(cmd):
log(f"Blocked command (unsafe): {cmd}")
exec_summ.append({"cmd": cmd, "status": "blocked"})
continue
if cmd in executed_cmds:
log(f"Skipping duplicate command: {cmd}")
exec_summ.append({"cmd": cmd, "status": "duplicate_skipped"})
continue
# Execute the command in repo cwd
old_cwd = os.getcwd()
try:
os.chdir(work_repo)
log(f"Executing: {cmd}")
r = sh(cmd)
log(r.stdout)
_append_jsonl(conv_jsonl, {"ts": _now_iso(), "event": "exec", "cmd": cmd, "stdout": r.stdout[-16000:]})
executed_cmds.add(cmd)
exec_summ.append({"cmd": cmd, "status": "ran"})
finally:
os.chdir(old_cwd)
elif a_type == "read":
# handled earlier in the loop; ignore here
continue
elif a_type == "finish":
log("Assistant issued finish in multi-action response; exiting.")
_append_jsonl(conv_jsonl, {"ts": _now_iso(), "event": "finish_requested", "iteration": i})
return 0
else:
# Unknown type
exec_summ.append({"type": a_type or "", "status": "blocked"})
# Feed back the execution summary to the LLM for memory/context in the next loop.
if exec_summ:
add_msg("user", json.dumps({"executed": exec_summ}, ensure_ascii=False))
# If we exhaust steps without finish, return non-zero so orchestrator can decide next round
log("Setup steps exhausted without finish action.")
_append_jsonl(conv_jsonl, {"ts": _now_iso(), "event": "exhausted"})
return 1
if __name__ == "__main__":
# Standard entry point of the script. Calls main() and uses its return value as the process exit code.
sys.exit(main())
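The allow-list guardrail in the setup loop above can be exercised in isolation. A minimal standalone sketch of the same policy (prefix allow-list, no `cd`, no `;` or `|`, `&&` chains of individually allowed commands); the prefix list is abbreviated here, and the real loop also permits pip queries and a repo-provided `devscripts/install_deps.py` helper:

```python
# Standalone sketch of the shell-command guardrail; abbreviated allow-list.
ALLOWED_PREFIXES = (
    "pip install ",
    "pip3 install ",
    "python -m pip install ",
    "python3 -m pip install ",
)

def is_allowed_single(cmd: str) -> bool:
    lc = cmd.strip()
    # Disallow directory changes anywhere in the command.
    if lc == "cd" or lc.startswith("cd ") or " cd " in f" {lc} ":
        return False
    return any(lc.startswith(p) for p in ALLOWED_PREFIXES)

def is_allowed_shell(cmd: str) -> bool:
    # Reject semicolons and pipes outright; permit '&&' chains only
    # when every segment is individually allowed.
    if ";" in cmd or "|" in cmd:
        return False
    return all(is_allowed_single(p) for p in (s.strip() for s in cmd.split("&&")))

assert is_allowed_shell("pip install requests")
assert is_allowed_shell("pip install requests && pip3 install numpy")
assert not is_allowed_shell("cd /tmp && pip install requests")
assert not is_allowed_shell("pip install requests; rm -rf /")
assert not is_allowed_shell("cat setup.py | head")
```

Because `|` is rejected as a substring, `||` chains are also blocked, which matches the conservative intent of the real guard.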

File diff suppressed because it is too large


@@ -0,0 +1 @@
# Server package for Optimizer Factory

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,926 @@
/**
* Generic API helper function for making HTTP requests to the Flask backend
* @param {string} path - API endpoint path (e.g., '/api/repos')
* @param {Object} opts - Fetch options (method, body, etc.)
* @returns {Promise<Object>} Parsed JSON response
* @throws {Error} If response is not ok (4xx/5xx status)
*/
async function api(path, opts) {
const res = await fetch(
path,
Object.assign(
{ headers: { "Content-Type": "application/json" } },
opts || {}
)
);
if (!res.ok) throw new Error(await res.text());
return res.json();
}
/**
* Generates HTML table row for a repository item
* @param {Object} item - Repository data object with repo_url, last_job_id
* @returns {string} HTML string for table row with action buttons
*/
function rowHtml(item) {
// Escape &, <, > and " so values are safe in both text and attribute contexts
const esc = (s) =>
(s || "")
.replaceAll("&", "&amp;")
.replaceAll("<", "&lt;")
.replaceAll(">", "&gt;")
.replaceAll('"', "&quot;");
const tierClass = `tier-${item.resource_tier}`;
return `<tr>
<td><div class="repo-url">${esc(item.repo_url)}</div></td>
<td>${esc(item.module_root)}</td>
<td>${esc(item.tests_root)}</td>
<td><div class="job-id">${esc(item.last_job_id || "")}</div></td>
<td>
<div class="action-buttons">
<button class="btn btn-primary btn-sm" data-action="analyze" data-repo="${esc(
item.repo_url
)}">
<span>🧠</span> Analyze
</button>
<button class="btn btn-success btn-sm" data-action="run" data-repo="${esc(
item.repo_url
)}">
<span></span> Run
</button>
<button class="btn btn-secondary btn-sm" data-action="status" data-repo="${esc(
item.repo_url
)}">
<span>📊</span> Status
</button>
<button class="btn btn-info btn-sm" data-action="track" data-repo="${esc(
item.repo_url
)}">
<span>🛰</span> Track
</button>
<button class="btn btn-secondary btn-sm" data-action="logs" data-repo="${esc(
item.repo_url
)}">
<span>📄</span> Logs
</button>
<button class="btn btn-secondary btn-sm" data-action="download-logs" data-repo="${esc(
item.repo_url
)}">
<span></span> Download Logs
</button>
<button class="btn btn-danger btn-sm" data-action="terminate" data-repo="${esc(
item.repo_url
)}">
<span></span> Terminate
</button>
<button class="btn btn-danger btn-sm" data-action="delete" data-repo="${esc(
item.repo_url
)}">
<span>🗑</span> Delete
</button>
<button class="btn btn-warning btn-sm" data-action="restart" data-repo="${esc(
item.repo_url
)}">
<span>🔁</span> Restart
</button>
</div>
</td>
</tr>`;
}
/**
* Refreshes the repository table by fetching data from the backend
* Shows loading state during the operation
*/
async function refresh() {
const container = document.querySelector(".container");
container.classList.add("loading");
try {
const data = await api("/api/repos");
const tbody = document.querySelector("#repos tbody");
tbody.innerHTML = data.items.map(rowHtml).join("");
} finally {
container.classList.remove("loading");
}
}
/**
* Adds a new repository or updates an existing one
* @param {boolean} isUpdate - If true, updates existing repo; if false, adds new repo
* Shows loading state and clears form after successful add
*/
async function addOrUpdate(isUpdate) {
const payload = {
repo_url: document.getElementById("repo_url").value.trim(),
};
if (!payload.repo_url) {
alert("Repository URL is required");
return;
}
const btn = isUpdate
? document.getElementById("update")
: document.getElementById("add");
const originalText = btn.innerHTML;
btn.innerHTML = isUpdate
? "<span>⏳</span> Updating..."
: "<span>⏳</span> Adding...";
btn.disabled = true;
try {
await api("/api/repos", {
method: isUpdate ? "PUT" : "POST",
body: JSON.stringify(payload),
});
await refresh();
// Clear form after successful add
if (!isUpdate) {
document.getElementById("repo_url").value = "";
}
} catch (e) {
alert(`Error: ${e}`);
} finally {
btn.innerHTML = originalText;
btn.disabled = false;
}
}
/**
* Runs optimization for all repositories in the CSV
* Shows loading state and refreshes the table after completion
*/
async function runAll() {
const btn = document.getElementById("runAll");
const originalText = btn.innerHTML;
btn.innerHTML = "<span>⏳</span> Running All...";
btn.disabled = true;
try {
await api("/api/run_all", { method: "POST", body: JSON.stringify({}) });
await refresh();
} catch (e) {
alert(`Error: ${e}`);
} finally {
btn.innerHTML = originalText;
btn.disabled = false;
}
}
/**
* Runs optimization for a single repository
* @param {string} repo - Repository URL to run optimization for
* Shows loading state on the specific button and refreshes table after completion
*/
async function runSingle(repo) {
const btn = document.querySelector(
`button[data-action="run"][data-repo="${repo}"]`
);
if (btn) {
const originalText = btn.innerHTML;
btn.innerHTML = "<span>⏳</span> Running...";
btn.disabled = true;
try {
await api("/api/run", {
method: "POST",
body: JSON.stringify({ repo_url: repo }),
});
await refresh();
} catch (e) {
alert(`Error: ${e}`);
} finally {
btn.innerHTML = originalText;
btn.disabled = false;
}
}
}
/**
* Fetches and displays the job status for a repository
* @param {string} repo - Repository URL to get status for
* Updates the status panel with job information (status, timestamps, etc.)
*/
async function showStatus(repo) {
const statusEl = document.getElementById("status");
statusEl.textContent = "Loading status...";
try {
const data = await api(
"/api/job_status?repo_url=" + encodeURIComponent(repo)
);
statusEl.textContent = JSON.stringify(data.job, null, 2);
} catch (e) {
statusEl.textContent = `Error loading status: ${e}`;
}
}
/**
* Fetches and displays the job logs for a repository
* @param {string} repo - Repository URL to get logs for
* Updates the logs panel with CloudWatch log events from the job
*/
async function showLogs(repo) {
const logsEl = document.getElementById("logs");
logsEl.textContent = "Loading logs...";
try {
const data = await api(
"/api/job_logs?repo_url=" + encodeURIComponent(repo)
);
logsEl.textContent = (data.events || []).join("\n");
} catch (e) {
logsEl.textContent = `Error loading logs: ${e}`;
}
}
// Simple tracker: polls /api/job_logs and shows current stage and latest logs
async function trackRepo(repo) {
const logsEl = document.getElementById("logs");
logsEl.textContent = `Tracking ${repo}...`;
let stop = false;
const stopAfterMs = 10 * 60 * 1000; // 10 minutes client-side tracking
const start = Date.now();
while (!stop) {
try {
const data = await api(
"/api/job_logs?repo_url=" + encodeURIComponent(repo)
);
const stage = data.stage
? `\n\nStage: ${JSON.stringify(data.stage)}`
: "";
logsEl.textContent = (data.events || []).join("\n") + stage;
} catch (e) {
logsEl.textContent = `Error loading logs: ${e}`;
}
await new Promise((r) => setTimeout(r, 3000));
if (Date.now() - start > stopAfterMs) stop = true;
}
}
/**
* Global click event handler for all action buttons
* Handles run, status, logs, and delete actions for repositories
* Shows confirmation dialog for delete operations
*/
document.addEventListener("click", (e) => {
const btn = e.target.closest("button");
if (!btn) return;
const action = btn.getAttribute("data-action");
const repo = btn.getAttribute("data-repo");
if (action === "run") runSingle(repo);
if (action === "status") showStatus(repo);
if (action === "logs") showLogs(repo);
if (action === "track") trackRepo(repo);
if (action === "restart") restartRepo(repo, btn);
if (action === "download-logs") downloadLogs(repo);
if (action === "analyze") startAnalysis(repo);
if (action === "terminate") terminateRepo(repo, btn);
if (action === "delete") {
if (confirm(`Are you sure you want to delete "${repo}"?`)) {
const originalText = btn.innerHTML;
btn.innerHTML = "<span>⏳</span> Deleting...";
btn.disabled = true;
api("/api/repos", {
method: "DELETE",
body: JSON.stringify({ repo_url: repo }),
})
.then(refresh)
.catch((err) => {
alert(`Error: ${err}`);
btn.innerHTML = originalText;
btn.disabled = false;
});
}
}
});
async function downloadLogs(repo) {
try {
const btn = document.querySelector(
`button[data-action="download-logs"][data-repo="${repo}"]`
);
const originalText = btn ? btn.innerHTML : null;
if (btn) {
btn.innerHTML = "<span>⬇️</span> Preparing...";
btn.disabled = true;
}
// Download all logs as a zip with proper naming from server
const res = await fetch(
"/api/job_logs/download_all?repo_url=" + encodeURIComponent(repo)
);
if (!res.ok) throw new Error(await res.text());
// Stream to blob with progress (requires Content-Length from server)
const reader = res.body.getReader();
const chunks = [];
let received = 0;
while (true) {
const { done, value } = await reader.read();
if (done) break;
chunks.push(value);
received += value.length;
if (btn) {
const total = Number(res.headers.get("Content-Length") || 0);
if (total > 0) {
const pct = Math.min(100, Math.floor((received / total) * 100));
btn.innerHTML = `<span>⬇️</span> ${pct}% (${Math.ceil(
received / 1024 / 1024
)} MB / ${Math.ceil(total / 1024 / 1024)} MB)`;
} else {
btn.innerHTML = `<span>⬇️</span> ${Math.ceil(
received / 1024 / 1024
)} MB`;
}
}
}
const blob = new Blob(chunks, { type: "application/zip" });
const url = window.URL.createObjectURL(blob);
const a = document.createElement("a");
a.href = url;
// Let browser use server-provided filename from Content-Disposition
const cd = res.headers.get("Content-Disposition") || "";
const m = cd.match(/filename=([^;]+)/);
// Strip optional surrounding quotes from the header value
a.download = m ? m[1].trim().replace(/^"|"$/g, "") : "logs.zip";
document.body.appendChild(a);
a.click();
document.body.removeChild(a);
window.URL.revokeObjectURL(url);
if (btn && originalText) {
btn.innerHTML = originalText;
btn.disabled = false;
}
} catch (e) {
alert(`Download error: ${e}`);
const btn = document.querySelector(
`button[data-action="download-logs"][data-repo="${repo}"]`
);
if (btn) {
btn.innerHTML = "<span>⚠️</span> Retry";
btn.disabled = false;
}
}
}
async function terminateRepo(repo, btn) {
if (!confirm(`Terminate EC2 job for ${repo}?`)) return;
const originalText = btn.innerHTML;
btn.innerHTML = "<span>⏳</span> Terminating...";
btn.disabled = true;
try {
await api("/api/terminate", {
method: "POST",
body: JSON.stringify({ repo_url: repo }),
});
await refresh();
} catch (e) {
alert(`Error: ${e}`);
} finally {
btn.innerHTML = originalText;
btn.disabled = false;
}
}
async function restartRepo(repo, btn) {
if (!confirm(`Restart optimization for ${repo}?`)) return;
const originalText = btn.innerHTML;
btn.innerHTML = "<span>⏳</span> Restarting...";
btn.disabled = true;
try {
await api("/api/restart", {
method: "POST",
body: JSON.stringify({ repo_url: repo }),
});
await refresh();
} catch (e) {
alert(`Error: ${e}`);
} finally {
btn.innerHTML = originalText;
btn.disabled = false;
}
}
async function startAnalysis(repo, opts) {
const silent = opts && opts.silent;
// Show loading state on the analyze button
const btn = document.querySelector(
`button[data-action="analyze"][data-repo="${repo}"]`
);
let originalText = null;
if (btn && !silent) {
originalText = btn.innerHTML;
btn.innerHTML = "<span>⏳</span> Analyzing...";
btn.disabled = true;
}
try {
const res = await api("/api/analyze_repo", {
method: "POST",
body: JSON.stringify({ repo_url: repo }),
});
if (!silent) {
await pollAnalysisAndAutoApply(res.analysis_id, repo);
}
} catch (e) {
if (!silent) {
alert(`Error starting analysis: ${e}`);
}
} finally {
// Restore button state
if (btn && originalText && !silent) {
btn.innerHTML = originalText;
btn.disabled = false;
}
}
}
async function pollAnalysisAndAutoApply(analysisId, repo) {
console.log(
`[Analysis] Starting analysis polling for repo: ${repo}, analysisId: ${analysisId}`
);
let status = "queued";
for (let i = 0; i < 60; i++) {
try {
const st = await api(
`/api/analyze_repo/status?analysis_id=${encodeURIComponent(analysisId)}`
);
status = st.status;
console.log(
`[Analysis] Poll ${i + 1}: Status is ${status} for repo ${repo}`
);
if (status === "succeeded") break;
if (status === "failed") {
console.error(
`[Analysis] Analysis failed for repo ${repo}: ${st.message}`
);
alert(
`❌ Analysis failed for ${repo}: ${st.message || "Unknown error"}`
);
return;
}
await new Promise((r) => setTimeout(r, 2000));
} catch (e) {
console.error(`[Analysis] Error polling status for repo ${repo}:`, e);
alert(`Error polling analysis status: ${e}`);
return;
}
}
if (status !== "succeeded") {
console.warn(
`[Analysis] Timeout waiting for analysis result for repo ${repo}`
);
alert(
`⏱️ Timeout waiting for analysis result for ${repo}. Please try again.`
);
return;
}
try {
// Get the analysis results
const res = await api(
`/api/analyze_repo/result?analysis_id=${encodeURIComponent(analysisId)}`
);
const result = res.result || {};
console.log(`[Analysis] Got analysis results for repo ${repo}:`, result);
// Automatically apply all analysis results
const apply = {
module_root: true,
tests_root: true,
resource_tier: true,
};
await api("/api/apply_analysis", {
method: "POST",
body: JSON.stringify({ repo_url: repo, apply }),
});
console.log(
`[Analysis] Successfully applied analysis results for repo ${repo}`
);
// Show success message with analysis details
const cf = result.codeflash || {};
const resources = result.resources || {};
const tests = result.tests || {};
const successMessage = `✅ Analysis completed and applied successfully for ${repo}!
Analysis Results:
Package Manager: ${result.package_manager || "unknown"}
Confidence: ${(result.confidence ?? 0).toFixed(2)}
Module Root: ${cf.module_root || "auto"}
Tests Root: ${cf.tests_root || "auto"}
Test Command: ${tests.test_command || "pytest"}
All settings have been automatically saved to the CSV configuration.`;
alert(successMessage);
// Refresh the table to show updated data
await refresh();
} catch (e) {
console.error(
`[Analysis] Error applying analysis results for repo ${repo}:`,
e
);
alert(`Error applying analysis results: ${e}`);
}
}
async function pollAnalysisAndShow(analysisId, repo) {
const modal = document.getElementById("analysisModal");
const content = document.getElementById("analysisContent");
modal.style.display = "flex";
content.innerHTML = `<p>Analyzing <strong>${repo}</strong>... please wait.</p>`;
document.getElementById("applyAnalysis").disabled = true;
let status = "queued";
for (let i = 0; i < 60; i++) {
const st = await api(
`/api/analyze_repo/status?analysis_id=${encodeURIComponent(analysisId)}`
);
status = st.status;
if (status === "succeeded") break;
if (status === "failed") {
content.innerHTML = `<p>❌ Analysis failed: ${
st.message || "Unknown error"
}</p>`;
return;
}
await new Promise((r) => setTimeout(r, 2000));
}
if (status !== "succeeded") {
content.innerHTML = `<p>Timeout waiting for analysis result.</p>`;
return;
}
const res = await api(
`/api/analyze_repo/result?analysis_id=${encodeURIComponent(analysisId)}`
);
const result = res.result || {};
const cf = result.codeflash || {};
const resources = result.resources || {};
const tests = result.tests || {};
content.innerHTML = `
<div>
<p><strong>Repo:</strong> ${repo}</p>
<p><strong>Package Manager:</strong> ${
result.package_manager || "unknown"
}</p>
<p><strong>Confidence:</strong> ${(result.confidence ?? 0).toFixed(2)}</p>
<hr/>
<h4>Proposed Codeflash Config</h4>
<ul>
<li>module_root: <code>${cf.module_root || "auto"}</code></li>
<li>tests_root: <code>${cf.tests_root || "auto"}</code></li>
<li>resource_tier: <code>${resources.tier || "small"}</code></li>
<li>test_command: <code>${tests.test_command || "pytest"}</code></li>
</ul>
<div style="margin-top:8px;">
<label><input type="checkbox" id="apply_module_root" checked> Apply module_root</label>
<label style="margin-left:12px;"><input type="checkbox" id="apply_tests_root" checked> Apply tests_root</label>
<label style="margin-left:12px;"><input type="checkbox" id="apply_resource_tier" checked> Apply resource_tier</label>
</div>
</div>
`;
document.getElementById("applyAnalysis").disabled = false;
document.getElementById("applyAnalysis").onclick = async () => {
const apply = {
module_root: document.getElementById("apply_module_root").checked,
tests_root: document.getElementById("apply_tests_root").checked,
resource_tier: document.getElementById("apply_resource_tier").checked,
};
try {
await api("/api/apply_analysis", {
method: "POST",
body: JSON.stringify({ repo_url: repo, apply }),
});
alert("Applied to CSV");
modal.style.display = "none";
await refresh();
} catch (e) {
alert(`Error applying analysis: ${e}`);
}
};
}
document.getElementById("closeAnalysisModal").addEventListener("click", () => {
document.getElementById("analysisModal").style.display = "none";
});
// =============================================================================
// BULK UPLOAD FUNCTIONALITY
// =============================================================================
/**
* Handles tab switching between single and bulk upload forms
* @param {string} tabName - Either 'single' or 'bulk'
*/
function switchTab(tabName) {
// Update tab buttons
document
.getElementById("singleTab")
.classList.toggle("active", tabName === "single");
document
.getElementById("bulkTab")
.classList.toggle("active", tabName === "bulk");
// Update tab content
document
.getElementById("singleForm")
.classList.toggle("active", tabName === "single");
document
.getElementById("bulkForm")
.classList.toggle("active", tabName === "bulk");
}
/**
* Handles CSV file selection and preview
*/
function handleCsvFileSelection() {
const fileInput = document.getElementById("csvFile");
const fileWrapper = fileInput.parentElement;
const fileText = fileWrapper.querySelector(".file-input-text");
const validateBtn = document.getElementById("validateCsv");
const uploadBtn = document.getElementById("uploadCsv");
const csvPreview = document.getElementById("csvPreview");
const validationResults = document.getElementById("validationResults");
if (fileInput.files.length > 0) {
const file = fileInput.files[0];
fileWrapper.classList.add("has-file");
fileText.textContent = `Selected: ${file.name}`;
validateBtn.disabled = false;
// Read and preview file
const reader = new FileReader();
reader.onload = function (e) {
const content = e.target.result;
document.getElementById("csvContent").textContent =
content.substring(0, 1000) +
(content.length > 1000 ? "\n... (truncated)" : "");
csvPreview.style.display = "block";
// Store CSV content for validation
fileInput.csvContent = content;
};
reader.readAsText(file);
} else {
fileWrapper.classList.remove("has-file");
fileText.textContent = "Choose CSV file...";
validateBtn.disabled = true;
uploadBtn.disabled = true;
csvPreview.style.display = "none";
validationResults.style.display = "none";
}
}
/**
* Validates CSV content by calling the backend
*/
async function validateCsv() {
const fileInput = document.getElementById("csvFile");
const validateBtn = document.getElementById("validateCsv");
const uploadBtn = document.getElementById("uploadCsv");
const validationResults = document.getElementById("validationResults");
if (!fileInput.csvContent) {
alert("Please select a CSV file first");
return;
}
const originalText = validateBtn.innerHTML;
validateBtn.innerHTML = "<span>⏳</span> Validating...";
validateBtn.disabled = true;
uploadBtn.disabled = true;
try {
const response = await api("/api/repos/bulk", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ csv_data: fileInput.csvContent }),
});
// Display validation results
displayValidationResults(response);
validationResults.style.display = "block";
// Enable upload button if validation passed
if (response.ok) {
uploadBtn.disabled = false;
uploadBtn.innerHTML = "<span>📤</span> Upload Repositories";
} else {
uploadBtn.disabled = true;
uploadBtn.innerHTML = "<span>📤</span> Fix Errors First";
}
} catch (err) {
alert(`Validation error: ${err}`);
validationResults.style.display = "none";
} finally {
validateBtn.innerHTML = originalText;
validateBtn.disabled = false;
}
}
/**
* Displays validation results in the UI
* @param {Object} response - Response from validation API
*/
function displayValidationResults(response) {
const summaryEl = document.getElementById("validationSummary");
const detailsEl = document.getElementById("validationDetails");
// Display summary stats
const stats = response.stats || {};
summaryEl.innerHTML = `
<div class="validation-stat total">
<span class="validation-stat-number">${stats.total_rows || 0}</span>
Total Rows
</div>
<div class="validation-stat valid">
<span class="validation-stat-number">${stats.valid_count || 0}</span>
Valid
</div>
<div class="validation-stat warnings">
<span class="validation-stat-number">${stats.warning_count || 0}</span>
Warnings
</div>
<div class="validation-stat errors">
<span class="validation-stat-number">${stats.error_count || 0}</span>
Errors
</div>
`;
// Display detailed validation results
const results = response.validation_results || [];
if (results.length > 0) {
detailsEl.innerHTML = results
.map((result) => {
const hasErrors = result.errors && result.errors.length > 0;
const hasWarnings = result.warnings && result.warnings.length > 0;
const cssClass = hasErrors
? "has-errors"
: hasWarnings
? "has-warnings"
: "valid";
let messagesHtml = "";
if (hasErrors || hasWarnings) {
const messages = [
...(result.errors || []).map(
(msg) => `<li class="error">${msg}</li>`
),
...(result.warnings || []).map(
(msg) => `<li class="warning">${msg}</li>`
),
];
messagesHtml = `<ul class="validation-messages">${messages.join(
""
)}</ul>`;
}
return `
<div class="validation-row ${cssClass}">
<div class="validation-row-header">
Line ${result.line}: ${result.repo_url || "(no URL)"}
</div>
${messagesHtml}
</div>
`;
})
.join("");
} else {
detailsEl.innerHTML = "<p>No validation details available.</p>";
}
}
/**
* Uploads validated CSV data to create repositories
*/
async function uploadCsv() {
const fileInput = document.getElementById("csvFile");
const uploadBtn = document.getElementById("uploadCsv");
if (!fileInput.csvContent) {
alert("Please select and validate a CSV file first");
return;
}
if (
!confirm("This will add all valid repositories from the CSV. Continue?")
) {
return;
}
const originalText = uploadBtn.innerHTML;
uploadBtn.innerHTML = "<span>⏳</span> Uploading...";
uploadBtn.disabled = true;
try {
const response = await api("/api/repos/bulk", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ csv_data: fileInput.csvContent }),
});
if (response.ok) {
alert(response.message || "Repositories uploaded successfully!");
// Clear form and refresh table
document.getElementById("csvFile").value = "";
handleCsvFileSelection();
refresh();
// Switch back to single form
switchTab("single");
} else {
alert(`Upload failed: ${response.error || "Unknown error"}`);
// Re-display validation results
displayValidationResults(response);
}
} catch (err) {
alert(`Upload error: ${err}`);
} finally {
uploadBtn.innerHTML = originalText;
uploadBtn.disabled = false;
}
}
/**
* Downloads a CSV template file
*/
function downloadCsvTemplate() {
const csvContent = `repo_url,module_root,tests_root,resource_tier
https://github.com/psf/requests,requests,tests,small
https://github.com/pallets/flask,src/flask,tests,medium
https://github.com/your-org/your-repo,auto,auto,small`;
const blob = new Blob([csvContent], { type: "text/csv" });
const url = window.URL.createObjectURL(blob);
const a = document.createElement("a");
a.href = url;
a.download = "repos_template.csv";
document.body.appendChild(a);
a.click();
document.body.removeChild(a);
window.URL.revokeObjectURL(url);
}
// =============================================================================
// EVENT LISTENERS
// =============================================================================
// Tab switching
document
.getElementById("singleTab")
.addEventListener("click", () => switchTab("single"));
document
.getElementById("bulkTab")
.addEventListener("click", () => switchTab("bulk"));
// File input handling
document
.getElementById("csvFile")
.addEventListener("change", handleCsvFileSelection);
document
.querySelector(".file-input-wrapper")
.addEventListener("click", function () {
document.getElementById("csvFile").click();
});
// Bulk upload actions
document.getElementById("validateCsv").addEventListener("click", validateCsv);
document.getElementById("uploadCsv").addEventListener("click", uploadCsv);
document
.getElementById("downloadTemplate")
.addEventListener("click", function (e) {
e.preventDefault();
downloadCsvTemplate();
});
// Event listeners for main action buttons
document.getElementById("refresh").addEventListener("click", refresh);
document.getElementById("runAll").addEventListener("click", runAll);
document
.getElementById("add")
.addEventListener("click", () => addOrUpdate(false));
document
.getElementById("update")
.addEventListener("click", () => addOrUpdate(true));
// Initialize the page by loading repository data
refresh();
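The `downloadLogs` helper above pulls the filename out of the Content-Disposition header with a bare regex. The same parse, sketched in Python with quote stripping added; note that neither version handles the RFC 6266 `filename*=` form:

```python
import re

def filename_from_content_disposition(header: str, fallback: str = "logs.zip") -> str:
    """Mirror of the downloadLogs() regex: pull filename= from a
    Content-Disposition header, strip optional quotes, or fall back."""
    m = re.search(r"filename=([^;]+)", header or "")
    return m.group(1).strip().strip('"') if m else fallback

assert filename_from_content_disposition(
    'attachment; filename="repo-logs-2025.zip"'
) == "repo-logs-2025.zip"
assert filename_from_content_disposition("attachment") == "logs.zip"
```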


@@ -0,0 +1,142 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Optimizer Factory</title>
<link rel="stylesheet" href="/static/style.css" />
</head>
<body>
<div class="container">
<div class="header">
<h1>Optimizer Factory</h1>
<p>Manage Codeflash optimizations across Python repositories using EC2</p>
</div>
<div class="content">
<div class="actions">
<button id="refresh" class="btn btn-secondary">
<span>🔄</span> Refresh
</button>
<button id="runAll" class="btn btn-success">
<span>🚀</span> Run All
</button>
</div>
<div class="form-section">
<h2>Add / Update Repository</h2>
<div class="form-tabs">
<button id="singleTab" class="tab-btn active">Single Repository</button>
<button id="bulkTab" class="tab-btn">Bulk Upload (CSV)</button>
</div>
<div id="singleForm" class="tab-content active">
<div class="form-grid">
<div class="form-group">
<label for="repo_url">Repository URL</label>
<input id="repo_url" placeholder="https://github.com/org/repo" />
</div>
</div>
<div class="form-actions">
<button id="add" class="btn btn-primary">
<span></span> Add Repository
</button>
<button id="update" class="btn btn-secondary">
<span>✏️</span> Update Repository
</button>
</div>
</div>
<div id="bulkForm" class="tab-content">
<div class="bulk-upload-section">
<div class="form-group">
<label for="csvFile">CSV File</label>
<div class="file-input-wrapper">
<input type="file" id="csvFile" accept=".csv" />
<span class="file-input-text">Choose CSV file...</span>
</div>
<div class="help-text">
CSV format: repo_url (optional columns: module_root, tests_root, resource_tier)
<br>
<a href="#" id="downloadTemplate">Download template</a>
</div>
</div>
<div class="csv-preview" id="csvPreview" style="display: none;">
<h4>CSV Preview</h4>
<div class="csv-content" id="csvContent"></div>
</div>
<div class="validation-results" id="validationResults" style="display: none;">
<h4>Validation Results</h4>
<div class="validation-summary" id="validationSummary"></div>
<div class="validation-details" id="validationDetails"></div>
</div>
<div class="form-actions">
<button id="validateCsv" class="btn btn-secondary" disabled>
<span>🔍</span> Validate CSV
</button>
<button id="uploadCsv" class="btn btn-success" disabled>
<span>📤</span> Upload Repositories
</button>
</div>
</div>
</div>
</div>
<div class="table-section">
<h2>Repositories</h2>
<div class="table-wrapper">
<table id="repos">
<thead>
<tr>
<th>Repository</th>
<th>Module Root</th>
<th>Tests Root</th>
<th>Last EC2 ID</th>
<th>Actions</th>
</tr>
</thead>
<tbody></tbody>
</table>
</div>
</div>
<div class="logs-section">
<h2>Job Status & Logs</h2>
<div class="logs-container">
<div class="log-panel">
<h3>Job Status</h3>
<pre id="status" class="log-content"></pre>
</div>
<div class="log-panel">
<h3>Job Logs</h3>
<pre id="logs" class="log-content"></pre>
</div>
</div>
</div>
<!-- Analysis Modal -->
<div id="analysisModal"
style="display:none; position:fixed; inset:0; background: rgba(0,0,0,0.4); align-items:center; justify-content:center;">
<div style="background:#fff; max-width:900px; width:95%; border-radius:10px; padding:16px;">
<div style="display:flex; justify-content:space-between; align-items:center;">
<h3>LLM Analysis</h3>
<button id="closeAnalysisModal" class="btn btn-secondary btn-sm">Close</button>
</div>
<div id="analysisContent" style="margin-top:10px; max-height:60vh; overflow:auto;"></div>
<div class="form-actions" style="margin-top:12px;">
<button id="applyAnalysis" class="btn btn-primary"><span>💾</span> Apply to CSV</button>
</div>
</div>
</div>
</div>
</div>
<script src="/static/app.js"></script>
</body>
</html>
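The bulk-upload tab above posts the selected CSV to `/api/repos/bulk` for validation. A minimal sketch of the per-row checks a backend might apply, assuming the columns from the downloadable template (`repo_url,module_root,tests_root,resource_tier`); the accepted tier names here are an assumption, not taken from the server code:

```python
import csv
import io

# Assumed tier vocabulary; the real server may accept a different set.
VALID_TIERS = {"small", "medium", "large"}

def validate_rows(csv_text: str):
    """Return one result dict per data row, numbered from line 2
    (line 1 is the header), with errors and warnings collected."""
    results = []
    for line_no, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=2):
        errors, warnings = [], []
        url = (row.get("repo_url") or "").strip()
        if not url.startswith("https://github.com/"):
            errors.append("repo_url must be a https://github.com/ URL")
        tier = (row.get("resource_tier") or "small").strip()
        if tier not in VALID_TIERS:
            warnings.append(f"unknown resource_tier {tier!r}; defaulting to small")
        results.append({"line": line_no, "repo_url": url,
                        "errors": errors, "warnings": warnings})
    return results

template = """repo_url,module_root,tests_root,resource_tier
https://github.com/psf/requests,requests,tests,small
not-a-url,auto,auto,huge"""
res = validate_rows(template)
assert res[0]["errors"] == [] and res[0]["warnings"] == []
assert res[1]["errors"] and res[1]["warnings"]
```

The `line`/`errors`/`warnings` shape matches what `displayValidationResults` in app.js expects to render.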


@@ -0,0 +1,640 @@
* {
box-sizing: border-box;
}
body {
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto,
"Helvetica Neue", Arial, sans-serif;
margin: 0;
padding: 0;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
min-height: 100vh;
color: #333;
}
.container {
max-width: 1200px;
margin: 20px auto;
padding: 0;
background: #fff;
border-radius: 12px;
box-shadow: 0 8px 32px rgba(0, 0, 0, 0.1);
overflow: hidden;
}
.header {
background: linear-gradient(135deg, #4f46e5 0%, #7c3aed 100%);
color: white;
padding: 24px 32px;
text-align: center;
}
.header h1 {
margin: 0;
font-size: 28px;
font-weight: 600;
}
.header p {
margin: 8px 0 0 0;
opacity: 0.9;
font-size: 16px;
}
.content {
padding: 32px;
}
.actions {
display: flex;
gap: 12px;
margin-bottom: 24px;
flex-wrap: wrap;
}
.btn {
padding: 12px 20px;
border: none;
border-radius: 8px;
cursor: pointer;
font-weight: 500;
font-size: 14px;
transition: all 0.2s ease;
text-decoration: none;
display: inline-flex;
align-items: center;
gap: 8px;
}
.btn-primary {
background: #4f46e5;
color: white;
}
.btn-primary:hover {
background: #4338ca;
transform: translateY(-1px);
box-shadow: 0 4px 12px rgba(79, 70, 229, 0.3);
}
.btn-success {
background: #10b981;
color: white;
}
.btn-success:hover {
background: #059669;
transform: translateY(-1px);
box-shadow: 0 4px 12px rgba(16, 185, 129, 0.3);
}
.btn-danger {
background: #ef4444;
color: white;
}
.btn-danger:hover {
background: #dc2626;
transform: translateY(-1px);
box-shadow: 0 4px 12px rgba(239, 68, 68, 0.3);
}
.btn-secondary {
background: #6b7280;
color: white;
}
.btn-secondary:hover {
background: #4b5563;
transform: translateY(-1px);
box-shadow: 0 4px 12px rgba(107, 114, 128, 0.3);
}
.btn-sm {
padding: 8px 12px;
font-size: 12px;
}
.form-section {
background: #f8fafc;
border: 1px solid #e2e8f0;
border-radius: 12px;
padding: 24px;
margin-bottom: 32px;
}
.form-section h2 {
margin: 0 0 20px 0;
color: #1e293b;
font-size: 20px;
font-weight: 600;
}
/* Tab styling */
.form-tabs {
display: flex;
margin-bottom: 20px;
border-bottom: 2px solid #e2e8f0;
gap: 4px;
}
.tab-btn {
background: none;
border: none;
padding: 12px 20px;
cursor: pointer;
color: #64748b;
font-weight: 500;
border-bottom: 3px solid transparent;
transition: all 0.3s ease;
border-radius: 8px 8px 0 0;
position: relative;
top: 2px;
}
.tab-btn:hover {
color: #3b82f6;
background: rgba(59, 130, 246, 0.05);
}
.tab-btn.active {
color: #3b82f6;
border-bottom-color: #3b82f6;
background: white;
}
.tab-content {
display: none;
}
.tab-content.active {
display: block;
}
.form-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
gap: 16px;
margin-bottom: 20px;
}
.form-group {
display: flex;
flex-direction: column;
}
.form-group label {
font-weight: 500;
color: #374151;
margin-bottom: 6px;
font-size: 14px;
}
.form-group input,
.form-group select {
padding: 12px 16px;
border: 1px solid #d1d5db;
border-radius: 8px;
font-size: 14px;
transition: all 0.2s ease;
background: white;
}
.form-group input:focus,
.form-group select:focus {
outline: none;
border-color: #4f46e5;
box-shadow: 0 0 0 3px rgba(79, 70, 229, 0.1);
}
.form-actions {
display: flex;
gap: 12px;
flex-wrap: wrap;
}
.table-section {
margin-bottom: 32px;
}
.table-section h2 {
margin: 0 0 16px 0;
color: #1e293b;
font-size: 20px;
font-weight: 600;
}
.table-wrapper {
background: white;
border-radius: 12px;
overflow: hidden;
border: 1px solid #e2e8f0;
}
table {
width: 100%;
border-collapse: collapse;
}
th {
background: #f1f5f9;
color: #475569;
font-weight: 600;
padding: 16px;
text-align: left;
font-size: 14px;
border-bottom: 1px solid #e2e8f0;
}
td {
padding: 16px;
border-bottom: 1px solid #f1f5f9;
font-size: 14px;
vertical-align: middle;
}
tbody tr:hover {
background: #f8fafc;
}
.repo-url {
color: #4f46e5;
font-weight: 500;
max-width: 300px;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.tier-badge {
display: inline-block;
padding: 4px 8px;
border-radius: 6px;
font-size: 12px;
font-weight: 500;
text-transform: uppercase;
}
.tier-small {
background: #dbeafe;
color: #1e40af;
}
.tier-medium {
background: #fef3c7;
color: #d97706;
}
.tier-large {
background: #fee2e2;
color: #dc2626;
}
.job-id {
font-family: "Monaco", "Menlo", "Ubuntu Mono", monospace;
font-size: 12px;
color: #6b7280;
max-width: 120px;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.action-buttons {
display: flex;
gap: 4px;
flex-wrap: wrap;
align-items: center;
justify-content: flex-start;
}
.action-buttons .btn-sm {
padding: 6px 10px;
font-size: 11px;
min-width: 70px;
white-space: nowrap;
display: inline-flex;
align-items: center;
justify-content: center;
gap: 4px;
font-weight: 500;
}
.action-buttons .btn-sm span {
font-size: 12px;
}
.logs-section {
margin-top: 32px;
}
.logs-section h2 {
margin: 0 0 16px 0;
color: #1e293b;
font-size: 20px;
font-weight: 600;
}
.logs-container {
display: grid;
grid-template-columns: 1fr 1fr;
gap: 20px;
}
.log-panel {
background: white;
border: 1px solid #e2e8f0;
border-radius: 12px;
overflow: hidden;
}
.log-panel h3 {
background: #f8fafc;
margin: 0;
padding: 16px 20px;
color: #374151;
font-size: 16px;
font-weight: 600;
border-bottom: 1px solid #e2e8f0;
}
.log-content {
background: #1e293b;
color: #e2e8f0;
padding: 20px;
font-family: "Monaco", "Menlo", "Ubuntu Mono", monospace;
font-size: 13px;
line-height: 1.5;
max-height: 400px;
overflow: auto;
margin: 0;
white-space: pre-wrap;
word-wrap: break-word;
}
.log-content:empty::after {
content: "No data available. Click Status or Logs buttons to load information.";
color: #64748b;
font-style: italic;
}
.loading {
opacity: 0.6;
pointer-events: none;
}
/* Bulk upload styling */
.bulk-upload-section {
max-width: 100%;
}
.file-input-wrapper {
position: relative;
display: flex;
align-items: center;
width: 100%;
cursor: pointer;
padding: 12px 16px;
border: 2px dashed #cbd5e0;
border-radius: 8px;
background: #f7fafc;
transition: all 0.3s ease;
min-height: 48px;
}
.file-input-wrapper input[type="file"] {
position: absolute;
left: -9999px;
}
.file-input-wrapper:hover {
border-color: #3b82f6;
background: #eff6ff;
}
.file-input-wrapper.has-file {
border-color: #10b981;
background: #f0fdf4;
border-style: solid;
}
.file-input-text {
color: #64748b;
font-size: 14px;
}
.file-input-wrapper.has-file .file-input-text {
color: #059669;
font-weight: 500;
}
.help-text {
margin-top: 8px;
font-size: 12px;
color: #6b7280;
line-height: 1.4;
}
.help-text a {
color: #3b82f6;
text-decoration: none;
}
.help-text a:hover {
text-decoration: underline;
}
.csv-preview,
.validation-results {
margin-top: 20px;
padding: 16px;
border-radius: 8px;
border: 1px solid #e2e8f0;
background: white;
}
.csv-preview h4,
.validation-results h4 {
margin: 0 0 12px 0;
color: #1e293b;
font-size: 16px;
font-weight: 600;
}
.csv-content {
max-height: 200px;
overflow: auto;
background: #f8fafc;
padding: 12px;
border-radius: 6px;
font-family: "Courier New", monospace;
font-size: 12px;
line-height: 1.4;
white-space: pre-wrap;
}
.validation-summary {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(120px, 1fr));
gap: 12px;
margin-bottom: 16px;
}
.validation-stat {
text-align: center;
padding: 12px;
border-radius: 8px;
font-size: 14px;
}
.validation-stat.total {
background: #f1f5f9;
color: #475569;
}
.validation-stat.valid {
background: #f0fdf4;
color: #059669;
}
.validation-stat.warnings {
background: #fffbeb;
color: #d97706;
}
.validation-stat.errors {
background: #fef2f2;
color: #dc2626;
}
.validation-stat-number {
display: block;
font-size: 20px;
font-weight: 700;
margin-bottom: 4px;
}
.validation-details {
max-height: 300px;
overflow-y: auto;
}
.validation-row {
padding: 12px;
margin-bottom: 8px;
border-radius: 6px;
border-left: 4px solid #e2e8f0;
}
.validation-row.has-errors {
background: #fef2f2;
border-left-color: #dc2626;
}
.validation-row.has-warnings {
background: #fffbeb;
border-left-color: #d97706;
}
.validation-row.valid {
background: #f0fdf4;
border-left-color: #059669;
}
.validation-row-header {
font-weight: 600;
margin-bottom: 6px;
color: #374151;
font-size: 14px;
}
.validation-messages {
list-style: none;
padding: 0;
margin: 0;
}
.validation-messages li {
padding: 4px 0;
font-size: 13px;
display: flex;
align-items: center;
gap: 6px;
}
.validation-messages .error {
color: #dc2626;
}
.validation-messages .warning {
color: #d97706;
}
.validation-messages .error::before {
content: "❌";
font-size: 12px;
}
.validation-messages .warning::before {
content: "⚠️";
font-size: 12px;
}
@media (max-width: 768px) {
.container {
margin: 10px;
border-radius: 8px;
}
.content {
padding: 20px;
}
.header {
padding: 20px;
}
.header h1 {
font-size: 24px;
}
.logs-container {
grid-template-columns: 1fr;
}
.form-grid {
grid-template-columns: 1fr;
}
.actions {
justify-content: center;
}
.action-buttons {
justify-content: center;
}
.form-tabs {
flex-direction: column;
gap: 0;
}
.tab-btn {
border-radius: 0;
border-bottom: 1px solid #e2e8f0;
top: 0;
}
.tab-btn.active {
border-bottom-color: #3b82f6;
}
.validation-summary {
grid-template-columns: repeat(2, 1fr);
}
.csv-content {
font-size: 11px;
}
}

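The tab rules above (`.tab-btn.active`, `.tab-content.active`) only style state; they assume a script that moves the `active` class when a tab is clicked. That handler lives in `static/app.js`, which is not shown in this diff, so the following is a minimal sketch of the toggling logic, modeling buttons and panels as plain objects (the ids and field names here are assumptions, not the app's actual markup):

```javascript
// Sketch of the class toggling the tab styles rely on: after a click,
// exactly one .tab-btn and one .tab-content carry the "active" class.
function activateTab(buttons, panels, targetId) {
  for (const btn of buttons) btn.active = btn.tab === targetId;
  for (const panel of panels) panel.active = panel.id === targetId;
}

// Hypothetical two-tab setup (e.g. single-repo form vs. bulk CSV upload).
const buttons = [
  { tab: "single", active: true },
  { tab: "bulk", active: false },
];
const panels = [
  { id: "single", active: true },
  { id: "bulk", active: false },
];

activateTab(buttons, panels, "bulk");
// "bulk" is now the only active button/panel pair.
```

In the real UI the same pattern would call `classList.toggle("active", …)` on DOM nodes; the pure-object version above just isolates the invariant the CSS depends on.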

@@ -0,0 +1,9 @@
boto3>=1.34.0
Flask>=3.0.0
python-dotenv>=1.0.0
anthropic>=0.31.0
jsonschema>=4.21.1
requests>=2.32.0
paramiko>=3.4.0
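The pins above can be sanity-checked offline with a small Node snippet (a hypothetical helper, not part of the repo) that verifies each line is a `name` + specifier + version triple before it ever reaches `pip`:

```javascript
// Hypothetical validator for the tools/requirements.txt pins above.
const requirements = `boto3>=1.34.0
Flask>=3.0.0
python-dotenv>=1.0.0
anthropic>=0.31.0
jsonschema>=4.21.1
requests>=2.32.0
paramiko>=3.4.0`;

// Accepts a package name, one of the common PEP 440 operators used here,
// and a dotted numeric version.
const PIN = /^([A-Za-z0-9._-]+)(>=|==|~=)(\d+(?:\.\d+)*)$/;

function parsePins(text) {
  const pins = {};
  for (const line of text.trim().split("\n")) {
    const m = line.trim().match(PIN);
    if (!m) throw new Error(`unparseable requirement: ${line}`);
    pins[m[1]] = { op: m[2], version: m[3] };
  }
  return pins;
}

const pins = parsePins(requirements);
console.log(Object.keys(pins).length); // prints 7
```

This only checks the format of the pins, not that the versions resolve; installation itself is still `pip install -r tools/requirements.txt` as described in the setup steps.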