codeflash-agent/evals/templates/crossdomain-easy/tests/test_streamer.py

from log_analyzer.streamer import aggregate_results, stream_results


def test_stream_basic():
template = {"version": "1.0", "source": "default"}
records = [
{"id": 1, "value": "a"},
{"id": 2, "value": "b", "source": "custom"},
]
result = stream_results(records, template)
    assert len(result) == 2


def test_stream_large_batch():
    """Large batch: exercises stream_results on 50k records, where it is too slow."""
template = {
"version": "1.0",
"metadata": {
"region": "us-east",
"env": "prod",
"settings": {
"retry": {"count": 3, "backoff": [1, 2, 4, 8]},
"timeouts": {"connect": 5, "read": 30, "write": 10},
},
},
"tags": ["log", "processed", "v2", "analytics"],
"schema": {
"fields": [
{"name": "id", "type": "int", "required": True},
{"name": "value", "type": "string", "required": True},
{"name": "timestamp", "type": "datetime", "required": True},
{"name": "source", "type": "string", "required": False},
{"name": "priority", "type": "int", "required": False},
],
},
}
records = []
for i in range(50_000):
records.append(
{
"id": i,
"value": f"data-{i}",
"timestamp": f"2024-01-{(i % 28) + 1:02d}",
}
)
result = stream_results(records, template)
assert len(result) == 50_000
summary = aggregate_results(result)
assert summary["id"]["count"] == 50_000
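
These tests pin down the expected behavior of `log_analyzer.streamer` without showing it: `stream_results` appears to merge template defaults into each record (record values win, e.g. the `"source": "custom"` override in the basic test), and `aggregate_results` appears to report a per-field `count`. A minimal sketch consistent with those assertions, assuming that contract (this is an illustration, not the actual `log_analyzer.streamer` implementation):

```python
import copy


def stream_results(records, template):
    # Merge template defaults under each record; record keys win.
    # deepcopy keeps nested template structures (schema, settings) from
    # being shared between output rows -- and doing it per record is
    # exactly the kind of cost the "too slow" test would surface.
    return [{**copy.deepcopy(template), **record} for record in records]


def aggregate_results(results):
    # Count how many rows carry each field.
    summary = {}
    for row in results:
        for key in row:
            summary.setdefault(key, {"count": 0})
            summary[key]["count"] += 1
    return summary
```

Under this reading, the obvious optimization target is the per-record deep copy of an unchanging template, which the 50,000-record test is sized to expose.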