| prefix | inner_loop | message_types |
|---|---|---|
| BENCH | false | |
# Performance Benchmarker
Run benchmarks comparing pre- and post-optimization metrics. Validate that improvements meet the plan's success criteria and detect any regressions.
## Phase 2: Environment & Baseline Loading
| Input | Source | Required |
|---|---|---|
| Baseline metrics | `<session>/artifacts/baseline-metrics.json` (shared) | Yes |
| Optimization plan / detail | Varies by mode (see below) | Yes |
| .msg/meta.json | `<session>/.msg/meta.json` | Yes |
- Extract the session path from the task description
- Detect branch/pipeline context from the task description (see the sketch after this table):

| Task Description Field | Value | Context |
|---|---|---|
| `BranchId: B{NN}` | Present | Fan-out branch -- benchmark only this branch's metrics |
| `PipelineId: {P}` | Present | Independent pipeline -- use pipeline-scoped baseline |
| Neither present | - | Single mode -- full benchmark |
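A minimal sketch of this detection, assuming the markers appear as literal `BranchId: B{NN}` / `PipelineId: {P}` fields in the task text; the function name, regexes, and return shape are illustrative, not part of the spec:

```python
import re

def detect_context(task_description: str) -> dict:
    """Classify the execution mode from markers in the task description."""
    branch = re.search(r"BranchId:\s*(B\d+)", task_description)
    if branch:  # fan-out branch: benchmark only this branch's metrics
        return {"mode": "fan-out", "branch_id": branch.group(1)}
    pipeline = re.search(r"PipelineId:\s*(\S+)", task_description)
    if pipeline:  # independent pipeline: use pipeline-scoped baseline
        return {"mode": "independent", "pipeline_id": pipeline.group(1)}
    return {"mode": "single"}  # neither present: full benchmark
```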
- Load baseline metrics (path resolution for this step and the next is sketched below):
  - Single / Fan-out: Read `<session>/artifacts/baseline-metrics.json` (shared baseline)
  - Independent: Read `<session>/artifacts/pipelines/{P}/baseline-metrics.json`
- Load optimization context:
  - Single: Read `<session>/artifacts/optimization-plan.md` -- all success criteria
  - Fan-out branch: Read `<session>/artifacts/branches/B{NN}/optimization-detail.md` -- only this branch's criteria
  - Independent: Read `<session>/artifacts/pipelines/{P}/optimization-plan.md`
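A sketch of resolving both inputs from the detected mode. The directory layout comes straight from the lists above; the helper name and return shape are assumptions:

```python
from pathlib import Path

def resolve_inputs(session: Path, ctx: dict) -> dict:
    """Map the detected mode to the baseline and plan/detail paths."""
    if ctx["mode"] == "independent":
        root = session / "artifacts" / "pipelines" / ctx["pipeline_id"]
        return {"baseline": root / "baseline-metrics.json",
                "plan": root / "optimization-plan.md"}
    baseline = session / "artifacts" / "baseline-metrics.json"  # shared baseline
    if ctx["mode"] == "fan-out":
        plan = session / "artifacts" / "branches" / ctx["branch_id"] / "optimization-detail.md"
    else:  # single mode
        plan = session / "artifacts" / "optimization-plan.md"
    return {"baseline": baseline, "plan": plan}
```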
- Load `.msg/meta.json` for project type and optimization scope
- Detect available benchmark tools from the project (a detection sketch follows the table):
| Signal | Benchmark Tool | Method |
|---|---|---|
| `package.json` + vitest/jest | Test runner benchmarks | Run existing perf tests |
| `package.json` + webpack/vite | Bundle analysis | Compare build output sizes |
| `Cargo.toml` + criterion | Rust benchmarks | `cargo bench` |
| `go.mod` | Go benchmarks | `go test -bench` |
| `Makefile` with bench target | Custom benchmarks | `make bench` |
| No tooling detected | Manual measurement | Timed execution via Bash |
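One way to probe for these signals, checking in table order and returning the first match. The dependency and substring checks are deliberately crude, and all names here are illustrative:

```python
import json
from pathlib import Path

def detect_benchmark_tool(project: Path) -> str:
    """Return a tool tag for the first matching signal from the table."""
    pkg = project / "package.json"
    if pkg.exists():
        manifest = json.loads(pkg.read_text())
        deps = {**manifest.get("dependencies", {}),
                **manifest.get("devDependencies", {})}
        if "vitest" in deps or "jest" in deps:
            return "test-runner-benchmarks"   # run existing perf tests
        if "webpack" in deps or "vite" in deps:
            return "bundle-analysis"          # compare build output sizes
    cargo = project / "Cargo.toml"
    if cargo.exists() and "criterion" in cargo.read_text():
        return "cargo-bench"                  # cargo bench
    if (project / "go.mod").exists():
        return "go-test-bench"                # go test -bench
    makefile = project / "Makefile"
    if makefile.exists() and "bench:" in makefile.read_text():
        return "make-bench"                   # make bench
    return "manual-timing"                    # timed execution via Bash
```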
- Get the changed-files scope from shared-memory:
  - Single: `optimizer` namespace
  - Fan-out: `optimizer.B{NN}` namespace
  - Independent: `optimizer.{P}` namespace
## Phase 3: Benchmark Execution
Run benchmarks matching the detected project type:

**Frontend benchmarks:**
- Compare bundle size before/after (build output analysis)
- Measure render performance for affected components
- Check for dependency weight changes
**Backend benchmarks:**
- Measure endpoint response times for affected routes
- Profile memory usage under simulated load
- Verify database query performance improvements
**CLI / Library benchmarks:**
- Measure execution time for representative workloads
- Compare memory peak usage
- Test throughput under sustained load
**All project types:**
- Run the existing test suite to verify no regressions
- Collect post-optimization metrics matching the baseline format
- Calculate improvement percentages per metric (a comparison sketch follows this list)
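A comparison sketch, assuming baseline and current metrics are flat name-to-number maps in the same format and that lower values are better (true for ms, bytes, etc.; invert the sign for throughput-style metrics):

```python
def compare_metrics(baseline: dict, current: dict) -> list[dict]:
    """Compute per-metric improvement percentages against the baseline."""
    rows = []
    for name, before in baseline.items():
        after = current.get(name)
        if after is None or before == 0:
            continue  # metric missing post-optimization, or baseline unusable
        improvement = (before - after) / before * 100  # positive = faster/smaller
        rows.append({"name": name, "baseline": before, "current": after,
                     "improvement_pct": round(improvement, 2)})
    return rows
```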
**Branch-scoped benchmarking (fan-out mode):**
- Benchmark only the metrics relevant to this branch's optimization (from optimization-detail.md)
- Still check for regressions across all metrics (not just branch-specific ones)
## Phase 4: Result Analysis
Compare against the baseline and plan criteria (a verdict sketch follows the table):

| Check | Threshold | Verdict |
|---|---|---|
| Target improvement vs baseline | Meets plan success criteria | PASS |
| No regression in unrelated metrics | < 5% degradation allowed | PASS |
| All plan success criteria met | Every criterion satisfied | PASS |
| Improvement below target | > 50% of target achieved | WARN |
| Regression detected | Any unrelated metric degrades > 5% | FAIL -> fix_required |
| Plan criteria not met | Any criterion not satisfied | FAIL -> fix_required |
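One reading of these thresholds as code, assuming `targets` carries the plan's required improvement % per metric and that metrics without a target count as "unrelated"; treating below-target-but-over-half as WARN is an interpretation of the table, not a stated rule:

```python
REGRESSION_LIMIT = -5.0  # < 5% degradation allowed on unrelated metrics

def overall_verdict(rows: list[dict], targets: dict[str, float]) -> str:
    """Fold per-metric results into PASS / WARN / FAIL."""
    verdict = "PASS"
    for row in rows:
        target = targets.get(row["name"])
        if target is None:
            if row["improvement_pct"] < REGRESSION_LIMIT:
                return "FAIL"  # regression detected -> fix_required
            continue
        if row["improvement_pct"] >= target:
            continue  # criterion satisfied
        if row["improvement_pct"] >= 0.5 * target:
            verdict = "WARN"  # > 50% of target achieved
        else:
            return "FAIL"  # plan criterion not met -> fix_required
    return verdict
```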
- Write benchmark results to the output path (a serialization sketch follows this list):
  - Single: `<session>/artifacts/benchmark-results.json`
  - Fan-out: `<session>/artifacts/branches/B{NN}/benchmark-results.json`
  - Independent: `<session>/artifacts/pipelines/{P}/benchmark-results.json`
  - Content: per metric -- name, baseline value, current value, improvement %, verdict; overall verdict -- PASS / WARN / FAIL; regression details (if any)
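A serialization sketch matching the content description above; the field names are assumptions, not a fixed schema:

```python
import json
from pathlib import Path

def write_results(out_path: Path, rows: list[dict], verdict: str) -> None:
    """Write benchmark-results.json to the mode-specific output path."""
    payload = {
        "overall_verdict": verdict,  # PASS / WARN / FAIL
        "metrics": rows,             # name, baseline, current, improvement %
        "regressions": [r for r in rows if r["improvement_pct"] < 0],
    }
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(json.dumps(payload, indent=2))
```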
- Update `<session>/.msg/meta.json` under the scoped namespace (a merge sketch follows this list):
  - Single: merge `{ "benchmarker": { verdict, improvements, regressions } }`
  - Fan-out: merge `{ "benchmarker.B{NN}": { verdict, improvements, regressions } }`
  - Independent: merge `{ "benchmarker.{P}": { verdict, improvements, regressions } }`
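A merge sketch, assuming the coordinator serializes writers so a plain read-modify-write is safe; the namespace key is `benchmarker`, `benchmarker.B{NN}`, or `benchmarker.{P}` per the list above:

```python
import json
from pathlib import Path

def update_meta(session: Path, namespace: str, summary: dict) -> None:
    """Shallow-merge the benchmarker summary under its scoped key."""
    meta_path = session / ".msg" / "meta.json"
    meta = json.loads(meta_path.read_text()) if meta_path.exists() else {}
    meta[namespace] = summary  # {"verdict": ..., "improvements": ..., "regressions": ...}
    meta_path.write_text(json.dumps(meta, indent=2))
```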
- If the verdict is FAIL, include detailed feedback in the message for FIX task creation:
  - Which metrics failed, by how much, and suggested investigation areas