feat: migrate all codex team skills from spawn_agents_on_csv to spawn_agent + wait_agent architecture

- Delete 21 old team skill directories using CSV-wave pipeline pattern (~100+ files)
- Delete old team-lifecycle (v3) and team-planex-v2
- Create generic team-worker.toml and team-supervisor.toml (replacing tlv4-specific TOMLs)
- Convert 19 team skills from Claude Code format (Agent/SendMessage/TaskCreate)
  to Codex format (spawn_agent/wait_agent/tasks.json/request_user_input)
- Update team-lifecycle-v4 to use generic agent types (team_worker/team_supervisor)
- Convert all coordinator role files: dispatch.md, monitor.md, role.md
- Convert all worker role files: remove run_in_background, fix Bash syntax
- Convert all specs/pipelines.md references
- Final state: 20 team skills, 217 .md files, zero Claude Code API residuals

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This commit is contained in:
Author: catlog22
Date: 2026-03-24 16:54:48 +08:00
Parent: 54283e5dbb
Commit: 1e560ab8e8
334 changed files with 28996 additions and 35516 deletions


@@ -1,697 +1,169 @@
---
name: team-perf-opt
description: Performance optimization team skill. Profiles application performance, identifies bottlenecks, designs optimization strategies, implements changes, benchmarks improvements, and reviews code quality via CSV wave pipeline with interactive review-fix cycles.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"performance optimization task description\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, request_user_input
description: Unified team skill for performance optimization. Coordinator orchestrates pipeline, workers are team-worker agents. Supports single/fan-out/independent parallel modes. Triggers on "team perf-opt".
allowed-tools: spawn_agent(*), wait_agent(*), send_input(*), close_agent(*), report_agent_job_result(*), request_user_input(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*)
---
# Team Performance Optimization
Profile application performance, identify bottlenecks, design optimization strategies, implement changes, benchmark improvements, and review code quality.
## Auto Mode
When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.
## Usage
```bash
$team-perf-opt "Optimize API response times for the user dashboard endpoints"
$team-perf-opt -c 4 "Profile and reduce memory usage in the data processing pipeline"
$team-perf-opt -y "Optimize bundle size and rendering performance for the frontend"
$team-perf-opt --continue "perf-optimize-api-20260308"
```
**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume existing session
**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)
---
## Overview
Orchestrate multi-agent performance optimization: profile application, identify bottlenecks, design optimization strategies, implement changes, benchmark improvements, review code quality. The pipeline has five domain roles (profiler, strategist, optimizer, benchmarker, reviewer) mapped to CSV wave stages with an interactive review-fix cycle.
**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary)
## Architecture
```
+-------------------------------------------------------------------+
| TEAM PERFORMANCE OPTIMIZATION WORKFLOW |
+-------------------------------------------------------------------+
| |
| Phase 0: Pre-Wave Interactive (Requirement Clarification) |
| +- Parse user task description |
| +- Detect scope: specific endpoint vs full app profiling |
| +- Clarify ambiguous requirements (request_user_input) |
| +- Output: refined requirements for decomposition |
| |
| Phase 1: Requirement -> CSV + Classification |
| +- Identify performance targets and metrics |
| +- Build 5-stage pipeline (profile->strategize->optimize-> |
| | benchmark+review) |
| +- Classify tasks: csv-wave | interactive (exec_mode) |
| +- Compute dependency waves (topological sort) |
| +- Generate tasks.csv with wave + exec_mode columns |
| +- User validates task breakdown (skip if -y) |
| |
| Phase 2: Wave Execution Engine (Extended) |
| +- For each wave (1..N): |
| | +- Execute pre-wave interactive tasks (if any) |
| | +- Build wave CSV (filter csv-wave tasks for this wave) |
| | +- Inject previous findings into prev_context column |
| | +- spawn_agents_on_csv(wave CSV) |
| | +- Execute post-wave interactive tasks (if any) |
| | +- Merge all results into master tasks.csv |
| | +- Check: any failed? -> skip dependents |
| +- discoveries.ndjson shared across all modes (append-only) |
| +- Review-fix cycle: max 3 iterations per branch |
| |
| Phase 3: Post-Wave Interactive (Completion Action) |
| +- Pipeline completion report with benchmark comparisons |
| +- Interactive completion choice (Archive/Keep/Export) |
| +- Final aggregation / report |
| |
| Phase 4: Results Aggregation |
| +- Export final results.csv |
| +- Generate context.md with all findings |
| +- Display summary: completed/failed/skipped per wave |
| +- Offer: view results | retry failed | done |
| |
+-------------------------------------------------------------------+
Skill(skill="team-perf-opt", args="<task-description>")
|
SKILL.md (this file) = Router
|
+--------------+--------------+
| |
no --role flag --role <name>
| |
Coordinator Worker
roles/coordinator/role.md roles/<name>/role.md
|
+-- analyze -> dispatch -> spawn workers -> STOP
|
+-------+-------+-------+-------+-------+
v v v v v
[profiler] [strategist] [optimizer] [benchmarker] [reviewer]
(team-worker agents)
Pipeline (Single mode):
PROFILE-001 -> STRATEGY-001 -> IMPL-001 -> BENCH-001 + REVIEW-001 (fix cycle)
Pipeline (Fan-out mode):
PROFILE-001 -> STRATEGY-001 -> [IMPL-B01..N](parallel) -> BENCH+REVIEW per branch
Pipeline (Independent mode):
[Pipeline A: PROFILE-A->STRATEGY-A->IMPL-A->BENCH-A+REVIEW-A]
[Pipeline B: PROFILE-B->STRATEGY-B->IMPL-B->BENCH-B+REVIEW-B] (parallel)
```
---
## Role Registry
| Role | Path | Prefix | Inner Loop |
|------|------|--------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | — | — |
| profiler | [roles/profiler/role.md](roles/profiler/role.md) | PROFILE-* | false |
| strategist | [roles/strategist/role.md](roles/strategist/role.md) | STRATEGY-* | false |
| optimizer | [roles/optimizer/role.md](roles/optimizer/role.md) | IMPL-*, FIX-* | true |
| benchmarker | [roles/benchmarker/role.md](roles/benchmarker/role.md) | BENCH-* | false |
| reviewer | [roles/reviewer/role.md](roles/reviewer/role.md) | REVIEW-*, QUALITY-* | false |
## Role Router
Parse `$ARGUMENTS`:
- Has `--role <name>` → Read `roles/<name>/role.md`, execute Phase 2-4
- No `--role` → Read `roles/coordinator/role.md`, execute entry router
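The router branch above can be sketched as a small flag parser (the helper name `parseRole` is illustrative, not part of the skill):

```javascript
// Extract the --role flag from the raw argument string.
// Returns the role name, or null when no --role flag is present
// (i.e. the coordinator path should be taken).
function parseRole(args) {
  const match = args.match(/--role\s+(\S+)/)
  return match ? match[1] : null
}

// parseRole('--role profiler "optimize API"') -> 'profiler'
// parseRole('"optimize API latency"')         -> null
```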
## Shared Constants
- **Session prefix**: `PERF-OPT`
- **Session path**: `.workflow/.team/PERF-OPT-<slug>-<date>/`
- **Team name**: `perf-opt`
- **CLI tools**: `ccw cli --mode analysis` (read-only), `ccw cli --mode write` (modifications)
- **Message bus**: `mcp__ccw-tools__team_msg(session_id=<session-id>, ...)`
## Worker Spawn Template
Coordinator spawns workers using this template:
```
Stage 1 Stage 2 Stage 3 Stage 4
PROFILE-001 --> STRATEGY-001 --> IMPL-001 --> BENCH-001
[profiler] [strategist] [optimizer] [benchmarker]
^ |
+<-- FIX-001 ---+
| REVIEW-001
+<--------> [reviewer]
(max 3 iterations)
```

```
spawn_agent({
agent_type: "team_worker",
items: [
{ type: "text", text: `## Role Assignment
role: <role>
role_spec: <skill_root>/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file (<skill_root>/roles/<role>/role.md) to load Phase 2-4 domain instructions.` },
{ type: "text", text: `## Task Context
task_id: <task-id>
title: <task-title>
description: <task-description>
pipeline_phase: <pipeline-phase>` },
{ type: "text", text: `## Upstream Context
<prev_context>` }
]
})
```
---
After spawning, use `wait_agent({ ids: [...], timeout_ms: 900000 })` to collect results, then `close_agent({ id })` each worker.
## Task Classification Rules
**Inner Loop roles** (optimizer): Set `inner_loop: true`.
**Single-task roles** (profiler, strategist, benchmarker, reviewer): Set `inner_loop: false`.
Each task is classified by `exec_mode`:
| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, revision cycles, user checkpoints |
## User Commands
| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph (branch-grouped), no advancement |
| `resume` / `continue` | Check worker states, advance next step |
| `revise <TASK-ID> [feedback]` | Create revision task + cascade downstream (scoped to branch) |
| `feedback <text>` | Analyze feedback impact, create targeted revision chain |
| `recheck` | Re-run quality check |
| `improve [dimension]` | Auto-improve weakest dimension |
**Classification Decision**:
| Task Property | Classification |
|---------------|---------------|
| Performance profiling (single-pass) | `csv-wave` |
| Optimization strategy design (single-pass) | `csv-wave` |
| Code optimization implementation | `csv-wave` |
| Benchmark execution (single-pass) | `csv-wave` |
| Code review (single-pass) | `csv-wave` |
| Review-fix cycle (iterative revision) | `interactive` |
| User checkpoint (plan approval) | `interactive` |
| Discussion round (DISCUSS-OPT, DISCUSS-REVIEW) | `interactive` |
---
## CSV Schema
### tasks.csv (Master State)
```csv
id,title,description,role,bottleneck_type,priority,target_files,deps,context_from,exec_mode,wave,status,findings,verdict,artifacts_produced,error
"PROFILE-001","Profile performance","Profile application performance to identify CPU, memory, I/O, network, and rendering bottlenecks. Produce baseline metrics and ranked report.","profiler","","","","","","csv-wave","1","pending","","","",""
"STRATEGY-001","Design optimization plan","Analyze bottleneck report to design prioritized optimization plan with strategies and expected improvements.","strategist","","","","PROFILE-001","PROFILE-001","csv-wave","2","pending","","","",""
"IMPL-001","Implement optimizations","Implement performance optimization changes following strategy plan in priority order.","optimizer","","","","STRATEGY-001","STRATEGY-001","csv-wave","3","pending","","","",""
"BENCH-001","Benchmark improvements","Run benchmarks comparing before/after optimization metrics. Validate improvements meet plan criteria.","benchmarker","","","","IMPL-001","IMPL-001","csv-wave","4","pending","","PASS","",""
"REVIEW-001","Review optimization code","Review optimization changes for correctness, side effects, regression risks, and best practices.","reviewer","","","","IMPL-001","IMPL-001","csv-wave","4","pending","","APPROVE","",""
```
**Columns**:
| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (PREFIX-NNN format) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description (self-contained) |
| `role` | Input | Worker role: profiler, strategist, optimizer, benchmarker, reviewer |
| `bottleneck_type` | Input | Performance bottleneck category: CPU, MEMORY, IO, NETWORK, RENDERING, DATABASE |
| `priority` | Input | P0 (Critical), P1 (High), P2 (Medium), P3 (Low) |
| `target_files` | Input | Semicolon-separated file paths to focus on |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `verdict` | Output | Benchmark/review verdict: PASS, WARN, FAIL, APPROVE, REVISE, REJECT |
| `artifacts_produced` | Output | Semicolon-separated paths of produced artifacts |
| `error` | Output | Error message if failed (empty if success) |
### Per-Wave CSV (Temporary)
Each wave generates a temporary `wave-{N}.csv` with extra `prev_context` column (csv-wave tasks only).
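The phase pseudocode below references `parseCsv`/`toCsv` without defining them. A minimal sketch consistent with the quoted fields in the schema above (assumes RFC-4180-style quoting, no embedded newlines):

```javascript
// Split one CSV line into fields, honoring double-quote escaping ("" -> ").
function parseLine(line) {
  const fields = []
  let cur = '', inQuotes = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++ }
      else if (ch === '"') inQuotes = false
      else cur += ch
    } else if (ch === '"') inQuotes = true
    else if (ch === ',') { fields.push(cur); cur = '' }
    else cur += ch
  }
  fields.push(cur)
  return fields
}

// Parse a CSV document into an array of row objects keyed by header.
function parseCsv(text) {
  const lines = text.trim().split('\n')
  const headers = parseLine(lines[0])
  return lines.slice(1).map(line => {
    const values = parseLine(line)
    return Object.fromEntries(headers.map((h, i) => [h, values[i] ?? '']))
  })
}

// Serialize row objects back to CSV, quoting every field.
function toCsv(rows) {
  const headers = Object.keys(rows[0])
  const esc = v => `"${String(v ?? '').replace(/"/g, '""')}"`
  return [headers.join(','), ...rows.map(r => headers.map(h => esc(r[h])).join(','))].join('\n')
}
```

Quoting every field on output keeps descriptions containing commas (common in this schema) round-trippable.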
---
## Agent Registry (Interactive Agents)
| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| Plan Reviewer | agents/plan-reviewer.md | 2.3 (send_input cycle) | Review bottleneck report or optimization plan at user checkpoint | pre-wave |
| Fix Cycle Handler | agents/fix-cycle-handler.md | 2.3 (send_input cycle) | Manage review-fix iteration cycle (max 3 rounds) | post-wave |
| Completion Handler | agents/completion-handler.md | 2.3 (send_input cycle) | Handle pipeline completion action (Archive/Keep/Export) | standalone |
> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload.
---
## Output Artifacts
| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `task-analysis.json` | Phase 1 output: scope, bottleneck targets, pipeline config | Created in Phase 1 |
| `artifacts/baseline-metrics.json` | Profiler: before-optimization metrics | Created by profiler |
| `artifacts/bottleneck-report.md` | Profiler: ranked bottleneck findings | Created by profiler |
| `artifacts/optimization-plan.md` | Strategist: prioritized optimization plan | Created by strategist |
| `artifacts/benchmark-results.json` | Benchmarker: after-optimization metrics | Created by benchmarker |
| `artifacts/review-report.md` | Reviewer: code review findings | Created by reviewer |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |
---
## Session Structure
```
.workflow/.csv-wave/{session-id}/
+-- tasks.csv # Master state (all tasks, both modes)
+-- results.csv # Final results export
+-- discoveries.ndjson # Shared discovery board (all agents)
+-- context.md # Human-readable report
+-- task-analysis.json # Phase 1 analysis output
+-- wave-{N}.csv # Temporary per-wave input (csv-wave only)
.workflow/.team/PERF-OPT-<slug>-<date>/
+-- session.json # Session metadata + status + parallel_mode
+-- artifacts/
| +-- baseline-metrics.json # Profiler output
| +-- bottleneck-report.md # Profiler output
| +-- optimization-plan.md # Strategist output
| +-- benchmark-results.json # Benchmarker output
| +-- review-report.md # Reviewer output
+-- interactive/ # Interactive task artifacts
| +-- {id}-result.json
+-- wisdom/
+-- patterns.md # Discovered patterns and conventions
| +-- baseline-metrics.json # Profiler: before-optimization metrics
| +-- bottleneck-report.md # Profiler: ranked bottleneck findings
| +-- optimization-plan.md # Strategist: prioritized optimization plan
| +-- benchmark-results.json # Benchmarker: after-optimization metrics
| +-- review-report.md # Reviewer: code review findings
| +-- branches/B01/... # Fan-out branch artifacts
| +-- pipelines/A/... # Independent pipeline artifacts
+-- explorations/ # Shared explore cache
+-- wisdom/patterns.md # Discovered patterns and conventions
+-- discussions/ # Discussion records
+-- .msg/messages.jsonl # Team message bus
+-- .msg/meta.json # Session metadata
```
---
## Completion Action
## Implementation
When the pipeline completes:
### Session Initialization
```javascript
// UTC+8 wall-clock time (the trailing Z is inaccurate, but only date/time substrings are used)
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3
const requirement = $ARGUMENTS
.replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
.trim()
const slug = requirement.toLowerCase()
.replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
.substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `perf-${slug}-${dateStr}`
const sessionFolder = `.workflow/.csv-wave/${sessionId}`
Bash(`mkdir -p ${sessionFolder}/artifacts ${sessionFolder}/interactive ${sessionFolder}/wisdom`)
// Initialize discoveries.ndjson
Write(`${sessionFolder}/discoveries.ndjson`, '')
// Initialize wisdom
Write(`${sessionFolder}/wisdom/patterns.md`, '# Patterns & Conventions\n')
```
```javascript
request_user_input({
questions: [{
question: "Team pipeline complete. What would you like to do?",
header: "Completion",
multiSelect: false,
options: [
{ label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
{ label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
{ label: "Export Results", description: "Export deliverables to a specified location, then clean" }
]
}]
})
```
---
### Phase 0: Pre-Wave Interactive (Requirement Clarification)
**Objective**: Parse user task, detect performance scope, clarify ambiguities, prepare for decomposition.
**Workflow**:
1. **Parse user task description** from $ARGUMENTS
2. **Check for existing sessions** (continue mode):
- Scan `.workflow/.csv-wave/perf-*/tasks.csv` for sessions with pending tasks
- If `--continue`: resume the specified or most recent session, skip to Phase 2
- If active session found: ask user whether to resume or start new
3. **Identify performance optimization target**:
| Signal | Target |
|--------|--------|
| Specific endpoint/file mentioned | Scoped optimization |
| "slow", "performance", "speed", generic | Full application profiling |
| Specific metric (response time, memory, bundle size) | Targeted metric optimization |
| "frontend", "backend", "CLI" | Platform-specific profiling |
4. **Clarify if ambiguous** (skip if AUTO_YES):
```javascript
request_user_input({
questions: [{
question: "Please confirm the performance optimization scope.",
header: "Scope",
id: "perf_scope",
options: [
{ label: "Proceed (Recommended)", description: "Scope is clear, start profiling" },
{ label: "Narrow scope", description: "Specify endpoints/modules to focus on" },
{ label: "Add constraints", description: "Target metrics, acceptable trade-offs" }
]
}]
})
```
5. **Output**: Refined requirement string for Phase 1
**Success Criteria**:
- Refined requirements available for Phase 1 decomposition
- Existing session detected and handled if applicable
---
### Phase 1: Requirement -> CSV + Classification
**Objective**: Decompose performance optimization task into the 5-stage pipeline tasks, assign waves, generate tasks.csv.
**Decomposition Rules**:
1. **Stage mapping** -- performance optimization always follows this pipeline:
| Stage | Role | Task Prefix | Wave | Description |
|-------|------|-------------|------|-------------|
| 1 | profiler | PROFILE | 1 | Profile app, identify bottlenecks, produce baseline metrics |
| 2 | strategist | STRATEGY | 2 | Design optimization plan from bottleneck report |
| 3 | optimizer | IMPL | 3 | Implement optimizations per plan priority |
| 4a | benchmarker | BENCH | 4 | Benchmark before/after, validate improvements |
| 4b | reviewer | REVIEW | 4 | Review optimization code for correctness |
2. **Single-pipeline decomposition**: Generate one task per stage with sequential dependencies:
- PROFILE-001 (wave 1, no deps)
- STRATEGY-001 (wave 2, deps: PROFILE-001)
- IMPL-001 (wave 3, deps: STRATEGY-001)
- BENCH-001 (wave 4, deps: IMPL-001)
- REVIEW-001 (wave 4, deps: IMPL-001)
3. **Description enrichment**: Each task description must be self-contained with:
- Clear goal statement
- Input artifacts to read
- Output artifacts to produce
- Success criteria
- Session folder path
**Classification Rules**:
| Task Property | exec_mode |
|---------------|-----------|
| PROFILE, STRATEGY, IMPL, BENCH, REVIEW (initial pass) | `csv-wave` |
| FIX tasks (review-fix cycle) | `interactive` (handled by fix-cycle-handler agent) |
**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).
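The wave computation above can be sketched as a Kahn-style pass over the `deps` column (the helper name `computeWaves` is illustrative):

```javascript
// Assign 1-based wave numbers: a task's wave is 1 + max(wave of its deps).
// Throws on circular dependencies (per the error-handling table).
function computeWaves(tasks) {
  const depsOf = Object.fromEntries(
    tasks.map(t => [t.id, (t.deps || '').split(';').filter(Boolean)]))
  const wave = {}
  let assigned = 0
  while (assigned < tasks.length) {
    let progressed = false
    for (const t of tasks) {
      if (wave[t.id]) continue
      const deps = depsOf[t.id]
      if (deps.every(d => wave[d])) {
        // Depth tracking: deepest dependency determines the wave.
        wave[t.id] = deps.length ? Math.max(...deps.map(d => wave[d])) + 1 : 1
        assigned++
        progressed = true
      }
    }
    if (!progressed) throw new Error('Circular dependency detected')
  }
  return wave
}
```

On the single-pipeline decomposition this yields PROFILE-001 in wave 1 through BENCH-001/REVIEW-001 together in wave 4.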
**User Validation**: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).
**Success Criteria**:
- tasks.csv created with valid schema, wave, and exec_mode assignments
- task-analysis.json written with scope and pipeline config
- No circular dependencies
- User approved (or AUTO_YES)
---
### Phase 2: Wave Execution Engine (Extended)
**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.
```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
const maxWave = Math.max(...tasks.map(t => Number(t.wave)))
for (let wave = 1; wave <= maxWave; wave++) {
console.log(`\nWave ${wave}/${maxWave}`)
// 1. Separate tasks by exec_mode (CSV fields are strings; coerce wave to number)
const waveTasks = tasks.filter(t => Number(t.wave) === wave && t.status === 'pending')
const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')
// 2. Check dependencies -- skip tasks whose deps failed
for (const task of waveTasks) {
const depIds = (task.deps || '').split(';').filter(Boolean)
const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
task.status = 'skipped'
task.error = `Dependency failed: ${depIds.filter((id, i) =>
['failed','skipped'].includes(depStatuses[i])).join(', ')}`
}
}
// 3. Execute pre-wave interactive tasks (if any)
for (const task of interactiveTasks.filter(t => t.status === 'pending')) {
const agentFile = task.id.startsWith('FIX') ? 'agents/fix-cycle-handler.md' : 'agents/plan-reviewer.md'
Read(agentFile)
const agent = spawn_agent({
message: `## TASK ASSIGNMENT\n\n### MANDATORY FIRST STEPS\n1. Read: ${agentFile}\n2. Read: ${sessionFolder}/discoveries.ndjson\n3. Read: .workflow/project-tech.json (if exists)\n\n---\n\nGoal: ${task.description}\nScope: ${task.title}\nSession: ${sessionFolder}\n\n### Previous Context\n${buildPrevContext(task, tasks)}`
})
const result = wait({ ids: [agent], timeout_ms: 600000 })
if (result.timed_out) {
send_input({ id: agent, message: "Please finalize and output current findings." })
wait({ ids: [agent], timeout_ms: 120000 })
}
Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
task_id: task.id, status: "completed", findings: parseFindings(result),
timestamp: getUtc8ISOString()
}))
close_agent({ id: agent })
task.status = 'completed'
task.findings = parseFindings(result)
}
// 4. Build prev_context for csv-wave tasks
const pendingCsvTasks = csvTasks.filter(t => t.status === 'pending')
for (const task of pendingCsvTasks) {
task.prev_context = buildPrevContext(task, tasks)
}
if (pendingCsvTasks.length > 0) {
// 5. Write wave CSV
Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingCsvTasks))
// 6. Determine instruction -- read from instructions/agent-instruction.md
Read('instructions/agent-instruction.md')
// 7. Execute wave via spawn_agents_on_csv
spawn_agents_on_csv({
csv_path: `${sessionFolder}/wave-${wave}.csv`,
id_column: "id",
instruction: perfOptInstruction, // from instructions/agent-instruction.md
max_concurrency: maxConcurrency,
max_runtime_seconds: 900,
output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
output_schema: {
type: "object",
properties: {
id: { type: "string" },
status: { type: "string", enum: ["completed", "failed"] },
findings: { type: "string" },
verdict: { type: "string" },
artifacts_produced: { type: "string" },
error: { type: "string" }
}
}
})
// 8. Merge results into master CSV
const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
for (const r of results) {
const t = tasks.find(t => t.id === r.id)
if (t) Object.assign(t, r)
}
}
// 9. Update master CSV
Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))
// 10. Cleanup temp files
Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)
// 11. Post-wave: check for review-fix cycle
const benchTask = tasks.find(t => t.id.startsWith('BENCH') && Number(t.wave) === wave)
const reviewTask = tasks.find(t => t.id.startsWith('REVIEW') && Number(t.wave) === wave)
if ((benchTask?.verdict === 'FAIL' || reviewTask?.verdict === 'REVISE' || reviewTask?.verdict === 'REJECT')) {
const fixCycleCount = tasks.filter(t => t.id.startsWith('FIX')).length
if (fixCycleCount < 3) {
const fixId = `FIX-${String(fixCycleCount + 1).padStart(3, '0')}`
const feedback = [benchTask?.error, reviewTask?.findings].filter(Boolean).join('\n')
tasks.push({
id: fixId, title: `Fix issues from review/benchmark cycle ${fixCycleCount + 1}`,
description: `Fix issues found:\n${feedback}`,
role: 'optimizer', bottleneck_type: '', priority: 'P0', target_files: '',
deps: '', context_from: '', exec_mode: 'interactive',
wave: wave + 1, status: 'pending', findings: '', verdict: '',
artifacts_produced: '', error: ''
})
}
}
// 12. Display wave summary
const completed = waveTasks.filter(t => t.status === 'completed').length
const failed = waveTasks.filter(t => t.status === 'failed').length
const skipped = waveTasks.filter(t => t.status === 'skipped').length
console.log(`Wave ${wave} Complete: ${completed} completed, ${failed} failed, ${skipped} skipped`)
}
```
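The `buildPrevContext` helper used in the engine above is not defined in this file; a minimal sketch consistent with the `context_from` and `deps` columns:

```javascript
// Collect findings from the tasks named in context_from (falling back to deps)
// and format them as the Upstream Context block injected into worker prompts.
function buildPrevContext(task, tasks) {
  const sourceIds = (task.context_from || task.deps || '').split(';').filter(Boolean)
  const sections = sourceIds
    .map(id => tasks.find(t => t.id === id))
    .filter(t => t && t.status === 'completed' && t.findings)
    .map(t => `### ${t.id}: ${t.title}\n${t.findings}`)
  return sections.length ? sections.join('\n\n') : '(no upstream context)'
}
```

Building the block from the master CSV (rather than memory) is what Core Rule 5 requires.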
**Success Criteria**:
- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when predecessor failed
- Review-fix cycle handled with max 3 iterations
- discoveries.ndjson accumulated across all waves and mechanisms
---
### Phase 3: Post-Wave Interactive (Completion Action)
**Objective**: Pipeline completion report with performance improvement metrics and interactive completion choice.
```javascript
// 1. Generate pipeline summary
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
const completed = tasks.filter(t => t.status === 'completed')
const failed = tasks.filter(t => t.status === 'failed')
// 2. Load improvement metrics from benchmark results
let improvements = ''
try {
const benchmark = JSON.parse(Read(`${sessionFolder}/artifacts/benchmark-results.json`))
improvements = `Performance Improvements:\n${benchmark.metrics.map(m =>
` ${m.name}: ${m.baseline} -> ${m.current} (${m.improvement})`).join('\n')}`
} catch {}
console.log(`
============================================
PERFORMANCE OPTIMIZATION COMPLETE
Deliverables:
- Baseline Metrics: artifacts/baseline-metrics.json
- Bottleneck Report: artifacts/bottleneck-report.md
- Optimization Plan: artifacts/optimization-plan.md
- Benchmark Results: artifacts/benchmark-results.json
- Review Report: artifacts/review-report.md
${improvements}
Pipeline: ${completed.length}/${tasks.length} tasks
Session: ${sessionFolder}
============================================
`)
// 3. Completion action
if (!AUTO_YES) {
request_user_input({
questions: [{
question: "Performance optimization complete. Choose next action.",
header: "Done",
id: "completion",
options: [
{ label: "Archive (Recommended)", description: "Archive session, output final summary" },
{ label: "Keep Active", description: "Keep session for follow-up work" },
{ label: "Retry Failed", description: "Re-run failed tasks" }
]
}]
})
}
```
**Success Criteria**:
- Post-wave interactive processing complete
- User informed of results and improvement metrics
---
### Phase 4: Results Aggregation
**Objective**: Generate final results and human-readable report.
```javascript
// 1. Export results.csv
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)
// 2. Generate context.md
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
let contextMd = `# Performance Optimization Report\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Date**: ${getUtc8ISOString().substring(0, 10)}\n\n`
contextMd += `## Summary\n`
contextMd += `| Status | Count |\n|--------|-------|\n`
contextMd += `| Completed | ${tasks.filter(t => t.status === 'completed').length} |\n`
contextMd += `| Failed | ${tasks.filter(t => t.status === 'failed').length} |\n`
contextMd += `| Skipped | ${tasks.filter(t => t.status === 'skipped').length} |\n\n`
contextMd += `## Deliverables\n\n`
contextMd += `| Artifact | Path |\n|----------|------|\n`
contextMd += `| Baseline Metrics | artifacts/baseline-metrics.json |\n`
contextMd += `| Bottleneck Report | artifacts/bottleneck-report.md |\n`
contextMd += `| Optimization Plan | artifacts/optimization-plan.md |\n`
contextMd += `| Benchmark Results | artifacts/benchmark-results.json |\n`
contextMd += `| Review Report | artifacts/review-report.md |\n\n`
const maxWave = Math.max(...tasks.map(t => Number(t.wave)))
contextMd += `## Wave Execution\n\n`
for (let w = 1; w <= maxWave; w++) {
const waveTasks = tasks.filter(t => Number(t.wave) === w)
contextMd += `### Wave ${w}\n\n`
for (const t of waveTasks) {
const icon = t.status === 'completed' ? '[DONE]' : t.status === 'failed' ? '[FAIL]' : '[SKIP]'
contextMd += `${icon} **${t.title}** [${t.role}] ${t.verdict ? `(${t.verdict})` : ''} ${t.findings || ''}\n\n`
}
}
Write(`${sessionFolder}/context.md`, contextMd)
console.log(`Results exported to: ${sessionFolder}/results.csv`)
console.log(`Report generated at: ${sessionFolder}/context.md`)
```
**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated with deliverables list
- Summary displayed to user
---
## Shared Discovery Board Protocol
All agents (csv-wave and interactive) share a single `discoveries.ndjson` file for cross-task knowledge exchange.
**Format**: One JSON object per line (NDJSON):
```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"PROFILE-001","type":"bottleneck_found","data":{"type":"CPU","location":"src/services/DataProcessor.ts:145","severity":"Critical","description":"O(n^2) nested loop in processRecords"}}
{"ts":"2026-03-08T10:05:00Z","worker":"IMPL-001","type":"file_modified","data":{"file":"src/services/DataProcessor.ts","change":"Replaced nested loop with Map lookup","lines_added":8}}
```
**Discovery Types**:
| Type | Data Schema | Description |
|------|-------------|-------------|
| `bottleneck_found` | `{type, location, severity, description}` | Performance bottleneck identified |
| `hotspot_found` | `{file, function, cpu_pct, description}` | CPU hotspot detected |
| `memory_issue` | `{file, type, size_mb, description}` | Memory leak or bloat found |
| `io_issue` | `{operation, latency_ms, description}` | I/O performance issue |
| `file_modified` | `{file, change, lines_added}` | File change recorded |
| `metric_measured` | `{metric, value, unit, context}` | Performance metric measured |
| `pattern_found` | `{pattern_name, location, description}` | Code pattern identified |
| `artifact_produced` | `{name, path, producer, type}` | Deliverable created |
**Protocol**:
1. Agents MUST read discoveries.ndjson at start of execution
2. Agents MUST append relevant discoveries during execution
3. Agents MUST NOT modify or delete existing entries
4. Deduplication by `{type, data.location}` or `{type, data.file}` key
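The read-and-deduplicate side of this protocol can be sketched as a key-based filter over the NDJSON lines (the helper name `loadDiscoveries` is illustrative):

```javascript
// Parse discoveries.ndjson, skipping malformed lines (per the error-handling
// table), then deduplicate by {type, data.location} / {type, data.file}.
function loadDiscoveries(ndjsonText) {
  const entries = []
  for (const line of ndjsonText.split('\n')) {
    if (!line.trim()) continue
    try { entries.push(JSON.parse(line)) } catch { /* ignore corrupt line */ }
  }
  const seen = new Set()
  return entries.filter(e => {
    const key = `${e.type}|${e.data?.location ?? e.data?.file ?? ''}`
    if (seen.has(key)) return false
    seen.add(key)
    return true
  })
}
```

Writers still only append (rule 3); deduplication happens at read time.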
---
## Specs Reference
- [specs/pipelines.md](specs/pipelines.md) — Pipeline definitions and task registry
- [specs/team-config.json](specs/team-config.json) — Team configuration
## Error Handling
| Error | Resolution |
|-------|------------|
| Circular dependency in tasks | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Review-fix cycle exceeds 3 iterations | Escalate to user with summary of remaining issues |
| Benchmark regression detected | Create FIX task with regression details |
| Profiling tool not available | Fall back to static analysis methods |
| Continue mode: no session found | List available sessions, prompt user to select |
---
## Core Rules
1. **Start Immediately**: First action is session initialization, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when interaction pattern requires it
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson -- both mechanisms share it
7. **Skip on Failure**: If a dependency failed, skip the dependent task (regardless of mechanism)
8. **Max 3 Fix Cycles**: Review-fix cycle capped at 3 iterations; escalate to user after
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped
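Rules 2 and 7, together with the circular-dependency check from Error Handling, amount to wave computation over the task graph. A minimal sketch (Kahn-style layering; field names follow the tasks.json schema used elsewhere in this skill):

```javascript
// Group tasks into execution waves: a task joins a wave once every task
// in its blockedBy list has completed in an earlier wave. If tasks remain
// but no wave can be formed, the dependency graph contains a cycle.
function computeWaves(tasks) {
  const done = new Set();
  const remaining = new Map(tasks.map(t => [t.id, t]));
  const waves = [];
  while (remaining.size > 0) {
    const wave = [...remaining.values()]
      .filter(t => (t.blockedBy || []).every(dep => done.has(dep)))
      .map(t => t.id);
    if (wave.length === 0) {
      throw new Error("Circular dependency among: " + [...remaining.keys()].join(", "));
    }
    for (const id of wave) { done.add(id); remaining.delete(id); }
    waves.push(wave);
  }
  return waves;
}
```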
---
## Coordinator Role Constraints (Main Agent)
**CRITICAL**: The coordinator (main agent executing this skill) is responsible for **orchestration only**, NOT implementation.
15. **Coordinator Does NOT Execute Code**: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
- Spawns agents with task assignments
- Waits for agent callbacks
- Merges results and coordinates workflow
- Manages workflow transitions between phases
16. **Patient Waiting is Mandatory**: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
- Wait patiently for `wait()` calls to complete
- NOT skip workflow steps due to perceived delays
- NOT assume agents have failed just because they're taking time
- Trust the timeout mechanisms defined in the skill
17. **Use send_input for Clarification**: When agents need guidance or appear stuck, the coordinator MUST:
- Use `send_input()` to ask questions or provide clarification
- NOT skip the agent or move to next phase prematurely
- Give agents opportunity to respond before escalating
- Example: `send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })`
18. **No Workflow Shortcuts**: The coordinator MUST NOT:
- Skip phases or stages defined in the workflow
- Bypass required approval or review steps
- Execute dependent tasks before prerequisites complete
- Assume task completion without explicit agent callback
- Make up or fabricate agent results
19. **Respect Long-Running Processes**: This is a complex multi-agent workflow that requires patience:
- Total execution time may range from 30-90 minutes or longer
- Each phase may take 10-30 minutes depending on complexity
- The coordinator must remain active and attentive throughout the entire process
- Do not terminate or skip steps due to time concerns
| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with role registry list |
| Role file not found | Error with expected path (roles/{name}/role.md) |
| Profiling tool not available | Fallback to static analysis methods |
| Benchmark regression detected | Auto-create FIX task with regression details |
| Review-fix cycle exceeds 3 iterations | Escalate to user |
| One branch IMPL fails | Mark that branch failed, other branches continue |
| Fast-advance conflict | Coordinator reconciles on next callback |
| Completion action fails | Default to Keep Active |


@@ -1,141 +0,0 @@
# Completion Handler Agent
Handle pipeline completion action for performance optimization: present results summary with before/after metrics, offer Archive/Keep/Export options, execute chosen action.
## Identity
- **Type**: `interactive`
- **Responsibility**: Pipeline completion and session lifecycle management
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Present complete pipeline summary with before/after performance metrics
- Offer completion action choices
- Execute chosen action (archive, keep, export)
- Produce structured output
### MUST NOT
- Skip presenting results summary
- Execute destructive actions without confirmation
- Modify source code
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load result artifacts |
| `Write` | builtin | Write export files |
| `Bash` | builtin | Archive/cleanup operations |
| `request_user_input` | builtin | Present completion choices |
---
## Execution
### Phase 1: Results Collection
**Objective**: Gather all pipeline results for summary.
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| tasks.csv | Yes | Master task state |
| Baseline metrics | Yes | Pre-optimization metrics |
| Benchmark results | Yes | Post-optimization metrics |
| Review report | Yes | Code review findings |
**Steps**:
1. Read tasks.csv -- count completed/failed/skipped
2. Read baseline-metrics.json -- extract before metrics
3. Read benchmark-results.json -- extract after metrics, compute improvements
4. Read review-report.md -- extract final verdict
**Output**: Compiled results summary with before/after comparison
---
### Phase 2: Present and Choose
**Objective**: Display results and get user's completion choice.
**Steps**:
1. Display pipeline summary with before/after metrics comparison table
2. Present completion action:
```javascript
request_user_input({
questions: [{
question: "Performance optimization complete. What would you like to do?",
header: "Completion",
id: "completion_action",
options: [
{ label: "Archive & Clean (Recommended)", description: "Archive session, output final summary" },
{ label: "Keep Active", description: "Keep session for follow-up work or inspection" },
{ label: "Export Results", description: "Export deliverables to a specified location" }
]
}]
})
```
**Output**: User's choice
---
### Phase 3: Execute Action
**Objective**: Execute the chosen completion action.
| Choice | Action |
|--------|--------|
| Archive & Clean | Copy results.csv and context.md to archive, mark session completed |
| Keep Active | Mark session as paused, leave all artifacts in place |
| Export Results | Copy key deliverables to user-specified location |
---
## Structured Output Template
```
## Pipeline Summary
- Tasks: X completed, Y failed, Z skipped
- Duration: estimated from timestamps
## Performance Improvements
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| metric_1 | value | value | +X% |
| metric_2 | value | value | +X% |
## Deliverables
- Baseline Metrics: path
- Bottleneck Report: path
- Optimization Plan: path
- Benchmark Results: path
- Review Report: path
## Action Taken
- Choice: Archive & Clean / Keep Active / Export Results
- Status: completed
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Result artifacts missing | Report partial summary with available data |
| Archive operation fails | Default to Keep Active |
| Export path invalid | Ask user for valid path |
| Timeout approaching | Default to Keep Active |


@@ -1,156 +0,0 @@
# Fix Cycle Handler Agent
Manage the review-fix iteration cycle for performance optimization. Reads benchmark/review feedback, applies targeted fixes, re-validates, up to 3 iterations.
## Identity
- **Type**: `interactive`
- **Responsibility**: Iterative fix-verify cycle for optimization issues
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Read benchmark results and review report to understand failures
- Apply targeted fixes addressing specific feedback items
- Re-validate after each fix attempt
- Track iteration count (max 3)
- Produce structured output with fix summary
### MUST NOT
- Skip reading feedback before attempting fixes
- Apply broad changes unrelated to feedback
- Exceed 3 fix iterations
- Modify code outside the scope of reported issues
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load feedback artifacts and source files |
| `Edit` | builtin | Apply targeted code fixes |
| `Write` | builtin | Write updated artifacts |
| `Bash` | builtin | Run build/test/benchmark validation |
| `Grep` | builtin | Search for patterns |
| `Glob` | builtin | Find files |
---
## Execution
### Phase 1: Feedback Loading
**Objective**: Load and parse benchmark/review feedback.
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| Benchmark results | Yes (if benchmark failed) | From artifacts/benchmark-results.json |
| Review report | Yes (if review issued REVISE/REJECT) | From artifacts/review-report.md |
| Optimization plan | Yes | Original plan for reference |
| Baseline metrics | Yes | For regression comparison |
| Discoveries | No | Shared findings |
**Steps**:
1. Read benchmark-results.json -- identify metrics that failed targets or regressed
2. Read review-report.md -- identify Critical/High findings with file:line references
3. Categorize issues by type and priority:
- Performance regression (benchmark target not met)
- Correctness issue (logic error, race condition)
- Side effect (unintended behavior change)
- Maintainability concern (excessive complexity)
**Output**: Prioritized list of issues to fix
---
### Phase 2: Fix Implementation (Iterative)
**Objective**: Apply fixes and re-validate, up to 3 rounds.
**Steps**:
For each iteration (1..3):
1. **Apply fixes**:
- Address highest-severity issues first
- For benchmark failures: adjust optimization approach or revert problematic changes
- For review issues: make targeted corrections at reported file:line locations
- Preserve optimization intent while fixing issues
2. **Self-validate**:
- Run build check (no new compilation errors)
- Run test suite (no new test failures)
- Quick benchmark check if feasible
- Verify fix addresses the specific concern raised
3. **Check convergence**:
| Validation Result | Action |
|-------------------|--------|
| All checks pass | Exit loop, report success |
| Some checks still fail, iteration < 3 | Continue to next iteration |
| Still failing at iteration 3 | Report remaining issues for escalation |
**Output**: Fix results per iteration
---
### Phase 3: Result Reporting
**Objective**: Produce final fix cycle summary.
**Steps**:
1. Update benchmark-results.json with post-fix metrics if applicable
2. Append fix discoveries to discoveries.ndjson
3. Report final status
---
## Structured Output Template
```
## Summary
- Fix cycle completed: N iterations, M issues resolved, K remaining
## Iterations
### Iteration 1
- Fixed: [list of fixes applied with file:line]
- Validation: [pass/fail per dimension]
### Iteration 2 (if needed)
- Fixed: [list of fixes]
- Validation: [pass/fail]
## Final Status
- verdict: PASS | PARTIAL | ESCALATE
- Remaining issues (if any): [list]
## Performance Impact
- Metric changes from fixes (if measured)
## Artifacts Updated
- artifacts/benchmark-results.json (updated metrics, if re-benchmarked)
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Fix introduces new regression | Revert fix, try alternative approach |
| Cannot reproduce reported issue | Log as resolved-by-environment, continue |
| Fix scope exceeds current files | Report scope expansion needed, escalate |
| Optimization approach fundamentally flawed | Report for strategist escalation |
| Timeout approaching | Output partial results with iteration count |
| 3 iterations exhausted | Report remaining issues for user escalation |


@@ -1,150 +0,0 @@
# Plan Reviewer Agent
Review bottleneck report or optimization plan at user checkpoints, providing interactive approval or revision requests.
## Identity
- **Type**: `interactive`
- **Responsibility**: Review and approve/revise plans before execution proceeds
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Read the bottleneck report or optimization plan being reviewed
- Produce structured output with clear APPROVE/REVISE verdict
- Include specific file:line references in findings
### MUST NOT
- Skip the MANDATORY FIRST STEPS role loading
- Modify source code directly
- Produce unstructured output
- Approve without actually reading the plan
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load plan artifacts and project files |
| `Grep` | builtin | Search for patterns in codebase |
| `Glob` | builtin | Find files by pattern |
| `Bash` | builtin | Run build/test commands |
### Tool Usage Patterns
**Read Pattern**: Load context files before review
```
Read("{session_folder}/artifacts/bottleneck-report.md")
Read("{session_folder}/artifacts/optimization-plan.md")
Read("{session_folder}/discoveries.ndjson")
```
---
## Execution
### Phase 1: Context Loading
**Objective**: Load the plan or report to review.
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| Bottleneck report | Yes (if reviewing profiling) | Ranked bottleneck list from profiler |
| Optimization plan | Yes (if reviewing strategy) | Prioritized plan from strategist |
| Discoveries | No | Shared findings from prior stages |
**Steps**:
1. Read the artifact being reviewed from session artifacts folder
2. Read discoveries.ndjson for additional context
3. Identify which checkpoint this review corresponds to (CP-1 for profiling, CP-2 for strategy)
**Output**: Loaded plan context for review
---
### Phase 2: Plan Review
**Objective**: Evaluate plan quality, completeness, and feasibility.
**Steps**:
1. **For bottleneck report review (CP-1)**:
- Verify all performance dimensions are covered (CPU, memory, I/O, network, rendering)
- Check that severity rankings are justified with measured evidence
- Validate baseline metrics are quantified with units and measurement method
- Check scope coverage matches original requirement
2. **For optimization plan review (CP-2)**:
- Verify each optimization has unique OPT-ID and self-contained detail
- Check priority assignments follow impact/effort matrix
- Validate target files are non-overlapping between optimizations
- Verify success criteria are measurable with specific thresholds
- Check that implementation guidance is actionable
- Assess risk levels and potential side effects
3. **Issue classification**:
| Finding Severity | Condition | Impact |
|------------------|-----------|--------|
| Critical | Missing key profiling dimension or infeasible plan | REVISE required |
| High | Unclear criteria or unrealistic targets | REVISE recommended |
| Medium | Minor gaps in coverage or detail | Note for improvement |
| Low | Style or formatting issues | Informational |
**Output**: Review findings with severity classifications
---
### Phase 3: Verdict
**Objective**: Issue APPROVE or REVISE verdict.
| Verdict | Condition | Action |
|---------|-----------|--------|
| APPROVE | No Critical or High findings | Plan is ready for next stage |
| REVISE | Has Critical or High findings | Return specific feedback for revision |
**Output**: Verdict with detailed feedback
---
## Structured Output Template
```
## Summary
- One-sentence verdict: APPROVE or REVISE with rationale
## Findings
- Finding 1: [severity] description with artifact reference
- Finding 2: [severity] description with specific section reference
## Verdict
- APPROVE: Plan is ready for execution
OR
- REVISE: Specific items requiring revision
1. Issue description + suggested fix
2. Issue description + suggested fix
## Recommendations
- Optional improvement suggestions (non-blocking)
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Artifact file not found | Report in findings, request re-generation |
| Plan structure invalid | Report as Critical finding, REVISE verdict |
| Scope mismatch | Report in findings, note for coordinator |
| Timeout approaching | Output current findings with "PARTIAL" status |


@@ -1,122 +0,0 @@
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS
1. Read shared discoveries: {session_folder}/discoveries.ndjson (if exists, skip if not)
2. Read project context: .workflow/project-tech.json (if exists)
3. Read task schema: `~/.codex/skills/team-perf-opt/schemas/tasks-schema.md` (or the project-local `<project>/.codex/skills/team-perf-opt/schemas/tasks-schema.md`)
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Description**: {description}
**Role**: {role}
**Bottleneck Type**: {bottleneck_type}
**Priority**: {priority}
**Target Files**: {target_files}
### Previous Tasks' Findings (Context)
{prev_context}
---
## Execution Protocol
1. **Read discoveries**: Load {session_folder}/discoveries.ndjson for shared exploration findings
2. **Use context**: Apply previous tasks' findings from prev_context above
3. **Execute by role**:
**If role = profiler**:
- Detect project type by scanning for framework markers:
- Frontend (React/Vue/Angular): render time, bundle size, FCP/LCP/CLS
- Backend Node (Express/Fastify/NestJS): CPU hotspots, memory, DB queries
- Native/JVM Backend (Cargo/Go/Java): CPU, memory, GC tuning
- CLI Tool: startup time, throughput, memory peak
- Trace hot code paths and CPU hotspots within target scope
- Identify memory allocation patterns and potential leaks
- Measure I/O and network latency where applicable
- Collect quantified baseline metrics (timing, memory, throughput)
- Rank top 3-5 bottlenecks by severity (Critical/High/Medium)
- Record evidence: file paths, line numbers, measured values
- Write `{session_folder}/artifacts/baseline-metrics.json` (metrics)
- Write `{session_folder}/artifacts/bottleneck-report.md` (ranked bottlenecks)
**If role = strategist**:
- Read bottleneck report and baseline from {session_folder}/artifacts/
- For each bottleneck, select optimization strategy by type:
- CPU: algorithm optimization, memoization, caching, worker threads
- MEMORY: pool reuse, lazy init, WeakRef, scope cleanup
- IO: batching, async pipelines, streaming, connection pooling
- NETWORK: request coalescing, compression, CDN, prefetching
- RENDERING: virtualization, memoization, CSS containment, code splitting
- DATABASE: index optimization, query rewriting, caching layer
- Prioritize by impact/effort: P0 (high impact+low effort) to P3
- Assign unique OPT-IDs (OPT-001, 002, ...) with non-overlapping file targets
- Define measurable success criteria (target metric value or improvement %)
- Write `{session_folder}/artifacts/optimization-plan.md`
**If role = optimizer**:
- Read optimization plan from {session_folder}/artifacts/optimization-plan.md
- Apply optimizations in priority order (P0 first)
- Preserve existing behavior -- optimization must not break functionality
- Make minimal, focused changes per optimization
- Add comments only where optimization logic is non-obvious
- Preserve existing code style and conventions
**If role = benchmarker**:
- Read baseline from {session_folder}/artifacts/baseline-metrics.json
- Read plan from {session_folder}/artifacts/optimization-plan.md
- Run benchmarks matching detected project type:
- Frontend: bundle size, render performance
- Backend: endpoint response times, memory under load, DB query times
- CLI: execution time, memory peak, throughput
- Run test suite to verify no regressions
- Collect post-optimization metrics matching baseline format
- Calculate improvement percentages per metric
- Compare against plan success criteria
- Write `{session_folder}/artifacts/benchmark-results.json`
- Set verdict: PASS (meets criteria) / WARN (partial) / FAIL (regression or criteria not met)
**If role = reviewer**:
- Read plan from {session_folder}/artifacts/optimization-plan.md
- Review changed files across 5 dimensions:
- Correctness: logic errors, race conditions, null safety
- Side effects: unintended behavior changes, API contract breaks
- Maintainability: code clarity, complexity increase, naming
- Regression risk: impact on unrelated code paths
- Best practices: idiomatic patterns, no optimization anti-patterns
- Write `{session_folder}/artifacts/review-report.md`
- Set verdict: APPROVE / REVISE / REJECT
4. **Share discoveries**: Append exploration findings to shared board:
```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> {session_folder}/discoveries.ndjson
```
5. **Report result**: Return JSON via report_agent_job_result
### Discovery Types to Share
- `bottleneck_found`: `{type, location, severity, description}` -- Bottleneck identified
- `hotspot_found`: `{file, function, cpu_pct, description}` -- CPU hotspot
- `memory_issue`: `{file, type, size_mb, description}` -- Memory problem
- `io_issue`: `{operation, latency_ms, description}` -- I/O issue
- `db_issue`: `{query, latency_ms, description}` -- Database issue
- `file_modified`: `{file, change, lines_added}` -- File change recorded
- `metric_measured`: `{metric, value, unit, context}` -- Metric measured
- `pattern_found`: `{pattern_name, location, description}` -- Pattern identified
- `artifact_produced`: `{name, path, producer, type}` -- Deliverable created
---
## Output (report_agent_job_result)
Return JSON:
```json
{
  "id": "{id}",
  "status": "completed" | "failed",
  "findings": "Key discoveries and implementation notes (max 500 chars)",
  "verdict": "PASS|WARN|FAIL|APPROVE|REVISE|REJECT or empty",
  "artifacts_produced": "semicolon-separated artifact paths",
  "error": ""
}
```


@@ -0,0 +1,89 @@
---
role: benchmarker
prefix: BENCH
inner_loop: false
message_types:
success: bench_complete
error: error
fix: fix_required
---
# Performance Benchmarker
Run benchmarks comparing before/after optimization metrics. Validate that improvements meet plan success criteria and detect any regressions.
## Phase 2: Environment & Baseline Loading
| Input | Source | Required |
|-------|--------|----------|
| Baseline metrics | <session>/artifacts/baseline-metrics.json (shared) | Yes |
| Optimization plan / detail | Varies by mode (see below) | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
1. Extract session path from task description
2. **Detect branch/pipeline context** from task description:
| Task Description Field | Value | Context |
|----------------------|-------|---------|
| `BranchId: B{NN}` | Present | Fan-out branch -- benchmark only this branch's metrics |
| `PipelineId: {P}` | Present | Independent pipeline -- use pipeline-scoped baseline |
| Neither present | - | Single mode -- full benchmark |
3. **Load baseline metrics**:
- Single / Fan-out: Read `<session>/artifacts/baseline-metrics.json` (shared baseline)
- Independent: Read `<session>/artifacts/pipelines/{P}/baseline-metrics.json`
4. **Load optimization context**:
- Single: Read `<session>/artifacts/optimization-plan.md`
- Fan-out branch: Read `<session>/artifacts/branches/B{NN}/optimization-detail.md`
- Independent: Read `<session>/artifacts/pipelines/{P}/optimization-plan.md`
5. Load .msg/meta.json for project type and optimization scope
6. Detect available benchmark tools from project:
| Signal | Benchmark Tool | Method |
|--------|---------------|--------|
| package.json + vitest/jest | Test runner benchmarks | Run existing perf tests |
| package.json + webpack/vite | Bundle analysis | Compare build output sizes |
| Cargo.toml + criterion | Rust benchmarks | cargo bench |
| go.mod | Go benchmarks | go test -bench |
| Makefile with bench target | Custom benchmarks | make bench |
| No tooling detected | Manual measurement | Timed execution via Bash |
7. Get changed files scope from shared-memory (optimizer namespace, scoped by branch/pipeline)
## Phase 3: Benchmark Execution
Run benchmarks matching detected project type:
**Frontend benchmarks**: Compare bundle size, render performance, dependency weight changes.
**Backend benchmarks**: Measure endpoint response times, memory usage under load, database query improvements.
**CLI / Library benchmarks**: Execution time, memory peak, throughput under sustained load.
**All project types**:
- Run existing test suite to verify no regressions
- Collect post-optimization metrics matching baseline format
- Calculate improvement percentages per metric
**Branch-scoped benchmarking** (fan-out mode):
- Only benchmark metrics relevant to this branch's optimization
- Still check for regressions across all metrics
## Phase 4: Result Analysis
Compare against baseline and plan criteria:
| Metric | Threshold | Verdict |
|--------|-----------|---------|
| Target improvement vs baseline | Meets plan success criteria | PASS |
| No regression in unrelated metrics | < 5% degradation allowed | PASS |
| All plan success criteria met | Every criterion satisfied | PASS |
| Improvement below target | > 50% of target achieved | WARN |
| Regression detected | Any unrelated metric degrades > 5% | FAIL -> fix_required |
| Plan criteria not met | Any criterion not satisfied | FAIL -> fix_required |
1. Write benchmark results to output path (scoped by branch/pipeline/single)
2. Update `<session>/.msg/meta.json` under scoped namespace
3. If verdict is FAIL, include detailed feedback in message for FIX task creation
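The verdict table can be sketched as a comparison routine. This is one possible reading of the thresholds, assuming lower-is-better metrics (latency, memory) where a regression is a >5% increase and WARN requires >50% of each target improvement:

```javascript
// Compare post-optimization metrics against baseline and plan criteria.
// criteria maps metric name -> target value the metric must not exceed.
function computeVerdict(baseline, after, criteria) {
  const issues = [];
  for (const [metric, target] of Object.entries(criteria)) {
    if (after[metric] > target) issues.push("criterion not met: " + metric);
  }
  for (const metric of Object.keys(baseline)) {
    if (!(metric in criteria) && after[metric] > baseline[metric] * 1.05) {
      issues.push("regression: " + metric); // unrelated metric degraded > 5%
    }
  }
  if (issues.length === 0) return { verdict: "PASS", issues };
  // WARN only when the sole failures are missed criteria that still
  // achieved more than half of their target improvement.
  const onlyCriteria = issues.every(i => i.startsWith("criterion"));
  const partial = onlyCriteria && Object.entries(criteria).every(([m, target]) => {
    const targetGain = baseline[m] - target;
    const actualGain = baseline[m] - after[m];
    return targetGain <= 0 || actualGain >= 0.5 * targetGain;
  });
  return { verdict: partial ? "WARN" : "FAIL", issues };
}
```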


@@ -0,0 +1,61 @@
# Analyze Task - Performance Optimization
Parse optimization request -> detect parallel mode -> determine scope -> design pipeline structure.
**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.
## Signal Detection
### Parallel Mode Detection
| Flag | Value | Pipeline Shape |
|------|-------|----------------|
| `--parallel-mode=single` | explicit | Single linear pipeline |
| `--parallel-mode=fan-out` | explicit | Shared profile+strategy, N parallel IMPL branches |
| `--parallel-mode=independent` | explicit | M fully independent pipelines |
| `--parallel-mode=auto` or absent | default | Auto-detect from bottleneck count at CP-2.5 |
| `auto` with count <= 2 | auto-resolved | single mode |
| `auto` with count >= 3 | auto-resolved | fan-out mode |
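The resolution rule in this table reduces to a few lines; explicit flags always win, and `auto` is settled from the bottleneck count once strategy completes:

```javascript
// Resolve the effective parallel mode: an explicit --parallel-mode value
// is used as-is; "auto" (or no flag) is decided by bottleneck count.
function resolveParallelMode(flag, bottleneckCount) {
  if (flag && flag !== "auto") return flag; // single | fan-out | independent
  return bottleneckCount >= 3 ? "fan-out" : "single";
}
```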
### Scope Detection
| Signal | Target |
|--------|--------|
| Specific file/module mentioned | Scoped optimization |
| "slow", "performance", generic | Full application profiling |
| Specific metric mentioned (FCP, memory, startup) | Targeted metric optimization |
| Multiple quoted targets (independent mode) | Per-target scoped optimization |
### Optimization Keywords
| Keywords | Capability |
|----------|------------|
| profile, bottleneck, slow, benchmark | profiler |
| optimize, improve, reduce, speed | optimizer |
| strategy, plan, prioritize | strategist |
| verify, test, validate | benchmarker |
| review, audit, quality | reviewer |
## Output
Coordinator state from this command (used by dispatch.md):
```json
{
"parallel_mode": "<auto|single|fan-out|independent>",
"max_branches": 5,
"optimization_targets": ["<target1>", "<target2>"],
"independent_targets": [],
"scope": "<specific|full-app|targeted>",
"target_metrics": ["<metric1>", "<metric2>"]
}
```
## Pipeline Structure by Mode
| Mode | Stages |
|------|--------|
| single | PROFILE-001 -> STRATEGY-001 -> IMPL-001 -> BENCH-001 + REVIEW-001 |
| fan-out | PROFILE-001 -> STRATEGY-001 -> [IMPL-B01..N in parallel] -> BENCH+REVIEW per branch |
| independent | N complete pipelines (PROFILE+STRATEGY+IMPL+BENCH+REVIEW) in parallel |
| auto | Decided at CP-2.5 after STRATEGY-001 completes based on bottleneck count |


@@ -0,0 +1,262 @@
# Command: Dispatch
Create the performance optimization task chain with correct dependencies and structured task descriptions. Supports single, fan-out, independent, and auto parallel modes.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| User requirement | From coordinator Phase 1 | Yes |
| Session folder | From coordinator Phase 2 | Yes |
| Pipeline definition | From SKILL.md Pipeline Definitions | Yes |
| Parallel mode | From session.json `parallel_mode` | Yes |
| Max branches | From session.json `max_branches` | Yes |
| Independent targets | From session.json `independent_targets` (independent mode only) | Conditional |
1. Load user requirement and optimization scope from session.json
2. Load pipeline stage definitions from SKILL.md Task Metadata Registry
3. Read `parallel_mode` and `max_branches` from session.json
4. For `independent` mode: read `independent_targets` array from session.json
## Phase 3: Task Chain Creation (Mode-Branched)
### Task Description Template
Every task is a JSON entry in the tasks array, written to `<session>/tasks.json`:
```json
{
"id": "<TASK-ID>",
"subject": "<TASK-ID>",
"description": "PURPOSE: <what this task achieves> | Success: <measurable completion criteria>\nTASK:\n - <step 1: specific action>\n - <step 2: specific action>\n - <step 3: specific action>\nCONTEXT:\n - Session: <session-folder>\n - Scope: <optimization-scope>\n - Branch: <branch-id or 'none'>\n - Upstream artifacts: <artifact-1>, <artifact-2>\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: <deliverable path> + <quality criteria>\nCONSTRAINTS: <scope limits, focus areas>\n---\nInnerLoop: <true|false>\nBranchId: <B01|A|none>",
"status": "pending",
"owner": "<role>",
"blockedBy": ["<dependency-list>"]
}
```
After building all entries, write the full array to `<session>/tasks.json`.
### Mode Router
| Mode | Action |
|------|--------|
| `single` | Create 5 tasks (PROFILE -> STRATEGY -> IMPL -> BENCH + REVIEW) -- unchanged from linear pipeline |
| `auto` | Create PROFILE-001 + STRATEGY-001 only. **Defer branch creation to CP-2.5** after strategy completes |
| `fan-out` | Create PROFILE-001 + STRATEGY-001 only. **Defer branch creation to CP-2.5** after strategy completes |
| `independent` | Create M complete pipelines immediately (one per target) |
---
### Single Mode Task Chain
Create task entries in dependency order (backward compatible, unchanged):
**PROFILE-001** (profiler, Stage 1):
```json
{
"id": "PROFILE-001",
"subject": "PROFILE-001",
"description": "PURPOSE: Profile application performance to identify bottlenecks | Success: Baseline metrics captured, top 3-5 bottlenecks ranked by severity\nTASK:\n - Detect project type and available profiling tools\n - Execute profiling across relevant dimensions (CPU, memory, I/O, network, rendering)\n - Collect baseline metrics and rank bottlenecks by severity\nCONTEXT:\n - Session: <session-folder>\n - Scope: <optimization-scope>\n - Branch: none\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: <session>/artifacts/baseline-metrics.json + <session>/artifacts/bottleneck-report.md | Quantified metrics with evidence\nCONSTRAINTS: Focus on <optimization-scope> | Profile before any changes\n---\nInnerLoop: false",
"status": "pending",
"owner": "profiler",
"blockedBy": []
}
```
**STRATEGY-001** (strategist, Stage 2):
```json
{
"id": "STRATEGY-001",
"subject": "STRATEGY-001",
"description": "PURPOSE: Design prioritized optimization plan from bottleneck analysis | Success: Actionable plan with measurable success criteria per optimization\nTASK:\n - Analyze bottleneck report and baseline metrics\n - Select optimization strategies per bottleneck type\n - Prioritize by impact/effort ratio, define success criteria\n - Each optimization MUST have a unique OPT-ID (OPT-001, OPT-002, ...) with non-overlapping target files\nCONTEXT:\n - Session: <session-folder>\n - Scope: <optimization-scope>\n - Branch: none\n - Upstream artifacts: baseline-metrics.json, bottleneck-report.md\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: <session>/artifacts/optimization-plan.md | Priority-ordered with improvement targets, discrete OPT-IDs\nCONSTRAINTS: Focus on highest-impact optimizations | Risk assessment required | Non-overlapping file targets per OPT-ID\n---\nInnerLoop: false",
"status": "pending",
"owner": "strategist",
"blockedBy": ["PROFILE-001"]
}
```
**IMPL-001** (optimizer, Stage 3):
```json
{
"id": "IMPL-001",
"subject": "IMPL-001",
"description": "PURPOSE: Implement optimization changes per strategy plan | Success: All planned optimizations applied, code compiles, existing tests pass\nTASK:\n - Load optimization plan and identify target files\n - Apply optimizations in priority order (P0 first)\n - Validate changes compile and pass existing tests\nCONTEXT:\n - Session: <session-folder>\n - Scope: <optimization-scope>\n - Branch: none\n - Upstream artifacts: optimization-plan.md\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: Modified source files + validation passing | Optimizations applied without regressions\nCONSTRAINTS: Preserve existing behavior | Minimal changes per optimization | Follow code conventions\n---\nInnerLoop: true",
"status": "pending",
"owner": "optimizer",
"blockedBy": ["STRATEGY-001"]
}
```
**BENCH-001** (benchmarker, Stage 4 - parallel):
```json
{
"id": "BENCH-001",
"subject": "BENCH-001",
"description": "PURPOSE: Benchmark optimization results against baseline | Success: All plan success criteria met, no regressions detected\nTASK:\n - Load baseline metrics and plan success criteria\n - Run benchmarks matching project type\n - Compare before/after metrics, calculate improvements\nCONTEXT:\n - Session: <session-folder>\n - Scope: <optimization-scope>\n - Branch: none\n - Upstream artifacts: baseline-metrics.json, optimization-plan.md\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: <session>/artifacts/benchmark-results.json | Per-metric comparison with verdicts\nCONSTRAINTS: Must compare against baseline | Flag any regressions\n---\nInnerLoop: false",
"status": "pending",
"owner": "benchmarker",
"blockedBy": ["IMPL-001"]
}
```
**REVIEW-001** (reviewer, Stage 4 - parallel):
```json
{
"id": "REVIEW-001",
"subject": "REVIEW-001",
"description": "PURPOSE: Review optimization code for correctness, side effects, and regression risks | Success: All dimensions reviewed, verdict issued\nTASK:\n - Load modified files and optimization plan\n - Review across 5 dimensions: correctness, side effects, maintainability, regression risk, best practices\n - Issue verdict: APPROVE, REVISE, or REJECT with actionable feedback\nCONTEXT:\n - Session: <session-folder>\n - Scope: <optimization-scope>\n - Branch: none\n - Upstream artifacts: optimization-plan.md, benchmark-results.json (if available)\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: <session>/artifacts/review-report.md | Per-dimension findings with severity\nCONSTRAINTS: Focus on optimization changes only | Provide specific file:line references\n---\nInnerLoop: false",
"status": "pending",
"owner": "reviewer",
"blockedBy": ["IMPL-001"]
}
```
---
### Auto / Fan-out Mode Task Chain (Deferred Branching)
For `auto` and `fan-out` modes, create only shared stages now. Branch tasks are created at **CP-2.5** after STRATEGY-001 completes.
Create PROFILE-001 and STRATEGY-001 entries with same templates as single mode above.
**Do NOT create IMPL/BENCH/REVIEW task entries yet.** They are created by the CP-2.5 Branch Creation subroutine in monitor.md.
---
### Independent Mode Task Chain
For `independent` mode, create M complete pipelines -- one per target in `independent_targets` array.
Pipeline prefix chars: `A, B, C, D, E, F, G, H, I, J` (from config `pipeline_prefix_chars`).
For each target index `i` (0-based), with prefix char `P = pipeline_prefix_chars[i]`:
```
// Create session subdirectory for this pipeline
Bash("mkdir -p <session>/artifacts/pipelines/<P>")
// Build task entries for this pipeline
Add entries to tasks array:
{ "id": "PROFILE-<P>01", ..., "blockedBy": [] }
{ "id": "STRATEGY-<P>01", ..., "blockedBy": ["PROFILE-<P>01"] }
{ "id": "IMPL-<P>01", ..., "blockedBy": ["STRATEGY-<P>01"] }
{ "id": "BENCH-<P>01", ..., "blockedBy": ["IMPL-<P>01"] }
{ "id": "REVIEW-<P>01", ..., "blockedBy": ["IMPL-<P>01"] }
Write all entries to <session>/tasks.json
```
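The loop above can be sketched in Python. This is a minimal illustration (function and field names are hypothetical, not part of the skill API) of how one 5-stage pipeline per target is generated with the documented ID and dependency scheme:

```python
PIPELINE_PREFIX_CHARS = "ABCDEFGHIJ"  # from config pipeline_prefix_chars
STAGE_OWNERS = {
    "PROFILE": "profiler", "STRATEGY": "strategist",
    "IMPL": "optimizer", "BENCH": "benchmarker", "REVIEW": "reviewer",
}

def build_independent_tasks(independent_targets):
    """One complete 5-stage pipeline of task entries per target."""
    tasks = []
    for i, target in enumerate(independent_targets):
        p = PIPELINE_PREFIX_CHARS[i]
        chain = [
            (f"PROFILE-{p}01", []),
            (f"STRATEGY-{p}01", [f"PROFILE-{p}01"]),
            (f"IMPL-{p}01", [f"STRATEGY-{p}01"]),
            (f"BENCH-{p}01", [f"IMPL-{p}01"]),
            (f"REVIEW-{p}01", [f"IMPL-{p}01"]),
        ]
        for task_id, blocked_by in chain:
            stage = task_id.split("-")[0]
            tasks.append({
                "id": task_id,
                "subject": task_id,
                "status": "pending",
                "owner": STAGE_OWNERS[stage],
                "blockedBy": blocked_by,
                "pipeline": p,
                "target": target,
            })
    return tasks
```

The resulting array would be merged into `<session>/tasks.json` together with the structured descriptions shown below.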
Task descriptions follow same template as single mode, with additions:
- `Pipeline: <P>` in CONTEXT
- Artifact paths use `<session>/artifacts/pipelines/<P>/` instead of `<session>/artifacts/`
- Shared-memory namespace uses `<role>.<P>` (e.g., `profiler.A`, `optimizer.B`)
- Each pipeline's scope is its specific target from `independent_targets[i]`
Example for pipeline A with target "optimize rendering":
```json
{
"id": "PROFILE-A01",
"subject": "PROFILE-A01",
"description": "PURPOSE: Profile rendering performance | Success: Rendering bottlenecks identified\nTASK:\n - Detect project type and available profiling tools\n - Execute profiling focused on rendering performance\n - Collect baseline metrics and rank rendering bottlenecks\nCONTEXT:\n - Session: <session-folder>\n - Scope: optimize rendering\n - Pipeline: A\n - Shared memory: <session>/.msg/meta.json (namespace: profiler.A)\nEXPECTED: <session>/artifacts/pipelines/A/baseline-metrics.json + bottleneck-report.md\nCONSTRAINTS: Focus on rendering scope\n---\nInnerLoop: false\nPipelineId: A",
"status": "pending",
"owner": "profiler",
"blockedBy": []
}
```
---
### CP-2.5: Branch Creation Subroutine
**Triggered by**: monitor.md handleCallback when STRATEGY-001 completes in `auto` or `fan-out` mode.
**Procedure**:
1. Read `<session>/artifacts/optimization-plan.md` to count OPT-IDs
2. Read `.msg/meta.json` -> `strategist.optimization_count`
3. **Auto mode decision**:
| Optimization Count | Decision |
|-------------------|----------|
| count <= 2 | Switch to `single` mode -- add IMPL-001, BENCH-001, REVIEW-001 entries to tasks.json (standard single pipeline) |
| count >= 3 | Switch to `fan-out` mode -- create branch task entries below |
4. Update session.json with resolved `parallel_mode` (auto -> single or fan-out)
5. **Fan-out branch creation** (when count >= 3 or forced fan-out):
- Truncate to `max_branches` if `optimization_count > max_branches` (keep top N by priority)
- For each optimization `i` (1-indexed), branch ID = `B{NN}` where NN = zero-padded i:
```
// Create branch artifact directory
Bash("mkdir -p <session>/artifacts/branches/B{NN}")
// Extract single OPT detail to branch
Write("<session>/artifacts/branches/B{NN}/optimization-detail.md",
extracted OPT-{NNN} block from optimization-plan.md)
```
6. Add branch task entries to `<session>/tasks.json` for each branch B{NN}:
```json
{
"id": "IMPL-B{NN}",
"subject": "IMPL-B{NN}",
"description": "PURPOSE: Implement optimization OPT-{NNN} | Success: Single optimization applied, compiles, tests pass\nTASK:\n - Load optimization detail from branches/B{NN}/optimization-detail.md\n - Apply this single optimization to target files\n - Validate changes compile and pass existing tests\nCONTEXT:\n - Session: <session-folder>\n - Branch: B{NN}\n - Upstream artifacts: branches/B{NN}/optimization-detail.md\n - Shared memory: <session>/.msg/meta.json (namespace: optimizer.B{NN})\nEXPECTED: Modified source files for OPT-{NNN} only\nCONSTRAINTS: Only implement this branch's optimization | Do not touch files outside OPT-{NNN} scope\n---\nInnerLoop: false\nBranchId: B{NN}",
"status": "pending",
"owner": "optimizer",
"blockedBy": ["STRATEGY-001"]
}
```
```json
{
"id": "BENCH-B{NN}",
"subject": "BENCH-B{NN}",
"description": "PURPOSE: Benchmark branch B{NN} optimization | Success: OPT-{NNN} metrics meet success criteria\nTASK:\n - Load baseline metrics and OPT-{NNN} success criteria\n - Benchmark only metrics relevant to this optimization\n - Compare against baseline, calculate improvement\nCONTEXT:\n - Session: <session-folder>\n - Branch: B{NN}\n - Upstream artifacts: baseline-metrics.json, branches/B{NN}/optimization-detail.md\n - Shared memory: <session>/.msg/meta.json (namespace: benchmarker.B{NN})\nEXPECTED: <session>/artifacts/branches/B{NN}/benchmark-results.json\nCONSTRAINTS: Only benchmark this branch's metrics\n---\nInnerLoop: false\nBranchId: B{NN}",
"status": "pending",
"owner": "benchmarker",
"blockedBy": ["IMPL-B{NN}"]
}
```
```json
{
"id": "REVIEW-B{NN}",
"subject": "REVIEW-B{NN}",
"description": "PURPOSE: Review branch B{NN} optimization code | Success: Code quality verified for OPT-{NNN}\nTASK:\n - Load modified files from optimizer.B{NN} shared-memory namespace\n - Review across 5 dimensions for this branch's changes only\n - Issue verdict: APPROVE, REVISE, or REJECT\nCONTEXT:\n - Session: <session-folder>\n - Branch: B{NN}\n - Upstream artifacts: branches/B{NN}/optimization-detail.md\n - Shared memory: <session>/.msg/meta.json (namespace: reviewer.B{NN})\nEXPECTED: <session>/artifacts/branches/B{NN}/review-report.md\nCONSTRAINTS: Only review this branch's changes\n---\nInnerLoop: false\nBranchId: B{NN}",
"status": "pending",
"owner": "reviewer",
"blockedBy": ["IMPL-B{NN}"]
}
```
7. Update session.json:
- `branches`: array of branch IDs (["B01", "B02", ...])
- `fix_cycles`: object keyed by branch ID, all initialized to 0
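The CP-2.5 decision and truncation steps above can be sketched as a small Python function (a hypothetical helper, not part of the skill):

```python
def resolve_branches(optimization_count, max_branches, parallel_mode):
    """CP-2.5: resolve `auto` mode and produce zero-padded branch IDs."""
    if parallel_mode == "auto":
        # count <= 2 -> single pipeline, count >= 3 -> fan-out
        parallel_mode = "single" if optimization_count <= 2 else "fan-out"
    if parallel_mode == "single":
        return "single", []
    # Truncate to max_branches, keeping the top N by priority
    n = min(optimization_count, max_branches)
    branches = [f"B{i:02d}" for i in range(1, n + 1)]
    return "fan-out", branches
```

The resolved mode is written back to session.json, and the branch list seeds both the `branches` array and the `fix_cycles` map.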
---
## Phase 4: Validation
Verify task chain integrity:
| Check | Method | Expected |
|-------|--------|----------|
| Task count correct | Read tasks.json, count entries | single: 5, auto/fan-out: 2 (pre-CP-2.5), independent: 5*M |
| Dependencies correct | Check each blockedBy entry references an existing task ID | All references resolve |
| No circular dependencies | Trace dependency graph | Acyclic |
| Task IDs use correct prefixes | Pattern check | Match naming rules per mode |
| Structured descriptions complete | Each has PURPOSE/TASK/CONTEXT/EXPECTED/CONSTRAINTS | All present |
| Branch/Pipeline IDs consistent | Cross-check with session.json | Match |
### Naming Rules Summary
| Mode | Stage 3 | Stage 4 | Fix |
|------|---------|---------|-----|
| Single | IMPL-001 | BENCH-001, REVIEW-001 | FIX-001, FIX-002 |
| Fan-out | IMPL-B01 | BENCH-B01, REVIEW-B01 | FIX-B01-1, FIX-B01-2 |
| Independent | IMPL-A01 | BENCH-A01, REVIEW-A01 | FIX-A01-1, FIX-A01-2 |
If validation fails, fix the specific task entry in tasks.json and re-validate.
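The dependency checks in the table above amount to a reference check plus a cycle detection pass. A minimal sketch (hypothetical helper, operating on the parsed tasks.json array):

```python
def validate_task_chain(tasks):
    """Verify blockedBy references resolve and the graph is acyclic."""
    ids = {t["id"] for t in tasks}
    for t in tasks:
        for dep in t["blockedBy"]:
            if dep not in ids:
                return f"{t['id']} blocked by unknown task {dep}"
    deps = {t["id"]: t["blockedBy"] for t in tasks}
    color = {tid: 0 for tid in ids}  # 0 = new, 1 = visiting, 2 = done

    def visit(tid):
        color[tid] = 1
        for dep in deps[tid]:
            if color[dep] == 1:  # back edge -> cycle
                return False
            if color[dep] == 0 and not visit(dep):
                return False
        color[tid] = 2
        return True

    for tid in ids:
        if color[tid] == 0 and not visit(tid):
            return "circular dependency detected"
    return "ok"
```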

# Command: Monitor
Handle all coordinator monitoring events: worker callbacks, status checks, pipeline advancement, and completion. Supports single, fan-out, and independent parallel modes with per-branch/pipeline tracking.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session state | <session>/session.json | Yes |
| Task list | Read `<session>/tasks.json` | Yes |
| Trigger event | From Entry Router detection | Yes |
| Pipeline definition | From SKILL.md | Yes |
1. Load session.json for current state, `parallel_mode`, `branches`, `fix_cycles`
2. Read `<session>/tasks.json` to get current task statuses
3. Identify trigger event type from Entry Router
## Phase 3: Event Handlers
### handleCallback
Triggered when a worker sends completion message.
1. Parse message to identify role, task ID, and **branch/pipeline label**:
| Message Pattern | Branch Detection |
|----------------|-----------------|
| `[optimizer-B01]` or task ID `IMPL-B01` | Branch `B01` (fan-out) |
| `[profiler-A]` or task ID `PROFILE-A01` | Pipeline `A` (independent) |
| `[profiler]` or task ID `PROFILE-001` | No branch (single) |
2. Mark task as completed:
Read `<session>/tasks.json`, find entry by id `<task-id>`, set `"status": "completed"`, write back.
3. Record completion in session state
4. **CP-2.5 check** (auto/fan-out mode only):
- If completed task is STRATEGY-001 AND `parallel_mode` is `auto` or `fan-out`:
- Execute **CP-2.5 Branch Creation** subroutine from dispatch.md
- After branch creation, proceed to handleSpawnNext (spawns all IMPL-B* in parallel)
- STOP after spawning
5. Check if checkpoint feedback is configured for this stage:
| Completed Task | Checkpoint | Action |
|---------------|------------|--------|
| PROFILE-001 / PROFILE-{P}01 | CP-1 | Notify user: bottleneck report ready for review |
| STRATEGY-001 / STRATEGY-{P}01 | CP-2 | Notify user: optimization plan ready for review |
| STRATEGY-001 (auto/fan-out) | CP-2.5 | Execute branch creation, then notify user with branch count |
| BENCH-* or REVIEW-* | CP-3 | Check verdicts per branch (see Review-Fix Cycle below) |
6. Proceed to handleSpawnNext
### handleSpawnNext
Find and spawn the next ready tasks.
1. Scan tasks.json for tasks where:
- Status is "pending"
- All blockedBy tasks have status "completed"
2. For each ready task, spawn team-worker:
```
spawn_agent({
agent_type: "team_worker",
items: [{
description: "Spawn <role> worker for <task-id>",
team_name: "perf-opt",
name: "<role>",
prompt: `## Role Assignment
role: <role>
role_spec: ~ or <project>/.codex/skills/team-perf-opt/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
team_name: perf-opt
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`
}]
})
```
3. **Parallel spawn rules by mode**:
| Mode | Scenario | Spawn Behavior |
|------|----------|---------------|
| Single | Stage 4 ready | Spawn BENCH-001 + REVIEW-001 in parallel |
| Fan-out (CP-2.5 done) | All IMPL-B* unblocked | Spawn ALL IMPL-B* in parallel |
| Fan-out (IMPL-B{NN} done) | BENCH-B{NN} + REVIEW-B{NN} ready | Spawn both for that branch in parallel |
| Independent | Any unblocked task | Spawn all ready tasks across all pipelines in parallel |
4. STOP after spawning -- use `wait_agent({ ids: [<spawned-agent-ids>] })` to wait for next callback
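The ready-task scan in step 1 can be sketched as follows (hypothetical helper over the parsed tasks.json array); the same scan naturally yields all parallel-ready tasks across branches and pipelines:

```python
def ready_tasks(tasks):
    """Tasks that are pending and whose blockers are all completed."""
    done = {t["id"] for t in tasks if t["status"] == "completed"}
    return [
        t["id"] for t in tasks
        if t["status"] == "pending"
        and all(dep in done for dep in t["blockedBy"])
    ]
```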
### Review-Fix Cycle (CP-3)
**Per-branch/pipeline scoping**: Each branch/pipeline has its own independent fix cycle.
#### Single Mode (unchanged)
When both BENCH-001 and REVIEW-001 are completed:
1. Read benchmark verdict from shared-memory (benchmarker namespace)
2. Read review verdict from shared-memory (reviewer namespace)
| Bench Verdict | Review Verdict | Action |
|--------------|----------------|--------|
| PASS | APPROVE | -> handleComplete |
| PASS | REVISE | Create FIX task entry with review feedback |
| FAIL | APPROVE | Create FIX task entry with benchmark feedback |
| FAIL | REVISE/REJECT | Create FIX task entry with combined feedback |
| Any | REJECT | Create FIX task entry + flag for strategist re-evaluation |
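One way to encode the verdict matrix, assuming the REJECT row takes precedence over the FAIL rows (a sketch, not the skill's canonical logic):

```python
def review_fix_action(bench_verdict, review_verdict):
    """Map a (benchmark, review) verdict pair to the coordinator action."""
    if review_verdict == "REJECT":
        return "fix+escalate-strategist"
    if bench_verdict == "PASS" and review_verdict == "APPROVE":
        return "complete"
    return "fix"  # any other combination creates a FIX task
```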
#### Fan-out Mode (per-branch)
When both BENCH-B{NN} and REVIEW-B{NN} are completed for a specific branch:
1. Read benchmark verdict from `benchmarker.B{NN}` namespace
2. Read review verdict from `reviewer.B{NN}` namespace
3. Apply same verdict matrix as single mode, but scoped to this branch only
4. **Other branches are unaffected** -- they continue independently
#### Independent Mode (per-pipeline)
When both BENCH-{P}01 and REVIEW-{P}01 are completed for a specific pipeline:
1. Read verdicts from `benchmarker.{P}` and `reviewer.{P}` namespaces
2. Apply same verdict matrix, scoped to this pipeline only
#### Fix Cycle Count Tracking
Fix cycles are tracked per branch/pipeline in `session.json`:
```json
// Single mode
{ "fix_cycles": { "main": 0 } }
// Fan-out mode
{ "fix_cycles": { "B01": 0, "B02": 1, "B03": 0 } }
// Independent mode
{ "fix_cycles": { "A": 0, "B": 2 } }
```
| Cycle Count | Action |
|-------------|--------|
| < 3 | Add FIX task entry to tasks.json, increment cycle count for this branch/pipeline |
| >= 3 | Escalate THIS branch/pipeline to user. Other branches continue |
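The per-key tracking can be sketched like this (hypothetical helper over the `fix_cycles` map from session.json); note that only the named branch/pipeline key is touched:

```python
MAX_FIX_CYCLES = 3

def record_fix_cycle(fix_cycles, key):
    """Increment one branch/pipeline's fix cycle, or escalate at the cap."""
    if fix_cycles.get(key, 0) >= MAX_FIX_CYCLES:
        return "escalate"  # this branch goes to the user; others continue
    fix_cycles[key] = fix_cycles.get(key, 0) + 1
    return "fix"
```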
#### FIX Task Creation (branched)
**Fan-out mode** -- add new entry to `<session>/tasks.json`:
```json
{
"id": "FIX-B{NN}-{cycle}",
"subject": "FIX-B{NN}-{cycle}",
"description": "PURPOSE: Fix issues in branch B{NN} from review/benchmark | Success: All flagged issues resolved\nTASK:\n - Address review findings: <specific-findings>\n - Fix benchmark regressions: <specific-regressions>\n - Re-validate after fixes\nCONTEXT:\n - Session: <session-folder>\n - Branch: B{NN}\n - Upstream artifacts: branches/B{NN}/review-report.md, branches/B{NN}/benchmark-results.json\n - Shared memory: <session>/.msg/meta.json (namespace: optimizer.B{NN})\nEXPECTED: Fixed source files for B{NN} only\nCONSTRAINTS: Targeted fixes only | Do not touch other branches\n---\nInnerLoop: false\nBranchId: B{NN}",
"status": "pending",
"owner": "optimizer",
"blockedBy": []
}
```
Create new BENCH and REVIEW entries with retry suffix:
- `BENCH-B{NN}-R{cycle}` blocked on `FIX-B{NN}-{cycle}`
- `REVIEW-B{NN}-R{cycle}` blocked on `FIX-B{NN}-{cycle}`
**Independent mode** -- add new entry to `<session>/tasks.json`:
```json
{
"id": "FIX-{P}01-{cycle}",
  "subject": "FIX-{P}01-{cycle}",
  "description": "...same PURPOSE/TASK/CONTEXT pattern as fan-out, with pipeline prefix {P}...",
"status": "pending",
"owner": "optimizer",
"blockedBy": []
}
```
Create `BENCH-{P}01-R{cycle}` and `REVIEW-{P}01-R{cycle}` entries.
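Generating the FIX entry plus its retry pair follows directly from the naming rules; a sketch for the fan-out case (hypothetical helper, descriptions elided):

```python
def make_fix_tasks(branch_id, cycle):
    """FIX entry plus retry BENCH/REVIEW entries for one fan-out branch."""
    fix_id = f"FIX-{branch_id}-{cycle}"
    return [
        {"id": fix_id, "owner": "optimizer",
         "status": "pending", "blockedBy": []},
        {"id": f"BENCH-{branch_id}-R{cycle}", "owner": "benchmarker",
         "status": "pending", "blockedBy": [fix_id]},
        {"id": f"REVIEW-{branch_id}-R{cycle}", "owner": "reviewer",
         "status": "pending", "blockedBy": [fix_id]},
    ]
```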
### handleCheck
Output current pipeline status grouped by branch/pipeline.
**Single mode** (unchanged):
```
Pipeline Status:
[DONE] PROFILE-001 (profiler) -> bottleneck-report.md
[DONE] STRATEGY-001 (strategist) -> optimization-plan.md
[RUN] IMPL-001 (optimizer) -> implementing...
[WAIT] BENCH-001 (benchmarker) -> blocked by IMPL-001
[WAIT] REVIEW-001 (reviewer) -> blocked by IMPL-001
Fix Cycles: 0/3
Session: <session-id>
```
**Fan-out mode**:
```
Pipeline Status (fan-out, 3 branches):
Shared Stages:
[DONE] PROFILE-001 (profiler) -> bottleneck-report.md
[DONE] STRATEGY-001 (strategist) -> optimization-plan.md (4 OPT-IDs)
Branch B01 (OPT-001: <title>):
[RUN] IMPL-B01 (optimizer) -> implementing...
[WAIT] BENCH-B01 (benchmarker) -> blocked by IMPL-B01
[WAIT] REVIEW-B01 (reviewer) -> blocked by IMPL-B01
Fix Cycles: 0/3
Branch B02 (OPT-002: <title>):
[DONE] IMPL-B02 (optimizer) -> done
[RUN] BENCH-B02 (benchmarker) -> benchmarking...
[RUN] REVIEW-B02 (reviewer) -> reviewing...
Fix Cycles: 0/3
Branch B03 (OPT-003: <title>):
[FAIL] IMPL-B03 (optimizer) -> failed
Fix Cycles: 0/3 [BRANCH FAILED]
Session: <session-id>
```
**Independent mode**:
```
Pipeline Status (independent, 2 pipelines):
Pipeline A (target: optimize rendering):
[DONE] PROFILE-A01 -> [DONE] STRATEGY-A01 -> [RUN] IMPL-A01 -> ...
Fix Cycles: 0/3
Pipeline B (target: optimize API):
[DONE] PROFILE-B01 -> [DONE] STRATEGY-B01 -> [DONE] IMPL-B01 -> ...
Fix Cycles: 1/3
Session: <session-id>
```
Output status -- do NOT advance pipeline.
### handleResume
Resume pipeline after user pause or interruption.
1. Audit tasks.json for inconsistencies:
- Tasks stuck in "in_progress" -> reset to "pending"
- Tasks with completed blockers but still "pending" -> include in spawn list
2. For fan-out/independent: check each branch/pipeline independently
3. Proceed to handleSpawnNext
### handleConsensus
Handle consensus_blocked signals from discuss rounds.
| Severity | Action |
|----------|--------|
| HIGH | Pause pipeline (or branch), notify user with findings summary |
| MEDIUM | Create revision task entry for the blocked role (scoped to branch if applicable) |
| LOW | Log finding, continue pipeline |
### handleComplete
Triggered when all pipeline tasks are completed and no fix cycles remain.
**Completion check varies by mode**:
| Mode | Completion Condition |
|------|---------------------|
| Single | All 5 tasks (+ any FIX/retry tasks) have status "completed" |
| Fan-out | ALL branches have BENCH + REVIEW completed with PASS/APPROVE (or escalated), shared stages done |
| Independent | ALL pipelines have BENCH + REVIEW completed with PASS/APPROVE (or escalated) |
**Aggregate results** before transitioning to Phase 5:
1. For fan-out mode: collect per-branch benchmark results into `<session>/artifacts/aggregate-results.json`:
```json
{
"branches": {
"B01": { "opt_id": "OPT-001", "bench_verdict": "PASS", "review_verdict": "APPROVE", "improvement": "..." },
"B02": { "opt_id": "OPT-002", "bench_verdict": "PASS", "review_verdict": "APPROVE", "improvement": "..." },
"B03": { "status": "failed", "reason": "IMPL failed" }
},
"overall": { "total_branches": 3, "passed": 2, "failed": 1 }
}
```
2. For independent mode: collect per-pipeline results similarly
3. If any tasks not completed, return to handleSpawnNext
4. If all completed (allowing for failed branches marked as such), transition to coordinator Phase 5
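The aggregation step can be sketched as a fold over per-branch results (hypothetical helper; field names follow the aggregate-results.json example above):

```python
def aggregate_branches(branch_results):
    """Summarize per-branch verdicts into the aggregate-results shape."""
    passed = sum(
        1 for r in branch_results.values()
        if r.get("bench_verdict") == "PASS"
        and r.get("review_verdict") == "APPROVE"
    )
    return {
        "branches": branch_results,
        "overall": {
            "total_branches": len(branch_results),
            "passed": passed,
            "failed": len(branch_results) - passed,
        },
    }
```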
### handleRevise
Triggered by user "revise <TASK-ID> [feedback]" command.
1. Parse target task ID and optional feedback
2. Detect branch/pipeline from task ID pattern
3. Add revision task entry to tasks.json with same role but updated requirements, scoped to branch
4. Skip blockedBy (no dependencies, immediate execution)
5. Cascade: create new downstream task entries within same branch only
6. Proceed to handleSpawnNext
### handleFeedback
Triggered by user "feedback <text>" command.
1. Analyze feedback text to determine impact scope
2. Identify which pipeline stage, role, and branch/pipeline should handle the feedback
3. Add targeted revision task entry to tasks.json (scoped to branch if applicable)
4. Proceed to handleSpawnNext
## Phase 4: State Persistence
After every handler execution:
1. Update session.json with current state (active tasks, fix cycle counts per branch, last event, resolved parallel_mode)
2. Verify tasks.json consistency
3. STOP and wait for next event

# Coordinator - Performance Optimization Team
**Role**: coordinator
**Type**: Orchestrator
**Team**: perf-opt
Orchestrates the performance optimization pipeline: manages task chains, spawns team-worker agents, handles review-fix cycles, and drives the pipeline to completion.
## Boundaries
### MUST
- Use `team-worker` agent type for all worker spawns (NOT `general-purpose`)
- Follow Command Execution Protocol for dispatch and monitor commands
- Respect pipeline stage dependencies (blockedBy)
- Stop after spawning workers -- wait for callbacks
- Handle review-fix cycles with max 3 iterations per branch
- Execute completion action in Phase 5
### MUST NOT
- Implement domain logic (profiling, optimizing, reviewing) -- workers handle this
- Spawn workers without creating tasks first
- Skip checkpoints when configured
- Force-advance pipeline past failed review/benchmark
- Modify source code directly -- delegate to optimizer worker
---
## Command Execution Protocol
When coordinator needs to execute a command (dispatch, monitor):
1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** -- NOT separate agents or subprocesses
4. **Execute synchronously** -- complete the command workflow before proceeding
---
## Entry Router
When coordinator is invoked, detect invocation type:
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains role tag [profiler], [strategist], [optimizer], [benchmarker], [reviewer] | -> handleCallback (monitor.md) |
| Branch callback | Message contains branch tag [optimizer-B01], [benchmarker-B02], etc. | -> handleCallback branch-aware (monitor.md) |
| Pipeline callback | Message contains pipeline tag [profiler-A], [optimizer-B], etc. | -> handleCallback pipeline-aware (monitor.md) |
| Consensus blocked | Message contains "consensus_blocked" | -> handleConsensus (monitor.md) |
| Status check | Arguments contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume (monitor.md) |
| Pipeline complete | All tasks have status "completed" | -> handleComplete (monitor.md) |
| Interrupted session | Active/paused session exists | -> Phase 0 |
| New session | None of above | -> Phase 1 |
For callback/check/resume/complete: load `@commands/monitor.md` and execute matched handler, then STOP.
### Router Implementation
1. **Load session context** (if exists):
- Scan `.workflow/.team/PERF-OPT-*/.msg/meta.json` for active/paused sessions
- If found, extract session folder path, status, and `parallel_mode`
2. **Parse $ARGUMENTS** for detection keywords
3. **Route to handler**:
- For monitor handlers: Read `commands/monitor.md`, execute matched handler, STOP
- For Phase 0: Execute Session Resume Check below
- For Phase 1: Execute Requirement Clarification below
---
## Phase 0: Session Resume Check
Triggered when an active/paused session is detected on coordinator entry.
1. Load session.json from detected session folder
2. Audit task list: read `<session>/tasks.json`
3. Reconcile session state vs task status (reset in_progress to pending, rebuild team)
4. Spawn workers for ready tasks -> Phase 4 coordination loop
---
## Phase 1: Requirement Clarification
1. Parse user task description from $ARGUMENTS
2. **Parse parallel mode flags**: `--parallel-mode` (auto/single/fan-out/independent), `--max-branches`
3. Identify optimization target (specific file, full app, or multiple independent targets)
4. If target is unclear, request_user_input for scope clarification
5. Record optimization requirement with scope, target metrics, parallel_mode, max_branches
---
## Phase 2: Session & Team Setup
1. Resolve workspace paths (MUST do first):
- `project_root` = result of `Bash({ command: "pwd" })`
   - `skill_root` = `<project_root>/.codex/skills/team-perf-opt`
2. Create session directory with artifacts/, explorations/, wisdom/, discussions/ subdirs
3. Write session.json with extended fields (parallel_mode, max_branches, branches, fix_cycles)
4. Initialize meta.json with pipeline metadata via team_msg
5. Initialize session folder structure (replaces TeamCreate)
---
## Phase 3: Create Task Chain
Execute `@commands/dispatch.md` inline (Command Execution Protocol).
---
## Phase 4: Spawn & Coordination Loop
### Initial Spawn
Find first unblocked task and spawn its worker using SKILL.md Worker Spawn Template with:
- `role_spec: <skill_root>/roles/<role>/role.md`
- `team_name: perf-opt`
**STOP** after spawning. Wait for worker callback.
### Coordination (via monitor.md handlers)
All subsequent coordination handled by `@commands/monitor.md`.
---
## Phase 5: Report + Completion Action
1. Load session state -> count completed tasks, calculate duration
2. List deliverables (baseline-metrics.json, bottleneck-report.md, optimization-plan.md, benchmark-results.json, review-report.md)
3. Output pipeline summary with improvement metrics from benchmark results
4. Execute completion action per SKILL.md Completion Action section
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Teammate unresponsive | Send follow-up message; after 2 unanswered follow-ups, respawn the worker |
| Profiling tool not available | Fallback to static analysis methods |
| Benchmark regression detected | Auto-create FIX task with regression details |
| Review-fix cycle exceeds 3 iterations | Escalate to user with summary of remaining issues |
| One branch IMPL fails | Mark that branch failed, other branches continue |
| max_branches exceeded | Truncate to top N optimizations by priority at CP-2.5 |

---
role: optimizer
prefix: IMPL
inner_loop: true
additional_prefixes: [FIX]
message_types:
success: impl_complete
error: error
fix: fix_required
---
# Code Optimizer
Implement optimization changes following the strategy plan. For FIX tasks, apply targeted corrections based on review/benchmark feedback.
## Modes
| Mode | Task Prefix | Trigger | Focus |
|------|-------------|---------|-------|
| Implement | IMPL | Strategy plan ready | Apply optimizations per plan priority |
| Fix | FIX | Review/bench feedback | Targeted fixes for identified issues |
## Phase 2: Plan & Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Optimization plan | <session>/artifacts/optimization-plan.md | Yes (IMPL, no branch) |
| Branch optimization detail | <session>/artifacts/branches/B{NN}/optimization-detail.md | Yes (IMPL with branch) |
| Pipeline optimization plan | <session>/artifacts/pipelines/{P}/optimization-plan.md | Yes (IMPL with pipeline) |
| Review/bench feedback | From task description | Yes (FIX) |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Wisdom files | <session>/wisdom/patterns.md | No |
| Context accumulator | From prior IMPL/FIX tasks | Yes (inner loop) |
1. Extract session path and task mode (IMPL or FIX) from task description
2. **Detect branch/pipeline context** from task description:
| Task Description Field | Value | Context |
|----------------------|-------|---------|
| `BranchId: B{NN}` | Present | Fan-out branch -- load single optimization detail |
| `PipelineId: {P}` | Present | Independent pipeline -- load pipeline-scoped plan |
| Neither present | - | Single mode -- load full optimization plan |
3. **Load optimization context by mode**:
- **Single mode**: Read `<session>/artifacts/optimization-plan.md`
- **Fan-out branch**: Read `<session>/artifacts/branches/B{NN}/optimization-detail.md`
- **Independent pipeline**: Read `<session>/artifacts/pipelines/{P}/optimization-plan.md`
4. For FIX: parse review/benchmark feedback for specific issues to address
5. Use ACE search or CLI tools to load implementation context for target files
6. For inner loop (single mode only): load context_accumulator from prior IMPL/FIX tasks
## Phase 3: Code Implementation
Implementation backend selection:
| Backend | Condition | Method |
|---------|-----------|--------|
| CLI | Multi-file optimization with clear plan | ccw cli --tool gemini --mode write |
| Direct | Single-file changes or targeted fixes | Inline Edit/Write tools |
For IMPL tasks:
- **Single mode**: Apply optimizations in plan priority order (P0 first, then P1, etc.)
- **Fan-out branch**: Apply ONLY this branch's single optimization
- **Independent pipeline**: Apply this pipeline's optimizations in priority order
- Follow implementation guidance from plan (target files, patterns)
- Preserve existing behavior -- optimization must not break functionality
For FIX tasks:
- Read specific issues from review/benchmark feedback
- Apply targeted corrections to flagged code locations
- Verify the fix addresses the exact concern raised
General rules:
- Make minimal, focused changes per optimization
- Add comments only where optimization logic is non-obvious
- Preserve existing code style and conventions
## Phase 4: Self-Validation
| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Syntax | IDE diagnostics or build check | No new errors |
| File integrity | Verify all planned files exist and are modified | All present |
| Acceptance | Match optimization plan success criteria | All target metrics addressed |
| No regression | Run existing tests if available | No new failures |
If validation fails, attempt auto-fix (max 2 attempts) before reporting error.
Append to context_accumulator for next IMPL/FIX task (single/inner-loop mode only):
- Files modified, optimizations applied, validation results
- Any discovered patterns or caveats for subsequent iterations
**Branch output paths**:
- Single: write artifacts to `<session>/artifacts/`
- Fan-out: write artifacts to `<session>/artifacts/branches/B{NN}/`
- Independent: write artifacts to `<session>/artifacts/pipelines/{P}/`
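The output-path rule above can be captured in one small resolver (a hypothetical helper, not part of the worker contract):

```python
def artifact_dir(session, branch_id=None, pipeline_id=None):
    """Resolve the artifact directory for single/fan-out/independent mode."""
    if branch_id:                 # fan-out: BranchId present
        return f"{session}/artifacts/branches/{branch_id}"
    if pipeline_id:               # independent: PipelineId present
        return f"{session}/artifacts/pipelines/{pipeline_id}"
    return f"{session}/artifacts"  # single mode
```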

---
role: profiler
prefix: PROFILE
inner_loop: false
message_types:
success: profile_complete
error: error
---
# Performance Profiler
Profile application performance to identify CPU, memory, I/O, network, and rendering bottlenecks. Produce quantified baseline metrics and a ranked bottleneck report.
## Phase 2: Context & Environment Detection
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/.msg/meta.json | No |
1. Extract session path and target scope from task description
2. Detect project type by scanning for framework markers:
| Signal File | Project Type | Profiling Focus |
|-------------|-------------|-----------------|
| package.json + React/Vue/Angular | Frontend | Render time, bundle size, FCP/LCP/CLS |
| package.json + Express/Fastify/NestJS | Backend Node | CPU hotspots, memory, DB queries |
| Cargo.toml / go.mod / pom.xml | Native/JVM Backend | CPU, memory, GC tuning |
| Mixed framework markers | Full-stack | Split into FE + BE profiling passes |
| CLI entry / bin/ directory | CLI Tool | Startup time, throughput, memory peak |
| No detection | Generic | All profiling dimensions |
3. Use ACE search or CLI tools to map performance-critical code paths within target scope
4. Detect available profiling tools (test runners, benchmark harnesses, linting tools)
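The marker-file scan in step 2 can be sketched as a heuristic like the one below. The dependency names checked and the returned labels are assumptions chosen to mirror the signal table; the real detector may use different markers.

```python
import json
from pathlib import Path

def detect_project_type(root: str) -> str:
    """Map framework marker files to a profiling focus (heuristic sketch)."""
    base = Path(root)
    pkg = base / "package.json"
    if pkg.exists():
        deps = {}
        try:
            data = json.loads(pkg.read_text())
            deps = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
        except (OSError, json.JSONDecodeError):
            pass
        frontend = {"react", "vue", "@angular/core"} & set(deps)
        backend = {"express", "fastify", "@nestjs/core"} & set(deps)
        if frontend and backend:
            return "fullstack"      # split into FE + BE profiling passes
        if frontend:
            return "frontend"       # render time, bundle size, FCP/LCP/CLS
        if backend:
            return "backend-node"   # CPU hotspots, memory, DB queries
    if any((base / m).exists() for m in ("Cargo.toml", "go.mod", "pom.xml")):
        return "native-or-jvm-backend"
    if (base / "bin").is_dir():
        return "cli"                # startup time, throughput, memory peak
    return "generic"                # all profiling dimensions
```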
## Phase 3: Performance Profiling
Execute profiling based on detected project type:
**Frontend profiling**:
- Analyze bundle size and dependency weight via build output
- Identify render-blocking resources and heavy components
- Check for unnecessary re-renders, large DOM trees, unoptimized assets
**Backend profiling**:
- Trace hot code paths via execution analysis or instrumented runs
- Identify slow database queries, N+1 patterns, missing indexes
- Check memory allocation patterns and potential leaks
**CLI / Library profiling**:
- Measure startup time and critical path latency
- Profile throughput under representative workloads
- Identify memory peaks and allocation churn
**All project types**:
- Collect quantified baseline metrics (timing, memory, throughput)
- Rank top 3-5 bottlenecks by severity (Critical / High / Medium)
- Record evidence: file paths, line numbers, measured values
## Phase 4: Report Generation
1. Write baseline metrics to `<session>/artifacts/baseline-metrics.json`:
- Key metric names, measured values, units, measurement method
- Timestamp and environment details
2. Write bottleneck report to `<session>/artifacts/bottleneck-report.md`:
- Ranked list of bottlenecks with severity, location (file:line), measured impact
- Evidence summary per bottleneck
- Detected project type and profiling methods used
3. Update `<session>/.msg/meta.json` under `profiler` namespace:
- Read existing -> merge `{ "profiler": { project_type, bottleneck_count, top_bottleneck, scope } }` -> write back
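The read-merge-write update in step 3 can be sketched as below. The helper name is hypothetical; the key property is that only the caller's namespace is touched, so entries written by other roles (strategist, optimizer, ...) survive the merge.

```python
import json
from pathlib import Path

def update_meta_namespace(meta_path: str, namespace: str, payload: dict) -> dict:
    """Merge payload under one namespace of <session>/.msg/meta.json."""
    path = Path(meta_path)
    meta = {}
    if path.exists():
        try:
            meta = json.loads(path.read_text())
        except json.JSONDecodeError:
            meta = {}  # treat a corrupt file as empty rather than crash
    meta.setdefault(namespace, {}).update(payload)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(meta, indent=2))
    return meta
```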


@@ -0,0 +1,75 @@
---
role: reviewer
prefix: REVIEW
inner_loop: false
additional_prefixes: [QUALITY]
discuss_rounds: [DISCUSS-REVIEW]
message_types:
success: review_complete
error: error
fix: fix_required
---
# Optimization Reviewer
Review optimization code changes for correctness, side effects, regression risks, and adherence to best practices. Provide structured verdicts with actionable feedback.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Optimization code changes | From IMPL task artifacts / git diff | Yes |
| Optimization plan / detail | Varies by mode (see below) | Yes |
| Benchmark results | Varies by mode (see below) | No |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
1. Extract session path from task description
2. **Detect branch/pipeline context** from task description:
| Task Description Field | Value | Context |
|----------------------|-------|---------|
| `BranchId: B{NN}` | Present | Fan-out branch -- review only this branch's changes |
| `PipelineId: {P}` | Present | Independent pipeline -- review pipeline-scoped changes |
| Neither present | - | Single mode -- review all optimization changes |
3. **Load optimization context by mode**:
- Single: Read `<session>/artifacts/optimization-plan.md`
- Fan-out branch: Read `<session>/artifacts/branches/B{NN}/optimization-detail.md`
- Independent: Read `<session>/artifacts/pipelines/{P}/optimization-plan.md`
4. Load .msg/meta.json for scoped optimizer namespace
5. Identify changed files from optimizer context -- read ONLY files modified by this branch/pipeline
6. If benchmark results available, read from scoped path
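The branch/pipeline detection in step 2 reduces to scanning the task description for the `BranchId:` / `PipelineId:` markers and picking the scoped artifact directory. A minimal sketch (function name assumed):

```python
import re

def resolve_artifact_dir(session: str, task_description: str) -> str:
    """Derive the scoped artifact directory from BranchId/PipelineId markers."""
    branch = re.search(r"BranchId:\s*(B\d+)", task_description)
    if branch:  # fan-out branch: review only this branch's changes
        return f"{session}/artifacts/branches/{branch.group(1)}"
    pipeline = re.search(r"PipelineId:\s*([A-Z])\b", task_description)
    if pipeline:  # independent pipeline: pipeline-scoped changes
        return f"{session}/artifacts/pipelines/{pipeline.group(1)}"
    return f"{session}/artifacts"  # single mode: all optimization changes
```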
## Phase 3: Multi-Dimension Review
Analyze optimization changes across five dimensions:
| Dimension | Focus | Severity |
|-----------|-------|----------|
| Correctness | Logic errors, off-by-one, race conditions, null safety | Critical |
| Side effects | Unintended behavior changes, API contract breaks, data loss | Critical |
| Maintainability | Code clarity, complexity increase, naming, documentation | High |
| Regression risk | Impact on unrelated code paths, implicit dependencies | High |
| Best practices | Idiomatic patterns, framework conventions, optimization anti-patterns | Medium |
Per-dimension review process:
- Scan modified files for patterns matching each dimension
- Record findings with severity (Critical / High / Medium / Low)
- Include specific file:line references and suggested fixes
If any Critical findings detected, use CLI tools for multi-perspective validation (DISCUSS-REVIEW round) to validate the assessment before issuing verdict.
## Phase 4: Verdict & Feedback
Classify overall verdict based on findings:
| Verdict | Condition | Action |
|---------|-----------|--------|
| APPROVE | No Critical or High findings | Send review_complete |
| REVISE | Has High findings, no Critical | Send fix_required with detailed feedback |
| REJECT | Has Critical findings or fundamental approach flaw | Send fix_required + flag for strategist escalation |
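The verdict table maps directly onto severity sets, as in this sketch (function name assumed):

```python
def classify_verdict(finding_severities) -> str:
    """Map review finding severities to APPROVE / REVISE / REJECT."""
    sevs = set(finding_severities)
    if "Critical" in sevs:
        return "REJECT"   # fix_required + flag for strategist escalation
    if "High" in sevs:
        return "REVISE"   # fix_required with detailed feedback
    return "APPROVE"      # no Critical or High findings
```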
1. Write review report to scoped output path (single/fan-out/independent)
2. Update `<session>/.msg/meta.json` under scoped namespace
3. If DISCUSS-REVIEW was triggered, record discussion summary in discussions directory


@@ -0,0 +1,94 @@
---
role: strategist
prefix: STRATEGY
inner_loop: false
discuss_rounds: [DISCUSS-OPT]
message_types:
success: strategy_complete
error: error
---
# Optimization Strategist
Analyze bottleneck reports and baseline metrics to design a prioritized optimization plan with concrete strategies, expected improvements, and risk assessments.
## Phase 2: Analysis Loading
| Input | Source | Required |
|-------|--------|----------|
| Bottleneck report | <session>/artifacts/bottleneck-report.md | Yes |
| Baseline metrics | <session>/artifacts/baseline-metrics.json | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Wisdom files | <session>/wisdom/patterns.md | No |
1. Extract session path from task description
2. Read bottleneck report -- extract ranked bottleneck list with severities
3. Read baseline metrics -- extract current performance numbers
4. Load .msg/meta.json for profiler findings (project_type, scope)
5. Assess overall optimization complexity:
| Bottleneck Count | Severity Mix | Complexity |
|-----------------|-------------|------------|
| 1-2 | All Medium | Low |
| 2-3 | Mix of High/Medium | Medium |
| 3+ or any Critical | Any Critical present | High |
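One reading of the complexity table, treating three or more bottlenecks (or any Critical) as High, can be sketched as:

```python
def assess_complexity(severities) -> str:
    """Assess optimization complexity from the bottleneck severity list."""
    sevs = list(severities)
    if "Critical" in sevs or len(sevs) >= 3:
        return "High"    # triggers the DISCUSS-OPT round
    if "High" in sevs:
        return "Medium"
    return "Low"
```

The boundary cases (exactly 2-3 bottlenecks) are ambiguous in the table; this sketch resolves them conservatively toward the higher tier.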
## Phase 3: Strategy Formulation
For each bottleneck, select optimization approach by type:
| Bottleneck Type | Strategies | Risk Level |
|----------------|-----------|------------|
| CPU hotspot | Algorithm optimization, memoization, caching, worker threads | Medium |
| Memory leak/bloat | Pool reuse, lazy initialization, WeakRef, scope cleanup | High |
| I/O bound | Batching, async pipelines, streaming, connection pooling | Medium |
| Network latency | Request coalescing, compression, CDN, prefetching | Low |
| Rendering | Virtualization, memoization, CSS containment, code splitting | Medium |
| Database | Index optimization, query rewriting, caching layer, denormalization | High |
Prioritize optimizations by impact/effort ratio:
| Priority | Criteria |
|----------|----------|
| P0 (Critical) | High impact + Low effort -- quick wins |
| P1 (High) | High impact + Medium effort |
| P2 (Medium) | Medium impact + Low effort |
| P3 (Low) | Low impact or High effort -- defer |
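The impact/effort matrix above is a direct lookup; a sketch with assumed High/Medium/Low inputs:

```python
def prioritize(impact: str, effort: str) -> str:
    """Map an impact/effort pair to a P0-P3 priority per the table."""
    if impact == "High" and effort == "Low":
        return "P0"  # quick win
    if impact == "High" and effort == "Medium":
        return "P1"
    if impact == "Medium" and effort == "Low":
        return "P2"
    return "P3"      # low impact or high effort: defer
```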
If complexity is High, use CLI tools for multi-perspective analysis (DISCUSS-OPT round) to evaluate trade-offs between competing strategies before finalizing the plan.
Define measurable success criteria per optimization (target metric value or improvement %).
## Phase 4: Plan Output
1. Write optimization plan to `<session>/artifacts/optimization-plan.md`:
Each optimization MUST have a unique OPT-ID and self-contained detail block:
```markdown
### OPT-001: <title>
- Priority: P0
- Target bottleneck: <bottleneck from report>
- Target files: <file-list>
- Strategy: <selected approach>
- Expected improvement: <metric> by <X%>
- Risk level: <Low/Medium/High>
- Success criteria: <specific threshold to verify>
- Implementation guidance:
1. <step 1>
2. <step 2>
3. <step 3>
### OPT-002: <title>
...
```
Requirements:
- Each OPT-ID is sequentially numbered (OPT-001, OPT-002, ...)
- Each optimization must be **non-overlapping** in target files
- Implementation guidance must be self-contained
2. Update `<session>/.msg/meta.json` under `strategist` namespace:
- Read existing -> merge -> write back with optimization metadata
3. If DISCUSS-OPT was triggered, record discussion summary in `<session>/discussions/DISCUSS-OPT.md`


@@ -1,174 +0,0 @@
# Team Performance Optimization -- CSV Schema
## Master CSV: tasks.csv
### Column Definitions
#### Input Columns (Set by Decomposer)
| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier (PREFIX-NNN) | `"PROFILE-001"` |
| `title` | string | Yes | Short task title | `"Profile performance"` |
| `description` | string | Yes | Detailed task description (self-contained) with goal, inputs, outputs, success criteria | `"Profile application performance..."` |
| `role` | enum | Yes | Worker role: `profiler`, `strategist`, `optimizer`, `benchmarker`, `reviewer` | `"profiler"` |
| `bottleneck_type` | string | No | Performance bottleneck category: CPU, MEMORY, IO, NETWORK, RENDERING, DATABASE | `"CPU"` |
| `priority` | enum | No | P0 (Critical), P1 (High), P2 (Medium), P3 (Low) | `"P0"` |
| `target_files` | string | No | Semicolon-separated file paths to focus on | `"src/services/DataProcessor.ts"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"PROFILE-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"PROFILE-001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |
#### Computed Columns (Set by Wave Engine)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[PROFILE-001] Found 3 CPU hotspots..."` |
#### Output Columns (Set by Agent)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` -> `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Found 3 CPU hotspots, 1 memory leak..."` |
| `verdict` | string | Benchmark/review verdict: PASS, WARN, FAIL, APPROVE, REVISE, REJECT | `"PASS"` |
| `artifacts_produced` | string | Semicolon-separated paths of produced artifacts | `"artifacts/bottleneck-report.md"` |
| `error` | string | Error message if failed | `""` |
---
### exec_mode Values
| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |
Interactive tasks appear in master CSV for dependency tracking but are NOT included in wave-{N}.csv files.
---
### Role Prefix Mapping
| Role | Prefix | Stage | Responsibility |
|------|--------|-------|----------------|
| profiler | PROFILE | 1 | Performance profiling, baseline metrics, bottleneck identification |
| strategist | STRATEGY | 2 | Optimization plan design, strategy selection, prioritization |
| optimizer | IMPL / FIX | 3 | Code implementation, optimization application, targeted fixes |
| benchmarker | BENCH | 4 | Benchmark execution, before/after comparison, regression detection |
| reviewer | REVIEW | 4 | Code review for correctness, side effects, regression risks |
---
### Example Data
```csv
id,title,description,role,bottleneck_type,priority,target_files,deps,context_from,exec_mode,wave,status,findings,verdict,artifacts_produced,error
"PROFILE-001","Profile performance","PURPOSE: Profile application performance to identify bottlenecks\nTASK:\n- Detect project type (frontend/backend/CLI)\n- Trace hot code paths and CPU hotspots\n- Identify memory allocation patterns and leaks\n- Measure I/O and network latency\n- Collect quantified baseline metrics\nINPUT: Codebase under target scope\nOUTPUT: artifacts/baseline-metrics.json + artifacts/bottleneck-report.md\nSUCCESS: Ranked bottleneck list with severity, baseline metrics collected\nSESSION: .workflow/.csv-wave/perf-example-20260308","profiler","","","","","","csv-wave","1","pending","","","",""
"STRATEGY-001","Design optimization plan","PURPOSE: Design prioritized optimization plan from bottleneck report\nTASK:\n- For each bottleneck, select optimization strategy\n- Prioritize by impact/effort ratio (P0-P3)\n- Define measurable success criteria per optimization\n- Assign unique OPT-IDs with non-overlapping file targets\nINPUT: artifacts/bottleneck-report.md + artifacts/baseline-metrics.json\nOUTPUT: artifacts/optimization-plan.md\nSUCCESS: Prioritized plan with self-contained OPT blocks\nSESSION: .workflow/.csv-wave/perf-example-20260308","strategist","","","","PROFILE-001","PROFILE-001","csv-wave","2","pending","","","",""
"IMPL-001","Implement optimizations","PURPOSE: Implement performance optimizations per plan\nTASK:\n- Apply optimizations in priority order (P0 first)\n- Preserve existing behavior\n- Make minimal, focused changes\nINPUT: artifacts/optimization-plan.md\nOUTPUT: Modified source files\nSUCCESS: All planned optimizations applied, no functionality regressions\nSESSION: .workflow/.csv-wave/perf-example-20260308","optimizer","","","","STRATEGY-001","STRATEGY-001","csv-wave","3","pending","","","",""
"BENCH-001","Benchmark improvements","PURPOSE: Benchmark before/after optimization metrics\nTASK:\n- Run benchmarks matching detected project type\n- Compare post-optimization metrics vs baseline\n- Calculate improvement percentages\n- Detect any regressions\nINPUT: artifacts/baseline-metrics.json + artifacts/optimization-plan.md\nOUTPUT: artifacts/benchmark-results.json\nSUCCESS: All target improvements met, no regressions\nSESSION: .workflow/.csv-wave/perf-example-20260308","benchmarker","","","","IMPL-001","IMPL-001","csv-wave","4","pending","","","",""
"REVIEW-001","Review optimization code","PURPOSE: Review optimization changes for correctness and quality\nTASK:\n- Correctness: logic errors, race conditions, null safety\n- Side effects: unintended behavior changes, API breaks\n- Maintainability: code clarity, complexity, naming\n- Regression risk: impact on unrelated code paths\n- Best practices: idiomatic patterns, no anti-patterns\nINPUT: artifacts/optimization-plan.md + changed files\nOUTPUT: artifacts/review-report.md\nSUCCESS: APPROVE verdict (no Critical/High findings)\nSESSION: .workflow/.csv-wave/perf-example-20260308","reviewer","","","","IMPL-001","IMPL-001","csv-wave","4","pending","","","",""
```
---
### Column Lifecycle
```
Decomposer (Phase 1)       Wave Engine (Phase 2)      Agent (Execution)
--------------------       ---------------------      -----------------
id --------------------->  id --------------------->  id
title ------------------>  title ------------------>  (reads)
description ------------>  description ------------>  (reads)
role ------------------->  role ------------------->  (reads)
bottleneck_type -------->  bottleneck_type -------->  (reads)
priority --------------->  priority --------------->  (reads)
target_files ----------->  target_files ----------->  (reads)
deps ------------------->  deps ------------------->  (reads)
context_from ----------->  context_from ----------->  (reads)
exec_mode -------------->  exec_mode -------------->  (reads)
                           wave ------------------->  (reads)
                           prev_context ----------->  (reads)
                                                      status
                                                      findings
                                                      verdict
                                                      artifacts_produced
                                                      error
```
---
## Output Schema (JSON)
Agent output via `report_agent_job_result` (csv-wave tasks):
```json
{
"id": "PROFILE-001",
"status": "completed",
"findings": "Found 3 CPU hotspots: O(n^2) in DataProcessor.processRecords (Critical), unoptimized regex in Validator.check (High), synchronous file reads in ConfigLoader (Medium). Memory baseline: 145MB peak, 2 potential leak sites.",
"verdict": "",
"artifacts_produced": "artifacts/baseline-metrics.json;artifacts/bottleneck-report.md",
"error": ""
}
```
Interactive tasks write their output as structured text or JSON to `interactive/{id}-result.json`.
---
## Discovery Types
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `bottleneck_found` | `data.location` | `{type, location, severity, description}` | Performance bottleneck identified |
| `hotspot_found` | `data.file+data.function` | `{file, function, cpu_pct, description}` | CPU hotspot detected |
| `memory_issue` | `data.file+data.type` | `{file, type, size_mb, description}` | Memory leak or bloat |
| `io_issue` | `data.operation` | `{operation, latency_ms, description}` | I/O performance issue |
| `db_issue` | `data.query` | `{query, latency_ms, description}` | Database performance issue |
| `file_modified` | `data.file` | `{file, change, lines_added}` | File change recorded |
| `metric_measured` | `data.metric+data.context` | `{metric, value, unit, context}` | Performance metric measured |
| `pattern_found` | `data.pattern_name+data.location` | `{pattern_name, location, description}` | Code pattern identified |
| `artifact_produced` | `data.path` | `{name, path, producer, type}` | Deliverable created |
### Discovery NDJSON Format
```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"PROFILE-001","type":"bottleneck_found","data":{"type":"CPU","location":"src/services/DataProcessor.ts:145","severity":"Critical","description":"O(n^2) nested loop in processRecords, 850ms for 10k records"}}
{"ts":"2026-03-08T10:01:00Z","worker":"PROFILE-001","type":"hotspot_found","data":{"file":"src/services/DataProcessor.ts","function":"processRecords","cpu_pct":42,"description":"Accounts for 42% of CPU time in profiling run"}}
{"ts":"2026-03-08T10:02:00Z","worker":"PROFILE-001","type":"metric_measured","data":{"metric":"response_time_p95","value":1250,"unit":"ms","context":"GET /api/dashboard"}}
{"ts":"2026-03-08T10:15:00Z","worker":"IMPL-001","type":"file_modified","data":{"file":"src/services/DataProcessor.ts","change":"Replaced O(n^2) with Map lookup O(n)","lines_added":12}}
{"ts":"2026-03-08T10:25:00Z","worker":"BENCH-001","type":"metric_measured","data":{"metric":"response_time_p95","value":380,"unit":"ms","context":"GET /api/dashboard (after optimization)"}}
```
> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
---
## Cross-Mechanism Context Flow
| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |
---
## Validation Rules
| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Role valid | role in {profiler, strategist, optimizer, benchmarker, reviewer} | "Invalid role: {role}" |
| Verdict enum | verdict in {PASS, WARN, FAIL, APPROVE, REVISE, REJECT, ""} | "Invalid verdict: {verdict}" |
| Cross-mechanism deps | Interactive to CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |
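The ID/dependency rules above (unique IDs, known deps, no self- or circular deps) can be checked with a small validator. This is a sketch using Kahn's algorithm for cycle detection, with an assumed `{"id", "deps"}` task shape:

```python
def validate_deps(tasks):
    """Return a list of error strings per the rules table (empty means valid)."""
    errors = []
    ids = [t["id"] for t in tasks]
    seen = set()
    for i in ids:
        if i in seen:
            errors.append(f"Duplicate task ID: {i}")
        seen.add(i)
    for t in tasks:
        for d in t.get("deps", []):
            if d == t["id"]:
                errors.append(f"Self-dependency: {d}")
            elif d not in seen:
                errors.append(f"Unknown dependency: {d}")
    if errors:
        return errors
    # Kahn's algorithm: tasks that never reach in-degree 0 sit on a cycle.
    indeg = {i: 0 for i in ids}
    dependents = {i: [] for i in ids}
    for t in tasks:
        for d in t.get("deps", []):
            indeg[t["id"]] += 1
            dependents[d].append(t["id"])
    queue = [i for i in ids if indeg[i] == 0]
    done = 0
    while queue:
        n = queue.pop()
        done += 1
        for m in dependents[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    if done != len(ids):
        stuck = sorted(i for i in ids if indeg[i] > 0)
        errors.append("Circular dependency detected involving: " + ", ".join(stuck))
    return errors
```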


@@ -0,0 +1,65 @@
# Pipeline Definitions — Team Performance Optimization
## Pipeline Modes
### Single Mode (Linear with Review-Fix Cycle)
```
Stage 1         Stage 2          Stage 3         Stage 4
PROFILE-001 --> STRATEGY-001 --> IMPL-001 ---+-> BENCH-001   [benchmarker]
[profiler]      [strategist]     [optimizer] |
                                     ^       +-> REVIEW-001  [reviewer]
                                     |                |
                                     +-- FIX-001 <----+  (on FAIL / REVISE)
                                      (max 3 iterations)
```
### Fan-out Mode (Shared stages 1-2, parallel branches 3-4)
```
Stage 1         Stage 2           CP-2.5   Stage 3+4 (per branch)
PROFILE-001 --> STRATEGY-001 ------+-----> IMPL-B01 --> BENCH-B01 + REVIEW-B01 (fix cycle)
                                   +-----> IMPL-B02 --> BENCH-B02 + REVIEW-B02 (fix cycle)
                                   +-----> IMPL-B0N --> BENCH-B0N + REVIEW-B0N (fix cycle)
                                                            |
                                                       AGGREGATE --> Phase 5
```
### Independent Mode (M fully independent pipelines)
```
Pipeline A: PROFILE-A01 --> STRATEGY-A01 --> IMPL-A01 --> BENCH-A01 + REVIEW-A01
Pipeline B: PROFILE-B01 --> STRATEGY-B01 --> IMPL-B01 --> BENCH-B01 + REVIEW-B01
                                                              |
                                                         AGGREGATE --> Phase 5
```
## Task Metadata Registry (Single Mode)
| Task ID | Role | Phase | Dependencies | Description |
|---------|------|-------|-------------|-------------|
| PROFILE-001 | profiler | Stage 1 | (none) | Profile application, identify bottlenecks |
| STRATEGY-001 | strategist | Stage 2 | PROFILE-001 | Design optimization plan from bottleneck report |
| IMPL-001 | optimizer | Stage 3 | STRATEGY-001 | Implement highest-priority optimizations |
| BENCH-001 | benchmarker | Stage 4 | IMPL-001 | Run benchmarks, compare vs baseline |
| REVIEW-001 | reviewer | Stage 4 | IMPL-001 | Review optimization code for correctness |
| FIX-001 | optimizer | Stage 3 (cycle) | REVIEW-001 or BENCH-001 | Fix issues found in review/benchmark |
## Checkpoints
| Checkpoint | Trigger | Behavior |
|------------|---------|----------|
| CP-1 | PROFILE-001 complete | User reviews bottleneck report, can refine scope |
| CP-2 | STRATEGY-001 complete | User reviews optimization plan, can adjust priorities |
| CP-2.5 | STRATEGY-001 complete (auto/fan-out) | Auto-create N branch tasks, spawn all IMPL-B* in parallel |
| CP-3 | REVIEW/BENCH fail | Auto-create FIX task for that branch only (max 3x per branch) |
| CP-4 | All tasks/branches complete | Aggregate results, interactive completion action |
## Task Naming Rules
| Mode | Stage 3 | Stage 4 | Fix | Retry |
|------|---------|---------|-----|-------|
| Single | IMPL-001 | BENCH-001, REVIEW-001 | FIX-001 | BENCH-001-R1, REVIEW-001-R1 |
| Fan-out | IMPL-B01 | BENCH-B01, REVIEW-B01 | FIX-B01-1 | BENCH-B01-R1, REVIEW-B01-R1 |
| Independent | IMPL-A01 | BENCH-A01, REVIEW-A01 | FIX-A01-1 | BENCH-A01-R1, REVIEW-A01-R1 |
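The naming rules above can be encoded as a small ID builder. A sketch (function name and parameters assumed), where `unit` is a `B01`-style branch ID or `A01`-style pipeline ID:

```python
def task_id(stage_prefix: str, mode: str, unit: str = "",
            fix_round: int = 0, retry: int = 0) -> str:
    """Build task IDs per the naming table (single / fan-out / independent)."""
    base = "001" if mode == "single" else unit
    if stage_prefix == "FIX":
        # FIX-001 in single mode; FIX-B01-1 / FIX-A01-1 with an iteration number
        return f"FIX-{base}" if mode == "single" else f"FIX-{base}-{fix_round}"
    name = f"{stage_prefix}-{base}"
    if retry > 0:
        name += f"-R{retry}"  # retry suffix, e.g. BENCH-B01-R1
    return name
```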


@@ -0,0 +1,246 @@
{
"version": "5.0.0",
"team_name": "perf-opt",
"team_display_name": "Performance Optimization",
"skill_name": "team-perf-opt",
"skill_path": "~ or <project>/.claude/skills/team-perf-opt/",
"worker_agent": "team-worker",
"pipeline_type": "Linear with Review-Fix Cycle (Parallel-Capable)",
"completion_action": "interactive",
"has_inline_discuss": true,
"has_shared_explore": true,
"has_checkpoint_feedback": true,
"has_session_resume": true,
"roles": [
{
"name": "coordinator",
"type": "orchestrator",
"description": "Orchestrates performance optimization pipeline, manages task chains, handles review-fix cycles",
"spec_path": "roles/coordinator/role.md",
"tools": ["Task", "TaskCreate", "TaskList", "TaskGet", "TaskUpdate", "TeamCreate", "TeamDelete", "SendMessage", "AskUserQuestion", "Read", "Write", "Bash", "Glob", "Grep"]
},
{
"name": "profiler",
"type": "orchestration",
"description": "Profiles application performance, identifies CPU/memory/IO/network/rendering bottlenecks",
"role_spec": "roles/profiler/role.md",
"inner_loop": false,
"frontmatter": {
"prefix": "PROFILE",
"inner_loop": false,
"additional_prefixes": [],
"discuss_rounds": [],
"delegates_to": [],
"message_types": {
"success": "profile_complete",
"error": "error"
}
},
"weight": 1,
"tools": ["Read", "Bash", "Glob", "Grep", "Task", "mcp__ace-tool__search_context"]
},
{
"name": "strategist",
"type": "orchestration",
"description": "Analyzes bottleneck reports, designs prioritized optimization plans with concrete strategies",
"role_spec": "roles/strategist/role.md",
"inner_loop": false,
"frontmatter": {
"prefix": "STRATEGY",
"inner_loop": false,
"additional_prefixes": [],
"discuss_rounds": ["DISCUSS-OPT"],
"delegates_to": [],
"message_types": {
"success": "strategy_complete",
"error": "error"
}
},
"weight": 2,
"tools": ["Read", "Bash", "Glob", "Grep", "Task", "mcp__ace-tool__search_context"]
},
{
"name": "optimizer",
"type": "code_generation",
"description": "Implements optimization changes following the strategy plan",
"role_spec": "roles/optimizer/role.md",
"inner_loop": true,
"frontmatter": {
"prefix": "IMPL",
"inner_loop": true,
"additional_prefixes": ["FIX"],
"discuss_rounds": [],
"delegates_to": [],
"message_types": {
"success": "impl_complete",
"error": "error",
"fix": "fix_required"
}
},
"weight": 3,
"tools": ["Read", "Write", "Edit", "Bash", "Glob", "Grep", "Task", "mcp__ace-tool__search_context"]
},
{
"name": "benchmarker",
"type": "validation",
"description": "Runs benchmarks, compares before/after metrics, validates performance improvements",
"role_spec": "roles/benchmarker/role.md",
"inner_loop": false,
"frontmatter": {
"prefix": "BENCH",
"inner_loop": false,
"additional_prefixes": [],
"discuss_rounds": [],
"delegates_to": [],
"message_types": {
"success": "bench_complete",
"error": "error",
"fix": "fix_required"
}
},
"weight": 4,
"tools": ["Read", "Bash", "Glob", "Grep", "Task"]
},
{
"name": "reviewer",
"type": "read_only_analysis",
"description": "Reviews optimization code for correctness, side effects, and regression risks",
"role_spec": "roles/reviewer/role.md",
"inner_loop": false,
"frontmatter": {
"prefix": "REVIEW",
"inner_loop": false,
"additional_prefixes": ["QUALITY"],
"discuss_rounds": ["DISCUSS-REVIEW"],
"delegates_to": [],
"message_types": {
"success": "review_complete",
"error": "error",
"fix": "fix_required"
}
},
"weight": 4,
"tools": ["Read", "Bash", "Glob", "Grep", "Task", "mcp__ace-tool__search_context"]
}
],
"parallel_config": {
"modes": ["single", "fan-out", "independent", "auto"],
"default_mode": "auto",
"max_branches": 5,
"auto_mode_rules": {
"single": "optimization_count <= 2",
"fan-out": "optimization_count >= 3"
}
},
"pipeline": {
"stages": [
{
"stage": 1,
"name": "Performance Profiling",
"roles": ["profiler"],
"blockedBy": [],
"fast_advance": true
},
{
"stage": 2,
"name": "Optimization Strategy",
"roles": ["strategist"],
"blockedBy": ["PROFILE"],
"fast_advance": true
},
{
"stage": 3,
"name": "Code Optimization",
"roles": ["optimizer"],
"blockedBy": ["STRATEGY"],
"fast_advance": false
},
{
"stage": 4,
"name": "Benchmark & Review",
"roles": ["benchmarker", "reviewer"],
"blockedBy": ["IMPL"],
"fast_advance": false,
"parallel": true,
"review_fix_cycle": {
"trigger": "REVIEW or BENCH finds issues",
"target_stage": 3,
"max_iterations": 3
}
}
],
"parallel_pipelines": {
"fan-out": {
"shared_stages": [1, 2],
"branch_stages": [3, 4],
"branch_prefix": "B",
"review_fix_cycle": {
"scope": "per_branch",
"max_iterations": 3
}
},
"independent": {
"pipeline_prefix_chars": "ABCDEFGHIJ",
"review_fix_cycle": {
"scope": "per_pipeline",
"max_iterations": 3
}
}
},
"diagram": "See pipeline-diagram section"
},
"shared_resources": [
{
"name": "Performance Baseline",
"path": "<session>/artifacts/baseline-metrics.json",
"usage": "Before-optimization metrics for comparison",
"scope": "shared (fan-out) / per-pipeline (independent)"
},
{
"name": "Bottleneck Report",
"path": "<session>/artifacts/bottleneck-report.md",
"usage": "Profiler output consumed by strategist",
"scope": "shared (fan-out) / per-pipeline (independent)"
},
{
"name": "Optimization Plan",
"path": "<session>/artifacts/optimization-plan.md",
"usage": "Strategist output consumed by optimizer",
"scope": "shared (fan-out) / per-pipeline (independent)"
},
{
"name": "Benchmark Results",
"path": "<session>/artifacts/benchmark-results.json",
"usage": "Benchmarker output consumed by reviewer",
"scope": "per-branch (fan-out) / per-pipeline (independent)"
}
],
"shared_memory_namespacing": {
"single": {
"profiler": "profiler",
"strategist": "strategist",
"optimizer": "optimizer",
"benchmarker": "benchmarker",
"reviewer": "reviewer"
},
"fan-out": {
"profiler": "profiler",
"strategist": "strategist",
"optimizer": "optimizer.B{NN}",
"benchmarker": "benchmarker.B{NN}",
"reviewer": "reviewer.B{NN}"
},
"independent": {
"profiler": "profiler.{P}",
"strategist": "strategist.{P}",
"optimizer": "optimizer.{P}",
"benchmarker": "benchmarker.{P}",
"reviewer": "reviewer.{P}"
}
}
}