Add unit tests for various components and stores in the terminal dashboard

- Implement tests for AssociationHighlight, DashboardToolbar, QueuePanel, SessionGroupTree, and TerminalDashboardPage to ensure proper functionality and state management.
- Create tests for cliSessionStore, issueQueueIntegrationStore, queueExecutionStore, queueSchedulerStore, sessionManagerStore, and terminalGridStore to validate state resets and workspace scoping.
- Mock necessary dependencies and state management hooks to isolate tests and ensure accurate behavior.
Commit: 62d8aa3623 (parent: 9aa07e8d01)
Author: catlog22
Date: 2026-03-08 21:38:20 +08:00
157 changed files with 36544 additions and 71 deletions


@@ -0,0 +1,659 @@
---
name: team-perf-opt
description: Performance optimization team skill. Profiles application performance, identifies bottlenecks, designs optimization strategies, implements changes, benchmarks improvements, and reviews code quality via CSV wave pipeline with interactive review-fix cycles.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"performance optimization task description\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, AskUserQuestion
---
# Team Performance Optimization
## Auto Mode
When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.
## Usage
```bash
$team-perf-opt "Optimize API response times for the user dashboard endpoints"
$team-perf-opt -c 4 "Profile and reduce memory usage in the data processing pipeline"
$team-perf-opt -y "Optimize bundle size and rendering performance for the frontend"
$team-perf-opt --continue "perf-optimize-api-20260308"
```
**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume existing session
**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)
---
## Overview
Orchestrate multi-agent performance optimization: profile application, identify bottlenecks, design optimization strategies, implement changes, benchmark improvements, review code quality. The pipeline has five domain roles (profiler, strategist, optimizer, benchmarker, reviewer) mapped to CSV wave stages with an interactive review-fix cycle.
**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary)
```
+-------------------------------------------------------------------+
| TEAM PERFORMANCE OPTIMIZATION WORKFLOW |
+-------------------------------------------------------------------+
| |
| Phase 0: Pre-Wave Interactive (Requirement Clarification) |
| +- Parse user task description |
| +- Detect scope: specific endpoint vs full app profiling |
| +- Clarify ambiguous requirements (AskUserQuestion) |
| +- Output: refined requirements for decomposition |
| |
| Phase 1: Requirement -> CSV + Classification |
| +- Identify performance targets and metrics |
| +- Build 5-stage pipeline (profile->strategize->optimize-> |
| | benchmark+review) |
| +- Classify tasks: csv-wave | interactive (exec_mode) |
| +- Compute dependency waves (topological sort) |
| +- Generate tasks.csv with wave + exec_mode columns |
| +- User validates task breakdown (skip if -y) |
| |
| Phase 2: Wave Execution Engine (Extended) |
| +- For each wave (1..N): |
| | +- Execute pre-wave interactive tasks (if any) |
| | +- Build wave CSV (filter csv-wave tasks for this wave) |
| | +- Inject previous findings into prev_context column |
| | +- spawn_agents_on_csv(wave CSV) |
| | +- Execute post-wave interactive tasks (if any) |
| | +- Merge all results into master tasks.csv |
| | +- Check: any failed? -> skip dependents |
| +- discoveries.ndjson shared across all modes (append-only) |
| +- Review-fix cycle: max 3 iterations per branch |
| |
| Phase 3: Post-Wave Interactive (Completion Action) |
| +- Pipeline completion report with benchmark comparisons |
| +- Interactive completion choice (Archive/Keep/Export) |
| +- Final aggregation / report |
| |
| Phase 4: Results Aggregation |
| +- Export final results.csv |
| +- Generate context.md with all findings |
| +- Display summary: completed/failed/skipped per wave |
| +- Offer: view results | retry failed | done |
| |
+-------------------------------------------------------------------+
```
---
## Pipeline Definition
```
 Stage 1         Stage 2          Stage 3             Stage 4
PROFILE-001 --> STRATEGY-001 --> IMPL-001 ----+----> BENCH-001
[profiler]      [strategist]     [optimizer]  |      [benchmarker]
                                     ^        +----> REVIEW-001
                                     |               [reviewer]
                                     +<-- FIX-001 <--+
                                  (max 3 iterations)
```
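For implementation, the stage layout above can be captured as a data structure. A minimal sketch -- the constant and function names are illustrative, not part of the skill's required API:

```javascript
// Illustrative constant mirroring the 5-stage pipeline above.
const PIPELINE_STAGES = [
  { stage: '1',  role: 'profiler',    prefix: 'PROFILE',  wave: 1, deps: [] },
  { stage: '2',  role: 'strategist',  prefix: 'STRATEGY', wave: 2, deps: ['PROFILE'] },
  { stage: '3',  role: 'optimizer',   prefix: 'IMPL',     wave: 3, deps: ['STRATEGY'] },
  { stage: '4a', role: 'benchmarker', prefix: 'BENCH',    wave: 4, deps: ['IMPL'] },
  { stage: '4b', role: 'reviewer',    prefix: 'REVIEW',   wave: 4, deps: ['IMPL'] },
];

// Expand the stage table into concrete task rows (one task per stage),
// using the PREFIX-NNN id convention from the CSV schema.
function buildPipelineTasks() {
  return PIPELINE_STAGES.map(s => ({
    id: `${s.prefix}-001`,
    role: s.role,
    wave: s.wave,
    deps: s.deps.map(p => `${p}-001`).join(';'),
    exec_mode: 'csv-wave',
    status: 'pending',
  }));
}
```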
---
## Task Classification Rules
Each task is classified by `exec_mode`:
| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, revision cycles, user checkpoints |
**Classification Decision**:
| Task Property | Classification |
|---------------|---------------|
| Performance profiling (single-pass) | `csv-wave` |
| Optimization strategy design (single-pass) | `csv-wave` |
| Code optimization implementation | `csv-wave` |
| Benchmark execution (single-pass) | `csv-wave` |
| Code review (single-pass) | `csv-wave` |
| Review-fix cycle (iterative revision) | `interactive` |
| User checkpoint (plan approval) | `interactive` |
| Discussion round (DISCUSS-OPT, DISCUSS-REVIEW) | `interactive` |
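The decision table above can be sketched as a single predicate. The helper name and the id-prefix list are illustrative assumptions, relying on the PREFIX-NNN id convention from the CSV schema:

```javascript
// Sketch: map a task to its exec_mode per the classification rules above.
// Prefixes that imply multi-round interaction (revision cycles,
// user checkpoints, discussion rounds).
const INTERACTIVE_PREFIXES = ['FIX', 'CHECKPOINT', 'DISCUSS'];

function classifyExecMode(task) {
  const prefix = task.id.split('-')[0];
  if (INTERACTIVE_PREFIXES.includes(prefix)) return 'interactive';
  // Single-pass profile/strategy/impl/bench/review work is one-shot.
  return 'csv-wave';
}
```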
---
## CSV Schema
### tasks.csv (Master State)
```csv
id,title,description,role,bottleneck_type,priority,target_files,deps,context_from,exec_mode,wave,status,findings,verdict,artifacts_produced,error
"PROFILE-001","Profile performance","Profile application performance to identify CPU, memory, I/O, network, and rendering bottlenecks. Produce baseline metrics and ranked report.","profiler","","","","","","csv-wave","1","pending","","","",""
"STRATEGY-001","Design optimization plan","Analyze bottleneck report to design prioritized optimization plan with strategies and expected improvements.","strategist","","","","PROFILE-001","PROFILE-001","csv-wave","2","pending","","","",""
"IMPL-001","Implement optimizations","Implement performance optimization changes following strategy plan in priority order.","optimizer","","","","STRATEGY-001","STRATEGY-001","csv-wave","3","pending","","","",""
"BENCH-001","Benchmark improvements","Run benchmarks comparing before/after optimization metrics. Validate improvements meet plan criteria.","benchmarker","","","","IMPL-001","IMPL-001","csv-wave","4","pending","","","",""
"REVIEW-001","Review optimization code","Review optimization changes for correctness, side effects, regression risks, and best practices.","reviewer","","","","IMPL-001","IMPL-001","csv-wave","4","pending","","","",""
```
**Columns**:
| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (PREFIX-NNN format) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description (self-contained) |
| `role` | Input | Worker role: profiler, strategist, optimizer, benchmarker, reviewer |
| `bottleneck_type` | Input | Performance bottleneck category: CPU, MEMORY, IO, NETWORK, RENDERING, DATABASE |
| `priority` | Input | P0 (Critical), P1 (High), P2 (Medium), P3 (Low) |
| `target_files` | Input | Semicolon-separated file paths to focus on |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `verdict` | Output | Benchmark/review verdict: PASS, WARN, FAIL, APPROVE, REVISE, REJECT |
| `artifacts_produced` | Output | Semicolon-separated paths of produced artifacts |
| `error` | Output | Error message if failed (empty if success) |
### Per-Wave CSV (Temporary)
Each wave generates a temporary `wave-{N}.csv` with extra `prev_context` column (csv-wave tasks only).
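The wave engine below calls a `buildPrevContext(task, tasks)` helper to fill this column; a minimal sketch of how it could assemble the value from the master task list (the exact line format is an assumption):

```javascript
// Sketch: build the prev_context value for one task from the findings
// of the tasks listed in its context_from column (semicolon-separated).
function buildPrevContext(task, allTasks) {
  const sourceIds = (task.context_from || '').split(';').filter(Boolean);
  const lines = sourceIds.map(id => {
    const src = allTasks.find(t => t.id === id);
    if (!src || !src.findings) return `${id}: (no findings)`;
    // findings are capped at 500 chars by the schema, so this stays short
    return `${id} [${src.verdict || src.status}]: ${src.findings}`;
  });
  return lines.join('\n');
}
```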
---
## Agent Registry (Interactive Agents)
| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| Plan Reviewer | agents/plan-reviewer.md | 2.3 (send_input cycle) | Review bottleneck report or optimization plan at user checkpoint | pre-wave |
| Fix Cycle Handler | agents/fix-cycle-handler.md | 2.3 (send_input cycle) | Manage review-fix iteration cycle (max 3 rounds) | post-wave |
| Completion Handler | agents/completion-handler.md | 2.3 (send_input cycle) | Handle pipeline completion action (Archive/Keep/Export) | standalone |
> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload.
---
## Output Artifacts
| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `task-analysis.json` | Phase 1 output: scope, bottleneck targets, pipeline config | Created in Phase 1 |
| `artifacts/baseline-metrics.json` | Profiler: before-optimization metrics | Created by profiler |
| `artifacts/bottleneck-report.md` | Profiler: ranked bottleneck findings | Created by profiler |
| `artifacts/optimization-plan.md` | Strategist: prioritized optimization plan | Created by strategist |
| `artifacts/benchmark-results.json` | Benchmarker: after-optimization metrics | Created by benchmarker |
| `artifacts/review-report.md` | Reviewer: code review findings | Created by reviewer |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |
---
## Session Structure
```
.workflow/.csv-wave/{session-id}/
+-- tasks.csv # Master state (all tasks, both modes)
+-- results.csv # Final results export
+-- discoveries.ndjson # Shared discovery board (all agents)
+-- context.md # Human-readable report
+-- task-analysis.json # Phase 1 analysis output
+-- wave-{N}.csv # Temporary per-wave input (csv-wave only)
+-- artifacts/
| +-- baseline-metrics.json # Profiler output
| +-- bottleneck-report.md # Profiler output
| +-- optimization-plan.md # Strategist output
| +-- benchmark-results.json # Benchmarker output
| +-- review-report.md # Reviewer output
+-- interactive/ # Interactive task artifacts
| +-- {id}-result.json
+-- wisdom/
+-- patterns.md # Discovered patterns and conventions
```
---
## Implementation
### Session Initialization
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3
const requirement = $ARGUMENTS
.replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
.trim()
const slug = requirement.toLowerCase()
.replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
.substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `perf-${slug}-${dateStr}`
const sessionFolder = `.workflow/.csv-wave/${sessionId}`
Bash(`mkdir -p ${sessionFolder}/artifacts ${sessionFolder}/interactive ${sessionFolder}/wisdom`)
// Initialize discoveries.ndjson
Write(`${sessionFolder}/discoveries.ndjson`, '')
// Initialize wisdom
Write(`${sessionFolder}/wisdom/patterns.md`, '# Patterns & Conventions\n')
```
---
### Phase 0: Pre-Wave Interactive (Requirement Clarification)
**Objective**: Parse user task, detect performance scope, clarify ambiguities, prepare for decomposition.
**Workflow**:
1. **Parse user task description** from $ARGUMENTS
2. **Check for existing sessions** (continue mode):
- Scan `.workflow/.csv-wave/perf-*/tasks.csv` for sessions with pending tasks
- If `--continue`: resume the specified or most recent session, skip to Phase 2
- If active session found: ask user whether to resume or start new
3. **Identify performance optimization target**:
| Signal | Target |
|--------|--------|
| Specific endpoint/file mentioned | Scoped optimization |
| "slow", "performance", "speed", generic | Full application profiling |
| Specific metric (response time, memory, bundle size) | Targeted metric optimization |
| "frontend", "backend", "CLI" | Platform-specific profiling |
4. **Clarify if ambiguous** (skip if AUTO_YES):
```javascript
AskUserQuestion({
questions: [{
question: "Please confirm the performance optimization scope:",
header: "Performance Scope",
multiSelect: false,
options: [
{ label: "Proceed as described", description: "Scope is clear" },
{ label: "Narrow scope", description: "Specify endpoints/modules to focus on" },
{ label: "Add constraints", description: "Target metrics, acceptable trade-offs" }
]
}]
})
```
5. **Output**: Refined requirement string for Phase 1
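The signal table in step 3 can be sketched as a simple keyword classifier. The patterns, precedence order, and return labels are illustrative assumptions, not a required API:

```javascript
// Sketch: infer the optimization target from the task description.
// Keyword lists are illustrative, not exhaustive.
function detectScope(description) {
  const d = description.toLowerCase();
  // Specific metric mentioned -> targeted metric optimization
  if (/response time|memory|bundle|latency/.test(d)) return 'targeted-metric';
  // Platform keyword -> platform-specific profiling
  if (/\b(frontend|backend|cli)\b/.test(d)) return 'platform-specific';
  // Concrete path or endpoint -> scoped optimization
  if (/(\/[\w.-]+)+|\bendpoint\b/.test(d)) return 'scoped';
  // Generic "slow"/"performance" wording -> full application profiling
  return 'full-profile';
}
```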
**Success Criteria**:
- Refined requirements available for Phase 1 decomposition
- Existing session detected and handled if applicable
---
### Phase 1: Requirement -> CSV + Classification
**Objective**: Decompose the performance optimization task into the 5-stage pipeline, assign waves, and generate tasks.csv.
**Decomposition Rules**:
1. **Stage mapping** -- performance optimization always follows this pipeline:
| Stage | Role | Task Prefix | Wave | Description |
|-------|------|-------------|------|-------------|
| 1 | profiler | PROFILE | 1 | Profile app, identify bottlenecks, produce baseline metrics |
| 2 | strategist | STRATEGY | 2 | Design optimization plan from bottleneck report |
| 3 | optimizer | IMPL | 3 | Implement optimizations per plan priority |
| 4a | benchmarker | BENCH | 4 | Benchmark before/after, validate improvements |
| 4b | reviewer | REVIEW | 4 | Review optimization code for correctness |
2. **Single-pipeline decomposition**: Generate one task per stage with sequential dependencies:
- PROFILE-001 (wave 1, no deps)
- STRATEGY-001 (wave 2, deps: PROFILE-001)
- IMPL-001 (wave 3, deps: STRATEGY-001)
- BENCH-001 (wave 4, deps: IMPL-001)
- REVIEW-001 (wave 4, deps: IMPL-001)
3. **Description enrichment**: Each task description must be self-contained with:
- Clear goal statement
- Input artifacts to read
- Output artifacts to produce
- Success criteria
- Session folder path
**Classification Rules**:
| Task Property | exec_mode |
|---------------|-----------|
| PROFILE, STRATEGY, IMPL, BENCH, REVIEW (initial pass) | `csv-wave` |
| FIX tasks (review-fix cycle) | `interactive` (handled by fix-cycle-handler agent) |
**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).
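The wave computation can be sketched as follows -- Kahn's algorithm where the BFS level a task is dequeued at becomes its 1-based wave number (so a task lands one wave after its latest dependency):

```javascript
// Sketch: assign 1-based wave numbers via Kahn's BFS topological sort.
// Throws on circular dependencies (per the error-handling table).
function computeWaves(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]));
  const indeg = new Map(tasks.map(t => [t.id, 0]));
  const out = new Map(tasks.map(t => [t.id, []]));
  for (const t of tasks) {
    for (const dep of (t.deps || '').split(';').filter(Boolean)) {
      if (!out.has(dep)) continue; // unknown dep id: ignore in this sketch
      indeg.set(t.id, indeg.get(t.id) + 1);
      out.get(dep).push(t.id);
    }
  }
  let frontier = tasks.filter(t => indeg.get(t.id) === 0).map(t => t.id);
  let wave = 0, placed = 0;
  while (frontier.length > 0) {
    wave++;
    const next = [];
    for (const id of frontier) {
      byId.get(id).wave = wave; // BFS depth = wave number
      placed++;
      for (const succ of out.get(id)) {
        indeg.set(succ, indeg.get(succ) - 1);
        if (indeg.get(succ) === 0) next.push(succ);
      }
    }
    frontier = next;
  }
  if (placed < tasks.length) throw new Error('Circular dependency in tasks');
  return tasks;
}
```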
**User Validation**: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).
**Success Criteria**:
- tasks.csv created with valid schema, wave, and exec_mode assignments
- task-analysis.json written with scope and pipeline config
- No circular dependencies
- User approved (or AUTO_YES)
---
### Phase 2: Wave Execution Engine (Extended)
**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.
```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
// CSV values parse as strings; normalize wave to a number for the comparisons below
tasks.forEach(t => { t.wave = Number(t.wave) })
const maxWave = Math.max(...tasks.map(t => t.wave))
for (let wave = 1; wave <= maxWave; wave++) {
console.log(`\nWave ${wave}/${maxWave}`)
// 1. Separate tasks by exec_mode
const waveTasks = tasks.filter(t => t.wave === wave && t.status === 'pending')
const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')
// 2. Check dependencies -- skip tasks whose deps failed
for (const task of waveTasks) {
const depIds = (task.deps || '').split(';').filter(Boolean)
const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
task.status = 'skipped'
task.error = `Dependency failed: ${depIds.filter((id, i) =>
['failed','skipped'].includes(depStatuses[i])).join(', ')}`
}
}
// 3. Execute pre-wave interactive tasks (if any)
for (const task of interactiveTasks.filter(t => t.status === 'pending')) {
const agentFile = task.id.startsWith('FIX') ? 'agents/fix-cycle-handler.md' : 'agents/plan-reviewer.md'
Read(agentFile)
const agent = spawn_agent({
message: `## TASK ASSIGNMENT\n\n### MANDATORY FIRST STEPS\n1. Read: ${agentFile}\n2. Read: ${sessionFolder}/discoveries.ndjson\n3. Read: .workflow/project-tech.json (if exists)\n\n---\n\nGoal: ${task.description}\nScope: ${task.title}\nSession: ${sessionFolder}\n\n### Previous Context\n${buildPrevContext(task, tasks)}`
})
const result = wait({ ids: [agent], timeout_ms: 600000 })
if (result.timed_out) {
send_input({ id: agent, message: "Please finalize and output current findings." })
wait({ ids: [agent], timeout_ms: 120000 })
}
Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
task_id: task.id, status: "completed", findings: parseFindings(result),
timestamp: getUtc8ISOString()
}))
close_agent({ id: agent })
task.status = 'completed'
task.findings = parseFindings(result)
}
// 4. Build prev_context for csv-wave tasks
const pendingCsvTasks = csvTasks.filter(t => t.status === 'pending')
for (const task of pendingCsvTasks) {
task.prev_context = buildPrevContext(task, tasks)
}
if (pendingCsvTasks.length > 0) {
// 5. Write wave CSV
Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingCsvTasks))
// 6. Load the worker instruction from instructions/agent-instruction.md
const perfOptInstruction = Read('instructions/agent-instruction.md')
// 7. Execute wave via spawn_agents_on_csv
spawn_agents_on_csv({
csv_path: `${sessionFolder}/wave-${wave}.csv`,
id_column: "id",
instruction: perfOptInstruction, // from instructions/agent-instruction.md
max_concurrency: maxConcurrency,
max_runtime_seconds: 900,
output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
output_schema: {
type: "object",
properties: {
id: { type: "string" },
status: { type: "string", enum: ["completed", "failed"] },
findings: { type: "string" },
verdict: { type: "string" },
artifacts_produced: { type: "string" },
error: { type: "string" }
}
}
})
// 8. Merge results into master CSV
const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
for (const r of results) {
const t = tasks.find(t => t.id === r.id)
if (t) Object.assign(t, r)
}
}
// 9. Update master CSV
Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))
// 10. Cleanup temp files
Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)
// 11. Post-wave: check for review-fix cycle
const benchTask = tasks.find(t => t.id.startsWith('BENCH') && t.wave === wave)
const reviewTask = tasks.find(t => t.id.startsWith('REVIEW') && t.wave === wave)
if ((benchTask?.verdict === 'FAIL' || reviewTask?.verdict === 'REVISE' || reviewTask?.verdict === 'REJECT')) {
const fixCycleCount = tasks.filter(t => t.id.startsWith('FIX')).length
if (fixCycleCount < 3) {
const fixId = `FIX-${String(fixCycleCount + 1).padStart(3, '0')}`
const feedback = [benchTask?.findings, reviewTask?.findings].filter(Boolean).join('\n')
tasks.push({
id: fixId, title: `Fix issues from review/benchmark cycle ${fixCycleCount + 1}`,
description: `Fix issues found:\n${feedback}`,
role: 'optimizer', bottleneck_type: '', priority: 'P0', target_files: '',
deps: '', context_from: '', exec_mode: 'interactive',
wave: wave + 1, status: 'pending', findings: '', verdict: '',
artifacts_produced: '', error: ''
})
}
}
// 12. Display wave summary
const completed = waveTasks.filter(t => t.status === 'completed').length
const failed = waveTasks.filter(t => t.status === 'failed').length
const skipped = waveTasks.filter(t => t.status === 'skipped').length
console.log(`Wave ${wave} Complete: ${completed} completed, ${failed} failed, ${skipped} skipped`)
}
```
**Success Criteria**:
- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when predecessor failed
- Review-fix cycle handled with max 3 iterations
- discoveries.ndjson accumulated across all waves and mechanisms
---
### Phase 3: Post-Wave Interactive (Completion Action)
**Objective**: Pipeline completion report with performance improvement metrics and interactive completion choice.
```javascript
// 1. Generate pipeline summary
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
const completed = tasks.filter(t => t.status === 'completed')
const failed = tasks.filter(t => t.status === 'failed')
// 2. Load improvement metrics from benchmark results
let improvements = ''
try {
const benchmark = JSON.parse(Read(`${sessionFolder}/artifacts/benchmark-results.json`))
improvements = `Performance Improvements:\n${benchmark.metrics.map(m =>
` ${m.name}: ${m.baseline} -> ${m.current} (${m.improvement})`).join('\n')}`
} catch {}
console.log(`
============================================
PERFORMANCE OPTIMIZATION COMPLETE
Deliverables:
- Baseline Metrics: artifacts/baseline-metrics.json
- Bottleneck Report: artifacts/bottleneck-report.md
- Optimization Plan: artifacts/optimization-plan.md
- Benchmark Results: artifacts/benchmark-results.json
- Review Report: artifacts/review-report.md
${improvements}
Pipeline: ${completed.length}/${tasks.length} tasks
Session: ${sessionFolder}
============================================
`)
// 3. Completion action
if (!AUTO_YES) {
AskUserQuestion({
questions: [{
question: "Performance optimization complete. What would you like to do?",
header: "Completion",
multiSelect: false,
options: [
{ label: "Archive & Clean (Recommended)", description: "Archive session, output final summary" },
{ label: "Keep Active", description: "Keep session for follow-up work" },
{ label: "Retry Failed", description: "Re-run failed tasks" }
]
}]
})
}
```
**Success Criteria**:
- Post-wave interactive processing complete
- User informed of results and improvement metrics
---
### Phase 4: Results Aggregation
**Objective**: Generate final results and human-readable report.
```javascript
// 1. Export results.csv
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)
// 2. Generate context.md
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
let contextMd = `# Performance Optimization Report\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Date**: ${getUtc8ISOString().substring(0, 10)}\n\n`
contextMd += `## Summary\n`
contextMd += `| Status | Count |\n|--------|-------|\n`
contextMd += `| Completed | ${tasks.filter(t => t.status === 'completed').length} |\n`
contextMd += `| Failed | ${tasks.filter(t => t.status === 'failed').length} |\n`
contextMd += `| Skipped | ${tasks.filter(t => t.status === 'skipped').length} |\n\n`
contextMd += `## Deliverables\n\n`
contextMd += `| Artifact | Path |\n|----------|------|\n`
contextMd += `| Baseline Metrics | artifacts/baseline-metrics.json |\n`
contextMd += `| Bottleneck Report | artifacts/bottleneck-report.md |\n`
contextMd += `| Optimization Plan | artifacts/optimization-plan.md |\n`
contextMd += `| Benchmark Results | artifacts/benchmark-results.json |\n`
contextMd += `| Review Report | artifacts/review-report.md |\n\n`
const maxWave = Math.max(...tasks.map(t => Number(t.wave)))
contextMd += `## Wave Execution\n\n`
for (let w = 1; w <= maxWave; w++) {
const waveTasks = tasks.filter(t => Number(t.wave) === w)
contextMd += `### Wave ${w}\n\n`
for (const t of waveTasks) {
const icon = t.status === 'completed' ? '[DONE]' : t.status === 'failed' ? '[FAIL]' : '[SKIP]'
contextMd += `${icon} **${t.title}** [${t.role}] ${t.verdict ? `(${t.verdict})` : ''} ${t.findings || ''}\n\n`
}
}
Write(`${sessionFolder}/context.md`, contextMd)
console.log(`Results exported to: ${sessionFolder}/results.csv`)
console.log(`Report generated at: ${sessionFolder}/context.md`)
```
**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated with deliverables list
- Summary displayed to user
---
## Shared Discovery Board Protocol
All agents (csv-wave and interactive) share a single `discoveries.ndjson` file for cross-task knowledge exchange.
**Format**: One JSON object per line (NDJSON):
```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"PROFILE-001","type":"bottleneck_found","data":{"type":"CPU","location":"src/services/DataProcessor.ts:145","severity":"Critical","description":"O(n^2) nested loop in processRecords"}}
{"ts":"2026-03-08T10:05:00Z","worker":"IMPL-001","type":"file_modified","data":{"file":"src/services/DataProcessor.ts","change":"Replaced nested loop with Map lookup","lines_added":8}}
```
**Discovery Types**:
| Type | Data Schema | Description |
|------|-------------|-------------|
| `bottleneck_found` | `{type, location, severity, description}` | Performance bottleneck identified |
| `hotspot_found` | `{file, function, cpu_pct, description}` | CPU hotspot detected |
| `memory_issue` | `{file, type, size_mb, description}` | Memory leak or bloat found |
| `io_issue` | `{operation, latency_ms, description}` | I/O performance issue |
| `file_modified` | `{file, change, lines_added}` | File change recorded |
| `metric_measured` | `{metric, value, unit, context}` | Performance metric measured |
| `pattern_found` | `{pattern_name, location, description}` | Code pattern identified |
| `artifact_produced` | `{name, path, producer, type}` | Deliverable created |
**Protocol**:
1. Agents MUST read discoveries.ndjson at start of execution
2. Agents MUST append relevant discoveries during execution
3. Agents MUST NOT modify or delete existing entries
4. Deduplication by `{type, data.location}` or `{type, data.file}` key
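The protocol can be sketched with two helpers that operate on NDJSON text -- one append, one dedup-aware read. Persisting the text to `discoveries.ndjson` is left to the caller; the helper names are illustrative:

```javascript
// Sketch: append-only discovery helper (rules 2-3: append, never modify).
function appendDiscovery(ndjson, worker, type, data) {
  const entry = { ts: new Date().toISOString(), worker, type, data };
  return ndjson + JSON.stringify(entry) + '\n';
}

// Sketch: parse the board, skipping malformed lines (per the
// error-handling table), deduplicating by {type, location-or-file}
// per rule 4. First occurrence wins.
function readDiscoveries(ndjson) {
  const seen = new Set();
  const entries = [];
  for (const line of ndjson.split('\n').filter(Boolean)) {
    let e;
    try { e = JSON.parse(line); } catch { continue; }
    const key = `${e.type}:${e.data?.location ?? e.data?.file ?? ''}`;
    if (seen.has(key)) continue;
    seen.add(key);
    entries.push(e);
  }
  return entries;
}
```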
---
## Error Handling
| Error | Resolution |
|-------|------------|
| Circular dependency in tasks | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Review-fix cycle exceeds 3 iterations | Escalate to user with summary of remaining issues |
| Benchmark regression detected | Create FIX task with regression details |
| Profiling tool not available | Fall back to static analysis methods |
| Continue mode: no session found | List available sessions, prompt user to select |
---
## Core Rules
1. **Start Immediately**: First action is session initialization, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when interaction pattern requires it
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson -- both mechanisms share it
7. **Skip on Failure**: If a dependency failed, skip the dependent task (regardless of mechanism)
8. **Max 3 Fix Cycles**: Review-fix cycle capped at 3 iterations; escalate to user after
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped


@@ -0,0 +1,141 @@
# Completion Handler Agent
Handle pipeline completion action for performance optimization: present results summary with before/after metrics, offer Archive/Keep/Export options, execute chosen action.
## Identity
- **Type**: `interactive`
- **Responsibility**: Pipeline completion and session lifecycle management
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Present complete pipeline summary with before/after performance metrics
- Offer completion action choices
- Execute chosen action (archive, keep, export)
- Produce structured output
### MUST NOT
- Skip presenting results summary
- Execute destructive actions without confirmation
- Modify source code
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load result artifacts |
| `Write` | builtin | Write export files |
| `Bash` | builtin | Archive/cleanup operations |
| `AskUserQuestion` | builtin | Present completion choices |
---
## Execution
### Phase 1: Results Collection
**Objective**: Gather all pipeline results for summary.
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| tasks.csv | Yes | Master task state |
| Baseline metrics | Yes | Pre-optimization metrics |
| Benchmark results | Yes | Post-optimization metrics |
| Review report | Yes | Code review findings |
**Steps**:
1. Read tasks.csv -- count completed/failed/skipped
2. Read baseline-metrics.json -- extract before metrics
3. Read benchmark-results.json -- extract after metrics, compute improvements
4. Read review-report.md -- extract final verdict
**Output**: Compiled results summary with before/after comparison
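Step 3's improvement computation can be sketched as below. The flat `{metric: value}` input shape and the lower-is-better assumption are illustrative; the output mirrors the `{name, baseline, current, improvement}` metric shape the parent skill reads from benchmark-results.json:

```javascript
// Sketch: derive improvement percentages from before/after metrics.
// Assumes lower values are better (latency, memory, bundle size),
// so a positive percentage means a reduction.
function computeImprovements(baseline, current) {
  return Object.keys(baseline).map(name => {
    const before = baseline[name];
    const after = current[name];
    const pct = before === 0 ? 0 : ((before - after) / before) * 100;
    return { name, baseline: before, current: after, improvement: `${pct.toFixed(1)}%` };
  });
}
```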
---
### Phase 2: Present and Choose
**Objective**: Display results and get user's completion choice.
**Steps**:
1. Display pipeline summary with before/after metrics comparison table
2. Present completion action:
```javascript
AskUserQuestion({
questions: [{
question: "Performance optimization complete. What would you like to do?",
header: "Completion",
multiSelect: false,
options: [
{ label: "Archive & Clean (Recommended)", description: "Archive session, output final summary" },
{ label: "Keep Active", description: "Keep session for follow-up work or inspection" },
{ label: "Export Results", description: "Export deliverables to a specified location" }
]
}]
})
```
**Output**: User's choice
---
### Phase 3: Execute Action
**Objective**: Execute the chosen completion action.
| Choice | Action |
|--------|--------|
| Archive & Clean | Copy results.csv and context.md to archive, mark session completed |
| Keep Active | Mark session as paused, leave all artifacts in place |
| Export Results | Copy key deliverables to user-specified location |
---
## Structured Output Template
```
## Pipeline Summary
- Tasks: X completed, Y failed, Z skipped
- Duration: estimated from timestamps
## Performance Improvements
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| metric_1 | value | value | +X% |
| metric_2 | value | value | +X% |
## Deliverables
- Baseline Metrics: path
- Bottleneck Report: path
- Optimization Plan: path
- Benchmark Results: path
- Review Report: path
## Action Taken
- Choice: Archive & Clean / Keep Active / Export Results
- Status: completed
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Result artifacts missing | Report partial summary with available data |
| Archive operation fails | Default to Keep Active |
| Export path invalid | Ask user for valid path |
| Timeout approaching | Default to Keep Active |


@@ -0,0 +1,156 @@
# Fix Cycle Handler Agent
Manage the review-fix iteration cycle for performance optimization: read benchmark/review feedback, apply targeted fixes, and re-validate, up to 3 iterations.
## Identity
- **Type**: `interactive`
- **Responsibility**: Iterative fix-verify cycle for optimization issues
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Read benchmark results and review report to understand failures
- Apply targeted fixes addressing specific feedback items
- Re-validate after each fix attempt
- Track iteration count (max 3)
- Produce structured output with fix summary
### MUST NOT
- Skip reading feedback before attempting fixes
- Apply broad changes unrelated to feedback
- Exceed 3 fix iterations
- Modify code outside the scope of reported issues
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load feedback artifacts and source files |
| `Edit` | builtin | Apply targeted code fixes |
| `Write` | builtin | Write updated artifacts |
| `Bash` | builtin | Run build/test/benchmark validation |
| `Grep` | builtin | Search for patterns |
| `Glob` | builtin | Find files |
---
## Execution
### Phase 1: Feedback Loading
**Objective**: Load and parse benchmark/review feedback.
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| Benchmark results | Yes (if benchmark failed) | From artifacts/benchmark-results.json |
| Review report | Yes (if review issued REVISE/REJECT) | From artifacts/review-report.md |
| Optimization plan | Yes | Original plan for reference |
| Baseline metrics | Yes | For regression comparison |
| Discoveries | No | Shared findings |
**Steps**:
1. Read benchmark-results.json -- identify metrics that failed targets or regressed
2. Read review-report.md -- identify Critical/High findings with file:line references
3. Categorize issues by type and priority:
- Performance regression (benchmark target not met)
- Correctness issue (logic error, race condition)
- Side effect (unintended behavior change)
- Maintainability concern (excessive complexity)
**Output**: Prioritized list of issues to fix
---
### Phase 2: Fix Implementation (Iterative)
**Objective**: Apply fixes and re-validate, up to 3 rounds.
**Steps**:
For each iteration (1..3):
1. **Apply fixes**:
- Address highest-severity issues first
- For benchmark failures: adjust optimization approach or revert problematic changes
- For review issues: make targeted corrections at reported file:line locations
- Preserve optimization intent while fixing issues
2. **Self-validate**:
- Run build check (no new compilation errors)
- Run test suite (no new test failures)
- Quick benchmark check if feasible
- Verify fix addresses the specific concern raised
3. **Check convergence**:
| Validation Result | Action |
|-------------------|--------|
| All checks pass | Exit loop, report success |
| Some checks still fail, iteration < 3 | Continue to next iteration |
| Still failing at iteration 3 | Report remaining issues for escalation |
**Output**: Fix results per iteration
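The iteration loop above reduces to a bounded fix-verify cycle. A minimal sketch, where `apply_fixes` and `validate` stand in for the real targeted edits and the build/test/benchmark checks:

```python
# Sketch of the bounded fix-verify loop. A PARTIAL verdict (some but not
# all issues resolved) can be derived from the remaining-failures list.
MAX_ITERATIONS = 3

def fix_cycle(issues, apply_fixes, validate):
    failures = issues
    for iteration in range(1, MAX_ITERATIONS + 1):
        apply_fixes(failures)         # highest severity first
        failures = validate()         # build, tests, quick benchmark
        if not failures:
            return {"verdict": "PASS", "iterations": iteration}
    # 3 iterations exhausted: report remaining issues for escalation
    return {"verdict": "ESCALATE", "iterations": MAX_ITERATIONS,
            "remaining": failures}
```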
---
### Phase 3: Result Reporting
**Objective**: Produce final fix cycle summary.
**Steps**:
1. Update benchmark-results.json with post-fix metrics if applicable
2. Append fix discoveries to discoveries.ndjson
3. Report final status
---
## Structured Output Template
```
## Summary
- Fix cycle completed: N iterations, M issues resolved, K remaining
## Iterations
### Iteration 1
- Fixed: [list of fixes applied with file:line]
- Validation: [pass/fail per dimension]
### Iteration 2 (if needed)
- Fixed: [list of fixes]
- Validation: [pass/fail]
## Final Status
- verdict: PASS | PARTIAL | ESCALATE
- Remaining issues (if any): [list]
## Performance Impact
- Metric changes from fixes (if measured)
## Artifacts Updated
- artifacts/benchmark-results.json (updated metrics, if re-benchmarked)
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Fix introduces new regression | Revert fix, try alternative approach |
| Cannot reproduce reported issue | Log as resolved-by-environment, continue |
| Fix scope exceeds current files | Report scope expansion needed, escalate |
| Optimization approach fundamentally flawed | Report for strategist escalation |
| Timeout approaching | Output partial results with iteration count |
| 3 iterations exhausted | Report remaining issues for user escalation |


@@ -0,0 +1,150 @@
# Plan Reviewer Agent
Review bottleneck report or optimization plan at user checkpoints, providing interactive approval or revision requests.
## Identity
- **Type**: `interactive`
- **Responsibility**: Review and approve/revise plans before execution proceeds
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Read the bottleneck report or optimization plan being reviewed
- Produce structured output with clear APPROVE/REVISE verdict
- Include specific file:line references in findings
### MUST NOT
- Skip the MANDATORY FIRST STEPS role loading
- Modify source code directly
- Produce unstructured output
- Approve without actually reading the plan
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load plan artifacts and project files |
| `Grep` | builtin | Search for patterns in codebase |
| `Glob` | builtin | Find files by pattern |
| `Bash` | builtin | Run build/test commands |
### Tool Usage Patterns
**Read Pattern**: Load context files before review
```
Read("{session_folder}/artifacts/bottleneck-report.md")
Read("{session_folder}/artifacts/optimization-plan.md")
Read("{session_folder}/discoveries.ndjson")
```
---
## Execution
### Phase 1: Context Loading
**Objective**: Load the plan or report to review.
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| Bottleneck report | Yes (if reviewing profiling) | Ranked bottleneck list from profiler |
| Optimization plan | Yes (if reviewing strategy) | Prioritized plan from strategist |
| Discoveries | No | Shared findings from prior stages |
**Steps**:
1. Read the artifact being reviewed from session artifacts folder
2. Read discoveries.ndjson for additional context
3. Identify which checkpoint this review corresponds to (CP-1 for profiling, CP-2 for strategy)
**Output**: Loaded plan context for review
---
### Phase 2: Plan Review
**Objective**: Evaluate plan quality, completeness, and feasibility.
**Steps**:
1. **For bottleneck report review (CP-1)**:
- Verify all performance dimensions are covered (CPU, memory, I/O, network, rendering, database)
- Check that severity rankings are justified with measured evidence
- Validate baseline metrics are quantified with units and measurement method
- Check scope coverage matches original requirement
2. **For optimization plan review (CP-2)**:
- Verify each optimization has unique OPT-ID and self-contained detail
- Check priority assignments follow impact/effort matrix
- Validate target files are non-overlapping between optimizations
- Verify success criteria are measurable with specific thresholds
- Check that implementation guidance is actionable
- Assess risk levels and potential side effects
3. **Issue classification**:
| Finding Severity | Condition | Impact |
|------------------|-----------|--------|
| Critical | Missing key profiling dimension or infeasible plan | REVISE required |
| High | Unclear criteria or unrealistic targets | REVISE recommended |
| Medium | Minor gaps in coverage or detail | Note for improvement |
| Low | Style or formatting issues | Informational |
**Output**: Review findings with severity classifications
---
### Phase 3: Verdict
**Objective**: Issue APPROVE or REVISE verdict.
| Verdict | Condition | Action |
|---------|-----------|--------|
| APPROVE | No Critical or High findings | Plan is ready for next stage |
| REVISE | Has Critical or High findings | Return specific feedback for revision |
**Output**: Verdict with detailed feedback
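The verdict rule above reduces to a single check over finding severities. A minimal sketch, assuming each finding carries a `severity` field:

```python
# Sketch of the checkpoint verdict rule: any Critical or High finding
# forces REVISE; otherwise the plan is approved.
def plan_verdict(findings: list[dict]) -> str:
    blocking = {"Critical", "High"}
    if any(f["severity"] in blocking for f in findings):
        return "REVISE"
    return "APPROVE"
```

Medium and Low findings still appear in the report, but only as non-blocking notes.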
---
## Structured Output Template
```
## Summary
- One-sentence verdict: APPROVE or REVISE with rationale
## Findings
- Finding 1: [severity] description with artifact reference
- Finding 2: [severity] description with specific section reference
## Verdict
- APPROVE: Plan is ready for execution
OR
- REVISE: Specific items requiring revision
1. Issue description + suggested fix
2. Issue description + suggested fix
## Recommendations
- Optional improvement suggestions (non-blocking)
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Artifact file not found | Report in findings, request re-generation |
| Plan structure invalid | Report as Critical finding, REVISE verdict |
| Scope mismatch | Report in findings, note for coordinator |
| Timeout approaching | Output current findings with "PARTIAL" status |


@@ -0,0 +1,122 @@
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS
1. Read shared discoveries: {session_folder}/discoveries.ndjson (if exists, skip if not)
2. Read project context: .workflow/project-tech.json (if exists)
3. Read task schema: .codex/skills/team-perf-opt/schemas/tasks-schema.md
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Description**: {description}
**Role**: {role}
**Bottleneck Type**: {bottleneck_type}
**Priority**: {priority}
**Target Files**: {target_files}
### Previous Tasks' Findings (Context)
{prev_context}
---
## Execution Protocol
1. **Read discoveries**: Load {session_folder}/discoveries.ndjson for shared exploration findings
2. **Use context**: Apply previous tasks' findings from prev_context above
3. **Execute by role**:
**If role = profiler**:
- Detect project type by scanning for framework markers:
- Frontend (React/Vue/Angular): render time, bundle size, FCP/LCP/CLS
- Backend Node (Express/Fastify/NestJS): CPU hotspots, memory, DB queries
- Native/JVM Backend (Cargo/Go/Java): CPU, memory, GC tuning
- CLI Tool: startup time, throughput, memory peak
- Trace hot code paths and CPU hotspots within target scope
- Identify memory allocation patterns and potential leaks
- Measure I/O and network latency where applicable
- Collect quantified baseline metrics (timing, memory, throughput)
- Rank top 3-5 bottlenecks by severity (Critical/High/Medium)
- Record evidence: file paths, line numbers, measured values
- Write `{session_folder}/artifacts/baseline-metrics.json` (metrics)
- Write `{session_folder}/artifacts/bottleneck-report.md` (ranked bottlenecks)
**If role = strategist**:
- Read bottleneck report and baseline from {session_folder}/artifacts/
- For each bottleneck, select optimization strategy by type:
- CPU: algorithm optimization, memoization, caching, worker threads
- MEMORY: pool reuse, lazy init, WeakRef, scope cleanup
- IO: batching, async pipelines, streaming, connection pooling
- NETWORK: request coalescing, compression, CDN, prefetching
- RENDERING: virtualization, memoization, CSS containment, code splitting
- DATABASE: index optimization, query rewriting, caching layer
- Prioritize by impact/effort: P0 (high impact+low effort) to P3
- Assign unique OPT-IDs (OPT-001, 002, ...) with non-overlapping file targets
- Define measurable success criteria (target metric value or improvement %)
- Write `{session_folder}/artifacts/optimization-plan.md`
**If role = optimizer**:
- Read optimization plan from {session_folder}/artifacts/optimization-plan.md
- Apply optimizations in priority order (P0 first)
- Preserve existing behavior -- optimization must not break functionality
- Make minimal, focused changes per optimization
- Add comments only where optimization logic is non-obvious
- Preserve existing code style and conventions
**If role = benchmarker**:
- Read baseline from {session_folder}/artifacts/baseline-metrics.json
- Read plan from {session_folder}/artifacts/optimization-plan.md
- Run benchmarks matching detected project type:
- Frontend: bundle size, render performance
- Backend: endpoint response times, memory under load, DB query times
- CLI: execution time, memory peak, throughput
- Run test suite to verify no regressions
- Collect post-optimization metrics matching baseline format
- Calculate improvement percentages per metric
- Compare against plan success criteria
- Write `{session_folder}/artifacts/benchmark-results.json`
- Set verdict: PASS (meets criteria) / WARN (partial) / FAIL (regression or criteria not met)
**If role = reviewer**:
- Read plan from {session_folder}/artifacts/optimization-plan.md
- Review changed files across 5 dimensions:
- Correctness: logic errors, race conditions, null safety
- Side effects: unintended behavior changes, API contract breaks
- Maintainability: code clarity, complexity increase, naming
- Regression risk: impact on unrelated code paths
- Best practices: idiomatic patterns, no optimization anti-patterns
- Write `{session_folder}/artifacts/review-report.md`
- Set verdict: APPROVE / REVISE / REJECT
4. **Share discoveries**: Append exploration findings to shared board:
```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> {session_folder}/discoveries.ndjson
```
5. **Report result**: Return JSON via report_agent_job_result
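The benchmarker's PASS/WARN/FAIL rule above can be sketched as follows, assuming lower-is-better metrics and a `{metric: target}` success-criteria map (align both with the actual artifact schemas):

```python
# Sketch of the benchmarker verdict: FAIL on any regression, PASS when
# every success criterion is met, WARN when only some are.
def benchmark_verdict(baseline: dict, after: dict, targets: dict) -> str:
    # Regression: a metric got worse than its baseline (lower is better).
    regressed = [m for m, b in baseline.items() if m in after and after[m] > b]
    if regressed:
        return "FAIL"
    met = [m for m, t in targets.items() if after.get(m, float("inf")) <= t]
    if len(met) == len(targets):
        return "PASS"
    return "WARN" if met else "FAIL"
```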
### Discovery Types to Share
- `bottleneck_found`: `{type, location, severity, description}` -- Bottleneck identified
- `hotspot_found`: `{file, function, cpu_pct, description}` -- CPU hotspot
- `memory_issue`: `{file, type, size_mb, description}` -- Memory problem
- `io_issue`: `{operation, latency_ms, description}` -- I/O issue
- `db_issue`: `{query, latency_ms, description}` -- Database issue
- `file_modified`: `{file, change, lines_added}` -- File change recorded
- `metric_measured`: `{metric, value, unit, context}` -- Metric measured
- `pattern_found`: `{pattern_name, location, description}` -- Pattern identified
- `artifact_produced`: `{name, path, producer, type}` -- Deliverable created
---
## Output (report_agent_job_result)
Return JSON:
{
"id": "{id}",
"status": "completed" | "failed",
"findings": "Key discoveries and implementation notes (max 500 chars)",
"verdict": "PASS|WARN|FAIL|APPROVE|REVISE|REJECT or empty",
"artifacts_produced": "semicolon-separated artifact paths",
"error": ""
}


@@ -0,0 +1,174 @@
# Team Performance Optimization -- CSV Schema
## Master CSV: tasks.csv
### Column Definitions
#### Input Columns (Set by Decomposer)
| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier (PREFIX-NNN) | `"PROFILE-001"` |
| `title` | string | Yes | Short task title | `"Profile performance"` |
| `description` | string | Yes | Detailed task description (self-contained) with goal, inputs, outputs, success criteria | `"Profile application performance..."` |
| `role` | enum | Yes | Worker role: `profiler`, `strategist`, `optimizer`, `benchmarker`, `reviewer` | `"profiler"` |
| `bottleneck_type` | string | No | Performance bottleneck category: CPU, MEMORY, IO, NETWORK, RENDERING, DATABASE | `"CPU"` |
| `priority` | enum | No | P0 (Critical), P1 (High), P2 (Medium), P3 (Low) | `"P0"` |
| `target_files` | string | No | Semicolon-separated file paths to focus on | `"src/services/DataProcessor.ts"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"PROFILE-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"PROFILE-001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |
#### Computed Columns (Set by Wave Engine)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[PROFILE-001] Found 3 CPU hotspots..."` |
#### Output Columns (Set by Agent)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` -> `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Found 3 CPU hotspots, 1 memory leak..."` |
| `verdict` | string | Benchmark/review verdict: PASS, WARN, FAIL, APPROVE, REVISE, REJECT | `"PASS"` |
| `artifacts_produced` | string | Semicolon-separated paths of produced artifacts | `"artifacts/bottleneck-report.md"` |
| `error` | string | Error message if failed | `""` |
---
### exec_mode Values
| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |
Interactive tasks appear in master CSV for dependency tracking but are NOT included in wave-{N}.csv files.
---
### Role Prefix Mapping
| Role | Prefix | Stage | Responsibility |
|------|--------|-------|----------------|
| profiler | PROFILE | 1 | Performance profiling, baseline metrics, bottleneck identification |
| strategist | STRATEGY | 2 | Optimization plan design, strategy selection, prioritization |
| optimizer | IMPL / FIX | 3 | Code implementation, optimization application, targeted fixes |
| benchmarker | BENCH | 4 | Benchmark execution, before/after comparison, regression detection |
| reviewer | REVIEW | 4 | Code review for correctness, side effects, regression risks |
---
### Example Data
```csv
id,title,description,role,bottleneck_type,priority,target_files,deps,context_from,exec_mode,wave,status,findings,verdict,artifacts_produced,error
"PROFILE-001","Profile performance","PURPOSE: Profile application performance to identify bottlenecks\nTASK:\n- Detect project type (frontend/backend/CLI)\n- Trace hot code paths and CPU hotspots\n- Identify memory allocation patterns and leaks\n- Measure I/O and network latency\n- Collect quantified baseline metrics\nINPUT: Codebase under target scope\nOUTPUT: artifacts/baseline-metrics.json + artifacts/bottleneck-report.md\nSUCCESS: Ranked bottleneck list with severity, baseline metrics collected\nSESSION: .workflow/.csv-wave/perf-example-20260308","profiler","","","","","","csv-wave","1","pending","","","",""
"STRATEGY-001","Design optimization plan","PURPOSE: Design prioritized optimization plan from bottleneck report\nTASK:\n- For each bottleneck, select optimization strategy\n- Prioritize by impact/effort ratio (P0-P3)\n- Define measurable success criteria per optimization\n- Assign unique OPT-IDs with non-overlapping file targets\nINPUT: artifacts/bottleneck-report.md + artifacts/baseline-metrics.json\nOUTPUT: artifacts/optimization-plan.md\nSUCCESS: Prioritized plan with self-contained OPT blocks\nSESSION: .workflow/.csv-wave/perf-example-20260308","strategist","","","","PROFILE-001","PROFILE-001","csv-wave","2","pending","","","",""
"IMPL-001","Implement optimizations","PURPOSE: Implement performance optimizations per plan\nTASK:\n- Apply optimizations in priority order (P0 first)\n- Preserve existing behavior\n- Make minimal, focused changes\nINPUT: artifacts/optimization-plan.md\nOUTPUT: Modified source files\nSUCCESS: All planned optimizations applied, no functionality regressions\nSESSION: .workflow/.csv-wave/perf-example-20260308","optimizer","","","","STRATEGY-001","STRATEGY-001","csv-wave","3","pending","","","",""
"BENCH-001","Benchmark improvements","PURPOSE: Benchmark before/after optimization metrics\nTASK:\n- Run benchmarks matching detected project type\n- Compare post-optimization metrics vs baseline\n- Calculate improvement percentages\n- Detect any regressions\nINPUT: artifacts/baseline-metrics.json + artifacts/optimization-plan.md\nOUTPUT: artifacts/benchmark-results.json\nSUCCESS: All target improvements met, no regressions\nSESSION: .workflow/.csv-wave/perf-example-20260308","benchmarker","","","","IMPL-001","IMPL-001","csv-wave","4","pending","","","",""
"REVIEW-001","Review optimization code","PURPOSE: Review optimization changes for correctness and quality\nTASK:\n- Correctness: logic errors, race conditions, null safety\n- Side effects: unintended behavior changes, API breaks\n- Maintainability: code clarity, complexity, naming\n- Regression risk: impact on unrelated code paths\n- Best practices: idiomatic patterns, no anti-patterns\nINPUT: artifacts/optimization-plan.md + changed files\nOUTPUT: artifacts/review-report.md\nSUCCESS: APPROVE verdict (no Critical/High findings)\nSESSION: .workflow/.csv-wave/perf-example-20260308","reviewer","","","","IMPL-001","IMPL-001","csv-wave","4","pending","","","",""
```
---
### Column Lifecycle
```
Decomposer (Phase 1)    Wave Engine (Phase 2)   Agent (Execution)
--------------------    --------------------    -----------------
id ---------------->    id ---------------->    id
title ------------->    title ------------->    (reads)
description ------->    description ------->    (reads)
role -------------->    role -------------->    (reads)
bottleneck_type --->    bottleneck_type --->    (reads)
priority ---------->    priority ---------->    (reads)
target_files ------>    target_files ------>    (reads)
deps -------------->    deps -------------->    (reads)
context_from ------>    context_from ------>    (reads)
exec_mode --------->    exec_mode --------->    (reads)
                        wave -------------->    (reads)
                        prev_context ------>    (reads)
                                                status
                                                findings
                                                verdict
                                                artifacts_produced
                                                error
```
---
## Output Schema (JSON)
Agent output via `report_agent_job_result` (csv-wave tasks):
```json
{
"id": "PROFILE-001",
"status": "completed",
"findings": "Found 3 CPU hotspots: O(n^2) in DataProcessor.processRecords (Critical), unoptimized regex in Validator.check (High), synchronous file reads in ConfigLoader (Medium). Memory baseline: 145MB peak, 2 potential leak sites.",
"verdict": "",
"artifacts_produced": "artifacts/baseline-metrics.json;artifacts/bottleneck-report.md",
"error": ""
}
```
Interactive tasks report their results as structured text or JSON written to `interactive/{id}-result.json`.
---
## Discovery Types
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `bottleneck_found` | `data.location` | `{type, location, severity, description}` | Performance bottleneck identified |
| `hotspot_found` | `data.file+data.function` | `{file, function, cpu_pct, description}` | CPU hotspot detected |
| `memory_issue` | `data.file+data.type` | `{file, type, size_mb, description}` | Memory leak or bloat |
| `io_issue` | `data.operation` | `{operation, latency_ms, description}` | I/O performance issue |
| `db_issue` | `data.query` | `{query, latency_ms, description}` | Database performance issue |
| `file_modified` | `data.file` | `{file, change, lines_added}` | File change recorded |
| `metric_measured` | `data.metric+data.context` | `{metric, value, unit, context}` | Performance metric measured |
| `pattern_found` | `data.pattern_name+data.location` | `{pattern_name, location, description}` | Code pattern identified |
| `artifact_produced` | `data.path` | `{name, path, producer, type}` | Deliverable created |
### Discovery NDJSON Format
```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"PROFILE-001","type":"bottleneck_found","data":{"type":"CPU","location":"src/services/DataProcessor.ts:145","severity":"Critical","description":"O(n^2) nested loop in processRecords, 850ms for 10k records"}}
{"ts":"2026-03-08T10:01:00Z","worker":"PROFILE-001","type":"hotspot_found","data":{"file":"src/services/DataProcessor.ts","function":"processRecords","cpu_pct":42,"description":"Accounts for 42% of CPU time in profiling run"}}
{"ts":"2026-03-08T10:02:00Z","worker":"PROFILE-001","type":"metric_measured","data":{"metric":"response_time_p95","value":1250,"unit":"ms","context":"GET /api/dashboard"}}
{"ts":"2026-03-08T10:15:00Z","worker":"IMPL-001","type":"file_modified","data":{"file":"src/services/DataProcessor.ts","change":"Replaced O(n^2) with Map lookup O(n)","lines_added":12}}
{"ts":"2026-03-08T10:25:00Z","worker":"BENCH-001","type":"metric_measured","data":{"metric":"response_time_p95","value":380,"unit":"ms","context":"GET /api/dashboard (after optimization)"}}
```
> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
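A consumer can collapse the shared board using the dedup keys from the table above. A minimal sketch (only a subset of discovery types shown) that keeps the most recent record per key:

```python
import json

# Sketch: deduplicate discoveries.ndjson lines by (type, dedup-key fields),
# letting later records overwrite earlier ones. Extend DEDUP_KEYS to cover
# all types from the table; unknown types dedup on type alone here.
DEDUP_KEYS = {
    "bottleneck_found": ("location",),
    "hotspot_found": ("file", "function"),
    "memory_issue": ("file", "type"),
    "metric_measured": ("metric", "context"),
    "artifact_produced": ("path",),
}

def dedup(lines: list[str]) -> list[dict]:
    seen = {}
    for line in lines:
        rec = json.loads(line)
        fields = DEDUP_KEYS.get(rec["type"], ())
        key = (rec["type"],) + tuple(rec["data"].get(f) for f in fields)
        seen[key] = rec  # later records overwrite earlier ones
    return list(seen.values())
```

Note that the two `metric_measured` records in the example above survive deduplication, because their `context` values differ (before vs after optimization).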
---
## Cross-Mechanism Context Flow
| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |
---
## Validation Rules
| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Role valid | role in {profiler, strategist, optimizer, benchmarker, reviewer} | "Invalid role: {role}" |
| Verdict enum | verdict in {PASS, WARN, FAIL, APPROVE, REVISE, REJECT, ""} | "Invalid verdict: {verdict}" |
| Cross-mechanism deps | Interactive to CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |
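The dependency checks and wave computation implied by these rules can be sketched as a Kahn-style layering, where a task's wave is one more than the maximum wave of its dependencies:

```python
# Sketch: validate deps and assign 1-based wave numbers. Tasks are dicts
# with "id" and a semicolon-separated "deps" string, as in tasks.csv.

def assign_waves(tasks: list[dict]) -> dict:
    deps = {t["id"]: [d for d in t.get("deps", "").split(";") if d]
            for t in tasks}
    ids = set(deps)
    for tid, ds in deps.items():
        for d in ds:
            if d not in ids:
                raise ValueError(f"Unknown dependency: {d}")
            if d == tid:
                raise ValueError(f"Self-dependency: {tid}")
    waves, remaining = {}, set(ids)
    while remaining:
        ready = [t for t in remaining if all(d in waves for d in deps[t])]
        if not ready:  # nothing schedulable left -> cycle
            raise ValueError(
                f"Circular dependency detected involving: {sorted(remaining)}")
        for t in ready:
            waves[t] = 1 + max((waves[d] for d in deps[t]), default=0)
        remaining -= set(ready)
    return waves
```

Running this over the example data yields waves 1-4: PROFILE-001 alone in wave 1, and BENCH-001 alongside REVIEW-001 in wave 4.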