Add unit tests for various components and stores in the terminal dashboard

- Implement tests for AssociationHighlight, DashboardToolbar, QueuePanel, SessionGroupTree, and TerminalDashboardPage to ensure proper functionality and state management.
- Create tests for cliSessionStore, issueQueueIntegrationStore, queueExecutionStore, queueSchedulerStore, sessionManagerStore, and terminalGridStore to validate state resets and workspace scoping.
- Mock necessary dependencies and state management hooks to isolate tests and ensure accurate behavior.
catlog22
2026-03-08 21:38:20 +08:00
parent 9aa07e8d01
commit 62d8aa3623
157 changed files with 36544 additions and 71 deletions


@@ -0,0 +1,495 @@
---
name: team-review
description: Multi-agent code review pipeline with scanner, reviewer, and fixer roles. Executes toolchain + LLM scan, deep analysis with root cause enrichment, and automated fixes with rollback-on-failure.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] [--full|--fix|-q] [--dimensions=sec,cor,prf,mnt] \"target path or pattern\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, AskUserQuestion
---
# Team Review
## Auto Mode
When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.
## Usage
```bash
$team-review "src/auth/**/*.ts"
$team-review -c 2 --full "src/components"
$team-review -y --dimensions=sec,cor "src/api"
$team-review --continue "RV-auth-review-2026-03-08"
$team-review -q "src/utils"
$team-review --fix "src/auth/login.ts"
```
**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume existing session
- `--full`: Enable scan + review + fix pipeline
- `--fix`: Fix-only mode (skip scan/review)
- `-q, --quick`: Quick scan only
- `--dimensions=sec,cor,prf,mnt`: Custom dimensions (security, correctness, performance, maintainability)
**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)
---
## Overview
Orchestrate multi-agent code review with three specialized roles: scanner (toolchain + LLM semantic scan), reviewer (deep analysis with root cause enrichment), and fixer (automated fixes with rollback-on-failure). Supports 4-dimension analysis: security (SEC), correctness (COR), performance (PRF), maintainability (MNT).
**Execution Model**: Hybrid — CSV wave pipeline (primary) + individual agent spawn (secondary)
```
┌─────────────────────────────────────────────────────────────────────────┐
│ Team Review WORKFLOW │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Phase 0: Pre-Wave Interactive │
│ ├─ Parse arguments and detect pipeline mode │
│ ├─ Validate target path and resolve file patterns │
│ └─ Output: refined requirements for decomposition │
│ │
│ Phase 1: Requirement → CSV + Classification │
│ ├─ Generate task breakdown based on pipeline mode │
│ ├─ Create scan/review/fix tasks with dependencies │
│ ├─ Classify tasks: csv-wave (scanner, reviewer) | interactive (fixer)│
│ ├─ Compute dependency waves (topological sort → depth grouping) │
│ ├─ Generate tasks.csv with wave + exec_mode columns │
│ └─ User validates task breakdown (skip if -y) │
│ │
│ Phase 2: Wave Execution Engine (Extended) │
│ ├─ For each wave (1..N): │
│ │ ├─ Execute pre-wave interactive tasks (if any) │
│ │ ├─ Build wave CSV (filter csv-wave tasks for this wave) │
│ │ ├─ Inject previous findings into prev_context column │
│ │ ├─ spawn_agents_on_csv(wave CSV) │
│ │ ├─ Execute post-wave interactive tasks (if any) │
│ │ ├─ Merge all results into master tasks.csv │
│ │ └─ Check: any failed? → skip dependents │
│ └─ discoveries.ndjson shared across all modes (append-only) │
│ │
│ Phase 3: Post-Wave Interactive │
│ ├─ Generate final review report and fix summary │
│ └─ Final aggregation / report │
│ │
│ Phase 4: Results Aggregation │
│ ├─ Export final results.csv │
│ ├─ Generate context.md with all findings │
│ ├─ Display summary: completed/failed/skipped per wave │
│ └─ Offer: view results | retry failed | done │
│ │
└─────────────────────────────────────────────────────────────────────────┘
```
---
## Task Classification Rules
Each task is classified by `exec_mode`:
| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, clarification, inline utility |
**Classification Decision**:
| Task Property | Classification |
|---------------|---------------|
| Scanner task (toolchain + LLM scan) | `csv-wave` |
| Reviewer task (deep analysis) | `csv-wave` |
| Fixer task (code modification with rollback) | `interactive` |
---
## CSV Schema
### tasks.csv (Master State)
```csv
id,title,description,deps,context_from,exec_mode,dimension,target,wave,status,findings,error
1,Scan codebase,Run toolchain + LLM scan on target files,,,"csv-wave","sec,cor,prf,mnt","src/**/*.ts",1,pending,"",""
2,Review findings,Deep analysis with root cause enrichment,1,1,"csv-wave","sec,cor,prf,mnt","scan-results.json",2,pending,"",""
3,Fix issues,Apply fixes with rollback-on-failure,2,2,"interactive","","review-report.json",3,pending,"",""
```
**Columns**:
| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (string) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `dimension` | Input | Review dimensions (sec,cor,prf,mnt) |
| `target` | Input | Target path or pattern |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` / `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `error` | Output | Error message if failed (empty if success) |
### Per-Wave CSV (Temporary)
Each wave generates a temporary `wave-{N}.csv` with extra `prev_context` column (csv-wave tasks only).
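Because quoted fields like `dimension` and `findings` contain commas, rows must be read with a quote-aware parser rather than a naive `split(',')`. A minimal sketch (illustrative only, not tied to any particular CSV library):

```javascript
// Parse one CSV line, honoring double-quoted fields and "" escapes,
// so quoted values like "sec,cor,prf,mnt" stay intact as one field.
function parseCsvLine(line) {
  const fields = []
  let field = ''
  let inQuotes = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { field += '"'; i++ } // escaped quote
      else if (ch === '"') inQuotes = false
      else field += ch
    } else if (ch === '"') {
      inQuotes = true
    } else if (ch === ',') {
      fields.push(field); field = ''
    } else {
      field += ch
    }
  }
  fields.push(field)
  return fields
}
```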
---
## Agent Registry (Interactive Agents)
| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| fixer | agents/fixer.md | 2.3 | Apply fixes with rollback-on-failure | post-wave |
> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent `.md` file** to reload its instructions.
---
## Output Artifacts
| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state — all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `interactive/fixer-result.json` | Results from fixer task | Created per interactive task |
| `agents/registry.json` | Active interactive agent tracking | Updated on spawn/close |
---
## Session Structure
```
.workflow/.csv-wave/{session-id}/
├── tasks.csv # Master state (all tasks, both modes)
├── results.csv # Final results export
├── discoveries.ndjson # Shared discovery board (all agents)
├── context.md # Human-readable report
├── wave-{N}.csv # Temporary per-wave input (csv-wave only)
├── interactive/ # Interactive task artifacts
│ ├── fixer-result.json # Per-task results
│ └── cache-index.json # Shared exploration cache
└── agents/
└── registry.json # Active interactive agent tracking
```
---
## Implementation
### Session Initialization
```javascript
// Parse arguments
const args = parseArguments($ARGUMENTS)
const AUTO_YES = args.yes || args.y || false
const CONCURRENCY = args.concurrency || args.c || 3
const CONTINUE_SESSION = args.continue || null
const MODE = args.full ? 'full' : args.fix ? 'fix-only' : args.quick || args.q ? 'quick' : 'default'
const DIMENSIONS = args.dimensions || 'sec,cor,prf,mnt'
const TARGET = args._[0] || null
// Generate session ID (reuse the existing ID in --continue mode)
const sessionId = CONTINUE_SESSION || `RV-${slugify(TARGET || 'review')}-${formatDate(new Date(), 'yyyy-MM-dd')}`
const sessionDir = `.workflow/.csv-wave/${sessionId}`
// Create session structure — skipped when resuming, so an existing
// discovery board and registry are never truncated
if (!CONTINUE_SESSION) {
  Bash({ command: `mkdir -p "${sessionDir}/interactive" "${sessionDir}/agents" "${sessionDir}/instructions"` })
  Write(`${sessionDir}/discoveries.ndjson`, '')
  Write(`${sessionDir}/agents/registry.json`, JSON.stringify({ active: [], closed: [] }))
}
```
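The snippet above assumes a `parseArguments` helper; the command runtime may supply its own. A minimal sketch covering the flag grammar in Usage (helper name and alias table are illustrative assumptions):

```javascript
// Minimal flag parser for the team-review argument grammar.
// Long flags accept `--name=value`; bare flags become booleans;
// positionals (the target path/pattern) collect in `_`.
function parseArguments(argv) {
  const args = { _: [] }
  const aliases = { y: 'yes', c: 'concurrency', q: 'quick' }
  for (let i = 0; i < argv.length; i++) {
    const tok = argv[i]
    if (tok.startsWith('--')) {
      const [name, value] = tok.slice(2).split('=')
      if (value !== undefined) args[name] = value
      else if (name === 'concurrency' || name === 'continue') args[name] = argv[++i] // value flags
      else args[name] = true
    } else if (tok.startsWith('-')) {
      const name = aliases[tok.slice(1)] || tok.slice(1)
      args[name] = name === 'concurrency' ? Number(argv[++i]) : true
    } else {
      args._.push(tok)
    }
  }
  return args
}
```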
---
### Phase 0: Pre-Wave Interactive
**Objective**: Parse arguments, validate target, detect pipeline mode
**Execution**:
1. Parse command-line arguments for mode flags (--full, --fix, -q)
2. Extract target path/pattern from arguments
3. Validate target exists and resolve to file list
4. Detect pipeline mode based on flags
5. Store configuration in session metadata
**Success Criteria**:
- Refined requirements available for Phase 1 decomposition
- Interactive agents closed, results stored
---
### Phase 1: Requirement → CSV + Classification
**Objective**: Generate task breakdown based on pipeline mode and create master CSV
**Decomposition Rules**:
| Mode | Tasks Generated |
|------|----------------|
| quick | SCAN-001 (quick scan only) |
| default | SCAN-001 → REV-001 |
| full | SCAN-001 → REV-001 → FIX-001 |
| fix-only | FIX-001 (requires existing review report) |
**Classification Rules**:
- Scanner tasks: `exec_mode=csv-wave` (one-shot toolchain + LLM scan)
- Reviewer tasks: `exec_mode=csv-wave` (one-shot deep analysis)
- Fixer tasks: `exec_mode=interactive` (multi-round with rollback)
**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).
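The wave computation can be sketched as follows — a self-contained illustration over the `id`/`deps` columns of tasks.csv (the in-session implementation may differ):

```javascript
// Kahn's BFS topological sort with depth tracking: a task's wave is
// 1 + the max wave of its dependencies. Throws on circular dependencies.
function computeWaves(tasks) {
  const indegree = new Map(tasks.map(t => [t.id, 0]))
  const dependents = new Map(tasks.map(t => [t.id, []]))
  for (const t of tasks) {
    for (const dep of (t.deps ? t.deps.split(';') : [])) {
      indegree.set(t.id, indegree.get(t.id) + 1)
      dependents.get(dep).push(t.id)
    }
  }
  const queue = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id)
  const wave = new Map(queue.map(id => [id, 1]))
  let visited = 0
  while (queue.length > 0) {
    const id = queue.shift()
    visited++
    for (const next of dependents.get(id)) {
      wave.set(next, Math.max(wave.get(next) || 1, wave.get(id) + 1))
      indegree.set(next, indegree.get(next) - 1)
      if (indegree.get(next) === 0) queue.push(next)
    }
  }
  if (visited !== tasks.length) throw new Error('Circular dependency detected')
  for (const t of tasks) t.wave = wave.get(t.id)
  return tasks
}
```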
**User Validation**: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).
**Success Criteria**:
- tasks.csv created with valid schema, wave, and exec_mode assignments
- No circular dependencies
- User approved (or AUTO_YES)
---
### Phase 2: Wave Execution Engine (Extended)
**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.
```javascript
// Load master CSV (normalize wave to a number — CSV values parse as strings,
// so the strict comparisons below would otherwise never match)
const masterCSV = readCSV(`${sessionDir}/tasks.csv`)
masterCSV.forEach(t => { t.wave = Number(t.wave) })
const maxWave = Math.max(...masterCSV.map(t => t.wave))
for (let wave = 1; wave <= maxWave; wave++) {
// Execute pre-wave interactive tasks
const preWaveTasks = masterCSV.filter(t =>
t.wave === wave && t.exec_mode === 'interactive' && t.position === 'pre-wave'
)
for (const task of preWaveTasks) {
const agent = spawn_agent({
message: buildInteractivePrompt(task, sessionDir)
})
const result = wait({ ids: [agent], timeout_ms: 600000 })
close_agent({ id: agent })
updateTaskStatus(task.id, result)
}
// Build wave CSV (csv-wave tasks only)
const waveTasks = masterCSV.filter(t => t.wave === wave && t.exec_mode === 'csv-wave')
if (waveTasks.length > 0) {
// Inject prev_context from context_from tasks
for (const task of waveTasks) {
if (task.context_from) {
const contextIds = task.context_from.split(';')
const contextFindings = masterCSV
.filter(t => contextIds.includes(t.id))
.map(t => `[Task ${t.id}] ${t.findings}`)
.join('\n\n')
task.prev_context = contextFindings
}
}
// Write wave CSV
writeCSV(`${sessionDir}/wave-${wave}.csv`, waveTasks)
// Execute wave
spawn_agents_on_csv({
csv_path: `${sessionDir}/wave-${wave}.csv`,
instruction_path: `${sessionDir}/instructions/agent-instruction.md`,
concurrency: CONCURRENCY
})
// Merge results back to master
const waveResults = readCSV(`${sessionDir}/wave-${wave}.csv`)
for (const result of waveResults) {
const masterTask = masterCSV.find(t => t.id === result.id)
Object.assign(masterTask, result)
}
writeCSV(`${sessionDir}/tasks.csv`, masterCSV)
// Cleanup wave CSV
Bash({ command: `rm "${sessionDir}/wave-${wave}.csv"` })
}
// Execute post-wave interactive tasks
const postWaveTasks = masterCSV.filter(t =>
t.wave === wave && t.exec_mode === 'interactive' && t.position === 'post-wave'
)
for (const task of postWaveTasks) {
const agent = spawn_agent({
message: buildInteractivePrompt(task, sessionDir)
})
const result = wait({ ids: [agent], timeout_ms: 600000 })
close_agent({ id: agent })
updateTaskStatus(task.id, result)
}
// Check for failures and skip dependents
const failedTasks = masterCSV.filter(t => t.wave === wave && t.status === 'failed')
if (failedTasks.length > 0) {
skipDependents(masterCSV, failedTasks)
}
}
```
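The `skipDependents` helper referenced above must cascade transitively, so that a task blocked by a skipped task is itself skipped. A sketch:

```javascript
// Mark every pending task whose dependency chain includes a failed or
// skipped task as 'skipped'. Iterates until no new skips appear, so the
// skip cascades through the whole dependency graph, not just one level.
function skipDependents(tasks, failedTasks) {
  const blocked = new Set(failedTasks.map(t => t.id))
  let changed = true
  while (changed) {
    changed = false
    for (const t of tasks) {
      if (t.status !== 'pending') continue
      const deps = t.deps ? t.deps.split(';') : []
      if (deps.some(d => blocked.has(d))) {
        t.status = 'skipped'
        t.error = 'Skipped: dependency failed'
        blocked.add(t.id)
        changed = true
      }
    }
  }
}
```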
**Success Criteria**:
- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when predecessor failed
- discoveries.ndjson accumulated across all waves and mechanisms
- Interactive agent lifecycle tracked in registry.json
---
### Phase 3: Post-Wave Interactive
**Objective**: Generate final review report and fix summary
**Execution**:
1. Aggregate all findings from scan and review tasks
2. Generate comprehensive review report with metrics
3. If fixer ran, generate fix summary with success/failure rates
4. Write final reports to session directory
**Success Criteria**:
- Post-wave interactive processing complete
- Interactive agents closed, results stored
---
### Phase 4: Results Aggregation
**Objective**: Generate final results and human-readable report.
```javascript
// Export results.csv
const masterCSV = readCSV(`${sessionDir}/tasks.csv`)
writeCSV(`${sessionDir}/results.csv`, masterCSV)
// Generate context.md
const contextMd = generateContextReport(masterCSV, sessionDir)
Write(`${sessionDir}/context.md`, contextMd)
// Cleanup interactive agents
const registry = JSON.parse(Read(`${sessionDir}/agents/registry.json`))
for (const agent of registry.active) {
close_agent({ id: agent.id })
}
Write(`${sessionDir}/agents/registry.json`, JSON.stringify({ active: [], closed: registry.closed }))
// Display summary
const summary = {
total: masterCSV.length,
completed: masterCSV.filter(t => t.status === 'completed').length,
failed: masterCSV.filter(t => t.status === 'failed').length,
skipped: masterCSV.filter(t => t.status === 'skipped').length
}
console.log(`Pipeline complete: ${summary.completed}/${summary.total} tasks completed`)
```
**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated
- All interactive agents closed (registry.json cleanup)
- Summary displayed to user
---
## Shared Discovery Board Protocol
**Discovery Types**:
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `finding` | `file+line+dimension` | `{dimension, file, line, severity, title}` | Code issue discovered by scanner |
| `root_cause` | `finding_id` | `{finding_id, description, related_findings[]}` | Root cause analysis from reviewer |
| `fix_applied` | `file+line` | `{file, line, fix_strategy, status}` | Fix application result from fixer |
| `pattern` | `pattern_name` | `{pattern, files[], occurrences}` | Code pattern identified across files |
**Discovery NDJSON Format**:
```jsonl
{"ts":"2026-03-08T14:30:22Z","worker":"1","type":"finding","data":{"dimension":"sec","file":"src/auth.ts","line":42,"severity":"high","title":"SQL injection vulnerability"}}
{"ts":"2026-03-08T14:35:10Z","worker":"2","type":"root_cause","data":{"finding_id":"SEC-001","description":"Unsanitized user input in query","related_findings":["SEC-002"]}}
{"ts":"2026-03-08T14:40:05Z","worker":"3","type":"fix_applied","data":{"file":"src/auth.ts","line":42,"fix_strategy":"minimal","status":"fixed"}}
```
> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
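Per the error-handling policy below (malformed lines are ignored, valid entries kept), readers of the board should be tolerant. A sketch that also applies the `finding` dedup key from the table above:

```javascript
// Read a discovery board, skipping corrupt lines, and deduplicate
// 'finding' entries on the file+line+dimension key.
function readDiscoveries(ndjsonText) {
  const entries = []
  const seen = new Set()
  for (const line of ndjsonText.split('\n')) {
    if (!line.trim()) continue
    let entry
    try { entry = JSON.parse(line) } catch (e) { continue } // malformed line: ignore
    if (entry.type === 'finding') {
      const key = `${entry.data.file}:${entry.data.line}:${entry.data.dimension}`
      if (seen.has(key)) continue
      seen.add(key)
    }
    entries.push(entry)
  }
  return entries
}
```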
---
## Cross-Mechanism Context Bridging
### Interactive Result → CSV Task
When a pre-wave interactive task produces results needed by csv-wave tasks:
```javascript
// 1. Interactive result stored in file
const resultFile = `${sessionDir}/interactive/${taskId}-result.json`
// 2. Wave engine reads when building prev_context for csv-wave tasks
// If a csv-wave task has context_from referencing an interactive task:
// Read the interactive result file and include in prev_context
```
### CSV Result → Interactive Task
When a post-wave interactive task needs CSV wave results:
```javascript
// Option A: Include in spawn message
const csvFindings = readMasterCSV().filter(t => t.wave === currentWave && t.exec_mode === 'csv-wave')
const context = csvFindings.map(t => `## Task ${t.id}: ${t.title}\n${t.findings}`).join('\n\n')
spawn_agent({
message: `...\n### Wave ${currentWave} Results\n${context}\n...`
})
// Option B: Inject via send_input (if agent already running)
send_input({
id: activeAgent,
message: `## Wave ${currentWave} Results\n${context}\n\nProceed with analysis.`
})
```
---
## Error Handling
| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via `send_input`; if it still times out, close the agent and mark failed |
| Interactive agent failed | Mark as failed, skip dependents |
| Pre-wave interactive failed | Skip dependent csv-wave tasks in same wave |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Lifecycle leak | Cleanup all active agents via registry.json at end |
| Continue mode: no session found | List available sessions, prompt user to select |
| Target path invalid | AskUserQuestion for corrected path |
| Scanner finds 0 findings | Report clean, skip review + fix stages |
---
## Core Rules
1. **Start Immediately**: First action is session initialization, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when interaction pattern requires it
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson — both mechanisms share it
7. **Skip on Failure**: If a dependency failed, skip the dependent task (regardless of mechanism)
8. **Lifecycle Balance**: Every spawn_agent MUST have a matching close_agent (tracked in registry.json)
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped


@@ -0,0 +1,360 @@
# Fixer Agent
Fix code based on reviewed findings. Load manifest, plan fix groups, apply with rollback-on-failure, verify.
## Identity
- **Type**: `code-generation`
- **Role File**: `~/.codex/agents/fixer.md`
- **Responsibility**: Code modification with rollback-on-failure
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Produce structured output following template
- Include file:line references in findings
- Apply fixes using Edit tool in dependency order
- Run tests after each fix
- Rollback on test failure (no retry)
- Mark dependent fixes as skipped if prerequisite failed
### MUST NOT
- Skip the MANDATORY FIRST STEPS role loading
- Produce unstructured output
- Exceed defined scope boundaries
- Retry failed fixes (rollback and move on)
- Apply fixes without running tests
- Modify files outside fix scope
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | File I/O | Load fix manifest, review report, source files |
| `Write` | File I/O | Write fix plan, execution results, summary |
| `Edit` | File modification | Apply code fixes |
| `Bash` | Shell execution | Run tests, verification tools, git operations |
| `Glob` | File discovery | Find test files, source files |
| `Grep` | Content search | Search for patterns in code |
### Tool Usage Patterns
**Read Pattern**: Load context files before fixing
```
Read(".workflow/project-tech.json")
Read("<session>/fix/fix-manifest.json")
Read("<session>/review/review-report.json")
Read("<target-file>")
```
**Write Pattern**: Generate artifacts after processing
```
Write("<session>/fix/fix-plan.json", <plan>)
Write("<session>/fix/execution-results.json", <results>)
Write("<session>/fix/fix-summary.json", <summary>)
```
---
## Execution
### Phase 1: Context & Scope Resolution
**Objective**: Load fix manifest, review report, and determine fixable findings
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| Task description | Yes | Contains session path and input path |
| Fix manifest | Yes | <session>/fix/fix-manifest.json |
| Review report | Yes | <session>/review/review-report.json |
| Project tech | No | .workflow/project-tech.json |
**Steps**:
1. Extract session path and input path from task description
2. Load fix manifest (scope, source report path)
3. Load review report (findings with enrichment)
4. Filter fixable findings: severity in scope AND fix_strategy !== 'skip'
5. If 0 fixable → report complete immediately
6. Detect quick path: findings <= 5 AND no cross-file dependencies
7. Detect verification tools:
- tsc: tsconfig.json exists
- eslint: package.json contains eslint
- jest: package.json contains jest
- pytest: pyproject.toml exists
- semgrep: semgrep available
8. Load wisdom files from `<session>/wisdom/`
**Output**: Fixable findings list, quick_path flag, available verification tools
---
### Phase 2: Plan Fixes
**Objective**: Group findings, resolve dependencies, determine execution order
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| Fixable findings | Yes | From Phase 1 |
| Fix dependencies | Yes | From review report enrichment |
**Steps**:
1. Group findings by primary file
2. Merge groups with cross-file dependencies (union-find algorithm)
3. Topological sort within each group (respect fix_dependencies, append cycles at end)
4. Sort groups by max severity (critical first)
5. Determine execution path:
- quick_path: <=5 findings AND <=1 group → single agent
- standard: one agent per group, in execution_order
6. Write fix plan to `<session>/fix/fix-plan.json`:
```json
{
"plan_id": "<uuid>",
"quick_path": true|false,
"groups": [
{
"id": "group-1",
"files": ["src/auth.ts"],
"findings": ["SEC-001", "SEC-002"],
"max_severity": "critical"
}
],
"execution_order": ["group-1", "group-2"],
"total_findings": 10,
"total_groups": 2
}
```
**Output**: Fix plan with grouped findings and execution order
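The union-find merge in steps 1–2 can be sketched as follows (field names simplified to `file`/`fix_dependencies`; the review report nests location differently):

```javascript
// Group findings by primary file, then merge groups connected by
// cross-file fix dependencies using union-find with path compression.
function groupFindings(findings) {
  const parent = new Map()
  const find = x => {
    if (!parent.has(x)) parent.set(x, x)
    if (parent.get(x) !== x) parent.set(x, find(parent.get(x)))
    return parent.get(x)
  }
  const union = (a, b) => { parent.set(find(a), find(b)) }
  const fileOf = new Map(findings.map(f => [f.id, f.file]))
  // A dependency on a finding in another file links the two files
  for (const f of findings) {
    for (const dep of f.fix_dependencies || []) {
      if (fileOf.has(dep)) union(f.file, fileOf.get(dep))
    }
  }
  // Collect findings per merged root
  const groups = new Map()
  for (const f of findings) {
    const root = find(f.file)
    if (!groups.has(root)) groups.set(root, { files: new Set(), findings: [] })
    groups.get(root).files.add(f.file)
    groups.get(root).findings.push(f.id)
  }
  return [...groups.values()].map((g, i) => ({
    id: `group-${i + 1}`,
    files: [...g.files],
    findings: g.findings
  }))
}
```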
---
### Phase 3: Execute Fixes
**Objective**: Apply fixes with rollback-on-failure
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| Fix plan | Yes | From Phase 2 |
| Source files | Yes | Files to modify |
**Steps**:
**Quick path**: Single code-developer agent for all findings
**Standard path**: One code-developer agent per group, in execution_order
Agent prompt includes:
- Finding list (dependency-sorted)
- File contents (truncated 8K)
- Critical rules:
1. Apply each fix using Edit tool in order
2. After each fix, run related tests
3. Tests PASS → finding is "fixed"
4. Tests FAIL → `git checkout -- {file}` → mark "failed" → continue
5. No retry on failure. Rollback and move on
6. If finding depends on previously failed finding → mark "skipped"
Agent execution:
```javascript
const agent = spawn_agent({
message: `## TASK ASSIGNMENT
### MANDATORY FIRST STEPS
1. Read role definition: ~/.codex/agents/code-developer.md
---
## Fix Group: ${group.id}
**Files**: ${group.files.join(', ')}
**Findings**: ${group.findings.length}
### Findings (dependency-sorted):
${group.findings.map(f => `
- ID: ${f.id}
- Severity: ${f.severity}
- Location: ${f.location.file}:${f.location.line}
- Description: ${f.description}
- Fix Strategy: ${f.fix_strategy}
- Dependencies: ${f.fix_dependencies.join(', ')}
`).join('\n')}
### Critical Rules:
1. Apply each fix using Edit tool in order
2. After each fix, run related tests
3. Tests PASS → finding is "fixed"
4. Tests FAIL → git checkout -- {file} → mark "failed" → continue
5. No retry on failure. Rollback and move on
6. If finding depends on previously failed finding → mark "skipped"
### Output Format:
Return JSON:
{
"results": [
{"id": "SEC-001", "status": "fixed|failed|skipped", "file": "src/auth.ts", "error": ""}
]
}
`
})
const result = wait({ ids: [agent], timeout_ms: 600000 })
close_agent({ id: agent })
```
Parse agent response for structured JSON. Fallback: check git diff per file if no structured output.
Write execution results to `<session>/fix/execution-results.json`:
```json
{
"fixed": ["SEC-001", "COR-003"],
"failed": ["SEC-002"],
"skipped": ["SEC-004"]
}
```
**Output**: Execution results with fixed/failed/skipped findings
---
### Phase 4: Post-Fix Verification
**Objective**: Run verification tools on modified files
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| Execution results | Yes | From Phase 3 |
| Modified files | Yes | Files that were changed |
| Verification tools | Yes | From Phase 1 detection |
**Steps**:
1. Run available verification tools on modified files:
| Tool | Command | Pass Criteria |
|------|---------|---------------|
| tsc | `npx tsc --noEmit` | 0 errors |
| eslint | `npx eslint <files>` | 0 errors |
| jest | `npx jest --passWithNoTests` | Tests pass |
| pytest | `pytest --tb=short` | Tests pass |
| semgrep | `semgrep --config auto <files> --json` | 0 results |
2. If verification fails critically → rollback last batch
3. Write verification results to `<session>/fix/verify-results.json`
4. Generate fix summary:
```json
{
"fix_id": "<uuid>",
"fix_date": "<ISO8601>",
"scope": "critical,high",
"total": 10,
"fixed": 7,
"failed": 2,
"skipped": 1,
"fix_rate": 0.7,
"verification": {
"tsc": "pass",
"eslint": "pass",
"jest": "pass"
}
}
```
5. Generate human-readable summary in `<session>/fix/fix-summary.md`
6. Update `<session>/.msg/meta.json` with fix results
7. Contribute discoveries to `<session>/wisdom/` files
**Output**: Fix summary with verification results
---
## Inline Subagent Calls
This agent may spawn utility subagents during its execution:
### code-developer
**When**: After fix plan is ready
**Agent File**: ~/.codex/agents/code-developer.md
```javascript
const utility = spawn_agent({
message: `### MANDATORY FIRST STEPS
1. Read: ~/.codex/agents/code-developer.md
## Fix Group: {group.id}
[See Phase 3 prompt template above]
`
})
const result = wait({ ids: [utility], timeout_ms: 600000 })
close_agent({ id: utility })
// Parse result and update execution results
```
### Result Handling
| Result | Severity | Action |
|--------|----------|--------|
| Success | - | Integrate findings, continue |
| consensus_blocked | HIGH | Include in output with severity flag for orchestrator |
| consensus_blocked | MEDIUM | Include warning, continue |
| Timeout/Error | - | Continue without utility result, log warning |
---
## Structured Output Template
```
## Summary
- Fixed X/Y findings (Z% success rate)
- Failed: A findings (rolled back)
- Skipped: B findings (dependency failures)
## Findings
- SEC-001: Fixed SQL injection in src/auth.ts:42
- SEC-002: Failed to fix XSS (tests failed, rolled back)
- SEC-004: Skipped (depends on SEC-002)
## Verification Results
- tsc: PASS (0 errors)
- eslint: PASS (0 errors)
- jest: PASS (all tests passed)
## Modified Files
- src/auth.ts: 2 fixes applied
- src/utils/sanitize.ts: 1 fix applied
## Open Questions
1. SEC-002 fix caused test failures - manual review needed
2. Consider refactoring auth module for better security
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Input file not found | Report in Open Questions, continue with available data |
| Scope ambiguity | Report in Open Questions, proceed with reasonable assumption |
| Processing failure | Output partial results with clear status indicator |
| Timeout approaching | Output current findings with "PARTIAL" status |
| Fix manifest missing | ERROR, cannot proceed without manifest |
| Review report missing | ERROR, cannot proceed without review |
| All fixes failed | Report failure, include rollback details |
| Verification tool unavailable | Skip verification, warn in output |
| Git operations fail | Report error, manual intervention needed |


@@ -0,0 +1,102 @@
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS
1. Read shared discoveries: {session_folder}/discoveries.ndjson (if exists, skip if not)
2. Read project context: .workflow/project-tech.json (if exists)
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Description**: {description}
**Dimension**: {dimension}
**Target**: {target}
### Previous Tasks' Findings (Context)
{prev_context}
---
## Execution Protocol
1. **Read discoveries**: Load {session_folder}/discoveries.ndjson for shared exploration findings
2. **Use context**: Apply previous tasks' findings from prev_context above
3. **Execute**: Perform your assigned role (scanner or reviewer) following the role-specific instructions below
4. **Share discoveries**: Append exploration findings to shared board:
```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> {session_folder}/discoveries.ndjson
```
5. **Report result**: Return JSON via report_agent_job_result
### Role-Specific Instructions
**If you are a Scanner (SCAN-* task)**:
1. Extract session path and target from description
2. Resolve target files (glob pattern or directory → `**/*.{ts,tsx,js,jsx,py,go,java,rs}`)
3. If no source files found → report empty, complete task cleanly
4. Detect toolchain availability:
- tsc: `tsconfig.json` exists → COR dimension
- eslint: `.eslintrc*` or `eslint` in package.json → COR/MNT
- semgrep: `.semgrep.yml` exists → SEC dimension
- ruff: `pyproject.toml` + ruff available → SEC/COR/MNT
- mypy: mypy available + `pyproject.toml` → COR
- npmAudit: `package-lock.json` exists → SEC
5. Run detected tools in parallel via Bash backgrounding
6. Parse tool outputs into normalized findings with dimension, severity, file:line
7. Execute semantic scan via CLI: `ccw cli --tool gemini --mode analysis --rule analysis-review-code-quality`
8. Focus areas per dimension:
- SEC: Business logic vulnerabilities, privilege escalation, sensitive data flow, auth bypass
- COR: Logic errors, unhandled exception paths, state management bugs, race conditions
- PRF: Algorithm complexity, N+1 queries, unnecessary sync, memory leaks, missing caching
- MNT: Architectural coupling, abstraction leaks, convention violations, dead code
9. Merge toolchain + semantic findings, deduplicate (same file + line + dimension)
10. Assign dimension-prefixed IDs: SEC-001, COR-001, PRF-001, MNT-001
11. Write scan results to session directory
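Scanner steps 9–10 (merge, dedup, ID assignment) can be sketched as (finding shape simplified for illustration):

```javascript
// Merge toolchain + semantic findings, drop duplicates sharing the same
// file + line + dimension, and assign dimension-prefixed sequential IDs
// (SEC-001, COR-001, ...), toolchain findings taking precedence.
function mergeFindings(toolchainFindings, semanticFindings) {
  const seen = new Set()
  const counters = {}
  const merged = []
  for (const f of [...toolchainFindings, ...semanticFindings]) {
    const key = `${f.file}:${f.line}:${f.dimension}`
    if (seen.has(key)) continue
    seen.add(key)
    const prefix = f.dimension.toUpperCase()
    counters[prefix] = (counters[prefix] || 0) + 1
    merged.push({ ...f, id: `${prefix}-${String(counters[prefix]).padStart(3, '0')}` })
  }
  return merged
}
```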
**If you are a Reviewer (REV-* task)**:
1. Extract session path and input path from description
2. Load scan results from previous task (via prev_context or session directory)
3. If scan results empty → report clean, complete immediately
4. Triage findings into deep_analysis (critical/high/medium, max 15) and pass_through (remaining)
5. Split deep_analysis into domain groups:
- Group A: Security + Correctness → Root cause tracing, fix dependencies, blast radius
- Group B: Performance + Maintainability → Optimization approaches, refactor tradeoffs
6. Execute parallel CLI agents for enrichment: `ccw cli --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause`
7. Request 6 enrichment fields per finding:
- root_cause: {description, related_findings[], is_symptom}
- impact: {scope: low/medium/high, affected_files[], blast_radius}
- optimization: {approach, alternative, tradeoff}
- fix_strategy: minimal / refactor / skip
- fix_complexity: low / medium / high
- fix_dependencies: finding IDs that must be fixed first
8. Merge enriched + pass_through findings
9. Cross-correlate:
- Critical files: file appears in >=2 dimensions
- Root cause groups: cluster findings sharing related_findings
- Optimization suggestions: from root cause groups + standalone enriched findings
10. Compute metrics: by_dimension, by_severity, dimension_severity_matrix, fixable_count
11. Write review report to session directory
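The cross-correlation rules in step 9 might look like the sketch below. Field names such as `root_cause.related_findings` follow the enrichment schema above; the traversal itself (connected components over `related_findings` links) is an illustrative choice, not a mandated algorithm:

```python
from collections import defaultdict

def cross_correlate(findings):
    """Critical files: file flagged in >= 2 dimensions.
    Root cause groups: connected components over related_findings links."""
    dims_per_file = defaultdict(set)
    for f in findings:
        dims_per_file[f["file"]].add(f["dimension"])
    critical_files = sorted(p for p, d in dims_per_file.items() if len(d) >= 2)

    # Build an undirected adjacency from related_findings references.
    adj = defaultdict(set)
    for f in findings:
        for rel in f.get("root_cause", {}).get("related_findings", []):
            adj[f["id"]].add(rel)
            adj[rel].add(f["id"])

    # Depth-first walk to collect each cluster of linked findings.
    groups, visited = [], set()
    for fid in sorted(adj):
        if fid in visited:
            continue
        stack, comp = [fid], set()
        while stack:
            cur = stack.pop()
            if cur in visited:
                continue
            visited.add(cur)
            comp.add(cur)
            stack.extend(adj[cur] - visited)
        groups.append(sorted(comp))
    return {"critical_files": critical_files, "root_cause_groups": groups}
```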
### Discovery Types to Share
- `finding`: {dimension, file, line, severity, title} — Code issue discovered
- `root_cause`: {finding_id, description, related_findings[]} — Root cause analysis
- `pattern`: {pattern, files[], occurrences} — Code pattern identified
---
## Output (report_agent_job_result)
Return JSON:
```json
{
  "id": "{id}",
  "status": "completed" | "failed",
  "findings": "Key discoveries and implementation notes (max 500 chars)",
  "error": ""
}
```
**Scanner findings format**: "Found X security issues (Y critical, Z high), A correctness bugs, B performance issues, C maintainability concerns. Toolchain: [tool results]. LLM scan: [semantic issues]."
**Reviewer findings format**: "Analyzed X findings. Critical files: [files]. Root cause groups: [count]. Fixable: Y/X. Recommended fix scope: [scope]."

# Team Review — CSV Schema
## Master CSV: tasks.csv
### Column Definitions
#### Input Columns (Set by Decomposer)
| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier | `"1"` |
| `title` | string | Yes | Short task title | `"Scan codebase"` |
| `description` | string | Yes | Detailed task description (self-contained) | `"Run toolchain + LLM scan on src/**/*.ts"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"1;2"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"1"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |
| `dimension` | string | Yes | Review dimensions (comma-separated: sec,cor,prf,mnt) | `"sec,cor,prf,mnt"` |
| `target` | string | Yes | Target path or pattern for analysis | `"src/**/*.ts"` |
#### Computed Columns (Set by Wave Engine)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[Task 1] Scan found 15 issues..."` |
#### Output Columns (Set by Agent)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending``completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Found 15 security issues, 8 correctness bugs"` |
| `error` | string | Error message if failed | `""` |
---
### exec_mode Values
| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |
Interactive tasks appear in the master CSV for dependency tracking but are NOT included in wave-{N}.csv files.
---
### Example Data
```csv
id,title,description,deps,context_from,exec_mode,dimension,target,wave,status,findings,error
1,Scan codebase,Run toolchain + LLM scan on target files,,,"csv-wave","sec,cor,prf,mnt","src/**/*.ts",1,pending,"",""
2,Review findings,Deep analysis with root cause enrichment,1,1,"csv-wave","sec,cor,prf,mnt","scan-results.json",2,pending,"",""
3,Fix issues,Apply fixes with rollback-on-failure,2,2,"interactive","","review-report.json",3,pending,"",""
```
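The `wave` column above can be derived from the semicolon-separated `deps` column with a standard topological pass. This sketch uses Kahn's algorithm and is illustrative only; the wave engine's actual implementation may differ:

```python
from collections import defaultdict, deque

def assign_waves(tasks):
    """Compute 1-based wave numbers from semicolon-separated deps.

    A task's wave is 1 + max(wave of its deps). Raises ValueError if a
    cycle prevents the topological sort from completing."""
    deps = {t["id"]: [d for d in t.get("deps", "").split(";") if d] for t in tasks}
    indegree = {tid: len(ds) for tid, ds in deps.items()}
    dependents = defaultdict(list)
    for tid, ds in deps.items():
        for d in ds:
            dependents[d].append(tid)
    wave = {tid: 1 for tid in deps}
    queue = deque(tid for tid, deg in indegree.items() if deg == 0)
    done = 0
    while queue:
        cur = queue.popleft()
        done += 1
        for nxt in dependents[cur]:
            wave[nxt] = max(wave[nxt], wave[cur] + 1)
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if done != len(deps):
        raise ValueError("Circular dependency detected")
    return wave
```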
---
### Column Lifecycle
```
Decomposer (Phase 1) Wave Engine (Phase 2) Agent (Execution)
───────────────────── ──────────────────── ─────────────────
id ───────────► id ──────────► id
title ───────────► title ──────────► (reads)
description ───────────► description ──────────► (reads)
deps ───────────► deps ──────────► (reads)
context_from───────────► context_from──────────► (reads)
exec_mode ───────────► exec_mode ──────────► (reads)
dimension ───────────► dimension ──────────► (reads)
target ───────────► target ──────────► (reads)
wave ──────────► (reads)
prev_context ──────────► (reads)
status
findings
error
```
---
## Output Schema (JSON)
Agent output via `report_agent_job_result` (csv-wave tasks):
```json
{
"id": "1",
"status": "completed",
"findings": "Found 15 security issues (3 critical, 5 high, 7 medium), 8 correctness bugs, 4 performance issues, 12 maintainability concerns. Toolchain: tsc (5 errors), eslint (8 warnings), semgrep (3 vulnerabilities). LLM scan: 26 semantic issues.",
"error": ""
}
```
Interactive tasks write their output as structured text or JSON to `interactive/{id}-result.json`.
---
## Discovery Types
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `finding` | `file+line+dimension` | `{dimension, file, line, severity, title}` | Code issue discovered by scanner |
| `root_cause` | `finding_id` | `{finding_id, description, related_findings[]}` | Root cause analysis from reviewer |
| `fix_applied` | `file+line` | `{file, line, fix_strategy, status}` | Fix application result from fixer |
| `pattern` | `pattern_name` | `{pattern, files[], occurrences}` | Code pattern identified across files |
### Discovery NDJSON Format
```jsonl
{"ts":"2026-03-08T14:30:22Z","worker":"1","type":"finding","data":{"dimension":"sec","file":"src/auth.ts","line":42,"severity":"high","title":"SQL injection vulnerability"}}
{"ts":"2026-03-08T14:35:10Z","worker":"2","type":"root_cause","data":{"finding_id":"SEC-001","description":"Unsanitized user input in query","related_findings":["SEC-002"]}}
{"ts":"2026-03-08T14:40:05Z","worker":"3","type":"fix_applied","data":{"file":"src/auth.ts","line":42,"fix_strategy":"minimal","status":"fixed"}}
```
> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
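Merging a shared discoveries.ndjson by the per-type dedup keys in the table above could be done along these lines. This is a sketch of the reading side only; the keep-first policy on duplicates is an assumption, not a documented guarantee:

```python
import json

# Dedup key fields per discovery type, mirroring the table above.
DEDUP_KEYS = {
    "finding": ("file", "line", "dimension"),
    "root_cause": ("finding_id",),
    "fix_applied": ("file", "line"),
    "pattern": ("pattern",),
}

def load_discoveries(ndjson_text):
    """Parse discoveries.ndjson, keeping the first record per dedup key."""
    seen, out = set(), []
    for line in ndjson_text.splitlines():
        if not line.strip():
            continue
        rec = json.loads(line)
        fields = DEDUP_KEYS.get(rec["type"])
        if fields is None:
            continue  # Unknown discovery type: skip rather than fail.
        key = (rec["type"],) + tuple(rec["data"].get(f) for f in fields)
        if key in seen:
            continue
        seen.add(key)
        out.append(rec)
    return out
```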
---
## Cross-Mechanism Context Flow
| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |
---
## Validation Rules
| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has description | "Empty description for task: {id}" |
| Status enum | status ∈ {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Cross-mechanism deps | Interactive→CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |
| Dimension valid | dimension ∈ {sec, cor, prf, mnt} or combinations | "Invalid dimension: {dimension}" |
| Target non-empty | Every task has target | "Empty target for task: {id}" |
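A subset of these rules translates directly into checks. The sketch below assumes each task has been parsed into a dict keyed by the column names above; it covers the per-task rules and leaves cycle detection to the topological sort:

```python
def validate_tasks(tasks):
    """Apply a subset of the validation rules; return error strings (empty = valid)."""
    errors = []
    ids = [t["id"] for t in tasks]
    known = set(ids)
    for tid in known:
        if ids.count(tid) > 1:
            errors.append(f"Duplicate task ID: {tid}")
    valid_dims = {"sec", "cor", "prf", "mnt"}
    for t in tasks:
        tid = t["id"]
        for d in [d for d in t.get("deps", "").split(";") if d]:
            if d == tid:
                errors.append(f"Self-dependency: {tid}")
            elif d not in known:
                errors.append(f"Unknown dependency: {d}")
        if t.get("exec_mode") not in {"csv-wave", "interactive"}:
            errors.append(f"Invalid exec_mode: {t.get('exec_mode')}")
        if not t.get("description"):
            errors.append(f"Empty description for task: {tid}")
        # dimension may be blank for interactive tasks; only flag bad values.
        dims = [d for d in t.get("dimension", "").split(",") if d]
        if any(d not in valid_dims for d in dims):
            errors.append(f"Invalid dimension: {t.get('dimension')}")
        if not t.get("target"):
            errors.append(f"Empty target for task: {tid}")
    return errors
```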