mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-26 19:56:37 +08:00
feat: migrate all codex team skills from spawn_agents_on_csv to spawn_agent + wait_agent architecture
- Delete 21 old team skill directories using the CSV-wave pipeline pattern (~100+ files)
- Delete old team-lifecycle (v3) and team-planex-v2
- Create generic team-worker.toml and team-supervisor.toml (replacing tlv4-specific TOMLs)
- Convert 19 team skills from Claude Code format (Agent/SendMessage/TaskCreate) to Codex format (spawn_agent/wait_agent/tasks.json/request_user_input)
- Update team-lifecycle-v4 to use generic agent types (team_worker/team_supervisor)
- Convert all coordinator role files: dispatch.md, monitor.md, role.md
- Convert all worker role files: remove run_in_background, fix Bash syntax
- Convert all specs/pipelines.md references
- Final state: 20 team skills, 217 .md files, zero Claude Code API residuals

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@@ -1,533 +1,148 @@
---
name: team-review
description: Multi-agent code review pipeline with scanner, reviewer, and fixer roles. Executes toolchain + LLM scan, deep analysis with root-cause enrichment, and automated fixes with rollback-on-failure.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] [--full|--fix|-q] [--dimensions=sec,cor,prf,mnt] \"target path or pattern\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, request_user_input
description: Unified team skill for code review. 3-role pipeline: scanner, reviewer, fixer. Triggers on "team-review".
allowed-tools: spawn_agent(*), wait_agent(*), send_input(*), close_agent(*), report_agent_job_result(*), request_user_input(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*)
---
## Auto Mode

When `--yes` or `-y`: auto-confirm task decomposition, skip interactive validation, and use defaults.

# Team Review

## Usage

Orchestrate a multi-agent code review: scanner -> reviewer -> fixer. Toolchain + LLM scan, deep analysis with root-cause enrichment, and automated fixes with rollback-on-failure.

```bash
$team-review "src/auth/**/*.ts"
$team-review -c 2 --full "src/components"
$team-review -y --dimensions=sec,cor "src/api"
$team-review --continue "RV-auth-review-2026-03-08"
$team-review -q "src/utils"
$team-review --fix "src/auth/login.ts"
```

**Flags**:

- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume an existing session
- `--full`: Enable the scan + review + fix pipeline
- `--fix`: Fix-only mode (skip scan/review)
- `-q, --quick`: Quick scan only
- `--dimensions=sec,cor,prf,mnt`: Custom dimensions (security, correctness, performance, maintainability)

**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)

---
## Overview

Orchestrate multi-agent code review with three specialized roles: scanner (toolchain + LLM semantic scan), reviewer (deep analysis with root-cause enrichment), and fixer (automated fixes with rollback-on-failure). Supports 4-dimension analysis: security (SEC), correctness (COR), performance (PRF), maintainability (MNT).

**Execution Model**: Hybrid — CSV wave pipeline (primary) + individual agent spawn (secondary)

## Architecture

```
Team Review WORKFLOW
────────────────────

Phase 0: Pre-Wave Interactive
  ├─ Parse arguments and detect pipeline mode
  ├─ Validate target path and resolve file patterns
  └─ Output: refined requirements for decomposition

Phase 1: Requirement → CSV + Classification
  ├─ Generate task breakdown based on pipeline mode
  ├─ Create scan/review/fix tasks with dependencies
  ├─ Classify tasks: csv-wave (scanner, reviewer) | interactive (fixer)
  ├─ Compute dependency waves (topological sort → depth grouping)
  ├─ Generate tasks.csv with wave + exec_mode columns
  └─ User validates task breakdown (skip if -y)

Phase 2: Wave Execution Engine (Extended)
  ├─ For each wave (1..N):
  │   ├─ Execute pre-wave interactive tasks (if any)
  │   ├─ Build wave CSV (filter csv-wave tasks for this wave)
  │   ├─ Inject previous findings into prev_context column
  │   ├─ spawn_agents_on_csv(wave CSV)
  │   ├─ Execute post-wave interactive tasks (if any)
  │   ├─ Merge all results into master tasks.csv
  │   └─ Check: any failed? → skip dependents
  └─ discoveries.ndjson shared across all modes (append-only)

Phase 3: Post-Wave Interactive
  ├─ Generate final review report and fix summary
  └─ Final aggregation / report

Phase 4: Results Aggregation
  ├─ Export final results.csv
  ├─ Generate context.md with all findings
  ├─ Display summary: completed/failed/skipped per wave
  └─ Offer: view results | retry failed | done

Skill(skill="team-review", args="task description")
                  |
   SKILL.md (this file) = Router
                  |
   +--------------+--------------+
   |                             |
no --role flag             --role <name>
   |                             |
Coordinator                   Worker
roles/coordinator/role.md   roles/<name>/role.md
   |
   +-- analyze -> dispatch -> spawn workers -> STOP
            |
    +-------+-------+
    v       v       v
 [scan] [review] [fix]
 team-worker agents, each loads roles/<role>/role.md
```
---

## Role Registry

| Role | Path | Prefix | Inner Loop |
|------|------|--------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | — | — |
| scanner | [roles/scanner/role.md](roles/scanner/role.md) | SCAN-* | false |
| reviewer | [roles/reviewer/role.md](roles/reviewer/role.md) | REV-* | false |
| fixer | [roles/fixer/role.md](roles/fixer/role.md) | FIX-* | true |

## Role Router

Parse `$ARGUMENTS`:
- Has `--role <name>` -> Read `roles/<name>/role.md`, execute Phase 2-4
- No `--role` -> `roles/coordinator/role.md`, execute entry router

## Shared Constants

- **Session prefix**: `RV`
- **Session path**: `.workflow/.team/RV-<slug>-<date>/`
- **Team name**: `review`
- **CLI tools**: `ccw cli --mode analysis` (read-only), `ccw cli --mode write` (modifications)
- **Message bus**: `mcp__ccw-tools__team_msg(session_id=<session-id>, ...)`

## Task Classification Rules

Each task is classified by `exec_mode`:

| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, clarification, inline utility |

**Classification Decision**:

| Task Property | Classification |
|---------------|---------------|
| Scanner task (toolchain + LLM scan) | `csv-wave` |
| Reviewer task (deep analysis) | `csv-wave` |
| Fixer task (code modification with rollback) | `interactive` |
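The classification rules above reduce to a small decision on the task's role prefix. A minimal sketch (function name is hypothetical; the rule that only fixer tasks are interactive comes from the tables):

```javascript
// Hypothetical classifier for exec_mode, following the tables above:
// fixer tasks need multi-round interaction (fix -> test -> rollback),
// scanner/reviewer tasks are one-shot with structured I/O.
function classifyTask(task) {
  if (task.id.startsWith('FIX-')) return 'interactive';
  return 'csv-wave';
}

console.log(classifyTask({ id: 'SCAN-001' })); // csv-wave
console.log(classifyTask({ id: 'FIX-001' }));  // interactive
```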
---
## Worker Spawn Template

## CSV Schema

### tasks.csv (Master State)

```csv
id,title,description,deps,context_from,exec_mode,dimension,target,wave,status,findings,error
1,Scan codebase,Run toolchain + LLM scan on target files,,,"csv-wave","sec,cor,prf,mnt","src/**/*.ts",1,pending,"",""
2,Review findings,Deep analysis with root cause enrichment,1,1,"csv-wave","sec,cor,prf,mnt","scan-results.json",2,pending,"",""
3,Fix issues,Apply fixes with rollback-on-failure,2,2,"interactive","","review-report.json",3,pending,"",""
```

**Columns**:

| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (string) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `dimension` | Input | Review dimensions (sec,cor,prf,mnt) |
| `target` | Input | Target path or pattern |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` → `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `error` | Output | Error message if failed (empty if success) |

### Per-Wave CSV (Temporary)

Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column (csv-wave tasks only).
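The phases below repeatedly call `readCSV`/`writeCSV` helpers that this document never defines. A minimal sketch of the parsing half, with names assumed from usage; it handles quoted fields with embedded commas (as in the `dimension` column above) but not embedded newlines:

```javascript
// Hypothetical parser for tasks.csv rows. A real implementation would
// follow RFC 4180 more completely (e.g. embedded newlines).
function parseLine(line) {
  const fields = [];
  let cur = '', inQuotes = false;
  for (let i = 0; i < line.length; i++) {
    const ch = line[i];
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++; } // escaped quote
      else if (ch === '"') inQuotes = false;
      else cur += ch;
    } else if (ch === '"') inQuotes = true;
    else if (ch === ',') { fields.push(cur); cur = ''; }
    else cur += ch;
  }
  fields.push(cur);
  return fields;
}

// Map rows to objects keyed by the header row, so tasks can be
// filtered by wave/exec_mode as in the engine below.
function parseCSV(text) {
  const [header, ...rows] = text.trim().split('\n').map(parseLine);
  return rows.map(r => Object.fromEntries(header.map((h, i) => [h, r[i] ?? ''])));
}
```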
---
## Agent Registry (Interactive Agents)

| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| fixer | agents/fixer.md | 2.3 | Apply fixes with rollback-on-failure | post-wave |

> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload it.

---

## Output Artifacts

| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state — all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `interactive/fixer-result.json` | Results from fixer task | Created per interactive task |
| `agents/registry.json` | Active interactive agent tracking | Updated on spawn/close |

---
## Session Structure

Coordinator spawns workers using this template:

```
.workflow/.csv-wave/{session-id}/
├── tasks.csv              # Master state (all tasks, both modes)
├── results.csv            # Final results export
├── discoveries.ndjson     # Shared discovery board (all agents)
├── context.md             # Human-readable report
├── wave-{N}.csv           # Temporary per-wave input (csv-wave only)
├── interactive/           # Interactive task artifacts
│   ├── fixer-result.json  # Per-task results
│   └── cache-index.json   # Shared exploration cache
└── agents/
    └── registry.json      # Active interactive agent tracking
```
---

## Implementation

### Session Initialization

```javascript
// Parse arguments
const args = parseArguments($ARGUMENTS)
const AUTO_YES = args.yes || args.y || false
const CONCURRENCY = args.concurrency || args.c || 3
const CONTINUE_SESSION = args.continue || null
const MODE = args.full ? 'full' : args.fix ? 'fix-only' : (args.quick || args.q) ? 'quick' : 'default'
const DIMENSIONS = args.dimensions || 'sec,cor,prf,mnt'
const TARGET = args._[0] || null

// Generate session ID
const sessionId = `RV-${slugify(TARGET || 'review')}-${formatDate(new Date(), 'yyyy-MM-dd')}`
const sessionDir = `.workflow/.csv-wave/${sessionId}`

// Create session structure
Bash({ command: `mkdir -p "${sessionDir}/interactive" "${sessionDir}/agents"` })
Write(`${sessionDir}/discoveries.ndjson`, '')
Write(`${sessionDir}/agents/registry.json`, JSON.stringify({ active: [], closed: [] }))
```
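The snippet above assumes `slugify` and `formatDate` helpers that are never defined here. A minimal sketch of what they might look like; the real skill may implement them differently:

```javascript
// Hypothetical helpers assumed by the session-initialization snippet.
function slugify(s) {
  return s
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')  // collapse non-alphanumeric runs to dashes
    .replace(/^-+|-+$/g, '');     // trim leading/trailing dashes
}

// Only the 'yyyy-MM-dd' pattern used above is sketched.
function formatDate(d) {
  return d.toISOString().slice(0, 10);
}

console.log(`RV-${slugify('src/auth/**/*.ts')}-${formatDate(new Date())}`);
```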
---

### Phase 0: Pre-Wave Interactive

**Objective**: Parse arguments, validate the target, detect the pipeline mode

**Execution**:

1. Parse command-line arguments for mode flags (`--full`, `--fix`, `-q`)
2. Extract the target path/pattern from arguments
3. Validate that the target exists and resolve it to a file list
4. Detect the pipeline mode based on flags
5. Store configuration in session metadata

**Success Criteria**:
- Refined requirements available for Phase 1 decomposition
- Interactive agents closed, results stored

---
### Phase 1: Requirement → CSV + Classification

**Objective**: Generate a task breakdown based on the pipeline mode and create the master CSV

**Decomposition Rules**:

| Mode | Tasks Generated |
|------|----------------|
| quick | SCAN-001 (quick scan only) |
| default | SCAN-001 → REV-001 |
| full | SCAN-001 → REV-001 → FIX-001 |
| fix-only | FIX-001 (requires an existing review report) |

**Classification Rules**:

- Scanner tasks: `exec_mode=csv-wave` (one-shot toolchain + LLM scan)
- Reviewer tasks: `exec_mode=csv-wave` (one-shot deep analysis)
- Fixer tasks: `exec_mode=interactive` (multi-round with rollback)

**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).
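The wave computation can be sketched as follows. `computeWaves` is a hypothetical name; the depth rule (a task's wave is 1 + the max wave of its dependencies) is inferred from the `wave`/`deps` columns described earlier:

```javascript
// Kahn's BFS topological sort with depth tracking: assigns each task a
// 1-based wave number, and detects circular dependencies.
function computeWaves(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]));
  const indeg = new Map(tasks.map(t => [t.id, 0]));
  const dependents = new Map(tasks.map(t => [t.id, []]));
  for (const t of tasks) {
    for (const dep of (t.deps ? t.deps.split(';') : [])) {
      indeg.set(t.id, indeg.get(t.id) + 1);
      dependents.get(dep).push(t.id);
    }
  }
  const queue = tasks.filter(t => indeg.get(t.id) === 0).map(t => t.id);
  let visited = 0;
  while (queue.length) {
    const id = queue.shift();
    visited++;
    const t = byId.get(id);
    // Depth tracking: wave = 1 + max wave of dependencies (already computed).
    const depWaves = (t.deps ? t.deps.split(';') : []).map(d => byId.get(d).wave);
    t.wave = depWaves.length ? Math.max(...depWaves) + 1 : 1;
    for (const next of dependents.get(id)) {
      indeg.set(next, indeg.get(next) - 1);
      if (indeg.get(next) === 0) queue.push(next);
    }
  }
  if (visited !== tasks.length) throw new Error('Circular dependency detected');
  return tasks;
}
```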

**User Validation**: Display the task breakdown with wave + exec_mode assignments (skip if AUTO_YES).

**Success Criteria**:
- tasks.csv created with valid schema, wave, and exec_mode assignments
- No circular dependencies
- User approved (or AUTO_YES)

---
### Phase 2: Wave Execution Engine (Extended)

**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.

```javascript
// Load master CSV
const masterCSV = readCSV(`${sessionDir}/tasks.csv`)
const maxWave = Math.max(...masterCSV.map(t => t.wave))

for (let wave = 1; wave <= maxWave; wave++) {
  // Execute pre-wave interactive tasks
  const preWaveTasks = masterCSV.filter(t =>
    t.wave === wave && t.exec_mode === 'interactive' && t.position === 'pre-wave'
  )
  for (const task of preWaveTasks) {
    const agent = spawn_agent({
      message: buildInteractivePrompt(task, sessionDir)
    })
    const result = wait({ ids: [agent], timeout_ms: 600000 })
    close_agent({ id: agent })
    updateTaskStatus(task.id, result)
  }

  // Build wave CSV (csv-wave tasks only)
  const waveTasks = masterCSV.filter(t => t.wave === wave && t.exec_mode === 'csv-wave')
  if (waveTasks.length > 0) {
    // Inject prev_context from context_from tasks
    for (const task of waveTasks) {
      if (task.context_from) {
        const contextIds = task.context_from.split(';')
        const contextFindings = masterCSV
          .filter(t => contextIds.includes(t.id))
          .map(t => `[Task ${t.id}] ${t.findings}`)
          .join('\n\n')
        task.prev_context = contextFindings
      }
    }

    // Write wave CSV
    writeCSV(`${sessionDir}/wave-${wave}.csv`, waveTasks)

    // Execute wave
    spawn_agents_on_csv({
      csv_path: `${sessionDir}/wave-${wave}.csv`,
      instruction_path: `${sessionDir}/instructions/agent-instruction.md`,
      concurrency: CONCURRENCY
    })

    // Merge results back to master
    const waveResults = readCSV(`${sessionDir}/wave-${wave}.csv`)
    for (const result of waveResults) {
      const masterTask = masterCSV.find(t => t.id === result.id)
      Object.assign(masterTask, result)
    }
    writeCSV(`${sessionDir}/tasks.csv`, masterCSV)

    // Cleanup wave CSV
    Bash({ command: `rm "${sessionDir}/wave-${wave}.csv"` })
  }

  // Execute post-wave interactive tasks
  const postWaveTasks = masterCSV.filter(t =>
    t.wave === wave && t.exec_mode === 'interactive' && t.position === 'post-wave'
  )
  for (const task of postWaveTasks) {
    const agent = spawn_agent({
      message: buildInteractivePrompt(task, sessionDir)
    })
    const result = wait({ ids: [agent], timeout_ms: 600000 })
    close_agent({ id: agent })
    updateTaskStatus(task.id, result)
  }

  // Check for failures and skip dependents
  const failedTasks = masterCSV.filter(t => t.wave === wave && t.status === 'failed')
  if (failedTasks.length > 0) {
    skipDependents(masterCSV, failedTasks)
  }
}
```
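The engine calls `skipDependents` without defining it. A sketch under the assumed semantics (transitively mark every pending task downstream of a failure as skipped):

```javascript
// Hypothetical skipDependents: mark every task that depends, directly or
// indirectly, on a failed task as 'skipped'. Iterates until no change.
function skipDependents(tasks, failedTasks) {
  const blocked = new Set(failedTasks.map(t => t.id));
  let changed = true;
  while (changed) {
    changed = false;
    for (const t of tasks) {
      if (t.status !== 'pending') continue;
      const deps = t.deps ? t.deps.split(';') : [];
      if (deps.some(d => blocked.has(d))) {
        t.status = 'skipped';
        blocked.add(t.id);  // skipped tasks block their own dependents
        changed = true;
      }
    }
  }
}
```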

**Success Criteria**:
- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into the master CSV before the next wave starts
- Dependent tasks skipped when a predecessor failed
- discoveries.ndjson accumulated across all waves and mechanisms
- Interactive agent lifecycle tracked in registry.json

---
### Phase 3: Post-Wave Interactive

**Objective**: Generate the final review report and fix summary

**Execution**:

1. Aggregate all findings from scan and review tasks
2. Generate a comprehensive review report with metrics
3. If the fixer ran, generate a fix summary with success/failure rates
4. Write final reports to the session directory

**Success Criteria**:
- Post-wave interactive processing complete
- Interactive agents closed, results stored

---
### Phase 4: Results Aggregation

**Objective**: Generate final results and a human-readable report.

```javascript
// Export results.csv
const masterCSV = readCSV(`${sessionDir}/tasks.csv`)
writeCSV(`${sessionDir}/results.csv`, masterCSV)

// Generate context.md
const contextMd = generateContextReport(masterCSV, sessionDir)
Write(`${sessionDir}/context.md`, contextMd)

// Cleanup interactive agents
const registry = JSON.parse(Read(`${sessionDir}/agents/registry.json`))
for (const agent of registry.active) {
  close_agent({ id: agent.id })
}
Write(`${sessionDir}/agents/registry.json`, JSON.stringify({ active: [], closed: registry.closed }))

// Display summary
const summary = {
  total: masterCSV.length,
  completed: masterCSV.filter(t => t.status === 'completed').length,
  failed: masterCSV.filter(t => t.status === 'failed').length,
  skipped: masterCSV.filter(t => t.status === 'skipped').length
}
console.log(`Pipeline complete: ${summary.completed}/${summary.total} tasks completed`)
```

**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated
- All interactive agents closed (registry.json cleanup)
- Summary displayed to user

---
## Shared Discovery Board Protocol

**Discovery Types**:

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `finding` | `file+line+dimension` | `{dimension, file, line, severity, title}` | Code issue discovered by scanner |
| `root_cause` | `finding_id` | `{finding_id, description, related_findings[]}` | Root cause analysis from reviewer |
| `fix_applied` | `file+line` | `{file, line, fix_strategy, status}` | Fix application result from fixer |
| `pattern` | `pattern_name` | `{pattern, files[], occurrences}` | Code pattern identified across files |

**Discovery NDJSON Format**:

```jsonl
{"ts":"2026-03-08T14:30:22Z","worker":"1","type":"finding","data":{"dimension":"sec","file":"src/auth.ts","line":42,"severity":"high","title":"SQL injection vulnerability"}}
{"ts":"2026-03-08T14:35:10Z","worker":"2","type":"root_cause","data":{"finding_id":"SEC-001","description":"Unsanitized user input in query","related_findings":["SEC-002"]}}
{"ts":"2026-03-08T14:40:05Z","worker":"3","type":"fix_applied","data":{"file":"src/auth.ts","line":42,"fix_strategy":"minimal","status":"fixed"}}
```

> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
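The dedup keys in the table above can be applied on append. A minimal in-memory sketch (a real implementation would append to discoveries.ndjson on disk; the key derivation per type is taken from the table):

```javascript
// Dedup-key derivation per discovery type, per the table above.
const DEDUP_KEYS = {
  finding:     d => `${d.file}:${d.line}:${d.dimension}`,
  root_cause:  d => d.finding_id,
  fix_applied: d => `${d.file}:${d.line}`,
  pattern:     d => d.pattern,
};

// Append-only board: duplicates are dropped, lines are never rewritten.
function makeBoard() {
  const seen = new Set();
  const lines = [];
  return {
    append(entry) {
      const key = `${entry.type}|${DEDUP_KEYS[entry.type](entry.data)}`;
      if (seen.has(key)) return false; // duplicate discovery: skip
      seen.add(key);
      lines.push(JSON.stringify(entry));
      return true;
    },
    lines,
  };
}
```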
---
## Cross-Mechanism Context Bridging

### Interactive Result → CSV Task

When a pre-wave interactive task produces results needed by csv-wave tasks:

```javascript
// 1. Interactive result stored in a file
const resultFile = `${sessionDir}/interactive/${taskId}-result.json`

// 2. The wave engine reads it when building prev_context for csv-wave tasks.
//    If a csv-wave task's context_from references an interactive task,
//    read the interactive result file and include it in prev_context.
```
### CSV Result → Interactive Task

When a post-wave interactive task needs CSV wave results:

```javascript
// Option A: Include in the spawn message
const csvFindings = readMasterCSV().filter(t => t.wave === currentWave && t.exec_mode === 'csv-wave')
const context = csvFindings.map(t => `## Task ${t.id}: ${t.title}\n${t.findings}`).join('\n\n')

spawn_agent({
  message: `...\n### Wave ${currentWave} Results\n${context}\n...`
})

// Option B: Inject via send_input (if the agent is already running)
send_input({
  id: activeAgent,
  message: `## Wave ${currentWave} Results\n${context}\n\nProceed with analysis.`
})
```

Worker spawn template used by the coordinator:

```javascript
spawn_agent({
  agent_type: "team_worker",
  items: [
    { type: "text", text: `## Role Assignment
role: <role>
role_spec: <skill_root>/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
requirement: <task-description>
inner_loop: <true|false>

Read the role_spec file (<skill_root>/roles/<role>/role.md) to load Phase 2-4 domain instructions.` },

    { type: "text", text: `## Task Context
task_id: <task-id>
title: <task-title>
description: <task-description>
pipeline_phase: <pipeline-phase>` },

    { type: "text", text: `## Upstream Context
<prev_context>` }
  ]
})
```

---

After spawning, use `wait_agent({ ids: [...], timeout_ms: 900000 })` to collect results, then `close_agent({ id })` for each worker.

## User Commands

| Command | Action |
|---------|--------|
| `check` / `status` | View pipeline status graph |
| `resume` / `continue` | Advance to next step |
| `--full` | Enable scan + review + fix pipeline |
| `--fix` | Fix-only mode (skip scan/review) |
| `-q` / `--quick` | Quick scan only |
| `--dimensions=sec,cor,prf,mnt` | Custom dimensions |
| `-y` / `--yes` | Skip confirmations |

## Completion Action

When the pipeline completes, the coordinator presents:

```
request_user_input({
  questions: [{
    question: "Review pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up" },
      { label: "Keep Active", description: "Keep session for follow-up work" },
      { label: "Export Results", description: "Export deliverables to target directory" }
    ]
  }]
})
```
## Session Directory

```
.workflow/.team/RV-<slug>-<date>/
├── .msg/messages.jsonl   # Team message bus
├── .msg/meta.json        # Session state + cross-role state
├── wisdom/               # Cross-task knowledge
├── scan/                 # Scanner output
├── review/               # Reviewer output
└── fix/                  # Fixer output
```
## Specs Reference

- [specs/pipelines.md](specs/pipelines.md) — Pipeline definitions and task registry
- [specs/dimensions.md](specs/dimensions.md) — Review dimension definitions (SEC/COR/PRF/MNT)
- [specs/finding-schema.json](specs/finding-schema.json) — Finding data schema
- [specs/team-config.json](specs/team-config.json) — Team configuration
## Error Handling

| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| Pre-wave interactive failed | Skip dependent csv-wave tasks in the same wave |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Lifecycle leak | Clean up all active agents via registry.json at end |
| Continue mode: no session found | List available sessions, prompt user to select |
| Target path invalid | request_user_input for corrected path |
| Scanner finds 0 findings | Report clean, skip review + fix stages |

---

## Core Rules

1. **Start Immediately**: The first action is session initialization, then Phase 0/1
2. **Wave Order Is Sacred**: Never execute wave N before wave N-1 completes and its results are merged
3. **CSV Is Source of Truth**: The master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when the interaction pattern requires it
5. **Context Propagation**: prev_context is built from the master CSV, not from memory
6. **Discovery Board Is Append-Only**: Never clear, modify, or recreate discoveries.ndjson — both mechanisms share it
7. **Skip on Failure**: If a dependency failed, skip the dependent task (regardless of mechanism)
8. **Lifecycle Balance**: Every spawn_agent MUST have a matching close_agent (tracked in registry.json)
9. **Clean Up Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continue execution until all waves complete or all remaining tasks are skipped

---
## Coordinator Role Constraints (Main Agent)

**CRITICAL**: The coordinator (the main agent executing this skill) is responsible for **orchestration only**, NOT implementation.

15. **Coordinator Does NOT Execute Code**: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
    - Spawns agents with task assignments
    - Waits for agent callbacks
    - Merges results and coordinates workflow
    - Manages workflow transitions between phases

16. **Patient Waiting Is Mandatory**: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
    - Wait patiently for `wait()` calls to complete
    - NOT skip workflow steps due to perceived delays
    - NOT assume agents have failed just because they are taking time
    - Trust the timeout mechanisms defined in the skill

17. **Use send_input for Clarification**: When agents need guidance or appear stuck, the coordinator MUST:
    - Use `send_input()` to ask questions or provide clarification
    - NOT skip the agent or move to the next phase prematurely
    - Give agents an opportunity to respond before escalating
    - Example: `send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })`

18. **No Workflow Shortcuts**: The coordinator MUST NOT:
    - Skip phases or stages defined in the workflow
    - Bypass required approval or review steps
    - Execute dependent tasks before prerequisites complete
    - Assume task completion without an explicit agent callback
    - Make up or fabricate agent results

19. **Respect Long-Running Processes**: This is a complex multi-agent workflow that requires patience:
    - Total execution time may range from 30-90 minutes or longer
    - Each phase may take 10-30 minutes depending on complexity
    - The coordinator must remain active and attentive throughout the entire process
    - Do not terminate or skip steps due to time concerns
| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with available role list |
| Role not found | Error with expected path (roles/<name>/role.md) |
| CLI tool fails | Worker falls back to direct implementation |
| Scanner finds 0 findings | Report clean, skip review + fix |
| User declines fix | Delete FIX tasks, complete with review-only results |
| Fast-advance conflict | Coordinator reconciles on next callback |
| Completion action fails | Default to Keep Active |
@@ -1,360 +0,0 @@
# Fixer Agent

Fix code based on reviewed findings: load the manifest, plan fix groups, apply fixes with rollback-on-failure, and verify.

## Identity

- **Type**: `code-generation`
- **Role File**: `~/.codex/agents/fixer.md`
- **Responsibility**: Code modification with rollback-on-failure

## Boundaries

### MUST

- Load the role definition via the MANDATORY FIRST STEPS pattern
- Produce structured output following the template
- Include file:line references in findings
- Apply fixes using the Edit tool in dependency order
- Run tests after each fix
- Roll back on test failure (no retry)
- Mark dependent fixes as skipped if a prerequisite failed

### MUST NOT

- Skip the MANDATORY FIRST STEPS role loading
- Produce unstructured output
- Exceed defined scope boundaries
- Retry failed fixes (roll back and move on)
- Apply fixes without running tests
- Modify files outside the fix scope

---
## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | File I/O | Load fix manifest, review report, source files |
| `Write` | File I/O | Write fix plan, execution results, summary |
| `Edit` | File modification | Apply code fixes |
| `Bash` | Shell execution | Run tests, verification tools, git operations |
| `Glob` | File discovery | Find test files, source files |
| `Grep` | Content search | Search for patterns in code |

### Tool Usage Patterns

**Read Pattern**: Load context files before fixing
```
Read(".workflow/project-tech.json")
Read("<session>/fix/fix-manifest.json")
Read("<session>/review/review-report.json")
Read("<target-file>")
```

**Write Pattern**: Generate artifacts after processing
```
Write("<session>/fix/fix-plan.json", <plan>)
Write("<session>/fix/execution-results.json", <results>)
Write("<session>/fix/fix-summary.json", <summary>)
```

---
|
||||
## Execution
|
||||
|
||||
### Phase 1: Context & Scope Resolution
|
||||
|
||||
**Objective**: Load fix manifest, review report, and determine fixable findings
|
||||
|
||||
**Input**:
|
||||
|
||||
| Source | Required | Description |
|
||||
|--------|----------|-------------|
|
||||
| Task description | Yes | Contains session path and input path |
|
||||
| Fix manifest | Yes | <session>/fix/fix-manifest.json |
|
||||
| Review report | Yes | <session>/review/review-report.json |
|
||||
| Project tech | No | .workflow/project-tech.json |
|
||||
|
||||
**Steps**:
|
||||
|
||||
1. Extract session path and input path from task description
|
||||
2. Load fix manifest (scope, source report path)
|
||||
3. Load review report (findings with enrichment)
|
||||
4. Filter fixable findings: severity in scope AND fix_strategy !== 'skip'
|
||||
5. If 0 fixable → report complete immediately
|
||||
6. Detect quick path: findings <= 5 AND no cross-file dependencies
|
||||
7. Detect verification tools:
|
||||
- tsc: tsconfig.json exists
|
||||
- eslint: package.json contains eslint
|
||||
- jest: package.json contains jest
|
||||
- pytest: pyproject.toml exists
|
||||
- semgrep: semgrep available
|
||||
8. Load wisdom files from `<session>/wisdom/`
|
||||
|
||||
**Output**: Fixable findings list, quick_path flag, available verification tools
|
||||
|
||||
---
|
||||
|
||||
### Phase 2: Plan Fixes
|
||||
|
||||
**Objective**: Group findings, resolve dependencies, determine execution order
|
||||
|
||||
**Input**:
|
||||
|
||||
| Source | Required | Description |
|
||||
|--------|----------|-------------|
|
||||
| Fixable findings | Yes | From Phase 1 |
|
||||
| Fix dependencies | Yes | From review report enrichment |
|
||||
|
||||
**Steps**:
|
||||
|
||||
1. Group findings by primary file
|
||||
2. Merge groups with cross-file dependencies (union-find algorithm)
|
||||
3. Topological sort within each group (respect fix_dependencies, append cycles at end)
|
||||
4. Sort groups by max severity (critical first)
|
||||
5. Determine execution path:
|
||||
- quick_path: <=5 findings AND <=1 group → single agent
|
||||
- standard: one agent per group, in execution_order
|
||||
6. Write fix plan to `<session>/fix/fix-plan.json`:
|
||||
```json
|
||||
{
|
||||
"plan_id": "<uuid>",
|
||||
"quick_path": true|false,
|
||||
"groups": [
|
||||
{
|
||||
"id": "group-1",
|
||||
"files": ["src/auth.ts"],
|
||||
"findings": ["SEC-001", "SEC-002"],
|
||||
"max_severity": "critical"
|
||||
}
|
||||
],
|
||||
"execution_order": ["group-1", "group-2"],
|
||||
"total_findings": 10,
|
||||
"total_groups": 2
|
||||
}
|
||||
```
|
||||
|
||||
**Output**: Fix plan with grouped findings and execution order
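Steps 2-3 can be sketched as a small planner: union-find merges file groups that share cross-file fix dependencies, then a DFS orders findings within each group. The finding shape (`{id, file, fix_dependencies}`) is an assumption for illustration, not the exact manifest schema.

```javascript
// Sketch of Phase 2 grouping. Assumed finding shape: {id, file, fix_dependencies}.
function planGroups(findings) {
  const parent = {};
  const find = (x) => (parent[x] === x ? x : (parent[x] = find(parent[x])));
  findings.forEach(f => { parent[f.file] = parent[f.file] ?? f.file; });
  const byId = Object.fromEntries(findings.map(f => [f.id, f]));
  // Union files linked by a cross-file fix dependency
  for (const f of findings) {
    for (const dep of f.fix_dependencies) {
      const d = byId[dep];
      if (d && d.file !== f.file) parent[find(d.file)] = find(f.file);
    }
  }
  // Collect groups keyed by root file
  const groups = {};
  for (const f of findings) {
    const root = find(f.file);
    (groups[root] = groups[root] || []).push(f);
  }
  // Topological sort inside each group; a stack check breaks cycles,
  // so cyclic findings simply fall out in encounter order.
  return Object.values(groups).map(members => {
    const ids = new Set(members.map(m => m.id));
    const sorted = [];
    const seen = new Set();
    const visit = (f, stack) => {
      if (seen.has(f.id) || stack.has(f.id)) return;
      stack.add(f.id);
      f.fix_dependencies.filter(d => ids.has(d)).forEach(d => visit(byId[d], stack));
      stack.delete(f.id);
      seen.add(f.id);
      sorted.push(f.id);
    };
    members.forEach(m => visit(m, new Set()));
    return sorted;
  });
}
```

A finding in `b.ts` that depends on one in `a.ts` pulls both files into a single group, which is what forces one agent to handle them together.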

---

### Phase 3: Execute Fixes

**Objective**: Apply fixes with rollback-on-failure

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Fix plan | Yes | From Phase 2 |
| Source files | Yes | Files to modify |

**Steps**:

**Quick path**: Single code-developer agent for all findings
**Standard path**: One code-developer agent per group, in execution_order

The agent prompt includes:
- Finding list (dependency-sorted)
- File contents (truncated to 8K)
- Critical rules:
  1. Apply each fix using the Edit tool, in order
  2. After each fix, run related tests
  3. Tests PASS → finding is "fixed"
  4. Tests FAIL → `git checkout -- {file}` → mark "failed" → continue
  5. No retry on failure. Roll back and move on
  6. If a finding depends on a previously failed finding → mark "skipped"

Agent execution:

```javascript
const agent = spawn_agent({
  message: `## TASK ASSIGNMENT

### MANDATORY FIRST STEPS
1. Read role definition: ~/.codex/agents/code-developer.md

---

## Fix Group: ${group.id}

**Files**: ${group.files.join(', ')}
**Findings**: ${group.findings.length}

### Findings (dependency-sorted):
${group.findings.map(f => `
- ID: ${f.id}
  - Severity: ${f.severity}
  - Location: ${f.location.file}:${f.location.line}
  - Description: ${f.description}
  - Fix Strategy: ${f.fix_strategy}
  - Dependencies: ${f.fix_dependencies.join(', ')}
`).join('\n')}

### Critical Rules:
1. Apply each fix using the Edit tool in order
2. After each fix, run related tests
3. Tests PASS → finding is "fixed"
4. Tests FAIL → git checkout -- {file} → mark "failed" → continue
5. No retry on failure. Roll back and move on
6. If a finding depends on a previously failed finding → mark "skipped"

### Output Format:
Return JSON:
{
  "results": [
    {"id": "SEC-001", "status": "fixed|failed|skipped", "file": "src/auth.ts", "error": ""}
  ]
}
`
})

const result = wait_agent({ ids: [agent], timeout_ms: 600000 })
close_agent({ id: agent })
```

Parse the agent response for structured JSON. Fallback: check the git diff per file if there is no structured output.

Write execution results to `<session>/fix/execution-results.json`:
```json
{
  "fixed": ["SEC-001", "COR-003"],
  "failed": ["SEC-002"],
  "skipped": ["SEC-004"]
}
```

**Output**: Execution results with fixed/failed/skipped findings

---

### Phase 4: Post-Fix Verification

**Objective**: Run verification tools on modified files

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Execution results | Yes | From Phase 3 |
| Modified files | Yes | Files that were changed |
| Verification tools | Yes | From Phase 1 detection |

**Steps**:

1. Run the available verification tools on the modified files:

| Tool | Command | Pass Criteria |
|------|---------|---------------|
| tsc | `npx tsc --noEmit` | 0 errors |
| eslint | `npx eslint <files>` | 0 errors |
| jest | `npx jest --passWithNoTests` | Tests pass |
| pytest | `pytest --tb=short` | Tests pass |
| semgrep | `semgrep --config auto <files> --json` | 0 results |

2. If verification fails critically → roll back the last batch
3. Write verification results to `<session>/fix/verify-results.json`
4. Generate the fix summary:

```json
{
  "fix_id": "<uuid>",
  "fix_date": "<ISO8601>",
  "scope": "critical,high",
  "total": 10,
  "fixed": 7,
  "failed": 2,
  "skipped": 1,
  "fix_rate": 0.7,
  "verification": {
    "tsc": "pass",
    "eslint": "pass",
    "jest": "pass"
  }
}
```

5. Generate a human-readable summary in `<session>/fix/fix-summary.md`
6. Update `<session>/.msg/meta.json` with fix results
7. Contribute discoveries to `<session>/wisdom/` files

**Output**: Fix summary with verification results

---

## Inline Subagent Calls

This agent may spawn utility subagents during its execution:

### code-developer

**When**: After the fix plan is ready
**Agent File**: `~/.codex/agents/code-developer.md`

```javascript
const utility = spawn_agent({
  message: `### MANDATORY FIRST STEPS
1. Read: ~/.codex/agents/code-developer.md

## Fix Group: ${group.id}
[See Phase 3 prompt template above]
`
})
const result = wait_agent({ ids: [utility], timeout_ms: 600000 })
close_agent({ id: utility })
// Parse the result and update execution results
```

### Result Handling

| Result | Severity | Action |
|--------|----------|--------|
| Success | - | Integrate findings, continue |
| consensus_blocked | HIGH | Include in output with severity flag for orchestrator |
| consensus_blocked | MEDIUM | Include warning, continue |
| Timeout/Error | - | Continue without utility result, log warning |

---

## Structured Output Template

```
## Summary
- Fixed X/Y findings (Z% success rate)
- Failed: A findings (rolled back)
- Skipped: B findings (dependency failures)

## Findings
- SEC-001: Fixed SQL injection in src/auth.ts:42
- SEC-002: Failed to fix XSS (tests failed, rolled back)
- SEC-004: Skipped (depends on SEC-002)

## Verification Results
- tsc: PASS (0 errors)
- eslint: PASS (0 errors)
- jest: PASS (all tests passed)

## Modified Files
- src/auth.ts: 2 fixes applied
- src/utils/sanitize.ts: 1 fix applied

## Open Questions
1. SEC-002 fix caused test failures - manual review needed
2. Consider refactoring the auth module for better security
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Input file not found | Report in Open Questions, continue with available data |
| Scope ambiguity | Report in Open Questions, proceed with a reasonable assumption |
| Processing failure | Output partial results with a clear status indicator |
| Timeout approaching | Output current findings with "PARTIAL" status |
| Fix manifest missing | ERROR, cannot proceed without manifest |
| Review report missing | ERROR, cannot proceed without review |
| All fixes failed | Report failure, include rollback details |
| Verification tool unavailable | Skip verification, warn in output |
| Git operations fail | Report error, manual intervention needed |
@@ -1,102 +0,0 @@

## TASK ASSIGNMENT

### MANDATORY FIRST STEPS
1. Read shared discoveries: {session_folder}/discoveries.ndjson (if it exists; skip if not)
2. Read project context: .workflow/project-tech.json (if it exists)

---

## Your Task

**Task ID**: {id}
**Title**: {title}
**Description**: {description}
**Dimension**: {dimension}
**Target**: {target}

### Previous Tasks' Findings (Context)
{prev_context}

---

## Execution Protocol

1. **Read discoveries**: Load {session_folder}/discoveries.ndjson for shared exploration findings
2. **Use context**: Apply previous tasks' findings from prev_context above
3. **Execute**: Perform your assigned role (scanner or reviewer) following the role-specific instructions below
4. **Share discoveries**: Append exploration findings to the shared board:
   ```bash
   echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> {session_folder}/discoveries.ndjson
   ```
5. **Report result**: Return JSON via report_agent_job_result

### Role-Specific Instructions

**If you are a Scanner (SCAN-* task)**:
1. Extract the session path and target from the description
2. Resolve target files (glob pattern or directory → `**/*.{ts,tsx,js,jsx,py,go,java,rs}`)
3. If no source files are found → report empty, complete the task cleanly
4. Detect toolchain availability:
   - tsc: `tsconfig.json` exists → COR dimension
   - eslint: `.eslintrc*` or `eslint` in package.json → COR/MNT
   - semgrep: `.semgrep.yml` exists → SEC dimension
   - ruff: `pyproject.toml` + ruff available → SEC/COR/MNT
   - mypy: mypy available + `pyproject.toml` → COR
   - npmAudit: `package-lock.json` exists → SEC
5. Run detected tools in parallel via Bash backgrounding
6. Parse tool outputs into normalized findings with dimension, severity, file:line
7. Execute a semantic scan via CLI: `ccw cli --tool gemini --mode analysis --rule analysis-review-code-quality`
8. Focus areas per dimension:
   - SEC: Business logic vulnerabilities, privilege escalation, sensitive data flow, auth bypass
   - COR: Logic errors, unhandled exception paths, state management bugs, race conditions
   - PRF: Algorithm complexity, N+1 queries, unnecessary sync, memory leaks, missing caching
   - MNT: Architectural coupling, abstraction leaks, convention violations, dead code
9. Merge toolchain + semantic findings, deduplicate (same file + line + dimension)
10. Assign dimension-prefixed IDs: SEC-001, COR-001, PRF-001, MNT-001
11. Write scan results to the session directory
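Scanner steps 9-10 can be sketched as one pass: drop duplicates on the `(file, line, dimension)` key, then number the survivors per dimension. The input shape is an assumed normalized-finding record, not a fixed schema.

```javascript
// Deduplicate merged toolchain + semantic findings and assign
// dimension-prefixed sequential IDs (SEC-001, COR-001, ...).
function dedupeAndLabel(findings) {
  const seen = new Set();
  const counters = {};
  const out = [];
  for (const f of findings) {
    const key = `${f.file}:${f.line}:${f.dimension}`;
    if (seen.has(key)) continue; // toolchain and LLM scan reported the same issue
    seen.add(key);
    const dim = f.dimension.toUpperCase();
    counters[dim] = (counters[dim] || 0) + 1;
    out.push({ ...f, id: `${dim}-${String(counters[dim]).padStart(3, "0")}` });
  }
  return out;
}
```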

**If you are a Reviewer (REV-* task)**:
1. Extract the session path and input path from the description
2. Load scan results from the previous task (via prev_context or the session directory)
3. If scan results are empty → report clean, complete immediately
4. Triage findings into deep_analysis (critical/high/medium, max 15) and pass_through (the remainder)
5. Split deep_analysis into domain groups:
   - Group A: Security + Correctness → Root cause tracing, fix dependencies, blast radius
   - Group B: Performance + Maintainability → Optimization approaches, refactor tradeoffs
6. Execute parallel CLI agents for enrichment: `ccw cli --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause`
7. Request 6 enrichment fields per finding:
   - root_cause: {description, related_findings[], is_symptom}
   - impact: {scope: low/medium/high, affected_files[], blast_radius}
   - optimization: {approach, alternative, tradeoff}
   - fix_strategy: minimal / refactor / skip
   - fix_complexity: low / medium / high
   - fix_dependencies: finding IDs that must be fixed first
8. Merge enriched + pass_through findings
9. Cross-correlate:
   - Critical files: file appears in >=2 dimensions
   - Root cause groups: cluster findings sharing related_findings
   - Optimization suggestions: from root cause groups + standalone enriched findings
10. Compute metrics: by_dimension, by_severity, dimension_severity_matrix, fixable_count
11. Write the review report to the session directory
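The "critical files" rule in step 9 is a two-line aggregation; a sketch, assuming findings carry `file` and `dimension` fields:

```javascript
// A file is "critical" when findings hit it in two or more dimensions.
function criticalFiles(findings) {
  const dims = {};
  for (const f of findings) {
    (dims[f.file] = dims[f.file] || new Set()).add(f.dimension);
  }
  return Object.keys(dims).filter(file => dims[file].size >= 2);
}
```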

### Discovery Types to Share

- `finding`: {dimension, file, line, severity, title} — Code issue discovered
- `root_cause`: {finding_id, description, related_findings[]} — Root cause analysis
- `pattern`: {pattern, files[], occurrences} — Code pattern identified

---

## Output (report_agent_job_result)

Return JSON:

```json
{
  "id": "{id}",
  "status": "completed" | "failed",
  "findings": "Key discoveries and implementation notes (max 500 chars)",
  "error": ""
}
```

**Scanner findings format**: "Found X security issues (Y critical, Z high), A correctness bugs, B performance issues, C maintainability concerns. Toolchain: [tool results]. LLM scan: [semantic issues]."

**Reviewer findings format**: "Analyzed X findings. Critical files: [files]. Root cause groups: [count]. Fixable: Y/X. Recommended fix scope: [scope]."
@@ -0,0 +1,71 @@

# Analyze Task

Parse the user task -> detect review capabilities -> build the dependency graph -> design the pipeline.

**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.

## Signal Detection

| Keywords | Capability | Prefix |
|----------|------------|--------|
| scan, lint, static analysis, toolchain | scanner | SCAN |
| review, analyze, audit, findings | reviewer | REV |
| fix, repair, remediate, patch | fixer | FIX |

## Pipeline Mode Detection

| Condition | Mode |
|-----------|------|
| Flag `--fix` | fix-only |
| Flag `--full` | full |
| Flag `-q` or `--quick` | quick |
| (none) | default |

## Dependency Graph

Natural ordering for the review pipeline:
- Tier 0: scanner (toolchain + semantic scan, no upstream dependency)
- Tier 1: reviewer (deep analysis, requires scan findings)
- Tier 2: fixer (apply fixes, requires reviewed findings + user confirmation)

## Pipeline Definitions

```
quick:    SCAN(quick=true)
default:  SCAN -> REV
full:     SCAN -> REV -> [user confirm] -> FIX
fix-only: FIX
```

## Complexity Scoring

| Factor | Points |
|--------|--------|
| Per capability | +1 |
| Large target scope (>20 files) | +2 |
| Multiple dimensions | +1 |
| Fix phase included | +1 |

Results: 1-2 Low, 3-4 Medium, 5+ High
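The scoring table reads directly as a function; the input shape here is an assumption, drawn from the fields in task-analysis.json:

```javascript
// Complexity scoring: +1 per capability, +2 for >20 files,
// +1 for multiple dimensions, +1 when a fix phase is included.
function scoreComplexity({ capabilities, fileCount, dimensions, hasFix }) {
  let score = capabilities.length;
  if (fileCount > 20) score += 2;
  if (dimensions.length > 1) score += 1;
  if (hasFix) score += 1;
  const level = score <= 2 ? "Low" : score <= 4 ? "Medium" : "High";
  return { score, level };
}
```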

## Role Minimization

- Cap at 4 roles (coordinator + 3 workers)
- Sequential pipeline: scanner -> reviewer -> fixer

## Output

Write <session>/task-analysis.json:
```json
{
  "task_description": "<original>",
  "pipeline_mode": "<quick|default|full|fix-only>",
  "target": "<path>",
  "dimensions": ["sec", "cor", "prf", "mnt"],
  "auto_confirm": false,
  "capabilities": [{ "name": "<cap>", "prefix": "<PREFIX>" }],
  "dependency_graph": { "<TASK-ID>": { "role": "<role>", "blockedBy": ["..."] } },
  "roles": [{ "name": "<role>", "prefix": "<PREFIX>", "inner_loop": false }],
  "complexity": { "score": 0, "level": "Low|Medium|High" }
}
```
@@ -0,0 +1,90 @@

# Dispatch Tasks

Create task chains from the pipeline mode and write them to tasks.json with proper deps relationships.

## Workflow

1. Read task-analysis.json -> extract pipeline_mode and parameters
2. Read specs/pipelines.md -> get the task registry for the selected pipeline
3. Topologically sort tasks (respect deps)
4. Validate that all owners exist in the role registry (SKILL.md)
5. For each task (in order):
   - Add a task entry to the tasks.json `tasks` object (see template below)
   - Set the deps array with upstream task IDs
6. Update tasks.json metadata with pipeline.tasks_total
7. Validate the chain (no orphans, no cycles, all refs valid)

## Task Entry Template

Each task in the tasks.json `tasks` object:
```json
{
  "<TASK-ID>": {
    "title": "<concise title>",
    "description": "PURPOSE: <goal> | Success: <criteria>\nTASK:\n - <step 1>\n - <step 2>\nCONTEXT:\n - Session: <session-folder>\n - Target: <target>\n - Dimensions: <dimensions>\n - Upstream artifacts: <list>\nEXPECTED: <artifact path> + <quality criteria>\nCONSTRAINTS: <scope limits>\n---\nInnerLoop: <true|false>\nRoleSpec: <project>/.codex/skills/team-review/roles/<role>/role.md",
    "role": "<role-name>",
    "prefix": "<PREFIX>",
    "deps": ["<upstream-task-id>"],
    "status": "pending",
    "findings": "",
    "error": ""
  }
}
```

## Pipeline Task Registry

### default Mode
```
SCAN-001 (scanner): Multi-dimension code scan
  deps: [], meta: target=<target>, dimensions=<dims>
REV-001 (reviewer): Deep finding analysis and review
  deps: [SCAN-001]
```

### full Mode
```
SCAN-001 (scanner): Multi-dimension code scan
  deps: [], meta: target=<target>, dimensions=<dims>
REV-001 (reviewer): Deep finding analysis and review
  deps: [SCAN-001]
FIX-001 (fixer): Plan and execute fixes
  deps: [REV-001]
```

### fix-only Mode
```
FIX-001 (fixer): Execute fixes from manifest
  deps: [], meta: input=<fix-manifest>
```

### quick Mode
```
SCAN-001 (scanner): Quick scan (fast mode)
  deps: [], meta: target=<target>, quick=true
```

## InnerLoop Flag Rules

- true: fixer role (iterative fix cycles)
- false: scanner, reviewer roles

## Dependency Validation

- No orphan tasks (every task has a valid owner)
- No circular dependencies
- All deps references exist in the tasks object
- A Session reference in every task description
- A RoleSpec reference in every task description
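The first three checks can be sketched over the tasks.json `tasks` object: reject dangling deps references, then detect cycles with a DFS coloring pass. This is an illustrative validator, not the coordinator's exact implementation.

```javascript
// Validate the task chain: every deps reference must exist and the
// dependency graph must be acyclic.
function validateChain(tasks) {
  const errors = [];
  for (const [id, t] of Object.entries(tasks)) {
    for (const dep of t.deps) {
      if (!tasks[dep]) errors.push(`${id}: missing dep ${dep}`);
    }
  }
  const color = {}; // undefined = unvisited, 1 = on stack, 2 = done
  const visit = (id) => {
    if (color[id] === 2) return false;
    if (color[id] === 1) return true; // back edge -> cycle
    color[id] = 1;
    const cyclic = (tasks[id].deps || []).some(d => tasks[d] && visit(d));
    color[id] = 2;
    return cyclic;
  };
  if (Object.keys(tasks).some(id => visit(id))) errors.push("cycle detected");
  return errors;
}
```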

## Log After Creation

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "coordinator",
  type: "dispatch_ready",
  data: { pipeline: "<mode>", task_count: <N>, target: "<target>" }
})
```
185
.codex/skills/team-review/roles/coordinator/commands/monitor.md
Normal file
@@ -0,0 +1,185 @@

# Monitor Pipeline

Synchronous pipeline coordination using spawn_agent + wait_agent.

## Constants

- WORKER_AGENT: team_worker
- FAST_ADVANCE_AWARE: true

## Handler Router

| Source | Handler |
|--------|---------|
| "capability_gap" | handleAdapt |
| "check" or "status" | handleCheck |
| "resume" or "continue" | handleResume |
| All tasks completed | handleComplete |
| Default | handleSpawnNext |

## Role-Worker Map

| Prefix | Role | Role Spec | inner_loop |
|--------|------|-----------|------------|
| SCAN-* | scanner | `<project>/.codex/skills/team-review/roles/scanner/role.md` | false |
| REV-* | reviewer | `<project>/.codex/skills/team-review/roles/reviewer/role.md` | false |
| FIX-* | fixer | `<project>/.codex/skills/team-review/roles/fixer/role.md` | true |

## handleCheck

Read-only status report from tasks.json, then STOP.

1. Read tasks.json
2. Count tasks by status (pending, in_progress, completed, failed)

Output:
```
[coordinator] Review Pipeline Status
[coordinator] Mode: <pipeline_mode>
[coordinator] Progress: <completed>/<total> (<percent>%)

[coordinator] Pipeline Graph:
  SCAN-001: <done|run|wait|deleted> <summary>
  REV-001:  <done|run|wait|deleted> <summary>
  FIX-001:  <done|run|wait|deleted> <summary>

  done=completed  run=running  wait=pending  deleted=removed

[coordinator] Active Agents: <list from active_agents>
[coordinator] Ready to spawn: <subjects>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```

Then STOP.

## handleResume

1. Read tasks.json, check active_agents
2. No active agents -> handleSpawnNext
3. Has active agents -> check each agent's status:
   - completed -> mark done in tasks.json
   - in_progress -> still running
   - other -> worker failure -> reset the task to pending
4. Some completed -> handleSpawnNext
5. All running -> report status, STOP

## handleSpawnNext

Find ready tasks, spawn workers, wait for results, process.

1. Read tasks.json:
   - completedTasks: status = completed
   - inProgressTasks: status = in_progress
   - deletedTasks: status = deleted
   - readyTasks: status = pending AND all deps in completedTasks

2. No ready + work in progress -> report waiting, STOP
3. No ready + nothing in progress -> handleComplete
4. Has ready -> take the first ready task:
   a. Determine the role from the prefix (use the Role-Worker Map)
   b. Update the task status to in_progress in tasks.json
   c. team_msg log -> task_unblocked
   d. Spawn a team_worker:

```javascript
// 1) Update status in tasks.json
state.tasks[taskId].status = 'in_progress'

// 2) Spawn worker
const agentId = spawn_agent({
  agent_type: "team_worker",
  items: [
    { type: "text", text: `## Role Assignment
role: ${role}
role_spec: ${skillRoot}/roles/${role}/role.md
session: ${sessionFolder}
session_id: ${sessionId}
requirement: ${taskDescription}
inner_loop: ${innerLoop}` },

    { type: "text", text: `## Current Task
- Task ID: ${taskId}
- Task: ${taskSubject}

Read the role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).` }
  ]
})

// 3) Track agent
state.active_agents[taskId] = { agentId, role, started_at: now }

// 4) Wait for completion
wait_agent({ ids: [agentId] })

// 5) Collect results
state.tasks[taskId].status = 'completed'
delete state.active_agents[taskId]
```

   e. Check for checkpoints after the worker completes:
      - scanner completes -> read meta.json for findings_count:
        - findings_count === 0 -> mark remaining REV-*/FIX-* tasks as deleted -> handleComplete
        - findings_count > 0 -> proceed to handleSpawnNext
      - reviewer completes AND pipeline_mode === 'full':
        - autoYes flag set -> write fix-manifest.json, set fix_scope='all' -> handleSpawnNext
        - NO autoYes -> request_user_input:
          ```
          question: "<N> findings reviewed. Proceed with fix?"
          options:
            - "Fix all": set fix_scope='all'
            - "Fix critical/high only": set fix_scope='critical,high'
            - "Skip fix": mark FIX-* tasks as deleted -> handleComplete
          ```
          Write fix_scope to meta.json, write fix-manifest.json, -> handleSpawnNext
      - fixer completes -> handleSpawnNext (checks for completion naturally)

5. Update tasks.json, output summary, STOP
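The readyTasks computation in step 1 can be sketched directly from the tasks.json shape: a task is ready when it is pending and every dependency has completed.

```javascript
// Select ready tasks: pending, with all deps completed.
function readyTasks(tasks) {
  const done = new Set(
    Object.keys(tasks).filter(id => tasks[id].status === "completed")
  );
  return Object.keys(tasks).filter(id =>
    tasks[id].status === "pending" &&
    tasks[id].deps.every(d => done.has(d))
  );
}
```

Note that a task whose dependency was deleted never becomes ready under this rule; the coordinator handles that case by marking the downstream tasks deleted as well (see the 0-findings checkpoint above).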

## handleComplete

Pipeline done. Generate the report and run the completion action.

1. All tasks are completed or deleted (no pending, no in_progress)
2. Read the final session state from meta.json
3. Generate the pipeline summary: mode, target, findings_count, stages_completed, fix results (if applicable), deliverable paths
4. Update the session: pipeline_status='complete', completed_at=<timestamp>
5. Read session.completion_action:
   - interactive -> request_user_input (Archive/Keep/Export)
   - auto_archive -> Archive & Clean
   - auto_keep -> Keep Active (status=paused)

## handleAdapt

A capability gap was reported mid-pipeline.

1. Parse the gap description
2. Check whether an existing role covers it -> redirect
3. Role count < 4 -> generate a dynamic role-spec in <session>/role-specs/
4. Create a new task in tasks.json, spawn a worker
5. Role count >= 4 -> merge or pause

## Fast-Advance Reconciliation

On every coordinator wake:
1. Read tasks.json for completed tasks
2. Sync active_agents with the actual state
3. No duplicate spawns

## State Persistence

After every handler execution:
1. Reconcile active_agents with the actual tasks.json states
2. Remove entries for completed/deleted tasks
3. Write the updated tasks.json
4. STOP (wait for the next event)

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-initialization |
| 0 findings after scan | Delete remaining stages, complete the pipeline |
| User declines fix | Delete FIX-* tasks, complete with review-only results |
| Pipeline stall | Check deps chains, report to user |
| Worker failure | Reset the task to pending, respawn on next resume |
142
.codex/skills/team-review/roles/coordinator/role.md
Normal file
@@ -0,0 +1,142 @@

# Coordinator Role

Orchestrate team-review: parse target -> detect mode -> dispatch task chain -> monitor -> report.

## Identity
- Name: coordinator | Tag: [coordinator]
- Responsibility: Target parsing, mode detection, task creation/dispatch, stage monitoring, result aggregation

## Boundaries

### MUST
- Prefix all output with `[coordinator]`
- Parse the task description and detect the pipeline mode
- Create the session folder and spawn team_worker agents via spawn_agent
- Dispatch the task chain with proper dependencies (tasks.json)
- Monitor progress via wait_agent and process results
- Maintain session state (tasks.json)
- Execute the completion action when the pipeline finishes

### MUST NOT
- Run analysis tools directly (semgrep, eslint, tsc, etc.)
- Modify source code files
- Perform code review or scanning directly
- Bypass worker roles
- Spawn workers with a general-purpose agent (MUST use team_worker)

## Command Execution Protocol
When the coordinator needs to execute a specific phase:
1. Read `commands/<command>.md`
2. Follow the workflow defined in the command
3. Commands are inline execution guides, NOT separate agents
4. Execute synchronously; complete before proceeding

## Entry Router

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Status check | Args contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Args contain "resume" or "continue" | -> handleResume (monitor.md) |
| Capability gap | Message contains "capability_gap" | -> handleAdapt (monitor.md) |
| Pipeline complete | All tasks completed | -> handleComplete (monitor.md) |
| Interrupted session | Active session in .workflow/.team/RV-* | -> Phase 0 |
| New session | None of the above | -> Phase 1 |

For check/resume/adapt/complete: load @commands/monitor.md, execute the handler, STOP.

## Phase 0: Session Resume Check

1. Scan .workflow/.team/RV-*/tasks.json for active/paused sessions
2. No sessions -> Phase 1
3. Single session -> reconcile (read tasks.json, reset in_progress->pending, kick the first ready task)
4. Multiple -> request_user_input for selection

## Phase 1: Requirement Clarification

TEXT-LEVEL ONLY. No source code reading.

1. Parse arguments for explicit settings:

| Flag | Mode | Description |
|------|------|-------------|
| `--fix` | fix-only | Skip scan/review, go directly to fixer |
| `--full` | full | scan + review + fix pipeline |
| `-q` / `--quick` | quick | Quick scan only, no review/fix |
| (none) | default | scan + review pipeline |

2. Extract parameters: target, dimensions, auto-confirm flag
3. Clarify if ambiguous (request_user_input for the target path)
4. Delegate to @commands/analyze.md
5. Output: task-analysis.json
6. CRITICAL: Always proceed to Phase 2; never skip the team workflow

## Phase 2: Create Session + Initialize

1. Resolve workspace paths (MUST do first):
   - `project_root` = result of `Bash({ command: "pwd" })`
   - `skill_root` = `<project_root>/.codex/skills/team-review`
2. Generate the session ID: RV-<slug>-<date>
3. Create the session folder structure (scan/, review/, fix/, wisdom/)
4. Read specs/pipelines.md -> select the pipeline based on mode
5. Initialize tasks.json:

```json
{
  "session_id": "<id>",
  "pipeline_mode": "<default|full|fix-only|quick>",
  "target": "<target>",
  "dimensions": "<dimensions>",
  "auto_confirm": false,
  "created_at": "<ISO timestamp>",
  "active_agents": {},
  "tasks": {}
}
```

6. Initialize the pipeline via team_msg state_update:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: "<id>", from: "coordinator",
  type: "state_update", summary: "Session initialized",
  data: {
    pipeline_mode: "<default|full|fix-only|quick>",
    pipeline_stages: ["scanner", "reviewer", "fixer"],
    target: "<target>",
    dimensions: "<dimensions>",
    auto_confirm: "<auto_confirm>"
  }
})
```

7. Write session meta.json
|
||||
|
||||
## Phase 3: Create Task Chain
|
||||
|
||||
Delegate to @commands/dispatch.md:
|
||||
1. Read specs/pipelines.md for selected pipeline's task registry
|
||||
2. Add task entries to tasks.json `tasks` object with deps
|
||||
3. Update tasks.json metadata with pipeline.tasks_total
|
||||
|
||||
## Phase 4: Spawn-and-Wait
|
||||
|
||||
Delegate to @commands/monitor.md#handleSpawnNext:
|
||||
1. Find ready tasks (pending + deps resolved)
|
||||
2. Spawn team_worker agents via spawn_agent, wait_agent for results
|
||||
3. Output status summary
|
||||
4. STOP
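The ready-task query in step 1 can be sketched as follows. This is a minimal sketch: tasks.json above only fixes the top-level layout, so the per-task `status` and `deps` field names here are illustrative assumptions.

```python
# Sketch: select tasks that are pending and whose dependencies have all
# completed. The per-task "status"/"deps" field names are assumptions.

def ready_tasks(tasks: dict) -> list:
    """Return IDs of pending tasks whose deps are all completed."""
    done = {tid for tid, t in tasks.items() if t.get("status") == "completed"}
    return [
        tid for tid, t in tasks.items()
        if t.get("status") == "pending" and set(t.get("deps", [])) <= done
    ]

tasks = {
    "SCAN-001": {"status": "completed", "deps": []},
    "REV-001": {"status": "pending", "deps": ["SCAN-001"]},
    "FIX-001": {"status": "pending", "deps": ["REV-001"]},
}
print(ready_tasks(tasks))  # -> ['REV-001']
```

FIX-001 is not returned because its dependency REV-001 has not completed yet.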

## Phase 5: Report + Completion Action

1. Generate summary (mode, target, findings_total, by_severity, fix_rate if applicable)
2. Execute completion action per session.completion_action:
   - interactive -> request_user_input (Archive/Keep/Export)
   - auto_archive -> Archive & Clean
   - auto_keep -> Keep Active

## Error Handling

| Error | Resolution |
|-------|------------|
| Task too vague | request_user_input for clarification |
| Session corruption | Attempt recovery, fall back to manual |
| Worker crash | Reset task to pending, respawn |
| Scanner finds 0 findings | Report clean, skip review + fix stages |
| Fix verification fails | Log warning, report partial results |
| Target path invalid | request_user_input for corrected path |
76  .codex/skills/team-review/roles/fixer/role.md  (new file)
@@ -0,0 +1,76 @@
---
role: fixer
prefix: FIX
inner_loop: true
message_types:
  success: fix_complete
  error: fix_failed
---

# Code Fixer

Fix code based on reviewed findings. Load manifest, plan fix groups, apply with rollback-on-failure, verify. Code-generation role -- modifies source files.

## Phase 2: Context & Scope Resolution

| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| Fix manifest | <session>/fix/fix-manifest.json | Yes |
| Review report | <session>/review/review-report.json | Yes |
| .msg/meta.json | <session>/.msg/meta.json | No |

1. Extract session path, input path from task description
2. Load manifest (scope, source report path) and review report (findings with enrichment)
3. Filter fixable findings: severity in scope AND fix_strategy !== 'skip'
4. If 0 fixable -> report complete immediately
5. Detect quick path: findings <= 5 AND no cross-file dependencies
6. Detect verification tools: tsc (tsconfig.json), eslint (package.json), jest (package.json), pytest (pyproject.toml), semgrep (semgrep available)
7. Load wisdom files from `<session>/wisdom/`

## Phase 3: Plan + Execute

### 3A: Plan Fixes (deterministic, no CLI)

1. Group findings by primary file
2. Merge groups with cross-file dependencies (union-find)
3. Topological sort within each group (respect fix_dependencies, append cycles at end)
4. Sort groups by max severity (critical first)
5. Determine execution path: quick_path (<=5 findings, <=1 group) or standard
6. Write `<session>/fix/fix-plan.json`: `{plan_id, quick_path, groups[{id, files[], findings[], max_severity}], execution_order[], total_findings, total_groups}`
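Steps 1-3 can be sketched as follows. Assumptions: each finding carries `id`, a primary `file`, and `fix_dependencies`; cycle members here are simply emitted in visit order rather than collected into a separate trailing batch.

```python
# Sketch of plan steps 1-3: group findings by primary file, merge groups
# linked by cross-file fix_dependencies (union-find), then dependency-sort
# each group so prerequisites come first.
from collections import defaultdict

def plan_groups(findings):
    parent = {f["id"]: f["id"] for f in findings}

    def find(x):                     # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    by_id = {f["id"]: f for f in findings}
    by_file = defaultdict(list)
    for f in findings:               # same primary file -> same group
        by_file[f["file"]].append(f["id"])
    for ids in by_file.values():
        for other in ids[1:]:
            union(ids[0], other)
    for f in findings:               # cross-file dependency -> merge groups
        for dep in f.get("fix_dependencies", []):
            if dep in by_id:
                union(f["id"], dep)
    groups = defaultdict(list)
    for f in findings:
        groups[find(f["id"])].append(f["id"])

    def topo(ids):                   # DFS; deps before dependents
        ids, out, seen = set(ids), [], set()
        def visit(i):
            if i in seen:
                return
            seen.add(i)              # marking before recursing also breaks cycles
            for d in by_id[i].get("fix_dependencies", []):
                if d in ids:
                    visit(d)
            out.append(i)
        for i in sorted(ids):
            visit(i)
        return out

    return [topo(ids) for ids in groups.values()]

findings = [
    {"id": "SEC-001", "file": "a.ts", "fix_dependencies": []},
    {"id": "SEC-002", "file": "b.ts", "fix_dependencies": ["SEC-001"]},
    {"id": "MNT-001", "file": "c.ts", "fix_dependencies": []},
]
print(plan_groups(findings))  # -> [['SEC-001', 'SEC-002'], ['MNT-001']]
```

SEC-001 and SEC-002 merge into one group via the cross-file dependency even though they live in different files.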

### 3B: Execute Fixes

**Quick path**: Single code-developer agent for all findings.
**Standard path**: One code-developer agent per group, in execution_order.

Agent prompt includes: the finding list (dependency-sorted), file contents (truncated to 8K), and these critical rules:

1. Apply each fix using the Edit tool, in order
2. After each fix, run related tests
3. Tests PASS -> finding is "fixed"
4. Tests FAIL -> `git checkout -- {file}` -> mark "failed" -> continue
5. No retry on failure. Roll back and move on
6. If a finding depends on a previously failed finding -> mark it "skipped"

Agent returns JSON: `{results:[{id, status: fixed|failed|skipped, file, error?}]}`
Fallback: check git diff per file if there is no structured output.

Write `<session>/fix/execution-results.json`: `{fixed[], failed[], skipped[]}`

## Phase 4: Post-Fix Verification

1. Run available verification tools on modified files:

| Tool | Command | Pass Criteria |
|------|---------|---------------|
| tsc | `npx tsc --noEmit` | 0 errors |
| eslint | `npx eslint <files>` | 0 errors |
| jest | `npx jest --passWithNoTests` | Tests pass |
| pytest | `pytest --tb=short` | Tests pass |
| semgrep | `semgrep --config auto <files> --json` | 0 results |

2. If verification fails critically -> roll back the last batch
3. Write `<session>/fix/verify-results.json`
4. Generate `<session>/fix/fix-summary.json`: `{fix_id, fix_date, scope, total, fixed, failed, skipped, fix_rate, verification}`
5. Generate `<session>/fix/fix-summary.md` (human-readable)
6. Update `<session>/.msg/meta.json` with fix results
7. Contribute discoveries to `<session>/wisdom/` files
68  .codex/skills/team-review/roles/reviewer/role.md  (new file)
@@ -0,0 +1,68 @@
---
role: reviewer
prefix: REV
inner_loop: false
message_types:
  success: review_complete
  error: error
---

# Finding Reviewer

Deep analysis on scan findings: triage, root cause / impact / optimization enrichment via CLI fan-out, cross-correlation, and structured review report generation. Read-only -- never modifies source code.

## Phase 2: Context & Triage

| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| Scan results | <session>/scan/scan-results.json | Yes |
| .msg/meta.json | <session>/.msg/meta.json | No |

1. Extract session path, input path, dimensions from task description
2. Load review specs: run `ccw spec load --category review` for review standards, checklists, and approval gates
3. Load scan results. If missing or empty -> report clean, complete immediately
4. Load wisdom files from `<session>/wisdom/`
5. Triage findings into two buckets:

| Bucket | Criteria | Action |
|--------|----------|--------|
| deep_analysis | severity in [critical, high, medium], max 15, sorted critical-first | Enrich with root cause, impact, optimization |
| pass_through | remaining (low, info, or overflow) | Include in report without enrichment |

If deep_analysis is empty -> skip Phase 3, go to Phase 4.
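The bucket split can be sketched as follows (a minimal sketch; finding shape is assumed to carry `id` and `severity` as in the finding schema):

```python
# Sketch of the triage split: critical/high/medium findings go to
# deep_analysis (capped at 15, critical-first); everything else, plus
# any overflow past the cap, passes through unenriched.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}

def triage(findings, cap=15):
    eligible = [f for f in findings if f["severity"] in ("critical", "high", "medium")]
    eligible.sort(key=lambda f: SEVERITY_RANK[f["severity"]])  # critical-first
    deep = eligible[:cap]
    deep_ids = {f["id"] for f in deep}
    pass_through = [f for f in findings if f["id"] not in deep_ids]
    return deep, pass_through

findings = [
    {"id": "SEC-001", "severity": "medium"},
    {"id": "COR-001", "severity": "critical"},
    {"id": "MNT-001", "severity": "low"},
]
deep, rest = triage(findings)
print([f["id"] for f in deep])  # -> ['COR-001', 'SEC-001']
```

Python's stable sort keeps scan order among findings of equal severity, so overflow past the cap drops the latest-scanned findings first.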

## Phase 3: Deep Analysis (CLI Fan-out)

Split deep_analysis into two domain groups and run parallel CLI agents:

| Group | Dimensions | Focus |
|-------|-----------|-------|
| A | Security + Correctness | Root cause tracing, fix dependencies, blast radius |
| B | Performance + Maintainability | Optimization approaches, refactor tradeoffs |

If either group is empty -> skip that agent.

Build a prompt per group requesting 6 enrichment fields per finding:
- `root_cause`: `{description, related_findings[], is_symptom}`
- `impact`: `{scope: low/medium/high, affected_files[], blast_radius}`
- `optimization`: `{approach, alternative, tradeoff}`
- `fix_strategy`: minimal / refactor / skip
- `fix_complexity`: low / medium / high
- `fix_dependencies`: finding IDs that must be fixed first

Execute via `ccw cli --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause` (fallback: qwen -> codex). Parse the JSON array responses and merge with the originals (CLI-enriched findings replace the originals; unenriched findings get defaults). Write `<session>/review/enriched-findings.json`.

## Phase 4: Report Generation

1. Combine enriched + pass_through findings
2. Cross-correlate:
   - **Critical files**: file appears in >=2 dimensions -> list with finding_count, severities
   - **Root cause groups**: cluster findings sharing related_findings -> identify the primary
   - **Optimization suggestions**: from root cause groups + standalone enriched findings
3. Compute metrics: by_dimension, by_severity, dimension_severity_matrix, fixable_count, auto_fixable_count
4. Write `<session>/review/review-report.json`: `{review_id, review_date, findings[], critical_files[], optimization_suggestions[], root_cause_groups[], summary}`
5. Write `<session>/review/review-report.md`: executive summary, metrics matrix (dimension x severity), critical/high findings table, critical files list, optimization suggestions, recommended fix scope
6. Update `<session>/.msg/meta.json` with review summary
7. Contribute discoveries to `<session>/wisdom/` files
71  .codex/skills/team-review/roles/scanner/role.md  (new file)
@@ -0,0 +1,71 @@
---
role: scanner
prefix: SCAN
inner_loop: false
message_types:
  success: scan_complete
  error: error
---

# Code Scanner

Toolchain + LLM semantic scan producing structured findings. Static analysis tools run in parallel, then an LLM pass catches issues the tools miss. Read-only -- never modifies source code. 4-dimension system: security (SEC), correctness (COR), performance (PRF), maintainability (MNT).

## Phase 2: Context & Toolchain Detection

| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/.msg/meta.json | No |

1. Extract session path, target, dimensions, quick flag from task description
2. Resolve target files (glob pattern or directory -> `**/*.{ts,tsx,js,jsx,py,go,java,rs}`)
3. If no source files found -> report empty, complete task cleanly
4. Detect toolchain availability:

| Tool | Detection | Dimension |
|------|-----------|-----------|
| tsc | `tsconfig.json` exists | COR |
| eslint | `.eslintrc*` or `eslint` in package.json | COR/MNT |
| semgrep | `.semgrep.yml` exists | SEC |
| ruff | `pyproject.toml` + ruff available | SEC/COR/MNT |
| mypy | mypy available + `pyproject.toml` | COR |
| npmAudit | `package-lock.json` exists | SEC |

5. Load wisdom files from `<session>/wisdom/` if they exist

## Phase 3: Scan Execution

**Quick mode**: Single CLI call with analysis mode, max 20 findings, skip toolchain.

**Standard mode** (sequential):

### 3A: Toolchain Scan

Run detected tools in parallel via Bash backgrounding. Each tool writes to `<session>/scan/tmp/<tool>.{json|txt}`. After `wait`, parse each output into normalized findings:
- tsc: `file(line,col): error TSxxxx: msg` -> dimension=correctness, source=tool:tsc
- eslint: JSON array -> severity 2=correctness/high, else=maintainability/medium
- semgrep: `{results[]}` -> dimension=security, severity from extra.severity
- ruff: `[{code,message,filename}]` -> S*=security, F*/B*=correctness, else=maintainability
- mypy: `file:line: error: msg [code]` -> dimension=correctness
- npm audit: `{vulnerabilities:{}}` -> dimension=security, category=dependency

Write `<session>/scan/toolchain-findings.json`.
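The fan-out/wait pattern can be sketched as follows, using Python subprocesses in place of Bash backgrounding; the `echo` command is a placeholder standing in for a real tsc/eslint/semgrep invocation.

```python
# Sketch of the parallel toolchain pass: start every detected tool,
# wait for all of them (the equivalent of Bash `wait`), then collect
# each tool's output file. Commands here are placeholders (echo).
import os
import subprocess
import tempfile

def run_tools(tools, outdir):
    """tools: {name: argv list}. Returns {name: captured stdout}."""
    procs = {}
    for name, cmd in tools.items():
        out = open(os.path.join(outdir, f"{name}.txt"), "w")
        procs[name] = (subprocess.Popen(cmd, stdout=out), out)
    results = {}
    for name, (p, out) in procs.items():
        p.wait()                       # block until this tool finishes
        out.close()
        with open(out.name) as f:
            results[name] = f.read().strip()
    return results

with tempfile.TemporaryDirectory() as d:
    res = run_tools({"tsc": ["echo", "0 errors"]}, d)
    print(res["tsc"])  # -> 0 errors
```

Each tool runs concurrently because `Popen` returns immediately; only the collection loop blocks.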

### 3B: Semantic Scan (LLM via CLI)

Build a prompt with target file patterns, a toolchain dedup summary, and per-dimension focus areas:
- SEC: Business logic vulnerabilities, privilege escalation, sensitive data flow, auth bypass
- COR: Logic errors, unhandled exception paths, state management bugs, race conditions
- PRF: Algorithm complexity, N+1 queries, unnecessary sync, memory leaks, missing caching
- MNT: Architectural coupling, abstraction leaks, convention violations, dead code

Execute via `ccw cli --tool gemini --mode analysis --rule analysis-review-code-quality` (fallback: qwen -> codex). Parse the JSON array response, validate required fields (dimension, title, location.file), enforce the per-dimension limit (max 5 each), filter by minimum severity (medium+). Write `<session>/scan/semantic-findings.json`.

## Phase 4: Aggregate & Output

1. Merge toolchain + semantic findings, deduplicate (same file + line + dimension = duplicate)
2. Assign dimension-prefixed IDs: SEC-001, COR-001, PRF-001, MNT-001
3. Write `<session>/scan/scan-results.json` with schema: `{scan_date, target, dimensions, quick_mode, total_findings, by_severity, by_dimension, findings[]}`
4. Each finding: `{id, dimension, category, severity, title, description, location:{file,line}, source, suggested_fix, effort, confidence}`
5. Update `<session>/.msg/meta.json` with scan summary (findings_count, by_severity, by_dimension)
6. Contribute discoveries to `<session>/wisdom/` files
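Steps 1-2 above can be sketched as:

```python
# Sketch of aggregation: dedupe on (file, line, dimension), then assign
# dimension-prefixed sequential IDs (SEC-001, COR-001, ...) to survivors.
PREFIX = {"security": "SEC", "correctness": "COR",
          "performance": "PRF", "maintainability": "MNT"}

def aggregate(findings):
    seen, merged = set(), []
    for f in findings:
        key = (f["location"]["file"], f["location"]["line"], f["dimension"])
        if key in seen:
            continue  # same file + line + dimension counts as a duplicate
        seen.add(key)
        merged.append(f)
    counters = {}
    for f in merged:
        p = PREFIX[f["dimension"]]
        counters[p] = counters.get(p, 0) + 1
        f["id"] = f"{p}-{counters[p]:03d}"
    return merged

findings = [
    {"dimension": "security", "location": {"file": "a.ts", "line": 4}},
    {"dimension": "security", "location": {"file": "a.ts", "line": 4}},  # dup
    {"dimension": "correctness", "location": {"file": "a.ts", "line": 9}},
]
result = aggregate(findings)
print([f["id"] for f in result])  # -> ['SEC-001', 'COR-001']
```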

@@ -1,143 +0,0 @@
# Team Review — CSV Schema

## Master CSV: tasks.csv

### Column Definitions

#### Input Columns (Set by Decomposer)

| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier | `"1"` |
| `title` | string | Yes | Short task title | `"Scan codebase"` |
| `description` | string | Yes | Detailed task description (self-contained) | `"Run toolchain + LLM scan on src/**/*.ts"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"1;2"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"1"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |
| `dimension` | string | Yes | Review dimensions (comma-separated: sec,cor,prf,mnt) | `"sec,cor,prf,mnt"` |
| `target` | string | Yes | Target path or pattern for analysis | `"src/**/*.ts"` |

#### Computed Columns (Set by Wave Engine)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[Task 1] Scan found 15 issues..."` |

#### Output Columns (Set by Agent)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` → `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Found 15 security issues, 8 correctness bugs"` |
| `error` | string | Error message if failed | `""` |

---

### exec_mode Values

| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |

Interactive tasks appear in the master CSV for dependency tracking but are NOT included in wave-{N}.csv files.

---

### Example Data

```csv
id,title,description,deps,context_from,exec_mode,dimension,target,wave,status,findings,error
1,Scan codebase,Run toolchain + LLM scan on target files,,,"csv-wave","sec,cor,prf,mnt","src/**/*.ts",1,pending,"",""
2,Review findings,Deep analysis with root cause enrichment,1,1,"csv-wave","sec,cor,prf,mnt","scan-results.json",2,pending,"",""
3,Fix issues,Apply fixes with rollback-on-failure,2,2,"interactive","","review-report.json",3,pending,"",""
```

---

### Column Lifecycle

```
Decomposer (Phase 1)      Wave Engine (Phase 2)      Agent (Execution)
─────────────────────     ────────────────────       ─────────────────
id           ───────────► id           ──────────►   id
title        ───────────► title        ──────────►   (reads)
description  ───────────► description  ──────────►   (reads)
deps         ───────────► deps         ──────────►   (reads)
context_from ───────────► context_from ──────────►   (reads)
exec_mode    ───────────► exec_mode    ──────────►   (reads)
dimension    ───────────► dimension    ──────────►   (reads)
target       ───────────► target       ──────────►   (reads)
                          wave         ──────────►   (reads)
                          prev_context ──────────►   (reads)
                                                     status
                                                     findings
                                                     error
```

---

## Output Schema (JSON)

Agent output via `report_agent_job_result` (csv-wave tasks):

```json
{
  "id": "1",
  "status": "completed",
  "findings": "Found 15 security issues (3 critical, 5 high, 7 medium), 8 correctness bugs, 4 performance issues, 12 maintainability concerns. Toolchain: tsc (5 errors), eslint (8 warnings), semgrep (3 vulnerabilities). LLM scan: 26 semantic issues.",
  "error": ""
}
```

Interactive tasks output via structured text or JSON written to `interactive/{id}-result.json`.

---

## Discovery Types

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `finding` | `file+line+dimension` | `{dimension, file, line, severity, title}` | Code issue discovered by scanner |
| `root_cause` | `finding_id` | `{finding_id, description, related_findings[]}` | Root cause analysis from reviewer |
| `fix_applied` | `file+line` | `{file, line, fix_strategy, status}` | Fix application result from fixer |
| `pattern` | `pattern_name` | `{pattern, files[], occurrences}` | Code pattern identified across files |

### Discovery NDJSON Format

```jsonl
{"ts":"2026-03-08T14:30:22Z","worker":"1","type":"finding","data":{"dimension":"sec","file":"src/auth.ts","line":42,"severity":"high","title":"SQL injection vulnerability"}}
{"ts":"2026-03-08T14:35:10Z","worker":"2","type":"root_cause","data":{"finding_id":"SEC-001","description":"Unsanitized user input in query","related_findings":["SEC-002"]}}
{"ts":"2026-03-08T14:40:05Z","worker":"3","type":"fix_applied","data":{"file":"src/auth.ts","line":42,"fix_strategy":"minimal","status":"fixed"}}
```

> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.

---

## Cross-Mechanism Context Flow

| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |

---

## Validation Rules

| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has description | "Empty description for task: {id}" |
| Status enum | status ∈ {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Cross-mechanism deps | Interactive→CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |
| Dimension valid | dimension ∈ {sec, cor, prf, mnt} or combinations | "Invalid dimension: {dimension}" |
| Target non-empty | Every task has target | "Empty target for task: {id}" |
82  .codex/skills/team-review/specs/dimensions.md  (new file)
@@ -0,0 +1,82 @@
# Review Dimensions (4-Dimension System)

## Security (SEC)

Vulnerabilities, attack surfaces, and data protection issues.

**Categories**: injection, authentication, authorization, data-exposure, encryption, input-validation, access-control

**Tool Support**: Semgrep (`--config auto`), npm audit, tsc strict mode
**LLM Focus**: Business logic vulnerabilities, privilege escalation paths, sensitive data flows

**Severity Mapping**:
- Critical: RCE, SQL injection, auth bypass, data breach
- High: XSS, CSRF, insecure deserialization, weak crypto
- Medium: Missing input validation, overly permissive CORS
- Low: Informational headers, minor config issues

---

## Correctness (COR)

Bugs, logic errors, and type safety issues.

**Categories**: bug, error-handling, edge-case, type-safety, race-condition, null-reference

**Tool Support**: tsc `--noEmit`, ESLint error-level rules
**LLM Focus**: Logic errors, unhandled exception paths, state management bugs, race conditions

**Severity Mapping**:
- Critical: Data corruption, crash in production path
- High: Incorrect business logic, unhandled error in common path
- Medium: Edge case not handled, missing null check
- Low: Minor type inconsistency, unused variable

---

## Performance (PRF)

Inefficiencies, resource waste, and scalability issues.

**Categories**: n-plus-one, memory-leak, blocking-operation, complexity, resource-usage, caching

**Tool Support**: None (LLM-only dimension)
**LLM Focus**: Algorithm complexity, N+1 queries, unnecessary sync operations, memory leaks, missing caching

**Severity Mapping**:
- Critical: Memory leak in long-running process, O(n³) on user data
- High: N+1 query in hot path, blocking I/O in async context
- Medium: Suboptimal algorithm, missing obvious cache
- Low: Minor inefficiency, premature optimization opportunity

---

## Maintainability (MNT)

Code quality, readability, and structural health.

**Categories**: code-smell, naming, complexity, duplication, dead-code, pattern-violation, coupling

**Tool Support**: ESLint warning-level rules, complexity metrics
**LLM Focus**: Architectural coupling, abstraction leaks, project convention violations

**Severity Mapping**:
- High: God class, circular dependency, copy-paste across modules
- Medium: Long method, magic numbers, unclear naming
- Low: Minor style inconsistency, commented-out code
- Info: Pattern observation, refactoring suggestion

---

## Why 4 Dimensions (Not 7)

The original review-cycle used 7 dimensions with significant overlap:

| Original | Problem | Merged Into |
|----------|---------|-------------|
| Quality | Overlaps Maintainability + Best-Practices | **Maintainability** |
| Best-Practices | Overlaps Quality + Maintainability | **Maintainability** |
| Architecture | Overlaps Maintainability (coupling/layering) | **Maintainability** (structure) + **Security** (security architecture) |
| Action-Items | Not a dimension — it's a report format | Standard field on every finding |

4 dimensions = clear ownership, no overlap, each maps to distinct tooling.
82  .codex/skills/team-review/specs/finding-schema.json  (new file)
@@ -0,0 +1,82 @@
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "Finding",
  "description": "Standardized finding format for team-review pipeline",
  "type": "object",
  "required": ["id", "dimension", "category", "severity", "title", "description", "location", "source", "effort", "confidence"],
  "properties": {
    "id": {
      "type": "string",
      "pattern": "^(SEC|COR|PRF|MNT)-\\d{3}$",
      "description": "{DIM_PREFIX}-{SEQ}"
    },
    "dimension": {
      "type": "string",
      "enum": ["security", "correctness", "performance", "maintainability"]
    },
    "category": {
      "type": "string",
      "description": "Sub-category within the dimension"
    },
    "severity": {
      "type": "string",
      "enum": ["critical", "high", "medium", "low", "info"]
    },
    "title": { "type": "string" },
    "description": { "type": "string" },
    "location": {
      "type": "object",
      "required": ["file", "line"],
      "properties": {
        "file": { "type": "string" },
        "line": { "type": "integer" },
        "end_line": { "type": "integer" },
        "code_snippet": { "type": "string" }
      }
    },
    "source": {
      "type": "string",
      "description": "tool:eslint | tool:tsc | tool:semgrep | llm | tool+llm"
    },
    "tool_rule": { "type": ["string", "null"] },
    "suggested_fix": { "type": "string" },
    "references": {
      "type": "array",
      "items": { "type": "string" }
    },
    "effort": { "type": "string", "enum": ["low", "medium", "high"] },
    "confidence": { "type": "string", "enum": ["high", "medium", "low"] },
    "root_cause": {
      "type": ["object", "null"],
      "description": "Populated by reviewer role",
      "properties": {
        "description": { "type": "string" },
        "related_findings": { "type": "array", "items": { "type": "string" } },
        "is_symptom": { "type": "boolean" }
      }
    },
    "impact": {
      "type": ["object", "null"],
      "properties": {
        "scope": { "type": "string", "enum": ["low", "medium", "high"] },
        "affected_files": { "type": "array", "items": { "type": "string" } },
        "blast_radius": { "type": "string" }
      }
    },
    "optimization": {
      "type": ["object", "null"],
      "properties": {
        "approach": { "type": "string" },
        "alternative": { "type": "string" },
        "tradeoff": { "type": "string" }
      }
    },
    "fix_strategy": { "type": ["string", "null"], "enum": ["minimal", "refactor", "skip", null] },
    "fix_complexity": { "type": ["string", "null"], "enum": ["low", "medium", "high", null] },
    "fix_dependencies": {
      "type": "array",
      "items": { "type": "string" },
      "default": []
    }
  }
}
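For illustration, a minimal finding instance that satisfies the schema's required fields, with two of its constraints (the ID pattern and the severity enum) spot-checked without a full JSON Schema validator. All field values are hypothetical.

```python
# Illustrative finding instance; the asserts mirror two schema
# constraints (ID regex, severity enum), not full schema validation.
import re

finding = {
    "id": "SEC-001",
    "dimension": "security",
    "category": "injection",
    "severity": "high",
    "title": "SQL injection vulnerability",
    "description": "Unsanitized user input reaches a raw query.",
    "location": {"file": "src/auth.ts", "line": 42},
    "source": "tool:semgrep",
    "effort": "low",
    "confidence": "high",
}

assert re.fullmatch(r"(SEC|COR|PRF|MNT)-\d{3}", finding["id"])
assert finding["severity"] in {"critical", "high", "medium", "low", "info"}
print("finding ok")  # -> finding ok
```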

102  .codex/skills/team-review/specs/pipelines.md  (new file)
@@ -0,0 +1,102 @@
# Review Pipelines

Pipeline definitions and task registry for team-review.

## Pipeline Modes

| Mode | Description | Tasks |
|------|-------------|-------|
| default | Scan + review | SCAN -> REV |
| full | Scan + review + fix | SCAN -> REV -> [confirm] -> FIX |
| fix-only | Fix from existing manifest | FIX |
| quick | Quick scan only | SCAN (quick=true) |

## Pipeline Definitions

### default Mode (2 tasks, linear)

```
SCAN-001 -> REV-001
```

| Task ID | Role | Dependencies | Description |
|---------|------|-------------|-------------|
| SCAN-001 | scanner | (none) | Multi-dimension code scan (toolchain + LLM) |
| REV-001 | reviewer | SCAN-001 | Deep finding analysis and review report |

### full Mode (3 tasks, linear with user checkpoint)

```
SCAN-001 -> REV-001 -> [user confirm] -> FIX-001
```

| Task ID | Role | Dependencies | Description |
|---------|------|-------------|-------------|
| SCAN-001 | scanner | (none) | Multi-dimension code scan (toolchain + LLM) |
| REV-001 | reviewer | SCAN-001 | Deep finding analysis and review report |
| FIX-001 | fixer | REV-001 + user confirm | Plan + execute + verify fixes |

### fix-only Mode (1 task)

```
FIX-001
```

| Task ID | Role | Dependencies | Description |
|---------|------|-------------|-------------|
| FIX-001 | fixer | (none) | Execute fixes from existing manifest |

### quick Mode (1 task)

```
SCAN-001 (quick=true)
```

| Task ID | Role | Dependencies | Description |
|---------|------|-------------|-------------|
| SCAN-001 | scanner | (none) | Quick scan, max 20 findings, skip toolchain |
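The mode-to-registry mapping above can be sketched as a lookup table. Note that `full` mode's user-confirm gate between REV-001 and FIX-001 is a runtime checkpoint and is not modeled here; task tuples are (task_id, role, deps).

```python
# Sketch: pipeline mode -> task registry, mirroring the tables above.
PIPELINES = {
    "default": [("SCAN-001", "scanner", []),
                ("REV-001", "reviewer", ["SCAN-001"])],
    "full": [("SCAN-001", "scanner", []),
             ("REV-001", "reviewer", ["SCAN-001"]),
             ("FIX-001", "fixer", ["REV-001"])],  # + user confirm at runtime
    "fix-only": [("FIX-001", "fixer", [])],
    "quick": [("SCAN-001", "scanner", [])],       # quick=true, no toolchain
}

def task_registry(mode):
    if mode not in PIPELINES:
        raise ValueError(f"unknown pipeline mode: {mode}")
    return PIPELINES[mode]

print(len(task_registry("full")))  # -> 3
```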
|
||||
|
||||
## Review Dimensions (4-Dimension System)
|
||||
|
||||
| Dimension | Code | Focus |
|
||||
|-----------|------|-------|
|
||||
| Security | SEC | Vulnerabilities, auth, data exposure |
|
||||
| Correctness | COR | Bugs, logic errors, type safety |
|
||||
| Performance | PRF | N+1, memory leaks, blocking ops |
|
||||
| Maintainability | MNT | Coupling, complexity, dead code |
|
||||
|
||||
## Fix Scope Options
|
||||
|
||||
| Scope | Description |
|
||||
|-------|-------------|
|
||||
| all | Fix all findings |
|
||||
| critical,high | Fix critical and high severity only |
|
||||
| skip | Skip fix phase |
|
||||
|
||||
## Session Directory

```
.workflow/.team/RV-<slug>-<YYYY-MM-DD>/
├── .msg/messages.jsonl          # Message bus log
├── .msg/meta.json               # Session state + cross-role state
├── wisdom/                      # Cross-task knowledge
│   ├── learnings.md
│   ├── decisions.md
│   ├── conventions.md
│   └── issues.md
├── scan/                        # Scanner output
│   ├── toolchain-findings.json
│   ├── semantic-findings.json
│   └── scan-results.json
├── review/                      # Reviewer output
│   ├── enriched-findings.json
│   ├── review-report.json
│   └── review-report.md
└── fix/                         # Fixer output
    ├── fix-manifest.json
    ├── fix-plan.json
    ├── execution-results.json
    ├── verify-results.json
    ├── fix-summary.json
    └── fix-summary.md
```
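Session directories follow the `RV-<slug>-<YYYY-MM-DD>` pattern under `.workflow/.team/`. A small helper for building that path — the slugging rule (lowercase, non-alphanumerics collapsed to `-`) is an assumption for illustration:

```python
import datetime
import pathlib
import re

def session_dir(target: str, today: datetime.date) -> pathlib.Path:
    """Build the session directory path for a review target."""
    # Assumed slug rule: lowercase, runs of non-alphanumerics become '-'.
    slug = re.sub(r"[^a-z0-9]+", "-", target.lower()).strip("-")
    return pathlib.Path(".workflow/.team") / f"RV-{slug}-{today:%Y-%m-%d}"

print(session_dir("src/auth", datetime.date(2026, 3, 26)))
# .workflow/.team/RV-src-auth-2026-03-26
```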
.codex/skills/team-review/specs/team-config.json (new file, 27 lines)
@@ -0,0 +1,27 @@
```json
{
  "name": "team-review",
  "description": "Code scanning, vulnerability review, optimization suggestions, and automated fix",
  "sessionDir": ".workflow/.team-review/",
  "msgDir": ".workflow/.team-msg/team-review/",
  "roles": {
    "coordinator": { "prefix": "RC", "type": "orchestration", "file": "roles/coordinator/role.md" },
    "scanner": { "prefix": "SCAN", "type": "read-only-analysis", "file": "roles/scanner/role.md" },
    "reviewer": { "prefix": "REV", "type": "read-only-analysis", "file": "roles/reviewer/role.md" },
    "fixer": { "prefix": "FIX", "type": "code-generation", "file": "roles/fixer/role.md" }
  },
  "collaboration_pattern": "CP-1",
  "pipeline": ["scanner", "reviewer", "fixer"],
  "dimensions": {
    "security": { "prefix": "SEC", "tools": ["semgrep", "npm-audit"] },
    "correctness": { "prefix": "COR", "tools": ["tsc", "eslint-error"] },
    "performance": { "prefix": "PRF", "tools": [] },
    "maintainability": { "prefix": "MNT", "tools": ["eslint-warning"] }
  },
  "severity_levels": ["critical", "high", "medium", "low", "info"],
  "defaults": {
    "max_deep_analysis": 15,
    "max_quick_findings": 20,
    "max_parallel_fixers": 3,
    "quick_fix_threshold": 5
  }
}
```
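Since the coordinator drives the pipeline from this config, a cheap consistency check is that every role named in `"pipeline"` is also declared under `"roles"`. A sketch of such a check (the validator is illustrative, not part of the skill):

```python
def validate_config(cfg: dict) -> list:
    """Return a list of error strings; empty means the config is consistent."""
    errors = []
    for role in cfg.get("pipeline", []):
        if role not in cfg.get("roles", {}):
            errors.append(f"pipeline role not declared: {role}")
    return errors

# Inline fragment mirroring team-config.json above.
cfg = {
    "roles": {"coordinator": {}, "scanner": {}, "reviewer": {}, "fixer": {}},
    "pipeline": ["scanner", "reviewer", "fixer"],
}
print(validate_config(cfg))  # []
```

In practice the same check would run against the parsed contents of `.codex/skills/team-review/specs/team-config.json` via `json.load`.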