feat: migrate all codex team skills from spawn_agents_on_csv to spawn_agent + wait_agent architecture

- Delete 21 old team skill directories using CSV-wave pipeline pattern (~100+ files)
- Delete old team-lifecycle (v3) and team-planex-v2
- Create generic team-worker.toml and team-supervisor.toml (replacing tlv4-specific TOMLs)
- Convert 19 team skills from Claude Code format (Agent/SendMessage/TaskCreate)
  to Codex format (spawn_agent/wait_agent/tasks.json/request_user_input)
- Update team-lifecycle-v4 to use generic agent types (team_worker/team_supervisor)
- Convert all coordinator role files: dispatch.md, monitor.md, role.md
- Convert all worker role files: remove run_in_background, fix Bash syntax
- Convert all specs/pipelines.md references
- Final state: 20 team skills, 217 .md files, zero Claude Code API residuals

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This commit is contained in:
catlog22
2026-03-24 16:54:48 +08:00
parent 54283e5dbb
commit 1e560ab8e8
334 changed files with 28996 additions and 35516 deletions


@@ -1,825 +1,134 @@
---
name: team-iterdev
description: Iterative development team with Generator-Critic loop, dynamic pipeline selection (patch/sprint/multi-sprint), task ledger for progress tracking, and shared wisdom for cross-sprint learning.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"task description\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, request_user_input
description: Unified team skill for iterative development team. Pure router — all roles read this file. Beat model is coordinator-only in monitor.md. Generator-Critic loops (developer<->reviewer, max 3 rounds). Triggers on "team iterdev".
allowed-tools: spawn_agent(*), wait_agent(*), send_input(*), close_agent(*), report_agent_job_result(*), request_user_input(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---
# Team IterDev
Iterative development team skill. Generator-Critic loops (developer<->reviewer, max 3 rounds), task ledger (task-ledger.json) for real-time progress, shared memory (cross-sprint learning), and dynamic pipeline selection for incremental delivery.
## Auto Mode
When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.
## Usage
```bash
$team-iterdev "Implement user authentication with JWT"
$team-iterdev -c 4 "Refactor payment module to support multiple gateways"
$team-iterdev -y "Fix login button not responding on mobile"
$team-iterdev --continue "ids-auth-jwt-20260308"
```
**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume existing session
**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)
---
## Overview
Iterative development team skill with Generator-Critic (GC) loops between developer and reviewer roles (max 3 rounds). Automatically selects pipeline complexity (patch/sprint/multi-sprint) based on task signals. Tracks progress via task ledger. Accumulates cross-sprint wisdom in shared discovery board.
**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary for GC loop control and requirement analysis)
## Architecture
```
+-------------------------------------------------------------------------+
| TEAM ITERDEV WORKFLOW |
+-------------------------------------------------------------------------+
| |
| Phase 0: Pre-Wave Interactive |
| +-- Analyze task complexity and select pipeline mode |
| +-- Explore codebase for patterns and dependencies |
| +-- Output: pipeline mode, task analysis, session artifacts |
| |
| Phase 1: Requirement -> CSV + Classification |
| +-- Parse task into pipeline-specific task chain |
| +-- Assign roles: architect, developer, tester, reviewer |
| +-- Classify tasks: csv-wave | interactive (exec_mode) |
| +-- Compute dependency waves (topological sort -> depth grouping) |
| +-- Generate tasks.csv with wave + exec_mode columns |
| +-- User validates task breakdown (skip if -y) |
| |
| Phase 2: Wave Execution Engine (Extended) |
| +-- For each wave (1..N): |
| | +-- Execute pre-wave interactive tasks (if any) |
| | +-- Build wave CSV (filter csv-wave tasks for this wave) |
| | +-- Inject previous findings into prev_context column |
| | +-- spawn_agents_on_csv(wave CSV) |
| | +-- Execute post-wave interactive tasks (if any) |
| | +-- Merge all results into master tasks.csv |
| | +-- Check: any failed? -> skip dependents |
| +-- discoveries.ndjson shared across all modes (append-only) |
| |
| Phase 3: Post-Wave Interactive |
| +-- Generator-Critic (GC) loop control |
| +-- If review has critical issues: trigger DEV-fix -> re-REVIEW |
| +-- Max 3 GC rounds, then force convergence |
| |
| Phase 4: Results Aggregation |
| +-- Export final results.csv |
| +-- Generate context.md with all findings |
| +-- Display summary: completed/failed/skipped per wave |
| +-- Offer: view results | retry failed | done |
| |
+-------------------------------------------------------------------------+
Skill(skill="team-iterdev", args="task description")
|
SKILL.md (this file) = Router
|
+--------------+--------------+
| |
no --role flag --role <name>
| |
Coordinator Worker
roles/coordinator/role.md roles/<name>/role.md
|
+-- analyze -> dispatch -> spawn workers -> STOP
|
+-------+-------+-------+
v v v v
[architect] [developer] [tester] [reviewer]
(team-worker agents, each loads roles/<role>/role.md)
```
---
## Role Registry
| Role | Path | Prefix | Inner Loop |
|------|------|--------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | — | — |
| architect | [roles/architect/role.md](roles/architect/role.md) | DESIGN-* | false |
| developer | [roles/developer/role.md](roles/developer/role.md) | DEV-* | true |
| tester | [roles/tester/role.md](roles/tester/role.md) | VERIFY-* | false |
| reviewer | [roles/reviewer/role.md](roles/reviewer/role.md) | REVIEW-* | false |
## Role Router
Parse `$ARGUMENTS`:
- Has `--role <name>` → Read `roles/<name>/role.md`, execute Phase 2-4
- No `--role` → Read `roles/coordinator/role.md`, execute entry router
## Task Classification Rules
Each task is classified by `exec_mode`:
| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, clarification, inline utility |
**Classification Decision**:
| Task Property | Classification |
|---------------|---------------|
| Architecture design (DESIGN-*) | `csv-wave` |
| Code implementation (DEV-*) | `csv-wave` |
| Test execution and fix cycle (VERIFY-*) | `csv-wave` |
| Code review (REVIEW-*) | `csv-wave` |
| Fix task from review feedback (DEV-fix-*) | `csv-wave` |
| GC loop control (decide revision vs convergence) | `interactive` |
| Task analysis and pipeline selection (Phase 0) | `interactive` |
## Shared Constants
- **Session prefix**: `IDS`
- **Session path**: `.workflow/.team/IDS-<slug>-<date>/`
- **CLI tools**: `ccw cli --mode analysis` (read-only), `ccw cli --mode write` (modifications)
- **Message bus**: `mcp__ccw-tools__team_msg(session_id=<session-id>, ...)`
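As a non-authoritative illustration, the `--role` dispatch described above could be sketched as follows (the function name is hypothetical; only the path convention comes from this document):

```javascript
// Resolve which role spec file to load from the raw skill arguments.
// No --role flag means the main agent acts as coordinator.
function routeRole(args) {
  const m = args.match(/--role\s+(\S+)/)
  return m ? `roles/${m[1]}/role.md` : 'roles/coordinator/role.md'
}
```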
---
## Pipeline Selection Logic
| Signal | Score |
|--------|-------|
| Changed files > 10 | +3 |
| Changed files 3-10 | +2 |
| Structural change (refactor, architect, restructure) | +3 |
| Cross-cutting concern (multiple, across, cross) | +2 |
| Simple fix (fix, bug, typo, patch) | -2 |
| Score | Pipeline |
|-------|----------|
| >= 5 | multi-sprint |
| 2-4 | sprint |
| 0-1 | patch |
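The two tables above combine into a simple additive score. A minimal sketch, assuming signal detection has already been reduced to counts and booleans (names are illustrative):

```javascript
// Score task complexity signals per the table above.
function scorePipeline(signals) {
  let score = 0
  if (signals.changedFiles > 10) score += 3
  else if (signals.changedFiles >= 3) score += 2
  if (signals.structural) score += 3    // refactor, architect, restructure
  if (signals.crossCutting) score += 2  // multiple, across, cross
  if (signals.simpleFix) score -= 2     // fix, bug, typo, patch
  return score
}

// Map the score onto a pipeline mode per the threshold table.
function selectPipeline(score) {
  if (score >= 5) return 'multi-sprint'
  if (score >= 2) return 'sprint'
  return 'patch'
}
```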
### Pipeline Definitions
**Patch** (2 tasks, serial):
```
DEV-001 -> VERIFY-001
```
**Sprint** (4 tasks, with parallel window):
```
DESIGN-001 -> DEV-001 -> [VERIFY-001 + REVIEW-001] (parallel)
```
**Multi-Sprint** (5+ tasks, iterative with GC loop):
```
Sprint 1: DESIGN-001 -> DEV-001 -> [VERIFY-001 + REVIEW-001] -> DEV-fix (if needed) -> REVIEW-002
Sprint 2+ created dynamically
```
---
## CSV Schema
### tasks.csv (Master State)
```csv
id,title,description,role,pipeline,sprint_num,gc_round,deps,context_from,exec_mode,wave,status,findings,review_score,gc_signal,error
"DESIGN-001","Technical design and task breakdown","Explore codebase, create component design, break into implementable tasks with acceptance criteria","architect","sprint","1","0","","","csv-wave","1","pending","","","",""
"DEV-001","Implement design","Load design and task breakdown, implement tasks in execution order, validate syntax","developer","sprint","1","0","DESIGN-001","DESIGN-001","csv-wave","2","pending","","","",""
"VERIFY-001","Verify implementation","Detect test framework, run targeted tests, run regression suite","tester","sprint","1","0","DEV-001","DEV-001","csv-wave","3","pending","","","",""
```
**Columns**:
| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (string) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description |
| `role` | Input | Worker role: architect, developer, tester, reviewer |
| `pipeline` | Input | Pipeline mode: patch, sprint, multi-sprint |
| `sprint_num` | Input | Sprint number (1-based, for multi-sprint) |
| `gc_round` | Input | Generator-Critic round number (0 = initial, 1+ = fix round) |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `review_score` | Output | Quality score 1-10 (reviewer only) |
| `gc_signal` | Output | `REVISION_NEEDED` or `CONVERGED` (reviewer only) |
| `error` | Output | Error message if failed (empty if success) |
### Per-Wave CSV (Temporary)
Each wave generates a temporary `wave-{N}.csv` with extra `prev_context` column (csv-wave tasks only).
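For reference, a minimal parser for one row of this quoted-cell format (every cell double-quoted, embedded quotes doubled). This is an illustrative sketch, not the pipeline's actual `parseCsv` helper:

```javascript
// Parse a single CSV line where cells may be quoted and quotes are escaped as "".
function parseCsvLine(line) {
  const cells = []
  let cur = '', inQuotes = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++ }  // escaped quote
      else if (ch === '"') inQuotes = false                        // closing quote
      else cur += ch
    } else if (ch === '"') inQuotes = true
    else if (ch === ',') { cells.push(cur); cur = '' }
    else cur += ch
  }
  cells.push(cur)
  return cells
}
```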
---
## Agent Registry (Interactive Agents)
| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| task-analyzer | agents/task-analyzer.md | 2.3 (wait-respond) | Analyze task complexity, select pipeline mode, detect capabilities | standalone (Phase 0) |
| gc-controller | agents/gc-controller.md | 2.3 (wait-respond) | Evaluate review severity, decide DEV-fix vs convergence | post-wave (after REVIEW wave) |
> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload.
---
## Output Artifacts
| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |
| `wisdom/` | Cross-sprint knowledge accumulation | Updated by agents via discoveries |
---
## Session Structure
```
.workflow/.csv-wave/{session-id}/
+-- tasks.csv # Master state (all tasks, both modes)
+-- results.csv # Final results export
+-- discoveries.ndjson # Shared discovery board (all agents)
+-- context.md # Human-readable report
+-- wave-{N}.csv # Temporary per-wave input (csv-wave only)
+-- interactive/ # Interactive task artifacts
| +-- {id}-result.json # Per-task results
+-- wisdom/ # Cross-sprint knowledge
| +-- learnings.md
| +-- decisions.md
| +-- conventions.md
| +-- issues.md
+-- design/ # Architect output
| +-- design-001.md
| +-- task-breakdown.json
+-- code/ # Developer tracking
| +-- dev-log.md
+-- verify/ # Tester output
| +-- verify-001.json
+-- review/ # Reviewer output
+-- review-001.md
```
## Worker Spawn Template
Coordinator spawns workers using this template:
spawn_agent({
  agent_type: "team_worker",
  items: [
    { type: "text", text: `## Role Assignment
role: <role>
role_spec: <skill_root>/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
requirement: <task-description>
inner_loop: <true|false>
---
Read role_spec file (<skill_root>/roles/<role>/role.md) to load Phase 2-4 domain instructions.` },
    { type: "text", text: `## Task Context
task_id: <task-id>
title: <task-title>
description: <task-description>
pipeline_phase: <pipeline-phase>` },
    { type: "text", text: `## Upstream Context
<prev_context>` }
  ]
})
After spawning, use `wait_agent({ ids: [...], timeout_ms: 900000 })` to collect results, then `close_agent({ id })` each worker.
## Implementation
### Session Initialization
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
// Parse flags
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3
// Clean requirement text (remove flags)
const requirement = $ARGUMENTS
.replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
.trim()
const slug = requirement.toLowerCase()
.replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
.substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
let sessionId = `ids-${slug}-${dateStr}`
let sessionFolder = `.workflow/.csv-wave/${sessionId}`
// Continue mode: find existing session
if (continueMode) {
  const existing = Bash(`ls -dt .workflow/.csv-wave/ids-* 2>/dev/null | head -1`).trim()
if (existing) {
sessionId = existing.split('/').pop()
sessionFolder = existing
// Read existing tasks.csv, find incomplete waves, resume from Phase 2
}
}
Bash(`mkdir -p ${sessionFolder}/{interactive,wisdom,design,code,verify,review}`)
// Initialize wisdom files
Write(`${sessionFolder}/wisdom/learnings.md`, `# Learnings\n\n`)
Write(`${sessionFolder}/wisdom/decisions.md`, `# Decisions\n\n`)
Write(`${sessionFolder}/wisdom/conventions.md`, `# Conventions\n\n`)
Write(`${sessionFolder}/wisdom/issues.md`, `# Issues\n\n`)
```
---
### Phase 0: Pre-Wave Interactive
**Objective**: Analyze task complexity, explore codebase, and select pipeline mode.
**Execution**:
```javascript
const analyzer = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~ or <project>/.codex/skills/team-iterdev/agents/task-analyzer.md (MUST read first)
2. Read: .workflow/project-tech.json (if exists)
---
Goal: Analyze iterative development task and select pipeline mode
Requirement: ${requirement}
### Task
1. Detect capabilities from keywords:
- design/architect/restructure -> architect role needed
- implement/build/code/fix -> developer role needed
- test/verify/validate -> tester role needed
- review/audit/quality -> reviewer role needed
2. Score complexity for pipeline selection:
- Changed files > 10: +3, 3-10: +2
- Structural change: +3
- Cross-cutting: +2
- Simple fix: -2
3. Score >= 5 -> multi-sprint, 2-4 -> sprint, 0-1 -> patch
4. Return structured analysis result
`
})
const analyzerResult = wait({ ids: [analyzer], timeout_ms: 120000 })
if (analyzerResult.timed_out) {
send_input({ id: analyzer, message: "Please finalize and output current findings." })
const retry = wait({ ids: [analyzer], timeout_ms: 60000 })
}
close_agent({ id: analyzer })
// Store analysis result
Write(`${sessionFolder}/interactive/task-analyzer-result.json`, JSON.stringify({
task_id: "task-analysis",
status: "completed",
pipeline_mode: parsedMode, // "patch" | "sprint" | "multi-sprint"
capabilities: parsedCapabilities,
complexity_score: parsedScore,
roles_needed: parsedRoles,
timestamp: getUtc8ISOString()
}))
```
If not AUTO_YES, present pipeline mode selection for confirmation:
```javascript
if (!AUTO_YES) {
const answer = request_user_input({
questions: [{
question: `Task: "${requirement}" — Recommended: ${pipeline_mode}. Approve or override?`,
header: "Pipeline",
id: "pipeline_select",
options: [
{ label: "Approve (Recommended)", description: `Use ${pipeline_mode} pipeline (complexity: ${complexity_score})` },
{ label: "Patch", description: "Simple fix: DEV -> VERIFY (2 tasks)" },
{ label: "Sprint/Multi", description: "Standard or complex: DESIGN -> DEV -> VERIFY + REVIEW" }
]
}]
})
}
```
## User Commands
| Command | Action |
|---------|--------|
| `check` / `status` | View execution status graph |
| `resume` / `continue` | Advance to next step |
## Session Directory
```
.workflow/.team/IDS-<slug>-<YYYY-MM-DD>/
├── .msg/
│   ├── messages.jsonl       # Team message bus
│   └── meta.json            # Session state
├── task-analysis.json # Coordinator analyze output
├── task-ledger.json # Real-time task progress ledger
├── wisdom/ # Cross-task knowledge accumulation
│ ├── learnings.md
│ ├── decisions.md
│ ├── conventions.md
│ └── issues.md
├── design/ # Architect output
│ ├── design-001.md
│ └── task-breakdown.json
├── code/ # Developer tracking
│ └── dev-log.md
├── verify/ # Tester output
│ └── verify-001.json
└── review/ # Reviewer output
└── review-001.md
```
**Success Criteria**:
- Pipeline mode selected and confirmed
- Task analysis stored in session
- Interactive agents closed, results stored
## Specs Reference
- [specs/pipelines.md](specs/pipelines.md) — Pipeline definitions and task registry
---
### Phase 1: Requirement -> CSV + Classification
**Objective**: Build tasks.csv from selected pipeline mode with proper wave assignments.
**Decomposition Rules**:
| Pipeline | Tasks | Wave Structure |
|----------|-------|---------------|
| patch | DEV-001 -> VERIFY-001 | 2 waves, serial |
| sprint | DESIGN-001 -> DEV-001 -> VERIFY-001 + REVIEW-001 | 3 waves (VERIFY and REVIEW parallel in wave 3) |
| multi-sprint | DESIGN-001 -> DEV-001 -> VERIFY-001 + REVIEW-001 -> DEV-fix + REVIEW-002 | 4+ waves, with GC loop |
**Pipeline Task Definitions**:
#### Patch Pipeline (2 csv-wave tasks)
| Task ID | Role | Wave | Deps | Description |
|---------|------|------|------|-------------|
| DEV-001 | developer | 1 | (none) | Implement fix: load target files, apply changes, validate syntax |
| VERIFY-001 | tester | 2 | DEV-001 | Verify fix: detect test framework, run targeted tests, check for regressions |
#### Sprint Pipeline (4 csv-wave tasks)
| Task ID | Role | Wave | Deps | Description |
|---------|------|------|------|-------------|
| DESIGN-001 | architect | 1 | (none) | Technical design: explore codebase, create component design, task breakdown |
| DEV-001 | developer | 2 | DESIGN-001 | Implement design: load design and task breakdown, implement in order, validate syntax |
| VERIFY-001 | tester | 3 | DEV-001 | Verify implementation: detect framework, run targeted tests, run regression suite |
| REVIEW-001 | reviewer | 3 | DEV-001 | Code review: load changes and design, review across correctness/completeness/maintainability/security, score quality |
#### Multi-Sprint Pipeline (5+ csv-wave tasks + GC control)
| Task ID | Role | Wave | Deps | Description |
|---------|------|------|------|-------------|
| DESIGN-001 | architect | 1 | (none) | Technical design and task breakdown for sprint 1 |
| DEV-001 | developer | 2 | DESIGN-001 | First implementation batch |
| VERIFY-001 | tester | 3 | DEV-001 | Test execution and fix cycle |
| REVIEW-001 | reviewer | 3 | DEV-001 | Code review with GC signal |
| GC-CHECK-001 | gc-controller | 4 | REVIEW-001 | GC decision: revision or convergence |
Additional DEV-fix and REVIEW tasks created dynamically when GC controller decides REVISION.
**Classification Rules**:
All work tasks (design, development, testing, review) are `csv-wave`. GC loop control between reviewer and next dev-fix is `interactive` (post-wave, spawned by orchestrator to decide the GC outcome).
**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).
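A minimal sketch of this wave computation, assuming each task carries an `id` and a `deps` array (the function name and shapes are illustrative):

```javascript
// Kahn's BFS topological sort with depth tracking: each BFS layer is one wave.
// Returns { [taskId]: waveNumber } (1-based); throws on circular dependencies.
function computeWaves(tasks) {
  const indeg = {}, children = {}, wave = {}
  for (const t of tasks) { indeg[t.id] = t.deps.length; children[t.id] = [] }
  for (const t of tasks) for (const d of t.deps) children[d].push(t.id)
  let frontier = tasks.filter(t => indeg[t.id] === 0).map(t => t.id)
  let depth = 1
  while (frontier.length) {
    const next = []
    for (const id of frontier) {
      wave[id] = depth
      for (const c of children[id]) if (--indeg[c] === 0) next.push(c)
    }
    frontier = next
    depth++
  }
  if (Object.keys(wave).length !== tasks.length) throw new Error('Circular dependency')
  return wave
}
```

Tasks with no unfinished predecessors land in the same wave, which is what lets VERIFY and REVIEW run in parallel in the sprint pipeline.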
**User Validation**: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).
**Success Criteria**:
- tasks.csv created with valid schema, wave, and exec_mode assignments
- No circular dependencies
- User approved (or AUTO_YES)
---
### Phase 2: Wave Execution Engine (Extended)
**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.
```javascript
const failedIds = new Set()
const skippedIds = new Set()
const MAX_GC_ROUNDS = 3
let gcRound = 0
for (let wave = 1; wave <= maxWave; wave++) {
console.log(`\n## Wave ${wave}/${maxWave}\n`)
// 1. Read current master CSV
const masterCsv = parseCsv(Read(`${sessionFolder}/tasks.csv`))
// 2. Separate csv-wave and interactive tasks for this wave
const waveTasks = masterCsv.filter(row => parseInt(row.wave) === wave)
const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')
// 3. Skip tasks whose deps failed
const executableCsvTasks = []
for (const task of csvTasks) {
const deps = task.deps.split(';').filter(Boolean)
if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
skippedIds.add(task.id)
updateMasterCsvRow(sessionFolder, task.id, {
status: 'skipped',
error: 'Dependency failed or skipped'
})
continue
}
executableCsvTasks.push(task)
}
// 4. Build prev_context for each csv-wave task
for (const task of executableCsvTasks) {
const contextIds = task.context_from.split(';').filter(Boolean)
const prevFindings = contextIds
.map(id => {
const prevRow = masterCsv.find(r => r.id === id)
if (prevRow && prevRow.status === 'completed' && prevRow.findings) {
return `[Task ${id}: ${prevRow.title}] ${prevRow.findings}`
}
return null
})
.filter(Boolean)
.join('\n')
task.prev_context = prevFindings || 'No previous context available'
}
// 5. Write wave CSV and execute csv-wave tasks
if (executableCsvTasks.length > 0) {
const waveHeader = 'id,title,description,role,pipeline,sprint_num,gc_round,deps,context_from,exec_mode,wave,prev_context'
const waveRows = executableCsvTasks.map(t =>
[t.id, t.title, t.description, t.role, t.pipeline, t.sprint_num, t.gc_round, t.deps, t.context_from, t.exec_mode, t.wave, t.prev_context]
.map(cell => `"${String(cell).replace(/"/g, '""')}"`)
.join(',')
)
Write(`${sessionFolder}/wave-${wave}.csv`, [waveHeader, ...waveRows].join('\n'))
const waveResult = spawn_agents_on_csv({
csv_path: `${sessionFolder}/wave-${wave}.csv`,
id_column: "id",
instruction: Read(`~ or <project>/.codex/skills/team-iterdev/instructions/agent-instruction.md`),
max_concurrency: maxConcurrency,
max_runtime_seconds: 900,
output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
output_schema: {
type: "object",
properties: {
id: { type: "string" },
status: { type: "string", enum: ["completed", "failed"] },
findings: { type: "string" },
review_score: { type: "string" },
gc_signal: { type: "string" },
error: { type: "string" }
},
required: ["id", "status", "findings"]
}
})
// Merge results into master CSV
const waveResults = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
for (const result of waveResults) {
updateMasterCsvRow(sessionFolder, result.id, {
status: result.status,
findings: result.findings || '',
review_score: result.review_score || '',
gc_signal: result.gc_signal || '',
error: result.error || ''
})
if (result.status === 'failed') failedIds.add(result.id)
}
Bash(`rm -f "${sessionFolder}/wave-${wave}.csv"`)
}
// 6. Execute post-wave interactive tasks (GC controller)
for (const task of interactiveTasks) {
if (task.status !== 'pending') continue
const deps = task.deps.split(';').filter(Boolean)
if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
skippedIds.add(task.id)
continue
}
// Spawn GC controller agent
const gcAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~ or <project>/.codex/skills/team-iterdev/agents/gc-controller.md (MUST read first)
2. Read: ${sessionFolder}/discoveries.ndjson (shared discoveries)
---
Goal: Evaluate review severity and decide DEV-fix vs convergence
Session: ${sessionFolder}
GC Round: ${gcRound}
Max GC Rounds: ${MAX_GC_ROUNDS}
### Context
Read the latest review file in ${sessionFolder}/review/ and check:
- review.critical_count > 0 OR review.score < 7 -> REVISION
- review.critical_count == 0 AND review.score >= 7 -> CONVERGE
If gcRound >= maxRounds -> CONVERGE (force convergence)
`
})
const gcResult = wait({ ids: [gcAgent], timeout_ms: 120000 })
if (gcResult.timed_out) {
send_input({ id: gcAgent, message: "Please finalize your decision now." })
wait({ ids: [gcAgent], timeout_ms: 60000 })
}
close_agent({ id: gcAgent })
Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
task_id: task.id, status: "completed",
gc_decision: gcDecision, gc_round: gcRound,
timestamp: getUtc8ISOString()
}))
if (gcDecision === "CONVERGE") {
// Skip remaining GC tasks, mark fix tasks as skipped
} else {
gcRound++
// Dynamically add DEV-fix and REVIEW tasks to master CSV for next waves
const fixWave = wave + 1
const reviewWave = wave + 2
appendMasterCsvRow(sessionFolder, {
id: `DEV-fix-${gcRound}`, title: `Fix review issues (round ${gcRound})`,
description: `Fix critical/high issues from REVIEW. Focus on review feedback only.`,
role: 'developer', pipeline: pipeline_mode, sprint_num: '1',
gc_round: String(gcRound), deps: task.id, context_from: `REVIEW-001`,
exec_mode: 'csv-wave', wave: String(fixWave),
status: 'pending', findings: '', review_score: '', gc_signal: '', error: ''
})
appendMasterCsvRow(sessionFolder, {
id: `REVIEW-${gcRound + 1}`, title: `Re-review (round ${gcRound})`,
description: `Review fixes from DEV-fix-${gcRound}. Re-evaluate quality.`,
role: 'reviewer', pipeline: pipeline_mode, sprint_num: '1',
gc_round: String(gcRound), deps: `DEV-fix-${gcRound}`, context_from: `DEV-fix-${gcRound}`,
exec_mode: 'csv-wave', wave: String(reviewWave),
status: 'pending', findings: '', review_score: '', gc_signal: '', error: ''
})
maxWave = Math.max(maxWave, reviewWave)
}
updateMasterCsvRow(sessionFolder, task.id, { status: 'completed', findings: `GC decision: ${gcDecision}` })
}
}
```
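The GC controller's convergence thresholds used above can be expressed as a pure function. This is a hedged sketch (the review object's field names follow the criteria in the prompt, but the function itself is illustrative):

```javascript
// Decide REVISION_NEEDED vs CONVERGED from review results and round count.
function decideGc(review, gcRound, maxRounds = 3) {
  if (gcRound >= maxRounds) return 'CONVERGED'  // force convergence at the cap
  if (review.critical_count > 0 || review.score < 7) return 'REVISION_NEEDED'
  return 'CONVERGED'
}
```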
**Success Criteria**:
- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when predecessor failed
- discoveries.ndjson accumulated across all waves and mechanisms
- GC loop controlled with max 3 rounds
---
### Phase 3: Post-Wave Interactive
**Objective**: Handle any final GC loop convergence and multi-sprint transitions.
If the pipeline is multi-sprint and the current sprint completed successfully:
1. Evaluate sprint metrics (velocity, review scores)
2. If more sprints needed, dynamically create next sprint tasks in master CSV
3. If sprint metrics are strong (review avg >= 8), consider downgrading next sprint to simpler pipeline
If max GC rounds reached and issues remain, log to wisdom/issues.md and proceed.
**Success Criteria**:
- Post-wave interactive processing complete
- Interactive agents closed, results stored
---
### Phase 4: Results Aggregation
**Objective**: Generate final results and human-readable report.
```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
Write(`${sessionFolder}/results.csv`, masterCsv)
const tasks = parseCsv(masterCsv)
const completed = tasks.filter(t => t.status === 'completed')
const failed = tasks.filter(t => t.status === 'failed')
const skipped = tasks.filter(t => t.status === 'skipped')
const contextContent = `# Team IterDev Report
**Session**: ${sessionId}
**Requirement**: ${requirement}
**Pipeline**: ${pipeline_mode}
**Completed**: ${getUtc8ISOString()}
---
## Summary
| Metric | Count |
|--------|-------|
| Total Tasks | ${tasks.length} |
| Completed | ${completed.length} |
| Failed | ${failed.length} |
| Skipped | ${skipped.length} |
| GC Rounds | ${gcRound} |
---
## Pipeline Execution
${waveDetails}
---
## Task Details
${taskDetails}
---
## Deliverables
| Artifact | Path |
|----------|------|
| Design Document | ${sessionFolder}/design/design-001.md |
| Task Breakdown | ${sessionFolder}/design/task-breakdown.json |
| Dev Log | ${sessionFolder}/code/dev-log.md |
| Verification | ${sessionFolder}/verify/verify-001.json |
| Review Report | ${sessionFolder}/review/review-001.md |
| Wisdom | ${sessionFolder}/wisdom/ |
`
Write(`${sessionFolder}/context.md`, contextContent)
```
If not AUTO_YES, offer completion actions:
```javascript
if (!AUTO_YES) {
request_user_input({
questions: [{
question: "IterDev pipeline complete. Choose next action.",
header: "Done",
id: "completion",
options: [
{ label: "Archive (Recommended)", description: "Archive session, generate final report" },
{ label: "Keep Active", description: "Keep session for follow-up or inspection" },
{ label: "Retry Failed", description: "Re-run failed tasks" }
]
}]
})
}
```
**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated
- All interactive agents closed
- Summary displayed to user
---
## Shared Discovery Board Protocol
All agents across all waves share `discoveries.ndjson`. This enables cross-role knowledge sharing.
**Discovery Types**:
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `design_decision` | `data.component` | `{component, approach, rationale, alternatives}` | Architecture decision |
| `implementation` | `data.file` | `{file, changes, pattern_used, notes}` | Code implementation detail |
| `test_result` | `data.test_suite` | `{test_suite, pass_rate, failures[], regressions}` | Test execution result |
| `review_finding` | `data.file_line` | `{file_line, severity, dimension, description, suggestion}` | Review finding |
| `convention` | `data.name` | `{name, description, example}` | Discovered project convention |
| `gc_decision` | `data.round` | `{round, signal, critical_count, score}` | GC loop decision |
**Format**: NDJSON, each line is self-contained JSON:
```jsonl
{"ts":"2026-03-08T10:00:00+08:00","worker":"DESIGN-001","type":"design_decision","data":{"component":"AuthModule","approach":"JWT with refresh tokens","rationale":"Stateless auth for microservices","alternatives":"Session-based, OAuth2"}}
{"ts":"2026-03-08T10:05:00+08:00","worker":"DEV-001","type":"implementation","data":{"file":"src/auth/jwt.ts","changes":"Added JWT middleware","pattern_used":"Express middleware pattern","notes":"Uses existing bcrypt dependency"}}
{"ts":"2026-03-08T10:10:00+08:00","worker":"REVIEW-001","type":"review_finding","data":{"file_line":"src/auth/jwt.ts:42","severity":"HIGH","dimension":"security","description":"Token expiry not validated","suggestion":"Add exp claim check"}}
```
**Protocol Rules**:
1. Read board before own work -- leverage existing context
2. Write discoveries immediately via `echo >>` -- don't batch
3. Deduplicate -- check existing entries by type + dedup key
4. Append-only -- never modify or delete existing lines
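Rules 3 and 4 above can be sketched as a dedup-checked append over the NDJSON lines. This is an illustrative helper, not part of the skill (it also tolerates malformed lines, per the error-handling table):

```javascript
// Append a discovery entry only if its (type, dedup key) pair is unseen.
function appendDiscovery(lines, entry, dedupKey) {
  const key = e => `${e.type}:${e.data[dedupKey]}`
  const seen = new Set()
  for (const line of lines) {
    try { seen.add(key(JSON.parse(line))) } catch { /* ignore malformed lines */ }
  }
  if (seen.has(key(entry))) return lines       // duplicate: skip (rule 3)
  return [...lines, JSON.stringify(entry)]     // append-only (rule 4)
}
```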
---
## Consensus Severity Routing
When the reviewer returns review results with severity-graded verdicts:
| Severity | Action |
|----------|--------|
| HIGH | Trigger DEV-fix round (GC loop), max 3 rounds total |
| MEDIUM | Log warning, continue pipeline |
| LOW | Treat as review passed |
**Constraints**: Max 3 GC rounds (fix cycles). If still HIGH after 3 rounds, force convergence and record in wisdom/issues.md.
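The routing table above maps straightforwardly to a highest-severity-wins check. A sketch under the assumption that findings carry a `severity` field (action names are illustrative):

```javascript
// Route a set of review findings by their highest severity.
function routeSeverity(findings) {
  const severities = findings.map(f => f.severity)
  if (severities.includes('HIGH')) return 'TRIGGER_DEV_FIX'      // GC loop round
  if (severities.includes('MEDIUM')) return 'LOG_WARNING_CONTINUE'
  return 'REVIEW_PASSED'
}
```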
---
## Error Handling
| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| GC loop exceeds 3 rounds | Force convergence, record in wisdom/issues.md |
| Sprint velocity drops below 50% | Report to user, suggest scope reduction |
| Task ledger corrupted | Rebuild from tasks.csv state |
| Continue mode: no session found | List available sessions, prompt user to select |
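The circular-dependency check in the first row can be done with coreutils `tsort` before any wave is computed; the three edges below are a deliberately cyclic example:

```shell
# tsort fails when the dependency edges contain a loop, so a nonzero exit
# is the abort signal for wave computation.
CYCLE=0
printf '%s\n' \
  "DESIGN-001 DEV-001" \
  "DEV-001 VERIFY-001" \
  "VERIFY-001 DESIGN-001" | tsort >/dev/null 2>&1 || CYCLE=1
echo "cycle=$CYCLE"
```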
---
## Core Rules
1. **Start Immediately**: First action is session initialization, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when interaction pattern requires it
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson -- both mechanisms share it
7. **Skip on Failure**: If a dependency failed, skip the dependent task (regardless of mechanism)
8. **Lifecycle Balance**: Every spawn_agent MUST have a matching close_agent
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped
---
## Coordinator Role Constraints (Main Agent)
**CRITICAL**: The coordinator (main agent executing this skill) is responsible for **orchestration only**, NOT implementation.
15. **Coordinator Does NOT Execute Code**: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
- Spawns agents with task assignments
- Waits for agent callbacks
- Merges results and coordinates workflow
- Manages workflow transitions between phases
16. **Patient Waiting is Mandatory**: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
- Wait patiently for `wait()` calls to complete
- NOT skip workflow steps due to perceived delays
- NOT assume agents have failed just because they're taking time
- Trust the timeout mechanisms defined in the skill
17. **Use send_input for Clarification**: When agents need guidance or appear stuck, the coordinator MUST:
- Use `send_input()` to ask questions or provide clarification
- NOT skip the agent or move to next phase prematurely
- Give agents opportunity to respond before escalating
- Example: `send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })`
18. **No Workflow Shortcuts**: The coordinator MUST NOT:
- Skip phases or stages defined in the workflow
- Bypass required approval or review steps
- Execute dependent tasks before prerequisites complete
- Assume task completion without explicit agent callback
- Make up or fabricate agent results
19. **Respect Long-Running Processes**: This is a complex multi-agent workflow that requires patience:
- Total execution time may range from 30-90 minutes or longer
- Each phase may take 10-30 minutes depending on complexity
- The coordinator must remain active and attentive throughout the entire process
- Do not terminate or skip steps due to time concerns
| Scenario | Resolution |
|----------|------------|
| Unknown command | Error with available command list |
| Role not found | Error with role registry |
| GC loop exceeds 3 rounds | Accept with warning, record in shared memory |
| Sprint velocity drops below 50% | Coordinator alerts user, suggests scope reduction |
| Task ledger corrupted | Rebuild from TaskList state |
| Conflict detected | Update conflict_info, notify coordinator, create DEV-fix task |
| Pipeline deadlock | Check blockedBy chain, report blocking point |



@@ -1,193 +0,0 @@
# GC Controller Agent
Evaluate review severity after REVIEW wave and decide whether to trigger a DEV-fix iteration or converge the pipeline.
## Identity
- **Type**: `interactive`
- **Role File**: `agents/gc-controller.md`
- **Responsibility**: Evaluate review severity, decide DEV-fix vs convergence
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Load review results from completed REVIEW tasks
- Evaluate gc_signal and review_score to determine decision
- Respect max iteration count to prevent infinite loops
- Produce structured output with clear CONVERGE/FIX/ESCALATE decision
### MUST NOT
- Skip the MANDATORY FIRST STEPS role loading
- Modify source code directly
- Produce unstructured output
- Exceed max iteration count without escalating
- Ignore Critical findings in review results
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load review results and session state |
| `Write` | builtin | Create FIX task definitions for next wave |
| `Bash` | builtin | Query session state, count iterations |
### Tool Usage Patterns
**Read Pattern**: Load review results
```
Read("{session_folder}/artifacts/review-results.json")
Read("{session_folder}/session-state.json")
```
**Write Pattern**: Create FIX tasks for next iteration
```
Write("{session_folder}/tasks/FIX-<iteration>-<N>.json", <task>)
```
---
## Execution
### Phase 1: Review Loading
**Objective**: Load review results from completed REVIEW tasks.
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| Review results | Yes | review_score, gc_signal, findings from REVIEW tasks |
| Session state | Yes | Current iteration count, max iterations |
| Task analysis | No | Original task-analysis.json for context |
**Steps**:
1. Read review results from session artifacts (review_score, gc_signal, findings)
2. Read session state to determine current iteration number
3. Read max_iterations from task-analysis.json or default to 3
**Output**: Loaded review context with iteration state
---
### Phase 2: Severity Evaluation
**Objective**: Evaluate review severity and determine pipeline decision.
**Steps**:
1. **Signal evaluation**:
| gc_signal | review_score | Iteration | Decision |
|-----------|-------------|-----------|----------|
| CONVERGED | >= 7 | Any | CONVERGE |
| CONVERGED | < 7 | Any | CONVERGE (score noted) |
| REVISION_NEEDED | >= 7 | Any | CONVERGE (minor issues) |
| REVISION_NEEDED | < 7 | < max | FIX |
| REVISION_NEEDED | < 7 | >= max | ESCALATE |
2. **Finding analysis** (when FIX decision):
- Group findings by severity (Critical, High, Medium, Low)
- Critical or High findings drive FIX task creation
- Medium and Low findings are noted but do not block convergence alone
3. **Iteration guard**:
- Track current iteration count
- If iteration >= max_iterations (default 3): force ESCALATE regardless of score
- Include iteration history in decision reasoning
**Output**: GC decision with reasoning
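The signal-evaluation matrix reduces to a short conditional. A sketch with illustrative values:

```shell
# Matrix above: CONVERGED or score >= 7 converges; otherwise fix until
# the iteration guard forces escalation.
GC_SIGNAL="REVISION_NEEDED"; REVIEW_SCORE=5
ITERATION=2; MAX_ITERATIONS=3
if [ "$GC_SIGNAL" = "CONVERGED" ] || [ "$REVIEW_SCORE" -ge 7 ]; then
  DECISION="CONVERGE"
elif [ "$ITERATION" -lt "$MAX_ITERATIONS" ]; then
  DECISION="FIX"
else
  DECISION="ESCALATE"
fi
echo "$DECISION"
```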
---
### Phase 3: Decision Execution
**Objective**: Execute the GC decision.
| Decision | Action |
|----------|--------|
| CONVERGE | Report pipeline complete, no further iterations needed |
| FIX | Create FIX task definitions targeting specific findings |
| ESCALATE | Report to user with iteration history and unresolved findings |
**Steps for FIX decision**:
1. Extract actionable findings (Critical and High severity)
2. Group findings by target file or module
3. Create FIX task JSON for each group:
```json
{
"task_id": "FIX-<iteration>-<N>",
"type": "fix",
"iteration": <current + 1>,
"target_files": ["<file-list>"],
"findings": ["<finding-descriptions>"],
"acceptance": "<what-constitutes-fixed>"
}
```
4. Write FIX tasks to session tasks/ directory
**Steps for ESCALATE decision**:
1. Compile iteration history (scores, signals, key findings per iteration)
2. List unresolved Critical/High findings
3. Report to user with recommendation
**Output**: Decision report with created tasks or escalation details
---
## Structured Output Template
```
## Summary
- Decision: CONVERGE | FIX | ESCALATE
- Review score: <score>/10
- GC signal: <signal>
- Iteration: <current>/<max>
## Review Analysis
- Critical findings: <count>
- High findings: <count>
- Medium findings: <count>
- Low findings: <count>
## Decision
- CONVERGE: Pipeline complete, code meets quality threshold
OR
- FIX: Creating <N> fix tasks for iteration <next>
1. FIX-<id>: <description> targeting <files>
2. FIX-<id>: <description> targeting <files>
OR
- ESCALATE: Max iterations reached, unresolved issues require user input
1. Unresolved: <finding-description>
2. Unresolved: <finding-description>
## Iteration History
- Iteration 1: score=<N>, signal=<signal>, findings=<count>
- Iteration 2: score=<N>, signal=<signal>, findings=<count>
## Reasoning
- <Why this decision was made>
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Review results not found | Report as error, cannot make GC decision |
| Missing gc_signal field | Infer from review_score: >= 7 treat as CONVERGED, < 7 as REVISION_NEEDED |
| Missing review_score field | Infer from gc_signal and findings count |
| Session state corrupted | Default to iteration 1, note uncertainty |
| Timeout approaching | Output current decision with "PARTIAL" status |


@@ -1,206 +0,0 @@
# Task Analyzer Agent
Analyze task complexity, detect required capabilities, and select the appropriate pipeline mode for iterative development.
## Identity
- **Type**: `interactive`
- **Role File**: `agents/task-analyzer.md`
- **Responsibility**: Analyze task complexity, detect required capabilities, select pipeline mode
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Parse user requirement to detect project type
- Analyze complexity by file count and dependency depth
- Select appropriate pipeline mode based on analysis
- Produce structured output with task-analysis JSON
### MUST NOT
- Skip the MANDATORY FIRST STEPS role loading
- Modify source code or project files
- Produce unstructured output
- Select pipeline mode without analyzing the codebase
- Begin implementation work
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load project files, configs, package manifests |
| `Glob` | builtin | Discover project files and estimate scope |
| `Grep` | builtin | Detect frameworks, dependencies, patterns |
| `Bash` | builtin | Run detection commands, count files |
### Tool Usage Patterns
**Glob Pattern**: Estimate scope by file discovery
```
Glob("src/**/*.ts")
Glob("**/*.test.*")
Glob("**/package.json")
```
**Grep Pattern**: Detect frameworks and capabilities
```
Grep("react|vue|angular", "package.json")
Grep("jest|vitest|mocha", "package.json")
```
**Read Pattern**: Load project configuration
```
Read("package.json")
Read("tsconfig.json")
Read("pyproject.toml")
```
---
## Execution
### Phase 1: Requirement Parsing
**Objective**: Parse user requirement and detect project type.
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| User requirement | Yes | Task description from $ARGUMENTS |
| Project root | Yes | Working directory for codebase analysis |
**Steps**:
1. Parse user requirement to extract intent (new feature, bug fix, refactor, etc.)
2. Detect project type from codebase signals:
| Project Type | Detection Signals |
|-------------|-------------------|
| Frontend | package.json with react/vue/angular, src/**/*.tsx |
| Backend | server.ts, app.py, go.mod, routes/, controllers/ |
| Fullstack | Both frontend and backend signals present |
| CLI | bin/, commander/yargs in deps, argparse usage |
| Library | main/module in package.json, src/lib/, no app entry |
3. Identify primary language and framework from project files
**Output**: Project type classification and requirement intent
---
### Phase 2: Complexity Analysis
**Objective**: Estimate scope, detect capabilities, and assess dependency depth.
**Steps**:
1. **Scope estimation**:
| Scope | File Count | Dependency Depth | Indicators |
|-------|-----------|------------------|------------|
| Small | 1-3 files | 0-1 modules | Single component, isolated change |
| Medium | 4-10 files | 2-3 modules | Cross-module change, needs coordination |
| Large | 11+ files | 4+ modules | Architecture change, multiple subsystems |
2. **Capability detection**:
- Language: TypeScript, Python, Go, Java, etc.
- Testing framework: jest, vitest, pytest, go test, etc.
- Build system: webpack, vite, esbuild, setuptools, etc.
- Linting: eslint, prettier, ruff, etc.
- Type checking: tsc, mypy, etc.
3. **Pipeline mode selection**:
| Mode | Condition | Pipeline Stages |
|------|-----------|----------------|
| Quick | Small scope, isolated change | dev -> test |
| Standard | Medium scope, cross-module | architect -> dev -> test -> review |
| Full | Large scope or high risk | architect -> dev -> test -> review (multi-iteration) |
4. **Risk assessment**:
- Breaking change potential (public API modifications)
- Test coverage gaps (areas without existing tests)
- Dependency complexity (shared modules, circular refs)
**Output**: Scope, capabilities, pipeline mode, and risk assessment
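The scope and mode thresholds above can be sketched with file count as the only input (real selection also weighs dependency depth and risk):

```shell
# Map estimated file count to scope, then scope to pipeline mode.
FILES=6
if   [ "$FILES" -le 3 ];  then SCOPE=small;  MODE=quick
elif [ "$FILES" -le 10 ]; then SCOPE=medium; MODE=standard
else                           SCOPE=large;  MODE=full
fi
echo "$SCOPE $MODE"
```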
---
### Phase 3: Analysis Report
**Objective**: Write task-analysis result as structured JSON.
**Steps**:
1. Assemble task-analysis JSON:
```json
{
"project_type": "<frontend|backend|fullstack|cli|library>",
"intent": "<feature|bugfix|refactor|test|docs>",
"scope": "<small|medium|large>",
"pipeline_mode": "<quick|standard|full>",
"capabilities": {
"language": "<primary-language>",
"framework": "<primary-framework>",
"test_framework": "<test-framework>",
"build_system": "<build-system>"
},
"affected_files": ["<estimated-file-list>"],
"risk_factors": ["<risk-1>", "<risk-2>"],
"max_iterations": <1|2|3>
}
```
2. Report analysis summary to user
**Output**: task-analysis.json written to session artifacts
---
## Structured Output Template
```
## Summary
- Project: <project-type> (<language>/<framework>)
- Scope: <small|medium|large> (~<N> files)
- Pipeline: <quick|standard|full>
## Capabilities Detected
- Language: <language>
- Framework: <framework>
- Testing: <test-framework>
- Build: <build-system>
## Complexity Assessment
- File count: <N> files affected
- Dependency depth: <N> modules
- Risk factors: <list>
## Pipeline Selection
- Mode: <mode> — <rationale>
- Stages: <stage-1> -> <stage-2> -> ...
- Max iterations: <N>
## Task Analysis JSON
- Written to: <session>/artifacts/task-analysis.json
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Empty project directory | Report as unknown project type, default to standard pipeline |
| No package manifest found | Infer from file extensions, note reduced confidence |
| Ambiguous project type | Report both candidates, select most likely |
| Cannot determine scope | Default to medium, note uncertainty |
| Timeout approaching | Output current analysis with "PARTIAL" status |


@@ -1,118 +0,0 @@
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS
1. Read shared discoveries: .workflow/.csv-wave/{session-id}/discoveries.ndjson (if exists, skip if not)
2. Read project context: .workflow/project-tech.json (if exists)
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Role**: {role}
**Description**: {description}
**Pipeline**: {pipeline}
**Sprint**: {sprint_num}
**GC Round**: {gc_round}
### Previous Tasks' Findings (Context)
{prev_context}
---
## Execution Protocol
1. **Read discoveries**: Load shared discoveries from the session's discoveries.ndjson for cross-task context
2. **Use context**: Apply previous tasks' findings from prev_context above
3. **Execute by role**:
### Role: architect (DESIGN-* tasks)
- Explore codebase for existing patterns, module structure, and dependencies
- Use mcp__ace-tool__search_context for semantic discovery when available
- Create design document covering:
- Architecture decision: approach, rationale, alternatives considered
- Component design: responsibility, dependencies, files to modify, complexity
- Task breakdown: file changes, estimated complexity, dependencies, acceptance criteria
- Integration points and risks with mitigations
- Write design document to session design/ directory
- Write task breakdown JSON (array of tasks with id, title, files, complexity, dependencies, acceptance_criteria)
- Record architecture decisions in wisdom/decisions.md via discovery board
### Role: developer (DEV-* tasks)
- **Normal task** (gc_round = 0):
- Read design document and task breakdown from context
- Implement tasks following the execution order from breakdown
- Use Edit or Write for file modifications
- Validate syntax after each major change (tsc --noEmit or equivalent)
- Auto-fix if validation fails (max 2 attempts)
- **Fix task** (gc_round > 0):
- Read review feedback from prev_context
- Focus on critical/high severity issues ONLY
- Do NOT change code that was not flagged in review
- Fix critical issues first, then high
- Maintain existing code style and patterns
- Write dev log to session code/ directory
- Record implementation details via discovery board
### Role: tester (VERIFY-* tasks)
- Detect test framework from project files (package.json, pytest.ini, etc.)
- Get list of changed files from dev log in prev_context
- Run targeted tests for changed files
- Run regression test suite
- If tests fail: attempt fix (max 3 iterations using available tools)
- Write verification results JSON to session verify/ directory
- Record test results via discovery board
- Report pass rate in findings
### Role: reviewer (REVIEW-* tasks)
- Read changed files from dev log in prev_context
- Read design document for requirements alignment
- Review across 4 weighted dimensions:
- Correctness (30%): Logic correctness, boundary handling, edge cases
- Completeness (25%): Coverage of design requirements
- Maintainability (25%): Readability, code style, DRY, naming
- Security (20%): Vulnerabilities, input validation, auth issues
- Assign severity per finding: CRITICAL / HIGH / MEDIUM / LOW
- Include file:line references for each finding
- Calculate weighted quality score (1-10)
- Determine GC signal:
- critical_count > 0 OR score < 7 -> `REVISION_NEEDED`
- critical_count == 0 AND score >= 7 -> `CONVERGED`
- Write review report to session review/ directory
- Record review findings via discovery board
4. **Share discoveries**: Append exploration findings to shared board:
```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> .workflow/.csv-wave/{session-id}/discoveries.ndjson
```
Discovery types to share:
- `design_decision`: {component, approach, rationale, alternatives} -- architecture decision
- `implementation`: {file, changes, pattern_used, notes} -- code implementation detail
- `test_result`: {test_suite, pass_rate, failures[], regressions} -- test execution result
- `review_finding`: {file_line, severity, dimension, description, suggestion} -- review finding
- `convention`: {name, description, example} -- discovered project convention
5. **Report result**: Return JSON via report_agent_job_result
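The reviewer's GC-signal rule from step 3 is small enough to sketch directly (counts and score are illustrative):

```shell
# REVISION_NEEDED when any critical finding exists or the weighted score
# falls below 7; CONVERGED otherwise.
CRITICAL_COUNT=0
SCORE=8   # weighted quality score, 1-10
if [ "$CRITICAL_COUNT" -gt 0 ] || [ "$SCORE" -lt 7 ]; then
  GC_SIGNAL="REVISION_NEEDED"
else
  GC_SIGNAL="CONVERGED"
fi
echo "$GC_SIGNAL"
```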
---
## Output (report_agent_job_result)
Return JSON:
{
"id": "{id}",
"status": "completed" | "failed",
"findings": "Key discoveries and implementation notes (max 500 chars)",
"review_score": "Quality score 1-10 (reviewer only, empty for others)",
"gc_signal": "REVISION_NEEDED | CONVERGED (reviewer only, empty for others)",
"error": ""
}
**Role-specific findings guidance**:
- **architect**: List component count, task count, key decisions. Example: "Designed 3 components (AuthModule, TokenService, Middleware). Created 5 implementation tasks. Key decision: JWT with refresh token rotation."
- **developer**: List changed file count, syntax status, key changes. Example: "Modified 5 files. All syntax clean. Key changes: JWT middleware, token validation, auth routes."
- **developer (fix)**: List fixed issue count, remaining issues. Example: "Fixed 2 HIGH issues (token expiry, input validation). 0 remaining critical/high issues."
- **tester**: List pass rate, test count, regression status. Example: "Pass rate: 96% (24/25 tests). 1 edge case failure (token-expiry). No regressions detected."
- **reviewer**: List score, issue counts, verdict. Example: "Score: 7.5/10. Findings: 0 CRITICAL, 1 HIGH, 3 MEDIUM, 2 LOW. GC signal: REVISION_NEEDED."


@@ -0,0 +1,65 @@
---
role: architect
prefix: DESIGN
inner_loop: false
message_types:
success: design_ready
revision: design_revision
error: error
---
# Architect
Technical design, task decomposition, and architecture decision records for iterative development.
## Phase 2: Context Loading + Codebase Exploration
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/.msg/meta.json | No |
| Wisdom files | <session>/wisdom/ | No |
1. Extract session path and requirement from task description
2. Read .msg/meta.json for shared context (architecture_decisions, implementation_context)
3. Read wisdom files if available (learnings.md, decisions.md, conventions.md)
4. Explore codebase for existing patterns, module structure, dependencies:
- Use mcp__ace-tool__search_context for semantic discovery
- Identify similar implementations and integration points
## Phase 3: Technical Design + Task Decomposition
**Design strategy selection**:
| Condition | Strategy |
|-----------|----------|
| Single module change | Direct inline design |
| Cross-module change | Multi-component design with integration points |
| Large refactoring | Phased approach with milestones |
**Outputs**:
1. **Design Document** (`<session>/design/design-<num>.md`):
- Architecture decision: approach, rationale, alternatives
- Component design: responsibility, dependencies, files, complexity
- Task breakdown: files, estimated complexity, dependencies, acceptance criteria
- Integration points and risks with mitigations
2. **Task Breakdown JSON** (`<session>/design/task-breakdown.json`):
- Array of tasks with id, title, files, complexity, dependencies, acceptance_criteria
- Execution order for developer to follow
## Phase 4: Design Validation
| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Components defined | Verify component list | At least 1 component |
| Task breakdown exists | Verify task list | At least 1 task |
| Dependencies mapped | All components have dependencies field | All present (can be empty) |
| Integration points | Verify integration section | Key integrations documented |
1. Run validation checks above
2. Write architecture_decisions entry to .msg/meta.json:
- design_id, approach, rationale, components, task_count
3. Write discoveries to wisdom/decisions.md and wisdom/conventions.md


@@ -0,0 +1,62 @@
# Analyze Task
Parse iterative development task -> detect capabilities -> assess pipeline complexity -> design roles.
**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.
## Signal Detection
| Keywords | Capability | Role |
|----------|------------|------|
| design, architect, restructure, refactor plan | architect | architect |
| implement, build, code, fix, develop | developer | developer |
| test, verify, validate, coverage | tester | tester |
| review, audit, quality, check | reviewer | reviewer |
## Pipeline Selection
| Signal | Score |
|--------|-------|
| Changed files > 10 | +3 |
| Changed files 3-10 | +2 |
| Structural change (refactor, architect, restructure) | +3 |
| Cross-cutting (multiple, across, cross) | +2 |
| Simple fix (fix, bug, typo, patch) | -2 |
| Score | Pipeline |
|-------|----------|
| >= 5 | multi-sprint |
| 2-4 | sprint |
| 0-1 | patch |
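A keyword-only sketch of the two tables (file-count signals are omitted since this analyzer is text-level; the task string is an example):

```shell
# Score keyword signals, then bucket into a pipeline type.
TASK="refactor auth handling across multiple modules"
SCORE=0
case "$TASK" in *refactor*|*architect*|*restructure*) SCORE=$((SCORE + 3)) ;; esac
case "$TASK" in *multiple*|*across*|*cross*)          SCORE=$((SCORE + 2)) ;; esac
case "$TASK" in *fix*|*bug*|*typo*|*patch*)           SCORE=$((SCORE - 2)) ;; esac
if   [ "$SCORE" -ge 5 ]; then PIPELINE=multi-sprint
elif [ "$SCORE" -ge 2 ]; then PIPELINE=sprint
else                          PIPELINE=patch
fi
echo "$PIPELINE"
```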
## Dependency Graph
Natural ordering tiers:
- Tier 0: architect (design must come first)
- Tier 1: developer (implementation requires design)
- Tier 2: tester, reviewer (validation requires artifacts, can run parallel)
## Complexity Assessment
| Factor | Points |
|--------|--------|
| Cross-module changes | +2 |
| Serial depth > 3 | +1 |
| Multiple developers needed | +2 |
| GC loop likely needed | +1 |
## Output
Write <session>/task-analysis.json:
```json
{
"task_description": "<original>",
"pipeline_type": "<patch|sprint|multi-sprint>",
"capabilities": [{ "name": "<cap>", "role": "<role>", "keywords": ["..."] }],
"roles": [{ "name": "<role>", "prefix": "<PREFIX>", "inner_loop": false }],
"complexity": { "score": 0, "level": "Low|Medium|High" },
"needs_architecture": true,
"needs_testing": true,
"needs_review": true
}
```


@@ -0,0 +1,187 @@
# Command: Dispatch
Create the iterative development task chain with correct dependencies and structured task descriptions.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| User requirement | From coordinator Phase 1 | Yes |
| Session folder | From coordinator Phase 2 | Yes |
| Pipeline definition | From SKILL.md Pipeline Definitions | Yes |
| Pipeline mode | From tasks.json `pipeline_mode` | Yes |
1. Load user requirement and scope from tasks.json
2. Load pipeline stage definitions from SKILL.md Task Metadata Registry
3. Read `pipeline_mode` from tasks.json (patch / sprint / multi-sprint)
## Phase 3: Task Chain Creation
### Task Entry Template
Each task in tasks.json `tasks` object:
```json
{
"<TASK-ID>": {
"title": "<concise title>",
"description": "PURPOSE: <what this task achieves> | Success: <measurable completion criteria>\nTASK:\n - <step 1: specific action>\n - <step 2: specific action>\n - <step 3: specific action>\nCONTEXT:\n - Session: <session-folder>\n - Scope: <task-scope>\n - Upstream artifacts: <artifact-1>, <artifact-2>\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: <deliverable path> + <quality criteria>\nCONSTRAINTS: <scope limits, focus areas>\n---\nInnerLoop: <true|false>",
"role": "<role-name>",
"prefix": "<PREFIX>",
"deps": ["<dependency-list>"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
### Mode Router
| Mode | Action |
|------|--------|
| `patch` | Create DEV-001 + VERIFY-001 |
| `sprint` | Create DESIGN-001 + DEV-001 + VERIFY-001 + REVIEW-001 |
| `multi-sprint` | Create Sprint 1 chain, subsequent sprints created dynamically |
---
### Patch Pipeline
**DEV-001** (developer):
```json
{
"DEV-001": {
"title": "Implement fix",
"description": "PURPOSE: Implement fix | Success: Fix applied, syntax clean\nTASK:\n - Load target files and understand context\n - Apply fix changes\n - Validate syntax\nCONTEXT:\n - Session: <session-folder>\n - Scope: <task-scope>\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: Modified source files + <session>/code/dev-log.md | Syntax clean\nCONSTRAINTS: Minimal changes | Preserve existing behavior\n---\nInnerLoop: true",
"role": "developer",
"prefix": "DEV",
"deps": [],
"status": "pending",
"findings": "",
"error": ""
}
}
```
**VERIFY-001** (tester):
```json
{
"VERIFY-001": {
"title": "Verify fix correctness",
"description": "PURPOSE: Verify fix correctness | Success: Tests pass, no regressions\nTASK:\n - Detect test framework\n - Run targeted tests for changed files\n - Run regression test suite\nCONTEXT:\n - Session: <session-folder>\n - Scope: <task-scope>\n - Upstream artifacts: code/dev-log.md\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: <session>/verify/verify-001.json | Pass rate >= 95%\nCONSTRAINTS: Focus on changed files | Report any regressions\n---\nInnerLoop: false",
"role": "tester",
"prefix": "VERIFY",
"deps": ["DEV-001"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
---
### Sprint Pipeline
**DESIGN-001** (architect):
```json
{
"DESIGN-001": {
"title": "Technical design and task breakdown",
"description": "PURPOSE: Technical design and task breakdown | Success: Design document + task breakdown ready\nTASK:\n - Explore codebase for patterns and dependencies\n - Create component design with integration points\n - Break down into implementable tasks with acceptance criteria\nCONTEXT:\n - Session: <session-folder>\n - Scope: <task-scope>\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: <session>/design/design-001.md + <session>/design/task-breakdown.json | Components defined, tasks actionable\nCONSTRAINTS: Focus on <task-scope> | Risk assessment required\n---\nInnerLoop: false",
"role": "architect",
"prefix": "DESIGN",
"deps": [],
"status": "pending",
"findings": "",
"error": ""
}
}
```
**DEV-001** (developer):
```json
{
"DEV-001": {
"title": "Implement design",
"description": "PURPOSE: Implement design | Success: All design tasks implemented, syntax clean\nTASK:\n - Load design and task breakdown\n - Implement tasks in execution order\n - Validate syntax after changes\nCONTEXT:\n - Session: <session-folder>\n - Scope: <task-scope>\n - Upstream artifacts: design/design-001.md, design/task-breakdown.json\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: Modified source files + <session>/code/dev-log.md | Syntax clean, all tasks done\nCONSTRAINTS: Follow design | Preserve existing behavior | Follow code conventions\n---\nInnerLoop: true",
"role": "developer",
"prefix": "DEV",
"deps": ["DESIGN-001"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
**VERIFY-001** (tester, parallel with REVIEW-001):
```json
{
"VERIFY-001": {
"title": "Verify implementation",
"description": "PURPOSE: Verify implementation | Success: Tests pass, no regressions\nTASK:\n - Detect test framework\n - Run tests for changed files\n - Run regression test suite\nCONTEXT:\n - Session: <session-folder>\n - Scope: <task-scope>\n - Upstream artifacts: code/dev-log.md\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: <session>/verify/verify-001.json | Pass rate >= 95%\nCONSTRAINTS: Focus on changed files | Report regressions\n---\nInnerLoop: false",
"role": "tester",
"prefix": "VERIFY",
"deps": ["DEV-001"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
**REVIEW-001** (reviewer, parallel with VERIFY-001):
```json
{
"REVIEW-001": {
"title": "Code review for correctness and quality",
"description": "PURPOSE: Code review for correctness and quality | Success: All dimensions reviewed, verdict issued\nTASK:\n - Load changed files and design document\n - Review across 4 dimensions: correctness, completeness, maintainability, security\n - Score quality (1-10) and issue verdict\nCONTEXT:\n - Session: <session-folder>\n - Scope: <task-scope>\n - Upstream artifacts: design/design-001.md, code/dev-log.md\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: <session>/review/review-001.md | Per-dimension findings with severity\nCONSTRAINTS: Focus on implementation changes | Provide file:line references\n---\nInnerLoop: false",
"role": "reviewer",
"prefix": "REVIEW",
"deps": ["DEV-001"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
---
### Multi-Sprint Pipeline
Sprint 1: DESIGN-001 -> DEV-001 -> DEV-002 (incremental) -> VERIFY-001 -> DEV-fix -> REVIEW-001
Create Sprint 1 tasks using sprint templates above, plus:
**DEV-002** (developer, incremental):
```json
{
"DEV-002": {
"title": "Incremental implementation",
"description": "PURPOSE: Incremental implementation | Success: Remaining tasks implemented\nTASK:\n - Load remaining tasks from breakdown\n - Implement incrementally\n - Validate syntax\nCONTEXT:\n - Session: <session-folder>\n - Scope: <task-scope>\n - Upstream artifacts: design/task-breakdown.json, code/dev-log.md\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: Modified source files + updated dev-log.md\nCONSTRAINTS: Incremental delivery | Follow existing patterns\n---\nInnerLoop: true",
"role": "developer",
"prefix": "DEV",
"deps": ["DEV-001"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
Subsequent sprints created dynamically after Sprint N completes.
## Phase 4: Validation
Verify task chain integrity:
| Check | Method | Expected |
|-------|--------|----------|
| Task count correct | tasks.json count | patch: 2, sprint: 4, multi: 5+ |
| Dependencies correct | Trace deps graph | Acyclic, correct ordering |
| No circular dependencies | Trace full graph | Acyclic |
| Structured descriptions | Each has PURPOSE/TASK/CONTEXT/EXPECTED | All present |
If validation fails, fix the specific task and re-validate.
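The dependency checks above can be sketched as a small validation helper. This is a minimal sketch: the `tasks` shape follows the tasks.json examples in this file, and the function name is illustrative.

```javascript
// Validation sketch for the task chain in tasks.json.
// Flags unknown dependencies and circular dependencies (DFS coloring).
function validateTaskChain(tasks) {
  const errors = [];
  for (const [id, task] of Object.entries(tasks)) {
    for (const dep of task.deps || []) {
      if (!tasks[dep]) errors.push(`Unknown dependency: ${dep} (in ${id})`);
    }
  }
  // Cycle detection: undefined = unvisited, 1 = in progress, 2 = done
  const state = {};
  const visit = (id) => {
    if (state[id] === 1) { errors.push(`Circular dependency involving: ${id}`); return; }
    if (state[id] === 2) return;
    state[id] = 1;
    for (const dep of tasks[id]?.deps || []) visit(dep);
    state[id] = 2;
  };
  Object.keys(tasks).forEach(visit);
  return errors;
}
```

An empty result means the chain is safe to dispatch; any error string maps to "fix the specific task and re-validate".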


@@ -0,0 +1,186 @@
# Command: Monitor
Synchronous pipeline coordination using spawn_agent + wait_agent.
## Constants
- WORKER_AGENT: team_worker
- MAX_GC_ROUNDS: 3
## Handler Router
| Source | Handler |
|--------|---------|
| "capability_gap" | handleAdapt |
| "check" or "status" | handleCheck |
| "resume" or "continue" | handleResume |
| All tasks completed | handleComplete |
| Default | handleSpawnNext |
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session state | tasks.json | Yes |
| Trigger event | From Entry Router detection | Yes |
| Meta state | <session>/.msg/meta.json | Yes |
1. Load tasks.json for current state, pipeline_mode, gc_round, max_gc_rounds
2. Read tasks from tasks.json to get current task statuses
3. Identify trigger event type from Entry Router
## Phase 3: Event Handlers
### handleCallback
Triggered when a worker completes (wait_agent returns).
1. Determine role from completed task prefix:
| Task Prefix | Role Detection |
|-------------|---------------|
| `DESIGN-*` | architect |
| `DEV-*` | developer |
| `VERIFY-*` | tester |
| `REVIEW-*` | reviewer |
2. Mark task as completed in tasks.json:
```
state.tasks[taskId].status = 'completed'
```
3. Record completion in session state and update metrics
4. **Generator-Critic check** (when reviewer completes):
- If completed task is REVIEW-* AND pipeline is sprint or multi-sprint:
- Read review report for GC signal (critical_count, score)
- Read tasks.json for gc_round
| GC Signal | gc_round < max | Action |
|-----------|----------------|--------|
| review.critical_count > 0 OR review.score < 7 | Yes | Increment gc_round, create DEV-fix task in tasks.json with deps on this REVIEW, log `gc_loop_trigger` |
| review.critical_count > 0 OR review.score < 7 | No (>= max) | Force convergence, accept with warning, log to wisdom/issues.md |
| review.critical_count == 0 AND review.score >= 7 | - | Review passed, proceed to handleComplete check |
- Log team_msg with type "gc_loop_trigger" or "task_unblocked"
5. Proceed to handleSpawnNext
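The Generator-Critic decision table above can be sketched as a pure function. This is a minimal sketch: `review` comes from the review report, `state` from tasks.json, and the return values are illustrative labels for the three actions.

```javascript
// GC decision sketch mirroring the table: fix round, force convergence, or proceed.
function gcDecision(review, state) {
  const needsFix = review.critical_count > 0 || review.score < 7;
  if (!needsFix) return "proceed";            // review passed
  if (state.gc_round < state.max_gc_rounds) {
    return "spawn_fix";                       // increment gc_round, create DEV-fix
  }
  return "force_converge";                    // accept with warning, log to wisdom
}
```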
### handleSpawnNext
Find and spawn the next ready tasks.
1. Read tasks.json, find tasks where:
- Status is "pending"
- All deps tasks have status "completed"
2. For each ready task, determine role from task prefix:
| Task Prefix | Role | Inner Loop |
|-------------|------|------------|
| DESIGN-* | architect | false |
| DEV-* | developer | true |
| VERIFY-* | tester | false |
| REVIEW-* | reviewer | false |
3. Spawn team_worker:
```javascript
// 1) Update status in tasks.json
state.tasks[taskId].status = 'in_progress'
// 2) Spawn worker
const agentId = spawn_agent({
agent_type: "team_worker",
items: [
{ type: "text", text: `## Role Assignment
role: ${role}
role_spec: ${skillRoot}/roles/${role}/role.md
session: ${sessionFolder}
session_id: ${sessionId}
requirement: ${taskDescription}
inner_loop: ${innerLoop}` },
{ type: "text", text: `Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.` }
]
})
// 3) Track agent
state.active_agents[taskId] = { agentId, role, started_at: now }
// 4) Wait for completion
wait_agent({ ids: [agentId] })
// 5) Collect results and update tasks.json
state.tasks[taskId].status = 'completed'
delete state.active_agents[taskId]
```
4. **Parallel spawn rules**:
| Pipeline | Scenario | Spawn Behavior |
|----------|----------|---------------|
| Patch | DEV -> VERIFY | One worker at a time |
| Sprint | VERIFY + REVIEW both unblocked | Spawn BOTH in parallel, wait_agent for both |
| Sprint | Other stages | One worker at a time |
| Multi-Sprint | VERIFY + DEV-fix both unblocked | Spawn BOTH in parallel, wait_agent for both |
| Multi-Sprint | Other stages | One worker at a time |
5. STOP after processing -- wait for next event
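The ready-task scan in steps 1-2 can be sketched as follows. This is a minimal sketch against the tasks.json shape shown earlier; the helper name is illustrative.

```javascript
// Find tasks ready to spawn: pending, with every dependency completed.
// Role is derived from the task-ID prefix, per the table above.
const ROLE_BY_PREFIX = {
  DESIGN: "architect",
  DEV: "developer",
  VERIFY: "tester",
  REVIEW: "reviewer"
};

function readyTasks(tasks) {
  return Object.entries(tasks)
    .filter(([, t]) => t.status === "pending" &&
      (t.deps || []).every((d) => tasks[d]?.status === "completed"))
    .map(([id]) => ({ id, role: ROLE_BY_PREFIX[id.split("-")[0]] }));
}
```

When this returns both VERIFY-001 and REVIEW-001 in a sprint pipeline, the parallel spawn rule applies: spawn both workers, then wait_agent on both IDs.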
### handleCheck
Output current pipeline status from tasks.json. Do NOT advance pipeline.
```
Pipeline Status (<pipeline-mode>):
[DONE] DESIGN-001 (architect) -> design/design-001.md
[DONE] DEV-001 (developer) -> code/dev-log.md
[RUN] VERIFY-001 (tester) -> verifying...
[RUN] REVIEW-001 (reviewer) -> reviewing...
[WAIT] DEV-fix (developer) -> blocked by REVIEW-001
GC Rounds: <gc_round>/<max_gc_rounds>
Sprint: <sprint_id>
Session: <session-id>
```
### handleResume
Resume pipeline after user pause or interruption.
1. Audit tasks.json for inconsistencies:
- Tasks stuck in "in_progress" -> reset to "pending"
- Tasks with completed deps but still "pending" -> include in spawn list
2. Proceed to handleSpawnNext
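The audit step can be sketched as a small in-place pass over tasks.json. A minimal sketch; the function name is illustrative.

```javascript
// Resume audit: reset tasks stranded in "in_progress" back to "pending"
// so handleSpawnNext can pick them up on the next pass.
function auditForResume(tasks) {
  const reset = [];
  for (const [id, t] of Object.entries(tasks)) {
    if (t.status === "in_progress") {
      t.status = "pending";
      reset.push(id);
    }
  }
  return reset;
}
```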
### handleComplete
Triggered when all pipeline tasks are completed.
**Completion check by mode**:
| Mode | Completion Condition |
|------|---------------------|
| patch | DEV-001 + VERIFY-001 completed |
| sprint | DESIGN-001 + DEV-001 + VERIFY-001 + REVIEW-001 (+ any GC tasks) completed |
| multi-sprint | All sprint tasks (+ any GC tasks) completed |
1. Verify all tasks completed in tasks.json
2. If any tasks not completed, return to handleSpawnNext
3. **Multi-sprint check**: If multi-sprint AND more sprints planned:
- Record sprint metrics to .msg/meta.json sprint_history
- Evaluate downgrade eligibility (velocity >= expected, review avg >= 8)
- Pause for user confirmation before Sprint N+1
4. If all completed, transition to coordinator Phase 5 (Report + Completion Action)
## Phase 4: State Persistence
After every handler execution:
1. Update tasks.json with current state (gc_round, last event, active tasks)
2. Update .msg/meta.json gc_round if changed
3. Verify task list consistency
4. STOP and wait for next event


@@ -0,0 +1,161 @@
# Coordinator Role
Orchestrate team-iterdev: analyze -> dispatch -> spawn -> monitor -> report.
## Identity
- Name: coordinator | Tag: [coordinator]
- Responsibility: Analyze task -> Create session -> Dispatch tasks -> Monitor progress -> Report results
## Boundaries
### MUST
- Use `team_worker` agent type for all worker spawns (NOT `general-purpose`)
- Follow Command Execution Protocol for dispatch and monitor commands
- Respect pipeline stage dependencies (deps)
- Stop after spawning workers -- wait for results via wait_agent
- Handle developer<->reviewer GC loop (max 3 rounds)
- Maintain tasks.json for real-time progress
- Execute completion action in Phase 5
### MUST NOT
- Implement domain logic (designing, coding, testing, reviewing) -- workers handle this
- Spawn workers without creating tasks first
- Write source code directly
- Force-advance pipeline past failed review/validation
- Modify task outputs (workers own their deliverables)
## Command Execution Protocol
When coordinator needs to execute a command:
1. Read `commands/<command>.md`
2. Follow the workflow defined in the command
3. Commands are inline execution guides, NOT separate agents
4. Execute synchronously, complete before proceeding
## Entry Router
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Status check | Args contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Args contain "resume" or "continue" | -> handleResume (monitor.md) |
| Pipeline complete | All tasks completed | -> handleComplete (monitor.md) |
| Interrupted session | Active/paused session in .workflow/.team/IDS-* | -> Phase 0 |
| New session | None of above | -> Phase 1 |
For check/resume/complete: load @commands/monitor.md, execute handler, STOP.
## Phase 0: Session Resume Check
1. Scan `.workflow/.team/IDS-*/tasks.json` for active/paused sessions
2. No sessions -> Phase 1
3. Single session -> reconcile (read tasks.json, reset in_progress->pending, kick first ready task)
4. Multiple -> request_user_input for selection
## Phase 1: Requirement Clarification
TEXT-LEVEL ONLY. No source code reading.
1. Parse user task description from $ARGUMENTS
2. Delegate to @commands/analyze.md
3. Assess complexity for pipeline selection:
| Signal | Weight |
|--------|--------|
| Changed files > 10 | +3 |
| Changed files 3-10 | +2 |
| Structural change (refactor, architect, restructure) | +3 |
| Cross-cutting (multiple, across, cross) | +2 |
| Simple fix (fix, bug, typo, patch) | -2 |
| Score | Pipeline |
|-------|----------|
| >= 5 | multi-sprint |
| 2-4 | sprint |
| 0-1 | patch |
4. Ask for missing parameters via request_user_input (mode selection)
5. Record requirement with scope, pipeline mode
6. CRITICAL: Always proceed to Phase 2, never skip team workflow
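The scoring tables above can be sketched as a single selection function. A minimal sketch: the `signals` shape is an assumed summary of the Phase 1 analysis, and the boolean flags correspond to the keyword signals in the table.

```javascript
// Pipeline selection sketch implementing the complexity scoring table.
function selectPipeline(signals) {
  let score = 0;
  if (signals.changedFiles > 10) score += 3;
  else if (signals.changedFiles >= 3) score += 2;
  if (signals.structural) score += 3;    // refactor, architect, restructure
  if (signals.crossCutting) score += 2;  // multiple, across, cross
  if (signals.simpleFix) score -= 2;     // fix, bug, typo, patch
  if (score >= 5) return "multi-sprint";
  if (score >= 2) return "sprint";
  return "patch";                        // score 0-1 (or negative)
}
```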
## Phase 2: Session & Team Setup
1. Resolve workspace paths (MUST do first):
- `project_root` = result of `Bash({ command: "pwd" })`
- `skill_root` = `<project_root>/.codex/skills/team-iterdev`
2. Generate session ID: `IDS-<slug>-<YYYY-MM-DD>`
3. Create session folder structure:
```
   mkdir -p .workflow/.team/<session-id>/{design,code,verify,review,wisdom,.msg}
```
4. Read specs/pipelines.md -> select pipeline based on complexity
5. Initialize wisdom directory (learnings.md, decisions.md, conventions.md, issues.md)
6. Write initial tasks.json:
```json
{
"session_id": "<id>",
"pipeline_mode": "<patch|sprint|multi-sprint>",
"requirement": "<original requirement>",
"created_at": "<ISO timestamp>",
"gc_round": 0,
"max_gc_rounds": 3,
"active_agents": {},
"tasks": {}
}
```
7. Initialize meta.json with pipeline metadata:
```typescript
mcp__ccw-tools__team_msg({
operation: "log", session_id: "<id>", from: "coordinator",
type: "state_update", summary: "Session initialized",
data: {
pipeline_mode: "<patch|sprint|multi-sprint>",
pipeline_stages: ["architect", "developer", "tester", "reviewer"],
roles: ["coordinator", "architect", "developer", "tester", "reviewer"]
}
})
```
## Phase 3: Task Chain Creation
Delegate to @commands/dispatch.md:
1. Read specs/pipelines.md for selected pipeline task registry
2. Add task entries to tasks.json `tasks` object with deps
3. Update tasks.json metadata
## Phase 4: Spawn-and-Wait
Delegate to @commands/monitor.md#handleSpawnNext:
1. Find ready tasks (pending + all deps resolved)
2. Spawn team_worker agents via spawn_agent, wait_agent for results
3. Output status summary
4. STOP
## Phase 5: Report + Completion Action
1. Load session state -> count completed tasks, calculate duration
2. Record sprint learning to .msg/meta.json sprint_history
3. List deliverables:
| Deliverable | Path |
|-------------|------|
| Design Document | <session>/design/design-001.md |
| Task Breakdown | <session>/design/task-breakdown.json |
| Dev Log | <session>/code/dev-log.md |
| Verification Results | <session>/verify/verify-001.json |
| Review Report | <session>/review/review-001.md |
4. Execute completion action per session.completion_action:
- interactive -> request_user_input (Archive/Keep/Export)
- auto_archive -> Archive & Clean (status=completed)
- auto_keep -> Keep Active (status=paused)
## Error Handling
| Error | Resolution |
|-------|------------|
| Task too vague | request_user_input for clarification |
| Session corruption | Attempt recovery, fallback to manual |
| Worker crash | Reset task to pending, respawn |
| GC loop exceeds 3 rounds | Accept with warning, record in shared memory |
| Sprint velocity drops below 50% | Alert user, suggest scope reduction |
| Task ledger corrupted | Rebuild from tasks.json state |


@@ -0,0 +1,74 @@
---
role: developer
prefix: DEV
inner_loop: true
message_types:
success: dev_complete
progress: dev_progress
error: error
---
# Developer
Code implementer. Implements code according to the design with incremental delivery. Acts as the Generator in the Generator-Critic loop (paired with reviewer).
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Design document | <session>/design/design-001.md | For non-fix tasks |
| Task breakdown | <session>/design/task-breakdown.json | For non-fix tasks |
| Review feedback | <session>/review/*.md | For fix tasks |
| Wisdom files | <session>/wisdom/ | No |
1. Extract session path from task description
2. Read .msg/meta.json for shared context
3. Detect task type:
| Task Type | Detection | Loading |
|-----------|-----------|---------|
| Fix task | Subject contains "fix" | Read latest review file for feedback |
| Normal task | No "fix" in subject | Read design document + task breakdown |
4. Load previous implementation_context from .msg/meta.json
5. Read wisdom files for conventions and known issues
## Phase 3: Code Implementation
**Implementation strategy selection**:
| Task Count | Complexity | Strategy |
|------------|------------|----------|
| <= 2 tasks | Low | Direct: inline Edit/Write |
| 3-5 tasks | Medium | Single agent: one code-developer for all |
| > 5 tasks | High | Batch agent: group by module, one agent per batch |
**Fix Task Mode** (GC Loop):
- Focus on review feedback items only
- Fix critical issues first, then high, then medium
- Do NOT change code that was not flagged
- Maintain existing code style and patterns
**Normal Task Mode**:
- Read target files, apply changes using Edit or Write
- Follow execution order from task breakdown
- Validate syntax after each major change
## Phase 4: Self-Validation
| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Syntax | tsc --noEmit or equivalent | No errors |
| File existence | Verify all planned files exist | All files present |
| Import resolution | Check no broken imports | All imports resolve |
1. Run syntax check: `tsc --noEmit` / `python -m py_compile` / equivalent
2. Auto-fix if validation fails (max 2 attempts)
3. Write dev log to `<session>/code/dev-log.md`:
- Changed files count, syntax status, fix task flag, file list
4. Update implementation_context in .msg/meta.json:
- task, changed_files, is_fix, syntax_clean
5. Write discoveries to wisdom/learnings.md


@@ -0,0 +1,66 @@
---
role: reviewer
prefix: REVIEW
inner_loop: false
message_types:
success: review_passed
revision: review_revision
critical: review_critical
error: error
---
# Reviewer
Code reviewer. Multi-dimensional review, quality scoring, improvement suggestions. Acts as Critic in Generator-Critic loop (paired with developer).
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Design document | <session>/design/design-001.md | For requirements alignment |
| Changed files | Git diff | Yes |
1. Extract session path from task description
2. Read .msg/meta.json for shared context and previous review_feedback_trends
3. Read design document for requirements alignment
4. Get changed files via git diff, read file contents (limit 20 files)
## Phase 3: Multi-Dimensional Review
**Review dimensions**:
| Dimension | Weight | Focus Areas |
|-----------|--------|-------------|
| Correctness | 30% | Logic correctness, boundary handling |
| Completeness | 25% | Coverage of design requirements |
| Maintainability | 25% | Readability, code style, DRY |
| Security | 20% | Vulnerabilities, input validation |
Per-dimension: scan modified files, record findings with severity (CRITICAL/HIGH/MEDIUM/LOW), include file:line references and suggestions.
**Scoring**: Weighted average of dimension scores (1-10 each).
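The weighted average can be sketched as follows, using the weights from the dimension table above. The function name and input shape are illustrative.

```javascript
// Weighted quality score: each dimension scored 1-10, weights sum to 1.0.
const WEIGHTS = {
  correctness: 0.30,
  completeness: 0.25,
  maintainability: 0.25,
  security: 0.20
};

function qualityScore(dimensionScores) {
  return Object.entries(WEIGHTS)
    .reduce((sum, [dim, w]) => sum + w * dimensionScores[dim], 0);
}
```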
**Output review report** (`<session>/review/review-<num>.md`):
- Files reviewed count, quality score, issue counts by severity
- Per-finding: severity, file:line, dimension, description, suggestion
- Scoring breakdown by dimension
- Signal: CRITICAL / REVISION_NEEDED / APPROVED
- Design alignment notes
## Phase 4: Trend Analysis + Verdict
1. Compare with previous review_feedback_trends from .msg/meta.json
2. Identify recurring issues, improvement areas, new issues
| Verdict Condition | Message Type |
|-------------------|--------------|
| criticalCount > 0 | review_critical |
| score < 7 | review_revision |
| else | review_passed |
3. Update review_feedback_trends in .msg/meta.json:
- review_id, score, critical count, high count, dimensions, gc_round
4. Write discoveries to wisdom/learnings.md
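The verdict table above maps directly to a small function. A minimal sketch; the return values are the message types defined in this role's frontmatter.

```javascript
// Verdict sketch: critical findings dominate, then the score-7 threshold.
function reviewVerdict(criticalCount, score) {
  if (criticalCount > 0) return "review_critical";
  if (score < 7) return "review_revision";
  return "review_passed";
}
```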


@@ -0,0 +1,88 @@
---
role: tester
prefix: VERIFY
inner_loop: false
message_types:
success: verify_passed
failure: verify_failed
fix: fix_required
error: error
---
# Tester
Test validator. Test execution, fix cycles, and regression detection.
## Phase 2: Environment Detection
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Changed files | Git diff | Yes |
1. Extract session path from task description
2. Read .msg/meta.json for shared context
3. Get changed files via git diff
4. Detect test framework and command:
| Detection | Method |
|-----------|--------|
| Test command | Check package.json scripts, pytest.ini, Makefile |
| Coverage tool | Check for nyc, coverage.py, jest --coverage config |
Common commands: npm test, pytest, go test ./..., cargo test
## Phase 3: Execution + Fix Cycle
**Iterative test-fix cycle** (max 5 iterations):
| Step | Action |
|------|--------|
| 1 | Run test command |
| 2 | Parse results, check pass rate |
| 3 | Pass rate >= 95% -> exit loop (success) |
| 4 | Extract failing test details |
| 5 | Apply fix using CLI tool |
| 6 | Increment iteration counter |
| 7 | iteration >= MAX (5) -> exit loop (report failures) |
| 8 | Go to Step 1 |
**Fix delegation**: Use CLI tool to fix failing tests:
```bash
ccw cli -p "PURPOSE: Fix failing tests; success = all listed tests pass
TASK: • Analyze test failure output • Identify root cause in changed files • Apply minimal fix
MODE: write
CONTEXT: @<changed-files> | Memory: Test output from current iteration
EXPECTED: Code fixes that make failing tests pass without breaking other tests
CONSTRAINTS: Only modify files in changed list | Minimal changes
Test output: <test-failure-details>
Changed files: <file-list>" --tool gemini --mode write --rule development-debug-runtime-issues
```
Wait for CLI completion before re-running tests.
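The control flow of the cycle can be sketched as below. This is a minimal sketch: `runTests` and `applyFix` are injected stand-ins for the detected test command and the ccw CLI fix call, so only the loop logic is shown.

```javascript
// Iterative test-fix cycle (max 5 iterations), per the step table above.
function testFixCycle(runTests, applyFix, maxIterations = 5) {
  for (let iteration = 1; iteration <= maxIterations; iteration++) {
    const { passRate, failures } = runTests();               // Step 1-2
    if (passRate >= 0.95) {
      return { passed: true, iterations: iteration, passRate }; // Step 3
    }
    if (iteration === maxIterations) {
      return { passed: false, iterations: iteration, passRate }; // Step 7
    }
    applyFix(failures); // Step 4-5: delegate to CLI, wait for completion
  }
}
```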
## Phase 4: Regression Check + Report
1. Run full test suite for regression: `<test-command> --all`
| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Regression | Run full test suite | No FAIL in output |
| Coverage | Run coverage tool | >= 80% (if configured) |
2. Write verification results to `<session>/verify/verify-<num>.json`:
- verify_id, pass_rate, iterations, passed, timestamp, regression_passed
3. Determine message type:
| Condition | Message Type |
|-----------|--------------|
| passRate >= 0.95 | verify_passed |
| passRate < 0.95 && iterations >= MAX | fix_required |
| passRate < 0.95 | verify_failed |
4. Update .msg/meta.json with test_patterns entry
5. Write discoveries to wisdom/issues.md


@@ -1,174 +0,0 @@
# Team IterDev -- CSV Schema
## Master CSV: tasks.csv
### Column Definitions
#### Input Columns (Set by Decomposer)
| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier | `"DEV-001"` |
| `title` | string | Yes | Short task title | `"Implement design"` |
| `description` | string | Yes | Detailed task description (self-contained) | `"Load design document, implement tasks in execution order..."` |
| `role` | string | Yes | Worker role: architect, developer, tester, reviewer | `"developer"` |
| `pipeline` | string | Yes | Pipeline mode: patch, sprint, multi-sprint | `"sprint"` |
| `sprint_num` | integer | Yes | Sprint number (1-based, for multi-sprint tracking) | `"1"` |
| `gc_round` | integer | Yes | Generator-Critic round number (0 = initial, 1+ = fix round) | `"0"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"DESIGN-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"DESIGN-001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |
#### Computed Columns (Set by Wave Engine)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[Task DESIGN-001] Created design with 3 components..."` |
#### Output Columns (Set by Agent)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` -> `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Implemented 5 files, all syntax clean..."` |
| `review_score` | string | Quality score 1-10 (reviewer only, empty for others) | `"8"` |
| `gc_signal` | string | `REVISION_NEEDED` or `CONVERGED` (reviewer only) | `"CONVERGED"` |
| `error` | string | Error message if failed | `""` |
---
### exec_mode Values
| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |
Interactive tasks appear in master CSV for dependency tracking but are NOT included in wave-{N}.csv files.
---
### Example Data
```csv
id,title,description,role,pipeline,sprint_num,gc_round,deps,context_from,exec_mode,wave,status,findings,review_score,gc_signal,error
"DESIGN-001","Technical design and task breakdown","Explore codebase for patterns and dependencies. Create component design with integration points. Break into implementable tasks with acceptance criteria.","architect","sprint","1","0","","","csv-wave","1","pending","","","",""
"DEV-001","Implement design","Load design document and task breakdown. Implement tasks in execution order. Validate syntax after each change. Write dev log.","developer","sprint","1","0","DESIGN-001","DESIGN-001","csv-wave","2","pending","","","",""
"VERIFY-001","Verify implementation","Detect test framework. Run targeted tests for changed files. Run regression test suite. Report pass rate.","tester","sprint","1","0","DEV-001","DEV-001","csv-wave","3","pending","","","",""
"REVIEW-001","Code review","Load changed files and design. Review across correctness, completeness, maintainability, security. Score quality 1-10. Issue verdict.","reviewer","sprint","1","0","DEV-001","DEV-001","csv-wave","3","pending","","","",""
"GC-CHECK-001","GC loop decision","Evaluate review severity. If critical_count > 0 or score < 7: REVISION. Else: CONVERGE.","gc-controller","sprint","1","1","REVIEW-001","REVIEW-001","interactive","4","pending","","","",""
"DEV-fix-1","Fix review issues (round 1)","Fix critical and high issues from REVIEW-001. Focus on review feedback only. Do NOT change unflagged code.","developer","sprint","1","1","GC-CHECK-001","REVIEW-001","csv-wave","5","pending","","","",""
"REVIEW-002","Re-review (round 1)","Review fixes from DEV-fix-1. Re-evaluate quality. Check if critical issues are resolved.","reviewer","sprint","1","1","DEV-fix-1","DEV-fix-1","csv-wave","6","pending","","","",""
```
---
### Column Lifecycle
```
Decomposer (Phase 1) Wave Engine (Phase 2) Agent (Execution)
--------------------- -------------------- -----------------
id -----------> id ----------> id
title -----------> title ----------> (reads)
description -----------> description ----------> (reads)
role -----------> role ----------> (reads)
pipeline -----------> pipeline ----------> (reads)
sprint_num -----------> sprint_num ----------> (reads)
gc_round -----------> gc_round ----------> (reads)
deps -----------> deps ----------> (reads)
context_from-----------> context_from----------> (reads)
exec_mode -----------> exec_mode ----------> (reads)
wave ----------> (reads)
prev_context ----------> (reads)
status
findings
review_score
gc_signal
error
```
---
## Output Schema (JSON)
Agent output via `report_agent_job_result` (csv-wave tasks):
```json
{
"id": "DEV-001",
"status": "completed",
"findings": "Implemented 5 files following design. All syntax checks pass. Key changes: src/auth/jwt.ts, src/middleware/auth.ts.",
"review_score": "",
"gc_signal": "",
"error": ""
}
```
Reviewer-specific output:
```json
{
"id": "REVIEW-001",
"status": "completed",
"findings": "Reviewed 5 files. Correctness: 8/10, Completeness: 9/10, Maintainability: 7/10, Security: 6/10. 1 HIGH issue (missing token expiry check).",
"review_score": "7.5",
"gc_signal": "REVISION_NEEDED",
"error": ""
}
```
Interactive tasks output via structured text or JSON written to `interactive/{id}-result.json`.
---
## Discovery Types
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `design_decision` | `data.component` | `{component, approach, rationale, alternatives}` | Architecture decision |
| `implementation` | `data.file` | `{file, changes, pattern_used, notes}` | Code implementation detail |
| `test_result` | `data.test_suite` | `{test_suite, pass_rate, failures[], regressions}` | Test execution result |
| `review_finding` | `data.file_line` | `{file_line, severity, dimension, description, suggestion}` | Review finding |
| `convention` | `data.name` | `{name, description, example}` | Discovered project convention |
| `gc_decision` | `data.round` | `{round, signal, critical_count, score}` | GC loop decision record |
### Discovery NDJSON Format
```jsonl
{"ts":"2026-03-08T10:00:00+08:00","worker":"DESIGN-001","type":"design_decision","data":{"component":"AuthModule","approach":"JWT with refresh tokens","rationale":"Stateless auth","alternatives":"Session-based, OAuth2"}}
{"ts":"2026-03-08T10:05:00+08:00","worker":"DEV-001","type":"implementation","data":{"file":"src/auth/jwt.ts","changes":"Added JWT middleware with token validation","pattern_used":"Express middleware","notes":"Reuses existing bcrypt"}}
{"ts":"2026-03-08T10:10:00+08:00","worker":"VERIFY-001","type":"test_result","data":{"test_suite":"auth","pass_rate":0.96,"failures":["token-expiry-edge-case"],"regressions":false}}
{"ts":"2026-03-08T10:15:00+08:00","worker":"REVIEW-001","type":"review_finding","data":{"file_line":"src/auth/jwt.ts:42","severity":"HIGH","dimension":"security","description":"Token expiry not validated","suggestion":"Add exp claim check in validateToken()"}}
```
> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
---
## Cross-Mechanism Context Flow
| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |
---
## Validation Rules
| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Valid role | role in {architect, developer, tester, reviewer, gc-controller} | "Invalid role: {role}" |
| GC round non-negative | gc_round >= 0 | "Invalid gc_round: {value}" |
| Valid pipeline | pipeline in {patch, sprint, multi-sprint} | "Invalid pipeline: {value}" |
| Cross-mechanism deps | Interactive<->CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |


@@ -0,0 +1,94 @@
# IterDev Pipeline Definitions
## Three-Pipeline Architecture
### Patch Pipeline (2 beats, serial)
```
DEV-001 -> VERIFY-001
[developer] [tester]
```
### Sprint Pipeline (4 beats, with parallel window)
```
DESIGN-001 -> DEV-001 -> [VERIFY-001 + REVIEW-001] (parallel)
[architect] [developer] [tester] [reviewer]
```
### Multi-Sprint Pipeline (N beats, iterative)
```
Sprint 1: DESIGN-001 -> DEV-001 -> DEV-002(incremental) -> VERIFY-001 -> DEV-fix -> REVIEW-001
Sprint 2: DESIGN-002(refined) -> DEV-003 -> VERIFY-002 -> REVIEW-002
...
```
## Generator-Critic Loop (developer <-> reviewer)
```
DEV -> REVIEW -> (if review.critical_count > 0 || review.score < 7)
-> DEV-fix -> REVIEW-2 -> (if still issues) -> DEV-fix-2 -> REVIEW-3
-> (max 3 rounds, then accept with warning)
```
## Pipeline Selection Logic
| Signal | Score |
|--------|-------|
| Changed files > 10 | +3 |
| Changed files 3-10 | +2 |
| Structural change | +3 |
| Cross-cutting concern | +2 |
| Simple fix | -2 |
| Score | Pipeline |
|-------|----------|
| >= 5 | multi-sprint |
| 2-4 | sprint |
| 0-1 | patch |
## Task Metadata Registry
| Task ID | Role | Pipeline | Dependencies | Description |
|---------|------|----------|-------------|-------------|
| DESIGN-001 | architect | sprint/multi | (none) | Technical design and task breakdown |
| DEV-001 | developer | all | DESIGN-001 (sprint/multi) or (none for patch) | Code implementation |
| DEV-002 | developer | multi | DEV-001 | Incremental implementation |
| DEV-fix | developer | sprint/multi | REVIEW-* (GC loop trigger) | Fix issues from review |
| VERIFY-001 | tester | all | DEV-001 (or last DEV) | Test execution and fix cycles |
| REVIEW-001 | reviewer | sprint/multi | DEV-001 (or last DEV) | Code review and quality scoring |
## Checkpoints
| Trigger Condition | Location | Behavior |
|-------------------|----------|----------|
| GC loop exceeds max rounds | After REVIEW-3 | Stop iteration, accept with warning, record in wisdom |
| Sprint transition | End of Sprint N | Pause, retrospective, user confirms `resume` for Sprint N+1 |
| Pipeline stall | No ready + no running tasks | Check missing tasks, report blockedBy chain to user |
## Multi-Sprint Dynamic Downgrade
If Sprint N metrics are strong (velocity >= expected, review avg >= 8), coordinator may downgrade Sprint N+1 from multi-sprint to sprint pipeline for efficiency.
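The eligibility check can be sketched as a predicate over the sprint metrics. A minimal sketch; the `metrics` field names are assumed from the sprint_history entries described elsewhere in this skill.

```javascript
// Downgrade eligibility: strong Sprint N metrics allow running Sprint N+1
// on the lighter sprint pipeline. Thresholds follow the text above.
function canDowngrade(metrics) {
  return metrics.velocity >= metrics.expectedVelocity && metrics.reviewAvg >= 8;
}
```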
## Task Ledger Schema
| Field | Description |
|-------|-------------|
| `sprint_id` | Current sprint identifier |
| `sprint_goal` | Sprint objective |
| `tasks[]` | Array of task entries |
| `metrics` | Aggregated metrics: total, completed, in_progress, blocked, velocity |
**Task Entry Fields**:
| Field | Description |
|-------|-------------|
| `id` | Task identifier |
| `title` | Task title |
| `owner` | Assigned role |
| `status` | pending / in_progress / completed / blocked |
| `started_at` / `completed_at` | Timestamps |
| `gc_rounds` | Generator-Critic iteration count |
| `review_score` | Reviewer score (null until reviewed) |
| `test_pass_rate` | Tester pass rate (null until tested) |


@@ -0,0 +1,172 @@
{
"team_name": "team-iterdev",
"team_display_name": "Team IterDev",
"description": "Iterative development team with Generator-Critic loop, task ledger, sprint learning, dynamic pipeline, conflict handling, concurrency control, rollback strategy, user feedback loop, and tech debt tracking",
"version": "1.2.0",
"roles": {
"coordinator": {
"task_prefix": null,
"responsibility": "Sprint planning, backlog management, task ledger maintenance, GC loop control, Phase 1: conflict handling, concurrency control, rollback strategy, Phase 3: user feedback loop, tech debt tracking",
"message_types": [
"sprint_started", "gc_loop_trigger", "sprint_complete", "task_unblocked", "error", "shutdown",
"conflict_detected", "conflict_resolved", "resource_locked", "resource_unlocked", "resource_contention",
"rollback_initiated", "rollback_completed", "rollback_failed",
"user_feedback_received", "tech_debt_identified"
]
},
"architect": {
"task_prefix": "DESIGN",
"responsibility": "Technical design, task decomposition, architecture decisions",
"message_types": ["design_ready", "design_revision", "error"]
},
"developer": {
"task_prefix": "DEV",
"responsibility": "Code implementation, incremental delivery",
"message_types": ["dev_complete", "dev_progress", "error"]
},
"tester": {
"task_prefix": "VERIFY",
"responsibility": "Test execution, fix cycle, regression detection",
"message_types": ["verify_passed", "verify_failed", "fix_required", "error"]
},
"reviewer": {
"task_prefix": "REVIEW",
"responsibility": "Code review, quality scoring, improvement suggestions",
"message_types": ["review_passed", "review_revision", "review_critical", "error"]
}
},
"pipelines": {
"patch": {
"description": "Simple fix: implement → verify",
"task_chain": ["DEV-001", "VERIFY-001"],
"gc_loops": 0
},
"sprint": {
"description": "Standard feature: design → implement → verify + review (parallel)",
"task_chain": ["DESIGN-001", "DEV-001", "VERIFY-001", "REVIEW-001"],
"gc_loops": 3,
"parallel_groups": [["VERIFY-001", "REVIEW-001"]]
},
"multi-sprint": {
"description": "Large feature: multiple sprint cycles with incremental delivery",
"task_chain": "dynamic — coordinator creates per-sprint chains",
"gc_loops": 3,
"sprint_count": "dynamic"
}
},
"innovation_patterns": {
"generator_critic": {
"generator": "developer",
"critic": "reviewer",
"max_rounds": 3,
"convergence_trigger": "review.critical_count === 0 && review.score >= 7"
},
"task_ledger": {
"file": "task-ledger.json",
"updated_by": "coordinator",
"tracks": ["status", "gc_rounds", "review_score", "test_pass_rate", "velocity"],
"phase1_extensions": {
"conflict_info": {
"fields": ["status", "conflicting_files", "resolution_strategy", "resolved_by_task_id"],
"default": { "status": "none", "conflicting_files": [], "resolution_strategy": null, "resolved_by_task_id": null }
},
"rollback_info": {
"fields": ["snapshot_id", "rollback_procedure", "last_successful_state_id"],
"default": { "snapshot_id": null, "rollback_procedure": null, "last_successful_state_id": null }
}
}
},
"shared_memory": {
"file": "shared-memory.json",
"fields": {
"architect": "architecture_decisions",
"developer": "implementation_context",
"tester": "test_patterns",
"reviewer": "review_feedback_trends"
},
"persistent_fields": ["sprint_history", "what_worked", "what_failed", "patterns_learned"],
"phase1_extensions": {
"resource_locks": {
"description": "Concurrency control: resource locking state",
"lock_timeout_ms": 300000,
"deadlock_detection": true
}
}
},
"dynamic_pipeline": {
"selector": "coordinator",
"criteria": "file_count + module_count + complexity_assessment",
"downgrade_rule": "velocity >= expected && review_avg >= 8 → simplify next sprint"
}
},
"phase1_features": {
"conflict_handling": {
"enabled": true,
"detection_strategy": "file_overlap",
"resolution_strategies": ["manual", "auto_merge", "abort"]
},
"concurrency_control": {
"enabled": true,
"lock_timeout_minutes": 5,
"deadlock_detection": true,
"resources": ["task-ledger.json", "shared-memory.json"]
},
"rollback_strategy": {
"enabled": true,
"snapshot_trigger": "task_complete",
"default_procedure": "git revert HEAD"
}
},
"phase2_features": {
"external_dependency_management": {
"enabled": true,
"validation_trigger": "task_start",
"supported_sources": ["npm", "maven", "pip", "git"],
"version_check_command": {
"npm": "npm list {name}",
"pip": "pip show {name}",
"maven": "mvn dependency:tree"
}
},
"state_recovery": {
"enabled": true,
"checkpoint_trigger": "phase_complete",
"max_checkpoints_per_task": 5,
"checkpoint_dir": "checkpoints/"
}
},
"phase3_features": {
"user_feedback_loop": {
"enabled": true,
"collection_trigger": "sprint_complete",
"max_feedback_items": 50,
"severity_levels": ["low", "medium", "high", "critical"],
"status_flow": ["new", "reviewed", "addressed", "closed"]
},
"tech_debt_tracking": {
"enabled": true,
"detection_sources": ["review", "test", "architect"],
"categories": ["code", "design", "test", "documentation"],
"severity_levels": ["low", "medium", "high", "critical"],
"status_flow": ["open", "in_progress", "resolved", "deferred"],
"report_trigger": "sprint_retrospective"
}
},
"collaboration_patterns": ["CP-1", "CP-3", "CP-5", "CP-6"],
"session_dirs": {
"base": ".workflow/.team/IDS-{slug}-{YYYY-MM-DD}/",
"design": "design/",
"code": "code/",
"verify": "verify/",
"review": "review/",
"messages": ".workflow/.team-msg/{team-name}/"
}
}
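The `generator_critic` settings above (`convergence_trigger` and `max_rounds`) imply a per-round decision the coordinator makes after each REVIEW task. A sketch, with the return labels chosen here for illustration rather than taken from the spec:

```python
def gc_step(review, rounds, max_rounds=3):
    """Decide the next Generator-Critic action after a review,
    mirroring convergence_trigger: critical_count == 0 and score >= 7."""
    if review["critical_count"] == 0 and review["score"] >= 7:
        return "accept"  # converged: GC loop ends
    if rounds >= max_rounds:
        return "accept_with_warning"  # checkpoint: stop iterating, record in wisdom
    return "dev_fix"  # spawn DEV-fix for another round

print(gc_step({"critical_count": 0, "score": 8}, 1))  # accept
print(gc_step({"critical_count": 2, "score": 5}, 3))  # accept_with_warning
```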