Remove outdated coordinator, developer, reviewer, tester roles and associated commands, along with pipeline definitions and team configuration files to streamline the iterative development process.

This commit is contained in:
catlog22
2026-03-28 12:40:22 +08:00
parent 656550210e
commit 662cff53d9
31 changed files with 9 additions and 2773 deletions


@@ -1,6 +1,6 @@
---
name: brainstorm
description: Unified brainstorming skill with dual-mode operation - auto pipeline and single role analysis. Triggers on "brainstorm", "头脑风暴".
description: Unified brainstorming skill with dual-mode operation - auto mode (framework generation, parallel multi-role analysis, cross-role synthesis) and single role analysis. Triggers on "brainstorm", "头脑风暴".
allowed-tools: Skill(*), Agent(conceptual-planning-agent, context-search-agent), AskUserQuestion(*), TodoWrite(*), Read(*), Write(*), Edit(*), Glob(*), Bash(*)
---


@@ -1,6 +1,6 @@
---
name: spec-generator
description: Specification generator - 6 phase document chain producing product brief, PRD, architecture, and epics. Triggers on "generate spec", "create specification", "spec generator", "workflow:spec".
description: Specification generator - 7 phase document chain producing product brief, PRD, architecture, epics, and issues with Codex review gates. Triggers on "generate spec", "create specification", "spec generator", "workflow:spec".
allowed-tools: Agent, AskUserQuestion, TaskCreate, TaskUpdate, TaskList, Read, Write, Edit, Bash, Glob, Grep, Skill
---


@@ -1,127 +0,0 @@
---
name: team-iterdev
description: Unified team skill for iterative development team. Pure router — all roles read this file. Beat model is coordinator-only in monitor.md. Generator-Critic loops (developer<->reviewer, max 3 rounds). Triggers on "team iterdev".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Agent(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---
# Team IterDev
Iterative development team skill. Generator-Critic loops (developer<->reviewer, max 3 rounds), task ledger (task-ledger.json) for real-time progress, shared memory (cross-sprint learning), and dynamic pipeline selection for incremental delivery.
## Architecture
```
Skill(skill="team-iterdev", args="task description")
|
SKILL.md (this file) = Router
|
+--------------+--------------+
| |
no --role flag --role <name>
| |
Coordinator Worker
roles/coordinator/role.md roles/<name>/role.md
|
+-- analyze -> dispatch -> spawn workers -> STOP
|
+-------+-------+-------+
v v v v
[architect] [developer] [tester] [reviewer]
(team-worker agents, each loads roles/<role>/role.md)
```
## Role Registry
| Role | Path | Prefix | Inner Loop |
|------|------|--------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | — | — |
| architect | [roles/architect/role.md](roles/architect/role.md) | DESIGN-* | false |
| developer | [roles/developer/role.md](roles/developer/role.md) | DEV-* | true |
| tester | [roles/tester/role.md](roles/tester/role.md) | VERIFY-* | false |
| reviewer | [roles/reviewer/role.md](roles/reviewer/role.md) | REVIEW-* | false |
## Role Router
Parse `$ARGUMENTS`:
- Has `--role <name>` → Read `roles/<name>/role.md`, execute Phase 2-4
- No `--role` → Read `@roles/coordinator/role.md`, execute entry router
## Shared Constants
- **Session prefix**: `IDS`
- **Session path**: `.workflow/.team/IDS-<slug>-<date>/`
- **CLI tools**: `ccw cli --mode analysis` (read-only), `ccw cli --mode write` (modifications)
- **Message bus**: `mcp__ccw-tools__team_msg(session_id=<session-id>, ...)`
## Worker Spawn Template
Coordinator spawns workers using this template:
```
Agent({
subagent_type: "team-worker",
description: "Spawn <role> worker",
team_name: "iterdev",
name: "<role>",
run_in_background: true,
prompt: `## Role Assignment
role: <role>
role_spec: <skill_root>/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
team_name: iterdev
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file (@<skill_root>/roles/<role>/role.md) to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).`
})
```
## User Commands
| Command | Action |
|---------|--------|
| `check` / `status` | View execution status graph |
| `resume` / `continue` | Advance to next step |
## Session Directory
```
.workflow/.team/IDS-<slug>-<YYYY-MM-DD>/
├── .msg/
│ ├── messages.jsonl # Team message bus
│ └── meta.json # Session state
├── task-analysis.json # Coordinator analyze output
├── task-ledger.json # Real-time task progress ledger
├── wisdom/ # Cross-task knowledge accumulation
│ ├── learnings.md
│ ├── decisions.md
│ ├── conventions.md
│ └── issues.md
├── design/ # Architect output
│ ├── design-001.md
│ └── task-breakdown.json
├── code/ # Developer tracking
│ └── dev-log.md
├── verify/ # Tester output
│ └── verify-001.json
└── review/ # Reviewer output
└── review-001.md
```
## Specs Reference
- [specs/pipelines.md](specs/pipelines.md) — Pipeline definitions and task registry
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Unknown command | Error with available command list |
| Role not found | Error with role registry |
| GC loop exceeds 3 rounds | Accept with warning, record in shared memory |
| Sprint velocity drops below 50% | Coordinator alerts user, suggests scope reduction |
| Task ledger corrupted | Rebuild from TaskList state |
| Conflict detected | Update conflict_info, notify coordinator, create DEV-fix task |
| Pipeline deadlock | Check blockedBy chain, report blocking point |


@@ -1,65 +0,0 @@
---
role: architect
prefix: DESIGN
inner_loop: false
message_types:
success: design_ready
revision: design_revision
error: error
---
# Architect
Technical design, task decomposition, and architecture decision records for iterative development.
## Phase 2: Context Loading + Codebase Exploration
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/.msg/meta.json | No |
| Wisdom files | <session>/wisdom/ | No |
1. Extract session path and requirement from task description
2. Read .msg/meta.json for shared context (architecture_decisions, implementation_context)
3. Read wisdom files if available (learnings.md, decisions.md, conventions.md)
4. Explore codebase for existing patterns, module structure, dependencies:
- Use mcp__ace-tool__search_context for semantic discovery
- Identify similar implementations and integration points
## Phase 3: Technical Design + Task Decomposition
**Design strategy selection**:
| Condition | Strategy |
|-----------|----------|
| Single module change | Direct inline design |
| Cross-module change | Multi-component design with integration points |
| Large refactoring | Phased approach with milestones |
**Outputs**:
1. **Design Document** (`<session>/design/design-<num>.md`):
- Architecture decision: approach, rationale, alternatives
- Component design: responsibility, dependencies, files, complexity
- Task breakdown: files, estimated complexity, dependencies, acceptance criteria
- Integration points and risks with mitigations
2. **Task Breakdown JSON** (`<session>/design/task-breakdown.json`):
- Array of tasks with id, title, files, complexity, dependencies, acceptance_criteria
- Execution order for developer to follow
## Phase 4: Design Validation
| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Components defined | Verify component list | At least 1 component |
| Task breakdown exists | Verify task list | At least 1 task |
| Dependencies mapped | All components have dependencies field | All present (can be empty) |
| Integration points | Verify integration section | Key integrations documented |
1. Run validation checks above
2. Write architecture_decisions entry to .msg/meta.json:
- design_id, approach, rationale, components, task_count
3. Write discoveries to wisdom/decisions.md and wisdom/conventions.md


@@ -1,62 +0,0 @@
# Analyze Task
Parse iterative development task -> detect capabilities -> assess pipeline complexity -> design roles.
**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.
## Signal Detection
| Keywords | Capability | Role |
|----------|------------|------|
| design, architect, restructure, refactor plan | architect | architect |
| implement, build, code, fix, develop | developer | developer |
| test, verify, validate, coverage | tester | tester |
| review, audit, quality, check | reviewer | reviewer |
## Pipeline Selection
| Signal | Score |
|--------|-------|
| Changed files > 10 | +3 |
| Changed files 3-10 | +2 |
| Structural change (refactor, architect, restructure) | +3 |
| Cross-cutting (multiple, across, cross) | +2 |
| Simple fix (fix, bug, typo, patch) | -2 |

| Score | Pipeline |
|-------|----------|
| >= 5 | multi-sprint |
| 2-4 | sprint |
| 0-1 | patch |
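The two tables above can be combined into a single scoring function. A minimal sketch — the function name, signature, and keyword regexes are illustrative assumptions, not part of the skill:

```typescript
// Sketch of the pipeline-selection scoring above (names assumed for illustration).
type Pipeline = "patch" | "sprint" | "multi-sprint";

function selectPipeline(description: string, changedFiles: number): Pipeline {
  let score = 0;
  if (changedFiles > 10) score += 3;        // Changed files > 10
  else if (changedFiles >= 3) score += 2;   // Changed files 3-10
  if (/refactor|architect|restructure/i.test(description)) score += 3; // structural
  if (/multiple|across|cross/i.test(description)) score += 2;          // cross-cutting
  if (/fix|bug|typo|patch/i.test(description)) score -= 2;             // simple fix
  if (score >= 5) return "multi-sprint";
  if (score >= 2) return "sprint";
  return "patch";
}
```

For example, "refactor auth across modules" with 12 changed files scores 3+3+2 = 8 and selects multi-sprint, while "fix login typo" with one changed file scores -2 and selects patch.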
## Dependency Graph
Natural ordering tiers:
- Tier 0: architect (design must come first)
- Tier 1: developer (implementation requires design)
- Tier 2: tester, reviewer (validation requires artifacts, can run parallel)
## Complexity Assessment
| Factor | Points |
|--------|--------|
| Cross-module changes | +2 |
| Serial depth > 3 | +1 |
| Multiple developers needed | +2 |
| GC loop likely needed | +1 |
## Output
Write <session>/task-analysis.json:
```json
{
"task_description": "<original>",
"pipeline_type": "<patch|sprint|multi-sprint>",
"capabilities": [{ "name": "<cap>", "role": "<role>", "keywords": ["..."] }],
"roles": [{ "name": "<role>", "prefix": "<PREFIX>", "inner_loop": false }],
"complexity": { "score": 0, "level": "Low|Medium|High" },
"needs_architecture": true,
"needs_testing": true,
"needs_review": true
}
```


@@ -1,234 +0,0 @@
# Command: Dispatch
Create the iterative development task chain with correct dependencies and structured task descriptions.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| User requirement | From coordinator Phase 1 | Yes |
| Session folder | From coordinator Phase 2 | Yes |
| Pipeline definition | From SKILL.md Pipeline Definitions | Yes |
| Pipeline mode | From session.json `pipeline` | Yes |
1. Load user requirement and scope from session.json
2. Load pipeline stage definitions from SKILL.md Task Metadata Registry
3. Read `pipeline` mode from session.json (patch / sprint / multi-sprint)
## Phase 3: Task Chain Creation
### Task Description Template
Every task description uses structured format for clarity:
```
TaskCreate({
subject: "<TASK-ID>",
description: "PURPOSE: <what this task achieves> | Success: <measurable completion criteria>
TASK:
- <step 1: specific action>
- <step 2: specific action>
- <step 3: specific action>
CONTEXT:
- Session: <session-folder>
- Scope: <task-scope>
- Upstream artifacts: <artifact-1>, <artifact-2>
- Shared memory: <session>/.msg/meta.json
EXPECTED: <deliverable path> + <quality criteria>
CONSTRAINTS: <scope limits, focus areas>
---
InnerLoop: <true|false>"
})
TaskUpdate({ taskId: "<TASK-ID>", addBlockedBy: [<dependency-list>], owner: "<role>" })
```
### Mode Router
| Mode | Action |
|------|--------|
| `patch` | Create DEV-001 + VERIFY-001 |
| `sprint` | Create DESIGN-001 + DEV-001 + VERIFY-001 + REVIEW-001 |
| `multi-sprint` | Create Sprint 1 chain, subsequent sprints created dynamically |
---
### Patch Pipeline
**DEV-001** (developer):
```
TaskCreate({
subject: "DEV-001",
description: "PURPOSE: Implement fix | Success: Fix applied, syntax clean
TASK:
- Load target files and understand context
- Apply fix changes
- Validate syntax
CONTEXT:
- Session: <session-folder>
- Scope: <task-scope>
- Shared memory: <session>/.msg/meta.json
EXPECTED: Modified source files + <session>/code/dev-log.md | Syntax clean
CONSTRAINTS: Minimal changes | Preserve existing behavior
---
InnerLoop: true"
})
TaskUpdate({ taskId: "DEV-001", owner: "developer" })
```
**VERIFY-001** (tester):
```
TaskCreate({
subject: "VERIFY-001",
description: "PURPOSE: Verify fix correctness | Success: Tests pass, no regressions
TASK:
- Detect test framework
- Run targeted tests for changed files
- Run regression test suite
CONTEXT:
- Session: <session-folder>
- Scope: <task-scope>
- Upstream artifacts: code/dev-log.md
- Shared memory: <session>/.msg/meta.json
EXPECTED: <session>/verify/verify-001.json | Pass rate >= 95%
CONSTRAINTS: Focus on changed files | Report any regressions
---
InnerLoop: false"
})
TaskUpdate({ taskId: "VERIFY-001", addBlockedBy: ["DEV-001"], owner: "tester" })
```
---
### Sprint Pipeline
**DESIGN-001** (architect):
```
TaskCreate({
subject: "DESIGN-001",
description: "PURPOSE: Technical design and task breakdown | Success: Design document + task breakdown ready
TASK:
- Explore codebase for patterns and dependencies
- Create component design with integration points
- Break down into implementable tasks with acceptance criteria
CONTEXT:
- Session: <session-folder>
- Scope: <task-scope>
- Shared memory: <session>/.msg/meta.json
EXPECTED: <session>/design/design-001.md + <session>/design/task-breakdown.json | Components defined, tasks actionable
CONSTRAINTS: Focus on <task-scope> | Risk assessment required
---
InnerLoop: false"
})
TaskUpdate({ taskId: "DESIGN-001", owner: "architect" })
```
**DEV-001** (developer):
```
TaskCreate({
subject: "DEV-001",
description: "PURPOSE: Implement design | Success: All design tasks implemented, syntax clean
TASK:
- Load design and task breakdown
- Implement tasks in execution order
- Validate syntax after changes
CONTEXT:
- Session: <session-folder>
- Scope: <task-scope>
- Upstream artifacts: design/design-001.md, design/task-breakdown.json
- Shared memory: <session>/.msg/meta.json
EXPECTED: Modified source files + <session>/code/dev-log.md | Syntax clean, all tasks done
CONSTRAINTS: Follow design | Preserve existing behavior | Follow code conventions
---
InnerLoop: true"
})
TaskUpdate({ taskId: "DEV-001", addBlockedBy: ["DESIGN-001"], owner: "developer" })
```
**VERIFY-001** (tester, parallel with REVIEW-001):
```
TaskCreate({
subject: "VERIFY-001",
description: "PURPOSE: Verify implementation | Success: Tests pass, no regressions
TASK:
- Detect test framework
- Run tests for changed files
- Run regression test suite
CONTEXT:
- Session: <session-folder>
- Scope: <task-scope>
- Upstream artifacts: code/dev-log.md
- Shared memory: <session>/.msg/meta.json
EXPECTED: <session>/verify/verify-001.json | Pass rate >= 95%
CONSTRAINTS: Focus on changed files | Report regressions
---
InnerLoop: false"
})
TaskUpdate({ taskId: "VERIFY-001", addBlockedBy: ["DEV-001"], owner: "tester" })
```
**REVIEW-001** (reviewer, parallel with VERIFY-001):
```
TaskCreate({
subject: "REVIEW-001",
description: "PURPOSE: Code review for correctness and quality | Success: All dimensions reviewed, verdict issued
TASK:
- Load changed files and design document
- Review across 4 dimensions: correctness, completeness, maintainability, security
- Score quality (1-10) and issue verdict
CONTEXT:
- Session: <session-folder>
- Scope: <task-scope>
- Upstream artifacts: design/design-001.md, code/dev-log.md
- Shared memory: <session>/.msg/meta.json
EXPECTED: <session>/review/review-001.md | Per-dimension findings with severity
CONSTRAINTS: Focus on implementation changes | Provide file:line references
---
InnerLoop: false"
})
TaskUpdate({ taskId: "REVIEW-001", addBlockedBy: ["DEV-001"], owner: "reviewer" })
```
---
### Multi-Sprint Pipeline
Sprint 1: DESIGN-001 -> DEV-001 -> DEV-002(incremental) -> VERIFY-001 -> DEV-fix -> REVIEW-001
Create Sprint 1 tasks using sprint templates above, plus:
**DEV-002** (developer, incremental):
```
TaskCreate({
subject: "DEV-002",
description: "PURPOSE: Incremental implementation | Success: Remaining tasks implemented
TASK:
- Load remaining tasks from breakdown
- Implement incrementally
- Validate syntax
CONTEXT:
- Session: <session-folder>
- Scope: <task-scope>
- Upstream artifacts: design/task-breakdown.json, code/dev-log.md
- Shared memory: <session>/.msg/meta.json
EXPECTED: Modified source files + updated dev-log.md
CONSTRAINTS: Incremental delivery | Follow existing patterns
---
InnerLoop: true"
})
TaskUpdate({ taskId: "DEV-002", addBlockedBy: ["DEV-001"], owner: "developer" })
```
Subsequent sprints created dynamically after Sprint N completes.
## Phase 4: Validation
Verify task chain integrity:
| Check | Method | Expected |
|-------|--------|----------|
| Task count correct | TaskList count | patch: 2, sprint: 4, multi: 5+ |
| Dependencies correct | Trace addBlockedBy graph | Acyclic, correct ordering |
| No circular dependencies | Trace full graph | Acyclic |
| Structured descriptions | Each has PURPOSE/TASK/CONTEXT/EXPECTED | All present |
If validation fails, fix the specific task and re-validate.
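The two acyclicity checks above amount to a depth-first search over the addBlockedBy graph. A minimal sketch, assuming a simplified Task shape (the interface and function names are illustrative, not the actual TaskList API):

```typescript
// Sketch: detect a cycle in the addBlockedBy dependency graph via DFS.
interface Task { id: string; blockedBy: string[] }

function hasCycle(tasks: Task[]): boolean {
  const deps = new Map(tasks.map(t => [t.id, t.blockedBy]));
  const state = new Map<string, "visiting" | "done">();
  const visit = (id: string): boolean => {
    if (state.get(id) === "done") return false;
    if (state.get(id) === "visiting") return true; // back edge -> cycle
    state.set(id, "visiting");
    for (const dep of deps.get(id) ?? []) if (visit(dep)) return true;
    state.set(id, "done");
    return false;
  };
  return tasks.some(t => visit(t.id));
}
```

The sprint chain DESIGN-001 -> DEV-001 -> {VERIFY-001, REVIEW-001} passes this check; a mutual block between two tasks fails it.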


@@ -1,182 +0,0 @@
# Command: Monitor
Event-driven pipeline coordination. Beat model: coordinator wake -> process -> spawn -> STOP.
## Constants
- SPAWN_MODE: background
- ONE_STEP_PER_INVOCATION: true
- FAST_ADVANCE_AWARE: true
- WORKER_AGENT: team-worker
- MAX_GC_ROUNDS: 3
## Handler Router
| Source | Handler |
|--------|---------|
| Message contains [architect], [developer], [tester], [reviewer] | handleCallback |
| "capability_gap" | handleAdapt |
| "check" or "status" | handleCheck |
| "resume" or "continue" | handleResume |
| All tasks completed | handleComplete |
| Default | handleSpawnNext |
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session state | <session>/session.json | Yes |
| Task list | TaskList() | Yes |
| Trigger event | From Entry Router detection | Yes |
| Meta state | <session>/.msg/meta.json | Yes |
| Task ledger | <session>/task-ledger.json | No |
1. Load session.json for current state, pipeline mode, gc_round, max_gc_rounds
2. Run TaskList() to get current task statuses
3. Identify trigger event type from Entry Router
## Phase 3: Event Handlers
### handleCallback
Triggered when a worker sends completion message.
1. Parse message to identify role and task ID:
| Message Pattern | Role Detection |
|----------------|---------------|
| `[architect]` or task ID `DESIGN-*` | architect |
| `[developer]` or task ID `DEV-*` | developer |
| `[tester]` or task ID `VERIFY-*` | tester |
| `[reviewer]` or task ID `REVIEW-*` | reviewer |
2. Mark task as completed:
```
TaskUpdate({ taskId: "<task-id>", status: "completed" })
```
3. Record completion in session state and update task-ledger.json metrics
4. **Generator-Critic check** (when reviewer completes):
- If completed task is REVIEW-* AND pipeline is sprint or multi-sprint:
- Read review report for GC signal (critical_count, score)
- Read session.json for gc_round
| GC Signal | gc_round < max | Action |
|-----------|----------------|--------|
| review.critical_count > 0 OR review.score < 7 | Yes | Increment gc_round, create DEV-fix task blocked by this REVIEW, log `gc_loop_trigger` |
| review.critical_count > 0 OR review.score < 7 | No (>= max) | Force convergence, accept with warning, log to wisdom/issues.md |
| review.critical_count == 0 AND review.score >= 7 | - | Review passed, proceed to handleComplete check |
- Log team_msg with type "gc_loop_trigger" or "task_unblocked"
5. Proceed to handleSpawnNext
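The Generator-Critic decision table in step 4 could be sketched as a single function (names and return values are illustrative assumptions, not part of the skill):

```typescript
// Sketch of the GC check: revise, force convergence, or pass.
type GcAction = "create_dev_fix" | "force_converge" | "passed";

function gcDecision(criticalCount: number, score: number,
                    gcRound: number, maxRounds = 3): GcAction {
  const needsRevision = criticalCount > 0 || score < 7;
  if (!needsRevision) return "passed";            // review passed
  return gcRound < maxRounds
    ? "create_dev_fix"                            // loop again: DEV-fix blocked by REVIEW
    : "force_converge";                           // accept with warning, log to wisdom/issues.md
}
```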
### handleSpawnNext
Find and spawn the next ready tasks.
1. Scan task list for tasks where:
- Status is "pending"
- All blockedBy tasks have status "completed"
2. For each ready task, determine role from task prefix:
| Task Prefix | Role | Inner Loop |
|-------------|------|------------|
| DESIGN-* | architect | false |
| DEV-* | developer | true |
| VERIFY-* | tester | false |
| REVIEW-* | reviewer | false |
3. Spawn team-worker:
```
Agent({
subagent_type: "team-worker",
description: "Spawn <role> worker for <task-id>",
team_name: "iterdev",
name: "<role>",
run_in_background: true,
prompt: `## Role Assignment
role: <role>
role_spec: ~ or <project>/.claude/skills/team-iterdev/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
team_name: iterdev
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`
})
```
4. **Parallel spawn rules**:
| Pipeline | Scenario | Spawn Behavior |
|----------|----------|---------------|
| Patch | DEV -> VERIFY | One worker at a time |
| Sprint | VERIFY + REVIEW both unblocked | Spawn BOTH in parallel |
| Sprint | Other stages | One worker at a time |
| Multi-Sprint | VERIFY + DEV-fix both unblocked | Spawn BOTH in parallel |
| Multi-Sprint | Other stages | One worker at a time |
5. STOP after spawning -- wait for next callback
### handleCheck
Output current pipeline status. Do NOT advance pipeline.
```
Pipeline Status (<pipeline-mode>):
[DONE] DESIGN-001 (architect) -> design/design-001.md
[DONE] DEV-001 (developer) -> code/dev-log.md
[RUN] VERIFY-001 (tester) -> verifying...
[RUN] REVIEW-001 (reviewer) -> reviewing...
[WAIT] DEV-fix (developer) -> blocked by REVIEW-001
GC Rounds: <gc_round>/<max_gc_rounds>
Sprint: <sprint_id>
Session: <session-id>
```
### handleResume
Resume pipeline after user pause or interruption.
1. Audit task list for inconsistencies:
- Tasks stuck in "in_progress" -> reset to "pending"
- Tasks with completed blockers but still "pending" -> include in spawn list
2. Proceed to handleSpawnNext
### handleComplete
Triggered when all pipeline tasks are completed.
**Completion check by mode**:
| Mode | Completion Condition |
|------|---------------------|
| patch | DEV-001 + VERIFY-001 completed |
| sprint | DESIGN-001 + DEV-001 + VERIFY-001 + REVIEW-001 (+ any GC tasks) completed |
| multi-sprint | All sprint tasks (+ any GC tasks) completed |
1. Verify all tasks completed via TaskList()
2. If any tasks not completed, return to handleSpawnNext
3. **Multi-sprint check**: If multi-sprint AND more sprints planned:
- Record sprint metrics to .msg/meta.json sprint_history
- Evaluate downgrade eligibility (velocity >= expected, review avg >= 8)
- Pause for user confirmation before Sprint N+1
4. If all completed, transition to coordinator Phase 5 (Report + Completion Action)
## Phase 4: State Persistence
After every handler execution:
1. Update session.json with current state (gc_round, last event, active tasks)
2. Update task-ledger.json metrics (completed count, in_progress count, velocity)
3. Update .msg/meta.json gc_round if changed
4. Verify task list consistency
5. STOP and wait for next event


@@ -1,153 +0,0 @@
# Coordinator Role
Orchestrate team-iterdev: analyze -> dispatch -> spawn -> monitor -> report.
## Identity
- Name: coordinator | Tag: [coordinator]
- Responsibility: Analyze task -> Create team -> Dispatch tasks -> Monitor progress -> Report results
## Boundaries
### MUST
- Use `team-worker` agent type for all worker spawns (NOT `general-purpose`)
- Follow Command Execution Protocol for dispatch and monitor commands
- Respect pipeline stage dependencies (blockedBy)
- Stop after spawning workers -- wait for callbacks
- Handle developer<->reviewer GC loop (max 3 rounds)
- Maintain task-ledger.json for real-time progress
- Execute completion action in Phase 5
### MUST NOT
- Implement domain logic (designing, coding, testing, reviewing) -- workers handle this
- Spawn workers without creating tasks first
- Write source code directly
- Force-advance pipeline past failed review/validation
- Modify task outputs (workers own their deliverables)
## Command Execution Protocol
When coordinator needs to execute a command:
1. Read `commands/<command>.md`
2. Follow the workflow defined in the command
3. Commands are inline execution guides, NOT separate agents
4. Execute synchronously, complete before proceeding
## Entry Router
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains [architect], [developer], [tester], [reviewer] | -> handleCallback (monitor.md) |
| Status check | Args contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Args contain "resume" or "continue" | -> handleResume (monitor.md) |
| Pipeline complete | All tasks completed | -> handleComplete (monitor.md) |
| Interrupted session | Active/paused session in .workflow/.team/IDS-* | -> Phase 0 |
| New session | None of above | -> Phase 1 |
For callback/check/resume/complete: load @commands/monitor.md, execute handler, STOP.
## Phase 0: Session Resume Check
1. Scan `.workflow/.team/IDS-*/.msg/meta.json` for active/paused sessions
2. No sessions -> Phase 1
3. Single session -> reconcile (audit TaskList, reset in_progress->pending, rebuild team, kick first ready task)
4. Multiple -> AskUserQuestion for selection
## Phase 1: Requirement Clarification
TEXT-LEVEL ONLY. No source code reading.
1. Parse user task description from $ARGUMENTS
2. Delegate to @commands/analyze.md
3. Assess complexity for pipeline selection:
| Signal | Weight |
|--------|--------|
| Changed files > 10 | +3 |
| Changed files 3-10 | +2 |
| Structural change (refactor, architect, restructure) | +3 |
| Cross-cutting (multiple, across, cross) | +2 |
| Simple fix (fix, bug, typo, patch) | -2 |

| Score | Pipeline |
|-------|----------|
| >= 5 | multi-sprint |
| 2-4 | sprint |
| 0-1 | patch |
4. Ask for missing parameters via AskUserQuestion (mode selection)
5. Record requirement with scope, pipeline mode
6. CRITICAL: Always proceed to Phase 2, never skip team workflow
## Phase 2: Session & Team Setup
1. Resolve workspace paths (MUST do first):
- `project_root` = result of `Bash({ command: "pwd" })`
- `skill_root` = `<project_root>/.claude/skills/team-iterdev`
2. Generate session ID: `IDS-<slug>-<YYYY-MM-DD>`
3. Create session folder structure:
```
mkdir -p .workflow/.team/<session-id>/{design,code,verify,review,wisdom}
```
4. Create team: `TeamCreate({ team_name: "iterdev" })`
5. Read specs/pipelines.md -> select pipeline based on complexity
6. Initialize wisdom directory (learnings.md, decisions.md, conventions.md, issues.md)
7. Write session.json
8. Initialize task-ledger.json
9. Initialize meta.json with pipeline metadata:
```typescript
mcp__ccw-tools__team_msg({
operation: "log", session_id: "<id>", from: "coordinator",
type: "state_update", summary: "Session initialized",
data: {
pipeline_mode: "<patch|sprint|multi-sprint>",
pipeline_stages: ["architect", "developer", "tester", "reviewer"],
roles: ["coordinator", "architect", "developer", "tester", "reviewer"],
team_name: "iterdev"
}
})
```
## Phase 3: Task Chain Creation
Delegate to @commands/dispatch.md:
1. Read specs/pipelines.md for selected pipeline task registry
2. Create tasks via TaskCreate, then TaskUpdate with addBlockedBy
3. Update task-ledger.json
## Phase 4: Spawn-and-Stop
Delegate to @commands/monitor.md#handleSpawnNext:
1. Find ready tasks (pending + all addBlockedBy dependencies resolved)
2. Spawn team-worker agents (see SKILL.md Spawn Template)
3. Output status summary
4. STOP
## Phase 5: Report + Completion Action
1. Load session state -> count completed tasks, calculate duration
2. Record sprint learning to .msg/meta.json sprint_history
3. List deliverables:
| Deliverable | Path |
|-------------|------|
| Design Document | <session>/design/design-001.md |
| Task Breakdown | <session>/design/task-breakdown.json |
| Dev Log | <session>/code/dev-log.md |
| Verification Results | <session>/verify/verify-001.json |
| Review Report | <session>/review/review-001.md |
4. Execute completion action per session.completion_action:
- interactive -> AskUserQuestion (Archive/Keep/Export)
- auto_archive -> Archive & Clean (status=completed, TeamDelete)
- auto_keep -> Keep Active (status=paused)
## Error Handling
| Error | Resolution |
|-------|------------|
| Task too vague | AskUserQuestion for clarification |
| Session corruption | Attempt recovery, fallback to manual |
| Worker crash | Reset task to pending, respawn |
| GC loop exceeds 3 rounds | Accept with warning, record in shared memory |
| Sprint velocity drops below 50% | Alert user, suggest scope reduction |
| Task ledger corrupted | Rebuild from TaskList state |


@@ -1,74 +0,0 @@
---
role: developer
prefix: DEV
inner_loop: true
message_types:
success: dev_complete
progress: dev_progress
error: error
---
# Developer
Code implementer. Implements code according to design, incremental delivery. Acts as Generator in Generator-Critic loop (paired with reviewer).
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Design document | <session>/design/design-001.md | For non-fix tasks |
| Task breakdown | <session>/design/task-breakdown.json | For non-fix tasks |
| Review feedback | <session>/review/*.md | For fix tasks |
| Wisdom files | <session>/wisdom/ | No |
1. Extract session path from task description
2. Read .msg/meta.json for shared context
3. Detect task type:
| Task Type | Detection | Loading |
|-----------|-----------|---------|
| Fix task | Subject contains "fix" | Read latest review file for feedback |
| Normal task | No "fix" in subject | Read design document + task breakdown |
4. Load previous implementation_context from .msg/meta.json
5. Read wisdom files for conventions and known issues
## Phase 3: Code Implementation
**Implementation strategy selection**:
| Task Count | Complexity | Strategy |
|------------|------------|----------|
| <= 2 tasks | Low | Direct: inline Edit/Write |
| 3-5 tasks | Medium | Single agent: one code-developer for all |
| > 5 tasks | High | Batch agent: group by module, one agent per batch |
**Fix Task Mode** (GC Loop):
- Focus on review feedback items only
- Fix critical issues first, then high, then medium
- Do NOT change code that was not flagged
- Maintain existing code style and patterns
**Normal Task Mode**:
- Read target files, apply changes using Edit or Write
- Follow execution order from task breakdown
- Validate syntax after each major change
## Phase 4: Self-Validation
| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Syntax | tsc --noEmit or equivalent | No errors |
| File existence | Verify all planned files exist | All files present |
| Import resolution | Check no broken imports | All imports resolve |
1. Run syntax check: `tsc --noEmit` / `python -m py_compile` / equivalent
2. Auto-fix if validation fails (max 2 attempts)
3. Write dev log to `<session>/code/dev-log.md`:
- Changed files count, syntax status, fix task flag, file list
4. Update implementation_context in .msg/meta.json:
- task, changed_files, is_fix, syntax_clean
5. Write discoveries to wisdom/learnings.md
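The auto-fix rule in step 2 is a bounded retry loop. A sketch — `runCheck` and `tryFix` stand in for the project-specific syntax check and fix commands, which this skill does not prescribe:

```typescript
// Sketch: run syntax check, attempt auto-fix at most maxAttempts times.
function selfValidate(
  runCheck: () => boolean,   // e.g. wraps `tsc --noEmit` or `python -m py_compile`
  tryFix: () => void,        // attempt an automatic fix
  maxAttempts = 2,
): boolean {
  if (runCheck()) return true;
  for (let i = 0; i < maxAttempts; i++) {
    tryFix();
    if (runCheck()) return true;
  }
  return false; // still failing after max attempts -> record in dev-log.md
}
```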


@@ -1,66 +0,0 @@
---
role: reviewer
prefix: REVIEW
inner_loop: false
message_types:
success: review_passed
revision: review_revision
critical: review_critical
error: error
---
# Reviewer
Code reviewer. Multi-dimensional review, quality scoring, improvement suggestions. Acts as Critic in Generator-Critic loop (paired with developer).
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Design document | <session>/design/design-001.md | For requirements alignment |
| Changed files | Git diff | Yes |
1. Extract session path from task description
2. Read .msg/meta.json for shared context and previous review_feedback_trends
3. Read design document for requirements alignment
4. Get changed files via git diff, read file contents (limit 20 files)
## Phase 3: Multi-Dimensional Review
**Review dimensions**:
| Dimension | Weight | Focus Areas |
|-----------|--------|-------------|
| Correctness | 30% | Logic correctness, boundary handling |
| Completeness | 25% | Coverage of design requirements |
| Maintainability | 25% | Readability, code style, DRY |
| Security | 20% | Vulnerabilities, input validation |
Per-dimension: scan modified files, record findings with severity (CRITICAL/HIGH/MEDIUM/LOW), include file:line references and suggestions.
**Scoring**: Weighted average of dimension scores (1-10 each).
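The weighted average above can be sketched as follows. This is an illustrative sketch, not part of the role spec; the dimension names and the `quality_score` helper are assumptions for demonstration, with weights taken from the table above.

```python
# Illustrative only: weighted quality score from per-dimension scores (1-10).
# Weights mirror the review-dimension table above and sum to 1.0.
WEIGHTS = {
    "correctness": 0.30,
    "completeness": 0.25,
    "maintainability": 0.25,
    "security": 0.20,
}

def quality_score(dimension_scores: dict) -> float:
    """Weighted average of dimension scores, rounded to 2 decimals."""
    return round(sum(WEIGHTS[d] * s for d, s in dimension_scores.items()), 2)
```

For example, scores of 8/7/9/6 across the four dimensions yield an overall score of 7.6.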
**Output review report** (`<session>/review/review-<num>.md`):
- Files reviewed count, quality score, issue counts by severity
- Per-finding: severity, file:line, dimension, description, suggestion
- Scoring breakdown by dimension
- Signal: CRITICAL / REVISION_NEEDED / APPROVED
- Design alignment notes
## Phase 4: Trend Analysis + Verdict
1. Compare with previous review_feedback_trends from .msg/meta.json
2. Identify recurring issues, improvement areas, new issues
| Verdict Condition | Message Type |
|-------------------|--------------|
| criticalCount > 0 | review_critical |
| score < 7 | review_revision |
| else | review_passed |
3. Update review_feedback_trends in .msg/meta.json:
- review_id, score, critical count, high count, dimensions, gc_round
4. Write discoveries to wisdom/learnings.md
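The verdict table above is a simple threshold cascade, evaluated top to bottom. A minimal sketch, assuming `critical_count` and `score` are read from the review report:

```python
# Illustrative only: map review outcome to a message type per the verdict table.
# Order matters: criticals take precedence over the score threshold.
def verdict(critical_count: int, score: float) -> str:
    if critical_count > 0:
        return "review_critical"
    if score < 7:
        return "review_revision"
    return "review_passed"
```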

View File

@@ -1,88 +0,0 @@
---
role: tester
prefix: VERIFY
inner_loop: false
message_types:
success: verify_passed
failure: verify_failed
fix: fix_required
error: error
---
# Tester
Test validator. Test execution, fix cycles, and regression detection.
## Phase 2: Environment Detection
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Changed files | Git diff | Yes |
1. Extract session path from task description
2. Read .msg/meta.json for shared context
3. Get changed files via git diff
4. Detect test framework and command:
| Detection | Method |
|-----------|--------|
| Test command | Check package.json scripts, pytest.ini, Makefile |
| Coverage tool | Check for nyc, coverage.py, jest --coverage config |
Common commands: npm test, pytest, go test ./..., cargo test
## Phase 3: Execution + Fix Cycle
**Iterative test-fix cycle** (max 5 iterations):
| Step | Action |
|------|--------|
| 1 | Run test command |
| 2 | Parse results, check pass rate |
| 3 | Pass rate >= 95% -> exit loop (success) |
| 4 | Extract failing test details |
| 5 | Apply fix using CLI tool |
| 6 | Increment iteration counter |
| 7 | iteration >= MAX (5) -> exit loop (report failures) |
| 8 | Go to Step 1 |
**Fix delegation**: Use CLI tool to fix failing tests:
```bash
ccw cli -p "PURPOSE: Fix failing tests; success = all listed tests pass
TASK: • Analyze test failure output • Identify root cause in changed files • Apply minimal fix
MODE: write
CONTEXT: @<changed-files> | Memory: Test output from current iteration
EXPECTED: Code fixes that make failing tests pass without breaking other tests
CONSTRAINTS: Only modify files in changed list | Minimal changes
Test output: <test-failure-details>
Changed files: <file-list>" --tool gemini --mode write --rule development-debug-runtime-issues
```
Wait for CLI completion before re-running tests.
## Phase 4: Regression Check + Report
1. Run full test suite for regression: `<test-command> --all`
| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Regression | Run full test suite | No FAIL in output |
| Coverage | Run coverage tool | >= 80% (if configured) |
2. Write verification results to `<session>/verify/verify-<num>.json`:
- verify_id, pass_rate, iterations, passed, timestamp, regression_passed
3. Determine message type:
| Condition | Message Type |
|-----------|--------------|
| passRate >= 0.95 | verify_passed |
| passRate < 0.95 && iterations >= MAX | fix_required |
| passRate < 0.95 | verify_failed |
4. Update .msg/meta.json with test_patterns entry
5. Write discoveries to wisdom/issues.md

View File

@@ -1,94 +0,0 @@
# IterDev Pipeline Definitions
## Three-Pipeline Architecture
### Patch Pipeline (2 beats, serial)
```
DEV-001 -> VERIFY-001
[developer] [tester]
```
### Sprint Pipeline (4 beats, with parallel window)
```
DESIGN-001 -> DEV-001 -> [VERIFY-001 + REVIEW-001] (parallel)
[architect] [developer] [tester] [reviewer]
```
### Multi-Sprint Pipeline (N beats, iterative)
```
Sprint 1: DESIGN-001 -> DEV-001 -> DEV-002(incremental) -> VERIFY-001 -> DEV-fix -> REVIEW-001
Sprint 2: DESIGN-002(refined) -> DEV-003 -> VERIFY-002 -> REVIEW-002
...
```
## Generator-Critic Loop (developer <-> reviewer)
```
DEV -> REVIEW -> (if review.critical_count > 0 || review.score < 7)
-> DEV-fix -> REVIEW-2 -> (if still issues) -> DEV-fix-2 -> REVIEW-3
-> (max 3 rounds, then accept with warning)
```
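The loop above can be sketched as round-based control. This is an illustrative sketch, not the coordinator implementation: `develop` and `review` are hypothetical callables standing in for DEV-fix and REVIEW-N tasks, and the convergence trigger mirrors `review.critical_count === 0 && review.score >= 7` from the team config.

```python
# Illustrative only: Generator-Critic round control with the max-3-rounds cap.
MAX_GC_ROUNDS = 3

def gc_converged(critical_count: int, score: float) -> bool:
    """Convergence trigger: no critical findings and score >= 7."""
    return critical_count == 0 and score >= 7

def run_gc_loop(develop, review) -> dict:
    """Alternate developer/reviewer rounds until convergence or the round cap."""
    for round_no in range(1, MAX_GC_ROUNDS + 1):
        develop(round_no)
        result = review(round_no)
        if gc_converged(result["critical_count"], result["score"]):
            return {"converged": True, "rounds": round_no}
    return {"converged": False, "rounds": MAX_GC_ROUNDS}  # accept with warning
```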
## Pipeline Selection Logic
| Signal | Score |
|--------|-------|
| Changed files > 10 | +3 |
| Changed files 3-10 | +2 |
| Structural change | +3 |
| Cross-cutting concern | +2 |
| Simple fix | -2 |
| Score | Pipeline |
|-------|----------|
| >= 5 | multi-sprint |
| 2-4 | sprint |
| 0-1 | patch |
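The two tables above combine into a scoring step followed by a threshold lookup. A minimal sketch, assuming the boolean signals have already been detected from the task text (the treatment of negative scores as `patch` is an assumption, since the table only lists 0-1):

```python
# Illustrative only: pipeline selection from the signal and threshold tables above.
def pipeline_score(changed_files: int, structural: bool,
                   cross_cutting: bool, simple_fix: bool) -> int:
    score = 0
    if changed_files > 10:
        score += 3
    elif changed_files >= 3:
        score += 2
    if structural:
        score += 3
    if cross_cutting:
        score += 2
    if simple_fix:
        score -= 2
    return score

def select_pipeline(score: int) -> str:
    if score >= 5:
        return "multi-sprint"
    if score >= 2:
        return "sprint"
    return "patch"  # covers 0-1 and, by assumption, negative scores
```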
## Task Metadata Registry
| Task ID | Role | Pipeline | Dependencies | Description |
|---------|------|----------|-------------|-------------|
| DESIGN-001 | architect | sprint/multi | (none) | Technical design and task breakdown |
| DEV-001 | developer | all | DESIGN-001 (sprint/multi) or (none for patch) | Code implementation |
| DEV-002 | developer | multi | DEV-001 | Incremental implementation |
| DEV-fix | developer | sprint/multi | REVIEW-* (GC loop trigger) | Fix issues from review |
| VERIFY-001 | tester | all | DEV-001 (or last DEV) | Test execution and fix cycles |
| REVIEW-001 | reviewer | sprint/multi | DEV-001 (or last DEV) | Code review and quality scoring |
## Checkpoints
| Trigger Condition | Location | Behavior |
|-------------------|----------|----------|
| GC loop exceeds max rounds | After REVIEW-3 | Stop iteration, accept with warning, record in wisdom |
| Sprint transition | End of Sprint N | Pause, retrospective, user confirms `resume` for Sprint N+1 |
| Pipeline stall | No ready + no running tasks | Check missing tasks, report blockedBy chain to user |
## Multi-Sprint Dynamic Downgrade
If Sprint N metrics are strong (velocity >= expected, review avg >= 8), coordinator may downgrade Sprint N+1 from multi-sprint to sprint pipeline for efficiency.
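As a minimal sketch of the downgrade check (the function name and numeric inputs are assumptions for illustration):

```python
# Illustrative only: the multi-sprint downgrade rule stated above.
def should_downgrade(velocity: float, expected_velocity: float,
                     review_avg: float) -> bool:
    """True when Sprint N metrics are strong enough to simplify Sprint N+1."""
    return velocity >= expected_velocity and review_avg >= 8
```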
## Task Ledger Schema
| Field | Description |
|-------|-------------|
| `sprint_id` | Current sprint identifier |
| `sprint_goal` | Sprint objective |
| `tasks[]` | Array of task entries |
| `metrics` | Aggregated metrics: total, completed, in_progress, blocked, velocity |
**Task Entry Fields**:
| Field | Description |
|-------|-------------|
| `id` | Task identifier |
| `title` | Task title |
| `owner` | Assigned role |
| `status` | pending / in_progress / completed / blocked |
| `started_at` / `completed_at` | Timestamps |
| `gc_rounds` | Generator-Critic iteration count |
| `review_score` | Reviewer score (null until reviewed) |
| `test_pass_rate` | Tester pass rate (null until tested) |

View File

@@ -1,172 +0,0 @@
{
"team_name": "team-iterdev",
"team_display_name": "Team IterDev",
"description": "Iterative development team with Generator-Critic loop, task ledger, sprint learning, dynamic pipeline, conflict handling, concurrency control, rollback strategy, user feedback loop, and tech debt tracking",
"version": "1.2.0",
"roles": {
"coordinator": {
"task_prefix": null,
"responsibility": "Sprint planning, backlog management, task ledger maintenance, GC loop control; Phase 1 features: conflict handling, concurrency control, rollback strategy; Phase 3 features: user feedback loop, tech debt tracking",
"message_types": [
"sprint_started", "gc_loop_trigger", "sprint_complete", "task_unblocked", "error", "shutdown",
"conflict_detected", "conflict_resolved", "resource_locked", "resource_unlocked", "resource_contention",
"rollback_initiated", "rollback_completed", "rollback_failed",
"user_feedback_received", "tech_debt_identified"
]
},
"architect": {
"task_prefix": "DESIGN",
"responsibility": "Technical design, task decomposition, architecture decisions",
"message_types": ["design_ready", "design_revision", "error"]
},
"developer": {
"task_prefix": "DEV",
"responsibility": "Code implementation, incremental delivery",
"message_types": ["dev_complete", "dev_progress", "error"]
},
"tester": {
"task_prefix": "VERIFY",
"responsibility": "Test execution, fix cycle, regression detection",
"message_types": ["verify_passed", "verify_failed", "fix_required", "error"]
},
"reviewer": {
"task_prefix": "REVIEW",
"responsibility": "Code review, quality scoring, improvement suggestions",
"message_types": ["review_passed", "review_revision", "review_critical", "error"]
}
},
"pipelines": {
"patch": {
"description": "Simple fix: implement → verify",
"task_chain": ["DEV-001", "VERIFY-001"],
"gc_loops": 0
},
"sprint": {
"description": "Standard feature: design → implement → verify + review (parallel)",
"task_chain": ["DESIGN-001", "DEV-001", "VERIFY-001", "REVIEW-001"],
"gc_loops": 3,
"parallel_groups": [["VERIFY-001", "REVIEW-001"]]
},
"multi-sprint": {
"description": "Large feature: multiple sprint cycles with incremental delivery",
"task_chain": "dynamic — coordinator creates per-sprint chains",
"gc_loops": 3,
"sprint_count": "dynamic"
}
},
"innovation_patterns": {
"generator_critic": {
"generator": "developer",
"critic": "reviewer",
"max_rounds": 3,
"convergence_trigger": "review.critical_count === 0 && review.score >= 7"
},
"task_ledger": {
"file": "task-ledger.json",
"updated_by": "coordinator",
"tracks": ["status", "gc_rounds", "review_score", "test_pass_rate", "velocity"],
"phase1_extensions": {
"conflict_info": {
"fields": ["status", "conflicting_files", "resolution_strategy", "resolved_by_task_id"],
"default": { "status": "none", "conflicting_files": [], "resolution_strategy": null, "resolved_by_task_id": null }
},
"rollback_info": {
"fields": ["snapshot_id", "rollback_procedure", "last_successful_state_id"],
"default": { "snapshot_id": null, "rollback_procedure": null, "last_successful_state_id": null }
}
}
},
"shared_memory": {
"file": "shared-memory.json",
"fields": {
"architect": "architecture_decisions",
"developer": "implementation_context",
"tester": "test_patterns",
"reviewer": "review_feedback_trends"
},
"persistent_fields": ["sprint_history", "what_worked", "what_failed", "patterns_learned"],
"phase1_extensions": {
"resource_locks": {
"description": "Concurrency control: resource locking state",
"lock_timeout_ms": 300000,
"deadlock_detection": true
}
}
},
"dynamic_pipeline": {
"selector": "coordinator",
"criteria": "file_count + module_count + complexity_assessment",
"downgrade_rule": "velocity >= expected && review_avg >= 8 → simplify next sprint"
}
},
"phase1_features": {
"conflict_handling": {
"enabled": true,
"detection_strategy": "file_overlap",
"resolution_strategies": ["manual", "auto_merge", "abort"]
},
"concurrency_control": {
"enabled": true,
"lock_timeout_minutes": 5,
"deadlock_detection": true,
"resources": ["task-ledger.json", "shared-memory.json"]
},
"rollback_strategy": {
"enabled": true,
"snapshot_trigger": "task_complete",
"default_procedure": "git revert HEAD"
}
},
"phase2_features": {
"external_dependency_management": {
"enabled": true,
"validation_trigger": "task_start",
"supported_sources": ["npm", "maven", "pip", "git"],
"version_check_command": {
"npm": "npm list {name}",
"pip": "pip show {name}",
"maven": "mvn dependency:tree"
}
},
"state_recovery": {
"enabled": true,
"checkpoint_trigger": "phase_complete",
"max_checkpoints_per_task": 5,
"checkpoint_dir": "checkpoints/"
}
},
"phase3_features": {
"user_feedback_loop": {
"enabled": true,
"collection_trigger": "sprint_complete",
"max_feedback_items": 50,
"severity_levels": ["low", "medium", "high", "critical"],
"status_flow": ["new", "reviewed", "addressed", "closed"]
},
"tech_debt_tracking": {
"enabled": true,
"detection_sources": ["review", "test", "architect"],
"categories": ["code", "design", "test", "documentation"],
"severity_levels": ["low", "medium", "high", "critical"],
"status_flow": ["open", "in_progress", "resolved", "deferred"],
"report_trigger": "sprint_retrospective"
}
},
"collaboration_patterns": ["CP-1", "CP-3", "CP-5", "CP-6"],
"session_dirs": {
"base": ".workflow/.team/IDS-{slug}-{YYYY-MM-DD}/",
"design": "design/",
"code": "code/",
"verify": "verify/",
"review": "review/",
"messages": ".workflow/.team-msg/{team-name}/"
}
}

View File

@@ -1,6 +1,6 @@
---
name: team-lifecycle-v4
description: Full lifecycle team skill with clean architecture. SKILL.md is a universal router — all roles read it. Beat model is coordinator-only. Structure is roles/ + specs/ + templates/. Triggers on "team lifecycle v4".
description: Full lifecycle team skill — plan, develop, test, review in one coordinated session. Role-based architecture with coordinator-driven beat model. Triggers on "team lifecycle v4".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Agent(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---

View File

@@ -1,6 +1,6 @@
---
name: team-ultra-analyze
description: Deep collaborative analysis team skill. All roles route via this SKILL.md. Beat model is coordinator-only (monitor.md). Structure is roles/ + specs/. Triggers on "team ultra-analyze", "team analyze".
description: Deep collaborative analysis team skill. Multi-role investigation with coordinator-driven synthesis. Triggers on "team ultra-analyze", "team analyze".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Agent(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---

View File

@@ -1,6 +1,6 @@
---
name: workflow-multi-cli-plan
description: Multi-CLI collaborative planning with ACE context gathering, iterative cross-verification, and execution handoff
description: Multi-CLI collaborative planning with codebase context gathering, iterative cross-verification, and execution handoff.
allowed-tools: Skill, Agent, AskUserQuestion, TodoWrite, Read, Write, Edit, Bash, Glob, Grep
---

View File

@@ -1,6 +1,6 @@
---
name: workflow-test-fix
description: Unified test-fix pipeline combining test generation (session, context, analysis, task gen) with iterative test-cycle execution (adaptive strategy, progressive testing, CLI fallback). Triggers on "workflow-test-fix", "workflow-test-fix", "test fix workflow".
description: Unified test-fix pipeline combining test generation (session, context, analysis, task gen) with iterative test-cycle execution (adaptive strategy, progressive testing, CLI fallback). Triggers on "workflow-test-fix", "test fix workflow".
allowed-tools: Skill, Agent, AskUserQuestion, TaskCreate, TaskUpdate, TaskList, Read, Write, Edit, Bash, Glob, Grep
---

View File

@@ -1,6 +1,6 @@
---
name: analyze-with-file
description: Interactive collaborative analysis with documented discussions, inline exploration, and evolving understanding. Serial execution with no agent delegation.
description: Interactive collaborative analysis with documented discussions, inline exploration, and evolving understanding.
argument-hint: "TOPIC=\"<question or topic>\" [--depth=quick|standard|deep] [--continue]"
---

View File

@@ -1,6 +1,6 @@
---
name: roadmap-with-file
description: Strategic requirement roadmap with iterative decomposition and issue creation. Outputs roadmap.md (human-readable, single source) + issues.jsonl (machine-executable). Handoff to csv-wave-pipeline.
description: Strategic requirement roadmap with iterative decomposition and issue creation. Outputs roadmap.md (human-readable, single source) + issues.jsonl (machine-executable).
argument-hint: "[-y|--yes] [-c|--continue] [-m progressive|direct|auto] \"requirement description\""
---

View File

@@ -1,6 +1,6 @@
---
name: spec-setup
description: Initialize project-level state and configure specs via interactive questionnaire using cli-explore-agent
description: Initialize project-level state and configure specs via interactive questionnaire.
argument-hint: "[--regenerate] [--skip-specs] [--reset]"
allowed-tools: spawn_agent, wait, send_input, close_agent, request_user_input, Read, Write, Edit, Bash, Glob, Grep
---

View File

@@ -1,219 +0,0 @@
---
name: team-iterdev
description: Unified team skill for iterative development team. Pure router — all roles read this file. Beat model is coordinator-only in monitor.md. Generator-Critic loops (developer<->reviewer, max 3 rounds). Triggers on "team iterdev".
allowed-tools: spawn_agent(*), wait_agent(*), send_message(*), assign_task(*), close_agent(*), list_agents(*), report_agent_job_result(*), request_user_input(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---
# Team IterDev
Iterative development team skill. Generator-Critic loops (developer<->reviewer, max 3 rounds), task ledger (task-ledger.json) for real-time progress, shared memory (cross-sprint learning), and dynamic pipeline selection for incremental delivery.
## Architecture
```
Skill(skill="team-iterdev", args="task description")
|
SKILL.md (this file) = Router
|
+--------------+--------------+
| |
no --role flag --role <name>
| |
Coordinator Worker
roles/coordinator/role.md roles/<name>/role.md
|
+-- analyze -> dispatch -> spawn workers -> STOP
|
+-------+-------+-------+
v v v v
[architect] [developer] [tester] [reviewer]
(team-worker agents, each loads roles/<role>/role.md)
```
## Role Registry
| Role | Path | Prefix | Inner Loop |
|------|------|--------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | — | — |
| architect | [roles/architect/role.md](roles/architect/role.md) | DESIGN-* | false |
| developer | [roles/developer/role.md](roles/developer/role.md) | DEV-* | true |
| tester | [roles/tester/role.md](roles/tester/role.md) | VERIFY-* | false |
| reviewer | [roles/reviewer/role.md](roles/reviewer/role.md) | REVIEW-* | false |
## Role Router
Parse `$ARGUMENTS`:
- Has `--role <name>` → Read `roles/<name>/role.md`, execute Phase 2-4
- No `--role` → Read `roles/coordinator/role.md`, execute entry router
## Delegation Lock
**Coordinator is a PURE ORCHESTRATOR. It coordinates, it does NOT do.**
Before calling ANY tool, apply this check:
| Tool Call | Verdict | Reason |
|-----------|---------|--------|
| `spawn_agent`, `wait_agent`, `close_agent`, `send_message`, `assign_task` | ALLOWED | Orchestration |
| `list_agents` | ALLOWED | Agent health check |
| `request_user_input` | ALLOWED | User interaction |
| `mcp__ccw-tools__team_msg` | ALLOWED | Message bus |
| `Read/Write` on `.workflow/.team/` files | ALLOWED | Session state |
| `Read` on `roles/`, `commands/`, `specs/` | ALLOWED | Loading own instructions |
| `Read/Grep/Glob` on project source code | BLOCKED | Delegate to worker |
| `Edit` on any file outside `.workflow/` | BLOCKED | Delegate to worker |
| `Bash("ccw cli ...")` | BLOCKED | Only workers call CLI |
| `Bash` running build/test/lint commands | BLOCKED | Delegate to worker |
**If a tool call is BLOCKED**: STOP. Create a task, spawn a worker.
**No exceptions for "simple" tasks.** Even a single-file read-and-report MUST go through spawn_agent.
---
## Shared Constants
- **Session prefix**: `IDS`
- **Session path**: `.workflow/.team/IDS-<slug>-<date>/`
- **CLI tools**: `ccw cli --mode analysis` (read-only), `ccw cli --mode write` (modifications)
- **Message bus**: `mcp__ccw-tools__team_msg(session_id=<session-id>, ...)`
## Worker Spawn Template
Coordinator spawns workers using this template:
```
spawn_agent({
agent_type: "team_worker",
task_name: "<task-id>",
fork_context: false,
items: [
{ type: "text", text: `## Role Assignment
role: <role>
role_spec: <skill_root>/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file (<skill_root>/roles/<role>/role.md) to load Phase 2-4 domain instructions.` },
{ type: "text", text: `## Task Context
task_id: <task-id>
title: <task-title>
description: <task-description>
pipeline_phase: <pipeline-phase>` },
{ type: "text", text: `## Upstream Context
<prev_context>` }
]
})
```
After spawning, use `wait_agent({ targets: [...], timeout_ms: 900000 })` to collect results, then `close_agent({ target })` each worker.
### Model Selection Guide
Iterative development uses Generator-Critic loops. Developer and reviewer need high reasoning for code generation and review quality.
| Role | reasoning_effort | Rationale |
|------|-------------------|-----------|
| architect | high | Design decisions require careful tradeoff analysis |
| developer | high | Code generation needs precision, inner loop role |
| tester | medium | Test execution follows defined verification plan |
| reviewer | high | Critical review quality drives GC loop effectiveness |
## User Commands
| Command | Action |
|---------|--------|
| `check` / `status` | View execution status graph |
| `resume` / `continue` | Advance to next step |
## Session Directory
```
.workflow/.team/IDS-<slug>-<YYYY-MM-DD>/
├── .msg/
│ ├── messages.jsonl # Team message bus
│ └── meta.json # Session state
├── task-analysis.json # Coordinator analyze output
├── task-ledger.json # Real-time task progress ledger
├── wisdom/ # Cross-task knowledge accumulation
│ ├── learnings.md
│ ├── decisions.md
│ ├── conventions.md
│ └── issues.md
├── design/ # Architect output
│ ├── design-001.md
│ └── task-breakdown.json
├── code/ # Developer tracking
│ └── dev-log.md
├── verify/ # Tester output
│ └── verify-001.json
└── review/ # Reviewer output
└── review-001.md
```
## Specs Reference
- [specs/pipelines.md](specs/pipelines.md) — Pipeline definitions and task registry
## v4 Agent Coordination
### Message Semantics
| Intent | API | Example |
|--------|-----|---------|
| Queue supplementary info (don't interrupt) | `send_message` | Send design context to running developer |
| Assign GC iteration round | `assign_task` | Assign revision to developer after reviewer feedback |
| Check running agents | `list_agents` | Verify agent health during resume |
### Agent Health Check
Use `list_agents({})` in handleResume and handleComplete:
```
// Reconcile session state with actual running agents
const running = list_agents({})
// Compare with task-ledger.json active tasks
// Reset orphaned tasks (in_progress but agent gone) to pending
```
### Named Agent Targeting
Workers are spawned with `task_name: "<task-id>"` enabling direct addressing:
- `send_message({ target: "DEV-001", items: [...] })` -- send design context to developer
- `assign_task({ target: "DEV-001", items: [...] })` -- assign revision after reviewer feedback
- `close_agent({ target: "REVIEW-001" })` -- cleanup after GC loop completes
### Generator-Critic Loop via assign_task (Deep Interaction Pattern)
The developer-reviewer GC loop uses `assign_task` for iteration rounds:
```
// Round N: Reviewer found issues -> coordinator assigns revision to developer
assign_task({
target: "DEV-001",
items: [
{ type: "text", text: `## GC Revision Round ${N}
review_feedback: <reviewer findings from REVIEW-001>
iteration: ${N}/3
instruction: Address reviewer feedback. Focus on: <specific issues>.` }
]
})
```
After developer completes revision, coordinator spawns/assigns reviewer for next round. Max 3 rounds per GC cycle.
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Unknown command | Error with available command list |
| Role not found | Error with role registry |
| GC loop exceeds 3 rounds | Accept with warning, record in shared memory |
| Sprint velocity drops below 50% | Coordinator alerts user, suggests scope reduction |
| Task ledger corrupted | Rebuild from TaskList state |
| Conflict detected | Update conflict_info, notify coordinator, create DEV-fix task |
| Pipeline deadlock | Check blockedBy chain, report blocking point |

View File

@@ -1,65 +0,0 @@
---
role: architect
prefix: DESIGN
inner_loop: false
message_types:
success: design_ready
revision: design_revision
error: error
---
# Architect
Technical design, task decomposition, and architecture decision records for iterative development.
## Phase 2: Context Loading + Codebase Exploration
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/.msg/meta.json | No |
| Wisdom files | <session>/wisdom/ | No |
1. Extract session path and requirement from task description
2. Read .msg/meta.json for shared context (architecture_decisions, implementation_context)
3. Read wisdom files if available (learnings.md, decisions.md, conventions.md)
4. Explore codebase for existing patterns, module structure, dependencies:
- Use mcp__ace-tool__search_context for semantic discovery
- Identify similar implementations and integration points
## Phase 3: Technical Design + Task Decomposition
**Design strategy selection**:
| Condition | Strategy |
|-----------|----------|
| Single module change | Direct inline design |
| Cross-module change | Multi-component design with integration points |
| Large refactoring | Phased approach with milestones |
**Outputs**:
1. **Design Document** (`<session>/design/design-<num>.md`):
- Architecture decision: approach, rationale, alternatives
- Component design: responsibility, dependencies, files, complexity
- Task breakdown: files, estimated complexity, dependencies, acceptance criteria
- Integration points and risks with mitigations
2. **Task Breakdown JSON** (`<session>/design/task-breakdown.json`):
- Array of tasks with id, title, files, complexity, dependencies, acceptance_criteria
- Execution order for developer to follow
## Phase 4: Design Validation
| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Components defined | Verify component list | At least 1 component |
| Task breakdown exists | Verify task list | At least 1 task |
| Dependencies mapped | All components have dependencies field | All present (can be empty) |
| Integration points | Verify integration section | Key integrations documented |
1. Run validation checks above
2. Write architecture_decisions entry to .msg/meta.json:
- design_id, approach, rationale, components, task_count
3. Write discoveries to wisdom/decisions.md and wisdom/conventions.md

View File

@@ -1,62 +0,0 @@
# Analyze Task
Parse iterative development task -> detect capabilities -> assess pipeline complexity -> design roles.
**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.
## Signal Detection
| Keywords | Capability | Role |
|----------|------------|------|
| design, architect, restructure, refactor plan | architect | architect |
| implement, build, code, fix, develop | developer | developer |
| test, verify, validate, coverage | tester | tester |
| review, audit, quality, check | reviewer | reviewer |
## Pipeline Selection
| Signal | Score |
|--------|-------|
| Changed files > 10 | +3 |
| Changed files 3-10 | +2 |
| Structural change (refactor, architect, restructure) | +3 |
| Cross-cutting (multiple, across, cross) | +2 |
| Simple fix (fix, bug, typo, patch) | -2 |
| Score | Pipeline |
|-------|----------|
| >= 5 | multi-sprint |
| 2-4 | sprint |
| 0-1 | patch |
## Dependency Graph
Natural ordering tiers:
- Tier 0: architect (design must come first)
- Tier 1: developer (implementation requires design)
- Tier 2: tester, reviewer (validation requires artifacts; can run in parallel)
## Complexity Assessment
| Factor | Points |
|--------|--------|
| Cross-module changes | +2 |
| Serial depth > 3 | +1 |
| Multiple developers needed | +2 |
| GC loop likely needed | +1 |
## Output
Write <session>/task-analysis.json:
```json
{
"task_description": "<original>",
"pipeline_type": "<patch|sprint|multi-sprint>",
"capabilities": [{ "name": "<cap>", "role": "<role>", "keywords": ["..."] }],
"roles": [{ "name": "<role>", "prefix": "<PREFIX>", "inner_loop": false }],
"complexity": { "score": 0, "level": "Low|Medium|High" },
"needs_architecture": true,
"needs_testing": true,
"needs_review": true
}
```

View File

@@ -1,187 +0,0 @@
# Command: Dispatch
Create the iterative development task chain with correct dependencies and structured task descriptions.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| User requirement | From coordinator Phase 1 | Yes |
| Session folder | From coordinator Phase 2 | Yes |
| Pipeline definition | From SKILL.md Pipeline Definitions | Yes |
| Pipeline mode | From tasks.json `pipeline_mode` | Yes |
1. Load user requirement and scope from tasks.json
2. Load pipeline stage definitions from SKILL.md Task Metadata Registry
3. Read `pipeline_mode` from tasks.json (patch / sprint / multi-sprint)
## Phase 3: Task Chain Creation
### Task Entry Template
Each task in tasks.json `tasks` object:
```json
{
"<TASK-ID>": {
"title": "<concise title>",
"description": "PURPOSE: <what this task achieves> | Success: <measurable completion criteria>\nTASK:\n - <step 1: specific action>\n - <step 2: specific action>\n - <step 3: specific action>\nCONTEXT:\n - Session: <session-folder>\n - Scope: <task-scope>\n - Upstream artifacts: <artifact-1>, <artifact-2>\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: <deliverable path> + <quality criteria>\nCONSTRAINTS: <scope limits, focus areas>\n---\nInnerLoop: <true|false>",
"role": "<role-name>",
"prefix": "<PREFIX>",
"deps": ["<dependency-list>"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
### Mode Router
| Mode | Action |
|------|--------|
| `patch` | Create DEV-001 + VERIFY-001 |
| `sprint` | Create DESIGN-001 + DEV-001 + VERIFY-001 + REVIEW-001 |
| `multi-sprint` | Create Sprint 1 chain, subsequent sprints created dynamically |
---
### Patch Pipeline
**DEV-001** (developer):
```json
{
"DEV-001": {
"title": "Implement fix",
"description": "PURPOSE: Implement fix | Success: Fix applied, syntax clean\nTASK:\n - Load target files and understand context\n - Apply fix changes\n - Validate syntax\nCONTEXT:\n - Session: <session-folder>\n - Scope: <task-scope>\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: Modified source files + <session>/code/dev-log.md | Syntax clean\nCONSTRAINTS: Minimal changes | Preserve existing behavior\n---\nInnerLoop: true",
"role": "developer",
"prefix": "DEV",
"deps": [],
"status": "pending",
"findings": "",
"error": ""
}
}
```
**VERIFY-001** (tester):
```json
{
"VERIFY-001": {
"title": "Verify fix correctness",
"description": "PURPOSE: Verify fix correctness | Success: Tests pass, no regressions\nTASK:\n - Detect test framework\n - Run targeted tests for changed files\n - Run regression test suite\nCONTEXT:\n - Session: <session-folder>\n - Scope: <task-scope>\n - Upstream artifacts: code/dev-log.md\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: <session>/verify/verify-001.json | Pass rate >= 95%\nCONSTRAINTS: Focus on changed files | Report any regressions\n---\nInnerLoop: false",
"role": "tester",
"prefix": "VERIFY",
"deps": ["DEV-001"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
---
### Sprint Pipeline
**DESIGN-001** (architect):
```json
{
"DESIGN-001": {
"title": "Technical design and task breakdown",
"description": "PURPOSE: Technical design and task breakdown | Success: Design document + task breakdown ready\nTASK:\n - Explore codebase for patterns and dependencies\n - Create component design with integration points\n - Break down into implementable tasks with acceptance criteria\nCONTEXT:\n - Session: <session-folder>\n - Scope: <task-scope>\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: <session>/design/design-001.md + <session>/design/task-breakdown.json | Components defined, tasks actionable\nCONSTRAINTS: Focus on <task-scope> | Risk assessment required\n---\nInnerLoop: false",
"role": "architect",
"prefix": "DESIGN",
"deps": [],
"status": "pending",
"findings": "",
"error": ""
}
}
```
**DEV-001** (developer):
```json
{
"DEV-001": {
"title": "Implement design",
"description": "PURPOSE: Implement design | Success: All design tasks implemented, syntax clean\nTASK:\n - Load design and task breakdown\n - Implement tasks in execution order\n - Validate syntax after changes\nCONTEXT:\n - Session: <session-folder>\n - Scope: <task-scope>\n - Upstream artifacts: design/design-001.md, design/task-breakdown.json\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: Modified source files + <session>/code/dev-log.md | Syntax clean, all tasks done\nCONSTRAINTS: Follow design | Preserve existing behavior | Follow code conventions\n---\nInnerLoop: true",
"role": "developer",
"prefix": "DEV",
"deps": ["DESIGN-001"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
**VERIFY-001** (tester, parallel with REVIEW-001):
```json
{
"VERIFY-001": {
"title": "Verify implementation",
"description": "PURPOSE: Verify implementation | Success: Tests pass, no regressions\nTASK:\n - Detect test framework\n - Run tests for changed files\n - Run regression test suite\nCONTEXT:\n - Session: <session-folder>\n - Scope: <task-scope>\n - Upstream artifacts: code/dev-log.md\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: <session>/verify/verify-001.json | Pass rate >= 95%\nCONSTRAINTS: Focus on changed files | Report regressions\n---\nInnerLoop: false",
"role": "tester",
"prefix": "VERIFY",
"deps": ["DEV-001"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
**REVIEW-001** (reviewer, parallel with VERIFY-001):
```json
{
"REVIEW-001": {
"title": "Code review for correctness and quality",
"description": "PURPOSE: Code review for correctness and quality | Success: All dimensions reviewed, verdict issued\nTASK:\n - Load changed files and design document\n - Review across 4 dimensions: correctness, completeness, maintainability, security\n - Score quality (1-10) and issue verdict\nCONTEXT:\n - Session: <session-folder>\n - Scope: <task-scope>\n - Upstream artifacts: design/design-001.md, code/dev-log.md\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: <session>/review/review-001.md | Per-dimension findings with severity\nCONSTRAINTS: Focus on implementation changes | Provide file:line references\n---\nInnerLoop: false",
"role": "reviewer",
"prefix": "REVIEW",
"deps": ["DEV-001"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
---
### Multi-Sprint Pipeline
Sprint 1: DESIGN-001 -> DEV-001 -> DEV-002(incremental) -> VERIFY-001 -> DEV-fix -> REVIEW-001
Create Sprint 1 tasks using sprint templates above, plus:
**DEV-002** (developer, incremental):
```json
{
"DEV-002": {
"title": "Incremental implementation",
"description": "PURPOSE: Incremental implementation | Success: Remaining tasks implemented\nTASK:\n - Load remaining tasks from breakdown\n - Implement incrementally\n - Validate syntax\nCONTEXT:\n - Session: <session-folder>\n - Scope: <task-scope>\n - Upstream artifacts: design/task-breakdown.json, code/dev-log.md\n - Shared memory: <session>/.msg/meta.json\nEXPECTED: Modified source files + updated dev-log.md\nCONSTRAINTS: Incremental delivery | Follow existing patterns\n---\nInnerLoop: true",
"role": "developer",
"prefix": "DEV",
"deps": ["DEV-001"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
Subsequent sprints created dynamically after Sprint N completes.
## Phase 4: Validation
Verify task chain integrity:
| Check | Method | Expected |
|-------|--------|----------|
| Task count correct | tasks.json count | patch: 2, sprint: 4, multi: 5+ |
| Dependencies correct | Trace deps graph | Acyclic, correct ordering |
| No circular dependencies | Trace full graph | Acyclic |
| Structured descriptions | Each has PURPOSE/TASK/CONTEXT/EXPECTED | All present |
If validation fails, fix the specific task and re-validate.
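The acyclicity checks above can be sketched as a depth-first walk over the tasks.json `tasks` map. The `deps` field is the same one used in the task templates above; the function name is illustrative:

```javascript
// Verify the task dependency graph is acyclic (illustrative sketch).
// tasks: { "DEV-001": { deps: ["DESIGN-001"], ... }, ... }
function isAcyclic(tasks) {
  const state = {}; // undefined = unvisited, 1 = in progress, 2 = done
  const visit = (id) => {
    if (state[id] === 2) return true;  // subtree already verified
    if (state[id] === 1) return false; // back edge -> cycle
    state[id] = 1;
    for (const dep of tasks[id]?.deps ?? []) {
      if (!visit(dep)) return false;
    }
    state[id] = 2;
    return true;
  };
  return Object.keys(tasks).every((id) => visit(id));
}
```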

View File


@@ -1,227 +0,0 @@
# Command: Monitor
Synchronous pipeline coordination using spawn_agent + wait_agent.
## Constants
- WORKER_AGENT: team_worker
- MAX_GC_ROUNDS: 3
## Handler Router
| Source | Handler |
|--------|---------|
| "capability_gap" | handleAdapt |
| "check" or "status" | handleCheck |
| "resume" or "continue" | handleResume |
| All tasks completed | handleComplete |
| Default | handleSpawnNext |
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session state | tasks.json | Yes |
| Trigger event | From Entry Router detection | Yes |
| Meta state | <session>/.msg/meta.json | Yes |
1. Load tasks.json for current state, pipeline_mode, gc_round, max_gc_rounds
2. Read tasks from tasks.json to get current task statuses
3. Identify trigger event type from Entry Router
## Phase 3: Event Handlers
### handleCallback
Triggered when a worker completes (wait_agent returns).
1. Determine role from completed task prefix:
| Task Prefix | Role Detection |
|-------------|---------------|
| `DESIGN-*` | architect |
| `DEV-*` | developer |
| `VERIFY-*` | tester |
| `REVIEW-*` | reviewer |
2. Mark task as completed in tasks.json:
```
state.tasks[taskId].status = 'completed'
```
3. Record completion in session state and update metrics
4. **Generator-Critic check** (when reviewer completes):
- If completed task is REVIEW-* AND pipeline is sprint or multi-sprint:
- Read review report for GC signal (critical_count, score)
- Read tasks.json for gc_round
| GC Signal | gc_round < max | Action |
|-----------|----------------|--------|
| review.critical_count > 0 OR review.score < 7 | Yes | Increment gc_round, create DEV-fix task in tasks.json with deps on this REVIEW, log `gc_loop_trigger` |
| review.critical_count > 0 OR review.score < 7 | No (>= max) | Force convergence, accept with warning, log to wisdom/issues.md |
| review.critical_count == 0 AND review.score >= 7 | - | Review passed, proceed to handleComplete check |
- Log team_msg with type "gc_loop_trigger" or "task_unblocked"
5. Proceed to handleSpawnNext
### handleSpawnNext
Find and spawn the next ready tasks.
1. Read tasks.json, find tasks where:
- Status is "pending"
- All deps tasks have status "completed"
2. For each ready task, determine role from task prefix:
| Task Prefix | Role | Inner Loop |
|-------------|------|------------|
| DESIGN-* | architect | false |
| DEV-* | developer | true |
| VERIFY-* | tester | false |
| REVIEW-* | reviewer | false |
3. Spawn team_worker:
```javascript
// 1) Update status in tasks.json
state.tasks[taskId].status = 'in_progress'
// 2) Spawn worker
const agentId = spawn_agent({
agent_type: "team_worker",
task_name: taskId, // e.g., "DEV-001" — enables named targeting
items: [
{ type: "text", text: `## Role Assignment
role: ${role}
role_spec: ${skillRoot}/roles/${role}/role.md
session: ${sessionFolder}
session_id: ${sessionId}
requirement: ${taskDescription}
inner_loop: ${innerLoop}` },
{ type: "text", text: `Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.` }
]
})
// 3) Track agent
state.active_agents[taskId] = { agentId, role, started_at: now }
// 4) Wait for completion — use task_name for stable targeting (v4)
const waitResult = wait_agent({ targets: [taskId], timeout_ms: 900000 })
if (waitResult.timed_out) {
state.tasks[taskId].status = 'timed_out'
close_agent({ target: taskId })
delete state.active_agents[taskId]
// Report timeout, STOP
} else {
// 5) Collect results and update tasks.json
state.tasks[taskId].status = 'completed'
close_agent({ target: taskId }) // Use task_name, not agentId
delete state.active_agents[taskId]
}
```
4. **Parallel spawn rules**:
| Pipeline | Scenario | Spawn Behavior |
|----------|----------|---------------|
| Patch | DEV -> VERIFY | One worker at a time |
| Sprint | VERIFY + REVIEW both unblocked | Spawn BOTH in parallel, wait_agent for both |
| Sprint | Other stages | One worker at a time |
| Multi-Sprint | VERIFY + DEV-fix both unblocked | Spawn BOTH in parallel, wait_agent for both |
| Multi-Sprint | Other stages | One worker at a time |
**Cross-Agent Supplementary Context** (v4):
When spawning workers in a later pipeline phase, send upstream results as supplementary context to already-running workers:
```
// Example: Send design results to running developer
send_message({
target: "<running-agent-task-name>",
items: [{ type: "text", text: `## Supplementary Context\n${upstreamFindings}` }]
})
// Note: send_message queues info without interrupting the agent's current work
```
Use `send_message` (not `assign_task`) for supplementary info that enriches but doesn't redirect the agent's current task.
5. STOP after processing -- wait for next event
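The readiness filter in step 1 above can be sketched as follows; field names match the tasks.json schema used throughout this skill:

```javascript
// Return IDs of tasks that are ready to spawn:
// status "pending" and every dependency already "completed".
function findReadyTasks(tasks) {
  return Object.entries(tasks)
    .filter(([, task]) =>
      task.status === "pending" &&
      task.deps.every((dep) => tasks[dep]?.status === "completed"))
    .map(([id]) => id);
}
```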
### handleCheck
Output current pipeline status from tasks.json. Do NOT advance pipeline.
```
Pipeline Status (<pipeline-mode>):
[DONE] DESIGN-001 (architect) -> design/design-001.md
[DONE] DEV-001 (developer) -> code/dev-log.md
[RUN] VERIFY-001 (tester) -> verifying...
[RUN] REVIEW-001 (reviewer) -> reviewing...
[WAIT] DEV-fix (developer) -> blocked by REVIEW-001
GC Rounds: <gc_round>/<max_gc_rounds>
Sprint: <sprint_id>
Session: <session-id>
```
### handleResume
**Agent Health Check** (v4):
```
// Verify actual running agents match session state
const runningAgents = list_agents({})
// For each active_agent in tasks.json:
// - If agent NOT in runningAgents -> agent crashed
// - Reset that task to pending, remove from active_agents
// This prevents stale agent references from blocking the pipeline
```
Resume pipeline after user pause or interruption.
1. Audit tasks.json for inconsistencies:
- Tasks stuck in "in_progress" -> reset to "pending"
- Tasks with completed deps but still "pending" -> include in spawn list
2. Proceed to handleSpawnNext
### handleComplete
**Cleanup Verification** (v4):
```
// Verify all agents are properly closed
const remaining = list_agents({})
// If any team agents still running -> close_agent each
// Ensures clean session shutdown
```
Triggered when all pipeline tasks are completed.
**Completion check by mode**:
| Mode | Completion Condition |
|------|---------------------|
| patch | DEV-001 + VERIFY-001 completed |
| sprint | DESIGN-001 + DEV-001 + VERIFY-001 + REVIEW-001 (+ any GC tasks) completed |
| multi-sprint | All sprint tasks (+ any GC tasks) completed |
1. Verify all tasks completed in tasks.json
2. If any tasks not completed, return to handleSpawnNext
3. **Multi-sprint check**: If multi-sprint AND more sprints planned:
- Record sprint metrics to .msg/meta.json sprint_history
- Evaluate downgrade eligibility (velocity >= expected, review avg >= 8)
- Pause for user confirmation before Sprint N+1
4. If all completed, transition to coordinator Phase 5 (Report + Completion Action)
## Phase 4: State Persistence
After every handler execution:
1. Update tasks.json with current state (gc_round, last event, active tasks)
2. Update .msg/meta.json gc_round if changed
3. Verify task list consistency
4. STOP and wait for next event

View File


@@ -1,193 +0,0 @@
# Coordinator Role
Orchestrate team-iterdev: analyze -> dispatch -> spawn -> monitor -> report.
## Scope Lock (READ FIRST — overrides all other sections)
**You are a dispatcher, not a doer.** Your ONLY outputs are:
- Session state files (`.workflow/.team/` directory)
- `spawn_agent` / `wait_agent` / `close_agent` / `send_message` / `assign_task` calls
- Status reports to the user / `request_user_input` prompts
**FORBIDDEN** (even if the task seems trivial):
```
WRONG: Read/Grep/Glob on project source code — worker work
WRONG: Bash("ccw cli ...") — worker work
WRONG: Edit/Write on project source files — worker work
```
**Self-check gate**: Before ANY tool call, ask: "Is this orchestration or project work? If project work → STOP → spawn worker."
---
## Identity
- Name: coordinator | Tag: [coordinator]
- Responsibility: Analyze task -> Create session -> Dispatch tasks -> Monitor progress -> Report results
## Boundaries
### MUST
- Use `team_worker` agent type for all worker spawns (NOT `general-purpose`)
- Follow Command Execution Protocol for dispatch and monitor commands
- Respect pipeline stage dependencies (deps)
- Stop after spawning workers -- wait for results via wait_agent
- Handle developer<->reviewer GC loop (max 3 rounds)
- Maintain tasks.json for real-time progress
- Execute completion action in Phase 5
- **Always proceed through full Phase 1-5 workflow, never skip to direct execution**
- Use `send_message` for supplementary context (non-interrupting) and `assign_task` for triggering new work
- Use `list_agents` for session resume health checks and cleanup verification
### MUST NOT
- Implement domain logic (designing, coding, testing, reviewing) -- workers handle this
- Spawn workers without creating tasks first
- Write source code directly
- Force-advance pipeline past failed review/validation
- Modify task outputs (workers own their deliverables)
- Call CLI tools (ccw cli) — only workers use CLI
## Command Execution Protocol
When coordinator needs to execute a command:
1. Read `commands/<command>.md`
2. Follow the workflow defined in the command
3. Commands are inline execution guides, NOT separate agents
4. Execute synchronously, complete before proceeding
## Entry Router
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Status check | Args contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Args contain "resume" or "continue" | -> handleResume (monitor.md) |
| Pipeline complete | All tasks completed | -> handleComplete (monitor.md) |
| Interrupted session | Active/paused session in .workflow/.team/IDS-* | -> Phase 0 |
| New session | None of above | -> Phase 1 |
For check/resume/complete: load @commands/monitor.md, execute handler, STOP.
## Phase 0: Session Resume Check
1. Scan `.workflow/.team/IDS-*/tasks.json` for active/paused sessions
2. No sessions -> Phase 1
3. Single session -> reconcile (read tasks.json, reset in_progress->pending, kick first ready task)
4. Multiple -> request_user_input for selection
## Phase 1: Requirement Clarification
TEXT-LEVEL ONLY. No source code reading.
1. Parse user task description from $ARGUMENTS
2. Delegate to @commands/analyze.md
3. Assess complexity for pipeline selection:
| Signal | Weight |
|--------|--------|
| Changed files > 10 | +3 |
| Changed files 3-10 | +2 |
| Structural change (refactor, architect, restructure) | +3 |
| Cross-cutting (multiple, across, cross) | +2 |
| Simple fix (fix, bug, typo, patch) | -2 |
| Score | Pipeline |
|-------|----------|
| >= 5 | multi-sprint |
| 2-4 | sprint |
| 0-1 | patch |
4. Ask for missing parameters via request_user_input (mode selection)
5. Record requirement with scope, pipeline mode
6. CRITICAL: Always proceed to Phase 2, never skip team workflow
## Phase 2: Session & Team Setup
1. Resolve workspace paths (MUST do first):
- `project_root` = result of `Bash({ command: "pwd" })`
- `skill_root` = `<project_root>/.codex/skills/team-iterdev`
2. Generate session ID: `IDS-<slug>-<YYYY-MM-DD>`
3. Create session folder structure:
```
mkdir -p .workflow/.team/<session-id>/{design,code,verify,review,wisdom}
```
4. Read specs/pipelines.md -> select pipeline based on complexity
5. Initialize wisdom directory (learnings.md, decisions.md, conventions.md, issues.md)
6. Write initial tasks.json:
```json
{
"session_id": "<id>",
"pipeline_mode": "<patch|sprint|multi-sprint>",
"requirement": "<original requirement>",
"created_at": "<ISO timestamp>",
"gc_round": 0,
"max_gc_rounds": 3,
"active_agents": {},
"tasks": {}
}
```
7. Initialize meta.json with pipeline metadata:
```typescript
mcp__ccw-tools__team_msg({
operation: "log", session_id: "<id>", from: "coordinator",
type: "state_update", summary: "Session initialized",
data: {
pipeline_mode: "<patch|sprint|multi-sprint>",
pipeline_stages: ["architect", "developer", "tester", "reviewer"],
roles: ["coordinator", "architect", "developer", "tester", "reviewer"]
}
})
```
## Phase 3: Task Chain Creation
Delegate to @commands/dispatch.md:
1. Read specs/pipelines.md for selected pipeline task registry
2. Add task entries to tasks.json `tasks` object with deps
3. Update tasks.json metadata
## Phase 4: Spawn-and-Wait
Delegate to @commands/monitor.md#handleSpawnNext:
1. Find ready tasks (pending + all deps resolved)
2. Spawn team_worker agents via spawn_agent, wait_agent for results
3. Output status summary
4. STOP
## Phase 5: Report + Completion Action
1. Load session state -> count completed tasks, calculate duration
2. Record sprint learning to .msg/meta.json sprint_history
3. List deliverables:
| Deliverable | Path |
|-------------|------|
| Design Document | <session>/design/design-001.md |
| Task Breakdown | <session>/design/task-breakdown.json |
| Dev Log | <session>/code/dev-log.md |
| Verification Results | <session>/verify/verify-001.json |
| Review Report | <session>/review/review-001.md |
4. Execute completion action per session.completion_action:
- interactive -> request_user_input (Archive/Keep/Export)
- auto_archive -> Archive & Clean (status=completed)
- auto_keep -> Keep Active (status=paused)
## v4 Coordination Patterns
### Message Semantics
- **send_message**: Queue supplementary info to a running agent. Does NOT interrupt current processing. Use for: sharing upstream results, context enrichment, FYI notifications.
- **assign_task**: Assign new work and trigger processing. Use for: waking idle agents, redirecting work, requesting new output.
### Agent Lifecycle Management
- **list_agents({})**: Returns all running agents. Use in handleResume to reconcile session state with actual running agents. Use in handleComplete to verify clean shutdown.
- **Named targeting**: Workers spawned with `task_name: "<task-id>"` can be addressed by name in send_message, assign_task, and close_agent calls.
## Error Handling
| Error | Resolution |
|-------|------------|
| Task too vague | request_user_input for clarification |
| Session corruption | Attempt recovery, fallback to manual |
| Worker crash | Reset task to pending, respawn |
| GC loop exceeds 3 rounds | Accept with warning, record in shared memory |
| Sprint velocity drops below 50% | Alert user, suggest scope reduction |
| Task ledger corrupted | Rebuild from tasks.json state |

View File


@@ -1,74 +0,0 @@
---
role: developer
prefix: DEV
inner_loop: true
message_types:
success: dev_complete
progress: dev_progress
error: error
---
# Developer
Code implementer. Implements code according to design, incremental delivery. Acts as Generator in Generator-Critic loop (paired with reviewer).
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Design document | <session>/design/design-001.md | For non-fix tasks |
| Task breakdown | <session>/design/task-breakdown.json | For non-fix tasks |
| Review feedback | <session>/review/*.md | For fix tasks |
| Wisdom files | <session>/wisdom/ | No |
1. Extract session path from task description
2. Read .msg/meta.json for shared context
3. Detect task type:
| Task Type | Detection | Loading |
|-----------|-----------|---------|
| Fix task | Subject contains "fix" | Read latest review file for feedback |
| Normal task | No "fix" in subject | Read design document + task breakdown |
4. Load previous implementation_context from .msg/meta.json
5. Read wisdom files for conventions and known issues
## Phase 3: Code Implementation
**Implementation strategy selection**:
| Task Count | Complexity | Strategy |
|------------|------------|----------|
| <= 2 tasks | Low | Direct: inline Edit/Write |
| 3-5 tasks | Medium | Single agent: one code-developer for all |
| > 5 tasks | High | Batch agent: group by module, one agent per batch |
**Fix Task Mode** (GC Loop):
- Focus on review feedback items only
- Fix critical issues first, then high, then medium
- Do NOT change code that was not flagged
- Maintain existing code style and patterns
**Normal Task Mode**:
- Read target files, apply changes using Edit or Write
- Follow execution order from task breakdown
- Validate syntax after each major change
## Phase 4: Self-Validation
| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Syntax | tsc --noEmit or equivalent | No errors |
| File existence | Verify all planned files exist | All files present |
| Import resolution | Check no broken imports | All imports resolve |
1. Run syntax check: `tsc --noEmit` / `python -m py_compile` / equivalent
2. Auto-fix if validation fails (max 2 attempts)
3. Write dev log to `<session>/code/dev-log.md`:
- Changed files count, syntax status, fix task flag, file list
4. Update implementation_context in .msg/meta.json:
- task, changed_files, is_fix, syntax_clean
5. Write discoveries to wisdom/learnings.md

View File


@@ -1,66 +0,0 @@
---
role: reviewer
prefix: REVIEW
inner_loop: false
message_types:
success: review_passed
revision: review_revision
critical: review_critical
error: error
---
# Reviewer
Code reviewer. Multi-dimensional review, quality scoring, improvement suggestions. Acts as Critic in Generator-Critic loop (paired with developer).
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Design document | <session>/design/design-001.md | For requirements alignment |
| Changed files | Git diff | Yes |
1. Extract session path from task description
2. Read .msg/meta.json for shared context and previous review_feedback_trends
3. Read design document for requirements alignment
4. Get changed files via git diff, read file contents (limit 20 files)
## Phase 3: Multi-Dimensional Review
**Review dimensions**:
| Dimension | Weight | Focus Areas |
|-----------|--------|-------------|
| Correctness | 30% | Logic correctness, boundary handling |
| Completeness | 25% | Coverage of design requirements |
| Maintainability | 25% | Readability, code style, DRY |
| Security | 20% | Vulnerabilities, input validation |
Per-dimension: scan modified files, record findings with severity (CRITICAL/HIGH/MEDIUM/LOW), include file:line references and suggestions.
**Scoring**: Weighted average of dimension scores (1-10 each).
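The weighted average can be sketched as follows, with weights taken from the dimension table above:

```javascript
// Overall quality score: weighted average of per-dimension scores (1-10 each).
const DIMENSION_WEIGHTS = {
  correctness: 0.30,
  completeness: 0.25,
  maintainability: 0.25,
  security: 0.20,
};

function qualityScore(dimensionScores) {
  return Object.entries(DIMENSION_WEIGHTS).reduce(
    (sum, [dimension, weight]) => sum + weight * dimensionScores[dimension],
    0
  );
}
```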
**Output review report** (`<session>/review/review-<num>.md`):
- Files reviewed count, quality score, issue counts by severity
- Per-finding: severity, file:line, dimension, description, suggestion
- Scoring breakdown by dimension
- Signal: CRITICAL / REVISION_NEEDED / APPROVED
- Design alignment notes
## Phase 4: Trend Analysis + Verdict
1. Compare with previous review_feedback_trends from .msg/meta.json
2. Identify recurring issues, improvement areas, new issues
| Verdict Condition | Message Type |
|-------------------|--------------|
| criticalCount > 0 | review_critical |
| score < 7 | review_revision |
| else | review_passed |
3. Update review_feedback_trends in .msg/meta.json:
- review_id, score, critical count, high count, dimensions, gc_round
4. Write discoveries to wisdom/learnings.md
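The verdict table above maps directly to a small decision function. A sketch, with message types as declared in this role's frontmatter:

```javascript
// Map review results to the outgoing message type.
function reviewVerdict(score, criticalCount) {
  if (criticalCount > 0) return "review_critical";
  if (score < 7) return "review_revision";
  return "review_passed";
}
```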

View File


@@ -1,88 +0,0 @@
---
role: tester
prefix: VERIFY
inner_loop: false
message_types:
success: verify_passed
failure: verify_failed
fix: fix_required
error: error
---
# Tester
Test validator. Test execution, fix cycles, and regression detection.
## Phase 2: Environment Detection
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Changed files | Git diff | Yes |
1. Extract session path from task description
2. Read .msg/meta.json for shared context
3. Get changed files via git diff
4. Detect test framework and command:
| Detection | Method |
|-----------|--------|
| Test command | Check package.json scripts, pytest.ini, Makefile |
| Coverage tool | Check for nyc, coverage.py, jest --coverage config |
Common commands: npm test, pytest, go test ./..., cargo test
## Phase 3: Execution + Fix Cycle
**Iterative test-fix cycle** (max 5 iterations):
| Step | Action |
|------|--------|
| 1 | Run test command |
| 2 | Parse results, check pass rate |
| 3 | Pass rate >= 95% -> exit loop (success) |
| 4 | Extract failing test details |
| 5 | Apply fix using CLI tool |
| 6 | Increment iteration counter |
| 7 | iteration >= MAX (5) -> exit loop (report failures) |
| 8 | Go to Step 1 |
**Fix delegation**: Use CLI tool to fix failing tests:
```bash
ccw cli -p "PURPOSE: Fix failing tests; success = all listed tests pass
TASK: • Analyze test failure output • Identify root cause in changed files • Apply minimal fix
MODE: write
CONTEXT: @<changed-files> | Memory: Test output from current iteration
EXPECTED: Code fixes that make failing tests pass without breaking other tests
CONSTRAINTS: Only modify files in changed list | Minimal changes
Test output: <test-failure-details>
Changed files: <file-list>" --tool gemini --mode write --rule development-debug-runtime-issues
```
Wait for CLI completion before re-running tests.
## Phase 4: Regression Check + Report
1. Run full test suite for regression: `<test-command> --all`
| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Regression | Run full test suite | No FAIL in output |
| Coverage | Run coverage tool | >= 80% (if configured) |
2. Write verification results to `<session>/verify/verify-<num>.json`:
- verify_id, pass_rate, iterations, passed, timestamp, regression_passed
3. Determine message type:
| Condition | Message Type |
|-----------|--------------|
| passRate >= 0.95 | verify_passed |
| passRate < 0.95 && iterations >= MAX | fix_required |
| passRate < 0.95 | verify_failed |
4. Update .msg/meta.json with test_patterns entry
5. Write discoveries to wisdom/issues.md
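The message-type table above can be sketched as:

```javascript
const MAX_FIX_ITERATIONS = 5; // matches the fix-cycle cap in Phase 3

// Map verification results to the outgoing message type.
function verifyMessageType(passRate, iterations) {
  if (passRate >= 0.95) return "verify_passed";
  if (iterations >= MAX_FIX_ITERATIONS) return "fix_required";
  return "verify_failed";
}
```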

View File


@@ -1,94 +0,0 @@
# IterDev Pipeline Definitions
## Three-Pipeline Architecture
### Patch Pipeline (2 beats, serial)
```
DEV-001 -> VERIFY-001
[developer] [tester]
```
### Sprint Pipeline (4 beats, with parallel window)
```
DESIGN-001 -> DEV-001 -> [VERIFY-001 + REVIEW-001] (parallel)
[architect] [developer] [tester] [reviewer]
```
### Multi-Sprint Pipeline (N beats, iterative)
```
Sprint 1: DESIGN-001 -> DEV-001 -> DEV-002(incremental) -> VERIFY-001 -> DEV-fix -> REVIEW-001
Sprint 2: DESIGN-002(refined) -> DEV-003 -> VERIFY-002 -> REVIEW-002
...
```
## Generator-Critic Loop (developer <-> reviewer)
```
DEV -> REVIEW -> (if review.critical_count > 0 || review.score < 7)
-> DEV-fix -> REVIEW-2 -> (if still issues) -> DEV-fix-2 -> REVIEW-3
-> (max 3 rounds, then accept with warning)
```
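The loop's control decision, as the coordinator might evaluate it after each REVIEW task, can be sketched as follows. Function and field names are illustrative; the convergence condition matches the one above:

```javascript
// Decide the next GC-loop action after a review completes.
function gcNextAction(review, gcRound, maxGcRounds = 3) {
  const converged = review.critical_count === 0 && review.score >= 7;
  if (converged) return "proceed";             // review passed, continue pipeline
  if (gcRound < maxGcRounds) return "dev_fix"; // create DEV-fix, increment gc_round
  return "force_converge";                     // accept with warning, record in wisdom
}
```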
## Pipeline Selection Logic
| Signal | Score |
|--------|-------|
| Changed files > 10 | +3 |
| Changed files 3-10 | +2 |
| Structural change | +3 |
| Cross-cutting concern | +2 |
| Simple fix | -2 |
| Score | Pipeline |
|-------|----------|
| >= 5 | multi-sprint |
| 2-4 | sprint |
| 0-1 | patch |
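Taken together, the two tables can be sketched as a scoring function. The keyword heuristics used to detect each signal here are an assumption for illustration; the coordinator may assess the signals by any means:

```javascript
// Score a requirement and select a pipeline (illustrative sketch).
function selectPipeline(changedFileCount, description) {
  let score = 0;
  if (changedFileCount > 10) score += 3;
  else if (changedFileCount >= 3) score += 2;
  if (/refactor|architect|restructure/i.test(description)) score += 3; // structural change
  if (/multiple|across|cross/i.test(description)) score += 2;          // cross-cutting
  if (/\bfix\b|\bbug\b|typo|patch/i.test(description)) score -= 2;     // simple fix
  if (score >= 5) return "multi-sprint";
  if (score >= 2) return "sprint";
  return "patch";
}
```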
## Task Metadata Registry
| Task ID | Role | Pipeline | Dependencies | Description |
|---------|------|----------|-------------|-------------|
| DESIGN-001 | architect | sprint/multi | (none) | Technical design and task breakdown |
| DEV-001 | developer | all | DESIGN-001 (sprint/multi) or (none for patch) | Code implementation |
| DEV-002 | developer | multi | DEV-001 | Incremental implementation |
| DEV-fix | developer | sprint/multi | REVIEW-* (GC loop trigger) | Fix issues from review |
| VERIFY-001 | tester | all | DEV-001 (or last DEV) | Test execution and fix cycles |
| REVIEW-001 | reviewer | sprint/multi | DEV-001 (or last DEV) | Code review and quality scoring |
## Checkpoints
| Trigger Condition | Location | Behavior |
|-------------------|----------|----------|
| GC loop exceeds max rounds | After REVIEW-3 | Stop iteration, accept with warning, record in wisdom |
| Sprint transition | End of Sprint N | Pause, retrospective, user confirms `resume` for Sprint N+1 |
| Pipeline stall | No ready + no running tasks | Check missing tasks, report blockedBy chain to user |
## Multi-Sprint Dynamic Downgrade
If Sprint N metrics are strong (velocity >= expected, review avg >= 8), coordinator may downgrade Sprint N+1 from multi-sprint to sprint pipeline for efficiency.
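A minimal sketch of the eligibility check; the metric field names are assumed from the description above:

```javascript
// Sprint N+1 may be downgraded when Sprint N performed well.
function canDowngradeNextSprint(metrics) {
  return metrics.velocity >= metrics.expected_velocity && metrics.review_avg >= 8;
}
```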
## Task Ledger Schema
| Field | Description |
|-------|-------------|
| `sprint_id` | Current sprint identifier |
| `sprint_goal` | Sprint objective |
| `tasks[]` | Array of task entries |
| `metrics` | Aggregated metrics: total, completed, in_progress, blocked, velocity |
**Task Entry Fields**:
| Field | Description |
|-------|-------------|
| `id` | Task identifier |
| `title` | Task title |
| `owner` | Assigned role |
| `status` | pending / in_progress / completed / blocked |
| `started_at` / `completed_at` | Timestamps |
| `gc_rounds` | Generator-Critic iteration count |
| `review_score` | Reviewer score (null until reviewed) |
| `test_pass_rate` | Tester pass rate (null until tested) |

View File


@@ -1,172 +0,0 @@
{
"team_name": "team-iterdev",
"team_display_name": "Team IterDev",
"description": "Iterative development team with Generator-Critic loop, task ledger, sprint learning, dynamic pipeline, conflict handling, concurrency control, rollback strategy, user feedback loop, and tech debt tracking",
"version": "1.2.0",
"roles": {
"coordinator": {
"task_prefix": null,
"responsibility": "Sprint planning, backlog management, task ledger maintenance, GC loop control, Phase 1: conflict handling, concurrency control, rollback strategy, Phase 3: user feedback loop, tech debt tracking",
"message_types": [
"sprint_started", "gc_loop_trigger", "sprint_complete", "task_unblocked", "error", "shutdown",
"conflict_detected", "conflict_resolved", "resource_locked", "resource_unlocked", "resource_contention",
"rollback_initiated", "rollback_completed", "rollback_failed",
"user_feedback_received", "tech_debt_identified"
]
},
"architect": {
"task_prefix": "DESIGN",
"responsibility": "Technical design, task decomposition, architecture decisions",
"message_types": ["design_ready", "design_revision", "error"]
},
"developer": {
"task_prefix": "DEV",
"responsibility": "Code implementation, incremental delivery",
"message_types": ["dev_complete", "dev_progress", "error"]
},
"tester": {
"task_prefix": "VERIFY",
"responsibility": "Test execution, fix cycle, regression detection",
"message_types": ["verify_passed", "verify_failed", "fix_required", "error"]
},
"reviewer": {
"task_prefix": "REVIEW",
"responsibility": "Code review, quality scoring, improvement suggestions",
"message_types": ["review_passed", "review_revision", "review_critical", "error"]
}
},
"pipelines": {
"patch": {
"description": "Simple fix: implement → verify",
"task_chain": ["DEV-001", "VERIFY-001"],
"gc_loops": 0
},
"sprint": {
"description": "Standard feature: design → implement → verify + review (parallel)",
"task_chain": ["DESIGN-001", "DEV-001", "VERIFY-001", "REVIEW-001"],
"gc_loops": 3,
"parallel_groups": [["VERIFY-001", "REVIEW-001"]]
},
"multi-sprint": {
"description": "Large feature: multiple sprint cycles with incremental delivery",
"task_chain": "dynamic — coordinator creates per-sprint chains",
"gc_loops": 3,
"sprint_count": "dynamic"
}
},
"innovation_patterns": {
"generator_critic": {
"generator": "developer",
"critic": "reviewer",
"max_rounds": 3,
"convergence_trigger": "review.critical_count === 0 && review.score >= 7"
},
"task_ledger": {
"file": "task-ledger.json",
"updated_by": "coordinator",
"tracks": ["status", "gc_rounds", "review_score", "test_pass_rate", "velocity"],
"phase1_extensions": {
"conflict_info": {
"fields": ["status", "conflicting_files", "resolution_strategy", "resolved_by_task_id"],
"default": { "status": "none", "conflicting_files": [], "resolution_strategy": null, "resolved_by_task_id": null }
},
"rollback_info": {
"fields": ["snapshot_id", "rollback_procedure", "last_successful_state_id"],
"default": { "snapshot_id": null, "rollback_procedure": null, "last_successful_state_id": null }
}
}
},
"shared_memory": {
"file": "shared-memory.json",
"fields": {
"architect": "architecture_decisions",
"developer": "implementation_context",
"tester": "test_patterns",
"reviewer": "review_feedback_trends"
},
"persistent_fields": ["sprint_history", "what_worked", "what_failed", "patterns_learned"],
"phase1_extensions": {
"resource_locks": {
"description": "Concurrency control: resource locking state",
"lock_timeout_ms": 300000,
"deadlock_detection": true
}
}
},
"dynamic_pipeline": {
"selector": "coordinator",
"criteria": "file_count + module_count + complexity_assessment",
"downgrade_rule": "velocity >= expected && review_avg >= 8 → simplify next sprint"
}
},
"phase1_features": {
"conflict_handling": {
"enabled": true,
"detection_strategy": "file_overlap",
"resolution_strategies": ["manual", "auto_merge", "abort"]
},
"concurrency_control": {
"enabled": true,
"lock_timeout_minutes": 5,
"deadlock_detection": true,
"resources": ["task-ledger.json", "shared-memory.json"]
},
"rollback_strategy": {
"enabled": true,
"snapshot_trigger": "task_complete",
"default_procedure": "git revert HEAD"
}
},
"phase2_features": {
"external_dependency_management": {
"enabled": true,
"validation_trigger": "task_start",
"supported_sources": ["npm", "maven", "pip", "git"],
"version_check_command": {
"npm": "npm list {name}",
"pip": "pip show {name}",
"maven": "mvn dependency:tree"
}
},
"state_recovery": {
"enabled": true,
"checkpoint_trigger": "phase_complete",
"max_checkpoints_per_task": 5,
"checkpoint_dir": "checkpoints/"
}
},
"phase3_features": {
"user_feedback_loop": {
"enabled": true,
"collection_trigger": "sprint_complete",
"max_feedback_items": 50,
"severity_levels": ["low", "medium", "high", "critical"],
"status_flow": ["new", "reviewed", "addressed", "closed"]
},
"tech_debt_tracking": {
"enabled": true,
"detection_sources": ["review", "test", "architect"],
"categories": ["code", "design", "test", "documentation"],
"severity_levels": ["low", "medium", "high", "critical"],
"status_flow": ["open", "in_progress", "resolved", "deferred"],
"report_trigger": "sprint_retrospective"
}
},
"collaboration_patterns": ["CP-1", "CP-3", "CP-5", "CP-6"],
"session_dirs": {
"base": ".workflow/.team/IDS-{slug}-{YYYY-MM-DD}/",
"design": "design/",
"code": "code/",
"verify": "verify/",
"review": "review/",
"messages": ".workflow/.team-msg/{team-name}/"
}
}
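
The `convergence_trigger` and `downgrade_rule` strings above are informal expressions evaluated by the coordinator. A minimal TypeScript sketch of how those two checks could be evaluated — all type and function names here are illustrative assumptions, not part of the removed skill:

```typescript
// Hypothetical coordinator-side checks implied by the config above.
// Field names mirror the config strings; everything else is assumed.

interface Review {
  critical_count: number;
  score: number;
}

interface SprintStats {
  velocity: number;
  expected: number;
  review_avg: number;
}

// generator_critic.convergence_trigger:
//   "review.critical_count === 0 && review.score >= 7"
// The loop also hard-stops once max_rounds (3) is reached.
function hasConverged(review: Review, round: number, maxRounds = 3): boolean {
  if (round >= maxRounds) return true; // give up after max_rounds
  return review.critical_count === 0 && review.score >= 7;
}

// dynamic_pipeline.downgrade_rule:
//   "velocity >= expected && review_avg >= 8 → simplify next sprint"
function shouldDowngrade(stats: SprintStats): boolean {
  return stats.velocity >= stats.expected && stats.review_avg >= 8;
}

console.log(hasConverged({ critical_count: 0, score: 8 }, 1)); // converged
console.log(hasConverged({ critical_count: 2, score: 9 }, 1)); // still looping
console.log(shouldDowngrade({ velocity: 10, expected: 8, review_avg: 8.5 }));
```

In a real coordinator these predicates would be driven by the `task-ledger.json` fields (`gc_rounds`, `review_score`, `velocity`) rather than ad-hoc arguments.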