Refactor team collaboration skills and update documentation

- Renamed `team-lifecycle-v5` to `team-lifecycle` across various documentation files for consistency.
- Updated references in code examples and usage sections to reflect the new skill name.
- Added a new command file for the `monitor` functionality in the `team-iterdev` skill, detailing the coordinator's monitoring events and task management.
- Introduced new components for dynamic pipeline visualization and session coordinates display in the frontend.
- Implemented utility functions for pipeline stage detection and status derivation based on message history.
- Enhanced the team role panel to map members to their respective pipeline roles with status indicators.
- Updated Chinese documentation to reflect the changes in skill names and descriptions.
Author: catlog22
Date: 2026-03-04 11:07:48 +08:00
Parent: 5e96722c09
Commit: ffd5282932
132 changed files with 2938 additions and 18916 deletions


@@ -11,24 +11,23 @@ Unified team skill: roadmap-driven development with phased execution pipeline. C
## Architecture Overview
```
┌───────────────────────────────────────────────┐
│ Skill(skill="team-roadmap-dev")               │
│ args="<task-description>" or args="--role=xxx"│
└───────────────────┬───────────────────────────┘
│ Role Router
┌──── --role present? ────┐
│ NO │ YES
↓ ↓
Orchestration Mode Role Dispatch
(auto → coordinator) (route to role.md)
┌────┴────┬───────────┬───────────┐
↓ ↓ ↓ ↓
┌──────────┐┌─────────┐┌──────────┐┌──────────┐
│coordinator││ planner ││ executor ││ verifier │
│ (human    ││ PLAN-*  ││ EXEC-*   ││ VERIFY-* │
│ interact) ││         ││          ││          │
└──────────┘└─────────┘└──────────┘└──────────┘
+---------------------------------------------------+
| Skill(skill="team-roadmap-dev") |
| args="<task-description>" |
+-------------------+-------------------------------+
|
Orchestration Mode (auto -> coordinator)
|
Coordinator (inline)
Phase 0-5 orchestration
|
+-------+-------+-------+
v v v
[tw] [tw] [tw]
plann- execu- verif-
er tor ier
(tw) = team-worker agent
```
## Command Architecture
@@ -66,12 +65,12 @@ Parse `$ARGUMENTS` to extract `--role`. If absent → Orchestration Mode (auto r
### Role Registry
| Role | File | Task Prefix | Type | Compact |
|------|------|-------------|------|---------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | orchestrator | **⚠️ Must re-read after compaction** |
| planner | [roles/planner/role.md](roles/planner/role.md) | PLAN-* | pipeline | Must re-read after compaction |
| executor | [roles/executor/role.md](roles/executor/role.md) | EXEC-* | pipeline | Must re-read after compaction |
| verifier | [roles/verifier/role.md](roles/verifier/role.md) | VERIFY-* | pipeline | Must re-read after compaction |
| Role | Spec | Task Prefix | Inner Loop |
|------|------|-------------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | - |
| planner | [role-specs/planner.md](role-specs/planner.md) | PLAN-* | true |
| executor | [role-specs/executor.md](role-specs/executor.md) | EXEC-* | true |
| verifier | [role-specs/verifier.md](role-specs/verifier.md) | VERIFY-* | true |
> **⚠️ COMPACT PROTECTION**: Role files are execution documents, not reference material. When context compression occurs and only a summary of the role instructions remains, you **must immediately `Read` the corresponding role.md to reload it before continuing**. Do not execute any Phase based on the summary alone.
@@ -309,39 +308,60 @@ Phase N: PLAN-N01 → EXEC-N01 → VERIFY-N01 → Complete
## Coordinator Spawn Template
When coordinator spawns workers, use background mode (Spawn-and-Stop):
### v5 Worker Spawn (all roles)
When coordinator spawns workers, use `team-worker` agent with role-spec path:
```
Task({
subagent_type: "general-purpose",
subagent_type: "team-worker",
description: "Spawn <role> worker",
team_name: "roadmap-dev",
name: "<role>",
run_in_background: true,
prompt: `You are team "roadmap-dev" <ROLE>.
prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-roadmap-dev/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: roadmap-dev
requirement: <task-description>
inner_loop: true
## Primary Directive
All your work must be executed through Skill to load role definition:
Skill(skill="team-roadmap-dev", args="--role=<role>")
Current requirement: <task-description>
Session: <session-folder>
## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other role work
- All output prefixed with [<role>] identifier
- Only communicate with coordinator
- Do not use TaskCreate for other roles
- Call mcp__ccw-tools__team_msg before every SendMessage
## Workflow
1. Call Skill -> load role definition and execution logic
2. Follow role.md 5-Phase flow
3. team_msg + SendMessage results to coordinator
4. TaskUpdate completed -> check next task`
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`
})
```
**All roles** (planner, executor, verifier): Set `inner_loop: true`.
---
## Completion Action
When the pipeline completes (all phases done, coordinator Phase 5):
```
AskUserQuestion({
questions: [{
question: "Roadmap Dev pipeline complete. What would you like to do?",
header: "Completion",
multiSelect: false,
options: [
{ label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
{ label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
{ label: "Export Results", description: "Export deliverables to a specified location, then clean" }
]
}]
})
```
| Choice | Action |
|--------|--------|
| Archive & Clean | Update session status="completed" -> TeamDelete(roadmap-dev) -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions: `Skill(skill="team-roadmap-dev", args="resume")` |
| Export Results | AskUserQuestion for target path -> copy deliverables -> Archive & Clean |
## Session Directory
```


@@ -1,373 +1,445 @@
# Command: Monitor
Handle all coordinator monitoring events for the roadmap-dev pipeline using the async Spawn-and-Stop pattern. Multi-phase execution with gap closure is expressed as event-driven state-machine transitions. One operation per invocation, then STOP and wait for the next callback.
## Purpose
Execute all roadmap phases sequentially using the Stop-Wait pattern. The coordinator spawns each worker synchronously (run_in_background: false), waits for completion, then proceeds to the next step. Handles gap closure loops and phase transitions.
## Constants
| Key | Value | Description |
|-----|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Coordinator does one operation then STOPS |
| WORKER_AGENT | team-worker | All workers spawned as team-worker agents |
| MAX_GAP_ITERATIONS | 3 | Maximum gap closure re-plan/exec/verify cycles per phase |
## Design Principle
Models have no concept of time. Polling, sleeping, and periodic checking are forbidden. All coordination uses synchronous Task() calls, where a worker's return marks the step as done.
### Role-Worker Map
| Prefix | Role | Role Spec | inner_loop |
|--------|------|-----------|------------|
| PLAN | planner | `.claude/skills/team-roadmap-dev/role-specs/planner.md` | true (subagents: cli-explore-agent, action-planning-agent) |
| EXEC | executor | `.claude/skills/team-roadmap-dev/role-specs/executor.md` | true (subagents: code-developer) |
| VERIFY | verifier | `.claude/skills/team-roadmap-dev/role-specs/verifier.md` | true |
## Strategy
Sequential spawning: the coordinator spawns one worker at a time via synchronous Task() calls. Each worker processes its task and returns; the coordinator inspects the result and decides the next action.
### Pipeline Structure
Per-phase task chain: `PLAN-{phase}01 -> EXEC-{phase}01 -> VERIFY-{phase}01`
Gap closure creates: `PLAN-{phase}0N -> EXEC-{phase}0N -> VERIFY-{phase}0N` (N = iteration + 1)
Multi-phase: Phases execute sequentially. Each phase completes its full PLAN/EXEC/VERIFY cycle (including gap closure) before the next phase is dispatched.
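The suffix scheme can be sketched as a small helper (the function name is illustrative; the real coordinator builds these subjects inline via TaskCreate):

```javascript
// Build the per-phase task-chain subjects for a given gap iteration.
// Iteration 0 is the initial chain (suffix "01"); iteration N uses "0" + (N + 1).
function taskChain(phase, gapIteration = 0) {
  const suffix = `0${gapIteration + 1}`;
  return ["PLAN", "EXEC", "VERIFY"].map(prefix => `${prefix}-${phase}${suffix}`);
}

// Initial chain for phase 2, then the first gap-closure chain.
console.log(taskChain(2));    // PLAN-201, EXEC-201, VERIFY-201
console.log(taskChain(2, 1)); // PLAN-202, EXEC-202, VERIFY-202
```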
### State Machine Coordinates
The coordinator tracks its position using these state variables in `meta.json`:
```
Coordinator ──spawn──→ Planner (blocks until done)
←─return──
──spawn──→ Executor (blocks until done)
←─return──
──spawn──→ Verifier (blocks until done)
←─return──
──decide──→ next phase or gap closure
```
## Parameters
| Parameter | Source | Description |
|-----------|--------|-------------|
| `sessionFolder` | From coordinator | Session artifact directory |
| `roadmap` | From roadmap.md | Parsed phase list |
| `config` | From config.json | Execution mode and gates |
| `resumePhase` | From resume command (optional) | Phase to resume from |
| `resumeStep` | From resume command (optional) | Step within phase to resume from (plan/exec/verify/gap_closure) |
| `resumeGapIteration` | From resume command (optional) | Gap iteration to resume from |
## Execution Steps
### Step 1: Load Session State
```javascript
const roadmap = Read(`${sessionFolder}/roadmap.md`)
const config = JSON.parse(Read(`${sessionFolder}/config.json`))
const state = Read(`${sessionFolder}/state.md`)
const totalPhases = countPhases(roadmap)
// Support resume: use resume coordinates if provided, else parse from state
const currentPhase = resumePhase || parseCurrentPhase(state)
const startStep = resumeStep || 'plan' // plan|exec|verify|gap_closure
const startGapIteration = resumeGapIteration || 0
```
### Step 2: Phase Loop
```javascript
for (let phase = currentPhase; phase <= totalPhases; phase++) {
// --- Phase N execution ---
// 2a. Dispatch task chain (if not already dispatched)
// Read("commands/dispatch.md") → creates PLAN/EXEC/VERIFY tasks
dispatch(phase, sessionFolder)
let phaseComplete = false
let gapIteration = 0
const MAX_GAP_ITERATIONS = 3
while (!phaseComplete && gapIteration <= MAX_GAP_ITERATIONS) {
// 2b. Spawn Planner (Stop-Wait)
const planResult = spawnPlanner(phase, gapIteration, sessionFolder)
// 2c. Gate: plan_check (if configured)
if (config.gates.plan_check && gapIteration === 0) {
const plans = Glob(`${sessionFolder}/phase-${phase}/plan-*.md`)
// Present plan summary to user
AskUserQuestion({
questions: [{
question: `Phase ${phase} plan ready. Proceed with execution?`,
header: "Plan Review",
multiSelect: false,
options: [
{ label: "Proceed", description: "Execute the plan as-is" },
{ label: "Revise", description: "Ask planner to revise" },
{ label: "Skip phase", description: "Skip this phase entirely" }
]
}]
})
// Handle "Revise" → re-spawn planner
// Handle "Skip phase" → break to next phase
}
// 2d. Spawn Executor (Stop-Wait)
const execResult = spawnExecutor(phase, gapIteration, sessionFolder)
// 2e. Spawn Verifier (Stop-Wait)
const verifyResult = spawnVerifier(phase, gapIteration, sessionFolder)
// 2f. Check verification result
const verification = Read(`${sessionFolder}/phase-${phase}/verification.md`)
const gapsFound = parseGapsFound(verification)
if (!gapsFound || gapsFound.length === 0) {
// Phase passed
phaseComplete = true
} else if (gapIteration < MAX_GAP_ITERATIONS) {
// Gap closure: create new task chain for gaps
gapIteration++
triggerGapClosure(phase, gapIteration, gapsFound, sessionFolder)
} else {
// Max iterations reached, report to user
AskUserQuestion({
questions: [{
question: `Phase ${phase} still has ${gapsFound.length} gaps after ${MAX_GAP_ITERATIONS} attempts. How to proceed?`,
header: "Gap Closure Limit",
multiSelect: false,
options: [
{ label: "Continue anyway", description: "Accept current state, move to next phase" },
{ label: "Retry once more", description: "One more gap closure attempt" },
{ label: "Stop", description: "Halt execution for manual intervention" }
]
}]
})
// Handle user choice
phaseComplete = true // or stop based on choice
}
}
// 2g. Phase transition
updateStatePhaseComplete(phase, sessionFolder)
// 2h. Interactive gate at phase boundary
if (config.mode === "interactive" && phase < totalPhases) {
AskUserQuestion({
questions: [{
question: `Phase ${phase} complete. Proceed to phase ${phase + 1}?`,
header: "Phase Transition",
multiSelect: false,
options: [
{ label: "Proceed", description: `Start phase ${phase + 1}` },
{ label: "Review results", description: "Show phase summary before continuing" },
{ label: "Stop", description: "Pause execution here" }
]
}]
})
// Handle "Review results" → display summary, then re-ask
// Handle "Stop" → invoke pause command
if (userChoice === "Stop") {
Read("commands/pause.md")
// Execute pause: save state with currentPhase, currentStep, gapIteration
pauseSession(phase, "transition", gapIteration, sessionFolder)
return // Exit monitor loop
}
}
// If mode === "yolo": auto-advance, no user interaction
session.coordinates = {
current_phase: <number>, // Active phase (1-based)
total_phases: <number>, // Total phases from roadmap
gap_iteration: <number>, // Current gap closure iteration within phase (0 = initial)
step: <string>, // Current step: "plan" | "exec" | "verify" | "gap_closure" | "transition"
status: <string> // "running" | "paused" | "complete"
}
```
### Step 3: Spawn Functions (Stop-Wait Pattern)
## Phase 2: Context Loading
#### Spawn Planner
| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/.msg/meta.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |
| Coordinates | session.coordinates | Yes |
| Config | `<session-folder>/config.json` | Yes |
| State | `<session-folder>/state.md` | Yes |
```javascript
function spawnPlanner(phase, gapIteration, sessionFolder) {
const suffix = gapIteration === 0 ? "01" : `0${gapIteration + 1}`
const gapContext = gapIteration > 0
? `\nGap closure iteration ${gapIteration}. Fix gaps from: ${sessionFolder}/phase-${phase}/verification.md`
: ""
```
```
Load session state:
1. Read <session-folder>/.msg/meta.json -> session
2. Read <session-folder>/config.json -> config
3. TaskList() -> allTasks
4. Extract coordinates from session (current_phase, gap_iteration, step)
5. Extract active_workers[] from session (default: [])
6. Parse $ARGUMENTS to determine trigger event
```
```javascript
// Synchronous call - blocks until planner returns
Task({
subagent_type: "team-worker",
description: `Spawn planner worker for phase ${phase}`,
team_name: "roadmap-dev",
name: "planner",
prompt: `## Role Assignment
role: planner
role_spec: .claude/skills/team-roadmap-dev/role-specs/planner.md
session: ${sessionFolder}
session_id: ${sessionId}
```
## Phase 3: Event Handlers
### Wake-up Source Detection
Parse `$ARGUMENTS` to determine handler:
| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[planner]`, `[executor]`, or `[verifier]` | handleCallback |
| 2 | Contains "check" or "status" | handleCheck |
| 3 | Contains "resume", "continue", or "next" | handleResume |
| 4 | Pipeline detected as complete (all phases done) | handleComplete |
| 5 | None of the above (initial spawn after dispatch) | handleSpawnNext |
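A minimal sketch of this priority dispatch, assuming `$ARGUMENTS` arrives as a plain string and pipeline completion is checked separately (function and parameter names are illustrative):

```javascript
// Map a wake-up message to its handler, following the priority table above.
function detectHandler(args, pipelineComplete = false) {
  const text = String(args || "").toLowerCase();
  if (/\[(planner|executor|verifier)\]/.test(text)) return "handleCallback"; // priority 1
  if (/\b(check|status)\b/.test(text)) return "handleCheck";                 // priority 2
  if (/\b(resume|continue|next)\b/.test(text)) return "handleResume";        // priority 3
  if (pipelineComplete) return "handleComplete";                             // priority 4
  return "handleSpawnNext";                                                  // priority 5
}

console.log(detectHandler("[executor] EXEC-101 done")); // handleCallback
console.log(detectHandler("status please"));            // handleCheck
console.log(detectHandler(""));                         // handleSpawnNext
```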
---
### Handler: handleCallback
Worker completed a task. Determine which step completed via prefix, apply pipeline logic, advance.
```
Receive callback from [<role>]
+- Find matching active worker by role tag
+- Is this a progress update (not final)? (Inner Loop intermediate)
| +- YES -> Update session state -> STOP
+- Task status = completed?
| +- YES -> remove from active_workers -> update session
| | +- Determine completed step from task prefix:
| | |
| | +- PLAN-* completed:
| | | +- Update coordinates.step = "plan_done"
| | | +- Is this initial plan (gap_iteration === 0)?
| | | | +- YES + config.gates.plan_check?
| | | | | +- AskUserQuestion:
| | | | | question: "Phase <N> plan ready. Proceed with execution?"
| | | | | header: "Plan Review"
| | | | | options:
| | | | | - "Proceed": -> handleSpawnNext (spawns EXEC)
| | | | | - "Revise": Create new PLAN task with incremented suffix
| | | | | blockedBy: [] (immediate), -> handleSpawnNext
| | | | | - "Skip phase": Delete all phase tasks
| | | | | -> advanceToNextPhase
| | | | +- NO (gap closure plan) -> handleSpawnNext (spawns EXEC)
| | | +- -> handleSpawnNext
| | |
| | +- EXEC-* completed:
| | | +- Update coordinates.step = "exec_done"
| | | +- -> handleSpawnNext (spawns VERIFY)
| | |
| | +- VERIFY-* completed:
| | +- Update coordinates.step = "verify_done"
| | +- Read verification result from:
| | | <session-folder>/phase-<N>/verification.md
| | +- Parse gaps from verification
| | +- Gaps found?
| | +- NO -> Phase passed
| | | +- -> advanceToNextPhase
| | +- YES + gap_iteration < MAX_GAP_ITERATIONS?
| | | +- -> triggerGapClosure
| | +- YES + gap_iteration >= MAX_GAP_ITERATIONS?
| | +- AskUserQuestion:
| | question: "Phase <N> still has <count> gaps after <max> attempts."
| | header: "Gap Closure Limit"
| | options:
| | - "Continue anyway": Accept, -> advanceToNextPhase
| | - "Retry once more": Increment max, -> triggerGapClosure
| | - "Stop": -> pauseSession
| |
| +- NO -> progress message -> STOP
+- No matching worker found
+- Scan all active workers for completed tasks
+- Found completed -> process each (same logic above) -> handleSpawnNext
+- None completed -> STOP
```
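The prefix-to-step bookkeeping in the tree above reduces to a small lookup (a sketch; `stepAfter` is an illustrative name, not part of the command):

```javascript
// Map a completed task subject to the coordinates.step value recorded after it.
function stepAfter(subject) {
  if (subject.startsWith("PLAN-")) return "plan_done";
  if (subject.startsWith("EXEC-")) return "exec_done";
  if (subject.startsWith("VERIFY-")) return "verify_done";
  return null; // unknown prefix: not a pipeline task
}

console.log(stepAfter("EXEC-201")); // exec_done
```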
**Sub-procedure: advanceToNextPhase**
```
advanceToNextPhase:
+- Update state.md: mark current phase completed
+- current_phase < total_phases?
| +- YES:
| | +- config.mode === "interactive"?
| | | +- AskUserQuestion:
| | | question: "Phase <N> complete. Proceed to phase <N+1>?"
| | | header: "Phase Transition"
| | | options:
| | | - "Proceed": Dispatch next phase tasks, -> handleSpawnNext
| | | - "Review results": Output phase summary, re-ask
| | | - "Stop": -> pauseSession
| | +- Auto mode: Dispatch next phase tasks directly
| | +- Update coordinates:
| | current_phase++, gap_iteration=0, step="plan"
| | +- Dispatch new phase tasks (PLAN/EXEC/VERIFY with blockedBy)
| | +- -> handleSpawnNext
| +- NO -> All phases done -> handleComplete
```
**Sub-procedure: triggerGapClosure**
```
triggerGapClosure:
+- Increment coordinates.gap_iteration
+- suffix = "0" + (gap_iteration + 1)
+- phase = coordinates.current_phase
+- Read gaps from verification.md
+- Log: team_msg gap_closure
+- Create gap closure task chain:
|
| TaskCreate: PLAN-{phase}{suffix}
| subject: "PLAN-{phase}{suffix}: Gap closure for phase {phase} (iteration {gap_iteration})"
| description: includes gap list, references to previous verification
| blockedBy: [] (immediate start)
|
| TaskCreate: EXEC-{phase}{suffix}
| subject: "EXEC-{phase}{suffix}: Execute gap fixes for phase {phase}"
| blockedBy: [PLAN-{phase}{suffix}]
|
| TaskCreate: VERIFY-{phase}{suffix}
| subject: "VERIFY-{phase}{suffix}: Verify gap closure for phase {phase}"
| blockedBy: [EXEC-{phase}{suffix}]
|
+- Set owners: planner, executor, verifier
+- Update coordinates.step = "gap_closure"
+- -> handleSpawnNext (picks up the new PLAN task)
```
**Sub-procedure: pauseSession**
```
pauseSession:
+- Save coordinates to meta.json (phase, step, gap_iteration)
+- Update coordinates.status = "paused"
+- Update state.md with pause marker
+- team_msg log -> session_paused
+- Output: "Session paused at phase <N>, step <step>. Resume with 'resume'."
+- STOP
```
---
### Handler: handleSpawnNext
Find all ready tasks, spawn team-worker agent in background, update session, STOP.
```
Collect task states from TaskList()
+- completedSubjects: status = completed
+- inProgressSubjects: status = in_progress
+- readySubjects: status = pending
AND (no blockedBy OR all blockedBy in completedSubjects)
Ready tasks found?
+- NONE + work in progress -> report waiting -> STOP
+- NONE + nothing in progress:
| +- More phases to dispatch? -> advanceToNextPhase
| +- No more phases -> handleComplete
+- HAS ready tasks -> take first ready task:
+- Is task owner an Inner Loop role AND that role already has active_worker?
| +- YES -> SKIP spawn (existing worker picks it up via inner loop)
| +- NO -> normal spawn below
+- Determine role from prefix:
| PLAN-* -> planner
| EXEC-* -> executor
| VERIFY-* -> verifier
+- TaskUpdate -> in_progress
+- team_msg log -> task_unblocked (team_session_id=<session-id>)
+- Spawn team-worker (see spawn call below)
+- Add to session.active_workers
+- Update session file
+- Output: "[coordinator] Spawned <role> for <subject>"
+- STOP
```
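The ready-task scan can be sketched as follows, assuming each task carries `subject`, `status`, and `blockedBy` fields as described (the exact TaskList() shape may differ):

```javascript
// Derive ready tasks: pending, with every blockedBy subject already completed.
function readyTasks(tasks) {
  const completed = new Set(
    tasks.filter(t => t.status === "completed").map(t => t.subject)
  );
  return tasks.filter(t =>
    t.status === "pending" &&
    (t.blockedBy || []).every(dep => completed.has(dep))
  );
}

const tasks = [
  { subject: "PLAN-101", status: "completed" },
  { subject: "EXEC-101", status: "pending", blockedBy: ["PLAN-101"] },
  { subject: "VERIFY-101", status: "pending", blockedBy: ["EXEC-101"] },
];
console.log(readyTasks(tasks).map(t => t.subject)); // only EXEC-101 is unblocked
```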
**Spawn worker tool call** (one per ready task):
```
Task({
subagent_type: "team-worker",
description: "Spawn <role> worker for <subject>",
team_name: "roadmap-dev",
name: "<role>",
run_in_background: true,
prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-roadmap-dev/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: roadmap-dev
requirement: Phase ${phase} planning${gapContext}
inner_loop: false
requirement: <task-description>
inner_loop: true
## Current Task
- Task: PLAN-${phase}${suffix}
- Phase: ${phase}
- Task ID: <task-id>
- Task: <subject>
- Phase: <current_phase>
- Gap Iteration: <gap_iteration>
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`,
run_in_background: false // CRITICAL: Stop-Wait, blocks until done
})
}
```
#### Spawn Executor
```javascript
function spawnExecutor(phase, gapIteration, sessionFolder) {
const suffix = gapIteration === 0 ? "01" : `0${gapIteration + 1}`
Task({
subagent_type: "team-worker",
description: `Spawn executor worker for phase ${phase}`,
team_name: "roadmap-dev",
name: "executor",
prompt: `## Role Assignment
role: executor
role_spec: .claude/skills/team-roadmap-dev/role-specs/executor.md
session: ${sessionFolder}
session_id: ${sessionId}
team_name: roadmap-dev
requirement: Phase ${phase} execution
inner_loop: false
## Current Task
- Task: EXEC-${phase}${suffix}
- Phase: ${phase}
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`,
run_in_background: false // CRITICAL: Stop-Wait
})
}
```
#### Spawn Verifier
```javascript
function spawnVerifier(phase, gapIteration, sessionFolder) {
const suffix = gapIteration === 0 ? "01" : `0${gapIteration + 1}`
Task({
subagent_type: "team-worker",
description: `Spawn verifier worker for phase ${phase}`,
team_name: "roadmap-dev",
name: "verifier",
prompt: `## Role Assignment
role: verifier
role_spec: .claude/skills/team-roadmap-dev/role-specs/verifier.md
session: ${sessionFolder}
session_id: ${sessionId}
team_name: roadmap-dev
requirement: Phase ${phase} verification
inner_loop: false
## Current Task
- Task: VERIFY-${phase}${suffix}
- Phase: ${phase}
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`,
run_in_background: false // CRITICAL: Stop-Wait
})
}
```
### Step 4: Gap Closure
```javascript
function triggerGapClosure(phase, iteration, gaps, sessionFolder) {
const suffix = `0${iteration + 1}`
// Log gap closure initiation
mcp__ccw-tools__team_msg({
  operation: "log",
  team: sessionId, // MUST be the session ID (e.g., RD-xxx-date), NOT the team name
  from: "coordinator", to: "planner",
  type: "gap_closure",
  ref: `${sessionFolder}/phase-${phase}/verification.md`
})
// Create new task chain for gap closure
// PLAN-{phase}{suffix}: re-plan focusing on gaps only
TaskCreate({
subject: `PLAN-${phase}${suffix}: Gap closure for phase ${phase} (iteration ${iteration})`,
description: `[coordinator] Gap closure re-planning for phase ${phase}.
## Session
- Folder: ${sessionFolder}
- Phase: ${phase}
- Gap Iteration: ${iteration}
## Gaps to Address
${gaps.map(g => `- ${g}`).join('\n')}
## Reference
- Original verification: ${sessionFolder}/phase-${phase}/verification.md
- Previous plans: ${sessionFolder}/phase-${phase}/plan-*.md
## Instructions
1. Focus ONLY on the listed gaps -- do not re-plan completed work
2. Create ${sessionFolder}/phase-${phase}/plan-${suffix}.md for gap fixes
3. TaskUpdate completed when gap plan is written`,
activeForm: `Re-planning phase ${phase} gaps (iteration ${iteration})`
})
// EXEC and VERIFY tasks follow same pattern with blockedBy
// (same as dispatch.md Step 4 and Step 5, with gap suffix)
}
```
### Step 5: State Updates
```javascript
function updateStatePhaseComplete(phase, sessionFolder) {
const state = Read(`${sessionFolder}/state.md`)
// Update current phase status
Edit(`${sessionFolder}/state.md`, {
old_string: `- Phase: ${phase}\n- Status: in_progress`,
new_string: `- Phase: ${phase}\n- Status: completed\n- Completed: ${new Date().toISOString().slice(0, 19)}`
})
// If more phases remain, set next phase as ready
const nextPhase = phase + 1
if (nextPhase <= totalPhases) {
// Append next phase readiness
Edit(`${sessionFolder}/state.md`, {
old_string: `- Phase: ${phase}\n- Status: completed`,
new_string: `- Phase: ${phase}\n- Status: completed\n\n- Phase: ${nextPhase}\n- Status: ready_to_dispatch`
})
}
}
```
### Step 6: Completion
```javascript
// All phases done -- return control to coordinator Phase 5 (Report + Persist)
mcp__ccw-tools__team_msg({
  operation: "log",
  team: sessionId, // MUST be the session ID (e.g., RD-xxx-date), NOT the team name
  from: "coordinator", to: "all",
  type: "project_complete",
  ref: `${sessionFolder}/roadmap.md`
})
```
## Gap Closure Loop Diagram
---
### Handler: handleCheck
Read-only status report. No pipeline advancement.
**Output format**:
```
VERIFY-{N}01 → gaps_found?
│ NO → Phase complete → next phase
│ YES ↓
PLAN-{N}02 (gaps only) → EXEC-{N}02 → VERIFY-{N}02
│ gaps_found?
│ NO → Phase complete
│ YES ↓
PLAN-{N}03 → EXEC-{N}03 → VERIFY-{N}03
│ gaps_found?
│ NO → Phase complete
│ YES → Max iterations (3) → ask user
[coordinator] Roadmap Pipeline Status
[coordinator] Phase: <current>/<total> | Gap Iteration: <N>/<max>
[coordinator] Progress: <completed>/<total tasks> (<percent>%)
[coordinator] Current Phase <N> Graph:
PLAN-{N}01: <status-icon> <summary>
EXEC-{N}01: <status-icon> <summary>
VERIFY-{N}01: <status-icon> <summary>
[PLAN-{N}02: <status-icon> (gap closure #1)]
[EXEC-{N}02: <status-icon>]
[VERIFY-{N}02: <status-icon>]
done = completed  >>> = running  o = pending  x = deleted  . = not created
[coordinator] Phase Summary:
Phase 1: completed
Phase 2: in_progress (step: exec)
Phase 3: not started
[coordinator] Active Workers:
> <subject> (<role>) - running [inner-loop: N/M tasks done]
[coordinator] Ready to spawn: <subjects>
[coordinator] Coordinates: phase=<N> step=<step> gap=<iteration>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```
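The progress line in this report can be derived from the task counts with a one-line formatter (illustrative only):

```javascript
// Format the "[coordinator] Progress" line from completed/total task counts.
function progressLine(completed, total) {
  const pct = total === 0 ? 0 : Math.round((completed / total) * 100);
  return `[coordinator] Progress: ${completed}/${total} (${pct}%)`;
}

console.log(progressLine(3, 9)); // [coordinator] Progress: 3/9 (33%)
```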
Then STOP.
## Forbidden Patterns
| Pattern | Why Forbidden | Alternative |
|---------|---------------|-------------|
| `setTimeout` / `sleep` | Models have no time concept | Synchronous Task() return |
| `setInterval` / polling loop | Wastes tokens, unreliable | Stop-Wait spawn pattern |
| `TaskOutput` with sleep polling | Indirect, fragile | `run_in_background: false` |
| `while (!done) { check() }` | Busy wait, no progress | Sequential synchronous calls |
---
### Handler: handleResume
Check active worker completion, process results, advance pipeline. Also handles resume from paused state.
```
Check coordinates.status:
+- "paused" -> Restore coordinates, resume from saved position
| Reset coordinates.status = "running"
| -> handleSpawnNext (picks up where it left off)
+- "running" -> Normal resume:
Load active_workers from session
+- No active workers -> handleSpawnNext
+- Has active workers -> check each:
+- status = completed -> mark done, remove from active_workers, log
+- status = in_progress -> still running, log
+- other status -> worker failure -> reset to pending
After processing:
+- Some completed -> handleSpawnNext
+- All still running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```
---
### Handler: handleComplete
All phases done. Generate final project summary and finalize session.
```
All phases completed (no pending, no in_progress across all phases)
+- Generate project-level summary:
| - Roadmap overview (phases completed)
| - Per-phase results:
| - Gap closure iterations used
| - Verification status
| - Key deliverables
| - Overall stats (tasks completed, phases, total gap iterations)
|
+- Update session:
| coordinates.status = "complete"
| session.completed_at = <timestamp>
| Write meta.json
|
+- Update state.md: mark all phases completed
+- team_msg log -> project_complete
+- Output summary to user
+- STOP
```
---
### Worker Failure Handling
When a worker has unexpected status (not completed, not in_progress):
1. Reset task -> pending via TaskUpdate
2. Remove from active_workers
3. Log via team_msg (type: error)
4. Report to user: task reset, will retry on next resume
## Phase 4: State Persistence
After every handler action, before STOP:
| Check | Action |
|-------|--------|
| Coordinates updated | current_phase, step, gap_iteration reflect actual state |
| Session state consistent | active_workers matches TaskList in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| Meta.json updated | Write updated session state and coordinates |
| State.md updated | Phase progress reflects actual completion |
| Completion detection | All phases done + no pending + no in_progress -> handleComplete |
```
Persist:
1. Update coordinates in meta.json
2. Reconcile active_workers with actual TaskList states
3. Remove entries for completed/deleted tasks
4. Write updated meta.json
5. Update state.md if phase status changed
6. Verify consistency
7. STOP (wait for next callback)
```
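Step 2 of the persistence checklist (reconciling active_workers against actual task states) might look like this sketch; the object shapes are assumptions:

```javascript
// Keep only active_workers entries whose task is still in_progress.
function reconcileWorkers(activeWorkers, tasks) {
  const inProgress = new Set(
    tasks.filter(t => t.status === "in_progress").map(t => t.subject)
  );
  return activeWorkers.filter(w => inProgress.has(w.subject));
}

const workers = [
  { role: "planner", subject: "PLAN-101" },
  { role: "executor", subject: "EXEC-101" },
];
const tasks = [
  { subject: "PLAN-101", status: "completed" },
  { subject: "EXEC-101", status: "in_progress" },
];
console.log(reconcileWorkers(workers, tasks)); // keeps only the executor entry
```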
## State Machine Diagram
```
[dispatch] -> PLAN-{N}01 spawned
|
[planner callback]
|
plan_check gate? --YES--> AskUser --> "Revise" --> new PLAN task --> [spawn]
| "Skip" --> advanceToNextPhase
| "Proceed" / no gate
v
EXEC-{N}01 spawned
|
[executor callback]
|
v
VERIFY-{N}01 spawned
|
[verifier callback]
|
gaps found? --NO--> advanceToNextPhase
|
YES + iteration < MAX
|
v
triggerGapClosure:
PLAN-{N}02 -> EXEC-{N}02 -> VERIFY-{N}02
|
[repeat verify check]
|
gaps found? --NO--> advanceToNextPhase
|
YES + iteration >= MAX
|
v
AskUser: "Continue anyway" / "Retry" / "Stop"
advanceToNextPhase:
+- phase < total? --YES--> interactive gate? --> dispatch phase+1 --> [spawn PLAN]
+- phase = total? --> handleComplete
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Worker Task() throws error | Log error, retry once. If still fails, report to user |
| Session file not found | Error, suggest re-initialization |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running, has pending) | Check blockedBy chains, report to user |
| Verification file missing | Treat as gap -- verifier may have crashed, re-spawn |
| Phase dispatch fails | Check roadmap integrity, report to user |
| Max gap iterations exceeded | Present gap details to user, ask: continue / retry / stop |
| User chooses "Stop" at any gate | Invoke pause command: save coordinates, exit cleanly |


@@ -1,307 +0,0 @@
# Command: implement
Wave-based task execution using code-developer subagent. Reads IMPL-*.json task files, computes execution waves from dependency graph, and executes sequentially by wave with parallel tasks within each wave.
## Purpose
Read IMPL-*.json task files for the current phase, compute wave groups from depends_on graph, and execute each task by delegating to a code-developer subagent. Produce summary-NN.md per task with structured YAML frontmatter for verifier and cross-task context.
## When to Use
- Phase 3 of executor execution (after loading tasks, before self-validation)
- Called once per EXEC-* task
## Strategy
Compute waves from dependency graph (topological sort). Sequential waves, parallel tasks within each wave. Each task is delegated to a code-developer subagent with the full task JSON plus prior summary context. After each task completes, a summary is written. After each wave completes, wave progress is reported.
## Parameters
| Parameter | Source | Description |
|-----------|--------|-------------|
| `sessionFolder` | From EXEC-* task description | Session artifact directory |
| `phaseNumber` | From EXEC-* task description | Phase number (1-based) |
| `tasks` | From executor Phase 2 | Parsed task JSON objects |
| `waves` | From executor Phase 2 | Wave-grouped task map |
| `waveNumbers` | From executor Phase 2 | Sorted wave number array |
| `priorSummaries` | From executor Phase 2 | Summaries from earlier phases |
## Execution Steps
### Step 1: Compute Waves from Dependency Graph
```javascript
// Tasks loaded in executor Phase 2 from .task/IMPL-*.json
// Compute wave assignment from depends_on graph
function computeWaves(tasks) {
const waveMap = {} // taskId → waveNumber
const assigned = new Set()
let currentWave = 1
while (assigned.size < tasks.length) {
const ready = tasks.filter(t =>
!assigned.has(t.id) &&
(t.depends_on || []).every(d => assigned.has(d))
)
if (ready.length === 0 && assigned.size < tasks.length) {
// Cycle detected -- force-assign the first unassigned task to break it
const unassigned = tasks.find(t => !assigned.has(t.id))
ready.push(unassigned)
}
for (const task of ready) {
waveMap[task.id] = currentWave
assigned.add(task.id)
}
currentWave++
}
// Group by wave
const waves = {}
for (const task of tasks) {
const w = waveMap[task.id]
if (!waves[w]) waves[w] = []
waves[w].push(task)
}
return {
waves,
waveNumbers: Object.keys(waves).map(Number).sort((a, b) => a - b),
totalWaves: currentWave - 1
}
}
const { waves, waveNumbers, totalWaves } = computeWaves(tasks)
const totalTasks = tasks.length
let completedTasks = 0
```
### Step 2: Sequential Wave Execution
```javascript
for (const waveNum of waveNumbers) {
const waveTasks = waves[waveNum]
for (const task of waveTasks) {
const startTime = Date.now()
// 2a. Build context from prior summaries
const contextSummaries = []
// From earlier phases
for (const ps of priorSummaries) {
contextSummaries.push(ps.content)
}
// From earlier waves in this phase
for (const earlierWave of waveNumbers.filter(w => w < waveNum)) {
for (const earlierTask of waves[earlierWave]) {
try {
const summaryFile = `${sessionFolder}/phase-${phaseNumber}/summary-${earlierTask.id}.md`
contextSummaries.push(Read(summaryFile))
} catch {}
}
}
const contextSection = contextSummaries.length > 0
? `## Prior Context\n\n${contextSummaries.join('\n\n---\n\n')}`
: "## Prior Context\n\nNone (first task in first wave)."
// 2b. Build implementation prompt from task JSON
const filesSection = (task.files || [])
.map(f => `- \`${f.path}\` (${f.action}): ${f.change}`)
.join('\n')
const stepsSection = (task.implementation || [])
.map((step, i) => typeof step === 'string' ? `${i + 1}. ${step}` : `${i + 1}. ${step.step}: ${step.description}`)
.join('\n')
const convergenceSection = task.convergence
? `## Success Criteria\n${(task.convergence.criteria || []).map(c => `- ${c}`).join('\n')}\n\n**Verification**: ${task.convergence.verification || 'N/A'}`
: ''
// 2c. Delegate to code-developer subagent
const implResult = Task({
subagent_type: "code-developer",
run_in_background: false,
prompt: `Implement the following task. Write production-quality code following existing patterns.
## Task: ${task.id} - ${task.title}
${task.description}
## Files
${filesSection}
## Implementation Steps
${stepsSection}
${convergenceSection}
${contextSection}
## Implementation Rules
- Follow existing code patterns and conventions in the project
- Write clean, minimal code that satisfies the task requirements
- Create all files listed with action "create"
- Modify files listed with action "modify" as described
- Handle errors appropriately
- Do NOT add unnecessary features beyond what the task specifies
- Do NOT modify files outside the task scope unless absolutely necessary
## Output
After implementation, report:
1. Files created or modified (with brief description of changes)
2. Key decisions made during implementation
3. Any deviations from the task (and why)
4. Capabilities provided (exports, APIs, components)
5. Technologies/patterns used`
})
const duration = Math.round((Date.now() - startTime) / 60000)
// 2d. Write summary
const summaryPath = `${sessionFolder}/phase-${phaseNumber}/summary-${task.id}.md`
const affectedPaths = (task.files || []).map(f => f.path)
Write(summaryPath, `---
phase: ${phaseNumber}
task: "${task.id}"
title: "${task.title}"
requires: [${(task.depends_on || []).map(d => `"${d}"`).join(', ')}]
provides: ["${task.id}"]
affects:
${affectedPaths.map(p => ` - "${p}"`).join('\n')}
tech-stack: []
key-files:
${affectedPaths.map(p => ` - "${p}"`).join('\n')}
key-decisions: []
patterns-established: []
convergence-met: pending
duration: ${duration}m
completed: ${new Date().toISOString().slice(0, 19)}
---
# Summary: ${task.id} - ${task.title}
## Implementation Result
${implResult || "Implementation delegated to code-developer subagent."}
## Files Affected
${affectedPaths.map(p => `- \`${p}\``).join('\n')}
## Convergence Criteria
${(task.convergence?.criteria || []).map(c => `- [ ] ${c}`).join('\n')}
`)
completedTasks++
}
// 2e. Report wave progress
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId,
from: "executor",
type: "exec_progress",
ref: `${sessionFolder}/phase-${phaseNumber}/`
})
}
```
### Step 3: Report Execution Complete
```javascript
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId,
from: "executor",
type: "exec_complete",
ref: `${sessionFolder}/phase-${phaseNumber}/`
})
```
## Summary File Format
Each summary-{IMPL-ID}.md uses YAML frontmatter:
```yaml
---
phase: N
task: "IMPL-N"
title: "Task title"
requires: ["IMPL-N"]
provides: ["IMPL-N"]
affects: [paths]
tech-stack: [technologies]
key-files: [paths]
key-decisions: [decisions]
patterns-established: [patterns]
convergence-met: pending|pass|fail
duration: Xm
completed: timestamp
---
```
### Frontmatter Fields
| Field | Type | Description |
|-------|------|-------------|
| `phase` | number | Phase this summary belongs to |
| `task` | string | Task ID that was executed |
| `title` | string | Task title |
| `requires` | string[] | Dependency task IDs consumed |
| `provides` | string[] | Task ID provided to downstream |
| `affects` | string[] | File paths created or modified |
| `tech-stack` | string[] | Technologies/frameworks used |
| `key-files` | string[] | Primary files (subset of affects) |
| `key-decisions` | string[] | Decisions made during implementation |
| `patterns-established` | string[] | Patterns introduced |
| `convergence-met` | string | Whether convergence criteria passed |
| `duration` | string | Execution time |
| `completed` | string | ISO timestamp |
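A minimal parser for this frontmatter could be sketched as follows (illustrative; it handles only single-line `key: value` and inline `key: [a, b]` entries, not the indented `affects:`/`key-files:` block lists):

```javascript
// Extracts frontmatter fields from a summary-{IMPL-ID}.md file.
// Simplified sketch: indented block-list lines are skipped.
function parseSummaryFrontmatter(text) {
  const match = text.match(/^---\n([\s\S]*?)\n---/)
  if (!match) return null
  const fields = {}
  for (const line of match[1].split('\n')) {
    const m = line.match(/^([\w-]+):\s*(.*)$/)
    if (!m) continue // skips indented block-list entries
    let value = m[2].trim()
    if (value.startsWith('[') && value.endsWith(']')) {
      // Inline list: strip brackets, split, unquote
      value = value.slice(1, -1).split(',')
        .map(s => s.trim().replace(/^"|"$/g, ''))
        .filter(Boolean)
    } else {
      value = value.replace(/^"|"$/g, '')
    }
    fields[m[1]] = value
  }
  return fields
}
```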
## Deviation Rules
| Deviation | Action | Report |
|-----------|--------|--------|
| **Bug found** in existing code | Auto-fix, continue | Log in summary key-decisions |
| **Missing critical** dependency | Add to scope, implement | Log in summary key-decisions |
| **Blocking dependency** (unresolvable) | Stop task execution | Report error to coordinator |
| **Architectural concern** | Do NOT auto-fix | Report error to coordinator, await guidance |
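The policy above can be expressed as a small dispatch table (the kind labels are hypothetical, for illustration only):

```javascript
// Maps a detected deviation kind to the action/report pair from the table.
function resolveDeviation(kind) {
  const policy = {
    'bug-found': { action: 'auto-fix and continue', report: 'summary key-decisions' },
    'missing-critical-dependency': { action: 'add to scope and implement', report: 'summary key-decisions' },
    'blocking-dependency': { action: 'stop task execution', report: 'error to coordinator' },
    'architectural-concern': { action: 'halt, await guidance', report: 'error to coordinator' }
  }
  return policy[kind] || { action: 'continue', report: 'none' }
}
```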
## Wave Execution Example
```
Phase 2, 4 tasks, 3 waves (computed from depends_on):
Wave 1: [IMPL-201 (types)] — no dependencies
-> delegate IMPL-201 to code-developer
-> write summary-IMPL-201.md
-> report: Wave 1/3 complete (1/4 tasks)
Wave 2: [IMPL-202 (API), IMPL-203 (UI)] — depend on IMPL-201
-> delegate IMPL-202 (loads summary-IMPL-201 as context)
-> write summary-IMPL-202.md
-> delegate IMPL-203 (loads summary-IMPL-201 as context)
-> write summary-IMPL-203.md
-> report: Wave 2/3 complete (3/4 tasks)
Wave 3: [IMPL-204 (tests)] — depends on IMPL-202, IMPL-203
-> delegate IMPL-204 (loads summaries 201-203 as context)
-> write summary-IMPL-204.md
-> report: Wave 3/3 complete (4/4 tasks)
-> report: exec_complete
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| code-developer subagent fails | Retry once. If still fails, write error summary, continue with next task |
| File write conflict | Last write wins. Log in summary. Verifier will validate |
| Task references non-existent file | Check if dependency task creates it. If yes, load summary. If no, log error |
| All tasks in a wave fail | Report wave failure to coordinator, attempt next wave |
| Summary write fails | Retry with Bash fallback. Critical — verifier needs summaries |

# Executor Role
Code implementation per phase. Reads IMPL-*.json task files from the phase's .task/ directory, computes execution waves from the dependency graph, and executes sequentially by wave with parallel tasks within each wave. Each task is delegated to a code-developer subagent. Produces summary-{IMPL-ID}.md files for verifier consumption.
## Identity
- **Name**: `executor` | **Tag**: `[executor]`
- **Task Prefix**: `EXEC-*`
- **Responsibility**: Code generation
## Boundaries
### MUST
- All outputs must carry `[executor]` prefix
- Only process `EXEC-*` prefixed tasks
- Only communicate with coordinator (SendMessage)
- Delegate implementation to commands/implement.md
- Execute tasks in dependency order (sequential waves, parallel within wave)
- Write summary-{IMPL-ID}.md per task after execution
- Report wave progress to coordinator
- Work strictly within the code-generation responsibility scope
### MUST NOT
- Execute work outside this role's responsibility scope
- Create plans or modify IMPL-*.json task files
- Verify implementation against must_haves (that is verifier's job)
- Create tasks for other roles (TaskCreate)
- Interact with user (AskUserQuestion)
- Process PLAN-* or VERIFY-* tasks
- Skip loading prior summaries for cross-plan context
- Communicate directly with other worker roles (must go through coordinator)
- Omit `[executor]` identifier in any output
---
## Toolbox
### Available Commands
| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `implement` | [commands/implement.md](commands/implement.md) | Phase 3 | Wave-based task execution via code-developer subagent |
### Tool Capabilities
| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `code-developer` | Subagent | executor | Code implementation per task |
| `Read/Write` | File operations | executor | Task JSON and summary management |
| `Glob` | Search | executor | Find task files and summaries |
| `Bash` | Shell | executor | Syntax validation, lint checks |
---
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `exec_complete` | executor -> coordinator | All tasks executed | Implementation done, summaries written |
| `exec_progress` | executor -> coordinator | Wave completed | Wave N of M done |
| `error` | executor -> coordinator | Failure | Implementation failed |
## Message Bus
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
```
mcp__ccw-tools__team_msg({
operation: "log",
session_id: <session-id>,
from: "executor",
type: <message-type>,
ref: <artifact-path>
})
```
**CLI fallback** (when MCP unavailable):
```
Bash("ccw team log --session-id <session-id> --from executor --type <type> --ref <artifact-path> --json")
```
---
## Execution (5-Phase)
### Phase 1: Task Discovery
> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery
Standard task discovery flow: TaskList -> filter by prefix `EXEC-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
**Resume Artifact Check**: Check whether this task's output artifact already exists:
- All summaries exist for phase tasks -> skip to Phase 5
- Artifact incomplete or missing -> normal Phase 2-4 execution
### Phase 2: Load Tasks
**Objective**: Load task JSONs and compute execution waves.
**Loading steps**:
| Input | Source | Required |
|-------|--------|----------|
| Task JSONs | <session-folder>/phase-{N}/.task/IMPL-*.json | Yes |
| Prior summaries | <session-folder>/phase-{1..N-1}/summary-*.md | No |
| Wisdom | <session-folder>/wisdom/ | No |
1. **Find task files**:
- Glob `{sessionFolder}/phase-{phaseNumber}/.task/IMPL-*.json`
- If no files found -> error to coordinator
2. **Parse all task JSONs**:
- Read each task file
- Extract: id, description, depends_on, files, convergence
3. **Compute waves from dependency graph**:
| Step | Action |
|------|--------|
| 1 | Start with wave=1, assigned=set(), waveMap={} |
| 2 | Find tasks with all dependencies in assigned |
| 3 | If none found but tasks remain -> force-assign first unassigned |
| 4 | Assign ready tasks to current wave, add to assigned |
| 5 | Increment wave, repeat until all tasks assigned |
| 6 | Group tasks by wave number |
4. **Load prior summaries for cross-task context**:
- For each prior phase, read summary files
- Store for reference during implementation
### Phase 3: Implement (via command)
**Objective**: Execute wave-based implementation.
Delegate to `commands/implement.md`:
| Step | Action |
|------|--------|
| 1 | For each wave (sequential): |
| 2 | For each task in wave: delegate to code-developer subagent |
| 3 | Write summary-{IMPL-ID}.md per task |
| 4 | Report wave progress |
| 5 | Continue to next wave |
**Implementation strategy selection**:
| Task Count | Complexity | Strategy |
|------------|------------|----------|
| <= 2 tasks | Low | Direct: inline Edit/Write |
| 3-5 tasks | Medium | Single agent: one code-developer for all |
| > 5 tasks | High | Batch agent: group by module, one agent per batch |
**Produces**: `{sessionFolder}/phase-{N}/summary-IMPL-*.md`
**Command**: [commands/implement.md](commands/implement.md)
### Phase 4: Self-Validation
**Objective**: Basic validation after implementation (NOT full verification).
**Validation checks**:
| Check | Method | Pass Criteria |
|-------|--------|---------------|
| File existence | `test -f <path>` | All affected files exist |
| TypeScript syntax | `npx tsc --noEmit` | No TS errors |
| Lint | `npm run lint` | No critical errors |
**Validation steps**:
1. **Find summary files**: Glob `{sessionFolder}/phase-{phaseNumber}/summary-*.md`
2. **For each summary**:
- Parse frontmatter for affected files
- Check each file exists
- Run syntax check for TypeScript files
- Log errors via team_msg
3. **Run lint once for all changes** (best-effort)
### Phase 5: Report to Coordinator
> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report
Standard report flow: team_msg log -> SendMessage with `[executor]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.
**Report message**:
```
SendMessage({
message: "[executor] Phase <N> execution complete.
- Tasks executed: <count>
- Waves: <wave-count>
- Summaries: <file-list>
Ready for verification."
})
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No EXEC-* tasks available | Idle, wait for coordinator assignment |
| Context/Plan file not found | Notify coordinator, request location |
| Command file not found | Fall back to inline execution |
| No task JSON files found | Error to coordinator -- planner may have failed |
| code-developer subagent fails | Retry once. If still fails, log error in summary, continue with next task |
| Syntax errors after implementation | Log in summary, continue -- verifier will catch remaining issues |
| Missing dependency from earlier wave | Error to coordinator -- dependency graph may be incorrect |
| File conflict between parallel tasks | Log warning, last write wins -- verifier will validate correctness |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
| Unexpected error | Log error via team_msg, report to coordinator |

# Command: create-plans
Generate execution plans via action-planning-agent. Produces IMPL_PLAN.md, .task/IMPL-*.json, and TODO_LIST.md — the same artifact format as workflow-plan skill.
## Purpose
Transform phase context into structured task JSONs and implementation plan. Delegates to action-planning-agent for document generation. Produces artifacts compatible with workflow-plan's output format, enabling reuse of executor and verifier logic.
## When to Use
- Phase 3 of planner execution (after research, before self-validation)
- Called once per PLAN-* task
## Strategy
Delegate to action-planning-agent with phase context (context.md + roadmap phase section). The agent produces task JSONs with convergence criteria (replacing the old must_haves concept), dependency graph (replacing wave numbering), and implementation steps.
## Parameters
| Parameter | Source | Description |
|-----------|--------|-------------|
| `sessionFolder` | From PLAN-* task description | Session artifact directory |
| `phaseNumber` | From PLAN-* task description | Phase number (1-based) |
## Output Artifact Mapping (vs old plan-NN.md)
| Old (plan-NN.md) | New (IMPL-*.json) | Notes |
|-------------------|--------------------|-------|
| `plan: NN` | `id: "IMPL-N"` | Task identifier |
| `wave: N` | `depends_on: [...]` | Dependency graph replaces explicit waves |
| `files_modified: [...]` | `files: [{path, action, change}]` | Structured file list |
| `requirements: [REQ-IDs]` | `description` + `scope` | Requirements embedded in description |
| `must_haves.truths` | `convergence.criteria` | Observable behaviors → measurable criteria |
| `must_haves.artifacts` | `files` + `convergence.verification` | File checks in verification command |
| `must_haves.key_links` | `convergence.verification` | Import wiring in verification command |
| Plan body (implementation steps) | `implementation: [...]` | Step-by-step actions |
## Execution Steps
### Step 1: Load Phase Context
```javascript
const context = Read(`${sessionFolder}/phase-${phaseNumber}/context.md`)
const roadmap = Read(`${sessionFolder}/roadmap.md`)
const config = JSON.parse(Read(`${sessionFolder}/config.json`))
// Extract phase section from roadmap
const phaseGoal = extractPhaseGoal(roadmap, phaseNumber)
const requirements = extractRequirements(roadmap, phaseNumber)
const successCriteria = extractSuccessCriteria(roadmap, phaseNumber)
// Check for gap closure context
const isGapClosure = context.includes("Gap Closure Context")
// Load prior phase summaries for cross-phase context
const priorSummaries = []
for (let p = 1; p < phaseNumber; p++) {
try {
const summaryFiles = Glob(`${sessionFolder}/phase-${p}/summary-*.md`)
for (const sf of summaryFiles) {
priorSummaries.push(Read(sf))
}
} catch {}
}
```
### Step 2: Prepare Output Directories
```javascript
Bash(`mkdir -p "${sessionFolder}/phase-${phaseNumber}/.task"`)
```
### Step 3: Delegate to action-planning-agent
```javascript
const taskDir = `${sessionFolder}/phase-${phaseNumber}/.task`
const implPlanPath = `${sessionFolder}/phase-${phaseNumber}/IMPL_PLAN.md`
const todoListPath = `${sessionFolder}/phase-${phaseNumber}/TODO_LIST.md`
Task({
subagent_type: "action-planning-agent",
run_in_background: false,
description: `Generate phase ${phaseNumber} planning documents`,
prompt: `
## TASK OBJECTIVE
Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for roadmap-dev session phase ${phaseNumber}.
IMPORTANT: This is PLANNING ONLY - generate planning documents, NOT implementing code.
## PHASE CONTEXT
${context}
## ROADMAP PHASE ${phaseNumber}
Goal: ${phaseGoal}
Requirements:
${requirements.map(r => `- ${r.id}: ${r.desc}`).join('\n')}
Success Criteria:
${successCriteria.map(c => `- ${c}`).join('\n')}
${isGapClosure ? `## GAP CLOSURE
This is a gap closure iteration. Only address gaps listed in context — do NOT re-plan completed work.
Existing task JSONs in ${taskDir} represent prior work. Create gap-specific tasks starting from next available ID.` : ''}
${priorSummaries.length > 0 ? `## PRIOR PHASE CONTEXT
${priorSummaries.join('\n\n---\n\n')}` : ''}
## SESSION PATHS
Output:
- Task Dir: ${taskDir}
- IMPL_PLAN: ${implPlanPath}
- TODO_LIST: ${todoListPath}
## CONTEXT METADATA
Session: ${sessionFolder}
Phase: ${phaseNumber}
Depth: ${config.depth || 'standard'}
## USER CONFIGURATION
Execution Method: agent
Preferred CLI Tool: gemini
## EXPECTED DELIVERABLES
1. Task JSON Files (${taskDir}/IMPL-*.json)
- Unified flat schema (task-schema.json)
- Quantified requirements with explicit counts
- focus_paths from context.md relevant files
- convergence criteria derived from success criteria (goal-backward)
2. Implementation Plan (${implPlanPath})
- Phase goal and context
- Task breakdown and execution strategy
- Dependency graph
3. TODO List (${todoListPath})
- Flat structure with [ ] for pending
- Links to task JSONs
## TASK ID FORMAT
Use: IMPL-{phaseNumber}{seq} (e.g., IMPL-101, IMPL-102 for phase 1)
## CONVERGENCE CRITERIA RULES (replacing old must_haves)
Each task MUST include convergence:
- criteria: Measurable conditions derived from success criteria (goal-backward, not task-forward)
- Include file existence checks
- Include export/symbol presence checks
- Include test passage checks where applicable
- verification: Executable command to verify criteria
- definition_of_done: Business-language completion definition
## CLI EXECUTION ID FORMAT
Each task: cli_execution.id = "RD-${sessionFolder.split('/').pop()}-{task_id}"
## QUALITY STANDARDS
- Task count <= 10 per phase (hard limit)
- All requirements quantified
- Acceptance criteria measurable
- Dependencies form a valid DAG (no cycles)
`
})
```
### Step 4: Validate Generated Artifacts
```javascript
// 4a. Verify task JSONs were created
const taskFiles = Glob(`${taskDir}/IMPL-*.json`)
if (!taskFiles || taskFiles.length === 0) {
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId,
from: "planner",
type: "error",
})
return
}
// 4b. Validate each task JSON
for (const taskFile of taskFiles) {
const taskJson = JSON.parse(Read(taskFile))
// Required fields check
const requiredFields = ['id', 'title', 'description', 'files', 'implementation', 'convergence']
for (const field of requiredFields) {
if (!taskJson[field]) {
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId,
from: "planner",
type: "plan_progress",
})
}
}
// Convergence criteria check
if (!taskJson.convergence?.criteria || taskJson.convergence.criteria.length === 0) {
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId,
from: "planner",
type: "plan_progress",
})
}
// Dependency cycle check (simple: task cannot depend on itself)
if (taskJson.depends_on?.includes(taskJson.id)) {
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId,
from: "planner",
type: "error",
})
}
}
// 4c. Validate dependency DAG (no cycles)
const allTasks = taskFiles.map(f => JSON.parse(Read(f)))
const taskIds = new Set(allTasks.map(t => t.id))
// Check all depends_on references are valid
for (const task of allTasks) {
for (const dep of (task.depends_on || [])) {
if (!taskIds.has(dep)) {
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId,
from: "planner",
type: "plan_progress",
})
}
}
}
// 4d. Verify IMPL_PLAN.md exists
const implPlanExists = Bash(`test -f "${implPlanPath}" && echo "EXISTS" || echo "NOT_FOUND"`).trim()
if (implPlanExists === "NOT_FOUND") {
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId,
from: "planner",
type: "plan_progress",
})
// Create minimal IMPL_PLAN.md from task JSONs
generateMinimalImplPlan(allTasks, implPlanPath, phaseGoal, phaseNumber)
}
```
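The checks in Step 4 catch self-dependencies and dangling references, but not longer cycles. A full acyclicity check (a Kahn-style topological count, offered as an illustrative supplement) could look like:

```javascript
// Returns true if the depends_on graph contains at least one cycle.
// Note: a task with a dangling (non-existent) dependency is also flagged,
// since its indegree never reaches zero.
function hasDependencyCycle(tasks) {
  const indegree = new Map()
  const dependents = new Map()
  for (const t of tasks) {
    indegree.set(t.id, (t.depends_on || []).length)
    for (const dep of (t.depends_on || [])) {
      if (!dependents.has(dep)) dependents.set(dep, [])
      dependents.get(dep).push(t.id)
    }
  }
  // Seed with tasks that have no dependencies
  const queue = [...indegree.keys()].filter(id => indegree.get(id) === 0)
  let processed = 0
  while (queue.length > 0) {
    const id = queue.shift()
    processed++
    for (const next of (dependents.get(id) || [])) {
      indegree.set(next, indegree.get(next) - 1)
      if (indegree.get(next) === 0) queue.push(next)
    }
  }
  return processed < tasks.length // unprocessed tasks imply a cycle
}
```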
### Step 5: Compute Wave Structure (for reporting)
```javascript
// Derive wave structure from dependency graph (for reporting only — executor uses depends_on directly)
function computeWaves(tasks) {
const waves = {}
const assigned = new Set()
let currentWave = 1
while (assigned.size < tasks.length) {
const waveMembers = tasks.filter(t =>
!assigned.has(t.id) &&
(t.depends_on || []).every(d => assigned.has(d))
)
if (waveMembers.length === 0 && assigned.size < tasks.length) {
const unassigned = tasks.find(t => !assigned.has(t.id))
waveMembers.push(unassigned)
}
for (const task of waveMembers) {
waves[task.id] = currentWave
assigned.add(task.id)
}
currentWave++
}
return { waves, totalWaves: currentWave - 1 }
}
const { waves, totalWaves } = computeWaves(allTasks)
```
### Step 6: Report Plan Structure
```javascript
const taskCount = allTasks.length
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId,
from: "planner",
type: "plan_progress",
ref: `${sessionFolder}/phase-${phaseNumber}/`
})
return {
taskCount,
totalWaves,
waves,
taskFiles,
implPlanPath,
todoListPath
}
```
## Gap Closure Plans
When creating plans for gap closure (re-planning after verification found gaps):
```javascript
if (isGapClosure) {
// 1. Existing IMPL-*.json files represent completed work
// 2. action-planning-agent receives gap context and creates gap-specific tasks
// 3. New task IDs start from next available (e.g., IMPL-103 if 101,102 exist)
// 4. convergence criteria should directly address gap descriptions from verification.md
// 5. Gap tasks may depend on existing completed tasks
}
```
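Point 3 above (next available task ID) could be computed as follows (a sketch, assuming filenames follow the `IMPL-{phaseNumber}{seq}.json` format from this command):

```javascript
// Derives the next gap-task ID from existing task file names.
// Assumes filenames contain IMPL-{phaseNumber}{seq}, e.g. IMPL-101.json.
function nextTaskId(existingFiles, phaseNumber) {
  const nums = existingFiles
    .map(f => (f.match(/IMPL-(\d+)/) || [])[1])
    .filter(Boolean)
    .map(Number)
  // First gap task continues the sequence; fresh phases start at {N}01
  const next = nums.length > 0 ? Math.max(...nums) + 1 : phaseNumber * 100 + 1
  return `IMPL-${next}`
}
```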
## Helper: Minimal IMPL_PLAN.md Generation
```javascript
function generateMinimalImplPlan(tasks, outputPath, phaseGoal, phaseNumber) {
const content = `# Implementation Plan: Phase ${phaseNumber}
## Goal
${phaseGoal}
## Tasks
${tasks.map(t => `### ${t.id}: ${t.title}
${t.description}
**Files**: ${(t.files || []).map(f => f.path).join(', ')}
**Depends on**: ${(t.depends_on || []).join(', ') || 'None'}
**Convergence Criteria**:
${(t.convergence?.criteria || []).map(c => `- ${c}`).join('\n')}
`).join('\n---\n\n')}
## Dependency Graph
${'```'}
${tasks.map(t => `${t.id} → [${(t.depends_on || []).join(', ')}]`).join('\n')}
${'```'}
`
Write(outputPath, content)
}
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| context.md not found | Error — research phase was skipped or failed |
| action-planning-agent fails | Retry once. If still fails, error to coordinator |
| No task JSONs generated | Error to coordinator — agent may have misunderstood input |
| Dependency cycle detected | Log warning, break cycle at lowest-numbered task |
| Too many tasks (>10) | Log warning — agent should self-limit but validate |
| Missing convergence criteria | Log warning — every task should have at least one criterion |
| IMPL_PLAN.md not generated | Create minimal version from task JSONs |

# Command: research
Gather context for a phase before creating execution plans. Explores the codebase, reads requirements from roadmap, and produces a structured context.md file.
## Purpose
Build a comprehensive understanding of the phase's scope by combining roadmap requirements, prior phase outputs, and codebase analysis. The resulting context.md is the sole input for the create-plans command.
## When to Use
- Phase 2 of planner execution (after task discovery, before plan creation)
- Called once per PLAN-* task (including gap closure iterations)
## Strategy
Subagent delegation (cli-explore-agent) for codebase exploration, supplemented by optional Gemini CLI for deep analysis when depth warrants it. Planner does NOT explore the codebase directly -- it delegates.
## Parameters
| Parameter | Source | Description |
|-----------|--------|-------------|
| `sessionFolder` | From PLAN-* task description | Session artifact directory |
| `phaseNumber` | From PLAN-* task description | Phase to research (1-based) |
| `depth` | From config.json or task description | "quick" / "standard" / "comprehensive" |
## Execution Steps
### Step 1: Read Roadmap and Extract Phase Requirements
```javascript
const roadmap = Read(`${sessionFolder}/roadmap.md`)
const config = JSON.parse(Read(`${sessionFolder}/config.json`))
const depth = config.depth || "standard"
// Parse phase section from roadmap
// Extract: goal, requirements (REQ-IDs), success criteria
const phaseSection = extractPhaseSection(roadmap, phaseNumber)
const phaseGoal = phaseSection.goal
const requirements = phaseSection.requirements // [{id: "REQ-101", desc: "..."}, ...]
const successCriteria = phaseSection.successCriteria // ["testable behavior 1", ...]
```
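`extractPhaseSection` above is assumed. A minimal version might look like the following sketch; it assumes a hypothetical roadmap layout with `## Phase N` headings, a `**Goal**:` line, `- REQ-xxx:` bullets, and a "Success Criteria" bullet list, since the actual roadmap format is defined elsewhere:

```javascript
// Parses one phase section out of roadmap.md (format assumptions noted above).
function extractPhaseSection(roadmap, phaseNumber) {
  // Grab from "## Phase N" up to the next H2 heading (or end of file)
  const pattern = new RegExp(`## Phase ${phaseNumber}\\b[\\s\\S]*?(?=\\n## |$)`)
  const section = (roadmap.match(pattern) || [''])[0]
  const goal = (section.match(/\*\*Goal\*\*:\s*(.+)/) || [, ''])[1].trim()
  const requirements = [...section.matchAll(/- (REQ-\d+):\s*(.+)/g)]
    .map(m => ({ id: m[1], desc: m[2].trim() }))
  // Criteria bullets follow the "Success Criteria" marker; optional checkboxes
  const criteriaBlock = section.split(/Success Criteria/)[1] || ''
  const successCriteria = [...criteriaBlock.matchAll(/- (?:\[ \] )?(.+)/g)]
    .map(m => m[1].trim())
  return { goal, requirements, successCriteria }
}
```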
### Step 2: Read Prior Phase Context (if applicable)
```javascript
const priorContext = []
if (phaseNumber > 1) {
// Load summaries from previous phases for dependency context
for (let p = 1; p < phaseNumber; p++) {
try {
const summary = Glob(`${sessionFolder}/phase-${p}/summary-*.md`)
for (const summaryFile of summary) {
priorContext.push({
phase: p,
file: summaryFile,
content: Read(summaryFile)
})
}
} catch {
// Prior phase may not have summaries yet (first phase)
}
// Also load verification results for dependency awareness
try {
const verification = Read(`${sessionFolder}/phase-${p}/verification.md`)
priorContext.push({
phase: p,
file: `${sessionFolder}/phase-${p}/verification.md`,
content: verification
})
} catch {}
}
}
// For gap closure: load the verification that triggered re-planning
const isGapClosure = planTaskDescription.includes("Gap closure")
let gapContext = null
if (isGapClosure) {
gapContext = Read(`${sessionFolder}/phase-${phaseNumber}/verification.md`)
}
```
### Step 3: Codebase Exploration via cli-explore-agent
```javascript
// Build exploration query from requirements
const explorationQuery = requirements.map(r => r.desc).join('; ')
const exploreResult = Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Explore codebase for phase ${phaseNumber} requirements`,
prompt: `Explore this codebase to gather context for the following requirements:
## Phase Goal
${phaseGoal}
## Requirements
${requirements.map(r => `- ${r.id}: ${r.desc}`).join('\n')}
## Success Criteria
${successCriteria.map(c => `- ${c}`).join('\n')}
## What to Find
1. Files that will need modification to satisfy these requirements
2. Existing patterns and conventions relevant to this work
3. Dependencies and integration points
4. Test patterns used in this project
5. Configuration or schema files that may need updates
## Output Format
Provide a structured summary:
- **Relevant Files**: List of files with brief description of relevance
- **Patterns Found**: Coding patterns, naming conventions, architecture patterns
- **Dependencies**: Internal and external dependencies that affect this work
- **Test Infrastructure**: Test framework, test file locations, test patterns
- **Risks**: Potential issues or complications discovered`
})
```
### Step 4: Optional Deep Analysis via Gemini CLI
```javascript
// Only for comprehensive depth or complex phases
if (depth === "comprehensive") {
const analysisResult = Bash({
command: `ccw cli -p "PURPOSE: Deep codebase analysis for implementation planning. Phase goal: ${phaseGoal}
TASK: \
- Analyze module boundaries and coupling for affected files \
- Identify shared utilities and helpers that can be reused \
- Map data flow through affected components \
- Assess test coverage gaps in affected areas \
- Identify backward compatibility concerns
MODE: analysis
CONTEXT: @**/* | Memory: Requirements: ${requirements.map(r => r.desc).join(', ')}
EXPECTED: Structured analysis with: module map, reuse opportunities, data flow diagram, test gaps, compatibility risks
CONSTRAINTS: Focus on files relevant to phase ${phaseNumber} requirements" \
--tool gemini --mode analysis --rule analysis-analyze-code-patterns`,
run_in_background: false,
timeout: 300000
})
// Store deep analysis result for context.md
}
```
### Step 5: Write context.md
```javascript
Bash(`mkdir -p "${sessionFolder}/phase-${phaseNumber}"`)
const contextContent = `# Phase ${phaseNumber} Context
Generated: ${new Date().toISOString().slice(0, 19)}
Session: ${sessionFolder}
Depth: ${depth}
## Phase Goal
${phaseGoal}
## Requirements
${requirements.map(r => `- **${r.id}**: ${r.desc}`).join('\n')}
## Success Criteria
${successCriteria.map(c => `- [ ] ${c}`).join('\n')}
## Prior Phase Dependencies
${priorContext.length > 0
? priorContext.map(p => `### Phase ${p.phase}\n- Source: ${p.file}\n- Key outputs: ${extractKeyOutputs(p.content)}`).join('\n\n')
: 'None (this is the first phase)'}
${isGapClosure ? `## Gap Closure Context\n\nThis is a gap closure iteration. Gaps from previous verification:\n${gapContext}` : ''}
## Relevant Files
${exploreResult.relevantFiles.map(f => `- \`${f.path}\`: ${f.description}`).join('\n')}
## Patterns Identified
${exploreResult.patterns.map(p => `- **${p.name}**: ${p.description}`).join('\n')}
## Dependencies
${exploreResult.dependencies.map(d => `- ${d}`).join('\n')}
## Test Infrastructure
${exploreResult.testInfo || 'Not analyzed (quick depth)'}
${depth === "comprehensive" && analysisResult ? `## Deep Analysis\n\n${analysisResult}` : ''}
## Questions / Risks
${exploreResult.risks.map(r => `- ${r}`).join('\n')}
`
Write(`${sessionFolder}/phase-${phaseNumber}/context.md`, contextContent)
```
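The template above calls `extractKeyOutputs`, a helper this command never defines. A minimal sketch, under the assumption that prior-phase summaries expose their outputs as bullets beneath a "Key Outputs" or "Provides" heading (the heading names are illustrative, not a fixed contract):

```javascript
// Hypothetical helper: pull the first few bullet lines from a prior-phase
// summary so the dependency section of context.md stays short.
function extractKeyOutputs(content, maxItems = 5) {
  const lines = content.split('\n')
  // Prefer bullets under a "Provides" or "Key Outputs" heading, if present
  const headingIdx = lines.findIndex(l => /^#+\s*(provides|key outputs)/i.test(l))
  const scope = headingIdx >= 0 ? lines.slice(headingIdx + 1) : lines
  const bullets = []
  for (const line of scope) {
    if (headingIdx >= 0 && /^#+\s/.test(line)) break // stop at the next heading
    if (/^\s*[-*]\s+/.test(line)) bullets.push(line.replace(/^\s*[-*]\s+/, '').trim())
    if (bullets.length >= maxItems) break
  }
  return bullets.length > 0 ? bullets.join('; ') : '(no key outputs listed)'
}
```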
## Output
| Artifact | Path | Description |
|----------|------|-------------|
| context.md | `{sessionFolder}/phase-{N}/context.md` | Structured phase context for plan creation |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| roadmap.md not found | Error to coordinator via message bus |
| cli-explore-agent fails | Retry once. Fallback: use ACE search_context directly |
| Gemini CLI fails | Skip deep analysis section, proceed with basic context |
| Prior phase summaries missing | Log warning, proceed without dependency context |
| Phase section not found in roadmap | Error to coordinator -- phase number may be invalid |


@@ -1,239 +0,0 @@
# Planner Role
Research and plan creation per phase. Gathers codebase context via cli-explore-agent and Gemini CLI, then generates wave-based execution plans with convergence criteria. Each plan is a self-contained unit of work that an executor can implement autonomously.
## Identity
- **Name**: `planner` | **Tag**: `[planner]`
- **Task Prefix**: `PLAN-*`
- **Responsibility**: Orchestration (research + plan generation)
## Boundaries
### MUST
- All outputs must carry `[planner]` prefix
- Only process `PLAN-*` prefixed tasks
- Only communicate with coordinator (SendMessage)
- Delegate research to commands/research.md
- Delegate plan creation to commands/create-plans.md
- Reference real files discovered during research (never fabricate paths)
- Verify plans have no dependency cycles before reporting
- Work strictly within Orchestration responsibility scope
### MUST NOT
- Execute work outside this role's responsibility scope
- Direct code writing or modification
- Call code-developer or other implementation subagents
- Create tasks for other roles (TaskCreate)
- Interact with user (AskUserQuestion)
- Process EXEC-* or VERIFY-* tasks
- Skip the research phase
- Communicate directly with other worker roles (must go through coordinator)
- Omit `[planner]` identifier in any output
---
## Toolbox
### Available Commands
| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `research` | [commands/research.md](commands/research.md) | Phase 2 | Context gathering via codebase exploration |
| `create-plans` | [commands/create-plans.md](commands/create-plans.md) | Phase 3 | Wave-based plan file generation |
### Tool Capabilities
| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `cli-explore-agent` | Subagent | planner | Codebase exploration, pattern analysis |
| `action-planning-agent` | Subagent | planner | Task JSON + IMPL_PLAN.md generation |
| `gemini` | CLI tool | planner | Deep analysis for complex phases (optional) |
| `Read/Write` | File operations | planner | Context and plan file management |
| `Glob/Grep` | Search | planner | File discovery and pattern matching |
---
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `plan_ready` | planner -> coordinator | Plans created | Plan files written with wave structure |
| `plan_progress` | planner -> coordinator | Research complete | Context gathered, starting plan creation |
| `error` | planner -> coordinator | Failure | Research or planning failed |
## Message Bus
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
```
mcp__ccw-tools__team_msg({
operation: "log",
session_id: <session-id>,
from: "planner",
type: <message-type>,
ref: <artifact-path>
})
```
**CLI fallback** (when MCP unavailable):
```
Bash("ccw team log --session-id <session-id> --from planner --type <type> --ref <artifact-path> --json")
```
---
## Execution (5-Phase)
### Phase 1: Task Discovery
> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery
Standard task discovery flow: TaskList -> filter by prefix `PLAN-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
**Resume Artifact Check**: Check whether this task's output artifact already exists:
- `<session>/phase-N/context.md` exists -> skip to Phase 3
- Artifact incomplete or missing -> normal Phase 2-4 execution
### Phase 2: Research (via command)
**Objective**: Gather codebase context for plan generation.
**Loading steps**:
| Input | Source | Required |
|-------|--------|----------|
| roadmap.md | <session-folder>/roadmap.md | Yes |
| Prior phase summaries | <session-folder>/phase-*/summary-*.md | No |
| Wisdom | <session-folder>/wisdom/ | No |
Delegate to `commands/research.md`:
| Step | Action |
|------|--------|
| 1 | Read roadmap.md for phase goal and requirements |
| 2 | Read prior phase summaries (if any) |
| 3 | Launch cli-explore-agent for codebase exploration |
| 4 | Optional: Gemini CLI for deeper analysis (if depth=comprehensive) |
| 5 | Write context.md to {sessionFolder}/phase-{N}/context.md |
**Produces**: `{sessionFolder}/phase-{N}/context.md`
**Command**: [commands/research.md](commands/research.md)
**Report progress via team_msg**:
```
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId,
from: "planner",
type: "plan_progress",
ref: "<session>/phase-<N>/context.md"
})
```
### Phase 3: Create Plans (via command)
**Objective**: Generate wave-based execution plans.
Delegate to `commands/create-plans.md`:
| Step | Action |
|------|--------|
| 1 | Load context.md for phase |
| 2 | Prepare output directories (.task/) |
| 3 | Delegate to action-planning-agent |
| 4 | Agent produces IMPL_PLAN.md + .task/IMPL-*.json + TODO_LIST.md |
| 5 | Validate generated artifacts |
| 6 | Return task count and dependency structure |
**Produces**:
- `{sessionFolder}/phase-{N}/IMPL_PLAN.md`
- `{sessionFolder}/phase-{N}/.task/IMPL-*.json`
- `{sessionFolder}/phase-{N}/TODO_LIST.md`
**Command**: [commands/create-plans.md](commands/create-plans.md)
### Phase 4: Self-Validation
**Objective**: Verify task JSONs before reporting.
**Validation checks**:
| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Referenced files exist | `test -f <path>` for modify actions | All files found or warning logged |
| Self-dependency | Check if depends_on includes own ID | No self-dependencies |
| Convergence criteria | Check convergence.criteria exists | Each task has criteria |
| Cross-dependency | Verify all depends_on IDs exist | All dependencies valid |
**Validation steps**:
1. **File existence check** (for modify actions):
- For each task file with action="modify"
- Check file exists
- Log warning if not found
2. **Self-dependency check**:
- For each task, verify task.id not in task.depends_on
- Log error if self-dependency detected
3. **Convergence criteria check**:
- Verify each task has convergence.criteria array
- Log warning if missing
4. **Cross-dependency validation**:
- Collect all task IDs
- Verify each depends_on reference exists
- Log warning if unknown dependency
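The self-dependency, convergence-criteria, and cross-dependency checks above reduce to one pure validation pass; a sketch, assuming the IMPL-*.json task shape with `id`, `depends_on`, and `convergence.criteria` (the file-existence check is omitted because it needs shell access):

```javascript
// Illustrative validator returning human-readable problems; an empty
// array means the task set passed all three structural checks.
function validateTasks(tasks) {
  const problems = []
  const ids = new Set(tasks.map(t => t.id))
  for (const task of tasks) {
    const deps = task.depends_on || []
    if (deps.includes(task.id)) {
      problems.push(`${task.id}: depends on itself`)
    }
    for (const dep of deps) {
      if (!ids.has(dep)) problems.push(`${task.id}: unknown dependency ${dep}`)
    }
    if (!task.convergence || !Array.isArray(task.convergence.criteria)
        || task.convergence.criteria.length === 0) {
      problems.push(`${task.id}: missing convergence.criteria`)
    }
  }
  return problems
}
```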
### Phase 5: Report to Coordinator
> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report
Standard report flow: team_msg log -> SendMessage with `[planner]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.
**Wave count computation**:
| Step | Action |
|------|--------|
| 1 | Start with wave=1, assigned=set() |
| 2 | Find tasks with all dependencies in assigned |
| 3 | Assign those tasks to current wave, add to assigned |
| 4 | Increment wave, repeat until all tasks assigned |
| 5 | Return wave count |
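The steps above amount to a layered topological sort. A minimal sketch (it throws on a cycle, where the role would instead log a warning and break the cycle):

```javascript
// Wave counter matching the table above: tasks whose dependencies are all
// already assigned form the next wave; repeat until every task is placed.
function computeWaveCount(tasks) {
  const assigned = new Set()
  let remaining = tasks.slice()
  let waves = 0
  while (remaining.length > 0) {
    const ready = remaining.filter(t =>
      (t.depends_on || []).every(dep => assigned.has(dep)))
    if (ready.length === 0) throw new Error('Dependency cycle detected')
    for (const t of ready) assigned.add(t.id)
    remaining = remaining.filter(t => !assigned.has(t.id))
    waves++
  }
  return waves
}
```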
**Report message**:
```
SendMessage({
message: "[planner] Phase <N> planning complete.
- Tasks: <count>
- Waves: <wave-count>
- IMPL_PLAN: <session>/phase-<N>/IMPL_PLAN.md
- Task JSONs: <file-list>
All tasks validated. Ready for execution."
})
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No PLAN-* tasks available | Idle, wait for coordinator assignment |
| Context/Plan file not found | Notify coordinator, request location |
| Command file not found | Fall back to inline execution |
| roadmap.md not found | Error to coordinator -- dispatch may have failed |
| cli-explore-agent fails | Retry once. If still fails, use direct ACE search as fallback |
| Gemini CLI fails | Skip deep analysis, proceed with basic context |
| action-planning-agent fails | Retry once. If still fails, error to coordinator |
| No task JSONs generated | Error to coordinator -- agent may have misunderstood input |
| No requirements found for phase | Error to coordinator -- roadmap may be malformed |
| Dependency cycle detected | Log warning, break cycle |
| Referenced file not found | Log warning. If file is from prior wave, acceptable |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
| Unexpected error | Log error via team_msg, report to coordinator |


@@ -1,335 +0,0 @@
# Command: verify
Goal-backward verification of convergence criteria from IMPL-*.json task files against actual codebase state. Checks convergence criteria (measurable conditions), file operations (existence/content), and runs verification commands.
## Purpose
For each task's convergence criteria, verify that the expected goals are met in the actual codebase. This is goal-backward verification: check what should exist, NOT what tasks were done. Produce a structured pass/fail result per task with gap details for any failures.
## Key Principle
**Goal-backward, not task-forward.** Do not check "did the executor follow the steps?" — check "does the codebase now have the properties that were required?"
## When to Use
- Phase 3 of verifier execution (after loading targets, before compiling results)
- Called once per VERIFY-* task
## Strategy
For each task, check convergence criteria and file operations. Use Bash for verification commands, Read/Grep for file checks, and optionally Gemini CLI for semantic validation of complex criteria.
## Parameters
| Parameter | Source | Description |
|-----------|--------|-------------|
| `sessionFolder` | From VERIFY-* task description | Session artifact directory |
| `phaseNumber` | From VERIFY-* task description | Phase number (1-based) |
| `tasks` | From verifier Phase 2 | Parsed task JSON objects with convergence criteria |
| `summaries` | From verifier Phase 2 | Parsed summary objects for context |
## Execution Steps
### Step 1: Initialize Results
```javascript
const verificationResults = []
```
### Step 2: Verify Each Task
```javascript
for (const task of tasks) {
const taskResult = {
task: task.id,
title: task.title,
status: 'pass', // will be downgraded if any check fails
details: [],
gaps: []
}
// --- 2a. Check Convergence Criteria ---
const criteria = task.convergence?.criteria || []
for (const criterion of criteria) {
const check = checkCriterion(criterion, task)
taskResult.details.push({
type: 'criterion',
description: criterion,
passed: check.passed
})
if (!check.passed) {
taskResult.gaps.push({
task: task.id,
type: 'criterion',
item: criterion,
expected: criterion,
actual: check.actual || 'Check failed'
})
}
}
// --- 2b. Check File Operations ---
const files = task.files || []
for (const fileEntry of files) {
const fileChecks = checkFileEntry(fileEntry)
for (const check of fileChecks) {
taskResult.details.push(check.detail)
if (!check.passed) {
taskResult.gaps.push({
...check.gap,
task: task.id
})
}
}
}
// --- 2c. Run Verification Command ---
if (task.convergence?.verification) {
const verifyCheck = runVerificationCommand(task.convergence.verification)
taskResult.details.push({
type: 'verification_command',
description: `Verification: ${task.convergence.verification}`,
passed: verifyCheck.passed
})
if (!verifyCheck.passed) {
taskResult.gaps.push({
task: task.id,
type: 'verification_command',
item: task.convergence.verification,
expected: 'Command exits with code 0',
actual: verifyCheck.actual || 'Command failed'
})
}
}
// --- 2d. Score task ---
const totalChecks = taskResult.details.length
const passedChecks = taskResult.details.filter(d => d.passed).length
if (passedChecks === totalChecks) {
taskResult.status = 'pass'
} else if (passedChecks > 0) {
taskResult.status = 'partial'
} else {
taskResult.status = 'fail'
}
verificationResults.push(taskResult)
}
```
### Step 3: Criterion Checking Function
```javascript
function checkCriterion(criterion, task) {
// Criteria are measurable conditions
// Strategy: derive testable assertions from criterion text
// Attempt 1: If criterion mentions a test command, run it
const testMatch = criterion.match(/test[s]?\s+(pass|run|succeed)/i)
if (testMatch) {
const testResult = Bash(`npm test 2>&1 || yarn test 2>&1 || pytest 2>&1 || true`)
const passed = !testResult.includes('FAIL') && !testResult.includes('failed')
return { passed, actual: passed ? 'Tests pass' : 'Test failures detected' }
}
// Attempt 2: If criterion mentions specific counts or exports, check files
const filePaths = (task.files || []).map(f => f.path)
for (const filePath of filePaths) {
const exists = Bash(`test -f "${filePath}" && echo "EXISTS" || echo "NOT_FOUND"`).trim()
if (exists === "EXISTS") {
const content = Read(filePath)
// Check if criterion keywords appear in the implementation
const keywords = criterion.split(/\s+/).filter(w => w.length > 4)
const relevant = keywords.filter(kw =>
content.toLowerCase().includes(kw.toLowerCase())
)
if (relevant.length >= Math.ceil(keywords.length * 0.3)) {
return { passed: true, actual: 'Implementation contains relevant logic' }
}
}
}
// Attempt 3: Compile check for affected files
for (const filePath of filePaths) {
if (filePath.endsWith('.ts') || filePath.endsWith('.tsx')) {
const compileCheck = Bash(`npx tsc --noEmit "${filePath}" 2>&1 || true`)
if (compileCheck.includes('error TS')) {
return { passed: false, actual: `TypeScript errors in ${filePath}` }
}
}
}
// Default: mark as passed if files exist and compile
return { passed: true, actual: 'Files exist and compile without errors' }
}
```
### Step 4: File Entry Checking Function
```javascript
function checkFileEntry(fileEntry) {
const checks = []
const path = fileEntry.path
const action = fileEntry.action // create, modify, delete
// 4a. Check file existence based on action
const exists = Bash(`test -f "${path}" && echo "EXISTS" || echo "NOT_FOUND"`).trim()
if (action === 'create' || action === 'modify') {
checks.push({
passed: exists === "EXISTS",
detail: {
type: 'file',
description: `File exists: ${path} (${action})`,
passed: exists === "EXISTS"
},
gap: exists !== "EXISTS" ? {
type: 'file',
item: `file_exists: ${path}`,
expected: `File should exist (action: ${action})`,
actual: 'File not found'
} : null
})
if (exists !== "EXISTS") {
      // File missing: skip content checks (each entry already carries its gap)
      return checks
}
// 4b. Check minimum content (non-empty for create)
if (action === 'create') {
const content = Read(path)
const lineCount = content.split('\n').length
const minLines = 3 // Minimum for a meaningful file
checks.push({
passed: lineCount >= minLines,
detail: {
type: 'file',
description: `${path}: has content (${lineCount} lines)`,
passed: lineCount >= minLines
},
gap: lineCount < minLines ? {
type: 'file',
item: `min_content: ${path}`,
expected: `>= ${minLines} lines`,
actual: `${lineCount} lines`
} : null
})
}
} else if (action === 'delete') {
checks.push({
passed: exists === "NOT_FOUND",
detail: {
type: 'file',
description: `File deleted: ${path}`,
passed: exists === "NOT_FOUND"
},
gap: exists !== "NOT_FOUND" ? {
type: 'file',
item: `file_deleted: ${path}`,
expected: 'File should be deleted',
actual: 'File still exists'
} : null
})
}
  return checks
}
```
### Step 5: Verification Command Runner
```javascript
function runVerificationCommand(command) {
try {
const result = Bash(`${command} 2>&1; echo "EXIT:$?"`)
const exitCodeMatch = result.match(/EXIT:(\d+)/)
const exitCode = exitCodeMatch ? parseInt(exitCodeMatch[1]) : 1
return {
passed: exitCode === 0,
actual: exitCode === 0 ? 'Command succeeded' : `Exit code: ${exitCode}\n${result.slice(0, 200)}`
}
} catch (e) {
return { passed: false, actual: `Command error: ${e.message}` }
}
}
```
### Step 6: Write verification.md
```javascript
const totalGaps = verificationResults.flatMap(r => r.gaps)
const overallStatus = totalGaps.length === 0 ? 'passed' : 'gaps_found'
Write(`${sessionFolder}/phase-${phaseNumber}/verification.md`, `---
phase: ${phaseNumber}
status: ${overallStatus}
tasks_checked: ${tasks.length}
tasks_passed: ${verificationResults.filter(r => r.status === 'pass').length}
gaps:
${totalGaps.map(g => ` - task: "${g.task}"
type: "${g.type}"
item: "${g.item}"
expected: "${g.expected}"
actual: "${g.actual}"`).join('\n')}
---
# Phase ${phaseNumber} Verification
## Summary
- **Status**: ${overallStatus}
- **Tasks Checked**: ${tasks.length}
- **Passed**: ${verificationResults.filter(r => r.status === 'pass').length}
- **Partial**: ${verificationResults.filter(r => r.status === 'partial').length}
- **Failed**: ${verificationResults.filter(r => r.status === 'fail').length}
- **Total Gaps**: ${totalGaps.length}
## Task Results
${verificationResults.map(r => `### ${r.task}: ${r.title} - ${r.status.toUpperCase()}
${r.details.map(d => `- [${d.passed ? 'x' : ' '}] (${d.type}) ${d.description}`).join('\n')}`).join('\n\n')}
${totalGaps.length > 0 ? `## Gaps for Re-Planning
The following gaps must be addressed in a gap closure iteration:
${totalGaps.map((g, i) => `### Gap ${i + 1}
- **Task**: ${g.task}
- **Type**: ${g.type}
- **Item**: ${g.item}
- **Expected**: ${g.expected}
- **Actual**: ${g.actual}`).join('\n\n')}` : '## All Goals Met'}
`)
```
## Verification Checklist
### Convergence Criteria
| Check Method | Tool | When |
|--------------|------|------|
| Run tests | Bash(`npm test`) | Criterion mentions "test" |
| Compile check | Bash(`npx tsc --noEmit`) | TypeScript files |
| Keyword match | Read + string match | General behavioral criteria |
| Verification command | Bash(convergence.verification) | Always if provided |
| Semantic check | Gemini CLI (analysis) | Complex criteria (optional) |
### File Operations
| Check | Tool | What |
|-------|------|------|
| File exists (create/modify) | Bash(`test -f`) | files[].path with action create/modify |
| File deleted | Bash(`test -f`) | files[].path with action delete |
| Minimum content | Read + line count | Newly created files |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No task JSON files found | Error to coordinator -- planner may have failed |
| No summary files found | Error to coordinator -- executor may have failed |
| Verification command fails | Record as gap with error output |
| File referenced in task missing | Record as gap (file type) |
| Task JSON malformed (no convergence) | Log warning, score as pass (nothing to check) |
| All checks for a task fail | Score as 'fail', include all gaps |


@@ -1,244 +0,0 @@
# Verifier Role
Goal-backward verification per phase. Reads convergence criteria from IMPL-*.json task files and checks them against the actual codebase state after execution. Does NOT modify code -- read-only validation. Produces verification.md with pass/fail results and structured gap lists.
## Identity
- **Name**: `verifier` | **Tag**: `[verifier]`
- **Task Prefix**: `VERIFY-*`
- **Responsibility**: Validation
## Boundaries
### MUST
- All outputs must carry `[verifier]` prefix
- Only process `VERIFY-*` prefixed tasks
- Only communicate with coordinator (SendMessage)
- Delegate verification to commands/verify.md
- Check goals (what should exist), NOT tasks (what was done)
- Produce structured gap lists for failed items
- Remain read-only -- never modify source code
- Work strictly within Validation responsibility scope
### MUST NOT
- Execute work outside this role's responsibility scope
- Modify any source code or project files
- Create plans or execute implementations
- Create tasks for other roles (TaskCreate)
- Interact with user (AskUserQuestion)
- Process PLAN-* or EXEC-* tasks
- Auto-fix issues (report them, let planner/executor handle fixes)
- Communicate directly with other worker roles (must go through coordinator)
- Omit `[verifier]` identifier in any output
---
## Toolbox
### Available Commands
| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `verify` | [commands/verify.md](commands/verify.md) | Phase 3 | Goal-backward convergence criteria checking |
### Tool Capabilities
| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `gemini` | CLI tool | verifier | Deep semantic checks for complex truths (optional) |
| `Read` | File operations | verifier | Task JSON and summary reading |
| `Glob` | Search | verifier | Find task and summary files |
| `Bash` | Shell | verifier | Execute verification commands |
| `Grep` | Search | verifier | Pattern matching in codebase |
---
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `verify_passed` | verifier -> coordinator | All convergence criteria met | Phase verification passed |
| `gaps_found` | verifier -> coordinator | Some criteria failed | Structured gap list for re-planning |
| `error` | verifier -> coordinator | Failure | Verification process failed |
## Message Bus
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
```
mcp__ccw-tools__team_msg({
operation: "log",
session_id: <session-id>,
from: "verifier",
type: <message-type>,
ref: <artifact-path>
})
```
**CLI fallback** (when MCP unavailable):
```
Bash("ccw team log --session-id <session-id> --from verifier --type <type> --ref <artifact-path> --json")
```
---
## Execution (5-Phase)
### Phase 1: Task Discovery
> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery
Standard task discovery flow: TaskList -> filter by prefix `VERIFY-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
**Resume Artifact Check**: Check whether this task's output artifact already exists:
- `<session>/phase-N/verification.md` exists -> skip to Phase 5
- Artifact incomplete or missing -> normal Phase 2-4 execution
### Phase 2: Load Verification Targets
**Objective**: Load task JSONs and summaries for verification.
**Detection steps**:
| Input | Source | Required |
|-------|--------|----------|
| Task JSONs | <session-folder>/phase-{N}/.task/IMPL-*.json | Yes |
| Summaries | <session-folder>/phase-{N}/summary-*.md | Yes |
| Wisdom | <session-folder>/wisdom/ | No |
1. **Read task JSON files**:
- Find all IMPL-*.json files
- Extract convergence criteria from each task
- If no files found -> error to coordinator
2. **Read summary files**:
- Find all summary-*.md files
- Parse frontmatter for: task, affects, provides
- If no files found -> error to coordinator
### Phase 3: Goal-Backward Verification (via command)
**Objective**: Execute convergence criteria checks.
Delegate to `commands/verify.md`:
| Step | Action |
|------|--------|
| 1 | For each task's convergence criteria |
| 2 | Check criteria type: files, command, pattern |
| 3 | Execute appropriate verification method |
| 4 | Score each task: pass / partial / fail |
| 5 | Compile gap list for failed items |
**Verification strategy selection**:
| Criteria Type | Method |
|---------------|--------|
| File existence | `test -f <path>` |
| Command execution | Run specified command, check exit code |
| Pattern match | Grep for pattern in specified files |
| Semantic check | Optional: Gemini CLI for deep analysis |
**Produces**: verificationResults (structured data)
**Command**: [commands/verify.md](commands/verify.md)
### Phase 4: Compile Results
**Objective**: Aggregate pass/fail and generate verification.md.
**Result aggregation**:
| Metric | Source | Threshold |
|--------|--------|-----------|
| Pass rate | Task results | 100% required for `passed` |
| Gap count | Failed criteria | 0 required for `passed` |
**Compile steps**:
1. **Aggregate results per task**:
- Count passed, partial, failed
- Collect all gaps from partial/failed tasks
2. **Determine overall status**:
- `passed` if gaps.length === 0
- `gaps_found` otherwise
3. **Write verification.md**:
- YAML frontmatter with status, counts, gaps
- Summary section
- Task results section
- Gaps section (if any)
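The aggregation in steps 1-2 reduces to a small pure function; a sketch, assuming the per-task result shape produced by commands/verify.md (`status` plus a `gaps` array):

```javascript
// Sketch of result compilation: per-status counts plus the flattened gap
// list, which alone decides the overall verdict.
function compileResults(verificationResults) {
  const count = s => verificationResults.filter(r => r.status === s).length
  const gaps = verificationResults.flatMap(r => r.gaps || [])
  return {
    status: gaps.length === 0 ? 'passed' : 'gaps_found',
    passed: count('pass'),
    partial: count('partial'),
    failed: count('fail'),
    gaps
  }
}
```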
**Verification.md structure**:
```yaml
---
phase: <N>
status: passed | gaps_found
tasks_checked: <count>
tasks_passed: <count>
gaps:
- task: "<task-id>"
type: "<criteria-type>"
item: "<description>"
expected: "<expected-value>"
actual: "<actual-value>"
---
# Phase <N> Verification
## Summary
- Status: <status>
- Tasks Checked: <count>
- Passed: <count>
- Total Gaps: <count>
## Task Results
### TASK-ID: Title - STATUS
- [x] (type) description
- [ ] (type) description
## Gaps (if any)
### Gap 1: Task - Type
- Expected: ...
- Actual: ...
```
### Phase 5: Report to Coordinator
> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report
Standard report flow: team_msg log -> SendMessage with `[verifier]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.
**Report message**:
```
SendMessage({
message: "[verifier] Phase <N> verification complete.
- Status: <status>
- Tasks: <passed>/<total> passed
- Gaps: <gap-count>
Verification written to: <verification-path>"
})
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No VERIFY-* tasks available | Idle, wait for coordinator assignment |
| Context/Plan file not found | Notify coordinator, request location |
| Command file not found | Fall back to inline execution |
| No task JSON files found | Error to coordinator -- planner may have failed |
| No summary files found | Error to coordinator -- executor may have failed |
| File referenced in task missing | Record as gap (file type) |
| Bash command fails during check | Record as gap with error message |
| Verification command fails | Record as gap with exit code |
| Gemini CLI fails | Fallback to direct checks, skip semantic analysis |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
| Unexpected error | Log error via team_msg, report to coordinator |