feat: migrate all codex team skills from spawn_agents_on_csv to spawn_agent + wait_agent architecture

- Delete 21 old team skill directories using CSV-wave pipeline pattern (~100+ files)
- Delete old team-lifecycle (v3) and team-planex-v2
- Create generic team-worker.toml and team-supervisor.toml (replacing tlv4-specific TOMLs)
- Convert 19 team skills from Claude Code format (Agent/SendMessage/TaskCreate)
  to Codex format (spawn_agent/wait_agent/tasks.json/request_user_input)
- Update team-lifecycle-v4 to use generic agent types (team_worker/team_supervisor)
- Convert all coordinator role files: dispatch.md, monitor.md, role.md
- Convert all worker role files: remove run_in_background, fix Bash syntax
- Convert all specs/pipelines.md references
- Final state: 20 team skills, 217 .md files, zero Claude Code API residuals

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
catlog22
2026-03-24 16:54:48 +08:00
parent 54283e5dbb
commit 1e560ab8e8
334 changed files with 28996 additions and 35516 deletions


@@ -1,667 +1,267 @@
---
name: team-coordinate
description: Universal team coordination skill with dynamic role generation. Uses team-worker agent architecture with role-spec files. Only coordinator is built-in -- all worker roles are generated at runtime as role-specs and spawned via team-worker agent. Beat/cadence model for orchestration. Triggers on "Team Coordinate".
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"task description\""
allowed-tools: spawn_agent(*), wait_agent(*), send_input(*), close_agent(*), report_agent_job_result(*), request_user_input(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---
# Team Coordinate
Universal team coordination skill: analyze task -> generate role-specs -> dispatch -> execute -> deliver. Only the **coordinator** is built-in. All worker roles are **dynamically generated** as lightweight role-spec files and spawned via the `team-worker` agent.
## Auto Mode
When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.
## Usage
```bash
$team-coordinate "Implement user authentication with JWT tokens"
$team-coordinate -c 4 "Refactor payment module and write API documentation"
$team-coordinate -y "Analyze codebase security and fix vulnerabilities"
$team-coordinate --continue "tc-auth-jwt-20260308"
```
**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume existing session
**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)
---
## Overview
Universal team coordination: analyze task -> detect capabilities -> generate dynamic role instructions -> decompose into dependency-ordered CSV tasks -> execute wave-by-wave -> deliver results. Only the **coordinator** (this orchestrator) is built-in. All worker roles are **dynamically generated** as CSV agent instructions at runtime.
**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary)
## Architecture
```
+-------------------------------------------------------------------+
| TEAM COORDINATE WORKFLOW |
+-------------------------------------------------------------------+
| |
| Phase 0: Pre-Wave Interactive (Requirement Clarification) |
| +- Parse user task description |
| +- Clarify ambiguous requirements (request_user_input) |
| +- Output: refined requirements for decomposition |
| |
| Phase 1: Requirement -> CSV + Classification |
| +- Signal detection: keyword scan -> capability inference |
| +- Dependency graph construction (DAG) |
| +- Role minimization (cap at 5 roles) |
| +- Classify tasks: csv-wave | interactive (exec_mode) |
| +- Compute dependency waves (topological sort) |
| +- Generate tasks.csv with wave + exec_mode columns |
| +- Generate per-role agent instructions dynamically |
| +- User validates task breakdown (skip if -y) |
| |
| Phase 2: Wave Execution Engine (Extended) |
| +- For each wave (1..N): |
| | +- Execute pre-wave interactive tasks (if any) |
| | +- Build wave CSV (filter csv-wave tasks for this wave) |
| | +- Inject previous findings into prev_context column |
| | +- spawn_agents_on_csv(wave CSV) |
| | +- Execute post-wave interactive tasks (if any) |
| | +- Merge all results into master tasks.csv |
| | +- Check: any failed? -> skip dependents |
| +- discoveries.ndjson shared across all modes (append-only) |
| |
| Phase 3: Post-Wave Interactive (Completion Action) |
| +- Pipeline completion report |
| +- Interactive completion choice (Archive/Keep/Export) |
| +- Final aggregation / report |
| |
| Phase 4: Results Aggregation |
| +- Export final results.csv |
| +- Generate context.md with all findings |
| +- Display summary: completed/failed/skipped per wave |
| +- Offer: view results | retry failed | done |
| |
+-------------------------------------------------------------------+
+---------------------------------------------------+
| Skill(skill="team-coordinate") |
| args="task description" |
+-------------------+-------------------------------+
|
Orchestration Mode (auto -> coordinator)
|
Coordinator (built-in)
Phase 0-5 orchestration
|
+-------+-------+-------+-------+
v v v v v
[team-worker agents, each loaded with a dynamic role-spec]
(roles generated at runtime from task analysis)
CLI Tools (callable by any worker):
ccw cli --mode analysis - analysis and exploration
ccw cli --mode write - code generation and modification
```
---
## Task Classification Rules
Each task is classified by `exec_mode`:
| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, needs clarification, revision cycles |
**Classification Decision**:
| Task Property | Classification |
|---------------|---------------|
| Single-pass code implementation | `csv-wave` |
| Single-pass analysis or documentation | `csv-wave` |
| Research with defined scope | `csv-wave` |
| Testing with known targets | `csv-wave` |
| Design requiring iterative refinement | `interactive` |
| Plan requiring user approval checkpoint | `interactive` |
| Revision cycle (fix-verify loop) | `interactive` |
---
## CSV Schema
### tasks.csv (Master State)
```csv
id,title,description,role,responsibility_type,output_type,deps,context_from,exec_mode,wave,status,findings,artifacts_produced,error
"RESEARCH-001","Investigate auth patterns","Research JWT authentication patterns and best practices","researcher","orchestration","artifact","","","csv-wave","1","pending","","",""
"IMPL-001","Implement auth module","Build JWT authentication middleware","developer","code-gen","codebase","RESEARCH-001","RESEARCH-001","csv-wave","2","pending","","",""
"TEST-001","Validate auth implementation","Write and run tests for auth module","tester","validation","artifact","IMPL-001","IMPL-001","csv-wave","3","pending","","",""
```
---
## Shared Constants
| Constant | Value |
|----------|-------|
| Session prefix | `TC` |
| Session path | `.workflow/.team/TC-<slug>-<date>/` |
| Worker agent | `team-worker` |
| Message bus | `mcp__ccw-tools__team_msg(session_id=<session-id>, ...)` |
| CLI analysis | `ccw cli --mode analysis` |
| CLI write | `ccw cli --mode write` |
| Max roles | 5 |
## Role Router
This skill is **coordinator-only**. Workers do NOT invoke this skill -- they are spawned as `team-worker` agents directly.
### Input Parsing
Parse `$ARGUMENTS`. No `--role` needed -- always routes to coordinator.
### Role Registry
Only coordinator is statically registered. All other roles are dynamic, stored as role-specs in session.
| Role | File | Type |
|------|------|------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | built-in orchestrator |
| (dynamic) | `<session>/role-specs/<role-name>.md` | runtime-generated role-spec |
### CLI Tool Usage
Workers can use CLI tools for analysis and code operations:
| Tool | Purpose |
|------|---------|
| ccw cli --mode analysis | Analysis, exploration, pattern discovery |
| ccw cli --mode write | Code generation, modification, refactoring |
### Dispatch
Always route to coordinator. Coordinator reads `roles/coordinator/role.md` and executes its phases.
### Orchestration Mode
User just provides task description.
**Invocation**: `Skill(skill="team-coordinate", args="task description")`
**Lifecycle**:
```
User provides task description
-> coordinator Phase 1: task analysis (detect capabilities, build dependency graph)
-> coordinator Phase 2: generate role-specs + initialize session
-> coordinator Phase 3: create task chain from dependency graph
-> coordinator Phase 4: spawn first batch workers (background) -> STOP
-> Worker executes -> callback -> coordinator advances next step
-> Loop until pipeline complete -> Phase 5 report + completion action
```
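The callback-driven advance step in the lifecycle above can be sketched in plain JavaScript. This is an illustrative sketch only -- the `advance` helper and the task shape are assumptions, not part of the skill's API:

```javascript
// When a worker callback arrives, mark its task completed and return the
// tasks that just became executable (all dependencies completed).
function advance(tasks, completedId) {
  const task = tasks.find(t => t.id === completedId)
  if (task) task.status = 'completed'
  return tasks.filter(t =>
    t.status === 'pending' &&
    (t.deps || '').split(';').filter(Boolean)
      .every(d => tasks.find(x => x.id === d)?.status === 'completed')
  )
}

const tasks = [
  { id: 'RESEARCH-001', deps: '', status: 'completed' },
  { id: 'IMPL-001', deps: 'RESEARCH-001', status: 'in_progress' },
  { id: 'TEST-001', deps: 'IMPL-001', status: 'pending' }
]
const ready = advance(tasks, 'IMPL-001')
// ready now holds TEST-001, the next task the coordinator would spawn
```

The coordinator then spawns a worker for each returned task and pauses again until the next callback.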
**Columns**:
| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (PREFIX-NNN format) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description with goal, steps, success criteria |
| `role` | Input | Dynamic role name (researcher, developer, analyst, etc.) |
| `responsibility_type` | Input | `orchestration`, `read-only`, `code-gen`, `code-gen-docs`, `validation` |
| `output_type` | Input | `artifact` (session files), `codebase` (project files), `mixed` |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `artifacts_produced` | Output | Semicolon-separated paths of produced artifacts |
| `error` | Output | Error message if failed (empty if success) |
### Per-Wave CSV (Temporary)
Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column (csv-wave tasks only).
**User Commands** (wake paused coordinator):
| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph, no advancement |
| `resume` / `continue` | Check worker states, advance next step |
| `revise <TASK-ID> [feedback]` | Revise specific task with optional feedback |
| `feedback <text>` | Inject feedback into active pipeline |
| `improve [dimension]` | Auto-improve weakest quality dimension |
---
## Agent Registry (Interactive Agents)
| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| Plan Reviewer | agents/plan-reviewer.md | 2.3 (send_input cycle) | Review and approve plans before execution waves | pre-wave |
| Completion Handler | agents/completion-handler.md | 2.3 (send_input cycle) | Handle pipeline completion action (Archive/Keep/Export) | standalone |
> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload.
## Coordinator Spawn Template
### v2 Worker Spawn (all roles)
---
## Output Artifacts
| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `task-analysis.json` | Phase 0/1 output: capabilities, dependency graph, roles | Created in Phase 1 |
| `role-instructions/` | Dynamically generated per-role instruction templates | Created in Phase 1 |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |
---
## Session Structure
When coordinator spawns workers, use `team-worker` agent with role-spec path:
```
.workflow/.csv-wave/{session-id}/
+-- tasks.csv # Master state (all tasks, both modes)
+-- results.csv # Final results export
+-- discoveries.ndjson # Shared discovery board (all agents)
+-- context.md # Human-readable report
+-- task-analysis.json # Phase 1 analysis output
+-- wave-{N}.csv # Temporary per-wave input (csv-wave only)
+-- role-instructions/ # Dynamically generated instruction templates
| +-- researcher.md
| +-- developer.md
| +-- ...
+-- artifacts/ # All deliverables from workers
| +-- research-findings.md
| +-- implementation-summary.md
| +-- ...
+-- interactive/ # Interactive task artifacts
| +-- {id}-result.json
+-- wisdom/ # Cross-task knowledge
+-- learnings.md
    +-- decisions.md
```

```
spawn_agent({
agent_type: "team_worker",
items: [
{ type: "text", text: `## Role Assignment
role: <role>
role_spec: <session-folder>/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file to load Phase 2-4 domain instructions.` },
{ type: "text", text: `## Task Context
task_id: <task-id>
title: <task-title>
description: <task-description>
pipeline_phase: <pipeline-phase>` },
{ type: "text", text: `## Upstream Context
<prev_context>` }
]
})
```
After spawning, use `wait_agent({ ids: [...], timeout_ms: 900000 })` to collect results, then `close_agent({ id })` each worker.
**Inner Loop roles** (role has 2+ serial same-prefix tasks): Set `inner_loop: true`. The team-worker agent handles the loop internally.
**Single-task roles**: Set `inner_loop: false`.
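The inner-loop decision can be sketched as follows. This assumes `PREFIX-NNN` task ids and a `role` field on each task; the helper name `decideInnerLoop` is illustrative, not part of the skill:

```javascript
// A role gets inner_loop=true when it owns 2+ tasks sharing the same id
// prefix that form a serial chain (each depends on its predecessor).
function decideInnerLoop(tasks, role) {
  const byPrefix = {}
  for (const t of tasks.filter(t => t.role === role)) {
    const prefix = t.id.split('-')[0]
    if (!byPrefix[prefix]) byPrefix[prefix] = []
    byPrefix[prefix].push(t)
  }
  return Object.values(byPrefix).some(group =>
    group.length >= 2 &&
    group.slice(1).every((t, i) => (t.deps || '').split(';').includes(group[i].id))
  )
}

const demo = [
  { id: 'IMPL-001', role: 'developer', deps: '' },
  { id: 'IMPL-002', role: 'developer', deps: 'IMPL-001' },
  { id: 'TEST-001', role: 'tester', deps: 'IMPL-002' }
]
// developer owns a serial IMPL chain -> inner loop; tester has one task -> no loop
```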
---
## Implementation
## Completion Action
### Session Initialization
When pipeline completes (all tasks done), coordinator presents an interactive choice:
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3
const requirement = $ARGUMENTS
.replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
.trim()
const slug = requirement.toLowerCase()
.replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
.substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `tc-${slug}-${dateStr}`
const sessionFolder = `.workflow/.csv-wave/${sessionId}`
Bash(`mkdir -p ${sessionFolder}/artifacts ${sessionFolder}/role-instructions ${sessionFolder}/interactive ${sessionFolder}/wisdom`)
// Initialize discoveries.ndjson
Write(`${sessionFolder}/discoveries.ndjson`, '')
// Initialize wisdom files
Write(`${sessionFolder}/wisdom/learnings.md`, '# Learnings\n')
Write(`${sessionFolder}/wisdom/decisions.md`, '# Decisions\n')
```
```
request_user_input({
questions: [{
question: "Team pipeline complete. What would you like to do?",
header: "Completion",
multiSelect: false,
options: [
{ label: "Archive & Clean (Recommended)", description: "Archive session, clean up team" },
{ label: "Keep Active", description: "Keep session for follow-up work" },
{ label: "Export Results", description: "Export deliverables to target directory, then clean" }
]
}]
})
```
---
### Phase 0: Pre-Wave Interactive (Requirement Clarification)
**Objective**: Parse user task, clarify ambiguities, prepare for decomposition.
**Workflow**:
1. **Parse user task description** from $ARGUMENTS
2. **Check for existing sessions** (continue mode):
- Scan `.workflow/.csv-wave/tc-*/tasks.csv` for sessions with pending tasks
- If `--continue`: resume the specified or most recent session, skip to Phase 2
- If active session found: ask user whether to resume or start new
3. **Clarify if ambiguous** (skip if AUTO_YES):
```javascript
request_user_input({
questions: [{
question: "Please confirm the task scope and deliverables.",
header: "Scope",
id: "task_scope",
options: [
{ label: "Proceed (Recommended)", description: "Task is clear enough" },
{ label: "Narrow scope", description: "Specify files/modules/areas" },
{ label: "Add constraints", description: "Timeline, tech stack, style" }
]
}]
})
```
4. **Output**: Refined requirement string for Phase 1
**Success Criteria**:
- Refined requirements available for Phase 1 decomposition
- Existing session detected and handled if applicable
### Action Handlers
| Choice | Steps |
|--------|-------|
| Archive & Clean | Update session status="completed" -> output final summary with artifact paths |
| Keep Active | Update session status="paused" -> output: "Resume with: Skill(skill='team-coordinate', args='resume')" |
| Export Results | request_user_input(target path) -> copy artifacts to target -> Archive & Clean |
---
## Specs Reference
| Spec | Purpose |
|------|---------|
| [specs/pipelines.md](specs/pipelines.md) | Dynamic pipeline model, task naming, dependency graph |
| [specs/role-spec-template.md](specs/role-spec-template.md) | Template for dynamic role-spec generation |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality thresholds and scoring dimensions |
| [specs/knowledge-transfer.md](specs/knowledge-transfer.md) | Context transfer protocols between roles |
## Session Directory
---
### Phase 1: Requirement -> CSV + Classification
**Objective**: Analyze task, detect capabilities, build dependency graph, generate tasks.csv and role instructions.
**Decomposition Rules**:
1. **Signal Detection** -- scan task description for capability keywords:
| Signal | Keywords | Capability | Prefix | Responsibility Type |
|--------|----------|------------|--------|---------------------|
| Research | investigate, explore, compare, survey, find, research, discover | researcher | RESEARCH | orchestration |
| Writing | write, draft, document, article, report, summarize | writer | DRAFT | code-gen-docs |
| Coding | implement, build, code, fix, refactor, develop, create, migrate | developer | IMPL | code-gen |
| Design | design, architect, plan, structure, blueprint, schema | designer | DESIGN | orchestration |
| Analysis | analyze, review, audit, assess, evaluate, inspect, diagnose | analyst | ANALYSIS | read-only |
| Testing | test, verify, validate, QA, quality, check, coverage | tester | TEST | validation |
| Planning | plan, breakdown, organize, schedule, decompose, roadmap | planner | PLAN | orchestration |
2. **Dependency Graph** -- build DAG using natural ordering tiers:
| Tier | Capabilities | Description |
|------|-------------|-------------|
| 0 | researcher, planner | Knowledge gathering / planning |
| 1 | designer | Design (requires tier 0 if present) |
| 2 | writer, developer | Creation (requires design/plan if present) |
| 3 | analyst, tester | Validation (requires artifacts to validate) |
3. **Role Minimization** -- merge overlapping capabilities, cap at 5 roles
4. **Key File Inference** -- extract nouns from task description, map to likely file paths
5. **output_type derivation**:
| Task Signal | output_type |
|-------------|-------------|
| "write report", "analyze", "research" | `artifact` |
| "update code", "modify", "fix bug" | `codebase` |
| "implement feature + write summary" | `mixed` |
**Classification Rules**:
| Task Property | exec_mode |
|---------------|-----------|
| Single-pass implementation/analysis/documentation | `csv-wave` |
| Needs iterative user approval | `interactive` |
| Fix-verify revision cycle | `interactive` |
| Standard research, coding, testing | `csv-wave` |
**Wave Computation**: Kahn's BFS topological sort with depth tracking.
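The wave computation can be sketched as a runnable version of Kahn's layered BFS; the helper name `computeWaves` is illustrative. Each task's wave is the BFS depth at which its in-degree reaches zero, and any tasks never dequeued indicate a cycle:

```javascript
// Kahn's BFS topological sort with depth tracking: wave = BFS level (1-based).
// Throws if a circular dependency leaves tasks undequeued.
function computeWaves(tasks) {
  const indeg = {}, children = {}, wave = {}
  for (const t of tasks) { indeg[t.id] = 0; children[t.id] = [] }
  for (const t of tasks) {
    for (const d of (t.deps || '').split(';').filter(Boolean)) {
      indeg[t.id]++
      children[d].push(t.id)
    }
  }
  let frontier = tasks.filter(t => indeg[t.id] === 0).map(t => t.id)
  let depth = 1, seen = 0
  while (frontier.length) {
    const next = []
    for (const id of frontier) {
      wave[id] = depth
      seen++
      for (const c of children[id]) if (--indeg[c] === 0) next.push(c)
    }
    frontier = next
    depth++
  }
  if (seen !== tasks.length) throw new Error('Circular dependency detected')
  return wave
}

const waves = computeWaves([
  { id: 'RESEARCH-001', deps: '' },
  { id: 'DESIGN-001', deps: 'RESEARCH-001' },
  { id: 'IMPL-001', deps: 'RESEARCH-001' },
  { id: 'TEST-001', deps: 'IMPL-001' }
])
```

A task enters a level only after all of its dependencies are processed, so the level equals the longest dependency path, which is exactly the wave ordering the engine needs.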
```javascript
// After task analysis, generate dynamic role instruction templates
for (const role of analysisResult.roles) {
const instruction = generateRoleInstruction(role, sessionFolder)
Write(`${sessionFolder}/role-instructions/${role.name}.md`, instruction)
}
// Generate tasks.csv from dependency graph
const tasks = buildTasksCsv(analysisResult)
Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))
Write(`${sessionFolder}/task-analysis.json`, JSON.stringify(analysisResult, null, 2))
```
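`parseCsv` / `toCsv` are used throughout but never defined in this document. A minimal sketch, assuming single-line fields with embedded quotes doubled (a real implementation should also handle embedded newlines); the helper names follow the document's usage:

```javascript
// Serialize rows: quote every value, double embedded quotes.
function toCsv(rows) {
  const cols = Object.keys(rows[0])
  const esc = v => `"${String(v ?? '').replace(/"/g, '""')}"`
  return [cols.join(','), ...rows.map(r => cols.map(c => esc(r[c])).join(','))].join('\n')
}

// Parse one line with a small state machine (handles quoted commas).
function parseCsvLine(line) {
  const out = []
  let cur = '', inQ = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (inQ) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++ }
      else if (ch === '"') inQ = false
      else cur += ch
    } else if (ch === '"') inQ = true
    else if (ch === ',') { out.push(cur); cur = '' }
    else cur += ch
  }
  out.push(cur)
  return out
}

function parseCsv(text) {
  const [head, ...lines] = text.trim().split('\n')
  const cols = parseCsvLine(head)
  return lines.map(l => {
    const vals = parseCsvLine(l)
    return Object.fromEntries(cols.map((c, i) => [c, vals[i] ?? '']))
  })
}

const roundTrip = parseCsv(toCsv([{ id: 'IMPL-001', title: 'Say "hi"', deps: '' }]))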
```
.workflow/.team/TC-<slug>-<date>/
+-- team-session.json # Session state + dynamic role registry
+-- task-analysis.json # Phase 1 output: capabilities, dependency graph
+-- role-specs/ # Dynamic role-spec definitions (generated Phase 2)
| +-- <role-1>.md # Lightweight: frontmatter + Phase 2-4 only
| +-- <role-2>.md
+-- artifacts/ # All MD deliverables from workers
| +-- <artifact>.md
+-- .msg/ # Team message bus + state
| +-- messages.jsonl # Message log
| +-- meta.json # Session metadata + cross-role state
+-- wisdom/ # Cross-task knowledge
| +-- learnings.md
| +-- decisions.md
| +-- issues.md
+-- explorations/ # Shared explore cache
| +-- cache-index.json
| +-- explore-<angle>.json
+-- discussions/ # Inline discuss records
| +-- <round>.md
```
**User Validation**: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).
**Success Criteria**:
- tasks.csv created with valid schema, wave, and exec_mode assignments
- Role instruction templates generated in role-instructions/
- task-analysis.json written
- No circular dependencies
- User approved (or AUTO_YES)
---
### Phase 2: Wave Execution Engine (Extended)
**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.
```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
const maxWave = Math.max(...tasks.map(t => t.wave))
for (let wave = 1; wave <= maxWave; wave++) {
  console.log(`\nWave ${wave}/${maxWave}`)
  // 1. Separate tasks by exec_mode
  const waveTasks = tasks.filter(t => t.wave === wave && t.status === 'pending')
  const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
  const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')
  // 2. Check dependencies -- skip tasks whose deps failed
  for (const task of waveTasks) {
    const depIds = (task.deps || '').split(';').filter(Boolean)
    const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
    if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
      task.status = 'skipped'
      task.error = `Dependency failed: ${depIds.filter((id, i) =>
        ['failed','skipped'].includes(depStatuses[i])).join(', ')}`
    }
  }
  // 3. Execute pre-wave interactive tasks (e.g., plan approval)
  const preWaveInteractive = interactiveTasks.filter(t => t.status === 'pending')
  for (const task of preWaveInteractive) {
    // Read agent definition
    Read(`agents/plan-reviewer.md`)
    const agent = spawn_agent({
      message: `## TASK ASSIGNMENT\n\n### MANDATORY FIRST STEPS\n1. Read: ${sessionFolder}/discoveries.ndjson\n\nGoal: ${task.description}\nScope: ${task.title}\nSession: ${sessionFolder}\n\n### Previous Context\n${buildPrevContext(task, tasks)}`
    })
    const result = wait({ ids: [agent], timeout_ms: 600000 })
    if (result.timed_out) {
      send_input({ id: agent, message: "Please finalize and output current findings." })
      wait({ ids: [agent], timeout_ms: 120000 })
    }
    Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
      task_id: task.id, status: "completed", findings: parseFindings(result),
      timestamp: getUtc8ISOString()
    }))
    close_agent({ id: agent })
    task.status = 'completed'
    task.findings = parseFindings(result)
  }
  // 4. Build prev_context for csv-wave tasks
  const pendingCsvTasks = csvTasks.filter(t => t.status === 'pending')
  for (const task of pendingCsvTasks) {
    task.prev_context = buildPrevContext(task, tasks)
  }
  if (pendingCsvTasks.length > 0) {
    // 5. Write wave CSV
    Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingCsvTasks))
    // 6. Determine instruction for this wave (use role-specific instruction)
    // Group tasks by role, build combined instruction
    const waveInstruction = buildWaveInstruction(pendingCsvTasks, sessionFolder, wave)
    // 7. Execute wave via spawn_agents_on_csv
    spawn_agents_on_csv({
      csv_path: `${sessionFolder}/wave-${wave}.csv`,
      id_column: "id",
      instruction: waveInstruction,
      max_concurrency: maxConcurrency,
      max_runtime_seconds: 900,
      output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
      output_schema: {
        type: "object",
        properties: {
          id: { type: "string" },
          status: { type: "string", enum: ["completed", "failed"] },
          findings: { type: "string" },
          artifacts_produced: { type: "string" },
          error: { type: "string" }
        }
      }
    })
    // 8. Merge results into master CSV
    const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
    for (const r of results) {
      const t = tasks.find(t => t.id === r.id)
      if (t) Object.assign(t, r)
    }
  }
  // 9. Update master CSV
  Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))
  // 10. Cleanup temp files
  Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)
  // 11. Display wave summary
  const completed = waveTasks.filter(t => t.status === 'completed').length
  const failed = waveTasks.filter(t => t.status === 'failed').length
  const skipped = waveTasks.filter(t => t.status === 'skipped').length
  console.log(`Wave ${wave} Complete: ${completed} completed, ${failed} failed, ${skipped} skipped`)
}
```
### team-session.json Schema
```json
{
  "session_id": "TC-<slug>-<date>",
  "task_description": "<original user input>",
  "status": "active | paused | completed",
  "team_name": "<team-name>",
  "roles": [
    {
      "name": "<role-name>",
      "prefix": "<PREFIX>",
      "responsibility_type": "<type>",
      "inner_loop": false,
      "role_spec": "role-specs/<role-name>.md"
    }
  ],
  "pipeline": {
    "dependency_graph": {},
    "tasks_total": 0,
    "tasks_completed": 0
  },
  "active_workers": [],
  "completed_tasks": [],
  "completion_action": "interactive",
  "created_at": "<timestamp>"
}
```
**Success Criteria**:
- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when predecessor failed
- discoveries.ndjson accumulated across all waves and mechanisms
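The skip-on-failure criterion above can be sketched as a per-wave pass; the helper name `applySkips` is illustrative. A single pass per wave suffices because waves are processed in dependency order, so a failed predecessor's status is already merged before its dependents are visited:

```javascript
// Mark pending tasks skipped when any dependency failed or was skipped,
// recording which dependencies caused the skip.
function applySkips(tasks) {
  for (const task of tasks) {
    if (task.status !== 'pending') continue
    const bad = (task.deps || '').split(';').filter(Boolean).filter(id => {
      const dep = tasks.find(t => t.id === id)
      return dep && ['failed', 'skipped'].includes(dep.status)
    })
    if (bad.length) {
      task.status = 'skipped'
      task.error = `Dependency failed: ${bad.join(', ')}`
    }
  }
  return tasks
}

const skipDemo = applySkips([
  { id: 'RESEARCH-001', deps: '', status: 'failed' },
  { id: 'IMPL-001', deps: 'RESEARCH-001', status: 'pending' },
  { id: 'DOC-001', deps: '', status: 'pending' }
])
// IMPL-001 is skipped; DOC-001 (no failed deps) still runs
```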
---
## Session Resume
Coordinator supports `resume` / `continue` for interrupted sessions:
### Phase 3: Post-Wave Interactive (Completion Action)
**Objective**: Pipeline completion report and interactive completion choice.
```javascript
// 1. Generate pipeline summary
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
const completed = tasks.filter(t => t.status === 'completed')
const failed = tasks.filter(t => t.status === 'failed')
console.log(`
============================================
TASK COMPLETE
Deliverables:
${completed.map(t => ` - ${t.id}: ${t.title} (${t.role})`).join('\n')}
Pipeline: ${completed.length}/${tasks.length} tasks
Duration: <elapsed>
Session: ${sessionFolder}
============================================
`)
// 2. Completion action
if (!AUTO_YES) {
const choice = request_user_input({
questions: [{
question: "Team pipeline complete. Choose next action.",
header: "Done",
id: "completion",
options: [
{ label: "Archive (Recommended)", description: "Archive session, output final summary" },
{ label: "Keep Active", description: "Keep session for follow-up work" },
{ label: "Retry Failed", description: "Re-run failed tasks" }
]
}]
})
// Handle choice accordingly
}
```
**Success Criteria**:
- Post-wave interactive processing complete
- User informed of results
---
### Phase 4: Results Aggregation
**Objective**: Generate final results and human-readable report.
```javascript
// 1. Export results.csv
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)
// 2. Generate context.md
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
let contextMd = `# Team Coordinate Report\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Date**: ${getUtc8ISOString().substring(0, 10)}\n\n`
contextMd += `## Summary\n`
contextMd += `| Status | Count |\n|--------|-------|\n`
contextMd += `| Completed | ${tasks.filter(t => t.status === 'completed').length} |\n`
contextMd += `| Failed | ${tasks.filter(t => t.status === 'failed').length} |\n`
contextMd += `| Skipped | ${tasks.filter(t => t.status === 'skipped').length} |\n\n`
const maxWave = Math.max(...tasks.map(t => t.wave))
contextMd += `## Wave Execution\n\n`
for (let w = 1; w <= maxWave; w++) {
const waveTasks = tasks.filter(t => t.wave === w)
contextMd += `### Wave ${w}\n\n`
for (const t of waveTasks) {
const icon = t.status === 'completed' ? '[DONE]' : t.status === 'failed' ? '[FAIL]' : '[SKIP]'
contextMd += `${icon} **${t.title}** [${t.role}] ${t.findings || ''}\n\n`
}
}
Write(`${sessionFolder}/context.md`, contextMd)
// 3. Display final summary
console.log(`Results exported to: ${sessionFolder}/results.csv`)
console.log(`Report generated at: ${sessionFolder}/context.md`)
```
**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated
- Summary displayed to user
---
## Shared Discovery Board Protocol
All agents (csv-wave and interactive) share a single `discoveries.ndjson` file for cross-task knowledge exchange.
**Format**: One JSON object per line (NDJSON):
```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"RESEARCH-001","type":"pattern_found","data":{"pattern_name":"Repository Pattern","location":"src/repos/","description":"Data access layer uses repository pattern"}}
{"ts":"2026-03-08T10:05:00Z","worker":"IMPL-001","type":"file_modified","data":{"file":"src/auth/jwt.ts","change":"Added JWT middleware","lines_added":45}}
```
**Discovery Types**:
| Type | Data Schema | Description |
|------|-------------|-------------|
| `pattern_found` | `{pattern_name, location, description}` | Design pattern identified |
| `file_modified` | `{file, change, lines_added}` | File change recorded |
| `dependency_found` | `{from, to, type}` | Dependency relationship discovered |
| `issue_found` | `{file, line, severity, description}` | Issue or bug discovered |
| `decision_made` | `{decision, rationale, impact}` | Design decision recorded |
| `artifact_produced` | `{name, path, producer, type}` | Deliverable created |
**Protocol**:
1. Agents MUST read discoveries.ndjson at start of execution
2. Agents MUST append relevant discoveries during execution
3. Agents MUST NOT modify or delete existing entries
4. Deduplication by `{type, data.file, data.pattern_name}` key
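A tolerant board reader honoring rules 3-4 might look like the following sketch (the `loadDiscoveries` helper is illustrative, not part of the skill): it skips malformed lines rather than failing, and deduplicates by the `{type, data.file, data.pattern_name}` key:

```javascript
// Parse discoveries.ndjson: ignore corrupt lines, keep first entry per key.
function loadDiscoveries(ndjson) {
  const seen = new Set()
  const out = []
  for (const line of ndjson.split('\n')) {
    if (!line.trim()) continue
    let entry
    try { entry = JSON.parse(line) } catch { continue } // tolerate corrupt lines
    const key = [entry.type, entry.data?.file ?? '', entry.data?.pattern_name ?? ''].join('|')
    if (seen.has(key)) continue
    seen.add(key)
    out.push(entry)
  }
  return out
}

const board = loadDiscoveries([
  '{"ts":"t1","worker":"RESEARCH-001","type":"file_modified","data":{"file":"src/auth/jwt.ts"}}',
  'not valid json',
  '{"ts":"t2","worker":"IMPL-001","type":"file_modified","data":{"file":"src/auth/jwt.ts"}}'
].join('\n'))
// the corrupt line is dropped and the duplicate key collapses to one entry
```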
---
## Dynamic Role Instruction Generation
The coordinator generates role-specific instruction templates during Phase 1. Each template is written to `role-instructions/{role-name}.md` and used as the `instruction` parameter for `spawn_agents_on_csv`.
**Generation Rules**:
1. Each instruction must be self-contained (agent has no access to master CSV)
2. Use `{column_name}` placeholders for CSV column substitution
3. Include session folder path as literal (not placeholder)
4. Include mandatory discovery board read/write steps
5. Include role-specific execution guidance based on responsibility_type
6. Include output schema matching tasks.csv output columns
See `instructions/agent-instruction.md` for the base instruction template that is customized per role.
**Session Resume Steps**:
1. Scan `.workflow/.team/TC-*/team-session.json` for active/paused sessions
2. Multiple matches -> request_user_input for selection
3. Audit task states -> reconcile session state <-> task status
4. Reset in_progress -> pending (interrupted tasks)
5. Rebuild team and spawn needed workers only
6. Create missing tasks, set dependencies
7. Kick first executable task -> Phase 4 coordination loop
---
## Error Handling
| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input; close the agent if it remains unresponsive |
| Interactive agent failed | Mark as failed, skip dependents |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| No capabilities detected | Default to single `general` role with TASK prefix |
| All capabilities merge to one | Valid: single-role execution, reduced overhead |
| Task description too vague | request_user_input for clarification in Phase 0 |
| Continue mode: no session found | List available sessions, prompt user to select |
| Role instruction generation fails | Fall back to generic instruction template |
---
## Core Rules
1. **Start Immediately**: First action is session initialization, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when interaction pattern requires it
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson
7. **Skip on Failure**: If a dependency failed, skip the dependent task
8. **Dynamic Roles**: All worker roles are generated at runtime from task analysis -- no static role registry
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped
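Rules 2 and 7 rest on wave computation over the dependency graph. A sketch, assuming tasks carry `id` and `deps` fields as in the master CSV; circular dependencies abort, matching the error table above:

```javascript
// Compute dependency waves (Kahn-style levels): wave N contains tasks whose
// longest dependency chain has length N. Detects cycles and aborts.
function computeWaves(tasks) {
  const level = {};
  const byId = Object.fromEntries(tasks.map((t) => [t.id, t]));
  const depth = (id, seen = new Set()) => {
    if (seen.has(id)) throw new Error("circular dependency at " + id);
    if (id in level) return level[id];
    seen.add(id);
    const deps = byId[id].deps || [];
    level[id] = deps.length ? 1 + Math.max(...deps.map((d) => depth(d, seen))) : 0;
    seen.delete(id);
    return level[id];
  };
  tasks.forEach((t) => depth(t.id));
  const waves = [];
  for (const t of tasks) (waves[level[t.id]] ||= []).push(t.id);
  return waves;
}
```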
---
## Coordinator Role Constraints (Main Agent)
**CRITICAL**: The coordinator (main agent executing this skill) is responsible for **orchestration only**, NOT implementation.
15. **Coordinator Does NOT Execute Code**: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
- Spawns agents with task assignments
- Waits for agent callbacks
- Merges results and coordinates workflow
- Manages workflow transitions between phases
16. **Patient Waiting is Mandatory**: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
- Wait patiently for `wait()` calls to complete
- NOT skip workflow steps due to perceived delays
- NOT assume agents have failed just because they're taking time
- Trust the timeout mechanisms defined in the skill
17. **Use send_input for Clarification**: When agents need guidance or appear stuck, the coordinator MUST:
- Use `send_input()` to ask questions or provide clarification
- NOT skip the agent or move to next phase prematurely
- Give agents opportunity to respond before escalating
- Example: `send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })`
18. **No Workflow Shortcuts**: The coordinator MUST NOT:
- Skip phases or stages defined in the workflow
- Bypass required approval or review steps
- Execute dependent tasks before prerequisites complete
- Assume task completion without explicit agent callback
- Make up or fabricate agent results
19. **Respect Long-Running Processes**: This is a complex multi-agent workflow that requires patience:
- Total execution time may range from 30-90 minutes or longer
- Each phase may take 10-30 minutes depending on complexity
- The coordinator must remain active and attentive throughout the entire process
- Do not terminate or skip steps due to time concerns
| Scenario | Resolution |
|----------|------------|
| Unknown command | Error with available command list |
| Dynamic role-spec not found | Error, coordinator may need to regenerate |
| Command file not found | Fallback to inline execution |
| CLI tool fails | Worker proceeds with direct implementation, logs warning |
| Explore cache corrupt | Clear cache, re-explore |
| Fast-advance spawns wrong task | Coordinator reconciles on next callback |
| capability_gap reported | Coordinator generates new role-spec via handleAdapt |
| Completion action fails | Default to Keep Active, log warning |


@@ -1,127 +0,0 @@
# Completion Handler Agent
Interactive agent for handling pipeline completion actions. Presents a results summary and manages Archive/Keep/Export choices.
## Identity
- **Type**: `interactive`
- **Role File**: `agents/completion-handler.md`
- **Responsibility**: Pipeline completion reporting and cleanup action
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Read final tasks.csv to compile completion summary
- Present deliverables list with paths
- Execute chosen completion action
- Produce structured output following template
### MUST NOT
- Skip the MANDATORY FIRST STEPS role loading
- Delete session data without user confirmation
- Produce unstructured output
- Modify task artifacts
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | built-in | Load tasks.csv, artifacts |
| `request_user_input` | built-in | Get completion choice |
| `Write` | built-in | Store completion result |
| `Bash` | built-in | Archive or export operations |
---
## Execution
### Phase 1: Summary Generation
**Objective**: Compile pipeline completion summary
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| tasks.csv | Yes | Master state with all results |
| artifacts/ | No | Deliverable files |
| discoveries.ndjson | No | Shared discoveries |
**Steps**:
1. Read tasks.csv, count completed/failed/skipped
2. List all produced artifacts with paths
3. Summarize discoveries
4. Calculate pipeline duration if timestamps available
**Output**: Completion summary
---
### Phase 2: Completion Choice
**Objective**: Execute user's chosen completion action
**Steps**:
1. Present completion choice:
```javascript
request_user_input({
questions: [{
question: "Team pipeline complete. What would you like to do?",
header: "Completion",
id: "completion_action",
options: [
{ label: "Archive & Clean (Recommended)", description: "Mark session complete, output final summary" },
{ label: "Keep Active", description: "Keep session for follow-up work" },
{ label: "Export Results", description: "Export deliverables to target directory" }
]
}]
})
```
2. Handle choice:
| Choice | Steps |
|--------|-------|
| Archive & Clean | Write completion status, output artifact paths |
| Keep Active | Keep session files, output resume instructions |
| Export Results | Ask target path, copy artifacts, then archive |
**Output**: Completion action result
---
## Structured Output Template
```
## Summary
- Pipeline status: completed
- Tasks: <completed>/<total>
## Deliverables
- <artifact-path-1> (produced by <role>)
- <artifact-path-2> (produced by <role>)
## Action Taken
- Choice: <archive|keep|export>
- Details: <action-specific details>
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| tasks.csv not found | Report error, suggest manual review |
| Export target path invalid | Ask user for valid path |
| Processing failure | Default to Keep Active, log warning |


@@ -1,145 +0,0 @@
# Plan Reviewer Agent
Interactive agent for reviewing and approving plans before execution waves. Used when a task requires a user confirmation checkpoint before proceeding.
## Identity
- **Type**: `interactive`
- **Role File**: `agents/plan-reviewer.md`
- **Responsibility**: Review generated plans, seek user approval, handle revision requests
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Read the plan artifact being reviewed
- Present a clear summary to the user
- Wait for user approval before reporting complete
- Produce structured output following template
- Include file:line references in findings
### MUST NOT
- Skip the MANDATORY FIRST STEPS role loading
- Approve plans without user confirmation
- Modify the plan artifact directly
- Produce unstructured output
- Exceed defined scope boundaries
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | built-in | Load plan artifacts and context |
| `request_user_input` | built-in | Get user approval or revision feedback |
| `Write` | built-in | Store review result |
### Tool Usage Patterns
**Read Pattern**: Load context files before review
```
Read("<session>/artifacts/<plan>.md")
Read("<session>/discoveries.ndjson")
```
**Write Pattern**: Store review result
```
Write("<session>/interactive/<task-id>-result.json", <result>)
```
---
## Execution
### Phase 1: Context Loading
**Objective**: Load the plan artifact and supporting context
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| Plan artifact | Yes | The plan document to review |
| discoveries.ndjson | No | Shared discoveries for context |
| Previous task findings | No | Upstream task results |
**Steps**:
1. Extract session path from task assignment
2. Read the plan artifact referenced in the task description
3. Read discoveries.ndjson for additional context
4. Summarize key aspects of the plan
**Output**: Plan summary ready for user review
---
### Phase 2: User Review
**Objective**: Present plan to user and get approval
**Steps**:
1. Display plan summary with key decisions and trade-offs
2. Present approval choice:
```javascript
request_user_input({
questions: [{
question: "Review the plan and decide:",
header: "Plan Review",
id: "plan_review",
options: [
{ label: "Approve (Recommended)", description: "Proceed with execution" },
{ label: "Revise", description: "Request changes to the plan" },
{ label: "Abort", description: "Cancel the pipeline" }
]
}]
})
```
3. Handle response:
| Response | Action |
|----------|--------|
| Approve | Report approved status |
| Revise | Collect revision feedback, report revision needed |
| Abort | Report abort status |
**Output**: Review decision with details
---
## Structured Output Template
```
## Summary
- Plan reviewed: <plan-name>
- Decision: <approved|revision-needed|aborted>
## Findings
- Key strength 1: description
- Key concern 1: description
## Decision Details
- User choice: <choice>
- Feedback: <user feedback if revision>
## Open Questions
1. Any unresolved items from review
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Plan artifact not found | Report in Open Questions, ask user for path |
| User does not respond | Timeout, report partial with "awaiting-review" status |
| Processing failure | Output partial results with clear status indicator |


@@ -1,184 +0,0 @@
# Agent Instruction Template -- Team Coordinate
Base instruction template for CSV wave agents. The orchestrator dynamically customizes this per role during Phase 1, writing role-specific versions to `role-instructions/{role-name}.md`.
## Purpose
| Phase | Usage |
|-------|-------|
| Phase 1 | Coordinator generates per-role instruction from this template |
| Phase 2 | Injected as `instruction` parameter to `spawn_agents_on_csv` |
---
## Base Instruction Template
```markdown
## TASK ASSIGNMENT -- Team Coordinate
### MANDATORY FIRST STEPS
1. Read shared discoveries: <session-folder>/discoveries.ndjson (if exists, skip if not)
2. Read project context: .workflow/project-tech.json (if exists)
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Role**: {role}
**Responsibility**: {responsibility_type}
**Output Type**: {output_type}
### Task Description
{description}
### Previous Tasks' Findings (Context)
{prev_context}
---
## Execution Protocol
1. **Read discoveries**: Load <session-folder>/discoveries.ndjson for shared exploration findings
2. **Use context**: Apply previous tasks' findings from prev_context above
3. **Execute task**:
- Read target files referenced in description
- Follow the execution steps outlined in the TASK section of description
- Produce deliverables matching the EXPECTED section of description
- Verify output matches success criteria
4. **Share discoveries**: Append exploration findings to shared board:
```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> <session-folder>/discoveries.ndjson
```
5. **Report result**: Return JSON via report_agent_job_result
### Discovery Types to Share
- `pattern_found`: {pattern_name, location, description} -- Design pattern identified in codebase
- `file_modified`: {file, change, lines_added} -- File change performed by this agent
- `dependency_found`: {from, to, type} -- Dependency relationship between components
- `issue_found`: {file, line, severity, description} -- Issue or bug discovered
- `decision_made`: {decision, rationale, impact} -- Design decision made during execution
- `artifact_produced`: {name, path, producer, type} -- Deliverable file created
---
## Output (report_agent_job_result)
Return JSON:
{
"id": "{id}",
"status": "completed" | "failed",
"findings": "Key discoveries and implementation notes (max 500 chars)",
"artifacts_produced": "semicolon-separated paths of produced files",
"error": ""
}
```
---
## Role-Specific Customization
The coordinator generates per-role instruction variants during Phase 1. Each variant adds role-specific execution guidance to Step 3.
### For Research / Exploration Roles
Add to execution protocol step 3:
```
3. **Execute**:
- Define exploration scope from description
- Use code search tools to find relevant patterns and implementations
- Survey approaches, compare alternatives
- Document findings with file:line references
- Write research artifact to <session-folder>/artifacts/
```
### For Code Implementation Roles
Add to execution protocol step 3:
```
3. **Execute**:
- Read upstream design/spec artifacts referenced in description
- Read target files listed in description
- Apply code changes following project conventions
- Validate changes compile/lint correctly
- Run relevant tests if available
- Write implementation summary to <session-folder>/artifacts/
```
### For Analysis / Audit Roles
Add to execution protocol step 3:
```
3. **Execute**:
- Read target files/modules for analysis
- Apply analysis criteria systematically
- Classify findings by severity (critical, high, medium, low)
- Include file:line references in findings
- Write analysis report to <session-folder>/artifacts/
```
### For Test / Validation Roles
Add to execution protocol step 3:
```
3. **Execute**:
- Read source files to understand implementation
- Identify test cases from description
- Generate test files following project test conventions
- Run tests and capture results
- Write test report to <session-folder>/artifacts/
```
### For Documentation / Writing Roles
Add to execution protocol step 3:
```
3. **Execute**:
- Read source code and existing documentation
- Generate documentation following template in description
- Ensure accuracy against current implementation
- Include code examples where appropriate
- Write document to <session-folder>/artifacts/
```
### For Design / Architecture Roles
Add to execution protocol step 3:
```
3. **Execute**:
- Read upstream research findings
- Analyze existing codebase structure
- Design component interactions and data flow
- Document architecture decisions with rationale
- Write design document to <session-folder>/artifacts/
```
---
## Quality Requirements
All agents must verify before reporting complete:
| Requirement | Criteria |
|-------------|----------|
| Files produced | Verify all claimed artifacts exist via Read |
| Files modified | Verify content actually changed |
| Findings accuracy | Findings reflect actual work done |
| Discovery sharing | At least 1 discovery shared to board |
| Error reporting | Non-empty error field if status is failed |
---
## Placeholder Reference
| Placeholder | Resolved By | When |
|-------------|------------|------|
| `<session-folder>` | Skill designer (Phase 1) | Literal path baked into instruction |
| `{id}` | spawn_agents_on_csv | Runtime from CSV row |
| `{title}` | spawn_agents_on_csv | Runtime from CSV row |
| `{description}` | spawn_agents_on_csv | Runtime from CSV row |
| `{role}` | spawn_agents_on_csv | Runtime from CSV row |
| `{responsibility_type}` | spawn_agents_on_csv | Runtime from CSV row |
| `{output_type}` | spawn_agents_on_csv | Runtime from CSV row |
| `{prev_context}` | spawn_agents_on_csv | Runtime from CSV row |


@@ -0,0 +1,247 @@
# Command: analyze-task
## Purpose
Parse user task description -> detect required capabilities -> build dependency graph -> design dynamic roles with role-spec metadata. Outputs structured task-analysis.json with frontmatter fields for role-spec generation.
## CRITICAL CONSTRAINT
**TEXT-LEVEL analysis only. MUST NOT read source code or explore codebase.**
**Allowed:**
- Parse user task description text
- request_user_input for clarification
- Keyword-to-capability mapping
- Write `task-analysis.json`
If task context requires codebase knowledge, set `needs_research: true`. Phase 2 will spawn a researcher worker.
## When to Use
| Trigger | Condition |
|---------|-----------|
| New task | Coordinator Phase 1 receives task description |
| Re-analysis | User provides revised requirements |
| Adapt | handleAdapt extends analysis for new capability |
## Strategy
- **Delegation**: Inline execution (coordinator processes directly)
- **Mode**: Text-level analysis only (no codebase reading)
- **Output**: `<session>/task-analysis.json`
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | User input from Phase 1 | Yes |
| Clarification answers | request_user_input results (if any) | No |
| Session folder | From coordinator Phase 2 | Yes |
## Phase 3: Task Analysis
### Step 1: Signal Detection
Scan task description for capability keywords:
| Signal | Keywords | Capability | Prefix | Responsibility Type |
|--------|----------|------------|--------|---------------------|
| Research | investigate, explore, compare, survey, find, research, discover, benchmark, study | researcher | RESEARCH | orchestration |
| Writing | write, draft, document, article, report, blog, describe, explain, summarize, content | writer | DRAFT | code-gen (docs) |
| Coding | implement, build, code, fix, refactor, develop, create app, program, migrate, port | developer | IMPL | code-gen (code) |
| Design | design, architect, plan, structure, blueprint, model, schema, wireframe, layout | designer | DESIGN | orchestration |
| Analysis | analyze, review, audit, assess, evaluate, inspect, examine, diagnose, profile | analyst | ANALYSIS | read-only |
| Testing | test, verify, validate, QA, quality, check, assert, coverage, regression | tester | TEST | validation |
| Planning | plan, breakdown, organize, schedule, decompose, roadmap, strategy, prioritize | planner | PLAN | orchestration |
**Multi-match**: A task may trigger multiple capabilities.
**No match**: Default to a single `general` capability with `TASK` prefix.
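The keyword scan reduces to a table lookup. Keyword lists below are abbreviated; the signal table above is authoritative:

```javascript
// Capability detection over the task description text.
const SIGNALS = {
  researcher: ["investigate", "explore", "research", "survey", "benchmark"],
  developer: ["implement", "build", "fix", "refactor", "migrate"],
  tester: ["test", "verify", "validate", "coverage"],
};

function detectCapabilities(description) {
  const text = description.toLowerCase();
  const hits = Object.entries(SIGNALS)
    .filter(([, keywords]) => keywords.some((kw) => text.includes(kw)))
    .map(([capability]) => capability);
  return hits.length ? hits : ["general"]; // no match -> single general/TASK role
}
```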
### Step 2: Artifact Inference
Each capability produces default output artifacts:
| Capability | Default Artifact | Format |
|------------|-----------------|--------|
| researcher | Research findings | `<session>/artifacts/research-findings.md` |
| writer | Written document(s) | `<session>/artifacts/<doc-name>.md` |
| developer | Code implementation | Source files + `<session>/artifacts/implementation-summary.md` |
| designer | Design document | `<session>/artifacts/design-spec.md` |
| analyst | Analysis report | `<session>/artifacts/analysis-report.md` |
| tester | Test results | `<session>/artifacts/test-report.md` |
| planner | Execution plan | `<session>/artifacts/execution-plan.md` |
### Step 2.5: Key File Inference
For each task, infer relevant files based on capability type and task keywords:
| Capability | File Inference Strategy |
|------------|------------------------|
| researcher | Extract domain keywords -> map to likely directories (e.g., "auth" -> `src/auth/**`, `middleware/auth.ts`) |
| developer | Extract feature/module keywords -> map to source files (e.g., "payment" -> `src/payments/**`, `types/payment.ts`) |
| designer | Look for architecture/config keywords -> map to config/schema files |
| analyst | Extract target keywords -> map to files under analysis |
| tester | Extract test target keywords -> map to source + test files |
| writer | Extract documentation target -> map to relevant source files for context |
| planner | No specific files (planning is abstract) |
**Inference rules:**
- Extract nouns and verbs from task description
- Match against common directory patterns (src/, lib/, components/, services/, utils/)
- Include related type definition files (types/, *.d.ts)
- For "fix bug" tasks, include error-prone areas (error handlers, validation)
- For "implement feature" tasks, include similar existing features as reference
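A sketch of the inference rules above, with illustrative keyword-to-path entries -- the mappings are examples, not a scan of any real project:

```javascript
// Map domain keywords in the task description to likely file globs.
function inferKeyFiles(description) {
  const text = description.toLowerCase();
  const globs = [];
  // Illustrative domain-to-path table; a real coordinator would build this
  // from project-tech.json or directory listings.
  const map = {
    auth: ["src/auth/**", "middleware/auth.ts"],
    payment: ["src/payments/**", "types/payment.ts"],
  };
  for (const [keyword, paths] of Object.entries(map)) {
    if (text.includes(keyword)) globs.push(...paths);
  }
  // "fix bug" tasks pull in error-prone areas (glob is illustrative).
  if (/\bfix\b|\bbug\b/.test(text)) globs.push("src/**/errors/**");
  return globs;
}
```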
### Step 3: Dependency Graph Construction
Build a DAG of work streams using natural ordering tiers:
| Tier | Capabilities | Description |
|------|-------------|-------------|
| 0 | researcher, planner | Knowledge gathering / planning |
| 1 | designer | Design (requires context from tier 0 if present) |
| 2 | writer, developer | Creation (requires design/plan if present) |
| 3 | analyst, tester | Validation (requires artifacts to validate) |
### Step 4: Complexity Scoring
| Factor | Weight | Condition |
|--------|--------|-----------|
| Capability count | +1 each | Number of distinct capabilities |
| Cross-domain factor | +2 | Capabilities span 3+ tiers |
| Parallel tracks | +1 each | Independent parallel work streams |
| Serial depth | +1 per level | Longest dependency chain length |
| Total Score | Complexity | Role Limit |
|-------------|------------|------------|
| 1-3 | Low | 1-2 roles |
| 4-6 | Medium | 2-3 roles |
| 7+ | High | 3-5 roles |
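The two tables combine into one scoring function. Weights follow the first table; the tier span is a count supplied by the caller, and the upper role limit per level is taken from the second table:

```javascript
// Complexity scoring per the weight table above.
function scoreComplexity({ capabilityCount, tiersSpanned, parallelTracks, serialDepth }) {
  let score = capabilityCount;        // +1 per distinct capability
  if (tiersSpanned >= 3) score += 2;  // cross-domain factor
  score += parallelTracks;            // +1 per independent parallel track
  score += serialDepth;               // +1 per level of longest dependency chain
  const level = score <= 3 ? "low" : score <= 6 ? "medium" : "high";
  const roleLimit = { low: 2, medium: 3, high: 5 }[level];
  return { score, level, roleLimit };
}
```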
### Step 5: Role Minimization
Apply merging rules to reduce role count (cap at 5).
### Step 6: Role-Spec Metadata Assignment
For each role, determine frontmatter and generation hints:
| Field | Derivation |
|-------|------------|
| `prefix` | From capability prefix (e.g., RESEARCH, DRAFT, IMPL) |
| `inner_loop` | `true` if role has 2+ serial same-prefix tasks |
| `CLI tools` | Suggested, not mandatory -- coordinator may adjust based on task needs |
| `pattern_hint` | Reference pattern name from role-spec-template (research/document/code/analysis/validation) -- guides coordinator's Phase 2-4 composition, NOT a rigid template selector |
| `output_type` | `artifact` (new files in session/artifacts/) / `codebase` (modify existing project files) / `mixed` (both) -- determines verification strategy in Behavioral Traits |
| `message_types.success` | `<prefix>_complete` |
| `message_types.error` | `error` |
**output_type derivation**:
| Task Signal | output_type | Example |
|-------------|-------------|---------|
| "write report", "analyze", "research" | `artifact` | New analysis-report.md in session |
| "update docs", "modify code", "fix bug" | `codebase` | Modify existing project files |
| "implement feature + write summary" | `mixed` | Code changes + implementation summary |
## Phase 4: Output
Write `<session-folder>/task-analysis.json`:
```json
{
"task_description": "<original user input>",
"capabilities": [
{
"name": "researcher",
"prefix": "RESEARCH",
"responsibility_type": "orchestration",
"tasks": [
{
"id": "RESEARCH-001",
"goal": "What this task achieves and why",
"steps": [
"step 1: specific action with clear verb",
"step 2: specific action with clear verb",
"step 3: specific action with clear verb"
],
"key_files": [
"src/path/to/relevant.ts",
"src/path/to/other.ts"
],
"upstream_artifacts": [],
"success_criteria": "Measurable completion condition",
"constraints": "Scope limits, focus areas"
}
],
"artifacts": ["research-findings.md"]
}
],
"dependency_graph": {
"RESEARCH-001": [],
"DRAFT-001": ["RESEARCH-001"],
"ANALYSIS-001": ["DRAFT-001"]
},
"roles": [
{
"name": "researcher",
"prefix": "RESEARCH",
"responsibility_type": "orchestration",
"task_count": 1,
"inner_loop": false,
"role_spec_metadata": {
"CLI tools": ["explore"],
"pattern_hint": "research",
"output_type": "artifact",
"message_types": {
"success": "research_complete",
"error": "error"
}
}
}
],
"complexity": {
"capability_count": 2,
"cross_domain_factor": false,
"parallel_tracks": 0,
"serial_depth": 2,
"total_score": 3,
"level": "low"
},
"needs_research": false,
"artifacts": [
{ "name": "research-findings.md", "producer": "researcher", "path": "artifacts/research-findings.md" }
]
}
```
## Complexity Interpretation
**CRITICAL**: Complexity score is for **role design optimization**, NOT for skipping team workflow.
| Complexity | Team Structure | Coordinator Action |
|------------|----------------|-------------------|
| Low (1-2 roles) | Minimal team | Generate 1-2 role-specs, create team, spawn workers |
| Medium (2-3 roles) | Standard team | Generate role-specs, create team, spawn workers |
| High (3-5 roles) | Full team | Generate role-specs, create team, spawn workers |
**All complexity levels use team_worker architecture**:
- Single-role tasks still spawn team_worker agent
- Coordinator NEVER executes task work directly
- Team infrastructure provides session management, message bus, fast-advance
**Purpose of complexity score**:
- Determine optimal role count (merge vs separate)
- Guide dependency graph design
- Inform user about task scope
- NOT for deciding whether to use team workflow
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No capabilities detected | Default to single `general` role with TASK prefix |
| Circular dependency in graph | Break cycle at lowest-tier edge, warn |
| Task description too vague | Return minimal analysis, coordinator will request_user_input |
| All capabilities merge into one | Valid -- single-role execution via team_worker |


@@ -0,0 +1,126 @@
# Command: dispatch
## Purpose
Create task chains from dynamic dependency graphs. Builds pipelines from the task-analysis.json produced by Phase 1. Workers are spawned as team_worker agents with role-spec paths.
## When to Use
| Trigger | Condition |
|---------|-----------|
| After analysis | Phase 1 complete, task-analysis.json exists |
| After adapt | handleAdapt created new roles, needs new tasks |
| Re-dispatch | Pipeline restructuring (rare) |
## Strategy
- **Delegation**: Inline execution (coordinator processes directly)
- **Inputs**: task-analysis.json + team-session.json
- **Output**: tasks.json with dependency chains
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task analysis | `<session-folder>/task-analysis.json` | Yes |
| Session file | `<session-folder>/team-session.json` | Yes |
| Role registry | `team-session.json#roles` | Yes |
| Scope | User requirements description | Yes |
## Phase 3: Task Chain Creation
### Workflow
1. **Read dependency graph** from `task-analysis.json#dependency_graph`
2. **Topological sort** tasks to determine creation order
3. **Validate** all task roles exist in role registry
4. **Build tasks array** (in topological order):
```json
[
{
"id": "<PREFIX>-<NNN>",
"title": "<PREFIX>-<NNN>",
"description": "PURPOSE: <goal> | Success: <success_criteria>\nTASK:\n - <step 1>\n - <step 2>\n - <step 3>\nCONTEXT:\n - Session: <session-folder>\n - Upstream artifacts: <artifact-1.md>, <artifact-2.md>\n - Key files: <file1>, <file2>\n - Shared state: team_msg(operation=\"get_state\", session_id=<session-id>)\nEXPECTED: <deliverable path> + <quality criteria>\nCONSTRAINTS: <scope limits>\n---\nInnerLoop: <true|false>\nRoleSpec: <session-folder>/role-specs/<role-name>.md",
"status": "pending",
"role": "<role-name>",
"prefix": "<PREFIX>",
"deps": ["<dependency-list from graph>"],
"findings": "",
"error": ""
}
]
```
5. **Write tasks.json** with the complete array
6. **Update team-session.json** with pipeline and tasks_total
7. **Validate** created chain
### Task Description Template
Every task description includes structured fields for clarity:
```
PURPOSE: <goal from task-analysis.json#tasks[].goal> | Success: <success_criteria from task-analysis.json#tasks[].success_criteria>
TASK:
- <step 1 from task-analysis.json#tasks[].steps[]>
- <step 2 from task-analysis.json#tasks[].steps[]>
- <step 3 from task-analysis.json#tasks[].steps[]>
CONTEXT:
- Session: <session-folder>
- Upstream artifacts: <comma-separated list from task-analysis.json#tasks[].upstream_artifacts[]>
- Key files: <comma-separated list from task-analysis.json#tasks[].key_files[]>
- Shared state: team_msg(operation="get_state", session_id=<session-id>)
EXPECTED: <artifact path from task-analysis.json#capabilities[].artifacts[]> + <quality criteria based on capability type>
CONSTRAINTS: <constraints from task-analysis.json#tasks[].constraints>
---
InnerLoop: <true|false>
RoleSpec: <session-folder>/role-specs/<role-name>.md
```
**Field Mapping**:
- `PURPOSE`: From `task-analysis.json#capabilities[].tasks[].goal` + `success_criteria`
- `TASK`: From `task-analysis.json#capabilities[].tasks[].steps[]`
- `CONTEXT.Upstream artifacts`: From `task-analysis.json#capabilities[].tasks[].upstream_artifacts[]`
- `CONTEXT.Key files`: From `task-analysis.json#capabilities[].tasks[].key_files[]`
- `EXPECTED`: From `task-analysis.json#capabilities[].artifacts[]` + quality criteria
- `CONSTRAINTS`: From `task-analysis.json#capabilities[].tasks[].constraints`
### InnerLoop Flag Rules
| Condition | InnerLoop |
|-----------|-----------|
| Role has 2+ serial same-prefix tasks | true |
| Role has 1 task | false |
| Tasks are parallel (no dependency between them) | false |
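The flag rules reduce to checking for a dependency edge between two tasks of the same role, assuming tasks carry `id` and `deps` as in tasks.json:

```javascript
// Derive the InnerLoop flag for one role's task list: true only when the role
// has 2+ tasks and at least one depends on another task of the same role.
function innerLoopFlag(roleTasks) {
  if (roleTasks.length < 2) return false;
  const ids = new Set(roleTasks.map((t) => t.id));
  return roleTasks.some((t) => (t.deps || []).some((d) => ids.has(d)));
}
```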
### Dependency Validation
| Check | Criteria |
|-------|----------|
| No orphan tasks | Every task is reachable from at least one root |
| No circular deps | Topological sort succeeds without cycle |
| All roles valid | Every task role exists in team-session.json#roles |
| All deps valid | Every deps entry references an existing task id |
| Session reference | Every task description contains `Session: <session-folder>` |
| RoleSpec reference | Every task description contains `RoleSpec: <path>` |
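The unknown-dependency and circular-dependency checks can be sketched with Kahn's algorithm over `dependency_graph`; an empty result means the chain is safe to create:

```javascript
// Validate a dependency graph ({id: [depIds]}) before writing tasks.json.
function validateGraph(graph) {
  const errors = [];
  const ids = new Set(Object.keys(graph));
  for (const [id, deps] of Object.entries(graph)) {
    for (const d of deps) {
      if (!ids.has(d)) errors.push(`${id}: unknown dependency ${d}`);
    }
  }
  // Kahn's algorithm: if not every node drains, a cycle exists.
  const indeg = Object.fromEntries([...ids].map((id) => [id, graph[id].length]));
  const queue = [...ids].filter((id) => indeg[id] === 0);
  let drained = 0;
  while (queue.length) {
    const n = queue.shift();
    drained++;
    for (const [id, deps] of Object.entries(graph)) {
      if (deps.includes(n) && --indeg[id] === 0) queue.push(id);
    }
  }
  if (drained !== ids.size) errors.push("circular dependency detected");
  return errors;
}
```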
## Phase 4: Validation
| Check | Criteria |
|-------|----------|
| Task count | Matches dependency_graph node count |
| Dependencies | Every deps entry references an existing task id |
| Role assignment | Each task role is in role registry |
| Session reference | Every task description contains `Session:` |
| Pipeline integrity | No disconnected subgraphs (warn if found) |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Circular dependency detected | Report cycle, halt task creation |
| Role not in role registry | Error, coordinator must fix roles first |
| Task creation fails | Log error, report to coordinator |
| Duplicate task id | Skip creation, log warning |
| Empty dependency graph | Error, task analysis may have failed |


@@ -0,0 +1,327 @@
# Command: monitor
## Purpose
Event-driven pipeline coordination with Spawn-and-Stop pattern. Role names are read from `team-session.json#roles`. Workers are spawned as `team_worker` agents with role-spec paths. Includes `handleComplete` for pipeline completion action and `handleAdapt` for mid-pipeline capability gap handling.
## When to Use
| Trigger | Condition |
|---------|-----------|
| Worker result | Result from wait_agent contains [role-name] from session roles |
| User command | "check", "status", "resume", "continue" |
| Capability gap | Worker reports capability_gap |
| Pipeline spawn | After dispatch, initial spawn needed |
| Pipeline complete | All tasks done |
## Strategy
- **Delegation**: Inline execution with handler routing
- **Beat model**: ONE_STEP_PER_INVOCATION -- one handler then STOP
- **Workers**: Spawned as team_worker via spawn_agent
## Constants
| Constant | Value | Description |
|----------|-------|-------------|
| SPAWN_MODE | spawn_agent | All workers spawned via `spawn_agent` |
| ONE_STEP_PER_INVOCATION | true | Coordinator does one operation then STOPS |
| FAST_ADVANCE_AWARE | true | Workers may skip coordinator for simple linear successors |
| WORKER_AGENT | team_worker | All workers spawned as team_worker agents |
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/team-session.json` | Yes |
| Task list | Read tasks.json | Yes |
| Active workers | session.active_workers[] | Yes |
| Role registry | session.roles[] | Yes |
**Dynamic role resolution**: Known worker roles are loaded from `session.roles[].name`. Role-spec paths are in `session.roles[].role_spec`.
## Phase 3: Handler Routing
### Wake-up Source Detection
Parse `$ARGUMENTS` to determine handler:
| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[<role-name>]` from session roles | handleCallback |
| 2 | Contains "capability_gap" | handleAdapt |
| 3 | Contains "check" or "status" | handleCheck |
| 4 | Contains "resume", "continue", or "next" | handleResume |
| 5 | Pipeline detected as complete | handleComplete |
| 6 | None of the above (initial spawn after dispatch) | handleSpawnNext |
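The priority table above can be sketched as a first-match router; the `pipelineComplete` flag stands in for the "all tasks done" check and the role list comes from `session.roles` (signature is an assumption):

```typescript
// Minimal sketch: wake-up source detection, first matching rule wins.
function routeHandler(args: string, roles: string[], pipelineComplete: boolean): string {
  if (roles.some(r => args.includes(`[${r}]`))) return "handleCallback"; // priority 1
  if (args.includes("capability_gap")) return "handleAdapt";             // priority 2
  if (/\b(check|status)\b/i.test(args)) return "handleCheck";            // priority 3
  if (/\b(resume|continue|next)\b/i.test(args)) return "handleResume";   // priority 4
  if (pipelineComplete) return "handleComplete";                         // priority 5
  return "handleSpawnNext";                                              // priority 6
}
```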
---
### Handler: handleCallback
Worker completed a task. Verify completion, update state, auto-advance.
```
Receive result from wait_agent for [<role>]
+- Find matching active worker by role (from session.roles)
+- Is this a progress update (not final)? (Inner Loop intermediate task completion)
| +- YES -> Update session state, do NOT remove from active_workers -> STOP
+- Task status = completed?
| +- YES -> remove from active_workers -> update session
| | +- Close agent: close_agent({ id: <agentId> })
| | +- -> handleSpawnNext
| +- NO -> progress message, do not advance -> STOP
+- No matching worker found
+- Scan all active workers for completed tasks
+- Found completed -> process each -> handleSpawnNext
+- None completed -> STOP
```
**Fast-advance reconciliation**: A worker may have already spawned its successor via fast-advance. When processing any callback or resume:
1. Read recent `fast_advance` messages from team_msg (type="fast_advance")
2. For each fast_advance message: add the spawned successor to `active_workers` if not already present
3. Check if the expected next task is already `in_progress` (fast-advanced)
4. If yes -> skip spawning that task (already running)
5. If no -> normal handleSpawnNext
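The merge in steps 1-2 can be sketched as follows; the message and worker shapes are assumptions, not the actual team_msg schema:

```typescript
// Minimal sketch: fold fast_advance messages into active_workers, skipping duplicates.
interface FastAdvanceMsg { data: { successor_task: string; agent_id: string; role: string } }
interface Worker { task: string; agentId: string; role: string }

function reconcileFastAdvance(active: Worker[], msgs: FastAdvanceMsg[]): Worker[] {
  const merged = [...active];
  for (const m of msgs) {
    // Only add the spawned successor if no worker already owns that task.
    if (!merged.some(w => w.task === m.data.successor_task)) {
      merged.push({ task: m.data.successor_task, agentId: m.data.agent_id, role: m.data.role });
    }
  }
  return merged;
}
```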
---
### Handler: handleCheck
Read-only status report. No pipeline advancement.
**Output format**:
```
[coordinator] Pipeline Status
[coordinator] Progress: <completed>/<total> (<percent>%)
[coordinator] Execution Graph:
<visual representation of dependency graph with status icons>
done=completed >>>=running o=pending .=not created
[coordinator] Active Workers:
> <subject> (<role>) - running <elapsed> [inner-loop: N/M tasks done]
[coordinator] Ready to spawn: <subjects>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```
Then STOP.
---
### Handler: handleResume
Check active worker completion, process results, advance pipeline.
```
Load active_workers from session
+- No active workers -> handleSpawnNext
+- Has active workers -> check each:
+- status = completed -> mark done, log
+- status = in_progress -> still running, log
+- other status -> worker failure -> reset to pending
After processing:
+- Some completed -> handleSpawnNext
+- All still running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```
---
### Handler: handleSpawnNext
Find all ready tasks, spawn team_worker agents, update session, STOP.
```
Collect task states from tasks.json
+- completedSubjects: status = completed
+- inProgressSubjects: status = in_progress
+- readySubjects: pending + all deps in completedSubjects
Ready tasks found?
+- NONE + work in progress -> report waiting -> STOP
+- NONE + nothing in progress -> PIPELINE_COMPLETE -> handleComplete
+- HAS ready tasks -> for each:
+- Is task role an Inner Loop role AND that role already has an active_worker?
| +- YES -> SKIP spawn (existing worker will pick it up via inner loop)
| +- NO -> normal spawn below
+- Update tasks.json entry status -> "in_progress"
+- team_msg log -> task_unblocked (session_id=<session-id>)
+- Spawn team_worker (see spawn call below)
+- Add to session.active_workers
Update session file -> output summary -> STOP
```
**Spawn worker call** (one per ready task):
```
const agentId = spawn_agent({
agent_type: "team_worker",
items: [{ type: "text", text: `## Role Assignment
role: <role>
role_spec: <session-folder>/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: <team-name>
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file to load Phase 2-4 domain instructions.` }]
})
// Collect results:
const result = wait_agent({ ids: [agentId], timeout_ms: 900000 })
// Process result, update tasks.json
close_agent({ id: agentId })
```
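The ready-task selection at the top of handleSpawnNext can be sketched as a filter over tasks.json; the `Task` shape is illustrative:

```typescript
// Minimal sketch: a task is ready when pending and all deps are completed.
interface Task { id: string; deps: string[]; status: string; }

function readyTasks(tasks: Task[]): Task[] {
  const done = new Set(tasks.filter(t => t.status === "completed").map(t => t.id));
  return tasks.filter(t => t.status === "pending" && t.deps.every(d => done.has(d)));
}
```

An empty result with nothing in progress is the PIPELINE_COMPLETE signal that routes to handleComplete.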
---
### Handler: handleComplete
Pipeline complete. Execute completion action based on session configuration.
```
All tasks completed (no pending, no in_progress)
+- Generate pipeline summary:
| - Deliverables list with paths
| - Pipeline stats (tasks completed, duration)
| - Discussion verdicts (if any)
|
+- Read session.completion_action:
|
+- "interactive":
| request_user_input({
| questions: [{
| question: "Team pipeline complete. What would you like to do?",
| header: "Completion",
| multiSelect: false,
| options: [
| { label: "Archive & Clean (Recommended)", description: "Archive session, clean up team" },
| { label: "Keep Active", description: "Keep session for follow-up work" },
| { label: "Export Results", description: "Export deliverables to target directory" }
| ]
| }]
| })
| +- "Archive & Clean":
| | Update session status="completed"
| | Clean up session
| | Output final summary with artifact paths
| +- "Keep Active":
| | Update session status="paused"
| | Output: "Resume with: Skill(skill='team-coordinate', args='resume')"
| +- "Export Results":
| request_user_input for target directory
| Copy deliverables to target
| Execute Archive & Clean flow
|
+- "auto_archive":
| Execute Archive & Clean without prompt
|
+- "auto_keep":
Execute Keep Active without prompt
```
**Fallback**: If completion action fails, default to Keep Active (session status="paused"), log warning.
---
### Handler: handleAdapt
Handle mid-pipeline capability gap discovery. A worker reports `capability_gap` when it encounters work outside its scope.
**CONSTRAINT**: Maximum 5 worker roles per session. handleAdapt MUST enforce this limit.
```
Parse capability_gap message:
+- Extract: gap_description, requesting_role, suggested_capability
+- Validate gap is genuine:
+- Check existing roles in session.roles -> does any role cover this?
| +- YES -> redirect to that role -> STOP
| +- NO -> genuine gap, proceed to role-spec generation
+- CHECK ROLE COUNT LIMIT (MAX 5 ROLES):
+- Count current roles in session.roles
+- If count >= 5:
+- Attempt to merge new capability into existing role
+- If merge NOT possible -> PAUSE, report to user
+- Generate new role-spec:
1. Read specs/role-spec-template.md
2. Fill template with: frontmatter (role, prefix, inner_loop, message_types) + Phase 2-4 content
3. Write to <session-folder>/role-specs/<new-role>.md
4. Add to session.roles[]
+- Create new task(s) (add to tasks.json)
+- Update team-session.json
+- Spawn new team_worker -> STOP
```
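The gating logic above, including the 5-role cap, can be sketched as a small decision function; the decision labels are illustrative:

```typescript
// Minimal sketch: decide how to handle a capability_gap report.
const MAX_ROLES = 5;

type AdaptDecision = "redirect" | "generate" | "merge_or_pause";

function decideAdapt(existingRoles: string[], coveringRole: string | null): AdaptDecision {
  if (coveringRole) return "redirect";                           // an existing role covers the gap
  if (existingRoles.length >= MAX_ROLES) return "merge_or_pause"; // cap reached: merge, else pause
  return "generate";                                              // genuine gap, room for a new role-spec
}
```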
---
### Worker Failure Handling
When a worker has unexpected status (not completed, not in_progress):
1. Reset task -> pending in tasks.json
2. Log via team_msg (type: error)
3. Report to user: task reset, will retry on next resume
### Fast-Advance Failure Recovery
When coordinator detects a fast-advanced task has failed:
```
handleCallback / handleResume detects:
+- Task is in_progress (was fast-advanced by predecessor)
+- No active_worker entry for this task
+- Resolution:
1. Update tasks.json -> reset task to pending
2. Remove stale active_worker entry (if any)
3. Log via team_msg (type: error)
4. -> handleSpawnNext (will re-spawn the task normally)
```
### Fast-Advance State Sync
On every coordinator wake (handleCallback, handleResume, handleCheck):
1. Read team_msg entries with `type="fast_advance"` since last coordinator wake
2. For each entry: sync `active_workers` with the spawned successor
3. This ensures coordinator's state reflects fast-advance decisions even before the successor's callback arrives
### Consensus-Blocked Handling
```
handleCallback receives message with consensus_blocked flag
+- Route by severity:
+- severity = HIGH
| +- Create REVISION task (same role, incremented suffix)
| +- Max 1 revision per task. If already revised -> PAUSE, escalate to user
+- severity = MEDIUM
| +- Proceed with warning, log to wisdom/issues.md
| +- Normal handleSpawnNext
+- severity = LOW
+- Proceed normally, treat as consensus_reached with notes
```
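The severity routing, with its one-revision limit, can be sketched as follows (outcome labels are illustrative):

```typescript
// Minimal sketch: route a consensus_blocked verdict by severity.
function routeConsensus(severity: "HIGH" | "MEDIUM" | "LOW", alreadyRevised: boolean): string {
  if (severity === "HIGH") {
    // Max 1 revision per task: a second HIGH verdict escalates to the user.
    return alreadyRevised ? "pause_escalate" : "create_revision_task";
  }
  if (severity === "MEDIUM") return "proceed_with_warning"; // also log to wisdom/issues.md
  return "proceed_normally";                                // LOW: consensus_reached with notes
}
```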
## Phase 4: Validation
| Check | Criteria |
|-------|----------|
| Session state consistent | active_workers matches tasks.json in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| Dynamic roles valid | All task roles exist in session.roles |
| Completion detection | readySubjects=0 + inProgressSubjects=0 -> PIPELINE_COMPLETE |
| Fast-advance tracking | Detect tasks already in_progress via fast-advance, sync to active_workers |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-initialization |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |
| Fast-advance conflict | Coordinator reconciles, no duplicate spawns |
| Dynamic role-spec file not found | Error, coordinator must regenerate from task-analysis |
| capability_gap when role limit (5) reached | Attempt merge, else pause for user |
| Completion action fails | Default to Keep Active, log warning |


@@ -0,0 +1,361 @@
---
role: coordinator
---
# Coordinator Role
Orchestrate the team-coordinate workflow: task analysis, dynamic role-spec generation, task dispatching, progress monitoring, session state, and completion action. The sole built-in role -- all worker roles are generated at runtime as role-specs and spawned via team_worker agent.
## Identity
- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **Responsibility**: Analyze task -> Generate role-specs -> Create team -> Dispatch tasks -> Monitor progress -> Completion action -> Report results
## Boundaries
### MUST
- Parse task description (text-level: keyword scanning, capability inference, dependency design)
- Dynamically generate worker role-specs from specs/role-spec-template.md
- Create session and spawn team_worker agents
- Dispatch tasks with proper dependency chains from task-analysis.json
- Monitor progress via worker results and route messages
- Maintain session state persistence (team-session.json)
- Handle capability_gap reports (generate new role-specs mid-pipeline)
- Handle consensus_blocked HIGH verdicts (create revision tasks or pause)
- Detect fast-advance orphans on resume/check and reset to pending
- Execute completion action when pipeline finishes
### MUST NOT
- **Read source code or perform codebase exploration** (delegate to worker roles)
- Execute task work directly (delegate to workers)
- Modify task output artifacts (workers own their deliverables)
- Call implementation agents (code-developer, etc.) directly
- Skip dependency validation when creating task chains
- Generate more than 5 worker roles (merge if exceeded)
- Override consensus_blocked HIGH without user confirmation
- Spawn workers with wrong agent type (MUST use `team_worker`)
---
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| state_update | outbound | Session init, pipeline progress |
| task_unblocked | outbound | Task ready for execution |
| fast_advance | inbound | Worker skipped coordinator |
| capability_gap | inbound | Worker needs new capability |
| error | inbound | Worker failure |
| impl_complete | inbound | Worker task done |
| consensus_blocked | inbound | Discussion verdict conflict |
## Message Bus Protocol
All coordinator state changes MUST be logged to team_msg BEFORE reporting results:
1. `team_msg(operation="log", ...)` -- log the event
2. Report results via output / report_agent_job_result
3. Update tasks.json entry status
Read state before every handler: `team_msg(operation="get_state", session_id=<session-id>)`
---
## Command Execution Protocol
When coordinator needs to execute a command (analyze-task, dispatch, monitor):
1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** -- NOT separate agents or subprocesses
4. **Execute synchronously** - complete the command workflow before proceeding
Example:
```
Phase 1 needs task analysis
-> Read roles/coordinator/commands/analyze-task.md
-> Execute Phase 2 (Context Loading)
-> Execute Phase 3 (Task Analysis)
-> Execute Phase 4 (Output)
-> Continue to Phase 2
```
## Toolbox
| Tool | Type | Purpose |
|------|------|---------|
| commands/analyze-task.md | Command | Task analysis and role design |
| commands/dispatch.md | Command | Task chain creation |
| commands/monitor.md | Command | Pipeline monitoring and handlers |
| team_worker | Subagent | Worker spawning via spawn_agent |
| tasks.json | File | Task lifecycle (create/read/update) |
| team_msg | System | Message bus operations |
| request_user_input | System | User interaction |
---
## Entry Router
When coordinator is invoked, first detect the invocation type:
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker result | Result from wait_agent contains `[role-name]` from session roles | -> handleCallback |
| Status check | Arguments contain "check" or "status" | -> handleCheck |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume |
| Capability gap | Message contains "capability_gap" | -> handleAdapt |
| Pipeline complete | All tasks completed, no pending/in_progress | -> handleComplete |
| Interrupted session | Active/paused session exists in `.workflow/.team/TC-*` | -> Phase 0 (Resume Check) |
| New session | None of above | -> Phase 1 (Task Analysis) |
For callback/check/resume/adapt/complete: load `@commands/monitor.md` and execute the appropriate handler, then STOP.
### Router Implementation
1. **Load session context** (if exists):
- Scan `.workflow/.team/TC-*/team-session.json` for active/paused sessions
- If found, extract `session.roles[].name` for callback detection
2. **Parse $ARGUMENTS** for detection keywords
3. **Route to handler**:
- For monitor handlers: Read `commands/monitor.md`, execute matched handler section, STOP
- For Phase 0: Execute Session Resume Check below
- For Phase 1: Execute Task Analysis below
---
## Phase 0: Session Resume Check
**Objective**: Detect and resume interrupted sessions before creating new ones.
**Workflow**:
1. Scan `.workflow/.team/TC-*/team-session.json` for sessions with status "active" or "paused"
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> request_user_input for user selection
**Session Reconciliation**:
1. Read tasks.json -> get real status of all tasks
2. Reconcile: session.completed_tasks <-> tasks.json status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
4. Detect fast-advance orphans (in_progress without recent activity) -> reset to pending
5. Determine remaining pipeline from reconciled state
6. Rebuild team if disbanded (create session + spawn needed workers only)
7. Create missing tasks (add to tasks.json), set deps
8. Verify dependency chain integrity
9. Update session file with reconciled state
10. Kick first executable task's worker -> Phase 4
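Steps 3-4 of the reconciliation (resetting interrupted work) can be sketched as a pure pass over tasks.json; the `Task` shape is an assumption:

```typescript
// Minimal sketch: any in_progress task found at resume time was interrupted
// (or is a fast-advance orphan), so reset it to pending for re-dispatch.
interface Task { id: string; status: string; }

function resetInterrupted(tasks: Task[]): Task[] {
  return tasks.map(t => (t.status === "in_progress" ? { ...t, status: "pending" } : t));
}
```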
---
## Phase 1: Task Analysis
**Objective**: Parse user task, detect capabilities, build dependency graph, design roles.
**Constraint**: This is TEXT-LEVEL analysis only. No source code reading, no codebase exploration.
**Workflow**:
1. **Parse user task description**
2. **Clarify if ambiguous** via request_user_input:
- What is the scope? (specific files, module, project-wide)
- What deliverables are expected? (documents, code, analysis reports)
- Any constraints? (timeline, technology, style)
3. **Delegate to `@commands/analyze-task.md`**:
- Signal detection: scan keywords -> infer capabilities
- Artifact inference: each capability -> default output type (.md)
- Dependency graph: build DAG of work streams
- Complexity scoring: count capabilities, cross-domain factor, parallel tracks
- Role minimization: merge overlapping, absorb trivial, cap at 5
- **Role-spec metadata**: Generate frontmatter fields (prefix, inner_loop, additional_members, message_types)
4. **Output**: Write `<session>/task-analysis.json`
5. **If `needs_research: true`**: Phase 2 will spawn researcher worker first
**Success**: Task analyzed, capabilities detected, dependency graph built, roles designed with role-spec metadata.
**CRITICAL - Team Workflow Enforcement**:
Regardless of complexity score or role count, coordinator MUST:
- Always proceed to Phase 2 (generate role-specs)
- Always create team and spawn workers via team_worker agent
- NEVER execute task work directly, even for single-role low-complexity tasks
- NEVER skip team workflow based on complexity assessment
**Single-role execution is still team-based** -- just with one worker. The team architecture provides:
- Consistent message bus communication
- Session state management
- Artifact tracking
- Fast-advance capability
- Resume/recovery mechanisms
---
## Phase 2: Generate Role-Specs + Initialize Session
**Objective**: Create session, generate dynamic role-spec files, initialize shared infrastructure.
**Workflow**:
1. Resolve workspace paths (MUST do first):
- `project_root` = result of `Bash("pwd")`
- `skill_root` = `<project_root>/.codex/skills/team-coordinate`
2. **Check `needs_research` flag** from task-analysis.json:
- If `true`: **Spawn researcher worker first** to gather codebase context
- Wait for researcher result via wait_agent
- Merge research findings into task context
- Update task-analysis.json with enriched context
3. **Generate session ID**: `TC-<slug>-<date>` (slug from first 3 meaningful words of task)
4. **Create session folder structure**:
```
.workflow/.team/<session-id>/
+-- role-specs/
+-- artifacts/
+-- wisdom/
+-- explorations/
+-- discussions/
+-- .msg/
```
5. **Create session folder + initialize `tasks.json`** (empty array)
6. **Read `specs/role-spec-template.md`** for Behavioral Traits + Reference Patterns
7. **For each role in task-analysis.json#roles**:
- Fill YAML frontmatter: role, prefix, inner_loop, additional_members, message_types
- **Compose Phase 2-4 content** (NOT copy from template):
- Phase 2: Derive input sources and context loading steps from **task description + upstream dependencies**
- Phase 3: Describe **execution goal** (WHAT to achieve) from task description -- do NOT prescribe specific CLI tool or approach
- Phase 4: Combine **Behavioral Traits** (from template) + **output_type** (from task analysis) to compose verification steps
- Reference Patterns may guide phase structure, but task description determines specific content
- Write generated role-spec to `<session>/role-specs/<role-name>.md`
8. **Register roles** in team-session.json#roles (with `role_spec` path instead of `role_file`)
9. **Initialize shared infrastructure**:
- `wisdom/learnings.md`, `wisdom/decisions.md`, `wisdom/issues.md` (empty with headers)
- `explorations/cache-index.json` (`{ "entries": [] }`)
- `discussions/` (empty directory)
10. **Initialize pipeline metadata** via team_msg:
```typescript
// Use team_msg to write pipeline metadata to .msg/meta.json
// Note: dynamic roles -- replace <placeholders> with actual role list from task-analysis.json
mcp__ccw-tools__team_msg({
operation: "log",
session_id: "<session-id>",
from: "coordinator",
type: "state_update",
summary: "Session initialized",
data: {
pipeline_mode: "<mode>",
pipeline_stages: ["<role1>", "<role2>", "<...dynamic-roles>"],
roles: ["coordinator", "<role1>", "<role2>", "<...dynamic-roles>"],
team_name: "<team-name>" // extracted from session ID or task description
}
})
```
11. **Write team-session.json** with: session_id, task_description, status="active", roles, pipeline (empty), active_workers=[], completion_action="interactive", created_at
**Success**: Session created, role-spec files generated, shared infrastructure initialized.
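The session-ID rule in step 3 (`TC-<slug>-<date>`, slug from the first 3 meaningful words) can be sketched as below; the stop-word list is an assumption:

```typescript
// Minimal sketch: derive a session ID from the task description.
const STOP_WORDS = new Set(["the", "a", "an", "to", "for", "of", "and", "in", "on"]);

function sessionId(task: string, date: string): string {
  const slug = task
    .toLowerCase()
    .split(/[^a-z0-9]+/)
    .filter(w => w && !STOP_WORDS.has(w)) // keep only "meaningful" words
    .slice(0, 3)
    .join("-");
  return `TC-${slug}-${date}`;
}
```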
---
## Phase 3: Create Task Chain
**Objective**: Dispatch tasks based on dependency graph with proper dependencies.
Delegate to `@commands/dispatch.md` which creates the full task chain:
1. Reads dependency_graph from task-analysis.json
2. Topological sorts tasks
3. Builds tasks array and writes to tasks.json with deps
4. Assigns role based on role mapping from task-analysis.json
5. Includes `Session: <session-folder>` in every task description
6. Sets InnerLoop flag for multi-task roles
7. Updates team-session.json with pipeline and tasks_total
**Success**: All tasks created with correct dependency chains, session updated.
---
## Phase 4: Spawn-and-Stop
**Objective**: Spawn first batch of ready workers, then STOP.
**Design**: Spawn-and-Stop + wait_agent pattern, with worker fast-advance.
**Workflow**:
1. Load `@commands/monitor.md`
2. Find tasks with: status=pending, deps all resolved, role assigned
3. For each ready task -> spawn team_worker (see SKILL.md Coordinator Spawn Template)
4. Output status summary with execution graph
5. STOP
**Pipeline advancement** driven by three wake sources:
- Worker result (automatic) -> Entry Router -> handleCallback
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)
---
## Phase 5: Report + Completion Action
**Objective**: Completion report, interactive completion choice, and follow-up options.
**Workflow**:
1. Load session state -> count completed tasks, duration
2. List all deliverables with output paths in `<session>/artifacts/`
3. Include discussion summaries (if inline discuss was used)
4. Summarize wisdom accumulated during execution
5. Output report:
```
[coordinator] ============================================
[coordinator] TASK COMPLETE
[coordinator]
[coordinator] Deliverables:
[coordinator] - <artifact-1.md> (<producer role>)
[coordinator] - <artifact-2.md> (<producer role>)
[coordinator]
[coordinator] Pipeline: <completed>/<total> tasks
[coordinator] Roles: <role-list>
[coordinator] Duration: <elapsed>
[coordinator]
[coordinator] Session: <session-folder>
[coordinator] ============================================
```
6. **Execute Completion Action** (based on session.completion_action):
| Mode | Behavior |
|------|----------|
| `interactive` | request_user_input with Archive/Keep/Export options |
| `auto_archive` | Execute Archive & Clean without prompt |
| `auto_keep` | Execute Keep Active without prompt |
**Interactive handler**: See SKILL.md Completion Action section.
---
## Error Handling
| Error | Resolution |
|-------|------------|
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Dependency cycle | Detect in task analysis, report to user, halt |
| Task description too vague | request_user_input for clarification |
| Session corruption | Attempt recovery, fallback to manual reconciliation |
| Role-spec generation fails | Fall back to single general-purpose role |
| capability_gap reported | handleAdapt: generate new role-spec, create tasks, spawn |
| All capabilities merge to one | Valid: single-role execution, reduced overhead |
| No capabilities detected | Default to single general role with TASK prefix |
| Completion action fails | Default to Keep Active, log warning |


@@ -1,165 +0,0 @@
# Team Coordinate -- CSV Schema
## Master CSV: tasks.csv
### Column Definitions
#### Input Columns (Set by Decomposer)
| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier (PREFIX-NNN) | `"RESEARCH-001"` |
| `title` | string | Yes | Short task title | `"Investigate auth patterns"` |
| `description` | string | Yes | Detailed task description (self-contained) with goal, steps, success criteria, key files | `"PURPOSE: Research JWT auth patterns..."` |
| `role` | string | Yes | Dynamic role name | `"researcher"` |
| `responsibility_type` | enum | Yes | `orchestration`, `read-only`, `code-gen`, `code-gen-docs`, `validation` | `"orchestration"` |
| `output_type` | enum | Yes | `artifact` (session files), `codebase` (project files), `mixed` | `"artifact"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"RESEARCH-001;DESIGN-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"RESEARCH-001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |
#### Computed Columns (Set by Wave Engine)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[RESEARCH-001] Found 3 auth patterns..."` |
#### Output Columns (Set by Agent)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` -> `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Implemented JWT middleware with refresh token support..."` |
| `artifacts_produced` | string | Semicolon-separated paths of produced artifacts | `"artifacts/research-findings.md;src/auth/jwt.ts"` |
| `error` | string | Error message if failed | `""` |
---
### exec_mode Values
| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |
Interactive tasks appear in master CSV for dependency tracking but are NOT included in wave-{N}.csv files.
---
### Dynamic Role Prefixes
| Capability | Prefix | Responsibility Type |
|------------|--------|---------------------|
| researcher | RESEARCH | orchestration |
| writer | DRAFT | code-gen-docs |
| developer | IMPL | code-gen |
| designer | DESIGN | orchestration |
| analyst | ANALYSIS | read-only |
| tester | TEST | validation |
| planner | PLAN | orchestration |
| (default) | TASK | orchestration |
---
### Example Data
```csv
id,title,description,role,responsibility_type,output_type,deps,context_from,exec_mode,wave,status,findings,artifacts_produced,error
"RESEARCH-001","Research auth patterns","PURPOSE: Investigate JWT authentication patterns and industry best practices | Success: Comprehensive findings document with pattern comparison\nTASK:\n- Survey JWT vs session-based auth\n- Compare token refresh strategies\n- Document security considerations\nCONTEXT:\n- Key files: src/auth/*, src/middleware/*\nEXPECTED: artifacts/research-findings.md","researcher","orchestration","artifact","","","csv-wave","1","pending","","",""
"DESIGN-001","Design auth architecture","PURPOSE: Design authentication module architecture based on research | Success: Architecture document with component diagram\nTASK:\n- Define auth module structure\n- Design token lifecycle\n- Plan middleware integration\nCONTEXT:\n- Upstream: RESEARCH-001 findings\nEXPECTED: artifacts/auth-design.md","designer","orchestration","artifact","RESEARCH-001","RESEARCH-001","csv-wave","2","pending","","",""
"IMPL-001","Implement auth module","PURPOSE: Build JWT authentication middleware | Success: Working auth module with tests passing\nTASK:\n- Create JWT utility functions\n- Implement auth middleware\n- Add route guards\nCONTEXT:\n- Upstream: DESIGN-001 architecture\n- Key files: src/auth/*, src/middleware/*\nEXPECTED: Source files + artifacts/implementation-summary.md","developer","code-gen","mixed","DESIGN-001","DESIGN-001","csv-wave","3","pending","","",""
"TEST-001","Test auth implementation","PURPOSE: Validate auth module correctness | Success: All tests pass, coverage >= 80%\nTASK:\n- Write unit tests for JWT utilities\n- Write integration tests for middleware\n- Run test suite\nCONTEXT:\n- Upstream: IMPL-001 implementation\nEXPECTED: artifacts/test-report.md","tester","validation","artifact","IMPL-001","IMPL-001","csv-wave","4","pending","","",""
```
---
### Column Lifecycle
```
Decomposer (Phase 1) Wave Engine (Phase 2) Agent (Execution)
--------------------- -------------------- -----------------
id ----------> id ----------> id
title ----------> title ----------> (reads)
description ----------> description ----------> (reads)
role ----------> role ----------> (reads)
responsibility_type ---> responsibility_type ---> (reads)
output_type ----------> output_type ----------> (reads)
deps ----------> deps ----------> (reads)
context_from----------> context_from----------> (reads)
exec_mode ----------> exec_mode ----------> (reads)
wave ----------> (reads)
prev_context ----------> (reads)
status
findings
artifacts_produced
error
```
---
## Output Schema (JSON)
Agent output via `report_agent_job_result` (csv-wave tasks):
```json
{
"id": "IMPL-001",
"status": "completed",
"findings": "Implemented JWT auth middleware with access/refresh token support. Created 3 files: jwt.ts, auth-middleware.ts, route-guard.ts. All syntax checks pass.",
"artifacts_produced": "artifacts/implementation-summary.md;src/auth/jwt.ts;src/auth/auth-middleware.ts",
"error": ""
}
```
Interactive tasks write their results as structured text or JSON to `interactive/{id}-result.json`.
---
## Discovery Types
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `pattern_found` | `data.pattern_name+data.location` | `{pattern_name, location, description}` | Design pattern identified |
| `file_modified` | `data.file` | `{file, change, lines_added}` | File change recorded |
| `dependency_found` | `data.from+data.to` | `{from, to, type}` | Dependency relationship |
| `issue_found` | `data.file+data.line` | `{file, line, severity, description}` | Issue discovered |
| `decision_made` | `data.decision` | `{decision, rationale, impact}` | Design decision |
| `artifact_produced` | `data.path` | `{name, path, producer, type}` | Deliverable created |
### Discovery NDJSON Format
```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"RESEARCH-001","type":"pattern_found","data":{"pattern_name":"Repository Pattern","location":"src/repos/","description":"Data access layer uses repository pattern"}}
{"ts":"2026-03-08T10:05:00Z","worker":"IMPL-001","type":"file_modified","data":{"file":"src/auth/jwt.ts","change":"Added JWT middleware","lines_added":45}}
{"ts":"2026-03-08T10:10:00Z","worker":"IMPL-001","type":"artifact_produced","data":{"name":"implementation-summary","path":"artifacts/implementation-summary.md","producer":"developer","type":"markdown"}}
```
> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
---
## Cross-Mechanism Context Flow
| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |
---
## Validation Rules
| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Role valid | role matches a generated role-instruction | "No instruction for role: {role}" |
| Cross-mechanism deps | Interactive to CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |
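Several of these rules reduce to a topological-sort check over the task list. A hedged sketch of how a coordinator might apply a subset of them (`validate_tasks` is illustrative, not part of the skill; real validation covers every rule in the table):

```python
def validate_tasks(tasks):
    """Check unique IDs, known deps, self-deps, and cycles; return error strings."""
    errors = []
    ids = [t["id"] for t in tasks]
    for dup in sorted({i for i in ids if ids.count(i) > 1}):
        errors.append(f"Duplicate task ID: {dup}")
    known = set(ids)
    for t in tasks:
        for dep in t.get("deps", []):
            if dep == t["id"]:
                errors.append(f"Self-dependency: {t['id']}")
            elif dep not in known:
                errors.append(f"Unknown dependency: {dep}")
    # Circular deps: Kahn's algorithm must consume every task.
    indeg = {i: 0 for i in known}
    for t in tasks:
        for dep in t.get("deps", []):
            if dep in indeg and dep != t["id"]:
                indeg[t["id"]] += 1
    queue = [i for i, d in indeg.items() if d == 0]
    visited = 0
    while queue:
        node = queue.pop()
        visited += 1
        for t in tasks:
            if node in t.get("deps", []):
                indeg[t["id"]] -= 1
                if indeg[t["id"]] == 0:
                    queue.append(t["id"])
    if visited < len(known):
        cyclic = sorted(i for i, d in indeg.items() if d > 0)
        errors.append("Circular dependency detected involving: " + ", ".join(cyclic))
    return errors
```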



@@ -0,0 +1,111 @@
# Knowledge Transfer Protocols
## 1. Transfer Channels
| Channel | Scope | Mechanism | When to Use |
|---------|-------|-----------|-------------|
| **Artifacts** | Producer -> Consumer | Write to `<session>/artifacts/<name>.md`, consumer reads in Phase 2 | Structured deliverables (reports, plans, specs) |
| **State Updates** | Cross-role | `team_msg(operation="log", type="state_update", data={...})` / `team_msg(operation="get_state", session_id=<session-id>)` | Key findings, decisions, metadata (small, structured data) |
| **Wisdom** | Cross-task | Append to `<session>/wisdom/{learnings,decisions,conventions,issues}.md` | Patterns, conventions, risks discovered during execution |
| **Context Accumulator** | Intra-role (inner loop) | In-memory array, passed to each subsequent task in same-prefix loop | Prior task summaries within same role's inner loop |
| **Exploration Cache** | Cross-role | `<session>/explorations/cache-index.json` + per-angle JSON | Codebase discovery results, prevents duplicate exploration |
## 2. Context Loading Protocol (Phase 2)
Every role MUST load context in this order before starting work.
| Step | Action | Required |
|------|--------|----------|
| 1 | Extract session path from task description | Yes |
| 2 | `team_msg(operation="get_state", session_id=<session-id>)` | Yes |
| 3 | Read artifact files from upstream state's `ref` paths | Yes |
| 4 | Read `<session>/wisdom/*.md` if exists | Yes |
| 5 | Check `<session>/explorations/cache-index.json` before new exploration | If exploring |
| 6 | For inner_loop roles: load context_accumulator from prior tasks | If inner_loop |
**Loading rules**:
- Never skip step 2 -- state contains key decisions and findings
- If `ref` path in state does not exist, log warning and continue
- Wisdom files are append-only -- read all entries, newest last
## 3. Context Publishing Protocol (Phase 4)
| Step | Action | Required |
|------|--------|----------|
| 1 | Write deliverable to `<session>/artifacts/<task-id>-<name>.md` | Yes |
| 2 | Send `team_msg(type="state_update")` with payload (see schema below) | Yes |
| 3 | Append wisdom entries for learnings, decisions, issues found | If applicable |
## 4. State Update Schema
Sent via `team_msg(type="state_update")` on task completion.
```json
{
"status": "task_complete",
"task_id": "<TASK-NNN>",
"ref": "<session>/artifacts/<filename>",
"key_findings": [
"Finding 1",
"Finding 2"
],
"decisions": [
"Decision with rationale"
],
"files_modified": [
"path/to/file.ts"
],
"verification": "self-validated | peer-reviewed | tested"
}
```
**Field rules**:
- `ref`: Always an artifact path, never inline content
- `key_findings`: Max 5 items, each under 100 chars
- `decisions`: Include rationale, not just the choice
- `files_modified`: Only for implementation tasks
- `verification`: One of `self-validated`, `peer-reviewed`, `tested`
**Write state** (namespaced by role):
```
team_msg(operation="log", session_id=<session-id>, from=<role>, type="state_update", data={
"<role_name>": { "key_findings": [...], "scope": "..." }
})
```
**Read state**:
```
team_msg(operation="get_state", session_id=<session-id>)
// Returns merged state from all state_update messages
```
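The exact merge semantics of `get_state` are not pinned down above; assuming it folds every `state_update` payload into one dict with newest-last-wins per key and recursive merge for the per-role namespaces, the behavior can be sketched as:

```python
def merge_state(messages):
    """Fold state_update payloads into one state dict; later values win per key."""
    def deep_merge(base, new):
        for k, v in new.items():
            if isinstance(v, dict) and isinstance(base.get(k), dict):
                deep_merge(base[k], v)   # merge nested role namespaces
            else:
                base[k] = v              # scalar/list: newest wins
        return base

    state = {}
    for msg in messages:
        if msg.get("type") == "state_update":
            deep_merge(state, msg.get("data", {}))
    return state
```

Because roles namespace their updates, two roles never clobber each other; two updates from the same role merge key-by-key.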
## 5. Exploration Cache Protocol
Prevents redundant research across tasks and discussion rounds.
| Step | Action |
|------|--------|
| 1 | Read `<session>/explorations/cache-index.json` |
| 2 | If angle already explored, read cached result from `explore-<angle>.json` |
| 3 | If not cached, perform exploration |
| 4 | Write result to `<session>/explorations/explore-<angle>.json` |
| 5 | Update `cache-index.json` with new entry |
**cache-index.json format**:
```json
{
"entries": [
{
"angle": "competitor-analysis",
"file": "explore-competitor-analysis.json",
"created_by": "RESEARCH-001",
"timestamp": "2026-01-15T10:30:00Z"
}
]
}
```
**Rules**:
- Cache key is the exploration `angle` (normalized to kebab-case)
- Cache entries never expire within a session
- Any role can read cached explorations; only the creator updates them
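The five steps above can be sketched as a single cache-aware wrapper. This is an assumption-laden sketch: `explore_with_cache` and `explore_fn` are hypothetical names, and `created_by` is hardcoded for brevity where a real worker would pass its own task ID:

```python
import json
import os
import time

def explore_with_cache(session_dir, angle, explore_fn):
    """Run explore_fn(angle) only if the angle is not already cached."""
    angle = angle.lower().replace(" ", "-")  # normalize to kebab-case
    explorations = os.path.join(session_dir, "explorations")
    os.makedirs(explorations, exist_ok=True)
    index_path = os.path.join(explorations, "cache-index.json")
    index = {"entries": []}
    if os.path.exists(index_path):
        with open(index_path) as f:
            index = json.load(f)
    for entry in index["entries"]:
        if entry["angle"] == angle:  # cache hit: reuse prior result
            with open(os.path.join(explorations, entry["file"])) as f:
                return json.load(f)
    result = explore_fn(angle)       # cache miss: perform exploration
    fname = f"explore-{angle}.json"
    with open(os.path.join(explorations, fname), "w") as f:
        json.dump(result, f)
    index["entries"].append({
        "angle": angle,
        "file": fname,
        "created_by": "RESEARCH-001",  # simplification: real code passes task ID
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    })
    with open(index_path, "w") as f:
        json.dump(index, f)
    return result
```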



@@ -0,0 +1,97 @@
# Pipeline Definitions — Team Coordinate
## Dynamic Pipeline Model
team-coordinate does NOT have a static pipeline. All pipelines are generated at runtime from task-analysis.json based on the user's task description.
## Pipeline Generation Process
```
Phase 1: analyze-task.md
-> Signal detection -> capability mapping -> dependency graph
-> Output: task-analysis.json
Phase 2: dispatch.md
-> Read task-analysis.json dependency graph
-> Create tasks.json entries per dependency node
-> Set deps chains from graph edges
-> Output: tasks.json with correct DAG
Phase 3-N: monitor.md
-> handleSpawnNext: spawn ready tasks as team-worker agents
-> handleCallback: mark completed, advance pipeline
-> Repeat until all tasks done
```
## Dynamic Task Naming
| Capability | Prefix | Example |
|------------|--------|---------|
| researcher | RESEARCH | RESEARCH-001 |
| developer | IMPL | IMPL-001 |
| analyst | ANALYSIS | ANALYSIS-001 |
| designer | DESIGN | DESIGN-001 |
| tester | TEST | TEST-001 |
| writer | DRAFT | DRAFT-001 |
| planner | PLAN | PLAN-001 |
| (default) | TASK | TASK-001 |
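ID generation from this table is a matter of mapping capability to prefix and zero-padding a counter. A minimal sketch (`next_task_id` is a hypothetical helper name):

```python
# Capability -> task prefix, mirroring the naming table above.
PREFIX_MAP = {
    "researcher": "RESEARCH", "developer": "IMPL", "analyst": "ANALYSIS",
    "designer": "DESIGN", "tester": "TEST", "writer": "DRAFT", "planner": "PLAN",
}

def next_task_id(capability, existing_ids):
    """Return the next zero-padded ID for a capability, e.g. IMPL-002."""
    prefix = PREFIX_MAP.get(capability, "TASK")  # unknown capability -> TASK
    used = [int(tid.rsplit("-", 1)[1]) for tid in existing_ids
            if tid.startswith(prefix + "-")]
    return f"{prefix}-{max(used, default=0) + 1:03d}"
```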
## Dependency Graph Structure
task-analysis.json encodes the pipeline:
```json
{
"dependency_graph": {
"RESEARCH-001": { "role": "researcher", "blockedBy": [], "priority": "P0" },
"IMPL-001": { "role": "developer", "blockedBy": ["RESEARCH-001"], "priority": "P1" },
"TEST-001": { "role": "tester", "blockedBy": ["IMPL-001"], "priority": "P2" }
}
}
```
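With this structure, `handleSpawnNext` only needs to find nodes whose blockers are all complete. A sketch under the assumption that `P0 < P1 < P2` sorts lexically (true for single-digit priorities; `ready_tasks` is an illustrative name):

```python
def ready_tasks(graph, completed):
    """Tasks not yet complete whose blockedBy set is fully satisfied."""
    ready = [tid for tid, node in graph.items()
             if tid not in completed
             and all(dep in completed for dep in node["blockedBy"])]
    # Single-digit priorities sort correctly as strings: P0 < P1 < P2.
    return sorted(ready, key=lambda tid: graph[tid]["priority"])
```

Calling this after every `handleCallback` with the updated `completed` set yields the next wave to spawn.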
## Role-Worker Map
Dynamic — loaded from session role-specs at runtime:
```
<session>/role-specs/<role-name>.md -> team-worker agent
```
Role-spec files contain YAML frontmatter:
```yaml
---
role: <role-name>
prefix: <PREFIX>
inner_loop: <true|false>
message_types:
success: <type>
error: error
---
```
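Since the frontmatter is a flat mapping plus one nested `message_types` block, a worker can extract it without a YAML library. A deliberately minimal sketch (`parse_role_spec` is hypothetical and handles only the shape shown above, not general YAML):

```python
def parse_role_spec(text):
    """Extract the frontmatter of a role-spec file (flat keys + one nested map)."""
    lines = text.splitlines()
    assert lines[0] == "---", "role-spec must start with frontmatter"
    end = lines.index("---", 1)
    meta, current = {}, None
    for line in lines[1:end]:
        if line.startswith("  ") and current:
            k, v = line.strip().split(":", 1)   # nested entry, e.g. success:
            meta[current][k] = v.strip()
        else:
            k, v = line.split(":", 1)
            v = v.strip()
            meta[k] = v if v else {}            # bare key opens a nested map
            current = k if not v else None
    return meta
```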
## Checkpoint
| Trigger | Behavior |
|---------|----------|
| capability_gap reported | handleAdapt: generate new role-spec, spawn new worker |
| consensus_blocked HIGH | Create REVISION task or pause for user |
| All tasks complete | handleComplete: interactive completion action |
## Specs Reference
- [role-spec-template.md](role-spec-template.md) — Template for generating dynamic role-specs
- [quality-gates.md](quality-gates.md) — Quality thresholds and scoring dimensions
- [knowledge-transfer.md](knowledge-transfer.md) — Context transfer protocols between roles
## Quality Gate Integration
Dynamic pipelines reference quality thresholds from [specs/quality-gates.md](quality-gates.md).
| Gate Point | Trigger | Criteria Source |
|------------|---------|----------------|
| After artifact production | Producer role Phase 4 | Behavioral Traits in role-spec |
| After validation tasks | Tester/analyst completion | quality-gates.md thresholds |
| Pipeline completion | All tasks done | Aggregate scoring |
Issue classification: Error (blocks) > Warning (proceed with justification) > Info (log for future).



@@ -0,0 +1,112 @@
# Quality Gates
## 1. Quality Thresholds
| Result | Score | Action |
|--------|-------|--------|
| Pass | >= 80% | Report completed |
| Review | 60-79% | Report completed with warnings |
| Fail | < 60% | Retry Phase 3 (max 2 retries) |
## 2. Scoring Dimensions
| Dimension | Weight | Criteria |
|-----------|--------|----------|
| Completeness | 25% | All required outputs present with substantive content |
| Consistency | 25% | Terminology, formatting, cross-references are uniform |
| Accuracy | 25% | Outputs are factually correct and verifiable against sources |
| Depth | 25% | Sufficient detail for downstream consumers to act on deliverables |
**Score** = weighted average of all dimensions (0-100 per dimension).
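The scoring rule plus the threshold table compose into one small function. A sketch assuming equal 25% weights as tabled (`quality_gate` is an illustrative name):

```python
# Dimension weights from the scoring table above.
WEIGHTS = {"completeness": 0.25, "consistency": 0.25, "accuracy": 0.25, "depth": 0.25}

def quality_gate(scores):
    """Weighted 0-100 score mapped onto the Pass/Review/Fail thresholds."""
    total = sum(scores[dim] * w for dim, w in WEIGHTS.items())
    if total >= 80:
        return total, "Pass"
    if total >= 60:
        return total, "Review"
    return total, "Fail"
```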
## 3. Dynamic Role Quality Checks
Quality checks vary by `output_type` (from task-analysis.json role metadata).
### output_type: artifact
| Check | Pass Criteria |
|-------|---------------|
| Artifact exists | File written to `<session>/artifacts/` |
| Content non-empty | Substantive content, not just headers |
| Format correct | Expected format (MD, JSON) matches deliverable |
| Cross-references | All references to upstream artifacts resolve |
### output_type: codebase
| Check | Pass Criteria |
|-------|---------------|
| Files modified | Claimed files actually changed (Read to confirm) |
| Syntax valid | No syntax errors in modified files |
| No regressions | Existing functionality preserved |
| Summary artifact | Implementation summary written to artifacts/ |
### output_type: mixed
All checks from both `artifact` and `codebase` apply.
## 4. Verification Protocol
Derived from Behavioral Traits in [role-spec-template.md](role-spec-template.md).
| Step | Action | Required |
|------|--------|----------|
| 1 | Verify all claimed files exist via Read | Yes |
| 2 | Confirm artifact written to `<session>/artifacts/` | Yes |
| 3 | Check verification summary fields present | Yes |
| 4 | Score against quality dimensions | Yes |
| 5 | Apply threshold -> Pass/Review/Fail | Yes |
**On Fail**: Retry Phase 3 (max 2 retries). After 2 retries, report `partial_completion`.
**On Review**: Proceed with warnings logged to `<session>/wisdom/issues.md`.
## 5. Code Review Dimensions
For REVIEW-* or validation tasks during implementation pipelines.
### Quality
| Check | Severity |
|-------|----------|
| Empty catch blocks | Error |
| `as any` type casts | Warning |
| `@ts-ignore` / `@ts-expect-error` | Warning |
| `console.log` in production code | Warning |
| Unused imports/variables | Info |
### Security
| Check | Severity |
|-------|----------|
| Hardcoded secrets/credentials | Error |
| SQL injection vectors | Error |
| `eval()` or `Function()` usage | Error |
| `innerHTML` assignment | Warning |
| Missing input validation | Warning |
### Architecture
| Check | Severity |
|-------|----------|
| Circular dependencies | Error |
| Deep cross-boundary imports (3+ levels) | Warning |
| Files > 500 lines | Warning |
| Functions > 50 lines | Info |
### Requirements Coverage
| Check | Severity |
|-------|----------|
| Core functionality implemented | Error if missing |
| Acceptance criteria covered | Error if missing |
| Edge cases handled | Warning |
| Error states handled | Warning |
## 6. Issue Classification
| Class | Label | Action |
|-------|-------|--------|
| Error | Must fix | Blocks progression, must resolve before proceeding |
| Warning | Should fix | Should resolve, can proceed with justification |
| Info | Nice to have | Optional improvement, log for future |


@@ -0,0 +1,192 @@
# Dynamic Role-Spec Template
Template used by coordinator to generate lightweight worker role-spec files at runtime. Each generated role-spec is written to `<session>/role-specs/<role-name>.md`.
**Key difference from v1**: Role-specs contain ONLY Phase 2-4 domain logic + YAML frontmatter. All shared behavior (Phase 1 Task Discovery, Phase 5 Report/Fast-Advance, Message Bus, Consensus, Inner Loop) is built into the `team-worker` agent.
## Template
```markdown
---
role: <role_name>
prefix: <PREFIX>
inner_loop: <true|false>
CLI tools: [<CLI tool-names>]
output_tag: "[<role_name>]"
message_types:
success: <prefix>_complete
error: error
---
# <Role Name> — Phase 2-4
## Phase 2: <phase2_name>
<phase2_content>
## Phase 3: <phase3_name>
<phase3_content>
## Phase 4: <phase4_name>
<phase4_content>
## Error Handling
| Scenario | Resolution |
|----------|------------|
<error_entries>
```
## Frontmatter Fields
| Field | Required | Description |
|-------|----------|-------------|
| `role` | Yes | Role name matching session registry |
| `prefix` | Yes | Task prefix to filter (e.g., RESEARCH, DRAFT, IMPL) |
| `inner_loop` | Yes | Whether team-worker loops through same-prefix tasks |
| `CLI tools` | No | Array of CLI tool types this role may call |
| `output_tag` | Yes | Output tag for all messages, e.g., `[researcher]` |
| `message_types` | Yes | Message type mapping for team_msg |
| `message_types.success` | Yes | Type string for successful completion |
| `message_types.error` | Yes | Type string for errors (usually "error") |
## Design Rules
| Rule | Description |
|------|-------------|
| Phase 2-4 only | No Phase 1 (Task Discovery) or Phase 5 (Report) — team-worker handles these |
| No message bus code | No team_msg calls — team-worker handles logging |
| No consensus handling | No consensus_reached/blocked logic — team-worker handles routing |
| No inner loop logic | No Phase 5-L/5-F — team-worker handles looping |
| ~80 lines target | Lightweight, domain-focused |
| No pseudocode | Decision tables + text + tool calls only |
| `<placeholder>` notation | Use angle brackets for variable substitution |
| Reference CLI tools by name | team-worker resolves invocation from its delegation templates |
## Generated Role-Spec Structure
Every generated role-spec MUST include these blocks:
### Identity Block (mandatory — first section of generated spec)
```
Tag: [<role_name>] | Prefix: <PREFIX>-*
Responsibility: <one-line from task analysis>
```
### Boundaries Block (mandatory — after Identity)
```
### MUST
- <3-5 rules derived from task analysis>
### MUST NOT
- Execute work outside assigned prefix
- Modify artifacts from other roles
- Skip Phase 4 verification
```
## Behavioral Traits
All dynamically generated role-specs MUST embed these traits into Phase 4. Coordinator copies this section verbatim into every generated role-spec as a Phase 4 appendix.
**Design principle**: Constrain behavioral characteristics (accuracy, feedback, quality gates), NOT specific actions (which tool, which CLI tool, which path). Tasks are diverse — the coordinator composes task-specific Phase 2-3 instructions, while these traits ensure execution quality regardless of task type.
### Accuracy — outputs must be verifiable
- Files claimed as **created** → Read to confirm file exists and has content
- Files claimed as **modified** → Read to confirm content actually changed
- Analysis claimed as **complete** → artifact file exists in `<session>/artifacts/`
### Feedback Contract — completion report must include evidence
Phase 4 must produce a verification summary with these fields:
| Field | When Required | Content |
|-------|---------------|---------|
| `files_produced` | New files created | Path list |
| `files_modified` | Existing files changed | Path + before/after line count |
| `artifacts_written` | Always | Paths in `<session>/artifacts/` |
| `verification_method` | Always | How verified: Read confirm / syntax check / diff |
### Quality Gate — verify before reporting complete
- Phase 4 MUST verify Phase 3's **actual output** (not planned output)
- Verification fails → retry Phase 3 (max 2 retries)
- Still fails → report `partial_completion` with details, NOT `completed`
- Update shared state via `team_msg(operation="log", type="state_update", data={...})` after verification passes
Quality thresholds from [specs/quality-gates.md](quality-gates.md):
- Pass >= 80%: report completed
- Review 60-79%: report completed with warnings
- Fail < 60%: retry Phase 3 (max 2)
### Error Protocol
- Primary approach fails → try alternative (different CLI tool / different tool)
- 2 retries exhausted → escalate to coordinator with failure details
- NEVER: skip verification and report completed
---
## Reference Patterns
Coordinator MAY reference these patterns when composing Phase 2-4 content for a role-spec. These are **structural guidance, not mandatory templates**. The task description determines specific behavior — patterns only suggest common phase structures.
### Research / Exploration
- Phase 2: Define exploration scope + load prior knowledge from shared state and wisdom
- Phase 3: Explore via CLI tools, direct tool calls, or codebase search — approach chosen by agent
- Phase 4: Verify findings documented (Behavioral Traits) + update shared state
### Document / Content
- Phase 2: Load upstream artifacts + read target files (if modifying existing docs)
- Phase 3: Create new documents OR modify existing documents — determined by task, not template
- Phase 4: Verify documents exist with expected content (Behavioral Traits) + update shared state
### Code Implementation
- Phase 2: Load design/spec artifacts from upstream
- Phase 3: Implement code changes — CLI tool choice and approach determined by task complexity
- Phase 4: Syntax check + file verification (Behavioral Traits) + update shared state
### Analysis / Audit
- Phase 2: Load analysis targets (artifacts or source files)
- Phase 3: Multi-dimension analysis — perspectives and depth determined by task
- Phase 4: Verify report exists + severity classification (Behavioral Traits) + update shared state
### Validation / Testing
- Phase 2: Detect test framework + identify changed files from upstream
- Phase 3: Run test-fix cycle — iteration count and strategy determined by task
- Phase 4: Verify pass rate + coverage (Behavioral Traits) + update shared state
---
## Knowledge Transfer Protocol
Full protocol: [specs/knowledge-transfer.md](knowledge-transfer.md)
Phase 2 of every generated role-spec MUST declare which upstream sources to load.
Phase 4 of every generated role-spec MUST include the state update and artifact publishing steps.
---
## Generated Role-Spec Validation
Coordinator verifies before writing each role-spec:
| Check | Criteria |
|-------|----------|
| Frontmatter complete | All required fields present (role, prefix, inner_loop, output_tag, message_types, CLI tools) |
| Identity block | Tag, prefix, responsibility defined |
| Boundaries | MUST and MUST NOT rules present |
| Phase 2 | Context loading sources specified |
| Phase 3 | Execution goal clear, not prescriptive about tools |
| Phase 4 | Behavioral Traits copied verbatim |
| Error Handling | Table with 3+ scenarios |
| Line count | Target ~80 lines (max 120) |
| No built-in overlap | No Phase 1/5, no message bus code, no consensus handling |