feat: migrate all codex team skills from spawn_agents_on_csv to spawn_agent + wait_agent architecture

- Delete 21 old team-skill directories that used the CSV-wave pipeline pattern (~100+ files)
- Delete old team-lifecycle (v3) and team-planex-v2
- Create generic team-worker.toml and team-supervisor.toml (replacing tlv4-specific TOMLs)
- Convert 19 team skills from Claude Code format (Agent/SendMessage/TaskCreate)
  to Codex format (spawn_agent/wait_agent/tasks.json/request_user_input)
- Update team-lifecycle-v4 to use generic agent types (team_worker/team_supervisor)
- Convert all coordinator role files: dispatch.md, monitor.md, role.md
- Convert all worker role files: remove run_in_background, fix Bash syntax
- Convert all specs/pipelines.md references
- Final state: 20 team skills, 217 .md files, zero Claude Code API residuals

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: catlog22
Date: 2026-03-24 16:54:48 +08:00
Parent: 54283e5dbb
Commit: 1e560ab8e8
334 changed files with 28996 additions and 35516 deletions


@@ -1,447 +1,185 @@
---
name: team-executor
description: Lightweight session execution skill. Resumes existing team-coordinate sessions for pure execution via team-worker agents. No analysis, no role generation -- only loads and executes. Session path required. Triggers on "Team Executor".
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"--session=<path>\""
allowed-tools: spawn_agent(*), wait_agent(*), send_input(*), close_agent(*), report_agent_job_result(*), request_user_input(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---
# Team Executor
Lightweight session execution skill: load session -> reconcile state -> spawn team-worker agents -> execute -> deliver. **No analysis, no role generation** -- only executes existing team-coordinate sessions.
## Usage
```bash
$team-executor "--session=.workflow/.team/TC-project-2026-03-08"
$team-executor -c 4 "--session=.workflow/.team/TC-auth-2026-03-07"
$team-executor -y "--session=.workflow/.team/TC-api-2026-03-06"
$team-executor --continue "EX-project-2026-03-08"
```
## Auto Mode
When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.
## Architecture
```
+---------------------------------------------------+
| Skill(skill="team-executor") |
| args="--session=<path>" [REQUIRED] |
+-------------------+-------------------------------+
| Session Validation
+---- --session valid? ----+
| NO | YES
v v
Error immediately Orchestration Mode
(no session) -> executor
|
+-------+-------+-------+
v v v v
[team-worker agents loaded from session role-specs]
```
**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume existing executor session
---
## Session Validation (BEFORE routing)
**CRITICAL**: Session validation MUST occur before any execution.
### Parse Arguments
Extract from `$ARGUMENTS`:
- `--session=<path>`: Path to team-coordinate session folder (REQUIRED)
**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)
### Validation Steps
1. **Check `--session` provided**:
   - If missing -> **ERROR**: "Session required. Usage: --session=<path-to-TC-folder>"
---
**Execution Model**: Hybrid — CSV wave pipeline (primary) + individual agent spawn (secondary)
```
┌─────────────────────────────────────────────────────────────────────────┐
│ Team Executor WORKFLOW │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Phase 0: Session Validation + State Reconciliation │
│ ├─ Validate session structure (team-session.json, task-analysis.json)│
│ ├─ Load session state and role specifications │
│ ├─ Reconcile with TaskList (bidirectional sync) │
│ ├─ Reset interrupted tasks (in_progress → pending) │
│ ├─ Detect fast-advance orphans and reset │
│ └─ Output: validated session, reconciled state │
│ │
│ Phase 1: Requirement → CSV + Classification │
│ ├─ Load task-analysis.json from session │
│ ├─ Create tasks from analysis with role assignments │
│ ├─ Classify tasks: csv-wave | interactive (from role specs) │
│ ├─ Compute dependency waves (topological sort → depth grouping) │
│ ├─ Generate tasks.csv with wave + exec_mode columns │
│ └─ User validates task breakdown (skip if -y) │
│ │
│ Phase 2: Wave Execution Engine (Extended) │
│ ├─ For each wave (1..N): │
│ │ ├─ Execute pre-wave interactive tasks (if any) │
│ │ ├─ Build wave CSV (filter csv-wave tasks for this wave) │
│ │ ├─ Inject previous findings into prev_context column │
│ │ ├─ spawn_agents_on_csv(wave CSV) │
│ │ ├─ Execute post-wave interactive tasks (if any) │
│ │ ├─ Merge all results into master tasks.csv │
│ │ └─ Check: any failed? → skip dependents │
│ └─ discoveries.ndjson shared across all modes (append-only) │
│ │
│ Phase 3: Results Aggregation │
│ ├─ Export final results.csv │
│ ├─ Generate context.md with all findings │
│ ├─ Display summary: completed/failed/skipped per wave │
│ └─ Offer: view results | retry failed | done │
│ │
└─────────────────────────────────────────────────────────────────────────┘
```
---
## Task Classification Rules
Each task is classified by `exec_mode` based on role specification:
| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | Role has inner_loop=false |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Role has inner_loop=true |
**Classification Decision**:
| Task Property | Classification |
|---------------|---------------|
| Role inner_loop=false | `csv-wave` |
| Role inner_loop=true | `interactive` |
---
## CSV Schema
### tasks.csv (Master State)
```csv
id,title,description,deps,context_from,exec_mode,role,wave,status,findings,error
1,Implement auth module,Create authentication module with JWT,,,"csv-wave","implementer",1,pending,"",""
2,Write tests,Write unit tests for auth module,1,1,"csv-wave","tester",2,pending,"",""
3,Review code,Review implementation and tests,2,2,"interactive","reviewer",3,pending,"",""
```
**Columns**:
| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (string) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `role` | Input | Role name from session role-specs |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` / `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `error` | Output | Error message if failed (empty if success) |
### Per-Wave CSV (Temporary)
Each wave generates a temporary `wave-{N}.csv` with extra `prev_context` column (csv-wave tasks only).
---
## Agent Registry (Interactive Agents)
Interactive agents are loaded dynamically from session role-specs where `inner_loop=true`.
> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding role-spec.md** to reload.
---
## Output Artifacts
| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state — all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 3 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 3 |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |
| `agents/registry.json` | Active interactive agent tracking | Updated on spawn/close |
---
## Session Structure
```
.workflow/.csv-wave/{session-id}/
├── tasks.csv # Master state (all tasks, both modes)
├── results.csv # Final results export
├── discoveries.ndjson # Shared discovery board (all agents)
├── context.md # Human-readable report
├── wave-{N}.csv # Temporary per-wave input (csv-wave only)
├── interactive/ # Interactive task artifacts
│ ├── {id}-result.json # Per-task results
│ └── cache-index.json # Shared exploration cache
└── agents/
└── registry.json # Active interactive agent tracking
```
---
## Implementation
### Session Initialization
```javascript
// Parse arguments
const args = parseArguments($ARGUMENTS)
const AUTO_YES = args.yes || args.y || false
const CONCURRENCY = args.concurrency || args.c || 3
const CONTINUE_SESSION = args.continue || null
const SESSION_PATH = args.session || null
// Validate session path
if (!SESSION_PATH) {
throw new Error("Session required. Usage: --session=<path-to-TC-folder>")
}
// Generate executor session ID
const sessionId = `EX-${extractSessionName(SESSION_PATH)}-${formatDate(new Date(), 'yyyy-MM-dd')}`
const sessionDir = `.workflow/.csv-wave/${sessionId}`
// Create session structure
Bash({ command: `mkdir -p "${sessionDir}/interactive" "${sessionDir}/agents"` })
Write(`${sessionDir}/discoveries.ndjson`, '')
Write(`${sessionDir}/agents/registry.json`, JSON.stringify({ active: [], closed: [] }))
```
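The helpers `extractSessionName` and `formatDate` used above are not defined in this skill; a minimal sketch, assuming team-coordinate folders follow the `TC-<name>-<yyyy-MM-dd>` naming seen in the Usage examples:

```javascript
// Hypothetical helpers for the session-initialization snippet above.
// Assumes team-coordinate folders are named "TC-<name>-<yyyy-MM-dd>".
function extractSessionName(sessionPath) {
  const base = sessionPath.replace(/\/+$/, '').split('/').pop()
  // Strip the "TC-" prefix and trailing date, keeping the project name
  const m = base.match(/^TC-(.+)-\d{4}-\d{2}-\d{2}$/)
  return m ? m[1] : base
}

function formatDate(date, pattern) {
  // Only the 'yyyy-MM-dd' pattern is needed by this skill
  const pad = n => String(n).padStart(2, '0')
  return pattern
    .replace('yyyy', String(date.getFullYear()))
    .replace('MM', pad(date.getMonth() + 1))
    .replace('dd', pad(date.getDate()))
}
```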
---
### Phase 0: Session Validation + State Reconciliation
**Objective**: Validate session structure and reconcile session state with actual task status
**Validation Steps**:
1. Check `--session` provided
2. **Validate session structure** (see specs/session-schema.md):
- Directory exists at path
- `team-session.json` exists and valid JSON
- `task-analysis.json` exists and valid JSON
- `role-specs/` directory has at least one `.md` file
- Each role in `team-session.json#roles` has corresponding `.md` file in `role-specs/`
3. **Validation failure**: ERROR with the specific missing component -> suggest re-running team-coordinate or checking the path -> STOP
**Reconciliation Steps**:
1. Load team-session.json and task-analysis.json
2. Compare TaskList() with session.completed_tasks, bidirectional sync
3. Reset any in_progress tasks to pending
4. Detect fast-advance orphans (in_progress tasks without matching active_worker + created > 5 minutes) → reset to pending
5. Create missing tasks (if needed) from task-analysis
6. Update session file with reconciled state
7. TeamCreate if team does not exist
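Step 4's fast-advance orphan detection can be sketched as follows (a sketch under assumed field names: tasks carry ISO `created_at` timestamps, and active workers are tracked by task id):

```javascript
// Sketch: reset in_progress tasks that have no active worker and are
// older than 5 minutes (fast-advance orphans). Field names are assumptions.
const ORPHAN_THRESHOLD_MS = 5 * 60 * 1000

function resetOrphans(tasks, activeWorkerTaskIds, now = Date.now()) {
  for (const task of tasks) {
    if (task.status !== 'in_progress') continue
    const hasWorker = activeWorkerTaskIds.includes(task.id)
    const age = now - Date.parse(task.created_at)
    if (!hasWorker && age > ORPHAN_THRESHOLD_MS) {
      task.status = 'pending'   // orphaned: reset for re-dispatch
    }
  }
  return tasks
}
```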
**Success Criteria**:
- Session validated, state reconciled, team ready
- All role-specs loaded and validated
---
### Phase 1: Requirement → CSV + Classification
**Objective**: Generate task breakdown from session task-analysis and create master CSV
**Decomposition Rules**:
Load task-analysis.json from session and create tasks with:
- Task ID, title, description from analysis
- Dependencies from analysis
- Role assignment from analysis
- exec_mode classification based on role inner_loop flag
**Classification Rules**:
Read each role-spec file to determine inner_loop flag:
- inner_loop=false → `exec_mode=csv-wave`
- inner_loop=true → `exec_mode=interactive`
**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).
**User Validation**: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).
**Success Criteria**:
- tasks.csv created with valid schema, wave, and exec_mode assignments
- No circular dependencies
- User approved (or AUTO_YES)
---
## Role Router
This skill is **executor-only**. Workers do NOT invoke this skill -- they are spawned as `team_worker` agents directly.
### Dispatch Logic
| Scenario | Action |
|----------|--------|
| No `--session` | **ERROR** immediately |
| `--session` invalid | **ERROR** with specific reason |
| Valid session | Orchestration Mode -> executor |
### Orchestration Mode
**Invocation**: `Skill(skill="team-executor", args="--session=<session-folder>")`
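The Phase 1 wave computation (topological sort with depth tracking) can be sketched as below; a sketch only, assuming each task's `deps` has already been split from the semicolon-separated CSV field into an array:

```javascript
// Sketch: assign 1-based wave numbers by dependency depth (BFS over
// ready tasks). Throws on circular dependencies, matching the
// error-handling table.
function computeWaves(tasks) {
  const wave = new Map()
  const remaining = new Set(tasks.map(t => t.id))
  while (remaining.size > 0) {
    const ready = [...remaining].filter(id =>
      tasks.find(t => t.id === id).deps.every(d => wave.has(d))
    )
    if (ready.length === 0) throw new Error('Circular dependency detected')
    for (const id of ready) {
      const deps = tasks.find(t => t.id === id).deps
      // Wave = one deeper than the deepest dependency (1 for roots)
      wave.set(id, deps.length ? Math.max(...deps.map(d => wave.get(d))) + 1 : 1)
      remaining.delete(id)
    }
  }
  return wave
}
```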
---
### Phase 2: Wave Execution Engine (Extended)
**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.
```javascript
// Load master CSV
const masterCSV = readCSV(`${sessionDir}/tasks.csv`)
const maxWave = Math.max(...masterCSV.map(t => t.wave))
for (let wave = 1; wave <= maxWave; wave++) {
// Execute pre-wave interactive tasks
const preWaveTasks = masterCSV.filter(t =>
t.wave === wave && t.exec_mode === 'interactive' && t.position === 'pre-wave'
)
for (const task of preWaveTasks) {
const roleSpec = Read(`${SESSION_PATH}/role-specs/${task.role}.md`)
const agent = spawn_agent({
message: buildWorkerPrompt(task, roleSpec, sessionDir)
})
const result = wait_agent({ ids: [agent], timeout_ms: 600000 })
close_agent({ id: agent })
updateTaskStatus(task.id, result)
}
// Build wave CSV (csv-wave tasks only)
const waveTasks = masterCSV.filter(t => t.wave === wave && t.exec_mode === 'csv-wave')
if (waveTasks.length > 0) {
// Inject prev_context from context_from tasks
for (const task of waveTasks) {
if (task.context_from) {
const contextIds = task.context_from.split(';')
const contextFindings = masterCSV
.filter(t => contextIds.includes(t.id))
.map(t => `[Task ${t.id}] ${t.findings}`)
.join('\n\n')
task.prev_context = contextFindings
}
}
// Write wave CSV
writeCSV(`${sessionDir}/wave-${wave}.csv`, waveTasks)
// Execute wave
spawn_agents_on_csv({
csv_path: `${sessionDir}/wave-${wave}.csv`,
instruction_path: `${sessionDir}/instructions/agent-instruction.md`,
concurrency: CONCURRENCY
})
// Merge results back to master
const waveResults = readCSV(`${sessionDir}/wave-${wave}.csv`)
for (const result of waveResults) {
const masterTask = masterCSV.find(t => t.id === result.id)
Object.assign(masterTask, result)
}
writeCSV(`${sessionDir}/tasks.csv`, masterCSV)
// Cleanup wave CSV
Bash({ command: `rm "${sessionDir}/wave-${wave}.csv"` })
}
// Execute post-wave interactive tasks
const postWaveTasks = masterCSV.filter(t =>
t.wave === wave && t.exec_mode === 'interactive' && t.position === 'post-wave'
)
for (const task of postWaveTasks) {
const roleSpec = Read(`${SESSION_PATH}/role-specs/${task.role}.md`)
const agent = spawn_agent({
message: buildWorkerPrompt(task, roleSpec, sessionDir)
})
const result = wait_agent({ ids: [agent], timeout_ms: 600000 })
close_agent({ id: agent })
updateTaskStatus(task.id, result)
}
// Check for failures and skip dependents
const failedTasks = masterCSV.filter(t => t.wave === wave && t.status === 'failed')
if (failedTasks.length > 0) {
skipDependents(masterCSV, failedTasks)
}
}
```
**Lifecycle**:
```
Validate session
-> executor Phase 0: Reconcile state (reset interrupted, detect orphans)
-> executor Phase 1: Spawn first batch team-worker agents (background) -> STOP
-> Worker executes -> callback -> executor advances next step
-> Loop until pipeline complete -> Phase 2 report + completion action
```
**Success Criteria**:
- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when predecessor failed
- discoveries.ndjson accumulated across all waves and mechanisms
- Interactive agent lifecycle tracked in registry.json
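`skipDependents`, referenced in the Phase 2 code above, is not defined in this skill; a minimal sketch that transitively skips every task whose dependency chain contains a failure:

```javascript
// Sketch: mark pending tasks as 'skipped' when any dependency failed
// or was itself skipped; loops to a fixed point so the skip propagates
// transitively through later waves. Assumes deps are arrays of task ids.
function skipDependents(tasks, failedTasks) {
  const dead = new Set(failedTasks.map(t => t.id))
  let changed = true
  while (changed) {
    changed = false
    for (const task of tasks) {
      if (task.status !== 'pending') continue
      if (task.deps.some(d => dead.has(d))) {
        task.status = 'skipped'
        dead.add(task.id)
        changed = true
      }
    }
  }
  return tasks
}
```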
**User Commands** (wake paused executor):
| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph, no advancement |
| `resume` / `continue` | Check worker states, advance next step |
---
### Phase 3: Results Aggregation
**Objective**: Generate final results and human-readable report.
```javascript
// Export results.csv
const masterCSV = readCSV(`${sessionDir}/tasks.csv`)
writeCSV(`${sessionDir}/results.csv`, masterCSV)
// Generate context.md
const contextMd = generateContextReport(masterCSV, sessionDir)
Write(`${sessionDir}/context.md`, contextMd)
// Cleanup interactive agents
const registry = JSON.parse(Read(`${sessionDir}/agents/registry.json`))
for (const agent of registry.active) {
  close_agent({ id: agent.id })
}
Write(`${sessionDir}/agents/registry.json`, JSON.stringify({ active: [], closed: registry.closed }))
// Display summary
const summary = {
  total: masterCSV.length,
  completed: masterCSV.filter(t => t.status === 'completed').length,
  failed: masterCSV.filter(t => t.status === 'failed').length,
  skipped: masterCSV.filter(t => t.status === 'skipped').length
}
console.log(`Pipeline complete: ${summary.completed}/${summary.total} tasks completed`)
```
---
## Role Registry
| Role | File | Type |
|------|------|------|
| executor | [roles/executor/role.md](roles/executor/role.md) | built-in orchestrator |
| (dynamic) | `<session>/role-specs/<role-name>.md` | loaded from session |
---
## Executor Spawn Template
### v2 Worker Spawn (all roles)
When the executor spawns workers, use the `team_worker` agent with the role-spec path:
```
spawn_agent({
agent_type: "team_worker",
items: [
{ type: "text", text: `## Role Assignment
role: <role>
role_spec: <session-folder>/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file to load Phase 2-4 domain instructions.` },
{ type: "text", text: `## Task Context
task_id: <task-id>
title: <task-title>
description: <task-description>
pipeline_phase: <pipeline-phase>` },
{ type: "text", text: `## Upstream Context
<prev_context>` }
]
})
```
After spawning, use `wait_agent({ ids: [...], timeout_ms: 900000 })` to collect results, then `close_agent({ id })` each worker.
---
## Completion Action
When pipeline completes (all tasks done), executor presents an interactive choice:
```
request_user_input({
  questions: [{
    id: "completion",
    question: "Team pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up team" },
      { label: "Keep Active", description: "Keep session for follow-up work" },
      { label: "Export Results", description: "Export deliverables to target directory, then clean" }
    ]
  }]
})
```
**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated
- All interactive agents closed (registry.json cleanup)
- Summary displayed to user
- Completion action executed
### Action Handlers
| Choice | Steps |
|--------|-------|
| Archive & Clean | Update session status="completed" -> output final summary with artifact paths |
| Keep Active | Update session status="paused" -> output: "Resume with: Skill(skill='team-executor', args='--session=<path>')" |
| Export Results | request_user_input(target path) -> copy artifacts to target -> Archive & Clean |
---
## Shared Discovery Board Protocol
**Discovery Types**:
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `implementation` | `file+function` | `{file, function, approach, notes}` | Implementation approach taken |
| `test_result` | `test_name` | `{test_name, status, duration}` | Test execution result |
| `review_comment` | `file+line` | `{file, line, severity, comment}` | Code review comment |
| `pattern` | `pattern_name` | `{pattern, files[], occurrences}` | Code pattern identified |
**Discovery NDJSON Format**:
```jsonl
{"ts":"2026-03-08T14:30:22Z","worker":"1","type":"implementation","data":{"file":"src/auth.ts","function":"login","approach":"JWT-based","notes":"Used bcrypt for password hashing"}}
{"ts":"2026-03-08T14:35:10Z","worker":"2","type":"test_result","data":{"test_name":"auth.login.success","status":"pass","duration":125}}
{"ts":"2026-03-08T14:40:05Z","worker":"3","type":"review_comment","data":{"file":"src/auth.ts","line":42,"severity":"medium","comment":"Consider adding rate limiting"}}
```
> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
---
## Integration with team-coordinate
| Scenario | Skill |
|----------|-------|
| New task, no session | team-coordinate |
| Existing session, resume execution | **team-executor** |
| Session needs new roles | team-coordinate (with resume) |
| Pure execution, no analysis | **team-executor** |
---
## Error Handling
| Error | Resolution |
|-------|------------|
| Scenario | Resolution |
|----------|------------|
| No --session provided | ERROR immediately with usage message |
| Session directory not found | ERROR with path, suggest checking path |
| team-session.json missing | ERROR, session incomplete, suggest re-run team-coordinate |
| No role-specs in session | ERROR, session incomplete, suggest re-run team-coordinate |
| Role-spec file not found | ERROR with expected path |
| capability_gap reported | Warn only, cannot generate new role-specs |
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Lifecycle leak | Cleanup all active agents via registry.json at end |
| Continue mode: no session found | List available sessions, prompt user to select |
| Fast-advance spawns wrong task | Executor reconciles on next callback |
| Completion action fails | Default to Keep Active, log warning |
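The "ignore malformed lines, continue with valid entries" rule for a corrupt discoveries.ndjson can be sketched as:

```javascript
// Sketch: read discoveries.ndjson, skipping malformed lines so one
// corrupt entry never blocks the pipeline.
function readDiscoveries(ndjsonText) {
  const entries = []
  for (const line of ndjsonText.split('\n')) {
    if (!line.trim()) continue
    try {
      entries.push(JSON.parse(line))
    } catch {
      // Malformed line: ignore and continue with valid entries
    }
  }
  return entries
}
```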
---
## Core Rules
1. **Start Immediately**: First action is session validation, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when role inner_loop=true
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson — both mechanisms share it
7. **Skip on Failure**: If a dependency failed, skip the dependent task (regardless of mechanism)
8. **Lifecycle Balance**: Every spawn_agent MUST have a matching close_agent (tracked in registry.json)
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped
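Rule 8's spawn/close balance can be enforced with a small registry wrapper; a bookkeeping sketch (the actual `spawn_agent`/`close_agent` tool calls would wrap these updates):

```javascript
// Sketch: registry.json bookkeeping that keeps every spawn paired
// with a close (Core Rule 8). Throws on an unbalanced close.
function trackSpawn(registry, agentId, role) {
  registry.active.push({ id: agentId, role })
  return registry
}

function trackClose(registry, agentId) {
  const i = registry.active.findIndex(a => a.id === agentId)
  if (i === -1) throw new Error(`close without matching spawn: ${agentId}`)
  registry.closed.push(registry.active[i])
  registry.active.splice(i, 1)
  return registry
}
```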
---
## Coordinator Role Constraints (Main Agent)
**CRITICAL**: The coordinator (main agent executing this skill) is responsible for **orchestration only**, NOT implementation.
15. **Coordinator Does NOT Execute Code**: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
- Spawns agents with task assignments
- Waits for agent callbacks
- Merges results and coordinates workflow
- Manages workflow transitions between phases
16. **Patient Waiting is Mandatory**: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
- Wait patiently for `wait()` calls to complete
- NOT skip workflow steps due to perceived delays
- NOT assume agents have failed just because they're taking time
- Trust the timeout mechanisms defined in the skill
17. **Use send_input for Clarification**: When agents need guidance or appear stuck, the coordinator MUST:
- Use `send_input()` to ask questions or provide clarification
- NOT skip the agent or move to next phase prematurely
- Give agents opportunity to respond before escalating
- Example: `send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })`
18. **No Workflow Shortcuts**: The coordinator MUST NOT:
- Skip phases or stages defined in the workflow
- Bypass required approval or review steps
- Execute dependent tasks before prerequisites complete
- Assume task completion without explicit agent callback
- Make up or fabricate agent results
19. **Respect Long-Running Processes**: This is a complex multi-agent workflow that requires patience:
- Total execution time may range from 30-90 minutes or longer
- Each phase may take 10-30 minutes depending on complexity
- The coordinator must remain active and attentive throughout the entire process
- Do not terminate or skip steps due to time concerns


@@ -1,62 +0,0 @@
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS
1. Read role definition: {role_spec_path} (MUST read first)
2. Read shared discoveries: {session_folder}/discoveries.ndjson (if exists, skip if not)
3. Read project context: .workflow/project-tech.json (if exists)
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Description**: {description}
**Role**: {role}
### Previous Tasks' Findings (Context)
{prev_context}
---
## Execution Protocol
1. **Read role definition**: Load {role_spec_path} for Phase 2-4 domain instructions (MANDATORY)
2. **Read discoveries**: Load {session_folder}/discoveries.ndjson for shared exploration findings
3. **Use context**: Apply previous tasks' findings from prev_context above
4. **Execute**: Follow the role-specific instructions from your role definition file
5. **Share discoveries**: Append exploration findings to shared board:
```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> {session_folder}/discoveries.ndjson
```
6. **Report result**: Return JSON via report_agent_job_result
### Role Definition Structure
Your role definition file contains:
- **Phase 2**: Context & scope resolution
- **Phase 3**: Execution steps
- **Phase 4**: Output generation
Follow the phases in order as defined in your role file.
### Discovery Types to Share
- `implementation`: {file, function, approach, notes} — Implementation approach taken
- `test_result`: {test_name, status, duration} — Test execution result
- `review_comment`: {file, line, severity, comment} — Code review comment
- `pattern`: {pattern, files[], occurrences} — Code pattern identified
---
## Output (report_agent_job_result)
Return JSON:
{
"id": "{id}",
"status": "completed" | "failed",
"findings": "Key discoveries and implementation notes (max 500 chars)",
"error": ""
}
**Findings format**: Concise summary of what was accomplished, key decisions made, and any important notes for downstream tasks.


@@ -0,0 +1,280 @@
# Command: monitor
## Purpose
Synchronous pipeline coordination using spawn_agent + wait_agent for team-executor v2. Role names are read from `tasks.json#roles`. Workers are spawned as `team_worker` agents with role-spec paths. **handleAdapt is LIMITED**: only warns, cannot generate new role-specs. Includes `handleComplete` for pipeline completion action.
## Constants
| Constant | Value | Description |
|----------|-------|-------------|
| WORKER_AGENT | team_worker | All workers spawned via spawn_agent |
| ONE_STEP_PER_INVOCATION | false | Synchronous wait loop |
| FAST_ADVANCE_AWARE | true | Workers may skip executor for simple linear successors |
| ROLE_GENERATION | disabled | handleAdapt cannot generate new role-specs |
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/tasks.json` | Yes |
| Active agents | tasks.json active_agents | Yes |
| Role registry | tasks.json roles[] | Yes |
**Dynamic role resolution**: Known worker roles are loaded from `tasks.json roles[].name`. Role-spec paths are in `tasks.json roles[].role_spec`.
## Phase 3: Handler Routing
### Wake-up Source Detection
Parse `$ARGUMENTS` to determine handler:
| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[<role-name>]` from session roles | handleCallback |
| 2 | Contains "capability_gap" | handleAdapt |
| 3 | Contains "check" or "status" | handleCheck |
| 4 | Contains "resume", "continue", or "next" | handleResume |
| 5 | Pipeline detected as complete | handleComplete |
| 6 | None of the above (initial spawn after dispatch) | handleSpawnNext |
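The priority table above can be expressed as a routing function; a sketch, assuming `roles` is the name list from `tasks.json#roles`:

```javascript
// Sketch: map an incoming wake-up message to a handler name using the
// priority order from the table above (first match wins).
function routeWakeup(message, roles, pipelineComplete) {
  if (roles.some(r => message.includes(`[${r}]`))) return 'handleCallback'
  if (message.includes('capability_gap')) return 'handleAdapt'
  if (/\b(check|status)\b/.test(message)) return 'handleCheck'
  if (/\b(resume|continue|next)\b/.test(message)) return 'handleResume'
  if (pipelineComplete) return 'handleComplete'
  return 'handleSpawnNext'
}
```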
---
### Handler: handleCallback
Worker completed a task. Verify completion, update state, auto-advance.
```
Receive callback from [<role>]
+- Find matching active agent by role (from tasks.json roles)
+- Is this a progress update (not final)?
| +- YES -> Update tasks.json state, do NOT remove from active_agents -> STOP
+- Task status = completed?
| +- YES -> close_agent, remove from active_agents -> update tasks.json
| | +- -> handleSpawnNext
| +- NO -> progress message, do not advance -> STOP
+- No matching agent found
+- Scan all active agents for completed tasks
+- Found completed -> process each -> handleSpawnNext
+- None completed -> STOP
```
**Fast-advance note**: Check if expected next task is already `in_progress` (fast-advanced). If yes -> skip spawning, sync active_agents.
---
### Handler: handleCheck
Read-only status report. No pipeline advancement.
```
[executor] Pipeline Status
[executor] Progress: <completed>/<total> (<percent>%)
[executor] Execution Graph:
<visual representation with status icons>
done=completed >>>=running o=pending .=not created
[executor] Active agents:
> <subject> (<role>) - running <elapsed>
[executor] Ready to spawn: <subjects>
[executor] Commands: 'resume' to advance | 'check' to refresh
```
Then STOP.
---
### Handler: handleResume
Check active agent completion, process results, advance pipeline.
```
Load active_agents from tasks.json
+- No active agents -> handleSpawnNext
+- Has active agents -> check each:
+- status = completed -> mark done, close_agent, log
+- status = in_progress -> still running, log
+- other status -> worker failure -> reset to pending
After processing:
+- Some completed -> handleSpawnNext
+- All still running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```
---
### Handler: handleSpawnNext
Find all ready tasks, spawn team_worker agents, wait for completion, process results.
```
Read tasks.json
+- completedTasks: status = completed
+- inProgressTasks: status = in_progress
+- readyTasks: pending + all deps in completedTasks
Ready tasks found?
+- NONE + work in progress -> report waiting -> STOP
+- NONE + nothing in progress -> PIPELINE_COMPLETE -> handleComplete
+- HAS ready tasks -> for each:
+- Is task owner an Inner Loop role AND already has active_agents?
| +- YES -> SKIP spawn (existing worker picks it up)
| +- NO -> normal spawn below
+- Update task status in tasks.json -> in_progress
+- team_msg log -> task_unblocked (session_id=<session-id>)
+- Spawn team_worker (see spawn tool call below)
+- Add to tasks.json active_agents
```
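The readiness check in the flow above can be sketched as follows (the task shape — a map of id to `{status, deps}` — is an assumption about tasks.json):

```javascript
// Sketch: compute which pending tasks are unblocked, i.e. all of
// their dependencies are completed, matching handleSpawnNext above.
function readyTasks(tasks) {
  const completed = new Set(
    Object.keys(tasks).filter(id => tasks[id].status === 'completed')
  )
  return Object.keys(tasks).filter(id =>
    tasks[id].status === 'pending' &&
    tasks[id].deps.every(d => completed.has(d))
  )
}
```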
**Spawn worker tool call**:
```javascript
const agentId = spawn_agent({
agent_type: "team_worker",
items: [
{ type: "text", text: `## Role Assignment
role: ${task.role}
role_spec: ${sessionFolder}/role-specs/${task.role}.md
session: ${sessionFolder}
session_id: ${sessionId}
team_name: ${teamName}
requirement: ${task.description}
inner_loop: ${hasInnerLoop(task.role)}` },
{ type: "text", text: `Read role_spec file (${sessionFolder}/role-specs/${task.role}.md) to load Phase 2-4 domain instructions.` },
{ type: "text", text: `## Task Context
task_id: ${taskId}
title: ${task.title}
description: ${task.description}` },
{ type: "text", text: `## Upstream Context\n${prevContext}` }
]
})
state.active_agents[taskId] = { agentId, role: task.role, started_at: Date.now() }

```
### Wait and Process Results
After spawning all ready tasks:
```javascript
// Batch wait for all spawned workers
const agentIds = Object.values(state.active_agents)
.filter(a => !a.resident)
.map(a => a.agentId)
wait_agent({ ids: agentIds, timeout_ms: 900000 })  // 15-minute batch timeout
// Collect results from discoveries/{task_id}.json
for (const [taskId, agent] of Object.entries(state.active_agents)) {
if (agent.resident) continue
try {
const disc = JSON.parse(Read(`${sessionFolder}/discoveries/${taskId}.json`))
state.tasks[taskId].status = disc.status || 'completed'
state.tasks[taskId].findings = disc.findings || ''
state.tasks[taskId].error = disc.error || null
} catch {
state.tasks[taskId].status = 'failed'
state.tasks[taskId].error = 'No discovery file produced'
}
close_agent({ id: agent.agentId })
delete state.active_agents[taskId]
}
```
### Persist and Loop
After processing all results:
1. Write updated tasks.json
2. Check if more tasks are now ready (deps newly resolved)
3. If yes -> loop back to step 1 of handleSpawnNext
4. If no more ready and all done -> handleComplete
5. If no more ready but some still blocked -> report status, STOP
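Steps 1-5 can be sketched as a single persist-and-decide helper. The `write` callback is a hypothetical stand-in for the Write tool:

```javascript
// Persist tasks.json, then decide whether to loop, finish, or stop.
function persistAndDecide(state, write) {
  write('tasks.json', JSON.stringify(state, null, 2))   // step 1: persist
  const all = Object.values(state.tasks)
  if (all.every(t => t.status === 'completed')) return 'handleComplete'
  const doneIds = new Set(Object.entries(state.tasks)
    .filter(([, t]) => t.status === 'completed').map(([id]) => id))
  const anyReady = Object.values(state.tasks).some(t =>
    t.status === 'pending' && (t.deps || []).every(d => doneIds.has(d)))
  // steps 3-5: loop if deps newly resolved, otherwise report and stop
  return anyReady ? 'handleSpawnNext' : 'report-and-stop'
}
```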
---
### Handler: handleComplete
Pipeline complete. Execute completion action.
```
All tasks completed (no pending, no in_progress)
+- Generate pipeline summary (deliverables, stats, duration)
+- Read tasks.json completion_action:
|
+- "interactive":
| request_user_input -> user choice:
| +- "Archive & Clean": rm -rf session folder -> summary
| +- "Keep Active": session status="paused" -> resume command
| +- "Export Results": copy artifacts -> Archive & Clean
|
+- "auto_archive": Execute Archive & Clean
+- "auto_keep": Execute Keep Active
```
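The routing above, including the fallback, can be sketched as a small dispatcher. `archive`, `keep`, and `askUser` are hypothetical stand-ins for the real Bash and `request_user_input` calls:

```javascript
// Completion-action dispatch; any failure falls back to Keep Active.
function runCompletionAction(mode, { archive, keep, askUser }) {
  try {
    if (mode === 'auto_archive') return archive()
    if (mode === 'auto_keep') return keep()
    // interactive: ask the user, then route their choice
    const choice = askUser(['Archive & Clean', 'Keep Active', 'Export Results'])
    if (choice === 'Keep Active') return keep()
    return archive()   // Archive & Clean, or Export Results (copy artifacts, then archive)
  } catch (e) {
    return keep()      // fallback: default to Keep Active, log warning
  }
}
```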
**Fallback**: If completion action fails, default to Keep Active, log warning.
---
### Handler: handleAdapt (LIMITED)
**UNLIKE team-coordinate, executor CANNOT generate new role-specs.**
```
Receive capability_gap from [<role>]
+- Log via team_msg (type: warning)
+- Check existing roles -> does any cover this?
| +- YES -> redirect to that role -> STOP
| +- NO -> genuine gap, report to user:
| "Capability gap detected. team-executor cannot generate new role-specs.
| Options: 1. Continue 2. Re-run team-coordinate 3. Manually add role-spec"
+- Continue execution with existing roles
```
---
### Worker Failure Handling
1. Reset task -> pending in tasks.json
2. Log via team_msg (type: error)
3. Report to user: task reset, will retry on next resume
### Fast-Advance Failure Recovery
Detect orphaned tasks (in_progress without active_agents, > 5 minutes) -> reset to pending -> handleSpawnNext.
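The orphan scan can be sketched as follows, assuming each task record carries a `started_at` timestamp in milliseconds:

```javascript
// Reset in_progress tasks that have no active agent and exceeded the 5-minute window.
const ORPHAN_MS = 5 * 60 * 1000
function resetOrphans(state, now = Date.now()) {
  const reset = []
  for (const [id, task] of Object.entries(state.tasks)) {
    if (task.status === 'in_progress' &&
        !state.active_agents[id] &&
        now - (task.started_at || 0) > ORPHAN_MS) {
      task.status = 'pending'   // eligible for respawn on next handleSpawnNext
      reset.push(id)
    }
  }
  return reset
}
```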
### Consensus-Blocked Handling
```
Route by severity:
+- HIGH: Create REVISION task (max 1). Already revised -> PAUSE for user (request_user_input)
+- MEDIUM: Proceed with warning, log to wisdom/issues.md
+- LOW: Proceed normally as consensus_reached with notes
```
## Phase 4: Validation
| Check | Criteria |
|-------|----------|
| Session state consistent | active_agents matches tasks.json in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_agents entry |
| Dynamic roles valid | All task owners exist in tasks.json roles |
| Completion detection | readyTasks=0 + inProgressTasks=0 -> PIPELINE_COMPLETE |
| Fast-advance tracking | Detect fast-advanced tasks, sync to active_agents |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-run team-coordinate |
| Unknown role in callback | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall | Check for missing tasks, report to user |
| Fast-advance conflict | Executor reconciles, no duplicate spawns |
| Role-spec file not found | Error, cannot proceed |
| capability_gap | WARN only, cannot generate new role-specs |
| Completion action fails | Default to Keep Active, log warning |


@@ -0,0 +1,170 @@
# Executor Role
Orchestrate the team-executor workflow: session validation, state reconciliation, team_worker dispatch, progress monitoring, completion action. The sole built-in role -- all worker roles are loaded from session role-specs and spawned via team_worker agent.
## Identity
- **Name**: `executor` | **Tag**: `[executor]`
- **Responsibility**: Validate session -> Reconcile state -> Create session -> Dispatch team_worker agents -> Monitor progress -> Completion action -> Report results
## Boundaries
### MUST
- Validate session structure before any execution
- Reconcile session state with tasks.json on startup
- Reset in_progress tasks to pending (interrupted tasks)
- Detect fast-advance orphans and reset to pending
- Spawn team_worker agents via spawn_agent and wait via wait_agent
- Monitor progress via wait_agent and route messages
- Maintain session state persistence (tasks.json)
- Handle capability_gap reports with warning only (cannot generate role-specs)
- Execute completion action when pipeline finishes
### MUST NOT
- Execute task work directly (delegate to workers)
- Modify task output artifacts (workers own their deliverables)
- Call CLI tools or spawn utility members directly for implementation (code-developer, etc.)
- Generate new role-specs (use existing session role-specs only)
- Skip session validation
- Override consensus_blocked HIGH without user confirmation
- Spawn workers with `general-purpose` agent (MUST use `team_worker`)
> **Core principle**: executor orchestrates but never implements. All actual work is delegated to session-defined worker roles via team_worker agents. Unlike the team-coordinate coordinator, executor CANNOT generate new role-specs.
---
## Entry Router
When executor is invoked, first detect the invocation type:
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Status check | Arguments contain "check" or "status" | -> handleCheck |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume |
| Capability gap | Message contains "capability_gap" | -> handleAdapt |
| Pipeline complete | All tasks completed, no pending/in_progress | -> handleComplete |
| New execution | None of above | -> Phase 0 |
For check/resume/adapt/complete: load `commands/monitor.md` and execute the appropriate handler, then STOP.
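The detection table can be sketched as a small router. The argument patterns here mirror the table and are illustrative, not the exact matching rules:

```javascript
// Classify the invocation and return the handler to run.
function route(args, message, tasks) {
  if (/\b(check|status)\b/.test(args)) return 'handleCheck'
  if (/\b(resume|continue)\b/.test(args)) return 'handleResume'
  if (message.includes('capability_gap')) return 'handleAdapt'
  const all = Object.values(tasks)
  if (all.length > 0 && all.every(t => t.status === 'completed')) return 'handleComplete'
  return 'phase0'   // new execution
}
```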
---
## Phase 0: Session Validation + State Reconciliation
**Objective**: Validate session structure and reconcile session state with actual task status.
### Step 1: Session Validation
Validate session structure (see SKILL.md Session Validation):
- [ ] Directory exists at session path
- [ ] `tasks.json` exists and parses
- [ ] `task-analysis.json` exists and parses
- [ ] `role-specs/` directory has >= 1 .md files
- [ ] All roles in tasks.json#roles have corresponding role-spec .md files
- [ ] Role-spec files have valid YAML frontmatter + Phase 2-4 sections
If validation fails -> ERROR with specific reason -> STOP
### Step 2: Load Session State
Read tasks.json and task-analysis.json.
### Step 3: Reconcile with tasks.json
Compare task statuses in tasks.json with the session's expected state and sync in both directions.
### Step 4: Reset Interrupted Tasks
Reset any in_progress tasks to pending.
### Step 5: Detect Fast-Advance Orphans
In_progress tasks without matching active_agents + created > 5 minutes -> reset to pending.
### Step 6: Create Missing Tasks (if needed)
For each task in task-analysis, check if exists in tasks.json, create if missing.
### Step 7: Update Session File
Write reconciled tasks.json.
### Step 8: Session Setup
Initialize session folder if needed.
**Success**: Session validated, state reconciled, session ready -> Phase 1
---
## Phase 1: Spawn-and-Wait
**Objective**: Spawn first batch of ready workers as team_worker agents, wait for completion, then continue.
**Workflow**:
1. Load `commands/monitor.md`
2. Find tasks with: status=pending, deps all resolved, owner assigned
3. For each ready task -> spawn team_worker via spawn_agent, wait via wait_agent
4. Process results, advance pipeline
5. Repeat until all tasks complete or pipeline blocked
6. Output status summary with execution graph
**Pipeline advancement** driven by:
- Synchronous wait_agent loop (automatic)
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)
---
## Phase 2: Report + Completion Action
**Objective**: Completion report, interactive completion choice, and follow-up options.
**Workflow**:
1. Load session state -> count completed tasks, duration
2. List all deliverables with output paths in `<session>/artifacts/`
3. Include discussion summaries (if inline discuss was used)
4. Summarize wisdom accumulated during execution
5. Output report:
```
[executor] ============================================
[executor] TASK COMPLETE
[executor]
[executor] Deliverables:
[executor] - <artifact-1.md> (<producer role>)
[executor] - <artifact-2.md> (<producer role>)
[executor]
[executor] Pipeline: <completed>/<total> tasks
[executor] Roles: <role-list>
[executor] Duration: <elapsed>
[executor]
[executor] Session: <session-folder>
[executor] ============================================
```
6. **Execute Completion Action** (based on tasks.json completion_action):
| Mode | Behavior |
|------|----------|
| `interactive` | request_user_input with Archive/Keep/Export options |
| `auto_archive` | Execute Archive & Clean (rm -rf session folder) without prompt |
| `auto_keep` | Execute Keep Active without prompt |
**Interactive handler**: See SKILL.md Completion Action section.
---
## Error Handling
| Error | Resolution |
|-------|------------|
| Session validation fails | ERROR with specific reason, suggest re-run team-coordinate |
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Reset task to pending in tasks.json, respawn via spawn_agent |
| Session corruption | Attempt recovery, fallback to manual reconciliation |
| capability_gap reported | handleAdapt: WARN only, cannot generate new role-specs |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |
| Fast-advance conflict | Executor reconciles, no duplicate spawns |
| Role-spec file not found | ERROR, cannot proceed without role definition |
| Completion action fails | Default to Keep Active, log warning |


@@ -1,141 +0,0 @@
# Team Executor — CSV Schema
## Master CSV: tasks.csv
### Column Definitions
#### Input Columns (Set by Decomposer)
| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier | `"1"` |
| `title` | string | Yes | Short task title | `"Implement auth module"` |
| `description` | string | Yes | Detailed task description (self-contained) | `"Create authentication module with JWT support"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"1;2"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"1"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |
| `role` | string | Yes | Role name from session role-specs | `"implementer"` |
#### Computed Columns (Set by Wave Engine)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[Task 1] Created auth module..."` |
#### Output Columns (Set by Agent)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` -> `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Implemented JWT auth with bcrypt password hashing"` |
| `error` | string | Error message if failed | `""` |
---
### exec_mode Values
| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |
Interactive tasks appear in master CSV for dependency tracking but are NOT included in wave-{N}.csv files.
---
### Example Data
```csv
id,title,description,deps,context_from,exec_mode,role,wave,status,findings,error
1,Implement auth module,Create authentication module with JWT,,,"csv-wave","implementer",1,pending,"",""
2,Write tests,Write unit tests for auth module,1,1,"csv-wave","tester",2,pending,"",""
3,Review code,Review implementation and tests,2,2,"interactive","reviewer",3,pending,"",""
```
---
### Column Lifecycle
```
Decomposer (Phase 1) Wave Engine (Phase 2) Agent (Execution)
───────────────────── ──────────────────── ─────────────────
id ───────────► id ──────────► id
title ───────────► title ──────────► (reads)
description ───────────► description ──────────► (reads)
deps ───────────► deps ──────────► (reads)
context_from───────────► context_from──────────► (reads)
exec_mode ───────────► exec_mode ──────────► (reads)
role ───────────► role ──────────► (reads)
wave ──────────► (reads)
prev_context ──────────► (reads)
status
findings
error
```
---
## Output Schema (JSON)
Agent output via `report_agent_job_result` (csv-wave tasks):
```json
{
"id": "1",
"status": "completed",
"findings": "Implemented JWT authentication with bcrypt password hashing. Created login, logout, and token refresh endpoints. Added middleware for protected routes.",
"error": ""
}
```
Interactive tasks write their output as structured text or JSON to `interactive/{id}-result.json`.
---
## Discovery Types
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `implementation` | `file+function` | `{file, function, approach, notes}` | Implementation approach taken |
| `test_result` | `test_name` | `{test_name, status, duration}` | Test execution result |
| `review_comment` | `file+line` | `{file, line, severity, comment}` | Code review comment |
| `pattern` | `pattern_name` | `{pattern, files[], occurrences}` | Code pattern identified |
### Discovery NDJSON Format
```jsonl
{"ts":"2026-03-08T14:30:22Z","worker":"1","type":"implementation","data":{"file":"src/auth.ts","function":"login","approach":"JWT-based","notes":"Used bcrypt for password hashing"}}
{"ts":"2026-03-08T14:35:10Z","worker":"2","type":"test_result","data":{"test_name":"auth.login.success","status":"pass","duration":125}}
{"ts":"2026-03-08T14:40:05Z","worker":"3","type":"review_comment","data":{"file":"src/auth.ts","line":42,"severity":"medium","comment":"Consider adding rate limiting"}}
```
> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
---
## Cross-Mechanism Context Flow
| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |
---
## Validation Rules
| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has description | "Empty description for task: {id}" |
| Status enum | status ∈ {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Cross-mechanism deps | Interactive→CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |
| Role non-empty | Every task has role | "Empty role for task: {id}" |
| Role exists | Role has corresponding role-spec file | "Role not found: {role}" |


@@ -0,0 +1,264 @@
# Session Schema
Required session structure for team-executor v2. All components MUST exist for valid execution. Updated for role-spec architecture (lightweight Phase 2-4 files instead of full role.md files).
## Directory Structure
```
<session-folder>/
+-- team-session.json # Session state + dynamic role registry (REQUIRED)
+-- task-analysis.json # Task analysis output: capabilities, dependency graph (REQUIRED)
+-- role-specs/ # Dynamic role-spec definitions (REQUIRED, >= 1 .md file)
| +-- <role-1>.md # Lightweight: YAML frontmatter + Phase 2-4 only
| +-- <role-2>.md
+-- artifacts/ # All MD deliverables from workers
| +-- <artifact>.md
+-- .msg/ # Team message bus + state
| +-- messages.jsonl # Message log
| +-- meta.json # Session metadata + cross-role state
+-- wisdom/ # Cross-task knowledge
| +-- learnings.md
| +-- decisions.md
| +-- issues.md
+-- explorations/ # Shared explore cache
| +-- cache-index.json
| +-- explore-<angle>.json
+-- discussions/ # Inline discuss records
| +-- <round>.md
```
## Validation Checklist
team-executor validates the following before execution:
### Required Components
| Component | Validation | Error Message |
|-----------|------------|---------------|
| `--session` argument | Must be provided | "Session required. Usage: --session=<path-to-TC-folder>" |
| Directory | Must exist at path | "Session directory not found: <path>" |
| `team-session.json` | Must exist, parse as JSON, and contain all required fields | "Invalid session: team-session.json missing, corrupt, or missing required fields" |
| `task-analysis.json` | Must exist, parse as JSON, and contain all required fields | "Invalid session: task-analysis.json missing, corrupt, or missing required fields" |
| `role-specs/` directory | Must exist and contain >= 1 .md file | "Invalid session: no role-spec files in role-specs/" |
| Role-spec file mapping | Each role in team-session.json#roles must have .md file | "Role-spec file not found: role-specs/<role>.md" |
| Role-spec structure | Each role-spec must have YAML frontmatter + Phase 2-4 sections | "Invalid role-spec: role-specs/<role>.md missing required section" |
### Validation Algorithm
```
1. Parse --session=<path> from arguments
+- Not provided -> ERROR: "Session required. Usage: --session=<path-to-TC-folder>"
2. Check directory exists
+- Not exists -> ERROR: "Session directory not found: <path>"
3. Check team-session.json
+- Not exists -> ERROR: "Invalid session: team-session.json missing"
+- Parse error -> ERROR: "Invalid session: team-session.json corrupt"
+- Validate required fields:
+- session_id (string) -> missing -> ERROR
+- task_description (string) -> missing -> ERROR
+- status (string: active|paused|completed) -> invalid -> ERROR
+- team_name (string) -> missing -> ERROR
+- roles (array, non-empty) -> missing/empty -> ERROR
4. Check task-analysis.json
+- Not exists -> ERROR: "Invalid session: task-analysis.json missing"
+- Parse error -> ERROR: "Invalid session: task-analysis.json corrupt"
+- Validate required fields:
+- capabilities (array) -> missing -> ERROR
+- dependency_graph (object) -> missing -> ERROR
+- roles (array, non-empty) -> missing/empty -> ERROR
5. Check role-specs/ directory
+- Not exists -> ERROR: "Invalid session: role-specs/ directory missing"
+- No .md files -> ERROR: "Invalid session: no role-spec files in role-specs/"
6. Check role-spec file mapping and structure
+- For each role in team-session.json#roles:
+- Check role-specs/<role.name>.md exists
+- Not exists -> ERROR: "Role-spec file not found: role-specs/<role.name>.md"
+- Validate role-spec structure (see Role-Spec Structure Validation)
7. All checks pass -> proceed to Phase 0
```
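An abbreviated sketch of this algorithm (steps 1-3, 5, and 6; the task-analysis.json check is omitted for brevity). `exists`, `read`, and `listMd` are hypothetical stand-ins for filesystem access:

```javascript
// Validate session structure; throws with the specific error message on failure.
function validateSession(path, { exists, read, listMd }) {
  if (!path) throw new Error('Session required. Usage: --session=<path-to-TC-folder>')
  if (!exists(path)) throw new Error(`Session directory not found: ${path}`)
  let session
  try { session = JSON.parse(read(`${path}/team-session.json`)) }
  catch { throw new Error('Invalid session: team-session.json missing or corrupt') }
  for (const field of ['session_id', 'task_description', 'status', 'team_name'])
    if (typeof session[field] !== 'string')
      throw new Error(`Invalid session: team-session.json missing ${field}`)
  if (!['active', 'paused', 'completed'].includes(session.status))
    throw new Error(`Invalid session: bad status ${session.status}`)
  if (!Array.isArray(session.roles) || session.roles.length === 0)
    throw new Error('Invalid session: roles missing or empty')
  if (listMd(`${path}/role-specs`).length === 0)
    throw new Error('Invalid session: no role-spec files in role-specs/')
  for (const role of session.roles)
    if (!exists(`${path}/role-specs/${role.name}.md`))
      throw new Error(`Role-spec file not found: role-specs/${role.name}.md`)
  return session
}
```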
---
## team-session.json Schema
```json
{
"session_id": "TC-<slug>-<date>",
"task_description": "<original user input>",
"status": "active | paused | completed",
"team_name": "<team-name>",
"roles": [
{
"name": "<role-name>",
"prefix": "<PREFIX>",
"responsibility_type": "<type>",
"inner_loop": false,
"role_spec": "role-specs/<role-name>.md"
}
],
"pipeline": {
"dependency_graph": {},
"tasks_total": 0,
"tasks_completed": 0
},
"active_workers": [],
"completed_tasks": [],
"completion_action": "interactive",
"created_at": "<timestamp>"
}
```
### Required Fields
| Field | Type | Description |
|-------|------|-------------|
| `session_id` | string | Unique session identifier |
| `task_description` | string | Original task description from user |
| `status` | string | One of: "active", "paused", "completed" |
| `team_name` | string | Team name for Agent tool |
| `roles` | array | List of role definitions |
| `roles[].name` | string | Role name (must match .md filename) |
| `roles[].prefix` | string | Task prefix for this role |
| `roles[].role_spec` | string | Relative path to role-spec file |
### Optional Fields
| Field | Type | Description |
|-------|------|-------------|
| `pipeline` | object | Pipeline metadata |
| `active_workers` | array | Currently running workers |
| `completed_tasks` | array | List of completed task IDs |
| `completion_action` | string | Completion mode: interactive, auto_archive, auto_keep |
| `created_at` | string | ISO timestamp |
---
## task-analysis.json Schema
```json
{
"capabilities": [
{
"name": "<capability-name>",
"description": "<description>",
"artifact_type": "<type>"
}
],
"dependency_graph": {
"<task-id>": {
"depends_on": ["<dependency-task-id>"],
"role": "<role-name>"
}
},
"roles": [
{
"name": "<role-name>",
"prefix": "<PREFIX>",
"responsibility_type": "<type>",
"inner_loop": false,
"role_spec_metadata": {
"additional_members": [],
"message_types": {
"success": "<prefix>_complete",
"error": "error"
}
}
}
],
"complexity_score": 0
}
```
---
## Role-Spec File Schema
Each role-spec in `role-specs/<role-name>.md` follows the lightweight format with YAML frontmatter + Phase 2-4 body.
### Required Structure
```markdown
---
role: <name>
prefix: <PREFIX>
inner_loop: <true|false>
message_types:
success: <type>
error: error
---
# <Role Name> — Phase 2-4
## Phase 2: <Name>
<domain-specific context loading>
## Phase 3: <Name>
<domain-specific execution>
## Phase 4: <Name>
<domain-specific validation>
```
### Role-Spec Structure Validation
```
For each role-spec in role-specs/<role>.md:
1. Read file content
2. Check for YAML frontmatter (content between --- markers)
+- Not found -> ERROR: "Invalid role-spec: role-specs/<role>.md missing frontmatter"
3. Parse frontmatter, check required fields:
+- role (string) -> missing -> ERROR
+- prefix (string) -> missing -> ERROR
+- inner_loop (boolean) -> missing -> ERROR
+- message_types (object) -> missing -> ERROR
4. Check for "## Phase 2" section
+- Not found -> ERROR: "Invalid role-spec: missing Phase 2"
5. Check for "## Phase 3" section
+- Not found -> ERROR: "Invalid role-spec: missing Phase 3"
6. Check for "## Phase 4" section
+- Not found -> ERROR: "Invalid role-spec: missing Phase 4"
7. All checks pass -> role-spec valid
```
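A sketch of this check using plain string matching (a real implementation would parse the YAML properly; field detection here is a simplification):

```javascript
// Validate role-spec structure: frontmatter fields plus Phase 2-4 headings.
function validateRoleSpec(name, content) {
  const m = content.match(/^---\n([\s\S]*?)\n---/)
  if (!m) throw new Error(`Invalid role-spec: role-specs/${name}.md missing frontmatter`)
  const fm = m[1]
  for (const field of ['role:', 'prefix:', 'inner_loop:', 'message_types:'])
    if (!fm.includes(field))
      throw new Error(`Invalid role-spec: role-specs/${name}.md missing ${field.slice(0, -1)}`)
  for (const phase of ['## Phase 2', '## Phase 3', '## Phase 4'])
    if (!content.includes(phase))
      throw new Error(`Invalid role-spec: role-specs/${name}.md missing ${phase.slice(3)}`)
  return true
}
```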
---
## Example Valid Session
```
.workflow/.team/TC-auth-feature-2026-02-27/
+-- team-session.json # Valid JSON with session metadata
+-- task-analysis.json # Valid JSON with dependency graph
+-- role-specs/
| +-- researcher.md # YAML frontmatter + Phase 2-4
| +-- developer.md # YAML frontmatter + Phase 2-4
| +-- tester.md # YAML frontmatter + Phase 2-4
+-- artifacts/ # (may be empty)
+-- .msg/ # Team message bus + state (messages.jsonl + meta.json)
+-- wisdom/
| +-- learnings.md
| +-- decisions.md
| +-- issues.md
+-- explorations/
| +-- cache-index.json
+-- discussions/ # (may be empty)
```
---
## Recovery from Invalid Sessions
If session validation fails:
1. **Missing team-session.json**: Re-run team-coordinate with original task
2. **Missing task-analysis.json**: Re-run team-coordinate with resume
3. **Missing role-spec files**: Re-run team-coordinate with resume
4. **Invalid frontmatter**: Manual fix or re-run team-coordinate
5. **Corrupt JSON**: Manual inspection or re-run team-coordinate
**team-executor cannot fix invalid sessions** -- it can only report errors and suggest recovery steps.