feat: migrate all codex team skills from spawn_agents_on_csv to spawn_agent + wait_agent architecture

- Delete 21 old team skill directories using CSV-wave pipeline pattern (~100+ files)
- Delete old team-lifecycle (v3) and team-planex-v2
- Create generic team-worker.toml and team-supervisor.toml (replacing tlv4-specific TOMLs)
- Convert 19 team skills from Claude Code format (Agent/SendMessage/TaskCreate)
  to Codex format (spawn_agent/wait_agent/tasks.json/request_user_input)
- Update team-lifecycle-v4 to use generic agent types (team_worker/team_supervisor)
- Convert all coordinator role files: dispatch.md, monitor.md, role.md
- Convert all worker role files: remove run_in_background, fix Bash syntax
- Convert all specs/pipelines.md references
- Final state: 20 team skills, 217 .md files, zero Claude Code API residuals

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This commit is contained in:
catlog22
2026-03-24 16:54:48 +08:00
parent 54283e5dbb
commit 1e560ab8e8
334 changed files with 28996 additions and 35516 deletions


@@ -1,691 +1,153 @@
---
name: team-designer
description: Meta-skill for generating team skills. Analyzes requirements, scaffolds directory structure, generates role definitions and specs, validates completeness. Produces complete Codex team skill packages with SKILL.md orchestrator, CSV schemas, agent instructions, and interactive agents.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"skill description with roles and domain\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, request_user_input
description: Meta-skill for generating team skills following the v4 architecture pattern. Produces complete skill packages with SKILL.md router, coordinator, worker roles, specs, and templates. Triggers on "team-designer", "design team".
allowed-tools: spawn_agent(*), wait_agent(*), send_input(*), close_agent(*), report_agent_job_result(*), request_user_input(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---
# Team Skill Designer

Generate complete team skills following the team-lifecycle-v4 architecture: SKILL.md as universal router, coordinator with beat model, worker roles with optional commands/, shared specs, and templates.

## Auto Mode

When `--yes` or `-y`: auto-confirm task decomposition, skip interactive validation, use defaults.

## Usage
```bash
$team-designer "Design a code review team with analyst, reviewer, security-expert roles"
$team-designer -c 4 "Create a documentation team with researcher, writer, editor"
$team-designer -y "Generate a test automation team with planner, executor, tester"
$team-designer --continue "td-code-review-20260308"
```
**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume existing session
**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)
---
## Overview
Meta-skill for generating complete team skill packages. Takes a skill description with roles and domain, then: analyzes requirements -> scaffolds directory structure -> generates all role files, specs, templates -> validates the package. The generated skill follows the Codex hybrid team architecture (CSV wave primary + interactive secondary).
**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary)
## Architecture Overview
```
+-------------------------------------------------------------------+
| TEAM SKILL DESIGNER WORKFLOW |
+-------------------------------------------------------------------+
| |
| Phase 0: Pre-Wave Interactive (Requirement Clarification) |
| +- Parse user skill description |
| +- Detect input source (reference, structured, natural) |
| +- Gather core identity (skill name, prefix, domain) |
| +- Output: refined requirements for decomposition |
| |
| Phase 1: Requirement -> CSV + Classification |
| +- Discover roles from domain keywords |
| +- Define pipelines from role combinations |
| +- Determine commands distribution (inline vs commands/) |
| +- Build teamConfig data structure |
| +- Classify tasks: csv-wave | interactive (exec_mode) |
| +- Compute dependency waves (topological sort) |
| +- Generate tasks.csv with wave + exec_mode columns |
| +- User validates task breakdown (skip if -y) |
| |
| Phase 2: Wave Execution Engine (Extended) |
| +- For each wave (1..N): |
| | +- Execute pre-wave interactive tasks (if any) |
| | +- Build wave CSV (filter csv-wave tasks for this wave) |
| | +- Inject previous findings into prev_context column |
| | +- spawn_agents_on_csv(wave CSV) |
| | +- Execute post-wave interactive tasks (if any) |
| | +- Merge all results into master tasks.csv |
| | +- Check: any failed? -> skip dependents |
| +- discoveries.ndjson shared across all modes (append-only) |
| |
| Phase 3: Post-Wave Interactive (Validation) |
| +- Structural validation (files exist, sections present) |
| +- Reference integrity (role registry matches files) |
| +- Pipeline consistency (no circular deps, roles exist) |
| +- Final aggregation / report |
| |
| Phase 4: Results Aggregation |
| +- Export final results.csv |
| +- Generate context.md with all findings |
| +- Display summary: completed/failed/skipped per wave |
| +- Offer: view results | retry failed | done |
| |
+-------------------------------------------------------------------+
┌──────────────────────────────────────────────────────────────┐
│               Team Skill Designer (SKILL.md)                 │
│  → Orchestrator: gather requirements, generate files,        │
│    validate                                                  │
└──────────────────────────────┬───────────────────────────────┘
            ┌───────────┬──────┴──────┬───────────┐
            ↓           ↓             ↓           ↓
       ┌─────────┐ ┌─────────┐  ┌─────────┐ ┌─────────┐
       │ Phase 1 │ │ Phase 2 │  │ Phase 3 │ │ Phase 4 │
       │ Require │ │ Scaffold│  │ Content │ │ Valid   │
       │ Analysis│ │ Gen     │  │ Gen     │ │ & Report│
       └─────────┘ └─────────┘  └─────────┘ └─────────┘
            ↓           ↓             ↓           ↓
       teamConfig   SKILL.md     roles/      Validated
                    + dirs       specs/      skill pkg
                                 templates/
```
---
## Key Design Principles

1. **v4 Architecture Compliance**: Generated skills follow the team-lifecycle-v4 pattern — SKILL.md = pure router, beat model = coordinator-only, unified structure (roles/ + specs/ + templates/)
2. **Golden Sample Reference**: Uses `team-lifecycle-v4` as the reference implementation at `~ or <project>/.claude/skills/team-lifecycle-v4/`
3. **Intelligent Commands Distribution**: Auto-determines which roles need `commands/` (2+ commands) vs inline logic (1 command)
4. **team-worker Compatibility**: role.md files include correct YAML frontmatter for team-worker agent parsing

## Task Classification Rules

Each task is classified by `exec_mode`:
| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, needs clarification, revision cycles |
**Classification Decision**:
| Task Property | Classification |
|---------------|---------------|
| Single-pass file generation (role.md, spec.md) | `csv-wave` |
| Directory scaffold creation | `csv-wave` |
| SKILL.md generation (complex, multi-section) | `csv-wave` |
| Coordinator role generation (multi-file) | `csv-wave` |
| Worker role generation (single file) | `csv-wave` |
| Pipeline spec generation | `csv-wave` |
| Template generation | `csv-wave` |
| User requirement clarification | `interactive` |
| Validation requiring user approval | `interactive` |
| Error recovery (auto-fix vs regenerate choice) | `interactive` |
---
## CSV Schema
### tasks.csv (Master State)
```csv
id,title,description,role,file_target,gen_type,deps,context_from,exec_mode,wave,status,findings,files_produced,error
"SCAFFOLD-001","Create directory structure","Create the complete directory structure for the team skill including roles/, specs/, templates/ subdirectories","scaffolder","skill-dir","directory","","","csv-wave","1","pending","","",""
"SPEC-001","Generate pipelines spec","Generate specs/pipelines.md with pipeline definitions, task registry, conditional routing","spec-writer","specs/pipelines.md","spec","SCAFFOLD-001","","csv-wave","2","pending","","",""
"ROLE-001","Generate coordinator role","Generate roles/coordinator/role.md with entry router, command execution protocol, phase logic","role-writer","roles/coordinator/","role-bundle","SCAFFOLD-001;SPEC-001","SPEC-001","csv-wave","2","pending","","",""
"ROLE-002","Generate analyst worker role","Generate roles/analyst/role.md with domain-specific Phase 2-4 logic","role-writer","roles/analyst/role.md","role-inline","SCAFFOLD-001;SPEC-001","SPEC-001","csv-wave","2","pending","","",""
```
**Columns**:
| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (string) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description with generation instructions |
| `role` | Input | Generator role: `scaffolder`, `spec-writer`, `role-writer`, `router-writer`, `validator` |
| `file_target` | Input | Target file or directory path relative to skill root |
| `gen_type` | Input | Generation type: `directory`, `router`, `role-bundle`, `role-inline`, `spec`, `template` |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `files_produced` | Output | Semicolon-separated paths of produced files |
| `error` | Output | Error message if failed (empty if success) |
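The `parseCsv` helper referenced throughout this document is assumed rather than provided; a minimal sketch that handles the double-quoted fields used in tasks.csv (with `""` escapes, but not embedded newlines) might look like:

```javascript
// Minimal CSV row parser: supports double-quoted fields with "" escapes.
// Assumption: no newlines inside fields (true for the tasks.csv schema above).
function parseCsvLine(line) {
  const fields = [];
  let cur = '', inQuotes = false;
  for (let i = 0; i < line.length; i++) {
    const ch = line[i];
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++; } // escaped quote
      else if (ch === '"') inQuotes = false;
      else cur += ch;
    } else if (ch === '"') inQuotes = true;
    else if (ch === ',') { fields.push(cur); cur = ''; }
    else cur += ch;
  }
  fields.push(cur);
  return fields;
}

// Parse a whole CSV document into an array of row objects keyed by header.
function parseCsv(text) {
  const lines = text.trim().split('\n');
  const header = parseCsvLine(lines[0]);
  return lines.slice(1).map(line => {
    const row = {};
    parseCsvLine(line).forEach((v, i) => { row[header[i]] = v; });
    return row;
  });
}
```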
### Per-Wave CSV (Temporary)
Each wave generates a temporary `wave-{N}.csv` with extra `prev_context` column (csv-wave tasks only).
---
## Agent Registry (Interactive Agents)
| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| Requirement Clarifier | agents/requirement-clarifier.md | 2.3 (send_input cycle) | Gather and refine skill requirements interactively | standalone (Phase 0) |
| Validation Reporter | agents/validation-reporter.md | 2.3 (send_input cycle) | Validate generated skill package and report results | standalone (Phase 3) |
> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload.
---
## Output Artifacts
| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `teamConfig.json` | Phase 0/1 output: skill config, roles, pipelines | Created in Phase 1 |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |
---
## Session Structure

```
.workflow/.csv-wave/{session-id}/
+-- tasks.csv            # Master state (all tasks, both modes)
+-- results.csv          # Final results export
+-- discoveries.ndjson   # Shared discovery board (all agents)
+-- context.md           # Human-readable report
+-- teamConfig.json      # Skill configuration from Phase 1
+-- wave-{N}.csv         # Temporary per-wave input (csv-wave only)
+-- artifacts/           # Generated skill files (intermediate)
+-- interactive/         # Interactive task artifacts
|   +-- {id}-result.json
+-- validation/          # Validation reports
    +-- structural.json
    +-- references.json
```

## Execution Flow

```
Input Parsing:
└─ Parse user requirements (skill name, roles, pipelines, domain)
Phase 1: Requirements Analysis
├─ Ref: phases/01-requirements-analysis.md
├─ Tasks: Detect input → Gather roles → Define pipelines → Build teamConfig
└─ Output: teamConfig
Phase 2: Scaffold Generation
├─ Ref: phases/02-scaffold-generation.md
├─ Tasks: Create dirs → Generate SKILL.md router → Verify
└─ Output: SKILL.md + directory structure
Phase 3: Content Generation
├─ Ref: phases/03-content-generation.md
├─ Tasks: Coordinator → Workers → Specs → Templates
└─ Output: roles/**/*.md, specs/*.md, templates/*.md
Phase 4: Validation
├─ Ref: phases/04-validation.md
└─ Output: Validation report (PASS/REVIEW/FAIL)
Return:
└─ Summary with skill location and usage instructions
```
---
**Phase Reference Documents** (read on-demand when phase executes):

| Phase | Document | Purpose |
|-------|----------|---------|
| 1 | [phases/01-requirements-analysis.md](phases/01-requirements-analysis.md) | Gather team skill requirements, build teamConfig |
| 2 | [phases/02-scaffold-generation.md](phases/02-scaffold-generation.md) | Generate SKILL.md router and directory structure |
| 3 | [phases/03-content-generation.md](phases/03-content-generation.md) | Generate coordinator, workers, specs, templates |
| 4 | [phases/04-validation.md](phases/04-validation.md) | Validate structure, references, and consistency |

## Implementation

### Session Initialization

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3
const requirement = $ARGUMENTS
.replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
.trim()
const slug = requirement.toLowerCase()
.replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
.substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `td-${slug}-${dateStr}`
const sessionFolder = `.workflow/.csv-wave/${sessionId}`
Bash(`mkdir -p ${sessionFolder}/artifacts ${sessionFolder}/interactive ${sessionFolder}/validation`)
// Initialize discoveries.ndjson
Write(`${sessionFolder}/discoveries.ndjson`, '')
```
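Extracted into a pure function, the flag and session-id parsing above can be exercised standalone. This is an illustrative sketch (`parseDesignerArgs` is a hypothetical helper, not part of the skill API); note that the `-y` alternative in the regex will also strip `-y` occurring inside hyphenated words, a quirk inherited from the logic above.

```javascript
// Hypothetical standalone version of the session-initialization parsing above.
function parseDesignerArgs(args, now = new Date()) {
  const autoYes = args.includes('--yes') || args.includes('-y');
  const continueMode = args.includes('--continue');
  const m = args.match(/(?:--concurrency|-c)\s+(\d+)/);
  const maxConcurrency = m ? parseInt(m[1], 10) : 3;
  // Strip recognized flags, leaving only the requirement text.
  const requirement = args
    .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
    .trim();
  // Slug: lowercase, non-alphanumeric (and non-CJK) runs collapsed to '-', capped at 40 chars.
  const slug = requirement.toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
    .substring(0, 40);
  // Date in UTC+8, formatted YYYYMMDD.
  const dateStr = new Date(now.getTime() + 8 * 60 * 60 * 1000)
    .toISOString().substring(0, 10).replace(/-/g, '');
  return { autoYes, continueMode, maxConcurrency, requirement, sessionId: `td-${slug}-${dateStr}` };
}
```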
## Golden Sample

Generated skills follow the architecture of `~ or <project>/.claude/skills/team-lifecycle-v4/`:

```
.claude/skills/<skill-name>/
├── SKILL.md                    # Universal router (all roles read)
├── roles/
│   ├── coordinator/
│   │   ├── role.md             # Orchestrator + beat model + entry router
│   │   └── commands/
│   │       ├── analyze.md      # Task analysis
│   │       ├── dispatch.md     # Task chain creation
│   │       └── monitor.md      # Beat control + callbacks
│   ├── <inline-worker>/
│   │   └── role.md             # Phase 2-4 embedded (simple role)
│   └── <command-worker>/
│       ├── role.md             # Phase 2-4 dispatcher
│       └── commands/
│           ├── <cmd-1>.md
│           └── <cmd-2>.md
├── specs/
│   ├── pipelines.md            # Pipeline definitions + task registry
│   └── <domain-specs>.md       # Domain-specific specifications
└── templates/                  # Optional document templates
```
---
## Data Flow
### Phase 0: Pre-Wave Interactive (Requirement Clarification)
**Objective**: Parse user skill description, clarify ambiguities, build teamConfig.
**Workflow**:
1. **Parse user skill description** from $ARGUMENTS
2. **Detect input source**:
| Source Type | Detection | Action |
|-------------|-----------|--------|
| Reference | Contains "based on", "like", or existing skill path | Read referenced skill, extract structure |
| Structured | Contains ROLES:, PIPELINES:, or DOMAIN: | Parse structured input directly |
| Natural language | Default | Analyze keywords, discover roles |
3. **Check for existing sessions** (continue mode):
- Scan `.workflow/.csv-wave/td-*/tasks.csv` for sessions with pending tasks
- If `--continue`: resume the specified or most recent session, skip to Phase 2
4. **Gather core identity** (skip if AUTO_YES or already clear):
Read `agents/requirement-clarifier.md`, then:
```javascript
const clarifier = spawn_agent({
message: `## TASK ASSIGNMENT
### MANDATORY FIRST STEPS
1. Read: agents/requirement-clarifier.md
2. Read: ${sessionFolder}/discoveries.ndjson (if exists)
---
Goal: Gather team skill requirements from the user
Input: "${requirement}"
Session: ${sessionFolder}
Determine: skill name (kebab-case), session prefix (3-4 chars), domain description, roles, pipelines, commands distribution.`
})
const clarifyResult = wait({ ids: [clarifier], timeout_ms: 600000 })
if (clarifyResult.timed_out) {
send_input({ id: clarifier, message: "Please finalize requirements with current information." })
wait({ ids: [clarifier], timeout_ms: 120000 })
}
Write(`${sessionFolder}/interactive/clarify-result.json`, JSON.stringify({
task_id: "CLARIFY-001", status: "completed", findings: parseFindings(clarifyResult),
timestamp: getUtc8ISOString()
}))
close_agent({ id: clarifier })
```
5. **Build teamConfig** from gathered requirements:
```javascript
const teamConfig = {
skillName: "<kebab-case-name>",
sessionPrefix: "<3-4 char prefix>",
domain: "<domain description>",
title: "<Human Readable Title>",
roles: [
{ name: "coordinator", prefix: "—", inner_loop: false, hasCommands: true, commands: ["analyze", "dispatch", "monitor"], path: "roles/coordinator/role.md" },
// ... discovered worker roles
],
pipelines: [{ name: "<pipeline-name>", tasks: [/* task definitions */] }],
specs: ["pipelines"],
templates: [],
conditionalRouting: false,
targetDir: `.codex/skills/<skill-name>`
}
Write(`${sessionFolder}/teamConfig.json`, JSON.stringify(teamConfig, null, 2))
```

Cross-phase data flow:

```
User Input (skill name, roles, pipelines)
Phase 1: Requirements Analysis
  ↓ Output: teamConfig
Phase 2: Scaffold Generation
  ↓ Input: teamConfig
  ↓ Output: SKILL.md + skillDir
Phase 3: Content Generation
  ↓ Input: teamConfig + skillDir
  ↓ Output: roles/, specs/, templates/
Phase 4: Validation
  ↓ Input: teamConfig + all files
  ↓ Output: validation report
Return summary to user
```
6. **Decompose into tasks** -- generate tasks.csv from teamConfig:
| Task Pattern | gen_type | Wave | Description |
|--------------|----------|------|-------------|
| Directory scaffold | `directory` | 1 | Create skill directory structure |
| SKILL.md router | `router` | 2 | Generate main SKILL.md orchestrator |
| Pipeline spec | `spec` | 2 | Generate specs/pipelines.md |
| Domain specs | `spec` | 2 | Generate additional specs files |
| Coordinator role | `role-bundle` | 3 | Generate coordinator role.md + commands/ |
| Worker roles (each) | `role-inline` or `role-bundle` | 3 | Generate each worker role.md |
| Templates (each) | `template` | 3 | Generate template files |
| Validation | `validation` | 4 | Validate the complete package |
**Success Criteria**:
- teamConfig.json written with complete configuration
- Refined requirements available for Phase 1 decomposition
- Interactive agents closed, results stored
---
### Phase 1: Requirement -> CSV + Classification
**Objective**: Generate tasks.csv from teamConfig with dependency-ordered waves.
**Decomposition Rules**:
1. **Role Discovery** -- scan domain description for keywords:
| Signal | Keywords | Role Name | Prefix |
|--------|----------|-----------|--------|
| Analysis | analyze, research, investigate, explore | analyst | RESEARCH |
| Planning | plan, design, architect, decompose | planner | PLAN |
| Writing | write, document, draft, spec, report | writer | DRAFT |
| Implementation | implement, build, code, develop | executor | IMPL |
| Testing | test, verify, validate, qa | tester | TEST |
| Review | review, audit, check, inspect | reviewer | REVIEW |
| Security | security, vulnerability, penetration | security-expert | SECURITY |
2. **Commands Distribution** -- determine inline vs commands/:
| Condition | Commands Structure |
|-----------|-------------------|
| 1 distinct action for role | Inline in role.md |
| 2+ distinct actions | commands/ folder |
| Coordinator (always) | commands/: analyze, dispatch, monitor |
3. **Pipeline Construction** -- build from role ordering:
| Role Combination | Pipeline Type |
|------------------|---------------|
| analyst + writer + executor | full-lifecycle |
| analyst + writer (no executor) | spec-only |
| planner + executor (no analyst) | impl-only |
| Other | custom |
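The keyword scan in rule 1 can be sketched as a simple lookup. This is an illustrative sketch only (`ROLE_SIGNALS` and `discoverRoles` are hypothetical names; the table is abbreviated and should carry the full signal set above, including security-expert):

```javascript
// Hypothetical role discovery from a domain description (rule 1 above).
const ROLE_SIGNALS = [
  { keywords: ['analyze', 'research', 'investigate', 'explore'], role: 'analyst',  prefix: 'RESEARCH' },
  { keywords: ['plan', 'design', 'architect', 'decompose'],      role: 'planner',  prefix: 'PLAN' },
  { keywords: ['write', 'document', 'draft', 'spec', 'report'],  role: 'writer',   prefix: 'DRAFT' },
  { keywords: ['implement', 'build', 'code', 'develop'],         role: 'executor', prefix: 'IMPL' },
  { keywords: ['test', 'verify', 'validate', 'qa'],              role: 'tester',   prefix: 'TEST' },
  { keywords: ['review', 'audit', 'check', 'inspect'],           role: 'reviewer', prefix: 'REVIEW' },
];

function discoverRoles(domain) {
  const text = domain.toLowerCase();
  // Keep signal order stable so generated pipelines are deterministic.
  return ROLE_SIGNALS
    .filter(s => s.keywords.some(k => text.includes(k)))
    .map(s => ({ name: s.role, prefix: s.prefix }));
}
```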
**Classification Rules**:
| Task Property | exec_mode |
|---------------|-----------|
| Directory creation | `csv-wave` |
| Single file generation (role.md, spec.md) | `csv-wave` |
| Multi-file bundle generation (coordinator) | `csv-wave` |
| SKILL.md router generation | `csv-wave` |
| User requirement clarification | `interactive` |
| Validation with error recovery | `interactive` |
**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).
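A minimal sketch of the wave computation (Kahn's BFS with level tracking; a task enters the frontier the wave after its last dependency completes, so its wave equals max(dependency waves) + 1). Assumes every id referenced in `deps` exists in the task list:

```javascript
// Assign 1-based wave numbers via Kahn's BFS; throws on circular dependencies.
function computeWaves(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]));
  const indegree = new Map(tasks.map(t => [t.id, 0]));
  const dependents = new Map(tasks.map(t => [t.id, []]));
  for (const t of tasks) {
    for (const dep of (t.deps || '').split(';').filter(Boolean)) {
      indegree.set(t.id, indegree.get(t.id) + 1);
      dependents.get(dep).push(t.id); // assumes dep id exists
    }
  }
  let frontier = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id);
  let wave = 0, placed = 0;
  while (frontier.length) {
    wave++;
    const next = [];
    for (const id of frontier) {
      byId.get(id).wave = wave;
      placed++;
      for (const d of dependents.get(id)) {
        indegree.set(d, indegree.get(d) - 1);
        if (indegree.get(d) === 0) next.push(d);
      }
    }
    frontier = next;
  }
  // Any unplaced task is part of a dependency cycle.
  if (placed !== tasks.length) throw new Error('Circular dependency detected');
  return tasks;
}
```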
**User Validation**: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).
**Success Criteria**:
- tasks.csv created with valid schema, wave, and exec_mode assignments
- No circular dependencies
- User approved (or AUTO_YES)
---
### Phase 2: Wave Execution Engine (Extended)
**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.
```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
const maxWave = Math.max(...tasks.map(t => Number(t.wave))) // wave values come from CSV as strings
for (let wave = 1; wave <= maxWave; wave++) {
console.log(`\nWave ${wave}/${maxWave}`)
// 1. Separate tasks by exec_mode
const waveTasks = tasks.filter(t => Number(t.wave) === wave && t.status === 'pending')
const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')
// 2. Check dependencies -- skip tasks whose deps failed
for (const task of waveTasks) {
const depIds = (task.deps || '').split(';').filter(Boolean)
const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
task.status = 'skipped'
task.error = `Dependency failed: ${depIds.filter((id, i) =>
['failed','skipped'].includes(depStatuses[i])).join(', ')}`
}
}
// 3. Execute pre-wave interactive tasks
const preWaveInteractive = interactiveTasks.filter(t => t.status === 'pending')
for (const task of preWaveInteractive) {
// Use appropriate interactive agent
const agentFile = task.gen_type === 'validation'
? 'agents/validation-reporter.md'
: 'agents/requirement-clarifier.md'
Read(agentFile)
const agent = spawn_agent({
message: `## TASK ASSIGNMENT\n\n### MANDATORY FIRST STEPS\n1. Read: ${agentFile}\n2. Read: ${sessionFolder}/discoveries.ndjson\n\nGoal: ${task.description}\nScope: ${task.title}\nSession: ${sessionFolder}\nteamConfig: ${sessionFolder}/teamConfig.json\n\n### Previous Context\n${buildPrevContext(task, tasks)}`
})
const result = wait({ ids: [agent], timeout_ms: 600000 })
if (result.timed_out) {
send_input({ id: agent, message: "Please finalize and output current findings." })
wait({ ids: [agent], timeout_ms: 120000 })
}
Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
task_id: task.id, status: "completed", findings: parseFindings(result),
timestamp: getUtc8ISOString()
}))
close_agent({ id: agent })
task.status = 'completed'
task.findings = parseFindings(result)
}
// 4. Build prev_context for csv-wave tasks
const pendingCsvTasks = csvTasks.filter(t => t.status === 'pending')
for (const task of pendingCsvTasks) {
task.prev_context = buildPrevContext(task, tasks)
}
if (pendingCsvTasks.length > 0) {
// 5. Write wave CSV
Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingCsvTasks))
// 6. Execute wave via spawn_agents_on_csv
spawn_agents_on_csv({
csv_path: `${sessionFolder}/wave-${wave}.csv`,
id_column: "id",
instruction: Read(`instructions/agent-instruction.md`)
.replace(/<session-folder>/g, sessionFolder),
max_concurrency: maxConcurrency,
max_runtime_seconds: 900,
output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
output_schema: {
type: "object",
properties: {
id: { type: "string" },
status: { type: "string", enum: ["completed", "failed"] },
findings: { type: "string" },
files_produced: { type: "string" },
error: { type: "string" }
}
}
})
// 7. Merge results into master CSV
const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
for (const r of results) {
const t = tasks.find(t => t.id === r.id)
if (t) Object.assign(t, r)
}
}
// 8. Update master CSV
Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))
// 9. Cleanup temp files
Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)
// 10. Display wave summary
const completed = waveTasks.filter(t => t.status === 'completed').length
const failed = waveTasks.filter(t => t.status === 'failed').length
const skipped = waveTasks.filter(t => t.status === 'skipped').length
console.log(`Wave ${wave} Complete: ${completed} completed, ${failed} failed, ${skipped} skipped`)
}
```
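The engine above calls `buildPrevContext(task, tasks)` without defining it. One plausible implementation, following the `context_from` column semantics (assumes findings were already merged into the master task list):

```javascript
// Assemble prior findings for a task from the ids in its context_from column.
function buildPrevContext(task, tasks) {
  const ids = (task.context_from || '').split(';').filter(Boolean);
  const lines = ids
    .map(id => tasks.find(t => t.id === id))
    .filter(t => t && t.status === 'completed' && t.findings)
    .map(t => `- [${t.id}] ${t.findings}`);
  return lines.length ? lines.join('\n') : '(no previous context)';
}
```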
**Success Criteria**:
- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when predecessor failed
- discoveries.ndjson accumulated across all waves and mechanisms
---
### Phase 3: Post-Wave Interactive (Validation)
**Objective**: Validate the generated team skill package and present results.
Read `agents/validation-reporter.md`, then:
```javascript
const validator = spawn_agent({
message: `## TASK ASSIGNMENT
### MANDATORY FIRST STEPS
1. Read: agents/validation-reporter.md
2. Read: ${sessionFolder}/discoveries.ndjson
3. Read: ${sessionFolder}/teamConfig.json
---
Goal: Validate the generated team skill package at ${teamConfig.targetDir}
Session: ${sessionFolder}
### Validation Checks
1. Structural: All files exist per teamConfig
2. SKILL.md: Required sections present, role registry correct
3. Role frontmatter: YAML frontmatter valid for each worker role
4. Pipeline consistency: No circular deps, roles referenced exist
5. Commands distribution: commands/ matches hasCommands flag
### Previous Context
${buildCompletePrevContext(tasks)}`
})
const validResult = wait({ ids: [validator], timeout_ms: 600000 })
if (validResult.timed_out) {
send_input({ id: validator, message: "Please finalize validation with current findings." })
wait({ ids: [validator], timeout_ms: 120000 })
}
Write(`${sessionFolder}/interactive/validation-result.json`, JSON.stringify({
task_id: "VALIDATE-001", status: "completed", findings: parseFindings(validResult),
timestamp: getUtc8ISOString()
}))
close_agent({ id: validator })
```
**Success Criteria**:
- Post-wave interactive processing complete
- Validation report generated
- Interactive agents closed, results stored
---
### Phase 4: Results Aggregation
**Objective**: Generate final results and human-readable report.
```javascript
// 1. Export results.csv
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)
// 2. Generate context.md
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
let contextMd = `# Team Skill Designer Report\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Skill**: ${teamConfig.skillName}\n`
contextMd += `**Target**: ${teamConfig.targetDir}\n\n`
contextMd += `## Summary\n`
contextMd += `| Status | Count |\n|--------|-------|\n`
contextMd += `| Completed | ${tasks.filter(t => t.status === 'completed').length} |\n`
contextMd += `| Failed | ${tasks.filter(t => t.status === 'failed').length} |\n`
contextMd += `| Skipped | ${tasks.filter(t => t.status === 'skipped').length} |\n\n`
contextMd += `## Generated Skill Structure\n\n`
contextMd += `\`\`\`\n${teamConfig.targetDir}/\n`
contextMd += `+-- SKILL.md\n+-- roles/\n+-- specs/\n+-- templates/\n`
// ... expand with the concrete roles, specs, templates from teamConfig
contextMd += `\`\`\`\n\n`
contextMd += `## Validation\n`
// ... validation results
Write(`${sessionFolder}/context.md`, contextMd)
// 3. Display final summary
console.log(`\nTeam Skill Designer Complete`)
console.log(`Generated skill: ${teamConfig.targetDir}`)
console.log(`Results: ${sessionFolder}/results.csv`)
console.log(`Report: ${sessionFolder}/context.md`)
console.log(`\nUsage: $${teamConfig.skillName} "task description"`)
```
**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated
- All interactive agents closed
- Summary displayed to user
---
## Shared Discovery Board Protocol
All agents (csv-wave and interactive) share a single `discoveries.ndjson` file for cross-task knowledge exchange.
**Format**: One JSON object per line (NDJSON):
```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"SCAFFOLD-001","type":"dir_created","data":{"path":"~ or <project>/.codex/skills/team-code-review/","description":"Created skill directory structure"}}
{"ts":"2026-03-08T10:05:00Z","worker":"ROLE-001","type":"file_generated","data":{"file":"roles/coordinator/role.md","gen_type":"role-bundle","sections":["entry-router","commands"]}}
{"ts":"2026-03-08T10:10:00Z","worker":"SPEC-001","type":"pattern_found","data":{"pattern_name":"full-lifecycle","description":"Pipeline with analyst -> writer -> executor -> tester"}}
```
**Discovery Types**:
| Type | Data Schema | Description |
|------|-------------|-------------|
| `dir_created` | `{path, description}` | Directory structure created |
| `file_generated` | `{file, gen_type, sections}` | File generated with specific sections |
| `pattern_found` | `{pattern_name, description}` | Design pattern identified in golden sample |
| `config_decision` | `{decision, rationale, impact}` | Configuration decision made |
| `validation_result` | `{check, passed, message}` | Validation check result |
| `reference_found` | `{source, target, type}` | Cross-reference between generated files |
**Protocol**:
1. Agents MUST read discoveries.ndjson at start of execution
2. Agents MUST append relevant discoveries during execution
3. Agents MUST NOT modify or delete existing entries
4. Deduplication by `{type, data.file, data.path}` key
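Reading the board defensively, per protocol rule 4 and the corrupt-file rule in the error table (skip malformed lines, dedupe by the `{type, data.file, data.path}` key), might look like this sketch (`loadDiscoveries` is an illustrative name):

```javascript
// Load discoveries.ndjson: tolerate malformed lines, dedupe per protocol rule 4.
function loadDiscoveries(ndjsonText) {
  const seen = new Set();
  const out = [];
  for (const line of ndjsonText.split('\n')) {
    if (!line.trim()) continue;
    let entry;
    try { entry = JSON.parse(line); } catch { continue; } // corrupt line: ignore, keep going
    const key = [entry.type, entry.data?.file ?? '', entry.data?.path ?? ''].join('|');
    if (seen.has(key)) continue;
    seen.add(key);
    out.push(entry);
  }
  return out;
}
```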
---
## Error Handling
| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Invalid role name | Reject with a message; names must be lowercase alphanumeric with hyphens, max 20 chars |
| Directory conflict | Warn if skill directory already exists, ask user to confirm overwrite |
| Golden sample not found | Fall back to embedded templates in instructions |
| Validation FAIL | Offer auto-fix, regenerate, or accept as-is |
| Continue mode: no session found | List available sessions, prompt user to select |
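The invalid-role-name rule in the table above can be enforced with a small check (a sketch; `isValidRoleName` is an illustrative name):

```javascript
// Role names: lowercase alphanumeric segments joined by single hyphens, max 20 chars.
function isValidRoleName(name) {
  return /^[a-z0-9]+(-[a-z0-9]+)*$/.test(name) && name.length <= 20;
}
```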
---
## Core Rules
1. **Start Immediately**: First action is Phase 1 execution
2. **Parse Every Output**: Extract teamConfig from Phase 1 for subsequent phases
3. **Auto-Continue**: After each phase, automatically execute next phase
4. **Progressive Phase Loading**: Read phase docs ONLY when that phase is about to execute
5. **Golden Sample Fidelity**: Generated files must match team-lifecycle-v4 patterns
6. **DO NOT STOP**: Continuous workflow until all 4 phases complete
---
## Input Processing
Convert user input to structured format:
```
SKILL_NAME: [kebab-case name, e.g., team-code-review]
DOMAIN: [what this team does, e.g., "multi-stage code review with security analysis"]
ROLES: [worker roles beyond coordinator, e.g., "analyst, reviewer, security-expert"]
PIPELINES: [pipeline types and flows, e.g., "review-only: SCAN-001 → REVIEW-001 → REPORT-001"]
SESSION_PREFIX: [3-4 char, e.g., TCR]
```
---
## Coordinator Role Constraints (Main Agent)
**CRITICAL**: The coordinator (main agent executing this skill) is responsible for **orchestration only**, NOT implementation.
1. **Coordinator Does NOT Execute Code**: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
   - Spawns agents with task assignments
   - Waits for agent callbacks
   - Merges results and coordinates workflow
   - Manages workflow transitions between phases
2. **Patient Waiting is Mandatory**: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
   - Wait patiently for `wait_agent()` calls to complete
   - NOT skip workflow steps due to perceived delays
   - NOT assume agents have failed just because they're taking time
   - Trust the timeout mechanisms defined in the skill
3. **Use send_input for Clarification**: When agents need guidance or appear stuck, the coordinator MUST:
   - Use `send_input()` to ask questions or provide clarification
   - NOT skip the agent or move to the next phase prematurely
   - Give agents an opportunity to respond before escalating
   - Example: `send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })`
4. **No Workflow Shortcuts**: The coordinator MUST NOT:
   - Skip phases or stages defined in the workflow
   - Bypass required approval or review steps
   - Execute dependent tasks before prerequisites complete
   - Assume task completion without explicit agent callback
   - Make up or fabricate agent results
5. **Respect Long-Running Processes**: This is a complex multi-agent workflow that requires patience:
   - Total execution time may range from 30-90 minutes or longer
   - Each phase may take 10-30 minutes depending on complexity
   - The coordinator must remain active and attentive throughout the entire process
   - Do not terminate or skip steps due to time concerns

View File

@@ -1,247 +0,0 @@
# Requirement Clarifier Agent
Interactive agent for gathering and refining team skill requirements from user input. Used in Phase 0 when the skill description needs clarification or missing details.
## Identity
- **Type**: `interactive`
- **Role File**: `agents/requirement-clarifier.md`
- **Responsibility**: Gather skill name, roles, pipelines, specs, and build teamConfig
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Parse user input to detect input source (reference, structured, natural)
- Gather all required teamConfig fields
- Confirm configuration with user before reporting complete
- Produce structured output following template
- Write teamConfig.json to session folder
### MUST NOT
- Skip the MANDATORY FIRST STEPS role loading
- Generate skill files (that is Phase 2 work)
- Approve incomplete configurations
- Produce unstructured output
- Exceed defined scope boundaries
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | built-in | Load reference skills, existing patterns |
| `request_user_input` | built-in | Gather missing details from user |
| `Write` | built-in | Store teamConfig.json |
| `Glob` | built-in | Find reference skill files |
### Tool Usage Patterns
**Read Pattern**: Load reference skill for pattern extraction
```
Read(".codex/skills/<reference-skill>/SKILL.md")
Read(".codex/skills/<reference-skill>/schemas/tasks-schema.md")
```
**Write Pattern**: Store configuration
```
Write("<session>/teamConfig.json", <config>)
```
---
## Execution
### Phase 1: Input Detection
**Objective**: Detect input source type and extract initial information
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| User requirement | Yes | Skill description from $ARGUMENTS |
| Reference skill | No | Existing skill if "based on" detected |
**Steps**:
1. Parse user input to detect source type:
| Source Type | Detection | Action |
|-------------|-----------|--------|
| Reference | Contains "based on", "like", skill path | Read referenced skill, extract roles/pipelines |
| Structured | Contains ROLES:, PIPELINES:, DOMAIN: | Parse structured fields directly |
| Natural language | Default | Analyze keywords for role discovery |
2. Extract initial information from detected source
3. Identify missing required fields
**Output**: Initial teamConfig draft with gaps identified
---
### Phase 2: Requirement Gathering
**Objective**: Fill in all required teamConfig fields
**Steps**:
1. **Core Identity** -- gather if not clear from input:
```javascript
request_user_input({
questions: [
{
question: "Team skill name? (kebab-case, e.g., team-code-review)",
header: "Skill Name",
id: "skill_name",
options: [
{ label: "<auto-suggested-name> (Recommended)", description: "Auto-suggested from description" },
{ label: "Custom", description: "Enter custom name" }
]
},
{
question: "Session prefix? (3-4 chars for task IDs, e.g., TCR)",
header: "Prefix",
id: "session_prefix",
options: [
{ label: "<auto-suggested-prefix> (Recommended)", description: "Auto-suggested" },
{ label: "Custom", description: "Enter custom prefix" }
]
}
]
})
```
2. **Role Discovery** -- identify roles from domain keywords:
| Signal | Keywords | Default Role |
|--------|----------|-------------|
| Analysis | analyze, research, investigate | analyst |
| Planning | plan, design, architect | planner |
| Writing | write, document, draft | writer |
| Implementation | implement, build, code | executor |
| Testing | test, verify, validate | tester |
| Review | review, audit, check | reviewer |
3. **Commands Distribution** -- determine per role:
| Rule | Condition | Result |
|------|-----------|--------|
| Coordinator | Always | commands/: analyze, dispatch, monitor |
| Multi-action role | 2+ distinct actions | commands/ folder |
| Single-action role | 1 action | Inline in role.md |
4. **Pipeline Construction** -- determine from role combination:
| Roles Present | Pipeline Type |
|---------------|---------------|
| analyst + writer + executor | full-lifecycle |
| analyst + writer (no executor) | spec-only |
| planner + executor (no analyst) | impl-only |
| Other combinations | custom |
5. **Specs and Templates** -- determine required specs:
- Always: pipelines.md
- If quality gates needed: quality-gates.md
- If writer role: domain-appropriate templates
**Output**: Complete teamConfig ready for confirmation
---
### Phase 3: Confirmation
**Objective**: Present configuration summary and get user approval
**Steps**:
1. Display configuration summary:
```
Team Skill Configuration Summary
Skill Name: <skillName>
Session Prefix: <sessionPrefix>
Domain: <domain>
Target: .codex/skills/<skillName>/
Roles:
+- coordinator (commands: analyze, dispatch, monitor)
+- <role-a> [PREFIX-*] (inline)
+- <role-b> [PREFIX-*] (commands: cmd1, cmd2)
Pipelines:
+- <pipeline-name>: TASK-001 -> TASK-002 -> TASK-003
Specs: pipelines, <additional>
Templates: <list or none>
```
2. Present confirmation:
```javascript
request_user_input({
questions: [{
question: "Confirm this team skill configuration?",
header: "Config Review",
id: "config_review",
options: [
{ label: "Confirm (Recommended)", description: "Proceed with generation" },
{ label: "Modify Roles", description: "Add, remove, or change roles" },
{ label: "Modify Pipelines", description: "Change pipeline structure" }
]
}]
})
```
3. Handle response:
| Response | Action |
|----------|--------|
| Confirm | Write teamConfig.json, report complete |
| Modify Roles | Loop back to role gathering |
| Modify Pipelines | Loop back to pipeline construction |
| Cancel | Report cancelled status |
**Output**: Confirmed teamConfig.json written to session folder
---
## Structured Output Template
```
## Summary
- Configuration: <confirmed|modified|cancelled>
- Skill: <skill-name>
## Configuration
- Roles: <count> roles defined
- Pipelines: <count> pipelines
- Target: <target-dir>
## Details
- Role list with prefix and commands structure
- Pipeline definitions with task flow
- Specs and templates list
## Open Questions
1. Any unresolved items from clarification
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Reference skill not found | Report error, ask for correct path |
| Invalid role name | Suggest valid kebab-case alternative |
| Conflicting pipeline structure | Ask user to resolve |
| User does not respond | Timeout, report partial with current config |
| Processing failure | Output partial results with clear status indicator |

View File

@@ -1,186 +0,0 @@
# Validation Reporter Agent
Validate generated skill package structure and content, reporting results with PASS/WARN/FAIL verdict.
## Identity
- **Type**: `interactive`
- **Role File**: `agents/validation-reporter.md`
- **Responsibility**: Validate generated skill package structure and content, report results
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Load the generated skill package from session artifacts
- Validate all structural integrity checks
- Produce structured output with clear PASS/WARN/FAIL verdict
- Include specific file references in findings
### MUST NOT
- Skip the MANDATORY FIRST STEPS role loading
- Modify generated skill files
- Produce unstructured output
- Report PASS without actually validating all checks
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | built-in | Load generated skill files and verify content |
| `Glob` | built-in | Find files by pattern in skill package |
| `Grep` | built-in | Search for cross-references and patterns |
| `Bash` | built-in | Run validation commands, check JSON syntax |
### Tool Usage Patterns
**Read Pattern**: Load skill package files for validation
```
Read("{session_folder}/artifacts/<skill-name>/SKILL.md")
Read("{session_folder}/artifacts/<skill-name>/team-config.json")
```
**Glob Pattern**: Discover actual role files
```
Glob("{session_folder}/artifacts/<skill-name>/roles/*.md")
Glob("{session_folder}/artifacts/<skill-name>/commands/*.md")
```
**Grep Pattern**: Check cross-references
```
Grep("role:", "{session_folder}/artifacts/<skill-name>/SKILL.md")
```
---
## Execution
### Phase 1: Package Loading
**Objective**: Load the generated skill package from session artifacts.
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| Skill package path | Yes | Path to generated skill directory in artifacts/ |
| teamConfig.json | Yes | Original configuration used for generation |
**Steps**:
1. Read SKILL.md from the generated package
2. Read team-config.json from the generated package
3. Enumerate all files in the package using Glob
4. Read teamConfig.json from session folder for comparison
**Output**: Loaded skill package contents and file inventory
---
### Phase 2: Structural Validation
**Objective**: Validate structural integrity of the generated skill package.
**Steps**:
1. **SKILL.md validation**:
- Verify file exists
- Verify valid frontmatter (name, description, allowed-tools)
- Verify Role Registry table is present
2. **Role Registry consistency**:
- Extract roles listed in SKILL.md Role Registry table
- Glob actual files in roles/ directory
- Compare: every registry entry has a matching file, every file has a registry entry
3. **Role file validation**:
- Read each role.md in roles/ directory
- Verify valid frontmatter (prefix, inner_loop, message_types)
- Check frontmatter values are non-empty
4. **Pipeline validation**:
- Extract pipeline stages from SKILL.md or specs/pipelines.md
- Verify each stage references an existing role
5. **team-config.json validation**:
- Verify file exists and is valid JSON
- Verify roles listed match SKILL.md Role Registry
6. **Cross-reference validation**:
- Check coordinator commands/ files exist if referenced in SKILL.md
- Verify no broken file paths in cross-references
7. **Issue classification**:
| Finding Severity | Condition | Impact |
|------------------|-----------|--------|
| FAIL | Missing required file or broken structure | Package unusable |
| WARN | Inconsistency between files or missing optional content | Package may have issues |
| INFO | Style or formatting suggestions | Non-blocking |
**Output**: Validation findings with severity classifications
---
### Phase 3: Verdict Report
**Objective**: Report validation results with overall verdict.
| Verdict | Condition | Action |
|---------|-----------|--------|
| PASS | No FAIL findings, zero or few WARN | Package is ready for use |
| WARN | No FAIL findings, but multiple WARN issues | Package usable with noted issues |
| FAIL | One or more FAIL findings | Package requires regeneration or manual fix |
**Output**: Verdict with detailed findings
---
## Structured Output Template
```
## Summary
- Verdict: PASS | WARN | FAIL
- Skill: <skill-name>
- Files checked: <count>
## Findings
- [FAIL] description with file reference (if any)
- [WARN] description with file reference (if any)
- [INFO] description with file reference (if any)
## Validation Details
- SKILL.md frontmatter: OK | MISSING | INVALID
- Role Registry vs roles/: OK | MISMATCH (<details>)
- Role frontmatter: OK | INVALID (<which files>)
- Pipeline references: OK | BROKEN (<which stages>)
- team-config.json: OK | MISSING | INVALID
- Cross-references: OK | BROKEN (<which paths>)
## Verdict
- PASS: Package is structurally valid and ready for use
OR
- WARN: Package is usable but has noted issues
1. Issue description
OR
- FAIL: Package requires fixes before use
1. Issue description + suggested resolution
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Skill package directory not found | Report as FAIL, request correct path |
| SKILL.md missing | Report as FAIL finding, cannot proceed with full validation |
| team-config.json invalid JSON | Report as FAIL, include parse error |
| Role file unreadable | Report as WARN, note which file |
| Timeout approaching | Output current findings with "PARTIAL" status |

View File

@@ -1,163 +0,0 @@
# Agent Instruction Template -- Team Skill Designer
Base instruction template for CSV wave agents. Each agent receives this template with its row's column values substituted at runtime via `spawn_agents_on_csv`.
## Purpose
| Phase | Usage |
|-------|-------|
| Phase 1 | Baked into instruction parameter with session folder path |
| Phase 2 | Injected as `instruction` parameter to `spawn_agents_on_csv` |
---
## Base Instruction Template
```markdown
## TASK ASSIGNMENT -- Team Skill Designer
### MANDATORY FIRST STEPS
1. Read shared discoveries: <session-folder>/discoveries.ndjson (if exists, skip if not)
2. Read project context: .workflow/project-tech.json (if exists)
3. Read teamConfig: <session-folder>/teamConfig.json (REQUIRED -- contains complete skill configuration)
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Role**: {role}
**File Target**: {file_target}
**Generation Type**: {gen_type}
### Task Description
{description}
### Previous Tasks' Findings (Context)
{prev_context}
---
## Execution Protocol
1. **Read discoveries**: Load <session-folder>/discoveries.ndjson for shared exploration findings
2. **Read teamConfig**: Load <session-folder>/teamConfig.json for complete skill configuration (roles, pipelines, specs, templates)
3. **Use context**: Apply previous tasks' findings from prev_context above
4. **Execute by gen_type**:
### For gen_type = directory
- Parse teamConfig to determine required directories
- Create directory structure at teamConfig.targetDir
- Create subdirectories: roles/, specs/, templates/ (if needed)
- Create per-role subdirectories: roles/<role-name>/ (+ commands/ if hasCommands)
- Verify all directories exist
### For gen_type = router
- Read existing Codex team skill SKILL.md as reference pattern
- Generate SKILL.md with these sections in order:
1. YAML frontmatter (name, description, argument-hint, allowed-tools)
2. Auto Mode section
3. Title + Usage examples
4. Overview with workflow diagram
5. Task Classification Rules
6. CSV Schema (header + column definitions)
7. Agent Registry (if interactive agents exist)
8. Output Artifacts table
9. Session Structure diagram
10. Implementation (session init, phases 0-4)
11. Discovery Board Protocol
12. Error Handling table
13. Core Rules list
- Use teamConfig.roles for role registry
- Use teamConfig.pipelines for pipeline definitions
### For gen_type = role-bundle
- Generate role.md with:
1. YAML frontmatter (role, prefix, inner_loop, message_types)
2. Identity section
3. Boundaries (MUST/MUST NOT)
4. Entry Router (for coordinator)
5. Phase references (Phase 0-5 for coordinator)
- Generate commands/*.md for each command in teamConfig.roles[].commands
- Each command file: Purpose, Constants, Phase 2-4 execution logic
- Coordinator always gets: analyze.md, dispatch.md, monitor.md
### For gen_type = role-inline
- Generate single role.md with:
1. YAML frontmatter (role, prefix, inner_loop, message_types)
2. Identity section
3. Boundaries (MUST/MUST NOT)
4. Phase 2: Context Loading
5. Phase 3: Domain Execution (role-specific logic)
6. Phase 4: Output & Report
### For gen_type = spec
- For pipelines.md: Generate from teamConfig.pipelines
- Pipeline name, task table (ID, Role, Name, Depends On, Checkpoint)
- Task metadata registry
- Conditional routing rules
- Dynamic specialist injection
- For other specs: Generate domain-appropriate content
### For gen_type = template
- Check for reference templates in existing skills
- Generate domain-appropriate template structure
- Include placeholder sections and formatting guidelines
5. **Share discoveries**: Append exploration findings to shared board:
```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> <session-folder>/discoveries.ndjson
```
6. **Report result**: Return JSON via report_agent_job_result
### Discovery Types to Share
- `dir_created`: {path, description} -- Directory structure created
- `file_generated`: {file, gen_type, sections} -- File generated with specific sections
- `pattern_found`: {pattern_name, description} -- Design pattern identified
- `config_decision`: {decision, rationale, impact} -- Configuration decision made
- `reference_found`: {source, target, type} -- Cross-reference between generated files
---
## Output (report_agent_job_result)
Return JSON:
{
"id": "{id}",
"status": "completed" | "failed",
"findings": "Key discoveries and generation notes (max 500 chars)",
"files_produced": "semicolon-separated paths of produced files relative to skill root",
"error": ""
}
```
---
## Quality Requirements
All agents must verify before reporting complete:
| Requirement | Criteria |
|-------------|----------|
| Files produced | Verify all claimed files exist via Read |
| teamConfig adherence | Generated content matches teamConfig specifications |
| Pattern fidelity | Generated files follow existing Codex skill patterns |
| Discovery sharing | At least 1 discovery shared to board |
| Error reporting | Non-empty error field if status is failed |
| YAML frontmatter | Role files must have valid frontmatter for agent parsing |
---
## Placeholder Reference
| Placeholder | Resolved By | When |
|-------------|------------|------|
| `<session-folder>` | Skill designer (Phase 1) | Literal path baked into instruction |
| `{id}` | spawn_agents_on_csv | Runtime from CSV row |
| `{title}` | spawn_agents_on_csv | Runtime from CSV row |
| `{description}` | spawn_agents_on_csv | Runtime from CSV row |
| `{role}` | spawn_agents_on_csv | Runtime from CSV row |
| `{file_target}` | spawn_agents_on_csv | Runtime from CSV row |
| `{gen_type}` | spawn_agents_on_csv | Runtime from CSV row |
| `{prev_context}` | spawn_agents_on_csv | Runtime from CSV row |

View File

@@ -0,0 +1,250 @@
# Phase 1: Requirements Analysis
Gather team skill requirements from user input and build the `teamConfig` data structure that drives all subsequent phases.
## Objective
- Parse user input (text description, reference skill, or interactive)
- Determine roles, pipelines, specs, templates
- Auto-decide commands distribution (inline vs commands/ folder)
- Build comprehensive `teamConfig` object
- Confirm with user before proceeding
## Step 1.1: Detect Input Source
```javascript
function detectInputSource(userInput) {
// Source A: Reference to existing skill
if (userInput.includes('based on') || userInput.includes('参考') || userInput.includes('like')) {
return { type: 'reference', refSkill: extractSkillName(userInput) };
}
// Source B: Structured input with roles/pipelines
if (userInput.includes('ROLES:') || userInput.includes('PIPELINES:')) {
return { type: 'structured', data: parseStructuredInput(userInput) };
}
// Source C: Natural language description
return { type: 'natural', description: userInput };
}
```
**For reference source**: Read the referenced skill's SKILL.md and role files to extract structure.
**For natural language**: Use request_user_input to gather missing details interactively.
## Step 1.2: Gather Core Identity
```javascript
const skillNameResponse = request_user_input({
  prompt: `Team skill name? (kebab-case, e.g., team-code-review)\n\nSuggested: ${suggestedName}\nOr enter a custom name.`
});
const prefixResponse = request_user_input({
  prompt: `Session prefix? (3-4 chars for task IDs, e.g., TCR)\n\nSuggested: ${suggestedPrefix}\nOr enter a custom prefix.`
});
```
If user provided clear name/prefix in input, skip this step.
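The `suggestedName` and `suggestedPrefix` values used in the prompts can be derived from the domain description with a simple heuristic. This is a sketch only; the actual derivation is unspecified and `suggestIdentity` is a hypothetical helper:

```javascript
// Derive a kebab-case skill name and a short uppercase session prefix
// from a free-text domain description.
function suggestIdentity(domain) {
  const slug = domain
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")   // collapse non-alphanumerics to hyphens
    .replace(/^-|-$/g, "")
    .split("-")
    .slice(0, 3)                   // keep it short: at most three words
    .join("-");
  const suggestedName = `team-${slug}`;
  // Prefix: "T" + first letter of each slug word, capped at 4 chars
  const suggestedPrefix = ("T" + slug.split("-").map(w => w[0]).join(""))
    .toUpperCase()
    .slice(0, 4);
  return { suggestedName, suggestedPrefix };
}
```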
## Step 1.3: Determine Roles
### Role Discovery from Domain
Analyze domain description to identify required roles:
```javascript
function discoverRoles(domain) {
const rolePatterns = {
'analyst': ['分析', 'analyze', 'research', 'explore', 'investigate', 'scan'],
'planner': ['规划', 'plan', 'design', 'architect', 'decompose'],
'writer': ['文档', 'write', 'document', 'draft', 'spec', 'report'],
'executor': ['实现', 'implement', 'execute', 'build', 'code', 'develop'],
'tester': ['测试', 'test', 'verify', 'validate', 'qa'],
'reviewer': ['审查', 'review', 'quality', 'check', 'audit', 'inspect'],
'security-expert': ['安全', 'security', 'vulnerability', 'penetration'],
'performance-optimizer': ['性能', 'performance', 'optimize', 'benchmark'],
'data-engineer': ['数据', 'data', 'pipeline', 'etl', 'migration'],
'devops-engineer': ['部署', 'devops', 'deploy', 'ci/cd', 'infrastructure'],
};
const matched = [];
for (const [role, keywords] of Object.entries(rolePatterns)) {
if (keywords.some(kw => domain.toLowerCase().includes(kw))) {
matched.push(role);
}
}
return matched;
}
```
### Role Configuration
For each discovered role, determine:
```javascript
function configureRole(roleName) {
return {
name: roleName,
prefix: determinePrefix(roleName),
inner_loop: determineInnerLoop(roleName),
hasCommands: false, // determined in Step 1.5
commands: [],
message_types: determineMessageTypes(roleName),
path: `roles/${roleName}/role.md`
};
}
// Standard prefix mapping
const prefixMap = {
'analyst': 'RESEARCH',
'writer': 'DRAFT',
'planner': 'PLAN',
'executor': 'IMPL',
'tester': 'TEST',
'reviewer': 'REVIEW',
// Dynamic roles use uppercase role name
};
// Inner loop: roles that process multiple tasks sequentially
const innerLoopRoles = ['executor', 'writer', 'planner'];
// Message types the role handles
const messageMap = {
'analyst': ['state_update'],
'writer': ['state_update', 'discuss_response'],
'planner': ['state_update'],
'executor': ['state_update', 'revision_request'],
'tester': ['state_update'],
'reviewer': ['state_update', 'discuss_request'],
};
```
## Step 1.4: Define Pipelines
### Pipeline Types from Role Combination
```javascript
function definePipelines(roles, domain) {
const has = name => roles.some(r => r.name === name);
// Full lifecycle: analyst → writer → planner → executor → tester → reviewer
if (has('analyst') && has('writer') && has('planner') && has('executor'))
return [{ name: 'full-lifecycle', tasks: buildFullLifecycleTasks(roles) }];
// Spec-only: analyst → writer → reviewer
if (has('analyst') && has('writer') && !has('executor'))
return [{ name: 'spec-only', tasks: buildSpecOnlyTasks(roles) }];
// Impl-only: planner → executor → tester → reviewer
if (has('planner') && has('executor') && !has('analyst'))
return [{ name: 'impl-only', tasks: buildImplOnlyTasks(roles) }];
// Custom: user-defined
return [{ name: 'custom', tasks: buildCustomTasks(roles, domain) }];
}
```
### Task Schema
```javascript
const taskSchema = {
id: 'PREFIX-NNN', // e.g., RESEARCH-001
role: 'analyst', // which role executes
name: 'Seed Analysis', // human-readable name
dependsOn: [], // task IDs that must complete first
isCheckpoint: false, // true for quality gates
isConditional: false, // true for routing decisions
description: '...'
};
```
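Given the `dependsOn` edges in this schema, execution order and the circular-dependency check described under Error Handling reduce to a small topological pass. A sketch, with `computeWaves` as a hypothetical helper (not part of the generated skill):

```javascript
// Group tasks into dependency waves; throw on circular dependencies.
function computeWaves(tasks) {
  const remaining = new Map(tasks.map(t => [t.id, new Set(t.dependsOn)]));
  const done = new Set();
  const waves = [];
  while (remaining.size > 0) {
    // A task is ready when all of its dependencies are done
    const ready = [...remaining.entries()]
      .filter(([, deps]) => [...deps].every(d => done.has(d)))
      .map(([id]) => id);
    if (ready.length === 0) {
      throw new Error(`Circular dependency among: ${[...remaining.keys()].join(", ")}`);
    }
    for (const id of ready) {
      remaining.delete(id);
      done.add(id);
    }
    waves.push(ready);
  }
  return waves;
}
```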
## Step 1.5: Determine Commands Distribution
**Rule**: 1 action → inline in role.md. 2+ distinct actions → commands/ folder.
```javascript
function determineCommandsDistribution(roles) {
// Coordinator: always has commands/
// coordinator.commands = ['analyze', 'dispatch', 'monitor']
// Standard multi-action roles:
// executor → implement + fix → commands/
// reviewer (if both code & spec review) → review-code + review-spec → commands/
// All others → typically inline
for (const role of roles) {
const actions = countDistinctActions(role);
if (actions.length >= 2) {
role.hasCommands = true;
role.commands = actions.map(a => a.name);
}
}
}
```
## Step 1.6: Determine Specs and Templates
```javascript
// Specs: always include pipelines, add domain-specific
const specs = ['pipelines'];
if (hasQualityGates) specs.push('quality-gates');
if (hasKnowledgeTransfer) specs.push('knowledge-transfer');
// Templates: only if writer role exists
const templates = [];
if (has('writer')) {
// Detect from domain keywords
if (domain.includes('product')) templates.push('product-brief');
if (domain.includes('requirement')) templates.push('requirements');
if (domain.includes('architecture')) templates.push('architecture');
if (domain.includes('epic')) templates.push('epics');
}
```
## Step 1.7: Build teamConfig
```javascript
const teamConfig = {
skillName: string, // e.g., "team-code-review"
sessionPrefix: string, // e.g., "TCR"
domain: string, // domain description
title: string, // e.g., "Code Review Team"
roles: Array<RoleConfig>, // includes coordinator
pipelines: Array<Pipeline>,
specs: Array<string>, // filenames without .md
templates: Array<string>, // filenames without .md
conditionalRouting: boolean,
dynamicSpecialists: Array<string>,
};
```
## Step 1.8: Confirm with User
```
╔══════════════════════════════════════════╗
║ Team Skill Configuration Summary ║
╠══════════════════════════════════════════╣
Skill Name: ${skillName}
Session Prefix: ${sessionPrefix}
Domain: ${domain}
Roles (N):
├─ coordinator (commands: analyze, dispatch, monitor)
├─ role-a [PREFIX-*] (inline) 🔄
└─ role-b [PREFIX-*] (commands: cmd1, cmd2)
Pipelines:
└─ pipeline-name: TASK-001 → TASK-002 → TASK-003
Specs: pipelines, quality-gates
Templates: (none)
╚══════════════════════════════════════════╝
```
Use request_user_input to confirm or allow modifications.
## Output
- **Variable**: `teamConfig` — complete configuration for all subsequent phases
- **Next**: Phase 2 - Scaffold Generation

View File

@@ -0,0 +1,228 @@
# Phase 2: Scaffold Generation
Generate the SKILL.md universal router and create the directory structure for the team skill.
## Objective
- Create directory structure (roles/, specs/, templates/)
- Generate SKILL.md as universal router following v4 pattern
- SKILL.md must NOT contain beat model, pipeline details, or role Phase 2-4 logic
## Step 2.1: Create Directory Structure
```bash
skillDir=".codex/skills/${teamConfig.skillName}"
mkdir -p "${skillDir}"
# Create role directories
for role in teamConfig.roles:
mkdir -p "${skillDir}/roles/${role.name}"
if role.hasCommands:
mkdir -p "${skillDir}/roles/${role.name}/commands"
# Create specs directory
mkdir -p "${skillDir}/specs"
# Create templates directory (if needed)
if teamConfig.templates.length > 0:
mkdir -p "${skillDir}/templates"
```
## Step 2.2: Generate SKILL.md
The SKILL.md follows a strict template. Every generated SKILL.md contains these sections in order:
### Section 1: Frontmatter
```yaml
---
name: ${teamConfig.skillName}
description: ${teamConfig.domain}. Triggers on "${teamConfig.skillName}".
allowed-tools: spawn_agent(*), wait_agent(*), report_agent_job_result(*), request_user_input(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---
```
### Section 2: Title + Architecture Diagram
```markdown
# ${Title}
${One-line description}
## Architecture
\```
Skill(skill="${teamConfig.skillName}", args="task description")
|
SKILL.md (this file) = Router
|
+--------------+--------------+
| |
no --role flag --role <name>
| |
Coordinator Worker
roles/coordinator/role.md roles/<name>/role.md
|
+-- analyze → dispatch → spawn workers → STOP
|
+-------+-------+-------+
v v v v
[team-worker agents, each loads roles/<role>/role.md]
\```
```
### Section 3: Role Registry
```markdown
## Role Registry
| Role | Path | Prefix | Inner Loop |
|------|------|--------|------------|
| coordinator | roles/coordinator/role.md | — | — |
${teamConfig.roles.filter(r => r.name !== 'coordinator').map(r =>
`| ${r.name} | ${r.path} | ${r.prefix}-* | ${r.inner_loop} |`
).join('\n')}
```
### Section 4: Role Router
```markdown
## Role Router
Parse `$ARGUMENTS`:
- Has `--role <name>` → Read `roles/<name>/role.md`, execute Phase 2-4
- No `--role` → Read `roles/coordinator/role.md`, execute entry router
```
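The routing rule in this section can be sketched as plain string handling. A hypothetical helper shown for clarity only; the generated SKILL.md expresses the rule as prose for the agent to follow:

```javascript
// Resolve which role file to load from the skill invocation arguments.
function routeRole(args) {
  const m = args.match(/--role\s+([a-z][a-z0-9-]*)/);
  return m
    ? { roleFile: `roles/${m[1]}/role.md` }          // worker path
    : { roleFile: "roles/coordinator/role.md" };     // default: coordinator
}
```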
### Section 5: Shared Constants
```markdown
## Shared Constants
- **Session prefix**: `${teamConfig.sessionPrefix}`
- **Session path**: `.workflow/.team/${teamConfig.sessionPrefix}-<slug>-<date>/`
- **CLI tools**: `ccw cli --mode analysis` (read-only), `ccw cli --mode write` (modifications)
- **Message bus**: `mcp__ccw-tools__team_msg(session_id=<session-id>, ...)`
```
### Section 6: Worker Spawn Template
```markdown
## Worker Spawn Template
Coordinator spawns workers using this template:
\```
spawn_agent({
agent_type: "team_worker",
items: [{
description: "Spawn <role> worker",
team_name: <team-name>,
name: "<role>",
prompt: `## Role Assignment
role: <role>
role_spec: .codex/skills/${teamConfig.skillName}/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
team_name: <team-name>
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).`
}]
})
\```
```
### Section 7: User Commands
```markdown
## User Commands
| Command | Action |
|---------|--------|
| `check` / `status` | View execution status graph |
| `resume` / `continue` | Advance to next step |
| `revise <TASK-ID> [feedback]` | Revise specific task |
| `feedback <text>` | Inject feedback for revision |
| `recheck` | Re-run quality check |
| `improve [dimension]` | Auto-improve weakest dimension |
```
### Section 8: Completion Action
```markdown
## Completion Action
When pipeline completes, coordinator presents:
\```
request_user_input({
prompt: "Pipeline complete. What would you like to do?\n\nOptions:\n1. Archive & Clean (Recommended) - Archive session, clean up team\n2. Keep Active - Keep session for follow-up work\n3. Export Results - Export deliverables to target directory"
})
\```
```
### Section 9: Specs Reference
```markdown
## Specs Reference
${teamConfig.specs.map(s =>
`- [specs/${s}.md](specs/${s}.md) — ${specDescription(s)}`
).join('\n')}
```
### Section 10: Session Directory
```markdown
## Session Directory
\```
.workflow/.team/${teamConfig.sessionPrefix}-<slug>-<date>/
├── team-session.json # Session state + role registry
├── spec/ # Spec phase outputs
├── plan/ # Implementation plan + TASK-*.json
├── artifacts/ # All deliverables
├── wisdom/ # Cross-task knowledge
├── explorations/ # Shared explore cache
├── discussions/ # Discuss round records
└── .msg/ # Team message bus
\```
```
### Section 11: Error Handling
```markdown
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Unknown command | Error with available command list |
| Role not found | Error with role registry |
| CLI tool fails | Worker fallback to direct implementation |
| Fast-advance conflict | Coordinator reconciles on next callback |
| Completion action fails | Default to Keep Active |
```
## Step 2.3: Assemble and Write
Assemble all sections into a single SKILL.md file and write to `${skillDir}/SKILL.md`.
**Quality Rules**:
1. SKILL.md must NOT contain beat model (ONE_STEP_PER_INVOCATION, spawn-and-stop)
2. SKILL.md must NOT contain pipeline task details (task IDs, dependencies)
3. SKILL.md must NOT contain role Phase 2-4 logic
4. SKILL.md MUST contain role registry table with correct paths
5. SKILL.md MUST contain worker spawn template with correct `role_spec` paths
## Output
- **File**: `.codex/skills/${teamConfig.skillName}/SKILL.md`
- **Variable**: `skillDir` (path to skill root directory)
- **Next**: Phase 3 - Content Generation
## Next Phase
Return to orchestrator, then auto-continue to [Phase 3: Content Generation](03-content-generation.md).


@@ -0,0 +1,330 @@
# Phase 3: Content Generation
Generate all role files, specs, and templates based on `teamConfig` and the generated SKILL.md.
## Objective
- Generate coordinator role.md + commands/ (analyze, dispatch, monitor)
- Generate each worker role.md (inline or with commands/)
- Generate specs/ files (pipelines.md + domain specs)
- Generate templates/ if needed
- Follow team-lifecycle-v4 golden sample patterns
## Golden Sample Reference
Read the golden sample at `~/.codex/skills/team-lifecycle-v4/` (user-level) or `<project>/.codex/skills/team-lifecycle-v4/` (project-level) for each file type before generating. This ensures pattern fidelity.
## Step 3.1: Generate Coordinator
The coordinator is the most complex role. It always has 3 commands.
### coordinator/role.md
```markdown
---
role: coordinator
---
# Coordinator — ${teamConfig.title}
## Identity
You are the coordinator for ${teamConfig.title}. You orchestrate the ${teamConfig.domain} pipeline by analyzing requirements, dispatching tasks, and monitoring worker progress.
## Boundaries
- **DO**: Analyze, dispatch, monitor, reconcile, report
- **DO NOT**: Implement domain work directly — delegate to workers
## Command Execution Protocol
Read command file → Execute ALL steps sequentially → Return to entry router.
Commands: `commands/analyze.md`, `commands/dispatch.md`, `commands/monitor.md`.
## Entry Router
On each invocation, detect current state and route:
| Condition | Handler |
|-----------|---------|
| First invocation (no session) | → Phase 1: Requirement Clarification |
| Session exists, no team | → Phase 2: Team Setup |
| Team exists, no tasks | → Phase 3: Dispatch (analyze.md → dispatch.md) |
| Tasks exist, none started | → Phase 4: Spawn First (monitor.md → handleSpawnNext) |
| Callback received | → monitor.md → handleCallback |
| User says "check"/"status" | → monitor.md → handleCheck |
| User says "resume"/"continue" | → monitor.md → handleResume |
| All tasks completed | → Phase 5: Report & Completion |
## Phase 0: Session Resume
If `.workflow/.team/${teamConfig.sessionPrefix}-*/team-session.json` exists:
- Load session state, verify team, reconcile task status
- Route to appropriate handler based on current state
## Phase 1: Requirement Clarification
- Parse user's task description at TEXT LEVEL
- Use request_user_input if requirements are ambiguous
- Execute `commands/analyze.md` for signal detection + complexity scoring
## Phase 2: Team Setup
- Session folder creation with session ID: `${teamConfig.sessionPrefix}-<slug>-<date>`
- Initialize team_msg message bus
- Create session directory structure
## Phase 3: Dispatch
- Execute `commands/dispatch.md`
- Creates task entries in tasks.json with blockedBy dependencies
## Phase 4: Spawn & Monitor
- Execute `commands/monitor.md` → handleSpawnNext
- Spawn ready workers as team_worker agents via spawn_agent()
- **STOP after spawning** — wait for callback
## Phase 5: Report & Completion
- Aggregate all task artifacts
- Present completion action to user
```
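The entry-router table above can be sketched as a single state-detection function. The field names on `state` are assumptions for illustration; the real coordinator derives them from session files:

```javascript
// Route one coordinator invocation based on observable session state.
// Order matters: earlier conditions shadow later ones, matching the table.
function routeEntry(state) {
  if (!state.sessionExists) return 'phase1-clarify';
  if (!state.teamExists) return 'phase2-setup';
  if (state.taskCount === 0) return 'phase3-dispatch';
  if (state.completedCount === state.taskCount) return 'phase5-report';
  if (state.startedCount === 0) return 'phase4-spawn-first';
  return 'monitor'; // callbacks and user commands are handled by monitor.md
}
```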
### coordinator/commands/analyze.md
Template based on golden sample — includes:
- Signal detection (keywords → capabilities)
- Dependency graph construction (tiers)
- Complexity scoring (1-3 Low, 4-6 Medium, 7+ High)
- Role minimization (cap at 5)
- Output: task-analysis.json
```markdown
# Command: Analyze
## Signal Detection
Scan requirement text for capability signals:
${teamConfig.roles.filter(r => r.name !== 'coordinator').map(r =>
`- **${r.name}**: [domain-specific keywords]`
).join('\n')}
## Dependency Graph
Build 4-tier dependency graph:
- Tier 0: Independent tasks (can run in parallel)
- Tier 1: Depends on Tier 0
- Tier 2: Depends on Tier 1
- Tier 3: Depends on Tier 2
## Complexity Scoring
| Score | Level | Strategy |
|-------|-------|----------|
| 1-3 | Low | Direct implementation, skip deep planning |
| 4-6 | Medium | Standard pipeline with planning |
| 7+ | High | Full spec → plan → implement cycle |
## Output
Write `task-analysis.json` to session directory:
\```json
{
"signals": [...],
"roles_needed": [...],
"dependency_tiers": [...],
"complexity": { "score": N, "level": "Low|Medium|High" },
"pipeline": "${teamConfig.pipelines[0].name}"
}
\```
```
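The score-to-level banding in the complexity table can be expressed as a small helper (a sketch; the strategy labels are shorthand for the table's strategies):

```javascript
// Map a numeric complexity score to the level/strategy bands above.
function complexityLevel(score) {
  if (score <= 3) return { level: 'Low', strategy: 'direct-implementation' };
  if (score <= 6) return { level: 'Medium', strategy: 'standard-pipeline' };
  return { level: 'High', strategy: 'spec-plan-implement' };
}
```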
### coordinator/commands/dispatch.md
Template — includes:
- Topological sort from dependency graph
- tasks.json entries with blockedBy for dependencies
- Task description template (PURPOSE/TASK/CONTEXT/EXPECTED/CONSTRAINTS)
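The topological sort this command performs can be sketched as Kahn's algorithm over `dependsOn` edges (the task shape is assumed from the pipeline tables in specs/pipelines.md):

```javascript
// Kahn's algorithm over tasks of shape { id, dependsOn: [ids] }.
// Returns ids ordered so every dependency precedes its dependents.
function topoSort(tasks) {
  const indegree = new Map(tasks.map(t => [t.id, t.dependsOn.length]));
  const dependents = new Map(tasks.map(t => [t.id, []]));
  for (const t of tasks) {
    for (const dep of t.dependsOn) dependents.get(dep).push(t.id);
  }
  const queue = tasks.filter(t => t.dependsOn.length === 0).map(t => t.id);
  const order = [];
  while (queue.length > 0) {
    const id = queue.shift();
    order.push(id);
    for (const next of dependents.get(id)) {
      indegree.set(next, indegree.get(next) - 1);
      if (indegree.get(next) === 0) queue.push(next);
    }
  }
  // If some tasks never reached indegree 0, the graph has a cycle.
  if (order.length !== tasks.length) throw new Error('circular dependency');
  return order;
}
```

In dispatch.md, each id in the resulting order becomes a tasks.json entry whose `blockedBy` lists its `dependsOn` ids.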
### coordinator/commands/monitor.md
Template — includes:
- Beat model constants (ONE_STEP_PER_INVOCATION, SPAWN_MODE: spawn-and-stop)
- 6 handlers: handleCallback, handleCheck, handleResume, handleSpawnNext, handleComplete, handleAdapt
- Checkpoint detection for quality gates
- Fast-advance reconciliation
**Critical**: This is the ONLY file that contains beat model logic.
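A sketch of the single-beat routing monitor.md implements (handler names come from the list above; the `event` and `session` shapes are assumptions for illustration):

```javascript
// One beat per invocation: pick exactly one handler, run it, then STOP.
function routeBeat(event, session) {
  if (event.type === 'callback') return 'handleCallback';
  if (event.type === 'user' && /^(check|status)$/.test(event.text)) return 'handleCheck';
  if (event.type === 'user' && /^(resume|continue)$/.test(event.text)) return 'handleResume';
  if (session.tasks.length > 0 && session.tasks.every(t => t.status === 'completed')) {
    return 'handleComplete';
  }
  return 'handleSpawnNext';
}
```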
## Step 3.2: Generate Worker Roles
For each worker role in `teamConfig.roles`:
### Inline Role Template (no commands/)
```markdown
---
role: ${role.name}
prefix: ${role.prefix}
inner_loop: ${role.inner_loop}
message_types: [${role.message_types.join(', ')}]
---
# ${capitalize(role.name)} — ${teamConfig.title}
## Identity
You are the ${role.name} for ${teamConfig.title}.
Task prefix: `${role.prefix}-*`
## Phase 2: Context Loading
- Read task description from tasks.json (find by id)
- Load relevant session artifacts from session directory
- Load specs from `specs/` as needed
## Phase 3: Domain Execution
[Domain-specific execution logic for this role]
### Execution Steps
1. [Step 1 based on role's domain]
2. [Step 2]
3. [Step 3]
### Tools Available
- CLI tools: `ccw cli --mode analysis|write`
- Direct tools: Read, Write, Edit, Bash, Grep, Glob
- Message bus: `mcp__ccw-tools__team_msg`
- **Cannot use spawn_agent()** — workers must use CLI or direct tools
## Phase 4: Output & Report
- Write artifacts to session directory
- Log state_update via team_msg
- Publish wisdom if cross-task knowledge discovered
```
### Command-Based Role Template (has commands/)
```markdown
---
role: ${role.name}
prefix: ${role.prefix}
inner_loop: ${role.inner_loop}
message_types: [${role.message_types.join(', ')}]
---
# ${capitalize(role.name)} — ${teamConfig.title}
## Identity
You are the ${role.name} for ${teamConfig.title}.
Task prefix: `${role.prefix}-*`
## Phase 2: Context Loading
Load task description, detect mode/command.
## Phase 3: Command Router
| Condition | Command |
|-----------|---------|
${role.commands.map(cmd =>
`| [condition for ${cmd}] | → commands/${cmd}.md |`
).join('\n')}
Read command file → Execute ALL steps → Return to Phase 4.
## Phase 4: Output & Report
Write artifacts, log state_update.
```
Then generate each `commands/<cmd>.md` with domain-specific logic.
## Step 3.3: Generate Specs
### specs/pipelines.md
```markdown
# Pipeline Definitions
## Available Pipelines
${teamConfig.pipelines.map(p => `
### ${p.name}
| Task ID | Role | Name | Depends On | Checkpoint |
|---------|------|------|------------|------------|
${p.tasks.map(t =>
`| ${t.id} | ${t.role} | ${t.name} | ${t.dependsOn.join(', ') || '—'} | ${t.isCheckpoint ? '✓' : '—'} |`
).join('\n')}
`).join('\n')}
## Task Metadata Registry
Standard task description template:
\```
PURPOSE: [goal]
TASK: [steps]
CONTEXT: [session artifacts + specs]
EXPECTED: [deliverable format]
CONSTRAINTS: [scope limits]
\```
## Conditional Routing
${teamConfig.conditionalRouting ? `
PLAN-001 complexity assessment routes to:
- Low (1-3): Direct implementation
- Medium (4-6): Standard planning
- High (7+): Full spec → plan → implement
` : 'No conditional routing in this pipeline.'}
## Dynamic Specialist Injection
${teamConfig.dynamicSpecialists.length > 0 ?
teamConfig.dynamicSpecialists.map(s => `- ${s}: Injected when domain keywords detected`).join('\n') :
'No dynamic specialists configured.'
}
```
### Additional Specs
For each additional spec in `teamConfig.specs` (beyond pipelines), generate domain-appropriate content:
- **quality-gates.md**: Thresholds (Pass≥80%, Review 60-79%, Fail<60%), scoring dimensions, per-phase gates
- **knowledge-transfer.md**: 5 transfer channels, Phase 2 loading protocol, Phase 4 publishing protocol
## Step 3.4: Generate Templates
For each template in `teamConfig.templates`:
1. Check if the golden sample has a matching template at `~/.codex/skills/team-lifecycle-v4/templates/` or `<project>/.codex/skills/team-lifecycle-v4/templates/`
2. If exists: copy and adapt for new domain
3. If not: generate domain-appropriate template structure
## Step 3.5: Generation Order
Execute in this order (respects dependencies):
1. **specs/** — needed by roles for reference
2. **coordinator/** — role.md + commands/ (3 files)
3. **workers/** — each role.md (+ optional commands/)
4. **templates/** — independent, generate last
For each file:
1. Read golden sample equivalent (if exists)
2. Adapt content for current teamConfig
3. Write file
4. Verify file exists
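The per-file loop can be sketched as follows; the `io` callbacks stand in for the real Read/Write/Glob tools and are purely illustrative:

```javascript
// Generate each planned file: read golden sample (if any), adapt, write, verify.
// plan: [{ path, goldenPath? }]; io: { read, adapt, write, exists } callbacks.
function generateFiles(plan, teamConfig, io) {
  const written = [];
  for (const file of plan) {
    const sample = file.goldenPath ? io.read(file.goldenPath) : null;
    const content = io.adapt(sample, file, teamConfig);
    io.write(file.path, content);
    // Step 4: verify the file actually landed on disk before continuing.
    if (!io.exists(file.path)) throw new Error(`generation failed: ${file.path}`);
    written.push(file.path);
  }
  return written;
}
```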
## Output
- **Files**: All role.md, commands/*.md, specs/*.md, templates/*.md
- **Next**: Phase 4 - Validation


@@ -0,0 +1,320 @@
# Phase 4: Validation
Validate the generated team skill package for structural completeness, reference integrity, role consistency, and team-worker compatibility.
## Objective
- Verify all required files exist
- Validate SKILL.md role registry matches actual role files
- Check role.md frontmatter format for team-worker compatibility
- Verify pipeline task references match existing roles
- Report validation results
## Step 4.1: Structural Validation
```javascript
function validateStructure(teamConfig) {
  const skillDir = `.codex/skills/${teamConfig.skillName}`;
const results = { errors: [], warnings: [], info: [] };
// Check SKILL.md exists
if (!fileExists(`${skillDir}/SKILL.md`)) {
results.errors.push('SKILL.md not found');
}
// Check coordinator structure
if (!fileExists(`${skillDir}/roles/coordinator/role.md`)) {
results.errors.push('Coordinator role.md not found');
}
for (const cmd of ['analyze', 'dispatch', 'monitor']) {
if (!fileExists(`${skillDir}/roles/coordinator/commands/${cmd}.md`)) {
results.errors.push(`Coordinator command ${cmd}.md not found`);
}
}
// Check worker roles
for (const role of teamConfig.roles.filter(r => r.name !== 'coordinator')) {
if (!fileExists(`${skillDir}/roles/${role.name}/role.md`)) {
results.errors.push(`Worker role.md not found: ${role.name}`);
}
if (role.hasCommands) {
for (const cmd of role.commands) {
if (!fileExists(`${skillDir}/roles/${role.name}/commands/${cmd}.md`)) {
results.errors.push(`Worker command not found: ${role.name}/commands/${cmd}.md`);
}
}
}
}
// Check specs
if (!fileExists(`${skillDir}/specs/pipelines.md`)) {
results.errors.push('specs/pipelines.md not found');
}
return results;
}
```
## Step 4.2: SKILL.md Content Validation
```javascript
function validateSkillMd(teamConfig) {
  const skillDir = `.codex/skills/${teamConfig.skillName}`;
const results = { errors: [], warnings: [], info: [] };
const skillMd = Read(`${skillDir}/SKILL.md`);
// Required sections
const requiredSections = [
'Role Registry', 'Role Router', 'Shared Constants',
'Worker Spawn Template', 'User Commands', 'Session Directory'
];
for (const section of requiredSections) {
if (!skillMd.includes(section)) {
results.errors.push(`SKILL.md missing section: ${section}`);
}
}
// Verify role registry completeness
for (const role of teamConfig.roles) {
if (!skillMd.includes(role.path || `roles/${role.name}/role.md`)) {
results.errors.push(`Role registry missing path for: ${role.name}`);
}
}
// Verify session prefix
if (!skillMd.includes(teamConfig.sessionPrefix)) {
results.warnings.push(`Session prefix ${teamConfig.sessionPrefix} not found in SKILL.md`);
}
// Verify NO beat model content in SKILL.md
const beatModelPatterns = [
'ONE_STEP_PER_INVOCATION',
'spawn-and-stop',
'SPAWN_MODE',
'handleCallback',
'handleSpawnNext'
];
for (const pattern of beatModelPatterns) {
if (skillMd.includes(pattern)) {
results.errors.push(`SKILL.md contains beat model content: ${pattern} (should be in coordinator only)`);
}
}
return results;
}
```
## Step 4.3: Role Frontmatter Validation
Verify role.md files have correct YAML frontmatter for team-worker agent compatibility:
```javascript
function validateRoleFrontmatter(teamConfig) {
  const skillDir = `.codex/skills/${teamConfig.skillName}`;
const results = { errors: [], warnings: [], info: [] };
for (const role of teamConfig.roles.filter(r => r.name !== 'coordinator')) {
const roleMd = Read(`${skillDir}/roles/${role.name}/role.md`);
// Check frontmatter exists
if (!roleMd.startsWith('---')) {
results.errors.push(`${role.name}/role.md missing YAML frontmatter`);
continue;
}
// Extract frontmatter
const fmMatch = roleMd.match(/^---\n([\s\S]*?)\n---/);
if (!fmMatch) {
results.errors.push(`${role.name}/role.md malformed frontmatter`);
continue;
}
const fm = fmMatch[1];
// Required fields
if (!fm.includes(`role: ${role.name}`)) {
results.errors.push(`${role.name}/role.md frontmatter missing 'role: ${role.name}'`);
}
if (!fm.includes(`prefix: ${role.prefix}`)) {
results.errors.push(`${role.name}/role.md frontmatter missing 'prefix: ${role.prefix}'`);
}
if (!fm.includes(`inner_loop: ${role.inner_loop}`)) {
results.warnings.push(`${role.name}/role.md frontmatter missing 'inner_loop: ${role.inner_loop}'`);
}
if (!fm.includes('message_types:')) {
results.warnings.push(`${role.name}/role.md frontmatter missing 'message_types'`);
}
}
return results;
}
```
## Step 4.4: Pipeline Consistency
```javascript
function validatePipelines(teamConfig) {
  const skillDir = `.codex/skills/${teamConfig.skillName}`;
const results = { errors: [], warnings: [], info: [] };
// Check every role referenced in pipelines exists
const definedRoles = new Set(teamConfig.roles.map(r => r.name));
for (const pipeline of teamConfig.pipelines) {
for (const task of pipeline.tasks) {
if (!definedRoles.has(task.role)) {
results.errors.push(
`Pipeline '${pipeline.name}' task ${task.id} references undefined role: ${task.role}`
);
}
}
// Check for circular dependencies
const visited = new Set();
const stack = new Set();
for (const task of pipeline.tasks) {
if (hasCycle(task, pipeline.tasks, visited, stack)) {
results.errors.push(`Pipeline '${pipeline.name}' has circular dependency involving ${task.id}`);
}
}
}
// Verify specs/pipelines.md contains all pipelines
const pipelinesMd = Read(`${skillDir}/specs/pipelines.md`);
for (const pipeline of teamConfig.pipelines) {
if (!pipelinesMd.includes(pipeline.name)) {
results.warnings.push(`specs/pipelines.md missing pipeline: ${pipeline.name}`);
}
}
return results;
}
```
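`hasCycle` is called above but never defined in this phase; a minimal DFS sketch matching the call site `hasCycle(task, pipeline.tasks, visited, stack)`:

```javascript
// DFS cycle detection over dependsOn edges.
// visited: nodes already proven acyclic; stack: nodes on the current DFS path.
function hasCycle(task, tasks, visited, stack) {
  if (stack.has(task.id)) return true;    // back edge found: cycle
  if (visited.has(task.id)) return false; // subtree already checked
  visited.add(task.id);
  stack.add(task.id);
  for (const depId of task.dependsOn || []) {
    const dep = tasks.find(t => t.id === depId);
    if (dep && hasCycle(dep, tasks, visited, stack)) return true;
  }
  stack.delete(task.id); // leaving this DFS path
  return false;
}
```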
## Step 4.5: Commands Distribution Validation
```javascript
function validateCommandsDistribution(teamConfig) {
  const skillDir = `.codex/skills/${teamConfig.skillName}`;
const results = { errors: [], warnings: [], info: [] };
for (const role of teamConfig.roles.filter(r => r.name !== 'coordinator')) {
if (role.hasCommands) {
      // Verify commands/ directory exists before checking its contents
      const cmdDir = `${skillDir}/roles/${role.name}/commands`;
      if (!directoryExists(cmdDir)) {
        results.errors.push(`${role.name} declares commands but ${cmdDir} not found`);
      }
      if (role.commands.length < 2) {
        results.warnings.push(
          `${role.name} has commands/ but only ${role.commands.length} command(s) — consider inline`
        );
      }
// Verify role.md references commands
const roleMd = Read(`${skillDir}/roles/${role.name}/role.md`);
for (const cmd of role.commands) {
if (!roleMd.includes(`commands/${cmd}.md`)) {
results.warnings.push(
`${role.name}/role.md doesn't reference commands/${cmd}.md`
);
}
}
} else {
// Verify no commands/ directory exists
const cmdDir = `${skillDir}/roles/${role.name}/commands`;
if (directoryExists(cmdDir)) {
results.warnings.push(
`${role.name} is inline but has commands/ directory`
);
}
}
}
return results;
}
```
## Step 4.6: Aggregate Results and Report
```javascript
function generateValidationReport(teamConfig) {
const structural = validateStructure(teamConfig);
const skillMd = validateSkillMd(teamConfig);
const frontmatter = validateRoleFrontmatter(teamConfig);
const pipelines = validatePipelines(teamConfig);
const commands = validateCommandsDistribution(teamConfig);
const allErrors = [
...structural.errors, ...skillMd.errors,
...frontmatter.errors, ...pipelines.errors, ...commands.errors
];
const allWarnings = [
...structural.warnings, ...skillMd.warnings,
...frontmatter.warnings, ...pipelines.warnings, ...commands.warnings
];
const gate = allErrors.length === 0 ? 'PASS' :
allErrors.length <= 2 ? 'REVIEW' : 'FAIL';
  const skillDir = `.codex/skills/${teamConfig.skillName}`;
console.log(`
╔══════════════════════════════════════╗
║ Team Skill Validation Report ║
╠══════════════════════════════════════╣
║ Skill: ${teamConfig.skillName.padEnd(28)}
║ Gate: ${gate.padEnd(28)}
╚══════════════════════════════════════╝
Structure:
${skillDir}/
├── SKILL.md ✓
├── roles/
│ ├── coordinator/
│ │ ├── role.md ✓
│ │ └── commands/ (analyze, dispatch, monitor)
${teamConfig.roles.filter(r => r.name !== 'coordinator').map(r => {
const structure = r.hasCommands
? ` │ ├── ${r.name}/ (role.md + commands/)`
: ` │ ├── ${r.name}/role.md`;
return `${structure.padEnd(50)}`;
}).join('\n')}
├── specs/
│ └── pipelines.md ✓
└── templates/ ${teamConfig.templates.length > 0 ? '✓' : '(empty)'}
${allErrors.length > 0 ? `Errors (${allErrors.length}):\n${allErrors.map(e => `  - ${e}`).join('\n')}` : 'Errors: None ✓'}
${allWarnings.length > 0 ? `Warnings (${allWarnings.length}):\n${allWarnings.map(w => `  - ${w}`).join('\n')}` : 'Warnings: None ✓'}
Usage:
Skill(skill="${teamConfig.skillName}", args="<task description>")
`);
return { gate, errors: allErrors, warnings: allWarnings };
}
```
## Step 4.7: Error Recovery
```javascript
if (report.gate === 'FAIL') {
const recovery = request_user_input({
prompt: `Validation found ${report.errors.length} errors. How to proceed?\n\nOptions:\n1. Auto-fix - Attempt automatic fixes (missing files, frontmatter)\n2. Regenerate - Re-run Phase 3 with fixes\n3. Accept as-is - Manual fix later`
});
}
```
## Output
- **Report**: Validation results with quality gate (PASS/REVIEW/FAIL)
- **Completion**: Team skill package ready at `.codex/skills/${teamConfig.skillName}/`
## Completion
Team Skill Designer has completed. The generated team skill is ready for use.
```
Next Steps:
1. Review SKILL.md router for correctness
2. Review each role.md for domain accuracy
3. Test: Skill(skill="${teamConfig.skillName}", args="<test task>")
4. Iterate based on execution results
```


@@ -1,180 +0,0 @@
# Team Skill Designer -- CSV Schema
## Master CSV: tasks.csv
### Column Definitions
#### Input Columns (Set by Decomposer)
| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier | `"SCAFFOLD-001"` |
| `title` | string | Yes | Short task title | `"Create directory structure"` |
| `description` | string | Yes | Detailed generation instructions (self-contained) | `"Create roles/, specs/, templates/ directories..."` |
| `role` | string | Yes | Generator role name | `"scaffolder"` |
| `file_target` | string | Yes | Target file/directory path relative to skill root | `"roles/coordinator/role.md"` |
| `gen_type` | enum | Yes | `directory`, `router`, `role-bundle`, `role-inline`, `spec`, `template`, `validation` | `"role-inline"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"SCAFFOLD-001;SPEC-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"SPEC-001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |
#### Computed Columns (Set by Wave Engine)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[SCAFFOLD-001] Created directory structure at .codex/skills/..."` |
#### Output Columns (Set by Agent)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` -> `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Generated coordinator with 3 commands: analyze, dispatch, monitor"` |
| `files_produced` | string | Semicolon-separated paths of produced files | `"roles/coordinator/role.md;roles/coordinator/commands/analyze.md"` |
| `error` | string | Error message if failed | `""` |
---
### exec_mode Values
| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |
Interactive tasks appear in master CSV for dependency tracking but are NOT included in wave-{N}.csv files.
---
### Generator Roles
| Role | gen_type Values | Description |
|------|-----------------|-------------|
| `scaffolder` | `directory` | Creates directory structures |
| `router-writer` | `router` | Generates SKILL.md orchestrator files |
| `role-writer` | `role-bundle`, `role-inline` | Generates role.md files (+ optional commands/) |
| `spec-writer` | `spec` | Generates specs/*.md files |
| `template-writer` | `template` | Generates templates/*.md files |
| `validator` | `validation` | Validates generated skill package |
---
### gen_type Values
| gen_type | Target | Description |
|----------|--------|-------------|
| `directory` | Directory path | Create directory structure with subdirectories |
| `router` | SKILL.md | Generate main orchestrator SKILL.md with frontmatter, role registry, router |
| `role-bundle` | Directory path | Generate role.md + commands/ folder with multiple command files |
| `role-inline` | Single .md file | Generate single role.md with inline Phase 2-4 logic |
| `spec` | Single .md file | Generate spec file (pipelines, quality-gates, etc.) |
| `template` | Single .md file | Generate document template file |
| `validation` | Report | Validate complete skill package structure and references |
---
### Example Data
```csv
id,title,description,role,file_target,gen_type,deps,context_from,exec_mode,wave,status,findings,files_produced,error
"SCAFFOLD-001","Create directory structure","Create complete directory structure for team-code-review skill:\n- ~ or <project>/.codex/skills/team-code-review/\n- roles/coordinator/ + commands/\n- roles/analyst/\n- roles/reviewer/\n- specs/\n- templates/","scaffolder","skill-dir","directory","","","csv-wave","1","pending","","",""
"ROUTER-001","Generate SKILL.md","Generate ~ or <project>/.codex/skills/team-code-review/SKILL.md with:\n- Frontmatter (name, description, allowed-tools)\n- Architecture diagram\n- Role registry table\n- CSV schema reference\n- Session structure\n- Wave execution engine\nUse teamConfig.json for role list and pipeline definitions","router-writer","SKILL.md","router","SCAFFOLD-001","SCAFFOLD-001","csv-wave","2","pending","","",""
"SPEC-001","Generate pipelines spec","Generate specs/pipelines.md with:\n- Pipeline definitions from teamConfig\n- Task registry with PREFIX-NNN format\n- Conditional routing rules\n- Dynamic specialist injection\nRoles: analyst(ANALYSIS-*), reviewer(REVIEW-*)","spec-writer","specs/pipelines.md","spec","SCAFFOLD-001","SCAFFOLD-001","csv-wave","2","pending","","",""
"ROLE-001","Generate coordinator","Generate roles/coordinator/role.md with entry router and commands/analyze.md, commands/dispatch.md, commands/monitor.md. Coordinator orchestrates the analysis pipeline","role-writer","roles/coordinator/","role-bundle","SCAFFOLD-001;SPEC-001","SPEC-001","csv-wave","3","pending","","",""
"ROLE-002","Generate analyst role","Generate roles/analyst/role.md with Phase 2 (context loading), Phase 3 (analysis execution), Phase 4 (output). Prefix: ANALYSIS, inner_loop: false","role-writer","roles/analyst/role.md","role-inline","SCAFFOLD-001;SPEC-001","SPEC-001","csv-wave","3","pending","","",""
"ROLE-003","Generate reviewer role","Generate roles/reviewer/role.md with Phase 2 (load artifacts), Phase 3 (review execution), Phase 4 (report). Prefix: REVIEW, inner_loop: false","role-writer","roles/reviewer/role.md","role-inline","SCAFFOLD-001;SPEC-001","SPEC-001","csv-wave","3","pending","","",""
```
---
### Column Lifecycle
```
Decomposer (Phase 1) Wave Engine (Phase 2) Agent (Execution)
--------------------- -------------------- -----------------
id ----------> id ----------> id
title ----------> title ----------> (reads)
description ----------> description ----------> (reads)
role ----------> role ----------> (reads)
file_target ----------> file_target ----------> (reads)
gen_type ----------> gen_type ----------> (reads)
deps ----------> deps ----------> (reads)
context_from----------> context_from----------> (reads)
exec_mode ----------> exec_mode ----------> (reads)
wave ----------> (reads)
prev_context ----------> (reads)
status
findings
files_produced
error
```
---
## Output Schema (JSON)
Agent output via `report_agent_job_result` (csv-wave tasks):
```json
{
"id": "ROLE-001",
"status": "completed",
"findings": "Generated coordinator role with entry router, 3 commands (analyze, dispatch, monitor), beat model in monitor.md only",
"files_produced": "roles/coordinator/role.md;roles/coordinator/commands/analyze.md;roles/coordinator/commands/dispatch.md;roles/coordinator/commands/monitor.md",
"error": ""
}
```
Interactive tasks output via structured text or JSON written to `interactive/{id}-result.json`.
---
## Discovery Types
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `dir_created` | `data.path` | `{path, description}` | Directory structure created |
| `file_generated` | `data.file` | `{file, gen_type, sections}` | File generated with sections |
| `pattern_found` | `data.pattern_name` | `{pattern_name, description}` | Design pattern from golden sample |
| `config_decision` | `data.decision` | `{decision, rationale, impact}` | Config decision made |
| `validation_result` | `data.check` | `{check, passed, message}` | Validation check result |
| `reference_found` | `data.source+data.target` | `{source, target, type}` | Cross-reference between files |
### Discovery NDJSON Format
```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"SCAFFOLD-001","type":"dir_created","data":{"path":"~ or <project>/.codex/skills/team-code-review/roles/","description":"Created roles directory with coordinator, analyst, reviewer subdirs"}}
{"ts":"2026-03-08T10:05:00Z","worker":"ROLE-001","type":"file_generated","data":{"file":"roles/coordinator/role.md","gen_type":"role-bundle","sections":["entry-router","phase-0","phase-1","phase-2","phase-3"]}}
{"ts":"2026-03-08T10:10:00Z","worker":"SPEC-001","type":"config_decision","data":{"decision":"full-lifecycle pipeline","rationale":"Both analyst and reviewer roles present","impact":"4-tier dependency graph"}}
```
> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
---
## Cross-Mechanism Context Flow
| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |
---
## Validation Rules
| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| gen_type valid | Value in {directory, router, role-bundle, role-inline, spec, template, validation} | "Invalid gen_type: {value}" |
| file_target valid | Path is relative and uses forward slashes | "Invalid file_target: {path}" |
| Cross-mechanism deps | Interactive to CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |