feat: migrate all codex team skills from spawn_agents_on_csv to spawn_agent + wait_agent architecture

- Delete 21 old team skill directories that used the CSV-wave pipeline pattern (~100+ files)
- Delete old team-lifecycle (v3) and team-planex-v2
- Create generic team-worker.toml and team-supervisor.toml (replacing tlv4-specific TOMLs)
- Convert 19 team skills from Claude Code format (Agent/SendMessage/TaskCreate)
  to Codex format (spawn_agent/wait_agent/tasks.json/request_user_input)
- Update team-lifecycle-v4 to use generic agent types (team_worker/team_supervisor)
- Convert all coordinator role files: dispatch.md, monitor.md, role.md
- Convert all worker role files: remove run_in_background, fix Bash syntax
- Convert all specs/pipelines.md references
- Final state: 20 team skills, 217 .md files, zero Claude Code API residuals

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This commit is contained in:
catlog22
2026-03-24 16:54:48 +08:00
parent 54283e5dbb
commit 1e560ab8e8
334 changed files with 28996 additions and 35516 deletions


@@ -1,724 +1,167 @@
---
name: team-brainstorm
description: Multi-agent brainstorming pipeline with Generator-Critic loop. Generates ideas, challenges assumptions, synthesizes themes, and evaluates proposals. Supports Quick, Deep, and Full pipeline modes.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"topic description\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, request_user_input
description: Unified team skill for brainstorming team. Uses team-worker agent architecture with role directories for domain logic. Coordinator orchestrates pipeline, workers are team-worker agents. Triggers on "team brainstorm".
allowed-tools: spawn_agent(*), wait_agent(*), send_input(*), close_agent(*), report_agent_job_result(*), request_user_input(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.
# Team Brainstorm
## Usage
Orchestrate multi-agent brainstorming: generate ideas -> challenge assumptions -> synthesize -> evaluate. Supports Quick, Deep, and Full pipelines with Generator-Critic loop.
```bash
$team-brainstorm "How should we approach microservices migration?"
$team-brainstorm -c 4 "Innovation strategies for AI-powered developer tools"
$team-brainstorm -y "Quick brainstorm on naming conventions"
$team-brainstorm --continue "brs-microservices-20260308"
```
**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume existing session
**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)
---
## Overview
Multi-agent brainstorming with Generator-Critic loop: generate ideas across multiple angles, challenge assumptions, synthesize themes, and evaluate proposals. Supports three pipeline modes (Quick/Deep/Full) with configurable depth and parallel ideation.
**Execution Model**: Hybrid — CSV wave pipeline (primary) + individual agent spawn (secondary for Generator-Critic control)
## Architecture
```
┌─────────────────────────────────────────────────────────────────────────┐
│                        TEAM BRAINSTORM WORKFLOW                         │
├─────────────────────────────────────────────────────────────────────────┤
│ Phase 0: Pre-Wave Interactive                                           │
│ ├─ Topic clarification + complexity scoring                             │
│ ├─ Pipeline mode selection (quick/deep/full)                            │
│ └─ Output: refined requirements for decomposition                       │
│                                                                         │
│ Phase 1: Requirement → CSV + Classification                             │
│ ├─ Parse topic into brainstorm tasks per selected pipeline              │
│ ├─ Assign roles: ideator, challenger, synthesizer, evaluator            │
│ ├─ Classify tasks: csv-wave | interactive (exec_mode)                   │
│ ├─ Compute dependency waves (topological sort → depth grouping)         │
│ ├─ Generate tasks.csv with wave + exec_mode columns                     │
│ └─ User validates task breakdown (skip if -y)                           │
│                                                                         │
│ Phase 2: Wave Execution Engine (Extended)                               │
│ ├─ For each wave (1..N):                                                │
│ │  ├─ Execute pre-wave interactive tasks (if any)                       │
│ │  ├─ Build wave CSV (filter csv-wave tasks for this wave)              │
│ │  ├─ Inject previous findings into prev_context column                 │
│ │  ├─ spawn_agents_on_csv(wave CSV)                                     │
│ │  ├─ Execute post-wave interactive tasks (if any)                      │
│ │  ├─ Merge all results into master tasks.csv                           │
│ │  └─ Check: any failed? → skip dependents                              │
│ └─ discoveries.ndjson shared across all modes (append-only)             │
│                                                                         │
│ Phase 3: Post-Wave Interactive                                          │
│ ├─ Generator-Critic (GC) loop control                                   │
│ ├─ If critique severity >= HIGH: trigger revision wave                  │
│ └─ Max 2 GC rounds, then force convergence                              │
│                                                                         │
│ Phase 4: Results Aggregation                                            │
│ ├─ Export final results.csv                                             │
│ ├─ Generate context.md with all findings                                │
│ ├─ Display summary: completed/failed/skipped per wave                   │
│ └─ Offer: view results | retry failed | done                            │
└─────────────────────────────────────────────────────────────────────────┘
Skill(skill="team-brainstorm", args="topic description")
|
SKILL.md (this file) = Router
|
+--------------+--------------+
| |
no --role flag --role <name>
| |
Coordinator Worker
roles/coordinator/role.md roles/<name>/role.md
|
+-- analyze -> dispatch -> spawn workers -> STOP
|
+-------+-------+-------+
v v v v
[ideator][challenger][synthesizer][evaluator]
```
---
## Role Registry
## Task Classification Rules
| Role | Path | Prefix | Inner Loop |
|------|------|--------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | — | — |
| ideator | [roles/ideator/role.md](roles/ideator/role.md) | IDEA-* | false |
| challenger | [roles/challenger/role.md](roles/challenger/role.md) | CHALLENGE-* | false |
| synthesizer | [roles/synthesizer/role.md](roles/synthesizer/role.md) | SYNTH-* | false |
| evaluator | [roles/evaluator/role.md](roles/evaluator/role.md) | EVAL-* | false |
Each task is classified by `exec_mode`:
## Role Router
| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, clarification, inline utility |
Parse `$ARGUMENTS`:
- Has `--role <name>` -> Read `roles/<name>/role.md`, execute Phase 2-4
- No `--role` -> `roles/coordinator/role.md`, execute entry router
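The router above can be sketched as plain string matching; `parseRoleRoute` is a hypothetical helper name for illustration, not part of the skill API:

```javascript
// Minimal sketch of the role router: an optional `--role <name>` in the
// raw argument string decides which role file the agent loads.
// parseRoleRoute is a hypothetical name, not part of the tool surface.
function parseRoleRoute(args) {
  const match = args.match(/--role\s+(\S+)/);
  if (match) {
    return { mode: "worker", roleSpec: `roles/${match[1]}/role.md` };
  }
  return { mode: "coordinator", roleSpec: "roles/coordinator/role.md" };
}
```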
**Classification Decision**:
## Shared Constants
| Task Property | Classification |
|---------------|---------------|
| Idea generation (single angle) | `csv-wave` |
| Parallel ideation (Full pipeline, multiple angles) | `csv-wave` (parallel in same wave) |
| Idea revision (GC loop) | `csv-wave` |
| Critique / challenge | `csv-wave` |
| Synthesis (theme extraction) | `csv-wave` |
| Evaluation (scoring / ranking) | `csv-wave` |
| GC loop control (severity check → decide revision or convergence) | `interactive` |
| Topic clarification (Phase 0) | `interactive` |
- **Session prefix**: `BRS`
- **Session path**: `.workflow/.team/BRS-<slug>-<date>/`
- **CLI tools**: `ccw cli --mode analysis` (read-only), `ccw cli --mode write` (modifications)
- **Message bus**: `mcp__ccw-tools__team_msg(session_id=<session-id>, ...)`
---
## Worker Spawn Template
## CSV Schema
### tasks.csv (Master State)
```csv
id,title,description,role,angle,gc_round,deps,context_from,exec_mode,wave,status,findings,gc_signal,severity_summary,error
"IDEA-001","Multi-angle idea generation","Generate 3+ ideas per angle with title, description, assumption, impact","ideator","Technical;Product;Innovation","0","","","csv-wave","1","pending","","","",""
"CHALLENGE-001","Critique generated ideas","Challenge each idea across assumption, feasibility, risk, competition dimensions","challenger","","0","IDEA-001","IDEA-001","csv-wave","2","pending","","","",""
"GC-CHECK-001","GC loop decision","Evaluate critique severity and decide: revision or convergence","gc-controller","","1","CHALLENGE-001","CHALLENGE-001","interactive","3","pending","","","",""
```
**Columns**:
| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (string) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description |
| `role` | Input | Worker role: ideator, challenger, synthesizer, evaluator |
| `angle` | Input | Brainstorming angle(s) for ideator tasks (semicolon-separated) |
| `gc_round` | Input | Generator-Critic round number (0 = initial, 1+ = revision) |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` / `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `gc_signal` | Output | Generator-Critic signal: `REVISION_NEEDED` or `CONVERGED` (challenger only) |
| `severity_summary` | Output | Severity count: e.g. "CRITICAL:1 HIGH:2 MEDIUM:3 LOW:1" |
| `error` | Output | Error message if failed (empty if success) |
### Per-Wave CSV (Temporary)
Each wave generates a temporary `wave-{N}.csv` with extra `prev_context` column (csv-wave tasks only).
---
## Agent Registry (Interactive Agents)
| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| gc-controller | agents/gc-controller.md | 2.3 (wait-respond) | Evaluate critique severity, decide revision vs convergence | post-wave (after challenger wave) |
| topic-clarifier | agents/topic-clarifier.md | 2.3 (wait-respond) | Clarify topic, assess complexity, select pipeline mode | standalone (Phase 0) |
> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload.
---
## Output Artifacts
| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state — all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |
---
## Session Structure
Coordinator spawns workers using this template:
```
.workflow/.csv-wave/{session-id}/
├── tasks.csv # Master state (all tasks, both modes)
├── results.csv # Final results export
├── discoveries.ndjson # Shared discovery board (all agents)
├── context.md # Human-readable report
├── wave-{N}.csv # Temporary per-wave input (csv-wave only)
└── interactive/ # Interactive task artifacts
└── {id}-result.json # Per-task results
```
spawn_agent({
agent_type: "team_worker",
items: [
{ type: "text", text: `## Role Assignment
role: <role>
role_spec: <skill_root>/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
requirement: <topic-description>
inner_loop: false
---
Read role_spec file (<skill_root>/roles/<role>/role.md) to load Phase 2-4 domain instructions.` },
    { type: "text", text: `## Task Context
task_id: <task-id>
title: <task-title>
description: <topic-description>
pipeline_phase: <pipeline-phase>` },
    { type: "text", text: `## Upstream Context
<prev_context>` }
  ]
})

## Implementation
### Session Initialization
```javascript
// UTC+8 wall-clock timestamp (clock shifted by 8h; note the string still carries a "Z" suffix)
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
// Parse flags
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3
// Clean requirement text (remove flags)
const topic = $ARGUMENTS
.replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
.trim()
const slug = topic.toLowerCase()
.replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
.substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
let sessionId = `brs-${slug}-${dateStr}`
let sessionFolder = `.workflow/.csv-wave/${sessionId}`
// Continue mode: find existing session
if (continueMode) {
  const existing = Bash(`ls -dt .workflow/.csv-wave/brs-* 2>/dev/null | head -1`).trim()
if (existing) {
sessionId = existing.split('/').pop()
sessionFolder = existing
// Read existing tasks.csv, find incomplete waves, resume from Phase 2
}
}
Bash(`mkdir -p ${sessionFolder}/interactive`)
```
---
### Phase 0: Pre-Wave Interactive
**Objective**: Clarify topic, assess complexity, and select pipeline mode.
**Execution**:
```javascript
const clarifier = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: `~/.codex/skills/team-brainstorm/agents/topic-clarifier.md` or `<project>/.codex/skills/team-brainstorm/agents/topic-clarifier.md` (MUST read first)
2. Read: .workflow/project-tech.json (if exists)
---
Goal: Clarify brainstorming topic and select pipeline mode
Topic: ${topic}
### Task
1. Assess topic complexity using signal detection:
- Strategic/systemic keywords (+3): strategy, architecture, system, framework, paradigm
- Multi-dimensional keywords (+2): multiple, compare, tradeoff, versus, alternative
- Innovation-focused keywords (+2): innovative, creative, novel, breakthrough
- Simple/basic keywords (-2): simple, quick, straightforward, basic
2. Score >= 4 → full, 2-3 → deep, 0-1 → quick
3. Suggest divergence angles (e.g., Technical, Product, Innovation, Risk)
4. Return structured result
`
})
const clarifierResult = wait({ ids: [clarifier], timeout_ms: 120000 })
if (clarifierResult.timed_out) {
send_input({ id: clarifier, message: "Please finalize and output current findings." })
const retry = wait({ ids: [clarifier], timeout_ms: 60000 })
}
// Parse result for pipeline_mode, angles
close_agent({ id: clarifier })
// Store result
Write(`${sessionFolder}/interactive/topic-clarifier-result.json`, JSON.stringify({
task_id: "topic-clarification",
status: "completed",
pipeline_mode: parsedMode, // "quick" | "deep" | "full"
angles: parsedAngles, // ["Technical", "Product", "Innovation", "Risk"]
complexity_score: parsedScore,
timestamp: getUtc8ISOString()
}))
```
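The signal-detection rubric the clarifier is asked to apply above can be sketched as a keyword scorer; `scoreComplexity` and the `SIGNALS` table are hypothetical names, and whole-group matching (each keyword group counted at most once) is an assumption:

```javascript
// Keyword-signal scoring per the clarifier rubric:
// strategic/systemic +3, multi-dimensional +2, innovation +2, simple -2.
// Assumption: each group contributes its weight at most once.
const SIGNALS = [
  { words: ["strategy", "architecture", "system", "framework", "paradigm"], weight: 3 },
  { words: ["multiple", "compare", "tradeoff", "versus", "alternative"], weight: 2 },
  { words: ["innovative", "creative", "novel", "breakthrough"], weight: 2 },
  { words: ["simple", "quick", "straightforward", "basic"], weight: -2 },
];

function scoreComplexity(topic) {
  const text = topic.toLowerCase();
  let score = 0;
  for (const { words, weight } of SIGNALS) {
    if (words.some((w) => text.includes(w))) score += weight;
  }
  // Score >= 4 -> full, 2-3 -> deep, otherwise -> quick.
  const mode = score >= 4 ? "full" : score >= 2 ? "deep" : "quick";
  return { score, mode };
}
```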
If not AUTO_YES, present user with pipeline mode selection for confirmation:
After spawning, use `wait_agent({ ids: [...], timeout_ms: 900000 })` to collect results, then `close_agent({ id })` each worker.
```javascript
if (!AUTO_YES) {
const answer = request_user_input({
questions: [{
question: `Topic: "${topic}" — Recommended: ${pipeline_mode}. Approve or override?`,
header: "Pipeline",
id: "pipeline_select",
options: [
{ label: "Approve (Recommended)", description: `Use ${pipeline_mode} pipeline (complexity: ${complexity_score})` },
{ label: "Quick", description: "3 tasks: generate -> challenge -> synthesize" },
{ label: "Deep/Full", description: "6-7 tasks: parallel generation, GC loop, evaluation" }
]
}]
})
// Update pipeline_mode based on user choice
}
```
**Parallel ideator spawn** (Full pipeline with N angles):
When Full pipeline has N parallel IDEA tasks, spawn N distinct team-worker agents named `ideator-1`, `ideator-2`, etc.
```
spawn_agent({
agent_type: "team_worker",
items: [
{ type: "text", text: `## Role Assignment
role: ideator
role_spec: <skill_root>/roles/ideator/role.md
session: <session-folder>
session_id: <session-id>
requirement: <topic-description>
agent_name: ideator-<N>
inner_loop: false
Read role_spec file (<skill_root>/roles/ideator/role.md) to load Phase 2-4 domain instructions.` },
{ type: "text", text: `## Task Context
task_id: <task-id>
title: <task-title>
description: <topic-description>
pipeline_phase: <pipeline-phase>` },
{ type: "text", text: `## Upstream Context
<prev_context>` }
]
})
```
**Success Criteria**:
- Refined requirements available for Phase 1 decomposition
- Interactive agents closed, results stored
After spawning, use `wait_agent({ ids: [...], timeout_ms: 900000 })` to collect results, then `close_agent({ id })` each worker.
---
## User Commands
### Phase 1: Requirement → CSV + Classification
| Command | Action |
|---------|--------|
| `check` / `status` | View execution status graph, no advancement |
| `resume` / `continue` | Check worker states, advance next step |
**Objective**: Build tasks.csv from selected pipeline mode with proper wave assignments.
## Session Directory
**Decomposition Rules**:
| Pipeline | Tasks | Wave Structure |
|----------|-------|---------------|
| quick | IDEA-001 → CHALLENGE-001 → SYNTH-001 | 3 waves, serial |
| deep | IDEA-001 → CHALLENGE-001 → IDEA-002 → CHALLENGE-002 → SYNTH-001 → EVAL-001 | 6 waves, serial with GC loop |
| full | IDEA-001,002,003 (parallel) → CHALLENGE-001 → IDEA-004 → SYNTH-001 → EVAL-001 | 5 waves, fan-out + GC |
**Classification Rules**:
All brainstorm work tasks (ideation, challenging, synthesis, evaluation) are `csv-wave`. The GC loop controller between challenger and next ideation revision is `interactive` (post-wave, spawned by orchestrator to decide the GC outcome).
**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).
**User Validation**: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).
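The wave computation named above (Kahn's BFS with depth tracking) can be sketched as follows; `computeWaves` is a hypothetical helper and assumes each task's `deps` has already been split from the semicolon-separated CSV field into an array:

```javascript
// Kahn-style topological layering with depth tracking: a task's wave is
// 1 + max(wave of its dependencies); tasks with no deps land in wave 1.
// Throws when a cycle prevents every task from being ordered.
function computeWaves(tasks) {
  const wave = new Map();
  const remaining = new Map(tasks.map((t) => [t.id, t.deps.filter(Boolean)]));
  while (remaining.size > 0) {
    let progressed = false;
    for (const [id, deps] of remaining) {
      if (deps.every((d) => wave.has(d))) {
        wave.set(id, 1 + Math.max(0, ...deps.map((d) => wave.get(d))));
        remaining.delete(id);
        progressed = true;
      }
    }
    if (!progressed) throw new Error("Circular dependency detected");
  }
  return wave;
}
```

Fan-out falls out naturally: independent tasks share a wave number and can run concurrently within it.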
**Pipeline Task Definitions**:
#### Quick Pipeline (3 csv-wave tasks)
| Task ID | Role | Wave | Deps | Description |
|---------|------|------|------|-------------|
| IDEA-001 | ideator | 1 | (none) | Generate multi-angle ideas: 3+ ideas per angle with title, description, assumption, impact |
| CHALLENGE-001 | challenger | 2 | IDEA-001 | Challenge each idea across 4 dimensions (assumption, feasibility, risk, competition). Assign severity per idea. Output GC signal |
| SYNTH-001 | synthesizer | 3 | CHALLENGE-001 | Synthesize ideas and critiques into 1-3 integrated proposals with feasibility and innovation scores |
#### Deep Pipeline (6 csv-wave tasks + 1 interactive GC check)
Same task set as Quick (with SYNTH-001 shifted to wave 6), plus:
| Task ID | Role | Wave | Deps | Description |
|---------|------|------|------|-------------|
| IDEA-002 | ideator | 4 | CHALLENGE-001 | Revise ideas based on critique feedback (GC Round 1). Address HIGH/CRITICAL challenges |
| CHALLENGE-002 | challenger | 5 | IDEA-002 | Validate revised ideas (GC Round 2). Re-evaluate previously challenged ideas |
| SYNTH-001 | synthesizer | 6 | CHALLENGE-002 | Synthesize all ideas and critiques |
| EVAL-001 | evaluator | 7 | SYNTH-001 | Score and rank proposals: Feasibility 30%, Innovation 25%, Impact 25%, Cost 20% |
GC-CHECK-001 (interactive) runs post-wave after CHALLENGE-001 to decide whether to proceed with revision or skip to synthesis.
#### Full Pipeline (7 csv-wave tasks + GC control)
| Task ID | Role | Wave | Deps | Description |
|---------|------|------|------|-------------|
| IDEA-001 | ideator | 1 | (none) | Generate ideas from angle 1 |
| IDEA-002 | ideator | 1 | (none) | Generate ideas from angle 2 |
| IDEA-003 | ideator | 1 | (none) | Generate ideas from angle 3 |
| CHALLENGE-001 | challenger | 2 | IDEA-001;IDEA-002;IDEA-003 | Critique all generated ideas |
| IDEA-004 | ideator | 3 | CHALLENGE-001 | Revise ideas based on critique |
| SYNTH-001 | synthesizer | 4 | IDEA-004 | Synthesize all ideas and critiques |
| EVAL-001 | evaluator | 5 | SYNTH-001 | Score and rank proposals |
**Success Criteria**:
- tasks.csv created with valid schema, wave, and exec_mode assignments
- No circular dependencies
- User approved (or AUTO_YES)
---
### Phase 2: Wave Execution Engine (Extended)
**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.
```javascript
const failedIds = new Set()
const skippedIds = new Set()
const MAX_GC_ROUNDS = 2
let gcRound = 0
for (let wave = 1; wave <= maxWave; wave++) {
console.log(`\n## Wave ${wave}/${maxWave}\n`)
// 1. Read current master CSV
const masterCsv = parseCsv(Read(`${sessionFolder}/tasks.csv`))
// 2. Separate csv-wave and interactive tasks for this wave
const waveTasks = masterCsv.filter(row => parseInt(row.wave) === wave)
const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')
// 3. Skip tasks whose deps failed
const executableCsvTasks = []
for (const task of csvTasks) {
const deps = task.deps.split(';').filter(Boolean)
if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
skippedIds.add(task.id)
updateMasterCsvRow(sessionFolder, task.id, {
status: 'skipped',
error: 'Dependency failed or skipped'
})
continue
}
executableCsvTasks.push(task)
}
// 4. Build prev_context for each csv-wave task
for (const task of executableCsvTasks) {
const contextIds = task.context_from.split(';').filter(Boolean)
const prevFindings = contextIds
.map(id => {
const prevRow = masterCsv.find(r => r.id === id)
if (prevRow && prevRow.status === 'completed' && prevRow.findings) {
return `[Task ${id}: ${prevRow.title}] ${prevRow.findings}`
}
return null
})
.filter(Boolean)
.join('\n')
task.prev_context = prevFindings || 'No previous context available'
}
// 5. Write wave CSV and execute csv-wave tasks
if (executableCsvTasks.length > 0) {
const waveHeader = 'id,title,description,role,angle,gc_round,deps,context_from,exec_mode,wave,prev_context'
const waveRows = executableCsvTasks.map(t =>
[t.id, t.title, t.description, t.role, t.angle, t.gc_round, t.deps, t.context_from, t.exec_mode, t.wave, t.prev_context]
.map(cell => `"${String(cell).replace(/"/g, '""')}"`)
.join(',')
)
Write(`${sessionFolder}/wave-${wave}.csv`, [waveHeader, ...waveRows].join('\n'))
const waveResult = spawn_agents_on_csv({
csv_path: `${sessionFolder}/wave-${wave}.csv`,
id_column: "id",
instruction: buildBrainstormInstruction(sessionFolder, wave),
max_concurrency: maxConcurrency,
max_runtime_seconds: 600,
output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
output_schema: {
type: "object",
properties: {
id: { type: "string" },
status: { type: "string", enum: ["completed", "failed"] },
findings: { type: "string" },
gc_signal: { type: "string" },
severity_summary: { type: "string" },
error: { type: "string" }
},
required: ["id", "status", "findings"]
}
})
// Blocks until wave completes
// Merge results into master CSV
const waveResults = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
for (const result of waveResults) {
updateMasterCsvRow(sessionFolder, result.id, {
status: result.status,
findings: result.findings || '',
gc_signal: result.gc_signal || '',
severity_summary: result.severity_summary || '',
error: result.error || ''
})
if (result.status === 'failed') failedIds.add(result.id)
}
Bash(`rm -f "${sessionFolder}/wave-${wave}.csv"`)
}
// 6. Execute post-wave interactive tasks (GC controller)
for (const task of interactiveTasks) {
if (task.status !== 'pending') continue
const deps = task.deps.split(';').filter(Boolean)
if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
skippedIds.add(task.id)
continue
}
// Spawn GC controller agent
const gcAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: `~/.codex/skills/team-brainstorm/agents/gc-controller.md` or `<project>/.codex/skills/team-brainstorm/agents/gc-controller.md` (MUST read first)
2. Read: ${sessionFolder}/discoveries.ndjson (shared discoveries)
---
Goal: Evaluate critique severity and decide revision vs convergence
Session: ${sessionFolder}
GC Round: ${gcRound}
Max GC Rounds: ${MAX_GC_ROUNDS}
### Context
Read the latest critique file and determine the GC signal.
If REVISION_NEEDED and gcRound < maxRounds: output "REVISION"
If CONVERGED or gcRound >= maxRounds: output "CONVERGE"
`
})
const gcResult = wait({ ids: [gcAgent], timeout_ms: 120000 })
if (gcResult.timed_out) {
send_input({ id: gcAgent, message: "Please finalize your decision now." })
wait({ ids: [gcAgent], timeout_ms: 60000 })
}
close_agent({ id: gcAgent })
// Parse GC decision and potentially create/skip revision tasks
Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
task_id: task.id, status: "completed",
gc_decision: gcDecision, gc_round: gcRound,
timestamp: getUtc8ISOString()
}))
if (gcDecision === "CONVERGE") {
// Skip remaining GC tasks, mark revision tasks as skipped
// Unblock SYNTH directly
} else {
gcRound++
// Let the revision wave proceed naturally
}
updateMasterCsvRow(sessionFolder, task.id, { status: 'completed', findings: `GC decision: ${gcDecision}` })
}
}
```
```
.workflow/.team/BRS-<slug>-<date>/
├── session.json # Session metadata + pipeline + gc_round
├── task-analysis.json # Coordinator analyze output
├── .msg/
│ ├── messages.jsonl # Message bus log
│ └── meta.json # Session state + cross-role state
├── wisdom/ # Cross-task knowledge
│ ├── learnings.md
│ ├── decisions.md
│ ├── conventions.md
│ └── issues.md
├── ideas/ # Ideator output
│ ├── idea-001.md
│ └── idea-002.md
├── critiques/ # Challenger output
│ ├── critique-001.md
│ └── critique-002.md
├── synthesis/ # Synthesizer output
│ └── synthesis-001.md
└── evaluation/ # Evaluator output
└── evaluation-001.md
```
**Success Criteria**:
- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when predecessor failed
- discoveries.ndjson accumulated across all waves and mechanisms
- GC loop controlled with max 2 rounds
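The Phase 2 engine above calls `parseCsv` and `updateMasterCsvRow` without defining them. A minimal sketch of the pure parts, assuming RFC 4180-style quoting without embedded newlines (all three helper names here are hypothetical; the real row update would compose `updateRow` with the `Read`/`Write` tools):

```javascript
// Parse one CSV line, honoring quoted cells with "" escapes.
function parseCsvLine(line) {
  const cells = [];
  let cur = "", inQuotes = false;
  for (let i = 0; i < line.length; i++) {
    const ch = line[i];
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++; }
      else if (ch === '"') inQuotes = false;
      else cur += ch;
    } else if (ch === '"') inQuotes = true;
    else if (ch === ",") { cells.push(cur); cur = ""; }
    else cur += ch;
  }
  cells.push(cur);
  return cells;
}

// Text -> array of row objects keyed by the header line.
function parseCsv(text) {
  const lines = text.trim().split("\n");
  const header = parseCsvLine(lines[0]);
  return lines.slice(1).map((line) => {
    const cells = parseCsvLine(line);
    return Object.fromEntries(header.map((h, i) => [h, cells[i] ?? ""]));
  });
}

// Patch the row with the matching id; serialize quotes every cell.
function updateRow(rows, id, patch) {
  return rows.map((r) => (r.id === id ? { ...r, ...patch } : r));
}

function serializeCsv(rows, header) {
  const esc = (v) => `"${String(v ?? "").replace(/"/g, '""')}"`;
  return [header.join(","), ...rows.map((r) => header.map((h) => esc(r[h])).join(","))].join("\n");
}
```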
## Specs Reference
---
### Phase 3: Post-Wave Interactive
**Objective**: Handle any final GC loop convergence and prepare for synthesis.
If the pipeline used GC loops and the final GC decision was CONVERGE or max rounds reached, ensure SYNTH-001 is unblocked and all remaining GC-related tasks are properly marked.
**Success Criteria**:
- Post-wave interactive processing complete
- Interactive agents closed, results stored
---
### Phase 4: Results Aggregation
**Objective**: Generate final results and human-readable report.
```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
Write(`${sessionFolder}/results.csv`, masterCsv)
const tasks = parseCsv(masterCsv)
const completed = tasks.filter(t => t.status === 'completed')
const failed = tasks.filter(t => t.status === 'failed')
const skipped = tasks.filter(t => t.status === 'skipped')
const contextContent = `# Team Brainstorm Report
**Session**: ${sessionId}
**Topic**: ${topic}
**Pipeline**: ${pipeline_mode}
**Completed**: ${getUtc8ISOString()}
---
## Summary
| Metric | Count |
|--------|-------|
| Total Tasks | ${tasks.length} |
| Completed | ${completed.length} |
| Failed | ${failed.length} |
| Skipped | ${skipped.length} |
| GC Rounds | ${gcRound} |
---
## Wave Execution
${waveDetails}
---
## Task Details
${taskDetails}
---
## Brainstorm Artifacts
- Ideas: discoveries with type "idea" in discoveries.ndjson
- Critiques: discoveries with type "critique" in discoveries.ndjson
- Synthesis: discoveries with type "synthesis" in discoveries.ndjson
- Evaluation: discoveries with type "evaluation" in discoveries.ndjson
`
Write(`${sessionFolder}/context.md`, contextContent)
```
If not AUTO_YES and there are failed tasks, offer retry or view report.
**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated
- All interactive agents closed
- Summary displayed to user
---
## Shared Discovery Board Protocol
All agents across all waves share `discoveries.ndjson`. This enables cross-role knowledge sharing.
**Discovery Types**:
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `idea` | `data.title` | `{title, angle, description, assumption, impact}` | Generated idea |
| `critique` | `data.idea_title` | `{idea_title, dimension, severity, challenge, rationale}` | Critique of an idea |
| `theme` | `data.name` | `{name, strength, supporting_ideas[]}` | Extracted theme from synthesis |
| `proposal` | `data.title` | `{title, source_ideas[], feasibility, innovation, description}` | Integrated proposal |
| `evaluation` | `data.proposal_title` | `{proposal_title, weighted_score, rank, recommendation}` | Proposal evaluation |
| `gc_decision` | `data.round` | `{round, signal, severity_counts}` | GC loop decision |
**Format**: NDJSON, each line is self-contained JSON:
```jsonl
{"ts":"2026-03-08T10:00:00+08:00","worker":"IDEA-001","type":"idea","data":{"title":"API Gateway Pattern","angle":"Technical","description":"Centralized API gateway for microservice routing","assumption":"Services need unified entry point","impact":"Simplifies client integration"}}
{"ts":"2026-03-08T10:05:00+08:00","worker":"CHALLENGE-001","type":"critique","data":{"idea_title":"API Gateway Pattern","dimension":"feasibility","severity":"MEDIUM","challenge":"Single point of failure","rationale":"Requires high availability design"}}
```
**Protocol Rules**:
1. Read board before own work → leverage existing context
2. Write discoveries immediately via `echo >>` → don't batch
3. Deduplicate — check existing entries by type + dedup key
4. Append-only — never modify or delete existing lines
---
## Consensus Severity Routing
When the challenger returns critique results with severity-graded verdicts:
| Severity | Action |
|----------|--------|
| HIGH | Trigger revision round (GC loop), max 2 rounds total |
| MEDIUM | Log warning, continue pipeline |
| LOW | Treat as consensus reached |
**Constraints**: Max 2 GC rounds (revision cycles). If still HIGH after 2 rounds, force convergence to synthesizer.
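Combining this routing table with the `severity_summary` format from the CSV schema (e.g. `"CRITICAL:1 HIGH:2 MEDIUM:3 LOW:1"`) gives a small decision function; `routeSeverity` is a hypothetical helper, and treating CRITICAL like HIGH is an assumption:

```javascript
// Parse "CRITICAL:1 HIGH:2 MEDIUM:3 LOW:1" into counts, then route:
// any CRITICAL/HIGH finding triggers a revision round until the 2-round
// cap is hit, after which convergence is forced.
function routeSeverity(severitySummary, gcRound, maxRounds = 2) {
  const counts = {};
  for (const part of severitySummary.trim().split(/\s+/)) {
    const [level, n] = part.split(":");
    if (level && n !== undefined) counts[level] = parseInt(n, 10) || 0;
  }
  const high = (counts.CRITICAL || 0) + (counts.HIGH || 0);
  if (high > 0 && gcRound < maxRounds) return "REVISION";
  return "CONVERGE";
}
```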
---
- [specs/pipelines.md](specs/pipelines.md) — Pipeline definitions and task registry
## Error Handling
| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| GC loop exceeds 2 rounds | Force convergence to synthesizer |
| No ideas generated | Report failure, suggest refining topic |
| Continue mode: no session found | List available sessions, prompt user to select |
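The two timeout rows above describe one recurring pattern: wait, nudge via send_input, wait once more, then give up. A sketch with the runtime tools injected as parameters so the helper stays plain JavaScript (`waitWithNudge` and the `tools` shape are hypothetical, not part of the tool API):

```javascript
// Generic "wait, nudge, wait again" pattern for interactive agents.
// tools.wait / tools.sendInput are injected so this sketch makes no
// assumption about how wait() and send_input() are actually bound.
function waitWithNudge(tools, agentId, { timeoutMs, nudgeMs, nudgeMessage }) {
  let result = tools.wait({ ids: [agentId], timeout_ms: timeoutMs });
  if (result.timed_out) {
    tools.sendInput({ id: agentId, message: nudgeMessage });
    result = tools.wait({ ids: [agentId], timeout_ms: nudgeMs });
  }
  // failed=true means the agent never converged; caller should close it
  // and mark the task failed per the table above.
  return { result, failed: Boolean(result.timed_out) };
}
```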
---
## Core Rules
1. **Start Immediately**: First action is session initialization, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when interaction pattern requires it
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson — both mechanisms share it
7. **Skip on Failure**: If a dependency failed, skip the dependent task (regardless of mechanism)
8. **Lifecycle Balance**: Every spawn_agent MUST have a matching close_agent
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped
---
## Coordinator Role Constraints (Main Agent)
**CRITICAL**: The coordinator (main agent executing this skill) is responsible for **orchestration only**, NOT implementation.
15. **Coordinator Does NOT Execute Code**: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
- Spawns agents with task assignments
- Waits for agent callbacks
- Merges results and coordinates workflow
- Manages workflow transitions between phases
16. **Patient Waiting is Mandatory**: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
- Wait patiently for `wait()` calls to complete
- NOT skip workflow steps due to perceived delays
- NOT assume agents have failed just because they're taking time
- Trust the timeout mechanisms defined in the skill
17. **Use send_input for Clarification**: When agents need guidance or appear stuck, the coordinator MUST:
- Use `send_input()` to ask questions or provide clarification
- NOT skip the agent or move to next phase prematurely
- Give agents opportunity to respond before escalating
- Example: `send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })`
18. **No Workflow Shortcuts**: The coordinator MUST NOT:
- Skip phases or stages defined in the workflow
- Bypass required approval or review steps
- Execute dependent tasks before prerequisites complete
- Assume task completion without explicit agent callback
   - Fabricate agent results
19. **Respect Long-Running Processes**: This is a complex multi-agent workflow that requires patience:
- Total execution time may range from 30-90 minutes or longer
- Each phase may take 10-30 minutes depending on complexity
- The coordinator must remain active and attentive throughout the entire process
- Do not terminate or skip steps due to time concerns
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Unknown command | Error with available command list |
| Role not found | Error with role registry |
| CLI tool fails | Worker fallback to direct implementation |
| Fast-advance conflict | Coordinator reconciles on next callback |
| Completion action fails | Default to Keep Active |
| Generator-Critic loop exceeds 2 rounds | Force convergence to synthesizer |
| No ideas generated | Coordinator prompts with seed questions |


@@ -1,122 +0,0 @@
# GC Controller Agent
Evaluate Generator-Critic loop severity and decide whether to trigger revision or converge to synthesis.
## Identity
- **Type**: `interactive`
- **Responsibility**: GC loop decision making
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Read the latest critique file to assess severity
- Make a binary decision: REVISION or CONVERGE
- Respect max GC round limits
- Produce structured output following template
### MUST NOT
- Generate ideas or perform critique (delegate to csv-wave agents)
- Exceed 1 decision per invocation
- Ignore the max round constraint
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load critique artifacts and session state |
| `Glob` | builtin | Find critique files in session directory |
---
## Execution
### Phase 1: Context Loading
**Objective**: Load critique results and GC round state
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| Session folder | Yes | Path to session directory |
| GC Round | Yes | Current GC round number |
| Max GC Rounds | Yes | Maximum allowed rounds (default: 2) |
**Steps**:
1. Read the session's discoveries.ndjson for critique entries
2. Parse prev_context for the challenger's findings
3. Extract severity counts from the challenger's severity_summary
4. Load current gc_round from spawn message
**Output**: Severity counts and round state loaded
---
### Phase 2: Decision Making
**Objective**: Determine whether to trigger revision or converge
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| Severity counts | Yes | CRITICAL, HIGH, MEDIUM, LOW counts |
| GC round | Yes | Current round number |
| Max rounds | Yes | Maximum allowed rounds |
**Steps**:
1. Check severity threshold:
| Condition | Decision |
|-----------|----------|
| gc_round >= max_rounds | CONVERGE (force, regardless of severity) |
| CRITICAL count > 0 | REVISION (if rounds remain) |
| HIGH count > 0 | REVISION (if rounds remain) |
| All MEDIUM or lower | CONVERGE |
2. Log the decision rationale
**Output**: Decision string "REVISION" or "CONVERGE"
---
## Structured Output Template
```
## Summary
- GC Round: <current>/<max>
- Decision: REVISION | CONVERGE
## Severity Assessment
- CRITICAL: <count>
- HIGH: <count>
- MEDIUM: <count>
- LOW: <count>
## Rationale
- <1-2 sentence explanation of decision>
## Next Action
- REVISION: Ideator should address HIGH/CRITICAL challenges in next round
- CONVERGE: Proceed to synthesis phase, skip remaining revision tasks
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No critique data found | Default to CONVERGE (no evidence for revision) |
| Severity parsing fails | Default to CONVERGE with warning |
| Timeout approaching | Output current decision immediately |


@@ -1,126 +0,0 @@
# Topic Clarifier Agent
Assess brainstorming topic complexity, recommend pipeline mode, and suggest divergence angles.
## Identity
- **Type**: `interactive`
- **Responsibility**: Topic analysis and pipeline selection
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Perform text-level analysis only (no source code reading)
- Produce structured output with pipeline recommendation
- Suggest meaningful divergence angles for ideation
### MUST NOT
- Read source code or explore codebase
- Generate ideas (that is the ideator's job)
- Make final pipeline decisions (orchestrator confirms with user)
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load project context if available |
---
## Execution
### Phase 1: Signal Detection
**Objective**: Analyze topic keywords for complexity signals
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| Topic text | Yes | The brainstorming topic from user |
**Steps**:
1. Scan topic for complexity signals:
| Signal | Weight | Keywords |
|--------|--------|----------|
| Strategic/systemic | +3 | strategy, architecture, system, framework, paradigm |
| Multi-dimensional | +2 | multiple, compare, tradeoff, versus, alternative |
| Innovation-focused | +2 | innovative, creative, novel, breakthrough |
| Simple/basic | -2 | simple, quick, straightforward, basic |
2. Calculate complexity score
**Output**: Complexity score and matched signals
---
### Phase 2: Pipeline Recommendation
**Objective**: Map complexity to pipeline mode and suggest angles
**Steps**:
1. Map score to pipeline:
| Score | Complexity | Pipeline |
|-------|------------|----------|
| >= 4 | High | full (3x parallel ideation + GC + evaluation) |
| 2-3 | Medium | deep (serial with GC loop + evaluation) |
| 0-1 | Low | quick (generate → challenge → synthesize) |
2. Identify divergence angles from topic context:
- **Technical**: Implementation approaches, architecture patterns
- **Product**: User experience, market fit, value proposition
- **Innovation**: Novel approaches, emerging tech, disruption potential
- **Risk**: Failure modes, mitigation strategies, worst cases
- **Business**: Cost, ROI, competitive advantage
- **Organizational**: Team structure, process, culture
3. Select 3-4 most relevant angles based on topic keywords
**Output**: Pipeline mode, angles, complexity rationale
---
## Structured Output Template
```
## Summary
- Topic: <topic>
- Complexity Score: <score> (<level>)
- Recommended Pipeline: <quick|deep|full>
## Signal Detection
- Matched signals: <list of matched signals with weights>
## Suggested Angles
1. <Angle 1>: <why relevant>
2. <Angle 2>: <why relevant>
3. <Angle 3>: <why relevant>
## Pipeline Details
- <pipeline>: <brief description of what this pipeline does>
- Expected tasks: <count>
- Parallel ideation: <yes/no>
- GC rounds: <0/1/2>
- Evaluation: <yes/no>
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Topic too vague | Suggest clarifying questions in output |
| No signal matches | Default to "deep" pipeline with general angles |
| Timeout approaching | Output current analysis with "PARTIAL" status |


@@ -1,105 +0,0 @@
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS
1. Read shared discoveries: .workflow/.csv-wave/{session-id}/discoveries.ndjson (if exists, skip if not)
2. Read project context: .workflow/project-tech.json (if exists)
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Role**: {role}
**Description**: {description}
**Angle(s)**: {angle}
**GC Round**: {gc_round}
### Previous Tasks' Findings (Context)
{prev_context}
---
## Execution Protocol
1. **Read discoveries**: Load shared discoveries from the session's discoveries.ndjson for cross-task context
2. **Use context**: Apply previous tasks' findings from prev_context above
3. **Execute by role**:
### Role: ideator (IDEA-* tasks)
- **Initial Generation** (gc_round = 0):
- For each angle listed in the Angle(s) field, generate 3+ ideas
- Each idea must include: title, description (2-3 sentences), key assumption, potential impact, implementation hint
- Self-review: ensure >= 6 ideas total, no duplicates, all angles covered
- **GC Revision** (gc_round > 0):
- Read critique findings from prev_context
- Focus on HIGH/CRITICAL severity challenges
- Retain unchallenged ideas intact
- Revise challenged ideas with revision rationale
- Replace unsalvageable ideas with new alternatives
### Role: challenger (CHALLENGE-* tasks)
- Read all idea findings from prev_context
- Challenge each idea across 4 dimensions:
- **Assumption Validity**: Does the core assumption hold? Counter-examples?
- **Feasibility**: Technical/resource/time feasibility?
- **Risk Assessment**: Worst case scenario? Hidden risks?
- **Competitive Analysis**: Better alternatives already exist?
- Assign severity per idea: CRITICAL / HIGH / MEDIUM / LOW
- Determine GC signal:
- Any CRITICAL or HIGH severity → `REVISION_NEEDED`
- All MEDIUM or lower → `CONVERGED`
### Role: synthesizer (SYNTH-* tasks)
- Read all idea and critique findings from prev_context
- Execute synthesis steps:
1. **Theme Extraction**: Identify common themes, rate strength (1-10), list supporting ideas
2. **Conflict Resolution**: Identify contradictions, determine resolution approach
3. **Complementary Grouping**: Group complementary ideas together
4. **Gap Identification**: Discover uncovered perspectives
5. **Integrated Proposals**: Generate 1-3 consolidated proposals with feasibility score (1-10) and innovation score (1-10)
### Role: evaluator (EVAL-* tasks)
- Read synthesis findings from prev_context
- Score each proposal across 4 weighted dimensions:
- Feasibility (30%): Technical feasibility, resource needs, timeline
- Innovation (25%): Novelty, differentiation, breakthrough potential
- Impact (25%): Scope of impact, value creation, problem resolution
- Cost Efficiency (20%): Implementation cost, risk cost, opportunity cost
- Weighted score = (Feasibility * 0.30) + (Innovation * 0.25) + (Impact * 0.25) + (Cost * 0.20)
- Provide recommendation per proposal: Strong Recommend / Recommend / Consider / Pass
- Generate final ranking
4. **Share discoveries**: Append exploration findings to shared board:
```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> .workflow/.csv-wave/{session-id}/discoveries.ndjson
```
Discovery types to share:
- `idea`: {title, angle, description, assumption, impact} — generated idea
- `critique`: {idea_title, dimension, severity, challenge, rationale} — critique finding
- `theme`: {name, strength, supporting_ideas[]} — extracted theme
- `proposal`: {title, source_ideas[], feasibility, innovation, description} — integrated proposal
- `evaluation`: {proposal_title, weighted_score, rank, recommendation} — scored proposal
5. **Report result**: Return JSON via report_agent_job_result
---
## Output (report_agent_job_result)
Return JSON:
{
"id": "{id}",
"status": "completed" | "failed",
"findings": "Key discoveries and implementation notes (max 500 chars)",
"gc_signal": "REVISION_NEEDED | CONVERGED | (empty for non-challenger roles)",
"severity_summary": "CRITICAL:N HIGH:N MEDIUM:N LOW:N (challenger only, empty for others)",
"error": ""
}
**Role-specific findings guidance**:
- **ideator**: List idea count, angles covered, key themes. Example: "Generated 8 ideas across Technical, Product, Innovation. Top ideas: API Gateway, Event Sourcing, DevEx Platform."
- **challenger**: Summarize severity counts and GC signal. Example: "Challenged 8 ideas. 2 HIGH (require revision), 3 MEDIUM, 3 LOW. GC signal: REVISION_NEEDED."
- **synthesizer**: List proposal count and key themes. Example: "Synthesized 3 proposals from 5 themes. Top: Infrastructure Modernization (feasibility:8, innovation:7)."
- **evaluator**: List ranking and top recommendation. Example: "Ranked 3 proposals. #1: Infrastructure Modernization (7.85) - Strong Recommend."


@@ -0,0 +1,61 @@
---
role: challenger
prefix: CHALLENGE
inner_loop: false
message_types: [state_update]
---
# Challenger
Devil's advocate role. Assumption challenging, feasibility questioning, risk identification. Acts as the Critic in the Generator-Critic loop.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session folder | Task description (Session: line) | Yes |
| Ideas | <session>/ideas/*.md files | Yes |
| Previous critiques | <session>/.msg/meta.json critique_insights | No |
1. Extract session path from task description (match "Session: <path>")
2. Glob idea files from <session>/ideas/
3. Read all idea files for analysis
4. Read .msg/meta.json critique_insights to avoid repeating past challenges
## Phase 3: Critical Analysis
**Challenge Dimensions** (apply to each idea):
| Dimension | Focus |
|-----------|-------|
| Assumption Validity | Does the core assumption hold? Counter-examples? |
| Feasibility | Technical/resource/time feasibility? |
| Risk Assessment | Worst case scenario? Hidden risks? |
| Competitive Analysis | Better alternatives already exist? |
**Severity Classification**:
| Severity | Criteria |
|----------|----------|
| CRITICAL | Fundamental issue, idea may need replacement |
| HIGH | Significant flaw, requires revision |
| MEDIUM | Notable weakness, needs consideration |
| LOW | Minor concern, does not invalidate the idea |
**Generator-Critic Signal**:
| Condition | Signal |
|-----------|--------|
| Any CRITICAL or HIGH severity | REVISION_NEEDED |
| All MEDIUM or lower | CONVERGED |
**Output**: Write to `<session>/critiques/critique-<num>.md`
- Sections: Ideas Reviewed, Per-idea challenges with severity, Summary table with counts, GC Signal
## Phase 4: Severity Summary
1. Count challenges by severity level
2. Determine signal: REVISION_NEEDED if critical+high > 0, else CONVERGED
3. Update shared state:
- Append challenges to .msg/meta.json critique_insights
- Each entry: idea, severity, key_challenge, round
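The severity tally and Generator-Critic signal decision in Phase 4 can be sketched in Python (the entry shape follows the critique_insights entries described above; helper names are illustrative):

```python
def count_severities(challenges):
    """Tally challenge entries of shape {"idea": ..., "severity": ...}."""
    counts = {"CRITICAL": 0, "HIGH": 0, "MEDIUM": 0, "LOW": 0}
    for c in challenges:
        counts[c["severity"]] += 1
    return counts

def gc_signal(severity_counts):
    """Any CRITICAL or HIGH challenge forces a revision round."""
    if severity_counts.get("CRITICAL", 0) + severity_counts.get("HIGH", 0) > 0:
        return "REVISION_NEEDED"
    return "CONVERGED"
```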


@@ -0,0 +1,58 @@
# Analyze Task
Parse user topic -> detect brainstorming capabilities -> assess complexity -> select pipeline.
**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.
## Signal Detection
| Keywords | Capability | Prefix |
|----------|------------|--------|
| generate, create, brainstorm, ideas, explore | ideator | IDEA |
| challenge, critique, argue, devil, risk | challenger | CHALLENGE |
| synthesize, integrate, combine, merge, themes | synthesizer | SYNTH |
| evaluate, score, rank, prioritize, select | evaluator | EVAL |
## Dependency Graph
Natural ordering tiers:
- Tier 0: ideator (divergent generation -- no dependencies)
- Tier 1: challenger (requires ideator output)
- Tier 2: ideator-revision (requires challenger output, GC loop)
- Tier 3: synthesizer (requires last challenger output)
- Tier 4: evaluator (requires synthesizer output, deep/full only)
## Complexity Scoring
| Factor | Points |
|--------|--------|
| Per capability needed | +1 |
| Strategic/systemic topic | +3 |
| Multi-dimensional analysis | +2 |
| Innovation-focused request | +2 |
| Simple/basic topic | -2 |
Results: 0-1 Low (quick), 2-3 Medium (deep), 4+ High (full)
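The scoring table and pipeline thresholds reduce to a small routine; a Python sketch (keyword groups mirror the table, and the plain substring matching is a simplification):

```python
SIGNALS = {  # keyword group -> weight, mirroring the complexity table
    ("strategy", "architecture", "system", "framework", "paradigm"): 3,
    ("multiple", "compare", "tradeoff", "versus", "alternative"): 2,
    ("innovative", "creative", "novel", "breakthrough"): 2,
    ("simple", "quick", "straightforward", "basic"): -2,
}

def score_topic(topic, capability_count):
    """Complexity score: +1 per needed capability plus keyword signals."""
    text = topic.lower()
    score = capability_count
    for keywords, weight in SIGNALS.items():
        if any(k in text for k in keywords):
            score += weight
    return score

def select_pipeline(score):
    if score >= 4:
        return "full"
    if score >= 2:
        return "deep"
    return "quick"
```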
## Pipeline Selection
| Complexity | Pipeline | Tasks |
|------------|----------|-------|
| Low | quick | IDEA -> CHALLENGE -> SYNTH |
| Medium | deep | IDEA -> CHALLENGE -> IDEA-fix -> CHALLENGE-2 -> SYNTH -> EVAL |
| High | full | 3x IDEA (parallel) -> CHALLENGE -> IDEA-fix -> SYNTH -> EVAL |
## Output
Write <session>/task-analysis.json:
```json
{
"task_description": "<original>",
"pipeline_type": "<quick|deep|full>",
"capabilities": [{ "name": "<cap>", "prefix": "<PREFIX>", "keywords": ["..."] }],
"dependency_graph": { "<TASK-ID>": { "role": "<role>", "blockedBy": ["..."], "priority": "P0|P1|P2" } },
"roles": [{ "name": "<role>", "prefix": "<PREFIX>", "inner_loop": false }],
"complexity": { "score": 0, "level": "Low|Medium|High" },
"angles": ["Technical", "Product", "Innovation", "Risk"]
}
```


@@ -0,0 +1,162 @@
# Command: Dispatch
Create the brainstorm task chain with correct dependencies and structured task descriptions based on selected pipeline mode.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| User topic | From coordinator Phase 1 | Yes |
| Session folder | From coordinator Phase 2 | Yes |
| Pipeline mode | From session.json pipeline | Yes |
| Angles | From session.json angles | Yes |
1. Load topic, pipeline mode, and angles from session.json
2. Determine task chain from pipeline mode
## Phase 3: Task Chain Creation
### Task Description Template
Every task is built as a JSON entry in the tasks array:
```json
{
"id": "<TASK-ID>",
"title": "<TASK-ID>",
"description": "PURPOSE: <what this task achieves> | Success: <completion criteria>\nTASK:\n - <step 1>\n - <step 2>\n - <step 3>\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Angles: <angle-list>\n - Upstream artifacts: <artifact-list>\nEXPECTED: <deliverable path> + <quality criteria>\nCONSTRAINTS: <scope limits>\n---\nInnerLoop: false",
"status": "pending",
"role": "<role>",
"prefix": "<PREFIX>",
"deps": ["<dependency-list>"],
"findings": "",
"error": ""
}
```
### Pipeline Router
| Mode | Action |
|------|--------|
| quick | Create 3 tasks (IDEA -> CHALLENGE -> SYNTH) |
| deep | Create 6 tasks (IDEA -> CHALLENGE -> IDEA-fix -> CHALLENGE-2 -> SYNTH -> EVAL) |
| full | Create 7 tasks (3 parallel IDEAs -> CHALLENGE -> IDEA-fix -> SYNTH -> EVAL) |
---
### Quick Pipeline
Build the tasks array and write to tasks.json:
```json
[
{
"id": "IDEA-001",
"title": "IDEA-001",
"description": "PURPOSE: Generate multi-angle ideas for brainstorm topic | Success: >= 6 unique ideas across all angles\nTASK:\n - Read topic and angles from session context\n - Generate 3+ ideas per angle with title, description, assumption, impact\n - Self-review for coverage and uniqueness\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Angles: <angle-list>\nEXPECTED: <session>/ideas/idea-001.md with >= 6 ideas\nCONSTRAINTS: Divergent thinking only, no evaluation\n---\nInnerLoop: false",
"status": "pending",
"role": "ideator",
"prefix": "IDEA",
"deps": [],
"findings": "",
"error": ""
},
{
"id": "CHALLENGE-001",
"title": "CHALLENGE-001",
"description": "PURPOSE: Challenge assumptions and assess feasibility of generated ideas | Success: Each idea rated by severity\nTASK:\n - Read all idea files from ideas/ directory\n - Challenge each idea across 4 dimensions (assumption, feasibility, risk, competition)\n - Assign severity (CRITICAL/HIGH/MEDIUM/LOW) per idea\n - Determine GC signal (REVISION_NEEDED or CONVERGED)\nCONTEXT:\n - Session: <session-folder>\n - Upstream artifacts: ideas/idea-001.md\nEXPECTED: <session>/critiques/critique-001.md with severity table and GC signal\nCONSTRAINTS: Critical analysis only, do not generate alternative ideas\n---\nInnerLoop: false",
"status": "pending",
"role": "challenger",
"prefix": "CHALLENGE",
"deps": ["IDEA-001"],
"findings": "",
"error": ""
},
{
"id": "SYNTH-001",
"title": "SYNTH-001",
"description": "PURPOSE: Synthesize ideas and critiques into integrated proposals | Success: >= 1 consolidated proposal\nTASK:\n - Read all ideas and critiques\n - Extract themes, resolve conflicts, group complementary ideas\n - Generate 1-3 integrated proposals with feasibility and innovation scores\nCONTEXT:\n - Session: <session-folder>\n - Upstream artifacts: ideas/*.md, critiques/*.md\nEXPECTED: <session>/synthesis/synthesis-001.md with proposals\nCONSTRAINTS: Integration and synthesis only, no new ideas\n---\nInnerLoop: false",
"status": "pending",
"role": "synthesizer",
"prefix": "SYNTH",
"deps": ["CHALLENGE-001"],
"findings": "",
"error": ""
}
]
```
### Deep Pipeline
Creates all 6 tasks. The first two match the Quick pipeline; then add:
**IDEA-002** (ideator, GC revision):
```json
{
"id": "IDEA-002",
"title": "IDEA-002",
"description": "PURPOSE: Revise ideas based on critique feedback (GC Round 1) | Success: HIGH/CRITICAL challenges addressed\nTASK:\n - Read critique feedback from critiques/\n - Revise challenged ideas, replace unsalvageable ones\n - Retain unchallenged ideas intact\nCONTEXT:\n - Session: <session-folder>\n - Upstream artifacts: critiques/critique-001.md\nEXPECTED: <session>/ideas/idea-002.md with revised ideas\nCONSTRAINTS: Address critique only, focused revision\n---\nInnerLoop: false",
"status": "pending",
"role": "ideator",
"prefix": "IDEA",
"deps": ["CHALLENGE-001"],
"findings": "",
"error": ""
}
```
**CHALLENGE-002** (challenger, round 2):
```json
{
"id": "CHALLENGE-002",
"title": "CHALLENGE-002",
"description": "PURPOSE: Validate revised ideas (GC Round 2) | Success: Severity assessment of revised ideas\nTASK:\n - Read revised idea files\n - Re-evaluate previously challenged ideas\n - Assess new replacement ideas\nCONTEXT:\n - Session: <session-folder>\n - Upstream artifacts: ideas/idea-002.md\nEXPECTED: <session>/critiques/critique-002.md\nCONSTRAINTS: Focus on revised/new ideas\n---\nInnerLoop: false",
"status": "pending",
"role": "challenger",
"prefix": "CHALLENGE",
"deps": ["IDEA-002"],
"findings": "",
"error": ""
}
```
**SYNTH-001** blocked by CHALLENGE-002. **EVAL-001** blocked by SYNTH-001:
```json
{
"id": "EVAL-001",
"title": "EVAL-001",
"description": "PURPOSE: Score and rank synthesized proposals | Success: Ranked list with weighted scores\nTASK:\n - Read synthesis results\n - Score each proposal across 4 dimensions (Feasibility 30%, Innovation 25%, Impact 25%, Cost 20%)\n - Generate final ranking and recommendation\nCONTEXT:\n - Session: <session-folder>\n - Upstream artifacts: synthesis/synthesis-001.md\nEXPECTED: <session>/evaluation/evaluation-001.md with scoring matrix\nCONSTRAINTS: Evaluation only, no new proposals\n---\nInnerLoop: false",
"status": "pending",
"role": "evaluator",
"prefix": "EVAL",
"deps": ["SYNTH-001"],
"findings": "",
"error": ""
}
```
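The four-dimension weighting in EVAL-001's description can be sketched in Python. The recommendation thresholds are illustrative assumptions — the skill only names the four tiers, not their cutoffs:

```python
WEIGHTS = {"feasibility": 0.30, "innovation": 0.25, "impact": 0.25, "cost": 0.20}

def weighted_score(scores):
    """Combine per-dimension scores (0-10) using the EVAL weights."""
    return round(sum(scores[d] * w for d, w in WEIGHTS.items()), 2)

def recommendation(score):
    """Illustrative thresholds (assumption); tiers come from the skill."""
    if score >= 8:
        return "Strong Recommend"
    if score >= 6.5:
        return "Recommend"
    if score >= 5:
        return "Consider"
    return "Pass"
```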
### Full Pipeline
Creates 7 tasks. Parallel ideators:
| Task | Owner | Deps |
|------|-------|------|
| IDEA-001 | ideator-1 | (none) |
| IDEA-002 | ideator-2 | (none) |
| IDEA-003 | ideator-3 | (none) |
| CHALLENGE-001 | challenger | IDEA-001, IDEA-002, IDEA-003 |
| IDEA-004 | ideator | CHALLENGE-001 |
| SYNTH-001 | synthesizer | IDEA-004 |
| EVAL-001 | evaluator | SYNTH-001 |
Each parallel IDEA task is scoped to a specific angle from the angles list.
## Phase 4: Validation
1. Verify all tasks created by reading tasks.json
2. Check dependency chain integrity:
- No circular dependencies
- All deps references exist
- First task(s) have empty deps
3. Log task count and pipeline mode
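The integrity checks in step 2 can be sketched as a Python validator over the tasks array (a Kahn-style elimination detects cycles; the function name is illustrative):

```python
def validate_deps(tasks):
    """Check tasks.json dependency integrity: every dep exists,
    no cycles, and at least one entry-point task with empty deps."""
    ids = {t["id"] for t in tasks}
    errors = []
    for t in tasks:
        for dep in t["deps"]:
            if dep not in ids:
                errors.append(f"{t['id']}: unknown dep {dep}")
    # Kahn-style elimination: repeatedly remove tasks with no unmet deps
    remaining = {t["id"]: set(t["deps"]) & ids for t in tasks}
    while remaining:
        ready = [tid for tid, deps in remaining.items() if not deps]
        if not ready:
            errors.append("circular dependency among: " + ", ".join(sorted(remaining)))
            break
        for tid in ready:
            del remaining[tid]
            for deps in remaining.values():
                deps.discard(tid)
    if tasks and all(t["deps"] for t in tasks):
        errors.append("no entry-point task with empty deps")
    return errors
```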


@@ -0,0 +1,171 @@
# Monitor Pipeline
Event-driven pipeline coordination. Beat model: coordinator wake -> process -> spawn -> STOP.
## Constants
- SPAWN_MODE: spawn_agent
- ONE_STEP_PER_INVOCATION: true
- FAST_ADVANCE_AWARE: true
- WORKER_AGENT: team_worker
- MAX_GC_ROUNDS: 2
## Handler Router
| Source | Handler |
|--------|---------|
| Message contains [ideator], [challenger], [synthesizer], [evaluator] | handleCallback |
| "consensus_blocked" | handleConsensus |
| "capability_gap" | handleAdapt |
| "check" or "status" | handleCheck |
| "resume" or "continue" | handleResume |
| All tasks completed | handleComplete |
| Default | handleSpawnNext |
## handleCallback
Worker completed. Process and advance.
1. Parse message to identify role and task ID:
| Message Pattern | Role Detection |
|----------------|---------------|
| `[ideator]` or task ID `IDEA-*` | ideator |
| `[challenger]` or task ID `CHALLENGE-*` | challenger |
| `[synthesizer]` or task ID `SYNTH-*` | synthesizer |
| `[evaluator]` or task ID `EVAL-*` | evaluator |
2. Mark task as completed: Read tasks.json, update matching entry status to "completed", write back
3. Record completion in session state
4. **Generator-Critic check** (when challenger completes):
- If completed task is CHALLENGE-* AND pipeline is deep or full:
- Read critique file for GC signal
- Read .msg/meta.json for gc_round
| GC Signal | gc_round < max | Action |
|-----------|----------------|--------|
| REVISION_NEEDED | Yes | Increment gc_round, unblock IDEA-fix task |
| REVISION_NEEDED | No (>= max) | Force convergence, unblock SYNTH |
| CONVERGED | - | Unblock SYNTH (skip remaining GC tasks) |
- Log team_msg with type "gc_loop_trigger" or "task_unblocked"
- If skipping GC tasks, mark them as completed (skip)
5. Close completed agent: `close_agent({ id: <agentId> })`
6. Proceed to handleSpawnNext
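The Generator-Critic table in step 4 reduces to a small decision function; a Python sketch (return values are illustrative labels for the actions in the table):

```python
def gc_action(signal, gc_round, max_rounds=2):
    """Decide the next pipeline step after a challenger callback."""
    if signal == "REVISION_NEEDED" and gc_round < max_rounds:
        return "unblock_idea_fix"  # increment gc_round, unblock IDEA-fix
    if signal == "REVISION_NEEDED":
        return "force_converge"    # round limit reached -> unblock SYNTH
    return "unblock_synth"         # CONVERGED: skip remaining GC tasks
```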
## handleCheck
Read-only status report, then STOP.
```
[coordinator] Pipeline Status (<pipeline-mode>)
[coordinator] Progress: <done>/<total> (<pct>%)
[coordinator] Active: <workers with elapsed time>
[coordinator] Ready: <pending tasks with resolved deps>
[coordinator] GC Rounds: <gc_round>/<max_gc_rounds>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```
## handleResume
1. Audit task list: Tasks stuck in "in_progress" -> reset to "pending"
2. Proceed to handleSpawnNext
## handleSpawnNext
Find ready tasks, spawn workers, STOP.
1. Collect: completedSubjects, inProgressSubjects, readySubjects
2. No ready + work in progress -> report waiting, STOP
3. No ready + nothing in progress -> handleComplete
4. Has ready -> for each:
a. Update tasks.json entry status -> "in_progress"
b. team_msg log -> task_unblocked
c. Spawn team_worker:
```
const agentId = spawn_agent({
agent_type: "team_worker",
items: [{ type: "text", text: `## Role Assignment
role: <role>
role_spec: ~ or <project>/.codex/skills/team-brainstorm/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
team_name: brainstorm
requirement: <task-description>
inner_loop: false
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).` }]
})
```
d. Collect agent results: `wait_agent({ ids: [agentId], timeout_ms: 900000 })`
e. Read discoveries from output files
f. Update tasks.json with results
g. Close agent: `close_agent({ id: agentId })`
5. Parallel spawn rules:
| Pipeline | Scenario | Spawn Behavior |
|----------|----------|---------------|
| Quick | Single sequential | One worker at a time |
| Deep | Sequential with GC | One worker at a time |
| Full | IDEA-001/002/003 unblocked | Spawn ALL 3 ideator workers in parallel |
| Full | Other stages | One worker at a time |
**Parallel ideator spawn** (Full pipeline):
```
const agentIds = []
for (const task of readyIdeatorTasks) {
agentIds.push(spawn_agent({
agent_type: "team_worker",
items: [{ type: "text", text: `...role: ideator-<N>...` }]
}))
}
const results = wait_agent({ ids: agentIds, timeout_ms: 900000 })
// Process results, close agents
for (const id of agentIds) { close_agent({ id }) }
```
6. Update session, output summary, STOP
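Ready-task selection in step 1 can be sketched in Python (skipped GC tasks count as completed, matching handleCallback's skip rule; the helper name is illustrative):

```python
def ready_tasks(tasks):
    """Pending tasks whose deps are all completed."""
    done = {t["id"] for t in tasks if t["status"] == "completed"}
    return [t["id"] for t in tasks
            if t["status"] == "pending" and all(d in done for d in t["deps"])]
```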
## handleComplete
Pipeline done. Generate report and completion action.
Completion check by mode:
| Mode | Completion Condition |
|------|---------------------|
| quick | All 3 tasks completed |
| deep | All 6 tasks (+ any skipped GC tasks) completed |
| full | All 7 tasks (+ any skipped GC tasks) completed |
1. Verify all tasks completed by reading tasks.json
2. If any task is not completed, return to handleSpawnNext
3. If all are completed -> transition to coordinator Phase 5
## handleConsensus
Handle consensus_blocked signals.
| Severity | Action |
|----------|--------|
| HIGH | Pause pipeline, notify user with findings summary |
| MEDIUM | Log finding, attempt to continue |
| LOW | Log finding, continue pipeline |
## handleAdapt
Capability gap reported mid-pipeline.
1. Parse gap description
2. Check if existing role covers it -> redirect
3. Role count < 5 -> generate dynamic role-spec in <session>/role-specs/
4. Create new task (add to tasks.json), spawn worker
5. Role count >= 5 -> merge or pause
## Fast-Advance Reconciliation
On every coordinator wake:
1. Read team_msg entries with type="fast_advance"
2. Sync active_workers with spawned successors
3. No duplicate spawns


@@ -0,0 +1,140 @@
# Coordinator
Orchestrate team-brainstorm: topic clarify -> dispatch -> spawn -> monitor -> report.
## Identity
- Name: coordinator | Tag: [coordinator]
- Responsibility: Topic clarification -> Create team -> Dispatch tasks -> Monitor progress -> Report results
## Boundaries
### MUST
- Use `team_worker` agent type for all worker spawns
- Follow Command Execution Protocol for dispatch and monitor commands
- Respect pipeline stage dependencies (deps)
- Stop after spawning workers -- wait for results via wait_agent
- Manage Generator-Critic loop count (max 2 rounds)
- Execute completion action in Phase 5
### MUST NOT
- Generate ideas, challenge assumptions, synthesize, or evaluate -- workers handle this
- Spawn workers without creating tasks first
- Force-advance pipeline past GC loop decisions
- Modify artifact files (ideas/*.md, critiques/*.md, etc.) -- delegate to workers
- Skip GC severity check when critique arrives
## Command Execution Protocol
When coordinator needs to execute a specific phase:
1. Read `commands/<command>.md`
2. Follow the workflow defined in the command
3. Commands are inline execution guides, NOT separate agents
4. Execute synchronously, complete before proceeding
## Entry Router
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker result | Result from wait_agent contains [ideator], [challenger], [synthesizer], [evaluator] | -> handleCallback (monitor.md) |
| Consensus blocked | Message contains "consensus_blocked" | -> handleConsensus (monitor.md) |
| Status check | Args contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Args contain "resume" or "continue" | -> handleResume (monitor.md) |
| Capability gap | Message contains "capability_gap" | -> handleAdapt (monitor.md) |
| Pipeline complete | All tasks completed | -> handleComplete (monitor.md) |
| Interrupted session | Active session in .workflow/.team/BRS-* | -> Phase 0 |
| New session | None of above | -> Phase 1 |
For callback/check/resume/consensus/adapt/complete: load @commands/monitor.md, execute handler, STOP.
## Phase 0: Session Resume Check
1. Scan `.workflow/.team/BRS-*/session.json` for active/paused sessions
2. No sessions -> Phase 1
3. Single session -> reconcile (read tasks.json, reset in_progress->pending, rebuild team, kick first ready task)
4. Multiple -> request_user_input for selection
## Phase 1: Topic Clarification + Complexity Assessment
TEXT-LEVEL ONLY. No source code reading.
1. Parse topic from $ARGUMENTS
2. Assess topic complexity:
| Signal | Weight | Keywords |
|--------|--------|----------|
| Strategic/systemic | +3 | strategy, architecture, system, framework, paradigm |
| Multi-dimensional | +2 | multiple, compare, tradeoff, versus, alternative |
| Innovation-focused | +2 | innovative, creative, novel, breakthrough |
| Simple/basic | -2 | simple, quick, straightforward, basic |
| Score | Complexity | Pipeline Recommendation |
|-------|------------|-------------------------|
| >= 4 | High | full |
| 2-3 | Medium | deep |
| 0-1 | Low | quick |
3. request_user_input for pipeline mode and divergence angles
4. Store requirements: mode, scope, angles, constraints
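The keyword-weight scoring above can be sketched as follows — a minimal illustration assuming plain substring matching; the function name and return shape are not part of the skill:

```python
# Hypothetical implementation of the Phase 1 complexity assessment.
# Weights and keywords mirror the tables above; matching is naive substring.
SIGNALS = [
    (+3, ["strategy", "architecture", "system", "framework", "paradigm"]),
    (+2, ["multiple", "compare", "tradeoff", "versus", "alternative"]),
    (+2, ["innovative", "creative", "novel", "breakthrough"]),
    (-2, ["simple", "quick", "straightforward", "basic"]),
]

def assess_complexity(topic: str) -> tuple[int, str]:
    """Return (score, recommended pipeline mode) for a topic string."""
    text = topic.lower()
    # Each signal group contributes its weight at most once.
    score = sum(w for w, kws in SIGNALS if any(k in text for k in kws))
    if score >= 4:
        return score, "full"
    if score >= 2:
        return score, "deep"
    return score, "quick"
```

The recommendation is only a default; step 3 still confirms the mode via `request_user_input`.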
## Phase 2: Create Team + Initialize Session
1. Resolve workspace paths (MUST do first):
- `project_root` = result of `Bash("pwd")`
- `skill_root` = `<project_root>/.codex/skills/team-brainstorm`
2. Generate session ID: `BRS-<topic-slug>-<date>`
3. Create session folder structure: ideas/, critiques/, synthesis/, evaluation/, wisdom/, .msg/
4. Initialize `tasks.json` (empty task array)
5. Write session.json with pipeline, angles, gc_round=0, max_gc_rounds=2
6. Initialize meta.json via team_msg state_update:
```
mcp__ccw-tools__team_msg({
operation: "log", session_id: "<id>", from: "coordinator",
type: "state_update", summary: "Session initialized",
data: { pipeline_mode: "<mode>", pipeline_stages: ["ideator","challenger","synthesizer","evaluator"], team_name: "brainstorm", topic: "<topic>", angles: [...], gc_round: 0 }
})
```
## Phase 3: Create Task Chain
Delegate to @commands/dispatch.md:
1. Read pipeline mode and angles from session.json
2. Build tasks array and write to tasks.json with correct deps
3. Update session.json with task count
## Phase 4: Spawn-and-Stop
Delegate to @commands/monitor.md#handleSpawnNext:
1. Find ready tasks (pending + deps resolved)
2. Spawn team_worker agents (see SKILL.md Spawn Template)
3. Output status summary
4. STOP
## Phase 5: Report + Completion Action
1. Load session state -> count completed tasks, calculate duration
2. List deliverables:
| Deliverable | Path |
|-------------|------|
| Ideas | <session>/ideas/*.md |
| Critiques | <session>/critiques/*.md |
| Synthesis | <session>/synthesis/*.md |
| Evaluation | <session>/evaluation/*.md (deep/full only) |
3. Output pipeline summary: topic, pipeline mode, GC rounds, total ideas, key themes
4. Execute completion action per session.completion_action:
- interactive -> request_user_input (Archive/Keep/Export)
- auto_archive -> Archive & Clean (status=completed, clean up session)
- auto_keep -> Keep Active (status=paused)
## Error Handling
| Error | Resolution |
|-------|------------|
| Task too vague | request_user_input for clarification |
| Session corruption | Attempt recovery, fallback to manual |
| Worker crash | Reset task to pending, respawn |
| GC loop exceeded | Force convergence to synthesizer |
| No ideas generated | Coordinator prompts with seed questions |


@@ -0,0 +1,56 @@
---
role: evaluator
prefix: EVAL
inner_loop: false
message_types: [state_update]
---
# Evaluator
Scoring, ranking, and final selection. Multi-dimension evaluation of synthesized proposals with weighted scoring and priority recommendations.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session folder | Task description (Session: line) | Yes |
| Synthesis results | <session>/synthesis/*.md files | Yes |
| All ideas | <session>/ideas/*.md files | No (for context) |
| All critiques | <session>/critiques/*.md files | No (for context) |
1. Extract session path from task description (match "Session: <path>")
2. Glob synthesis files from <session>/synthesis/
3. Read all synthesis files for evaluation
4. Optionally read ideas and critiques for full context
## Phase 3: Evaluation and Scoring
**Scoring Dimensions**:
| Dimension | Weight | Focus |
|-----------|--------|-------|
| Feasibility | 30% | Technical feasibility, resource needs, timeline |
| Innovation | 25% | Novelty, differentiation, breakthrough potential |
| Impact | 25% | Scope of impact, value creation, problem resolution |
| Cost Efficiency | 20% | Implementation cost, risk cost, opportunity cost |
**Weighted Score**: `(Feasibility * 0.30) + (Innovation * 0.25) + (Impact * 0.25) + (Cost * 0.20)`
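The formula above can be computed as in this sketch, assuming each dimension is already scored 1-10 (the dict keys are illustrative):

```python
# Weights as defined in the Scoring Dimensions table above.
WEIGHTS = {"feasibility": 0.30, "innovation": 0.25, "impact": 0.25, "cost": 0.20}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-dimension scores (1-10) into the weighted total."""
    return round(sum(scores[dim] * w for dim, w in WEIGHTS.items()), 2)

# e.g. weighted_score({"feasibility": 8, "innovation": 6, "impact": 7, "cost": 9})
# yields 7.45
```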
**Per-Proposal Evaluation**:
- Score each dimension (1-10) with rationale
- Overall recommendation: Strong Recommend / Recommend / Consider / Pass
**Output**: Write to `<session>/evaluation/evaluation-<num>.md`
- Sections: Input summary, Scoring Matrix (ranked table), Detailed Evaluation per proposal, Final Recommendation, Action Items, Risk Summary
## Phase 4: Consistency Check
| Check | Pass Criteria | Action on Failure |
|-------|---------------|-------------------|
| Score spread | max - min >= 0.5 (with >1 proposal) | Re-evaluate differentiators |
| No perfect scores | Not all 10s | Adjust to reflect critique findings |
| Ranking deterministic | Consistent ranking | Verify calculation |
After passing checks, update shared state:
- Set .msg/meta.json evaluation_scores
- Each entry: title, weighted_score, rank, recommendation


@@ -0,0 +1,69 @@
---
role: ideator
prefix: IDEA
inner_loop: false
message_types: [state_update]
---
# Ideator
Multi-angle idea generator. Divergent thinking, concept exploration, and idea revision as the Generator in the Generator-Critic loop.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session folder | Task description (Session: line) | Yes |
| Topic | <session>/.msg/meta.json | Yes |
| Angles | <session>/.msg/meta.json | Yes |
| GC Round | <session>/.msg/meta.json | Yes |
| Previous critique | <session>/critiques/*.md | For revision tasks only |
| Previous ideas | <session>/.msg/meta.json generated_ideas | No |
1. Extract session path from task description (match "Session: <path>")
2. Read .msg/meta.json for topic, angles, gc_round
3. Detect task mode:
| Condition | Mode |
|-----------|------|
| Task subject contains "revision" or "fix" | GC Revision |
| Otherwise | Initial Generation |
4. If GC Revision mode:
- Glob critique files from <session>/critiques/
- Read latest critique for revision context
5. Read previous ideas from .msg/meta.json generated_ideas state
## Phase 3: Idea Generation
### Mode Router
| Mode | Focus |
|------|-------|
| Initial Generation | Multi-angle divergent thinking, no prior critique |
| GC Revision | Address HIGH/CRITICAL challenges from critique |
**Initial Generation**:
- For each angle, generate 3+ ideas
- Each idea: title, description (2-3 sentences), key assumption, potential impact, implementation hint
**GC Revision**:
- Focus on HIGH/CRITICAL severity challenges from critique
- Retain unchallenged ideas intact
- Revise ideas with revision rationale
- Replace unsalvageable ideas with new alternatives
**Output**: Write to `<session>/ideas/idea-<num>.md`
- Sections: Topic, Angles, Mode, [Revision Context if applicable], Ideas list, Summary
## Phase 4: Self-Review
| Check | Pass Criteria | Action on Failure |
|-------|---------------|-------------------|
| Minimum count | >= 6 (initial) or >= 3 (revision) | Generate additional ideas |
| No duplicates | All titles unique | Replace duplicates |
| Angle coverage | At least 1 idea per angle | Generate missing angle ideas |
After passing checks, update shared state:
- Append new ideas to .msg/meta.json generated_ideas
- Each entry: id, title, round, revised flag


@@ -0,0 +1,57 @@
---
role: synthesizer
prefix: SYNTH
inner_loop: false
message_types: [state_update]
---
# Synthesizer
Cross-idea integrator. Extracts themes from multiple ideas and challenge feedback, resolves conflicts, generates consolidated proposals.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session folder | Task description (Session: line) | Yes |
| All ideas | <session>/ideas/*.md files | Yes |
| All critiques | <session>/critiques/*.md files | Yes |
| GC rounds completed | <session>/.msg/meta.json gc_round | Yes |
1. Extract session path from task description (match "Session: <path>")
2. Glob all idea files from <session>/ideas/
3. Glob all critique files from <session>/critiques/
4. Read all idea and critique files for synthesis
5. Read .msg/meta.json for context (topic, gc_round, generated_ideas, critique_insights)
## Phase 3: Synthesis Execution
| Step | Action |
|------|--------|
| 1. Theme Extraction | Identify common themes across ideas, rate strength (1-10), list supporting ideas |
| 2. Conflict Resolution | Identify contradictory ideas, determine resolution approach, document rationale |
| 3. Complementary Grouping | Group complementary ideas together |
| 4. Gap Identification | Discover uncovered perspectives |
| 5. Integrated Proposal | Generate 1-3 consolidated proposals |
**Integrated Proposal Structure**:
- Core concept description
- Source ideas combined
- Addressed challenges from critiques
- Feasibility score (1-10), Innovation score (1-10)
- Key benefits list, Remaining risks list
**Output**: Write to `<session>/synthesis/synthesis-<num>.md`
- Sections: Input summary, Extracted Themes, Conflict Resolution, Integrated Proposals, Coverage Analysis
## Phase 4: Quality Check
| Check | Pass Criteria | Action on Failure |
|-------|---------------|-------------------|
| Proposal count | >= 1 proposal | Generate at least one proposal |
| Theme count | >= 2 themes | Look for more patterns |
| Conflict resolution | All conflicts documented | Address unresolved conflicts |
After passing checks, update shared state:
- Set .msg/meta.json synthesis_themes
- Each entry: name, strength, supporting_ideas


@@ -1,171 +0,0 @@
# Team Brainstorm — CSV Schema
## Master CSV: tasks.csv
### Column Definitions
#### Input Columns (Set by Decomposer)
| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier | `"IDEA-001"` |
| `title` | string | Yes | Short task title | `"Multi-angle idea generation"` |
| `description` | string | Yes | Detailed task description (self-contained) | `"Generate 3+ ideas per angle..."` |
| `role` | string | Yes | Worker role: ideator, challenger, synthesizer, evaluator | `"ideator"` |
| `angle` | string | No | Brainstorming angle(s) for ideator tasks (semicolon-separated) | `"Technical;Product;Innovation"` |
| `gc_round` | integer | Yes | Generator-Critic round number (0 = initial, 1+ = revision) | `"0"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"IDEA-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"IDEA-001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |
#### Computed Columns (Set by Wave Engine)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[Task IDEA-001] Generated 8 ideas..."` |
#### Output Columns (Set by Agent)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` → `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Generated 8 ideas across 3 angles..."` |
| `gc_signal` | string | Generator-Critic signal (challenger only): `REVISION_NEEDED` or `CONVERGED` | `"REVISION_NEEDED"` |
| `severity_summary` | string | Severity count summary (challenger only) | `"CRITICAL:0 HIGH:2 MEDIUM:3 LOW:1"` |
| `error` | string | Error message if failed | `""` |
---
### exec_mode Values
| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |
Interactive tasks appear in master CSV for dependency tracking but are NOT included in wave-{N}.csv files.
---
### Example Data
```csv
id,title,description,role,angle,gc_round,deps,context_from,exec_mode,wave,status,findings,gc_signal,severity_summary,error
"IDEA-001","Multi-angle idea generation","Generate 3+ ideas per angle with title, description, assumption, and potential impact. Cover all assigned angles comprehensively.","ideator","Technical;Product;Innovation","0","","","csv-wave","1","pending","","","",""
"IDEA-002","Parallel angle generation (Risk)","Generate 3+ ideas focused on Risk angle with title, description, assumption, and potential impact.","ideator","Risk","0","","","csv-wave","1","pending","","","",""
"CHALLENGE-001","Critique generated ideas","Read all idea artifacts. Challenge each idea across assumption validity, feasibility, risk, and competition dimensions. Assign severity (CRITICAL/HIGH/MEDIUM/LOW) per idea. Output GC signal.","challenger","","0","IDEA-001;IDEA-002","IDEA-001;IDEA-002","csv-wave","2","pending","","","",""
"GC-CHECK-001","GC loop decision","Evaluate critique severity counts. If any HIGH/CRITICAL: REVISION_NEEDED. Else: CONVERGED.","gc-controller","","1","CHALLENGE-001","CHALLENGE-001","interactive","3","pending","","","",""
"IDEA-003","Revise ideas (GC Round 1)","Address HIGH/CRITICAL challenges from critique. Retain unchallenged ideas intact. Replace unsalvageable ideas.","ideator","","1","GC-CHECK-001","CHALLENGE-001","csv-wave","4","pending","","","",""
"SYNTH-001","Synthesize proposals","Extract themes from ideas and critiques. Resolve conflicts. Generate 1-3 integrated proposals with feasibility and innovation scores.","synthesizer","","0","IDEA-003","IDEA-001;IDEA-002;IDEA-003;CHALLENGE-001","csv-wave","5","pending","","","",""
"EVAL-001","Score and rank proposals","Score each proposal: Feasibility 30%, Innovation 25%, Impact 25%, Cost 20%. Generate final ranking and recommendation.","evaluator","","0","SYNTH-001","SYNTH-001","csv-wave","6","pending","","","",""
```
---
### Column Lifecycle
```
Decomposer (Phase 1) Wave Engine (Phase 2) Agent (Execution)
───────────────────── ──────────────────── ─────────────────
id ───────────► id ──────────► id
title ───────────► title ──────────► (reads)
description ───────────► description ──────────► (reads)
role ───────────► role ──────────► (reads)
angle ───────────► angle ──────────► (reads)
gc_round ───────────► gc_round ──────────► (reads)
deps ───────────► deps ──────────► (reads)
context_from───────────► context_from──────────► (reads)
exec_mode ───────────► exec_mode ──────────► (reads)
wave ──────────► (reads)
prev_context ──────────► (reads)
status
findings
gc_signal
severity_summary
error
```
---
## Output Schema (JSON)
Agent output via `report_agent_job_result` (csv-wave tasks):
```json
{
"id": "IDEA-001",
"status": "completed",
"findings": "Generated 8 ideas across Technical, Product, Innovation angles. Key themes: API gateway pattern, event-driven architecture, developer experience tools.",
"gc_signal": "",
"severity_summary": "",
"error": ""
}
```
Challenger-specific output:
```json
{
"id": "CHALLENGE-001",
"status": "completed",
"findings": "Challenged 8 ideas. 2 HIGH severity (require revision), 3 MEDIUM, 3 LOW.",
"gc_signal": "REVISION_NEEDED",
"severity_summary": "CRITICAL:0 HIGH:2 MEDIUM:3 LOW:3",
"error": ""
}
```
Interactive tasks write their output as structured text or JSON to `interactive/{id}-result.json`.
---
## Discovery Types
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `idea` | `data.title` | `{title, angle, description, assumption, impact}` | Generated brainstorm idea |
| `critique` | `data.idea_title` | `{idea_title, dimension, severity, challenge, rationale}` | Critique of an idea |
| `theme` | `data.name` | `{name, strength, supporting_ideas[]}` | Extracted theme from synthesis |
| `proposal` | `data.title` | `{title, source_ideas[], feasibility, innovation, description}` | Integrated proposal |
| `evaluation` | `data.proposal_title` | `{proposal_title, weighted_score, rank, recommendation}` | Scored proposal |
| `gc_decision` | `data.round` | `{round, signal, severity_counts}` | GC loop decision record |
### Discovery NDJSON Format
```jsonl
{"ts":"2026-03-08T10:00:00+08:00","worker":"IDEA-001","type":"idea","data":{"title":"API Gateway Pattern","angle":"Technical","description":"Centralized API gateway for microservice routing","assumption":"Services need unified entry point","impact":"Simplifies client integration"}}
{"ts":"2026-03-08T10:01:00+08:00","worker":"IDEA-001","type":"idea","data":{"title":"Event Sourcing Migration","angle":"Technical","description":"Adopt event sourcing for service state management","assumption":"Current state is hard to trace across services","impact":"Full audit trail and temporal queries"}}
{"ts":"2026-03-08T10:05:00+08:00","worker":"CHALLENGE-001","type":"critique","data":{"idea_title":"API Gateway Pattern","dimension":"feasibility","severity":"MEDIUM","challenge":"Single point of failure risk","rationale":"Requires HA design with circuit breakers"}}
{"ts":"2026-03-08T10:10:00+08:00","worker":"SYNTH-001","type":"theme","data":{"name":"Infrastructure Modernization","strength":8,"supporting_ideas":["API Gateway Pattern","Event Sourcing Migration"]}}
```
> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
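A minimal reader for this shared file might look like the following, deduplicating by the per-type keys from the table above (the helper itself is an assumption, not part of the skill):

```python
import json

# Dedup key per discovery type, as listed in the Discovery Types table.
DEDUP_KEYS = {
    "idea": "title",
    "critique": "idea_title",
    "theme": "name",
    "proposal": "title",
    "evaluation": "proposal_title",
    "gc_decision": "round",
}

def load_discoveries(path: str) -> dict:
    """Parse discoveries.ndjson; later records win for the same (type, key)."""
    latest = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            key = (rec["type"], rec["data"][DEDUP_KEYS[rec["type"]]])
            latest[key] = rec
    return latest
```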
---
## Cross-Mechanism Context Flow
| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |
---
## Validation Rules
| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Valid role | role in {ideator, challenger, synthesizer, evaluator, gc-controller} | "Invalid role: {role}" |
| GC round non-negative | gc_round >= 0 | "Invalid gc_round: {value}" |
| Cross-mechanism deps | Interactive→CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |
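The three dependency rules (unknown, self, circular) can be checked together with a Kahn-style topological sort. This sketch assumes tasks are dicts with `id` and a semicolon-separated `deps` string, matching the columns above:

```python
from collections import deque

def validate_deps(tasks: list[dict]) -> list[str]:
    """Return error strings for unknown, self, and circular dependencies."""
    ids = {t["id"] for t in tasks}
    indegree = {i: 0 for i in ids}
    dependents = {i: [] for i in ids}
    errors = []
    for t in tasks:
        for dep in filter(None, t.get("deps", "").split(";")):
            if dep not in ids:
                errors.append(f"Unknown dependency: {dep}")
            elif dep == t["id"]:
                errors.append(f"Self-dependency: {t['id']}")
            else:
                indegree[t["id"]] += 1
                dependents[dep].append(t["id"])
    # Kahn's algorithm: anything left with indegree > 0 sits on a cycle.
    queue = deque(i for i, d in indegree.items() if d == 0)
    seen = 0
    while queue:
        node = queue.popleft()
        seen += 1
        for nxt in dependents[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if seen < len(ids):
        cyclic = sorted(i for i, d in indegree.items() if d > 0)
        errors.append(f"Circular dependency detected involving: {cyclic}")
    return errors
```

The same traversal yields the `wave` column: a task's wave is 1 plus the maximum wave of its dependencies.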


@@ -0,0 +1,72 @@
# Pipeline Definitions — team-brainstorm
## Available Pipelines
### Quick Pipeline (3 beats, strictly serial)
```
IDEA-001 → CHALLENGE-001 → SYNTH-001
[ideator] [challenger] [synthesizer]
```
### Deep Pipeline (6 beats, Generator-Critic loop)
```
IDEA-001 → CHALLENGE-001 → IDEA-002(fix) → CHALLENGE-002 → SYNTH-001 → EVAL-001
```
GC loop check: if any critique severity >= HIGH → create IDEA-002(fix) → CHALLENGE-002 → SYNTH-001; else skip straight to SYNTH-001
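A sketch of this decision, assuming the challenger reports counts in the `CRITICAL:n HIGH:n MEDIUM:n LOW:n` string format used by earlier schema versions:

```python
def gc_signal(severity_summary: str) -> str:
    """Map a summary like 'CRITICAL:0 HIGH:2 MEDIUM:3 LOW:1' to a GC signal."""
    counts = {k: int(v) for k, v in
              (pair.split(":") for pair in severity_summary.split())}
    if counts.get("CRITICAL", 0) > 0 or counts.get("HIGH", 0) > 0:
        return "REVISION_NEEDED"
    return "CONVERGED"
```

The coordinator caps this loop at 2 rounds and then forces convergence to SYNTH regardless of signal.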
### Full Pipeline (7 tasks, fan-out parallel ideation + GC)
```
[IDEA-001 + IDEA-002 + IDEA-003](parallel) → CHALLENGE-001(batch) → IDEA-004(fix) → SYNTH-001 → EVAL-001
```
## Task Metadata Registry
| Task ID | Role | Phase | Dependencies | Description |
|---------|------|-------|-------------|-------------|
| IDEA-001 | ideator | generate | (none) | Multi-angle idea generation |
| IDEA-002 | ideator | generate | (none) | Parallel angle (Full pipeline only) |
| IDEA-003 | ideator | generate | (none) | Parallel angle (Full pipeline only) |
| CHALLENGE-001 | challenger | challenge | IDEA-001 (or all IDEA-*) | Devil's advocate critique and feasibility challenge |
| IDEA-004 | ideator | gc-fix | CHALLENGE-001 | Revision based on critique (GC loop, if triggered) |
| CHALLENGE-002 | challenger | gc-fix | IDEA-004 | Re-critique of revised ideas (GC loop round 2) |
| SYNTH-001 | synthesizer | synthesize | last CHALLENGE-* | Cross-idea integration, theme extraction, conflict resolution |
| EVAL-001 | evaluator | evaluate | SYNTH-001 | Scoring, ranking, priority recommendation, final selection |
## Checkpoints
| Trigger | Location | Behavior |
|---------|----------|----------|
| Generator-Critic loop | After CHALLENGE-* | If severity >= HIGH → create IDEA-fix task; else proceed to SYNTH |
| GC loop limit | Max 2 rounds | Exceeds limit → force convergence to SYNTH |
| Pipeline stall | No ready + no running | Check missing tasks, report to user |
## Completion Conditions
| Mode | Completion Condition |
|------|---------------------|
| quick | All 3 tasks completed |
| deep | All 6 tasks (+ any skipped GC tasks) completed |
| full | All 7 tasks (+ any skipped GC tasks) completed |
## Shared State (meta.json)
| Role | State Key |
|------|-----------|
| ideator | `generated_ideas` |
| challenger | `critique_insights` |
| synthesizer | `synthesis_themes` |
| evaluator | `evaluation_scores` |
## Message Types
| Role | Types |
|------|-------|
| coordinator | `pipeline_selected`, `gc_loop_trigger`, `task_unblocked`, `error`, `shutdown` |
| ideator | `ideas_ready`, `ideas_revised`, `error` |
| challenger | `critique_ready`, `error` |
| synthesizer | `synthesis_ready`, `error` |
| evaluator | `evaluation_ready`, `error` |


@@ -0,0 +1,86 @@
{
"team_name": "team-brainstorm",
"team_display_name": "Team Brainstorm",
"description": "Head brainstorming team with Generator-Critic loop, shared memory, and dynamic pipeline selection",
"version": "1.0.0",
"roles": {
"coordinator": {
"task_prefix": null,
"responsibility": "Topic clarification, complexity assessment, pipeline selection, convergence monitoring",
"message_types": ["pipeline_selected", "gc_loop_trigger", "task_unblocked", "error", "shutdown"]
},
"ideator": {
"task_prefix": "IDEA",
"responsibility": "Multi-angle idea generation, concept exploration, divergent thinking",
"message_types": ["ideas_ready", "ideas_revised", "error"]
},
"challenger": {
"task_prefix": "CHALLENGE",
"responsibility": "Devil's advocate, assumption challenging, feasibility questioning",
"message_types": ["critique_ready", "error"]
},
"synthesizer": {
"task_prefix": "SYNTH",
"responsibility": "Cross-idea integration, theme extraction, conflict resolution",
"message_types": ["synthesis_ready", "error"]
},
"evaluator": {
"task_prefix": "EVAL",
"responsibility": "Scoring and ranking, priority recommendation, final selection",
"message_types": ["evaluation_ready", "error"]
}
},
"pipelines": {
"quick": {
"description": "Simple topic: generate → challenge → synthesize",
"task_chain": ["IDEA-001", "CHALLENGE-001", "SYNTH-001"],
"gc_loops": 0
},
"deep": {
"description": "Complex topic with Generator-Critic loop (max 2 rounds)",
"task_chain": ["IDEA-001", "CHALLENGE-001", "IDEA-002", "CHALLENGE-002", "SYNTH-001", "EVAL-001"],
"gc_loops": 2
},
"full": {
"description": "Parallel fan-out ideation + Generator-Critic + evaluation",
"task_chain": ["IDEA-001", "IDEA-002", "IDEA-003", "CHALLENGE-001", "IDEA-004", "SYNTH-001", "EVAL-001"],
"gc_loops": 1,
"parallel_groups": [["IDEA-001", "IDEA-002", "IDEA-003"]]
}
},
"innovation_patterns": {
"generator_critic": {
"generator": "ideator",
"critic": "challenger",
"max_rounds": 2,
"convergence_trigger": "critique.severity < HIGH"
},
"shared_memory": {
"file": "shared-memory.json",
"fields": {
"ideator": "generated_ideas",
"challenger": "critique_insights",
"synthesizer": "synthesis_themes",
"evaluator": "evaluation_scores"
}
},
"dynamic_pipeline": {
"selector": "coordinator",
"criteria": "topic_complexity + scope + time_constraint"
}
},
"collaboration_patterns": ["CP-1", "CP-3", "CP-7"],
"session_dirs": {
"base": ".workflow/.team/BRS-{slug}-{YYYY-MM-DD}/",
"ideas": "ideas/",
"critiques": "critiques/",
"synthesis": "synthesis/",
"evaluation": "evaluation/",
"messages": ".workflow/.team-msg/{team-name}/"
}
}