feat: migrate all codex team skills from spawn_agents_on_csv to spawn_agent + wait_agent architecture

- Delete 21 old team skill directories using CSV-wave pipeline pattern (~100+ files)
- Delete old team-lifecycle (v3) and team-planex-v2
- Create generic team-worker.toml and team-supervisor.toml (replacing tlv4-specific TOMLs)
- Convert 19 team skills from Claude Code format (Agent/SendMessage/TaskCreate)
  to Codex format (spawn_agent/wait_agent/tasks.json/request_user_input)
- Update team-lifecycle-v4 to use generic agent types (team_worker/team_supervisor)
- Convert all coordinator role files: dispatch.md, monitor.md, role.md
- Convert all worker role files: remove run_in_background, fix Bash syntax
- Convert all specs/pipelines.md references
- Final state: 20 team skills, 217 .md files, zero Claude Code API residuals

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: catlog22
Date: 2026-03-24 16:54:48 +08:00
Parent: 54283e5dbb
Commit: 1e560ab8e8
334 changed files with 28996 additions and 35516 deletions


@@ -0,0 +1,90 @@
---
role: analyst
prefix: ANALYZE
inner_loop: false
additional_prefixes: [ANALYZE-fix]
message_types:
success: analysis_ready
error: error
---
# Deep Analyst
Perform deep multi-perspective analysis on exploration results via CLI tools. Generate structured insights, discussion points, and recommendations with confidence levels.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| Exploration results | `<session>/explorations/*.json` | Yes |
1. Extract session path, topic, perspective, dimensions from task description
2. Detect direction-fix mode: `type:\s*direction-fix` with `adjusted_focus:\s*(.+)`
3. Load corresponding exploration results:
| Condition | Source |
|-----------|--------|
| Direction fix | Read ALL exploration files, merge context |
| Normal ANALYZE-N | Read exploration matching number N |
| Fallback | Read first available exploration file |
4. Select CLI tool by perspective:
| Perspective | CLI Tool | Rule Template |
|-------------|----------|---------------|
| technical | gemini | analysis-analyze-code-patterns |
| architectural | claude | analysis-review-architecture |
| business | codex | analysis-analyze-code-patterns |
| domain_expert | gemini | analysis-analyze-code-patterns |
| direction-fix (any) | gemini | analysis-diagnose-bug-root-cause |
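The exploration-loading rules above can be sketched as a small selection helper. This is a hypothetical illustration (not part of the skill runtime): it takes the task description and the file names present in `<session>/explorations/` and returns which files the analyst should read.

```javascript
// Apply the loading rules: direction-fix merges ALL explorations,
// ANALYZE-N reads the matching exploration, otherwise fall back to
// the first available file.
function selectExplorations(taskDescription, files) {
  const sorted = [...files].sort();
  // Direction-fix mode: merge context from every exploration file
  if (/type:\s*direction-fix/.test(taskDescription)) return sorted;
  // Normal ANALYZE-N: read the exploration matching number N
  const m = taskDescription.match(/ANALYZE-(\d+)/);
  if (m) {
    const match = sorted.find(f => f.includes(`exploration-${m[1]}`));
    if (match) return [match];
  }
  // Fallback: first available exploration file
  return sorted.slice(0, 1);
}
```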
## Phase 3: Deep Analysis via CLI
Build analysis prompt with exploration context:
```
PURPOSE: <Normal: "Deep analysis of '<topic>' from <perspective> perspective">
<Fix: "Supplementary analysis with adjusted focus on '<adjusted_focus>'">
Success: Actionable insights with confidence levels and evidence references
PRIOR EXPLORATION CONTEXT:
- Key files: <top 5-8 files from exploration>
- Patterns found: <top 3-5 patterns>
- Key findings: <top 3-5 findings>
TASK:
- <perspective-specific analysis tasks>
- Generate structured findings with confidence levels (high/medium/low)
- Identify discussion points requiring user input
- List open questions needing further exploration
MODE: analysis
CONTEXT: @**/* | Topic: <topic>
EXPECTED: Structured analysis with: key_insights, key_findings, discussion_points, open_questions, recommendations
CONSTRAINTS: Focus on <perspective> perspective | <dimensions>
```
Execute: `ccw cli -p "<prompt>" --tool <cli-tool> --mode analysis --rule <rule>`
## Phase 4: Result Aggregation
Write analysis output to `<session>/analyses/analysis-<num>.json`:
```json
{
"perspective": "<perspective>",
"dimensions": ["<dim1>", "<dim2>"],
"is_direction_fix": false,
"key_insights": [{"insight": "...", "confidence": "high", "evidence": "file:line"}],
"key_findings": [{"finding": "...", "file_ref": "...", "impact": "..."}],
"discussion_points": ["..."],
"open_questions": ["..."],
"recommendations": [{"action": "...", "rationale": "...", "priority": "high"}],
"_metadata": {"cli_tool": "...", "cli_rule": "...", "perspective": "...", "timestamp": "..."}
}
```
Update `<session>/wisdom/.msg/meta.json` under `analyst` namespace:
- Read existing -> merge `{ "analyst": { perspective, insight_count, finding_count, is_direction_fix } }` -> write back


@@ -0,0 +1,73 @@
# Analyze Task
Parse topic -> detect pipeline mode and perspectives -> output analysis config.
**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.
## Signal Detection
### Pipeline Mode Detection
Parse `--mode` from arguments first. If not specified, auto-detect from topic description:
| Condition | Mode | Depth |
|-----------|------|-------|
| `--mode=quick` or topic contains "quick/overview/fast" | Quick | 1 |
| `--mode=deep` or topic contains "deep/thorough/detailed/comprehensive" | Deep | N (from perspectives) |
| Default (no match) | Standard | N (from perspectives) |
### Dimension Detection
Scan topic keywords to select analysis perspectives:
| Dimension | Keywords |
|-----------|----------|
| architecture | architecture, design, structure |
| implementation | implement, code, source |
| performance | performance, optimize, speed |
| security | security, auth, vulnerability |
| concept | concept, theory, principle |
| comparison | compare, vs, difference |
| decision | decision, choice, tradeoff |
**Depth** = number of selected perspectives. Quick mode always uses depth=1.
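The mode and dimension detection above can be sketched as one helper. The keyword lists mirror the tables; the function and constant names are illustrative, not the skill's actual identifiers.

```javascript
// Keyword table from Dimension Detection, verbatim.
const DIMENSION_KEYWORDS = {
  architecture: ["architecture", "design", "structure"],
  implementation: ["implement", "code", "source"],
  performance: ["performance", "optimize", "speed"],
  security: ["security", "auth", "vulnerability"],
  concept: ["concept", "theory", "principle"],
  comparison: ["compare", "vs", "difference"],
  decision: ["decision", "choice", "tradeoff"],
};

// modeArg wins if given (--mode); otherwise auto-detect from the topic.
function detectConfig(topic, modeArg) {
  const t = topic.toLowerCase();
  const dimensions = Object.entries(DIMENSION_KEYWORDS)
    .filter(([, kws]) => kws.some(k => t.includes(k)))
    .map(([dim]) => dim);
  const mode = modeArg
    || (/\b(quick|overview|fast)\b/.test(t) ? "quick"
      : /\b(deep|thorough|detailed|comprehensive)\b/.test(t) ? "deep"
      : "standard");
  // Depth = number of selected perspectives; quick mode is always 1.
  const depth = mode === "quick" ? 1 : Math.max(dimensions.length, 1);
  return { pipeline_mode: mode, depth, dimensions };
}
```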
## Pipeline Mode Rules
| Mode | Task Structure |
|------|----------------|
| quick | EXPLORE-001 -> ANALYZE-001 -> SYNTH-001 (serial, depth=1) |
| standard | EXPLORE-001..N (parallel) -> ANALYZE-001..N (parallel) -> DISCUSS-001 -> SYNTH-001 |
| deep | Same as standard but SYNTH-001 omitted (created dynamically after discussion loop) |
## Output
Write analysis config to coordinator state (not a file), to be used by dispatch.md:
```json
{
"pipeline_mode": "<quick|standard|deep>",
"depth": <number>,
"perspectives": ["<perspective1>", "<perspective2>"],
"topic": "<original topic>",
"dimensions": ["<dim1>", "<dim2>"]
}
```
## Complexity Scoring
| Factor | Points |
|--------|--------|
| Per perspective | +1 |
| Deep mode | +2 |
| Cross-domain (3+ perspectives) | +1 |
Resulting mode (when not explicitly set): score 1-3 Quick, 4-6 Standard, 7+ Deep
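The scoring table maps directly to a small function. A minimal sketch, with illustrative names:

```javascript
// +1 per perspective, +2 for deep mode, +1 for cross-domain (3+
// perspectives); thresholds 1-3 / 4-6 / 7+ pick the pipeline mode.
function scoreComplexity(perspectives, wantsDeep) {
  let score = perspectives.length;          // +1 per perspective
  if (wantsDeep) score += 2;                // deep mode
  if (perspectives.length >= 3) score += 1; // cross-domain
  return score <= 3 ? "quick" : score <= 6 ? "standard" : "deep";
}
```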
## Discussion Loop Configuration
| Mode | Max Discussion Rounds |
|------|----------------------|
| quick | 0 |
| standard | 1 |
| deep | 5 |


@@ -0,0 +1,225 @@
# Command: Dispatch
Create the analysis task chain with correct dependencies and structured task descriptions. Supports Quick, Standard, and Deep pipeline modes.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| User topic | From coordinator Phase 1 | Yes |
| Session folder | From coordinator Phase 2 | Yes |
| Pipeline mode | From coordinator Phase 1 | Yes |
| Perspectives | From coordinator Phase 1 (dimension detection) | Yes |
1. Load topic, pipeline mode, and selected perspectives from coordinator state
2. Load pipeline stage definitions from SKILL.md Task Metadata Registry
3. Determine depth = number of selected perspectives (Quick: always 1)
## Phase 3: Task Chain Creation
### Task Entry Template
Each task in tasks.json `tasks` object:
```json
{
"<TASK-ID>": {
"title": "<concise title>",
"description": "PURPOSE: <what this task achieves> | Success: <measurable completion criteria>\nTASK:\n - <step 1: specific action>\n - <step 2: specific action>\n - <step 3: specific action>\nCONTEXT:\n - Session: <session-folder>\n - Topic: <analysis-topic>\n - Perspective: <perspective or 'all'>\n - Upstream artifacts: <artifact-1>, <artifact-2>\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <deliverable path> + <quality criteria>\nCONSTRAINTS: <scope limits, focus areas>\n---\nInnerLoop: false",
"role": "<role-name>",
"prefix": "<PREFIX>",
"deps": ["<dependency-list>"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
### Mode Router
| Mode | Action |
|------|--------|
| `quick` | Create 3 tasks: EXPLORE-001 -> ANALYZE-001 -> SYNTH-001 |
| `standard` | Create N explorers + N analysts + DISCUSS-001 + SYNTH-001 |
| `deep` | Same as standard but omit SYNTH-001 (created after discussion loop) |
---
### Quick Mode Task Chain
**EXPLORE-001** (explorer):
```json
{
"EXPLORE-001": {
"title": "Explore codebase structure for analysis topic",
"description": "PURPOSE: Explore codebase structure for analysis topic | Success: Key files, patterns, and findings collected\nTASK:\n - Detect project structure and relevant modules\n - Search for code related to analysis topic\n - Collect file references, patterns, and key findings\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Perspective: general\n - Dimensions: <dimensions>\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/explorations/exploration-001.json | Structured exploration with files and findings\nCONSTRAINTS: Focus on <topic> scope\n---\nInnerLoop: false",
"role": "explorer",
"prefix": "EXPLORE",
"deps": [],
"status": "pending",
"findings": "",
"error": ""
}
}
```
**ANALYZE-001** (analyst):
```json
{
"ANALYZE-001": {
"title": "Deep analysis from technical perspective",
"description": "PURPOSE: Deep analysis of topic from technical perspective | Success: Actionable insights with confidence levels\nTASK:\n - Load exploration results and build analysis context\n - Analyze from technical perspective across selected dimensions\n - Generate insights, findings, discussion points, recommendations\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Perspective: technical\n - Dimensions: <dimensions>\n - Upstream artifacts: explorations/exploration-001.json\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/analyses/analysis-001.json | Structured analysis with evidence\nCONSTRAINTS: Focus on technical perspective | <dimensions>\n---\nInnerLoop: false",
"role": "analyst",
"prefix": "ANALYZE",
"deps": ["EXPLORE-001"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
**SYNTH-001** (synthesizer):
```json
{
"SYNTH-001": {
"title": "Integrate analysis into final conclusions",
"description": "PURPOSE: Integrate analysis into final conclusions | Success: Executive summary with recommendations\nTASK:\n - Load all exploration, analysis, and discussion artifacts\n - Extract themes, consolidate evidence, prioritize recommendations\n - Write conclusions and update discussion.md\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Upstream artifacts: explorations/*.json, analyses/*.json\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/conclusions.json + discussion.md update | Final conclusions with confidence levels\nCONSTRAINTS: Pure integration, no new exploration\n---\nInnerLoop: false",
"role": "synthesizer",
"prefix": "SYNTH",
"deps": ["ANALYZE-001"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
---
### Standard Mode Task Chain
Create tasks in dependency order with parallel exploration and analysis windows:
**EXPLORE-001..N** (explorer, parallel): One per perspective. Each receives unique agent name (explorer-1, explorer-2, ...) for task discovery matching.
```json
{
"EXPLORE-<NNN>": {
"title": "Explore codebase from <perspective> angle",
"description": "PURPOSE: Explore codebase from <perspective> angle | Success: Perspective-specific files and patterns collected\nTASK:\n - Search codebase from <perspective> perspective\n - Collect files, patterns, findings relevant to this angle\n - Generate questions for downstream analysis\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Perspective: <perspective>\n - Dimensions: <dimensions>\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/explorations/exploration-<NNN>.json\nCONSTRAINTS: Focus on <perspective> angle\n---\nInnerLoop: false",
"role": "explorer",
"prefix": "EXPLORE",
"deps": [],
"status": "pending",
"findings": "",
"error": ""
}
}
```
**ANALYZE-001..N** (analyst, parallel): One per perspective. Each blocked by its corresponding EXPLORE-N.
```json
{
"ANALYZE-<NNN>": {
"title": "Deep analysis from <perspective> perspective",
"description": "PURPOSE: Deep analysis from <perspective> perspective | Success: Insights with confidence and evidence\nTASK:\n - Load exploration-<NNN> results\n - Analyze from <perspective> perspective\n - Generate insights, discussion points, open questions\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Perspective: <perspective>\n - Dimensions: <dimensions>\n - Upstream artifacts: explorations/exploration-<NNN>.json\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/analyses/analysis-<NNN>.json\nCONSTRAINTS: <perspective> perspective | <dimensions>\n---\nInnerLoop: false",
"role": "analyst",
"prefix": "ANALYZE",
"deps": ["EXPLORE-<NNN>"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
**DISCUSS-001** (discussant): Blocked by all ANALYZE tasks.
```json
{
"DISCUSS-001": {
"title": "Process analysis results into discussion summary",
"description": "PURPOSE: Process analysis results into discussion summary | Success: Convergent themes and discussion points identified\nTASK:\n - Aggregate all analysis results across perspectives\n - Identify convergent themes and conflicting views\n - Generate top discussion points and open questions\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Round: 1\n - Type: initial\n - Upstream artifacts: analyses/*.json\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/discussions/discussion-round-001.json + discussion.md update\nCONSTRAINTS: Aggregate only, no new exploration\n---\nInnerLoop: false",
"role": "discussant",
"prefix": "DISCUSS",
"deps": ["ANALYZE-001", "...", "ANALYZE-<N>"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
**SYNTH-001** (synthesizer): Blocked by DISCUSS-001.
```json
{
"SYNTH-001": {
"title": "Cross-perspective integration into final conclusions",
"description": "PURPOSE: Cross-perspective integration into final conclusions | Success: Executive summary with prioritized recommendations\n...same as Quick mode SYNTH-001 but with discussion artifacts...",
"role": "synthesizer",
"prefix": "SYNTH",
"deps": ["DISCUSS-001"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
---
### Deep Mode Task Chain
Same as Standard mode, but **omit SYNTH-001**. It will be created dynamically after the discussion loop completes, with deps on the last DISCUSS-N task.
---
## Discussion Loop Task Creation
Dynamic tasks added to tasks.json during discussion loop:
**DISCUSS-N** (subsequent rounds):
```json
{
"DISCUSS-<NNN>": {
"title": "Process discussion round <N>",
"description": "PURPOSE: Process discussion round <N> | Success: Updated understanding with user feedback integrated\nTASK:\n - Process user feedback: <feedback>\n - Execute <type> discussion strategy\n - Update discussion timeline\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Round: <N>\n - Type: <deepen|direction-adjusted|specific-questions>\n - User feedback: <feedback>\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/discussions/discussion-round-<NNN>.json\n---\nInnerLoop: false",
"role": "discussant",
"prefix": "DISCUSS",
"deps": [],
"status": "pending",
"findings": "",
"error": ""
}
}
```
**ANALYZE-fix-N** (direction adjustment):
```json
{
"ANALYZE-fix-<N>": {
"title": "Supplementary analysis with adjusted focus",
"description": "PURPOSE: Supplementary analysis with adjusted focus | Success: New insights from adjusted direction\nTASK:\n - Re-analyze from adjusted perspective: <adjusted_focus>\n - Build on previous exploration findings\n - Generate updated discussion points\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Type: direction-fix\n - Adjusted focus: <adjusted_focus>\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/analyses/analysis-fix-<N>.json\n---\nInnerLoop: false",
"role": "analyst",
"prefix": "ANALYZE",
"deps": [],
"status": "pending",
"findings": "",
"error": ""
}
}
```
## Phase 4: Validation
Verify task chain integrity:
| Check | Method | Expected |
|-------|--------|----------|
| Task count correct | tasks.json count | quick: 3, standard: 2N+2, deep: 2N+1 |
| Dependencies correct | Trace deps | Acyclic, correct ordering |
| All descriptions have PURPOSE/TASK/CONTEXT/EXPECTED | Pattern check | All present |
| Session path in every task | Check CONTEXT | Session: <folder> present |
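The checks above can be sketched as one validation pass (hypothetical helper): expected task count per mode, dangling dependencies, cycles, and required description sections.

```javascript
// Returns a list of human-readable validation errors; empty = valid.
function validateChain(tasks, mode, depth) {
  const ids = Object.keys(tasks);
  const expected = { quick: 3, standard: 2 * depth + 2, deep: 2 * depth + 1 }[mode];
  const errors = [];
  if (ids.length !== expected) errors.push(`task count ${ids.length} != ${expected}`);
  for (const [id, t] of Object.entries(tasks)) {
    for (const d of t.deps) if (!tasks[d]) errors.push(`${id}: missing dep ${d}`);
    for (const sec of ["PURPOSE", "TASK", "CONTEXT", "EXPECTED"])
      if (!t.description.includes(sec)) errors.push(`${id}: missing ${sec}`);
  }
  // Cycle detection via DFS coloring (1 = visiting, 2 = done).
  const state = {};
  const visit = id => {
    if (state[id] === 1) errors.push(`cycle at ${id}`);
    if (state[id]) return;
    state[id] = 1;
    for (const d of tasks[id]?.deps || []) visit(d);
    state[id] = 2;
  };
  ids.forEach(visit);
  return errors;
}
```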


@@ -0,0 +1,327 @@
# Command: Monitor
Handle all coordinator monitoring events: status checks, pipeline advancement, discussion loop control, and completion. Uses spawn_agent + wait_agent for synchronous coordination.
## Constants
| Key | Value |
|-----|-------|
| WORKER_AGENT | team_worker |
| MAX_DISCUSSION_ROUNDS_QUICK | 0 |
| MAX_DISCUSSION_ROUNDS_STANDARD | 1 |
| MAX_DISCUSSION_ROUNDS_DEEP | 5 |
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session state | tasks.json | Yes |
| Trigger event | From Entry Router detection | Yes |
| Pipeline mode | From tasks.json `pipeline_mode` | Yes |
| Discussion round | From tasks.json `discussion_round` | Yes |
1. Load tasks.json for current state, `pipeline_mode`, `discussion_round`
2. Read tasks from tasks.json to get current task statuses
3. Identify trigger event type from Entry Router
4. Compute max discussion rounds from pipeline mode:
```
MAX_ROUNDS = pipeline_mode === 'deep' ? 5
: pipeline_mode === 'standard' ? 1
: 0
```
## Phase 3: Event Handlers
### handleCallback
Triggered when a worker completes (wait_agent returns).
1. Determine the role from the completed task's prefix:
| Task Prefix | Role |
|-------------|------|
| `EXPLORE-*` | explorer |
| `ANALYZE-*` | analyst |
| `DISCUSS-*` | discussant |
| `SYNTH-*` | synthesizer |
2. Mark task completed in tasks.json:
```
state.tasks[taskId].status = 'completed'
```
3. Record completion in session state via team_msg
4. **Role-specific post-completion logic**:
| Completed Role | Pipeline Mode | Post-Completion Action |
|---------------|---------------|------------------------|
| explorer | all | Log: exploration ready. Proceed to handleSpawnNext |
| analyst | all | Log: analysis ready. Proceed to handleSpawnNext |
| discussant | all | **Discussion feedback gate** (see below) |
| synthesizer | all | Proceed to handleComplete |
5. **Discussion Feedback Gate** (when discussant completes):
When a DISCUSS-* task completes, the coordinator collects user feedback BEFORE spawning the next task.
```
// Read current discussion_round from tasks.json
discussion_round = state.discussion_round || 0
discussion_round++
// Update tasks.json
state.discussion_round = discussion_round
// Check if discussion loop applies
IF pipeline_mode === 'quick':
// No discussion in quick mode -- proceed to handleSpawnNext (SYNTH)
-> handleSpawnNext
ELSE IF discussion_round >= MAX_ROUNDS:
// Reached max rounds -- force proceed to synthesis
Log: "Max discussion rounds reached, proceeding to synthesis"
IF no SYNTH-001 task exists in tasks.json:
Create SYNTH-001 task in tasks.json with deps on last DISCUSS task
-> handleSpawnNext
ELSE:
// Collect user feedback
request_user_input({
questions: [{
question: "Discussion round <N> complete. What next?",
header: "Discussion Feedback",
multiSelect: false,
options: [
{ label: "Continue deeper", description: "Current direction is good, go deeper" },
{ label: "Adjust direction", description: "Shift analysis focus" },
{ label: "Done", description: "Sufficient depth, proceed to synthesis" }
]
}]
})
```
6. **Feedback handling** (after request_user_input returns):
| Feedback | Action |
|----------|--------|
| "Continue deeper" | Create new DISCUSS-`<N+1>` task in tasks.json (pending, no deps). Record decision in discussion.md. Proceed to handleSpawnNext |
| "Adjust direction" | request_user_input for new focus. Create ANALYZE-fix-`<N>` task in tasks.json (pending). Create DISCUSS-`<N+1>` task (pending, deps: [ANALYZE-fix-`<N>`]). Record direction change in discussion.md. Proceed to handleSpawnNext |
| "Done" | Check if SYNTH-001 already exists in tasks.json: if yes, ensure deps is updated to reference last DISCUSS task; if no, create SYNTH-001 (pending, deps: [last DISCUSS]). Record decision in discussion.md. Proceed to handleSpawnNext |
**Dynamic task creation** -- add entries to tasks.json `tasks` object:
DISCUSS-N (subsequent round):
```json
{
"DISCUSS-<NNN>": {
"title": "Process discussion round <N>",
"description": "PURPOSE: Process discussion round <N> | Success: Updated understanding\nTASK:\n - Process previous round results\n - Execute <type> discussion strategy\n - Update discussion timeline\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Round: <N>\n - Type: <deepen|direction-adjusted|specific-questions>\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/discussions/discussion-round-<NNN>.json\n---\nInnerLoop: false",
"role": "discussant",
"prefix": "DISCUSS",
"deps": [],
"status": "pending",
"findings": "",
"error": ""
}
}
```
ANALYZE-fix-N (direction adjustment):
```json
{
"ANALYZE-fix-<N>": {
"title": "Supplementary analysis with adjusted focus",
"description": "PURPOSE: Supplementary analysis with adjusted focus | Success: New insights from adjusted direction\nTASK:\n - Re-analyze from adjusted perspective: <adjusted_focus>\n - Build on previous exploration findings\n - Generate updated discussion points\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Type: direction-fix\n - Adjusted focus: <adjusted_focus>\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/analyses/analysis-fix-<N>.json\n---\nInnerLoop: false",
"role": "analyst",
"prefix": "ANALYZE",
"deps": [],
"status": "pending",
"findings": "",
"error": ""
}
}
```
SYNTH-001 (created dynamically -- check existence first):
```javascript
// Guard: only create if SYNTH-001 doesn't exist yet in tasks.json
if (!state.tasks['SYNTH-001']) {
state.tasks['SYNTH-001'] = {
title: "Integrate all analysis into final conclusions",
description: "PURPOSE: Integrate all analysis into final conclusions | Success: Executive summary with recommendations...",
role: "synthesizer",
prefix: "SYNTH",
deps: ["<last-DISCUSS-task-id>"],
status: "pending",
findings: "",
error: ""
}
} else {
// Always update deps to reference the last DISCUSS task
state.tasks['SYNTH-001'].deps = ["<last-DISCUSS-task-id>"]
}
```
7. Record user feedback to decision_trail via team_msg:
```
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
type: "state_update",
data: { decision_trail_entry: {
round: discussion_round,
decision: feedback,
context: "User feedback at discussion round N",
timestamp: current ISO timestamp
}}
})
```
8. Proceed to handleSpawnNext
### handleSpawnNext
Find and spawn the next ready tasks.
1. Read tasks.json, find tasks where:
- Status is "pending"
- All deps tasks have status "completed"
2. For each ready task, determine role from task prefix:
| Task Prefix | Role | Role Spec |
|-------------|------|-----------|
| `EXPLORE-*` | explorer | `<skill_root>/roles/explorer/role.md` |
| `ANALYZE-*` | analyst | `<skill_root>/roles/analyst/role.md` |
| `DISCUSS-*` | discussant | `<skill_root>/roles/discussant/role.md` |
| `SYNTH-*` | synthesizer | `<skill_root>/roles/synthesizer/role.md` |
3. Spawn team_worker for each ready task:
```javascript
// 1) Update status in tasks.json
state.tasks[taskId].status = 'in_progress'
// 2) Spawn worker
const agentId = spawn_agent({
agent_type: "team_worker",
items: [
{ type: "text", text: `## Role Assignment
role: ${role}
role_spec: ${skillRoot}/roles/${role}/role.md
session: ${sessionFolder}
session_id: ${sessionId}
requirement: ${taskDescription}
agent_name: ${agentName}
inner_loop: false` },
{ type: "text", text: `## Current Task
- Task ID: ${taskId}
- Task: ${taskSubject}
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery, owner=${agentName}) -> role-spec Phase 2-4 -> built-in Phase 5 (report).` }
]
})
// 3) Track agent
state.active_agents[taskId] = { agentId, role, started_at: now }
```
After spawning all ready tasks:
```javascript
// 4) Batch wait for all spawned workers
const agentIds = Object.values(state.active_agents)
.map(a => a.agentId)
wait_agent({ ids: agentIds })
// 5) Collect results and update tasks.json
for (const [taskId, agent] of Object.entries(state.active_agents)) {
state.tasks[taskId].status = 'completed'
delete state.active_agents[taskId]
}
```
4. **Parallel spawn rules**:
| Mode | Stage | Spawn Behavior |
|------|-------|---------------|
| quick | All stages | One worker at a time (serial pipeline) |
| standard/deep | EXPLORE phase | Spawn all EXPLORE-001..N in parallel, wait_agent for all |
| standard/deep | ANALYZE phase | Spawn all ANALYZE-001..N in parallel, wait_agent for all |
| all | DISCUSS phase | One discussant at a time |
| all | SYNTH phase | One synthesizer |
5. **STOP** after processing -- wait for next event
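The readiness filter from step 1 and the prefix-to-role mapping from step 2 can be sketched as two small helpers (illustrative names, assuming the tasks.json shape shown above):

```javascript
// A task is ready when it is pending and every dependency has completed.
function findReadyTasks(tasks) {
  return Object.entries(tasks)
    .filter(([, t]) => t.status === "pending"
      && t.deps.every(d => tasks[d]?.status === "completed"))
    .map(([id]) => id);
}

// Role derived from the task prefix, per the table above.
// "ANALYZE-fix-2" still maps to analyst via its leading segment.
function roleFor(taskId) {
  const prefix = taskId.split("-")[0];
  return { EXPLORE: "explorer", ANALYZE: "analyst",
           DISCUSS: "discussant", SYNTH: "synthesizer" }[prefix];
}
```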
### handleCheck
Output current pipeline status from tasks.json without advancing.
```
Pipeline Status (<mode> mode):
[DONE] EXPLORE-001 (explorer) -> exploration-001.json
[DONE] EXPLORE-002 (explorer) -> exploration-002.json
[DONE] ANALYZE-001 (analyst) -> analysis-001.json
[RUN] ANALYZE-002 (analyst) -> analyzing...
[WAIT] DISCUSS-001 (discussant) -> blocked by ANALYZE-002
[----] SYNTH-001 (synthesizer) -> blocked by DISCUSS-001
Discussion Rounds: 0/<max>
Pipeline Mode: <mode>
Session: <session-id>
```
Output status -- do NOT advance pipeline.
### handleResume
Resume pipeline after user pause or interruption.
1. Audit tasks.json for inconsistencies:
- Tasks stuck in "in_progress" -> reset to "pending"
- Tasks with completed deps but still "pending" -> include in spawn list
2. Proceed to handleSpawnNext
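The audit step can be sketched as follows (hypothetical helper): interrupted workers are reset to pending, and any pending task whose deps are all completed is queued for respawn.

```javascript
// Mutates tasks in place (as the coordinator would before writing
// tasks.json back) and returns the IDs to hand to handleSpawnNext.
function auditTasks(tasks) {
  const respawn = [];
  for (const [id, t] of Object.entries(tasks)) {
    if (t.status === "in_progress") t.status = "pending"; // worker was interrupted
    if (t.status === "pending"
        && t.deps.every(d => tasks[d]?.status === "completed")) {
      respawn.push(id); // unblocked, include in spawn list
    }
  }
  return respawn;
}
```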
### handleComplete
Triggered when all pipeline tasks are completed.
**Completion check**:
| Mode | Completion Condition |
|------|---------------------|
| quick | EXPLORE-001 + ANALYZE-001 + SYNTH-001 all completed |
| standard | All EXPLORE + ANALYZE + DISCUSS-001 + SYNTH-001 completed |
| deep | All EXPLORE + ANALYZE + all DISCUSS-N + SYNTH-001 completed |
1. Verify all tasks completed in tasks.json. If any not completed, return to handleSpawnNext
2. If all completed, **inline-execute coordinator Phase 5** (report + completion action). Do NOT STOP here -- continue directly into Phase 5 within the same turn.
## Phase 4: State Persistence
After every handler execution **except handleComplete**:
1. Update tasks.json with current state:
- `discussion_round`: current round count
- `active_agents`: list of in-progress agents
2. Verify task list consistency (no orphan tasks, no broken dependencies)
3. **STOP** and wait for next event
> **handleComplete exception**: handleComplete does NOT STOP -- it transitions directly to coordinator Phase 5.
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Worker spawn fails | Retry once. If still fails, report to user via request_user_input: retry / skip / abort |
| Discussion loop exceeds max rounds | Force create SYNTH-001, proceed to synthesis |
| Synthesis fails | Report partial results from analyses and discussions |
| Pipeline stall (no ready + no running) | Check deps chains, report blockage to user |
| Missing task artifacts | Log warning, continue with available data |


@@ -0,0 +1,223 @@
# Coordinator - Ultra Analyze Team
**Role**: coordinator
**Type**: Orchestrator
**Team**: ultra-analyze
Orchestrates the analysis pipeline: topic clarification, pipeline mode selection, task dispatch, discussion loop management, and final synthesis. Spawns team_worker agents for all worker roles.
## Boundaries
### MUST
- Use `team_worker` agent type for all worker spawns (NOT `general-purpose`)
- Follow Command Execution Protocol for dispatch and monitor commands
- Respect pipeline stage dependencies (deps)
- Stop after spawning workers -- wait for results via wait_agent
- Handle discussion loop with max 5 rounds (Deep mode)
- Execute completion action in Phase 5
### MUST NOT
- Implement domain logic (exploring, analyzing, discussing, synthesizing) -- workers handle this
- Spawn workers without creating tasks first
- Skip checkpoints when configured
- Force-advance pipeline past failed stages
- Directly call cli-explore-agent, CLI analysis tools, or execute codebase exploration
---
## Command Execution Protocol
When coordinator needs to execute a command (dispatch, monitor):
1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** -- NOT separate agents or subprocesses
4. **Execute synchronously** -- complete the command workflow before proceeding
---
## Entry Router
When coordinator is invoked, detect invocation type:
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Status check | Arguments contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume (monitor.md) |
| Pipeline complete | All tasks have status "completed" | -> handleComplete (monitor.md) |
| Interrupted session | Active/paused session exists | -> Phase 0 |
| New session | None of above | -> Phase 1 |
For check/resume/complete: load `@commands/monitor.md` and execute matched handler, then STOP.
### Router Implementation
1. **Load session context** (if exists):
- Scan `.workflow/.team/UAN-*/.msg/meta.json` for active/paused sessions
- If found, extract session folder path, status, and `pipeline_mode`
2. **Parse $ARGUMENTS** for detection keywords:
- Check for "check", "status", "resume", "continue" keywords
3. **Route to handler**:
- For monitor handlers: Read `commands/monitor.md`, execute matched handler, STOP
- For Phase 0: Execute Session Resume Check below
- For Phase 1: Execute Topic Understanding below
---
## Phase 0: Session Resume Check
Triggered when an active/paused session is detected on coordinator entry.
1. Load tasks.json from detected session folder
2. Read tasks from tasks.json
3. Reconcile session state vs task status:
| Task Status | Session Expects | Action |
|-------------|----------------|--------|
| in_progress | Should be running | Reset to pending (worker was interrupted) |
| completed | Already tracked | Skip |
| pending + unblocked | Ready to run | Include in spawn list |
4. Spawn workers for ready tasks -> Phase 4 coordination loop
---
## Phase 1: Topic Understanding & Requirement Clarification
TEXT-LEVEL ONLY. No source code reading.
1. Parse user task description from $ARGUMENTS
2. Extract explicit settings: `--mode`, scope, focus areas
3. Delegate to `@commands/analyze.md` for signal detection and pipeline mode selection
4. **Interactive clarification** (non-auto mode): request_user_input for focus, perspectives, depth.
---
## Phase 2: Create Session + Initialize
1. Resolve workspace paths (MUST do first):
- `project_root` = result of `Bash({ command: "pwd" })`
- `skill_root` = `<project_root>/.codex/skills/team-ultra-analyze`
2. Generate session ID: `UAN-{slug}-{YYYY-MM-DD}`
3. Create session folder structure:
```
.workflow/.team/UAN-{slug}-{date}/
+-- .msg/messages.jsonl
+-- .msg/meta.json
+-- discussion.md
+-- explorations/
+-- analyses/
+-- discussions/
+-- wisdom/
    +-- learnings.md, decisions.md, conventions.md, issues.md
```
4. Write initial tasks.json:
```json
{
"session_id": "<id>",
"pipeline_mode": "<quick|standard|deep>",
"topic": "<topic>",
"perspectives": ["<perspective1>", "<perspective2>"],
"created_at": "<ISO timestamp>",
"discussion_round": 0,
"active_agents": {},
"tasks": {}
}
```
5. Initialize .msg/meta.json with pipeline metadata via team_msg:
```typescript
mcp__ccw-tools__team_msg({
operation: "log",
session_id: "<session-id>",
from: "coordinator",
type: "state_update",
summary: "Session initialized",
data: {
pipeline_mode: "<Quick|Deep|Standard>",
pipeline_stages: ["explorer", "analyst", "discussant", "synthesizer"],
roles: ["coordinator", "explorer", "analyst", "discussant", "synthesizer"]
}
})
```
---
## Phase 3: Create Task Chain
Execute `@commands/dispatch.md` inline (Command Execution Protocol):
1. Read `roles/coordinator/commands/dispatch.md`
2. Follow dispatch Phase 2 -> Phase 3 -> Phase 4
3. Result: all pipeline tasks created in tasks.json with correct deps
---
## Phase 4: Spawn & Coordination Loop
### Initial Spawn
Identify the initially unblocked tasks and spawn their workers. Use the SKILL.md Worker Spawn Template with:
- `role_spec: <skill_root>/roles/<role>/role.md`
- `inner_loop: false`
**STOP** after spawning and waiting for results.
### Coordination (via monitor.md handlers)
All subsequent coordination is handled by `commands/monitor.md` handlers triggered after wait_agent returns.
---
## Phase 5: Report + Completion Action
### Report
1. Load session state -> count completed tasks, calculate duration
2. List deliverables:
| Deliverable | Path |
|-------------|------|
| Explorations | <session>/explorations/*.json |
| Analyses | <session>/analyses/*.json |
| Discussion | <session>/discussion.md |
| Conclusions | <session>/conclusions.json |
3. Include discussion summaries and decision trail
4. Output pipeline summary: task count, duration, mode
5. **Completion Action** (interactive):
```
request_user_input({
questions: [{
question: "Ultra-Analyze pipeline complete. What would you like to do?",
header: "Completion",
multiSelect: false,
options: [
{ label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and resources" },
{ label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
{ label: "Export Results", description: "Export deliverables to a specified location, then clean" }
]
}]
})
```
6. Handle user choice per SKILL.md Completion Action section.
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Explorer finds nothing | Continue with limited context, note limitation |
| Discussion loop stuck >5 rounds | Force synthesis, offer continuation |
| CLI unavailable | Fallback chain: gemini -> codex -> claude |
| User timeout in discussion | Save state, show resume command |
| Session folder conflict | Append timestamp suffix |
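The "CLI unavailable" fallback chain can be sketched as a first-available lookup. The `probe` callback is a hypothetical injected availability check (e.g. a `--version` call), not part of this spec:

```javascript
// Walk the fallback chain and return the first tool whose probe succeeds.
function pickCliTool(probe, chain = ["gemini", "codex", "claude"]) {
  for (const tool of chain) {
    if (probe(tool)) return tool;
  }
  throw new Error("No CLI tool available in fallback chain");
}
```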


@@ -0,0 +1,104 @@
---
role: discussant
prefix: DISCUSS
inner_loop: false
message_types:
success: discussion_processed
error: error
---
# Discussant
Process analysis results and user feedback. Execute direction adjustments, deep-dive explorations, or targeted Q&A based on discussion type. Update discussion timeline.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| Analysis results | `<session>/analyses/*.json` | Yes |
| Exploration results | `<session>/explorations/*.json` | No |
1. Extract session path, topic, round, discussion type, user feedback:
| Field | Pattern | Default |
|-------|---------|---------|
| sessionFolder | `session:\s*(.+)` | required |
| topic | `topic:\s*(.+)` | required |
| round | `round:\s*(\d+)` | 1 |
| discussType | `type:\s*(.+)` | "initial" |
| userFeedback | `user_feedback:\s*(.+)` | empty |
2. Read all analysis and exploration results
3. Aggregate current findings, insights, open questions
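The field-extraction table above can be applied mechanically: match each pattern against the task description, fall back to the listed default, and fail fast on required fields. The patterns and defaults come from the table; the surrounding code is illustrative:

```javascript
const FIELDS = [
  { name: "sessionFolder", pattern: /session:\s*(.+)/, required: true },
  { name: "topic", pattern: /topic:\s*(.+)/, required: true },
  { name: "round", pattern: /round:\s*(\d+)/, fallback: 1, parse: Number },
  { name: "discussType", pattern: /type:\s*(.+)/, fallback: "initial" },
  { name: "userFeedback", pattern: /user_feedback:\s*(.+)/, fallback: "" },
];

function parseTaskDescription(text) {
  const out = {};
  for (const f of FIELDS) {
    const m = text.match(f.pattern);
    if (m) out[f.name] = f.parse ? f.parse(m[1].trim()) : m[1].trim();
    else if (f.required) throw new Error(`Missing required field: ${f.name}`);
    else out[f.name] = f.fallback;
  }
  return out;
}
```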
## Phase 3: Discussion Processing
Select strategy by discussion type:
| Type | Mode | Description |
|------|------|-------------|
| initial | inline | Aggregate all analyses: convergent themes, conflicts, top discussion points |
| deepen | cli | Use CLI tool to investigate open questions deeper |
| direction-adjusted | cli | Re-analyze via `ccw cli` from adjusted perspective |
| specific-questions | cli | Targeted exploration answering user questions |
**initial**: Cross-perspective summary -- identify convergent themes, conflicting views, top 5 discussion points and open questions from all analyses.
**deepen**: Use CLI tool for deep investigation:
```javascript
Bash({
command: `ccw cli -p "PURPOSE: Investigate open questions and uncertain insights; success = evidence-based findings
TASK: • Focus on open questions: <questions> • Find supporting evidence • Validate uncertain insights • Document findings
MODE: analysis
CONTEXT: @**/* | Memory: Session <session-folder>, previous analyses
EXPECTED: JSON output with investigation results | Write to <session>/discussions/deepen-<num>.json
CONSTRAINTS: Evidence-based analysis only
" --tool gemini --mode analysis --rule analysis-trace-code-execution`,
})
```
**direction-adjusted**: CLI re-analysis from adjusted focus:
```javascript
Bash({
command: `ccw cli -p "Re-analyze '<topic>' with adjusted focus on '<userFeedback>'" --tool gemini --mode analysis`,
})
```
**specific-questions**: Use CLI tool for targeted Q&A:
```javascript
Bash({
command: `ccw cli -p "PURPOSE: Answer specific user questions about <topic>; success = clear, evidence-based answers
TASK: • Answer: <userFeedback> • Provide code references • Explain context
MODE: analysis
CONTEXT: @**/* | Memory: Session <session-folder>
EXPECTED: JSON output with answers and evidence | Write to <session>/discussions/questions-<num>.json
CONSTRAINTS: Direct answers with code references
" --tool gemini --mode analysis`,
})
```
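The strategy table above amounts to a small dispatcher. Treating unknown types as `initial` is an assumption of this sketch, not something the spec states:

```javascript
// Map discussion type to processing mode, per the Phase 3 strategy table.
const STRATEGIES = {
  "initial": { mode: "inline", action: "aggregate-analyses" },
  "deepen": { mode: "cli", action: "investigate-open-questions" },
  "direction-adjusted": { mode: "cli", action: "reanalyze-adjusted-focus" },
  "specific-questions": { mode: "cli", action: "targeted-qa" },
};

function selectStrategy(discussType) {
  return STRATEGIES[discussType] || STRATEGIES["initial"];
}
```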
## Phase 4: Update Discussion Timeline
1. Write round content to `<session>/discussions/discussion-round-<num>.json`:
```json
{
"round": 1, "type": "initial", "user_feedback": "...",
"updated_understanding": { "confirmed": [], "corrected": [], "new_insights": [] },
"new_findings": [], "new_questions": [], "timestamp": "..."
}
```
2. Append round section to `<session>/discussion.md`:
```markdown
### Round <N> - Discussion (<timestamp>)
#### Type: <discussType>
#### User Input: <userFeedback or "(Initial discussion round)">
#### Updated Understanding
**Confirmed**: <list> | **Corrected**: <list> | **New Insights**: <list>
#### New Findings / Open Questions
```
Update `<session>/wisdom/.msg/meta.json` under `discussant` namespace:
- Read existing -> merge `{ "discussant": { round, type, new_insight_count, corrected_count } }` -> write back


@@ -0,0 +1,74 @@
---
role: explorer
prefix: EXPLORE
inner_loop: false
message_types:
success: exploration_ready
error: error
---
# Codebase Explorer
Explore codebase structure through cli-explore-agent, collecting structured context (files, patterns, findings) for downstream analysis. One explorer per analysis perspective.
## Phase 2: Context & Scope Assessment
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
1. Load debug specs: Run `ccw spec load --category debug` for known issues and root-cause notes
2. Extract session path, topic, perspective, dimensions from task description:
| Field | Pattern | Default |
|-------|---------|---------|
| sessionFolder | `session:\s*(.+)` | required |
| topic | `topic:\s*(.+)` | required |
| perspective | `perspective:\s*(.+)` | "general" |
| dimensions | `dimensions:\s*(.+)` | "general" |
3. Determine exploration number from task subject (EXPLORE-N)
4. Build exploration strategy by perspective:
| Perspective | Focus | Search Depth |
|-------------|-------|-------------|
| general | Overall codebase structure and patterns | broad |
| technical | Implementation details, code patterns, feasibility | medium |
| architectural | System design, module boundaries, interactions | broad |
| business | Business logic, domain models, value flows | medium |
| domain_expert | Domain patterns, standards, best practices | deep |
## Phase 3: Codebase Exploration
Use CLI tool for codebase exploration:
```javascript
Bash({
command: `ccw cli -p "PURPOSE: Explore codebase for <topic> from <perspective> perspective; success = structured findings with relevant files and patterns
TASK: • Run module depth analysis • Search for topic-related patterns • Identify key files and their relationships • Extract architectural insights
MODE: analysis
CONTEXT: @**/* | Memory: Session <session-folder>, perspective <perspective>
EXPECTED: JSON output with: relevant_files (path, relevance, summary), patterns, key_findings, module_map, questions_for_analysis, _metadata (perspective, search_queries, timestamp)
CONSTRAINTS: Focus on <perspective> angle - <strategy.focus> | Write to <session>/explorations/exploration-<num>.json
" --tool gemini --mode analysis --rule analysis-analyze-code-patterns`,
})
```
**ACE fallback** (when CLI produces no output):
```javascript
mcp__ace-tool__search_context({ project_root_path: ".", query: "<topic> <perspective>" })
```
## Phase 4: Result Validation
| Check | Method | Action on Failure |
|-------|--------|-------------------|
| Output file exists | Read output path | Create empty result, run ACE fallback |
| Has relevant_files | Array length > 0 | Trigger ACE supplementary search |
| Has key_findings | Array length > 0 | Note partial results, proceed |
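The validation table above can be sketched as a checker that returns the follow-up actions to take. The result shape (`relevant_files`, `key_findings`) matches the EXPECTED output above; everything else is illustrative:

```javascript
// Validate an exploration result per the Phase 4 table.
function validateExploration(result) {
  if (!result) {
    // Output file missing: create an empty result and run the ACE fallback.
    return {
      result: { relevant_files: [], key_findings: [] },
      actions: ["ace-fallback"],
    };
  }
  const actions = [];
  if (!Array.isArray(result.relevant_files) || result.relevant_files.length === 0) {
    actions.push("ace-supplementary-search");
  }
  if (!Array.isArray(result.key_findings) || result.key_findings.length === 0) {
    actions.push("note-partial-results"); // proceed with partial results
  }
  return { result, actions };
}
```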
Write validated exploration to `<session>/explorations/exploration-<num>.json`.
Update `<session>/wisdom/.msg/meta.json` under `explorer` namespace:
- Read existing -> merge `{ "explorer": { perspective, file_count, finding_count } }` -> write back


@@ -0,0 +1,78 @@
---
role: synthesizer
prefix: SYNTH
inner_loop: false
message_types:
success: synthesis_ready
error: error
---
# Synthesizer
Integrate all explorations, analyses, and discussions into final conclusions. Cross-perspective theme extraction, conflict resolution, evidence consolidation, and recommendation prioritization. Pure integration role -- no external tools or CLI calls.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| All artifacts | `<session>/explorations/*.json`, `analyses/*.json`, `discussions/*.json` | Yes |
| Decision trail | From wisdom/.msg/meta.json | No |
1. Extract session path and topic from task description
2. Read all exploration, analysis, and discussion round files
3. Load decision trail and current understanding from meta.json
4. Select synthesis strategy:
| Condition | Strategy |
|-----------|----------|
| Single analysis, no discussions | simple (Quick mode summary) |
| Multiple analyses, >2 discussion rounds | deep (track evolution) |
| Default | standard (cross-perspective integration) |
## Phase 3: Cross-Perspective Synthesis
Execute synthesis across four dimensions:
**1. Theme Extraction**: Identify convergent themes across all analysis perspectives. Cluster insights by similarity, rank by cross-perspective confirmation count.
**2. Conflict Resolution**: Identify contradictions between perspectives. Present both sides with trade-off analysis when irreconcilable.
**3. Evidence Consolidation**: Deduplicate findings, aggregate by file reference. Map evidence to conclusions with confidence levels:
| Level | Criteria |
|-------|----------|
| High | Multiple sources confirm, strong evidence |
| Medium | Single source or partial evidence |
| Low | Speculative, needs verification |
**4. Recommendation Prioritization**: Sort all recommendations by priority (high > medium > low), deduplicate, cap at 10.
Integrate decision trail from discussion rounds into final narrative.
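The prioritization rule in dimension 4 can be sketched directly: sort high > medium > low, deduplicate by action text, cap at 10. Deduplicating on case-insensitive action text is an assumption of this sketch:

```javascript
const PRIORITY_RANK = { high: 0, medium: 1, low: 2 };

function prioritizeRecommendations(recs, cap = 10) {
  const seen = new Set();
  return [...recs]
    // Stable sort: high first, unknown priorities last.
    .sort((a, b) => (PRIORITY_RANK[a.priority] ?? 3) - (PRIORITY_RANK[b.priority] ?? 3))
    .filter((r) => {
      const key = r.action.trim().toLowerCase();
      if (seen.has(key)) return false; // drop duplicate actions
      seen.add(key);
      return true;
    })
    .slice(0, cap);
}
```

Because the sort is stable and runs before deduplication, the highest-priority copy of a duplicated recommendation is the one that survives.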
## Phase 4: Write Conclusions
1. Write `<session>/conclusions.json`:
```json
{
"session_id": "...", "topic": "...", "completed": "ISO-8601",
"summary": "Executive summary...",
"key_conclusions": [{"point": "...", "evidence": "...", "confidence": "high"}],
"recommendations": [{"action": "...", "rationale": "...", "priority": "high"}],
"open_questions": ["..."],
"decision_trail": [{"round": 1, "decision": "...", "context": "..."}],
"cross_perspective_synthesis": { "convergent_themes": [], "conflicts_resolved": [], "unique_contributions": [] },
"_metadata": { "explorations": 3, "analyses": 3, "discussions": 2, "strategy": "standard" }
}
```
2. Append conclusions section to `<session>/discussion.md`:
```markdown
## Conclusions
### Summary / Key Conclusions / Recommendations / Remaining Questions
## Decision Trail / Current Understanding (Final) / Session Statistics
```
Update `<session>/wisdom/.msg/meta.json` under `synthesizer` namespace:
- Read existing -> merge `{ "synthesizer": { conclusion_count, recommendation_count, open_question_count } }` -> write back