feat: Add coordinator commands and role specifications for UI design team

- Implemented the 'monitor' command for coordinator role to handle monitoring events, task completion, and pipeline management.
- Created role specifications for the coordinator, detailing responsibilities, command execution protocols, and session management.
- Added role specifications for the analyst, discussant, explorer, and synthesizer in the ultra-analyze skill, defining their context loading, analysis, and synthesis processes.
This commit is contained in:
catlog22
2026-03-03 23:35:41 +08:00
parent a7ed0365f7
commit 26bda9c634
188 changed files with 9332 additions and 3512 deletions

View File

@@ -130,9 +130,9 @@ Each worker executes the same task discovery flow on startup:
Standard report flow after task completion:
1. **Message Bus**: Call `mcp__ccw-tools__team_msg` to log message
-   - Parameters: operation="log", team=&lt;session-id&gt;, from=&lt;role&gt;, to="coordinator", type=&lt;message-type&gt;, summary="[&lt;role&gt;] &lt;summary&gt;", ref=&lt;artifact-path&gt;
-   - **Note**: `team` must be session ID (e.g., `UAN-xxx-date`), NOT team name. Extract from `Session:` field in task description.
-   - **CLI fallback**: When MCP unavailable -> `ccw team log --team <session-id> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json`
+   - Parameters: operation="log", session_id=&lt;session-id&gt;, from=&lt;role&gt;, type=&lt;message-type&gt;, data={ref: "&lt;artifact-path&gt;"}
+   - `to` and `summary` auto-defaulted -- do NOT specify explicitly
+   - **CLI fallback**: `ccw team log --session-id <session-id> --from <role> --type <type> --json`
2. **SendMessage**: Send result to coordinator (both content and summary prefixed with `[<role>]`)
3. **TaskUpdate**: Mark task completed
4. **Loop**: Return to Phase 1 to check for next task
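When falling back to the CLI, the command in step 1 can be assembled mechanically from the session ID, role, and message type. A minimal sketch, assuming only the flags shown above; `buildTeamLogCmd` is a hypothetical helper, not part of ccw:

```javascript
// Build the `ccw team log` fallback command for a worker report.
// Only the documented flags (--session-id, --from, --type, --json) are used.
function buildTeamLogCmd({ sessionId, from, type }) {
  // Guard against the common mistake called out above: passing a team
  // name instead of a session ID like `UAN-xxx-date`.
  if (!/^UAN-/.test(sessionId)) {
    throw new Error(`expected a session ID like UAN-xxx-date, got: ${sessionId}`);
  }
  return ['ccw', 'team', 'log',
    '--session-id', sessionId,
    '--from', from,
    '--type', type,
    '--json'].join(' ');
}

const cmd = buildTeamLogCmd({
  sessionId: 'UAN-auth-refactor-2026-03-03',
  from: 'analyst',
  type: 'analysis_ready',
});
// cmd: "ccw team log --session-id UAN-auth-refactor-2026-03-03 --from analyst --type analysis_ready --json"
```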
@@ -159,16 +159,16 @@ Cross-task knowledge accumulation. Coordinator creates `wisdom/` directory durin
|---------|-----------|
| Process tasks matching own prefix | Process tasks with other role prefixes |
| SendMessage to coordinator | Communicate directly with other workers |
-| Read/write shared-memory.json (own fields) | Create tasks for other roles |
+| Share state via team_msg(type='state_update') | Create tasks for other roles |
| Delegate to commands/*.md | Modify resources outside own responsibility |
The coordinator is additionally prohibited from directly executing code exploration or analysis, directly calling cli-explore-agent or CLI analysis tools, or bypassing workers to complete work.
-### Shared Memory
+### Cross-Role State
-Core shared artifact stored at `<session-folder>/shared-memory.json`. Each role reads the full memory but writes only to its own designated field:
+Cross-role state managed via `team_msg(type='state_update')`, stored in `.msg/meta.json.role_state`. Each role reads all states but writes only to its own designated field:
-| Role | Write Field |
+| Role | State Field |
|------|-------------|
| explorer | `explorations` |
| analyst | `analyses` |
@@ -176,13 +176,13 @@ Core shared artifact stored at `<session-folder>/shared-memory.json`. Each role
| synthesizer | `synthesis` |
| coordinator | `decision_trail` + `current_understanding` |
-On startup, read the file. After completing work, update own field and write back. If file does not exist, initialize with empty object.
+On startup, read role states via `team_msg(operation="get_state", session_id=<session-id>)`. After completing work, share results via `team_msg(operation="log", session_id=<session-id>, from=<role>, type="state_update", data={...})`.
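The own-field-only write rule can be sketched as a guard over a plain object. Field names follow the table above; the `discussions` field name for the discussant and the enforcement logic itself are assumptions, not the actual team_msg implementation:

```javascript
// Each role may read all of role_state but write only its designated field.
const WRITE_FIELD = {
  explorer: 'explorations',
  analyst: 'analyses',
  discussant: 'discussions',   // assumed field name (not shown in the table)
  synthesizer: 'synthesis',
};

function applyStateUpdate(roleState, role, data) {
  const field = WRITE_FIELD[role];
  if (!field) throw new Error(`unknown role: ${role}`);
  // Copy everything, merge only into the caller's designated field.
  return { ...roleState, [field]: { ...(roleState[field] || {}), ...data } };
}

const before = { explorations: { count: 2 } };
const after = applyStateUpdate(before, 'analyst', { count: 1 });
// after.explorations is untouched; after.analyses now holds the analyst's data
```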
### Message Bus (All Roles)
-All roles log messages before sending via SendMessage. Call `mcp__ccw-tools__team_msg` with: operation="log", team=&lt;session-id&gt;, from=&lt;role&gt;, to="coordinator", type=&lt;message-type&gt;, summary="[&lt;role&gt;] &lt;summary&gt;", ref=&lt;file-path&gt;.
+All roles log messages before sending via SendMessage. Call `mcp__ccw-tools__team_msg` with: operation="log", session_id=&lt;session-id&gt;, from=&lt;role&gt;, type=&lt;message-type&gt;, data={ref: "&lt;file-path&gt;"}. `to` and `summary` are auto-defaulted.
-> **Note**: `team` must be session ID (e.g., `UAN-xxx-date`), NOT team name. Extract from `Session:` field in task description.
| Role | Types |
|------|-------|
@@ -192,9 +192,8 @@ All roles log messages before sending via SendMessage. Call `mcp__ccw-tools__tea
| discussant | `discussion_processed`, `error` |
| synthesizer | `synthesis_ready`, `error` |
-**CLI fallback**: When MCP unavailable -> `ccw team log --team "<session-id>" --from "<role>" --to "coordinator" --type "<type>" --summary "<summary>" --json`
+**CLI fallback**: When MCP unavailable -> `ccw team log --session-id <session-id> --from <role> --type <type> --json`
-> **Note**: `team` must be session ID (e.g., `UAN-xxx-date`), NOT team name.
---
@@ -393,7 +392,7 @@ Session: <session-folder>
|---------|-------|
| Team name | ultra-analyze |
| Session directory | .workflow/.team/UAN-{slug}-{date}/ |
-| Shared memory file | shared-memory.json |
| Analysis dimensions | architecture, implementation, performance, security, concept, comparison, decision |
| Max discussion rounds | 5 |
@@ -401,7 +400,8 @@ Session: <session-folder>
```
.workflow/.team/UAN-{slug}-{YYYY-MM-DD}/
-+-- shared-memory.json        # Exploration/analysis/discussion/synthesis shared memory
++-- .msg/messages.jsonl       # Message bus log
++-- .msg/meta.json            # Session metadata
+-- discussion.md # Understanding evolution and discussion timeline
+-- explorations/ # Explorer output
| +-- exploration-001.json

View File

@@ -0,0 +1,90 @@
---
prefix: ANALYZE
inner_loop: false
additional_prefixes: [ANALYZE-fix]
subagents: []
message_types:
success: analysis_ready
error: error
---
# Deep Analyst
Perform deep multi-perspective analysis on exploration results via CLI tools. Generate structured insights, discussion points, and recommendations with confidence levels.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| Exploration results | `<session>/explorations/*.json` | Yes |
1. Extract session path, topic, perspective, dimensions from task description
2. Detect direction-fix mode: `type:\s*direction-fix` with `adjusted_focus:\s*(.+)`
3. Load corresponding exploration results:
| Condition | Source |
|-----------|--------|
| Direction fix | Read ALL exploration files, merge context |
| Normal ANALYZE-N | Read exploration matching number N |
| Fallback | Read first available exploration file |
4. Select CLI tool by perspective:
| Perspective | CLI Tool | Rule Template |
|-------------|----------|---------------|
| technical | gemini | analysis-analyze-code-patterns |
| architectural | claude | analysis-review-architecture |
| business | codex | analysis-analyze-code-patterns |
| domain_expert | gemini | analysis-analyze-code-patterns |
| direction-fix (any) | gemini | analysis-diagnose-bug-root-cause |
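The perspective-to-tool routing in the table above amounts to a lookup with a direction-fix override. A sketch; the fallback for unknown perspectives is an assumption:

```javascript
// CLI tool and rule template selection, per the routing table above.
const TOOL_BY_PERSPECTIVE = {
  technical:     { tool: 'gemini', rule: 'analysis-analyze-code-patterns' },
  architectural: { tool: 'claude', rule: 'analysis-review-architecture' },
  business:      { tool: 'codex',  rule: 'analysis-analyze-code-patterns' },
  domain_expert: { tool: 'gemini', rule: 'analysis-analyze-code-patterns' },
};

function selectCliTool(perspective, isDirectionFix) {
  // direction-fix overrides any perspective
  if (isDirectionFix) {
    return { tool: 'gemini', rule: 'analysis-diagnose-bug-root-cause' };
  }
  // assumed fallback: treat unknown perspectives as technical
  return TOOL_BY_PERSPECTIVE[perspective] || TOOL_BY_PERSPECTIVE.technical;
}

const picked = selectCliTool('architectural', false);
// picked.tool === 'claude', picked.rule === 'analysis-review-architecture'
```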
## Phase 3: Deep Analysis via CLI
Build analysis prompt with exploration context:
```
PURPOSE: <Normal: "Deep analysis of '<topic>' from <perspective> perspective">
<Fix: "Supplementary analysis with adjusted focus on '<adjusted_focus>'">
Success: Actionable insights with confidence levels and evidence references
PRIOR EXPLORATION CONTEXT:
- Key files: <top 5-8 files from exploration>
- Patterns found: <top 3-5 patterns>
- Key findings: <top 3-5 findings>
TASK:
- <perspective-specific analysis tasks>
- Generate structured findings with confidence levels (high/medium/low)
- Identify discussion points requiring user input
- List open questions needing further exploration
MODE: analysis
CONTEXT: @**/* | Topic: <topic>
EXPECTED: Structured analysis with: key_insights, key_findings, discussion_points, open_questions, recommendations
CONSTRAINTS: Focus on <perspective> perspective | <dimensions>
```
Execute: `ccw cli -p "<prompt>" --tool <cli-tool> --mode analysis --rule <rule>`
## Phase 4: Result Aggregation
Write analysis output to `<session>/analyses/analysis-<num>.json`:
```json
{
"perspective": "<perspective>",
"dimensions": ["<dim1>", "<dim2>"],
"is_direction_fix": false,
"key_insights": [{"insight": "...", "confidence": "high", "evidence": "file:line"}],
"key_findings": [{"finding": "...", "file_ref": "...", "impact": "..."}],
"discussion_points": ["..."],
"open_questions": ["..."],
"recommendations": [{"action": "...", "rationale": "...", "priority": "high"}],
"_metadata": {"cli_tool": "...", "cli_rule": "...", "perspective": "...", "timestamp": "..."}
}
```
Update `<session>/wisdom/.msg/meta.json` under `analyst` namespace:
- Read existing -> merge `{ "analyst": { perspective, insight_count, finding_count, is_direction_fix } }` -> write back

View File

@@ -0,0 +1,90 @@
---
prefix: DISCUSS
inner_loop: false
subagents: [cli-explore-agent]
message_types:
success: discussion_processed
error: error
---
# Discussant
Process analysis results and user feedback. Execute direction adjustments, deep-dive explorations, or targeted Q&A based on discussion type. Update discussion timeline.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| Analysis results | `<session>/analyses/*.json` | Yes |
| Exploration results | `<session>/explorations/*.json` | No |
1. Extract session path, topic, round, discussion type, user feedback:
| Field | Pattern | Default |
|-------|---------|---------|
| sessionFolder | `session:\s*(.+)` | required |
| topic | `topic:\s*(.+)` | required |
| round | `round:\s*(\d+)` | 1 |
| discussType | `type:\s*(.+)` | "initial" |
| userFeedback | `user_feedback:\s*(.+)` | empty |
2. Read all analysis and exploration results
3. Aggregate current findings, insights, open questions
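The extraction patterns and defaults in the table above can be sketched as a small parser; `parseDiscussTask` is a hypothetical name:

```javascript
// Extract discussant task fields from a task description, applying the
// documented defaults (round: 1, type: "initial", user_feedback: empty).
function parseDiscussTask(desc) {
  const grab = (re, dflt) => {
    const m = desc.match(re);
    return m ? m[1].trim() : dflt;
  };
  const sessionFolder = grab(/session:\s*(.+)/, null);
  const topic = grab(/topic:\s*(.+)/, null);
  if (sessionFolder === null || topic === null) {
    throw new Error('session and topic are required');
  }
  return {
    sessionFolder,
    topic,
    round: parseInt(grab(/round:\s*(\d+)/, '1'), 10),
    discussType: grab(/type:\s*(.+)/, 'initial'),
    userFeedback: grab(/user_feedback:\s*(.+)/, ''),
  };
}

const parsed = parseDiscussTask(
  'session: .workflow/.team/UAN-auth-2026-03-03\ntopic: auth flow\nround: 2\ntype: deepen'
);
// parsed.round === 2, parsed.discussType === 'deepen', parsed.userFeedback === ''
```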
## Phase 3: Discussion Processing
Select strategy by discussion type:
| Type | Mode | Description |
|------|------|-------------|
| initial | inline | Aggregate all analyses: convergent themes, conflicts, top discussion points |
| deepen | subagent | Spawn cli-explore-agent to investigate open questions deeper |
| direction-adjusted | cli | Re-analyze via `ccw cli` from adjusted perspective |
| specific-questions | subagent | Targeted exploration answering user questions |
**initial**: Cross-perspective summary -- identify convergent themes, conflicting views, top 5 discussion points and open questions from all analyses.
**deepen**: Spawn cli-explore-agent focused on open questions and uncertain insights:
```
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: "Focus on open questions: <questions>. Find evidence for uncertain insights. Write to: <session>/discussions/deepen-<num>.json"
})
```
**direction-adjusted**: CLI re-analysis from adjusted focus:
```
ccw cli -p "Re-analyze '<topic>' with adjusted focus on '<userFeedback>'" --tool gemini --mode analysis
```
**specific-questions**: Spawn cli-explore-agent targeting user's questions:
```
Task({ subagent_type: "cli-explore-agent", prompt: "Answer: <userFeedback>. Write to: <session>/discussions/questions-<num>.json" })
```
## Phase 4: Update Discussion Timeline
1. Write round content to `<session>/discussions/discussion-round-<num>.json`:
```json
{
"round": 1, "type": "initial", "user_feedback": "...",
"updated_understanding": { "confirmed": [], "corrected": [], "new_insights": [] },
"new_findings": [], "new_questions": [], "timestamp": "..."
}
```
2. Append round section to `<session>/discussion.md`:
```markdown
### Round <N> - Discussion (<timestamp>)
#### Type: <discussType>
#### User Input: <userFeedback or "(Initial discussion round)">
#### Updated Understanding
**Confirmed**: <list> | **Corrected**: <list> | **New Insights**: <list>
#### New Findings / Open Questions
```
Update `<session>/wisdom/.msg/meta.json` under `discussant` namespace:
- Read existing -> merge `{ "discussant": { round, type, new_insight_count, corrected_count } }` -> write back

View File

@@ -0,0 +1,90 @@
---
prefix: EXPLORE
inner_loop: false
subagents: [cli-explore-agent]
message_types:
success: exploration_ready
error: error
---
# Codebase Explorer
Explore codebase structure through cli-explore-agent, collecting structured context (files, patterns, findings) for downstream analysis. One explorer per analysis perspective.
## Phase 2: Context & Scope Assessment
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
1. Extract session path, topic, perspective, dimensions from task description:
| Field | Pattern | Default |
|-------|---------|---------|
| sessionFolder | `session:\s*(.+)` | required |
| topic | `topic:\s*(.+)` | required |
| perspective | `perspective:\s*(.+)` | "general" |
| dimensions | `dimensions:\s*(.+)` | "general" |
2. Determine exploration number from task subject (EXPLORE-N)
3. Build exploration strategy by perspective:
| Perspective | Focus | Search Depth |
|-------------|-------|-------------|
| general | Overall codebase structure and patterns | broad |
| technical | Implementation details, code patterns, feasibility | medium |
| architectural | System design, module boundaries, interactions | broad |
| business | Business logic, domain models, value flows | medium |
| domain_expert | Domain patterns, standards, best practices | deep |
## Phase 3: Codebase Exploration
Spawn `cli-explore-agent` subagent for actual exploration:
```
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: "Explore codebase: <topic> (<perspective>)",
prompt: `
## Analysis Context
Topic: <topic>
Perspective: <perspective> -- <strategy.focus>
Dimensions: <dimensions>
Session: <session-folder>
## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Execute searches based on topic + perspective keywords
3. Run: ccw spec load --category exploration
## Exploration Focus (<perspective> angle)
<dimension-specific exploration instructions>
## Output
Write findings to: <session>/explorations/exploration-<num>.json
Schema: { perspective, relevant_files: [{path, relevance, summary}], patterns: [string],
key_findings: [string], module_map: {module: [files]}, questions_for_analysis: [string],
_metadata: {agent, perspective, search_queries, timestamp} }
`
})
```
**ACE fallback** (when cli-explore-agent produces no output):
```
mcp__ace-tool__search_context({ project_root_path: ".", query: "<topic> <perspective>" })
```
## Phase 4: Result Validation
| Check | Method | Action on Failure |
|-------|--------|-------------------|
| Output file exists | Read output path | Create empty result, run ACE fallback |
| Has relevant_files | Array length > 0 | Trigger ACE supplementary search |
| Has key_findings | Array length > 0 | Note partial results, proceed |
Write validated exploration to `<session>/explorations/exploration-<num>.json`.
Update `<session>/wisdom/.msg/meta.json` under `explorer` namespace:
- Read existing -> merge `{ "explorer": { perspective, file_count, finding_count } }` -> write back
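The validation checks above can be sketched as pure predicates; representing the ACE-fallback and partial-result actions as flags is an assumption about how the caller consumes them:

```javascript
// Validate an exploration result per the Phase 4 table.
function validateExploration(result) {
  const checks = {
    fileExists: result != null,
    hasRelevantFiles: Array.isArray(result?.relevant_files) && result.relevant_files.length > 0,
    hasKeyFindings: Array.isArray(result?.key_findings) && result.key_findings.length > 0,
  };
  return {
    ...checks,
    // Missing file or empty file list triggers the ACE fallback search.
    needsAceFallback: !checks.fileExists || !checks.hasRelevantFiles,
    // Files without findings: note partial results and proceed.
    partial: checks.hasRelevantFiles && !checks.hasKeyFindings,
  };
}

const verdict = validateExploration({ relevant_files: [{ path: 'src/auth.ts' }], key_findings: [] });
// verdict.partial === true, verdict.needsAceFallback === false
```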

View File

@@ -0,0 +1,78 @@
---
prefix: SYNTH
inner_loop: false
subagents: []
message_types:
success: synthesis_ready
error: error
---
# Synthesizer
Integrate all explorations, analyses, and discussions into final conclusions. Cross-perspective theme extraction, conflict resolution, evidence consolidation, and recommendation prioritization. Pure integration role -- no external tools or CLI calls.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| All artifacts | `<session>/explorations/*.json`, `analyses/*.json`, `discussions/*.json` | Yes |
| Decision trail | From wisdom/.msg/meta.json | No |
1. Extract session path and topic from task description
2. Read all exploration, analysis, and discussion round files
3. Load decision trail and current understanding from meta.json
4. Select synthesis strategy:
| Condition | Strategy |
|-----------|----------|
| Single analysis, no discussions | simple (Quick mode summary) |
| Multiple analyses, >2 discussion rounds | deep (track evolution) |
| Default | standard (cross-perspective integration) |
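The strategy selection above can be sketched as a pair of counts, treating "no discussions" as zero rounds:

```javascript
// Pick the synthesis strategy from artifact counts, per the table above.
function selectStrategy(analysisCount, discussionRounds) {
  if (analysisCount === 1 && discussionRounds === 0) return 'simple';   // Quick mode summary
  if (analysisCount > 1 && discussionRounds > 2) return 'deep';         // track evolution
  return 'standard';                                                    // cross-perspective integration
}
```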
## Phase 3: Cross-Perspective Synthesis
Execute synthesis across four dimensions:
**1. Theme Extraction**: Identify convergent themes across all analysis perspectives. Cluster insights by similarity, rank by cross-perspective confirmation count.
**2. Conflict Resolution**: Identify contradictions between perspectives. Present both sides with trade-off analysis when irreconcilable.
**3. Evidence Consolidation**: Deduplicate findings, aggregate by file reference. Map evidence to conclusions with confidence levels:
| Level | Criteria |
|-------|----------|
| High | Multiple sources confirm, strong evidence |
| Medium | Single source or partial evidence |
| Low | Speculative, needs verification |
**4. Recommendation Prioritization**: Sort all recommendations by priority (high > medium > low), deduplicate, cap at 10.
Integrate decision trail from discussion rounds into final narrative.
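Step 4 above (sort by priority, deduplicate, cap at 10) can be sketched as follows; using the action text as the dedupe key is an assumption:

```javascript
// Prioritize recommendations: high > medium > low, dedupe by action, cap at 10.
function prioritizeRecommendations(recs) {
  const rank = { high: 0, medium: 1, low: 2 };
  const seen = new Set();
  return recs
    .slice()                                              // don't mutate input
    .sort((a, b) => rank[a.priority] - rank[b.priority])  // high first
    .filter(r => !seen.has(r.action) && seen.add(r.action)) // keep highest-priority copy
    .slice(0, 10);
}

const out = prioritizeRecommendations([
  { action: 'add cache', priority: 'low' },
  { action: 'fix race', priority: 'high' },
  { action: 'add cache', priority: 'medium' },
]);
// out[0] is the high-priority item; 'add cache' survives once, at medium
```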
## Phase 4: Write Conclusions
1. Write `<session>/conclusions.json`:
```json
{
"session_id": "...", "topic": "...", "completed": "ISO-8601",
"summary": "Executive summary...",
"key_conclusions": [{"point": "...", "evidence": "...", "confidence": "high"}],
"recommendations": [{"action": "...", "rationale": "...", "priority": "high"}],
"open_questions": ["..."],
"decision_trail": [{"round": 1, "decision": "...", "context": "..."}],
"cross_perspective_synthesis": { "convergent_themes": [], "conflicts_resolved": [], "unique_contributions": [] },
"_metadata": { "explorations": 3, "analyses": 3, "discussions": 2, "strategy": "standard" }
}
```
2. Append conclusions section to `<session>/discussion.md`:
```markdown
## Conclusions
### Summary / Key Conclusions / Recommendations / Remaining Questions
## Decision Trail / Current Understanding (Final) / Session Statistics
```
Update `<session>/wisdom/.msg/meta.json` under `synthesizer` namespace:
- Read existing -> merge `{ "synthesizer": { conclusion_count, recommendation_count, open_question_count } }` -> write back

View File

@@ -17,7 +17,7 @@
- Only communicate with coordinator via SendMessage
- Work strictly within deep analysis responsibility scope
- Base analysis on explorer exploration results
-- Write analysis results to shared-memory.json `analyses` field
+- Share analysis results via team_msg(type='state_update')
### MUST NOT
@@ -72,21 +72,19 @@ Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
```
mcp__ccw-tools__team_msg({
operation: "log",
-  team: <session-id>,
+  session_id: <session-id>,
from: "analyst",
-  to: "coordinator",
type: "analysis_ready",
-  summary: "[analyst] ANALYZE complete: <summary>",
ref: "<output-path>"
})
```
-> **Note**: `team` must be session ID (e.g., `UAN-xxx-date`), NOT team name. Extract from `Session:` field in task description.
+> `to` and `summary` are auto-defaulted by the tool.
**CLI fallback** (when MCP unavailable):
```
-Bash("ccw team log --team <session-id> --from analyst --to coordinator --type analysis_ready --summary \"[analyst] ...\" --ref <path> --json")
+Bash("ccw team log --session-id <session-id> --from analyst --type analysis_ready --ref <path> --json")
```
---
@@ -108,7 +106,7 @@ For parallel instances, parse `--agent-name` from arguments for owner matching.
1. Extract session path from task description
2. Extract topic, perspective, dimensions from task metadata
3. Check for direction-fix type (supplementary analysis)
-4. Read shared-memory.json for existing context
+4. Read role states via team_msg(operation="get_state") for existing context
5. Read corresponding exploration results
**Context extraction**:

View File

@@ -1,237 +1,294 @@
-# Command: dispatch
+# Command: Dispatch
-> Task chain creation and dependency management. Creates the pipeline task chain according to the pipeline mode and assigns it to worker roles.
+Create the analysis task chain with correct dependencies and structured task descriptions. Supports Quick, Standard, and Deep pipeline modes.
-## When to Use
+## Phase 2: Context Loading
-- Phase 3 of Coordinator
-- Pipeline mode determined; the task chain needs to be created
-- Team created, workers spawned
+| Input | Source | Required |
+|-------|--------|----------|
+| User topic | From coordinator Phase 1 | Yes |
+| Session folder | From coordinator Phase 2 | Yes |
+| Pipeline mode | From coordinator Phase 1 | Yes |
+| Perspectives | From coordinator Phase 1 (dimension detection) | Yes |
-**Trigger conditions**:
-- After coordinator Phase 2 completes
-- The discussion loop needs supplementary analysis tasks created
-- Direction adjustment needs new exploration/analysis tasks created
+1. Load topic, pipeline mode, and selected perspectives from coordinator state
+2. Load pipeline stage definitions from SKILL.md Task Metadata Registry
+3. Determine depth = number of selected perspectives (Quick: always 1)
-## Strategy
+## Phase 3: Task Chain Creation
-### Delegation Mode
+### Task Description Template
-**Mode**: Direct (the coordinator operates TaskCreate/TaskUpdate itself)
+Every task description uses a structured format:
-### Decision Logic
-```javascript
-// Select pipeline based on pipelineMode and perspectives
-function buildPipeline(pipelineMode, perspectives, sessionFolder, taskDescription, dimensions) {
-  const pipelines = {
-    'quick': [
-      { prefix: 'EXPLORE', suffix: '001', owner: 'explorer', desc: 'Codebase exploration', meta: `perspective: general\ndimensions: ${dimensions.join(', ')}`, blockedBy: [] },
-      { prefix: 'ANALYZE', suffix: '001', owner: 'analyst', desc: 'Comprehensive analysis', meta: `perspective: technical\ndimensions: ${dimensions.join(', ')}`, blockedBy: ['EXPLORE-001'] },
-      { prefix: 'SYNTH', suffix: '001', owner: 'synthesizer', desc: 'Conclusion synthesis', blockedBy: ['ANALYZE-001'] }
-    ],
-    'standard': buildStandardPipeline(perspectives, dimensions),
-    'deep': buildDeepPipeline(perspectives, dimensions)
-  }
-  return pipelines[pipelineMode] || pipelines['standard']
-}
-function buildStandardPipeline(perspectives, dimensions) {
-  const stages = []
-  const perspectiveList = perspectives.length > 0 ? perspectives : ['technical']
-  const isParallel = perspectiveList.length > 1
-  // Parallel explorations -- each gets a distinct agent name for true parallelism
-  perspectiveList.forEach((p, i) => {
-    const num = String(i + 1).padStart(3, '0')
-    const explorerName = isParallel ? `explorer-${i + 1}` : 'explorer'
-    stages.push({
-      prefix: 'EXPLORE', suffix: num, owner: explorerName,
-      desc: `Codebase exploration (${p})`,
-      meta: `perspective: ${p}\ndimensions: ${dimensions.join(', ')}`,
-      blockedBy: []
-    })
-  })
-  // Parallel analyses -- each gets a distinct agent name for true parallelism
-  perspectiveList.forEach((p, i) => {
-    const num = String(i + 1).padStart(3, '0')
-    const analystName = isParallel ? `analyst-${i + 1}` : 'analyst'
-    stages.push({
-      prefix: 'ANALYZE', suffix: num, owner: analystName,
-      desc: `Deep analysis (${p})`,
-      meta: `perspective: ${p}\ndimensions: ${dimensions.join(', ')}`,
-      blockedBy: [`EXPLORE-${num}`]
-    })
-  })
-  // Discussion (blocked by all analyses)
-  const analyzeIds = perspectiveList.map((_, i) => `ANALYZE-${String(i + 1).padStart(3, '0')}`)
-  stages.push({
-    prefix: 'DISCUSS', suffix: '001', owner: 'discussant',
-    desc: 'Discussion processing (Round 1)',
-    meta: `round: 1\ntype: initial`,
-    blockedBy: analyzeIds
-  })
-  // Synthesis (blocked by discussion)
-  stages.push({
-    prefix: 'SYNTH', suffix: '001', owner: 'synthesizer',
-    desc: 'Conclusion synthesis',
-    blockedBy: ['DISCUSS-001']
-  })
-  return stages
-}
-function buildDeepPipeline(perspectives, dimensions) {
-  // Same as standard but SYNTH is not created initially
-  // It will be created after discussion loop completes
-  const stages = buildStandardPipeline(perspectives, dimensions)
-  // Remove SYNTH -- will be created dynamically after discussion loop
-  return stages.filter(s => s.prefix !== 'SYNTH')
-}
-```
```
TaskCreate({
subject: "<TASK-ID>",
owner: "<role>",
description: "PURPOSE: <what this task achieves> | Success: <measurable completion criteria>
TASK:
- <step 1: specific action>
- <step 2: specific action>
- <step 3: specific action>
CONTEXT:
- Session: <session-folder>
- Topic: <analysis-topic>
- Perspective: <perspective or 'all'>
- Upstream artifacts: <artifact-1>, <artifact-2>
- Shared memory: <session>/wisdom/.msg/meta.json
EXPECTED: <deliverable path> + <quality criteria>
CONSTRAINTS: <scope limits, focus areas>
---
InnerLoop: false",
blockedBy: [<dependency-list>],
status: "pending"
})
```
-## Execution Steps
+### Mode Router
-### Step 1: Context Preparation
+| Mode | Action |
+|------|--------|
+| `quick` | Create 3 tasks: EXPLORE-001 -> ANALYZE-001 -> SYNTH-001 |
+| `standard` | Create N explorers + N analysts + DISCUSS-001 + SYNTH-001 |
+| `deep` | Same as standard but omit SYNTH-001 (created after discussion loop) |
-```javascript
-const pipeline = buildPipeline(pipelineMode, selectedPerspectives, sessionFolder, taskDescription, dimensions)
-```
---
### Quick Mode Task Chain
**EXPLORE-001** (explorer):
```
TaskCreate({
subject: "EXPLORE-001",
description: "PURPOSE: Explore codebase structure for analysis topic | Success: Key files, patterns, and findings collected
TASK:
- Detect project structure and relevant modules
- Search for code related to analysis topic
- Collect file references, patterns, and key findings
CONTEXT:
- Session: <session-folder>
- Topic: <topic>
- Perspective: general
- Dimensions: <dimensions>
- Shared memory: <session>/wisdom/.msg/meta.json
EXPECTED: <session>/explorations/exploration-001.json | Structured exploration with files and findings
CONSTRAINTS: Focus on <topic> scope
---
InnerLoop: false",
status: "pending"
})
```
-### Step 2: Execute Strategy
-```javascript
-const taskIds = {}
-for (const stage of pipeline) {
-  const taskSubject = `${stage.prefix}-${stage.suffix}: ${stage.desc}`
-  // Build the task description (includes session and context info)
-  const fullDesc = [
-    stage.desc,
-    `\nsession: ${sessionFolder}`,
-    `\ntopic: ${taskDescription}`,
-    stage.meta ? `\n${stage.meta}` : '',
-    `\n\nGoal: ${taskDescription}`
-  ].join('')
-  // Create the task
-  TaskCreate({
-    subject: taskSubject,
-    description: fullDesc,
-    activeForm: `${stage.desc} in progress`
-  })
-  // Record the task ID
-  const allTasks = TaskList()
-  const newTask = allTasks.find(t => t.subject.startsWith(`${stage.prefix}-${stage.suffix}`))
-  taskIds[`${stage.prefix}-${stage.suffix}`] = newTask.id
-  // Set owner and dependencies
-  const blockedByIds = stage.blockedBy
-    .map(dep => taskIds[dep])
-    .filter(Boolean)
-  TaskUpdate({
-    taskId: newTask.id,
-    owner: stage.owner,
-    addBlockedBy: blockedByIds
-  })
-}
-```
**ANALYZE-001** (analyst):
```
TaskCreate({
subject: "ANALYZE-001",
description: "PURPOSE: Deep analysis of topic from technical perspective | Success: Actionable insights with confidence levels
TASK:
- Load exploration results and build analysis context
- Analyze from technical perspective across selected dimensions
- Generate insights, findings, discussion points, recommendations
CONTEXT:
- Session: <session-folder>
- Topic: <topic>
- Perspective: technical
- Dimensions: <dimensions>
- Upstream artifacts: explorations/exploration-001.json
- Shared memory: <session>/wisdom/.msg/meta.json
EXPECTED: <session>/analyses/analysis-001.json | Structured analysis with evidence
CONSTRAINTS: Focus on technical perspective | <dimensions>
---
InnerLoop: false",
blockedBy: ["EXPLORE-001"],
status: "pending"
})
```
-### Step 3: Result Processing
-```javascript
-// Validate the task chain
-const allTasks = TaskList()
-const chainTasks = pipeline.map(s => taskIds[`${s.prefix}-${s.suffix}`]).filter(Boolean)
-const chainValid = chainTasks.length === pipeline.length
-if (!chainValid) {
-  mcp__ccw-tools__team_msg({
-    operation: "log", team: sessionId, from: "coordinator",
-    to: "user", type: "error",
-    summary: `[coordinator] Task chain creation incomplete: ${chainTasks.length}/${pipeline.length}`
-  })
-}
-```
**SYNTH-001** (synthesizer):
```
TaskCreate({
subject: "SYNTH-001",
description: "PURPOSE: Integrate analysis into final conclusions | Success: Executive summary with recommendations
TASK:
- Load all exploration, analysis, and discussion artifacts
- Extract themes, consolidate evidence, prioritize recommendations
- Write conclusions and update discussion.md
CONTEXT:
- Session: <session-folder>
- Topic: <topic>
- Upstream artifacts: explorations/*.json, analyses/*.json
- Shared memory: <session>/wisdom/.msg/meta.json
EXPECTED: <session>/conclusions.json + discussion.md update | Final conclusions with confidence levels
CONSTRAINTS: Pure integration, no new exploration
---
InnerLoop: false",
blockedBy: ["ANALYZE-001"],
status: "pending"
})
```
---
### Standard Mode Task Chain
Create tasks in dependency order with parallel exploration and analysis windows:
**EXPLORE-001..N** (explorer, parallel): One per perspective. Each receives unique agent name (explorer-1, explorer-2, ...) for task discovery matching.
```
// For each perspective[i]:
TaskCreate({
subject: "EXPLORE-<NNN>",
owner: "explorer-<i+1>",
description: "PURPOSE: Explore codebase from <perspective> angle | Success: Perspective-specific files and patterns collected
TASK:
- Search codebase from <perspective> perspective
- Collect files, patterns, findings relevant to this angle
- Generate questions for downstream analysis
CONTEXT:
- Session: <session-folder>
- Topic: <topic>
- Perspective: <perspective>
- Dimensions: <dimensions>
- Shared memory: <session>/wisdom/.msg/meta.json
EXPECTED: <session>/explorations/exploration-<NNN>.json
CONSTRAINTS: Focus on <perspective> angle
---
InnerLoop: false",
status: "pending"
})
```
**ANALYZE-001..N** (analyst, parallel): One per perspective. Each blocked by its corresponding EXPLORE-N.
```
TaskCreate({
subject: "ANALYZE-<NNN>",
owner: "analyst-<i+1>",
description: "PURPOSE: Deep analysis from <perspective> perspective | Success: Insights with confidence and evidence
TASK:
- Load exploration-<NNN> results
- Analyze from <perspective> perspective
- Generate insights, discussion points, open questions
CONTEXT:
- Session: <session-folder>
- Topic: <topic>
- Perspective: <perspective>
- Dimensions: <dimensions>
- Upstream artifacts: explorations/exploration-<NNN>.json
- Shared memory: <session>/wisdom/.msg/meta.json
EXPECTED: <session>/analyses/analysis-<NNN>.json
CONSTRAINTS: <perspective> perspective | <dimensions>
---
InnerLoop: false",
blockedBy: ["EXPLORE-<NNN>"],
status: "pending"
})
```
**DISCUSS-001** (discussant): Blocked by all ANALYZE tasks.
```
TaskCreate({
subject: "DISCUSS-001",
description: "PURPOSE: Process analysis results into discussion summary | Success: Convergent themes and discussion points identified
TASK:
- Aggregate all analysis results across perspectives
- Identify convergent themes and conflicting views
- Generate top discussion points and open questions
CONTEXT:
- Session: <session-folder>
- Topic: <topic>
- Round: 1
- Type: initial
- Upstream artifacts: analyses/*.json
- Shared memory: <session>/wisdom/.msg/meta.json
EXPECTED: <session>/discussions/discussion-round-001.json + discussion.md update
CONSTRAINTS: Aggregate only, no new exploration
---
InnerLoop: false",
blockedBy: ["ANALYZE-001", ..., "ANALYZE-<N>"],
status: "pending"
})
```
**SYNTH-001** (synthesizer): Blocked by DISCUSS-001.
```
TaskCreate({
subject: "SYNTH-001",
description: "PURPOSE: Cross-perspective integration into final conclusions | Success: Executive summary with prioritized recommendations
...same as Quick mode SYNTH-001 but blocked by DISCUSS-001..."
blockedBy: ["DISCUSS-001"],
status: "pending"
})
```
---
### Deep Mode Task Chain
Same as Standard mode, but **omit SYNTH-001**. It will be created dynamically after the discussion loop completes, blocked by the last DISCUSS-N task.
---
## Discussion Loop Task Creation
-Tasks are created dynamically during the discussion loop:
+Dynamic tasks created during discussion loop:
-```javascript
-// Create a new discussion-round task
-function createDiscussionTask(round, type, userFeedback, sessionFolder) {
-  const suffix = String(round).padStart(3, '0')
-  TaskCreate({
-    subject: `DISCUSS-${suffix}: Discussion processing (Round ${round})`,
-    description: `Discussion processing\nsession: ${sessionFolder}\nround: ${round}\ntype: ${type}\nuser_feedback: ${userFeedback}`,
-    activeForm: `Discussion Round ${round} in progress`
-  })
-  const allTasks = TaskList()
-  const newTask = allTasks.find(t => t.subject.startsWith(`DISCUSS-${suffix}`))
-  TaskUpdate({ taskId: newTask.id, owner: 'discussant' })
-  return newTask.id
-}
-// Create a supplementary analysis task (on direction adjustment)
-function createAnalysisFix(round, adjustedFocus, sessionFolder) {
-  const suffix = `fix-${round}`
-  TaskCreate({
-    subject: `ANALYZE-${suffix}: Supplementary analysis (direction adjustment Round ${round})`,
-    description: `Supplementary analysis\nsession: ${sessionFolder}\nadjusted_focus: ${adjustedFocus}\ntype: direction-fix`,
-    activeForm: `Supplementary analysis Round ${round} in progress`
-  })
-  const allTasks = TaskList()
-  const newTask = allTasks.find(t => t.subject.startsWith(`ANALYZE-${suffix}`))
-  TaskUpdate({ taskId: newTask.id, owner: 'analyst' })
-  return newTask.id
-}
-// Create the final synthesis task
-function createSynthesisTask(sessionFolder, blockedByIds) {
-  TaskCreate({
-    subject: `SYNTH-001: Conclusion synthesis`,
-    description: `Cross-perspective integration\nsession: ${sessionFolder}\ntype: final`,
-    activeForm: `Conclusion synthesis in progress`
-  })
-  const allTasks = TaskList()
-  const newTask = allTasks.find(t => t.subject.startsWith('SYNTH-001'))
-  TaskUpdate({
-    taskId: newTask.id,
-    owner: 'synthesizer',
-    addBlockedBy: blockedByIds
-  })
-  return newTask.id
-}
-```
**DISCUSS-N** (subsequent rounds):
```
TaskCreate({
subject: "DISCUSS-<NNN>",
description: "PURPOSE: Process discussion round <N> | Success: Updated understanding with user feedback integrated
TASK:
- Process user feedback: <feedback>
- Execute <type> discussion strategy
- Update discussion timeline
CONTEXT:
- Session: <session-folder>
- Topic: <topic>
- Round: <N>
- Type: <deepen|direction-adjusted|specific-questions>
- User feedback: <feedback>
- Shared memory: <session>/wisdom/.msg/meta.json
EXPECTED: <session>/discussions/discussion-round-<NNN>.json
---
InnerLoop: false",
status: "pending"
})
```
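Workers recover `Session`, `Round`, and `Type` by parsing the CONTEXT block of these descriptions. A minimal sketch of that extraction (the helper name and exact parsing rules are illustrative, not part of the task tool API):

```javascript
// Parse "- Key: value" lines from the CONTEXT section of a task description.
// Illustrative helper -- real workers do this extraction inline.
function parseTaskContext(description) {
  const ctx = {};
  let inContext = false;
  for (const line of description.split("\n")) {
    const trimmed = line.trim();
    if (trimmed === "CONTEXT:") { inContext = true; continue; }
    // Any other ALL-CAPS section header ends the CONTEXT block.
    if (/^[A-Z]+:$/.test(trimmed) || trimmed.startsWith("EXPECTED:")) { inContext = false; continue; }
    if (inContext && trimmed.startsWith("- ")) {
      const m = trimmed.slice(2).match(/^([^:]+):\s*(.*)$/);
      if (m) ctx[m[1].trim()] = m[2].trim();
    }
  }
  return ctx;
}

const desc = "PURPOSE: x\nTASK:\n- do things\nCONTEXT:\n- Session: .workflow/.team/UAN-demo-2026-03-03\n- Round: 1\n- Type: initial\nEXPECTED: out.json";
const ctx = parseTaskContext(desc);
```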
**ANALYZE-fix-N** (direction adjustment):
```
TaskCreate({
subject: "ANALYZE-fix-<N>",
description: "PURPOSE: Supplementary analysis with adjusted focus | Success: New insights from adjusted direction
TASK:
- Re-analyze from adjusted perspective: <adjusted_focus>
- Build on previous exploration findings
- Generate updated discussion points
CONTEXT:
- Session: <session-folder>
- Topic: <topic>
- Type: direction-fix
- Adjusted focus: <adjusted_focus>
- Shared memory: <session>/wisdom/.msg/meta.json
EXPECTED: <session>/analyses/analysis-fix-<N>.json
---
InnerLoop: false",
status: "pending"
})
```
## Output Format
```
## Task Chain Created
### Mode: [quick|standard|deep]
### Pipeline Stages: [count]
- [prefix]-[suffix]: [description] (owner: [role], blocked by: [deps])
### Verification: PASS/FAIL
```
## Phase 4: Validation
Verify task chain integrity:
| Check | Method | Expected |
|-------|--------|----------|
| Task count correct | TaskList count | quick: 3, standard: 2N+2, deep: 2N+1 |
| Dependencies correct | Trace blockedBy | Acyclic, correct ordering |
| All descriptions have PURPOSE/TASK/CONTEXT/EXPECTED | Pattern check | All present |
| Session path in every task | Check CONTEXT | Session: <folder> present |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Task creation fails | Retry once, then report to user |
| Dependency cycle detected | Flatten dependencies, warn coordinator |
| Invalid pipelineMode | Default to 'standard' mode |
| Too many perspectives (>4) | Truncate to first 4, warn user |
| Timeout (>5 min) | Report partial results, notify coordinator |
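The task-count and acyclicity checks can be sketched as two small helpers (names are illustrative; the real command runs these checks against TaskList output):

```javascript
// Expected task count per pipeline mode (N = number of perspectives).
// quick: EXPLORE + ANALYZE + SYNTH; standard: N explores + N analyses + DISCUSS + SYNTH;
// deep: same as standard minus SYNTH (created later by the discussion loop).
function expectedTaskCount(mode, n) {
  if (mode === "quick") return 3;
  if (mode === "standard") return 2 * n + 2;
  return 2 * n + 1; // deep
}

// Detect a dependency cycle in blockedBy edges via recursive DFS coloring.
function hasDependencyCycle(tasks) {
  const deps = new Map(tasks.map(t => [t.subject, t.blockedBy || []]));
  const state = new Map(); // undefined = unvisited, 1 = in stack, 2 = done
  function visit(node) {
    if (state.get(node) === 1) return true; // back edge -> cycle
    if (state.get(node) === 2) return false;
    state.set(node, 1);
    for (const dep of deps.get(node) || []) {
      if (visit(dep)) return true;
    }
    state.set(node, 2);
    return false;
  }
  return tasks.some(t => visit(t.subject));
}
```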


@@ -67,8 +67,8 @@ const autoYes = /\b(-y|--yes)\b/.test(args)
### Step 1: Context Preparation
```javascript
// Get current state from role state
const sharedMemory = mcp__ccw-tools__team_msg({ operation: "get_state", session_id: sessionId })
let discussionRound = 0
const MAX_DISCUSSION_ROUNDS = pipelineMode === 'deep' ? 5 : (pipelineMode === 'standard' ? 1 : 0)
@@ -102,36 +102,32 @@ for (const stageTask of preDiscussionTasks) {
TaskUpdate({ taskId: stageTask.id, status: 'in_progress' })
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
to: workerConfig.role, type: "task_unblocked",
summary: `[coordinator] Starting stage: ${stageTask.subject} → ${workerConfig.role}`
})
// 3. Synchronously spawn worker -- blocks until the worker returns (Stop-Wait core)
const workerResult = Task({
subagent_type: "team-worker",
description: `Spawn ${workerConfig.role} worker for ${stageTask.subject}`,
team_name: teamName,
name: workerConfig.role,
prompt: `## Role Assignment
role: ${workerConfig.role}
role_spec: .claude/skills/team-ultra-analyze/role-specs/${workerConfig.role}.md
session: ${sessionFolder}
session_id: ${sessionId}
team_name: ${teamName}
requirement: ${stageTask.description || taskDescription}
inner_loop: false
## Current Task
- Task ID: ${stageTask.id}
- Task: ${stageTask.subject}
- Session: ${sessionFolder}
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`,
run_in_background: false
})
@@ -142,7 +138,7 @@ Skill(skill="team-ultra-analyze", args="${workerConfig.skillArgs}")
handleStageTimeout(stageTask, 0, autoYes)
} else {
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
to: "user", type: "quality_gate",
summary: `[coordinator] Stage complete: ${stageTask.subject}`
})
@@ -206,26 +202,25 @@ if (MAX_DISCUSSION_ROUNDS === 0) {
if (discussTask) {
TaskUpdate({ taskId: discussTask.id, status: 'in_progress' })
const discussResult = Task({
subagent_type: "team-worker",
description: `Spawn discussant worker for ${discussTask.subject}`,
team_name: teamName,
name: "discussant",
prompt: `## Role Assignment
role: discussant
role_spec: .claude/skills/team-ultra-analyze/role-specs/discussant.md
session: ${sessionFolder}
session_id: ${sessionId}
team_name: ${teamName}
requirement: Discussion round ${discussionRound + 1}
inner_loop: false
## Current Task
- Task ID: ${discussTask.id}
- Task: ${discussTask.subject}
- Session: ${sessionFolder}
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`,
run_in_background: false
})
}
@@ -248,14 +243,19 @@ All outputs carry [discussant] tag.`,
const feedback = feedbackResult["Discussion Feedback"]
// 📌 Record user feedback to decision_trail
const latestMemory = mcp__ccw-tools__team_msg({ operation: "get_state", session_id: sessionId })
latestMemory.decision_trail = latestMemory.decision_trail || []
latestMemory.decision_trail.push({
round: discussionRound + 1,
decision: feedback,
context: `User feedback at discussion round ${discussionRound + 1}`,
timestamp: new Date().toISOString()
})
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
type: "state_update",
data: { decision_trail: latestMemory.decision_trail }
})
if (feedback === "分析完成") {
// 📌 Record completion decision
@@ -356,7 +356,7 @@ ${data.updated_understanding || '(Updated by discussant)'}
function handleStageTimeout(stageTask, _unused, autoYes) {
if (autoYes) {
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
to: "user", type: "error",
summary: `[coordinator] [auto] Stage ${stageTask.subject}: worker returned without completing; auto-skipping`
})
@@ -382,7 +382,7 @@ function handleStageTimeout(stageTask, _unused, autoYes) {
TaskUpdate({ taskId: stageTask.id, status: 'deleted' })
} else if (answer === "终止流水线") {
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
to: "user", type: "shutdown",
summary: `[coordinator] User terminated pipeline; current stage: ${stageTask.subject}`
})
@@ -423,7 +423,7 @@ All outputs carry [synthesizer] tag.`,
}
// Aggregate all results
const finalMemory = mcp__ccw-tools__team_msg({ operation: "get_state", session_id: sessionId })
const allFinalTasks = TaskList()
const workerTasks = allFinalTasks.filter(t => t.owner && t.owner !== 'coordinator')
const summary = {


@@ -1,340 +1,254 @@
# Coordinator - Ultra Analyze Team
**Role**: coordinator
**Type**: Orchestrator
**Team**: ultra-analyze
Orchestrates the analysis pipeline: topic clarification, pipeline mode selection, task dispatch, discussion loop management, and final synthesis. Spawns team-worker agents for all worker roles.
## Boundaries
### MUST
- All output (SendMessage, team_msg, logs) must carry the `[coordinator]` identifier
- Use `team-worker` agent type for all worker spawns (NOT `general-purpose`)
- Follow Command Execution Protocol for dispatch and monitor commands
- Respect pipeline stage dependencies (blockedBy)
- Stop after spawning workers -- wait for callbacks
- Handle discussion loop with max 5 rounds (Deep mode)
- Execute completion action in Phase 5
### MUST NOT
- Implement domain logic (exploring, analyzing, discussing, synthesizing) -- workers handle this
- Spawn workers without creating tasks first
- Skip checkpoints when configured
- Force-advance pipeline past failed stages
- Directly call cli-explore-agent, CLI analysis tools, or execute codebase exploration
> **Core principle**: the coordinator directs; it does not execute. All actual work must be delegated to worker roles via TaskCreate.
---
## Command Execution Protocol
### Available Commands
| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `dispatch` | [commands/dispatch.md](commands/dispatch.md) | Phase 3 | Task chain creation and dependency management |
| `monitor` | [commands/monitor.md](commands/monitor.md) | Phase 4 | Discussion loop + progress monitoring |
When coordinator needs to execute a command (dispatch, monitor):
1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** -- NOT separate agents or subprocesses
4. **Execute synchronously** -- complete the command workflow before proceeding
Example:
```
Phase 3 needs task dispatch
-> Read roles/coordinator/commands/dispatch.md
-> Execute Phase 2 (Context Loading)
-> Execute Phase 3 (Task Chain Creation)
-> Execute Phase 4 (Validation)
-> Continue to Phase 4
```
---
## Entry Router
When coordinator is invoked, detect invocation type:
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains role tag [explorer], [analyst], [discussant], [synthesizer] | -> handleCallback |
| Status check | Arguments contain "check" or "status" | -> handleCheck |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume |
| Pipeline complete | All tasks have status "completed" | -> handleComplete |
| Interrupted session | Active/paused session exists | -> Phase 0 (Session Resume Check) |
| New session | None of above | -> Phase 1 (Topic Understanding) |
For callback/check/resume/complete: load `commands/monitor.md` and execute matched handler, then STOP.
### Router Implementation
1. **Load session context** (if exists):
- Scan `.workflow/.team/UAN-*/.msg/meta.json` for active/paused sessions
- If found, extract session folder path, status, and `pipeline_mode`
2. **Parse $ARGUMENTS** for detection keywords:
- Check for role name tags in message content
- Check for "check", "status", "resume", "continue" keywords
3. **Route to handler**:
- For monitor handlers: Read `commands/monitor.md`, execute matched handler, STOP
- For Phase 0: Execute Session Resume Check below
- For Phase 1: Execute Topic Understanding below
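The keyword-based part of this routing can be sketched as follows (a simplification: the pipeline-complete and interrupted-session checks additionally require TaskList and session-folder scans; the function name is illustrative):

```javascript
// Route an invocation to a handler based on the detection table above.
// Only the message/argument-based detections are modeled here.
function detectInvocation(message) {
  const roleTags = ["[explorer]", "[analyst]", "[discussant]", "[synthesizer]"];
  if (roleTags.some(tag => message.includes(tag))) return "handleCallback";
  if (/\b(check|status)\b/i.test(message)) return "handleCheck";
  if (/\b(resume|continue)\b/i.test(message)) return "handleResume";
  return "newSession"; // Phase 0/1 decision then depends on session scan
}
```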
---
## Phase 0: Session Resume Check
Triggered when an active/paused session is detected on coordinator entry.
**Workflow**:
1. Scan `.workflow/.team/UAN-*/` for sessions with status "active" or "paused"
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for user selection
**Session Reconciliation**:
1. Audit TaskList -> get real status of all tasks
2. Reconcile: session state <-> TaskList status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
| Task Status | Session Expects | Action |
|-------------|----------------|--------|
| in_progress | Should be running | Reset to pending (worker was interrupted) |
| completed | Already tracked | Skip |
| pending + unblocked | Ready to run | Include in spawn list |
4. Rebuild team if not active:
```
TeamCreate({ team_name: "ultra-analyze" })
```
5. Spawn workers for ready tasks -> Phase 4 coordination loop
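The reconciliation table's status reset can be sketched as (illustrative helper, assuming tasks carry `status` fields as returned by TaskList):

```javascript
// Apply the reconciliation table to an audited task list:
// interrupted in_progress tasks reset to pending; everything else is unchanged.
function reconcileTasks(tasks) {
  return tasks.map(t =>
    t.status === "in_progress" ? { ...t, status: "pending" } : t);
}
```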
---
## Phase 1: Topic Understanding & Requirement Clarification
1. Parse user task description from $ARGUMENTS
2. Extract explicit settings: `--mode`, scope, focus areas
3. **Pipeline mode selection**:
| Condition | Mode | Depth |
|-----------|------|-------|
| `--mode=quick` or topic contains "quick/overview/fast" | Quick | 1 |
| `--mode=deep` or topic contains "deep/thorough/detailed/comprehensive" | Deep | N (from perspectives) |
| Default | Standard | N (from perspectives) |
4. **Dimension detection** (from topic keywords):
| Dimension | Keywords |
|-----------|----------|
| architecture | architecture, design, structure |
| implementation | implement, code |
| performance | performance, optimize |
| security | security, auth |
| concept | concept, theory |
| comparison | compare, vs |
| decision | decision, choice |
5. **Interactive clarification** (non-auto mode): AskUserQuestion for focus, perspectives, depth.
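The mode-selection and dimension-detection tables above reduce to straightforward keyword matching. A sketch (keyword lists copied from the tables; function names are illustrative):

```javascript
// Pipeline mode selection per the table above.
function selectMode(args, topic) {
  if (/--mode=quick/.test(args) || /\b(quick|overview|fast)\b/i.test(topic)) return "quick";
  if (/--mode=deep/.test(args) || /\b(deep|thorough|detailed|comprehensive)\b/i.test(topic)) return "deep";
  return "standard";
}

// Dimension detection: a topic can match several dimensions at once.
const DIMENSION_KEYWORDS = {
  architecture: ["architecture", "design", "structure"],
  implementation: ["implement", "code"],
  performance: ["performance", "optimize"],
  security: ["security", "auth"],
  concept: ["concept", "theory"],
  comparison: ["compare", "vs"],
  decision: ["decision", "choice"],
};
function detectDimensions(topic) {
  const lower = topic.toLowerCase();
  return Object.keys(DIMENSION_KEYWORDS)
    .filter(dim => DIMENSION_KEYWORDS[dim].some(kw => lower.includes(kw)));
}
```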
---
## Phase 2: Create Team + Initialize Session
1. Generate session ID: `UAN-{slug}-{YYYY-MM-DD}`
2. Create session folder structure:
```
.workflow/.team/UAN-{slug}-{date}/
+-- .msg/messages.jsonl
+-- .msg/meta.json
+-- discussion.md
+-- explorations/
+-- analyses/
+-- discussions/
+-- wisdom/
+-- learnings.md, decisions.md, conventions.md, issues.md
```
3. Write session.json with mode, requirement, timestamp
4. Initialize .msg/meta.json
5. Call `TeamCreate({ team_name: "ultra-analyze" })`
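Session ID generation can be sketched as follows (the slug rule -- lowercase, hyphen-joined, truncated -- is an assumption; only the `UAN-{slug}-{YYYY-MM-DD}` shape is specified above):

```javascript
// Build a session ID of the form UAN-{slug}-{YYYY-MM-DD}.
function sessionId(topic, date = new Date()) {
  const slug = topic.toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")   // non-alphanumeric runs -> single hyphen
    .replace(/^-+|-+$/g, "")       // trim leading/trailing hyphens
    .slice(0, 24);                 // keep folder names readable
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return `UAN-${slug}-${day}`;
}
```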
---
## Phase 3: Create Task Chain
Execute `commands/dispatch.md` inline (Command Execution Protocol):
**Quick Mode** (3 beats, serial):
```
EXPLORE-001 → ANALYZE-001 → SYNTH-001
```
**Standard Mode** (4 beats, parallel windows):
```
[EXPLORE-001..N](parallel) → [ANALYZE-001..N](parallel) → DISCUSS-001 → SYNTH-001
```
**Deep Mode** (4+ beats, with discussion loop):
```
[EXPLORE-001..N] → [ANALYZE-001..N] → DISCUSS-001 → [ANALYZE-fix] → DISCUSS-002 → ... → SYNTH-001
```
1. Read `roles/coordinator/commands/dispatch.md`
2. Follow dispatch Phase 2 -> Phase 3 -> Phase 4
3. Result: all pipeline tasks created with correct blockedBy dependencies
---
## Phase 4: Spawn & Coordination Loop
**Design**: Spawn-and-Stop + Callback pattern.
- Spawn workers with `Task(run_in_background: true)` -> immediately return
- Worker completes -> SendMessage callback -> auto-advance
- User can use "check" / "resume" to manually advance
- Coordinator does one operation per invocation, then STOPS
### Initial Spawn
Find first unblocked tasks and spawn their workers:
```
Task({
subagent_type: "team-worker",
description: "Spawn explorer worker",
team_name: "ultra-analyze",
name: "explorer",
run_in_background: true,
prompt: `## Role Assignment
role: explorer
role_spec: .claude/skills/team-ultra-analyze/role-specs/explorer.md
session: <session-folder>
session_id: <session-id>
team_name: ultra-analyze
requirement: <topic-description>
inner_loop: false
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`
})
```
**STOP** after spawning. Wait for worker callback.
### Coordination (via monitor.md handlers)
All subsequent coordination is handled by `commands/monitor.md` handlers triggered by worker callbacks:
| Received Message | Action |
|-----------------|--------|
| `exploration_ready` | Mark EXPLORE complete -> unblock ANALYZE |
| `analysis_ready` | Mark ANALYZE complete -> unblock DISCUSS or SYNTH |
| `discussion_processed` | Mark DISCUSS complete -> AskUser -> decide next |
| `synthesis_ready` | Mark SYNTH complete -> Phase 5 |
| Worker: `error` | Assess severity -> retry or report to user |
**Discussion loop logic** (Standard/Deep mode):
| Round | Action |
|-------|--------|
| After DISCUSS-N completes | AskUserQuestion: continue / adjust direction / complete / specific questions |
| User: "继续深入" (continue deeper) | Create DISCUSS-(N+1) |
| User: "调整方向" (adjust direction) | Create ANALYZE-fix + DISCUSS-(N+1) |
| User: "分析完成" (analysis complete) | Exit loop, create SYNTH-001 |
| Round > MAX_ROUNDS (5) | Force synthesis, offer continuation |
**Pipeline advancement** driven by three wake sources:
- Worker callback (automatic) -> Entry Router -> handleCallback
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)
Handler flow:
- handleCallback -> mark task done -> check pipeline -> handleSpawnNext
- handleSpawnNext -> find ready tasks -> spawn team-worker agents -> STOP
- handleComplete -> all done -> Phase 5
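The handleCallback -> handleSpawnNext advancement can be sketched as a pure function over the task list (illustrative names; the real handlers also log to the message bus and spawn workers):

```javascript
// Mark the reporting task completed, then compute which pending tasks
// become unblocked -- the input set for handleSpawnNext.
function advance(tasks, completedSubject) {
  const updated = tasks.map(t =>
    t.subject === completedSubject ? { ...t, status: "completed" } : t);
  const done = new Set(updated.filter(t => t.status === "completed").map(t => t.subject));
  const ready = updated.filter(t =>
    t.status === "pending" && (t.blockedBy || []).every(d => done.has(d)));
  return { updated, ready: ready.map(t => t.subject) };
}
```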
---
## Phase 5: Report + Completion Action
1. Load session state -> count completed tasks, duration
2. List deliverables with output paths:
| Deliverable | Path |
|-------------|------|
| Explorations | <session>/explorations/*.json |
| Analyses | <session>/analyses/*.json |
| Discussion | <session>/discussion.md |
| Conclusions | <session>/conclusions.json |
3. Include discussion summaries and decision trail
4. Output pipeline summary: task count, duration, mode
5. **Completion Action** (interactive):
```
AskUserQuestion({
  questions: [{
    question: "Team pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
      { label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
      { label: "Export Results", description: "Export deliverables to a specified location, then clean" }
    ]
  }]
})
```
6. Handle user choice:
| Choice | Steps |
|--------|-------|
| Archive & Clean | TaskList -> verify all completed -> update session status="completed" -> TeamDelete("ultra-analyze") -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions |
| Export Results | AskUserQuestion for target directory -> copy artifacts -> Archive & Clean |
---
@@ -345,10 +259,6 @@ EXPLORE-001 → ANALYZE-001 → SYNTH-001
| Teammate unresponsive | Send follow-up, 2x -> respawn |
| Explorer finds nothing | Continue with limited context, note limitation |
| Discussion loop stuck >5 rounds | Force synthesis, offer continuation |
| CLI unavailable | Fallback chain: gemini -> codex -> claude |
| User timeout in discussion | Save state, show resume command |
| Max rounds reached | Force synthesis, offer continuation option |
| Session folder conflict | Append timestamp suffix |
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Dependency cycle | Detect, report to user, halt |


@@ -17,7 +17,7 @@
- Only communicate with coordinator via SendMessage
- Work strictly within discussion processing responsibility scope
- Execute deep exploration based on user feedback and existing analysis
- Share discussion results via team_msg(type='state_update')
- Update discussion.md discussion timeline
### MUST NOT
@@ -71,21 +71,19 @@ Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
```
mcp__ccw-tools__team_msg({
operation: "log",
session_id: <session-id>,
from: "discussant",
to: "coordinator",
type: "discussion_processed",
summary: "[discussant] DISCUSS complete: <summary>",
ref: "<output-path>"
})
```
> `to` and `summary` are auto-defaulted by the tool.
**CLI fallback** (when MCP unavailable):
```
Bash("ccw team log --session-id <session-id> --from discussant --type discussion_processed --ref <path> --json")
```
---
@@ -106,7 +104,7 @@ Falls back to `discussant` for single-instance role.
1. Extract session path from task description
2. Extract topic, round number, discussion type, user feedback
3. Read role states via team_msg(operation="get_state") for existing context
4. Read all analysis results
5. Read all exploration results
6. Aggregate current findings, insights, questions


@@ -16,7 +16,7 @@
- All output (SendMessage, team_msg, logs) must carry `[explorer]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within codebase exploration responsibility scope
- Share exploration results via team_msg(type='state_update')
### MUST NOT
@@ -64,21 +64,19 @@ Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
```
mcp__ccw-tools__team_msg({
operation: "log",
session_id: <session-id>,
from: "explorer",
to: "coordinator",
type: "exploration_ready",
summary: "[explorer] EXPLORE complete: <summary>",
ref: "<output-path>"
})
```
> `to` and `summary` are auto-defaulted by the tool.
**CLI fallback** (when MCP unavailable):
```
Bash("ccw team log --session-id <session-id> --from explorer --type exploration_ready --ref <path> --json")
```
---
@@ -99,7 +97,7 @@ For parallel instances, parse `--agent-name` from arguments for owner matching.
1. Extract session path from task description
2. Extract topic, perspective, dimensions from task metadata
3. Read role states via team_msg(operation="get_state") for existing context
4. Determine exploration number from task subject (EXPLORE-N)
**Context extraction**:


@@ -17,7 +17,7 @@
- Only communicate with coordinator via SendMessage
- Work strictly within synthesis responsibility scope
- Integrate all role outputs to generate final conclusions
- Share synthesis results via team_msg(type='state_update')
- Update discussion.md conclusions section
### MUST NOT
@@ -65,21 +65,19 @@ Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
```
mcp__ccw-tools__team_msg({
operation: "log",
session_id: <session-id>,
from: "synthesizer",
to: "coordinator",
type: "synthesis_ready",
summary: "[synthesizer] SYNTH complete: <summary>",
ref: "<output-path>"
})
```
> `to` and `summary` are auto-defaulted by the tool.
**CLI fallback** (when MCP unavailable):
```
Bash("ccw team log --session-id <session-id> --from synthesizer --type synthesis_ready --ref <path> --json")
```
---
@@ -100,7 +98,7 @@ Falls back to `synthesizer` for single-instance role.
1. Extract session path from task description
2. Extract topic
3. Read role states via team_msg(operation="get_state")
4. Read all exploration files
5. Read all analysis files
6. Read all discussion round files