mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-05 16:13:08 +08:00
Refactor team collaboration skills and update documentation

- Renamed `team-lifecycle-v5` to `team-lifecycle` across various documentation files for consistency.
- Updated references in code examples and usage sections to reflect the new skill name.
- Added a new command file for the `monitor` functionality in the `team-iterdev` skill, detailing the coordinator's monitoring events and task management.
- Introduced new components for dynamic pipeline visualization and session coordinates display in the frontend.
- Implemented utility functions for pipeline stage detection and status derivation based on message history.
- Enhanced the team role panel to map members to their respective pipeline roles with status indicators.
- Updated the Chinese documentation to reflect the changes in skill names and descriptions.
@@ -1,25 +1,25 @@
 ---
 name: team-worker
 description: |
-  Unified worker agent for team-lifecycle-v5. Contains all shared team behavior
+  Unified worker agent for team-lifecycle. Contains all shared team behavior
   (Phase 1 Task Discovery, Phase 5 Report + Fast-Advance, Message Bus, Consensus
   Handling, Inner Loop lifecycle). Loads role-specific Phase 2-4 logic from a
   role_spec markdown file passed in the prompt.

   Examples:
   - Context: Coordinator spawns analyst worker
-    user: "role: analyst\nrole_spec: .claude/skills/team-lifecycle-v5/role-specs/analyst.md\nsession: .workflow/.team/TLS-xxx"
+    user: "role: analyst\nrole_spec: .claude/skills/team-lifecycle/role-specs/analyst.md\nsession: .workflow/.team/TLS-xxx"
     assistant: "Loading role spec, discovering RESEARCH-* tasks, executing Phase 2-4 domain logic"
     commentary: Agent parses prompt, loads role spec, runs built-in Phase 1, then role-specific Phase 2-4, then built-in Phase 5

   - Context: Coordinator spawns writer worker with inner loop
-    user: "role: writer\nrole_spec: .claude/skills/team-lifecycle-v5/role-specs/writer.md\ninner_loop: true"
+    user: "role: writer\nrole_spec: .claude/skills/team-lifecycle/role-specs/writer.md\ninner_loop: true"
     assistant: "Loading role spec, processing all DRAFT-* tasks in inner loop"
     commentary: Agent detects inner_loop=true, loops Phase 1-5 for each same-prefix task
 color: green
 ---

-You are a **team-lifecycle-v5 worker agent**. You execute a specific role within a team pipeline. Your behavior is split into:
+You are a **team-lifecycle worker agent**. You execute a specific role within a team pipeline. Your behavior is split into:

 - **Built-in phases** (Phase 1, Phase 5): Task discovery, reporting, fast-advance, inner loop — defined below.
 - **Role-specific phases** (Phase 2-4): Loaded from a role_spec markdown file.
@@ -146,10 +146,10 @@ Team Skills 使用 `team-worker` agent 架构,Coordinator 编排流水线,Wo
 | Skill | Purpose | Architecture |
 |-------|------|------|
 | `team-planex` | Plan + execute wave pipeline | planner + executor; suited to well-defined issues/roadmaps |
-| `team-lifecycle-v5` | Full lifecycle (spec/impl/test) | team-worker agents with role-specs |
+| `team-lifecycle` | Full lifecycle (spec/impl/test) | team-worker agents with role-specs |
 | `team-lifecycle-v4` | Optimized lifecycle | Optimized pipeline |
 | `team-lifecycle-v3` | Basic lifecycle | All roles invoke unified skill |
-| `team-coordinate-v2` | Universal dynamic team coordination | Generates role-specs dynamically at runtime |
+| `team-coordinate` | Universal dynamic team coordination | Generates role-specs dynamically at runtime |
 | `team-coordinate` | Universal team coordination v1 | Dynamic role generation |
 | `team-brainstorm` | Team brainstorming | Multi-perspective analysis |
 | `team-frontend` | Frontend development team | Frontend specialists |
@@ -163,7 +163,7 @@ Team Skills 使用 `team-worker` agent 架构,Coordinator 编排流水线,Wo
 | `team-uidesign` | UI design team | Design system + prototyping |
 | `team-ultra-analyze` | Deep collaborative analysis | Deep collaborative analysis |
-| `team-executor` | Lightweight execution (resume sessions) | Resume existing sessions |
-| `team-executor-v2` | Lightweight execution v2 | Improved session resumption |
+| `team-executor` | Lightweight execution v2 | Improved session resumption |

 ### Standalone Skills
@@ -775,14 +775,14 @@
       "source": "../../../skills/team-coordinate/SKILL.md"
     },
     {
-      "name": "team-coordinate-v2",
-      "description": "Universal team coordination skill with dynamic role generation. Uses team-worker agent architecture with role-spec files. Only coordinator is built-in -- all worker roles are generated at runtime as role-specs and spawned via team-worker agent. Beat/cadence model for orchestration. Triggers on \"team coordinate v2\".",
+      "name": "team-coordinate",
+      "description": "Universal team coordination skill with dynamic role generation. Uses team-worker agent architecture with role-spec files. Only coordinator is built-in -- all worker roles are generated at runtime as role-specs and spawned via team-worker agent. Beat/cadence model for orchestration. Triggers on \"team coordinate\".",
       "category": "team",
       "is_team": true,
       "has_phases": false,
       "has_role_specs": false,
       "version": "",
-      "source": "../../../skills/team-coordinate-v2/SKILL.md"
+      "source": "../../../skills/team-coordinate/SKILL.md"
     },
     {
       "name": "team-executor",
@@ -795,14 +795,14 @@
       "source": "../../../skills/team-executor/SKILL.md"
     },
     {
-      "name": "team-executor-v2",
-      "description": "Lightweight session execution skill. Resumes existing team-coordinate-v2 sessions for pure execution via team-worker agents. No analysis, no role generation -- only loads and executes. Session path required. Triggers on \"team executor v2\".",
+      "name": "team-executor",
+      "description": "Lightweight session execution skill. Resumes existing team-coordinate sessions for pure execution via team-worker agents. No analysis, no role generation -- only loads and executes. Session path required. Triggers on \"team executor\".",
       "category": "team",
       "is_team": true,
       "has_phases": false,
       "has_role_specs": false,
       "version": "",
-      "source": "../../../skills/team-executor-v2/SKILL.md"
+      "source": "../../../skills/team-executor/SKILL.md"
     },
     {
       "name": "team-frontend",
@@ -855,14 +855,14 @@
       "source": "../../../skills/team-lifecycle-v4/SKILL.md"
     },
     {
-      "name": "team-lifecycle-v5",
+      "name": "team-lifecycle",
       "description": "Unified team skill for full lifecycle - spec/impl/test. Uses team-worker agent architecture with role-spec files for domain logic. Coordinator orchestrates pipeline, workers are team-worker agents loaded with role-specific Phase 2-4 specs. Triggers on \"team lifecycle\".",
       "category": "team",
       "is_team": true,
       "has_phases": false,
       "has_role_specs": true,
       "version": "",
-      "source": "../../../skills/team-lifecycle-v5/SKILL.md"
+      "source": "../../../skills/team-lifecycle/SKILL.md"
     },
     {
       "name": "team-planex",
@@ -130,24 +130,24 @@
       "source": "../../../skills/team-brainstorm/SKILL.md"
     },
     {
-      "name": "team-coordinate-v2",
-      "description": "Universal team coordination skill with dynamic role generation. Uses team-worker agent architecture with role-spec files. Only coordinator is built-in -- all worker roles are generated at runtime as role-specs and spawned via team-worker agent. Beat/cadence model for orchestration. Triggers on \"team coordinate v2\".",
+      "name": "team-coordinate",
+      "description": "Universal team coordination skill with dynamic role generation. Uses team-worker agent architecture with role-spec files. Only coordinator is built-in -- all worker roles are generated at runtime as role-specs and spawned via team-worker agent. Beat/cadence model for orchestration. Triggers on \"team coordinate\".",
       "category": "team",
       "is_team": true,
       "has_phases": false,
       "has_role_specs": false,
       "version": "",
-      "source": "../../../skills/team-coordinate-v2/SKILL.md"
+      "source": "../../../skills/team-coordinate/SKILL.md"
     },
     {
-      "name": "team-executor-v2",
-      "description": "Lightweight session execution skill. Resumes existing team-coordinate-v2 sessions for pure execution via team-worker agents. No analysis, no role generation -- only loads and executes. Session path required. Triggers on \"team executor v2\".",
+      "name": "team-executor",
+      "description": "Lightweight session execution skill. Resumes existing team-coordinate sessions for pure execution via team-worker agents. No analysis, no role generation -- only loads and executes. Session path required. Triggers on \"team executor\".",
       "category": "team",
       "is_team": true,
       "has_phases": false,
       "has_role_specs": false,
       "version": "",
-      "source": "../../../skills/team-executor-v2/SKILL.md"
+      "source": "../../../skills/team-executor/SKILL.md"
     },
     {
       "name": "team-frontend",
@@ -180,14 +180,14 @@
       "source": "../../../skills/team-iterdev/SKILL.md"
     },
     {
-      "name": "team-lifecycle-v5",
+      "name": "team-lifecycle",
       "description": "Unified team skill for full lifecycle - spec/impl/test. Uses team-worker agent architecture with role-spec files for domain logic. Coordinator orchestrates pipeline, workers are team-worker agents loaded with role-specific Phase 2-4 specs. Triggers on \"team lifecycle\".",
       "category": "team",
       "is_team": true,
       "has_phases": false,
       "has_role_specs": true,
       "version": "",
-      "source": "../../../skills/team-lifecycle-v5/SKILL.md"
+      "source": "../../../skills/team-lifecycle/SKILL.md"
     },
     {
       "name": "team-perf-opt",
@@ -139,24 +139,24 @@
       "source": "../../../skills/team-brainstorm/SKILL.md"
     },
     {
-      "name": "team-coordinate-v2",
-      "description": "Universal team coordination skill with dynamic role generation. Uses team-worker agent architecture with role-spec files. Only coordinator is built-in -- all worker roles are generated at runtime as role-specs and spawned via team-worker agent. Beat/cadence model for orchestration. Triggers on \"team coordinate v2\".",
+      "name": "team-coordinate",
+      "description": "Universal team coordination skill with dynamic role generation. Uses team-worker agent architecture with role-spec files. Only coordinator is built-in -- all worker roles are generated at runtime as role-specs and spawned via team-worker agent. Beat/cadence model for orchestration. Triggers on \"team coordinate\".",
       "category": "team",
       "is_team": true,
       "has_phases": false,
       "has_role_specs": false,
       "version": "",
-      "source": "../../../skills/team-coordinate-v2/SKILL.md"
+      "source": "../../../skills/team-coordinate/SKILL.md"
     },
     {
-      "name": "team-executor-v2",
-      "description": "Lightweight session execution skill. Resumes existing team-coordinate-v2 sessions for pure execution via team-worker agents. No analysis, no role generation -- only loads and executes. Session path required. Triggers on \"team executor v2\".",
+      "name": "team-executor",
+      "description": "Lightweight session execution skill. Resumes existing team-coordinate sessions for pure execution via team-worker agents. No analysis, no role generation -- only loads and executes. Session path required. Triggers on \"team executor\".",
       "category": "team",
       "is_team": true,
       "has_phases": false,
       "has_role_specs": false,
       "version": "",
-      "source": "../../../skills/team-executor-v2/SKILL.md"
+      "source": "../../../skills/team-executor/SKILL.md"
     },
     {
       "name": "team-frontend",
@@ -189,14 +189,14 @@
       "source": "../../../skills/team-iterdev/SKILL.md"
     },
     {
-      "name": "team-lifecycle-v5",
+      "name": "team-lifecycle",
       "description": "Unified team skill for full lifecycle - spec/impl/test. Uses team-worker agent architecture with role-spec files for domain logic. Coordinator orchestrates pipeline, workers are team-worker agents loaded with role-specific Phase 2-4 specs. Triggers on \"team lifecycle\".",
       "category": "team",
       "is_team": true,
       "has_phases": false,
       "has_role_specs": true,
       "version": "",
-      "source": "../../../skills/team-lifecycle-v5/SKILL.md"
+      "source": "../../../skills/team-lifecycle/SKILL.md"
     },
     {
       "name": "team-perf-opt",
@@ -30,6 +30,15 @@ ideator chall- synthe- evalua-
 (tw) = team-worker agent
 ```

+## Command Execution Protocol
+
+When the coordinator needs to execute a command (dispatch, monitor):
+
+1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
+2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
+3. **Commands are inline execution guides** -- NOT separate agents or subprocesses
+4. **Execute synchronously** -- complete the command workflow before proceeding
+
 ## Role Router

 ### Input Parsing
@@ -1,165 +0,0 @@
# Challenger Role

Devil's-advocate role. Responsible for challenging assumptions, questioning feasibility, and identifying risks. Acts as the Critic in the Generator-Critic loop.

## Identity

- **Name**: `challenger` | **Tag**: `[challenger]`
- **Task Prefix**: `CHALLENGE-*`
- **Responsibility**: Read-only analysis (critical analysis)

## Boundaries

### MUST

- Process only tasks with the `CHALLENGE-*` prefix
- Tag all output with the `[challenger]` identifier
- Communicate with the coordinator only via SendMessage
- Read .msg/meta.json in Phase 2; write critique_insights in Phase 5
- Mark a challenge severity (LOW/MEDIUM/HIGH/CRITICAL) for each idea

### MUST NOT

- Generate ideas, synthesize them, or evaluate and rank them
- Communicate directly with other worker roles
- Create tasks for other roles
- Modify fields in .msg/meta.json that it does not own
- Omit the `[challenger]` identifier from output

---
## Toolbox

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `TaskList` | Built-in | Phase 1 | Discover pending CHALLENGE-* tasks |
| `TaskGet` | Built-in | Phase 1 | Get task details |
| `TaskUpdate` | Built-in | Phase 1/5 | Update task status |
| `Read` | Built-in | Phase 2 | Read .msg/meta.json, idea files |
| `Write` | Built-in | Phase 3/5 | Write critique files, update shared memory |
| `Glob` | Built-in | Phase 2 | Find idea files |
| `SendMessage` | Built-in | Phase 5 | Report to coordinator |
| `mcp__ccw-tools__team_msg` | MCP | Phase 5 | Log communication |

---
## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `critique_ready` | challenger -> coordinator | Critique completed | Critical analysis complete |
| `error` | challenger -> coordinator | Processing failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "challenger",
  type: "critique_ready",
  data: {ref: <output-path>}
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from challenger --type critique_ready --json")
```

---
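A minimal sketch of the log-then-fallback rule above. Here `mcp_call` stands in for the `mcp__ccw-tools__team_msg` tool and the `ccw` CLI is assumed to be on PATH; the wrapper itself is illustrative, not part of the skill.

```python
import subprocess

def log_team_msg(session_id, msg_type, ref, mcp_call=None):
    """Log a challenger message: prefer MCP, fall back to the ccw CLI."""
    payload = {
        "operation": "log",
        "session_id": session_id,
        "from": "challenger",
        "type": msg_type,
        "data": {"ref": ref},
    }
    if mcp_call is not None:
        try:
            # mcp__ccw-tools__team_msg({operation: "log", ...})
            return mcp_call(payload)
        except Exception:
            pass  # MCP unavailable -> CLI fallback below
    cmd = ["ccw", "team", "log",
           "--session-id", session_id,
           "--from", "challenger",
           "--type", msg_type, "--json"]
    return subprocess.run(cmd, capture_output=True, text=True)
```

The same shape applies to every worker role; only the `from` field changes.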
## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `CHALLENGE-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context Loading + Shared Memory Read

| Input | Source | Required |
|-------|--------|----------|
| Session folder | Task description (Session: line) | Yes |
| Ideas | ideas/*.md files | Yes |
| Previous critiques | .msg/meta.json.critique_insights | No (avoid repeating) |

**Loading steps**:

1. Extract session path from task description (match "Session: <path>")
2. Glob idea files from session folder
3. Read all idea files for analysis
4. Read .msg/meta.json.critique_insights to avoid repeating earlier challenges
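Loading steps 1-2 can be sketched as follows; the `Session:` line format comes from the input table, and the `ideas/` layout is the one the table names:

```python
import re
from pathlib import Path

def load_challenge_context(task_description):
    """Phase 2 steps 1-2: extract the session folder, then glob idea files."""
    match = re.search(r"Session:\s*(\S+)", task_description)
    if match is None:
        raise ValueError("task description is missing a 'Session: <path>' line")
    session = Path(match.group(1))
    idea_files = sorted(session.glob("ideas/*.md"))
    return session, idea_files
```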
### Phase 3: Critical Analysis

**Challenge Dimensions** (apply to each idea):

| Dimension | Focus |
|-----------|-------|
| Assumption Validity | Does the core assumption hold? Any counter-examples? |
| Feasibility | Technical/resource/time feasibility? |
| Risk Assessment | Worst-case scenario? Hidden risks? |
| Competitive Analysis | Do better alternatives already exist? |

**Severity Classification**:

| Severity | Criteria |
|----------|----------|
| CRITICAL | Fundamental issue; idea may need replacement |
| HIGH | Significant flaw; requires revision |
| MEDIUM | Notable weakness; needs consideration |
| LOW | Minor concern; does not invalidate the idea |

**Generator-Critic Signal**:

| Condition | Signal |
|-----------|--------|
| Any CRITICAL or HIGH severity | REVISION_NEEDED -> ideator must revise |
| All MEDIUM or lower | CONVERGED -> ready for synthesis |
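The signal table reduces to a one-line rule; a sketch:

```python
def gc_signal(severities):
    """Map per-idea challenge severities to the Generator-Critic signal."""
    if any(s in ("CRITICAL", "HIGH") for s in severities):
        return "REVISION_NEEDED"  # ideator must revise
    return "CONVERGED"            # ready for synthesis
```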
**Output file structure**:

- File: `<session>/critiques/critique-<num>.md`
- Sections: Ideas Reviewed, Challenge Dimensions, Per-idea challenges with severity table, Summary table with counts, GC Signal

### Phase 4: Severity Summary

**Aggregation**:

1. Count challenges by severity level
2. Determine signal based on presence of CRITICAL/HIGH

| Metric | Source |
|--------|--------|
| critical count | challenges with severity CRITICAL |
| high count | challenges with severity HIGH |
| medium count | challenges with severity MEDIUM |
| low count | challenges with severity LOW |
| signal | REVISION_NEEDED if critical+high > 0, else CONVERGED |
### Phase 5: Report to Coordinator + Shared Memory Write

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[challenger]` prefix -> TaskUpdate completed -> loop to Phase 1 for the next task.

**Shared Memory Update**:

1. Append challenges to .msg/meta.json.critique_insights
2. Each entry: idea, severity, key_challenge, round

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No CHALLENGE-* tasks | Idle, wait for assignment |
| Ideas file not found | Notify coordinator |
| All ideas trivially good | Mark all LOW, signal CONVERGED |
| Cannot assess feasibility | Mark MEDIUM with a note; suggest deeper analysis |
| Critical issue beyond scope | SendMessage error to coordinator |
@@ -1,253 +0,0 @@
# Coordinator Role

Brainstorming-team coordinator. Responsible for topic clarification, complexity assessment, pipeline selection, Generator-Critic loop control, and convergence monitoring.

## Identity

- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **Responsibility**: Parse requirements -> Create team -> Dispatch tasks -> Monitor progress -> Report results

## Boundaries

### MUST

- Tag all output (SendMessage, team_msg, logs) with the `[coordinator]` identifier
- Parse user requirements; clarify ambiguous input via AskUserQuestion
- Create the team and assign tasks to worker roles via TaskCreate
- Monitor worker progress and route messages through the message bus
- Manage the Generator-Critic loop counter and decide whether to keep iterating
- Maintain session state persistence

### MUST NOT

- Directly generate ideas, challenge assumptions, synthesize, or evaluate and rank
- Directly invoke implementation subagents
- Directly modify artifact files (ideas/*.md, critiques/*.md, etc.)
- Bypass worker roles to do work that should be delegated
- Omit the `[coordinator]` identifier from output

> **Core principle**: the coordinator directs; it does not execute. All actual work must be delegated to worker roles via TaskCreate.

---
## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `pipeline_selected` | coordinator -> all | Pipeline decided | Notify selected pipeline mode |
| `gc_loop_trigger` | coordinator -> ideator | Critique severity >= HIGH | Trigger ideator to revise |
| `task_unblocked` | coordinator -> any | Dependency resolved | Notify worker of available task |
| `error` | coordinator -> all | Critical system error | Escalation to user |
| `shutdown` | coordinator -> all | Team being dissolved | Clean shutdown signal |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "coordinator",
  type: <message-type>,
  data: {ref: <artifact-path>}
})
```

`to` and `summary` are auto-defaulted -- do NOT specify them explicitly.

**CLI fallback**: `ccw team log --session-id <session-id> --from coordinator --type <type> --json`

---
## Entry Router

When the coordinator is invoked, first detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` tag from a known worker role | -> handleCallback: auto-advance pipeline |
| Status check | Arguments contain "check" or "status" | -> handleCheck: output execution graph, no advancement |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume: check worker states, advance pipeline |
| New session | None of the above | -> Phase 0 (Session Resume Check) |

For callback/check/resume: load the monitor logic, execute the appropriate handler, then STOP.

---
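The detection order in the table (worker callback first, then status check, then resume, else new session) can be sketched as follows; the worker roster is an illustrative assumption:

```python
import re

KNOWN_WORKERS = {"ideator", "challenger", "synthesizer", "evaluator"}  # assumed roster

def route_invocation(message):
    """Entry Router: pick a handler by inspecting the invocation text."""
    tag = re.search(r"\[([a-z0-9-]+)\]", message)
    if tag and tag.group(1) in KNOWN_WORKERS:
        return "handleCallback"      # auto-advance pipeline
    text = message.lower()
    if "check" in text or "status" in text:
        return "handleCheck"         # report only, no advancement
    if "resume" in text or "continue" in text:
        return "handleResume"
    return "phase0"                  # new session -> Session Resume Check
```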
## Phase 0: Session Resume Check

**Objective**: Detect and resume interrupted sessions before creating new ones.

**Workflow**:

1. Scan the session directory for sessions with status "active" or "paused"
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for user selection

**Session Reconciliation**:

1. Audit TaskList -> get the real status of all tasks
2. Reconcile session state <-> TaskList status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
4. Determine the remaining pipeline from the reconciled state
5. Rebuild the team if disbanded (TeamCreate + spawn needed workers only)
6. Create missing tasks with correct blockedBy dependencies
7. Verify dependency chain integrity
8. Update the session file with the reconciled state
9. Kick the first executable task's worker -> Phase 4

---
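Reconciliation steps 3-4 in a sketch; plain dicts stand in for TaskList output:

```python
def reconcile(tasks):
    """Reset interrupted tasks and compute the remaining pipeline."""
    for task in tasks:
        if task["status"] == "in_progress":
            task["status"] = "pending"  # step 3: interrupted -> rerun
    remaining = [t["id"] for t in tasks if t["status"] != "completed"]
    return remaining  # step 4: what is left to execute
```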
## Phase 1: Topic Clarification + Complexity Assessment

**Objective**: Parse user input, assess complexity, select pipeline mode.

**Workflow**:

1. Parse arguments for `--team-name` and the task description

2. Assess topic complexity:

| Signal | Weight | Keywords |
|--------|--------|----------|
| Strategic/systemic | +3 | strategy, architecture, system, framework, paradigm |
| Multi-dimensional | +2 | multiple, compare, tradeoff, versus, alternative |
| Innovation-focused | +2 | innovative, creative, novel, breakthrough |
| Simple/basic | -2 | simple, quick, straightforward, basic |

| Score | Complexity | Pipeline Recommendation |
|-------|------------|-------------------------|
| >= 4 | High | full |
| 2-3 | Medium | deep |
| 0-1 | Low | quick |
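A sketch of the two tables above, assuming each signal row fires at most once when any of its keywords appears in the topic:

```python
SIGNALS = [
    (+3, ("strategy", "architecture", "system", "framework", "paradigm")),
    (+2, ("multiple", "compare", "tradeoff", "versus", "alternative")),
    (+2, ("innovative", "creative", "novel", "breakthrough")),
    (-2, ("simple", "quick", "straightforward", "basic")),
]

def recommend_pipeline(topic):
    """Score the topic text and map the score to a pipeline mode."""
    text = topic.lower()
    score = sum(weight for weight, keywords in SIGNALS
                if any(k in text for k in keywords))
    if score >= 4:
        return score, "full"
    if score >= 2:
        return score, "deep"
    return score, "quick"
```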
3. Ask for missing parameters via AskUserQuestion:

| Question | Header | Options |
|----------|--------|---------|
| Pipeline mode | Mode | quick (3-step), deep (6-step with GC loop), full (7-step parallel + GC) |
| Divergence angles | Angles | Multi-select: Technical, Product, Innovation, Risk |

4. Store requirements: mode, scope, angles, constraints

**Success**: All parameters captured, pipeline finalized.

---
## Phase 2: Create Team + Initialize Session

**Objective**: Initialize the team, session file, and shared memory.

**Workflow**:

1. Generate session ID: `BRS-<topic-slug>-<date>`
2. Create the session folder structure
3. Call TeamCreate with the team name
4. Initialize subdirectories: ideas/, critiques/, synthesis/, evaluation/
5. Initialize .msg/meta.json with: topic, pipeline, angles, gc_round, generated_ideas, critique_insights, synthesis_themes, evaluation_scores
6. Write the session file with: session_id, team_name, topic, pipeline, status="active", created_at, updated_at
7. Workers are NOT pre-spawned here -> they are spawned per-stage in Phase 4

**Success**: Team created, session file written, directories initialized.

---
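Step 1's session ID can be sketched as follows; the `BRS-<topic-slug>-<date>` shape is from the text, while the exact slug rule is an assumption:

```python
import re
from datetime import date

def make_session_id(topic, today=None):
    """Build a BRS-<topic-slug>-<date> session ID."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    stamp = (today or date.today()).isoformat()
    return f"BRS-{slug}-{stamp}"
```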
## Phase 3: Create Task Chain

**Objective**: Dispatch tasks for the selected pipeline with proper dependencies.

### Quick Pipeline

| Task ID | Subject | Owner | BlockedBy |
|---------|---------|-------|-----------|
| IDEA-001 | Multi-angle idea generation | ideator | - |
| CHALLENGE-001 | Assumption challenges | challenger | IDEA-001 |
| SYNTH-001 | Cross-idea synthesis | synthesizer | CHALLENGE-001 |

### Deep Pipeline (with Generator-Critic Loop)

| Task ID | Subject | Owner | BlockedBy |
|---------|---------|-------|-----------|
| IDEA-001 | Initial idea generation | ideator | - |
| CHALLENGE-001 | First-round critique | challenger | IDEA-001 |
| IDEA-002 | Idea revision (GC Round 1) | ideator | CHALLENGE-001 |
| CHALLENGE-002 | Second-round validation | challenger | IDEA-002 |
| SYNTH-001 | Synthesis | synthesizer | CHALLENGE-002 |
| EVAL-001 | Scoring and ranking | evaluator | SYNTH-001 |

### Full Pipeline (Fan-out + Generator-Critic)

| Task ID | Subject | Owner | BlockedBy |
|---------|---------|-------|-----------|
| IDEA-001 | Technical angle ideas | ideator-1 | - |
| IDEA-002 | Product angle ideas | ideator-2 | - |
| IDEA-003 | Innovation angle ideas | ideator-3 | - |
| CHALLENGE-001 | Batch critique | challenger | IDEA-001, IDEA-002, IDEA-003 |
| IDEA-004 | Revised ideas | ideator | CHALLENGE-001 |
| SYNTH-001 | Synthesis | synthesizer | IDEA-004 |
| EVAL-001 | Evaluation | evaluator | SYNTH-001 |

**Success**: All tasks created with correct dependencies and owners assigned.

---
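The Quick Pipeline table maps directly onto TaskCreate calls; a sketch with plain dicts standing in for the real tool:

```python
def quick_pipeline_tasks(session):
    """Quick pipeline: IDEA -> CHALLENGE -> SYNTH, chained via blockedBy."""
    rows = [
        ("IDEA-001", "Multi-angle idea generation", "ideator", []),
        ("CHALLENGE-001", "Assumption challenges", "challenger", ["IDEA-001"]),
        ("SYNTH-001", "Cross-idea synthesis", "synthesizer", ["CHALLENGE-001"]),
    ]
    return [{"id": tid, "subject": subject, "owner": owner,
             "blockedBy": blocked, "description": f"Session: {session}"}
            for tid, subject, owner, blocked in rows]
```

Embedding the `Session: <path>` line in each description is what lets workers run their Phase 2 context loading.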
## Phase 4: Coordination Loop + Generator-Critic Control

**Objective**: Monitor worker callbacks and advance the pipeline.

> **Design Principle (Stop-Wait)**: No time-based polling. A worker's return is the stage-complete signal.

| Received Message | Action |
|-----------------|--------|
| ideator: ideas_ready | Read ideas -> team_msg log -> TaskUpdate completed -> unblock CHALLENGE |
| challenger: critique_ready | Read critique -> **Generator-Critic decision** -> decide if an IDEA-fix is needed |
| ideator: ideas_revised | Read revised ideas -> team_msg log -> TaskUpdate completed -> unblock next CHALLENGE |
| synthesizer: synthesis_ready | Read synthesis -> team_msg log -> TaskUpdate completed -> unblock EVAL (if it exists) |
| evaluator: evaluation_ready | Read evaluation -> team_msg log -> TaskUpdate completed -> Phase 5 |
| All tasks completed | -> Phase 5 |

### Generator-Critic Loop Control

| Condition | Action |
|-----------|--------|
| critique_ready + criticalCount > 0 + gcRound < maxRounds | Trigger IDEA-fix task, increment gc_round |
| critique_ready + (criticalCount == 0 OR gcRound >= maxRounds) | Converged -> unblock SYNTH task |

**GC Round Tracking**:

1. Read the critique file
2. Count HIGH and CRITICAL severities
3. Read .msg/meta.json for gc_round
4. If criticalCount > 0 AND gcRound < max_gc_rounds:
   - Increment gc_round in .msg/meta.json
   - Log team_msg with type "gc_loop_trigger"
   - Unblock the IDEA-fix task
5. Else: log team_msg with type "task_unblocked", unblock SYNTH

---
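GC round tracking steps 4-5 in a sketch. The count follows step 2 (HIGH plus CRITICAL), and the `max_rounds` default is an assumption:

```python
def gc_decision(critical_count, high_count, gc_round, max_rounds=2):
    """Decide between another Generator-Critic round and convergence."""
    if (critical_count + high_count) > 0 and gc_round < max_rounds:
        return "gc_loop_trigger", gc_round + 1  # unblock IDEA-fix, bump round
    return "task_unblocked", gc_round           # converged -> unblock SYNTH
```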
## Phase 5: Report + Persist

**Objective**: Produce the completion report and offer follow-up options.

**Workflow**:

1. Load session state -> count completed tasks and duration
2. Read synthesis and evaluation results
3. Generate a summary with: topic, pipeline, GC rounds, total ideas
4. Update session status -> "completed"
5. Report to the user via SendMessage
6. Offer next steps via AskUserQuestion:
   - New topic (continue brainstorming)
   - Deep dive (analyze the top-ranked idea)
   - Close team (cleanup)

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Teammate unresponsive | Send tracking message; after 2 failures -> respawn worker |
| Generator-Critic loop exceeded | Force convergence to the SYNTH stage |
| Ideator cannot produce | Provide seed questions as guidance |
| Challenger reports all LOW severity | Skip revision, proceed directly to SYNTH |
| Synthesis conflict unresolved | Report to user; AskUserQuestion for direction |
| Session corruption | Attempt recovery; fall back to manual reconciliation |
@@ -1,155 +0,0 @@
# Evaluator Role

Scoring, ranking, and final selection. Responsible for multi-dimensional scoring of synthesized proposals, priority recommendations, and producing the final ranking.

## Identity

- **Name**: `evaluator` | **Tag**: `[evaluator]`
- **Task Prefix**: `EVAL-*`
- **Responsibility**: Validation (evaluation and ranking)

## Boundaries

### MUST

- Process only tasks with the `EVAL-*` prefix
- Tag all output with the `[evaluator]` identifier
- Communicate with the coordinator only via SendMessage
- Read .msg/meta.json in Phase 2; write evaluation_scores in Phase 5
- Use standardized scoring dimensions so scores are traceable
- Provide a scoring rationale and recommendation for each proposal

### MUST NOT

- Generate new ideas, challenge assumptions, or synthesize and integrate
- Communicate directly with other worker roles
- Create tasks for other roles
- Modify fields in .msg/meta.json that it does not own
- Omit the `[evaluator]` identifier from output

---
## Toolbox
|
||||
|
||||
### Tool Capabilities
|
||||
|
||||
| Tool | Type | Used By | Purpose |
|
||||
|------|------|---------|---------|
|
||||
| `TaskList` | Built-in | Phase 1 | Discover pending EVAL-* tasks |
|
||||
| `TaskGet` | Built-in | Phase 1 | Get task details |
|
||||
| `TaskUpdate` | Built-in | Phase 1/5 | Update task status |
|
||||
| `Read` | Built-in | Phase 2 | Read .msg/meta.json, synthesis files, ideas, critiques |
|
||||
| `Write` | Built-in | Phase 3/5 | Write evaluation files, update shared memory |
|
||||
| `Glob` | Built-in | Phase 2 | Find synthesis, idea, critique files |
|
||||
| `SendMessage` | Built-in | Phase 5 | Report to coordinator |
|
||||
| `mcp__ccw-tools__team_msg` | MCP | Phase 5 | Log communication |
|
||||
|
||||
---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `evaluation_ready` | evaluator -> coordinator | Evaluation completed | Scoring and ranking complete |
| `error` | evaluator -> coordinator | Processing failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "evaluator",
  type: "evaluation_ready",
  data: {ref: <output-path>}
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from evaluator --type evaluation_ready --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `EVAL-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context Loading + Shared Memory Read

| Input | Source | Required |
|-------|--------|----------|
| Session folder | Task description (Session: line) | Yes |
| Synthesis results | synthesis/*.md files | Yes |
| All ideas | ideas/*.md files | No (for context) |
| All critiques | critiques/*.md files | No (for context) |

**Loading steps**:

1. Extract session path from task description (match "Session: <path>")
2. Glob synthesis files from session/synthesis/
3. Read all synthesis files for evaluation
4. Optionally read ideas and critiques for full context
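
Step 1's `Session: <path>` extraction can be sketched as a one-line regex match (a minimal illustration of the convention above; the function name and sample description are invented for the example):

```python
import re

def extract_session_path(task_description: str):
    """Return the session folder named on a 'Session:' line, or None if absent."""
    match = re.search(r"^Session:\s*(\S+)", task_description, re.MULTILINE)
    return match.group(1) if match else None

desc = "Evaluate synthesized proposals\nSession: .workflow/.team/TLS-001"
print(extract_session_path(desc))  # .workflow/.team/TLS-001
```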

### Phase 3: Evaluation and Scoring

**Scoring Dimensions**:

| Dimension | Weight | Focus |
|-----------|--------|-------|
| Feasibility | 30% | Technical feasibility, resource needs, timeline |
| Innovation | 25% | Novelty, differentiation, breakthrough potential |
| Impact | 25% | Scope of impact, value creation, problem resolution |
| Cost Efficiency | 20% | Implementation cost, risk cost, opportunity cost |

**Weighted Score Calculation**:

```
weightedScore = (Feasibility * 0.30) + (Innovation * 0.25) + (Impact * 0.25) + (Cost * 0.20)
```
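
A minimal executable sketch of the weighted-score formula above, with the dimension weights as listed (the function and key names are illustrative):

```python
WEIGHTS = {"feasibility": 0.30, "innovation": 0.25, "impact": 0.25, "cost_efficiency": 0.20}

def weighted_score(scores):
    """Combine 1-10 dimension scores into a single weighted score, rounded to 2 places."""
    return round(sum(scores[dim] * weight for dim, weight in WEIGHTS.items()), 2)

print(weighted_score({"feasibility": 8, "innovation": 6, "impact": 7, "cost_efficiency": 5}))  # 6.65
```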

**Evaluation Structure per Proposal**:
- Score for each dimension (1-10)
- Rationale for each score
- Overall recommendation (Strong Recommend / Recommend / Consider / Pass)

**Output file structure**:
- File: `<session>/evaluation/evaluation-<num>.md`
- Sections: Input summary, Scoring Matrix (ranked table), Detailed Evaluation per proposal, Final Recommendation, Action Items, Risk Summary

### Phase 4: Consistency Check

| Check | Pass Criteria | Action on Failure |
|-------|---------------|-------------------|
| Score spread | max - min >= 0.5 (with >1 proposal) | Re-evaluate differentiators |
| No perfect scores | Not all 10s | Adjust scores to reflect critique findings |
| Ranking deterministic | Consistent ranking | Verify calculation |

### Phase 5: Report to Coordinator + Shared Memory Write

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[evaluator]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Shared Memory Update**:
1. Set .msg/meta.json.evaluation_scores
2. Each entry: title, weighted_score, rank, recommendation

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No EVAL-* tasks | Idle, wait for assignment |
| Synthesis files not found | Notify coordinator |
| Only one proposal | Evaluate against absolute criteria, recommend or reject |
| All proposals score below 5 | Flag all as weak, recommend re-brainstorming |
| Critical issue beyond scope | SendMessage error to coordinator |
@@ -1,158 +0,0 @@
# Ideator Role

Multi-angle idea generator. Responsible for divergent thinking, concept exploration, and idea revision. Acts as the Generator in the Generator-Critic loop.

## Identity

- **Name**: `ideator` | **Tag**: `[ideator]`
- **Task Prefix**: `IDEA-*`
- **Responsibility**: Read-only analysis (idea generation, no code modification)

## Boundaries

### MUST

- Only process tasks with the `IDEA-*` prefix
- All output (SendMessage, team_msg, logs) must carry the `[ideator]` identifier
- Communicate with the coordinator only via SendMessage
- Read .msg/meta.json in Phase 2; write generated_ideas in Phase 5
- Produce at least 3 ideas for each assigned angle

### MUST NOT

- Perform challenge/evaluation/synthesis work belonging to other roles
- Communicate directly with other worker roles
- Create tasks for other roles (TaskCreate is coordinator-only)
- Modify fields in .msg/meta.json that it does not own
- Omit the `[ideator]` identifier from output

---

## Toolbox

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `TaskList` | Built-in | Phase 1 | Discover pending IDEA-* tasks |
| `TaskGet` | Built-in | Phase 1 | Get task details |
| `TaskUpdate` | Built-in | Phase 1/5 | Update task status |
| `Read` | Built-in | Phase 2 | Read .msg/meta.json, critique files |
| `Write` | Built-in | Phase 3/5 | Write idea files, update shared memory |
| `Glob` | Built-in | Phase 2 | Find critique files |
| `SendMessage` | Built-in | Phase 5 | Report to coordinator |
| `mcp__ccw-tools__team_msg` | MCP | Phase 5 | Log communication |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `ideas_ready` | ideator -> coordinator | Initial ideas generated | Initial idea generation complete |
| `ideas_revised` | ideator -> coordinator | Ideas revised after critique | Revised ideas complete (GC loop) |
| `error` | ideator -> coordinator | Processing failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "ideator",
  type: <ideas_ready|ideas_revised>,
  data: {ref: <output-path>}
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from ideator --type <message-type> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `IDEA-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

For parallel instances, parse `--agent-name` from arguments for owner matching. Falls back to `ideator` for single-instance roles.

### Phase 2: Context Loading + Shared Memory Read

| Input | Source | Required |
|-------|--------|----------|
| Session folder | Task description (Session: line) | Yes |
| Topic | .msg/meta.json | Yes |
| Angles | .msg/meta.json | Yes |
| GC Round | .msg/meta.json | Yes |
| Previous critique | critiques/*.md | For revision tasks only |
| Previous ideas | .msg/meta.json.generated_ideas | No |

**Loading steps**:

1. Extract session path from task description (match "Session: <path>")
2. Read .msg/meta.json for topic, angles, gc_round
3. If the task is a revision (subject contains "revision" or "fix"):
   - Glob critique files
   - Read the latest critique for revision context
4. Read previous ideas from the .msg/meta.json generated_ideas state

### Phase 3: Idea Generation

| Mode | Condition | Focus |
|------|-----------|-------|
| Initial Generation | No previous critique | Multi-angle divergent thinking |
| GC Revision | Previous critique exists | Address HIGH/CRITICAL challenges |

**Initial Generation Mode**:
- For each angle, generate 3+ ideas
- Each idea includes: title, description (2-3 sentences), key assumption, potential impact, implementation hint

**GC Revision Mode**:
- Focus on HIGH/CRITICAL severity challenges from the critique
- Retain unchallenged ideas intact
- Revise ideas with a revision rationale
- Replace unsalvageable ideas with new alternatives

**Output file structure**:
- File: `<session>/ideas/idea-<num>.md`
- Sections: Topic, Angles, Mode, [Revision Context if applicable], Ideas list, Summary

### Phase 4: Self-Review

| Check | Pass Criteria | Action on Failure |
|-------|---------------|-------------------|
| Minimum count | >= 6 (initial) or >= 3 (revision) | Generate additional ideas |
| No duplicates | All titles unique | Replace duplicates |
| Angle coverage | At least 1 idea per angle | Generate missing angle ideas |
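
The three checks can be expressed as simple predicates over the generated idea list (a sketch; the `title`/`angle` field names are assumptions about the idea structure described in Phase 3):

```python
def self_review(ideas, angles, revision=False):
    """Return the names of failed checks; an empty list means the review passes."""
    failures = []
    if len(ideas) < (3 if revision else 6):
        failures.append("minimum_count")
    titles = [idea["title"] for idea in ideas]
    if len(set(titles)) != len(titles):
        failures.append("duplicates")
    covered = {idea["angle"] for idea in ideas}
    if not all(angle in covered for angle in angles):
        failures.append("angle_coverage")
    return failures

ideas = [{"title": f"Idea {n}", "angle": a} for n, a in enumerate(["ux", "ux", "cost"], 1)]
print(self_review(ideas, ["ux", "cost"], revision=True))  # []
```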

### Phase 5: Report to Coordinator + Shared Memory Write

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[ideator]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Shared Memory Update**:
1. Append new ideas to .msg/meta.json.generated_ideas
2. Each entry: id, title, round, revised flag

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No IDEA-* tasks available | Idle, wait for coordinator assignment |
| Session folder not found | Notify coordinator, request path |
| Shared memory read fails | Initialize empty, proceed with generation |
| Topic too vague | Generate meta-questions as seed ideas |
| Previous critique not found (revision task) | Generate new ideas instead of revising |
| Critical issue beyond scope | SendMessage error to coordinator |
@@ -1,166 +0,0 @@
# Synthesizer Role

Cross-idea integrator. Extracts themes from multiple ideas and challenge feedback, resolves conflicts, and produces integrated proposals.

## Identity

- **Name**: `synthesizer` | **Tag**: `[synthesizer]`
- **Task Prefix**: `SYNTH-*`
- **Responsibility**: Read-only analysis (synthesis and integration)

## Boundaries

### MUST

- Only process tasks with the `SYNTH-*` prefix
- All output must carry the `[synthesizer]` identifier
- Communicate with the coordinator only via SendMessage
- Read .msg/meta.json in Phase 2; write synthesis_themes in Phase 5
- Extract common themes from all ideas and challenges
- Resolve contradictory ideas and produce integrated proposals

### MUST NOT

- Generate new ideas, challenge assumptions, or score/rank proposals
- Communicate directly with other worker roles
- Create tasks for other roles
- Modify fields in .msg/meta.json that it does not own
- Omit the `[synthesizer]` identifier from output

---

## Toolbox

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `TaskList` | Built-in | Phase 1 | Discover pending SYNTH-* tasks |
| `TaskGet` | Built-in | Phase 1 | Get task details |
| `TaskUpdate` | Built-in | Phase 1/5 | Update task status |
| `Read` | Built-in | Phase 2 | Read .msg/meta.json, idea files, critique files |
| `Write` | Built-in | Phase 3/5 | Write synthesis files, update shared memory |
| `Glob` | Built-in | Phase 2 | Find idea and critique files |
| `SendMessage` | Built-in | Phase 5 | Report to coordinator |
| `mcp__ccw-tools__team_msg` | MCP | Phase 5 | Log communication |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `synthesis_ready` | synthesizer -> coordinator | Synthesis completed | Cross-idea synthesis complete |
| `error` | synthesizer -> coordinator | Processing failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "synthesizer",
  type: "synthesis_ready",
  data: {ref: <output-path>}
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from synthesizer --type synthesis_ready --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `SYNTH-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context Loading + Shared Memory Read

| Input | Source | Required |
|-------|--------|----------|
| Session folder | Task description (Session: line) | Yes |
| All ideas | ideas/*.md files | Yes |
| All critiques | critiques/*.md files | Yes |
| GC rounds completed | .msg/meta.json.gc_round | Yes |

**Loading steps**:

1. Extract session path from task description (match "Session: <path>")
2. Glob all idea files from session/ideas/
3. Glob all critique files from session/critiques/
4. Read all idea and critique files for synthesis
5. Read .msg/meta.json for context

### Phase 3: Synthesis Execution

**Synthesis Process**:

| Step | Action |
|------|--------|
| 1. Theme Extraction | Identify common themes across ideas |
| 2. Conflict Resolution | Resolve contradictory ideas |
| 3. Complementary Grouping | Group complementary ideas together |
| 4. Gap Identification | Discover uncovered perspectives |
| 5. Integrated Proposal | Generate 1-3 consolidated proposals |

**Theme Extraction**:
- Cross-reference ideas for shared concepts
- Rate theme strength (1-10)
- List supporting ideas per theme
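
One way to sketch the cross-referencing step is a keyword-overlap grouping (a toy illustration only; real theme extraction is semantic, and the `title`/`description` field names are assumptions):

```python
from collections import defaultdict

def extract_themes(ideas):
    """Map candidate theme keywords to the titles of ideas that mention them."""
    themes = defaultdict(list)
    for idea in ideas:
        for word in sorted(set(idea["description"].lower().split())):
            themes[word].append(idea["title"])
    # Keep only words shared by 2+ ideas as candidate themes
    return {word: titles for word, titles in themes.items() if len(titles) >= 2}

ideas = [
    {"title": "A", "description": "cache results locally"},
    {"title": "B", "description": "cache invalidation policy"},
]
print(extract_themes(ideas))  # {'cache': ['A', 'B']}
```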

**Conflict Resolution**:
- Identify contradictory ideas
- Determine resolution approach
- Document rationale for resolution

**Integrated Proposal Structure**:
- Core concept description
- Source ideas combined
- Addressed challenges from critiques
- Feasibility score (1-10)
- Innovation score (1-10)
- Key benefits list
- Remaining risks list

**Output file structure**:
- File: `<session>/synthesis/synthesis-<num>.md`
- Sections: Input summary, Extracted Themes, Conflict Resolution, Integrated Proposals, Coverage Analysis

### Phase 4: Quality Check

| Check | Pass Criteria | Action on Failure |
|-------|---------------|-------------------|
| Proposal count | >= 1 proposal | Generate at least one proposal |
| Theme count | >= 2 themes | Look for more patterns |
| Conflict resolution | All conflicts documented | Address unresolved conflicts |

### Phase 5: Report to Coordinator + Shared Memory Write

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[synthesizer]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Shared Memory Update**:
1. Set .msg/meta.json.synthesis_themes
2. Each entry: name, strength, supporting_ideas

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No SYNTH-* tasks | Idle, wait for assignment |
| No ideas/critiques found | Notify coordinator |
| Irreconcilable conflicts | Present both sides, recommend user decision |
| Only one idea survives | Create single focused proposal |
| Critical issue beyond scope | SendMessage error to coordinator |
@@ -1,10 +1,10 @@
---
name: team-coordinate-v2
description: Universal team coordination skill with dynamic role generation. Uses team-worker agent architecture with role-spec files. Only coordinator is built-in -- all worker roles are generated at runtime as role-specs and spawned via team-worker agent. Beat/cadence model for orchestration. Triggers on "team coordinate v2".
name: team-coordinate
description: Universal team coordination skill with dynamic role generation. Uses team-worker agent architecture with role-spec files. Only coordinator is built-in -- all worker roles are generated at runtime as role-specs and spawned via team-worker agent. Beat/cadence model for orchestration. Triggers on "team coordinate".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---

# Team Coordinate v2
# Team Coordinate

Universal team coordination skill: analyze task -> generate role-specs -> dispatch -> execute -> deliver. Only the **coordinator** is built-in. All worker roles are **dynamically generated** as lightweight role-spec files and spawned via the `team-worker` agent.

@@ -13,7 +13,7 @@ Universal team coordination skill: analyze task -> generate role-specs -> dispat

```
+---------------------------------------------------+
| Skill(skill="team-coordinate-v2") |
| Skill(skill="team-coordinate") |
| args="task description" |
+-------------------+-------------------------------+
|
@@ -64,7 +64,7 @@ Always route to coordinator. Coordinator reads `roles/coordinator/role.md` and e

User just provides task description.

**Invocation**: `Skill(skill="team-coordinate-v2", args="task description")`
**Invocation**: `Skill(skill="team-coordinate", args="task description")`

**Lifecycle**:
```
@@ -143,7 +143,7 @@ AskUserQuestion({
| Choice | Steps |
|--------|-------|
| Archive & Clean | Update session status="completed" -> TeamDelete -> output final summary with artifact paths |
| Keep Active | Update session status="paused" -> output: "Resume with: Skill(skill='team-coordinate-v2', args='resume')" |
| Keep Active | Update session status="paused" -> output: "Resume with: Skill(skill='team-coordinate', args='resume')" |
| Export Results | AskUserQuestion(target path) -> copy artifacts to target -> Archive & Clean |

---

@@ -194,7 +194,7 @@ All tasks completed (no pending, no in_progress)
| | Output final summary with artifact paths
| +- "Keep Active":
| | Update session status="paused"
| | Output: "Resume with: Skill(skill='team-coordinate-v2', args='resume')"
| | Output: "Resume with: Skill(skill='team-coordinate', args='resume')"
| +- "Export Results":
| AskUserQuestion for target directory
| Copy deliverables to target

@@ -1,10 +1,10 @@
---
name: team-executor-v2
description: Lightweight session execution skill. Resumes existing team-coordinate-v2 sessions for pure execution via team-worker agents. No analysis, no role generation -- only loads and executes. Session path required. Triggers on "team executor v2".
name: team-executor
description: Lightweight session execution skill. Resumes existing team-coordinate sessions for pure execution via team-worker agents. No analysis, no role generation -- only loads and executes. Session path required. Triggers on "team executor".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---

# Team Executor v2
# Team Executor

Lightweight session execution skill: load session -> reconcile state -> spawn team-worker agents -> execute -> deliver. **No analysis, no role generation** -- only executes existing team-coordinate sessions.

@@ -13,35 +13,23 @@ Unified team skill: issue processing pipeline (explore → plan → implement
## Architecture

```
┌───────────────────────────────────────────────┐
│ Skill(skill="team-issue") │
│ args="<issue-ids>" or args="--role=xxx" │
└───────────────────┬───────────────────────────┘
│ Role Router
┌──── --role present? ────┐
│ NO │ YES
↓ ↓
Orchestration Mode Role Dispatch
(auto → coordinator) (route to role.md)
│
┌────┴────┬───────────┬───────────┬───────────┬───────────┐
↓ ↓ ↓ ↓ ↓ ↓
┌──────────┐┌─────────┐┌─────────┐┌─────────┐┌──────────┐┌──────────┐
│coordinator││explorer ││planner ││reviewer ││integrator││implementer│
│ ││EXPLORE-*││SOLVE-* ││AUDIT-* ││MARSHAL-* ││BUILD-* │
└──────────┘└─────────┘└─────────┘└─────────┘└──────────┘└──────────┘
```
+---------------------------------------------------+
| Skill(skill="team-issue") |
| args="<issue-ids>" |
+-------------------+-------------------------------+
|
Orchestration Mode (auto -> coordinator)
|
Coordinator (inline)
Phase 0-5 orchestration
|
+-----+-----+-----+-----+-----+
v v v v v
[tw] [tw] [tw] [tw] [tw]
explor plann review integ- imple-
er er er rator menter

## Command Architecture

```
roles/
├── coordinator.md # Pipeline orchestration (Phase 1/5 inline, Phase 2-4 core logic)
├── explorer.md # Context analysis (ACE + cli-explore-agent)
├── planner.md # Solution design (wraps issue-plan-agent)
├── reviewer.md # Solution review (technical feasibility + risk assessment)
├── integrator.md # Queue orchestration (wraps issue-queue-agent)
└── implementer.md # Code implementation (wraps code-developer)
(tw) = team-worker agent
```

## Role Router
@@ -52,14 +40,14 @@ Parse `$ARGUMENTS` to extract `--role`. If absent → Orchestration Mode (auto r

### Role Registry

| Role | File | Task Prefix | Type | Compact |
|------|------|-------------|------|---------|
| coordinator | [roles/coordinator.md](roles/coordinator.md) | (none) | orchestrator | **⚠️ Must re-read after compaction** |
| explorer | [roles/explorer.md](roles/explorer.md) | EXPLORE-* | pipeline | Must re-read after compaction |
| planner | [roles/planner.md](roles/planner.md) | SOLVE-* | pipeline | Must re-read after compaction |
| reviewer | [roles/reviewer.md](roles/reviewer.md) | AUDIT-* | pipeline | Must re-read after compaction |
| integrator | [roles/integrator.md](roles/integrator.md) | MARSHAL-* | pipeline | Must re-read after compaction |
| implementer | [roles/implementer.md](roles/implementer.md) | BUILD-* | pipeline | Must re-read after compaction |
| Role | Spec | Task Prefix | Inner Loop |
|------|------|-------------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | - |
| explorer | [role-specs/explorer.md](role-specs/explorer.md) | EXPLORE-* | false |
| planner | [role-specs/planner.md](role-specs/planner.md) | SOLVE-* | false |
| reviewer | [role-specs/reviewer.md](role-specs/reviewer.md) | AUDIT-* | false |
| integrator | [role-specs/integrator.md](role-specs/integrator.md) | MARSHAL-* | false |
| implementer | [role-specs/implementer.md](role-specs/implementer.md) | BUILD-* | false |

> **⚠️ COMPACT PROTECTION**: Role files are execution documents, not reference material. When context compression leaves only a summary of the role instructions, you **must immediately `Read` the corresponding role.md to reload it before continuing**. Do not execute any Phase from the summary alone.

@@ -308,91 +296,103 @@ Beat 1 2 3 4 5

## Coordinator Spawn Template

When coordinator spawns workers, use background mode (Spawn-and-Stop).
### v5 Worker Spawn (all roles)

**Standard spawn** (single agent per role): For Quick/Full mode, spawn one agent per role. Explorer, planner, reviewer, integrator each get a single agent.

**Parallel spawn** (Batch mode): For Batch mode with multiple issues, spawn N explorer agents in parallel (max 5) and M implementer agents in parallel (max 3). Each parallel agent only processes tasks where owner matches its agent name.

**Spawn template**:
When coordinator spawns workers, use `team-worker` agent with role-spec path:

```
Task({
  subagent_type: "general-purpose",
  subagent_type: "team-worker",
  description: "Spawn <role> worker",
  team_name: "issue",
  name: "<role>",
  run_in_background: true,
  prompt: `You are team "issue" <ROLE>.
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-issue/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: issue
requirement: <task-description>
inner_loop: false

## Primary Directive
All your work must be executed through Skill to load role definition:
Skill(skill="team-issue", args="--role=<role>")

Current requirement: <task-description>
Session: <session-folder>

## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other role work
- All output prefixed with [<role>] identifier
- Only communicate with coordinator
- Do not use TaskCreate for other roles
- Call mcp__ccw-tools__team_msg before every SendMessage

## Workflow
1. Call Skill -> load role definition and execution logic
2. Follow role.md 5-Phase flow
3. team_msg + SendMessage results to coordinator
4. TaskUpdate completed -> check next task`
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`
})
```

**All roles** (explorer, planner, reviewer, integrator, implementer): Set `inner_loop: false`.

### Parallel Spawn (Batch Mode)

> When Batch mode has parallel tasks assigned to the same role, spawn N distinct agents with unique names. A single agent can only process tasks serially.
> When Batch mode has parallel tasks assigned to the same role, spawn N distinct team-worker agents with unique names.

**Explorer parallel spawn** (Batch mode, N issues):

| Condition | Action |
|-----------|--------|
| Batch mode with N issues (N > 1) | Spawn min(N, 5) agents: `explorer-1`, `explorer-2`, ... with `run_in_background: true` |
| Quick/Full mode (single explorer) | Standard spawn: single `explorer` agent |
| Batch mode with N issues (N > 1) | Spawn min(N, 5) team-worker agents: `explorer-1`, `explorer-2`, ... with `run_in_background: true` |
| Quick/Full mode (single explorer) | Standard spawn: single `explorer` team-worker agent |

**Implementer parallel spawn** (Batch mode, M BUILD tasks):

| Condition | Action |
|-----------|--------|
| Batch mode with M BUILD tasks (M > 2) | Spawn min(M, 3) agents: `implementer-1`, `implementer-2`, ... with `run_in_background: true` |
| Quick/Full mode (single implementer) | Standard spawn: single `implementer` agent |
| Batch mode with M BUILD tasks (M > 2) | Spawn min(M, 3) team-worker agents: `implementer-1`, `implementer-2`, ... with `run_in_background: true` |
| Quick/Full mode (single implementer) | Standard spawn: single `implementer` team-worker agent |
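
The spawn counts and instance names in the two tables above can be sketched as (caps of 5 explorers and 3 implementers as stated; the function name and `parallel_min` parameter are illustrative):

```python
def spawn_names(role, task_count, cap, parallel_min=2):
    """Names of the worker instances to spawn for a batch of same-role tasks."""
    if task_count < parallel_min:
        return [role]  # single unnumbered agent (Quick/Full mode)
    n = min(task_count, cap)
    return [f"{role}-{i}" for i in range(1, n + 1)]

# Batch mode: 7 issues capped at 5 explorer instances
print(spawn_names("explorer", 7, cap=5))
# Implementer parallelizes only when M > 2
print(spawn_names("implementer", 2, cap=3, parallel_min=3))  # ['implementer']
```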

**Parallel spawn template**:

```
Task({
  subagent_type: "general-purpose",
  subagent_type: "team-worker",
  description: "Spawn <role>-<N> worker",
  team_name: "issue",
  name: "<role>-<N>",
  run_in_background: true,
  prompt: `You are team "issue" <ROLE> (<role>-<N>).
Your agent name is "<role>-<N>", use this name for task discovery owner matching.
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-issue/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: issue
requirement: <task-description>
agent_name: <role>-<N>
inner_loop: false

## Primary Directive
Skill(skill="team-issue", args="--role=<role> --agent-name=<role>-<N>")

## Role Guidelines
- Only process tasks where owner === "<role>-<N>" with <PREFIX>-* prefix
- All output prefixed with [<role>] identifier

## Workflow
1. TaskList -> find tasks where owner === "<role>-<N>" with <PREFIX>-* prefix
2. Skill -> execute role definition
3. team_msg + SendMessage results to coordinator
4. TaskUpdate completed -> check next task`
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery, owner=<role>-<N>) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`
})
```

**Dispatch must match agent names**: When dispatching parallel tasks, coordinator sets each task's owner to the corresponding instance name (`explorer-1`, `explorer-2`, etc. or `implementer-1`, `implementer-2`, etc.). In role.md, task discovery uses `--agent-name` for owner matching.
**Dispatch must match agent names**: When dispatching parallel tasks, coordinator sets each task's owner to the corresponding instance name (`explorer-1`, `explorer-2`, etc. or `implementer-1`, `implementer-2`, etc.).

---

## Completion Action

When the pipeline completes (all tasks done, coordinator Phase 5):

```
AskUserQuestion({
  questions: [{
    question: "Issue resolution pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
      { label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
      { label: "Export Results", description: "Export deliverables to a specified location, then clean" }
    ]
  }]
})
```

| Choice | Action |
|--------|--------|
| Archive & Clean | Update session status="completed" -> TeamDelete(issue) -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions: `Skill(skill="team-issue", args="resume")` |
| Export Results | AskUserQuestion for target path -> copy deliverables -> Archive & Clean |

---

@@ -1,336 +0,0 @@
# Coordinator Role

Orchestrate the issue resolution pipeline: requirement clarification -> mode selection -> team creation -> task chain -> dispatch -> monitoring -> reporting.

## Identity

- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **Task Prefix**: N/A (coordinator creates tasks, does not receive them)
- **Responsibility**: Orchestration

## Boundaries

### MUST

- All output (SendMessage, team_msg, logs) must carry the `[coordinator]` identifier
- Responsible only for: requirement clarification, mode selection, task creation/dispatch, progress monitoring, result reporting
- Create tasks via TaskCreate and assign them to worker roles
- Monitor worker progress via the message bus and route messages
- Parse user requirements and clarify ambiguous inputs via AskUserQuestion
- Maintain session state persistence
- Dispatch tasks with proper dependency chains (see SKILL.md Task Metadata Registry)

### MUST NOT

- Execute any business tasks directly (code writing, solution design, review, etc.)
- Call implementation subagents directly (issue-plan-agent, issue-queue-agent, code-developer, etc.)
- Modify source code or generated artifacts directly
- Bypass worker roles to complete delegated work
- Omit the `[coordinator]` identifier in any output
- Skip dependency validation when creating task chains

> **Core principle**: the coordinator is the orchestrator, not the executor. All actual work must be delegated to worker roles via TaskCreate.

---

## Toolbox

### Available Commands

> No command files -- all phases execute inline.

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `TeamCreate` | Team | coordinator | Initialize team |
| `TeamDelete` | Team | coordinator | Dissolve team |
| `SendMessage` | Team | coordinator | Communicate with workers/user |
| `TaskCreate` | Task | coordinator | Create and dispatch tasks |
| `TaskList` | Task | coordinator | Monitor task status |
| `TaskGet` | Task | coordinator | Get task details |
| `TaskUpdate` | Task | coordinator | Update task status |
| `AskUserQuestion` | UI | coordinator | Clarify requirements |
| `Read` | IO | coordinator | Read session files |
| `Write` | IO | coordinator | Write session files |
| `Bash` | System | coordinator | Execute ccw commands |
| `mcp__ccw-tools__team_msg` | Team | coordinator | Log messages to message bus |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `task_assigned` | coordinator -> worker | Task dispatched | Notify worker of new task |
| `pipeline_update` | coordinator -> user | Progress milestone | Pipeline progress update |
| `escalation` | coordinator -> user | Unresolvable issue | Escalate to user decision |
| `shutdown` | coordinator -> all | Team dissolved | Team shutdown notification |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "coordinator",
  type: <message-type>,
  data: {ref: <artifact-path>}
})
```

`to` and `summary` are auto-defaulted -- do NOT specify them explicitly.

**CLI fallback**: `ccw team log --session-id <session-id> --from coordinator --type <type> --json`

---
## Entry Router

When the coordinator is invoked, first detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` tag from a known worker role | -> handleCallback: auto-advance pipeline |
| Status check | Arguments contain "check" or "status" | -> handleCheck: output execution graph, no advancement |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume: check worker states, advance pipeline |
| New session | None of the above | -> Phase 0 (Session Resume Check) |

For callback/check/resume: execute the appropriate handler, then STOP.

---
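The routing rules above can be sketched as a small dispatch function. This is a minimal illustration only: the handler names mirror the table, and the worker-role list and the instance-tag pattern (`[explorer-2]`) are assumptions drawn from the parallel-dispatch notes elsewhere in this skill.

```typescript
// Hypothetical sketch of the Entry Router detection rules.
const WORKER_ROLES = ["explorer", "planner", "reviewer", "integrator", "implementer"];

type Handler = "handleCallback" | "handleCheck" | "handleResume" | "phase0";

function detectInvocation(message: string, args: string): Handler {
  // Worker callback: message carries a known worker's [role-name] or [role-N] tag
  const isCallback = WORKER_ROLES.some(
    (r) => message.includes(`[${r}]`) || new RegExp(`\\[${r}-\\d+\\]`).test(message)
  );
  if (isCallback) return "handleCallback";
  if (/\b(check|status)\b/i.test(args)) return "handleCheck";    // status output only
  if (/\b(resume|continue)\b/i.test(args)) return "handleResume"; // manual advance
  return "phase0";                                                // new session
}
```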
## Phase 0: Session Resume Check

**Objective**: Detect and resume interrupted sessions before creating new ones.

**Workflow**:

1. Check for existing team sessions via team_msg list
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for user selection

**Session Reconciliation**:

1. Audit TaskList -> get the real status of all tasks
2. Reconcile: session state <-> TaskList status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
4. Determine the remaining pipeline from the reconciled state
5. Rebuild the team if disbanded (TeamCreate + spawn needed workers only)
6. Create missing tasks with correct blockedBy dependencies
7. Verify dependency chain integrity
8. Update the session file with the reconciled state
9. Kick the first executable task's worker -> Phase 4

---
## Phase 1: Requirement Clarification

**Objective**: Parse user input and gather execution parameters.

**Workflow**:

1. **Parse arguments** for issue IDs and mode:

   | Pattern | Extraction |
   |---------|------------|
   | `GH-\d+` | GitHub issue ID |
   | `ISS-\d{8}-\d{6}` | Local issue ID |
   | `--mode=<mode>` | Explicit mode |
   | `--all-pending` | Load all pending issues |

2. **Load pending issues** if `--all-pending`:

   ```
   Bash("ccw issue list --status registered,pending --json")
   ```

3. **Ask for missing parameters** via AskUserQuestion if no issue IDs found

4. **Mode auto-detection** (when the user does not specify `--mode`):

   | Condition | Mode |
   |-----------|------|
   | Issue count <= 2 AND no high-priority (priority < 4) | `quick` |
   | Issue count <= 2 AND has high-priority (priority >= 4) | `full` |
   | Issue count >= 5 | `batch` |
   | 3-4 issues | `full` |

5. **Execution method selection** (for the BUILD phase):

   | Option | Description |
   |--------|-------------|
   | `Agent` | code-developer agent (sync, for simple tasks) |
   | `Codex` | Codex CLI (background, for complex tasks) |
   | `Gemini` | Gemini CLI (background, for analysis tasks) |
   | `Auto` | Auto-select based on solution task_count (default) |

6. **Code review selection**:

   | Option | Description |
   |--------|-------------|
   | `Skip` | No review |
   | `Gemini Review` | Gemini CLI review |
   | `Codex Review` | Git-aware review (--uncommitted) |

**Success**: All parameters captured, mode finalized.

---
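The mode auto-detection table above can be sketched as a pure function (a hypothetical helper, not part of the skill's actual tooling; the `Issue` shape is an assumption):

```typescript
// Hypothetical sketch of Phase 1 mode auto-detection.
type Mode = "quick" | "full" | "batch";

interface Issue { priority: number }

function autoDetectMode(issues: Issue[]): Mode {
  if (issues.length >= 5) return "batch"; // large batches always go parallel
  const hasHighPriority = issues.some((i) => i.priority >= 4);
  if (issues.length <= 2) return hasHighPriority ? "full" : "quick";
  return "full"; // 3-4 issues
}
```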
## Phase 2: Create Team + Initialize Session

**Objective**: Initialize team, session file, and wisdom directory.

**Workflow**:

1. Generate session ID
2. Create session folder
3. Call TeamCreate with team name "issue"
4. Initialize wisdom directory (learnings.md, decisions.md, conventions.md, issues.md)
5. Write session file with: session_id, mode, scope, status="active"

**Spawn template**: Workers are NOT pre-spawned here. Workers are spawned on demand in Phase 4. See SKILL.md Coordinator Spawn Template for worker prompt templates.

**Worker roles available**:

- quick mode: explorer, planner, integrator, implementer
- full mode: explorer, planner, reviewer, integrator, implementer
- batch mode: parallel explorers (max 5), parallel implementers (max 3)

**Success**: Team created, session file written, wisdom initialized.

---
## Phase 3: Create Task Chain

**Objective**: Dispatch tasks based on mode with proper dependencies.

### Quick Mode (4 beats, strictly serial)

Create a task chain for each issue: EXPLORE -> SOLVE -> MARSHAL -> BUILD

| Task ID | Role | Dependencies | Description |
|---------|------|--------------|-------------|
| EXPLORE-001 | explorer | (none) | Context analysis |
| SOLVE-001 | planner | EXPLORE-001 | Solution design |
| MARSHAL-001 | integrator | SOLVE-001 | Queue formation |
| BUILD-001 | implementer | MARSHAL-001 | Code implementation |

### Full Mode (5-7 beats, with review gate)

Add AUDIT between SOLVE and MARSHAL:

| Task ID | Role | Dependencies | Description |
|---------|------|--------------|-------------|
| EXPLORE-001 | explorer | (none) | Context analysis |
| SOLVE-001 | planner | EXPLORE-001 | Solution design |
| AUDIT-001 | reviewer | SOLVE-001 | Solution review |
| MARSHAL-001 | integrator | AUDIT-001 | Queue formation |
| BUILD-001 | implementer | MARSHAL-001 | Code implementation |

### Batch Mode (parallel windows)

Create parallel task batches:

| Batch | Tasks | Parallel Limit |
|-------|-------|----------------|
| EXPLORE-001..N | explorer | max 5 parallel |
| SOLVE-001..N | planner | sequential |
| AUDIT-001 | reviewer | (all SOLVE complete) |
| MARSHAL-001 | integrator | (AUDIT complete) |
| BUILD-001..M | implementer | max 3 parallel |

**Task description must include**: execution_method and code_review settings from Phase 1.

---
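The quick/full chains above are strictly serial, each beat blocked by the previous one. A minimal sketch of that construction (hypothetical helper; the `TaskSpec` shape approximates what TaskCreate would receive and is an assumption):

```typescript
// Hypothetical sketch of quick/full task-chain construction with blockedBy links.
interface TaskSpec { id: string; role: string; blockedBy: string[]; description: string }

function buildTaskChain(issueIdx: number, mode: "quick" | "full"): TaskSpec[] {
  const n = String(issueIdx).padStart(3, "0");
  const beats: Array<[string, string, string]> = [
    ["EXPLORE", "explorer", "Context analysis"],
    ["SOLVE", "planner", "Solution design"],
  ];
  if (mode === "full") beats.push(["AUDIT", "reviewer", "Solution review"]); // review gate
  beats.push(["MARSHAL", "integrator", "Queue formation"]);
  beats.push(["BUILD", "implementer", "Code implementation"]);
  return beats.map(([prefix, role, description], i) => ({
    id: `${prefix}-${n}`,
    role,
    // Each beat is blocked by the previous beat's task (strictly serial)
    blockedBy: i === 0 ? [] : [`${beats[i - 1][0]}-${n}`],
    description,
  }));
}
```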
## Phase 4: Spawn-and-Stop

**Objective**: Spawn the first batch of ready workers in the background, then STOP.

**Design**: Spawn-and-Stop + Callback pattern.

- Spawn workers with `Task(run_in_background: true)` -> return immediately
- Worker completes -> SendMessage callback -> auto-advance
- User can use "check" / "resume" to manually advance
- Coordinator does one operation per invocation, then STOPS

**Workflow**:

1. Find tasks with: status=pending, blockedBy all resolved, owner assigned
2. For each ready task -> spawn worker (see SKILL.md Spawn Template)
3. Output status summary
4. STOP

**Pipeline advancement** is driven by three wake sources:

| Wake Source | Handler | Action |
|-------------|---------|--------|
| Worker callback | handleCallback | Auto-advance next step |
| User "check" | handleCheck | Status output only |
| User "resume" | handleResume | Advance pipeline |
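The "ready task" filter in workflow step 1 can be sketched as follows (a minimal illustration; it assumes a dependency counts as resolved only when its task is completed within the same list):

```typescript
// Hypothetical sketch of the Phase 4 step 1 ready-task filter.
interface PipelineTask {
  id: string;
  status: "pending" | "in_progress" | "completed" | "failed";
  blockedBy: string[];
  owner?: string;
}

function findReadyTasks(tasks: PipelineTask[]): PipelineTask[] {
  const done = new Set(tasks.filter((t) => t.status === "completed").map((t) => t.id));
  return tasks.filter(
    (t) =>
      t.status === "pending" &&
      t.owner !== undefined &&                  // owner assigned
      t.blockedBy.every((dep) => done.has(dep)) // blockedBy all resolved
  );
}
```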
### Message Handlers

| Received Message | Action |
|------------------|--------|
| `context_ready` from explorer | Unblock SOLVE-* tasks for this issue |
| `solution_ready` from planner | Quick: create MARSHAL-*; Full: create AUDIT-* |
| `multi_solution` from planner | AskUserQuestion for solution selection, then ccw issue bind |
| `approved` from reviewer | Unblock MARSHAL-* task |
| `rejected` from reviewer | Create SOLVE-fix task with feedback (max 2 rounds) |
| `concerns` from reviewer | Log concerns, proceed to MARSHAL (non-blocking) |
| `queue_ready` from integrator | Create BUILD-* tasks based on DAG parallel batches |
| `conflict_found` from integrator | AskUserQuestion for conflict resolution |
| `impl_complete` from implementer | Refresh DAG, create next BUILD-* batch or complete |
| `impl_failed` from implementer | Escalation: retry / skip / abort |
| `error` from any worker | Assess severity -> retry or escalate to user |

### Review-Fix Cycle (max 2 rounds)

| Round | Rejected Action |
|-------|-----------------|
| Round 1 | Create SOLVE-fix-1 task with reviewer feedback |
| Round 2 | Create SOLVE-fix-2 task with reviewer feedback |
| Round 3+ | Escalate to user: Force approve / Manual fix / Skip issue |

---
## Phase 5: Report + Next Steps

**Objective**: Completion report and follow-up options.

**Workflow**:

1. Load session state -> count completed tasks, duration
2. List deliverables with output paths
3. Update session status -> "completed"
4. Log via team_msg
5. Offer next steps to the user:

   | Option | Action |
   |--------|--------|
   | New batch | Return to Phase 1 with new issue IDs |
   | View results | Show implementation results and git changes |
   | Close team | TeamDelete() and cleanup |

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No issue IDs provided | AskUserQuestion for IDs |
| Issue not found | Skip with warning, continue others |
| Worker unresponsive | Send follow-up; after 2x -> respawn |
| Review rejected 2+ times | Escalate to user |
| Build failed | Retry once, then escalate |
| All workers error | Shut down team, report to user |
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Dependency cycle | Detect, report to user, halt |
| Invalid mode | Reject with error, ask to clarify |
| Session corruption | Attempt recovery, fall back to manual reconciliation |
@@ -1,210 +0,0 @@
# Explorer Role

Issue context analysis, codebase exploration, dependency identification, impact assessment. Produces a shared context report for planner and reviewer.

## Identity

- **Name**: `explorer` | **Tag**: `[explorer]`
- **Task Prefix**: `EXPLORE-*`
- **Responsibility**: Orchestration (context gathering)

## Boundaries

### MUST

- Only process `EXPLORE-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[explorer]` identifier
- Only communicate with the coordinator via SendMessage
- Produce a context report for subsequent roles (planner, reviewer)
- Work strictly within the context-gathering responsibility scope

### MUST NOT

- Design solutions (planner responsibility)
- Review solution quality (reviewer responsibility)
- Modify any source code
- Communicate directly with other worker roles
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Omit the `[explorer]` identifier in any output

---

## Toolbox

### Available Commands

> No command files -- all phases execute inline.

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Task` | Subagent | explorer | Spawn cli-explore-agent for deep exploration |
| `Read` | IO | explorer | Read context files and issue data |
| `Write` | IO | explorer | Write context report |
| `Bash` | System | explorer | Execute ccw commands |
| `mcp__ace-tool__search_context` | Search | explorer | Semantic code search |
| `mcp__ccw-tools__team_msg` | Team | explorer | Log messages to message bus |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `context_ready` | explorer -> coordinator | Context analysis complete | Context report ready |
| `impact_assessed` | explorer -> coordinator | Impact scope determined | Impact assessment complete |
| `error` | explorer -> coordinator | Blocking error | Cannot complete exploration |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "explorer",
  type: <message-type>,
  data: {ref: <artifact-path>}
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from explorer --type <message-type> --json")
```

---
## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `EXPLORE-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

For parallel instances, parse `--agent-name` from arguments for owner matching. Falls back to `explorer` for single-instance roles.

### Phase 2: Issue Loading & Context Setup

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Issue ID | Task description (GH-\d+ or ISS-\d{8}-\d{6}) | Yes |
| Issue details | `ccw issue status <id> --json` | Yes |
| Project root | Working directory | Yes |

**Loading steps**:

1. Extract the issue ID from the task description via regex: `(?:GH-\d+|ISS-\d{8}-\d{6})`
2. If no issue ID found -> SendMessage error to coordinator, STOP
3. Load issue details:

   ```
   Bash("ccw issue status <issueId> --json")
   ```

4. Parse the JSON response for issue metadata (title, context, priority, labels, feedback)
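The extraction regex in loading step 1 can be wrapped in a small helper (hypothetical function name; the pattern itself is the one given above):

```typescript
// Hypothetical sketch of Phase 2 step 1: issue-ID extraction.
function extractIssueId(description: string): string | null {
  // Matches GitHub IDs (GH-123) and local IDs (ISS-YYYYMMDD-HHMMSS)
  const match = description.match(/(?:GH-\d+|ISS-\d{8}-\d{6})/);
  return match ? match[0] : null;
}
```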
### Phase 3: Codebase Exploration & Impact Analysis

**Complexity assessment determines exploration depth**:

| Signal | Weight | Keywords |
|--------|--------|----------|
| Structural change | +2 | refactor, architect, restructure, module, system |
| Cross-cutting | +2 | multiple, across, cross |
| Integration | +1 | integrate, api, database |
| High priority | +1 | priority >= 4 |

| Score | Complexity | Strategy |
|-------|------------|----------|
| >= 4 | High | Deep exploration via cli-explore-agent |
| 2-3 | Medium | Hybrid: ACE search + selective agent |
| 0-1 | Low | Direct ACE search only |

**Exploration execution**:

| Complexity | Execution |
|------------|-----------|
| Low | Direct ACE search: `mcp__ace-tool__search_context(project_root_path, query)` |
| Medium/High | Spawn cli-explore-agent: `Task({ subagent_type: "cli-explore-agent", run_in_background: false })` |
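The two scoring tables above compose into one function. A minimal sketch, assuming each signal fires at most once per issue regardless of how many of its keywords match:

```typescript
// Hypothetical sketch of the complexity assessment tables.
function assessComplexity(text: string, priority: number): "Low" | "Medium" | "High" {
  const lower = text.toLowerCase();
  const hit = (words: string[]) => words.some((w) => lower.includes(w));
  let score = 0;
  if (hit(["refactor", "architect", "restructure", "module", "system"])) score += 2; // structural change
  if (hit(["multiple", "across", "cross"])) score += 2;                              // cross-cutting
  if (hit(["integrate", "api", "database"])) score += 1;                             // integration
  if (priority >= 4) score += 1;                                                     // high priority
  return score >= 4 ? "High" : score >= 2 ? "Medium" : "Low";
}
```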
**cli-explore-agent prompt template**:

```
## Issue Context
ID: <issueId>
Title: <issue.title>
Description: <issue.context>
Priority: <issue.priority>

## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Execute ACE searches based on issue keywords
3. Run: ccw spec load --category exploration

## Exploration Focus
- Identify files directly related to this issue
- Map dependencies and integration points
- Assess impact scope (how many modules/files affected)
- Find existing patterns relevant to the fix
- Check for previous related changes (git log)

## Output
Write findings to: .workflow/.team-plan/issue/context-<issueId>.json

Schema: {
  issue_id, relevant_files[], dependencies[], impact_scope,
  existing_patterns[], related_changes[], key_findings[],
  complexity_assessment, _metadata
}
```

### Phase 4: Context Report Generation

**Report assembly**:

1. Read exploration results from `.workflow/.team-plan/issue/context-<issueId>.json`
2. If the file is not found, build a minimal report from ACE results
3. Enrich with issue metadata: id, title, priority, status, labels, feedback

**Report schema**:

```
{
  issue_id: string,
  issue: { id, title, priority, status, labels, feedback },
  relevant_files: [{ path, relevance }] | string[],
  dependencies: string[],
  impact_scope: "low" | "medium" | "high",
  existing_patterns: string[],
  related_changes: string[],
  key_findings: string[],
  complexity_assessment: "Low" | "Medium" | "High"
}
```

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[explorer]` prefix -> TaskUpdate completed -> Loop to Phase 1 for the next task.

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No EXPLORE-* tasks available | Idle, wait for coordinator assignment |
| Issue ID not found in task | Notify coordinator with error |
| Issue ID not found in ccw | Notify coordinator with error |
| ACE search returns no results | Fall back to Glob/Grep, report limited context |
| cli-explore-agent failure | Retry once with a simplified prompt, then report partial results |
| Context file write failure | Report via SendMessage with inline context |
| Context/Plan file not found | Notify coordinator, request location |
@@ -1,318 +0,0 @@
# Implementer Role

Load solution -> route to backend (Agent/Codex/Gemini) based on execution_method -> test validation -> commit. Supports multiple CLI execution backends. The execution method is determined in coordinator Phase 1.

## Identity

- **Name**: `implementer` | **Tag**: `[implementer]`
- **Task Prefix**: `BUILD-*`
- **Responsibility**: Code implementation (solution -> route to backend -> test -> commit)

## Boundaries

### MUST

- Only process `BUILD-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[implementer]` identifier
- Only communicate with the coordinator via SendMessage
- Select the execution backend based on the `execution_method` field in the BUILD-* task
- Notify the coordinator after each solution completes
- Continuously poll for new BUILD-* tasks

### MUST NOT

- Modify solutions (planner responsibility)
- Review implementation results (reviewer responsibility)
- Modify the execution queue (integrator responsibility)
- Communicate directly with other worker roles
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Omit the `[implementer]` identifier in any output

---

## Toolbox

### Available Commands

> No command files -- all phases execute inline.

### Execution Backends

| Backend | Tool | Invocation | Mode |
|---------|------|------------|------|
| `agent` | code-developer subagent | `Task({ subagent_type: "code-developer" })` | Sync |
| `codex` | Codex CLI | `ccw cli --tool codex --mode write` | Background |
| `gemini` | Gemini CLI | `ccw cli --tool gemini --mode write` | Background |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Task` | Subagent | implementer | Spawn code-developer for agent execution |
| `Read` | IO | implementer | Read solution plan and queue files |
| `Write` | IO | implementer | Write implementation artifacts |
| `Edit` | IO | implementer | Edit source code |
| `Bash` | System | implementer | Run tests, git operations, CLI calls |
| `mcp__ccw-tools__team_msg` | Team | implementer | Log messages to message bus |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `impl_complete` | implementer -> coordinator | Implementation and tests pass | Implementation complete |
| `impl_failed` | implementer -> coordinator | Implementation failed after retries | Implementation failed |
| `error` | implementer -> coordinator | Blocking error | Execution error |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "implementer",
  type: <message-type>,
  data: {ref: <artifact-path>}
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from implementer --type <message-type> --json")
```

---
## Execution Method Resolution

Parse the execution method from the BUILD-* task description:

| Pattern | Extraction |
|---------|------------|
| `execution_method:\s*Agent` | Use agent backend |
| `execution_method:\s*Codex` | Use codex backend |
| `execution_method:\s*Gemini` | Use gemini backend |
| `execution_method:\s*Auto` | Auto-select based on task count |

**Auto-selection logic**:

| Solution Task Count | Backend |
|---------------------|---------|
| <= 3 | agent |
| > 3 | codex |

**Code review resolution**:

| Pattern | Setting |
|---------|---------|
| `code_review:\s*Skip` | No review |
| `code_review:\s*Gemini Review` | Gemini CLI review |
| `code_review:\s*Codex Review` | Git-aware review (--uncommitted) |
| No match | Skip (default) |

---
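The resolution tables above can be sketched as one function. This is a hypothetical helper: the unknown-method fallback to `agent` follows the Error Handling table for this role, and treating a missing field as `Agent` is an assumption.

```typescript
// Hypothetical sketch of executor resolution from a BUILD-* task description.
type Executor = "agent" | "codex" | "gemini";

function resolveExecutor(description: string, solutionTaskCount: number): Executor {
  const m = description.match(/execution_method:\s*(\w+)/);
  const method = m ? m[1] : "Agent"; // missing field -> agent (assumption)
  if (method === "Codex") return "codex";
  if (method === "Gemini") return "gemini";
  if (method === "Auto") {
    // Auto: small solutions go to the sync agent, larger ones to Codex
    return solutionTaskCount <= 3 ? "agent" : "codex";
  }
  return "agent"; // Agent, or unknown method -> fallback to agent
}
```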
## Execution Prompt Builder

Unified prompt template for all backends:

```
## Issue
ID: <issueId>
Title: <solution.bound.title>

## Solution Plan
<solution.bound JSON>

## Codebase Context (from explorer)
Relevant files: <explorerContext.relevant_files>
Existing patterns: <explorerContext.existing_patterns>
Dependencies: <explorerContext.dependencies>

## Implementation Requirements

1. Follow the solution plan tasks in order
2. Write clean, minimal code following existing patterns
3. Run tests after each significant change
4. Ensure all existing tests still pass
5. Do NOT over-engineer -- implement exactly what the solution specifies

## Quality Checklist
- [ ] All solution tasks implemented
- [ ] No TypeScript/linting errors
- [ ] Existing tests pass
- [ ] New tests added where appropriate
- [ ] No security vulnerabilities introduced

## Project Guidelines
@.workflow/specs/*.md
```

---
## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `BUILD-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

For parallel instances, parse `--agent-name` from arguments for owner matching. Falls back to `implementer` for single-instance roles.

### Phase 2: Load Solution & Resolve Executor

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Issue ID | Task description (GH-\d+ or ISS-\d{8}-\d{6}) | Yes |
| Bound solution | `ccw issue solutions <id> --json` | Yes |
| Explorer context | `.workflow/.team-plan/issue/context-<issueId>.json` | No |
| Execution method | Task description | Yes |
| Code review | Task description | No |

**Loading steps**:

1. Extract the issue ID from the task description
2. If no issue ID -> SendMessage error to coordinator, STOP
3. Load the bound solution:

   ```
   Bash("ccw issue solutions <issueId> --json")
   ```

4. If no bound solution -> SendMessage error to coordinator, STOP
5. Load explorer context (if available)
6. Resolve the execution method from the task description
7. Resolve the code review setting from the task description
8. Update the issue status:

   ```
   Bash("ccw issue update <issueId> --status in-progress")
   ```
### Phase 3: Implementation (Multi-Backend Routing)

Route to a backend based on the `executor` resolution:

#### Option A: Agent Execution (`executor === 'agent'`)

Sync call to the code-developer subagent, suitable for simple tasks (task_count <= 3).

```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Implement solution for <issueId>",
  prompt: <executionPrompt>
})
```

#### Option B: Codex CLI Execution (`executor === 'codex'`)

Background call to the Codex CLI, suitable for complex tasks. Uses a fixed ID for resume support.

```
Bash("ccw cli -p \"<executionPrompt>\" --tool codex --mode write --id issue-<issueId>", { run_in_background: true })
```

**On failure, resume with**:

```
ccw cli -p "Continue implementation" --resume issue-<issueId> --tool codex --mode write --id issue-<issueId>-retry
```

#### Option C: Gemini CLI Execution (`executor === 'gemini'`)

Background call to the Gemini CLI, suitable for composite tasks requiring analysis.

```
Bash("ccw cli -p \"<executionPrompt>\" --tool gemini --mode write --id issue-<issueId>", { run_in_background: true })
```
### Phase 4: Verify & Commit

**Test detection**:

| Detection | Method |
|-----------|--------|
| package.json exists | Check `scripts.test` or `scripts.test:unit` |
| yarn.lock exists | Use `yarn test` |
| Fallback | Use `npm test` |

**Test execution**:

```
Bash("<testCmd> 2>&1 || echo \"TEST_FAILED\"")
```
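The detection table above can be sketched as a helper (a hypothetical implementation; preferring `test:unit` over `test` and using `<runner> run <script>` are assumptions about the intended behavior):

```typescript
// Hypothetical sketch of the Phase 4 test-command detection table.
import * as fs from "node:fs";

function detectTestCmd(projectRoot: string): string {
  const pkgPath = `${projectRoot}/package.json`;
  if (fs.existsSync(pkgPath)) {
    const pkg = JSON.parse(fs.readFileSync(pkgPath, "utf8"));
    const script = pkg.scripts?.["test:unit"] ? "test:unit" : pkg.scripts?.test ? "test" : null;
    if (script) {
      // Prefer yarn when a yarn.lock is present, otherwise npm
      const runner = fs.existsSync(`${projectRoot}/yarn.lock`) ? "yarn" : "npm";
      return `${runner} run ${script}`;
    }
  }
  return "npm test"; // fallback
}
```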
**Test result handling**:

| Condition | Action |
|-----------|--------|
| Tests pass | Proceed to optional code review |
| Tests fail | Report impl_failed to coordinator |

**Failed test report**:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: <session-id>, from: "implementer",
  type: "impl_failed"
})

SendMessage({
  type: "message", recipient: "coordinator",
  content: "## [implementer] Implementation Failed\n\n**Issue**: <issueId>\n**Executor**: <executor>\n**Status**: Tests failing\n**Test Output** (truncated):\n<truncated output>\n\n**Action**: May need solution revision or manual intervention."
})
```

**Optional code review** (if configured):

| Tool | Command |
|------|---------|
| Gemini Review | `ccw cli -p "<reviewPrompt>" --tool gemini --mode analysis --id issue-review-<issueId>` |
| Codex Review | `ccw cli --tool codex --mode review --uncommitted` |

**Success completion**:

```
Bash("ccw issue update <issueId> --status resolved")
```
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[implementer]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Report content includes**:

- Issue ID
- Executor used
- Solution ID
- Code review status
- Test status
- Issue status update

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No BUILD-* tasks available | Idle, wait for coordinator |
| Solution plan not found | Report error to coordinator |
| Unknown execution_method | Fallback to `agent` with warning |
| Agent (code-developer) failure | Retry once, then report impl_failed |
| CLI (Codex/Gemini) failure | Provide resume command with fixed ID, report impl_failed |
| CLI timeout | Use fixed ID `issue-{issueId}` for resume |
| Tests failing after implementation | Report impl_failed with test output + resume info |
| Issue status update failure | Log warning, continue with report |
| Dependency not yet complete | Wait -- task is blocked by blockedBy |
| Context/Plan file not found | Notify coordinator, request location |
@@ -1,235 +0,0 @@
# Integrator Role

Queue orchestration, conflict detection, execution order optimization. Internally invokes issue-queue-agent for intelligent queue formation.

## Identity

- **Name**: `integrator` | **Tag**: `[integrator]`
- **Task Prefix**: `MARSHAL-*`
- **Responsibility**: Orchestration (queue formation)

## Boundaries

### MUST

- Only process `MARSHAL-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[integrator]` identifier
- Only communicate with coordinator via SendMessage
- Use issue-queue-agent for queue orchestration
- Ensure all issues have bound solutions before queue formation

### MUST NOT

- Modify solutions (planner responsibility)
- Review solution quality (reviewer responsibility)
- Implement code (implementer responsibility)
- Communicate directly with other worker roles
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Omit `[integrator]` identifier in any output

---

## Toolbox

### Available Commands

> No command files -- all phases execute inline.

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Task` | Subagent | integrator | Spawn issue-queue-agent for queue formation |
| `Read` | IO | integrator | Read queue files and solution data |
| `Write` | IO | integrator | Write queue output |
| `Bash` | System | integrator | Execute ccw commands |
| `mcp__ccw-tools__team_msg` | Team | integrator | Log messages to message bus |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `queue_ready` | integrator -> coordinator | Queue formed successfully | Queue ready for execution |
| `conflict_found` | integrator -> coordinator | File conflicts detected, user input needed | Conflicts need manual decision |
| `error` | integrator -> coordinator | Blocking error | Queue formation failed |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "integrator",
  type: <message-type>,
  data: {ref: <artifact-path>}
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from integrator --type <message-type> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `MARSHAL-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Collect Bound Solutions

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Issue IDs | Task description (GH-\d+ or ISS-\d{8}-\d{6}) | Yes |
| Bound solutions | `ccw issue solutions <id> --json` | Yes |

**Loading steps**:

1. Extract issue IDs from task description via regex
2. Verify all issues have bound solutions:

```
Bash("ccw issue solutions <issueId> --json")
```

3. Check for unbound issues:

| Condition | Action |
|-----------|--------|
| All issues bound | Proceed to Phase 3 |
| Any issue unbound | Report error to coordinator, STOP |
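
The bound/unbound check above can be sketched as a pure partition step. A sketch only: the input shape `{id, solutions}` is an assumed simplification of the `ccw issue solutions --json` output, not its documented schema.

```javascript
// Partition issues into bound (>=1 bound solution) and unbound,
// mirroring the "Check for unbound issues" table above.
function partitionBySolution(issues) {
  const bound = [];
  const unbound = [];
  for (const issue of issues) {
    const hasBound = (issue.solutions || []).some(s => s.bound);
    (hasBound ? bound : unbound).push(issue.id);
  }
  return { bound, unbound, ok: unbound.length === 0 };
}
```

When `ok` is false, the integrator sends the unbound error report below and stops.
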
**Unbound error report**:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: <session-id>, from: "integrator",
  type: "error",
})

SendMessage({
  type: "message", recipient: "coordinator",
  content: "## [integrator] Error: Unbound Issues\n\nThe following issues have no bound solution:\n<unbound list>\n\nPlanner must create solutions before queue formation.",
})
```

### Phase 3: Queue Formation via issue-queue-agent

**Agent invocation**:

```
Task({
  subagent_type: "issue-queue-agent",
  run_in_background: false,
  description: "Form queue for <count> issues",
  prompt: "
## Issues to Queue

Issue IDs: <issueIds>

## Bound Solutions

<solution list with issue_id, solution_id, task_count>

## Instructions

1. Load all bound solutions from .workflow/issues/solutions/
2. Analyze file conflicts between solutions using Gemini CLI
3. Determine optimal execution order (DAG-based)
4. Produce ordered execution queue

## Expected Output

Write queue to: .workflow/issues/queue/execution-queue.json

Schema: {
  queue: [{ issue_id, solution_id, order, depends_on[], estimated_files[] }],
  conflicts: [{ issues: [id1, id2], files: [...], resolution }],
  parallel_groups: [{ group: N, issues: [...] }]
}
"
})
```

**Parse queue result**:

```
Read(".workflow/issues/queue/execution-queue.json")
```

### Phase 4: Conflict Resolution

**Queue validation**:

| Condition | Action |
|-----------|--------|
| Queue file exists | Check for unresolved conflicts |
| Queue file not found | Report error to coordinator, STOP |

**Conflict handling**:

| Condition | Action |
|-----------|--------|
| No unresolved conflicts | Proceed to Phase 5 |
| Has unresolved conflicts | Report to coordinator for user decision |

**Unresolved conflict report**:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: <session-id>, from: "integrator",
  type: "conflict_found",
})

SendMessage({
  type: "message", recipient: "coordinator",
  content: "## [integrator] Conflicts Found\n\n**Unresolved Conflicts**: <count>\n\n<conflict details>\n\n**Action Required**: Coordinator should present conflicts to user for resolution, then re-trigger MARSHAL.",
})
```

**Queue metrics**:

| Metric | Source |
|--------|--------|
| Queue size | `queueResult.queue.length` |
| Parallel groups | `queueResult.parallel_groups.length` |
| Resolved conflicts | Count where `resolution !== 'unresolved'` |
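
The metrics in the table above can be derived from the parsed queue file in one pass. A sketch; the field names follow the schema shown in the Phase 3 prompt.

```javascript
// Derive queue metrics from the parsed execution-queue.json.
function queueMetrics(queueResult) {
  const conflicts = queueResult.conflicts || [];
  return {
    queueSize: queueResult.queue.length,
    parallelGroups: queueResult.parallel_groups.length,
    resolvedConflicts: conflicts.filter(c => c.resolution !== "unresolved").length,
    unresolvedConflicts: conflicts.filter(c => c.resolution === "unresolved").length,
  };
}
```

A non-zero `unresolvedConflicts` is what routes Phase 4 into the conflict report above instead of Phase 5.
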
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[integrator]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Report content includes**:

- Queue size
- Number of parallel groups
- Resolved conflicts count
- Execution order list
- Parallel groups breakdown
- Queue file path

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No MARSHAL-* tasks available | Idle, wait for coordinator |
| Issues without bound solutions | Report to coordinator, block queue formation |
| issue-queue-agent failure | Retry once, then report error |
| Unresolved file conflicts | Escalate to coordinator for user decision |
| Single issue (no conflict possible) | Create trivial queue with one entry |
| Context/Plan file not found | Notify coordinator, request location |
@@ -1,201 +0,0 @@
# Planner Role

Solution design, task decomposition. Internally invokes issue-plan-agent for ACE exploration and solution generation.

## Identity

- **Name**: `planner` | **Tag**: `[planner]`
- **Task Prefix**: `SOLVE-*`
- **Responsibility**: Orchestration (solution design)

## Boundaries

### MUST

- Only process `SOLVE-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[planner]` identifier
- Only communicate with coordinator via SendMessage
- Use issue-plan-agent for solution design
- Reference explorer's context-report for solution context

### MUST NOT

- Execute code implementation (implementer responsibility)
- Review solution quality (reviewer responsibility)
- Orchestrate execution queue (integrator responsibility)
- Communicate directly with other worker roles
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's responsibility
- Omit `[planner]` identifier in any output

---

## Toolbox

### Available Commands

> No command files -- all phases execute inline.

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Task` | Subagent | planner | Spawn issue-plan-agent for solution design |
| `Read` | IO | planner | Read context reports |
| `Bash` | System | planner | Execute ccw commands |
| `mcp__ccw-tools__team_msg` | Team | planner | Log messages to message bus |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `solution_ready` | planner -> coordinator | Solution designed and bound | Single solution ready |
| `multi_solution` | planner -> coordinator | Multiple solutions, needs selection | Multiple solutions pending selection |
| `error` | planner -> coordinator | Blocking error | Solution design failed |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "planner",
  type: <message-type>,
  data: {ref: <artifact-path>}
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from planner --type <message-type> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `SOLVE-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context Loading

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Issue ID | Task description (GH-\d+ or ISS-\d{8}-\d{6}) | Yes |
| Explorer context | `.workflow/.team-plan/issue/context-<issueId>.json` | No |
| Review feedback | Task description (for SOLVE-fix tasks) | No |

**Loading steps**:

1. Extract issue ID from task description via regex: `(?:GH-\d+|ISS-\d{8}-\d{6})`
2. If no issue ID found -> SendMessage error to coordinator, STOP
3. Load explorer's context report (if available):

```
Read(".workflow/.team-plan/issue/context-<issueId>.json")
```

4. Check if this is a revision task (SOLVE-fix-N):
   - If yes, extract reviewer feedback from task description
   - Design alternative approach addressing reviewer concerns
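
Steps 1-2 of the loading flow can be sketched as follows. The regex is the one given above; the helper name and `null` convention are illustrative assumptions.

```javascript
// Extract the first issue ID (GH-* or ISS-*) from a task description.
const ISSUE_ID_RE = /(?:GH-\d+|ISS-\d{8}-\d{6})/;

function extractIssueId(description) {
  const match = description.match(ISSUE_ID_RE);
  return match ? match[0] : null; // null -> report error to coordinator, STOP
}
```
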
### Phase 3: Solution Generation via issue-plan-agent

**Agent invocation**:

```
Task({
  subagent_type: "issue-plan-agent",
  run_in_background: false,
  description: "Plan solution for <issueId>",
  prompt: "
issue_ids: [\"<issueId>\"]
project_root: \"<projectRoot>\"

## Explorer Context (pre-gathered)
Relevant files: <explorerContext.relevant_files>
Key findings: <explorerContext.key_findings>
Complexity: <explorerContext.complexity_assessment>

## Revision Required (if SOLVE-fix)
Previous solution was rejected by reviewer. Feedback:
<reviewFeedback>

Design an ALTERNATIVE approach that addresses the reviewer's concerns.
"
})
```

**Expected agent result**:

| Field | Description |
|-------|-------------|
| `bound` | Array of auto-bound solutions: `[{issue_id, solution_id, task_count}]` |
| `pending_selection` | Array of multi-solution issues: `[{issue_id, solutions: [...]}]` |

### Phase 4: Solution Selection & Binding

**Outcome routing**:

| Condition | Action |
|-----------|--------|
| Single solution auto-bound | Report `solution_ready` to coordinator |
| Multiple solutions pending | Report `multi_solution` to coordinator for user selection |
| No solution generated | Report `error` to coordinator |

**Single solution report**:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: <session-id>, from: "planner",
  type: "solution_ready",
})

SendMessage({
  type: "message", recipient: "coordinator",
  content: "## [planner] Solution Ready\n\n**Issue**: <issue_id>\n**Solution**: <solution_id>\n**Tasks**: <task_count>\n**Status**: Auto-bound (single solution)",
})
```

**Multi-solution report**:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: <session-id>, from: "planner",
  type: "multi_solution",
})

SendMessage({
  type: "message", recipient: "coordinator",
  content: "## [planner] Multiple Solutions\n\n**Issue**: <issue_id>\n**Solutions**: <count> options\n\n### Options\n<solution details>\n\n**Action Required**: Coordinator should present options to user for selection.",
})
```

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: TaskUpdate completed -> check for next SOLVE-* task -> if found, loop to Phase 1.

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No SOLVE-* tasks available | Idle, wait for coordinator |
| Issue not found | Notify coordinator with error |
| issue-plan-agent failure | Retry once, then report error |
| Explorer context missing | Proceed without -- agent does its own exploration |
| Solution binding failure | Report to coordinator for manual binding |
| Context/Plan file not found | Notify coordinator, request location |
@@ -1,266 +0,0 @@
# Reviewer Role

Solution review, technical feasibility validation, risk assessment. **Quality gate role** that fills the gap between plan and execute phases.

## Identity

- **Name**: `reviewer` | **Tag**: `[reviewer]`
- **Task Prefix**: `AUDIT-*`
- **Responsibility**: Read-only analysis (solution review)

## Boundaries

### MUST

- Only process `AUDIT-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[reviewer]` identifier
- Only communicate with coordinator via SendMessage
- Reference explorer's context-report for solution coverage validation
- Provide clear verdict for each solution: approved / rejected / concerns

### MUST NOT

- Modify solutions (planner responsibility)
- Modify any source code
- Orchestrate execution queue (integrator responsibility)
- Communicate directly with other worker roles
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Omit `[reviewer]` identifier in any output

---

## Toolbox

### Available Commands

> No command files -- all phases execute inline.

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Read` | IO | reviewer | Read solution files and context reports |
| `Bash` | System | reviewer | Execute ccw issue commands |
| `Glob` | Search | reviewer | Find related files |
| `Grep` | Search | reviewer | Search code patterns |
| `mcp__ace-tool__search_context` | Search | reviewer | Semantic search for solution validation |
| `mcp__ccw-tools__team_msg` | Team | reviewer | Log messages to message bus |
| `Write` | IO | reviewer | Write audit report |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `approved` | reviewer -> coordinator | Solution passes all checks | Solution approved |
| `rejected` | reviewer -> coordinator | Critical issues found | Solution rejected, needs revision |
| `concerns` | reviewer -> coordinator | Minor issues noted | Has concerns but non-blocking |
| `error` | reviewer -> coordinator | Blocking error | Review failed |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "reviewer",
  type: <message-type>,
  data: {ref: <artifact-path>}
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from reviewer --type <message-type> --json")
```

---

## Review Criteria

### Technical Feasibility (Weight 40%)

| Criterion | Check |
|-----------|-------|
| File Coverage | Solution covers all affected files |
| Dependency Awareness | Considers dependency cascade effects |
| API Compatibility | Maintains backward compatibility |
| Pattern Conformance | Follows existing code patterns |

### Risk Assessment (Weight 30%)

| Criterion | Check |
|-----------|-------|
| Scope Creep | Solution stays within issue boundary |
| Breaking Changes | No destructive modifications |
| Side Effects | No unforeseen side effects |
| Rollback Path | Can rollback if issues occur |

### Completeness (Weight 30%)

| Criterion | Check |
|-----------|-------|
| All Tasks Defined | Task decomposition is complete |
| Test Coverage | Includes test plan |
| Edge Cases | Considers boundary conditions |
| Documentation | Key changes are documented |

### Verdict Rules

| Score | Verdict | Action |
|-------|---------|--------|
| >= 80% | `approved` | Proceed to MARSHAL phase |
| 60-79% | `concerns` | Include suggestions, non-blocking |
| < 60% | `rejected` | Requires planner revision |

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `AUDIT-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context & Solution Loading

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Issue IDs | Task description (GH-\d+ or ISS-\d{8}-\d{6}) | Yes |
| Explorer context | `.workflow/.team-plan/issue/context-<issueId>.json` | No |
| Bound solution | `ccw issue solutions <id> --json` | Yes |

**Loading steps**:

1. Extract issue IDs from task description via regex
2. Load explorer context reports for each issue:

```
Read(".workflow/.team-plan/issue/context-<issueId>.json")
```

3. Load bound solutions for each issue:

```
Bash("ccw issue solutions <issueId> --json")
```

### Phase 3: Multi-Dimensional Review

**Review execution for each issue**:

| Dimension | Weight | Validation Method |
|-----------|--------|-------------------|
| Technical Feasibility | 40% | Cross-check solution files against explorer context + ACE semantic validation |
| Risk Assessment | 30% | Analyze task count for scope creep, check for breaking changes |
| Completeness | 30% | Verify task definitions exist, check for test plan |

**Technical Feasibility validation**:

| Condition | Score Impact |
|-----------|--------------|
| All context files covered by solution | 100% |
| Partial coverage (some files missing) | -15% per uncovered file, min 40% |
| ACE results diverge from solution patterns | -10% |
| No explorer context available | 70% (limited validation) |
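
The score impacts above compose as sketched below. Note one assumption not spelled out in the table: the 40% floor is applied to the coverage penalty before the ACE divergence penalty is subtracted.

```javascript
// Technical Feasibility score per the table above (sketch).
function feasibilityScore(contextFiles, solutionFiles, aceDiverges) {
  if (!contextFiles || contextFiles.length === 0) return 70; // limited validation
  const uncovered = contextFiles.filter(f => !solutionFiles.includes(f)).length;
  let score = Math.max(100 - 15 * uncovered, 40); // -15% per uncovered file, min 40%
  if (aceDiverges) score -= 10; // ACE results diverge from solution patterns
  return score;
}
```
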
**Risk Assessment validation**:

| Condition | Score |
|-----------|-------|
| Task count <= 10 | 90% |
| Task count > 10 (possible scope creep) | 50% |

**Completeness validation**:

| Condition | Score |
|-----------|-------|
| Tasks defined (count > 0) | 85% |
| No tasks defined | 30% |

**ACE semantic validation**:

```
mcp__ace-tool__search_context({
  project_root_path: <projectRoot>,
  query: "<solution.title>. Verify patterns: <solutionFiles>"
})
```

Cross-check ACE results against solution's assumed patterns. If >50% of solution files are not found in ACE results, flag the solution as potentially outdated.

### Phase 4: Compile Review Report

**Score calculation**:

```
total_score = round(
  technical_feasibility.score * 0.4 +
  risk_assessment.score * 0.3 +
  completeness.score * 0.3
)
```

**Verdict determination**:

| Score | Verdict |
|-------|---------|
| >= 80 | approved |
| 60-79 | concerns |
| < 60 | rejected |

**Overall verdict**:

| Condition | Overall Verdict |
|-----------|-----------------|
| Any solution rejected | rejected |
| Any solution has concerns (no rejections) | concerns |
| All solutions approved | approved |
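
The score aggregation and verdict rules above can be sketched together; the weights (40/30/30) and thresholds (80/60) are the ones defined in this section.

```javascript
// Weighted total score per the "Score calculation" block above.
function totalScore(tf, risk, completeness) {
  return Math.round(tf * 0.4 + risk * 0.3 + completeness * 0.3);
}

// Verdict thresholds per the "Verdict determination" table.
function verdict(score) {
  if (score >= 80) return "approved";
  if (score >= 60) return "concerns";
  return "rejected";
}

// Overall verdict: any rejection wins, then any concern, else approved.
function overallVerdict(verdicts) {
  if (verdicts.includes("rejected")) return "rejected";
  if (verdicts.includes("concerns")) return "concerns";
  return "approved";
}
```

For example, scores of 100/90/30 aggregate to 76, which lands in the 60-79 band and yields `concerns`.
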
**Write audit report**:

```
Write(".workflow/.team-plan/issue/audit-report.json", {
  timestamp: <ISO timestamp>,
  overall_verdict: <verdict>,
  reviews: [{
    issueId, solutionId, total_score, verdict,
    technical_feasibility: { score, findings },
    risk_assessment: { score, findings },
    completeness: { score, findings }
  }]
})
```

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[reviewer]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Report content includes**:

- Overall verdict
- Per-issue scores and verdicts
- Rejection reasons (if any)
- Action required for rejected solutions

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No AUDIT-* tasks available | Idle, wait for coordinator |
| Solution file not found | Check ccw issue solutions, report error if missing |
| Explorer context missing | Proceed with limited review (lower technical score) |
| All solutions rejected | Report to coordinator for review-fix cycle |
| Review timeout | Report partial results with available data |
| Context/Plan file not found | Notify coordinator, request location |
@@ -11,20 +11,23 @@ Iterative development team skill. Generator-Critic loops (developer<->reviewer,
## Architecture

```
+-------------------------------------------------+
|        Skill(skill="team-iterdev")              |
|  args="task description" or args="--role=xxx"   |
+-------------------+-----------------------------+
                    | Role Router
         +---- --role present? ----+
         | NO                      | YES
         v                         v
  Orchestration Mode         Role Dispatch
  (auto -> coordinator)      (route to role.md)
         |
    +----+----+----------+---------+---------+
    v         v          v         v         v
coordinator architect developer tester reviewer
+---------------------------------------------------+
|          Skill(skill="team-iterdev")              |
|          args="<task-description>"                |
+-------------------+-------------------------------+
                    |
    Orchestration Mode (auto -> coordinator)
                    |
            Coordinator (inline)
          Phase 0-5 orchestration
                    |
       +-------+-------+-------+-------+
       v       v       v       v
      [tw]    [tw]    [tw]    [tw]
     archi-  devel-  tester  review-
      tect    oper            er

(tw) = team-worker agent
```
@@ -35,13 +38,13 @@ Parse `$ARGUMENTS` to extract `--role`. If absent -> Orchestration Mode (auto ro
### Role Registry

| Role | File | Task Prefix | Type | Compact |
|------|------|-------------|------|---------|
| coordinator | [roles/coordinator.md](roles/coordinator.md) | (none) | orchestrator | **MUST re-read after compression** |
| architect | [roles/architect.md](roles/architect.md) | DESIGN-* | pipeline | MUST re-read after compression |
| developer | [roles/developer.md](roles/developer.md) | DEV-* | pipeline | MUST re-read after compression |
| tester | [roles/tester.md](roles/tester.md) | VERIFY-* | pipeline | MUST re-read after compression |
| reviewer | [roles/reviewer.md](roles/reviewer.md) | REVIEW-* | pipeline | MUST re-read after compression |
| Role | Spec | Task Prefix | Inner Loop |
|------|------|-------------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | - |
| architect | [role-specs/architect.md](role-specs/architect.md) | DESIGN-* | false |
| developer | [role-specs/developer.md](role-specs/developer.md) | DEV-* | true |
| tester | [role-specs/tester.md](role-specs/tester.md) | VERIFY-* | false |
| reviewer | [role-specs/reviewer.md](role-specs/reviewer.md) | REVIEW-* | false |

> **COMPACT PROTECTION**: Role files are execution documents, not reference material. When context compression occurs and role instructions are reduced to summaries, you **MUST immediately `Read` the corresponding role.md to reload before continuing execution**. Never execute any Phase based on summaries alone.
@@ -404,39 +407,62 @@ Real-time tracking of all sprint task progress. Coordinator updates at each task
## Coordinator Spawn Template

When coordinator spawns workers, use background mode (Spawn-and-Stop):
### v5 Worker Spawn (all roles)

When coordinator spawns workers, use `team-worker` agent with role-spec path:

```
Task({
  subagent_type: "general-purpose",
  subagent_type: "team-worker",
  description: "Spawn <role> worker",
  team_name: <team-name>,
  team_name: "iterdev",
  name: "<role>",
  run_in_background: true,
  prompt: `You are team "<team-name>" <ROLE>.
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-iterdev/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: iterdev
requirement: <task-description>
inner_loop: <true|false>

## Primary Instruction
All your work must be executed by calling Skill to load role definition:
Skill(skill="team-iterdev", args="--role=<role>")

Current requirement: <task-description>
Session: <session-folder>

## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other roles' work
- All output must have [<role>] identifier prefix
- Communicate only with coordinator
- Do not use TaskCreate to create tasks for other roles
- Before each SendMessage, call mcp__ccw-tools__team_msg to log

## Workflow
1. Call Skill -> load role definition and execution logic
2. Follow role.md 5-Phase flow
3. team_msg + SendMessage result to coordinator
4. TaskUpdate completed -> check next task`
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`
})
```

**Inner Loop roles** (developer): Set `inner_loop: true`. The team-worker agent handles the loop internally.

**Single-task roles** (architect, tester, reviewer): Set `inner_loop: false`.
---

## Completion Action

When the pipeline completes (all tasks done, coordinator Phase 5):

```
AskUserQuestion({
  questions: [{
    question: "IterDev pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
      { label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
      { label: "Export Results", description: "Export deliverables to a specified location, then clean" }
    ]
  }]
})
```

| Choice | Action |
|--------|--------|
| Archive & Clean | Update session status="completed" -> TeamDelete(iterdev) -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions: `Skill(skill="team-iterdev", args="resume")` |
| Export Results | AskUserQuestion for target path -> copy deliverables -> Archive & Clean |

---

## Unified Session Directory
@@ -1,260 +0,0 @@
# Architect Role

Technical architect. Responsible for technical design, task decomposition, and architecture decision records.

## Identity

- **Name**: `architect` | **Tag**: `[architect]`
- **Task Prefix**: `DESIGN-*`
- **Responsibility**: Read-only analysis (Technical Design)

## Boundaries

### MUST

- Only process `DESIGN-*` prefixed tasks
- All output must carry the `[architect]` identifier
- Phase 2: Read .msg/meta.json; Phase 5: Write architecture_decisions
- Work strictly within the technical design responsibility scope

### MUST NOT

- Execute work outside this role's responsibility scope
- Write implementation code, execute tests, or perform code review
- Communicate directly with other worker roles (must go through the coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's responsibility
- Omit the `[architect]` identifier in any output

---
## Toolbox

### Tool Capabilities

| Tool | Type | Purpose |
|------|------|---------|
| Task | Agent | Spawn cli-explore-agent for codebase exploration |
| Read | File | Read session files, shared memory, design files |
| Write | File | Write design documents and task breakdown |
| Bash | Shell | Execute shell commands |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `design_ready` | architect -> coordinator | Design completed | Design ready for implementation |
| `design_revision` | architect -> coordinator | Design revised | Design updated based on feedback |
| `error` | architect -> coordinator | Processing failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "architect",
  type: <message-type>,
  data: { ref: <design-path> }
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from architect --type <message-type> --json")
```

---
## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `DESIGN-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context Loading + Codebase Exploration

**Inputs**:

| Input | Source | Required |
|-------|--------|----------|
| Session path | Task description (Session: <path>) | Yes |
| Shared memory | <session-folder>/.msg/meta.json | Yes |
| Codebase | Project files | Yes |
| Wisdom | <session-folder>/wisdom/ | No |

**Loading steps**:

1. Extract the session path from the task description
2. Read .msg/meta.json for context

```
Read(<session-folder>/.msg/meta.json)
```

3. Multi-angle codebase exploration via cli-explore-agent:

```
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: "Explore architecture",
  prompt: `Explore codebase architecture for: <task-description>

Focus on:
- Existing patterns
- Module structure
- Dependencies
- Similar implementations

Report relevant files and integration points.`
})
```
### Phase 3: Technical Design + Task Decomposition

**Design strategy selection**:

| Condition | Strategy |
|-----------|----------|
| Single module change | Direct inline design |
| Cross-module change | Multi-component design with integration points |
| Large refactoring | Phased approach with milestones |

**Outputs**:

1. **Design Document** (`<session-folder>/design/design-<num>.md`):

```markdown
# Technical Design — <num>

**Requirement**: <task-description>
**Sprint**: <sprint-number>

## Architecture Decision

**Approach**: <selected-approach>
**Rationale**: <rationale>
**Alternatives Considered**: <alternatives>

## Component Design

### <Component-1>
- **Responsibility**: <description>
- **Dependencies**: <deps>
- **Files**: <file-list>
- **Complexity**: <low/medium/high>

## Task Breakdown

### Task 1: <title>
- **Files**: <file-list>
- **Estimated Complexity**: <level>
- **Dependencies**: <deps or None>

## Integration Points

- **<Integration-1>**: <description>

## Risks

- **<Risk-1>**: <mitigation>
```

2. **Task Breakdown JSON** (`<session-folder>/design/task-breakdown.json`):

```json
{
  "design_id": "design-<num>",
  "tasks": [
    {
      "id": "task-1",
      "title": "<title>",
      "files": ["<file1>", "<file2>"],
      "complexity": "<level>",
      "dependencies": [],
      "acceptance_criteria": "<criteria>"
    }
  ],
  "total_files": <count>,
  "execution_order": ["task-1", "task-2"]
}
```
### Phase 4: Design Validation

**Validation checks**:

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Components defined | Verify component list | At least 1 component |
| Task breakdown exists | Verify task list | At least 1 task |
| Dependencies mapped | Check all components have a dependencies field | All have dependencies (can be empty) |
| Integration points | Verify integration section | Key integrations documented |
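The checks above can be sketched as a small validator over the Phase 3 task-breakdown JSON. This is an illustrative helper, not part of the skill; the breakdown shape follows the Phase 3 example.

```javascript
// Hypothetical validator for task-breakdown.json (shape from the Phase 3 example).
function validateBreakdown(breakdown) {
  const errors = [];
  // At least 1 task must exist.
  if (!Array.isArray(breakdown.tasks) || breakdown.tasks.length === 0) {
    errors.push("at least 1 task required");
  }
  // Every task must carry a dependencies field (an empty array is allowed).
  for (const t of breakdown.tasks ?? []) {
    if (!Array.isArray(t.dependencies)) {
      errors.push(`${t.id}: dependencies field missing`);
    }
  }
  return { ok: errors.length === 0, errors };
}
```

A failing result would be reported back to the coordinator rather than silently accepted.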

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

1. **Update shared memory**:

```
sharedMemory.architecture_decisions.push({
  design_id: "design-<num>",
  approach: <approach>,
  rationale: <rationale>,
  components: <component-names>,
  task_count: <count>
})
Write(<session-folder>/.msg/meta.json, JSON.stringify(sharedMemory, null, 2))
```

2. **Log and send message**:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: <session-id>, from: "architect",
  type: "design_ready",
  data: { ref: <design-path> }
})

SendMessage({
  type: "message", recipient: "coordinator",
  content: `## [architect] Design Ready

**Components**: <count>
**Tasks**: <task-count>
**Design**: <design-path>
**Breakdown**: <breakdown-path>`
})
```

3. **Mark task complete**:

```
TaskUpdate({ taskId: <task-id>, status: "completed" })
```

4. **Loop to Phase 1** for the next task

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No DESIGN-* tasks available | Idle, wait for coordinator assignment |
| Codebase exploration fails | Design based on the task description alone |
| Too many components identified | Simplify, suggest a phased approach |
| Conflicting patterns found | Document in design, recommend resolution |
| Context/Plan file not found | Notify coordinator, request location |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
| Unexpected error | Log error via team_msg, report to coordinator |
@@ -1,420 +0,0 @@
# Coordinator Role

Orchestrate the IterDev workflow: Sprint planning, backlog management, task ledger maintenance, Generator-Critic loop control (developer<->reviewer, max 3 rounds), cross-sprint learning, conflict handling, concurrency control, rollback strategy, user feedback loop, and tech debt tracking.

## Identity

- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **Responsibility**: Orchestration + Stability Management + Quality Tracking

## Boundaries

### MUST

- All output must carry the `[coordinator]` identifier
- Maintain task-ledger.json for real-time progress
- Manage the developer<->reviewer GC loop (max 3 rounds)
- Record learning to .msg/meta.json at Sprint end
- Detect and coordinate task conflicts
- Manage shared resource locks (resource_locks)
- Record rollback points and support emergency rollback
- Collect and track user feedback (user_feedback_items)
- Identify and record tech debt (tech_debt_items)
- Generate tech debt reports

### MUST NOT

- Execute implementation work directly (delegate to workers)
- Write source code directly
- Call implementation-type subagents directly
- Modify task outputs (workers own their deliverables)
- Skip dependency validation when creating task chains

> **Core principle**: The coordinator is the orchestrator, not the executor. All actual work must be delegated to worker roles via TaskCreate.

---
## Entry Router

When the coordinator is invoked, first detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` tag from a known worker role | -> handleCallback: auto-advance pipeline |
| Status check | Arguments contain "check" or "status" | -> handleCheck: output execution graph, no advancement |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume: check worker states, advance pipeline |
| New session | None of the above | -> Phase 0 (Session Resume Check) |

For callback/check/resume: load the monitor logic, execute the appropriate handler, then STOP.
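The detection order above can be sketched as a small router. This is a minimal illustration, assuming the four worker role names used elsewhere in this skill; the real coordinator performs the same checks in prose.

```javascript
// Illustrative entry router (worker role names assumed from this skill's pipelines).
const WORKER_ROLES = ["architect", "developer", "tester", "reviewer"];

function routeInvocation(message, args = "") {
  // Worker callback: message carries a [role] tag from a known worker.
  if (WORKER_ROLES.some((r) => message.includes(`[${r}]`))) return "handleCallback";
  // Status check: "check" or "status" in the arguments.
  if (/\b(check|status)\b/.test(args)) return "handleCheck";
  // Manual resume: "resume" or "continue" in the arguments.
  if (/\b(resume|continue)\b/.test(args)) return "handleResume";
  // Otherwise: fresh invocation -> Phase 0 session resume check.
  return "phase0";
}
```

Checks run top to bottom, so a worker callback always wins over argument keywords.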

---

## Phase 0: Session Resume Check

**Objective**: Detect and resume interrupted sessions before creating new ones.

**Workflow**:

1. Scan `.workflow/.team/IDS-*/.msg/meta.json` for sessions with status "active" or "paused"
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for user selection

**Session Reconciliation**:

1. Audit TaskList -> get the real status of all tasks
2. Reconcile: session state <-> TaskList status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
4. Determine the remaining pipeline from the reconciled state
5. Rebuild the team if disbanded (TeamCreate + spawn needed workers only)
6. Create missing tasks with correct blockedBy dependencies
7. Verify dependency chain integrity
8. Update the session file with the reconciled state
9. Kick the first executable task's worker -> Phase 4

---
## Phase 1: Requirement Clarification

**Objective**: Parse user input and gather execution parameters.

**Workflow**:

1. **Parse arguments** for explicit settings: mode, scope, focus areas

2. **Assess complexity** for pipeline selection:

| Signal | Weight | Keywords |
|--------|--------|----------|
| Changed files > 10 | +3 | Large changeset |
| Changed files 3-10 | +2 | Medium changeset |
| Structural change | +3 | refactor, architect, restructure, system, module |
| Cross-cutting | +2 | multiple, across, cross |
| Simple fix | -2 | fix, bug, typo, patch |

| Score | Pipeline | Description |
|-------|----------|-------------|
| >= 5 | multi-sprint | Incremental iterative delivery for large features |
| 2-4 | sprint | Standard: Design -> Dev -> Verify + Review |
| 0-1 | patch | Simple: Dev -> Verify |

3. **Ask for missing parameters** via AskUserQuestion, appending "(recommended)" to the label of the pipeline assessed in step 2:

```
AskUserQuestion({
  questions: [{
    question: "Select development mode:",
    header: "Mode",
    multiSelect: false,
    options: [
      { label: "patch", description: "Patch mode: implement -> verify (simple fixes)" },
      { label: "sprint", description: "Sprint mode: design -> implement -> verify + review" },
      { label: "multi-sprint", description: "Multi-sprint: incremental iterative delivery (large features)" }
    ]
  }]
})
```

**Success**: All parameters captured, mode finalized.
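The scoring heuristic from step 2 can be sketched as follows. The weights and keyword lists come from the tables above; the function names and keyword matching via regex are illustrative choices, not the skill's actual implementation.

```javascript
// Sketch of the Phase 1 complexity heuristic (weights from the tables above).
function assessComplexity(changedFiles, description) {
  let score = 0;
  if (changedFiles > 10) score += 3;        // large changeset
  else if (changedFiles >= 3) score += 2;   // medium changeset
  const text = description.toLowerCase();
  if (/refactor|architect|restructure|system|module/.test(text)) score += 3; // structural
  if (/multiple|across|cross/.test(text)) score += 2;                        // cross-cutting
  if (/fix|bug|typo|patch/.test(text)) score -= 2;                           // simple fix
  return score;
}

function selectPipeline(score) {
  if (score >= 5) return "multi-sprint";
  if (score >= 2) return "sprint";
  return "patch";
}
```

For example, a 12-file refactor touching several packages scores well above 5 and lands in multi-sprint, while a one-file typo fix scores negative and lands in patch.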

---

## Phase 2: Create Team + Initialize Session

**Objective**: Initialize the team, session file, task ledger, shared memory, and wisdom directory.

**Workflow**:

1. Generate session ID: `IDS-{slug}-{YYYY-MM-DD}`
2. Create the session folder structure
3. Call TeamCreate with the team name
4. Initialize the wisdom directory (learnings.md, decisions.md, conventions.md, issues.md)
5. Write the session file with: session_id, mode, scope, status="active"
6. Initialize task-ledger.json:

```
{
  "sprint_id": "sprint-1",
  "sprint_goal": "<task-description>",
  "pipeline": "<selected-pipeline>",
  "tasks": [],
  "metrics": { "total": 0, "completed": 0, "in_progress": 0, "blocked": 0, "velocity": 0 }
}
```

7. Initialize .msg/meta.json:

```
{
  "sprint_history": [],
  "architecture_decisions": [],
  "implementation_context": [],
  "review_feedback_trends": [],
  "gc_round": 0,
  "max_gc_rounds": 3,
  "resource_locks": {},
  "task_checkpoints": {},
  "user_feedback_items": [],
  "tech_debt_items": []
}
```

**Success**: Team created, session file written, wisdom initialized, task ledger and shared memory ready.

---
## Phase 3: Create Task Chain

**Objective**: Dispatch tasks based on mode with proper dependencies.

### Patch Pipeline

| Task ID | Owner | Blocked By | Description |
|---------|-------|------------|-------------|
| DEV-001 | developer | (none) | Implement fix |
| VERIFY-001 | tester | DEV-001 | Verify fix |

### Sprint Pipeline

| Task ID | Owner | Blocked By | Description |
|---------|-------|------------|-------------|
| DESIGN-001 | architect | (none) | Technical design and task breakdown |
| DEV-001 | developer | DESIGN-001 | Implement design |
| VERIFY-001 | tester | DEV-001 | Test execution |
| REVIEW-001 | reviewer | DEV-001 | Code review |

### Multi-Sprint Pipeline

Sprint 1: DESIGN-001 -> DEV-001 -> DEV-002 (incremental) -> VERIFY-001 -> DEV-fix -> REVIEW-001

Subsequent sprints are created dynamically after Sprint N completes.

**Task Creation**: Use TaskCreate + TaskUpdate(owner, addBlockedBy) for each task. Include `Session: <session-folder>` in every task description.

---
## Phase 4: Spawn-and-Stop

**Objective**: Spawn the first batch of ready workers in the background, then STOP.

**Design**: Spawn-and-Stop + Callback pattern.

- Spawn workers with `Task(run_in_background: true)` -> return immediately
- Worker completes -> SendMessage callback -> auto-advance
- User can use "check" / "resume" to manually advance
- The coordinator does one operation per invocation, then STOPS

**Workflow**:

1. Find tasks with: status=pending, blockedBy all resolved, owner assigned
2. For each ready task -> spawn a worker using the Spawn Template
3. Output a status summary
4. STOP

### Callback Handler

| Received Message | Action |
|-----------------|--------|
| architect: design_ready | Update ledger -> unblock DEV |
| developer: dev_complete | Update ledger -> unblock VERIFY + REVIEW |
| tester: verify_passed | Update ledger (test_pass_rate) |
| tester: verify_failed | Create DEV-fix task |
| tester: fix_required | Create DEV-fix task -> assign developer |
| reviewer: review_passed | Update ledger (review_score) -> mark complete |
| reviewer: review_revision | **GC loop** -> create DEV-fix -> REVIEW-next |
| reviewer: review_critical | **GC loop** -> create DEV-fix -> REVIEW-next |
### GC Loop Control

When receiving `review_revision` or `review_critical`:

1. Read .msg/meta.json -> get gc_round
2. If gc_round < max_gc_rounds (3):
   - Increment gc_round
   - Create a DEV-fix task with the review feedback
   - Create a REVIEW-next task blocked by DEV-fix
   - Update the ledger
   - Log a gc_loop_trigger message
3. Else (max rounds reached):
   - Accept with warning
   - Log a sprint_complete message
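The branch in steps 2-3 reduces to a small pure decision over the shared-memory counters. This is a sketch; the state shape mirrors the gc_round / max_gc_rounds fields initialized in Phase 2, and the function name is an illustration.

```javascript
// Decision helper for the GC loop bound (state fields from .msg/meta.json).
function gcDecision(state) {
  if (state.gc_round < state.max_gc_rounds) {
    // Iterate: caller creates DEV-fix + REVIEW-next and logs gc_loop_trigger.
    return { action: "iterate", gc_round: state.gc_round + 1 };
  }
  // Max rounds reached: accept with warning, log sprint_complete.
  return { action: "force_converge", gc_round: state.gc_round };
}
```

Keeping the bound check in one place ensures the loop can never exceed max_gc_rounds regardless of how many revision messages arrive.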

---

## Phase 5: Report + Next Steps

**Objective**: Completion report and follow-up options.

**Workflow**:

1. Load session state -> count completed tasks, duration
2. Record sprint learning to .msg/meta.json
3. List deliverables with output paths
4. Update session status -> "completed"
5. Offer next steps via AskUserQuestion

---
## Protocol Implementations

### Resource Lock Protocol

Concurrency control for shared resources. Prevents multiple workers from modifying the same files simultaneously.

| Action | Trigger Condition | Behavior |
|--------|-------------------|----------|
| Acquire lock | Worker requests exclusive access | Check resource_locks via team_msg(type='state_update'). If unlocked, record the lock with task ID, timestamp, and holder. Log resource_locked. Return success. |
| Deny lock | Resource already locked | Return failure with the current holder's task ID. Log resource_contention. Worker must wait. |
| Release lock | Worker completes task | Remove the lock entry. Log resource_unlocked. |
| Force release | Lock held beyond timeout (5 min) | Force-remove the lock entry. Notify the holder. Log a warning. |
| Deadlock detection | Multiple tasks waiting on each other | Abort the youngest task, release its locks. Notify the coordinator. |

### Conflict Detection Protocol

Detects and resolves file-level conflicts between concurrent development tasks.

| Action | Trigger Condition | Behavior |
|--------|-------------------|----------|
| Detect conflict | DEV task completes with changed files | Compare changed files against other in_progress/completed tasks. If they overlap, update the task's conflict_info to status "detected". Log conflict_detected. |
| Resolve conflict | Conflict detected | Set resolution_strategy (manual/auto_merge/abort). Create a fix-conflict task for the developer. Log conflict_resolved. |
| Skip | No file overlap | No action needed. |
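The detection step is a file-set intersection between the completed task and every other active task. A minimal sketch, assuming each task record carries an `id` and a `changed_files` array (field names are illustrative):

```javascript
// File-overlap check for a finished DEV task against other tasks (shapes assumed).
function detectConflicts(completedTask, otherTasks) {
  const changed = new Set(completedTask.changed_files);
  return otherTasks
    .filter((t) => t.id !== completedTask.id)
    .map((t) => ({
      taskId: t.id,
      overlap: t.changed_files.filter((f) => changed.has(f)), // shared files
    }))
    .filter((c) => c.overlap.length > 0); // empty result -> Skip row applies
}
```

Each returned entry names the conflicting task and the overlapping files, which is what conflict_info needs to record.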

### Rollback Point Protocol

Manages state snapshots for safe recovery.

| Action | Trigger Condition | Behavior |
|--------|-------------------|----------|
| Create rollback point | Task phase completes | Generate a snapshot ID, record rollback_procedure (default: git revert HEAD) in the task's rollback_info. |
| Execute rollback | Task failure or user request | Log rollback_initiated. Execute the stored procedure. Log rollback_completed or rollback_failed. |
| Validate snapshot | Before rollback | Verify the snapshot ID exists and the procedure is valid. |

### Dependency Validation Protocol

Validates external dependencies before task execution.

| Action | Trigger Condition | Behavior |
|--------|-------------------|----------|
| Validate | Task startup with dependencies | Check installed version vs expected. Record status (ok/mismatch/missing) in external_dependencies. |
| Report mismatch | Any dependency has issues | Log dependency_mismatch. Block the task until resolved. |
| Update notification | Important update available | Log dependency_update_needed. Add to backlog. |

### Checkpoint Management Protocol

Saves and restores task execution state for interruption recovery.

| Action | Trigger Condition | Behavior |
|--------|-------------------|----------|
| Save checkpoint | Task reaches a milestone | Store the checkpoint in task_checkpoints with a timestamp. Retain the last 5 per task. Log context_checkpoint_saved. |
| Restore checkpoint | Task resumes after interruption | Load the latest checkpoint. Log context_restored. |
| Not found | Resume requested but no checkpoints | Return failure. Worker starts fresh. |
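The retain-last-5 and restore-latest rules can be sketched over the task_checkpoints map from Phase 2. Function names and the checkpoint shape are illustrative assumptions.

```javascript
// Checkpoint store keeping the last 5 snapshots per task (limit from the table above).
const MAX_CHECKPOINTS = 5;

function saveCheckpoint(store, taskId, data, timestamp = Date.now()) {
  const list = store[taskId] ?? (store[taskId] = []);
  list.push({ timestamp, data });
  // Retain only the most recent MAX_CHECKPOINTS entries.
  if (list.length > MAX_CHECKPOINTS) list.splice(0, list.length - MAX_CHECKPOINTS);
}

function restoreCheckpoint(store, taskId) {
  const list = store[taskId];
  // Latest checkpoint, or null -> "Not found" row: worker starts fresh.
  return list?.length ? list[list.length - 1] : null;
}
```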

### User Feedback Protocol

Collects, categorizes, and tracks user feedback.

| Action | Trigger Condition | Behavior |
|--------|-------------------|----------|
| Receive feedback | User provides feedback | Create a feedback item (FB-xxx) with severity and category. Store in user_feedback_items (max 50). Log user_feedback_received. |
| Link to task | Feedback relates to a task | Update source_task_id, set status "reviewed". |
| Triage | High/critical severity | Prioritize in the next sprint. Create a task if actionable. |

### Tech Debt Management Protocol

Identifies, tracks, and prioritizes technical debt.

| Action | Trigger Condition | Behavior |
|--------|-------------------|----------|
| Identify debt | Worker reports tech debt | Create a debt item (TD-xxx) with category, severity, and effort. Store in tech_debt_items. Log tech_debt_identified. |
| Generate report | Sprint retrospective | Aggregate by severity and category. Report totals. |
| Prioritize | Sprint planning | Rank by severity. Recommend items for the current sprint. |
| Resolve | Developer completes a debt task | Update status to "resolved". Record in sprint history. |
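The "Generate report" row is a simple aggregation over tech_debt_items. A minimal sketch, assuming each item carries a `severity` field as described in the "Identify debt" row:

```javascript
// Sprint-retrospective aggregation of tech_debt_items by severity (item shape assumed).
function debtReport(items) {
  const bySeverity = {};
  for (const item of items) {
    bySeverity[item.severity] = (bySeverity[item.severity] ?? 0) + 1;
  }
  return { total: items.length, bySeverity };
}
```

The same fold extends naturally to category counts if the retrospective needs both axes.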

---

## Toolbox

### Tool Capabilities

| Tool | Type | Purpose |
|------|------|---------|
| TeamCreate | Team | Create team instance |
| TeamDelete | Team | Disband team |
| SendMessage | Communication | Send messages to workers |
| TaskCreate | Task | Create tasks for workers |
| TaskUpdate | Task | Update task status/owner/dependencies |
| TaskList | Task | List all tasks |
| TaskGet | Task | Get task details |
| Task | Agent | Spawn worker agents |
| AskUserQuestion | Interaction | Ask user for input |
| Read | File | Read session files |
| Write | File | Write session files |
| Bash | Shell | Execute shell commands |

---
## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| sprint_started | coordinator -> all | Sprint begins | Sprint initialization |
| gc_loop_trigger | coordinator -> developer | Review needs revision | GC loop iteration |
| sprint_complete | coordinator -> all | Sprint ends | Sprint summary |
| task_unblocked | coordinator -> worker | Task dependencies resolved | Task ready |
| error | coordinator -> all | Error occurred | Error notification |
| shutdown | coordinator -> all | Team disbands | Shutdown notice |
| conflict_detected | coordinator -> all | File conflict found | Conflict alert |
| conflict_resolved | coordinator -> all | Conflict resolved | Resolution notice |
| resource_locked | coordinator -> all | Resource acquired | Lock notification |
| resource_unlocked | coordinator -> all | Resource released | Unlock notification |
| resource_contention | coordinator -> all | Lock denied | Contention alert |
| rollback_initiated | coordinator -> all | Rollback started | Rollback notice |
| rollback_completed | coordinator -> all | Rollback succeeded | Success notice |
| rollback_failed | coordinator -> all | Rollback failed | Failure alert |
| dependency_mismatch | coordinator -> all | Dependency issue | Dependency alert |
| dependency_update_needed | coordinator -> all | Update available | Update notice |
| context_checkpoint_saved | coordinator -> all | Checkpoint created | Checkpoint notice |
| context_restored | coordinator -> all | Checkpoint restored | Restore notice |
| user_feedback_received | coordinator -> all | Feedback recorded | Feedback notice |
| tech_debt_identified | coordinator -> all | Tech debt found | Debt notice |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "coordinator",
  type: <message-type>,
  data: { ref: <artifact-path> }
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from coordinator --type <message-type> --json")
```

---
## Error Handling

| Error | Resolution |
|-------|------------|
| GC loop exceeds 3 rounds | Accept current code, record to sprint_history |
| Velocity below 50% | Alert user, suggest scope reduction |
| Task ledger corrupted | Rebuild from TaskList state |
| Design rejected 3+ times | Coordinator intervenes, simplifies design |
| Tests continuously fail | Create DEV-fix for developer |
| Conflict detected | Update conflict_info, create DEV-fix task |
| Resource lock timeout | Force release after 5 min, notify holder |
| Rollback requested | Validate snapshot_id, execute procedure |
| Deadlock detected | Abort youngest task, release locks |
| Dependency mismatch | Log mismatch, block task until resolved |
| Checkpoint restore failure | Log error, worker restarts from Phase 1 |
| User feedback critical | Create fix task immediately, elevate priority |
| Tech debt exceeds threshold | Generate report, suggest dedicated sprint |
| Feedback task link fails | Retain feedback, mark unlinked, manual follow-up |
@@ -0,0 +1,172 @@
# Command: Monitor

Handle all coordinator monitoring events: worker callbacks, status checks, pipeline advancement, Generator-Critic loop control (developer<->reviewer), and completion.

## Constants

| Key | Value |
|-----|-------|
| SPAWN_MODE | background |
| ONE_STEP_PER_INVOCATION | true |
| WORKER_AGENT | team-worker |
| MAX_GC_ROUNDS | 3 |

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session state | <session>/session.json | Yes |
| Task list | TaskList() | Yes |
| Trigger event | From Entry Router detection | Yes |
| Meta state | <session>/.msg/meta.json | Yes |
| Task ledger | <session>/task-ledger.json | No |

1. Load session.json for current state, pipeline mode, gc_round, max_gc_rounds
2. Run TaskList() to get current task statuses
3. Identify the trigger event type from the Entry Router
## Phase 3: Event Handlers

### handleCallback

Triggered when a worker sends a completion message.

1. Parse the message to identify role and task ID:

| Message Pattern | Role Detection |
|----------------|---------------|
| `[architect]` or task ID `DESIGN-*` | architect |
| `[developer]` or task ID `DEV-*` | developer |
| `[tester]` or task ID `VERIFY-*` | tester |
| `[reviewer]` or task ID `REVIEW-*` | reviewer |

2. Mark the task as completed:

```
TaskUpdate({ taskId: "<task-id>", status: "completed" })
```

3. Record completion in session state and update task-ledger.json metrics

4. **Generator-Critic check** (when reviewer completes):
   - If the completed task is REVIEW-* AND the pipeline is sprint or multi-sprint:
     - Read the review report for the GC signal (critical_count, score)
     - Read session.json for gc_round

| GC Signal | gc_round < max | Action |
|-----------|----------------|--------|
| review.critical_count > 0 OR review.score < 7 | Yes | Increment gc_round, create a DEV-fix task blocked by this REVIEW, log `gc_loop_trigger` |
| review.critical_count > 0 OR review.score < 7 | No (>= max) | Force convergence, accept with warning, log to wisdom/issues.md |
| review.critical_count == 0 AND review.score >= 7 | - | Review passed, proceed to the handleComplete check |

   - Log team_msg with type "gc_loop_trigger" or "task_unblocked"

5. Proceed to handleSpawnNext
### handleSpawnNext

Find and spawn the next ready tasks.

1. Scan the task list for tasks where:
   - Status is "pending"
   - All blockedBy tasks have status "completed"

2. For each ready task, determine the role from the task prefix:

| Task Prefix | Role | Inner Loop |
|-------------|------|------------|
| DESIGN-* | architect | false |
| DEV-* | developer | true |
| VERIFY-* | tester | false |
| REVIEW-* | reviewer | false |

3. Spawn team-worker:

```
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker for <task-id>",
  team_name: "iterdev",
  name: "<role>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-iterdev/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: iterdev
requirement: <task-description>
inner_loop: <true|false>

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`
})
```

4. **Parallel spawn rules**:

| Pipeline | Scenario | Spawn Behavior |
|----------|----------|---------------|
| Patch | DEV -> VERIFY | One worker at a time |
| Sprint | VERIFY + REVIEW both unblocked | Spawn BOTH in parallel |
| Sprint | Other stages | One worker at a time |
| Multi-Sprint | VERIFY + DEV-fix both unblocked | Spawn BOTH in parallel |
| Multi-Sprint | Other stages | One worker at a time |

5. STOP after spawning -- wait for the next callback
|
||||
|
||||
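Steps 1-2 above (filter ready tasks, map task prefix to role) can be sketched like this. The task object shape (`id`, `status`, `blockedBy`) is an assumption for this sketch; the prefix-to-role mapping comes from the table.

```javascript
// Illustrative helpers for handleSpawnNext: ready-task filtering and
// prefix-to-role resolution. Not a definitive implementation.
const ROLE_BY_PREFIX = {
  "DESIGN-": { role: "architect", innerLoop: false },
  "DEV-":    { role: "developer", innerLoop: true },
  "VERIFY-": { role: "tester",    innerLoop: false },
  "REVIEW-": { role: "reviewer",  innerLoop: false },
};

function readyTasks(tasks) {
  // A task is ready when pending and every blocker is completed.
  const done = new Set(tasks.filter(t => t.status === "completed").map(t => t.id));
  return tasks.filter(t =>
    t.status === "pending" && (t.blockedBy || []).every(id => done.has(id)));
}

function roleFor(taskId) {
  const prefix = Object.keys(ROLE_BY_PREFIX).find(p => taskId.startsWith(p));
  return prefix ? ROLE_BY_PREFIX[prefix] : null;
}
```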
### handleCheck

Output current pipeline status. Do NOT advance pipeline.

```
Pipeline Status (<pipeline-mode>):
[DONE] DESIGN-001 (architect) -> design/design-001.md
[DONE] DEV-001 (developer) -> code/dev-log.md
[RUN] VERIFY-001 (tester) -> verifying...
[RUN] REVIEW-001 (reviewer) -> reviewing...
[WAIT] DEV-fix (developer) -> blocked by REVIEW-001

GC Rounds: <gc_round>/<max_gc_rounds>
Sprint: <sprint_id>
Session: <session-id>
```

### handleResume

Resume pipeline after user pause or interruption.

1. Audit task list for inconsistencies:
- Tasks stuck in "in_progress" -> reset to "pending"
- Tasks with completed blockers but still "pending" -> include in spawn list
2. Proceed to handleSpawnNext

### handleComplete

Triggered when all pipeline tasks are completed.

**Completion check by mode**:

| Mode | Completion Condition |
|------|---------------------|
| patch | DEV-001 + VERIFY-001 completed |
| sprint | DESIGN-001 + DEV-001 + VERIFY-001 + REVIEW-001 (+ any GC tasks) completed |
| multi-sprint | All sprint tasks (+ any GC tasks) completed |

1. Verify all tasks completed via TaskList()
2. If any tasks not completed, return to handleSpawnNext
3. **Multi-sprint check**: If multi-sprint AND more sprints planned:
- Record sprint metrics to .msg/meta.json sprint_history
- Evaluate downgrade eligibility (velocity >= expected, review avg >= 8)
- Pause for user confirmation before Sprint N+1
4. If all completed, transition to coordinator Phase 5 (Report + Completion Action)

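The per-mode completion check can be sketched as below. The required prefix sets per mode are an assumption derived from the table (GC-created fix tasks are covered by the "no open tasks" condition).

```javascript
// Hedged sketch of the completion check: every required stage must have a
// completed task, and no task (including GC fix tasks) may still be open.
const REQUIRED_PREFIXES = {
  patch:  ["DEV-", "VERIFY-"],
  sprint: ["DESIGN-", "DEV-", "VERIFY-", "REVIEW-"],
};

function pipelineComplete(mode, tasks) {
  const prefixes = REQUIRED_PREFIXES[mode] || [];
  const covered = prefixes.every(p =>
    tasks.some(t => t.id.startsWith(p) && t.status === "completed"));
  const noneOpen = tasks.every(t => t.status === "completed");
  return covered && noneOpen;
}
```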
## Phase 4: State Persistence

After every handler execution:

1. Update session.json with current state (gc_round, last event, active tasks)
2. Update task-ledger.json metrics (completed count, in_progress count, velocity)
3. Update .msg/meta.json gc_round if changed
4. Verify task list consistency
5. STOP and wait for next event

@@ -1,275 +0,0 @@

# Developer Role

Code implementer. Implements code according to the design and delivers incrementally. Acts as the Generator in the Generator-Critic loop (paired with reviewer).

## Identity

- **Name**: `developer` | **Tag**: `[developer]`
- **Task Prefix**: `DEV-*`
- **Responsibility**: Code generation (Code Implementation)

## Boundaries

### MUST

- Only process `DEV-*` prefixed tasks
- All output must carry `[developer]` identifier
- Phase 2: Read .msg/meta.json + design, Phase 5: Write implementation_context
- For fix tasks (DEV-fix-*): Reference review feedback
- Work strictly within code implementation responsibility scope

### MUST NOT

- Execute work outside this role's responsibility scope
- Execute tests, perform code review, or design architecture
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's responsibility
- Omit `[developer]` identifier in any output

---

## Toolbox

### Tool Capabilities

| Tool | Type | Purpose |
|------|------|---------|
| Task | Agent | Spawn code-developer for implementation |
| Read | File | Read design, breakdown, shared memory |
| Write | File | Write dev-log |
| Edit | File | Modify code files |
| Glob | Search | Find review files |
| Bash | Shell | Execute syntax check, git commands |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `dev_complete` | developer -> coordinator | Implementation done | Implementation completed |
| `dev_progress` | developer -> coordinator | Incremental progress | Progress update |
| `error` | developer -> coordinator | Processing failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "developer",
  type: <message-type>,
  data: { ref: <dev-log-path> }
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from developer --type <message-type> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `DEV-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context Loading

**Inputs**:

| Input | Source | Required |
|-------|--------|----------|
| Session path | Task description (Session: <path>) | Yes |
| Shared memory | <session-folder>/.msg/meta.json | Yes |
| Design document | <session-folder>/design/design-001.md | For non-fix tasks |
| Task breakdown | <session-folder>/design/task-breakdown.json | For non-fix tasks |
| Review feedback | <session-folder>/review/*.md | For fix tasks |
| Wisdom | <session-folder>/wisdom/ | No |

**Loading steps**:

1. Extract session path from task description
2. Read .msg/meta.json

```
Read(<session-folder>/.msg/meta.json)
```

3. Check if this is a fix task (GC loop):

| Task Type | Detection | Loading |
|-----------|-----------|---------|
| Fix task | Subject contains "fix" | Read latest review file |
| Normal task | Subject does not contain "fix" | Read design + breakdown |

4. Load previous implementation context from shared memory:

```
prevContext = sharedMemory.implementation_context || []
```

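The fix-task detection rule from step 3 can be sketched as follows. This is illustrative only; the case-insensitive match and the returned input lists are assumptions based on the detection and inputs tables above.

```javascript
// Sketch of step 3: detect fix tasks by subject, then pick which
// context files to load. Paths mirror the Inputs table; the review
// glob stands in for "read latest review file".
function isFixTask(subject) {
  return /fix/i.test(subject);
}

function contextInputsFor(subject, sessionFolder) {
  return isFixTask(subject)
    ? [`${sessionFolder}/review/*.md`]
    : [`${sessionFolder}/design/design-001.md`,
       `${sessionFolder}/design/task-breakdown.json`];
}
```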
### Phase 3: Code Implementation

**Implementation strategy selection**:

| Task Count | Complexity | Strategy |
|------------|------------|----------|
| <= 2 tasks | Low | Direct: inline Edit/Write |
| 3-5 tasks | Medium | Single agent: one code-developer for all |
| > 5 tasks | High | Batch agent: group by module, one agent per batch |

#### Fix Task Mode (GC Loop)

Focus on review feedback items:

```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Fix review issues",
  prompt: `Fix the following code review issues:

<review-feedback>

Focus on:
1. Critical issues (must fix)
2. High issues (should fix)
3. Medium issues (if time permits)

Do NOT change code that wasn't flagged.
Maintain existing code style and patterns.`
})
```

#### Normal Task Mode

For each task in breakdown:

1. Read target files (if exist)
2. Apply changes using Edit or Write
3. Follow execution order from breakdown

For complex tasks (>3), delegate to code-developer:

```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Implement <task-count> tasks",
  prompt: `## Design
<design-content>

## Task Breakdown
<breakdown-json>

## Previous Context
<prev-context>

Implement each task following the design. Complete tasks in the specified execution order.`
})
```

### Phase 4: Self-Validation

**Validation checks**:

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Syntax | `tsc --noEmit` or equivalent | No errors |
| File existence | Verify all planned files exist | All files present |
| Import resolution | Check no broken imports | All imports resolve |

**Syntax check command**:

```
Bash("npx tsc --noEmit 2>&1 || python -m py_compile *.py 2>&1 || true")
```

**Auto-fix**: If validation fails, attempt auto-fix (max 2 attempts), then report remaining issues.

**Dev log output** (`<session-folder>/code/dev-log.md`):

```markdown
# Dev Log — <task-subject>

**Changed Files**: <count>
**Syntax Clean**: <true/false>
**Fix Task**: <true/false>

## Files Changed
- <file-1>
- <file-2>
```

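The validate-then-auto-fix loop above (max 2 attempts) can be sketched like this. `runChecks` and `attemptAutoFix` are stand-ins for the real syntax/import checks and fix logic; the function name and return shape are assumptions.

```javascript
// Illustrative sketch of Phase 4 self-validation: re-run checks after
// each auto-fix attempt, stopping when clean or after maxAttempts fixes.
function selfValidate(runChecks, attemptAutoFix, maxAttempts = 2) {
  let issues = runChecks();
  let attempts = 0;
  while (issues.length > 0 && attempts < maxAttempts) {
    attemptAutoFix(issues);   // e.g. inline Edit on reported locations
    attempts += 1;
    issues = runChecks();
  }
  return { syntaxClean: issues.length === 0, remaining: issues, attempts };
}
```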
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

1. **Update shared memory**:

```
sharedMemory.implementation_context.push({
  task: <task-subject>,
  changed_files: <file-list>,
  is_fix: <is-fix-task>,
  syntax_clean: <is-syntax-clean>
})
Write(<session-folder>/.msg/meta.json, JSON.stringify(sharedMemory, null, 2))
```

2. **Log and send message**:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: <session-id>, from: "developer",
  type: "dev_complete",
  data: { ref: <dev-log-path> }
})

SendMessage({
  type: "message", recipient: "coordinator",
  content: `## [developer] <Fix|Implementation> Complete

**Task**: <task-subject>
**Changed Files**: <count>
**Syntax Clean**: <true/false>
<if-fix-task>**GC Round**: <gc-round></if>

### Files
- <file-1>
- <file-2>`,
})
```

3. **Mark task complete**:

```
TaskUpdate({ taskId: <task-id>, status: "completed" })
```

4. **Loop to Phase 1** for next task

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No DEV-* tasks available | Idle, wait for coordinator assignment |
| Design not found | Implement based on task description |
| Syntax errors after implementation | Attempt auto-fix, report remaining errors |
| Review feedback unclear | Implement best interpretation, note in dev-log |
| Code-developer agent fails | Retry once, then implement inline |
| Context/Plan file not found | Notify coordinator, request location |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
| Unexpected error | Log error via team_msg, report to coordinator |

@@ -1,302 +0,0 @@

# Reviewer Role

Code reviewer. Performs multi-dimensional review, quality scoring, and improvement suggestions. Acts as the Critic in the Generator-Critic loop (paired with developer).

## Identity

- **Name**: `reviewer` | **Tag**: `[reviewer]`
- **Task Prefix**: `REVIEW-*`
- **Responsibility**: Read-only analysis (Code Review)

## Boundaries

### MUST

- Only process `REVIEW-*` prefixed tasks
- All output must carry `[reviewer]` identifier
- Phase 2: Read .msg/meta.json + design, Phase 5: Write review_feedback_trends
- Mark each issue with severity (CRITICAL/HIGH/MEDIUM/LOW)
- Provide quality score (1-10)
- Work strictly within code review responsibility scope

### MUST NOT

- Execute work outside this role's responsibility scope
- Write implementation code, design architecture, or execute tests
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's responsibility
- Omit `[reviewer]` identifier in any output

---

## Toolbox

### Tool Capabilities

| Tool | Type | Purpose |
|------|------|---------|
| Read | File | Read design, shared memory, file contents |
| Write | File | Write review reports |
| Bash | Shell | Git diff, CLI-assisted review |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `review_passed` | reviewer -> coordinator | No critical issues, score >= 7 | Review passed |
| `review_revision` | reviewer -> coordinator | Issues found, score < 7 | Revision needed (triggers GC) |
| `review_critical` | reviewer -> coordinator | Critical issues found | Critical issues (triggers GC) |
| `error` | reviewer -> coordinator | Processing failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "reviewer",
  type: <message-type>,
  data: { ref: <review-path> }
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from reviewer --type <message-type> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `REVIEW-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context Loading

**Inputs**:

| Input | Source | Required |
|-------|--------|----------|
| Session path | Task description (Session: <path>) | Yes |
| Shared memory | <session-folder>/.msg/meta.json | Yes |
| Design document | <session-folder>/design/design-001.md | For requirements alignment |
| Changed files | Git diff | Yes |
| Wisdom | <session-folder>/wisdom/ | No |

**Loading steps**:

1. Extract session path from task description
2. Read .msg/meta.json

```
Read(<session-folder>/.msg/meta.json)
```

3. Read design document for requirements alignment:

```
Read(<session-folder>/design/design-001.md)
```

4. Get changed files:

```
Bash("git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached")
```

5. Read file contents (limit to 20 files):

```
Read(<file-1>)
Read(<file-2>)
...
```

6. Load previous review trends:

```
prevTrends = sharedMemory.review_feedback_trends || []
```

### Phase 3: Multi-Dimensional Review

**Review dimensions**:

| Dimension | Focus Areas |
|-----------|-------------|
| Correctness | Logic correctness, boundary handling |
| Completeness | Coverage of design requirements |
| Maintainability | Readability, code style, DRY |
| Security | Security vulnerabilities, input validation |

**Analysis strategy selection**:

| Condition | Strategy |
|-----------|----------|
| Single dimension analysis | Direct inline scan |
| Multi-dimension analysis | Per-dimension sequential scan |
| Deep analysis needed | CLI Fan-out to external tool |

**Optional CLI-assisted review**:

```
Bash(`ccw cli -p "PURPOSE: Code review for correctness and security
TASK: Review changes in: <file-list>
MODE: analysis
CONTEXT: @<file-list>
EXPECTED: Issues with severity (CRITICAL/HIGH/MEDIUM/LOW) and file:line
CONSTRAINTS: Focus on correctness and security" --tool gemini --mode analysis`, { run_in_background: true })
```

**Scoring**:

| Dimension | Weight | Score Range |
|-----------|--------|-------------|
| Correctness | 30% | 1-10 |
| Completeness | 25% | 1-10 |
| Maintainability | 25% | 1-10 |
| Security | 20% | 1-10 |

**Overall score**: Weighted average of dimension scores.

**Output review report** (`<session-folder>/review/review-<num>.md`):

```markdown
# Code Review — Round <num>

**Files Reviewed**: <count>
**Quality Score**: <score>/10
**Critical Issues**: <count>
**High Issues**: <count>

## Findings

### 1. [CRITICAL] <title>

**File**: <file>:<line>
**Dimension**: <dimension>
**Description**: <description>
**Suggestion**: <suggestion>

### 2. [HIGH] <title>
...

## Scoring Breakdown

| Dimension | Score | Notes |
|-----------|-------|-------|
| Correctness | <score>/10 | <notes> |
| Completeness | <score>/10 | <notes> |
| Maintainability | <score>/10 | <notes> |
| Security | <score>/10 | <notes> |
| **Overall** | **<score>/10** | |

## Signal

<CRITICAL — Critical issues must be fixed before merge
| REVISION_NEEDED — Quality below threshold (7/10)
| APPROVED — Code meets quality standards>

## Design Alignment

<notes on how implementation aligns with design>
```

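The weighted overall score can be computed directly from the scoring table. A minimal sketch, assuming per-dimension scores are already on the 1-10 scale; the function name is illustrative.

```javascript
// Sketch of the weighted average: overall = sum(weight * dimension score).
const WEIGHTS = {
  correctness: 0.30,
  completeness: 0.25,
  maintainability: 0.25,
  security: 0.20,
};

function overallScore(scores) {
  return Object.entries(WEIGHTS)
    .reduce((sum, [dim, w]) => sum + w * scores[dim], 0);
}
```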
### Phase 4: Trend Analysis

**Compare with previous reviews**:

1. Extract issue types from current findings
2. Compare with previous review trends
3. Identify recurring issues

| Analysis | Method |
|----------|--------|
| Recurring issues | Match dimension/type with previous reviews |
| Improvement areas | Issues that appear in multiple reviews |
| New issues | Issues unique to this review |

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

1. **Update shared memory**:

```
sharedMemory.review_feedback_trends.push({
  review_id: "review-<num>",
  score: <score>,
  critical: <critical-count>,
  high: <high-count>,
  dimensions: <dimension-list>,
  gc_round: sharedMemory.gc_round || 0
})
Write(<session-folder>/.msg/meta.json, JSON.stringify(sharedMemory, null, 2))
```

2. **Determine message type**:

| Condition | Message Type |
|-----------|--------------|
| criticalCount > 0 | review_critical |
| score < 7 | review_revision |
| else | review_passed |

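The message-type decision above is a simple priority order (critical issues first, then the score threshold). An illustrative sketch; the function name is an assumption.

```javascript
// Sketch of the reviewer's message-type table: critical issues dominate,
// then the 7/10 quality threshold, otherwise the review passes.
function reviewSignal(criticalCount, score) {
  if (criticalCount > 0) return "review_critical";
  if (score < 7) return "review_revision";
  return "review_passed";
}
```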
3. **Log and send message**:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: <session-id>, from: "reviewer",
  type: <message-type>,
  data: { ref: <review-path> }
})

SendMessage({
  type: "message", recipient: "coordinator",
  content: `## [reviewer] Code Review Results

**Task**: <task-subject>
**Score**: <score>/10
**Signal**: <message-type>
**Critical**: <count>, **High**: <count>
**Output**: <review-path>

### Top Issues
- **[CRITICAL/HIGH]** <title> (<file>:<line>)
...`,
})
```

4. **Mark task complete**:

```
TaskUpdate({ taskId: <task-id>, status: "completed" })
```

5. **Loop to Phase 1** for next task

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No REVIEW-* tasks available | Idle, wait for coordinator assignment |
| No changed files | Review files referenced in design |
| CLI review fails | Fall back to inline analysis |
| All issues LOW severity | Score high, approve |
| Design not found | Review against general quality standards |
| Context/Plan file not found | Notify coordinator, request location |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
| Unexpected error | Log error via team_msg, report to coordinator |

@@ -1,248 +0,0 @@

# Tester Role

Test validator. Runs tests, drives fix cycles, and detects regressions.

## Identity

- **Name**: `tester` | **Tag**: `[tester]`
- **Task Prefix**: `VERIFY-*`
- **Responsibility**: Validation (Test Verification)

## Boundaries

### MUST

- Only process `VERIFY-*` prefixed tasks
- All output must carry `[tester]` identifier
- Phase 2: Read .msg/meta.json, Phase 5: Write test_patterns
- Work strictly within test validation responsibility scope

### MUST NOT

- Execute work outside this role's responsibility scope
- Write implementation code, design architecture, or perform code review
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's responsibility
- Omit `[tester]` identifier in any output

---

## Toolbox

### Tool Capabilities

| Tool | Type | Purpose |
|------|------|---------|
| Task | Agent | Spawn code-developer for fix cycles |
| Read | File | Read shared memory, verify results |
| Write | File | Write verification results |
| Bash | Shell | Execute tests, git commands |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `verify_passed` | tester -> coordinator | All tests pass | Verification passed |
| `verify_failed` | tester -> coordinator | Tests fail | Verification failed |
| `fix_required` | tester -> coordinator | Issues found needing fix | Fix required |
| `error` | tester -> coordinator | Environment failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "tester",
  type: <message-type>,
  data: { ref: <verify-path> }
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from tester --type <message-type> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `VERIFY-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Environment Detection

**Inputs**:

| Input | Source | Required |
|-------|--------|----------|
| Session path | Task description (Session: <path>) | Yes |
| Shared memory | <session-folder>/.msg/meta.json | Yes |
| Changed files | Git diff | Yes |
| Wisdom | <session-folder>/wisdom/ | No |

**Detection steps**:

1. Extract session path from task description
2. Read .msg/meta.json

```
Read(<session-folder>/.msg/meta.json)
```

3. Get changed files:

```
Bash("git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached")
```

4. Detect test framework and command:

| Detection | Method |
|-----------|--------|
| Test command | Check package.json scripts, pytest.ini, Makefile |
| Coverage tool | Check for nyc, coverage.py, jest --coverage config |

**Common test commands**:
- JavaScript: `npm test`, `yarn test`, `pnpm test`
- Python: `pytest`, `python -m pytest`
- Go: `go test ./...`
- Rust: `cargo test`

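The framework detection in step 4 can be sketched as a lookup over project marker files. The marker-to-command mapping below is an assumption for illustration (real detection would also inspect package.json scripts, per the table).

```javascript
// Illustrative test-command detection from marker files found in the repo.
function detectTestCommand(files) {
  if (files.includes("package.json")) return "npm test";
  if (files.includes("pytest.ini") || files.includes("pyproject.toml")) return "pytest";
  if (files.includes("go.mod")) return "go test ./...";
  if (files.includes("Cargo.toml")) return "cargo test";
  return null;   // fall back to trying common commands
}
```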
### Phase 3: Execution + Fix Cycle

**Iterative test-fix cycle**:

| Step | Action |
|------|--------|
| 1 | Run test command |
| 2 | Parse results -> check pass rate |
| 3 | Pass rate >= 95% -> exit loop (success) |
| 4 | Extract failing test details |
| 5 | Delegate fix to code-developer subagent |
| 6 | Increment iteration counter |
| 7 | iteration >= MAX (5) -> exit loop (report failures) |
| 8 | Go to Step 1 |

**Test execution**:

```
Bash("<test-command> 2>&1 || true")
```

**Fix delegation** (when tests fail):

```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Fix test failures (iteration <num>)",
  prompt: `Test failures:
<test-output>

Fix failing tests. Changed files: <file-list>`
})
```

**Output verification results** (`<session-folder>/verify/verify-<num>.json`):

```json
{
  "verify_id": "verify-<num>",
  "pass_rate": <rate>,
  "iterations": <count>,
  "passed": <true/false>,
  "timestamp": "<iso-timestamp>",
  "regression_passed": <true/false>
}
```

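The 8-step cycle above can be sketched as a loop. `runTests` and `delegateFix` stand in for the Bash and Task calls; the result shape (`passRate`, `failures`) is an assumption for this sketch.

```javascript
// Hedged sketch of the test-fix cycle: rerun tests after each delegated
// fix, exiting at >= 95% pass rate or after maxIterations fix rounds.
function testFixCycle(runTests, delegateFix, maxIterations = 5) {
  let iteration = 0;
  let result = runTests();
  while (result.passRate < 0.95 && iteration < maxIterations) {
    delegateFix(result.failures);   // e.g. spawn code-developer subagent
    iteration += 1;
    result = runTests();
  }
  return {
    passed: result.passRate >= 0.95,
    iterations: iteration,
    passRate: result.passRate,
  };
}
```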
### Phase 4: Regression Check

**Full test suite for regression**:

```
Bash("<test-command> --all 2>&1 || true")
```

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Regression | Run full test suite | No FAIL in output |
| Coverage | Run coverage tool | >= 80% (if configured) |

Update verification results with regression status.

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

1. **Update shared memory**:

```
sharedMemory.test_patterns = sharedMemory.test_patterns || []
if (passRate >= 0.95) {
  sharedMemory.test_patterns.push(`verify-<num>: passed in <iterations> iterations`)
}
Write(<session-folder>/.msg/meta.json, JSON.stringify(sharedMemory, null, 2))
```

2. **Determine message type**:

| Condition | Message Type |
|-----------|--------------|
| passRate >= 0.95 | verify_passed |
| passRate < 0.95 && iterations >= MAX | fix_required |
| passRate < 0.95 | verify_failed |

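The message-type decision above checks the pass-rate threshold first, then whether the fix budget was exhausted. An illustrative sketch; the function name is an assumption.

```javascript
// Sketch of the tester's message-type table: passed at >= 95%, fix_required
// when the iteration budget is spent, verify_failed otherwise.
function verifySignal(passRate, iterations, maxIterations = 5) {
  if (passRate >= 0.95) return "verify_passed";
  if (iterations >= maxIterations) return "fix_required";
  return "verify_failed";
}
```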
3. **Log and send message**:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: <session-id>, from: "tester",
  type: <message-type>,
  data: { ref: <verify-path> }
})

SendMessage({
  type: "message", recipient: "coordinator",
  content: `## [tester] Verification Results

**Pass Rate**: <rate>%
**Iterations**: <count>/<MAX>
**Regression**: <passed/failed>
**Status**: <PASSED/NEEDS FIX>`,
})
```

4. **Mark task complete**:

```
TaskUpdate({ taskId: <task-id>, status: "completed" })
```

5. **Loop to Phase 1** for next task

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No VERIFY-* tasks available | Idle, wait for coordinator assignment |
| Test command not found | Try common commands (npm test, pytest, vitest) |
| Max iterations exceeded | Report fix_required to coordinator |
| Test environment broken | Report error, suggest manual fix |
| Context/Plan file not found | Notify coordinator, request location |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
| Unexpected error | Log error via team_msg, report to coordinator |

@@ -1,10 +1,10 @@
|
||||
---
|
||||
name: team-lifecycle-v5
|
||||
name: team-lifecycle
|
||||
description: Unified team skill for full lifecycle - spec/impl/test. Uses team-worker agent architecture with role-spec files for domain logic. Coordinator orchestrates pipeline, workers are team-worker agents loaded with role-specific Phase 2-4 specs. Triggers on "team lifecycle".
|
||||
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), TodoWrite(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
|
||||
---
|
||||
|
||||
# Team Lifecycle v5
|
||||
# Team Lifecycle
|
||||
|
||||
Unified team skill: specification -> implementation -> testing -> review. Built on **team-worker agent architecture** — all worker roles share a single agent definition with role-specific Phase 2-4 loaded from markdown specs.
|
||||
|
||||
@@ -12,7 +12,7 @@ Unified team skill: specification -> implementation -> testing -> review. Built
|
||||
|
||||
```
|
||||
+---------------------------------------------------+
|
||||
| Skill(skill="team-lifecycle-v5") |
|
||||
| Skill(skill="team-lifecycle") |
|
||||
| args="task description" |
|
||||
+-------------------+-------------------------------+
|
||||
|
|
||||
@@ -79,7 +79,7 @@ Always route to coordinator. Coordinator reads `roles/coordinator/role.md` and e
|
||||
|
||||
User just provides task description.
|
||||
|
||||
**Invocation**: `Skill(skill="team-lifecycle-v5", args="task description")`
|
||||
**Invocation**: `Skill(skill="team-lifecycle", args="task description")`
|
||||
|
||||
**Lifecycle**:
|
||||
```
|
||||
@@ -118,7 +118,7 @@ Task({
|
||||
run_in_background: true,
|
||||
prompt: `## Role Assignment
|
||||
role: <role>
|
||||
role_spec: .claude/skills/team-lifecycle-v5/role-specs/<role>.md
|
||||
role_spec: .claude/skills/team-lifecycle/role-specs/<role>.md
|
||||
session: <session-folder>
|
||||
session_id: <session-id>
|
||||
team_name: <team-name>
|
||||
|
||||
@@ -136,7 +136,7 @@ Collect task states from TaskList()
|
||||
run_in_background: true,
|
||||
prompt: `## Role Assignment
|
||||
role: <role>
|
||||
role_spec: .claude/skills/team-lifecycle-v5/role-specs/<role>.md
|
||||
role_spec: .claude/skills/team-lifecycle/role-specs/<role>.md
|
||||
session: <session-folder>
|
||||
session_id: <session-id>
|
||||
team_name: <team-name>
|
||||
|
||||
@@ -1,6 +1,6 @@
# Coordinator Role

Orchestrate the team-lifecycle-v5 workflow: team creation, task dispatching, progress monitoring, session state. Uses **team-worker agent** for all worker spawns — no Skill indirection.
Orchestrate the team-lifecycle workflow: team creation, task dispatching, progress monitoring, session state. Uses **team-worker agent** for all worker spawns — no Skill indirection.

## Identity

@@ -1,6 +1,6 @@
{
  "team_name": "team-lifecycle",
  "team_display_name": "Team Lifecycle v5",
  "team_display_name": "Team Lifecycle",
  "description": "Unified team-worker agent architecture: shared Phase 1/5/Inner Loop in agent, role-specific Phase 2-4 from spec files",
  "version": "5.0.0",
  "architecture": "team-worker agent + role-specs",

@@ -8,7 +8,7 @@ allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), Task

Unified team skill: plan-and-execute pipeline for issue-based development. Built on **team-worker agent architecture** — all worker roles share a single agent definition with role-specific Phase 2-4 loaded from markdown specs.

> **Note**: This skill has its own coordinator implementation (`roles/coordinator/role.md`), independent of `team-lifecycle-v5`. It follows the same v5 architectural patterns (team-worker agents, role-specs, Spawn-and-Stop) but with a simplified 2-role pipeline (planner + executor) tailored for plan-and-execute workflows.
> **Note**: This skill has its own coordinator implementation (`roles/coordinator/role.md`), independent of `team-lifecycle`. It follows the same v5 architectural patterns (team-worker agents, role-specs, Spawn-and-Stop) but with a simplified 2-role pipeline (planner + executor) tailored for plan-and-execute workflows.

## Architecture

@@ -11,23 +11,23 @@ Unified team skill: quality assurance combining issue discovery and software tes
## Architecture

```
┌──────────────────────────────────────────────────────────┐
│ Skill(skill="team-quality-assurance") │
│ args="<task-description>" or args="--role=xxx" │
└────────────────────────────┬─────────────────────────────┘
│ Role Router
┌──── --role present? ────┐
│ NO │ YES
↓ ↓
Orchestration Mode Role Dispatch
(auto -> coordinator) (route to role.md)
│
┌─────┬──────┴──────┬───────────┬──────────┬──────────┐
↓ ↓ ↓ ↓ ↓ ↓
┌────────┐┌───────┐┌──────────┐┌─────────┐┌────────┐┌────────┐
│ coord ││scout ││strategist││generator││executor││analyst │
│ ││SCOUT-*││QASTRAT-* ││QAGEN-* ││QARUN-* ││QAANA-* │
└────────┘└───────┘└──────────┘└─────────┘└────────┘└────────┘
+---------------------------------------------------+
| Skill(skill="team-quality-assurance") |
| args="<task-description>" |
+-------------------+-------------------------------+
|
Orchestration Mode (auto -> coordinator)
|
Coordinator (inline)
Phase 0-5 orchestration
|
+-----+-----+-----+-----+-----+
v v v v v
[tw] [tw] [tw] [tw] [tw]
scout stra- gene- execu- analy-
tegist rator tor st

(tw) = team-worker agent
```

## Command Architecture
@@ -71,14 +71,14 @@ Parse `$ARGUMENTS` to extract `--role`. If absent -> Orchestration Mode (auto ro

### Role Registry

| Role | File | Task Prefix | Type | Compact |
|------|------|-------------|------|---------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | orchestrator | **compressed -> must re-read** |
| scout | [roles/scout/role.md](roles/scout/role.md) | SCOUT-* | pipeline | compressed -> must re-read |
| strategist | [roles/strategist/role.md](roles/strategist/role.md) | QASTRAT-* | pipeline | compressed -> must re-read |
| generator | [roles/generator/role.md](roles/generator/role.md) | QAGEN-* | pipeline | compressed -> must re-read |
| executor | [roles/executor/role.md](roles/executor/role.md) | QARUN-* | pipeline | compressed -> must re-read |
| analyst | [roles/analyst/role.md](roles/analyst/role.md) | QAANA-* | pipeline | compressed -> must re-read |
| Role | Spec | Task Prefix | Inner Loop |
|------|------|-------------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | - |
| scout | [role-specs/scout.md](role-specs/scout.md) | SCOUT-* | false |
| strategist | [role-specs/strategist.md](role-specs/strategist.md) | QASTRAT-* | false |
| generator | [role-specs/generator.md](role-specs/generator.md) | QAGEN-* | false |
| executor | [role-specs/executor.md](role-specs/executor.md) | QARUN-* | true |
| analyst | [role-specs/analyst.md](role-specs/analyst.md) | QAANA-* | false |

> **COMPACT PROTECTION**: Role files are execution documents, not reference material. When context compression occurs and role instructions are reduced to summaries, **you MUST immediately `Read` the corresponding role.md to reload before continuing execution**. Do not execute any Phase based on summaries.

@@ -354,42 +354,38 @@ Beat 1 2 3 4 5 6

## Coordinator Spawn Template

When coordinator spawns workers, use background mode (Spawn-and-Stop):
### v5 Worker Spawn (all roles)

When coordinator spawns workers, use `team-worker` agent with role-spec path:

```
Task({
subagent_type: "general-purpose",
subagent_type: "team-worker",
description: "Spawn <role> worker",
team_name: <team-name>,
team_name: "quality-assurance",
name: "<role>",
run_in_background: true,
prompt: `You are team "<team-name>" <ROLE>.
prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-quality-assurance/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: quality-assurance
requirement: <task-description>
inner_loop: <true|false>

## Primary Directive
All your work must be executed through Skill to load role definition:
Skill(skill="team-quality-assurance", args="--role=<role>")

Current requirement: <task-description>
Session: <session-folder>

## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other role work
- All output prefixed with [<role>] identifier
- Only communicate with coordinator
- Do not use TaskCreate for other roles
- Call mcp__ccw-tools__team_msg before every SendMessage

## Workflow
1. Call Skill -> load role definition and execution logic
2. Follow role.md 5-Phase flow
3. team_msg + SendMessage results to coordinator
4. TaskUpdate completed -> check next task`
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`
})
```

**Inner Loop roles** (executor): Set `inner_loop: true`.

**Single-task roles** (scout, strategist, generator, analyst): Set `inner_loop: false`.

### Parallel Spawn (N agents for same role)

> When pipeline has parallel tasks assigned to the same role, spawn N distinct agents with unique names. A single agent can only process tasks serially.
> When pipeline has parallel tasks assigned to the same role, spawn N distinct team-worker agents with unique names.

**Parallel detection**:

@@ -402,30 +398,54 @@ Session: <session-folder>

```
Task({
subagent_type: "general-purpose",
subagent_type: "team-worker",
description: "Spawn <role>-<N> worker",
team_name: <team-name>,
team_name: "quality-assurance",
name: "<role>-<N>",
run_in_background: true,
prompt: `You are team "<team-name>" <ROLE> (<role>-<N>).
Your agent name is "<role>-<N>", use this name for task discovery owner matching.
prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-quality-assurance/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: quality-assurance
requirement: <task-description>
agent_name: <role>-<N>
inner_loop: <true|false>

## Primary Directive
Skill(skill="team-quality-assurance", args="--role=<role> --agent-name=<role>-<N>")

## Role Guidelines
- Only process tasks where owner === "<role>-<N>" with <PREFIX>-* prefix
- All output prefixed with [<role>] identifier

## Workflow
1. TaskList -> find tasks where owner === "<role>-<N>" with <PREFIX>-* prefix
2. Skill -> execute role definition
3. team_msg + SendMessage results to coordinator
4. TaskUpdate completed -> check next task`
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery, owner=<role>-<N>) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`
})
```

**Dispatch must match agent names**: In dispatch, parallel tasks use instance-specific owner: `<role>-<N>`. In role.md, task discovery uses --agent-name for owner matching.
**Dispatch must match agent names**: In dispatch, parallel tasks use instance-specific owner: `<role>-<N>`.

---

## Completion Action

When the pipeline completes (all tasks done, coordinator Phase 5):

```
AskUserQuestion({
  questions: [{
    question: "Quality Assurance pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
      { label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
      { label: "Export Results", description: "Export deliverables to a specified location, then clean" }
    ]
  }]
})
```

| Choice | Action |
|--------|--------|
| Archive & Clean | Update session status="completed" -> TeamDelete(quality-assurance) -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions: `Skill(skill="team-quality-assurance", args="resume")` |
| Export Results | AskUserQuestion for target path -> copy deliverables -> Archive & Clean |

## Unified Session Directory

@@ -1,360 +0,0 @@
# Command: quality-report

> Defect pattern analysis + coverage analysis + comprehensive quality report. Analyzes QA data across multiple dimensions and produces a quality score with improvement recommendations.

## When to Use

- Phase 3 of Analyst
- Test execution has finished and results need analysis
- Defect patterns and coverage trends need to be identified

**Trigger conditions**:
- A QAANA-* task enters the execution phase
- All QARUN tasks are complete
- Coordinator requests a quality report

## Strategy

### Delegation Mode

**Mode**: CLI Fan-out (deep analysis) / Direct (basic analysis)
**CLI Tool**: `gemini` (primary)
**CLI Mode**: `analysis`

### Decision Logic

```javascript
const dataPoints = discoveredIssues.length + Object.keys(executionResults).length
if (dataPoints <= 5) {
  // Basic inline analysis
  mode = 'direct'
} else {
  // CLI-assisted deep analysis
  mode = 'cli-assisted'
}
```

## Execution Steps

### Step 1: Context Preparation

```javascript
// Load all QA data from shared memory
const discoveredIssues = sharedMemory.discovered_issues || []
const strategy = sharedMemory.test_strategy || {}
const generatedTests = sharedMemory.generated_tests || {}
const executionResults = sharedMemory.execution_results || {}
const historicalPatterns = sharedMemory.defect_patterns || []
const coverageHistory = sharedMemory.coverage_history || []

// Read detailed coverage data
let coverageData = null
try {
  coverageData = JSON.parse(Read('coverage/coverage-summary.json'))
} catch {}

// Read per-layer execution results
const layerResults = {}
try {
  const resultFiles = Glob(`${sessionFolder}/results/run-*.json`)
  for (const f of resultFiles) {
    const data = JSON.parse(Read(f))
    layerResults[data.layer] = data
  }
} catch {}
```

### Step 2: Execute Strategy

```javascript
if (mode === 'direct') {
  // Basic inline analysis
  analysis = performDirectAnalysis()
} else {
  // CLI-assisted deep analysis
  const analysisContext = JSON.stringify({
    issues: discoveredIssues.slice(0, 20),
    execution: layerResults,
    coverage: coverageData?.total || {},
    strategy: { layers: strategy.layers?.map(l => ({ level: l.level, target: l.target_coverage })) }
  }, null, 2)

  Bash(`ccw cli -p "PURPOSE: Perform deep quality analysis on QA results to identify defect patterns, coverage trends, and improvement opportunities
TASK: • Classify defects by root cause pattern (logic errors, integration issues, missing validation, etc.) • Identify files with highest defect density • Analyze coverage gaps vs risk levels • Compare actual coverage to targets • Generate actionable improvement recommendations
MODE: analysis
CONTEXT: @${sessionFolder}/.msg/meta.json @${sessionFolder}/results/**/*
EXPECTED: Structured analysis with: defect pattern taxonomy, risk-coverage matrix, quality score rationale, top 5 improvement recommendations with expected impact
CONSTRAINTS: Be data-driven, avoid speculation without evidence" --tool gemini --mode analysis --rule analysis-analyze-code-patterns`, {
    run_in_background: true
  })
  // Wait for CLI completion
}

// ===== Analysis dimensions =====

// 1. Defect pattern analysis
function analyzeDefectPatterns(issues, results) {
  const byType = {}
  for (const issue of issues) {
    const type = issue.perspective || 'unknown'
    if (!byType[type]) byType[type] = []
    byType[type].push(issue)
  }

  // Identify recurring patterns
  const patterns = []
  for (const [type, typeIssues] of Object.entries(byType)) {
    if (typeIssues.length >= 2) {
      // Analyze common characteristics
      const commonFiles = findCommonPatterns(typeIssues.map(i => i.file))
      patterns.push({
        type,
        count: typeIssues.length,
        files: [...new Set(typeIssues.map(i => i.file))],
        common_pattern: commonFiles,
        description: `${type} issues recur in ${typeIssues.length} places`,
        recommendation: generateRecommendation(type, typeIssues)
      })
    }
  }

  return { by_type: byType, patterns, total: issues.length }
}

// 2. Coverage gap analysis
function analyzeCoverageGaps(coverage, strategy) {
  if (!coverage) return { status: 'no_data', gaps: [] }

  const totalCoverage = coverage.total?.lines?.pct || 0
  const gaps = []

  for (const layer of (strategy.layers || [])) {
    if (totalCoverage < layer.target_coverage) {
      gaps.push({
        layer: layer.level,
        target: layer.target_coverage,
        actual: totalCoverage,
        gap: Math.round(layer.target_coverage - totalCoverage),
        severity: (layer.target_coverage - totalCoverage) > 20 ? 'high' : 'medium'
      })
    }
  }

  // Per-file coverage analysis
  const fileGaps = []
  if (coverage && typeof coverage === 'object') {
    for (const [file, data] of Object.entries(coverage)) {
      if (file === 'total') continue
      const linePct = data?.lines?.pct || 0
      if (linePct < 50) {
        fileGaps.push({ file, coverage: linePct, severity: linePct < 20 ? 'critical' : 'high' })
      }
    }
  }

  return { total_coverage: totalCoverage, gaps, file_gaps: fileGaps.slice(0, 10) }
}

// 3. Test effectiveness analysis
function analyzeTestEffectiveness(generated, results) {
  const effectiveness = {}
  for (const [layer, data] of Object.entries(generated)) {
    const result = results[layer] || {}
    effectiveness[layer] = {
      files_generated: data.files?.length || 0,
      pass_rate: result.pass_rate || 0,
      iterations_needed: result.iterations || 0,
      coverage_achieved: result.coverage || 0,
      effective: (result.pass_rate || 0) >= 95 && (result.iterations || 0) <= 2
    }
  }
  return effectiveness
}

// 4. Quality trend analysis
function analyzeQualityTrend(history) {
  if (history.length < 2) return { trend: 'insufficient_data', confidence: 'low' }

  const latest = history[history.length - 1]
  const previous = history[history.length - 2]
  const delta = (latest?.coverage || 0) - (previous?.coverage || 0)

  return {
    trend: delta > 5 ? 'improving' : delta < -5 ? 'declining' : 'stable',
    delta: Math.round(delta * 10) / 10,
    data_points: history.length,
    confidence: history.length >= 5 ? 'high' : history.length >= 3 ? 'medium' : 'low'
  }
}

// 5. Comprehensive quality score
function calculateQualityScore(analysis) {
  let score = 100

  // Deduction: security issues
  const securityIssues = (analysis.defect_patterns.by_type?.security || []).length
  score -= securityIssues * 10

  // Deduction: bugs
  const bugIssues = (analysis.defect_patterns.by_type?.bug || []).length
  score -= bugIssues * 5

  // Deduction: coverage gaps
  for (const gap of (analysis.coverage_gaps.gaps || [])) {
    score -= gap.gap * 0.5
  }

  // Deduction: test failures
  for (const [layer, eff] of Object.entries(analysis.test_effectiveness)) {
    if (eff.pass_rate < 100) score -= (100 - eff.pass_rate) * 0.3
  }

  // Bonus: effective test layers
  const effectiveLayers = Object.values(analysis.test_effectiveness)
    .filter(e => e.effective).length
  score += effectiveLayers * 5

  // Bonus: improving trend
  if (analysis.quality_trend.trend === 'improving') score += 3

  return Math.max(0, Math.min(100, Math.round(score)))
}

// Helper functions
function findCommonPatterns(files) {
  const dirs = files.map(f => f.split('/').slice(0, -1).join('/'))
  const commonDir = dirs.reduce((a, b) => {
    const partsA = a.split('/')
    const partsB = b.split('/')
    const common = []
    for (let i = 0; i < Math.min(partsA.length, partsB.length); i++) {
      if (partsA[i] === partsB[i]) common.push(partsA[i])
      else break
    }
    return common.join('/')
  })
  return commonDir || 'scattered'
}

function generateRecommendation(type, issues) {
  const recommendations = {
    'security': 'Strengthen input validation and security auditing; consider introducing SAST tools',
    'bug': 'Improve error handling and boundary checks; add defensive programming',
    'test-coverage': 'Add missing test cases, focusing on uncovered branches',
    'code-quality': 'Refactor complex functions and eliminate code duplication',
    'ux': 'Unify error messages and loading-state handling'
  }
  return recommendations[type] || 'Analyze further and draft an improvement plan'
}
```

### Step 3: Result Processing

```javascript
// Assemble analysis results
const analysis = {
  defect_patterns: analyzeDefectPatterns(discoveredIssues, layerResults),
  coverage_gaps: analyzeCoverageGaps(coverageData, strategy),
  test_effectiveness: analyzeTestEffectiveness(generatedTests, layerResults),
  quality_trend: analyzeQualityTrend(coverageHistory),
  quality_score: 0
}

analysis.quality_score = calculateQualityScore(analysis)

// Generate the report file
const reportContent = generateReportMarkdown(analysis)
Bash(`mkdir -p "${sessionFolder}/analysis"`)
Write(`${sessionFolder}/analysis/quality-report.md`, reportContent)

// Update shared memory
sharedMemory.defect_patterns = analysis.defect_patterns.patterns
sharedMemory.quality_score = analysis.quality_score
sharedMemory.coverage_history = sharedMemory.coverage_history || []
sharedMemory.coverage_history.push({
  date: new Date().toISOString(),
  coverage: analysis.coverage_gaps.total_coverage || 0,
  quality_score: analysis.quality_score,
  issues: analysis.defect_patterns.total
})
Write(`${sessionFolder}/.msg/meta.json`, JSON.stringify(sharedMemory, null, 2))

function generateReportMarkdown(analysis) {
  return `# Quality Assurance Report

## Quality Score: ${analysis.quality_score}/100

---

## 1. Defect Pattern Analysis
- Total issues found: ${analysis.defect_patterns.total}
- Recurring patterns: ${analysis.defect_patterns.patterns.length}

${analysis.defect_patterns.patterns.map(p =>
  `### Pattern: ${p.type} (${p.count} occurrences)
- Files: ${p.files.join(', ')}
- Common location: ${p.common_pattern}
- Recommendation: ${p.recommendation}`
).join('\n\n')}

## 2. Coverage Analysis
- Overall coverage: ${analysis.coverage_gaps.total_coverage || 'N/A'}%
- Coverage gaps: ${(analysis.coverage_gaps.gaps || []).length}

${(analysis.coverage_gaps.gaps || []).map(g =>
  `- **${g.layer}**: target ${g.target}% vs actual ${g.actual}% (gap: ${g.gap}%, severity: ${g.severity})`
).join('\n')}

### Low Coverage Files
${(analysis.coverage_gaps.file_gaps || []).map(f =>
  `- ${f.file}: ${f.coverage}% [${f.severity}]`
).join('\n')}

## 3. Test Effectiveness
${Object.entries(analysis.test_effectiveness).map(([layer, data]) =>
  `- **${layer}**: ${data.files_generated} files, pass rate ${data.pass_rate}%, ${data.iterations_needed} fix iterations, ${data.effective ? 'EFFECTIVE' : 'NEEDS IMPROVEMENT'}`
).join('\n')}

## 4. Quality Trend
- Trend: ${analysis.quality_trend.trend}
${analysis.quality_trend.delta !== undefined ? `- Coverage delta: ${analysis.quality_trend.delta > 0 ? '+' : ''}${analysis.quality_trend.delta}%` : ''}
- Confidence: ${analysis.quality_trend.confidence}

## 5. Recommendations
${analysis.quality_score >= 80 ? '- Quality is **GOOD**. Maintain current testing practices.' : ''}
${analysis.quality_score >= 60 && analysis.quality_score < 80 ? '- Quality needs **IMPROVEMENT**. Focus on coverage gaps and recurring patterns.' : ''}
${analysis.quality_score < 60 ? '- Quality is **CONCERNING**. Recommend comprehensive review and testing effort.' : ''}
${analysis.defect_patterns.patterns.map(p => `- [${p.type}] ${p.recommendation}`).join('\n')}
${(analysis.coverage_gaps.gaps || []).map(g => `- Close ${g.layer} coverage gap: +${g.gap}% needed`).join('\n')}
`
}
```

## Output Format

```
## Quality Analysis Results

### Quality Score: [score]/100

### Dimensions
1. Defect Patterns: [count] recurring
2. Coverage Gaps: [count] layers below target
3. Test Effectiveness: [effective_count]/[total_layers] effective
4. Quality Trend: [improving|stable|declining]

### Report Location
[session]/analysis/quality-report.md
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No coverage data available | Score based on other dimensions only |
| No execution results | Analyze only scout findings and strategy |
| Shared memory empty/corrupt | Generate minimal report with available data |
| CLI analysis fails | Fall back to direct inline analysis |
| Insufficient history for trend | Report 'insufficient_data', skip trend scoring |
| Agent/CLI failure | Retry once, then fall back to inline execution |
| Timeout (>5 min) | Report partial results, notify coordinator |
@@ -1,183 +0,0 @@
# Analyst Role

Quality analyst. Analyze defect patterns, coverage gaps, test effectiveness, and generate comprehensive quality reports. Maintain defect pattern database and provide feedback data for scout and strategist.

## Identity

- **Name**: `analyst` | **Tag**: `[analyst]`
- **Task Prefix**: `QAANA-*`
- **Responsibility**: Read-only analysis (quality analysis)

## Boundaries

### MUST
- Only process `QAANA-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[analyst]` identifier
- Only communicate with coordinator via SendMessage
- Generate analysis reports based on data
- Update defect patterns and quality score in shared memory
- Work strictly within quality analysis responsibility scope

### MUST NOT
- Execute work outside this role's responsibility scope
- Modify source code or test code
- Execute tests
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Omit `[analyst]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `quality-report` | [commands/quality-report.md](commands/quality-report.md) | Phase 3 | Defect pattern + coverage analysis |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `gemini` | CLI | quality-report.md | Defect pattern recognition and trend analysis |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `analysis_ready` | analyst -> coordinator | Analysis complete | Contains quality score |
| `quality_report` | analyst -> coordinator | Report generated | Contains detailed analysis |
| `error` | analyst -> coordinator | Analysis failed | Blocking error |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "analyst",
  type: <message-type>,
  data: { ref: <report-path> }
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from analyst --type <message-type> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `QAANA-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
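The discovery filter above can be sketched as a small predicate. This is an illustrative shape only: `discoverTask` and the task fields (`owner`, `status`, `blocked_by`) are hypothetical, while TaskList/TaskGet/TaskUpdate are the skill's actual tools.

```javascript
// Hedged sketch: pick the first claimable QAANA-* task from a TaskList() result.
function discoverTask(tasks, owner) {
  return tasks.find(t =>
    t.id.startsWith('QAANA-') &&          // prefix match
    (!t.owner || t.owner === owner) &&    // owner match (unowned counts)
    t.status === 'pending' &&             // not yet started
    (t.blocked_by || []).length === 0     // unblocked
  ) || null
}
```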

### Phase 2: Context Loading

**Loading steps**:

1. Extract session path from task description
2. Read shared memory to get all accumulated data

| Input | Source | Required |
|-------|--------|----------|
| Shared memory | <session-folder>/.msg/meta.json | Yes |
| Discovered issues | sharedMemory.discovered_issues | No |
| Test strategy | sharedMemory.test_strategy | No |
| Generated tests | sharedMemory.generated_tests | No |
| Execution results | sharedMemory.execution_results | No |
| Historical patterns | sharedMemory.defect_patterns | No |

3. Read coverage data from `coverage/coverage-summary.json` if available
4. Read test execution logs from `<session-folder>/results/run-*.json`

### Phase 3: Multi-Dimensional Analysis

Delegate to `commands/quality-report.md` if available, otherwise execute inline.

**Analysis Dimensions**:

| Dimension | Description |
|-----------|-------------|
| Defect Patterns | Group issues by type, identify recurring patterns |
| Coverage Gaps | Compare actual vs target coverage per layer |
| Test Effectiveness | Evaluate test generation and execution results |
| Quality Trend | Analyze coverage history over time |
| Quality Score | Calculate comprehensive score (0-100) |

**Defect Pattern Analysis**:
- Group issues by perspective/type
- Identify patterns with >= 2 occurrences
- Record pattern type, count, affected files

**Coverage Gap Analysis**:
- Compare total coverage vs layer targets
- Record gaps: layer, target, actual, gap percentage

**Test Effectiveness Analysis**:
- Files generated, pass rate, iterations needed
- Effective if pass_rate >= 95%

**Quality Score Calculation**:

| Factor | Impact |
|--------|--------|
| Critical issues (security) | -10 per issue |
| High issues (bug) | -5 per issue |
| Coverage gap | -0.5 per gap percentage |
| Effective test layers | +5 per layer |
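The factor table can be read as a small scoring function. A minimal sketch follows; `scoreFromFactors` and its input shape are hypothetical (the actual implementation lived in `commands/quality-report.md`), and the score is clamped to 0-100 as described there.

```javascript
// Hedged sketch of the factor table above (hypothetical helper).
function scoreFromFactors({ securityIssues = 0, bugIssues = 0, coverageGaps = [], effectiveLayers = 0 }) {
  let score = 100
  score -= securityIssues * 10                         // Critical issues (security): -10 per issue
  score -= bugIssues * 5                               // High issues (bug): -5 per issue
  for (const gap of coverageGaps) score -= gap * 0.5   // Coverage gap: -0.5 per gap percentage point
  score += effectiveLayers * 5                         // Effective test layers: +5 per layer
  return Math.max(0, Math.min(100, Math.round(score))) // clamp to 0-100
}
```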

### Phase 4: Report Generation

**Report Structure**:
1. Quality Score (0-100)
2. Defect Pattern Analysis (total issues, recurring patterns)
3. Coverage Analysis (overall coverage, gaps by layer)
4. Test Effectiveness (per layer stats)
5. Quality Trend (improving/declining/stable)
6. Recommendations (based on score range)

**Score-based Recommendations**:

| Score Range | Recommendation |
|-------------|----------------|
| >= 80 | Quality is GOOD. Continue with current testing strategy. |
| 60-79 | Quality needs IMPROVEMENT. Focus on coverage gaps and recurring patterns. |
| < 60 | Quality is CONCERNING. Recommend deep scan and comprehensive test generation. |

Write report to `<session-folder>/analysis/quality-report.md`.

Update shared memory:
- `defect_patterns`: identified patterns
- `quality_score`: calculated score
- `coverage_history`: append new data point

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[analyst]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No QAANA-* tasks available | Idle, wait for coordinator |
| Coverage data not found | Report quality score based on other dimensions |
| Shared memory empty | Generate minimal report with available data |
| No execution results | Analyze only scout findings and strategy coverage |
| CLI analysis fails | Fall back to inline pattern analysis |
| Critical issue beyond scope | SendMessage error to coordinator |
# Command: Monitor

> Stage-driven coordination loop: wait for workers in pipeline-stage order, route messages, trigger GC loops, and enforce quality gates.

Handle all coordinator monitoring events: worker callbacks, status checks, pipeline advancement, GC loops, and completion.

## Constants

| Key | Value |
|-----|-------|
| SPAWN_MODE | background |
| ONE_STEP_PER_INVOCATION | true |
| WORKER_AGENT | team-worker |
| MAX_GC_ROUNDS | 3 |

## When to Use

- Phase 4 of Coordinator
- The task chain has been created and dispatched
- Continuous monitoring is required until all tasks complete

**Trigger conditions**:

- Starts immediately after dispatch completes
- Re-entered after the GC loop creates new tasks

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session state | <session>/session.json | Yes |
| Task list | TaskList() | Yes |
| Trigger event | From Entry Router detection | Yes |
| Meta state | <session>/.msg/meta.json | Yes |
| Pipeline definition | From SKILL.md | Yes |
**Loading steps**:

1. Load session.json for the current state, `pipeline_mode`, and `gc_rounds`
2. Run TaskList() to get current task statuses
3. Identify the trigger event type from the Entry Router

### Delegation Mode

**Mode**: Stage-driven (wait in stage order, no polling)

### Role Detection Table

| Message Pattern | Role Detection |
|----------------|---------------|
| `[scout]` or task ID `SCOUT-*` | scout |
| `[strategist]` or task ID `QASTRAT-*` | strategist |
| `[generator]` or task ID `QAGEN-*` | generator |
| `[executor]` or task ID `QARUN-*` | executor |
| `[analyst]` or task ID `QAANA-*` | analyst |
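The table above can be sketched as a lookup. This is an illustrative helper (`detectRole` and `ROLE_PATTERNS` are hypothetical names, not part of the skill's API):

```javascript
// Resolve a worker role from a message prefix or task ID,
// mirroring the Role Detection Table above.
const ROLE_PATTERNS = [
  { role: 'scout', tag: '[scout]', prefix: 'SCOUT-' },
  { role: 'strategist', tag: '[strategist]', prefix: 'QASTRAT-' },
  { role: 'generator', tag: '[generator]', prefix: 'QAGEN-' },
  { role: 'executor', tag: '[executor]', prefix: 'QARUN-' },
  { role: 'analyst', tag: '[analyst]', prefix: 'QAANA-' }
]

function detectRole(message, taskId) {
  const hit = ROLE_PATTERNS.find(p =>
    message.startsWith(p.tag) || (taskId || '').startsWith(p.prefix))
  return hit ? hit.role : null
}
```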
### Design Principles

> **Model execution has no concept of time -- polling waits of any kind are forbidden.**
>
> - ❌ Forbidden: `while` loop + `sleep` + status check (idle spinning wastes API turns)
> - ❌ Forbidden: `Bash(sleep N)` / `Bash(timeout /t N)` as a waiting mechanism
> - ✅ Use: synchronous `Task()` calls (`run_in_background: false`) -- the call itself is the wait
> - ✅ Use: worker return = stage-completion signal (a natural callback)
>
> **Rationale**: `Task(run_in_background: false)` is a blocking call; the coordinator suspends automatically until the worker returns.
> No sleep, no polling, no message-bus monitoring. The worker's return is the callback.

### Decision Logic

```javascript
// Message routing table
const routingTable = {
  // Scout done
  'scan_ready': { action: 'Mark SCOUT complete, unblock QASTRAT' },
  'issues_found': { action: 'Mark SCOUT complete with issues, unblock QASTRAT' },
  // Strategist done
  'strategy_ready': { action: 'Mark QASTRAT complete, unblock QAGEN' },
  // Generator done
  'tests_generated': { action: 'Mark QAGEN complete, unblock QARUN' },
  'tests_revised': { action: 'Mark QAGEN-fix complete, unblock QARUN-gc' },
  // Executor done
  'tests_passed': { action: 'Mark QARUN complete, check coverage, unblock next', special: 'check_coverage' },
  'tests_failed': { action: 'Evaluate failures, decide GC loop or continue', special: 'gc_decision' },
  // Analyst done
  'analysis_ready': { action: 'Mark QAANA complete, evaluate quality gate', special: 'quality_gate' },
  'quality_report': { action: 'Quality report received, prepare final report', special: 'finalize' },
  // Errors
  'error': { action: 'Assess severity, retry or escalate', special: 'error_handler' }
}
```

### Pipeline Stage Order

```
SCOUT -> QASTRAT -> QAGEN -> QARUN -> QAANA
```
## Phase 3: Event Handlers
### handleCallback

Triggered when a worker sends a completion message.

1. Parse the message to identify the role and task ID using the Role Detection Table

2. Mark the task as completed:

```
TaskUpdate({ taskId: "<task-id>", status: "completed" })
```

3. Record completion in session state

4. **GC Loop Check** (when executor QARUN completes):

Read `<session>/.msg/meta.json` for execution results.

| Condition | Action |
|-----------|--------|
| Coverage >= target OR no coverage data | Proceed to handleSpawnNext |
| Coverage < target AND gc_rounds < 3 | Create GC fix tasks, increment gc_rounds |
| Coverage < target AND gc_rounds >= 3 | Accept current coverage, proceed to handleSpawnNext |
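The decision table above can be sketched as a small function. This is an illustrative sketch only (`gcDecision` and its return labels are hypothetical names):

```javascript
// Encode the GC Loop Check: given current coverage, the layer target,
// and the number of GC rounds already run, pick the next action.
function gcDecision(coverage, target, gcRounds, maxRounds = 3) {
  if (coverage == null || coverage >= target) return 'spawn_next'
  if (gcRounds < maxRounds) return 'create_gc_fix_tasks'
  return 'accept_and_spawn_next' // stop trying to improve coverage
}
```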
**GC Fix Task Creation** (when coverage is below target):

```
TaskCreate({
  subject: "QAGEN-fix-<round>",
  description: "PURPOSE: Fix failing tests and improve coverage | Success: Coverage meets target
TASK:
- Load execution results and failing test details
- Fix broken tests and add missing coverage
- Re-validate fixes
CONTEXT:
- Session: <session-folder>
- Upstream artifacts: <session>/.msg/meta.json
EXPECTED: Fixed test files | Improved coverage
CONSTRAINTS: Targeted fixes only | Do not introduce regressions",
  blockedBy: [],
  status: "pending"
})

TaskCreate({
  subject: "QARUN-recheck-<round>",
  description: "PURPOSE: Re-execute tests after fixes | Success: Coverage >= target
TASK:
- Execute test suite on fixed code
- Measure coverage
- Report results
CONTEXT:
- Session: <session-folder>
EXPECTED: Execution results with coverage metrics
CONSTRAINTS: Read-only execution",
  blockedBy: ["QAGEN-fix-<round>"],
  status: "pending"
})
```

5. Proceed to handleSpawnNext
### handleSpawnNext

Find and spawn the next ready tasks.

1. Scan the task list for tasks where:
   - Status is "pending"
   - All blockedBy tasks have status "completed"

2. If there are no ready tasks and all tasks are completed, proceed to handleComplete

3. If there are no ready tasks but some are still in_progress, STOP and wait

4. For each ready task, determine the role from the task subject prefix:

```javascript
const STAGE_WORKER_MAP = {
  'SCOUT': { role: 'scout' },
  'QASTRAT': { role: 'strategist' },
  'QAGEN': { role: 'generator' },
  'QARUN': { role: 'executor' },
  'QAANA': { role: 'analyst' }
}

// ★ Unified auto-mode detection: -y/--yes propagated from $ARGUMENTS or ccw
const autoYes = /\b(-y|--yes)\b/.test(args)
```
5. Spawn team-worker (one at a time for the sequential pipeline):

```
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker for <task-id>",
  team_name: "quality-assurance",
  name: "<role>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-quality-assurance/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: quality-assurance
requirement: <task-description>
inner_loop: false

## Current Task
- Task ID: <task-id>
- Task: <task-subject>

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`
})
```

6. STOP after spawning -- wait for the next callback

## Execution Steps

### Step 1: Context Preparation

```javascript
// Read coverage targets from shared memory
const sharedMemory = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`))
const strategy = sharedMemory.test_strategy || {}
const coverageTargets = {}
for (const layer of (strategy.layers || [])) {
  coverageTargets[layer.level] = layer.target_coverage
}

let gcIteration = 0
const MAX_GC_ITERATIONS = 3

// Get the pipeline stage list (from the task chain created by dispatch)
const allTasks = TaskList()
const pipelineTasks = allTasks
  .filter(t => t.owner && t.owner !== 'coordinator')
  .sort((a, b) => Number(a.id) - Number(b.id))
```

### Step 2: Sequential Stage Execution (Stop-Wait)

> **Core idea**: spawn one worker per stage and block synchronously until it returns.
> Worker return = stage complete. No sleep, no polling, no message-bus monitoring.

```javascript
// Process each stage in dependency order
for (const stageTask of pipelineTasks) {
  // 1. Extract the stage prefix -> determine the worker role
  const stagePrefix = stageTask.subject.match(/^([\w-]+)-\d/)?.[1]?.replace(/-L\d$/, '')
  const workerConfig = STAGE_WORKER_MAP[stagePrefix]

  if (!workerConfig) {
    mcp__ccw-tools__team_msg({
      operation: "log", session_id: teamName, from: "coordinator",
    })
    continue
  }

  // 2. Mark the task as in progress
  TaskUpdate({ taskId: stageTask.id, status: 'in_progress' })

  mcp__ccw-tools__team_msg({
    operation: "log", session_id: teamName, from: "coordinator",
    to: workerConfig.role, type: "task_unblocked",
  })

  // 3. Spawn the worker synchronously -- block until it returns (the Stop-Wait core)
  const workerResult = Task({
    subagent_type: "team-worker",
    description: `Spawn ${workerConfig.role} worker for ${stageTask.subject}`,
    team_name: teamName,
    name: workerConfig.role,
    prompt: `## Role Assignment
role: ${workerConfig.role}
role_spec: .claude/skills/team-quality-assurance/role-specs/${workerConfig.role}.md
session: ${sessionFolder}
session_id: ${sessionId}
team_name: ${teamName}
requirement: ${stageTask.description || taskDescription}

## Current Task
- Task ID: ${stageTask.id}
- Task: ${stageTask.subject}

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`,
    run_in_background: false
  })
  // 4. The worker has returned -- process the result directly
  const taskState = TaskGet({ taskId: stageTask.id })

  if (taskState.status !== 'completed') {
    // Worker returned but did not mark the task completed -> exception handling
    if (autoYes) {
      mcp__ccw-tools__team_msg({
        operation: "log", session_id: teamName, from: "coordinator",
        type: "error",
      })
      TaskUpdate({ taskId: stageTask.id, status: 'deleted' })
      continue
    }

    const decision = AskUserQuestion({
      questions: [{
        question: `Stage "${stageTask.subject}" worker returned without completing. How should this be handled?`,
        header: "Stage Fail",
        multiSelect: false,
        options: [
          { label: "Retry", description: "Re-spawn a worker to execute this stage" },
          { label: "Skip", description: "Mark as skipped and continue the rest of the pipeline" },
          { label: "Abort", description: "Stop the whole QA flow and report current results" }
        ]
      }]
    })

    const answer = decision["Stage Fail"]
    if (answer === "Skip") {
      TaskUpdate({ taskId: stageTask.id, status: 'deleted' })
      continue
    } else if (answer === "Abort") {
      mcp__ccw-tools__team_msg({
        operation: "log", session_id: teamName, from: "coordinator",
        to: "user", type: "shutdown",
      })
      break
    }
    // Retry: continue to next iteration will re-process if logic wraps
  } else {
    mcp__ccw-tools__team_msg({
      operation: "log", session_id: teamName, from: "coordinator",
    })
  }

  // 5. Inter-stage check (after QARUN, check coverage and decide on a GC loop)
  if (stagePrefix === 'QARUN') {
    const latestMemory = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`))
    const coverage = latestMemory.execution_results?.coverage || 0
    const targetLayer = stageTask.metadata?.layer || 'L1'
    const target = coverageTargets[targetLayer] || 80

    if (coverage < target && gcIteration < MAX_GC_ITERATIONS) {
      gcIteration++
      mcp__ccw-tools__team_msg({
        operation: "log", session_id: teamName, from: "coordinator",
        to: "generator", type: "gc_loop_trigger",
      })
      // Create GC fix tasks and append them to the pipeline
    }
  }
}
```
### Step 2.1: Message Processing (processMessage)
```javascript
function processMessage(msg, handler) {
  switch (handler.special) {
    case 'check_coverage': {
      const coverage = msg.data?.coverage || 0
      const targetLayer = msg.data?.layer || 'L1'
      const target = coverageTargets[targetLayer] || 80

      if (coverage < target) {
        handleGCDecision(coverage, targetLayer)
      }
      // Coverage met the target: nothing extra to do, the pipeline flows on
      break
    }

    case 'gc_decision': {
      const coverage = msg.data?.coverage || 0
      const targetLayer = msg.data?.layer || 'L1'
      handleGCDecision(coverage, targetLayer)
      break
    }

    case 'quality_gate': {
      // Re-read the latest shared memory
      const latestMemory = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`))
      const qualityScore = latestMemory.quality_score || 0
      let status = 'PASS'
      if (qualityScore < 60) status = 'FAIL'
      else if (qualityScore < 80) status = 'CONDITIONAL'

      mcp__ccw-tools__team_msg({
        operation: "log", session_id: teamName, from: "coordinator",
        to: "user", type: "quality_gate",
      })
      break
    }

    case 'error_handler': {
      const severity = msg.data?.severity || 'medium'
      if (severity === 'critical') {
        SendMessage({
          content: `## [coordinator] Critical Error from ${msg.from}\n\n${msg.summary}`,
        })
      }
      break
    }
  }
}

function handleGCDecision(coverage, targetLayer) {
  if (gcIteration < MAX_GC_ITERATIONS) {
    gcIteration++
    mcp__ccw-tools__team_msg({
      operation: "log", session_id: teamName, from: "coordinator",
      data: { iteration: gcIteration, layer: targetLayer, coverage }
    })
    // Create GC fix tasks (see dispatch.md createGCLoopTasks)
  } else {
    mcp__ccw-tools__team_msg({
      operation: "log", session_id: teamName, from: "coordinator",
    })
  }
}
```
### Step 3: Result Processing

```javascript
// Aggregate all results
const finalSharedMemory = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`))
const allFinalTasks = TaskList()
const workerTasks = allFinalTasks.filter(t => t.owner && t.owner !== 'coordinator')
const summary = {
  total_tasks: workerTasks.length,
  completed_tasks: workerTasks.filter(t => t.status === 'completed').length,
  gc_iterations: gcIteration,
  quality_score: finalSharedMemory.quality_score,
  coverage: finalSharedMemory.execution_results?.coverage
}
```

### handleCheck

Output the current pipeline status -- do NOT advance the pipeline.

```
## Coordination Summary

Pipeline Status:
[DONE] SCOUT-001 (scout) -> scan complete
[DONE] QASTRAT-001 (strategist) -> strategy ready
[RUN]  QAGEN-001 (generator) -> generating tests...
[WAIT] QARUN-001 (executor) -> blocked by QAGEN-001
[WAIT] QAANA-001 (analyst) -> blocked by QARUN-001

GC Rounds: 0/3
Session: <session-id>
```

## Output Format

```
### Pipeline Status: COMPLETE
### Tasks: [completed]/[total]
### GC Iterations: [count]
### Quality Score: [score]/100
### Coverage: [percent]%

### Message Log (last 10)
- [timestamp] [from] → [to]: [type] - [summary]
```

## Error Handling
| Scenario | Resolution |
|----------|------------|
| Worker returns without completing (interactive mode) | AskUserQuestion: retry / skip / abort |
| Worker returns without completing (auto mode) | Skip automatically and log it |
| Worker spawn fails | Retry once; if it still fails, escalate to the user |
| Quality gate FAIL | Report to user, suggest targeted re-run |
| GC loop stuck > 3 iterations | Accept current coverage, continue pipeline |
### handleResume

Resume the pipeline after a user pause or interruption.

1. Audit the task list for inconsistencies:
   - Tasks stuck in "in_progress" -> reset to "pending"
   - Tasks with completed blockers but still "pending" -> include in spawn list
2. Proceed to handleSpawnNext
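The resume audit can be sketched as a pure function over the task list. This is an illustrative sketch; the task shape `{ id, status, blockedBy }` is an assumption about what TaskList() returns:

```javascript
// Partition tasks into those to reset (stuck in_progress)
// and those ready to spawn (pending with all blockers completed).
function auditTasks(tasks) {
  const done = new Set(tasks.filter(t => t.status === 'completed').map(t => t.id))
  const reset = []
  const ready = []
  for (const t of tasks) {
    if (t.status === 'in_progress') reset.push(t.id) // stuck -> back to pending
    else if (t.status === 'pending' &&
             (t.blockedBy || []).every(id => done.has(id))) ready.push(t.id)
  }
  return { reset, ready }
}
```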
### handleComplete

Triggered when all pipeline tasks are completed.

1. Verify all tasks (including any GC fix/recheck tasks) have status "completed"
2. If any tasks are not completed, return to handleSpawnNext
3. If all completed:
   - Read final state from `<session>/.msg/meta.json`
   - Compile summary: total tasks, completed, gc_rounds, quality_score, coverage
   - Transition to coordinator Phase 5

## Phase 4: State Persistence

After every handler execution:

1. Update session.json with the current state (active tasks, gc_rounds, last event)
2. Verify task list consistency
3. STOP and wait for the next event
# Command: run-fix-cycle

> Iterative test execution and automatic fixing. Run the test suite, parse the results, and on failure delegate fixes to code-developer, for at most 5 iterations.

## When to Use

- Phase 3 of Executor
- Test code has been generated and needs to be executed and validated
- Re-running fixed tests inside the GC loop

**Trigger conditions**:
- A QARUN-* task enters its execution phase
- The generator reports that test generation is complete
- A re-execution task created by the coordinator inside the GC loop
## Strategy

### Delegation Mode

**Mode**: Sequential Delegation (when fixing) / Direct (when executing)
**Agent Type**: `code-developer` (fixes only)
**Max Iterations**: 5

### Decision Logic

```javascript
// Per-iteration decision
function shouldContinue(iteration, passRate, testsFailed) {
  if (iteration >= MAX_ITERATIONS) return false
  if (testsFailed === 0) return false // all passed
  if (passRate >= 95 && iteration >= 2) return false // good enough
  return true
}
```
## Execution Steps

### Step 1: Context Preparation

```javascript
// Detect test framework and command
const strategy = sharedMemory.test_strategy || {}
const framework = strategy.test_framework || 'vitest'
const targetLayer = task.description.match(/layer:\s*(L[123])/)?.[1] || 'L1'

// Build the test command
function buildTestCommand(framework, layer) {
  const layerFilter = {
    'L1': 'unit',
    'L2': 'integration',
    'L3': 'e2e'
  }

  const commands = {
    'vitest': `npx vitest run --coverage --reporter=json --outputFile=test-results.json`,
    'jest': `npx jest --coverage --json --outputFile=test-results.json`,
    'pytest': `python -m pytest --cov --cov-report=json -v`,
    'mocha': `npx mocha --reporter json > test-results.json`
  }

  let cmd = commands[framework] || 'npm test -- --coverage'

  // Add a layer filter (when test files are organized by directory)
  const filter = layerFilter[layer]
  if (filter && framework === 'vitest') {
    cmd += ` --testPathPattern="${filter}"`
  }

  return cmd
}

const testCommand = buildTestCommand(framework, targetLayer)

// Get the associated test files
const generatedTests = sharedMemory.generated_tests?.[targetLayer]?.files || []
```
### Step 2: Execute Strategy

```javascript
let iteration = 0
const MAX_ITERATIONS = 5
let lastOutput = ''
let passRate = 0
let coverage = 0
let testsPassed = 0
let testsFailed = 0

while (iteration < MAX_ITERATIONS) {
  // ===== EXECUTE TESTS =====
  lastOutput = Bash(`${testCommand} 2>&1 || true`)

  // ===== PARSE RESULTS =====
  // Parse pass/fail counts
  const passedMatch = lastOutput.match(/(\d+)\s*(?:passed|passing)/)
  const failedMatch = lastOutput.match(/(\d+)\s*(?:failed|failing)/)
  testsPassed = passedMatch ? parseInt(passedMatch[1]) : 0
  testsFailed = failedMatch ? parseInt(failedMatch[1]) : 0
  const testsTotal = testsPassed + testsFailed

  passRate = testsTotal > 0 ? Math.round(testsPassed / testsTotal * 100) : 0

  // Parse coverage
  try {
    const coverageJson = JSON.parse(Read('coverage/coverage-summary.json'))
    coverage = coverageJson.total?.lines?.pct || 0
  } catch {
    // Fall back to parsing the output
    const covMatch = lastOutput.match(/(?:Lines|Stmts|All files)\s*[:|]\s*(\d+\.?\d*)%/)
    coverage = covMatch ? parseFloat(covMatch[1]) : 0
  }

  // ===== CHECK PASS =====
  if (testsFailed === 0) {
    break // all passed
  }

  // ===== SHOULD CONTINUE? =====
  if (!shouldContinue(iteration + 1, passRate, testsFailed)) {
    break
  }

  // ===== AUTO-FIX =====
  iteration++

  // Extract failure details
  const failureLines = lastOutput.split('\n')
    .filter(l => /FAIL|Error|AssertionError|Expected|Received|TypeError|ReferenceError/.test(l))
    .slice(0, 30)
    .join('\n')

  // Delegate the fix to code-developer
  Task({
    subagent_type: "code-developer",
    run_in_background: false,
    description: `Fix ${testsFailed} test failures (iteration ${iteration}/${MAX_ITERATIONS})`,
    prompt: `## Goal
Fix failing tests. ONLY modify test files, NEVER modify source code.

## Test Output
\`\`\`
${failureLines}
\`\`\`

## Test Files to Fix
${generatedTests.map(f => `- ${f}`).join('\n')}

## Rules
- Read each failing test file before modifying
- Fix: incorrect assertions, missing imports, wrong mocks, setup issues
- Do NOT: skip tests, add \`@ts-ignore\`, use \`as any\`, modify source code
- Keep existing test structure and naming
- If a test is fundamentally wrong about expected behavior, fix the assertion to match actual source behavior`
  })
}
```
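The result-parsing step above can be checked in isolation. A self-contained version of the same regex logic (`parseTestOutput` is an illustrative name, not part of the command's API):

```javascript
// Extract pass/fail counts and a rounded pass rate from raw runner output.
function parseTestOutput(output) {
  const passed = parseInt(output.match(/(\d+)\s*(?:passed|passing)/)?.[1] || '0', 10)
  const failed = parseInt(output.match(/(\d+)\s*(?:failed|failing)/)?.[1] || '0', 10)
  const total = passed + failed
  return {
    passed,
    failed,
    passRate: total > 0 ? Math.round(passed / total * 100) : 0
  }
}
```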
### Step 3: Result Processing

```javascript
const resultData = {
  layer: targetLayer,
  framework: framework,
  iterations: iteration,
  pass_rate: passRate,
  coverage: coverage,
  tests_passed: testsPassed,
  tests_failed: testsFailed,
  all_passed: testsFailed === 0,
  max_iterations_reached: iteration >= MAX_ITERATIONS
}

// Save execution results
Bash(`mkdir -p "${sessionFolder}/results"`)
Write(`${sessionFolder}/results/run-${targetLayer}.json`, JSON.stringify(resultData, null, 2))

// Save the last test output (key excerpt only)
const outputSummary = lastOutput.split('\n').slice(-30).join('\n')
Write(`${sessionFolder}/results/output-${targetLayer}.txt`, outputSummary)

// Update shared memory
sharedMemory.execution_results = sharedMemory.execution_results || {}
sharedMemory.execution_results[targetLayer] = resultData
sharedMemory.execution_results.pass_rate = passRate
sharedMemory.execution_results.coverage = coverage
Write(`${sessionFolder}/.msg/meta.json`, JSON.stringify(sharedMemory, null, 2))
```
## Output Format

```
## Test Execution Results

### Layer: [L1|L2|L3]
### Framework: [vitest|jest|pytest]
### Status: [PASS|FAIL]

### Results
- Tests passed: [count]
- Tests failed: [count]
- Pass rate: [percent]%
- Coverage: [percent]%
- Fix iterations: [count]/[max]

### Failure Details (if any)
- [test name]: [error description]
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Test command not found | Try fallback: npm test → npx vitest → npx jest → pytest |
| Test environment broken | Report error to coordinator, suggest manual fix |
| Max iterations reached with failures | Report current state, let coordinator decide (GC loop or accept) |
| Coverage data unavailable | Report 0%, note coverage collection failure |
| Sub-agent fix introduces new failures | Revert last fix, try different approach |
| No test files to run | Report empty, notify coordinator |
| Agent/CLI failure | Retry once, then fall back to inline execution |
| Timeout (>5 min) | Report partial results, notify coordinator |
# Executor Role

Test executor. Runs test suites, collects coverage data, and performs automatic fix cycles when tests fail. Implements the execution side of the Generator-Executor (GC) loop.

## Identity

- **Name**: `executor` | **Tag**: `[executor]`
- **Task Prefix**: `QARUN-*`
- **Responsibility**: Validation (test execution and fix)

## Boundaries

### MUST
- Only process `QARUN-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[executor]` identifier
- Only communicate with the coordinator via SendMessage
- Execute tests and collect coverage
- Attempt an automatic fix on failure
- Work strictly within the test-execution responsibility scope

### MUST NOT
- Execute work outside this role's responsibility scope
- Generate new tests from scratch (that is the generator's responsibility)
- Modify source code (unless fixing tests themselves)
- Communicate directly with other worker roles (must go through the coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Omit the `[executor]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `run-fix-cycle` | [commands/run-fix-cycle.md](commands/run-fix-cycle.md) | Phase 3 | Iterative test execution and auto-fix |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `code-developer` | subagent | run-fix-cycle.md | Test failure auto-fix |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `tests_passed` | executor -> coordinator | All tests pass | Contains coverage data |
| `tests_failed` | executor -> coordinator | Tests fail | Contains failure details and fix attempts |
| `coverage_report` | executor -> coordinator | Coverage collected | Coverage data |
| `error` | executor -> coordinator | Execution environment error | Blocking error |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "executor",
  type: <message-type>,
  data: { ref: <results-file>, pass_rate, coverage, iterations }
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from executor --type <message-type> --json")
```
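The MCP-to-CLI fallback above can be sketched as a wrapper. This is an illustrative sketch only: `mcpLog` and `cliLog` are hypothetical stand-ins for `mcp__ccw-tools__team_msg` and the `ccw` CLI invocation, injected so the fallback path is testable:

```javascript
// Try the MCP logger first; on any error, shell out to the ccw CLI.
function logWithFallback(payload, mcpLog, cliLog) {
  try {
    return mcpLog(payload)
  } catch (e) {
    // MCP unavailable -> fall back to `ccw team log`
    return cliLog(
      `ccw team log --session-id ${payload.session_id} --from ${payload.from} --type ${payload.type} --json`)
  }
}
```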
---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `QARUN-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

For parallel instances, parse `--agent-name` from the arguments for owner matching. Falls back to `executor` for single-instance execution.

### Phase 2: Environment Detection

**Detection steps**:

1. Extract the session path from the task description
2. Read shared memory for the strategy and generated tests

| Input | Source | Required |
|-------|--------|----------|
| Shared memory | <session-folder>/.msg/meta.json | Yes |
| Test strategy | sharedMemory.test_strategy | Yes |
| Generated tests | sharedMemory.generated_tests | Yes |
| Target layer | task description | Yes |

3. Detect the test command based on the framework:

| Framework | Command Pattern |
|-----------|-----------------|
| jest | `npx jest --coverage --testPathPattern="<layer>"` |
| vitest | `npx vitest run --coverage --reporter=json` |
| pytest | `python -m pytest --cov --cov-report=json` |
| mocha | `npx mocha --reporter json` |
| unknown | `npm test -- --coverage` |

4. Get the changed test files from generated_tests[targetLayer].files
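One way to pick a row from the framework table above is to inspect package.json dependencies. This is a hypothetical sketch (`detectTestCommand` is not part of the role spec, and the check order is illustrative):

```javascript
// Choose a test command by looking for a known runner in package.json deps.
function detectTestCommand(pkg) {
  const deps = { ...(pkg.dependencies || {}), ...(pkg.devDependencies || {}) }
  if (deps.jest) return 'npx jest --coverage'
  if (deps.vitest) return 'npx vitest run --coverage --reporter=json'
  if (deps.mocha) return 'npx mocha --reporter json'
  return 'npm test -- --coverage' // unknown framework fallback
}
```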
### Phase 3: Execution & Fix Cycle

Delegate to `commands/run-fix-cycle.md` if available, otherwise execute inline.

**Iterative Test-Fix Cycle**:

| Step | Action |
|------|--------|
| 1 | Run test command |
| 2 | Parse results -> check pass rate |
| 3 | Pass rate >= 95% -> exit loop (success) |
| 4 | Extract failing test details |
| 5 | Delegate fix to code-developer subagent |
| 6 | Increment iteration counter |
| 7 | iteration >= MAX (5) -> exit loop (report failures) |
| 8 | Go to Step 1 |

**Fix Agent Prompt Structure**:
- Goal: Fix failing tests
- Constraint: Do NOT modify source code, only fix test files
- Input: Failure details, test file list
- Instructions: Read failing tests, fix assertions/imports/setup, do NOT skip/ignore tests
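The loop's exit conditions (steps 3 and 7 above) can be sketched as a single predicate. This is an illustrative helper (`fixCycleDone` and its return labels are hypothetical):

```javascript
// Decide whether the test-fix cycle should stop, and why.
function fixCycleDone(passRate, iteration, maxIterations = 5) {
  if (passRate >= 95) return 'success'
  if (iteration >= maxIterations) return 'report_failures'
  return null // keep iterating
}
```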
### Phase 4: Result Analysis

**Analyze test outcomes**:

| Metric | Source | Threshold |
|--------|--------|-----------|
| Pass rate | Test output parser | >= 95% |
| Coverage | Coverage tool output | Per layer target |
| Flaky tests | Compare runs | 0 flaky |

**Result Data Structure**:
- layer, iterations, pass_rate, coverage
- tests_passed, tests_failed, all_passed

Save results to `<session-folder>/results/run-<layer>.json`.

Update shared memory with the `execution_results` field.

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[executor]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

Message type selection: `tests_passed` if all_passed, else `tests_failed`.

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No QARUN-* tasks available | Idle, wait for coordinator |
| Test command fails to execute | Try fallback: `npm test`, `npx vitest run`, `pytest` |
| Max iterations reached | Report current pass rate, let coordinator decide |
| Coverage data unavailable | Report 0%, note coverage collection failure |
| Test environment broken | SendMessage error to coordinator, suggest manual fix |
| Sub-agent fix introduces new failures | Revert fix, try next failure |
# Command: generate-tests

> Generate test code by layer. Based on the strategist's plan and the project's existing test patterns, generate L1/L2/L3 test cases.

## When to Use

- Phase 3 of Generator
- The strategy is in place and test code for the corresponding layer needs to be generated
- Revising failing tests inside the GC loop

**Trigger conditions**:
- A QAGEN-* task enters its execution phase
- The test strategy includes the current layer
- The GC loop triggers a fix task (QAGEN-fix-*)

## Strategy

### Delegation Mode

**Mode**: Sequential Delegation (complex) / Direct (simple)
**Agent Type**: `code-developer`
**Delegation Scope**: Per-layer

### Decision Logic

```javascript
const focusFiles = layerConfig.focus_files || []
const isGCFix = task.subject.includes('fix')

if (isGCFix) {
  // GC fix mode: read failure info and apply targeted fixes
  mode = 'gc-fix'
} else if (focusFiles.length <= 3) {
  // Direct generation: inline Read -> analyze -> Write
  mode = 'direct'
} else {
  // Delegate to code-developer
  mode = 'delegate'
}
```
## Execution Steps
|
||||
|
||||
### Step 1: Context Preparation
|
||||
|
||||
```javascript
|
||||
// 从 shared memory 获取策略
|
||||
const strategy = sharedMemory.test_strategy || {}
|
||||
const targetLayer = task.description.match(/layer:\s*(L[123])/)?.[1] || 'L1'
|
||||
|
||||
// 确定层级配置
|
||||
const layerConfig = strategy.layers?.find(l => l.level === targetLayer) || {
|
||||
level: targetLayer,
|
||||
name: targetLayer === 'L1' ? 'Unit Tests' : targetLayer === 'L2' ? 'Integration Tests' : 'E2E Tests',
|
||||
target_coverage: targetLayer === 'L1' ? 80 : targetLayer === 'L2' ? 60 : 40,
|
||||
focus_files: []
|
||||
}
|
||||
|
||||
// 学习现有测试模式(必须找 3 个相似测试文件)
|
||||
const existingTests = Glob(`**/*.{test,spec}.{ts,tsx,js,jsx}`)
|
||||
const testPatterns = existingTests.slice(0, 3).map(f => ({
|
||||
path: f,
|
||||
content: Read(f)
|
||||
}))
|
||||
|
||||
// 检测测试约定
|
||||
const testConventions = detectTestConventions(testPatterns)
|
||||
```
### Step 2: Execute Strategy

```javascript
if (mode === 'gc-fix') {
  // GC fix mode
  // Read failure info
  const failedTests = sharedMemory.execution_results?.[targetLayer]
  const failureOutput = Read(`${sessionFolder}/results/run-${targetLayer}.json`)

  Task({
    subagent_type: "code-developer",
    run_in_background: false,
    description: `Fix failing ${targetLayer} tests (GC iteration)`,
    prompt: `## Goal
Fix the failing tests based on execution results. Do NOT modify source code.

## Test Execution Results
${JSON.stringify(failedTests, null, 2)}

## Test Conventions
${JSON.stringify(testConventions, null, 2)}

## Instructions
- Read each failing test file
- Fix assertions, imports, mocks, or test setup
- Ensure tests match actual source behavior
- Do NOT skip or ignore tests
- Do NOT modify source files`
  })

} else if (mode === 'direct') {
  // Direct generation mode
  const focusFiles = layerConfig.focus_files || []

  for (const sourceFile of focusFiles) {
    const sourceContent = Read(sourceFile)

    // Determine the test file path (follow project conventions)
    const testPath = determineTestPath(sourceFile, testConventions)

    // Check whether a test already exists
    let existingTest = null
    try { existingTest = Read(testPath) } catch {}

    if (existingTest) {
      // Supplement the existing test: analyze missing test cases
      const missingCases = analyzeMissingCases(sourceContent, existingTest)
      if (missingCases.length > 0) {
        // Append test cases
        Edit({
          file_path: testPath,
          old_string: findLastTestBlock(existingTest),
          new_string: `${findLastTestBlock(existingTest)}\n\n${generateCases(missingCases, testConventions)}`
        })
      }
    } else {
      // Create a new test file
      const testContent = generateFullTestFile(sourceFile, sourceContent, testConventions, targetLayer)
      Write(testPath, testContent)
    }
  }

} else {
  // Delegation mode
  const focusFiles = layerConfig.focus_files || []

  Task({
    subagent_type: "code-developer",
    run_in_background: false,
    description: `Generate ${targetLayer} tests for ${focusFiles.length} files`,
    prompt: `## Goal
Generate ${layerConfig.name} for the following source files.

## Test Framework
${strategy.test_framework || 'vitest'}

## Existing Test Patterns (MUST follow these exactly)
${testPatterns.map(t => `### ${t.path}\n\`\`\`\n${t.content.substring(0, 800)}\n\`\`\``).join('\n\n')}

## Test Conventions
- Test file location: ${testConventions.location}
- Import style: ${testConventions.importStyle}
- Describe/it nesting: ${testConventions.nesting}

## Source Files to Test
${focusFiles.map(f => `- ${f}`).join('\n')}

## Requirements
- Follow existing test patterns exactly (import style, naming, structure)
- Cover: happy path + edge cases + error cases
- Target coverage: ${layerConfig.target_coverage}%
- Do NOT modify source files, only create/modify test files
- Do NOT use \`any\` type assertions
- Do NOT skip or mark tests as TODO without implementation`
  })
}

// Helper functions
function determineTestPath(sourceFile, conventions) {
  if (conventions.location === 'colocated') {
    return sourceFile.replace(/\.(ts|tsx|js|jsx)$/, `.test.$1`)
  } else if (conventions.location === '__tests__') {
    const dir = sourceFile.substring(0, sourceFile.lastIndexOf('/'))
    const name = sourceFile.substring(sourceFile.lastIndexOf('/') + 1)
    return `${dir}/__tests__/${name.replace(/\.(ts|tsx|js|jsx)$/, `.test.$1`)}`
  }
  return sourceFile.replace(/\.(ts|tsx|js|jsx)$/, `.test.$1`)
}

function detectTestConventions(patterns) {
  const conventions = {
    location: 'colocated', // or '__tests__'
    importStyle: 'named', // or 'default'
    nesting: 'describe-it', // or 'test-only'
    framework: 'vitest'
  }

  for (const p of patterns) {
    if (p.path.includes('__tests__')) conventions.location = '__tests__'
    if (p.content.includes("import { describe")) conventions.nesting = 'describe-it'
    if (p.content.includes("from 'vitest'")) conventions.framework = 'vitest'
    if (p.content.includes("from '@jest'") || p.content.includes("from 'jest'")) conventions.framework = 'jest'
  }

  return conventions
}
```
### Step 3: Result Processing

```javascript
// Collect the generated/modified test files
const generatedTests = Bash(`git diff --name-only`).split('\n')
  .filter(f => /\.(test|spec)\.(ts|tsx|js|jsx)$/.test(f))

// TypeScript syntax check
const syntaxResult = Bash(`npx tsc --noEmit ${generatedTests.join(' ')} 2>&1 || true`)
const hasSyntaxErrors = syntaxResult.includes('error TS')

// Auto-fix syntax errors (at most 3 attempts)
if (hasSyntaxErrors) {
  let fixAttempt = 0
  while (fixAttempt < 3 && syntaxResult.includes('error TS')) {
    const errors = syntaxResult.split('\n').filter(l => l.includes('error TS')).slice(0, 5)
    // Try to fix each error...
    fixAttempt++
  }
}

// Update shared memory
const testInfo = {
  layer: targetLayer,
  files: generatedTests,
  count: generatedTests.length,
  syntax_clean: !hasSyntaxErrors,
  mode: mode,
  gc_fix: mode === 'gc-fix'
}

sharedMemory.generated_tests = sharedMemory.generated_tests || {}
sharedMemory.generated_tests[targetLayer] = testInfo
Write(`${sessionFolder}/.msg/meta.json`, JSON.stringify(sharedMemory, null, 2))
```
## Output Format

```
## Test Generation Results

### Layer: [L1|L2|L3]
### Mode: [direct|delegate|gc-fix]
### Files Generated: [count]
- [test file path]

### Syntax Check: PASS/FAIL
### Conventions Applied: [framework], [location], [nesting]
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No focus files in strategy | Generate L1 tests for all source files in scope |
| No existing test patterns | Use framework defaults (vitest/jest/pytest) |
| Sub-agent failure | Retry once, fall back to direct generation |
| Syntax errors persist after 3 fixes | Report errors, proceed with available tests |
| Source file not found | Skip file, log warning |
| Agent/CLI failure | Retry once, then fall back to inline execution |
| Timeout (>5 min) | Report partial results, notify coordinator |
@@ -1,169 +0,0 @@
# Generator Role

Test case generator. Generates test code according to the strategist's strategy and layers. Supports L1 unit tests, L2 integration tests, and L3 E2E tests. Follows the project's existing test patterns and framework conventions.

## Identity

- **Name**: `generator` | **Tag**: `[generator]`
- **Task Prefix**: `QAGEN-*`
- **Responsibility**: Code generation (test code generation)

## Boundaries

### MUST
- Only process `QAGEN-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[generator]` identifier
- Only communicate with the coordinator via SendMessage
- Follow the project's existing test framework and patterns
- Generated tests must be runnable
- Work strictly within the test code generation responsibility scope

### MUST NOT
- Execute work outside this role's responsibility scope
- Modify source code (only generate test code)
- Execute tests
- Communicate directly with other worker roles (must go through the coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Omit the `[generator]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `generate-tests` | [commands/generate-tests.md](commands/generate-tests.md) | Phase 3 | Layer-based test code generation |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `code-developer` | subagent | generate-tests.md | Complex test code generation |
| `gemini` | CLI | generate-tests.md | Analyze existing test patterns |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `tests_generated` | generator -> coordinator | Test generation complete | Contains generated test file list |
| `tests_revised` | generator -> coordinator | Test revision complete | After revision in the GC loop |
| `error` | generator -> coordinator | Generation failed | Blocking error |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "generator",
  type: <message-type>,
  data: { ref: <first-test-file> }
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from generator --type <message-type> --json")
```
---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `QAGEN-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

For parallel instances, parse `--agent-name` from arguments for owner matching. Falls back to `generator` for single-instance execution.

### Phase 2: Strategy & Pattern Loading

**Loading steps**:

1. Extract the session path from the task description
2. Read shared memory to get the strategy

| Input | Source | Required |
|-------|--------|----------|
| Shared memory | <session-folder>/.msg/meta.json | Yes |
| Test strategy | sharedMemory.test_strategy | Yes |
| Target layer | task description or strategy.layers[0] | Yes |

3. Determine the target layer config:

| Layer | Name | Coverage Target |
|-------|------|-----------------|
| L1 | Unit Tests | 80% |
| L2 | Integration Tests | 60% |
| L3 | E2E Tests | 40% |

4. Learn existing test patterns (find 3 similar test files)
5. Detect the test framework and configuration
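The layer defaults above can be expressed as a small lookup helper. This is an illustrative sketch; `defaultLayerConfig` is a hypothetical name, not part of the skill's API.

```javascript
// Hypothetical helper mirroring the layer table: name and coverage target per layer.
function defaultLayerConfig(level) {
  const defaults = {
    L1: { name: 'Unit Tests', target_coverage: 80 },
    L2: { name: 'Integration Tests', target_coverage: 60 },
    L3: { name: 'E2E Tests', target_coverage: 40 }
  }
  // Unknown levels fall back to L1 defaults, as the worker does for missing strategy.
  return { level, focus_files: [], ...(defaults[level] || defaults.L1) }
}

console.log(defaultLayerConfig('L2').target_coverage) // -> 60
```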
### Phase 3: Test Generation

Delegate to `commands/generate-tests.md` if available, otherwise execute inline.

**Implementation Strategy Selection**:

| Focus File Count | Complexity | Strategy |
|------------------|------------|----------|
| <= 3 files | Low | Direct: inline Edit/Write |
| 4-5 files | Medium | Single code-developer agent |
| > 5 files | High | Batch by module, one agent per batch |
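The selection table can be reduced to a small decision function. The thresholds come from the table; the function name and return labels are illustrative, not part of the skill's API.

```javascript
// Illustrative decision function for the implementation-strategy table.
function selectStrategy(focusFileCount) {
  if (focusFileCount <= 3) return 'direct'        // inline Edit/Write
  if (focusFileCount <= 5) return 'single-agent'  // one code-developer agent
  return 'batched'                                // batch by module, one agent per batch
}

console.log(selectStrategy(4)) // -> "single-agent"
```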
**Direct Generation Flow**:
1. Read the source file content
2. Determine the test file path (follow project convention)
3. Check if a test already exists -> supplement it, else create a new one
4. Generate test content based on source exports and existing patterns

**Test Content Generation**:
- Import source exports
- Create describe blocks per export
- Include happy path, edge case, and error case tests

### Phase 4: Self-Validation

**Validation Checks**:

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Syntax | TypeScript check | No errors |
| File existence | Verify all planned files exist | All files present |
| Import resolution | Check no broken imports | All imports resolve |

If validation fails -> attempt auto-fix (max 2 attempts) -> report remaining issues.
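The bounded auto-fix can be sketched as a retry loop. Assumptions: `runCheck` returns the current list of issues and `tryFix` attempts one repair pass; both are hypothetical callbacks, not real skill APIs.

```javascript
// Hypothetical bounded repair loop: at most `maxAttempts` fix passes,
// then return whatever issues remain for the report.
function autoFix(runCheck, tryFix, maxAttempts = 2) {
  let issues = runCheck()
  for (let attempt = 0; attempt < maxAttempts && issues.length > 0; attempt++) {
    tryFix(issues)
    issues = runCheck() // re-validate after each fix pass
  }
  return issues // remaining issues to report
}
```

Capping the loop (rather than retrying until clean) keeps a stubbornly broken test file from stalling the worker; leftover issues go into the Phase 5 report.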
Update shared memory with the `generated_tests` field for this layer.

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[generator]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

Message type selection: `tests_generated` for new generation, `tests_revised` for fix iterations.
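That message-type rule amounts to a one-line selector (an illustrative sketch; the function name is not part of the skill's API):

```javascript
// Pick the Phase 5 message type: GC fix iterations report `tests_revised`,
// fresh generation reports `tests_generated`.
const messageType = (mode) => mode === 'gc-fix' ? 'tests_revised' : 'tests_generated'

console.log(messageType('direct')) // -> "tests_generated"
console.log(messageType('gc-fix')) // -> "tests_revised"
```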
---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No QAGEN-* tasks available | Idle, wait for coordinator |
| Strategy not found in shared memory | Generate L1 unit tests for changed files |
| No existing test patterns found | Use framework defaults |
| Sub-agent failure | Retry once, fall back to direct generation |
| Syntax errors in generated tests | Auto-fix up to 3 attempts, report remaining |
| Source file not found | Skip file, report to coordinator |
@@ -1,216 +0,0 @@
# Command: scan

> Multi-perspective CLI fan-out scan. Analyze code in parallel from the bug, security, test-coverage, code-quality, and UX perspectives to discover potential issues.

## When to Use

- Phase 3 of Scout
- The codebase needs a multi-perspective issue scan
- Complexity is Medium or High, warranting CLI fan-out

**Trigger conditions**:
- A SCOUT-* task enters Phase 3
- Complexity is assessed as Medium/High
- Deep analysis beyond ACE search capability is needed

## Strategy

### Delegation Mode

**Mode**: CLI Fan-out
**CLI Tool**: `gemini` (primary)
**CLI Mode**: `analysis`
**Parallel Perspectives**: 2-5 (depending on complexity)

### Decision Logic

```javascript
// Complexity determines the scan strategy
if (complexity === 'Low') {
  // ACE search + Grep inline analysis (no CLI)
  mode = 'inline'
} else if (complexity === 'Medium') {
  // CLI fan-out: 3 core perspectives
  mode = 'cli-fanout'
  activePerspectives = perspectives.slice(0, 3)
} else {
  // CLI fan-out: all perspectives
  mode = 'cli-fanout'
  activePerspectives = perspectives
}
```
## Execution Steps

### Step 1: Context Preparation

```javascript
// Determine the scan scope
const projectRoot = Bash(`git rev-parse --show-toplevel 2>/dev/null || pwd`).trim()
const scanScope = task.description.match(/scope:\s*(.+)/)?.[1] || '**/*'

// Get changed files for a focused scan
const changedFiles = Bash(`git diff --name-only HEAD~5 2>/dev/null || echo ""`)
  .split('\n').filter(Boolean)

// Build the file context
const fileContext = changedFiles.length > 0
  ? changedFiles.map(f => `@${f}`).join(' ')
  : `@${scanScope}`

// Known defect patterns (from shared memory)
const knownPatternsText = knownPatterns.length > 0
  ? `\nKnown defect patterns to verify: ${knownPatterns.map(p => p.description).join('; ')}`
  : ''
```
### Step 2: Execute Strategy

```javascript
if (mode === 'inline') {
  // Quick inline scan
  const aceResults = mcp__ace-tool__search_context({
    project_root_path: projectRoot,
    query: "potential bugs, error handling issues, unchecked return values, security vulnerabilities, missing input validation"
  })

  // Parse and classify ACE results
  for (const result of aceResults) {
    classifyFinding(result)
  }
} else {
  // CLI fan-out: one CLI call per perspective
  const perspectivePrompts = {
    'bug': `PURPOSE: Discover potential bugs and logic errors
TASK: • Find unchecked return values • Identify race conditions • Check null/undefined handling • Find off-by-one errors • Detect resource leaks
MODE: analysis
CONTEXT: ${fileContext}${knownPatternsText}
EXPECTED: List of findings with severity, file:line, description, and fix suggestion
CONSTRAINTS: Focus on real bugs, avoid false positives`,

    'security': `PURPOSE: Identify security vulnerabilities and risks
TASK: • Check for injection flaws (SQL, command, XSS) • Find authentication/authorization gaps • Identify sensitive data exposure • Check input validation • Review crypto usage
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Security findings with CVSS-style severity, file:line, CWE references where applicable
CONSTRAINTS: Focus on exploitable vulnerabilities`,

    'test-coverage': `PURPOSE: Identify untested code paths and coverage gaps
TASK: • Find functions/methods without tests • Identify complex logic without assertions • Check error paths without coverage • Find boundary conditions untested
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: List of untested areas with file:line, complexity indicator, and test suggestion
CONSTRAINTS: Focus on high-risk untested code`,

    'code-quality': `PURPOSE: Detect code quality issues and anti-patterns
TASK: • Find code duplication • Identify overly complex functions • Check naming conventions • Find dead code • Detect God objects/functions
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Quality findings with severity, file:line, and improvement suggestion
CONSTRAINTS: Focus on maintainability impacts`,

    'ux': `PURPOSE: Identify UX-impacting issues in code
TASK: • Find missing loading states • Check error message quality • Identify accessibility gaps • Find inconsistent UI patterns • Check responsive handling
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: UX findings with impact level, file:line, and user-facing description
CONSTRAINTS: Focus on user-visible issues`
  }

  for (const perspective of activePerspectives) {
    const prompt = perspectivePrompts[perspective]
    if (!prompt) continue

    Bash(`ccw cli -p "${prompt}" --tool gemini --mode analysis --rule analysis-assess-security-risks`, {
      run_in_background: true
    })
  }

  // Wait for all CLI calls to finish (hook callback notification)
}
```
### Step 3: Result Processing

```javascript
// Aggregate results from all perspectives
const allFindings = { critical: [], high: [], medium: [], low: [] }

// Parse findings from CLI output
for (const perspective of activePerspectives) {
  const findings = parseCliOutput(cliResults[perspective])
  for (const finding of findings) {
    finding.perspective = perspective
    allFindings[finding.severity].push(finding)
  }
}

// Deduplicate: merge findings with the same file:line
function deduplicateFindings(findings) {
  const seen = new Set()
  const unique = []
  for (const f of findings) {
    const key = `${f.file}:${f.line}`
    if (!seen.has(key)) {
      seen.add(key)
      unique.push(f)
    } else {
      // Merge perspective info into the existing entry
      const existing = unique.find(u => `${u.file}:${u.line}` === key)
      if (existing) existing.perspectives = [...(existing.perspectives || [existing.perspective]), f.perspective]
    }
  }
  return unique
}

for (const severity of ['critical', 'high', 'medium', 'low']) {
  allFindings[severity] = deduplicateFindings(allFindings[severity])
}

// Compare against known defect patterns
for (const pattern of knownPatterns) {
  for (const severity of ['critical', 'high', 'medium', 'low']) {
    for (const finding of allFindings[severity]) {
      if (finding.file === pattern.file || finding.description.includes(pattern.type)) {
        finding.known_pattern = true
      }
    }
  }
}
```
## Output Format

```
## Scan Results

### Perspectives Scanned: [list]
### Complexity: [Low|Medium|High]

### Findings by Severity
#### Critical ([count])
- [file:line] [perspective] - [description]

#### High ([count])
- [file:line] [perspective] - [description]

#### Medium ([count])
- [file:line] - [description]

#### Low ([count])
- [file:line] - [description]

### Known Pattern Matches: [count]
### New Findings: [count]
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| CLI tool unavailable | Fall back to ACE search + Grep inline analysis |
| CLI returns empty for a perspective | Note incomplete perspective, continue others |
| Too many findings (>50) | Prioritize critical/high, summarize medium/low |
| Timeout on CLI call | Use partial results, note incomplete perspectives |
| Agent/CLI failure | Retry once, then fall back to inline execution |
| Timeout (>5 min) | Report partial results, notify coordinator |
@@ -1,242 +0,0 @@
# Role: scout

Multi-perspective issue scout. Proactively scans the codebase from the bug, security, UX, test-coverage, and code-quality perspectives to discover potential issues and create structured issues. Incorporates the multi-perspective scanning capability of issue-discover.

## Role Identity

- **Name**: `scout`
- **Task Prefix**: `SCOUT-*`
- **Responsibility**: Orchestration (multi-perspective scan orchestration)
- **Communication**: SendMessage to coordinator only
- **Output Tag**: `[scout]`

## Role Boundaries

### MUST

- Only process `SCOUT-*` prefixed tasks
- All output must carry the `[scout]` identifier
- Only communicate with the coordinator via SendMessage
- Work strictly within the issue-discovery responsibility scope

### MUST NOT

- ❌ Write or modify code
- ❌ Execute tests
- ❌ Create tasks for other roles
- ❌ Communicate directly with other workers

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `scan_ready` | scout → coordinator | Scan complete | Contains the list of discovered issues |
| `issues_found` | scout → coordinator | High-priority issues found | Key findings that need attention |
| `error` | scout → coordinator | Scan failed | Blocking error |

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `scan` | [commands/scan.md](commands/scan.md) | Phase 3 | Multi-perspective CLI fan-out scan |

### Subagent Capabilities

| Agent Type | Used By | Purpose |
|------------|---------|---------|
| `cli-explore-agent` | scan.md | Multi-angle codebase exploration |

### CLI Capabilities

| CLI Tool | Mode | Used By | Purpose |
|----------|------|---------|---------|
| `gemini` | analysis | scan.md | Multi-perspective code analysis |
## Execution (5-Phase)

### Phase 1: Task Discovery

```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith('SCOUT-') &&
  t.owner === 'scout' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (myTasks.length === 0) return // idle

const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```

### Phase 2: Context & Scope Assessment

```javascript
// Determine the scan scope
const scanScope = task.description.match(/scope:\s*(.+)/)?.[1] || '**/*'

// Get changed files (if any)
const changedFiles = Bash(`git diff --name-only HEAD~5 2>/dev/null || echo ""`)
  .split('\n').filter(Boolean)

// Read shared memory for historical defect patterns
const sessionFolder = task.description.match(/session:\s*(.+)/)?.[1] || '.'
let sharedMemory = {}
try { sharedMemory = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`)) } catch {}
const knownPatterns = sharedMemory.defect_patterns || []

// Determine the scan perspectives
const perspectives = ["bug", "security", "test-coverage", "code-quality"]
if (task.description.includes('ux')) perspectives.push("ux")

// Assess complexity (regexes keep Chinese keywords to match Chinese task descriptions)
function assessComplexity(desc) {
  let score = 0
  if (/全项目|全量|comprehensive|full/.test(desc)) score += 3
  if (/security|安全/.test(desc)) score += 1
  if (/multiple|across|cross|多模块/.test(desc)) score += 2
  return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low'
}
const complexity = assessComplexity(task.description)
```
### Phase 3: Multi-Perspective Scan

```javascript
// Read commands/scan.md for full CLI Fan-out implementation
Read("commands/scan.md")
```

**Core strategy**: run CLI analysis in parallel, one call per perspective

```javascript
if (complexity === 'Low') {
  // Quick scan directly via ACE search + Grep
  const aceResults = mcp__ace-tool__search_context({
    project_root_path: projectRoot,
    query: "potential bugs, error handling issues, unchecked return values"
  })
  // Analyze results...
} else {
  // CLI fan-out: one CLI call per perspective
  for (const perspective of perspectives) {
    Bash(`ccw cli -p "PURPOSE: Scan code from ${perspective} perspective to discover potential issues
TASK: • Analyze code patterns for ${perspective} problems • Identify anti-patterns • Check for common ${perspective} issues
MODE: analysis
CONTEXT: @${scanScope}
EXPECTED: List of findings with severity (critical/high/medium/low), file:line references, description
CONSTRAINTS: Focus on actionable findings only, no false positives" --tool gemini --mode analysis --rule analysis-assess-security-risks`, { run_in_background: true })
  }
  // Wait for all CLI calls to finish, then aggregate results
}
```
### Phase 4: Result Aggregation & Issue Creation

```javascript
// Aggregate findings from all perspectives
const allFindings = {
  critical: [],
  high: [],
  medium: [],
  low: []
}

// Deduplicate: merge findings with the same file:line
// Sort: order by severity
// Compare against known defect patterns: mark repeat findings

const discoveredIssues = allFindings.critical
  .concat(allFindings.high)
  .map((f, i) => ({
    id: `SCOUT-ISSUE-${i + 1}`,
    severity: f.severity,
    perspective: f.perspective,
    file: f.file,
    line: f.line,
    description: f.description,
    suggestion: f.suggestion
  }))

// Update shared memory
sharedMemory.discovered_issues = discoveredIssues
Write(`${sessionFolder}/.msg/meta.json`, JSON.stringify(sharedMemory, null, 2))

// Save scan results
Write(`${sessionFolder}/scan/scan-results.json`, JSON.stringify({
  scan_date: new Date().toISOString(),
  perspectives: perspectives,
  total_findings: Object.values(allFindings).flat().length,
  by_severity: {
    critical: allFindings.critical.length,
    high: allFindings.high.length,
    medium: allFindings.medium.length,
    low: allFindings.low.length
  },
  findings: allFindings,
  issues_created: discoveredIssues.length
}, null, 2))
```
### Phase 5: Report to Coordinator

```javascript
const resultSummary = `Found ${discoveredIssues.length} issues (Critical: ${allFindings.critical.length}, High: ${allFindings.high.length}, Medium: ${allFindings.medium.length}, Low: ${allFindings.low.length})`

mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: teamName,
  from: "scout",
  type: discoveredIssues.length > 0 ? "issues_found" : "scan_ready",
  data: { ref: `${sessionFolder}/scan/scan-results.json` }
})

SendMessage({
  type: "message",
  recipient: "coordinator",
  content: `## [scout] Scan Results

**Task**: ${task.subject}
**Perspectives**: ${perspectives.join(', ')}
**Status**: ${discoveredIssues.length > 0 ? 'Issues Found' : 'Clean'}

### Summary
${resultSummary}

### Top Findings
${discoveredIssues.slice(0, 5).map(i => `- **[${i.severity}]** ${i.file}:${i.line} - ${i.description}`).join('\n')}

### Scan Report
${sessionFolder}/scan/scan-results.json`,
  summary: `[scout] SCOUT complete: ${resultSummary}`
})

TaskUpdate({ taskId: task.id, status: 'completed' })

// Check for next task
const nextTasks = TaskList().filter(t =>
  t.subject.startsWith('SCOUT-') &&
  t.owner === 'scout' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (nextTasks.length > 0) {
  // Continue with next task → back to Phase 1
}
```
## Error Handling

| Scenario | Resolution |
|----------|------------|
| No SCOUT-* tasks available | Idle, wait for coordinator assignment |
| CLI tool unavailable | Fall back to ACE search + Grep inline analysis |
| Scan scope too broad | Narrow to changed files only, report partial results |
| All perspectives return empty | Report clean scan, notify coordinator |
| CLI timeout | Use partial results, note incomplete perspectives |
| Critical issue beyond scope | SendMessage issues_found to coordinator |
@@ -1,221 +0,0 @@
# Command: analyze-scope

> Change-scope analysis + test strategy definition. Analyze code changes, scout findings, and project structure to determine test layers and coverage targets.

## When to Use

- Phase 2-3 of Strategist
- The scope of code changes needs analysis
- Scout findings need to be turned into a test strategy

**Trigger conditions**:
- A QASTRAT-* task enters the execution phase
- More than 5 changed files, requiring CLI-assisted analysis
- Scout found high-priority issues

## Strategy

### Delegation Mode

**Mode**: CLI Fan-out (complex projects) / Direct (simple projects)
**CLI Tool**: `gemini` (primary)
**CLI Mode**: `analysis`

### Decision Logic

```javascript
const totalScope = changedFiles.length + discoveredIssues.length
if (totalScope <= 5) {
  // Direct inline analysis
  mode = 'direct'
} else if (totalScope <= 15) {
  // Single CLI analysis
  mode = 'single-cli'
} else {
  // Multi-dimensional CLI analysis
  mode = 'multi-cli'
}
```
|
||||
|
||||
## Execution Steps
|
||||
|
||||
### Step 1: Context Preparation
|
||||
|
||||
```javascript
|
||||
// 从 shared memory 获取 scout 发现
|
||||
const discoveredIssues = sharedMemory.discovered_issues || []
|
||||
|
||||
// 分析 git diff 获取变更范围
|
||||
const changedFiles = Bash(`git diff --name-only HEAD~5 2>/dev/null || git diff --name-only --cached 2>/dev/null || echo ""`)
|
||||
.split('\n').filter(Boolean)
|
||||
|
||||
// 分类变更文件
|
||||
const fileCategories = {
|
||||
source: changedFiles.filter(f => /\.(ts|tsx|js|jsx|py|java|go|rs)$/.test(f)),
|
||||
test: changedFiles.filter(f => /\.(test|spec)\.(ts|tsx|js|jsx)$/.test(f) || /test_/.test(f)),
|
||||
config: changedFiles.filter(f => /\.(json|yaml|yml|toml|env)$/.test(f)),
|
||||
style: changedFiles.filter(f => /\.(css|scss|less)$/.test(f)),
|
||||
docs: changedFiles.filter(f => /\.(md|txt|rst)$/.test(f))
|
||||
}
|
||||
|
||||
// 检测项目测试框架
|
||||
const packageJson = Read('package.json')
|
||||
const testFramework = detectFramework(packageJson)
|
||||
|
||||
// 获取已有测试覆盖率基线
|
||||
let baselineCoverage = null
|
||||
try {
|
||||
const coverageSummary = JSON.parse(Read('coverage/coverage-summary.json'))
|
||||
baselineCoverage = coverageSummary.total?.lines?.pct || null
|
||||
} catch {}
|
||||
```
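
`detectFramework` is referenced above but not defined in this command file. A minimal sketch under stated assumptions: the helper name and detection order are illustrative, and it only inspects `package.json` dependencies (the documented pytest fallback for Python projects is left to the caller):

```javascript
// Minimal sketch of the detectFramework helper used in Step 1.
// Takes the raw package.json text; returns 'unknown' when no known
// framework is listed, so callers can apply the documented defaults.
function detectFramework(packageJsonText) {
  let pkg
  try { pkg = JSON.parse(packageJsonText) } catch { return 'unknown' }
  const deps = { ...pkg.dependencies, ...pkg.devDependencies }
  for (const name of ['vitest', 'jest', 'mocha', 'ava']) {
    if (deps[name]) return name
  }
  return 'unknown'
}
```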

### Step 2: Execute Strategy

```javascript
if (mode === 'direct') {
  // Inline analysis: build the strategy directly
  buildStrategyDirect(fileCategories, discoveredIssues, testFramework)
} else if (mode === 'single-cli') {
  // Single comprehensive CLI analysis
  Bash(`ccw cli -p "PURPOSE: Analyze code changes and scout findings to determine optimal test strategy
TASK: • Classify ${changedFiles.length} changed files by risk level • Map ${discoveredIssues.length} scout issues to test requirements • Identify integration points between changed modules • Recommend test layers (L1/L2/L3) with coverage targets
MODE: analysis
CONTEXT: @${changedFiles.slice(0, 20).join(' @')} | Memory: Scout found ${discoveredIssues.length} issues, baseline coverage ${baselineCoverage || 'unknown'}%
EXPECTED: JSON with layers array, each containing level, name, target_coverage, focus_files, rationale
CONSTRAINTS: Be conservative with L3 E2E tests | Focus L1 on changed source files" --tool gemini --mode analysis --rule analysis-analyze-code-patterns`, {
    run_in_background: true
  })
  // Wait for CLI completion
} else {
  // Multi-dimensional analysis
  // Dimension 1: change risk analysis
  Bash(`ccw cli -p "PURPOSE: Assess risk level of code changes
TASK: • Classify each file by change risk (high/medium/low) • Identify files touching critical paths • Map dependency chains
MODE: analysis
CONTEXT: @${fileCategories.source.join(' @')}
EXPECTED: Risk matrix with file:risk_level mapping
CONSTRAINTS: Focus on source files only" --tool gemini --mode analysis`, {
    run_in_background: true
  })

  // Dimension 2: test coverage gap analysis
  Bash(`ccw cli -p "PURPOSE: Identify test coverage gaps for changed code
TASK: • Find changed functions without tests • Map test files to source files • Identify missing integration test scenarios
MODE: analysis
CONTEXT: @${[...fileCategories.source, ...fileCategories.test].join(' @')}
EXPECTED: Coverage gap report with untested functions and modules
CONSTRAINTS: Compare existing tests to changed code" --tool gemini --mode analysis`, {
    run_in_background: true
  })
  // Wait for all CLI calls to complete
}
```

### Step 3: Result Processing

```javascript
// Build the test strategy
const strategy = {
  scope: {
    total_changed: changedFiles.length,
    source_files: fileCategories.source.length,
    test_files: fileCategories.test.length,
    issue_count: discoveredIssues.length,
    baseline_coverage: baselineCoverage
  },
  test_framework: testFramework,
  layers: [],
  coverage_targets: {}
}

// Layer selection algorithm
// L1: Unit Tests - every file with source changes
if (fileCategories.source.length > 0 || discoveredIssues.length > 0) {
  const l1Files = fileCategories.source.length > 0
    ? fileCategories.source
    : [...new Set(discoveredIssues.map(i => i.file))]

  strategy.layers.push({
    level: 'L1',
    name: 'Unit Tests',
    target_coverage: 80,
    focus_files: l1Files,
    rationale: fileCategories.source.length > 0
      ? 'All changed source files need unit test coverage'
      : 'Issues discovered by scout need test coverage'
  })
}

// L2: Integration Tests - multi-module changes or critical issues
if (fileCategories.source.length >= 3 || discoveredIssues.some(i => i.severity === 'critical')) {
  const integrationPoints = fileCategories.source
    .filter(f => /service|controller|handler|middleware|route|api/.test(f))

  if (integrationPoints.length > 0) {
    strategy.layers.push({
      level: 'L2',
      name: 'Integration Tests',
      target_coverage: 60,
      focus_areas: integrationPoints,
      rationale: 'Multi-file changes involve cross-module interaction and need integration tests'
    })
  }
}

// L3: E2E Tests - many high-priority issues
const criticalHighCount = discoveredIssues
  .filter(i => i.severity === 'critical' || i.severity === 'high').length
if (criticalHighCount >= 3) {
  strategy.layers.push({
    level: 'L3',
    name: 'E2E Tests',
    target_coverage: 40,
    focus_flows: [...new Set(discoveredIssues
      .filter(i => i.severity === 'critical' || i.severity === 'high')
      .map(i => i.file.split('/')[1] || 'main'))],
    rationale: `${criticalHighCount} high-priority issues need end-to-end verification`
  })
}

// Record coverage targets per layer
for (const layer of strategy.layers) {
  strategy.coverage_targets[layer.level] = layer.target_coverage
}
```

## Output Format

```
## Test Strategy

### Scope Analysis
- Changed files: [count]
- Source files: [count]
- Scout issues: [count]
- Baseline coverage: [percent]%

### Test Layers
#### L1: Unit Tests
- Coverage target: 80%
- Focus files: [list]

#### L2: Integration Tests (if applicable)
- Coverage target: 60%
- Focus areas: [list]

#### L3: E2E Tests (if applicable)
- Coverage target: 40%
- Focus flows: [list]
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No changed files | Use scout issues as scope |
| No scout issues | Generate L1 tests for all source files |
| Test framework unknown | Default to Jest/Vitest (JS/TS) or pytest (Python) |
| CLI analysis returns unusable results | Fall back to heuristic-based strategy |
| Agent/CLI failure | Retry once, then fall back to inline execution |
| Timeout (>5 min) | Report partial results, notify coordinator |
@@ -1,163 +0,0 @@
# Strategist Role

Test strategist. Analyzes change scope, determines test layers (L1-L3), defines coverage targets, and generates the test strategy document. Creates targeted test plans based on scout discoveries and code changes.

## Identity

- **Name**: `strategist` | **Tag**: `[strategist]`
- **Task Prefix**: `QASTRAT-*`
- **Responsibility**: Orchestration (strategy formulation)

## Boundaries

### MUST
- Only process `QASTRAT-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[strategist]` identifier
- Only communicate with the coordinator, via SendMessage
- Work strictly within the strategy-formulation responsibility scope

### MUST NOT
- Execute work outside this role's responsibility scope
- Write test code
- Execute tests
- Communicate directly with other worker roles (must go through the coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify source code
- Omit the `[strategist]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `analyze-scope` | [commands/analyze-scope.md](commands/analyze-scope.md) | Phase 2-3 | Change scope analysis + strategy formulation |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `cli-explore-agent` | subagent | analyze-scope.md | Code structure and dependency analysis |
| `gemini` | CLI | analyze-scope.md | Test strategy analysis |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `strategy_ready` | strategist -> coordinator | Strategy complete | Contains layer selection and coverage targets |
| `error` | strategist -> coordinator | Strategy failed | Blocking error |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "strategist",
  type: <message-type>,
  data: { ref: <artifact-path> }
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from strategist --type <message-type> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `QASTRAT-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context & Change Analysis

**Loading steps**:

1. Extract session path from task description
2. Read shared memory to get scout discoveries

| Input | Source | Required |
|-------|--------|----------|
| Shared memory | <session-folder>/.msg/meta.json | Yes |
| Discovered issues | sharedMemory.discovered_issues | No |
| Defect patterns | sharedMemory.defect_patterns | No |

3. Analyze change scope:

```
Bash("git diff --name-only HEAD~5 2>/dev/null || git diff --name-only --cached 2>/dev/null || echo \"\"")
```

4. Categorize changed files:

| Category | Pattern |
|----------|---------|
| Source | `/\.(ts|tsx|js|jsx|py|java|go|rs)$/` |
| Test | `/\.(test|spec)\.(ts|tsx|js|jsx)$/` or `/test_/` |
| Config | `/\.(json|yaml|yml|toml|env)$/` |
| Style | `/\.(css|scss|less)$/` |

5. Detect the test framework from project files
6. Check existing coverage data if available

### Phase 3: Strategy Generation

**Layer Selection Logic**:

| Condition | Layer | Coverage Target |
|-----------|-------|-----------------|
| Has source file changes | L1: Unit Tests | 80% |
| >= 3 source files OR critical issues found | L2: Integration Tests | 60% |
| >= 3 critical/high severity issues | L3: E2E Tests | 40% |
| No changes but has scout issues | L1 focused on issue files | 80% |

**Strategy Document Structure**:
- Scope Analysis: changed files count, source files, scout issues, test framework
- Test Layers: level, name, coverage target, focus files/areas, rationale
- Priority Issues: top 10 issues from scout

Write the strategy document to `<session-folder>/strategy/test-strategy.md`.

Update shared memory with the `test_strategy` field.

### Phase 4: Strategy Validation

**Validation Checks**:

| Check | Criteria |
|-------|----------|
| has_layers | strategy.layers.length > 0 |
| has_targets | All layers have target_coverage > 0 |
| covers_issues | Discovered issues covered by focus_files |
| framework_detected | testFramework !== 'unknown' |
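
The validation table can be read as four boolean predicates over the strategy object. A minimal sketch; `validateStrategy` is a hypothetical helper name, not something the role spec defines:

```javascript
// Illustrative check of the Phase 4 validation criteria.
// strategy follows the shape built in analyze-scope Step 3.
function validateStrategy(strategy, discoveredIssues) {
  const focus = new Set(strategy.layers.flatMap(l => l.focus_files || []))
  return {
    has_layers: strategy.layers.length > 0,
    has_targets: strategy.layers.every(l => l.target_coverage > 0),
    covers_issues: discoveredIssues.every(i => focus.has(i.file)),
    framework_detected: strategy.test_framework !== 'unknown'
  }
}
```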

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[strategist]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No QASTRAT-* tasks available | Idle, wait for coordinator |
| No changed files detected | Use scout issues as scope, or scan full project |
| Test framework unknown | Default to Jest/Vitest for JS/TS, pytest for Python |
| Shared memory not found | Create with defaults, proceed |
| Critical issue beyond scope | SendMessage error to coordinator |
@@ -1,7 +1,7 @@
---
name: team-review
description: "Unified team skill for code scanning, vulnerability review, optimization suggestions, and automated fix. 4-role team: coordinator, scanner, reviewer, fixer. Triggers on team-review."
allowed-tools: Task, AskUserQuestion, TaskCreate, TaskUpdate, TaskList, TaskGet, Read, Write, Edit, Bash, Glob, Grep, Skill, mcp__ace-tool__search_context
description: "Unified team skill for code review. Uses team-worker agent architecture with role-spec files. 3-role pipeline: scanner, reviewer, fixer. Triggers on team-review."
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*)
---

# Team Review

@@ -11,23 +11,23 @@ Unified team skill: code scanning, vulnerability review, optimization suggestion
## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│ Skill(skill="team-review")                                  │
│   args="<target>" or args="--role=xxx"                      │
└──────────────────────────┬──────────────────────────────────┘
                           │ Role Router
                 ┌──── --role present? ────┐
                 │ NO                      │ YES
                 ↓                         ↓
        Orchestration Mode           Role Dispatch
       (auto → coordinator)        (route to role.md)
                                           │
                ┌────┴────┬───────────┬───────────┐
                ↓         ↓           ↓           ↓
           ┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
           │ coord  │ │scanner │ │reviewer│ │ fixer  │
           │ (RC-*) │ │(SCAN-*)│ │(REV-*) │ │(FIX-*) │
           └────────┘ └────────┘ └────────┘ └────────┘
+---------------------------------------------------+
| Skill(skill="team-review")                        |
|   args="<task-description>"                       |
+-------------------+-------------------------------+
                    |
     Orchestration Mode (auto -> coordinator)
                    |
           Coordinator (inline)
          Phase 0-5 orchestration
                    |
          +-------+-------+-------+
          v       v       v
        [tw]    [tw]    [tw]
      scanner reviewer fixer

(tw) = team-worker agent
```

## Role Router

@@ -38,12 +38,12 @@ Parse `$ARGUMENTS` to extract `--role`. If absent → Orchestration Mode (auto r

### Role Registry

| Role | File | Task Prefix | Type | Compact |
|------|------|-------------|------|---------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | RC-* | orchestrator | **⚠️ Must re-read after compaction** |
| scanner | [roles/scanner/role.md](roles/scanner/role.md) | SCAN-* | read-only-analysis | Must re-read after compaction |
| reviewer | [roles/reviewer/role.md](roles/reviewer/role.md) | REV-* | read-only-analysis | Must re-read after compaction |
| fixer | [roles/fixer/role.md](roles/fixer/role.md) | FIX-* | code-generation | Must re-read after compaction |
| Role | Spec | Task Prefix | Inner Loop |
|------|------|-------------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | - |
| scanner | [role-specs/scanner.md](role-specs/scanner.md) | SCAN-* | false |
| reviewer | [role-specs/reviewer.md](role-specs/reviewer.md) | REV-* | false |
| fixer | [role-specs/fixer.md](role-specs/fixer.md) | FIX-* | true |

> **⚠️ COMPACT PROTECTION**: Role files are execution documents, not reference material. When context compression occurs and only a summary of the role instructions remains, you **must immediately `Read` the corresponding role.md to reload it before continuing execution**. Do not execute any Phase based on a summary.

@@ -204,9 +204,12 @@ Cross-task knowledge accumulation. Coordinator creates `wisdom/` directory at se

Coordinator additional restrictions: Do not write/modify code directly, do not call implementation subagents, do not execute analysis/test/review directly.

| Component | Location |
|-----------|----------|
| Session directory | `.workflow/.team-review/<workflow_id>/` |

### Team Configuration

| Setting | Value |
|---------|-------|
| Team name | review |
| Session directory | `.workflow/.team/RV-<slug>-<date>/` |
| Shared memory | `.msg/meta.json` in session dir |
| Team config | `specs/team-config.json` |
| Finding schema | `specs/finding-schema.json` |

@@ -216,14 +219,35 @@ Coordinator additional restrictions: Do not write/modify code directly, do not c

## Coordinator Spawn Template

When coordinator spawns workers, use Skill invocation:

### v5 Worker Spawn (all roles)

When coordinator spawns workers, use the `team-worker` agent with a role-spec path:

```
Skill(skill="team-review", args="--role=scanner <target> <flags>")
Skill(skill="team-review", args="--role=reviewer --input <scan-output> <flags>")
Skill(skill="team-review", args="--role=fixer --input <fix-manifest> <flags>")
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker",
  team_name: "review",
  name: "<role>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-review/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: review
requirement: <task-description>
inner_loop: <true|false>

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`
})
```

**Inner Loop roles** (fixer): Set `inner_loop: true`.

**Single-task roles** (scanner, reviewer): Set `inner_loop: false`.

## Usage

```bash
@@ -246,6 +270,55 @@ Skill(skill="team-review", args="--role=fixer --input fix-manifest.json")
--fix  # fix mode only
```

---

## Completion Action

When the pipeline completes (all tasks done, coordinator Phase 5):

```
AskUserQuestion({
  questions: [{
    question: "Review pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
      { label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
      { label: "Export Results", description: "Export deliverables to a specified location, then clean" }
    ]
  }]
})
```

| Choice | Action |
|--------|--------|
| Archive & Clean | Update session status="completed" -> TeamDelete(review) -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions: `Skill(skill="team-review", args="resume")` |
| Export Results | AskUserQuestion for target path -> copy deliverables -> Archive & Clean |

---

## Session Directory

```
.workflow/.team/RV-<slug>-<YYYY-MM-DD>/
├── .msg/
│   ├── messages.jsonl       # Message bus log
│   └── meta.json            # Session state + cross-role state
├── wisdom/                  # Cross-task knowledge
│   ├── learnings.md
│   ├── decisions.md
│   ├── conventions.md
│   └── issues.md
├── scan/                    # Scanner output
│   └── scan-results.json
├── review/                  # Reviewer output
│   └── review-report.json
└── fix/                     # Fixer output
    └── fix-manifest.json
```

## Error Handling

| Scenario | Resolution |
@@ -1,212 +1,284 @@
# Command: monitor
# Command: Monitor

> Stop-Wait stage execution. Spawns each worker via Skill(), blocks until return, drives transitions.

Handle all coordinator monitoring events for the review pipeline using the async Spawn-and-Stop pattern. One operation per invocation, then STOP and wait for the next callback.

## When to Use
## Constants

- Phase 4 of Coordinator, after dispatch complete

| Key | Value | Description |
|-----|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Coordinator does one operation, then STOPS |
| WORKER_AGENT | team-worker | All workers spawned as team-worker agents |

## Strategy
### Role-Worker Map

**Mode**: Stop-Wait (synchronous Skill call, not polling)

| Prefix | Role | Role Spec | inner_loop |
|--------|------|-----------|------------|
| SCAN | scanner | `.claude/skills/team-review/role-specs/scanner.md` | false |
| REV | reviewer | `.claude/skills/team-review/role-specs/reviewer.md` | false |
| FIX | fixer | `.claude/skills/team-review/role-specs/fixer.md` | false |

> **No polling. The synchronous Skill() call IS the wait mechanism.**
>
> - FORBIDDEN: `while` + `sleep` + check status
> - REQUIRED: `Skill()` blocking call = worker return = stage done

### Pipeline Modes
### Stage-Worker Map

| Mode | Stages |
|------|--------|
| scan-only | SCAN-001 |
| default | SCAN-001 -> REV-001 |
| full | SCAN-001 -> REV-001 -> FIX-001 |
| fix-only | FIX-001 |
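
The mode-to-stage mapping above is a simple lookup. A sketch for illustration; `PIPELINE_MODES` and `stagesForMode` are hypothetical names, not part of the skill file:

```javascript
// Ordered stage lists per pipeline mode, mirroring the table above.
const PIPELINE_MODES = {
  'scan-only': ['SCAN-001'],
  'default': ['SCAN-001', 'REV-001'],
  'full': ['SCAN-001', 'REV-001', 'FIX-001'],
  'fix-only': ['FIX-001']
}

function stagesForMode(mode) {
  const stages = PIPELINE_MODES[mode]
  if (!stages) throw new Error(`Unknown pipeline mode: ${mode}`)
  return stages
}
```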
```javascript
const STAGE_WORKER_MAP = {
  'SCAN': { role: 'scanner', skillArgs: '--role=scanner' },
  'REV': { role: 'reviewer', skillArgs: '--role=reviewer' },
  'FIX': { role: 'fixer', skillArgs: '--role=fixer' }
}
```

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/.msg/meta.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |
| Pipeline mode | session.pipeline_mode | Yes |

```
Load session state:
1. Read <session-folder>/.msg/meta.json -> session
2. TaskList() -> allTasks
3. Extract pipeline_mode from session
4. Extract active_workers[] from session (default: [])
5. Parse $ARGUMENTS to determine trigger event
6. autoYes = /\b(-y|--yes)\b/.test(args)
```
## Execution Steps
## Phase 3: Event Handlers

### Step 1: Context Preparation
### Wake-up Source Detection

```javascript
const sharedMemory = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`))

// Get pipeline tasks in creation order (= dependency order)
const allTasks = TaskList()
const pipelineTasks = allTasks
  .filter(t => t.owner && t.owner !== 'coordinator')
  .sort((a, b) => Number(a.id) - Number(b.id))

// Auto mode detection
const autoYes = /\b(-y|--yes)\b/.test(args)
```

Parse `$ARGUMENTS` to determine handler:

| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[scanner]`, `[reviewer]`, or `[fixer]` | handleCallback |
| 2 | Contains "check" or "status" | handleCheck |
| 3 | Contains "resume", "continue", or "next" | handleResume |
| 4 | Pipeline detected as complete (no pending, no in_progress) | handleComplete |
| 5 | None of the above (initial spawn after dispatch) | handleSpawnNext |
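
The priority table above amounts to a first-match dispatcher. A sketch under stated assumptions: `detectHandler` is a hypothetical name, and the skill describes these rules in prose rather than code:

```javascript
// Illustrative first-match dispatcher for the wake-up priority table.
function detectHandler(message, pipelineComplete) {
  if (/\[(scanner|reviewer|fixer)\]/.test(message)) return 'handleCallback'
  if (/\b(check|status)\b/.test(message)) return 'handleCheck'
  if (/\b(resume|continue|next)\b/.test(message)) return 'handleResume'
  if (pipelineComplete) return 'handleComplete'
  return 'handleSpawnNext'
}
```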
---

### Handler: handleCallback

Worker completed a task. Verify completion, check pipeline conditions, advance.

```
Receive callback from [<role>]
+- Find matching active worker by role tag
+- Task status = completed?
|  +- YES -> remove from active_workers -> update session
|  |  +- role = scanner?
|  |  |  +- Read session.findings_count from meta.json
|  |  |  +- findings_count === 0?
|  |  |  |  +- YES -> Skip remaining stages:
|  |  |  |  |    Delete all REV-* and FIX-* tasks (TaskUpdate status='deleted')
|  |  |  |  |    Log: "0 findings, skipping review/fix stages"
|  |  |  |  |    -> handleComplete
|  |  |  |  +- NO -> normal advance
|  |  |  +- -> handleSpawnNext
|  |  +- role = reviewer?
|  |  |  +- pipeline_mode === 'full'?
|  |  |  |  +- YES -> Need fix confirmation gate
|  |  |  |  |  +- autoYes?
|  |  |  |  |  |  +- YES -> Set fix_scope='all' in meta.json
|  |  |  |  |  |  +- Write fix-manifest.json
|  |  |  |  |  |  +- -> handleSpawnNext
|  |  |  |  |  +- NO -> AskUserQuestion:
|  |  |  |  |       question: "<N> findings reviewed. Proceed with fix?"
|  |  |  |  |       header: "Fix Confirmation"
|  |  |  |  |       options:
|  |  |  |  |         - "Fix all": Set fix_scope='all'
|  |  |  |  |         - "Fix critical/high only": Set fix_scope='critical,high'
|  |  |  |  |         - "Skip fix": Delete FIX-* tasks -> handleComplete
|  |  |  |  |       +- Write fix_scope to meta.json
|  |  |  |  |       +- Write fix-manifest.json:
|  |  |  |  |            { source: "<session>/review/review-report.json",
|  |  |  |  |              scope: fix_scope, session: sessionFolder }
|  |  |  |  |       +- -> handleSpawnNext
|  |  |  |  +- NO -> normal advance -> handleSpawnNext
|  |  +- role = fixer?
|  |     +- -> handleSpawnNext (checks for completion naturally)
|  +- NO -> progress message, do not advance -> STOP
+- No matching worker found
   +- Scan all active workers for completed tasks
   +- Found completed -> process each -> handleSpawnNext
   +- None completed -> STOP
```
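
The scanner branch above short-circuits the pipeline when the scan comes back clean. A minimal sketch of that rule; `afterScannerCallback` and `deleteTask` are illustrative stand-ins for the prose steps (TaskUpdate status='deleted'), not names the skill defines:

```javascript
// Sketch of the scanner zero-findings short-circuit in handleCallback.
function afterScannerCallback(findingsCount, tasks, deleteTask) {
  if (findingsCount === 0) {
    // 0 findings: delete downstream review/fix stages, then complete
    tasks.filter(t => /^(REV|FIX)-/.test(t.subject))
         .forEach(t => deleteTask(t.id))
    return 'handleComplete'
  }
  return 'handleSpawnNext'
}
```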
### Step 2: Sequential Stage Execution (Stop-Wait)
|
||||
---
|
||||
|
||||
> **Core**: Spawn one worker per stage, block until return.
|
||||
> Worker return = stage complete. No sleep, no polling.
|
||||
### Handler: handleSpawnNext
|
||||
|
||||
```javascript
|
||||
for (const stageTask of pipelineTasks) {
|
||||
// 1. Extract stage prefix -> determine worker role
|
||||
const stagePrefix = stageTask.subject.match(/^(\w+)-/)?.[1]
|
||||
const workerConfig = STAGE_WORKER_MAP[stagePrefix]
|
||||
Find all ready tasks, spawn one team-worker agent in background, update session, STOP.
|
||||
|
||||
if (!workerConfig) {
|
||||
mcp__ccw-tools__team_msg({
|
||||
operation: "log", session_id: sessionId, from: "coordinator",
|
||||
type: "error",
|
||||
})
|
||||
continue
|
||||
}
|
||||
```
|
||||
Collect task states from TaskList()
|
||||
+- completedSubjects: status = completed
|
||||
+- inProgressSubjects: status = in_progress
|
||||
+- deletedSubjects: status = deleted
|
||||
+- readySubjects: status = pending
|
||||
AND (no blockedBy OR all blockedBy in completedSubjects)
|
||||
|
||||
// 2. Mark task in progress
|
||||
TaskUpdate({ taskId: stageTask.id, status: 'in_progress' })
|
||||
|
||||
mcp__ccw-tools__team_msg({
|
||||
operation: "log", session_id: sessionId, from: "coordinator",
|
||||
to: workerConfig.role, type: "stage_transition",
|
||||
})
|
||||
|
||||
// 3. Build worker arguments
|
||||
const workerArgs = buildWorkerArgs(stageTask, workerConfig)
|
||||
|
||||
// 4. Spawn worker via Skill — blocks until return (Stop-Wait core)
|
||||
Skill(skill="team-review", args=workerArgs)
|
||||
|
||||
// 5. Worker returned — check result
|
||||
const taskState = TaskGet({ taskId: stageTask.id })
|
||||
|
||||
if (taskState.status !== 'completed') {
|
||||
const action = handleStageFailure(stageTask, taskState, workerConfig, autoYes)
|
||||
if (action === 'abort') break
|
||||
if (action === 'skip') continue
|
||||
} else {
|
||||
mcp__ccw-tools__team_msg({
|
||||
operation: "log", session_id: sessionId, from: "coordinator",
|
||||
type: "stage_transition",
|
||||
})
|
||||
}
|
||||
|
||||
// 6. Post-stage: After SCAN check findings
|
||||
if (stagePrefix === 'SCAN') {
|
||||
const mem = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`))
|
||||
if ((mem.findings_count || 0) === 0) {
|
||||
mcp__ccw-tools__team_msg({ operation: "log", session_id: sessionId, from: "coordinator",
|
||||
type: "pipeline_complete",
|
||||
for (const r of pipelineTasks.slice(pipelineTasks.indexOf(stageTask) + 1))
|
||||
TaskUpdate({ taskId: r.id, status: 'deleted' })
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
// 7. Post-stage: After REV confirm fix scope
|
||||
if (stagePrefix === 'REV' && pipelineMode === 'full') {
|
||||
const mem = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`))
|
||||
|
||||
if (!autoYes) {
|
||||
const conf = AskUserQuestion({ questions: [{
|
||||
question: `${mem.findings_count || 0} findings reviewed. Proceed with fix?`,
|
||||
header: "Fix Confirmation", multiSelect: false,
|
||||
options: [
|
||||
{ label: "Fix all", description: "All actionable findings" },
|
||||
{ label: "Fix critical/high only", description: "Severity filter" },
|
||||
{ label: "Skip fix", description: "No code changes" }
|
||||
]
|
||||
}] })
|
||||
|
||||
if (conf["Fix Confirmation"] === "Skip fix") {
|
||||
pipelineTasks.filter(t => t.subject.startsWith('FIX-'))
|
||||
.forEach(ft => TaskUpdate({ taskId: ft.id, status: 'deleted' }))
|
||||
break
|
||||
}
|
||||
mem.fix_scope = conf["Fix Confirmation"] === "Fix critical/high only" ? 'critical,high' : 'all'
|
||||
Write(`${sessionFolder}/.msg/meta.json`, JSON.stringify(mem, null, 2))
|
||||
}
|
||||
|
||||
Write(`${sessionFolder}/fix/fix-manifest.json`, JSON.stringify({
|
||||
source: `${sessionFolder}/review/review-report.json`,
|
||||
scope: mem.fix_scope || 'all', session: sessionFolder
|
||||
}, null, 2))
|
||||
}
|
||||
}
|
||||
Ready tasks found?
|
||||
+- NONE + work in progress -> report waiting -> STOP
|
||||
+- NONE + nothing in progress -> PIPELINE_COMPLETE -> handleComplete
|
||||
+- HAS ready tasks -> take first ready task:
|
||||
+- Determine role from prefix:
|
||||
| SCAN-* -> scanner
|
||||
| REV-* -> reviewer
|
||||
| FIX-* -> fixer
|
||||
+- TaskUpdate -> in_progress
|
||||
+- team_msg log -> task_unblocked (team_session_id=<session-id>)
|
||||
+- Spawn team-worker (see spawn call below)
|
||||
+- Add to session.active_workers
|
||||
+- Update session file
|
||||
+- Output: "[coordinator] Spawned <role> for <subject>"
|
||||
+- STOP
|
||||
```

### Step 2.1: Worker Argument Builder

```javascript
function buildWorkerArgs(stageTask, workerConfig) {
  const stagePrefix = stageTask.subject.match(/^(\w+)-/)?.[1]
  let workerArgs = `${workerConfig.skillArgs} --session ${sessionFolder}`

  if (stagePrefix === 'SCAN') {
    workerArgs += ` ${target} --dimensions ${dimensions.join(',')}`
    if (stageTask.description?.includes('quick: true')) workerArgs += ' -q'
  } else if (stagePrefix === 'REV') {
    workerArgs += ` --input ${sessionFolder}/scan/scan-results.json --dimensions ${dimensions.join(',')}`
  } else if (stagePrefix === 'FIX') {
    workerArgs += ` --input ${sessionFolder}/fix/fix-manifest.json`
  }

  if (autoYes) workerArgs += ' -y'
  return workerArgs
}
```

**Spawn worker tool call**:

```javascript
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker for <subject>",
  team_name: "review",
  name: "<role>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-review/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: review
requirement: <task-description>
inner_loop: false

## Current Task
- Task ID: <task-id>
- Task: <subject>

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`
})
```

---

### Step 2.2: Stage Failure Handler

```javascript
function handleStageFailure(stageTask, taskState, workerConfig, autoYes) {
  if (autoYes) {
    mcp__ccw-tools__team_msg({ operation: "log", session_id: sessionId, from: "coordinator",
      type: "error" })
    TaskUpdate({ taskId: stageTask.id, status: 'deleted' })
    return 'skip'
  }

  const decision = AskUserQuestion({ questions: [{
    question: `Stage "${stageTask.subject}" incomplete (${taskState.status}). Action?`,
    header: "Stage Failure", multiSelect: false,
    options: [
      { label: "Retry", description: "Re-spawn worker" },
      { label: "Skip", description: "Continue pipeline" },
      { label: "Abort", description: "Stop pipeline" }
    ]
  }] })

  const answer = decision["Stage Failure"]
  if (answer === "Retry") {
    TaskUpdate({ taskId: stageTask.id, status: 'in_progress' })
    Skill(skill="team-review", args=buildWorkerArgs(stageTask, workerConfig))
    if (TaskGet({ taskId: stageTask.id }).status !== 'completed')
      TaskUpdate({ taskId: stageTask.id, status: 'deleted' })
    return 'retried'
  } else if (answer === "Skip") {
    TaskUpdate({ taskId: stageTask.id, status: 'deleted' })
    return 'skip'
  } else {
    mcp__ccw-tools__team_msg({ operation: "log", session_id: sessionId, from: "coordinator",
      type: "error" })
    return 'abort'
  }
}
```

---

### Handler: handleCheck

Read-only status report. No pipeline advancement.

**Output format**:

```
[coordinator] Review Pipeline Status
[coordinator] Mode: <pipeline_mode>
[coordinator] Progress: <completed>/<total> (<percent>%)

[coordinator] Pipeline Graph:
  SCAN-001: <status-icon> <summary>
  REV-001:  <status-icon> <summary>
  FIX-001:  <status-icon> <summary>

done=completed >>>=running o=pending x=deleted .=not created

[coordinator] Active Workers:
  > <subject> (<role>) - running

[coordinator] Ready to spawn: <subjects>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```
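
As a rough sketch, the progress line in this report could be derived from a task-list snapshot like so (`progressLine` is a hypothetical helper, not part of the skill):

```javascript
// Derive the "[coordinator] Progress: X/Y (Z%)" line from task states.
function progressLine(tasks) {
  const total = tasks.length
  const completed = tasks.filter(t => t.status === 'completed').length
  const percent = total === 0 ? 0 : Math.round((completed / total) * 100)
  return `[coordinator] Progress: ${completed}/${total} (${percent}%)`
}
```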

### Step 3: Finalize

```javascript
const finalMemory = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`))
finalMemory.pipeline_status = 'complete'
finalMemory.completed_at = new Date().toISOString()
Write(`${sessionFolder}/.msg/meta.json`, JSON.stringify(finalMemory, null, 2))
```

Then STOP.

---

### Handler: handleResume

Check active worker completion, process results, advance pipeline.

```
Load active_workers from session
+- No active workers -> handleSpawnNext
+- Has active workers -> check each:
   +- status = completed -> mark done, remove from active_workers, log
   +- status = in_progress -> still running, log
   +- other status -> worker failure -> reset to pending
After processing:
+- Some completed -> handleSpawnNext
+- All still running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```
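
The per-worker branch above amounts to a small status dispatch; a hedged sketch (`resumeAction` is an illustrative name, not the skill's actual code):

```javascript
// Classify one active worker per the decision tree above.
function resumeAction(workerStatus) {
  if (workerStatus === 'completed') return 'mark_done'
  if (workerStatus === 'in_progress') return 'still_running'
  return 'reset_to_pending' // any other status is treated as a failure
}
```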

---

### Handler: handleComplete

Pipeline complete. Generate summary and finalize session.

```
All tasks completed or deleted (no pending, no in_progress)
+- Read final session state from meta.json
+- Generate pipeline summary:
|  - Pipeline mode
|  - Findings count
|  - Stages completed
|  - Fix results (if applicable)
|  - Deliverable paths
|
+- Update session:
|  session.pipeline_status = 'complete'
|  session.completed_at = <timestamp>
|  Write meta.json
|
+- team_msg log -> pipeline_complete
+- Output summary to user
+- STOP
```

---

### Worker Failure Handling

When a worker has unexpected status (not completed, not in_progress):

1. Reset task -> pending via TaskUpdate
2. Remove from active_workers
3. Log via team_msg (type: error)
4. Report to user: task reset, will retry on next resume

## Phase 4: State Persistence

After every handler action, before STOP:

| Check | Action |
|-------|--------|
| Session state consistent | active_workers matches TaskList in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| Meta.json updated | Write updated session state |
| Completion detection | readySubjects=0 + inProgressSubjects=0 -> handleComplete |

```
Persist:
1. Reconcile active_workers with actual TaskList states
2. Remove entries for completed/deleted tasks
3. Write updated meta.json
4. Verify consistency
5. STOP (wait for next callback)
```
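
Reconciliation steps 1-2 can be sketched as a pure function (names are illustrative, not the skill's actual code):

```javascript
// Drop active_workers entries whose task is no longer in_progress.
function reconcile(activeWorkers, tasks) {
  const inProgress = new Set(
    tasks.filter(t => t.status === 'in_progress').map(t => t.id)
  )
  return activeWorkers.filter(w => inProgress.has(w.taskId))
}
```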

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Worker incomplete (interactive) | AskUser: Retry / Skip / Abort |
| Worker incomplete (auto) | Auto-skip, log warning |
| 0 findings after scan | Delete remaining stages, complete pipeline |
| User declines fix | Delete FIX tasks, complete with review-only results |
| Session file not found | Error, suggest re-initialization |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running, has pending) | Check blockedBy chains, report to user |

---

# Command: execute-fixes

> Applies fixes from fix-plan.json via code-developer subagents. Quick path = 1 agent; standard = 1 agent per group.

## When to Use

- Phase 3B of Fixer, after plan-fixes
- Requires: `${sessionFolder}/fix/fix-plan.json`, `sessionFolder`, `projectRoot`

## Strategy

**Mode**: Sequential Delegation (code-developer agents via Task)

```
quick_path=true  -> 1 agent, all findings sequentially
quick_path=false -> 1 agent per group, groups in execution_order
```

## Execution Steps

### Step 1: Load Plan + Helpers

```javascript
const fixPlan = JSON.parse(Read(`${sessionFolder}/fix/fix-plan.json`))
const { groups, execution_order, quick_path: isQuickPath } = fixPlan
const results = { fixed: [], failed: [], skipped: [] }

// --- Agent prompt builder ---
function buildAgentPrompt(findings, files) {
  const fileContents = {}
  for (const file of files) { try { fileContents[file] = Read(file) } catch {} }

  const fDesc = findings.map((f, i) => {
    const fix = f.suggested_fix || f.optimization?.approach || '(no suggestion)'
    const deps = (f.fix_dependencies||[]).length ? `\nDepends on: ${f.fix_dependencies.join(', ')}` : ''
    return `### ${i+1}. ${f.id} [${f.severity}]\n**File**: ${f.location?.file}:${f.location?.line}\n**Title**: ${f.title}\n**Desc**: ${f.description}\n**Strategy**: ${f.fix_strategy||'minimal'}\n**Fix**: ${fix}${deps}`
  }).join('\n\n')

  const fContent = Object.entries(fileContents)
    .filter(([,c]) => c).map(([f,c]) => `### ${f}\n\`\`\`\n${String(c).slice(0,8000)}\n\`\`\``).join('\n\n')

  return `You are a code fixer agent. Apply fixes to the codebase.

## CRITICAL RULES
1. Apply each fix using Edit tool, in the order given (dependency-sorted)
2. After each fix, run related tests: tests/**/(unknown).test.* or *_test.*
3. Tests PASS -> finding is "fixed"
4. Tests FAIL -> revert: Bash("git checkout -- {file}") -> mark "failed" -> continue
5. Do NOT retry failed fixes with different strategy. Rollback and move on.
6. If a finding depends on a previously failed finding, mark "skipped"

## Findings (in order)
${fDesc}

## File Contents
${fContent}

## Required Output
After ALL findings, output JSON:
\`\`\`json
{"results":[{"id":"SEC-001","status":"fixed","file":"src/a.ts"},{"id":"COR-002","status":"failed","file":"src/b.ts","error":"reason"}]}
\`\`\`
Process each finding now. Rollback on failure, never retry.`
}

// --- Result parser ---
function parseAgentResults(output, findings) {
  const failedIds = new Set()
  let parsed = []
  try {
    const m = (output||'').match(/```json\s*\n?([\s\S]*?)\n?```/)
    if (m) { const j = JSON.parse(m[1]); parsed = j.results || j || [] }
  } catch {}

  if (parsed.length > 0) {
    for (const r of parsed) {
      const f = findings.find(x => x.id === r.id); if (!f) continue
      if (r.status === 'fixed') results.fixed.push({...f})
      else if (r.status === 'failed') { results.failed.push({...f, error: r.error||'unknown'}); failedIds.add(r.id) }
      else if (r.status === 'skipped') { results.skipped.push({...f, error: r.error||'dep failed'}); failedIds.add(r.id) }
    }
  } else {
    // Fallback: check git diff per file
    for (const f of findings) {
      const file = f.location?.file
      if (!file) { results.skipped.push({...f, error:'no file'}); continue }
      const diff = Bash(`git diff --name-only -- "${file}" 2>/dev/null`).trim()
      if (diff) results.fixed.push({...f})
      else { results.failed.push({...f, error:'no changes detected'}); failedIds.add(f.id) }
    }
  }
  // Catch unprocessed findings
  const done = new Set([...results.fixed,...results.failed,...results.skipped].map(x=>x.id))
  for (const f of findings) {
    if (done.has(f.id)) continue
    if ((f.fix_dependencies||[]).some(d => failedIds.has(d)))
      results.skipped.push({...f, error:'dependency failed'})
    else results.failed.push({...f, error:'not processed'})
  }
}
```

### Step 2: Execute

```javascript
if (isQuickPath) {
  // Single agent for all findings
  const group = groups[0]
  const prompt = buildAgentPrompt(group.findings, group.files)
  const out = Task({ subagent_type:"code-developer", prompt, run_in_background:false })
  parseAgentResults(out, group.findings)
} else {
  // One agent per group in execution_order
  const completedGroups = new Set()

  // Build group dependency map
  const groupDeps = {}
  for (const g of groups) {
    groupDeps[g.id] = new Set()
    for (const f of g.findings) {
      for (const depId of (f.fix_dependencies||[])) {
        const dg = groups.find(x => x.findings.some(fx => fx.id === depId))
        if (dg && dg.id !== g.id) groupDeps[g.id].add(dg.id)
      }
    }
  }

  for (const gid of execution_order) {
    const group = groups.find(g => g.id === gid)
    if (!group) continue

    const prompt = buildAgentPrompt(group.findings, group.files)
    const out = Task({ subagent_type:"code-developer", prompt, run_in_background:false })
    parseAgentResults(out, group.findings)
    completedGroups.add(gid)

    Write(`${sessionFolder}/fix/fix-progress.json`, JSON.stringify({
      completed_groups:[...completedGroups],
      results_so_far:{fixed:results.fixed.length, failed:results.failed.length}
    }, null, 2))

    mcp__ccw-tools__team_msg({ operation:"log", session_id: sessionId, from:"fixer",
      to:"coordinator", type:"fix_progress" })
  }
}
```

### Step 3: Write Results

```javascript
Write(`${sessionFolder}/fix/execution-results.json`, JSON.stringify(results, null, 2))
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Agent crashes | Mark group findings as failed, continue next group |
| Test failure after fix | Rollback (`git checkout -- {file}`), mark failed, continue |
| No structured output | Fallback to git diff detection |
| Dependency failed | Skip dependent findings automatically |
| fix-plan.json missing | Report error, write empty results |

---

# Command: plan-fixes

> Deterministic grouping algorithm. Groups findings by file, merges dependent groups, topologically sorts within groups, writes fix-plan.json.

## When to Use

- Phase 3A of Fixer, after context resolution
- Requires: `fixableFindings[]`, `sessionFolder`, `quickPath` from Phase 2

**Trigger conditions**:
- FIX-* task in Phase 3 with at least 1 fixable finding

## Strategy

**Mode**: Direct (inline execution, deterministic algorithm, no CLI needed)

## Execution Steps

### Step 1: Group Findings by Primary File

```javascript
const fileGroups = {}
for (const f of fixableFindings) {
  const file = f.location?.file || '_unknown'
  if (!fileGroups[file]) fileGroups[file] = []
  fileGroups[file].push(f)
}
```

### Step 2: Merge Groups with Cross-File Dependencies

```javascript
// Build adjacency: if finding A (group X) depends on finding B (group Y), merge X into Y
const findingFileMap = {}
for (const f of fixableFindings) {
  findingFileMap[f.id] = f.location?.file || '_unknown'
}

// Union-Find for group merging
const parent = {}
const find = (x) => parent[x] === x ? x : (parent[x] = find(parent[x]))
const union = (a, b) => { parent[find(a)] = find(b) }

const allFiles = Object.keys(fileGroups)
for (const file of allFiles) parent[file] = file

for (const f of fixableFindings) {
  const myFile = f.location?.file || '_unknown'
  for (const depId of (f.fix_dependencies || [])) {
    const depFile = findingFileMap[depId]
    if (depFile && depFile !== myFile) {
      union(myFile, depFile)
    }
  }
}

// Collect merged groups
const mergedGroupMap = {}
for (const file of allFiles) {
  const root = find(file)
  if (!mergedGroupMap[root]) mergedGroupMap[root] = { files: [], findings: [] }
  mergedGroupMap[root].files.push(file)
  mergedGroupMap[root].findings.push(...fileGroups[file])
}

// Deduplicate files
for (const g of Object.values(mergedGroupMap)) {
  g.files = [...new Set(g.files)]
}
```

### Step 3: Topological Sort Within Each Group

```javascript
function topoSort(findings) {
  const idSet = new Set(findings.map(f => f.id))
  const inDegree = {}
  const adj = {}
  for (const f of findings) {
    inDegree[f.id] = 0
    adj[f.id] = []
  }
  for (const f of findings) {
    for (const depId of (f.fix_dependencies || [])) {
      if (idSet.has(depId)) {
        adj[depId].push(f.id)
        inDegree[f.id]++
      }
    }
  }

  const queue = findings.filter(f => inDegree[f.id] === 0).map(f => f.id)
  const sorted = []
  while (queue.length > 0) {
    const id = queue.shift()
    sorted.push(id)
    for (const next of adj[id]) {
      inDegree[next]--
      if (inDegree[next] === 0) queue.push(next)
    }
  }

  // Handle cycles: append any unsorted findings at the end
  const sortedSet = new Set(sorted)
  for (const f of findings) {
    if (!sortedSet.has(f.id)) sorted.push(f.id)
  }

  const findingMap = Object.fromEntries(findings.map(f => [f.id, f]))
  return sorted.map(id => findingMap[id])
}

const groups = Object.entries(mergedGroupMap).map(([root, g], i) => {
  const sorted = topoSort(g.findings)
  const maxSev = sorted.reduce((max, f) => {
    const ord = { critical: 0, high: 1, medium: 2, low: 3 }
    return (ord[f.severity] ?? 4) < (ord[max] ?? 4) ? f.severity : max
  }, 'low')
  return {
    id: `G${i + 1}`,
    files: g.files,
    findings: sorted,
    max_severity: maxSev
  }
})
```

### Step 4: Sort Groups by Max Severity

```javascript
const SEV_ORDER = { critical: 0, high: 1, medium: 2, low: 3 }
groups.sort((a, b) => (SEV_ORDER[a.max_severity] ?? 4) - (SEV_ORDER[b.max_severity] ?? 4))

// Re-assign IDs after sort
groups.forEach((g, i) => { g.id = `G${i + 1}` })

const execution_order = groups.map(g => g.id)
```

### Step 5: Determine Execution Path

```javascript
const totalFindings = fixableFindings.length
const totalGroups = groups.length
const isQuickPath = totalFindings <= 5 && totalGroups <= 1
```

### Step 6: Write fix-plan.json

```javascript
const fixPlan = {
  plan_id: `fix-plan-${Date.now()}`,
  quick_path: isQuickPath,
  groups: groups.map(g => ({
    id: g.id,
    files: g.files,
    findings: g.findings.map(f => ({
      id: f.id, severity: f.severity, dimension: f.dimension,
      title: f.title, description: f.description,
      location: f.location, suggested_fix: f.suggested_fix,
      fix_strategy: f.fix_strategy, fix_complexity: f.fix_complexity,
      fix_dependencies: f.fix_dependencies,
      root_cause: f.root_cause, optimization: f.optimization
    })),
    max_severity: g.max_severity
  })),
  execution_order: execution_order,
  total_findings: totalFindings,
  total_groups: totalGroups
}

Bash(`mkdir -p "${sessionFolder}/fix"`)
Write(`${sessionFolder}/fix/fix-plan.json`, JSON.stringify(fixPlan, null, 2))

mcp__ccw-tools__team_msg({ operation:"log", session_id: sessionId, from:"fixer",
  to:"coordinator", type:"fix_progress" })
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| All findings share one file | Single group, likely quick path |
| Dependency cycle detected | Topo sort appends cycle members at end |
| Finding references unknown dependency | Ignore that dependency edge |
| Empty fixableFindings | Should not reach this command (checked in Phase 2) |

---

# Fixer Role

Fix code based on reviewed findings. Load manifest, group, apply with rollback-on-failure, verify. Code-generation role -- modifies source files.

## Identity

- **Name**: `fixer` | **Tag**: `[fixer]`
- **Task Prefix**: `FIX-*`
- **Responsibility**: code-generation

## Boundaries

### MUST

- Only process `FIX-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[fixer]` identifier
- Only communicate with coordinator via SendMessage
- Write only to the session fix directory
- Rollback on test failure -- never self-retry failed fixes
- Work strictly within code-generation scope

### MUST NOT

- Create tasks for other roles
- Contact scanner/reviewer directly
- Retry failed fixes (report and continue)
- Modify files outside scope
- Omit the `[fixer]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `plan-fixes` | [commands/plan-fixes.md](commands/plan-fixes.md) | Phase 3A | Group + sort findings |
| `execute-fixes` | [commands/execute-fixes.md](commands/execute-fixes.md) | Phase 3B | Apply fixes per plan |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Read` | Built-in | fixer | Load manifest and reports |
| `Write` | Built-in | fixer | Write fix summaries |
| `Edit` | Built-in | fixer | Apply code fixes |
| `Bash` | Built-in | fixer | Run verification tools |
| `TaskUpdate` | Built-in | fixer | Update task status |
| `team_msg` | MCP | fixer | Log communication |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `fix_progress` | fixer -> coordinator | Milestone | Progress update during fix |
| `fix_complete` | fixer -> coordinator | Phase 5 | Fix finished with summary |
| `fix_failed` | fixer -> coordinator | Failure | Fix failed, partial results |
| `error` | fixer -> coordinator | Error | Error requiring attention |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "fixer",
  type: "fix_complete",
  ref: "<session-folder>/fix/fix-summary.json"
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from fixer --type fix_complete --ref <path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `FIX-*` + status pending + blockedBy empty -> TaskGet -> TaskUpdate in_progress.

Extract from task description:

| Parameter | Extraction Pattern | Default |
|-----------|-------------------|---------|
| Session folder | `session: <path>` | (required) |
| Input path | `input: <path>` | `<session>/fix/fix-manifest.json` |

Load manifest and source report. If missing -> report error, complete task.

**Resume Artifact Check**: If `fix-summary.json` exists and is complete -> skip to Phase 5.
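
The resume check could look like this sketch; the exact "complete" criterion (non-null `fix_rate` and `fixed` counts) is an assumption, not the skill's defined rule:

```javascript
// Decide whether Phases 2-4 can be skipped based on an existing summary.
function canSkipToReport(summary) {
  return Boolean(summary && summary.fix_rate != null && summary.fixed != null)
}
```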

---

### Phase 2: Context Resolution

**Objective**: Resolve fixable findings and detect verification tools.

**Workflow**:

1. **Filter fixable findings**:

   | Condition | Include |
   |-----------|---------|
   | Severity in scope | manifest.scope == 'all' or severity matches scope |
   | Not skip | fix_strategy !== 'skip' |

   If 0 fixable findings -> report complete immediately.

2. **Detect complexity**:

   | Signal | Quick Path |
   |--------|------------|
   | Findings <= 5 | Yes |
   | No cross-file dependencies | Yes |
   | Both conditions | Quick path enabled |

3. **Detect verification tools**:

   | Tool | Detection Method |
   |------|------------------|
   | tsc | `tsconfig.json` exists |
   | eslint | `eslint` in package.json |
   | jest | `jest` in package.json |
   | pytest | pytest command + pyproject.toml |
   | semgrep | semgrep command available |

**Success**: fixableFindings resolved, verification tools detected.
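
The filter in step 1 can be sketched as follows (assuming `scope` is either `'all'` or a comma-separated severity list, per the manifest conditions above; the helper name is illustrative):

```javascript
// Keep findings whose severity is in scope and whose fix_strategy is not 'skip'.
function fixableFindings(findings, scope) {
  return findings.filter(f =>
    f.fix_strategy !== 'skip' &&
    (scope === 'all' || scope.split(',').includes(f.severity))
  )
}
```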

---

### Phase 3: Plan + Execute

**Objective**: Create fix plan and apply fixes.

### Phase 3A: Plan Fixes

Delegate to `commands/plan-fixes.md`.

**Planning rules**:

| Factor | Action |
|--------|--------|
| Grouping | Group by file for efficiency |
| Ordering | Higher severity first |
| Dependencies | Respect fix_dependencies order |
| Cross-file | Handle in dependency order |

**Output**: `fix-plan.json`

### Phase 3B: Execute Fixes

Delegate to `commands/execute-fixes.md`.

**Execution rules**:

| Rule | Behavior |
|------|----------|
| Per-file batch | Apply all fixes for one file together |
| Rollback on failure | If a test fails, revert that file's changes |
| No retry | Failed fixes -> report, don't retry |
| Track status | fixed/failed/skipped for each finding |

**Output**: `execution-results.json`

---

### Phase 4: Post-Fix Verification

**Objective**: Run verification tools to validate fixes.

**Verification tools**:

| Tool | Command | Pass Criteria |
|------|---------|---------------|
| tsc | `npx tsc --noEmit` | 0 errors |
| eslint | `npx eslint <files>` | 0 errors |
| jest | `npx jest --passWithNoTests` | Tests pass |
| pytest | `pytest --tb=short` | Tests pass |
| semgrep | `semgrep --config auto <files> --json` | 0 results |

**Verification scope**: Only run tools that are:
1. Available (detected in Phase 2)
2. Relevant (files were modified)

**Rollback logic**: If verification fails critically, rollback the last batch of fixes.

**Output**: `verify-results.json`

**Success**: Verification results recorded, fix rate calculated.
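
The "available + relevant" scoping rule can be sketched as a small plan builder (command strings are copied from the table above; the helper itself and its shape are illustrative):

```javascript
// Run only the verification tools detected in Phase 2, and only if files changed.
const VERIFY_COMMANDS = {
  tsc: 'npx tsc --noEmit',
  eslint: 'npx eslint <files>',
  jest: 'npx jest --passWithNoTests',
  pytest: 'pytest --tb=short'
}

function verificationPlan(detectedTools, filesModified) {
  if (!filesModified) return [] // nothing changed, nothing to verify
  return detectedTools
    .filter(t => t in VERIFY_COMMANDS)
    .map(t => ({ tool: t, command: VERIFY_COMMANDS[t] }))
}
```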

---

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

**Objective**: Report fix results to coordinator.

**Workflow**:

1. Generate fix-summary.json with: fix_id, fix_date, scope, total, fixed, failed, skipped, fix_rate, verification results
2. Generate fix-summary.md (human-readable)
3. Update .msg/meta.json with fix results
4. Log via team_msg with `[fixer]` prefix
5. SendMessage to coordinator
6. TaskUpdate completed
7. Loop to Phase 1 for next task
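
A sketch of assembling the fix-summary.json payload from execution results (field names follow the list in step 1; the exact schema is an assumption):

```javascript
// Build the fix-summary.json object from the results accumulator.
function buildFixSummary(scope, results, verification) {
  const total = results.fixed.length + results.failed.length + results.skipped.length
  return {
    fix_id: `fix-${Date.now()}`,
    fix_date: new Date().toISOString(),
    scope,
    total,
    fixed: results.fixed.length,
    failed: results.failed.length,
    skipped: results.skipped.length,
    fix_rate: total === 0 ? 0 : Math.round((results.fixed.length / total) * 100),
    verification
  }
}
```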

**Report content**:

| Field | Value |
|-------|-------|
| Scope | all / critical,high / custom |
| Fixed | Count by severity |
| Failed | Count + error details |
| Skipped | Count |
| Fix rate | Percentage |
| Verification | Pass/fail per tool |

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Manifest/report missing | Error, complete task |
| 0 fixable findings | Complete immediately |
| Test failure after fix | Rollback, mark failed, continue |
| Tool unavailable | Skip that check |
| All findings fail | Report 0%, complete |
| Session folder missing | Re-create fix subdirectory |
| Edit tool fails | Log error, mark finding as failed |
| Critical issue beyond scope | SendMessage fix_required to coordinator |

---
|
||||
# Command: deep-analyze
|
||||
|
||||
> CLI Fan-out deep analysis. Splits findings into 2 domain groups, runs parallel CLI agents for root cause / impact / optimization enrichment.
|
||||
|
||||
## When to Use
|
||||
|
||||
- Phase 3 of Reviewer, when `deep_analysis.length > 0`
|
||||
- Requires `deep_analysis[]` array and `sessionFolder` from Phase 2
|
||||
|
||||
**Trigger conditions**:
|
||||
- REV-* task in Phase 3 with at least 1 finding triaged for deep analysis
|
||||
|
||||
## Strategy
|
||||
|
||||
### Delegation Mode
|
||||
|
||||
**Mode**: CLI Fan-out (max 2 parallel agents, analysis only)
|
||||
|
||||
### Tool Fallback Chain
|
||||
|
||||
```
|
||||
gemini (primary) -> qwen (fallback) -> codex (fallback)
|
||||
```
|
||||
|
||||
### Group Split
|
||||
|
||||
```
|
||||
Group A: Security + Correctness findings -> 1 CLI agent
|
||||
Group B: Performance + Maintainability findings -> 1 CLI agent
|
||||
If either group empty -> skip that agent (run single agent only)
|
||||
```
|
||||
|
||||
## Execution Steps
|
||||
|
||||
### Step 1: Split Findings into Groups
|
||||
|
||||
```javascript
|
||||
const groupA = deep_analysis.filter(f =>
|
||||
f.dimension === 'security' || f.dimension === 'correctness'
|
||||
)
|
||||
const groupB = deep_analysis.filter(f =>
|
||||
f.dimension === 'performance' || f.dimension === 'maintainability'
|
||||
)
|
||||
|
||||
// Collect all affected files for CLI context
|
||||
const collectFiles = (group) => [...new Set(
|
||||
group.map(f => f.location?.file).filter(Boolean)
|
||||
)]
|
||||
const filesA = collectFiles(groupA)
|
||||
const filesB = collectFiles(groupB)
|
||||
```
|
||||
|
||||
### Step 2: Build CLI Prompts
|
||||
|
||||
```javascript
|
||||
function buildPrompt(group, groupLabel, affectedFiles) {
|
||||
const findingsJson = JSON.stringify(group, null, 2)
|
||||
const filePattern = affectedFiles.length <= 20
|
||||
? affectedFiles.map(f => `@${f}`).join(' ')
|
||||
: '@**/*.{ts,tsx,js,jsx,py,go,java,rs}'
|
||||
|
||||
return `PURPOSE: Deep analysis of ${groupLabel} code findings -- root cause, impact, optimization suggestions.
|
||||
TASK:
|
||||
- For each finding: trace root cause (independent issue or symptom of another finding?)
|
||||
- Identify findings sharing the same root cause -> mark related_findings with their IDs
|
||||
- Assess impact scope and affected files (blast_radius: function/module/system)
|
||||
- Propose fix strategy (minimal fix vs refactor) with tradeoff analysis
|
||||
- Identify fix dependencies (which findings must be fixed first?)
|
||||
- For each finding add these enrichment fields:
|
||||
root_cause: { description: string, related_findings: string[], is_symptom: boolean }
|
||||
impact: { scope: "low"|"medium"|"high", affected_files: string[], blast_radius: string }
|
||||
optimization: { approach: string, alternative: string, tradeoff: string }
|
||||
fix_strategy: "minimal" | "refactor" | "skip"
|
||||
fix_complexity: "low" | "medium" | "high"
|
||||
fix_dependencies: string[] (finding IDs that must be fixed first)
|
||||
MODE: analysis
|
||||
CONTEXT: ${filePattern}
|
||||
Findings to analyze:
|
||||
${findingsJson}
|
||||
EXPECTED: Respond with ONLY a JSON array. Each element is the original finding object with the 6 enrichment fields added. Preserve ALL original fields exactly.
|
||||
CONSTRAINTS: Preserve original finding fields | Only add enrichment fields | Return raw JSON array only | No markdown wrapping`
|
||||
}
|
||||
|
||||
const promptA = groupA.length > 0
|
||||
? buildPrompt(groupA, 'Security + Correctness', filesA) : null
|
||||
const promptB = groupB.length > 0
|
||||
? buildPrompt(groupB, 'Performance + Maintainability', filesB) : null
|
||||
```
|
||||
|
||||
### Step 3: Execute CLI Agents (Parallel)

```javascript
function runCli(prompt) {
  const tools = ['gemini', 'qwen', 'codex']
  for (const tool of tools) {
    try {
      const out = Bash(
        `ccw cli -p "${prompt.replace(/"/g, '\\"')}" --tool ${tool} --mode analysis --rule analysis-diagnose-bug-root-cause`,
        { timeout: 300000 }
      )
      return out
    } catch { continue }
  }
  return null // All tools failed
}

// Run both groups -- if both present, execute via Bash run_in_background for parallelism
let resultA = null, resultB = null

if (promptA && promptB) {
  // Both groups: run in parallel
  // Group A in background
  Bash(`ccw cli -p "${promptA.replace(/"/g, '\\"')}" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause > "${sessionFolder}/review/_groupA.txt" 2>&1`,
    { run_in_background: true, timeout: 300000 })
  // Group B synchronous (blocks until done)
  resultB = runCli(promptB)
  // Wait for Group A to finish, then read output
  Bash(`sleep 5`) // Brief wait if B finished faster
  try { resultA = Read(`${sessionFolder}/review/_groupA.txt`) } catch {}
  // If background failed, try synchronous fallback
  if (!resultA) resultA = runCli(promptA)
} else if (promptA) {
  resultA = runCli(promptA)
} else if (promptB) {
  resultB = runCli(promptB)
}
```

### Step 4: Parse & Merge Results

```javascript
function parseCliOutput(output) {
  if (!output) return []
  try {
    const match = output.match(/\[[\s\S]*\]/)
    if (!match) return []
    const parsed = JSON.parse(match[0])
    // Validate enrichment fields exist
    return parsed.filter(f => f.id && f.dimension).map(f => ({
      ...f,
      root_cause: f.root_cause || { description: 'Unknown', related_findings: [], is_symptom: false },
      impact: f.impact || { scope: 'medium', affected_files: [f.location?.file].filter(Boolean), blast_radius: 'module' },
      optimization: f.optimization || { approach: f.suggested_fix || '', alternative: '', tradeoff: '' },
      fix_strategy: ['minimal', 'refactor', 'skip'].includes(f.fix_strategy) ? f.fix_strategy : 'minimal',
      fix_complexity: ['low', 'medium', 'high'].includes(f.fix_complexity) ? f.fix_complexity : 'medium',
      fix_dependencies: Array.isArray(f.fix_dependencies) ? f.fix_dependencies : []
    }))
  } catch { return [] }
}

const enrichedA = parseCliOutput(resultA)
const enrichedB = parseCliOutput(resultB)

// Merge: CLI-enriched findings replace originals, unenriched originals kept as fallback
const enrichedMap = new Map()
for (const f of [...enrichedA, ...enrichedB]) enrichedMap.set(f.id, f)

const enrichedFindings = deep_analysis.map(f =>
  enrichedMap.get(f.id) || {
    ...f,
    root_cause: { description: 'Analysis unavailable', related_findings: [], is_symptom: false },
    impact: { scope: 'medium', affected_files: [f.location?.file].filter(Boolean), blast_radius: 'unknown' },
    optimization: { approach: f.suggested_fix || '', alternative: '', tradeoff: '' },
    fix_strategy: 'minimal',
    fix_complexity: 'medium',
    fix_dependencies: []
  }
)

// Write output
Write(`${sessionFolder}/review/enriched-findings.json`, JSON.stringify(enrichedFindings, null, 2))

// Cleanup temp files
Bash(`rm -f "${sessionFolder}/review/_groupA.txt" "${sessionFolder}/review/_groupB.txt"`)
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| gemini CLI fails | Fallback to qwen, then codex |
| All CLI tools fail for a group | Use original findings with default enrichment |
| CLI output not valid JSON | Attempt regex extraction, else use defaults |
| Background task hangs | Synchronous fallback after timeout |
| One group fails, other succeeds | Merge partial results with defaults |
| Invalid enrichment fields | Apply defaults for missing/invalid fields |
@@ -1,174 +0,0 @@
# Command: generate-report

> Cross-correlate enriched + pass-through findings, compute metrics, write review-report.json (for fixer) and review-report.md (for humans).

## When to Use

- Phase 4 of Reviewer, after deep analysis (or directly if deep_analysis was empty)
- Requires: `enrichedFindings[]` (from Phase 3 or empty), `pass_through[]` (from Phase 2), `sessionFolder`

## Strategy

**Mode**: Direct (inline execution, no CLI needed)

## Execution Steps

### Step 1: Load & Combine Findings

```javascript
let enrichedFindings = []
try { enrichedFindings = JSON.parse(Read(`${sessionFolder}/review/enriched-findings.json`)) } catch {}
const allFindings = [...enrichedFindings, ...pass_through]
```

### Step 2: Cross-Correlate

```javascript
// 2a: Critical files (file appears in >=2 dimensions)
const fileDimMap = {}
for (const f of allFindings) {
  const file = f.location?.file; if (!file) continue
  if (!fileDimMap[file]) fileDimMap[file] = new Set()
  fileDimMap[file].add(f.dimension)
}
const critical_files = Object.entries(fileDimMap)
  .filter(([, dims]) => dims.size >= 2)
  .map(([file, dims]) => ({
    file, dimensions: [...dims],
    finding_count: allFindings.filter(f => f.location?.file === file).length,
    severities: [...new Set(allFindings.filter(f => f.location?.file === file).map(f => f.severity))]
  })).sort((a, b) => b.finding_count - a.finding_count)

// 2b: Group by shared root cause
const rootCauseGroups = [], grouped = new Set()
for (const f of allFindings) {
  if (grouped.has(f.id)) continue
  const related = (f.root_cause?.related_findings || []).filter(rid => !grouped.has(rid))
  if (related.length > 0) {
    const ids = [f.id, ...related]; ids.forEach(id => grouped.add(id))
    rootCauseGroups.push({ root_cause: f.root_cause?.description || f.title,
      finding_ids: ids, primary_id: f.id, dimension: f.dimension, severity: f.severity })
  }
}

// 2c: Optimization suggestions from root cause groups + standalone enriched
const optimization_suggestions = []
for (const group of rootCauseGroups) {
  const p = allFindings.find(f => f.id === group.primary_id)
  if (p?.optimization?.approach) {
    optimization_suggestions.push({ title: `Fix root cause: ${group.root_cause}`,
      approach: p.optimization.approach, alternative: p.optimization.alternative || '',
      tradeoff: p.optimization.tradeoff || '', affected_findings: group.finding_ids,
      fix_strategy: p.fix_strategy || 'minimal', fix_complexity: p.fix_complexity || 'medium',
      estimated_impact: `Resolves ${group.finding_ids.length} findings` })
  }
}
for (const f of enrichedFindings) {
  if (grouped.has(f.id) || !f.optimization?.approach || f.severity === 'low' || f.severity === 'info') continue
  optimization_suggestions.push({ title: `${f.id}: ${f.title}`,
    approach: f.optimization.approach, alternative: f.optimization.alternative || '',
    tradeoff: f.optimization.tradeoff || '', affected_findings: [f.id],
    fix_strategy: f.fix_strategy || 'minimal', fix_complexity: f.fix_complexity || 'medium',
    estimated_impact: 'Resolves 1 finding' })
}

// 2d: Metrics
const by_dimension = {}, by_severity = {}, dimension_severity_matrix = {}
for (const f of allFindings) {
  by_dimension[f.dimension] = (by_dimension[f.dimension] || 0) + 1
  by_severity[f.severity] = (by_severity[f.severity] || 0) + 1
  if (!dimension_severity_matrix[f.dimension]) dimension_severity_matrix[f.dimension] = {}
  dimension_severity_matrix[f.dimension][f.severity] = (dimension_severity_matrix[f.dimension][f.severity] || 0) + 1
}
const fixable = allFindings.filter(f => f.fix_strategy !== 'skip')
const autoFixable = fixable.filter(f => f.fix_complexity === 'low' && f.fix_strategy === 'minimal')
```

### Step 3: Write review-report.json

```javascript
const reviewReport = {
  review_id: `rev-${Date.now()}`, review_date: new Date().toISOString(),
  findings: allFindings, critical_files, optimization_suggestions, root_cause_groups: rootCauseGroups,
  summary: { total: allFindings.length, deep_analyzed: enrichedFindings.length,
    pass_through: pass_through.length, by_dimension, by_severity, dimension_severity_matrix,
    fixable_count: fixable.length, auto_fixable_count: autoFixable.length,
    critical_file_count: critical_files.length, optimization_count: optimization_suggestions.length }
}
Bash(`mkdir -p "${sessionFolder}/review"`)
Write(`${sessionFolder}/review/review-report.json`, JSON.stringify(reviewReport, null, 2))
```

### Step 4: Write review-report.md

```javascript
const dims = ['security','correctness','performance','maintainability']
const sevs = ['critical','high','medium','low','info']
const S = reviewReport.summary

// Dimension x Severity matrix
let mx = '| Dimension | Critical | High | Medium | Low | Info | Total |\n|---|---|---|---|---|---|---|\n'
for (const d of dims) {
  mx += `| ${d} | ${sevs.map(s => dimension_severity_matrix[d]?.[s]||0).join(' | ')} | ${by_dimension[d]||0} |\n`
}
mx += `| **Total** | ${sevs.map(s => by_severity[s]||0).join(' | ')} | **${S.total}** |\n`

// Critical+High findings table
const ch = allFindings.filter(f => f.severity==='critical'||f.severity==='high')
  .sort((a,b) => (a.severity==='critical'?0:1)-(b.severity==='critical'?0:1))
let ft = '| ID | Sev | Dim | File:Line | Title | Fix |\n|---|---|---|---|---|---|\n'
if (ch.length) ch.forEach(f => { ft += `| ${f.id} | ${f.severity} | ${f.dimension} | ${f.location?.file}:${f.location?.line} | ${f.title} | ${f.fix_strategy||'-'} |\n` })
else ft += '| - | - | - | - | No critical/high findings | - |\n'

// Optimization suggestions
let os = optimization_suggestions.map((o,i) =>
  `### ${i+1}. ${o.title}\n- **Approach**: ${o.approach}\n${o.tradeoff?`- **Tradeoff**: ${o.tradeoff}\n`:''}- **Strategy**: ${o.fix_strategy} | **Complexity**: ${o.fix_complexity} | ${o.estimated_impact}`
).join('\n\n') || '_No optimization suggestions._'

// Critical files
const cf = critical_files.slice(0,10).map(c =>
  `- **${c.file}** (${c.finding_count} findings, dims: ${c.dimensions.join(', ')})`
).join('\n') || '_No critical files._'

// Fix scope
const fs = [
  by_severity.critical ? `${by_severity.critical} critical (must fix)` : '',
  by_severity.high ? `${by_severity.high} high (should fix)` : '',
  autoFixable.length ? `${autoFixable.length} auto-fixable (low effort)` : ''
].filter(Boolean).map(s => `- ${s}`).join('\n') || '- No actionable findings.'

Write(`${sessionFolder}/review/review-report.md`,
`# Review Report

**ID**: ${reviewReport.review_id} | **Date**: ${reviewReport.review_date}
**Findings**: ${S.total} | **Fixable**: ${S.fixable_count} | **Auto-fixable**: ${S.auto_fixable_count}

## Executive Summary
- Deep analyzed: ${S.deep_analyzed} | Pass-through: ${S.pass_through}
- Critical files: ${S.critical_file_count} | Optimizations: ${S.optimization_count}

## Metrics Matrix
${mx}
## Critical & High Findings
${ft}
## Critical Files
${cf}

## Optimization Suggestions
${os}

## Recommended Fix Scope
${fs}

**Total fixable**: ${S.fixable_count} / ${S.total}
`)
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Enriched findings missing | Use empty array, report pass_through only |
| JSON parse failure | Log warning, use raw findings |
| Session folder missing | Create review subdir via mkdir |
| Empty allFindings | Write minimal "clean" report |
@@ -1,231 +0,0 @@
# Reviewer Role

Deep analysis on scan findings, enrichment with root cause / impact / optimization, and structured review report generation. Read-only -- never modifies source code.

## Identity

- **Name**: `reviewer` | **Tag**: `[reviewer]`
- **Task Prefix**: `REV-*`
- **Responsibility**: read-only-analysis

## Boundaries

### MUST

- Only process `REV-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[reviewer]` identifier
- Only communicate with coordinator via SendMessage
- Write only to session review directory
- Triage findings before deep analysis (cap at 15 for deep analysis)
- Work strictly within read-only analysis scope

### MUST NOT

- Modify source code files
- Fix issues
- Create tasks for other roles
- Contact scanner/fixer directly
- Run any write-mode CLI commands
- Omit `[reviewer]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `deep-analyze` | [commands/deep-analyze.md](commands/deep-analyze.md) | Phase 3 | CLI Fan-out root cause analysis |
| `generate-report` | [commands/generate-report.md](commands/generate-report.md) | Phase 4 | Cross-correlate + report generation |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Read` | Built-in | reviewer | Load scan results |
| `Write` | Built-in | reviewer | Write review reports |
| `TaskUpdate` | Built-in | reviewer | Update task status |
| `team_msg` | MCP | reviewer | Log communication |
| `Bash` | Built-in | reviewer | CLI analysis calls |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `review_progress` | reviewer -> coordinator | Milestone | Progress update during review |
| `review_complete` | reviewer -> coordinator | Phase 5 | Review finished with findings |
| `error` | reviewer -> coordinator | Failure | Error requiring attention |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "reviewer",
  type: "review_complete",
  ref: "<session-folder>/review/review-report.json"
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from reviewer --type review_complete --ref <path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `REV-*` + status pending + blockedBy empty -> TaskGet -> TaskUpdate in_progress.

Extract from task description:

| Parameter | Extraction Pattern | Default |
|-----------|-------------------|---------|
| Session folder | `session: <path>` | (required) |
| Input path | `input: <path>` | `<session>/scan/scan-results.json` |
| Dimensions | `dimensions: <list>` | `sec,cor,perf,maint` |

Load scan results from input path. If missing or empty -> report clean, complete immediately.
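
The extraction patterns in the table above can be sketched as a small helper (a hypothetical function for illustration; the actual worker parses the description inline):

```javascript
// Sketch: pull `key: value` parameters out of a REV-* task description,
// applying the defaults from the Phase 1 table.
function extractParams(description) {
  const grab = (key) => {
    const m = description.match(new RegExp(`${key}:\\s*(\\S+)`))
    return m ? m[1] : null
  }
  const session = grab('session')
  if (!session) throw new Error('session is required')
  return {
    session,
    input: grab('input') || `${session}/scan/scan-results.json`,
    dimensions: (grab('dimensions') || 'sec,cor,perf,maint').split(',')
  }
}

const params = extractParams('session: .workflow/.team/TLS-001\ndimensions: sec,cor')
// params.input falls back to '.workflow/.team/TLS-001/scan/scan-results.json'
```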

**Resume Artifact Check**: If `review-report.json` exists and is complete -> skip to Phase 5.

---

### Phase 2: Triage Findings

**Objective**: Split findings into deep analysis vs pass-through buckets.

**Triage rules**:

| Category | Severity | Action |
|----------|----------|--------|
| Deep analysis | critical, high, medium | Enrich with root cause, impact, optimization |
| Pass-through | low | Include in report without enrichment |

**Limits**:

| Parameter | Value | Reason |
|-----------|-------|--------|
| MAX_DEEP | 15 | CLI call efficiency |
| Priority order | critical -> high -> medium | Highest impact first |

**Workflow**:

1. Filter findings with severity in [critical, high, medium]
2. Sort by severity (critical first)
3. Take first MAX_DEEP for deep analysis
4. Remaining findings -> pass-through bucket

**Success**: deep_analysis and pass_through buckets populated.

If deep_analysis bucket is empty -> skip Phase 3, go directly to Phase 4.
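
The four workflow steps above can be sketched as a pure function (a sketch; bucket names and the MAX_DEEP cap follow the tables in this phase):

```javascript
// Sketch of Phase 2 triage: split findings into deep-analysis and
// pass-through buckets, capped at MAX_DEEP, highest severity first.
const MAX_DEEP = 15
const RANK = { critical: 0, high: 1, medium: 2 }

function triage(findings) {
  const candidates = findings
    .filter(f => f.severity in RANK)                     // step 1: filter
    .sort((a, b) => RANK[a.severity] - RANK[b.severity]) // step 2: sort, critical first
  const deep_analysis = candidates.slice(0, MAX_DEEP)    // step 3: cap at MAX_DEEP
  const deepIds = new Set(deep_analysis.map(f => f.id))
  const pass_through = findings.filter(f => !deepIds.has(f.id)) // step 4: remainder
  return { deep_analysis, pass_through }
}
```

Low-severity findings never enter the candidate list, so they land in pass_through directly.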

---

### Phase 3: Deep Analysis

**Objective**: Enrich selected findings with root cause, impact, and optimization suggestions.

Delegate to `commands/deep-analyze.md` which performs CLI Fan-out analysis.

**Analysis strategy**:

| Condition | Strategy |
|-----------|----------|
| Single dimension analysis | Direct inline scan |
| Multi-dimension analysis | Per-dimension sequential scan |
| Deep analysis needed | CLI Fan-out to external tool |

**Enrichment fields** (added to each finding):

| Field | Description |
|-------|-------------|
| root_cause | Underlying cause of the issue |
| impact | Business/technical impact |
| optimization | Suggested optimization approach |
| fix_strategy | minimal/refactor/skip |
| fix_complexity | low/medium/high |
| fix_dependencies | Array of dependent finding IDs |

**Output**: `enriched-findings.json`

If CLI deep analysis fails -> use original findings without enrichment.
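
That fallback path can be sketched as follows (a sketch mirroring the default values used in deep-analyze's parse step):

```javascript
// Sketch of the no-enrichment fallback: when CLI analysis fails, every
// selected finding still receives the six enrichment fields with safe defaults.
function withDefaultEnrichment(finding) {
  return {
    ...finding,
    root_cause: { description: 'Analysis unavailable', related_findings: [], is_symptom: false },
    impact: { scope: 'medium', affected_files: [finding.location?.file].filter(Boolean), blast_radius: 'unknown' },
    optimization: { approach: finding.suggested_fix || '', alternative: '', tradeoff: '' },
    fix_strategy: 'minimal',
    fix_complexity: 'medium',
    fix_dependencies: []
  }
}
```

This keeps the report schema stable for the fixer even when no CLI tool responded.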

---

### Phase 4: Generate Report

**Objective**: Cross-correlate enriched + pass-through findings, generate review report.

Delegate to `commands/generate-report.md`.

**Report structure**:

| Section | Content |
|---------|---------|
| Summary | Total count, by_severity, by_dimension, fixable_count, auto_fixable_count |
| Critical files | Files with multiple critical/high findings |
| Findings | All findings with enrichment data |

**Output files**:

| File | Format | Purpose |
|------|--------|---------|
| review-report.json | JSON | Machine-readable for fixer |
| review-report.md | Markdown | Human-readable summary |

**Success**: Both report files written.

---

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

**Objective**: Report review results to coordinator.

**Workflow**:

1. Update .msg/meta.json with review results summary
2. Build top findings summary (critical/high, max 8)
3. Log via team_msg with `[reviewer]` prefix
4. SendMessage to coordinator
5. TaskUpdate completed
6. Loop to Phase 1 for next task

**Report content**:

| Field | Value |
|-------|-------|
| Findings count | Total |
| Severity summary | critical:n high:n medium:n low:n |
| Fixable count | Number of auto-fixable |
| Top findings | Critical/high items |
| Critical files | Files with most issues |
| Output path | review-report.json location |
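
Deriving those fields from review-report.json can be sketched as (the exact SendMessage envelope is coordinator-defined; this only shows how each table field is assembled):

```javascript
// Sketch: build the Phase 5 summary payload from review-report.json data.
function buildReportSummary(report) {
  const top = report.findings
    .filter(f => f.severity === 'critical' || f.severity === 'high')
    .slice(0, 8) // top findings capped at 8, per the workflow
    .map(f => `${f.id}: ${f.title}`)
  const sev = report.summary.by_severity
  return {
    findings_count: report.summary.total,
    severity_summary: `critical:${sev.critical || 0} high:${sev.high || 0} medium:${sev.medium || 0} low:${sev.low || 0}`,
    fixable_count: report.summary.auto_fixable_count,
    top_findings: top,
    critical_files: report.critical_files.map(c => c.file),
    output_path: 'review/review-report.json'
  }
}
```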

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Scan results file missing | Report error, complete task cleanly |
| 0 findings in scan | Report clean, complete immediately |
| CLI deep analysis fails | Use original findings without enrichment |
| Report generation fails | Write minimal report with raw findings |
| Session folder missing | Re-create review subdirectory |
| JSON parse failures | Log warning, use fallback data |
| Context/Plan file not found | Notify coordinator, request location |
@@ -1,186 +0,0 @@
# Command: semantic-scan

> LLM-based semantic analysis via CLI. Supplements toolchain findings with issues that static tools cannot detect: business logic flaws, architectural problems, complex security patterns.

## When to Use

- Phase 3 of Scanner, Standard mode, Step B
- Runs AFTER toolchain-scan completes (needs its output to avoid duplication)
- Quick mode does NOT use this command

**Trigger conditions**:
- SCAN-* task in Phase 3 with `quickMode === false`
- toolchain-scan.md has completed (toolchain-findings.json exists or empty)

## Strategy

### Delegation Mode

**Mode**: CLI Fan-out (single gemini agent, analysis only)

### Tool Fallback Chain

```
gemini (primary) -> qwen (fallback) -> codex (fallback)
```

## Execution Steps

### Step 1: Prepare Context

Build the CLI prompt with target files and a summary of toolchain findings to avoid duplication.

```javascript
// Read toolchain findings for dedup context
let toolFindings = []
try {
  toolFindings = JSON.parse(Read(`${sessionFolder}/scan/toolchain-findings.json`))
} catch { /* no toolchain findings */ }

// Build toolchain summary for dedup (compact: file:line:rule per line)
const toolSummary = toolFindings.length > 0
  ? toolFindings.slice(0, 50).map(f =>
      `${f.location?.file}:${f.location?.line} [${f.source}] ${f.title}`
    ).join('\n')
  : '(no toolchain findings)'

// Build target file list for CLI context
// Limit to reasonable size for CLI prompt
const fileList = targetFiles.slice(0, 100)
const targetPattern = fileList.length <= 20
  ? fileList.join(' ')
  : `${target}/**/*.{ts,tsx,js,jsx,py,go,java,rs}`

// Map requested dimensions to scan focus areas
const DIM_FOCUS = {
  sec: 'Security: business logic vulnerabilities, privilege escalation, sensitive data flow, auth bypass, injection beyond simple patterns',
  cor: 'Correctness: logic errors, unhandled exception paths, state management bugs, race conditions, incorrect algorithm implementation',
  perf: 'Performance: algorithm complexity (O(n^2)+), N+1 queries, unnecessary sync operations, memory leaks, missing caching opportunities',
  maint: 'Maintainability: architectural coupling, abstraction leaks, project convention violations, dead code paths, excessive complexity'
}

const focusAreas = dimensions
  .map(d => DIM_FOCUS[d])
  .filter(Boolean)
  .map((desc, i) => `${i + 1}. ${desc}`)
  .join('\n')
```

### Step 2: Execute CLI Scan

```javascript
const maxPerDimension = 5
const minSeverity = 'medium'

const cliPrompt = `PURPOSE: Supplement toolchain scan with semantic analysis that static tools cannot detect. Find logic errors, architectural issues, and complex vulnerability patterns.
TASK:
${focusAreas}
MODE: analysis
CONTEXT: @${targetPattern}
Toolchain already detected these issues (DO NOT repeat them):
${toolSummary}
EXPECTED: Respond with ONLY a JSON array (no markdown, no explanation). Each element:
{"dimension":"security|correctness|performance|maintainability","category":"<sub-category>","severity":"critical|high|medium","title":"<concise title>","description":"<detailed explanation>","location":{"file":"<path>","line":<number>,"end_line":<number>,"code_snippet":"<relevant code>"},"source":"llm","suggested_fix":"<how to fix>","effort":"low|medium|high","confidence":"high|medium|low"}
CONSTRAINTS: Max ${maxPerDimension} findings per dimension | Only ${minSeverity} severity and above | Do not duplicate toolchain findings | Focus on issues tools CANNOT detect | Return raw JSON array only`

let cliOutput = null
let cliTool = 'gemini'

// Try primary tool
try {
  cliOutput = Bash(
    `ccw cli -p "${cliPrompt.replace(/"/g, '\\"')}" --tool gemini --mode analysis --rule analysis-review-code-quality`,
    { timeout: 300000 }
  )
} catch {
  // Fallback to qwen
  try {
    cliTool = 'qwen'
    cliOutput = Bash(
      `ccw cli -p "${cliPrompt.replace(/"/g, '\\"')}" --tool qwen --mode analysis`,
      { timeout: 300000 }
    )
  } catch {
    // Fallback to codex
    try {
      cliTool = 'codex'
      cliOutput = Bash(
        `ccw cli -p "${cliPrompt.replace(/"/g, '\\"')}" --tool codex --mode analysis`,
        { timeout: 300000 }
      )
    } catch {
      // All CLI tools failed
      cliOutput = null
    }
  }
}
```

### Step 3: Parse & Validate Output

```javascript
let semanticFindings = []

if (cliOutput) {
  try {
    // Extract JSON array from CLI output (may have surrounding text)
    const jsonMatch = cliOutput.match(/\[[\s\S]*\]/)
    if (jsonMatch) {
      const parsed = JSON.parse(jsonMatch[0])

      // Validate each finding against schema
      semanticFindings = parsed.filter(f => {
        // Required fields check
        if (!f.dimension || !f.title || !f.location?.file) return false
        // Dimension must be valid
        if (!['security', 'correctness', 'performance', 'maintainability'].includes(f.dimension)) return false
        // Severity must be valid and meet minimum
        const validSev = ['critical', 'high', 'medium']
        if (!validSev.includes(f.severity)) return false
        return true
      }).map(f => ({
        dimension: f.dimension,
        category: f.category || 'general',
        severity: f.severity,
        title: f.title,
        description: f.description || f.title,
        location: {
          file: f.location.file,
          line: f.location.line || 1,
          end_line: f.location.end_line || f.location.line || 1,
          code_snippet: f.location.code_snippet || ''
        },
        source: 'llm',
        tool_rule: null,
        suggested_fix: f.suggested_fix || '',
        effort: ['low', 'medium', 'high'].includes(f.effort) ? f.effort : 'medium',
        confidence: ['high', 'medium', 'low'].includes(f.confidence) ? f.confidence : 'medium'
      }))
    }
  } catch {
    // JSON parse failed - log and continue with empty
  }
}

// Enforce per-dimension limits
const dimCounts = {}
semanticFindings = semanticFindings.filter(f => {
  dimCounts[f.dimension] = (dimCounts[f.dimension] || 0) + 1
  return dimCounts[f.dimension] <= maxPerDimension
})

// Write output
Write(`${sessionFolder}/scan/semantic-findings.json`,
  JSON.stringify(semanticFindings, null, 2))
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| gemini CLI fails | Fallback to qwen, then codex |
| All CLI tools fail | Log warning, write empty findings array (toolchain results still valid) |
| CLI output not valid JSON | Attempt regex extraction, else empty findings |
| Findings exceed per-dimension limit | Truncate to max per dimension |
| Invalid dimension/severity in output | Filter out invalid entries |
| CLI timeout (>5 min) | Kill, log warning, return empty findings |
@@ -1,187 +0,0 @@
# Command: toolchain-scan

> Parallel static analysis tool execution. Detects available tools, runs concurrently, normalizes output into standardized findings.

## When to Use

- Phase 3 of Scanner, Standard mode, Step A
- At least one tool detected in Phase 2
- Quick mode does NOT use this command

## Strategy

### Delegation Mode

**Mode**: Direct (Bash parallel execution)

## Execution Steps

### Step 1: Build Tool Commands

```javascript
if (!Object.values(toolchain).some(Boolean)) {
  Write(`${sessionFolder}/scan/toolchain-findings.json`, '[]')
  return
}

const tmpDir = `${sessionFolder}/scan/tmp`
Bash(`mkdir -p "${tmpDir}"`)

const cmds = []

if (toolchain.tsc)
  cmds.push(`(cd "${projectRoot}" && npx tsc --noEmit --pretty false 2>&1 | head -500 > "${tmpDir}/tsc.txt") &`)
if (toolchain.eslint)
  cmds.push(`(cd "${projectRoot}" && npx eslint "${target}" --format json --no-error-on-unmatched-pattern 2>/dev/null | head -5000 > "${tmpDir}/eslint.json") &`)
if (toolchain.semgrep)
  cmds.push(`(cd "${projectRoot}" && semgrep --config auto --json "${target}" 2>/dev/null | head -5000 > "${tmpDir}/semgrep.json") &`)
if (toolchain.ruff)
  cmds.push(`(cd "${projectRoot}" && ruff check "${target}" --output-format json 2>/dev/null | head -5000 > "${tmpDir}/ruff.json") &`)
if (toolchain.mypy)
  cmds.push(`(cd "${projectRoot}" && mypy "${target}" --output json 2>/dev/null | head -2000 > "${tmpDir}/mypy.txt") &`)
if (toolchain.npmAudit)
  cmds.push(`(cd "${projectRoot}" && npm audit --json 2>/dev/null | head -5000 > "${tmpDir}/audit.json") &`)
```

### Step 2: Parallel Execution

```javascript
Bash(cmds.join('\n') + '\nwait', { timeout: 300000 })
```
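
The backgrounding pattern — each command wrapped in `( ... ) &` plus a final `wait` — can be sketched in isolation (a hypothetical helper; the real step inlines this in Step 1 and Step 2):

```javascript
// Sketch: compose independent tool commands into one shell script that runs
// them concurrently (trailing '&') and blocks until all jobs exit ('wait').
function buildParallelScript(cmds) {
  return cmds.map(c => c.endsWith('&') ? c : `(${c}) &`).join('\n') + '\nwait'
}

const script = buildParallelScript(['sleep 1 > /dev/null', 'echo hi > /tmp/out.txt'])
// script ends with a 'wait' line, so Bash(script) returns only after both jobs finish
```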
|
||||
|
||||
### Step 3: Parse Tool Outputs
|
||||
|
||||
Each parser normalizes to: `{ dimension, category, severity, title, description, location:{file,line,end_line,code_snippet}, source, tool_rule, suggested_fix, effort, confidence }`
|
||||
|
||||
```javascript
const findings = []

// --- tsc: file(line,col): error TSxxxx: message ---
if (toolchain.tsc) {
  try {
    const out = Read(`${tmpDir}/tsc.txt`)
    const re = /^(.+)\((\d+),\d+\):\s+(error|warning)\s+(TS\d+):\s+(.+)$/gm
    let m; while ((m = re.exec(out)) !== null) {
      findings.push({
        dimension: 'correctness', category: 'type-safety',
        severity: m[3] === 'error' ? 'high' : 'medium',
        title: `tsc ${m[4]}: ${m[5].slice(0,80)}`, description: m[5],
        location: { file: m[1], line: +m[2] },
        source: 'tool:tsc', tool_rule: m[4], suggested_fix: '',
        effort: 'low', confidence: 'high'
      })
    }
  } catch {}
}

// --- eslint: JSON array of {filePath, messages[{severity,ruleId,message,line}]} ---
if (toolchain.eslint) {
  try {
    const data = JSON.parse(Read(`${tmpDir}/eslint.json`))
    for (const f of data) for (const msg of (f.messages || [])) {
      const isErr = msg.severity === 2
      findings.push({
        dimension: isErr ? 'correctness' : 'maintainability',
        category: isErr ? 'bug' : 'code-smell',
        severity: isErr ? 'high' : 'medium',
        title: `eslint ${msg.ruleId || '?'}: ${(msg.message||'').slice(0,80)}`,
        description: msg.message || '',
        location: { file: f.filePath, line: msg.line || 1, end_line: msg.endLine, code_snippet: msg.source || '' },
        source: 'tool:eslint', tool_rule: msg.ruleId || null,
        suggested_fix: msg.fix ? 'Auto-fixable' : '', effort: msg.fix ? 'low' : 'medium', confidence: 'high'
      })
    }
  } catch {}
}

// --- semgrep: {results[{path,start:{line},end:{line},check_id,extra:{severity,message,fix,lines}}]} ---
if (toolchain.semgrep) {
  try {
    const data = JSON.parse(Read(`${tmpDir}/semgrep.json`))
    const smap = { ERROR:'high', WARNING:'medium', INFO:'low' }
    for (const r of (data.results || [])) {
      findings.push({
        dimension: 'security', category: r.check_id?.split('.').pop() || 'generic',
        severity: smap[r.extra?.severity] || 'medium',
        title: `semgrep: ${(r.extra?.message || r.check_id || '').slice(0,80)}`,
        description: r.extra?.message || '',
        location: { file: r.path, line: r.start?.line || 1, end_line: r.end?.line, code_snippet: r.extra?.lines || '' },
        source: 'tool:semgrep', tool_rule: r.check_id || null,
        suggested_fix: r.extra?.fix || '', effort: 'medium', confidence: smap[r.extra?.severity] === 'high' ? 'high' : 'medium'
      })
    }
  } catch {}
}

// --- ruff: [{code,message,filename,location:{row},end_location:{row},fix}] ---
if (toolchain.ruff) {
  try {
    const data = JSON.parse(Read(`${tmpDir}/ruff.json`))
    for (const item of data) {
      const code = item.code || ''
      const dim = code.startsWith('S') ? 'security' : (code.startsWith('F') || code.startsWith('B')) ? 'correctness' : 'maintainability'
      findings.push({
        dimension: dim, category: dim === 'security' ? 'input-validation' : dim === 'correctness' ? 'bug' : 'code-smell',
        severity: (code.startsWith('S') || code.startsWith('F')) ? 'high' : 'medium',
        title: `ruff ${code}: ${(item.message||'').slice(0,80)}`, description: item.message || '',
        location: { file: item.filename, line: item.location?.row || 1, end_line: item.end_location?.row },
        source: 'tool:ruff', tool_rule: code, suggested_fix: item.fix?.message || '',
        effort: item.fix ? 'low' : 'medium', confidence: 'high'
      })
    }
  } catch {}
}

// --- npm audit: {vulnerabilities:{name:{severity,title,fixAvailable,via}}} ---
if (toolchain.npmAudit) {
  try {
    const data = JSON.parse(Read(`${tmpDir}/audit.json`))
    const smap = { critical:'critical', high:'high', moderate:'medium', low:'low', info:'info' }
    for (const [,v] of Object.entries(data.vulnerabilities || {})) {
      findings.push({
        dimension: 'security', category: 'dependency', severity: smap[v.severity] || 'medium',
        title: `npm audit: ${v.name} - ${(v.title || '').slice(0,80)}`,
        description: v.title || `Vulnerable: ${v.name}`,
        location: { file: 'package.json', line: 1 },
        source: 'tool:npm-audit', tool_rule: null,
        suggested_fix: v.fixAvailable ? 'npm audit fix' : 'Manual resolution',
        effort: v.fixAvailable ? 'low' : 'high', confidence: 'high'
      })
    }
  } catch {}
}

|
||||
// --- mypy: file:line: error: message [code] ---
if (toolchain.mypy) {
  try {
    const out = Read(`${tmpDir}/mypy.txt`)
    const re = /^(.+):(\d+):\s+(error|warning|note):\s+(.+?)(?:\s+\[(\w[\w-]*)\])?$/gm
    let m; while ((m = re.exec(out)) !== null) {
      if (m[3] === 'note') continue  // note lines are context, not findings
      findings.push({
        dimension: 'correctness', category: 'type-safety',
        severity: m[3] === 'error' ? 'high' : 'medium',
        title: `mypy${m[5] ? ` [${m[5]}]` : ''}: ${m[4].slice(0,80)}`, description: m[4],
        location: { file: m[1], line: +m[2] },
        source: 'tool:mypy', tool_rule: m[5] || null, suggested_fix: '',
        effort: 'low', confidence: 'high'
      })
    }
  } catch {}
}
```

### Step 4: Write Output

```javascript
Write(`${sessionFolder}/scan/toolchain-findings.json`, JSON.stringify(findings, null, 2))
Bash(`rm -rf "${tmpDir}"`)
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Tool not found at runtime | Skip gracefully, continue with others |
| Tool times out (>5 min) | Killed by `wait` timeout, partial output used |
| Tool output unparseable | try/catch skips that tool's findings |
| All tools fail | Empty array written, semantic-scan covers all dimensions |
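The tsc and mypy parsers above are plain line-format regexes, so they can be exercised against captured output without running the tools. A minimal sketch of the tsc path (the sample output lines are invented for illustration):

```javascript
// Sample `tsc --noEmit` output lines (hypothetical), matching the parser's regex.
const tscOut = [
  "src/app.ts(10,5): error TS2304: Cannot find name 'foo'.",
  "src/util.ts(3,1): warning TS6133: 'x' is declared but never read."
].join('\n')

const re = /^(.+)\((\d+),\d+\):\s+(error|warning)\s+(TS\d+):\s+(.+)$/gm
const findings = []
let m
while ((m = re.exec(tscOut)) !== null) {
  findings.push({
    severity: m[3] === 'error' ? 'high' : 'medium',
    tool_rule: m[4],
    location: { file: m[1], line: +m[2] }
  })
}

console.log(findings.length)        // 2
console.log(findings[0].tool_rule)  // TS2304
console.log(findings[1].severity)   // medium
```

The same pattern (capture file, line, severity, rule, message) applies to any tool whose output is one diagnostic per line.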
# Scanner Role

Toolchain + LLM semantic scan producing structured findings. Static analysis tools in parallel, then LLM for issues tools miss. Read-only -- never modifies source code.

## Identity

- **Name**: `scanner` | **Tag**: `[scanner]`
- **Task Prefix**: `SCAN-*`
- **Responsibility**: read-only-analysis

## Boundaries

### MUST

- Only process `SCAN-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[scanner]` identifier
- Only communicate with coordinator via SendMessage
- Write only to session scan directory
- Assign dimension-prefixed IDs: SEC-001, COR-001, PRF-001, MNT-001
- Work strictly within read-only analysis scope

### MUST NOT

- Modify source files
- Fix issues
- Create tasks for other roles
- Contact reviewer/fixer directly
- Run any write-mode CLI commands
- Omit `[scanner]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `toolchain-scan` | [commands/toolchain-scan.md](commands/toolchain-scan.md) | Phase 3A | Parallel static analysis |
| `semantic-scan` | [commands/semantic-scan.md](commands/semantic-scan.md) | Phase 3B | LLM analysis via CLI |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Read` | Built-in | scanner | Load context files |
| `Write` | Built-in | scanner | Write scan results |
| `Glob` | Built-in | scanner | Find target files |
| `Bash` | Built-in | scanner | Run toolchain commands |
| `TaskUpdate` | Built-in | scanner | Update task status |
| `team_msg` | MCP | scanner | Log communication |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `scan_progress` | scanner -> coordinator | Milestone | Progress update during scan |
| `scan_complete` | scanner -> coordinator | Phase 5 | Scan finished with findings count |
| `error` | scanner -> coordinator | Failure | Error requiring attention |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "scanner",
  type: "scan_complete",
  ref: "<session-folder>/scan/scan-results.json"
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from scanner --type scan_complete --ref <path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `SCAN-*` + status pending + blockedBy empty -> TaskGet -> TaskUpdate in_progress.

Extract from task description:

| Parameter | Extraction Pattern | Default |
|-----------|-------------------|---------|
| Target | `target: <path>` | `.` |
| Dimensions | `dimensions: <list>` | `sec,cor,perf,maint` |
| Quick mode | `quick: true` | false |
| Session folder | `session: <path>` | (required) |

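Parameter extraction from the task description is line-oriented, so it reduces to a few anchored regex matches. A sketch, with defaults matching the table above (`parseScanParams` is a hypothetical helper name):

```javascript
// Extracts `key: value` lines from a task description, applying defaults.
function parseScanParams(description) {
  const grab = (key) => {
    const m = description.match(new RegExp(`^${key}:\\s*(.+)$`, 'm'))
    return m ? m[1].trim() : null
  }
  return {
    target: grab('target') || '.',
    dimensions: (grab('dimensions') || 'sec,cor,perf,maint').split(','),
    quick: grab('quick') === 'true',
    session: grab('session')  // required; caller must fail if null
  }
}

const p = parseScanParams('target: src/\ndimensions: sec,cor\nsession: .workflow/.team/TLS-xxx')
console.log(p.target)      // src/
console.log(p.quick)       // false
console.log(p.dimensions)  // [ 'sec', 'cor' ]
```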
**Resume Artifact Check**: If `scan-results.json` exists and is complete -> skip to Phase 5.

---

### Phase 2: Context Resolution

**Objective**: Resolve target files and detect available toolchain.

**Workflow**:

1. **Resolve target files**:

   | Input Type | Resolution Method |
   |------------|-------------------|
   | Glob pattern | Direct Glob |
   | Directory | Glob `<dir>/**/*.{ts,tsx,js,jsx,py,go,java,rs}` |

   If no source files found -> report empty, complete task cleanly.

2. **Detect toolchain availability**:

   | Tool | Detection Method |
   |------|------------------|
   | tsc | `tsconfig.json` exists |
   | eslint | `.eslintrc*` or `eslint.config.*` or `eslint` in package.json |
   | semgrep | `.semgrep.yml` exists |
   | ruff | `pyproject.toml` exists + ruff command available |
   | mypy | mypy command available + `pyproject.toml` exists |
   | npmAudit | `package-lock.json` exists |

**Success**: Target files resolved, toolchain detected.

---

### Phase 3: Scan Execution

**Objective**: Execute toolchain + semantic scans.

**Strategy selection**:

| Condition | Strategy |
|-----------|----------|
| Quick mode | Single inline CLI call, max 20 findings |
| Standard mode | Sequential: toolchain-scan -> semantic-scan |

**Quick Mode**:

1. Execute single CLI call with analysis mode
2. Parse JSON response for findings (max 20)
3. Skip toolchain execution

**Standard Mode**:

1. Delegate to `commands/toolchain-scan.md` -> produces `toolchain-findings.json`
2. Delegate to `commands/semantic-scan.md` -> produces `semantic-findings.json`

**Success**: Findings collected from toolchain and/or semantic scan.

---

### Phase 4: Aggregate & Deduplicate

**Objective**: Merge findings, assign IDs, write results.

**Deduplication rules**:

| Key | Rule |
|-----|------|
| Duplicate detection | Same file + line + dimension = duplicate |
| Priority | Keep first occurrence |

**ID Assignment**:

| Dimension | Prefix | Example ID |
|-----------|--------|------------|
| security | SEC | SEC-001 |
| correctness | COR | COR-001 |
| performance | PRF | PRF-001 |
| maintainability | MNT | MNT-001 |

**Output schema** (`scan-results.json`):

| Field | Type | Description |
|-------|------|-------------|
| scan_date | string | ISO timestamp |
| target | string | Scan target |
| dimensions | array | Enabled dimensions |
| quick_mode | boolean | Quick mode flag |
| total_findings | number | Total count |
| by_severity | object | Count per severity |
| by_dimension | object | Count per dimension |
| findings | array | Finding objects |

**Each finding**:

| Field | Type | Description |
|-------|------|-------------|
| id | string | Dimension-prefixed ID |
| dimension | string | security/correctness/performance/maintainability |
| category | string | Category within dimension |
| severity | string | critical/high/medium/low |
| title | string | Short title |
| description | string | Detailed description |
| location | object | {file, line} |
| source | string | toolchain/llm |
| suggested_fix | string | Optional fix hint |
| effort | string | low/medium/high |
| confidence | string | low/medium/high |

**Success**: `scan-results.json` written with unique findings.

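The dedup key and prefix scheme reduce to a few lines. A sketch (`aggregate` is a hypothetical helper name; the sample findings are invented):

```javascript
const PREFIX = { security: 'SEC', correctness: 'COR', performance: 'PRF', maintainability: 'MNT' }

// Drops duplicates (same file + line + dimension, first occurrence wins)
// and assigns dimension-prefixed sequential IDs.
function aggregate(findings) {
  const seen = new Set()
  const counters = {}
  const out = []
  for (const f of findings) {
    const key = `${f.location.file}:${f.location.line}:${f.dimension}`
    if (seen.has(key)) continue
    seen.add(key)
    const p = PREFIX[f.dimension]
    counters[p] = (counters[p] || 0) + 1
    out.push({ id: `${p}-${String(counters[p]).padStart(3, '0')}`, ...f })
  }
  return out
}

const result = aggregate([
  { dimension: 'security', location: { file: 'a.ts', line: 5 }, source: 'tool:semgrep' },
  { dimension: 'security', location: { file: 'a.ts', line: 5 }, source: 'llm' },  // duplicate, dropped
  { dimension: 'correctness', location: { file: 'a.ts', line: 9 }, source: 'tool:tsc' }
])
console.log(result.map(f => f.id))  // [ 'SEC-001', 'COR-001' ]
```

Because toolchain findings are appended before semantic findings, "keep first occurrence" means tool-sourced findings win ties over LLM ones.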
---

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

**Objective**: Report findings to coordinator.

**Workflow**:

1. Update .msg/meta.json with scan results summary
2. Build top findings summary (critical/high, max 10)
3. Log via team_msg with `[scanner]` prefix
4. SendMessage to coordinator
5. TaskUpdate completed
6. Loop to Phase 1 for next task

**Report content**:

| Field | Value |
|-------|-------|
| Target | Scanned path |
| Mode | quick/standard |
| Findings count | Total |
| Dimension summary | SEC:n COR:n PRF:n MNT:n |
| Top findings | Critical/high items |
| Output path | scan-results.json location |

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No source files match target | Report empty, complete task cleanly |
| All toolchain tools unavailable | Skip toolchain, run semantic-only |
| CLI semantic scan fails | Log warning, use toolchain results only |
| Quick mode CLI timeout | Return partial or empty findings |
| Toolchain tool crashes | Skip that tool, continue with others |
| Session folder missing | Re-create scan subdirectory |
| Context/Plan file not found | Notify coordinator, request location |
## Architecture Overview

```
+---------------------------------------------------+
| Skill(skill="team-roadmap-dev")                   |
| args="<task-description>"                         |
+-------------------+-------------------------------+
                    |
     Orchestration Mode (auto -> coordinator)
                    |
            Coordinator (inline)
          Phase 0-5 orchestration
                    |
        +-----------+-----------+
        v           v           v
      [tw]        [tw]        [tw]
    planner     executor    verifier

(tw) = team-worker agent
```

## Command Architecture

### Role Registry

| Role | Spec | Task Prefix | Inner Loop |
|------|------|-------------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | - |
| planner | [role-specs/planner.md](role-specs/planner.md) | PLAN-* | true |
| executor | [role-specs/executor.md](role-specs/executor.md) | EXEC-* | true |
| verifier | [role-specs/verifier.md](role-specs/verifier.md) | VERIFY-* | true |

> **⚠️ COMPACT PROTECTION**: Role files are execution documents, not reference material. After context compression, when only a summary of the role instructions remains, you **must immediately `Read` the corresponding role.md to reload it before continuing**. Never execute any Phase from the summary alone.

## Coordinator Spawn Template

### v5 Worker Spawn (all roles)

When coordinator spawns workers, use `team-worker` agent with role-spec path:

```
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker",
  team_name: "roadmap-dev",
  name: "<role>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-roadmap-dev/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: roadmap-dev
requirement: <task-description>
inner_loop: true

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`
})
```

**All roles** (planner, executor, verifier): Set `inner_loop: true`.

---

## Completion Action

When the pipeline completes (all phases done, coordinator Phase 5):

```
AskUserQuestion({
  questions: [{
    question: "Roadmap Dev pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
      { label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
      { label: "Export Results", description: "Export deliverables to a specified location, then clean" }
    ]
  }]
})
```

| Choice | Action |
|--------|--------|
| Archive & Clean | Update session status="completed" -> TeamDelete(roadmap-dev) -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions: `Skill(skill="team-roadmap-dev", args="resume")` |
| Export Results | AskUserQuestion for target path -> copy deliverables -> Archive & Clean |

## Session Directory

```
# Command: Monitor

Handle all coordinator monitoring events for the roadmap-dev pipeline using the async Spawn-and-Stop pattern. Multi-phase execution with gap closure is expressed as event-driven state machine transitions. One operation per invocation, then STOP and wait for the next callback.

## Constants

| Key | Value | Description |
|-----|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Coordinator does one operation then STOPS |
| WORKER_AGENT | team-worker | All workers spawned as team-worker agents |
| MAX_GAP_ITERATIONS | 3 | Maximum gap closure re-plan/exec/verify cycles per phase |

### Role-Worker Map

| Prefix | Role | Role Spec | inner_loop |
|--------|------|-----------|------------|
| PLAN | planner | `.claude/skills/team-roadmap-dev/role-specs/planner.md` | true (subagents: cli-explore-agent, action-planning-agent) |
| EXEC | executor | `.claude/skills/team-roadmap-dev/role-specs/executor.md` | true (subagents: code-developer) |
| VERIFY | verifier | `.claude/skills/team-roadmap-dev/role-specs/verifier.md` | true |

### Pipeline Structure

Per-phase task chain: `PLAN-{phase}01 -> EXEC-{phase}01 -> VERIFY-{phase}01`

Gap closure creates: `PLAN-{phase}0N -> EXEC-{phase}0N -> VERIFY-{phase}0N` (N = iteration + 1)

Multi-phase: Phases execute sequentially. Each phase completes its full PLAN/EXEC/VERIFY cycle (including gap closure) before the next phase is dispatched.

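The task-ID convention can be made concrete with a small helper. A sketch, assuming single-digit phase numbers and fewer than nine gap iterations as in the examples above (`taskChain` is a hypothetical helper name):

```javascript
// Builds the per-phase chain PLAN-{phase}0N -> EXEC-{phase}0N -> VERIFY-{phase}0N,
// where N = gapIteration + 1 (iteration 0 is the initial chain).
function taskChain(phase, gapIteration = 0) {
  const suffix = `${phase}0${gapIteration + 1}`
  return ['PLAN', 'EXEC', 'VERIFY'].map(p => `${p}-${suffix}`)
}

console.log(taskChain(2))     // [ 'PLAN-201', 'EXEC-201', 'VERIFY-201' ]
console.log(taskChain(2, 1))  // [ 'PLAN-202', 'EXEC-202', 'VERIFY-202' ]
```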
### State Machine Coordinates

The coordinator tracks its position using these state variables in `meta.json`:

```
session.coordinates = {
  current_phase: <number>,   // Active phase (1-based)
  total_phases: <number>,    // Total phases from roadmap
  gap_iteration: <number>,   // Current gap closure iteration within phase (0 = initial)
  step: <string>,            // Current step: "plan" | "exec" | "verify" | "gap_closure" | "transition"
  status: <string>           // "running" | "paused" | "complete"
}
```

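Between invocations the coordinator only mutates these coordinates. The transition applied on each worker callback can be sketched as a pure function, with MAX_GAP_ITERATIONS = 3 as in the Constants table (a simplified sketch: interactive gates, task dispatch, and worker spawning are omitted):

```javascript
const MAX_GAP_ITERATIONS = 3

// Returns the next coordinates after a completed step.
// `gapsFound` only matters when the verify step completes.
function advance(c, completedStep, gapsFound = 0) {
  const next = { ...c }
  if (completedStep === 'plan') {
    next.step = 'exec'
  } else if (completedStep === 'exec') {
    next.step = 'verify'
  } else if (completedStep === 'verify') {
    if (gapsFound > 0 && c.gap_iteration < MAX_GAP_ITERATIONS) {
      next.gap_iteration = c.gap_iteration + 1   // gap closure: new PLAN/EXEC/VERIFY chain
      next.step = 'plan'
    } else if (c.current_phase < c.total_phases) {
      next.current_phase = c.current_phase + 1   // phase transition
      next.gap_iteration = 0
      next.step = 'plan'
    } else {
      next.status = 'complete'                   // all phases done
    }
  }
  return next
}

let c = { current_phase: 1, total_phases: 2, gap_iteration: 0, step: 'plan', status: 'running' }
c = advance(c, 'plan')        // step -> exec
c = advance(c, 'exec')        // step -> verify
c = advance(c, 'verify', 2)   // gaps found -> gap_iteration 1, back to plan
c = advance(advance(advance(c, 'plan'), 'exec'), 'verify', 0)
console.log(c.current_phase, c.gap_iteration, c.step)  // 2 0 plan
```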
## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/.msg/meta.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |
| Coordinates | session.coordinates | Yes |
| Config | `<session-folder>/config.json` | Yes |
| State | `<session-folder>/state.md` | Yes |

```
Load session state:
1. Read <session-folder>/.msg/meta.json -> session
2. Read <session-folder>/config.json -> config
3. TaskList() -> allTasks
4. Extract coordinates from session (current_phase, gap_iteration, step)
5. Extract active_workers[] from session (default: [])
6. Parse $ARGUMENTS to determine trigger event
```

## Phase 3: Event Handlers

### Wake-up Source Detection

Parse `$ARGUMENTS` to determine handler:

| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[planner]`, `[executor]`, or `[verifier]` | handleCallback |
| 2 | Contains "check" or "status" | handleCheck |
| 3 | Contains "resume", "continue", or "next" | handleResume |
| 4 | Pipeline detected as complete (all phases done) | handleComplete |
| 5 | None of the above (initial spawn after dispatch) | handleSpawnNext |

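The priority table is a first-match router over the wake-up text. A sketch (`routeWakeup` is a hypothetical helper name; the `pipelineComplete` flag stands in for the all-phases-done check):

```javascript
// First matching rule wins, mirroring the priority order in the table above.
function routeWakeup(args, pipelineComplete = false) {
  if (/\[(planner|executor|verifier)\]/.test(args)) return 'handleCallback'
  if (/\b(check|status)\b/i.test(args)) return 'handleCheck'
  if (/\b(resume|continue|next)\b/i.test(args)) return 'handleResume'
  if (pipelineComplete) return 'handleComplete'
  return 'handleSpawnNext'
}

console.log(routeWakeup('[planner] PLAN-101 completed'))  // handleCallback
console.log(routeWakeup('status please'))                 // handleCheck
console.log(routeWakeup('', true))                        // handleComplete
console.log(routeWakeup(''))                              // handleSpawnNext
```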
---

### Handler: handleCallback

Worker completed a task. Determine which step completed via prefix, apply pipeline logic, advance.

```
Receive callback from [<role>]
+- Find matching active worker by role tag
+- Is this a progress update (not final)? (Inner Loop intermediate)
|  +- YES -> Update session state -> STOP
+- Task status = completed?
|  +- YES -> remove from active_workers -> update session
|  |  +- Determine completed step from task prefix:
|  |  |
|  |  +- PLAN-* completed:
|  |  |  +- Update coordinates.step = "plan_done"
|  |  |  +- Is this initial plan (gap_iteration === 0)?
|  |  |  |  +- YES + config.gates.plan_check?
|  |  |  |  |  +- AskUserQuestion:
|  |  |  |  |       question: "Phase <N> plan ready. Proceed with execution?"
|  |  |  |  |       header: "Plan Review"
|  |  |  |  |       options:
|  |  |  |  |       - "Proceed": -> handleSpawnNext (spawns EXEC)
|  |  |  |  |       - "Revise": Create new PLAN task with incremented suffix
|  |  |  |  |         blockedBy: [] (immediate), -> handleSpawnNext
|  |  |  |  |       - "Skip phase": Delete all phase tasks
|  |  |  |  |         -> advanceToNextPhase
|  |  |  |  +- NO (gap closure plan) -> handleSpawnNext (spawns EXEC)
|  |  |  +- -> handleSpawnNext
|  |  |
|  |  +- EXEC-* completed:
|  |  |  +- Update coordinates.step = "exec_done"
|  |  |  +- -> handleSpawnNext (spawns VERIFY)
|  |  |
|  |  +- VERIFY-* completed:
|  |     +- Update coordinates.step = "verify_done"
|  |     +- Read verification result from:
|  |     |    <session-folder>/phase-<N>/verification.md
|  |     +- Parse gaps from verification
|  |     +- Gaps found?
|  |        +- NO -> Phase passed
|  |        |  +- -> advanceToNextPhase
|  |        +- YES + gap_iteration < MAX_GAP_ITERATIONS?
|  |        |  +- -> triggerGapClosure
|  |        +- YES + gap_iteration >= MAX_GAP_ITERATIONS?
|  |           +- AskUserQuestion:
|  |                question: "Phase <N> still has <count> gaps after <max> attempts."
|  |                header: "Gap Closure Limit"
|  |                options:
|  |                - "Continue anyway": Accept, -> advanceToNextPhase
|  |                - "Retry once more": Increment max, -> triggerGapClosure
|  |                - "Stop": -> pauseSession
|  |
|  +- NO -> progress message -> STOP
+- No matching worker found
   +- Scan all active workers for completed tasks
      +- Found completed -> process each (same logic above) -> handleSpawnNext
      +- None completed -> STOP
```

**Sub-procedure: advanceToNextPhase**

```
advanceToNextPhase:
+- Update state.md: mark current phase completed
+- current_phase < total_phases?
|  +- YES:
|  |  +- config.mode === "interactive"?
|  |  |  +- AskUserQuestion:
|  |  |       question: "Phase <N> complete. Proceed to phase <N+1>?"
|  |  |       header: "Phase Transition"
|  |  |       options:
|  |  |       - "Proceed": Dispatch next phase tasks, -> handleSpawnNext
|  |  |       - "Review results": Output phase summary, re-ask
|  |  |       - "Stop": -> pauseSession
|  |  +- Auto mode: Dispatch next phase tasks directly
|  |  +- Update coordinates:
|  |       current_phase++, gap_iteration=0, step="plan"
|  |  +- Dispatch new phase tasks (PLAN/EXEC/VERIFY with blockedBy)
|  |  +- -> handleSpawnNext
|  +- NO -> All phases done -> handleComplete
```

**Sub-procedure: triggerGapClosure**

```
triggerGapClosure:
+- Increment coordinates.gap_iteration
+- suffix = "0" + (gap_iteration + 1)
+- phase = coordinates.current_phase
+- Read gaps from verification.md
+- Log: team_msg gap_closure
+- Create gap closure task chain:
|
|  TaskCreate: PLAN-{phase}{suffix}
|    subject: "PLAN-{phase}{suffix}: Gap closure for phase {phase} (iteration {gap_iteration})"
|    description: includes gap list, references to previous verification
|    blockedBy: [] (immediate start)
|
|  TaskCreate: EXEC-{phase}{suffix}
|    subject: "EXEC-{phase}{suffix}: Execute gap fixes for phase {phase}"
|    blockedBy: [PLAN-{phase}{suffix}]
|
|  TaskCreate: VERIFY-{phase}{suffix}
|    subject: "VERIFY-{phase}{suffix}: Verify gap closure for phase {phase}"
|    blockedBy: [EXEC-{phase}{suffix}]
|
+- Set owners: planner, executor, verifier
+- Update coordinates.step = "gap_closure"
+- -> handleSpawnNext (picks up the new PLAN task)
```
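The suffix rule and the resulting task chain can be sketched as a small helper. A minimal sketch: it assumes gap iterations stay in single digits, which holds since MAX_GAP_ITERATIONS is 3 in this skill.

```javascript
// Sketch: subjects for one gap-closure chain. The "0" + (iteration + 1)
// suffix matches the sub-procedure above (iteration 1 -> "02", etc.).
function gapClosureChain(phase, gapIteration) {
  const suffix = `0${gapIteration + 1}`;
  return [`PLAN-${phase}${suffix}`, `EXEC-${phase}${suffix}`, `VERIFY-${phase}${suffix}`];
}
```

Each subject in the returned chain blocks on its predecessor via `blockedBy`, exactly as the three TaskCreate calls above describe.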

**Sub-procedure: pauseSession**

```
pauseSession:
+- Save coordinates to meta.json (phase, step, gap_iteration)
+- Update coordinates.status = "paused"
+- Update state.md with pause marker
+- team_msg log -> session_paused
+- Output: "Session paused at phase <N>, step <step>. Resume with 'resume'."
+- STOP
```

---

### Handler: handleSpawnNext

Find all ready tasks, spawn team-worker agent in background, update session, STOP.

```
Collect task states from TaskList()
+- completedSubjects: status = completed
+- inProgressSubjects: status = in_progress
+- readySubjects: status = pending
     AND (no blockedBy OR all blockedBy in completedSubjects)

Ready tasks found?
+- NONE + work in progress -> report waiting -> STOP
+- NONE + nothing in progress:
|  +- More phases to dispatch? -> advanceToNextPhase
|  +- No more phases -> handleComplete
+- HAS ready tasks -> take first ready task:
   +- Is task owner an Inner Loop role AND that role already has active_worker?
   |  +- YES -> SKIP spawn (existing worker picks it up via inner loop)
   |  +- NO -> normal spawn below
   +- Determine role from prefix:
   |    PLAN-* -> planner
   |    EXEC-* -> executor
   |    VERIFY-* -> verifier
   +- TaskUpdate -> in_progress
   +- team_msg log -> task_unblocked (team_session_id=<session-id>)
   +- Spawn team-worker (see spawn call below)
   +- Add to session.active_workers
   +- Update session file
   +- Output: "[coordinator] Spawned <role> for <subject>"
   +- STOP
```
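The readySubjects computation and the prefix-to-role mapping above can be sketched as plain functions. The task shape `{ subject, status, blockedBy }` is an assumption mirroring the tree.

```javascript
// Sketch of the ready-task filter above: pending tasks whose blockers
// are all completed.
function readySubjects(tasks) {
  const completed = new Set(
    tasks.filter(t => t.status === "completed").map(t => t.subject));
  return tasks
    .filter(t => t.status === "pending" &&
                 (t.blockedBy || []).every(dep => completed.has(dep)))
    .map(t => t.subject);
}

// Prefix-to-role mapping from the tree above.
function roleFor(subject) {
  if (subject.startsWith("PLAN-")) return "planner";
  if (subject.startsWith("EXEC-")) return "executor";
  if (subject.startsWith("VERIFY-")) return "verifier";
  return null;
}
```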

**Spawn worker tool call** (one per ready task):

```
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker for <subject>",
  team_name: "roadmap-dev",
  name: "<role>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-roadmap-dev/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: roadmap-dev
requirement: <task-description>
inner_loop: <true for Inner Loop roles, else false>

## Current Task
- Task ID: <task-id>
- Task: <subject>
- Phase: <current_phase>
- Gap Iteration: <gap_iteration>

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`
})
```

#### Spawn Executor

```javascript
function spawnExecutor(phase, gapIteration, sessionId, sessionFolder) {
  const suffix = gapIteration === 0 ? "01" : `0${gapIteration + 1}`

  Task({
    subagent_type: "team-worker",
    description: `Spawn executor worker for phase ${phase}`,
    team_name: "roadmap-dev",
    name: "executor",
    prompt: `## Role Assignment
role: executor
role_spec: .claude/skills/team-roadmap-dev/role-specs/executor.md
session: ${sessionFolder}
session_id: ${sessionId}
team_name: roadmap-dev
requirement: Phase ${phase} execution
inner_loop: false

## Current Task
- Task: EXEC-${phase}${suffix}
- Phase: ${phase}

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`,
    run_in_background: false // CRITICAL: Stop-Wait
  })
}
```

#### Spawn Verifier

```javascript
function spawnVerifier(phase, gapIteration, sessionId, sessionFolder) {
  const suffix = gapIteration === 0 ? "01" : `0${gapIteration + 1}`

  Task({
    subagent_type: "team-worker",
    description: `Spawn verifier worker for phase ${phase}`,
    team_name: "roadmap-dev",
    name: "verifier",
    prompt: `## Role Assignment
role: verifier
role_spec: .claude/skills/team-roadmap-dev/role-specs/verifier.md
session: ${sessionFolder}
session_id: ${sessionId}
team_name: roadmap-dev
requirement: Phase ${phase} verification
inner_loop: false

## Current Task
- Task: VERIFY-${phase}${suffix}
- Phase: ${phase}

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`,
    run_in_background: false // CRITICAL: Stop-Wait
  })
}
```

### Step 4: Gap Closure

```javascript
function triggerGapClosure(phase, iteration, gaps, sessionId, sessionFolder) {
  const suffix = `0${iteration + 1}`

  // Log gap closure initiation
  mcp__ccw-tools__team_msg({
    operation: "log", team: sessionId, // MUST be session ID (e.g., RD-xxx-date), NOT team name
    from: "coordinator", to: "planner",
    type: "gap_closure",
    ref: `${sessionFolder}/phase-${phase}/verification.md`
  })

  // Create new task chain for gap closure
  // PLAN-{phase}{suffix}: re-plan focusing on gaps only
  TaskCreate({
    subject: `PLAN-${phase}${suffix}: Gap closure for phase ${phase} (iteration ${iteration})`,
    description: `[coordinator] Gap closure re-planning for phase ${phase}.

## Session
- Folder: ${sessionFolder}
- Phase: ${phase}
- Gap Iteration: ${iteration}

## Gaps to Address
${gaps.map(g => `- ${g}`).join('\n')}

## Reference
- Original verification: ${sessionFolder}/phase-${phase}/verification.md
- Previous plans: ${sessionFolder}/phase-${phase}/plan-*.md

## Instructions
1. Focus ONLY on the listed gaps -- do not re-plan completed work
2. Create ${sessionFolder}/phase-${phase}/plan-${suffix}.md for gap fixes
3. TaskUpdate completed when gap plan is written`,
    activeForm: `Re-planning phase ${phase} gaps (iteration ${iteration})`
  })

  // EXEC and VERIFY tasks follow the same pattern with blockedBy
  // (same as dispatch.md Step 4 and Step 5, with gap suffix)
}
```

### Step 5: State Updates

```javascript
function updateStatePhaseComplete(phase, totalPhases, sessionFolder) {
  // Update current phase status
  Edit(`${sessionFolder}/state.md`, {
    old_string: `- Phase: ${phase}\n- Status: in_progress`,
    new_string: `- Phase: ${phase}\n- Status: completed\n- Completed: ${new Date().toISOString().slice(0, 19)}`
  })

  // If more phases remain, set next phase as ready
  const nextPhase = phase + 1
  if (nextPhase <= totalPhases) {
    // Append next phase readiness
    Edit(`${sessionFolder}/state.md`, {
      old_string: `- Phase: ${phase}\n- Status: completed`,
      new_string: `- Phase: ${phase}\n- Status: completed\n\n- Phase: ${nextPhase}\n- Status: ready_to_dispatch`
    })
  }
}
```

### Step 6: Completion

```javascript
// All phases done -- return control to coordinator Phase 5 (Report + Persist)
mcp__ccw-tools__team_msg({
  operation: "log", team: sessionId, // MUST be session ID (e.g., RD-xxx-date), NOT team name
  from: "coordinator", to: "all",
  type: "project_complete",
  ref: `${sessionFolder}/roadmap.md`
})
```

## Gap Closure Loop Diagram

```
VERIFY-{N}01 → gaps_found?
│ NO → Phase complete → next phase
│ YES ↓
PLAN-{N}02 (gaps only) → EXEC-{N}02 → VERIFY-{N}02
│ gaps_found?
│ NO → Phase complete
│ YES ↓
PLAN-{N}03 → EXEC-{N}03 → VERIFY-{N}03
│ gaps_found?
│ NO → Phase complete
│ YES → Max iterations (3) → ask user
```

---

### Handler: handleCheck

Read-only status report. No pipeline advancement.

**Output format**:

```
[coordinator] Roadmap Pipeline Status
[coordinator] Phase: <current>/<total> | Gap Iteration: <N>/<max>
[coordinator] Progress: <completed>/<total tasks> (<percent>%)

[coordinator] Current Phase <N> Graph:
  PLAN-{N}01: <status-icon> <summary>
  EXEC-{N}01: <status-icon> <summary>
  VERIFY-{N}01: <status-icon> <summary>
  [PLAN-{N}02: <status-icon> (gap closure #1)]
  [EXEC-{N}02: <status-icon>]
  [VERIFY-{N}02: <status-icon>]

  done=completed >>>=running o=pending x=deleted .=not created

[coordinator] Phase Summary:
  Phase 1: completed
  Phase 2: in_progress (step: exec)
  Phase 3: not started

[coordinator] Active Workers:
  > <subject> (<role>) - running [inner-loop: N/M tasks done]

[coordinator] Ready to spawn: <subjects>
[coordinator] Coordinates: phase=<N> step=<step> gap=<iteration>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```

## Forbidden Patterns

| Pattern | Why Forbidden | Alternative |
|---------|---------------|-------------|
| `setTimeout` / `sleep` | Models have no time concept | Synchronous Task() return |
| `setInterval` / polling loop | Wastes tokens, unreliable | Stop-Wait spawn pattern |
| `TaskOutput` with sleep polling | Indirect, fragile | `run_in_background: false` |
| `while (!done) { check() }` | Busy wait, no progress | Sequential synchronous calls |

---

### Handler: handleResume

Check active worker completion, process results, advance pipeline. Also handles resume from a paused state.

```
Check coordinates.status:
+- "paused" -> Restore coordinates, resume from saved position
|    Reset coordinates.status = "running"
|    -> handleSpawnNext (picks up where it left off)
+- "running" -> Normal resume:
     Load active_workers from session
     +- No active workers -> handleSpawnNext
     +- Has active workers -> check each:
        +- status = completed -> mark done, remove from active_workers, log
        +- status = in_progress -> still running, log
        +- other status -> worker failure -> reset to pending

After processing:
+- Some completed -> handleSpawnNext
+- All still running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```
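The per-worker check above amounts to a three-way partition. A sketch, where `statusOf` stands in for a TaskGet lookup and the worker shape is an assumption:

```javascript
// Sketch: classify active workers by their task status, as in the
// "check each" branch above.
function triageWorkers(activeWorkers, statusOf) {
  const done = [], running = [], failed = [];
  for (const w of activeWorkers) {
    const s = statusOf(w.subject);
    if (s === "completed") done.push(w);
    else if (s === "in_progress") running.push(w);
    else failed.push(w); // unexpected status -> worker failure path
  }
  return { done, running, failed };
}
```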
---

### Handler: handleComplete

All phases done. Generate final project summary and finalize session.

```
All phases completed (no pending, no in_progress across all phases)
+- Generate project-level summary:
|  - Roadmap overview (phases completed)
|  - Per-phase results:
|    - Gap closure iterations used
|    - Verification status
|    - Key deliverables
|  - Overall stats (tasks completed, phases, total gap iterations)
|
+- Update session:
|    coordinates.status = "complete"
|    session.completed_at = <timestamp>
|    Write meta.json
|
+- Update state.md: mark all phases completed
+- team_msg log -> project_complete
+- Output summary to user
+- STOP
```

---

### Worker Failure Handling

When a worker has an unexpected status (not completed, not in_progress):

1. Reset task -> pending via TaskUpdate
2. Remove from active_workers
3. Log via team_msg (type: error)
4. Report to user: task reset, will retry on next resume

## Phase 4: State Persistence

After every handler action, before STOP:

| Check | Action |
|-------|--------|
| Coordinates updated | current_phase, step, gap_iteration reflect actual state |
| Session state consistent | active_workers matches TaskList in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| Meta.json updated | Write updated session state and coordinates |
| State.md updated | Phase progress reflects actual completion |
| Completion detection | All phases done + no pending + no in_progress -> handleComplete |

```
Persist:
1. Update coordinates in meta.json
2. Reconcile active_workers with actual TaskList states
3. Remove entries for completed/deleted tasks
4. Write updated meta.json
5. Update state.md if phase status changed
6. Verify consistency
7. STOP (wait for next callback)
```
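Steps 2-3 of the persist sequence can be sketched as a reconciliation pass. The worker and task shapes here are assumptions mirroring the checklist above:

```javascript
// Sketch: drop active_workers entries whose task is no longer in_progress,
// so the session file matches TaskList.
function reconcileWorkers(activeWorkers, tasks) {
  const inProgress = new Set(
    tasks.filter(t => t.status === "in_progress").map(t => t.subject));
  return activeWorkers.filter(w => inProgress.has(w.subject));
}
```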
## State Machine Diagram

```
[dispatch] -> PLAN-{N}01 spawned
      |
[planner callback]
      |
plan_check gate? --YES--> AskUser --> "Revise" --> new PLAN task --> [spawn]
      |                              "Skip" --> advanceToNextPhase
      | "Proceed" / no gate
      v
EXEC-{N}01 spawned
      |
[executor callback]
      |
      v
VERIFY-{N}01 spawned
      |
[verifier callback]
      |
gaps found? --NO--> advanceToNextPhase
      |
YES + iteration < MAX
      |
      v
triggerGapClosure:
  PLAN-{N}02 -> EXEC-{N}02 -> VERIFY-{N}02
      |
[repeat verify check]
      |
gaps found? --NO--> advanceToNextPhase
      |
YES + iteration >= MAX
      |
      v
AskUser: "Continue anyway" / "Retry" / "Stop"

advanceToNextPhase:
+- phase < total? --YES--> interactive gate? --> dispatch phase+1 --> [spawn PLAN]
+- phase = total? --> handleComplete
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Worker Task() throws error | Log error, retry once. If still fails, report to user |
| Session file not found | Error, suggest re-initialization |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running, has pending) | Check blockedBy chains, report to user |
| Verification file missing | Treat as gap -- verifier may have crashed, re-spawn |
| Phase dispatch fails | Check roadmap integrity, report to user |
| Max gap iterations exceeded | Present to user with gap details, ask: continue / retry / stop |
| User chooses "Stop" at any gate | Pause session with coordinates, exit cleanly |

# Command: implement

Wave-based task execution using the code-developer subagent. Reads IMPL-*.json task files, computes execution waves from the dependency graph, and executes sequentially by wave with parallel tasks within each wave.

## Purpose

Read IMPL-*.json task files for the current phase, compute wave groups from the depends_on graph, and execute each task by delegating to a code-developer subagent. Produce summary-NN.md per task with structured YAML frontmatter for the verifier and cross-task context.

## When to Use

- Phase 3 of executor execution (after loading tasks, before self-validation)
- Called once per EXEC-* task

## Strategy

Compute waves from the dependency graph (topological sort). Waves run sequentially; tasks within each wave run in parallel. Each task is delegated to a code-developer subagent with the full task JSON plus prior summary context. After each task completes, a summary is written. After each wave completes, wave progress is reported.

## Parameters

| Parameter | Source | Description |
|-----------|--------|-------------|
| `sessionFolder` | From EXEC-* task description | Session artifact directory |
| `phaseNumber` | From EXEC-* task description | Phase number (1-based) |
| `tasks` | From executor Phase 2 | Parsed task JSON objects |
| `waves` | From executor Phase 2 | Wave-grouped task map |
| `waveNumbers` | From executor Phase 2 | Sorted wave number array |
| `priorSummaries` | From executor Phase 2 | Summaries from earlier phases |

## Execution Steps

### Step 1: Compute Waves from Dependency Graph

```javascript
// Tasks loaded in executor Phase 2 from .task/IMPL-*.json
// Compute wave assignment from depends_on graph

function computeWaves(tasks) {
  const waveMap = {} // taskId -> waveNumber
  const assigned = new Set()
  let currentWave = 1

  while (assigned.size < tasks.length) {
    const ready = tasks.filter(t =>
      !assigned.has(t.id) &&
      (t.depends_on || []).every(d => assigned.has(d))
    )

    if (ready.length === 0 && assigned.size < tasks.length) {
      // Cycle detected -- force the first unassigned task
      const unassigned = tasks.find(t => !assigned.has(t.id))
      ready.push(unassigned)
    }

    for (const task of ready) {
      waveMap[task.id] = currentWave
      assigned.add(task.id)
    }
    currentWave++
  }

  // Group by wave
  const waves = {}
  for (const task of tasks) {
    const w = waveMap[task.id]
    if (!waves[w]) waves[w] = []
    waves[w].push(task)
  }

  return {
    waves,
    waveNumbers: Object.keys(waves).map(Number).sort((a, b) => a - b),
    totalWaves: currentWave - 1
  }
}

const { waves, waveNumbers, totalWaves } = computeWaves(tasks)
const totalTasks = tasks.length
let completedTasks = 0
```
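A compact standalone check of the wave layering (same rule as computeWaves above, minus the cycle handling). The four-task graph mirrors the Wave Execution Example later in this document:

```javascript
// Sketch: assign each task the earliest wave in which all of its
// dependencies are already assigned.
function waveOf(tasks) {
  const wave = {}, assigned = new Set();
  let w = 1;
  while (assigned.size < tasks.length) {
    const ready = tasks.filter(t => !assigned.has(t.id) &&
      (t.depends_on || []).every(d => assigned.has(d)));
    for (const t of ready) { wave[t.id] = w; assigned.add(t.id); }
    w++;
  }
  return wave;
}

const demo = [
  { id: "IMPL-201" },
  { id: "IMPL-202", depends_on: ["IMPL-201"] },
  { id: "IMPL-203", depends_on: ["IMPL-201"] },
  { id: "IMPL-204", depends_on: ["IMPL-202", "IMPL-203"] },
];
```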

### Step 2: Sequential Wave Execution

```javascript
for (const waveNum of waveNumbers) {
  const waveTasks = waves[waveNum]

  for (const task of waveTasks) {
    const startTime = Date.now()

    // 2a. Build context from prior summaries
    const contextSummaries = []

    // From earlier phases
    for (const ps of priorSummaries) {
      contextSummaries.push(ps.content)
    }

    // From earlier waves in this phase
    for (const earlierWave of waveNumbers.filter(w => w < waveNum)) {
      for (const earlierTask of waves[earlierWave]) {
        try {
          const summaryFile = `${sessionFolder}/phase-${phaseNumber}/summary-${earlierTask.id}.md`
          contextSummaries.push(Read(summaryFile))
        } catch {}
      }
    }

    const contextSection = contextSummaries.length > 0
      ? `## Prior Context\n\n${contextSummaries.join('\n\n---\n\n')}`
      : "## Prior Context\n\nNone (first task in first wave)."

    // 2b. Build implementation prompt from task JSON
    const filesSection = (task.files || [])
      .map(f => `- \`${f.path}\` (${f.action}): ${f.change}`)
      .join('\n')

    const stepsSection = (task.implementation || [])
      .map((step, i) => typeof step === 'string' ? `${i + 1}. ${step}` : `${i + 1}. ${step.step}: ${step.description}`)
      .join('\n')

    const convergenceSection = task.convergence
      ? `## Success Criteria\n${(task.convergence.criteria || []).map(c => `- ${c}`).join('\n')}\n\n**Verification**: ${task.convergence.verification || 'N/A'}`
      : ''

    // 2c. Delegate to code-developer subagent
    const implResult = Task({
      subagent_type: "code-developer",
      run_in_background: false,
      prompt: `Implement the following task. Write production-quality code following existing patterns.

## Task: ${task.id} - ${task.title}

${task.description}

## Files
${filesSection}

## Implementation Steps
${stepsSection}

${convergenceSection}

${contextSection}

## Implementation Rules
- Follow existing code patterns and conventions in the project
- Write clean, minimal code that satisfies the task requirements
- Create all files listed with action "create"
- Modify files listed with action "modify" as described
- Handle errors appropriately
- Do NOT add unnecessary features beyond what the task specifies
- Do NOT modify files outside the task scope unless absolutely necessary

## Output
After implementation, report:
1. Files created or modified (with brief description of changes)
2. Key decisions made during implementation
3. Any deviations from the task (and why)
4. Capabilities provided (exports, APIs, components)
5. Technologies/patterns used`
    })

    const duration = Math.round((Date.now() - startTime) / 60000)

    // 2d. Write summary
    const summaryPath = `${sessionFolder}/phase-${phaseNumber}/summary-${task.id}.md`
    const affectedPaths = (task.files || []).map(f => f.path)

    Write(summaryPath, `---
phase: ${phaseNumber}
task: "${task.id}"
title: "${task.title}"
requires: [${(task.depends_on || []).map(d => `"${d}"`).join(', ')}]
provides: ["${task.id}"]
affects:
${affectedPaths.map(p => `  - "${p}"`).join('\n')}
tech-stack: []
key-files:
${affectedPaths.map(p => `  - "${p}"`).join('\n')}
key-decisions: []
patterns-established: []
convergence-met: pending
duration: ${duration}m
completed: ${new Date().toISOString().slice(0, 19)}
---

# Summary: ${task.id} - ${task.title}

## Implementation Result

${implResult || "Implementation delegated to code-developer subagent."}

## Files Affected

${affectedPaths.map(p => `- \`${p}\``).join('\n')}

## Convergence Criteria
${(task.convergence?.criteria || []).map(c => `- [ ] ${c}`).join('\n')}
`)

    completedTasks++
  }

  // 2e. Report wave progress
  mcp__ccw-tools__team_msg({
    operation: "log", session_id: sessionId,
    from: "executor",
    type: "exec_progress",
    ref: `${sessionFolder}/phase-${phaseNumber}/`
  })
}
```

### Step 3: Report Execution Complete

```javascript
mcp__ccw-tools__team_msg({
  operation: "log", session_id: sessionId,
  from: "executor",
  type: "exec_complete",
  ref: `${sessionFolder}/phase-${phaseNumber}/`
})
```

## Summary File Format

Each summary-{IMPL-ID}.md uses YAML frontmatter:

```yaml
---
phase: N
task: "IMPL-N"
title: "Task title"
requires: ["IMPL-N"]
provides: ["IMPL-N"]
affects: [paths]
tech-stack: [technologies]
key-files: [paths]
key-decisions: [decisions]
patterns-established: [patterns]
convergence-met: pending|pass|fail
duration: Xm
completed: timestamp
---
```

### Frontmatter Fields

| Field | Type | Description |
|-------|------|-------------|
| `phase` | number | Phase this summary belongs to |
| `task` | string | Task ID that was executed |
| `title` | string | Task title |
| `requires` | string[] | Dependency task IDs consumed |
| `provides` | string[] | Task ID provided to downstream |
| `affects` | string[] | File paths created or modified |
| `tech-stack` | string[] | Technologies/frameworks used |
| `key-files` | string[] | Primary files (subset of affects) |
| `key-decisions` | string[] | Decisions made during implementation |
| `patterns-established` | string[] | Patterns introduced |
| `convergence-met` | string | Whether convergence criteria passed |
| `duration` | string | Execution time |
| `completed` | string | ISO timestamp |
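A minimal sketch of splitting a summary file into frontmatter and body for downstream consumers. It assumes the exact `---` delimiters shown above and does no full YAML parsing:

```javascript
// Sketch: separate the YAML frontmatter block from the markdown body.
function splitFrontmatter(text) {
  const m = text.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  return m ? { frontmatter: m[1], body: m[2] } : { frontmatter: null, body: text };
}
```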
## Deviation Rules

| Deviation | Action | Report |
|-----------|--------|--------|
| **Bug found** in existing code | Auto-fix, continue | Log in summary key-decisions |
| **Missing critical** dependency | Add to scope, implement | Log in summary key-decisions |
| **Blocking dependency** (unresolvable) | Stop task execution | Report error to coordinator |
| **Architectural concern** | Do NOT auto-fix | Report error to coordinator, await guidance |

## Wave Execution Example

```
Phase 2, 4 tasks, 3 waves (computed from depends_on):

Wave 1: [IMPL-201 (types)] -- no dependencies
  -> delegate IMPL-201 to code-developer
  -> write summary-IMPL-201.md
  -> report: Wave 1/3 complete (1/4 tasks)

Wave 2: [IMPL-202 (API), IMPL-203 (UI)] -- depend on IMPL-201
  -> delegate IMPL-202 (loads summary-IMPL-201 as context)
  -> write summary-IMPL-202.md
  -> delegate IMPL-203 (loads summary-IMPL-201 as context)
  -> write summary-IMPL-203.md
  -> report: Wave 2/3 complete (3/4 tasks)

Wave 3: [IMPL-204 (tests)] -- depends on IMPL-202, IMPL-203
  -> delegate IMPL-204 (loads summaries 201-203 as context)
  -> write summary-IMPL-204.md
  -> report: Wave 3/3 complete (4/4 tasks)

-> report: exec_complete
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| code-developer subagent fails | Retry once. If it still fails, write an error summary and continue with the next task |
| File write conflict | Last write wins. Log in summary. Verifier will validate |
| Task references non-existent file | Check if a dependency task creates it. If yes, load its summary. If no, log an error |
| All tasks in a wave fail | Report wave failure to coordinator, attempt next wave |
| Summary write fails | Retry with Bash fallback. Critical -- verifier needs summaries |

# Executor Role

Code implementation per phase. Reads IMPL-*.json task files from the phase's .task/ directory, computes execution waves from the dependency graph, and executes sequentially by wave with parallel tasks within each wave. Each task is delegated to a code-developer subagent. Produces summary-{IMPL-ID}.md files for verifier consumption.

## Identity

- **Name**: `executor` | **Tag**: `[executor]`
- **Task Prefix**: `EXEC-*`
- **Responsibility**: Code generation

## Boundaries

### MUST

- All outputs must carry `[executor]` prefix
- Only process `EXEC-*` prefixed tasks
- Only communicate with coordinator (SendMessage)
- Delegate implementation to commands/implement.md
- Execute tasks in dependency order (sequential waves, parallel within wave)
- Write summary-{IMPL-ID}.md per task after execution
- Report wave progress to coordinator
- Work strictly within Code generation responsibility scope

### MUST NOT

- Execute work outside this role's responsibility scope
- Create plans or modify IMPL-*.json task files
- Verify implementation against must_haves (that is verifier's job)
- Create tasks for other roles (TaskCreate)
- Interact with user (AskUserQuestion)
- Process PLAN-* or VERIFY-* tasks
- Skip loading prior summaries for cross-plan context
- Communicate directly with other worker roles (must go through coordinator)
- Omit `[executor]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `implement` | [commands/implement.md](commands/implement.md) | Phase 3 | Wave-based plan execution via code-developer subagent |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `code-developer` | Subagent | executor | Code implementation per plan |
| `Read/Write` | File operations | executor | Task JSON and summary management |
| `Glob` | Search | executor | Find task files and summaries |
| `Bash` | Shell | executor | Syntax validation, lint checks |

---
|
||||
|
||||
## Message Types
|
||||
|
||||
| Type | Direction | Trigger | Description |
|
||||
|------|-----------|---------|-------------|
|
||||
| `exec_complete` | executor -> coordinator | All plans executed | Implementation done, summaries written |
|
||||
| `exec_progress` | executor -> coordinator | Wave completed | Wave N of M done |
|
||||
| `error` | executor -> coordinator | Failure | Implementation failed |
|
||||
|
||||
## Message Bus
|
||||
|
||||
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
|
||||
|
||||
```
|
||||
mcp__ccw-tools__team_msg({
|
||||
operation: "log",
|
||||
session_id: <session-id>,
|
||||
from: "executor",
|
||||
type: <message-type>,
|
||||
ref: <artifact-path>
|
||||
})
|
||||
```
|
||||
|
||||
**CLI fallback** (when MCP unavailable):
|
||||
|
||||
```
|
||||
Bash("ccw team log --session-id <session-id> --from executor --type <type> --ref <artifact-path> --json")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Execution (5-Phase)
|
||||
|
||||
### Phase 1: Task Discovery
|
||||
|
||||
> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery
|
||||
|
||||
Standard task discovery flow: TaskList -> filter by prefix `EXEC-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
|
||||
|
||||
**Resume Artifact Check**: Check whether this task's output artifact already exists:
|
||||
- All summaries exist for phase tasks -> skip to Phase 5
|
||||
- Artifact incomplete or missing -> normal Phase 2-4 execution
|
||||
|
||||
### Phase 2: Load Tasks
|
||||
|
||||
**Objective**: Load task JSONs and compute execution waves.
|
||||
|
||||
**Loading steps**:
|
||||
|
||||
| Input | Source | Required |
|
||||
|-------|--------|----------|
|
||||
| Task JSONs | <session-folder>/phase-{N}/.task/IMPL-*.json | Yes |
|
||||
| Prior summaries | <session-folder>/phase-{1..N-1}/summary-*.md | No |
|
||||
| Wisdom | <session-folder>/wisdom/ | No |
|
||||
|
||||
1. **Find task files**:
|
||||
- Glob `{sessionFolder}/phase-{phaseNumber}/.task/IMPL-*.json`
|
||||
- If no files found -> error to coordinator
|
||||
|
||||
2. **Parse all task JSONs**:
|
||||
- Read each task file
|
||||
- Extract: id, description, depends_on, files, convergence
|
||||
|
||||
3. **Compute waves from dependency graph**:
|
||||
|
||||
| Step | Action |
|
||||
|------|--------|
|
||||
| 1 | Start with wave=1, assigned=set(), waveMap={} |
|
||||
| 2 | Find tasks with all dependencies in assigned |
|
||||
| 3 | If none found but tasks remain -> force-assign first unassigned |
|
||||
| 4 | Assign ready tasks to current wave, add to assigned |
|
||||
| 5 | Increment wave, repeat until all tasks assigned |
|
||||
| 6 | Group tasks by wave number |
|
||||
|
||||
4. **Load prior summaries for cross-task context**:
|
||||
- For each prior phase, read summary files
|
||||
- Store for reference during implementation
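
The wave computation in step 3 above can be sketched as a small function. The task shape (`id`, `depends_on`) mirrors the IMPL-*.json fields; the function itself is an illustrative sketch, not the skill's shipped code:

```javascript
// Group tasks into execution waves: a task joins the earliest wave in
// which every entry in its depends_on list is already assigned. A stuck
// task is force-assigned (step 3 in the table) so a malformed graph
// cannot deadlock the executor.
function computeWaveGroups(tasks) {
  const assigned = new Set()
  const waves = []
  while (assigned.size < tasks.length) {
    let ready = tasks.filter(t =>
      !assigned.has(t.id) &&
      (t.depends_on || []).every(d => assigned.has(d))
    )
    if (ready.length === 0) {
      // Cycle or dangling dependency: break it by force-assigning
      ready = [tasks.find(t => !assigned.has(t.id))]
    }
    waves.push(ready.map(t => t.id))
    for (const t of ready) assigned.add(t.id)
  }
  return waves
}
```

Tasks inside one inner array run in parallel; the arrays themselves run sequentially.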

### Phase 3: Implement (via command)

**Objective**: Execute wave-based implementation.

Delegate to `commands/implement.md`:

| Step | Action |
|------|--------|
| 1 | For each wave (sequential): |
| 2 | For each task in wave: delegate to code-developer subagent |
| 3 | Write summary-{IMPL-ID}.md per task |
| 4 | Report wave progress |
| 5 | Continue to next wave |

**Implementation strategy selection**:

| Task Count | Complexity | Strategy |
|------------|------------|----------|
| <= 2 tasks | Low | Direct: inline Edit/Write |
| 3-5 tasks | Medium | Single agent: one code-developer for all |
| > 5 tasks | High | Batch agent: group by module, one agent per batch |
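
The thresholds above can be read as a simple selector (a sketch; the strategy names are shorthand for the table rows):

```javascript
// Pick an implementation strategy from a wave's task count
// (thresholds taken from the strategy-selection table).
function pickStrategy(taskCount) {
  if (taskCount <= 2) return "direct"        // inline Edit/Write
  if (taskCount <= 5) return "single-agent"  // one code-developer for all tasks
  return "batch-agent"                       // group by module, one agent per batch
}
```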

**Produces**: `{sessionFolder}/phase-{N}/summary-IMPL-*.md`

**Command**: [commands/implement.md](commands/implement.md)

### Phase 4: Self-Validation

**Objective**: Basic validation after implementation (NOT full verification).

**Validation checks**:

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| File existence | `test -f <path>` | All affected files exist |
| TypeScript syntax | `npx tsc --noEmit` | No TS errors |
| Lint | `npm run lint` | No critical errors |

**Validation steps**:

1. **Find summary files**: Glob `{sessionFolder}/phase-{phaseNumber}/summary-*.md`

2. **For each summary**:
   - Parse frontmatter for affected files
   - Check each file exists
   - Run syntax check for TypeScript files
   - Log errors via team_msg

3. **Run lint once for all changes** (best-effort)

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[executor]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Report message**:
```
SendMessage({
  message: "[executor] Phase <N> execution complete.
  - Tasks executed: <count>
  - Waves: <wave-count>
  - Summaries: <file-list>

  Ready for verification."
})
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No EXEC-* tasks available | Idle, wait for coordinator assignment |
| Context/Plan file not found | Notify coordinator, request location |
| Command file not found | Fall back to inline execution |
| No task JSON files found | Error to coordinator -- planner may have failed |
| code-developer subagent fails | Retry once. If still fails, log error in summary, continue with next plan |
| Syntax errors after implementation | Log in summary, continue -- verifier will catch remaining issues |
| Missing dependency from earlier wave | Error to coordinator -- dependency graph may be incorrect |
| File conflict between parallel plans | Log warning, last write wins -- verifier will validate correctness |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
| Unexpected error | Log error via team_msg, report to coordinator |

@@ -1,355 +0,0 @@

# Command: create-plans

Generate execution plans via action-planning-agent. Produces IMPL_PLAN.md, .task/IMPL-*.json, and TODO_LIST.md — the same artifact format as the workflow-plan skill.

## Purpose

Transform phase context into structured task JSONs and an implementation plan. Delegates to action-planning-agent for document generation. Produces artifacts compatible with workflow-plan's output format, enabling reuse of executor and verifier logic.

## When to Use

- Phase 3 of planner execution (after research, before self-validation)
- Called once per PLAN-* task

## Strategy

Delegate to action-planning-agent with phase context (context.md + the roadmap phase section). The agent produces task JSONs with convergence criteria (replacing the old must_haves concept), a dependency graph (replacing wave numbering), and implementation steps.

## Parameters

| Parameter | Source | Description |
|-----------|--------|-------------|
| `sessionFolder` | From PLAN-* task description | Session artifact directory |
| `phaseNumber` | From PLAN-* task description | Phase number (1-based) |

## Output Artifact Mapping (vs old plan-NN.md)

| Old (plan-NN.md) | New (IMPL-*.json) | Notes |
|-------------------|--------------------|-------|
| `plan: NN` | `id: "IMPL-N"` | Task identifier |
| `wave: N` | `depends_on: [...]` | Dependency graph replaces explicit waves |
| `files_modified: [...]` | `files: [{path, action, change}]` | Structured file list |
| `requirements: [REQ-IDs]` | `description` + `scope` | Requirements embedded in description |
| `must_haves.truths` | `convergence.criteria` | Observable behaviors → measurable criteria |
| `must_haves.artifacts` | `files` + `convergence.verification` | File checks in verification command |
| `must_haves.key_links` | `convergence.verification` | Import wiring in verification command |
| Plan body (implementation steps) | `implementation: [...]` | Step-by-step actions |
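
Under this mapping, a generated task file might look like the following (illustrative values only; the paths and criteria are hypothetical, while the field names follow the table above and the required-fields check in Step 4):

```json
{
  "id": "IMPL-101",
  "title": "Add session coordinates display",
  "description": "Render the active session coordinates in the team panel",
  "depends_on": [],
  "files": [
    { "path": "src/components/SessionCoords.tsx", "action": "create", "change": "new display component" }
  ],
  "implementation": [
    "Create SessionCoords component",
    "Wire it into the team role panel"
  ],
  "convergence": {
    "criteria": [
      "src/components/SessionCoords.tsx exists and exports SessionCoords"
    ],
    "verification": "test -f src/components/SessionCoords.tsx",
    "definition_of_done": "Session coordinates are visible in the panel"
  }
}
```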

## Execution Steps

### Step 1: Load Phase Context

```javascript
const context = Read(`${sessionFolder}/phase-${phaseNumber}/context.md`)
const roadmap = Read(`${sessionFolder}/roadmap.md`)
const config = JSON.parse(Read(`${sessionFolder}/config.json`))

// Extract phase section from roadmap
const phaseGoal = extractPhaseGoal(roadmap, phaseNumber)
const requirements = extractRequirements(roadmap, phaseNumber)
const successCriteria = extractSuccessCriteria(roadmap, phaseNumber)

// Check for gap closure context
const isGapClosure = context.includes("Gap Closure Context")

// Load prior phase summaries for cross-phase context
const priorSummaries = []
for (let p = 1; p < phaseNumber; p++) {
  try {
    const summaryFiles = Glob(`${sessionFolder}/phase-${p}/summary-*.md`)
    for (const sf of summaryFiles) {
      priorSummaries.push(Read(sf))
    }
  } catch {}
}
```

### Step 2: Prepare Output Directories

```javascript
Bash(`mkdir -p "${sessionFolder}/phase-${phaseNumber}/.task"`)
```

### Step 3: Delegate to action-planning-agent

```javascript
const taskDir = `${sessionFolder}/phase-${phaseNumber}/.task`
const implPlanPath = `${sessionFolder}/phase-${phaseNumber}/IMPL_PLAN.md`
const todoListPath = `${sessionFolder}/phase-${phaseNumber}/TODO_LIST.md`

Task({
  subagent_type: "action-planning-agent",
  run_in_background: false,
  description: `Generate phase ${phaseNumber} planning documents`,
  prompt: `
## TASK OBJECTIVE
Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for roadmap-dev session phase ${phaseNumber}.

IMPORTANT: This is PLANNING ONLY - generate planning documents, do NOT implement code.

## PHASE CONTEXT
${context}

## ROADMAP PHASE ${phaseNumber}
Goal: ${phaseGoal}

Requirements:
${requirements.map(r => `- ${r.id}: ${r.desc}`).join('\n')}

Success Criteria:
${successCriteria.map(c => `- ${c}`).join('\n')}

${isGapClosure ? `## GAP CLOSURE
This is a gap closure iteration. Only address gaps listed in context — do NOT re-plan completed work.
Existing task JSONs in ${taskDir} represent prior work. Create gap-specific tasks starting from next available ID.` : ''}

${priorSummaries.length > 0 ? `## PRIOR PHASE CONTEXT
${priorSummaries.join('\n\n---\n\n')}` : ''}

## SESSION PATHS
Output:
- Task Dir: ${taskDir}
- IMPL_PLAN: ${implPlanPath}
- TODO_LIST: ${todoListPath}

## CONTEXT METADATA
Session: ${sessionFolder}
Phase: ${phaseNumber}
Depth: ${config.depth || 'standard'}

## USER CONFIGURATION
Execution Method: agent
Preferred CLI Tool: gemini

## EXPECTED DELIVERABLES
1. Task JSON Files (${taskDir}/IMPL-*.json)
   - Unified flat schema (task-schema.json)
   - Quantified requirements with explicit counts
   - focus_paths from context.md relevant files
   - convergence criteria derived from success criteria (goal-backward)

2. Implementation Plan (${implPlanPath})
   - Phase goal and context
   - Task breakdown and execution strategy
   - Dependency graph

3. TODO List (${todoListPath})
   - Flat structure with [ ] for pending
   - Links to task JSONs

## TASK ID FORMAT
Use: IMPL-{phaseNumber}{seq} (e.g., IMPL-101, IMPL-102 for phase 1)

## CONVERGENCE CRITERIA RULES (replacing old must_haves)
Each task MUST include convergence:
- criteria: Measurable conditions derived from success criteria (goal-backward, not task-forward)
  - Include file existence checks
  - Include export/symbol presence checks
  - Include test passage checks where applicable
- verification: Executable command to verify criteria
- definition_of_done: Business-language completion definition

## CLI EXECUTION ID FORMAT
Each task: cli_execution.id = "RD-${sessionFolder.split('/').pop()}-{task_id}"

## QUALITY STANDARDS
- Task count <= 10 per phase (hard limit)
- All requirements quantified
- Acceptance criteria measurable
- Dependencies form a valid DAG (no cycles)
`
})
```

### Step 4: Validate Generated Artifacts

```javascript
// 4a. Verify task JSONs were created
const taskFiles = Glob(`${taskDir}/IMPL-*.json`)
if (!taskFiles || taskFiles.length === 0) {
  mcp__ccw-tools__team_msg({
    operation: "log", session_id: sessionId,
    from: "planner",
    type: "error",
  })
  return
}

// 4b. Validate each task JSON
for (const taskFile of taskFiles) {
  const taskJson = JSON.parse(Read(taskFile))

  // Required fields check
  const requiredFields = ['id', 'title', 'description', 'files', 'implementation', 'convergence']
  for (const field of requiredFields) {
    if (!taskJson[field]) {
      mcp__ccw-tools__team_msg({
        operation: "log", session_id: sessionId,
        from: "planner",
        type: "plan_progress",
      })
    }
  }

  // Convergence criteria check
  if (!taskJson.convergence?.criteria || taskJson.convergence.criteria.length === 0) {
    mcp__ccw-tools__team_msg({
      operation: "log", session_id: sessionId,
      from: "planner",
      type: "plan_progress",
    })
  }

  // Dependency cycle check (simple: task cannot depend on itself)
  if (taskJson.depends_on?.includes(taskJson.id)) {
    mcp__ccw-tools__team_msg({
      operation: "log", session_id: sessionId,
      from: "planner",
      type: "error",
    })
  }
}

// 4c. Validate dependency DAG (no cycles)
const allTasks = taskFiles.map(f => JSON.parse(Read(f)))
const taskIds = new Set(allTasks.map(t => t.id))

// Check all depends_on references are valid
for (const task of allTasks) {
  for (const dep of (task.depends_on || [])) {
    if (!taskIds.has(dep)) {
      mcp__ccw-tools__team_msg({
        operation: "log", session_id: sessionId,
        from: "planner",
        type: "plan_progress",
      })
    }
  }
}

// 4d. Verify IMPL_PLAN.md exists
const implPlanExists = Bash(`test -f "${implPlanPath}" && echo "EXISTS" || echo "NOT_FOUND"`).trim()
if (implPlanExists === "NOT_FOUND") {
  mcp__ccw-tools__team_msg({
    operation: "log", session_id: sessionId,
    from: "planner",
    type: "plan_progress",
  })
  // Create minimal IMPL_PLAN.md from task JSONs
  generateMinimalImplPlan(allTasks, implPlanPath, phaseGoal, phaseNumber)
}
```
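
Step 4c validates references and self-dependencies, but the error-handling table below also calls for full cycle detection. One way to sketch it is Kahn-style elimination (an illustrative helper, not part of the generated artifacts): repeatedly remove tasks whose remaining dependencies are all satisfied; whatever cannot be removed is part of a cycle.

```javascript
// Return the ids of tasks caught in a dependency cycle ([] means the
// graph is a valid DAG). Repeatedly eliminates dependency-free tasks;
// a cycle is exactly the set that can never be eliminated.
function findCycle(tasks) {
  const remaining = new Map(tasks.map(t => [t.id, new Set(t.depends_on || [])]))
  let progressed = true
  while (progressed) {
    progressed = false
    for (const [id, deps] of remaining) {
      if (deps.size === 0) {
        remaining.delete(id)
        for (const d of remaining.values()) d.delete(id)
        progressed = true
      }
    }
  }
  return [...remaining.keys()]
}
```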

### Step 5: Compute Wave Structure (for reporting)

```javascript
// Derive wave structure from dependency graph (for reporting only — executor uses depends_on directly)
function computeWaves(tasks) {
  const waves = {}
  const assigned = new Set()
  let currentWave = 1

  while (assigned.size < tasks.length) {
    const waveMembers = tasks.filter(t =>
      !assigned.has(t.id) &&
      (t.depends_on || []).every(d => assigned.has(d))
    )

    if (waveMembers.length === 0 && assigned.size < tasks.length) {
      const unassigned = tasks.find(t => !assigned.has(t.id))
      waveMembers.push(unassigned)
    }

    for (const task of waveMembers) {
      waves[task.id] = currentWave
      assigned.add(task.id)
    }
    currentWave++
  }
  return { waves, totalWaves: currentWave - 1 }
}

const { waves, totalWaves } = computeWaves(allTasks)
```

### Step 6: Report Plan Structure

```javascript
const taskCount = allTasks.length

mcp__ccw-tools__team_msg({
  operation: "log", session_id: sessionId,
  from: "planner",
  type: "plan_progress",
  ref: `${sessionFolder}/phase-${phaseNumber}/`
})

return {
  taskCount,
  totalWaves,
  waves,
  taskFiles,
  implPlanPath,
  todoListPath
}
```

## Gap Closure Plans

When creating plans for gap closure (re-planning after verification found gaps):

```javascript
if (isGapClosure) {
  // 1. Existing IMPL-*.json files represent completed work
  // 2. action-planning-agent receives gap context and creates gap-specific tasks
  // 3. New task IDs start from next available (e.g., IMPL-103 if 101,102 exist)
  // 4. convergence criteria should directly address gap descriptions from verification.md
  // 5. Gap tasks may depend on existing completed tasks
}
```

## Helper: Minimal IMPL_PLAN.md Generation

```javascript
function generateMinimalImplPlan(tasks, outputPath, phaseGoal, phaseNumber) {
  const content = `# Implementation Plan: Phase ${phaseNumber}

## Goal

${phaseGoal}

## Tasks

${tasks.map(t => `### ${t.id}: ${t.title}

${t.description}

**Files**: ${(t.files || []).map(f => f.path).join(', ')}
**Depends on**: ${(t.depends_on || []).join(', ') || 'None'}

**Convergence Criteria**:
${(t.convergence?.criteria || []).map(c => `- ${c}`).join('\n')}
`).join('\n---\n\n')}

## Dependency Graph

${'```'}
${tasks.map(t => `${t.id} → [${(t.depends_on || []).join(', ')}]`).join('\n')}
${'```'}
`

  Write(outputPath, content)
}
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| context.md not found | Error — research phase was skipped or failed |
| action-planning-agent fails | Retry once. If still fails, error to coordinator |
| No task JSONs generated | Error to coordinator — agent may have misunderstood input |
| Dependency cycle detected | Log warning, break cycle at lowest-numbered task |
| Too many tasks (>10) | Log warning — agent should self-limit but validate |
| Missing convergence criteria | Log warning — every task should have at least one criterion |
| IMPL_PLAN.md not generated | Create minimal version from task JSONs |

@@ -1,219 +0,0 @@

# Command: research

Gather context for a phase before creating execution plans. Explores the codebase, reads requirements from the roadmap, and produces a structured context.md file.

## Purpose

Build a comprehensive understanding of the phase's scope by combining roadmap requirements, prior phase outputs, and codebase analysis. The resulting context.md is the sole input for the create-plans command.

## When to Use

- Phase 2 of planner execution (after task discovery, before plan creation)
- Called once per PLAN-* task (including gap closure iterations)

## Strategy

Subagent delegation (cli-explore-agent) for codebase exploration, supplemented by optional Gemini CLI for deep analysis when depth warrants it. The planner does NOT explore the codebase directly -- it delegates.

## Parameters

| Parameter | Source | Description |
|-----------|--------|-------------|
| `sessionFolder` | From PLAN-* task description | Session artifact directory |
| `phaseNumber` | From PLAN-* task description | Phase to research (1-based) |
| `depth` | From config.json or task description | "quick" / "standard" / "comprehensive" |

## Execution Steps

### Step 1: Read Roadmap and Extract Phase Requirements

```javascript
const roadmap = Read(`${sessionFolder}/roadmap.md`)
const config = JSON.parse(Read(`${sessionFolder}/config.json`))
const depth = config.depth || "standard"

// Parse phase section from roadmap
// Extract: goal, requirements (REQ-IDs), success criteria
const phaseSection = extractPhaseSection(roadmap, phaseNumber)
const phaseGoal = phaseSection.goal
const requirements = phaseSection.requirements // [{id: "REQ-101", desc: "..."}, ...]
const successCriteria = phaseSection.successCriteria // ["testable behavior 1", ...]
```
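
`extractPhaseSection` is left abstract above. A sketch of the goal and requirement extraction, assuming `## Phase N` headings with a `Goal:` line and `- REQ-xxx:` bullets (the heading and field conventions are assumptions about roadmap.md's format, not a fixed schema):

```javascript
// Slice the roadmap between the phase's "## Phase N" heading and the
// next "## " heading, then pull the goal line and requirement bullets
// out of that slice.
function extractPhaseSection(roadmap, phaseNumber) {
  const lines = roadmap.split("\n")
  const headingRe = new RegExp(`^## Phase ${phaseNumber}\\b`)
  const start = lines.findIndex(l => headingRe.test(l))
  if (start === -1) throw new Error(`Phase ${phaseNumber} not found in roadmap`)
  let end = lines.findIndex((l, i) => i > start && l.startsWith("## "))
  if (end === -1) end = lines.length
  const section = lines.slice(start + 1, end)

  const goalLine = section.find(l => l.startsWith("Goal:"))
  const requirements = section
    .map(l => l.trim().match(/^- (REQ-\d+): (.*)$/))
    .filter(Boolean)
    .map(m => ({ id: m[1], desc: m[2] }))

  return { goal: goalLine ? goalLine.slice("Goal:".length).trim() : "", requirements }
}
```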

### Step 2: Read Prior Phase Context (if applicable)

```javascript
const priorContext = []

if (phaseNumber > 1) {
  // Load summaries from previous phases for dependency context
  for (let p = 1; p < phaseNumber; p++) {
    try {
      const summary = Glob(`${sessionFolder}/phase-${p}/summary-*.md`)
      for (const summaryFile of summary) {
        priorContext.push({
          phase: p,
          file: summaryFile,
          content: Read(summaryFile)
        })
      }
    } catch {
      // Prior phase may not have summaries yet (first phase)
    }

    // Also load verification results for dependency awareness
    try {
      const verification = Read(`${sessionFolder}/phase-${p}/verification.md`)
      priorContext.push({
        phase: p,
        file: `${sessionFolder}/phase-${p}/verification.md`,
        content: verification
      })
    } catch {}
  }
}

// For gap closure: load the verification that triggered re-planning
const isGapClosure = planTaskDescription.includes("Gap closure")
let gapContext = null
if (isGapClosure) {
  gapContext = Read(`${sessionFolder}/phase-${phaseNumber}/verification.md`)
}
```

### Step 3: Codebase Exploration via cli-explore-agent

```javascript
// Build exploration query from requirements
const explorationQuery = requirements.map(r => r.desc).join('; ')

const exploreResult = Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: `Explore codebase for phase ${phaseNumber} requirements`,
  prompt: `Explore this codebase to gather context for the following requirements:

## Phase Goal
${phaseGoal}

## Requirements
${requirements.map(r => `- ${r.id}: ${r.desc}`).join('\n')}

## Success Criteria
${successCriteria.map(c => `- ${c}`).join('\n')}

## What to Find
1. Files that will need modification to satisfy these requirements
2. Existing patterns and conventions relevant to this work
3. Dependencies and integration points
4. Test patterns used in this project
5. Configuration or schema files that may need updates

## Output Format
Provide a structured summary:
- **Relevant Files**: List of files with a brief description of relevance
- **Patterns Found**: Coding patterns, naming conventions, architecture patterns
- **Dependencies**: Internal and external dependencies that affect this work
- **Test Infrastructure**: Test framework, test file locations, test patterns
- **Risks**: Potential issues or complications discovered`
})
```

### Step 4: Optional Deep Analysis via Gemini CLI

```javascript
// Only for comprehensive depth or complex phases.
// Declared outside the if-block so Step 5 can reference it safely.
let analysisResult = null
if (depth === "comprehensive") {
  analysisResult = Bash({
    command: `ccw cli -p "PURPOSE: Deep codebase analysis for implementation planning. Phase goal: ${phaseGoal}
TASK: \
- Analyze module boundaries and coupling for affected files \
- Identify shared utilities and helpers that can be reused \
- Map data flow through affected components \
- Assess test coverage gaps in affected areas \
- Identify backward compatibility concerns
MODE: analysis
CONTEXT: @**/* | Memory: Requirements: ${requirements.map(r => r.desc).join(', ')}
EXPECTED: Structured analysis with: module map, reuse opportunities, data flow diagram, test gaps, compatibility risks
CONSTRAINTS: Focus on files relevant to phase ${phaseNumber} requirements" \
--tool gemini --mode analysis --rule analysis-analyze-code-patterns`,
    run_in_background: false,
    timeout: 300000
  })

  // Store deep analysis result for context.md
}
```

### Step 5: Write context.md

```javascript
Bash(`mkdir -p "${sessionFolder}/phase-${phaseNumber}"`)

const contextContent = `# Phase ${phaseNumber} Context

Generated: ${new Date().toISOString().slice(0, 19)}
Session: ${sessionFolder}
Depth: ${depth}

## Phase Goal

${phaseGoal}

## Requirements

${requirements.map(r => `- **${r.id}**: ${r.desc}`).join('\n')}

## Success Criteria

${successCriteria.map(c => `- [ ] ${c}`).join('\n')}

## Prior Phase Dependencies

${priorContext.length > 0
  ? priorContext.map(p => `### Phase ${p.phase}\n- Source: ${p.file}\n- Key outputs: ${extractKeyOutputs(p.content)}`).join('\n\n')
  : 'None (this is the first phase)'}

${isGapClosure ? `## Gap Closure Context\n\nThis is a gap closure iteration. Gaps from previous verification:\n${gapContext}` : ''}

## Relevant Files

${exploreResult.relevantFiles.map(f => `- \`${f.path}\`: ${f.description}`).join('\n')}

## Patterns Identified

${exploreResult.patterns.map(p => `- **${p.name}**: ${p.description}`).join('\n')}

## Dependencies

${exploreResult.dependencies.map(d => `- ${d}`).join('\n')}

## Test Infrastructure

${exploreResult.testInfo || 'Not analyzed (quick depth)'}

${depth === "comprehensive" && analysisResult ? `## Deep Analysis\n\n${analysisResult}` : ''}

## Questions / Risks

${exploreResult.risks.map(r => `- ${r}`).join('\n')}
`

Write(`${sessionFolder}/phase-${phaseNumber}/context.md`, contextContent)
```

## Output

| Artifact | Path | Description |
|----------|------|-------------|
| context.md | `{sessionFolder}/phase-{N}/context.md` | Structured phase context for plan creation |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| roadmap.md not found | Error to coordinator via message bus |
| cli-explore-agent fails | Retry once. Fallback: use ACE search_context directly |
| Gemini CLI fails | Skip deep analysis section, proceed with basic context |
| Prior phase summaries missing | Log warning, proceed without dependency context |
| Phase section not found in roadmap | Error to coordinator -- phase number may be invalid |

@@ -1,239 +0,0 @@
|
||||
# Planner Role
|
||||
|
||||
Research and plan creation per phase. Gathers codebase context via cli-explore-agent and Gemini CLI, then generates wave-based execution plans with convergence criteria. Each plan is a self-contained unit of work that an executor can implement autonomously.
|
||||
|
||||
## Identity
|
||||
|
||||
- **Name**: `planner` | **Tag**: `[planner]`
|
||||
- **Task Prefix**: `PLAN-*`
|
||||
- **Responsibility**: Orchestration (research + plan generation)
|
||||
|
||||
## Boundaries
|
||||
|
||||
### MUST
|
||||
|
||||
- All outputs must carry `[planner]` prefix
|
||||
- Only process `PLAN-*` prefixed tasks
|
||||
- Only communicate with coordinator (SendMessage)
|
||||
- Delegate research to commands/research.md
|
||||
- Delegate plan creation to commands/create-plans.md
|
||||
- Reference real files discovered during research (never fabricate paths)
|
||||
- Verify plans have no dependency cycles before reporting
|
||||
- Work strictly within Orchestration responsibility scope
|
||||
|
||||
### MUST NOT
|
||||
|
||||
- Execute work outside this role's responsibility scope
|
||||
- Direct code writing or modification
|
||||
- Call code-developer or other implementation subagents
|
||||
- Create tasks for other roles (TaskCreate)
|
||||
- Interact with user (AskUserQuestion)
|
||||
- Process EXEC-* or VERIFY-* tasks
|
||||
- Skip the research phase
|
||||
- Communicate directly with other worker roles (must go through coordinator)
|
||||
- Omit `[planner]` identifier in any output
|
||||
|
||||
---
|
||||
|
||||
## Toolbox
|
||||
|
||||
### Available Commands
|
||||
|
||||
| Command | File | Phase | Description |
|
||||
|---------|------|-------|-------------|
|
||||
| `research` | [commands/research.md](commands/research.md) | Phase 2 | Context gathering via codebase exploration |
|
||||
| `create-plans` | [commands/create-plans.md](commands/create-plans.md) | Phase 3 | Wave-based plan file generation |
|
||||
|
||||
### Tool Capabilities
|
||||
|
||||
| Tool | Type | Used By | Purpose |
|
||||
|------|------|---------|---------|
|
||||
| `cli-explore-agent` | Subagent | planner | Codebase exploration, pattern analysis |
|
||||
| `action-planning-agent` | Subagent | planner | Task JSON + IMPL_PLAN.md generation |
|
||||
| `gemini` | CLI tool | planner | Deep analysis for complex phases (optional) |
|
||||
| `Read/Write` | File operations | planner | Context and plan file management |
|
||||
| `Glob/Grep` | Search | planner | File discovery and pattern matching |
|
||||
|
||||
---
|
||||
|
||||
## Message Types
|
||||
|
||||
| Type | Direction | Trigger | Description |
|
||||
|------|-----------|---------|-------------|
|
||||
| `plan_ready` | planner -> coordinator | Plans created | Plan files written with wave structure |
|
||||
| `plan_progress` | planner -> coordinator | Research complete | Context gathered, starting plan creation |
|
||||
| `error` | planner -> coordinator | Failure | Research or planning failed |
|
||||
|
||||
## Message Bus
|
||||
|
||||
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
|
||||
|
||||
```
|
||||
mcp__ccw-tools__team_msg({
|
||||
operation: "log",
|
||||
session_id: <session-id>,
|
||||
from: "planner",
|
||||
type: <message-type>,
|
||||
ref: <artifact-path>
|
||||
})
|
||||
```
|
||||
|
||||
**CLI fallback** (when MCP unavailable):
|
||||
|
||||
```
|
||||
Bash("ccw team log --session-id <session-id> --from planner --type <type> --ref <artifact-path> --json")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Execution (5-Phase)
|
||||
|
||||
### Phase 1: Task Discovery
|
||||
|
||||
> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery
|
||||
|
||||
Standard task discovery flow: TaskList -> filter by prefix `PLAN-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
**Resume Artifact Check**: Check whether this task's output artifact already exists:
- `<session>/phase-N/context.md` exists -> skip to Phase 3
- Artifact incomplete or missing -> normal Phase 2-4 execution

### Phase 2: Research (via command)

**Objective**: Gather codebase context for plan generation.

**Loading steps**:

| Input | Source | Required |
|-------|--------|----------|
| roadmap.md | <session-folder>/roadmap.md | Yes |
| Prior phase summaries | <session-folder>/phase-*/summary-*.md | No |
| Wisdom | <session-folder>/wisdom/ | No |

Delegate to `commands/research.md`:

| Step | Action |
|------|--------|
| 1 | Read roadmap.md for phase goal and requirements |
| 2 | Read prior phase summaries (if any) |
| 3 | Launch cli-explore-agent for codebase exploration |
| 4 | Optional: Gemini CLI for deeper analysis (if depth=comprehensive) |
| 5 | Write context.md to {sessionFolder}/phase-{N}/context.md |

**Produces**: `{sessionFolder}/phase-{N}/context.md`

**Command**: [commands/research.md](commands/research.md)

**Report progress via team_msg**:
```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: sessionId,
  from: "planner",
  type: "plan_progress",
  ref: "<session>/phase-<N>/context.md"
})
```

### Phase 3: Create Plans (via command)

**Objective**: Generate wave-based execution plans.

Delegate to `commands/create-plans.md`:

| Step | Action |
|------|--------|
| 1 | Load context.md for phase |
| 2 | Prepare output directories (.task/) |
| 3 | Delegate to action-planning-agent |
| 4 | Agent produces IMPL_PLAN.md + .task/IMPL-*.json + TODO_LIST.md |
| 5 | Validate generated artifacts |
| 6 | Return task count and dependency structure |

**Produces**:
- `{sessionFolder}/phase-{N}/IMPL_PLAN.md`
- `{sessionFolder}/phase-{N}/.task/IMPL-*.json`
- `{sessionFolder}/phase-{N}/TODO_LIST.md`

**Command**: [commands/create-plans.md](commands/create-plans.md)

### Phase 4: Self-Validation

**Objective**: Verify task JSONs before reporting.

**Validation checks**:

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Referenced files exist | `test -f <path>` for modify actions | All files found or warning logged |
| Self-dependency | Check if depends_on includes own ID | No self-dependencies |
| Convergence criteria | Check convergence.criteria exists | Each task has criteria |
| Cross-dependency | Verify all depends_on IDs exist | All dependencies valid |

**Validation steps**:

1. **File existence check** (for modify actions):
   - For each task file with action="modify"
   - Check file exists
   - Log warning if not found

2. **Self-dependency check**:
   - For each task, verify task.id not in task.depends_on
   - Log error if self-dependency detected

3. **Convergence criteria check**:
   - Verify each task has convergence.criteria array
   - Log warning if missing

4. **Cross-dependency validation**:
   - Collect all task IDs
   - Verify each depends_on reference exists
   - Log warning if unknown dependency
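Checks 2-4 above can be sketched as a single validation pass (a minimal sketch: the task shape with `id`, `depends_on`, and `convergence.criteria` follows the IMPL-*.json format; the file existence check is omitted here because it requires the Bash tool):

```javascript
// Validate parsed IMPL-*.json task objects without mutating them.
// Self-dependencies are errors; missing criteria and unknown
// dependencies are warnings, matching the steps above.
function validateTasks(tasks) {
  const errors = []
  const warnings = []
  const knownIds = new Set(tasks.map(t => t.id))

  for (const task of tasks) {
    const deps = task.depends_on || []

    // Self-dependency check
    if (deps.includes(task.id)) {
      errors.push(`${task.id}: depends on itself`)
    }

    // Cross-dependency validation: every depends_on ID must exist
    for (const dep of deps) {
      if (!knownIds.has(dep)) {
        warnings.push(`${task.id}: unknown dependency ${dep}`)
      }
    }

    // Convergence criteria check: each task needs a non-empty criteria array
    if (!Array.isArray(task.convergence?.criteria) || task.convergence.criteria.length === 0) {
      warnings.push(`${task.id}: missing convergence.criteria`)
    }
  }
  return { errors, warnings }
}
```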
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[planner]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Wave count computation**:

| Step | Action |
|------|--------|
| 1 | Start with wave=1, assigned=set() |
| 2 | Find tasks with all dependencies in assigned |
| 3 | Assign those tasks to current wave, add to assigned |
| 4 | Increment wave, repeat until all tasks assigned |
| 5 | Return wave count |
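The wave computation above can be sketched as follows (a minimal sketch; assumes each task carries `id` and `depends_on` as in the generated task JSONs):

```javascript
// Assign tasks to execution waves: a task joins the earliest wave in
// which all of its dependencies are already assigned, then return the
// number of waves needed.
function computeWaveCount(tasks) {
  const assigned = new Set()
  let wave = 0
  let remaining = [...tasks]

  while (remaining.length > 0) {
    const ready = remaining.filter(t =>
      (t.depends_on || []).every(dep => assigned.has(dep))
    )
    if (ready.length === 0) {
      // No task can proceed: depends_on forms a cycle
      throw new Error('Dependency cycle detected among remaining tasks')
    }
    wave++
    for (const t of ready) assigned.add(t.id)
    remaining = remaining.filter(t => !assigned.has(t.id))
  }
  return wave
}
```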
**Report message**:
```
SendMessage({
  message: "[planner] Phase <N> planning complete.
- Tasks: <count>
- Waves: <wave-count>
- IMPL_PLAN: <session>/phase-<N>/IMPL_PLAN.md
- Task JSONs: <file-list>

All tasks validated. Ready for execution."
})
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No PLAN-* tasks available | Idle, wait for coordinator assignment |
| Context/Plan file not found | Notify coordinator, request location |
| Command file not found | Fall back to inline execution |
| roadmap.md not found | Error to coordinator -- dispatch may have failed |
| cli-explore-agent fails | Retry once. If still fails, use direct ACE search as fallback |
| Gemini CLI fails | Skip deep analysis, proceed with basic context |
| action-planning-agent fails | Retry once. If still fails, error to coordinator |
| No task JSONs generated | Error to coordinator -- agent may have misunderstood input |
| No requirements found for phase | Error to coordinator -- roadmap may be malformed |
| Dependency cycle detected | Log warning, break cycle |
| Referenced file not found | Log warning. If file is from prior wave, acceptable |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
| Unexpected error | Log error via team_msg, report to coordinator |
@@ -1,335 +0,0 @@
# Command: verify

Goal-backward verification of convergence criteria from IMPL-*.json task files against actual codebase state. Checks convergence criteria (measurable conditions), file operations (existence/content), and runs verification commands.

## Purpose

For each task's convergence criteria, verify that the expected goals are met in the actual codebase. This is goal-backward verification: check what should exist, NOT what tasks were done. Produce a structured pass/fail result per task with gap details for any failures.

## Key Principle

**Goal-backward, not task-forward.** Do not check "did the executor follow the steps?" -- check "does the codebase now have the properties that were required?"

## When to Use

- Phase 3 of verifier execution (after loading targets, before compiling results)
- Called once per VERIFY-* task

## Strategy

For each task, check convergence criteria and file operations. Use Bash for verification commands, Read/Grep for file checks, and optionally Gemini CLI for semantic validation of complex criteria.

## Parameters

| Parameter | Source | Description |
|-----------|--------|-------------|
| `sessionFolder` | From VERIFY-* task description | Session artifact directory |
| `phaseNumber` | From VERIFY-* task description | Phase number (1-based) |
| `tasks` | From verifier Phase 2 | Parsed task JSON objects with convergence criteria |
| `summaries` | From verifier Phase 2 | Parsed summary objects for context |

## Execution Steps

### Step 1: Initialize Results

```javascript
const verificationResults = []
```

### Step 2: Verify Each Task

```javascript
for (const task of tasks) {
  const taskResult = {
    task: task.id,
    title: task.title,
    status: 'pass', // will be downgraded if any check fails
    details: [],
    gaps: []
  }

  // --- 2a. Check Convergence Criteria ---
  const criteria = task.convergence?.criteria || []
  for (const criterion of criteria) {
    const check = checkCriterion(criterion, task)
    taskResult.details.push({
      type: 'criterion',
      description: criterion,
      passed: check.passed
    })
    if (!check.passed) {
      taskResult.gaps.push({
        task: task.id,
        type: 'criterion',
        item: criterion,
        expected: criterion,
        actual: check.actual || 'Check failed'
      })
    }
  }

  // --- 2b. Check File Operations ---
  const files = task.files || []
  for (const fileEntry of files) {
    const fileChecks = checkFileEntry(fileEntry)
    for (const check of fileChecks) {
      taskResult.details.push(check.detail)
      if (!check.passed) {
        taskResult.gaps.push({
          ...check.gap,
          task: task.id
        })
      }
    }
  }

  // --- 2c. Run Verification Command ---
  if (task.convergence?.verification) {
    const verifyCheck = runVerificationCommand(task.convergence.verification)
    taskResult.details.push({
      type: 'verification_command',
      description: `Verification: ${task.convergence.verification}`,
      passed: verifyCheck.passed
    })
    if (!verifyCheck.passed) {
      taskResult.gaps.push({
        task: task.id,
        type: 'verification_command',
        item: task.convergence.verification,
        expected: 'Command exits with code 0',
        actual: verifyCheck.actual || 'Command failed'
      })
    }
  }

  // --- 2d. Score task ---
  const totalChecks = taskResult.details.length
  const passedChecks = taskResult.details.filter(d => d.passed).length

  if (passedChecks === totalChecks) {
    taskResult.status = 'pass'
  } else if (passedChecks > 0) {
    taskResult.status = 'partial'
  } else {
    taskResult.status = 'fail'
  }

  verificationResults.push(taskResult)
}
```

### Step 3: Criterion Checking Function

```javascript
function checkCriterion(criterion, task) {
  // Criteria are measurable conditions
  // Strategy: derive testable assertions from criterion text

  // Attempt 1: If criterion mentions a test command, run it
  const testMatch = criterion.match(/test[s]?\s+(pass|run|succeed)/i)
  if (testMatch) {
    const testResult = Bash(`npm test 2>&1 || yarn test 2>&1 || pytest 2>&1 || true`)
    const passed = !testResult.includes('FAIL') && !testResult.includes('failed')
    return { passed, actual: passed ? 'Tests pass' : 'Test failures detected' }
  }

  // Attempt 2: If criterion mentions specific counts or exports, check files
  const filePaths = (task.files || []).map(f => f.path)
  for (const filePath of filePaths) {
    const exists = Bash(`test -f "${filePath}" && echo "EXISTS" || echo "NOT_FOUND"`).trim()
    if (exists === "EXISTS") {
      const content = Read(filePath)
      // Check if criterion keywords appear in the implementation
      const keywords = criterion.split(/\s+/).filter(w => w.length > 4)
      const relevant = keywords.filter(kw =>
        content.toLowerCase().includes(kw.toLowerCase())
      )
      if (relevant.length >= Math.ceil(keywords.length * 0.3)) {
        return { passed: true, actual: 'Implementation contains relevant logic' }
      }
    }
  }

  // Attempt 3: Compile check for affected files
  for (const filePath of filePaths) {
    if (filePath.endsWith('.ts') || filePath.endsWith('.tsx')) {
      const compileCheck = Bash(`npx tsc --noEmit "${filePath}" 2>&1 || true`)
      if (compileCheck.includes('error TS')) {
        return { passed: false, actual: `TypeScript errors in ${filePath}` }
      }
    }
  }

  // Default: mark as passed if files exist and compile
  return { passed: true, actual: 'Files exist and compile without errors' }
}
```

### Step 4: File Entry Checking Function

```javascript
function checkFileEntry(fileEntry) {
  const checks = []
  const path = fileEntry.path
  const action = fileEntry.action // create, modify, delete

  // 4a. Check file existence based on action
  const exists = Bash(`test -f "${path}" && echo "EXISTS" || echo "NOT_FOUND"`).trim()

  if (action === 'create' || action === 'modify') {
    checks.push({
      passed: exists === "EXISTS",
      detail: {
        type: 'file',
        description: `File exists: ${path} (${action})`,
        passed: exists === "EXISTS"
      },
      gap: exists !== "EXISTS" ? {
        type: 'file',
        item: `file_exists: ${path}`,
        expected: `File should exist (action: ${action})`,
        actual: 'File not found'
      } : null
    })

    if (exists !== "EXISTS") {
      return checks.filter(c => c.gap !== null || c.passed)
    }

    // 4b. Check minimum content (non-empty for create)
    if (action === 'create') {
      const content = Read(path)
      const lineCount = content.split('\n').length
      const minLines = 3 // Minimum for a meaningful file
      checks.push({
        passed: lineCount >= minLines,
        detail: {
          type: 'file',
          description: `${path}: has content (${lineCount} lines)`,
          passed: lineCount >= minLines
        },
        gap: lineCount < minLines ? {
          type: 'file',
          item: `min_content: ${path}`,
          expected: `>= ${minLines} lines`,
          actual: `${lineCount} lines`
        } : null
      })
    }
  } else if (action === 'delete') {
    checks.push({
      passed: exists === "NOT_FOUND",
      detail: {
        type: 'file',
        description: `File deleted: ${path}`,
        passed: exists === "NOT_FOUND"
      },
      gap: exists !== "NOT_FOUND" ? {
        type: 'file',
        item: `file_deleted: ${path}`,
        expected: 'File should be deleted',
        actual: 'File still exists'
      } : null
    })
  }

  return checks.filter(c => c.gap !== null || c.passed)
}
```

### Step 5: Verification Command Runner

```javascript
function runVerificationCommand(command) {
  try {
    const result = Bash(`${command} 2>&1; echo "EXIT:$?"`)
    const exitCodeMatch = result.match(/EXIT:(\d+)/)
    const exitCode = exitCodeMatch ? parseInt(exitCodeMatch[1]) : 1
    return {
      passed: exitCode === 0,
      actual: exitCode === 0 ? 'Command succeeded' : `Exit code: ${exitCode}\n${result.slice(0, 200)}`
    }
  } catch (e) {
    return { passed: false, actual: `Command error: ${e.message}` }
  }
}
```

### Step 6: Write verification.md

```javascript
const totalGaps = verificationResults.flatMap(r => r.gaps)
const overallStatus = totalGaps.length === 0 ? 'passed' : 'gaps_found'

Write(`${sessionFolder}/phase-${phaseNumber}/verification.md`, `---
phase: ${phaseNumber}
status: ${overallStatus}
tasks_checked: ${tasks.length}
tasks_passed: ${verificationResults.filter(r => r.status === 'pass').length}
gaps:
${totalGaps.map(g => `  - task: "${g.task}"
    type: "${g.type}"
    item: "${g.item}"
    expected: "${g.expected}"
    actual: "${g.actual}"`).join('\n')}
---

# Phase ${phaseNumber} Verification

## Summary

- **Status**: ${overallStatus}
- **Tasks Checked**: ${tasks.length}
- **Passed**: ${verificationResults.filter(r => r.status === 'pass').length}
- **Partial**: ${verificationResults.filter(r => r.status === 'partial').length}
- **Failed**: ${verificationResults.filter(r => r.status === 'fail').length}
- **Total Gaps**: ${totalGaps.length}

## Task Results

${verificationResults.map(r => `### ${r.task}: ${r.title} -- ${r.status.toUpperCase()}
${r.details.map(d => `- [${d.passed ? 'x' : ' '}] (${d.type}) ${d.description}`).join('\n')}`).join('\n\n')}

${totalGaps.length > 0 ? `## Gaps for Re-Planning

The following gaps must be addressed in a gap closure iteration:

${totalGaps.map((g, i) => `### Gap ${i + 1}
- **Task**: ${g.task}
- **Type**: ${g.type}
- **Item**: ${g.item}
- **Expected**: ${g.expected}
- **Actual**: ${g.actual}`).join('\n\n')}` : '## All Goals Met'}
`)
```

## Verification Checklist

### Convergence Criteria

| Check Method | Tool | When |
|--------------|------|------|
| Run tests | Bash(`npm test`) | Criterion mentions "test" |
| Compile check | Bash(`npx tsc --noEmit`) | TypeScript files |
| Keyword match | Read + string match | General behavioral criteria |
| Verification command | Bash(convergence.verification) | Always if provided |
| Semantic check | Gemini CLI (analysis) | Complex criteria (optional) |

### File Operations

| Check | Tool | What |
|-------|------|------|
| File exists (create/modify) | Bash(`test -f`) | files[].path with action create/modify |
| File deleted | Bash(`test -f`) | files[].path with action delete |
| Minimum content | Read + line count | Newly created files |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No task JSON files found | Error to coordinator -- planner may have failed |
| No summary files found | Error to coordinator -- executor may have failed |
| Verification command fails | Record as gap with error output |
| File referenced in task missing | Record as gap (file type) |
| Task JSON malformed (no convergence) | Log warning, score as pass (nothing to check) |
| All checks for a task fail | Score as 'fail', include all gaps |
@@ -1,244 +0,0 @@
# Verifier Role

Goal-backward verification per phase. Reads convergence criteria from IMPL-*.json task files and checks them against the actual codebase state after execution. Does NOT modify code -- read-only validation. Produces verification.md with pass/fail results and structured gap lists.

## Identity

- **Name**: `verifier` | **Tag**: `[verifier]`
- **Task Prefix**: `VERIFY-*`
- **Responsibility**: Validation

## Boundaries

### MUST

- All outputs must carry `[verifier]` prefix
- Only process `VERIFY-*` prefixed tasks
- Only communicate with coordinator (SendMessage)
- Delegate verification to commands/verify.md
- Check goals (what should exist), NOT tasks (what was done)
- Produce structured gap lists for failed items
- Remain read-only -- never modify source code
- Work strictly within Validation responsibility scope

### MUST NOT

- Execute work outside this role's responsibility scope
- Modify any source code or project files
- Create plans or execute implementations
- Create tasks for other roles (TaskCreate)
- Interact with user (AskUserQuestion)
- Process PLAN-* or EXEC-* tasks
- Auto-fix issues (report them, let planner/executor handle fixes)
- Communicate directly with other worker roles (must go through coordinator)
- Omit `[verifier]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `verify` | [commands/verify.md](commands/verify.md) | Phase 3 | Goal-backward convergence criteria checking |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `gemini` | CLI tool | verifier | Deep semantic checks for complex truths (optional) |
| `Read` | File operations | verifier | Task JSON and summary reading |
| `Glob` | Search | verifier | Find task and summary files |
| `Bash` | Shell | verifier | Execute verification commands |
| `Grep` | Search | verifier | Pattern matching in codebase |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `verify_passed` | verifier -> coordinator | All convergence criteria met | Phase verification passed |
| `gaps_found` | verifier -> coordinator | Some criteria failed | Structured gap list for re-planning |
| `error` | verifier -> coordinator | Failure | Verification process failed |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "verifier",
  type: <message-type>,
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from verifier --type <type> --ref <artifact-path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `VERIFY-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

**Resume Artifact Check**: Check whether this task's output artifact already exists:
- `<session>/phase-N/verification.md` exists -> skip to Phase 5
- Artifact incomplete or missing -> normal Phase 2-4 execution

### Phase 2: Load Verification Targets

**Objective**: Load task JSONs and summaries for verification.

**Detection steps**:

| Input | Source | Required |
|-------|--------|----------|
| Task JSONs | <session-folder>/phase-{N}/.task/IMPL-*.json | Yes |
| Summaries | <session-folder>/phase-{N}/summary-*.md | Yes |
| Wisdom | <session-folder>/wisdom/ | No |

1. **Read task JSON files**:
   - Find all IMPL-*.json files
   - Extract convergence criteria from each task
   - If no files found -> error to coordinator

2. **Read summary files**:
   - Find all summary-*.md files
   - Parse frontmatter for: task, affects, provides
   - If no files found -> error to coordinator
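The frontmatter parsing in step 2 can be sketched as a small pure function (a sketch; assumes summaries use a `---`-delimited header of simple `key: value` lines, as in the structure shown in Phase 4 below is for verification.md -- the exact summary header keys are `task`, `affects`, `provides`):

```javascript
// Extract simple key: value pairs from a summary file's YAML frontmatter.
// Returns null when no frontmatter block is present.
function parseFrontmatter(markdown) {
  const match = markdown.match(/^---\n([\s\S]*?)\n---/)
  if (!match) return null
  const fields = {}
  for (const line of match[1].split('\n')) {
    const kv = line.match(/^(\w+):\s*(.*)$/)
    if (kv) fields[kv[1]] = kv[2]
  }
  return fields
}
```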
### Phase 3: Goal-Backward Verification (via command)

**Objective**: Execute convergence criteria checks.

Delegate to `commands/verify.md`:

| Step | Action |
|------|--------|
| 1 | For each task's convergence criteria |
| 2 | Check criteria type: files, command, pattern |
| 3 | Execute appropriate verification method |
| 4 | Score each task: pass / partial / fail |
| 5 | Compile gap list for failed items |

**Verification strategy selection**:

| Criteria Type | Method |
|---------------|--------|
| File existence | `test -f <path>` |
| Command execution | Run specified command, check exit code |
| Pattern match | Grep for pattern in specified files |
| Semantic check | Optional: Gemini CLI for deep analysis |

**Produces**: verificationResults (structured data)

**Command**: [commands/verify.md](commands/verify.md)

### Phase 4: Compile Results

**Objective**: Aggregate pass/fail and generate verification.md.

**Result aggregation**:

| Metric | Source | Threshold |
|--------|--------|-----------|
| Pass rate | Task results | 100% for passed |
| Gaps count | Failed criteria | 0 for passed |

**Compile steps**:

1. **Aggregate results per task**:
   - Count passed, partial, failed
   - Collect all gaps from partial/failed tasks

2. **Determine overall status**:
   - `passed` if gaps.length === 0
   - `gaps_found` otherwise

3. **Write verification.md**:
   - YAML frontmatter with status, counts, gaps
   - Summary section
   - Task results section
   - Gaps section (if any)
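Compile steps 1-2 reduce to a small aggregation (a minimal sketch over the `verificationResults` shape produced by commands/verify.md):

```javascript
// Derive the overall phase status and summary counts from per-task results.
// Any gap anywhere downgrades the phase to 'gaps_found'.
function compileResults(verificationResults) {
  const gaps = verificationResults.flatMap(r => r.gaps)
  return {
    status: gaps.length === 0 ? 'passed' : 'gaps_found',
    tasksChecked: verificationResults.length,
    tasksPassed: verificationResults.filter(r => r.status === 'pass').length,
    gaps
  }
}
```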
**Verification.md structure**:
```yaml
---
phase: <N>
status: passed | gaps_found
tasks_checked: <count>
tasks_passed: <count>
gaps:
  - task: "<task-id>"
    type: "<criteria-type>"
    item: "<description>"
    expected: "<expected-value>"
    actual: "<actual-value>"
---

# Phase <N> Verification

## Summary
- Status: <status>
- Tasks Checked: <count>
- Passed: <count>
- Total Gaps: <count>

## Task Results
### TASK-ID: Title - STATUS
- [x] (type) description
- [ ] (type) description

## Gaps (if any)
### Gap 1: Task - Type
- Expected: ...
- Actual: ...
```

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[verifier]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Report message**:
```
SendMessage({
  message: "[verifier] Phase <N> verification complete.
- Status: <status>
- Tasks: <passed>/<total> passed
- Gaps: <gap-count>

Verification written to: <verification-path>"
})
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No VERIFY-* tasks available | Idle, wait for coordinator assignment |
| Context/Plan file not found | Notify coordinator, request location |
| Command file not found | Fall back to inline execution |
| No task JSON files found | Error to coordinator -- planner may have failed |
| No summary files found | Error to coordinator -- executor may have failed |
| File referenced in task missing | Record as gap (file type) |
| Bash command fails during check | Record as gap with error message |
| Verification command fails | Record as gap with exit code |
| Gemini CLI fails | Fallback to direct checks, skip semantic analysis |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
| Unexpected error | Log error via team_msg, report to coordinator |
@@ -11,17 +11,23 @@ allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), Task
## Architecture Overview

```
┌──────────────────────────────────────────────────────────┐
│ Skill(skill="team-tech-debt", args="--role=xxx")         │
└────────────────────────┬─────────────────────────────────┘
                         │ Role Router
    ┌────────┬───────────┼───────────┬──────────┬──────────┐
    ↓        ↓           ↓           ↓          ↓          ↓
┌────────┐┌────────┐┌──────────┐┌─────────┐┌────────┐┌─────────┐
│coordi- ││scanner ││assessor  ││planner  ││executor││validator│
│nator   ││TDSCAN-*││TDEVAL-*  ││TDPLAN-* ││TDFIX-* ││TDVAL-*  │
│ roles/ ││ roles/ ││ roles/   ││ roles/  ││ roles/ ││ roles/  │
└────────┘└────────┘└──────────┘└─────────┘└────────┘└─────────┘
+---------------------------------------------------+
| Skill(skill="team-tech-debt")                     |
|   args="<task-description>"                       |
+-------------------+-------------------------------+
                    |
     Orchestration Mode (auto -> coordinator)
                    |
          Coordinator (inline)
        Phase 0-5 orchestration
                    |
     +-----+-----+-----+-----+-----+
     v     v     v     v     v
    [tw]  [tw]  [tw]  [tw]  [tw]
   scann- asses- plan- execu- valid-
     er    sor   ner   tor    ator

(tw) = team-worker agent
```

## Command Architecture
@@ -67,14 +73,14 @@ If no `--role` -> Orchestration Mode (auto route to coordinator).

### Role Registry

| Role | File | Task Prefix | Type | Compact |
|------|------|-------------|------|---------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | orchestrator | **Must re-read after compaction** |
| scanner | [roles/scanner/role.md](roles/scanner/role.md) | TDSCAN-* | pipeline | Must re-read after compaction |
| assessor | [roles/assessor/role.md](roles/assessor/role.md) | TDEVAL-* | pipeline | Must re-read after compaction |
| planner | [roles/planner/role.md](roles/planner/role.md) | TDPLAN-* | pipeline | Must re-read after compaction |
| executor | [roles/executor/role.md](roles/executor/role.md) | TDFIX-* | pipeline | Must re-read after compaction |
| validator | [roles/validator/role.md](roles/validator/role.md) | TDVAL-* | pipeline | Must re-read after compaction |

| Role | Spec | Task Prefix | Inner Loop |
|------|------|-------------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | - |
| scanner | [role-specs/scanner.md](role-specs/scanner.md) | TDSCAN-* | false |
| assessor | [role-specs/assessor.md](role-specs/assessor.md) | TDEVAL-* | false |
| planner | [role-specs/planner.md](role-specs/planner.md) | TDPLAN-* | false |
| executor | [role-specs/executor.md](role-specs/executor.md) | TDFIX-* | true |
| validator | [role-specs/validator.md](role-specs/validator.md) | TDVAL-* | false |

> **COMPACT PROTECTION**: Role files are execution documents, not reference material. When context compression occurs and only a summary of the role instructions remains, you **must immediately `Read` the corresponding role.md to reload it before continuing**. Do not execute any Phase based on the summary alone.
@@ -263,53 +269,71 @@ TDFIX -> TDVAL -> (if regression or quality drop) -> TDFIX-fix -> TDVAL-2
|
||||
|
||||
## Coordinator Spawn Template
|
||||
|
||||
> **Note**: This skill uses Stop-Wait coordination (`run_in_background: false`). Each role completes before next spawns. This is intentionally different from the v3 default of `run_in_background: true` (Spawn-and-Stop). The Stop-Wait strategy ensures sequential pipeline execution where each phase's output is fully available before the next phase begins -- critical for the scan->assess->plan->execute->validate dependency chain.
|
||||
### v5 Worker Spawn (all roles)
|
||||
|
||||
> 以下模板作为 worker prompt 参考。在 Stop-Wait 策略下,coordinator 不在 Phase 2 预先 spawn 所有 worker。而是在 Phase 4 (monitor) 中,按 pipeline 阶段逐个 spawn worker(同步阻塞 `Task(run_in_background: false)`),worker 返回即阶段完成。详见 `roles/coordinator/commands/monitor.md`。
|
||||
> **Note**: This skill uses Stop-Wait coordination (`run_in_background: false`). Each role completes before next spawns. This is intentionally different from the default `run_in_background: true` (Spawn-and-Stop). The Stop-Wait strategy ensures sequential pipeline execution where each phase's output is fully available before the next phase begins -- critical for the scan->assess->plan->execute->validate dependency chain.
|
||||
|
||||
**通用 Worker Spawn 格式**:
|
||||
When coordinator spawns workers, use `team-worker` agent with role-spec path:
|
||||
|
||||
```
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker",
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-tech-debt/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: tech-debt
requirement: <task-description>
inner_loop: <true|false>

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`,
  run_in_background: false // Stop-Wait: synchronous blocking, wait for worker completion
})
```

**Inner Loop roles** (executor): Set `inner_loop: true`.

**Single-task roles** (scanner, assessor, planner, validator): Set `inner_loop: false`.

**Role-specific spawn parameters**:

| Role | Prefix | inner_loop |
|------|--------|------------|
| scanner | TDSCAN-* | false |
| assessor | TDEVAL-* | false |
| planner | TDPLAN-* | false |
| executor | TDFIX-* | true |
| validator | TDVAL-* | false |
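The table above can be expressed as a small lookup the coordinator consults when building a spawn prompt; a minimal sketch (the object and helper names are assumptions, not part of the skill):

```javascript
// Hypothetical lookup mirroring the spawn-parameter table above.
const SPAWN_PARAMS = {
  scanner:   { prefix: 'TDSCAN', inner_loop: false },
  assessor:  { prefix: 'TDEVAL', inner_loop: false },
  planner:   { prefix: 'TDPLAN', inner_loop: false },
  executor:  { prefix: 'TDFIX',  inner_loop: true  },
  validator: { prefix: 'TDVAL',  inner_loop: false }
}

// Resolve the inner_loop flag to pass when spawning a given role.
function innerLoopFor(role) {
  const entry = SPAWN_PARAMS[role]
  return entry ? entry.inner_loop : false
}
```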
---

## Completion Action

When the pipeline completes (all tasks done, coordinator Phase 5):

```
AskUserQuestion({
  questions: [{
    question: "Tech Debt pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
      { label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
      { label: "Export Results", description: "Export deliverables to a specified location, then clean" }
    ]
  }]
})
```

| Choice | Action |
|--------|--------|
| Archive & Clean | Update session status="completed" -> TeamDelete(tech-debt) -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions: `Skill(skill="team-tech-debt", args="resume")` |
| Export Results | AskUserQuestion for target path -> copy deliverables -> Archive & Clean |

---

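The choice table maps directly to a small dispatcher; a sketch under stated assumptions (the function name is hypothetical, and the `steps` strings are placeholders for the tool calls described in the table, not real APIs):

```javascript
// Map the user's completion choice to a session status and follow-up steps.
function completionAction(choice) {
  switch (choice) {
    case 'Archive & Clean (Recommended)':
      return { status: 'completed', steps: ['TeamDelete(tech-debt)', 'output final summary'] }
    case 'Keep Active':
      return { status: 'paused', steps: ['output resume instructions'] }
    case 'Export Results':
      return { status: 'completed', steps: ['ask target path', 'copy deliverables', 'archive & clean'] }
    default:
      return null
  }
}
```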
@@ -1,164 +0,0 @@

# Command: evaluate

> CLI-assisted evaluation of debt items. For each item, assess business impact (1-5), fix cost (1-5), and the risk of leaving it unfixed, producing a priority-quadrant assignment.

## When to Use

- Phase 3 of Assessor
- Items in the debt inventory need quantitative evaluation
- The inventory is large enough to warrant CLI-assisted analysis

**Trigger conditions**:

- A TDEVAL-* task enters Phase 3
- The debt inventory contains >10 items to evaluate
- Contextual understanding is needed to judge impact and cost

## Strategy

### Delegation Mode

**Mode**: CLI Batch Analysis
**CLI Tool**: `gemini` (primary)
**CLI Mode**: `analysis`

### Decision Logic

```javascript
// Choose an evaluation strategy by inventory size
if (debtInventory.length <= 10) {
  // Few items: inline evaluation (severity/effort heuristics)
  mode = 'heuristic'
} else if (debtInventory.length <= 50) {
  // Medium scale: single CLI batch evaluation
  mode = 'cli-batch'
} else {
  // Large scale: chunked CLI evaluation
  mode = 'cli-chunked'
  chunkSize = 25
}
```

## Execution Steps

### Step 1: Context Preparation

```javascript
// Prepare the evaluation context
const debtSummary = debtInventory.map(item =>
  `[${item.id}] [${item.dimension}] [${item.severity}] ${item.file}:${item.line} - ${item.description}`
).join('\n')

// Read project metadata for context
const projectContext = []
try {
  const pkg = JSON.parse(Read('package.json'))
  projectContext.push(`Project: ${pkg.name}, Dependencies: ${Object.keys(pkg.dependencies || {}).length}`)
} catch {}
```

### Step 2: Execute Strategy

```javascript
if (mode === 'heuristic') {
  // Inline heuristic evaluation
  for (const item of debtInventory) {
    const severityImpact = { critical: 5, high: 4, medium: 3, low: 1 }
    const effortCost = { small: 1, medium: 3, large: 5 }
    item.impact_score = severityImpact[item.severity] || 3
    item.cost_score = effortCost[item.estimated_effort] || 3
    item.risk_if_unfixed = getRiskDescription(item)
    item.priority_quadrant = assignQuadrant(item.impact_score, item.cost_score)
  }
} else {
  // CLI batch evaluation
  const prompt = `PURPOSE: Evaluate technical debt items for business impact and fix cost to create a priority matrix
TASK: • For each debt item, assess business impact (1-5 scale: 1=negligible, 5=critical) • Assess fix complexity/cost (1-5 scale: 1=trivial, 5=major refactor) • Describe risk if unfixed • Assign priority quadrant: quick-win (high impact + low cost), strategic (high impact + high cost), backlog (low impact + low cost), defer (low impact + high cost)
MODE: analysis
CONTEXT: ${projectContext.join(' | ')}
EXPECTED: JSON array with: [{id, impact_score, cost_score, risk_if_unfixed, priority_quadrant}] for each item
CONSTRAINTS: Be realistic about costs, consider dependencies between items

## Debt Items to Evaluate
${debtSummary}`

  Bash(`ccw cli -p "${prompt}" --tool gemini --mode analysis --rule analysis-analyze-code-patterns`, {
    run_in_background: true
  })

  // Wait for the CLI to finish, parse results, merge back into debtInventory
}

function assignQuadrant(impact, cost) {
  if (impact >= 4 && cost <= 2) return 'quick-win'
  if (impact >= 4 && cost >= 3) return 'strategic'
  if (impact <= 3 && cost <= 2) return 'backlog'
  return 'defer'
}

function getRiskDescription(item) {
  const risks = {
    'code': 'Increased maintenance cost and bug probability',
    'architecture': 'Growing coupling makes changes harder and riskier',
    'testing': 'Reduced confidence in changes, higher regression risk',
    'dependency': 'Security vulnerabilities and compatibility issues',
    'documentation': 'Onboarding friction and knowledge loss'
  }
  return risks[item.dimension] || 'Technical quality degradation over time'
}
```

### Step 3: Result Processing

```javascript
// Verify evaluation completeness
const evaluated = debtInventory.filter(i => i.priority_quadrant)
const unevaluated = debtInventory.filter(i => !i.priority_quadrant)

if (unevaluated.length > 0) {
  // Fall back to heuristics for unevaluated items
  for (const item of unevaluated) {
    item.impact_score = item.impact_score || 3
    item.cost_score = item.cost_score || 3
    item.priority_quadrant = assignQuadrant(item.impact_score, item.cost_score)
    item.risk_if_unfixed = item.risk_if_unfixed || getRiskDescription(item)
  }
}

// Generate statistics
const stats = {
  total: debtInventory.length,
  evaluated_by_cli: evaluated.length,
  evaluated_by_heuristic: unevaluated.length,
  avg_impact: (debtInventory.reduce((s, i) => s + i.impact_score, 0) / debtInventory.length).toFixed(1),
  avg_cost: (debtInventory.reduce((s, i) => s + i.cost_score, 0) / debtInventory.length).toFixed(1)
}
```

## Output Format

```
## Evaluation Results

### Method: [heuristic|cli-batch|cli-chunked]
### Total Items: [count]
### Average Impact: [score]/5
### Average Cost: [score]/5

### Priority Distribution
| Quadrant | Count | % |
|----------|-------|---|
| Quick-Win | [n] | [%] |
| Strategic | [n] | [%] |
| Backlog | [n] | [%] |
| Defer | [n] | [%] |
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| CLI returns invalid JSON | Fall back to heuristic scoring |
| CLI timeout | Evaluate processed items, heuristic for rest |
| Debt inventory too large (>200) | Chunk into batches of 25 |
| Missing severity/effort data | Use dimension-based defaults |
| All items same quadrant | Re-evaluate with adjusted thresholds |
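The cli-chunked strategy and the >200-item error case both come down to batching the inventory; a minimal sketch (the helper name is an assumption):

```javascript
// Split a debt inventory into fixed-size batches for chunked CLI evaluation.
function chunkInventory(items, chunkSize = 25) {
  const chunks = []
  for (let i = 0; i < items.length; i += chunkSize) {
    chunks.push(items.slice(i, i + chunkSize))
  }
  return chunks
}
```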
@@ -1,185 +0,0 @@

# Assessor Role

Quantitative technical-debt assessor. Scores each debt item found by the scanner for impact (1-5) and fix cost (1-5), assigns priority quadrants, and produces priority-matrix.json.

## Identity

- **Name**: `assessor` | **Tag**: `[assessor]`
- **Task Prefix**: `TDEVAL-*`
- **Responsibility**: Read-only analysis (quantitative assessment)

## Boundaries

### MUST
- Only process `TDEVAL-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[assessor]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within the quantitative-assessment responsibility scope
- Base evaluations on data from the debt inventory

### MUST NOT
- Modify source code or test code
- Execute fix operations
- Create tasks for other roles
- Communicate directly with other worker roles (must go through coordinator)
- Omit the `[assessor]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `evaluate` | [commands/evaluate.md](commands/evaluate.md) | Phase 3 | Impact/cost matrix evaluation |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `gemini` | CLI | evaluate.md | Debt impact and fix-cost evaluation |

> Assessor does not directly use subagents

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `assessment_complete` | assessor -> coordinator | Assessment finished | Includes priority-matrix summary |
| `error` | assessor -> coordinator | Assessment failed | Blocking error |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "assessor",
  type: <message-type>,
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from assessor --type <message-type> --ref <artifact-path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `TDEVAL-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Load Debt Inventory

| Input | Source | Required |
|-------|--------|----------|
| Session folder | task.description (regex: `session:\s*(.+)`) | Yes |
| Shared memory | `<session-folder>/.msg/meta.json` | Yes |
| Debt inventory | meta.json:debt_inventory OR `<session-folder>/scan/debt-inventory.json` | Yes |

**Loading steps**:

1. Extract session path from task description
2. Read .msg/meta.json
3. Load debt_inventory from shared memory or fall back to the debt-inventory.json file
4. If debt_inventory is empty -> report empty assessment and exit

### Phase 3: Evaluate Each Item

Delegate to `commands/evaluate.md` if available, otherwise execute inline.

**Core Strategy**: For each debt item, evaluate impact(1-5) + cost(1-5) + priority quadrant

**Impact Score Mapping**:

| Severity | Impact Score |
|----------|--------------|
| critical | 5 |
| high | 4 |
| medium | 3 |
| low | 1 |

**Cost Score Mapping**:

| Estimated Effort | Cost Score |
|------------------|------------|
| small | 1 |
| medium | 3 |
| large | 5 |
| unknown | 3 |

**Priority Quadrant Classification**:

| Impact | Cost | Quadrant | Description |
|--------|------|----------|-------------|
| >= 4 | <= 2 | quick-win | High impact, low cost |
| >= 4 | >= 3 | strategic | High impact, high cost |
| <= 3 | <= 2 | backlog | Low impact, low cost |
| <= 3 | >= 3 | defer | Low impact, high cost |
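The classification table above reduces to a two-threshold rule; a minimal sketch consistent with the mappings in this section:

```javascript
// Map impact (1-5) and cost (1-5) scores to a priority quadrant.
function assignQuadrant(impact, cost) {
  if (impact >= 4 && cost <= 2) return 'quick-win'   // high impact, low cost
  if (impact >= 4 && cost >= 3) return 'strategic'   // high impact, high cost
  if (impact <= 3 && cost <= 2) return 'backlog'     // low impact, low cost
  return 'defer'                                     // low impact, high cost
}
```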
**Evaluation record**:

| Field | Description |
|-------|-------------|
| `impact_score` | 1-5, business impact |
| `cost_score` | 1-5, fix effort |
| `risk_if_unfixed` | Risk description |
| `priority_quadrant` | quick-win/strategic/backlog/defer |

### Phase 4: Generate Priority Matrix

**Matrix structure**:

| Field | Description |
|-------|-------------|
| `evaluation_date` | ISO timestamp |
| `total_items` | Count of evaluated items |
| `by_quadrant` | Items grouped by quadrant |
| `summary` | Count per quadrant |

**Sorting**: Within each quadrant, sort by impact_score descending

**Save outputs**:

1. Write `<session-folder>/assessment/priority-matrix.json`
2. Update .msg/meta.json with `priority_matrix` summary and evaluated `debt_inventory`
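The grouping and sorting rules above can be sketched as follows (the function name is an assumption):

```javascript
// Group evaluated items by quadrant, sorting each group by impact descending.
function buildPriorityMatrix(items) {
  const byQuadrant = { 'quick-win': [], 'strategic': [], 'backlog': [], 'defer': [] }
  for (const item of items) {
    (byQuadrant[item.priority_quadrant] || (byQuadrant[item.priority_quadrant] = [])).push(item)
  }
  for (const q of Object.keys(byQuadrant)) {
    byQuadrant[q].sort((a, b) => b.impact_score - a.impact_score)
  }
  return {
    evaluation_date: new Date().toISOString(),
    total_items: items.length,
    by_quadrant: byQuadrant,
    summary: Object.fromEntries(Object.entries(byQuadrant).map(([q, arr]) => [q, arr.length]))
  }
}
```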
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[assessor]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Report content**:

| Field | Value |
|-------|-------|
| Task | task.subject |
| Total Items | Count of evaluated items |
| Priority Matrix | Count per quadrant |
| Top Quick-Wins | Top 5 quick-win items with details |
| Priority Matrix File | Path to priority-matrix.json |

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No TDEVAL-* tasks available | Idle, wait for coordinator |
| Debt inventory empty | Report empty assessment, notify coordinator |
| Shared memory corrupted | Re-read from debt-inventory.json file |
| CLI analysis fails | Fall back to severity-based heuristic scoring |
| Too many items (>200) | Batch-evaluate top 50 critical/high first |
@@ -1,413 +1,234 @@

# Command: Monitor

Handle all coordinator monitoring events: worker callbacks, status checks, pipeline advancement, fix-verify loops, and completion.

## Constants

| Key | Value |
|-----|-------|
| SPAWN_MODE | background |
| ONE_STEP_PER_INVOCATION | true |
| WORKER_AGENT | team-worker |
| MAX_GC_ROUNDS | 3 |

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session state | <session>/session.json | Yes |
| Task list | TaskList() | Yes |
| Trigger event | From Entry Router detection | Yes |
| Meta state | <session>/.msg/meta.json | Yes |
| Pipeline definition | From SKILL.md | Yes |

1. Load session.json for current state, `pipeline_mode`, `gc_rounds`
2. Run TaskList() to get current task statuses
3. Identify trigger event type from Entry Router

### Role Detection Table

| Message Pattern | Role Detection |
|----------------|---------------|
| `[scanner]` or task ID `TDSCAN-*` | scanner |
| `[assessor]` or task ID `TDEVAL-*` | assessor |
| `[planner]` or task ID `TDPLAN-*` | planner |
| `[executor]` or task ID `TDFIX-*` | executor |
| `[validator]` or task ID `TDVAL-*` | validator |
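The detection table can be read as a first-match scan over tag and prefix patterns; a minimal sketch (the function and constant names are assumptions):

```javascript
// Resolve a worker role from a message tag or a task ID prefix.
const ROLE_PATTERNS = [
  { role: 'scanner',   tag: '[scanner]',   prefix: 'TDSCAN-' },
  { role: 'assessor',  tag: '[assessor]',  prefix: 'TDEVAL-' },
  { role: 'planner',   tag: '[planner]',   prefix: 'TDPLAN-' },
  { role: 'executor',  tag: '[executor]',  prefix: 'TDFIX-'  },
  { role: 'validator', tag: '[validator]', prefix: 'TDVAL-'  }
]

function detectRole(message = '', taskId = '') {
  const hit = ROLE_PATTERNS.find(p =>
    message.includes(p.tag) || taskId.startsWith(p.prefix)
  )
  return hit ? hit.role : null
}
```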
### Pipeline Stage Order

```
TDSCAN -> TDEVAL -> TDPLAN -> TDFIX -> TDVAL
```

## Phase 3: Event Handlers

### handleCallback

Triggered when a worker sends a completion message.

1. Parse message to identify role and task ID using the Role Detection Table

2. Mark task as completed:

```
TaskUpdate({ taskId: "<task-id>", status: "completed" })
```

3. Record completion in session state

4. **Plan Approval Gate** (when planner TDPLAN completes):

Before advancing to TDFIX, present the remediation plan to the user for approval.

```
// Read the generated plan
planContent = Read(<session>/plan/remediation-plan.md)
  || Read(<session>/plan/remediation-plan.json)

AskUserQuestion({
  questions: [{
    question: "Remediation plan generated. Review and decide:",
    header: "Plan Review",
    multiSelect: false,
    options: [
      { label: "Approve", description: "Proceed with fix execution" },
      { label: "Revise", description: "Re-run planner with feedback" },
      { label: "Abort", description: "Stop pipeline, no fixes applied" }
    ]
  }]
})
```

| Decision | Action |
|----------|--------|
| Approve | Proceed to handleSpawnNext (TDFIX becomes ready) |
| Revise | Create TDPLAN-revised task, proceed to handleSpawnNext |
| Abort | Log shutdown, transition to handleComplete |

5. **GC Loop Check** (when validator TDVAL completes):

Read `<session>/.msg/meta.json` for validation results.

| Condition | Action |
|-----------|--------|
| No regressions found | Proceed to handleSpawnNext (pipeline complete) |
| Regressions found AND gc_rounds < 3 | Create fix-verify tasks, increment gc_rounds |
| Regressions found AND gc_rounds >= 3 | Accept current state, proceed to handleComplete |

**Fix-Verify Task Creation** (when regressions detected):

```
TaskCreate({
  subject: "TDFIX-fix-<round>",
  description: "PURPOSE: Fix regressions found by validator | Success: All regressions resolved
TASK:
- Load validation report with regression details
- Apply targeted fixes for each regression
- Re-validate fixes locally before completion
CONTEXT:
- Session: <session-folder>
- Upstream artifacts: <session>/.msg/meta.json
EXPECTED: Fixed source files | Regressions resolved
CONSTRAINTS: Targeted fixes only | Do not introduce new regressions",
  blockedBy: [],
  status: "pending"
})

TaskCreate({
  subject: "TDVAL-recheck-<round>",
  description: "PURPOSE: Re-validate after regression fixes | Success: Zero regressions
TASK:
- Run full validation suite on fixed code
- Compare debt scores before and after
- Report regression status
CONTEXT:
- Session: <session-folder>
EXPECTED: Validation results with regression count
CONSTRAINTS: Read-only validation",
  blockedBy: ["TDFIX-fix-<round>"],
  status: "pending"
})
```

6. Proceed to handleSpawnNext
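The GC Loop Check conditions reduce to a three-way decision; a minimal sketch (the function name is an assumption; MAX_GC_ROUNDS = 3 per the Constants table):

```javascript
// Decide the next step after a TDVAL completion from regressions and round budget.
function gcDecision(regressions, gcRounds, maxRounds = 3) {
  if (regressions === 0) return 'advance'       // no regressions: pipeline can complete
  if (gcRounds < maxRounds) return 'fix-verify' // schedule a TDFIX-fix / TDVAL-recheck round
  return 'accept'                               // round budget exhausted: accept current state
}
```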

### handleSpawnNext

Find and spawn the next ready tasks.

1. Scan task list for tasks where:
   - Status is "pending"
   - All blockedBy tasks have status "completed"

2. If no ready tasks and all tasks completed, proceed to handleComplete

3. If no ready tasks but some still in_progress, STOP and wait

4. For each ready task, determine role from task subject prefix:

```javascript
const STAGE_WORKER_MAP = {
  'TDSCAN': { role: 'scanner' },
  'TDEVAL': { role: 'assessor' },
  'TDPLAN': { role: 'planner' },
  'TDFIX': { role: 'executor' },
  'TDVAL': { role: 'validator' }
}
```
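Steps 1-3 above amount to a readiness scan over the task list; a minimal sketch (the helper name is an assumption):

```javascript
// Return pending tasks whose blockers have all completed.
function findReadyTasks(tasks) {
  const done = new Set(tasks.filter(t => t.status === 'completed').map(t => t.id))
  return tasks.filter(t =>
    t.status === 'pending' &&
    (t.blockedBy || []).every(id => done.has(id))
  )
}
```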
5. Spawn team-worker (one at a time for sequential pipeline):

```
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker for <task-id>",
  team_name: "tech-debt",
  name: "<role>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-tech-debt/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: tech-debt
requirement: <task-description>
inner_loop: false

## Current Task
- Task ID: <task-id>
- Task: <task-subject>

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`
})
```

6. STOP after spawning -- wait for next callback

```javascript
function handleStageFailure(stageTask, taskState, workerConfig, autoYes) {
  if (autoYes) {
    mcp__ccw-tools__team_msg({
      operation: "log", session_id: sessionId, from: "coordinator",
      type: "error"
    })
    TaskUpdate({ taskId: stageTask.id, status: 'deleted' })
    return 'skip'
  }

  const decision = AskUserQuestion({
    questions: [{
      question: `Stage "${stageTask.subject}" worker returned but did not complete (status=${taskState.status}). How should we proceed?`,
      header: "Stage Fail",
      multiSelect: false,
      options: [
        { label: "Retry", description: "Re-spawn the worker for this stage" },
        { label: "Skip", description: "Mark as skipped and continue the pipeline" },
        { label: "Abort", description: "Stop the whole pipeline and report current results" }
      ]
    }]
  })

  const answer = decision["Stage Fail"]
  if (answer === "Retry") {
    // Re-spawn the worker (single retry, no further recursion)
    TaskUpdate({ taskId: stageTask.id, status: 'in_progress' })
    const retryResult = Task({
      subagent_type: "team-worker",
      description: `Retry ${workerConfig.role} worker for ${stageTask.subject}`,
      team_name: teamName,
      name: workerConfig.role,
      prompt: buildWorkerPrompt(stageTask, workerConfig, sessionFolder, taskDescription),
      run_in_background: false
    })
    const retryState = TaskGet({ taskId: stageTask.id })
    if (retryState.status !== 'completed') {
      TaskUpdate({ taskId: stageTask.id, status: 'deleted' })
    }
    return 'retried'
  } else if (answer === "Skip") {
    TaskUpdate({ taskId: stageTask.id, status: 'deleted' })
    return 'skip'
  } else {
    mcp__ccw-tools__team_msg({
      operation: "log", session_id: sessionId, from: "coordinator",
      type: "shutdown"
    })
    return 'abort'
  }
}
```

### Step 2.3: Validation Evaluation

```javascript
function evaluateValidationResult(sessionFolder) {
  const latestMemory = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`))
  const debtBefore = latestMemory.debt_score_before || 0
  const debtAfter = latestMemory.debt_score_after || 0
  const regressions = latestMemory.validation_results?.regressions || 0
  const improved = debtAfter < debtBefore

  let status = 'PASS'
  if (!improved && regressions > 0) status = 'FAIL'
  else if (!improved) status = 'CONDITIONAL'

  mcp__ccw-tools__team_msg({
    operation: "log", session_id: sessionId, from: "coordinator",
    type: "quality_gate"
  })

  // true when regressions require another fix-verify round
  return regressions > 0
}
```
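Stripped of the team-runtime tool calls, the PASS/FAIL/CONDITIONAL derivation can be exercised on its own. This is a minimal sketch; `deriveGateStatus` is a hypothetical helper name, not part of the skill API:

```javascript
// Derive the quality-gate status from debt scores and regression count.
// Mirrors evaluateValidationResult's decision table:
//   improved                     -> PASS
//   not improved && regressions  -> FAIL
//   not improved, no regressions -> CONDITIONAL
function deriveGateStatus(debtBefore, debtAfter, regressions) {
  const improved = debtAfter < debtBefore
  if (!improved && regressions > 0) return 'FAIL'
  if (!improved) return 'CONDITIONAL'
  return 'PASS'
}

console.log(deriveGateStatus(80, 55, 0)) // improved, clean run
console.log(deriveGateStatus(80, 85, 2)) // debt grew and regressions appeared
```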

### Step 3: Result Processing + PR Creation

```javascript
// Aggregate all results
const finalSharedMemory = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`))
const allFinalTasks = TaskList()
const workerTasks = allFinalTasks.filter(t => t.owner && t.owner !== 'coordinator')

// PR creation (worktree execution mode, after validation passes)
if (finalSharedMemory.worktree && finalSharedMemory.validation_results?.passed) {
  const { path: wtPath, branch } = finalSharedMemory.worktree

  // Commit all changes in worktree
  Bash(`cd "${wtPath}" && git add -A && git commit -m "$(cat <<'EOF'
tech-debt: ${taskDescription}

Automated tech debt cleanup via team-tech-debt pipeline.
Mode: ${pipelineMode}
Items fixed: ${finalSharedMemory.fix_results?.items_fixed || 0}
Debt score: ${finalSharedMemory.debt_score_before} → ${finalSharedMemory.debt_score_after}
EOF
)"`)

  // Push + Create PR
  Bash(`cd "${wtPath}" && git push -u origin "${branch}"`)

  const prTitle = `Tech Debt: ${taskDescription.slice(0, 50)}`
  Bash(`cd "${wtPath}" && gh pr create --title "${prTitle}" --body "$(cat <<'EOF'
## Tech Debt Cleanup

**Mode**: ${pipelineMode}
**Items fixed**: ${finalSharedMemory.fix_results?.items_fixed || 0}
**Debt score**: ${finalSharedMemory.debt_score_before} → ${finalSharedMemory.debt_score_after}

### Validation
- Tests: ${finalSharedMemory.validation_results?.checks?.test_suite?.status || 'N/A'}
- Types: ${finalSharedMemory.validation_results?.checks?.type_check?.status || 'N/A'}
- Lint: ${finalSharedMemory.validation_results?.checks?.lint_check?.status || 'N/A'}

### Session
${sessionFolder}
EOF
)"`)

  mcp__ccw-tools__team_msg({
    operation: "log", session_id: sessionId, from: "coordinator",
    type: "pr_created"
  })

  // Cleanup worktree
  Bash(`git worktree remove "${wtPath}" 2>/dev/null || true`)
} else if (finalSharedMemory.worktree && !finalSharedMemory.validation_results?.passed) {
  mcp__ccw-tools__team_msg({
    operation: "log", session_id: sessionId, from: "coordinator",
    type: "quality_gate"
  })
}

const summary = {
  total_tasks: workerTasks.length,
  completed_tasks: workerTasks.filter(t => t.status === 'completed').length,
  fix_verify_iterations: fixVerifyIteration,
  debt_score_before: finalSharedMemory.debt_score_before,
  debt_score_after: finalSharedMemory.debt_score_after
}
```

## Output Format

Output current pipeline status.

```
## Coordination Summary
Pipeline Status:
[DONE] TDSCAN-001 (scanner) -> scan complete
[DONE] TDEVAL-001 (assessor) -> assessment ready
[DONE] TDPLAN-001 (planner) -> plan approved
[RUN]  TDFIX-001 (executor) -> fixing...
[WAIT] TDVAL-001 (validator) -> blocked by TDFIX-001

### Pipeline Status: COMPLETE
### Tasks: [completed]/[total]
### Fix-Verify Iterations: [count]
### Debt Score: [before] → [after]
GC Rounds: 0/3
Session: <session-id>
```

## Error Handling

On error, output status -- do NOT advance the pipeline.

| Scenario | Resolution |
|----------|------------|
| Worker returned but not completed (interactive mode) | AskUserQuestion: Retry / Skip / Abort |
| Worker returned but not completed (auto mode) | Skip automatically, log the event |
| Worker spawn failure | Retry once; if it still fails, escalate to user |
| Quality gate FAIL | Report to user, suggest targeted re-run |
| Fix-Verify loop stuck >3 iterations | Accept current state, continue pipeline |
| Shared memory read failure | Fall back to TaskList-based status |

### handleResume

Resume pipeline after user pause or interruption.

1. Audit task list for inconsistencies:
   - Tasks stuck in "in_progress" -> reset to "pending"
   - Tasks with completed blockers but still "pending" -> include in spawn list
2. Proceed to handleSpawnNext
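The audit step can be sketched as a pure function over a task array. This is a sketch only; the real TaskList/TaskUpdate tools are team-runtime APIs, and `auditTasks` is a hypothetical helper name:

```javascript
// Classify tasks for resume: stuck in_progress tasks are reset to pending,
// and pending tasks whose blockers are all completed become spawnable.
function auditTasks(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]))
  const toReset = tasks.filter(t => t.status === 'in_progress').map(t => t.id)
  const spawnable = tasks
    .filter(t => t.status === 'pending')
    .filter(t => (t.blockedBy || []).every(id => byId.get(id)?.status === 'completed'))
    .map(t => t.id)
  return { toReset, spawnable }
}

const audit = auditTasks([
  { id: 'A', status: 'completed' },
  { id: 'B', status: 'in_progress' },
  { id: 'C', status: 'pending', blockedBy: ['A'] },
  { id: 'D', status: 'pending', blockedBy: ['B'] }
])
console.log(audit) // B needs a reset; C is unblocked and spawnable
```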

### handleComplete

Triggered when all pipeline tasks are completed.

1. Verify all tasks (including any fix-verify tasks) have status "completed"
2. If any tasks not completed, return to handleSpawnNext
3. If all completed:
   - Read final state from `<session>/.msg/meta.json`
   - Compile summary: total tasks, completed, gc_rounds, debt_score_before, debt_score_after
   - If worktree exists and validation passed: commit changes, create PR, cleanup worktree
   - Transition to coordinator Phase 5

## Phase 4: State Persistence

After every handler execution:

1. Update session.json with current state (active tasks, gc_rounds, last event)
2. Verify task list consistency
3. STOP and wait for next event

Delegate to `commands/dispatch.md` which creates the full task chain.

```
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker",
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-tech-debt/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: tech-debt
requirement: <task-description>
inner_loop: false

## Prime Directive (MUST)
All of your work must be executed after loading your role definition via the Skill call -- do not improvise:
Skill(skill="team-tech-debt", args="--role=<role>")
This call loads your role definition (role.md), available commands (commands/*.md), and the full execution logic.

## Current Task
- Task ID: <task-id>
- Task: <PREFIX>-<NNN>
- Task Prefix: <PREFIX>

Current requirement: <task-description>
Session: <session-folder>

## Role Rules (Mandatory)
- Only process tasks with the <PREFIX>-* prefix; never do another role's work
- All output (SendMessage, team_msg) must carry the [<role>] identifier prefix
- Communicate only with the coordinator; never contact other workers directly
- Never use TaskCreate to create tasks for other roles

## Message Bus (Mandatory)
Before every SendMessage, log via mcp__ccw-tools__team_msg first.

## Workflow (Strict Order)
1. Call Skill(skill="team-tech-debt", args="--role=<role>") to load role definition and execution logic
2. Execute the 5-Phase flow from role.md (TaskList -> find <PREFIX>-* task -> execute -> report)
3. team_msg log + SendMessage results to coordinator (with [<role>] identifier)
4. TaskUpdate completed -> check for next task -> return to step 1

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`,
  run_in_background: false // Stop-Wait: synchronous blocking
})
```

# Command: remediate

> Delegate debt cleanup to code-developer in batches. Group fixes by type (refactoring, dead-code removal, dependency updates, documentation additions), delegating each batch to code-developer.

## When to Use

- Phase 3 of Executor
- Remediation plan is loaded and fix actions are batched
- Code changes need to be executed through code-developer

**Trigger conditions**:
- A TDFIX-* task enters Phase 3
- The fix action list is non-empty
- Target files are accessible

## Strategy

### Delegation Mode

**Mode**: Sequential Batch Delegation
**Subagent**: `code-developer`
**Batch Strategy**: group by fix type; one delegation per group

### Decision Logic

```javascript
// Batching strategy
const batchOrder = ['refactor', 'update-deps', 'add-tests', 'add-docs', 'restructure']

// Sort batches by priority
function sortBatches(batches) {
  const sorted = {}
  for (const type of batchOrder) {
    if (batches[type]) sorted[type] = batches[type]
  }
  // Append unknown types
  for (const [type, actions] of Object.entries(batches)) {
    if (!sorted[type]) sorted[type] = actions
  }
  return sorted
}
```
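The ordering can be verified standalone with the same function and a toy input (object key order in JavaScript follows insertion order for string keys, which is what the priority sort relies on):

```javascript
const batchOrder = ['refactor', 'update-deps', 'add-tests', 'add-docs', 'restructure']

function sortBatches(batches) {
  const sorted = {}
  // Known types first, in priority order
  for (const type of batchOrder) {
    if (batches[type]) sorted[type] = batches[type]
  }
  // Unknown types appended afterwards
  for (const [type, actions] of Object.entries(batches)) {
    if (!sorted[type]) sorted[type] = actions
  }
  return sorted
}

const sorted = sortBatches({ 'add-docs': [1], 'refactor': [2], 'misc': [3] })
console.log(Object.keys(sorted)) // known types in priority order, then 'misc'
```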

## Execution Steps

### Step 1: Context Preparation

```javascript
// Group by type and sort
const sortedBatches = sortBatches(batches)

// Worktree path (loaded from shared memory)
const worktreePath = sharedMemory.worktree?.path || null
const cmdPrefix = worktreePath ? `cd "${worktreePath}" && ` : ''

// Max items per batch
const MAX_ITEMS_PER_BATCH = 10

// Split oversized batches further
function splitLargeBatches(batches) {
  const result = {}
  for (const [type, actions] of Object.entries(batches)) {
    if (actions.length <= MAX_ITEMS_PER_BATCH) {
      result[type] = actions
    } else {
      for (let i = 0; i < actions.length; i += MAX_ITEMS_PER_BATCH) {
        const chunk = actions.slice(i, i + MAX_ITEMS_PER_BATCH)
        result[`${type}-${Math.floor(i / MAX_ITEMS_PER_BATCH) + 1}`] = chunk
      }
    }
  }
  return result
}

const finalBatches = splitLargeBatches(sortedBatches)
```
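The chunking behavior can be checked standalone with the same function; 25 actions of one type split into three sub-batches named `refactor-1` through `refactor-3`:

```javascript
const MAX_ITEMS_PER_BATCH = 10

function splitLargeBatches(batches) {
  const result = {}
  for (const [type, actions] of Object.entries(batches)) {
    if (actions.length <= MAX_ITEMS_PER_BATCH) {
      result[type] = actions
    } else {
      // Chunk into numbered sub-batches: type-1, type-2, ...
      for (let i = 0; i < actions.length; i += MAX_ITEMS_PER_BATCH) {
        result[`${type}-${Math.floor(i / MAX_ITEMS_PER_BATCH) + 1}`] = actions.slice(i, i + MAX_ITEMS_PER_BATCH)
      }
    }
  }
  return result
}

const out = splitLargeBatches({ refactor: Array.from({ length: 25 }, (_, i) => i) })
console.log(Object.keys(out)) // refactor-1 (10), refactor-2 (10), refactor-3 (5)
```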

### Step 2: Execute Strategy

```javascript
for (const [batchName, actions] of Object.entries(finalBatches)) {
  // Build fix context
  const batchType = batchName.replace(/-\d+$/, '')
  const fileList = actions.map(a => a.file).filter(Boolean)

  // Pick fix prompt by type
  const typePrompts = {
    'refactor': `Refactor the following code to reduce complexity and improve readability. Preserve all existing behavior.`,
    'update-deps': `Update the specified dependencies. Check for breaking changes in changelogs.`,
    'add-tests': `Add missing test coverage for the specified modules. Follow existing test patterns.`,
    'add-docs': `Add documentation (JSDoc/docstrings) for the specified public APIs. Follow existing doc style.`,
    'restructure': `Restructure module boundaries to reduce coupling. Move code to appropriate locations.`
  }

  const prompt = typePrompts[batchType] || 'Apply the specified fix to resolve technical debt.'

  // Delegate to code-developer
  Task({
    subagent_type: "code-developer",
    run_in_background: false,
    description: `Tech debt cleanup: ${batchName} (${actions.length} items)`,
    prompt: `## Goal
${prompt}
${worktreePath ? `\n## Worktree (Mandatory)\n- Working directory: ${worktreePath}\n- **All file operations must happen under ${worktreePath}**\n- Read files: Read("${worktreePath}/path/to/file")\n- Bash commands: cd "${worktreePath}" && ...\n- Never modify the main working tree\n` : ''}
## Items to Fix
${actions.map(a => `### ${a.debt_id}: ${a.action}
- File: ${a.file || 'N/A'}
- Type: ${a.type}
${a.steps ? '- Steps:\n' + a.steps.map(s => `  1. ${s}`).join('\n') : ''}`).join('\n\n')}

## Constraints
- Read each file BEFORE modifying
- Make minimal changes - fix only the specified debt item
- Preserve backward compatibility
- Do NOT skip tests or add @ts-ignore
- Do NOT introduce new dependencies unless explicitly required
- Run syntax check after modifications

## Files to Read First
${fileList.map(f => `- ${f}`).join('\n')}`
  })

  // Verify batch result
  const batchResult = {
    batch: batchName,
    items: actions.length,
    status: 'completed'
  }

  // Check whether files were modified (run in worktree)
  for (const file of fileList) {
    const modified = Bash(`${cmdPrefix}git diff --name-only -- "${file}" 2>/dev/null`).trim()
    if (modified) {
      fixResults.files_modified.push(file)
    }
  }
}
```

### Step 3: Result Processing

```javascript
// Tally fix results
const totalActions = Object.values(finalBatches).flat().length
fixResults.items_fixed = fixResults.files_modified.length
fixResults.items_failed = totalActions - fixResults.items_fixed
fixResults.items_remaining = fixResults.items_failed

// Build fix summary
const batchSummaries = Object.entries(finalBatches).map(([name, actions]) =>
  `- ${name}: ${actions.length} items`
).join('\n')
```

## Output Format

```
## Remediation Results

### Batches Executed: [count]
### Items Fixed: [count]/[total]
### Files Modified: [count]

### Batch Details
- [batch-name]: [count] items - [status]

### Modified Files
- [file-path]
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| code-developer fails on a batch | Retry once, mark failed items |
| File locked or read-only | Skip file, log error |
| Syntax error after fix | Revert with git checkout, mark as failed |
| New import/dependency needed | Add minimally, document in fix log |
| Batch too large (>10 items) | Auto-split into sub-batches |
| Agent timeout | Use partial results, continue next batch |

# Executor Role

Technical debt remediation executor. Performs refactoring, dependency updates, code cleanup, and documentation additions according to the remediation plan. Executes fixes in batches via the code-developer subagent, with a self-validation step.

## Identity

- **Name**: `executor` | **Tag**: `[executor]`
- **Task Prefix**: `TDFIX-*`
- **Responsibility**: Code generation (debt remediation execution)

## Boundaries

### MUST
- Only process `TDFIX-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[executor]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within debt remediation responsibility scope
- Execute fixes according to remediation plan
- Perform self-validation (syntax check, lint)

### MUST NOT
- Create new features from scratch (only cleanup debt)
- Modify code outside the remediation plan
- Create tasks for other roles
- Communicate directly with other worker roles (must go through coordinator)
- Skip self-validation step
- Omit `[executor]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `remediate` | [commands/remediate.md](commands/remediate.md) | Phase 3 | Batch-delegate fixes to code-developer |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `code-developer` | Subagent | remediate.md | Code fix execution |

> Executor does not directly use CLI analysis tools (uses code-developer subagent indirectly)

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `fix_complete` | executor -> coordinator | Fixes finished | Contains fix summary |
| `fix_progress` | executor -> coordinator | Batch finished | Progress update |
| `error` | executor -> coordinator | Execution failed | Blocking error |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "executor",
  type: <message-type>,
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from executor --type <message-type> --ref <artifact-path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `TDFIX-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Load Remediation Plan

| Input | Source | Required |
|-------|--------|----------|
| Session folder | task.description (regex: `session:\s*(.+)`) | Yes |
| Shared memory | `<session-folder>/.msg/meta.json` | Yes |
| Remediation plan | `<session-folder>/plan/remediation-plan.json` | Yes |

**Loading steps**:

1. Extract session path from task description
2. Read .msg/meta.json for worktree info:

| Field | Description |
|-------|-------------|
| `worktree.path` | Worktree directory path |
| `worktree.branch` | Worktree branch name |

3. Read remediation-plan.json for actions
4. Extract all actions from plan phases
5. Identify target files (unique file paths from actions)
6. Group actions by type for batch processing

**Batch grouping**:

| Action Type | Description |
|-------------|-------------|
| refactor | Code refactoring |
| restructure | Architecture changes |
| add-tests | Test additions |
| update-deps | Dependency updates |
| add-docs | Documentation additions |
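
The step-6 grouping can be sketched as follows (a sketch only; it assumes each action carries a `type` field matching the table above, and `groupActionsByType` is a hypothetical helper name):

```javascript
// Group remediation actions by their type for batch processing.
function groupActionsByType(actions) {
  const batches = {}
  for (const action of actions) {
    const type = action.type || 'refactor' // assumed fallback when type is missing
    ;(batches[type] ||= []).push(action)
  }
  return batches
}

const batches = groupActionsByType([
  { debt_id: 'TD-1', type: 'refactor' },
  { debt_id: 'TD-2', type: 'add-tests' },
  { debt_id: 'TD-3', type: 'refactor' }
])
console.log(Object.keys(batches)) // one batch per action type
```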
### Phase 3: Execute Fixes

Delegate to `commands/remediate.md` if available, otherwise execute inline.

**Core Strategy**: Batch delegate to code-developer subagent (operate in worktree)

> **CRITICAL**: All file operations must occur within the worktree. Use `run_in_background: false` for synchronous execution.

**Fix Results Tracking**:

| Field | Description |
|-------|-------------|
| `items_fixed` | Count of successfully fixed items |
| `items_failed` | Count of failed items |
| `items_remaining` | Count of remaining items |
| `batches_completed` | Count of completed batches |
| `files_modified` | Array of modified file paths |
| `errors` | Array of error messages |
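
A minimal sketch of the tracking object and a per-batch update. Field names follow the table; the `makeFixResults`/`recordBatch` helper names and the exact update semantics are assumptions for illustration:

```javascript
// Initialize the tracking object with all items still remaining.
function makeFixResults(totalItems) {
  return {
    items_fixed: 0, items_failed: 0, items_remaining: totalItems,
    batches_completed: 0, files_modified: [], errors: []
  }
}

// Fold one finished batch into the running totals.
function recordBatch(results, modifiedFiles, failedCount) {
  results.batches_completed += 1
  results.files_modified.push(...modifiedFiles)
  results.items_fixed += modifiedFiles.length
  results.items_failed += failedCount
  results.items_remaining -= modifiedFiles.length + failedCount
  return results
}

const res = recordBatch(makeFixResults(5), ['src/a.ts', 'src/b.ts'], 1)
console.log(res) // 2 fixed, 1 failed, 2 remaining after one batch
```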
**Batch execution flow**:

For each batch type and its actions:
1. Spawn code-developer subagent with worktree context
2. Wait for completion (synchronous)
3. Log progress via team_msg
4. Increment batch counter

**Subagent prompt template**:

```
Task({
  subagent_type: "code-developer",
  run_in_background: false, // Stop-Wait: synchronous execution
  description: "Fix tech debt batch: <batch-type> (<count> items)",
  prompt: `## Goal
Execute tech debt cleanup for <batch-type> items.

## Worktree (Mandatory)
- Working directory: <worktree-path>
- **All file reads and modifications must be within <worktree-path>**
- Read files using <worktree-path>/path/to/file
- Prefix Bash commands with cd "<worktree-path>" && ...

## Actions
<action-list>

## Instructions
- Read each target file before modifying
- Apply the specified fix
- Preserve backward compatibility
- Do NOT introduce new features
- Do NOT modify unrelated code
- Run basic syntax check after each change`
})
```

### Phase 4: Self-Validation

> **CRITICAL**: All commands must execute in worktree

**Validation checks**:

| Check | Command | Pass Criteria |
|-------|---------|---------------|
| Syntax | `tsc --noEmit` or `python -m py_compile` | No errors |
| Lint | `eslint --no-error-on-unmatched-pattern` | No errors |

**Command prefix** (if worktree): `cd "<worktree-path>" && `

**Validation flow**:

1. Run syntax check -> record PASS/FAIL
2. Run lint check -> record PASS/FAIL
3. Update fix_results.self_validation
4. Write `<session-folder>/fixes/fix-log.json`
5. Update .msg/meta.json with fix_results

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[executor]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Report content**:

| Field | Value |
|-------|-------|
| Task | task.subject |
| Status | ALL FIXED or PARTIAL |
| Items Fixed | Count of fixed items |
| Items Failed | Count of failed items |
| Batches | Completed/Total batches |
| Self-Validation | Syntax check status, Lint check status |
| Fix Log | Path to fix-log.json |

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No TDFIX-* tasks available | Idle, wait for coordinator |
| Remediation plan missing | Request plan from shared memory, report error if empty |
| code-developer fails | Retry once, skip item on second failure |
| Syntax check fails after fix | Revert change, mark item as failed |
| Lint errors introduced | Attempt auto-fix with eslint --fix, report if persistent |
| File not found | Skip item, log warning |

# Command: create-plan

> Use the gemini CLI to create a structured remediation plan. Quick-wins go to immediate execution, systematic items to mid-term remediation, and prevention mechanisms are identified for long-term improvement. Outputs remediation-plan.md.

## When to Use

- Phase 3 of Planner
- Assessment matrix is ready; a remediation plan is needed
- Debt items are grouped by priority quadrant

**Trigger conditions**:
- A TDPLAN-* task enters Phase 3
- Assessment data is available (priority-matrix.json)
- CLI assistance is needed to generate detailed fix suggestions

## Strategy

### Delegation Mode

**Mode**: CLI Analysis + Template Generation
**CLI Tool**: `gemini` (primary)
**CLI Mode**: `analysis`

### Decision Logic

```javascript
// Plan generation strategy
if (quickWins.length + strategic.length <= 5) {
  // Few items: generate the plan inline
  mode = 'inline'
} else {
  // Many items: CLI-assisted generation of detailed fix steps
  mode = 'cli-assisted'
}
```

## Execution Steps

### Step 1: Context Preparation

```javascript
// Prepare debt summary for CLI analysis
const debtSummary = debtInventory
  .filter(i => i.priority_quadrant === 'quick-win' || i.priority_quadrant === 'strategic')
  .map(i => `[${i.id}] [${i.priority_quadrant}] [${i.dimension}] ${i.file}:${i.line} - ${i.description} (impact: ${i.impact_score}, cost: ${i.cost_score})`)
  .join('\n')

// Read related source files for context
const affectedFiles = [...new Set(debtInventory.map(i => i.file).filter(Boolean))]
const fileContext = affectedFiles.slice(0, 20).map(f => `@${f}`).join(' ')
```
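With a toy inventory, the same filter-map-join pipeline produces one summary line per quick-win or strategic item (backlog items are dropped):

```javascript
const debtInventory = [
  { id: 'TD-1', priority_quadrant: 'quick-win', dimension: 'code', file: 'src/a.ts', line: 12,
    description: 'deep nesting', impact_score: 8, cost_score: 2 },
  { id: 'TD-2', priority_quadrant: 'backlog', dimension: 'testing', file: 'src/b.ts', line: 3,
    description: 'no tests', impact_score: 4, cost_score: 6 }
]

// Same mapping as the command above, run standalone.
const debtSummary = debtInventory
  .filter(i => i.priority_quadrant === 'quick-win' || i.priority_quadrant === 'strategic')
  .map(i => `[${i.id}] [${i.priority_quadrant}] [${i.dimension}] ${i.file}:${i.line} - ${i.description} (impact: ${i.impact_score}, cost: ${i.cost_score})`)
  .join('\n')

console.log(debtSummary) // only TD-1 survives the quadrant filter
```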

### Step 2: Execute Strategy

```javascript
if (mode === 'inline') {
  // Generate the plan inline
  for (const item of quickWins) {
    item.remediation_steps = [
      `Read ${item.file}`,
      `Apply fix: ${item.suggestion || 'Resolve ' + item.description}`,
      `Verify fix with relevant tests`
    ]
  }
  for (const item of strategic) {
    item.remediation_steps = [
      `Analyze impact scope of ${item.file}`,
      `Plan refactoring: ${item.suggestion || 'Address ' + item.description}`,
      `Implement changes incrementally`,
      `Run full test suite to verify`
    ]
  }
} else {
  // CLI-assisted generation of fix steps
  const prompt = `PURPOSE: Create detailed remediation steps for each technical debt item, grouped into actionable phases
TASK: • For each quick-win item, generate specific fix steps (1-3 steps) • For each strategic item, generate a refactoring plan (3-5 steps) • Identify prevention mechanisms based on recurring patterns • Group related items that should be fixed together
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Structured remediation plan with: phase name, items, steps per item, dependencies between fixes, estimated time per phase
CONSTRAINTS: Focus on backward-compatible changes, prefer incremental fixes over big-bang refactoring

## Debt Items to Plan
${debtSummary}

## Recurring Patterns
${[...new Set(debtInventory.map(i => i.dimension))].map(d => {
  const count = debtInventory.filter(i => i.dimension === d).length
  return `- ${d}: ${count} items`
}).join('\n')}`

  Bash(`ccw cli -p "${prompt}" --tool gemini --mode analysis --rule planning-breakdown-task-steps`, {
    run_in_background: true
  })

  // Wait for the CLI to finish, then parse the result
}
```

### Step 3: Result Processing

```javascript
// Generate the Markdown remediation plan
function generatePlanMarkdown(plan, validation) {
  return `# Tech Debt Remediation Plan

## Overview
- **Total Actions**: ${validation.total_actions}
- **Files Affected**: ${validation.files_affected.length}
- **Total Estimated Effort**: ${validation.total_effort} points

## Phase 1: Quick Wins (Immediate)
> High impact, low cost items for immediate action.

${plan.phases[0].actions.map((a, i) => `### ${i + 1}. ${a.debt_id}: ${a.action}
- **File**: ${a.file || 'N/A'}
- **Type**: ${a.type}
${a.steps ? a.steps.map(s => `- [ ] ${s}`).join('\n') : ''}`).join('\n\n')}

## Phase 2: Systematic (Medium-term)
> High impact items requiring structured refactoring.

${plan.phases[1].actions.map((a, i) => `### ${i + 1}. ${a.debt_id}: ${a.action}
- **File**: ${a.file || 'N/A'}
- **Type**: ${a.type}
${a.steps ? a.steps.map(s => `- [ ] ${s}`).join('\n') : ''}`).join('\n\n')}

## Phase 3: Prevention (Long-term)
> Mechanisms to prevent future debt accumulation.

${plan.phases[2].actions.map((a, i) => `### ${i + 1}. ${a.action}
- **Dimension**: ${a.dimension || 'general'}
- **Type**: ${a.type}`).join('\n\n')}

## Execution Notes
- Execute Phase 1 first for maximum ROI
- Phase 2 items may require feature branches
- Phase 3 should be integrated into CI/CD pipeline
`
}
```

## Output Format

```
## Remediation Plan Created

### Phases: 3
### Quick Wins: [count] actions
### Systematic: [count] actions
### Prevention: [count] actions
### Files Affected: [count]

### Output: [sessionFolder]/plan/remediation-plan.md
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| CLI returns unstructured text | Parse manually, extract action items |
| No quick-wins available | Focus plan on systematic and prevention |
| File references invalid | Verify with Glob, skip non-existent files |
| CLI timeout | Generate plan from heuristic data only |
| Agent/CLI failure | Retry once, then inline generation |
| Timeout (>5 min) | Report partial plan, notify planner |

# Planner Role

Technical debt remediation planner. Creates a phased remediation plan from the assessment matrix: quick-wins for immediate execution, systematic items for mid-term remediation, and prevention mechanisms for the long term. Produces remediation-plan.md.

## Identity

- **Name**: `planner` | **Tag**: `[planner]`
- **Task Prefix**: `TDPLAN-*`
- **Responsibility**: Orchestration (remediation planning)

## Boundaries

### MUST
- Only process `TDPLAN-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[planner]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within remediation planning responsibility scope
- Base plans on assessment data from shared memory

### MUST NOT
- Modify source code or test code
- Execute fix operations
- Create tasks for other roles
- Communicate directly with other worker roles (must go through coordinator)
- Omit `[planner]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `create-plan` | [commands/create-plan.md](commands/create-plan.md) | Phase 3 | Phased remediation plan generation |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `cli-explore-agent` | Subagent | create-plan.md | Codebase exploration to validate plan feasibility |
| `gemini` | CLI | create-plan.md | Remediation plan generation |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `plan_ready` | planner -> coordinator | Plan finished | Contains the phased remediation plan |
| `plan_revision` | planner -> coordinator | Plan revised | Plan adjusted per feedback |
| `error` | planner -> coordinator | Planning failed | Blocking error |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "planner",
  type: <message-type>,
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from planner --type <message-type> --ref <artifact-path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `TDPLAN-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Load Assessment Data

| Input | Source | Required |
|-------|--------|----------|
| Session folder | task.description (regex: `session:\s*(.+)`) | Yes |
| Shared memory | `<session-folder>/.msg/meta.json` | Yes |
| Priority matrix | `<session-folder>/assessment/priority-matrix.json` | Yes |

**Loading steps**:

1. Extract session path from task description
2. Read .msg/meta.json for debt_inventory
3. Read priority-matrix.json for quadrant groupings
4. Group items by priority quadrant:

| Quadrant | Filter |
|----------|--------|
| quickWins | priority_quadrant === 'quick-win' |
| strategic | priority_quadrant === 'strategic' |
| backlog | priority_quadrant === 'backlog' |
| deferred | priority_quadrant === 'defer' |
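
Step 4 can be sketched as a single function over the inventory (quadrant names per the table; `groupByQuadrant` is a hypothetical helper name):

```javascript
// Partition debt items into the four priority quadrants.
function groupByQuadrant(items) {
  return {
    quickWins: items.filter(i => i.priority_quadrant === 'quick-win'),
    strategic: items.filter(i => i.priority_quadrant === 'strategic'),
    backlog: items.filter(i => i.priority_quadrant === 'backlog'),
    deferred: items.filter(i => i.priority_quadrant === 'defer')
  }
}

const groups = groupByQuadrant([
  { id: 'TD-1', priority_quadrant: 'quick-win' },
  { id: 'TD-2', priority_quadrant: 'defer' },
  { id: 'TD-3', priority_quadrant: 'strategic' }
])
console.log(groups.quickWins.length, groups.deferred.length) // one item each
```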
### Phase 3: Create Remediation Plan

Delegate to `commands/create-plan.md` if available, otherwise execute inline.

**Core Strategy**: 3-phase remediation plan

| Phase | Name | Description | Items |
|-------|------|-------------|-------|
| 1 | Quick Wins | High impact, low cost; execute immediately | quickWins |
| 2 | Systematic | High impact, high cost; needs systematic planning | strategic |
| 3 | Prevention | Prevention mechanisms with long-term effect | Generated from inventory |

**Action Type Mapping**:

| Dimension | Action Type |
|-----------|-------------|
| code | refactor |
| architecture | restructure |
| testing | add-tests |
| dependency | update-deps |
| documentation | add-docs |

**Prevention Action Generation**:

| Condition | Action |
|-----------|--------|
| dimension count >= 3 | Generate prevention action for that dimension |

| Dimension | Prevention Action |
|-----------|-------------------|
| code | Add linting rules for complexity thresholds and code smell detection |
| architecture | Introduce module boundary checks in CI pipeline |
| testing | Set minimum coverage thresholds in CI and add pre-commit test hooks |
| dependency | Configure automated dependency update bot (Renovate/Dependabot) |
| documentation | Add JSDoc/docstring enforcement in linting rules |
|
||||
|
||||
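The mapping and prevention rules above can be sketched together. The dimension names, action types, and the `>= 3` threshold come from the tables; the helper names and the debt-item shape are illustrative assumptions.

```javascript
// Dimension -> remediation action type (per the Action Type Mapping table).
const ACTION_TYPE = {
  code: 'refactor',
  architecture: 'restructure',
  testing: 'add-tests',
  dependency: 'update-deps',
  documentation: 'add-docs'
}

// Emit one prevention action for every dimension with >= 3 debt items
// (per the Prevention Action Generation condition table).
function generatePreventionActions(debtItems) {
  const counts = {}
  for (const item of debtItems) {
    counts[item.dimension] = (counts[item.dimension] || 0) + 1
  }
  return Object.entries(counts)
    .filter(([, n]) => n >= 3)
    .map(([dimension]) => ({ dimension, action_type: ACTION_TYPE[dimension] }))
}

const prevention = generatePreventionActions([
  { dimension: 'code' }, { dimension: 'code' }, { dimension: 'code' },
  { dimension: 'testing' }
])
```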
### Phase 4: Validate Plan Feasibility

**Validation metrics**:

| Metric | Description |
|--------|-------------|
| total_actions | Sum of actions across all phases |
| total_effort | Sum of estimated effort scores |
| files_affected | Unique files in action list |
| has_quick_wins | Boolean: quickWins.length > 0 |
| has_prevention | Boolean: prevention actions exist |
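The metrics table above can be computed with a short helper. This is a sketch under assumptions: the output field names follow the table, but the plan/action shape (`phases`, `actions`, `effort`, `file`) is hypothetical.

```javascript
// Compute feasibility metrics for a 3-phase remediation plan.
// Output keys follow the validation metrics table above.
function computeValidationMetrics(plan) {
  const actions = plan.phases.flatMap(p => p.actions)
  return {
    total_actions: actions.length,
    total_effort: actions.reduce((sum, a) => sum + (a.effort || 0), 0),
    files_affected: new Set(actions.map(a => a.file)).size,
    has_quick_wins: plan.phases.some(p => p.name === 'Quick Wins' && p.actions.length > 0),
    has_prevention: plan.phases.some(p => p.name === 'Prevention' && p.actions.length > 0)
  }
}

const metrics = computeValidationMetrics({
  phases: [
    { name: 'Quick Wins', actions: [{ file: 'a.js', effort: 1 }] },
    { name: 'Systematic', actions: [{ file: 'a.js', effort: 3 }, { file: 'b.js', effort: 2 }] },
    { name: 'Prevention', actions: [] }
  ]
})
```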
**Save outputs**:

1. Write `<session-folder>/plan/remediation-plan.md` (markdown format)
2. Write `<session-folder>/plan/remediation-plan.json` (machine-readable)
3. Update .msg/meta.json with `remediation_plan` summary
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[planner]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Report content**:

| Field | Value |
|-------|-------|
| Task | task.subject |
| Total Actions | Count of all actions |
| Files Affected | Count of unique files |
| Phase 1: Quick Wins | Top 5 quick-win items |
| Phase 2: Systematic | Top 3 strategic items |
| Phase 3: Prevention | Top 3 prevention actions |
| Plan Document | Path to remediation-plan.md |

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No TDPLAN-* tasks available | Idle, wait for coordinator |
| Assessment data empty | Create minimal plan based on debt inventory |
| No quick-wins found | Skip Phase 1, focus on systematic |
| CLI analysis fails | Fall back to heuristic plan generation |
| Too many items for single plan | Split into multiple phases with priorities |
# Command: scan-debt

> Three-layer parallel fan-out technical debt scan. Subagent structure exploration + CLI dimension analysis + multi-perspective Gemini deep analysis run in parallel, followed by fan-in aggregation.

## When to Use

- Phase 3 of Scanner
- When the codebase needs a multi-dimension technical debt scan
- Use parallel fan-out when complexity is Medium or High

**Trigger conditions**:
- A TDSCAN-* task enters Phase 3
- Complexity is assessed as Medium/High
- Deep analysis beyond ACE search capability is required

## Strategy

### Delegation Mode

**Mode**: Triple Fan-out + Fan-in
**Subagent**: `cli-explore-agent` (parallel structure exploration)
**CLI Tool**: `gemini` (primary)
**CLI Mode**: `analysis`
**Parallel Layers**:
- Fan-out A: 2-3 parallel subagents (structure exploration)
- Fan-out B: 3-5 parallel CLI calls (dimension analysis)
- Fan-out C: 2-4 parallel CLI calls (multi-perspective Gemini)

### Decision Logic
```javascript
// Complexity determines the scan strategy
if (complexity === 'Low') {
  // ACE search + Grep inline analysis (no CLI)
  mode = 'inline'
} else if (complexity === 'Medium') {
  // Dual fan-out: subagent exploration + 3 CLI dimensions
  mode = 'dual-fanout'
  activeDimensions = ['code', 'testing', 'dependency']
  exploreAngles = ['structure', 'patterns']
} else {
  // Triple fan-out: subagent exploration + 5 CLI dimensions + multi-perspective Gemini
  mode = 'triple-fanout'
  activeDimensions = dimensions // all 5
  exploreAngles = ['structure', 'patterns', 'dependencies']
}
```
## Execution Steps

### Step 1: Context Preparation

```javascript
// Determine the scan scope
const projectRoot = Bash(`git rev-parse --show-toplevel 2>/dev/null || pwd`).trim()
const scanScope = task.description.match(/scope:\s*(.+)/)?.[1] || '**/*'

// Fetch changed files for a focused scan
const changedFiles = Bash(`git diff --name-only HEAD~10 2>/dev/null || echo ""`)
  .split('\n').filter(Boolean)

// Build the file context
const fileContext = changedFiles.length > 0
  ? changedFiles.map(f => `@${f}`).join(' ')
  : `@${scanScope}`

// Perspective detection (passed in from role.md Phase 2)
// perspectives = detectPerspectives(task.description)
```
### Step 2: Execute Strategy

```javascript
if (mode === 'inline') {
  // Fast inline scan (Low complexity)
  const aceResults = mcp__ace-tool__search_context({
    project_root_path: projectRoot,
    query: "code smells, TODO/FIXME, deprecated APIs, complex functions, dead code, missing tests, circular imports"
  })
  // Parse ACE results and classify them into dimensions
} else {
  // === Triple parallel fan-out ===
  // Layers A, B, and C launch simultaneously and are independent of one another

  // ─── Fan-out A: parallel subagent exploration (codebase structure understanding) ───
  executeExploreAngles(exploreAngles)

  // ─── Fan-out B: CLI dimension analysis (parallel gemini) ───
  executeDimensionAnalysis(activeDimensions)

  // ─── Fan-out C: multi-perspective Gemini deep analysis (parallel) ───
  if (mode === 'triple-fanout') {
    executePerspectiveAnalysis(perspectives)
  }

  // Wait for all fan-outs to complete (hook callback notification)
}
```
### Step 2a: Fan-out A — Subagent Exploration

> Launch cli-explore-agent instances in parallel to explore the codebase structure and provide context for subsequent analysis.
> Each angle runs independently, with no inter-dependencies.

```javascript
function executeExploreAngles(angles) {
  const explorePrompts = {
    'structure': `Explore the codebase structure and module organization.
Focus on: directory layout, module boundaries, entry points, build configuration.
Project root: ${projectRoot}
Report: module map, key entry files, build system type, framework detection.`,

    'patterns': `Explore coding patterns and conventions used in this codebase.
Focus on: naming conventions, import patterns, error handling patterns, state management, design patterns.
Project root: ${projectRoot}
Report: dominant patterns, anti-patterns found, consistency assessment.`,

    'dependencies': `Explore dependency graph and inter-module relationships.
Focus on: import/require chains, circular dependencies, external dependency usage, shared utilities.
Project root: ${projectRoot}
Report: dependency hotspots, tightly-coupled modules, dependency depth analysis.`
  }

  // Launch all exploration angles in parallel (each cli-explore-agent runs independently)
  for (const angle of angles) {
    Task({
      subagent_type: "cli-explore-agent",
      run_in_background: false,
      description: `Explore: ${angle}`,
      prompt: explorePrompts[angle] || `Explore from ${angle} perspective. Project: ${projectRoot}`
    })
  }

  // Once all subagents return, the exploration results are available
}
```
### Step 2b: Fan-out B — CLI Dimension Analysis

> One independent gemini CLI analysis per dimension, all launched in parallel.

```javascript
function executeDimensionAnalysis(activeDimensions) {
  const dimensionPrompts = {
    'code': `PURPOSE: Identify code quality debt - complexity, duplication, code smells
TASK: • Find functions with cyclomatic complexity > 10 • Detect code duplication (>20 lines) • Identify code smells (God class, long method, feature envy) • Find TODO/FIXME/HACK comments • Detect dead code and unused exports
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: List of findings with severity (critical/high/medium/low), file:line, description, estimated fix effort (small/medium/large)
CONSTRAINTS: Focus on actionable items, skip generated code`,

    'architecture': `PURPOSE: Identify architecture debt - coupling, circular dependencies, layering violations
TASK: • Detect circular dependencies between modules • Find tight coupling between components • Identify layering violations (e.g., UI importing DB) • Check for God modules with too many responsibilities • Find missing abstraction layers
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Architecture debt findings with severity, affected modules, dependency graph issues
CONSTRAINTS: Focus on structural issues, not style`,

    'testing': `PURPOSE: Identify testing debt - coverage gaps, test quality, missing test types
TASK: • Find modules without any test files • Identify complex logic without test coverage • Check for test anti-patterns (flaky tests, hardcoded values) • Find missing edge case tests • Detect test files that import from test utilities incorrectly
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Testing debt findings with severity, affected files, missing test type (unit/integration/e2e)
CONSTRAINTS: Focus on high-risk untested code paths`,

    'dependency': `PURPOSE: Identify dependency debt - outdated packages, vulnerabilities, unnecessary deps
TASK: • Find outdated major-version dependencies • Identify known vulnerability packages • Detect unused dependencies • Find duplicate functionality from different packages • Check for pinned vs range versions
MODE: analysis
CONTEXT: @package.json @package-lock.json @requirements.txt @go.mod @pom.xml
EXPECTED: Dependency debt with severity, package name, current vs latest version, CVE references
CONSTRAINTS: Focus on security and compatibility risks`,

    'documentation': `PURPOSE: Identify documentation debt - missing docs, stale docs, undocumented APIs
TASK: • Find public APIs without JSDoc/docstrings • Identify README files that are outdated • Check for missing architecture documentation • Find configuration options without documentation • Detect stale comments that don't match code
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Documentation debt with severity, file:line, type (missing/stale/incomplete)
CONSTRAINTS: Focus on public interfaces and critical paths`
  }

  // Launch all dimension analyses in parallel
  for (const dimension of activeDimensions) {
    const prompt = dimensionPrompts[dimension]
    if (!prompt) continue

    Bash(`ccw cli -p "${prompt}" --tool gemini --mode analysis --rule analysis-analyze-code-patterns`, {
      run_in_background: true
    })
  }
}
```
### Step 2c: Fan-out C — Multi-Perspective Gemini Analysis

> Multi-perspective deep analysis; each perspective focuses on a different quality concern.
> Perspectives are auto-detected by `detectPerspectives()`, or all enabled at High complexity.
> Difference from Fan-out B (dimension analysis): dimension analysis cuts horizontally by "code/testing/dependency", while perspective analysis cuts vertically by "security/performance/quality/architecture", giving cross-coverage.

```javascript
function executePerspectiveAnalysis(perspectives) {
  const perspectivePrompts = {
    'security': `PURPOSE: Security-focused analysis of codebase to identify vulnerability debt
TASK: • Find injection vulnerabilities (SQL, command, XSS, LDAP) • Check authentication/authorization weaknesses • Identify hardcoded secrets or credentials • Detect insecure data handling (sensitive data exposure) • Find missing input validation on trust boundaries • Check for outdated crypto or insecure hash functions
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Security findings with: severity (critical/high/medium/low), CWE/OWASP reference, file:line, remediation suggestion
CONSTRAINTS: Focus on exploitable vulnerabilities, not theoretical risks`,

    'performance': `PURPOSE: Performance-focused analysis to identify performance debt
TASK: • Find N+1 query patterns in database calls • Detect unnecessary re-renders or recomputations • Identify missing caching opportunities • Find synchronous blocking in async contexts • Detect memory leak patterns (event listener accumulation, unclosed resources) • Check for unoptimized loops or O(n²) algorithms on large datasets
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Performance findings with: severity, impact estimate (latency/memory/CPU), file:line, optimization suggestion
CONSTRAINTS: Focus on measurable impact, not micro-optimizations`,

    'code-quality': `PURPOSE: Code quality deep analysis beyond surface-level linting
TASK: • Identify functions violating single responsibility principle • Find overly complex conditional chains (>3 nesting levels) • Detect hidden temporal coupling between functions • Find magic numbers and unexplained constants • Identify error handling anti-patterns (empty catch, swallowed errors) • Detect feature envy (methods that access other classes more than their own)
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Quality findings with: severity, code smell category, file:line, refactoring suggestion with pattern name
CONSTRAINTS: Focus on maintainability impact, skip style-only issues`,

    'architecture': `PURPOSE: Architecture-level analysis of system design debt
TASK: • Identify layering violations (skip-layer calls, reverse dependencies) • Find God modules/classes with >5 distinct responsibilities • Detect missing domain boundaries (business logic in UI/API layer) • Check for abstraction leaks (implementation details in interfaces) • Identify duplicated business logic across modules • Find tightly coupled modules that should be independent
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Architecture findings with: severity, affected modules, coupling metric, suggested restructuring
CONSTRAINTS: Focus on structural issues affecting scalability and team autonomy`
  }

  // Launch all perspective analyses in parallel
  for (const perspective of perspectives) {
    const prompt = perspectivePrompts[perspective]
    if (!prompt) continue

    Bash(`ccw cli -p "${prompt}" --tool gemini --mode analysis --rule analysis-review-architecture`, {
      run_in_background: true
    })
  }
}
```
### Step 3: Fan-in Result Processing

> Aggregate the three fan-out layers: exploration results provide context, while dimension and perspective findings are cross-deduplicated.

```javascript
// ─── 3a: Aggregate exploration results (from Fan-out A) ───
const exploreContext = {
  structure: exploreResults['structure'] || {},
  patterns: exploreResults['patterns'] || {},
  dependencies: exploreResults['dependencies'] || {}
}

// ─── 3b: Aggregate dimension analysis results (from Fan-out B) ───
const dimensionFindings = []
for (const dimension of activeDimensions) {
  const findings = parseCliOutput(cliResults[dimension])
  for (const finding of findings) {
    finding.dimension = dimension
    finding.source = 'dimension-analysis'
    dimensionFindings.push(finding)
  }
}

// ─── 3c: Aggregate perspective analysis results (from Fan-out C) ───
const perspectiveFindings = []
if (mode === 'triple-fanout') {
  for (const perspective of perspectives) {
    const findings = parseCliOutput(cliResults[perspective])
    for (const finding of findings) {
      finding.perspective = perspective
      finding.source = 'perspective-analysis'
      // Map the perspective to the closest dimension (for unified classification)
      finding.dimension = finding.dimension || mapPerspectiveToDimension(perspective)
      perspectiveFindings.push(finding)
    }
  }
}

// ─── 3d: Merge + cross-deduplicate ───
const allFindings = [...dimensionFindings, ...perspectiveFindings]

function deduplicateFindings(findings) {
  const seen = new Map() // key → finding (keep the higher severity)
  for (const f of findings) {
    const key = `${f.file}:${f.line}`
    const existing = seen.get(key)
    if (!existing) {
      seen.set(key, f)
    } else {
      // Same location found from multiple angles → merge and escalate severity
      // (use ?? so critical, which maps to 0, is not treated as missing)
      const severityOrder = { critical: 0, high: 1, medium: 2, low: 3 }
      if ((severityOrder[f.severity] ?? 3) < (severityOrder[existing.severity] ?? 3)) {
        existing.severity = f.severity
      }
      // Record cross-references (items found by multiple perspectives/dimensions are more trustworthy)
      existing.crossRefs = existing.crossRefs || []
      existing.crossRefs.push({ source: f.source, perspective: f.perspective, dimension: f.dimension })
    }
  }
  return [...seen.values()]
}

// Perspective → dimension mapping
function mapPerspectiveToDimension(perspective) {
  const map = {
    'security': 'code',
    'performance': 'code',
    'code-quality': 'code',
    'architecture': 'architecture'
  }
  return map[perspective] || 'code'
}

const deduped = deduplicateFindings(allFindings)

// ─── 3e: Sort by severity (cross-referenced items first) ───
deduped.sort((a, b) => {
  // Items found from multiple angles → boosted priority
  const aBoost = (a.crossRefs?.length || 0) > 0 ? -0.5 : 0
  const bBoost = (b.crossRefs?.length || 0) > 0 ? -0.5 : 0
  const order = { critical: 0, high: 1, medium: 2, low: 3 }
  return ((order[a.severity] ?? 3) + aBoost) - ((order[b.severity] ?? 3) + bBoost)
})

// ─── 3f: Enrich findings with exploration context (optional) ───
// Use Fan-out A's structure exploration results to annotate module ownership
for (const finding of deduped) {
  if (finding.file && exploreContext.structure?.modules) {
    const module = exploreContext.structure.modules.find(m =>
      finding.file.startsWith(m.path)
    )
    if (module) finding.module = module.name
  }
}
```
## Output Format

```
## Debt Scan Results

### Scan Mode: [inline|dual-fanout|triple-fanout]
### Complexity: [Low|Medium|High]
### Perspectives: [security, performance, code-quality, architecture]

### Findings by Dimension
#### Code Quality ([count])
- [file:line] [severity] - [description] [crossRefs: N perspectives]

#### Architecture ([count])
- [module] [severity] - [description]

#### Testing ([count])
- [file:line] [severity] - [description]

#### Dependency ([count])
- [package] [severity] - [description]

#### Documentation ([count])
- [file:line] [severity] - [description]

### Multi-Perspective Highlights
#### Security Findings ([count])
- [file:line] [severity] - [CWE-xxx] [description]

#### Performance Findings ([count])
- [file:line] [severity] - [impact] [description]

### Cross-Referenced Items (verified from multiple angles)
- [file:line] confirmed by [N] sources - [description]

### Total Debt Items: [count]
```
## Error Handling

| Scenario | Resolution |
|----------|------------|
| CLI tool unavailable | Fall back to ACE search + Grep inline analysis |
| CLI returns empty for a dimension | Note incomplete dimension, continue others |
| Subagent explore fails | Skip explore context, proceed with CLI analysis only |
| Too many findings (>100) | Prioritize critical/high + cross-referenced, summarize rest |
| Timeout on CLI call | Use partial results, note incomplete dimensions/perspectives |
| Agent/CLI failure | Retry once, then fall back to inline execution |
| Perspective analysis timeout | Use dimension-only results, note missing perspectives |
| All Fan-out layers fail | Fall back to ACE inline scan (guaranteed minimum) |
# Scanner Role

Multi-dimension technical debt scanner. Scans the codebase across 5 dimensions — code quality, architecture, testing, dependency, documentation — and produces a structured debt inventory. Runs parallel analysis via CLI fan-out and outputs debt-inventory.json.

## Identity

- **Name**: `scanner` | **Tag**: `[scanner]`
- **Task Prefix**: `TDSCAN-*`
- **Responsibility**: Orchestration (multi-dimension scan orchestration)

## Boundaries

### MUST
- Only process `TDSCAN-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[scanner]` identifier
- Only communicate with the coordinator via SendMessage
- Work strictly within the debt scanning responsibility scope
- Tag all findings with a dimension (code, architecture, testing, dependency, documentation)

### MUST NOT
- Write or modify any code
- Execute fix operations
- Create tasks for other roles
- Communicate directly with other worker roles (must go through coordinator)
- Omit the `[scanner]` identifier in any output
---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `scan-debt` | [commands/scan-debt.md](commands/scan-debt.md) | Phase 3 | Multi-dimension CLI fan-out scan |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `gemini` | CLI | scan-debt.md | Multi-dimension code analysis (dimension fan-out) |
| `cli-explore-agent` | Subagent | scan-debt.md | Parallel codebase structure exploration |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `scan_complete` | scanner -> coordinator | Scan finished | Includes debt inventory summary |
| `debt_items_found` | scanner -> coordinator | High-priority debt found | Key findings that need attention |
| `error` | scanner -> coordinator | Scan failed | Blocking error |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "scanner",
  type: <message-type>,
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from scanner --type <message-type> --ref <artifact-path> --json")
```
---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `TDSCAN-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Scan scope | task.description (regex: `scope:\s*(.+)`) | No (default: `**/*`) |
| Session folder | task.description (regex: `session:\s*(.+)`) | Yes |
| Shared memory | `<session-folder>/.msg/meta.json` | Yes |

**Loading steps**:

1. Extract session path from task description
2. Read .msg/meta.json for team context
3. Detect project type and framework:

| Detection | Method |
|-----------|--------|
| Node.js project | Check for package.json |
| Python project | Check for pyproject.toml or requirements.txt |
| Go project | Check for go.mod |

4. Determine scan dimensions (default: code, architecture, testing, dependency, documentation)
5. Detect perspectives from task description:

| Condition | Perspective |
|-----------|-------------|
| `security\|auth\|inject\|xss` | security |
| `performance\|speed\|optimize` | performance |
| `quality\|clean\|maintain\|debt` | code-quality |
| `architect\|pattern\|structure` | architecture |
| Default | code-quality + architecture |

6. Assess complexity:

| Signal | Weight |
|--------|--------|
| `全项目\|全量\|comprehensive\|full` | +3 |
| `architecture\|架构` | +1 |
| `multiple\|across\|cross\|多模块` | +2 |

| Score | Complexity |
|-------|------------|
| >= 4 | High |
| 2-3 | Medium |
| 0-1 | Low |
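Steps 5 and 6 can be sketched as a pair of helpers. The regexes, weights, and thresholds come directly from the tables above (the Chinese signal words are match patterns for Chinese task descriptions); the function names are illustrative assumptions.

```javascript
// Detect analysis perspectives from the task description (per the perspective table).
function detectPerspectives(description) {
  const rules = [
    [/security|auth|inject|xss/i, 'security'],
    [/performance|speed|optimize/i, 'performance'],
    [/quality|clean|maintain|debt/i, 'code-quality'],
    [/architect|pattern|structure/i, 'architecture']
  ]
  const matched = rules.filter(([re]) => re.test(description)).map(([, p]) => p)
  return matched.length > 0 ? matched : ['code-quality', 'architecture']
}

// Score complexity from weighted signals (per the weight and score tables).
function assessComplexity(description) {
  let score = 0
  if (/全项目|全量|comprehensive|full/i.test(description)) score += 3
  if (/architecture|架构/i.test(description)) score += 1
  if (/multiple|across|cross|多模块/i.test(description)) score += 2
  return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low'
}

const perspectives = detectPerspectives('full security audit across modules')
const complexity = assessComplexity('full security audit across modules')
```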
### Phase 3: Multi-Dimension Scan

Delegate to `commands/scan-debt.md` if available, otherwise execute inline.

**Core Strategy**: Three-layer parallel Fan-out

| Complexity | Strategy |
|------------|----------|
| Low | Direct: ACE search + Grep inline scan |
| Medium/High | Fan-out A: Subagent exploration (cli-explore-agent) + Fan-out B: CLI dimension analysis (gemini per dimension) + Fan-out C: Multi-perspective Gemini analysis |

**Fan-out Architecture**:

```
Fan-out A: Subagent Exploration (parallel cli-explore)
  structure perspective | patterns perspective | deps perspective
    ↓ merge
Fan-out B: CLI Dimension Analysis (parallel gemini)
  code | architecture | testing | dependency | documentation
    ↓ merge
Fan-out C: Multi-Perspective Gemini (parallel)
  security | performance | code-quality | architecture
    ↓ Fan-in aggregate
debt-inventory.json
```

**Low Complexity Path** (inline):

```
mcp__ace-tool__search_context({
  project_root_path: <project-root>,
  query: "code smells, TODO/FIXME, deprecated APIs, complex functions, missing tests"
})
```
### Phase 4: Aggregate into Debt Inventory

**Standardize findings**:

For each finding, create entry:

| Field | Description |
|-------|-------------|
| `id` | `TD-NNN` (sequential) |
| `dimension` | code, architecture, testing, dependency, documentation |
| `severity` | critical, high, medium, low |
| `file` | File path |
| `line` | Line number |
| `description` | Issue description |
| `suggestion` | Fix suggestion |
| `estimated_effort` | small, medium, large, unknown |

**Save outputs**:

1. Update .msg/meta.json with `debt_inventory` and `debt_score_before`
2. Write `<session-folder>/scan/debt-inventory.json`:

| Field | Description |
|-------|-------------|
| `scan_date` | ISO timestamp |
| `dimensions` | Array of scanned dimensions |
| `total_items` | Count of debt items |
| `by_dimension` | Count per dimension |
| `by_severity` | Count per severity level |
| `items` | Array of debt entries |
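The aggregation above can be sketched in one helper. The entry and inventory field names follow the two tables; the raw-finding input shape and the `buildDebtInventory` name are illustrative assumptions.

```javascript
// Normalize raw findings into debt-inventory entries and roll up counts.
function buildDebtInventory(findings) {
  const items = findings.map((f, i) => ({
    id: `TD-${String(i + 1).padStart(3, '0')}`, // TD-NNN, sequential
    dimension: f.dimension,
    severity: f.severity,
    file: f.file,
    line: f.line,
    description: f.description,
    suggestion: f.suggestion || '',
    estimated_effort: f.estimated_effort || 'unknown'
  }))
  // Count items grouped by an entry field (dimension or severity)
  const countBy = (key) => items.reduce((acc, it) => {
    acc[it[key]] = (acc[it[key]] || 0) + 1
    return acc
  }, {})
  return {
    scan_date: new Date().toISOString(),
    dimensions: [...new Set(items.map(i => i.dimension))],
    total_items: items.length,
    by_dimension: countBy('dimension'),
    by_severity: countBy('severity'),
    items
  }
}

const inventory = buildDebtInventory([
  { dimension: 'code', severity: 'high', file: 'a.js', line: 10, description: 'complex function' },
  { dimension: 'testing', severity: 'medium', file: 'b.js', line: 1, description: 'no tests' }
])
```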
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[scanner]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Report content**:

| Field | Value |
|-------|-------|
| Task | task.subject |
| Dimensions | dimensions scanned |
| Status | "Debt Found" or "Clean" |
| Summary | Total items with dimension breakdown |
| Top Debt Items | Top 5 critical/high severity items |
| Debt Inventory | Path to debt-inventory.json |

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No TDSCAN-* tasks available | Idle, wait for coordinator assignment |
| CLI tool unavailable | Fall back to ACE search + Grep inline analysis |
| Scan scope too broad | Narrow to src/ directory, report partial results |
| All dimensions return empty | Report clean scan, notify coordinator |
| CLI timeout | Use partial results, note incomplete dimensions |
| Critical issue beyond scope | SendMessage debt_items_found to coordinator |
# Command: verify

> Regression testing and quality verification. Runs the test suite, type checking, lint, and optional CLI quality analysis. Compares debt_score_before vs debt_score_after to gauge the degree of improvement.

## When to Use

- Phase 3 of Validator
- Fix operations are complete and the results need verification
- The verification stage of the Fix-Verify loop

**Trigger conditions**:
- A TDVAL-* task enters Phase 3
- The fix log is available (fix-log.json)
- Before/after metrics need comparison

## Strategy

### Delegation Mode

**Mode**: Sequential Checks + Optional CLI Analysis
**CLI Tool**: `gemini` (for quality comparison)
**CLI Mode**: `analysis`

### Decision Logic

```javascript
// Verification strategy selection
const checks = ['test_suite', 'type_check', 'lint_check']

// Optional: CLI quality analysis (only when many files were modified)
if (modifiedFiles.length > 5) {
  checks.push('cli_quality_analysis')
}

// Verification inside the Fix-Verify loop: focus on regression files
const isFixVerify = task.description.includes('fix-verify')
if (isFixVerify) {
  // Only verify files that regressed in the last round
  targetScope = 'regression_files_only'
}
```
## Execution Steps

### Step 1: Context Preparation

```javascript
// Get the list of modified files
const modifiedFiles = fixLog.files_modified || []

// Get the original debt score
const debtScoreBefore = sharedMemory.debt_score_before || 0

// Worktree path (loaded from shared memory)
const worktreePath = sharedMemory.worktree?.path || null
const cmdPrefix = worktreePath ? `cd "${worktreePath}" && ` : ''

// Detect available verification tools (inside the worktree);
// which output is discarded so only the yes/no marker remains
const hasNpm = Bash(`${cmdPrefix}which npm >/dev/null 2>&1 && echo "yes" || echo "no"`).trim() === 'yes'
const hasTsc = Bash(`${cmdPrefix}which npx 2>/dev/null && npx tsc --version 2>/dev/null && echo "yes" || echo "no"`).includes('yes')
const hasEslint = Bash(`${cmdPrefix}npx eslint --version 2>/dev/null && echo "yes" || echo "no"`).includes('yes')
const hasPytest = Bash(`${cmdPrefix}which pytest >/dev/null 2>&1 && echo "yes" || echo "no"`).trim() === 'yes'
```
### Step 2: Execute Strategy

```javascript
// === Check 1: Test suite (run inside the worktree) ===
let testOutput = ''
let testsPassed = true
let testRegressions = 0

if (hasNpm) {
  testOutput = Bash(`${cmdPrefix}npm test 2>&1 || true`)
} else if (hasPytest) {
  testOutput = Bash(`${cmdPrefix}python -m pytest 2>&1 || true`)
} else {
  testOutput = 'no-test-runner'
}

if (testOutput !== 'no-test-runner') {
  testsPassed = !/FAIL|error|failed/i.test(testOutput)
  testRegressions = testsPassed ? 0 : Number(testOutput.match(/(\d+) failed/)?.[1] || 1)
}

// === Check 2: Type checking (run inside the worktree) ===
let typeErrors = 0
if (hasTsc) {
  const tscOutput = Bash(`${cmdPrefix}npx tsc --noEmit 2>&1 || true`)
  typeErrors = (tscOutput.match(/error TS/g) || []).length
}

// === Check 3: Linting (run inside the worktree) ===
let lintErrors = 0
if (hasEslint && modifiedFiles.length > 0) {
  const lintOutput = Bash(`${cmdPrefix}npx eslint --no-error-on-unmatched-pattern ${modifiedFiles.join(' ')} 2>&1 || true`)
  lintErrors = Number(lintOutput.match(/(\d+) error/)?.[1] || 0)
}

// === Check 4: Optional CLI quality analysis ===
let qualityImprovement = 0
if (checks.includes('cli_quality_analysis')) {
  const prompt = `PURPOSE: Compare code quality before and after tech debt cleanup to measure improvement
TASK: • Analyze the modified files for quality metrics • Compare complexity, duplication, naming quality • Assess if the changes actually reduced debt • Identify any new issues introduced
MODE: analysis
CONTEXT: ${modifiedFiles.map(f => `@${f}`).join(' ')}
EXPECTED: Quality comparison with: metrics_before, metrics_after, improvement_score (0-100), new_issues_found
CONSTRAINTS: Focus on the specific changes, not overall project quality`

  Bash(`ccw cli -p "${prompt}" --tool gemini --mode analysis --rule analysis-review-code-quality${worktreePath ? ' --cd "' + worktreePath + '"' : ''}`, {
    run_in_background: true
  })
  // Wait for the CLI to finish, then parse the quality improvement score
}

// === Compute the debt score ===
// Fixed items are excluded from the after score
const fixedDebtIds = new Set(
  (sharedMemory.fix_results?.files_modified || [])
    .flatMap(f => debtInventory.filter(i => i.file === f).map(i => i.id))
)
const debtScoreAfter = debtInventory.filter(i => !fixedDebtIds.has(i.id)).length
```
### Step 3: Result Processing

```javascript
const totalRegressions = testRegressions + typeErrors + lintErrors
const passed = totalRegressions === 0

// If only a few regressions, attempt a fix via code-developer
if (totalRegressions > 0 && totalRegressions <= 3) {
  const regressionDetails = []
  if (testRegressions > 0) regressionDetails.push(`${testRegressions} test failures`)
  if (typeErrors > 0) regressionDetails.push(`${typeErrors} type errors`)
  if (lintErrors > 0) regressionDetails.push(`${lintErrors} lint errors`)

  Task({
    subagent_type: "code-developer",
    run_in_background: false,
    description: `Fix ${totalRegressions} regressions from debt cleanup`,
    prompt: `## Goal
Fix regressions introduced by tech debt cleanup.
${worktreePath ? `\n## Worktree (mandatory)\n- Working directory: ${worktreePath}\n- **All file operations MUST happen under ${worktreePath}**\n- Prefix Bash commands with cd "${worktreePath}" && ...\n` : ''}
## Regressions
${regressionDetails.join('\n')}

## Modified Files
${modifiedFiles.map(f => `- ${f}`).join('\n')}

## Test Output (if failed)
${testOutput.split('\n').filter(l => /FAIL|Error|error/i.test(l)).slice(0, 20).join('\n')}

## Constraints
- Fix ONLY the regressions, do not undo the debt fixes
- Preserve the debt cleanup changes
- Do NOT skip tests or add suppressions`
  })

  // Re-run checks after fix attempt
  // ... (simplified: re-check test suite)
}

// Build the final validation result
const validationReport = {
  passed,
  regressions: totalRegressions,
  debt_score_before: debtScoreBefore,
  debt_score_after: debtScoreAfter,
  improvement_percentage: debtScoreBefore > 0
    ? Math.round(((debtScoreBefore - debtScoreAfter) / debtScoreBefore) * 100)
    : 0
}
```

## Output Format

```
## Validation Results

### Status: [PASS|FAIL]
### Regressions: [count]
- Test Suite: [PASS|FAIL] ([n] regressions)
- Type Check: [PASS|FAIL] ([n] errors)
- Lint: [PASS|FAIL] ([n] errors)
- Quality: [IMPROVED|NO_CHANGE]

### Debt Score
- Before: [score]
- After: [score]
- Improvement: [n]%
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No test runner available | Skip test check, rely on type+lint |
| tsc not available | Skip type check, rely on test+lint |
| eslint not available | Skip lint check, rely on test+type |
| All checks unavailable | Report minimal validation, warn coordinator |
| Fix attempt introduces new regressions | Revert fix, report original regressions |
| CLI quality analysis times out | Skip quality analysis, use debt score comparison only |

@@ -1,235 +0,0 @@

# Validator Role

Tech debt cleanup validator. Runs the test suite to confirm no regressions, performs type checking and lint, and analyzes the degree of code quality improvement via CLI. Compares before/after debt scores and generates validation-report.json.

## Identity

- **Name**: `validator` | **Tag**: `[validator]`
- **Task Prefix**: `TDVAL-*`
- **Responsibility**: Validation (cleanup result verification)

## Boundaries

### MUST

- Only process `TDVAL-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[validator]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within validation responsibility scope
- Run complete validation flow (tests, type check, lint, quality analysis)
- Report regression_found if regressions detected

### MUST NOT

- Fix code directly (only attempt small fixes via code-developer)
- Create tasks for other roles
- Communicate directly with other worker roles (must go through coordinator)
- Skip any validation step
- Omit `[validator]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `verify` | [commands/verify.md](commands/verify.md) | Phase 3 | Regression testing and quality verification |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `code-developer` | Subagent | verify.md | Small fix attempts (when validation fails) |
| `gemini` | CLI | verify.md | Code quality improvement analysis |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `validation_complete` | validator -> coordinator | Validation passed | Includes before/after metrics |
| `regression_found` | validator -> coordinator | Regressions detected | Triggers the Fix-Verify loop |
| `error` | validator -> coordinator | Validation environment error | Blocking error |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "validator",
  type: <message-type>,
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from validator --type <message-type> --ref <artifact-path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `TDVAL-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Load Context

| Input | Source | Required |
|-------|--------|----------|
| Session folder | task.description (regex: `session:\s*(.+)`) | Yes |
| Shared memory | `<session-folder>/.msg/meta.json` | Yes |
| Fix log | `<session-folder>/fixes/fix-log.json` | No |

**Loading steps**:

1. Extract session path from task description
2. Read .msg/meta.json for:

| Field | Description |
|-------|-------------|
| `worktree.path` | Worktree directory path |
| `debt_inventory` | Debt items list |
| `fix_results` | Fix results from executor |
| `debt_score_before` | Debt score before fixes |

3. Determine command prefix for worktree:

| Condition | Command Prefix |
|-----------|---------------|
| worktree exists | `cd "<worktree-path>" && ` |
| no worktree | Empty string |

4. Read fix-log.json for modified files list

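The command-prefix rule in step 3 can be sketched as a small helper (a minimal sketch; `meta` is an assumed object shape parsed from meta.json):

```javascript
// Derive the Bash command prefix from the worktree setting in meta.json.
// Returns `cd "<path>" && ` when a worktree exists, else an empty string.
function commandPrefix(meta) {
  const worktreePath = meta?.worktree?.path
  return worktreePath ? `cd "${worktreePath}" && ` : ''
}

console.log(commandPrefix({ worktree: { path: '/tmp/wt' } })) // cd "/tmp/wt" &&
console.log(commandPrefix({}))                                // (empty string)
```

Every validation command below is then built as `commandPrefix(meta) + command`, so the same logic works with or without a worktree.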
### Phase 3: Run Validation Checks

Delegate to `commands/verify.md` if available, otherwise execute inline.

**Core Strategy**: 4-layer validation (all commands run in the worktree)

**Validation Results Structure**:

| Check | Status Field | Details |
|-------|--------------|---------|
| Test Suite | test_suite.status | regressions count |
| Type Check | type_check.status | errors count |
| Lint Check | lint_check.status | errors count |
| Quality Analysis | quality_analysis.status | improvement percentage |

**1. Test Suite** (in worktree):

| Detection | Command |
|-----------|---------|
| Node.js | `<cmdPrefix>npm test` or `<cmdPrefix>npx vitest run` |
| Python | `<cmdPrefix>python -m pytest` |
| No tests | Skip with "no-tests" note |

| Pass Criteria | Status |
|---------------|--------|
| No FAIL/error/failed keywords | PASS |
| "no-tests" detected | PASS (skip) |
| Otherwise | FAIL + count regressions |

**2. Type Check** (in worktree):

Command: `<cmdPrefix>npx tsc --noEmit`

| Pass Criteria | Status |
|---------------|--------|
| No TS errors or "skip" | PASS |
| TS errors found | FAIL + count errors |

**3. Lint Check** (in worktree):

Command: `<cmdPrefix>npx eslint --no-error-on-unmatched-pattern <files>`

| Pass Criteria | Status |
|---------------|--------|
| No errors or "skip" | PASS |
| Errors found | FAIL + count errors |

**4. Quality Analysis**:

| Metric | Calculation |
|--------|-------------|
| debt_score_after | debtInventory.filter(not in modified files).length |
| improvement | debt_score_before - debt_score_after |

| Condition | Status |
|-----------|--------|
| debt_score_after < debt_score_before | IMPROVED |
| Otherwise | NO_CHANGE |

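The quality-analysis metrics above can be sketched as follows (a minimal sketch; the `{ id, file }` item shape and the "file not modified means debt remains" simplification follow the table, not a confirmed implementation):

```javascript
// Compute before/after debt scores: debt items whose file was NOT touched
// by the fixes are assumed to remain open.
function debtScores(debtInventory, modifiedFiles) {
  const modified = new Set(modifiedFiles)
  const before = debtInventory.length
  const after = debtInventory.filter(i => !modified.has(i.file)).length
  return { before, after, improvement: before - after }
}

const inventory = [
  { id: 'D1', file: 'a.ts' },
  { id: 'D2', file: 'b.ts' },
  { id: 'D3', file: 'c.ts' }
]
console.log(debtScores(inventory, ['a.ts', 'b.ts']))
// { before: 3, after: 1, improvement: 2 }  -> status IMPROVED
```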
### Phase 4: Compare Before/After & Generate Report

**Calculate totals**:

| Metric | Calculation |
|--------|-------------|
| total_regressions | test_regressions + type_errors + lint_errors |
| passed | total_regressions === 0 |

**Report structure**:

| Field | Description |
|-------|-------------|
| `validation_date` | ISO timestamp |
| `passed` | Boolean |
| `regressions` | Total regression count |
| `checks` | Validation results per check |
| `debt_score_before` | Initial debt score |
| `debt_score_after` | Final debt score |
| `improvement_percentage` | Percentage improvement |

**Save outputs**:

1. Write `<session-folder>/validation/validation-report.json`
2. Update .msg/meta.json with `validation_results` and `debt_score_after`

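The totals and report fields above fit together like this (a sketch of the report object using the table's field names; the per-check `checks` detail is omitted for brevity):

```javascript
// Assemble the Phase 4 validation report from the three error counts
// and the before/after debt scores.
function buildValidationReport(testRegressions, typeErrors, lintErrors, scoreBefore, scoreAfter) {
  const regressions = testRegressions + typeErrors + lintErrors
  return {
    validation_date: new Date().toISOString(),
    passed: regressions === 0,
    regressions,
    debt_score_before: scoreBefore,
    debt_score_after: scoreAfter,
    improvement_percentage: scoreBefore > 0
      ? Math.round(((scoreBefore - scoreAfter) / scoreBefore) * 100)
      : 0
  }
}

const report = buildValidationReport(0, 0, 0, 10, 4)
console.log(report.passed, report.improvement_percentage) // true 60
```

Guarding `scoreBefore > 0` avoids a division by zero when the inventory was empty to begin with.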
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[validator]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Message type selection**:

| Condition | Message Type |
|-----------|--------------|
| passed | validation_complete |
| not passed | regression_found |

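The selection rule above is a one-liner (shown only to make the mapping explicit):

```javascript
// Phase 5 message type, per the selection table.
const messageType = (passed) => passed ? 'validation_complete' : 'regression_found'

console.log(messageType(true), messageType(false)) // validation_complete regression_found
```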
**Report content**:

| Field | Value |
|-------|-------|
| Task | task.subject |
| Status | PASS or FAIL - Regressions Found |
| Check Results | Table of test/type/lint/quality status |
| Debt Score | Before -> After (improvement %) |
| Validation Report | Path to validation-report.json |

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No TDVAL-* tasks available | Idle, wait for coordinator |
| Test environment broken | Report error, suggest manual fix |
| No test suite found | Skip test check, validate with type+lint only |
| Fix log empty | Validate all source files, report minimal analysis |
| Type check fails | Attempt code-developer fix for type errors |
| Critical regression (>10) | Report immediately, do not attempt fix |

@@ -30,6 +30,15 @@ gist tor tor
(tw) = team-worker agent
```

## Command Execution Protocol

When coordinator needs to execute a command (dispatch, monitor):

1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** -- NOT separate agents or subprocesses
4. **Execute synchronously** -- complete the command workflow before proceeding

## Role Router

### Input Parsing

@@ -291,39 +300,62 @@ Beat 1 2 3 4 5

## Coordinator Spawn Template

When coordinator spawns workers, use background mode (Spawn-and-Stop):
### v5 Worker Spawn (all roles)

When coordinator spawns workers, use `team-worker` agent with role-spec path:

```
Task({
  subagent_type: "general-purpose",
  subagent_type: "team-worker",
  description: "Spawn <role> worker",
  team_name: "testing",
  name: "<role>",
  run_in_background: true,
  prompt: `You are team "testing" <ROLE>.
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-testing/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: testing
requirement: <task-description>
inner_loop: <true|false>

## Primary Directive
All your work must be executed through Skill to load role definition:
Skill(skill="team-testing", args="--role=<role>")

Current task: <task-description>
Session: <session-folder>

## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other role work
- All output prefixed with [<role>] identifier
- Only communicate with coordinator
- Do not use TaskCreate for other roles
- Call mcp__ccw-tools__team_msg before every SendMessage

## Workflow
1. Call Skill -> load role definition and execution logic
2. Follow role.md 5-Phase flow
3. team_msg + SendMessage results to coordinator
4. TaskUpdate completed -> check next task`
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`
})
```

**Inner Loop roles** (generator, executor): Set `inner_loop: true`. The team-worker agent handles the loop internally.

**Single-task roles** (strategist, analyst): Set `inner_loop: false`.

---

## Completion Action

When the pipeline completes (all tasks done, coordinator Phase 5):

```
AskUserQuestion({
  questions: [{
    question: "Testing pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
      { label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
      { label: "Export Results", description: "Export deliverables to a specified location, then clean" }
    ]
  }]
})
```

| Choice | Action |
|--------|--------|
| Archive & Clean | Update session status="completed" -> TeamDelete(testing) -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions: `Skill(skill="team-testing", args="resume")` |
| Export Results | AskUserQuestion for target path -> copy deliverables -> Archive & Clean |

---

## Unified Session Directory

@@ -1,267 +0,0 @@

# Analyst Role

Test quality analyst. Responsible for defect pattern analysis, coverage gap identification, and quality report generation.

## Identity

- **Name**: `analyst` | **Tag**: `[analyst]`
- **Task Prefix**: `TESTANA-*`
- **Responsibility**: Read-only analysis (quality analysis)

## Boundaries

### MUST

- Only process `TESTANA-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[analyst]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within read-only analysis responsibility scope
- Phase 2: Read role states via team_msg(operation='get_state')
- Phase 5: Share analysis_report via team_msg(type='state_update')

### MUST NOT

- Execute work outside this role's responsibility scope (no test generation, execution, or strategy formulation)
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's responsibility
- Omit `[analyst]` identifier in any output

---

## Toolbox

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| Read | Read | Phase 2 | Load role states, strategy, results |
| Glob | Read | Phase 2 | Find result files, test files |
| Write | Write | Phase 3 | Create quality-report.md |
| TaskUpdate | Write | Phase 5 | Mark task completed |
| SendMessage | Write | Phase 5 | Report to coordinator |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `analysis_ready` | analyst -> coordinator | Analysis completed | Analysis report complete |
| `error` | analyst -> coordinator | Processing failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "analyst",
  type: <message-type>,
  data: {ref: "<artifact-path>"}
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from analyst --type <message-type> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `TESTANA-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context Loading

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Session path | Task description (Session: <path>) | Yes |
| Role state | team_msg(operation="get_state", session_id=<session-id>) | Yes |
| Execution results | <session-folder>/results/run-*.json | Yes |
| Test strategy | <session-folder>/strategy/test-strategy.md | Yes |
| Test files | <session-folder>/tests/**/* | Yes |

**Loading steps**:

1. Extract session path from task description (look for `Session: <path>`)

2. Read role states:

```
mcp__ccw-tools__team_msg({ operation: "get_state", session_id: <session-id> })
```

3. Read all execution results:

```
Glob({ pattern: "<session-folder>/results/run-*.json" })
Read("<session-folder>/results/run-001.json")
Read("<session-folder>/results/run-002.json")
...
```

4. Read test strategy:

```
Read("<session-folder>/strategy/test-strategy.md")
```

5. Read test files for pattern analysis:

```
Glob({ pattern: "<session-folder>/tests/**/*" })
```

### Phase 3: Quality Analysis

**Analysis dimensions**:

1. **Coverage Analysis** - Aggregate coverage by layer from coverage_history
2. **Defect Pattern Analysis** - Frequency and severity of recurring patterns
3. **GC Loop Effectiveness** - Coverage improvement across rounds
4. **Test Quality Metrics** - Effective patterns, test file count

**Coverage Summary Table**:

| Layer | Coverage | Target | Status |
|-------|----------|--------|--------|
| L1 | <coverage>% | <target>% | <Met/Below> |
| L2 | <coverage>% | <target>% | <Met/Below> |
| L3 | <coverage>% | <target>% | <Met/Below> |

**Defect Pattern Analysis**:

| Pattern | Frequency | Severity |
|---------|-----------|----------|
| <pattern-1> | <count> | HIGH (>=3), MEDIUM (>=2), LOW (<2) |

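The severity thresholds above map frequency to a label like this (a minimal sketch of the classification rule):

```javascript
// Defect pattern severity per the frequency thresholds:
// >=3 HIGH, >=2 MEDIUM, otherwise LOW.
function severity(frequency) {
  if (frequency >= 3) return 'HIGH'
  if (frequency >= 2) return 'MEDIUM'
  return 'LOW'
}

console.log(severity(5), severity(2), severity(1)) // HIGH MEDIUM LOW
```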
**GC Loop Effectiveness**:

| Metric | Value | Assessment |
|--------|-------|------------|
| Rounds Executed | <N> | - |
| Coverage Improvement | <+/-X%> | HIGH (>10%), MEDIUM (>5%), LOW (<=5%) |
| Recommendation | <text> | Based on effectiveness |

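The coverage-improvement assessment above follows the same shape (a sketch; input is the improvement in percentage points):

```javascript
// GC-loop effectiveness per the thresholds: >10% HIGH, >5% MEDIUM, else LOW.
function gcEffectiveness(improvementPct) {
  if (improvementPct > 10) return 'HIGH'
  if (improvementPct > 5) return 'MEDIUM'
  return 'LOW'
}

console.log(gcEffectiveness(12), gcEffectiveness(7), gcEffectiveness(3)) // HIGH MEDIUM LOW
```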
**Coverage Gaps**:

For each gap identified:
- Area: <module/feature>
- Current: <X>%
- Gap: <target - current>%
- Reason: <why gap exists>
- Recommendation: <how to close>

**Quality Score**:

| Dimension | Score (1-10) | Weight | Weighted |
|-----------|--------------|--------|----------|
| Coverage Achievement | <score> | 30% | <weighted> |
| Test Effectiveness | <score> | 25% | <weighted> |
| Defect Detection | <score> | 25% | <weighted> |
| GC Loop Efficiency | <score> | 20% | <weighted> |
| **Total** | | | **<total>/10** |

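The weighted total in the table above can be computed as (a sketch; parameter names are illustrative, weights are the table's):

```javascript
// Weighted quality score; each dimension is scored 1-10, weights sum to 1.
function qualityScore({ coverage, effectiveness, defectDetection, gcEfficiency }) {
  const total =
    coverage * 0.30 +
    effectiveness * 0.25 +
    defectDetection * 0.25 +
    gcEfficiency * 0.20
  return Math.round(total * 10) / 10 // one decimal place, out of 10
}

console.log(qualityScore({ coverage: 8, effectiveness: 7, defectDetection: 6, gcEfficiency: 9 }))
```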
**Output file**: `<session-folder>/analysis/quality-report.md`

```
Write("<session-folder>/analysis/quality-report.md", <report-content>)
```

### Phase 4: Trend Analysis (if historical data available)

**Historical comparison**:

```
Glob({ pattern: ".workflow/.team/TST-*/.msg/meta.json" })
```

If multiple sessions exist:
- Track coverage trends over time
- Identify defect pattern evolution
- Compare GC loop effectiveness across sessions

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

1. **Share analysis report via team_msg(type='state_update')**:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: <session-id>, from: "analyst",
  type: "state_update",
  data: {
    analysis_report: {
      quality_score: <total-score>,
      coverage_gaps: <gap-list>,
      top_defect_patterns: <patterns>.slice(0, 5),
      gc_effectiveness: <improvement>,
      recommendations: <immediate-actions>
    }
  }
})
```

2. **Log via team_msg**:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: <session-id>, from: "analyst",
  type: "analysis_ready",
  data: {ref: "<session-folder>/analysis/quality-report.md"}
})
```

3. **SendMessage to coordinator**:

```
SendMessage({
  type: "message", recipient: "coordinator",
  content: "## [analyst] Quality Analysis Complete

**Quality Score**: <score>/10
**Defect Patterns**: <count>
**Coverage Gaps**: <count>
**GC Effectiveness**: <+/-><X>%
**Output**: <report-path>

### Top Issues
1. <issue-1>
2. <issue-2>
3. <issue-3>",
  summary: "[analyst] Quality: <score>/10"
})
```

4. **TaskUpdate completed**:

```
TaskUpdate({ taskId: <task-id>, status: "completed" })
```

5. **Loop**: Return to Phase 1 to check next task

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No TESTANA-* tasks available | Idle, wait for coordinator assignment |
| No execution results | Generate report based on strategy only |
| Incomplete data | Report available metrics, flag gaps |
| Previous session data corrupted | Analyze current session only |
| Shared memory not found | Notify coordinator, request location |
| Context/Plan file not found | Notify coordinator, request location |

@@ -1,321 +0,0 @@

# Coordinator Role

Test team orchestrator. Responsible for change scope analysis, test layer selection, Generator-Critic loop control (generator<->executor), and quality gates.

## Identity

- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **Responsibility**: Parse requirements -> Create team -> Dispatch tasks -> Monitor progress -> Report results

## Boundaries

### MUST

- Parse user requirements and clarify ambiguous inputs via AskUserQuestion
- Create team and spawn worker subagents in background
- Dispatch tasks with proper dependency chains (see SKILL.md Task Metadata Registry)
- Monitor progress via worker callbacks and route messages
- Maintain session state persistence
- All output (SendMessage, team_msg, logs) must carry `[coordinator]` identifier
- Manage Generator-Critic loop counter (generator <-> executor cycle)
- Decide whether to trigger revision loop based on coverage results

### MUST NOT

- Execute test generation, test execution, or coverage analysis directly (delegate to workers)
- Modify task outputs (workers own their deliverables)
- Call implementation subagents directly
- Skip dependency validation when creating task chains
- Modify test files or source code
- Bypass worker roles to do delegated work

> **Core principle**: coordinator is the orchestrator, not the executor. All actual work must be delegated to worker roles via TaskCreate.

---

## Entry Router

When coordinator is invoked, first detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` tag from a known worker role | -> handleCallback: auto-advance pipeline |
| Status check | Arguments contain "check" or "status" | -> handleCheck: output execution graph, no advancement |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume: check worker states, advance pipeline |
| New session | None of the above | -> Phase 0 (Session Resume Check) |

For callback/check/resume: load `commands/monitor.md` if available, execute the appropriate handler, then STOP.

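The routing table above can be sketched as a detection function (a minimal sketch; the worker role list and exact matching rules are assumptions):

```javascript
// Classify a coordinator invocation per the Entry Router table.
function detectEntry(message, args, knownRoles = ['strategist', 'generator', 'executor', 'analyst']) {
  if (knownRoles.some(r => message.includes(`[${r}]`))) return 'callback'
  if (/\b(check|status)\b/.test(args)) return 'check'
  if (/\b(resume|continue)\b/.test(args)) return 'resume'
  return 'new_session'
}

console.log(detectEntry('[executor] L1 run done', '')) // callback
console.log(detectEntry('', 'status'))                 // check
console.log(detectEntry('', ''))                       // new_session
```

The order of checks matters: a worker callback wins over argument keywords, mirroring the table's top-down precedence.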
---

## Phase 0: Session Resume Check

**Objective**: Detect and resume interrupted sessions before creating new ones.

**Workflow**:
1. Scan session directory for sessions with status "active" or "paused"
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for user selection

**Session Reconciliation**:
1. Audit TaskList -> get real status of all tasks
2. Reconcile: session state <-> TaskList status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
4. Determine remaining pipeline from reconciled state
5. Rebuild team if disbanded (TeamCreate + spawn needed workers only)
6. Create missing tasks with correct blockedBy dependencies
7. Verify dependency chain integrity
8. Update session file with reconciled state
9. Kick first executable task's worker -> Phase 4

---

## Phase 1: Change Scope Analysis

**Objective**: Parse user input and gather execution parameters.

**Workflow**:

1. **Parse arguments** for explicit settings: mode, scope, focus areas

2. **Analyze change scope**:

```
Bash("git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached")
```

Extract changed files and modules for pipeline selection.

3. **Select pipeline**:

| Condition | Pipeline |
|-----------|----------|
| fileCount <= 3 AND moduleCount <= 1 | targeted |
| fileCount <= 10 AND moduleCount <= 3 | standard |
| Otherwise | comprehensive |

4. **Ask for missing parameters** via AskUserQuestion:

**Mode Selection**:
- Targeted: Strategy -> Generate L1 -> Execute (small scope)
- Standard: L1 -> L2 progressive (includes analysis)
- Comprehensive: Parallel L1+L2 -> L3 (includes analysis)

**Coverage Target**:
- Standard: L1:80% L2:60% L3:40%
- Strict: L1:90% L2:75% L3:60%
- Minimum: L1:60% L2:40% L3:20%

5. **Store requirements**: mode, scope, focus, constraints

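The pipeline-selection thresholds in step 3 can be sketched as:

```javascript
// Pipeline selection per the step 3 thresholds.
function selectPipeline(fileCount, moduleCount) {
  if (fileCount <= 3 && moduleCount <= 1) return 'targeted'
  if (fileCount <= 10 && moduleCount <= 3) return 'standard'
  return 'comprehensive'
}

console.log(selectPipeline(2, 1), selectPipeline(8, 2), selectPipeline(20, 5))
// targeted standard comprehensive
```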
**Success**: All parameters captured, mode finalized.

---

## Phase 2: Create Team + Initialize Session

**Objective**: Initialize team, session file, and wisdom directory.

**Workflow**:
1. Generate session ID: `TST-<slug>-<YYYY-MM-DD>`
2. Create session folder structure:

```
.workflow/.team/TST-<slug>-<date>/
├── strategy/
├── tests/L1-unit/
├── tests/L2-integration/
├── tests/L3-e2e/
├── results/
├── analysis/
└── wisdom/
```

3. Call TeamCreate with team name
4. Initialize wisdom directory (learnings.md, decisions.md, conventions.md, issues.md)
5. Initialize shared state via `team_msg(type='state_update')` and write `meta.json`:

```
Write("<session-folder>/.msg/meta.json", {
  session_id: <session-id>,
  mode: <selected-mode>,
  scope: <scope>,
  status: "active"
})
```

6. Initialize cross-role state via team_msg(type='state_update'):

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "coordinator",
  type: "state_update",
  data: {
    task: <description>,
    pipeline: <selected-pipeline>,
    changed_files: [...],
    changed_modules: [...],
    coverage_targets: {...},
    gc_round: 0,
    max_gc_rounds: 3,
    test_strategy: null,
    generated_tests: [],
    execution_results: [],
    defect_patterns: [],
    effective_test_patterns: [],
    coverage_history: []
  }
})
```

7. Write session state with: session_id, mode, scope, status="active"

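Step 1's session ID scheme can be sketched as follows (the slugging rule — lowercase, alphanumerics joined by dashes, length-capped — is an assumption; only the `TST-<slug>-<YYYY-MM-DD>` shape comes from the workflow above):

```javascript
// Build a session ID of the form TST-<slug>-<YYYY-MM-DD>.
// The slug derivation here is a hypothetical illustration.
function sessionId(description, date = new Date()) {
  const slug = description.toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')   // non-alphanumerics become dashes
    .replace(/^-+|-+$/g, '')       // trim leading/trailing dashes
    .slice(0, 30)
  return `TST-${slug}-${date.toISOString().slice(0, 10)}`
}

console.log(sessionId('Auth Module Refactor', new Date('2026-03-05')))
// TST-auth-module-refactor-2026-03-05
```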
**Success**: Team created, session file written, wisdom initialized.
|
||||
|
||||
---

## Phase 3: Create Task Chain

**Objective**: Dispatch tasks based on mode with proper dependencies.

### Targeted Pipeline

| Task ID | Role | Blocked By | Description |
|---------|------|------------|-------------|
| STRATEGY-001 | strategist | (none) | Analyze change scope, define test strategy |
| TESTGEN-001 | generator | STRATEGY-001 | Generate L1 unit tests |
| TESTRUN-001 | executor | TESTGEN-001 | Execute L1 tests, collect coverage |

### Standard Pipeline

| Task ID | Role | Blocked By | Description |
|---------|------|------------|-------------|
| STRATEGY-001 | strategist | (none) | Analyze change scope |
| TESTGEN-001 | generator | STRATEGY-001 | Generate L1 unit tests |
| TESTRUN-001 | executor | TESTGEN-001 | Execute L1 tests |
| TESTGEN-002 | generator | TESTRUN-001 | Generate L2 integration tests |
| TESTRUN-002 | executor | TESTGEN-002 | Execute L2 tests |
| TESTANA-001 | analyst | TESTRUN-002 | Quality analysis report |

### Comprehensive Pipeline

| Task ID | Role | Blocked By | Description |
|---------|------|------------|-------------|
| STRATEGY-001 | strategist | (none) | Analyze change scope |
| TESTGEN-001 | generator | STRATEGY-001 | Generate L1 unit tests |
| TESTGEN-002 | generator | STRATEGY-001 | Generate L2 integration tests (parallel) |
| TESTRUN-001 | executor | TESTGEN-001 | Execute L1 tests |
| TESTRUN-002 | executor | TESTGEN-002 | Execute L2 tests (parallel) |
| TESTGEN-003 | generator | TESTRUN-001, TESTRUN-002 | Generate L3 E2E tests |
| TESTRUN-003 | executor | TESTGEN-003 | Execute L3 tests |
| TESTANA-001 | analyst | TESTRUN-003 | Quality analysis report |

**Task creation pattern**:

```
TaskCreate({ subject: "<TASK-ID>: <description>", description: "Session: <session-folder>\n...", activeForm: "..." })
TaskUpdate({ taskId: <id>, owner: "<role>", addBlockedBy: [...] })
```

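A minimal sketch of how the coordinator can derive dispatchable tasks from these chains. The task shape (`id`, `owner`, `blockedBy`, `status`) is assumed from the tables above, not defined by the TaskCreate API.

```javascript
// Which tasks in the chain are ready to dispatch?
// Ready = pending, owner assigned, and every blocker completed.
function readyTasks(tasks) {
  const done = new Set(
    tasks.filter(t => t.status === "completed").map(t => t.id)
  );
  return tasks.filter(
    t =>
      t.status === "pending" &&
      t.owner &&
      t.blockedBy.every(dep => done.has(dep))
  );
}

const chain = [
  { id: "STRATEGY-001", owner: "strategist", blockedBy: [], status: "completed" },
  { id: "TESTGEN-001", owner: "generator", blockedBy: ["STRATEGY-001"], status: "pending" },
  { id: "TESTRUN-001", owner: "executor", blockedBy: ["TESTGEN-001"], status: "pending" },
];
console.log(readyTasks(chain).map(t => t.id)); // [ 'TESTGEN-001' ]
```

In the Comprehensive pipeline the same filter naturally yields TESTGEN-001 and TESTGEN-002 together, which is what allows the parallel stages.
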
---

## Phase 4: Coordination Loop + Generator-Critic Control

> **Design principle (Stop-Wait)**: Model execution has no time concept. No polling or sleep loops.
> - Use synchronous Task(run_in_background: false) calls. Worker return = phase complete signal.
> - Follow Phase 3 task chain, spawn workers stage by stage.

### Callback Message Handling

| Received Message | Action |
|-----------------|--------|
| strategist: strategy_ready | Read strategy -> team_msg log -> TaskUpdate completed |
| generator: tests_generated | team_msg log -> TaskUpdate completed -> unblock TESTRUN |
| executor: tests_passed | Read coverage -> **Quality gate** -> proceed to next layer |
| executor: tests_failed | **Generator-Critic decision** -> decide whether to trigger revision |
| executor: coverage_report | Read coverage data -> update shared memory |
| analyst: analysis_ready | Read report -> team_msg log -> Phase 5 |

### Generator-Critic Loop Control

When receiving `tests_failed` or `coverage_report`:

**Decision table**:

| Condition | Action |
|-----------|--------|
| passRate < 0.95 AND gcRound < maxRounds | Create TESTGEN-fix task, increment gc_round, trigger revision |
| coverage < target AND gcRound < maxRounds | Create TESTGEN-fix task, increment gc_round, trigger revision |
| gcRound >= maxRounds | Accept current coverage, log warning, proceed |
| Coverage met | Log success, proceed to next layer |

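The decision table can be sketched as a single function; the 0.95 pass-rate threshold and 3-round cap come from this document, while the return labels are illustrative names.

```javascript
// Sketch of the coordinator's Generator-Critic decision.
function gcDecision({ passRate, coverage, target, gcRound, maxRounds = 3 }) {
  if (passRate >= 0.95 && coverage >= target) return "proceed";       // coverage met
  if (gcRound >= maxRounds) return "accept-with-warning";             // budget spent
  return "trigger-revision"; // create TESTGEN-fix task, increment gc_round
}

console.log(gcDecision({ passRate: 0.9, coverage: 70, target: 80, gcRound: 1 }));
// trigger-revision
console.log(gcDecision({ passRate: 0.9, coverage: 70, target: 80, gcRound: 3 }));
// accept-with-warning
```
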
**GC Loop trigger message**:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "coordinator",
  type: "gc_loop_trigger",
  data: {ref: "<session-folder>"}
})
```

**Spawn-and-Stop pattern**:
1. Find tasks with: status=pending, blockedBy all resolved, owner assigned
2. For each ready task -> spawn worker (see SKILL.md Spawn Template)
3. Output status summary
4. STOP

**Pipeline advancement** driven by three wake sources:
- Worker callback (automatic) -> Entry Router -> handleCallback
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)

---

## Phase 5: Report + Next Steps

**Objective**: Completion report and follow-up options.

**Workflow**:
1. Load session state -> count completed tasks, duration
2. List deliverables with output paths
3. Generate summary:

```
## [coordinator] Testing Complete

**Task**: <description>
**Pipeline**: <selected-pipeline>
**GC Rounds**: <count>
**Changed Files**: <count>

### Coverage
<For each layer>: **<layer>**: <coverage>% (target: <target>%)

### Quality Report
<analysis-summary>
```

4. Update session status -> "completed"
5. Offer next steps via AskUserQuestion:
   - New test: Run tests on new changes
   - Deepen test: Add test layers or increase coverage
   - Close team: Shut down all teammates and clean up

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Teammate unresponsive | Send a tracking message; after 2 unanswered attempts, respawn the worker |
| GC loop exceeded (3 rounds) | Accept current coverage, log to shared memory |
| Test environment failure | Report to user, suggest manual fix |
| All tests fail | Check test framework config, notify analyst |
| Coverage tool unavailable | Degrade to pass-rate judgment |
| Worker crash | Respawn worker, reassign task |
| Dependency cycle | Detect, report to user, halt |
| Invalid mode | Reject with error, ask user to clarify |

@@ -1,298 +0,0 @@

# Executor Role

Test executor. Executes tests, collects coverage, attempts auto-fix for failures. Acts as the Critic in the Generator-Critic loop.

## Identity

- **Name**: `executor` | **Tag**: `[executor]`
- **Task Prefix**: `TESTRUN-*`
- **Responsibility**: Validation (test execution and verification)

## Boundaries

### MUST

- Only process `TESTRUN-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[executor]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within validation responsibility scope
- Phase 2: Read role states via team_msg(operation='get_state')
- Phase 5: Share execution_results + defect_patterns via team_msg(type='state_update')
- Report coverage and pass rate for coordinator's GC decision

### MUST NOT

- Execute work outside this role's responsibility scope (no test generation, strategy formulation, or trend analysis)
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's responsibility
- Omit `[executor]` identifier in any output

---

## Toolbox

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| Read | Read | Phase 2 | Load role states |
| Glob | Read | Phase 2 | Find test files to execute |
| Bash | Execute | Phase 3 | Run test commands |
| Write | Write | Phase 3 | Save test results |
| Task | Delegate | Phase 3 | Delegate fix to code-developer |
| TaskUpdate | Write | Phase 5 | Mark task completed |
| SendMessage | Write | Phase 5 | Report to coordinator |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `tests_passed` | executor -> coordinator | All tests pass + coverage met | Tests passed |
| `tests_failed` | executor -> coordinator | Tests fail or coverage below target | Tests failed / coverage insufficient |
| `coverage_report` | executor -> coordinator | Coverage data collected | Coverage data |
| `error` | executor -> coordinator | Execution environment failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "executor",
  type: <message-type>,
  data: {ref: "<artifact-path>"}
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from executor --type <message-type> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `TESTRUN-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context Loading

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Session path | Task description (Session: <path>) | Yes |
| Role state | team_msg(operation="get_state", session_id=<session-id>) | Yes |
| Test directory | Task description (Input: <path>) | Yes |
| Coverage target | Task description | Yes |

**Loading steps**:

1. Extract session path from task description (look for `Session: <path>`)
2. Extract test directory from task description (look for `Input: <path>`)
3. Extract coverage target from task description (default: 80%)

```
mcp__ccw-tools__team_msg({ operation: "get_state", session_id: <session-id> })
```

4. Determine test framework from shared memory:

| Framework | Detection |
|-----------|-----------|
| Jest | sharedMemory.test_strategy.framework === "Jest" |
| Pytest | sharedMemory.test_strategy.framework === "Pytest" |
| Vitest | sharedMemory.test_strategy.framework === "Vitest" |
| Unknown | Default to Jest |

5. Find test files to execute:

```
Glob({ pattern: "<session-folder>/<test-dir>/**/*" })
```

### Phase 3: Test Execution + Fix Cycle

**Iterative test-fix cycle** (max 3 iterations):

| Step | Action |
|------|--------|
| 1 | Run test command |
| 2 | Parse results -> check pass rate |
| 3 | Pass rate >= 95% AND coverage >= target -> exit loop (success) |
| 4 | Extract failing test details |
| 5 | Delegate fix to code-developer subagent |
| 6 | Increment iteration counter |
| 7 | Iteration >= MAX (3) -> exit loop (report failures) |
| 8 | Go to Step 1 |

**Test commands by framework**:

| Framework | Command |
|-----------|---------|
| Jest | `npx jest --coverage --json --outputFile=<session>/results/jest-output.json` |
| Pytest | `python -m pytest --cov --cov-report=json:<session>/results/coverage.json -v` |
| Vitest | `npx vitest run --coverage --reporter=json` |

**Execution**:

```
Bash("<test-command> 2>&1 || true")
```

**Result parsing**:

| Metric | Parse Method |
|--------|--------------|
| Passed | Output does not contain "FAIL" or "FAILED" |
| Pass rate | Parse from test output (e.g., "X passed, Y failed") |
| Coverage | Parse from coverage output (e.g., "All files \| XX") |

**Auto-fix delegation** (on failure):

```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Fix test failures (iteration <N>)",
  prompt: "Fix these test failures:

<test-output>

Only fix the test files, not the source code."
})
```

**Result data structure**:

```
{
  run_id: "run-<N>",
  pass_rate: <0.0-1.0>,
  coverage: <percentage>,
  coverage_target: <target>,
  iterations: <N>,
  passed: <pass_rate >= 0.95 && coverage >= target>,
  failure_summary: <string or null>,
  timestamp: <ISO-date>
}
```

**Save results**:

```
Write("<session-folder>/results/run-<N>.json", <result-json>)
```

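Assembling the result record above can be sketched as follows; the field names follow this spec, while the helper name and its input shape are assumptions (real parsing is framework-specific).

```javascript
// Build the run-<N>.json record from parsed test output.
function buildRunResult({ runId, passed, total, coverage, target, iterations, failureSummary }) {
  const passRate = total === 0 ? 0 : passed / total;
  return {
    run_id: `run-${runId}`,
    pass_rate: passRate,
    coverage,
    coverage_target: target,
    iterations,
    passed: passRate >= 0.95 && coverage >= target, // the spec's gate
    failure_summary: failureSummary ?? null,
    timestamp: new Date().toISOString(),
  };
}

const r = buildRunResult({ runId: 1, passed: 19, total: 20, coverage: 82, target: 80, iterations: 2 });
console.log(r.run_id, r.passed); // run-1 true (0.95 pass rate, coverage above target)
```
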
### Phase 4: Defect Pattern Extraction
|
||||
|
||||
**Extract patterns from failures** (if failure_summary exists):
|
||||
|
||||
| Pattern Type | Detection |
|
||||
|--------------|-----------|
|
||||
| Null reference | "null", "undefined", "Cannot read property" |
|
||||
| Async timing | "timeout", "async", "await", "promise" |
|
||||
| Import errors | "Cannot find module", "import" |
|
||||
| Type mismatches | "type", "expected", "received" |
|
||||
|
||||
**Record effective test patterns** (if pass_rate > 0.8):
|
||||
|
||||
| Pattern | Detection |
|
||||
|---------|-----------|
|
||||
| Happy path | Tests with "should succeed" or "valid input" |
|
||||
| Edge cases | Tests with "edge", "boundary", "limit" |
|
||||
| Error handling | Tests with "should fail", "error", "throw" |
|
||||
|
||||
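The defect-pattern table maps directly onto a keyword classifier; the keyword lists below mirror the detection column and are heuristics, not an exhaustive taxonomy.

```javascript
// Classify a failure summary against the defect-pattern table.
const DEFECT_PATTERNS = {
  "null-reference": ["null", "undefined", "cannot read property"],
  "async-timing": ["timeout", "async", "await", "promise"],
  "import-error": ["cannot find module", "import"],
  "type-mismatch": ["type", "expected", "received"],
};

function extractDefectPatterns(failureSummary) {
  const text = failureSummary.toLowerCase();
  return Object.entries(DEFECT_PATTERNS)
    .filter(([, keywords]) => keywords.some(k => text.includes(k)))
    .map(([name]) => name);
}

console.log(extractDefectPatterns("Request timed out waiting for promise"));
// [ 'async-timing' ]
```

Note that substring matching can match several patterns at once (e.g. a `TypeError` about `undefined` hits both null-reference and type-mismatch), which is acceptable since the spec collects all observed patterns.
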
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

1. **Share execution results via team_msg(type='state_update')**:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: <session-id>, from: "executor",
  type: "state_update",
  data: {
    execution_results: [...sharedMemory.execution_results, <result-data>],
    defect_patterns: [
      ...sharedMemory.defect_patterns,
      ...(<result-data>.defect_patterns || [])
    ],
    effective_test_patterns: [
      ...new Set([...sharedMemory.effective_test_patterns, ...(<result-data>.effective_patterns || [])])
    ],
    coverage_history: [...sharedMemory.coverage_history, {
      layer: <test-dir>,
      coverage: <coverage>,
      target: <target>,
      pass_rate: <pass_rate>,
      timestamp: <ISO-date>
    }]
  }
})
```

2. **Log via team_msg**:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: <session-id>, from: "executor",
  type: <passed ? "tests_passed" : "tests_failed">,
  data: {ref: "<session-folder>/results/run-<N>.json"}
})
```

3. **SendMessage to coordinator**:

```
SendMessage({
  type: "message", recipient: "coordinator",
  content: "## [executor] Test Execution Results

**Task**: <task-subject>
**Pass Rate**: <pass_rate>%
**Coverage**: <coverage>% (target: <target>%)
**Fix Iterations**: <N>/3
**Status**: <PASSED|NEEDS REVISION>

<if-defect-patterns>
### Defect Patterns
- <pattern-1>
- <pattern-2>
</if-defect-patterns>",
  summary: "[executor] <PASSED|FAILED>: <coverage>% coverage"
})
```

4. **TaskUpdate completed**:

```
TaskUpdate({ taskId: <task-id>, status: "completed" })
```

5. **Loop**: Return to Phase 1 to check next task

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No TESTRUN-* tasks available | Idle, wait for coordinator assignment |
| Test command fails to start | Check framework installation, notify coordinator |
| Coverage tool unavailable | Report pass rate only |
| All tests time out | Increase timeout, retry once |
| Auto-fix makes tests worse | Revert, report original failures |
| Shared memory not found | Notify coordinator, request location |
| Context/Plan file not found | Notify coordinator, request location |

@@ -1,274 +0,0 @@

# Generator Role

Test case generator. Generates test code by layer (L1 unit / L2 integration / L3 E2E). Acts as the Generator in the Generator-Critic loop.

## Identity

- **Name**: `generator` | **Tag**: `[generator]`
- **Task Prefix**: `TESTGEN-*`
- **Responsibility**: Code generation (test code creation)

## Boundaries

### MUST

- Only process `TESTGEN-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[generator]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within code generation responsibility scope
- Phase 2: Read role states via team_msg(operation='get_state') + test strategy
- Phase 5: Share generated_tests via team_msg(type='state_update')
- Generate executable test code

### MUST NOT

- Execute work outside this role's responsibility scope (no test execution, coverage analysis, or strategy formulation)
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify source code (only generate test code)
- Omit `[generator]` identifier in any output

---

## Toolbox

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| Read | Read | Phase 2 | Load role states, strategy, source files |
| Glob | Read | Phase 2 | Find test files, source files |
| Write | Write | Phase 3 | Create test files |
| Edit | Write | Phase 3 | Modify existing test files |
| Bash | Read | Phase 4 | Syntax validation (tsc --noEmit) |
| Task | Delegate | Phase 3 | Delegate to code-developer for complex generation |
| TaskUpdate | Write | Phase 5 | Mark task completed |
| SendMessage | Write | Phase 5 | Report to coordinator |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `tests_generated` | generator -> coordinator | Tests created | Test generation complete |
| `tests_revised` | generator -> coordinator | Tests revised after failure | Tests revised (GC loop) |
| `error` | generator -> coordinator | Processing failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "generator",
  type: <message-type>,
  data: {ref: "<artifact-path>"}
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from generator --type <message-type> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `TESTGEN-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context Loading

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Session path | Task description (Session: <path>) | Yes |
| Role state | team_msg(operation="get_state", session_id=<session-id>) | Yes |
| Test strategy | <session-folder>/strategy/test-strategy.md | Yes |
| Source files | From test_strategy.priority_files | Yes |
| Wisdom | <session-folder>/wisdom/ | No |

**Loading steps**:

1. Extract session path from task description (look for `Session: <path>`)
2. Extract layer from task description (look for `Layer: <L1-unit|L2-integration|L3-e2e>`)

3. Read role states:

```
mcp__ccw-tools__team_msg({ operation: "get_state", session_id: <session-id> })
```

4. Read test strategy:

```
Read("<session-folder>/strategy/test-strategy.md")
```

5. Read source files to test (limit to 20 files):

```
Read("<source-file-1>")
Read("<source-file-2>")
...
```

6. Check if this is a revision (GC loop):

| Condition | Revision Mode |
|-----------|---------------|
| Task subject contains "fix" or "revised" | Yes - load previous failures |
| Otherwise | No - fresh generation |

**For revision mode**:
- Read latest result file for failure details
- Load effective test patterns from shared memory

7. Read wisdom files if available

### Phase 3: Test Generation

**Strategy selection**:

| File Count | Complexity | Strategy |
|------------|------------|----------|
| <= 3 files | Low | Direct: inline Write/Edit |
| 4-5 files | Medium | Single agent: one code-developer for all |
| > 5 files | High | Batch agent: group by module, one agent per batch |

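The selection rule reduces to a threshold function; the string labels below are illustrative names, and exactly 3 files is treated as "direct" here.

```javascript
// Pick a generation strategy from the file count.
function generationStrategy(fileCount) {
  if (fileCount <= 3) return "direct";       // inline Write/Edit
  if (fileCount <= 5) return "single-agent"; // one code-developer for all
  return "batch-agent";                      // group by module, one agent per batch
}

console.log(generationStrategy(2)); // direct
console.log(generationStrategy(4)); // single-agent
console.log(generationStrategy(8)); // batch-agent
```
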
**Direct generation (low complexity)**:
|
||||
|
||||
For each source file:
|
||||
1. Generate test path based on layer convention
|
||||
2. Generate test code covering: happy path, edge cases, error handling
|
||||
3. Write test file
|
||||
|
||||
```
|
||||
Write("<session-folder>/tests/<layer>/<test-file>", <test-code>)
|
||||
```
|
||||
|
||||
**Agent delegation (medium/high complexity)**:
|
||||
|
||||
```
|
||||
Task({
|
||||
subagent_type: "code-developer",
|
||||
run_in_background: false,
|
||||
description: "Generate <layer> tests",
|
||||
prompt: "Generate <layer> tests using <framework> for the following files:
|
||||
|
||||
<file-list-with-content>
|
||||
|
||||
<if-revision>
|
||||
## Previous Failures
|
||||
<failure-details>
|
||||
</if-revision>
|
||||
|
||||
<if-effective-patterns>
|
||||
## Effective Patterns (from previous rounds)
|
||||
<pattern-list>
|
||||
</if-effective-patterns>
|
||||
|
||||
Write test files to: <session-folder>/tests/<layer>/
|
||||
Use <framework> conventions.
|
||||
Each test file should cover: happy path, edge cases, error handling."
|
||||
})
|
||||
```
|
||||
|
||||
**Output verification**:
|
||||
|
||||
```
|
||||
Glob({ pattern: "<session-folder>/tests/<layer>/**/*" })
|
||||
```
|
||||
|
||||
### Phase 4: Self-Validation
|
||||
|
||||
**Validation checks**:
|
||||
|
||||
| Check | Method | Pass Criteria | Action on Fail |
|
||||
|-------|--------|---------------|----------------|
|
||||
| Syntax | `tsc --noEmit` or equivalent | No errors | Auto-fix imports and types |
|
||||
| File count | Count generated files | >= 1 file | Report issue |
|
||||
| Import resolution | Check no broken imports | All imports resolve | Fix import paths |
|
||||
|
||||
**Syntax check command**:
|
||||
|
||||
```
|
||||
Bash("cd \"<session-folder>\" && npx tsc --noEmit tests/<layer>/**/*.ts 2>&1 || true")
|
||||
```
|
||||
|
||||
If syntax errors found, attempt auto-fix for common issues (imports, types).
|
||||
|
||||
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

1. **Share generated tests via team_msg(type='state_update')**:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: <session-id>, from: "generator",
  type: "state_update",
  data: {
    generated_tests: [
      ...sharedMemory.generated_tests,
      ...<new-test-files>.map(f => ({
        file: f,
        layer: <layer>,
        round: <is-revision ? gc_round : 0>,
        revised: <is-revision>
      }))
    ]
  }
})
```

2. **Log via team_msg**:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: <session-id>, from: "generator",
  type: <is-revision ? "tests_revised" : "tests_generated">,
  data: {ref: "<session-folder>/tests/<layer>/"}
})
```

3. **SendMessage to coordinator**:

```
SendMessage({
  type: "message", recipient: "coordinator",
  content: "## [generator] Tests <Generated|Revised>\n\n**Layer**: <layer>\n**Files**: <file-count>\n**Framework**: <framework>\n**Revision**: <Yes/No>\n**Output**: <path>",
  summary: "[generator] <file-count> <layer> tests <generated|revised>"
})
```

4. **TaskUpdate completed**:

```
TaskUpdate({ taskId: <task-id>, status: "completed" })
```

5. **Loop**: Return to Phase 1 to check next task

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No TESTGEN-* tasks available | Idle, wait for coordinator assignment |
| Source file not found | Skip, notify coordinator |
| Test framework unknown | Default to Jest patterns |
| Revision with no failure data | Generate additional tests instead of revising |
| Syntax errors in generated tests | Auto-fix imports and types |
| Shared memory not found | Notify coordinator, request location |
| Context/Plan file not found | Notify coordinator, request location |

@@ -1,220 +0,0 @@

# Strategist Role

Test strategy designer. Analyzes git diff, determines test layers, defines coverage targets and test priorities.

## Identity

- **Name**: `strategist` | **Tag**: `[strategist]`
- **Task Prefix**: `STRATEGY-*`
- **Responsibility**: Read-only analysis (strategy formulation)

## Boundaries

### MUST

- Only process `STRATEGY-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[strategist]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within read-only analysis responsibility scope
- Phase 2: Read role states via team_msg(operation='get_state')
- Phase 5: Share test_strategy via team_msg(type='state_update')

### MUST NOT

- Execute work outside this role's responsibility scope (no test generation, execution, or result analysis)
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's responsibility
- Omit `[strategist]` identifier in any output

---

## Toolbox

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| Read | Read | Phase 2 | Load role states, existing test patterns |
| Bash | Read | Phase 2 | Git diff analysis, framework detection |
| Glob | Read | Phase 2 | Find test files, config files |
| Write | Write | Phase 3 | Create test-strategy.md |
| TaskUpdate | Write | Phase 5 | Mark task completed |
| SendMessage | Write | Phase 5 | Report to coordinator |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `strategy_ready` | strategist -> coordinator | Strategy completed | Strategy formulation complete |
| `error` | strategist -> coordinator | Processing failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "strategist",
  type: <message-type>,
  data: {ref: "<artifact-path>"}
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from strategist --type <message-type> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `STRATEGY-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context Loading

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Session path | Task description (Session: <path>) | Yes |
| Role state | team_msg(operation="get_state", session_id=<session-id>) | Yes |
| Git diff | `git diff HEAD~1` or `git diff --cached` | Yes |
| Changed files | From git diff --name-only | Yes |

**Loading steps**:

1. Extract session path from task description (look for `Session: <path>`)
2. Read role states for changed files and modules:

```
mcp__ccw-tools__team_msg({ operation: "get_state", session_id: <session-id> })
```

3. Get detailed git diff for analysis:

```
Bash("git diff HEAD~1 -- <file1> <file2> ... 2>/dev/null || git diff --cached -- <files>")
```

4. Detect test framework from project files:

| Framework | Detection Method |
|-----------|-----------------|
| Jest | Check jest.config.js or jest.config.ts exists |
| Pytest | Check pytest.ini or pyproject.toml exists |
| Vitest | Check vitest.config.ts or vitest.config.js exists |

```
Bash("( test -f jest.config.js || test -f jest.config.ts ) && echo \"yes\" || echo \"no\"")
```

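The detection table can equivalently operate on a directory listing instead of shelling out; this sketch mirrors the table's file names, and the first-match priority order (Jest before Pytest before Vitest) is an assumption.

```javascript
// Detect the test framework from root-level config file names.
function detectFramework(rootFiles) {
  const has = name => rootFiles.includes(name);
  if (has("jest.config.js") || has("jest.config.ts")) return "Jest";
  if (has("pytest.ini") || has("pyproject.toml")) return "Pytest";
  if (has("vitest.config.ts") || has("vitest.config.js")) return "Vitest";
  return "unknown";
}

console.log(detectFramework(["package.json", "vitest.config.ts"])); // Vitest
```

Note that `pyproject.toml` alone is a weak signal (many non-Pytest Python projects have one), so a real detector would also inspect its contents.
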
### Phase 3: Strategy Formulation

**Analysis dimensions**:

| Change Type | Analysis | Impact |
|-------------|----------|--------|
| New files | Need new tests | High priority |
| Modified functions | Need updated tests | Medium priority |
| Deleted files | Need test cleanup | Low priority |
| Config changes | May need integration tests | Variable |

**Strategy structure**:

1. **Change Analysis Table**: File, Change Type, Impact, Priority
2. **Test Layer Recommendations**:
   - L1 Unit Tests: Scope, Coverage Target, Priority Files, Test Patterns
   - L2 Integration Tests: Scope, Coverage Target, Integration Points
   - L3 E2E Tests: Scope, Coverage Target, User Scenarios
3. **Risk Assessment**: Risk, Probability, Impact, Mitigation
4. **Test Execution Order**: Prioritized sequence

**Output file**: `<session-folder>/strategy/test-strategy.md`

```
Write("<session-folder>/strategy/test-strategy.md", <strategy-content>)
```

### Phase 4: Self-Validation

**Validation checks**:

| Check | Criteria | Action |
|-------|----------|--------|
| Has L1 scope | L1 scope not empty | If empty, set default based on changed files |
| Has coverage targets | L1 target > 0 | If missing, use default (80/60/40) |
| Has priority files | Priority list not empty | If empty, use all changed files |

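The self-validation fallbacks amount to filling missing strategy fields with the documented defaults (80/60/40 coverage targets, changed files as priorities); this is a sketch, and the function name and input shape are assumptions.

```javascript
// Apply the Phase 4 fallbacks to a possibly incomplete strategy.
function validateStrategy(strategy, changedFiles) {
  const s = { ...strategy };
  if (!s.coverage_targets || !(s.coverage_targets.L1 > 0)) {
    s.coverage_targets = { L1: 80, L2: 60, L3: 40 }; // documented defaults
  }
  if (!s.priority_files || s.priority_files.length === 0) {
    s.priority_files = [...changedFiles]; // fall back to all changed files
  }
  return s;
}

const fixed = validateStrategy({}, ["src/auth.ts"]);
console.log(fixed.coverage_targets.L1, fixed.priority_files); // 80 [ 'src/auth.ts' ]
```
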
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

1. **Share test strategy via team_msg(type='state_update')**:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: <session-id>, from: "strategist",
  type: "state_update",
  data: {
    test_strategy: {
      framework: <detected-framework>,
      layers: { L1: [...], L2: [...], L3: [...] },
      coverage_targets: { L1: <n>, L2: <n>, L3: <n> },
      priority_files: [...],
      risks: [...]
    }
  }
})
```

2. **Log via team_msg**:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: <session-id>, from: "strategist",
  type: "strategy_ready",
  data: {ref: "<session-folder>/strategy/test-strategy.md"}
})
```

3. **SendMessage to coordinator**:

```
SendMessage({
  type: "message", recipient: "coordinator",
  content: "## [strategist] Test Strategy Ready\n\n**Files**: <count>\n**Layers**: L1(<count>), L2(<count>), L3(<count>)\n**Framework**: <framework>\n**Output**: <path>",
  summary: "[strategist] Strategy ready"
})
```

4. **TaskUpdate completed**:

```
TaskUpdate({ taskId: <task-id>, status: "completed" })
```

5. **Loop**: Return to Phase 1 to check next task

---
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Scenario | Resolution |
|
||||
|----------|------------|
|
||||
| No STRATEGY-* tasks available | Idle, wait for coordinator assignment |
|
||||
| No changed files | Analyze full codebase, recommend smoke tests |
|
||||
| Unknown test framework | Recommend Jest/Pytest based on project language |
|
||||
| All files are config | Recommend integration tests only |
|
||||
| Shared memory not found | Notify coordinator, request location |
|
||||
| Context/Plan file not found | Notify coordinator, request location |
|
||||
@@ -11,25 +11,34 @@ Unified team skill: design system analysis, token definition, component specifica

## Architecture

```
┌───────────────────────────────────────────────────┐
│  Skill(skill="team-uidesign")                     │
│  args="<task>" or args="--role=xxx"               │
└───────────────────┬───────────────────────────────┘
                    │ Role Router
          ┌──── --role present? ────┐
          │ NO                      │ YES
          ↓                         ↓
   Orchestration Mode          Role Dispatch
   (auto → coordinator)        (route to role.md)
          │
     ┌────┴────┬───────────┬───────────┬───────────┐
     ↓         ↓           ↓           ↓           ↓
┌──────────┐┌──────────┐┌──────────┐┌──────────┐┌───────────┐
│coordinator││researcher││ designer ││ reviewer ││implementer│
│          ││RESEARCH-*││ DESIGN-* ││ AUDIT-*  ││ BUILD-*   │
└──────────┘└──────────┘└──────────┘└──────────┘└───────────┘
+---------------------------------------------------+
| Skill(skill="team-uidesign")                      |
| args="<task-description>"                         |
+-------------------+-------------------------------+
                    |
     Orchestration Mode (auto -> coordinator)
                    |
            Coordinator (inline)
         Phase 0-5 orchestration
                    |
        +-------+---+---+-------+
        v       v       v       v
      [tw]    [tw]    [tw]    [tw]
    resear-  design-  review-  imple-
    cher     er       er       menter

(tw) = team-worker agent
```

## Command Execution Protocol

When coordinator needs to execute a command (dispatch, monitor):

1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** -- NOT separate agents or subprocesses
4. **Execute synchronously** -- complete the command workflow before proceeding

## Role Router

### Input Parsing

@@ -38,13 +47,13 @@ Parse `$ARGUMENTS` to extract `--role`. If absent → Orchestration Mode (auto r

### Role Registry

| Role | File | Task Prefix | Type | Compact |
|------|------|-------------|------|---------|
| coordinator | [roles/coordinator.md](roles/coordinator.md) | (none) | orchestrator | **⚠️ Must re-read after compaction** |
| researcher | [roles/researcher.md](roles/researcher.md) | RESEARCH-* | pipeline | Must re-read after compaction |
| designer | [roles/designer.md](roles/designer.md) | DESIGN-* | pipeline | Must re-read after compaction |
| reviewer | [roles/reviewer.md](roles/reviewer.md) | AUDIT-* | pipeline | Must re-read after compaction |
| implementer | [roles/implementer.md](roles/implementer.md) | BUILD-* | pipeline | Must re-read after compaction |

| Role | Spec | Task Prefix | Inner Loop |
|------|------|-------------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | - |
| researcher | [role-specs/researcher.md](role-specs/researcher.md) | RESEARCH-* | false |
| designer | [role-specs/designer.md](role-specs/designer.md) | DESIGN-* | false |
| reviewer | [role-specs/reviewer.md](role-specs/reviewer.md) | AUDIT-* | false |
| implementer | [role-specs/implementer.md](role-specs/implementer.md) | BUILD-* | false |

> **⚠️ COMPACT PROTECTION**: Role files are execution documents, not reference material. When context compression occurs and only a summary of the role instructions remains, you **must immediately `Read` the corresponding role.md to reload it before continuing**. Do not execute any Phase based on the summary.

@@ -338,39 +347,60 @@ Beat 1 2 3 4 5 6

## Coordinator Spawn Template

When coordinator spawns workers, use background mode (Spawn-and-Stop):

### v5 Worker Spawn (all roles)

When coordinator spawns workers, use `team-worker` agent with role-spec path:

```
Task({
  subagent_type: "general-purpose",
  subagent_type: "team-worker",
  description: "Spawn <role> worker",
  team_name: "uidesign",
  name: "<role>",
  run_in_background: true,
  prompt: `You are team "uidesign" <ROLE>.
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-uidesign/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: uidesign
requirement: <task-description>
inner_loop: false

## Primary Directive
All your work must be executed through Skill to load role definition:
Skill(skill="team-uidesign", args="--role=<role>")

Current task: <task-description>
Session: <session-folder>

## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other role work
- All output prefixed with [<role>] identifier
- Only communicate with coordinator
- Do not use TaskCreate for other roles
- Call mcp__ccw-tools__team_msg before every SendMessage

## Workflow
1. Call Skill -> load role definition and execution logic
2. Follow role.md 5-Phase flow
3. team_msg + SendMessage results to coordinator
4. TaskUpdate completed -> check next task`
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`
})
```

**All roles** (researcher, designer, reviewer, implementer): Set `inner_loop: false`.

---

## Completion Action

When the pipeline completes (all tasks done, coordinator Phase 5):

```
AskUserQuestion({
  questions: [{
    question: "UI Design pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
      { label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
      { label: "Export Results", description: "Export deliverables to a specified location, then clean" }
    ]
  }]
})
```

| Choice | Action |
|--------|--------|
| Archive & Clean | Update session status="completed" -> TeamDelete(uidesign) -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions: `Skill(skill="team-uidesign", args="resume")` |
| Export Results | AskUserQuestion for target path -> copy deliverables -> Archive & Clean |

---

## Unified Session Directory

@@ -1,326 +0,0 @@

# Coordinator Role

Orchestrate the UI Design workflow: team creation, task dispatching, progress monitoring, session state. Manages dual-track pipelines (design + implementation), sync points, and Generator-Critic loops between designer and reviewer.

## Identity

- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **Responsibility**: Parse requirements -> Create team -> Dispatch tasks -> Monitor progress -> Report results

## Boundaries

### MUST

- Parse user requirements and clarify ambiguous inputs via AskUserQuestion
- Create team and spawn worker subagents in background
- Dispatch tasks with proper dependency chains (see SKILL.md Task Metadata Registry)
- Monitor progress via worker callbacks and route messages
- Maintain session state persistence

### MUST NOT

- Execute design/implementation work directly (delegate to workers)
- Modify task outputs (workers own their deliverables)
- Call implementation subagents directly
- Skip dependency validation when creating task chains

> **Core principle**: coordinator is the orchestrator, not the executor. All actual work must be delegated to worker roles via TaskCreate.

---

## Entry Router

When coordinator is invoked, first detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` tag from a known worker role | -> handleCallback: auto-advance pipeline |
| Status check | Arguments contain "check" or "status" | -> handleCheck: output execution graph, no advancement |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume: check worker states, advance pipeline |
| New session | None of the above | -> Phase 0 (Session Resume Check) |

For callback/check/resume: load coordination logic and execute the appropriate handler, then STOP.

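The detection table above can be sketched as a routing function. The role list and the shape of `invocation` are assumptions for illustration:

```javascript
// Sketch: Entry Router detection, checked in table order.
const WORKER_ROLES = ["researcher", "designer", "reviewer", "implementer"];

function routeInvocation(invocation) {
  // Worker callback: message carries a known [role] tag.
  const tagged = WORKER_ROLES.some((r) => invocation.message?.includes(`[${r}]`));
  if (tagged) return "handleCallback";
  const args = (invocation.args || "").toLowerCase();
  if (/\b(check|status)\b/.test(args)) return "handleCheck";
  if (/\b(resume|continue)\b/.test(args)) return "handleResume";
  return "phase0"; // new session
}
```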
---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `task_unblocked` | coordinator -> any | Dependency resolved / sync point passed | Notify worker of available task |
| `sync_checkpoint` | coordinator -> all | Audit passed at sync point | Design artifacts stable for consumption |
| `fix_required` | coordinator -> designer | Audit found issues | Create DESIGN-fix task |
| `error` | coordinator -> all | Critical system error | Escalation to user |
| `shutdown` | coordinator -> all | Team being dissolved | Clean shutdown signal |

---

## Phase 0: Session Resume Check

**Objective**: Detect and resume interrupted sessions before creating new ones.

**Workflow**:
1. Scan session directory for sessions with status "active" or "paused"
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for user selection

**Session Reconciliation**:
1. Audit TaskList -> get real status of all tasks
2. Reconcile: session state <-> TaskList status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
4. Determine remaining pipeline from reconciled state
5. Rebuild team if disbanded (TeamCreate + spawn needed workers only)
6. Create missing tasks with correct blockedBy dependencies
7. Verify dependency chain integrity
8. Update session file with reconciled state
9. Kick first executable task's worker -> Phase 4

---

## Phase 1: Requirement Clarification

**Objective**: Parse user input and gather execution parameters.

**Workflow**:

1. **Parse arguments** for explicit settings: mode, scope, focus areas
2. **Ask for missing parameters** via AskUserQuestion:

| Question | Header | Options |
|----------|--------|---------|
| UI design scope | Scope | Single component / Component system / Full design system |
| Product type/industry | Industry | SaaS/Tech / E-commerce / Healthcare/Finance / Education/Content / Other |
| Design constraints | Constraint | Existing design system / WCAG AA / Responsive / Dark mode |

3. **Map scope to pipeline**:

| Scope | Pipeline |
|-------|----------|
| Single component | `component` |
| Component system | `system` |
| Full design system | `full-system` |

4. **Industry config** affects audit strictness and design intelligence:

| Industry | Strictness | Must Have |
|----------|------------|-----------|
| SaaS/Tech | standard | Responsive, Dark mode |
| E-commerce | standard | Responsive, Fast loading |
| Healthcare/Finance | strict | WCAG AA, High contrast, Clear typography |
| Education/Content | standard | Readability, Responsive |
| Other | standard | (none) |

**Success**: All parameters captured, mode finalized.

---

## Phase 2: Create Team + Initialize Session

**Objective**: Initialize team, session file, and wisdom directory.

**Workflow**:
1. Generate session ID: `UDS-<slug>-<date>`
2. Create session folder: `.workflow/.team/UDS-<slug>-<date>/`
3. Call TeamCreate with team name "uidesign"
4. Create directory structure:

```
UDS-<slug>-<date>/
├── .msg/messages.jsonl
├── .msg/meta.json
├── wisdom/
│   ├── learnings.md
│   ├── decisions.md
│   ├── conventions.md
│   └── issues.md
├── research/
├── design/
│   ├── component-specs/
│   └── layout-specs/
├── audit/
└── build/
    ├── token-files/
    └── component-files/
```

5. Initialize cross-role state via team_msg(type='state_update'):

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "coordinator",
  type: "state_update",
  data: {
    design_intelligence: {},
    design_token_registry: { colors: {}, typography: {}, spacing: {}, shadows: {}, borders: {} },
    style_decisions: [],
    component_inventory: [],
    accessibility_patterns: [],
    audit_history: [],
    industry_context: { industry: <industry>, config: <config> },
    _metadata: { created_at: <timestamp>, pipeline: <pipeline> }
  }
})
```

6. Write meta.json with:
   - session_id, team_name, topic, pipeline, status
   - current_phase, completed_tasks, sync_points
   - gc_state: { round, max_rounds: 2, converged }
   - user_preferences, industry_config, pipeline_progress

**Success**: Team created, session file written, wisdom initialized.

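Steps 1-2 and 6 above can be sketched together. The slug rules and the `status`/`current_phase` initial values are assumptions, not specified by the skill:

```javascript
// Sketch: build the UDS-<slug>-<date> session ID and the initial
// meta.json fields listed in step 6.
function initSession(topic, pipeline, now = new Date()) {
  const slug = topic
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumerics to dashes
    .replace(/^-|-$/g, "")
    .slice(0, 40);
  const date = now.toISOString().slice(0, 10);
  const sessionId = `UDS-${slug}-${date}`;
  return {
    sessionId,
    folder: `.workflow/.team/${sessionId}/`,
    meta: {
      session_id: sessionId,
      team_name: "uidesign",
      topic,
      pipeline,
      status: "active",
      completed_tasks: [],
      sync_points: [],
      gc_state: { round: 0, max_rounds: 2, converged: false },
    },
  };
}
```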
---

## Phase 3: Create Task Chain

**Objective**: Dispatch tasks based on mode with proper dependencies.

### Component Pipeline

| Task ID | Role | Dependencies | Description |
|---------|------|--------------|-------------|
| RESEARCH-001 | researcher | (none) | Design system analysis, component inventory, accessibility audit |
| DESIGN-001 | designer | RESEARCH-001 | Component design and specification |
| AUDIT-001 | reviewer | DESIGN-001 | Design review (GC loop entry) |
| BUILD-001 | implementer | AUDIT-001 | Component code implementation |

### System Pipeline (Dual-Track)

| Task ID | Role | Dependencies | Description |
|---------|------|--------------|-------------|
| RESEARCH-001 | researcher | (none) | Design system analysis |
| DESIGN-001 | designer | RESEARCH-001 | Design token system definition |
| AUDIT-001 | reviewer | DESIGN-001 | Token audit [Sync Point 1] |
| DESIGN-002 | designer | AUDIT-001 | Component specification design |
| BUILD-001 | implementer | AUDIT-001 | Token code implementation |
| AUDIT-002 | reviewer | DESIGN-002 | Component audit [Sync Point 2] |
| BUILD-002 | implementer | AUDIT-002, BUILD-001 | Component code implementation |

### Full-System Pipeline

Same as System Pipeline, plus:
- AUDIT-003: Final comprehensive audit (blockedBy BUILD-002)

**Task Creation**:
- Include `Session: <session-folder>` in every task description
- Set owner based on role mapping
- Set blockedBy for dependency chains

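The Component Pipeline table plus the Task Creation rules can be sketched as a chain builder. The exact TaskCreate payload schema is an assumption inferred from the surrounding examples:

```javascript
// Sketch: expand the Component Pipeline into task payloads with
// owner and blockedBy wired per the table above.
function componentPipeline(sessionFolder) {
  const chain = [
    ["RESEARCH-001", "researcher", []],
    ["DESIGN-001", "designer", ["RESEARCH-001"]],
    ["AUDIT-001", "reviewer", ["DESIGN-001"]],
    ["BUILD-001", "implementer", ["AUDIT-001"]],
  ];
  return chain.map(([id, owner, blockedBy]) => ({
    id,
    owner,
    blockedBy,
    // Every description carries the session folder, per Task Creation.
    description: `${id} for UI design pipeline. Session: ${sessionFolder}`,
  }));
}
```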
---

## Phase 4: Spawn-and-Stop

**Objective**: Spawn first batch of ready workers in background, then STOP.

**Design**: Spawn-and-Stop + Callback pattern.
- Spawn workers with `Task(run_in_background: true)` -> immediately return
- Worker completes -> SendMessage callback -> auto-advance
- User can use "check" / "resume" to manually advance
- Coordinator does one operation per invocation, then STOPS

**Workflow**:
1. Find tasks with: status=pending, blockedBy all resolved, owner assigned
2. For each ready task -> spawn worker (see SKILL.md Coordinator Spawn Template)
3. Output status summary
4. STOP

**Pipeline advancement** driven by three wake sources:
- Worker callback (automatic) -> Entry Router -> handleCallback
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)

### Message Handling

| Received Message | Action |
|-----------------|--------|
| Researcher: research_ready | Read research output -> team_msg log -> TaskUpdate completed (auto-unblocks DESIGN) |
| Designer: design_ready | Read design artifacts -> team_msg log -> TaskUpdate completed (auto-unblocks AUDIT) |
| Designer: design_revision | GC loop: update round count, re-assign DESIGN-fix task |
| Reviewer: audit_passed (score >= 8) | **Sync Point**: team_msg log(sync_checkpoint) -> TaskUpdate completed -> unblock parallel tasks |
| Reviewer: audit_result (score 6-7) | GC round < max -> Create DESIGN-fix -> assign designer |
| Reviewer: fix_required (score < 6) | GC round < max -> Create DESIGN-fix with severity CRITICAL -> assign designer |
| Reviewer: audit_result + GC round >= max | Escalate to user: "Design review failed after {max} rounds" |
| Implementer: build_complete | team_msg log -> TaskUpdate completed -> check if next AUDIT unblocked |
| All tasks completed | -> Phase 5 |

### Generator-Critic Loop Control

| Condition | Score | Critical | Action |
|-----------|-------|----------|--------|
| Converged | >= 8 | 0 | Proceed (mark as sync_checkpoint) |
| Not converged | 6-7 | 0 | GC round < max -> Create DESIGN-fix task |
| Critical issues | < 6 | > 0 | GC round < max -> Create DESIGN-fix (CRITICAL) |
| Exceeded max | any | any | Escalate to user |

**GC Escalation Options**:
1. Accept current design - Skip remaining review, continue implementation
2. Try one more round - Extra GC loop opportunity
3. Terminate - Stop and handle manually

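The loop-control table can be sketched as one decision function. Checking convergence before the round limit is an interpretation of the table's row order, since "Exceeded max | any | any" would otherwise also match a converged audit:

```javascript
// Sketch: Generator-Critic decision per the table above.
// round/maxRounds come from gc_state; thresholds from the Score column.
function gcDecision({ score, criticalCount, round, maxRounds }) {
  if (score >= 8 && criticalCount === 0) return "sync_checkpoint"; // converged
  if (round >= maxRounds) return "escalate"; // exceeded max rounds
  if (criticalCount > 0 || score < 6) return "design_fix_critical";
  return "design_fix"; // score 6-7, no critical issues
}
```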
### Dual-Track Sync Point Management

**When AUDIT at sync point passes**:
1. Record sync point in session.sync_points
2. Unblock parallel tasks on both tracks
3. team_msg log(sync_checkpoint)

**Dual-track failure fallback**:
- Convert remaining parallel tasks to sequential
- Remove parallel dependencies, add sequential blockedBy
- team_msg log(error): "Dual-track sync failed, falling back to sequential"

---

## Phase 5: Report + Next Steps

**Objective**: Completion report and follow-up options.

**Workflow**:
1. Load session state -> count completed tasks, duration
2. List deliverables with output paths
3. Update session status -> "completed"
4. Offer next steps to user:

| Option | Description |
|--------|-------------|
| New component | Design new component (reuse team) |
| Integration test | Verify component in actual page context |
| Close team | Dismiss all teammates and cleanup |

**Report Structure**:
- pipeline, tasks_completed, gc_rounds, sync_points_passed, final_audit_score
- artifacts: { research, design, audit, build }

---

## Session State Tracking

**Update on task completion**:
- completed_tasks: append task prefix
- pipeline_progress.completed: increment

**Update on sync point passed**:
- sync_points: append { audit, timestamp }

**Update on GC round**:
- gc_state.round: increment

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Audit score < 6 after 2 GC rounds | Escalate to user for decision |
| Dual-track sync failure | Fall back to single-track sequential execution |
| BUILD cannot find design files | Wait for Sync Point or escalate |
| Design token conflict | Reviewer arbitrates, coordinator intervenes |
| Worker unresponsive | Track messages; after 2 consecutive non-responses -> respawn worker |
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Dependency cycle | Detect, report to user, halt |
| Session corruption | Attempt recovery, fallback to manual reconciliation |

@@ -1,249 +0,0 @@

# Designer Role

Design token architect and component specification author. Defines visual language, component behavior, and responsive layouts. Acts as Generator in the designer<->reviewer Generator-Critic loop.

## Identity

- **Name**: `designer` | **Tag**: `[designer]`
- **Task Prefix**: `DESIGN-*`
- **Responsibility**: Code generation (design artifacts)

## Boundaries

### MUST

- Only process `DESIGN-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[designer]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within design artifact generation responsibility scope
- Consume design intelligence from ui-ux-pro-max when available

### MUST NOT

- Execute work outside this role's responsibility scope
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files outside design/ output directory
- Omit `[designer]` identifier in any output

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| Read | Read | Read research findings, design intelligence |
| Write | Write | Create design artifacts |
| Edit | Write | Modify existing design files |
| Glob | Search | Find existing design files |
| Grep | Search | Search patterns in files |
| Task | Delegate | Delegate to code-developer for complex generation |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `design_ready` | designer -> coordinator | Design artifact complete | Summary + file references |
| `design_revision` | designer -> coordinator | GC fix iteration complete | What changed + audit feedback addressed |
| `design_progress` | designer -> coordinator | Intermediate update | Current progress |
| `error` | designer -> coordinator | Failure | Error details |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "designer",
  type: <message-type>,
  ref: <artifact-path>
})
```

> `to` and `summary` are auto-defaulted by the tool.

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from designer --type <message-type> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `DESIGN-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

**Task type detection**:

| Pattern | Task Type |
|---------|-----------|
| Subject contains "token" | Token design |
| Subject contains "component" | Component spec |
| Subject contains "fix" or "revision" | GC fix |

### Phase 2: Context Loading + Shared Memory Read

**Loading steps**:

1. Extract session path from task description
2. Read role states via team_msg(operation="get_state")
3. Read research findings:

| File | Content |
|------|---------|
| design-system-analysis.json | Existing tokens, styling approach |
| component-inventory.json | Component list, patterns |
| accessibility-audit.json | WCAG level, issues |

4. Read design intelligence:

| Field | Usage |
|-------|-------|
| design_system.colors | Recommended color values |
| design_system.typography | Recommended font stacks |
| recommendations.anti_patterns | Patterns to avoid |
| ux_guidelines | Implementation hints |

5. If GC fix task: Read audit feedback from audit files

### Phase 3: Design Execution

#### Token System Design (DESIGN-001)

**Objective**: Define complete design token system following W3C Design Tokens Format.

**Token Categories**:

| Category | Tokens |
|----------|--------|
| Color | primary, secondary, background, surface, text (primary/secondary), semantic (success/warning/error/info) |
| Typography | font-family (base/mono), font-size (xs-3xl), font-weight, line-height |
| Spacing | xs(4px), sm(8px), md(16px), lg(24px), xl(32px), 2xl(48px) |
| Shadow | sm, md, lg |
| Border | radius (sm/md/lg/full), width |
| Breakpoint | mobile(320px), tablet(768px), desktop(1024px), wide(1280px) |

**Design Intelligence Integration**:

| Source | Usage |
|--------|-------|
| recommended.colors.primary | -> color.primary.$value.light |
| recommended.typography.heading | -> typography.font-family.base |
| anti_patterns | -> Document in spec for implementer |

**Theme Support**:
- All color tokens must have light/dark variants
- Use `$value: { light: ..., dark: ... }` format

**Output**: `design/design-tokens.json`

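The light/dark `$value` shape above can be sketched as follows; the token values are illustrative, and the structure loosely follows the W3C Design Tokens draft rather than quoting it:

```javascript
// Sketch: one color token in the themed $value shape, plus the
// integrity check implied by "all color tokens must have light/dark".
const colorPrimary = {
  "$type": "color",
  "$value": { light: "#2563eb", dark: "#60a5fa" }, // illustrative values
};

function hasThemeVariants(token) {
  const v = token["$value"];
  return Boolean(v && typeof v === "object" && v.light && v.dark);
}
```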
#### Component Specification (DESIGN-002)

**Objective**: Define component specs consuming design tokens.

**Spec Structure**:

| Section | Content |
|---------|---------|
| Overview | Type (atom/molecule/organism), purpose |
| Design Tokens Consumed | Token -> Usage -> Value Reference |
| States | default, hover, focus, active, disabled |
| Responsive Behavior | Changes per breakpoint |
| Accessibility | Role, ARIA, keyboard, focus indicator, contrast |
| Variants | Variant descriptions and token overrides |
| Anti-Patterns | From design intelligence |
| Implementation Hints | From ux_guidelines |

**State Definition Requirements**:

| State | Required |
|-------|----------|
| default | Visual appearance |
| hover | Background/opacity change |
| focus | Outline specification (2px solid, offset 2px) |
| active | Pressed state |
| disabled | Opacity 0.5, cursor not-allowed |

**Output**: `design/component-specs/{component-name}.md`

#### GC Fix Mode (DESIGN-fix-N)

**Objective**: Address audit feedback and revise design.

**Workflow**:
1. Parse audit feedback for specific issues
2. Re-read affected design artifacts
3. Apply fixes based on feedback:
   - Token value adjustments (contrast ratios, spacing)
   - Missing state definitions
   - Accessibility gaps
   - Naming convention fixes
4. Re-write affected files with corrections
5. Signal `design_revision` instead of `design_ready`

### Phase 4: Validation

**Self-check design artifacts**:

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| tokens_valid | Verify all $value fields non-empty | All values defined |
| states_complete | Check all 5 states defined | default/hover/focus/active/disabled |
| a11y_specified | Check accessibility section | Role, ARIA, keyboard defined |
| responsive_defined | Check breakpoint specs | At least mobile/desktop |
| token_refs_valid | Verify `{token.path}` references | All resolve to defined tokens |

**Token integrity check**:
- Light/dark values exist for all color tokens
- No empty $value fields
- Valid CSS-parseable values

**Component spec check**:
- All token references resolve
- All states defined
- A11y section complete

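The `token_refs_valid` check above needs a resolver for `{token.path}` references. A minimal sketch, assuming dot-separated paths ending at a node with a `$value` (the path syntax is not formally specified here):

```javascript
// Sketch: resolve a {token.path} reference against the token tree.
// Returns the $value if the path resolves, otherwise undefined.
function resolveTokenRef(ref, tokens) {
  const m = /^\{([^}]+)\}$/.exec(ref);
  if (!m) return undefined; // not a reference
  let node = tokens;
  for (const seg of m[1].split(".")) {
    if (node == null || typeof node !== "object") return undefined;
    node = node[seg];
  }
  return node && node["$value"] !== undefined ? node["$value"] : undefined;
}
```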
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[designer]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Update shared memory**:

| Field | Update |
|-------|--------|
| design_token_registry | Token categories and keys |
| style_decisions | Append design decision with timestamp |

**Message type selection**:

| Task Type | Message Type |
|-----------|--------------|
| GC fix task | `design_revision` |
| Normal task | `design_ready` |

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No DESIGN-* tasks available | Idle, wait for coordinator assignment |
| Research data missing | Use default tokens + mark for confirmation |
| Token conflict | Document decision rationale, submit for review arbitration |
| GC fix cannot satisfy all feedback | Document trade-offs, let coordinator decide |
| Too many components | Prioritize MVP components, mark post-MVP |
| Context/Plan file not found | Notify coordinator, request location |
| Critical issue beyond scope | SendMessage fix_required to coordinator |

@@ -1,278 +0,0 @@
|
||||
# Implementer Role

Component code builder responsible for translating design specifications into production code. Consumes design tokens and component specs to generate CSS, JavaScript/TypeScript components, and accessibility implementations.

## Identity

- **Name**: `implementer` | **Tag**: `[implementer]`
- **Task Prefix**: `BUILD-*`
- **Responsibility**: Code generation

## Boundaries

### MUST

- Only process `BUILD-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[implementer]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within code implementation responsibility scope
- Consume design tokens via CSS custom properties (no hardcoded values)
- Follow design specifications exactly

### MUST NOT

- Execute work outside this role's responsibility scope
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify design artifacts (only consume them)
- Omit `[implementer]` identifier in any output
- Use hardcoded colors/spacing (must use design tokens)

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| Read | Read | Read design tokens, component specs, audit reports |
| Write | Write | Create implementation files |
| Edit | Write | Modify existing code files |
| Glob | Search | Find files matching patterns |
| Grep | Search | Search patterns in files |
| Bash | Execute | Run build commands, tests |
| Task | Delegate | Delegate to code-developer for implementation |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `build_complete` | implementer -> coordinator | Implementation finished | Changed files + summary |
| `build_progress` | implementer -> coordinator | Intermediate update | Current progress |
| `error` | implementer -> coordinator | Failure | Error details |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "implementer",
  type: <message-type>,
  ref: <artifact-path>
})
```

> `to` and `summary` are auto-defaulted by the tool.

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from implementer --type <message-type> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `BUILD-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

**Build type detection**:

| Pattern | Build Type |
|---------|-----------|
| Subject contains "token" | Token implementation |
| Subject contains "component" | Component implementation |

### Phase 2: Context Loading + Shared Memory Read

**Loading steps**:

1. Extract session path from task description
2. Read role states via team_msg(operation="get_state"):

| Field | Usage |
|-------|-------|
| design_token_registry | Expected token categories |
| style_decisions | Styling approach decisions |

3. Read design artifacts:

| Artifact | Build Type |
|----------|-----------|
| design-tokens.json | Token build |
| component-specs/*.md | Component build |

4. Read latest audit report (for approved changes and feedback)

5. Read design intelligence:

| Field | Usage |
|-------|-------|
| stack_guidelines | Tech-specific implementation patterns |
| recommendations.anti_patterns | Patterns to avoid |
| ux_guidelines | Best practices |

6. Detect project tech stack from package.json

### Phase 3: Implementation Execution

#### Token Implementation (BUILD-001)

**Objective**: Convert design tokens to production code.

**Output files**:

| File | Content |
|------|---------|
| tokens.css | CSS custom properties with :root and [data-theme="dark"] |
| tokens.ts | TypeScript constants/types for programmatic access |
| README.md | Token usage guide |

**CSS Output Structure**:

```css
:root {
  --color-primary: #1976d2;
  --color-text-primary: rgba(0,0,0,0.87);
  --spacing-md: 16px;
  /* ... */
}

[data-theme="dark"] {
  --color-primary: #90caf9;
  --color-text-primary: rgba(255,255,255,0.87);
  /* ... */
}

@media (prefers-color-scheme: dark) {
  :root:not([data-theme="light"]) {
    /* ... */
  }
}
```

**Requirements**:
- Semantic token names matching design tokens
- All color tokens have both light and dark values
- CSS custom properties for runtime theming
- TypeScript types enable autocomplete

**Execution**: Delegate to code-developer subagent.

#### Component Implementation (BUILD-002)

**Objective**: Implement component code from design specifications.

**Input**:
- Component specification markdown
- Design tokens (CSS file)
- Audit feedback (if any)
- Anti-patterns to avoid

**Output files** (per component):

| File | Content |
|------|---------|
| {ComponentName}.tsx | React/Vue/Svelte component |
| {ComponentName}.css | Styles consuming tokens |
| {ComponentName}.test.tsx | Basic render + state tests |
| index.ts | Re-export |

**Implementation Requirements**:

| Requirement | Details |
|-------------|---------|
| Token consumption | Use var(--token-name), no hardcoded values |
| States | Implement all 5: default, hover, focus, active, disabled |
| ARIA | Add attributes as specified in design spec |
| Responsive | Support breakpoints from spec |
| Patterns | Follow project's existing component patterns |

**Accessibility Requirements**:

| Requirement | Criteria |
|-------------|----------|
| Keyboard navigation | Must work (Tab, Enter, Space, etc.) |
| Screen reader | ARIA support |
| Focus indicator | Visible using design token |
| Color contrast | WCAG AA (4.5:1 text, 3:1 UI) |

**Anti-pattern Avoidance**:
- Check against design intelligence anti_patterns
- Verify no violation in implementation

**Execution**: Delegate to code-developer subagent per component.

### Phase 4: Validation

**Token build validation**:

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| File existence | Read tokens.css, tokens.ts | Files exist |
| Token coverage | Parse CSS | All defined tokens present |
| Theme support | Parse CSS | Light/dark variants exist |

**Component build validation**:

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| File existence | Glob component dir | At least 3 files (component, style, index) |
| Token usage | Grep hardcoded values | No #xxx or rgb() in CSS (except in tokens.css) |
| Focus styles | Grep :focus | :focus or :focus-visible defined |
| Responsive | Grep @media | Media queries present |

**Hardcoded value detection**:

| Pattern | Severity |
|---------|----------|
| `#[0-9a-fA-F]{3,8}` in component CSS | Warning (should use token) |
| `rgb(` or `rgba(` in component CSS | Warning (should use token) |
| `cursor: pointer` missing on interactive | Info |
| Missing :focus styles on interactive | Warning |
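The hardcoded-value scan above can be sketched as a line-by-line regex pass. This is a minimal sketch: the patterns and severity labels come from the table, and exempting `tokens.css` follows the component build validation rule ("except in tokens.css"); the function and constant names are illustrative.

```typescript
// Sketch of the hardcoded-value detection pass over component CSS.
// tokens.css is exempt, since raw values are only allowed there.
const HARDCODED_PATTERNS: Array<[RegExp, string]> = [
  [/#[0-9a-fA-F]{3,8}\b/, "Warning: hex color, should use token"],
  [/\brgba?\(/, "Warning: rgb()/rgba() color, should use token"],
];

function detectHardcodedValues(file: string, css: string): string[] {
  if (file.endsWith("tokens.css")) return []; // raw values allowed here
  const findings: string[] = [];
  css.split("\n").forEach((line, i) => {
    for (const [pattern, message] of HARDCODED_PATTERNS) {
      if (pattern.test(line)) findings.push(`${file}:${i + 1} ${message}`);
    }
  });
  return findings;
}
```

Any findings are reported as Warnings in the build validation summary; `var(--token-name)` references pass untouched.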
**Anti-pattern self-check**:
- Verify implementation doesn't violate any anti_patterns from design intelligence

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[implementer]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Update shared memory** (for component build):

| Field | Update |
|-------|--------|
| component_inventory | Add implementation_path, set implemented=true |

**Report content**:
- Build type (token/component)
- Output file count
- Output directory path
- List of created files

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No BUILD-* tasks available | Idle, wait for coordinator assignment |
| Design token file not found | Wait for sync point or report error |
| Component spec incomplete | Use defaults + mark for confirmation |
| Code generation fails | Retry once, still fails -> report error |
| Hardcoded values detected | Auto-replace with token references |
| Unknown tech stack | Default to React + CSS Modules |
| Context/Plan file not found | Notify coordinator, request location |
| Critical issue beyond scope | SendMessage fix_required to coordinator |

@@ -1,284 +0,0 @@
# Researcher Role

Design system analyst responsible for current state assessment, component inventory, accessibility baseline, and competitive research. Provides foundation data for downstream designer and reviewer roles.

## Identity

- **Name**: `researcher` | **Tag**: `[researcher]`
- **Task Prefix**: `RESEARCH-*`
- **Responsibility**: Read-only analysis

## Boundaries

### MUST

- Only process `RESEARCH-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[researcher]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within read-only analysis responsibility scope

### MUST NOT

- Execute work outside this role's responsibility scope
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify any files or resources (read-only analysis only)
- Omit `[researcher]` identifier in any output

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| Read | Read | Read files and session data |
| Glob | Search | Find files matching patterns |
| Grep | Search | Search file contents |
| Bash | Read | Execute read-only shell commands |
| Task | Delegate | Delegate to cli-explore-agent, Explore agent |
| Skill | Delegate | Call ui-ux-pro-max for design intelligence |
| WebSearch | External | Search external documentation |
| WebFetch | External | Fetch external resources |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `research_ready` | researcher -> coordinator | Research complete | Summary of findings + file references |
| `research_progress` | researcher -> coordinator | Intermediate update | Current progress status |
| `error` | researcher -> coordinator | Failure | Error details |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "researcher",
  type: <message-type>,
  ref: <artifact-path>
})
```

> `to` and `summary` are auto-defaulted by the tool.

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from researcher --type <message-type> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `RESEARCH-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context Loading + Shared Memory Read

**Loading steps**:

1. Extract session path from task description (pattern: `Session: <path>`)
2. Read role states via team_msg(operation="get_state") from session
3. Load existing component_inventory and accessibility_patterns if available

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Session folder | Task description | Yes |
| Role state | team_msg(operation="get_state", session_id=<session-id>) | Yes |
| Wisdom files | Session/wisdom/ | No |

### Phase 3: Research Execution

Research is divided into 4 analysis streams. Streams 1-3 analyze the codebase, Stream 4 retrieves design intelligence from ui-ux-pro-max.

#### Stream 1: Design System Analysis

**Objective**: Analyze existing design system and styling patterns.

**Tasks**:
- Search for existing design tokens (CSS variables, theme configs, token files)
- Identify styling patterns (CSS-in-JS, CSS modules, utility classes, SCSS)
- Map color palette, typography scale, spacing system
- Find component library usage (MUI, Ant Design, custom, etc.)

**Output**: `design-system-analysis.json`

```json
{
  "existing_tokens": { "colors": [], "typography": [], "spacing": [], "shadows": [] },
  "styling_approach": "css-modules | css-in-js | utility | scss | mixed",
  "component_library": { "name": "", "version": "", "usage_count": 0 },
  "custom_components": [],
  "inconsistencies": [],
  "_metadata": { "timestamp": "..." }
}
```

**Execution**: Delegate to cli-explore-agent subagent.

#### Stream 2: Component Inventory

**Objective**: Discover all UI components in the codebase.

**Tasks**:
- Find all component files
- Identify props/API surface
- Identify states supported (hover, focus, disabled, etc.)
- Check accessibility attributes (ARIA labels, roles, etc.)
- Map dependencies on other components

**Output**: `component-inventory.json`

```json
{
  "components": [{
    "name": "", "path": "", "type": "atom|molecule|organism|template",
    "props": [], "states": [], "aria_attributes": [],
    "dependencies": [], "usage_count": 0
  }],
  "patterns": {
    "naming_convention": "",
    "file_structure": "",
    "state_management": ""
  }
}
```

**Execution**: Delegate to Explore subagent.

#### Stream 3: Accessibility Baseline

**Objective**: Assess current accessibility state.

**Tasks**:
- Check for ARIA attributes usage patterns
- Identify keyboard navigation support
- Check color contrast ratios (if design tokens found)
- Find focus management patterns
- Check semantic HTML usage

**Output**: `accessibility-audit.json`

```json
{
  "wcag_level": "none|partial-A|A|partial-AA|AA",
  "aria_coverage": { "labeled": 0, "total": 0, "percentage": 0 },
  "keyboard_nav": { "supported": [], "missing": [] },
  "contrast_issues": [],
  "focus_management": { "pattern": "", "coverage": "" },
  "semantic_html": { "score": 0, "issues": [] },
  "recommendations": []
}
```

**Execution**: Delegate to Explore subagent.

#### Stream 4: Design Intelligence (ui-ux-pro-max)

**Objective**: Retrieve industry-specific design intelligence.

**Detection**:
- Industry from task description or session config
- Tech stack from package.json

| Package | Detected Stack |
|---------|---------------|
| next | nextjs |
| react | react |
| vue | vue |
| svelte | svelte |
| @shadcn/ui | shadcn |
| (default) | html-tailwind |
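The detection table above can be sketched as an ordered lookup over package.json dependencies. This is a minimal sketch; the precedence order (more specific frameworks checked first, so a Next.js app is not reported as plain "react") is an assumption, as are the function and constant names.

```typescript
// Sketch of tech-stack detection from package.json dependencies.
// Precedence is an assumption: more specific frameworks are checked
// before their underlying libraries (next before react, shadcn before react).
const STACK_MAP: Array<[string, string]> = [
  ["next", "nextjs"],
  ["@shadcn/ui", "shadcn"],
  ["react", "react"],
  ["vue", "vue"],
  ["svelte", "svelte"],
];

function detectStack(deps: Record<string, string>): string {
  for (const [pkg, stack] of STACK_MAP) {
    if (pkg in deps) return stack;
  }
  return "html-tailwind"; // default when no known framework is found
}
```

In practice `deps` would merge `dependencies` and `devDependencies` from the parsed package.json.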
**Execution**: Call Skill(ui-ux-pro-max) with:
1. `--design-system` for design system recommendations
2. `--domain ux` for UX guidelines (accessibility, animation, responsive)
3. `--stack <detected>` for stack-specific guidelines

**Output**: `design-intelligence.json`

```json
{
  "_source": "ui-ux-pro-max-skill | llm-general-knowledge",
  "_generated_at": "...",
  "industry": "...",
  "detected_stack": "...",
  "design_system": { "colors": {}, "typography": {}, "style": "" },
  "ux_guidelines": [],
  "stack_guidelines": {},
  "recommendations": { "anti_patterns": [], "must_have": [] }
}
```

**Degradation**: When ui-ux-pro-max is unavailable, fall back to LLM general knowledge and mark `_source` as `"llm-general-knowledge"`.

### Phase 4: Validation

**Verification checks**:

| File | Check |
|------|-------|
| design-system-analysis.json | Exists and valid JSON |
| component-inventory.json | Exists and has components array |
| accessibility-audit.json | Exists and has wcag_level |
| design-intelligence.json | Exists and has required fields |

**If missing**: Re-run corresponding stream.

**Compile research summary**:

| Metric | Source |
|--------|--------|
| design_system_exists | designAnalysis.component_library?.name |
| styling_approach | designAnalysis.styling_approach |
| total_components | inventory.components?.length |
| accessibility_level | a11yAudit.wcag_level |
| design_intelligence_source | designIntelligence._source |
| anti_patterns_count | designIntelligence.recommendations.anti_patterns?.length |
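The summary compilation can be sketched directly from the Metric/Source table above. A minimal sketch: the `Artifacts` input shape follows the Stream 1-4 JSON outputs, and the optional chaining mirrors the `?.` accessors in the Source column, guarding against streams that produced only partial data.

```typescript
// Sketch: compile the research summary from the four stream artifacts.
// Field shapes follow the Stream 1-4 JSON outputs; optional chaining
// covers streams that failed or produced partial data.
interface Artifacts {
  designAnalysis?: { component_library?: { name?: string }; styling_approach?: string };
  inventory?: { components?: unknown[] };
  a11yAudit?: { wcag_level?: string };
  designIntelligence?: { _source?: string; recommendations?: { anti_patterns?: unknown[] } };
}

function compileSummary(a: Artifacts) {
  return {
    design_system_exists: Boolean(a.designAnalysis?.component_library?.name),
    styling_approach: a.designAnalysis?.styling_approach ?? "unknown",
    total_components: a.inventory?.components?.length ?? 0,
    accessibility_level: a.a11yAudit?.wcag_level ?? "none",
    design_intelligence_source: a.designIntelligence?._source ?? "llm-general-knowledge",
    anti_patterns_count: a.designIntelligence?.recommendations?.anti_patterns?.length ?? 0,
  };
}
```

The fallback defaults ("unknown", "none", the llm-general-knowledge source) are assumptions consistent with the degradation rules in this role.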
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[researcher]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Update shared memory**:
- component_inventory: inventory.components
- accessibility_patterns: a11yAudit.recommendations
- design_intelligence: designIntelligence
- industry_context: { industry, detected_stack }

**Report content**:
- Total components discovered
- Styling approach detected
- Accessibility level assessed
- Component library (if any)
- Design intelligence source
- Anti-patterns count

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No RESEARCH-* tasks available | Idle, wait for coordinator assignment |
| Cannot detect design system | Report as "greenfield", recommend building from scratch |
| Component inventory timeout | Report partial findings + mark unscanned areas |
| Accessibility tools unavailable | Manual spot-check + degraded report |
| ui-ux-pro-max unavailable | Degrade to LLM general knowledge, mark `_source: "llm-general-knowledge"` |
| Session/Plan file not found | Notify coordinator, request location |
| Critical issue beyond scope | SendMessage fix_required to coordinator |

@@ -1,299 +0,0 @@
# Reviewer Role

Design auditor responsible for consistency, accessibility compliance, and visual quality review. Acts as Critic in the designer<->reviewer Generator-Critic loop. Serves as sync point gatekeeper in dual-track pipelines.

## Identity

- **Name**: `reviewer` | **Tag**: `[reviewer]`
- **Task Prefix**: `AUDIT-*`
- **Responsibility**: Read-only analysis (Validation)

## Boundaries

### MUST

- Only process `AUDIT-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[reviewer]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within validation responsibility scope
- Apply industry-specific anti-patterns from design intelligence

### MUST NOT

- Execute work outside this role's responsibility scope
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify any files or resources (read-only analysis only)
- Omit `[reviewer]` identifier in any output

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| Read | Read | Read design artifacts, audit history |
| Glob | Search | Find design and build files |
| Grep | Search | Search patterns in files |
| Bash | Read | Execute read-only shell commands |
| Task | Delegate | Delegate to Explore agent for analysis |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `audit_passed` | reviewer -> coordinator | Score >= 8, no critical issues | Audit report + score, GC converged |
| `audit_result` | reviewer -> coordinator | Score 6-7, non-critical issues | Feedback for GC revision |
| `fix_required` | reviewer -> coordinator | Score < 6, critical issues found | Critical issues list |
| `error` | reviewer -> coordinator | Failure | Error details |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "reviewer",
  type: <message-type>,
  ref: <artifact-path>
})
```

> `to` and `summary` are auto-defaulted by the tool.

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from reviewer --type <message-type> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `AUDIT-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

**Audit type detection**:

| Pattern | Audit Type |
|---------|-----------|
| Subject contains "token" | Token audit |
| Subject contains "component" | Component audit |
| Subject contains "final" | Final audit (cross-cutting) |
| Subject contains "Sync Point" or "sync" | Sync point audit |

### Phase 2: Context Loading + Shared Memory Read

**Loading steps**:

1. Extract session path from task description
2. Read role states via team_msg(operation="get_state"):

| Field | Usage |
|-------|-------|
| audit_history | Previous audit scores for trend |
| design_token_registry | Expected token categories |
| industry_context | Strictness level, must-have features |

3. Read design intelligence for anti-patterns:

| Field | Usage |
|-------|-------|
| recommendations.anti_patterns | Industry-specific violations to check |
| ux_guidelines | Best practices reference |

4. Read design artifacts:

| Artifact | When |
|----------|------|
| design-tokens.json | Token audit, component audit |
| component-specs/*.md | Component audit, final audit |
| build/**/* | Final audit only |

### Phase 3: Audit Execution

#### Audit Dimensions

5 dimensions scored on a 1-10 scale:

| Dimension | Weight | Criteria |
|-----------|--------|----------|
| Consistency | 20% | Token usage, naming conventions, visual uniformity |
| Accessibility | 25% | WCAG AA compliance, ARIA attributes, keyboard nav, contrast |
| Completeness | 20% | All states defined, responsive specs, edge cases |
| Quality | 15% | Token reference integrity, documentation clarity, maintainability |
| Industry Compliance | 20% | Anti-pattern avoidance, UX best practices, design intelligence adherence |

#### Token Audit

**Consistency checks**:
- Naming convention (kebab-case, semantic names)
- Value patterns (consistent units: rem/px/%)
- Theme completeness (light + dark for all colors)

**Accessibility checks**:
- Color contrast ratios (text on background >= 4.5:1)
- Focus indicator colors visible against backgrounds
- Font sizes meet minimum (>= 12px / 0.75rem)
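The contrast check can be sketched with the WCAG 2.x relative-luminance formula. A minimal sketch for 6-digit hex colors: the sRGB linearization constants and the 4.5:1 AA threshold are from the WCAG definition, while the function names are illustrative.

```typescript
// Sketch of the color-contrast check using the WCAG 2.x relative-luminance
// formula. Accepts "#rrggbb" hex colors only, for brevity.
function luminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    // sRGB linearization per the WCAG 2.x definition
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// WCAG AA: normal text needs >= 4.5:1 against its background.
function passesAAText(fg: string, bg: string): boolean {
  return contrastRatio(fg, bg) >= 4.5;
}
```

Each text/background token pair from the registry would be run through `passesAAText`; failures below 3:1 are the CRITICAL accessibility issues defined later in this role.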
**Completeness checks**:
- All token categories present (color, typography, spacing, shadow, border)
- Breakpoints defined
- Semantic color tokens (success, warning, error, info)

**Quality checks**:
- $type metadata present (W3C format)
- Values are valid (CSS-parseable)
- No duplicate definitions

**Industry Compliance checks**:
- Anti-patterns from ui-ux-pro-max not present in design
- UX best practices followed
- Design intelligence recommendations adhered to

#### Component Audit

**Consistency**:
- Token references resolve
- Naming matches convention

**Accessibility**:
- ARIA roles defined
- Keyboard behavior specified
- Focus indicator defined

**Completeness**:
- All 5 states (default/hover/focus/active/disabled)
- Responsive breakpoints specified
- Variants documented

**Quality**:
- Clear descriptions
- Variant system defined
- Interaction specs clear

#### Final Audit (Cross-cutting)

**Token <-> Component consistency**:
- All token references in components resolve to defined tokens
- No hardcoded values in component specs

**Code <-> Design consistency** (if build artifacts exist):
- CSS variables match design tokens
- Component implementation matches spec states
- ARIA attributes implemented as specified

**Cross-component consistency**:
- Consistent spacing patterns
- Consistent color usage for similar elements
- Consistent interaction patterns

#### Score Calculation

```
overallScore = round(
  consistency.score * 0.20 +
  accessibility.score * 0.25 +
  completeness.score * 0.20 +
  quality.score * 0.15 +
  industryCompliance.score * 0.20
)
```

**Issue Severity Classification**:

| Severity | Criteria | GC Impact |
|----------|----------|-----------|
| CRITICAL | Accessibility non-compliant (contrast < 3:1), missing critical states | Blocks GC convergence |
| HIGH | Token reference inconsistent, missing ARIA, partial states | Counts toward GC score |
| MEDIUM | Naming non-standard, incomplete docs, minor style issues | Recommended fix |
| LOW | Code style, optional optimization | Informational |

**Signal Determination**:

| Condition | Signal |
|-----------|--------|
| Score >= 8 AND critical_count === 0 | `audit_passed` (GC CONVERGED) |
| Score >= 6 AND critical_count === 0 | `audit_result` (GC REVISION NEEDED) |
| Score < 6 OR critical_count > 0 | `fix_required` (CRITICAL FIX NEEDED) |
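The weighted score and signal rules can be sketched together; the weights and thresholds are taken directly from the tables above, while the `DimensionScores` shape and function names are assumptions for illustration.

```typescript
// Sketch combining the weighted score formula with the signal thresholds.
interface DimensionScores {
  consistency: number;        // weight 0.20
  accessibility: number;      // weight 0.25
  completeness: number;       // weight 0.20
  quality: number;            // weight 0.15
  industryCompliance: number; // weight 0.20
}

function overallScore(d: DimensionScores): number {
  return Math.round(
    d.consistency * 0.20 +
    d.accessibility * 0.25 +
    d.completeness * 0.20 +
    d.quality * 0.15 +
    d.industryCompliance * 0.20
  );
}

function determineSignal(score: number, criticalCount: number): string {
  if (score >= 8 && criticalCount === 0) return "audit_passed"; // GC converged
  if (score >= 6 && criticalCount === 0) return "audit_result"; // GC revision needed
  return "fix_required";                                        // critical fix needed
}
```

Note that a single CRITICAL issue forces `fix_required` regardless of how high the weighted score is.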
#### Audit Report Generation

**Output**: `audit/audit-{NNN}.md`

**Report Structure**:

| Section | Content |
|---------|---------|
| Summary | Overall score, signal, critical/high/medium counts |
| Sync Point Status | (If sync point) PASSED/BLOCKED |
| Dimension Scores | Table with score/weight/weighted per dimension |
| Critical Issues | Description, location, fix suggestion |
| High Issues | Description, fix suggestion |
| Medium Issues | Description |
| Recommendations | Improvement suggestions |
| GC Loop Status | Signal, action required |

### Phase 4: Validation

**Verification checks**:

| Check | Method |
|-------|--------|
| Audit report written | Read audit file exists |
| Score valid | 1-10 range |
| Signal valid | One of: audit_passed, audit_result, fix_required |

**Trend analysis** (if audit_history exists):

| Comparison | Trend |
|------------|-------|
| current > previous | improving |
| current = previous | stable |
| current < previous | declining |

Include trend in report.

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[reviewer]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Update shared memory**:

| Field | Update |
|-------|--------|
| audit_history | Append { audit_id, score, critical_count, signal, is_sync_point, timestamp } |

**Message content**:
- Audit number
- Score
- Signal
- Critical/High issue counts
- Sync point status (if applicable)
- Issues requiring fix (if not passed)

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No AUDIT-* tasks available | Idle, wait for coordinator assignment |
| Design files not found | Report error, notify coordinator |
| Token format unparseable | Degrade to text review |
| Audit dimension cannot be assessed | Mark as N/A, exclude from score |
| Anti-pattern check fails | Mark Industry Compliance as N/A |
| Context/Plan file not found | Notify coordinator, request location |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
@@ -11,21 +11,23 @@ Deep collaborative analysis team skill. Splits monolithic analysis into 5-role c

## Architecture

```
+-------------------------------------------------------------+
|  Skill(skill="team-ultra-analyze")                          |
|  args="topic description" or args="--role=xxx"              |
+----------------------------+--------------------------------+
                             | Role Router
                +---- --role present? ----+
                | NO                      | YES
                v                         v
       Orchestration Mode           Role Dispatch
       (auto -> coordinator)        (route to role.md)
                |
          +-----+------+----------+-----------+
          v     v      v          v           v
  coordinator explorer analyst discussant synthesizer
            EXPLORE-* ANALYZE-* DISCUSS-*  SYNTH-*

+---------------------------------------------------+
|  Skill(skill="team-ultra-analyze")                |
|  args="<topic-description>"                       |
+-------------------+-------------------------------+
                    |
     Orchestration Mode (auto -> coordinator)
                    |
            Coordinator (inline)
          Phase 0-5 orchestration
                    |
        +-------+-------+-------+-------+
        v       v       v       v
      [tw]    [tw]    [tw]    [tw]
    explorer analyst discussant synthesizer

[tw] = team-worker agent
```

## Command Architecture
@@ -65,13 +67,13 @@ Parse `$ARGUMENTS` to extract `--role` and optional `--agent-name`. If `--role` 

### Role Registry

| Role | File | Task Prefix | Type | Compact |
|------|------|-------------|------|---------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | orchestrator | **compress: must re-read** |
| explorer | [roles/explorer/role.md](roles/explorer/role.md) | EXPLORE-* | parallel worker | compress: must re-read |
| analyst | [roles/analyst/role.md](roles/analyst/role.md) | ANALYZE-* | parallel worker | compress: must re-read |
| discussant | [roles/discussant/role.md](roles/discussant/role.md) | DISCUSS-* | pipeline | compress: must re-read |
| synthesizer | [roles/synthesizer/role.md](roles/synthesizer/role.md) | SYNTH-* | pipeline | compress: must re-read |

| Role | Spec | Task Prefix | Inner Loop |
|------|------|-------------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | - |
| explorer | [role-specs/explorer.md](role-specs/explorer.md) | EXPLORE-* | false |
| analyst | [role-specs/analyst.md](role-specs/analyst.md) | ANALYZE-* | false |
| discussant | [role-specs/discussant.md](role-specs/discussant.md) | DISCUSS-* | false |
| synthesizer | [role-specs/synthesizer.md](role-specs/synthesizer.md) | SYNTH-* | false |

> **COMPACT PROTECTION**: Role files are execution documents, not reference material. When context compression occurs and role instructions are reduced to summaries, you **must immediately `Read` the corresponding role.md to reload before continuing execution**. Never execute any Phase based on compressed summaries alone.
@@ -338,51 +340,43 @@ Beat 1 2 3 3a... 4

## Coordinator Spawn Template

When the coordinator spawns workers, use the `team-worker` agent with a role-spec path, in background mode (Spawn-and-Stop pattern). The coordinator determines depth (number of parallel agents) based on selected perspectives.

### Worker Spawn (all roles)

**Phase 1 - Spawn Explorers**: Create depth explorer team-worker agents in parallel (EXPLORE-1 through EXPLORE-depth). Each receives its assigned perspective/domain and agent name for task matching. All are spawned with run_in_background:true; the coordinator stops after spawning and waits for callbacks.

**Phase 2 - Spawn Analysts**: After all explorers complete, create depth analyst team-worker agents in parallel (ANALYZE-1 through ANALYZE-depth). Each analyst receives the perspective matching its corresponding explorer. Coordinator stops.

**Phase 3 - Spawn Discussant**: After all analysts complete, create 1 discussant. It processes all analysis results and presents findings to the user. Coordinator stops.

**Phase 3a - Discussion Loop** (Deep mode only): Based on user feedback, the coordinator may create additional ANALYZE-fix and DISCUSS tasks. The loop continues until the user is satisfied or 5 rounds are reached.

**Phase 4 - Spawn Synthesizer**: After the final discussion round, create 1 synthesizer. It integrates all explorations, analyses, and discussions into final conclusions. Coordinator stops.

**Quick mode exception**: When depth=1, spawn a single explorer, analyst, discussant, and synthesizer without numbered suffixes.

**Single spawn template** (worker template used for all roles):

```diff
 Task({
-  subagent_type: "general-purpose",
+  subagent_type: "team-worker",
   description: "Spawn <role> worker",
-  team_name: <team-name>,
+  team_name: "ultra-analyze",
   name: "<agent-name>",
   run_in_background: true,
-  prompt: `You are team "<team-name>" <ROLE> (<agent-name>).
-Your agent name is "<agent-name>", use it for task discovery owner matching.
-
-## Primary Instruction
-All work must be executed by calling Skill for role definition:
-Skill(skill="team-ultra-analyze", args="--role=<role> --agent-name=<agent-name>")
-
-Current topic: <task-description>
-Session: <session-folder>
-
-## Role Rules
-- Only process <PREFIX>-* tasks where owner === "<agent-name>"
-- All output prefixed with [<role>] identifier
-- Communicate only with coordinator
-- Do not use TaskCreate for other roles
-- Call mcp__ccw-tools__team_msg before every SendMessage
-
-## Workflow
-1. Call Skill -> load role definition and execution logic
-2. Execute role.md 5-Phase process
-3. team_msg + SendMessage result to coordinator
-4. TaskUpdate completed -> check next task`
+  prompt: `## Role Assignment
+role: <role>
+role_spec: .claude/skills/team-ultra-analyze/role-specs/<role>.md
+session: <session-folder>
+session_id: <session-id>
+team_name: ultra-analyze
+requirement: <topic-description>
+agent_name: <agent-name>
+inner_loop: false
+
+Read role_spec file to load Phase 2-4 domain instructions.
+Execute built-in Phase 1 (task discovery, owner=<agent-name>) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`
 })
```
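On the worker side, the `## Role Assignment` fields in this prompt are plain `key: value` lines; a minimal parsing sketch (hypothetical helper, not part of the skill itself):

```javascript
// Hypothetical helper: parse the "key: value" assignment fields from a worker prompt.
function parseAssignment(prompt) {
  const fields = {}
  for (const line of prompt.split('\n')) {
    const m = line.match(/^(role|role_spec|session|session_id|team_name|requirement|agent_name|inner_loop):\s*(.+)$/)
    if (m) fields[m[1]] = m[2].trim()
  }
  return fields
}
```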

@@ -419,6 +413,31 @@ Session: <session-folder>
```
|  +-- issues.md
```

## Completion Action

When the pipeline completes (all tasks done, coordinator Phase 5):

```
AskUserQuestion({
  questions: [{
    question: "Ultra-Analyze pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
      { label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
      { label: "Export Results", description: "Export deliverables to a specified location, then clean" }
    ]
  }]
})
```

| Choice | Action |
|--------|--------|
| Archive & Clean | Update session status="completed" -> TeamDelete(ultra-analyze) -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions: `Skill(skill="team-ultra-analyze", args="resume")` |
| Export Results | AskUserQuestion for target path -> copy deliverables -> Archive & Clean |

## Session Resume

Coordinator supports `--resume` / `--continue` for interrupted sessions:
@@ -1,210 +0,0 @@
# Command: analyze

> Multi-perspective deep analysis via CLI. Builds on exploration results, runs deep analysis through CLI tools, and produces structured insights.

## When to Use

- Phase 3 of Analyst
- Exploration results are ready and deep analysis is needed
- Triggered once per ANALYZE-* task

**Trigger conditions**:
- After Analyst Phase 2 completes (context loaded)
- ANALYZE-fix tasks created during direction adjustment

## Strategy

### Delegation Mode

**Mode**: CLI (analysis executed via ccw cli, Bash run_in_background: true)

### Decision Logic

```javascript
// Select CLI tool and analysis template based on perspective
function buildAnalysisConfig(perspective, isDirectionFix) {
  const configs = {
    'technical': {
      tool: 'gemini',
      rule: 'analysis-analyze-code-patterns',
      focus: 'Implementation patterns, code quality, technical debt, feasibility',
      tasks: [
        'Analyze code structure and organization patterns',
        'Identify technical debt and anti-patterns',
        'Evaluate error handling and edge cases',
        'Assess testing coverage and quality'
      ]
    },
    'architectural': {
      tool: 'claude',
      rule: 'analysis-review-architecture',
      focus: 'System design, scalability, component coupling, boundaries',
      tasks: [
        'Evaluate module boundaries and coupling',
        'Analyze data flow and component interactions',
        'Assess scalability and extensibility',
        'Review design pattern usage and consistency'
      ]
    },
    'business': {
      tool: 'codex',
      rule: 'analysis-analyze-code-patterns',
      focus: 'Business logic, domain models, value delivery, stakeholder impact',
      tasks: [
        'Map business logic to code implementation',
        'Identify domain model completeness',
        'Evaluate business rule enforcement',
        'Assess impact on stakeholders and users'
      ]
    },
    'domain_expert': {
      tool: 'gemini',
      rule: 'analysis-analyze-code-patterns',
      focus: 'Domain-specific patterns, standards compliance, best practices',
      tasks: [
        'Compare against domain best practices',
        'Check standards and convention compliance',
        'Identify domain-specific anti-patterns',
        'Evaluate domain model accuracy'
      ]
    }
  }

  // Copy the selected entry so direction-fix overrides don't mutate the shared table
  const config = { ...(configs[perspective] || configs['technical']) }

  if (isDirectionFix) {
    config.rule = 'analysis-diagnose-bug-root-cause'
    config.tasks = [
      'Re-analyze from adjusted perspective',
      'Identify previously missed patterns',
      'Generate new insights from fresh angle',
      'Update discussion points based on direction change'
    ]
  }

  return config
}
```

## Execution Steps

### Step 1: Context Preparation

```javascript
const config = buildAnalysisConfig(perspective, isDirectionFix)

// Build exploration context summary
const explorationSummary = `
PRIOR EXPLORATION CONTEXT:
- Key files: ${(explorationContext.relevant_files || []).slice(0, 8).map(f => f.path || f).join(', ')}
- Patterns found: ${(explorationContext.patterns || []).slice(0, 5).join('; ')}
- Key findings: ${(explorationContext.key_findings || []).slice(0, 5).join('; ')}
- Questions from exploration: ${(explorationContext.questions_for_analysis || []).slice(0, 3).join('; ')}`
```

### Step 2: Execute CLI Analysis

```javascript
const cliPrompt = `PURPOSE: ${isDirectionFix
  ? `Supplementary analysis with adjusted focus on "${adjustedFocus}" for topic "${topic}"`
  : `Deep analysis of "${topic}" from ${perspective} perspective`}
Success: ${isDirectionFix
  ? 'New insights from adjusted direction with clear evidence'
  : 'Actionable insights with confidence levels and evidence references'}

${explorationSummary}

TASK:
${config.tasks.map(t => `• ${t}`).join('\n')}
• Generate structured findings with confidence levels (high/medium/low)
• Identify discussion points requiring user input
• List open questions needing further exploration

MODE: analysis
CONTEXT: @**/* | Topic: ${topic}
EXPECTED: JSON-structured analysis with sections: key_insights (with confidence), key_findings (with evidence), discussion_points, open_questions, recommendations (with priority)
CONSTRAINTS: Focus on ${perspective} perspective | ${dimensions.join(', ')} dimensions${isDirectionFix ? ` | Adjusted focus: ${adjustedFocus}` : ''}`

Bash({
  command: `ccw cli -p "${cliPrompt}" --tool ${config.tool} --mode analysis --rule ${config.rule}`,
  run_in_background: true
})

// ⚠️ STOP POINT: Wait for CLI callback before continuing
```

### Step 3: Result Processing

```javascript
// After the CLI result returns, parse and structure it
const outputPath = `${sessionFolder}/analyses/analysis-${analyzeNum}.json`

// Extract structured data from the CLI output
// (CLI output is usually markdown and must be parsed into JSON)
const analysisResult = {
  perspective,
  dimensions,
  is_direction_fix: isDirectionFix,
  adjusted_focus: adjustedFocus || null,
  key_insights: [
    // Extracted from CLI output, each {insight, confidence, evidence}
  ],
  key_findings: [
    // Concrete findings {finding, file_ref, impact}
  ],
  discussion_points: [
    // Discussion points requiring user input
  ],
  open_questions: [
    // Unresolved questions
  ],
  recommendations: [
    // {action, rationale, priority}
  ],
  _metadata: {
    cli_tool: config.tool,
    cli_rule: config.rule,
    perspective,
    is_direction_fix: isDirectionFix,
    timestamp: new Date().toISOString()
  }
}

Write(outputPath, JSON.stringify(analysisResult, null, 2))
```

## Output Format

```json
{
  "perspective": "technical",
  "dimensions": ["architecture", "implementation"],
  "is_direction_fix": false,
  "key_insights": [
    {"insight": "Authentication uses stateless JWT", "confidence": "high", "evidence": "src/auth/jwt.ts:L42"}
  ],
  "key_findings": [
    {"finding": "No rate limiting on login endpoint", "file_ref": "src/routes/auth.ts:L15", "impact": "Security risk"}
  ],
  "discussion_points": [
    "Should we implement token rotation for refresh tokens?"
  ],
  "open_questions": [
    "What is the expected concurrent user load?"
  ],
  "recommendations": [
    {"action": "Add rate limiting to auth endpoints", "rationale": "Prevent brute force attacks", "priority": "high"}
  ],
  "_metadata": {"cli_tool": "gemini", "cli_rule": "analysis-analyze-code-patterns", "timestamp": "..."}
}
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| CLI tool unavailable | Try fallback: gemini → codex → claude |
| CLI timeout | Retry with shorter prompt, or use exploration results directly |
| CLI returns empty | Use exploration findings as-is, note analysis gap |
| Invalid CLI output | Extract what's parseable, fill gaps with defaults |
| Exploration context missing | Analyze with topic keywords only |
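The fallback chain in the first row can be expressed as a small helper (sketch only; tool names come from this document):

```javascript
// Sketch: walk the gemini -> codex -> claude fallback chain after a tool fails.
function nextCliTool(failedTool) {
  const chain = ['gemini', 'codex', 'claude']
  const i = chain.indexOf(failedTool)
  // Return the next tool in the chain, or null when the chain is exhausted
  return i >= 0 && i < chain.length - 1 ? chain[i + 1] : null
}
```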

@@ -1,251 +0,0 @@
# Analyst Role

Deep analyst. Builds on the explorer's codebase exploration results, runs multi-perspective deep analysis via CLI, and produces structured insights and discussion points.

## Identity

- **Name**: `analyst` | **Tag**: `[analyst]`
- **Task Prefix**: `ANALYZE-*`
- **Responsibility**: Read-only deep analysis

## Boundaries

### MUST

- Only process `ANALYZE-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[analyst]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within the deep-analysis responsibility scope
- Base analysis on explorer exploration results
- Share analysis results via team_msg(type='state_update')

### MUST NOT

- Execute codebase exploration (belongs to explorer)
- Handle user feedback (belongs to discussant)
- Generate final conclusions (belongs to synthesizer)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Communicate directly with other worker roles
- Modify source code
- Omit the `[analyst]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `analyze` | [commands/analyze.md](commands/analyze.md) | Phase 3 | Multi-perspective deep analysis via CLI |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Bash` | CLI | analyze.md | Execute ccw cli for analysis |
| `Read` | File | analyst | Read exploration results and session context |
| `Write` | File | analyst | Write analysis results |
| `Glob` | File | analyst | Find exploration/analysis files |

### CLI Tools

| CLI Tool | Mode | Used By | Purpose |
|----------|------|---------|---------|
| `gemini` | analysis | analyze.md | Technical/domain analysis |
| `codex` | analysis | analyze.md | Business-perspective analysis |
| `claude` | analysis | analyze.md | Architecture-perspective analysis |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `analysis_ready` | analyst → coordinator | Analysis complete | Contains insights, discussion points, open questions |
| `error` | analyst → coordinator | Analysis failed | Blocking error |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "analyst",
  type: "analysis_ready",
  ref: "<output-path>"
})
```

> `to` and `summary` are auto-defaulted by the tool.

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from analyst --type analysis_ready --ref <path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `ANALYZE-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

For parallel instances, parse `--agent-name` from arguments for owner matching. Falls back to `analyst` for single-instance roles.

### Phase 2: Context Loading

**Loading steps**:

1. Extract session path from task description
2. Extract topic, perspective, dimensions from task metadata
3. Check for direction-fix type (supplementary analysis)
4. Read role states via team_msg(operation="get_state") for existing context
5. Read corresponding exploration results

**Context extraction**:

| Field | Source | Pattern |
|-------|--------|---------|
| sessionFolder | task description | `session:\s*(.+)` |
| topic | task description | `topic:\s*(.+)` |
| perspective | task description | `perspective:\s*(.+)` or default "technical" |
| dimensions | task description | `dimensions:\s*(.+)` or default "general" |
| isDirectionFix | task description | `type:\s*direction-fix` |
| adjustedFocus | task description | `adjusted_focus:\s*(.+)` |

**Exploration context loading**:

| Condition | Source |
|-----------|--------|
| Direction fix | Read ALL exploration files, merge context |
| Normal analysis | Read exploration file matching ANALYZE-N number |
| Fallback | Read first available exploration file |
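The selection order above, sketched as a helper (hypothetical; assumes exploration filenames contain `exploration-<n>`):

```javascript
// Hypothetical helper: choose which exploration files feed an ANALYZE-N task.
function pickExplorationFiles(files, analyzeNum, isDirectionFix) {
  if (isDirectionFix) return files // direction fix: merge all exploration context
  const match = files.find(f => f.includes(`exploration-${analyzeNum}`))
  if (match) return [match]        // normal analysis: the matching file
  return files.slice(0, 1)         // fallback: first available file
}
```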

**CLI tool selection**:

| Perspective | CLI Tool |
|-------------|----------|
| technical | gemini |
| architectural | claude |
| business | codex |
| domain_expert | gemini |

### Phase 3: Deep Analysis via CLI

Delegate to `commands/analyze.md` if available, otherwise execute inline.

**Analysis prompt structure** (Direction Fix):

```
PURPOSE: Supplementary analysis - direction adjusted to "<adjusted_focus>"
Success: In-depth insights targeting the new direction

PRIOR EXPLORATION CONTEXT:
- Key files: <top 5 files from exploration>
- Patterns: <top 3 patterns>
- Previous findings: <top 3 findings>

TASK:
- Focus analysis on: <adjusted_focus>
- Build on previous exploration findings
- Identify new insights from adjusted perspective
- Generate discussion points for user

MODE: analysis
CONTEXT: @**/* | Topic: <topic>
EXPECTED: Structured analysis with adjusted focus, new insights, updated discussion points
CONSTRAINTS: Focus on <adjusted_focus>
```

**Analysis prompt structure** (Normal):

```
PURPOSE: Analyze topic '<topic>' from <perspective> perspective across <dimensions> dimensions
Success: Actionable insights with clear reasoning and evidence

PRIOR EXPLORATION CONTEXT:
- Key files: <top 5 files from exploration>
- Patterns found: <top 3 patterns>
- Key findings: <top 3 findings>

TASK:
- Build on exploration findings above
- Analyze from <perspective> perspective: <dimensions>
- Identify patterns, anti-patterns, and opportunities
- Generate discussion points for user clarification
- Assess confidence level for each insight

MODE: analysis
CONTEXT: @**/* | Topic: <topic>
EXPECTED: Structured analysis with: key insights (with confidence), discussion points, open questions, recommendations with rationale
CONSTRAINTS: Focus on <dimensions> | <perspective> perspective
```

**CLI execution**:

```
Bash({
  command: "ccw cli -p \"<analysis-prompt>\" --tool <cli-tool> --mode analysis",
  run_in_background: true
})

// STOP POINT: Wait for CLI callback
```

### Phase 4: Result Aggregation

**Analysis output structure**:

| Field | Description |
|-------|-------------|
| perspective | Analysis perspective |
| dimensions | Analysis dimensions |
| is_direction_fix | Boolean for direction-fix mode |
| adjusted_focus | Focus area if direction fix |
| key_insights | Main insights with confidence levels |
| key_findings | Specific findings |
| discussion_points | Points for user discussion |
| open_questions | Unresolved questions |
| recommendations | Actionable recommendations |
| evidence | Supporting evidence references |

**Output path**: `<session-folder>/analyses/analysis-<num>.json`

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[analyst]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Shared memory update**:

```
sharedMemory.analyses.push({
  id: "analysis-<num>",
  perspective: <perspective>,
  is_direction_fix: <boolean>,
  insight_count: <count>,
  finding_count: <count>,
  timestamp: <timestamp>
})
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No ANALYZE-* tasks available | Idle, wait for coordinator assignment |
| CLI tool unavailable | Fallback chain: gemini -> codex -> claude |
| No exploration results found | Analyze with topic keywords only, note limitation |
| CLI timeout | Use partial results, report incomplete |
| Invalid exploration JSON | Skip context, analyze from scratch |
| Command file not found | Fall back to inline execution |
@@ -1,461 +1,324 @@
# Command: monitor

> Stage-driven coordination loop + discussion loop. Waits for workers to finish in pipeline-stage order, drives the discussion loop, and triggers the final synthesis.

## When to Use

- Phase 4 of Coordinator
- Task chain has been created and dispatched
- Continuous monitoring is needed until all tasks complete

**Trigger conditions**:
- Starts immediately after dispatch completes
- Re-entered after the discussion loop creates new tasks

## Strategy

### Delegation Mode

**Mode**: Stage-driven (wait stage by stage, no polling) + Discussion-loop (discussion loop driven by the coordinator)

### Design Principles

> **Model execution has no concept of time; any form of polling wait is forbidden.**
>
> - ❌ Forbidden: `while` loop + `sleep` + status checks (idle spinning wastes API turns)
> - ❌ Forbidden: `Bash(sleep N)` / `Bash(timeout /t N)` as a waiting mechanism
> - ✅ Use: synchronous `Task()` calls (`run_in_background: false`); the call itself is the wait
> - ✅ Use: worker return = stage-completion signal (a natural callback)
>
> **Rationale**: `Task(run_in_background: false)` is a blocking call; the coordinator suspends automatically until the worker returns.
> No sleep, no polling, no message-bus monitoring. The worker's return is the callback.

### Decision Logic

```javascript
// Message routing table
const routingTable = {
  // Explorer done
  'exploration_ready': { action: 'Mark EXPLORE complete, unblock ANALYZE' },
  // Analyst done
  'analysis_ready': { action: 'Mark ANALYZE complete, unblock DISCUSS or SYNTH' },
  // Discussant done
  'discussion_processed': { action: 'Mark DISCUSS complete, trigger user feedback collection', special: 'discussion_feedback' },
  // Synthesizer done
  'synthesis_ready': { action: 'Mark SYNTH complete, prepare final report', special: 'finalize' },
  // Error
  'error': { action: 'Assess severity, retry or escalate', special: 'error_handler' }
}
```

# Command: Monitor

Handle all coordinator monitoring events: worker callbacks, status checks, pipeline advancement, discussion loop control, and completion.

## Constants

| Key | Value |
|-----|-------|
| SPAWN_MODE | background |
| ONE_STEP_PER_INVOCATION | true |
| WORKER_AGENT | team-worker |
| MAX_DISCUSSION_ROUNDS_QUICK | 0 |
| MAX_DISCUSSION_ROUNDS_STANDARD | 1 |
| MAX_DISCUSSION_ROUNDS_DEEP | 5 |

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session state | `<session>/session.json` | Yes |
| Task list | `TaskList()` | Yes |
| Trigger event | From Entry Router detection | Yes |
| Pipeline mode | From session.json `pipeline_mode` | Yes |
| Discussion round | From session.json `discussion_round` | Yes |

1. Load session.json for current state, `pipeline_mode`, `discussion_round`
2. Run `TaskList()` to get current task statuses
3. Identify trigger event type from Entry Router
4. Compute max discussion rounds from pipeline mode:

```
MAX_ROUNDS = pipeline_mode === 'deep' ? 5
           : pipeline_mode === 'standard' ? 1
           : 0
```
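The same mode-to-rounds mapping as a checked helper (sketch; values match the constants above):

```javascript
// Sketch: max discussion rounds per pipeline mode (quick mode has no discussion loop).
function maxDiscussionRounds(pipelineMode) {
  return pipelineMode === 'deep' ? 5
       : pipelineMode === 'standard' ? 1
       : 0
}
```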
|
||||
|
||||
### Stage-Worker 映射表
|
||||
## Phase 3: Event Handlers
|
||||
|
||||
```javascript
|
||||
const STAGE_WORKER_MAP = {
|
||||
'EXPLORE': { role: 'explorer', skillArgs: '--role=explorer' },
|
||||
'ANALYZE': { role: 'analyst', skillArgs: '--role=analyst' },
|
||||
'DISCUSS': { role: 'discussant', skillArgs: '--role=discussant' },
|
||||
'SYNTH': { role: 'synthesizer', skillArgs: '--role=synthesizer' }
|
||||
}
|
||||
### handleCallback
|
||||
|
||||
// ★ 统一 auto mode 检测
|
||||
const autoYes = /\b(-y|--yes)\b/.test(args)
|
||||
Triggered when a worker sends completion message (via SendMessage callback).
|
||||
|
||||
1. Parse message to identify role and task ID:
|
||||
|
||||
| Message Pattern | Role Detection |
|
||||
|----------------|---------------|
|
||||
| `[explorer]` or task ID `EXPLORE-*` | explorer |
|
||||
| `[analyst]` or task ID `ANALYZE-*` | analyst |
|
||||
| `[discussant]` or task ID `DISCUSS-*` | discussant |
|
||||
| `[synthesizer]` or task ID `SYNTH-*` | synthesizer |
|
||||
|
||||
2. Mark task as completed:
|
||||
|
||||
```
|
||||
TaskUpdate({ taskId: "<task-id>", status: "completed" })
|
||||
```
|
||||
|
||||
## Execution Steps
|
||||
3. Record completion in session state via team_msg
|
||||
|
||||
### Step 1: Context Preparation
|
||||
4. **Role-specific post-completion logic**:
|
||||
|
||||
```javascript
|
||||
// 从 role state 获取当前状态
|
||||
const sharedMemory = mcp__ccw-tools__team_msg({ operation: "get_state", session_id: sessionId })
|
||||
| Completed Role | Pipeline Mode | Post-Completion Action |
|
||||
|---------------|---------------|------------------------|
|
||||
| explorer | all | Log: exploration ready. Proceed to handleSpawnNext |
|
||||
| analyst | all | Log: analysis ready. Proceed to handleSpawnNext |
|
||||
| discussant | all | **Discussion feedback gate** (see below) |
|
||||
| synthesizer | all | Proceed to handleComplete |
|
||||
|
||||
let discussionRound = 0
|
||||
const MAX_DISCUSSION_ROUNDS = pipelineMode === 'deep' ? 5 : (pipelineMode === 'standard' ? 1 : 0)
|
||||
5. **Discussion Feedback Gate** (when discussant completes):

When a DISCUSS-* task completes, the coordinator collects user feedback BEFORE spawning the next task. This replaces any while-loop pattern.

```
// Read current discussion_round from session state
discussion_round = session.discussion_round || 0
discussion_round++

// Update session state
Update session.json: discussion_round = discussion_round

// Check if discussion loop applies
IF pipeline_mode === 'quick':
  // No discussion in quick mode -- proceed to handleSpawnNext (SYNTH)
  -> handleSpawnNext

ELSE IF discussion_round >= MAX_ROUNDS:
  // Reached max rounds -- force proceed to synthesis
  Log: "Max discussion rounds reached, proceeding to synthesis"
  IF no SYNTH-001 task exists:
    Create SYNTH-001 task blocked by last DISCUSS task
  -> handleSpawnNext
```

### Step 2: Sequential Stage Execution (Stop-Wait) — Exploration + Analysis

> **Core idea**: spawn workers stage by stage and block synchronously until each returns.
> Worker return = stage complete. No sleep, no polling, no message-bus monitoring.

```javascript
// Get the pipeline stage list (from the task chain created by dispatch)
const allTasks = TaskList()
const pipelineTasks = allTasks
  .filter(t => t.owner && t.owner !== 'coordinator')
  .sort((a, b) => Number(a.id) - Number(b.id))

// Process the EXPLORE and ANALYZE stages
const preDiscussionTasks = pipelineTasks.filter(t =>
  t.subject.startsWith('EXPLORE-') || t.subject.startsWith('ANALYZE-')
)

for (const stageTask of preDiscussionTasks) {
  // 1. Extract the stage prefix -> determine the worker role
  const stagePrefix = stageTask.subject.match(/^(\w+)-/)?.[1]
  const workerConfig = STAGE_WORKER_MAP[stagePrefix]

  if (!workerConfig) continue

  // 2. Mark the task as in progress
  TaskUpdate({ taskId: stageTask.id, status: 'in_progress' })

  mcp__ccw-tools__team_msg({
    operation: "log", session_id: sessionId, from: "coordinator",
    to: workerConfig.role, type: "task_unblocked",
    summary: `[coordinator] Starting stage: ${stageTask.subject} → ${workerConfig.role}`
  })

  // 3. Spawn the worker synchronously -- blocks until the worker returns (the Stop-Wait core)
  const workerResult = Task({
    subagent_type: "team-worker",
    description: `Spawn ${workerConfig.role} worker for ${stageTask.subject}`,
    team_name: teamName,
    name: workerConfig.role,
    prompt: `## Role Assignment
role: ${workerConfig.role}
role_spec: .claude/skills/team-ultra-analyze/role-specs/${workerConfig.role}.md
session: ${sessionFolder}
session_id: ${sessionId}
team_name: ${teamName}
requirement: ${stageTask.description || taskDescription}
inner_loop: false

## Current Task
- Task ID: ${stageTask.id}
- Task: ${stageTask.subject}

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`,
    run_in_background: false
  })

  // 4. Worker has returned -- check the result
  const taskState = TaskGet({ taskId: stageTask.id })

  if (taskState.status !== 'completed') {
    handleStageTimeout(stageTask, 0, autoYes)
  } else {
    mcp__ccw-tools__team_msg({
      operation: "log", session_id: sessionId, from: "coordinator",
      to: "user", type: "quality_gate",
      summary: `[coordinator] Stage complete: ${stageTask.subject}`
    })
  }
}
```

### Step 2.1: Update discussion.md with Round 1

```javascript
// Read all exploration and analysis results
const explorationFiles = Glob({ pattern: `${sessionFolder}/explorations/*.json` })
const analysisFiles = Glob({ pattern: `${sessionFolder}/analyses/*.json` })

const explorations = explorationFiles.map(f => JSON.parse(Read(f)))
const analyses = analysisFiles.map(f => JSON.parse(Read(f)))

// Update discussion.md -- Round 1
const round1Content = `
### Round 1 - Initial Exploration & Analysis (${new Date().toISOString()})

#### Exploration Results
${explorations.map(e => `- **${e.perspective || 'general'}**: ${e.key_findings?.slice(0, 3).join('; ') || 'No findings'}`).join('\n')}

#### Analysis Results
${analyses.map(a => `- **${a.perspective || 'general'}**: ${a.key_insights?.slice(0, 3).join('; ') || 'No insights'}`).join('\n')}

#### Key Findings
${analyses.flatMap(a => a.key_findings || []).slice(0, 5).map(f => `- ${f}`).join('\n')}

#### Discussion Points
${analyses.flatMap(a => a.discussion_points || []).slice(0, 5).map(p => `- ${p}`).join('\n')}

#### Decision Log
> **Decision**: Selected ${pipelineMode} pipeline with ${explorations.length} exploration(s) and ${analyses.length} analysis perspective(s)
> - **Context**: Topic analysis and user preference
> - **Chosen**: ${pipelineMode} mode — **Reason**: ${pipelineMode === 'quick' ? 'Fast overview requested' : pipelineMode === 'deep' ? 'Thorough analysis needed' : 'Balanced depth and breadth'}
`

Edit({
  file_path: `${sessionFolder}/discussion.md`,
  old_string: '## Discussion Timeline\n',
  new_string: `## Discussion Timeline\n${round1Content}\n`
})
```

### Step 3: Discussion Loop (Standard/Deep mode)

```javascript
if (MAX_DISCUSSION_ROUNDS === 0) {
  // Quick mode: skip discussion, go to synthesis
  createSynthesisTask(sessionFolder, [lastAnalyzeTaskId])
} else {
  // Wait for initial DISCUSS-001 to complete
  // Then enter discussion loop

  while (discussionRound < MAX_DISCUSSION_ROUNDS) {
    // Wait for the current DISCUSS task to complete (Stop-Wait: spawn discussant worker)
    const currentDiscussId = `DISCUSS-${String(discussionRound + 1).padStart(3, '0')}`
    const discussTask = pipelineTasks.find(t => t.subject.startsWith(currentDiscussId))
    if (discussTask) {
      TaskUpdate({ taskId: discussTask.id, status: 'in_progress' })
      const discussResult = Task({
        subagent_type: "team-worker",
        description: `Spawn discussant worker for ${discussTask.subject}`,
        team_name: teamName,
        name: "discussant",
        prompt: `## Role Assignment
role: discussant
role_spec: .claude/skills/team-ultra-analyze/role-specs/discussant.md
session: ${sessionFolder}
session_id: ${sessionId}
team_name: ${teamName}
requirement: Discussion round ${discussionRound + 1}
inner_loop: false

## Current Task
- Task ID: ${discussTask.id}
- Task: ${discussTask.subject}

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`,
        run_in_background: false
      })
    }

    // Collect user feedback
    const feedbackResult = AskUserQuestion({
      questions: [{
        question: `Discussion round ${discussionRound + 1} results are ready. Choose the next step:`,
        header: "Discussion Feedback",
        multiSelect: false,
        options: [
          { label: "Continue deeper", description: "Current direction is good, go deeper" },
          { label: "Adjust direction", description: "Shift analysis focus" },
          { label: "Done", description: "Sufficient depth, proceed to synthesis" },
          { label: "Specific questions", description: "Specific questions need answers" }
        ]
      }]
    })

    const feedback = feedbackResult["Discussion Feedback"]

    // 📌 Record user feedback to decision_trail
    const latestMemory = mcp__ccw-tools__team_msg({ operation: "get_state", session_id: sessionId })
    latestMemory.decision_trail = latestMemory.decision_trail || []
    latestMemory.decision_trail.push({
      round: discussionRound + 1,
      decision: feedback,
      context: `User feedback at discussion round ${discussionRound + 1}`,
      timestamp: new Date().toISOString()
    })
    mcp__ccw-tools__team_msg({
      operation: "log", session_id: sessionId, from: "coordinator",
      type: "state_update",
      data: { decision_trail: latestMemory.decision_trail }
    })

    if (feedback === "Done") {
      // 📌 Record completion decision
      appendToDiscussion(sessionFolder, discussionRound + 1, {
        user_input: "Done",
        decision: "Exit discussion loop, proceed to synthesis",
        reason: "User satisfied with current analysis depth"
      })
      break
    }

    if (feedback === "Adjust direction") {
      // Collect the new direction
      const directionResult = AskUserQuestion({
        questions: [{
          question: "Choose the new focus:",
          header: "Direction Adjustment",
          multiSelect: false,
          options: [
            { label: "Code details", description: "Dig into concrete implementation" },
            { label: "Architecture", description: "Focus on system architecture design" },
            { label: "Best practices", description: "Compare against industry best practices" },
            { label: "Custom", description: "Enter a custom direction" }
          ]
        }]
      })

      const newDirection = directionResult["Direction Adjustment"]

      // 📌 Record direction change
      appendToDiscussion(sessionFolder, discussionRound + 1, {
        user_input: `Adjust direction: ${newDirection}`,
        decision: `Direction adjusted to: ${newDirection}`,
        reason: "User requested focus change"
      })

      // Create a supplementary analysis task + a new discussion task
      const fixId = createAnalysisFix(discussionRound + 1, newDirection, sessionFolder)
      discussionRound++
      createDiscussionTask(discussionRound + 1, 'direction-adjusted', newDirection, sessionFolder)
      continue
    }

    if (feedback === "Specific questions") {
      // 📌 Record question
      appendToDiscussion(sessionFolder, discussionRound + 1, {
        user_input: "Specific questions (handled by discussant)",
        decision: "Create discussion task for specific questions"
      })

      discussionRound++
      createDiscussionTask(discussionRound + 1, 'specific-questions', 'User has specific questions', sessionFolder)
      continue
    }

    // "Continue deeper"
    appendToDiscussion(sessionFolder, discussionRound + 1, {
      user_input: "Continue deeper",
      decision: "Continue deepening in current direction"
    })

    discussionRound++
    if (discussionRound < MAX_DISCUSSION_ROUNDS) {
      createDiscussionTask(discussionRound + 1, 'deepen', 'Continue current direction', sessionFolder)
    }
  }

  // Create the final synthesis task
  const lastDiscussTaskId = getLastCompletedTaskId('DISCUSS')
  createSynthesisTask(sessionFolder, [lastDiscussTaskId])
}
```

6. **Feedback handling** (still inside handleCallback, after AskUserQuestion returns):

| Feedback | Action |
|----------|--------|
| "Continue deeper" | Create new DISCUSS-`<N+1>` task (pending, no blockedBy). Record decision in discussion.md. Proceed to handleSpawnNext |
| "Adjust direction" | AskUserQuestion for new focus. Create ANALYZE-fix-`<N>` task (pending). Create DISCUSS-`<N+1>` task (pending, blockedBy ANALYZE-fix-`<N>`). Record direction change in discussion.md. Proceed to handleSpawnNext |
| "Done" | Create SYNTH-001 task (pending, blockedBy last DISCUSS). Record decision in discussion.md. Proceed to handleSpawnNext |

**Dynamic task creation templates**:

### Step 3.1: Discussion Helper Functions

```javascript
function appendToDiscussion(sessionFolder, round, data) {
  const roundContent = `
### Round ${round + 1} - Discussion (${new Date().toISOString()})

#### User Input
${data.user_input}

#### Decision Log
> **Decision**: ${data.decision}
> - **Context**: Discussion round ${round + 1}
> - **Reason**: ${data.reason || 'User-directed'}

#### Updated Understanding
${data.updated_understanding || '(Updated by discussant)'}
`
  // Append to discussion.md
  const currentContent = Read(`${sessionFolder}/discussion.md`)
  Write(`${sessionFolder}/discussion.md`, currentContent + roundContent)
}

function handleStageTimeout(stageTask, _unused, autoYes) {
  if (autoYes) {
    mcp__ccw-tools__team_msg({
      operation: "log", session_id: sessionId, from: "coordinator",
      to: "user", type: "error",
      summary: `[coordinator] [auto] Stage ${stageTask.subject} worker returned without completing, skipping automatically`
    })
    TaskUpdate({ taskId: stageTask.id, status: 'deleted' })
    return
  }

  const decision = AskUserQuestion({
    questions: [{
      question: `Stage "${stageTask.subject}" worker returned without completing. How should it be handled?`,
      header: "Stage Fail",
      multiSelect: false,
      options: [
        { label: "Retry", description: "Re-spawn a worker for this stage" },
        { label: "Skip stage", description: "Mark as skipped, continue the pipeline" },
        { label: "Abort pipeline", description: "Stop the whole analysis flow" }
      ]
    }]
  })

  const answer = decision["Stage Fail"]
  if (answer === "Skip stage") {
    TaskUpdate({ taskId: stageTask.id, status: 'deleted' })
  } else if (answer === "Abort pipeline") {
    mcp__ccw-tools__team_msg({
      operation: "log", session_id: sessionId, from: "coordinator",
      to: "user", type: "shutdown",
      summary: `[coordinator] User aborted the pipeline at stage: ${stageTask.subject}`
    })
  }
}
```

DISCUSS-N (subsequent round):

```
TaskCreate({
  subject: "DISCUSS-<NNN>",
  description: "PURPOSE: Process discussion round <N> | Success: Updated understanding
TASK:
- Process previous round results
- Execute <type> discussion strategy
- Update discussion timeline
CONTEXT:
- Session: <session-folder>
- Topic: <topic>
- Round: <N>
- Type: <deepen|direction-adjusted|specific-questions>
- Shared memory: <session>/wisdom/.msg/meta.json
EXPECTED: <session>/discussions/discussion-round-<NNN>.json
---
InnerLoop: false",
  status: "pending"
})
```

### Step 4: Wait for Synthesis + Result Processing

```javascript
// Wait for SYNTH-001 to complete (Stop-Wait: spawn synthesizer worker)
const synthTask = pipelineTasks.find(t => t.subject.startsWith('SYNTH-'))
if (synthTask) {
  TaskUpdate({ taskId: synthTask.id, status: 'in_progress' })
  const synthResult = Task({
    subagent_type: "general-purpose",
    description: `Spawn synthesizer worker for ${synthTask.subject}`,
    team_name: teamName,
    name: "synthesizer",
    prompt: `You are the SYNTHESIZER of team "${teamName}".

## Primary Directive
Skill(skill="team-ultra-analyze", args="--role=synthesizer")

## Assignment
- Task ID: ${synthTask.id}
- Task: ${synthTask.subject}
- Session: ${sessionFolder}

## Workflow
1. Skill(skill="team-ultra-analyze", args="--role=synthesizer") to load role definition
2. Execute task per role.md
3. TaskUpdate({ taskId: "${synthTask.id}", status: "completed" })

All outputs carry [synthesizer] tag.`,
    run_in_background: false
  })
}

// Aggregate all results
const finalMemory = mcp__ccw-tools__team_msg({ operation: "get_state", session_id: sessionId })
const allFinalTasks = TaskList()
const workerTasks = allFinalTasks.filter(t => t.owner && t.owner !== 'coordinator')
const summary = {
  total_tasks: workerTasks.length,
  completed_tasks: workerTasks.filter(t => t.status === 'completed').length,
  discussion_rounds: discussionRound,
  has_synthesis: !!finalMemory.synthesis,
  decisions_made: finalMemory.decision_trail?.length || 0
}
```

ANALYZE-fix-N (direction adjustment):

```
TaskCreate({
  subject: "ANALYZE-fix-<N>",
  description: "PURPOSE: Supplementary analysis with adjusted focus | Success: New insights from adjusted direction
TASK:
- Re-analyze from adjusted perspective: <adjusted_focus>
- Build on previous exploration findings
- Generate updated discussion points
CONTEXT:
- Session: <session-folder>
- Topic: <topic>
- Type: direction-fix
- Adjusted focus: <adjusted_focus>
- Shared memory: <session>/wisdom/.msg/meta.json
EXPECTED: <session>/analyses/analysis-fix-<N>.json
---
InnerLoop: false",
  status: "pending"
})
```

SYNTH-001 (created dynamically in deep mode):

```
TaskCreate({
  subject: "SYNTH-001",
  description: "PURPOSE: Integrate all analysis into final conclusions | Success: Executive summary with recommendations
TASK:
- Load all exploration, analysis, and discussion artifacts
- Extract themes, consolidate evidence, prioritize recommendations
- Write conclusions and update discussion.md
CONTEXT:
- Session: <session-folder>
- Topic: <topic>
- Upstream artifacts: explorations/*.json, analyses/*.json, discussions/*.json
- Shared memory: <session>/wisdom/.msg/meta.json
EXPECTED: <session>/conclusions.json + discussion.md update
CONSTRAINTS: Pure integration, no new exploration
---
InnerLoop: false",
  blockedBy: ["<last-DISCUSS-task-id>"],
  status: "pending"
})
```

## Output Format

```
## Coordination Summary

### Pipeline Status: COMPLETE
### Mode: [quick|standard|deep]
### Tasks: [completed]/[total]
### Discussion Rounds: [count]
### Decisions Made: [count]

### Message Log (last 10)
- [timestamp] [from] → [to]: [type] - [summary]
```

7. Record user feedback to decision_trail via team_msg:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: sessionId, from: "coordinator",
  type: "state_update",
  data: { decision_trail_entry: {
    round: discussion_round,
    decision: feedback,
    context: "User feedback at discussion round N",
    timestamp: current ISO timestamp
  }}
})
```

8. Proceed to handleSpawnNext

### handleSpawnNext

Find and spawn the next ready tasks.

1. Scan task list for tasks where:
   - Status is "pending"
   - All blockedBy tasks have status "completed"

2. For each ready task, determine role from task prefix:

| Task Prefix | Role | Role Spec |
|-------------|------|-----------|
| `EXPLORE-*` | explorer | `.claude/skills/team-ultra-analyze/role-specs/explorer.md` |
| `ANALYZE-*` | analyst | `.claude/skills/team-ultra-analyze/role-specs/analyst.md` |
| `DISCUSS-*` | discussant | `.claude/skills/team-ultra-analyze/role-specs/discussant.md` |
| `SYNTH-*` | synthesizer | `.claude/skills/team-ultra-analyze/role-specs/synthesizer.md` |

3. Spawn team-worker for each ready task:

```
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker for <task-subject>",
  team_name: "ultra-analyze",
  name: "<agent-name>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-ultra-analyze/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: ultra-analyze
requirement: <task-description>
agent_name: <agent-name>
inner_loop: false

## Current Task
- Task ID: <task-id>
- Task: <task-subject>

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery, owner=<agent-name>) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`
})
```

4. **Parallel spawn rules**:

| Mode | Stage | Spawn Behavior |
|------|-------|---------------|
| quick | All stages | One worker at a time (serial pipeline) |
| standard/deep | EXPLORE phase | Spawn all EXPLORE-001..N in parallel |
| standard/deep | ANALYZE phase | Spawn all ANALYZE-001..N in parallel |
| all | DISCUSS phase | One discussant at a time |
| all | SYNTH phase | One synthesizer |

5. **STOP** after spawning -- wait for next callback
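The ready-task scan (step 1) and the parallel spawn rules (step 4) can be sketched as plain functions. This is a minimal sketch: the `{ id, subject, status, blockedBy }` task shape and the helper names `readyTasks`/`spawnBatch` are illustrative assumptions, not part of the skill's API.

```javascript
// A task is ready when it is pending and every blocker has completed.
function readyTasks(tasks) {
  const done = new Set(tasks.filter(t => t.status === 'completed').map(t => t.id))
  return tasks.filter(t =>
    t.status === 'pending' && (t.blockedBy || []).every(id => done.has(id))
  )
}

// Apply the parallel spawn rules: EXPLORE/ANALYZE batches run in parallel
// in standard/deep mode; everything else is spawned one worker at a time.
function spawnBatch(ready, mode) {
  const stage = s => s.split('-')[0]
  if (ready.length === 0) return []
  const first = stage(ready[0].subject)
  const parallelStages = mode === 'quick' ? [] : ['EXPLORE', 'ANALYZE']
  return parallelStages.includes(first)
    ? ready.filter(t => stage(t.subject) === first) // whole stage in parallel
    : [ready[0]]                                    // serial: one at a time
}
```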

### handleCheck

Output current pipeline status without advancing.

```
Pipeline Status (<mode> mode):
[DONE] EXPLORE-001 (explorer) -> exploration-001.json
[DONE] EXPLORE-002 (explorer) -> exploration-002.json
[DONE] ANALYZE-001 (analyst) -> analysis-001.json
[RUN] ANALYZE-002 (analyst) -> analyzing...
[WAIT] DISCUSS-001 (discussant) -> blocked by ANALYZE-002
[----] SYNTH-001 (synthesizer) -> blocked by DISCUSS-001

Discussion Rounds: 0/<max>
Pipeline Mode: <mode>
Session: <session-id>
```

Output status -- do NOT advance pipeline.

### handleResume

Resume pipeline after user pause or interruption.

1. Audit task list for inconsistencies:
   - Tasks stuck in "in_progress" -> reset to "pending"
   - Tasks with completed blockers but still "pending" -> include in spawn list
2. Proceed to handleSpawnNext
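The audit in step 1 can be sketched as a pure function over the task list (a minimal sketch assuming a `{ id, status, blockedBy }` task shape; `auditTasks` is a hypothetical helper name):

```javascript
// Classify tasks for resume: stuck in_progress tasks get reset,
// unblocked pending tasks go into the spawn list.
function auditTasks(tasks) {
  const done = new Set(tasks.filter(t => t.status === 'completed').map(t => t.id))
  const resets = tasks.filter(t => t.status === 'in_progress').map(t => t.id)
  const spawnable = tasks
    .filter(t => t.status === 'pending' && (t.blockedBy || []).every(id => done.has(id)))
    .map(t => t.id)
  return { resets, spawnable }
}
```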

### handleComplete

Triggered when all pipeline tasks are completed.

**Completion check**:

| Mode | Completion Condition |
|------|---------------------|
| quick | EXPLORE-001 + ANALYZE-001 + SYNTH-001 all completed |
| standard | All EXPLORE + ANALYZE + DISCUSS-001 + SYNTH-001 completed |
| deep | All EXPLORE + ANALYZE + all DISCUSS-N + SYNTH-001 completed |

1. Verify all tasks completed. If any not completed, return to handleSpawnNext
2. If all completed, transition to coordinator Phase 5
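The mode-dependent completion condition above can be sketched as (illustrative helper, assuming a `{ subject, status }` task shape; quick mode simply has no DISCUSS tasks):

```javascript
// True when every task of the mode's required stages has completed.
function pipelineComplete(tasks, mode) {
  const done = p => tasks.filter(t => t.subject.startsWith(p))
                         .every(t => t.status === 'completed')
  const has = p => tasks.some(t => t.subject.startsWith(p))
  if (!has('SYNTH-') || !done('SYNTH-')) return false
  if (!done('EXPLORE-') || !done('ANALYZE-')) return false
  // quick mode has no DISCUSS tasks; standard/deep require all of them done
  return mode === 'quick' ? true : has('DISCUSS-') && done('DISCUSS-')
}
```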

## Phase 4: State Persistence

After every handler execution:

1. Update session.json with current state:
   - `discussion_round`: current round count
   - `last_event`: event type and timestamp
   - `active_tasks`: list of in-progress task IDs
2. Verify task list consistency (no orphan tasks, no broken dependencies)
3. **STOP** and wait for next event
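The session.json snapshot from step 1 can be sketched as (the field names follow the list above; `buildSessionState` is an illustrative name, and the exact session schema is an assumption):

```javascript
// Build the session.json snapshot written after each handler run.
function buildSessionState(prev, event, tasks) {
  return {
    ...prev,
    discussion_round: prev.discussion_round || 0,
    last_event: { type: event, at: new Date().toISOString() },
    active_tasks: tasks.filter(t => t.status === 'in_progress').map(t => t.id)
  }
}
```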

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Worker callback but task not completed (interactive mode) | AskUserQuestion: retry / skip / abort |
| Worker callback but task not completed (auto mode) | Log warning, skip automatically |
| Worker spawn fails | Retry once. If still fails, report to user via AskUserQuestion: retry / skip / abort |
| Discussion loop exceeds max rounds | Force create SYNTH-001, proceed to synthesis |
| Synthesis fails | Report partial results from analyses and discussions |
| Pipeline stall (no ready + no running) | Check blockedBy chains, report blockage to user |
| Missing task artifacts | Log warning, continue with available data |

@@ -1,222 +0,0 @@

# Command: deepen

> Deep exploration and supplementary analysis. Performs targeted code exploration or CLI analysis according to the discussion type.

## When to Use

- Phase 3 of Discussant
- User feedback has been collected and needs in-depth processing
- Triggered once per DISCUSS-* task

**Trigger conditions**:
- initial: first discussion round, summarize analysis results
- deepen: continue deeper in the current direction
- direction-adjusted: re-analyze after a direction adjustment
- specific-questions: answer the user's specific questions

## Strategy

### Delegation Mode

**Mode**: Mixed (simple summaries inline; deep exploration via subagent/CLI)

### Decision Logic

```javascript
function selectDeepenStrategy(discussType, complexity) {
  const strategies = {
    'initial': {
      mode: 'inline',
      description: 'Summarize all analysis results into discussion format'
    },
    'deepen': {
      mode: complexity === 'High' ? 'cli' : 'subagent',
      description: 'Further exploration in current direction'
    },
    'direction-adjusted': {
      mode: 'cli',
      description: 'Re-analyze from new perspective'
    },
    'specific-questions': {
      mode: 'subagent',
      description: 'Targeted exploration to answer questions'
    }
  }
  return strategies[discussType] || strategies['initial']
}
```

## Execution Steps

### Step 1: Strategy Selection

```javascript
const strategy = selectDeepenStrategy(discussType, assessComplexity(userFeedback))
```

### Step 2: Execute by Type

#### Initial Discussion

```javascript
function processInitialDiscussion() {
  // Summarize all analysis results
  const summary = {
    perspectives_analyzed: allAnalyses.map(a => a.perspective),
    total_insights: currentInsights.length,
    total_findings: currentFindings.length,
    convergent_themes: identifyConvergentThemes(allAnalyses),
    conflicting_views: identifyConflicts(allAnalyses),
    top_discussion_points: discussionPoints.slice(0, 5),
    open_questions: openQuestions.slice(0, 5)
  }

  roundContent.updated_understanding.new_insights = summary.convergent_themes
  roundContent.new_findings = currentFindings.slice(0, 10)
  roundContent.new_questions = openQuestions.slice(0, 5)
}

function identifyConvergentThemes(analyses) {
  // Find common themes across perspectives
  const allInsights = analyses.flatMap(a =>
    (a.key_insights || []).map(i => typeof i === 'string' ? i : i.insight)
  )
  // Simple de-duplication + aggregation
  return [...new Set(allInsights)].slice(0, 5)
}

function identifyConflicts(analyses) {
  // Identify contradictions between perspectives
  return [] // Determined by the actual analysis results
}
```

#### Deepen Discussion

```javascript
function processDeepenDiscussion() {
  // Explore further in the current direction
  Task({
    subagent_type: "cli-explore-agent",
    run_in_background: false,
    description: `Deepen exploration: ${topic} (round ${round})`,
    prompt: `
## Context
Topic: ${topic}
Round: ${round}
Previous findings: ${currentFindings.slice(0, 5).join('; ')}
Open questions: ${openQuestions.slice(0, 3).join('; ')}

## MANDATORY FIRST STEPS
1. Focus on open questions from previous analysis
2. Search for specific patterns mentioned in findings
3. Look for edge cases and exceptions

## Exploration Focus
- Deepen understanding of confirmed patterns
- Investigate open questions
- Find additional evidence for uncertain insights

## Output
Write to: ${sessionFolder}/discussions/deepen-${discussNum}.json
Schema: {new_findings, answered_questions, remaining_questions, evidence}
`
  })

  // Read the deepen exploration result
  let deepenResult = {}
  try {
    deepenResult = JSON.parse(Read(`${sessionFolder}/discussions/deepen-${discussNum}.json`))
  } catch {}

  roundContent.updated_understanding.new_insights = deepenResult.new_findings || []
  roundContent.new_findings = deepenResult.new_findings || []
  roundContent.new_questions = deepenResult.remaining_questions || []
}
```

#### Direction Adjusted

```javascript
function processDirectionAdjusted() {
  // After a direction change, re-analyze via CLI
  Bash({
    command: `ccw cli -p "PURPOSE: Re-analyze '${topic}' with adjusted focus on '${userFeedback}'
Success: New insights from adjusted direction

PREVIOUS ANALYSIS CONTEXT:
- Previous insights: ${currentInsights.slice(0, 5).map(i => typeof i === 'string' ? i : i.insight).join('; ')}
- Direction change reason: User requested focus on '${userFeedback}'

TASK:
• Re-evaluate findings from new perspective
• Identify what changes with adjusted focus
• Find new patterns relevant to adjusted direction
• Note what previous findings remain valid

MODE: analysis
CONTEXT: @**/* | Topic: ${topic}
EXPECTED: Updated analysis with: validated findings, new insights, invalidated assumptions
CONSTRAINTS: Focus on ${userFeedback}
" --tool gemini --mode analysis`,
    run_in_background: true
  })

  // ⚠️ STOP: Wait for CLI callback

  roundContent.updated_understanding.corrected = ['Direction adjusted per user request']
  roundContent.updated_understanding.new_insights = [] // From CLI result
}
```

#### Specific Questions

```javascript
function processSpecificQuestions() {
  // Explore with the user's questions as targets
  Task({
    subagent_type: "cli-explore-agent",
    run_in_background: false,
    description: `Answer questions: ${topic}`,
    prompt: `
## Context
Topic: ${topic}
User questions: ${userFeedback}
Known findings: ${currentFindings.slice(0, 5).join('; ')}

## MANDATORY FIRST STEPS
1. Search for code related to user's questions
2. Trace execution paths relevant to questions
3. Check configuration and environment factors

## Output
Write to: ${sessionFolder}/discussions/questions-${discussNum}.json
Schema: {answers: [{question, answer, evidence, confidence}], follow_up_questions}
`
  })

  let questionResult = {}
  try {
    questionResult = JSON.parse(Read(`${sessionFolder}/discussions/questions-${discussNum}.json`))
  } catch {}

  roundContent.updated_understanding.new_insights =
    (questionResult.answers || []).map(a => `Q: ${a.question} → A: ${a.answer}`)
  roundContent.new_questions = questionResult.follow_up_questions || []
}
```

### Step 3: Result Processing

```javascript
// Results are already written into roundContent; role.md Phase 4 handles them
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| cli-explore-agent fails | Use existing analysis results, note limitation |
| CLI timeout | Report partial results |
| No previous analyses | Process as initial with empty context |
| User feedback unparseable | Treat as 'deepen' type |

@@ -1,225 +0,0 @@

# Discussant Role

Discussion processor. Based on user feedback passed along by the coordinator, performs direction adjustment, deep exploration, or supplementary analysis, and updates the discussion timeline.

## Identity

- **Name**: `discussant` | **Tag**: `[discussant]`
- **Task Prefix**: `DISCUSS-*`
- **Responsibility**: Analysis + Exploration (discussion processing)

## Boundaries

### MUST

- Only process `DISCUSS-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[discussant]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within discussion processing responsibility scope
- Execute deep exploration based on user feedback and existing analysis
- Share discussion results via team_msg(type='state_update')
- Update discussion.md discussion timeline

### MUST NOT

- Interact directly with user (AskUserQuestion is coordinator-driven)
- Generate final conclusions (belongs to synthesizer)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Communicate directly with other worker roles
- Modify source code
- Omit `[discussant]` identifier in any output

---

## Toolbox
|
||||
|
||||
### Available Commands
|
||||
|
||||
| Command | File | Phase | Description |
|
||||
|---------|------|-------|-------------|
|
||||
| `deepen` | [commands/deepen.md](commands/deepen.md) | Phase 3 | 深入探索与补充分析 |
|
||||
|
||||
### Tool Capabilities
|
||||
|
||||
| Tool | Type | Used By | Purpose |
|
||||
|------|------|---------|---------|
|
||||
| `Task` | Subagent | deepen.md | Spawn cli-explore-agent for targeted exploration |
|
||||
| `Bash` | CLI | deepen.md | Execute ccw cli for deep analysis |
|
||||
| `Read` | File | discussant | Read analysis results and session context |
|
||||
| `Write` | File | discussant | Write discussion results |
|
||||
| `Glob` | File | discussant | Find analysis/exploration files |
|
||||
|
||||
### CLI Tools
|
||||
|
||||
| CLI Tool | Mode | Used By | Purpose |
|
||||
|----------|------|---------|---------|
|
||||
| `gemini` | analysis | deepen.md | 深入分析 |
---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `discussion_processed` | discussant → coordinator | Discussion processing complete | Carries updated understanding and new findings |
| `error` | discussant → coordinator | Processing failed | Blocking error |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "discussant",
  type: "discussion_processed",
  ref: "<output-path>"
})
```

> `to` and `summary` are auto-defaulted by the tool.

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from discussant --type discussion_processed --ref <path> --json")
```
---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `DISCUSS-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

Falls back to `discussant` for single-instance role.

### Phase 2: Context Loading

**Loading steps**:

1. Extract session path from task description
2. Extract topic, round number, discussion type, user feedback
3. Read role states via team_msg(operation="get_state") for existing context
4. Read all analysis results
5. Read all exploration results
6. Aggregate current findings, insights, questions

**Context extraction**:

| Field | Source | Pattern |
|-------|--------|---------|
| sessionFolder | task description | `session:\s*(.+)` |
| topic | task description | `topic:\s*(.+)` |
| round | task description | `round:\s*(\d+)` or default 1 |
| discussType | task description | `type:\s*(.+)` or default "initial" |
| userFeedback | task description | `user_feedback:\s*(.+)` or empty |
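The extraction patterns above can be pictured as a small parser. A minimal sketch — `parseTaskContext` and the sample description are illustrative, not part of the skill's API:

```javascript
// Hypothetical sketch of Phase 2 context extraction from a task description.
function parseTaskContext(description) {
  const grab = (re) => description.match(re)?.[1]?.trim()
  return {
    sessionFolder: grab(/session:\s*(.+)/),
    topic: grab(/topic:\s*(.+)/),
    round: parseInt(grab(/round:\s*(\d+)/) ?? '1', 10),
    discussType: grab(/type:\s*(.+)/) ?? 'initial',
    userFeedback: grab(/user_feedback:\s*(.+)/) ?? ''
  }
}

const ctx = parseTaskContext(
  'session: .workflow/.team/TLS-demo\ntopic: auth refactor\nround: 2\ntype: deepen'
)
// ctx.round === 2, ctx.discussType === 'deepen', ctx.userFeedback === ''
```

Missing fields simply fall back to their defaults, mirroring the "or default" column above.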
**Discussion types**:

| Type | Description |
|------|-------------|
| initial | First round: aggregate all analysis results and generate a discussion summary |
| deepen | Go deeper: continue exploring along the current direction |
| direction-adjusted | Direction change: reorganize findings around the new direction |
| specific-questions | Specific questions: targeted analysis of the user's questions |

### Phase 3: Discussion Processing

Delegate to `commands/deepen.md` if available, otherwise execute inline.

**Processing by discussion type**:

| Type | Strategy |
|------|----------|
| initial | Aggregate all analysis results, generate discussion summary with confirmed/corrected/new insights |
| deepen | Focus on current direction, explore deeper with cli-explore-agent |
| direction-adjusted | Re-organize findings around new focus, identify new patterns |
| specific-questions | Targeted analysis addressing user's specific questions |

**Round content structure**:

| Field | Description |
|-------|-------------|
| round | Discussion round number |
| type | Discussion type |
| user_feedback | User input (if any) |
| updated_understanding | confirmed, corrected, new_insights arrays |
| new_findings | New discoveries |
| new_questions | Open questions |
| timestamp | ISO timestamp |
### Phase 4: Update Discussion Timeline

**Output path**: `<session-folder>/discussions/discussion-round-<num>.json`

**discussion.md update template**:

```markdown
### Round <N> - Discussion (<timestamp>)

#### Type
<discussType>

#### User Input
<userFeedback or "(Initial discussion round)">

#### Updated Understanding
**Confirmed**: <list of confirmed assumptions>
**Corrected**: <list of corrected assumptions>
**New Insights**: <list of new insights>

#### New Findings
<list of new findings or "(None)">

#### Open Questions
<list of open questions or "(None)">
```

**Update steps**:

1. Write round content JSON to discussions folder
2. Read current discussion.md
3. Append new round section
4. Write updated discussion.md
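The four steps above amount to a read-append-write cycle. A minimal sketch — `renderRound` and `appendRound` are hypothetical helpers mirroring the template, not functions from the skill:

```javascript
// Illustrative sketch of Phase 4: append a new round section to discussion.md.
function renderRound({ round, type, userFeedback }) {
  return [
    `### Round ${round} - Discussion (${new Date().toISOString()})`,
    '',
    '#### Type',
    type,
    '',
    '#### User Input',
    userFeedback || '(Initial discussion round)',
    ''
  ].join('\n')
}

function appendRound(existingMarkdown, roundContent) {
  // Step 3: append the new round section to the current timeline
  return existingMarkdown.trimEnd() + '\n\n' + renderRound(roundContent) + '\n'
}

const updated = appendRound('# Discussion\n', { round: 1, type: 'initial', userFeedback: '' })
// updated now ends with the Round 1 section
```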
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[discussant]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.
**Shared memory update**:

```
sharedMemory.discussions.push({
  id: "discussion-round-<num>",
  round: <round>,
  type: <discussType>,
  new_insight_count: <count>,
  corrected_count: <count>,
  timestamp: <timestamp>
})

// Update current_understanding
sharedMemory.current_understanding.established += confirmed
sharedMemory.current_understanding.clarified += corrected
sharedMemory.current_understanding.key_insights += new_insights
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No DISCUSS-* tasks available | Idle, wait for coordinator assignment |
| No analysis results found | Report empty discussion, notify coordinator |
| CLI tool unavailable | Use existing analysis results for discussion |
| User feedback unclear | Process as 'deepen' type, note ambiguity |
| Session folder missing | Error to coordinator |
| Command file not found | Fall back to inline execution |
@@ -1,194 +0,0 @@
# Command: explore

> Parallel codebase exploration via cli-explore-agent. Gathers codebase context through a subagent based on the topic and perspective.

## When to Use

- Phase 3 of Explorer
- Codebase context is needed for subsequent analysis
- Triggered once per EXPLORE-* task

**Trigger conditions**:
- After Explorer Phase 2 completes
- Task includes an explicit perspective and dimensions
## Strategy

### Delegation Mode

**Mode**: Subagent (cli-explore-agent performs the actual exploration)

### Decision Logic

```javascript
// Determine the exploration strategy from the perspective
function buildExplorationStrategy(perspective, dimensions, topic) {
  const strategies = {
    'general': {
      focus: 'Overall codebase structure and patterns',
      searches: [topic, ...dimensions],
      depth: 'broad'
    },
    'technical': {
      focus: 'Implementation details, code patterns, technical feasibility',
      searches: [`${topic} implementation`, `${topic} pattern`, `${topic} handler`],
      depth: 'medium'
    },
    'architectural': {
      focus: 'System design, module boundaries, component interactions',
      searches: [`${topic} module`, `${topic} service`, `${topic} interface`],
      depth: 'broad'
    },
    'business': {
      focus: 'Business logic, domain models, value flows',
      searches: [`${topic} model`, `${topic} domain`, `${topic} workflow`],
      depth: 'medium'
    },
    'domain_expert': {
      focus: 'Domain-specific patterns, standards compliance, best practices',
      searches: [`${topic} standard`, `${topic} convention`, `${topic} best practice`],
      depth: 'deep'
    }
  }
  return strategies[perspective] || strategies['general']
}
```
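A quick check of the lookup and fallback behavior. The snippet re-states a condensed copy of `buildExplorationStrategy` (only two of the five strategies) so it runs standalone:

```javascript
// Condensed standalone copy: unknown perspectives fall back to 'general'.
function buildExplorationStrategy(perspective, dimensions, topic) {
  const strategies = {
    'general': { focus: 'Overall codebase structure and patterns', searches: [topic, ...dimensions], depth: 'broad' },
    'technical': { focus: 'Implementation details, code patterns, technical feasibility', searches: [`${topic} implementation`, `${topic} pattern`, `${topic} handler`], depth: 'medium' }
  }
  return strategies[perspective] || strategies['general']
}

const tech = buildExplorationStrategy('technical', ['auth'], 'login')
// tech.searches → ['login implementation', 'login pattern', 'login handler']

const unknown = buildExplorationStrategy('security', ['auth'], 'login')
// unrecognized perspective falls back: depth 'broad', searches start with the raw topic
```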
## Execution Steps

### Step 1: Context Preparation

```javascript
const strategy = buildExplorationStrategy(perspective, dimensions, topic)
const exploreNum = task.subject.match(/EXPLORE-(\d+)/)?.[1] || '001'
const outputPath = `${sessionFolder}/explorations/exploration-${exploreNum}.json`
```
### Step 2: Execute Exploration

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: `Explore codebase: ${topic} (${perspective})`,
  prompt: `
## Analysis Context
Topic: ${topic}
Perspective: ${perspective} — ${strategy.focus}
Dimensions: ${dimensions.join(', ')}
Session: ${sessionFolder}

## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Execute searches: ${strategy.searches.map(s => `"${s}"`).join(', ')}
3. Run: ccw spec load --category exploration

## Exploration Focus (${perspective} angle)
- **Depth**: ${strategy.depth}
- **Focus**: ${strategy.focus}
${dimensions.map(d => `- ${d}: Identify relevant code patterns, structures, and relationships`).join('\n')}

## Search Strategy
${strategy.searches.map((s, i) => `${i + 1}. Search for: "${s}" — find related files, functions, types`).join('\n')}

## Additional Exploration
- Identify entry points related to the topic
- Map dependencies between relevant modules
- Note any configuration or environment dependencies
- Look for test files that reveal expected behavior

## Output
Write findings to: ${outputPath}

Schema:
{
  "perspective": "${perspective}",
  "relevant_files": [
    {"path": "string", "relevance": "high|medium|low", "summary": "what this file does"}
  ],
  "patterns": ["pattern descriptions found in codebase"],
  "key_findings": ["important discoveries"],
  "module_map": {"module_name": ["related_files"]},
  "questions_for_analysis": ["questions that need deeper analysis"],
  "_metadata": {
    "agent": "cli-explore-agent",
    "perspective": "${perspective}",
    "search_queries": ${JSON.stringify(strategy.searches)},
    "timestamp": "ISO string"
  }
}
`
})
```
### Step 3: Result Processing

```javascript
// Validate the output file
let result = {}
try {
  result = JSON.parse(Read(outputPath))
} catch {
  // Fallback: ACE search
  const aceResults = mcp__ace-tool__search_context({
    project_root_path: ".",
    query: `${topic} ${perspective}`
  })

  result = {
    perspective,
    relevant_files: [],
    patterns: [],
    key_findings: [`ACE fallback: ${aceResults?.summary || 'No results'}`],
    questions_for_analysis: [`What is the ${perspective} perspective on ${topic}?`],
    _metadata: {
      agent: 'ace-fallback',
      perspective,
      timestamp: new Date().toISOString()
    }
  }
  Write(outputPath, JSON.stringify(result, null, 2))
}

// Quality validation
const quality = {
  has_files: (result.relevant_files?.length || 0) > 0,
  has_findings: (result.key_findings?.length || 0) > 0,
  has_patterns: (result.patterns?.length || 0) > 0
}

if (!quality.has_files && !quality.has_findings) {
  // Supplementary search
  const supplementary = mcp__ace-tool__search_context({
    project_root_path: ".",
    query: topic
  })
  // Merge supplementary results
}
```
## Output Format

```json
{
  "perspective": "technical",
  "relevant_files": [
    {"path": "src/auth/handler.ts", "relevance": "high", "summary": "Authentication request handler"}
  ],
  "patterns": ["Repository pattern used for data access", "Middleware chain for auth"],
  "key_findings": ["JWT tokens stored in HTTP-only cookies", "Rate limiting at gateway level"],
  "module_map": {"auth": ["src/auth/handler.ts", "src/auth/middleware.ts"]},
  "questions_for_analysis": ["Is the token refresh mechanism secure?"],
  "_metadata": {"agent": "cli-explore-agent", "perspective": "technical", "timestamp": "..."}
}
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| cli-explore-agent unavailable | Fall back to ACE search + Grep |
| Agent produces no output file | Create minimal result with ACE fallback |
| Agent timeout | Use partial results if available |
| Invalid JSON output | Attempt repair, fall back to raw text extraction |
| Session folder missing | Create directory, continue |
@@ -1,217 +0,0 @@
# Explorer Role

Codebase explorer. Explores the codebase in parallel from multiple perspectives via cli-explore-agent, gathering structured context for subsequent analysis.

## Identity

- **Name**: `explorer` | **Tag**: `[explorer]`
- **Task Prefix**: `EXPLORE-*`
- **Responsibility**: Orchestration (codebase exploration orchestration)

## Boundaries

### MUST

- Only process `EXPLORE-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[explorer]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within codebase exploration responsibility scope
- Share exploration results via team_msg(type='state_update')

### MUST NOT

- Execute deep analysis (belongs to analyst)
- Handle user feedback (belongs to discussant)
- Generate conclusions or recommendations (belongs to synthesizer)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Communicate directly with other worker roles
- Omit `[explorer]` identifier in any output
---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `explore` | [commands/explore.md](commands/explore.md) | Phase 3 | Parallel exploration via cli-explore-agent |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Task` | Subagent | explore.md | Spawn cli-explore-agent for codebase exploration |
| `Read` | File | explorer | Read session files and exploration context |
| `Write` | File | explorer | Write exploration results |
| `Glob` | File | explorer | Find relevant files |
| `mcp__ace-tool__search_context` | MCP | explorer | ACE semantic search fallback |
| `Grep` | Search | explorer | Pattern search fallback |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `exploration_ready` | explorer → coordinator | Exploration complete | Carries discovered files, patterns, key findings |
| `error` | explorer → coordinator | Exploration failed | Blocking error |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "explorer",
  type: "exploration_ready",
  ref: "<output-path>"
})
```

> `to` and `summary` are auto-defaulted by the tool.

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from explorer --type exploration_ready --ref <path> --json")
```
---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `EXPLORE-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

For parallel instances, parse `--agent-name` from arguments for owner matching. Falls back to `explorer` for single-instance roles.
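The discovery filter can be pictured as follows. This is a hypothetical sketch — the task object shape and `discoverNext` are assumptions for illustration, not the TaskList API:

```javascript
// Illustrative Phase 1 filter: prefix match + owner match + pending + unblocked.
function discoverNext(tasks, { prefix, owner }) {
  return tasks.find(t =>
    t.subject.startsWith(prefix) &&
    (t.owner === owner || !t.owner) &&
    t.status === 'pending' &&
    (t.blockedBy || []).length === 0
  )
}

const next = discoverNext(
  [
    { subject: 'ANALYZE-001', owner: 'analyst', status: 'pending', blockedBy: [] },
    { subject: 'EXPLORE-001', owner: 'explorer', status: 'pending', blockedBy: ['ANALYZE-001'] },
    { subject: 'EXPLORE-002', owner: 'explorer', status: 'pending', blockedBy: [] }
  ],
  { prefix: 'EXPLORE-', owner: 'explorer' }
)
// next.subject === 'EXPLORE-002' (EXPLORE-001 is still blocked)
```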
### Phase 2: Context & Scope Assessment

**Loading steps**:

1. Extract session path from task description
2. Extract topic, perspective, dimensions from task metadata
3. Read role states via team_msg(operation="get_state") for existing context
4. Determine exploration number from task subject (EXPLORE-N)

**Context extraction**:

| Field | Source | Pattern |
|-------|--------|---------|
| sessionFolder | task description | `session:\s*(.+)` |
| topic | task description | `topic:\s*(.+)` |
| perspective | task description | `perspective:\s*(.+)` or default "general" |
| dimensions | task description | `dimensions:\s*(.+)` or default "general" |
### Phase 3: Codebase Exploration

Delegate to `commands/explore.md` if available, otherwise execute inline.

**Exploration strategy**:

| Condition | Strategy |
|-----------|----------|
| Single perspective | Direct cli-explore-agent spawn |
| Multi-perspective | Per-perspective exploration with focused prompts |
| Limited context | ACE search + Grep fallback |

**cli-explore-agent spawn**:

```
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: "Explore codebase: <topic> (<perspective>)",
  prompt: `
## Analysis Context
Topic: <topic>
Perspective: <perspective>
Dimensions: <dimensions>
Session: <session-folder>

## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Execute relevant searches based on topic keywords
3. Run: ccw spec load --category exploration

## Exploration Focus (<perspective> angle)
<dimensions map to exploration focus areas>

## Output
Write findings to: <session-folder>/explorations/exploration-<num>.json

Schema: {
  perspective: "<perspective>",
  relevant_files: [{path, relevance, summary}],
  patterns: [string],
  key_findings: [string],
  questions_for_analysis: [string],
  _metadata: {agent: "cli-explore-agent", timestamp}
}
`
})
```
### Phase 4: Result Validation

**Validation steps**:

| Check | Method | Action on Failure |
|-------|--------|-------------------|
| Output file exists | Read output path | Create empty result structure |
| Has relevant files | Check array length | Trigger ACE fallback |
| Has findings | Check key_findings | Note partial results |

**ACE fallback** (when exploration produces no output):

```
mcp__ace-tool__search_context({
  project_root_path: ".",
  query: <topic>
})
```

**Quality validation**:

| Metric | Threshold | Action |
|--------|-----------|--------|
| relevant_files count | > 0 | Proceed |
| key_findings count | > 0 | Proceed |
| Both empty | - | Use ACE fallback, mark partial |

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[explorer]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Shared memory update**:

```
sharedMemory.explorations.push({
  id: "exploration-<num>",
  perspective: <perspective>,
  file_count: <count>,
  finding_count: <count>,
  timestamp: <timestamp>
})
```
---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No EXPLORE-* tasks available | Idle, wait for coordinator assignment |
| cli-explore-agent fails | Fall back to ACE search + Grep inline |
| Exploration scope too broad | Narrow to topic keywords, report partial |
| Agent timeout | Use partial results, note incomplete |
| Session folder missing | Create it, warn coordinator |
| Context/Plan file not found | Notify coordinator, request location |
@@ -1,255 +0,0 @@
# Command: synthesize

> Cross-perspective integration. Extracts themes from all exploration, analysis, and discussion results, resolves conflicts, and generates final conclusions and recommendations.

## When to Use

- Phase 3 of Synthesizer
- All exploration, analysis, and discussion are complete
- Triggered once per SYNTH-* task

**Trigger conditions**:
- After the discussion loop ends (user selects "analysis complete" or the maximum round count is reached)
- Triggered directly after analysis completes in Quick mode
## Strategy

### Delegation Mode

**Mode**: Inline (pure integration, no external tool calls)

### Decision Logic

```javascript
function buildSynthesisStrategy(explorationCount, analysisCount, discussionCount) {
  if (analysisCount <= 1 && discussionCount === 0) {
    return 'simple' // Quick mode: single perspective, direct summary
  }
  if (discussionCount > 2) {
    return 'deep' // Deep mode: multiple discussion rounds need evolution tracking
  }
  return 'standard' // Standard: multi-perspective cross-integration
}
```
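The three branches can be checked directly (the snippet re-states the function above so it runs standalone):

```javascript
// Standalone copy of the decision logic for a quick check.
function buildSynthesisStrategy(explorationCount, analysisCount, discussionCount) {
  if (analysisCount <= 1 && discussionCount === 0) return 'simple'
  if (discussionCount > 2) return 'deep'
  return 'standard'
}

// Quick mode: one analysis, no discussion rounds → 'simple'
// Long-running discussion (more than two rounds) → 'deep'
// Everything in between → 'standard'
const quick = buildSynthesisStrategy(1, 1, 0)
const deep = buildSynthesisStrategy(3, 4, 3)
const standard = buildSynthesisStrategy(2, 3, 2)
```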
## Execution Steps

### Step 1: Context Preparation

```javascript
const strategy = buildSynthesisStrategy(
  allExplorations.length, allAnalyses.length, allDiscussions.length
)

// Extract all insights
const allInsights = allAnalyses.flatMap(a =>
  (a.key_insights || []).map(i => ({
    ...(typeof i === 'string' ? { insight: i } : i),
    perspective: a.perspective
  }))
)

// Extract all findings
const allFindings = allAnalyses.flatMap(a =>
  (a.key_findings || []).map(f => ({
    ...(typeof f === 'string' ? { finding: f } : f),
    perspective: a.perspective
  }))
)

// Extract all recommendations
const allRecommendations = allAnalyses.flatMap(a =>
  (a.recommendations || []).map(r => ({
    ...(typeof r === 'string' ? { action: r } : r),
    perspective: a.perspective
  }))
)

// Extract discussion evolution
const discussionEvolution = allDiscussions.map(d => ({
  round: d.round,
  type: d.type,
  confirmed: d.updated_understanding?.confirmed || [],
  corrected: d.updated_understanding?.corrected || [],
  new_insights: d.updated_understanding?.new_insights || []
}))
```
### Step 2: Cross-Perspective Synthesis

```javascript
// 1. Theme Extraction — themes shared across perspectives
const themes = extractThemes(allInsights)

// 2. Conflict Resolution — contradictions between perspectives
const conflicts = identifyConflicts(allAnalyses)

// 3. Evidence Consolidation — aggregate supporting evidence
const consolidatedEvidence = consolidateEvidence(allFindings)

// 4. Recommendation Prioritization — rank recommendations by priority
const prioritizedRecommendations = prioritizeRecommendations(allRecommendations)

// 5. Decision Trail Integration — summarize decision tracking
const decisionSummary = summarizeDecisions(decisionTrail)

function extractThemes(insights) {
  // Cluster by keyword to identify themes shared across perspectives
  const themeMap = {}
  for (const insight of insights) {
    const text = insight.insight || insight
    // Naive clustering: similar insights fall into the same theme
    const key = text.slice(0, 30)
    if (!themeMap[key]) {
      themeMap[key] = { theme: text, perspectives: [], count: 0 }
    }
    themeMap[key].perspectives.push(insight.perspective)
    themeMap[key].count++
  }
  return Object.values(themeMap)
    .sort((a, b) => b.count - a.count)
    .slice(0, 10)
}

function identifyConflicts(analyses) {
  // Identify contradictory findings between perspectives
  const conflicts = []
  for (let i = 0; i < analyses.length; i++) {
    for (let j = i + 1; j < analyses.length; j++) {
      // Compare whether the two perspectives' findings contradict each other
      // A real implementation needs semantic comparison
    }
  }
  return conflicts
}

function consolidateEvidence(findings) {
  // Deduplicate and aggregate by file reference
  const byFile = {}
  for (const f of findings) {
    const ref = f.file_ref || f.finding
    if (!byFile[ref]) byFile[ref] = []
    byFile[ref].push(f)
  }
  return byFile
}

function prioritizeRecommendations(recommendations) {
  const priorityOrder = { high: 0, medium: 1, low: 2 }
  return recommendations
    .sort((a, b) => (priorityOrder[a.priority] || 2) - (priorityOrder[b.priority] || 2))
    .slice(0, 10)
}

function summarizeDecisions(trail) {
  return trail.map(d => ({
    round: d.round,
    decision: d.decision,
    context: d.context,
    impact: d.impact || 'Shaped analysis direction'
  }))
}
```
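As a sanity check on the clustering above: the 30-character key means insights with identical prefixes collapse into one theme regardless of perspective. The snippet re-states a condensed standalone copy of `extractThemes`:

```javascript
// Condensed standalone copy of extractThemes for a quick check.
function extractThemes(insights) {
  const themeMap = {}
  for (const insight of insights) {
    const text = insight.insight || insight
    const key = text.slice(0, 30) // naive prefix-based clustering
    if (!themeMap[key]) themeMap[key] = { theme: text, perspectives: [], count: 0 }
    themeMap[key].perspectives.push(insight.perspective)
    themeMap[key].count++
  }
  return Object.values(themeMap).sort((a, b) => b.count - a.count).slice(0, 10)
}

const themes = extractThemes([
  { insight: 'JWT handling is centralized in middleware', perspective: 'technical' },
  { insight: 'JWT handling is centralized in middleware', perspective: 'architectural' },
  { insight: 'Rate limiting is missing at the gateway', perspective: 'technical' }
])
// The shared JWT insight becomes the top theme, spanning two perspectives
```

Note the trade-off: prefix clustering is cheap but only merges near-identical wording; semantically similar insights with different phrasing stay separate.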
### Step 3: Build Conclusions

```javascript
const conclusions = {
  session_id: sessionFolder.split('/').pop(),
  topic,
  completed: new Date().toISOString(),
  total_rounds: allDiscussions.length,
  strategy_used: strategy,

  summary: generateSummary(themes, allFindings, allDiscussions),

  key_conclusions: themes.slice(0, 7).map(t => ({
    point: t.theme,
    evidence: t.perspectives.join(', ') + ' perspectives',
    confidence: t.count >= 3 ? 'high' : t.count >= 2 ? 'medium' : 'low'
  })),

  recommendations: prioritizedRecommendations.map(r => ({
    action: r.action,
    rationale: r.rationale || 'Based on analysis findings',
    priority: r.priority || 'medium',
    source_perspective: r.perspective
  })),

  open_questions: allAnalyses
    .flatMap(a => a.open_questions || [])
    .filter((q, i, arr) => arr.indexOf(q) === i)
    .slice(0, 5),

  decision_trail: decisionSummary,

  cross_perspective_synthesis: {
    convergent_themes: themes.filter(t => t.perspectives.length > 1),
    conflicts_resolved: conflicts,
    unique_contributions: allAnalyses.map(a => ({
      perspective: a.perspective,
      unique_insights: (a.key_insights || []).slice(0, 2)
    }))
  },

  _metadata: {
    explorations: allExplorations.length,
    analyses: allAnalyses.length,
    discussions: allDiscussions.length,
    decisions: decisionTrail.length,
    synthesis_strategy: strategy
  }
}

// Attached after construction: generateFollowUps inspects the conclusions
// object, so it cannot be called inside the object initializer itself.
conclusions.follow_up_suggestions = generateFollowUps(conclusions)

function generateSummary(themes, findings, discussions) {
  const topThemes = themes.slice(0, 3).map(t => t.theme).join('; ')
  const roundCount = discussions.length
  return `Analysis of "${topic}" identified ${themes.length} key themes across ${allAnalyses.length} perspective(s) and ${roundCount} discussion round(s). Top themes: ${topThemes}`
}

function generateFollowUps(conclusions) {
  const suggestions = []
  if ((conclusions.open_questions || []).length > 2) {
    suggestions.push({ type: 'deeper-analysis', summary: 'Further analysis needed for open questions' })
  }
  if ((conclusions.recommendations || []).some(r => r.priority === 'high')) {
    suggestions.push({ type: 'issue-creation', summary: 'Create issues for high-priority recommendations' })
  }
  suggestions.push({ type: 'implementation-plan', summary: 'Generate implementation plan from recommendations' })
  return suggestions
}
```
## Output Format

```json
{
  "session_id": "UAN-auth-analysis-2026-02-18",
  "topic": "Authentication architecture optimization",
  "completed": "2026-02-18T...",
  "total_rounds": 2,
  "summary": "Analysis identified 5 key themes...",
  "key_conclusions": [
    {"point": "JWT stateless approach is sound", "evidence": "technical, architectural", "confidence": "high"}
  ],
  "recommendations": [
    {"action": "Add rate limiting", "rationale": "Prevent brute force", "priority": "high"}
  ],
  "open_questions": ["Token rotation strategy?"],
  "decision_trail": [
    {"round": 1, "decision": "Focus on security", "context": "User preference"}
  ]
}
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No analyses available | Synthesize from explorations only |
| Single perspective only | Generate focused synthesis without cross-perspective |
| Irreconcilable conflicts | Present both sides with trade-off analysis |
| Empty discussion rounds | Skip discussion evolution, focus on analysis results |
| Shared memory corrupted | Rebuild from individual JSON files |
@@ -1,249 +0,0 @@
# Synthesizer Role

Synthesis integrator. Integrates all exploration, analysis, and discussion results across perspectives to generate final conclusions, recommendations, and a decision trail.

## Identity

- **Name**: `synthesizer` | **Tag**: `[synthesizer]`
- **Task Prefix**: `SYNTH-*`
- **Responsibility**: Read-only analysis (synthesized conclusions)

## Boundaries

### MUST

- Only process `SYNTH-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[synthesizer]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within synthesis responsibility scope
- Integrate all role outputs to generate final conclusions
- Share synthesis results via team_msg(type='state_update')
- Update discussion.md conclusions section

### MUST NOT

- Execute new code exploration or CLI analysis
- Interact directly with user
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Communicate directly with other worker roles
- Modify source code
- Omit `[synthesizer]` identifier in any output
---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `synthesize` | [commands/synthesize.md](commands/synthesize.md) | Phase 3 | Cross-perspective integration |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Read` | File | synthesizer | Read all session artifacts |
| `Write` | File | synthesizer | Write conclusions and updates |
| `Glob` | File | synthesizer | Find all exploration/analysis/discussion files |

> Synthesizer does not use subagents or CLI tools (pure integration role).

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `synthesis_ready` | synthesizer → coordinator | Synthesis complete | Carries final conclusions and recommendations |
| `error` | synthesizer → coordinator | Synthesis failed | Blocking error |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "synthesizer",
  type: "synthesis_ready",
  ref: "<output-path>"
})
```

> `to` and `summary` are auto-defaulted by the tool.

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from synthesizer --type synthesis_ready --ref <path> --json")
```
---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `SYNTH-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

Falls back to `synthesizer` for single-instance role.

### Phase 2: Context Loading + Shared Memory Read

**Loading steps**:

1. Extract session path from task description
2. Extract topic
3. Read role states via team_msg(operation="get_state")
4. Read all exploration files
5. Read all analysis files
6. Read all discussion round files
7. Extract decision trail and current understanding

**Context extraction**:

| Field | Source | Pattern |
|-------|--------|---------|
| sessionFolder | task description | `session:\s*(.+)` |
| topic | task description | `topic:\s*(.+)` |

**File loading**:

| Artifact | Path Pattern |
|----------|--------------|
| Explorations | `<session>/explorations/*.json` |
| Analyses | `<session>/analyses/*.json` |
| Discussions | `<session>/discussions/discussion-round-*.json` |
| Decision trail | `sharedMemory.decision_trail` |
| Current understanding | `sharedMemory.current_understanding` |
|
||||
|
||||
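The two extraction patterns above can be exercised directly. A minimal TypeScript sketch (the sample task description is invented for illustration):

```typescript
// Apply the two context-extraction patterns from the table above
// to a task description. Returns undefined fields when a pattern misses.
function extractContext(description: string): { sessionFolder?: string; topic?: string } {
  const session = description.match(/session:\s*(.+)/);
  const topic = description.match(/topic:\s*(.+)/);
  return {
    sessionFolder: session?.[1].trim(),
    topic: topic?.[1].trim(),
  };
}
```

Because `.` does not match newlines by default, each capture stops at the end of its own line, which is what the one-field-per-line task description format relies on.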
### Phase 3: Synthesis Execution

Delegate to `commands/synthesize.md` if available, otherwise execute inline.

**Synthesis dimensions**:

| Dimension | Source | Purpose |
|-----------|--------|---------|
| Explorations | All exploration files | Cross-perspective file relevance |
| Analyses | All analysis files | Key insights and discussion points |
| Discussions | All discussion rounds | Understanding evolution |
| Decision trail | sharedMemory | Critical decision history |

**Conclusions structure**:

| Field | Description |
|-------|-------------|
| summary | Executive summary |
| key_conclusions | Array of {point, confidence, evidence} |
| recommendations | Array of {priority, action, rationale} |
| open_questions | Remaining unresolved questions |
| _metadata | Synthesis metadata |

**Confidence levels**:

| Level | Criteria |
|-------|----------|
| High | Multiple sources confirm, strong evidence |
| Medium | Single source or partial evidence |
| Low | Speculative, needs verification |
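As a sketch, the conclusions structure above could be typed and populated like this. The field names come from the table; everything else (the priority scale, metadata fields, and all sample values) is invented for illustration:

```typescript
// Typed sketch of the conclusions structure. Field names follow the table above;
// the priority scale, _metadata layout, and sample values are illustrative assumptions.
type Confidence = "High" | "Medium" | "Low";

interface Conclusions {
  summary: string;
  key_conclusions: { point: string; confidence: Confidence; evidence: string }[];
  recommendations: { priority: string; action: string; rationale: string }[];
  open_questions: string[];
  _metadata: { generated_at: string; source_counts: Record<string, number> };
}

const conclusions: Conclusions = {
  summary: "Cache layer is the main bottleneck; invalidation strategy needs rework.",
  key_conclusions: [
    { point: "Stale reads come from TTL-only invalidation", confidence: "High",
      evidence: "Confirmed by both exploration and analysis artifacts" },
  ],
  recommendations: [
    { priority: "high", action: "Adopt event-driven invalidation",
      rationale: "Removes the stale-read window entirely" },
  ],
  open_questions: ["How should cold-start cache warming be handled?"],
  _metadata: {
    generated_at: new Date().toISOString(),
    source_counts: { explorations: 3, analyses: 2, discussions: 4 },
  },
};
```

An object of this shape serializes directly to the `conclusions.json` artifact written in Phase 4.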
### Phase 4: Write Conclusions + Update discussion.md

**Output paths**:

| File | Path |
|------|------|
| Conclusions | `<session-folder>/conclusions.json` |
| Discussion update | `<session-folder>/discussion.md` |

**discussion.md conclusions section**:

```markdown
## Conclusions

### Summary
<conclusions.summary>

### Key Conclusions
1. **<point>** (Confidence: <confidence>)
   - Evidence: <evidence>
2. ...

### Recommendations
1. **[<priority>]** <action>
   - Rationale: <rationale>
2. ...

### Remaining Questions
- <question 1>
- <question 2>

## Decision Trail

### Critical Decisions
- **Round N**: <decision> — <context>

## Current Understanding (Final)

### What We Established
- <established item 1>
- <established item 2>

### What Was Clarified/Corrected
- <clarified item 1>
- <clarified item 2>

### Key Insights
- <insight 1>
- <insight 2>

## Session Statistics
- **Explorations**: <count>
- **Analyses**: <count>
- **Discussion Rounds**: <count>
- **Decisions Made**: <count>
- **Completed**: <timestamp>
```

**Update steps**:

1. Write conclusions.json
2. Read current discussion.md
3. Append conclusions section
4. Write updated discussion.md
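Step 3 of the update sequence is a plain string append; a minimal sketch (TypeScript). Steps 1, 2, and 4 use the `Write` and `Read` tools; only the pure append logic is shown here:

```typescript
// Step 3 from the update steps above: append the rendered conclusions section
// to the existing discussion.md content, keeping exactly one blank line between them.
function appendConclusions(discussionMd: string, conclusionsSection: string): string {
  const separator = discussionMd.endsWith("\n") ? "" : "\n";
  return discussionMd + separator + "\n" + conclusionsSection.trimEnd() + "\n";
}
```

Keeping this step pure makes the read/append/write cycle easy to retry: if the final `Write` fails, re-running the whole sequence produces the same result.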
### Phase 5: Report to Coordinator + Shared Memory Write

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[synthesizer]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Shared memory update**:

```
sharedMemory.synthesis = {
  conclusion_count: <count>,
  recommendation_count: <count>,
  open_question_count: <count>,
  timestamp: <timestamp>
}
```

---
## Error Handling

| Scenario | Resolution |
|----------|------------|
| No SYNTH-* tasks | Idle, wait for assignment |
| No analyses/discussions found | Synthesize from explorations only |
| Conflicting analyses | Present both sides, recommend user decision |
| Empty shared memory | Generate minimal conclusions from discussion.md |
| Only one perspective | Create focused single-perspective synthesis |
| Command file not found | Fall back to inline execution |
| Session folder missing | Error to coordinator |