Refactor team collaboration skills and update documentation

- Renamed `team-lifecycle-v5` to `team-lifecycle` across various documentation files for consistency.
- Updated references in code examples and usage sections to reflect the new skill name.
- Added a new command file for the `monitor` functionality in the `team-iterdev` skill, detailing the coordinator's monitoring events and task management.
- Introduced new components for dynamic pipeline visualization and session coordinates display in the frontend.
- Implemented utility functions for pipeline stage detection and status derivation based on message history.
- Enhanced the team role panel to map members to their respective pipeline roles with status indicators.
- Updated Chinese documentation to reflect the changes in skill names and descriptions.
This commit is contained in:
catlog22
2026-03-04 11:07:48 +08:00
parent 5e96722c09
commit ffd5282932
132 changed files with 2938 additions and 18916 deletions


@@ -11,17 +11,23 @@ allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), Task
## Architecture Overview
```
┌──────────────────────────────────────────────────────────┐
│ Skill(skill="team-tech-debt", args="--role=xxx")         │
└────────────────────────┬─────────────────────────────────┘
                         │ Role Router
   ┌─────────┬───────────┼───────────┬──────────┬──────────┐
   ↓         ↓           ↓           ↓          ↓          ↓
┌────────┐┌────────┐┌──────────┐┌─────────┐┌────────┐┌─────────┐
│coordi- ││scanner ││assessor  ││planner  ││executor││validator│
│nator   ││TDSCAN-*││TDEVAL-*  ││TDPLAN-* ││TDFIX-* ││TDVAL-*  │
│ roles/ ││ roles/ ││ roles/   ││ roles/  ││ roles/ ││ roles/  │
└────────┘└────────┘└──────────┘└─────────┘└────────┘└─────────┘
+---------------------------------------------------+
| Skill(skill="team-tech-debt") |
| args="<task-description>" |
+-------------------+-------------------------------+
|
Orchestration Mode (auto -> coordinator)
|
Coordinator (inline)
Phase 0-5 orchestration
|
+-----+-----+-----+-----+-----+
v v v v v
[tw] [tw] [tw] [tw] [tw]
scann- asses- plan- execu- valid-
er sor ner tor ator
(tw) = team-worker agent
```
## Command Architecture
@@ -67,14 +73,14 @@ If no `--role` -> Orchestration Mode (auto route to coordinator).
### Role Registry
| Role | File | Task Prefix | Type | Compact |
|------|------|-------------|------|---------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | orchestrator | **Must re-read after compaction** |
| scanner | [roles/scanner/role.md](roles/scanner/role.md) | TDSCAN-* | pipeline | Must re-read after compaction |
| assessor | [roles/assessor/role.md](roles/assessor/role.md) | TDEVAL-* | pipeline | Must re-read after compaction |
| planner | [roles/planner/role.md](roles/planner/role.md) | TDPLAN-* | pipeline | Must re-read after compaction |
| executor | [roles/executor/role.md](roles/executor/role.md) | TDFIX-* | pipeline | Must re-read after compaction |
| validator | [roles/validator/role.md](roles/validator/role.md) | TDVAL-* | pipeline | Must re-read after compaction |
| Role | Spec | Task Prefix | Inner Loop |
|------|------|-------------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | - |
| scanner | [role-specs/scanner.md](role-specs/scanner.md) | TDSCAN-* | false |
| assessor | [role-specs/assessor.md](role-specs/assessor.md) | TDEVAL-* | false |
| planner | [role-specs/planner.md](role-specs/planner.md) | TDPLAN-* | false |
| executor | [role-specs/executor.md](role-specs/executor.md) | TDFIX-* | true |
| validator | [role-specs/validator.md](role-specs/validator.md) | TDVAL-* | false |
> **COMPACT PROTECTION**: Role files are execution documents, not reference material. After context compression, when only a summary of the role instructions remains, you **must immediately `Read` the corresponding role.md to reload it before continuing execution**. Never execute any Phase from the summary alone.
@@ -263,53 +269,71 @@ TDFIX -> TDVAL -> (if regression or quality drop) -> TDFIX-fix -> TDVAL-2
## Coordinator Spawn Template
> **Note**: This skill uses Stop-Wait coordination (`run_in_background: false`). Each role completes before the next spawns. This is intentionally different from the default `run_in_background: true` (Spawn-and-Stop). The Stop-Wait strategy ensures sequential pipeline execution where each phase's output is fully available before the next phase begins -- critical for the scan->assess->plan->execute->validate dependency chain.
### v5 Worker Spawn (all roles)
> The template below is a worker-prompt reference. Under the Stop-Wait strategy, the coordinator does not pre-spawn all workers in Phase 2. Instead, in Phase 4 (monitor) it spawns workers one stage at a time, blocking synchronously via `Task(run_in_background: false)`; a worker's return marks the stage as complete. See `roles/coordinator/commands/monitor.md`.
When coordinator spawns workers, use `team-worker` agent with role-spec path:
```
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker",
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-tech-debt/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: tech-debt
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`,
  run_in_background: false // Stop-Wait: synchronous blocking, wait for worker completion
})
```
**Inner Loop roles** (executor): Set `inner_loop: true`.
**Single-task roles** (scanner, assessor, planner, validator): Set `inner_loop: false`.
**Role-specific spawn parameters**:
| Role | Prefix | inner_loop |
|------|--------|------------|
| scanner | TDSCAN-* | false |
| assessor | TDEVAL-* | false |
| planner | TDPLAN-* | false |
| executor | TDFIX-* | true |
| validator | TDVAL-* | false |
---
## Completion Action
When the pipeline completes (all tasks done, coordinator Phase 5):
```
AskUserQuestion({
questions: [{
question: "Tech Debt pipeline complete. What would you like to do?",
header: "Completion",
multiSelect: false,
options: [
{ label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
{ label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
{ label: "Export Results", description: "Export deliverables to a specified location, then clean" }
]
}]
})
```
| Choice | Action |
|--------|--------|
| Archive & Clean | Update session status="completed" -> TeamDelete(tech-debt) -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions: `Skill(skill="team-tech-debt", args="resume")` |
| Export Results | AskUserQuestion for target path -> copy deliverables -> Archive & Clean |
---


@@ -1,164 +0,0 @@
# Command: evaluate
> CLI-assisted evaluation of debt items. For each item, assess business impact (1-5), fix cost (1-5), and the risk if left unfixed, producing priority-quadrant assignments.
## When to Use
- Phase 3 of Assessor
- Items in the debt inventory need quantitative evaluation
- The inventory is large enough to warrant CLI-assisted analysis
**Trigger conditions**:
- A TDEVAL-* task enters Phase 3
- The debt inventory contains >10 items to evaluate
- Impact and cost evaluation requires contextual understanding
## Strategy
### Delegation Mode
**Mode**: CLI Batch Analysis
**CLI Tool**: `gemini` (primary)
**CLI Mode**: `analysis`
### Decision Logic
```javascript
// Evaluation strategy selection
if (debtInventory.length <= 10) {
  // Few items: inline evaluation (severity/effort heuristics)
  mode = 'heuristic'
} else if (debtInventory.length <= 50) {
  // Medium scale: single CLI batch evaluation
  mode = 'cli-batch'
} else {
  // Large scale: chunked CLI evaluation
  mode = 'cli-chunked'
  chunkSize = 25
}
```
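Packaged as a pure function, the same thresholds can be exercised standalone (the function name is illustrative; the 10/50 cut-offs and chunk size of 25 come from the snippet above):

```javascript
// Select the evaluation strategy from the inventory size.
// Thresholds mirror the decision logic above.
function chooseEvaluationMode(itemCount) {
  if (itemCount <= 10) return { mode: 'heuristic' }
  if (itemCount <= 50) return { mode: 'cli-batch' }
  return { mode: 'cli-chunked', chunkSize: 25 }
}
```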
## Execution Steps
### Step 1: Context Preparation
```javascript
// Prepare the evaluation context
const debtSummary = debtInventory.map(item =>
`[${item.id}] [${item.dimension}] [${item.severity}] ${item.file}:${item.line} - ${item.description}`
).join('\n')
// Read project metadata for context
const projectContext = []
try {
const pkg = JSON.parse(Read('package.json'))
projectContext.push(`Project: ${pkg.name}, Dependencies: ${Object.keys(pkg.dependencies || {}).length}`)
} catch {}
```
### Step 2: Execute Strategy
```javascript
if (mode === 'heuristic') {
// Inline heuristic evaluation
for (const item of debtInventory) {
const severityImpact = { critical: 5, high: 4, medium: 3, low: 1 }
const effortCost = { small: 1, medium: 3, large: 5 }
item.impact_score = severityImpact[item.severity] || 3
item.cost_score = effortCost[item.estimated_effort] || 3
item.risk_if_unfixed = getRiskDescription(item)
item.priority_quadrant = assignQuadrant(item.impact_score, item.cost_score)
}
} else {
// CLI batch evaluation
const prompt = `PURPOSE: Evaluate technical debt items for business impact and fix cost to create a priority matrix
TASK: • For each debt item, assess business impact (1-5 scale: 1=negligible, 5=critical) • Assess fix complexity/cost (1-5 scale: 1=trivial, 5=major refactor) • Describe risk if unfixed • Assign priority quadrant: quick-win (high impact + low cost), strategic (high impact + high cost), backlog (low impact + low cost), defer (low impact + high cost)
MODE: analysis
CONTEXT: ${projectContext.join(' | ')}
EXPECTED: JSON array with: [{id, impact_score, cost_score, risk_if_unfixed, priority_quadrant}] for each item
CONSTRAINTS: Be realistic about costs, consider dependencies between items
## Debt Items to Evaluate
${debtSummary}`
Bash(`ccw cli -p "${prompt}" --tool gemini --mode analysis --rule analysis-analyze-code-patterns`, {
run_in_background: true
})
// Wait for the CLI to finish, parse the results, merge back into debtInventory
}
function assignQuadrant(impact, cost) {
if (impact >= 4 && cost <= 2) return 'quick-win'
if (impact >= 4 && cost >= 3) return 'strategic'
if (impact <= 3 && cost <= 2) return 'backlog'
return 'defer'
}
function getRiskDescription(item) {
const risks = {
'code': 'Increased maintenance cost and bug probability',
'architecture': 'Growing coupling makes changes harder and riskier',
'testing': 'Reduced confidence in changes, higher regression risk',
'dependency': 'Security vulnerabilities and compatibility issues',
'documentation': 'Onboarding friction and knowledge loss'
}
return risks[item.dimension] || 'Technical quality degradation over time'
}
```
### Step 3: Result Processing
```javascript
// Verify evaluation completeness
const evaluated = debtInventory.filter(i => i.priority_quadrant)
const unevaluated = debtInventory.filter(i => !i.priority_quadrant)
if (unevaluated.length > 0) {
// Fall back to heuristics for unevaluated items
for (const item of unevaluated) {
item.impact_score = item.impact_score || 3
item.cost_score = item.cost_score || 3
item.priority_quadrant = assignQuadrant(item.impact_score, item.cost_score)
item.risk_if_unfixed = item.risk_if_unfixed || getRiskDescription(item)
}
}
// Generate statistics
const stats = {
total: debtInventory.length,
evaluated_by_cli: evaluated.length,
evaluated_by_heuristic: unevaluated.length,
avg_impact: (debtInventory.reduce((s, i) => s + i.impact_score, 0) / debtInventory.length).toFixed(1),
avg_cost: (debtInventory.reduce((s, i) => s + i.cost_score, 0) / debtInventory.length).toFixed(1)
}
```
## Output Format
```
## Evaluation Results
### Method: [heuristic|cli-batch|cli-chunked]
### Total Items: [count]
### Average Impact: [score]/5
### Average Cost: [score]/5
### Priority Distribution
| Quadrant | Count | % |
|----------|-------|---|
| Quick-Win | [n] | [%] |
| Strategic | [n] | [%] |
| Backlog | [n] | [%] |
| Defer | [n] | [%] |
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| CLI returns invalid JSON | Fall back to heuristic scoring |
| CLI timeout | Evaluate processed items, heuristic for rest |
| Debt inventory too large (>200) | Chunk into batches of 25 |
| Missing severity/effort data | Use dimension-based defaults |
| All items same quadrant | Re-evaluate with adjusted thresholds |
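For the >200 case in the table above, a minimal chunking helper might look like this (a sketch; `chunkInventory` is a hypothetical name, with the batch size of 25 stated above as the default):

```javascript
// Split a large debt inventory into batches for chunked CLI evaluation.
function chunkInventory(items, chunkSize = 25) {
  const chunks = []
  for (let i = 0; i < items.length; i += chunkSize) {
    chunks.push(items.slice(i, i + chunkSize))
  }
  return chunks
}
```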


@@ -1,185 +0,0 @@
# Assessor Role
Quantitative technical-debt assessor. Scores each debt item found by the scan for business impact (1-5) and fix cost (1-5), assigns priority quadrants, and generates priority-matrix.json.
## Identity
- **Name**: `assessor` | **Tag**: `[assessor]`
- **Task Prefix**: `TDEVAL-*`
- **Responsibility**: Read-only analysis (quantitative assessment)
## Boundaries
### MUST
- Only process `TDEVAL-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[assessor]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within quantitative assessment responsibility scope
- Base evaluations on data from debt inventory
### MUST NOT
- Modify source code or test code
- Execute fix operations
- Create tasks for other roles
- Communicate directly with other worker roles (must go through coordinator)
- Omit `[assessor]` identifier in any output
---
## Toolbox
### Available Commands
| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `evaluate` | [commands/evaluate.md](commands/evaluate.md) | Phase 3 | Impact/cost matrix evaluation |
### Tool Capabilities
| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `gemini` | CLI | evaluate.md | Debt impact and fix-cost evaluation |
> Assessor does not directly use subagents
---
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `assessment_complete` | assessor -> coordinator | Evaluation complete | Includes priority-matrix summary |
| `error` | assessor -> coordinator | Evaluation failed | Blocking error |
## Message Bus
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
```
mcp__ccw-tools__team_msg({
operation: "log",
session_id: <session-id>,
from: "assessor",
type: <message-type>,
ref: <artifact-path>
})
```
**CLI fallback** (when MCP unavailable):
```
Bash("ccw team log --session-id <session-id> --from assessor --type <message-type> --ref <artifact-path> --json")
```
---
## Execution (5-Phase)
### Phase 1: Task Discovery
> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery
Standard task discovery flow: TaskList -> filter by prefix `TDEVAL-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
### Phase 2: Load Debt Inventory
| Input | Source | Required |
|-------|--------|----------|
| Session folder | task.description (regex: `session:\s*(.+)`) | Yes |
| Shared memory | `<session-folder>/.msg/meta.json` | Yes |
| Debt inventory | meta.json:debt_inventory OR `<session-folder>/scan/debt-inventory.json` | Yes |
**Loading steps**:
1. Extract session path from task description
2. Read .msg/meta.json
3. Load debt_inventory from shared memory or fallback to debt-inventory.json file
4. If debt_inventory is empty -> report empty assessment and exit
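The extraction in step 1 can be sketched with the regex from the input table (`extractSessionPath` is an illustrative helper name):

```javascript
// Extract the session folder from a task description using
// the `session:\s*(.+)` pattern from the input table.
function extractSessionPath(description) {
  const match = description.match(/session:\s*(.+)/)
  return match ? match[1].trim() : null
}
```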
### Phase 3: Evaluate Each Item
Delegate to `commands/evaluate.md` if available, otherwise execute inline.
**Core Strategy**: For each debt item, evaluate impact(1-5) + cost(1-5) + priority quadrant
**Impact Score Mapping**:
| Severity | Impact Score |
|----------|--------------|
| critical | 5 |
| high | 4 |
| medium | 3 |
| low | 1 |
**Cost Score Mapping**:
| Estimated Effort | Cost Score |
|------------------|------------|
| small | 1 |
| medium | 3 |
| large | 5 |
| unknown | 3 |
**Priority Quadrant Classification**:
| Impact | Cost | Quadrant | Description |
|--------|------|----------|-------------|
| >= 4 | <= 2 | quick-win | High impact, low cost |
| >= 4 | >= 3 | strategic | High impact, high cost |
| <= 3 | <= 2 | backlog | Low impact, low cost |
| <= 3 | >= 3 | defer | Low impact, high cost |
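Taken together, the three mapping tables above reduce to one pure function; a minimal sketch (the function name and default handling for missing fields are assumptions):

```javascript
// Score a debt item per the mapping tables: severity -> impact,
// estimated effort -> cost, then (impact, cost) -> priority quadrant.
function scoreDebtItem(item) {
  const impact = { critical: 5, high: 4, medium: 3, low: 1 }[item.severity] ?? 3
  const cost = { small: 1, medium: 3, large: 5 }[item.estimated_effort] ?? 3
  let quadrant
  if (impact >= 4 && cost <= 2) quadrant = 'quick-win'
  else if (impact >= 4) quadrant = 'strategic'
  else if (cost <= 2) quadrant = 'backlog'
  else quadrant = 'defer'
  return { impact_score: impact, cost_score: cost, priority_quadrant: quadrant }
}
```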
**Evaluation record**:
| Field | Description |
|-------|-------------|
| `impact_score` | 1-5, business impact |
| `cost_score` | 1-5, fix effort |
| `risk_if_unfixed` | Risk description |
| `priority_quadrant` | quick-win/strategic/backlog/defer |
### Phase 4: Generate Priority Matrix
**Matrix structure**:
| Field | Description |
|-------|-------------|
| `evaluation_date` | ISO timestamp |
| `total_items` | Count of evaluated items |
| `by_quadrant` | Items grouped by quadrant |
| `summary` | Count per quadrant |
**Sorting**: Within each quadrant, sort by impact_score descending
**Save outputs**:
1. Write `<session-folder>/assessment/priority-matrix.json`
2. Update .msg/meta.json with `priority_matrix` summary and evaluated `debt_inventory`
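A sketch of the grouping and impact-descending sort described above (field names follow the matrix structure table; the function name is illustrative):

```javascript
// Build the priority matrix: group evaluated items by quadrant,
// sort each quadrant by impact_score descending, and summarize counts.
function buildPriorityMatrix(items) {
  const byQuadrant = {}
  for (const item of items) {
    if (!byQuadrant[item.priority_quadrant]) byQuadrant[item.priority_quadrant] = []
    byQuadrant[item.priority_quadrant].push(item)
  }
  for (const quadrant of Object.keys(byQuadrant)) {
    byQuadrant[quadrant].sort((a, b) => b.impact_score - a.impact_score)
  }
  return {
    evaluation_date: new Date().toISOString(),
    total_items: items.length,
    by_quadrant: byQuadrant,
    summary: Object.fromEntries(
      Object.entries(byQuadrant).map(([q, list]) => [q, list.length])
    )
  }
}
```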
### Phase 5: Report to Coordinator
> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report
Standard report flow: team_msg log -> SendMessage with `[assessor]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.
**Report content**:
| Field | Value |
|-------|-------|
| Task | task.subject |
| Total Items | Count of evaluated items |
| Priority Matrix | Count per quadrant |
| Top Quick-Wins | Top 5 quick-win items with details |
| Priority Matrix File | Path to priority-matrix.json |
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No TDEVAL-* tasks available | Idle, wait for coordinator |
| Debt inventory empty | Report empty assessment, notify coordinator |
| Shared memory corrupted | Re-read from debt-inventory.json file |
| CLI analysis fails | Fall back to severity-based heuristic scoring |
| Too many items (>200) | Batch-evaluate top 50 critical/high first |


@@ -1,413 +1,234 @@
# Command: Monitor
> Stop-Wait coordination: spawn workers synchronously, one pipeline stage at a time; a worker's return marks the stage as complete, so no polling is needed.
Handle all coordinator monitoring events: worker callbacks, status checks, pipeline advancement, fix-verify loops, and completion.
## When to Use
- Phase 4 of Coordinator
- The task chain has been created (dispatch complete)
- Workers must be driven stage by stage until all tasks complete
## Constants
| Key | Value |
|-----|-------|
| SPAWN_MODE | background |
| ONE_STEP_PER_INVOCATION | true |
| WORKER_AGENT | team-worker |
| MAX_GC_ROUNDS | 3 |
**Trigger conditions**:
- Starts immediately after dispatch completes
- Re-entered after the Fix-Verify loop creates new tasks
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session state | <session>/session.json | Yes |
| Task list | TaskList() | Yes |
| Trigger event | From Entry Router detection | Yes |
| Meta state | <session>/.msg/meta.json | Yes |
| Pipeline definition | From SKILL.md | Yes |
1. Load session.json for current state, `pipeline_mode`, `gc_rounds`
2. Run TaskList() to get current task statuses
3. Identify trigger event type from Entry Router
## Strategy
### Delegation Mode
**Mode**: Stop-Wait (synchronous blocking Task call, not polling)
### Role Detection Table
| Message Pattern | Role Detection |
|----------------|---------------|
| `[scanner]` or task ID `TDSCAN-*` | scanner |
| `[assessor]` or task ID `TDEVAL-*` | assessor |
| `[planner]` or task ID `TDPLAN-*` | planner |
| `[executor]` or task ID `TDFIX-*` | executor |
| `[validator]` or task ID `TDVAL-*` | validator |
### Design Principles
> **Model execution has no concept of time; any form of poll-and-wait is forbidden.**
>
> - ❌ Forbidden: `while` loop + `sleep` + status check (idles away API turns)
> - ❌ Forbidden: `Bash(sleep N)` / `Bash(timeout /t N)` as a waiting mechanism
> - ✅ Use: a synchronous `Task()` call (`run_in_background: false`); the call itself is the wait
> - ✅ Use: worker return = stage-completion signal (a natural callback)
>
> **Principle**: `Task(run_in_background: false)` is a blocking call; the coordinator suspends automatically until the worker returns.
> No sleep, no polling, no message-bus monitoring. The worker's return is the callback.
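The Role Detection Table above can be sketched as a small matcher (the check order and function name are assumptions):

```javascript
// Detect the sending role from a message per the Role Detection Table:
// either an explicit [role] tag or a task-ID prefix.
const ROLE_PATTERNS = [
  { role: 'scanner', tag: '[scanner]', prefix: 'TDSCAN-' },
  { role: 'assessor', tag: '[assessor]', prefix: 'TDEVAL-' },
  { role: 'planner', tag: '[planner]', prefix: 'TDPLAN-' },
  { role: 'executor', tag: '[executor]', prefix: 'TDFIX-' },
  { role: 'validator', tag: '[validator]', prefix: 'TDVAL-' }
]
function detectRole(message) {
  for (const { role, tag, prefix } of ROLE_PATTERNS) {
    if (message.includes(tag) || message.includes(prefix)) return role
  }
  return null
}
```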
### Pipeline Stage Order
### Stage-Worker 映射表
```
TDSCAN -> TDEVAL -> TDPLAN -> TDFIX -> TDVAL
```
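The stage order implies a simple successor lookup; a minimal sketch:

```javascript
// Pipeline stage order: TDSCAN -> TDEVAL -> TDPLAN -> TDFIX -> TDVAL.
const STAGE_ORDER = ['TDSCAN', 'TDEVAL', 'TDPLAN', 'TDFIX', 'TDVAL']
// Return the next stage prefix, or null when the pipeline is complete.
function nextStage(prefix) {
  const i = STAGE_ORDER.indexOf(prefix)
  if (i === -1 || i === STAGE_ORDER.length - 1) return null
  return STAGE_ORDER[i + 1]
}
```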
## Phase 3: Event Handlers
### handleCallback
Triggered when a worker sends completion message.
1. Parse message to identify role and task ID using Role Detection Table
2. Mark task as completed:
```
TaskUpdate({ taskId: "<task-id>", status: "completed" })
```
3. Record completion in session state
4. **Plan Approval Gate** (when planner TDPLAN completes):
Before advancing to TDFIX, present the remediation plan to the user for approval.
```
// Read the generated plan
planContent = Read(<session>/plan/remediation-plan.md)
|| Read(<session>/plan/remediation-plan.json)
AskUserQuestion({
questions: [{
question: "Remediation plan generated. Review and decide:",
header: "Plan Review",
multiSelect: false,
options: [
{ label: "Approve", description: "Proceed with fix execution" },
{ label: "Revise", description: "Re-run planner with feedback" },
{ label: "Abort", description: "Stop pipeline, no fixes applied" }
]
}]
})
```
| Decision | Action |
|----------|--------|
| Approve | Proceed to handleSpawnNext (TDFIX becomes ready) |
| Revise | Create TDPLAN-revised task, proceed to handleSpawnNext |
| Abort | Log shutdown, transition to handleComplete |
5. **GC Loop Check** (when validator TDVAL completes):
Read `<session>/.msg/meta.json` for validation results.
| Condition | Action |
|-----------|--------|
| No regressions found | Proceed to handleSpawnNext (pipeline complete) |
| Regressions found AND gc_rounds < 3 | Create fix-verify tasks, increment gc_rounds |
| Regressions found AND gc_rounds >= 3 | Accept current state, proceed to handleComplete |
**Fix-Verify Task Creation** (when regressions detected):
```
TaskCreate({
subject: "TDFIX-fix-<round>",
description: "PURPOSE: Fix regressions found by validator | Success: All regressions resolved
TASK:
- Load validation report with regression details
- Apply targeted fixes for each regression
- Re-validate fixes locally before completion
CONTEXT:
- Session: <session-folder>
- Upstream artifacts: <session>/.msg/meta.json
EXPECTED: Fixed source files | Regressions resolved
CONSTRAINTS: Targeted fixes only | Do not introduce new regressions",
blockedBy: [],
status: "pending"
})
TaskCreate({
subject: "TDVAL-recheck-<round>",
description: "PURPOSE: Re-validate after regression fixes | Success: Zero regressions
TASK:
- Run full validation suite on fixed code
- Compare debt scores before and after
- Report regression status
CONTEXT:
- Session: <session-folder>
EXPECTED: Validation results with regression count
CONSTRAINTS: Read-only validation",
blockedBy: ["TDFIX-fix-<round>"],
status: "pending"
})
```
6. Proceed to handleSpawnNext
### handleSpawnNext
Find and spawn the next ready tasks.
1. Scan task list for tasks where:
- Status is "pending"
- All blockedBy tasks have status "completed"
2. If no ready tasks and all tasks completed, proceed to handleComplete
3. If no ready tasks but some still in_progress, STOP and wait
4. For each ready task, determine role from task subject prefix:
```javascript
const STAGE_WORKER_MAP = {
  'TDSCAN': { role: 'scanner' },
  'TDEVAL': { role: 'assessor' },
  'TDPLAN': { role: 'planner' },
  'TDFIX': { role: 'executor' },
  'TDVAL': { role: 'validator' }
}
```
5. Spawn team-worker (one at a time for sequential pipeline).
## Execution Steps
### Step 1: Context Preparation
```javascript
const sharedMemory = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`))
let fixVerifyIteration = 0
const MAX_FIX_VERIFY_ITERATIONS = 3
let worktreeCreated = false
// Get the pipeline stage task list (creation order = dependency order)
const allTasks = TaskList()
const pipelineTasks = allTasks
.filter(t => t.owner && t.owner !== 'coordinator')
.sort((a, b) => Number(a.id) - Number(b.id))
// Unified auto-mode detection
const autoYes = /\b(-y|--yes)\b/.test(args)
```
### Step 2: Sequential Stage Execution (Stop-Wait)
> **Core**: spawn each stage's worker and block synchronously until it returns.
> Worker return = stage complete. No sleep, no polling, no message-bus monitoring.
```javascript
for (const stageTask of pipelineTasks) {
// 1. Extract the stage prefix -> determine the worker role
const stagePrefix = stageTask.subject.match(/^(TD\w+)-/)?.[1]
const workerConfig = STAGE_WORKER_MAP[stagePrefix]
if (!workerConfig) {
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
type: "error",
})
continue
}
// 2. Mark the task as in progress
TaskUpdate({ taskId: stageTask.id, status: 'in_progress' })
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
to: workerConfig.role, type: "task_unblocked",
})
// 3. Spawn the worker synchronously -- blocks until the worker returns (the Stop-Wait core)
// Task() itself is the waiting mechanism; no sleep/poll needed
const workerResult = Task({
subagent_type: "team-worker",
description: `Spawn ${workerConfig.role} worker for ${stageTask.subject}`,
team_name: teamName,
name: workerConfig.role,
prompt: buildWorkerPrompt(stageTask, workerConfig, sessionFolder, taskDescription),
run_in_background: false // ← synchronous blocking = natural callback
})
// 4. The worker has returned -- process the result directly (no status polling needed)
const taskState = TaskGet({ taskId: stageTask.id })
if (taskState.status !== 'completed') {
// Worker returned but the task was not marked completed -> handle as a failure
handleStageFailure(stageTask, taskState, workerConfig, autoYes)
} else {
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
type: "quality_gate",
})
}
// 5. Plan Approval Gate (after TDPLAN completes, before entering TDFIX)
if (stagePrefix === 'TDPLAN' && taskState.status === 'completed') {
// Read the remediation plan
let planContent = ''
try { planContent = Read(`${sessionFolder}/plan/remediation-plan.md`) } catch {}
if (!planContent) {
try { planContent = JSON.stringify(JSON.parse(Read(`${sessionFolder}/plan/remediation-plan.json`)), null, 2) } catch {}
}
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
type: "plan_approval",
})
if (!autoYes) {
// Present the plan summary for user review
// Note: the plan content is surfaced via the AskUserQuestion description
const approval = AskUserQuestion({
questions: [{
question: `Remediation plan generated. Review and decide:\n\n${planContent ? planContent.slice(0, 2000) : '(plan file not found; see ' + sessionFolder + '/plan/)'}${planContent && planContent.length > 2000 ? '\n\n... (truncated; full plan at ' + sessionFolder + '/plan/)' : ''}`,
header: "Plan Review",
multiSelect: false,
options: [
{ label: "Approve", description: "Create the worktree and execute fixes per this plan" },
{ label: "Revise", description: "Re-plan (re-spawn the planner)" },
{ label: "Abort", description: "Stop the pipeline; apply no fixes" }
]
}]
})
const planDecision = approval["Plan Review"]
if (planDecision === "Revise") {
// Re-create the TDPLAN task and re-spawn the planner
const revisedTask = TaskCreate({
subject: `TDPLAN-revised: revise remediation plan`,
description: `session: ${sessionFolder}\nRequirement: ${taskDescription}\nUser requested plan revision`,
activeForm: "Revising remediation plan"
})
TaskUpdate({ taskId: revisedTask.id, owner: 'planner', status: 'pending' })
// Insert the revised task right after the current position for re-execution
pipelineTasks.splice(pipelineTasks.indexOf(stageTask) + 1, 0, {
id: revisedTask.id,
subject: `TDPLAN-revised`,
description: revisedTask.description
})
continue // skip ahead to the next stage (the just-inserted revised task)
} else if (planDecision === "Abort") {
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
type: "shutdown",
})
break // exit the pipeline loop
}
// "Approve" -> continue
}
}
// 6. Worktree Creation (before TDFIX, once the plan is approved)
if (stagePrefix === 'TDFIX' && !worktreeCreated) {
const branchName = `tech-debt/TD-${sessionSlug}-${sessionDate}`
const worktreePath = `.worktrees/TD-${sessionSlug}-${sessionDate}`
// Create the worktree on a new branch
Bash(`git worktree add -b "${branchName}" "${worktreePath}"`)
// Install dependencies (if package.json exists)
Bash(`cd "${worktreePath}" && npm install --ignore-scripts 2>/dev/null || true`)
// Save to shared memory
sharedMemory.worktree = { path: worktreePath, branch: branchName }
Write(`${sessionFolder}/.msg/meta.json`, JSON.stringify(sharedMemory, null, 2))
worktreeCreated = true
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
type: "worktree_created",
})
}
// 7. Inter-stage quality check (TDVAL stage only)
if (stagePrefix === 'TDVAL') {
const needsFixVerify = evaluateValidationResult(sessionFolder)
if (needsFixVerify && fixVerifyIteration < MAX_FIX_VERIFY_ITERATIONS) {
fixVerifyIteration++
const fixVerifyTasks = createFixVerifyTasks(fixVerifyIteration, sessionFolder)
// Append the Fix-Verify tasks to the pipeline tail and continue executing
pipelineTasks.push(...fixVerifyTasks)
}
}
}
```
### Step 2.1: Worker Prompt Builder
```javascript
function buildWorkerPrompt(stageTask, workerConfig, sessionFolder, taskDescription) {
const stagePrefix = stageTask.subject.match(/^(TD\w+)-/)?.[1] || 'TD'
// Worktree injection (TDFIX and TDVAL stages)
let worktreeSection = ''
if (sharedMemory.worktree && (stagePrefix === 'TDFIX' || stagePrefix === 'TDVAL')) {
worktreeSection = `
## Worktree强制
- Worktree 路径: ${sharedMemory.worktree.path}
- 分支: ${sharedMemory.worktree.branch}
- **所有文件读取、修改、命令执行必须在 worktree 路径下进行**
- 使用 \`cd "${sharedMemory.worktree.path}" && ...\` 前缀执行所有 Bash 命令
- 禁止在主工作树中修改任何文件`
}
return `## Role Assignment
role: ${workerConfig.role}
role_spec: .claude/skills/team-tech-debt/role-specs/${workerConfig.role}.md
session: ${sessionFolder}
session_id: ${sessionId}
team_name: ${teamName}
requirement: ${stageTask.description || taskDescription}
## Current Task
- Task ID: ${stageTask.id}
- Task: ${stageTask.subject}
- Task Prefix: ${stagePrefix}
${worktreeSection}
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`
}
```
6. STOP after spawning -- wait for the next callback
### Step 2.2: Stage Failure Handler
```javascript
function handleStageFailure(stageTask, taskState, workerConfig, autoYes) {
if (autoYes) {
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
type: "error",
})
TaskUpdate({ taskId: stageTask.id, status: 'deleted' })
return 'skip'
}
const decision = AskUserQuestion({
questions: [{
question: `Stage "${stageTask.subject}": the worker returned but the task is not complete (status=${taskState.status}). How should this be handled?`,
header: "Stage Fail",
multiSelect: false,
options: [
{ label: "Retry", description: "Re-spawn the worker to run this stage again" },
{ label: "Skip", description: "Mark as skipped and continue the rest of the pipeline" },
{ label: "Abort", description: "Stop the whole flow and report current results" }
]
}]
})
const answer = decision["Stage Fail"]
if (answer === "Retry") {
// Re-spawn the worker (single retry)
TaskUpdate({ taskId: stageTask.id, status: 'in_progress' })
const retryResult = Task({
subagent_type: "team-worker",
description: `Retry ${workerConfig.role} worker for ${stageTask.subject}`,
team_name: teamName,
name: workerConfig.role,
prompt: buildWorkerPrompt(stageTask, workerConfig, sessionFolder, taskDescription),
run_in_background: false
})
const retryState = TaskGet({ taskId: stageTask.id })
if (retryState.status !== 'completed') {
TaskUpdate({ taskId: stageTask.id, status: 'deleted' })
}
return 'retried'
} else if (answer === "Skip") {
TaskUpdate({ taskId: stageTask.id, status: 'deleted' })
return 'skip'
} else {
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
type: "shutdown",
})
return 'abort'
}
}
```
### Step 2.3: Validation Evaluation
```javascript
function evaluateValidationResult(sessionFolder) {
const latestMemory = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`))
const debtBefore = latestMemory.debt_score_before || 0
const debtAfter = latestMemory.debt_score_after || 0
const regressions = latestMemory.validation_results?.regressions || 0
const improved = debtAfter < debtBefore
let status = 'PASS'
if (!improved && regressions > 0) status = 'FAIL'
else if (!improved) status = 'CONDITIONAL'
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
type: "quality_gate",
})
return regressions > 0
}
```
### Step 3: Result Processing + PR Creation
```javascript
// Aggregate all results
const finalSharedMemory = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`))
const allFinalTasks = TaskList()
const workerTasks = allFinalTasks.filter(t => t.owner && t.owner !== 'coordinator')
// PR creation (worktree execution mode, after validation passes)
if (finalSharedMemory.worktree && finalSharedMemory.validation_results?.passed) {
const { path: wtPath, branch } = finalSharedMemory.worktree
// Commit all changes in worktree
Bash(`cd "${wtPath}" && git add -A && git commit -m "$(cat <<'EOF'
tech-debt: ${taskDescription}
Automated tech debt cleanup via team-tech-debt pipeline.
Mode: ${pipelineMode}
Items fixed: ${finalSharedMemory.fix_results?.items_fixed || 0}
Debt score: ${finalSharedMemory.debt_score_before}${finalSharedMemory.debt_score_after}
EOF
)"`)
// Push + Create PR
Bash(`cd "${wtPath}" && git push -u origin "${branch}"`)
const prTitle = `Tech Debt: ${taskDescription.slice(0, 50)}`
Bash(`cd "${wtPath}" && gh pr create --title "${prTitle}" --body "$(cat <<'EOF'
## Tech Debt Cleanup
**Mode**: ${pipelineMode}
**Items fixed**: ${finalSharedMemory.fix_results?.items_fixed || 0}
**Debt score**: ${finalSharedMemory.debt_score_before}${finalSharedMemory.debt_score_after}
### Validation
- Tests: ${finalSharedMemory.validation_results?.checks?.test_suite?.status || 'N/A'}
- Types: ${finalSharedMemory.validation_results?.checks?.type_check?.status || 'N/A'}
- Lint: ${finalSharedMemory.validation_results?.checks?.lint_check?.status || 'N/A'}
### Session
${sessionFolder}
EOF
)"`)
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
type: "pr_created",
})
// Cleanup worktree
Bash(`git worktree remove "${wtPath}" 2>/dev/null || true`)
} else if (finalSharedMemory.worktree && !finalSharedMemory.validation_results?.passed) {
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
type: "quality_gate",
})
}
const summary = {
total_tasks: workerTasks.length,
completed_tasks: workerTasks.filter(t => t.status === 'completed').length,
fix_verify_iterations: fixVerifyIteration,
debt_score_before: finalSharedMemory.debt_score_before,
debt_score_after: finalSharedMemory.debt_score_after
}
```
## Output Format
Output current pipeline status.
```
## Coordination Summary
Pipeline Status:
[DONE] TDSCAN-001 (scanner) -> scan complete
[DONE] TDEVAL-001 (assessor) -> assessment ready
[DONE] TDPLAN-001 (planner) -> plan approved
[RUN] TDFIX-001 (executor) -> fixing...
[WAIT] TDVAL-001 (validator) -> blocked by TDFIX-001
### Pipeline Status: COMPLETE
### Tasks: [completed]/[total]
### Fix-Verify Iterations: [count]
### Debt Score: [before] → [after]
GC Rounds: 0/3
Session: <session-id>
```
## Error Handling
On any failure, output the current status and do NOT advance the pipeline.
| Scenario | Resolution |
|----------|------------|
| Worker returns without completing (interactive mode) | AskUserQuestion: retry / skip / abort |
| Worker returns without completing (auto mode) | Auto-skip and log the event |
| Worker spawn fails | Retry once; escalate to the user if it fails again |
| Quality gate FAIL | Report to user, suggest targeted re-run |
| Fix-Verify loop stuck >3 iterations | Accept current state, continue pipeline |
| Shared memory read fails | Fall back to TaskList-based status inference |
### handleResume
Resume pipeline after user pause or interruption.
1. Audit task list for inconsistencies:
- Tasks stuck in "in_progress" -> reset to "pending"
- Tasks with completed blockers but still "pending" -> include in spawn list
2. Proceed to handleSpawnNext
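The audit in step 1 reduces to a pure function over the task list. A minimal sketch (the `id`/`status`/`blockers` field names are assumed from the surrounding pseudocode, not a real API):

```javascript
// Audit the task list after a pause: reset stuck tasks and
// collect pending tasks whose blockers have all completed.
function auditTasks(tasks) {
  const completedIds = new Set(
    tasks.filter(t => t.status === 'completed').map(t => t.id)
  )
  const toReset = []   // stuck "in_progress" -> reset to "pending"
  const toSpawn = []   // "pending" with all blockers completed
  for (const task of tasks) {
    if (task.status === 'in_progress') toReset.push(task.id)
    if (task.status === 'pending' &&
        (task.blockers || []).every(id => completedIds.has(id))) {
      toSpawn.push(task.id)
    }
  }
  return { toReset, toSpawn }
}
```

The coordinator would then call TaskUpdate for each `toReset` id before handing `toSpawn` to handleSpawnNext.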
### handleComplete
Triggered when all pipeline tasks are completed.
1. Verify all tasks (including any fix-verify tasks) have status "completed"
2. If any tasks not completed, return to handleSpawnNext
3. If all completed:
- Read final state from `<session>/.msg/meta.json`
- Compile summary: total tasks, completed, gc_rounds, debt_score_before, debt_score_after
- If worktree exists and validation passed: commit changes, create PR, cleanup worktree
- Transition to coordinator Phase 5
## Phase 4: State Persistence
After every handler execution:
1. Update session.json with current state (active tasks, gc_rounds, last event)
2. Verify task list consistency
3. STOP and wait for next event
@@ -300,32 +300,24 @@ Delegate to `commands/dispatch.md` which creates the full task chain.
 ```
 Task({
-  subagent_type: "general-purpose",
+  subagent_type: "team-worker",
   description: "Spawn <role> worker",
-  prompt: `You are the <ROLE> of team "tech-debt".
+  prompt: `## Role Assignment
+role: <role>
+role_spec: .claude/skills/team-tech-debt/role-specs/<role>.md
+session: <session-folder>
+session_id: <session-id>
+team_name: tech-debt
+requirement: <task-description>
+inner_loop: false
-## Primary Directive (MUST)
-All of your work must run through the role definition loaded via Skill; do not improvise:
-Skill(skill="team-tech-debt", args="--role=<role>")
-This call loads your role definition (role.md), available commands (commands/*.md), and the full execution logic.
 ## Current Task
 - Task ID: <task-id>
 - Task: <PREFIX>-<NNN>
 - Task Prefix: <PREFIX>
-Current requirement: <task-description>
-Session: <session-folder>
-## Role Rules (mandatory)
-- Only process tasks with the <PREFIX>-* prefix; never do another role's work
-- All output (SendMessage, team_msg) must carry the [<role>] identifier prefix
-- Communicate only with the coordinator; never contact other workers directly
-- Never use TaskCreate to create tasks for other roles
-## Message Bus (required)
-Before every SendMessage, log via mcp__ccw-tools__team_msg.
-## Workflow (strict order)
-1. Call Skill(skill="team-tech-debt", args="--role=<role>") to load the role definition and execution logic
-2. Follow the 5-Phase flow in role.md (TaskList -> find a <PREFIX>-* task -> execute -> report)
-3. team_msg log + SendMessage the result to the coordinator (with [<role>] identifier)
-4. TaskUpdate completed -> check for the next task -> back to step 1`,
+Read role_spec file to load Phase 2-4 domain instructions.
+Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`,
 run_in_background: false // Stop-Wait: synchronous blocking
 })
 ```
@@ -1,180 +0,0 @@
# Command: remediate
> Delegate debt cleanup to code-developer in batches. Fixes are grouped by type (refactoring, dead-code removal, dependency updates, documentation), with each batch delegated to code-developer.
## When to Use
- Phase 3 of Executor
- Remediation plan loaded and fix actions batched
- Code modifications are executed via code-developer
**Trigger conditions**:
- A TDFIX-* task enters Phase 3
- The fix actions list is non-empty
- Target files are accessible
## Strategy
### Delegation Mode
**Mode**: Sequential Batch Delegation
**Subagent**: `code-developer`
**Batch Strategy**: Group by fix type; one delegation per group
### Decision Logic
```javascript
// Batching strategy
const batchOrder = ['refactor', 'update-deps', 'add-tests', 'add-docs', 'restructure']
// Order batches by priority
function sortBatches(batches) {
const sorted = {}
for (const type of batchOrder) {
if (batches[type]) sorted[type] = batches[type]
}
// Append unknown types
for (const [type, actions] of Object.entries(batches)) {
if (!sorted[type]) sorted[type] = actions
}
return sorted
}
```
## Execution Steps
### Step 1: Context Preparation
```javascript
// Group by type and sort
const sortedBatches = sortBatches(batches)
// Worktree path (loaded from shared memory)
const worktreePath = sharedMemory.worktree?.path || null
const cmdPrefix = worktreePath ? `cd "${worktreePath}" && ` : ''
// Maximum items per batch
const MAX_ITEMS_PER_BATCH = 10
// Split oversized batches further
function splitLargeBatches(batches) {
const result = {}
for (const [type, actions] of Object.entries(batches)) {
if (actions.length <= MAX_ITEMS_PER_BATCH) {
result[type] = actions
} else {
for (let i = 0; i < actions.length; i += MAX_ITEMS_PER_BATCH) {
const chunk = actions.slice(i, i + MAX_ITEMS_PER_BATCH)
result[`${type}-${Math.floor(i / MAX_ITEMS_PER_BATCH) + 1}`] = chunk
}
}
}
return result
}
const finalBatches = splitLargeBatches(sortedBatches)
```
### Step 2: Execute Strategy
```javascript
for (const [batchName, actions] of Object.entries(finalBatches)) {
// Build the fix context
const batchType = batchName.replace(/-\d+$/, '')
const fileList = actions.map(a => a.file).filter(Boolean)
// Choose a fix prompt per type
const typePrompts = {
'refactor': `Refactor the following code to reduce complexity and improve readability. Preserve all existing behavior.`,
'update-deps': `Update the specified dependencies. Check for breaking changes in changelogs.`,
'add-tests': `Add missing test coverage for the specified modules. Follow existing test patterns.`,
'add-docs': `Add documentation (JSDoc/docstrings) for the specified public APIs. Follow existing doc style.`,
'restructure': `Restructure module boundaries to reduce coupling. Move code to appropriate locations.`
}
const prompt = typePrompts[batchType] || 'Apply the specified fix to resolve technical debt.'
// Delegate to code-developer
Task({
subagent_type: "code-developer",
run_in_background: false,
description: `Tech debt cleanup: ${batchName} (${actions.length} items)`,
prompt: `## Goal
${prompt}
${worktreePath ? `\n## Worktree (mandatory)\n- Working directory: ${worktreePath}\n- **All file operations must happen under ${worktreePath}**\n- Read files: Read("${worktreePath}/path/to/file")\n- Bash commands: cd "${worktreePath}" && ...\n- Never modify the main working tree\n` : ''}
## Items to Fix
${actions.map(a => `### ${a.debt_id}: ${a.action}
- File: ${a.file || 'N/A'}
- Type: ${a.type}
${a.steps ? '- Steps:\n' + a.steps.map(s => ` 1. ${s}`).join('\n') : ''}`).join('\n\n')}
## Constraints
- Read each file BEFORE modifying
- Make minimal changes - fix only the specified debt item
- Preserve backward compatibility
- Do NOT skip tests or add @ts-ignore
- Do NOT introduce new dependencies unless explicitly required
- Run syntax check after modifications
## Files to Read First
${fileList.map(f => `- ${f}`).join('\n')}`
})
// Verify batch results
const batchResult = {
batch: batchName,
items: actions.length,
status: 'completed'
}
// Check whether files were modified (run inside the worktree)
for (const file of fileList) {
const modified = Bash(`${cmdPrefix}git diff --name-only -- "${file}" 2>/dev/null`).trim()
if (modified) {
fixResults.files_modified.push(file)
}
}
}
```
### Step 3: Result Processing
```javascript
// Tally fix results
const totalActions = Object.values(finalBatches).flat().length
fixResults.items_fixed = fixResults.files_modified.length
fixResults.items_failed = totalActions - fixResults.items_fixed
fixResults.items_remaining = fixResults.items_failed
// Generate the fix summary
const batchSummaries = Object.entries(finalBatches).map(([name, actions]) =>
`- ${name}: ${actions.length} items`
).join('\n')
```
## Output Format
```
## Remediation Results
### Batches Executed: [count]
### Items Fixed: [count]/[total]
### Files Modified: [count]
### Batch Details
- [batch-name]: [count] items - [status]
### Modified Files
- [file-path]
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| code-developer fails on a batch | Retry once, mark failed items |
| File locked or read-only | Skip file, log error |
| Syntax error after fix | Revert with git checkout, mark as failed |
| New import/dependency needed | Add minimally, document in fix log |
| Batch too large (>10 items) | Auto-split into sub-batches |
| Agent timeout | Use partial results, continue next batch |
@@ -1,226 +0,0 @@
# Executor Role
Technical debt cleanup executor. Performs refactoring, dependency updates, code cleanup, and documentation additions according to the remediation plan. Delegates fix batches to the code-developer subagent and includes a self-validation step.
## Identity
- **Name**: `executor` | **Tag**: `[executor]`
- **Task Prefix**: `TDFIX-*`
- **Responsibility**: Code generation (debt cleanup execution)
## Boundaries
### MUST
- Only process `TDFIX-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[executor]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within debt remediation responsibility scope
- Execute fixes according to remediation plan
- Perform self-validation (syntax check, lint)
### MUST NOT
- Create new features from scratch (only cleanup debt)
- Modify code outside the remediation plan
- Create tasks for other roles
- Communicate directly with other worker roles (must go through coordinator)
- Skip self-validation step
- Omit `[executor]` identifier in any output
---
## Toolbox
### Available Commands
| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `remediate` | [commands/remediate.md](commands/remediate.md) | Phase 3 | Batch-delegate fixes to code-developer |
### Tool Capabilities
| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `code-developer` | Subagent | remediate.md | Code fix execution |
> Executor does not directly use CLI analysis tools (uses code-developer subagent indirectly)
---
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `fix_complete` | executor -> coordinator | Fixes complete | Includes fix summary |
| `fix_progress` | executor -> coordinator | Batch complete | Progress update |
| `error` | executor -> coordinator | Execution failed | Blocking error |
## Message Bus
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
```
mcp__ccw-tools__team_msg({
operation: "log",
session_id: <session-id>,
from: "executor",
type: <message-type>,
ref: <artifact-path>
})
```
**CLI fallback** (when MCP unavailable):
```
Bash("ccw team log --session-id <session-id> --from executor --type <message-type> --ref <artifact-path> --json")
```
---
## Execution (5-Phase)
### Phase 1: Task Discovery
> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery
Standard task discovery flow: TaskList -> filter by prefix `TDFIX-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
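That filter can be sketched as a predicate over the task list (the `subject`/`owner`/`status`/`blockers` field names are assumed from the surrounding pseudocode; `TaskList()` would supply the array):

```javascript
// Find the next claimable task: matching prefix, owned by this role
// (or unassigned), pending, and with no incomplete blockers.
function findNextTask(tasks, prefix, role) {
  const done = new Set(
    tasks.filter(t => t.status === 'completed').map(t => t.id)
  )
  return tasks.find(t =>
    t.subject.startsWith(prefix) &&
    (!t.owner || t.owner === role) &&
    t.status === 'pending' &&
    (t.blockers || []).every(id => done.has(id))
  ) || null
}
```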
### Phase 2: Load Remediation Plan
| Input | Source | Required |
|-------|--------|----------|
| Session folder | task.description (regex: `session:\s*(.+)`) | Yes |
| Shared memory | `<session-folder>/.msg/meta.json` | Yes |
| Remediation plan | `<session-folder>/plan/remediation-plan.json` | Yes |
**Loading steps**:
1. Extract session path from task description
2. Read .msg/meta.json for worktree info:
| Field | Description |
|-------|-------------|
| `worktree.path` | Worktree directory path |
| `worktree.branch` | Worktree branch name |
3. Read remediation-plan.json for actions
4. Extract all actions from plan phases
5. Identify target files (unique file paths from actions)
6. Group actions by type for batch processing
**Batch grouping**:
| Action Type | Description |
|-------------|-------------|
| refactor | Code refactoring |
| restructure | Architecture changes |
| add-tests | Test additions |
| update-deps | Dependency updates |
| add-docs | Documentation additions |
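The grouping can be sketched as follows (the action `type` values come from the table above; routing unknown types into an `other` bucket is an assumption of this sketch):

```javascript
// Group remediation actions by fix type so each batch can be
// delegated to code-developer as one unit.
function groupActionsByType(actions) {
  const known = ['refactor', 'restructure', 'add-tests', 'update-deps', 'add-docs']
  const batches = {}
  for (const action of actions) {
    const type = known.includes(action.type) ? action.type : 'other'
    if (!batches[type]) batches[type] = []
    batches[type].push(action)
  }
  return batches
}
```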
### Phase 3: Execute Fixes
Delegate to `commands/remediate.md` if available, otherwise execute inline.
**Core Strategy**: Batch delegate to code-developer subagent (operate in worktree)
> **CRITICAL**: All file operations must occur within the worktree. Use `run_in_background: false` for synchronous execution.
**Fix Results Tracking**:
| Field | Description |
|-------|-------------|
| `items_fixed` | Count of successfully fixed items |
| `items_failed` | Count of failed items |
| `items_remaining` | Count of remaining items |
| `batches_completed` | Count of completed batches |
| `files_modified` | Array of modified file paths |
| `errors` | Array of error messages |
**Batch execution flow**:
For each batch type and its actions:
1. Spawn code-developer subagent with worktree context
2. Wait for completion (synchronous)
3. Log progress via team_msg
4. Increment batch counter
**Subagent prompt template**:
```
Task({
subagent_type: "code-developer",
run_in_background: false, // Stop-Wait: synchronous execution
description: "Fix tech debt batch: <batch-type> (<count> items)",
prompt: `## Goal
Execute tech debt cleanup for <batch-type> items.
## Worktree (Mandatory)
- Working directory: <worktree-path>
- **All file reads and modifications must be within <worktree-path>**
- Read files using <worktree-path>/path/to/file
- Prefix Bash commands with cd "<worktree-path>" && ...
## Actions
<action-list>
## Instructions
- Read each target file before modifying
- Apply the specified fix
- Preserve backward compatibility
- Do NOT introduce new features
- Do NOT modify unrelated code
- Run basic syntax check after each change`
})
```
### Phase 4: Self-Validation
> **CRITICAL**: All commands must execute in worktree
**Validation checks**:
| Check | Command | Pass Criteria |
|-------|---------|---------------|
| Syntax | `tsc --noEmit` or `python -m py_compile` | No errors |
| Lint | `eslint --no-error-on-unmatched-pattern` | No errors |
**Command prefix** (if worktree): `cd "<worktree-path>" && `
**Validation flow**:
1. Run syntax check -> record PASS/FAIL
2. Run lint check -> record PASS/FAIL
3. Update fix_results.self_validation
4. Write `<session-folder>/fixes/fix-log.json`
5. Update .msg/meta.json with fix_results
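Assembling the check commands with the worktree prefix might look like this (a sketch; the commands mirror the table above, and per-language variants would need their own entries):

```javascript
// Build self-validation commands, prepending the worktree cd
// when a worktree is configured.
function buildValidationCommands(worktreePath) {
  const checks = {
    syntax: 'tsc --noEmit',
    lint: 'eslint . --no-error-on-unmatched-pattern'
  }
  const prefix = worktreePath ? `cd "${worktreePath}" && ` : ''
  const commands = {}
  for (const [name, cmd] of Object.entries(checks)) {
    commands[name] = prefix + cmd
  }
  return commands
}
```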
### Phase 5: Report to Coordinator
> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report
Standard report flow: team_msg log -> SendMessage with `[executor]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.
**Report content**:
| Field | Value |
|-------|-------|
| Task | task.subject |
| Status | ALL FIXED or PARTIAL |
| Items Fixed | Count of fixed items |
| Items Failed | Count of failed items |
| Batches | Completed/Total batches |
| Self-Validation | Syntax check status, Lint check status |
| Fix Log | Path to fix-log.json |
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No TDFIX-* tasks available | Idle, wait for coordinator |
| Remediation plan missing | Request plan from shared memory, report error if empty |
| code-developer fails | Retry once, skip item on second failure |
| Syntax check fails after fix | Revert change, mark item as failed |
| Lint errors introduced | Attempt auto-fix with eslint --fix, report if persistent |
| File not found | Skip item, log warning |
@@ -1,165 +0,0 @@
# Command: create-plan
> Create a structured remediation plan with the gemini CLI. Quick-wins become immediate actions, systematic items become medium-term remediation, and recurring patterns yield prevention mechanisms for long-term improvement. Outputs remediation-plan.md.
## When to Use
- Phase 3 of Planner
- Assessment matrix is ready; a remediation plan is needed
- Debt items are grouped by priority quadrant
**Trigger conditions**:
- A TDPLAN-* task enters Phase 3
- Assessment data is available (priority-matrix.json)
- CLI assistance is needed to generate detailed fix suggestions
## Strategy
### Delegation Mode
**Mode**: CLI Analysis + Template Generation
**CLI Tool**: `gemini` (primary)
**CLI Mode**: `analysis`
### Decision Logic
```javascript
// Plan generation strategy
if (quickWins.length + strategic.length <= 5) {
// Few items: generate the plan inline
mode = 'inline'
} else {
// More items: CLI-assisted generation of detailed fix steps
mode = 'cli-assisted'
}
```
## Execution Steps
### Step 1: Context Preparation
```javascript
// Prepare a debt summary for CLI analysis
const debtSummary = debtInventory
.filter(i => i.priority_quadrant === 'quick-win' || i.priority_quadrant === 'strategic')
.map(i => `[${i.id}] [${i.priority_quadrant}] [${i.dimension}] ${i.file}:${i.line} - ${i.description} (impact: ${i.impact_score}, cost: ${i.cost_score})`)
.join('\n')
// Read related source files for context
const affectedFiles = [...new Set(debtInventory.map(i => i.file).filter(Boolean))]
const fileContext = affectedFiles.slice(0, 20).map(f => `@${f}`).join(' ')
```
### Step 2: Execute Strategy
```javascript
if (mode === 'inline') {
// Generate the plan inline
for (const item of quickWins) {
item.remediation_steps = [
`Read ${item.file}`,
`Apply fix: ${item.suggestion || 'Resolve ' + item.description}`,
`Verify fix with relevant tests`
]
}
for (const item of strategic) {
item.remediation_steps = [
`Analyze impact scope of ${item.file}`,
`Plan refactoring: ${item.suggestion || 'Address ' + item.description}`,
`Implement changes incrementally`,
`Run full test suite to verify`
]
}
} else {
// CLI-assisted fix plan generation
const prompt = `PURPOSE: Create detailed remediation steps for each technical debt item, grouped into actionable phases
TASK: • For each quick-win item, generate specific fix steps (1-3 steps) • For each strategic item, generate a refactoring plan (3-5 steps) • Identify prevention mechanisms based on recurring patterns • Group related items that should be fixed together
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Structured remediation plan with: phase name, items, steps per item, dependencies between fixes, estimated time per phase
CONSTRAINTS: Focus on backward-compatible changes, prefer incremental fixes over big-bang refactoring
## Debt Items to Plan
${debtSummary}
## Recurring Patterns
${[...new Set(debtInventory.map(i => i.dimension))].map(d => {
const count = debtInventory.filter(i => i.dimension === d).length
return `- ${d}: ${count} items`
}).join('\n')}`
Bash(`ccw cli -p "${prompt}" --tool gemini --mode analysis --rule planning-breakdown-task-steps`, {
run_in_background: true
})
// Wait for CLI completion, then parse the results
}
```
### Step 3: Result Processing
```javascript
// Generate the Markdown remediation plan
function generatePlanMarkdown(plan, validation) {
return `# Tech Debt Remediation Plan
## Overview
- **Total Actions**: ${validation.total_actions}
- **Files Affected**: ${validation.files_affected.length}
- **Total Estimated Effort**: ${validation.total_effort} points
## Phase 1: Quick Wins (Immediate)
> High impact, low cost items for immediate action.
${plan.phases[0].actions.map((a, i) => `### ${i + 1}. ${a.debt_id}: ${a.action}
- **File**: ${a.file || 'N/A'}
- **Type**: ${a.type}
${a.steps ? a.steps.map(s => `- [ ] ${s}`).join('\n') : ''}`).join('\n\n')}
## Phase 2: Systematic (Medium-term)
> High impact items requiring structured refactoring.
${plan.phases[1].actions.map((a, i) => `### ${i + 1}. ${a.debt_id}: ${a.action}
- **File**: ${a.file || 'N/A'}
- **Type**: ${a.type}
${a.steps ? a.steps.map(s => `- [ ] ${s}`).join('\n') : ''}`).join('\n\n')}
## Phase 3: Prevention (Long-term)
> Mechanisms to prevent future debt accumulation.
${plan.phases[2].actions.map((a, i) => `### ${i + 1}. ${a.action}
- **Dimension**: ${a.dimension || 'general'}
- **Type**: ${a.type}`).join('\n\n')}
## Execution Notes
- Execute Phase 1 first for maximum ROI
- Phase 2 items may require feature branches
- Phase 3 should be integrated into CI/CD pipeline
`
}
```
## Output Format
```
## Remediation Plan Created
### Phases: 3
### Quick Wins: [count] actions
### Systematic: [count] actions
### Prevention: [count] actions
### Files Affected: [count]
### Output: [sessionFolder]/plan/remediation-plan.md
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| CLI returns unstructured text | Parse manually, extract action items |
| No quick-wins available | Focus plan on systematic and prevention |
| File references invalid | Verify with Glob, skip non-existent files |
| CLI timeout | Generate plan from heuristic data only |
| Agent/CLI failure | Retry once, then inline generation |
| Timeout (>5 min) | Report partial plan, notify planner |
@@ -1,188 +0,0 @@
# Planner Role
Technical debt remediation planner. Creates a phased remediation plan from the assessment matrix: quick-wins executed immediately, systematic items handled in medium-term remediation, and prevention mechanisms for the long term. Produces remediation-plan.md.
## Identity
- **Name**: `planner` | **Tag**: `[planner]`
- **Task Prefix**: `TDPLAN-*`
- **Responsibility**: Orchestration (remediation planning)
## Boundaries
### MUST
- Only process `TDPLAN-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[planner]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within remediation planning responsibility scope
- Base plans on assessment data from shared memory
### MUST NOT
- Modify source code or test code
- Execute fix operations
- Create tasks for other roles
- Communicate directly with other worker roles (must go through coordinator)
- Omit `[planner]` identifier in any output
---
## Toolbox
### Available Commands
| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `create-plan` | [commands/create-plan.md](commands/create-plan.md) | Phase 3 | Phased remediation plan generation |
### Tool Capabilities
| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `cli-explore-agent` | Subagent | create-plan.md | Codebase exploration to validate plan feasibility |
| `gemini` | CLI | create-plan.md | Remediation plan generation |
---
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `plan_ready` | planner -> coordinator | Plan complete | Includes phased remediation plan |
| `plan_revision` | planner -> coordinator | Plan revised | Plan adjusted per feedback |
| `error` | planner -> coordinator | Planning failed | Blocking error |
## Message Bus
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
```
mcp__ccw-tools__team_msg({
operation: "log",
session_id: <session-id>,
from: "planner",
type: <message-type>,
ref: <artifact-path>
})
```
**CLI fallback** (when MCP unavailable):
```
Bash("ccw team log --session-id <session-id> --from planner --type <message-type> --ref <artifact-path> --json")
```
---
## Execution (5-Phase)
### Phase 1: Task Discovery
> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery
Standard task discovery flow: TaskList -> filter by prefix `TDPLAN-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
### Phase 2: Load Assessment Data
| Input | Source | Required |
|-------|--------|----------|
| Session folder | task.description (regex: `session:\s*(.+)`) | Yes |
| Shared memory | `<session-folder>/.msg/meta.json` | Yes |
| Priority matrix | `<session-folder>/assessment/priority-matrix.json` | Yes |
**Loading steps**:
1. Extract session path from task description
2. Read .msg/meta.json for debt_inventory
3. Read priority-matrix.json for quadrant groupings
4. Group items by priority quadrant:
| Quadrant | Filter |
|----------|--------|
| quickWins | priority_quadrant === 'quick-win' |
| strategic | priority_quadrant === 'strategic' |
| backlog | priority_quadrant === 'backlog' |
| deferred | priority_quadrant === 'defer' |
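The quadrant grouping is a simple partition (the `priority_quadrant` values follow the table above):

```javascript
// Partition the debt inventory into the four priority quadrants.
function groupByQuadrant(debtInventory) {
  const keyByQuadrant = {
    'quick-win': 'quickWins',
    'strategic': 'strategic',
    'backlog': 'backlog',
    'defer': 'deferred'
  }
  const groups = { quickWins: [], strategic: [], backlog: [], deferred: [] }
  for (const item of debtInventory) {
    const key = keyByQuadrant[item.priority_quadrant]
    if (key) groups[key].push(item)
  }
  return groups
}
```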
### Phase 3: Create Remediation Plan
Delegate to `commands/create-plan.md` if available, otherwise execute inline.
**Core Strategy**: 3-phase remediation plan
| Phase | Name | Description | Items |
|-------|------|-------------|-------|
| 1 | Quick Wins | High-impact, low-cost items; execute immediately | quickWins |
| 2 | Systematic | High-impact, high-cost items; require systematic planning | strategic |
| 3 | Prevention | Prevention mechanisms with long-term effect | Generated from inventory |
**Action Type Mapping**:
| Dimension | Action Type |
|-----------|-------------|
| code | refactor |
| architecture | restructure |
| testing | add-tests |
| dependency | update-deps |
| documentation | add-docs |
**Prevention Action Generation**: a prevention action is generated for every dimension that has >= 3 debt items:
| Dimension | Prevention Action |
|-----------|-------------------|
| code | Add linting rules for complexity thresholds and code smell detection |
| architecture | Introduce module boundary checks in CI pipeline |
| testing | Set minimum coverage thresholds in CI and add pre-commit test hooks |
| dependency | Configure automated dependency update bot (Renovate/Dependabot) |
| documentation | Add JSDoc/docstring enforcement in linting rules |
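The generation rule can be sketched as follows (the threshold and action strings mirror the tables above; the output object shape is an assumption of this sketch):

```javascript
// Emit one prevention action per dimension that has >= 3 debt items.
const PREVENTION_ACTIONS = {
  code: 'Add linting rules for complexity thresholds and code smell detection',
  architecture: 'Introduce module boundary checks in CI pipeline',
  testing: 'Set minimum coverage thresholds in CI and add pre-commit test hooks',
  dependency: 'Configure automated dependency update bot (Renovate/Dependabot)',
  documentation: 'Add JSDoc/docstring enforcement in linting rules'
}

function generatePreventionActions(debtInventory, threshold = 3) {
  const counts = {}
  for (const item of debtInventory) {
    counts[item.dimension] = (counts[item.dimension] || 0) + 1
  }
  return Object.entries(counts)
    .filter(([dim, n]) => n >= threshold && PREVENTION_ACTIONS[dim])
    .map(([dim]) => ({ dimension: dim, type: 'prevention', action: PREVENTION_ACTIONS[dim] }))
}
```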
### Phase 4: Validate Plan Feasibility
**Validation metrics**:
| Metric | Description |
|--------|-------------|
| total_actions | Sum of actions across all phases |
| total_effort | Sum of estimated effort scores |
| files_affected | Unique files in action list |
| has_quick_wins | Boolean: quickWins.length > 0 |
| has_prevention | Boolean: prevention actions exist |
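Computing those metrics from the assembled plan might look like this (a sketch; the `phases[].actions[]` shape and a per-action `effort` field are assumptions):

```javascript
// Derive feasibility metrics from the three-phase plan object.
function computePlanMetrics(plan) {
  const actions = plan.phases.flatMap(p => p.actions)
  const files = new Set(actions.map(a => a.file).filter(Boolean))
  return {
    total_actions: actions.length,
    total_effort: actions.reduce((sum, a) => sum + (a.effort || 0), 0),
    files_affected: [...files],
    has_quick_wins: plan.phases[0].actions.length > 0,
    has_prevention: plan.phases.length > 2 && plan.phases[2].actions.length > 0
  }
}
```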
**Save outputs**:
1. Write `<session-folder>/plan/remediation-plan.md` (markdown format)
2. Write `<session-folder>/plan/remediation-plan.json` (machine-readable)
3. Update .msg/meta.json with `remediation_plan` summary
### Phase 5: Report to Coordinator
> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report
Standard report flow: team_msg log -> SendMessage with `[planner]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.
**Report content**:
| Field | Value |
|-------|-------|
| Task | task.subject |
| Total Actions | Count of all actions |
| Files Affected | Count of unique files |
| Phase 1: Quick Wins | Top 5 quick-win items |
| Phase 2: Systematic | Top 3 strategic items |
| Phase 3: Prevention | Top 3 prevention actions |
| Plan Document | Path to remediation-plan.md |
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No TDPLAN-* tasks available | Idle, wait for coordinator |
| Assessment data empty | Create minimal plan based on debt inventory |
| No quick-wins found | Skip Phase 1, focus on systematic |
| CLI analysis fails | Fall back to heuristic plan generation |
| Too many items for single plan | Split into multiple phases with priorities |
@@ -1,388 +0,0 @@
# Command: scan-debt
> Three-layer parallel fan-out technical debt scan: subagent structure exploration + CLI dimension analysis + multi-perspective Gemini deep analysis. All three layers run in parallel, then fan in for aggregation.
## When to Use
- Phase 3 of Scanner
- The codebase needs a multi-dimensional technical debt scan
- Parallel fan-out is used when complexity is Medium or High
**Trigger conditions**:
- A TDSCAN-* task enters Phase 3
- Complexity assessed as Medium/High
- Deep analysis needed beyond ACE search capability
## Strategy
### Delegation Mode
**Mode**: Triple Fan-out + Fan-in
**Subagent**: `cli-explore-agent` (parallel structure exploration)
**CLI Tool**: `gemini` (primary)
**CLI Mode**: `analysis`
**Parallel Layers**:
- Fan-out A: 2-3 parallel subagents (structure exploration)
- Fan-out B: 3-5 parallel CLI runs (dimension analysis)
- Fan-out C: 2-4 parallel CLI runs (multi-perspective Gemini)
### Decision Logic
```javascript
// Complexity determines the scan strategy
if (complexity === 'Low') {
// ACE search + Grep inline analysis (no CLI)
mode = 'inline'
} else if (complexity === 'Medium') {
// Dual fan-out: subagent exploration + 3 CLI dimensions
mode = 'dual-fanout'
activeDimensions = ['code', 'testing', 'dependency']
exploreAngles = ['structure', 'patterns']
} else {
// Triple fan-out: subagent exploration + 5 CLI dimensions + multi-perspective Gemini
mode = 'triple-fanout'
activeDimensions = dimensions // all 5
exploreAngles = ['structure', 'patterns', 'dependencies']
}
```
## Execution Steps
### Step 1: Context Preparation
```javascript
// Determine the scan scope
const projectRoot = Bash(`git rev-parse --show-toplevel 2>/dev/null || pwd`).trim()
const scanScope = task.description.match(/scope:\s*(.+)/)?.[1] || '**/*'
// Get changed files for focused scanning
const changedFiles = Bash(`git diff --name-only HEAD~10 2>/dev/null || echo ""`)
.split('\n').filter(Boolean)
// Build the file context
const fileContext = changedFiles.length > 0
? changedFiles.map(f => `@${f}`).join(' ')
: `@${scanScope}`
// Multi-perspective detection (passed in from role.md Phase 2)
// perspectives = detectPerspectives(task.description)
```
### Step 2: Execute Strategy
```javascript
if (mode === 'inline') {
// Quick inline scan (Low complexity)
const aceResults = mcp__ace-tool__search_context({
project_root_path: projectRoot,
query: "code smells, TODO/FIXME, deprecated APIs, complex functions, dead code, missing tests, circular imports"
})
// Parse ACE results and classify them into dimensions
} else {
// === Triple parallel fan-out ===
// Layers A, B, and C start simultaneously and are independent of each other
// ─── Fan-out A: parallel subagent exploration (codebase structure) ───
executeExploreAngles(exploreAngles)
// ─── Fan-out B: CLI dimension analysis (parallel gemini) ───
executeDimensionAnalysis(activeDimensions)
// ─── Fan-out C: multi-perspective Gemini deep analysis (parallel) ───
if (mode === 'triple-fanout') {
executePerspectiveAnalysis(perspectives)
}
// Wait for all fan-outs to complete (hook callback notification)
}
```
### Step 2a: Fan-out A — Subagent Exploration
> Launch cli-explore-agent instances in parallel to explore the codebase structure, providing context for later analysis.
> Each angle runs independently, with no dependencies between angles.
```javascript
function executeExploreAngles(angles) {
const explorePrompts = {
'structure': `Explore the codebase structure and module organization.
Focus on: directory layout, module boundaries, entry points, build configuration.
Project root: ${projectRoot}
Report: module map, key entry files, build system type, framework detection.`,
'patterns': `Explore coding patterns and conventions used in this codebase.
Focus on: naming conventions, import patterns, error handling patterns, state management, design patterns.
Project root: ${projectRoot}
Report: dominant patterns, anti-patterns found, consistency assessment.`,
'dependencies': `Explore dependency graph and inter-module relationships.
Focus on: import/require chains, circular dependencies, external dependency usage, shared utilities.
Project root: ${projectRoot}
Report: dependency hotspots, tightly-coupled modules, dependency depth analysis.`
}
// Launch all exploration angles in parallel (each cli-explore-agent runs independently)
for (const angle of angles) {
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Explore: ${angle}`,
prompt: explorePrompts[angle] || `Explore from ${angle} perspective. Project: ${projectRoot}`
})
}
// Once all subagents return, the exploration results are available
}
```
### Step 2b: Fan-out B — CLI Dimension Analysis
> Each dimension gets an independent gemini CLI analysis; all are launched in parallel.
```javascript
function executeDimensionAnalysis(activeDimensions) {
const dimensionPrompts = {
'code': `PURPOSE: Identify code quality debt - complexity, duplication, code smells
TASK: • Find functions with cyclomatic complexity > 10 • Detect code duplication (>20 lines) • Identify code smells (God class, long method, feature envy) • Find TODO/FIXME/HACK comments • Detect dead code and unused exports
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: List of findings with severity (critical/high/medium/low), file:line, description, estimated fix effort (small/medium/large)
CONSTRAINTS: Focus on actionable items, skip generated code`,
'architecture': `PURPOSE: Identify architecture debt - coupling, circular dependencies, layering violations
TASK: • Detect circular dependencies between modules • Find tight coupling between components • Identify layering violations (e.g., UI importing DB) • Check for God modules with too many responsibilities • Find missing abstraction layers
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Architecture debt findings with severity, affected modules, dependency graph issues
CONSTRAINTS: Focus on structural issues, not style`,
'testing': `PURPOSE: Identify testing debt - coverage gaps, test quality, missing test types
TASK: • Find modules without any test files • Identify complex logic without test coverage • Check for test anti-patterns (flaky tests, hardcoded values) • Find missing edge case tests • Detect test files that import from test utilities incorrectly
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Testing debt findings with severity, affected files, missing test type (unit/integration/e2e)
CONSTRAINTS: Focus on high-risk untested code paths`,
'dependency': `PURPOSE: Identify dependency debt - outdated packages, vulnerabilities, unnecessary deps
TASK: • Find outdated major-version dependencies • Identify known vulnerability packages • Detect unused dependencies • Find duplicate functionality from different packages • Check for pinned vs range versions
MODE: analysis
CONTEXT: @package.json @package-lock.json @requirements.txt @go.mod @pom.xml
EXPECTED: Dependency debt with severity, package name, current vs latest version, CVE references
CONSTRAINTS: Focus on security and compatibility risks`,
'documentation': `PURPOSE: Identify documentation debt - missing docs, stale docs, undocumented APIs
TASK: • Find public APIs without JSDoc/docstrings • Identify README files that are outdated • Check for missing architecture documentation • Find configuration options without documentation • Detect stale comments that don't match code
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Documentation debt with severity, file:line, type (missing/stale/incomplete)
CONSTRAINTS: Focus on public interfaces and critical paths`
}
// Launch all dimension analyses in parallel
for (const dimension of activeDimensions) {
const prompt = dimensionPrompts[dimension]
if (!prompt) continue
Bash(`ccw cli -p "${prompt}" --tool gemini --mode analysis --rule analysis-analyze-code-patterns`, {
run_in_background: true
})
}
}
```
### Step 2c: Fan-out C — Multi-Perspective Gemini Analysis
> Multi-perspective deep analysis, with each perspective focusing on a different quality dimension.
> Perspectives are auto-detected by `detectPerspectives()`, or fully enabled at High complexity.
> Difference from Fan-out B (dimension analysis): dimensions cut horizontally ("code/testing/dependency") while perspectives cut vertically ("security/performance/quality/architecture"), providing cross-coverage.
```javascript
function executePerspectiveAnalysis(perspectives) {
const perspectivePrompts = {
'security': `PURPOSE: Security-focused analysis of codebase to identify vulnerability debt
TASK: • Find injection vulnerabilities (SQL, command, XSS, LDAP) • Check authentication/authorization weaknesses • Identify hardcoded secrets or credentials • Detect insecure data handling (sensitive data exposure) • Find missing input validation on trust boundaries • Check for outdated crypto or insecure hash functions
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Security findings with: severity (critical/high/medium/low), CWE/OWASP reference, file:line, remediation suggestion
CONSTRAINTS: Focus on exploitable vulnerabilities, not theoretical risks`,
'performance': `PURPOSE: Performance-focused analysis to identify performance debt
TASK: • Find N+1 query patterns in database calls • Detect unnecessary re-renders or recomputations • Identify missing caching opportunities • Find synchronous blocking in async contexts • Detect memory leak patterns (event listener accumulation, unclosed resources) • Check for unoptimized loops or O(n²) algorithms on large datasets
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Performance findings with: severity, impact estimate (latency/memory/CPU), file:line, optimization suggestion
CONSTRAINTS: Focus on measurable impact, not micro-optimizations`,
'code-quality': `PURPOSE: Code quality deep analysis beyond surface-level linting
TASK: • Identify functions violating single responsibility principle • Find overly complex conditional chains (>3 nesting levels) • Detect hidden temporal coupling between functions • Find magic numbers and unexplained constants • Identify error handling anti-patterns (empty catch, swallowed errors) • Detect feature envy (methods that access other classes more than their own)
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Quality findings with: severity, code smell category, file:line, refactoring suggestion with pattern name
CONSTRAINTS: Focus on maintainability impact, skip style-only issues`,
'architecture': `PURPOSE: Architecture-level analysis of system design debt
TASK: • Identify layering violations (skip-layer calls, reverse dependencies) • Find God modules/classes with >5 distinct responsibilities • Detect missing domain boundaries (business logic in UI/API layer) • Check for abstraction leaks (implementation details in interfaces) • Identify duplicated business logic across modules • Find tightly coupled modules that should be independent
MODE: analysis
CONTEXT: ${fileContext}
EXPECTED: Architecture findings with: severity, affected modules, coupling metric, suggested restructuring
CONSTRAINTS: Focus on structural issues affecting scalability and team autonomy`
}
// Launch all perspective analyses in parallel
for (const perspective of perspectives) {
const prompt = perspectivePrompts[perspective]
if (!prompt) continue
Bash(`ccw cli -p "${prompt}" --tool gemini --mode analysis --rule analysis-review-architecture`, {
run_in_background: true
})
}
}
```
### Step 3: Fan-in Result Processing
> Aggregate the three fan-out layers: exploration results provide context; dimension-analysis and perspective-analysis findings are cross-deduplicated.
```javascript
// ─── 3a: Aggregate exploration results (from Fan-out A) ───
const exploreContext = {
structure: exploreResults['structure'] || {},
patterns: exploreResults['patterns'] || {},
dependencies: exploreResults['dependencies'] || {}
}
// ─── 3b: Aggregate dimension analysis results (from Fan-out B) ───
const dimensionFindings = []
for (const dimension of activeDimensions) {
const findings = parseCliOutput(cliResults[dimension])
for (const finding of findings) {
finding.dimension = dimension
finding.source = 'dimension-analysis'
dimensionFindings.push(finding)
}
}
// ─── 3c: Aggregate perspective analysis results (from Fan-out C) ───
const perspectiveFindings = []
if (mode === 'triple-fanout') {
for (const perspective of perspectives) {
const findings = parseCliOutput(cliResults[perspective])
for (const finding of findings) {
finding.perspective = perspective
finding.source = 'perspective-analysis'
// Map the perspective to the nearest dimension (for unified categorization)
finding.dimension = finding.dimension || mapPerspectiveToDimension(perspective)
perspectiveFindings.push(finding)
}
}
}
// ─── 3d: Merge + cross-deduplicate ───
const allFindings = [...dimensionFindings, ...perspectiveFindings]
function deduplicateFindings(findings) {
const seen = new Map() // key → finding (keep the higher-severity one)
for (const f of findings) {
const key = `${f.file}:${f.line}`
const existing = seen.get(key)
if (!existing) {
seen.set(key, f)
} else {
// Same location found by multiple angles → merge and escalate severity
const severityOrder = { critical: 0, high: 1, medium: 2, low: 3 }
if ((severityOrder[f.severity] || 3) < (severityOrder[existing.severity] || 3)) {
existing.severity = f.severity
}
// Record cross-references (items found by multiple perspectives/dimensions are more trustworthy)
existing.crossRefs = existing.crossRefs || []
existing.crossRefs.push({ source: f.source, perspective: f.perspective, dimension: f.dimension })
}
}
return [...seen.values()]
}
// Perspective → dimension mapping
function mapPerspectiveToDimension(perspective) {
const map = {
'security': 'code',
'performance': 'code',
'code-quality': 'code',
'architecture': 'architecture'
}
return map[perspective] || 'code'
}
const deduped = deduplicateFindings(allFindings)
// ─── 3e: Sort by severity (cross-referenced items first) ───
deduped.sort((a, b) => {
// Items found by multiple angles → priority boost
const aBoost = (a.crossRefs?.length || 0) > 0 ? -0.5 : 0
const bBoost = (b.crossRefs?.length || 0) > 0 ? -0.5 : 0
const order = { critical: 0, high: 1, medium: 2, low: 3 }
return ((order[a.severity] || 3) + aBoost) - ((order[b.severity] || 3) + bBoost)
})
// ─── 3f: Enrich findings with exploration context (optional) ───
// Use Fan-out A's structure exploration results to annotate module ownership
for (const finding of deduped) {
if (finding.file && exploreContext.structure?.modules) {
const module = exploreContext.structure.modules.find(m =>
finding.file.startsWith(m.path)
)
if (module) finding.module = module.name
}
}
```
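The fan-in logic above relies on a `parseCliOutput` helper that is not defined in this file. A minimal sketch, assuming each finding arrives as a `severity file:line - description` text line (the actual gemini output format may differ):

```javascript
// Hypothetical parser for CLI findings. Assumes one finding per line in the
// form "<severity> <file>:<line> - <description>"; non-matching lines are skipped.
function parseCliOutput(rawOutput) {
  if (!rawOutput) return []
  const findings = []
  const lineRe = /^(critical|high|medium|low)\s+(\S+?):(\d+)\s+-\s+(.+)$/i
  for (const line of rawOutput.split('\n')) {
    const m = line.trim().match(lineRe)
    if (!m) continue
    findings.push({
      severity: m[1].toLowerCase(),
      file: m[2],
      line: Number(m[3]),
      description: m[4]
    })
  }
  return findings
}
```

Downstream code (deduplication, sorting) only depends on the `severity`, `file`, and `line` fields, so any parser producing that shape slots in here.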
## Output Format
```
## Debt Scan Results
### Scan Mode: [inline|dual-fanout|triple-fanout]
### Complexity: [Low|Medium|High]
### Perspectives: [security, performance, code-quality, architecture]
### Findings by Dimension
#### Code Quality ([count])
- [file:line] [severity] - [description] [crossRefs: N perspectives]
#### Architecture ([count])
- [module] [severity] - [description]
#### Testing ([count])
- [file:line] [severity] - [description]
#### Dependency ([count])
- [package] [severity] - [description]
#### Documentation ([count])
- [file:line] [severity] - [description]
### Multi-Perspective Highlights
#### Security Findings ([count])
- [file:line] [severity] - [CWE-xxx] [description]
#### Performance Findings ([count])
- [file:line] [severity] - [impact] [description]
### Cross-Referenced Items (verified by multiple angles)
- [file:line] confirmed by [N] sources - [description]
### Total Debt Items: [count]
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| CLI tool unavailable | Fall back to ACE search + Grep inline analysis |
| CLI returns empty for a dimension | Note incomplete dimension, continue others |
| Subagent explore fails | Skip explore context, proceed with CLI analysis only |
| Too many findings (>100) | Prioritize critical/high + cross-referenced, summarize rest |
| Timeout on CLI call | Use partial results, note incomplete dimensions/perspectives |
| Agent/CLI failure | Retry once, then fallback to inline execution |
| Perspective analysis timeout | Use dimension-only results, note missing perspectives |
| All Fan-out layers fail | Fall back to ACE inline scan (guaranteed minimum) |

# Scanner Role
Multi-dimension technical debt scanner. Scans the codebase across five dimensions (code quality, architecture, testing, dependencies, documentation) to produce a structured debt inventory. Runs parallel analysis via CLI fan-out and outputs debt-inventory.json.
## Identity
- **Name**: `scanner` | **Tag**: `[scanner]`
- **Task Prefix**: `TDSCAN-*`
- **Responsibility**: Orchestration (multi-dimension scan orchestration)
## Boundaries
### MUST
- Only process `TDSCAN-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[scanner]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within debt scanning responsibility scope
- Tag all findings with dimension (code, architecture, testing, dependency, documentation)
### MUST NOT
- Write or modify any code
- Execute fix operations
- Create tasks for other roles
- Communicate directly with other worker roles (must go through coordinator)
- Omit `[scanner]` identifier in any output
---
## Toolbox
### Available Commands
| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `scan-debt` | [commands/scan-debt.md](commands/scan-debt.md) | Phase 3 | Multi-dimension CLI fan-out scan |
### Tool Capabilities
| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `gemini` | CLI | scan-debt.md | Multi-dimension code analysis (dimension fan-out) |
| `cli-explore-agent` | Subagent | scan-debt.md | Parallel codebase structure exploration |
---
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `scan_complete` | scanner -> coordinator | Scan complete | Includes debt inventory summary |
| `debt_items_found` | scanner -> coordinator | High-priority debt found | Critical findings needing attention |
| `error` | scanner -> coordinator | Scan failed | Blocking error |
## Message Bus
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
```
mcp__ccw-tools__team_msg({
operation: "log",
session_id: <session-id>,
from: "scanner",
type: <message-type>,
ref: <artifact-path>
})
```
**CLI fallback** (when MCP unavailable):
```
Bash("ccw team log --session-id <session-id> --from scanner --type <message-type> --ref <artifact-path> --json")
```
---
## Execution (5-Phase)
### Phase 1: Task Discovery
> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery
Standard task discovery flow: TaskList -> filter by prefix `TDSCAN-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
### Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Scan scope | task.description (regex: `scope:\s*(.+)`) | No (default: `**/*`) |
| Session folder | task.description (regex: `session:\s*(.+)`) | Yes |
| Shared memory | `<session-folder>/.msg/meta.json` | Yes |
**Loading steps**:
1. Extract session path from task description
2. Read .msg/meta.json for team context
3. Detect project type and framework:
| Detection | Method |
|-----------|--------|
| Node.js project | Check for package.json |
| Python project | Check for pyproject.toml or requirements.txt |
| Go project | Check for go.mod |
4. Determine scan dimensions (default: code, architecture, testing, dependency, documentation)
5. Detect perspectives from task description:
| Condition | Perspective |
|-----------|-------------|
| `security\|auth\|inject\|xss` | security |
| `performance\|speed\|optimize` | performance |
| `quality\|clean\|maintain\|debt` | code-quality |
| `architect\|pattern\|structure` | architecture |
| Default | code-quality + architecture |
6. Assess complexity:
| Signal | Weight |
|--------|--------|
| `全项目\|全量\|comprehensive\|full` | +3 |
| `architecture\|架构` | +1 |
| `multiple\|across\|cross\|多模块` | +2 |

| Score | Complexity |
|-------|------------|
| >= 4 | High |
| 2-3 | Medium |
| 0-1 | Low |
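The detection and scoring tables above can be sketched as two small helpers (regex patterns and weights are transcribed from the tables; the function names are illustrative):

```javascript
// Illustrative implementation of the Phase 2 detection tables.
function detectPerspectives(description) {
  const rules = [
    [/security|auth|inject|xss/i, 'security'],
    [/performance|speed|optimize/i, 'performance'],
    [/quality|clean|maintain|debt/i, 'code-quality'],
    [/architect|pattern|structure/i, 'architecture']
  ]
  const hits = rules.filter(([re]) => re.test(description)).map(([, p]) => p)
  // Fall back to the default pair when nothing matches
  return hits.length ? hits : ['code-quality', 'architecture']
}

function assessComplexity(description) {
  let score = 0
  if (/全项目|全量|comprehensive|full/i.test(description)) score += 3
  if (/architecture|架构/i.test(description)) score += 1
  if (/multiple|across|cross|多模块/i.test(description)) score += 2
  return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low'
}
```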
### Phase 3: Multi-Dimension Scan
Delegate to `commands/scan-debt.md` if available, otherwise execute inline.
**Core Strategy**: Three-layer parallel Fan-out
| Complexity | Strategy |
|------------|----------|
| Low | Direct: ACE search + Grep inline scan |
| Medium/High | Fan-out A: Subagent exploration (cli-explore-agent) + Fan-out B: CLI dimension analysis (gemini per dimension) + Fan-out C: Multi-perspective Gemini analysis |
**Fan-out Architecture**:
```
Fan-out A: Subagent Exploration (parallel cli-explore)
structure perspective | patterns perspective | deps perspective
↓ merge
Fan-out B: CLI Dimension Analysis (parallel gemini)
code | architecture | testing | dependency | documentation
↓ merge
Fan-out C: Multi-Perspective Gemini (parallel)
security | performance | code-quality | architecture
↓ Fan-in aggregate
debt-inventory.json
```
**Low Complexity Path** (inline):
```
mcp__ace-tool__search_context({
project_root_path: <project-root>,
query: "code smells, TODO/FIXME, deprecated APIs, complex functions, missing tests"
})
```
### Phase 4: Aggregate into Debt Inventory
**Standardize findings**:
For each finding, create entry:
| Field | Description |
|-------|-------------|
| `id` | `TD-NNN` (sequential) |
| `dimension` | code, architecture, testing, dependency, documentation |
| `severity` | critical, high, medium, low |
| `file` | File path |
| `line` | Line number |
| `description` | Issue description |
| `suggestion` | Fix suggestion |
| `estimated_effort` | small, medium, large, unknown |
**Save outputs**:
1. Update .msg/meta.json with `debt_inventory` and `debt_score_before`
2. Write `<session-folder>/scan/debt-inventory.json`:
| Field | Description |
|-------|-------------|
| `scan_date` | ISO timestamp |
| `dimensions` | Array of scanned dimensions |
| `total_items` | Count of debt items |
| `by_dimension` | Count per dimension |
| `by_severity` | Count per severity level |
| `items` | Array of debt entries |
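A minimal illustrative `debt-inventory.json` (all values, including the file path, are hypothetical):

```json
{
  "scan_date": "2026-03-04T03:07:48Z",
  "dimensions": ["code", "testing"],
  "total_items": 2,
  "by_dimension": { "code": 1, "testing": 1 },
  "by_severity": { "high": 1, "medium": 1 },
  "items": [
    {
      "id": "TD-001",
      "dimension": "code",
      "severity": "high",
      "file": "src/billing/invoice.ts",
      "line": 120,
      "description": "God function with cyclomatic complexity 23",
      "suggestion": "Extract pricing and tax branches into helpers",
      "estimated_effort": "medium"
    },
    {
      "id": "TD-002",
      "dimension": "testing",
      "severity": "medium",
      "file": "src/billing/invoice.ts",
      "line": null,
      "description": "No test file covers invoice generation",
      "suggestion": "Add unit tests for edge cases",
      "estimated_effort": "small"
    }
  ]
}
```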
### Phase 5: Report to Coordinator
> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report
Standard report flow: team_msg log -> SendMessage with `[scanner]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.
**Report content**:
| Field | Value |
|-------|-------|
| Task | task.subject |
| Dimensions | dimensions scanned |
| Status | "Debt Found" or "Clean" |
| Summary | Total items with dimension breakdown |
| Top Debt Items | Top 5 critical/high severity items |
| Debt Inventory | Path to debt-inventory.json |
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No TDSCAN-* tasks available | Idle, wait for coordinator assignment |
| CLI tool unavailable | Fall back to ACE search + Grep inline analysis |
| Scan scope too broad | Narrow to src/ directory, report partial results |
| All dimensions return empty | Report clean scan, notify coordinator |
| CLI timeout | Use partial results, note incomplete dimensions |
| Critical issue beyond scope | SendMessage debt_items_found to coordinator |

# Command: verify
> Regression testing and quality validation. Runs the test suite, type check, lint, and optional CLI quality analysis. Compares debt_score_before vs debt_score_after to measure improvement.
## When to Use
- Phase 3 of Validator
- Fix operations are complete and the results need validation
- Validation phase of the Fix-Verify loop
**Trigger conditions**:
- A TDVAL-* task enters Phase 3
- Fix log is available (fix-log.json)
- Before/after metrics need comparison
## Strategy
### Delegation Mode
**Mode**: Sequential Checks + Optional CLI Analysis
**CLI Tool**: `gemini` (for quality comparison)
**CLI Mode**: `analysis`
### Decision Logic
```javascript
// Validation strategy selection
const checks = ['test_suite', 'type_check', 'lint_check']
// Optional: CLI quality analysis (only when many files were modified)
if (modifiedFiles.length > 5) {
checks.push('cli_quality_analysis')
}
// Validation within the Fix-Verify loop: focus on regression files
const isFixVerify = task.description.includes('fix-verify')
if (isFixVerify) {
// Only validate the files from the last regression
targetScope = 'regression_files_only'
}
```
## Execution Steps
### Step 1: Context Preparation
```javascript
// Get the list of modified files
const modifiedFiles = fixLog.files_modified || []
// Get the original debt score
const debtScoreBefore = sharedMemory.debt_score_before || 0
// Worktree path (loaded from shared memory)
const worktreePath = sharedMemory.worktree?.path || null
const cmdPrefix = worktreePath ? `cd "${worktreePath}" && ` : ''
// Detect available validation tools (within the worktree)
const hasNpm = Bash(`${cmdPrefix}which npm 2>/dev/null && echo "yes" || echo "no"`).trim() === 'yes'
const hasTsc = Bash(`${cmdPrefix}which npx 2>/dev/null && npx tsc --version 2>/dev/null && echo "yes" || echo "no"`).includes('yes')
const hasEslint = Bash(`${cmdPrefix}npx eslint --version 2>/dev/null && echo "yes" || echo "no"`).includes('yes')
const hasPytest = Bash(`${cmdPrefix}which pytest 2>/dev/null && echo "yes" || echo "no"`).trim() === 'yes'
```
### Step 2: Execute Strategy
```javascript
// === Check 1: Test Suite (run in worktree) ===
let testOutput = ''
let testsPassed = true
let testRegressions = 0
if (hasNpm) {
testOutput = Bash(`${cmdPrefix}npm test 2>&1 || true`)
} else if (hasPytest) {
testOutput = Bash(`${cmdPrefix}python -m pytest 2>&1 || true`)
} else {
testOutput = 'no-test-runner'
}
if (testOutput !== 'no-test-runner') {
testsPassed = !/FAIL|error|failed/i.test(testOutput)
testRegressions = testsPassed ? 0 : Number(testOutput.match(/(\d+) failed/)?.[1] || 1)
}
// === Check 2: Type Checking (run in worktree) ===
let typeErrors = 0
if (hasTsc) {
const tscOutput = Bash(`${cmdPrefix}npx tsc --noEmit 2>&1 || true`)
typeErrors = (tscOutput.match(/error TS/g) || []).length
}
// === Check 3: Linting (run in worktree) ===
let lintErrors = 0
if (hasEslint && modifiedFiles.length > 0) {
const lintOutput = Bash(`${cmdPrefix}npx eslint --no-error-on-unmatched-pattern ${modifiedFiles.join(' ')} 2>&1 || true`)
lintErrors = Number(lintOutput.match(/(\d+) error/)?.[1] || 0)
}
// === Check 4: Optional CLI Quality Analysis ===
let qualityImprovement = 0
if (checks.includes('cli_quality_analysis')) {
const prompt = `PURPOSE: Compare code quality before and after tech debt cleanup to measure improvement
TASK: • Analyze the modified files for quality metrics • Compare complexity, duplication, naming quality • Assess if the changes actually reduced debt • Identify any new issues introduced
MODE: analysis
CONTEXT: ${modifiedFiles.map(f => `@${f}`).join(' ')}
EXPECTED: Quality comparison with: metrics_before, metrics_after, improvement_score (0-100), new_issues_found
CONSTRAINTS: Focus on the specific changes, not overall project quality`
Bash(`ccw cli -p "${prompt}" --tool gemini --mode analysis --rule analysis-review-code-quality${worktreePath ? ' --cd "' + worktreePath + '"' : ''}`, {
run_in_background: true
})
// Wait for the background CLI call to complete, then parse the quality improvement score
}
// === Compute debt score ===
// Fixed items are excluded from the after score
const fixedDebtIds = new Set(
(sharedMemory.fix_results?.files_modified || [])
.flatMap(f => debtInventory.filter(i => i.file === f).map(i => i.id))
)
const debtScoreAfter = debtInventory.filter(i => !fixedDebtIds.has(i.id)).length
```
### Step 3: Result Processing
```javascript
const totalRegressions = testRegressions + typeErrors + lintErrors
const passed = totalRegressions === 0
// For a small number of regressions, attempt a fix via code-developer
if (totalRegressions > 0 && totalRegressions <= 3) {
const regressionDetails = []
if (testRegressions > 0) regressionDetails.push(`${testRegressions} test failures`)
if (typeErrors > 0) regressionDetails.push(`${typeErrors} type errors`)
if (lintErrors > 0) regressionDetails.push(`${lintErrors} lint errors`)
Task({
subagent_type: "code-developer",
run_in_background: false,
description: `Fix ${totalRegressions} regressions from debt cleanup`,
prompt: `## Goal
Fix regressions introduced by tech debt cleanup.
${worktreePath ? `\n## Worktree (mandatory)\n- Working directory: ${worktreePath}\n- **All file operations must happen under ${worktreePath}**\n- Prefix Bash commands with cd "${worktreePath}" && ...\n` : ''}
## Regressions
${regressionDetails.join('\n')}
## Modified Files
${modifiedFiles.map(f => `- ${f}`).join('\n')}
## Test Output (if failed)
${testOutput.split('\n').filter(l => /FAIL|Error|error/i.test(l)).slice(0, 20).join('\n')}
## Constraints
- Fix ONLY the regressions, do not undo the debt fixes
- Preserve the debt cleanup changes
- Do NOT skip tests or add suppressions`
})
// Re-run checks after fix attempt
// ... (simplified: re-check test suite)
}
// Generate the final validation result
const validationReport = {
passed,
regressions: totalRegressions,
debt_score_before: debtScoreBefore,
debt_score_after: debtScoreAfter,
improvement_percentage: debtScoreBefore > 0
? Math.round(((debtScoreBefore - debtScoreAfter) / debtScoreBefore) * 100)
: 0
}
```
## Output Format
```
## Validation Results
### Status: [PASS|FAIL]
### Regressions: [count]
- Test Suite: [PASS|FAIL] ([n] regressions)
- Type Check: [PASS|FAIL] ([n] errors)
- Lint: [PASS|FAIL] ([n] errors)
- Quality: [IMPROVED|NO_CHANGE]
### Debt Score
- Before: [score]
- After: [score]
- Improvement: [%]%
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No test runner available | Skip test check, rely on type+lint |
| tsc not available | Skip type check, rely on test+lint |
| eslint not available | Skip lint check, rely on test+type |
| All checks unavailable | Report minimal validation, warn coordinator |
| Fix attempt introduces new regressions | Revert fix, report original regressions |
| CLI quality analysis times out | Skip quality analysis, use debt score comparison only |

# Validator Role
Validates the results of technical debt cleanup. Runs the test suite to confirm no regressions, executes type checking and lint, and uses CLI analysis to measure how much code quality improved. Compares before/after debt scores and generates validation-report.json.
## Identity
- **Name**: `validator` | **Tag**: `[validator]`
- **Task Prefix**: `TDVAL-*`
- **Responsibility**: Validation (cleanup result validation)
## Boundaries
### MUST
- Only process `TDVAL-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[validator]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within validation responsibility scope
- Run complete validation flow (tests, type check, lint, quality analysis)
- Report regression_found if regressions detected
### MUST NOT
- Fix code directly (only attempt small fixes via code-developer)
- Create tasks for other roles
- Communicate directly with other worker roles (must go through coordinator)
- Skip any validation step
- Omit `[validator]` identifier in any output
---
## Toolbox
### Available Commands
| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `verify` | [commands/verify.md](commands/verify.md) | Phase 3 | Regression testing and quality validation |
### Tool Capabilities
| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `code-developer` | Subagent | verify.md | Small fix attempts (on validation failure) |
| `gemini` | CLI | verify.md | Code quality improvement analysis |
---
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `validation_complete` | validator -> coordinator | Validation passed | Includes before/after metrics |
| `regression_found` | validator -> coordinator | Regression found | Triggers the Fix-Verify loop |
| `error` | validator -> coordinator | Validation environment error | Blocking error |
## Message Bus
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
```
mcp__ccw-tools__team_msg({
operation: "log",
session_id: <session-id>,
from: "validator",
type: <message-type>,
ref: <artifact-path>
})
```
**CLI fallback** (when MCP unavailable):
```
Bash("ccw team log --session-id <session-id> --from validator --type <message-type> --ref <artifact-path> --json")
```
---
## Execution (5-Phase)
### Phase 1: Task Discovery
> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery
Standard task discovery flow: TaskList -> filter by prefix `TDVAL-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
### Phase 2: Load Context
| Input | Source | Required |
|-------|--------|----------|
| Session folder | task.description (regex: `session:\s*(.+)`) | Yes |
| Shared memory | `<session-folder>/.msg/meta.json` | Yes |
| Fix log | `<session-folder>/fixes/fix-log.json` | No |
**Loading steps**:
1. Extract session path from task description
2. Read .msg/meta.json for:
| Field | Description |
|-------|-------------|
| `worktree.path` | Worktree directory path |
| `debt_inventory` | Debt items list |
| `fix_results` | Fix results from executor |
| `debt_score_before` | Debt score before fixes |
3. Determine command prefix for worktree:
| Condition | Command Prefix |
|-----------|---------------|
| worktree exists | `cd "<worktree-path>" && ` |
| no worktree | Empty string |
4. Read fix-log.json for modified files list
### Phase 3: Run Validation Checks
Delegate to `commands/verify.md` if available, otherwise execute inline.
**Core Strategy**: 4-layer validation (all commands in worktree)
**Validation Results Structure**:
| Check | Status Field | Details |
|-------|--------------|---------|
| Test Suite | test_suite.status | regressions count |
| Type Check | type_check.status | errors count |
| Lint Check | lint_check.status | errors count |
| Quality Analysis | quality_analysis.status | improvement percentage |
**1. Test Suite** (in worktree):
| Detection | Command |
|-----------|---------|
| Node.js | `<cmdPrefix>npm test` or `<cmdPrefix>npx vitest run` |
| Python | `<cmdPrefix>python -m pytest` |
| No tests | Skip with "no-tests" note |

| Pass Criteria | Status |
|---------------|--------|
| No FAIL/error/failed keywords | PASS |
| "no-tests" detected | PASS (skip) |
| Otherwise | FAIL + count regressions |
**2. Type Check** (in worktree):
Command: `<cmdPrefix>npx tsc --noEmit`

| Pass Criteria | Status |
|---------------|--------|
| No TS errors or "skip" | PASS |
| TS errors found | FAIL + count errors |
**3. Lint Check** (in worktree):
Command: `<cmdPrefix>npx eslint --no-error-on-unmatched-pattern <files>`

| Pass Criteria | Status |
|---------------|--------|
| No errors or "skip" | PASS |
| Errors found | FAIL + count errors |
**4. Quality Analysis**:
| Metric | Calculation |
|--------|-------------|
| debt_score_after | debtInventory.filter(not in modified files).length |
| improvement | debt_score_before - debt_score_after |

| Condition | Status |
|-----------|--------|
| debt_score_after < debt_score_before | IMPROVED |
| Otherwise | NO_CHANGE |
### Phase 4: Compare Before/After & Generate Report
**Calculate totals**:
| Metric | Calculation |
|--------|-------------|
| total_regressions | test_regressions + type_errors + lint_errors |
| passed | total_regressions === 0 |
**Report structure**:
| Field | Description |
|-------|-------------|
| `validation_date` | ISO timestamp |
| `passed` | Boolean |
| `regressions` | Total regression count |
| `checks` | Validation results per check |
| `debt_score_before` | Initial debt score |
| `debt_score_after` | Final debt score |
| `improvement_percentage` | Percentage improvement |
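The totals and improvement calculation can be sketched as follows (a direct transcription of the tables above; the function name is illustrative, and the percentage is guarded against division by zero when no debt was scored):

```javascript
// Build the validation summary from per-check counts and debt scores.
function buildValidationSummary(checks, debtScoreBefore, debtScoreAfter) {
  const totalRegressions =
    (checks.test_regressions || 0) + (checks.type_errors || 0) + (checks.lint_errors || 0)
  return {
    passed: totalRegressions === 0,
    regressions: totalRegressions,
    debt_score_before: debtScoreBefore,
    debt_score_after: debtScoreAfter,
    improvement_percentage: debtScoreBefore > 0
      ? Math.round(((debtScoreBefore - debtScoreAfter) / debtScoreBefore) * 100)
      : 0
  }
}
```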
**Save outputs**:
1. Write `<session-folder>/validation/validation-report.json`
2. Update .msg/meta.json with `validation_results` and `debt_score_after`
### Phase 5: Report to Coordinator
> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report
Standard report flow: team_msg log -> SendMessage with `[validator]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.
**Message type selection**:
| Condition | Message Type |
|-----------|--------------|
| passed | validation_complete |
| not passed | regression_found |
**Report content**:
| Field | Value |
|-------|-------|
| Task | task.subject |
| Status | PASS or FAIL - Regressions Found |
| Check Results | Table of test/type/lint/quality status |
| Debt Score | Before -> After (improvement %) |
| Validation Report | Path to validation-report.json |
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No TDVAL-* tasks available | Idle, wait for coordinator |
| Test environment broken | Report error, suggest manual fix |
| No test suite found | Skip test check, validate with type+lint only |
| Fix log empty | Validate all source files, report minimal analysis |
| Type check fails | Attempt code-developer fix for type errors |
| Critical regression (>10) | Report immediately, do not attempt fix |