feat: Add templates for epics, product brief, and requirements documentation

- Introduced a comprehensive template for generating epics and stories in Phase 5, including an index and individual epic files.
- Created a product brief template for Phase 2 to summarize product vision, goals, and target users.
- Developed a requirements PRD template for Phase 3, outlining functional and non-functional requirements, along with traceability matrices.

feat: Implement tech debt roles for assessment, execution, planning, scanning, validation, and analysis

- Added roles for tech debt assessor, executor, planner, scanner, validator, and analyst, each with defined phases and processes for managing technical debt.
- Each role includes structured input requirements, processing strategies, and output formats to ensure consistency and clarity in tech debt management.
Author: catlog22
Date: 2026-03-07 13:32:04 +08:00
Parent: 7ee9b579fa
Commit: 29a1fea467
255 changed files with 14407 additions and 21120 deletions

---
role: assessor
prefix: TDEVAL
inner_loop: false
message_types: [state_update]
---
# Tech Debt Assessor
Quantitative evaluator for tech debt items. Score each debt item on business impact (1-5) and fix cost (1-5), classify into priority quadrants, produce priority-matrix.json.
## Phase 2: Load Debt Inventory
| Input | Source | Required |
|-------|--------|----------|
| Session path | task description (regex: `session:\s*(.+)`) | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Debt inventory | meta.json:debt_inventory OR <session>/scan/debt-inventory.json | Yes |
1. Extract session path from task description
2. Read .msg/meta.json for team context
3. Load debt_inventory from shared memory or fallback to debt-inventory.json file
4. If debt_inventory is empty -> report empty assessment and exit
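The load-and-fallback flow above can be sketched as follows; `readJson` is a hypothetical file reader passed in so the control flow stays testable:

```javascript
// Phase 2 sketch: extract the session path from the task description,
// then load the debt inventory from shared memory, falling back to the
// scan artifact. File access is injected so only the logic is shown here.
function extractSessionPath(taskDescription) {
  const match = taskDescription.match(/session:\s*(.+)/);
  return match ? match[1].trim() : null;
}

function loadDebtInventory(meta, readJson) {
  // Prefer shared memory (meta.json); fall back to the scan artifact.
  if (Array.isArray(meta.debt_inventory) && meta.debt_inventory.length > 0) {
    return meta.debt_inventory;
  }
  return readJson('scan/debt-inventory.json') || [];
}
```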
## Phase 3: Evaluate Each Item
**Strategy selection**:
| Item Count | Strategy |
|------------|----------|
| <= 10 | Heuristic: severity-based impact + effort-based cost |
| 11-50 | CLI batch: single gemini analysis call |
| > 50 | CLI chunked: batches of 25 items |
**Impact Score Mapping** (heuristic):
| Severity | Impact Score |
|----------|-------------|
| critical | 5 |
| high | 4 |
| medium | 3 |
| low | 1 |
**Cost Score Mapping** (heuristic):
| Estimated Effort | Cost Score |
|------------------|------------|
| small | 1 |
| medium | 3 |
| large | 5 |
| unknown | 3 |
**Priority Quadrant Classification**:
| Impact | Cost | Quadrant |
|--------|------|----------|
| >= 4 | <= 2 | quick-win |
| >= 4 | >= 3 | strategic |
| <= 3 | <= 2 | backlog |
| <= 3 | >= 3 | defer |
For CLI mode, prompt gemini with full debt summary requesting JSON array of `{id, impact_score, cost_score, risk_if_unfixed, priority_quadrant}`. Unevaluated items fall back to heuristic scoring.
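Under the heuristic strategy, the three mapping tables combine into one scoring pass. This sketch assumes debt items carry `severity` and `estimated_effort` fields as produced by the scanner:

```javascript
// Heuristic scoring and quadrant classification, per the tables above.
const IMPACT = { critical: 5, high: 4, medium: 3, low: 1 };
const COST = { small: 1, medium: 3, large: 5, unknown: 3 };

function classify(item) {
  const impact = IMPACT[item.severity] ?? 3;   // unknown severity -> middle
  const cost = COST[item.estimated_effort] ?? 3;
  let quadrant;
  if (impact >= 4) quadrant = cost <= 2 ? 'quick-win' : 'strategic';
  else quadrant = cost <= 2 ? 'backlog' : 'defer';
  return { ...item, impact_score: impact, cost_score: cost, priority_quadrant: quadrant };
}
```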
## Phase 4: Generate Priority Matrix
1. Build matrix structure: evaluation_date, total_items, by_quadrant (grouped), summary (counts per quadrant)
2. Sort within each quadrant by impact_score descending
3. Write `<session>/assessment/priority-matrix.json`
4. Update .msg/meta.json with `priority_matrix` summary and evaluated `debt_inventory`
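A sketch of the matrix-building step, assuming items already carry `priority_quadrant` and `impact_score` from Phase 3:

```javascript
// Build the priority-matrix structure: group by quadrant, sort each
// group by impact_score descending, and summarize counts per quadrant.
function buildPriorityMatrix(items) {
  const by_quadrant = {};
  for (const item of items) {
    if (!by_quadrant[item.priority_quadrant]) by_quadrant[item.priority_quadrant] = [];
    by_quadrant[item.priority_quadrant].push(item);
  }
  for (const group of Object.values(by_quadrant)) {
    group.sort((a, b) => b.impact_score - a.impact_score);
  }
  const summary = Object.fromEntries(
    Object.entries(by_quadrant).map(([q, g]) => [q, g.length])
  );
  return {
    evaluation_date: new Date().toISOString().slice(0, 10),
    total_items: items.length,
    by_quadrant,
    summary
  };
}
```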

# Analyze Task
Parse user task -> detect tech debt signals -> assess complexity -> determine pipeline mode and roles.
**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.
## Signal Detection
| Keywords | Signal | Mode Hint |
|----------|--------|-----------|
| 扫描, scan, 审计, audit | debt-scan | scan |
| 评估, assess, quantify | debt-assess | scan |
| 规划, plan, roadmap | debt-plan | targeted |
| 修复, fix, remediate, clean | debt-fix | remediate |
| 验证, validate, verify | debt-validate | remediate |
| 定向, targeted, specific | debt-targeted | targeted |
## Complexity Scoring
| Factor | Points |
|--------|--------|
| Full codebase scope | +2 |
| Multiple debt dimensions | +1 per dimension (max 3) |
| Large codebase (implied) | +1 |
| Targeted specific items | -1 |
Results: 1-3 Low (scan mode), 4-6 Medium (remediate), 7+ High (remediate + full pipeline)
## Pipeline Mode Determination
| Score + Signals | Mode |
|----------------|------|
| scan/audit keywords | scan |
| targeted/specific keywords | targeted |
| Default | remediate |
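A sketch of the keyword matching. The precedence order (scan and targeted signals checked before the remediate default) is an assumption, since the tables do not state tie-breaking:

```javascript
// Keyword-based signal detection and pipeline-mode choice. Matching is a
// plain substring test on the lowercased task text; Chinese keywords are
// kept verbatim since they are literal match patterns.
const SIGNALS = [
  { keywords: ['扫描', 'scan', '审计', 'audit'], mode: 'scan' },
  { keywords: ['评估', 'assess', 'quantify'], mode: 'scan' },
  { keywords: ['定向', 'targeted', 'specific'], mode: 'targeted' },
  { keywords: ['规划', 'plan', 'roadmap'], mode: 'targeted' },
  { keywords: ['修复', 'fix', 'remediate', 'clean'], mode: 'remediate' },
  { keywords: ['验证', 'validate', 'verify'], mode: 'remediate' }
];

function detectMode(task) {
  const text = task.toLowerCase();
  for (const { keywords, mode } of SIGNALS) {
    if (keywords.some(k => text.includes(k))) return mode;
  }
  return 'remediate'; // default mode when no signal matches
}
```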
## Output
Write scope context to coordinator memory:
```json
{
"pipeline_mode": "<scan|remediate|targeted>",
"scope": "<detected-scope>",
"focus_dimensions": ["code", "architecture", "testing", "dependency", "documentation"],
"complexity": { "score": 0, "level": "Low|Medium|High" }
}
```

# Monitor Pipeline
Event-driven pipeline coordination. Beat model: coordinator wake -> process -> spawn -> STOP.
## Constants
- SPAWN_MODE: background
- ONE_STEP_PER_INVOCATION: true
- FAST_ADVANCE_AWARE: true
- WORKER_AGENT: team-worker
- MAX_GC_ROUNDS: 3
## Handler Router
| Source | Handler |
|--------|---------|
| Message contains [scanner], [assessor], [planner], [executor], [validator] | handleCallback |
| "capability_gap" | handleAdapt |
| "check" or "status" | handleCheck |
| "resume" or "continue" | handleResume |
| All tasks completed | handleComplete |
| Default | handleSpawnNext |
## handleCallback
Worker completed. Process and advance.
### Role Detection Table
| Message Pattern | Role Detection |
|----------------|---------------|
| `[scanner]` or task ID `TDSCAN-*` | scanner |
| `[assessor]` or task ID `TDEVAL-*` | assessor |
| `[planner]` or task ID `TDPLAN-*` | planner |
| `[executor]` or task ID `TDFIX-*` | executor |
| `[validator]` or task ID `TDVAL-*` | validator |
### Pipeline Stage Order
```
TDSCAN -> TDEVAL -> TDPLAN -> TDFIX -> TDVAL
```
1. Find matching worker by role tag in message
2. Check if progress update (inner loop) or final completion
3. Progress update -> update session state, STOP
4. Completion -> mark task done:
```
TaskUpdate({ taskId: "<task-id>", status: "completed" })
```
5. Remove from active_workers, record completion in session
6. Check for checkpoints:
- **TDPLAN-001 completes** -> Plan Approval Gate:
```
AskUserQuestion({
  questions: [{ question: "Remediation plan generated. Review and decide:",
    header: "Plan Review", multiSelect: false,
    options: [
      { label: "Approve", description: "Proceed with fix execution" },
      { label: "Revise", description: "Re-run planner with feedback" },
      { label: "Abort", description: "Stop pipeline" }
    ]
  }]
})
```
  - Approve -> Worktree Creation -> handleSpawnNext
  - Revise -> Create TDPLAN-revised task -> handleSpawnNext
  - Abort -> Log shutdown -> handleComplete
- **Worktree Creation** (before TDFIX):
```
Bash("git worktree add .worktrees/TD-<slug>-<date> -b tech-debt/TD-<slug>-<date>")
```
  Update .msg/meta.json with worktree info.
- **TDVAL-* completes** -> GC Loop Check: read validation results from .msg/meta.json
  | Condition | Action |
  |-----------|--------|
  | No regressions | -> handleSpawnNext (pipeline complete) |
  | Regressions AND gc_rounds < 3 | Create fix-verify tasks, increment gc_rounds |
  | Regressions AND gc_rounds >= 3 | Accept current state -> handleComplete |
  Fix-Verify Task Creation:
```
TaskCreate({ subject: "TDFIX-fix-<round>", description: "PURPOSE: Fix regressions | Session: <session>" })
TaskCreate({ subject: "TDVAL-recheck-<round>", description: "...", blockedBy: ["TDFIX-fix-<round>"] })
```
7. -> handleSpawnNext
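The GC Loop Check above reduces to a small decision function; the return shapes here are illustrative, not a fixed contract:

```javascript
// Decide what to do after a validator (TDVAL-*) completes, per the
// GC Loop Check table. MAX_GC_ROUNDS mirrors the constant at the top.
const MAX_GC_ROUNDS = 3;

function gcLoopDecision(validation, gcRounds) {
  if (validation.regressions === 0) return { action: 'spawn-next' };
  if (gcRounds < MAX_GC_ROUNDS) {
    const round = gcRounds + 1;
    return {
      action: 'fix-verify',
      gc_rounds: round,
      // Fix task plus a recheck task blocked on it.
      tasks: [`TDFIX-fix-${round}`, `TDVAL-recheck-${round}`]
    };
  }
  return { action: 'accept-and-complete' };
}
```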
## handleCheck
Read-only status report, then STOP.
```
Pipeline Status (<mode>):
[DONE] TDSCAN-001 (scanner) -> scan complete
[DONE] TDEVAL-001 (assessor) -> assessment ready
[RUN] TDPLAN-001 (planner) -> planning...
[WAIT] TDFIX-001 (executor) -> blocked by TDPLAN-001
[WAIT] TDVAL-001 (validator) -> blocked by TDFIX-001
GC Rounds: 0/3
Session: <session-id>
Commands: 'resume' to advance | 'check' to refresh
```
Output status -- do NOT advance pipeline.
## handleResume
1. Audit task list:
   - Tasks stuck in "in_progress" -> reset to "pending"
   - Tasks with completed blockers but still "pending" -> include in spawn list
2. -> handleSpawnNext
## handleSpawnNext
Find ready tasks, spawn workers, STOP.
1. Collect: completedSubjects, inProgressSubjects, readySubjects (pending + all blockedBy completed)
2. No ready + work in progress -> report waiting, STOP
3. No ready + nothing in progress -> handleComplete
4. Has ready -> for each:
   a. Check inner loop role with active worker -> skip (worker picks up)
   b. TaskUpdate -> in_progress
   c. team_msg log -> task_unblocked
   d. Spawn team-worker:
```
Agent({
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-tech-debt/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
team_name: tech-debt
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).`
})
```
Stage-to-role mapping:
| Task Prefix | Role |
|-------------|------|
| TDSCAN | scanner |
| TDEVAL | assessor |
| TDPLAN | planner |
| TDFIX | executor |
| TDVAL | validator |
5. Add to active_workers, update session, output summary, STOP
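The readiness rule (pending, with every blocker completed) can be sketched as:

```javascript
// Filter the task list down to spawnable tasks: status "pending" and
// every blockedBy subject already completed.
function readyTasks(tasks) {
  const done = new Set(
    tasks.filter(t => t.status === 'completed').map(t => t.subject)
  );
  return tasks.filter(
    t => t.status === 'pending' && (t.blockedBy || []).every(b => done.has(b))
  );
}
```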
## handleComplete
Pipeline done. Generate report and completion action.
1. Verify all tasks (including fix-verify tasks) have status "completed"
2. If any not completed -> handleSpawnNext
3. If all completed:
   - Read final state from .msg/meta.json
   - Compile summary: total tasks, completed, gc_rounds, debt_score_before, debt_score_after
   - If worktree exists and validation passed: commit, push, gh pr create, cleanup worktree
   - Transition to coordinator Phase 5
## handleAdapt
Capability gap reported mid-pipeline.
1. Parse gap description
2. Check if an existing role covers it -> redirect
3. Role count < 5 -> generate dynamic role spec in <session>/role-specs/
4. Create new task, spawn worker
5. Role count >= 5 -> merge or pause
## Fast-Advance Reconciliation
On every coordinator wake:
1. Read team_msg entries with type="fast_advance"
2. Sync active_workers with spawned successors
3. No duplicate spawns

Tech debt governance team coordinator. Orchestrates the pipeline: requirement clarification -> mode selection (scan/remediate/targeted) -> team creation -> task dispatch -> monitoring and coordination -> Fix-Verify loop -> debt reduction report.
## Identity
- **Name**: coordinator | **Tag**: [coordinator]
- **Responsibility**: Parse requirements -> Create team -> Dispatch tasks -> Monitor progress -> Report results
## Boundaries
- Skip dependency validation when creating task chains
- Omit `[coordinator]` identifier in any output
> **Core principle**: coordinator is the orchestrator, not the executor. All actual work must be delegated to worker roles via TaskCreate.
## Command Execution Protocol
When coordinator needs to execute a command (analyze, dispatch, monitor):
1. Read `commands/<command>.md`
2. Follow the workflow defined in the command
3. Commands are inline execution guides, NOT separate agents
4. Execute synchronously, complete before proceeding
## Entry Router
When coordinator is invoked, detect invocation type:
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains [scanner], [assessor], [planner], [executor], [validator] | -> handleCallback (monitor.md) |
| Status check | Arguments contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume (monitor.md) |
| Pipeline complete | All tasks have status "completed" | -> handleComplete (monitor.md) |
| Interrupted session | Active/paused session exists in .workflow/.team/TD-* | -> Phase 0 |
| New session | None of above | -> Phase 1 |
For callback/check/resume/complete: load `commands/monitor.md`, execute matched handler, STOP.
## Phase 0: Session Resume Check
Detect and resume interrupted sessions before creating new ones.
1. Scan `.workflow/.team/TD-*/.msg/meta.json` for active/paused sessions
2. No sessions -> Phase 1
3. Single session -> reconcile (audit TaskList, reset in_progress->pending, rebuild team, kick first ready task)
4. Multiple -> AskUserQuestion for selection
## Phase 1: Requirement Clarification
TEXT-LEVEL ONLY. No source code reading.
1. Parse arguments for explicit settings: mode, scope, focus areas
2. Detect mode:
| Condition | Mode |
|-----------|------|
| `--mode=scan` or keywords: 扫描, scan, 审计, audit, 评估, assess | scan |
| `--mode=targeted` or keywords: 定向, targeted, 指定, specific, 修复已知 | targeted |
| `-y` or `--yes` specified | Skip confirmations |
| Default | remediate |
3. Ask for missing parameters (skip if auto mode):
   - AskUserQuestion: Tech Debt Target (自定义 custom scope / 全项目扫描 full-project scan / 完整治理 full remediation / 定向修复 targeted fix)
4. Store: mode, scope, focus, constraints
5. Delegate to commands/analyze.md -> output task-analysis context
## Phase 2: Create Team + Initialize Session
1. Generate session ID: `TD-<slug>-<YYYY-MM-DD>`
2. Create session folder structure (scan/, assessment/, plan/, fixes/, validation/, wisdom/ with learnings.md, decisions.md, conventions.md, issues.md)
3. Initialize .msg/meta.json via team_msg state_update with pipeline metadata: pipeline_mode, pipeline_stages, roles, team_name, debt_inventory, priority_matrix, remediation_plan, fix_results, validation_results, debt_score_before, debt_score_after
4. TeamCreate(team_name="tech-debt")
5. Do NOT spawn workers yet - deferred to Phase 4 (Spawn-and-Stop)
## Phase 3: Create Task Chain
Delegate to commands/dispatch.md. Task chain by mode:
| Mode | Task Chain |
|------|------------|
| remediate | TDSCAN-001 -> TDEVAL-001 -> TDPLAN-001 -> TDFIX-001 -> TDVAL-001 |
| targeted | TDPLAN-001 -> TDFIX-001 -> TDVAL-001 |
## Phase 4: Spawn-and-Stop
Delegate to commands/monitor.md#handleSpawnNext:
1. Find ready tasks (pending + blockedBy resolved)
2. Spawn team-worker agents (see SKILL.md Spawn Template)
3. Output status summary
4. STOP
## Phase 5: Report + Debt Reduction Metrics + PR
1. Read shared memory -> collect all results
2. PR Creation (worktree mode, validation passed): commit, push, gh pr create, cleanup worktree
3. Calculate metrics: debt_items_found = debt_inventory.length, items_fixed = fix_results.items_fixed, reduction_rate = (items_fixed / debt_items_found) * 100
4. Generate report: mode, debt scores before/after, items fixed/remaining, validation status, regressions
5. Output with [coordinator] prefix + team_msg log
6. Execute completion action, skip if auto mode (AskUserQuestion: 新目标 new target / 深度修复 continue high-priority fixes / 关闭团队 close team)
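The metric arithmetic in step 3, as a sketch (rounding to a whole percent is an assumption):

```javascript
// Debt reduction metrics for the Phase 5 report.
function debtMetrics(inventory, fixResults) {
  const found = inventory.length;
  const fixed = fixResults.items_fixed;
  return {
    debt_items_found: found,
    items_fixed: fixed,
    // Guard against division by zero when the scanner found no debt.
    reduction_rate: found === 0 ? 0 : Math.round((fixed / found) * 100)
  };
}
```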
## Error Handling
| Error | Handling |
|-------|----------|
| Dependency cycle | Detect, report to user, halt |
| Invalid mode | Reject with error, ask to clarify |
| Session corruption | Attempt recovery, fallback to manual reconciliation |
| Teammate unresponsive | Send follow-up, 2x -> respawn |
| Scanner finds no debt | Report clean codebase, skip to summary |
| Fix-Verify loop stuck >3 iterations | Accept current state, continue pipeline |
| Build/test environment broken | Notify user, suggest manual fix |
| All tasks completed but debt_score_after > debt_score_before | Report with WARNING, suggest re-run |

---
role: executor
prefix: TDFIX
inner_loop: true
message_types: [state_update]
---
# Tech Debt Executor
Debt cleanup executor. Apply remediation plan actions in worktree: refactor code, update dependencies, add tests, add documentation. Batch-delegate to CLI tools, self-validate after each batch.
## Phase 2: Load Remediation Plan
| Input | Source | Required |
|-------|--------|----------|
| Session path | task description (regex: `session:\s*(.+)`) | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Remediation plan | <session>/plan/remediation-plan.json | Yes |
| Worktree info | meta.json:worktree.path, worktree.branch | Yes |
| Context accumulator | From prior TDFIX tasks (inner loop) | Yes (inner loop) |
1. Extract session path from task description
2. Read .msg/meta.json for worktree path and branch
3. Read remediation-plan.json, extract all actions from plan phases
4. Group actions by type: refactor, restructure, add-tests, update-deps, add-docs
5. Split large groups (> 10 items) into sub-batches of 10
6. For inner loop (fix-verify cycle): load context_accumulator from prior TDFIX tasks, parse review/validation feedback for specific issues
**Batch order**: refactor -> update-deps -> add-tests -> add-docs -> restructure
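Steps 4-5 and the batch ordering above can be sketched as one grouping function; a minimal illustration assuming each action carries a `type` field:

```python
# Group actions by type, split groups > 10 into sub-batches of 10,
# and emit batches in the documented order.
BATCH_ORDER = ["refactor", "update-deps", "add-tests", "add-docs", "restructure"]

def build_batches(actions: list[dict], size: int = 10) -> list[list[dict]]:
    groups: dict[str, list[dict]] = {}
    for action in actions:
        groups.setdefault(action["type"], []).append(action)
    batches = []
    for batch_type in BATCH_ORDER:
        items = groups.get(batch_type, [])
        for i in range(0, len(items), size):  # sub-batches of `size`
            batches.append(items[i:i + size])
    return batches
```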
## Phase 3: Execute Fixes
For each batch, use CLI tool for implementation:
**Worktree constraint**: ALL file operations and commands must execute within worktree path. Use `cd "<worktree-path>" && ...` prefix for all Bash commands.
**Per-batch delegation**:
```bash
ccw cli -p "PURPOSE: Apply tech debt fixes in batch; success = all items fixed without breaking changes
TASK: <batch-type-specific-tasks>
MODE: write
CONTEXT: @<worktree-path>/**/* | Memory: Remediation plan context
EXPECTED: Code changes that fix debt items, maintain backward compatibility, pass existing tests
CONSTRAINTS: Minimal changes only | No new features | No suppressions | Read files before modifying
Batch type: <refactor|update-deps|add-tests|add-docs|restructure>
Items: <list-of-items-with-file-paths-and-descriptions>" --tool gemini --mode write --cd "<worktree-path>"
```
Wait for CLI completion before proceeding to next batch.
**Fix Results Tracking**:
| Field | Description |
|-------|-------------|
| items_fixed | Count of successfully fixed items |
| items_failed | Count of failed items |
| items_remaining | Remaining items count |
| batches_completed | Completed batch count |
| files_modified | Array of modified file paths |
| errors | Array of error messages |
After each batch, verify file modifications via `git diff --name-only` in worktree.
## Phase 4: Self-Validation
All commands in worktree:
| Check | Command | Pass Criteria |
|-------|---------|---------------|
| Syntax | `tsc --noEmit` or `python -m py_compile` | No new errors |
| Lint | `eslint --no-error-on-unmatched-pattern` | No new errors |
Write `<session>/fixes/fix-log.json` with fix results. Update .msg/meta.json with `fix_results`.
Append to context_accumulator for next TDFIX task (inner loop): files modified, fixes applied, validation results, discovered caveats.
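Writing `fix-log.json` with the tracking fields above might look like this sketch; only the six field names come from the table, the surrounding schema is an assumption:

```python
import json
from pathlib import Path

# Write <session>/fixes/fix-log.json with the fix-results fields.
def write_fix_log(session: str, results: dict) -> None:
    log = {
        "items_fixed": results.get("items_fixed", 0),
        "items_failed": results.get("items_failed", 0),
        "items_remaining": results.get("items_remaining", 0),
        "batches_completed": results.get("batches_completed", 0),
        "files_modified": results.get("files_modified", []),
        "errors": results.get("errors", []),
    }
    path = Path(session) / "fixes" / "fix-log.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(log, indent=2))
```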


@@ -0,0 +1,69 @@
---
role: planner
prefix: TDPLAN
inner_loop: false
message_types: [state_update]
---
# Tech Debt Planner
Remediation plan designer. Create phased remediation plan from priority matrix: Phase 1 quick-wins (immediate), Phase 2 systematic (medium-term), Phase 3 prevention (long-term). Produce remediation-plan.md.
## Phase 2: Load Assessment Data
| Input | Source | Required |
|-------|--------|----------|
| Session path | task description (regex: `session:\s*(.+)`) | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Priority matrix | <session>/assessment/priority-matrix.json | Yes |
1. Extract session path from task description
2. Read .msg/meta.json for debt_inventory
3. Read priority-matrix.json for quadrant groupings
4. Group items: quickWins (quick-win), strategic (strategic), backlog (backlog), deferred (defer)
## Phase 3: Create Remediation Plan
**Strategy selection**:
| Item Count (quick-win + strategic) | Strategy |
|------------------------------------|----------|
| <= 5 | Inline: generate steps from item data |
| > 5 | CLI-assisted: gemini generates detailed remediation steps |
**3-Phase Plan Structure**:
| Phase | Name | Source Items | Focus |
|-------|------|-------------|-------|
| 1 | Quick Wins | quick-win quadrant | High impact, low cost -- immediate execution |
| 2 | Systematic | strategic quadrant | High impact, high cost -- structured refactoring |
| 3 | Prevention | Generated from dimension patterns | Long-term prevention mechanisms |
**Action Type Mapping**:
| Dimension | Action Type |
|-----------|-------------|
| code | refactor |
| architecture | restructure |
| testing | add-tests |
| dependency | update-deps |
| documentation | add-docs |
**Prevention Actions** (generated when dimension has >= 3 items):
| Dimension | Prevention Action |
|-----------|-------------------|
| code | Add linting rules for complexity thresholds and code smell detection |
| architecture | Introduce module boundary checks in CI pipeline |
| testing | Set minimum coverage thresholds in CI and add pre-commit test hooks |
| dependency | Configure automated dependency update bot (Renovate/Dependabot) |
| documentation | Add JSDoc/docstring enforcement in linting rules |
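The prevention rule above (one action per dimension with at least 3 items) can be sketched directly from the table; a minimal illustration assuming each inventory item has a `dimension` field:

```python
from collections import Counter

# Phase 3 prevention actions, one per dimension with >= 3 items.
PREVENTION = {
    "code": "Add linting rules for complexity thresholds and code smell detection",
    "architecture": "Introduce module boundary checks in CI pipeline",
    "testing": "Set minimum coverage thresholds in CI and add pre-commit test hooks",
    "dependency": "Configure automated dependency update bot (Renovate/Dependabot)",
    "documentation": "Add JSDoc/docstring enforcement in linting rules",
}

def prevention_actions(items: list[dict]) -> list[dict]:
    counts = Counter(item["dimension"] for item in items)
    return [
        {"dimension": dim, "action": PREVENTION[dim]}
        for dim, n in counts.items()
        if n >= 3 and dim in PREVENTION
    ]
```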
For CLI-assisted mode, prompt gemini with debt summary requesting specific fix steps per item, grouped into phases, with dependencies and estimated time.
## Phase 4: Validate & Save
1. Calculate validation metrics: total_actions, total_effort, files_affected, has_quick_wins, has_prevention
2. Write `<session>/plan/remediation-plan.md` (markdown with per-item checklists)
3. Write `<session>/plan/remediation-plan.json` (machine-readable)
4. Update .msg/meta.json with `remediation_plan` summary


@@ -0,0 +1,81 @@
---
role: scanner
prefix: TDSCAN
inner_loop: false
message_types: [state_update]
---
# Tech Debt Scanner
Multi-dimension tech debt scanner. Scan codebase across 5 dimensions (code, architecture, testing, dependency, documentation), produce structured debt inventory with severity rankings.
## Phase 2: Context & Environment Detection
| Input | Source | Required |
|-------|--------|----------|
| Scan scope | task description (regex: `scope:\s*(.+)`) | No (default: `**/*`) |
| Session path | task description (regex: `session:\s*(.+)`) | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
1. Extract session path and scan scope from task description
2. Read .msg/meta.json for team context
3. Detect project type and framework:
| Signal File | Project Type |
|-------------|-------------|
| package.json + React/Vue/Angular | Frontend Node |
| package.json + Express/Fastify/NestJS | Backend Node |
| pyproject.toml / requirements.txt | Python |
| go.mod | Go |
| No detection | Generic |
4. Determine scan dimensions (default: code, architecture, testing, dependency, documentation)
5. Detect perspectives from task description:
| Condition | Perspective |
|-----------|-------------|
| `security\|auth\|inject\|xss` | security |
| `performance\|speed\|optimize` | performance |
| `quality\|clean\|maintain\|debt` | code-quality |
| `architect\|pattern\|structure` | architecture |
| Default | code-quality + architecture |
6. Assess complexity:
| Score | Complexity | Strategy |
|-------|------------|----------|
| >= 4 | High | Triple Fan-out: CLI explore + CLI 5 dimensions + multi-perspective Gemini |
| 2-3 | Medium | Dual Fan-out: CLI explore + CLI 3 dimensions |
| 0-1 | Low | Inline: ACE search + Grep |
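The perspective detection in step 5 can be sketched as a regex scan over the task description; a minimal illustration, with case-insensitive matching assumed:

```python
import re

# Map task-description keywords to scan perspectives (step 5).
PERSPECTIVE_PATTERNS = [
    (r"security|auth|inject|xss", "security"),
    (r"performance|speed|optimize", "performance"),
    (r"quality|clean|maintain|debt", "code-quality"),
    (r"architect|pattern|structure", "architecture"),
]

def detect_perspectives(task: str) -> list[str]:
    found = [name for pat, name in PERSPECTIVE_PATTERNS
             if re.search(pat, task, re.IGNORECASE)]
    return found or ["code-quality", "architecture"]  # documented default
```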
## Phase 3: Multi-Dimension Scan
**Low Complexity** (inline):
- Use `mcp__ace-tool__search_context` for code smells, TODO/FIXME, deprecated APIs, complex functions, dead code, missing tests
- Classify findings into dimensions
**Medium/High Complexity** (Fan-out):
- Fan-out A: CLI exploration (structure, patterns, dependencies angles) via `ccw cli --tool gemini --mode analysis`
- Fan-out B: CLI dimension analysis (parallel gemini per dimension -- code, architecture, testing, dependency, documentation)
- Fan-out C (High only): Multi-perspective Gemini analysis (security, performance, code-quality, architecture)
- Fan-in: Merge results, cross-deduplicate by file:line, boost severity for multi-source findings
**Standardize each finding**:
| Field | Description |
|-------|-------------|
| `id` | `TD-NNN` (sequential) |
| `dimension` | code, architecture, testing, dependency, documentation |
| `severity` | critical, high, medium, low |
| `file` | File path |
| `line` | Line number |
| `description` | Issue description |
| `suggestion` | Fix suggestion |
| `estimated_effort` | small, medium, large, unknown |
## Phase 4: Aggregate & Save
1. Deduplicate findings across Fan-out layers (file:line key), merge cross-references
2. Sort by severity (cross-referenced items boosted)
3. Write `<session>/scan/debt-inventory.json` with scan_date, dimensions, total_items, by_dimension, by_severity, items
4. Update .msg/meta.json with `debt_inventory` array and `debt_score_before` count
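Steps 1-2 (dedup by file:line, severity boost for cross-referenced findings) can be sketched as a fan-in merge; a minimal illustration in which the single-level boost and the `sources` counter are assumptions:

```python
# Merge fan-out layers: dedupe by file:line, boost severity one level
# when a finding appears in more than one source, sort by severity.
SEVERITY = ["low", "medium", "high", "critical"]

def merge_findings(layers: list[list[dict]]) -> list[dict]:
    merged: dict[str, dict] = {}
    for layer in layers:
        for f in layer:
            key = f"{f['file']}:{f['line']}"
            if key not in merged:
                merged[key] = {**f, "sources": 1}
            else:
                item = merged[key]
                item["sources"] += 1
                idx = min(SEVERITY.index(item["severity"]) + 1, len(SEVERITY) - 1)
                item["severity"] = SEVERITY[idx]
    return sorted(merged.values(),
                  key=lambda f: SEVERITY.index(f["severity"]), reverse=True)
```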


@@ -0,0 +1,78 @@
---
role: validator
prefix: TDVAL
inner_loop: false
message_types: [state_update]
---
# Tech Debt Validator
Cleanup result validator. Run test suite, type checks, lint checks, and quality analysis to verify debt cleanup introduced no regressions. Compare before/after debt scores, produce validation-report.json.
## Phase 2: Load Context
| Input | Source | Required |
|-------|--------|----------|
| Session path | task description (regex: `session:\s*(.+)`) | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Fix log | <session>/fixes/fix-log.json | No |
1. Extract session path from task description
2. Read .msg/meta.json for: worktree.path, debt_inventory, fix_results, debt_score_before
3. Determine command prefix: `cd "<worktree-path>" && ` if worktree exists
4. Read fix-log.json for modified files list
5. Detect available validation tools in worktree:
| Signal | Tool | Method |
|--------|------|--------|
| package.json + npm | npm test | Test suite |
| pytest available | python -m pytest | Test suite |
| npx tsc available | npx tsc --noEmit | Type check |
| npx eslint available | npx eslint | Lint check |
## Phase 3: Run Validation Checks
Execute 4-layer validation (all commands in worktree):
**1. Test Suite**:
- Run `npm test` or `python -m pytest` in worktree
- PASS if no FAIL/error/failed keywords; FAIL with regression count otherwise
- Skip with "no-tests" if no test runner available
**2. Type Check**:
- Run `npx tsc --noEmit` in worktree
- Count `error TS` occurrences for error count
**3. Lint Check**:
- Run `npx eslint --no-error-on-unmatched-pattern <modified-files>` in worktree
- Count error occurrences
**4. Quality Analysis** (optional, when > 5 modified files):
- Use gemini CLI to compare code quality before/after
- Assess complexity, duplication, naming quality improvements
**Debt Score Calculation**:
- debt_score_after = count of inventory items whose `file` is not in the modified-files list (items left unfixed)
- improvement_percentage = ((before - after) / before) * 100
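The score comparison above reduces to a small function; a minimal sketch assuming each inventory item has a `file` field:

```python
# Before/after debt scores: unmodified files count as unfixed items.
def debt_scores(inventory: list[dict], modified_files: list[str]) -> dict:
    before = len(inventory)
    after = sum(1 for item in inventory if item["file"] not in modified_files)
    improvement = ((before - after) / before * 100) if before else 0.0
    return {"debt_score_before": before,
            "debt_score_after": after,
            "improvement_percentage": round(improvement, 1)}
```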
**Auto-fix attempt** (when total_regressions <= 3):
- Use CLI tool to fix regressions in worktree:
```
Bash({
command: `cd "${worktreePath}" && ccw cli -p "PURPOSE: Fix regressions found in validation
TASK: ${regressionDetails}
MODE: write
CONTEXT: @${modifiedFiles.join(' @')}
EXPECTED: Fixed regressions
CONSTRAINTS: Fix only regressions | Preserve debt cleanup changes | No suppressions" --tool gemini --mode write`,
run_in_background: false
})
```
- Re-run validation checks after fix attempt
## Phase 4: Compare & Report
1. Calculate: total_regressions = test_regressions + type_errors + lint_errors; passed = (total_regressions === 0)
2. Write `<session>/validation/validation-report.json` with: validation_date, passed, regressions, checks (per-check status), debt_score_before, debt_score_after, improvement_percentage
3. Update .msg/meta.json with `validation_results` and `debt_score_after`
4. Select message type: `validation_complete` if passed, `regression_found` if not