feat: migrate all codex team skills from spawn_agents_on_csv to spawn_agent + wait_agent architecture

- Delete 21 old team skill directories using CSV-wave pipeline pattern (~100+ files)
- Delete old team-lifecycle (v3) and team-planex-v2
- Create generic team-worker.toml and team-supervisor.toml (replacing tlv4-specific TOMLs)
- Convert 19 team skills from Claude Code format (Agent/SendMessage/TaskCreate)
  to Codex format (spawn_agent/wait_agent/tasks.json/request_user_input)
- Update team-lifecycle-v4 to use generic agent types (team_worker/team_supervisor)
- Convert all coordinator role files: dispatch.md, monitor.md, role.md
- Convert all worker role files: remove run_in_background, fix Bash syntax
- Convert all specs/pipelines.md references
- Final state: 20 team skills, 217 .md files, zero Claude Code API residuals

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
catlog22
2026-03-24 16:54:48 +08:00
parent 54283e5dbb
commit 1e560ab8e8
334 changed files with 28996 additions and 35516 deletions


@@ -1,708 +1,129 @@
---
name: team-tech-debt
description: Systematic tech debt governance with CSV wave pipeline. Scans codebase for tech debt across 5 dimensions, assesses severity with priority matrix, plans phased remediation, executes fixes in worktree, validates with 4-layer checks. Supports scan/remediate/targeted pipeline modes with fix-verify GC loop.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] [--mode=scan|remediate|targeted] \"scope or description\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, request_user_input
description: Unified team skill for tech debt identification and remediation. Scans codebase for tech debt, assesses severity, plans and executes fixes with validation. Uses team-worker agent architecture with roles/ for domain logic. Coordinator orchestrates pipeline, workers are team-worker agents. Triggers on "team tech debt".
allowed-tools: spawn_agent(*), wait_agent(*), send_input(*), close_agent(*), report_agent_job_result(*), request_user_input(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*), mcp__ccw-tools__read_file(*), mcp__ccw-tools__write_file(*), mcp__ccw-tools__edit_file(*), mcp__ccw-tools__team_msg(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.
# Team Tech Debt
## Usage
Systematic tech debt governance: scan -> assess -> plan -> fix -> validate. Built on **team-worker agent architecture** — all worker roles share a single agent definition with role-specific Phase 2-4 loaded from `roles/<role>/role.md`.
```bash
$team-tech-debt "Scan and fix tech debt in src/ module"
$team-tech-debt --mode=scan "Audit codebase for tech debt"
$team-tech-debt --mode=targeted "Fix known TODO/FIXME items in auth module"
$team-tech-debt -c 4 -y "Full remediation pipeline for entire project"
$team-tech-debt --continue "td-auth-debt-20260308"
```
**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume existing session
- `--mode=scan`: Scan and assess only, no fixes
- `--mode=targeted`: Skip scan/assess, direct fix path for known debt
- `--mode=remediate`: Full pipeline (default)
**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)
---
## Overview
Systematic tech debt governance: scan -> assess -> plan -> fix -> validate. Five specialized worker roles execute as CSV wave agents, with interactive agents for plan approval checkpoints and fix-verify GC loops.
**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary)
## Architecture
```
+-------------------------------------------------------------------+
| TEAM TECH DEBT WORKFLOW |
+-------------------------------------------------------------------+
| |
| Phase 0: Pre-Wave Interactive (Requirement Clarification) |
| +- Parse mode (scan/remediate/targeted) |
| +- Clarify scope and focus areas |
| +- Output: pipeline mode + scope for decomposition |
| |
| Phase 1: Requirement -> CSV + Classification |
| +- Select pipeline mode (scan/remediate/targeted) |
| +- Build task chain with fixed role assignments |
| +- Classify tasks: csv-wave | interactive (exec_mode) |
| +- Compute dependency waves (linear chain) |
| +- Generate tasks.csv with wave + exec_mode columns |
| +- User validates task breakdown (skip if -y) |
| |
| Phase 2: Wave Execution Engine (Extended) |
| +- For each wave (1..N): |
| | +- Execute pre-wave interactive tasks (plan approval) |
| | +- Build wave CSV (filter csv-wave tasks for this wave) |
| | +- Inject previous findings into prev_context column |
| | +- spawn_agents_on_csv(wave CSV) |
| | +- Execute post-wave interactive tasks (if any) |
| | +- Merge all results into master tasks.csv |
| | +- Check: any failed? -> skip dependents |
| | +- TDVAL checkpoint: GC loop check |
| +- discoveries.ndjson shared across all modes (append-only) |
| |
| Phase 3: Post-Wave Interactive (Completion + PR) |
| +- PR creation (if worktree mode, validation passed) |
| +- Debt reduction metrics report |
| +- Interactive completion choice |
| |
| Phase 4: Results Aggregation |
| +- Export final results.csv |
| +- Generate context.md with debt metrics |
| +- Display summary: debt scores, reduction rate |
| +- Offer: new target | deep fix | close |
| |
+-------------------------------------------------------------------+
Skill(skill="team-tech-debt", args="task description")
|
SKILL.md (this file) = Router
|
+--------------+--------------+
| |
no --role flag --role <name>
| |
Coordinator Worker
roles/coordinator/role.md roles/<name>/role.md
|
+-- analyze → dispatch → spawn workers → STOP
|
+-------+-------+-------+-------+
v v v v v
[team-worker agents, each loads roles/<role>/role.md]
scanner assessor planner executor validator
```
---
## Role Registry
## Task Classification Rules
| Role | Path | Prefix | Inner Loop |
|------|------|--------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | — | — |
| scanner | [roles/scanner/role.md](roles/scanner/role.md) | TDSCAN-* | false |
| assessor | [roles/assessor/role.md](roles/assessor/role.md) | TDEVAL-* | false |
| planner | [roles/planner/role.md](roles/planner/role.md) | TDPLAN-* | false |
| executor | [roles/executor/role.md](roles/executor/role.md) | TDFIX-* | true |
| validator | [roles/validator/role.md](roles/validator/role.md) | TDVAL-* | false |
Each task is classified by `exec_mode`:
## Role Router
| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot scan, assessment, planning, execution, validation |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Plan approval checkpoint, fix-verify GC loop management |
Parse `$ARGUMENTS`:
- Has `--role <name>` → Read `roles/<name>/role.md`, execute Phase 2-4
- No `--role` → Read `roles/coordinator/role.md`, execute entry router
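The routing decision can be sketched as (a minimal sketch; the exact flag-parsing logic is an assumption, not spelled out in the skill):

```javascript
// Extract an optional `--role <name>` flag from the skill's argument string
// and map it to the role spec file the agent should load. No flag means the
// coordinator entry router runs.
function resolveRoleSpec(args) {
  const match = args.match(/--role\s+([a-z-]+)/)
  const role = match ? match[1] : 'coordinator'
  return { role, roleSpec: `roles/${role}/role.md` }
}
```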
**Classification Decision**:
## Shared Constants
| Task Property | Classification |
|---------------|---------------|
| Multi-dimension debt scan (TDSCAN) | `csv-wave` |
| Quantitative assessment (TDEVAL) | `csv-wave` |
| Remediation planning (TDPLAN) | `csv-wave` |
| Plan approval gate | `interactive` |
| Debt cleanup execution (TDFIX) | `csv-wave` |
| Cleanup validation (TDVAL) | `csv-wave` |
| Fix-verify GC loop management | `interactive` |
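The classification decision above reduces to a small lookup: the five TD* pipeline roles are one-shot `csv-wave` tasks, and only the approval gate and the GC-loop manager run as interactive agents. A minimal sketch (any task ID other than `PLAN-APPROVE` in the interactive set is hypothetical here):

```javascript
// Interactive tasks are the exception; everything else defaults to csv-wave.
const INTERACTIVE_TASKS = new Set(['PLAN-APPROVE', 'GC-LOOP'])

function classifyExecMode(taskId) {
  return INTERACTIVE_TASKS.has(taskId) ? 'interactive' : 'csv-wave'
}
```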
- **Session prefix**: `TD`
- **Session path**: `.workflow/.team/TD-<slug>-<date>/`
- **CLI tools**: `ccw cli --mode analysis` (read-only), `ccw cli --mode write` (modifications)
- **Message bus**: `mcp__ccw-tools__team_msg(session_id=<session-id>, ...)`
- **Max GC rounds**: 3
---
## Worker Spawn Template
## CSV Schema
### tasks.csv (Master State)
```csv
id,title,description,role,debt_dimension,pipeline_mode,deps,context_from,exec_mode,wave,status,findings,debt_items_count,artifacts_produced,error
"TDSCAN-001","Multi-dimension debt scan","Scan codebase across 5 dimensions for tech debt items","scanner","all","remediate","","","csv-wave","1","pending","","0","",""
"TDEVAL-001","Severity assessment","Quantify impact and fix cost for each debt item","assessor","all","remediate","TDSCAN-001","TDSCAN-001","csv-wave","2","pending","","0","",""
"TDPLAN-001","Remediation planning","Create phased remediation plan from priority matrix","planner","all","remediate","TDEVAL-001","TDEVAL-001","csv-wave","3","pending","","0","",""
```
**Columns**:
| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (TDPREFIX-NNN) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description with scope and context |
| `role` | Input | Worker role: scanner, assessor, planner, executor, validator |
| `debt_dimension` | Input | `all`, `code`, `architecture`, `testing`, `dependency`, `documentation` |
| `pipeline_mode` | Input | `scan`, `remediate`, `targeted` |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or execution notes (max 500 chars) |
| `debt_items_count` | Output | Number of debt items found/fixed/validated |
| `artifacts_produced` | Output | Semicolon-separated paths of produced artifacts |
| `error` | Output | Error message if failed |
### Per-Wave CSV (Temporary)
Each wave generates a temporary `wave-{N}.csv` with extra `prev_context` column (csv-wave tasks only).
---
## Agent Registry (Interactive Agents)
| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| Plan Approver | agents/plan-approver.md | 2.3 (send_input cycle) | Review remediation plan, approve/revise/abort | pre-wave (before TDFIX) |
| GC Loop Manager | agents/gc-loop-manager.md | 2.3 (send_input cycle) | Manage fix-verify loop, create retry tasks | post-wave (after TDVAL) |
> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload.
---
## Output Artifacts
| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents) | Append-only, carries across waves |
| `context.md` | Human-readable report with debt metrics | Created in Phase 4 |
| `scan/debt-inventory.json` | Scanner output: structured debt inventory | Created by TDSCAN |
| `assessment/priority-matrix.json` | Assessor output: prioritized debt items | Created by TDEVAL |
| `plan/remediation-plan.md` | Planner output: phased fix plan | Created by TDPLAN |
| `plan/remediation-plan.json` | Planner output: machine-readable plan | Created by TDPLAN |
| `fixes/fix-log.json` | Executor output: fix results | Created by TDFIX |
| `validation/validation-report.json` | Validator output: validation results | Created by TDVAL |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |
---
## Session Structure
Coordinator spawns workers using this template:
```
.workflow/.csv-wave/{session-id}/
+-- tasks.csv # Master state
+-- results.csv # Final results
+-- discoveries.ndjson # Shared discovery board
+-- context.md # Human-readable report
+-- wave-{N}.csv # Temporary per-wave input
+-- scan/
| +-- debt-inventory.json # Scanner output
+-- assessment/
| +-- priority-matrix.json # Assessor output
+-- plan/
| +-- remediation-plan.md # Planner output (human)
| +-- remediation-plan.json # Planner output (machine)
+-- fixes/
| +-- fix-log.json # Executor output
+-- validation/
| +-- validation-report.json # Validator output
+-- interactive/
| +-- {id}-result.json # Interactive task results
+-- wisdom/
+-- learnings.md
+-- decisions.md
spawn_agent({
agent_type: "team_worker",
items: [
{ type: "text", text: `## Role Assignment
role: <role>
role_spec: <skill_root>/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file (<skill_root>/roles/<role>/role.md) to load Phase 2-4 domain instructions.` },
{ type: "text", text: `## Task Context
task_id: <task-id>
title: <task-title>
description: <task-description>
pipeline_phase: <pipeline-phase>` },
{ type: "text", text: `## Upstream Context
<prev_context>` }
]
})
```
---
After spawning, use `wait_agent({ ids: [...], timeout_ms: 900000 })` to collect results, then call `close_agent({ id })` for each worker.
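The full spawn/wait/close cycle around this template can be sketched as (pseudo-code in the style of the phase scripts; `buildRoleMessage` is a hypothetical helper that fills the Role Assignment / Task Context items per task):

```javascript
// Hypothetical coordinator loop: spawn one team_worker per task,
// wait for the whole batch, then release every agent handle.
const ids = waveTasks.map(task => spawn_agent({
  agent_type: "team_worker",
  items: buildRoleMessage(task)
}))
wait_agent({ ids, timeout_ms: 900000 })
for (const id of ids) close_agent({ id })
```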
## Implementation
## User Commands
### Session Initialization
| Command | Action |
|---------|--------|
| `check` / `status` | View execution status graph |
| `resume` / `continue` | Advance to next step |
| `--mode=scan` | Run scan-only pipeline (TDSCAN + TDEVAL) |
| `--mode=targeted` | Run targeted pipeline (TDPLAN + TDFIX + TDVAL) |
| `--mode=remediate` | Run full pipeline (default) |
| `-y` / `--yes` | Skip confirmations |
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
## Specs Reference
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3
- [specs/pipelines.md](specs/pipelines.md) — Pipeline definitions and task registry
// Detect pipeline mode
let pipelineMode = 'remediate'
if ($ARGUMENTS.includes('--mode=scan')) pipelineMode = 'scan'
else if ($ARGUMENTS.includes('--mode=targeted')) pipelineMode = 'targeted'
## Session Directory
const requirement = $ARGUMENTS
.replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+|--mode=\w+/g, '')
.trim()
const slug = requirement.toLowerCase()
.replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
.substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `td-${slug}-${dateStr}`
const sessionFolder = `.workflow/.csv-wave/${sessionId}`
Bash(`mkdir -p ${sessionFolder}/scan ${sessionFolder}/assessment ${sessionFolder}/plan ${sessionFolder}/fixes ${sessionFolder}/validation ${sessionFolder}/interactive ${sessionFolder}/wisdom`)
// Initialize discoveries.ndjson
Write(`${sessionFolder}/discoveries.ndjson`, '')
Write(`${sessionFolder}/wisdom/learnings.md`, '# Learnings\n')
Write(`${sessionFolder}/wisdom/decisions.md`, '# Decisions\n')
```
---
### Phase 0: Pre-Wave Interactive (Requirement Clarification)
**Objective**: Parse mode, clarify scope, prepare pipeline configuration.
**Workflow**:
1. **Detect mode from arguments** (--mode=scan/remediate/targeted) or from keywords:
| Keywords | Mode |
|----------|------|
| scan, audit, assess | scan |
| targeted, specific, fix known | targeted |
| Default | remediate |
2. **Clarify scope** (skip if AUTO_YES):
```javascript
request_user_input({
questions: [{
question: "Select tech debt governance scope.",
header: "Scope",
id: "debt_scope",
options: [
{ label: "Full scan (Recommended)", description: "Scan entire codebase" },
{ label: "Specific module", description: "Target specific directory" },
{ label: "Custom scope", description: "Specify file patterns" }
]
}]
})
```
3. **Detect debt dimensions** from task description:
| Keywords | Dimension |
|----------|-----------|
| code quality, complexity, smell | code |
| architecture, coupling, structure | architecture |
| test, coverage, quality | testing |
| dependency, outdated, vulnerable | dependency |
| documentation, api doc, comments | documentation |
| Default | all |
4. **Output**: pipeline mode, scope, focus dimensions
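The mode and dimension detection tables above can be sketched as keyword matchers (a minimal sketch; the exact regexes are assumptions derived from the keyword columns):

```javascript
// Explicit --mode flag wins; otherwise fall back to keyword detection,
// defaulting to the full remediate pipeline.
function detectPipelineMode(args) {
  const m = args.match(/--mode=(scan|remediate|targeted)/)
  if (m) return m[1]
  if (/\b(scan|audit|assess)\b/i.test(args)) return 'scan'
  if (/\b(targeted|specific|fix known)\b/i.test(args)) return 'targeted'
  return 'remediate'
}

// Collect every dimension whose keywords appear; default to 'all'.
function detectDimensions(text) {
  const patterns = {
    code: /complexity|code quality|smell/i,
    architecture: /architecture|coupling|structure/i,
    testing: /\btest|coverage/i,
    dependency: /dependenc|outdated|vulnerable/i,
    documentation: /documentation|api doc|comments/i
  }
  const hits = Object.keys(patterns).filter(dim => patterns[dim].test(text))
  return hits.length ? hits : ['all']
}
```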
**Success Criteria**:
- Pipeline mode determined
- Scope and dimensions clarified
---
### Phase 1: Requirement -> CSV + Classification
**Objective**: Build task chain based on pipeline mode, generate tasks.csv.
**Pipeline Definitions**:
| Mode | Task Chain |
|------|------------|
| scan | TDSCAN-001 -> TDEVAL-001 |
| remediate | TDSCAN-001 -> TDEVAL-001 -> TDPLAN-001 -> (plan-approval) -> TDFIX-001 -> TDVAL-001 |
| targeted | TDPLAN-001 -> (plan-approval) -> TDFIX-001 -> TDVAL-001 |
**Task Registry**:
| Task ID | Role | Prefix | exec_mode | Wave | Description |
|---------|------|--------|-----------|------|-------------|
| TDSCAN-001 | scanner | TDSCAN | csv-wave | 1 | Multi-dimension codebase scan |
| TDEVAL-001 | assessor | TDEVAL | csv-wave | 2 | Severity assessment with priority matrix |
| PLAN-APPROVE | - | - | interactive | 3 (pre-wave) | Plan approval checkpoint |
| TDPLAN-001 | planner | TDPLAN | csv-wave | 3 | Phased remediation plan |
| TDFIX-001 | executor | TDFIX | csv-wave | 4 | Worktree-based incremental fixes |
| TDVAL-001 | validator | TDVAL | csv-wave | 5 | 4-layer validation |
**Worktree Creation** (before TDFIX, remediate mode):
```bash
git worktree add .worktrees/td-<slug>-<date> -b tech-debt/td-<slug>-<date>
.workflow/.team/TD-<slug>-<date>/
├── .msg/
│ ├── messages.jsonl # Team message bus
│ └── meta.json # Pipeline config + role state snapshot
├── scan/ # Scanner output
├── assessment/ # Assessor output
├── plan/ # Planner output
├── fixes/ # Executor output
├── validation/ # Validator output
└── wisdom/ # Cross-task knowledge
```
**Wave Computation**: Linear chain, waves assigned by position in pipeline.
**User Validation**: Display pipeline with mode and task chain (skip if AUTO_YES).
**Success Criteria**:
- tasks.csv created with correct pipeline chain
- No circular dependencies
- User approved (or AUTO_YES)
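The wave computation for a linear chain can be sketched as (a minimal sketch; tasks left unassigned after the fixpoint indicate the circular-dependency error handled below):

```javascript
// Assign 1-based waves: a task's wave is one past the latest wave among its
// dependencies. Repeats until no task can be assigned; leftovers form a cycle.
function computeWaves(tasks) {
  const wave = {}
  let changed = true
  while (changed) {
    changed = false
    for (const t of tasks) {
      if (wave[t.id]) continue
      const deps = (t.deps || '').split(';').filter(Boolean)
      if (deps.every(d => wave[d])) {
        wave[t.id] = deps.length ? Math.max(...deps.map(d => wave[d])) + 1 : 1
        changed = true
      }
    }
  }
  const unassigned = tasks.filter(t => !wave[t.id]).map(t => t.id)
  if (unassigned.length) throw new Error(`Circular dependency: ${unassigned.join(', ')}`)
  return wave
}
```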
---
### Phase 2: Wave Execution Engine (Extended)
**Objective**: Execute tasks wave-by-wave with checkpoints and GC loop support.
```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
let maxWave = Math.max(...tasks.map(t => Number(t.wave)))
let gcRounds = 0
const MAX_GC_ROUNDS = 3
for (let wave = 1; wave <= maxWave; wave++) {
  const waveTasks = tasks.filter(t => Number(t.wave) === wave && t.status === 'pending')
const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')
// Check dependencies
for (const task of waveTasks) {
const depIds = (task.deps || '').split(';').filter(Boolean)
const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
task.status = 'skipped'
task.error = `Dependency failed`
}
}
// Pre-wave interactive: Plan Approval Gate (after TDPLAN completes)
if (interactiveTasks.some(t => t.id === 'PLAN-APPROVE' && t.status === 'pending')) {
Read('agents/plan-approver.md')
const planTask = interactiveTasks.find(t => t.id === 'PLAN-APPROVE')
const agent = spawn_agent({
message: `## PLAN REVIEW\n\n### MANDATORY FIRST STEPS\n1. Read: ${sessionFolder}/plan/remediation-plan.md\n2. Read: ${sessionFolder}/discoveries.ndjson\n\nReview the remediation plan and decide: Approve / Revise / Abort\n\nSession: ${sessionFolder}`
})
const result = wait({ ids: [agent], timeout_ms: 600000 })
// Parse decision
if (String(result).includes("Abort")) {
// Skip remaining pipeline
for (const t of tasks.filter(t => t.status === 'pending')) t.status = 'skipped'
} else if (String(result).includes("Revise")) {
// Create revision task, re-run planner
// ... create TDPLAN-revised task
}
// Approve: continue normally
close_agent({ id: agent })
planTask.status = 'completed'
// Create worktree for fix execution
if (pipelineMode === 'remediate' || pipelineMode === 'targeted') {
Bash(`git worktree add .worktrees/${sessionId} -b tech-debt/${sessionId}`)
}
}
// Execute csv-wave tasks
const pendingCsvTasks = csvTasks.filter(t => t.status === 'pending')
for (const task of pendingCsvTasks) {
task.prev_context = buildPrevContext(task, tasks)
}
if (pendingCsvTasks.length > 0) {
Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingCsvTasks))
// Select instruction based on role
const role = pendingCsvTasks[0].role
const instruction = Read(`instructions/agent-instruction.md`)
// Customize instruction for role (scanner/assessor/planner/executor/validator)
spawn_agents_on_csv({
csv_path: `${sessionFolder}/wave-${wave}.csv`,
id_column: "id",
instruction: buildRoleInstruction(role, sessionFolder, wave),
max_concurrency: maxConcurrency,
max_runtime_seconds: 900,
output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
output_schema: {
type: "object",
properties: {
id: { type: "string" },
status: { type: "string", enum: ["completed", "failed"] },
findings: { type: "string" },
debt_items_count: { type: "string" },
artifacts_produced: { type: "string" },
error: { type: "string" }
}
}
})
// Merge results
const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
for (const r of results) {
const t = tasks.find(t => t.id === r.id)
if (t) Object.assign(t, r)
}
}
// Post-wave: TDVAL GC Loop Check
  const completedVal = tasks.find(t => t.id.startsWith('TDVAL') && t.status === 'completed' && Number(t.wave) === wave)
if (completedVal) {
// Read validation results
const valReport = JSON.parse(Read(`${sessionFolder}/validation/validation-report.json`))
if (!valReport.passed && gcRounds < MAX_GC_ROUNDS) {
gcRounds++
// Create fix-verify retry tasks
const fixId = `TDFIX-fix-${gcRounds}`
const valId = `TDVAL-recheck-${gcRounds}`
tasks.push({
id: fixId, title: `Fix regressions (GC #${gcRounds})`, role: 'executor',
description: `Fix regressions found in validation round ${gcRounds}`,
debt_dimension: 'all', pipeline_mode: pipelineMode,
deps: completedVal.id, context_from: completedVal.id,
exec_mode: 'csv-wave', wave: wave + 1, status: 'pending',
findings: '', debt_items_count: '0', artifacts_produced: '', error: ''
})
tasks.push({
id: valId, title: `Revalidate (GC #${gcRounds})`, role: 'validator',
description: `Revalidate after fix round ${gcRounds}`,
debt_dimension: 'all', pipeline_mode: pipelineMode,
deps: fixId, context_from: fixId,
exec_mode: 'csv-wave', wave: wave + 2, status: 'pending',
findings: '', debt_items_count: '0', artifacts_produced: '', error: ''
})
      maxWave = Math.max(maxWave, wave + 2)  // extend loop to cover the new GC waves
} else if (!valReport.passed && gcRounds >= MAX_GC_ROUNDS) {
// Accept current state
console.log(`Max GC rounds (${MAX_GC_ROUNDS}) reached. Accepting current state.`)
}
}
// Update master CSV
Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))
Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)
}
```
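The engine above assumes `parseCsv`, `toCsv`, and `buildPrevContext` helpers that are never defined. Minimal sketches follow (a real implementation should use a proper CSV library; this handles double-quoted fields with embedded commas and `""` escapes, but not embedded newlines):

```javascript
// Split one CSV line, honoring double-quoted fields and "" escapes.
function splitCsvLine(line) {
  const out = []
  let cur = '', inQ = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (inQ) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++ }
      else if (ch === '"') inQ = false
      else cur += ch
    } else if (ch === '"') inQ = true
    else if (ch === ',') { out.push(cur); cur = '' }
    else cur += ch
  }
  out.push(cur)
  return out
}

// Header row becomes object keys; each data row becomes one task object.
function parseCsv(text) {
  const lines = text.trim().split('\n').filter(Boolean)
  const cols = splitCsvLine(lines[0])
  return lines.slice(1).map(line => {
    const cells = splitCsvLine(line)
    return Object.fromEntries(cols.map((c, i) => [c, cells[i] ?? '']))
  })
}

// Serialize tasks back out, quoting every field.
function toCsv(tasks, cols) {
  const esc = v => `"${String(v ?? '').replace(/"/g, '""')}"`
  const header = cols ?? Object.keys(tasks[0])
  return [header.join(','), ...tasks.map(t => header.map(c => esc(t[c])).join(','))].join('\n')
}

// Concatenate findings from the tasks named in context_from.
function buildPrevContext(task, tasks) {
  return (task.context_from || '').split(';').filter(Boolean)
    .map(id => {
      const src = tasks.find(t => t.id === id)
      return src ? `${id}: ${src.findings}` : ''
    })
    .filter(Boolean).join(' | ')
}
```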
**Success Criteria**:
- All waves executed in order
- Plan approval checkpoint enforced before fix execution
- GC loop properly bounded (max 3 rounds)
- Worktree created for fix execution
- discoveries.ndjson accumulated across all waves
---
### Phase 3: Post-Wave Interactive (Completion + PR)
**Objective**: Create PR from worktree if validation passed, generate debt reduction report.
```javascript
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
const allCompleted = tasks.every(t => t.status === 'completed' || t.status === 'skipped')
// PR Creation (if worktree exists and validation passed)
const worktreePath = `.worktrees/${sessionId}`
const valReport = JSON.parse(Read(`${sessionFolder}/validation/validation-report.json`) || '{}')
if (valReport.passed && fileExists(worktreePath)) {
Bash(`cd ${worktreePath} && git add -A && git commit -m "tech-debt: remediate debt items (${sessionId})" && git push -u origin tech-debt/${sessionId}`)
Bash(`gh pr create --title "Tech Debt Remediation: ${sessionId}" --body "Automated tech debt cleanup. See ${sessionFolder}/context.md for details."`)
Bash(`git worktree remove ${worktreePath}`)
}
// Debt reduction metrics
const scanReport = JSON.parse(Read(`${sessionFolder}/scan/debt-inventory.json`) || '{}')
const debtBefore = scanReport.total_items || 0
const debtAfter = valReport.debt_score_after || 0
const reductionRate = debtBefore > 0 ? Math.round(((debtBefore - debtAfter) / debtBefore) * 100) : 0
console.log(`
============================================
TECH DEBT GOVERNANCE COMPLETE
Mode: ${pipelineMode}
Debt Items Found: ${debtBefore}
Debt Items Fixed: ${debtBefore - debtAfter}
Reduction Rate: ${reductionRate}%
GC Rounds: ${gcRounds}/${MAX_GC_ROUNDS}
Validation: ${valReport.passed ? 'PASSED' : 'FAILED'}
Session: ${sessionFolder}
============================================
`)
// Completion action
if (!AUTO_YES) {
request_user_input({
questions: [{
question: "Tech debt governance complete. Choose next action.",
header: "Done",
id: "completion",
options: [
{ label: "Close (Recommended)", description: "Archive session" },
{ label: "New target", description: "Run another scan/fix cycle" },
{ label: "Deep fix", description: "Continue fixing remaining items" }
]
}]
})
}
```
**Success Criteria**:
- PR created if applicable
- Debt metrics calculated and reported
- User informed of next steps
---
### Phase 4: Results Aggregation
**Objective**: Generate final results and human-readable report.
```javascript
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)
let contextMd = `# Tech Debt Governance Report\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Mode**: ${pipelineMode}\n`
contextMd += `**Date**: ${getUtc8ISOString().substring(0, 10)}\n\n`
contextMd += `## Debt Metrics\n`
contextMd += `| Metric | Value |\n|--------|-------|\n`
contextMd += `| Items Found | ${debtBefore} |\n`
contextMd += `| Items Fixed | ${debtBefore - debtAfter} |\n`
contextMd += `| Reduction Rate | ${reductionRate}% |\n`
contextMd += `| GC Rounds | ${gcRounds} |\n`
contextMd += `| Validation | ${valReport.passed ? 'PASSED' : 'FAILED'} |\n\n`
contextMd += `## Pipeline Execution\n\n`
for (const t of tasks) {
const icon = t.status === 'completed' ? '[DONE]' : t.status === 'failed' ? '[FAIL]' : '[SKIP]'
contextMd += `${icon} **${t.title}** [${t.role}] ${t.findings || ''}\n\n`
}
Write(`${sessionFolder}/context.md`, contextMd)
```
**Success Criteria**:
- results.csv exported
- context.md generated with debt metrics
- Summary displayed to user
---
## Shared Discovery Board Protocol
**Format**: NDJSON (one JSON per line)
**Discovery Types**:
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `debt_item_found` | `data.file+data.line` | `{id, dimension, severity, file, line, description, suggestion}` | Tech debt item identified |
| `pattern_found` | `data.pattern_name+data.location` | `{pattern_name, location, description}` | Code pattern (anti-pattern) found |
| `fix_applied` | `data.file+data.change` | `{file, change, lines_modified, debt_id}` | Fix applied to debt item |
| `regression_found` | `data.file+data.test` | `{file, test, description, severity}` | Regression found during validation |
| `dependency_issue` | `data.package+data.issue` | `{package, current, latest, issue, severity}` | Dependency problem |
| `metric_recorded` | `data.metric` | `{metric, value, dimension, file}` | Quality metric recorded |
```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"TDSCAN-001","type":"debt_item_found","data":{"id":"TD-001","dimension":"code","severity":"high","file":"src/auth/jwt.ts","line":42,"description":"Complexity > 15","suggestion":"Extract helper functions"}}
{"ts":"2026-03-08T10:15:00Z","worker":"TDFIX-001","type":"fix_applied","data":{"file":"src/auth/jwt.ts","change":"Extracted 3 helper functions","lines_modified":25,"debt_id":"TD-001"}}
```
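The dedup keys in the table above can be enforced at append time (a minimal sketch; the `seen` set would be rebuilt by replaying the existing `discoveries.ndjson` before any new appends):

```javascript
// One dedup-key function per discovery type, mirroring the table above.
const DEDUP_KEY = {
  debt_item_found:  d => `${d.file}|${d.line}`,
  pattern_found:    d => `${d.pattern_name}|${d.location}`,
  fix_applied:      d => `${d.file}|${d.change}`,
  regression_found: d => `${d.file}|${d.test}`,
  dependency_issue: d => `${d.package}|${d.issue}`,
  metric_recorded:  d => `${d.metric}`
}

// Returns the NDJSON line to append, or null for a duplicate.
// The board is append-only: duplicates are skipped, never rewritten.
function appendDiscovery(seen, worker, type, data) {
  const key = `${type}:${DEDUP_KEY[type](data)}`
  if (seen.has(key)) return null
  seen.add(key)
  return JSON.stringify({ ts: new Date().toISOString(), worker, type, data }) + '\n'
}
```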
---
## Checkpoints
| Checkpoint | Trigger | Condition | Action |
|------------|---------|-----------|--------|
| Plan Approval Gate | TDPLAN-001 completes | Always (remediate/targeted mode) | Interactive: Approve / Revise / Abort |
| Worktree Creation | Plan approved | Before TDFIX | `git worktree add .worktrees/{session-id}` |
| Fix-Verify GC Loop | TDVAL-* completes | Regressions found | Create TDFIX-fix-N + TDVAL-recheck-N (max 3 rounds) |
---
## Pipeline Mode Details
### Scan Mode
```
Wave 1: TDSCAN-001 (scanner) -> Scan 5 dimensions
Wave 2: TDEVAL-001 (assessor) -> Priority matrix
```
### Remediate Mode (Full Pipeline)
```
Wave 1: TDSCAN-001 (scanner) -> Scan 5 dimensions
Wave 2: TDEVAL-001 (assessor) -> Priority matrix
Wave 3: TDPLAN-001 (planner) -> Remediation plan
PLAN-APPROVE (interactive) -> User approval
Wave 4: TDFIX-001 (executor) -> Apply fixes in worktree
Wave 5: TDVAL-001 (validator) -> 4-layer validation
[GC Loop: TDFIX-fix-N -> TDVAL-recheck-N, max 3]
```
### Targeted Mode
```
Wave 1: TDPLAN-001 (planner) -> Targeted fix plan
PLAN-APPROVE (interactive) -> User approval
Wave 2: TDFIX-001 (executor) -> Apply fixes in worktree
Wave 3: TDVAL-001 (validator) -> 4-layer validation
[GC Loop: TDFIX-fix-N -> TDVAL-recheck-N, max 3]
```
---
## Error Handling
| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Scenario | Resolution |
|----------|------------|
| Unknown command | Error with available command list |
| Role not found | Error with role registry |
| Session corruption | Attempt recovery, fallback to manual |
| Fast-advance conflict | Coordinator reconciles on next callback |
| Completion action fails | Default to Keep Active |
| Scanner finds no debt | Report clean codebase, skip to summary |
| Plan rejected by user | Abort pipeline or create revision task |
| Fix-verify loop stuck (>3 rounds) | Accept current state, continue to completion |
| Worktree creation fails | Fall back to direct changes with user confirmation |
| Validation tools not available | Skip unavailable checks, report partial validation |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Continue mode: no session found | List available sessions, prompt user to select |
---
## Core Rules
1. **Start Immediately**: First action is session initialization, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive for approval checkpoints
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson
7. **Skip on Failure**: If a dependency failed, skip the dependent task
8. **GC Loop Bounded**: Maximum 3 fix-verify rounds before accepting current state
9. **Worktree Isolation**: All fix execution happens in git worktree, not main branch
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped
---
## Coordinator Role Constraints (Main Agent)
**CRITICAL**: The coordinator (main agent executing this skill) is responsible for **orchestration only**, NOT implementation.
11. **Coordinator Does NOT Execute Code**: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
- Spawns agents with task assignments
- Waits for agent callbacks
- Merges results and coordinates workflow
- Manages workflow transitions between phases
12. **Patient Waiting is Mandatory**: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
- Wait patiently for `wait()` calls to complete
- NOT skip workflow steps due to perceived delays
- NOT assume agents have failed just because they're taking time
- Trust the timeout mechanisms defined in the skill
13. **Use send_input for Clarification**: When agents need guidance or appear stuck, the coordinator MUST:
- Use `send_input()` to ask questions or provide clarification
- NOT skip the agent or move to the next phase prematurely
- Give agents the opportunity to respond before escalating
- Example: `send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })`
14. **No Workflow Shortcuts**: The coordinator MUST NOT:
- Skip phases or stages defined in the workflow
- Bypass required approval or review steps
- Execute dependent tasks before prerequisites complete
- Assume task completion without an explicit agent callback
- Make up or fabricate agent results
15. **Respect Long-Running Processes**: This is a complex multi-agent workflow that requires patience:
- Total execution time may range from 30-90 minutes or longer
- Each phase may take 10-30 minutes depending on complexity
- The coordinator must remain active and attentive throughout the entire process
- Do not terminate or skip steps due to time concerns

View File

@@ -1,130 +0,0 @@
# GC Loop Manager Agent
Interactive agent for managing the fix-verify GC (Garbage Collection) loop. Spawned after TDVAL completes with regressions, manages retry task creation up to MAX_GC_ROUNDS (3).
## Identity
- **Type**: `interactive`
- **Role File**: `agents/gc-loop-manager.md`
- **Responsibility**: Evaluate validation results, decide whether to retry or accept, create GC loop tasks
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Read validation report to determine regression status
- Track GC round count (max 3)
- Create fix-verify retry tasks when regressions found and rounds remain
- Accept current state when GC rounds exhausted
- Report decision to orchestrator
- Produce structured output following template
### MUST NOT
- Skip the MANDATORY FIRST STEPS role loading
- Execute fix actions directly
- Exceed MAX_GC_ROUNDS (3)
- Skip validation report reading
- Produce unstructured output
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | built-in | Load validation report and context |
| `Write` | built-in | Store GC decision result |
---
## Execution
### Phase 1: Validation Assessment
**Objective**: Read validation results and determine action
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| validation-report.json | Yes | Validation results |
| discoveries.ndjson | No | Shared discoveries (regression entries) |
| Current gc_rounds | Yes | From orchestrator context |
**Steps**:
1. Read validation-report.json
2. Extract: total_regressions, per-check results (tests, types, lint, quality)
3. Determine GC decision:
| Condition | Decision |
|-----------|----------|
| No regressions (passed=true) | `pipeline_complete` -- no GC needed |
| Regressions AND gc_rounds < 3 | `retry` -- create fix-verify tasks |
| Regressions AND gc_rounds >= 3 | `accept` -- accept current state |
**Output**: GC decision
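The decision table above can be sketched as a small pure function (an illustrative sketch; the report shape follows the fields named in the steps, and `maxRounds` defaults to MAX_GC_ROUNDS = 3):

```javascript
// Decide the GC loop action from a validation report and the current round count.
// Mirrors the table: pass -> pipeline_complete; regressions with rounds left -> retry;
// regressions with rounds exhausted -> accept.
function decideGc(report, gcRounds, maxRounds = 3) {
  if (report.passed && report.total_regressions === 0) return 'pipeline_complete';
  return gcRounds < maxRounds ? 'retry' : 'accept';
}
```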
---
### Phase 2: Task Creation (retry only)
**Objective**: Create fix-verify retry task pair
**Steps** (only when decision is `retry`):
1. Increment gc_rounds
2. Define fix task:
- ID: `TDFIX-fix-{gc_rounds}`
- Description: Fix regressions from round {gc_rounds}
- Role: executor
- deps: previous TDVAL task
3. Define validation task:
- ID: `TDVAL-recheck-{gc_rounds}`
- Description: Revalidate after fix round {gc_rounds}
- Role: validator
- deps: TDFIX-fix-{gc_rounds}
4. Report new tasks to orchestrator for CSV insertion
**Output**: New task definitions for orchestrator to add to master CSV
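The retry pair above can be sketched as follows (an illustrative helper; `buildGcTasks` and its return shape are not part of the skill API, but the IDs and dependency wiring follow the steps):

```javascript
// Build the fix-verify retry pair for a given GC round. IDs follow the
// TDFIX-fix-{n} / TDVAL-recheck-{n} convention; the validation task depends
// on the fix task, and the fix task depends on the previous TDVAL task.
function buildGcTasks(gcRound, prevValidationTaskId) {
  const fixId = `TDFIX-fix-${gcRound}`;
  return {
    [fixId]: {
      description: `Fix regressions from round ${gcRound}`,
      role: 'executor',
      deps: [prevValidationTaskId],
    },
    [`TDVAL-recheck-${gcRound}`]: {
      description: `Revalidate after fix round ${gcRound}`,
      role: 'validator',
      deps: [fixId],
    },
  };
}
```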
---
## Structured Output Template
```
## Summary
- Validation result: <passed|failed>
- Total regressions: <count>
- GC round: <current>/<max>
- Decision: <pipeline_complete|retry|accept>
## Regression Details (if any)
- Test failures: <count>
- Type errors: <count>
- Lint errors: <count>
## Action Taken
- Decision: <decision>
- New tasks created: <task-ids or none>
## Metrics
- Debt score before: <score>
- Debt score after: <score>
- Improvement: <percentage>%
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Validation report not found | Report error, suggest re-running validator |
| Report parse error | Treat as failed validation, trigger retry if rounds remain |
| GC rounds already at max | Accept current state, report to orchestrator |
| Processing failure | Output partial results with clear status |

View File

@@ -1,151 +0,0 @@
# Plan Approver Agent
Interactive agent for reviewing the tech debt remediation plan at the plan approval gate checkpoint. Spawned after TDPLAN-001 completes, before TDFIX execution begins.
## Identity
- **Type**: `interactive`
- **Role File**: `agents/plan-approver.md`
- **Responsibility**: Review remediation plan, present to user, handle Approve/Revise/Abort
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Read the remediation plan (both .md and .json)
- Present clear summary with phases, item counts, effort estimates
- Wait for user approval before reporting
- Handle all three outcomes (Approve, Revise, Abort)
- Produce structured output following template
### MUST NOT
- Skip the MANDATORY FIRST STEPS role loading
- Approve plan without user confirmation
- Modify the plan artifacts directly
- Execute any fix actions
- Produce unstructured output
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | built-in | Load plan artifacts and context |
| `request_user_input` | built-in | Get user approval decision |
| `Write` | built-in | Store approval result |
### Tool Usage Patterns
**Read Pattern**: Load plan before review
```
Read("<session>/plan/remediation-plan.md")
Read("<session>/plan/remediation-plan.json")
Read("<session>/assessment/priority-matrix.json")
```
---
## Execution
### Phase 1: Plan Loading
**Objective**: Load and summarize the remediation plan
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| remediation-plan.md | Yes | Human-readable plan |
| remediation-plan.json | Yes | Machine-readable plan |
| priority-matrix.json | No | Assessment context |
| discoveries.ndjson | No | Shared discoveries |
**Steps**:
1. Read remediation-plan.md for overview
2. Read remediation-plan.json for metrics
3. Summarize: total actions, effort distribution, phases
4. Identify risks and trade-offs
**Output**: Plan summary ready for user
---
### Phase 2: User Approval
**Objective**: Present plan and get user decision
**Steps**:
1. Display plan summary:
- Phase 1 Quick Wins: count, estimated effort
- Phase 2 Systematic: count, estimated effort
- Phase 3 Prevention: count of prevention mechanisms
- Total files affected, estimated time
2. Present decision:
```javascript
request_user_input({
questions: [{
question: "Remediation plan generated. Review and decide:",
header: "Plan Approval",
id: "plan_approval",
options: [
{ label: "Approve (Recommended)", description: "Proceed with fix execution in worktree" },
{ label: "Revise", description: "Re-run planner with specific feedback" },
{ label: "Abort", description: "Stop pipeline, keep scan/assessment results" }
]
}]
})
```
3. Handle response:
| Response | Action |
|----------|--------|
| Approve | Report approved, trigger worktree creation |
| Revise | Collect revision feedback, report revision-needed |
| Abort | Report abort, pipeline stops |
**Output**: Approval decision with details
---
## Structured Output Template
```
## Summary
- Plan reviewed: remediation-plan.md
- Decision: <approved|revision-needed|aborted>
## Plan Overview
- Phase 1 Quick Wins: <count> items, <effort> effort
- Phase 2 Systematic: <count> items, <effort> effort
- Phase 3 Prevention: <count> mechanisms
- Files affected: <count>
## Decision Details
- User choice: <Approve|Revise|Abort>
- Feedback: <user feedback if revision>
## Risks Identified
- Risk 1: description
- Risk 2: description
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Plan file not found | Report error, suggest re-running planner |
| Plan is empty (no actions) | Report clean codebase, suggest closing |
| User does not respond | Timeout, report awaiting-review |
| Plan JSON parse error | Fall back to .md for review, report warning |

View File

@@ -1,390 +0,0 @@
# Agent Instruction Template -- Team Tech Debt
Role-specific instruction templates for CSV wave agents in the tech debt pipeline. Each role has a specialized instruction that is injected as the `instruction` parameter to `spawn_agents_on_csv`.
## Purpose
| Phase | Usage |
|-------|-------|
| Phase 1 | Orchestrator selects role-specific instruction based on task role |
| Phase 2 | Injected as `instruction` parameter to `spawn_agents_on_csv` |
---
## Scanner Instruction
```markdown
## TECH DEBT SCAN TASK
### MANDATORY FIRST STEPS
1. Read shared discoveries: <session-folder>/discoveries.ndjson (if exists)
2. Read project context: .workflow/project-tech.json (if exists)
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Role**: scanner
**Dimension Focus**: {debt_dimension}
**Pipeline Mode**: {pipeline_mode}
### Task Description
{description}
### Previous Context
{prev_context}
---
## Execution Protocol
1. **Read discoveries**: Load <session-folder>/discoveries.ndjson
2. **Detect project type**: Check package.json, pyproject.toml, go.mod, etc.
3. **Scan 5 dimensions**:
- **Code**: Complexity > 10, TODO/FIXME, deprecated APIs, dead code, duplicated logic
- **Architecture**: Circular dependencies, god classes, layering violations, tight coupling
- **Testing**: Missing tests, low coverage, test quality issues, no integration tests
- **Dependency**: Outdated packages, known vulnerabilities, unused dependencies
- **Documentation**: Missing JSDoc/docstrings, stale API docs, no README sections
4. **Use tools**: mcp__ace-tool__search_context for semantic search, Grep for pattern matching, Bash for static analysis tools
5. **Standardize each finding**:
- id: TD-NNN (sequential)
- dimension: code|architecture|testing|dependency|documentation
- severity: critical|high|medium|low
- file: path, line: number
- description: issue description
- suggestion: fix suggestion
- estimated_effort: small|medium|large|unknown
6. **Share discoveries**: Append each finding to discovery board:
```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"debt_item_found","data":{"id":"TD-NNN","dimension":"<dim>","severity":"<sev>","file":"<path>","line":<n>,"description":"<desc>","suggestion":"<fix>","estimated_effort":"<effort>"}}' >> <session-folder>/discoveries.ndjson
```
7. **Write artifact**: Save structured inventory to <session-folder>/scan/debt-inventory.json
8. **Report result**
---
## Output (report_agent_job_result)
{
"id": "{id}",
"status": "completed" | "failed",
"findings": "Scanned N dimensions. Found M debt items: X critical, Y high... (max 500 chars)",
"debt_items_count": "<total count>",
"artifacts_produced": "scan/debt-inventory.json",
"error": ""
}
```
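Step 6's discovery-board append amounts to emitting one JSON object per line. A minimal sketch (the helper name `toDiscoveryLine` is illustrative; field names mirror the echo template in the instruction):

```javascript
// Format one scanner finding as a single NDJSON discovery-board line.
// The caller appends the returned string plus "\n" to discoveries.ndjson.
function toDiscoveryLine(workerId, finding) {
  return JSON.stringify({
    ts: new Date().toISOString(),
    worker: workerId,
    type: 'debt_item_found',
    // finding: { id, dimension, severity, file, line, description, suggestion, estimated_effort }
    data: finding,
  });
}
```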
---
## Assessor Instruction
```markdown
## TECH DEBT ASSESSMENT TASK
### MANDATORY FIRST STEPS
1. Read shared discoveries: <session-folder>/discoveries.ndjson (if exists)
2. Read debt inventory: <session-folder>/scan/debt-inventory.json
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Role**: assessor
### Task Description
{description}
### Previous Context
{prev_context}
---
## Execution Protocol
1. **Load debt inventory** from <session-folder>/scan/debt-inventory.json
2. **Score each item**:
- **Impact Score** (1-5): critical=5, high=4, medium=3, low=1
- **Cost Score** (1-5): small=1, medium=3, large=5, unknown=3
3. **Classify into priority quadrants**:
| Impact | Cost | Quadrant |
|--------|------|----------|
| >= 4 | <= 2 | quick-win |
| >= 4 | >= 3 | strategic |
| <= 3 | <= 2 | backlog |
| <= 3 | >= 3 | defer |
4. **Sort** within each quadrant by impact_score descending
5. **Share discoveries**: Append assessment summary to discovery board
6. **Write artifact**: <session-folder>/assessment/priority-matrix.json
7. **Report result**
---
## Output (report_agent_job_result)
{
"id": "{id}",
"status": "completed" | "failed",
"findings": "Assessed M items. Quick-wins: X, Strategic: Y, Backlog: Z, Defer: W (max 500 chars)",
"debt_items_count": "<total assessed>",
"artifacts_produced": "assessment/priority-matrix.json",
"error": ""
}
```
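The scoring and quadrant tables in the assessor instruction condense into one classification function (a sketch; `classify` and the lookup tables are illustrative, with unmapped values defaulting to 3 as in the cost mapping):

```javascript
// Score a debt item and classify it into a priority quadrant,
// following the impact/cost mappings and quadrant table above.
const IMPACT = { critical: 5, high: 4, medium: 3, low: 1 };
const COST = { small: 1, medium: 3, large: 5, unknown: 3 };

function classify(item) {
  const impact = IMPACT[item.severity] ?? 3;
  const cost = COST[item.estimated_effort] ?? 3;
  let quadrant;
  if (impact >= 4) quadrant = cost <= 2 ? 'quick-win' : 'strategic';
  else quadrant = cost <= 2 ? 'backlog' : 'defer';
  return { ...item, impact_score: impact, cost_score: cost, priority_quadrant: quadrant };
}
```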
---
## Planner Instruction
```markdown
## TECH DEBT PLANNING TASK
### MANDATORY FIRST STEPS
1. Read shared discoveries: <session-folder>/discoveries.ndjson (if exists)
2. Read priority matrix: <session-folder>/assessment/priority-matrix.json
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Role**: planner
### Task Description
{description}
### Previous Context
{prev_context}
---
## Execution Protocol
1. **Load priority matrix** from <session-folder>/assessment/priority-matrix.json
2. **Group items**: quickWins (quick-win), strategic, backlog, deferred
3. **Create 3-phase remediation plan**:
- **Phase 1: Quick Wins** -- High impact, low cost, immediate execution
- **Phase 2: Systematic** -- High impact, high cost, structured refactoring
- **Phase 3: Prevention** -- Long-term prevention mechanisms
4. **Map action types** per dimension:
| Dimension | Action Type |
|-----------|-------------|
| code | refactor |
| architecture | restructure |
| testing | add-tests |
| dependency | update-deps |
| documentation | add-docs |
5. **Generate prevention actions** for dimensions with >= 3 items:
| Dimension | Prevention |
|-----------|------------|
| code | Add linting rules for complexity thresholds |
| architecture | Introduce module boundary checks in CI |
| testing | Set minimum coverage thresholds |
| dependency | Configure automated update bot |
| documentation | Add docstring enforcement in linting |
6. **Write artifacts**:
- <session-folder>/plan/remediation-plan.md (human-readable with checklists)
- <session-folder>/plan/remediation-plan.json (machine-readable)
7. **Report result**
---
## Output (report_agent_job_result)
{
"id": "{id}",
"status": "completed" | "failed",
"findings": "Created 3-phase plan. Phase 1: X quick-wins. Phase 2: Y systematic. Phase 3: Z prevention. Total actions: N (max 500 chars)",
"debt_items_count": "<total planned items>",
"artifacts_produced": "plan/remediation-plan.md;plan/remediation-plan.json",
"error": ""
}
```
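The planner's phase grouping can be sketched as follows (illustrative; the prevention mapping mirrors the table in the instruction, and the `>= 3` threshold comes from step 5):

```javascript
// Group assessed items into the three remediation phases. Phase 1 takes
// quick-wins, Phase 2 takes strategic items, and Phase 3 adds a prevention
// action for any dimension with >= 3 items.
const PREVENTION = {
  code: 'Add linting rules for complexity thresholds',
  architecture: 'Introduce module boundary checks in CI',
  testing: 'Set minimum coverage thresholds',
  dependency: 'Configure automated update bot',
  documentation: 'Add docstring enforcement in linting',
};

function buildPhases(items) {
  const phase1 = items.filter(i => i.priority_quadrant === 'quick-win');
  const phase2 = items.filter(i => i.priority_quadrant === 'strategic');
  const counts = {};
  for (const i of items) counts[i.dimension] = (counts[i.dimension] || 0) + 1;
  const phase3 = Object.keys(counts)
    .filter(d => counts[d] >= 3)
    .map(d => PREVENTION[d]);
  return { phase1, phase2, phase3 };
}
```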
---
## Executor Instruction
```markdown
## TECH DEBT FIX EXECUTION TASK
### MANDATORY FIRST STEPS
1. Read shared discoveries: <session-folder>/discoveries.ndjson (if exists)
2. Read remediation plan: <session-folder>/plan/remediation-plan.json
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Role**: executor
### Task Description
{description}
### Previous Context
{prev_context}
---
## Execution Protocol
**CRITICAL**: ALL file operations must execute within the worktree path.
1. **Load remediation plan** from <session-folder>/plan/remediation-plan.json
2. **Extract worktree path** from task description
3. **Group actions by type**: refactor -> update-deps -> add-tests -> add-docs -> restructure
4. **For each batch**:
- Read target files in worktree
- Apply changes following project conventions
- Validate changes compile/lint: `cd "<worktree>" && npx tsc --noEmit` or equivalent
- Track: items_fixed, items_failed, files_modified
5. **After each batch**: Verify via `cd "<worktree>" && git diff --name-only`
6. **Share discoveries**: Append fix_applied entries to discovery board:
```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"fix_applied","data":{"file":"<path>","change":"<desc>","lines_modified":<n>,"debt_id":"<TD-NNN>"}}' >> <session-folder>/discoveries.ndjson
```
7. **Self-validate**:
| Check | Command | Pass Criteria |
|-------|---------|---------------|
| Syntax | `cd "<worktree>" && npx tsc --noEmit` | No new errors |
| Lint | `cd "<worktree>" && npx eslint --no-error-on-unmatched-pattern <modified-files>` | No new errors |
8. **Write artifact**: <session-folder>/fixes/fix-log.json
9. **Report result**
---
## Output (report_agent_job_result)
{
"id": "{id}",
"status": "completed" | "failed",
"findings": "Fixed X/Y items. Batches: refactor(N), update-deps(N), add-tests(N). Files modified: Z (max 500 chars)",
"debt_items_count": "<items fixed>",
"artifacts_produced": "fixes/fix-log.json",
"error": ""
}
```
---
## Validator Instruction
```markdown
## TECH DEBT VALIDATION TASK
### MANDATORY FIRST STEPS
1. Read shared discoveries: <session-folder>/discoveries.ndjson (if exists)
2. Read fix log: <session-folder>/fixes/fix-log.json (if exists)
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Role**: validator
### Task Description
{description}
### Previous Context
{prev_context}
---
## Execution Protocol
**CRITICAL**: ALL validation commands must execute within the worktree path.
1. **Extract worktree path** from task description
2. **Load fix results** from <session-folder>/fixes/fix-log.json
3. **Run 4-layer validation**:
**Layer 1 -- Test Suite**:
- Command: `cd "<worktree>" && npm test` or `cd "<worktree>" && python -m pytest`
- PASS: No FAIL/error/failed keywords
- SKIP: No test runner available
**Layer 2 -- Type Check**:
- Command: `cd "<worktree>" && npx tsc --noEmit`
- Count: `error TS` occurrences
**Layer 3 -- Lint Check**:
- Command: `cd "<worktree>" && npx eslint --no-error-on-unmatched-pattern <modified-files>`
- Count: error occurrences
**Layer 4 -- Quality Analysis** (when > 5 modified files):
- Compare code quality before/after
- Assess complexity, duplication, naming improvements
4. **Calculate debt score**:
- debt_score_after = debt items NOT in modified files (remaining unfixed)
- improvement_percentage = ((before - after) / before) * 100
5. **Auto-fix attempt** (when total_regressions <= 3):
- Fix minor regressions inline
- Re-run validation checks
6. **Share discoveries**: Append regression_found entries if any:
```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"regression_found","data":{"file":"<path>","test":"<test>","description":"<desc>","severity":"<sev>"}}' >> <session-folder>/discoveries.ndjson
```
7. **Write artifact**: <session-folder>/validation/validation-report.json with:
- validation_date, passed (bool), total_regressions
- checks: {tests, types, lint, quality} with per-check status
- debt_score_before, debt_score_after, improvement_percentage
8. **Report result**
---
## Output (report_agent_job_result)
{
"id": "{id}",
"status": "completed" | "failed",
"findings": "Validation: PASSED|FAILED. Tests: OK/N failures. Types: OK/N errors. Lint: OK/N errors. Debt reduction: X% (max 500 chars)",
"debt_items_count": "<debt_score_after>",
"artifacts_produced": "validation/validation-report.json",
"error": ""
}
```
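Step 4's debt-score arithmetic, as a sketch (the function name is illustrative; "remaining" means debt items whose file was not modified by the fixes):

```javascript
// Compute the post-fix debt score and improvement percentage:
// debt_score_after counts items in files the fixes did not touch.
function debtImprovement(allItems, modifiedFiles) {
  const before = allItems.length;
  const after = allItems.filter(i => !modifiedFiles.includes(i.file)).length;
  const pct = before === 0 ? 0 : ((before - after) / before) * 100;
  return { debt_score_before: before, debt_score_after: after, improvement_percentage: pct };
}
```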
---
## Placeholder Reference
| Placeholder | Resolved By | When |
|-------------|------------|------|
| `<session-folder>` | Skill designer (Phase 1) | Literal path baked into instruction |
| `{id}` | spawn_agents_on_csv | Runtime from CSV row |
| `{title}` | spawn_agents_on_csv | Runtime from CSV row |
| `{description}` | spawn_agents_on_csv | Runtime from CSV row |
| `{role}` | spawn_agents_on_csv | Runtime from CSV row |
| `{debt_dimension}` | spawn_agents_on_csv | Runtime from CSV row |
| `{pipeline_mode}` | spawn_agents_on_csv | Runtime from CSV row |
| `{prev_context}` | spawn_agents_on_csv | Runtime from CSV row |
---
## Instruction Selection Logic
The orchestrator selects the appropriate instruction section based on the task's `role` column:
| Role | Instruction Section |
|------|-------------------|
| scanner | Scanner Instruction |
| assessor | Assessor Instruction |
| planner | Planner Instruction |
| executor | Executor Instruction |
| validator | Validator Instruction |
Since each wave typically contains tasks from a single role (linear pipeline), the orchestrator uses the role of the first task in the wave to select the instruction template. The `<session-folder>` placeholder is replaced with the actual session path before injection.
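A minimal sketch of this selection logic (function and map names are illustrative; `replaceAll` bakes the literal session path into the chosen template):

```javascript
// Select the instruction template for a wave from its first task's role,
// then substitute the session path placeholder.
const SECTIONS = {
  scanner: 'Scanner Instruction',
  assessor: 'Assessor Instruction',
  planner: 'Planner Instruction',
  executor: 'Executor Instruction',
  validator: 'Validator Instruction',
};

function selectInstruction(waveTasks, templates, sessionFolder) {
  const role = waveTasks[0].role;
  const template = templates[SECTIONS[role]];
  return template.replaceAll('<session-folder>', sessionFolder);
}
```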

View File

@@ -0,0 +1,69 @@
---
role: assessor
prefix: TDEVAL
inner_loop: false
message_types: [state_update]
---
# Tech Debt Assessor
Quantitative evaluator for tech debt items. Score each debt item on business impact (1-5) and fix cost (1-5), classify into priority quadrants, produce priority-matrix.json.
## Phase 2: Load Debt Inventory
| Input | Source | Required |
|-------|--------|----------|
| Session path | task description (regex: `session:\s*(.+)`) | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Debt inventory | meta.json:debt_inventory OR <session>/scan/debt-inventory.json | Yes |
1. Extract session path from task description
2. Read .msg/meta.json for team context
3. Load debt_inventory from shared memory or fallback to debt-inventory.json file
4. If debt_inventory is empty -> report empty assessment and exit
## Phase 3: Evaluate Each Item
**Strategy selection**:
| Item Count | Strategy |
|------------|----------|
| <= 10 | Heuristic: severity-based impact + effort-based cost |
| 11-50 | CLI batch: single gemini analysis call |
| > 50 | CLI chunked: batches of 25 items |
**Impact Score Mapping** (heuristic):
| Severity | Impact Score |
|----------|-------------|
| critical | 5 |
| high | 4 |
| medium | 3 |
| low | 1 |
**Cost Score Mapping** (heuristic):
| Estimated Effort | Cost Score |
|------------------|------------|
| small | 1 |
| medium | 3 |
| large | 5 |
| unknown | 3 |
**Priority Quadrant Classification**:
| Impact | Cost | Quadrant |
|--------|------|----------|
| >= 4 | <= 2 | quick-win |
| >= 4 | >= 3 | strategic |
| <= 3 | <= 2 | backlog |
| <= 3 | >= 3 | defer |
For CLI mode, prompt gemini with full debt summary requesting JSON array of `{id, impact_score, cost_score, risk_if_unfixed, priority_quadrant}`. Unevaluated items fall back to heuristic scoring.
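The strategy table can be sketched as follows (illustrative; the chunk size of 25 comes from the table above):

```javascript
// Pick the evaluation strategy from the item count, per the strategy table.
function pickStrategy(itemCount) {
  if (itemCount <= 10) return { mode: 'heuristic' };
  if (itemCount <= 50) return { mode: 'cli-batch' };
  // > 50: chunk into batches of 25 for the CLI
  return { mode: 'cli-chunked', chunks: Math.ceil(itemCount / 25) };
}
```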
## Phase 4: Generate Priority Matrix
1. Build matrix structure: evaluation_date, total_items, by_quadrant (grouped), summary (counts per quadrant)
2. Sort within each quadrant by impact_score descending
3. Write `<session>/assessment/priority-matrix.json`
4. Update .msg/meta.json with `priority_matrix` summary and evaluated `debt_inventory`

View File

@@ -0,0 +1,47 @@
# Analyze Task
Parse user task -> detect tech debt signals -> assess complexity -> determine pipeline mode and roles.
**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.
## Signal Detection
| Keywords | Signal | Mode Hint |
|----------|--------|-----------|
| 扫描, scan, 审计, audit | debt-scan | scan |
| 评估, assess, quantify | debt-assess | scan |
| 规划, plan, roadmap | debt-plan | targeted |
| 修复, fix, remediate, clean | debt-fix | remediate |
| 验证, validate, verify | debt-validate | remediate |
| 定向, targeted, specific | debt-targeted | targeted |
## Complexity Scoring
| Factor | Points |
|--------|--------|
| Full codebase scope | +2 |
| Multiple debt dimensions | +1 per dimension (max 3) |
| Large codebase (implied) | +1 |
| Targeted specific items | -1 |
Results: 1-3 Low (scan mode), 4-6 Medium (remediate), 7+ High (remediate + full pipeline)
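The score-to-level mapping, as a sketch (function name illustrative):

```javascript
// Map a complexity score to its level and default pipeline mode,
// per the scoring table above (7+ implies the full remediate pipeline).
function complexityLevel(score) {
  if (score <= 3) return { level: 'Low', mode: 'scan' };
  if (score <= 6) return { level: 'Medium', mode: 'remediate' };
  return { level: 'High', mode: 'remediate' };
}
```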
## Pipeline Mode Determination
| Score + Signals | Mode |
|----------------|------|
| scan/audit keywords | scan |
| targeted/specific keywords | targeted |
| Default | remediate |
## Output
Write scope context to coordinator memory:
```json
{
"pipeline_mode": "<scan|remediate|targeted>",
"scope": "<detected-scope>",
"focus_dimensions": ["code", "architecture", "testing", "dependency", "documentation"],
"complexity": { "score": 0, "level": "Low|Medium|High" }
}
```

View File

@@ -0,0 +1,163 @@
# Command: dispatch
> Task chain creation and dependency management. Builds the tech debt governance task chain for the selected pipeline mode and writes it to tasks.json.
## When to Use
- Phase 3 of Coordinator
- Pipeline mode is determined and the task chain needs to be created
- Team is created; workers are pending spawn
**Trigger conditions**:
- After coordinator Phase 2 completes
- A mode switch requires rebuilding the task chain
- The fix-verify loop needs new fix tasks
## Strategy
### Delegation Mode
**Mode**: Direct (the coordinator edits tasks.json itself)
### Decision Logic
```javascript
// Select the pipeline for the given pipelineMode
function buildPipeline(pipelineMode, sessionFolder, taskDescription) {
const pipelines = {
'scan': [
{ prefix: 'TDSCAN', owner: 'scanner', desc: 'Multi-dimension tech debt scan', deps: [] },
{ prefix: 'TDEVAL', owner: 'assessor', desc: 'Quantitative assessment and prioritization', deps: ['TDSCAN'] }
],
'remediate': [
{ prefix: 'TDSCAN', owner: 'scanner', desc: 'Multi-dimension tech debt scan', deps: [] },
{ prefix: 'TDEVAL', owner: 'assessor', desc: 'Quantitative assessment and prioritization', deps: ['TDSCAN'] },
{ prefix: 'TDPLAN', owner: 'planner', desc: 'Phased remediation planning', deps: ['TDEVAL'] },
{ prefix: 'TDFIX', owner: 'executor', desc: 'Debt fix execution', deps: ['TDPLAN'] },
{ prefix: 'TDVAL', owner: 'validator', desc: 'Fix validation', deps: ['TDFIX'] }
],
'targeted': [
{ prefix: 'TDPLAN', owner: 'planner', desc: 'Targeted fix planning', deps: [] },
{ prefix: 'TDFIX', owner: 'executor', desc: 'Debt fix execution', deps: ['TDPLAN'] },
{ prefix: 'TDVAL', owner: 'validator', desc: 'Fix validation', deps: ['TDFIX'] }
]
}
return pipelines[pipelineMode] || pipelines['scan']
}
```
## Execution Steps
### Step 1: Context Preparation
```javascript
const pipeline = buildPipeline(pipelineMode, sessionFolder, taskDescription)
```
### Step 2: Build Tasks JSON
```javascript
const tasks = {}
for (const stage of pipeline) {
const taskId = `${stage.prefix}-001`
// Build the task description (includes session and context info)
const fullDesc = [
stage.desc,
`\nsession: ${sessionFolder}`,
`\n\nGoal: ${taskDescription}`
].join('')
// Build the dependency ID list
const depIds = stage.deps.map(dep => `${dep}-001`)
// Add the task to the tasks object
tasks[taskId] = {
title: stage.desc,
description: fullDesc,
role: stage.owner,
prefix: stage.prefix,
deps: depIds,
status: "pending",
findings: null,
error: null
}
}
// Merge new tasks into coordinator state
state.tasks = { ...state.tasks, ...tasks }
// Write updated tasks.json
```
### Step 3: Result Processing
```javascript
// Verify the task chain
const chainValid = Object.keys(tasks).length === pipeline.length
if (!chainValid) {
mcp__ccw-tools__team_msg({
operation: "log", session_id: sessionId, from: "coordinator",
type: "error",
})
}
```
## Fix-Verify Loop Task Creation
When the validator reports regressions, the coordinator invokes this logic to append tasks to tasks.json:
```javascript
function createFixVerifyTasks(fixVerifyIteration, sessionFolder) {
const fixId = `TDFIX-fix-${fixVerifyIteration}`
const valId = `TDVAL-recheck-${fixVerifyIteration}`
// Add the fix task to tasks.json
state.tasks[fixId] = {
title: `Fix regressions (Fix-Verify #${fixVerifyIteration})`,
description: `Fix the regressions found during validation\nsession: ${sessionFolder}\ntype: fix-verify`,
role: "executor",
prefix: "TDFIX",
deps: [],
status: "pending",
findings: null,
error: null
}
// Add the re-validation task to tasks.json
state.tasks[valId] = {
title: `Recheck after fix (Fix-Verify #${fixVerifyIteration})`,
description: `Re-validate the fix results\nsession: ${sessionFolder}`,
role: "validator",
prefix: "TDVAL",
deps: [fixId],
status: "pending",
findings: null,
error: null
}
// Write updated tasks.json
}
```
## Output Format
```
## Task Chain Created
### Mode: [scan|remediate|targeted]
### Pipeline Stages: [count]
- [prefix]-001: [description] (owner: [role], deps: [deps])
### Verification: PASS/FAIL
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Task creation fails | Retry once, then report to user |
| Dependency cycle detected | Flatten dependencies, warn coordinator |
| Invalid pipelineMode | Default to 'scan' mode |
| Timeout (>5 min) | Report partial results, notify coordinator |

View File

@@ -0,0 +1,231 @@
# Monitor Pipeline
Synchronous pipeline coordination using spawn_agent + wait_agent.
## Constants
- WORKER_AGENT: team_worker
- ONE_STEP_PER_INVOCATION: false (synchronous wait loop)
- FAST_ADVANCE_AWARE: true
- MAX_GC_ROUNDS: 3
## Handler Router
| Source | Handler |
|--------|---------|
| "capability_gap" | handleAdapt |
| "check" or "status" | handleCheck |
| "resume" or "continue" | handleResume |
| All tasks completed | handleComplete |
| Default | handleSpawnNext |
## handleCheck
Read-only status report from tasks.json, then STOP.
1. Read tasks.json
2. Count tasks by status (pending, in_progress, completed, failed)
```
Pipeline Status (<mode>):
[DONE] TDSCAN-001 (scanner) -> scan complete
[DONE] TDEVAL-001 (assessor) -> assessment ready
[RUN] TDPLAN-001 (planner) -> planning...
[WAIT] TDFIX-001 (executor) -> blocked by TDPLAN-001
[WAIT] TDVAL-001 (validator) -> blocked by TDFIX-001
GC Rounds: 0/3
Session: <session-id>
Commands: 'resume' to advance | 'check' to refresh
```
Output status -- do NOT advance pipeline.
## handleResume
1. Read tasks.json, check active_agents
2. Tasks stuck in "in_progress" -> reset to "pending"
3. Tasks with completed deps but still "pending" -> include in spawn list
4. -> handleSpawnNext
## handleSpawnNext
Find ready tasks, spawn workers, wait for completion, process results.
1. Read tasks.json
2. Collect: completedTasks, inProgressTasks, readyTasks (pending + all deps completed)
3. No ready + work in progress -> report waiting, STOP
4. No ready + nothing in progress -> handleComplete
5. Has ready -> for each:
a. Check inner loop role with active worker -> skip (worker picks up)
b. Update task status in tasks.json -> in_progress
c. team_msg log -> task_unblocked
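The readiness filter in step 2 can be sketched as a pure function over the tasks.json map (illustrative; it assumes each task has `status` and `deps` fields as shown elsewhere in this file):

```javascript
// Collect tasks whose dependencies are all completed and that are still pending.
function readyTasks(tasks) {
  const done = new Set(
    Object.keys(tasks).filter(id => tasks[id].status === 'completed'));
  return Object.keys(tasks).filter(id =>
    tasks[id].status === 'pending' &&
    tasks[id].deps.every(dep => done.has(dep)));
}
```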
### Spawn Workers
For each ready task:
```javascript
// 1) Update status in tasks.json
state.tasks[taskId].status = 'in_progress'
// 2) Spawn worker
const agentId = spawn_agent({
agent_type: "team_worker",
items: [
{ type: "text", text: `## Role Assignment
role: ${task.role}
role_spec: ${skillRoot}/roles/${task.role}/role.md
session: ${sessionFolder}
session_id: ${sessionId}
team_name: tech-debt
requirement: ${task.description}
inner_loop: ${task.role === 'executor'}` },
{ type: "text", text: `Read role_spec file (${skillRoot}/roles/${task.role}/role.md) to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).` },
{ type: "text", text: `## Task Context
task_id: ${taskId}
title: ${task.title}
description: ${task.description}` },
{ type: "text", text: `## Upstream Context\n${prevContext}` }
]
})
// 3) Track agent
state.active_agents[taskId] = { agentId, role: task.role, started_at: now }
```
Stage-to-role mapping:
| Task Prefix | Role |
|-------------|------|
| TDSCAN | scanner |
| TDEVAL | assessor |
| TDPLAN | planner |
| TDFIX | executor |
| TDVAL | validator |
### Wait and Process Results
After spawning all ready tasks:
```javascript
// 4) Batch wait for all spawned workers
const agentIds = Object.values(state.active_agents).map(a => a.agentId)
wait_agent({ ids: agentIds, timeout_ms: 900000 })
// 5) Collect each worker's reported result
for (const [taskId, agent] of Object.entries(state.active_agents)) {
state.tasks[taskId].status = 'completed' // set to 'failed' instead if the worker reported an error
close_agent({ id: agent.agentId })
delete state.active_agents[taskId]
}
```
### Checkpoint Processing
After task completion, check for checkpoints:
- **TDPLAN-001 completes** -> Plan Approval Gate:
```javascript
request_user_input({
questions: [{
question: "Remediation plan generated. Review and decide:",
header: "Plan Review",
multiSelect: false,
options: [
{ label: "Approve", description: "Proceed with fix execution" },
{ label: "Revise", description: "Re-run planner with feedback" },
{ label: "Abort", description: "Stop pipeline" }
]
}]
})
```
- Approve -> Worktree Creation -> continue handleSpawnNext loop
- Revise -> Add TDPLAN-revised task to tasks.json -> continue
- Abort -> Log shutdown -> handleComplete
- **Worktree Creation** (before TDFIX):
```
Bash("git worktree add .worktrees/TD-<slug>-<date> -b tech-debt/TD-<slug>-<date>")
```
Update .msg/meta.json with worktree info.
- **TDVAL-* completes** -> GC Loop Check:
Read validation results from .msg/meta.json
| Condition | Action |
|-----------|--------|
| No regressions | -> continue (pipeline complete) |
| Regressions AND gc_rounds < 3 | Add fix-verify tasks to tasks.json, increment gc_rounds |
| Regressions AND gc_rounds >= 3 | Accept current state -> handleComplete |
Fix-Verify Task Creation (add to tasks.json):
```json
{
"TDFIX-fix-<round>": {
"title": "Fix regressions (Fix-Verify #<round>)",
"description": "PURPOSE: Fix regressions | Session: <session>",
"role": "executor",
"prefix": "TDFIX",
"deps": [],
"status": "pending",
"findings": null,
"error": null
},
"TDVAL-recheck-<round>": {
"title": "Recheck after fix (Fix-Verify #<round>)",
"description": "Re-validate after fix",
"role": "validator",
"prefix": "TDVAL",
"deps": ["TDFIX-fix-<round>"],
"status": "pending",
"findings": null,
"error": null
}
}
```
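The decision table and fix-verify task creation above can be combined into one sketch (hypothetical `gcLoopDecision` helper; field names mirror tasks.json):

```javascript
// Decide the next step after a TDVAL task completes, per the GC-loop table.
const MAX_GC_ROUNDS = 3; // documented cap

function gcLoopDecision(regressionCount, gcRounds) {
  if (regressionCount === 0) return { action: "continue" }; // pipeline complete
  if (gcRounds < MAX_GC_ROUNDS) {
    const round = gcRounds + 1;
    return {
      action: "fix_verify",
      gc_rounds: round,
      // New tasks to merge into tasks.json; recheck depends on the fix task.
      tasks: {
        [`TDFIX-fix-${round}`]: { role: "executor", deps: [], status: "pending" },
        [`TDVAL-recheck-${round}`]: {
          role: "validator",
          deps: [`TDFIX-fix-${round}`],
          status: "pending",
        },
      },
    };
  }
  return { action: "accept_and_complete" }; // cap reached, accept current state
}
```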
### Persist and Loop
After processing all results:
1. Write updated tasks.json
2. Check if more tasks are now ready (deps newly resolved)
3. If yes -> loop back to step 1 of handleSpawnNext
4. If no more ready and all done -> handleComplete
5. If no more ready but some still blocked -> report status, STOP
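Step 2's readiness check can be sketched as follows (illustrative helper, assuming the tasks.json shape initialized in Phase 2):

```javascript
// A task is ready when it is pending and every dependency has completed.
function readyTasks(tasks) {
  return Object.entries(tasks)
    .filter(([, t]) =>
      t.status === "pending" &&
      (t.deps || []).every((d) => tasks[d] && tasks[d].status === "completed"))
    .map(([id]) => id);
}
```

When this returns an empty list while pending tasks remain, the pipeline is blocked (case 5 above).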
## handleComplete
Pipeline done. Generate report and completion action.
1. Verify all tasks (including fix-verify tasks) have status "completed"
2. If any not completed -> handleSpawnNext
3. If all completed:
- Read final state from .msg/meta.json
- If worktree exists and validation passed: commit, push, gh pr create, cleanup worktree
- Compile summary: total tasks, completed, gc_rounds, debt_score_before, debt_score_after
- Transition to coordinator Phase 5
4. Execute completion action per tasks.json completion_action:
- interactive -> request_user_input (Archive/Keep/Export)
- auto_archive -> Archive & Clean (rm -rf session folder)
- auto_keep -> Keep Active (status=paused)
## handleAdapt
Capability gap reported mid-pipeline.
1. Parse gap description
2. Check if existing role covers it -> redirect
3. Role count < 5 -> generate dynamic role spec in <session>/role-specs/
4. Add new task to tasks.json, spawn worker via spawn_agent + wait_agent
5. Role count >= 5 -> merge or pause
## Fast-Advance Reconciliation
On every coordinator wake:
1. Read team_msg entries with type="fast_advance"
2. Sync active_agents with spawned successors
3. No duplicate spawns


@@ -0,0 +1,141 @@
# Coordinator Role
Tech debt governance team coordinator. Orchestrates the pipeline: requirement clarification -> mode selection (scan/remediate/targeted) -> session creation -> task dispatch -> monitoring and coordination -> Fix-Verify loop -> debt reduction report.
## Identity
- **Name**: coordinator | **Tag**: [coordinator]
- **Responsibility**: Parse requirements -> Create session -> Dispatch tasks -> Monitor progress -> Report results
## Boundaries
### MUST
- All output (team_msg, logs) must carry `[coordinator]` identifier
- Only responsible for: requirement clarification, mode selection, task creation/dispatch, progress monitoring, quality gates, result reporting
- Create tasks in tasks.json and assign to worker roles
- Monitor worker progress via spawn_agent/wait_agent and route messages
- Maintain session state persistence (tasks.json)
### MUST NOT
- Execute tech debt work directly (delegate to workers)
- Modify task outputs (workers own their deliverables)
- Call CLI tools for analysis, exploration, or code generation
- Modify source code or generate artifact files directly
- Bypass worker roles to complete delegated work
- Skip dependency validation when creating task chains
- Omit `[coordinator]` identifier in any output
## Command Execution Protocol
When coordinator needs to execute a command (analyze, dispatch, monitor):
1. Read `commands/<command>.md`
2. Follow the workflow defined in the command
3. Commands are inline execution guides, NOT separate agents
4. Execute synchronously, complete before proceeding
## Entry Router
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Status check | Arguments contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume (monitor.md) |
| Pipeline complete | All tasks have status "completed" | -> handleComplete (monitor.md) |
| Interrupted session | Active/paused session exists in .workflow/.team/TD-* | -> Phase 0 |
| New session | None of above | -> Phase 1 |
For check/resume/complete: load `@commands/monitor.md`, execute matched handler, STOP.
## Phase 0: Session Resume Check
1. Scan `.workflow/.team/TD-*/tasks.json` for active/paused sessions
2. No sessions -> Phase 1
3. Single session -> reconcile:
a. Read tasks.json, reset in_progress -> pending
b. Rebuild active_agents map
c. Kick first ready task via handleSpawnNext
4. Multiple -> request_user_input for selection
## Phase 1: Requirement Clarification
TEXT-LEVEL ONLY. No source code reading.
1. Parse arguments for explicit settings: mode, scope, focus areas
2. Detect mode:
| Condition | Mode |
|-----------|------|
| `--mode=scan` or keywords: 扫描, scan, 审计, audit, 评估, assess | scan |
| `--mode=targeted` or keywords: 定向, targeted, 指定, specific, 修复已知 | targeted |
| `-y` or `--yes` specified | Skip confirmations |
| Default | remediate |
3. Ask for missing parameters (skip if auto mode):
   - request_user_input: Tech Debt Target (Custom / Full-project scan / Full remediation / Targeted fix)
4. Store: mode, scope, focus, constraints
5. Delegate to @commands/analyze.md -> output task-analysis context
## Phase 2: Create Session + Initialize
1. Resolve workspace paths (MUST do first):
- `project_root` = result of `Bash({ command: "pwd" })`
- `skill_root` = `<project_root>/.codex/skills/team-tech-debt`
2. Generate session ID: `TD-<slug>-<YYYY-MM-DD>`
3. Create session folder structure:
```bash
mkdir -p .workflow/.team/${SESSION_ID}/{scan,assessment,plan,fixes,validation,wisdom,.msg}
```
4. Initialize .msg/meta.json via team_msg state_update with pipeline metadata
5. Write initial tasks.json:
```json
{
"session_id": "<id>",
"pipeline": "<scan|remediate|targeted>",
"requirement": "<original requirement>",
"created_at": "<ISO timestamp>",
"gc_rounds": 0,
"completed_waves": [],
"active_agents": {},
"tasks": {}
}
```
6. Do NOT spawn workers yet - deferred to Phase 4
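Steps 2 and 5 can be sketched together as a session bootstrap (illustrative; `slugify` and `initSession` are hypothetical helpers, and the real coordinator derives the slug from the clarified requirement):

```javascript
// Build TD-<slug>-<YYYY-MM-DD> and the initial tasks.json skeleton.
function slugify(text) {
  return text.toLowerCase().trim()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "");
}

function initSession(requirement, mode, date = new Date()) {
  const sessionId =
    `TD-${slugify(requirement).slice(0, 24)}-${date.toISOString().slice(0, 10)}`;
  return {
    session_id: sessionId,
    pipeline: mode,
    requirement,
    created_at: date.toISOString(),
    gc_rounds: 0,
    completed_waves: [],
    active_agents: {},
    tasks: {},
  };
}
```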
## Phase 3: Create Task Chain
Delegate to @commands/dispatch.md. Task chain by mode:
| Mode | Task Chain |
|------|------------|
| scan | TDSCAN-001 -> TDEVAL-001 |
| remediate | TDSCAN-001 -> TDEVAL-001 -> TDPLAN-001 -> TDFIX-001 -> TDVAL-001 |
| targeted | TDPLAN-001 -> TDFIX-001 -> TDVAL-001 |
## Phase 4: Spawn-and-Wait
Delegate to @commands/monitor.md#handleSpawnNext:
1. Find ready tasks (pending + deps resolved)
2. Spawn team_worker agents via spawn_agent
3. Wait for completion via wait_agent
4. Process results, advance pipeline
5. Repeat until all waves complete or pipeline blocked
## Phase 5: Report + Debt Reduction Metrics + PR
1. Read shared memory -> collect all results
2. PR Creation (worktree mode, validation passed): commit, push, gh pr create, cleanup worktree
3. Calculate: debt_items_found, items_fixed, reduction_rate
4. Generate report with mode, debt scores, validation status
5. Output with [coordinator] prefix
6. Execute completion action (request_user_input: New target / Deep fix / Close team)
## Error Handling
| Error | Resolution |
|-------|------------|
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Reset task to pending in tasks.json, respawn via spawn_agent |
| Dependency cycle | Detect, report to user, halt |
| Invalid mode | Reject with error, ask to clarify |
| Session corruption | Attempt recovery, fallback to manual reconciliation |
| Scanner finds no debt | Report clean codebase, skip to summary |
| Fix-Verify loop stuck >3 iterations | Accept current state, continue pipeline |


@@ -0,0 +1,76 @@
---
role: executor
prefix: TDFIX
inner_loop: true
message_types: [state_update]
---
# Tech Debt Executor
Debt cleanup executor. Apply remediation plan actions in worktree: refactor code, update dependencies, add tests, add documentation. Batch-delegate to CLI tools, self-validate after each batch.
## Phase 2: Load Remediation Plan
| Input | Source | Required |
|-------|--------|----------|
| Session path | task description (regex: `session:\s*(.+)`) | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Remediation plan | <session>/plan/remediation-plan.json | Yes |
| Worktree info | meta.json:worktree.path, worktree.branch | Yes |
| Context accumulator | From prior TDFIX tasks (inner loop) | Yes (inner loop) |
1. Extract session path from task description
2. Read .msg/meta.json for worktree path and branch
3. Read remediation-plan.json, extract all actions from plan phases
4. Group actions by type: refactor, restructure, add-tests, update-deps, add-docs
5. Split large groups (> 10 items) into sub-batches of 10
6. For inner loop (fix-verify cycle): load context_accumulator from prior TDFIX tasks, parse review/validation feedback for specific issues
**Batch order**: refactor -> update-deps -> add-tests -> add-docs -> restructure
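The grouping, ordering, and sub-batching rules above can be sketched as (illustrative `buildBatches` helper, assuming each action carries a `type` field):

```javascript
// Group actions by type, keep the documented batch order, and split any
// group larger than 10 items into sub-batches of 10.
const BATCH_ORDER = ["refactor", "update-deps", "add-tests", "add-docs", "restructure"];

function buildBatches(actions, size = 10) {
  const groups = {};
  for (const a of actions) (groups[a.type] ||= []).push(a);
  const batches = [];
  for (const type of BATCH_ORDER) {
    const items = groups[type] || [];
    for (let i = 0; i < items.length; i += size) {
      batches.push({ type, items: items.slice(i, i + size) });
    }
  }
  return batches;
}
```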
## Phase 3: Execute Fixes
For each batch, use CLI tool for implementation:
**Worktree constraint**: ALL file operations and commands must execute within worktree path. Use `cd "<worktree-path>" && ...` prefix for all Bash commands.
**Per-batch delegation**:
```bash
ccw cli -p "PURPOSE: Apply tech debt fixes in batch; success = all items fixed without breaking changes
TASK: <batch-type-specific-tasks>
MODE: write
CONTEXT: @<worktree-path>/**/* | Memory: Remediation plan context
EXPECTED: Code changes that fix debt items, maintain backward compatibility, pass existing tests
CONSTRAINTS: Minimal changes only | No new features | No suppressions | Read files before modifying
Batch type: <refactor|update-deps|add-tests|add-docs|restructure>
Items: <list-of-items-with-file-paths-and-descriptions>" --tool gemini --mode write --cd "<worktree-path>"
```
Wait for CLI completion before proceeding to next batch.
**Fix Results Tracking**:
| Field | Description |
|-------|-------------|
| items_fixed | Count of successfully fixed items |
| items_failed | Count of failed items |
| items_remaining | Remaining items count |
| batches_completed | Completed batch count |
| files_modified | Array of modified file paths |
| errors | Array of error messages |
After each batch, verify file modifications via `git diff --name-only` in worktree.
## Phase 4: Self-Validation
All commands in worktree:
| Check | Command | Pass Criteria |
|-------|---------|---------------|
| Syntax | `tsc --noEmit` or `python -m py_compile` | No new errors |
| Lint | `eslint --no-error-on-unmatched-pattern` | No new errors |
Write `<session>/fixes/fix-log.json` with fix results. Update .msg/meta.json with `fix_results`.
Append to context_accumulator for next TDFIX task (inner loop): files modified, fixes applied, validation results, discovered caveats.


@@ -0,0 +1,69 @@
---
role: planner
prefix: TDPLAN
inner_loop: false
message_types: [state_update]
---
# Tech Debt Planner
Remediation plan designer. Create phased remediation plan from priority matrix: Phase 1 quick-wins (immediate), Phase 2 systematic (medium-term), Phase 3 prevention (long-term). Produce remediation-plan.md.
## Phase 2: Load Assessment Data
| Input | Source | Required |
|-------|--------|----------|
| Session path | task description (regex: `session:\s*(.+)`) | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Priority matrix | <session>/assessment/priority-matrix.json | Yes |
1. Extract session path from task description
2. Read .msg/meta.json for debt_inventory
3. Read priority-matrix.json for quadrant groupings
4. Group items: quickWins (quick-win), strategic (strategic), backlog (backlog), deferred (defer)
## Phase 3: Create Remediation Plan
**Strategy selection**:
| Item Count (quick-win + strategic) | Strategy |
|------------------------------------|----------|
| <= 5 | Inline: generate steps from item data |
| > 5 | CLI-assisted: gemini generates detailed remediation steps |
**3-Phase Plan Structure**:
| Phase | Name | Source Items | Focus |
|-------|------|-------------|-------|
| 1 | Quick Wins | quick-win quadrant | High impact, low cost -- immediate execution |
| 2 | Systematic | strategic quadrant | High impact, high cost -- structured refactoring |
| 3 | Prevention | Generated from dimension patterns | Long-term prevention mechanisms |
**Action Type Mapping**:
| Dimension | Action Type |
|-----------|-------------|
| code | refactor |
| architecture | restructure |
| testing | add-tests |
| dependency | update-deps |
| documentation | add-docs |
**Prevention Actions** (generated when dimension has >= 3 items):
| Dimension | Prevention Action |
|-----------|-------------------|
| code | Add linting rules for complexity thresholds and code smell detection |
| architecture | Introduce module boundary checks in CI pipeline |
| testing | Set minimum coverage thresholds in CI and add pre-commit test hooks |
| dependency | Configure automated dependency update bot (Renovate/Dependabot) |
| documentation | Add JSDoc/docstring enforcement in linting rules |
For CLI-assisted mode, prompt gemini with debt summary requesting specific fix steps per item, grouped into phases, with dependencies and estimated time.
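The action-type mapping and the >= 3-item prevention threshold can be sketched as (illustrative helper; the real plan also attaches the per-dimension prevention text from the table above):

```javascript
// Dimension -> action type, per the mapping table.
const ACTION_TYPES = {
  code: "refactor",
  architecture: "restructure",
  testing: "add-tests",
  dependency: "update-deps",
  documentation: "add-docs",
};

// Emit a Phase 3 prevention action for each dimension with >= threshold items.
function preventionActions(items, threshold = 3) {
  const counts = {};
  for (const it of items) counts[it.dimension] = (counts[it.dimension] || 0) + 1;
  return Object.keys(counts)
    .filter((dim) => counts[dim] >= threshold)
    .map((dim) => ({ dimension: dim, action: ACTION_TYPES[dim], phase: 3 }));
}
```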
## Phase 4: Validate & Save
1. Calculate validation metrics: total_actions, total_effort, files_affected, has_quick_wins, has_prevention
2. Write `<session>/plan/remediation-plan.md` (markdown with per-item checklists)
3. Write `<session>/plan/remediation-plan.json` (machine-readable)
4. Update .msg/meta.json with `remediation_plan` summary


@@ -0,0 +1,82 @@
---
role: scanner
prefix: TDSCAN
inner_loop: false
message_types: [state_update]
---
# Tech Debt Scanner
Multi-dimension tech debt scanner. Scan codebase across 5 dimensions (code, architecture, testing, dependency, documentation), produce structured debt inventory with severity rankings.
## Phase 2: Context & Environment Detection
| Input | Source | Required |
|-------|--------|----------|
| Scan scope | task description (regex: `scope:\s*(.+)`) | No (default: `**/*`) |
| Session path | task description (regex: `session:\s*(.+)`) | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
1. Extract session path and scan scope from task description
2. Load debug specs: Run `ccw spec load --category debug` for known issues, workarounds, and root-cause notes
3. Read .msg/meta.json for team context
4. Detect project type and framework:
| Signal File | Project Type |
|-------------|-------------|
| package.json + React/Vue/Angular | Frontend Node |
| package.json + Express/Fastify/NestJS | Backend Node |
| pyproject.toml / requirements.txt | Python |
| go.mod | Go |
| No detection | Generic |
5. Determine scan dimensions (default: code, architecture, testing, dependency, documentation)
6. Detect perspectives from task description:
| Condition | Perspective |
|-----------|-------------|
| `security\|auth\|inject\|xss` | security |
| `performance\|speed\|optimize` | performance |
| `quality\|clean\|maintain\|debt` | code-quality |
| `architect\|pattern\|structure` | architecture |
| Default | code-quality + architecture |
7. Assess complexity:
| Score | Complexity | Strategy |
|-------|------------|----------|
| >= 4 | High | Triple Fan-out: CLI explore + CLI 5 dimensions + multi-perspective Gemini |
| 2-3 | Medium | Dual Fan-out: CLI explore + CLI 3 dimensions |
| 0-1 | Low | Inline: ACE search + Grep |
## Phase 3: Multi-Dimension Scan
**Low Complexity** (inline):
- Use `mcp__ace-tool__search_context` for code smells, TODO/FIXME, deprecated APIs, complex functions, dead code, missing tests
- Classify findings into dimensions
**Medium/High Complexity** (Fan-out):
- Fan-out A: CLI exploration (structure, patterns, dependencies angles) via `ccw cli --tool gemini --mode analysis`
- Fan-out B: CLI dimension analysis (parallel gemini per dimension -- code, architecture, testing, dependency, documentation)
- Fan-out C (High only): Multi-perspective Gemini analysis (security, performance, code-quality, architecture)
- Fan-in: Merge results, cross-deduplicate by file:line, boost severity for multi-source findings
**Standardize each finding**:
| Field | Description |
|-------|-------------|
| `id` | `TD-NNN` (sequential) |
| `dimension` | code, architecture, testing, dependency, documentation |
| `severity` | critical, high, medium, low |
| `file` | File path |
| `line` | Line number |
| `description` | Issue description |
| `suggestion` | Fix suggestion |
| `estimated_effort` | small, medium, large, unknown |
## Phase 4: Aggregate & Save
1. Deduplicate findings across Fan-out layers (file:line key), merge cross-references
2. Sort by severity (cross-referenced items boosted)
3. Write `<session>/scan/debt-inventory.json` with scan_date, dimensions, total_items, by_dimension, by_severity, items
4. Update .msg/meta.json with `debt_inventory` array and `debt_score_before` count
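The fan-in dedup and severity boost (steps 1-2) can be sketched as follows. Assumption: "boosted" is taken as a one-level severity bump for multi-source findings, since the text above does not quantify it:

```javascript
// Merge findings across fan-out layers, dedup by file:line, and bump
// severity one level when the same spot is reported by multiple sources.
const SEVERITIES = ["low", "medium", "high", "critical"];

function mergeFindings(layers) {
  const merged = new Map();
  for (const findings of layers) {
    for (const f of findings) {
      const key = `${f.file}:${f.line}`;
      const prev = merged.get(key);
      if (!prev) { merged.set(key, { ...f, sources: 1 }); continue; }
      prev.sources += 1;
      const idx = Math.max(SEVERITIES.indexOf(prev.severity), SEVERITIES.indexOf(f.severity));
      prev.severity = SEVERITIES[Math.min(idx + 1, SEVERITIES.length - 1)]; // cross-source boost
    }
  }
  // Sort highest severity first (boosted items naturally rise).
  return [...merged.values()].sort(
    (a, b) => SEVERITIES.indexOf(b.severity) - SEVERITIES.indexOf(a.severity));
}
```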


@@ -0,0 +1,75 @@
---
role: validator
prefix: TDVAL
inner_loop: false
message_types: [state_update]
---
# Tech Debt Validator
Cleanup result validator. Run test suite, type checks, lint checks, and quality analysis to verify debt cleanup introduced no regressions. Compare before/after debt scores, produce validation-report.json.
## Phase 2: Load Context
| Input | Source | Required |
|-------|--------|----------|
| Session path | task description (regex: `session:\s*(.+)`) | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Fix log | <session>/fixes/fix-log.json | No |
1. Extract session path from task description
2. Read .msg/meta.json for: worktree.path, debt_inventory, fix_results, debt_score_before
3. Determine command prefix: `cd "<worktree-path>" && ` if worktree exists
4. Read fix-log.json for modified files list
5. Detect available validation tools in worktree:
| Signal | Tool | Method |
|--------|------|--------|
| package.json + npm | npm test | Test suite |
| pytest available | python -m pytest | Test suite |
| npx tsc available | npx tsc --noEmit | Type check |
| npx eslint available | npx eslint | Lint check |
## Phase 3: Run Validation Checks
Execute 4-layer validation (all commands in worktree):
**1. Test Suite**:
- Run `npm test` or `python -m pytest` in worktree
- PASS if no FAIL/error/failed keywords; FAIL with regression count otherwise
- Skip with "no-tests" if no test runner available
**2. Type Check**:
- Run `npx tsc --noEmit` in worktree
- Count `error TS` occurrences for error count
**3. Lint Check**:
- Run `npx eslint --no-error-on-unmatched-pattern <modified-files>` in worktree
- Count error occurrences
**4. Quality Analysis** (optional, when > 5 modified files):
- Use gemini CLI to compare code quality before/after
- Assess complexity, duplication, naming quality improvements
**Debt Score Calculation**:
- debt_score_after = debt items NOT in modified files (remaining unfixed items)
- improvement_percentage = ((before - after) / before) * 100
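The score comparison can be sketched as (illustrative helper, assuming inventory items carry a `file` field matching the fix log's modified-files paths):

```javascript
// debt_score_after counts items whose file was NOT touched by any fix;
// improvement is the relative reduction in percent.
function debtScores(inventory, modifiedFiles) {
  const touched = new Set(modifiedFiles);
  const before = inventory.length;
  const after = inventory.filter((item) => !touched.has(item.file)).length;
  const improvement = before === 0 ? 0 : ((before - after) / before) * 100;
  return {
    debt_score_before: before,
    debt_score_after: after,
    improvement_percentage: improvement,
  };
}
```

Note this is a proxy metric: touching a file counts all its debt items as addressed, which the quality-analysis layer is meant to sanity-check.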
**Auto-fix attempt** (when total_regressions <= 3):
- Use CLI tool to fix regressions in worktree:
```
Bash(`cd "${worktreePath}" && ccw cli -p "PURPOSE: Fix regressions found in validation
TASK: ${regressionDetails}
MODE: write
CONTEXT: @${modifiedFiles.join(' @')}
EXPECTED: Fixed regressions
CONSTRAINTS: Fix only regressions | Preserve debt cleanup changes | No suppressions" --tool gemini --mode write`)
```
- Re-run validation checks after fix attempt
## Phase 4: Compare & Report
1. Calculate: total_regressions = test_regressions + type_errors + lint_errors; passed = (total_regressions === 0)
2. Write `<session>/validation/validation-report.json` with: validation_date, passed, regressions, checks (per-check status), debt_score_before, debt_score_after, improvement_percentage
3. Update .msg/meta.json with `validation_results` and `debt_score_after`
4. Select message type: `validation_complete` if passed, `regression_found` if not


@@ -1,196 +0,0 @@
# Team Tech Debt -- CSV Schema
## Master CSV: tasks.csv
### Column Definitions
#### Input Columns (Set by Decomposer)
| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier (TDPREFIX-NNN) | `"TDSCAN-001"` |
| `title` | string | Yes | Short task title | `"Multi-dimension debt scan"` |
| `description` | string | Yes | Detailed task description (self-contained) | `"Scan codebase across 5 dimensions..."` |
| `role` | enum | Yes | Worker role: `scanner`, `assessor`, `planner`, `executor`, `validator` | `"scanner"` |
| `debt_dimension` | string | Yes | Target dimensions: `all`, or specific dimension(s) | `"all"` |
| `pipeline_mode` | enum | Yes | Pipeline mode: `scan`, `remediate`, `targeted` | `"remediate"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"TDSCAN-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"TDSCAN-001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |
#### Computed Columns (Set by Wave Engine)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from pipeline position) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[TDSCAN-001] Found 42 debt items..."` |
#### Output Columns (Set by Agent)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` -> `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Found 42 debt items: 5 critical, 12 high..."` |
| `debt_items_count` | integer | Number of debt items processed | `42` |
| `artifacts_produced` | string | Semicolon-separated artifact paths | `"scan/debt-inventory.json"` |
| `error` | string | Error message if failed | `""` |
---
### exec_mode Values
| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Plan approval, GC loop management |
Interactive tasks appear in master CSV for dependency tracking but are NOT included in wave-{N}.csv files.
---
### Role Registry
| Role | Prefix | Responsibility | inner_loop |
|------|--------|----------------|------------|
| scanner | TDSCAN | Multi-dimension debt scanning | false |
| assessor | TDEVAL | Quantitative severity assessment | false |
| planner | TDPLAN | Phased remediation planning | false |
| executor | TDFIX | Worktree-based debt cleanup | true |
| validator | TDVAL | 4-layer validation | false |
---
### Debt Dimensions
| Dimension | Description | Tools/Methods |
|-----------|-------------|---------------|
| code | Code smells, complexity, duplication | Static analysis, complexity metrics |
| architecture | Coupling, circular deps, layering violations | Dependency graph, coupling analysis |
| testing | Missing tests, low coverage, test quality | Coverage analysis, test quality |
| dependency | Outdated packages, vulnerabilities | Outdated check, vulnerability scan |
| documentation | Missing docs, stale API docs | Doc coverage, API doc check |
---
### Example Data
```csv
id,title,description,role,debt_dimension,pipeline_mode,deps,context_from,exec_mode,wave,status,findings,debt_items_count,artifacts_produced,error
"TDSCAN-001","Multi-dimension debt scan","Scan codebase across code, architecture, testing, dependency, and documentation dimensions. Produce structured debt inventory with severity rankings.\nSession: .workflow/.csv-wave/td-auth-20260308\nScope: src/**","scanner","all","remediate","","","csv-wave","1","pending","","0","",""
"TDEVAL-001","Severity assessment","Evaluate each debt item: impact score (1-5) x cost score (1-5). Classify into priority quadrants: quick-win, strategic, backlog, defer.\nSession: .workflow/.csv-wave/td-auth-20260308\nUpstream: TDSCAN-001 debt inventory","assessor","all","remediate","TDSCAN-001","TDSCAN-001","csv-wave","2","pending","","0","",""
"TDPLAN-001","Remediation planning","Create 3-phase remediation plan: Phase 1 quick-wins, Phase 2 systematic, Phase 3 prevention.\nSession: .workflow/.csv-wave/td-auth-20260308\nUpstream: TDEVAL-001 priority matrix","planner","all","remediate","TDEVAL-001","TDEVAL-001","csv-wave","3","pending","","0","",""
"PLAN-APPROVE","Plan approval gate","Review remediation plan and approve for execution","","all","remediate","TDPLAN-001","TDPLAN-001","interactive","3","pending","","0","",""
"TDFIX-001","Debt cleanup execution","Apply remediation plan actions in worktree: refactor, update deps, add tests, add docs.\nSession: .workflow/.csv-wave/td-auth-20260308\nWorktree: .worktrees/td-auth-20260308","executor","all","remediate","PLAN-APPROVE","TDPLAN-001","csv-wave","4","pending","","0","",""
"TDVAL-001","Cleanup validation","Run 4-layer validation: tests, type check, lint, quality analysis. Compare before/after debt scores.\nSession: .workflow/.csv-wave/td-auth-20260308\nWorktree: .worktrees/td-auth-20260308","validator","all","remediate","TDFIX-001","TDFIX-001","csv-wave","5","pending","","0","",""
```
---
### Column Lifecycle
```
Decomposer (Phase 1) Wave Engine (Phase 2) Agent (Execution)
--------------------- -------------------- -----------------
id ----------> id ----------> id
title ----------> title ----------> (reads)
description ----------> description ----------> (reads)
role ----------> role ----------> (reads)
debt_dimension -------> debt_dimension -------> (reads)
pipeline_mode --------> pipeline_mode --------> (reads)
deps ----------> deps ----------> (reads)
context_from----------> context_from----------> (reads)
exec_mode ----------> exec_mode ----------> (reads)
wave ----------> (reads)
prev_context ----------> (reads)
status
findings
debt_items_count
artifacts_produced
error
```
---
## Output Schema (JSON)
Agent output via `report_agent_job_result` (csv-wave tasks):
```json
{
"id": "TDSCAN-001",
"status": "completed",
"findings": "Scanned 5 dimensions. Found 42 debt items: 5 critical, 12 high, 15 medium, 10 low. Top issues: complex auth logic (code), circular deps in services (architecture), missing integration tests (testing).",
"debt_items_count": "42",
"artifacts_produced": "scan/debt-inventory.json",
"error": ""
}
```
Interactive tasks output via structured text or JSON written to `interactive/{id}-result.json`.
---
## Discovery Types
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `debt_item_found` | `data.file+data.line` | `{id, dimension, severity, file, line, description, suggestion, estimated_effort}` | Tech debt item identified |
| `pattern_found` | `data.pattern_name+data.location` | `{pattern_name, location, description}` | Anti-pattern found |
| `fix_applied` | `data.file+data.change` | `{file, change, lines_modified, debt_id}` | Fix applied |
| `regression_found` | `data.file+data.test` | `{file, test, description, severity}` | Regression in validation |
| `dependency_issue` | `data.package+data.issue` | `{package, current, latest, issue, severity}` | Dependency problem |
| `metric_recorded` | `data.metric` | `{metric, value, dimension, file}` | Quality metric |
### Discovery NDJSON Format
```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"TDSCAN-001","type":"debt_item_found","data":{"id":"TD-001","dimension":"code","severity":"high","file":"src/auth/jwt.ts","line":42,"description":"Cyclomatic complexity 18 exceeds threshold 10","suggestion":"Extract token validation logic","estimated_effort":"medium"}}
{"ts":"2026-03-08T10:05:00Z","worker":"TDSCAN-001","type":"dependency_issue","data":{"package":"express","current":"4.17.1","latest":"4.19.2","issue":"Known security vulnerability CVE-2024-XXXX","severity":"critical"}}
{"ts":"2026-03-08T10:30:00Z","worker":"TDFIX-001","type":"fix_applied","data":{"file":"src/auth/jwt.ts","change":"Extracted validateToken helper","lines_modified":25,"debt_id":"TD-001"}}
```
> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
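A sketch of reading discoveries with these per-type dedup keys (illustrative; assumes one JSON object per NDJSON line, keyed as in the table above):

```javascript
// Per-type dedup key extractors, matching the Discovery Types table.
const DEDUP_KEYS = {
  debt_item_found: (d) => `${d.file}:${d.line}`,
  pattern_found: (d) => `${d.pattern_name}:${d.location}`,
  fix_applied: (d) => `${d.file}:${d.change}`,
  regression_found: (d) => `${d.file}:${d.test}`,
  dependency_issue: (d) => `${d.package}:${d.issue}`,
  metric_recorded: (d) => d.metric,
};

// Parse NDJSON lines and keep the first occurrence per (type, key).
function dedupDiscoveries(lines) {
  const seen = new Set();
  const out = [];
  for (const line of lines) {
    const entry = JSON.parse(line);
    const key = `${entry.type}|${DEDUP_KEYS[entry.type](entry.data)}`;
    if (seen.has(key)) continue;
    seen.add(key);
    out.push(entry);
  }
  return out;
}
```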
---
## Cross-Mechanism Context Flow
| Source | Target | Mechanism |
|--------|--------|-----------|
| Scanner findings | Assessor | prev_context from TDSCAN + scan/debt-inventory.json |
| Assessor matrix | Planner | prev_context from TDEVAL + assessment/priority-matrix.json |
| Planner plan | Plan Approver | Interactive spawn reads plan/remediation-plan.md |
| Plan approval | Executor | Interactive result in interactive/PLAN-APPROVE-result.json |
| Executor fixes | Validator | prev_context from TDFIX + fixes/fix-log.json |
| Validator results | GC Loop | Interactive read of validation/validation-report.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |
---
## GC Loop Schema
| Field | Type | Description |
|-------|------|-------------|
| `gc_rounds` | integer | Current GC round (0-based) |
| `max_gc_rounds` | integer | Maximum rounds (3) |
| `fix_task_id` | string | Current fix task ID (TDFIX-fix-N) |
| `val_task_id` | string | Current validation task ID (TDVAL-recheck-N) |
| `regressions` | array | List of regression descriptions |
---
## Validation Rules
| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Role valid | role in {scanner, assessor, planner, executor, validator} | "Invalid role: {role}" |
| Pipeline mode valid | pipeline_mode in {scan, remediate, targeted} | "Invalid pipeline_mode: {mode}" |
| Description non-empty | Every task has description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| GC round limit | gc_rounds <= 3 | "GC round limit exceeded" |
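The circular-dependency rule can be checked with Kahn's algorithm (illustrative sketch over the tasks map; any node not reached by the topological sort sits on or behind a cycle):

```javascript
// Returns null when the graph is acyclic, otherwise the task IDs still
// holding a nonzero in-degree after the sort (the cycle-involved set).
function findCycle(tasks) {
  const indegree = {};
  const dependents = {};
  for (const [id, t] of Object.entries(tasks)) {
    indegree[id] = (t.deps || []).length;
    for (const dep of t.deps || []) (dependents[dep] ||= []).push(id);
  }
  const queue = Object.keys(indegree).filter((id) => indegree[id] === 0);
  let visited = 0;
  while (queue.length) {
    const id = queue.shift();
    visited += 1;
    for (const next of dependents[id] || []) {
      if (--indegree[next] === 0) queue.push(next);
    }
  }
  return Object.keys(tasks).length === visited
    ? null
    : Object.keys(indegree).filter((id) => indegree[id] > 0);
}
```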


@@ -0,0 +1,47 @@
# Pipeline Definitions
Tech debt pipeline modes and task registry.
## Pipeline Modes
| Mode | Description | Task Chain |
|------|-------------|------------|
| scan | Scan and assess only, no fixes | TDSCAN-001 -> TDEVAL-001 |
| remediate | Full pipeline: scan -> assess -> plan -> fix -> validate | TDSCAN-001 -> TDEVAL-001 -> TDPLAN-001 -> TDFIX-001 -> TDVAL-001 |
| targeted | Skip scan/assess, direct fix path | TDPLAN-001 -> TDFIX-001 -> TDVAL-001 |
## Task Registry
| Task ID | Role | Prefix | blockedBy | Description |
|---------|------|--------|-----------|-------------|
| TDSCAN-001 | scanner | TDSCAN | [] | Fan-out multi-dimension codebase scan (code, architecture, testing, dependency, documentation) |
| TDEVAL-001 | assessor | TDEVAL | [TDSCAN-001] | Severity assessment with priority quadrant matrix |
| TDPLAN-001 | planner | TDPLAN | [TDEVAL-001] | 3-phase remediation plan with effort estimates |
| TDFIX-001 | executor | TDFIX | [TDPLAN-001] | Worktree-based incremental fixes (inner_loop: true) |
| TDVAL-001 | validator | TDVAL | [TDFIX-001] | 4-layer validation: syntax, tests, integration, regression |
## Checkpoints
| Checkpoint | Trigger | Condition | Action |
|------------|---------|-----------|--------|
| Plan Approval Gate | TDPLAN-001 completes | Always | request_user_input: Approve / Revise / Abort |
| Worktree Creation | Plan approved | Before TDFIX | git worktree add .worktrees/TD-<slug>-<date> |
| Fix-Verify GC Loop | TDVAL-* completes | Regressions found | Create TDFIX-fix-<round> + TDVAL-recheck-<round> (max 3 rounds) |
## GC Loop Behavior
| Condition | Action |
|-----------|--------|
| No regressions | Pipeline complete |
| Regressions AND gc_rounds < 3 | Create fix-verify tasks, increment gc_rounds |
| Regressions AND gc_rounds >= 3 | Accept current state, handleComplete |
## Output Artifacts
| Task | Output Path |
|------|-------------|
| TDSCAN-001 | <session>/scan/scan-report.json |
| TDEVAL-001 | <session>/assessment/debt-assessment.json |
| TDPLAN-001 | <session>/plan/remediation-plan.md |
| TDFIX-001 | <session>/fixes/ (worktree) |
| TDVAL-001 | <session>/validation/validation-report.md |

## Team Configuration
{
"team_name": "tech-debt",
"version": "1.0.0",
"description": "技术债务识别与清理团队 - 融合\"债务扫描\"、\"量化评估\"、\"治理规划\"、\"清理执行\"、\"验证回归\"五大能力域,形成扫描→评估→规划→清理→验证的闭环",
"skill_entry": "team-tech-debt",
"invocation": "Skill(skill=\"team-tech-debt\", args=\"--role=coordinator ...\")",
"roles": {
"coordinator": {
"name": "coordinator",
"responsibility": "Orchestration",
"task_prefix": null,
"description": "技术债务治理协调者。编排 pipeline需求澄清 → 模式选择 → 团队创建 → 任务分发 → 监控协调 → 质量门控 → 结果汇报",
"message_types_sent": ["mode_selected", "quality_gate", "task_unblocked", "error", "shutdown"],
"message_types_received": ["scan_complete", "assessment_complete", "plan_ready", "fix_complete", "validation_complete", "regression_found", "error"],
"commands": ["dispatch", "monitor"]
},
"scanner": {
"name": "scanner",
"responsibility": "Orchestration (多维度债务扫描)",
"task_prefix": "TDSCAN",
"description": "技术债务扫描员。多维度扫描代码库:代码债务、架构债务、测试债务、依赖债务、文档债务,生成债务清单",
"message_types_sent": ["scan_complete", "debt_items_found", "error"],
"message_types_received": [],
"commands": ["scan-debt"],
"cli_tools": ["gemini"]
},
"assessor": {
"name": "assessor",
"responsibility": "Read-only analysis (量化评估)",
"task_prefix": "TDEVAL",
"description": "技术债务评估师。量化评估债务项的影响和修复成本,按优先级矩阵排序,生成评估报告",
"message_types_sent": ["assessment_complete", "error"],
"message_types_received": [],
"commands": ["evaluate"],
"cli_tools": ["gemini"]
},
"planner": {
"name": "planner",
"responsibility": "Orchestration (治理规划)",
"task_prefix": "TDPLAN",
"description": "技术债务治理规划师。制定分阶段治理方案:短期速赢、中期系统性治理、长期预防机制",
"message_types_sent": ["plan_ready", "plan_revision", "error"],
"message_types_received": [],
"commands": ["create-plan"],
"cli_tools": ["gemini"]
},
"executor": {
"name": "executor",
"responsibility": "Code generation (债务清理执行)",
"task_prefix": "TDFIX",
"description": "技术债务清理执行者。按优先级执行重构、依赖更新、代码清理、测试补充等治理动作",
"message_types_sent": ["fix_complete", "fix_progress", "error"],
"message_types_received": [],
"commands": ["remediate"],
"cli_tools": [{"tool": "gemini", "mode": "write"}]
},
"validator": {
"name": "validator",
"responsibility": "Validation (清理验证)",
"task_prefix": "TDVAL",
"description": "技术债务清理验证者。验证清理后无回归、质量指标提升、债务确实消除",
"message_types_sent": ["validation_complete", "regression_found", "error"],
"message_types_received": [],
"commands": ["verify"],
"cli_tools": [{"tool": "gemini", "mode": "write"}]
}
},
"pipeline_modes": {
"scan": {
"description": "仅扫描评估,不执行修复(审计模式)",
"stages": ["TDSCAN", "TDEVAL"],
"entry_role": "scanner"
},
"remediate": {
"description": "完整闭环:扫描 → 评估 → 规划 → 修复 → 验证",
"stages": ["TDSCAN", "TDEVAL", "TDPLAN", "TDFIX", "TDVAL"],
"entry_role": "scanner"
},
"targeted": {
"description": "定向修复:用户已知债务项,直接规划执行",
"stages": ["TDPLAN", "TDFIX", "TDVAL"],
"entry_role": "planner"
}
},
"fix_verify_loop": {
"max_iterations": 3,
"trigger": "validation fails or regression found",
"participants": ["executor", "validator"],
"flow": "TDFIX-fix → TDVAL-verify → evaluate"
},
"shared_memory": {
"file": ".msg/meta.json",
"fields": {
"debt_inventory": { "owner": "scanner", "type": "array" },
"assessment_matrix": { "owner": "assessor", "type": "object" },
"remediation_plan": { "owner": "planner", "type": "object" },
"fix_results": { "owner": "executor", "type": "object" },
"validation_results": { "owner": "validator", "type": "object" },
"debt_score_before": { "owner": "assessor", "type": "number" },
"debt_score_after": { "owner": "validator", "type": "number" }
}
},
"collaboration_patterns": [
"CP-1: Linear Pipeline (scan/remediate/targeted mode)",
"CP-2: Review-Fix Cycle (Executor ↔ Validator loop)",
"CP-3: Fan-out (Scanner multi-dimension scan)",
"CP-5: Escalation (Worker → Coordinator → User)",
"CP-6: Incremental Delivery (batch remediation)",
"CP-10: Post-Mortem (debt reduction report)"
],
"debt_dimensions": {
"code": { "name": "代码债务", "tools": ["static-analysis", "complexity-metrics"] },
"architecture": { "name": "架构债务", "tools": ["dependency-graph", "coupling-analysis"] },
"testing": { "name": "测试债务", "tools": ["coverage-analysis", "test-quality"] },
"dependency": { "name": "依赖债务", "tools": ["outdated-check", "vulnerability-scan"] },
"documentation": { "name": "文档债务", "tools": ["doc-coverage", "api-doc-check"] }
},
"session_directory": {
"pattern": ".workflow/.team/TD-{slug}-{date}",
"subdirectories": ["scan", "assessment", "plan", "fixes", "validation"]
}
}
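One useful invariant in this config is that every stage prefix listed under pipeline_modes matches some role's task_prefix. A quick self-contained check; the config is inlined here as a trimmed dict whose field names match the file above:

```python
# Trimmed copy of the config fields relevant to the stage/role invariant.
CONFIG = {
    "roles": {
        "scanner": {"task_prefix": "TDSCAN"},
        "assessor": {"task_prefix": "TDEVAL"},
        "planner": {"task_prefix": "TDPLAN"},
        "executor": {"task_prefix": "TDFIX"},
        "validator": {"task_prefix": "TDVAL"},
    },
    "pipeline_modes": {
        "scan": {"stages": ["TDSCAN", "TDEVAL"]},
        "remediate": {"stages": ["TDSCAN", "TDEVAL", "TDPLAN", "TDFIX", "TDVAL"]},
        "targeted": {"stages": ["TDPLAN", "TDFIX", "TDVAL"]},
    },
}

def undefined_stages(config: dict) -> list[str]:
    """Stage prefixes referenced by a mode but owned by no role."""
    prefixes = {r["task_prefix"] for r in config["roles"].values()}
    return [s for m in config["pipeline_modes"].values()
            for s in m["stages"] if s not in prefixes]
```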