feat: migrate all codex team skills from spawn_agents_on_csv to spawn_agent + wait_agent architecture

- Delete 21 old team skill directories using CSV-wave pipeline pattern (~100+ files)
- Delete old team-lifecycle (v3) and team-planex-v2
- Create generic team-worker.toml and team-supervisor.toml (replacing tlv4-specific TOMLs)
- Convert 19 team skills from Claude Code format (Agent/SendMessage/TaskCreate)
  to Codex format (spawn_agent/wait_agent/tasks.json/request_user_input)
- Update team-lifecycle-v4 to use generic agent types (team_worker/team_supervisor)
- Convert all coordinator role files: dispatch.md, monitor.md, role.md
- Convert all worker role files: remove run_in_background, fix Bash syntax
- Convert all specs/pipelines.md references
- Final state: 20 team skills, 217 .md files, zero Claude Code API residuals

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: catlog22
Date: 2026-03-24 16:54:48 +08:00
Parent: 54283e5dbb
Commit: 1e560ab8e8
334 changed files with 28996 additions and 35516 deletions


@@ -0,0 +1,69 @@
---
role: assessor
prefix: TDEVAL
inner_loop: false
message_types: [state_update]
---
# Tech Debt Assessor
Quantitative evaluator for tech debt items. Score each debt item on business impact (1-5) and fix cost (1-5), classify into priority quadrants, produce priority-matrix.json.
## Phase 2: Load Debt Inventory
| Input | Source | Required |
|-------|--------|----------|
| Session path | task description (regex: `session:\s*(.+)`) | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Debt inventory | meta.json:debt_inventory OR <session>/scan/debt-inventory.json | Yes |
1. Extract session path from task description
2. Read .msg/meta.json for team context
3. Load debt_inventory from shared memory or fallback to debt-inventory.json file
4. If debt_inventory is empty -> report empty assessment and exit
## Phase 3: Evaluate Each Item
**Strategy selection**:
| Item Count | Strategy |
|------------|----------|
| <= 10 | Heuristic: severity-based impact + effort-based cost |
| 11-50 | CLI batch: single gemini analysis call |
| > 50 | CLI chunked: batches of 25 items |
**Impact Score Mapping** (heuristic):
| Severity | Impact Score |
|----------|-------------|
| critical | 5 |
| high | 4 |
| medium | 3 |
| low | 1 |
**Cost Score Mapping** (heuristic):
| Estimated Effort | Cost Score |
|------------------|------------|
| small | 1 |
| medium | 3 |
| large | 5 |
| unknown | 3 |
**Priority Quadrant Classification**:
| Impact | Cost | Quadrant |
|--------|------|----------|
| >= 4 | <= 2 | quick-win |
| >= 4 | >= 3 | strategic |
| <= 3 | <= 2 | backlog |
| <= 3 | >= 3 | defer |
For CLI mode, prompt gemini with full debt summary requesting JSON array of `{id, impact_score, cost_score, risk_if_unfixed, priority_quadrant}`. Unevaluated items fall back to heuristic scoring.
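The heuristic path above can be sketched as follows; function and variable names are illustrative, not part of the skill contract:

```javascript
// Heuristic scoring sketch for small inventories (<= 10 items).
// Mappings mirror the Impact/Cost tables; unknown values fall back to 3.
const IMPACT_BY_SEVERITY = { critical: 5, high: 4, medium: 3, low: 1 };
const COST_BY_EFFORT = { small: 1, medium: 3, large: 5, unknown: 3 };

function classifyQuadrant(impact, cost) {
  // Rows of the Priority Quadrant Classification table
  if (impact >= 4) return cost <= 2 ? 'quick-win' : 'strategic';
  return cost <= 2 ? 'backlog' : 'defer';
}

function scoreItem(item) {
  const impact_score = IMPACT_BY_SEVERITY[item.severity] ?? 3;
  const cost_score = COST_BY_EFFORT[item.estimated_effort] ?? 3;
  return {
    id: item.id,
    impact_score,
    cost_score,
    priority_quadrant: classifyQuadrant(impact_score, cost_score)
  };
}
```

Items the CLI leaves unevaluated can be routed through the same fallback.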
## Phase 4: Generate Priority Matrix
1. Build matrix structure: evaluation_date, total_items, by_quadrant (grouped), summary (counts per quadrant)
2. Sort within each quadrant by impact_score descending
3. Write `<session>/assessment/priority-matrix.json`
4. Update .msg/meta.json with `priority_matrix` summary and evaluated `debt_inventory`
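A minimal sketch of the matrix assembly (steps 1-2); `scoredItems` is assumed to already carry `priority_quadrant` and `impact_score` from Phase 3:

```javascript
function buildPriorityMatrix(scoredItems) {
  // Group items by quadrant
  const by_quadrant = {};
  for (const item of scoredItems) {
    (by_quadrant[item.priority_quadrant] ??= []).push(item);
  }
  // Sort within each quadrant by impact_score descending
  for (const items of Object.values(by_quadrant)) {
    items.sort((a, b) => b.impact_score - a.impact_score);
  }
  // Summary: counts per quadrant
  const summary = Object.fromEntries(
    Object.entries(by_quadrant).map(([q, items]) => [q, items.length]));
  return {
    evaluation_date: new Date().toISOString(),
    total_items: scoredItems.length,
    by_quadrant,
    summary
  };
}
```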


@@ -0,0 +1,47 @@
# Analyze Task
Parse user task -> detect tech debt signals -> assess complexity -> determine pipeline mode and roles.
**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.
## Signal Detection
| Keywords | Signal | Mode Hint |
|----------|--------|-----------|
| 扫描, scan, 审计, audit | debt-scan | scan |
| 评估, assess, quantify | debt-assess | scan |
| 规划, plan, roadmap | debt-plan | targeted |
| 修复, fix, remediate, clean | debt-fix | remediate |
| 验证, validate, verify | debt-validate | remediate |
| 定向, targeted, specific | debt-targeted | targeted |
## Complexity Scoring
| Factor | Points |
|--------|--------|
| Full codebase scope | +2 |
| Multiple debt dimensions | +1 per dimension (max 3) |
| Large codebase (implied) | +1 |
| Targeted specific items | -1 |
Results: 1-3 Low (scan mode), 4-6 Medium (remediate), 7+ High (remediate + full pipeline)
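A sketch of the scoring rule, assuming "+1 per dimension (max 3)" means the dimension bonus is capped at 3 points:

```javascript
function scoreComplexity({ fullCodebase, dimensions, largeCodebase, targeted }) {
  let score = 0;
  if (fullCodebase) score += 2;          // Full codebase scope
  score += Math.min(dimensions, 3);       // +1 per dimension, capped at 3
  if (largeCodebase) score += 1;          // Large codebase (implied)
  if (targeted) score -= 1;               // Targeted specific items
  const level = score <= 3 ? 'Low' : score <= 6 ? 'Medium' : 'High';
  return { score, level };
}
```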
## Pipeline Mode Determination
| Score + Signals | Mode |
|----------------|------|
| scan/audit keywords | scan |
| targeted/specific keywords | targeted |
| Default | remediate |
## Output
Write scope context to coordinator memory:
```json
{
"pipeline_mode": "<scan|remediate|targeted>",
"scope": "<detected-scope>",
"focus_dimensions": ["code", "architecture", "testing", "dependency", "documentation"],
"complexity": { "score": 0, "level": "Low|Medium|High" }
}
```


@@ -0,0 +1,163 @@
# Command: dispatch
> Task-chain creation and dependency management. Build the tech-debt governance task chain for the selected pipeline mode and write it to tasks.json.
## When to Use
- Phase 3 of Coordinator
- Pipeline mode is determined and the task chain needs to be created
- Team is created; workers are awaiting spawn
**Trigger conditions**:
- After Coordinator Phase 2 completes
- A mode switch requires rebuilding the task chain
- The Fix-Verify loop needs fix tasks created
## Strategy
### Delegation Mode
**Mode**: Direct. The coordinator operates on tasks.json itself.
### Decision Logic
```javascript
// Select the pipeline stages for the given pipelineMode
function buildPipeline(pipelineMode, sessionFolder, taskDescription) {
  const pipelines = {
    'scan': [
      { prefix: 'TDSCAN', owner: 'scanner', desc: 'Multi-dimension tech debt scan', deps: [] },
      { prefix: 'TDEVAL', owner: 'assessor', desc: 'Quantitative assessment and prioritization', deps: ['TDSCAN'] }
    ],
    'remediate': [
      { prefix: 'TDSCAN', owner: 'scanner', desc: 'Multi-dimension tech debt scan', deps: [] },
      { prefix: 'TDEVAL', owner: 'assessor', desc: 'Quantitative assessment and prioritization', deps: ['TDSCAN'] },
      { prefix: 'TDPLAN', owner: 'planner', desc: 'Phased remediation planning', deps: ['TDEVAL'] },
      { prefix: 'TDFIX', owner: 'executor', desc: 'Debt cleanup execution', deps: ['TDPLAN'] },
      { prefix: 'TDVAL', owner: 'validator', desc: 'Cleanup result validation', deps: ['TDFIX'] }
    ],
    'targeted': [
      { prefix: 'TDPLAN', owner: 'planner', desc: 'Targeted fix planning', deps: [] },
      { prefix: 'TDFIX', owner: 'executor', desc: 'Debt cleanup execution', deps: ['TDPLAN'] },
      { prefix: 'TDVAL', owner: 'validator', desc: 'Cleanup result validation', deps: ['TDFIX'] }
    ]
  }
  return pipelines[pipelineMode] || pipelines['scan']
}
```
## Execution Steps
### Step 1: Context Preparation
```javascript
const pipeline = buildPipeline(pipelineMode, sessionFolder, taskDescription)
```
### Step 2: Build Tasks JSON
```javascript
const tasks = {}
for (const stage of pipeline) {
  const taskId = `${stage.prefix}-001`
  // Build the task description (embed session and context info)
  const fullDesc = [
    stage.desc,
    `\nsession: ${sessionFolder}`,
    `\n\nGoal: ${taskDescription}`
  ].join('')
  // Build the dependency ID list
  const depIds = stage.deps.map(dep => `${dep}-001`)
  // Add the task to the tasks object
  tasks[taskId] = {
    title: stage.desc,
    description: fullDesc,
    role: stage.owner,
    prefix: stage.prefix,
    deps: depIds,
    status: "pending",
    findings: null,
    error: null
  }
}
// Merge into state and write tasks.json
state.tasks = { ...state.tasks, ...tasks }
// Write updated tasks.json
```
### Step 3: Result Processing
```javascript
// Verify the task chain
const chainValid = Object.keys(tasks).length === pipeline.length
if (!chainValid) {
  mcp__ccw-tools__team_msg({
    operation: "log", session_id: sessionId, from: "coordinator",
    type: "error"
  })
}
```
## Fix-Verify Loop Task Creation
When the validator reports regressions, the coordinator invokes this logic to append fix tasks to tasks.json:
```javascript
function createFixVerifyTasks(fixVerifyIteration, sessionFolder) {
  const fixId = `TDFIX-fix-${fixVerifyIteration}`
  const valId = `TDVAL-recheck-${fixVerifyIteration}`
  // Add the fix task to tasks.json
  state.tasks[fixId] = {
    title: `Fix regressions (Fix-Verify #${fixVerifyIteration})`,
    description: `Fix regressions found in validation\nsession: ${sessionFolder}\ntype: fix-verify`,
    role: "executor",
    prefix: "TDFIX",
    deps: [],
    status: "pending",
    findings: null,
    error: null
  }
  // Add the re-validation task to tasks.json
  state.tasks[valId] = {
    title: `Recheck (Fix-Verify #${fixVerifyIteration})`,
    description: `Re-validate fix results\nsession: ${sessionFolder}`,
    role: "validator",
    prefix: "TDVAL",
    deps: [fixId],
    status: "pending",
    findings: null,
    error: null
  }
  // Write updated tasks.json
}
```
## Output Format
```
## Task Chain Created
### Mode: [scan|remediate|targeted]
### Pipeline Stages: [count]
- [prefix]-001: [description] (owner: [role], deps: [deps])
### Verification: PASS/FAIL
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Task creation fails | Retry once, then report to user |
| Dependency cycle detected | Flatten dependencies, warn coordinator |
| Invalid pipelineMode | Default to 'scan' mode |
| Timeout (>5 min) | Report partial results, notify coordinator |
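For the dependency-cycle row, a DFS over the `deps` arrays is sufficient; this is a sketch (`findDependencyCycle` is illustrative, not a spec API):

```javascript
// Returns the cycle path if one exists, otherwise null.
function findDependencyCycle(tasks) {
  const visiting = new Set(), done = new Set();
  function visit(id, path) {
    if (done.has(id)) return null;
    if (visiting.has(id)) return [...path, id];   // back-edge -> cycle
    visiting.add(id);
    for (const dep of tasks[id]?.deps ?? []) {
      const cycle = visit(dep, [...path, id]);
      if (cycle) return cycle;
    }
    visiting.delete(id);
    done.add(id);
    return null;
  }
  for (const id of Object.keys(tasks)) {
    const cycle = visit(id, []);
    if (cycle) return cycle;
  }
  return null;
}
```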


@@ -0,0 +1,231 @@
# Monitor Pipeline
Synchronous pipeline coordination using spawn_agent + wait_agent.
## Constants
- WORKER_AGENT: team_worker
- ONE_STEP_PER_INVOCATION: false (synchronous wait loop)
- FAST_ADVANCE_AWARE: true
- MAX_GC_ROUNDS: 3
## Handler Router
| Source | Handler |
|--------|---------|
| "capability_gap" | handleAdapt |
| "check" or "status" | handleCheck |
| "resume" or "continue" | handleResume |
| All tasks completed | handleComplete |
| Default | handleSpawnNext |
## handleCheck
Read-only status report from tasks.json, then STOP.
1. Read tasks.json
2. Count tasks by status (pending, in_progress, completed, failed)
```
Pipeline Status (<mode>):
[DONE] TDSCAN-001 (scanner) -> scan complete
[DONE] TDEVAL-001 (assessor) -> assessment ready
[RUN] TDPLAN-001 (planner) -> planning...
[WAIT] TDFIX-001 (executor) -> blocked by TDPLAN-001
[WAIT] TDVAL-001 (validator) -> blocked by TDFIX-001
GC Rounds: 0/3
Session: <session-id>
Commands: 'resume' to advance | 'check' to refresh
```
Output status -- do NOT advance pipeline.
## handleResume
1. Read tasks.json, check active_agents
2. Tasks stuck in "in_progress" -> reset to "pending"
3. Tasks with completed deps but still "pending" -> include in spawn list
4. -> handleSpawnNext
## handleSpawnNext
Find ready tasks, spawn workers, wait for completion, process results.
1. Read tasks.json
2. Collect: completedTasks, inProgressTasks, readyTasks (pending + all deps completed)
3. No ready + work in progress -> report waiting, STOP
4. No ready + nothing in progress -> handleComplete
5. Has ready -> for each:
   a. Task's role is an inner-loop role with an active worker -> skip (the running worker picks it up)
b. Update task status in tasks.json -> in_progress
c. team_msg log -> task_unblocked
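Steps 1-2 above amount to a single pass over tasks.json; a sketch:

```javascript
// Partition tasks by status; a task is ready when it is pending
// and every dependency is completed.
function collectTasks(tasks) {
  const completed = [], inProgress = [], ready = [];
  for (const [id, t] of Object.entries(tasks)) {
    if (t.status === 'completed') completed.push(id);
    else if (t.status === 'in_progress') inProgress.push(id);
    else if (t.status === 'pending' &&
             (t.deps ?? []).every(d => tasks[d]?.status === 'completed')) {
      ready.push(id);
    }
  }
  return { completed, inProgress, ready };
}
```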
### Spawn Workers
For each ready task:
```javascript
// 1) Update status in tasks.json
state.tasks[taskId].status = 'in_progress'
// 2) Spawn worker
const agentId = spawn_agent({
agent_type: "team_worker",
items: [
{ type: "text", text: `## Role Assignment
role: ${task.role}
role_spec: ${skillRoot}/roles/${task.role}/role.md
session: ${sessionFolder}
session_id: ${sessionId}
team_name: tech-debt
requirement: ${task.description}
inner_loop: ${task.role === 'executor'}` },
{ type: "text", text: `Read role_spec file (${skillRoot}/roles/${task.role}/role.md) to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).` },
{ type: "text", text: `## Task Context
task_id: ${taskId}
title: ${task.title}
description: ${task.description}` },
{ type: "text", text: `## Upstream Context\n${prevContext}` }
]
})
// 3) Track agent
state.active_agents[taskId] = { agentId, role: task.role, started_at: now }
```
Stage-to-role mapping:
| Task Prefix | Role |
|-------------|------|
| TDSCAN | scanner |
| TDEVAL | assessor |
| TDPLAN | planner |
| TDFIX | executor |
| TDVAL | validator |
### Wait and Process Results
After spawning all ready tasks:
```javascript
// 4) Batch wait for all spawned workers
const agentIds = Object.values(state.active_agents).map(a => a.agentId)
wait_agent({ ids: agentIds, timeout_ms: 900000 })
// 5) Collect results (workers that failed or timed out are reconciled
//    later by handleResume, which resets stuck tasks to pending)
for (const [taskId, agent] of Object.entries(state.active_agents)) {
  state.tasks[taskId].status = 'completed'
  close_agent({ id: agent.agentId })
  delete state.active_agents[taskId]
}
```
### Checkpoint Processing
After task completion, check for checkpoints:
- **TDPLAN-001 completes** -> Plan Approval Gate:
```javascript
request_user_input({
questions: [{
question: "Remediation plan generated. Review and decide:",
header: "Plan Review",
multiSelect: false,
options: [
{ label: "Approve", description: "Proceed with fix execution" },
{ label: "Revise", description: "Re-run planner with feedback" },
{ label: "Abort", description: "Stop pipeline" }
]
}]
})
```
- Approve -> Worktree Creation -> continue handleSpawnNext loop
- Revise -> Add TDPLAN-revised task to tasks.json -> continue
- Abort -> Log shutdown -> handleComplete
- **Worktree Creation** (before TDFIX):
```
Bash("git worktree add .worktrees/TD-<slug>-<date> -b tech-debt/TD-<slug>-<date>")
```
Update .msg/meta.json with worktree info.
- **TDVAL-* completes** -> GC Loop Check:
Read validation results from .msg/meta.json
| Condition | Action |
|-----------|--------|
| No regressions | -> continue (pipeline complete) |
| Regressions AND gc_rounds < 3 | Add fix-verify tasks to tasks.json, increment gc_rounds |
| Regressions AND gc_rounds >= 3 | Accept current state -> handleComplete |
Fix-Verify Task Creation (add to tasks.json):
```json
{
"TDFIX-fix-<round>": {
"title": "Fix regressions (Fix-Verify #<round>)",
"description": "PURPOSE: Fix regressions | Session: <session>",
"role": "executor",
"prefix": "TDFIX",
"deps": [],
"status": "pending",
"findings": null,
"error": null
},
"TDVAL-recheck-<round>": {
"title": "Recheck after fix (Fix-Verify #<round>)",
"description": "Re-validate after fix",
"role": "validator",
"prefix": "TDVAL",
"deps": ["TDFIX-fix-<round>"],
"status": "pending",
"findings": null,
"error": null
}
}
```
### Persist and Loop
After processing all results:
1. Write updated tasks.json
2. Check if more tasks are now ready (deps newly resolved)
3. If yes -> loop back to step 1 of handleSpawnNext
4. If no more ready and all done -> handleComplete
5. If no more ready but some still blocked -> report status, STOP
## handleComplete
Pipeline done. Generate report and completion action.
1. Verify all tasks (including fix-verify tasks) have status "completed"
2. If any not completed -> handleSpawnNext
3. If all completed:
- Read final state from .msg/meta.json
- If worktree exists and validation passed: commit, push, gh pr create, cleanup worktree
- Compile summary: total tasks, completed, gc_rounds, debt_score_before, debt_score_after
- Transition to coordinator Phase 5
4. Execute completion action per tasks.json completion_action:
- interactive -> request_user_input (Archive/Keep/Export)
- auto_archive -> Archive & Clean (rm -rf session folder)
- auto_keep -> Keep Active (status=paused)
## handleAdapt
Capability gap reported mid-pipeline.
1. Parse gap description
2. Check if existing role covers it -> redirect
3. Role count < 5 -> generate dynamic role spec in <session>/role-specs/
4. Add new task to tasks.json, spawn worker via spawn_agent + wait_agent
5. Role count >= 5 -> merge or pause
## Fast-Advance Reconciliation
On every coordinator wake:
1. Read team_msg entries with type="fast_advance"
2. Sync active_agents with spawned successors
3. No duplicate spawns


@@ -0,0 +1,141 @@
# Coordinator Role
Tech-debt governance team coordinator. Orchestrates the pipeline: requirement clarification -> mode selection (scan/remediate/targeted) -> session creation -> task dispatch -> monitoring and coordination -> Fix-Verify loop -> debt-reduction report.
## Identity
- **Name**: coordinator | **Tag**: [coordinator]
- **Responsibility**: Parse requirements -> Create session -> Dispatch tasks -> Monitor progress -> Report results
## Boundaries
### MUST
- All output (team_msg, logs) must carry `[coordinator]` identifier
- Only responsible for: requirement clarification, mode selection, task creation/dispatch, progress monitoring, quality gates, result reporting
- Create tasks in tasks.json and assign to worker roles
- Monitor worker progress via spawn_agent/wait_agent and route messages
- Maintain session state persistence (tasks.json)
### MUST NOT
- Execute tech debt work directly (delegate to workers)
- Modify task outputs (workers own their deliverables)
- Call CLI tools for analysis, exploration, or code generation
- Modify source code or generate artifact files directly
- Bypass worker roles to complete delegated work
- Skip dependency validation when creating task chains
- Omit `[coordinator]` identifier in any output
## Command Execution Protocol
When coordinator needs to execute a command (analyze, dispatch, monitor):
1. Read `commands/<command>.md`
2. Follow the workflow defined in the command
3. Commands are inline execution guides, NOT separate agents
4. Execute synchronously, complete before proceeding
## Entry Router
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Status check | Arguments contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume (monitor.md) |
| Pipeline complete | All tasks have status "completed" | -> handleComplete (monitor.md) |
| Interrupted session | Active/paused session exists in .workflow/.team/TD-* | -> Phase 0 |
| New session | None of above | -> Phase 1 |
For check/resume/complete: load `@commands/monitor.md`, execute matched handler, STOP.
## Phase 0: Session Resume Check
1. Scan `.workflow/.team/TD-*/tasks.json` for active/paused sessions
2. No sessions -> Phase 1
3. Single session -> reconcile:
a. Read tasks.json, reset in_progress -> pending
b. Rebuild active_agents map
c. Kick first ready task via handleSpawnNext
4. Multiple -> request_user_input for selection
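The single-session reconcile (steps 3a-3b) can be sketched as (names illustrative):

```javascript
function reconcileOnResume(state) {
  // Reset tasks stuck in in_progress; their workers did not survive the restart
  for (const t of Object.values(state.tasks)) {
    if (t.status === 'in_progress') t.status = 'pending';
  }
  // Stale agent handles are not recoverable, so rebuild from empty
  state.active_agents = {};
  return state;
}
```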
## Phase 1: Requirement Clarification
TEXT-LEVEL ONLY. No source code reading.
1. Parse arguments for explicit settings: mode, scope, focus areas
2. Detect mode:
| Condition | Mode |
|-----------|------|
| `--mode=scan` or keywords: 扫描, scan, 审计, audit, 评估, assess | scan |
| `--mode=targeted` or keywords: 定向, targeted, 指定, specific, 修复已知 | targeted |
| `-y` or `--yes` specified | Skip confirmations |
| Default | remediate |
3. Ask for missing parameters (skip if auto mode):
- request_user_input: Tech Debt Target (Custom / Full-project scan / Full remediation / Targeted fix)
4. Store: mode, scope, focus, constraints
5. Delegate to @commands/analyze.md -> output task-analysis context
## Phase 2: Create Session + Initialize
1. Resolve workspace paths (MUST do first):
- `project_root` = result of `Bash({ command: "pwd" })`
- `skill_root` = `<project_root>/.codex/skills/team-tech-debt`
2. Generate session ID: `TD-<slug>-<YYYY-MM-DD>`
3. Create session folder structure:
```bash
mkdir -p .workflow/.team/${SESSION_ID}/{scan,assessment,plan,fixes,validation,wisdom,.msg}
```
4. Initialize .msg/meta.json via team_msg state_update with pipeline metadata
5. Write initial tasks.json:
```json
{
"session_id": "<id>",
"pipeline": "<scan|remediate|targeted>",
"requirement": "<original requirement>",
"created_at": "<ISO timestamp>",
"gc_rounds": 0,
"completed_waves": [],
"active_agents": {},
"tasks": {}
}
```
6. Do NOT spawn workers yet - deferred to Phase 4
## Phase 3: Create Task Chain
Delegate to @commands/dispatch.md. Task chain by mode:
| Mode | Task Chain |
|------|------------|
| scan | TDSCAN-001 -> TDEVAL-001 |
| remediate | TDSCAN-001 -> TDEVAL-001 -> TDPLAN-001 -> TDFIX-001 -> TDVAL-001 |
| targeted | TDPLAN-001 -> TDFIX-001 -> TDVAL-001 |
## Phase 4: Spawn-and-Wait
Delegate to @commands/monitor.md#handleSpawnNext:
1. Find ready tasks (pending + deps resolved)
2. Spawn team_worker agents via spawn_agent
3. Wait for completion via wait_agent
4. Process results, advance pipeline
5. Repeat until all waves complete or pipeline blocked
## Phase 5: Report + Debt Reduction Metrics + PR
1. Read shared memory -> collect all results
2. PR Creation (worktree mode, validation passed): commit, push, gh pr create, cleanup worktree
3. Calculate: debt_items_found, items_fixed, reduction_rate
4. Generate report with mode, debt scores, validation status
5. Output with [coordinator] prefix
6. Execute completion action (request_user_input: New goal / Deep remediation / Close team)
## Error Handling
| Error | Resolution |
|-------|------------|
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Reset task to pending in tasks.json, respawn via spawn_agent |
| Dependency cycle | Detect, report to user, halt |
| Invalid mode | Reject with error, ask to clarify |
| Session corruption | Attempt recovery, fallback to manual reconciliation |
| Scanner finds no debt | Report clean codebase, skip to summary |
| Fix-Verify loop stuck >3 iterations | Accept current state, continue pipeline |


@@ -0,0 +1,76 @@
---
role: executor
prefix: TDFIX
inner_loop: true
message_types: [state_update]
---
# Tech Debt Executor
Debt cleanup executor. Apply remediation plan actions in worktree: refactor code, update dependencies, add tests, add documentation. Batch-delegate to CLI tools, self-validate after each batch.
## Phase 2: Load Remediation Plan
| Input | Source | Required |
|-------|--------|----------|
| Session path | task description (regex: `session:\s*(.+)`) | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Remediation plan | <session>/plan/remediation-plan.json | Yes |
| Worktree info | meta.json:worktree.path, worktree.branch | Yes |
| Context accumulator | From prior TDFIX tasks (inner loop) | Yes (inner loop) |
1. Extract session path from task description
2. Read .msg/meta.json for worktree path and branch
3. Read remediation-plan.json, extract all actions from plan phases
4. Group actions by type: refactor, restructure, add-tests, update-deps, add-docs
5. Split large groups (> 10 items) into sub-batches of 10
6. For inner loop (fix-verify cycle): load context_accumulator from prior TDFIX tasks, parse review/validation feedback for specific issues
**Batch order**: refactor -> update-deps -> add-tests -> add-docs -> restructure
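The grouping, sub-batching, and ordering rules above can be sketched as (function name illustrative):

```javascript
// Batch order from the spec; actions carry a `type` matching these names.
const BATCH_ORDER = ['refactor', 'update-deps', 'add-tests', 'add-docs', 'restructure'];

function buildBatches(actions, size = 10) {
  // Group actions by type
  const byType = {};
  for (const a of actions) (byType[a.type] ??= []).push(a);
  // Emit sub-batches of `size` in the fixed batch order
  const batches = [];
  for (const type of BATCH_ORDER) {
    const items = byType[type] ?? [];
    for (let i = 0; i < items.length; i += size) {
      batches.push({ type, items: items.slice(i, i + size) });
    }
  }
  return batches;
}
```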
## Phase 3: Execute Fixes
For each batch, use CLI tool for implementation:
**Worktree constraint**: ALL file operations and commands must execute within worktree path. Use `cd "<worktree-path>" && ...` prefix for all Bash commands.
**Per-batch delegation**:
```bash
ccw cli -p "PURPOSE: Apply tech debt fixes in batch; success = all items fixed without breaking changes
TASK: <batch-type-specific-tasks>
MODE: write
CONTEXT: @<worktree-path>/**/* | Memory: Remediation plan context
EXPECTED: Code changes that fix debt items, maintain backward compatibility, pass existing tests
CONSTRAINTS: Minimal changes only | No new features | No suppressions | Read files before modifying
Batch type: <refactor|update-deps|add-tests|add-docs|restructure>
Items: <list-of-items-with-file-paths-and-descriptions>" --tool gemini --mode write --cd "<worktree-path>"
```
Wait for CLI completion before proceeding to next batch.
**Fix Results Tracking**:
| Field | Description |
|-------|-------------|
| items_fixed | Count of successfully fixed items |
| items_failed | Count of failed items |
| items_remaining | Remaining items count |
| batches_completed | Completed batch count |
| files_modified | Array of modified file paths |
| errors | Array of error messages |
After each batch, verify file modifications via `git diff --name-only` in worktree.
## Phase 4: Self-Validation
All commands in worktree:
| Check | Command | Pass Criteria |
|-------|---------|---------------|
| Syntax | `tsc --noEmit` or `python -m py_compile` | No new errors |
| Lint | `eslint --no-error-on-unmatched-pattern` | No new errors |
Write `<session>/fixes/fix-log.json` with fix results. Update .msg/meta.json with `fix_results`.
Append to context_accumulator for next TDFIX task (inner loop): files modified, fixes applied, validation results, discovered caveats.


@@ -0,0 +1,69 @@
---
role: planner
prefix: TDPLAN
inner_loop: false
message_types: [state_update]
---
# Tech Debt Planner
Remediation plan designer. Create phased remediation plan from priority matrix: Phase 1 quick-wins (immediate), Phase 2 systematic (medium-term), Phase 3 prevention (long-term). Produce remediation-plan.md.
## Phase 2: Load Assessment Data
| Input | Source | Required |
|-------|--------|----------|
| Session path | task description (regex: `session:\s*(.+)`) | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Priority matrix | <session>/assessment/priority-matrix.json | Yes |
1. Extract session path from task description
2. Read .msg/meta.json for debt_inventory
3. Read priority-matrix.json for quadrant groupings
4. Group items: quickWins (quick-win), strategic (strategic), backlog (backlog), deferred (defer)
## Phase 3: Create Remediation Plan
**Strategy selection**:
| Item Count (quick-win + strategic) | Strategy |
|------------------------------------|----------|
| <= 5 | Inline: generate steps from item data |
| > 5 | CLI-assisted: gemini generates detailed remediation steps |
**3-Phase Plan Structure**:
| Phase | Name | Source Items | Focus |
|-------|------|-------------|-------|
| 1 | Quick Wins | quick-win quadrant | High impact, low cost -- immediate execution |
| 2 | Systematic | strategic quadrant | High impact, high cost -- structured refactoring |
| 3 | Prevention | Generated from dimension patterns | Long-term prevention mechanisms |
**Action Type Mapping**:
| Dimension | Action Type |
|-----------|-------------|
| code | refactor |
| architecture | restructure |
| testing | add-tests |
| dependency | update-deps |
| documentation | add-docs |
**Prevention Actions** (generated when dimension has >= 3 items):
| Dimension | Prevention Action |
|-----------|-------------------|
| code | Add linting rules for complexity thresholds and code smell detection |
| architecture | Introduce module boundary checks in CI pipeline |
| testing | Set minimum coverage thresholds in CI and add pre-commit test hooks |
| dependency | Configure automated dependency update bot (Renovate/Dependabot) |
| documentation | Add JSDoc/docstring enforcement in linting rules |
For CLI-assisted mode, prompt gemini with debt summary requesting specific fix steps per item, grouped into phases, with dependencies and estimated time.
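The inline strategy (<= 5 items) can be sketched as follows, assuming priority-matrix items carry the scanner's `dimension` field; the per-dimension prevention text from the table above is omitted for brevity:

```javascript
// Dimension -> action type, from the Action Type Mapping table
const ACTION_TYPE = {
  code: 'refactor', architecture: 'restructure', testing: 'add-tests',
  dependency: 'update-deps', documentation: 'add-docs'
};

function buildPlan(quickWins, strategic) {
  const toAction = item =>
    ({ id: item.id, type: ACTION_TYPE[item.dimension] ?? 'refactor' });
  const phases = [
    { phase: 1, name: 'Quick Wins', actions: quickWins.map(toAction) },
    { phase: 2, name: 'Systematic', actions: strategic.map(toAction) }
  ];
  // Phase 3: one prevention action per dimension with >= 3 items
  const counts = {};
  for (const item of [...quickWins, ...strategic]) {
    counts[item.dimension] = (counts[item.dimension] ?? 0) + 1;
  }
  const prevention = Object.entries(counts)
    .filter(([, n]) => n >= 3)
    .map(([dim]) => ({ dimension: dim, type: 'prevention' }));
  phases.push({ phase: 3, name: 'Prevention', actions: prevention });
  return phases;
}
```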
## Phase 4: Validate & Save
1. Calculate validation metrics: total_actions, total_effort, files_affected, has_quick_wins, has_prevention
2. Write `<session>/plan/remediation-plan.md` (markdown with per-item checklists)
3. Write `<session>/plan/remediation-plan.json` (machine-readable)
4. Update .msg/meta.json with `remediation_plan` summary


@@ -0,0 +1,82 @@
---
role: scanner
prefix: TDSCAN
inner_loop: false
message_types: [state_update]
---
# Tech Debt Scanner
Multi-dimension tech debt scanner. Scan codebase across 5 dimensions (code, architecture, testing, dependency, documentation), produce structured debt inventory with severity rankings.
## Phase 2: Context & Environment Detection
| Input | Source | Required |
|-------|--------|----------|
| Scan scope | task description (regex: `scope:\s*(.+)`) | No (default: `**/*`) |
| Session path | task description (regex: `session:\s*(.+)`) | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
1. Extract session path and scan scope from task description
2. Load debug specs: Run `ccw spec load --category debug` for known issues, workarounds, and root-cause notes
3. Read .msg/meta.json for team context
4. Detect project type and framework:
| Signal File | Project Type |
|-------------|-------------|
| package.json + React/Vue/Angular | Frontend Node |
| package.json + Express/Fastify/NestJS | Backend Node |
| pyproject.toml / requirements.txt | Python |
| go.mod | Go |
| No detection | Generic |
5. Determine scan dimensions (default: code, architecture, testing, dependency, documentation)
6. Detect perspectives from task description:
| Condition | Perspective |
|-----------|-------------|
| `security\|auth\|inject\|xss` | security |
| `performance\|speed\|optimize` | performance |
| `quality\|clean\|maintain\|debt` | code-quality |
| `architect\|pattern\|structure` | architecture |
| Default | code-quality + architecture |
7. Assess complexity:
| Score | Complexity | Strategy |
|-------|------------|----------|
| >= 4 | High | Triple Fan-out: CLI explore + CLI 5 dimensions + multi-perspective Gemini |
| 2-3 | Medium | Dual Fan-out: CLI explore + CLI 3 dimensions |
| 0-1 | Low | Inline: ACE search + Grep |
## Phase 3: Multi-Dimension Scan
**Low Complexity** (inline):
- Use `mcp__ace-tool__search_context` for code smells, TODO/FIXME, deprecated APIs, complex functions, dead code, missing tests
- Classify findings into dimensions
**Medium/High Complexity** (Fan-out):
- Fan-out A: CLI exploration (structure, patterns, dependencies angles) via `ccw cli --tool gemini --mode analysis`
- Fan-out B: CLI dimension analysis (parallel gemini per dimension -- code, architecture, testing, dependency, documentation)
- Fan-out C (High only): Multi-perspective Gemini analysis (security, performance, code-quality, architecture)
- Fan-in: Merge results, cross-deduplicate by file:line, boost severity for multi-source findings
**Standardize each finding**:
| Field | Description |
|-------|-------------|
| `id` | `TD-NNN` (sequential) |
| `dimension` | code, architecture, testing, dependency, documentation |
| `severity` | critical, high, medium, low |
| `file` | File path |
| `line` | Line number |
| `description` | Issue description |
| `suggestion` | Fix suggestion |
| `estimated_effort` | small, medium, large, unknown |
## Phase 4: Aggregate & Save
1. Deduplicate findings across Fan-out layers (file:line key), merge cross-references
2. Sort by severity (cross-referenced items boosted)
3. Write `<session>/scan/debt-inventory.json` with scan_date, dimensions, total_items, by_dimension, by_severity, items
4. Update .msg/meta.json with `debt_inventory` array and `debt_score_before` count
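The fan-in dedup and severity-boost rule (steps 1-2) can be sketched as follows; boosting one severity level per extra source is an assumption, the spec only says multi-source findings are boosted:

```javascript
const SEVERITY_RANK = { low: 0, medium: 1, high: 2, critical: 3 };
const RANK_TO_SEVERITY = ['low', 'medium', 'high', 'critical'];

// Deduplicate by file:line; findings seen from multiple fan-out sources
// get a severity boost, then the merged list is sorted by severity.
function dedupeFindings(findings) {
  const byKey = new Map();
  for (const f of findings) {
    const key = `${f.file}:${f.line}`;
    const existing = byKey.get(key);
    if (!existing) {
      byKey.set(key, { ...f, sources: 1 });
    } else {
      existing.sources += 1;
      const boosted = Math.min(SEVERITY_RANK[existing.severity] + 1, 3);
      existing.severity = RANK_TO_SEVERITY[boosted];
    }
  }
  return [...byKey.values()]
    .sort((a, b) => SEVERITY_RANK[b.severity] - SEVERITY_RANK[a.severity]);
}
```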


@@ -0,0 +1,75 @@
---
role: validator
prefix: TDVAL
inner_loop: false
message_types: [state_update]
---
# Tech Debt Validator
Cleanup result validator. Run test suite, type checks, lint checks, and quality analysis to verify debt cleanup introduced no regressions. Compare before/after debt scores, produce validation-report.json.
## Phase 2: Load Context
| Input | Source | Required |
|-------|--------|----------|
| Session path | task description (regex: `session:\s*(.+)`) | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Fix log | <session>/fixes/fix-log.json | No |
1. Extract session path from task description
2. Read .msg/meta.json for: worktree.path, debt_inventory, fix_results, debt_score_before
3. Determine command prefix: `cd "<worktree-path>" && ` if worktree exists
4. Read fix-log.json for modified files list
5. Detect available validation tools in worktree:
| Signal | Tool | Method |
|--------|------|--------|
| package.json + npm | npm test | Test suite |
| pytest available | python -m pytest | Test suite |
| npx tsc available | npx tsc --noEmit | Type check |
| npx eslint available | npx eslint | Lint check |
## Phase 3: Run Validation Checks
Execute 4-layer validation (all commands in worktree):
**1. Test Suite**:
- Run `npm test` or `python -m pytest` in worktree
- PASS if no FAIL/error/failed keywords; FAIL with regression count otherwise
- Skip with "no-tests" if no test runner available
**2. Type Check**:
- Run `npx tsc --noEmit` in worktree
- Count `error TS` occurrences for error count
**3. Lint Check**:
- Run `npx eslint --no-error-on-unmatched-pattern <modified-files>` in worktree
- Count error occurrences
**4. Quality Analysis** (optional, when > 5 modified files):
- Use gemini CLI to compare code quality before/after
- Assess complexity, duplication, naming quality improvements
**Debt Score Calculation**:
- debt_score_after = debt items NOT in modified files (remaining unfixed items)
- improvement_percentage = ((before - after) / before) * 100
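A sketch of the debt-score calculation, assuming inventory items carry the `file` field from the scanner schema:

```javascript
function debtScores(debtInventory, modifiedFiles) {
  const modified = new Set(modifiedFiles);
  const before = debtInventory.length;
  // Items whose file was NOT touched are counted as still unfixed
  const after = debtInventory.filter(item => !modified.has(item.file)).length;
  const improvement = before === 0 ? 0 : ((before - after) / before) * 100;
  return {
    debt_score_before: before,
    debt_score_after: after,
    improvement_percentage: Math.round(improvement * 10) / 10
  };
}
```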
**Auto-fix attempt** (when total_regressions <= 3):
- Use CLI tool to fix regressions in worktree:
```
Bash(`cd "${worktreePath}" && ccw cli -p "PURPOSE: Fix regressions found in validation
TASK: ${regressionDetails}
MODE: write
CONTEXT: @${modifiedFiles.join(' @')}
EXPECTED: Fixed regressions
CONSTRAINTS: Fix only regressions | Preserve debt cleanup changes | No suppressions" --tool gemini --mode write`)
```
- Re-run validation checks after fix attempt
## Phase 4: Compare & Report
1. Calculate: total_regressions = test_regressions + type_errors + lint_errors; passed = (total_regressions === 0)
2. Write `<session>/validation/validation-report.json` with: validation_date, passed, regressions, checks (per-check status), debt_score_before, debt_score_after, improvement_percentage
3. Update .msg/meta.json with `validation_results` and `debt_score_after`
4. Select message type: `validation_complete` if passed, `regression_found` if not