Add unit tests for various components and stores in the terminal dashboard

- Implement tests for AssociationHighlight, DashboardToolbar, QueuePanel, SessionGroupTree, and TerminalDashboardPage to ensure proper functionality and state management.
- Create tests for cliSessionStore, issueQueueIntegrationStore, queueExecutionStore, queueSchedulerStore, sessionManagerStore, and terminalGridStore to validate state resets and workspace scoping.
- Mock necessary dependencies and state management hooks to isolate tests and ensure accurate behavior.
This commit is contained in:
catlog22
2026-03-08 21:38:20 +08:00
parent 9aa07e8d01
commit 62d8aa3623
157 changed files with 36544 additions and 71 deletions

---
name: team-tech-debt
description: Systematic tech debt governance with CSV wave pipeline. Scans codebase for tech debt across 5 dimensions, assesses severity with priority matrix, plans phased remediation, executes fixes in worktree, validates with 4-layer checks. Supports scan/remediate/targeted pipeline modes with fix-verify GC loop.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] [--mode=scan|remediate|targeted] \"scope or description\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, AskUserQuestion
---
# Team Tech Debt
## Auto Mode
When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.
## Usage
```bash
$team-tech-debt "Scan and fix tech debt in src/ module"
$team-tech-debt --mode=scan "Audit codebase for tech debt"
$team-tech-debt --mode=targeted "Fix known TODO/FIXME items in auth module"
$team-tech-debt -c 4 -y "Full remediation pipeline for entire project"
$team-tech-debt --continue "td-auth-debt-20260308"
```
**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume existing session
- `--mode=scan`: Scan and assess only, no fixes
- `--mode=targeted`: Skip scan/assess, direct fix path for known debt
- `--mode=remediate`: Full pipeline (default)
**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)
---
## Overview
Systematic tech debt governance: scan -> assess -> plan -> fix -> validate. Five specialized worker roles execute as CSV wave agents, with interactive agents for plan approval checkpoints and fix-verify GC loops.
**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary)
```
+-------------------------------------------------------------------+
| TEAM TECH DEBT WORKFLOW |
+-------------------------------------------------------------------+
| |
| Phase 0: Pre-Wave Interactive (Requirement Clarification) |
| +- Parse mode (scan/remediate/targeted) |
| +- Clarify scope and focus areas |
| +- Output: pipeline mode + scope for decomposition |
| |
| Phase 1: Requirement -> CSV + Classification |
| +- Select pipeline mode (scan/remediate/targeted) |
| +- Build task chain with fixed role assignments |
| +- Classify tasks: csv-wave | interactive (exec_mode) |
| +- Compute dependency waves (linear chain) |
| +- Generate tasks.csv with wave + exec_mode columns |
| +- User validates task breakdown (skip if -y) |
| |
| Phase 2: Wave Execution Engine (Extended) |
| +- For each wave (1..N): |
| | +- Execute pre-wave interactive tasks (plan approval) |
| | +- Build wave CSV (filter csv-wave tasks for this wave) |
| | +- Inject previous findings into prev_context column |
| | +- spawn_agents_on_csv(wave CSV) |
| | +- Execute post-wave interactive tasks (if any) |
| | +- Merge all results into master tasks.csv |
| | +- Check: any failed? -> skip dependents |
| | +- TDVAL checkpoint: GC loop check |
| +- discoveries.ndjson shared across all modes (append-only) |
| |
| Phase 3: Post-Wave Interactive (Completion + PR) |
| +- PR creation (if worktree mode, validation passed) |
| +- Debt reduction metrics report |
| +- Interactive completion choice |
| |
| Phase 4: Results Aggregation |
| +- Export final results.csv |
| +- Generate context.md with debt metrics |
| +- Display summary: debt scores, reduction rate |
| +- Offer: new target | deep fix | close |
| |
+-------------------------------------------------------------------+
```
---
## Task Classification Rules
Each task is classified by `exec_mode`:
| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot scan, assessment, planning, execution, validation |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Plan approval checkpoint, fix-verify GC loop management |
**Classification Decision**:
| Task Property | Classification |
|---------------|---------------|
| Multi-dimension debt scan (TDSCAN) | `csv-wave` |
| Quantitative assessment (TDEVAL) | `csv-wave` |
| Remediation planning (TDPLAN) | `csv-wave` |
| Plan approval gate | `interactive` |
| Debt cleanup execution (TDFIX) | `csv-wave` |
| Cleanup validation (TDVAL) | `csv-wave` |
| Fix-verify GC loop management | `interactive` |
---
## CSV Schema
### tasks.csv (Master State)
```csv
id,title,description,role,debt_dimension,pipeline_mode,deps,context_from,exec_mode,wave,status,findings,debt_items_count,artifacts_produced,error
"TDSCAN-001","Multi-dimension debt scan","Scan codebase across 5 dimensions for tech debt items","scanner","all","remediate","","","csv-wave","1","pending","","0","",""
"TDEVAL-001","Severity assessment","Quantify impact and fix cost for each debt item","assessor","all","remediate","TDSCAN-001","TDSCAN-001","csv-wave","2","pending","","0","",""
"TDPLAN-001","Remediation planning","Create phased remediation plan from priority matrix","planner","all","remediate","TDEVAL-001","TDEVAL-001","csv-wave","3","pending","","0","",""
```
**Columns**:
| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (TDPREFIX-NNN) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description with scope and context |
| `role` | Input | Worker role: scanner, assessor, planner, executor, validator |
| `debt_dimension` | Input | `all`, `code`, `architecture`, `testing`, `dependency`, `documentation` |
| `pipeline_mode` | Input | `scan`, `remediate`, `targeted` |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or execution notes (max 500 chars) |
| `debt_items_count` | Output | Number of debt items found/fixed/validated |
| `artifacts_produced` | Output | Semicolon-separated paths of produced artifacts |
| `error` | Output | Error message if failed |
### Per-Wave CSV (Temporary)
Each wave generates a temporary `wave-{N}.csv` with extra `prev_context` column (csv-wave tasks only).
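The engine code below calls `parseCsv`/`toCsv` helpers that this document never defines. A minimal sketch that handles the quoted fields used in the schema above (a hypothetical helper; a real implementation should prefer a proper CSV library) might look like:

```javascript
// Minimal CSV helpers for the tasks.csv schema (hypothetical sketch).
// Handles quoted fields with embedded commas and doubled-quote escapes.
function parseCsvLine(line) {
  const fields = []
  let cur = '', inQuotes = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++ } // escaped quote
      else if (ch === '"') inQuotes = false
      else cur += ch
    } else if (ch === '"') inQuotes = true
    else if (ch === ',') { fields.push(cur); cur = '' }
    else cur += ch
  }
  fields.push(cur)
  return fields
}

function parseCsv(text) {
  const lines = text.trim().split('\n')
  const headers = parseCsvLine(lines[0])
  return lines.slice(1).map(line => {
    const values = parseCsvLine(line)
    return Object.fromEntries(headers.map((h, i) => [h, values[i] ?? '']))
  })
}
```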
---
## Agent Registry (Interactive Agents)
| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| Plan Approver | agents/plan-approver.md | 2.3 (send_input cycle) | Review remediation plan, approve/revise/abort | pre-wave (before TDFIX) |
| GC Loop Manager | agents/gc-loop-manager.md | 2.3 (send_input cycle) | Manage fix-verify loop, create retry tasks | post-wave (after TDVAL) |
> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload.
---
## Output Artifacts
| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents) | Append-only, carries across waves |
| `context.md` | Human-readable report with debt metrics | Created in Phase 4 |
| `scan/debt-inventory.json` | Scanner output: structured debt inventory | Created by TDSCAN |
| `assessment/priority-matrix.json` | Assessor output: prioritized debt items | Created by TDEVAL |
| `plan/remediation-plan.md` | Planner output: phased fix plan | Created by TDPLAN |
| `plan/remediation-plan.json` | Planner output: machine-readable plan | Created by TDPLAN |
| `fixes/fix-log.json` | Executor output: fix results | Created by TDFIX |
| `validation/validation-report.json` | Validator output: validation results | Created by TDVAL |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |
---
## Session Structure
```
.workflow/.csv-wave/{session-id}/
+-- tasks.csv # Master state
+-- results.csv # Final results
+-- discoveries.ndjson # Shared discovery board
+-- context.md # Human-readable report
+-- wave-{N}.csv # Temporary per-wave input
+-- scan/
| +-- debt-inventory.json # Scanner output
+-- assessment/
| +-- priority-matrix.json # Assessor output
+-- plan/
| +-- remediation-plan.md # Planner output (human)
| +-- remediation-plan.json # Planner output (machine)
+-- fixes/
| +-- fix-log.json # Executor output
+-- validation/
| +-- validation-report.json # Validator output
+-- interactive/
| +-- {id}-result.json # Interactive task results
+-- wisdom/
+-- learnings.md
+-- decisions.md
```
---
## Implementation
### Session Initialization
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString() // UTC+8 wall clock (note: output keeps the nominal 'Z' suffix)
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3
// Detect pipeline mode
let pipelineMode = 'remediate'
if ($ARGUMENTS.includes('--mode=scan')) pipelineMode = 'scan'
else if ($ARGUMENTS.includes('--mode=targeted')) pipelineMode = 'targeted'
const requirement = $ARGUMENTS
.replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+|--mode=\w+/g, '')
.trim()
const slug = requirement.toLowerCase()
.replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
.substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `td-${slug}-${dateStr}`
const sessionFolder = `.workflow/.csv-wave/${sessionId}`
Bash(`mkdir -p ${sessionFolder}/scan ${sessionFolder}/assessment ${sessionFolder}/plan ${sessionFolder}/fixes ${sessionFolder}/validation ${sessionFolder}/interactive ${sessionFolder}/wisdom`)
// Initialize discoveries.ndjson
Write(`${sessionFolder}/discoveries.ndjson`, '')
Write(`${sessionFolder}/wisdom/learnings.md`, '# Learnings\n')
Write(`${sessionFolder}/wisdom/decisions.md`, '# Decisions\n')
```
---
### Phase 0: Pre-Wave Interactive (Requirement Clarification)
**Objective**: Parse mode, clarify scope, prepare pipeline configuration.
**Workflow**:
1. **Detect mode from arguments** (--mode=scan/remediate/targeted) or from keywords:
| Keywords | Mode |
|----------|------|
| scan, audit, assess | scan |
| targeted, specific, fix known | targeted |
| Default | remediate |
2. **Clarify scope** (skip if AUTO_YES):
```javascript
AskUserQuestion({
  questions: [{
    question: "Tech debt governance scope:",
    header: "Scope Selection",
    multiSelect: false,
    options: [
      { label: "Full project scan", description: "Scan entire codebase" },
      { label: "Specific module", description: "Target specific directory" },
      { label: "Custom scope", description: "Specify file patterns" }
    ]
  }]
})
```
3. **Detect debt dimensions** from task description:
| Keywords | Dimension |
|----------|-----------|
| code quality, complexity, smell | code |
| architecture, coupling, structure | architecture |
| test, coverage, quality | testing |
| dependency, outdated, vulnerable | dependency |
| documentation, api doc, comments | documentation |
| Default | all |
4. **Output**: pipeline mode, scope, focus dimensions
**Success Criteria**:
- Pipeline mode determined
- Scope and dimensions clarified
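The keyword tables above collapse naturally into small detection helpers. A sketch (the keyword lists are illustrative, not exhaustive):

```javascript
// Sketch: derive pipeline mode and debt dimensions from the request text.
// Keyword sets mirror the tables above; extend as needed.
const MODE_KEYWORDS = [
  ['scan', /\b(scan|audit|assess)\b/i],
  ['targeted', /\b(targeted|specific|fix known)\b/i],
]
const DIMENSION_KEYWORDS = {
  code: /code quality|complexity|smell/i,
  architecture: /architecture|coupling|structure/i,
  testing: /\btest|coverage/i,
  dependency: /dependency|outdated|vulnerable/i,
  documentation: /documentation|api doc|comments/i,
}

function detectMode(text) {
  for (const [mode, re] of MODE_KEYWORDS) if (re.test(text)) return mode
  return 'remediate' // default: full pipeline
}

function detectDimensions(text) {
  const hits = Object.keys(DIMENSION_KEYWORDS)
    .filter(dim => DIMENSION_KEYWORDS[dim].test(text))
  return hits.length ? hits : ['all']
}
```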
---
### Phase 1: Requirement -> CSV + Classification
**Objective**: Build task chain based on pipeline mode, generate tasks.csv.
**Pipeline Definitions**:
| Mode | Task Chain |
|------|------------|
| scan | TDSCAN-001 -> TDEVAL-001 |
| remediate | TDSCAN-001 -> TDEVAL-001 -> TDPLAN-001 -> (plan-approval) -> TDFIX-001 -> TDVAL-001 |
| targeted | TDPLAN-001 -> (plan-approval) -> TDFIX-001 -> TDVAL-001 |
**Task Registry**:
| Task ID | Role | Prefix | exec_mode | Wave | Description |
|---------|------|--------|-----------|------|-------------|
| TDSCAN-001 | scanner | TDSCAN | csv-wave | 1 | Multi-dimension codebase scan |
| TDEVAL-001 | assessor | TDEVAL | csv-wave | 2 | Severity assessment with priority matrix |
| PLAN-APPROVE | - | - | interactive | 4 (pre-wave) | Plan approval checkpoint |
| TDPLAN-001 | planner | TDPLAN | csv-wave | 3 | Phased remediation plan |
| TDFIX-001 | executor | TDFIX | csv-wave | 4 | Worktree-based incremental fixes |
| TDVAL-001 | validator | TDVAL | csv-wave | 5 | 4-layer validation |
**Worktree Creation** (before TDFIX, remediate mode):
```bash
git worktree add .worktrees/td-<slug>-<date> -b tech-debt/td-<slug>-<date>
```
**Wave Computation**: Linear chain, waves assigned by position in pipeline.
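For a linear chain the wave number is just the position in the pipeline, but a general deps-based computation (longest dependency path, which also catches the circular dependencies listed under Error Handling) can be sketched as:

```javascript
// Sketch: compute 1-based wave numbers from semicolon-separated deps.
// Wave of a task = longest dependency path; throws on cycles.
function computeWaves(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]))
  const waves = new Map()
  const visiting = new Set()
  function waveOf(id) {
    if (waves.has(id)) return waves.get(id)
    if (visiting.has(id)) throw new Error(`Circular dependency at ${id}`)
    visiting.add(id)
    const deps = (byId.get(id)?.deps || '').split(';').filter(Boolean)
    const wave = deps.length ? Math.max(...deps.map(waveOf)) + 1 : 1
    visiting.delete(id)
    waves.set(id, wave)
    return wave
  }
  for (const t of tasks) t.wave = String(waveOf(t.id))
  return tasks
}
```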
**User Validation**: Display pipeline with mode and task chain (skip if AUTO_YES).
**Success Criteria**:
- tasks.csv created with correct pipeline chain
- No circular dependencies
- User approved (or AUTO_YES)
---
### Phase 2: Wave Execution Engine (Extended)
**Objective**: Execute tasks wave-by-wave with checkpoints and GC loop support.
```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
let maxWave = Math.max(...tasks.map(t => Number(t.wave))) // let: GC retries can extend it
let gcRounds = 0
const MAX_GC_ROUNDS = 3

for (let wave = 1; wave <= maxWave; wave++) {
  // CSV values are strings; coerce wave before comparing
  const waveTasks = tasks.filter(t => Number(t.wave) === wave && t.status === 'pending')
  const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
  const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')

  // Check dependencies
  for (const task of waveTasks) {
    const depIds = (task.deps || '').split(';').filter(Boolean)
    const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
    if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
      task.status = 'skipped'
      task.error = 'Dependency failed'
    }
  }

  // Pre-wave interactive: Plan Approval Gate (after TDPLAN completes)
  if (interactiveTasks.some(t => t.id === 'PLAN-APPROVE' && t.status === 'pending')) {
    Read('agents/plan-approver.md')
    const planTask = interactiveTasks.find(t => t.id === 'PLAN-APPROVE')
    const agent = spawn_agent({
      message: `## PLAN REVIEW\n\n### MANDATORY FIRST STEPS\n1. Read: ${sessionFolder}/plan/remediation-plan.md\n2. Read: ${sessionFolder}/discoveries.ndjson\n\nReview the remediation plan and decide: Approve / Revise / Abort\n\nSession: ${sessionFolder}`
    })
    const result = wait({ ids: [agent], timeout_ms: 600000 })
    // Parse decision from the agent's final output
    const decision = String(result)
    if (decision.includes('Abort')) {
      // Skip remaining pipeline
      for (const t of tasks.filter(t => t.status === 'pending')) t.status = 'skipped'
    } else if (decision.includes('Revise')) {
      // Create revision task and re-run planner
      // ... create TDPLAN-revised task with the collected feedback
    }
    // Approve: continue normally
    close_agent({ id: agent })
    planTask.status = 'completed'
    // Create worktree for fix execution
    if (pipelineMode === 'remediate' || pipelineMode === 'targeted') {
      Bash(`git worktree add .worktrees/${sessionId} -b tech-debt/${sessionId}`)
    }
  }

  // Execute csv-wave tasks
  const pendingCsvTasks = csvTasks.filter(t => t.status === 'pending')
  for (const task of pendingCsvTasks) {
    task.prev_context = buildPrevContext(task, tasks)
  }
  if (pendingCsvTasks.length > 0) {
    Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingCsvTasks))
    // Customize the shared instruction for this wave's role
    // (scanner/assessor/planner/executor/validator)
    const role = pendingCsvTasks[0].role
    spawn_agents_on_csv({
      csv_path: `${sessionFolder}/wave-${wave}.csv`,
      id_column: "id",
      instruction: buildRoleInstruction(role, sessionFolder, wave),
      max_concurrency: maxConcurrency,
      max_runtime_seconds: 900,
      output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
      output_schema: {
        type: "object",
        properties: {
          id: { type: "string" },
          status: { type: "string", enum: ["completed", "failed"] },
          findings: { type: "string" },
          debt_items_count: { type: "string" },
          artifacts_produced: { type: "string" },
          error: { type: "string" }
        }
      }
    })
    // Merge results
    const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
    for (const r of results) {
      const t = tasks.find(t => t.id === r.id)
      if (t) Object.assign(t, r)
    }
  }

  // Post-wave: TDVAL GC Loop Check
  const completedVal = tasks.find(t => t.id.startsWith('TDVAL') && t.status === 'completed' && Number(t.wave) === wave)
  if (completedVal) {
    // Read validation results
    const valReport = JSON.parse(Read(`${sessionFolder}/validation/validation-report.json`))
    if (!valReport.passed && gcRounds < MAX_GC_ROUNDS) {
      gcRounds++
      // Create fix-verify retry tasks
      const fixId = `TDFIX-fix-${gcRounds}`
      const valId = `TDVAL-recheck-${gcRounds}`
      tasks.push({
        id: fixId, title: `Fix regressions (GC #${gcRounds})`, role: 'executor',
        description: `Fix regressions found in validation round ${gcRounds}`,
        debt_dimension: 'all', pipeline_mode: pipelineMode,
        deps: completedVal.id, context_from: completedVal.id,
        exec_mode: 'csv-wave', wave: wave + 1, status: 'pending',
        findings: '', debt_items_count: '0', artifacts_produced: '', error: ''
      })
      tasks.push({
        id: valId, title: `Revalidate (GC #${gcRounds})`, role: 'validator',
        description: `Revalidate after fix round ${gcRounds}`,
        debt_dimension: 'all', pipeline_mode: pipelineMode,
        deps: fixId, context_from: fixId,
        exec_mode: 'csv-wave', wave: wave + 2, status: 'pending',
        findings: '', debt_items_count: '0', artifacts_produced: '', error: ''
      })
      maxWave = Math.max(maxWave, wave + 2) // extend loop bound so the GC waves execute
    } else if (!valReport.passed && gcRounds >= MAX_GC_ROUNDS) {
      // Accept current state
      console.log(`Max GC rounds (${MAX_GC_ROUNDS}) reached. Accepting current state.`)
    }
  }

  // Update master CSV
  Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))
  Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)
}
```
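`buildPrevContext` is assumed by the engine but never defined. One possible sketch, pulling findings from the tasks named in `context_from` (the truncation limit is an assumption, chosen to keep the injected column compact):

```javascript
// Sketch: build the prev_context cell for a wave task from the master list.
// Joins findings of all context_from sources, truncated to maxChars.
function buildPrevContext(task, tasks, maxChars = 1500) {
  const sourceIds = (task.context_from || '').split(';').filter(Boolean)
  const parts = sourceIds.map(id => {
    const src = tasks.find(t => t.id === id)
    if (!src || !src.findings) return ''
    return `[${src.id}] ${src.findings}`
  }).filter(Boolean)
  return parts.join(' | ').slice(0, maxChars)
}
```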
**Success Criteria**:
- All waves executed in order
- Plan approval checkpoint enforced before fix execution
- GC loop properly bounded (max 3 rounds)
- Worktree created for fix execution
- discoveries.ndjson accumulated across all waves
---
### Phase 3: Post-Wave Interactive (Completion + PR)
**Objective**: Create PR from worktree if validation passed, generate debt reduction report.
```javascript
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
const allCompleted = tasks.every(t => t.status === 'completed' || t.status === 'skipped')

// PR Creation (if worktree exists and validation passed)
const worktreePath = `.worktrees/${sessionId}`
const valReport = JSON.parse(Read(`${sessionFolder}/validation/validation-report.json`) || '{}')
if (valReport.passed && fileExists(worktreePath)) {
  Bash(`cd ${worktreePath} && git add -A && git commit -m "tech-debt: remediate debt items (${sessionId})" && git push -u origin tech-debt/${sessionId}`)
  Bash(`gh pr create --title "Tech Debt Remediation: ${sessionId}" --body "Automated tech debt cleanup. See ${sessionFolder}/context.md for details."`)
  Bash(`git worktree remove ${worktreePath}`)
}

// Debt reduction metrics
const scanReport = JSON.parse(Read(`${sessionFolder}/scan/debt-inventory.json`) || '{}')
const debtBefore = scanReport.total_items || 0
const debtAfter = valReport.debt_score_after || 0
const reductionRate = debtBefore > 0 ? Math.round(((debtBefore - debtAfter) / debtBefore) * 100) : 0
console.log(`
============================================
TECH DEBT GOVERNANCE COMPLETE
Mode: ${pipelineMode}
Debt Items Found: ${debtBefore}
Debt Items Fixed: ${debtBefore - debtAfter}
Reduction Rate: ${reductionRate}%
GC Rounds: ${gcRounds}/${MAX_GC_ROUNDS}
Validation: ${valReport.passed ? 'PASSED' : 'FAILED'}
Session: ${sessionFolder}
============================================
`)

// Completion action
if (!AUTO_YES) {
  AskUserQuestion({
    questions: [{
      question: "What next?",
      header: "Completion",
      multiSelect: false,
      options: [
        { label: "New target", description: "Run another scan/fix cycle" },
        { label: "Deep fix", description: "Continue fixing remaining items" },
        { label: "Close", description: "Archive session" }
      ]
    }]
  })
}
```
**Success Criteria**:
- PR created if applicable
- Debt metrics calculated and reported
- User informed of next steps
---
### Phase 4: Results Aggregation
**Objective**: Generate final results and human-readable report.
```javascript
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)
let contextMd = `# Tech Debt Governance Report\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Mode**: ${pipelineMode}\n`
contextMd += `**Date**: ${getUtc8ISOString().substring(0, 10)}\n\n`
contextMd += `## Debt Metrics\n`
contextMd += `| Metric | Value |\n|--------|-------|\n`
contextMd += `| Items Found | ${debtBefore} |\n`
contextMd += `| Items Fixed | ${debtBefore - debtAfter} |\n`
contextMd += `| Reduction Rate | ${reductionRate}% |\n`
contextMd += `| GC Rounds | ${gcRounds} |\n`
contextMd += `| Validation | ${valReport.passed ? 'PASSED' : 'FAILED'} |\n\n`
contextMd += `## Pipeline Execution\n\n`
for (const t of tasks) {
  const icon = t.status === 'completed' ? '[DONE]' : t.status === 'failed' ? '[FAIL]' : '[SKIP]'
  contextMd += `${icon} **${t.title}** [${t.role}] ${t.findings || ''}\n\n`
}
Write(`${sessionFolder}/context.md`, contextMd)
```
**Success Criteria**:
- results.csv exported
- context.md generated with debt metrics
- Summary displayed to user
---
## Shared Discovery Board Protocol
**Format**: NDJSON (one JSON per line)
**Discovery Types**:
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `debt_item_found` | `data.file+data.line` | `{id, dimension, severity, file, line, description, suggestion}` | Tech debt item identified |
| `pattern_found` | `data.pattern_name+data.location` | `{pattern_name, location, description}` | Code pattern (anti-pattern) found |
| `fix_applied` | `data.file+data.change` | `{file, change, lines_modified, debt_id}` | Fix applied to debt item |
| `regression_found` | `data.file+data.test` | `{file, test, description, severity}` | Regression found during validation |
| `dependency_issue` | `data.package+data.issue` | `{package, current, latest, issue, severity}` | Dependency problem |
| `metric_recorded` | `data.metric` | `{metric, value, dimension, file}` | Quality metric recorded |
```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"TDSCAN-001","type":"debt_item_found","data":{"id":"TD-001","dimension":"code","severity":"high","file":"src/auth/jwt.ts","line":42,"description":"Complexity > 15","suggestion":"Extract helper functions"}}
{"ts":"2026-03-08T10:15:00Z","worker":"TDFIX-001","type":"fix_applied","data":{"file":"src/auth/jwt.ts","change":"Extracted 3 helper functions","lines_modified":25,"debt_id":"TD-001"}}
```
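Appending with the dedup keys above can be sketched as follows (the key-derivation scheme is an assumption based on the table; entries whose key already exists on the board are silently skipped):

```javascript
// Sketch: append a discovery to the NDJSON board unless its dedup key
// already exists. Key fields per type follow the table above.
const DEDUP_FIELDS = {
  debt_item_found: ['file', 'line'],
  pattern_found: ['pattern_name', 'location'],
  fix_applied: ['file', 'change'],
  regression_found: ['file', 'test'],
  dependency_issue: ['package', 'issue'],
  metric_recorded: ['metric'],
}

function dedupKey(entry) {
  const fields = DEDUP_FIELDS[entry.type] || []
  return entry.type + ':' + fields.map(f => String(entry.data[f])).join('+')
}

function appendDiscovery(boardText, entry) {
  const existing = new Set(
    boardText.split('\n').filter(Boolean).map(l => dedupKey(JSON.parse(l)))
  )
  if (existing.has(dedupKey(entry))) return boardText // duplicate: skip
  return boardText + JSON.stringify(entry) + '\n'
}
```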
---
## Checkpoints
| Checkpoint | Trigger | Condition | Action |
|------------|---------|-----------|--------|
| Plan Approval Gate | TDPLAN-001 completes | Always (remediate/targeted mode) | Interactive: Approve / Revise / Abort |
| Worktree Creation | Plan approved | Before TDFIX | `git worktree add .worktrees/{session-id}` |
| Fix-Verify GC Loop | TDVAL-* completes | Regressions found | Create TDFIX-fix-N + TDVAL-recheck-N (max 3 rounds) |
---
## Pipeline Mode Details
### Scan Mode
```
Wave 1: TDSCAN-001 (scanner) -> Scan 5 dimensions
Wave 2: TDEVAL-001 (assessor) -> Priority matrix
```
### Remediate Mode (Full Pipeline)
```
Wave 1: TDSCAN-001 (scanner) -> Scan 5 dimensions
Wave 2: TDEVAL-001 (assessor) -> Priority matrix
Wave 3: TDPLAN-001 (planner) -> Remediation plan
PLAN-APPROVE (interactive) -> User approval
Wave 4: TDFIX-001 (executor) -> Apply fixes in worktree
Wave 5: TDVAL-001 (validator) -> 4-layer validation
[GC Loop: TDFIX-fix-N -> TDVAL-recheck-N, max 3]
```
### Targeted Mode
```
Wave 1: TDPLAN-001 (planner) -> Targeted fix plan
PLAN-APPROVE (interactive) -> User approval
Wave 2: TDFIX-001 (executor) -> Apply fixes in worktree
Wave 3: TDVAL-001 (validator) -> 4-layer validation
[GC Loop: TDFIX-fix-N -> TDVAL-recheck-N, max 3]
```
---
## Error Handling
| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Prompt convergence via send_input; close the agent if it still fails to finish |
| Scanner finds no debt | Report clean codebase, skip to summary |
| Plan rejected by user | Abort pipeline or create revision task |
| Fix-verify loop stuck (>3 rounds) | Accept current state, continue to completion |
| Worktree creation fails | Fall back to direct changes with user confirmation |
| Validation tools not available | Skip unavailable checks, report partial validation |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Continue mode: no session found | List available sessions, prompt user to select |
---
## Core Rules
1. **Start Immediately**: First action is session initialization, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive for approval checkpoints
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson
7. **Skip on Failure**: If a dependency failed, skip the dependent task
8. **GC Loop Bounded**: Maximum 3 fix-verify rounds before accepting current state
9. **Worktree Isolation**: All fix execution happens in git worktree, not main branch
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped

# GC Loop Manager Agent
Interactive agent for managing the fix-verify GC (Garbage Collection) loop. It is spawned after TDVAL completes with regressions and manages retry task creation up to MAX_GC_ROUNDS (3).
## Identity
- **Type**: `interactive`
- **Role File**: `agents/gc-loop-manager.md`
- **Responsibility**: Evaluate validation results, decide whether to retry or accept, create GC loop tasks
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Read validation report to determine regression status
- Track GC round count (max 3)
- Create fix-verify retry tasks when regressions found and rounds remain
- Accept current state when GC rounds exhausted
- Report decision to orchestrator
- Produce structured output following template
### MUST NOT
- Skip the MANDATORY FIRST STEPS role loading
- Execute fix actions directly
- Exceed MAX_GC_ROUNDS (3)
- Skip validation report reading
- Produce unstructured output
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | built-in | Load validation report and context |
| `Write` | built-in | Store GC decision result |
---
## Execution
### Phase 1: Validation Assessment
**Objective**: Read validation results and determine action
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| validation-report.json | Yes | Validation results |
| discoveries.ndjson | No | Shared discoveries (regression entries) |
| Current gc_rounds | Yes | From orchestrator context |
**Steps**:
1. Read validation-report.json
2. Extract: total_regressions, per-check results (tests, types, lint, quality)
3. Determine GC decision:
| Condition | Decision |
|-----------|----------|
| No regressions (passed=true) | `pipeline_complete` -- no GC needed |
| Regressions AND gc_rounds < 3 | `retry` -- create fix-verify tasks |
| Regressions AND gc_rounds >= 3 | `accept` -- accept current state |
**Output**: GC decision
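The decision table maps directly onto a pure function (a sketch; `MAX_GC_ROUNDS` is fixed at 3 per the spec):

```javascript
// Sketch: GC decision from validation result and current round count.
const MAX_GC_ROUNDS = 3
function gcDecision(valReport, gcRounds) {
  if (valReport.passed) return 'pipeline_complete' // no regressions, no GC needed
  return gcRounds < MAX_GC_ROUNDS ? 'retry' : 'accept'
}
```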
---
### Phase 2: Task Creation (retry only)
**Objective**: Create fix-verify retry task pair
**Steps** (only when decision is `retry`):
1. Increment gc_rounds
2. Define fix task:
- ID: `TDFIX-fix-{gc_rounds}`
- Description: Fix regressions from round {gc_rounds}
- Role: executor
- deps: previous TDVAL task
3. Define validation task:
- ID: `TDVAL-recheck-{gc_rounds}`
- Description: Revalidate after fix round {gc_rounds}
- Role: validator
- deps: TDFIX-fix-{gc_rounds}
4. Report new tasks to orchestrator for CSV insertion
**Output**: New task definitions for orchestrator to add to master CSV
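The fix-verify pair from steps 2-3 can be sketched as a small builder (field set abbreviated to the columns the orchestrator needs; the remaining CSV columns are filled on insertion):

```javascript
// Sketch: build the TDFIX-fix-N / TDVAL-recheck-N retry pair for the
// orchestrator to insert into the master CSV.
function buildRetryPair(gcRound, prevValId, baseWave) {
  const fixId = `TDFIX-fix-${gcRound}`
  return [
    { id: fixId, role: 'executor', deps: prevValId,
      description: `Fix regressions from round ${gcRound}`,
      exec_mode: 'csv-wave', wave: String(baseWave + 1), status: 'pending' },
    { id: `TDVAL-recheck-${gcRound}`, role: 'validator', deps: fixId,
      description: `Revalidate after fix round ${gcRound}`,
      exec_mode: 'csv-wave', wave: String(baseWave + 2), status: 'pending' },
  ]
}
```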
---
## Structured Output Template
```
## Summary
- Validation result: <passed|failed>
- Total regressions: <count>
- GC round: <current>/<max>
- Decision: <pipeline_complete|retry|accept>
## Regression Details (if any)
- Test failures: <count>
- Type errors: <count>
- Lint errors: <count>
## Action Taken
- Decision: <decision>
- New tasks created: <task-ids or none>
## Metrics
- Debt score before: <score>
- Debt score after: <score>
- Improvement: <percentage>%
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Validation report not found | Report error, suggest re-running validator |
| Report parse error | Treat as failed validation, trigger retry if rounds remain |
| GC rounds already at max | Accept current state, report to orchestrator |
| Processing failure | Output partial results with clear status |

# Plan Approver Agent
Interactive agent for reviewing the tech debt remediation plan at the plan approval gate checkpoint. Spawned after TDPLAN-001 completes, before TDFIX execution begins.
## Identity
- **Type**: `interactive`
- **Role File**: `agents/plan-approver.md`
- **Responsibility**: Review remediation plan, present to user, handle Approve/Revise/Abort
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Read the remediation plan (both .md and .json)
- Present clear summary with phases, item counts, effort estimates
- Wait for user approval before reporting
- Handle all three outcomes (Approve, Revise, Abort)
- Produce structured output following template
### MUST NOT
- Skip the MANDATORY FIRST STEPS role loading
- Approve plan without user confirmation
- Modify the plan artifacts directly
- Execute any fix actions
- Produce unstructured output
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | built-in | Load plan artifacts and context |
| `AskUserQuestion` | built-in | Get user approval decision |
| `Write` | built-in | Store approval result |
### Tool Usage Patterns
**Read Pattern**: Load plan before review
```
Read("<session>/plan/remediation-plan.md")
Read("<session>/plan/remediation-plan.json")
Read("<session>/assessment/priority-matrix.json")
```
---
## Execution
### Phase 1: Plan Loading
**Objective**: Load and summarize the remediation plan
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| remediation-plan.md | Yes | Human-readable plan |
| remediation-plan.json | Yes | Machine-readable plan |
| priority-matrix.json | No | Assessment context |
| discoveries.ndjson | No | Shared discoveries |
**Steps**:
1. Read remediation-plan.md for overview
2. Read remediation-plan.json for metrics
3. Summarize: total actions, effort distribution, phases
4. Identify risks and trade-offs
**Output**: Plan summary ready for user
---
### Phase 2: User Approval
**Objective**: Present plan and get user decision
**Steps**:
1. Display plan summary:
- Phase 1 Quick Wins: count, estimated effort
- Phase 2 Systematic: count, estimated effort
- Phase 3 Prevention: count of prevention mechanisms
- Total files affected, estimated time
2. Present decision:
```javascript
AskUserQuestion({
questions: [{
question: "Remediation plan generated. Review and decide:",
header: "Plan Approval Gate",
multiSelect: false,
options: [
{ label: "Approve", description: "Proceed with fix execution in worktree" },
{ label: "Revise", description: "Re-run planner with specific feedback" },
{ label: "Abort", description: "Stop pipeline, keep scan/assessment results" }
]
}]
})
```
3. Handle response:
| Response | Action |
|----------|--------|
| Approve | Report approved, trigger worktree creation |
| Revise | Collect revision feedback, report revision-needed |
| Abort | Report abort, pipeline stops |
**Output**: Approval decision with details
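The three-way outcome handling above can be sketched as a small dispatcher. This is illustrative only; the field names (`decision`, `next`, `feedback`) are assumptions for the sketch, not part of the agent contract:

```javascript
// Map the user's gate decision to the result the approver reports upstream.
function handleApproval(choice, feedback = "") {
  switch (choice) {
    case "Approve":
      return { decision: "approved", next: "create-worktree" };
    case "Revise":
      // The collected feedback is forwarded so the planner can be re-run.
      return { decision: "revision-needed", feedback };
    case "Abort":
      return { decision: "aborted", next: "stop-pipeline" };
    default:
      throw new Error(`Unknown gate choice: ${choice}`);
  }
}
```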
---
## Structured Output Template
```
## Summary
- Plan reviewed: remediation-plan.md
- Decision: <approved|revision-needed|aborted>
## Plan Overview
- Phase 1 Quick Wins: <count> items, <effort> effort
- Phase 2 Systematic: <count> items, <effort> effort
- Phase 3 Prevention: <count> mechanisms
- Files affected: <count>
## Decision Details
- User choice: <Approve|Revise|Abort>
- Feedback: <user feedback if revision>
## Risks Identified
- Risk 1: description
- Risk 2: description
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Plan file not found | Report error, suggest re-running planner |
| Plan is empty (no actions) | Report clean codebase, suggest closing |
| User does not respond | Timeout, report awaiting-review |
| Plan JSON parse error | Fall back to .md for review, report warning |

# Agent Instruction Template -- Team Tech Debt
Role-specific instruction templates for CSV wave agents in the tech debt pipeline. Each role has a specialized instruction that is injected as the `instruction` parameter to `spawn_agents_on_csv`.
## Purpose
| Phase | Usage |
|-------|-------|
| Phase 1 | Orchestrator selects role-specific instruction based on task role |
| Phase 2 | Injected as `instruction` parameter to `spawn_agents_on_csv` |
---
## Scanner Instruction
```markdown
## TECH DEBT SCAN TASK
### MANDATORY FIRST STEPS
1. Read shared discoveries: <session-folder>/discoveries.ndjson (if exists)
2. Read project context: .workflow/project-tech.json (if exists)
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Role**: scanner
**Dimension Focus**: {debt_dimension}
**Pipeline Mode**: {pipeline_mode}
### Task Description
{description}
### Previous Context
{prev_context}
---
## Execution Protocol
1. **Read discoveries**: Load <session-folder>/discoveries.ndjson
2. **Detect project type**: Check package.json, pyproject.toml, go.mod, etc.
3. **Scan 5 dimensions**:
- **Code**: Complexity > 10, TODO/FIXME, deprecated APIs, dead code, duplicated logic
- **Architecture**: Circular dependencies, god classes, layering violations, tight coupling
- **Testing**: Missing tests, low coverage, test quality issues, no integration tests
- **Dependency**: Outdated packages, known vulnerabilities, unused dependencies
- **Documentation**: Missing JSDoc/docstrings, stale API docs, no README sections
4. **Use tools**: mcp__ace-tool__search_context for semantic search, Grep for pattern matching, Bash for static analysis tools
5. **Standardize each finding**:
- id: TD-NNN (sequential)
- dimension: code|architecture|testing|dependency|documentation
- severity: critical|high|medium|low
- file: path, line: number
- description: issue description
- suggestion: fix suggestion
- estimated_effort: small|medium|large|unknown
6. **Share discoveries**: Append each finding to discovery board:
```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"debt_item_found","data":{"id":"TD-NNN","dimension":"<dim>","severity":"<sev>","file":"<path>","line":<n>,"description":"<desc>","suggestion":"<fix>","estimated_effort":"<effort>"}}' >> <session-folder>/discoveries.ndjson
```
7. **Write artifact**: Save structured inventory to <session-folder>/scan/debt-inventory.json
8. **Report result**
---
## Output (report_agent_job_result)
{
"id": "{id}",
"status": "completed" | "failed",
"findings": "Scanned N dimensions. Found M debt items: X critical, Y high... (max 500 chars)",
"debt_items_count": "<total count>",
"artifacts_produced": "scan/debt-inventory.json",
"error": ""
}
```
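Note that the shell `echo` pattern in the template can break when a description contains quotes or other shell-significant characters. When an agent appends from a script instead, serializing the whole entry is safer; a minimal sketch (not part of the template itself):

```javascript
// Build one discovery-board NDJSON line; JSON.stringify handles the quoting
// that a raw single-quoted shell echo cannot (e.g. quotes in descriptions).
function discoveryLine(worker, type, data) {
  return JSON.stringify({ ts: new Date().toISOString(), worker, type, data }) + "\n";
}
```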
---
## Assessor Instruction
```markdown
## TECH DEBT ASSESSMENT TASK
### MANDATORY FIRST STEPS
1. Read shared discoveries: <session-folder>/discoveries.ndjson (if exists)
2. Read debt inventory: <session-folder>/scan/debt-inventory.json
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Role**: assessor
### Task Description
{description}
### Previous Context
{prev_context}
---
## Execution Protocol
1. **Load debt inventory** from <session-folder>/scan/debt-inventory.json
2. **Score each item**:
- **Impact Score** (1-5): critical=5, high=4, medium=3, low=1
- **Cost Score** (1-5): small=1, medium=3, large=5, unknown=3
3. **Classify into priority quadrants**:
| Impact | Cost | Quadrant |
|--------|------|----------|
| >= 4 | <= 2 | quick-win |
| >= 4 | >= 3 | strategic |
| <= 3 | <= 2 | backlog |
| <= 3 | >= 3 | defer |
4. **Sort** within each quadrant by impact_score descending
5. **Share discoveries**: Append assessment summary to discovery board
6. **Write artifact**: <session-folder>/assessment/priority-matrix.json
7. **Report result**
---
## Output (report_agent_job_result)
{
"id": "{id}",
"status": "completed" | "failed",
"findings": "Assessed M items. Quick-wins: X, Strategic: Y, Backlog: Z, Defer: W (max 500 chars)",
"debt_items_count": "<total assessed>",
"artifacts_produced": "assessment/priority-matrix.json",
"error": ""
}
```
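The impact/cost scoring and quadrant rules above can be sketched as a small classifier. The input field names mirror the debt-inventory schema; the output field names (`impact_score`, `cost_score`, `quadrant`) match the priority-matrix terminology:

```javascript
// Score maps from the assessor rules (note: the template assigns low=1, not 2).
const IMPACT = { critical: 5, high: 4, medium: 3, low: 1 };
const COST = { small: 1, medium: 3, large: 5, unknown: 3 };

// Classify one debt item into a priority quadrant.
function classify(item) {
  const impact = IMPACT[item.severity];
  const cost = COST[item.estimated_effort];
  const quadrant =
    impact >= 4
      ? (cost <= 2 ? "quick-win" : "strategic")
      : (cost <= 2 ? "backlog" : "defer");
  return { ...item, impact_score: impact, cost_score: cost, quadrant };
}
```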
---
## Planner Instruction
```markdown
## TECH DEBT PLANNING TASK
### MANDATORY FIRST STEPS
1. Read shared discoveries: <session-folder>/discoveries.ndjson (if exists)
2. Read priority matrix: <session-folder>/assessment/priority-matrix.json
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Role**: planner
### Task Description
{description}
### Previous Context
{prev_context}
---
## Execution Protocol
1. **Load priority matrix** from <session-folder>/assessment/priority-matrix.json
2. **Group items**: quickWins (quick-win), strategic, backlog, deferred
3. **Create 3-phase remediation plan**:
- **Phase 1: Quick Wins** -- High impact, low cost, immediate execution
- **Phase 2: Systematic** -- High impact, high cost, structured refactoring
- **Phase 3: Prevention** -- Long-term prevention mechanisms
4. **Map action types** per dimension:
| Dimension | Action Type |
|-----------|-------------|
| code | refactor |
| architecture | restructure |
| testing | add-tests |
| dependency | update-deps |
| documentation | add-docs |
5. **Generate prevention actions** for dimensions with >= 3 items:
| Dimension | Prevention |
|-----------|------------|
| code | Add linting rules for complexity thresholds |
| architecture | Introduce module boundary checks in CI |
| testing | Set minimum coverage thresholds |
| dependency | Configure automated update bot |
| documentation | Add docstring enforcement in linting |
6. **Write artifacts**:
- <session-folder>/plan/remediation-plan.md (human-readable with checklists)
- <session-folder>/plan/remediation-plan.json (machine-readable)
7. **Report result**
---
## Output (report_agent_job_result)
{
"id": "{id}",
"status": "completed" | "failed",
"findings": "Created 3-phase plan. Phase 1: X quick-wins. Phase 2: Y systematic. Phase 3: Z prevention. Total actions: N (max 500 chars)",
"debt_items_count": "<total planned items>",
"artifacts_produced": "plan/remediation-plan.md;plan/remediation-plan.json",
"error": ""
}
```
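The quadrant-to-phase grouping and dimension-to-action mapping above can be sketched as follows. The sketch covers Phases 1 and 2 plus the backlog/deferred buckets; Phase 3 prevention generation (the ">= 3 items per dimension" rule) is omitted for brevity:

```javascript
// Dimension -> action type, per the mapping table in the planner template.
const ACTION_TYPE = {
  code: "refactor",
  architecture: "restructure",
  testing: "add-tests",
  dependency: "update-deps",
  documentation: "add-docs",
};

// Group assessed items from the priority matrix into plan phases.
function buildPlan(matrix) {
  const toAction = (i) => ({
    debt_id: i.id,
    file: i.file,
    action: ACTION_TYPE[i.dimension],
    effort: i.estimated_effort,
  });
  const inQuadrant = (q) => matrix.filter((i) => i.quadrant === q).map(toAction);
  return {
    phase1_quick_wins: inQuadrant("quick-win"),
    phase2_systematic: inQuadrant("strategic"),
    backlog: inQuadrant("backlog"),
    deferred: inQuadrant("defer"),
  };
}
```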
---
## Executor Instruction
```markdown
## TECH DEBT FIX EXECUTION TASK
### MANDATORY FIRST STEPS
1. Read shared discoveries: <session-folder>/discoveries.ndjson (if exists)
2. Read remediation plan: <session-folder>/plan/remediation-plan.json
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Role**: executor
### Task Description
{description}
### Previous Context
{prev_context}
---
## Execution Protocol
**CRITICAL**: ALL file operations must execute within the worktree path.
1. **Load remediation plan** from <session-folder>/plan/remediation-plan.json
2. **Extract worktree path** from task description
3. **Group actions by type**: refactor -> update-deps -> add-tests -> add-docs -> restructure
4. **For each batch**:
- Read target files in worktree
- Apply changes following project conventions
- Validate changes compile/lint: `cd "<worktree>" && npx tsc --noEmit` or equivalent
- Track: items_fixed, items_failed, files_modified
5. **After each batch**: Verify via `cd "<worktree>" && git diff --name-only`
6. **Share discoveries**: Append fix_applied entries to discovery board:
```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"fix_applied","data":{"file":"<path>","change":"<desc>","lines_modified":<n>,"debt_id":"<TD-NNN>"}}' >> <session-folder>/discoveries.ndjson
```
7. **Self-validate**:
| Check | Command | Pass Criteria |
|-------|---------|---------------|
| Syntax | `cd "<worktree>" && npx tsc --noEmit` | No new errors |
   | Lint | `cd "<worktree>" && npx eslint --no-error-on-unmatched-pattern <modified-files>` | No new errors |
8. **Write artifact**: <session-folder>/fixes/fix-log.json
9. **Report result**
---
## Output (report_agent_job_result)
{
"id": "{id}",
"status": "completed" | "failed",
"findings": "Fixed X/Y items. Batches: refactor(N), update-deps(N), add-tests(N). Files modified: Z (max 500 chars)",
"debt_items_count": "<items fixed>",
"artifacts_produced": "fixes/fix-log.json",
"error": ""
}
```
---
## Validator Instruction
```markdown
## TECH DEBT VALIDATION TASK
### MANDATORY FIRST STEPS
1. Read shared discoveries: <session-folder>/discoveries.ndjson (if exists)
2. Read fix log: <session-folder>/fixes/fix-log.json (if exists)
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Role**: validator
### Task Description
{description}
### Previous Context
{prev_context}
---
## Execution Protocol
**CRITICAL**: ALL validation commands must execute within the worktree path.
1. **Extract worktree path** from task description
2. **Load fix results** from <session-folder>/fixes/fix-log.json
3. **Run 4-layer validation**:
**Layer 1 -- Test Suite**:
- Command: `cd "<worktree>" && npm test` or `cd "<worktree>" && python -m pytest`
- PASS: No FAIL/error/failed keywords
- SKIP: No test runner available
**Layer 2 -- Type Check**:
- Command: `cd "<worktree>" && npx tsc --noEmit`
- Count: `error TS` occurrences
**Layer 3 -- Lint Check**:
- Command: `cd "<worktree>" && npx eslint --no-error-on-unmatched-pattern <modified-files>`
- Count: error occurrences
**Layer 4 -- Quality Analysis** (when > 5 modified files):
- Compare code quality before/after
- Assess complexity, duplication, naming improvements
4. **Calculate debt score**:
- debt_score_after = debt items NOT in modified files (remaining unfixed)
- improvement_percentage = ((before - after) / before) * 100
5. **Auto-fix attempt** (when total_regressions <= 3):
- Fix minor regressions inline
- Re-run validation checks
6. **Share discoveries**: Append regression_found entries if any:
```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"regression_found","data":{"file":"<path>","test":"<test>","description":"<desc>","severity":"<sev>"}}' >> <session-folder>/discoveries.ndjson
```
7. **Write artifact**: <session-folder>/validation/validation-report.json with:
- validation_date, passed (bool), total_regressions
- checks: {tests, types, lint, quality} with per-check status
- debt_score_before, debt_score_after, improvement_percentage
8. **Report result**
---
## Output (report_agent_job_result)
{
"id": "{id}",
"status": "completed" | "failed",
"findings": "Validation: PASSED|FAILED. Tests: OK/N failures. Types: OK/N errors. Lint: OK/N errors. Debt reduction: X% (max 500 chars)",
"debt_items_count": "<debt_score_after>",
"artifacts_produced": "validation/validation-report.json",
"error": ""
}
```
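The pass/fail decision and debt-score arithmetic above can be sketched as a summarizer. The `checks` shape (`{ errors }` per layer) is an assumption for the sketch; the output field names match the validation-report schema:

```javascript
// Aggregate per-layer error counts into the validation-report fields.
function summarizeValidation(checks, debtBefore, debtAfter) {
  const totalRegressions = Object.values(checks).reduce(
    (n, c) => n + (c.errors || 0),
    0,
  );
  return {
    passed: totalRegressions === 0,
    total_regressions: totalRegressions,
    debt_score_before: debtBefore,
    debt_score_after: debtAfter,
    // Guard against division by zero when the scan found nothing.
    improvement_percentage:
      debtBefore > 0 ? ((debtBefore - debtAfter) / debtBefore) * 100 : 0,
  };
}
```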
---
## Placeholder Reference
| Placeholder | Resolved By | When |
|-------------|------------|------|
| `<session-folder>` | Orchestrator (Phase 1) | Literal path baked into instruction |
| `{id}` | spawn_agents_on_csv | Runtime from CSV row |
| `{title}` | spawn_agents_on_csv | Runtime from CSV row |
| `{description}` | spawn_agents_on_csv | Runtime from CSV row |
| `{role}` | spawn_agents_on_csv | Runtime from CSV row |
| `{debt_dimension}` | spawn_agents_on_csv | Runtime from CSV row |
| `{pipeline_mode}` | spawn_agents_on_csv | Runtime from CSV row |
| `{prev_context}` | spawn_agents_on_csv | Runtime from CSV row |
---
## Instruction Selection Logic
The orchestrator selects the appropriate instruction section based on the task's `role` column:
| Role | Instruction Section |
|------|-------------------|
| scanner | Scanner Instruction |
| assessor | Assessor Instruction |
| planner | Planner Instruction |
| executor | Executor Instruction |
| validator | Validator Instruction |
Since each wave typically contains tasks from a single role (linear pipeline), the orchestrator uses the role of the first task in the wave to select the instruction template. The `<session-folder>` placeholder is replaced with the actual session path before injection.

# Team Tech Debt -- CSV Schema
## Master CSV: tasks.csv
### Column Definitions
#### Input Columns (Set by Decomposer)
| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier (role prefix + sequence number; see Role Registry) | `"TDSCAN-001"` |
| `title` | string | Yes | Short task title | `"Multi-dimension debt scan"` |
| `description` | string | Yes | Detailed task description (self-contained) | `"Scan codebase across 5 dimensions..."` |
| `role` | enum | Yes | Worker role: `scanner`, `assessor`, `planner`, `executor`, `validator` | `"scanner"` |
| `debt_dimension` | string | Yes | Target dimensions: `all`, or specific dimension(s) | `"all"` |
| `pipeline_mode` | enum | Yes | Pipeline mode: `scan`, `remediate`, `targeted` | `"remediate"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"TDSCAN-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"TDSCAN-001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |
#### Computed Columns (Set by Wave Engine)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from pipeline position) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[TDSCAN-001] Found 42 debt items..."` |
#### Output Columns (Set by Agent)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` -> `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Found 42 debt items: 5 critical, 12 high..."` |
| `debt_items_count` | integer | Number of debt items processed | `42` |
| `artifacts_produced` | string | Semicolon-separated artifact paths | `"scan/debt-inventory.json"` |
| `error` | string | Error message if failed | `""` |
---
### exec_mode Values
| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Plan approval, GC loop management |
Interactive tasks appear in master CSV for dependency tracking but are NOT included in wave-{N}.csv files.
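The wave-file exclusion above amounts to a simple filter when building each `wave-{N}.csv`; a sketch over parsed master-CSV rows:

```javascript
// Select the rows that belong in wave-{N}.csv: csv-wave tasks only,
// matching wave number. Interactive tasks stay in the master CSV.
function waveTasks(tasks, wave) {
  return tasks.filter(
    (t) => t.exec_mode === "csv-wave" && Number(t.wave) === wave,
  );
}
```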
---
### Role Registry
| Role | Prefix | Responsibility | inner_loop |
|------|--------|----------------|------------|
| scanner | TDSCAN | Multi-dimension debt scanning | false |
| assessor | TDEVAL | Quantitative severity assessment | false |
| planner | TDPLAN | Phased remediation planning | false |
| executor | TDFIX | Worktree-based debt cleanup | true |
| validator | TDVAL | 4-layer validation | false |
---
### Debt Dimensions
| Dimension | Description | Tools/Methods |
|-----------|-------------|---------------|
| code | Code smells, complexity, duplication | Static analysis, complexity metrics |
| architecture | Coupling, circular deps, layering violations | Dependency graph, coupling analysis |
| testing | Missing tests, low coverage, test quality | Coverage analysis, test quality |
| dependency | Outdated packages, vulnerabilities | Outdated check, vulnerability scan |
| documentation | Missing docs, stale API docs | Doc coverage, API doc check |
---
### Example Data
```csv
id,title,description,role,debt_dimension,pipeline_mode,deps,context_from,exec_mode,wave,status,findings,debt_items_count,artifacts_produced,error
"TDSCAN-001","Multi-dimension debt scan","Scan codebase across code, architecture, testing, dependency, and documentation dimensions. Produce structured debt inventory with severity rankings.\nSession: .workflow/.csv-wave/td-auth-20260308\nScope: src/**","scanner","all","remediate","","","csv-wave","1","pending","","0","",""
"TDEVAL-001","Severity assessment","Evaluate each debt item: impact score (1-5) x cost score (1-5). Classify into priority quadrants: quick-win, strategic, backlog, defer.\nSession: .workflow/.csv-wave/td-auth-20260308\nUpstream: TDSCAN-001 debt inventory","assessor","all","remediate","TDSCAN-001","TDSCAN-001","csv-wave","2","pending","","0","",""
"TDPLAN-001","Remediation planning","Create 3-phase remediation plan: Phase 1 quick-wins, Phase 2 systematic, Phase 3 prevention.\nSession: .workflow/.csv-wave/td-auth-20260308\nUpstream: TDEVAL-001 priority matrix","planner","all","remediate","TDEVAL-001","TDEVAL-001","csv-wave","3","pending","","0","",""
"PLAN-APPROVE","Plan approval gate","Review remediation plan and approve for execution","","all","remediate","TDPLAN-001","TDPLAN-001","interactive","4","pending","","0","",""
"TDFIX-001","Debt cleanup execution","Apply remediation plan actions in worktree: refactor, update deps, add tests, add docs.\nSession: .workflow/.csv-wave/td-auth-20260308\nWorktree: .worktrees/td-auth-20260308","executor","all","remediate","PLAN-APPROVE","TDPLAN-001","csv-wave","5","pending","","0","",""
"TDVAL-001","Cleanup validation","Run 4-layer validation: tests, type check, lint, quality analysis. Compare before/after debt scores.\nSession: .workflow/.csv-wave/td-auth-20260308\nWorktree: .worktrees/td-auth-20260308","validator","all","remediate","TDFIX-001","TDFIX-001","csv-wave","6","pending","","0","",""
```
---
### Column Lifecycle
```
Decomposer (Phase 1)      Wave Engine (Phase 2)     Agent (Execution)
--------------------      ---------------------     -----------------
id                -----＞  id                -----＞  id
title             -----＞  title             -----＞  (reads)
description       -----＞  description       -----＞  (reads)
role              -----＞  role              -----＞  (reads)
debt_dimension    -----＞  debt_dimension    -----＞  (reads)
pipeline_mode     -----＞  pipeline_mode     -----＞  (reads)
deps              -----＞  deps              -----＞  (reads)
context_from      -----＞  context_from      -----＞  (reads)
exec_mode         -----＞  exec_mode         -----＞  (reads)
                          wave              -----＞  (reads)
                          prev_context      -----＞  (reads)
                                                    status
                                                    findings
                                                    debt_items_count
                                                    artifacts_produced
                                                    error
```
---
## Output Schema (JSON)
Agent output via `report_agent_job_result` (csv-wave tasks):
```json
{
"id": "TDSCAN-001",
"status": "completed",
"findings": "Scanned 5 dimensions. Found 42 debt items: 5 critical, 12 high, 15 medium, 10 low. Top issues: complex auth logic (code), circular deps in services (architecture), missing integration tests (testing).",
"debt_items_count": "42",
"artifacts_produced": "scan/debt-inventory.json",
"error": ""
}
```
Interactive tasks report their results as structured text or JSON written to `interactive/{id}-result.json`.
---
## Discovery Types
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `debt_item_found` | `data.file+data.line` | `{id, dimension, severity, file, line, description, suggestion, estimated_effort}` | Tech debt item identified |
| `pattern_found` | `data.pattern_name+data.location` | `{pattern_name, location, description}` | Anti-pattern found |
| `fix_applied` | `data.file+data.change` | `{file, change, lines_modified, debt_id}` | Fix applied |
| `regression_found` | `data.file+data.test` | `{file, test, description, severity}` | Regression in validation |
| `dependency_issue` | `data.package+data.issue` | `{package, current, latest, issue, severity}` | Dependency problem |
| `metric_recorded` | `data.metric` | `{metric, value, dimension, file}` | Quality metric |
### Discovery NDJSON Format
```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"TDSCAN-001","type":"debt_item_found","data":{"id":"TD-001","dimension":"code","severity":"high","file":"src/auth/jwt.ts","line":42,"description":"Cyclomatic complexity 18 exceeds threshold 10","suggestion":"Extract token validation logic","estimated_effort":"medium"}}
{"ts":"2026-03-08T10:05:00Z","worker":"TDSCAN-001","type":"dependency_issue","data":{"package":"express","current":"4.17.1","latest":"4.19.2","issue":"Known security vulnerability CVE-2024-XXXX","severity":"critical"}}
{"ts":"2026-03-08T10:30:00Z","worker":"TDFIX-001","type":"fix_applied","data":{"file":"src/auth/jwt.ts","change":"Extracted validateToken helper","lines_modified":25,"debt_id":"TD-001"}}
```
> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
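The per-type dedup keys in the table above can be applied when any agent loads the board; a sketch of reading and deduplicating `discoveries.ndjson` text:

```javascript
// Dedup key builders per discovery type, mirroring the table above.
const DEDUP_KEYS = {
  debt_item_found: (d) => `${d.file}:${d.line}`,
  pattern_found: (d) => `${d.pattern_name}:${d.location}`,
  fix_applied: (d) => `${d.file}:${d.change}`,
  regression_found: (d) => `${d.file}:${d.test}`,
  dependency_issue: (d) => `${d.package}:${d.issue}`,
  metric_recorded: (d) => `${d.metric}`,
};

// Parse NDJSON text and drop duplicate entries, keeping the first occurrence.
function dedupeDiscoveries(ndjsonText) {
  const seen = new Set();
  const out = [];
  for (const line of ndjsonText.split("\n")) {
    if (!line.trim()) continue;
    const entry = JSON.parse(line);
    const keyFn = DEDUP_KEYS[entry.type];
    const key =
      entry.type + ":" + (keyFn ? keyFn(entry.data) : JSON.stringify(entry.data));
    if (seen.has(key)) continue;
    seen.add(key);
    out.push(entry);
  }
  return out;
}
```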
---
## Cross-Mechanism Context Flow
| Source | Target | Mechanism |
|--------|--------|-----------|
| Scanner findings | Assessor | prev_context from TDSCAN + scan/debt-inventory.json |
| Assessor matrix | Planner | prev_context from TDEVAL + assessment/priority-matrix.json |
| Planner plan | Plan Approver | Interactive spawn reads plan/remediation-plan.md |
| Plan approval | Executor | Interactive result in interactive/PLAN-APPROVE-result.json |
| Executor fixes | Validator | prev_context from TDFIX + fixes/fix-log.json |
| Validator results | GC Loop | Interactive read of validation/validation-report.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |
---
## GC Loop Schema
| Field | Type | Description |
|-------|------|-------------|
| `gc_rounds` | integer | Current GC round (0-based) |
| `max_gc_rounds` | integer | Maximum rounds (3) |
| `fix_task_id` | string | Current fix task ID (TDFIX-fix-N) |
| `val_task_id` | string | Current validation task ID (TDVAL-recheck-N) |
| `regressions` | array | List of regression descriptions |
---
## Validation Rules
| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Role valid | role in {scanner, assessor, planner, executor, validator} | "Invalid role: {role}" |
| Pipeline mode valid | pipeline_mode in {scan, remediate, targeted} | "Invalid pipeline_mode: {mode}" |
| Description non-empty | Every task has description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| GC round limit | gc_rounds <= 3 | "GC round limit exceeded" |
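The no-circular-deps rule above can be checked with Kahn's topological sort over the master CSV rows (semicolon-separated `deps`, as defined in the column table); a sketch:

```javascript
// Returns [] when the dependency graph is acyclic, otherwise the IDs
// involved in (or downstream of) a cycle. Unknown dep IDs are skipped
// here because the "Valid deps" rule flags them separately.
function findCycle(tasks) {
  const indeg = new Map(tasks.map((t) => [t.id, 0]));
  const adj = new Map(tasks.map((t) => [t.id, []]));
  for (const t of tasks) {
    for (const dep of (t.deps || "").split(";").filter(Boolean)) {
      if (!adj.has(dep)) continue;
      adj.get(dep).push(t.id);
      indeg.set(t.id, indeg.get(t.id) + 1);
    }
  }
  // Kahn's algorithm: repeatedly remove zero-in-degree nodes.
  const queue = [...indeg].filter(([, d]) => d === 0).map(([id]) => id);
  let visited = 0;
  while (queue.length) {
    const id = queue.shift();
    visited++;
    for (const next of adj.get(id)) {
      indeg.set(next, indeg.get(next) - 1);
      if (indeg.get(next) === 0) queue.push(next);
    }
  }
  if (visited === tasks.length) return [];
  return [...indeg].filter(([, d]) => d > 0).map(([id]) => id);
}
```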