---
role: scanner
prefix: TDSCAN
inner_loop: false
message_types: [state_update]
---
# Tech Debt Scanner
Multi-dimension tech debt scanner. Scan the codebase across five dimensions (code, architecture, testing, dependency, documentation) and produce a structured debt inventory with severity rankings.
## Phase 2: Context & Environment Detection
| Input | Source | Required |
|-------|--------|----------|
| Scan scope | task description (regex: `scope:\s*(.+)`) | No (default: `**/*`) |
| Session path | task description (regex: `session:\s*(.+)`) | Yes |
| .msg/meta.json | `<session>/.msg/meta.json` | Yes |
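Both regexes can be applied directly to the incoming task description; a minimal sketch (the `extract_inputs` name and the returned dict shape are illustrative, not part of this spec):
```python
import re

def extract_inputs(task_description: str) -> dict:
    """Pull scan scope and session path out of the task description,
    using the two regexes from the table above."""
    scope = re.search(r"scope:\s*(.+)", task_description)
    session = re.search(r"session:\s*(.+)", task_description)
    if session is None:
        raise ValueError("session path is required but was not found")
    return {
        "scope": scope.group(1).strip() if scope else "**/*",  # default scope
        "session": session.group(1).strip(),
    }
```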
1. Extract session path and scan scope from task description
2. Load debug specs: Run `ccw spec load --category debug` for known issues, workarounds, and root-cause notes
3. Read .msg/meta.json for team context
4. Detect project type and framework (see the detection sketch after this list):
| Signal File | Project Type |
|-------------|-------------|
| package.json + React/Vue/Angular | Frontend Node |
| package.json + Express/Fastify/NestJS | Backend Node |
| pyproject.toml / requirements.txt | Python |
| go.mod | Go |
| No detection | Generic |
5. Determine scan dimensions (default: code, architecture, testing, dependency, documentation)
6. Detect perspectives from task description:
| Condition | Perspective |
|-----------|-------------|
| `security\|auth\|inject\|xss` | security |
| `performance\|speed\|optimize` | performance |
| `quality\|clean\|maintain\|debt` | code-quality |
| `architect\|pattern\|structure` | architecture |
| Default | code-quality + architecture |
7. Assess complexity:
| Score | Complexity | Strategy |
|-------|------------|----------|
| >= 4 | High | Triple Fan-out: CLI explore + CLI 5 dimensions + multi-perspective Gemini |
| 2-3 | Medium | Dual Fan-out: CLI explore + CLI 3 dimensions |
| 0-1 | Low | Inline: ACE search + Grep |
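A minimal detection sketch under stated assumptions: the helper names are illustrative, only `dependencies` in package.json is inspected for the frontend/backend split, and the complexity score is taken as an input because this spec does not define how it is computed.
```python
import json
import re
from pathlib import Path

def detect_project_type(root: Path) -> str:
    """Map signal files to a project type, per the table above."""
    pkg = root / "package.json"
    if pkg.exists():
        deps = json.loads(pkg.read_text()).get("dependencies", {})
        if any(d in deps for d in ("react", "vue", "@angular/core")):
            return "Frontend Node"
        if any(d in deps for d in ("express", "fastify", "@nestjs/core")):
            return "Backend Node"
    if (root / "pyproject.toml").exists() or (root / "requirements.txt").exists():
        return "Python"
    if (root / "go.mod").exists():
        return "Go"
    return "Generic"

# Perspective triggers, per the table above; all matching perspectives
# are collected, not just the first hit.
PERSPECTIVE_RULES = [
    (r"security|auth|inject|xss", "security"),
    (r"performance|speed|optimize", "performance"),
    (r"quality|clean|maintain|debt", "code-quality"),
    (r"architect|pattern|structure", "architecture"),
]

def detect_perspectives(task_description: str) -> list[str]:
    hits = [p for pattern, p in PERSPECTIVE_RULES
            if re.search(pattern, task_description, re.IGNORECASE)]
    return hits or ["code-quality", "architecture"]  # default pair

def pick_strategy(score: int) -> str:
    """Translate a complexity score into a scan strategy."""
    if score >= 4:
        return "triple-fanout"   # High
    if score >= 2:
        return "dual-fanout"     # Medium
    return "inline"              # Low
```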
## Phase 3: Multi-Dimension Scan
**Low Complexity** (inline):
- Use `mcp__ace-tool__search_context` for code smells, TODO/FIXME, deprecated APIs, complex functions, dead code, missing tests
- Classify findings into dimensions
**Medium/High Complexity** (Fan-out):
- Fan-out A: CLI exploration (structure, patterns, and dependencies angles) via `ccw cli --tool gemini --mode analysis`
- Fan-out B: CLI dimension analysis (parallel gemini runs, one per dimension: code, architecture, testing, dependency, documentation)
- Fan-out C (High only): Multi-perspective Gemini analysis (security, performance, code-quality, architecture)
- Fan-in: Merge results, cross-deduplicate by file:line, boost severity for multi-source findings
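A minimal fan-in sketch, assuming each fan-out layer returns findings as dicts carrying at least `file`, `line`, and `severity` keys; the one-step severity boost and the `sources` counter are illustrative choices, not mandated by this spec:
```python
SEVERITY_LADDER = ["low", "medium", "high", "critical"]

def merge_findings(layers: list[list[dict]]) -> list[dict]:
    """Cross-deduplicate by file:line; a finding reported by more than
    one fan-out layer gets its severity boosted one step."""
    merged: dict[str, dict] = {}
    for layer in layers:
        for finding in layer:
            key = f"{finding['file']}:{finding['line']}"
            if key not in merged:
                merged[key] = {**finding, "sources": 1}
                continue
            kept = merged[key]
            kept["sources"] += 1
            step = SEVERITY_LADDER.index(kept["severity"]) + 1
            kept["severity"] = SEVERITY_LADDER[min(step, len(SEVERITY_LADDER) - 1)]
    return list(merged.values())
```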
**Standardize each finding**:
| Field | Description |
|-------|-------------|
| `id` | `TD-NNN` (sequential) |
| `dimension` | code, architecture, testing, dependency, documentation |
| `severity` | critical, high, medium, low |
| `file` | File path |
| `line` | Line number |
| `description` | Issue description |
| `suggestion` | Fix suggestion |
| `estimated_effort` | small, medium, large, unknown |
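The same schema expressed as a type, for reference; a sketch whose field names and value sets mirror the table (the `Finding` name itself is illustrative):
```python
from typing import Literal, TypedDict

class Finding(TypedDict):
    id: str  # "TD-NNN", assigned sequentially
    dimension: Literal["code", "architecture", "testing",
                       "dependency", "documentation"]
    severity: Literal["critical", "high", "medium", "low"]
    file: str
    line: int
    description: str
    suggestion: str
    estimated_effort: Literal["small", "medium", "large", "unknown"]
```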
## Phase 4: Aggregate & Save
1. Deduplicate findings across Fan-out layers (file:line key), merge cross-references
2. Sort by severity (cross-referenced items boosted)
3. Write `<session>/scan/debt-inventory.json` with scan_date, dimensions, total_items, by_dimension, by_severity, items
4. Update .msg/meta.json with `debt_inventory` array and `debt_score_before` count
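A sketch of steps 2-3, assuming findings already deduplicated as in the fan-in sketch above; `save_inventory` and the within-severity ordering tiebreak are illustrative:
```python
import json
from collections import Counter
from datetime import date
from pathlib import Path

def save_inventory(session: Path, findings: list[dict]) -> None:
    # Severity-first ordering; within a level, cross-referenced
    # (multi-source) findings float to the top.
    rank = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    findings.sort(key=lambda f: (rank[f["severity"]], -f.get("sources", 1)))
    inventory = {
        "scan_date": date.today().isoformat(),
        "dimensions": sorted({f["dimension"] for f in findings}),
        "total_items": len(findings),
        "by_dimension": dict(Counter(f["dimension"] for f in findings)),
        "by_severity": dict(Counter(f["severity"] for f in findings)),
        "items": findings,
    }
    out = session / "scan" / "debt-inventory.json"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(inventory, indent=2))
```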