| role | prefix | inner_loop | message_types |
|---|---|---|---|
| scanner | SCAN | false | |
# Code Scanner

Toolchain + LLM semantic scan producing structured findings. Runs static analysis tools in parallel, then an LLM pass for issues the tools miss. Read-only -- never modifies source code. 4-dimension system: security (SEC), correctness (COR), performance (PRF), maintainability (MNT).
## Phase 2: Context & Toolchain Detection
| Input | Source | Required |
|---|---|---|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | `<session>/.msg/meta.json` | No |
- Extract session path, target, dimensions, quick flag from task description
- Resolve target files (glob pattern or directory -> `**/*.{ts,tsx,js,jsx,py,go,java,rs}`)
- If no source files found -> report empty, complete task cleanly
- Detect toolchain availability:
| Tool | Detection | Dimension |
|---|---|---|
| tsc | tsconfig.json exists | COR |
| eslint | .eslintrc* or eslint in package.json | COR/MNT |
| semgrep | .semgrep.yml exists | SEC |
| ruff | pyproject.toml + ruff available | SEC/COR/MNT |
| mypy | mypy available + pyproject.toml | COR |
| npmAudit | package-lock.json exists | SEC |
- Load wisdom files from `<session>/wisdom/` if they exist
## Phase 3: Scan Execution
Quick mode: Single CLI call with analysis mode, max 20 findings, skip toolchain.
Standard mode (sequential):
### 3A: Toolchain Scan
Run detected tools in parallel via Bash backgrounding. Each tool writes to `<session>/scan/tmp/<tool>.{json|txt}`. After `wait`, parse each output into normalized findings:
- tsc: `file(line,col): error TSxxxx: msg` -> dimension=correctness, source=tool:tsc
- eslint: JSON array -> severity 2=correctness/high, else=maintainability/medium
- semgrep: `{results[]}` -> dimension=security, severity from extra.severity
- ruff: `[{code,message,filename}]` -> S*=security, F*/B*=correctness, else=maintainability
- mypy: `file:line: error: msg [code]` -> dimension=correctness
- npm audit: `{vulnerabilities:{}}` -> dimension=security, category=dependency
Write `<session>/scan/toolchain-findings.json`.
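As one concrete example of the normalization step, the tsc parser turns each diagnostic line into the finding shape used downstream. A hedged sketch (the regex and `parse_tsc` name are illustrative; the spec only fixes `dimension=correctness` and `source=tool:tsc` for tsc, so the severity value here is an assumption):

```python
import re

# Matches: file(line,col): error TSxxxx: msg
TSC_LINE = re.compile(
    r"^(?P<file>.+?)\((?P<line>\d+),(?P<col>\d+)\): error (?P<code>TS\d+): (?P<msg>.*)$"
)


def parse_tsc(output: str) -> list[dict]:
    """Normalize tsc stdout into findings: dimension=correctness, source=tool:tsc."""
    findings = []
    for raw in output.splitlines():
        m = TSC_LINE.match(raw.strip())
        if not m:
            continue  # skip summary lines and anything non-diagnostic
        findings.append({
            "dimension": "correctness",
            "severity": "high",  # assumption: spec does not fix tsc severity
            "title": f"{m['code']}: {m['msg']}",
            "location": {"file": m["file"], "line": int(m["line"])},
            "source": "tool:tsc",
        })
    return findings
```

The other five parsers follow the same pattern: read the tool's native output, emit dicts keyed the same way so Phase 4 can merge them uniformly.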
### 3B: Semantic Scan (LLM via CLI)
Build prompt with target file patterns, toolchain dedup summary, and per-dimension focus areas:
- SEC: Business logic vulnerabilities, privilege escalation, sensitive data flow, auth bypass
- COR: Logic errors, unhandled exception paths, state management bugs, race conditions
- PRF: Algorithm complexity, N+1 queries, unnecessary sync, memory leaks, missing caching
- MNT: Architectural coupling, abstraction leaks, convention violations, dead code
Execute via `ccw cli --tool gemini --mode analysis --rule analysis-review-code-quality` (fallback: qwen -> codex). Parse the JSON array response, validate required fields (dimension, title, location.file), enforce the per-dimension limit (max 5 each), and filter to minimum severity (medium+). Write `<session>/scan/semantic-findings.json`.
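The validation and filtering described above (required fields, per-dimension cap of 5, medium+ floor) can be sketched as a single pass over the parsed array. A minimal illustration; the `filter_semantic` helper name and the severity ordering are assumptions:

```python
# Assumed severity ordering; the spec only names the "medium+" floor.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}


def filter_semantic(findings: list[dict], per_dim_limit: int = 5,
                    min_severity: str = "medium") -> list[dict]:
    """Keep findings that have the required fields, meet the severity
    floor, and fit within the per-dimension cap."""
    kept: list[dict] = []
    per_dim: dict[str, int] = {}
    floor = SEVERITY_RANK[min_severity]
    for f in findings:
        # Required fields per the spec: dimension, title, location.file
        if not (f.get("dimension") and f.get("title")
                and f.get("location", {}).get("file")):
            continue
        if SEVERITY_RANK.get(f.get("severity", ""), -1) < floor:
            continue
        dim = f["dimension"]
        if per_dim.get(dim, 0) >= per_dim_limit:
            continue  # per-dimension limit reached; drop the overflow
        per_dim[dim] = per_dim.get(dim, 0) + 1
        kept.append(f)
    return kept
```

Findings with an unknown or missing severity fail the floor check, which matches the spirit of validating LLM output before trusting it.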
## Phase 4: Aggregate & Output
- Merge toolchain + semantic findings, deduplicate (same file + line + dimension = duplicate)
- Assign dimension-prefixed IDs: SEC-001, COR-001, PRF-001, MNT-001
- Write `<session>/scan/scan-results.json` with schema: `{scan_date, target, dimensions, quick_mode, total_findings, by_severity, by_dimension, findings[]}`
- Each finding: `{id, dimension, category, severity, title, description, location:{file,line}, source, suggested_fix, effort, confidence}`
- Update `<session>/.msg/meta.json` with scan summary (findings_count, by_severity, by_dimension)
- Contribute discoveries to `<session>/wisdom/` files
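The merge step above boils down to deduplicating on (file, line, dimension) and then handing out dimension-prefixed sequential IDs. A hedged sketch; the `merge_findings` name is an assumption, while the `DIM_PREFIX` mapping mirrors the SEC/COR/PRF/MNT scheme:

```python
DIM_PREFIX = {"security": "SEC", "correctness": "COR",
              "performance": "PRF", "maintainability": "MNT"}


def merge_findings(toolchain: list[dict], semantic: list[dict]) -> list[dict]:
    """Merge two finding lists, dedupe on (file, line, dimension),
    and assign SEC-001-style IDs in encounter order."""
    seen: set = set()
    counters: dict[str, int] = {}
    merged: list[dict] = []
    # Toolchain findings come first, so on a duplicate the tool-sourced
    # finding wins over the semantic one.
    for f in toolchain + semantic:
        key = (f["location"]["file"], f["location"].get("line"), f["dimension"])
        if key in seen:
            continue
        seen.add(key)
        prefix = DIM_PREFIX[f["dimension"]]
        counters[prefix] = counters.get(prefix, 0) + 1
        merged.append({**f, "id": f"{prefix}-{counters[prefix]:03d}"})
    return merged
```

The `by_severity` / `by_dimension` summary counts in scan-results.json then follow from a simple tally over the merged list.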