feat: migrate all codex team skills from spawn_agents_on_csv to spawn_agent + wait_agent architecture

- Delete 21 old team skill directories using the CSV-wave pipeline pattern (100+ files)
- Delete old team-lifecycle (v3) and team-planex-v2
- Create generic team-worker.toml and team-supervisor.toml (replacing tlv4-specific TOMLs)
- Convert 19 team skills from Claude Code format (Agent/SendMessage/TaskCreate)
  to Codex format (spawn_agent/wait_agent/tasks.json/request_user_input)
- Update team-lifecycle-v4 to use generic agent types (team_worker/team_supervisor)
- Convert all coordinator role files: dispatch.md, monitor.md, role.md
- Convert all worker role files: remove run_in_background, fix Bash syntax
- Convert all specs/pipelines.md references
- Final state: 20 team skills, 217 .md files, zero Claude Code API residuals

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: catlog22
Date: 2026-03-24 16:54:48 +08:00
Parent: 54283e5dbb
Commit: 1e560ab8e8
334 changed files with 28996 additions and 35516 deletions


@@ -0,0 +1,71 @@
# Analyze Task
Parse user task -> detect review capabilities -> build dependency graph -> design pipeline.
**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.
## Signal Detection
| Keywords | Capability | Prefix |
|----------|------------|--------|
| scan, lint, static analysis, toolchain | scanner | SCAN |
| review, analyze, audit, findings | reviewer | REV |
| fix, repair, remediate, patch | fixer | FIX |
## Pipeline Mode Detection
| Condition | Mode |
|-----------|------|
| Flag `--fix` | fix-only |
| Flag `--full` | full |
| Flag `-q` or `--quick` | quick |
| (none) | default |
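The flag table above can be sketched as a small helper. This is a minimal sketch; the function name and the flag precedence (checked most-specific first, assuming flags are mutually exclusive) are assumptions, not part of the spec:

```javascript
// Sketch of the mode-detection table. Precedence order is an assumption:
// --fix wins over --full because fix-only is the most specific intent.
function detectPipelineMode(args) {
  if (args.includes('--fix')) return 'fix-only';
  if (args.includes('--full')) return 'full';
  if (args.includes('-q') || args.includes('--quick')) return 'quick';
  return 'default';
}
```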
## Dependency Graph
Natural ordering for review pipeline:
- Tier 0: scanner (toolchain + semantic scan, no upstream dependency)
- Tier 1: reviewer (deep analysis, requires scan findings)
- Tier 2: fixer (apply fixes, requires reviewed findings + user confirm)
## Pipeline Definitions
```
quick: SCAN(quick=true)
default: SCAN -> REV
full: SCAN -> REV -> [user confirm] -> FIX
fix-only: FIX
```
## Complexity Scoring
| Factor | Points |
|--------|--------|
| Per capability | +1 |
| Large target scope (>20 files) | +2 |
| Multiple dimensions | +1 |
| Fix phase included | +1 |
Results: 1-2 Low, 3-4 Medium, 5+ High
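The scoring table translates directly into a pure function (a sketch; the input field names are illustrative, not a fixed schema):

```javascript
// Sketch of the complexity-scoring table above.
function scoreComplexity({ capabilities, fileCount, dimensions, includesFix }) {
  let score = capabilities.length;        // +1 per capability
  if (fileCount > 20) score += 2;         // large target scope
  if (dimensions.length > 1) score += 1;  // multiple dimensions
  if (includesFix) score += 1;            // fix phase included
  const level = score <= 2 ? 'Low' : score <= 4 ? 'Medium' : 'High';
  return { score, level };
}
```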
## Role Minimization
- Cap at 4 roles (coordinator + 3 workers)
- Sequential pipeline: scanner -> reviewer -> fixer
## Output
Write <session>/task-analysis.json:
```json
{
"task_description": "<original>",
"pipeline_mode": "<quick|default|full|fix-only>",
"target": "<path>",
"dimensions": ["sec", "cor", "prf", "mnt"],
"auto_confirm": false,
"capabilities": [{ "name": "<cap>", "prefix": "<PREFIX>" }],
"dependency_graph": { "<TASK-ID>": { "role": "<role>", "blockedBy": ["..."] } },
"roles": [{ "name": "<role>", "prefix": "<PREFIX>", "inner_loop": false }],
"complexity": { "score": 0, "level": "Low|Medium|High" }
}
```


@@ -0,0 +1,90 @@
# Dispatch Tasks
Create task chains from the pipeline mode and write them to tasks.json with correct dependency relationships.
## Workflow
1. Read task-analysis.json -> extract pipeline_mode and parameters
2. Read specs/pipelines.md -> get task registry for selected pipeline
3. Topological sort tasks (respect deps)
4. Validate all owners exist in role registry (SKILL.md)
5. For each task (in order):
- Add task entry to tasks.json `tasks` object (see template below)
- Set deps array with upstream task IDs
6. Update tasks.json metadata with pipeline.tasks_total
7. Validate chain (no orphans, no cycles, all refs valid)
## Task Entry Template
Each task in tasks.json `tasks` object:
```json
{
"<TASK-ID>": {
"title": "<concise title>",
"description": "PURPOSE: <goal> | Success: <criteria>\nTASK:\n - <step 1>\n - <step 2>\nCONTEXT:\n - Session: <session-folder>\n - Target: <target>\n - Dimensions: <dimensions>\n - Upstream artifacts: <list>\nEXPECTED: <artifact path> + <quality criteria>\nCONSTRAINTS: <scope limits>\n---\nInnerLoop: <true|false>\nRoleSpec: <project>/.codex/skills/team-review/roles/<role>/role.md",
"role": "<role-name>",
"prefix": "<PREFIX>",
"deps": ["<upstream-task-id>"],
"status": "pending",
"findings": "",
"error": ""
}
}
```
## Pipeline Task Registry
### default Mode
```
SCAN-001 (scanner): Multi-dimension code scan
deps: [], meta: target=<target>, dimensions=<dims>
REV-001 (reviewer): Deep finding analysis and review
deps: [SCAN-001]
```
### full Mode
```
SCAN-001 (scanner): Multi-dimension code scan
deps: [], meta: target=<target>, dimensions=<dims>
REV-001 (reviewer): Deep finding analysis and review
deps: [SCAN-001]
FIX-001 (fixer): Plan and execute fixes
deps: [REV-001]
```
### fix-only Mode
```
FIX-001 (fixer): Execute fixes from manifest
deps: [], meta: input=<fix-manifest>
```
### quick Mode
```
SCAN-001 (scanner): Quick scan (fast mode)
deps: [], meta: target=<target>, quick=true
```
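The four registries above can be expanded into tasks.json entries with a small builder. A minimal sketch; the entry shape matches the Task Entry Template (title/description fields omitted for brevity):

```javascript
// Sketch of the pipeline task registries above.
const PIPELINES = {
  quick:      [{ id: 'SCAN-001', role: 'scanner',  deps: [] }],
  default:    [{ id: 'SCAN-001', role: 'scanner',  deps: [] },
               { id: 'REV-001',  role: 'reviewer', deps: ['SCAN-001'] }],
  full:       [{ id: 'SCAN-001', role: 'scanner',  deps: [] },
               { id: 'REV-001',  role: 'reviewer', deps: ['SCAN-001'] },
               { id: 'FIX-001',  role: 'fixer',    deps: ['REV-001'] }],
  'fix-only': [{ id: 'FIX-001',  role: 'fixer',    deps: [] }],
};

// Expand a mode into skeleton task entries (title/description omitted).
function buildTasks(mode) {
  const spec = PIPELINES[mode];
  if (!spec) throw new Error(`unknown pipeline mode: ${mode}`);
  return Object.fromEntries(spec.map((t) => [t.id, {
    role: t.role, prefix: t.id.split('-')[0], deps: t.deps,
    status: 'pending', findings: '', error: '',
  }]));
}
```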
## InnerLoop Flag Rules
- true: fixer role (iterative fix cycles)
- false: scanner, reviewer roles
## Dependency Validation
- No orphan tasks (all tasks have valid owner)
- No circular dependencies
- All deps references exist in tasks object
- Session reference in every task description
- RoleSpec reference in every task description
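The structural checks above (unknown references, cycles) can be sketched as follows; text-level checks on Session/RoleSpec references are out of scope here:

```javascript
// Sketch of chain validation: every deps reference must exist in the
// tasks object, and the dependency graph must be acyclic.
function validateChain(tasks) {
  const errors = [];
  for (const [id, t] of Object.entries(tasks)) {
    for (const dep of t.deps || []) {
      if (!(dep in tasks)) errors.push(`${id}: unknown dep ${dep}`);
    }
  }
  const color = {}; // undefined=unvisited, 1=visiting, 2=done
  const visit = (id) => {
    if (color[id] === 2) return false;
    if (color[id] === 1) return true; // back-edge -> cycle
    color[id] = 1;
    const cyclic = (tasks[id].deps || []).some((d) => d in tasks && visit(d));
    color[id] = 2;
    return cyclic;
  };
  if (Object.keys(tasks).some(visit)) errors.push('cycle detected');
  return errors;
}
```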
## Log After Creation
```
mcp__ccw-tools__team_msg({
operation: "log",
session_id: <session-id>,
from: "coordinator",
type: "dispatch_ready",
data: { pipeline: "<mode>", task_count: <N>, target: "<target>" }
})
```


@@ -0,0 +1,185 @@
# Monitor Pipeline
Synchronous pipeline coordination using spawn_agent + wait_agent.
## Constants
- WORKER_AGENT: team_worker
- FAST_ADVANCE_AWARE: true
## Handler Router
| Source | Handler |
|--------|---------|
| "capability_gap" | handleAdapt |
| "check" or "status" | handleCheck |
| "resume" or "continue" | handleResume |
| All tasks completed | handleComplete |
| Default | handleSpawnNext |
## Role-Worker Map
| Prefix | Role | Role Spec | inner_loop |
|--------|------|-----------|------------|
| SCAN-* | scanner | `<project>/.codex/skills/team-review/roles/scanner/role.md` | false |
| REV-* | reviewer | `<project>/.codex/skills/team-review/roles/reviewer/role.md` | false |
| FIX-* | fixer | `<project>/.codex/skills/team-review/roles/fixer/role.md` | true |
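The map above reduces to a prefix lookup. A sketch; the relative role-spec path stands in for the full `<project>/.codex/...` path:

```javascript
// Sketch of the Role-Worker Map: resolve role + inner_loop from a task ID.
const ROLE_MAP = {
  SCAN: { role: 'scanner',  innerLoop: false },
  REV:  { role: 'reviewer', innerLoop: false },
  FIX:  { role: 'fixer',    innerLoop: true },
};

function resolveRole(taskId) {
  const prefix = taskId.split('-')[0];
  const entry = ROLE_MAP[prefix];
  if (!entry) throw new Error(`no role for prefix ${prefix}`);
  return { ...entry, roleSpec: `roles/${entry.role}/role.md` };
}
```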
## handleCheck
Read-only status report from tasks.json, then STOP.
1. Read tasks.json
2. Count tasks by status (pending, in_progress, completed, failed)
Output:
```
[coordinator] Review Pipeline Status
[coordinator] Mode: <pipeline_mode>
[coordinator] Progress: <completed>/<total> (<percent>%)
[coordinator] Pipeline Graph:
SCAN-001: <done|run|wait|deleted> <summary>
REV-001: <done|run|wait|deleted> <summary>
FIX-001: <done|run|wait|deleted> <summary>
done=completed run=running wait=pending deleted=removed from pipeline
[coordinator] Active Agents: <list from active_agents>
[coordinator] Ready to spawn: <subjects>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```
Then STOP.
## handleResume
1. Read tasks.json, check active_agents
2. No active agents -> handleSpawnNext
3. Has active agents -> check each status
- completed -> mark done in tasks.json
- in_progress -> still running
- other -> worker failure -> reset to pending
4. Some completed -> handleSpawnNext
5. All running -> report status, STOP
## handleSpawnNext
Find ready tasks, spawn workers, wait for results, process.
1. Read tasks.json:
- completedTasks: status = completed
- inProgressTasks: status = in_progress
- deletedTasks: status = deleted
- readyTasks: status = pending AND all deps in completedTasks
2. No ready + work in progress -> report waiting, STOP
3. No ready + nothing in progress -> handleComplete
4. Has ready -> take first ready task:
a. Determine role from prefix (use Role-Worker Map)
b. Update task status to in_progress in tasks.json
c. team_msg log -> task_unblocked
d. Spawn team_worker:
```javascript
// 1) Update status in tasks.json
state.tasks[taskId].status = 'in_progress'
// 2) Spawn worker
const agentId = spawn_agent({
agent_type: "team_worker",
items: [
{ type: "text", text: `## Role Assignment
role: ${role}
role_spec: ${skillRoot}/roles/${role}/role.md
session: ${sessionFolder}
session_id: ${sessionId}
requirement: ${taskDescription}
inner_loop: ${innerLoop}` },
{ type: "text", text: `## Current Task
- Task ID: ${taskId}
- Task: ${taskSubject}
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).` }
]
})
// 3) Track agent
state.active_agents[taskId] = { agentId, role, started_at: now }
// 4) Wait for completion
wait_agent({ ids: [agentId] })
// 5) Collect results
state.tasks[taskId].status = 'completed'
delete state.active_agents[taskId]
```
e. Check for checkpoints after worker completes:
- scanner completes -> read meta.json for findings_count:
- findings_count === 0 -> mark remaining REV-*/FIX-* tasks as deleted -> handleComplete
- findings_count > 0 -> proceed to handleSpawnNext
- reviewer completes AND pipeline_mode === 'full':
- autoYes flag set -> write fix-manifest.json, set fix_scope='all' -> handleSpawnNext
- NO autoYes -> request_user_input:
```
question: "<N> findings reviewed. Proceed with fix?"
options:
- "Fix all": set fix_scope='all'
- "Fix critical/high only": set fix_scope='critical,high'
- "Skip fix": mark FIX-* tasks as deleted -> handleComplete
```
Write fix_scope to meta.json, write fix-manifest.json, -> handleSpawnNext
- fixer completes -> handleSpawnNext (checks for completion naturally)
5. Update tasks.json, output summary, STOP
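The ready-task selection in step 1 of handleSpawnNext can be sketched as a pure function over the tasks.json `tasks` object:

```javascript
// Sketch of readyTasks: pending AND every dependency completed.
function findReadyTasks(tasks) {
  const completed = new Set(
    Object.keys(tasks).filter((id) => tasks[id].status === 'completed'));
  return Object.keys(tasks).filter((id) =>
    tasks[id].status === 'pending' &&
    (tasks[id].deps || []).every((d) => completed.has(d)));
}
```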
## handleComplete
Pipeline done. Generate report and completion action.
1. All tasks completed or deleted (no pending, no in_progress)
2. Read final session state from meta.json
3. Generate pipeline summary: mode, target, findings_count, stages_completed, fix results (if applicable), deliverable paths
4. Update session: pipeline_status='complete', completed_at=<timestamp>
5. Read session.completion_action:
- interactive -> request_user_input (Archive/Keep/Export)
- auto_archive -> Archive & Clean
- auto_keep -> Keep Active (status=paused)
## handleAdapt
Capability gap reported mid-pipeline.
1. Parse gap description
2. Check if existing role covers it -> redirect
3. Role count < 4 -> generate dynamic role-spec in <session>/role-specs/
4. Create new task in tasks.json, spawn worker
5. Role count >= 4 -> merge or pause
## Fast-Advance Reconciliation
On every coordinator wake:
1. Read tasks.json for completed tasks
2. Sync active_agents with actual state
3. No duplicate spawns
## State Persistence
After every handler execution:
1. Reconcile active_agents with actual tasks.json states
2. Remove entries for completed/deleted tasks
3. Write updated tasks.json
4. STOP (wait for next event)
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-initialization |
| 0 findings after scan | Delete remaining stages, complete pipeline |
| User declines fix | Delete FIX-* tasks, complete with review-only results |
| Pipeline stall | Check deps chains, report to user |
| Worker failure | Reset task to pending, respawn on next resume |


@@ -0,0 +1,142 @@
# Coordinator Role
Orchestrate team-review: parse target -> detect mode -> dispatch task chain -> monitor -> report.
## Identity
- Name: coordinator | Tag: [coordinator]
- Responsibility: Target parsing, mode detection, task creation/dispatch, stage monitoring, result aggregation
## Boundaries
### MUST
- All output prefixed with `[coordinator]`
- Parse task description and detect pipeline mode
- Create session folder and spawn team_worker agents via spawn_agent
- Dispatch task chain with proper dependencies (tasks.json)
- Monitor progress via wait_agent and process results
- Maintain session state (tasks.json)
- Execute completion action when pipeline finishes
### MUST NOT
- Run analysis tools directly (semgrep, eslint, tsc, etc.)
- Modify source code files
- Perform code review or scanning directly
- Bypass worker roles
- Spawn workers with general-purpose agent (MUST use team_worker)
## Command Execution Protocol
When coordinator needs to execute a specific phase:
1. Read `commands/<command>.md`
2. Follow the workflow defined in the command
3. Commands are inline execution guides, NOT separate agents
4. Execute synchronously, complete before proceeding
## Entry Router
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Status check | Args contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Args contain "resume" or "continue" | -> handleResume (monitor.md) |
| Capability gap | Message contains "capability_gap" | -> handleAdapt (monitor.md) |
| Pipeline complete | All tasks completed | -> handleComplete (monitor.md) |
| Interrupted session | Active session in .workflow/.team/RV-* | -> Phase 0 |
| New session | None of above | -> Phase 1 |
For check/resume/adapt/complete: load @commands/monitor.md, execute handler, STOP.
## Phase 0: Session Resume Check
1. Scan .workflow/.team/RV-*/tasks.json for active/paused sessions
2. No sessions -> Phase 1
3. Single session -> reconcile (read tasks.json, reset in_progress->pending, kick first ready task)
4. Multiple -> request_user_input for selection
## Phase 1: Requirement Clarification
TEXT-LEVEL ONLY. No source code reading.
1. Parse arguments for explicit settings:
| Flag | Mode | Description |
|------|------|-------------|
| `--fix` | fix-only | Skip scan/review, go directly to fixer |
| `--full` | full | scan + review + fix pipeline |
| `-q` / `--quick` | quick | Quick scan only, no review/fix |
| (none) | default | scan + review pipeline |
2. Extract parameters: target, dimensions, auto-confirm flag
3. Clarify if ambiguous (request_user_input for target path)
4. Delegate to @commands/analyze.md
5. Output: task-analysis.json
6. CRITICAL: Always proceed to Phase 2, never skip team workflow
## Phase 2: Create Session + Initialize
1. Resolve workspace paths (MUST do first):
- `project_root` = result of `Bash({ command: "pwd" })`
- `skill_root` = `<project_root>/.codex/skills/team-review`
2. Generate session ID: RV-<slug>-<date>
3. Create session folder structure (scan/, review/, fix/, wisdom/)
4. Read specs/pipelines.md -> select pipeline based on mode
5. Initialize tasks.json:
```json
{
"session_id": "<id>",
"pipeline_mode": "<default|full|fix-only|quick>",
"target": "<target>",
"dimensions": "<dimensions>",
"auto_confirm": false,
"created_at": "<ISO timestamp>",
"active_agents": {},
"tasks": {}
}
```
6. Initialize pipeline via team_msg state_update:
```
mcp__ccw-tools__team_msg({
operation: "log", session_id: "<id>", from: "coordinator",
type: "state_update", summary: "Session initialized",
data: {
pipeline_mode: "<default|full|fix-only|quick>",
pipeline_stages: ["scanner", "reviewer", "fixer"],
target: "<target>",
dimensions: "<dimensions>",
auto_confirm: "<auto_confirm>"
}
})
```
7. Write session meta.json
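The session ID in step 2 can be sketched as below. The slugging rules (lowercase, non-alphanumerics collapsed to `-`, capped at four words) are assumptions; the spec only fixes the `RV-<slug>-<date>` shape:

```javascript
// Sketch of session-ID generation: RV-<slug>-<date>. Slug rules assumed.
function makeSessionId(taskDescription, date = new Date()) {
  const slug = taskDescription.toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')   // collapse non-alphanumerics
    .replace(/^-+|-+$/g, '')       // trim edge dashes
    .split('-').slice(0, 4).join('-'); // cap slug length (assumption)
  const ymd = date.toISOString().slice(0, 10);
  return `RV-${slug}-${ymd}`;
}
```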
## Phase 3: Create Task Chain
Delegate to @commands/dispatch.md:
1. Read specs/pipelines.md for selected pipeline's task registry
2. Add task entries to tasks.json `tasks` object with deps
3. Update tasks.json metadata with pipeline.tasks_total
## Phase 4: Spawn-and-Wait
Delegate to @commands/monitor.md#handleSpawnNext:
1. Find ready tasks (pending + deps resolved)
2. Spawn team_worker agents via spawn_agent, wait_agent for results
3. Output status summary
4. STOP
## Phase 5: Report + Completion Action
1. Generate summary (mode, target, findings_total, by_severity, fix_rate if applicable)
2. Execute completion action per session.completion_action:
- interactive -> request_user_input (Archive/Keep/Export)
- auto_archive -> Archive & Clean
- auto_keep -> Keep Active
## Error Handling
| Error | Resolution |
|-------|------------|
| Task too vague | request_user_input for clarification |
| Session corruption | Attempt recovery, fallback to manual |
| Worker crash | Reset task to pending, respawn |
| Scanner finds 0 findings | Report clean, skip review + fix stages |
| Fix verification fails | Log warning, report partial results |
| Target path invalid | request_user_input for corrected path |


@@ -0,0 +1,76 @@
---
role: fixer
prefix: FIX
inner_loop: true
message_types:
success: fix_complete
error: fix_failed
---
# Code Fixer
Fix code based on reviewed findings. Load manifest, plan fix groups, apply with rollback-on-failure, verify. Code-generation role -- modifies source files.
## Phase 2: Context & Scope Resolution
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| Fix manifest | <session>/fix/fix-manifest.json | Yes |
| Review report | <session>/review/review-report.json | Yes |
| .msg/meta.json | <session>/.msg/meta.json | No |
1. Extract session path, input path from task description
2. Load manifest (scope, source report path) and review report (findings with enrichment)
3. Filter fixable findings: severity in scope AND fix_strategy !== 'skip'
4. If 0 fixable -> report complete immediately
5. Detect quick path: findings <= 5 AND no cross-file dependencies
6. Detect verification tools: tsc (tsconfig.json), eslint (package.json), jest (package.json), pytest (pyproject.toml), semgrep (semgrep available)
7. Load wisdom files from `<session>/wisdom/`
## Phase 3: Plan + Execute
### 3A: Plan Fixes (deterministic, no CLI)
1. Group findings by primary file
2. Merge groups with cross-file dependencies (union-find)
3. Topological sort within each group (respect fix_dependencies, append cycles at end)
4. Sort groups by max severity (critical first)
5. Determine execution path: quick_path (<=5 findings, <=1 group) or standard
6. Write `<session>/fix/fix-plan.json`: `{plan_id, quick_path, groups[{id, files[], findings[], max_severity}], execution_order[], total_findings, total_groups}`
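Plan step 3 (dependency ordering with cycles appended at the end) can be sketched as follows; the finding shape is taken from the reviewer's enrichment fields:

```javascript
// Sketch of fix ordering: dependencies first, cycle members appended last.
function orderFindings(findings) {
  const byId = Object.fromEntries(findings.map((f) => [f.id, f]));
  const ordered = [], state = {}; // 1=visiting, 2=done
  const cyclic = new Set();
  function visit(f, stack) {
    if (state[f.id] === 2) return;
    if (state[f.id] === 1) { stack.forEach((id) => cyclic.add(id)); return; }
    state[f.id] = 1;
    for (const dep of f.fix_dependencies || []) {
      if (byId[dep]) visit(byId[dep], [...stack, f.id]);
    }
    state[f.id] = 2;
    if (!cyclic.has(f.id)) ordered.push(f.id);
  }
  findings.forEach((f) => visit(f, []));
  return [...ordered, ...cyclic];
}
```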
### 3B: Execute Fixes
**Quick path**: Single code-developer agent for all findings.
**Standard path**: One code-developer agent per group, in execution_order.
Agent prompt includes: the finding list (dependency-sorted), file contents (truncated to 8K), and these critical rules:
1. Apply each fix using Edit tool in order
2. After each fix, run related tests
3. Tests PASS -> finding is "fixed"
4. Tests FAIL -> `git checkout -- {file}` -> mark "failed" -> continue
5. No retry on failure. Rollback and move on
6. If finding depends on previously failed finding -> mark "skipped"
Agent returns JSON: `{results:[{id, status: fixed|failed|skipped, file, error?}]}`
Fallback: check git diff per file if no structured output.
Write `<session>/fix/execution-results.json`: `{fixed[], failed[], skipped[]}`
## Phase 4: Post-Fix Verification
1. Run available verification tools on modified files:
| Tool | Command | Pass Criteria |
|------|---------|---------------|
| tsc | `npx tsc --noEmit` | 0 errors |
| eslint | `npx eslint <files>` | 0 errors |
| jest | `npx jest --passWithNoTests` | Tests pass |
| pytest | `pytest --tb=short` | Tests pass |
| semgrep | `semgrep --config auto <files> --json` | 0 results |
2. If verification fails critically -> rollback last batch
3. Write `<session>/fix/verify-results.json`
4. Generate `<session>/fix/fix-summary.json`: `{fix_id, fix_date, scope, total, fixed, failed, skipped, fix_rate, verification}`
5. Generate `<session>/fix/fix-summary.md` (human-readable)
6. Update `<session>/.msg/meta.json` with fix results
7. Contribute discoveries to `<session>/wisdom/` files


@@ -0,0 +1,68 @@
---
role: reviewer
prefix: REV
inner_loop: false
message_types:
success: review_complete
error: error
---
# Finding Reviewer
Deep analysis of scan findings: triage, enrichment (root cause, impact, optimization) via CLI fan-out, cross-correlation, and structured review-report generation. Read-only -- never modifies source code.
## Phase 2: Context & Triage
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| Scan results | <session>/scan/scan-results.json | Yes |
| .msg/meta.json | <session>/.msg/meta.json | No |
1. Extract session path, input path, dimensions from task description
2. Load review specs: Run `ccw spec load --category review` for review standards, checklists, and approval gates
3. Load scan results. If missing or empty -> report clean, complete immediately
4. Load wisdom files from `<session>/wisdom/`
5. Triage findings into two buckets:
| Bucket | Criteria | Action |
|--------|----------|--------|
| deep_analysis | severity in [critical, high, medium], max 15, sorted critical-first | Enrich with root cause, impact, optimization |
| pass_through | remaining (low, info, or overflow) | Include in report without enrichment |
If deep_analysis empty -> skip Phase 3, go to Phase 4.
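The bucketing rule above can be sketched as a pure function (the 15-finding cap is taken from the table; the severity rank order is the obvious critical-first reading):

```javascript
// Sketch of triage: at most `cap` critical-first enrichable findings go to
// deep_analysis; everything else passes through unenriched.
const SEV_RANK = { critical: 0, high: 1, medium: 2, low: 3, info: 4 };

function triage(findings, cap = 15) {
  const eligible = findings
    .filter((f) => ['critical', 'high', 'medium'].includes(f.severity))
    .sort((a, b) => SEV_RANK[a.severity] - SEV_RANK[b.severity]);
  const deep = eligible.slice(0, cap);
  const deepIds = new Set(deep.map((f) => f.id));
  return {
    deep_analysis: deep,
    pass_through: findings.filter((f) => !deepIds.has(f.id)),
  };
}
```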
## Phase 3: Deep Analysis (CLI Fan-out)
Split deep_analysis into two domain groups, run parallel CLI agents:
| Group | Dimensions | Focus |
|-------|-----------|-------|
| A | Security + Correctness | Root cause tracing, fix dependencies, blast radius |
| B | Performance + Maintainability | Optimization approaches, refactor tradeoffs |
If either group empty -> skip that agent.
Build prompt per group requesting 6 enrichment fields per finding:
- `root_cause`: `{description, related_findings[], is_symptom}`
- `impact`: `{scope: low/medium/high, affected_files[], blast_radius}`
- `optimization`: `{approach, alternative, tradeoff}`
- `fix_strategy`: minimal / refactor / skip
- `fix_complexity`: low / medium / high
- `fix_dependencies`: finding IDs that must be fixed first
Execute via `ccw cli --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause` (fallback: qwen -> codex). Parse JSON array responses, merge with originals (CLI-enriched replace originals, unenriched get defaults). Write `<session>/review/enriched-findings.json`.
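The merge step above (enriched entries replace originals, the rest get defaults) can be sketched as below; the concrete default values for unenriched findings are assumptions:

```javascript
// Sketch of enrichment merge: CLI-enriched findings replace originals,
// unenriched ones get conservative defaults (values are assumptions).
function mergeEnrichment(originals, enriched) {
  const byId = new Map(enriched.map((e) => [e.id, e]));
  return originals.map((f) => byId.get(f.id) || {
    ...f,
    root_cause: { description: 'not analyzed', related_findings: [], is_symptom: false },
    impact: { scope: 'low', affected_files: [f.location?.file].filter(Boolean), blast_radius: 'unknown' },
    fix_strategy: 'minimal',
    fix_complexity: 'medium',
    fix_dependencies: [],
  });
}
```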
## Phase 4: Report Generation
1. Combine enriched + pass_through findings
2. Cross-correlate:
- **Critical files**: file appears in >=2 dimensions -> list with finding_count, severities
- **Root cause groups**: cluster findings sharing related_findings -> identify primary
- **Optimization suggestions**: from root cause groups + standalone enriched findings
3. Compute metrics: by_dimension, by_severity, dimension_severity_matrix, fixable_count, auto_fixable_count
4. Write `<session>/review/review-report.json`: `{review_id, review_date, findings[], critical_files[], optimization_suggestions[], root_cause_groups[], summary}`
5. Write `<session>/review/review-report.md`: Executive summary, metrics matrix (dimension x severity), critical/high findings table, critical files list, optimization suggestions, recommended fix scope
6. Update `<session>/.msg/meta.json` with review summary
7. Contribute discoveries to `<session>/wisdom/` files


@@ -0,0 +1,71 @@
---
role: scanner
prefix: SCAN
inner_loop: false
message_types:
success: scan_complete
error: error
---
# Code Scanner
Toolchain + LLM semantic scan producing structured findings. Static analysis tools in parallel, then LLM for issues tools miss. Read-only -- never modifies source code. 4-dimension system: security (SEC), correctness (COR), performance (PRF), maintainability (MNT).
## Phase 2: Context & Toolchain Detection
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/.msg/meta.json | No |
1. Extract session path, target, dimensions, quick flag from task description
2. Resolve target files (glob pattern or directory -> `**/*.{ts,tsx,js,jsx,py,go,java,rs}`)
3. If no source files found -> report empty, complete task cleanly
4. Detect toolchain availability:
| Tool | Detection | Dimension |
|------|-----------|-----------|
| tsc | `tsconfig.json` exists | COR |
| eslint | `.eslintrc*` or `eslint` in package.json | COR/MNT |
| semgrep | `.semgrep.yml` exists | SEC |
| ruff | `pyproject.toml` + ruff available | SEC/COR/MNT |
| mypy | mypy available + `pyproject.toml` | COR |
| npmAudit | `package-lock.json` exists | SEC |
5. Load wisdom files from `<session>/wisdom/` if they exist
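The detection table in step 4 can be sketched as a pure function over the set of files present in the target; binary availability checks ("ruff available", "mypy available") and the eslint-in-package.json case are omitted here:

```javascript
// Sketch of toolchain detection from marker files (availability checks
// and package.json inspection omitted).
function detectToolchain(files) {
  const has = (name) => files.some((f) => f === name || f.startsWith(name));
  const tools = [];
  if (has('tsconfig.json')) tools.push('tsc');
  if (has('.eslintrc')) tools.push('eslint');       // matches .eslintrc.*
  if (has('.semgrep.yml')) tools.push('semgrep');
  if (has('pyproject.toml')) tools.push('ruff', 'mypy');
  if (has('package-lock.json')) tools.push('npmAudit');
  return tools;
}
```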
## Phase 3: Scan Execution
**Quick mode**: Single CLI call with analysis mode, max 20 findings, skip toolchain.
**Standard mode** (sequential):
### 3A: Toolchain Scan
Run detected tools in parallel via Bash backgrounding. Each tool writes to `<session>/scan/tmp/<tool>.{json|txt}`. After `wait`, parse each output into normalized findings:
- tsc: `file(line,col): error TSxxxx: msg` -> dimension=correctness, source=tool:tsc
- eslint: JSON array -> severity 2=correctness/high, else=maintainability/medium
- semgrep: `{results[]}` -> dimension=security, severity from extra.severity
- ruff: `[{code,message,filename}]` -> S*=security, F*/B*=correctness, else=maintainability
- mypy: `file:line: error: msg [code]` -> dimension=correctness
- npm audit: `{vulnerabilities:{}}` -> dimension=security, category=dependency
Write `<session>/scan/toolchain-findings.json`.
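As one example of the normalization rules above, the tsc line format can be parsed as follows. The regex and the assigned severity are assumptions; only the `dimension=correctness, source=tool:tsc` mapping comes from the spec:

```javascript
// Sketch of the tsc normalization rule: `file(line,col): error TSxxxx: msg`.
// Severity 'high' for compiler errors is an assumption.
function parseTscLine(line) {
  const m = line.match(/^(.+)\((\d+),(\d+)\): error (TS\d+): (.*)$/);
  if (!m) return null;
  return {
    dimension: 'correctness', source: 'tool:tsc', severity: 'high',
    location: { file: m[1], line: Number(m[2]) },
    category: m[4], title: m[5],
  };
}
```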
### 3B: Semantic Scan (LLM via CLI)
Build prompt with target file patterns, toolchain dedup summary, and per-dimension focus areas:
- SEC: Business logic vulnerabilities, privilege escalation, sensitive data flow, auth bypass
- COR: Logic errors, unhandled exception paths, state management bugs, race conditions
- PRF: Algorithm complexity, N+1 queries, unnecessary sync, memory leaks, missing caching
- MNT: Architectural coupling, abstraction leaks, convention violations, dead code
Execute via `ccw cli --tool gemini --mode analysis --rule analysis-review-code-quality` (fallback: qwen -> codex). Parse JSON array response, validate required fields (dimension, title, location.file), enforce per-dimension limit (max 5 each), filter minimum severity (medium+). Write `<session>/scan/semantic-findings.json`.
## Phase 4: Aggregate & Output
1. Merge toolchain + semantic findings, deduplicate (same file + line + dimension = duplicate)
2. Assign dimension-prefixed IDs: SEC-001, COR-001, PRF-001, MNT-001
3. Write `<session>/scan/scan-results.json` with schema: `{scan_date, target, dimensions, quick_mode, total_findings, by_severity, by_dimension, findings[]}`
4. Each finding: `{id, dimension, category, severity, title, description, location:{file,line}, source, suggested_fix, effort, confidence}`
5. Update `<session>/.msg/meta.json` with scan summary (findings_count, by_severity, by_dimension)
6. Contribute discoveries to `<session>/wisdom/` files
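Steps 1-2 above (dedupe on file + line + dimension, then dimension-prefixed IDs) can be sketched as:

```javascript
// Sketch of merge + dedupe + ID assignment. Toolchain findings are kept
// over semantic duplicates because they come first in the merge order.
const DIM_PREFIX = {
  security: 'SEC', correctness: 'COR',
  performance: 'PRF', maintainability: 'MNT',
};

function mergeFindings(toolchain, semantic) {
  const seen = new Set(), counters = {}, out = [];
  for (const f of [...toolchain, ...semantic]) {
    const key = `${f.location.file}:${f.location.line}:${f.dimension}`;
    if (seen.has(key)) continue; // same file+line+dimension = duplicate
    seen.add(key);
    const prefix = DIM_PREFIX[f.dimension];
    counters[prefix] = (counters[prefix] || 0) + 1;
    out.push({ ...f, id: `${prefix}-${String(counters[prefix]).padStart(3, '0')}` });
  }
  return out;
}
```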