Mirror of https://github.com/catlog22/Claude-Code-Workflow.git
Synced 2026-03-06 16:31:12 +08:00

Refactor team collaboration skills and update documentation

- Renamed `team-lifecycle-v5` to `team-lifecycle` across documentation files for consistency.
- Updated references in code examples and usage sections to reflect the new skill name.
- Added a command file for the `monitor` functionality in the `team-iterdev` skill, detailing the coordinator's monitoring events and task management.
- Introduced new components for dynamic pipeline visualization and session coordinate display in the frontend.
- Implemented utility functions for pipeline stage detection and status derivation based on message history.
- Enhanced the team role panel to map members to their respective pipeline roles with status indicators.
- Updated the Chinese documentation to reflect the changes in skill names and descriptions.
@@ -1,7 +1,7 @@
---
name: team-review
description: "Unified team skill for code scanning, vulnerability review, optimization suggestions, and automated fix. 4-role team: coordinator, scanner, reviewer, fixer. Triggers on team-review."
allowed-tools: Task, AskUserQuestion, TaskCreate, TaskUpdate, TaskList, TaskGet, Read, Write, Edit, Bash, Glob, Grep, Skill, mcp__ace-tool__search_context
description: "Unified team skill for code review. Uses team-worker agent architecture with role-spec files. 3-role pipeline: scanner, reviewer, fixer. Triggers on team-review."
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*)
---

# Team Review
@@ -11,23 +11,23 @@ Unified team skill: code scanning, vulnerability review, optimization suggestion

## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                Skill(skill="team-review")                   │
│          args="<target>" or args="--role=xxx"               │
└──────────────────────────┬──────────────────────────────────┘
                           │ Role Router
              ┌──── --role present? ────┐
              │ NO                      │ YES
              ↓                         ↓
     Orchestration Mode            Role Dispatch
    (auto → coordinator)         (route to role.md)
              │
    ┌────┴────┬───────────┬───────────┐
    ↓         ↓           ↓           ↓
┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
│ coord  │ │scanner │ │reviewer│ │ fixer  │
│ (RC-*) │ │(SCAN-*)│ │(REV-*) │ │(FIX-*) │
└────────┘ └────────┘ └────────┘ └────────┘
```

```
+---------------------------------------------------+
|            Skill(skill="team-review")             |
|             args="<task-description>"             |
+-------------------+-------------------------------+
                    |
     Orchestration Mode (auto -> coordinator)
                    |
            Coordinator (inline)
          Phase 0-5 orchestration
                    |
        +-------+-------+-------+
        v       v       v
      [tw]    [tw]    [tw]
     scanner reviewer fixer

(tw) = team-worker agent
```
## Role Router

@@ -38,12 +38,12 @@ Parse `$ARGUMENTS` to extract `--role`. If absent → Orchestration Mode (auto r

### Role Registry

| Role | File | Task Prefix | Type | Compact |
|------|------|-------------|------|---------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | RC-* | orchestrator | **⚠️ Must re-read after compaction** |
| scanner | [roles/scanner/role.md](roles/scanner/role.md) | SCAN-* | read-only-analysis | Must re-read after compaction |
| reviewer | [roles/reviewer/role.md](roles/reviewer/role.md) | REV-* | read-only-analysis | Must re-read after compaction |
| fixer | [roles/fixer/role.md](roles/fixer/role.md) | FIX-* | code-generation | Must re-read after compaction |

| Role | Spec | Task Prefix | Inner Loop |
|------|------|-------------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | - |
| scanner | [role-specs/scanner.md](role-specs/scanner.md) | SCAN-* | false |
| reviewer | [role-specs/reviewer.md](role-specs/reviewer.md) | REV-* | false |
| fixer | [role-specs/fixer.md](role-specs/fixer.md) | FIX-* | true |

> **⚠️ COMPACT PROTECTION**: Role files are execution documents, not reference material. When context compression leaves only a summary of the role instructions, you **must immediately `Read` the corresponding role.md to reload it before continuing**. Do not execute any Phase based on the summary alone.
@@ -204,9 +204,12 @@ Cross-task knowledge accumulation. Coordinator creates `wisdom/` directory at se

Coordinator additional restrictions: Do not write/modify code directly, do not call implementation subagents, do not execute analysis/tests/reviews directly.

| Component | Location |
|-----------|----------|
| Session directory | `.workflow/.team-review/<workflow_id>/` |

### Team Configuration

| Setting | Value |
|---------|-------|
| Team name | review |
| Session directory | `.workflow/.team/RV-<slug>-<date>/` |
| Shared memory | `.msg/meta.json` in session dir |
| Team config | `specs/team-config.json` |
| Finding schema | `specs/finding-schema.json` |
@@ -216,14 +219,35 @@ Coordinator additional restrictions: Do not write/modify code directly, do not c

## Coordinator Spawn Template

When the coordinator spawns workers, use Skill invocation:

```
Skill(skill="team-review", args="--role=scanner <target> <flags>")
Skill(skill="team-review", args="--role=reviewer --input <scan-output> <flags>")
Skill(skill="team-review", args="--role=fixer --input <fix-manifest> <flags>")
```

### v5 Worker Spawn (all roles)

When the coordinator spawns workers, use the `team-worker` agent with a role-spec path:

```
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker",
  team_name: "review",
  name: "<role>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-review/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: review
requirement: <task-description>
inner_loop: <true|false>

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`
})
```

**Inner Loop roles** (fixer): Set `inner_loop: true`.

**Single-task roles** (scanner, reviewer): Set `inner_loop: false`.
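The spawn parameters above can be derived mechanically from the role registry. A sketch under assumed names (`ROLE_REGISTRY` and `spawnParams` are illustrative, not part of the skill):

```javascript
// Hypothetical helper: derive team-worker spawn parameters for a role,
// including the inner_loop flag called out above (fixer: true, others: false).
const ROLE_REGISTRY = {
  scanner:  { spec: 'role-specs/scanner.md',  innerLoop: false },
  reviewer: { spec: 'role-specs/reviewer.md', innerLoop: false },
  fixer:    { spec: 'role-specs/fixer.md',    innerLoop: true },
}

function spawnParams(role) {
  const entry = ROLE_REGISTRY[role]
  if (!entry) throw new Error(`unknown role: ${role}`)
  return {
    subagent_type: 'team-worker',
    name: role,
    role_spec: `.claude/skills/team-review/${entry.spec}`,
    inner_loop: entry.innerLoop,
  }
}
```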
## Usage

@@ -246,6 +270,55 @@ Skill(skill="team-review", args="--role=fixer --input fix-manifest.json")

```bash
--fix            # fix mode only
```
---

## Completion Action

When the pipeline completes (all tasks done, coordinator Phase 5):

```
AskUserQuestion({
  questions: [{
    question: "Review pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
      { label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
      { label: "Export Results", description: "Export deliverables to a specified location, then clean" }
    ]
  }]
})
```

| Choice | Action |
|--------|--------|
| Archive & Clean | Update session status="completed" -> TeamDelete(review) -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions: `Skill(skill="team-review", args="resume")` |
| Export Results | AskUserQuestion for target path -> copy deliverables -> Archive & Clean |
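The choice table above maps each label to a follow-up action. A sketch of that dispatch, with assumed names and the actions reduced to flags (the real handler performs the tool calls listed in the table):

```javascript
// Hypothetical mapping from completion choice to follow-up action.
// Labels mirror the AskUserQuestion options above.
const COMPLETION_ACTIONS = {
  'Archive & Clean (Recommended)': { status: 'completed', teamDelete: true },
  'Keep Active':                   { status: 'paused',    teamDelete: false },
  'Export Results':                { status: 'completed', teamDelete: true, exportFirst: true },
}

function completionAction(label) {
  const action = COMPLETION_ACTIONS[label]
  if (!action) throw new Error(`unknown completion choice: ${label}`)
  return action
}
```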
---

## Session Directory

```
.workflow/.team/RV-<slug>-<YYYY-MM-DD>/
├── .msg/
│   ├── messages.jsonl        # Message bus log
│   └── meta.json             # Session state + cross-role state
├── wisdom/                   # Cross-task knowledge
│   ├── learnings.md
│   ├── decisions.md
│   ├── conventions.md
│   └── issues.md
├── scan/                     # Scanner output
│   └── scan-results.json
├── review/                   # Reviewer output
│   └── review-report.json
└── fix/                      # Fixer output
    └── fix-manifest.json
```
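Building the `RV-<slug>-<YYYY-MM-DD>` directory name can be sketched as follows. The slug rules are an assumption (the source does not specify them), so treat this as illustrative only:

```javascript
// Sketch: derive the session directory path from a task description.
// Slugification (lowercase, dash-separated, max 40 chars) is assumed.
function sessionDir(taskDescription, date = new Date()) {
  const slug = taskDescription.toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')   // collapse non-alphanumerics to dashes
    .replace(/^-|-$/g, '')         // trim edge dashes
    .slice(0, 40)
  const day = date.toISOString().slice(0, 10)  // YYYY-MM-DD
  return `.workflow/.team/RV-${slug}-${day}/`
}
```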
## Error Handling

| Scenario | Resolution |
|----------|------------|

@@ -1,212 +1,284 @@
# Command: monitor

> Stop-Wait stage execution. Spawns each worker via Skill(), blocks until return, drives transitions.

## When to Use

- Phase 4 of Coordinator, after dispatch is complete

# Command: Monitor

Handle all coordinator monitoring events for the review pipeline using the async Spawn-and-Stop pattern. One operation per invocation, then STOP and wait for the next callback.

## Constants
| Key | Value | Description |
|-----|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Coordinator does one operation then STOPS |
| WORKER_AGENT | team-worker | All workers spawned as team-worker agents |

## Strategy

**Mode**: Stop-Wait (synchronous Skill call, not polling)

### Role-Worker Map

| Prefix | Role | Role Spec | inner_loop |
|--------|------|-----------|------------|
| SCAN | scanner | `.claude/skills/team-review/role-specs/scanner.md` | false |
| REV | reviewer | `.claude/skills/team-review/role-specs/reviewer.md` | false |
| FIX | fixer | `.claude/skills/team-review/role-specs/fixer.md` | false |
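Resolving a worker role from a task's subject prefix, per the Role-Worker Map above, can be sketched as (`ROLE_BY_PREFIX` and `roleForTask` are illustrative names):

```javascript
// Sketch: map a task subject like "SCAN-001" to its worker role.
const ROLE_BY_PREFIX = { SCAN: 'scanner', REV: 'reviewer', FIX: 'fixer' }

function roleForTask(subject) {
  const prefix = subject.match(/^([A-Z]+)-/)?.[1]
  return prefix ? ROLE_BY_PREFIX[prefix] ?? null : null
}
```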
> **No polling. Synchronous Skill() call IS the wait mechanism.**
>
> - FORBIDDEN: `while` + `sleep` + check status
> - REQUIRED: `Skill()` blocking call = worker return = stage done

### Pipeline Modes

| Mode | Stages |
|------|--------|
| scan-only | SCAN-001 |
| default | SCAN-001 -> REV-001 |
| full | SCAN-001 -> REV-001 -> FIX-001 |
| fix-only | FIX-001 |

### Stage-Worker Map
```javascript
const STAGE_WORKER_MAP = {
  'SCAN': { role: 'scanner', skillArgs: '--role=scanner' },
  'REV':  { role: 'reviewer', skillArgs: '--role=reviewer' },
  'FIX':  { role: 'fixer', skillArgs: '--role=fixer' }
}
```
## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/.msg/meta.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |
| Pipeline mode | session.pipeline_mode | Yes |

```
Load session state:
1. Read <session-folder>/.msg/meta.json -> session
2. TaskList() -> allTasks
3. Extract pipeline_mode from session
4. Extract active_workers[] from session (default: [])
5. Parse $ARGUMENTS to determine trigger event
6. autoYes = /\b(-y|--yes)\b/.test(args)
```
## Execution Steps

### Step 1: Context Preparation

```javascript
const sharedMemory = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`))

// Get pipeline tasks in creation order (= dependency order)
const allTasks = TaskList()
const pipelineTasks = allTasks
  .filter(t => t.owner && t.owner !== 'coordinator')
  .sort((a, b) => Number(a.id) - Number(b.id))

// Auto mode detection
const autoYes = /\b(-y|--yes)\b/.test(args)
```

## Phase 3: Event Handlers

### Wake-up Source Detection

Parse `$ARGUMENTS` to determine the handler:

| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[scanner]`, `[reviewer]`, or `[fixer]` | handleCallback |
| 2 | Contains "check" or "status" | handleCheck |
| 3 | Contains "resume", "continue", or "next" | handleResume |
| 4 | Pipeline detected as complete (no pending, no in_progress) | handleComplete |
| 5 | None of the above (initial spawn after dispatch) | handleSpawnNext |
---

### Handler: handleCallback

Worker completed a task. Verify completion, check pipeline conditions, advance.

```
Receive callback from [<role>]
+- Find matching active worker by role tag
+- Task status = completed?
|  +- YES -> remove from active_workers -> update session
|  |  +- role = scanner?
|  |  |  +- Read session.findings_count from meta.json
|  |  |  +- findings_count === 0?
|  |  |  |  +- YES -> Skip remaining stages:
|  |  |  |  |    Delete all REV-* and FIX-* tasks (TaskUpdate status='deleted')
|  |  |  |  |    Log: "0 findings, skipping review/fix stages"
|  |  |  |  |    -> handleComplete
|  |  |  |  +- NO -> normal advance
|  |  |  +- -> handleSpawnNext
|  |  +- role = reviewer?
|  |  |  +- pipeline_mode === 'full'?
|  |  |  |  +- YES -> Need fix confirmation gate
|  |  |  |  |  +- autoYes?
|  |  |  |  |  |  +- YES -> Set fix_scope='all' in meta.json
|  |  |  |  |  |     +- Write fix-manifest.json
|  |  |  |  |  |     +- -> handleSpawnNext
|  |  |  |  |  +- NO -> AskUserQuestion:
|  |  |  |  |       question: "<N> findings reviewed. Proceed with fix?"
|  |  |  |  |       header: "Fix Confirmation"
|  |  |  |  |       options:
|  |  |  |  |         - "Fix all": Set fix_scope='all'
|  |  |  |  |         - "Fix critical/high only": Set fix_scope='critical,high'
|  |  |  |  |         - "Skip fix": Delete FIX-* tasks -> handleComplete
|  |  |  |  |       +- Write fix_scope to meta.json
|  |  |  |  |       +- Write fix-manifest.json:
|  |  |  |  |          { source: "<session>/review/review-report.json",
|  |  |  |  |            scope: fix_scope, session: sessionFolder }
|  |  |  |  |       +- -> handleSpawnNext
|  |  |  |  +- NO -> normal advance -> handleSpawnNext
|  |  +- role = fixer?
|  |     +- -> handleSpawnNext (checks for completion naturally)
|  +- NO -> progress message, do not advance -> STOP
+- No matching worker found
   +- Scan all active workers for completed tasks
   +- Found completed -> process each -> handleSpawnNext
   +- None completed -> STOP
```
### Step 2: Sequential Stage Execution (Stop-Wait)

> **Core**: Spawn one worker per stage, block until return.
> Worker return = stage complete. No sleep, no polling.

```javascript
for (const stageTask of pipelineTasks) {
  // 1. Extract stage prefix -> determine worker role
  const stagePrefix = stageTask.subject.match(/^(\w+)-/)?.[1]
  const workerConfig = STAGE_WORKER_MAP[stagePrefix]

  if (!workerConfig) {
    mcp__ccw-tools__team_msg({
      operation: "log", session_id: sessionId, from: "coordinator",
      type: "error",
    })
    continue
  }

  // 2. Mark task in progress
  TaskUpdate({ taskId: stageTask.id, status: 'in_progress' })

  mcp__ccw-tools__team_msg({
    operation: "log", session_id: sessionId, from: "coordinator",
    to: workerConfig.role, type: "stage_transition",
  })

  // 3. Build worker arguments
  const workerArgs = buildWorkerArgs(stageTask, workerConfig)

  // 4. Spawn worker via Skill — blocks until return (Stop-Wait core)
  Skill(skill="team-review", args=workerArgs)

  // 5. Worker returned — check result
  const taskState = TaskGet({ taskId: stageTask.id })

  if (taskState.status !== 'completed') {
    const action = handleStageFailure(stageTask, taskState, workerConfig, autoYes)
    if (action === 'abort') break
    if (action === 'skip') continue
  } else {
    mcp__ccw-tools__team_msg({
      operation: "log", session_id: sessionId, from: "coordinator",
      type: "stage_transition",
    })
  }

  // 6. Post-stage: after SCAN, check findings
  if (stagePrefix === 'SCAN') {
    const mem = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`))
    if ((mem.findings_count || 0) === 0) {
      mcp__ccw-tools__team_msg({ operation: "log", session_id: sessionId, from: "coordinator",
        type: "pipeline_complete",
      })
      for (const r of pipelineTasks.slice(pipelineTasks.indexOf(stageTask) + 1))
        TaskUpdate({ taskId: r.id, status: 'deleted' })
      break
    }
  }

  // 7. Post-stage: after REV, confirm fix scope
  if (stagePrefix === 'REV' && pipelineMode === 'full') {
    const mem = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`))

    if (!autoYes) {
      const conf = AskUserQuestion({ questions: [{
        question: `${mem.findings_count || 0} findings reviewed. Proceed with fix?`,
        header: "Fix Confirmation", multiSelect: false,
        options: [
          { label: "Fix all", description: "All actionable findings" },
          { label: "Fix critical/high only", description: "Severity filter" },
          { label: "Skip fix", description: "No code changes" }
        ]
      }] })

      if (conf["Fix Confirmation"] === "Skip fix") {
        pipelineTasks.filter(t => t.subject.startsWith('FIX-'))
          .forEach(ft => TaskUpdate({ taskId: ft.id, status: 'deleted' }))
        break
      }
      mem.fix_scope = conf["Fix Confirmation"] === "Fix critical/high only" ? 'critical,high' : 'all'
      Write(`${sessionFolder}/.msg/meta.json`, JSON.stringify(mem, null, 2))
    }

    Write(`${sessionFolder}/fix/fix-manifest.json`, JSON.stringify({
      source: `${sessionFolder}/review/review-report.json`,
      scope: mem.fix_scope || 'all', session: sessionFolder
    }, null, 2))
  }
}
```

---

### Handler: handleSpawnNext

Find all ready tasks, spawn one team-worker agent in background, update session, STOP.

```
Collect task states from TaskList()
+- completedSubjects: status = completed
+- inProgressSubjects: status = in_progress
+- deletedSubjects: status = deleted
+- readySubjects: status = pending
   AND (no blockedBy OR all blockedBy in completedSubjects)

Ready tasks found?
+- NONE + work in progress -> report waiting -> STOP
+- NONE + nothing in progress -> PIPELINE_COMPLETE -> handleComplete
+- HAS ready tasks -> take first ready task:
   +- Determine role from prefix:
   |    SCAN-* -> scanner
   |    REV-*  -> reviewer
   |    FIX-*  -> fixer
   +- TaskUpdate -> in_progress
   +- team_msg log -> task_unblocked (team_session_id=<session-id>)
   +- Spawn team-worker (see spawn call below)
   +- Add to session.active_workers
   +- Update session file
   +- Output: "[coordinator] Spawned <role> for <subject>"
   +- STOP
```
### Step 2.1: Worker Argument Builder

```javascript
function buildWorkerArgs(stageTask, workerConfig) {
  const stagePrefix = stageTask.subject.match(/^(\w+)-/)?.[1]
  let workerArgs = `${workerConfig.skillArgs} --session ${sessionFolder}`

  if (stagePrefix === 'SCAN') {
    workerArgs += ` ${target} --dimensions ${dimensions.join(',')}`
    if (stageTask.description?.includes('quick: true')) workerArgs += ' -q'
  } else if (stagePrefix === 'REV') {
    workerArgs += ` --input ${sessionFolder}/scan/scan-results.json --dimensions ${dimensions.join(',')}`
  } else if (stagePrefix === 'FIX') {
    workerArgs += ` --input ${sessionFolder}/fix/fix-manifest.json`
  }

  if (autoYes) workerArgs += ' -y'
  return workerArgs
}
```

**Spawn worker tool call**:

```
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker for <subject>",
  team_name: "review",
  name: "<role>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-review/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: review
requirement: <task-description>
inner_loop: false

## Current Task
- Task ID: <task-id>
- Task: <subject>

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`
})
```
### Step 2.2: Stage Failure Handler

```javascript
function handleStageFailure(stageTask, taskState, workerConfig, autoYes) {
  if (autoYes) {
    mcp__ccw-tools__team_msg({ operation: "log", session_id: sessionId, from: "coordinator",
      type: "error",
    })
    TaskUpdate({ taskId: stageTask.id, status: 'deleted' })
    return 'skip'
  }

  const decision = AskUserQuestion({ questions: [{
    question: `Stage "${stageTask.subject}" incomplete (${taskState.status}). Action?`,
    header: "Stage Failure", multiSelect: false,
    options: [
      { label: "Retry", description: "Re-spawn worker" },
      { label: "Skip", description: "Continue pipeline" },
      { label: "Abort", description: "Stop pipeline" }
    ]
  }] })

  const answer = decision["Stage Failure"]
  if (answer === "Retry") {
    TaskUpdate({ taskId: stageTask.id, status: 'in_progress' })
    Skill(skill="team-review", args=buildWorkerArgs(stageTask, workerConfig))
    if (TaskGet({ taskId: stageTask.id }).status !== 'completed')
      TaskUpdate({ taskId: stageTask.id, status: 'deleted' })
    return 'retried'
  } else if (answer === "Skip") {
    TaskUpdate({ taskId: stageTask.id, status: 'deleted' })
    return 'skip'
  } else {
    mcp__ccw-tools__team_msg({ operation: "log", session_id: sessionId, from: "coordinator",
      type: "error",
    })
    return 'abort'
  }
}
```

---

### Handler: handleCheck

Read-only status report. No pipeline advancement.

**Output format**:

```
[coordinator] Review Pipeline Status
[coordinator] Mode: <pipeline_mode>
[coordinator] Progress: <completed>/<total> (<percent>%)

[coordinator] Pipeline Graph:
  SCAN-001: <status-icon> <summary>
  REV-001:  <status-icon> <summary>
  FIX-001:  <status-icon> <summary>

  done=completed  >>>=running  o=pending  x=deleted  .=not created

[coordinator] Active Workers:
  > <subject> (<role>) - running

[coordinator] Ready to spawn: <subjects>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```
Then STOP.

### Step 3: Finalize

```javascript
const finalMemory = JSON.parse(Read(`${sessionFolder}/.msg/meta.json`))
finalMemory.pipeline_status = 'complete'
finalMemory.completed_at = new Date().toISOString()
Write(`${sessionFolder}/.msg/meta.json`, JSON.stringify(finalMemory, null, 2))
```
---

### Handler: handleResume

Check active worker completion, process results, advance pipeline.

```
Load active_workers from session
+- No active workers -> handleSpawnNext
+- Has active workers -> check each:
   +- status = completed -> mark done, remove from active_workers, log
   +- status = in_progress -> still running, log
   +- other status -> worker failure -> reset to pending

After processing:
+- Some completed -> handleSpawnNext
+- All still running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```
---

### Handler: handleComplete

Pipeline complete. Generate summary and finalize session.

```
All tasks completed or deleted (no pending, no in_progress)
+- Read final session state from meta.json
+- Generate pipeline summary:
|    - Pipeline mode
|    - Findings count
|    - Stages completed
|    - Fix results (if applicable)
|    - Deliverable paths
|
+- Update session:
|    session.pipeline_status = 'complete'
|    session.completed_at = <timestamp>
|    Write meta.json
|
+- team_msg log -> pipeline_complete
+- Output summary to user
+- STOP
```
---

### Worker Failure Handling

When a worker has an unexpected status (not completed, not in_progress):

1. Reset task -> pending via TaskUpdate
2. Remove from active_workers
3. Log via team_msg (type: error)
4. Report to user: task reset, will retry on next resume
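The four steps above can be sketched with the tool calls stubbed as callbacks, so the control flow is visible (`resetFailedWorker` and the hook names are illustrative, not actual APIs):

```javascript
// Sketch of the worker-failure reset: reset the task, drop the worker,
// log, and return a user-facing report. taskUpdate/log are assumed hooks.
function resetFailedWorker(worker, session, { taskUpdate, log }) {
  taskUpdate({ taskId: worker.taskId, status: 'pending' })        // 1. reset task
  session.active_workers = session.active_workers
    .filter(w => w.taskId !== worker.taskId)                      // 2. remove from active_workers
  log({ type: 'error', detail: `worker ${worker.role} reset` })   // 3. log via team_msg
  return `task ${worker.taskId} reset, will retry on next resume` // 4. report to user
}
```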
## Phase 4: State Persistence

After every handler action, before STOP:

| Check | Action |
|-------|--------|
| Session state consistent | active_workers matches TaskList in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| meta.json updated | Write updated session state |
| Completion detection | readySubjects=0 + inProgressSubjects=0 -> handleComplete |

```
Persist:
1. Reconcile active_workers with actual TaskList states
2. Remove entries for completed/deleted tasks
3. Write updated meta.json
4. Verify consistency
5. STOP (wait for next callback)
```
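The reconciliation in steps 1-2 above can be expressed as a pure function over the two lists (a sketch; `reconcile` is an illustrative name):

```javascript
// Sketch: keep only active_workers whose task is still in_progress,
// dropping entries for completed/deleted tasks.
function reconcile(activeWorkers, tasks) {
  const statusById = new Map(tasks.map(t => [t.id, t.status]))
  return activeWorkers.filter(w => statusById.get(w.taskId) === 'in_progress')
}
```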
## Error Handling

| Scenario | Resolution |
|----------|------------|
| Worker incomplete (interactive) | AskUser: Retry / Skip / Abort |
| Worker incomplete (auto) | Auto-skip, log warning |
| Session file not found | Error, suggest re-initialization |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running, has pending) | Check blockedBy chains, report to user |
| 0 findings after scan | Delete remaining stages, complete pipeline |
| User declines fix | Delete FIX tasks, complete with review-only results |
@@ -1,162 +0,0 @@

# Command: execute-fixes

> Applies fixes from fix-plan.json via code-developer subagents. Quick path = 1 agent; standard = 1 agent per group.

## When to Use

- Phase 3B of Fixer, after plan-fixes
- Requires: `${sessionFolder}/fix/fix-plan.json`, `sessionFolder`, `projectRoot`
## Strategy

**Mode**: Sequential Delegation (code-developer agents via Task)

```
quick_path=true  -> 1 agent, all findings sequentially
quick_path=false -> 1 agent per group, groups in execution_order
```
## Execution Steps

### Step 1: Load Plan + Helpers

```javascript
const fixPlan = JSON.parse(Read(`${sessionFolder}/fix/fix-plan.json`))
const { groups, execution_order, quick_path: isQuickPath } = fixPlan
const results = { fixed: [], failed: [], skipped: [] }

// --- Agent prompt builder ---
function buildAgentPrompt(findings, files) {
  const fileContents = {}
  for (const file of files) { try { fileContents[file] = Read(file) } catch {} }

  const fDesc = findings.map((f, i) => {
    const fix = f.suggested_fix || f.optimization?.approach || '(no suggestion)'
    const deps = (f.fix_dependencies||[]).length ? `\nDepends on: ${f.fix_dependencies.join(', ')}` : ''
    return `### ${i+1}. ${f.id} [${f.severity}]\n**File**: ${f.location?.file}:${f.location?.line}\n**Title**: ${f.title}\n**Desc**: ${f.description}\n**Strategy**: ${f.fix_strategy||'minimal'}\n**Fix**: ${fix}${deps}`
  }).join('\n\n')

  const fContent = Object.entries(fileContents)
    .filter(([,c]) => c).map(([f,c]) => `### ${f}\n\`\`\`\n${String(c).slice(0,8000)}\n\`\`\``).join('\n\n')

  return `You are a code fixer agent. Apply fixes to the codebase.

## CRITICAL RULES
1. Apply each fix using Edit tool, in the order given (dependency-sorted)
2. After each fix, run related tests: tests/**/(unknown).test.* or *_test.*
3. Tests PASS -> finding is "fixed"
4. Tests FAIL -> revert: Bash("git checkout -- {file}") -> mark "failed" -> continue
5. Do NOT retry failed fixes with a different strategy. Rollback and move on.
6. If a finding depends on a previously failed finding, mark it "skipped"

## Findings (in order)
${fDesc}

## File Contents
${fContent}

## Required Output
After ALL findings, output JSON:
\`\`\`json
{"results":[{"id":"SEC-001","status":"fixed","file":"src/a.ts"},{"id":"COR-002","status":"failed","file":"src/b.ts","error":"reason"}]}
\`\`\`
Process each finding now. Rollback on failure, never retry.`
}

// --- Result parser ---
function parseAgentResults(output, findings) {
  const failedIds = new Set()
  let parsed = []
  try {
    const m = (output||'').match(/```json\s*\n?([\s\S]*?)\n?```/)
    if (m) { const j = JSON.parse(m[1]); parsed = j.results || j || [] }
  } catch {}

  if (parsed.length > 0) {
    for (const r of parsed) {
      const f = findings.find(x => x.id === r.id); if (!f) continue
      if (r.status === 'fixed') results.fixed.push({...f})
      else if (r.status === 'failed') { results.failed.push({...f, error: r.error||'unknown'}); failedIds.add(r.id) }
      else if (r.status === 'skipped') { results.skipped.push({...f, error: r.error||'dep failed'}); failedIds.add(r.id) }
    }
  } else {
    // Fallback: check git diff per file
    for (const f of findings) {
      const file = f.location?.file
      if (!file) { results.skipped.push({...f, error:'no file'}); continue }
      const diff = Bash(`git diff --name-only -- "${file}" 2>/dev/null`).trim()
      if (diff) results.fixed.push({...f})
      else { results.failed.push({...f, error:'no changes detected'}); failedIds.add(f.id) }
    }
  }

  // Catch unprocessed findings
  const done = new Set([...results.fixed,...results.failed,...results.skipped].map(x=>x.id))
  for (const f of findings) {
    if (done.has(f.id)) continue
    if ((f.fix_dependencies||[]).some(d => failedIds.has(d)))
      results.skipped.push({...f, error:'dependency failed'})
    else results.failed.push({...f, error:'not processed'})
  }
}
```
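The JSON-extraction step of `parseAgentResults` can be exercised in isolation. A simplified, self-contained variant of the helper above (illustrative only; the real parser also mutates `results` and falls back to git-diff detection):

```javascript
// Simplified stand-alone agent-output parser: pull the fenced JSON block
// out of free-form agent text and return its results array (or []).
function extractResults(output) {
  const m = (output || '').match(/```json\s*\n?([\s\S]*?)\n?```/)
  if (!m) return []
  try {
    const j = JSON.parse(m[1])
    return j.results || []
  } catch {
    return []
  }
}
```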
### Step 2: Execute

```javascript
if (isQuickPath) {
  // Single agent for all findings
  const group = groups[0]
  const prompt = buildAgentPrompt(group.findings, group.files)
  const out = Task({ subagent_type:"code-developer", prompt, run_in_background:false })
  parseAgentResults(out, group.findings)
} else {
  // One agent per group in execution_order
  const completedGroups = new Set()

  // Build group dependency map
  const groupDeps = {}
  for (const g of groups) {
    groupDeps[g.id] = new Set()
    for (const f of g.findings) {
      for (const depId of (f.fix_dependencies||[])) {
        const dg = groups.find(x => x.findings.some(fx => fx.id === depId))
        if (dg && dg.id !== g.id) groupDeps[g.id].add(dg.id)
      }
    }
  }

  for (const gid of execution_order) {
    const group = groups.find(g => g.id === gid)
    if (!group) continue

    const prompt = buildAgentPrompt(group.findings, group.files)
    const out = Task({ subagent_type:"code-developer", prompt, run_in_background:false })
    parseAgentResults(out, group.findings)
    completedGroups.add(gid)

    Write(`${sessionFolder}/fix/fix-progress.json`, JSON.stringify({
      completed_groups:[...completedGroups],
      results_so_far:{fixed:results.fixed.length, failed:results.failed.length}
    }, null, 2))

    mcp__ccw-tools__team_msg({ operation:"log", session_id: sessionId, from:"fixer",
      to:"coordinator", type:"fix_progress",
    })
  }
}
```
### Step 3: Write Results
|
||||
|
||||
```javascript
|
||||
Write(`${sessionFolder}/fix/execution-results.json`, JSON.stringify(results, null, 2))
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Scenario | Resolution |
|
||||
|----------|------------|
|
||||
| Agent crashes | Mark group findings as failed, continue next group |
|
||||
| Test failure after fix | Rollback (`git checkout -- {file}`), mark failed, continue |
|
||||
| No structured output | Fallback to git diff detection |
|
||||
| Dependency failed | Skip dependent findings automatically |
|
||||
| fix-plan.json missing | Report error, write empty results |
|
||||
# Command: plan-fixes

> Deterministic grouping algorithm. Groups findings by file, merges dependent groups, topologically sorts within groups, writes fix-plan.json.

## When to Use

- Phase 3A of Fixer, after context resolution
- Requires: `fixableFindings[]`, `sessionFolder`, `quickPath` from Phase 2

**Trigger conditions**:
- FIX-* task in Phase 3 with at least 1 fixable finding

## Strategy

**Mode**: Direct (inline execution, deterministic algorithm, no CLI needed)

## Execution Steps

### Step 1: Group Findings by Primary File

```javascript
const fileGroups = {}
for (const f of fixableFindings) {
  const file = f.location?.file || '_unknown'
  if (!fileGroups[file]) fileGroups[file] = []
  fileGroups[file].push(f)
}
```

### Step 2: Merge Groups with Cross-File Dependencies

```javascript
// Build adjacency: if finding A (group X) depends on finding B (group Y), merge X into Y
const findingFileMap = {}
for (const f of fixableFindings) {
  findingFileMap[f.id] = f.location?.file || '_unknown'
}

// Union-Find for group merging
const parent = {}
const find = (x) => parent[x] === x ? x : (parent[x] = find(parent[x]))
const union = (a, b) => { parent[find(a)] = find(b) }

const allFiles = Object.keys(fileGroups)
for (const file of allFiles) parent[file] = file

for (const f of fixableFindings) {
  const myFile = f.location?.file || '_unknown'
  for (const depId of (f.fix_dependencies || [])) {
    const depFile = findingFileMap[depId]
    if (depFile && depFile !== myFile) {
      union(myFile, depFile)
    }
  }
}

// Collect merged groups
const mergedGroupMap = {}
for (const file of allFiles) {
  const root = find(file)
  if (!mergedGroupMap[root]) mergedGroupMap[root] = { files: [], findings: [] }
  mergedGroupMap[root].files.push(file)
  mergedGroupMap[root].findings.push(...fileGroups[file])
}

// Deduplicate files
for (const g of Object.values(mergedGroupMap)) {
  g.files = [...new Set(g.files)]
}
```
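As a worked example -- illustrative only, with the finding shape condensed to `{id, file, deps}` rather than the real schema -- a self-contained restatement of the merge shows a single cross-file dependency collapsing two file groups into one:

```javascript
// Condensed union-find merge (hypothetical helper, not part of the command spec).
// findings: [{ id, file, deps }] where deps lists finding IDs that must be fixed first.
function mergeFiles(findings) {
  const parent = {}
  const find = x => parent[x] === x ? x : (parent[x] = find(parent[x]))
  const fileOf = Object.fromEntries(findings.map(f => [f.id, f.file]))
  for (const f of findings) parent[f.file] = f.file
  // Union the files of any cross-file dependency pair
  for (const f of findings)
    for (const dep of f.deps || [])
      if (fileOf[dep] && fileOf[dep] !== f.file) parent[find(f.file)] = find(fileOf[dep])
  // Bucket finding IDs by their root file
  const groups = {}
  for (const f of findings) {
    const key = find(f.file)
    if (!groups[key]) groups[key] = []
    groups[key].push(f.id)
  }
  return Object.values(groups)
}
```

Here `F1` in `a.ts` depending on `F2` in `b.ts` merges those two files into one group, while an unrelated `c.ts` finding stays alone.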

### Step 3: Topological Sort Within Each Group

```javascript
function topoSort(findings) {
  const idSet = new Set(findings.map(f => f.id))
  const inDegree = {}
  const adj = {}
  for (const f of findings) {
    inDegree[f.id] = 0
    adj[f.id] = []
  }
  for (const f of findings) {
    for (const depId of (f.fix_dependencies || [])) {
      if (idSet.has(depId)) {
        adj[depId].push(f.id)
        inDegree[f.id]++
      }
    }
  }

  const queue = findings.filter(f => inDegree[f.id] === 0).map(f => f.id)
  const sorted = []
  while (queue.length > 0) {
    const id = queue.shift()
    sorted.push(id)
    for (const next of adj[id]) {
      inDegree[next]--
      if (inDegree[next] === 0) queue.push(next)
    }
  }

  // Handle cycles: append any unsorted findings at the end
  const sortedSet = new Set(sorted)
  for (const f of findings) {
    if (!sortedSet.has(f.id)) sorted.push(f.id)
  }

  const findingMap = Object.fromEntries(findings.map(f => [f.id, f]))
  return sorted.map(id => findingMap[id])
}

const groups = Object.entries(mergedGroupMap).map(([root, g], i) => {
  const sorted = topoSort(g.findings)
  const maxSev = sorted.reduce((max, f) => {
    const ord = { critical: 0, high: 1, medium: 2, low: 3 }
    return (ord[f.severity] ?? 4) < (ord[max] ?? 4) ? f.severity : max
  }, 'low')
  return {
    id: `G${i + 1}`,
    files: g.files,
    findings: sorted,
    max_severity: maxSev
  }
})
```

### Step 4: Sort Groups by Max Severity

```javascript
const SEV_ORDER = { critical: 0, high: 1, medium: 2, low: 3 }
groups.sort((a, b) => (SEV_ORDER[a.max_severity] ?? 4) - (SEV_ORDER[b.max_severity] ?? 4))

// Re-assign IDs after sort
groups.forEach((g, i) => { g.id = `G${i + 1}` })

const execution_order = groups.map(g => g.id)
```

### Step 5: Determine Execution Path

```javascript
const totalFindings = fixableFindings.length
const totalGroups = groups.length
const isQuickPath = totalFindings <= 5 && totalGroups <= 1
```

### Step 6: Write fix-plan.json

```javascript
const fixPlan = {
  plan_id: `fix-plan-${Date.now()}`,
  quick_path: isQuickPath,
  groups: groups.map(g => ({
    id: g.id,
    files: g.files,
    findings: g.findings.map(f => ({
      id: f.id, severity: f.severity, dimension: f.dimension,
      title: f.title, description: f.description,
      location: f.location, suggested_fix: f.suggested_fix,
      fix_strategy: f.fix_strategy, fix_complexity: f.fix_complexity,
      fix_dependencies: f.fix_dependencies,
      root_cause: f.root_cause, optimization: f.optimization
    })),
    max_severity: g.max_severity
  })),
  execution_order: execution_order,
  total_findings: totalFindings,
  total_groups: totalGroups
}

Bash(`mkdir -p "${sessionFolder}/fix"`)
Write(`${sessionFolder}/fix/fix-plan.json`, JSON.stringify(fixPlan, null, 2))

mcp__ccw-tools__team_msg({ operation:"log", session_id: sessionId, from:"fixer",
  to:"coordinator", type:"fix_progress",
  ref: `${sessionFolder}/fix/fix-plan.json` })
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| All findings share one file | Single group, likely quick path |
| Dependency cycle detected | Topo sort appends cycle members at end |
| Finding references unknown dependency | Ignore that dependency edge |
| Empty fixableFindings | Should not reach this command (checked in Phase 2) |
# Fixer Role

Fix code based on reviewed findings. Load manifest, group, apply with rollback-on-failure, verify. Code-generation role -- modifies source files.

## Identity

- **Name**: `fixer` | **Tag**: `[fixer]`
- **Task Prefix**: `FIX-*`
- **Responsibility**: code-generation

## Boundaries

### MUST

- Only process `FIX-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[fixer]` identifier
- Only communicate with coordinator via SendMessage
- Write only to the session fix directory
- Roll back on test failure -- never self-retry failed fixes
- Work strictly within code-generation scope

### MUST NOT

- Create tasks for other roles
- Contact scanner/reviewer directly
- Retry failed fixes (report and continue)
- Modify files outside scope
- Omit the `[fixer]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `plan-fixes` | [commands/plan-fixes.md](commands/plan-fixes.md) | Phase 3A | Group + sort findings |
| `execute-fixes` | [commands/execute-fixes.md](commands/execute-fixes.md) | Phase 3B | Apply fixes per plan |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Read` | Built-in | fixer | Load manifest and reports |
| `Write` | Built-in | fixer | Write fix summaries |
| `Edit` | Built-in | fixer | Apply code fixes |
| `Bash` | Built-in | fixer | Run verification tools |
| `TaskUpdate` | Built-in | fixer | Update task status |
| `team_msg` | MCP | fixer | Log communication |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `fix_progress` | fixer -> coordinator | Milestone | Progress update during fix |
| `fix_complete` | fixer -> coordinator | Phase 5 | Fix finished with summary |
| `fix_failed` | fixer -> coordinator | Failure | Fix failed, partial results |
| `error` | fixer -> coordinator | Error | Error requiring attention |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "fixer",
  type: "fix_complete",
  ref: "<session-folder>/fix/fix-summary.json"
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from fixer --type fix_complete --ref <path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `FIX-*` + status pending + blockedBy empty -> TaskGet -> TaskUpdate in_progress.

Extract from task description:

| Parameter | Extraction Pattern | Default |
|-----------|-------------------|---------|
| Session folder | `session: <path>` | (required) |
| Input path | `input: <path>` | `<session>/fix/fix-manifest.json` |

Load manifest and source report. If missing -> report error, complete task.

**Resume Artifact Check**: If `fix-summary.json` exists and is complete -> skip to Phase 5.

---

### Phase 2: Context Resolution

**Objective**: Resolve fixable findings and detect verification tools.

**Workflow**:

1. **Filter fixable findings**:

   | Condition | Include |
   |-----------|---------|
   | Severity in scope | manifest.scope == 'all' or severity matches scope |
   | Not skipped | fix_strategy !== 'skip' |

   If 0 fixable findings -> report complete immediately.

2. **Detect complexity**:

   | Signal | Quick Path |
   |--------|------------|
   | Findings <= 5 | Yes |
   | No cross-file dependencies | Yes |
   | Both conditions | Quick path enabled |

3. **Detect verification tools**:

   | Tool | Detection Method |
   |------|------------------|
   | tsc | `tsconfig.json` exists |
   | eslint | `eslint` in package.json |
   | jest | `jest` in package.json |
   | pytest | pytest command + pyproject.toml |
   | semgrep | semgrep command available |

**Success**: fixableFindings resolved, verification tools detected.
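The detection table above can be sketched as a pure helper. The function name and input shape here are illustrative assumptions, not part of the role spec (pytest/semgrep detection requires shelling out, so this sketch covers only the file-based signals):

```javascript
// Hypothetical sketch: map file-based detection signals to available verification tools.
// projectFiles: top-level file names; packageJson: parsed package.json (may be {}).
function detectVerifyTools(projectFiles, packageJson) {
  const deps = { ...(packageJson.dependencies || {}), ...(packageJson.devDependencies || {}) }
  const tools = []
  if (projectFiles.includes('tsconfig.json')) tools.push('tsc')  // tsconfig.json exists
  if (deps.eslint) tools.push('eslint')                          // eslint in package.json
  if (deps.jest) tools.push('jest')                              // jest in package.json
  return tools
}
```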

---

### Phase 3: Plan + Execute

**Objective**: Create fix plan and apply fixes.

### Phase 3A: Plan Fixes

Delegate to `commands/plan-fixes.md`.

**Planning rules**:

| Factor | Action |
|--------|--------|
| Grouping | Group by file for efficiency |
| Ordering | Higher severity first |
| Dependencies | Respect fix_dependencies order |
| Cross-file | Handle in dependency order |

**Output**: `fix-plan.json`

### Phase 3B: Execute Fixes

Delegate to `commands/execute-fixes.md`.

**Execution rules**:

| Rule | Behavior |
|------|----------|
| Per-file batch | Apply all fixes for one file together |
| Rollback on failure | If a test fails, revert that file's changes |
| No retry | Failed fixes -> report, don't retry |
| Track status | fixed/failed/skipped for each finding |

**Output**: `execution-results.json`

---

### Phase 4: Post-Fix Verification

**Objective**: Run verification tools to validate fixes.

**Verification tools**:

| Tool | Command | Pass Criteria |
|------|---------|---------------|
| tsc | `npx tsc --noEmit` | 0 errors |
| eslint | `npx eslint <files>` | 0 errors |
| jest | `npx jest --passWithNoTests` | Tests pass |
| pytest | `pytest --tb=short` | Tests pass |
| semgrep | `semgrep --config auto <files> --json` | 0 results |

**Verification scope**: Only run tools that are:
1. Available (detected in Phase 2)
2. Relevant (files were modified)

**Rollback logic**: If verification fails critically, roll back the last batch of fixes.

**Output**: `verify-results.json`

**Success**: Verification results recorded, fix rate calculated.
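A minimal sketch of folding per-tool outcomes into the recorded result. The `results` shape and the "compile/test failure counts as critical" rollback rule are assumptions for illustration, not spec:

```javascript
// Illustrative helper: summarize per-tool verification outcomes.
// results: [{ tool: 'tsc', passed: true }, ...]
function summarizeVerification(results) {
  const failedTools = results.filter(r => !r.passed).map(r => r.tool)
  return {
    passed: failedTools.length === 0,
    failed_tools: failedTools,
    // Assumed rule: only compile/test-level failures trigger rollback of the last batch
    rollback: failedTools.some(t => ['tsc', 'jest', 'pytest'].includes(t))
  }
}
```

Under this rule a lint-only failure is recorded but does not undo applied fixes.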

---

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

**Objective**: Report fix results to coordinator.

**Workflow**:

1. Generate fix-summary.json with: fix_id, fix_date, scope, total, fixed, failed, skipped, fix_rate, verification results
2. Generate fix-summary.md (human-readable)
3. Update .msg/meta.json with fix results
4. Log via team_msg with `[fixer]` prefix
5. SendMessage to coordinator
6. TaskUpdate completed
7. Loop to Phase 1 for next task

**Report content**:

| Field | Value |
|-------|-------|
| Scope | all / critical,high / custom |
| Fixed | Count by severity |
| Failed | Count + error details |
| Skipped | Count |
| Fix rate | Percentage |
| Verification | Pass/fail per tool |
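A hedged sketch of assembling the summary fields listed in step 1 of the workflow. Only the field names named there are taken from the spec; the exact fix-summary.json schema is otherwise assumed:

```javascript
// Illustrative only: build the fix-summary payload from execution results.
// results: { fixed: [], failed: [], skipped: [] } as produced by Phase 3B.
function buildFixSummary(results, scope) {
  const total = results.fixed.length + results.failed.length + results.skipped.length
  return {
    fix_id: `fix-${Date.now()}`,
    fix_date: new Date().toISOString(),
    scope,
    total,
    fixed: results.fixed.length,
    failed: results.failed.length,
    skipped: results.skipped.length,
    // Percentage of findings actually fixed; 0 when nothing was in scope
    fix_rate: total ? Math.round((results.fixed.length / total) * 100) : 0
  }
}
```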

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Manifest/report missing | Error, complete task |
| 0 fixable findings | Complete immediately |
| Test failure after fix | Rollback, mark failed, continue |
| Tool unavailable | Skip that check |
| All findings fail | Report 0%, complete |
| Session folder missing | Re-create fix subdirectory |
| Edit tool fails | Log error, mark finding as failed |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
# Command: deep-analyze

> CLI Fan-out deep analysis. Splits findings into 2 domain groups, runs parallel CLI agents for root cause / impact / optimization enrichment.

## When to Use

- Phase 3 of Reviewer, when `deep_analysis.length > 0`
- Requires `deep_analysis[]` array and `sessionFolder` from Phase 2

**Trigger conditions**:
- REV-* task in Phase 3 with at least 1 finding triaged for deep analysis

## Strategy

### Delegation Mode

**Mode**: CLI Fan-out (max 2 parallel agents, analysis only)

### Tool Fallback Chain

```
gemini (primary) -> qwen (fallback) -> codex (fallback)
```

### Group Split

```
Group A: Security + Correctness findings -> 1 CLI agent
Group B: Performance + Maintainability findings -> 1 CLI agent
If either group empty -> skip that agent (run single agent only)
```

## Execution Steps

### Step 1: Split Findings into Groups

```javascript
const groupA = deep_analysis.filter(f =>
  f.dimension === 'security' || f.dimension === 'correctness'
)
const groupB = deep_analysis.filter(f =>
  f.dimension === 'performance' || f.dimension === 'maintainability'
)

// Collect all affected files for CLI context
const collectFiles = (group) => [...new Set(
  group.map(f => f.location?.file).filter(Boolean)
)]
const filesA = collectFiles(groupA)
const filesB = collectFiles(groupB)
```

### Step 2: Build CLI Prompts

```javascript
function buildPrompt(group, groupLabel, affectedFiles) {
  const findingsJson = JSON.stringify(group, null, 2)
  const filePattern = affectedFiles.length <= 20
    ? affectedFiles.map(f => `@${f}`).join(' ')
    : '@**/*.{ts,tsx,js,jsx,py,go,java,rs}'

  return `PURPOSE: Deep analysis of ${groupLabel} code findings -- root cause, impact, optimization suggestions.
TASK:
- For each finding: trace root cause (independent issue or symptom of another finding?)
- Identify findings sharing the same root cause -> mark related_findings with their IDs
- Assess impact scope and affected files (blast_radius: function/module/system)
- Propose fix strategy (minimal fix vs refactor) with tradeoff analysis
- Identify fix dependencies (which findings must be fixed first?)
- For each finding add these enrichment fields:
  root_cause: { description: string, related_findings: string[], is_symptom: boolean }
  impact: { scope: "low"|"medium"|"high", affected_files: string[], blast_radius: string }
  optimization: { approach: string, alternative: string, tradeoff: string }
  fix_strategy: "minimal" | "refactor" | "skip"
  fix_complexity: "low" | "medium" | "high"
  fix_dependencies: string[] (finding IDs that must be fixed first)
MODE: analysis
CONTEXT: ${filePattern}
Findings to analyze:
${findingsJson}
EXPECTED: Respond with ONLY a JSON array. Each element is the original finding object with the 6 enrichment fields added. Preserve ALL original fields exactly.
CONSTRAINTS: Preserve original finding fields | Only add enrichment fields | Return raw JSON array only | No markdown wrapping`
}

const promptA = groupA.length > 0
  ? buildPrompt(groupA, 'Security + Correctness', filesA) : null
const promptB = groupB.length > 0
  ? buildPrompt(groupB, 'Performance + Maintainability', filesB) : null
```

### Step 3: Execute CLI Agents (Parallel)

```javascript
function runCli(prompt) {
  const tools = ['gemini', 'qwen', 'codex']
  for (const tool of tools) {
    try {
      const out = Bash(
        `ccw cli -p "${prompt.replace(/"/g, '\\"')}" --tool ${tool} --mode analysis --rule analysis-diagnose-bug-root-cause`,
        { timeout: 300000 }
      )
      return out
    } catch { continue }
  }
  return null // All tools failed
}

// Run both groups -- if both present, execute via Bash run_in_background for parallelism
let resultA = null, resultB = null

if (promptA && promptB) {
  // Both groups: run in parallel
  // Group A in background
  Bash(`ccw cli -p "${promptA.replace(/"/g, '\\"')}" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause > "${sessionFolder}/review/_groupA.txt" 2>&1`,
    { run_in_background: true, timeout: 300000 })
  // Group B synchronous (blocks until done)
  resultB = runCli(promptB)
  // Wait for Group A to finish, then read its output
  Bash(`sleep 5`) // Brief grace period in case Group A is still writing
  try { resultA = Read(`${sessionFolder}/review/_groupA.txt`) } catch {}
  // If the background run failed, try a synchronous fallback
  if (!resultA) resultA = runCli(promptA)
} else if (promptA) {
  resultA = runCli(promptA)
} else if (promptB) {
  resultB = runCli(promptB)
}
```

### Step 4: Parse & Merge Results

```javascript
function parseCliOutput(output) {
  if (!output) return []
  try {
    const match = output.match(/\[[\s\S]*\]/)
    if (!match) return []
    const parsed = JSON.parse(match[0])
    // Validate enrichment fields exist
    return parsed.filter(f => f.id && f.dimension).map(f => ({
      ...f,
      root_cause: f.root_cause || { description: 'Unknown', related_findings: [], is_symptom: false },
      impact: f.impact || { scope: 'medium', affected_files: [f.location?.file].filter(Boolean), blast_radius: 'module' },
      optimization: f.optimization || { approach: f.suggested_fix || '', alternative: '', tradeoff: '' },
      fix_strategy: ['minimal', 'refactor', 'skip'].includes(f.fix_strategy) ? f.fix_strategy : 'minimal',
      fix_complexity: ['low', 'medium', 'high'].includes(f.fix_complexity) ? f.fix_complexity : 'medium',
      fix_dependencies: Array.isArray(f.fix_dependencies) ? f.fix_dependencies : []
    }))
  } catch { return [] }
}

const enrichedA = parseCliOutput(resultA)
const enrichedB = parseCliOutput(resultB)

// Merge: CLI-enriched findings replace originals, unenriched originals kept as fallback
const enrichedMap = new Map()
for (const f of [...enrichedA, ...enrichedB]) enrichedMap.set(f.id, f)

const enrichedFindings = deep_analysis.map(f =>
  enrichedMap.get(f.id) || {
    ...f,
    root_cause: { description: 'Analysis unavailable', related_findings: [], is_symptom: false },
    impact: { scope: 'medium', affected_files: [f.location?.file].filter(Boolean), blast_radius: 'unknown' },
    optimization: { approach: f.suggested_fix || '', alternative: '', tradeoff: '' },
    fix_strategy: 'minimal',
    fix_complexity: 'medium',
    fix_dependencies: []
  }
)

// Write output
Write(`${sessionFolder}/review/enriched-findings.json`, JSON.stringify(enrichedFindings, null, 2))

// Cleanup temp files
Bash(`rm -f "${sessionFolder}/review/_groupA.txt" "${sessionFolder}/review/_groupB.txt"`)
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| gemini CLI fails | Fallback to qwen, then codex |
| All CLI tools fail for a group | Use original findings with default enrichment |
| CLI output not valid JSON | Attempt regex extraction, else use defaults |
| Background task hangs | Synchronous fallback after timeout |
| One group fails, other succeeds | Merge partial results with defaults |
| Invalid enrichment fields | Apply defaults for missing/invalid fields |
# Command: generate-report

> Cross-correlate enriched + pass-through findings, compute metrics, write review-report.json (for fixer) and review-report.md (for humans).

## When to Use

- Phase 4 of Reviewer, after deep analysis (or directly if deep_analysis was empty)
- Requires: `enrichedFindings[]` (from Phase 3 or empty), `pass_through[]` (from Phase 2), `sessionFolder`

## Strategy

**Mode**: Direct (inline execution, no CLI needed)

## Execution Steps

### Step 1: Load & Combine Findings

```javascript
let enrichedFindings = []
try { enrichedFindings = JSON.parse(Read(`${sessionFolder}/review/enriched-findings.json`)) } catch {}
const allFindings = [...enrichedFindings, ...pass_through]
```

### Step 2: Cross-Correlate

```javascript
// 2a: Critical files (file appears in >=2 dimensions)
const fileDimMap = {}
for (const f of allFindings) {
  const file = f.location?.file; if (!file) continue
  if (!fileDimMap[file]) fileDimMap[file] = new Set()
  fileDimMap[file].add(f.dimension)
}
const critical_files = Object.entries(fileDimMap)
  .filter(([, dims]) => dims.size >= 2)
  .map(([file, dims]) => ({
    file, dimensions: [...dims],
    finding_count: allFindings.filter(f => f.location?.file === file).length,
    severities: [...new Set(allFindings.filter(f => f.location?.file === file).map(f => f.severity))]
  })).sort((a, b) => b.finding_count - a.finding_count)

// 2b: Group by shared root cause
const rootCauseGroups = [], grouped = new Set()
for (const f of allFindings) {
  if (grouped.has(f.id)) continue
  const related = (f.root_cause?.related_findings || []).filter(rid => !grouped.has(rid))
  if (related.length > 0) {
    const ids = [f.id, ...related]; ids.forEach(id => grouped.add(id))
    rootCauseGroups.push({ root_cause: f.root_cause?.description || f.title,
      finding_ids: ids, primary_id: f.id, dimension: f.dimension, severity: f.severity })
  }
}

// 2c: Optimization suggestions from root cause groups + standalone enriched
const optimization_suggestions = []
for (const group of rootCauseGroups) {
  const p = allFindings.find(f => f.id === group.primary_id)
  if (p?.optimization?.approach) {
    optimization_suggestions.push({ title: `Fix root cause: ${group.root_cause}`,
      approach: p.optimization.approach, alternative: p.optimization.alternative || '',
      tradeoff: p.optimization.tradeoff || '', affected_findings: group.finding_ids,
      fix_strategy: p.fix_strategy || 'minimal', fix_complexity: p.fix_complexity || 'medium',
      estimated_impact: `Resolves ${group.finding_ids.length} findings` })
  }
}
for (const f of enrichedFindings) {
  if (grouped.has(f.id) || !f.optimization?.approach || f.severity === 'low' || f.severity === 'info') continue
  optimization_suggestions.push({ title: `${f.id}: ${f.title}`,
    approach: f.optimization.approach, alternative: f.optimization.alternative || '',
    tradeoff: f.optimization.tradeoff || '', affected_findings: [f.id],
    fix_strategy: f.fix_strategy || 'minimal', fix_complexity: f.fix_complexity || 'medium',
    estimated_impact: 'Resolves 1 finding' })
}

// 2d: Metrics
const by_dimension = {}, by_severity = {}, dimension_severity_matrix = {}
for (const f of allFindings) {
  by_dimension[f.dimension] = (by_dimension[f.dimension] || 0) + 1
  by_severity[f.severity] = (by_severity[f.severity] || 0) + 1
  if (!dimension_severity_matrix[f.dimension]) dimension_severity_matrix[f.dimension] = {}
  dimension_severity_matrix[f.dimension][f.severity] = (dimension_severity_matrix[f.dimension][f.severity] || 0) + 1
}
const fixable = allFindings.filter(f => f.fix_strategy !== 'skip')
const autoFixable = fixable.filter(f => f.fix_complexity === 'low' && f.fix_strategy === 'minimal')
```

### Step 3: Write review-report.json

```javascript
const reviewReport = {
  review_id: `rev-${Date.now()}`, review_date: new Date().toISOString(),
  findings: allFindings, critical_files, optimization_suggestions, root_cause_groups: rootCauseGroups,
  summary: { total: allFindings.length, deep_analyzed: enrichedFindings.length,
    pass_through: pass_through.length, by_dimension, by_severity, dimension_severity_matrix,
    fixable_count: fixable.length, auto_fixable_count: autoFixable.length,
    critical_file_count: critical_files.length, optimization_count: optimization_suggestions.length }
}
Bash(`mkdir -p "${sessionFolder}/review"`)
Write(`${sessionFolder}/review/review-report.json`, JSON.stringify(reviewReport, null, 2))
```

### Step 4: Write review-report.md

```javascript
const dims = ['security','correctness','performance','maintainability']
const sevs = ['critical','high','medium','low','info']
const S = reviewReport.summary

// Dimension x Severity matrix
let mx = '| Dimension | Critical | High | Medium | Low | Info | Total |\n|---|---|---|---|---|---|---|\n'
for (const d of dims) {
  mx += `| ${d} | ${sevs.map(s => dimension_severity_matrix[d]?.[s]||0).join(' | ')} | ${by_dimension[d]||0} |\n`
}
mx += `| **Total** | ${sevs.map(s => by_severity[s]||0).join(' | ')} | **${S.total}** |\n`

// Critical+High findings table
const ch = allFindings.filter(f => f.severity==='critical'||f.severity==='high')
  .sort((a,b) => (a.severity==='critical'?0:1)-(b.severity==='critical'?0:1))
let ft = '| ID | Sev | Dim | File:Line | Title | Fix |\n|---|---|---|---|---|---|\n'
if (ch.length) ch.forEach(f => { ft += `| ${f.id} | ${f.severity} | ${f.dimension} | ${f.location?.file}:${f.location?.line} | ${f.title} | ${f.fix_strategy||'-'} |\n` })
else ft += '| - | - | - | - | No critical/high findings | - |\n'

// Optimization suggestions
let os = optimization_suggestions.map((o,i) =>
  `### ${i+1}. ${o.title}\n- **Approach**: ${o.approach}\n${o.tradeoff?`- **Tradeoff**: ${o.tradeoff}\n`:''}- **Strategy**: ${o.fix_strategy} | **Complexity**: ${o.fix_complexity} | ${o.estimated_impact}`
).join('\n\n') || '_No optimization suggestions._'

// Critical files
const cf = critical_files.slice(0,10).map(c =>
  `- **${c.file}** (${c.finding_count} findings, dims: ${c.dimensions.join(', ')})`
).join('\n') || '_No critical files._'

// Fix scope
const fs = [
  by_severity.critical ? `${by_severity.critical} critical (must fix)` : '',
  by_severity.high ? `${by_severity.high} high (should fix)` : '',
  autoFixable.length ? `${autoFixable.length} auto-fixable (low effort)` : ''
].filter(Boolean).map(s => `- ${s}`).join('\n') || '- No actionable findings.'

Write(`${sessionFolder}/review/review-report.md`,
`# Review Report

**ID**: ${reviewReport.review_id} | **Date**: ${reviewReport.review_date}
**Findings**: ${S.total} | **Fixable**: ${S.fixable_count} | **Auto-fixable**: ${S.auto_fixable_count}

## Executive Summary
- Deep analyzed: ${S.deep_analyzed} | Pass-through: ${S.pass_through}
- Critical files: ${S.critical_file_count} | Optimizations: ${S.optimization_count}

## Metrics Matrix
${mx}
## Critical & High Findings
${ft}
## Critical Files
${cf}

## Optimization Suggestions
${os}

## Recommended Fix Scope
${fs}

**Total fixable**: ${S.fixable_count} / ${S.total}
`)
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Enriched findings missing | Use empty array, report pass_through only |
| JSON parse failure | Log warning, use raw findings |
| Session folder missing | Create review subdir via mkdir |
| Empty allFindings | Write minimal "clean" report |
@@ -1,231 +0,0 @@
|
||||
# Reviewer Role

Performs deep analysis of scan findings, enriches them with root cause / impact / optimization data, and generates a structured review report. Read-only -- never modifies source code.

## Identity

- **Name**: `reviewer` | **Tag**: `[reviewer]`
- **Task Prefix**: `REV-*`
- **Responsibility**: read-only-analysis

## Boundaries

### MUST

- Only process `REV-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[reviewer]` identifier
- Only communicate with the coordinator, via SendMessage
- Write only to the session review directory
- Triage findings before deep analysis (cap at 15 for deep analysis)
- Work strictly within read-only analysis scope

### MUST NOT

- Modify source code files
- Fix issues
- Create tasks for other roles
- Contact scanner/fixer directly
- Run any write-mode CLI commands
- Omit the `[reviewer]` identifier from any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `deep-analyze` | [commands/deep-analyze.md](commands/deep-analyze.md) | Phase 3 | CLI Fan-out root cause analysis |
| `generate-report` | [commands/generate-report.md](commands/generate-report.md) | Phase 4 | Cross-correlate + report generation |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Read` | Built-in | reviewer | Load scan results |
| `Write` | Built-in | reviewer | Write review reports |
| `TaskUpdate` | Built-in | reviewer | Update task status |
| `team_msg` | MCP | reviewer | Log communication |
| `Bash` | Built-in | reviewer | CLI analysis calls |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `review_progress` | reviewer -> coordinator | Milestone | Progress update during review |
| `review_complete` | reviewer -> coordinator | Phase 5 | Review finished with findings |
| `error` | reviewer -> coordinator | Failure | Error requiring attention |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "reviewer",
  type: "review_complete",
  ref: "<session-folder>/review/review-report.json"
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from reviewer --type review_complete --ref <path> --json")
```

---
## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `REV-*` + status pending + blockedBy empty -> TaskGet -> TaskUpdate in_progress.

Extract from task description:

| Parameter | Extraction Pattern | Default |
|-----------|-------------------|---------|
| Session folder | `session: <path>` | (required) |
| Input path | `input: <path>` | `<session>/scan/scan-results.json` |
| Dimensions | `dimensions: <list>` | `sec,cor,perf,maint` |

Load scan results from the input path. If missing or empty -> report clean, complete immediately.

**Resume Artifact Check**: If `review-report.json` exists and is complete -> skip to Phase 5.
---

### Phase 2: Triage Findings

**Objective**: Split findings into deep analysis vs pass-through buckets.

**Triage rules**:

| Category | Severity | Action |
|----------|----------|--------|
| Deep analysis | critical, high, medium | Enrich with root cause, impact, optimization |
| Pass-through | low | Include in report without enrichment |

**Limits**:

| Parameter | Value | Reason |
|-----------|-------|--------|
| MAX_DEEP | 15 | CLI call efficiency |
| Priority order | critical -> high -> medium | Highest impact first |

**Workflow**:

1. Filter findings with severity in [critical, high, medium]
2. Sort by severity (critical first)
3. Take the first MAX_DEEP for deep analysis
4. Route remaining findings to the pass-through bucket

**Success**: deep_analysis and pass_through buckets populated.

If the deep_analysis bucket is empty -> skip Phase 3, go directly to Phase 4.
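The triage workflow above can be sketched as a pure function; `triage` is an illustrative helper, assuming findings carry a `severity` field as in the scanner's output schema:

```javascript
// Illustrative triage: enrich critical/high/medium (up to MAX_DEEP),
// pass everything else through unenriched.
const MAX_DEEP = 15
const RANK = { critical: 0, high: 1, medium: 2 }

function triage(findings) {
  const deep = findings
    .filter(f => f.severity in RANK)          // low severity never gets deep analysis
    .sort((a, b) => RANK[a.severity] - RANK[b.severity])  // critical first
    .slice(0, MAX_DEEP)                        // cap for CLI call efficiency
  const deepSet = new Set(deep)
  return {
    deep_analysis: deep,
    // pass-through = low severity plus any eligible overflow beyond MAX_DEEP
    pass_through: findings.filter(f => !deepSet.has(f))
  }
}
```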
---

### Phase 3: Deep Analysis

**Objective**: Enrich selected findings with root cause, impact, and optimization suggestions.

Delegate to `commands/deep-analyze.md`, which performs CLI Fan-out analysis.

**Analysis strategy**:

| Condition | Strategy |
|-----------|----------|
| Single dimension analysis | Direct inline scan |
| Multi-dimension analysis | Per-dimension sequential scan |
| Deep analysis needed | CLI Fan-out to external tool |

**Enrichment fields** (added to each finding):

| Field | Description |
|-------|-------------|
| root_cause | Underlying cause of the issue |
| impact | Business/technical impact |
| optimization | Suggested optimization approach |
| fix_strategy | auto/manual/skip |
| fix_complexity | low/medium/high |
| fix_dependencies | Array of dependent finding IDs |

**Output**: `enriched-findings.json`

If CLI deep analysis fails -> use the original findings without enrichment.

---

### Phase 4: Generate Report

**Objective**: Cross-correlate enriched + pass-through findings, generate the review report.

Delegate to `commands/generate-report.md`.

**Report structure**:

| Section | Content |
|---------|---------|
| Summary | Total count, by_severity, by_dimension, fixable_count, auto_fixable_count |
| Critical files | Files with multiple critical/high findings |
| Findings | All findings with enrichment data |

**Output files**:

| File | Format | Purpose |
|------|--------|---------|
| review-report.json | JSON | Machine-readable for fixer |
| review-report.md | Markdown | Human-readable summary |

**Success**: Both report files written.

---

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

**Objective**: Report review results to the coordinator.

**Workflow**:

1. Update .msg/meta.json with the review results summary
2. Build top findings summary (critical/high, max 8)
3. Log via team_msg with `[reviewer]` prefix
4. SendMessage to coordinator
5. TaskUpdate completed
6. Loop to Phase 1 for the next task

**Report content**:

| Field | Value |
|-------|-------|
| Findings count | Total |
| Severity summary | critical:n high:n medium:n low:n |
| Fixable count | Number of auto-fixable |
| Top findings | Critical/high items |
| Critical files | Files with most issues |
| Output path | review-report.json location |

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Scan results file missing | Report error, complete task cleanly |
| 0 findings in scan | Report clean, complete immediately |
| CLI deep analysis fails | Use original findings without enrichment |
| Report generation fails | Write minimal report with raw findings |
| Session folder missing | Re-create review subdirectory |
| JSON parse failures | Log warning, use fallback data |
| Context/Plan file not found | Notify coordinator, request location |

---
# Command: semantic-scan

> LLM-based semantic analysis via CLI. Supplements toolchain findings with issues that static tools cannot detect: business logic flaws, architectural problems, complex security patterns.

## When to Use

- Phase 3 of Scanner, Standard mode, Step B
- Runs AFTER toolchain-scan completes (needs its output to avoid duplication)
- Quick mode does NOT use this command

**Trigger conditions**:
- SCAN-* task in Phase 3 with `quickMode === false`
- toolchain-scan.md has completed (toolchain-findings.json exists or is empty)

## Strategy

### Delegation Mode

**Mode**: CLI Fan-out (single gemini agent, analysis only)

### Tool Fallback Chain

```
gemini (primary) -> qwen (fallback) -> codex (fallback)
```

## Execution Steps

### Step 1: Prepare Context

Build the CLI prompt with target files and a summary of toolchain findings to avoid duplication.

```javascript
// Read toolchain findings for dedup context
let toolFindings = []
try {
  toolFindings = JSON.parse(Read(`${sessionFolder}/scan/toolchain-findings.json`))
} catch { /* no toolchain findings */ }

// Build toolchain summary for dedup (compact: file:line:rule per line)
const toolSummary = toolFindings.length > 0
  ? toolFindings.slice(0, 50).map(f =>
      `${f.location?.file}:${f.location?.line} [${f.source}] ${f.title}`
    ).join('\n')
  : '(no toolchain findings)'

// Build target file list for CLI context
// Limit to reasonable size for CLI prompt
const fileList = targetFiles.slice(0, 100)
const targetPattern = fileList.length <= 20
  ? fileList.join(' ')
  : `${target}/**/*.{ts,tsx,js,jsx,py,go,java,rs}`

// Map requested dimensions to scan focus areas
const DIM_FOCUS = {
  sec: 'Security: business logic vulnerabilities, privilege escalation, sensitive data flow, auth bypass, injection beyond simple patterns',
  cor: 'Correctness: logic errors, unhandled exception paths, state management bugs, race conditions, incorrect algorithm implementation',
  perf: 'Performance: algorithm complexity (O(n^2)+), N+1 queries, unnecessary sync operations, memory leaks, missing caching opportunities',
  maint: 'Maintainability: architectural coupling, abstraction leaks, project convention violations, dead code paths, excessive complexity'
}

const focusAreas = dimensions
  .map(d => DIM_FOCUS[d])
  .filter(Boolean)
  .map((desc, i) => `${i + 1}. ${desc}`)
  .join('\n')
```

### Step 2: Execute CLI Scan

```javascript
const maxPerDimension = 5
const minSeverity = 'medium'

const cliPrompt = `PURPOSE: Supplement toolchain scan with semantic analysis that static tools cannot detect. Find logic errors, architectural issues, and complex vulnerability patterns.
TASK:
${focusAreas}
MODE: analysis
CONTEXT: @${targetPattern}
Toolchain already detected these issues (DO NOT repeat them):
${toolSummary}
EXPECTED: Respond with ONLY a JSON array (no markdown, no explanation). Each element:
{"dimension":"security|correctness|performance|maintainability","category":"<sub-category>","severity":"critical|high|medium","title":"<concise title>","description":"<detailed explanation>","location":{"file":"<path>","line":<number>,"end_line":<number>,"code_snippet":"<relevant code>"},"source":"llm","suggested_fix":"<how to fix>","effort":"low|medium|high","confidence":"high|medium|low"}
CONSTRAINTS: Max ${maxPerDimension} findings per dimension | Only ${minSeverity} severity and above | Do not duplicate toolchain findings | Focus on issues tools CANNOT detect | Return raw JSON array only`

let cliOutput = null
let cliTool = 'gemini'

// Try primary tool
try {
  cliOutput = Bash(
    `ccw cli -p "${cliPrompt.replace(/"/g, '\\"')}" --tool gemini --mode analysis --rule analysis-review-code-quality`,
    { timeout: 300000 }
  )
} catch {
  // Fallback to qwen
  try {
    cliTool = 'qwen'
    cliOutput = Bash(
      `ccw cli -p "${cliPrompt.replace(/"/g, '\\"')}" --tool qwen --mode analysis`,
      { timeout: 300000 }
    )
  } catch {
    // Fallback to codex
    try {
      cliTool = 'codex'
      cliOutput = Bash(
        `ccw cli -p "${cliPrompt.replace(/"/g, '\\"')}" --tool codex --mode analysis`,
        { timeout: 300000 }
      )
    } catch {
      // All CLI tools failed
      cliOutput = null
    }
  }
}
```

### Step 3: Parse & Validate Output

```javascript
let semanticFindings = []

if (cliOutput) {
  try {
    // Extract JSON array from CLI output (may have surrounding text)
    const jsonMatch = cliOutput.match(/\[[\s\S]*\]/)
    if (jsonMatch) {
      const parsed = JSON.parse(jsonMatch[0])

      // Validate each finding against the schema
      semanticFindings = parsed.filter(f => {
        // Required fields check
        if (!f.dimension || !f.title || !f.location?.file) return false
        // Dimension must be valid
        if (!['security', 'correctness', 'performance', 'maintainability'].includes(f.dimension)) return false
        // Severity must be valid and meet the minimum
        const validSev = ['critical', 'high', 'medium']
        if (!validSev.includes(f.severity)) return false
        return true
      }).map(f => ({
        dimension: f.dimension,
        category: f.category || 'general',
        severity: f.severity,
        title: f.title,
        description: f.description || f.title,
        location: {
          file: f.location.file,
          line: f.location.line || 1,
          end_line: f.location.end_line || f.location.line || 1,
          code_snippet: f.location.code_snippet || ''
        },
        source: 'llm',
        tool_rule: null,
        suggested_fix: f.suggested_fix || '',
        effort: ['low', 'medium', 'high'].includes(f.effort) ? f.effort : 'medium',
        confidence: ['high', 'medium', 'low'].includes(f.confidence) ? f.confidence : 'medium'
      }))
    }
  } catch {
    // JSON parse failed - log and continue with empty findings
  }
}

// Enforce per-dimension limits
const dimCounts = {}
semanticFindings = semanticFindings.filter(f => {
  dimCounts[f.dimension] = (dimCounts[f.dimension] || 0) + 1
  return dimCounts[f.dimension] <= maxPerDimension
})

// Write output
Write(`${sessionFolder}/scan/semantic-findings.json`,
  JSON.stringify(semanticFindings, null, 2))
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| gemini CLI fails | Fall back to qwen, then codex |
| All CLI tools fail | Log warning, write empty findings array (toolchain results still valid) |
| CLI output not valid JSON | Attempt regex extraction, else empty findings |
| Findings exceed per-dimension limit | Truncate to max per dimension |
| Invalid dimension/severity in output | Filter out invalid entries |
| CLI timeout (>5 min) | Kill, log warning, return empty findings |

---
# Command: toolchain-scan

> Parallel static analysis tool execution. Detects available tools, runs them concurrently, and normalizes their output into standardized findings.

## When to Use

- Phase 3 of Scanner, Standard mode, Step A
- At least one tool detected in Phase 2
- Quick mode does NOT use this command

## Strategy

### Delegation Mode

**Mode**: Direct (Bash parallel execution)

## Execution Steps

### Step 1: Build Tool Commands

```javascript
if (!Object.values(toolchain).some(Boolean)) {
  Write(`${sessionFolder}/scan/toolchain-findings.json`, '[]')
  return
}

const tmpDir = `${sessionFolder}/scan/tmp`
Bash(`mkdir -p "${tmpDir}"`)

const cmds = []

if (toolchain.tsc)
  cmds.push(`(cd "${projectRoot}" && npx tsc --noEmit --pretty false 2>&1 | head -500 > "${tmpDir}/tsc.txt") &`)
if (toolchain.eslint)
  cmds.push(`(cd "${projectRoot}" && npx eslint "${target}" --format json --no-error-on-unmatched-pattern 2>/dev/null | head -5000 > "${tmpDir}/eslint.json") &`)
if (toolchain.semgrep)
  cmds.push(`(cd "${projectRoot}" && semgrep --config auto --json "${target}" 2>/dev/null | head -5000 > "${tmpDir}/semgrep.json") &`)
if (toolchain.ruff)
  cmds.push(`(cd "${projectRoot}" && ruff check "${target}" --output-format json 2>/dev/null | head -5000 > "${tmpDir}/ruff.json") &`)
if (toolchain.mypy)
  // plain-text output so the Step 3 line-based parser applies
  cmds.push(`(cd "${projectRoot}" && mypy "${target}" 2>/dev/null | head -2000 > "${tmpDir}/mypy.txt") &`)
if (toolchain.npmAudit)
  cmds.push(`(cd "${projectRoot}" && npm audit --json 2>/dev/null | head -5000 > "${tmpDir}/audit.json") &`)
```

### Step 2: Parallel Execution

```javascript
Bash(cmds.join('\n') + '\nwait', { timeout: 300000 })
```

### Step 3: Parse Tool Outputs

Each parser normalizes to: `{ dimension, category, severity, title, description, location:{file,line,end_line,code_snippet}, source, tool_rule, suggested_fix, effort, confidence }`

```javascript
const findings = []

// --- tsc: file(line,col): error TSxxxx: message ---
if (toolchain.tsc) {
  try {
    const out = Read(`${tmpDir}/tsc.txt`)
    const re = /^(.+)\((\d+),\d+\):\s+(error|warning)\s+(TS\d+):\s+(.+)$/gm
    let m; while ((m = re.exec(out)) !== null) {
      findings.push({
        dimension: 'correctness', category: 'type-safety',
        severity: m[3] === 'error' ? 'high' : 'medium',
        title: `tsc ${m[4]}: ${m[5].slice(0,80)}`, description: m[5],
        location: { file: m[1], line: +m[2] },
        source: 'tool:tsc', tool_rule: m[4], suggested_fix: '',
        effort: 'low', confidence: 'high'
      })
    }
  } catch {}
}

// --- eslint: JSON array of {filePath, messages[{severity,ruleId,message,line}]} ---
if (toolchain.eslint) {
  try {
    const data = JSON.parse(Read(`${tmpDir}/eslint.json`))
    for (const f of data) for (const msg of (f.messages || [])) {
      const isErr = msg.severity === 2
      findings.push({
        dimension: isErr ? 'correctness' : 'maintainability',
        category: isErr ? 'bug' : 'code-smell',
        severity: isErr ? 'high' : 'medium',
        title: `eslint ${msg.ruleId || '?'}: ${(msg.message||'').slice(0,80)}`,
        description: msg.message || '',
        location: { file: f.filePath, line: msg.line || 1, end_line: msg.endLine, code_snippet: msg.source || '' },
        source: 'tool:eslint', tool_rule: msg.ruleId || null,
        suggested_fix: msg.fix ? 'Auto-fixable' : '', effort: msg.fix ? 'low' : 'medium', confidence: 'high'
      })
    }
  } catch {}
}

// --- semgrep: {results[{path,start:{line},end:{line},check_id,extra:{severity,message,fix,lines}}]} ---
if (toolchain.semgrep) {
  try {
    const data = JSON.parse(Read(`${tmpDir}/semgrep.json`))
    const smap = { ERROR:'high', WARNING:'medium', INFO:'low' }
    for (const r of (data.results || [])) {
      findings.push({
        dimension: 'security', category: r.check_id?.split('.').pop() || 'generic',
        severity: smap[r.extra?.severity] || 'medium',
        title: `semgrep: ${(r.extra?.message || r.check_id || '').slice(0,80)}`,
        description: r.extra?.message || '', location: { file: r.path, line: r.start?.line || 1, end_line: r.end?.line, code_snippet: r.extra?.lines || '' },
        source: 'tool:semgrep', tool_rule: r.check_id || null,
        suggested_fix: r.extra?.fix || '', effort: 'medium', confidence: smap[r.extra?.severity] === 'high' ? 'high' : 'medium'
      })
    }
  } catch {}
}

// --- ruff: [{code,message,filename,location:{row},end_location:{row},fix}] ---
if (toolchain.ruff) {
  try {
    const data = JSON.parse(Read(`${tmpDir}/ruff.json`))
    for (const item of data) {
      const code = item.code || ''
      const dim = code.startsWith('S') ? 'security' : (code.startsWith('F') || code.startsWith('B')) ? 'correctness' : 'maintainability'
      findings.push({
        dimension: dim, category: dim === 'security' ? 'input-validation' : dim === 'correctness' ? 'bug' : 'code-smell',
        severity: (code.startsWith('S') || code.startsWith('F')) ? 'high' : 'medium',
        title: `ruff ${code}: ${(item.message||'').slice(0,80)}`, description: item.message || '',
        location: { file: item.filename, line: item.location?.row || 1, end_line: item.end_location?.row },
        source: 'tool:ruff', tool_rule: code, suggested_fix: item.fix?.message || '',
        effort: item.fix ? 'low' : 'medium', confidence: 'high'
      })
    }
  } catch {}
}

// --- npm audit: {vulnerabilities:{name:{severity,title,fixAvailable,via}}} ---
if (toolchain.npmAudit) {
  try {
    const data = JSON.parse(Read(`${tmpDir}/audit.json`))
    const smap = { critical:'critical', high:'high', moderate:'medium', low:'low', info:'info' }
    for (const [,v] of Object.entries(data.vulnerabilities || {})) {
      findings.push({
        dimension: 'security', category: 'dependency', severity: smap[v.severity] || 'medium',
        title: `npm audit: ${v.name} - ${(v.title || '').slice(0,80)}`,
        description: v.title || `Vulnerable: ${v.name}`,
        location: { file: 'package.json', line: 1 },
        source: 'tool:npm-audit', tool_rule: null,
        suggested_fix: v.fixAvailable ? 'npm audit fix' : 'Manual resolution',
        effort: v.fixAvailable ? 'low' : 'high', confidence: 'high'
      })
    }
  } catch {}
}

// --- mypy: file:line: error: message [code] ---
if (toolchain.mypy) {
  try {
    const out = Read(`${tmpDir}/mypy.txt`)
    const re = /^(.+):(\d+):\s+(error|warning|note):\s+(.+?)(?:\s+\[(\w[\w-]*)\])?$/gm
    let m; while ((m = re.exec(out)) !== null) {
      if (m[3] === 'note') continue
      findings.push({
        dimension: 'correctness', category: 'type-safety',
        severity: m[3] === 'error' ? 'high' : 'medium',
        title: `mypy${m[5] ? ` [${m[5]}]` : ''}: ${m[4].slice(0,80)}`, description: m[4],
        location: { file: m[1], line: +m[2] },
        source: 'tool:mypy', tool_rule: m[5] || null, suggested_fix: '',
        effort: 'low', confidence: 'high'
      })
    }
  } catch {}
}
```

### Step 4: Write Output

```javascript
Write(`${sessionFolder}/scan/toolchain-findings.json`, JSON.stringify(findings, null, 2))
Bash(`rm -rf "${tmpDir}"`)
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Tool not found at runtime | Skip gracefully, continue with others |
| Tool times out (>5 min) | Killed when the Bash timeout expires; partial output used |
| Tool output unparseable | try/catch skips that tool's findings |
| All tools fail | Empty array written, semantic-scan covers all dimensions |

---
# Scanner Role

Runs toolchain + LLM semantic scans to produce structured findings: static analysis tools in parallel, then an LLM pass for issues the tools miss. Read-only -- never modifies source code.

## Identity

- **Name**: `scanner` | **Tag**: `[scanner]`
- **Task Prefix**: `SCAN-*`
- **Responsibility**: read-only-analysis

## Boundaries

### MUST

- Only process `SCAN-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[scanner]` identifier
- Only communicate with the coordinator, via SendMessage
- Write only to the session scan directory
- Assign dimension-prefixed IDs: SEC-001, COR-001, PRF-001, MNT-001
- Work strictly within read-only analysis scope

### MUST NOT

- Modify source files
- Fix issues
- Create tasks for other roles
- Contact reviewer/fixer directly
- Run any write-mode CLI commands
- Omit the `[scanner]` identifier from any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `toolchain-scan` | [commands/toolchain-scan.md](commands/toolchain-scan.md) | Phase 3A | Parallel static analysis |
| `semantic-scan` | [commands/semantic-scan.md](commands/semantic-scan.md) | Phase 3B | LLM analysis via CLI |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Read` | Built-in | scanner | Load context files |
| `Write` | Built-in | scanner | Write scan results |
| `Glob` | Built-in | scanner | Find target files |
| `Bash` | Built-in | scanner | Run toolchain commands |
| `TaskUpdate` | Built-in | scanner | Update task status |
| `team_msg` | MCP | scanner | Log communication |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `scan_progress` | scanner -> coordinator | Milestone | Progress update during scan |
| `scan_complete` | scanner -> coordinator | Phase 5 | Scan finished with findings count |
| `error` | scanner -> coordinator | Failure | Error requiring attention |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "scanner",
  type: "scan_complete",
  ref: "<session-folder>/scan/scan-results.json"
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from scanner --type scan_complete --ref <path> --json")
```

---
## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `SCAN-*` + status pending + blockedBy empty -> TaskGet -> TaskUpdate in_progress.

Extract from task description:

| Parameter | Extraction Pattern | Default |
|-----------|-------------------|---------|
| Target | `target: <path>` | `.` |
| Dimensions | `dimensions: <list>` | `sec,cor,perf,maint` |
| Quick mode | `quick: true` | false |
| Session folder | `session: <path>` | (required) |

**Resume Artifact Check**: If `scan-results.json` exists and is complete -> skip to Phase 5.

---
### Phase 2: Context Resolution

**Objective**: Resolve target files and detect the available toolchain.

**Workflow**:

1. **Resolve target files**:

| Input Type | Resolution Method |
|------------|-------------------|
| Glob pattern | Direct Glob |
| Directory | Glob `<dir>/**/*.{ts,tsx,js,jsx,py,go,java,rs}` |

If no source files are found -> report empty, complete the task cleanly.

2. **Detect toolchain availability**:

| Tool | Detection Method |
|------|------------------|
| tsc | `tsconfig.json` exists |
| eslint | `.eslintrc*` or `eslint.config.*` or `eslint` in package.json |
| semgrep | `.semgrep.yml` exists |
| ruff | `pyproject.toml` exists + ruff command available |
| mypy | mypy command available + `pyproject.toml` exists |
| npmAudit | `package-lock.json` exists |

**Success**: Target files resolved, toolchain detected.
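The detection table can be sketched as a predicate over the project root's file listing. This is a simplified, illustrative sketch: `detectToolchain` and its input shape are assumptions (it omits the package.json eslint check, and the real flow probes for the ruff/mypy commands via Bash):

```javascript
// Illustrative detection: derive toolchain availability from a listing of
// project root file names plus a set of commands known to be on PATH.
function detectToolchain(rootFiles, availableCommands = new Set()) {
  const has = (re) => rootFiles.some(f => re.test(f))
  return {
    tsc: has(/^tsconfig\.json$/),
    eslint: has(/^\.eslintrc/) || has(/^eslint\.config\./),
    semgrep: has(/^\.semgrep\.yml$/),
    ruff: has(/^pyproject\.toml$/) && availableCommands.has('ruff'),
    mypy: has(/^pyproject\.toml$/) && availableCommands.has('mypy'),
    npmAudit: has(/^package-lock\.json$/)
  }
}
```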
---

### Phase 3: Scan Execution

**Objective**: Execute toolchain + semantic scans.

**Strategy selection**:

| Condition | Strategy |
|-----------|----------|
| Quick mode | Single inline CLI call, max 20 findings |
| Standard mode | Sequential: toolchain-scan -> semantic-scan |

**Quick Mode**:

1. Execute a single CLI call in analysis mode
2. Parse the JSON response for findings (max 20)
3. Skip toolchain execution

**Standard Mode**:

1. Delegate to `commands/toolchain-scan.md` -> produces `toolchain-findings.json`
2. Delegate to `commands/semantic-scan.md` -> produces `semantic-findings.json`

**Success**: Findings collected from toolchain and/or semantic scan.

---
### Phase 4: Aggregate & Deduplicate
|
||||
|
||||
**Objective**: Merge findings, assign IDs, write results.
|
||||
|
||||
**Deduplication rules**:
|
||||
|
||||
| Key | Rule |
|
||||
|-----|------|
|
||||
| Duplicate detection | Same file + line + dimension = duplicate |
|
||||
| Priority | Keep first occurrence |
|
||||
|
||||
**ID Assignment**:
|
||||
|
||||
| Dimension | Prefix | Example ID |
|
||||
|-----------|--------|------------|
|
||||
| security | SEC | SEC-001 |
|
||||
| correctness | COR | COR-001 |
|
||||
| performance | PRF | PRF-001 |
|
||||
| maintainability | MNT | MNT-001 |
|
||||
|
||||
**Output schema** (`scan-results.json`):

| Field | Type | Description |
|-------|------|-------------|
| scan_date | string | ISO timestamp |
| target | string | Scan target |
| dimensions | array | Enabled dimensions |
| quick_mode | boolean | Quick mode flag |
| total_findings | number | Total count |
| by_severity | object | Count per severity |
| by_dimension | object | Count per dimension |
| findings | array | Finding objects |

**Each finding**:

| Field | Type | Description |
|-------|------|-------------|
| id | string | Dimension-prefixed ID |
| dimension | string | security/correctness/performance/maintainability |
| category | string | Category within dimension |
| severity | string | critical/high/medium/low |
| title | string | Short title |
| description | string | Detailed description |
| location | object | {file, line} |
| source | string | toolchain/llm |
| suggested_fix | string | Optional fix hint |
| effort | string | low/medium/high |
| confidence | string | low/medium/high |
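Assembling a payload that matches the schema above is mechanical; the only derived fields are the counts. A minimal sketch (`write_scan_results` is a hypothetical helper name):

```python
import json
from collections import Counter
from datetime import datetime, timezone

def write_scan_results(path, target, dimensions, quick_mode, findings):
    """Assemble and write scan-results.json from aggregated findings.

    `by_severity` and `by_dimension` are derived counts; everything
    else is passed through from the scan configuration.
    """
    results = {
        "scan_date": datetime.now(timezone.utc).isoformat(),
        "target": target,
        "dimensions": dimensions,
        "quick_mode": quick_mode,
        "total_findings": len(findings),
        "by_severity": dict(Counter(f["severity"] for f in findings)),
        "by_dimension": dict(Counter(f["dimension"] for f in findings)),
        "findings": findings,
    }
    with open(path, "w") as fh:
        json.dump(results, fh, indent=2)
    return results
```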
**Success**: `scan-results.json` written with unique findings.

---
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

**Objective**: Report findings to coordinator.

**Workflow**:

1. Update `.msg/meta.json` with scan results summary
2. Build top findings summary (critical/high, max 10)
3. Log via `team_msg` with `[scanner]` prefix
4. SendMessage to coordinator
5. TaskUpdate completed
6. Loop to Phase 1 for next task
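Step 2's summary construction can be sketched as below. This is illustrative (`build_report` is a hypothetical name) and assumes findings already carry the dimension-prefixed IDs assigned in Phase 4:

```python
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def build_report(findings: list[dict]) -> dict:
    """Build the coordinator report body: a per-dimension count summary
    plus the top findings (critical/high only, capped at 10)."""
    counts = {"SEC": 0, "COR": 0, "PRF": 0, "MNT": 0}
    for f in findings:
        counts[f["id"].split("-")[0]] += 1
    summary = " ".join(f"{p}:{n}" for p, n in counts.items())
    top = sorted(
        (f for f in findings if f["severity"] in ("critical", "high")),
        key=lambda f: SEVERITY_ORDER[f["severity"]],
    )[:10]
    return {"dimension_summary": summary, "top_findings": top}
```

The summary string matches the `SEC:n COR:n PRF:n MNT:n` format shown in the report-content table.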
**Report content**:

| Field | Value |
|-------|-------|
| Target | Scanned path |
| Mode | quick/standard |
| Findings count | Total |
| Dimension summary | SEC:n COR:n PRF:n MNT:n |
| Top findings | Critical/high items |
| Output path | scan-results.json location |

---
## Error Handling

| Scenario | Resolution |
|----------|------------|
| No source files match target | Report empty, complete task cleanly |
| All toolchain tools unavailable | Skip toolchain, run semantic-only |
| CLI semantic scan fails | Log warning, use toolchain results only |
| Quick mode CLI timeout | Return partial or empty findings |
| Toolchain tool crashes | Skip that tool, continue with others |
| Session folder missing | Re-create scan subdirectory |
| Context/Plan file not found | Notify coordinator, request location |
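The "skip and continue" rows in the table amount to a fault-tolerant tool loop. A minimal sketch under stated assumptions (`run_toolchain` is hypothetical; `tools` maps each tool name to a callable that returns findings or raises):

```python
def run_toolchain(tools, target):
    """Run each toolchain tool, skipping any that is unavailable or
    crashes, so one bad tool never aborts the whole scan."""
    all_findings = []
    for name, run in tools.items():
        try:
            all_findings.extend(run(target))
        except FileNotFoundError:
            continue  # tool unavailable: skip it, semantic scan still runs
        except Exception:
            continue  # tool crashed: skip that tool, continue with others
    return all_findings
```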