chore: move 3 skills to ccw-skill-hub repository

Migrated to D:/ccw-skill-hub/skills/:
- project-analyze
- copyright-docs
- software-manual
Author: catlog22
Date: 2026-02-24 12:23:41 +08:00
Parent: 61e313a0c1
Commit: a859698c7d
6 changed files with 1068 additions and 0 deletions

# Command: deep-analyze
> CLI fan-out deep analysis: splits findings into two domain groups and runs parallel CLI agents to enrich each finding with root cause, impact, and optimization data.
## When to Use
- Phase 3 of Reviewer, when `deep_analysis.length > 0`
- Requires `deep_analysis[]` array and `sessionFolder` from Phase 2
**Trigger conditions**:
- REV-* task in Phase 3 with at least 1 finding triaged for deep analysis
## Strategy
### Delegation Mode
**Mode**: CLI Fan-out (max 2 parallel agents, analysis only)
### Tool Fallback Chain
```
gemini (primary) -> qwen (fallback) -> codex (fallback)
```
### Group Split
```
Group A: Security + Correctness findings -> 1 CLI agent
Group B: Performance + Maintainability findings -> 1 CLI agent
If either group is empty -> skip that agent (run a single agent only)
```
## Execution Steps
### Step 1: Split Findings into Groups
```javascript
const groupA = deep_analysis.filter(f =>
f.dimension === 'security' || f.dimension === 'correctness'
)
const groupB = deep_analysis.filter(f =>
f.dimension === 'performance' || f.dimension === 'maintainability'
)
// Collect all affected files for CLI context
const collectFiles = (group) => [...new Set(
group.map(f => f.location?.file).filter(Boolean)
)]
const filesA = collectFiles(groupA)
const filesB = collectFiles(groupB)
```
### Step 2: Build CLI Prompts
```javascript
function buildPrompt(group, groupLabel, affectedFiles) {
const findingsJson = JSON.stringify(group, null, 2)
const filePattern = affectedFiles.length <= 20
? affectedFiles.map(f => `@${f}`).join(' ')
: '@**/*.{ts,tsx,js,jsx,py,go,java,rs}'
return `PURPOSE: Deep analysis of ${groupLabel} code findings -- root cause, impact, optimization suggestions.
TASK:
- For each finding: trace root cause (independent issue or symptom of another finding?)
- Identify findings sharing the same root cause -> mark related_findings with their IDs
- Assess impact scope and affected files (blast_radius: function/module/system)
- Propose fix strategy (minimal fix vs refactor) with tradeoff analysis
- Identify fix dependencies (which findings must be fixed first?)
- For each finding add these enrichment fields:
root_cause: { description: string, related_findings: string[], is_symptom: boolean }
impact: { scope: "low"|"medium"|"high", affected_files: string[], blast_radius: string }
optimization: { approach: string, alternative: string, tradeoff: string }
fix_strategy: "minimal" | "refactor" | "skip"
fix_complexity: "low" | "medium" | "high"
fix_dependencies: string[] (finding IDs that must be fixed first)
MODE: analysis
CONTEXT: ${filePattern}
Findings to analyze:
${findingsJson}
EXPECTED: Respond with ONLY a JSON array. Each element is the original finding object with the 6 enrichment fields added. Preserve ALL original fields exactly.
CONSTRAINTS: Preserve original finding fields | Only add enrichment fields | Return raw JSON array only | No markdown wrapping`
}
const promptA = groupA.length > 0
? buildPrompt(groupA, 'Security + Correctness', filesA) : null
const promptB = groupB.length > 0
? buildPrompt(groupB, 'Performance + Maintainability', filesB) : null
```
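For reference, an enriched finding under this contract might look like the following (a sketch with hypothetical values; the field names follow the enrichment schema listed in the prompt above, while the original fields `id`, `dimension`, `severity`, `title`, and `location` are illustrative):

```javascript
// Illustrative enriched finding: the original finding fields preserved,
// plus the 6 enrichment fields the CLI agent is asked to add.
const example = {
  id: 'SEC-001',                       // original fields (hypothetical values)
  dimension: 'security',
  severity: 'high',
  title: 'Unsanitized user input in SQL query',
  location: { file: 'src/db/query.ts', line: 42 },
  // -- enrichment fields added by the CLI agent --
  root_cause: {
    description: 'Query strings built by concatenation instead of parameters',
    related_findings: ['SEC-003'],
    is_symptom: false
  },
  impact: { scope: 'high', affected_files: ['src/db/query.ts'], blast_radius: 'module' },
  optimization: {
    approach: 'Switch to parameterized queries',
    alternative: 'Escape input at each call site',
    tradeoff: 'Parameterization touches more call sites but removes the bug class'
  },
  fix_strategy: 'refactor',
  fix_complexity: 'medium',
  fix_dependencies: []
}

// Sanity check mirroring the enum validation applied in Step 4
const valid = ['minimal', 'refactor', 'skip'].includes(example.fix_strategy)
console.log(valid) // prints true
```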
### Step 3: Execute CLI Agents (Parallel)
```javascript
function runCli(prompt) {
const tools = ['gemini', 'qwen', 'codex']
for (const tool of tools) {
try {
const out = Bash(
`ccw cli -p "${prompt.replace(/"/g, '\\"')}" --tool ${tool} --mode analysis --rule analysis-diagnose-bug-root-cause`,
{ timeout: 300000 }
)
return out
} catch { continue }
}
return null // All tools failed
}
// Run both groups -- if both present, execute via Bash run_in_background for parallelism
let resultA = null, resultB = null
if (promptA && promptB) {
// Both groups: run in parallel
// Group A in background
Bash(`ccw cli -p "${promptA.replace(/"/g, '\\"')}" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause > "${sessionFolder}/review/_groupA.txt" 2>&1`,
{ run_in_background: true, timeout: 300000 })
// Group B synchronous (blocks until done)
resultB = runCli(promptB)
// Give the background Group A task a moment to flush its output
Bash(`sleep 5`) // Brief wait if B finished faster
try { resultA = Read(`${sessionFolder}/review/_groupA.txt`) } catch {}
// If the background run failed or wrote nothing, retry synchronously
if (!resultA) resultA = runCli(promptA)
} else if (promptA) {
resultA = runCli(promptA)
} else if (promptB) {
resultB = runCli(promptB)
}
```
### Step 4: Parse & Merge Results
```javascript
function parseCliOutput(output) {
if (!output) return []
try {
const match = output.match(/\[[\s\S]*\]/)
if (!match) return []
const parsed = JSON.parse(match[0])
// Validate enrichment fields exist
return parsed.filter(f => f.id && f.dimension).map(f => ({
...f,
root_cause: f.root_cause || { description: 'Unknown', related_findings: [], is_symptom: false },
impact: f.impact || { scope: 'medium', affected_files: [f.location?.file].filter(Boolean), blast_radius: 'module' },
optimization: f.optimization || { approach: f.suggested_fix || '', alternative: '', tradeoff: '' },
fix_strategy: ['minimal', 'refactor', 'skip'].includes(f.fix_strategy) ? f.fix_strategy : 'minimal',
fix_complexity: ['low', 'medium', 'high'].includes(f.fix_complexity) ? f.fix_complexity : 'medium',
fix_dependencies: Array.isArray(f.fix_dependencies) ? f.fix_dependencies : []
}))
} catch { return [] }
}
const enrichedA = parseCliOutput(resultA)
const enrichedB = parseCliOutput(resultB)
// Merge: CLI-enriched findings replace originals, unenriched originals kept as fallback
const enrichedMap = new Map()
for (const f of [...enrichedA, ...enrichedB]) enrichedMap.set(f.id, f)
const enrichedFindings = deep_analysis.map(f =>
enrichedMap.get(f.id) || {
...f,
root_cause: { description: 'Analysis unavailable', related_findings: [], is_symptom: false },
impact: { scope: 'medium', affected_files: [f.location?.file].filter(Boolean), blast_radius: 'unknown' },
optimization: { approach: f.suggested_fix || '', alternative: '', tradeoff: '' },
fix_strategy: 'minimal',
fix_complexity: 'medium',
fix_dependencies: []
}
)
// Write output
Write(`${sessionFolder}/review/enriched-findings.json`, JSON.stringify(enrichedFindings, null, 2))
// Cleanup temp files
Bash(`rm -f "${sessionFolder}/review/_groupA.txt" "${sessionFolder}/review/_groupB.txt"`)
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| gemini CLI fails | Fallback to qwen, then codex |
| All CLI tools fail for a group | Use original findings with default enrichment |
| CLI output not valid JSON | Attempt regex extraction, else use defaults |
| Background task hangs | Synchronous fallback after timeout |
| One group fails, other succeeds | Merge partial results with defaults |
| Invalid enrichment fields | Apply defaults for missing/invalid fields |

# Command: generate-report
> Cross-correlate enriched and pass-through findings, compute metrics, and write review-report.json (for the fixer) and review-report.md (for humans).
## When to Use
- Phase 4 of Reviewer, after deep analysis (or directly if deep_analysis was empty)
- Requires: `enrichedFindings[]` (from Phase 3 or empty), `pass_through[]` (from Phase 2), `sessionFolder`
## Strategy
**Mode**: Direct (inline execution, no CLI needed)
## Execution Steps
### Step 1: Load & Combine Findings
```javascript
let enrichedFindings = []
try { enrichedFindings = JSON.parse(Read(`${sessionFolder}/review/enriched-findings.json`)) } catch {}
const allFindings = [...enrichedFindings, ...pass_through]
```
### Step 2: Cross-Correlate
```javascript
// 2a: Critical files (file appears in >=2 dimensions)
const fileDimMap = {}
for (const f of allFindings) {
const file = f.location?.file; if (!file) continue
if (!fileDimMap[file]) fileDimMap[file] = new Set()
fileDimMap[file].add(f.dimension)
}
const critical_files = Object.entries(fileDimMap)
.filter(([, dims]) => dims.size >= 2)
.map(([file, dims]) => ({
file, dimensions: [...dims],
finding_count: allFindings.filter(f => f.location?.file === file).length,
severities: [...new Set(allFindings.filter(f => f.location?.file === file).map(f => f.severity))]
})).sort((a, b) => b.finding_count - a.finding_count)
// 2b: Group by shared root cause
const rootCauseGroups = [], grouped = new Set()
for (const f of allFindings) {
if (grouped.has(f.id)) continue
const related = (f.root_cause?.related_findings || []).filter(rid => !grouped.has(rid))
if (related.length > 0) {
const ids = [f.id, ...related]; ids.forEach(id => grouped.add(id))
rootCauseGroups.push({ root_cause: f.root_cause?.description || f.title,
finding_ids: ids, primary_id: f.id, dimension: f.dimension, severity: f.severity })
}
}
// 2c: Optimization suggestions from root cause groups + standalone enriched
const optimization_suggestions = []
for (const group of rootCauseGroups) {
const p = allFindings.find(f => f.id === group.primary_id)
if (p?.optimization?.approach) {
optimization_suggestions.push({ title: `Fix root cause: ${group.root_cause}`,
approach: p.optimization.approach, alternative: p.optimization.alternative || '',
tradeoff: p.optimization.tradeoff || '', affected_findings: group.finding_ids,
fix_strategy: p.fix_strategy || 'minimal', fix_complexity: p.fix_complexity || 'medium',
estimated_impact: `Resolves ${group.finding_ids.length} findings` })
}
}
for (const f of enrichedFindings) {
if (grouped.has(f.id) || !f.optimization?.approach || f.severity === 'low' || f.severity === 'info') continue
optimization_suggestions.push({ title: `${f.id}: ${f.title}`,
approach: f.optimization.approach, alternative: f.optimization.alternative || '',
tradeoff: f.optimization.tradeoff || '', affected_findings: [f.id],
fix_strategy: f.fix_strategy || 'minimal', fix_complexity: f.fix_complexity || 'medium',
estimated_impact: 'Resolves 1 finding' })
}
// 2d: Metrics
const by_dimension = {}, by_severity = {}, dimension_severity_matrix = {}
for (const f of allFindings) {
by_dimension[f.dimension] = (by_dimension[f.dimension] || 0) + 1
by_severity[f.severity] = (by_severity[f.severity] || 0) + 1
if (!dimension_severity_matrix[f.dimension]) dimension_severity_matrix[f.dimension] = {}
dimension_severity_matrix[f.dimension][f.severity] = (dimension_severity_matrix[f.dimension][f.severity] || 0) + 1
}
const fixable = allFindings.filter(f => f.fix_strategy !== 'skip')
const autoFixable = fixable.filter(f => f.fix_complexity === 'low' && f.fix_strategy === 'minimal')
```
### Step 3: Write review-report.json
```javascript
const reviewReport = {
review_id: `rev-${Date.now()}`, review_date: new Date().toISOString(),
findings: allFindings, critical_files, optimization_suggestions, root_cause_groups: rootCauseGroups,
summary: { total: allFindings.length, deep_analyzed: enrichedFindings.length,
pass_through: pass_through.length, by_dimension, by_severity, dimension_severity_matrix,
fixable_count: fixable.length, auto_fixable_count: autoFixable.length,
critical_file_count: critical_files.length, optimization_count: optimization_suggestions.length }
}
Bash(`mkdir -p "${sessionFolder}/review"`)
Write(`${sessionFolder}/review/review-report.json`, JSON.stringify(reviewReport, null, 2))
```
### Step 4: Write review-report.md
```javascript
const dims = ['security','correctness','performance','maintainability']
const sevs = ['critical','high','medium','low','info']
const S = reviewReport.summary
// Dimension x Severity matrix
let mx = '| Dimension | Critical | High | Medium | Low | Info | Total |\n|---|---|---|---|---|---|---|\n'
for (const d of dims) {
mx += `| ${d} | ${sevs.map(s => dimension_severity_matrix[d]?.[s]||0).join(' | ')} | ${by_dimension[d]||0} |\n`
}
mx += `| **Total** | ${sevs.map(s => by_severity[s]||0).join(' | ')} | **${S.total}** |\n`
// Critical+High findings table
const ch = allFindings.filter(f => f.severity==='critical'||f.severity==='high')
.sort((a,b) => (a.severity==='critical'?0:1)-(b.severity==='critical'?0:1))
let ft = '| ID | Sev | Dim | File:Line | Title | Fix |\n|---|---|---|---|---|---|\n'
if (ch.length) ch.forEach(f => { ft += `| ${f.id} | ${f.severity} | ${f.dimension} | ${f.location?.file}:${f.location?.line} | ${f.title} | ${f.fix_strategy||'-'} |\n` })
else ft += '| - | - | - | - | No critical/high findings | - |\n'
// Optimization suggestions
let os = optimization_suggestions.map((o,i) =>
`### ${i+1}. ${o.title}\n- **Approach**: ${o.approach}\n${o.tradeoff?`- **Tradeoff**: ${o.tradeoff}\n`:''}- **Strategy**: ${o.fix_strategy} | **Complexity**: ${o.fix_complexity} | ${o.estimated_impact}`
).join('\n\n') || '_No optimization suggestions._'
// Critical files
const cf = critical_files.slice(0,10).map(c =>
`- **${c.file}** (${c.finding_count} findings, dims: ${c.dimensions.join(', ')})`
).join('\n') || '_No critical files._'
// Fix scope
const fs = [
by_severity.critical ? `${by_severity.critical} critical (must fix)` : '',
by_severity.high ? `${by_severity.high} high (should fix)` : '',
autoFixable.length ? `${autoFixable.length} auto-fixable (low effort)` : ''
].filter(Boolean).map(s => `- ${s}`).join('\n') || '- No actionable findings.'
Write(`${sessionFolder}/review/review-report.md`,
`# Review Report
**ID**: ${reviewReport.review_id} | **Date**: ${reviewReport.review_date}
**Findings**: ${S.total} | **Fixable**: ${S.fixable_count} | **Auto-fixable**: ${S.auto_fixable_count}
## Executive Summary
- Deep analyzed: ${S.deep_analyzed} | Pass-through: ${S.pass_through}
- Critical files: ${S.critical_file_count} | Optimizations: ${S.optimization_count}
## Metrics Matrix
${mx}
## Critical & High Findings
${ft}
## Critical Files
${cf}
## Optimization Suggestions
${os}
## Recommended Fix Scope
${fs}
**Total fixable**: ${S.fixable_count} / ${S.total}
`)
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Enriched findings missing | Use empty array, report pass_through only |
| JSON parse failure | Log warning, use raw findings |
| Session folder missing | Create review subdir via mkdir |
| Empty allFindings | Write minimal "clean" report |
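The "Empty allFindings" row above can be sketched as a guard run before Step 2 (a sketch only; the full report shape is defined in Step 3, and the field values here mirror that shape with everything zeroed):

```javascript
// Sketch: short-circuit to a minimal "clean" report when there are no findings.
// Returns null when findings exist, signalling the normal flow should proceed.
function buildMinimalReport(allFindings) {
  if (allFindings.length > 0) return null
  return {
    review_id: `rev-${Date.now()}`,
    review_date: new Date().toISOString(),
    findings: [],
    critical_files: [],
    optimization_suggestions: [],
    root_cause_groups: [],
    summary: {
      total: 0, deep_analyzed: 0, pass_through: 0,
      by_dimension: {}, by_severity: {}, dimension_severity_matrix: {},
      fixable_count: 0, auto_fixable_count: 0,
      critical_file_count: 0, optimization_count: 0
    }
  }
}

const clean = buildMinimalReport([])
console.log(clean.summary.total) // prints 0
```

The fixer then sees a structurally complete report with zero counts instead of a missing or malformed file.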